\begin{document}
\title{\LARGE \bf Monotone flows with dense periodic orbits} \author{\Large Morris W.\ Hirsch \thanks{I thank {\sc Eric Bach,
Bas Lemmens} and {\sc Janusz Mierczy\'nski} for helpful
discussions.} \\University of Wisconsin}
\maketitle
\begin{abstract}
Give $\R n$ the (partial) order $\succeq$ determined by a closed
convex cone $K$ having nonempty interior: $y\succeq x \ensuremath{\:\Longleftrightarrow\:} y-x \in K$.
Let $X\subset\R n$ be a connected open set and $\varphi$ a flow on
$X$ that is monotone for this order: if $y\succeq x$ and $t\ge 0$,
then $\varphi^t y\succeq \varphi^t x$.
{\em Theorem.} If periodic points of $\varphi$ are dense in $X$, then
$\varphi^t$ is the identity map of $X$ for some $t > 0$.
\end{abstract}
\tableofcontents
\section{Introduction} \mylabel{sec:intro}
Many dynamical systems, especially those used as models in applied
fields, are {\em monotone}: the state space has an order relation
that is preserved in positive time. A recurrent theme is that bounded
orbits tend toward periodic orbits. For a sampling of the large
literature on monotone dynamics, consult the following works and the
references therein:
{\small \cite{AngeliHirschSontag, Anguelov12, BenaimHirsch99a,
BenaimHirsch99b, DeLeenheer17, Dirr15, Elaydi17, EncisoSontag06,
Grossberg78, Hess-Polacik93, HS05, Kamke32, LajmanovichYorke,
Landsberg96, LeonardMay75, Matano86, Mier94, Potsche15,
RedhefWalter86, Selgrade80, Smale76, Smith95, Smith17, Volkmann72,
Wang17, Walcher01}.}
Another common dynamical property is {\em dense periodicity}: periodic points are dense in the state space. Often considered typical of chaotic dynamics, this condition is closely connected to many other important dynamical topics, such as structural stability, ergodic theory, Hamiltonian mechanics, smoothness of flows and diffeomorphisms.
The main result in this article, Theorem \ref{th:main}, is that a large class of monotone, densely periodic flows $\varphi:=\{\varphi^t\}_{t\in\ensuremath{{\mathbb R}}}$ on open subsets of $\R n$ are (globally) {\em periodic}: There exists $t>0$ such that $\varphi^t$ is the identity map.
\subsection{Terminology} Throughout this paper $X$ and $Y$ denote (topological) spaces.
The closure of a subset $S$ of a space is denoted by $\ov S$.
The group of homeomorphisms of $X$ is $\mcal H (X)$.
$\ensuremath{{\mathbb Z}}$ denotes the integers, $\ensuremath{{\mathbb N}}$ the nonnegative integers, and $\ensuremath{{\mathbb N}_{+}}$
the positive integers. $\ensuremath{{\mathbb R}}$ denotes the reals, $\ensuremath{{\mathbb Q}}$ the rationals,
and $\ensuremath{{\mathbb Q}_+}$ the positive rationals. $\R n$ is Euclidean $n$-space.
Every manifold $M$ is assumed metrizable with empty boundary $\ensuremath{\partial} M$, unless otherwise indicated.
Every map is assumed continuous unless otherwise described. $f\colon\thinspace X\approx Y$ means $f$ maps $X$ homeomorphically onto $Y$. The identity map of $X$ is $id_X$.
If $f$ and $g$ denote maps, the composition of $g$ following $f$ is the map $g\circ f\colon\thinspace x\mapsto g(f(x))$ (which may be empty).
We set $f^0:=id_X$, and recursively define the
$k$'th iterate $f^k$ of $f$ as $f^k:=f\circ f^{k-1}, \ (k \in \ensuremath{{\mathbb N}_{+}})$.
The {\em orbit} of $p$ under $f$ is the set \[\mcal O(p):=\big\{f^k (p)\colon\thinspace k\in \ensuremath{{\mathbb N}}\big\}, \] denoted as $\mcal O (p,f)$ to record $f$.
When $f (X) \subset X$, the orbit of $P\subset X$ is defined as \[ \mcal O (P)= \mcal O (P,f) := \bigcup_{p\in P}\mcal O (p). \]
A set $A\subset X$ is {\em invariant} under $f$ if $f(A)=A =f^{-1} (A)$. The {\em fixed set} and {\em periodic set} of $f$ are the respective invariant sets \[
{\mcal F}(f) :=\big\{x\colon\thinspace f(x)=x\big\}, \qquad {\mcal P}(f):= \bigcup_{n\in \ensuremath{{\mathbb N}_{+}}}
{\mcal F}(f^n). \]
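The definitions above can be illustrated computationally. The following sketch (ours, purely illustrative; the function names are not from the paper) computes orbits, the fixed set, and the periodic set of a map on a finite state space.

```python
# Illustrative sketch: orbits, fixed sets and periodic sets of an iterated map f.

def iterate(f, k):
    """The k'th iterate f^k, with f^0 the identity."""
    def fk(x):
        for _ in range(k):
            x = f(x)
        return x
    return fk

def orbit(f, p, n_max=100):
    """The orbit O(p, f) = {f^k(p) : k in N}, truncated after n_max steps."""
    seen, x = [], p
    for _ in range(n_max):
        if x in seen:
            break
        seen.append(x)
        x = f(x)
    return seen

# A permutation of {0,...,5}: a 3-cycle, a 2-cycle and a fixed point.
f = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3, 5: 5}.__getitem__

fixed_set = [x for x in range(6) if f(x) == x]
periodic_set = [x for x in range(6)
                if any(iterate(f, n)(x) == x for n in range(1, 7))]
```

Here every point is periodic (the map is a permutation of a finite set), while only one point is fixed.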
\paragraph{Flows} A {\em flow} $\psi:=\{\psi^t\}_{t\in \ensuremath{{\mathbb R}}}$ on $Y$, denoted formally by $(\psi, Y)$, is a continuous action of the group $\ensuremath{{\mathbb R}}$ on $Y$: a family of homeomorphisms $\psi^t\colon\thinspace Y\approx Y$ such that the map \[ \ensuremath{{\mathbb R}} \to \mcal H (Y), \quad t\mapsto \psi^t \] is a homomorphism, and the {\em evaluation map}
\begin{equation}\label{eq:ev}
\msf {ev}_\psi\colon\thinspace\ensuremath{{\mathbb R}} \times Y \to Y, \qquad (t,x) \mapsto \psi^tx \end{equation}
is continuous.
A set $A\subset Y$ is {\em invariant under $\psi$} if it is invariant under every $\psi^t$. When this holds, the {\em restricted flow}
$(\psi \big |A)$ on $A$ has the evaluation map
$\msf {ev}_\psi\big|\,\ensuremath{{\mathbb R}} \times A$.
The {\em orbit} of $y\in Y$ under $\psi$ is the invariant set \[
\mcal O (y):=\big \{\psi^t y\colon\thinspace t\in \ensuremath{{\mathbb R}} \big\},
\] denoted formally by $\mcal O (y, \psi)$. Orbits of distinct points either coincide or are disjoint.
The {\em periodic} and {\em equilibrium} sets of $\psi$ are the respective invariant sets \[
{\mcal P}(\psi):=\bigcup_{t\in\ensuremath{{\mathbb R}}}{\mcal F}(\psi^t), \qquad
{\mcal E}(\psi):= \bigcap_{t\in\ensuremath{{\mathbb R}}} {\mcal F}(\psi^t),
\] denoted by ${\mcal P}$ and ${\mcal E}$ if $\psi$ is clear from the context. Points in ${\mcal E}$ are called {\em stationary}. The orbit of a nonstationary periodic point is a {\em cycle}.
The {\em period} of $p\in {\mcal P}\sm {\mcal E}$ is \[ \msf{per}(p):=\min\big\{t>0\colon\thinspace \psi^tp=p\big\}. \] The set of {\em $r$-periodic} points is \[{\mcal P}^r (\psi) ={\mcal P}^r:= \big\{p\in {\mcal P}\colon\thinspace\msf{per} (p)=r\big\}.\]
When $p$ is $r$-periodic, the restricted flow $\psi \big|\mcal O(p)$ is
topologically conjugate to the rotational flow on the topological
circle $\ensuremath{{\mathbb R}}/r\ensuremath{{\mathbb Z}}$, covered by the translational flow on $\ensuremath{{\mathbb R}}$
whose evaluation map is $(t,x) \mapsto t+x$.
The flow $(\psi, Y)$ is: \begin{itemize}
\item {\em trivial} if ${\mcal E} =Y$,
\item {\em densely periodic} if ${\mcal P} $ is dense in $Y$,
\item {\em pointwise periodic} if ${\mcal P} = Y$,
\item {\em periodic} if there exists $l > 0$ such that $\psi^l =id_Y$.
Equivalently: the homomorphism $\psi \colon\thinspace \ensuremath{{\mathbb R}} \to \mcal H (Y)$
factors through a homomorphism of the circle group $\ensuremath{{\mathbb R}}/l\ensuremath{{\mathbb Z}}$. \end{itemize}
\paragraph{Order} Recall that a (partial) {\em order} on $Y$ is a binary relation $\preceq$ on $Y$ that is reflexive, transitive and antisymmetric.
In this paper all orders are {\em closed}:
the set $\{(u,v)\in Y\times Y\colon\thinspace u\preceq v\}$ is closed.
The {\em trivial order} is: $x\preceq y \ensuremath{\:\Longleftrightarrow\:} x=y$.
The pair $(Y,\preceq)$ is an {\em ordered space}, denoted by $Y$ if the order is clear from the context.
We write $y\succeq x$ to mean $x\preceq y$. If $x\preceq y$ and $x\ne y$, we write $x\prec y$ and $ y \succ x$.
For sets $A, B\subset Y$, the condition $A\preceq B$ means $a\preceq
b$ for all $a\in A, \,b\in B$, and similarly for the relations
$\prec,\, \succeq, \,\succ$.
The {\em closed order interval} spanned by $a, b \in Y$ is the closed set \[ [a,b]:=\{y\in Y\colon\thinspace a \preceq y\preceq b\},\] and its interior is the {\em open order interval} $[[a,b]]$. We write $a\ll b$ to indicate $[[a,b]]\ne\varnothing$.
An {\em order cone}
$K\subset \R n$ is a closed convex cone that is pointed (contains no affine line) and solid (has nonempty interior). $K$ is {\em polyhedral} if it is the intersection of finitely many closed linear halfspaces.
The {\em $K$-order}, denoted by $\preceq_K$ for clarity, is the order defined on every subset of $\R n$ by \[
x \preceq_K y \ensuremath{\:\Longleftrightarrow\:} y-x\in K. \] For the $K$-order on $\R n$, every order interval is a convex $n$-cell.
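For concreteness, the $K$-order for two standard order cones can be tested numerically. The sketch below (ours, not from the paper) checks whether $y-x$ lies in the nonnegative orthant, a polyhedral order cone, or in the Lorentz ice-cream cone, which is solid and pointed but not polyhedral.

```python
import math

# Illustrative sketch: x <=_K y iff y - x lies in the order cone K.

def leq_orthant(x, y, tol=1e-12):
    """K = nonnegative orthant in R^n (a polyhedral order cone)."""
    return all(yi - xi >= -tol for xi, yi in zip(x, y))

def leq_lorentz(x, y, tol=1e-12):
    """K = Lorentz cone {v : v_n >= ||(v_1,...,v_{n-1})||}, solid and pointed."""
    v = [yi - xi for xi, yi in zip(x, y)]
    return v[-1] >= math.hypot(*v[:-1]) - tol
```

For either cone the induced relation is reflexive, transitive and antisymmetric, and it is closed because the cones are closed sets.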
A map $f\colon\thinspace X \to Y$ between ordered spaces is {\em monotone} when
\[x\preceq_X x' \implies f(x) \preceq_Y f (x').
\]
When $Y$ is ordered and $g\colon\thinspace Y'\to Y$ is injective, $Y'$ has a unique
{\em induced order} making $g$ monotone.
Every subspace $S\subset Y$ is given the order induced by the inclusion map $S\hookrightarrow Y$. When this order is trivial, $S$ is {\em unordered}.
A flow $\psi$ on an ordered space is {\em monotone} iff the maps
$\psi^t$ are monotone. It is easy to see that every cycle for a monotone flow is unordered.
This is the chief result:
\begin{theorem}\mylabel{th:main}
Assume:
\begin{description}
\item[(H1)] $K\subset \R n$ is an order cone.
\item[(H2)] $X\subset \R n$ is open and connected.
\item[(H3)] $X$ is $K$-ordered.
\item[(H4)] The flow $(\varphi, X)$ is monotone and densely periodic.
\end{description}
Then $\varphi$ is periodic. \end{theorem}
\noindent The proof will be given after results about various types of flows.
\section{Resonance} \mylabel{sec:resonance}
In this section: \begin{itemize} \item $Y$ is an ordered space,
\item $(\psi,Y)$ is a monotone flow,
\item
$ \mcal O (y)$ is the orbit of $y$ under $\psi$,
\item $\mcal O (y, \psi^t)$ is the orbit of $y$ under $\psi^t$. \end{itemize}
\begin{lemma}\mylabel{th:colimit}
Assume:
\begin{description}
\item[(i)] \mbox{$\big\{p_k\big\}$ and $ \big\{q_k\big\}$ are sequences in $Y$ converging to
$y\in Y$},
\item[(ii)] $\mcal O (p_k) \prec \mcal O(q_k), \quad (k \ge 1).$
\end{description} Then $y\in {\mcal E}$.
\end{lemma}
\begin{proof} By (i) and continuity of $\psi$ we have
\begin{equation}\label{eq:limvt}
\lim_{k\to \infty}\psi^t p_k = \lim_{k\to \infty}\psi^t q_k =\psi^t y, \qquad (t\in \ensuremath{{\mathbb R}})
\end{equation}
while (ii) and monotonicity of $\psi$ imply
\begin{equation}\label{eq:pktqk}
k\in \ensuremath{{\mathbb N}}, \ t\in \ensuremath{{\mathbb R}} \implies p_k \preceq \psi^t q_k. \end{equation}
From (i), (\ref{eq:limvt}), (\ref{eq:pktqk}) and closedness of the order we infer: \[ y\preceq \psi^t y,\qquad (t\in \ensuremath{{\mathbb R}}). \] Applying this with $t$ replaced by $-t$ and then the monotone map $\psi^t$ yields $\psi^t y\preceq y$, so antisymmetry shows that $\psi^ty=y$ for all $t\in \ensuremath{{\mathbb R}}$. \end{proof}
\paragraph{Rationality} Rational numbers play a surprising role in monotone flows:
\begin{theorem} \mylabel{th:poa}
Assume $r,s >0$. If
\[ p \in \mcal P^r, \quad q \in {\mcal P}^s, \quad p \prec q, \quad \text{and} \quad \mcal O (p) \not \prec\mcal O (q), \] then $r/s$ is rational. \end{theorem}
\begin{proof} We have
\begin{equation} \label{eq:gama}
\mcal O (p) \cap \mcal O (q)=\varnothing
\end{equation}
because $p \prec q$
and every cycle in a monotone flow is unordered.
Note that the restriction of $\psi^s$ to the circle $\mcal O(p)$ is conjugate to the rotation $\msf R_{2\pi s/r}$ of the unit circle $ \msf C \subset\R 2$ through $2\pi s/r$ radians.
Arguing by contradiction, we provisionally assume $r/s$ is irrational. Then every orbit of $\msf R_{2\pi s/r}$ is dense in $\msf C$,\footnote{This was discovered--- in the 14th century!--- by {\sc Nicole
Oresme}. See {\sc Grant} \cite{Grant71}, {\sc
Kar} \cite{Kar03}. A short proof based on the pigeon-hole
principle is due to {\sc Speyer} \cite{Speyer17}. Stronger density
theorems are given in {\sc Bohr} \cite{Bohr23}, {\sc Kronecker}
\cite{Kronecker}, and {\sc Weyl} \cite{Weyl10, Weyl16}.}
and the conjugacy described above implies
\begin{equation}\label{eq:oppr}
\mcal O(p)=\ov{\mcal O (p,\psi^s)}. \end{equation}
Since $p\prec q$, monotonicity gives \[\big(\psi^s\big)^k p \preceq \big (\psi^s\big)^k q= q, \quad (k\in \ensuremath{{\mathbb N}_{+}}),\] whence
\[\mcal O (p,\psi^s) \preceq \{q\},
\]
so (\ref{eq:oppr}), closedness of the order and (\ref{eq:gama}) imply
\begin{equation}\label{eq:opq}
\mcal O (p) \prec \{q\}. \end{equation} Therefore $\mcal O (p) \prec \mcal O(q)$ by (\ref{eq:gama}) and monotonicity. But this contradicts the hypothesis.
\end{proof}
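The density dichotomy used in this proof, dense orbits for irrational rotation number versus finite orbits for rational rotation number, can be observed numerically. The sketch below (ours, purely illustrative) measures the largest gap left on the circle $\ensuremath{{\mathbb R}}/\ensuremath{{\mathbb Z}}$ by finitely many orbit points.

```python
import math

# Illustrative sketch: orbits of the circle rotation x -> x + theta (mod 1).
# Irrational theta: the orbit spreads out, so the largest gap shrinks toward 0.
# Rational theta = p/q: the orbit is q points, and the largest gap stays 1/q.

def rotation_orbit(theta, n):
    """First n points of the orbit of 0 under rotation by theta on R/Z."""
    return [(k * theta) % 1.0 for k in range(n)]

def max_gap(points):
    """Largest gap between neighboring points on the circle R/Z."""
    p = sorted(points)
    gaps = [b - a for a, b in zip(p, p[1:])]
    gaps.append(1.0 - p[-1] + p[0])  # wrap-around gap
    return max(gaps)

gap_irrational = max_gap(rotation_orbit(math.sqrt(2) - 1, 2000))  # small
gap_rational = max_gap(rotation_orbit(1 / 5, 2000))               # about 1/5
```

With 2000 orbit points of the irrational rotation the largest gap is already far below $1/100$, while the rational rotation never improves on $1/5$.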
\begin{definition}\mylabel{th:resdef}
A set $S$ is {\em resonant} for the flow $(\psi,Y)$ if $S\subset Y$ and \[
u,v\in S\cap({\mcal P} \sm {\mcal E}) \implies
\frac{\msf{per}(u)}{\msf{per}(v)}\in \ensuremath{{\mathbb Q}_+}. \]
\end{definition} It is easy to prove:
\begin{lemma}\mylabel{th:resorbit}
\mbox{If $S$ is resonant, so is its orbit. \qed} \end{lemma}
\begin{proposition}\mylabel{th:resprop}
$S$ is resonant provided there exists \[q \in S\cap ({\mcal P} \sm {\mcal E} ) \] such that
\begin{equation}\label{eq:zsp}
z\in S\cap ({\mcal P} \sm {\mcal E}) \implies
\frac{\msf{per} (z)}{\msf{per} (q)} \in \ensuremath{{\mathbb Q}_+}. \end{equation}
\end{proposition}
\begin{proof} If
$ u,v\in S\cap ({\mcal P}\sm{\mcal E})$, then
\[
\frac{\msf{per}(u)}{\msf{per}(v)} =
\frac{\msf{per}(u)}{\msf{per}(q)}\cdot \frac{\msf{per}(q)}{\msf{per}(v)},
\] which lies in $\ensuremath{{\mathbb Q}_+}$ by (\ref{eq:zsp}). \end{proof}
\begin{theorem} \mylabel{th:resglobal}
Assume
\begin{equation}\label{eq:pqP}
p, q \in {\mcal P} \sm {\mcal E},
\qquad p\prec q, \qquad \mcal O(p) \not \prec \mcal O(q). \end{equation}
Then $[p,q]$ is resonant. \end{theorem}
\begin{proof} Note that
\begin{equation}\label{eq:pzq}
p\preceq z \prec q \implies \mcal O(z)\not \prec \mcal O(q). \end{equation}
For if this is false, there exists $z \in [p,q]$ such that \[ \mcal O(z) \prec \mcal O(q),\] whence monotonicity implies \[\big(\forall\, w\in\mcal O(p)\big) \ \big(\exists\, w'\in \mcal O(z)\big)\colon\thinspace\quad
w\preceq w'\prec \mcal O (q). \] It follows that $\mcal O(p)\prec \mcal O(q)$, contrary to hypothesis. Resonance of $[p,q]$ now follows from (\ref{eq:pzq}), Theorem \ref{th:poa} applied with $p:=z$, and Proposition \ref{th:resprop}. \end{proof}
The next result will be used to derive the Main Theorem from the analogous result for homeomorphisms, Theorem \ref{th:lemmens}.
\begin{theorem}\mylabel{th:rescor}
Assume $Y$ is resonant, $s >0$ and ${\mcal P}^s \ne\varnothing$. Then $ {\mcal P} (\psi)= {\mcal P} (\psi^s)$. \end{theorem}
\begin{proof}
We fix $p\in {\mcal P}^s$ and show that every
$q\in {\mcal P} (\psi)$ lies in ${\mcal P} (\psi^s)$.
If $q$ is stationary, then $q\in {\mcal P} (\psi^s)$.
If $q\in {\mcal P}^r$ then $r >0$ and resonance implies $ sk = rl$ with $ k, l \in \ensuremath{{\mathbb N}_{+}}$.
Since $\psi^r q =q$, we have
$ \big(\psi^s\big)^kq= \big(\psi^r\big)^lq =q$, and again
$q\in {\mcal P} (\psi^s)$. \end{proof}
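The arithmetic in the last step, that a rational ratio of periods produces a common period $sk = rl$, can be checked with exact rational arithmetic. The sketch below is ours, for illustration only.

```python
from fractions import Fraction

# Illustrative sketch: if periods r, s have rational ratio r/s = k/l in
# lowest terms, then t = s*k = r*l is a common period of both points.

def common_period(r, s):
    """Least t = s*k = r*l with k, l positive integers, for r/s rational."""
    ratio = Fraction(r) / Fraction(s)      # r/s = k/l in lowest terms
    k, l = ratio.numerator, ratio.denominator
    t = Fraction(s) * k
    assert t == Fraction(r) * l            # s*k == r*l
    return t

t = common_period(Fraction(3, 2), Fraction(2, 5))  # periods 3/2 and 2/5
```

For the periods $3/2$ and $2/5$ this yields $t = 6 = \tfrac{2}{5}\cdot 15 = \tfrac{3}{2}\cdot 4$.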
\section{Proof of Theorem \ref{th:main} } \mylabel{sec:proofmain}
Recall the statement of the Theorem:
\noindent {\em Assume:
\begin{description}
\item[(H1)] $K\subset \R n$ is an order cone,
\item[(H2)] $X\subset \R n$ is open and connected,
\item[(H3)] $X$ is $K$-ordered,
\item[(H4)] the flow $(\varphi, X)$ is monotone and densely periodic.
\end{description}
Then $\varphi$ is periodic. }
The proof relies on two sufficient conditions for periodicity of monotone homeomorphisms, Theorems \ref{th:lemmens} and \ref{th:monty} below.
\begin{definitions*}
A homeomorphism $T\colon\thinspace W \approx W$ is:
\begin{itemize}
\item {\em densely periodic}\, if ${\mcal P}(T)$ is dense in $W$,
\item {\em pointwise periodic}\, if ${\mcal P}(T)= W$,
\item {\em periodic}\, if $T^k=id_W$ for some $k \in \ensuremath{{\mathbb N}_{+}}$. \end{itemize}
\end{definitions*}
The key to Theorem \ref{th:main} is this striking result:
\begin{theorem}[{\sc B. Lemmens} {\em et al.}, \cite {Lemmens17}] \mylabel{th:lemmens}
Assume {\em (H1), (H2), (H3)}. Then a monotone homeomorphism $T\colon\thinspace X\approx X$ is periodic provided it is densely periodic.\footnote{Conjectured, and proved for polyhedral cones, in
{\sc M. Hirsch} \cite{Hirsch17}.} \end{theorem} We also use an elegant result from the early days of transformation groups:
\begin{theorem}[{\sc D. Montgomery}, \cite {Monty37, Monty55}] \mylabel{th:monty}
A homeomorphism of a connected topological manifold is
periodic provided it is pointwise periodic.\footnote{There are analogs of
Montgomery's Theorem for countable transformation groups; see
{\sc Kaul} \cite{Kaul71},
{\sc Roberts} \cite {Roberts75},
{\sc Yang} \cite {Yang71}. Pointwise periodic homeomorphisms
on compact metric spaces are investigated in {\sc Hall \& Schweigert} \cite
{Hall82}.} \end{theorem}
\begin{lemma}\mylabel{th:P}
$\varphi$ is pointwise periodic. \end{lemma}
\begin{proof} Since ${\mcal E}\subset {\mcal P}(\varphi)$, it suffices to prove that the restriction of
$\varphi$ to each component of $X\sm{\mcal E}$ is pointwise periodic.
Therefore we assume ${\mcal E}=\varnothing$.
Let $x \in X$ be arbitrary. As ${\mcal P}(\varphi)$ is dense, Lemma \ref{th:colimit} and Theorem \ref{th:resglobal} show that there exist $p, q\in {\mcal P}(\varphi)$ such that the open set $[[p,q]]$ is connected and resonant, and contains $x$. Therefore the open set $Y:=\mcal O([[p,q]])$, which is invariant and connected, is resonant by Lemma \ref{th:resorbit}.
By Theorem \ref{th:rescor} there exists $s >0$ such that ${\mcal P} (\varphi^s) = {\mcal P} (\varphi)\cap Y$. Consequently $\varphi^s$ is densely periodic, hence periodic by Theorem \ref{th:lemmens}. Therefore $x\in {\mcal P}(\varphi)$. \end{proof}
Theorem \ref{th:main} follows from Lemma \ref{th:P} and Theorem \ref{th:monty}.
\end{document}
\begin{document}
\begin{abstract} \noindent This work considers gradient structures for the Becker--Döring equation and its macroscopic limits. The result of Niethammer~\cite{Niethammer2003} is extended to prove the convergence not only for solutions of the Becker--Döring equation towards the Lifshitz--Slyozov--Wagner equation of coarsening, but also the convergence of the associated gradient structures. We establish the gradient structure of the nonlocal coarsening equation rigorously and show continuous dependence on the initial data within this framework. Further, on the considered time scale the small cluster distribution of the Becker--Döring equation follows a quasistationary distribution dictated by the monomer concentration. \end{abstract}
\subjclass[2010]{ Primary: 49J40; secondary: 34A34, 35L65, 49J45, 49K15, 60J27, 82C26. }
\keywords{gradient flows; energy-dissipation principle; evolutionary Gamma convergence; quasistationary states; well-prepared initial conditions}
\section{Introduction}
\subsection{The Becker--Döring model}\label{s:Intro:BD} In this work, we are interested in gradient structures for the Becker--Döring equation and its macroscopic limits. The Becker--Döring equation~\cite{Becker1935} is a model for the coagulation and fragmentation of clusters consisting of identical monomers. The main modeling assumption is that only monomers are able to coagulate with and fragment from other clusters, in such a way that the total mass density is conserved: \begin{equation}\label{e:BD:massconserve}
\sum_{l=1}^\infty l n_l(t) = \sum_{l=1}^\infty l n_l(0) =: \varrho_0 \qquad\text{for all } t>0 . \end{equation} Hereby, $n_l(t)$ is the density of clusters of size $l$ at time $t$. The evolution of the densities $n_l(t)$ is given by a countable system of ordinary differential equations of the form \begin{align}\label{e:BD1}
\dot n_l(t) = J_{l-1}(t) - J_l(t) , \qquad l = 2, 3, \dots , \end{align} where $J_l$ is the flux from clusters of size $l$ to clusters of size $l+1$. The system~\eqref{e:BD1} is closed by an equation for $n_1$ \begin{equation}\label{e:BD2}
\dot n_1(t) = -\sum_{l=1}^\infty J_l(t) - J_1(t) =: J_0(t) - J_1(t) , \end{equation} which is chosen so that~\eqref{e:BD:massconserve} is formally satisfied. The fluxes $J_l$ are given by mass-action kinetics: the rate of coagulation is $a_l n_1 n_l$ and the rate of fragmentation is $b_{l+1} n_{l+1}$, where $a_l$ and $b_{l+1}$ are rate factors depending only on $l$. This leads to the constitutive relation \begin{equation}\label{e:BD:flux}
J_l(t) = a_l n_1(t) n_l(t) - b_{l+1} n_{l+1}(t) , \qquad l=1,2,\dots . \end{equation} The detailed balance condition for this system reads $J_l(t) = 0$ for all $l$, satisfied by a one-parameter family of equilibrium solutions \begin{equation}\label{e:BD:equilibrium}
\omega_l(z) := z^l Q_l, \qquad\text{with}\qquad Q_1 := 1 \qquad\text{and}\qquad Q_l := \prod_{j=1}^{l-1} \frac{a_j}{b_{j+1}} . \end{equation} To specify the long-time behavior, we introduce the convergence radius of the series $z \mapsto \sum_l l z^l Q_l$ by $z_s \in [0,\infty]$ as well as its value at the convergence radius \begin{equation}\label{e:BD:critical_mass} \varrho_s := \sum_{l=1}^\infty l z_s^l Q_l \in [0, \infty]. \end{equation} We are interested in the regime where $z_s \in (0,\infty)$ and $\varrho_s \in (0,\infty)$. We will assume that the rates are explicitly given as follows: \begin{assumption}[Rates]\label{ass:BD:rates} For $\alpha \in [0,1)$, $\gamma \in (0,1)$ and $z_s, q >0$ define the coagulation and fragmentation rate of a monomer for a cluster of size $l$ by \begin{equation*}
a_l := l^\alpha \qquad\text{and}\qquad b_l := l^\alpha \bra*{z_s + q l^{-\gamma}} . \end{equation*} \end{assumption} Hereby, the parameter $z_s$ is consistent with its definition as radius of convergence (cf.\ Lemma~\ref{lem:BD:assymptotic_Q}) and $\varrho_s$ as defined in~\eqref{e:BD:critical_mass} is strictly positive and finite under Assumption~\ref{ass:BD:rates}.
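Under these rates, the finiteness of $\varrho_s$ can be observed numerically: $z_s^l Q_l$ decays like a stretched exponential $\exp(-c\, l^{1-\gamma})$, so the truncated series stabilizes quickly. The sketch below is ours, with an illustrative parameter choice not taken from the paper.

```python
# Numerical sketch: Q_l and the truncated critical mass rho_s = sum_l l z_s^l Q_l
# for the rates a_l = l^alpha, b_l = l^alpha (z_s + q l^{-gamma}).
# Parameter values below are an illustrative test choice.
alpha, gamma, z_s, q = 0.5, 0.5, 1.0, 1.0

def Q_list(lmax):
    """Q_1 = 1 and Q_l = prod_{j=1}^{l-1} a_j / b_{j+1}; Q[i] stores Q_{i+1}."""
    Q = [1.0]
    for j in range(1, lmax):
        a_j = j ** alpha
        b_j1 = (j + 1) ** alpha * (z_s + q * (j + 1) ** (-gamma))
        Q.append(Q[-1] * a_j / b_j1)
    return Q

def rho_s_truncated(lmax):
    """Partial sum of rho_s = sum_{l>=1} l z_s^l Q_l up to l = lmax."""
    Q = Q_list(lmax)
    return sum((i + 1) * z_s ** (i + 1) * Q[i] for i in range(lmax))

# The tail decays like exp(-c l^(1-gamma)), so the partial sums stabilize.
rho_1000, rho_2000 = rho_s_truncated(1000), rho_s_truncated(2000)
```

The agreement of the two truncations illustrates that $\varrho_s$ is finite for this class of rates, consistent with the discussion above.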
Then, as investigated in~\cite{Ball1986}, solutions to the Becker--Döring equation with $\varrho_0 \leq \varrho_s$ converge to the equilibrium state $\omega_l(z)$, where $z=z(\varrho_0)$ is chosen such that $\sum_{l=1}^\infty l z^l Q_l = \varrho_0$, and the convergence takes place in a weighted $\ell^1$ space \begin{equation*}
\lim_{t\to \infty} \sum_{l=1}^\infty l \; \abs*{ n_l(t) - \omega_l(z)} = 0 . \end{equation*} In the case $\varrho_0 > \varrho_s$, it holds that \begin{equation*}
\lim_{t\to\infty} n_l(t) = \omega_l(z_s) \qquad\text{for each $l\geq 1$}. \end{equation*} Hence, the excess mass $\varrho_0-\varrho_s>0$ vanishes in the limit $t \to \infty$. The interpretation is that the excess mass is contained in larger and larger clusters as time evolves. These large clusters form a new phase, e.g.\ liquid droplets formed out of supersaturated vapor. It is the aim of this work to add some aspects to the understanding of the formation of the new phase.
The crucial ingredient for the above convergence statements is the existence of a Lyapunov functional $\mathcal{F}$ of the form of a relative entropy $\mathcal{F}_z(n) := \mathcal{H}(n \mid \omega(z))$. Hereby, $z>0$ is a parameter selecting the stationary state and the relative entropy is defined by \begin{align}\label{e:def:Lyapunov}
\mathcal{H}(n \mid \omega) := \sum_{l=1}^\infty \omega_l \psi\bra*{\frac{n_l}{\omega_l}} \quad\text{with}\quad \psi(a) := a\log a - a +1 , \text{ for } a > 0 . \end{align} A calculation shows that it is formally decreasing along solutions to the Becker--Döring equation \begin{equation}\label{e:BD:Dissipation}
\pderiv{\mathcal{F}_z(n(t))}{t} = - \sum_{l=1}^\infty \bra*{ a_l n_1 n_l - b_{l+1} n_{l+1}} \bra*{\log a_l n_1 n_l - \log b_{l+1} n_{l+1}} =: - \mathcal{D}(n(t)) \leq 0 . \end{equation} Hence, the Lyapunov function can be interpreted as a free energy dissipating along the flow. This indicates that the free energy is minimized as $t\to \infty$. By the mass conservation~\eqref{e:BD:massconserve}, we expect the long-time limit to be the solution to the following minimization problem \begin{equation}\label{e:BD:FEmin}
\inf\set[\bigg]{ \mathcal{F}(n) : \sum_{l=1}^\infty l n_l = \varrho_0 } =
\begin{cases}
\mathcal{F}_z(\omega(z)) , &\varrho_0 \leq \varrho_s ; \\
\mathcal{F}_{z_s}(\omega(z_s)) , &\varrho_0 > \varrho_s .
\end{cases} \end{equation} In the first case the infimum is attained and the parameter $z = z(\varrho_0)$ is chosen such that $\sum_{l=1}^\infty l \omega_l(z) = \varrho_0$. In the second case the infimum is not attained (cf.\ \cite[Theorem 4.4]{Ball1986}). From now on, we choose $z = z(\varrho_0)$ in this particular form and omit the subscript. Hence, the functional correctly reflects the long-time behavior of the equation. Moreover, the Lyapunov function has the form of a relative entropy, and the question arises whether there exists a gradient structure for the Becker--Döring equation having this relative entropy as driving free energy.
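The dissipation identity~\eqref{e:BD:Dissipation} can be checked on a finite truncation of the system. The sketch below (ours; truncation size and rate parameters are an illustrative choice) verifies numerically that the chain rule gives $D\mathcal{F}(n)\cdot \dot n = -\mathcal{D}(n)$, that $\mathcal{D}(n)\geq 0$, and that the right-hand side conserves mass.

```python
import math

# Illustrative sketch (ours): finite truncation of the Becker--Doring system,
# rates a_l = l^alpha, b_l = l^alpha (z_s + q l^{-gamma}); parameters are a test choice.
alpha, gamma, z_s, q, L = 0.5, 0.5, 1.0, 1.0, 30

a = [(i + 1) ** alpha for i in range(L)]                                    # a[i] = a_{i+1}
b = [(i + 1) ** alpha * (z_s + q * (i + 1) ** (-gamma)) for i in range(L)]  # b[i] = b_{i+1}

# Equilibrium omega_l = z_s^l Q_l via detailed balance a_l w_1 w_l = b_{l+1} w_{l+1}.
omega = [z_s]
for i in range(1, L):
    omega.append(a[i - 1] * omega[0] * omega[i - 1] / b[i])

def rhs(n):
    """dn_1 = -sum_l J_l - J_1 and dn_l = J_{l-1} - J_l, with J_L = 0 at the truncation."""
    J = [a[i] * n[0] * n[i] - b[i + 1] * n[i + 1] for i in range(L - 1)]
    dn = [0.0] * L
    for i in range(1, L):
        dn[i] = J[i - 1] - (J[i] if i < L - 1 else 0.0)
    dn[0] = -sum(J) - J[0]
    return dn, J

def grad_F(n):
    """Variation of the relative entropy: (DF(n))_l = log(n_l / omega_l)."""
    return [math.log(n[i] / omega[i]) for i in range(L)]

def dissipation(n):
    """D(n) = sum_l J_l (log(a_l n_1 n_l) - log(b_{l+1} n_{l+1})) >= 0."""
    _, J = rhs(n)
    return sum(J[i] * (math.log(a[i] * n[0] * n[i]) - math.log(b[i + 1] * n[i + 1]))
               for i in range(L - 1))

# Chain rule along the flow: DF(n) . dn = -D(n); mass sum_l l*dn_l = 0.
n = [omega[i] * (1.0 + 0.3 * math.sin(i + 1.0)) for i in range(L)]
dn, _ = rhs(n)
chain = sum(g * d for g, d in zip(grad_F(n), dn))
```

The identity holds exactly (up to floating-point error) because the equilibrium sequence is built from the detailed balance condition, mirroring the formal computation above.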
\subsection{Gradient flow structure}\label{s:Intro:BD:GF} To bring the system into the framework of gradient-flows, it is helpful to interpret the Becker--Döring equation as the following system of chemical reactions \begin{equation}\label{e:BD:ChemReact}
X_1 + X_{l-1} \stackrel[b_{l}]{a_{l-1}}{\rightleftharpoons} X_{l}, \qquad l = 2 , 3, \dots\ . \end{equation} Hereby, $X_l$ denotes a cluster of size $l$ and the rates for coagulation $\set*{a_l}_{l\geq 1}$ and fragmentation $\set*{b_l}_{l\geq 2}$ are positive as in Assumption~\ref{ass:BD:rates}. In this formulation, we can use the gradient structure as observed by Mielke~\cite{Mielke2011a} for chemical reactions under detailed balance condition and it turns out that the Becker--Döring equation is indeed a gradient flow with respect to the Lyapunov function~\eqref{e:def:Lyapunov} under a suitable metric. The same metric was discovered by Maas~\cite{Maas2011} in the setting of reversible Markov chains.
The existence of the metric depends crucially on the detailed balance condition satisfied by the equilibrium~\eqref{e:BD:equilibrium} \begin{equation}\label{e:BD:DBC}
a_l \omega_1 \omega_l = b_{l+1} \omega_{l+1} =: k^l, \end{equation} where $k^l$ is the stationary equilibrium flux and the implicit parameter $z$ is chosen according to $\varrho_0$ as described after~\eqref{e:BD:FEmin}. The equations~\eqref{e:BD1}, \eqref{e:BD2}, \eqref{e:BD:flux} can be compactly rewritten with the help of~\eqref{e:BD:DBC} as \begin{equation}\label{e:BD:ChemMaster}
\dot n = - \sum_{l=1}^\infty k^l \bra*{ \frac{n_1 n_l}{\omega_1 \omega_l} - \frac{n_{l+1}}{\omega_{l+1}}} \bra*{ e^1 + e^l - e^{l+1}} , \end{equation} with $e^l_i=0$ for $i\ne l$ and $e^l_l=1$ for $l\in \mathds{N}$. Since the free energy $\mathcal{F}$ is of the form of a relative entropy~\eqref{e:def:Lyapunov}, we can identify its variation as \begin{equation*}
D\mathcal{F}(n) = \bra*{ \log \frac{n_1}{\omega_1}, \dots, \log \frac{n_i}{\omega_i}, \dots} . \end{equation*} Then, the gradient flow formulation of the Becker-Döring equation takes the form \begin{equation}\label{e:BD:GF}
\dot n = - \mathcal{K}(n) \, D\mathcal{F}(n) , \end{equation} where the Onsager matrix $\mathcal{K}$ is defined by \begin{equation}\label{e:BD:Onsager}
\mathcal{K}(n) := \sum_{l=1}^\infty k^l \ \Lambda\bra*{ \frac{n_1 n_l}{\omega_1 \omega_l} , \frac{n_{l+1}}{\omega_{l+1}}} \ \bra*{ e^1 + e^l- e^{l+1}} \otimes \bra*{ e^1 + e^l- e^{l+1}} \end{equation} and $\Lambda(\cdot,\cdot)$ is the logarithmic mean given for $a,b>0$ by \begin{equation}\label{e:def:LogMean}
\Lambda(a,b) = \int_0^1 a^s b^{1-s} \dx{s} = \begin{cases}
\frac{a-b}{\log a - \log b } &, a\ne b \\
a &, a=b . \end{cases} \end{equation} The identification of~\eqref{e:BD:ChemMaster} and~\eqref{e:BD:GF} is based on the algebraic identity \[
\Lambda\bra*{a b, c} \ \bra[\big]{ \log a + \log b - \log c} = a b - c \qquad\text{ for }\qquad a,b,c>0 . \] We refer to Appendix~\ref{s:GScfModels} for the more general structure behind these identities and applications to other coagulation and fragmentation models.
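The algebraic identity above is elementary but easy to get wrong; a quick numerical check (ours, for illustration) confirms it, including the continuous extension of the logarithmic mean on the diagonal.

```python
import math

# Numerical check of the identity
#   Lambda(a*b, c) * (log a + log b - log c) = a*b - c   for a, b, c > 0.

def log_mean(a, b):
    """Logarithmic mean (a - b)/(log a - log b), extended by a at a = b."""
    if math.isclose(a, b):
        return a
    return (a - b) / (math.log(a) - math.log(b))

def residual(a, b, c):
    """Difference between the two sides of the identity; should vanish."""
    lhs = log_mean(a * b, c) * (math.log(a) + math.log(b) - math.log(c))
    return lhs - (a * b - c)
```

The case $ab = c$ exercises the diagonal branch of the logarithmic mean, where both sides vanish.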
\subsection{Variational characterization}\label{s:Intro:BD:Variational} The gradient flow formulation allows for a variational characterization initiated by De Giorgi and his collaborators~\cite{dGMT1980} under the name of curves of maximal slope. From the interpretation of the Becker--Döring model as a chemical reaction network, it is clear that the total mass is conserved, which suggests to define the state manifold \begin{equation*}
\mathcal{M} := \set*{ n \in \mathds{R}_+^\mathds{N} : \sum_{l=1}^\infty l n_l = \varrho_0 } . \end{equation*} Possible variations of the state manifold consistent with the Becker-Döring dynamic are given by the linear space $\mathcal{T}\mathcal{M} = \operatorname{span}\set*{ e^1 + e^l - e^{l+1} : l \in \mathds{N}}$. By the definition of the Onsager matrix~\eqref{e:BD:Onsager}, we have that the following space is well-defined \begin{equation}\label{e:def:cotangent}
\mathcal{T}_n^*\mathcal{M} := \set*{\phi\in \mathds{R}^\mathds{N} : \exists s\in \mathcal{T}\mathcal{M} \text{ such that } s=
-\mathcal{K}(n) \phi} . \end{equation} A crucial ingredient to study the underlying metric structure is the continuity equation and curves of finite action. \begin{definition}[Curves of finite action]\label{def:BD:CurvesFiniteAction}
A pair $[0,T]\ni t\mapsto (n(t),\phi(t))\in \mathcal{M} \times \mathcal{T}^*_{n(t)}\mathcal{M}$ is a solution to the continuity equation, denoted by $(n,\phi)\in \mathcal{CE}_T$, if it satisfies \begin{enumerate}[ (i) ]
\item $n(\cdot): [0,T]\to \mathcal{M}$ is absolutely continuous.
\item The pair $(n,\phi)$ satisfies the continuity equation
for $t\in (0,T)$ in the weak form, that is for all $\psi\in
C^1_c((0,T),\mathds{R})$ and all $l\in \mathds{N}$ holds
\begin{equation}\label{e:cCE}
\int_0^T\Big( \dot\psi(t)\; {n}_{l}(t)- \psi(t)\;\big(\mathcal{K}(n(t)) \phi(t)\big)_l \Big) \dx{t} = 0 .
\end{equation} \end{enumerate} The \emph{action} $\mathcal{A}$ of a pair $(n,\phi) \in \mathcal{M}\times \mathcal{T}^*_{n}\mathcal{M}$ is defined by \begin{equation}\label{e:BD:action}
\mathcal{A}(n,\phi) := \phi \cdot \mathcal{K}(n) \phi = \sum_{l=1}^\infty k^l \hat n^\omega_l \abs*{ \nabla_l \phi}^2 , \end{equation} where $\nabla_l \phi := \phi_{l+1} - \phi_l - \phi_1$ and \begin{equation}\label{e:def:hatn}
\hat n_l^\omega := \Lambda\bra*{\frac{n_1 n_l}{\omega_1 \omega_l} , \frac{n_{l+1}}{\omega_{l+1}}} . \end{equation} A curve $(n,\phi)\in \mathcal{CE}_T$ is called a \emph{curve of finite action} if \begin{equation*}
\sup_{t\in [0,T]} \mathcal{F}(n(t)) <\infty, \quad \int_0^T \mathcal{A}(n(t), \phi(t)) \dx{t} < \infty ,\quad\text{and }\quad \int_0^T \mathcal{D}(n(t)) \dx{t} < \infty , \end{equation*} where $\mathcal{D}(n) = \mathcal{A}(n,-D\mathcal{F}(n))$ is given as in~\eqref{e:BD:Dissipation}. \end{definition} The nonlocal gradient $\nabla_l \phi$ in~\eqref{e:BD:action} can be avoided by interpreting the monomer concentration $n_1 = \varrho - \sum_{l=2}^\infty l n_l$ as a nonlocal boundary condition. Along this idea, a Fokker--Planck equation with this type of boundary condition, having similar features to the Becker--Döring model, was recently introduced in~\cite{ConSch17}.
Curves of finite action give a variational formulation of solutions of the Becker--Döring equation. In comparison to the direct gradient flow equation~\eqref{e:BD:GF}, this avoids regularity questions arising from the application of the chain rule. The concept was introduced in~\cite{dGMT1980} and further investigated in~\cite{AGS05}: for any curve $(n,\phi)\in \mathcal{CE}_T$ of finite action, \begin{equation}\label{e:BD:Jintro}
\mathcal{J}(n) := \mathcal{F}(n(T)) - \mathcal{F}(n(0)) + \frac{1}{2} \int_0^T \mathcal{D}(n(t)) \dx{t} + \frac{1}{2} \int_0^T \mathcal{A}(n(t), \phi(t)) \dx{t} \geq 0 . \end{equation} Moreover, equality is attained if and only if $n$ is a solution of~\eqref{e:BD:ChemMaster}.
We provide the crucial observation of the proof, which follows formally by evaluating \begin{align*}
\pderiv{}{t} \mathcal{F}(n(t)) &= D\mathcal{F}(n(t)) \cdot \dot n(t) \stackrel{\eqref{e:cCE}}{=} D\mathcal{F}(n(t)) \cdot \mathcal{K}(n(t)) \phi(t) \\
&\geq - \frac{1}{2} D\mathcal{F}(n(t)) \cdot \mathcal{K}(n(t))D\mathcal{F}(n(t)) - \frac{1}{2} \phi(t) \cdot \mathcal{K}(n(t)) \phi(t) , \end{align*} where we used that $\mathcal{K}$ is positive semidefinite and the Cauchy--Schwarz inequality. The equality case is read off from the equality case in Cauchy--Schwarz. For a rigorous treatment in a similar situation, we refer to~\cite[Section 2.5]{EFLS16}. We use this variational structure to pass to the limit after a suitable rescaling.
\subsection{The macroscopic limit}\label{s:Intro:MacLim} The connection between the Becker--Döring equation with positive excess mass $\varrho_0 - \varrho_s = \bar\varrho >0$ and a macroscopic theory of coarsening is due to Penrose~\cite{Penrose1997}. He observed by formal asymptotics that the macroscopic part of the Becker-Döring dynamics converges after a suitable rescaling (cf.\ Section~\ref{s:scaling}) to a classical coarsening model introduced by Lifshitz and Slyozov~\cite{Lifshitz1961}, and Wagner~\cite{Wagner1961} \begin{equation}\begin{split}\label{e:LSW}
\partial_t \nu_t + \partial_{\lambda}\big( \lambda^\alpha( u(t) - q \lambda^{-\gamma})\nu_t\big) = 0 \quad \text{with}\quad u(\nu_t) = \frac{q \int \lambda^{\alpha-\gamma} \dx\nu_t}{\int \lambda^\alpha \dx\nu_t} . \end{split}\end{equation} Here, the measure $\nu_t(\dx{\lambda})$ is the distribution of particles of macroscopic size $\lambda\in \mathds{R}_+$. Moreover, the parameters $\alpha$, $\gamma$ and $q$ satisfy Assumption~\ref{ass:BD:rates}, and in the following we call the nonlocal conservation law~\eqref{e:LSW} the LSW equation. Formally, the total mass is conserved and the evolution stays in the state manifold for any $t>0$: \begin{equation*}
\nu_t \in M = \set*{\nu \in C_c^0(\mathds{R}_+)^* \;\middle|\; \int \lambda \;\nu(\dx\lambda) = \bar\varrho } . \end{equation*} The LSW equation is a gradient flow, as formally observed by Niethammer \cite[Section 4]{Niethammer2004}. The driving energy of the system is given exactly by the first order expansion of the macroscopic part of a suitable rescaling of the free energy~\eqref{e:def:Lyapunov} (with $z=z_s$) driving the Becker--Döring equation (cf.\ Lemma~\ref{lem:BD:expansionF}): \begin{equation}\label{e:LSW:energy}
E(\nu) := \frac{q}{1-\gamma} \int \lambda^{1-\gamma} \; \nu(\dx\lambda) . \end{equation}
Let us introduce a formal Riemannian structure and define a tangent space on $M$ by $T_\nu M := \set*{s : \mathds{R}_+ \to \mathds{R} \;\middle|\; \int \lambda s \dx{\lambda} = 0 }$. An identification of tangent and cotangent vectors is obtained via the operator $K(\nu): T_{\nu}^* M \to T_\nu M$ given by \begin{align}\label{e:LSW:Onsager}
K(\nu) w := - \partial_{\lambda}\bra*{\lambda^{\alpha} w \nu} \quad \text{for} \quad w\in T_{\nu}^* M &:= \set*{w: \mathds{R}_+ \to \mathds{R} \;\middle|\; \exists s \in T_\nu M : K(\nu) w = s } . \end{align}
Integrating by parts in the identity $0 = \int \lambda s \dx{\lambda}$ with $s = K(\nu) w$ yields the inclusion $T_{\nu}^* M \subseteq \set*{w: \mathds{R}_+ \to \mathds{R} \;\middle|\; \int \lambda^{\alpha} w \; \nu(\dx{\lambda}) = 0}$. Let us formally derive the gradient structure for the LSW equation (cf.\ \cite[Section 4]{Niethammer2004}); that is, we assume all differentials and quantities to be sufficiently smooth. Using the identification $s = K(\nu) w$ with $w\in T^*_\nu M$, the differential of the energy~\eqref{e:LSW:energy} is given for $s\in T_{\nu}M$ by \begin{align*}
DE(\nu)\cdot s &= \frac{q}{1-\gamma} \int \lambda^{1-\gamma} s \dx{\lambda} =- \int \bra*{u\lambda - \frac{q}{1-\gamma} \lambda^{1-\gamma}} s \, \dx{\lambda}\\
&= - \int \lambda^{\alpha} \bra{ u - q \lambda^{-\gamma}} w \, \nu(\dx{\lambda}) , \end{align*} where $u \in \mathds{R}$ can be chosen such that $u- DE(\nu) \in T_{\nu}^* M$, thanks to the constraint $\int \lambda^\alpha w \, \nu(\dx\lambda)=0$. Then the gradient flow in weak form satisfies, for all $\tilde s \in T_\nu M$, \begin{equation*}
\int \lambda^\alpha w \tilde w \; \nu(\dx{\lambda}) =g_{\alpha,\nu}(\partial_t \nu,\tilde s) = - DE(\nu) \cdot \tilde s = \int \lambda^{\alpha} \bra{ u - q \lambda^{-\gamma}}\tilde w\; \nu(\dx{\lambda}) , \end{equation*} where $\partial_t \nu = -\partial_\lambda(\lambda^{\alpha} w \nu)$ and $\tilde s = - \partial_{\lambda}\bra*{\lambda^\alpha \tilde w \nu}$ in distribution. Hence, we obtain the identification \begin{equation*}
w = u - q \lambda^{-\gamma}, \end{equation*} where $u=u(\nu)$ is a Lagrange multiplier chosen such that $w\in T_\nu^*M$; that is, $w$ satisfies the constraint $\int \lambda^\alpha w \, \nu(\dx\lambda) = 0$, and $u$ is formally given by~\eqref{e:LSW}. Hence, the gradient flow of the energy $E$ with respect to the metric induced by $K$ is given by \begin{equation*}
\partial_t \nu_t = - K(\nu) D E(\nu) = - \partial_\lambda\bra*{\lambda^\alpha\bra*{u(\nu_t) - q \lambda^{-\gamma}}\nu_t} , \end{equation*} where $u(\nu_t)$ is given by~\eqref{e:LSW}. To make the above observation rigorous, we use the de Giorgi formalism of curves of maximal slope. Up to technical details, which are dealt with in Section~\ref{S:LSW}, we can define an action functional as follows: For a pair $(\nu,w)$ solving the continuity equation $\partial_t \nu_t + \partial_{\lambda}\bra*{ \lambda^\alpha w_t \nu_t } = 0$ in the sense of distributions, denoted by $(\nu,w)\in \CE_T$, the action is defined by \begin{equation*}
A(\nu_t,w_t) := \int \lambda^\alpha \abs*{w_t}^2 \dx\nu_t . \end{equation*} Then, by the identification of tangent and co-tangent vectors via $s= - \partial_\lambda(\lambda^\alpha w \nu)$, we obtain that the dissipation is given by \begin{equation}\label{e:LSW:dissipation}
D(\nu_t) = A\bra*{\nu_t, u(t) -DE(\nu_t)} = \int \lambda^\alpha \abs*{ u(\nu_t) - q \lambda^{-\gamma}}^2 \dx\nu_t , \end{equation} where $u(\nu_t)\in L^2((0,T))$, given by~\eqref{e:LSW}, ensures that $u(\nu_t) - q \lambda^{-\gamma}\in T_\nu^* M$, i.e.\ that it is a valid cotangent vector satisfying $\int \lambda^\alpha \bra*{u(\nu_t) - q\lambda^{-\gamma}} \nu_t(\dx{\lambda}) =0$.
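The formula for $u(\nu_t)$ in~\eqref{e:LSW} is precisely the solution of this linear constraint; indeed, provided $\int \lambda^\alpha \dx{\nu_t} > 0$,
\begin{equation*}
\int \lambda^\alpha \bra*{u - q\lambda^{-\gamma}} \nu_t(\dx{\lambda}) = 0 \qquad\Longleftrightarrow\qquad u = \frac{q \int \lambda^{\alpha-\gamma} \,\nu_t(\dx{\lambda})}{\int \lambda^{\alpha} \,\nu_t(\dx{\lambda})} .
\end{equation*}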
The functional $J(\nu)$, which completely characterizes solutions to~\eqref{e:LSW} (cf.\ Proposition~\ref{prop:LSW:DeGiorgi}), is defined by \begin{equation}\label{e:LSW:DeGiorgi}
J(\nu) := E(\nu_T) - E(\nu_0) + \frac{1}{2}\int_0^T D(\nu_t) \dx{t} + \frac{1}{2} \int_0^T A(\nu_t, w_t) \dx{t} \geq 0 , \end{equation} with $J(\nu) = 0 $ if and only if $\nu_t$ is a weak solution to the LSW equation~\eqref{e:LSW}.
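Formally, the same completing-the-square mechanism as for the Becker--Döring functional applies on the macroscopic level: along a curve of finite action one has $E(\nu_T)-E(\nu_0) = -\int_0^T\int \lambda^\alpha \bra*{u(\nu_t) - q\lambda^{-\gamma}} w_t \dx{\nu_t}\dx{t}$, and hence
\begin{equation*}
J(\nu) = \frac{1}{2} \int_0^T \int \lambda^\alpha \abs*{w_t - \bra*{u(\nu_t) - q\lambda^{-\gamma}}}^2 \dx{\nu_t} \dx{t} ,
\end{equation*}
which vanishes precisely when $w_t = u(\nu_t) - q\lambda^{-\gamma}$ holds $\nu_t$-a.e.\ for a.e.\ $t\in[0,T]$; inserted into the continuity equation, this is exactly the LSW equation~\eqref{e:LSW}.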
The main application of this variational framework is to prove the convergence of the Becker--Döring gradient structure to the LSW gradient structure. In addition, the variational characterization of the LSW equation together with a compactness statement for curves of finite action (cf.\ Proposition~\ref{prop:LSW:compactness}) allows us to prove continuous dependence on the initial data (cf.\ Corollary~\ref{cor:LSW:continitial}).
\subsection{Passage to the limit} The macroscopic limit was rigorously derived by Niethammer~\cite{Niethammer2003}. There, the main technical tool was to pass to the limit in the energy-dissipation relation associated with the rescaled Becker--Döring equation in order to obtain the energy-dissipation relation of the LSW equation. The relation for solutions to the Becker--Döring equation is obtained by integrating the identity~\eqref{e:BD:Dissipation} in time: \begin{equation}\label{e:BD:EED}
\mathcal{F}(n(T)) - \mathcal{F}(n(0)) + \int_0^T \mathcal{D}(n(t)) \dx{t} = 0 . \end{equation} The functional $\mathcal{J}$ from~\eqref{e:BD:Jintro} contains the identity~\eqref{e:BD:EED}, since for solutions of the Becker--Döring equation it holds $\mathcal{A}\bra*{n(t),-D\mathcal{F}(n(t))} = \mathcal{D}(n(t))$.
Likewise, it follows from~\eqref{e:LSW:DeGiorgi} and~\eqref{e:LSW:dissipation} that the LSW equation satisfies the energy-dissipation identity \begin{equation*}
E(\nu_T) - E(\nu_0) + \int_0^T D(\nu_t) \dx{t} = 0 ,
\end{equation*} where $u(t)$ in the dissipation $D$ is given by~\eqref{e:LSW}.
The contribution of this work is to lift the convergence statement from the level of energy-dissipation relations along solutions to the level of the functionals $\mathcal{J}$ and $J$ along curves of finite action. In doing so, no essentially new technical difficulties arise, which underlines the fact that the gradient structure is natural for these types of equations. We prove that a suitable rescaling of the functional $\mathcal{J}^\varepsilon$ converges to the functional $J$ in the sense of evolutionary $\Gamma$-convergence under the assumption of well-prepared initial data (see Theorem~\ref{thm:Scale:conv:CfA}). In particular, the gradient structure of the Becker--Döring equation converges to that of the LSW equation, which implies the convergence of solutions (cf.\ Corollary~\ref{cor:ConvDeGiorgi}). This program follows the ideas of Sandier and Serfaty~\cite{Sandier2004}, later generalized by Serfaty~\cite{Serfaty2011}.
The ingredients of the proof of convergence are based on: \emph{(i)} the variational characterization of the Becker--Döring equations in Section~\ref{s:Intro:BD:Variational}, which follows the gradient structure established in~\cite{Mielke2011a}; \emph{(ii)} the rigorous variational characterization of solutions to the LSW equation in Section~\ref{S:LSW}, which extends the formal gradient structure of~\cite[Section 4]{Niethammer2004}; \emph{(iii)} a priori estimates for the variational framework of the Becker--Döring gradient structure in Section~\ref{s:AprioriBD}, which lift many of the results of~\cite{Niethammer2005b} from solutions of the Becker--Döring system to curves of finite action.
Another motivation to reconsider the proof of~\cite{Niethammer2003} is that systems possessing a gradient structure can be well described by studying convexity properties of the free energy with respect to the induced metric. In particular, the results of~\cite{Penrose1989} suggest that the system shows dynamic metastability as described in~\cite{Otto2007} for gradient systems. From this point of view, the additional results on quasistationarity in the next subsection are also first steps towards a characterization of dynamic metastability of the Becker--Döring equations.
\subsection{Well-preparedness of initial data and quasistationarity} A crucial assumption in the approach of showing convergence via curves of maximal slope is the well-preparedness of the initial data, which requires that the rescaled free energy of the Becker--Döring gradient structure converges to that of the LSW gradient structure: \begin{equation*}
\mathcal{F}^\varepsilon(n^\varepsilon(0)) \to E(\nu_0) \qquad\text{as}\qquad \varepsilon \to 0 . \end{equation*} The second contribution of this work is to show that, on the rescaled time-scale, the Becker--Döring dynamics instantaneously reaches a quasistationary equilibrium, which is dictated only by the monomer concentration. On the other hand, the monomer concentration closely follows a macroscopic quantity defined similarly to $u$ in~\eqref{e:LSW}. The crucial ingredient in the proof is an energy-dissipation estimate based on a logarithmic Sobolev inequality, similar to the one used in~\cite{Canizo2015} to prove convergence to equilibrium in the noncondensing case $\varrho_0 \leq \varrho_s$.
The quasistationary result shows that the microscopic part of the rescaled free energy $\mathcal{F}^\varepsilon(n^\varepsilon(t))$ vanishes for almost every $t\geq 0$. It does so by proving a separation of time scales. The fast scale is the relaxation time of small clusters towards a local equilibrium, which can be understood as the response to the slower coarsening of the large clusters. On the level of convergence of gradient flows, this is a step towards showing that only the macroscopic part of the rescaled free energy $\mathcal{F}^\varepsilon(n^\varepsilon(0))$ has to converge to $E(\nu_0)$ in order to ensure well-prepared initial data. The conjecture is that the microscopic part is automatically well-prepared on the observed rescaled time-scale. This is consistent with the continuous dependence on the initial data of the LSW equation, which is valid under the assumption of convergence of the macroscopic energy of the initial data (see Corollary~\ref{cor:LSW:continitial}).
\addtocontents{toc}{\SkipTocEntry} \subsection*{Outline}
Section~\ref{S:main} starts with the rescaling of the Becker--Döring gradient flow structure in Section~\ref{s:scaling}, which enables us to state the main results in Section~\ref{S:main:main}. In Section~\ref{S:LSW}, we establish the gradient flow structure of the LSW equation and prove the continuous dependence on the initial data within this framework. Section~\ref{S:Limit} contains a priori estimates for the Becker--Döring system in Section~\ref{s:AprioriBD}, which then allow us to pass to the limit in the gradient structure in Section~\ref{S:Limit:sub}; finally, we prove the quasistationary equilibrium of the small clusters in Section~\ref{S:quasi}. We conclude the paper with Appendix~\ref{s:GScfModels}, showing that more general discrete coagulation and fragmentation models also fall into this framework, and Appendix~\ref{s:assymptotic_Q}, which provides an elementary estimate.
\section{Main results}\label{S:main}
\subsection{Heuristics and scaling}\label{s:scaling}
From now on, we consider the Becker--Döring system with initial total mass $\varrho_0>\varrho_s$ and rates satisfying Assumption~\ref{ass:BD:rates}. Moreover, the reference state for the free energy is given by $\omega = \omega(z_s)$ as defined in~\eqref{e:BD:equilibrium}.
We fix a scale $\varepsilon^{-1}$ for the large clusters, for some $\varepsilon >0$, and consider the first order expansion of the energy in $\varepsilon$. For some cut-off $l_0$, we introduce for $l\geq l_0$ the rescaled variable $\lambda=\varepsilon l$ and treat $\lambda$ as a continuous variable on $\mathds{R}_+$.
We rescale the cluster density $n_l$ by $\varepsilon^2$ and define the empirical measure by \begin{equation}\label{e:def:BD:mac}
\nu^\varepsilon(\dx\lambda) := \bra*{\Pi_{\mac}^\varepsilon n}(\dx{\lambda}) := \varepsilon \sum_{l\geq l_0} \delta_{\varepsilon l}(\dx\lambda) \frac{n_l}{\varepsilon^2} = \frac{1}{\varepsilon} \sum_{l\geq l_0} \delta_{\varepsilon l}(\dx\lambda) n_l \ . \end{equation} That is, for each $\zeta\in C_c^0(\mathds{R})$ it holds that \begin{equation*}
\int_0^\infty \zeta(\lambda)\, \nu^\varepsilon(\dx{\lambda}) = \frac{1}{\varepsilon} \sum_{l\geq l_0} \zeta(\varepsilon l) n_l . \end{equation*} This scaling preserves the mass in the large clusters, which follows by approximating $\zeta(\lambda)=\lambda$ with cut-off functions.
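Formally, taking $\zeta(\lambda)=\lambda$ (approximated by cut-off functions) in the definition gives
\begin{equation*}
\int_0^\infty \lambda \; \nu^\varepsilon(\dx{\lambda}) = \frac{1}{\varepsilon} \sum_{l\geq l_0} \varepsilon l \, n_l = \sum_{l\geq l_0} l \, n_l ,
\end{equation*}
so the mass contained in the clusters of size $l\geq l_0$ is reproduced exactly, independently of $\varepsilon$.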
The leading order contribution of the free energy is given by the free energy of the large clusters $l\geq l_0$. This part of the free energy~\eqref{e:def:Lyapunov} can be expanded (cf.\ Lemma~\ref{lem:BD:expansionF}) as follows \begin{align}
\mathcal{F}(n)&\geq \sum_{l \geq l_0} \omega_l \psi\bra*{\frac{n_l}{\omega_l}}
= \bra*{\frac{q}{z_s(1-\gamma)} \sum_{l\geq l_0} l^{1-\gamma} n_l}\bra*{1+O(l_0^{-\sigma})+O(l_0^\gamma \omega_{l_0})} \nonumber \\
&= \frac{\varepsilon^\gamma q}{z_s(1-\gamma)} \int \lambda^{1-\gamma} \dx\nu^\varepsilon \bra*{1+O(l_0^{-\sigma})+O(l_0^\gamma \omega_{l_0})}, \label{e:Scaling:1stOrderEnergy} \end{align} for some $\sigma >0$. To match the macroscopic energy~\eqref{e:LSW:energy}, we define the rescaled free energy as \begin{equation*}
\mathcal{F}^\varepsilon(n) = \frac{z_s}{\varepsilon^{\gamma}} \mathcal{F}(n) . \end{equation*} The main result of~\cite{Ball1986} states that the total free energy decreases to zero as $t\to \infty$. Hence, one possible way to obtain initial data $n^\varepsilon(0)$ with $\mathcal{F}^\varepsilon(n^\varepsilon(0)) = O(1)$ is to introduce a time $t_\varepsilon$ such that $\mathcal{F}(n(t_\varepsilon)) = O(\varepsilon^\gamma)$ and to set $n^\varepsilon(0)= n(t_\varepsilon)$. In particular, this implies by the results of~\cite{Penrose1989} that, for $\varepsilon$ small enough, all possibly existing metastable states have already broken down.
The expansion \eqref{e:Scaling:1stOrderEnergy} also shows that the cut-off $l_0$ has to satisfy two conditions (cf.\ \eqref{e:BD:expansionF} and~\eqref{e:Apriori3}): \begin{equation*}
\lim_{\varepsilon\to 0} l_0^\gamma \omega_{l_0} = 0 \qquad\text{and}\qquad \lim_{\varepsilon\to 0} \max\set*{ l_0^\gamma , l_0^\alpha} \sqrt{\mathcal{F}(n^\varepsilon(0))} = 0 . \end{equation*} By taking into account the asymptotics of $\set*{Q_l}_{l\geq 1}$ (cf.\ Lemma~\ref{lem:BD:assymptotic_Q}) and recalling $\omega_l = z_s^l Q_l$, the cut-off $l_0$ can be chosen as \begin{equation}\label{ass:cutoff}
l_0 := \lfloor \varepsilon^{-x} \rfloor \qquad\text{ for some }\qquad x\in \bra*{0,\tfrac{1}{2}} . \end{equation} We consider only states $n$ such that the free energy is of order $\varepsilon^\gamma$; that is, we consider the restricted state space \begin{equation*}
\mathcal{M}^{\varepsilon} := \set[\bigg]{ n\in \mathds{R}_+^\mathds{N} : \sum_{l\geq 1} l n_l = \varrho_0 \ \text{ and }\ \mathcal{F}(n) \leq \varepsilon^\gamma}. \end{equation*} Likewise, the differential of the free energy for states $n^\varepsilon\in \mathcal{M}^\varepsilon$ will be of order $\varepsilon^\gamma$, and hence covectors will also be of scale $z_s^{-1} \varepsilon^\gamma$; that is, we define a rescaled vector field $w^\varepsilon$ by \begin{equation}\label{def:Scale:vectorfield} \nabla_l \phi = (e^{l+1} - e^l - e^1) \cdot \phi = \phi_{l+1}- \phi_l - \phi_1 =:z_s^{-1} \varepsilon^\gamma w^\varepsilon(\varepsilon l) . \end{equation} The rescaling of tangent vectors is then determined by the rescaling necessary for obtaining the macroscopic Onsager operator~\eqref{e:LSW:Onsager}. This follows heuristically by expanding the Onsager matrix~\eqref{e:BD:Onsager}: \begin{equation*}
(\mathcal{K}(n) \phi)_{l} = - \varepsilon^{1-\alpha} \partial_{\lambda}^\varepsilon\bra*{ \lambda^\alpha \varepsilon^\gamma w^\varepsilon \, \nu^\varepsilon} \bra*{1+o(1)} \quad\text{with}\quad \partial_\lambda^\varepsilon f(\lambda) := \frac{f(\lambda+\varepsilon)-f(\lambda)}{\varepsilon} . \end{equation*} Hence, we define the rescaled Onsager operator by \begin{equation*}
(\mathcal{K}^\varepsilon(n) w^\varepsilon)(\varepsilon l) := \frac{1}{\varepsilon^{1-\alpha+\gamma}} (\mathcal{K}(n) \phi)(l) , \end{equation*} where $w^\varepsilon$ and $\phi$ are given by the relation~\eqref{def:Scale:vectorfield}. This rescaling translates to the action $\mathcal{A}(n,\phi)$~\eqref{e:BD:action} and we define the rescaled action by \begin{equation}\label{def:Scale:action}
\mathcal{A}^\varepsilon(n,w^\varepsilon) := \frac{z_s}{\varepsilon^{1-\alpha + 2 \gamma}} \sum_{l\geq 1} k^l \hat n^{\omega}_l \abs*{\nabla_l \phi}^2 . \end{equation} Since the dissipation is given as $\mathcal{D}(n) := \mathcal{A}(n,-D\mathcal{F}(n))$, its rescaling is the same, and we define $\mathcal{D}^\varepsilon(n) = z_s \varepsilon^{-\bra{1-\alpha + 2\gamma}} \mathcal{D}(n)$. Hence the total rescaling between cotangent and tangent vectors is $\varepsilon^{1-\alpha+\gamma}$, which fixes the time scale for the macroscopic process.
Now, we introduce rescaled curves of finite action in analogy to Definition~\ref{def:BD:CurvesFiniteAction}. By abuse of notation, the new time-scale $t/\varepsilon^{1-\alpha+\gamma}$ is still denoted by $t$. \begin{definition}[Rescaled curves of finite action]
A weak solution $[0,T]\ni t \mapsto (n^\varepsilon(t), w^\varepsilon(t))$ to the rescaled continuity equation
\begin{equation*}
\int_0^T \bra*{ \dot\psi(t) n^\varepsilon_l(t) - \psi(t) \bra*{\mathcal{K}^\varepsilon(n^\varepsilon(t)) w^\varepsilon(t)}_l } \dx{t} = 0 ,\qquad\text{for all } \psi\in C_c^1((0,T);\mathds{R})
\end{equation*}
denoted by $(n^\varepsilon,w^\varepsilon)\in \mathcal{CE}^\varepsilon_T$ is called a rescaled curve of finite action if
\begin{equation*}
\sup_{t\in [0,T]} \mathcal{F}^\varepsilon(n^\varepsilon(t)) < \infty , \quad
\int_0^T \mathcal{A}^\varepsilon(n^\varepsilon(t), w^\varepsilon(t)) \dx{t} < \infty
\quad \text{and} \quad
\int_0^T \mathcal{D}^\varepsilon(n^\varepsilon(t)) \dx{t} < \infty .
\end{equation*}
Moreover, for such a curve we define the rescaled functional characterizing curves of maximal slope by
\begin{equation}\label{e:BD:DeGiorgi}
\mathcal{J}^\varepsilon(n^\varepsilon) := \mathcal{F}^\varepsilon(n^\varepsilon(T)) - \mathcal{F}^\varepsilon(n^\varepsilon(0)) + \frac{1}{2} \int_0^T \!\! \mathcal{D}^\varepsilon(n^\varepsilon(t)) \dx{t} + \frac{1}{2} \int_0^T \!\! \mathcal{A}^\varepsilon(n^\varepsilon(t), w^\varepsilon(t)) \dx{t} \geq 0 .
\end{equation}
In particular, solutions such that $\mathcal{J}^\varepsilon(n^\varepsilon) = 0$ satisfy the time-rescaled Becker--Döring equation
\begin{align}\label{e:BD:GF:rescaled}
\dot n^\varepsilon(t) = - \varepsilon^{1-\alpha+\gamma} \mathcal{K}(n^\varepsilon(t)) D\mathcal{F}(n^\varepsilon(t)) .
\end{align} \end{definition}
\subsection{Convergence of the gradient structures}\label{S:main:main}
The functionals $\mathcal{J}^\varepsilon$ \eqref{e:BD:DeGiorgi} and $J$~\eqref{e:LSW:DeGiorgi} are used to characterize solutions of the Becker--Döring and LSW equations in a variational way, respectively. The main idea to show convergence of the Becker--Döring equation to the LSW equation, which goes back to~\cite{Sandier2004} (cf.\ \cite{Serfaty2011}), is to prove $\liminf_{\varepsilon\to 0} \mathcal{J}^\varepsilon(n^\varepsilon) \geq J(\nu)$ for curves of finite action $n^\varepsilon$ converging to $\nu$. The lower semi-continuity estimate can be established by showing individual semi-continuity estimates for the energy, action and dissipation. This is the content of Theorem~\ref{thm:Scale:conv:CfA}. \begin{theorem}[Convergence of curves of finite action]\label{thm:Scale:conv:CfA}
Suppose that $\alpha\geq 1-3\gamma$.
For $T>0$ let $(n^\varepsilon,w^\varepsilon)\in \mathcal{CE}^\varepsilon_T$ be a rescaled curve of finite action and $\nu_0^\varepsilon:= \Pi_{\mac}^\varepsilon n^\varepsilon(0)$ with $\Pi_{\mac}^\varepsilon$ as defined in~\eqref{e:def:BD:mac} satisfy
\begin{equation}\label{ass:tightness}
\int_{R}^\infty \lambda \; \nu^\varepsilon_0(\dx\lambda) \to 0 \qquad\text{as } R\to \infty \qquad \text{ uniformly in $\varepsilon$.}
\end{equation}
Then, there exists a limiting curve $t \mapsto (\nu_t, w_t)\in \CE_T$ such that
\begin{equation}\label{e:conv:nu}
\nu_t^\varepsilon:= \Pi_{\mac}^\varepsilon n^\varepsilon(t) \stackrel{*}{\rightharpoonup} \nu_t \qquad\text{in } C_c^0(\mathds{R}_+)^* \qquad \text{for all } t\in [0,T]
\end{equation}
and
\begin{equation}\label{e:conv:mu}
w^\varepsilon_t(\lambda) \nu_t^\varepsilon(\dx\lambda) \dx{t} \stackrel{*}{\rightharpoonup} w_t(\lambda) \nu_t(\dx\lambda) \dx{t} \qquad\text{in}\quad C_c^0([0,T]\times \mathds{R}_+)^* .
\end{equation}
There exists $u\in L^2((0,T))$ such that
\begin{equation}\label{e:Scale:conv:monomers}
h^\varepsilon(t) := \frac{n_1(t) - z_s}{\varepsilon^{\gamma}} \rightharpoonup u(t) , \qquad\text{ weakly in } L^2((0,T)) ,
\end{equation}
and $u(t)$ satisfies the identity
\begin{equation*}
u(t) = \frac{q \int \lambda^{\alpha- \gamma} \,\nu_t(\dx{\lambda})}{\int \lambda^\alpha \,\nu_t(\dx{\lambda})} .
\end{equation*}
Moreover, the energy, the action and the dissipation satisfy the following $\liminf$ estimates
\begin{align}
\forall t\in [0,T] : \qquad \liminf_{\varepsilon\to 0} \mathcal{F}^\varepsilon(\nu^\varepsilon_t) &\geq E(\nu_t) , \label{e:Scale:conv:Fliminf}\\
\liminf_{\varepsilon\to 0} \int_0^T \mathcal{A}^\varepsilon(\nu^\varepsilon_t,w^\varepsilon_t) \dx{t} &\geq \int_0^T A(\nu_t, w_t) \dx{t} , \label{e:Scale:conv:Aliminf} \\
\liminf_{\varepsilon\to 0} \int_0^T \mathcal{D}^\varepsilon(\nu^\varepsilon_t) \dx{t} &\geq \int_0^T D(\nu_t) \dx{t} .\label{e:Scale:conv:Dliminf}
\end{align} \end{theorem} The classical conclusion from the above theorem is the convergence of curves of maximal slope, under the assumption of well-prepared initial data needed to deal with the term $-\mathcal{F}^\varepsilon(n^\varepsilon(0))$ inside $\mathcal{J}^\varepsilon(n^\varepsilon)$. The following corollary is an immediate consequence of Theorem~\ref{thm:Scale:conv:CfA} by the arguments of~\cite[Theorem 2]{Serfaty2011}. \begin{corollary}[Convergence of curves of maximal slope]\label{cor:ConvDeGiorgi}
Suppose $\alpha\geq 1-3\gamma$ and let $(n^\varepsilon,w^\varepsilon)\in \mathcal{CE}^\varepsilon_T$ be a curve of finite action.
Moreover, assume that $\nu_0^\varepsilon:= \Pi_{\mac}^\varepsilon n^\varepsilon(0)$ satisfies the tightness condition~\eqref{ass:tightness} and that $n^\varepsilon(0)$ is well-prepared in the sense that
\begin{equation*}
\lim_{\varepsilon \to 0} \mathcal{F}^\varepsilon(n^\varepsilon(0)) = E(\nu_0) .
\end{equation*}
Then, there exists a limiting curve $(\nu, w)\in \CE_T$ satisfying~\eqref{e:conv:nu} and~\eqref{e:conv:mu} such that
\begin{equation*}
\liminf_{\varepsilon\to 0} \mathcal{J}^\varepsilon(n^\varepsilon) \geq J(\nu) \geq 0.
\end{equation*}
In particular, if $\mathcal{J}^\varepsilon(n^\varepsilon)=0$, then $J(\nu) = 0$ and it holds that
\begin{align*}
\lim_{\varepsilon\to 0} \mathcal{F}^\varepsilon(n^\varepsilon(t)) &= E(\nu_t) &&\text{for all } t\in [0,T],\\
\mathcal{A}^\varepsilon(n^\varepsilon,w^\varepsilon) &\to A(\nu, w) &&\text{for a.e.\ } t\in [0,T], \\
\mathcal{D}^\varepsilon(n^\varepsilon) &\to D(\nu) &&\text{for a.e.\ } t\in [0,T] .
\end{align*} \end{corollary} \subsection{Quasistationary evolution} The statement~\eqref{e:Scale:conv:monomers} connects the microscopic monomer concentration with a ratio of moments of the macroscopic cluster distribution. It is possible to show this identity already on the level of the rescaled Becker--Döring equation alone. That is, the monomer concentration closely follows a moment ratio of the distribution of the large clusters. \begin{proposition}\label{prop:stability}
For any curve $(n^\varepsilon,\phi^\varepsilon)\in \mathcal{CE}_T^\varepsilon$ such that $\mathcal{J}^\varepsilon(n^\varepsilon) < \infty$ uniformly in $\varepsilon$ and such that $\nu^\varepsilon_0$ satisfies~\eqref{ass:tightness}, the rescaled monomer excess concentration $h^\varepsilon$ as defined in~\eqref{e:Scale:conv:monomers} satisfies
\begin{equation}\label{e:DissBound}
\int_0^T \bra*{ h^\varepsilon(t) - u^\varepsilon(t) }^2 \dx{t} \leq C \int_0^T \mathcal{D}^\varepsilon_{\mac}(n^\varepsilon(t)) \dx{t} ,
\end{equation}
where $\mathcal{D}_{\mac}^\varepsilon$ is defined like $\mathcal{D}^\varepsilon$ with summation restricted to $\set{l_0,\dots,\infty}$ and
\begin{equation*}
u^\varepsilon(t) := \frac{\sum_{l\geq l_0} \bra*{b_{l+1} n_{l+1}(t) - a_l n_l(t)}}{\varepsilon^{\gamma} \sum_{l\geq l_0} a_l n_l(t)} .
\end{equation*} \end{proposition} The above results, together with a refined energy-dissipation estimate based on a logarithmic Sobolev inequality, allow us to establish detailed information on the distribution of the small clusters for rescaled curves of finite action and, in particular, for every solution of the time-rescaled Becker--Döring equation~\eqref{e:BD:GF:rescaled}. The result makes part of the formal asymptotics contained in~\cite[Section 3]{Niethammer2003} rigorous. \begin{theorem}[Quasistationary distribution]\label{thm:QuasiSmall}
For any curve $(n^\varepsilon,\phi^\varepsilon)\in \mathcal{CE}_T^\varepsilon$ such that $\mathcal{J}^\varepsilon(n^\varepsilon) < \infty$ uniformly in $\varepsilon$ and such that $\nu^\varepsilon_0$ satisfies~\eqref{ass:tightness}, the small clusters follow a quasistationary distribution dictated by $n_1$: For
$l_0 = \lfloor\varepsilon^{-x}\rfloor$ with $x$ satisfying~\eqref{ass:cutoff}, it holds that
\begin{equation*}
\int_0^T \mathcal{H}_{\mic}\bra[\big]{ n^\varepsilon(t) \mid \omega(n_1^\varepsilon(t))} \dx{t} \leq C \varepsilon^{\gamma+(1-x)(1-\alpha+\gamma)} \int_0^T \mathcal{D}^\varepsilon_{\mic}(n^\varepsilon_t) \dx{t} \label{e:monexp:q3},
\end{equation*}
where $\omega_l(z) = z^l Q_l$ as defined in~\eqref{e:BD:equilibrium}, $\mathcal{D}_{\mic}^\varepsilon$ is defined like $\mathcal{D}^\varepsilon$ with summation restricted to $\set{1,\dots, l_0-1}$ and $\mathcal{H}_{\mic}$ is the microscopic relative entropy defined by
\begin{equation*}
\mathcal{H}_{\mic}(n \mid \omega(z) ) := \sum_{l=1}^{l_0-1} \omega_l(z) \psi\bra*{\frac{n_l}{\omega_l(z)}} \quad\text{with}\quad \psi(x) = x \log x - x +1.
\end{equation*}
In particular, for a.e.\ $t\in(0,T)$ it holds
\begin{equation}\label{e:monexp:micF}
\lim_{\varepsilon\to 0} \mathcal{F}_{\mic}^\varepsilon(n^\varepsilon(t)) = 0 \qquad\text{and}\qquad \lim_{\varepsilon\to 0} \mathcal{F}^\varepsilon_{\mac}(\nu^\varepsilon_t) = E(\nu_t) ,
\end{equation}
where $\mathcal{F}_{\mic}^\varepsilon$ is defined like $\mathcal{F}^\varepsilon$ with summation restricted to $\set{1,\dots,l_0-1}$. \end{theorem} \begin{remark}
The statement~\eqref{e:monexp:micF} is not enough to ensure well-prepared initial data, since the statement only holds for a.e.\ $t\in [0,T]$.
However, it suggests that the statement of Corollary~\ref{cor:ConvDeGiorgi} holds already under the assumption of \emph{macroscopically well-prepared} initial data:
\begin{equation}\label{ass:wellprepared:initial:mac}
\lim_{\varepsilon \to 0} E(\nu^\varepsilon_0) = E(\nu_0) .
\end{equation}
The assumption~\eqref{ass:wellprepared:initial:mac} together with the tightness condition~\eqref{ass:tightness} is natural, since these conditions are also sufficient for establishing continuous dependence on the initial data for the limiting gradient flow (cf.\ Corollary~\ref{cor:LSW:continitial}). \end{remark} \begin{remark}
It is possible to use a different rescaling of the Becker--Döring system, with different assumptions on the coagulation and fragmentation rates, to obtain the LSW equation in the limit (cf.\ \cite{Collet2002,Laurencot2002}). Recently, within this scaling regime, a quasi-steady approximation was used to derive a suitable boundary condition for the macroscopic limit (cf.\ \cite{Deschamps2016}). \end{remark}
\section{The LSW equation and its gradient structure}\label{S:LSW}
To make the formal calculation from Section~\ref{s:Intro:MacLim} rigorous, we introduce the concept of curves of finite action for the LSW equation. \begin{definition}[Curves of finite action]
A weakly$^*$ continuous curve $[0,T]\ni t \mapsto \nu_t \in M$ is called a curve of finite action, if
there exists a measurable vector field $[0,T] \ni t \mapsto w_t \in T_{\nu_t}^* M$ such that
\begin{equation*}
A(\nu,w) := \int_0^T \int \lambda^{\alpha} |w_t|^2 \; \nu_t(\dx\lambda) \dx{t} < \infty ,
\end{equation*}
where the pair $(\nu,w)\in \CE_T$ solves the continuity equation
\begin{equation}\label{e:LSW:continuityEqu}
\partial_t \nu_t + \partial_\lambda\bra*{\lambda^\alpha w_t \nu_t} = 0 \quad\text{ in }\quad C_c^\infty([0,T]\times \mathds{R}_+)^*.
\end{equation} \end{definition} Before formulating the compactness statement, we revisit the definition of the dissipation~\eqref{e:LSW:dissipation} and generalize it to curves of finite action. The dissipation acts as a weak upper gradient. Hence, for a curve of finite action $[0,T]\ni t \mapsto \nu_t \in M$, and using the fact that $w_t \in T^*_{\nu_t} M$ for all $t\in [0,T]$, it formally follows that \begin{equation}\label{e:LSW:DD}\begin{split}
\abs*{E(\nu_T) - E(\nu_0)} &= \abs*{ -\int_0^T \int q \lambda^{\alpha-\gamma} w_t \dx{\nu_t} \dx{t} } \leq \int_0^T \abs*{ \int \lambda^\alpha \bra*{u(t) - q \lambda^{-\gamma}} \ w_t \dx{\nu_t}} \dx{t} \\
&\leq \int_0^T \bra*{ \int \lambda^\alpha \bra*{u(\nu_t)-q\lambda^{-\gamma}}^2\dx{\nu_t} }^{\frac{1}{2}} \bra*{ A(\nu_t, w_t)}^\frac{1}{2} \dx{t} , \end{split}\end{equation} where $u(\nu_t)$ is an arbitrary function on $M$. The choice of $u(\nu_t)$ is fixed by a minimization in $L^2$. That is, we define the dissipation as the weighted $L^2$-minimal upper gradient for the energy. Before doing so, we need, as an auxiliary result, that finite dissipation implies the existence of the $\alpha$-moment for a curve of finite action. \begin{lemma}[Moment estimate]
Assume $\alpha \geq 1-3\gamma$.
Let $(\nu,w)\in \CE_T$ be a curve of finite action in $M$ such that
\begin{equation}\label{e:LSW:minDissipation}
\inf_{u\in L^2([0,T])} \int_0^T \int \lambda^\alpha \bra*{u(t) - q \lambda^{-\gamma}}^2 \dx{\nu_t} \dx{t} < \infty .
\end{equation}
Then the following moment estimate holds:
\begin{equation}\label{e:LSW:MomentDissipation}
\int_0^T \int \lambda^{\alpha} \dx{\nu_t} \dx{t} < \infty .
\end{equation} \end{lemma} \begin{proof}
Let us define $D(\nu,u) = \int \lambda^\alpha \bra*{u - q \lambda^{-\gamma}}^2 \dx{\nu}$.
We observe that for $\alpha \geq 1-\gamma$ there is nothing to show, since the bound follows by interpolation from $\sup_{t\in [0,T]} E(\nu_t) < \infty$ and $\int \lambda \dx{\nu_t} =\bar\varrho$.
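A minimal way to see this interpolation (assuming additionally $\alpha \leq 1$, which the interpolation between the $(1-\gamma)$-moment and the first moment requires): splitting at $\lambda = 1$,
\begin{equation*}
\int \lambda^\alpha \dx{\nu_t} \leq \int_{(0,1]} \lambda^{1-\gamma} \dx{\nu_t} + \int_{(1,\infty)} \lambda \dx{\nu_t} \leq \frac{1-\gamma}{q}\, E(\nu_t) + \bar\varrho ,
\end{equation*}
since $\lambda^\alpha \leq \lambda^{1-\gamma}$ for $\lambda\leq 1$ when $\alpha \geq 1-\gamma$, and $\lambda^\alpha \leq \lambda$ for $\lambda \geq 1$ when $\alpha \leq 1$.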
Therefore, assume now $\alpha \leq 1-\gamma$.
Let us define $\eta(\lambda) := \lambda \chi_{[0,1]}(\lambda) + \chi_{(1,\infty)}(\lambda)$.
Then, we can estimate with Cauchy--Schwarz for any $\kappa \in \mathds{R}$
\begin{equation}\label{e:LSW:DissEst:ap}
\begin{split}
&\int_0^T \bra*{ \int \bra*{u(t) - q\lambda^{-\gamma}} \eta(\lambda)^\kappa \dx{\nu_t}}^2 \dx{t}\leq \int_0^T D(\nu_t,u(t)) \int \eta(\lambda)^{2\kappa} \lambda^{-\alpha} \dx{\nu_t} \dx{t} \\
&\qquad\qquad \leq \int_0^T D(\nu_t, u(t)) \dx{t} \ \sup_{t\in [0,T]} \int \eta(\lambda)^{2\kappa} \lambda^{-\alpha}\dx{\nu_t} .
\end{split}
\end{equation}
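For later reference, the interpolation argument used here and below is the elementary H\"older bound: for any exponent $1-\gamma \leq \beta \leq 1$ and any $\nu$ with finite first and $(1-\gamma)$-moments,
\begin{equation*}
\int \lambda^{\beta} \dx{\nu} = \int \bra*{\lambda^{1-\gamma}}^{\frac{1-\beta}{\gamma}} \, \lambda^{\frac{\beta-(1-\gamma)}{\gamma}} \dx{\nu} \leq \bra*{\int \lambda^{1-\gamma} \dx{\nu}}^{\frac{1-\beta}{\gamma}} \bra*{\int \lambda \dx{\nu}}^{\frac{\beta-(1-\gamma)}{\gamma}} ,
\end{equation*}
obtained from the H\"older inequality with the conjugate exponents $\frac{\gamma}{1-\beta}$ and $\frac{\gamma}{\beta-(1-\gamma)}$.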
Since $\sup_{t\in [0,T]} E(\nu_t) < \infty$ and $\int \lambda \dx{\nu_t} =\bar\varrho$, we can use interpolation to bound the supremum in $t$, provided $2\kappa -\alpha \geq 1-\gamma$.
On the other hand, since $\int \lambda \dx{\nu_t} = \bar\varrho$ for all $t\geq 0$, for any $T>0$ there exists a constant $\bar\varrho_T > 0$ such that $\int \eta(\lambda) \dx{\nu_t} \geq \bar\varrho_T$ (see also Lemma~\ref{lem:LowerMoment} for a similar argument).
We can estimate the left hand side of~\eqref{e:LSW:DissEst:ap} from below in the case $\kappa=1$ by using the Young inequality for some $0<\tau <1$
\[\begin{split}
\int_0^T \bra*{ \int \bra*{u(t) - q\lambda^{-\gamma}} \eta(\lambda) \dx{\nu_t}}^2 \dx{t} &\geq \bra*{1-\tau} \bar\varrho_T^2 \int_0^T u(t)^2 \dx{t} \\
&\phantom{\geq} - \bra*{\frac{1}{\tau}-1} \int_0^T \bra*{(1-\gamma) E(\nu_t)}^2 \dx{t} .
\end{split}\]
Since $E(\nu_t)\in L^\infty([0,T])$, we obtain the first a priori estimate
\begin{equation}\label{e:LSW:DissEst:ap1}
\int_0^T u(t)^2 \dx{t} \leq C_T \int_0^T D(\nu_t, u(t))\dx{t} + C_T .
\end{equation}
Another choice is $\kappa=1-\gamma$ thanks to $\alpha \leq 1-\gamma$.
Then, we estimate the left hand side of~\eqref{e:LSW:DissEst:ap} by using again the Young inequality with $\tau\in (0,1)$ as follows
\[\begin{split}
\int_0^T \bra*{ \int \bra*{u(t) - q\lambda^{-\gamma}} \eta(\lambda)^{1-\gamma} \dx{\nu_t}}^2 \dx{t} &\geq \bra*{1-\tau} q^2 \int_0^T \bra*{\int \lambda^{-\gamma} \eta(\lambda)^{1-\gamma} \dx{\nu_t}}^2 \dx{t} \\
&\phantom{\geq} - \bra*{\frac{1}{\tau}-1} \int_0^T u(t)^2 \bra*{\frac{(1-\gamma) E(\nu_t)}{q}}^2 \dx{t} .
\end{split}\]
Since we trivially have $\int_1^\infty \lambda^{1-2\gamma}\dx{\nu_t} \leq \int \lambda \dx{\nu_t} = \bar\varrho$, the first a priori bound~\eqref{e:LSW:DissEst:ap1} together with $E(\nu_t) \in L^\infty([0,T])$ yields the second a priori estimate
\begin{equation}\label{e:LSW:DissEst:ap2}
\int_0^T \bra*{ \int \lambda^{1-2\gamma} \dx{\nu_t}}^2 \dx{t} \leq C_T \int_0^T D(\nu_t, u(t))\dx{t} + C_T ,
\end{equation}
which shows~\eqref{e:LSW:MomentDissipation} for $\alpha \geq 1-2\gamma$.
Hence, we assume now $\alpha \leq 1-2\gamma$.
Similarly to~\eqref{e:LSW:DissEst:ap}, we can now estimate by Cauchy--Schwarz, for any $\tilde \kappa \in \mathds{R}$,
\begin{equation}\label{e:LSW:DissEst:ap3}
\begin{split}
\int_0^T \abs*{ \int \bra*{u(t) - q\lambda^{-\gamma}} \eta(\lambda)^{\tilde\kappa} \dx{\nu_t}} \dx{t} &\leq \bra*{\int_0^T D(\nu_t,u(t))\dx{t}}^\frac12 \times \\
&\qquad\bra*{\int_0^T \int \eta(\lambda)^{2\tilde\kappa} \lambda^{-\alpha} \dx{\nu_t} \dx{t} }^\frac12 .
\end{split}
\end{equation}
The second factor is bounded for $2\tilde \kappa - \alpha \geq 1-2\gamma$ by~\eqref{e:LSW:DissEst:ap2}.
Hence, a possible choice is $\tilde \kappa = 1-2\gamma$ by the assumption $\alpha \leq 1-2\gamma$.
Since $u \in L^2((0,T))$ and $\int \lambda^{1-2\gamma} \dx{\nu_t} \in L^2((0,T))$ by~\eqref{e:LSW:DissEst:ap1} and~\eqref{e:LSW:DissEst:ap2}, the bound~\eqref{e:LSW:DissEst:ap3} with $\tilde\kappa = 1-2\gamma$ yields $\int_0^T \int \lambda^{-\gamma}\eta(\lambda)^{1-2\gamma} \dx{\nu_t}\dx{t} < \infty$. As $\lambda^\alpha \leq \lambda^{-\gamma}\eta(\lambda)^{1-2\gamma}$ for $\lambda\leq 1$ thanks to $\alpha \geq 1-3\gamma$, and $\lambda^\alpha \leq \lambda$ for $\lambda \geq 1$, we conclude the estimate~\eqref{e:LSW:MomentDissipation}. \end{proof} The lemma provides the crucial ingredient to conclude that the dissipation is well-defined, and it justifies the use of the weak formulation in the first step of~\eqref{e:LSW:DD}. \begin{proposition}\label{lem:LSW:Dissipation}
Assume $\alpha \geq 1-3\gamma$.
Let $(\nu,w)\in \CE_T$ be a curve of finite action in~$M$ such that~\eqref{e:LSW:minDissipation} holds.
Then the associated minimization problem has a unique solution $u\in L^2([0,T])$ such that
\begin{equation}\label{e:LSW:covector}
\lambda \mapsto u(t) - q\lambda^{-\gamma} \in T^*_{\nu_t} M \qquad \text{ for a.e.\ } t\in [0,T].
\end{equation}
Moreover, the associated functional defined for a.e.\ $t\in [0,T]$ by
\begin{equation}\label{e:LSW:Dissipation}
D(\nu_t) := \int \lambda^\alpha \bra*{u(t)-q \lambda^{-\gamma}}^2 \dx{\nu_t} \quad\text{with}\quad u(t) := \frac{q\int\lambda^{\alpha-\gamma} \dx{\nu_t}}{\int \lambda^\alpha \dx{\nu_t}},
\end{equation}
called \emph{dissipation}, is a~\emph{strong upper gradient} for the energy $E$.
That is, it holds for any curve $(\nu,w)\in \CE_T$ of finite action
\begin{equation}\label{e:LSW:strongupper}
\abs{ E(\nu_t) - E(\nu_s) } \leq \int_s^t \sqrt{D(\nu_r)} \, \sqrt{A(\nu_r,w_r)} \, \dx{r} , \qquad \forall 0\leq s< t \leq T.
\end{equation}
Moreover, equality in~\eqref{e:LSW:strongupper} holds if and only if $w_t(\lambda) = \pm\bra{u(t) - q\lambda^{-\gamma}}$ for $\nu_t$-a.e.\ $\lambda \in \mathds{R}_+$. \end{proposition} \begin{proof}
In the first step, we show~\eqref{e:LSW:covector} and~\eqref{e:LSW:Dissipation}.
To this end, note that the first variation of the minimization problem~\eqref{e:LSW:minDissipation} along some $s:[0,T] \to \mathds{R}$ is given by
\begin{equation*}
\int_0^T \int \bra*{u(t) - q \lambda^{-\gamma}} \lambda^\alpha \dx{\nu_t} \, s(t) \dx{t} = 0 .
\end{equation*}
We show that it is well-defined by an estimate analogous to~\eqref{e:LSW:DissEst:ap3}:
\[\begin{split}
\abs*{\int_0^T \lambda^\alpha \bra*{u(t) - q \lambda^{-\gamma}} \dx{\nu_t}\; s(t) \dx{t}} &\leq \bra*{ \int_0^T D(\nu_t, u(t)) \dx{t}}^\frac12 \times \\
&\qquad\bra*{\int_0^T s(t)^2 \int \lambda^{\alpha} \dx{\nu_t}\dx{t}}^\frac12 ,
\end{split}\]
which is bounded thanks to the estimate~\eqref{e:LSW:MomentDissipation} for $s\in L^\infty((0,T))$.
In addition, the a priori estimate~\eqref{e:LSW:DissEst:ap1} shows that the minimizer is actually in $L^2((0,T))$ and hence satisfies the Euler--Lagrange equation $\int \bra*{u(t)-q \lambda^{-\gamma}} \lambda^\alpha \dx{\nu_t} = 0$ for a.e.\ $t \in [0,T]$, which is nothing else than~\eqref{e:LSW:covector}; solving for $u(t)$ gives the formula in~\eqref{e:LSW:Dissipation}.
It remains to show that $D(\nu_t)$ is a strong upper gradient for the energy.
To this end, we fix a test function $\zeta\in C_c^\infty(\mathds{R}_+)$ and calculate for a curve $(\nu,w)\in \CE_T$
\begin{align*}
\pderiv{}{t} \frac{q}{1-\gamma} \int \lambda^{1-\gamma} \zeta \, \dx\nu_t &= q \int \lambda^{\alpha-\gamma} \zeta \, w_t \, \dx\nu_t + \frac{q}{1-\gamma} \int \lambda^{1+\alpha-\gamma} \zeta' \, w_t \, \dx\nu_t =: \I + \II.
\end{align*}
Using the fact that $w_t\in T_{\nu_t}^* M$, we can smuggle in $u(t)$ and apply Cauchy--Schwarz to the first term $\I$, to obtain
\begin{align*}
\I \leq { A(\nu_t, w_t)}^\frac12 \ \bra*{ \int \lambda^{\alpha} \bra{ u(t) - q\lambda^{-\gamma} \zeta}^2 \dx{\nu_t}}^\frac12 .
\end{align*}
Here, equality holds if and only if $w_t = \pm w^\zeta_t$ with $w^\zeta_t := u(t) - q\lambda^{-\gamma} \zeta$.
Hence, choosing $\zeta_n$ converging to $1$ from below, the estimate~\eqref{e:LSW:strongupper} follows by integration in time and dominated convergence, provided the term $\II$ vanishes in the limit.
After an additional approximation step, we may choose the sequence $\zeta_n(\lambda) = n \lambda \chi_{[0,1/n)} + \chi_{[1/n,\infty)}$ and estimate~$\II$ by
\begin{align*}
\II \leq \frac{1}{1-\gamma} \bra*{ \int_0^{\frac{1}{n}} \lambda^{\alpha} \abs*{w_t}^2 \dx{\nu_t}}^\frac12\ \bra*{ \int_0^{\frac{1}{n}} \lambda^{\alpha} \abs*{ u - q\lambda^{-\gamma} \lambda\zeta'_n}^2 \dx{\nu_t}}^\frac12.
\end{align*}
Since we may assume the r.h.s.\ of~\eqref{e:LSW:strongupper} to be finite, dominated convergence again shows that $\II \to 0$ as $n\to \infty$, which finishes the proof. \end{proof} \begin{lemma}[Tightness is preserved by curves of finite action]\label{lem:LSW:tight}
Let $\set{\nu_0^\varepsilon}_{\varepsilon>0}$ be a family satisfying the tightness condition~\eqref{ass:tightness}.
Then, for any $T>0$ and any family of curves $\set{(\nu^\varepsilon,w^\varepsilon)\in \CE_T : \nu^\varepsilon_{t=0} = \nu^\varepsilon_0 }_{\varepsilon>0}$ of uniformly bounded action, the family $\set{\nu_t^\varepsilon}_{\varepsilon>0}$ satisfies the tightness condition~\eqref{ass:tightness} uniformly in $\varepsilon$ for any $t\in [0,T]$. \end{lemma} \begin{proof}
Fix a test function $\eta_{r,R} \in C_c^\infty(\mathds{R}_+,[0,1])$ such that $\eta_{r,R}(s) = 0$ for $s<r/2$ and $s>2R$, $\eta_{r,R}(s) = 1$ for $r\leq s \leq R$, $\abs{\eta'_{r,R}(s)} \leq C/r$ for $r/2\leq s < r$ as well as $\abs{\eta'_{r,R}(s)} \leq C /R$ for $R\leq s < 2R$.
We can estimate for a fixed curve of finite action $(\nu,w)\in \CE_T$
\begin{align*}
\abs*{\pderiv{}{t} \int \lambda \eta(\lambda) \dx{\nu_t} } &\leq \int \lambda^\alpha \, \eta \, \abs{w_t} \dx{\nu_t} + \int \lambda^{1+\alpha} \, \abs{\eta'} \, \abs{w_t} \dx{\nu_t} \\
&\leq \frac{C}{r^{\frac{1-\alpha}{2}}} \int_{\frac{r}{2}}^\infty \lambda^{\frac{1+\alpha}{2}} \abs{w_t} \dx{\nu_t} + \frac{C}{r^{\frac{1-\alpha}{2}}} \int_{\frac{r}{2}}^r \lambda^{\frac{1+\alpha}{2}} \abs{w_t} \dx{\nu_t} \\
&\quad +\frac{C}{R^{\frac{1-\alpha}{2}}} \int_{R}^{2R} \lambda^{\frac{1+\alpha}{2}} \abs{w_t} \dx{\nu_t} \\
&\leq C\bra*{\frac{1}{r^{\frac{1-\alpha}{2}}}+ \frac{1}{R^{\frac{1-\alpha}{2}}}}\ A(\nu_t,w_t)^\frac12 \ \bra*{ \int \lambda \dx{\nu_t} }^{\frac{1}{2}}.
\end{align*}
By an integration in time, letting $R\to \infty$ and using the assumption of finite action, we obtain for all $t\in [0,T]$ the estimate
\begin{equation*}
\int_{r}^\infty \lambda \dx{\nu_t} \leq \int_{\frac{r}{2}}^\infty \lambda \dx{\nu_0} + \frac{C \sqrt{T}}{r^\frac{1-\alpha}{2}} .
\end{equation*}
Here the constant $C$ depends only on the test function and on the action of the curve. Hence, applying this estimate to $\set{\nu_t^\varepsilon}_{\varepsilon>0}$, tightness follows from the tightness assumption on $\set{\nu_0^\varepsilon}_{\varepsilon>0}$ and the uniformly bounded action of the family. \end{proof} \begin{proposition}[Compactness of curves of finite action]\label{prop:LSW:compactness}
Assume $\alpha \geq 1-3\gamma$ and let $(\nu^n, w^n) \in \CE_T$ for $n\in\mathds{N}$ be a family of solutions to the continuity equation with uniformly bounded action and dissipation such that $\set{\nu^n_0}_{n\in\mathds{N}}$ satisfies the tightness condition~\eqref{ass:tightness}.
Then, there exists a subsequence and a couple $(\nu, w)\in \CE_T$, such that \begin{align}
\nu_t^n &\stackrel{*}{\rightharpoonup} \nu_t \qquad \text{ in } C_c^0(\mathds{R}_+)^* \quad \forall t\in [0,T] , \label{e:LSW:compact:nu} \\
w^n \nu^n &\stackrel{*}{\rightharpoonup} w \nu \qquad \text{ in } C_c^0([0,T] \times \mathds{R}_+)^*.\notag \end{align}
In addition, the action and dissipation satisfy the $\liminf$ estimates \begin{align}\label{e:LSW:compact:Alsc}
\int_0^T A(\nu_t, w_t) \dx{t} &\leq \liminf_{n\to \infty} \int_0^T A(\nu_t^n, w_t^n) \dx{t} \\
\int_0^T D(\nu_t) \dx{t} &\leq \liminf_{n\to \infty} \int_0^T D(\nu_t^n) \dx{t} \label{e:LSW:compact:Dlsc}. \end{align} \end{proposition} \begin{proof}
For any $\zeta\in C_c^1(\mathds{R}_+)$ and $0\leq t_1 < t_2 \leq T$, it holds that \begin{align*}
\abs*{\int \zeta \dx{\nu_{t_2}^n} - \int \zeta \dx{\nu_{t_1}^n}} &= \abs*{ \int_{t_1}^{t_2} \int \partial_\lambda \zeta \, \lambda^\alpha \, w_t^n \; \nu_t^n(\dx{\lambda}) \dx{t}} \\
&\leq \sup_{\lambda > 0} \frac{\abs*{\partial_\lambda \zeta}}{\lambda^{\frac{\alpha}{2}}} \abs*{t_2 - t_1}^\frac12 \bra*{\int_{t_1}^{t_2} A(\nu_t^n, w_t^n) \dx{t}}^\frac12 , \end{align*}
which shows~\eqref{e:LSW:compact:nu} along a subsequence together with the weak$^*$ continuity of $t\mapsto\nu_t$.
Moreover, it holds for $\kappa \in \mathds{R}$ and $\zeta \in C_c^0([0,T]\times \mathds{R}_+)$ \begin{align}
\int_0^T \int \zeta(t,\lambda) \lambda^{\kappa+\alpha} \abs*{w_t^n} \dx\nu_t^n \dx{t} \leq &\bra*{ \int_0^T A(\nu_t^n,w_t^n) \dx{t}}^\frac12 \times \notag \\
&\bra*{ \int_0^T \int \zeta(t,\lambda)^2 \lambda^{2\kappa+\alpha} \dx{\nu^n_t} \dx{t} }^\frac12 \label{e:LSW:compact:p1} . \end{align}
By lower semicontinuity it follows that $E(\nu_t)<\infty$, and the tightness Lemma~\ref{lem:LSW:tight} yields conservation of the total mass, $\int \lambda \dx{\nu_t} = \int \lambda \dx{\nu_0} = \int \lambda \dx{\nu^n_0} < \infty$.
Hence, $\nu_t \in M$ for all $t\in [0,T]$.
Then, by interpolation, the second term in~\eqref{e:LSW:compact:p1} is finite for $1-\gamma\leq 2\kappa +\alpha \leq 1$.
There exists $\mu \in C_c^0([0,T]\times \mathds{R}_+)^*$ such that, along a further subsequence, the flux $\lambda^\alpha w^n \nu^n \stackrel{*}{\rightharpoonup} \mu$ and the pair $(\nu,\mu)$ satisfies $\partial_t \nu_t + \partial_{\lambda} \mu_t = 0$ in $C_c^\infty([0,T]\times \mathds{R}_+)^*$.
Since $(\nu^n,w^n)$ is a curve of finite action, we find a subsequence such that
\[
\lim_{n\to \infty} \int_0^T A(\nu^n_t,w^n_t) \dx{t} = A^* := \liminf_{n\to \infty} \int_0^T A(\nu^n_t,w^n_t) \dx{t} .
\]
Hence, we get the estimate with $\kappa = (1-\alpha)/2$
\begin{equation}\label{e:LSW:compact:Action:p1}
\int_0^T \int \zeta(t,\lambda) \lambda^\frac{1-\alpha}{2} \mu_t(\dx\lambda) \dx{t} \leq \bra*{A^* \int_0^T \int \zeta(t,\lambda)^2 \lambda \dx{\nu_t} \dx{t}}^\frac12.
\end{equation}
Now, we can apply the Riesz representation theorem to find $v\in L^2(\lambda\dx{\nu_t} dt)$ such that
\[
\int_0^T \int \zeta(t,\lambda) \lambda^\frac{1-\alpha}{2} \mu_t(\dx\lambda) \dx{t} = \int_0^T \int \zeta(t,\lambda) \lambda v(t,\lambda) \; \nu_t(\dx{\lambda}) \dx{t} .
\]
Setting $\tilde\zeta(t,\lambda) = \lambda^\frac{1-\alpha}{2} \zeta(t,\lambda)$ and $w_t(\lambda) =\lambda^{\frac{1-\alpha}{2}} v(t,\lambda)$, we conclude that $\mu_t(\dx\lambda) = \lambda^\alpha w_t(\lambda)\, \nu_t(\dx\lambda)$.
Moreover, since $w\in L^2(\lambda^\alpha \dx{\nu_t} \dx{t})$, the pair $(\nu,w)$ is of finite action.
Furthermore, approximating $\zeta(t,\lambda) = \frac{w_t(\lambda)}{\lambda^{\frac{1-\alpha}{2}}}$ in~\eqref{e:LSW:compact:Action:p1} yields the lower semicontinuity~\eqref{e:LSW:compact:Alsc} of the action.
Finally, \eqref{e:LSW:compact:Dlsc} follows by noting that $D(\nu_t^n) = A(\nu_t^n, u(\nu_t^n) - q\lambda^{-\gamma})$, which is well-defined by~\eqref{e:LSW:covector}. \end{proof} The formulation of the LSW gradient flow as a curve of maximal slope now reads, in analogy to that of the Becker--Döring equation~\eqref{e:BD:Jintro}, as follows. \begin{proposition}[LSW equation as curves of maximal slope]\label{prop:LSW:DeGiorgi}
Let $\alpha \geq 1-3\gamma$.
For any $(\nu,w) \in \CE_T$ with finite action, it holds that
\begin{align}\label{e:LSW:J}
J(\nu) &:= E(\nu_T) - E(\nu_0) + \frac{1}{2} \int_0^T
D(\nu_t) \dx{t}
+ \frac{1}{2} \int_0^T A(\nu_t, w_t) \dx{t} \geq 0 .
\end{align}
Moreover, equality holds if and only if $\nu_t$ is a solution to the LSW equation. \end{proposition} \begin{proof}
We may assume that the dissipation $\int_0^T D(\nu_t) \dx{t}$ is finite, since otherwise there is nothing to show.
Then the strong upper gradient property~\eqref{e:LSW:strongupper} of the dissipation combined with the Young inequality yields \begin{equation*}
\pderiv{}{t} E(\nu_t) \geq -\frac{1}{2} D(\nu_t) - \frac{1}{2} A(\nu_t,w_t) . \end{equation*}
An integration of the above estimate shows the nonnegativity of $J$ in~\eqref{e:LSW:J}.
The equality case follows from the equality case in~\eqref{e:LSW:strongupper} for a.e.\ $t\in [0,T]$ by choosing $w_t(\lambda) = u(\nu_t) - q\lambda^{-\gamma}$.
Then the result for all $t\in [0,T]$ follows by the weak$^*$ continuity of $t\mapsto \nu_t$.
Now, for a curve $(\nu,w)\in \CE_T$ with $J(\nu)=0$, Proposition~\ref{lem:LSW:Dissipation} and~\eqref{e:LSW:strongupper} yield the identity
\begin{align*}
-\int \lambda^{\alpha} q \lambda^{-\gamma} w_t \dx{\nu_t} = \sqrt{A(\nu_t,w_t) D(\nu_t)} = A(\nu_t,w_t) = D(\nu_t) .
\end{align*}
Since $w_t \in T^*_{\nu_t} M$, it follows that $w_t = u(t)-q\lambda^{-\gamma}$ for $\nu_t$ almost every $\lambda \in \mathds{R}_+$.
Hence, the continuity equation~\eqref{e:LSW:continuityEqu} takes the form
\begin{equation*}
\partial_t \nu_t + \partial_\lambda\bra*{\lambda^\alpha\bra{ u(t)-q\lambda^{-\gamma}}\nu_t} = 0 \quad\text{ in }\quad C_c^\infty([0,T]\times \mathds{R}_+)^*,
\end{equation*}
which is nothing else than the weak formulation of the LSW equation. \end{proof} \begin{remark}
The compactness statement in Proposition~\ref{prop:LSW:compactness} is also a tool to prove existence of solutions to the LSW equation via the particle method (cf.\ \cite{Niethammer1998,Niethammer2005b}).
To this end, the initial distribution is approximated in the weak$^*$ sense by a discrete sum of Dirac deltas. Solutions for such data are obtained by solving the finite system of ordinary differential equations determined by~\eqref{e:LSW} for each particle. The compactness statement then allows one to pass to the limit in the particle number, which yields existence for measure-valued initial distributions. \end{remark} In addition, the compactness statement of Proposition~\ref{prop:LSW:compactness} together with the variational characterization of solutions of the LSW equation from Proposition~\ref{prop:LSW:DeGiorgi} is the essential tool to show the continuous dependence of the solution on the initial data. \begin{corollary}[Continuous dependence on the initial data]\label{cor:LSW:continitial}
Let $\set{\nu_0^\varepsilon}_{\varepsilon>0}$ be a sequence of initial data satisfying the tightness condition~\eqref{ass:tightness} and
\begin{equation}\label{e:LSW:conv:init}
\lim_{\varepsilon \to 0} E(\nu_0^\varepsilon) = E(\nu_0) < \infty .
\end{equation}
Then there exists a solution $\nu \in C_c^\infty([0,T]\times \mathds{R}_+)^*$ to the LSW equation such that $\nu_t^\varepsilon \stackrel{*}{\rightharpoonup} \nu_t$ in $C_c^0(\mathds{R}_+)^*$ for all $t\in [0,T]$. \end{corollary} \begin{proof}
By the compactness statement of Proposition~\ref{prop:LSW:compactness}, there exists a pair $(\nu_t, w_t)\in\CE_T$ that is the weak$^*$ limit of $(\nu_t^\varepsilon,w_t^\varepsilon)\in\CE_T$ along a subsequence and satisfies the two $\liminf$ estimates~\eqref{e:LSW:compact:Alsc} and~\eqref{e:LSW:compact:Dlsc}.
By lower semicontinuity of the energy and the assumption~\eqref{e:LSW:conv:init}, it follows that $0=\liminf_{\varepsilon\to 0} J(\nu^\varepsilon) \geq J(\nu) \geq 0$, hence $J(\nu) =0$, which proves the claim. \end{proof} \begin{remark}
The above result is consistent with the existing literature: in~\cite[Theorem 2.2]{Niethammer2005b}, the continuous dependence on the initial data was shown under the tightness condition~\eqref{ass:tightness} with respect to weak$^*$ convergence for continuous test functions compactly supported on $[0,\infty)$ including~$0$, i.e.\ against Borel measures on $[0,\infty)$.
Then, it is easy to see that weak$^*$ convergence with respect to this class implies convergence of the macroscopic energy~\eqref{ass:wellprepared:initial:mac}. \end{remark}
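For illustration, the particle method mentioned above can be sketched as follows (a formal sketch; the precise setup is in the cited works): for particles $\lambda_1,\dots,\lambda_N > 0$ one solves
\begin{equation*}
\dot\lambda_i = \lambda_i^\alpha \bra*{u(t) - q \lambda_i^{-\gamma}} \qquad\text{with}\qquad u(t) = \frac{q \sum_{i} \lambda_i^{\alpha-\gamma}}{\sum_{i} \lambda_i^{\alpha}} ,
\end{equation*}
which is the covector from~\eqref{e:LSW:Dissipation} evaluated at the empirical measure $\nu_t^N = \frac{1}{N}\sum_i \delta_{\lambda_i(t)}$. By construction $\sum_i \dot\lambda_i = 0$, so the total mass is conserved as long as no particle shrinks to size zero.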
\section{Proof of main results}\label{S:Limit}
\subsection{A priori estimates for the Becker--Döring gradient structure}\label{s:AprioriBD}
In this section, we consider the Becker--Döring equation and its gradient structure as introduced in Sections~\ref{s:Intro:BD} and~\ref{s:Intro:BD:GF}, respectively.
The reversible equilibrium distribution $\omega$ with parameter $z \in (0,z_s]$ (corresponding to the conserved quantity) is given by~\eqref{e:BD:equilibrium}. Note that the radius of convergence of $z \mapsto \sum_{l=1}^\infty l \omega_l(z)$ is $z_s$ and that $\sum_{l=1}^\infty l \omega_l(z_s) =: \varrho_s < \infty$ (cf.\ Lemma~\ref{lem:BD:assymptotic_Q} below). Hence, the equilibrium state $\omega_l := \omega_l(z_s)$ is the one with the largest total mass $\varrho_s$. We work in the excess mass regime: any state has total mass larger than $\varrho_s$, so that there is no equilibrium state with the same total mass. The free energy is always the relative entropy with respect to $\omega = \omega(z_s)$, unless stated otherwise. \begin{lemma}\label{lem:BD:assymptotic_Q} Under Assumption~\ref{ass:BD:rates}, there exists a constant $\mathcal{F}_0$ such that for any $l\geq 2$ \begin{equation}\label{e:BD:assymptotic_Q}
Q_l = \frac{1}{l^\alpha z_s^{l-1}} \exp\bra*{\bra*{\mathcal{F}_0- \frac{q}{z_s}(1-\gamma) l^{1-\gamma} + \frac{q^2}{2 z_s^2(1-2\gamma)} \bra*{l^{1-2\gamma}-1}} \bra*{1+ O(l^{-\gamma})}} , \end{equation} where $\frac{l^{1-2\gamma}-1}{1-2\gamma} := \log l$ for $\gamma=\frac{1}{2}$. \end{lemma} The proof relies on elementary estimates and is included for convenience in Appendix~\ref{s:assymptotic_Q}. The expansion of the rates allows us to easily conclude the expansion of the free energy~$\mathcal{F}$. \begin{lemma}[Expansion of free energy]\label{lem:BD:expansionF}
Let $n\in \mathcal{M}$ be such that $\mathcal{F}(n)< \infty$, with $\mathcal{F}$ as defined in~\eqref{e:def:Lyapunov}. Then there exists $\sigma>0$ such that for any $l_0 \geq 2$ \begin{equation}\label{e:BD:expansionF}
\mathcal{F}_{l_0}(n) = \mathcal{F}_{l_0}^{\LSW}(n) \bra*{1+ O(l_0^{-\sigma}) + O(l_0^\gamma \omega_{l_0})} , \end{equation} where $\mathcal{F}_{l_0}$ and $\mathcal{F}_{l_0}^{\LSW}$ are defined by \begin{equation*}
\mathcal{F}_{l_0}(n) := \sum_{l=l_0}^\infty \omega_l \psi\bra*{\frac{n_l}{\omega_l}} \qquad\text{and}\qquad \mathcal{F}_{l_0}^{\LSW}(n) := \frac{q}{z_s(1-\gamma)} \sum_{l=l_0}^\infty l^{1-\gamma} n_l . \end{equation*} \end{lemma} \begin{proof} We expand the function $\psi$ in the definition of $\mathcal{F}_{l_0}$ \begin{align*}
\mathcal{F}_{l_0}(n) = \sum_{l=l_0}^\infty \bra*{ n_l \log \frac{1}{z_s^l Q_l} + n_l\bra*{\log n_l - 1 } + \omega_l} \end{align*} We estimate the first sum using the asymptotic expansion~\eqref{e:BD:assymptotic_Q} \begin{align*}
\sum_{l=l_0}^\infty \bra*{ n_l \log \frac{1}{z_s^l Q_l} } &= \frac{q}{z_s(1-\gamma)} \sum_{l=l_0}^\infty l^{1-\gamma} n_l + O\bra*{\sum_{l=l_0}^\infty l^{1-2\gamma} n_l} \\
&\quad +\sum_{l=l_0}^\infty l^{1-\gamma} n_l \frac{\log l^\alpha}{l^{1-\gamma}} - \sum_{l=l_0}^\infty \frac{l^{1-\gamma}}{l^{1-\gamma}}n_l \bra*{\log z_s + \mathcal{F}_0\bra*{1+O(l^{-\gamma})}}\\
&= \bra*{\frac{q}{z_s(1-\gamma)} \sum_{l=l_0}^\infty l^{1-\gamma} n_l} \bra*{1+ O\bra[\big]{l_0^{-\gamma} \log l_0} +O\bra[\big]{l_0^{-(1-\gamma)}}} . \end{align*} Likewise, we note that for any $\beta \in (0,1)$ there exists $C_\beta > 0 $ such that for $x>0$ \begin{equation*}
\abs*{\min\set*{x (\log x -1),0}} \leq C_\beta x^\beta \end{equation*} and with the Hölder inequality, we can estimate \begin{align*}
\sum_{l=l_0}^\infty \abs*{\min\set*{ n_l\bra*{\log n_l -1},0}} &\leq C_\beta \sum_{l=l_0}^\infty n_l^\beta \leq C_\beta \bra*{ \sum_{l=l_0}^\infty l^{1-\gamma} n_l}^\beta \bra*{\sum_{l=l_0}^\infty \frac{1}{l^{\frac{\beta}{1-\beta} (1-\gamma)}}}^{1-\beta} . \end{align*} Now, we can choose $\beta$ such that $\frac{\beta}{1-\beta} (1-\gamma)= \kappa > 1 $ and $\beta<1$ leading to the estimate \begin{align*}
\sum_{l=l_0}^\infty \abs*{\min\set*{ n_l\bra*{\log n_l -1},0}} &\leq C_\beta \;\bra*{ \beta \sum_{l=l_0}^\infty l^{1-\gamma} n_l + 1-\beta}\; O(l_0^{-(\kappa-1)(1-\beta)}). \end{align*} The last term evaluates with the help of~\eqref{e:BD:assymptotic_Q} to \begin{align*}
\sum_{l=l_0}^\infty \omega_l &\leq \sum_{l=l_0}^\infty \frac{z_s}{l^\alpha} \exp\bra*{\bra*{\mathcal{F}_0 - \frac{q}{z_s(1-\gamma)} l^{1-\gamma}}\bra*{1+O(l^{-\gamma})}}\\
&\leq C \int_{l_0}^\infty \exp\bra*{- \frac{q}{z_s(1-\gamma)} l^{1-\gamma}} \dx{l} \leq C \, l_0^\gamma \exp\bra*{- \frac{q}{z_s(1-\gamma)} l_0^{1-\gamma}}. \end{align*} Therefore, a combination of all the estimates leads to the result. \end{proof} Moreover, we need a Csiszár--Pinsker-type inequality for the free energy, which was already a crucial ingredient in~\cite{Niethammer2003}. \begin{proposition}[{Csiszár--Pinsker inequality~\cite[Lemma 2.1, 2.2]{Niethammer2003}}]\label{lem:Apriori}
For $n\in \mathcal{M}$, any small $\eta > 0$, any $p< \infty$, and any $l_0\geq 2$, it holds that \begin{align}
\sum_{l=1}^\infty l^{1-\gamma} \abs*{n_l - \omega_l} &\leq C \sqrt{\mathcal{F}(n)} \label{e:Apriori1}\\
\abs*{\sum_{l=l_0}^\infty l n_l - \bra*{\varrho-\varrho_s}} &\leq C l_0^\gamma \sqrt{\mathcal{F}(n)} + C_{p} l_0^{-p} . \label{e:Apriori3} \end{align} \end{proposition} The next lemmata provide statements on curves of finite action which yield the compactness needed later for passing to the limit. They are the analogs of~\cite[Lemma 2.3 and 2.4]{Niethammer2003}, but we prove them for curves of finite action instead of solutions to the Becker--Döring equation. \begin{lemma}[A priori estimates for curves of finite action]
Let $(n,\phi)\in \mathcal{CE}_T$ be a curve of finite action as in Definition~\ref{def:BD:CurvesFiniteAction} and let $\eta\in L^2((0,T))$. Then it holds that \begin{align}\label{e:L1ActionEst}
\int_0^T \eta(t) \sum_{l=l_0}^\infty l^\kappa k^l \hat n^\omega_l(t) \abs*{\nabla_l \phi(t)} \dx{t} \leq C\; &\bra[\bigg]{\sup_{t\in [0,T]} \mathcal{F}^{\LSW}_{l_0}(n(t))}^{\frac{1-\alpha-2\kappa}{2\gamma}} \\
&\times \int_0^T \abs*{\eta(t)} \; \sqrt{\mathcal{A}_{\mac}(n(t),\phi(t))} \dx{t},\notag \end{align} for any $\kappa \in \pra*{\frac{1-\alpha-\gamma}{2},\frac{1-\alpha}{2}}$, where $\mathcal{A}_{\mac}$ is the action defined in \eqref{e:BD:action} restricted to $l\geq l_0$. Here, $\nabla_l \phi := \phi_{l+1} - \phi_l - \phi_1$ and $\hat n_l^\omega$ is defined in~\eqref{e:def:hatn}. Moreover, it also holds that \begin{align}\label{e:L1FluxEst}
\int_0^T \eta(t) \sum_{l=l_0}^\infty l^\kappa \abs*{ a_l n_1(t) n_l(t) - b_{l+1} n_{l+1}(t)} \dx{t} \leq C\; &\bra[\bigg]{\sup_{t\in [0,T]} \mathcal{F}^{\LSW}_{l_0}(n(t))}^{\frac{1-\alpha-2\kappa}{2\gamma}} \\
&\times \int_0^T \abs*{\eta(t)} \; \sqrt{\mathcal{D}_{\mac}(n(t))} \dx{t} ,\notag \end{align} where again $\mathcal{D}_{\mac}$ is defined as in~\eqref{e:BD:Dissipation} restricted to $l\geq l_0$. \end{lemma} \begin{proof}
We estimate using the Cauchy--Schwarz inequality \begin{equation*}
\sum_{l=l_0}^\infty l^\kappa k^l \hat n^\omega_l \abs*{\nabla_l\phi(t)} \leq \bra*{\sum_{l=l_0}^\infty k^l \hat n^\omega_l \abs*{\nabla_l\phi(t)}^2}^\frac12 \bra*{\sum_{l=l_0}^\infty l^{2\kappa} k^l \hat n^\omega_l}^\frac{1}{2} . \end{equation*}
Now, using the fact that
\[
k^l \hat n^\omega_l = \Lambda\bra*{a_l n_1 n_l, b_{l+1} n_{l+1}} =\Lambda\bra*{ l^{\alpha} n_1 n_l , (l+1)^{\alpha} (z_s+q (l+1)^{-\gamma}) n_{l+1}},
\]
the estimate $\frac{(l+1)^\alpha}{l^{\alpha}} \leq 1+ \frac{\alpha}{l}$ and from~\eqref{e:Apriori1} the bound $\abs*{n_1 - z_s} \leq C \sqrt{\mathcal{F}(n)}$, it follows
\begin{equation*}
\sum_{l=l_0}^\infty l^{2\kappa} k^l \hat n^\omega_l \leq 2\bra*{z_s + \max\set*{C\sqrt{\mathcal{F}(n)}, q l_0^{-\gamma}}} \sum_{l=l_0}^\infty l^{\alpha+2\kappa} n_l .
\end{equation*}
Now, we use the Hölder inequality to interpolate
\begin{equation*}
\sum_{l=l_0}^\infty l^{\alpha+2\kappa} n_l \leq \bra*{\sum_{l=l_0}^\infty l n_l}^{\frac{\alpha + 2\kappa+\gamma-1}{\gamma}} \bra*{\sum_{l=l_0}^\infty l^{1-\gamma} n_l}^{\frac{1-\alpha-2\kappa}{\gamma}} \leq C\; \bra*{\mathcal{F}^{\LSW}_{l_0}(n(t))}^{\frac{1-\alpha-2\kappa}{\gamma}}
\end{equation*}
provided $1-\alpha-\gamma\leq 2\kappa \leq 1-\alpha$.
The estimate~\eqref{e:L1FluxEst} follows from~\eqref{e:L1ActionEst} by noting that the choice $\phi^*(t) = D \mathcal{F}(n(t)) = \bra*{\log \frac{n_l(t)}{\omega_l}}_{l\geq 1}$ satisfies
\[
k^l \hat n^\omega_l(t) \abs*{\nabla_l \phi^*(t)} = \abs*{ a_l n_1(t) n_l(t) - b_{l+1} n_{l+1}(t)}
\]
and $\mathcal{A}_{\mac}(n(t),\phi^*(t)) = \mathcal{D}_{\mac}(n(t))$. \end{proof} The last a priori estimate deals with tightness and how tightness is preserved for curves of finite action. \begin{lemma}[Tightness is preserved for curves of finite action]
A family $N \subset \mathcal{M}$ is called tight provided that
\begin{equation}\label{ass:tight-initial}
\sup_{n\in N} \sum_{l = R}^\infty l n_l \to 0
\qquad \text{as } R\to \infty .
\end{equation}
Suppose the family $N_0 \subset \mathcal{M}$ satisfies the tightness condition~\eqref{ass:tight-initial}. Then, for any $T>0$ and any family of curves $\set{ (n,\phi) \in \mathcal{CE}_T: n(0) \in N_0}$ of uniformly bounded action, the family $\set{ n(t)}$ also satisfies the tightness condition~\eqref{ass:tight-initial} for all $t\in [0,T]$. \end{lemma} \begin{proof}
The proof is similar to Lemma~\ref{lem:LSW:tight}, where the same result is proven for the LSW gradient structure.
Let $1\ll M_1 \ll M_2$ and let $\eta \in C^1(\mathds{R})$ be a cut-off function such that $\eta(l)=0$ for $l\leq \frac{M_1}{2}$ and $l\geq 2M_2$, $\eta(l) = 1$ for $M_1 \leq l \leq M_2$, and such that $\abs*{\eta'(l)}\leq \frac{C}{M_1}$ for $\frac{M_1}{2} \leq l \leq M_1$ and $\abs*{\eta'(l)}\leq \frac{C}{M_2}$ for $M_2\leq l \leq 2M_2$.
Moreover, we define $\mathcal{N}_l := l \eta(l)$ and assume $M_1>2$ such that $\eta(1)=0$.
Then it follows for any curve of finite action $(n,\phi)\in \mathcal{CE}_T$
\begin{align*}
\pderiv{}{t} \mathcal{N} \cdot n(t) &= \mathcal{N} \cdot \partial_t n(t) = \mathcal{N} \cdot \mathcal{K}(n) \phi \\
&=\bra*{\sum_{l=1}^\infty \eta(l+1) \, k^l \, \widehat{n}^\omega_l(t)\; \nabla_l \phi(t) + \sum_{l=1}^\infty k^l \, \widehat{n}^\omega_l(t) \, l \, \nabla_l \eta \; \nabla_l \phi(t)} \\
&\leq C \; \Biggl( \frac{1}{M_1^{\frac{1-\alpha}{2}}} \sum_{l=M_1/2}^\infty l^{\frac{1-\alpha}{2}} k^l \widehat{n}^\omega_l(t) \abs*{\nabla_l \phi(t)} \\
&\qquad + \bra*{\frac{1}{M_1} M_1^{1-\frac{1-\alpha}{2}} + \frac{1}{M_2} M_2^{1-\frac{1-\alpha}{2}}} \sum_{l=M_1/2}^\infty l^{\frac{1-\alpha}{2}} k^l \widehat{n}^\omega_l(t) \abs*{\nabla_l \phi(t)} \Biggr) \\
&\leq C \; \bra*{ \frac{1}{M_1^{\frac{1-\alpha}{2}}} + \frac{1}{M_2^{\frac{1-\alpha}{2}}} } \sum_{l=M_1/2}^\infty l^{\frac{1-\alpha}{2}} k^l \widehat{n}^\omega_l(t) \abs*{\nabla_l \phi(t)} , \end{align*} where $C$ is a constant depending only on the cut-off function $\eta$. Integrating over time and using~\eqref{e:L1ActionEst} leads to \begin{align*}
\sum_{l=M_1}^{M_2} l n_l(t) \leq \sum_{l=M_1/2}^{2M_2} l n_l(0) + C \bra*{\frac{1}{M_1^{\frac{1-\alpha}{2}}} + \frac{1}{M_2^{\frac{1-\alpha}{2}}}} \int_0^t \sqrt{\mathcal{A}(n(s),\phi(s))} \dx{s} . \end{align*} Now, using the fact that $t\mapsto n(t)$ is a curve of finite action and letting $M_2\to \infty$, we obtain \begin{equation*}
\sum_{l=M_1}^{\infty} l n_l(t) \leq \sum_{l=M_1/2}^\infty l n_l(0) + \frac{C t^{\frac{1}{2}}}{M_1^{\frac{1-\alpha}{2}}} , \end{equation*} where the constant $C$ is uniform for the family. This finishes the proof since $N_0$ satisfies the tightness condition~\eqref{ass:tight-initial}. \end{proof}
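For orientation, the computation in the preceding proof rests on the summation-by-parts structure of the Onsager operator: for any observable $\mathcal{N} = (\mathcal{N}_l)_{l\geq 1}$,
\begin{equation*}
\mathcal{N}\cdot \mathcal{K}(n)\phi = \sum_{l=1}^\infty k^l \, \widehat{n}^\omega_l \, \nabla_l \mathcal{N} \; \nabla_l \phi , \qquad \nabla_l \mathcal{N} = \mathcal{N}_{l+1} - \mathcal{N}_l - \mathcal{N}_1 ,
\end{equation*}
so that for $\mathcal{N}_l = l\,\eta(l)$ with $\eta(1)=0$ one has $\nabla_l \mathcal{N} = \eta(l+1) + l\,\nabla_l\eta$, which produces exactly the two sums appearing in the proof above.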
\subsection{Passage to the limit: Proof of Theorem~\ref{thm:Scale:conv:CfA}}\label{S:Limit:sub}
To pass to the limit in the discrete continuity equation, we define, for a fixed covector~$\phi$ and its rescaled version~$w^\varepsilon(\varepsilon l) = z_s \varepsilon^{-\gamma} \nabla_l \phi$ (cf.\ \eqref{def:Scale:vectorfield}), the flux density measure \begin{align}\label{def:Scale:fluxdensity}
\mu^\varepsilon(\dx\lambda) &:= \frac{z_s}{\varepsilon^{1-\alpha+2\gamma}} \sum_{l\geq l_0} \delta_{\varepsilon l}(\dx\lambda) \, k^l \, \widehat{n^\varepsilon}^\omega_l \, \nabla_l \phi \\
&= \frac{1}{\varepsilon^{1-\alpha+\gamma}} \sum_{l\geq l_0} \delta_{\varepsilon l}(\dx\lambda) \, l^\alpha \,\Lambda\bra*{n_1^\varepsilon n_l^\varepsilon , (z_s + q(l+1)^{-\gamma}) n_{l+1}^\varepsilon} \, w^\varepsilon(\lambda) .\notag \end{align} and the dissipation flux density measure \begin{align}\label{def:Scale:fluxdensityhat}
\hat\mu^\varepsilon(\dx\lambda) &:= \frac{1}{\varepsilon^{1-\alpha+\gamma}} \sum_{l\geq l_0} \delta_{\varepsilon l}(\dx\lambda) \bra*{ a_l n_1(t) n_l(t) - b_{l+1} n_{l+1}(t)} . \end{align} Let us note that, with the above definitions, for $l\geq l_0$ and $\lambda =\varepsilon l$ it holds that \begin{equation}\label{e:Scale:continuityEqu}
\dot n^\varepsilon_l(t) - \frac{1}{\varepsilon^{1-\alpha+\gamma}}(\mathcal{K}[n] \phi)_l
= \partial_t \nu^\varepsilon_t(\lambda) + \partial^\varepsilon_{\lambda} \mu^\varepsilon_t(\lambda) = 0 , \end{equation} where \begin{equation*}
\partial_\lambda^\varepsilon \mu_t^\varepsilon(\lambda) := \frac{\mu_t^\varepsilon(\lambda+\varepsilon) - \mu_t^\varepsilon(\lambda)}{\varepsilon} . \end{equation*} Let us summarize the a priori estimates found in Section~\ref{s:AprioriBD} and rewrite them in rescaled variables. We write $\mathcal{F}^\varepsilon_{\mac}(\Pi_{\mac}^\varepsilon n) = \varepsilon^{-\gamma} \mathcal{F}_{\mac}(n)$, and similarly for $\mathcal{A}^\varepsilon_{\mac}$ and $\mathcal{D}^\varepsilon_{\mac}$. \begin{proposition}[Rescaled a priori estimates]
With $x$ from~\eqref{ass:cutoff}, the following holds:
\begin{enumerate}[ i) ]
\item The rescaled free energy satisfies
\begin{equation}\label{e:Scale:energybound}
C \geq \mathcal{F}^\varepsilon(n^\varepsilon) \geq \mathcal{F}^\varepsilon_{\mac}(\nu^\varepsilon) = E(\nu^\varepsilon) \bra*{1+ O(\varepsilon^{x\sigma})}
\end{equation}
\item The total excess mass satisfies
\begin{equation}\label{e:Scale:MassExcess}
\abs*{\int \lambda \nu^\varepsilon(\dx{\lambda}) - (\varrho_0 - \varrho_s)} \leq C \varepsilon^{\gamma\bra*{\frac{1}{2}-x}} \sqrt{\mathcal{F}^\varepsilon_{\mac}(\nu^\varepsilon)} .
\end{equation}
\item Let $[0,T] \ni t \mapsto \nu_t^\varepsilon \in \mathcal{M}^\varepsilon$ be a rescaled curve of finite action and let $\eta\in L^2((0,T))$. Then for any $\kappa \in \pra*{\frac{1-\alpha-\gamma}{2},\frac{1-\alpha}{2}}$
\begin{align}
\int_0^T \eta(t) \int \lambda^{\kappa} \abs*{\mu^\varepsilon_t(\dx\lambda)} \dx{t} &\leq C \;\bra*{\sup_{t\in [0,T]} \mathcal{F}^{\varepsilon}_{\mac}(\nu^\varepsilon_t)}^{\frac{1-\alpha-2\kappa}{2\gamma}} \int_0^T \abs*{\eta(t)} \sqrt{\mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon_t,w^\varepsilon_t)} \dx{t} , \label{e:Scale:ActionMoment}\\
\int_0^T \eta(t) \int \lambda^{\kappa} \abs*{\hat\mu^\varepsilon_t(\dx\lambda)} \dx{t}
&\leq C \;\bra*{\sup_{t\in [0,T]} \mathcal{F}^{\varepsilon}_{\mac}(\nu^\varepsilon_t)}^{\frac{1-\alpha-2\kappa}{2\gamma}} \int_0^T \abs*{\eta(t)} \sqrt{\mathcal{D}^\varepsilon_{\mac}(\nu^\varepsilon_t)} \dx{t} . \label{e:Scale:FluxMoment}
\end{align}
\item If $\set{\nu_0^\varepsilon}_{\varepsilon>0}$ satisfies the tightness condition~\eqref{ass:tightness} and $[0,T]\ni t \mapsto \nu_t^\varepsilon \in \mathcal{M}^\varepsilon$ are rescaled curves of finite action, then for all $t\in [0,T]$ also $\set{\nu_t^\varepsilon}_{\varepsilon>0}$ satisfies the tightness condition~\eqref{ass:tightness}.
\end{enumerate} \end{proposition} The above results enable us to conclude the $\liminf$ estimates and to prove Theorem~\ref{thm:Scale:conv:CfA}. \begin{proof}[Proof of Theorem~\ref{thm:Scale:conv:CfA}]
\emph{Step 1: Convergence of $\nu^\varepsilon$.}
For $\zeta \in C_c^1(\mathds{R}_+)$ and $0\leq t_1 < t_2 \leq T$, we calculate using the discrete continuity equation in the form~\eqref{e:Scale:continuityEqu}
\begin{align*}
\abs*{\int \zeta \dx\nu_{t_2}^\varepsilon - \int \zeta \dx\nu_{t_1}^\varepsilon} &= \abs*{ \int_{t_1}^{t_2} \int \partial_\lambda^\varepsilon\zeta(\lambda) \mu^\varepsilon_t(\dx\lambda) \dx{t}} \\
&\leq \sup_{\lambda\in \mathds{R}_+} \frac{\abs*{\partial_\lambda^\varepsilon \zeta(\lambda)}}{\lambda^{\frac{1-\alpha}{2}}} \int_{t_1}^{t_2} \int \lambda^{\frac{1-\alpha}{2}} \abs*{\mu^\varepsilon_t(\dx\lambda)} \dx{t} \\
&\stackrel{\mathclap{\eqref{e:Scale:ActionMoment}}}{\leq} C \sup_{\lambda\in \mathds{R}_+} \frac{\abs*{\partial_\lambda^\varepsilon \zeta(\lambda)}}{\lambda^{\frac{1-\alpha}{2}}} \int_{t_1}^{t_2} \sqrt{\mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon_t, w^\varepsilon_t)} \dx{t} \\
&\leq C \sup_{\lambda\in \mathds{R}_+} \frac{\abs*{\zeta'(\lambda)}}{\lambda^{\frac{1-\alpha}{2}}} \sqrt{\abs*{t_1 - t_2}} .
\end{align*}
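Here, the last inequality uses, spelled out, the Cauchy--Schwarz inequality in time together with the uniform action bound along the rescaled curves:
\begin{equation*}
\int_{t_1}^{t_2} \sqrt{\mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon_t, w^\varepsilon_t)} \dx{t} \leq \abs*{t_2 - t_1}^{\frac{1}{2}} \bra*{\int_0^T \mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon_t, w^\varepsilon_t) \dx{t}}^{\frac{1}{2}} \leq C \abs*{t_2 - t_1}^{\frac{1}{2}} .
\end{equation*}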
This estimate together with the energy bound~\eqref{e:Scale:energybound} implies, via the Arzelà--Ascoli theorem, the weak$^*$ convergence (along a subsequence) towards a weakly$^*$ continuous map $t\mapsto \nu_t$.
Moreover, the a priori bounds~\eqref{e:Scale:energybound}, \eqref{e:Scale:MassExcess} and the tightness condition~\eqref{ass:tightness} imply that $\int \zeta \dx\nu^\varepsilon \to \int \zeta \dx\nu$ holds for all $\zeta \in C^0(\mathds{R}_+)$ satisfying
\begin{equation*}
\limsup_{\lambda\to\infty} \frac{\abs*{\zeta(\lambda)}}{\lambda} < \infty \qquad\text{and}\qquad \lim_{\lambda\to 0} \frac{\abs*{\zeta(\lambda)}}{\lambda^{1-\gamma}} = 0 ,
\end{equation*}
which implies that the excess mass is preserved
\begin{equation*}
\int \lambda \dx\nu_t = \varrho_0 - \varrho_s \quad \Rightarrow \quad \nu_t \in M , \qquad \text{for all } t\in[0,T].
\end{equation*}
Moreover, the bounds~\eqref{e:Scale:energybound} and \eqref{e:Scale:MassExcess} also imply, by weak lower semi-continuity, the estimate~\eqref{e:Scale:conv:Fliminf} and in particular that $\sup_{t\in [0,T]} E(\nu_t) < \infty$.
\emph{Step 2: Convergence of $\mu^\varepsilon$.} The a priori estimate~\eqref{e:Scale:ActionMoment} implies the existence of a measure $\mu \in C_c^0([0,T] \times \mathds{R}_+)^*$ such that up to subsequences
\begin{equation}\label{e:Scale:conv:fluxdensity:p1}
\int \int \zeta(t,\lambda) \dx\mu_t^\varepsilon \dx{t} \to \int \int \zeta(t,\lambda) \dx\mu \qquad \text{for all } \zeta \in C_c^0([0,T] \times \mathds{R}_+) .
\end{equation}
Now, we show the limiting measure is of the form $\mu(\dx t, \dx\lambda) = \lambda^\alpha w_t(\lambda) \nu_t(\dx\lambda) \dx{t}$ for some vector field $w_t$ with finite action.
To this end, we recall the definitions of $\mu^\varepsilon$ in~\eqref{def:Scale:fluxdensity} and of $\mathcal{A}^\varepsilon_{\mac}$ in~\eqref{def:Scale:action} to estimate
\begin{align}\label{e:Scale:conv:fluxdensity:p2}
\int\int \zeta(t,\lambda) \lambda^{\frac{1-\alpha}{2}} \mu_t^\varepsilon(\dx\lambda) \dx{t} &\leq \bra*{\int_0^T \mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon_t,w^\varepsilon_t) \dx{t}}^\frac{1}{2} \times \\
&\qquad\bra*{\int_0^T \frac{1}{z_s} \sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l^{1-\alpha} k^l \, \widehat{n^\varepsilon}^\omega_l(t) \dx{t}}^{\frac{1}{2}} . \notag
\end{align}
The second term on the right-hand side can be bounded by using the one-homogeneity and concavity of $(a,b)\mapsto \Lambda(a,b)$:
\begin{align*}
\MoveEqLeft{\frac{1}{z_s}\sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l^{1-\alpha} k^l \, \widehat{n^\varepsilon}^\omega_l(t) } \\
&\leq \frac{1}{z_s}\sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l \, \Lambda\bra*{n_1^\varepsilon(t) n_l^\varepsilon(t) , (z_s + q (l+1)^{-\gamma}) n_{l+1}^\varepsilon(t)} \\
&\leq \frac{1}{z_s}\Lambda\bra*{ n_1^\varepsilon(t) \sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l \, n_l^\varepsilon(t) , \bra*{z_s + q l_0^{-\gamma}}\sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l \, n_{l+1}^\varepsilon(t) } \\
&\leq \frac{z_s + o(1)}{z_s} \sum_{l\geq l_0} \zeta^2(t,\varepsilon l) \, l \, n_l^\varepsilon(t),
\end{align*}
where we used in the last estimate that $l_0 =\varepsilon^{-x}$, $|n_1^\varepsilon - z_s| \leq C \sqrt{\mathcal{F}(n)} \leq C \varepsilon^\frac{\gamma}{2}$ by~\eqref{e:Apriori1} and the fact that $\zeta$ is uniformly continuous.
Since $t\mapsto n^\varepsilon(t)$ is a curve of finite action and by the convergence of the total mass, it follows that the right hand side of~\eqref{e:Scale:conv:fluxdensity:p2} is finite.
Hence, we can pass to the limit $\varepsilon\to 0$ in~\eqref{e:Scale:conv:fluxdensity:p2} by the same argument as in~\eqref{e:Scale:conv:fluxdensity:p1}.
Along a subsequence attaining the $\liminf$, i.e.\ such that
\begin{equation*}
\int_0^T \mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon, w^\varepsilon) \dx{t} \to A^* := \liminf_{\varepsilon\to 0} \int_0^T \mathcal{A}^\varepsilon_{\mac}(\nu^\varepsilon, w^\varepsilon) \dx{t} ,
\end{equation*}
we obtain the estimate
\begin{equation*}
\int\int \zeta(t,\lambda) \lambda^\frac{1-\alpha}{2} \mu_t(\dx\lambda) \dx{t} \leq \bra*{ A^* \int\int \zeta^2(t,\lambda) \lambda \nu_t(\dx\lambda)\dx{t}}^\frac{1}{2} ,
\end{equation*}
with $\mu_t(\dx\lambda)$ denoting the disintegration of $\mu$ in $t$.
Hence, we can conclude as in the derivation of~\eqref{e:LSW:compact:Action:p1}: the Riesz representation theorem yields $v\in L^2(\lambda \dx\nu_t \dx{t})$, which shows the lower semi-continuity of the action~\eqref{e:Scale:conv:Aliminf}.
\emph{Step 3: Convergence of the dissipation~$\mathcal{D}^\varepsilon$.} We observe that $\mathcal{D}^\varepsilon_{\mac}(\nu_t^\varepsilon) = \mathcal{A}^\varepsilon_{\mac}(\nu_t^\varepsilon,{\tilde w}^\varepsilon)$, where ${\tilde w}^\varepsilon$ is the special vector field given by $-\nabla_l D\mathcal{F}^\varepsilon(\nu_t^\varepsilon)$, i.e.\ for all~$l$
\[
\varepsilon^\gamma {\tilde w}^\varepsilon_t(\varepsilon l) = \log \frac{n_1^\varepsilon n_l^\varepsilon}{\omega_1 \omega_l} - \log \frac{n_{l+1}^\varepsilon}{\omega_{l+1}} .
\]
Therefore, we can apply the same arguments as in Step 2, now to the dissipation flux density $\hat\mu^\varepsilon$ defined in~\eqref{def:Scale:fluxdensityhat}, and use the a priori estimate~\eqref{e:Scale:FluxMoment} to deduce the $\liminf$ estimate
\[
\liminf_{\varepsilon\to 0} \int_0^T \mathcal{D}^\varepsilon(\nu_t^\varepsilon) \dx{t} \geq \int_0^T A(\nu_t, \tilde w_t) \dx{t} = \int_0^T \int \lambda^\alpha \abs*{\tilde w_t}^2 \nu_t(\dx\lambda) \dx{t},
\]
for some $\tilde w \in C_c^0([0,T]\times \mathds{R}_+)^*$.
It is left to show that $h^\varepsilon(t) \rightharpoonup h(t)$ in $L^2((0,T))$ and that $\tilde w_t$ is of the form $h(t) -q/\lambda^\gamma$ with $h\in L^2([0,T])$; this statement follows exactly along the lines of \cite[Lemma 2.6]{Niethammer2003}.
The final result~\eqref{e:Scale:conv:Dliminf} now follows from the definition of $D(\nu_t)$ as the infimum over all such $h\in L^2([0,T])$ from Lemma~\ref{lem:LSW:Dissipation}.
\emph{Step 4: Continuity equation holds.} Finally, choosing a subsequence such that both convergences~\eqref{e:conv:nu} and~\eqref{e:conv:mu} hold for a test function $\zeta \in C_c^\infty([0,T]\times \mathds{R})$, we can pass to the limit in the weak form of the discrete continuity equation~\eqref{e:Scale:continuityEqu}
\begin{align*}
\int_0^T \int \partial_t \zeta(t,\lambda) \nu_t^\varepsilon(\dx\lambda)\dx{t} &+ \int\int \partial_\lambda^\varepsilon \zeta(t,\lambda) \mu_t^\varepsilon(\dx\lambda) \dx{t} = 0 \\
\downarrow \ \varepsilon\to 0 \qquad & \qquad\qquad\qquad \downarrow \varepsilon\to 0 \\
\int_0^T \int \partial_t \zeta(t,\lambda) \, \nu_t(\dx\lambda)\dx{t} &+ \int\int \partial_\lambda \zeta(t,\lambda) \, \mu_t(\dx\lambda) \dx{t} = 0 ,
\end{align*}
which shows~$(\nu,w)\in \CE_T$. \end{proof}
\subsection{Quasistationary expansion: Proof of Theorem~\ref{thm:QuasiSmall}}\label{S:quasi}
The proofs of Proposition~\ref{prop:stability} and Theorem~\ref{thm:QuasiSmall} consist of several steps, formulated in the following lemmas. In the proofs of this section, $C$ denotes a generic constant, independent of $\varepsilon$ and depending only on the parameters in the rates from Assumption~\ref{ass:BD:rates}. \begin{lemma}\label{lem:LowerMoment}
Assume that $\mathcal{F}_{\mac}^\varepsilon(\nu^\varepsilon) \leq C$, $l_0$ satisfies~\eqref{ass:cutoff} and $\nu^\varepsilon$ satisfies the tightness condition~\eqref{ass:tightness}.
Then for any $\kappa\in [0,1]$, there exists $c>0$ such that
\begin{equation}\label{e:monexp:lowerMoment}
\int \lambda^{\kappa} \dx{\nu^\varepsilon} \geq c > 0 \qquad\text{uniformly in } \varepsilon >0.
\end{equation} \end{lemma} \begin{proof}
The assumptions of the Lemma ensure the conservation of the excess mass~\eqref{e:Scale:MassExcess}.
Together with the tightness assumption, we have for any $\kappa \leq 1 $
\begin{align*}
\int \lambda^{\kappa} \; \nu^\varepsilon_t(\dx\lambda) &\geq \int_0^M \lambda^{\kappa} \; \nu^\varepsilon_t(\dx\lambda) \\
&\geq \frac{1}{M^{1-\kappa}} \int_0^M \lambda \; \nu_t^\varepsilon(\dx{\lambda}) \\
&\geq \frac{1}{M^{1-\kappa}} \bra*{ \rho_0 - \rho_s - O_\varepsilon\bra*{\varepsilon^{\gamma\bra*{1/2 -x}}} - o_M(1) } .
\end{align*}
Hence, we can choose $M$ large enough but finite and $\varepsilon$ small enough such that for some $c>0$ the estimate~\eqref{e:monexp:lowerMoment} holds. \end{proof} \begin{lemma}
For any $n\in \mathcal{M}$, it holds that
\begin{equation}\label{e:DissBound0}
\bra*{ u - h } \log\bra*{\frac{1+u}{1+h}} \leq \frac{\mathcal{D}_{\mac}(n)}{A(z_s)} ,
\end{equation}
where $u:=u(z_s)$,
\begin{equation*}
h := \frac{n_1-z_s}{z_s} \qquad\text{and}\qquad u(z) := \frac{B - A(z)}{A(z)} ,
\end{equation*}
and
\begin{equation}\label{e:def:barA:barB}
A(z) := \sum_{l\geq l_0} a_l z n_l \qquad\text{and}\qquad B := \sum_{l\geq l_0} b_{l+1} n_{l+1} .
\end{equation}
Moreover, if $\mathcal{F}(n)\leq C \varepsilon^{\gamma}$, $l_0$ satisfies~\eqref{ass:cutoff} and $\nu^\varepsilon$ satisfies the tightness condition~\eqref{ass:tightness}, then for $\varepsilon$ small enough the estimate~\eqref{e:DissBound} from Proposition~\ref{prop:stability} holds with some $C>0$ uniform in $\varepsilon$. \end{lemma} \begin{proof}
Throughout the proof, $n$ is fixed such that $\mathcal{F}(n)\leq C \varepsilon^\gamma$.
Then, we introduce two measures $\alpha$ and $\beta$ on $\set*{l_0,l_0+1,\dots}$
\[
\forall l\geq l_0:\qquad \alpha_l(z) := a_l z n_l \qquad\text{and}\qquad \beta_l := b_{l+1} n_{l+1}
\]
with partition sums $A(z)$ and $B$~\eqref{e:def:barA:barB}, respectively.
We introduce $\mathcal{D}_{\mac,z}(n)$, the dissipation of the large clusters at constant monomer density $z$:
\begin{align*}
\mathcal{D}_{\mac,z}(n) := \sum_{l\geq l_0} \bra*{ a_l z n_l - b_{l+1} n_{l+1}} \log\frac{a_l z n_l}{b_{l+1} n_{l+1}} \geq 0.
\end{align*}
Note that by this definition $\mathcal{D}_{\mac}(n) = \mathcal{D}_{\mac,n_1}(n)$.
From the definition~\eqref{e:def:barA:barB}, it follows that $A(n_1)=(1+h)A(z_s)$ and the identity
\begin{equation}\label{e:MacDiss:uhId}
u(n_1) = \frac{1}{1+h}\bra*{u(z_s) - h }.
\end{equation}
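For completeness, the identity~\eqref{e:MacDiss:uhId} follows from the one-line computation, using $B = \bra*{1+u(z_s)} A(z_s)$ and $A(n_1) = (1+h) A(z_s)$,
\begin{equation*}
u(n_1) = \frac{B - A(n_1)}{A(n_1)} = \frac{\bra*{1+u(z_s)} A(z_s) - (1+h) A(z_s)}{(1+h) A(z_s)} = \frac{u(z_s) - h}{1+h} .
\end{equation*}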
Now, we rewrite $\mathcal{D}_{\mac,z}(n)$ and apply Jensen's inequality to the one-homogeneous convex function $a \mapsto a \log\bra*{1+a}$:
\begin{align*}
\mathcal{D}_{\mac,z}(n) &= \sum_{l\geq l_0} \alpha_l(z) \frac{\beta_l-\alpha_l(z)}{\alpha_l(z)} \log\bra*{1+\frac{\beta_l-\alpha_l(z)}{\alpha_l(z)}} \\
&\geq \bra*{B- A(z)} \log\frac{B}{A(z)} = A(z)\, u(z) \log\bra*{1+u(z)} .
\end{align*}
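In more detail, Jensen's inequality is applied here with the probability weights $\alpha_l(z)/A(z)$ and the convex function $g(a) := a \log\bra*{1+a}$:
\begin{equation*}
\sum_{l\geq l_0} \frac{\alpha_l(z)}{A(z)}\, g\bra*{\frac{\beta_l - \alpha_l(z)}{\alpha_l(z)}} \geq g\bra*{\sum_{l\geq l_0} \frac{\beta_l - \alpha_l(z)}{A(z)}} = g\bra*{u(z)} = u(z) \log\bra*{1+u(z)} ,
\end{equation*}
and multiplying by $A(z)$ gives the stated lower bound.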
Hence, setting $z=n_1$ and using~\eqref{e:MacDiss:uhId}, we obtain the identity
\begin{align*}
A(n_1)\, u(n_1) \log\bra*{1+u(n_1)} &= A(z_s) \bra*{u - h} \log\bra*{1+\frac{u-h}{1+h}},
\end{align*}
from where we conclude~\eqref{e:DissBound0}.
Using the explicit expression of the rates~\eqref{ass:BD:rates}, it follows that
\begin{align*}
2 A(z_s) - B &= \sum_{l\geq l_0} z_s l^{\alpha}\bra*{1- \frac{q}{z_s l^{\gamma}}} n_l + z_s l_0^\alpha n_{l_0} + q l_0^{\alpha-\gamma} n_{l_0} \\
&\geq \bra*{1 - \frac{q}{z_s l_0^{\gamma}}} A(z_s) \geq A(z_s) \bra*{1- O(\varepsilon^{x\gamma})} ,
\end{align*}
by the definition of $l_0$~\eqref{ass:cutoff}.
Hence, we have $ B \leq A(z_s) \bra*{1+ O(\varepsilon^{x\gamma})}$ and in particular with~\eqref{e:MacDiss:uhId}
\begin{equation}\label{e:MacDiss:upper_u}
u(n_1) \leq \frac{u(z_s)}{1-\abs*{h}} + \frac{\abs*{h}}{1-\abs*{h}} \leq O(\varepsilon^{x\gamma}) + O\bra*{\varepsilon^{\frac{\gamma}{2}}} = O(\varepsilon^{x\gamma}),
\end{equation}
where we used that $\mathcal{F}(n)\leq C \varepsilon^\gamma$ implies $\abs*{h}\leq C \varepsilon^\frac{\gamma}{2}$ by the estimate~\eqref{e:Apriori1}.
The estimate~\eqref{e:MacDiss:upper_u} allows us to linearize the bound~\eqref{e:DissBound0} as follows:
\begin{equation*}
\bra*{u - h}\log\bra*{\frac{1+u}{1+h}} \geq \frac{(u-h)^2}{\max\set*{1+h,1+u}} \geq \frac{(u-h)^2}{1+ O(\varepsilon^{x\gamma})} .
\end{equation*}
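The first inequality is the elementary bound $(a-b)\log\frac{a}{b} \geq \frac{(a-b)^2}{\max\set*{a,b}}$ applied with $a = 1+u$ and $b = 1+h$; for $a \geq b > 0$ it follows from
\begin{equation*}
\log\frac{a}{b} \geq 1 - \frac{b}{a} = \frac{a-b}{a} = \frac{a-b}{\max\set*{a,b}} ,
\end{equation*}
and the case $a < b$ follows by exchanging the roles of $a$ and $b$, since both sides are symmetric.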
Finally, to deduce the estimate~\eqref{e:DissBound}, it is enough to rewrite it in rescaled variables and to use the estimate~\eqref{e:monexp:lowerMoment} from Lemma~\ref{lem:LowerMoment}:
\begin{equation*}
A(z_s) = z_s \varepsilon^{1-\alpha} \int \lambda^{\alpha} \nu^\varepsilon(\dx\lambda) \geq c \varepsilon^{1-\alpha} > 0 . \qedhere
\end{equation*} \end{proof} The time-scale separation between the dynamics of the small clusters and that of the large clusters is characterized by the following logarithmic Sobolev type inequality. \begin{proposition}[Microscopic energy-dissipation estimate]
Let $\omega_l(z) := z^l Q_l$.
Then for all $n\in \mathds{R}_+^\mathds{N}$ with $\mathcal{F}_{\mic}(n)\leq C\varepsilon^{\gamma}$, there exists $C_{\EED}$ independent of $\varepsilon$ such that
\begin{equation}\label{e:BD:MLSI}
\mathcal{H}_{\mic}(n \mid \omega(n_1)) \leq C_{\EED} \varepsilon^{-x\bra*{1-\alpha+\gamma}} \mathcal{D}_{\mic}(n) ,
\end{equation}
where $\mathcal{H}_{\mic}$ is the microscopic part of the relative entropy between $n$ and $\omega(n_1)$ defined by
\begin{equation*}
\mathcal{H}_{\mic}(n \mid \omega(z)) := \sum_{l=1}^{l_0-1} \omega_l(z) \psi\bra*{\frac{n_l}{\omega_l(z)}} \qquad\text{with}\qquad \psi(a) = a \log a - a +1 .
\end{equation*} \end{proposition} \begin{proof}
We note that the function $(a,b) \mapsto \varphi(a,b):=(a-b)\bra*{\log a - \log b}$ appearing in the definition of the dissipation is one-homogeneous.
In addition, the following lower bound holds
\[
(a-b)\bra*{\log a - \log b} \geq 4 \bra*{\sqrt{a} - \sqrt{b}}^2 .
\]
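This bound is classical; a short derivation substitutes $s = \sqrt{a}$, $t = \sqrt{b}$ and uses $\log x \geq 2\frac{x-1}{x+1}$ for $x \geq 1$:
\begin{equation*}
(a-b)\bra*{\log a - \log b} = 2 \bra*{s^2 - t^2} \log\frac{s}{t} \geq 2 \bra*{s^2 - t^2} \frac{2(s-t)}{s+t} = 4 \bra*{s-t}^2 \qquad\text{for } s \geq t > 0 ,
\end{equation*}
and the case $s < t$ follows by symmetry.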
Moreover, we recall that $\omega(z)$ satisfies the detailed balance condition $a_l \omega_1(z) \omega_l(z) = b_{l+1} \omega_{l+1}(z)$.
By choosing $z=n_1$, the dissipation can be rewritten and bounded from below by
\begin{align*}
\mathcal{D}(n) &= \sum_{l\geq 1} a_l n_1 \omega_l(n_1) \ \varphi\bra*{\frac{n_l}{\omega_l(n_1)}, \frac{n_{l+1}}{\omega_{l+1}(n_1)}} \\
&\geq 4 \sum_{l\geq 1} a_l n_1 \omega_l(n_1) \bra*{\sqrt{\frac{n_l}{\omega_l(n_1)}} - \sqrt{ \frac{n_{l+1}}{\omega_{l+1}(n_1)}}}^2 =: \overline{\mathcal{D}}(n).
\end{align*}
Hence, instead of showing the estimate~\eqref{e:BD:MLSI} directly, it is sufficient to prove
\[
\mathcal{H}_{\mic}\bra*{n\mid \omega(n_1)} \leq C_{\EED} \overline{\mathcal{D}}(n).
\]
This inequality was investigated in~\cite{Canizo2015}.
To apply the result there, we introduce the measures
\begin{equation*}
l\in \set*{1,\dots,l_0-1} : \quad \mu_l(z) := \frac{\omega_l(z)}{\sum_{l=1}^{l_0-1} \omega_l(z)} \quad\text{and}\quad \nu_l(z) := \frac{a_l \omega_l(z)}{\sum_{l=1}^{l_0-1} \omega_l(z)} .
\end{equation*}
Note that $\mu$ is a probability measure, whereas $\nu$ is not necessarily one.
Suppose for the moment that the following mixed logarithmic Sobolev inequality holds:
\begin{equation}\label{e:BD:LSI}
\Ent_{\mu}(f^2) := \sum_{l=1}^{l_0-1} \mu_l f_l^2 \log\frac{f_l^2}{\sum_{l=1}^{l_0-1} f_l^2 \mu_l} \leq C_{\LSI} \sum_{l=1}^{l_0-1} \nu_l \bra*{ f_l - f_{l+1}}^2 .
\end{equation}
Then \cite[Proposition 3.2]{Canizo2015} yields~\eqref{e:BD:MLSI}, where, due to the different normalization of $\nu$, the constant simplifies to
\begin{equation}\label{e:BD:MLSI:p2}
C_{\EED}\leq \frac{C_{\LSI}}{n_1^2} \bra*{ n_1^2 + 2 \,\bra*{\sum_{l=1}^{l_0-1} n_l}\,\bra*{\sum_{l=1}^{l_0-1} \omega_l(n_1)}}.
\end{equation}
To prove the mixed logarithmic Sobolev inequality~\eqref{e:BD:LSI}, we use \cite[Corollary 2.4 and Remark 2.5]{Canizo2015}, from which we obtain the bound
\begin{equation}\label{e:BD:MLSI:p1}
C_{\LSI} \leq 480 \sup_{1< l < l_0} W_l(n_1) \log\bra*{\frac{W_1(n_1)}{W_l(n_1)}} V_l(n_1),
\end{equation}
where
\[
W_l(z) = \sum_{j=l}^{l_0-1} \omega_j(z) \quad\text{and}\quad V_l(z) = \sum_{j=1}^{l-1} \frac{1}{a_j \omega_j(z)} .
\]
We will establish the following estimates for $\abs*{z-z_s}\leq C \varepsilon^{\frac{\gamma}{2}}$ and some $C>1$
\begin{align}
\frac{1}{C} \omega_l(z) \leq W_l(z) &\leq C l^\gamma \omega_l(z) \label{e:BD:MLSI1} \\
V_l(z) &\leq \frac{C l^{\gamma-\alpha}}{\omega_l(z)} \label{e:BD:MLSI2}
\end{align}
We postpone the proof of the estimates and first show the final result.
By a combination of~\eqref{e:BD:MLSI1} and~\eqref{e:BD:MLSI2} with~\eqref{e:BD:MLSI:p1}, we obtain the estimate
\begin{align*}
C_{\LSI}\leq C \sup_{1<l<l_0} l^{2\gamma-\alpha} \log\bra*{\frac{C}{\omega_l(z)}}.
\end{align*}
From the expansion~\eqref{e:BD:assymptotic_Q}, it follows that
\begin{equation*}
\frac{1}{\omega_l(z)} \leq C l^\alpha \exp\bra*{ \frac{q(1-\gamma)}{z_s} l^{1-\gamma} - l \log\frac{z}{z_s} } \leq C \exp\bra*{C l^{1-\gamma}},
\end{equation*}
where we used that $l_0\abs*{\log \frac{z}{z_s}}$ is uniformly bounded, because of $\abs*{z-z_s}\leq C\varepsilon^\frac{\gamma}{2}$.
We obtain the upper bound $C_{\LSI} \leq C \sup_{1 < l < l_0} l^{1-\alpha+\gamma} \leq C l_0^{1-\alpha+\gamma}$. A combination of this bound with~\eqref{e:BD:MLSI:p2} leads to the bound
\begin{equation*}
C_{\EED} \leq C \varepsilon^{-x(1-\alpha+\gamma)}\ \frac{n_1^2 + W_1(n_1) \sum_{l=1}^{l_0-1} n_l}{n_1^2}
\end{equation*}
The conclusion~\eqref{e:BD:MLSI} now follows from~\eqref{e:BD:MLSI1}, $\abs*{n_1-z_s}\leq C\varepsilon^{\frac{\gamma}{2}}$ and the bound $\sum_{l=1}^{l_0-1} n_l \leq \sum_{l\geq 1} l n_l = \varrho$. To prove the estimates~\eqref{e:BD:MLSI1} and~\eqref{e:BD:MLSI2}, we first observe that, by the assumption $\abs*{z-z_s}\leq C \varepsilon^{\frac{\gamma}{2}}$, we have the comparison
\[
\frac{\omega_l(z)}{\omega_l(z_s)} = \bra*{\frac{z}{z_s}}^l \leq \bra*{1 + C \varepsilon^{\frac{\gamma}{2}}}^l \leq \exp\bra*{ C \varepsilon^{\frac{\gamma}{2}} l_0} \leq C
\]
by the choice of $l_0$~\eqref{ass:cutoff}.
In a completely analogous way, we get $\frac{\omega_l(z)}{\omega_l(z_s)} \geq \frac{1}{C}$.
Hence, it is enough to show~\eqref{e:BD:MLSI1} and~\eqref{e:BD:MLSI2} for $z=z_s$.
To this end, we use the expansion~\eqref{e:BD:assymptotic_Q} from Lemma~\ref{lem:BD:assymptotic_Q} in the following form: for some constant $C>1$ and any $l\geq 1$, it holds that
\[
\frac{1}{C l^{\alpha} z_s^{l-1}} \exp\bra*{ - \frac{q}{z_s} (1-\gamma) l^{1-\gamma}} \leq Q_l \leq \frac{C}{l^{\alpha} z_s^{l-1}} \exp\bra*{ - \frac{q}{z_s} (1-\gamma) l^{1-\gamma}}.
\]
The estimate~\eqref{e:BD:MLSI1} with $z=z_s$ is now proven by
\begin{align*}
W_l(z) &\leq \frac{C}{z_s^2} \sum_{j=l}^{l_0-1} \frac{1}{j^\alpha} \exp\bra*{- \frac{q}{z_s} (1-\gamma) j^{1-\gamma}} \leq \frac{C}{l^{\alpha}} \sum_{j=l}^{l_0-1} \exp\bra*{-\frac{q}{z_s} (1-\gamma) j^{1-\gamma}} \\
&\leq \frac{C}{l^\alpha} \int_{l}^{l_0-1} \exp\bra*{-\frac{q}{z_s} (1-\gamma) v^{1-\gamma}} \dx{v} \leq C l^{\gamma-\alpha} \int_{l^{1-\gamma}}^\infty \exp\bra*{- \frac{q}{z_s} (1-\gamma) v} \dx{v}\\
&\leq C l^{\gamma} \omega_l .
\end{align*}
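The passage from the sum to the final bound uses the substitution $u = v^{1-\gamma}$, which can be spelled out as follows: with $c := \frac{q}{z_s}(1-\gamma)$ and $p := \frac{\gamma}{1-\gamma}$,
\begin{equation*}
\int_{l}^{\infty} e^{-c v^{1-\gamma}} \dx{v} = \frac{1}{1-\gamma} \int_{l^{1-\gamma}}^{\infty} u^{p} e^{-c u} \dx{u} \leq C l^{\gamma} e^{-c\, l^{1-\gamma}} ,
\end{equation*}
since $\int_x^\infty u^p e^{-cu} \dx{u} \leq C x^p e^{-cx}$ for $x \geq 1$ and $\bra*{l^{1-\gamma}}^{p} = l^{\gamma}$.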
The estimate~\eqref{e:BD:MLSI2} with $z=z_s$ follows similarly
\begin{align*}
V_{l}(z) &\leq \frac{C}{z_s^2} \sum_{j=1}^{l-1} \exp\bra*{\frac{q}{z_s} (1-\gamma) j^{1-\gamma}}
\leq \frac{C}{z_s^2} \int_{1}^l \exp\bra*{\frac{q}{z_s} (1-\gamma) v^{1-\gamma}} \dx{v} \\
&\leq C \frac{(1-\gamma)}{z_s^2} l^{\gamma} \int_0^{l^{1-\gamma}} \exp\bra*{\frac{q}{z_s} (1-\gamma) v} \dx{v}
\leq C \frac{l^{\gamma-\alpha}}{\omega_l} . \qedhere
\end{align*} \end{proof} The estimates~\eqref{e:monexp:q3} and~\eqref{e:monexp:micF} from Theorem~\ref{thm:QuasiSmall} are a consequence of~\eqref{e:BD:MLSI}. \begin{proof}[Proof of Theorem~\ref{thm:QuasiSmall}]
The estimate~\eqref{e:monexp:q3} follows by rescaling $\mathcal{D}_{\mic}$ and integrating the estimate~\eqref{e:BD:MLSI} along a curve of finite action.
For the statement~\eqref{e:monexp:micF}, we first observe that $\mathcal{F}_{\mic}(n) = \mathcal{H}_{\mic}(n\mid \omega)$ and obtain, writing $z=z_s e^{h}$,
\begin{align*}
\mathcal{H}_{\mic}(n \mid \omega(z) ) - \mathcal{F}_{\mic}(n) &= - \sum_{l=1}^{l_0-1} l n_l \log\frac{z}{z_s} - \sum_{l=1}^{l_0-1} \omega_l \bra*{1-\bra*{\frac{z}{z_s}}^l} \\
&= h\; \bra*{\varrho_s - \sum_{l=1}^{l_0-1} l n_l + \sum_{l=1}^{l_0-1} l \omega_l \frac{e^{l h}-1}{lh} - \varrho_s }
\end{align*}
The first difference in the bracket can be bounded in terms of~\eqref{e:Apriori3} from Lemma~\ref{lem:Apriori}.
The second difference can be estimated explicitly as follows:
\begin{align*}
\abs*{\varrho_s - \sum_{l=1}^{l_0-1} l \omega_l \frac{e^{l h}-1}{lh}} &\leq \sum_{l\geq l_0 } l \omega_l + \sum_{l=1}^{l_0-1} l\omega_l \frac{e^{l h}-1-lh}{lh} \\
&\leq C l_0^\gamma \omega_{l_0} + C h \sum_{l=1}^{l_0-1} l^2 \omega_l \leq C \bra*{l_0^\gamma \omega_{l_0} + h },
\end{align*}
where we used the upper bound in~\eqref{e:BD:MLSI1}, which by its proof also holds with $l_0 = \infty$.
Moreover, $\omega_l$ has arbitrarily high moments, as follows from the expansion~\eqref{e:BD:assymptotic_Q}.
By the choice of $l_0$~\eqref{ass:cutoff} and again~\eqref{e:BD:assymptotic_Q}, it follows that $l_0^\gamma \omega_{l_0}\leq C \varepsilon^p$ for any $p>0$.
Hence, combining all these estimates and recalling that $\mathcal{F}_{\mic}(n)\leq C \varepsilon^\gamma$, we get
\begin{equation*}
\abs*{\mathcal{H}_{\mic}(n \mid \omega(z) ) - \mathcal{F}_{\mic}(n)} \leq C \abs*{h} \bra*{ \varepsilon^{-x\gamma} \sqrt{\mathcal{F}_{\mic}(n)} + \varepsilon^p + \abs*{h}} \leq C \abs*{h} \bra*{ \varepsilon^{\sigma} + \abs*{h}} ,
\end{equation*}
where $\sigma = \gamma\bra*{\tfrac{1}{2}-x} > 0$ by~\eqref{ass:cutoff}.
Hence, we can conclude for a curve of finite action
\begin{align*}
\int_0^T \!\!\mathcal{F}_{\mic}^\varepsilon(n(t)) \dx{t} &\leq z_s \varepsilon^{-\gamma} \int_0^T \mathcal{H}_{\mic}\bra*{n(t) \mid \omega(n_1(t))} \dx{t} \\
&\quad+ C \varepsilon^{-\gamma} \int_0^T \abs*{\log\frac{n_1(t)}{z_s}}\bra*{ \varepsilon^\sigma + \abs*{\log\frac{n_1(t)}{z_s}}} \dx{t} \\
&\leq C\varepsilon^{(1-x)(1-\alpha+\gamma)} \int_0^T \mathcal{D}^\varepsilon_{\mic}(n(t)) \dx{t} \\
&\quad + C \varepsilon^{\gamma} \int_0^T \abs*{\frac{n_1(t)-z_s}{\varepsilon^\gamma}}^2 \dx{t} + C \varepsilon^{\sigma} \sqrt{T} \bra*{\int_0^T \abs*{\frac{n_1(t)-z_s}{\varepsilon^\gamma}}^2 \dx{t}}^\frac12 .
\end{align*}
The conclusion~\eqref{e:monexp:micF} follows by~\eqref{e:Scale:conv:monomers} from Theorem~\ref{thm:Scale:conv:CfA}. \end{proof}
\renewcommand\appendixname{} \appendix
\section{Gradient structures for coagulation and fragmentation models}\label{s:GScfModels}
\subsection{Reversible chemical reactions as gradient flows}
This part of the appendix presents the general gradient structure for reversible chemical reactions. Since the Becker--Döring equation and other coagulation-fragmentation models can be interpreted as an infinite set of chemical reactions~\eqref{e:BD:ChemReact}, they fall into this category. The basic observation goes back to Mielke~\cite{Mielke2011a}, who found the entropic gradient flow structure for reversible chemical reactions. \begin{definition}[Reversible chemical reaction]\label{def:revChemReact} Let $n\in \mathds{R}_+^N$ be the densities of $N\in \mathds{N} \cup\set*{+\infty}$ different chemical species (or complexes) $X_i$ reacting according to the mass action law. Each reaction $r=1,\dots,R$ with $R\in \mathds{N}\cup\set*{+\infty}$ is characterized by the stoichiometric coefficients $x^r,y^r \in \mathds{N}_0^{N}$ and forward and backward reaction rates $k_\pm^r >0$: \begin{equation}\label{e:GenCR}
x_1^r X_1 + \dots + x_N^r X_N \stackrel[k_-^r]{k_+^r}{\rightleftharpoons} y_1^r X_1 + \dots + y_N^r X_N , \qquad r=1,\dots , R . \end{equation} The chemical reaction is assumed to be reversible. That is, there exists a state $\omega = (\omega_1,\dots,\omega_N)\in \mathds{R}^N$ such that \begin{equation}\label{e:DBrel}
k_+^r \omega^{x^r} = k_-^r \omega^{y^r} =: k^r . \end{equation} Here, the notation for multiindices is used: $\omega^{x^r} = \prod_{i=1}^N \omega_i^{x_i^r}$. The evolution equation for the density is given by \begin{equation}\label{e:def:ChemRateEvo}
\dot n = - \sum_{r=1}^R k^r \bra*{\frac{n^{x^r}}{\omega^{x^r}} - \frac{n^{y^r}}{\omega^{y^r}}} \bra*{x^r - y^r} . \end{equation} \end{definition} The Becker--Döring clustering equation, interpreted as an infinite set of chemical reactions~\eqref{e:BD:ChemReact}, falls into this framework by setting $N=R=\infty$, $x^r_i := \delta_{i,1}+\delta_{i,r}$ and $y^r_i := \delta_{i,r+1}$. The detailed balance condition~\eqref{e:DBrel} is satisfied in terms of the one-parameter family of equilibrium distributions~$\omega(z)$~\eqref{e:BD:equilibrium}. Moreover, the more general Smoluchowski coagulation and fragmentation model fits into this framework (cf.\ Appendix~\ref{s:ex:Smoluchowski}) under the assumption of detailed balance.
The free energy is defined as relative entropy with respect to the reversible equilibrium as in~\eqref{e:def:Lyapunov}, i.e. $\mathcal{F}(n) = \mathcal{H}(n\mid \omega)$ and hence \(
D\mathcal{F}(n) = \bra*{\log \frac{n_1}{\omega_1},\dots , \log \frac{n_N}{\omega_N}} . \) To define the manifold of states, the stoichiometric subspace and its complement are used \begin{equation*}
\mathcal{S} := \Span\set*{x^r - y^r : r=1,\dots,R} \quad\text{and}\quad \mathcal{S}^\perp :=\set*{s\in \mathds{R}^N : \gamma \cdot s = 0, \ \forall \gamma \in \mathcal{S} } . \end{equation*} Then, the manifold is given for some fixed $n_0 \in \mathds{R}_+^N$ by the affine space of densities \begin{equation*}
\mathcal{M}_{n_0} := \bra*{n_0 + \mathcal{S}} \cap \mathds{R}_+^N = \set*{n\in \mathds{R}_+^N: n\cdot s = n_0 \cdot s , \forall s\in \mathcal{S}^\perp} . \end{equation*} The definition formalizes that $\mathcal{S}^\perp$ contains all conservation laws of the reaction; therefore, the tangent vectors on $\mathcal{M}_{n_0}$ are given by $\mathds{R}^\mathcal{S}$. Coagulation and fragmentation models of one species, like Becker--Döring, are in this terminology characterized by \begin{equation*}
\mathcal{S}^\perp = \Span\set*{ \mathbf{I} } , \qquad\text{with} \qquad \mathbf{I} := (1,2,3,4,\dots). \end{equation*} Hence, the manifold has only one conserved quantity, namely the density $\varrho_0>0$ of the total number of particles: $\mathcal{M} := \set*{ n \in \mathds{R}_+^\mathds{N} : n \cdot \mathbf{I} = \sum_{l=1}^\infty l n_l = \varrho_0 }$.
The derivative $D\mathcal{F}$ of the energy is a force and has to be interpreted as a covector. The underlying metric can be specified by mapping covectors to (tangent) vectors. This is done via the Onsager matrix, defined as the symmetric positive semi-definite matrix \begin{equation}\label{e:def:Onsager}
\mathcal{K}(n) := \sum_{r} k^r \Lambda\bra*{\frac{n^{x^r}}{\omega^{x^r}},\frac{n^{y^r}}{\omega^{y^r}}} \ (x^r - y^r) \otimes (x^r - y^r) , \end{equation} where $\Lambda(\cdot,\cdot)$ is the logarithmic mean in~\eqref{e:def:LogMean}. Hence, recalling that the space of vectors is given by $\mathds{R}^\mathcal{S}$, we define the covectors with the help of the Onsager operator by~\eqref{e:def:cotangent}; the identification is well-defined since the image of $\mathcal{K}$ is by definition $\mathds{R}^\mathcal{S}$ whenever $n$ is strictly positive in all of its components. Note that although the tangent space is state independent, this is not the case for the cotangent space.
With these preliminary definitions, a reversible chemical reaction as given in Definition~\ref{def:revChemReact} is, formally, the gradient flow of the free energy $\mathcal{F}$ with respect to the metric structure induced by the Onsager operator~\eqref{e:def:Onsager}, i.e. \begin{equation}\label{e:GF:ChemRateEvo}
\dot n = - \mathcal{K}(n) D\mathcal{F}(n) . \end{equation} The property from which it follows immediately that~\eqref{e:GF:ChemRateEvo} coincides with~\eqref{e:def:ChemRateEvo} is \begin{equation*}
(x^r - y^r) \cdot D\mathcal{F}(n) = \sum_{i=1}^N \bra*{x^r_i - y^r_i} \log \frac{n_i}{\omega_i} = \log \frac{n^{x^r}}{\omega^{x^r}} - \log \frac{n^{y^r}}{\omega^{y^r}} , \end{equation*} which is nothing else than the denominator of the logarithmic mean $\Lambda\bra[\big]{\frac{n^{x^r}}{\omega^{x^r}}, \frac{n^{y^r}}{\omega^{y^r}}}$ and resembles a discrete chain rule. The gradient flow decreases its energy along its evolution in terms of the dissipation, i.e. \begin{equation*} \begin{split}
\pderiv{}{t} \mathcal{F}(n) &= D\mathcal{F}(n) \cdot \dot n = - D\mathcal{F}(n) \cdot \mathcal{K}(n) D\mathcal{F}(n) \\
&= - \sum_r k^r \bra*{\frac{n^{x^r}}{\omega^{x^r}} - \frac{n^{y^r}}{\omega^{y^r}}} \bra*{\log\frac{n^{x^r}}{\omega^{x^r}} - \log\frac{n^{y^r}}{\omega^{y^r}}} =: - \mathcal{D}(n) . \end{split} \end{equation*} We see that the Becker--Döring system fits into this framework. However, there is freedom in the choice of the free energy, and under certain physical assumptions there are other possible choices.
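As a quick illustration outside the manuscript proper, the coincidence of the gradient-flow form~\eqref{e:GF:ChemRateEvo} with the mass-action form~\eqref{e:def:ChemRateEvo} can be verified numerically for the toy reaction $2X_1 \rightleftharpoons X_2$, i.e.\ the first Becker--Döring step; the rates, equilibrium state and density in the sketch below are arbitrary choices, not taken from the model:

```python
import math

def log_mean(a, b):
    """Logarithmic mean Lambda(a, b) = (a - b)/(log a - log b), Lambda(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

# Toy reversible reaction 2 X1 <-> X2 with hypothetical rates.
k_plus, k_minus = 2.0, 3.0
omega = [1.0, k_plus / k_minus]          # detailed balance: k+ w1^2 = k- w2
k = k_plus * omega[0] ** 2               # reaction constant k^r as in (e:DBrel)
x, y = [2, 0], [0, 1]                    # stoichiometric coefficients

n = [0.7, 1.3]                           # arbitrary positive state
a = (n[0] / omega[0]) ** x[0] * (n[1] / omega[1]) ** x[1]   # n^x / w^x
b = (n[0] / omega[0]) ** y[0] * (n[1] / omega[1]) ** y[1]   # n^y / w^y

# Mass-action right-hand side, cf. (e:def:ChemRateEvo).
rhs_mass_action = [-k * (a - b) * (x[i] - y[i]) for i in range(2)]

# Gradient-flow right-hand side -K(n) DF(n), cf. (e:GF:ChemRateEvo), with
# K(n) = k Lambda(a, b) (x - y)(x - y)^T and DF(n)_i = log(n_i / w_i).
DF = [math.log(n[i] / omega[i]) for i in range(2)]
directional = sum((x[i] - y[i]) * DF[i] for i in range(2))   # = log a - log b
rhs_gradient_flow = [-k * log_mean(a, b) * (x[i] - y[i]) * directional
                     for i in range(2)]

assert all(abs(p - q) < 1e-12 for p, q in zip(rhs_mass_action, rhs_gradient_flow))
```

The check rests precisely on the identity $\Lambda(a,b)\bra*{\log a - \log b} = a - b$ for the logarithmic mean, i.e.\ the discrete chain rule mentioned above.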
\subsection{Smoluchowski coagulation and fragmentation equation} \label{s:ex:Smoluchowski}
The Becker--Döring clustering equation is itself just a special case of the more general class of Smoluchowski coagulation and fragmentation equations, seen as the following family of chemical reactions \begin{equation*}
X_i + X_j \stackrel[b_{i,j}]{a_{i,j}}{\rightleftharpoons} X_{i+j} \qquad\text{with}\qquad (i,j)\in \mathds{N}\times \mathds{N}. \end{equation*} Hence, the stoichiometric coefficients in~\eqref{e:GenCR} are given by $x^{(i,j)}_k = \delta_{i,k} + \delta_{j,k}$ and $y^{(i,j)}_k = \delta_{i+j,k}$. A gradient flow structure can be established under the assumption of detailed balance, which in this case does not necessarily hold: there exists a state $\omega\in \mathds{R}^\mathds{N}$ such that for all $(i,j)\in \mathds{N} \times \mathds{N}$ \begin{equation*}
a_{i,j} \omega_i \omega_j = b_{i,j} \omega_{i+j} . \end{equation*} Under this condition, the Smoluchowski coagulation and fragmentation equation is the gradient flow~\eqref{e:GF:ChemRateEvo} of the free energy $\mathcal{F}$ with respect to the Onsager operator $\mathcal{K}$ defined in~\eqref{e:def:Onsager}.
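For instance, with constant kernels $a_{i,j} \equiv a$ and $b_{i,j} \equiv b$ (a hypothetical choice, not discussed in the text), the one-parameter family $\omega_i = (b/a)\, z^i$, $z > 0$, satisfies the detailed balance condition exactly, as the following sketch confirms on a finite range of cluster sizes.

```python
import math

a, b, z = 2.0, 3.0, 0.5          # hypothetical constant coagulation/fragmentation rates
omega = {i: (b/a)*z**i for i in range(1, 41)}   # candidate detailed-balance state

# check a * omega_i * omega_j = b * omega_{i+j} for all pairs within the truncation
for i in range(1, 20):
    for j in range(1, 20):
        assert abs(a*omega[i]*omega[j] - b*omega[i+j]) < 1e-12
```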
\subsection{Modified Becker--Döring system}
The modified Becker--Döring system was introduced by Dreyer and Duderstadt~\cite{Dreyer2006a}. The main feature is the introduction of a mixing entropy between the clusters. Hence, the free energy consists of a relative entropy part as defined in~\eqref{e:def:Lyapunov} plus a mixing entropy depending on the total number of clusters \begin{equation}\label{e:FEmodBD}
\tilde\mathcal{F}(n) = \mathcal{H}(n\mid \omega) - N(n) \bra*{\log N(n) - 1} \quad\text{with}\quad N(n) = \sum_i n_i . \end{equation} The most compact form of the free energy is $\tilde\mathcal{F}(n) = \sum_i \bra*{n_i \log \frac{n_i}{\omega_i N(n)} + \omega_i}$. Hence, the differential of the free energy is given by \begin{equation}\label{e:modBD:GradF}
D\tilde\mathcal{F}(n) =\bra*{\log \frac{n_1}{\omega_1 N(n)} , \dots, \log \frac{n_i}{\omega_i N(n)}, \dots}. \end{equation} The reaction is still of the same form as in the classical Becker--Döring system~\eqref{e:BD:ChemReact}, i.e.\ $x_i^r = \delta_{i,1}+\delta_{i,r}$ and $y_i^r = \delta_{i,r+1}$ in~\eqref{e:GenCR}. This leads to the same detailed balance condition as for the classical Becker--Döring model, $a_r \omega_1 \omega_r = b_{r+1} \omega_{r+1} =: k^r$. Hence, we obtain the same possible equilibrium states $\omega_r(z)=z^r Q_r$ given in~\eqref{e:BD:equilibrium}. Again, $z$ has to be determined from the formal conservation law $\sum_{l=1}^\infty l \omega_l(z) = \sum_{l=1}^\infty l n_l$. However, the existence of an equilibrium state as minimizer of the free energy is more involved in this case; for a detailed analysis of the equilibrium states, we refer to~\cite{Herrmann2005}.
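The cancellation behind~\eqref{e:modBD:GradF} (the derivative of the mixing-entropy term $-N(\log N - 1)$ contributes exactly the $N(n)$ inside the logarithm, the remaining terms cancelling) can be checked by finite differences. The small truncated state below is hypothetical; this is only a sketch of the verification.

```python
import math

omega = [None, 1.0, 0.6, 0.36, 0.216]      # hypothetical reference state (indices 1..4)
n     = [None, 0.8, 0.5, 0.9, 0.3]         # arbitrary positive state
R = 4

def F(n):
    # compact form of the modified free energy: sum_i ( n_i log(n_i/(omega_i N)) + omega_i )
    N = sum(n[1:])
    return sum(n[i]*math.log(n[i]/(omega[i]*N)) + omega[i] for i in range(1, R+1))

def DF_closed(n):
    # closed-form differential: (log(n_i/(omega_i N)))_i
    N = sum(n[1:])
    return [math.log(n[i]/(omega[i]*N)) for i in range(1, R+1)]

# central finite differences in each coordinate agree with the closed form
h = 1e-6
for i in range(1, R+1):
    np_, nm = list(n), list(n)
    np_[i] += h; nm[i] -= h
    fd = (F(np_) - F(nm))/(2*h)
    assert abs(fd - DF_closed(n)[i-1]) < 1e-5
```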
Now, from~\eqref{e:modBD:GradF}, we further deduce \begin{equation*}
(x^r - y^r) \cdot D\tilde\mathcal{F}(n) = \log \frac{n_1 n_r}{\omega_1 \omega_r N(n)^2} - \log \frac{n_{r+1}}{\omega_{r+1}N(n)} =\log \frac{n_1 n_r}{\omega_1 \omega_r} - \log \frac{N(n) n_{r+1}}{\omega_{r+1}} . \end{equation*} From the above identity, the modified Onsager matrix can be read off and is given by \begin{equation*}
\tilde\mathcal{K}(n) := \sum_{r} k^r \Lambda\bra*{\frac{n_1 n_r}{\omega_1 \omega_r} , \frac{N(n) n_{r+1}}{\omega_{r+1}}} (x^r - y^r)\otimes (x^r - y^r) . \end{equation*} Then, we obtain the modified Becker--Döring equation as the gradient flow of the modified free energy~\eqref{e:FEmodBD} \begin{equation*} \begin{split}
\dot n &= - \tilde \mathcal{K}(n)D\tilde \mathcal{F}(n) = -\sum_r k^r \bra*{\frac{n_1 n_r}{\omega_1 \omega_r} - \frac{N(n) n_{r+1}}{\omega_{r+1}}} (x^r - y^r) \\
&= -\sum_r \bra*{ a_r n_1 n_r - b_{r+1} N(n) n_{r+1}} (x^r - y^r) = -\sum_{r} \tilde J_r (x^r - y^r) . \end{split} \end{equation*} The explicit form of the equation is given for any $l=1,2,\dots$ by \begin{equation*}
\dot n_l = \tilde J_{l-1} - \tilde J_l \quad\text{with}\ \ \tilde J_0 := -\sum_{r=1}^\infty \tilde J_r \ \ \text{and}\ \ \tilde J_r(n) := a_r n_1 n_r - b_{r+1}N(n) n_{r+1}. \end{equation*}
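The modified fluxes still formally conserve the total mass $\sum_l l\, n_l$, since $\sum_l l(\tilde J_{l-1} - \tilde J_l) = \tilde J_0 + \sum_r \tilde J_r = 0$. The sketch below checks this on a finite truncation (closing the system by setting the last flux to zero); all rates and states are hypothetical.

```python
import random

R = 8
random.seed(1)
a = [None] + [random.uniform(0.5, 1.5) for _ in range(1, R)]          # a_r (hypothetical)
b = [None, None] + [random.uniform(0.5, 1.5) for _ in range(2, R+1)]  # b_{r+1} (hypothetical)
n = [None] + [random.uniform(0.1, 1.0) for _ in range(R)]
N = sum(n[1:])

J = [0.0]*(R+1)                       # J[R] = 0 closes the truncated system
for r in range(1, R):
    J[r] = a[r]*n[1]*n[r] - b[r+1]*N*n[r+1]
J0 = -sum(J[1:R])                     # the monomer flux J_0 := -sum_r J_r

# dn_l = J_{l-1} - J_l
dn = [None] + [(J0 if l == 1 else J[l-1]) - J[l] for l in range(1, R+1)]

# total mass sum_l l*n_l is formally conserved
assert abs(sum(l*dn[l] for l in range(1, R+1))) < 1e-12
```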
\section{Proof of Lemma~\ref{lem:BD:assymptotic_Q}}\label{s:assymptotic_Q}
\begin{proof}[Proof of Lemma~\ref{lem:BD:assymptotic_Q}]
We calculate using the definition~\eqref{e:BD:equilibrium} of $Q_l$
\begin{equation*}
\log\bra*{l^\alpha z_s^{l-1} Q_l} = - \sum_{j=2}^l \log\bra*{1+ \frac{q}{z_s j^\gamma}}
\end{equation*}
The function $x \mapsto \log\bra*{1+ \frac{q}{z_s x^\gamma}}$ is positive, continuous and monotonically decreasing to $0$.
Therefore, we can define the Euler--Mascheroni-type constant
\begin{equation*}
C_1 := \lim_{l\to \infty} \bra*{\sum_{j=2}^l \log\bra*{1+ \frac{q}{z_s j^\gamma}} - \int_2^l \log\bra*{1+ \frac{q}{z_s x^\gamma}} \dx{x}} .
\end{equation*}
Moreover, the Euler--Maclaurin formula yields the estimate
\begin{equation*}
\abs*{C_1 - \bra*{\sum_{j=2}^{l} \log\bra*{1+ \frac{q}{z_s j^\gamma}} - \int_2^{l} \log\bra*{1+ \frac{q}{z_s x^\gamma}} \dx{x}}} \leq \log\bra*{1+\frac{q}{z_s l^\gamma}} \leq \frac{q}{z_s l^\gamma} .
\end{equation*}
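The existence of $C_1$ and the stated $O(l^{-\gamma})$ convergence rate can be probed numerically. In the sketch below the parameter values are hypothetical and the integral is evaluated by composite Simpson quadrature.

```python
import math

q, zs, gamma = 1.0, 1.0, 0.5           # hypothetical parameters, 0 < gamma < 1

def f(x):
    return math.log(1 + q/(zs*x**gamma))

def integral(l, steps=4000):           # composite Simpson rule on [2, l]
    h = (l - 2)/steps
    s = f(2) + f(l) \
        + 4*sum(f(2 + (2*i - 1)*h) for i in range(1, steps//2 + 1)) \
        + 2*sum(f(2 + 2*i*h) for i in range(1, steps//2))
    return s*h/3

def C(l):                              # partial Euler--Maclaurin constant
    return sum(f(j) for j in range(2, l + 1)) - integral(l)

# successive values approach C_1 at rate O(l^{-gamma})
assert abs(C(400) - C(200)) < 2*q/(zs*200**gamma)
```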
The following bound
\begin{equation*}
\frac{q}{z_s x^\gamma} - \frac{1}{2}\bra*{\frac{q}{z_s x^\gamma}}^2 \leq \log\bra*{1+\frac{q}{z_s x^\gamma}}\leq \frac{q}{z_s x^\gamma} - \frac{1}{2}\bra*{\frac{q}{z_s x^\gamma}}^2 + \frac{1}{3}\bra*{\frac{q}{z_s x^\gamma}}^3
\end{equation*}
implies the estimate
\begin{align*}
0\leq \frac{\int_{2}^{l} \log\bra*{1+\frac{q}{z_s x^\gamma}} \dx{x} }{\int_{2}^{l} \bra*{ \frac{q}{z_s x^\gamma} - \frac{1}{2}\bra*{\frac{q}{z_s x^\gamma}}^2 } \dx{x}} - 1
\leq O(l^{-\gamma}) ,
\end{align*}
where we use the convention that $\frac{l^\kappa-1}{\kappa} = \log l$ for $\kappa=0$.
Now, we can combine all the estimates to obtain
\begin{align*}
\log\bra*{l^\alpha z_s^{l-1} Q_l} &= - \bra*{\sum_{j=2}^l \log\bra*{1+\frac{q}{z_s j^\gamma}} - \int_2^l \log\bra*{1+\frac{q}{z_s x^\gamma}} \dx{x}} \\
&\hspace{-2cm} - \bra*{1+ \frac{\int_{2}^{l} \log\bra*{1+\frac{q}{z_s x^\gamma}} \dx{x} }{\int_{2}^{l} \bra*{ \frac{q}{z_s x^\gamma} - \frac{1}{2}\bra*{\frac{q}{z_s x^\gamma}}^2 } \dx{x}} -1} \int_{2}^{l} \bra*{ \frac{q}{z_s x^\gamma} - \frac{1}{2}\bra*{\frac{q}{z_s x^\gamma}}^2 } \dx{x} \\
&= \bra*{\mathcal{F}_0 - \frac{q}{z_s (1-\gamma)} l^{1-\gamma} + \frac{q^2}{2 z_s^2(1-2\gamma)} l^{1-2\gamma}} \bra*{1+O(l^{-\gamma})} ,
\end{align*}
which concludes the proof by setting $\mathcal{F}_0=\frac{q 2^{1-\gamma}}{z_s (1-\gamma)} - \frac{q^2 2^{1-2\gamma}}{2 z_s^2(1-2\gamma)} - C_1$. \end{proof}
\addtocontents{toc}{\SkipTocEntry}
\end{document}
\begin{document}
\sloppy
\title[Quantitative mixing for locally Hamiltonian flows]{Quantitative mixing for locally Hamiltonian flows with saddle loops on compact surfaces}
\author[D.~Ravotti]{Davide Ravotti} \address{School of Mathematics\\ University of Bristol\\ University Walk\\ BS8 1TW Bristol, UK} \email{[email protected]}
\begin{abstract} Given a compact surface $\mathcal{M}$ with a smooth area form $\omega$, we consider an open and dense subset of the set of smooth closed 1-forms on $\mathcal{M}$ with isolated zeros which admit at least one saddle loop homologous to zero, and we prove that almost every element of this subset induces a mixing flow on each minimal component. Moreover, we provide an estimate of the speed of the decay of correlations for smooth functions with compact support on the complement of the set of singularities. This result is achieved by proving, for the case of finitely many singularities, a quantitative version of a theorem by Ulcigrai (ETDS, 2007) stating that any suspension flow with one asymmetric logarithmic singularity over almost every interval exchange transformation is mixing. In particular, the quantitative mixing estimate we prove applies to asymmetric logarithmic suspension flows over rotations, which were shown to be mixing by Sinai and Khanin. \end{abstract}
\keywords{smooth area-preserving flows, mixing, logarithmic decay of correlations, special flows over IETs, logarithmic asymmetric singularities}
\thanks{\emph{2010 Math.~Subj.~Class.}: 37A25, 37E35}
\maketitle
\section{Introduction}
Let us consider a smooth compact connected orientable surface $\mathcal{M}$, together with a smooth area form $\omega$. Any smooth closed 1-form induces a smooth area-preserving flow on $\mathcal{M}$, which is given locally by the solution of some Hamiltonian equations (see \S2 for definitions); it is hence called \newword{locally Hamiltonian flow} or \newword{multi-valued Hamiltonian flow}.
The study of such flows was initiated by Novikov \cite{novikov:hamiltonian}, motivated by some problems in solid-state physics. Orbits of locally Hamiltonian flows can be seen as hyperplane sections of periodic manifolds, as pointed out by Arnold \cite{arnold:torus}, who studied the case when $\mathcal{M}$ is the 2-dimensional torus $\mathbb{T}^2$. He proved that $\mathbb{T}^2$ can be decomposed into finitely many regions filled with periodic trajectories and one minimal ergodic component; in the same paper he asked whether the restriction of the flow to this ergodic component is mixing. We recall that a flow $\{\varphi_t\}_{t \in \mathbb{R}}$ on a measure space $(X, \mu)$ is \newword{mixing} if for any measurable sets $A, B \subset X$ we have $$ \lim_{t \to \infty} \mu (\varphi_t(A) \cap B) = \mu(A) \mu(B), $$ i.e., if the events $A$ and $B$ become asymptotically independent. By choosing an appropriate Poincaré section, the flow on this ergodic component is isomorphic to a suspension flow over a circle rotation with a roof function with asymmetric logarithmic singularities. The question posed by Arnold was answered by Sinai and Khanin \cite{sinai:mixing}, who proved that, under a full-measure Diophantine condition on the rotation angle, the flow is mixing. This condition was weakened by Kochergin \cite{kochergin:non, kochergin:non2, kochergin:torus, kochergin:torus2}.
The presence of singularities in the roof function is necessary, as well as the asymmetry condition: in this setting, mixing does not occur for functions of bounded variation or, assuming a full-measure Diophantine condition on the rotation angle, for functions with symmetric logarithmic singularities; see the results by Kochergin in \cite{kochergin:absence} and \cite{kochergin:absence2} respectively. Indeed, mixing is produced by shearing of transversal segments close to singular points, which is a result of different deceleration rates.
Similarly, if the genus $g$ of the surface $\mathcal{M}$ is greater than $1$, any locally Hamiltonian flow can be decomposed into \newword{periodic components}, i.e.~regions filled with periodic orbits, and \newword{minimal components}, namely regions which are the closure of a nonperiodic orbit, as was shown independently by several authors, see Levitt \cite{levitt:feuilletages}, Mayer \cite{mayer:minimal} and Zorich \cite{zorich:hyperplane}. The first return map of a Poincaré section on any of the minimal components is an Interval Exchange Transformation (IET), namely a piecewise orientation-preserving isometry of the interval $I=[0,1]$; in particular, typical (in a measure-theoretic sense) flows on minimal components are ergodic, since almost every IET is ergodic, due to a classical result proved by Masur \cite{masur:ergodic} and Veech \cite{veech:ergodic} independently.
On the other hand, mixing depends on the type of singularities of the first return time function: Kochergin proved mixing for suspension flows over IETs with roof functions with power-like singularities \cite{kochergin:degenerate}. However, this case corresponds to degenerate zeros of the 1-form defining the locally Hamiltonian flow; the complement of the set of these 1-forms is open and dense in the set of 1-forms with isolated zeros. Generic flows have logarithmic singularities: in this case, if the surface $\mathcal{M}$ is the closure of a single orbit, i.e.~if the flow is minimal, Ulcigrai proved that \newword{almost every} flow is not mixing \cite{ulcigrai:absence}, but is weakly mixing \cite{ulcigrai:weakmixing}. Here, we consider the measure class sometimes called \newword{Katok fundamental class}, described in \S2. An example of an \newword{exceptional} minimal mixing flow in this setup has been constructed recently by Chaika and Wright \cite{chaika:example}, who exhibited a locally Hamiltonian minimal mixing flow with simple saddles on a surface of genus 5.
In this paper we address the question of mixing when the 1-form has isolated simple zeros and the flow is not minimal; typically, minimal components are bounded by saddle loops homologous to zero (see \S2 for definitions). We prove the following result; a more precise formulation is given in Theorem \ref{th:mix}. \begin{teo}\label{th:1.1} There exists an open and dense subset of the set of smooth closed 1-forms on $\mathcal{M}$ with isolated zeros which admit at least one saddle loop homologous to zero such that almost every 1-form in it induces a mixing locally Hamiltonian flow on each minimal component. \end{teo}
Moreover, we provide an estimate on the decay of correlations for a dense set of smooth functions, namely we prove the following theorem. \begin{teo}\label{th:1.2} Let $\{\varphi_t\}_{t \in \mathbb{R}}$ be the locally Hamiltonian flow induced by a smooth 1-form $\eta$ as in Theorem \ref{th:1.1} and let $\mathcal{M}\rq{} \subset \mathcal{M}$ be a minimal component. Consider the set $\mathscr{C}^1_c(\mathcal{M}\rq{})$ of $\mathscr{C}^1$ functions on $\mathcal{M}\rq{}$ with compact support in the complement of the singularities of $\eta$. Then, there exists $0 < \gamma <1$ such that for all $g,h \in \mathscr{C}^1_c(\mathcal{M}\rq{})$ with $\int_{\mathcal{M}\rq{}} g \omega = 0$ we have $$ \modulo{ \int_{\mathcal{M}\rq{}} (g \circ \varphi_t)h\ \omega } \leq \frac{C_{g,h}}{(\log t)^{\gamma}}, $$ for some constant $C_{g,h}>0$. \end{teo}
To the best of our knowledge, this is the first quantitative mixing result for locally Hamiltonian flows, apart from a theorem by Fayad \cite{fayad:quantmix}, which states that a certain class of suspension flows over irrational rotations with roof functions with power-like singularities has polynomial speed of mixing. In the genus 1 case, Theorem \ref{th:1.2} provides a quantitative version of the mixing result by Sinai and Khanin in \cite{sinai:mixing}. We believe that the optimal estimate of the speed of decay indeed has this form, namely a power of $\log t$, although this remains an open question.
The proof of Theorem \ref{th:1.1} consists of two parts: first, we describe the open and dense set of 1-forms we consider (with a measure class defined on it) and we show how to represent the restriction of the induced locally Hamiltonian flows to any of its minimal components as a suspension flow over an interval exchange transformation with roof function with asymmetric logarithmic singularities. Secondly, we show that for almost every IET, every such suspension flow is mixing by proving a version of Theorem \ref{th:1.2} for suspension flows. Ulcigrai \cite{ulcigrai:mixing} treated the special case when the roof function has only one asymmetric logarithmic singularity; in this paper, we show that her techniques can be made quantitative and applied to this more general setting. The first step of the proof is to obtain sharp estimates for the Birkhoff sums of the derivative $f\rq{}$ of the roof function $f$, see Theorem \ref{th:BS}. These estimates are also used by Kanigowski, Kulaga and Ulcigrai to prove mixing of all orders for such flows \cite{KKU:multmixing}. In order to deduce the result on the decay of correlations, we apply a \newword{bootstrap trick} analogous to the one used by Forni and Ulcigrai in \cite{forniulcigrai:timechanges} and an estimate on the deviation of ergodic averages for typical IETs by Athreya and Forni~\cite{athreyaforni:iet}.
\subsection{Outline of the paper} In \S2 we recall the definition of locally Hamiltonian flow induced by a smooth closed 1-form and we focus on the set of closed 1-forms with isolated zeros; we describe some of its topological properties and we equip it with Katok\rq{}s measure class. In \S3 we show how to represent the locally Hamiltonian flows we consider as suspension flows over IETs and we discuss the relation between Katok\rq{}s measure class and the measure on the set of IETs. In \S4 we recall some basic facts about the Rauzy-Veech Induction for IETs (a renormalization algorithm which corresponds to inducing the IET to a neighborhood of zero) and in doing so we introduce some notation for the proof of Theorem \ref{th:BS}; moreover, we state a full-measure Diophantine condition for IETs first used by Ulcigrai in \cite{ulcigrai:mixing} to bound the growth of the Rauzy-Veech cocycle matrices along a subsequence of induction times (see Theorem \ref{th:ulcigrai}). We remark that, although in general we have more than one singularity, we do not need to induce at other points by using different renormalization algorithms, but we are able to show that the Diophantine condition in \cite{ulcigrai:mixing} can be used to treat also the case of several singularities. In \S5 we state the results on the Birkhoff sums of the roof function of the suspension flow and its derivative (Theorem \ref{th:BS}), and the quantitative estimate on the speed of the decay of correlations for a dense set of smooth functions in the language of suspension flows (Theorem \ref{th:decayofcorr}); we also deduce Theorem \ref{th:1.2} and Theorem \ref{th:IETgoal} from it. 
Section 6 is devoted to the proof of Theorem \ref{th:decayofcorr}, which is carried out in several steps: we first define partitions of the unit interval analogous to the ones used by Ulcigrai in \cite{ulcigrai:mixing}, with explicit bounds on their size, and then we apply a bootstrap trick to reduce the problem to estimate the deviations of ergodic averages for IETs, for which we apply a result by Athreya and Forni~\cite{athreyaforni:iet}. In the Appendix 7 we prove Theorem~\ref{th:BS}.
\subsection{Acknowledgments} I would like to thank my supervisor Corinna Ulcigrai for her guidance and support throughout the writing of this paper. I also thank the referees for their attentive readings and helpful comments on previous versions of this paper. The research leading to these results has received funding from the European Research Council under the European Union Seventh Framework Programme (FP/2007-2013)~/~ERC Grant Agreement n.~335989.
\section{Locally Hamiltonian flows}
Let $\mathcal{M}$ be a smooth compact connected orientable surface of genus $g$ and fix a smooth area form $\omega$ on $\mathcal{M}$. For any point $p \in \mathcal{M}$ and for any choice of local coordinates supported on a neighborhood $\mathcal{U}$ of $p$, we can write $\omega = \omega\negthickspace\upharpoonright_{\mathcal{U}} = V(x,y) \diff x \wedge \diff y$, where $V(x,y)$ is a $\mathscr{C}^{\infty}$ function; moreover $\omega_p \neq 0$. Fix a smooth closed 1-form $\eta$ on $\mathcal{M}$; here and henceforth, we only consider 1-forms $\eta$ with isolated zeros (sometimes called singularities). Then $\eta$ determines a flow $\{\varphi_t\}_{t \in \mathbb{R}}$ in the following way: consider the vector field $W$ defined by the relation $W \lrcorner\ \omega = \eta$, where $\lrcorner$ denotes the contraction operator; the point $\varphi_t(p)$ is given by following for time $t$ the smooth integral curve passing through $p$. Explicitly, for any point $p$ there exists a simply connected neighborhood $\mathcal{U}$ of $p$ such that $\eta\negthickspace\upharpoonright_{\mathcal{U}} = \diff H$ for a smooth function $H(x,y)$ defined on $\mathcal{U}$. Clearly, $H$ is uniquely determined up to a constant factor. Then the relation defining $W$ translates as $$ V(x,y)( W_x \diff y -W_y\diff x) =\partial_xH \diff x + \partial_yH \diff y, $$ i.e.~$W\negthickspace\upharpoonright_{\mathcal{U}}= \left((\partial_yH) \partial_x -(\partial_xH) \partial_y\right)/V$. Notice that, since $\mathcal{M}$ is compact, the flow is defined for any $t \in \mathbb{R}$.
The 1-form $\eta$ vanishes along any integral curve, namely denoting by $\varphi(p) \colon t \to \varphi_t(p)$ the integral curve through $p$, we have that $\eta\negthickspace\upharpoonright_{\varphi(p)} = 0$. Indeed, $\frac{\diff}{\diff t} H(\varphi_t(p)) = \nabla H \cdot \dot{\varphi}_t(p) =0$, meaning that $H$ is constant along $\varphi(p)$. We say that $\varphi(p)$ is a \newword{leaf} of $\eta$ and $\eta$ determines a \newword{foliation} of the surface $\mathcal{M}$.
The function $H$ is globally defined on $\mathcal{M}$ if and only if the 1-form $\eta$ is exact, and, in this case, $H$ is said to be a (global) Hamiltonian of the system. In general, the relation $\eta = \diff H$ holds locally: for this reason $\{\varphi_t\}_{t \in \mathbb{R}}$ is called the \newword{locally Hamiltonian flow associated to} $\eta$.
Let $\pi \colon \widetilde{\mathcal{M}} \to \mathcal{M}$ be the universal cover of $\mathcal{M}$; then the pull-back $\pi^{\ast}\eta$ is a closed 1-form on $\widetilde{\mathcal{M}}$, since $\diff (\pi^{\ast} \eta) = \pi^{\ast} \diff \eta =0$. The fact that $\widetilde{\mathcal{M}}$ is simply connected implies that there exists a global Hamiltonian $\widetilde{H}$ on $\widetilde{\mathcal{M}}$ and the values of $\widetilde{H}$ at different pre-images $p_1,p_2 \in \pi^{-1}(p)$ differ by the \newword{periods}, i.e.~the values of $\widetilde{H}(p_2) - \widetilde{H}(p_1) = \int_{p_1}^{p_2} \pi^{\ast}\eta = \int_{\gamma} \eta$, where $\gamma \in \pi_1(\mathcal{M},p)$ is a loop in $\mathcal{M}$ with base point $p$ which lifts to a path connecting $p_1$ to $p_2$. Therefore, there exists a multi-valued function $H = \widetilde{H} \circ \pi^{-1}$ on $\mathcal{M}$, which is well-defined as a function $$ H \colon \mathcal{M} \to \bigslant{\mathbb{R}}{ \{ \int_{\gamma} \eta : \gamma \in \pi_1(\mathcal{M})\}}, $$ being a Hamiltonian for $\eta$, since $\eta_p = (\pi^{\ast}\eta)_{\pi^{-1}(p)} \circ \diff \pi^{-1}_p = \diff (\widetilde{H} \circ \pi^{-1})_p = \diff H_p$. For this reason, the flow $\{\varphi_t\}_{t \in \mathbb{R}}$ is also called the \newword{multi-valued Hamiltonian flow associated to} $\eta$.
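As a standard illustration (not taken from the text) of the period construction, consider the torus $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$ with $\omega = \diff x \wedge \diff y$ and the closed, non-exact 1-form $\eta = \alpha \diff x + \beta \diff y$:

```latex
% Torus illustration: eta = alpha dx + beta dy on T^2 = R^2/Z^2, omega = dx ^ dy.
W \lrcorner\ \omega = \eta
  \;\Longrightarrow\;
  W = \beta\, \partial_x - \alpha\, \partial_y ,
  \qquad
  \widetilde{H}(x,y) = \alpha x + \beta y \ \text{on the cover } \mathbb{R}^2 ,
\qquad
H \colon \mathbb{T}^2 \to \bigslant{\mathbb{R}}{\bra*{\alpha \mathbb{Z} + \beta \mathbb{Z}}} .
```

The periods are $\int_\gamma \eta \in \alpha\mathbb{Z} + \beta\mathbb{Z}$, so $H$ is genuinely multi-valued unless $\eta$ is exact; the induced flow is the linear flow on the torus, whose orbit dichotomy (all periodic versus all dense) is invoked in Example \ref{example2} below.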
\begin{remark}\label{rk:areapreserving} The flow $\{\varphi_t\}_{t \in \mathbb{R}}$ preserves both the area form $\omega$ and the 1-form $\eta$. To see this, it is sufficient to show that the corresponding Lie derivatives $\mathscr{L}_W \omega$ and $\mathscr{L}_W \eta$ w.r.t.~$W$ vanish. Indeed, since by definition $\eta = W \lrcorner\ \omega$ and $\eta$ is closed, $$ \mathscr{L}_W \omega = W \lrcorner\ (\diff \omega) + \diff(W \lrcorner\ \omega) = \diff \eta =0, $$ and $$ \mathscr{L}_W \eta = W \lrcorner\ (\diff \eta) + \diff (W \lrcorner\ \eta) = \diff (W \lrcorner\ (W\lrcorner\ \omega)) = \diff \omega( W, W) = 0, $$ since $\omega$ is alternating. \end{remark}
\subsection{Perturbations of closed 1-forms}
Let $\eta, \eta\rq{}$ be two smooth closed 1-forms. We say that $\eta\rq{}$ is an $\varepsilon$-perturbation of $\eta$ if for any $p \in \mathcal{M}$ and for any coordinates supported on a simply connected neighborhood $\mathcal{U}$ of $p$, we have $\eta\negthickspace\upharpoonright_{\mathcal{U}} = \diff H$ and $(\eta\rq{}-\eta)\negthickspace\upharpoonright_{\mathcal{U}} = \diff f$, with $\norma{f}_{\mathscr{C}^{\infty}} \leq \varepsilon \norma{H}_{\mathscr{C}^{\infty}}$, where $\norma{\cdot}_{\mathscr{C}^{\infty}}$ denotes the $\mathscr{C}^{\infty}$-norm. We want to study the properties of \newword{generic} 1-forms, namely the properties of 1-forms which persist under small perturbations.
Let $p \in \mathcal{M}$ be a zero of $\eta$, and write in local coordinates $\eta = \diff H$; we say that $p$ is a \newword{simple} zero if $\det \hessiano_{(0,0)}(H) \neq 0$, where $\hessiano_{(0,0)}(H)$ denotes the Hessian matrix of $H$ at $p=(0,0)$. We remark that this condition is independent of the choice of local coordinates. A zero which is not simple is called \newword{degenerate}.
\begin{defin} We denote by $\mathcal{F}$ the set of smooth closed 1-forms on $\mathcal{M}$ with isolated zeros and by $\mathcal{A} \subset \mathcal{F}$ the subset of 1-forms with simple zeros. \end{defin}
Let us recall the following result by Morse, see e.g.~\cite[p.~6]{milnor:morse}. \begin{teo}\label{th:morse1} Let $p \in \mathcal{M}$ be a simple zero of $\eta$. There exist local coordinates supported on a simply connected neighborhood $\mathcal{U}$ of $p=(0,0)$ such that either $\eta\negthickspace\upharpoonright_{\mathcal{U}} = x \diff x + y \diff y$, or $\eta\negthickspace\upharpoonright_{\mathcal{U}} = -x \diff x - y \diff y$, or $\eta\negthickspace\upharpoonright_{\mathcal{U}} = y \diff x +x \diff y$. \end{teo} In the first case, $p$ is a local minimum for any local Hamiltonian $H$ and we say that $p$ is a minimum for $\eta$; for the same reason, in the second case we say that $p$ is a maximum for $\eta$ and in the third case we say that $p$ is a saddle point. With the aid of these coordinates, it is easy to check that the index of the associated vector field at a maximum or minimum is $1$, while it is $-1$ at a saddle point. By the Poincaré-Hopf Theorem, if $\eta$ has only simple zeros, then $\# \text{minima} + \#\text{maxima} - \#\text{saddles} = \chi(\mathcal{M})$, where $\chi(\mathcal{M}) = 2-2g$ is the Euler characteristic of $\mathcal{M}$.
If $p$ is a maximum or a minimum for $\eta$, locally the leaves of $\eta$ are closed curves homologous to zero. Hence, $p$ is the centre of a disk filled with \lq\lq parallel\rq\rq\ leaves; the maximal disk of this type, which will be called an \newword{island} for $\eta$, is bounded by a closed leaf $\gamma_0$ homologous to zero. The closed curve $\gamma_0$ must contain at least one critical point for $\eta$, which has to be a saddle if $\eta$ has only simple zeros. A leaf $\gamma_0$ as above is called a saddle leaf; namely a \newword{saddle leaf} is a leaf $\gamma = \varphi(x)$ such that $\lim_{t \to \infty} \varphi_t(x) = q_1$ and $\lim_{t \to -\infty} \varphi_t(x) = q_2$, where $q_1, q_2$ are saddle points. If $q_1 = q_2$ we say that $\varphi(x)$ is a \newword{saddle loop}, otherwise we say that $\varphi(x)$ is a \newword{saddle connection}.
We describe some topological properties of the sets $\mathcal{A}$ and $\mathcal{F}$. \begin{lemma}\label{lemmaadl} Let $\mathcal{A}_{s,l}$ be the set of 1-forms in $\mathcal{A}$ with $s$ saddle points and $l$ minima or maxima. Then, each $\mathcal{A}_{s,l}$ is open and their union $\mathcal{A}$ is dense in $\mathcal{F}$. \end{lemma} \begin{proof} The last assertion is classical, see e.g.~\cite[Corollary 1.29]{pajitnov:morse}, but we present a proof for the sake of completeness. We first show that $\mathcal{A}$ is open. By contradiction, suppose that there exists a sequence of 1-forms $(\eta_n)$ converging to $\eta \in \mathcal{A}$ such that each $\eta_n$ admits a degenerate zero $p_n$. Since $\mathcal{M}$ is compact, we can assume $p_n \to p$ for some $p \in \mathcal{M}$. Let $\mathcal{U}$ be a simply connected neighborhood of $p$ and consider a sequence of local Hamiltonians $H_n$ for $\eta_n$ on $\mathcal{U}$ which converges in the $\mathscr{C}^{\infty}$-norm to a local Hamiltonian $H$ for $\eta$. Therefore, $0 = \det \hessiano_{p_n}(H_n) \to \det \hessiano_p(H) \neq 0$, which is the desired contradiction.
We now show that the sets $\mathcal{A}_{s,l}$ are open. Consider $\eta \in \mathcal{A}_{s,l}$ with zeros $p_1, \dots, p_{s+l}$. Any sufficiently small perturbation $\eta\rq{}$ of $\eta$ has only simple zeros $p_1\rq{}, \dots, p_{s+l}\rq{}$ with $p_i\rq{}$ close to $p_i$. The type of the zero $p_i\rq{}$ depends on the sign of the trace and of the determinant of the Hessian matrix of a local Hamiltonian at $p_i\rq{}$, which are continuous maps in the $\mathscr{C}^{\infty}$-topology; hence the type of zero of $p_i$ and $p_i\rq{}$ is the same. Thus, each $\mathcal{A}_{s,l}$ is open.
To prove $\mathcal{A}$ is dense, we show that for all degenerate zeros $p$ of $\eta \in \mathcal{F}$, there exist arbitrarily small perturbations $\eta\rq{}$ which coincide with $\eta$ outside a neighborhood $\mathcal{U}$ of $p$ and have only simple zeros in $\mathcal{U}$. Let $p$ be a degenerate zero of $\eta$ and fix an open simply connected neighborhood $\mathcal{U}$ of $p$. Sard\rq{}s Theorem applied to $\eta \colon \mathcal{M} \to T^{\ast}\mathcal{M}$ implies that there exist regular values $\eta_q \in T^{\ast}_q\mathcal{M}$, with $q $ arbitrarily close to $p$. Fix a regular value $\eta_q$ and let $\mathcal{V}$ be a simply connected neighborhood of $p$ containing $q$ compactly contained in $\mathcal{U}$. Any choice of local coordinates on $\mathcal{U}$ gives a trivialization $T^{\ast}\mathcal{M}\negthickspace\upharpoonright_{\mathcal{U}} = \mathcal{U} \times \mathbb{R}^2$, which we implicitly use to extend $\eta_q$ to a constant 1-form on $\mathcal{U}$. Finally, consider a \lq\lq{}bump\rq\rq{}\ function $f \colon \mathcal{M} \to \mathbb{R}$ whose support is contained in $\mathcal{U}$ and such that $f\negthickspace\upharpoonright_{\mathcal{V}} = 1$; the 1-form $\eta\rq{} = \eta - f\eta_q$ satisfies the claim. \end{proof}
As we just saw in Lemma \ref{lemmaadl}, the number and type of zeros of a 1-form $\eta \in \mathcal{A}$ are invariant under small perturbations; the following lemma ensures that certain closed leaves are stable as well. Let us recall that a loop is homologous to zero in $\mathcal{M}$ if and only if it disconnects the surface.
\begin{lemma}\label{lemma:homtozero} If a saddle loop $\gamma$ is homologous to zero, then it is stable under small perturbations. \end{lemma} \begin{proof} Let $\gamma$ be a saddle loop homologous to zero passing through a saddle $p$ of $\eta$ and let $\eta\rq{}$ be an $\varepsilon$-perturbation of $\eta$. We consider the connected component $\mathcal{M}\rq{}$ of $\mathcal{M}$ not containing leaves passing through $p$: leaves close to $\gamma$ are homotopic one to the other, hence we have a cylinder (or an island, if $\mathcal{M}\rq{}$ contains only a maximum or minimum for $\eta$) filled with closed \lq\lq{}parallel\rq\rq{}\ leaves, each of which is homologous to zero. On this cylinder, the integrals of $\eta$ and $\eta\rq{}$ along any closed curve are zero; thus they admit Hamiltonians $H$ and $H + f$. If $\varepsilon$ is sufficiently small, the level sets for $H+ f$ are again closed curves, hence the cylinder of closed leaves survives under small perturbations. \end{proof}
In general, saddle connections and saddle loops non-homologous to zero disappear under arbitrarily small perturbations, as shown by the following Examples \ref{example1} and \ref{example2}, respectively. \begin{example}\label{example1} Consider the function $H(x,y) = y(x^2+y^2-1)$ and the standard area form $\omega = \diff x \wedge \diff y$ defined on $\mathbb{R}^2$. There are four critical points for $\diff H$: the saddles $(\pm 1,0)$, the minimum $(0, \sqrt{3}/3)$ and the maximum $(0, -\sqrt{3}/3)$; moreover there is a saddle connection supported on the interval $(-1,1)$. Using bump functions, define a function $f$ equal to $(\varepsilon/4)(1-(x+ 1)^2+y^2)$ if $(x,y)$ is $\varepsilon$-close to $(-1,0)$, and $0$ if the distance between $(x,y)$ and $(-1,0)$ is greater than $2\varepsilon$. Then it is possible to see that the perturbed 1-form $\diff(H + f)$ admits no saddle connections, see Figures 1(a) and 1(b). \end{example}
\begin{figure}
\caption{Orbits of the flow given by the Hamiltonian $H(x,y) = y(x^2+y^2-1)$.}
\label{fig:Example1}
\caption{Orbits of the flow given by the perturbed Hamiltonian $H+f$.}
\label{fig:Example2}
\end{figure}
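The orbits of Example \ref{example1} can be traced numerically: with $V \equiv 1$, the vector field is $W = (\partial_y H)\partial_x - (\partial_x H)\partial_y$, and $H$ is a first integral along its orbits. The sketch below (a standard RK4 integration; the starting point and step size are arbitrary choices) checks this conservation up to the integration error.

```python
def W(p):                                # Hamiltonian vector field for H(x,y) = y(x^2+y^2-1)
    x, y = p
    return (x*x + 3*y*y - 1.0, -2.0*x*y)  # (dH/dy, -dH/dx)

def H(p):
    x, y = p
    return y*(x*x + y*y - 1.0)

def rk4_step(p, dt):                     # one classical Runge-Kutta step
    k1 = W(p)
    k2 = W((p[0] + dt/2*k1[0], p[1] + dt/2*k1[1]))
    k3 = W((p[0] + dt/2*k2[0], p[1] + dt/2*k2[1]))
    k4 = W((p[0] + dt*k3[0], p[1] + dt*k3[1]))
    return (p[0] + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            p[1] + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

p = (0.1, 0.4)                           # a point inside the upper island
h0 = H(p)
for _ in range(5000):
    p = rk4_step(p, 1e-3)
assert abs(H(p) - h0) < 1e-6             # H is conserved along the orbit
```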
The following example uses the dichotomy for the orbits of a linear flow on the torus.
\begin{example}\label{example2} Consider the torus $\mathbb{T}^2=\mathbb{R}^2 / \mathbb{Z}^2$ and construct $\eta \in \mathcal{A}_{1,1}$ in the following way. Fix $0<\delta< \frac{1}{8}$ and let $\eta$ be defined in the strip $(2\delta, 1-2\delta) \times (\frac{1}{2} - \delta, \frac{1}{2} +\delta)$ as $(x-\frac{1}{2})(x-\frac{1+\delta}{2}) \diff x + (y-\frac{1}{2}) \diff y$ and outside $(\delta, 1-\delta) \times (\frac{1}{2} - 2 \delta, \frac{1}{2} + 2 \delta)$ as $\diff x$; using a symmetric bump function it is possible to do so in such a way that every orbit is periodic. The 1-form $\eta$ has a minimum at $(\frac{1+\delta}{2},\frac{1}{2})$ and a saddle at $(\frac{1}{2},\frac{1}{2})$, hence a saddle loop not homologous to zero. Take a bump function $\varepsilon f(x,y) =\varepsilon f(y)$ depending on $y$ only such that $\varepsilon f(y) = \varepsilon$ for every $y \in [- \delta, \delta] \text{ mod }\mathbb{Z}$ and equal to 0 outside $[-2\delta, 2\delta] \text{ mod }\mathbb{Z}$. The perturbed form $\eta + \varepsilon f(y) \diff y$ coincides with $\eta$ in $[0,1) \times (\frac{1}{2} - 2 \delta, \frac{1}{2} + 2 \delta)$, in which leaves enter vertically. Outside that region, the vector field defining the flow is $ \varepsilon f(y) \partial_x - \partial_y$, thus the displacement of any leaf in the $x$-coordinate after winding once around the torus is given by $\int_{\mathbb{T}^2} \varepsilon f$. Hence, for any $\varepsilon$ such that this integral is a rational number, the saddle loop is preserved; otherwise, if $\int \varepsilon f$ is irrational, the saddle loop vanishes. \end{example} The previous example shows that neither the set of 1-forms in $\mathcal{A}$ with saddle loops non-homologous to zero nor its complement is an open set, and similarly if we consider saddle connections. However, both these cases are \newword{exceptional}, as we are going to describe in the next subsection.
\subsection{Measure class}\label{sectionmeasureclass}
We want to define a measure class (namely, a notion of null sets and full measure sets) on each open set $\mathcal{A}_{s,l}$; later it will be restricted to an open and dense subset. Let $\Sigma = \Sigma(\eta)$ be the finite set of singular points of a given $\eta \in \mathcal{A}_{s,l}$ and fix a basis $\gamma_1, \dots, \gamma_m$ of the first relative homology group $H_1(\mathcal{M}, \Sigma, \mathbb{R})$; here $m = 2g+l+s-1$. If $\eta\rq{}$ is a perturbation of $\eta$, we can identify $H_1(\mathcal{M}, \Sigma(\eta), \mathbb{R})$ with $H_1(\mathcal{M}, \Sigma(\eta\rq{}), \mathbb{R})$ via the \newword{Gauss-Manin connection}, i.e.~via the identification of the lattices $H_1(\mathcal{M}, \Sigma(\eta), \mathbb{Z})$ and $H_1(\mathcal{M}, \Sigma(\eta\rq{}), \mathbb{Z})$. Define the \newword{period coordinates} of $\eta$ as $$ \Theta (\eta) = \left( \int_{\gamma_1} \eta, \dots, \int_{\gamma_m} \eta \right)\in \mathbb{R}^m. $$ The map $\Theta$ is well-defined in a neighborhood of $\eta$. Moreover, the next proposition, which is a variation of Moser\rq{}s Homotopy Trick \cite{moser:trick}, shows it is a complete invariant for isotopy classes (recall that an \newword{isotopy} between $\eta$ and $\eta\rq{}$ is a family of smooth diffeomorphisms $\{\psi_t \colon \mathcal{M} \to \mathcal{M} \}_{t \in [0,1]}$ with $\psi_0 = \Id$ and $\psi_1^{\ast}(\eta\rq{}) = \eta$).
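As a sanity check on the count $m = 2g+l+s-1$ (a standard fact, not specific to this paper): for $\Sigma \neq \emptyset$, the long exact sequence of the pair $(\mathcal{M}, \Sigma)$ gives $\rank H_1(\mathcal{M}, \Sigma, \mathbb{R}) = 2g + \#\Sigma - 1$, and here $\#\Sigma = s+l$. For instance, for the 1-forms in $\mathcal{A}_{1,1}$ on the torus of Example \ref{example2},
\[
m \;=\; 2g + l + s - 1 \;=\; 2 + 1 + 1 - 1 \;=\; 3 .
\]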
\begin{prop} Let $\eta \in \mathcal{A}_{s,l}$ be fixed. There exists a neighborhood $\mathcal{U}$ of $\eta$ such that for all $\eta\rq{} \in \mathcal{U}$ there is an isotopy $\{\psi_t \}_{t \in [0,1]}$ between $\eta$ and $\eta\rq{}$ if and only if $\Theta(\eta)=\Theta(\eta\rq{})$. \end{prop} \begin{proof} If $\eta$ and $\eta\rq{}$ are isotopic, then for any element $\gamma_j$ of the basis of $H_1(\mathcal{M}, \Sigma(\eta), \mathbb{Z})$ we have $$ \int_{\gamma_j}\eta = \int_{\gamma_j} \psi_1^{\ast} \eta\rq{} = \int_{\psi_1 \circ \gamma_j} \eta\rq{}, $$ hence the claim.
Conversely, let $\eta\rq{}$ be a small perturbation of $\eta$ and suppose that they have the same period coordinates. Up to an isotopy, we can assume that $\Sigma(\eta) = \Sigma(\eta\rq{})$.
Consider the convex combinations $\eta_t = (1-t)\eta + t \eta\rq{}$ for $t \in [0,1]$. To construct $\{\psi_t\}$ such that $ \psi_t^{\ast}(\eta_t) =\eta_0 = \eta$, we look for a smooth non-autonomous vector field $\{ X_t \}$ such that $\psi_t$ is the flow induced by $\{ X_t \}$. It is enough for $\{ X_t \}$ to satisfy \begin{equation} 0= \frac{\diff}{\diff t} \psi_t^{\ast}(\eta_t) = \psi^{\ast}_t \left( \frac{\diff}{\diff t} \eta_t + \mathscr{L}_{X_t}\eta_t \right). \label{eq:Lie1} \end{equation} The previous equation holds if $\frac{\diff}{\diff t} \eta_t + \mathscr{L}_{X_t}\eta_t =0$. Notice that $\frac{\diff}{\diff t} \eta_t = \eta\rq{} - \eta$, which, by hypothesis, is cohomologous to zero, since the integral over any closed loop on $\mathcal{M}$ is zero. Hence, there exists a global function $U$ over $\mathcal{M}$ such that $\frac{\diff}{\diff t} \eta_t = \diff U$ and then we can rewrite \eqref{eq:Lie1} as $\diff (U + X_t \lrcorner\ \eta_t) =0$. If $W_t$ denotes the vector field associated to $\eta_t$, i.e.~$W_t \lrcorner\ \omega = \eta_t$, the equation to be solved becomes $-U= X_t \lrcorner\ \eta_t = \omega(W_t, X_t)$.
On the set $\Sigma$ of critical points, the vector field $W_t$ vanishes; thus a necessary condition for the existence of a solution is that $U(p)=0$ for any $p \in \Sigma$. It is possible to choose $U$ satisfying this condition: $U$ is defined up to a constant and if $p,q \in \Sigma$, then $U(p) = U(q)$ because
$$
U(p)-U(q) = \int_q^p \diff U = \int_q^p \eta - \int_q^p \eta\rq{} = 0.
$$ In a neighborhood of any point $q \in \mathcal{M} \setminus \Sigma$, we have $(W_t)_q \neq 0$ since we assumed $\Sigma(\eta) = \Sigma(\eta\rq{})$; by the nondegeneracy of $\omega$, a solution $X_t$ exists. This concludes the proof. \end{proof}
Notice that if $\gamma$ is a leaf for $\eta$, then $\psi_1 \circ \gamma$ is a leaf for $\eta\rq{}$, since $\eta\rq{}\negthickspace\upharpoonright_{\psi_1 \circ \gamma} = \eta\rq{}((\psi_1)_{\ast}(\dot{\gamma})) = (\psi_1^{\ast}\eta\rq{})(\dot{\gamma}) = \eta\negthickspace\upharpoonright_{\gamma}=0$. Therefore, $\psi_1$ realises an orbit equivalence between the locally Hamiltonian flows induced by $\eta$ and $\eta\rq{}$, which is $\mathscr{C}^{\infty}$ away from the critical set.
\begin{defin}\label{definmeasureclass} We equip $\mathcal{A}_{s,l}$ with the measure class $\Theta^{\ast}( \misura_{\mathbb{R}^m})$ given by the pull-back of the Lebesgue measure $\misura_{\mathbb{R}^m}$ on $\mathbb{R}^m$ via $\Theta$. \end{defin} We want to study the dynamics induced by \newword{typical} 1-forms with respect to this measure class. We remark that if $\eta$ has a saddle loop non-homologous to zero or a saddle connection, then, up to a change of basis of $H_1(\mathcal{M}, \Sigma(\eta), \mathbb{R})$, one of the period coordinates of $\eta$ is zero; in particular, the set of such 1-forms is a null set.
Let us remark that if the locally Hamiltonian flow is minimal, then $l=0$ and $-s = \chi(\mathcal{M})$; in this case, as recalled in the introduction, Ulcigrai in \cite{ulcigrai:absence} and \cite{ulcigrai:weakmixing} proved that \newword{almost every} $\eta$ induces a non-mixing but weakly mixing flow.
\section{Suspension flows over IETs}
In this section, we are going to represent the restriction of a locally Hamiltonian flow $\{\varphi_t\}_{t \in \mathbb{R}}$ to a minimal component as a suspension flow over an interval exchange transformation. We recall all the relevant definitions for the reader\rq{}s convenience.
An \newword{Interval Exchange Transformation} $T$ of $d$ intervals (IET for short) is an orientation-preserving piecewise isometry of the unit interval $I = [0,1]$; namely, it is the datum of a permutation $\pi$ of $d$ elements and a vector $\underline{\lambda} = (\lambda_i)$ in the standard $d$-simplex $\Delta_{d}$: the interval $I$ is partitioned into the subintervals $I_j=I_j^{(0)} = [a_{j-1}, a_{j})$ of length $\lambda_j$, which are rearranged by $T$ according to the permutation $\pi$. Formally, let $a_j = \sum_{k\leq j} \lambda_k$ and $a_j\rq{} = \sum_{k \leq \pi(j)} \lambda_{\pi^{-1}(k)}$ and define $T(x) = x - a_{j-1} + a_{j-1}\rq{}$ for $x \in [a_{j-1}, a_{j-1} + \lambda_j)$. We refer to \cite{viana:iet2} or \cite{viana:iet} for background on IETs.
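The simplest non-trivial example (standard, and only meant to illustrate the notation) is a $2$-IET: for $d=2$ and $\pi = (2\,1)$ the two subintervals are swapped, so $T$ is a rotation of the circle,
\[
T(x) \;=\;
\begin{cases}
x + \lambda_2, & x \in [0, \lambda_1),\\
x - \lambda_1, & x \in [\lambda_1, 1),
\end{cases}
\qquad \text{i.e. } T(x) = x + \lambda_2 \mod 1 .
\]
In this sense, IETs generalize circle rotations to $d \geq 2$ intervals.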
Given a strictly positive function $f \in L^1([0,1])$, a \newword{suspension flow over an IET with roof function $f$} is defined in the following way. Consider the quotient space \begin{equation}\label{eq:suspspace} \mathcal{X}:= \bigslant{ \{ (x,y) \in [0,1] \times \mathbb{R}: 0 \leq y \leq f(x)\} }{\sim}, \end{equation} where $\sim$ denotes the equivalence relation generated by the pairs $\{ (x,f(x)),$ $ (T(x),0)\}$. We define the \newword{suspension flow} $\{\phi_t\}_{t \in \mathbb{R}}$ over $([0,1], T, \diff x)$ with roof function $f$ to be the flow on $\mathcal{X}$ given by $\phi_t (x,y) = (x, y+t)$ for $-y \leq t \leq f(x)-y$, and then extended to all times $t \in \mathbb{R}$ via the identification $\sim$. Intuitively, a point $(x,y) \in \mathcal{X}$ under the action of the flow moves vertically with unit speed up to the point $(x, f(x))$, which is identified with $(T(x),0)$; after this \lq\lq jump\rq\rq, it continues in the same way.
The flow $\{\phi_t\}_{t \in \mathbb{R}}$ can be described explicitly. For any function $g \colon I \to \mathbb{R}$ and for $r \geq 0$, denote by $S_r(g)(x)$ the $r$-th Birkhoff sum of $g$ along the orbit of $x \in I$, i.e. $$ S_r(g)(x) := \sum_{i=0}^{r-1} g(T^i x); $$ then, for $t\geq 0$, \begin{equation}\label{eq:sfj} \phi_t(x,0) = \left(T^{r(x,t)}x, t-S_{r(x,t)}(f)(x) \right), \end{equation} where $r(x,t)$ denotes the maximum $r \geq 0$ such that $S_r(f)(x) \leq t$.
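As a sanity check of \eqref{eq:sfj}, consider the degenerate case of a constant roof $f \equiv c > 0$ (which does not satisfy the singularity conditions introduced below, but makes the formula transparent): then $S_r(f)(x) = rc$, so $r(x,t) = \lfloor t/c \rfloor$ and
\[
\phi_t(x,0) \;=\; \left( T^{\lfloor t/c \rfloor} x,\; t - c \left\lfloor \frac{t}{c} \right\rfloor \right), \qquad t \geq 0 ,
\]
i.e.~the flow returns to the base $\{y=0\}$ exactly at the multiples of $c$, on which it acts by $T$.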
The set of suspension flows we are going to consider consists of the ones for which the roof function $f$ has \newword{asymmetric logarithmic singularities}, namely it satisfies the following properties: \begin{itemize} \item[(a)] $f$ is not defined on the $d-1$ points $a_1, a_2, \dots, a_{d-1} \in (0,1)$; \item[(b)] $f \in \mathscr{C}^{\infty} \left( [0,1] \setminus \bigcup_{i=1}^{d-1} \{a_i\} \right)$; \item[(c)] the minimum of $f$ over its domain of definition exists and is strictly positive, $\min f >0$; \item[(d)] for each $j = 1, \dots, d-1$ there exist positive constants $C_j^{+}, C_j^{-}$ and a neighborhood $\mathcal{U}_j$ of $a_j$ such that \begin{equation*} \begin{split} &f(x) = C_j^{+} \modulo{\log (x-a_j)} + e(x), \qquad \text{for } x \in \mathcal{U}_j, x>a_j,\\ &f(x) = C_j^{-} \modulo{\log (a_j-x)} +\widetilde{e}(x), \qquad \text{for } x \in \mathcal{U}_j, x< a_j; \end{split} \end{equation*} where $e, \widetilde{e}$ are smooth bounded functions on $[0,1]$. Moreover, $C^{+} \neq C^{-}$, where $C^{+} := \sum_j C_j^{+}$ and $C^{-}:= \sum_j C_j^{-}$. \end{itemize}
Our main result is the following; it was proved by Ulcigrai \cite{ulcigrai:mixing} in the case the roof function $f$ has one asymmetric logarithmic singularity at the origin. In this paper, we generalize her techniques to the case of finitely many singularities.
\begin{teo}\label{th:IETgoal} For almost every IET $T$ and for any $f$ with asymmetric logarithmic singularities, the suspension flow $\{\phi_t\}_{t \in \mathbb{R}}$ over $([0,1], T, \diff x)$ with roof function $f$ is mixing. \end{teo} The asymmetry condition in (d) is the key property to produce mixing. From this result, we deduce mixing for typical locally Hamiltonian flows with asymmetric saddle loops, namely the following result. \begin{teo}\label{th:mix} There exists an open and dense set $\mathcal{A}_{s,l}\rq{} \subset \mathcal{A}_{s,l}$ of smooth 1-forms with $s$ saddle points and $l$ minima or maxima such that for almost every $\eta \in \mathcal{A}_{s,l}\rq{}$ with at least one saddle loop homologous to zero and for any minimal component $\mathcal{M}\rq{} \subset \mathcal{M}$, the restriction of the induced flow $\{\varphi_t\}_{t \in \mathbb{R}}$ to $\mathcal{M}\rq{}$ is mixing. \end{teo} The sets $\mathcal{A}_{s,l}\rq{}$ are the subsets of $\mathcal{A}_{s,l}$ for which the asymmetry condition in (d) is satisfied; we are going to construct them explicitly in the next subsection. Theorem \ref{th:mix} follows from Theorem \ref{th:IETgoal} by constructing an appropriate Poincaré section, showing that the first return map is an IET and, if the locally Hamiltonian flow is induced by a 1-form in $\mathcal{A}_{s,l}\rq{}$, then the first return time function $f$ has asymmetric logarithmic singularities.
\subsection{Proof of Theorem \ref{th:mix}}
Let $\eta \in \mathcal{A}_{s,l}$; as we remarked in \S\ref{sectionmeasureclass}, 1-forms with saddle connections form a zero measure set, therefore we can assume $\eta$ has no saddle connections. Let $\mathcal{M}_1, \dots, \mathcal{M}_k$ be the minimal components and let $\mathcal{M}_{k+1}, \dots, \mathcal{M}_{k+l}$ be the islands, i.e.~the periodic components containing a minimum or a maximum of $\eta$ (in addition there can be cylinders of periodic orbits, but we do not label them). Each $\mathcal{M}_i$ is bounded by saddle loops homologous to zero. Denote by $p_{1,i}, \dots, p_{s_i,i}$ the singularities of $\eta$ contained in the closure of $\mathcal{M}_i$, which are saddles, and let $\{ q_1, \dots, q_l \} $, with $q_i \in \mathcal{M}_{k+i}$, be the set of maxima or minima of $\eta$, which is empty if $l=0$.
\paragraph{Step 1: Poincaré section.} Let us consider one of the minimal components $\mathcal{M}_i$. We first show that we can find a Poincaré section $I$ so that the first return map $T \colon I \to I$ is an IET of $d_i$ intervals, where \begin{equation}\label{eq:di} \left(\sum_{i=1}^k d_i \right) + l + (k-1) = 2g+(l+s) -1 = \rank H_1(\mathcal{M}, \Sigma, \mathbb{Z}). \end{equation} Fix a segment $I\rq{} \subset \mathcal{M}_i$ transverse to the flow, containing no critical points and whose endpoints $a$ and $b$ lie on outgoing saddle leaves. Let $a_1, \dots, a_{d_i-1} \in I\rq{}$ be the pull-backs of the saddle points via the flow, namely the points $a_j \in I\rq{}$ are such that $ \lim_{t \to \infty} \varphi_t(a_j) = p_{r,i}$ for some $r=1, \dots, s_i$ and $\varphi_t(a_j) \notin I\rq{}$ for any $t > 0$, see Figure \ref{fig:1}. Up to relabelling, we can suppose that the points are in consecutive order, namely the segment $[a, a_j]\subset I\rq{}$ with endpoints $a$ and $a_j$ is contained in $[a,a_{j+1}]$ for all $j = 1, \dots, d_i-2$. Let $a_0$ be the closest point to $a_1$ contained in $[a,a_1]$ which lies on an outgoing saddle leaf and similarly let $a_{d_i}$ be the closest point to $a_{d_i-1}$ contained in $[a_{d_i-1},b]$ which lies on an outgoing saddle leaf. We consider the segment $I = [a_0, a_{d_i}]$, see Figure \ref{fig:1}.
\begin{figure}
\caption{Example of the construction of the Poincaré section; in blue one of the curves $\gamma_j$ and in green its dual $\sigma_j$.}
\label{fig:1}
\end{figure}
Let $T \colon I \to I$ be the first return map of $\varphi_t$ to $I$ and $f \colon I \to \mathbb{R}_{>0}$ the first return time function. Clearly, $T$ is not defined on $\{a_1, \dots, a_{d_i-1}\}$, since the return time of those points is infinite. Consider the connected component $I_j $ of $I \setminus \{a_1, \dots, a_{d_i-1}\}$ bounded by $a_{j-1}$ and $ a_{j}$. For any $z \in I_j$ and for any $0 \leq t \leq f(z)$, by compactness, the point $\varphi_t(z)$ is bounded away from the singularities, thus the map $\varphi_t$ is continuous at $z$. In particular, $T$ is continuous at any $z \in I_j$ and $T(I_j)$ is a connected segment in $I$. Since $I$ is transverse to the flow, we have that $\int_I \eta \neq 0$; up to reversing the orientation we can assume that $\int_I \eta >0$. Moreover, since there are no critical points of $\eta$ in the interior of $I$, the integral of $\eta$ is strictly increasing along $I$, i.e.~$\int_{a_0}^{z_1} \eta < \int_{a_0}^{z_2} \eta$ whenever the segment $[a_0, z_1]$ is strictly contained in $[a_0, z_2]$. The 1-form $\eta$ defines a measure on $I$, which is easily seen to be $T$-invariant. By considering the coordinates on $I$ given by $z \mapsto \int_{a_0}^z \eta /(\int_I \eta)$, we can identify $I=[0,1]$ and $\eta\negthickspace\upharpoonright_{I}$ with the Lebesgue measure $\misura$ on $I$. The map $T\negthickspace\upharpoonright_{I_j}$ is an isometry for any $j=1, \dots, d_i$; thus $T$ is an IET of $d_i$ intervals.
Let us prove \eqref{eq:di}. By construction, $d_i-1$ is the number of pull-backs of the saddle points: each saddle with a saddle loop homologous to zero admits one pull-back, while the other saddles admit two. Each of the former is uniquely paired with a minimum or a maximum, or with another minimal component via a cylinder of periodic orbits, hence there are exactly $l+2(k-1)$ of them. We deduce $\sum_{i=1}^k (d_i-1) + l +2k-2 = 2s$; therefore $(\sum_i d_i) + l + (k-1) = 2s+1 = 2g+(s+l)-1 = \rank H_1(\mathcal{M}, \Sigma, \mathbb{Z})$ by the Poincaré-Hopf formula.
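In the minimal case the count \eqref{eq:di} reduces to a familiar one: if $k=1$ and $l=0$, then, as recalled above, $s = -\chi(\mathcal{M}) = 2g-2$, and \eqref{eq:di} reads
\[
d_1 \;=\; 2g + s - 1 \;=\; 4g-3 \;=\; \rank H_1(\mathcal{M}, \Sigma, \mathbb{Z}),
\]
the usual number of exchanged intervals for a section of a minimal flow with $2g-2$ simple saddles.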
\paragraph{Step 2: return time function.} We now investigate the first return time function $f $. Clearly, $f$ is smooth in $I \setminus \{a_1, \dots, a_{d_i-1}\}$ and blows up to infinity at the points $a_j$. Since $f > 0$ on $I$, it admits a positive minimum $\min f(x) >0$. In order to understand the type of singularities of $f$, we have to compute the time spent by an orbit travelling close to a saddle point $p$. By Theorem \ref{th:morse1}, we can suppose that a local Hamiltonian at $p = (0,0)$ is $H(x,y) = xy$ and the area form is $\omega = V(x,y) \diff x \wedge \diff y$. Let $(x(t),y(t))$ be an orbit of the flow; as we have already remarked, $H$ is constant along it, $H(x(t),y(t)) = c$. The vector field is given by $W = \frac{x}{V(x,y)} \partial_x - \frac{y}{V(x,y)} \partial_y$, so that the time spent for travelling from the point $(c/a, a)$ to the point $(a, c/a)$ is $$ T = \int_0^T \diff t = \int_0^T \frac{V(x,c/x) \dot{x}}{x} \diff t = \int_{c/a}^{a} \frac{V(x,c/x)}{x} \diff x. $$ Lemma A.1 in \cite{fraczek:infinite} yields that $T = -V(0,0) \log c + e(c,a)$, where $e$ is a smooth function of bounded variation. Therefore, when the \lq\lq{}energy level\rq\rq{} $c$ approaches $0$, or equivalently when the leaf gets close to the saddle leaf, the time spent close to $p$ blows up as $\modulo{\log c}$. Denote by $C_1, \dots, C_{s_i}$ the limits of $ T(c)/\modulo{\log c}$ as $c \to 0$ for the saddle points $p_{1,i}, \dots, p_{s_i,i}$. Suppose that $a_j$ corresponds to a saddle $p_{r,i}$ belonging to a saddle loop homologous to zero. Since there are no saddle connections, there exists a small neighborhood $\mathcal{U} \subset I$ of $a_j$ which contains points that do not come close to any other singularity of $\eta$ before coming back to $I$. Because of the saddle loop, the logarithmic singularity of $f$ at $a_j$ has different constants: points in $I \cap \mathcal{U}$ on different sides of $a_j$ travel either once or twice near $p_{r,i}$. 
Namely, for some smooth bounded functions $e,\widetilde{e}$ we either have \begin{equation*} \begin{split} f(x) = -C_j \log\modulo{x-a_j} + e(x), \quad &\text{ for $x \in I \cap \mathcal{U}, x > a_j$} \\
f(x) =- 2C_j \log \modulo{a_j-x} + \widetilde{e}(x), \quad &\text{ for $x \in I \cap \mathcal{U}, x < a_j$}, \end{split} \end{equation*} or similar equalities with the conditions $x>a_j$ and $x<a_j$ reversed. On the other hand, if the point $a_j$ corresponds to a singularity $p_{r,i}$ with no saddle loop, then the constants on different sides of $a_j$ are the same. We remark that this phenomenon was discovered by Arnold \cite{arnold:torus} in the genus one case and exploited by Sinai and Khanin \cite{sinai:mixing} to prove mixing.
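The logarithmic blow-up can be made completely explicit in the model case $V \equiv 1$ (a normalization used here only for illustration). The flow of $W$ is then $\dot{x} = x$, $\dot{y} = -y$, so $x(t) = x(0)e^{t}$, and the orbit at energy level $H = c$, with $0 < c < a^2$, travels from the incoming transversal point $(c/a, a)$ to the outgoing one $(a, c/a)$ in time
\[
T(c) \;=\; \log \frac{a}{c/a} \;=\; -\log c + 2 \log a \;\sim\; \modulo{\log c} \quad \text{as } c \to 0^{+},
\]
in accordance with Lemma A.1 in \cite{fraczek:infinite} with $V(0,0) = 1$.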
\paragraph{Step 3: asymmetry.} For property (d) to hold, the sum of the constants on the left side of the singularities has to be different from the one on the right. \begin{defin} Let $\mathcal{A}_{s,l}\rq{}$ be the subset of $\mathcal{A}_{s,l}$ of smooth 1-forms such that no linear combination of the $C_j$ with coefficients in $\{-1,0,1\}$ equals zero.
\end{defin}
In particular, for all $\eta \in\mathcal{A}_{s,l}\rq{}$, we have that $C^{+} \neq C^{-}$. Let us show that $\mathcal{A}_{s,l}\rq{}$ is an open and dense set. Let $p=p_{j,i}$ be a singularity of $\eta$. For any small perturbation of $\eta$, there exists a change of coordinates $\psi$ close to the identity such that we can write the Hamiltonian for the perturbed 1-form as $H\rq{}=x\rq{}y\rq{}$. Thus the return time is $T(c) = -V(0,0) |\det J(\psi)_p| \log c + \widetilde{e}$, where $J(\psi)_p$ is the Jacobian matrix of $\psi$ at $p$ and $\widetilde{e}$ is another smooth function of bounded variation. If $\eta \notin \mathcal{A}_{s,l}\rq{}$, fix a saddle $p$ and for any $\varepsilon>0$ consider the perturbed local Hamiltonian $H\rq{}=(1-\varepsilon^2)xy$ at $p$; then $\psi(x,y) = ((1-\varepsilon)x, (1+\varepsilon)y)$, so that $|\det J(\psi)_p| = 1-\varepsilon^2$. Since the other constants $C_j$ are unchanged, it is possible to choose arbitrarily small $\varepsilon$ such that $\eta\rq{} \in \mathcal{A}_{s,l}\rq{}$, which is hence dense. In order to see that $\mathcal{A}_{s,l}\rq{}$ is open, let $xy+f(x,y)$ be the perturbed Hamiltonian at a singularity, with $\norma{f}_{\mathscr{C}^{\infty}} <\varepsilon$, and let $(x\rq{},y\rq{}) = \psi(x,y) = (\psi_1(x,y),\psi_2(x,y))$ be the associated change of coordinates as above. Then, $f(x,y) = \psi_1(x,y)\psi_2(x,y) - xy = P \circ(\Id - \psi)(x,y)$, where $P$ denotes the product $P(x,y)=xy$. Thus, there exists $\varepsilon\rq{}>0$ such that $\norma{\Id - \psi}_{\mathscr{C}^{\infty}} < \varepsilon\rq{}$ on a neighborhood of $p$; hence $|\det J(\psi)_p| \in [1-\varepsilon\rq{}, 1+\varepsilon\rq{}]$. Since this holds for any singularity $p$, the set $\mathcal{A}_{s,l}\rq{}$ is open.
\paragraph{Step 4: full measure sets.} Finally, we have to prove that if a property holds for almost every IET, then it holds for almost every $\eta \in \mathcal{A}_{s,l}\rq{}$ w.r.t.~the measure class of Definition \ref{definmeasureclass}. Fix a minimal component $\mathcal{M}_i$ and let $\widetilde{\mathcal{M}}_i$ be the open neighborhood of $\mathcal{M}_i$ obtained by adding all cylinders or islands of periodic orbits adjacent to $\mathcal{M}_i$. Let $\Sigma_i$ be the set of singularities in $\widetilde{\mathcal{M}}_i$, or equivalently in the closure of $\mathcal{M}_i$.
For each interval $I_j$ as above, let $\gamma_j$ be a path starting from a point $x \in I_j$ different from $a_{j-1}, a_j$, moving along the orbit of $x$ up to the first return to $I$ and closing it up in $I$, see Figure \ref{fig:1}. Set $\mathcal{B}_i = \{\gamma_j : 1 \leq j \leq d_i\}$. Let $\{\xi_r\}$ be the set of the boundary components of $\mathcal{M}_i$. By \cite[Lemma 2.17]{viana:iet}, $\mathcal{B}_i \cup \{\xi_r\}$ is a generating set for $H_1(\widetilde{\mathcal{M}}_i,\mathbb{Z})$. Moreover, a proof analogous to \cite[Lemma 2.18]{viana:iet} shows that any loop around a singularity is a linear combination of the $\gamma_j$ (if the singularity is not contained in a saddle loop), and of the $\gamma_j$ and $\xi_r$ (if the singularity $p_{r,i}$ is contained in a saddle loop). In particular, $\mathcal{B}_i \cup \{\xi_r\} $ is a generating set for $H_1(\widetilde{\mathcal{M}}_i \setminus \Sigma_i,\mathbb{Z})$.
\begin{lemma}\label{th:mayerviet} Let $\mathcal{B}_i$ be as above. There exists a basis $\mathcal{B}$ of $H_1(\mathcal{M} \setminus \Sigma,\mathbb{Z})$ given by the disjoint union of the $\mathcal{B}_i$ together with the homology classes of the loops $\xi$ bounding the $\widetilde{\mathcal{M}}_i$. \end{lemma} \begin{proof} Consider two minimal components $\mathcal{M}_a$ and $\mathcal{M}_b$ separated by a cylinder of periodic orbits; the same proof applies if $\mathcal{M}_b$ is an island containing a maximum or a minimum. Notice that $\widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b$ is a cylinder of periodic orbits containing no singularity. Let $\xi_a \in H_1( \widetilde{\mathcal{M}}_a \setminus \Sigma_{a}, \mathbb{Z})$ and $\xi_b \in H_1( \widetilde{\mathcal{M}}_b \setminus \Sigma_{b}, \mathbb{Z})$ be the boundary components in $\widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b$. We remark that $\xi_a$ and $\xi_b$ are homologous.
Let $i,j,\widetilde{i},\widetilde{j}$ be the inclusion maps in the following diagram. \begin{displaymath}
\xymatrix{\ & \widetilde{\mathcal{M}}_a \cup \widetilde{\mathcal{M}}_b \setminus \Sigma_{a} \cup \Sigma_{b} &\ \\
\widetilde{\mathcal{M}}_a \setminus \Sigma_{a} \ar@{->}[ur]^{\widetilde{i}} & \ & \widetilde{\mathcal{M}}_b \setminus \Sigma_{b} \ar@{->}[ul]_{\widetilde{j}}\\
\ & \widetilde{\mathcal{M}}_a\cap \widetilde{\mathcal{M}}_b \ar@{->}[ul]^i \ar@{->}[ur]_j & \ \\ } \end{displaymath} The Mayer-Vietoris sequence \begin{equation*} \begin{split} &\cdots \xrightarrow{ } H_1( \widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b, \mathbb{Z}) \xrightarrow{(i_{\ast},j_{\ast})} H_1( \widetilde{\mathcal{M}}_a \setminus \Sigma_{a}, \mathbb{Z}) \oplus H_1( \widetilde{\mathcal{M}}_b \setminus \Sigma_{b}, \mathbb{Z}) \xrightarrow{\widetilde{i}_{\ast}-\widetilde{j}_{\ast}} \\ & \xrightarrow{\widetilde{i}_{\ast}-\widetilde{j}_{\ast}} H_1( \widetilde{\mathcal{M}}_a \cup \widetilde{\mathcal{M}}_b \setminus \Sigma_{a} \cup \Sigma_{b}, \mathbb{Z}) \xrightarrow{\partial_{\ast}} H_0( \widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b , \mathbb{Z})\xrightarrow{(i_{\ast},j_{\ast})} \cdots \end{split} \end{equation*} is exact. We have that $H_1( \widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b , \mathbb{Z}) = \langle \xi \rangle$, where $\xi=\xi_a=\xi_b$, and the image im$(i_{\ast},j_{\ast})$ is equal to $\langle (\xi_a, \xi_b)\rangle$. By exactness, it follows that $ H_1( \widetilde{\mathcal{M}}_a \setminus \Sigma_{a} ,\mathbb{Z}) \oplus H_1( \widetilde{\mathcal{M}}_b \setminus \Sigma_{b}, \mathbb{Z}) / \langle (\xi_a, \xi_b)\rangle \simeq \text{im}(\widetilde{i}_{\ast}-\widetilde{j}_{\ast})$. Since $(i_{\ast},j_{\ast})\colon H_0( \widetilde{\mathcal{M}}_a \cap \widetilde{\mathcal{M}}_b, \mathbb{Z}) \to H_0( \widetilde{\mathcal{M}}_a \setminus \Sigma_{a}, \mathbb{Z}) \oplus H_0( \widetilde{\mathcal{M}}_b \setminus \Sigma_{b}, \mathbb{Z})$ is injective, by exactness we have im$(\partial_{\ast}) = \{0\}$; hence ker$(\partial_{\ast}) = H_1( \widetilde{\mathcal{M}}_a \cup \widetilde{\mathcal{M}}_b \setminus \Sigma_{a} \cup \Sigma_{b}, \mathbb{Z}) =\text{im}(\widetilde{i}_{\ast}-\widetilde{j}_{\ast} )$. 
We have obtained that $$ H_1( \widetilde{\mathcal{M}}_a \setminus \Sigma_{a} ,\mathbb{Z}) \oplus H_1( \widetilde{\mathcal{M}}_b \setminus \Sigma_{b}, \mathbb{Z}) / \langle (\xi_a, \xi_b)\rangle \simeq H_1( \widetilde{\mathcal{M}}_a \cup \widetilde{\mathcal{M}}_b \setminus \Sigma_{a} \cup \Sigma_{b}, \mathbb{Z}); $$ in particular, the set $\mathcal{B}_a \cup \mathcal{B}_b $ is contained in a generating set for $H_1( \widetilde{\mathcal{M}}_a \cup \widetilde{\mathcal{M}}_b \setminus \Sigma_{a} \cup \Sigma_{b}, \mathbb{Z})$ and the union is disjoint in the image, i.e.~they all give distinct elements.
Iterate this process for all components. The generating set we obtain is the disjoint union of the $\mathcal{B}_i$ together with the homology classes of the loops $\xi$ bounding the $\widetilde{\mathcal{M}}_i$. Since the cardinality of $\mathcal{B}_i $ is $d_i$, the cardinality of the set obtained is $ \sum_{i=1}^k d_i + l + (k-1) $. By formula \eqref{eq:di}, it equals the rank of $H_1(\mathcal{M} \setminus \Sigma,\mathbb{Z})$, hence it is a basis. \end{proof}
\begin{cor}\label{corcor} Every full measure set of length vectors $\underline{\lambda} \in \Delta_d$ corresponds to a full measure set of 1-forms $\eta \in \mathcal{A}_{s,l}\rq{}$. \end{cor} \begin{proof} It is sufficient to show that for any fixed $\eta\in \mathcal{A}_{s,l}\rq{}$ we can choose a basis of $H_1(\mathcal{M}, \Sigma,\mathbb{Z})$ such that the lengths of the subintervals of the induced IETs on all minimal components appear as some of the coordinates of $\Theta(\eta)$.
Let $\mathcal{B}$ be the basis of $H_1(\mathcal{M} \setminus \Sigma,\mathbb{Z})$ given by Lemma \ref{th:mayerviet}. Denote by $\widehat{\mathcal{M}}$ the surface obtained from $\mathcal{M}$ by removing a small ball centered at each singularity. By the Excision Theorem, $H_1(\mathcal{M},\Sigma, \mathbb{Z}) \simeq H_1(\widehat{\mathcal{M}}, \partial \widehat{\mathcal{M}}, \mathbb{Z})$ and the Poincaré-Lefschetz duality implies that the latter is isomorphic to $H^1(\widehat{\mathcal{M}},\mathbb{Z}) \simeq H^1(\mathcal{M}\setminus \Sigma, \mathbb{Z})$. At the homology level, we then have a perfect pairing given by the intersection form. Consider the basis $\{ \sigma_j \}$, where $\sigma_j \in H_1(\mathcal{M},\Sigma, \mathbb{Z}) $ is the dual path to $\gamma_j$, see Figure \ref{fig:1}. If $\sigma_j \subset \mathcal{M}_i$, the associated period coordinates are given by $\int_{\sigma_j}\eta = (a_j-a_{j-1})\int_I \eta$, which are the lengths of the subintervals defining the IET $T$ on $I \subset \mathcal{M}_i$ (up to the constant $\int_I \eta$). \end{proof}
Theorem \ref{th:IETgoal} implies that for every permutation $\pi$, for almost every length vector $\underline{\lambda} \in \Delta_d$ and for every function $f$ with asymmetric logarithmic singularities the suspension flow over $T=(\pi, \underline{\lambda})$ with roof function $f$ is mixing. By Corollary \ref{corcor}, we can consider the corresponding full measure set of 1-forms $\eta \in \mathcal{A}_{s,l}\rq{}$. By the previous steps, the restriction of the induced locally Hamiltonian flow to any minimal component can be represented as a suspension flow over an IET with a roof function with asymmetric logarithmic singularities, which is mixing by Theorem \ref{th:IETgoal}. This concludes the proof.
\section{Rauzy-Veech Induction and Diophantine conditions}\label{section:def}
The Rauzy-Veech algorithm is an inducing scheme which produces a sequence of IETs defined on nested subintervals of $[0,1]$ shrinking towards zero. We assume some familiarity with the Rauzy-Veech Induction, referring to \cite{viana:iet} for details. We introduce some notation and terminology that we will use in the proof of Theorem \ref{th:IETgoal}.
We will denote by $R T$ the IET obtained in one step of the algorithm and, for any $n \geq 0$, we let $T^{(n)} := R^n T$. The map $T^{(n)}$ is defined on a subinterval $I^{(n)} \subset I$ of length $\lambda^{(n)}$. Let $\underline{\lambda}^{(n)} \in (\lambda^{(n)})^{-1} \Delta_{d}$ be the vector whose components ${\lambda}_j^{(n)}$ are the lengths of the subintervals $I_j^{(n)} \subset I^{(n)}$ defining $T^{(n)}$; it satisfies the following relation \begin{equation*} \underline{\lambda}^{(n)} = (A^{(n)})^{-1} \underline{\lambda}, \quad \text{ with } A^{(n)} \in \SL_{d}(\Z). \end{equation*} We can write $$ A^{(n)} = A_0 \cdots A_{n-1} := A(T) \cdots A(T^{(n-1)}), $$ where $(A^{(n)})^{-1}$ is a matrix cocycle (sometimes called the \newword{Rauzy-Veech lengths cocycle}). For $m<n$, define also $$ A^{(m,n)}= A_m \cdots A_{n-1} =A(T^{(m)}) \cdots A(T^{(n-1)}), $$ so that \begin{equation}\label{eq:cocycleprop} \underline{\lambda}^{(n)} = (A^{(m,n)})^{-1} \underline{\lambda}^{(m)}. \end{equation}
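For intuition, in the simplest case $d=2$, $\pi = (2\,1)$ (so that $T$ is a circle rotation), the algorithm reduces, up to the labelling conventions of \cite{viana:iet}, to the \lq\lq{}slow\rq\rq{} continued fraction (Farey) algorithm: if $\lambda_1 > \lambda_2$, one step of the induction gives
\[
\underline{\lambda}^{(1)} = (\lambda_1 - \lambda_2,\ \lambda_2), \qquad \underline{\lambda} = A_0\, \underline{\lambda}^{(1)} \ \text{ with } \ A_0 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \SL_2(\Z),
\]
so that the products $A^{(n)}$ encode the continued fraction expansion of the rotation number.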
Denote by $h_j^{(n)}$ the first return time of any $x \in I_j^{(n)}$ to the induced interval $I^{(n)}$ and by $\underline{h}^{(n)}$ the vector whose components are $h_j^{(n)}$; let $h^{(n)}$ be the maximum $h_j^{(n)}$ for $j=1, \dots, d$. The following result is well-known. \begin{lemma}\label{lemma:entries} The $(i,j)$-entry $A^{(n)}_{i,j}$ of $A^{(n)}$ is equal to the number of visits of any point $x \in I^{(n)}_j$ to $I_i$ up to the first return time $h_j^{(n)}$ to $I^{(n)}$. In particular, $ h_j^{(n)} = \sum_{i=1}^{d} A^{(n)}_{i,j}. $ \end{lemma} Let $Z_j^{(n)}$ be the orbit of the interval $I_j^{(n)}$ up to the first return time to $I^{(n)}$, namely $$ Z_j^{(n)} := \bigcup_{r=0}^{h_j^{(n)}-1} T^r I_j^{(n)}. $$ We remark that the above is a disjoint union of intervals by definition of first return time. For $0 \leq r < h_j^{(n)}$, let $ F^{(n)}_{j,r} := T^{r} (I^{(n)}_{j})$. The intervals $F^{(n)}_{j,r}$ form a partition of $I$, that we will denote $\mathcal{Z}^{(n)} $.
\begin{remark}\label{remark:endpoint} Because of the definition of the Rauzy-Veech Induction, the partition $\mathcal{Z}^{(n)} = \{ F^{(n)}_{j,r} : 0 \leq r < h_j^{(n)} , 1 \leq j \leq d \}$ is a refinement of the partition $\mathcal{Z}^{(n-1)} $; in particular, for any $n \geq 0$, each point $a_k$ for $0 \leq k \leq d$ belongs to the boundary of some $ F^{(n)}_{j,r} $. \end{remark}
We say that any IET for which the result below holds satisfies the \newword{mixing Diophantine condition with integrability power $\tau$}; it was proved by Ulcigrai in \cite{ulcigrai:mixing}. We recall that the \newword{Hilbert distance} $d_H$ on the positive orthant of $\mathbb{R}^{d}$ is defined by $d_H(\underline{a}, \underline{b}) = \log ( \max_i \{a_i/ b_i\} / \min_i \{ a_i / b_i\})$ for any positive vectors $\underline{a}, \underline{b} \in \mathbb{R}^{d}$. \begin{teo}[Mixing DC, {\cite[Proposition 3.2]{ulcigrai:mixing}}]\label{th:ulcigrai} Let $1 < \tau < 2$. For almost every IET there exist a sequence $\{n_l\}_{l \in \mathbb{N}}$ and constants $\nu, \kappa >1$, $0<D<1$, $D\rq{}>0$ and $\overline{l} \in \mathbb{N}$ such that for every $l \in \mathbb{N}$ we have: \begin{itemize} \item[(i)] $\nu^{-1} \leq \lambda_i^{(n_l)} / \lambda_j^{(n_l)} \leq \nu$ for all $1\leq i,j \leq d$; \item[(ii)] $\kappa^{-1} \leq h_i^{(n_l)} / h_j^{(n_l)} \leq \kappa$ for all $1\leq i,j \leq d$; \item[(iii)] $A^{(n_l, n_{l+ \overline{l}})} > 0$ and, if $d_H$ denotes the Hilbert distance on the positive orthant in $\mathbb{R}^{d}$, $$ d_H \left( A^{(n_l, n_{l+ \overline{l}})} \underline{a}, A^{(n_l, n_{l+ \overline{l}})} \underline{b}\right) \leq \min \{ D d_H( \underline{a}, \underline{b}), D\rq{}\}, $$ for any vectors $\underline{a}, \underline{b}$ in the positive orthant of $\mathbb{R}^{d}$; \item[(iv)] $\lim_{l \to \infty} l^{-\tau} \norma{A^{(n_l, n_{l+ 1})} }=0$. \end{itemize} Moreover, any IET satisfying these properties is uniquely ergodic. \end{teo}
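For a concrete feeling of $d_H$: for $d=2$ and $\underline{a} = (1,2)$, $\underline{b} = (2,1)$, the ratios $a_i/b_i$ are $1/2$ and $2$, so
\[
d_H(\underline{a}, \underline{b}) \;=\; \log \frac{2}{1/2} \;=\; \log 4, \qquad \text{while} \quad d_H(\underline{a}, t\,\underline{a}) = 0 \ \text{ for every } t>0 ;
\]
thus $d_H$ is a genuine distance only on the projectivized positive cone, and property (iii) states that the matrices $A^{(n_l, n_{l+\overline{l}})}$ uniformly contract it along the subsequence $\{n_l\}$.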
\begin{cor}[{\cite[Lemmas 3.1, 3.2 and 3.3]{ulcigrai:mixing}}]\label{cor:thulcigrai} Consider the sequence $\{n_l\}_{l \in \mathbb{N}}$ given by Theorem \ref{th:ulcigrai}; the following properties hold. \begin{itemize} \item[(i)] For each $i,j \in \{ 1, \dots, d\}$, $$ \frac{1}{d \nu \kappa h_j^{(n_l)}} \leq \lambda_i^{(n_l)} \leq \frac{\kappa \nu}{ h^{(n_l)} }. $$ \item[(ii)] For any fixed $i \in \mathbb{N}$, $$ \frac{h^{(n_l)} }{ h^{(n_{l+i\overline{l}})}} \leq \frac{\kappa}{d^{i}}. $$ \item[(iii)] For any fixed $i \in \mathbb{N}$, $\log \norma{A^{(n_l, n_{l+ i})}} = o(\log h^{(n_l)})$. \end{itemize} \end{cor} \begin{proof} Kac\rq{}s Theorem implies that $\sum_j h_j^{(n_l)} \lambda_j^{(n_l)} = 1$, from which it follows $\max_j h_j^{(n_l)} \lambda_j^{(n_l)} \geq 1/d$ and $\min_j h_j^{(n_l)} \lambda_j^{(n_l)} \leq 1$. These inequalities together with properties (i) and (ii) in Theorem \ref{th:ulcigrai} yield the first claim (i). The matrix $A^{(n_l, n_{l+\overline{l}})}$ has positive integer entries by (iii) in Theorem \ref{th:ulcigrai}, so $\min_j h_j^{(n_{l+i\overline{l}})} \geq d^i \min_j h_j^{(n_l)}$, from which (ii) follows. Finally, (iii) is obtained by combining (iv) in Theorem \ref{th:ulcigrai} and $\log h^{(n_l)} \geq \lfloor l/\overline{l} \rfloor \log d$, which is a consequence of (ii) above. \end{proof}
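The Kac identity $\sum_j h_j^{(n)} \lambda_j^{(n)} = 1$ invoked at the start of the proof can be checked numerically in the simplest case $d=2$, a circle rotation induced on an interval. The sketch below (an illustration, not part of the argument) estimates the Kac integral $\int_J n(x)\, dx$ for the golden rotation induced on $J = [0,\alpha)$, where the first return time takes only the values $1$ and $2$.

```python
import math

# Golden-ratio rotation T(x) = x + alpha (mod 1), induced on J = [0, alpha).
# The first return time to J takes the two values 1 and 2, and Kac's theorem
# gives sum_j h_j * lambda_j = integral over J of the return time = 1.
alpha = (math.sqrt(5) - 1) / 2

def first_return_time(x, max_iter=100):
    """First n >= 1 with T^n(x) in J = [0, alpha)."""
    y = x
    for n in range(1, max_iter):
        y = (y + alpha) % 1.0
        if y < alpha:
            return n
    raise RuntimeError("no return within max_iter iterations")

# Midpoint Riemann-sum estimate of the Kac integral over J = [0, alpha).
N = 50_000
kac_estimate = sum(first_return_time((i + 0.5) * alpha / N)
                   for i in range(N)) * alpha / N
```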
\section{The quantitative mixing estimates}\label{section:quantitative}
In order to prove mixing for the suspension flow $\{\phi_t\}_{t \in \mathbb{R}}$, we show that, for a dense set of smooth functions, the correlations tend to zero, and we provide an upper bound on the speed of decay; see Theorem \ref{th:decayofcorr} below.
The first step is to estimate the growth of the Birkhoff sums of the derivative $f\rq{}$ of the roof function $f$, see Theorem \ref{th:BS}. For this part (see the Appendix \S\ref{section5bs}), we follow the same strategy used by Ulcigrai in \cite{ulcigrai:mixing}, namely, using the mixing Diophantine condition of Theorem \ref{th:ulcigrai}, we prove that \lq\lq{}most\rq\rq{}\ points in any orbit equidistribute in $I$ and we bound the error given by the other points. In the second part (see \S\ref{section6}), we construct a family of partitions of the unit interval following the strategy used by Ulcigrai in \cite[\S4]{ulcigrai:mixing} providing explicit bounds on their size; they are used to define a subset of the phase space of the suspension flow on which we can estimate the shearing of transversal segments. We then use a \newword{bootstrap trick} similar to the one introduced by Forni and Ulcigrai in \cite{forniulcigrai:timechanges} to reduce the study of speed of decay of correlations to the deviations of ergodic averages for IETs and finally we apply the following result by Athreya and Forni~\cite{athreyaforni:iet}. \begin{teo}[{\cite[Theorem 1.1]{athreyaforni:iet}}]\label{th:atfo} Let $S$ be a compact surface and let $\Omega$ be a connected component of a stratum of the moduli space of unit-area holomorphic differentials on $S$. There exists a $\theta >0$ such that the following holds. 
For all $\omega \in \Omega$, there is a measurable function $K_{\omega} \colon \mathbb{S}^1 \to \mathbb{R}_{>0}$ such that for almost all $\alpha \in \mathbb{S}^1$, for all functions $f$ in the standard Sobolev space $\mathscr{H}^1(S)$ and for all nonsingular $x \in S$, \begin{equation}\label{eq:deviationforsurf} \modulo{\int_0^T f \circ \varphi_{\alpha, t}(x) \diff t - T \int f \diff A_{\omega}} \leq K_{\omega}(\alpha) \norma{f}_{\mathscr{H}^1(S)} T^{1-\theta}, \end{equation} where $\varphi_{\alpha, t}$ is the directional flow on $S$ in direction $\alpha$ and $A_{\omega}$ is the area form on $S$ associated to $\omega$. \end{teo} Let $\mathscr{C}^r(\sqcup I_j)$ be the space of functions $h \colon I \to \mathbb{R}$ such that the restriction of $h$ to the interior of each $I_j$ can be extended to a $\mathscr{C}^r$ function on the closure of $I_j$. In \cite[\S3]{marmimoussayoccoz:linearization}, Marmi, Moussa and Yoccoz introduced the \newword{boundary operator}\footnote{In their paper, it is denoted by $\partial$.} $\mathcal{B} \colon \mathscr{C}^0(\sqcup I_j) \to \mathbb{R}^s$ to characterize which functions in $\mathscr{C}^1(\sqcup I_j) $ are induced by functions on a suspension over the interval exchange transformation, see \cite[Proposition 8.5]{marmimoussayoccoz:linearization}. We recall their result for the reader's convenience. Given an IET $T = T(\pi, \underline{\lambda})$ of $d$ intervals, define the permutation $\widehat{\pi}$ on $\{1, \dots, d\} \times \{L,R\}$ by \begin{equation*} \begin{split} &\widehat{\pi}(i,R) = (i+1,L) \text{ for } 1\leq i \leq d-1 \text{ and } \widehat{\pi}(d,R) = (\pi^{-1}(d), R),\\ &\widehat{\pi}(i,L) = (\pi^{-1}(\pi(i)-1), R) \text{ for } i \neq \pi^{-1}(1) \text{ and } \widehat{\pi}(\pi^{-1}(1), L) = (1,L). \end{split} \end{equation*} The cycles of $\widehat{\pi}$ are canonically associated to the singularities of any suspension over $T$ via Veech\rq{}s zippered rectangles. 
The boundary operator $\mathcal{B}$ is given by $$ (\mathcal{B} h)_C = \sum_{v \in C} \epsilon(v) h(v), $$ where $C$ is any cycle in $\widehat{\pi}$, $\epsilon(v) = -1$ if $v = (i,L)$ and $\epsilon(v)=+1$ if $v = (i,R)$ and $h(v)$ is the limit of $h$ at the left (resp., right) endpoint of the $i$-th interval if $v=(i,L)$ (resp., if $v=(i,R)$); see \cite[Definition 3.1]{marmimoussayoccoz:linearization}. They proved the following result. \begin{prop}[{\cite[Proposition 8.5]{marmimoussayoccoz:linearization}}]\label{th:mamoyo} Let $S$ be a suspension over $T$ via Veech\rq{}s zippered rectangles and let $\mathscr{C}^r_c(S)$ be the space of $\mathscr{C}^r$ functions over $S$ with compact support in the complement of the singularities. For $ f \in \mathscr{C}^r_c(S)$, define $$ \mathcal{I}f(x) = \int_0^{\tau(x)} f \circ \varphi_t(x) \diff t, $$ where $\tau(x)$ is the first return time of $x$ to the interval $I$ and $\varphi_t(x)$ is the vertical flow on $S$. Then, $\mathcal{I}$ maps $\mathscr{C}^r_c(S)$ continuously into $\mathscr{C}^r(\sqcup I_j)$ and its image is the subspace of functions $h$ satisfying $\mathcal{B}h = \mathcal{B}(\partial_xh) = \cdots = \mathcal{B}(\partial_x^rh) =0$. \end{prop}
\begin{cor}\label{th:athreyaforni} For every permutation $\pi$ of $d$ elements there exists $0 \leq \theta <1$ such that for almost every IET $T = T(\pi, \underline{\lambda})$, for every $h \in \mathscr{C}^1(\sqcup I_j)$ satisfying $\mathcal{B}h = \mathcal{B} (\partial_x h) = 0$, there exists $C_h>0$ for which $$ \modulo{S_r(h)(x) - r \int_0^1 h(x) \diff x} \leq C_h r^{\theta}, $$ uniformly on $x \in I$. \end{cor} \begin{proof} Since almost every translation surface $S$ has a Veech\rq{}s zippered rectangle presentation (see \cite[Proposition 3.30]{viana:iet2}), Theorem \ref{th:atfo} implies that for almost every IET $T$ there exists a suspension $S$ over $T$ via zippered rectangles such that an estimate like \eqref{eq:deviationforsurf} holds for the vertical flow $\{\varphi_t\}$. Let $h$ be as in the statement of the corollary. By Proposition \ref{th:mamoyo}, there exists a function $f \in \mathscr{C}^1_c(S)$ such that $\mathcal{I}f = h$. The conclusion follows from~\eqref{eq:deviationforsurf}. \end{proof}
\begin{defin} We define $\mathscr{M}$ to be the set of IETs which satisfy the mixing Diophantine Condition of Theorem \ref{th:ulcigrai} and $\mathscr{Q}$ to be the set of IETs for which the conclusion of Corollary \ref{th:athreyaforni} holds. We remark that $\mathscr{M} \cap \mathscr{Q}$ has full measure. \end{defin}
Consider the auxiliary functions $u_k, v_k, \widetilde{u}_k, \widetilde{v}_k \colon I \to \mathbb{R}_{>0}$ obtained by restricting to $I$ the 1-periodic functions defined by $$ u_k(x) = 1 - \log (x-a_k), \quad \widetilde{u}_k(x) = -u_k\rq{}(x) = \frac{1}{x-a_k} \quad \text{ for }x \in (a_k,a_k +1], $$ and $$ v_k(x) = 1 - \log (a_k-x), \quad \widetilde{v}_k(x)=v_k\rq{}(x) =\frac{1}{a_k-x} \quad \text{ for } x \in [a_k-1,a_k), $$ for $k = 1, \dots, d-1$. It will be convenient to identify functions over $I$ with 1-periodic functions over $\mathbb{R}$.
Fix $\tau\rq{}$ such that $\tau/2 < \tau\rq{} < 1$, where $1<\tau<2$ is the integrability power of $T$ of Theorem \ref{th:ulcigrai}, and define the sequence $$ \sigma_l = \left( \frac{\log \norma{A^{(n_l, n_{l+1})} }}{\log h^{(n_l)}} \right)^{\tau\rq{}}. $$ The set of points for which we are able to obtain good bounds for the Birkhoff sums of $f\rq{}$ and $f\rq{}\rq{}$ contains those points whose $T$-orbit up to time $\lfloor \sigma_l h^{(n_{l+1})} \rfloor$ stays $\sigma_l \lambda^{(n_l)}$-away from all the singularities, namely the complement of the set \begin{equation}\label{eq:sigmlk} \Sigma_l = \bigcup_{k=1}^{d-1} \Sigma_l(k), \text{\ \ where\ \ } \Sigma_l(k)= \bigcup_{i=0}^{\lfloor \sigma_l h^{(n_{l+1})} \rfloor} T^{-i} \{ x \in I : \modulo{a_k-x} \leq \sigma_l \lambda^{(n_l)} \}. \end{equation} We will show in Proposition \ref{th:stretpart} that $\misura(\Sigma_l) \to 0$ as $l$ goes to infinity. The estimates we need are the following; the proof is given in the Appendix \S\ref{section5bs}. Ulcigrai proved an analogous statement for the case of one singularity at zero, see \cite[Corollaries 3.4, 3.5]{ulcigrai:mixing}; the proof in \S\ref{section5bs} follows her strategy, adapted here to obtain also uniform bounds on the Birkhoff sums of $f$. \begin{teo}\label{th:BS} Consider $T \in \mathscr{M}$ and let $f$ be a roof function with asymmetric logarithmic singularities; let $C=-C^{+} +C^{-} = - \sum_j C_j^{+} + \sum_j C_j^{-}$. Define $$ \widetilde{U}(r,x) := \max_{1\leq k \leq d-1} \max_{0 \leq i < r} \widetilde{u}_k(T^ix), \qquad \widetilde{V}(r,x) := \max_{1 \leq k \leq d-1} \max_{0 \leq i < r} \widetilde{v}_k(T^ix). 
$$ For any $\varepsilon > 0$ there exists $\overline{r} >0$ such that, for $r \geq \overline{r}$, if $h^{(n_l)} \leq r < h^{(n_{l+1})}$, $x \notin \Sigma_l$ and $x$ is not a singularity of $S_r(f)$, then \begin{align*} S_r(f)(x) \leq &2 r + \const \max_{1\leq k \leq d-1} \max_{0 \leq i <r} \modulo{\log \modulo{T^ix - a_k}} \\ S_r(f\rq{})(x) \leq &(C+ \varepsilon) r \log r + ( C^{-}+1) ( \lfloor \kappa \rfloor +2) \widetilde{V}(r,x) \\ S_r(f\rq{})(x) \geq &(C- \varepsilon) r \log r - ( C^{+}+1) ( \lfloor \kappa \rfloor +2) \widetilde{U}(r,x) \\ \modulo{S_r(f\rq{}\rq{})(x)} \leq &(2 \max\{ \widetilde{U}(r,x),\widetilde{V}(r,x) \} +1)(C^{+}+C^{-}+\varepsilon) \times \\
& \times \big( r \log r + (\lfloor \kappa \rfloor+2) (\widetilde{U}(r,x) + \widetilde{V}(r,x)) \big), \end{align*} where we recall $\kappa$ is given in Theorem \ref{th:ulcigrai}. \end{teo}
The previous estimates are interesting in their own right, since they are used by Kanigowski, Ku\l{}aga-Przymus and Ulcigrai in \cite{KKU:multmixing} to strengthen mixing to mixing of all orders for a full-measure set of flows. In the proof of Theorem \ref{th:decayofcorr} below, we will exploit them only for a fixed $0 < \varepsilon < \modulo{C}$.
We recall from \eqref{eq:suspspace} that $\mathcal{X}$ is the phase space of the suspension flow $\{\phi_t\}$. Let $\Phi \colon \mathcal{X} \to \mathcal{M}\rq{}$ be the measurable isomorphism between $\{\phi_t\}$ and the locally Hamiltonian flow $\{\varphi_t\}$ on the minimal component $\mathcal{M}\rq{}$. We prove a bound on the speed of the decay of correlations for the pull-backs of functions in $\mathscr{C}^1_c(\mathcal{M}\rq{})$. \begin{teo}\label{th:decayofcorr} Let $\{\phi_t\}_{t \in \mathbb{R}}$ be a suspension flow over an IET $T \in \mathscr{M} \cap \mathscr{Q}$ with roof function with asymmetric logarithmic singularities. Then, there exists $0 < \gamma <1$ such that for all $g, h \in \Phi^{\ast}(\mathscr{C}^1_c(\mathcal{M}\rq{}))$ with $\int_{\mathcal{X}}g \diff \misura = 0$ we have $$ \modulo{ \int_{\mathcal{X}}(g \circ \phi_t) h \diff \misura } \leq \frac{C_{g,h}}{(\log t)^{\gamma}}, $$ for some constant $C_{g,h}>0$. \end{teo} Theorem \ref{th:1.2} is an immediate consequence of Theorem \ref{th:decayofcorr}.
\begin{proof}[Proof of Theorem \ref{th:IETgoal}] We show that Theorem \ref{th:decayofcorr} implies Theorem \ref{th:IETgoal}. It is sufficient to prove that $\Phi^{\ast}(\mathscr{C}^1_c(\mathcal{M}\rq{}))$ is dense in $L^2(\mathcal{X})$. We claim that $\Phi^{\ast}(\mathscr{C}^1_c(\mathcal{M}\rq{}))$ contains the dense subspace $\mathscr{C}^1_c(\mathcal{X})$ of $\mathscr{C}^1$ functions with compact support on $\mathcal{X}$. Indeed, we show that for any compact set $\mathcal{K} \subset \mathcal{M}\rq{} \setminus \Sigma$ in the complement of the singularities, $\Phi$ is a diffeomorphism between $\Phi^{-1}(\mathcal{K})$ and $\Phi(\Phi^{-1}(\mathcal{K})) \subseteq \mathcal{K}$.
For any $p \in \Phi(\Phi^{-1}(\mathcal{K})) $, choose local coordinates around $p$ such that the vector field generating the flow $\{\varphi_t\}$ is $\partial_y$; then, if $\omega = V(x,y) \diff x \wedge \diff y$, we have that $\eta = - V(x,y) \diff x$. On $\mathcal{X}$, the 1-form $\eta$ equals $\diff x$; in these coordinates, $\Phi$ is the solution to the well-defined system of ODEs $\partial_x \Phi = -1/(V \circ \Phi)$ and $\partial_y\Phi = 0$. By compactness, the $\mathscr{C}^{\infty}$-norm of $V$ is uniformly bounded, and so is the $\mathscr{C}^{\infty}$-norm of $\Phi$; thus $\Phi$ is a diffeomorphism. \end{proof}
\begin{remark}\label{remark:3} The argument above shows that any $g \in \Phi^{\ast}(\mathscr{C}^1_c(\mathcal{M}\rq{}))$ is a $\mathscr{C}^1$ function on $\mathcal{X}$. Moreover, define the operator $\mathcal{I}$ as in Proposition \ref{th:mamoyo}, namely \begin{equation}\label{eq:idig} (\mathcal{I}g) (x)= \int_0^{f(x)} g(x,y) \diff y. \end{equation} The same proof as \cite[Proposition 8.5]{marmimoussayoccoz:linearization} shows that $\mathcal{I}g \in \mathscr{C}^1(\sqcup I_j)$ and $\mathcal{B}(\mathcal{I}g) = \mathcal{B}(\partial_x(\mathcal{I}g)) =0$, in particular $\mathcal{I}g$ satisfies the hypotheses of Corollary \ref{th:athreyaforni}. \end{remark}
\section{Proof of Theorem \ref{th:decayofcorr}}\label{section6}
The first part of the proof consists of defining a subset $X(t) \subset \mathcal{X}$ on which we can estimate the shearing of segments transverse to the flow in the flow direction. The construction of $X(t)$ follows the lines of \cite[\S4]{ulcigrai:mixing}, although here we need to make all estimates explicit. In the second part of the proof, we reduce correlations to integrals along long pieces of orbits by a bootstrap trick analogous to \cite{forniulcigrai:timechanges} and we conclude by applying the result by Athreya and Forni on the deviations of ergodic averages in the form of Corollary \ref{th:athreyaforni}.
Within this section, we will always assume that $f$ has asymmetric logarithmic singularities and $T \in \mathscr{M} \cap \mathscr{Q}$.
\subsection{Preliminary partitions}
Let $R(t) := \lfloor t/m \rfloor +2$, where $m = \min \{ 1, \min f\}$. A \newword{partial partition} $\mathcal{P}$ is a collection of pairwise disjoint subintervals $J =[a,b)$ of the unit interval $I = [0,1]$. \begin{prop}\label{th:prelpart} Let $0< \alpha < 1$. For each $M >1$ there exist $t_0 >0$ and partial partitions $\mathcal{P}_p(t)$ for $t \geq t_0$ such that $1 - \misura(\mathcal{P}_p(t)) = O\left( (\log t)^{-\alpha} \right)$ and for each $J \in \mathcal{P}_p(t)$ we have \begin{itemize} \item[(i)] $T^j$ is continuous on $J$ for each $0 \leq j \leq R(t)$; \item[(ii)] $\frac{1}{t(\log t)^{\alpha}} \leq \misura(J) \leq \frac{2}{t (\log t)^{\alpha}}$; \item[(iii)] $\dist( T^j J, a_k) \geq \frac{M}{t(\log t)^{\alpha}}$ for $0 \leq j \leq R(t)$ and $0 \leq k \leq d$; \item[(iv)] $f(T^jx)\leq C_f \log t$ for each $0 \leq j \leq R(t)$ and for all $x \in J$, where $C_f>0$ is a fixed constant. \end{itemize} \end{prop} \begin{proof} Let $\mathcal{P}_0(t)$ be the partition of $I$ into continuity intervals for $T^{R(t)}$. Consider the set $$ U_1 = \bigcup_{k=0}^d \bigcup_{j=0}^{R(t)} \left\{ x \in I : \modulo{x - T^{-j}a_k} \leq \frac{2M}{t(\log t)^{\alpha}} \right\}, $$
and let $\mathcal{P}_1(t)$ be obtained from $\mathcal{P}_0(t)$ by removing all partition elements fully contained in $U_1$. Then $$ 1 - \misura(\mathcal{P}_1(t)) \leq \misura(U_1) \leq (d+1) \left( \frac{t}{m} +3\right) \frac{4M}{t(\log t)^{\alpha}} = O\left((\log t)^{-\alpha}\right). $$ Any $J \in \mathcal{P}_1(t)$ contains at least one point outside $U_1$, therefore, since the endpoints of $J$ are centres of the balls in $U_1$, we have $\misura(J) \geq 4M/(t (\log t)^{\alpha})$. Let $$ U_2 = \bigcup_{k=0}^d \bigcup_{j=0}^{R(t)} T^{-j} \left\{ x \in I : \modulo{x - a_k} \leq \frac{M}{t(\log t)^{\alpha}} \right\}, $$ and let $\mathcal{P}_2(t) = \mathcal{P}_1(t) \setminus U_2$. As before we have that $$ \misura(\mathcal{P}_1(t)) - \misura(\mathcal{P}_2(t)) \leq \misura(U_2) = O\left( (\log t)^{-\alpha}\right). $$ By construction, property (iii) is satisfied. Moreover, any interval $J \in \mathcal{P}_2(t)$ is either an interval in $\mathcal{P}_1(t)$ or is obtained from one of them by cutting an interval of length at most $M/(t(\log t)^{\alpha})$ on one or both sides, hence $\misura(J) \geq 2M/(t(\log t)^{\alpha})$. Cut each interval $J \in \mathcal{P}_2(t)$ in such a way that (ii) is satisfied and call $\mathcal{P}_p(t)$ the resulting partition. Finally, there exists a constant $C_f\rq{}$ such that, by (iii), for all $x \in \mathcal{P}_p(t)$ and all $0 \leq j \leq R(t)$ we have $f(T^jx) \leq C_f\rq{} \log(t (\log t)^{\alpha}) \leq (C_f\rq{} +1) \log t$, up to increasing $t_0$. Thus (iv) holds with $C_f = C_f\rq{}+1$. \end{proof}
\paragraph{Rough lower bound on $r(x,t)$.} We want to bound the number $r(x,t)$ of iterations of $T$ up to time $t$ (see \eqref{eq:sfj}). From the definition, $r(x,t) \leq R(t)$. By property (iv) in Proposition~\ref{th:prelpart}, $$ t < S_{r(x,t) + 1}(f)(x) \leq C_f (r(x,t)+1) \log t, $$ which, up to enlarging $t_0$ if necessary, implies \begin{equation}\label{eq:roughlb} r(x,t) > \frac{t}{2C_f \log t}, \end{equation} uniformly for $x \in \mathcal{P}_p(t)$.
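Explicitly, the step hidden in \lq\lq{}up to enlarging $t_0$\rq\rq{}\ is the elementary bound

```latex
r(x,t) > \frac{t}{C_f \log t} - 1 \geq \frac{t}{2 C_f \log t},
\qquad \text{valid as soon as } \frac{t}{2 C_f \log t} \geq 1,
```

which holds for all $t \geq t_0$ after possibly enlarging $t_0$.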
\subsection{Stretching partitions} We refine the partitions $\mathcal{P}_p(t)$ in order for Theorem \ref{th:BS} to hold. Let $l(t) \in \mathbb{N}$ be such that $h^{(n_{l(t)})}\leq R(t) < h^{(n_{l(t)+1})}$. \begin{lemma}\label{lemmaaa} If $ \frac{t}{2C_f \log t} \leq r(x,t) \leq R(t)$, then $h^{(n_{l(t) - L(t)})} \leq r(x,t) < h^{(n_{l(t)+1})}$ for all $x \in \mathcal{P}_p(t)$, where $L(t) =O( \log \log t)$. \end{lemma} \begin{proof} By Corollary \ref{cor:thulcigrai}-(ii), for each $\overline{L} \in \mathbb{N}$ we have $$ h^{(n_{l(t)-\overline{L}\overline{l}})} \leq \frac{\kappa}{d^{\overline{L}}} h^{(n_{l(t)})} \leq \frac{\kappa}{d^{\overline{L}}} R(t)\leq \frac{2 \kappa t}{md^{\overline{L}}}. $$ It is sufficient to choose $\overline{L}$ minimal such that $ 2\kappa t/(m d^{\overline{L}}) < t/(2C_f \log t) $; this choice gives $L(t) = \overline{L}\,\overline{l} =O( \log \log t)$. \end{proof}
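For concreteness, the minimal admissible $\overline{L}$ in the proof above can be written explicitly (a routine computation):

```latex
\frac{2 \kappa t}{m d^{\overline{L}}} < \frac{t}{2 C_f \log t}
\;\Longleftrightarrow\;
d^{\overline{L}} > \frac{4 \kappa C_f \log t}{m},
\qquad \text{so } \overline{L} = \left\lceil \log_d \frac{4 \kappa C_f \log t}{m} \right\rceil,
```

which indeed gives $L(t) = \overline{L}\,\overline{l} = O(\log\log t)$.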
\begin{lemma}\label{th:stimelt} We have that $l(t) = O(\log t)$ and, for any $\varepsilon >0$, $l(t)^{-1} = O\left((\log t)^{-\frac{1}{1+\varepsilon}}\right)$. \end{lemma} \begin{proof} By Corollary \ref{cor:thulcigrai}-(ii) we have $$ d^{\lfloor l(t)/ \overline{l} \rfloor} \leq \kappa h^{(n_{l(t)})} \leq \kappa R(t) \leq \frac{2\kappa t}{m}, $$ so that $l(t) = O(\log t)$. For the other inequality, we use the Diophantine condition (iv) in Theorem \ref{th:ulcigrai} to get
\begin{equation*} \begin{split} \log h^{(n_{l(t)+1})} &\leq \log ( \norma{A^{(n_0, n_{l(t)+1})}}) \leq \log (\norma{A^{(n_{l(t)}, n_{l(t)+1})}} \cdots \norma{A^{(n_0, n_1)}}) \\ & = \sum_{i=0}^{l(t)} \log (\norma{A^{(n_i,n_{i+1})}}) = O \left( \sum_{i=1}^{l(t)} \log(i^{\tau}) \right) \\ &= O \left( \int_1^{l(t)+1} \log x \diff x \right) =O( l(t) \log l(t) ) =O( l(t)^{1+\varepsilon}). \end{split} \end{equation*}
The conclusion follows from $\log h^{(n_{l(t)+1})} \geq \log R(t) \geq \log t$. \end{proof}
We now assume $C^{+}>C^{-}$; the proof in the other case is analogous. \begin{prop}\label{th:stretpart} Suppose $C^{+} > C^{-}$. There exist $t_1 \geq t_0$, constants $C\rq{}, \widetilde{C}\rq{}, C\rq{}\rq{} >0 $ and a family of refined partitions $\mathcal{P}_s(t) \subset \mathcal{P}_p(t)$ for all $t \geq t_1$, with $1-\misura(\mathcal{P}_s(t))= O( (\log t)^{-\alpha\rq{}})$ for some $0<\alpha\rq{}<1$, such that for all $x \in \mathcal{P}_s(t)$ \begin{itemize} \item[(i)] $S_{r(x,t)}(f)(x) \leq 3t$, \item[(ii)] $S_{r(x,t)}(f\rq{})(x) \leq -C\rq{} t \log t$,\\ \item[(iii)] $\modulo{S_{r(x,t)}(f\rq{})(x) }\leq \widetilde{C}\rq{} t \log t$,\\ \item[(iv)] $S_{r(x,t)}(f\rq{}\rq{})(x) \leq \frac{C\rq{}\rq{}}{M} t^2 (\log t)^{1+\alpha}$. \end{itemize} \end{prop} \begin{proof} Recall the definition of $\Sigma_l$ in \eqref{eq:sigmlk} and that $r(x,t)$ is the number of iterations of $T$ applied to $x$ up to time $t$. Theorem \ref{th:BS} provides bounds for the Birkhoff sums $S_{r(x,t)}(f)(x)$ and $S_{r(x,t)}(f\rq{})(x)$ for all $x \notin \Sigma_l$, where $l$ is such that $h^{(n_l)} \leq r(x,t) < h^{(n_{l+1})}$. By Lemma \ref{lemmaaa} we know that $h^{(n_{l(t) - L(t)})} \leq r(x,t) < h^{(n_{l(t)+1})}$ for all $x \in \mathcal{P}_p(t)$, hence to make sure we can apply Theorem \ref{th:BS}, it is sufficient to remove all sets $\Sigma_l$, with $l(t)-L(t) \leq l \leq l(t)$. Thus, we define $$ \widehat{\Sigma}(t) = \bigcup_{k=1}^{d-1} \ \bigcup_{l = l(t)-L(t)}^{l(t)} \Sigma_l(k). $$ Let $\mathcal{P}_s(t)$ be obtained from $\mathcal{P}_p(t)$ by removing all intervals which intersect $\widehat{\Sigma}(t)$. We estimate the total measure of $\mathcal{P}_s(t)$. If $J \in \mathcal{P}_p(t)$ intersects $\widehat{\Sigma}(t)$, then either $ J \subset \widehat{\Sigma}(t)$ or $T^jJ$ contains some point of the form $a_k \pm \sigma_l \lambda^{(n_l)}$ for some $0 \leq j \leq R(t)$ and $ l(t)-L(t) \leq l \leq l(t)$. Therefore, by Lemma \ref{lemmaaa},
\begin{multline*} \misura(\mathcal{P}_p(t))-\misura(\mathcal{P}_s(t)) \leq \misura(\widehat{\Sigma}(t)) + \frac{2}{t(\log t)^{\alpha}} (R(t)+1) 2d(L(t)+1)\\
= \misura(\widehat{\Sigma}(t)) + O\left( \frac{\log \log t}{(\log t)^{\alpha}}\right) = \misura(\widehat{\Sigma}(t)) + O\left( (\log t)^{-\alpha_1}\right), \end{multline*}
for some $\alpha_1 < \alpha$. From Corollary \ref{cor:thulcigrai} we get \begin{multline*} \misura(\widehat{\Sigma}(t)) =O\left( L(t) \sigma_{l(t)}^2 \lambda^{(n_{l(t)})}h^{(n_{l(t)+1})} \right) =O\left( L(t) \sigma_{l(t)}^2 \frac{h^{(n_{l(t)+1})}}{h^{(n_{l(t)})}} \right)\\ =O\left( L(t) \sigma_{l(t)}^2 \norma{A^{(n_{l(t)},n_{l(t)+1})}} \right) =O\left( L(t) \frac{(\log l(t))^{2\tau\rq}}{l(t)^{2 \tau\rq -\tau}} \right) =O\left(\frac{L(t)}{l(t)^{\alpha_2}}\right), \end{multline*} for some $\alpha_2 > 0$, since $2 \tau\rq > \tau$.
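For the reader's convenience, the bookkeeping behind the order of $\sigma_{l(t)}$ used in the last two estimates can be traced as follows. By (iv) of Theorem \ref{th:ulcigrai} and Corollary \ref{cor:thulcigrai},

```latex
\log \norma{A^{(n_l, n_{l+1})}} \leq \tau \log l + O(1),
\qquad
\log h^{(n_l)} \geq \lfloor l/\overline{l} \rfloor \log d,
```

so that $\sigma_l = O\big( (\log l / l)^{\tau\rq{}} \big)$ and hence $\sigma_l^2 \norma{A^{(n_l,n_{l+1})}} = O\big( (\log l)^{2\tau\rq{}} l^{\tau - 2\tau\rq{}} \big)$, which tends to zero since $2\tau\rq{} > \tau$.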
From Lemma \ref{th:stimelt}, we deduce that $$ \misura(\widehat{\Sigma}(t)) =O\left( \frac{\log \log t}{(\log t)^{\frac{\alpha_2}{1+\varepsilon}}} \right) =O\left((\log t)^{-\alpha_3}\right), $$ for some $\alpha_3 >0$, so that $$ 1- \misura(\mathcal{P}_s(t)) \leq (1- \misura(\mathcal{P}_p(t))) + (\misura(\mathcal{P}_p(t))-\misura(\mathcal{P}_s(t))) =O\left( (\log t)^{-\alpha\rq{}}\right), $$ for some $0 < \alpha\rq{} \leq \min \{\alpha_1, \alpha_3\}$.
Fix $0< \varepsilon < - C = C^{+}-C^{-}$. By \eqref{eq:roughlb}, we have $r(x,t) \geq t/ (2C_f \log t ) \geq t_1/ (2C_f \log t_1)$; let us choose $t_1$ such that the latter is greater than $\overline{r}$ in Theorem \ref{th:BS}. By construction, the estimates on the Birkhoff sums of $f$ and $f\rq{}$ hold for all $x \in \mathcal{P}_s(t)$.
\begin{lemma} \label{th:boundsonr} For all $x \in \mathcal{P}_s(t)$ we have that $t/3 \leq r(x,t) \leq R(t) \leq 2t/m$. \end{lemma} \begin{proof} We only have to prove the lower bound. By definition and by the uniform estimates on the Birkhoff sums of $f$ in Theorem \ref{th:BS} we have $$ t < S_{r(x,t) + 1}(f)(x) \leq 2 (r(x,t)+1) + \const \max_{0 \leq i \leq r(x,t)} f(T^ix). $$ Since $f(T^ix) \leq C_f \log t$ for all $x \in \mathcal{P}_s(t)$ by Proposition \ref{th:prelpart}-(iv), the conclusion follows up to increasing $t_1$. \end{proof}
Let us show (ii). By Proposition \ref{th:prelpart}-(iii), we have $\modulo{T^jx-a_k}^{-1} \leq t(\log t)^{\alpha}/M$ for all $0 \leq j \leq R(t)$ and all $k$; hence $$ S_{r(x,t)}(f\rq{})(x) \leq (C + \varepsilon) r(x,t) \log r(x,t) \left( 1 + O\left( \frac{t (\log t)^{\alpha}}{r(x,t)\log r(x,t)}\right)\right). $$ By Lemma \ref{th:boundsonr}, $$ O \left( \frac{t (\log t)^{\alpha}}{r(x,t)\log r(x,t)} \right) = O \left( (\log t)^{\alpha-1}\right); $$ therefore we deduce (ii) with $-C\rq{} = (C+\varepsilon)/4 <0$. Proceeding in an analogous way, one gets (i), (iii) and (iv). \end{proof}
\subsection{Final partition and mixing set}
\begin{prop}\label{th:mixpart} There exist $\alpha\rq{}\rq{} >0$ and $t_2 \geq t_1$ such that for all $t \geq t_2$ there exists a family of refined partitions $\mathcal{P}_f(t) \subset \mathcal{P}_s(t)$ with $1- \misura(\mathcal{P}_f(t) ) = O( (\log t)^{-\alpha\rq{}\rq{}})$ such that for all $x \in J = [a,b) \in \mathcal{P}_f(t) $ we have \begin{equation}\label{eq:mixpart} \min_{1 \leq k \leq d} \modulo{T^r x -a_k} \geq \frac{1}{(\log t)^2}, \end{equation} for all $r(a,t) \leq r \leq r(a,t) + \frac{2C_f}{m} \log t$. \end{prop} \begin{proof} Let $K(t) = \lfloor \frac{2C_f}{m} \log t \rfloor +1$ and define $$ U_3 = \bigcup_{k=1}^{d-1} \bigcup_{i=- K(t)}^{K(t)} T^i \left\{x \in I : \modulo{x-a_k} \leq \frac{1}{(\log t)^{2}}\right\}. $$ Since $T^{\pm K(t)}$ is an IET of at most $d(K(t)+1)$ intervals, the set $U_3$ consists of at most $O\left( K(t)^2 \right)$ intervals. Let $$ U_4 = \left\{ x \in I : \text{dist}(x, U_3) \leq \frac{2}{t (\log t)^{\alpha}} \right\}, \text{\ \ \ and \ \ \ } U_5 = T_t^{-1}U_4, $$ where $T_t (x) = T^{r(x,t)}x$. The measure of $U_4$ is bounded by the measure of $U_3$ plus the number of intervals in $U_3$ times $4/(t (\log t)^{\alpha})$, namely \begin{equation*} \begin{split} \misura(U_4) & \leq \misura(U_3) + O\left( \frac{K(t)^2}{t(\log t)^{\alpha}} \right) \leq \frac{d(2K(t) +1)}{(\log t)^2} + O\left( \frac{(\log t)^{2-\alpha}}{t} \right)\\ &= O\left( (\log t)^{-1}\right). \end{split} \end{equation*} We apply the following lemma by Kochergin. \begin{lemma}[{\cite[Lemma 1.3]{kochergin:lemma}}] For any measurable set $U \subset I$, $$ \misura(T_t^{-1}U) \leq \int_U \left(\frac{f(x)}{m}+1 \right)\diff x. $$ \end{lemma} The previous result and the Cauchy-Schwarz inequality give us $$ \misura(U_5) \leq \int_{U_4} \left(\frac{f(x)}{m}+1 \right)\diff x \leq \left(1+\frac{\norma{f}_2}{m}\right) \misura(U_4)^{1/2} =O\left( (\log t)^{-1/2}\right), $$ since $f \in L^2(I)$.
Let $\mathcal{P}_f(t)$ be obtained from $\mathcal{P}_s(t)$ by removing all intervals $J \in \mathcal{P}_s(t)$ such that $J \subset U_5$. Then $1 - \misura(\mathcal{P}_f(t)) \leq 1 - \misura(\mathcal{P}_s(t)) + O((\log t)^{-1/2}) = O((\log t)^{-\alpha\rq{}\rq{}})$ for some $\alpha\rq{}\rq{} >0$.
We show that the conclusion holds for all $J = [a,b) \in \mathcal{P}_f(t)$. By construction, there exists $y \in J$ such that $T^{r(y,t)}y \notin U_4$, therefore, using Proposition \ref{th:prelpart}-(i) and (ii), $T^{r(y,t)}x \notin U_3$ for all $x \in J$. In particular, for all $x \in J$, the inequality \eqref{eq:mixpart} is satisfied for all $r(y,t)-K(t) \leq r \leq r(y,t) + K(t)$. To conclude, we notice that, arguing as in \cite[Corollary 4.2]{ulcigrai:mixing}, we have \begin{multline*} r(a,t) \leq r(y,t) \leq r(a,t) + \sup_{z \in J} \frac{\modulo{S_{r(z,t)}(f\rq{})(z)}}{t(\log t)^{\alpha}} \\ \leq r(a,t) + O \left( (\log t)^{1-\alpha} \right) \leq r(a,t) + K(t), \end{multline*} for $t \geq t_2$, for some $t_2 \geq t_1$. Hence $r(y,t) - K(t) \leq r(a,t)$ and $r(a,t)+K(t) \leq r(y,t) + K(t)$. \end{proof}
We now define the subset $X(t)$ of $\mathcal{X}$ on which we can estimate the correlations. It consists of full vertical translates of intervals $J\in \mathcal{P}_f(t)$, namely we consider $$ X(t)= \bigcup_{J \in \mathcal{P}_f(t)} \{ (x,y) : x \in J, 0 \leq y \leq \inf_{J} f \}. $$ We can bound the measure of $X(t)$ from below by $$ \misura(X(t)) \geq 1- \int_{I \setminus \mathcal{P}_f(t)} f(x) \diff x - \sum_{J \in \mathcal{P}_f(t)} \int_J (f(x) - \inf_J f) \diff x. $$ Since $f \in L^2(I)$, the Cauchy-Schwarz inequality yields $$ \int_{I \setminus \mathcal{P}_f(t)} f(x) \diff x \leq \norma{f}_2 \misura(I \setminus \mathcal{P}_f(t))^{1/2} = O \left((\log t)^{-\alpha\rq{}\rq{}/2}\right). $$ On the other hand, by the Mean-Value Theorem, applied with some $x_J \in J$, and Proposition \ref{th:prelpart}-(ii), \begin{multline*} \sum_{J \in \mathcal{P}_f(t)} \int_J (f(x) - \inf_J f) \diff x = \sum_{J \in \mathcal{P}_f(t)} \misura(J)( f(x_J) - \inf_J f ) \\
\leq \frac{2}{t (\log t)^{\alpha}} \sum_{J \in \mathcal{P}_f(t)} \modulo {f(x_J) - \inf_J f } \leq \frac{2}{t (\log t)^{\alpha}} \cdot \text{Var}(f |_{\mathcal{P}_f(t)}); \end{multline*}
where $ \text{Var}(f |_{\mathcal{P}_f(t)})$ denotes the variation of $f$ restricted to $ \mathcal{P}_f(t)$. Since $f$ has logarithmic singularities at the points $a_k$ and $\text{dist}(\mathcal{P}_f(t), a_k) \geq 1/(t(\log t)^{\alpha})$, the variation is of order $ \text{Var}(f |_{\mathcal{P}_f(t)}) = O\left( \log (t(\log t)^{\alpha}) \right)$. Hence, $$ 1- \misura(X(t)) = O \left( (\log t)^{-\beta}\right), $$ for some $0 < \beta\leq \alpha\rq{}\rq{}$.
\subsection{Decay of correlations} In this proof of mixing, shearing is the key phenomenon. We show that the speed of decay of correlations can be reduced to the speed of equidistribution of the flow by an argument in the spirit of Marcus \cite{marcus:horocycle}, using a bootstrap trick inspired by \cite{forniulcigrai:timechanges}. The geometric mechanism is the following: each horizontal segment $\{(x,y) : x \in J \in \mathcal{P}_f(t)\}$ in $ X(t)$ gets sheared along the flow direction and approximates a long segment of an orbit of the flow $\phi_t$, see Figure \ref{fig:2}.
Consider an interval $J = [a,b) \in \mathcal{P}_f(t)$ and let $\xi_J (s) = (s,0)$ for $a \leq s < b$. On $J$ the function $r(\cdot,t)$ is non-decreasing (non-increasing, if $C^{-}>C^{+}$). To see this, let $x<y$; then, since $S_{r(x,t)}(f\rq{}) <0$, the function $S_{r(x,t)}(f)$ is decreasing, hence $S_{r(x,t)}(f)(y)<S_{r(x,t)}(f)(x) \leq t$. By definition of $r(\cdot,t)$, it follows that $r(y,t) \geq r(x,t)$.
Moreover, $r(\cdot, t)$ assumes finitely many different values $r(a,t), r(a,t)+1, \dots, r(a,t)+N(J)$; more precisely there exist $u_0 = a < u_1 < \cdots < u_{N(J)} < u_{N(J)+1} = b$ such that $r(x,t) = r(a,t) + i$ for all $x \in [u_i, u_{i+1})$. Denote $\xi_i = \xi_J|_{[u_i,u_{i+1})}$. For $a < u < b$, define also $\xi_{[a,u)} = \xi_J|_{[a,u)}$ and let $N(u)$ be the maximum $i$ such that $u_i < u$.
For all $a< u < b$ the curve $\phi_t \circ \xi_{[a,u)} $ splits into $N(u)$ distinct curves $\phi_t \circ \xi_i$ on which the value of $r(x,t)$ is constant. The tangent vector is given by \begin{equation}\label{eq:tgtvect} \frac{\diff}{\diff s} \phi_t \circ \xi_{[a,u)} (s)= \frac{\diff}{\diff s} (T^{r(s,t)}(s), t- S_{r(s,t)}(f)(s)) = (1, - S_{r(s,t)}(f\rq{})(s)). \end{equation} In particular, for any $(x,y) \in X(t)$ we have \begin{equation}\label{eq:pushforwder} [(\phi_t)_{\ast}(\partial_x)]\negthickspace\upharpoonright_{(x,y)} = \partial_x\negthickspace\upharpoonright_{(x,y)} - S_{r(x,t+y)}(f\rq{})(x) \partial_y\negthickspace\upharpoonright_{(x,y)}. \end{equation} The total \lq\lq vertical stretch\rq\rq\ $\Delta f(u)$ of $\phi_t \circ \xi_{[a,u)} $ is the sum of all the vertical stretches of the curves $\phi_t \circ \xi_i$; by definition, it equals $$ \Delta f(u) = \int_{\phi_t \circ \xi_{[a,u)} }\modulo{\diff y} = \int_a^{u} \modulo{S_{r(s,t)}(f\rq{})(s)} \diff s, $$ and, by Proposition \ref{th:stretpart}-(iii), \begin{equation}\label{eq:vertstretch} \Delta f(u) \leq (u-a) \sup_{a\leq s <u}\modulo{S_{r(s,t)}(f\rq{})(s)} \leq \widetilde{C}\rq{} (t \log t) (u-a) \leq 2 \widetilde{C}\rq{} (\log t)^{1-\alpha}; \end{equation} in particular we get \begin{equation}\label{eq:Nu} N(u) \leq \left\lfloor \frac{\Delta f(u)}{m} \right\rfloor+2 \leq \frac{4 \widetilde{C}\rq{}}{m} (\log t)^{1-\alpha}. \end{equation} Let also $\Delta t(u) = S_{r(u,t)}(f)(a)-S_{r(u,t)}(f)(u)$ be the delay accumulated by the endpoints $a$ and $u$. In Figure \ref{fig:2}, $\Delta f(u)$ is the sum of the vertical lengths of the curves $\phi_t \circ \xi_i$, while $\Delta t(u)$ equals the length of the orbit segment $\gamma$. By the Mean-Value Theorem, there exists $z \in [a,u]$ such that $\Delta t(u) = -S_{r(u,t)}(f\rq{})(z)(u-a)$. 
Theorem \ref{th:BS} and Lemma \ref{th:boundsonr} yield \begin{equation}\label{eq:deltatu} \Delta t(u) = O\left( (t \log t) \frac{2}{t(\log t)^{\alpha}} \right) = O \left( (\log t)^{1-\alpha} \right). \end{equation}
We estimate the decay of correlations $$ \langle g \circ \phi_t, h \rangle = \int_{\mathcal{X}} (g \circ \phi_t) h \diff \misura, $$ for $g,h$ as in the statement of the theorem. We have that \begin{equation} \begin{split} \modulo{\int_{\mathcal{X}} (g \circ \phi_t) h \diff \misura } & \leq \modulo{ \int_{X(t)} (g \circ \phi_t) h \diff \misura }+ \misura({\mathcal{X}} \setminus X(t)) \norma{g}_{\infty}\norma{h}_{\infty} \\ & = \modulo{\int_{X(t)} (g \circ \phi_t) h \diff \misura} + O \left( (\log t)^{-\beta} \right). \label{eq:dcxt} \end{split} \end{equation} By Fubini\rq{}s Theorem \begin{equation}\label{eq:fubini1} \int_{X(t)} (g \circ \phi_t) h \diff \misura = \sum_{J \in \mathcal{P}_f(t)} \int_0^{y_J} \int_a^{b} (g \circ \phi_{t+y} \circ \xi_J(s)) (h \circ \phi_y \circ \xi_J(s)) \diff s \diff y, \end{equation} where $J=[a,b)$ and $y_J = \inf_J f$.
Fix any $0 \leq \overline{y} \leq y_J$ and let $\overline{g} = g \circ \phi_{\overline{y}}$ and $ \overline{h} = h \circ \phi_{\overline{y}}$. Integration by parts gives \begin{equation*} \begin{split} &\modulo{\int_a^{b} (\overline{g} \circ \phi_{t} \circ \xi_J(s)) (\overline{h} \circ \xi_J(s)) \diff s} \\ & = \modulo{\Big( \int_a^{b} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s \Big) (\overline{h} \circ \xi_J)(b) - \int_a^{b}\Big( \int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s \Big) (\partial_x\overline{h}\circ \xi_J(u)) \diff u} \\ & \leq \norma{\overline{h}}_{\infty} \modulo{\int_a^{b} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s} + \norma{\partial_x \overline{h}}_{\infty} \misura(J) \sup_{a \leq u \leq b} \modulo{ \int_a^{u} \overline{g}\circ \phi_{t} \circ \xi_J(s) \diff s }.
\end{split} \end{equation*} We have that $\norma{\overline{h}}_{\infty} = \norma{h}_{\infty}$ and, by \eqref{eq:pushforwder}, Theorem \ref{th:BS} and Proposition \ref{th:prelpart}-(iv), \begin{equation}\label{eq:dxhbar} \begin{split} \norma{\partial_x \overline{h}}_{\infty} &\leq \max_{(x,y) \in X(t)} \modulo{S_{r(x,\overline{y}+y)}(f\rq{})(x)} \norma{h}_{\mathscr{C}^1} \\ &= O \left( \max_{(x,y) \in X(t)} r(x, \overline{y}+y) \log r(x, \overline{y}+y) \right) = O( \log t \log \log t). \end{split} \end{equation} Since $\misura(J) \leq 2/(t (\log t)^{\alpha})$, we have $\norma{\partial_x \overline{h}}_{\infty} \misura(J) = O\left( (\log t)^{1-\alpha} \log \log t / t \right) = o(1)$; hence, for $t$ sufficiently large, $$ \modulo{\int_a^{b} (\overline{g} \circ \phi_{t} \circ \xi_J(s)) (\overline{h} \circ \xi_J(s)) \diff s} \leq ( \norma{\overline{h}}_{\infty} + 1 ) \sup_{a \leq u \leq b} \modulo{ \int_a^{u} \overline{g}\circ \phi_{t} \circ \xi_J(s) \diff s }. $$
The following is our bootstrap trick.
\begin{lemma}\label{th:bootstrap} There exists $C>0$ such that $$
\sup_{a \leq u \leq b} \modulo{ \int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s } \leq \frac{C}{t \log t} \sup_{a \leq u \leq b} \modulo{ \int_{\phi_t \circ \xi_{[a,u)}} \overline{g} \diff y}. $$ \end{lemma} \begin{proof} Fix $\varepsilon>0$ and let $a \leq \ell \leq b$. We write \begin{equation*} \begin{split} \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s =&\int_a^{\ell} ( \overline{g} \circ \phi_{t} \circ \xi_J(s) ) \Big( - \frac{S_{r(s,t)}(f\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \diff s \\ &+ \int_a^{\ell} ( \overline{g} \circ \phi_{t} \circ \xi_J(s) ) \Big( 1+ \frac{S_{r(s,t)}(f\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \diff s. \end{split} \end{equation*} By \eqref{eq:tgtvect}, the first summand equals $$ \int_a^{\ell} ( \overline{g} \circ \phi_{t} \circ \xi_J(s) ) \Big( - \frac{S_{r(s,t)}(f\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \diff s= \frac{1}{(C\rq{} +\varepsilon)t \log t} \int_{\phi_t \circ \xi_{[a,\ell)}} \overline{g} \diff y. $$
Integration by parts of the second summand gives \begin{equation*} \begin{split} & \int_a^{\ell} (\overline{g} \circ \phi_{t} \circ \xi_J(s) ) \Big( 1+ \frac{S_{r(s,t)}(f\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \diff s \\ & = \Big( 1+ \frac{S_{r(\ell,t)}(f\rq{})(\ell)}{(C\rq{} +\varepsilon)t \log t} \Big) \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s \\ & \qquad \qquad - \int_a^{\ell} \frac{\diff}{\diff s} \Big( 1+ \frac{S_{r(s,t)}(f\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \Big( \int_a^{s} \overline{g} \circ \phi_{t} \circ \xi_J(u) \diff u \Big)\diff s \\ & =\Big( 1+ \frac{S_{r(\ell,t)}(f\rq{})(\ell)}{(C\rq{} +\varepsilon)t \log t} \Big) \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s \\ & \qquad \qquad - \int_a^{\ell} \Big( \frac{S_{r(s,t)}(f\rq{}\rq{})(s)}{(C\rq{} +\varepsilon)t \log t} \Big) \Big( \int_a^{s} \overline{g} \circ \phi_{t} \circ \xi_J(u) \diff u \Big) \diff s\\
\end{split} \end{equation*} Thus \begin{equation*} \begin{split} & \modulo{ \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s } \leq \frac{1}{(C\rq{} +\varepsilon)t \log t} \modulo{ \int_{\phi_t \circ \xi_{[a,\ell)}}\overline{g} \diff y} \\
& \qquad + \modulo{1+ \frac{S_{r(\ell,t)}(f\rq{})(\ell)}{(C\rq{} +\varepsilon)t \log t}} \modulo{ \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s } \\ & \qquad + \modulo{ \max_{a \leq u \leq \ell} \frac{S_{r(u,t)}(f\rq{}\rq{})(u)}{(C\rq{} +\varepsilon)t \log t} \cdot (\ell-a)} \sup_{a \leq u \leq \ell}\modulo{\int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s} \end{split} \end{equation*} By Proposition \ref{th:stretpart}-(ii),(iv) and $\ell -a\leq b-a \leq 2/(t(\log t)^{\alpha})$, we get \begin{multline*}
\modulo{ \int_a^{\ell} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s } \leq \frac{1}{(C\rq{} +\varepsilon)t \log t} \modulo{ \int_{\phi_t \circ \xi_{[a,\ell)}} \overline{g} \diff y} \\ + \Big( 1-\frac{C\rq{}}{C\rq{} +\varepsilon} + \frac{C\rq{}\rq{}}{(C\rq{} +\varepsilon)M} \Big) \sup_{a \leq u \leq \ell}\modulo{\int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s}. \end{multline*} Since this is true for any $a \leq \ell \leq b$, we can take the supremum over $\ell$ on both sides and, after rearranging the terms, $$ \Big( C\rq{} - \frac{C\rq{}\rq{}}{M} \Big) \sup_{a \leq u \leq b}\modulo{\int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s}\leq \frac{1}{t \log t} \sup_{a \leq u \leq b} \modulo{ \int_{\phi_t \circ \xi_{[a,u)}} \overline{g} \diff y}. $$ The conclusion follows by choosing $M>1$ so that $C^{-1} := C\rq{} - C\rq{}\rq{}/M >0$. \end{proof}
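To spell out the rearrangement in the last step of the proof: the coefficient multiplying the supremum satisfies
$$ 1 - \Big( 1-\frac{C\rq{}}{C\rq{} +\varepsilon} + \frac{C\rq{}\rq{}}{(C\rq{} +\varepsilon)M} \Big) = \frac{C\rq{} - C\rq{}\rq{}/M}{C\rq{} +\varepsilon}, $$
and multiplying both sides of the resulting inequality by $C\rq{} +\varepsilon$ gives the displayed bound; in particular, the auxiliary parameter $\varepsilon$ does not appear in the final constant $C$.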
We now compare the integral of $\overline{g}$ along the curve $\phi_t \circ \xi_{[a,u)}$ with the integral of $\overline{g}$ along the orbit segment of length $\Delta t(u)$ starting from $\phi_t(a,0)$.
\begin{figure}
\caption{The curve $\phi_t\circ \xi_{[a,u)}$ splits into $N(u)$ curves $\phi_t \circ \xi_i$. In red, the orbit segment $\gamma$.}
\label{fig:2}
\end{figure}
\begin{lemma} Let $\gamma (s) = \phi_{t+s}(a,0)$, $0 \leq s < \Delta t(u)$, be the orbit segment of length $\Delta t(u)$ starting from $\phi_t(a,0)$. We have \begin{equation}\label{eq:approx} \modulo{\int_{\phi_t \circ \xi_{[a,u)}}\overline{g} \diff y} \leq \modulo{\int_{\gamma} \overline{g} \diff y} + O\left( (\log t)^{-1} \right). \end{equation} \end{lemma} \begin{proof} For all $0 \leq i \leq N(u)$, we compare the integral of $\overline{g}$ along the curve $\phi_t \circ \xi_i$ with the integral of $\overline{g}$ along an appropriate orbit segment. If $i \neq 0, N(u)$, consider $\gamma_i (s) = \phi_s(T^{r(a,t)+i}a,0)$, for $0 \leq s < f(T^{r(a,t)+i}a)$; define also $\gamma_0(s) = \phi_{t+s}(a,0)$, for $0\leq s < S_{r(a,t)+1}(f)(a)-t$, and $\gamma_{N(u)}(s) = \phi_s(T^{r(a,t)+N(u)}a,0)$, for $0 \leq s < t-S_{r(u,t)}(f)(u)$. Fix $0 \leq i \leq N(u)$ and join the starting points of $\phi_t \circ \xi_i$ and $\gamma_i$ by a horizontal segment, and the end points by the curve $\zeta_i(s) = (T^{r(a,t)+i}s, f(T^{r(a,t)+i}s))$, $a \leq s \leq u_{i+1}$, if $i\neq N(u)$, or by another horizontal segment, if $i=N(u)$. See Figure \ref{fig:2}.
We remark that the integral over any horizontal segment of $\overline{g} \diff y$ is zero. By Green\rq{}s Theorem, \begin{equation}\label{eq:gthm} \modulo{\int_{\phi_t \circ \xi_i} \overline{g} \diff y - \int_{\gamma_i} \overline{g} \diff y} \leq \modulo{\int_{\zeta_i} \overline{g} \diff y} + \norma{\partial_x \overline{g}}_{\infty} \int_{T^{r(a,t)+i}a}^{T^{r(a,t)+i}u_{i+1}} f(x) \diff x. \end{equation} Since $r(a,t) + i \leq r(b,t) \leq R(t)$, by Proposition \ref{th:prelpart}-(i), $T^{r(a,t)+i}$ is an isometry, hence \begin{equation*} \begin{split} \int_{T^{r(a,t)+i}a}^{T^{r(a,t)+i}u_{i+1}} f(x) \diff x &\leq \norma{f}_2 \misura([T^{r(a,t)+i}a, T^{r(a,t)+i}u_{i+1} ])^{1/2}\\ & \leq \frac{2\norma{f}_2 }{(t (\log t)^{\alpha})^{1/2}}. \end{split} \end{equation*} Reasoning as in \eqref{eq:dxhbar}, $\norma{\partial_x \overline{g}}_{\infty} =O(\log t \log \log t)$, thus the second term in \eqref{eq:gthm} is $O \left( (\log t)^{2-\alpha/2}/t^{1/2} \right)$. Moreover, by \eqref{eq:Nu} we can apply Proposition \ref{th:mixpart} to deduce $f\rq{}(T^{r(a,t)+i}x) =O\left( (\log t)^2 \right)$, so that $$ \modulo{\int_{\zeta_i} \overline{g} \diff y} \leq \norma{ \overline{g}}_{\infty} \int_{a}^{u_{i+1}} \modulo{f\rq{}(T^{r(a,t)+i}x)} \diff x = O \left( \frac{(\log t)^2}{t (\log t)^{\alpha}} \right). $$ Summing over all $i=0,\dots,N(u)$ we conclude using \eqref{eq:Nu} \begin{equation*} \begin{split} &\modulo{\int_{\phi_t \circ \xi_{[a,u)}} \overline{g} \diff y - \int_{\gamma} \overline{g} \diff y} \leq \sum_{i=0}^{N(u)} \left(\modulo{\int_{\zeta_i} \overline{g} \diff y} + \norma{\partial_x \overline{g}}_{\infty} \int_{T^{r(a,t)+i}a}^{T^{r(a,t)+i}u_{i+1}} f(x) \diff x\right)\\ & \qquad \qquad = N(u) O\left( \frac{(\log t)^2}{t (\log t)^{\alpha}} + \frac{ (\log t)^{2-\alpha/2}}{t^{1/2}} \right) = O\left((\log t)^{-1} \right). \end{split} \end{equation*} \end{proof}
By definition, the integral of $\overline{g}$ along the orbit segment $\gamma$ equals the integral of $g$ along $\phi_{\overline{y}}\circ \gamma$. The latter can be expressed as a Birkhoff sum of $\mathcal{I}g= \int_0^{f(x)} g(x,y) \diff y$ (see \eqref{eq:idig}) plus an error term arising from the initial and final points of the orbit segment $\phi_{\overline{y}}\circ \gamma$; namely, recalling the definition $T_t(x) = T^{r(x,t)}x$, \begin{multline*}
\modulo{ \int_{\gamma} \overline{g} \diff y} = \modulo{ \int_{\phi_{\overline{y}} \circ \gamma} g \diff y} \leq S_{r(T_{t+\overline{y}}(a),\Delta t(u))}(\mathcal{I}g)(T_{t+\overline{y}}(a)) \\
+ \norma{g}_{\infty} (f(T_{t+\overline{y}}a) + f(T_{t+\overline{y} + \Delta t(u)}a) ). \end{multline*} We recall from Remark \ref{remark:3} that $\mathcal{I}g$ satisfies the hypotheses of Corollary \ref{th:athreyaforni}. We claim that \begin{equation}\label{miclaim} f(T^{r(a,t+\overline{y})}a) + f(T^{r(a,t+\overline{y} + \Delta t(u))}a) = O(\log \log t). \end{equation} Indeed, by the cocycle relation for Birkhoff sums we have \begin{equation*} \begin{split} &S_{r(a,t)+ \lfloor (\overline{y} + \Delta t(u))/m \rfloor + 2}(f)(a) \\ & \qquad \qquad = S_{r(a,t)+1}(f)(a) + S_{ \lfloor (\overline{y} + \Delta t(u))/m \rfloor + 1}(f)(T^{r(a,t)+1}a)\\ & \qquad \qquad > t + ( \lfloor (\overline{y} + \Delta t(u))/m \rfloor + 1)m > t + \overline{y} + \Delta t(u); \end{split} \end{equation*} hence, $$ r(a,t) \leq r(a, t + \overline{y}) \leq r(a, t + \overline{y} + \Delta t(u) ) \leq r(a,t)+ \lfloor (\overline{y} + \Delta t(u))/m \rfloor + 2. $$ By Proposition \ref{th:prelpart}-(iv), $\overline{y} \leq C_f \log t$; hence, by \eqref{eq:deltatu}, the right-hand side of the last display is bounded by $r(a,t) + \frac{2C_f}{m} \log t$, up to enlarging $t_2$. Proposition \ref{th:mixpart} yields the claim~\eqref{miclaim}.
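For completeness, the cocycle relation for Birkhoff sums invoked above is the standard identity
$$ S_{p+q}(f)(x) = S_p(f)(x) + S_q(f)(T^p x), \qquad p, q \in \mathbb{N}, $$
applied here with $x = a$, $p = r(a,t)+1$ and $q = \lfloor (\overline{y} + \Delta t(u))/m \rfloor + 1$, together with the trivial bound $S_q(f) \geq q\, m$, valid since $f \geq m$.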
Therefore, by \eqref{miclaim}, Corollary \ref{th:athreyaforni} and \eqref{eq:vertstretch}, \begin{equation} \begin{split}
\modulo{ \int_{\gamma} \overline{g} \diff y}\leq & S_{r(T_{t+\overline{y}}(a),\Delta t(u))}(\mathcal{I}g)(T_{t+\overline{y}}(a)) + O( \log \log t )\\
= & O \left( (r(T_{t+\overline{y}}(a),\Delta t(u)))^{\theta} + \log \log t \right) = O\left((\Delta t(u))^{\theta} + \log \log t \right)\\
= & O \left((\log t)^{\theta(1-\alpha)} +\log \log t \right) =O \left( (\log t)^{\theta(1-\alpha)} \right). \label{eq:atfor} \end{split} \end{equation} From Lemma \ref{th:bootstrap}, \eqref{eq:approx} and \eqref{eq:atfor}, we obtain \begin{equation*} \begin{split} &\sup_{a \leq u \leq b}\modulo{\int_a^{u} \overline{g} \circ \phi_{t} \circ \xi_J(s) \diff s} \leq \frac{C}{t \log t} \sup_{a \leq u \leq b} \modulo{ \int_{\phi_t \circ \xi_{[a,u)}} \overline{g} \diff y}\\ & \qquad \qquad \leq \frac{C}{t \log t} \left(\modulo{ \int_{\gamma} \overline{g} \diff y} + O \left( (\log t)^{-1} \right) \right) = O \left( \frac{(\log t)^{\theta(1-\alpha)}}{t \log t}\right).
\end{split} \end{equation*} From \eqref{eq:fubini1}, we deduce \begin{equation*} \begin{split} \modulo{\int_{X(t)} (g \circ \phi_t)h \diff \misura } &= O \left( \frac{(\log t)^{\theta(1-\alpha)}}{t \log t} \right) \sum_{J \in \mathcal{P}_f(t)} \int_0^{y_J} \frac{\misura(J)}{\misura(J)} \diff y \\ & = O \left( \frac{(\log t)^{\theta(1-\alpha)}}{t \log t} (t(\log t)^{\alpha}) \right) \sum_{J \in \mathcal{P}_f(t)} \int_0^{y_J} \misura(J) \diff y \\ & = O \left(\frac{1}{(\log t)^{(1-\theta)(1-\alpha)}} \right), \end{split} \end{equation*} which, combined with \eqref{eq:dcxt}, concludes the proof.
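For completeness, the exponent bookkeeping in the last chain of estimates is the elementary identity
$$ \frac{(\log t)^{\theta(1-\alpha)}}{t \log t}\, t (\log t)^{\alpha} = (\log t)^{\theta(1-\alpha) + \alpha - 1} = (\log t)^{-(1-\theta)(1-\alpha)}, $$
together with the bound $\sum_{J \in \mathcal{P}_f(t)} \int_0^{y_J} \misura(J) \diff y = \sum_{J \in \mathcal{P}_f(t)} y_J \misura(J) \leq \misura(\mathcal{X})$, which holds since $y_J = \inf_J f$.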
\section{Appendix: estimates of Birkhoff sums}\label{section5bs}
In this appendix we prove the bounds, stated in Theorem \ref{th:BS}, on the Birkhoff sums of the roof function $f$ and of its derivatives $f\rq{}$ and $f\rq{}\rq{}$. The proof is a generalization of a result by Ulcigrai \cite[Corollaries 3.4, 3.5]{ulcigrai:mixing} to the case of finitely many singularities.
We first consider the auxiliary functions $u_k,v_k, \widetilde{u}_k, \widetilde{v}_k$ introduced in \S\ref{section:quantitative}.
\subsection{Special Birkhoff sums}\label{gaperror}
Fix $\varepsilon\rq{}>0$ and an index $k$; let $w$ be either $u_k$ or $v_k$, and let $\widetilde{w}$ be either $\widetilde{u}_k$ or $\widetilde{v}_k$. Let $\overline{l}, D, D\rq{}$ be given by Theorem \ref{th:ulcigrai}; for $\varepsilon>0$ (which will be determined later) choose $L_1, L_2 \in \mathbb{N}$ such that $D^{L_1} D\rq{} < \varepsilon$ and $ \nu (d-1)^{-L_2} < \varepsilon$. Assume $l_0 \geq \overline{l}(1+L_1+L_2)$ and introduce the past steps $$ l_{-1} := l_0 - L_1 \overline{l}, \qquad l_{-2} := l_0 - (L_1+L_2) \overline{l}. $$ Consider a point $x_0 \in I^{(n_{l_{0}})}_{j_0} \subset I^{(n_{l_{0}})}$; we want to estimate the Birkhoff sums of $w$ and $\widetilde{w}$ at $x_0$ along $Z_{j_0}^{(n_{l_{0}})}$, namely the sums $$ S_{r_0}(w)(x_0) = \sum_{i=0}^{r_0-1} w(T^ix_0), \quad \text{ and } \quad S_{r_0}(\widetilde{w})(x_0) = \sum_{i=0}^{r_0-1} \widetilde{w}(T^ix_0), $$ where $r_0:=h_{j_0}^{(n_{l_{0}})}$. Sums of this type will be called \newword{special Birkhoff sums}. We will prove that \begin{equation}\label{eq:sbsweakforf} S_{r_0}(w)(x_0) \leq (1+\varepsilon\rq{}) r_0 \int_0^1w(x) \diff x + \max_{0 \leq i < r_0}w(T^ix_0) \end{equation} and \begin{equation}\label{eq:sbsweak}
(1-\varepsilon\rq{}) r_0 \log h^{(n_{l_{0}})} \leq S_{r_0}(\widetilde{w})(x_0) \leq (1+\varepsilon\rq{}) r_0 \log h^{(n_{l_{0}})} + \max_{0 \leq i < r_0} \widetilde{w}(T^ix_0), \end{equation} where, we recall, $h^{(n_{l_{0}})} = \max \{ h_{j}^{(n_{l_{0}})} : 1 \leq j \leq d \}$.
By Remark \ref{remark:endpoint}, at each step $n$ the singularity $a_k$ of $w$ and of $\widetilde{w}$ belongs to the boundary of two adjacent elements of the partition $\mathcal{Z}^{(n)}$ defined in \S\ref{section:def}. Denote by $F^{(n)}_{\text{sing}}$ the element of $\mathcal{Z}^{(n)}$ which has $a_k$ as \newword{left} endpoint if $w=u_k$ or as \newword{right} endpoint if $w=v_k$, and similarly when we consider $\widetilde{w}$ instead of $w$. Outside $F^{(n)}_{\text{sing}}$ the value of $w$ is bounded by $1-\log \lambda^{(n)}_{\text{sing}} $ and the value of $\widetilde{w}$ is bounded by $1/\lambda^{(n)}_{\text{sing}}$, where $\lambda^{(n)}_{\text{sing}}$ is the length of $F^{(n)}_{\text{sing}}$. Note that, by construction, $F^{(n)}_{\text{sing}} \subset F^{(m)}_{\text{sing}}$ for $n >m$; decompose the initial interval $I=I^{(0)}$ into the three pairwise disjoint sets $I^{(0)} = A \sqcup B \sqcup C$, with \begin{equation*} A= F^{(n_{l_0})}_{\text{sing}}, \quad B = F^{(n_{l_{-2}})}_{\text{sing}} \setminus F^{(n_{l_0})}_{\text{sing}}, \quad C= I^{(0)} \setminus F^{(n_{l_{-2}})}_{\text{sing}}. \label{eq:decomposition} \end{equation*} Using the partition above, we can write \begin{equation}\label{eq:Birkhoffsum} S_{r_0}(w)(x_0) = \sum_{T^ix_0 \in A} w(T^ix_0) + \sum_{T^ix_0 \in B} w(T^ix_0)+ \sum_{T^ix_0 \in C} w(T^ix_0), \end{equation} and similarly for $\widetilde{w}$. Notice that the first summand is nonzero if and only if there exists $r \leq r_0$ such that $T^rx_0 \in F^{(n_{l_0})}_{\text{sing}}$, i.e.~if and only if $F^{(n_{l_0})}_{\text{sing}} \subset Z^{(n_{l_0})}_{j_0}$; in this case it equals $w(T^rx_0)$.
We refer to the summands in \eqref{eq:Birkhoffsum} as \newword{singular term}, \newword{gap error} and \newword{main contribution} respectively.
\paragraph{Gap error.} We first consider $\widetilde{w}$. Let $b = \# \{T^ix_0 \in B\}$; we will approximate the gap error with the sum of $\widetilde{w}$ over an arithmetic progression of length $b$. For any $T^ix_0 \in B$ we have $\widetilde{w}(T^ix_0) \leq 1/ \lambda_{\text{sing}}^{(n_{l_{0}})}$ and, since $T^ix_0$ and $T^jx_0$ belong to different elements of $\mathcal{Z}^{(n_{l_0})}$ when $i \neq j$, for $i,j \leq r_0$ also $\modulo{T^ix_0 - T^jx_0} \geq \lambda^{(n_{l_0})}_{j_0} \geq (d \kappa \nu r_0 )^{-1} $ by Corollary \ref{cor:thulcigrai}-(i). Order the points of $\{T^ix_0 \in B : 0 \leq i < r_0 \}$ by increasing distance from $a_k$ (that is, in increasing order of $T^ix_0 - a_k$ if $\widetilde{w}=\widetilde{u}_k$, in decreasing order if $\widetilde{w}=\widetilde{v}_k$) and denote by $x_i$ the distance of the $i$-th point from $a_k$; then $$ x_i \geq \lambda^{(n_{l_0})}_{\text{sing}} + \frac{i}{d \kappa \nu r_0 }. $$ By monotonicity of $\widetilde{w}$ it follows that $$ 0 \leq \sum_{T^ix_0 \in B} \widetilde{w}(T^ix_0) = \sum_{T^ix_0 \in B} \frac{1}{x_i} \leq \sum_{i=0}^{b} \Bigg( \lambda^{(n_{l_0})}_{\text{sing}} + \frac{i}{d \kappa \nu r_0} \Bigg)^{-1}. $$ Using the trivial fact that for any continuous and decreasing function $h$, $\sum_{i=0}^b h(i) \leq h(0) + \int_0^b h(x) \diff x$ and $ d \kappa \nu r_0 \lambda^{(n_{l_0})}_{\text{sing}} \geq 1$ by Corollary \ref{cor:thulcigrai}-(i), we get \begin{equation*} \begin{split} 0 &\leq \sum_{T^ix_0 \in B} \widetilde{w}(T^ix_0) \leq \frac{1}{\lambda_{\text{sing}}^{(n_{l_{0}})}} + \int_0^b \Bigg( \lambda^{(n_{l_0})}_{\text{sing}} + \frac{x}{d \kappa \nu r_0} \Bigg)^{-1} \diff x\\ & \leq d \kappa \nu r_0 + d \kappa \nu r_0 \log \Bigg( 1+ \frac{b}{d \kappa \nu r_0 \lambda^{(n_{l_0})}_{\text{sing}}} \Bigg) \leq d \kappa \nu r_0 (1+ \log(b+1)). \end{split} \end{equation*} Since $B \subset F_{\text{sing}}^{(n_{l_{-2}})}$, we have that $b \leq \# \{ T^ix_0 \in Z_{j_0}^{(n_{l_0})} \cap F_{\text{sing}}^{(n_{l_{-2}})} \}$.
Let $\alpha \in \{1, \dots, d\}$ be such that $ F_{\text{sing}}^{(n_{l_{-2}})} \subset Z_{\alpha}^{(n_{l_{-2}})}$; the number of $T^ix_0 \in Z_{j_0}^{(n_{l_0})}$ contained in $F_{\text{sing}}^{(n_{l_{-2}})}$ equals the number of those contained in $I_{\alpha}^{(n_{l_{-2}})}$. Thus, by Lemma \ref{lemma:entries}, \begin{equation}\label{eq:bnormaA} b \leq \# \{ T^ix_0 \in Z_{j_0}^{(n_{l_0})} \cap I_{\alpha}^{(n_{l_{-2}})} \} = A_{\alpha, j_0}^{(n_{l_{-2}}, n_{l_0})} \leq \norma{A^{(n_{l_{-2}}, n_{l_0})} }. \end{equation} From the asymptotic behavior (iii) in Corollary \ref{cor:thulcigrai}, we obtain \begin{equation*} \frac{ \sum_{T^ix_0 \in B} \widetilde{w}(T^ix_0)}{ r_0 \log h^{(n_{l_0})}} \leq \frac{ d \kappa \nu r_0 (1+ \log( \norma{A^{(n_{l_{-2}}, n_{l_0})} }+1)) }{ r_0 \log h^{(n_{l_0})}} \to 0, \end{equation*} so, for $l_0$ large enough, we conclude \begin{equation}\label{eq:gerr} 0 \leq \sum_{T^ix_0 \in B} \widetilde{w}(T^ix_0) \leq \varepsilon ( r_0 \log h^{(n_{l_0})}) . \end{equation}
We can carry out analogous computations for $w$. In this case, \begin{equation*} \begin{split} 0 &\leq \sum_{T^ix_0 \in B} w(T^ix_0) = \sum_{T^ix_0 \in B}(1 -\log \modulo{T^ix_0 - a_k}) \leq b(1-\log \lambda_{\text{sing}}^{(n_{l_{0}})}) = O ( b \log r_0). \end{split} \end{equation*} Corollary \ref{cor:thulcigrai}-(ii) implies that $l_0 = O( \log r_0)$; hence by \eqref{eq:bnormaA}, the Diophantine condition in Theorem \ref{th:ulcigrai}-(iv) and the definition of $l_{-2}$ we obtain $$ b \leq \norma{A^{(n_{l_{-2}}, n_{l_0})} } \leq l_0^{(L_1+L_2)\overline{l} \tau} = O \left( (\log r_0)^{(L_1+L_2)\overline{l} \tau} \right). $$ In particular, for $l_0$ large enough we conclude \begin{equation}\label{eq:gerrforw} 0 \leq \sum_{T^ix_0 \in B} w(T^ix_0) \leq \varepsilon r_0. \end{equation}
\paragraph{Main contribution.} Consider the partition $\mathcal{Z}^{(n_{l_{-1}})}$ restricted to the set $C$. We will exploit the fact that the partition elements are nicely distributed in $\mathcal{Z}^{(n_{l_0})}$ to approximate the special Birkhoff sums of $w$ and $\widetilde{w}$ by the respective integrals over $C$, and then bound the latter.
For any $F_{\alpha} \in \mathcal{Z}^{(n_{l_{-1}})} \cap C$, $F_{\alpha} \subset Z_{j_{\alpha}}^{(n_{l_{-1}})}$ with $j_{\alpha} \in \{1, \dots, d\}$, choose points $\overline{x}_{\alpha}, \widetilde{x}_{\alpha} \in F_{\alpha}$ given by the Mean-Value Theorem, namely such that $$ w(\overline{x}_{\alpha}) = \frac{1}{\lambda_{\alpha}^{(n_{l_{-1}})}} \int_{F_{\alpha}}w(x) \diff x, \qquad \widetilde{w}(\widetilde{x}_{\alpha}) = \frac{1}{\lambda_{\alpha}^{(n_{l_{-1}})}} \int_{F_{\alpha}}\widetilde{w}(x) \diff x, $$ with $\lambda_{\alpha}^{(n_{l_{-1}})} = \misura (F_{\alpha})$. We now show that for any $T^ix_0 \in F_{\alpha}$, \begin{equation} 1-\varepsilon \leq \frac{w(T^ix_0)}{w(\overline{x}_{\alpha})} \leq 1+ \varepsilon, \qquad 1-\varepsilon \leq \frac{\widetilde{w}(T^ix_0)}{\widetilde{w}(\widetilde{x}_{\alpha})} \leq 1+ \varepsilon. \label{eq:meanvalue} \end{equation} Since $w \geq 1$ and for all $x \in F_{\alpha}\subset C$ we have $\modulo{x-a_k} \geq \lambda_{\text{sing}}^{(n_{l_{-2}})}$, again by the Mean-Value Theorem we have $$ \modulo{\frac{w(T^ix_0)}{w(\overline{x}_{\alpha})}-1 } \leq \modulo{ \max_C w\rq{}} \lambda_{\alpha}^{(n_{l_{-1}})} \leq \frac{\lambda_{\alpha}^{(n_{l_{-1}})}}{\lambda_{\text{sing}}^{(n_{l_{-2}})}}. $$
Considering $\widetilde{w}$, up to replacing $F_{\alpha}$ with $F_{\alpha}+1$ or $F_{\alpha}-1$, we can suppose that $\widetilde{w}(x) = 1/ \modulo{x-a_k}$ for $x \in F_{\alpha}$. Then,
\begin{equation*} \frac{\widetilde{w}(T^ix_0)}{\widetilde{w}(\widetilde{x}_{\alpha})} = \modulo{ \frac{\widetilde{x}_{\alpha}-a_k}{T^ix_0-a_k} } \leq \frac{\sup_{x \in F_{\alpha}} \modulo{x-a_k}}{\inf_{x \in F_{\alpha}} \modulo{x-a_k}} = 1+ \frac{\lambda_{\alpha}^{(n_{l_{-1}})}}{\inf_{x \in F_{\alpha}} \modulo{x-a_k}} \leq 1+ \frac{\lambda_{\alpha}^{(n_{l_{-1}})}}{\lambda_{\text{sing}}^{(n_{l_{-2}})}}, \end{equation*} and similarly \begin{equation*} \frac{\widetilde{w}(T^ix_0)}{\widetilde{w}(\widetilde{x}_{\alpha})} = \modulo{ \frac{\widetilde{x}_{\alpha}-a_k}{T^ix_0-a_k} } \geq \frac{\inf_{x \in F_{\alpha}} \modulo{x-a_k}}{\sup_{x \in F_{\alpha}} \modulo{x-a_k}} = 1 - \frac{\lambda_{\alpha}^{(n_{l_{-1}})}}{\sup_{x \in F_{\alpha}} \modulo{x-a_k}} \geq 1- \frac{\lambda_{\alpha}^{(n_{l_{-1}})}}{\lambda_{\text{sing}}^{(n_{l_{-2}})}}. \end{equation*} Thus, it is sufficient to prove that ${\lambda_{\alpha}^{(n_{l_{-1}})}}/{\lambda_{\text{sing}}^{(n_{l_{-2}})}}< \varepsilon$. The length vectors are related by the cocycle property \eqref{eq:cocycleprop}, namely, by the definition of $l_{-2}$, $$ \underline{\lambda}^{(n_{l_{-2}})}=A^{(n_{l_{-2}}, n_{l_{-1}})} \underline{\lambda}^{(n_{l_{-1}})} = \prod_{j=0}^{L_2-1} A^{(n_{l_{-2}+j \overline{l}} , n_{l_{-2}+(j+1) \overline{l}} )} \underline{\lambda}^{(n_{l_{-1}})}, $$ and each of those $d \times d$ matrices is strictly positive with integer coefficients by (iii) in Theorem \ref{th:ulcigrai}. Therefore $$ {\lambda}^{(n_{l_{-2}})}_{\text{sing}} \geq d^{L_2} \min_j {\lambda}^{(n_{l_{-1}})}_j \geq \frac{d^{L_2}}{\nu}{\lambda}^{(n_{l_{-1}})}_{\alpha}, $$ which implies ${\lambda}^{(n_{l_{-1}})}_{{\alpha}}/ {\lambda}^{(n_{l_{-2}})}_{\text{sing}} \leq \nu d^{-L_2} <\varepsilon$ by the choice of $L_2$. Hence the claim \eqref{eq:meanvalue} is now proved.
Rewriting $$ \sum_{T^ix_0\in C} w(T^ix_0) = \sum_{F_{\alpha} \subset C}\sum_{T^ix_0 \in F_{\alpha}} w(T^ix_0), $$ we get from \eqref{eq:meanvalue} \begin{multline*} (1-\varepsilon) \sum_{F_{\alpha} \subset C} \# \{ T^ix_0 \in F_{\alpha} \} w(\overline{x}_{\alpha}) \leq \sum_{T^ix_0 \in C} w(T^ix_0)\\
\leq (1+\varepsilon) \sum_{F_{\alpha} \subset C} \# \{ T^ix_0\in F_{\alpha} \} w(\overline{x}_{\alpha}). \end{multline*}
Exactly as in the previous paragraph, $ \# \{ T^ix_0 \in F_{\alpha} \} = \# \{ T^ix_0 \in I_{j_{\alpha}}^{(n_{l_{-1}})}\} = A_{j_{\alpha}, j_0}^{(n_{l_{-1}}, n_{l_{0}})}$. We apply the following lemma by Ulcigrai. \begin{lemma}[{\cite[Lemma 3.4]{ulcigrai:mixing}}] For each $1 \leq i,j \leq d$, $$ e^{-2D^{L_1}D\rq{}} \lambda_{i}^{(n_{l_{-1}})} \leq \frac{ A_{i,j}^{(n_{l_{-1}}, n_{l_{0}})} }{ h_{j}^{(n_{l_{0}})} } \leq e^{2D^{L_1}D\rq{}} \lambda_{i}^{(n_{l_{-1}})}. $$ \end{lemma} By the initial choice of $L_1$, this implies that $e^{-2 \varepsilon} \lambda_{j_{\alpha}}^{(n_{l_{-1}})} r_0 \leq A_{j_{\alpha}, j_0}^{(n_{l_{-1}}, n_{l_{0}})} \leq e^{2 \varepsilon} \lambda_{j_{\alpha}}^{(n_{l_{-1}})} r_0$. We get \begin{equation} \begin{split} &\sum_{T^ix_0 \in C} w(T^ix_0 ) \leq (1+\varepsilon) \sum_{F_{\alpha} \subset C} A_{j_{\alpha}, j_0}^{(n_{l_{-1}}, n_{l_{0}})} w(\overline{x}_{\alpha}) \\ & \qquad \leq e^{2\varepsilon} (1+\varepsilon) \sum_{F_{\alpha} \subset C} \lambda_{j_{\alpha}}^{(n_{l_{-1}})} r_0 w(\overline{x}_{\alpha}) = e^{2\varepsilon} (1+\varepsilon) r_0 \sum_{F_{\alpha} \subset C} \int_{F_{\alpha}} w(x) \diff x \\ &\qquad = e^{2\varepsilon} (1+\varepsilon) r_0 \int_{C} w(x) \diff x.\label{eq:upperboundforw} \end{split} \end{equation}
The same computations can be carried out for $\widetilde{w}$, obtaining
\begin{equation} e^{-2\varepsilon} (1-\varepsilon) r_0 \int_{C} \widetilde{w}(x) \diff x \leq \sum_{T^ix_0 \in C} \widetilde{w}(T^ix_0) \leq e^{2\varepsilon} (1+\varepsilon) r_0 \int_{C} \widetilde{w}(x) \diff x .\label{eq:lowerbound} \end{equation} Recalling $C= I^{(0)} \setminus F_{\text{sing}}^{(n_{l_{-2}})}$, we have to estimate the integral $$
\int_{I^{(0)} \setminus F_{\text{sing}}^{(n_{l_{-2}})} } \widetilde{w}(x) \diff x =\log \frac{1}{\lambda_{\text{sing}}^{(n_{l_{-2}})}}. $$ Since $\lambda_{\text{sing}}^{(n_{l_{-2}})} \geq \lambda_{\text{sing}}^{(n_{l_{0}})} \geq 1/(d\kappa \nu h^{(n_{l_{0}})})$ by Corollary \ref{cor:thulcigrai}-(i), we have the upper bound \begin{equation} \log \frac{1}{\lambda_{\text{sing}}^{(n_{l_{-2}})}} \leq \log (d\kappa \nu h^{(n_{l_{0}})}) = \left( 1+ \frac{\log (d \kappa \nu)}{\log h^{(n_{l_{0}})}} \right) \log h^{(n_{l_{0}})} \leq (1+ \varepsilon) \log h^{(n_{l_{0}})}, \label{eq:integral1} \end{equation} for $l_0$ sufficiently large. On the other hand, writing $\log \frac{1}{\lambda_{\text{sing}}^{(n_{l_{-2}})}} = \log h^{(n_{l_{0}})} - \log ( h^{(n_{l_{0}})} \lambda_{\text{sing}}^{(n_{l_{-2}})} )$, we obtain the lower bound \begin{equation} \begin{split} \log \frac{1}{\lambda_{\text{sing}}^{(n_{l_{-2}})}} & = \log h^{(n_{l_{0}})} \left( 1- \frac{\log (h^{(n_{l_{0}})}\lambda_{\text{sing}}^{(n_{l_{-2}})})}{ \log h^{(n_{l_{0}})} } \right) \\ & \geq \log h^{(n_{l_{0}})} \left( 1- \frac{\log (\kappa \nu h^{(n_{l_{0}})} / h^{(n_{l_{-2}})})}{ \log h^{(n_{l_{0}})} } \right) \\ & \geq \log h^{(n_{l_{0}})} \left( 1- \frac{\log (\kappa \nu \norma{A^{(n_{l_{-2}}, n_{l_{0}})}})}{ \log h^{(n_{l_{0}})} } \right),\label{eq:integral2} \end{split} \end{equation} where we used the cocycle relation $ \underline{h}^{(n_{l_{0}})} = (A^{(n_{l_{-2}}, n_{l_{0}})})^T \underline{h}^{(n_{l_{-2}})}$ to obtain ${h}^{(n_{l_{0}})} \leq \norma{A^{(n_{l_{-2}}, n_{l_{0}})}} {h}^{(n_{l_{-2}})}$. The factor in parentheses tends to $1$ as $l_0$ goes to infinity because of Corollary \ref{cor:thulcigrai}-(iii), thus for $l_0$ sufficiently large we obtain $ \log 1/{\lambda_{\text{sing}}^{(n_{l_{-2}})}} \geq (1-\varepsilon) \log h^{(n_{l_{0}})}$.
Combining the bounds \eqref{eq:lowerbound} with the estimates \eqref{eq:integral1} and \eqref{eq:integral2}, we deduce \begin{equation}\label{eq:maic}
e^{-2\varepsilon} (1-\varepsilon)^2 r_0 \log h^{(n_{l_{0}})} \leq \sum_{T^ix_0 \in C} \widetilde{w}(T^ix_0) \leq e^{2\varepsilon} (1+\varepsilon)^2 r_0 \log h^{(n_{l_{0}})}. \end{equation}
\paragraph{Final estimates.} Choose $\varepsilon >0$ such that $e^{2\varepsilon}(1+\varepsilon)^2 + \varepsilon < 1 + \varepsilon\rq{}$ and $e^{-2\varepsilon}(1-\varepsilon)^2 > 1 - \varepsilon\rq{}$. As we have already remarked, the singular terms are nonzero if and only if $F^{(n_{l_0})}_{\text{sing}} \subset Z^{(n_{l_0})}_{j_0}$, in which case they equal $\max_{0\leq i < r_0} w(T^ix_0)$ and $\max_{0\leq i < r_0} \widetilde{w}(T^ix_0)$ respectively. Together with the estimates of the gap error \eqref{eq:gerrforw} and \eqref{eq:gerr} and of the main contribution \eqref{eq:upperboundforw} and \eqref{eq:maic}, this proves the estimates \eqref{eq:sbsweakforf} and \eqref{eq:sbsweak} for the special Birkhoff sums.
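Such an $\varepsilon$ exists because, as $\varepsilon \to 0^+$,
$$ e^{2\varepsilon}(1+\varepsilon)^2 + \varepsilon = 1 + 5\varepsilon + O(\varepsilon^2), \qquad e^{-2\varepsilon}(1-\varepsilon)^2 = 1 - 4\varepsilon + O(\varepsilon^2); $$
for instance, any sufficiently small $\varepsilon < \varepsilon\rq{}/6$ satisfies both conditions.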
\subsection{General case}\label{resonantterm}
Fix $\varepsilon\rq{}\rq{}>0$, $r \in \mathbb{N}$ and take $l$ such that $ h^{( n_{l})} \leq r < h^{( n_{l+1})}$. In this section we estimate the Birkhoff sums $S_r(w)(x_0)$ and $S_r(\widetilde{w})(x_0)$ for an arbitrary orbit length $r$; namely, we will prove that for any $r$ sufficiently large and for any $x_0 \notin \Sigma_l(k)$, \begin{equation}\label{eq:conclusionforw} S_r(w)(x_0) \leq (1+\varepsilon\rq{}\rq{}) r \int_0^1 w(x) \diff x + (\lfloor \kappa \rfloor +2) \max_{0 \leq i < r} w(T^ix_0), \end{equation} and \begin{equation} (1-\varepsilon\rq{}\rq{}) r \log r \leq S_r(\widetilde{w})(x_0) \leq (1+\varepsilon\rq{}\rq{}) r \log r + (\lfloor \kappa \rfloor +2) \max_{0 \leq i < r} \widetilde{w}(T^ix_0). \label{eq:conclusion} \end{equation} The idea is to decompose $S_r(w)$ and $S_r(\widetilde{w})$ into special Birkhoff sums of previous steps $n_{l_i}$. To control the sums, however, we have to discard the set $\Sigma_l(k)$ of points whose orbits come too close to the singularity; its measure is small, see Proposition \ref{th:stretpart}.
\begin{defin}\label{definx} Let $\orbita{r}{x} = \{T^ix: 0 \leq i < r\}$. We introduce the following notation: if $x \in I_j^{(n)}$, denote by $x_{j}^{(n)}$ and $\widetilde{x}_{j}^{(n)}$ the points in $\orbita{h_j^{(n)}}{x} \cap Z_j^{(n)}$ at which the functions $w$ and $\widetilde{w}$ attain their respective maxima, and by $x_r$ and $\widetilde{x}_r$ the points such that $w(x_r) = \max_{0 \leq i < r} w(T^ix_0)$ and $\widetilde{w}(\widetilde{x}_r) = \max_{0 \leq i < r} \widetilde{w}(T^ix_0)$. \end{defin}
Suppose $x_0 \in Z_{j_0}^{(n)}$. By definition of the sets $Z_j^{(n)}$, there exist $$ 0 \leq Q=Q(n) \leq r/ \min_j h^{(n)}_j \text{\ \ and\ \ }y_0^{(n)} \in I_{i_0}^{(n)}, y_1^{(n)} \in I_{i_1}^{(n)}, \dots, y_{Q+1}^{(n)} \in I_{i_{Q+1}}^{(n)}, $$ such that the orbit $\orbita{r}{x_0}$ is squeezed between the disjoint unions \begin{equation}\label{eq:decomposeorbit} \bigsqcup_{\alpha = 1}^{Q(n)} \orbita{h_{i_\alpha}^{(n)}}{y_{\alpha}^{(n)}} \subset \orbita{r}{x_0} \subset \bigsqcup_{\alpha = 0}^{Q(n)+1} \orbita{h_{i_\alpha}^{(n)}}{y_{\alpha}^{(n)}}. \end{equation} This expression shows that we can approximate the Birkhoff sum along $\orbita{r}{x_0}$ with a sum of special Birkhoff sums. We will need three levels of approximation $n_{l-L} < n_l < n_{l+1}$. Fix $L \in \mathbb{N}$ such that $2 \kappa d^{-L/\overline{l}} < \varepsilon$ and let $y_{\alpha}^{(n_{l-L})} \in I_{i_{\alpha}}^{(n_{l-L})}$ for $0 \leq \alpha \leq Q(n_{l-L})+1$, $y_{\beta}^{(n_{l})} \in I_{j_{\beta}}^{(n_{l})}$ for $0 \leq \beta \leq Q(n_{l})+1$, and $y_{\gamma}^{(n_{l+1})} \in I_{q_{\gamma}}^{(n_{l+1})}$ for $0 \leq \gamma \leq Q(n_{l+1})+1$ be defined as above.
By the positivity of $w$ and \eqref{eq:decomposeorbit}, it follows that $$ \sum_{\alpha = 1}^{Q(n_{l-L})} S_{h_{i_{\alpha}}^{(n_{l-L})}}(w)(y_{\alpha}^{(n_{l-L})}) \leq S_r(w)(x_0) \leq \sum_{\alpha =0}^{Q(n_{l-L})+1} S_{h_{i_{\alpha}}^{(n_{l-L})}}(w)(y_{\alpha}^{(n_{l-L})}), $$ and similarly for $\widetilde{w}$. Let $\varepsilon\rq{} >0$ (to be determined later); each term is a special Birkhoff sum, so, by applying the estimates \eqref{eq:sbsweakforf} and \eqref{eq:sbsweak}, we get \begin{equation}\label{eq:genericbsforw} S_r(w)(x_0) \leq (1+\varepsilon\rq{}) \Big( \int_0^1 w(x)\diff x \Big)\sum_{\alpha =0}^{Q(n_{l-L})+1} h_{i_{\alpha}}^{(n_{l-L})} + \sum_{\alpha =0}^{Q(n_{l-L})+1} w(x_{i_{\alpha}}^{(n_{l-L})}), \end{equation} and \begin{align} &S_r(\widetilde{w})(x_0) \geq (1-\varepsilon\rq{}) \sum_{\alpha =1}^{Q(n_{l-L})} h_{i_{\alpha}}^{(n_{l-L})} \log h^{(n_{l-L})}, \label{eq:genericbs1}\\ &S_r(\widetilde{w})(x_0) \leq (1+\varepsilon\rq{}) \sum_{\alpha =0}^{Q(n_{l-L})+1} h_{i_{\alpha}}^{(n_{l-L})} \log h^{(n_{l-L})} + \sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \label{eq:genericbs2}, \end{align} where $x_{i_{\alpha}}^{(n_{l-L})}$ and $\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}$ are the points defined in Definition \ref{definx} at which the corresponding special Birkhoff sums of $w$ and $\widetilde{w}$ attain their respective maxima. We refer to the first terms on the right-hand side of \eqref{eq:genericbsforw}, \eqref{eq:genericbs1} and \eqref{eq:genericbs2} as the \newword{ergodic terms} and to the second terms on the right-hand side of \eqref{eq:genericbsforw} and \eqref{eq:genericbs2} as the \newword{resonant terms}.
\paragraph{Ergodic terms.} The estimates of the ergodic terms for $\widetilde{w}$ are identical to those in \cite[pp.~1016--1017]{ulcigrai:mixing} and the estimate for $w$ can be deduced from the same proof. Explicitly, the ergodic term for $w$ is bounded above by $(1 + \varepsilon\rq{})^2 r \int w$, while the ergodic terms for $\widetilde{w}$ are bounded below and above by $(1- \varepsilon\rq{})^2 r \log r$ and by $(1+ \varepsilon\rq{})^2 r \log r$ respectively.
\paragraph{Resonant terms.} We want to estimate the resonant terms $ \sum_{\alpha} w(x_{i_{\alpha}}^{(n_{l-L})})$ and $ \sum_{\alpha} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})})$. First, we reduce to considering the maxima over sets $Z$ of step $n_l$ instead of step $n_{l-L}$, by comparing the sum with an arithmetic progression, as we did in the estimates for the gap error in~\S\ref{gaperror}.
Let $\varepsilon>0$. Again, we first consider $\widetilde{w}$. Group the summands according to the decomposition as in \eqref{eq:decomposeorbit} of step $n_l$, so that $$
\sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) = \sum_{\beta = 0}^{Q(n_{l})+1}\ \sum_{ \alpha \ :\ y_{\alpha}^{(n_{l-L})} \in \orbita{h_{j_{\beta}}^{(n_{l})}}{ y_{\beta}^{(n_l)} } } \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}). $$ For any fixed $\beta = 0, \dots, Q(n_{l})+1$, each of the points $\widetilde{x}_{i_{\alpha}}^{(n_{l-L})} \in \orbita{h_{i_{\alpha}}^{(n_{l-L})}}{ y_{\alpha}^{(n_{l-L})} } $ appearing in the second sum on the right-hand side above belongs to a different interval of $ Z_{j_{\beta}}^{(n_{l})}$, hence the distance between any two of them is at least $\lambda_{j_{\beta}}^{(n_l)} \geq (d \kappa \nu h_{j_{\beta}}^{(n_l)})^{-1}$. Moreover, the number of points $\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}$ contained in $ Z_{j_{\beta}}^{(n_{l})}$ is bounded by $\norma{A^{(n_{l-L},n_l)}}$.
Fix $0 \leq \beta \leq Q(n_{l})+1$; we separate the point $\widetilde{x}_{j_{\beta}}^{(n_{l})}$ corresponding to the maximum of $\widetilde{w}$ in $Z_{j_{\beta}}^{(n_{l})}$ from the others, \begin{multline*} \sum_{ \alpha \ :\ y_{\alpha}^{(n_{l-L})} \in \orbita{h_{j_{\beta}}^{(n_{l})}}{ y_{\beta}^{(n_l)} } } \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) = \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) \\ + \sum_{ \alpha \ :\ y_{\alpha}^{(n_{l-L})} \in \orbita{h_{j_{\beta}}^{(n_{l})}}{ y_{\beta}^{(n_l)} } ,\ \widetilde{x}_{i_{\alpha}}^{(n_{l-L})} \neq \widetilde{x}_{j_{\beta}}^{(n_{l})}} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}). \end{multline*} If $\widetilde{x}_{i_{\alpha}}^{(n_{l-L})} \neq \widetilde{x}_{j_{\beta}}^{(n_{l})}$, then $\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}$ does not belong to the interval of $\mathcal{Z}^{(n_l)}$ containing $a_k$ as left endpoint if $\widetilde{w}= \widetilde{u}_k$ or as right endpoint if $\widetilde{w}= \widetilde{v}_k$. Since $\widetilde{w}$ has only a one-sided singularity and is monotone, the value $\widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) $ is bounded by the inverse of the distance between $a_k$ and the second closest return to the right of $a_k$ if $\widetilde{w}= \widetilde{u}_k$, or to the left if $\widetilde{w}= \widetilde{v}_k$; in both cases we have that $\widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \leq 1/\lambda_{j_{\beta}}^{(n_l)}$. Moreover, $\modulo{\widetilde{x}_{i_{\alpha}}^{(n_{l-L})} - \widetilde{x}_{i_{\alpha\rq{}}}^{(n_{l-L})}} \geq (d \kappa \nu h_{j_{\beta}}^{(n_l)})^{-1}$; thus we can bound the second sum above by an arithmetic progression of length $\norma{A^{(n_{l-L},n_l)}}$.
Reasoning as in~\S\ref{gaperror} we obtain \begin{equation*} \begin{split} & \sum_{ \alpha : y_{\alpha}^{(n_{l-L})} \in \orbita{h_{j_{\beta}}^{(n_{l})}}{ y_{\beta}^{(n_l)} } } \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \leq \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) + \sum_{i= 1}^{\norma{A^{(n_{l-L},n_l)}}} \left( \lambda_{j_{\beta}}^{(n_l)} + \frac{i}{ d \kappa \nu h_{j_{\beta}}^{(n_l)} } \right)^{-1}\\ & \qquad \qquad \leq \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) + d \kappa \nu h_{j_{\beta}}^{(n_l)} (1+ \log (\norma{A^{(n_{l-L},n_l)}} +1)). \end{split} \end{equation*}
Therefore \begin{equation} \begin{split} \sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) & \leq \sum_{\beta = 0}^{Q(n_{l})+1} d \kappa \nu h_{j_{\beta}}^{(n_l)}( 1+\log(\norma{A^{(n_{l-L},n_l)}}+1)) \\ & \qquad + \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) . \label{eq:resonantterm}
\end{split} \end{equation}
The first term on the right-hand side in \eqref{eq:resonantterm} has the desired asymptotic behavior. Indeed, from \eqref{eq:decomposeorbit} we obtain $$ \sum_{\beta = 1}^{Q(n_{l})} h_{j_{\beta}}^{(n_l)} \leq r \leq \sum_{\beta = 0}^{Q(n_{l})+1} h_{j_{\beta}}^{(n_l)} \leq \sum_{\beta = 1}^{Q(n_{l})} h_{j_{\beta}}^{(n_l)} + 2 h^{(n_l)} \leq r + 2 h^{(n_l)}, $$ so that $ (\sum_{\beta} h_{j_{\beta}}^{(n_l)} )/r \leq1+ 2h^{(n_l)}/r \leq 3 $. Moreover $\log(\norma{A^{(n_{l-L},n_l)}}+1) / \log r \to 0$, by Corollary \ref{cor:thulcigrai}-(iii); for $l$ sufficiently big we then have \begin{equation} d \kappa \nu \left( \sum_{\beta= 0}^{Q(n_{l})+1} h_{j_{\beta}}^{(n_l)} \right)( 1+\log(\norma{A^{(n_{l-L},n_l)}}+1)) \leq \varepsilon {r \log r}. \label{eq:resonantterm1} \end{equation} Therefore, \eqref{eq:resonantterm} becomes \begin{equation}\label{eq:nonso1} \sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \leq \varepsilon {r \log r} + \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}). \end{equation}
The analogous approach for $w$ yields \begin{equation*} \begin{split} \sum_{\alpha =0}^{Q(n_{l-L})+1} w(x_{i_{\alpha}}^{(n_{l-L})}) & \leq \sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) + \sum_{\beta = 0}^{Q(n_{l})+1}\norma{A^{(n_{l-L},n_l)}} \left( 1- \log ( \lambda_{j_{\beta}}^{(n_l)} ) \right) \\ & \leq \sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) + 2 \norma{A^{(n_{l-L},n_l)}} (Q(n_{l})+2) \log h^{(n_l)}. \end{split} \end{equation*} Recalling that $Q(n_{l})$ is the number of special Birkhoff sums of level $n_l$ needed to approximate the original Birkhoff sum along $\orbita{r}{x_0}$ as in \eqref{eq:decomposeorbit}, it follows that $Q(n_{l}) \leq r/ \min_j h_j^{(n_l)} \leq \kappa r / h^{(n_l)}$. By Corollary \ref{cor:thulcigrai}-(ii), $\norma{A^{(n_{l-L},n_l)}} \leq l^{L\tau} = O\left( (\log h^{(n_l)})^{L\tau} \right)$; hence we conclude \begin{equation}\label{eq:nonso2} \begin{split} \sum_{\alpha =0}^{Q(n_{l-L})+1} w(x_{i_{\alpha}}^{(n_{l-L})}) & = O \left( \Big( \frac{r}{h^{(n_l)}}\Big) (\log h^{(n_l)})^{1+L\tau} \right) +\sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) \\ &\leq \varepsilon r +\sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}). \end{split} \end{equation}
Thus, it remains to bound the second summands in \eqref{eq:nonso1} and \eqref{eq:nonso2}. To do that, we proceed in two different ways depending on whether $r$ is closer to $h^{(n_{l+1})}$ or to $h^{(n_l)}$. Recalling the definitions of $\sigma_l$ and of $\Sigma_l(k)$ introduced in~\S\ref{section:quantitative}, we distinguish two cases.
\emph{Case 1.} Suppose that $\sigma_l h^{(n_{l+1})} \leq r < h^{(n_{l+1})}$. We compare the second summand in \eqref{eq:nonso1} with an arithmetic progression and the second summand in \eqref{eq:nonso2} in the same way as above, considering $n_l$ and $n_{l+1}$ instead of $n_{l-L}$ and $n_l$: we obtain \begin{equation}\label{eq:remarkallforw}
\sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) \leq 2 \norma{A^{(n_{l},n_{l+1})}} \sum_{\gamma = 0}^{Q(n_{l+1})+1} \log h^{(n_{l+1})} + \sum_{\gamma = 0}^{Q(n_{l+1})+1} w(x_{q_{\gamma}}^{(n_{l+1})}), \end{equation} and \begin{equation}\label{eq:remarkall} \begin{split}
\sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) &\leq \sum_{\gamma = 0}^{Q(n_{l+1})+1} d \kappa \nu h_{q_{\gamma}}^{(n_{l+1})}( 1+\log(\norma{A^{(n_{l},n_{l+1})}}+1)) \\
& \qquad + \sum_{\gamma = 0}^{Q(n_{l+1})+1} \widetilde{w}(\widetilde{x}_{q_{\gamma}}^{(n_{l+1})}). \end{split} \end{equation} Since $r < h^{(n_{l+1})} \leq \kappa \min_j h_j^{(n_{l+1})}$, as before we have that $Q(n_{l+1}) \leq r/ \min_j h_j^{(n_{l+1})} \leq \lfloor \kappa \rfloor$; therefore the second terms on the right-hand side of \eqref{eq:remarkallforw} and \eqref{eq:remarkall} are bounded by $(\lfloor \kappa \rfloor +2) w( x_r)$ and $(\lfloor \kappa \rfloor +2) \widetilde{w}( \widetilde{x}_r)$ respectively. We now bound the first summand in the right-hand side of \eqref{eq:remarkallforw}. We have that $\norma{A^{(n_{l},n_{l+1})}} \leq l^{\tau} = O\left( (\log h^{(n_{l})})^{\tau} \right) = O\left( (\log r)^{\tau} \right)$ as in the proof of Lemma \ref{th:stimelt}. Moreover, we use the estimate $h^{(n_{l+1})} / r \leq 1/ \sigma_l$ to get $$ \norma{A^{(n_{l},n_{l+1})}} \sum_{\gamma = 0}^{Q(n_{l+1})+1} \log h^{(n_{l+1})} = O \left( (\log r)^{1+\tau} - \log r \log \sigma_l\right) \leq \varepsilon r, $$
since $|\log \sigma_l| = O(\log \log h^{(n_l)}) = o(\log r)$, which is easy to check from the definition of $\sigma_l$. On the other hand, as regards the first summand in the right-hand side of \eqref{eq:remarkall}, we have \begin{multline*} d \kappa \nu \left( \frac{ \sum_{\gamma} h_{q_{\gamma}}^{(n_{l+1})} }{r} \right) \frac{( 1+\log(\norma{A^{(n_{l},n_{l+1})}}+1))}{ \log r}\\
\leq d \kappa \nu \frac{( \lfloor \kappa \rfloor +2)}{\sigma_l} \frac{ ( 1+\log(\norma{A^{(n_{l},n_{l+1})}}+1)) }{\log \left( \sigma_l h^{(n_{l+1})}\right)}, \end{multline*} which can be made arbitrarily small by enlarging $l$. Therefore, \begin{equation}\label{eq:resonantterm2forw} \sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) \leq \varepsilon r + (\lfloor \kappa \rfloor +2) w( x_r) \end{equation} and \begin{equation} \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}( \widetilde{x}_{j_{\beta}}^{(n_{l})}) \leq \varepsilon r \log r + (\lfloor \kappa \rfloor +2) \widetilde{w}( \widetilde{x}_r). \label{eq:resonantterm2} \end{equation}
\emph{Case 2.} Now suppose $h^{(n_l)} \leq r < \sigma_l h^{(n_{l+1})}$. If the initial point $x_0 \notin \Sigma_l(k)$, then for any $0 \leq i \leq \lfloor \sigma_l h^{(n_{l+1})} \rfloor$ we know that $\modulo{T^ix_0 - a_k} \geq \sigma_l \lambda^{(n_l)} \geq \sigma_l/ h^{(n_l)}$, since $1 = \sum_j h_j^{(n_l)} \lambda_j^{(n_l)} \leq h^{(n_l)} \sum_j \lambda_j^{(n_l)}= h^{(n_l)} \lambda^{(n_l)}$. In particular, we have that $w(x_r) \leq 1+\log h^{(n_l)}$ and $\widetilde{w}(\widetilde{x}_r) \leq h^{(n_l)}/\sigma_l$.
Obviously, $$ \sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) \leq (Q(n_{l})+2) w(x_r), \qquad \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) \leq (Q(n_{l})+2) \widetilde{w}(\widetilde{x}_r), $$ and we recall $Q(n_{l}) \leq r/ \min_j h_j^{(n_l)} \leq \kappa r / h^{(n_l)}$. Therefore, \begin{equation}\label{eq:resonantterm3forw} \sum_{\beta = 0}^{Q(n_{l})+1} w(x_{j_{\beta}}^{(n_{l})}) \leq \left( \frac{\kappa r}{h^{(n_l)}}+2 \right) (1+\log h^{(n_l)}) \leq \varepsilon r \end{equation} and $$ \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) \leq \left( \frac{\kappa r}{h^{(n_l)}}+2 \right) \frac{h^{(n_l)}}{\sigma_l} = \frac{\kappa r + 2 h^{(n_l)}}{\sigma_l}. $$ Since $h^{(n_l)} \leq r$ and $\log r / \log h^{(n_l)} \geq 1$ we can write \begin{equation} \sum_{\beta = 0}^{Q(n_{l})+1} \widetilde{w}(\widetilde{x}_{j_{\beta}}^{(n_{l})}) \leq \left( \frac{\kappa + 2}{\sigma_l \log h^{(n_l)}} \right) r \log r, \label{eq:resonantterm3} \end{equation} and the term in brackets can be made smaller than $\varepsilon$ by choosing $l$ big enough \cite[Lemma~3.9]{ulcigrai:mixing}.
\paragraph{Final estimates.} For any $r$ as in Case 1 and for any $x_0$, by combining \eqref{eq:nonso2} with \eqref{eq:resonantterm2forw} and \eqref{eq:nonso1} with \eqref{eq:resonantterm2}, \begin{equation*} \begin{split} &\sum_{\alpha =0}^{Q(n_{l-L})+1} w(x_{i_{\alpha}}^{(n_{l-L})}) \leq 2 \varepsilon r + (\lfloor \kappa \rfloor +2) w( x_r), \\ &\sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \leq 2 \varepsilon r \log r + (\lfloor \kappa \rfloor +2) \widetilde{w}( \widetilde{x}_r); \end{split} \end{equation*} while, for any $r$ as in Case 2 and for all $x_0 \notin \Sigma_l(k)$, by combining \eqref{eq:nonso2} with \eqref{eq:resonantterm3forw} and \eqref{eq:nonso1} with \eqref{eq:resonantterm3}, $$ \sum_{\alpha =0}^{Q(n_{l-L})+1} w(x_{i_{\alpha}}^{(n_{l-L})}) \leq 2 \varepsilon r, \quad \sum_{\alpha =0}^{Q(n_{l-L})+1} \widetilde{w}(\widetilde{x}_{i_{\alpha}}^{(n_{l-L})}) \leq 2 \varepsilon r \log r. $$ These estimates together with those for the ergodic terms prove \eqref{eq:conclusionforw} and \eqref{eq:conclusion}, choosing $\varepsilon, \varepsilon\rq{}>0$ appropriately.
\subsection{Proof of Theorem \ref{th:BS}} By the hypothesis on the roof function $f$ we can write \begin{equation}\label{eq:writef} \begin{split} &f(x) = \sum_{k=1}^{d-1} (C_k^{+} u_k(x) + C_k^{-} v_k(x)) + e(x), \\ &f\rq{}(x) = \sum_{k=1}^{d-1} (-C_k^{+} \widetilde{u}_k(x) + C_k^{-} \widetilde{v}_k(x)) + e\rq{}(x), \end{split} \end{equation} for a smooth function $e$. Fix $\epsilon < \varepsilon/(C^{+}+C^{-})$ and choose $\overline{r} \geq 1$ such that if $r \geq \overline{r}$ the estimates \eqref{eq:conclusionforw} and \eqref{eq:conclusion} hold with respect to $\epsilon$. By unique ergodicity of $T$, up to enlarging $\overline{r}$, we have that $S_r(e)(x) \leq (1 + \epsilon)r \int e$.
The estimates \eqref{eq:conclusionforw} imply \begin{equation*} \begin{split} S_r(f)(x_0) &\leq (1+\epsilon) r \sum_{k=1}^{d-1} \left(C_k^{+} \int_0^1 u_k(x) \diff x+ C_k^{-} \int_0^1 v_k(x) \diff x\right) \\ & \qquad + (1 + \epsilon)r \int_0^1 e(x) \diff x \\ &\leq (1+\epsilon)r \int_0^1 f(x) \diff x \\ & \qquad + 2 (d-1) ( \lfloor \kappa \rfloor +2) \max_{1\leq k \leq d-1} \max_{0 \leq i <r} \modulo{\log \modulo{T^ix_0 - a_k}}\\ &\leq 2 r + \const \max_{1\leq k \leq d-1} \max_{0 \leq i <r} \modulo{\log \modulo{T^ix_0 - a_k}}. \end{split} \end{equation*}
Considering the derivative $f\rq{}$, from the estimates \eqref{eq:conclusion} we get \begin{equation*} \begin{split} S_r(f\rq{})(x_0) & \leq -C^{+} (1-\epsilon) r \log r + C^{-} (1 + \epsilon) r \log r + C^{-} ( \lfloor \kappa \rfloor +2) \widetilde{V}(r,x) \\ & \leq (-C^{+} + C^{-}+\varepsilon) r \log r + C^{-} ( \lfloor \kappa \rfloor +2) \widetilde{V}(r,x), \end{split} \end{equation*} and similarly \begin{equation*} \begin{split} S_r(f\rq{})(x_0) & \geq -C^{+} (1+\epsilon) r \log r - C^{+} ( \lfloor \kappa \rfloor +2) \widetilde{U}(r,x)+ C^{-} (1 - \epsilon) r \log r \\ & \geq (-C^{+} + C^{-}-\varepsilon) r \log r - C^{+} ( \lfloor \kappa \rfloor +2) \widetilde{U}(r,x). \end{split} \end{equation*}
Let us estimate the Birkhoff sum of the second derivative $f\rq{}\rq{}$. By differentiating \eqref{eq:writef}, if $x_0$ is not a singularity of $S_r(f)$, we have \begin{equation*}
\modulo{S_r(f\rq{}\rq{})(x_0)} \leq \sum_{k=1}^{d-1} \left( C^{+}_k S_r(\widetilde{u}_k^2)(x_0) + C^{-}_k S_r(\widetilde{v}_k^2)(x_0) \right) + r \max_{x \in I}\modulo{e\rq{}\rq{}(x)}.
\end{equation*} Since $S_r(\widetilde{u}_k^2)(x_0) \leq \left(\max_{0 \leq i < r} \widetilde{u}_k(T^ix_0) \right)S_r(\widetilde{u}_k)(x_0)$ and similarly for $\widetilde{v}_k$, we get \begin{equation*} \begin{split} \modulo{S_r(f\rq{}\rq{})(x_0)} &\leq \widetilde{U}(r,x) \sum_{k=1}^{d-1} C_k^{+} S_r(\widetilde{u}_k)(x_0) + \widetilde{V}(r,x) \sum_{k=1}^{d-1} C_k^{-} S_r(\widetilde{v}_k)(x_0) \\ & \qquad+ r \max_{x \in I}\modulo{e\rq{}\rq{}(x)}, \end{split} \end{equation*} where we recall $$ \widetilde{U}(r,x) := \max_{1\leq k \leq d-1} \max_{0 \leq i < r} \widetilde{u}_k(T^ix), \qquad \widetilde{V}(r,x) := \max_{1 \leq k \leq d-1} \max_{0 \leq i < r} \widetilde{v}_k(T^ix). $$ Up to increasing $\overline{r}$, we have that $ \max_{x \in I}\modulo{e\rq{}\rq{}(x)} \leq \varepsilon \log r$; thus one can proceed as before to get the desired estimate.
\end{document}
\begin{document}
\title{\bf Shock Waves}
\thispagestyle{first} \setcounter{page}{185}
\begin{abstract}
\vskip 3mm
Shock wave theory was first studied for gas dynamics, for which shocks appear as compression waves. A shock wave is characterized as a sharp transition, even a discontinuity, in the flow. In fact, shocks appear in many different physical situations and represent strong nonlinearity of the physical processes. Important progress has been made on shock wave theory in recent years. We will survey the topics for which much more remains to be done. These include the effects of reactions, dissipation and relaxation, shock waves for interacting particles and the Boltzmann equation, and multi-dimensional gas flows.
\vskip 4.5mm
\noindent {\bf 2000 Mathematics Subject Classification:} 35. \end{abstract}
\vskip 12mm
\section{Introduction}
\vskip-5mm \hspace{5mm}
The most basic equations for shock wave theory are the systems of hyperbolic conservation laws $$ u_t+\nabla_x \cdot f(u)=0, $$ where $x\in R^m$ denotes the space variables and $u\in R^n$ the dependent variables. Such a system represents a basic physical model in which $u=u(x,t)$ is the density of conserved physical quantities and the flux $f(u)$ is assumed to be a function of $u$. A more complete system of partial differential equations takes the form $$u_t+\nabla_x \cdot f(u)=\nabla_x\cdot (B(u,\varepsilon)\nabla_x u)+g(u,x,t), $$ with $B(u,\varepsilon)$ the viscosity matrix, $\varepsilon$ the viscosity parameters, and $g(u,x,t)$ the sources. Other evolutionary equations which carry shock waves include interacting particle systems, the Boltzmann equation, and discrete systems. Discrete systems can appear as difference approximations to hyperbolic conservation laws. In all these systems, shock waves yield rich phenomena and also present serious mathematical difficulties due to their strong nonlinear character.
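To fix ideas, we recall the simplest scalar example (added here for illustration; it is not part of the notation above): the inviscid Burgers equation, for which the speed of a shock is determined by the classical Rankine--Hugoniot condition.
\begin{equation*}
% Inviscid Burgers equation: the case n = m = 1, f(u) = u^2/2.
u_t + \Big(\frac{u^2}{2}\Big)_x = 0.
\end{equation*}
A jump from a left state $u_-$ to a right state $u_+$, travelling with speed $s$, is a weak solution if and only if the Rankine--Hugoniot condition holds:
\begin{equation*}
s\,(u_+ - u_-) = f(u_+) - f(u_-)
\quad\Longrightarrow\quad
s = \frac{u_- + u_+}{2},
\end{equation*}
and it is an admissible (entropy) shock if and only if it is compressive, $u_- > u_+$.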
\section{Hyperbolic conservation laws}
\vskip-5mm \hspace{5mm}
Much has been done for hyperbolic conservation laws in one space dimension $$u_t+f(u)_x=0,\ x\in R^1, $$ see \cite{smo}, \cite{daf}, \cite{lium}, and the article by Bressan in this volume. Because the solutions in general contain discontinuous shock waves, such systems have provided the impetus for the introduction of new ideas, such as the Glimm functional, and are a good testing ground for new techniques, such as the theory of compensated compactness, in nonlinear analysis.
\vspace*{-2mm}
\section{Viscous conservation laws}
\vskip-5mm \hspace{5mm}
Physical models of the form of viscous conservation laws are not uniformly parabolic, but hyperbolic-parabolic. Basic study of the dissipation of solutions for such systems has been done using the energy method, see \cite{kaw}. The study of nonlinear waves for these systems has been initiated, \cite{liu}, \cite{liuzeng}; however, much more remains to be done. The difficulty lies in the nonlinear couplings due both to the nonlinearity of the flux $f(u)$, which is the topic of consideration for hyperbolic conservation laws, and to that of the viscosity matrix $B(u,\varepsilon)$. For instance, the study of the zero dissipation limit $\varepsilon \rightarrow 0$, see Bressan's article, has been done only for the artificial viscosity matrix $B(u,\varepsilon)=\varepsilon I$.
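As the simplest concrete instance (an illustration added here, with the artificial viscosity $B=\varepsilon I$), the viscous Burgers equation admits an explicit travelling-wave shock profile connecting states $u_- > u_+$:
\begin{equation*}
% Viscous Burgers equation:
u_t + \Big(\frac{u^2}{2}\Big)_x = \varepsilon\, u_{xx},
\end{equation*}
with the profile, of speed $s = (u_-+u_+)/2$ and strength $a = (u_--u_+)/2 > 0$,
\begin{equation*}
u(x,t) = s - a \tanh\!\Big( \frac{a\,(x - st)}{2\varepsilon} \Big),
\qquad u(\mp\infty,t) = u_{\mp}.
\end{equation*}
As $\varepsilon \to 0$ the profile converges pointwise (away from the shock location) to the discontinuous inviscid shock, illustrating the zero dissipation limit mentioned above.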
\vspace*{-2mm}
\section{Conservation laws with sources}
\vskip-5mm \hspace{5mm}
Sources added to conservation laws may represent geometric effects, chemical reactions, or relaxation effects. Thus there should be no unified theory for them. When the source represents geometric effects, such as multi-dimensional spherical waves, the hyperbolic conservation laws take the form $$u_t+f(u)_r=\frac{m-1}{r}h(u),\ r^2=\sum_{i=1}^m (x_i)^2.$$ There are stabilizing and destabilizing effects, such as in nozzle flows, \cite{lien}. Chemical effects occur in combustion. There are complicated, still mostly not understood, phenomena in the rich behaviour of combustion. One interesting problem is the transition from detonations to deflagrations, where the combined effects of dissipation, compression and chemical energy give rise to new wave behaviour. Viscous effects are important for the qualitative behaviour of nonlinear waves when the hyperbolic system is not strictly hyperbolic, see
\cite{liume}. Relaxation, such as for kinetic models and thermal non-equilibrium in general, is interesting because of the rich coupling of dissipation, dispersion and hyperbolicity, \cite{cll}.
\vspace*{-2mm}
\section{Discrete conservation laws}
\vskip-5mm \hspace{5mm}
Conservative finite difference approximations to hyperbolic conservation laws, in one space dimension, take the form: $$u^{n+1}(x)-u^n(x)=\frac{\Delta t}{\Delta x}(F[u^n](x+\frac{\Delta x}{2})-F[u^n](x-\frac{\Delta x}{2})).$$ It has been shown, for a class of systems of two conservation laws and dissipative schemes such as the Lax-Friedrichs and Godunov schemes, that the numerical solutions converge to the exact solutions of the conservation laws, \cite{ding}. On the other hand, qualitative studies of nonlinear waves for difference schemes indicate rich behaviour. In particular, the shock waves depend sensitively on their C-F-L speeds, \cite{se}, \cite{ly1}.
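For instance, the classical Lax--Friedrichs scheme (recalled here for illustration; the displayed conservative form above absorbs the sign into $F$, while below the standard sign convention $u^{n+1}_j = u^n_j - \frac{\Delta t}{\Delta x}(F_{j+1/2}-F_{j-1/2})$ is used) corresponds to the numerical flux
\begin{equation*}
% Grid x_{j+1} = x_j + \Delta x, cell values u^n_j = u^n(x_j):
F_{j+1/2}
   = \frac{f(u^n_j)+f(u^n_{j+1})}{2}
   - \frac{\Delta x}{2\Delta t}\,\big(u^n_{j+1}-u^n_j\big),
\end{equation*}
which, inserted into the conservative update, gives the familiar scheme
\begin{equation*}
u^{n+1}_j = \frac{u^n_{j+1}+u^n_{j-1}}{2}
   - \frac{\Delta t}{2\Delta x}\,\big(f(u^n_{j+1})-f(u^n_{j-1})\big).
\end{equation*}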
\vspace*{-2mm}
\section{Multi-dimensional gas flows}
\vskip-5mm \hspace{5mm}
Shock wave theory originated from the study of the Euler equations of gas dynamics. The classical book \cite{cf} is still important and mostly up to date on multi-dimensional gas flows. Because of the great difficulty, the study of multi-dimensional shocks concentrates on flows with a certain self-similarity property. One such problem is the Riemann problem, with initial value consisting of finitely many constant states. In that case, the solutions are functions of $x/t$, not general functions of $(x,t)$, see \cite{zhang}. See also \cite{zheng} for other self-similar solutions. However, unlike the one space dimensional case, multi-dimensional Riemann solutions do not represent general scattering data, and are quite difficult to study. It is more feasible to consider flows with shocks and solid boundaries, e.g. \cite{chen}, \cite{lien}.
\vspace*{-2mm}
\section{Boltzmann equation}
\vskip-5mm \hspace{5mm}
The Boltzmann equation $$f_t+\xi\cdot\nabla_x f=Q(f,f)$$ contains much more information than the gas dynamics equations. Nevertheless, the shock waves for all these equations satisfy the same Rankine-Hugoniot relation at the far states. The difference lies in the transition layer. An effort is beginning to make use of the techniques for conservation laws to study Boltzmann shocks, \cite{ly}. This is a line of research quite different from the intensive current efforts on the incompressible limits of the Boltzmann equation.
\vspace*{-2mm}
\section{Interacting particle systems}
\vskip-5mm \hspace{5mm}
Interacting particle systems are even closer to first physical principles than the Boltzmann equation. There is a long-standing problem, the Zermelo paradox, in passing from reversible particle systems to irreversible systems such as the Boltzmann equation, and the Euler equations with shocks. This is resolved for particle systems with random noise. However, except for scalar models, so far the derivation of the Euler equations from particle systems has been done only for solutions with no shocks, \cite{v}.
\end{document}
\begin{document}
\title{Quantum fidelity measures for mixed states}
\author{Yeong-Cherng Liang$^{1}$, Yu-Hao Yeh$^{1}$, Paulo E. M. F. Mendonça$^{2,3}$, Run Yan Teh$^{4}$, Margaret D. Reid$^{4,5}$ and Peter D. Drummond$^{4,5}$}
\address{$^{1}$ Department of Physics, National Cheng Kung University, Tainan 701, Taiwan}
\ead{[email protected]}
\address{$^{2}$Academia da Força Aérea, Caixa Postal 970, 13643-970, Pirassununga, SP, Brazil}
\ead{[email protected]}
\address{$^{3}$Melbourne Graduate School of Education, University of Melbourne, Melbourne, VIC 3010, Australia }
\address{$^{4}$Centre for Quantum and Optical Science, Swinburne University of Technology, Melbourne, VIC 3122, Australia}
\ead{[email protected], [email protected], [email protected]}
\address{$^{5}$Institute of Theoretical Atomic, Molecular and Optical Physics (ITAMP), Harvard University, Cambridge, Massachusetts, USA. }
\submitto{\RPP} \begin{abstract} Applications of quantum technology often require fidelities to quantify performance. These provide a fundamental yardstick for the comparison of two quantum states. While this is straightforward in the case of pure states, it is much more subtle for the more general case of mixed quantum states often found in practice. A large number of different proposals exist. In this review, we summarize the required properties of a quantum fidelity measure and compare the proposed measures to determine which of these properties each of them satisfies. We show that there are large classes of measures that satisfy all the required properties of a fidelity measure, just as there are many norms of Hilbert space operators, and many measures of entropy. We compare these fidelities, with detailed proofs of their properties. We also summarize briefly the applications of these measures in teleportation, quantum memories and quantum computers, quantum communications, and quantum phase-space simulations. \end{abstract} \maketitle \tableofcontents{}
\ioptwocol
\section{Introduction}
Fidelity is a central concept to quantum information. It provides a mathematical prescription for the quantification of the \emph{degree of similarity} of a pair of quantum states. In practice, there are many situations where such a comparison is useful. For example, since any experimental preparation of a quantum state is limited by imperfections and noise, one is generally interested in finding how close the state actually produced is to the state whose production was intended. This is a common issue in quantum communications and quantum computing, where one is interested in either generating or sending precisely defined quantum states in the face of noise and other sources of error.
Another common application arises in the context of entanglement quantification~\cite{Guehne2009}: the closer a given quantum state is to the set of separable states, the less entangled it is, and vice-versa (see, however,~\cite{Rosset2012}). Measuring and computing fidelities between quantum states is at the heart of various quantum information tasks. In recent years, fidelity measures have also been applied extensively to study quantum phase transitions. For a review on this subject, see~\cite{gu2010fidelity} and references therein.
For pure states, fidelity is well-defined. Yet pure states, by their nature, are exactly what one does \emph{not} expect in a noisy, real-world environment. Moreover, in large quantum systems, one needs to measure exponentially many parameters in order to fully determine the quantum states, thus making the task infeasible in practice. Instead, one could hope to obtain some partial information about the produced quantum states, for example, by performing tomography on some (random) subsets of the multipartite quantum states. Importantly, a generic multipartite pure state is highly entangled across any bipartition~\cite{Hayden2006}, hence its reduced states on subsets are typically mixed. Thus, more realistically, one must expect to deal with impure or mixed target states that are obtained by examining a subsystem obtained by tracing over a larger environment.
In all real-world experiments, there is also noise or coupling to an environment. This is an essential part of the quantum-classical transition, since it is the coupling to an environment that allows measurements. From the point of view of a quantum technologist or engineer, the environment causes decoherence, and this is the main challenge in many quantum technology applications. Just as relevant is the fact that one often wishes to analyze the performance of one component\textemdash for example, a quantum gate\textemdash embedded in a larger device, so that the environment is an essential part of the system of interest, as explained above. Thus, one may argue that the fidelity for mixed states is generic, and therefore the most practical type of fidelity. Pure state fidelity represents an idealized case only, which is typically non-scalable. For mixed states, however, fidelity has no clearly unique definition, and a number of different approaches exist.
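For orientation, we recall the standard definitions here (some authors omit the outer square, so conventions should be checked when comparing values across the literature): for pure states the fidelity is the squared overlap, and its most widely used mixed-state generalization is the Uhlmann--Jozsa fidelity,
\begin{equation*}
% Pure states:
F(\psi,\phi) = \big|\langle\psi|\phi\rangle\big|^{2},
\end{equation*}
\begin{equation*}
% Uhlmann--Jozsa fidelity for density operators \rho and \sigma:
F(\rho,\sigma) = \Big(\mathrm{Tr}\,\sqrt{\sqrt{\rho}\,\sigma\,\sqrt{\rho}}\Big)^{2},
\end{equation*}
which satisfies $0 \leq F \leq 1$, is symmetric, $F(\rho,\sigma) = F(\sigma,\rho)$, and reduces to the pure-state expression when $\rho = |\psi\rangle\langle\psi|$ and $\sigma = |\phi\rangle\langle\phi|$.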
Here the question is really one of distance: given two\emph{ density matrices}, how close are they to each other in the appropriate Hilbert space? This is an important issue, for example, if one wishes to understand how accurately a given approximate calculation replicates that of some target density matrix. The concept of distance in any vector space is never a uniquely defined concept without further considerations being applied.
There are other situations where mixed state fidelity might seem to be important, but frequently they are more subtle than would appear at first sight. For example, while it is known \cite{WoottersClone} that one cannot clone quantum states, there is great interest in the idea of optimal cloning \cite{BuzekHilleryPhysRevA.54.1844,ScaraniRMP.77.1225}, in which a state is copied as well as is allowed by quantum theory. This requires one to quantify the fidelity of the clone or copy, in order to decide if a given cloning strategy is really optimal. Similarly, while quantum memories and quantum teleportation are permitted by quantum mechanics, the real world of the laboratory leads to inevitable noise and errors. Again, it is a mixed state fidelity that is important, since the original state that is teleported or stored was typically not a pure state originally. These applications may involve multiple states as well, which requires not just fidelities, but averages over them.
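As a point of reference (standard benchmark values, quoted for illustration rather than derived in this review), for qubits the optimal universal cloner of B\v{u}zek and Hillery and the best classical measure-and-prepare strategy give
\begin{equation*}
% Optimal universal 1 -> 2 qubit cloning fidelity:
F_{\mathrm{clone}}^{\,1\to 2} = \frac{5}{6},
\qquad
% Best classical strategy for an unknown pure qubit:
F_{\mathrm{classical}} = \frac{2}{3}.
\end{equation*}
A teleportation or storage experiment must therefore exceed the average fidelity $2/3$ over unknown pure qubit inputs to demonstrate genuinely quantum state transfer.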
In this review, we analyze these issues by considering measures of fidelity that apply to mixed as well as pure states. In Section 2, we review the properties of a fidelity measure as defined by Jozsa \cite{jozsa1994fidelity}, which apply to the mixed state case, and illustrate that these are satisfied by a large number of proposed definitions of fidelity measure. This leads us to compare the different properties of these measures, which is the main purpose of this review. In Sections 3 and 4 we summarize the respective mathematical properties, and make comparisons of the values the different fidelity measures can have. We choose for specific comparison finite-dimensional ``qudit'' states with varying degrees of purity, and also make comparisons based on randomly generated density operators. Finally in Section 5, we review the application of fidelity measures to quantum information protocols such as teleportation~\cite{bennett1993}, quantum memories~\cite{Lvovsky:2009aa} and quantum gates~\cite{nielsen2000quantum}. Here, the fidelity measure can indicate a level of security for a quantum state transfer, or indicate the effectiveness of a logic operation. We present a useful tool\textemdash phase space fidelity\textemdash for evaluating the theoretical prediction of fidelity where systems and theories are complex.
As applications are based on experimental measurements, we conclude the review with a short discussion of the different types of experimental fidelity measurements that have been reported in the literature. These range from measurements on atomic states such as in ion traps, to photonic qubit states that might imply entanglement based on post-selection. The issue of defining the appropriate Hilbert space for the fidelity measurement is also discussed. In the Summary, we present our main conclusion, which is that, in some situations, fidelity measures alternative to the commonly used Uhlmann-Jozsa measure may be advantageous. The Appendix gives detailed proofs of novel results where the proofs are too lengthy for the main part of the review.
\section{Fidelity measures for mixed states}
\label{Sec:MixedFidelity}
\subsection{Measured and relevant Hilbert spaces\label{subsec:Relevant-and-irrelevant}}
Given the wide variety of quantum technologies involved, it is not surprising that there are many ways to measure fidelity. These may depend on the applications envisaged, or simply on what is measurable in an experiment. Yet these are quite different issues. What is feasible in an experiment may not be the fidelity measurement that is needed. To understand this point, we must introduce the concepts of the measured and the relevant Hilbert space, which are fundamental to understanding mixed state fidelity.
To motivate this analysis, we note that in certain types of relatively well-structured quantum states, such as the Greenberger-Horne-Zeilinger~\cite{GHZ} states, large-dimensional fidelity measurements have been reported, with 10-14 qubit Hilbert spaces \cite{monz201114,song201710,chen2017observation} being treated in ion-trap, photonic and superconducting quantum circuit environments. Such measurements are especially important in allowing the detection of decoherence and super-decoherence, where the decoherence rates increase with system size \cite{reid2014quantum,galve2017microscopic}.
These experiments are impressive demonstrations, but scaling them to even larger sizes is likely to become increasingly difficult owing to their exponential complexity, leading to exponentially many measurements being required in a general tomographic measurement. For this reason, we expect measurements of mixed state fidelity to become more common in large Hilbert spaces, as explained in the Introduction.
The first difficulty in carrying out a physical fidelity measurement is to identify the relevant Hilbert space. All quantum systems are coupled to other modes of the universe, but the state of Betelgeuse is usually only relevant if one wishes to communicate over a 640 light year distance. Thus, a useful fidelity measurement only measures the relevant fidelity. Suppose the global quantum state is divided into the relevant and irrelevant parts, with an orthogonal basis $\left|\psi_{i}\right\rangle _{{\rm rel}}\left|\phi_{j}\right\rangle _{{\rm irr}}$: \begin{equation}
\left|\Psi\right\rangle =\sum_{ij}C_{ij}\left|\psi_{i}\right\rangle _{{\rm rel}}\left|\phi_{j}\right\rangle _{{\rm irr}} \end{equation}
Next, the relevant density matrix is obtained by the usual partial trace procedure, so that \begin{equation}
\rho=\sum_{ijk}C_{ij}C_{kj}^{*}\left|\psi_{i}\right\rangle _{{\rm rel}}{\left\langle \psi_{k}\right|_{{\rm rel}}} \end{equation}
The irrelevant portion, $\left|\phi\right\rangle _{{\rm irr}}$, of the quantum state may change in time. Yet given a large enough separation, and for localized interactions, this will generally not alter the relevant density matrix, owing to the trace over irrelevant parts. However, if the product does not factorize initially, so that the relevant and irrelevant parts are entangled, then one always has a mixed state. This is the generic situation, and one should not assume that a state is pure without tomographic measurements that verify purity.
While subdivision is necessary, \emph{where should the dividing line be}? For example, the internal states of all the different ions in an ion trap, and their motional states, may all be relevant to an ion-trap quantum computer. However, the state of motion of the vacuum chamber will not be. Thus, the mixed state fidelity is a \emph{relative} measure. It depends on the relevant Hilbert spaces. One will get different results depending on how large the relevant Hilbert space is, which is application-dependent.
Yet the relevant space in many cases may still be much larger than the one in which the fidelity is \emph{measured}. For example, one might only be able to measure the internal states of one or two ions \cite{leibfried2003experimental}. The total relevant Hilbert space can be larger than this. This means that there is some loss of information, which may be important to the application. One is still entitled to claim to have measured a fidelity, but it is clearly not the only relevant fidelity.
This means that the definition and understanding of a mixed state fidelity is central to quantum technologies. The original idea of fidelity \cite{schumacher1995quantum}, as first introduced to the quantum information community, is not immediately applicable to an arbitrary pair of density matrices. However, it is implicit in this definition that for a pair of pure states $\rho=\proj{\phi}$ and $\sigma=\proj{\psi}$, their fidelity should be defined by the transition probability between the two states, i.e., \begin{equation}
\mathcal{F}(\proj{\phi},\proj{\psi})=\left|\langle\psi|\phi\rangle\right|^{2}.\label{Eq:Fidelity:AllPure} \end{equation} As pointed out subsequently \cite{jozsa1994fidelity}, this is indeed a natural candidate for a fidelity measure since it corresponds to the closeness of states in the usual geometry of Hilbert space.
When one of the quantum states, say, $\rho$ is mixed, there also exists a generalization of Eq.~\eref{Eq:Fidelity:AllPure} in terms of the transition probability between the two states, namely, \begin{equation} \mathcal{F}(\rho,\proj{\psi})=\bra{\psi}\rho\ket{\psi}.\label{Eq:Fidelity:1mixed} \end{equation} Note that this was also implicitly defined in the original fidelity measure~\cite{schumacher1995quantum}. In \cite{mendonca2008}, this expression has also been referred to as Schumacher's fidelity.
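Both Eq.~\eref{Eq:Fidelity:AllPure} and Eq.~\eref{Eq:Fidelity:1mixed} are straightforward to evaluate numerically. The following Python sketch (our own illustration, assuming NumPy; not taken from any of the cited works) computes Schumacher's fidelity for a qubit example:

```python
import numpy as np

def schumacher_fidelity(rho, psi):
    """F(rho, |psi><psi|) = <psi|rho|psi>: overlap of a mixed
    state rho with a pure state |psi>."""
    return float(np.real(np.conj(psi) @ rho @ psi))

# Example: compare |0> with a diagonal (mixed) qubit state
psi = np.array([1.0, 0.0])
rho = np.array([[0.75, 0.0],
                [0.0, 0.25]])
print(schumacher_fidelity(rho, psi))  # 0.75
```

For a pure $\rho=\proj{\phi}$ this reduces to $|\langle\psi|\phi\rangle|^{2}$, i.e., Eq.~\eref{Eq:Fidelity:AllPure}.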
\subsection{Desirable properties of mixed state fidelities}
\label{sec:jozsa_n_additions}
As one might expect, not every function of two density matrices provides a physically reasonable generalization of Eq.~\eref{Eq:Fidelity:1mixed} to a pair of mixed states.
For example, at first glance it may seem that \begin{equation} \mathcal{F}(\rho,\sigma)=\tr(\rho\,\sigma),\label{Eq:Fidelity:HSInnerProduct} \end{equation} serves as a useful generalization of Eq.~\eref{Eq:Fidelity:1mixed} to an arbitrary pair of mixed states. However, the Hilbert-Schmidt inner product leads to an unsatisfactory generalization of fidelity \cite{jozsa1994fidelity}. For example, let us denote by $\mathbf{1}_{d}$ the identity operator acting on a $d$-dimensional Hilbert space. Then, adopting Eq.~\eref{Eq:Fidelity:HSInnerProduct} as the mixed state fidelity would imply that all pairs of density matrices for a two-state or qubit system of the form $({\mathbf{1}_{2}}/{2},\proj{\phi})$ are just as similar as the identical pair $({\mathbf{1}_{2}}/{2},{\mathbf{1}_{2}}/{2})$.
This problem is soluble through a suitable normalization, as we show below, but it illustrates the need for a suitable definition of a fidelity measure. In order to avoid such difficulties, Jozsa proposed the following list of fidelity axioms \cite{jozsa1994fidelity}, which should be satisfied by any sensible generalization of Eq.~\eref{Eq:Fidelity:1mixed} to a pair of mixed states: \begin{itemize} \item[J1a)] \label{J1a} $\mathcal{F}(\rho,\sigma)\in[0,1]$ \item[J1b)] \label{J1b} $\mathcal{F}(\rho,\sigma)=1$ \emph{if and only if} $\rho=\sigma$ \item[J2)] \label{J2} $\mathcal{F}(\rho,\sigma)=\mathcal{F}(\sigma,\rho)$ \item[J3)] \label{J3} $\mathcal{F}(\rho,\sigma)=\tr(\rho\,\sigma)$ if either $\rho$ or $\sigma$ is a pure state \item[J4)] \label{J4} $\mathcal{F}(U\rho\,U^{\dagger},U\sigma U^{\dagger})=\mathcal{F}(\rho,\sigma)$ for all unitary $U$ \end{itemize} Henceforth, we shall refer to this set of conditions as Jozsa's axioms.
Apart from these, it is convenient to append to this list the requirement that any fidelity measure should vanish precisely on quantum states of orthogonal support, i.e., \begin{itemize} \item[J1c)] \label{J1c} $\mathcal{F}(\rho,\sigma)=0$ \emph{if and only if} $\rho\,\sigma=0$ \end{itemize} Throughout, the requirements (J1a)-(J4) will be taken as the most basic requirements to be satisfied by any generalization of the fidelity measure for a pair of mixed states.
One may be interested in further subtleties in determining fidelity, beyond the Jozsa axioms, depending on the application of the measure. For example, in the case of a mixed system, the idea of a state-by-state fidelity could be important. One may wish to investigate a cloning, communication or quantum memory experiment. Suppose, in the experiment, the input state is an unknown qudit state of dimension $d$, and has maximum entropy. In other words, one has $\rho={\mathbf{1}_{d}}/{d}$. According to Jozsa's criterion, the highest fidelity output state $\sigma$ would be another, identical, maximal entropy state. Yet this would not be a useful criterion on its own \textendash{} it is a necessary, but not sufficient measure.
Here, one might wish to have the maximum fidelity for every expected input state, pure or mixed, with an appropriate weight given by their relative probability. This requires an understanding of the correlations between the input and output states, given a communication alphabet $\rho_{A},\rho_{B},\ldots$, which is related to the conditional information measures found in communication theory \cite{caves1994quantum}. Such more general issues cannot be investigated by just using simple fidelity measures of the Jozsa type, and more sophisticated process fidelity measures \cite{gilchrist2005distance} are needed, which we discuss in detail later. However, even in this case, {a fidelity measure} can be useful provided it is applied relative to every input density matrix in the relevant communication alphabet, and then averaged with its probability of appearance. This is called the average fidelity: \begin{equation} \langle\mathcal{F}(\rho,\sigma)\rangle=\sum P_{j}\mathcal{F}(\rho_{j},\sigma_{j}),\label{eq:fidav} \end{equation} where the pair $\rho_{j},\sigma_{j}$ occur with probability $P_{j}$. However, the average fidelity may involve averages over mixed state fidelities. Hence this concept is a relative one, and depends on the precise definition of fidelity used.
\subsection{Uhlmann-Jozsa fidelity}
The most {widely-employed} generalization of Schumacher's fidelity that has been proposed in the literature is the {\em Uhlmann-Jozsa (U-J) fidelity} $\mathcal{F}_{1}$~\cite{jozsa1994fidelity,uhlmann1976transition}, defined as the maximal {\em transition probability} between purifications of a pair of density matrices $\rho$ and $\sigma$. Our choice of notation for this fidelity will become evident in Section~\ref{Sec:NormBased}: \begin{equation}
\mathcal{F}_{1}(\rho,\sigma)\mathrel{\mathop:}=\max_{\ket{\psi},\ket{\varphi}}{|\langle\psi|\varphi\rangle|^{2}}=\left(\tr\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2}\,.\label{eq:fid} \end{equation}
In order to understand this definition, we note that here, $\ket{\psi}$ is what is called a purification of $\rho$. A purification is a state in a notional extension of the Hilbert space $\mathcal{H}$ of $\rho$ to an enlarged space $\mathcal{H}'=\mathcal{H}\otimes\mathcal{H}_{2}$, such that $\ket{\psi}$ is a member of this larger space. The limitation on $\ket{\psi}$ is that when the projector, $\proj{\psi}$, is traced over the {auxiliary} Hilbert space $\mathcal{H}_{2}$, it reduces to $\rho$, i.e., \[ \rho=\tr_{\mathcal{H}_{2}}\left[\proj{\psi}\right]. \] This fidelity measure is often referred to simply as {\em the fidelity}. Nevertheless, the reader should beware that some authors (e.g., those of~\cite{gu2010fidelity,nielsen2000quantum,luo2004informational,audenaert2008asymptotic}) have referred, instead, to the square root of $\mathcal{F}_{1}(\rho,\sigma)$ as the fidelity. We show below that this fidelity measure, $\mathcal{F}_{1}(\rho,\sigma)$, is one of a large class of similar norm-based measures called $\mathcal{F}_p(\rho,\sigma)$.
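Numerically, Eq.~\eref{eq:fid} can be evaluated without constructing purifications, by diagonalizing the positive semidefinite matrices involved. The following Python sketch (ours, assuming NumPy; \texttt{scipy.linalg.sqrtm} would be an alternative for the matrix square roots) illustrates this:

```python
import numpy as np

def psd_sqrt(m):
    """Square root of a positive semidefinite Hermitian matrix,
    via its eigendecomposition."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return (v * np.sqrt(w)) @ v.conj().T

def uhlmann_jozsa_fidelity(rho, sigma):
    """F_1(rho, sigma) = (tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = psd_sqrt(rho)
    return float(np.real(np.trace(psd_sqrt(s @ sigma @ s)))) ** 2

# Sanity check on commuting (diagonal) states, where F_1 reduces
# to the classical expression (sum_i sqrt(a_i b_i))^2
print(uhlmann_jozsa_fidelity(np.diag([0.75, 0.25]),
                             np.diag([0.25, 0.75])))  # 0.75
```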
Given the wide use of this definition of $\mathcal{F}_{1}(\rho,\sigma)$, one might naturally wonder if there is any need to search further, especially as this definition carries with it a number of desirable properties. There are also some difficulties with this approach, however, and we list them here: \begin{itemize} \item The U-J fidelity requires one to calculate or measure traces of square roots of matrices. This is not trivial in cases of large or infinite density matrices. \item The conceptual basis of the U-J fidelity is that both the density matrices being compared are derived from an identical enlarged space $\mathcal{H}'$. This is not the case in many applications of fidelity. \item Since the U-J fidelity is a maximum over purifications, the measured U-J fidelity on a subspace is always greater than or equal to the true relevant fidelity for two pure states. This may introduce a bias in estimating pure-state fidelities given a measurement over a reduced Hilbert space. \end{itemize} This leads to an obvious mathematical question: \begin{itemize} \item \emph{Does the Jozsa set of requirements lead uniquely to the U-J fidelity, or do other alternatives exist?} \end{itemize} The purpose of this review is to answer this question. We show that, indeed, other alternatives do exist that satisfy the Jozsa axioms. A similar type of situation exists for the quantum entropy, where it is known that there are many entropy-like measures. Some are more suitable for given applications than others (see, e.g.,~\cite{Hu:JMP:2006,Muller:JMP:2013,Dupuis:2013} and references therein).
\subsection{Alternative fidelities}
One of the first proposals of a fidelity measure alternative to $\mathcal{F}_{1}$ was that provided by Chen~\emph{et al.} in~\cite{chen2002alternative}: \begin{equation} \mathcal{F}_{\mbox{\tiny C}}(\rho,\sigma)\mathrel{\mathop:}=\frac{1-r}{2}+\frac{1+r}{2}\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma),\label{Eq:Fc} \end{equation} where $r=\frac{1}{d-1}$, $d$ is the dimension of the state space, and \begin{equation} \mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)\mathrel{\mathop:}=\tr(\rho\,\sigma)+\sqrt{1-\tr{(\rho^{2})}}\sqrt{1-\tr{(\sigma^{2})}}.\label{Eq:Fn} \end{equation}
For two-dimensional quantum states, $\mathcal{F}_{1}|_{d=2}=\mathcal{F}_{\mbox{\tiny C}}|_{d=2}=\mathcal{F}_{\mbox{\tiny N}}|_{d=2}$~\cite{mendonca2008}. Moreover, $\mathcal{F}_{\mbox{\tiny C}}$ admits a hyperbolic geometric interpretation in terms of the generalized Bloch vectors~\cite{Kimura2003,Byrd2003}.
In 2008, $\mathcal{F}_{\mbox{\tiny N}}$ itself was proposed as an alternative fidelity measure in~\cite{mendonca2008}; the same quantity was also independently introduced in \cite{miszczak2009sub} by the name of super-fidelity, as it provides an {\em upper bound} on $\mathcal{F}_{1}$.
At about the same time, the square of the quantum affinity $A(\rho,\sigma)$ was proposed in \cite{ma2008geometric} (see also \cite{Raggio1984}) as a fidelity measure, by the name of {\em $A$-fidelity}: \begin{equation} \mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)\mathrel{\mathop:}=\left[\tr\left(\sqrt{\rho}\sqrt{\sigma}\right)\right]^{2}.\label{Eq:Fa} \end{equation} It is worth noting that, in contrast to $\mathcal{F}_{\mbox{\tiny N}}$, the $A$-fidelity $\mathcal{F}_{\mbox{\tiny A}}$ provides a {\em lower bound} on $\mathcal{F}_{1}$.
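The resulting sandwich $\mathcal{F}_{\mbox{\tiny A}}\le\mathcal{F}_{1}\le\mathcal{F}_{\mbox{\tiny N}}$ is easy to check numerically. The following Python sketch (ours; the Gaussian random-state construction used here is one common choice, not the only one) verifies both bounds on random qutrit pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(d):
    """Random density matrix from a Gaussian construction."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m)

def psd_sqrt(m):
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def f1(rho, sigma):
    """Uhlmann-Jozsa fidelity."""
    s = psd_sqrt(rho)
    return np.real(np.trace(psd_sqrt(s @ sigma @ s))) ** 2

def f_a(rho, sigma):
    """A-fidelity (squared affinity), a lower bound on f1."""
    return np.real(np.trace(psd_sqrt(rho) @ psd_sqrt(sigma))) ** 2

def f_n(rho, sigma):
    """Super-fidelity, an upper bound on f1."""
    t = np.real(np.trace(rho @ sigma))
    pr = np.real(np.trace(rho @ rho))
    ps = np.real(np.trace(sigma @ sigma))
    return t + np.sqrt(max(1.0 - pr, 0.0)) * np.sqrt(max(1.0 - ps, 0.0))

for _ in range(25):
    rho, sigma = random_density_matrix(3), random_density_matrix(3)
    assert f_a(rho, sigma) <= f1(rho, sigma) + 1e-9 <= f_n(rho, sigma) + 2e-9
print("F_A <= F_1 <= F_N held on 25 random qutrit pairs")
```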
The super-fidelity $\mathcal{F}_{\mbox{\tiny N}}$ clearly does not satisfy the axiom (J1c). In response to this \cite{wang2008alternative}, one may introduce the quantity \begin{equation} \mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)\mathrel{\mathop:}=\frac{\tr(\rho\,\sigma)}{\sqrt{\tr(\rho^{2})\,\tr(\sigma^{2})}}\,,\label{Eq:FGm} \end{equation} which is, instead, incompatible with axiom (J3). $\mathcal{F}_{\mbox{\tiny GM}}$ can be seen as the Hilbert-Schmidt inner product between $\rho$ and $\sigma$ normalized by the geometric mean (GM) of their purities $\tr{(\rho^{2})}$ and $\tr{(\sigma^{2})}$.
Lastly, let us point out another quantity of special interest. The non-logarithmic variety of the quantum Chernoff bound, $\mathcal{F}_{\mbox{\tiny Q}}$, is defined by Audenaert {\em et al.}~\cite{audenaert2007discriminating} as: \begin{equation} \mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma):=\min_{0\le s\le1}\tr(\rho^{s}\,\sigma^{1-s}).\label{Eq:Fq} \end{equation} This quantity was not originally proposed as a fidelity measure. Instead, it is related to the (asymptotic) probability of error incurred in discriminating between quantum states $\rho$ and $\sigma$ when one has access to arbitrarily many copies of them. Nonetheless, we will include $\mathcal{F}_{\mbox{\tiny Q}}$ in our subsequent discussion as it does have many desirable properties of a fidelity measure.
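Since Eq.~\eref{Eq:Fq} requires only a one-dimensional minimization, a simple grid search suffices for small examples. A Python sketch (ours; the grid resolution is an arbitrary choice, and full-rank states avoid the $0^{0}$ ambiguity at the endpoints $s=0,1$):

```python
import numpy as np

def psd_power(m, p):
    """Fractional power of a positive semidefinite Hermitian matrix."""
    w, v = np.linalg.eigh(m)
    w = np.clip(w, 0.0, None)
    return (v * w**p) @ v.conj().T

def f_q(rho, sigma, n_grid=1001):
    """Non-logarithmic quantum Chernoff bound via grid search over s."""
    grid = np.linspace(0.0, 1.0, n_grid)
    return min(np.real(np.trace(psd_power(rho, s) @ psd_power(sigma, 1.0 - s)))
               for s in grid)

# Commuting example: by symmetry the minimum sits at s = 1/2,
# giving sum_i sqrt(a_i b_i) = 2 sqrt(0.1875)
rho = np.diag([0.75, 0.25])
sigma = np.diag([0.25, 0.75])
print(f_q(rho, sigma))
```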
In fact, amongst all the generalized fidelity formulas proposed so far, only $\mathcal{F}_{\mbox{\tiny Q}}$ and $\mathcal{F}_{1}$ fully comply with Jozsa's axioms (cf. Table~\ref{tbl:JAxiomsCheck:ExistingMeasure}). Of these previously known fidelities, these two will therefore be our main focus in this review, although we point out that there is also an infinite class of norm-based fidelities that comply with Jozsa's axioms as well as these two. We note, however, that $\mathcal{F}_{\mbox{\tiny Q}}$ is also computationally challenging. Not only does it involve fractional powers, but one must minimize over a continuous parameter, with each candidate value involving different fractional powers.
\begin{table}[!h] \caption{\label{tbl:JAxiomsCheck:ExistingMeasure} Compatibility of existing fidelity measures against {Jozsa's} axioms.}
\begin{tabular}{c|cccccc}
& J1a & J1b & J1c & J2 & J3 & J4 \tabularnewline \hline \hline $\mathcal{F}_{1}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \tabularnewline $\mathcal{F}_{\mbox{\tiny Q}}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$
\tabularnewline \hline $\mathcal{F}_{\mbox{\tiny N}}$ & $\surd$ & $\surd$ & $\times$ & $\surd$ & $\surd$ & $\surd$ \tabularnewline $\mathcal{F}_{\mbox{\tiny C}}$ & $\surd$ & $\surd$ & $\times$ & $\surd$ & $\times$ & $\surd$ \tabularnewline $\mathcal{F}_{A}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\surd$ \tabularnewline $\mathcal{F}_{\mbox{\tiny GM}}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\surd$ \tabularnewline \end{tabular} \end{table}
\subsection{Norm-based fidelities}
\label{Sec:NormBased}
In addition, there are many norm-based fidelity measures that satisfy these axioms. Consider the operator $A=\sqrt{\rho}\sqrt{\sigma}$ whose properties are closely related to the overlap of two density matrices. There is an infinite class of unitarily invariant norms of linear operators, called the Schatten-von-Neumann norms (or more commonly Schatten norms), which can be used to measure the size of such operators. These are defined, for $p\ge1$, as~\cite{bhatia97}: \begin{equation} \left\Vert A\right\Vert _{p}\equiv\left(\tr\left[\left(AA^{\dagger}\right)^{p/2}\right]\right)^{1/p}. \end{equation} These norms all satisfy Hölder and triangle inequalities, so that ${\left\Vert A_{1}A_{2}\right\Vert _{p}}\le\left\Vert A_{1}\right\Vert _{2p}\left\Vert A_{2}\right\Vert _{2p}$. This particular inequality can be deduced, for example, from Corollary IV.2.6 of~\cite{bhatia97}. They are not yet suitable as fidelity measures, as they must be normalized appropriately to satisfy the fidelity axioms. Hence, keeping this in mind, we define a $p$-fidelity as: \begin{equation} \mathcal{F}_p\left(\rho,\sigma\right)\mathrel{\mathop:}=\frac{\left\Vert \sqrt{\rho}\sqrt{\sigma}\right\Vert _{p}^{2}}{\max\left[\left\Vert \sigma\right\Vert _{p}^{2},\left\Vert \rho\right\Vert _{p}^{2}\right]}. \end{equation}
The proof that the axioms are satisfied is given in~\ref{App:p-norm}. We note that, for the special case of $p=1$, $\mathcal{F}_{1}(\rho,\sigma)$ is exactly the Uhlmann-Jozsa fidelity. This follows since any density matrix $\sigma$ is positive semidefinite with unit trace, so that $\left\Vert \sigma\right\Vert _{1}^{2}={\left(\tr\sigma\right)^{2}}=1$, which is the same for all density matrices. Hence, the normalizing term in this case is
\begin{equation} \max\left[\left\Vert \sigma\right\Vert _{1}^{2},\left\Vert \rho\right\Vert _{1}^{2}\right]=\max\left[{\left(\tr\sigma\right)^{2}},{\left(\tr\rho\right)^{2}}\right]=1. \end{equation}
In this review, we focus on $\mathcal{F}_{1}(\rho,\sigma)$ and $\mathcal{F}_{2}(\rho,\sigma)$, which have especially desirable properties. In particular, $\mathcal{F}_{2}(\rho,\sigma)$, which is defined as: \begin{equation} \mathcal{F}_{2}(\rho,\sigma)\mathrel{\mathop:}=\frac{\tr(\rho\,\sigma)}{\max\left[\tr(\rho^{2}),\tr(\sigma^{2})\right]}\,,\label{eq:Fmax} \end{equation} uses the Hilbert-Schmidt operator measure $\left\Vert A\right\Vert _{2}$, which {is} often simpler to calculate than $\left\Vert A\right\Vert _{1}$. In fact, all of the even order $p$-fidelities can be easily evaluated, as they reduce to the form:
\begin{equation} \mathcal{F}_{2p}(\rho,\sigma)\mathrel{\mathop:}=\frac{\left\{ \tr\left[\left(\rho\,\sigma\right)^{p}\right]\right\} ^{1/p}}{\max\left\{ \left[\tr(\rho^{{2p}})\right]^{1/p},\left[\tr(\sigma^{{2p}})\right]^{1/p}\right\} }\,.\label{eq:Fmax-1} \end{equation}
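The agreement between the Schatten-norm definition of $\mathcal{F}_{p}$ and the even-order closed form of Eq.~\eref{eq:Fmax-1} can be verified numerically. A Python sketch (ours; Schatten norms are evaluated from singular values, and the random-state construction is one common choice):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m)

def psd_sqrt(m):
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T

def schatten_norm(a, p):
    """||A||_p = (sum of p-th powers of singular values)^(1/p)."""
    return float(np.sum(np.linalg.svd(a, compute_uv=False) ** p) ** (1.0 / p))

def fp_norm_based(rho, sigma, p):
    """F_p from the Schatten-norm definition."""
    num = schatten_norm(psd_sqrt(rho) @ psd_sqrt(sigma), p) ** 2
    den = max(schatten_norm(rho, p), schatten_norm(sigma, p)) ** 2
    return num / den

def f2p_trace_form(rho, sigma, p):
    """Closed form for the even orders 2p, Eq. (eq:Fmax-1)."""
    num = np.real(np.trace(np.linalg.matrix_power(rho @ sigma, p))) ** (1.0 / p)
    den = max(np.real(np.trace(np.linalg.matrix_power(rho @ rho, p))),
              np.real(np.trace(np.linalg.matrix_power(sigma @ sigma, p)))) ** (1.0 / p)
    return num / den

rho, sigma = random_density_matrix(4), random_density_matrix(4)
for p in (1, 2, 3):
    assert np.isclose(fp_norm_based(rho, sigma, 2 * p), f2p_trace_form(rho, sigma, p))
print("norm-based and trace forms agree for p = 2, 4, 6")
```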
We finally note that a very similar type of circumstance occurs for entropy, which is also sometimes used to calculate distances between two density matrices. The traditional von Neumann entropy measure involves a logarithm, and is often difficult to compute or measure. This can be generalized to the Rényi entropy \cite{renyi1961measures}, which is:
\begin{equation} S_{p}\left(\rho\right)=\frac{p}{1-p}\ln\left\Vert \rho\right\Vert _{p}. \end{equation}
The Rényi entropy reduces to the usual von Neumann entropy, $S\left(\rho\right)$, in the limit $p\rightarrow1$, just as the generalized fidelity defined above reduces to the Uhlmann-Jozsa fidelity in the same limit. Both generalizations have advantages in simplifying computations \cite{hastings2010measuring}.
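The limit $p\to1$ is easy to see numerically. A short Python sketch (ours; the eigenvalue cutoff is an arbitrary numerical guard against log of zero):

```python
import numpy as np

def renyi_entropy(rho, p):
    """S_p(rho) = ln tr(rho^p) / (1 - p), for p != 1, via eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]
    return float(np.log(np.sum(w ** p)) / (1.0 - p))

def von_neumann_entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]
    return float(-np.sum(w * np.log(w)))

rho = np.diag([0.5, 0.3, 0.2])
for p in (2.0, 1.1, 1.01, 1.001):
    print(p, renyi_entropy(rho, p))   # approaches the von Neumann value
print("von Neumann limit:", von_neumann_entropy(rho))
```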
\subsection{Hilbert-Schmidt fidelities}
Although the measure $\mathcal{F}_{\mbox{\tiny GM}}$ does not comply with all of Jozsa's axioms, its functional form suggests alternatives that are also worth investigating, using the Hilbert-Schmidt norm. An example is the norm-based fidelity $\mathcal{F}_{2}(\rho,\sigma)$ in the previous subsection, which is a special case of $\mathcal{F}_p(\rho,\sigma)$.
To this end, note that for an arbitrary symmetric, non-vanishing function $f$ that takes the purities of $\rho$ and $\sigma$ as arguments, one can introduce the family of measures \begin{equation} \mathcal{F}_{f}(\rho,\sigma)=\frac{\tr(\rho\,\sigma)}{f[\tr(\rho^{2}),\tr(\sigma^{2})]},\label{eq:Ff} \end{equation} which is easily seen to satisfy a number of Jozsa's axioms. Specifically, the symmetry of $f$ and the cyclic property of the trace guarantee that axiom (J2) is satisfied by $\mathcal{F}_{f}$, whereas the non-vanishing nature of $f$ guarantees that (J1c) is fulfilled. In addition, the fact that $f$ only takes the purities of $\rho$ and $\sigma$ as arguments ensures that $\mathcal{F}_{f}$ complies with (J4).
The advantage of measures like this is that they only involve the use of operator expectation values, in the sense that $\tr(\rho\,\sigma)$ is the expectation of $\sigma$ given the state $\rho$, or vice-versa. Such measures tend to be readily expressed and {accessible} using standard quantum mechanical techniques {applicable to infinite-dimensional} Hilbert spaces. By contrast, measures involving nested square roots of operators as found with $\mathcal{F}_{1}(\rho,\sigma)$ {are not} so readily calculated using standard quantum techniques in large Hilbert spaces, which is important when one is treating bosonic cases. Similar issues arise in the large Hilbert spaces that occur in many-body theory \cite{hastings2010measuring}.
Two classes of functions naturally fit into the above requirements, namely, means (arithmetic, geometric or harmonic) and extrema (minimum or maximum) of the purities of $\rho$ and $\sigma$. Other than the geometric mean and the maximum\textemdash which give, respectively, $\mathcal{F}_{\mbox{\tiny GM}}$ and $\mathcal{F}_{2}$\textemdash the other functions give, explicitly, \begin{eqnarray} \mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma) & \mathrel{\mathop:}=\frac{2\tr(\rho\,\sigma)}{\tr(\rho^{2})+\tr(\sigma^{2})}\,,\nonumber \\ \mathcal{F}_{\mbox{\tiny HM}}(\rho,\sigma) & \mathrel{\mathop:}=\frac{\tr(\rho\,\sigma)\left[\tr(\rho^{2})+\tr(\sigma^{2})\right]}{2\tr(\rho^{2})\tr(\sigma^{2})}\,,\label{Eq:NewMeasures}\\ \mathcal{F}_{\rm min}(\rho,\sigma) & \mathrel{\mathop:}=\frac{\tr(\rho\,\sigma)}{\min\left[\tr(\rho^{2}),\tr(\sigma^{2})\right]},\nonumber \end{eqnarray} where AM and HM stand for arithmetic and harmonic mean, respectively. The compatibility of these new measures against Jozsa's axioms is summarized in Table~\ref{tbl:JAxiomsCheck:NewMeasure}.
\begin{table}[!h] \caption{\label{tbl:JAxiomsCheck:NewMeasure} Compatibility of $\mathcal{F}_{2}$ and the candidate fidelity measures defined in \eref{Eq:NewMeasures} against {Jozsa's} axioms.}
\begin{tabular}{c|cccccc}
& J1a & J1b & J1c & J2 & J3 & J4 \tabularnewline \hline \hline $\mathcal{F}_{2}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$
\tabularnewline \hline $\mathcal{F}_{\mbox{\tiny AM}}$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\times$ & $\surd$ \tabularnewline $\mathcal{F}_{\mbox{\tiny HM}}$ & $\times$ & $\times$ & $\surd$ & $\surd$ & $\times$ & $\surd$ \tabularnewline $\mathcal{F}_{\rm min}$ & $\times$ & $\times$ & $\surd$ & $\surd$ & $\times$ & $\surd$ \tabularnewline \end{tabular} \end{table}
The (in)consistency of Eq.~\eref{Eq:NewMeasures} with axiom (J1c), (J2), (J3) and (J4) can be verified easily either by inspection or by the construction of counter-examples. Likewise, the normalization of $\mathcal{F}_{2}$ follows easily from Cauchy-Schwarz inequality whereas the \emph{incompatibility} of $\mathcal{F}_{\rm min}$ and $\mathcal{F}_{\mbox{\tiny HM}}$ with (J1a) can be verified easily, for example, by considering the following pair of $3\times3$ density matrices: \begin{equation} \rho=\Pi_{0},\quad\sigma=\frac{3}{4}\Pi_{0}+\frac{1}{8}(\Pi_{1}+\Pi_{2}), \end{equation} where, for convenience, we denote \begin{equation} \Pi_{i}\mathrel{\mathop:}=\proj{i}, \end{equation}
as the rank-1 projector corresponding to the $i$-th computational basis state, labelled $|0\rangle,|1\rangle,\ldots$. Explicitly, one finds that \begin{equation} \mathcal{F}_{\mbox{\tiny HM}}(\rho,\sigma)=\frac{153}{152}, \end{equation} and \begin{equation} \mathcal{F}_{\rm min}(\rho,\sigma)=\frac{24}{19}, \end{equation} both greater than 1, and thus incompatible with Jozsa's axiom (J1a). As for the normalization of $\mathcal{F}_{\mbox{\tiny AM}}$ (and $\mathcal{F}_{\mbox{\tiny GM}}$), the proofs can be found in~\ref{Sec:Proofs}.
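These values are easily reproduced numerically, since all matrices involved are diagonal. A Python sketch (ours):

```python
import numpy as np

rho = np.diag([1.0, 0.0, 0.0])            # Pi_0
sigma = np.diag([0.75, 0.125, 0.125])     # (3/4) Pi_0 + (1/8)(Pi_1 + Pi_2)

overlap = np.trace(rho @ sigma)           # tr(rho sigma) = 3/4
pr = np.trace(rho @ rho)                  # purity of rho   = 1
ps = np.trace(sigma @ sigma)              # purity of sigma = 19/32

f_hm = overlap * (pr + ps) / (2 * pr * ps)   # harmonic-mean normalization
f_min = overlap / min(pr, ps)                # minimum normalization

print(f_hm, f_min)   # 153/152 and 24/19, both exceeding 1
```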
In what follows, we will investigate the compatibility of the fidelity measures listed in Table~\ref{tbl:JAxiomsCheck:ExistingMeasure} and Table~\ref{tbl:JAxiomsCheck:NewMeasure} against other desirable properties that have been considered. We will, however, dismiss $\mathcal{F}_{\mbox{\tiny HM}}$ and $\mathcal{F}_{\rm min}$ from our discussion as they do not even meet the basic requirement of normalization. Apart from these we will mainly focus on $\mathcal{F}_{2}$. This meets all the required fidelity axioms, and will be termed the Hilbert-Schmidt fidelity.
The great advantage of $\mathcal{F}_{2}$ in computational terms is that, as well as complying with the extended version of Jozsa's axioms, it is also relatively straightforward to compute and to measure. It only involves expectation values of Hermitian operators, which are computable and measurable with a variety of standard techniques in quantum mechanics.
\section{Auxiliary fidelity properties}
\label{Sec:OtherProperties}
Let us now look into other auxiliary properties that have been discussed in the literature. We shall focus predominantly on the three measures that satisfy all of Jozsa's axioms, namely, $\mathcal{F}_{1}$, $\mathcal{F}_{2}$ and $\mathcal{F}_{\mbox{\tiny Q}}$. However, for completeness, we also provide a summary of our understanding of the various properties of those candidate fidelity measures that satisfy (J1a), (J1b), (J2) and (J4).
\subsection{Concavity properties}
A fidelity measure $\mathcal{F}(\rho,\sigma)$ is said to be separately concave if it is a concave function of each of its arguments. More precisely, $\mathcal{F}(\rho,\sigma)$ is concave in its first argument if for arbitrary density matrices $\sigma$, $\rho_{i}$ and arbitrary $p_{i}\ge0$ such that $\sum_{i}p_{i}=1$, \begin{equation} \mathcal{F}\left(\sum_{i}p_{i}\rho_{i},\sigma\right)\geq\sum_{i}p_{i}\mathcal{F}(\rho_{i},\sigma).\label{Eq:Concave1st} \end{equation} By the symmetry of $\mathcal{F}$ {[}Jozsa's axiom (J2){]}, i.e., $\mathcal{F}(\rho,\sigma)=\mathcal{F}(\sigma,\rho)$, a fidelity measure that is concave in its first argument is also concave in its second argument.
A stronger concavity property is also commonly discussed in the literature. Specifically, $\mathcal{F}(\rho,\sigma)$ is said to be jointly concave in both of its arguments if: \begin{equation} \mathcal{F}\left(\sum_{i}p_{i}\rho_{i},\sum_{j}p_{j}\sigma_{j}\right)\geq\sum_{i}p_{i}\mathcal{F}(\rho_{i},\sigma_{i}).\label{Eq:ConcaveJoint} \end{equation} This is a stronger concavity property in the sense that if Eq.~\eref{Eq:ConcaveJoint} holds, so must Eq.~\eref{Eq:Concave1st}. This can be seen by setting $\sigma_{j}=\sigma$ for all $j$ in \eref{Eq:ConcaveJoint}. Conversely, if $\mathcal{F}(\rho,\sigma)$ is not separately concave, it also cannot be jointly concave. Essentially, the concavity property of a fidelity measure tells us how the average state-by-state fidelity compares with the fidelity between the two resulting ensembles of density matrices, a point which we will come back to in Sec.~\ref{Sec:PureVsMixed}. A summary of the concavity properties of the various fidelity measures considered can be found in Table~\ref{tbl:Concavity}.
\begin{table}[!h] \caption{\label{tbl:Concavity} Summary of the concavity properties of various fidelity measures. The first column gives a list of the various measures, while the second and third column give the compatibility of each measure against the concavity property. {An asterisk $^{*}$ means that the square root of the measure satisfies the property. Throughout, we use the symbol $^{\ddag}$ to indicate that a particular property was\textemdash to our knowledge\textemdash not discussed in the literature previously. A question mark $?$ indicates that no counterexample has been found.} }
\begin{tabular}{c|c|c}
& Separate & Joint \tabularnewline \hline \hline $\mathcal{F}_{1}$ & $\surd$~\cite{uhlmann1976transition} & $\times${*} \tabularnewline $\mathcal{F}_{2}$ & $\times^{\ddag}$ & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny Q}}$ & $\surd$ & $\surd$~\cite{audenaert2007discriminating} \tabularnewline \hline $\mathcal{F}_{\mbox{\tiny N}}$ & $\surd$ & $\surd$~\cite{mendonca2008} \tabularnewline $\mathcal{F}_{\mbox{\tiny C}}$ & ? & ? \tabularnewline $\mathcal{F}_{\mbox{\tiny GM}}$ & $\times$~\cite{wang2008alternative} & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny AM}}$ & $\times^{\ddag}$ & $\times$ \tabularnewline $\mathcal{F}_{A}$ & ${?}^{*}$~\cite{luo2004informational} & $\times^{\ddag*}$\cite{luo2004informational} \tabularnewline \end{tabular} \end{table}
To see that Eq.~\eref{Eq:Concave1st} does not hold for $\mathcal{F}_{2}$ (as well as $\mathcal{F}_{\mbox{\tiny GM}}$ and $\mathcal{F}_{\mbox{\tiny AM}}$), it suffices to set $p_{1}=p_{2}=\frac{1}{2}$ and consider the qubit or $2\times2$ density matrices $\rho_{1}=\frac{1}{10}(\Pi_{0}+9\Pi_{1})$, $\rho_{2}=\frac{1}{5}(\Pi_{0}+4\Pi_{1})$ and $\sigma=\frac{1}{5}(3\Pi_{0}+2\Pi_{1})$.
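This counterexample is elementary to verify, since all matrices involved are diagonal. A Python sketch (ours):

```python
import numpy as np

def f2(rho, sigma):
    """Hilbert-Schmidt fidelity F_2, Eq. (eq:Fmax)."""
    return np.trace(rho @ sigma) / max(np.trace(rho @ rho),
                                       np.trace(sigma @ sigma))

rho1 = np.diag([0.1, 0.9])
rho2 = np.diag([0.2, 0.8])
sigma = np.diag([0.6, 0.4])

lhs = f2(0.5 * rho1 + 0.5 * rho2, sigma)
rhs = 0.5 * f2(rho1, sigma) + 0.5 * f2(rho2, sigma)
print(lhs, rhs)   # lhs < rhs: separate concavity, Eq. (Eq:Concave1st), fails
```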
On the other hand, to see that $\mathcal{F}_{1}$ and $\mathcal{F}_{\mbox{\tiny A}}$ are not jointly concave, it suffices to consider $p_{1}=\frac{49}{100}$, $p_{2}=\frac{1}{2}$, $p_{3}=\frac{1}{100}$ together with the qutrit or $3\times3$ density matrices $\rho_{1}=\Pi_{2}$, $\sigma_{1}=\Pi_{0}$, $\rho_{2}=\sigma_{2}=\Pi_{1}$, $\rho_{3}=\frac{1}{5}{(3\Pi_{1}+2\Pi_{2})}$, $\sigma_{3}=\frac{1}{5}(2\Pi_{0}+3\Pi_{1})$ in \eref{Eq:ConcaveJoint}.
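Again, the violation can be verified directly. Since all the matrices in this counterexample commute, $\mathcal{F}_{1}$ reduces to the squared Bhattacharyya overlap of the eigenvalue distributions, $[\sum_{i}\sqrt{r_{i}s_{i}}]^{2}$, and $\mathcal{F}_{\mbox{\tiny A}}$ coincides with it; a short check (a sketch, with states as eigenvalue lists):

```python
import math

# Check of the joint-concavity counterexample for F_1. All states are diagonal,
# so F_1 = (sum_i sqrt(r_i s_i))^2; F_A coincides with F_1 for commuting states.

def f1_diag(r, s):
    return sum(math.sqrt(ri * si) for ri, si in zip(r, s)) ** 2

p = [0.49, 0.5, 0.01]
rhos   = [[0, 0, 1], [0, 1, 0], [0, 0.6, 0.4]]   # Pi_2, Pi_1, (3 Pi_1 + 2 Pi_2)/5
sigmas = [[1, 0, 0], [0, 1, 0], [0.4, 0.6, 0]]   # Pi_0, Pi_1, (2 Pi_0 + 3 Pi_1)/5

rho_bar   = [sum(pi * r[k] for pi, r in zip(p, rhos))   for k in range(3)]
sigma_bar = [sum(pi * s[k] for pi, s in zip(p, sigmas)) for k in range(3)]

lhs = f1_diag(rho_bar, sigma_bar)                                    # ~ 0.256
rhs = sum(pi * f1_diag(r, s) for pi, r, s in zip(p, rhos, sigmas))   # 0.5036

# Joint concavity, Eq. (ConcaveJoint), would demand lhs >= rhs.
print(lhs, rhs)
```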
\subsection{Multiplicativity under tensor products}
A fidelity measure $\mathcal{F}(\rho,\sigma)$ is said to be multiplicative if for all density matrices $\rho_{i}$, $\sigma_{i}$ and for all integers $n\ge2$, \begin{equation} \mathcal{F}\left(\bigotimes_{i=1}^{n}\rho_{i},\bigotimes_{j=1}^{n}\sigma_{j}\right)=\prod_{i=1}^{n}\mathcal{F}(\rho_{i},\sigma_{i});\label{Eq:Multiplicative} \end{equation} likewise $\mathcal{F}(\rho,\sigma)$ is said to be {\em super-multiplicative} if \begin{equation} \mathcal{F}\left(\bigotimes_{i=1}^{n}\rho_{i},\bigotimes_{j=1}^{n}\sigma_{j}\right)\ge\prod_{i=1}^{n}\mathcal{F}(\rho_{i},\sigma_{i}).\label{Eq:SuperMultiplicative} \end{equation} Clearly, if $\mathcal{F}(\rho,\sigma)$ is (super)multiplicative for $n=2$, it is also (super)multiplicative in the general scenario.
Two special instances of multiplicativity under tensor products are worth mentioning. The first concerns the comparison of two quantum states when one has access to $n$ copies of each state. In this case, a multiplicative measure $\mathcal{F}(\rho,\sigma)$ is also multiplicative in its tensor powers, i.e., \begin{equation} \mathcal{F}(\rho^{\otimes n},\sigma^{\otimes n})=\left[\mathcal{F}(\rho,\sigma)\right]^{n},\label{Eq:MultiplicativeTensorPower} \end{equation} with the equality replaced by $\ge$ for a super-multiplicative measure. The other special instance concerns the scenario when $\rho$ and $\sigma$ are each appended with an uncorrelated state $\tau$. In this case, multiplicativity demands \begin{equation} \mathcal{F}\left(\rho\otimes\tau,\sigma\otimes\tau\right)=\mathcal{F}(\rho,\sigma)\mathcal{F}(\tau,\tau)=\mathcal{F}(\rho,\sigma). \end{equation} Intuitively, if $\mathcal{F}(\rho,\sigma)$ is a measure of the overlap between $\rho$ and $\sigma$, one would expect multiplicativity to be satisfied, at least, in these two special instances. In Table~\ref{tbl:Multiplicativity}, we summarize the multiplicativity properties of the various fidelity measures. For a proof of the (super)multiplicativity of $\mathcal{F}_{2}$, $\mathcal{F}_{\mbox{\tiny Q}}$ (and $\mathcal{F}_{\mbox{\tiny C}}$), and a counterexample showing that $\mathcal{F}_{\mbox{\tiny AM}}$ is in general neither multiplicative nor supermultiplicative, we refer the reader to~\ref{App:CountExamples}.
\begin{table}[!h] \caption{\label{tbl:Multiplicativity} Summary of the multiplicativity of the various fidelity measures. The first column gives the list of candidate measures $\mathcal{F}$. From the second to the fourth column, we have, respectively, the multiplicativity of the various measures $\mathcal{F}$ under the addition of an uncorrelated ancillary state, under tensor powers and under the general situation of \eref{Eq:Multiplicative}.}
\begin{tabular}{c|c|c|c}
& Ancilla & Tensor powers & General \tabularnewline \hline \hline $\mathcal{F}_{1}$ & $\surd$~\cite{jozsa1994fidelity} & $\surd$~\cite{jozsa1994fidelity} & $\surd$~\cite{jozsa1994fidelity} \tabularnewline $\mathcal{F}_{2}$ & $\surd^{\ddagger}$ & $\surd^{\ddagger}$ & Super$^{\ddagger}$ \tabularnewline $\mathcal{F}_{\mbox{\tiny Q}}$ & $\surd$~\cite{audenaert2008asymptotic} & $\surd^{\ddagger}$ & Super$^{\ddagger}$
\tabularnewline \hline $\mathcal{F}_{\mbox{\tiny N}}$ & Super & Super & Super~\cite{mendonca2008}\tabularnewline $\mathcal{F}_{\mbox{\tiny C}}$ & Super & Super & Super$^{\ddagger}$ \tabularnewline $\mathcal{F}_{\mbox{\tiny GM}}$ & $\surd$~\cite{wang2008alternative} & $\surd$~\cite{wang2008alternative} & $\surd$~\cite{wang2008alternative} \tabularnewline $\mathcal{F}_{\mbox{\tiny AM}}$ & $\surd^{\ddagger}$ & $\times^{\ddag}$ & $\times^{\ddag}$ \tabularnewline $\mathcal{F}_{A}$ & $\surd$~\cite{luo2004informational} & $\surd$~\cite{luo2004informational} & $\surd$~\cite{luo2004informational}\tabularnewline \end{tabular} \end{table}
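Both special instances can be illustrated with $\mathcal{F}_{2}=\tr(\rho\sigma)/\max[\tr(\rho^{2}),\tr(\sigma^{2})]$, again using commuting (diagonal) states for simplicity. The sketch below shows exact invariance under an uncorrelated ancilla and an instance of strict super-multiplicativity:

```python
# F_2 under tensor products, for diagonal states (lists of eigenvalues).

def f2_diag(r, s):
    overlap = sum(ri * si for ri, si in zip(r, s))
    return overlap / max(sum(ri**2 for ri in r), sum(si**2 for si in s))

def kron_diag(x, y):
    """Eigenvalues of a tensor product of diagonal states."""
    return [a * b for a in x for b in y]

rho, sigma, tau = [0.9, 0.1], [0.5, 0.5], [0.7, 0.3]

# (i) Appending the same uncorrelated ancilla tau leaves F_2 unchanged,
#     because both purities inside the max() are rescaled by tr(tau^2).
ancilla_lhs = f2_diag(kron_diag(rho, tau), kron_diag(sigma, tau))
ancilla_rhs = f2_diag(rho, sigma)

# (ii) Strict super-multiplicativity: pair a high-purity/low-purity combination
#      with its mirror image, so the max of products < product of the maxes.
joint   = f2_diag(kron_diag(rho, sigma), kron_diag(sigma, rho))
product = f2_diag(rho, sigma) * f2_diag(sigma, rho)

print(ancilla_lhs, ancilla_rhs, joint, product)
```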
\subsection{Monotonicity under quantum operations}
The physical operation of appending a given quantum state $\rho$ by a fixed quantum state $\tau$ discussed above is an example of what is known as a completely positive trace preserving (CPTP) map. If we denote a general CPTP map by $\mathcal{E}:\rho\to\mathcal{E}(\rho)$, it is often of interest to determine, for all density matrices $\rho$ and $\sigma$, whether the inequality \begin{equation} \mathcal{F}\left(\mathcal{E}(\rho),\mathcal{E}(\sigma)\right)\ge\mathcal{F}(\rho,\sigma)\label{Eq:MonotonicUp} \end{equation} is satisfied for a given candidate fidelity measure $\mathcal{F}$. A fidelity measure that satisfies this inequality is said to be non-contractive (or equivalently, monotonically non-decreasing) under quantum operations.
In this regard, it is worth noting that a measure $\mathcal{F}$ that (1) complies with the requirement of unitary invariance (J4), (2) is invariant under the addition of an uncorrelated ancillary state and is either (3a) non-contractive under partial trace operation or is (3b) jointly concave is also non-contractive under general quantum operations. The sufficiency of conditions (1), (2) and (3a) follows directly from the Stinespring representation (see, e.g.,~\cite{preskill2015lecture}) of CPTP maps, while that of (1), (2) and (3b) also makes use of a specific representation of the partial trace operation as a convex mixture of unitary transformations, see, e.g., Eq.~(33) of~\cite{Carlen2008}. For example, since $\mathcal{F}_{\mbox{\tiny A}}$ is monotonic~\cite{luo2004informational} under partial trace operation (likewise for $\mathcal{F}_{1}$ and $\mathcal{F}_{\mbox{\tiny Q}}$), the above sufficiency condition allows us to conclude that $\mathcal{F}_{\mbox{\tiny A}}$ is also monotonic under general quantum operations.
Apart from the partial trace operation and the extension of a quantum state by a fixed ancillary state, the measurement of a quantum state in some fixed basis followed by forgetting the measurement outcome is another class of CPTP maps that one frequently encounters in the context of quantum information. In particular, if each measurement operator (the Kraus operator) is a rank-1 projector, the post-measurement state would be the corresponding eigenstate. In this case, an evaluation of the fidelity between the different outputs of the CPTP map corresponds to an evaluation of the fidelity between the corresponding classical probability distributions.
\begin{table}[!h] \caption{\label{tbl:Monotonicity} Summary of the behavior of fidelity measures under quantum operations. The first column gives the list of candidate measures $\mathcal{F}$. From {the} second to the fourth column, we have, respectively, the non-decreasing monotonicity of the measures $\mathcal{F}$ under partial trace operation, under projective measurements (see {text for details}) and under general quantum operations. {That is, we mark an entry with a tick $\protect\surd$ if Eq.}~\eref{Eq:MonotonicUp} {holds for the corresponding CPTP map.}}
\begin{tabular}{c|c|c|c}
& Partial trace & Projection & General \tabularnewline \hline \hline $\mathcal{F}_{1}$ & $\surd$ & ${\surd}$ & $\surd$~ \cite{nielsen2000quantum}\tabularnewline $\mathcal{F}_{2}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny Q}}$ & $\surd$ & $\surd$ & $\surd$~\cite{audenaert2008asymptotic}
\tabularnewline \hline $\mathcal{F}_{\mbox{\tiny N}}$ & $\times$~\cite{mendonca2008} & ? & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny C}}$ & $\times^{\ddag}$ & ? & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny GM}}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & $\times$ \tabularnewline $\mathcal{F}_{\mbox{\tiny AM}}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & $\times$\tabularnewline $\mathcal{F}_{A}$ & $\surd$~\cite{luo2004informational} & $\surd$~\cite{luo2004informational} & $\surd$~\cite{Raggio1984} \tabularnewline \end{tabular} \end{table}
The monotonicity of the various candidate fidelity measures for the few different CPTP maps discussed above is summarized in Table~\ref{tbl:Monotonicity}. That $\mathcal{F}_{2}$ may be contractive under partial trace operation can be seen by considering the two-qubit density matrices $\rho=\Pi_{1}\otimes\rho_{B}$, $\sigma=\frac{1}{2}\mathbf{1}_{2}\otimes\sigma_{B}$ where \[ \rho_{B}=\left[\begin{array}{cc} 0.3 & 0.3\\ 0.3 & 0.7 \end{array}\right],\quad\sigma_{B}=\left[\begin{array}{cc} 0.06 & 0.2\\ 0.2 & 0.94 \end{array}\right] \] and the partial trace of $\rho$ and $\sigma$ over subsystem B. As for the monotonicity of $\mathcal{F}_{2}$, $\mathcal{F}_{\mbox{\tiny GM}}$ and $\mathcal{F}_{\mbox{\tiny AM}}$ under projective measurements, one may verify that these measures may be indeed contractive by considering the qubit density matrices \begin{eqnarray} \rho={\left[\begin{array}{cc} 0.35 & -0.25-0.2\,{\rm i}\\ -0.25+0.2\,{\rm i} & 0.65 \end{array}\right]},\nonumber \\ \sigma={\left[\begin{array}{cc} 0.82 & -0.2-0.24\,{\rm i}\\ -0.2+0.24\,{\rm i} & 0.18 \end{array}\right],} \end{eqnarray} and a rank-1 projective measurement in the computational basis. For the monotonicity of $\mathcal{F}_{\mbox{\tiny C}}$, $\mathcal{F}_{\mbox{\tiny GM}}$ and $\mathcal{F}_{\mbox{\tiny AM}}$ under the partial trace operation, we refer the reader to~\ref{App:CountExamples} for counterexamples.
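These two counterexamples are straightforward to verify numerically. The sketch below (helper names are ours) evaluates $\mathcal{F}_{2}=\tr(\rho\sigma)/\max[\tr(\rho^{2}),\tr(\sigma^{2})]$ before and after the partial trace, and before and after dephasing in the computational basis:

```python
# Contractivity of F_2 under a partial trace and a projective measurement,
# using the counterexamples quoted in the text.

def tr_prod(A, B):
    """tr(A B) for square matrices given as lists of rows (real for Hermitian A, B)."""
    n = len(A)
    return sum(A[i][j] * B[j][i] for i in range(n) for j in range(n)).real

def f2(A, B):
    return tr_prod(A, B) / max(tr_prod(A, A), tr_prod(B, B))

def kron(A, B):
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def ptrace_B(M):
    """Partial trace over the second qubit, index ordering (a, b) -> 2a + b."""
    return [[M[0][0] + M[1][1], M[0][2] + M[1][3]],
            [M[2][0] + M[3][1], M[2][2] + M[3][3]]]

# Partial-trace counterexample: rho = Pi_1 x rho_B, sigma = (1/2) 1 x sigma_B.
rho_B   = [[0.3, 0.3], [0.3, 0.7]]
sigma_B = [[0.06, 0.2], [0.2, 0.94]]
rho   = kron([[0, 0], [0, 1]], rho_B)
sigma = kron([[0.5, 0], [0, 0.5]], sigma_B)
before_pt, after_pt = f2(rho, sigma), f2(ptrace_B(rho), ptrace_B(sigma))

# Projective-measurement counterexample: dephase in the computational basis.
rho_q   = [[0.35, -0.25 - 0.2j], [-0.25 + 0.2j, 0.65]]
sigma_q = [[0.82, -0.2 - 0.24j], [-0.2 + 0.24j, 0.18]]
dephase = lambda M: [[M[0][0], 0], [0, M[1][1]]]
before_pm, after_pm = f2(rho_q, sigma_q), f2(dephase(rho_q), dephase(sigma_q))

print(before_pt, after_pt, before_pm, after_pm)
```

In both cases the fidelity decreases under the CPTP map, confirming the $\times$ entries for $\mathcal{F}_{2}$ in Table~\ref{tbl:Monotonicity}.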
The monotonicity of $\mathcal{F}_{1}$ and $\mathcal{F}_{\mbox{\tiny Q}}$ under partial trace means that, for these measures, the fidelity evaluated on a subsystem is always greater than or equal to the corresponding fidelity evaluated on the larger Hilbert space. Using either of these measures on a smaller Hilbert space as an estimator for the fidelity on an enlarged relevant Hilbert space therefore introduces a bias.
This potentially undesirable property is not shared by $\mathcal{F}_{2}$. However, the general question of which fidelity measure gives the best unbiased estimator under partial trace operations from a randomized extension of the measured Hilbert space appears to be an open problem.
\label{Sec:Metric}
\subsection{Metrics}
Intuitively, one expects that if $\mathcal{F}(\rho,\sigma)$ is a measure of the degree of similarity or overlap between $\rho$ and $\sigma$, then a proper distance measure, i.e., a metric, can be constructed via some functional of $\mathcal{F}(\rho,\sigma)$ that vanishes for $\rho=\sigma$.
In this section, we review what is known about the metric properties of three functionals of $\mathcal{F}(\rho,\sigma)$, namely, $\arccos[\sqrt{\mathcal{F}(\rho,\sigma)}]$, $\sqrt{1-\sqrt{\mathcal{F}(\rho,\sigma)}}$ and $\sqrt{1-\mathcal{F}(\rho,\sigma)}$ for the various fidelity measures discussed in the previous section. Following the literature, one may want to refer to these functionals, respectively, as the \emph{modified} Bures angle~\cite{nielsen2000quantum}, the \emph{modified} Bures distance~\cite{Hubner1992} and the \emph{modified} sine distance~\cite{Rastegin06}. The results are summarized in Table~\ref{tbl:RelatedMetrics} {while a proof of the respective metric properties can be found in }~\ref{App:Met}.
\begin{table}[!h] \caption{\label{tbl:RelatedMetrics} Metric properties for some functionals of the fidelity measures $\mathcal{F}=\mathcal{F}(\rho,\sigma)$, as discussed in Sec.~\ref{Sec:MixedFidelity}.}
\begin{tabular}{c|ccc}
& $\arccos[\sqrt{\mathcal{F}}]$ & $\sqrt{1-\sqrt{\mathcal{F}}}$ & $\sqrt{1-\mathcal{F}}$ \tabularnewline \hline \hline $\mathcal{F}_{1}$ & $\surd$~\cite{gilchrist2005distance} & $\surd$~\cite{gilchrist2005distance} & $\surd$~\cite{gilchrist2005distance}\tabularnewline $\mathcal{F}_{2}$ & $\times^{\ddagger}$ & $\times^{\ddagger}$ & $\surd^{\ddagger}$ \tabularnewline $\mathcal{F}_{\mbox{\tiny Q}}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & $\times^{\ddag}$
\tabularnewline \hline $\mathcal{F}_{\mbox{\tiny N}}$ & $\times$~\cite{mendonca2008} & $\times$~\cite{mendonca2008} & $\surd$~\cite{mendonca2008}~\tabularnewline $\mathcal{F}_{\mbox{\tiny C}}$ & ? & ? & $\surd^{\ddagger}$ \tabularnewline $\mathcal{F}_{\mbox{\tiny GM}}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & $\surd^{\ddagger}$ \tabularnewline $\mathcal{F}_{\mbox{\tiny AM}}$ & $\times^{\ddag}$ & $\times^{\ddag}$ & ? \tabularnewline $\mathcal{F}_{A}$ & ? & $\surd$~\cite{Raggio1984}~ & $\surd^{\ddag}$ \tabularnewline \end{tabular} \end{table}
Somewhat surprisingly, despite the fact that $\mathcal{F}_{\mbox{\tiny Q}}$ shares many nice properties with $\mathcal{F}_{1}$, none of these functionals derived from $\mathcal{F}_{\mbox{\tiny Q}}$ actually behaves like a metric for the space of density matrices. This can be verified, for example, by noticing a violation of the triangle inequality for all these functionals of $\mathcal{F}_{\mbox{\tiny Q}}$ under the choice \begin{eqnarray} \rho=\frac{1}{10}(3\Pi_{0}+7\Pi_{1}),\,\nonumber \\ \sigma=\frac{1}{100}(\Pi_{0}+99\Pi_{1}),\,\nonumber \\ \tau=\frac{1}{5}(\Pi_{0}+4\Pi_{1}). \end{eqnarray} As for a violation of the triangle inequality by the other functionals presented in the table, it suffices to consider the following qutrit density matrices: \begin{eqnarray} \rho=\frac{1}{5}(\Pi_{1}+4\Pi_{2}),\,\nonumber \\ \sigma=\Pi_{0},\,\nonumber \\ \tau=\frac{1}{5}\Pi_{0}+\frac{1}{20}\Pi_{1}+\frac{3}{4}\Pi_{2}. \end{eqnarray}
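Since the qubit states in the first counterexample commute, the quantum Chernoff quantity $\mathcal{F}_{\mbox{\tiny Q}}$ [Eq.~\eref{Eq:Fq}] reduces to a one-dimensional classical minimization over the eigenvalues, $\min_{0\le s\le1}\sum_{i}r_{i}^{s}s_{i}^{1-s}$, which makes the triangle-inequality violations easy to confirm numerically (a sketch; the minimization is done by a simple grid search):

```python
import math

# Triangle-inequality violations for functionals of F_Q, using the commuting
# qubit states quoted in the text (represented by their diagonal entries).

def fq_diag(r, s, steps=20000):
    """F_Q = min over 0 <= t <= 1 of sum_i r_i^t s_i^(1-t), by grid search."""
    return min(sum(a**t * b**(1 - t) for a, b in zip(r, s))
               for t in (k / steps for k in range(steps + 1)))

rho   = [0.3, 0.7]     # (1/10)(3 Pi_0 + 7 Pi_1)
sigma = [0.01, 0.99]   # (1/100)(Pi_0 + 99 Pi_1)
tau   = [0.2, 0.8]     # (1/5)(Pi_0 + 4 Pi_1)

F_rs, F_rt, F_ts = fq_diag(rho, sigma), fq_diag(rho, tau), fq_diag(tau, sigma)

functionals = {
    "Bures angle":    lambda F: math.acos(math.sqrt(F)),
    "Bures distance": lambda F: math.sqrt(1 - math.sqrt(F)),
    "sine distance":  lambda F: math.sqrt(1 - F),
}
# The triangle inequality d(rho, sigma) <= d(rho, tau) + d(tau, sigma) fails
# for all three functionals of F_Q with these states.
violations = {name: d(F_rs) > d(F_rt) + d(F_ts) for name, d in functionals.items()}
print(violations)
```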
Notice that, beyond the functionals considered above, it was shown in~\cite{ma2008geometric} that $\max_{\tau}|\mathcal{F}_{\mbox{\tiny N}}(\rho,\tau)-\mathcal{F}_{\mbox{\tiny N}}(\sigma,\tau)|$, a functional constructed from the super-fidelity $\mathcal{F}_{\mbox{\tiny N}}$, is a metric. In fact, with very similar arguments, the same authors showed in~\cite{ma2009pla} that the analogous functional with $\mathcal{F}_{1}$ in place of $\mathcal{F}_{\mbox{\tiny N}}$ is also a metric.
\section{Comparisons, bounds, and relations between measures }
\label{Sec:Comparisons}
To understand the differences between the fidelity measures, one must compare them quantitatively. In some cases there are rigorous bounds relating the fidelities, while in other cases the comparison is made graphically. As a rough guide, our numerics on full-rank, random density matrices for small $p$ and $d$ suggest that the average ordering \begin{equation} \mathcal{F}_{p}(\rho,\sigma)\lesssim\mathcal{F}_{1}(\rho,\sigma)\lesssim\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma) \end{equation} often holds for $p>1$. However, this is not a hard and fast rule. For qubits these are strict bounds (at least for $p=2$), but in larger Hilbert spaces the inequalities are only approximately true, with exceptions that depend strongly on the rank of the density matrices being compared.
These different cases are explained below.
\subsection{Bounds}
First, we provide a summary of inequalities relating some of these fidelity measures, or some bounds on them. To begin with, it was established in~\cite{raggio1982comparison} (see also~\cite{Raggio1984} and~\cite{luo2004informational}) that \begin{equation} \mathcal{F}_{1}(\rho,\sigma)\le\sqrt{\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)}\le\sqrt{\mathcal{F}_{1}(\rho,\sigma)}.\label{Eq:FujVsFa} \end{equation} Later, in~\cite{audenaert2008asymptotic} (see also~\cite{audenaert2007discriminating}), these inequalities were rediscovered and extended to \begin{equation} \mathcal{F}_{1}(\rho,\sigma)\le{\mathcal{F}_{\mbox{\tiny Q}}}(\rho,\sigma)\le\sqrt{\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)}\le\sqrt{\mathcal{F}_{1}(\rho,\sigma)},\label{Eq:FujVsFq} \end{equation} where the second of these inequalities follows directly from the definition of $\mathcal{F}_{\mbox{\tiny A}}$ and $\mathcal{F}_{\mbox{\tiny Q}}$ given, respectively, in Eqs.~\eref{Eq:Fa} and \eref{Eq:Fq}. At about the same time, $\mathcal{F}_{\mbox{\tiny N}}$ was also shown~\cite{miszczak2009sub} to be an upper bound on $\mathcal{F}_{1}$, i.e., \begin{equation} \mathcal{F}_{1}(\rho,\sigma)\le{\mathcal{F}_{\mbox{\tiny N}}}(\rho,\sigma).\label{Eq:FujVsFn} \end{equation} Here, we add to this list by showing that $\mathcal{F}_{2}(\rho,\sigma)$ actually provides a lower bound on $\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)$.
\begin{theorem}\label{teo1} For arbitrary Hermitian matrices $\rho$ and $\sigma$ such that $\tr(\rho^{2})\le1$ and $\tr(\sigma^{2})\le1$ \begin{equation} \mathcal{F}_{2}(\rho,\sigma)\leq\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)\,.\label{eq:Fminlb} \end{equation} \end{theorem} \begin{proof}
First, let us rewrite an arbitrary pair of $d$-dimensional density matrices $\rho$, $\sigma$ using an orthonormal basis of Hermitian matrices $\vec{\Upsilon}=(\Upsilon_{0},\Upsilon_{1},\ldots,\Upsilon_{d^{2}-1})$, \begin{equation} \rho=\vec{u}\cdot\vec{\Upsilon}\quad\mbox{and}\quad\sigma=\vec{v}\cdot\vec{\Upsilon},\label{eq:param} \end{equation} where $\vec{u}$, $\vec{v}\in\mathbb{R}^{d^{2}}$ are the expansion coefficients of $\rho$ and $\sigma$ in the basis $\vec{\Upsilon}$. We may now rewrite $\mathcal{F}_{2}$ and $\mathcal{F}_{\mbox{\tiny N}}$ as \begin{eqnarray} \mathcal{F}_{2}(\rho,\sigma)=\frac{\vec{u}\cdot\vec{v}}{\max(u^{2},v^{2})}\,,\nonumber \\ \mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)=\vec{u}\cdot\vec{v}+\sqrt{1-u^{2}}\sqrt{1-v^{2}}\,,\label{eq:f2-geom} \end{eqnarray}
where $u=\|\vec{u}\|_{2}$ and $v=\|\vec{v}\|_{2}$.
It is straightforward to check that inequality~(\ref{eq:Fminlb}) holds true if and only if the following inequality is true: \begin{equation} \frac{\vec{u}\cdot\vec{v}}{\max(u^{2},v^{2})}\leq\sqrt{\frac{1-\min(u^{2},v^{2})}{1-\max(u^{2},v^{2})}}\,,\label{eq:bound} \end{equation} which obviously holds if $u=v$. For definiteness, let $u>v$; then it suffices to show that \begin{equation} \frac{\vec{u}\cdot\vec{v}}{\sqrt{1-v^{2}}}\leq\frac{u^{2}}{\sqrt{1-u^{2}}}\,. \end{equation} This holds since, for $u>v$, the numerator of the l.h.s. is dominated by the numerator of the r.h.s. (as $\vec{u}\cdot\vec{v}\le uv<u^{2}$), whereas the denominator of the l.h.s. dominates the denominator of the r.h.s. The case where $v>u$ is completely analogous. \end{proof} \begin{corollary}\label{cor1} For arbitrary \emph{qubit} density matrices $\rho$ and $\sigma$, \begin{equation} \mathcal{F}_{2}(\rho,\sigma)\leq\mathcal{F}_{1}(\rho,\sigma)\,.\label{Eq:F2vsF1} \end{equation} \end{corollary} \begin{proof} This follows trivially from Theorem~\ref{teo1} and the fact that for qubit density matrices $\mathcal{F}_{1}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)$, see~\cite{mendonca2008}. \end{proof}
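For commuting states, the bounds collected so far can be spot-checked with elementary formulas: $\mathcal{F}_{1}$ becomes the squared Bhattacharyya overlap $[\sum_{i}\sqrt{r_{i}s_{i}}]^{2}$ (which, for commuting states, also equals $\mathcal{F}_{\mbox{\tiny A}}$), $\mathcal{F}_{\mbox{\tiny N}}$ follows from Eq.~\eref{eq:f2-geom}, and $\mathcal{F}_{\mbox{\tiny Q}}$ is a one-dimensional minimization. A sketch for a commuting qubit pair:

```python
import math

# Spot-check of Eqs. (FujVsFq) and (FujVsFn), Theorem 1 and Corollary 1 for a
# pair of commuting qubit states, given by their eigenvalue lists.

def f1_diag(r, s):   # squared Bhattacharyya overlap; equals F_A here as well
    return sum(math.sqrt(a * b) for a, b in zip(r, s)) ** 2

def f2_diag(r, s):
    t = sum(a * b for a, b in zip(r, s))
    return t / max(sum(a * a for a in r), sum(b * b for b in s))

def fn_diag(r, s):   # super-fidelity, cf. Eq. (eq:f2-geom)
    t = sum(a * b for a, b in zip(r, s))
    return t + math.sqrt(1 - sum(a * a for a in r)) * math.sqrt(1 - sum(b * b for b in s))

def fq_diag(r, s, steps=20000):
    return min(sum(a**t * b**(1 - t) for a, b in zip(r, s))
               for t in (k / steps for k in range(steps + 1)))

rho, sigma = [0.3, 0.7], [0.01, 0.99]
F1, F2, FN, FQ = (f1_diag(rho, sigma), f2_diag(rho, sigma),
                  fn_diag(rho, sigma), fq_diag(rho, sigma))

# Expected: F_1 <= F_Q <= sqrt(F_1) (the commuting case of Eq. FujVsFq),
# F_2 <= F_N (Theorem 1), F_2 <= F_1 (Corollary 1), and F_N = F_1 for qubits.
print(F1, F2, FN, FQ)
```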
From the inequality of arithmetic and geometric means, as well as the definitions given in Eqs.~\eref{Eq:FGm}, \eref{Eq:NewMeasures} and \eref{eq:Fmax}, it is easy to see that the following bounds hold: \begin{equation} \mathcal{F}_{2}(\rho,\sigma)\le\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)\leq\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)\,.\label{Eq:F2-gm-am} \end{equation} Besides, some straightforward calculation starting from the definitions given in Eqs.~\eref{Eq:Fc} and \eref{Eq:Fn} leads to \begin{equation} \mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)\leq\mathcal{F}_{\mbox{\tiny C}}(\rho,\sigma)\,.\label{Eq:FnvsFc} \end{equation}
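The chain in Eq.~\eref{Eq:F2-gm-am} is just the AM\textendash GM inequality applied to the purities in the denominators: with $t=\tr(\rho\sigma)$, $a=\tr(\rho^{2})$ and $b=\tr(\sigma^{2})$, one has $\max(a,b)\ge(a+b)/2\ge\sqrt{ab}$, hence $t/\max(a,b)\le 2t/(a+b)\le t/\sqrt{ab}$. A quick numerical confirmation on a commuting qutrit pair (a sketch, writing the three measures in this normalized form):

```python
# F_2 <= F_AM <= F_GM, checked on a pair of diagonal qutrit states.

rho, sigma = [0.7, 0.2, 0.1], [0.2, 0.3, 0.5]

t = sum(a * b for a, b in zip(rho, sigma))   # tr(rho sigma)
a = sum(x * x for x in rho)                  # tr(rho^2)
b = sum(x * x for x in sigma)                # tr(sigma^2)

f2  = t / max(a, b)          # cf. Eq. (eq:Fmax)
fam = 2 * t / (a + b)        # arithmetic-mean normalization
fgm = t / (a * b) ** 0.5     # geometric-mean normalization

print(f2, fam, fgm)          # increasing, as Eq. (F2-gm-am) demands
```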
Some remarks are now in order. Given that $\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)=\mathcal{F}_{1}(\rho,\sigma)$ for qubit states and the fact that $\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma)$ and $\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)$ both provide an upper bound on $\mathcal{F}_{1}(\rho,\sigma)$, one may wonder: \begin{itemize} \item[1)] Could $\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)$ provide a lower bound on $\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma)$, or the other way around? \item[2)] Since $\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)$ and the sub-fidelity introduced in~\cite{miszczak2009sub} both provide a lower bound on $\mathcal{F}_{1}(\rho,\sigma)$, could it be that one of these quantities also lower bounds the other? \item[3)] Could it be that Eq.~\eref{Eq:F2vsF1} also holds for higher-dimensional density matrices? \end{itemize} Here, let us note that counterexamples to \emph{all} of the above conjectures can be easily found by considering pairs of qutrit density matrices. However, we leave open the possibility of bounding these quantities using nonlinear (including polynomial) functionals of the other fidelity measures.
\subsection{Comparisons: interpolated qubit states} \begin{center} \begin{figure}\label{Fig:FidComp}
\end{figure} \par\end{center}
How large are the differences between these measures? In order to understand this, we first turn to a simple but concrete example. To this end, consider two families of qubit density matrices that interpolate between maximally mixed states\textemdash that are necessarily identical\textemdash and pure states that are distinct, with an interpolation parameter of $r\in[0,1]$: \begin{eqnarray} \rho(r)=\frac{1}{2}\left(\mathbf{1}_{2}+r\sigma_{x}\right),\quad\sigma(r)=\frac{1}{2}\left(\mathbf{1}_{2}+r\sigma_{z}\right).\label{Eq:ExampleStates} \end{eqnarray}
The purity of a density operator $\rho$ is given by $\mathcal{P}={\tr\ensuremath{\left(\rho^{2}\right)}}$. For the two quantum states in this comparison, the purities are set to be equal and are parametrized by $r$, i.e., \begin{eqnarray} {\mathcal{P}} & =\frac{1}{2}\left(1+r^{2}\right)\,,\label{eq:purity} \end{eqnarray} where $r=0$ corresponds to a maximally mixed state while $r=1$ corresponds to a pure state. Note that although these density matrices are mixed for all $0\le r<1$, in the limit of $r=1$ each becomes a distinct pure state, with a non-vanishing inner product between the two.
In Figure~\ref{Fig:FidComp}, we show the fidelities $\mathcal{F}_{1}(\rho,\sigma)$ {[}cf. Eq.~\eref{eq:fid}{]}, $\mathcal{F}_{2}(\rho,\sigma)$ {[}cf. Eq.~\eref{eq:Fmax}{]} and $\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma)$ {[}cf. Eq.~\eref{Eq:Fq}{]} as a function of the purity $\mathcal{P}$, Eq.~\eref{eq:purity}, of the states given in Eq.~\eref{Eq:ExampleStates}. It is worth noting that for these states, the inequalities of Eqs.~\eref{Eq:FujVsFn}, \eref{Eq:F2-gm-am}, and \eref{Eq:FnvsFc} are all saturated, thereby giving $\mathcal{F}_{1}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny C}}(\rho,\sigma)$ and $\mathcal{F}_{2}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)$. Moreover, for these states, it can also be verified that $\sqrt{\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)}=\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma)$, thereby saturating the second inequality in Eq.~\eref{Eq:FujVsFq}.
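For this family, the relevant fidelities have simple closed forms. Using the standard qubit expression $\mathcal{F}_{1}(\rho,\sigma)=\tr(\rho\sigma)+2\sqrt{\det\rho\det\sigma}$ together with $\tr(\rho\sigma)=1/2$ and $\det\rho=\det\sigma=(1-r^{2})/4$, one finds $\mathcal{F}_{1}=1-r^{2}/2$ and $\mathcal{F}_{2}=1/(1+r^{2})$; the sketch below compares them across the interpolation range:

```python
# Closed-form comparison of F_1, F_2 and F_N for the interpolated states of
# Eq. (ExampleStates): tr(rho sigma) = 1/2, det rho = det sigma = (1 - r^2)/4.

results = []
for k in range(11):
    r = k / 10
    purity = 0.5 * (1 + r * r)            # Eq. (eq:purity)
    f1 = 0.5 + 2 * ((1 - r * r) / 4)      # qubit formula; = 1 - r^2/2
    f2 = 0.5 / purity                     # = 1/(1 + r^2)
    fn = 0.5 + (1 - purity)               # super-fidelity; = F_1 for qubits
    results.append((r, f1, f2, fn))

for r, f1, f2, fn in results:
    print(r, f1, f2, fn)
```

At $r=1$ both fidelities reach $1/2$, the squared overlap of the two pure states, while for $0<r<1$ one has $\mathcal{F}_{2}<\mathcal{F}_{1}$, consistent with Eq.~\eref{Eq:F2vsF1}.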
\subsection{Comparisons: random density matrices}
As a second type of comparison, we generate two random density matrices in a Hilbert space of arbitrary dimension and compare them. These are generated by drawing a random matrix $g$ whose elements $g_{ij}$ are independent complex Gaussian random numbers of unit variance; the set of such matrices is known as the Ginibre ensemble~\cite{ginibre1965statistical}. From such a matrix, a random, positive-semidefinite density matrix with unit trace is generated \cite{zyczkowski2011generating} by letting: \begin{equation} \rho=\frac{gg^{\dagger}}{\tr\left(gg^{\dagger}\right)}.\label{eq:randomGaussian} \end{equation} Some investigations of random matrix fidelity have been carried out previously using the $\mathcal{F}_{1}$ fidelity~\cite{zyczkowski2005average}, while here we focus on the comparative behavior of the different fidelity measures. \begin{center} \begin{figure}\label{Fig:F2-F1:d2}
\end{figure} \par\end{center}
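A minimal sketch of this sampling procedure, Eq.~\eref{eq:randomGaussian}, in pure Python (a seeded generator is used for reproducibility; in practice one would use a numerical library):

```python
import random

rng = random.Random(0)

def random_density_matrix(d):
    """Draw rho = g g^dagger / tr(g g^dagger) from a complex Ginibre matrix g."""
    g = [[complex(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(d)]
         for _ in range(d)]
    # Form g g^dagger, which is Hermitian and positive semidefinite by construction.
    ggd = [[sum(g[i][k] * g[j][k].conjugate() for k in range(d))
            for j in range(d)] for i in range(d)]
    norm = sum(ggd[i][i].real for i in range(d))
    return [[x / norm for x in row] for row in ggd]

d = 4
rho = random_density_matrix(d)
trace  = sum(rho[i][i].real for i in range(d))
purity = sum((rho[i][j] * rho[j][i]).real for i in range(d) for j in range(d))
print(trace, purity)   # unit trace; purity lies between 1/d and 1
```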
In Figure~\ref{Fig:F2-F1:d2}, we compare pairs of random qubit density matrices, giving scatter plots both of $\mathcal{F}_{1}-\mathcal{F}_{2}$ against the maximum purity of the pair and of $\mathcal{F}_{2}$ against $\mathcal{F}_{1}$. We note that for this case, as expected from Eq.~\eref{Eq:F2vsF1}, $\mathcal{F}_{1}\ge\mathcal{F}_{2}$ for $d=2$. In addition, there is a lower bound on the state purity, since $\mathcal{P}\ge0.5$ for qubit density matrices. \begin{center} \begin{figure}\label{Fig:F2-F1:d3}
\end{figure} \par\end{center}
\begin{center} \begin{figure}
\caption{ Scatter plot showing the Uhlmann-Jozsa fidelity $\mathcal{F}_{1}$, Eq.~\eref{eq:fid}, and the Hilbert-Schmidt fidelity $\mathcal{F}_{2}$, Eq.~\eref{eq:Fmax}, for $10^{4}$ random pairs of $10\times10$ density matrices. The black line satisfying $\mathcal{F}_{1}=\mathcal{F}_{2}$ is a guide for the eye. The inset shows a scatter plot of the difference $\mathcal{F}_{1}-\mathcal{F}_{2}$ vs the {\em maximum} purity $\mathcal{P}$ of these pairs of density matrices.}
\label{Fig:F2-F1:d10}
\end{figure} \par\end{center}
In Figure~\ref{Fig:F2-F1:d3}, we compare pairs of random qutrit density matrices, again with scatter plots of both $\mathcal{F}_{1}-\mathcal{F}_{2}$ against the maximum purity of the pair, together with the two fidelities plotted against each other. For this case, with $d=3$, both $\mathcal{F}_{1}>\mathcal{F}_{2}$ and $\mathcal{F}_{1}<\mathcal{F}_{2}$ are possible. As a concrete example of the latter case, one only has to consider a comparison of two diagonal qutrit density matrices, where $p$ is a real coefficient such that $0<p<1$: \begin{equation} \rho=\left(1-p\right)\Pi_{1}+p\Pi_{2},\quad\sigma=\left(1-p\right)\Pi_{0}+p\Pi_{2}. \end{equation} This pair has the property that $\mathcal{F}_{1}<\mathcal{F}_{2}$ for any $p$ with $0<p<1$, since $\mathcal{F}_{2}$ is divided by the maximum purity, and this is less than unity. However, although extremely simple, this example is also atypical: in almost all cases, the two density matrices being compared are not simultaneously diagonal. For the average case, we see instead that $\langle\mathcal{F}_{1}\rangle>\langle\mathcal{F}_{2}\rangle$. This corresponds to the intuitive expectation that, since $\mathcal{F}_{1}$ is a maximum over purifications, it will generally have a bias towards high values.
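For this simultaneously diagonal qutrit pair the two fidelities reduce to $\mathcal{F}_{1}=p^{2}$ (the squared Bhattacharyya overlap) and $\mathcal{F}_{2}=p^{2}/[(1-p)^{2}+p^{2}]$ (both states have the same purity), so the reversal is easy to confirm (a sketch):

```python
# F_1 < F_2 for the simultaneously diagonal qutrit pair in the text:
# rho = (1-p) Pi_1 + p Pi_2, sigma = (1-p) Pi_0 + p Pi_2.

def fidelities(p):
    f1 = p ** 2                              # squared Bhattacharyya overlap
    f2 = p ** 2 / ((1 - p) ** 2 + p ** 2)    # common purity in the denominator
    return f1, f2

pairs = [fidelities(p / 10) for p in range(1, 10)]
print(pairs)   # f1 < f2 for every 0 < p < 1
```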
Finally, in the case of two random matrices of larger dimension, we compare two $10\times10$ qudit matrices, with the other details as previously. For this case, just as with $d=3$, both $\mathcal{F}_{1}>\mathcal{F}_{2}$ and $\mathcal{F}_{1}<\mathcal{F}_{2}$ are possible, although the fraction of cases with $\mathcal{F}_{1}>\mathcal{F}_{2}$ has increased substantially. Again, for the average case, $\langle\mathcal{F}_{1}\rangle>\langle\mathcal{F}_{2}\rangle$. As expected, there is a lower bound on the state purity, since $\mathcal{P}\ge1/d$ for a $d$-dimensional density matrix; here $d=10$, so $\mathcal{P}\ge0.1$. The average fidelity is greatly reduced in this case, since the larger Hilbert space dimension reduces the probability that two random matrices will be similar. \begin{center} \begin{figure}\label{Fig:F2-Fq:d2}
\end{figure} \par\end{center}
\begin{center} \begin{figure}\label{Fig:F2-Fq:d3}
\end{figure} \par\end{center}
\begin{center} \begin{figure}\label{Fig:F2-Fq:d10}
\end{figure} \par\end{center}
The corresponding plots comparing $\mathcal{F}_{2}$ and $\mathcal{F}_{\mbox{\tiny Q}}$ can be found, respectively, in Figure~\ref{Fig:F2-Fq:d2}, Figure~\ref{Fig:F2-Fq:d3} and Figure~\ref{Fig:F2-Fq:d10}.
\section{Applications}
\subsection{Fidelity in quantum physics}
Most current applications of quantum fidelity take place in the context of quantum technology. Since fidelity is a relative measure, the most appropriate fidelity depends on the proposed application. These technologies usually have a well-defined purpose. For example, one may use a quantum technology like squeezing \cite{caves1981quantum,walls1983squeezed} or Einstein-Podolsky-Rosen (EPR) correlation~\cite{EPR}/steering~\cite{wiseman2007,reid2009colloquium,ma2017proposal} to measure gravitational waves \cite{abbott2016observation,einstein1918gravitationswellen} in a more sensitive way. As well as precision metrology, other applications include enhanced communications \cite{caves1994quantum}, cryptography \cite{gisin2002quantum}, quantum computing \cite{steane1998quantum}, many-body quantum simulators \cite{fialko2015fate,bernien2017probing}, quantum data processing and storage \cite{schumacher1996quantum}, and the growing area of quantum thermodynamics \cite{millen2016perspective}.
Each of these fields has its own criteria for success. However, the components of the technology ultimately depend on the realization of certain quantum states and their processing. Hence, knowledge of fidelity can help to measure how close one is to achieving the required quantum states that are utilized at a given stage in a quantum process~\cite{nielsen2000quantum}, see, e.g.,~\cite{Yang2014,Kaniewski2016,Sekatski1802} and references therein. This concept is also applicable more widely, in fundamental physics problems like quantum phase transitions.
A fidelity measure, as with some other (operational) figures of merit (see, e.g.,~\cite{Chen:PRL:2016,Cavalcanti:PRA:2016,Rosset:PRX:2018}), is sometimes~\cite{Sekatski1802} used to prove that a device is quantum in nature, rather than operating at a simply classical level. More generally, one can analyze quantum operations in terms of process fidelity and average fidelity, as explained below.
\subsection{Average fidelity in quantum processes with pure input states}
\label{Sec:PureVsMixed}
Jozsa's axioms do not lead to a unique function of two density matrices that generalizes Eq.~\eref{Eq:Fidelity:1mixed} to a pair of mixed states. In this section, we would like to examine a desirable feature of a fidelity measure that arises naturally in a quantum communication scenario \cite{caves1994quantum}, although this concept also arises in other types of quantum processes. This also allows us to relate a fidelity measure as a measure of the degree of similarity between two quantum states to the notion of fidelity originally introduced by Schumacher~\cite{schumacher1995quantum}. We focus especially on the three candidate fidelities that satisfy the axioms, and have a physical interpretation: $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$ and $\mathcal{F}_{2}$.
We note that the uniform average fidelity over all possible pure input states is often termed ${\cal F}_{{\rm ave}}$. Like the more complex process fidelity, this can be used as a means of analyzing performance of quantum logic gates~\cite{knill2008randomized,harty2014high}. Here we allow ${\cal F}_{{\rm ave}}$ to have non-uniform initial probabilities, and we note that, for pure states, the fidelities being averaged are not dependent on the \emph{choice} of fidelity measure, since these are unique {[}by Jozsa's axiom (J3){]} if one of the density matrices being compared is a pure state.
Suppose we attempt to copy, store, perform idealized logic operations or transmit a number of pure states $\rho_{j}$, each occurring with probability $p_{j}$. Imagine further that the pure states $\rho_{j}$ are the desired output of the logic operations (communication protocol). Evidently, this combination of states can also be described by the mixed state $\rho=\sum_{j}p_{j}\rho_{j}$. Now, let us further imagine that during the physical processes of interest, there is a probability of error $\epsilon$, in which case a random error state $\rho_{0}$ is generated. For simplicity, we shall assume (unless otherwise stated) that $\rho_{0}$ is orthogonal to $\rho_{j}$, for all $j\neq0$.
A physical scenario that matches the above description consists of transmitting photon number states through some quantum channel, but with probability $\epsilon$, the Fock state $\rho_{j}=\ket{j}\!\!\bra{j}$ gets transformed to the vacuum state $\ket{0}\!\!\bra{0}$, or some other undesirable Fock state that corresponds to an error. However, we do not assume here that the inputs are necessarily orthogonal, as there are applications in cryptography~\cite{gisin2002quantum} where non-orthogonality serves as an important part of the communication protocol.
Let us denote the output state for the transmission of $\rho_{j}$ as $\sigma_{j}=\epsilon\rho_{0}+(1-\epsilon)\rho_{j}$. The overall combination of the output states can also be described by the following mixed state: \begin{equation} \sigma=\sum_{j}p_{j}\sigma_{j}=\epsilon\rho_{0}+(1-\epsilon)\rho. \end{equation}
The mixed state fidelity can now be examined from two points of view: the {\em average state-by-state fidelity}, and also the {\em fidelity as a mixed state}. For a sensible generalization of Schumacher's fidelity to a pair of mixed states, it seems reasonable to require that the average state-by-state fidelity corresponds exactly to the mixed state fidelity, at least, under some circumstances.
To this end, let us first define the average state-by-state fidelity {as} a function of the probabilities and states: \begin{equation} \mathcal{F}_{\rm ave}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)=\sum_{j}p_{j}\mathcal{F}(\rho_{j},\sigma_{j}). \end{equation}
Since the inputs are individually pure states, the fidelity is just the state overlap, and one then obtains:
\begin{equation} \mathcal{F}_{\rm ave}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)=\sum_{j}p_{j}\tr(\rho_{j}\sigma_{j})=1-\epsilon. \end{equation} In the case of $\mathcal{F}_{2}$, the average fidelity matches the mixed-state fidelity, so that \begin{equation} \mathcal{F}_{\rm ave}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)=\mathcal{F}(\rho,\sigma),\label{Eq:AveVsMixed} \end{equation} whenever the average signal state $\rho$ has purity equal to or greater than that of the average output state $\sigma$, i.e., $\tr(\sigma^{2})\le\tr(\rho^{2})$. This is a common situation, although not a universal one.
If all (input) states including the error state are orthogonal to each other, and hence diagonal in the same basis, it is easy to verify that the Uhlmann-Jozsa fidelity $\mathcal{F}_{1}$, the non-logarithmic variety of the quantum Chernoff bound, $\mathcal{F}_{\mbox{\tiny Q}}$, as well as the $A$-fidelity $\mathcal{F}_{\mbox{\tiny A}}$ comply with the desired requirement of Eq.~\eref{Eq:AveVsMixed}, without any additional conditions. For a proof, see~\ref{Sec:Proofs}.
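The fully diagonal case can be checked numerically. The following sketch (with illustrative probabilities and error rate of our own choosing, not taken from the text) verifies that $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$ and $\mathcal{F}_{2}$ all reduce to $1-\epsilon$ when every state is diagonal in the same basis:

```python
import numpy as np

# Orthogonal pure inputs |j><j| with probabilities p_j, plus an orthogonal
# error state |0><0| occurring with probability eps (our own construction).
eps = 0.2
p = np.array([0.5, 0.3, 0.2])                 # input probabilities p_j
r = np.concatenate(([0.0], p))                # diagonal of rho
s = eps * np.array([1.0, 0, 0, 0]) + (1 - eps) * r   # diagonal of sigma

# Uhlmann-Jozsa fidelity for commuting states: (sum_i sqrt(r_i s_i))^2
F1 = np.sum(np.sqrt(r * s)) ** 2

# Non-logarithmic quantum Chernoff bound: min_t tr(rho^t sigma^(1-t)),
# restricted to the support of rho to avoid 0^0 ambiguities.
mask = r > 0
FQ = min(np.sum(r[mask] ** t * s[mask] ** (1 - t))
         for t in np.linspace(0, 1, 1001))

# F_2 = tr(rho sigma) / max(tr rho^2, tr sigma^2)
F2 = np.sum(r * s) / max(np.sum(r ** 2), np.sum(s ** 2))

print(F1, FQ, F2)   # all equal 1 - eps = 0.8
```

All three measures agree with the average state-by-state fidelity $1-\epsilon$, as claimed.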
It is also not difficult to check that if $\rho$ is a pure state, $\mathcal{F}_{\mbox{\tiny N}}$ also satisfies the desired requirement, although it is rare for a pure state to be used for communications in this way, owing to its extremely small information content. As for $\mathcal{F}_{\mbox{\tiny C}}$, $\mathcal{F}_{\mbox{\tiny AM}}$ and $\mathcal{F}_{\mbox{\tiny GM}}$, there does not appear to be a generic situation where Eq.~\eref{Eq:AveVsMixed} holds true for all error probabilities $0<\epsilon<1$.
Let us briefly analyze the case where the average output state has a greater purity than the signal state. At first this seems unlikely, but in fact it can occur. If the error state $\rho_{0}$ is a pure state, and the error occurs with probability one, then the output is always a pure state. This could be a vacuum state, owing to an extremely serious error condition: the channel being completely absorbing. Under these conditions, one has $\mathcal{F}_{\rm ave}=0$, which is expected: every input of the alphabet results in an incorrect output. This is true of any fidelity measure that satisfies our generalized Jozsa axiom list, owing to the orthogonality condition, and hence it is true of $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$ and $\mathcal{F}_{2}$.
Finally, we note that the most general error state $\rho_{0}$ depends on the input state $\rho_{j}$, and is neither pure nor orthogonal to all other inputs. Under these circumstances, $\mathcal{F}_{\rm ave}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)$ is still well-defined but it may not correspond to any of the mixed fidelities defined here.
\subsection{Average fidelity in quantum processes with mixed input states}
\label{Sec:PureVsMixed-1}
A quantum process with pure input states is an ideal scenario that is unlikely to occur in reality. Not only the outputs but even the inputs are likely to be mixed states, as they are typically multi-mode and will be subject to timing jitter, losses and spontaneous emission noise, to name just a few possible error conditions.
In this case, we may have a scenario like the one given above, except that all the states are mixed. We again assume for simplicity that errors in the output channel are {\em orthogonal} to all inputs, as before. The average state-by-state fidelity is now a function of the probabilities, states and the fidelity measure: \begin{equation} \mathcal{F}_{\rm ave}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)=\sum_{j}p_{j}\mathcal{F}(\rho_{j},\sigma_{j}). \end{equation} In general, for $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$, and $\mathcal{F}_{\mbox{\tiny A}}$, the situation becomes complex, as there is no guarantee that all the states involved can be diagonalized in the same basis. If they can be, we again end up with Eq.~\eref{Eq:AveVsMixed}. Otherwise, an evaluation of the corresponding fidelities is generally difficult, since, for example, the products of matrix square roots that occur in $\mathcal{F}_{1}$ are extremely nontrivial in general.
As for $\mathcal{F}_{2}$, a reduction holds if the individual input states $\rho_{j}$ and the average state $\rho$ have degraded purity owing to errors {{[}i.e., $\tr(\rho_{j}^{2})\ge\tr(\sigma_{j}^{2})$ and $\tr(\rho^{2})\ge\tr(\sigma^{2})${]}}, although, as noted above, this may not be true for an extremely lossy channel. With this simplification, the fidelity for each possible input is the same:
\begin{equation} \mathcal{F}_{2}(\rho_{j},\sigma_{j})=\frac{\tr(\rho_{j}\sigma_{j})}{\tr(\rho_{j}^{2})}=1-\epsilon.\label{Eq:F2:State-by-state} \end{equation} In such cases, we have the same result for the average fidelity as for pure state inputs, namely: \begin{equation} \mathcal{F}_{{\rm ave}}\left(\bm{p},\bm{\rho},\bm{\sigma}\right)=\sum_{j}p_{j}\mathcal{F}_{2}(\rho_{j},\sigma_{j})=1-\epsilon=\mathcal{F}_{2}(\rho,\sigma).\label{Eq:F2:compare} \end{equation}
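This reduction is easy to check numerically. The following minimal sketch uses mixed inputs of our own construction, orthogonal to a fixed error state, chosen so that the purity condition holds:

```python
import numpy as np

eps = 0.1                                   # error probability (assumed)

def proj(i, d=4):
    # Rank-one projector |i><i| in a d-dimensional space.
    m = np.zeros((d, d))
    m[i, i] = 1.0
    return m

err = proj(0)                               # error state |0><0|
rho1 = 0.9 * proj(1) + 0.1 * proj(2)        # mixed inputs, orthogonal to the
rho2 = 0.7 * proj(2) + 0.3 * proj(3)        # error state (our construction)

def F2(a, b):
    # F_2 = tr(a b) / max(tr a^2, tr b^2)
    return np.trace(a @ b) / max(np.trace(a @ a), np.trace(b @ b))

results = []
for rho_j in (rho1, rho2):
    sigma_j = eps * err + (1 - eps) * rho_j
    # purity condition tr(rho_j^2) >= tr(sigma_j^2) holds for these states
    assert np.trace(rho_j @ rho_j) >= np.trace(sigma_j @ sigma_j)
    results.append(F2(rho_j, sigma_j))

print(results)                              # each equals 1 - eps = 0.9
```

Each state-by-state value equals $1-\epsilon$, in agreement with Eq.~\eref{Eq:F2:State-by-state}.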
As with the pure state case, we emphasize that the average fidelity is therefore only the same as the mixed state fidelity in rather special circumstances, even when using the simpler $\mathcal{F}_{2}$ measure of fidelity. In general, the assumption that the error state is orthogonal to all inputs is not always valid, and as a result the two quantities measure different properties of the quantum process. Note also that the requirement of Eq.~\eref{Eq:AveVsMixed} amounts to demanding that the joint concavity inequality of Eq.~\eref{Eq:ConcaveJoint} is saturated in this communication scenario.
\subsection{Teleportation and cloning fidelity}
As a measure of the degree of similarity between two quantum states, fidelity occurs naturally in assessing the quality of a quantum communication~\cite{schumacher1995quantum} channel. A well-known example of such a channel is the so-called teleportation channel~\cite{bennett1993,horodecki1999} where the \emph{unknown} quantum state of a physical system is transferred from a sender to a receiver with the help of a quantum resource shared between the two ends. If one could transport the system itself directly to the receiver intact, this task is trivial. However, since channels are usually far from ideal\textemdash they may be lossy, for example, especially over long distances\textemdash the quantum states are readily degraded on transmission. Moreover, an \emph{unknown} quantum state cannot be cloned~\cite{nocloningtheorem}, or else measured and regenerated, without degradation. Thus, whenever a high-fidelity output state is required at the end-location (for example, a qubit that is to be part of a quantum secure network), a direct transmission of the quantum signal would generally not be the preferred option. Instead, teleportation is likely to serve as an essential component of future quantum communication networks \cite{quantum-internet}.
A protocol for the \emph{ideal} teleportation of qubit states was first proposed by Bennett {\em et al.} \cite{bennett1993} and extensively discussed in a very general setting by Horodecki~\emph{et al.}~\cite{horodecki1999}. Suppose Alice has in her possession an arbitrary qudit state $|\psi_{i}\rangle$ that she wants to teleport to Bob. The protocol involves Alice and Bob sharing an EPR resource, or more precisely a maximally entangled two-qudit state.
Alice makes a joint measurement (in the basis of maximally entangled states) on her half of the entangled qudit pair and the state to be teleported. She transmits those measurement results to Bob via a classical communication channel, and he reconstructs the original state as $|\psi_{f}\rangle$ by applying a local unitary correction to his half of the maximally entangled two-qudit state.
Fidelity between the initial state and the final state is the standard figure of merit used to measure the effectiveness of the teleportation transfer~\cite{popescu1994}. Specifically, the quality of the teleportation channel $\Lambda_{\rho}$ is often quantified via the \emph{average} fidelity~\cite{horodecki1999}: \begin{equation} \F_{\rm tele}(\Lambda_{\rho})=\int d\psi_{i}\bra{\psi_{i}}\Lambda_{\rho}(\proj{\psi_{i}}){\ket{\psi_{i}}} \end{equation} where $\rho$ is the density matrix of the shared resource and the integral is performed over the Haar measure (i.e., a uniform distribution over all possible pure input states of a \emph{fixed} Hilbert space dimension). Not surprisingly, $\F_{\rm tele}(\Lambda_{\rho})$ depends on the degree of entanglement of $\rho$. For instance, if $\rho$ is maximally-entangled, one has an ideal teleportation channel and thus $\F_{\rm tele}=1$. In general, for a given resource state $\rho$ of local dimension $d$, the maximal average fidelity achievable was found to be~\cite{horodecki1999} \begin{equation} \F_{\rm tele}^{{\rm max}}(\Lambda_{\rho})=\frac{{\mathcal{F}_{{\rm max}}}d+1}{d+1}\label{Eq:FteleMax} \end{equation} where $\mathcal{F}_{{\rm max}}$ is the singlet fraction (or more appropriately, the fully-entangled fraction) of $\rho$, which is the largest possible overlap (i.e., Schumacher's fidelity) between $\rho$ and a maximally entangled two-qudit state: \begin{equation} \mathcal{F}_{{\rm max}}=\max_{U_{A},U_{B}}\mathcal{F}(\rho,U_{A}\otimes U_{B}\proj{\Phi_{d}^{+}}U_{A}^{\dag}\otimes U_{B}^{\dag}). \end{equation} Here $U_{A}$ and $U_{B}$ are, respectively, local unitary operators acting on the Hilbert spaces of Alice's and Bob's qudits.
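The bound of Eq.~\eref{Eq:FteleMax} is easily evaluated for specific resource states. As an illustration (a worked example of ours, not from the original discussion), consider a two-qubit Werner state, which is already aligned with $\proj{\Phi_{d}^{+}}$, so that its singlet fraction is a simple overlap:

```python
import numpy as np

d = 2                                   # qubit resource
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)        # |Phi+> = (|00> + |11>)/sqrt(2)
P = np.outer(phi, phi)                  # projector onto |Phi+>

w = 0.8                                 # Werner mixing parameter (assumed)
rho = w * P + (1 - w) * np.eye(4) / 4   # Werner state

F_max = np.trace(rho @ P)               # singlet fraction = (1 + 3w)/4
F_tele = (F_max * d + 1) / (d + 1)      # Eq. (FteleMax) for d = 2
print(F_max, F_tele)                    # approximately 0.85 and 0.9
```

For $w=0.8$ the maximal average fidelity is $0.9$, above the classical measure-and-regenerate bound of $2/3$ discussed below.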
\subsubsection{Teleportation fidelity and bounds}
The problem of establishing the quality of a practical teleportation experimental process then becomes the problem of a fidelity measurement. To this end, it is insightful to compare the fidelity bound of Eq.~\eref{Eq:FteleMax} against that arising from employing an optimal cloning machine~\cite{hillery-buzek-clone,classical-fidelity,Bruss1998}. For simplicity, we shall restrict our discussion to $d=2$, i.e., the qubit case. For general fidelity bounds of cloning, we refer the reader to~\cite{ScaraniRMP.77.1225}. In the case of an arbitrary qubit, it is known that a classical strategy of teleportation, whereby the state is measured and then regenerated, will incur extra noise (see, e.g.,~\cite{popescu1994,horodecki1999,ham-1,fidelity-bounds}). This limits the fidelity for any classical measure-and-regenerate protocol to ${\F_{\rm tele}^{\tiny{\rm class}}}\leq\frac{2}{3}$~\cite{popescu1994,horodecki1999,classical-fidelity}. It is also known that the fidelity $\mathcal{F}=\frac{5}{6}$ is the maximum for any symmetric $1\to2$ cloning process, i.e., $\mathcal{F}>\frac{5}{6}$
ensures that there can be no ``copies'' $|\psi_{c}\rangle^{\otimes2}$
taken of the state $|\psi_{i}\rangle$ that has a fidelity $\mathcal{F}=|\langle\psi_{i}|\psi_{c}\rangle|^{2}$
greater than $\frac{5}{6}$ \cite{hillery-buzek-clone,hillerybuzek-2,dag}. The \emph{ideal} teleportation protocol can clearly exceed this bound because the state $|\psi_{i}\rangle$ at Alice's location is destroyed by her measurements. The final teleported state at Bob's location is therefore not a copy of the state $|\psi_{i}\rangle$, but a unique secure transfer of it. In view of these limits, a fidelity $\F_{\rm tele}>\frac{2}{3}$ is the benchmark figure of merit used to justify claims of quantum teleportation in qubit teleportation experiments \cite{teleexp,teleexpdemartini}. No-cloning teleportation is achieved when $\F_{\rm tele}>\frac{5}{6}$. For qubit-teleportation experiments carried out photonically, the fidelity estimates were evaluated for a post-selected subensemble, selected conditionally on all photons being detected at Bob's location \cite{teleexp} (see also~\cite{Wang:2015aa}). The post-selection was required due to poor detection efficiencies.
\subsubsection{Continuous variable teleportation}
A protocol for continuous variable teleportation was developed by Vaidman \cite{vaidcvtele} and Braunstein and Kimble \cite{bkcvtele}. Here, the EPR resource was a continuous-variable EPR entangled state \cite{reid1989demonstration,reid2009colloquium}. Defining ``quantum teleportation'' as taking place when no classical measure-and-regenerate strategy can replicate the teleportation fidelity $\F_{\rm tele}$, it was shown that $\F_{\rm tele}>1/2$ is the benchmark fidelity bound to demonstrate the quantum teleportation of an unknown coherent state \cite{ham-1}. This fidelity has been achieved experimentally for optical states \cite{cvteleexperiments,cvteleexperiments-1,cvteleexperiments-2,cvtelenocloning}. The fidelity at which there can be no replica of the state at a location different to Bob's corresponds to the no-cloning fidelity \cite{clonein}. For coherent states, the fidelity $\mathcal{F}=2/3$ is the maximum for any cloning process \cite{cerf}, and $\F_{\rm tele}>2/3$ is hence the no-cloning benchmark for the teleportation of a coherent state. The no-cloning teleportation limit has been achieved for coherent states \cite{cvtelenocloning}. Although reporting lower teleportation fidelities, the continuous variable experiments did not rely on {\em any} post-selection of data.
\subsubsection{Connection with other desired properties}
Fidelity is the figure of merit used to quantify the success and usefulness of the teleportation protocol. For example, no-cloning teleportation allows bounds to be placed on the quality of any unwanted copies of the teleported state. As a consequence, fidelity is also used to determine the constraints on the nature of the entangled resource, so that certain conditions are met. For example, by using the fidelity bounds, it was proved~\cite{mixed-state-tele-1} that any entangled two-qubit (mixed) state is useful for quantum teleportation if arbitrary local operations assisted by classical communications (including arbitrary local filtering operations) are allowed prior to the teleportation experiment. Similarly, connections have been shown between no-cloning teleportation and the requirement of a steerable resource \cite{epr-steer-tele}, and between the fully-entangled fraction and a steerable~\cite{wiseman2007} resource, as well as a Bell-nonlocal~\cite{brunner-rmp} resource; see, e.g.,~\cite{hsieh2016} and references therein.
In the context of the papers examined for this review, the teleportation fidelity is most frequently defined with respect to a pure state that Alice wants to transport. Bob's teleported state will generally be mixed. However, one can see that more generally the fidelity for the teleportation of a mixed state needs also to be considered, given that the state prepared at Alice's location would not usually be pure.
\subsection{Fidelity in phase space}
\label{Sec:PhaseSpace}
In the following, we show how the fidelity can be computed with phase space methods \cite{Hillery_Review_1984_DistributionFunctions}. The symmetrically ordered Wigner function representation \cite{Wigner_1932} was first applied to dynamical problems by Moyal \cite{Moyal_1949}. Although it is generally non-positive, it is common to use tomography to measure the Wigner function \cite{lvovsky2009continuous} in order to represent a density matrix. Other schemes using a classical-like phase-space for bosons include the anti-normally ordered, positive Q-function distribution \cite{Husimi1940}, and the normally-ordered P-function distribution \cite{Glauber_1963_P-Rep}, which is non-positive and singular in some cases.
These have been generalized to positive distributions on non-classical phase spaces \cite{Drummond1980posp,Gilchrist1997posp,Deuar:2002}, which are normally-ordered, non-singular, positive representations. These have been employed in many different fields such as quantum optics \cite{drummond1981_II_nonequilibriumparamp,DrummondGardinerWalls1981,Drummond_EPL_1993}, Bose-Einstein condensates \cite{Kheruntsyan2005BEC,Opanchuk2012BEC,Opanchuk2013WignerBEC} and quantum opto-mechanical systems \cite{Kiesewetter2014opto,Kiesewetter2017opto,Teh2017opto}, as well as spin \cite{Arecchi_SUN,Agarwal:1981,Barry_PD_qubit_SU} and Fermi \cite{corney2006gaussian} systems.
In these methods, a density operator $\hat{\rho}$ is generically represented as \begin{equation} \hat{\rho}=\intop P\left(\vec{\alpha}\right)\hat{\Lambda}\left(\vec{\alpha}\right)\,d\vec{\alpha}\,,\label{eq:density_op} \end{equation} where $\hat{\Lambda}\left(\vec{\alpha}\right)$ is a projection operator that forms the basis in the description of a density operator and $P\left(\vec{\alpha}\right)$ is the quasi-probability density that corresponds to that operator basis. In the simplest cases, $\vec{\alpha}=\bm{\alpha}=\left(\alpha_{1},...,\,\alpha_{M}\right)$ is a real or complex vector in the relevant $M$-mode phase space, as in the first phase-space representations, which were defined on a classical phase space. The dimensionality can be increased in more recent mappings.
Using Eq. (\ref{eq:density_op}), we can immediately see how the $\mathcal{F}_{2}$ fidelity can be computed. In particular, we show explicitly how the quantity $\tr\left(\rho\,\sigma\right)$ is obtained. \begin{eqnarray} \tr(\rho\,\sigma) & =\tr\left[\intop\intop P_{\rho}\left(\vec{\alpha}\right)P_{\sigma}\left(\vec{\beta}\right)\hat{\Lambda}\left(\vec{\alpha}\right)\hat{\Lambda}\left(\vec{\beta}\right)\,d\vec{\alpha}\,d\vec{\beta}\right]\nonumber \\
& =\intop\!\!\intop P_{\rho}\left(\vec{\alpha}\right)P_{\sigma}\left(\vec{\beta}\right)D(\vec{\alpha},\vec{\beta})\,d\vec{\alpha}\,d\vec{\beta}\,.\label{eq:fidelity} \end{eqnarray}
Depending on the particular phase space representation, $D(\vec{\alpha},\vec{\beta})\equiv\tr\left[\hat{\Lambda}\left(\vec{\alpha}\right)\hat{\Lambda}\left(\vec{\beta}\right)\right]$ in Eq. (\ref{eq:fidelity}) takes different forms. An expression for this quantity was calculated by Cahill and Glauber \cite{Cahill1969}, for classical phase-space methods, and it is generally only well-behaved for the Wigner and P-function methods. The Glauber P-function, although often singular, was proposed as an approximate method in fidelity tomography using this approach \cite{lobino2008complete}.
Here we focus on two of the most useful representations in a typical phase space numerical simulation: the Wigner and positive P-representations. The first of these is a rather classical-like mapping, although generally not positive-definite, while the second is always probabilistic, although defined on a phase-space that doubles the classical dimensionality. In the Wigner representation, $\vec{\alpha}\equiv\bm{\alpha}$, and \begin{eqnarray} D(\bm{\alpha},\bm{\beta}) & =\pi^{M}\delta^{M}\left(\bm{\alpha}-\bm{\beta}\right)\,,\label{eq:Tr_basis} \end{eqnarray} where $M$ is the dimension of the vector $\bm{\alpha}$ and the $M$-th dimensional Dirac delta function is $\delta^{M}\left(\bm{\alpha}-\bm{\beta}\right)=\delta\left(\alpha_{1}-\beta_{1}\right)...\delta\left(\alpha_{M}-\beta_{M}\right)$. This leads to~\cite{Cahill1969} \begin{equation} \tr(\rho\,\sigma)=\pi^{M}\intop P_{\rho}\left(\bm{\alpha}\right)P_{\sigma}\left(\bm{\alpha}\right)\,d\bm{\alpha}\,.\label{eq:fidelity_wigner} \end{equation}
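Eq.~\eref{eq:fidelity_wigner} can be checked directly for states whose Wigner functions are known in closed form. The following sketch (a worked example of ours, using two single-mode coherent states) evaluates the overlap integral on a grid and compares it with the exact result $|\langle a|b\rangle|^{2}=e^{-|a-b|^{2}}$:

```python
import numpy as np

# Single-mode (M = 1) check: coherent-state Wigner functions are Gaussian,
# W_c(alpha) = (2/pi) exp(-2|alpha - c|^2), and the Wigner overlap formula
# should reproduce the pure-state overlap exp(-|a - b|^2).
a, b = 0.5 + 0.2j, -0.3 + 0.6j         # arbitrary coherent amplitudes

x = np.linspace(-5, 5, 601)
X, Y = np.meshgrid(x, x)
alpha = X + 1j * Y
dA = (x[1] - x[0]) ** 2                # area element d^2alpha = dx dy

def W(alpha, c):
    # Wigner function of the coherent state |c>.
    return (2 / np.pi) * np.exp(-2 * np.abs(alpha - c) ** 2)

overlap = np.pi * np.sum(W(alpha, a) * W(alpha, b)) * dA
print(overlap, np.exp(-abs(a - b) ** 2))    # both approximately 0.449
```

The grid sum agrees with the analytic overlap to high precision, since both Wigner functions are smooth and rapidly decaying.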
In the positive P representation, a single mode is characterized by two complex numbers, so $\vec{\alpha}=\left(\alpha,\alpha^{+}\right)$. The notation $\alpha^{+}$ indicates that this variable represents a conjugate operator, and is stochastically complex conjugate to $\alpha$ in the mean. This doubles the dimension of the relevant classical phase space. Intuitively, the doubled dimension allows one to map quantum superpositions directly into a positive phase-space representation. The density operator in the positive P representation is then given by Eq.~\eref{eq:density_op}, but now the operator bases have the form: \begin{equation}
\hat{\Lambda}\left(\vec{\alpha}\right)=\frac{|\bm{\alpha}\rangle\langle\bm{\alpha^{+}}|}{\langle\bm{\alpha^{+\textnormal{*}}}|\bm{\alpha}\rangle}\,, \end{equation} and the product trace is: \begin{eqnarray}
D(\vec{\alpha},\vec{\beta}) & =\frac{\langle\bm{\beta^{+}}|\bm{\alpha}\rangle\langle\bm{\alpha^{+}}|\bm{\beta}\rangle}{\langle\bm{\alpha^{+\textnormal{*}}}|\bm{\alpha}\rangle\langle\bm{\beta^{+\textnormal{*}}}|\bm{\beta}\rangle}\,.\label{eq:Tr_basis_posp} \end{eqnarray} The quantity $\tr(\rho\,\sigma)$ is more complicated in this case but still follows the structure of Eq.~\eref{eq:fidelity}. Fidelity measures of the form given in Eq.~\eref{eq:Fmax} also involve the purity, $\tr\left(\rho^{2}\right)$. This can be calculated similarly.
One great advantage of phase space methods is that quasi-probability densities allow numerical simulation to be carried out. Typically, samples are drawn from these probability densities, which are then evolved dynamically. Finally, observables of interest are computed by the Monte Carlo method, which is usually the only practical technique for very large Hilbert spaces. Likewise, fidelity measures can be computed numerically in a typical Monte Carlo scheme. In particular, we consider $\mathcal{F}_{2}$, which, as we will discuss, is the most tractable form of fidelity.
Let $\rho$ be the initial density operator of the system, and suppose we want to compute the fidelity, with respect to $\rho$, of the quantum state at a later time, characterized by the density operator $\sigma$. Suppose also that the initial state $\rho$ and its corresponding phase space distribution are known in a numerical simulation.
For simplicity, suppose that $\rho$ is a pure state, which is a common but not essential assumption and implies that $\tr\left(\rho^{2}\right)=1$. Even for cases where $\tr\left(\rho^{2}\right)<1$, the final state after a time evolution will usually be no purer than the initial state. There are exceptions to this rule, since a dissipative time-evolution can evolve a mixed state of many particles to a pure vacuum state, but we first consider the case of non-increasing purity here for definiteness.
In other words, ${\rm max}\left[\tr\left(\rho^{2}\right),\tr\left(\sigma^{2}\right)\right]=\tr\left(\rho^{2}\right)$. This is convenient as the exact (quasi)-probability distribution for $\sigma$ is not known and only a set of samples of this distribution is available, which leads to the sampled fidelity we discuss next.
Next, consider the sampled fidelity in the Wigner representation. The quantity $\tr\left(\rho\,\sigma\right)$ in Eq. (\ref{eq:fidelity_wigner}) in the Monte Carlo scheme is given by: \begin{eqnarray} \tr\left(\rho\,\sigma\right) & =\pi^{M}\intop P_{\rho}\left(\bm{\alpha}\right)P_{\sigma}\left(\bm{\alpha}\right)\,d\bm{\alpha}\nonumber \\
& \approx\pi^{M}\frac{1}{N_{{\rm samples}}}\sum_{i=1}^{N_{{\rm samples}}}P_{\rho}\left(\bm{\alpha}_{i}\right)\,,\label{eq:tr_rho_sig_wigner} \end{eqnarray} where $N_{\rm samples}$ is the number of samples drawn from the probability distribution $P_{\sigma}$. We note that the same set of random variables enters both distributions in this estimate, which leads to practical problems if both distributions must be sampled. This can be avoided if one of the Wigner functions is known analytically.
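The sampled estimator of Eq.~\eref{eq:tr_rho_sig_wigner} can be sketched as follows, with coherent test states of our own choosing so that $P_{\rho}$ is known analytically, as required, while $P_{\sigma}$ is a Gaussian that can be sampled directly:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 1.0                        # coherent amplitudes (our test states)
N = 200_000                            # samples drawn from P_sigma = W_b

# W_b is a Gaussian centred on b with variance 1/4 in each quadrature,
# so samples are generated directly rather than by dynamical evolution.
alpha = b + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# W_rho for the coherent state |a> is known analytically.
W_rho = (2 / np.pi) * np.exp(-2 * np.abs(alpha - a) ** 2)
estimate = np.pi * np.mean(W_rho)      # Eq. (tr_rho_sig_wigner) with M = 1
print(estimate, np.exp(-abs(a - b) ** 2))   # both approximately exp(-1)
```

The Monte Carlo estimate converges to the exact overlap $e^{-|a-b|^{2}}$ with a sampling error of order $N_{\rm samples}^{-1/2}$.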
The same quantity can be computed in the positive P representation. It is then possible to use two independent sets of random variables, so that both the distributions can be obtained from random sampling: \begin{eqnarray} \tr\left(\rho\,\sigma\right) & \approx & \frac{1}{N_{{\rm samples}}^{2}}\sum_{i,j}^{N_{{\rm samples}}}D(\vec{\alpha}_{i},\vec{\beta}_{j})\,.\label{eq:tr_rho_sig_posp} \end{eqnarray} Here, the factor $N_{{\rm samples}}^{2}$ comes from the product of $P_{\rho}\left(\vec{\alpha}\right)P_{\sigma}\left(\vec{\beta}\right)$ in Eq.~\eref{eq:fidelity} under the usual assumption of equally weighted samples. This shows that it is possible to compute $\mathcal{F}_{2}$ fidelities from a phase-space simulation. This is useful when trying to predict performance of a quantum technology or memory in an application involving storage of an exotic quantum state. We emphasize that if one of the calculated states is a pure state, then all fidelity measures give the same result.
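The double-sampled estimator of Eq.~\eref{eq:tr_rho_sig_posp} can be illustrated for two single-mode thermal states (our own choice of test states): a thermal state with mean occupation $\bar{n}$ has the Gaussian Glauber-Sudarshan function $P(\alpha)=e^{-|\alpha|^{2}/\bar{n}}/(\pi\bar{n})$ with $\alpha^{+}=\alpha^{*}$, which is a valid positive P distribution, the kernel reduces to $D=|\langle\alpha|\beta\rangle|^{2}$, and the exact answer is $\tr(\rho\sigma)=1/(1+\bar{n}_{1}+\bar{n}_{2})$:

```python
import numpy as np

rng = np.random.default_rng(7)
n1, n2 = 0.5, 0.5                      # mean occupations (assumed test states)
N = 1000                               # independent samples per state

# Complex Gaussian samples with E|alpha|^2 = nbar, on the classical
# manifold alpha^+ = alpha^* of the positive P representation.
alpha = np.sqrt(n1 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
beta = np.sqrt(n2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Kernel D(alpha, beta) reduces to |<alpha|beta>|^2 = exp(-|alpha - beta|^2)
# on this manifold; evaluate the full N x N double sum by broadcasting.
D = np.exp(-np.abs(alpha[:, None] - beta[None, :]) ** 2)
estimate = D.mean()                    # (1/N^2) sum_ij D(alpha_i, beta_j)
print(estimate, 1 / (1 + n1 + n2))     # both approximately 0.5
```

Because the two sample sets are independent, every one of the $N^{2}$ kernel terms contributes, which reduces the sampling error relative to a single paired sum.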
Admittedly, this quantity is more complicated than Eq.~(\ref{eq:tr_rho_sig_wigner}) in the Wigner representation. In addition, the sampling error can be very large in some cases, as discussed by Rosales-Zarate and Drummond \cite{Rosales-Zarate2011entropy}. When this occurs, representations such as the generalized Gaussian representations \cite{corney2003gaussian,corney2005gaussian,corney2006gaussian,joseph2018phase} can be employed, and clearly the purities can be estimated in a similar way if the initial state is not pure.
Overall, $\mathcal{F}_{2}$ appears to be the most suitable fidelity measure in a dynamical simulation or measurement using phase-space techniques, where only the initial state with its probability distribution is known. It is the only measure using easily computable Hilbert-Schmidt norms that satisfies all of the Jozsa axioms.
\subsection{Techniques of fidelity measurement}
As pointed out in Section~\ref{subsec:Relevant-and-irrelevant}, fidelity is a relative measure. The results of fidelity measurements on different Hilbert spaces are \textbf{not} the same. The ${\mathcal{F}_{1}}$ fidelity may improve if the measured Hilbert space has a lower dimensionality, simply because it is defined as a maximum over all possible purifications. Hence, the ${\mathcal{F}_{2}}$ fidelity has the advantage that it is generally less biased towards high values, as shown in the examples of the previous section; for qubits this holds in all cases.
The adage of being cautious in comparing apples to oranges should be remembered. In general terms we will distinguish six different approaches that are described below, as applied to typical physical implementations \cite{nielsen2000quantum,divincenzo2000physical}. The real utility of a given fidelity measure is how well it matches the requirements of a given application. Analyzing quantum logic gates and memories is one of the most widespread and useful applications of fidelity, and hence we will give examples of these applications.
These general considerations about physical implementation apply to all of the various applications listed in this section. The examples referenced here are necessarily incomplete, as this is not a full review of experimental implementations. Nevertheless, some typical experimental measurements are referenced in each of the following application examples.
\subsubsection{Atomic tomography fidelity}
Atomic or ionic fidelity measurements involve a finite, stationary, closed quantum system, where each state can be accessed and projected \cite{leibfried2003experimental,longdell2004experimental}. These measurements are usually relatively simple. To obtain the entire density matrix for a calculation of mixed state fidelity involves a tomographic measurement. Thus, for example, in qubit tomography one must measure both the diagonal elements, which are level occupations, and the off-diagonal elements, which are obtained through Rabi rotations that transform them into level occupations for measurement. This approach is easiest to implement when the Hilbert space has only two or three levels. Typical examples of this technique involve trapped ions, whose level occupations are measured using laser pulses and fluorescence photo-detection.
It is not always clear in such measurements how the translational state is measured, or if it is even part of the relevant Hilbert space, which is necessary in order to understand how the fidelity is defined. The problem is that the full quantum state of an isolated ion or atom always has both internal and center-of-mass degrees of freedom, so that a pure state is: \begin{equation}
\left|\Psi\right\rangle =\sum_{ij}C_{ij}\left|\psi_{i}\right\rangle _{{\rm int}}\left|\phi_{j}\right\rangle _{{\rm CM}}. \end{equation}
Here $\left|\psi\right\rangle _{{\rm int}}$ is the internal state defined by the level structure, while $\left|\phi\right\rangle _{{\rm CM}}$ defines the center-of-mass degree of freedom. The actual density matrix even for a single ion or atom therefore involves different spatial modes for the center-of-mass, such that: \begin{equation}
\rho=\sum_{ijkl}\rho_{ijkl}\left|\psi_{i}\right\rangle _{{\rm int}}\left|\phi_{j}\right\rangle _{{\rm CM}}\left\langle \phi_{k}\right|_{{\rm CM}}\left\langle \psi_{l}\right|_{{\rm int}}. \end{equation} Next, we can consider two possible situations: \begin{description}
\item [{Full~tomography:}] Suppose that the target state is $\left|\Psi_{0}\right\rangle =\left|\psi_{0}\right\rangle _{{\rm int}}\left|\phi_{0}\right\rangle _{{\rm CM}}$, so that the center-of-mass position is part of the relevant Hilbert space. Under these conditions, only the measured states with $k=j=0$ are in the same \emph{overall} quantum state as the target state. This may prevent complete visibility in an interference measurement in which the center-of-mass position is relevant. If this is the case, one should consider the center-of-mass position as part of the relevant Hilbert space. Hence one must consider rather carefully if it is necessary to investigate the translational state fidelity as well in this type of application. This has been investigated in ion-trap quantum computer gate fidelity measurements \cite{ospelkaus2011microwave}. \item [{Partial~tomography:}] The center-of-mass part of the Hilbert space may not matter if the internal degrees of freedom are decoupled sufficiently from the spatial degrees of freedom, so that only the internal degrees of freedom are relevant over the time-scales that are of interest. In these cases the density matrix can be written as: \begin{equation} \rho=\rho_{{\rm int}}\otimes\rho_{{\rm CM}}. \end{equation} Provided this factorization is maintained throughout the experiment, it may well be enough to only measure the internal degrees of freedom. However, any spin-orbit or similar effective force that couples the internal and translational degrees of freedom will cause entanglement. This will sometimes mean a reduction in fidelity, since the entangled state can become mixed after tracing out the spatial degrees. Relatively high fidelities have been measured with this approach. Depending on the system, this can be viewed as occurring because coupling to phonons is weak \cite{fuchs2011quantum} or because experiments occur on faster time-scales than the atomic motion \cite{bernien2017probing}. \end{description}
\subsubsection{Photonic~fidelity}
Photonic measurements are typical of quantum memories \cite{Lvovsky:2009aa,CHANELIERE201877}, communications or cryptography when photons are used as the information carrier. In the case of a quantum memory, a quantum state is first encoded in a well-defined spatiotemporal mode(s), then dynamically coupled into the memory subsystem, stored for a chosen period, and coupled out into a second well-defined spatiotemporal field mode(s) where it can be measured \cite{HeReidPhysRevA.79.022310}. We note that temporal mode structure is an essential part of defining a quantum state.
The actual quantum state in these cases is defined as a tensor product of photon-number states $\left|n_{k}\right\rangle _{k}$ over each possible mode, where $n_{k}$ is the photon number in mode $k$, so that: \begin{equation}
\left|\Psi\right\rangle =\sum_{\bm{n}}C_{\bm{n}}\left|n_{1}\right\rangle _{1}{\otimes}\ldots\left|n_{K}\right\rangle _{K}. \end{equation} Here each mode has an associated mode function $\bm{u}_{k}$, which is typically localized in space-time, since technology applications are carried out in finite regions of space, over finite time-intervals. We implicitly assume a finite total number of modes $K$, although there is no physical upper bound except possibly that from quantum gravity.
To obtain the output density matrix, the most rigorous approach is to use pulsed homodyne detection to isolate the mode(s) used, with a variable phase delay or other methods to measure the off-diagonal elements. This gives a projected quadrature expectation value of the relevant single mode operator. By tomographic reconstruction, one can obtain the Wigner function \cite{lvovsky2009continuous}. We show in Section~\ref{Sec:PhaseSpace} that this phase-space technique directly gives the quantum fidelity as an $\mathcal{F}_{2}$ measure. Obtaining any other fidelity measure generally requires the reconstructed density matrix. See, however,~\cite{miszczak2009sub,Bartkiewicz:PRA:2013} where the authors discuss a direct measurement of the superfidelity, $\mathcal{F}_{\mbox{\tiny N}}$, together with a lower bound called the subfidelity for photonic states encoded in the polarization degree of freedom. Since $\mathcal{F}_{\mbox{\tiny N}}=\mathcal{F}_{1}$ for qubit states \cite{mendonca2008}, their method actually amounts to a direct measurement of $\mathcal{F}_{1}$ for these qubit states without resorting to quantum state tomography.
\subsubsection{Conditional~fidelity}
In some types of photonic fidelity measurement the state may not be found at all in some of the measurements. This is typically the case in photo-detection experiments with low photon number, where a qubit can be encoded into two spatial or polarization modes, as $\left|\psi\right\rangle =\frac{1}{\sqrt{|a|^{2}+|b|^{2}}}{\left(a\left|0\right\rangle _{1}\left|1\right\rangle _{2}+b\left|1\right\rangle _{1}\left|0\right\rangle _{2}\right)}$. Problems arise when no photon is detected at all, either because the photodetector was inefficient, or because the photon was lost during the transmission, thereby making the input a vacuum state, which is in a larger Hilbert space.
As a result, reported measurements are sometimes defined by simply conditioning all results on the presence of a detected photon(s). Unless the target state is itself defined to be the conditioned state, this conditional fidelity is best regarded as an upper bound for the true mixed state fidelity, which includes these loss effects. The potential difficulty with conditional fidelity measured in this way is that it essentially involves an assumption of fair sampling. In other words, photons may be lost through detector inefficiency, but they may also be lost in any number of other ways.
While detector loss can be regarded as simply a measurement issue, unrelated to the state itself, there is also a possibility that the state was already degraded before it reached the detector, and hence the true fidelity is lower than estimated by the conditional measurement. Yet in many applications, like quantum logic gates and computing, one may have to repeat the same quantum memory process many times in succession. In these cases any inefficiency that occurs prior to detection grows exponentially with the number of gates, and can become an important issue. In other quantum information processes however, it might be argued that this effect is not relevant. In quoting fidelity, it is thus important to match the target state with the final intended application.
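A minimal numerical sketch of why the conditional fidelity upper-bounds the true fidelity; the basis ordering $\{|{\rm vac}\rangle,|0\rangle_{L},|1\rangle_{L}\}$, the efficiency $\eta$ and the target state below are assumptions of the illustration:

```python
import numpy as np

# Hilbert space basis (assumed for illustration): |vac>, |0>_L, |1>_L.
vac = np.array([1.0, 0.0, 0.0])
psi = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)   # target logical qubit state

eta = 0.6   # hypothetical transmission/detection efficiency
rho_true = eta * np.outer(psi, psi) + (1 - eta) * np.outer(vac, vac)

# True fidelity with the target, including the loss channel:
F_true = np.real(psi @ rho_true @ psi)

# Conditional fidelity: renormalize on the photon-detected subspace.
P = np.diag([0.0, 1.0, 1.0])          # projector onto the one-photon sector
rho_cond = P @ rho_true @ P
rho_cond /= np.trace(rho_cond)
F_cond = np.real(psi @ rho_cond @ rho_cond @ psi) if False else np.real(psi @ rho_cond @ psi)

print(F_true, F_cond)   # F_true = eta, F_cond = 1: conditioning hides the loss
```

Here the conditional measurement reports perfect fidelity while the true mixed-state fidelity is only $\eta$, which is exactly the fair-sampling caveat described above.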
\subsubsection{Inferred~fidelity}
Just as efficiency problems lead to conditional fidelity measures, the spatiotemporal mode may change from measurement to measurement in photonic experiments. This leads to a mixed state: the increased number of modes present enlarges the Hilbert space in a way that is not detectable through photodetection events alone, without using interferometry or local oscillators. This approach is sometimes combined with a conditional measurement.
An example of this approach is a recently reported quantum memory for orbital angular momentum qubits \cite{nicolas2014quantum}. This experiment has many robust and useful features, using spatial mode projection to ensure that the correct transverse mode is matched from input to output. However, the report does not describe how longitudinal or temporal mode structure was determined, leaving this issue as an open question at this stage.
A photon-counting approach cannot usually detect the full mode structure. For example, suppose one has a wide range of longitudinal modes that can be occupied, having distinct frequencies and/or temporal mode structures, and each occurring with a probability $P_{k}\ll1$, so that: \begin{equation}
\rho=\sum_{k}P_{k}\left|0\right\rangle _{1}\ldots\left|1\right\rangle _{k}\ldots\left\langle 0\right|_{1}\ldots\left\langle 1\right|_{k}\ldots. \end{equation} This is a mixed state in which a single photon could be in any longitudinal mode with a given probability $P_{k}$.
Let us now compare this with a desired pure state $\sigma$, for example:
\begin{equation}
\sigma=\left|1\right\rangle _{1}\ldots\left|0\right\rangle _{k}\ldots\left\langle 1\right|_{1}\ldots\left\langle 0\right|_{k}\ldots. \end{equation} It is clear that, for any definition of fidelity, $\mathcal{F}=\tr\left(\sigma\rho\right)=P_{1}\ll1$. One could attempt to measure this fidelity with a photon-counting measurement, combined with the {\em assumption} that there is only one longitudinal mode present. If all measurements give exactly one count, then this measurement, combined with the single-mode assumption, would lead to an inferred state fidelity of $\mathcal{F}_{{\rm inf}}=1$. This does not match the true fidelity in this example.
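The gap between inferred and true fidelity in this example is easy to reproduce numerically; the mode number $K$ and the uniform probabilities $P_{k}$ are arbitrary illustrative choices:

```python
import numpy as np

K = 10                       # number of longitudinal modes (illustrative)
P = np.full(K, 1.0 / K)      # occupation probabilities P_k << 1

# Work in the single-photon sector: basis vector e_k = photon in mode k.
rho = np.diag(P)                               # mixed state over modes
sigma = np.zeros((K, K)); sigma[0, 0] = 1.0    # target: photon in mode 1

F_true = np.trace(sigma @ rho).real   # = P_1, as in the text
F_inferred = 1.0   # photon counting + single-mode assumption: always one count

print(F_true, F_inferred)
```

Every run of the counting experiment registers exactly one photon, so the single-mode assumption yields $\mathcal{F}_{\rm inf}=1$, while the true overlap with the target mode is only $P_{1}$.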
Fidelity measurements like these generally make the assumption that the mode structure that is measured matches the desired mode structure, even when it is not measured directly. As a result, the inferred fidelity may be higher than the true fidelity, and should be considered as an upper bound. This may cause problems if one must carry out a binary quantum logic operation with input signals derived from two different sources such that the modes should be matched in time and/or frequency. Under these conditions, it is the true fidelity, including the effects of losses and modal infidelity that is important. These questions have been investigated in experiments that carry out full tomographic measurements to reconstruct single-photon Fock states using homodyne measurement techniques \cite{lvovsky2001quantum}.
\subsubsection{Cloned~fidelity }
A fifth type of fidelity measurement is obtained as a variant of a quantum game in which a number of copies of a quantum state may be recorded or stored \cite{MassarPopPhysRevLett.74.1259}. From subsequent measurements, it is possible to infer, for example, using maximum likelihood measurements, what the original state was. The inferred state can be compared with the original using fidelities. Unfortunately the no-cloning theorem tells us that multiple copies of any single quantum state cannot be obtained reliably. Hence, while one can infer a fidelity from multiple copies of a state, the entire process that includes first generating multiple copies of a quantum state will always involve an initial reduction in fidelity. This is important in some types of application.
\subsubsection{Logic and process fidelity}
Finally, we turn to a different type of fidelity used to analyze quantum processes rather than states or density matrices. Quantum processes are also known~\cite{preskill2015lecture} as quantum channels, or mathematically as completely positive maps or super-operators. For the case of a quantum process, one may wish to analyze the fidelity of an actual quantum operation to an intended quantum operation. This may include any quantum technology from logic gates to memories, or indeed any input-output process. An operation is defined in the general sense of any quantum map ${\cal E}\left(\rho\right)$, from an input density matrix $\rho_{{\rm in}}$ to an output density matrix $\rho_{{\rm out}}$. Their fidelity is discussed by Gilchrist \emph{et al.}~\cite{gilchrist2005distance}.
Just as any density matrix has a matrix representation in the Hilbert space of state vectors of dimension $d$, quantum channels have a matrix representation in terms of a basis set of $d^{2}$ quantum operators $A_{j}$, where $\tr\left(A_{j}^{\dagger}A_{k}\right)=\delta_{jk}$. Using this basis, any quantum operation can be written as: \begin{equation} \rho_{{\rm out}}={\cal E}\left(\rho_{{\rm in}}\right)=\sum_{mn}P_{mn}A_{m}\rho_{{\rm in}}A_{n}^{\dagger}. \end{equation} Here $P_{mn}$ are the elements of the so-called process matrix $P$, which provides a convenient way of representing the operation ${\cal E}.$
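As an illustration of this representation, the following sketch applies a process matrix in the orthonormal qubit basis $A_{j}=\sigma_{j}/\sqrt{2}$; the phase-flip channel and the probability $p$ are assumptions chosen for the example:

```python
import numpy as np

# Orthonormal operator basis for a qubit: A_j = sigma_j / sqrt(2),
# so that tr(A_j^dag A_k) = delta_jk.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = [M / np.sqrt(2) for M in (I2, X, Y, Z)]

def apply_process(Pmat, rho):
    """rho_out = sum_mn P_mn A_m rho A_n^dag."""
    out = np.zeros_like(rho, dtype=complex)
    for m in range(4):
        for n in range(4):
            out += Pmat[m, n] * A[m] @ rho @ A[n].conj().T
    return out

# Example channel (assumed): phase flip, E(rho) = (1-p) rho + p Z rho Z.
# In the normalized basis this gives P_00 = 2(1-p), P_33 = 2p.
p = 0.25
Pmat = np.zeros((4, 4)); Pmat[0, 0] = 2 * (1 - p); Pmat[3, 3] = 2 * p

rho_in = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
rho_out = apply_process(Pmat, rho_in)
print(rho_out)   # off-diagonals shrink by a factor (1 - 2p)
```

The diagonal process matrix here simply weights the identity and $Z$ terms; a general channel would also populate the off-diagonal elements $P_{mn}$.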
At first, this seems rather different from the density matrices discussed throughout this review. Yet it is easy to show via the Choi-Jamiolkowski isomorphism \cite{Choi:1975,jamiolkowski1974effective} that one can define a new quantum ``density matrix"~\cite{gilchrist2005distance} on the enlarged Hilbert space of dimension $d^{2}$, such that $\rho^{{\cal E}}=P/d$. Hence any fidelity or distance measure for quantum states can also be applied to processes, by the simple technique of dimension squaring. We will not investigate this in detail here, except to remark that all of the different fidelity measures used for density matrices can be applied directly to quantum processes. For process fidelity it is common to impose additional requirements beyond the axioms used here.
A typical example is the measurement of quantum process fidelity in a CNOT gate \cite{o2004quantum}. In this early photonic measurement, the counting fidelity was measured using conditional techniques. Thus, as explained above, these results should be regarded as an upper bound to the actual quantum fidelity, once mode-mismatch errors and losses are included. Other, more recent, process fidelity measurements with quantum logic gates have been carried out with ion traps \cite{benhelm2008towards,knill2008randomized}, liquid nuclear magnetic resonance \cite{ryan2009randomized}, solid-state silicon \cite{veldhorst2015two} and superconducting Josephson qubits \cite{lucero2008high}.
\section{Summary}
We have reviewed the requirements for a mixed state fidelity measure \cite{jozsa1994fidelity}, and analyzed a number of candidate measures of fidelity for their compliance with these requirements, as well as other considerations. While there are many candidates, most of them do not fully comply with the Jozsa axioms, although some of these alternatives do have useful properties.
Despite the above observation, there do exist an infinite number of compliant fidelities. Among them, three well-defined measures that fully satisfy the Jozsa axioms for fidelity measures are of particular interest, due to their physical interpretation and measurement properties: these are the Uhlmann-Jozsa fidelity $\mathcal{F}_{1}$, the non-logarithmic variety of the quantum Chernoff bound $\mathcal{F}_{\mbox{\tiny Q}}$ and the Hilbert-Schmidt fidelity, $\mathcal{F}_{2}$. It is worth noting that both $\mathcal{F}_{1}$ and $\mathcal{F}_{2}$ are particular cases of an infinite family of Jozsa-compliant fidelity measures $\mathcal{F}_p$, each associated with a Schatten-von-Neumann $p$-norm.
In this review, we have focused on two specific cases of these norm-based fidelity measures $\mathcal{F}_p$, as well as the quantum Chernoff bound $\mathcal{F}_{\mbox{\tiny Q}}$. Analyzing the properties of this family of candidate measures for other integer values of $p>2$ is clearly something that may be of independent interest. On the other hand, despite much effort, the validity of a few desired properties of various candidate fidelity measures remains unknown (see Table~\ref{tbl:Concavity}, Table~\ref{tbl:Monotonicity} and Table~\ref{tbl:RelatedMetrics} for details). For each of these conjectured properties, at least 2000 optimizations with different initial starting points have been carried out for each Hilbert space dimension $d=2,3,\ldots,10$. Given that these intensive numerical searches for counterexamples in small Hilbert space dimensions have not produced any, we are inclined to conjecture that these properties are indeed valid.
An intriguing result is that of the $\mathcal{F}_p$ fidelities investigated, the $\mathcal{F}_{1}$ fidelity gives the largest average values when random density matrices are compared. While this relationship is only universal for the qubit case\textemdash otherwise there are occasional exceptions\textemdash it is found on average for higher dimensional Hilbert spaces as well. This is clearly related to the fact that the $\mathcal{F}_{1}$ fidelity is defined as a maximum fidelity over purifications. These purifications represent an unmeasured portion of Hilbert space. Hence, measuring $\mathcal{F}_{1}$ on a subspace could introduce a bias compared to a more complete measurement on a larger relevant space, if there are additional errors in the unmeasured part of the relevant Hilbert space.
To conclude, while the Uhlmann-Josza measure is the most widely known fidelity measure, there are other alternatives which have properties that can make them preferable under some circumstances. They may be either simpler to compute or more relevant to certain applications. For example, the Hilbert-Schmidt fidelity measure $\mathcal{F}_{2}$ is well-defined even for unnormalized density matrices, and appears less biased towards high values. Finding out the full implications of this and other mathematical properties of the candidate measures, however, is too broad a research topic to be considered within the present review.
\section{Detailed proofs}
\label{Sec:Proofs}
In this Appendix, we provide detailed proofs of the fidelity results from the earlier sections, where these were not already given.
\subsection*{Norm-based fidelity properties}
\label{App:p-norm}
\begin{theorem} All norm-based fidelities, $\mathcal{F}_p$, obey the Jozsa axioms for $p\ge1$.
\end{theorem}
\begin{proof} ~ \begin{itemize} \item[J1a)] $\mathcal{F}_p(\rho,\sigma)\in[0,1]$. The minimum bound is trivial, since norms are positive semi-definite. The maximum bound follows from the Hölder inequality, since: $\left\Vert \sqrt{\rho}\sqrt{\sigma}\right\Vert _{p}^{2}\leq\left\Vert \sqrt{\rho}\right\Vert _{2p}^{2}\left\Vert \sqrt{\sigma}\right\Vert _{2p}^{2}=\left\Vert \rho\right\Vert _{p}\left\Vert \sigma\right\Vert _{p}\leq\max\left[\left\Vert \sigma\right\Vert _{p}^{2},\left\Vert \rho\right\Vert _{p}^{2}\right]$. \item[J1b)] $\mathcal{F}_p(\rho,\sigma)=1$ \emph{if and only if} $\rho=\sigma$. Clearly, $\mathcal{F}_p(\rho,\rho)=1$ for identical operators, since $\left\Vert \sqrt{\rho}\sqrt{\rho}\right\Vert _{p}^{2}=\left\Vert \rho\right\Vert _{p}^{2}=\max\left[\left\Vert \rho\right\Vert _{p}^{2},\left\Vert \rho\right\Vert _{p}^{2}\right]$. To prove the converse, note from the above proof of J1a that the maximum bound is attained if and only if the Hölder inequality becomes an equality. Taking into account the normalization of density matrices, we thus see that the maximum bound $\mathcal{F}_p(\rho,\sigma)=1$ is attained if and only if $\rho=\sigma$. \item[J1c)] $\mathcal{F}_p(\rho,\sigma)=0$ \emph{if and only if} $\rho\,\sigma=0$. This follows since $\rho\,\sigma=0\Longleftrightarrow\sqrt{\rho}\sqrt{\sigma}=0\Longleftrightarrow\left\Vert \sqrt{\rho}\sqrt{\sigma}\right\Vert _{p}=0$. \item[J2)] $\mathcal{F}_p(\rho,\sigma)=\mathcal{F}_p(\sigma,\rho)$. This follows from the invariance of the matrix norm under Hermitian conjugation, since $(\sqrt{\rho}\sqrt{\sigma})^{\dagger}=\sqrt{\sigma}\sqrt{\rho}$. \item[J3)] $\mathcal{F}_p(\rho,\sigma)=\tr(\rho\,\sigma)$ if either $\rho$ or $\sigma$
is a pure state. From the definition, we have $\|\sqrt{\rho}\sqrt{\sigma}\|_{p}^{2}=\left[\tr\left(\sqrt{\rho}\,\sigma\,\sqrt{\rho}\right)^{\frac{p}{2}}\right]^{\frac{2}{p}}$. Let $\rho$ be a pure state, then $\rho=\sqrt{\rho}=\proj{\psi}$ for some $\ket{\psi}$, the expression thus simplifies to $\left[\left(\bra{\psi}\sigma\ket{\psi}\right)^{\frac{p}{2}}\tr\left(\rho^{\frac{p}{2}}\right)\right]^{\frac{2}{p}}$, which reduces to $\bra{\psi}\sigma\ket{\psi}=\tr(\rho\,\sigma)$ since $\tr\left(\rho^{\frac{p}{2}}\right)=\tr\rho=1$. Similarly, the normalizing factor is unity, since $\left\Vert \rho\right\Vert _{p}^{2}=1\ge\left\Vert \sigma\right\Vert _{p}^{2}$, and the argument holds if $\rho,\sigma$ are interchanged. \item[J4)] $\mathcal{F}_p(U\rho\,U^{\dagger},U\sigma U^{\dagger})=\mathcal{F}_p(\rho,\sigma)$. This follows from the unitary invariance of the matrix norm. \end{itemize} \end{proof}
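These axioms can be spot-checked numerically. The sketch below implements $\mathcal{F}_p=\Vert\sqrt{\rho}\sqrt{\sigma}\Vert_{p}^{2}/\max\left[\Vert\rho\Vert_{p}^{2},\Vert\sigma\Vert_{p}^{2}\right]$ with an eigendecomposition-based matrix square root; the random-state construction, dimension and tolerances are arbitrary choices of this sketch:

```python
import numpy as np

def psd_sqrt(A):
    """Square root of a positive semi-definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def schatten(A, p):
    """Schatten p-norm: p-norm of the singular-value vector."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

def F_p(rho, sigma, p):
    num = schatten(psd_sqrt(rho) @ psd_sqrt(sigma), p) ** 2
    return num / max(schatten(rho, p) ** 2, schatten(sigma, p) ** 2)

def random_rho(d, rng):
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

rng = np.random.default_rng(0)
rho, sigma = random_rho(3, rng), random_rho(3, rng)
v = rng.standard_normal(3); v /= np.linalg.norm(v)
pure = np.outer(v, v)

for p in (1, 2, 3):
    f = F_p(rho, sigma, p)
    assert 0 <= f <= 1 + 1e-12                           # J1a
    assert abs(F_p(rho, rho, p) - 1) < 1e-9              # J1b (forward)
    # J3: with one pure argument, F_p reduces to tr(rho sigma)
    assert abs(F_p(pure, sigma, p) - np.trace(pure @ sigma).real) < 1e-8
```

The J3 check works for every $p$ because a rank-one product $\sqrt{\rho}\sqrt{\sigma}$ has a single nonzero singular value, so all Schatten norms coincide on it.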
\subsection*{Normalization}
We now show that both $\mathcal{F}_{\mbox{\tiny AM}}$ and $\mathcal{F}_{\mbox{\tiny GM}}$ obey axioms (J1a) and (J1b).
\begin{theorem} $\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma),\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)\in[0,1]$ with the upper bound attained if and only if $\rho=\sigma$. \end{theorem} \begin{proof} The non-negativity of $\mathcal{F}_{\mbox{\tiny AM}}$ and $\mathcal{F}_{\mbox{\tiny GM}}$ is obvious. Next, we prove that these quantities are upper bounded by 1. To this end, we recall that the geometric mean of two numbers is always upper bounded by their arithmetic mean; hence, $\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)\le\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)\le1$, where the second inequality follows easily from the Cauchy-Schwarz inequality.
Moreover, the Cauchy-Schwarz inequality is saturated if and only if its arguments are scalar multiples of each other. Since our arguments have unit trace (density matrices), saturation can only occur if $\rho=\sigma$. It is easy to see by inspection that both $\mathcal{F}_{\mbox{\tiny AM}}(\rho,\rho)$ and $\mathcal{F}_{\mbox{\tiny GM}}(\rho,\rho)$ are indeed equal to unity. Thus $\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)=1$ if and only if $\rho=\sigma$. \end{proof}
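A quick numerical spot-check of these bounds; the random-state generator, dimension and seed are arbitrary choices of this sketch:

```python
import numpy as np

def random_rho(d, rng):
    """Random density matrix from a Wishart-type construction."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

def F_AM(rho, sigma):
    # Arithmetic-mean normalization: 2 tr(rho sigma) / [tr(rho^2) + tr(sigma^2)]
    return 2 * np.trace(rho @ sigma).real / (
        np.trace(rho @ rho).real + np.trace(sigma @ sigma).real)

def F_GM(rho, sigma):
    # Geometric-mean normalization: tr(rho sigma) / sqrt(tr(rho^2) tr(sigma^2))
    return np.trace(rho @ sigma).real / np.sqrt(
        np.trace(rho @ rho).real * np.trace(sigma @ sigma).real)

rng = np.random.default_rng(1)
for _ in range(100):
    rho, sigma = random_rho(4, rng), random_rho(4, rng)
    assert 0 <= F_AM(rho, sigma) <= F_GM(rho, sigma) <= 1 + 1e-12
    assert abs(F_AM(rho, rho) - 1) < 1e-12
    assert abs(F_GM(rho, rho) - 1) < 1e-12
```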
\subsection*{Multiplicativity}
\label{App:Proof:Mul}
\begin{theorem} The measure $\mathcal{F}_{2}$ is generally super-multiplicative, but is multiplicative when both states are appended by the same uncorrelated ancillary state, or for tensor powers of the same states. \end{theorem} \begin{proof} Let us define $\rho=\rho_{1}\otimes\rho_{2}$ and $\sigma=\sigma_{1}\otimes\sigma_{2}$, then start by noting that \begin{equation} \mathcal{F}_{2}(\rho,\sigma)=\frac{\tr(\rho_{1}\sigma_{1})\tr(\rho_{2}\sigma_{2})}{\max\left[\tr(\rho_{1}^{2})\tr(\rho_{2}^{2}),\tr(\sigma_{1}^{2})\tr(\sigma_{2}^{2})\right]} \end{equation} and \begin{equation} \mathcal{F}_{2}(\rho_{1},\sigma_{1})\mathcal{F}_{2}(\rho_{2},\sigma_{2})=\frac{\prod_{i=1}^{2}\tr(\rho_{i}\sigma_{i})}{\prod_{i=1}^{2}\max\left[\tr(\rho_{i}^{2}),\tr(\sigma_{i}^{2})\right]}. \end{equation} Clearly, saturation of inequality~(\ref{Eq:SuperMultiplicative}) is obtained if the denominators of the two equations above coincide, i.e., \begin{eqnarray} \max\left[\prod_{i=1}^{2}\tr(\rho_{i}^{2}),\prod_{j=1}^{2}\tr(\sigma_{j}^{2})\right]=\prod_{i=1}^{2}\max\left[\tr(\rho_{i}^{2}),\tr(\sigma_{i}^{2})\right].\nonumber \\ \,\label{eq:maxeq} \end{eqnarray} It is easy to check that this equation holds when at least one of the following is observed: \begin{itemize} \item $\tr(\rho_{i}^{2})=\tr(\sigma_{i}^{2})$ for some $i=1,2$, \item $\tr(\rho_{i}^{2})>\tr(\sigma_{i}^{2})$ for both $i=1,2$, \item $\tr(\rho_{i}^{2})<\tr(\sigma_{i}^{2})$ for both $i=1,2$. \end{itemize} Note that the first condition is satisfied when each quantum state is appended by a state with the same purity, e.g., when both are appended by the same ancillary state $\tau$. On the other hand, the second or third condition is satisfied in the scenario of tensor powers, i.e., $\rho_{1}=\rho_{2}$ etc., cf.~Eq.~\eref{Eq:MultiplicativeTensorPower}. When none of the above conditions is satisfied, it is easy to see that the r.h.s. of Eq.~\eref{eq:maxeq} must be larger than its l.h.s., and hence super-multiplicativity follows. For example, if $\tr(\rho_{1}^{2})>\tr(\sigma_{1}^{2})$ and $\tr(\rho_{2}^{2})<\tr(\sigma_{2}^{2})$, then the r.h.s. of \eref{eq:maxeq} becomes $\tr(\rho_{1}^{2})\tr(\sigma_{2}^{2})$, which is larger than both $\tr(\rho_{1}^{2})\tr(\rho_{2}^{2})$ and $\tr(\sigma_{1}^{2})\tr(\sigma_{2}^{2})$ by assumption. The proof for the case when $\tr(\rho_{1}^{2})<\tr(\sigma_{1}^{2})$ and $\tr(\rho_{2}^{2})>\tr(\sigma_{2}^{2})$ proceeds analogously. \end{proof}
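The case analysis in the proof above can be spot-checked numerically; the random-state generator, dimensions and seed below are arbitrary choices of this sketch:

```python
import numpy as np

def random_rho(d, rng):
    """Random density matrix from a Wishart-type construction."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

def F2(rho, sigma):
    # Hilbert-Schmidt fidelity: tr(rho sigma) / max[tr(rho^2), tr(sigma^2)]
    return np.trace(rho @ sigma).real / max(np.trace(rho @ rho).real,
                                            np.trace(sigma @ sigma).real)

rng = np.random.default_rng(2)
for _ in range(200):
    r1, r2 = random_rho(2, rng), random_rho(2, rng)
    s1, s2 = random_rho(2, rng), random_rho(2, rng)
    joint = F2(np.kron(r1, r2), np.kron(s1, s2))
    # Super-multiplicativity on generic pairs:
    assert joint >= F2(r1, s1) * F2(r2, s2) - 1e-12
    # Tensor powers: multiplicativity holds with equality.
    assert abs(F2(np.kron(r1, r1), np.kron(s1, s1)) - F2(r1, s1) ** 2) < 1e-12
```

The tensor-power equality follows because $\max[a^{2},b^{2}]=\max[a,b]^{2}$ for positive purities, which is exactly the second/third condition in the proof.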
\begin{theorem} The measure $\mathcal{F}_{\mbox{\tiny C}}$ is generally super-multiplicative. \end{theorem}
\begin{proof} Let $d_{1}\mathrel{\mathop:}=\dim\rho_{1}=\dim\sigma_{1}$, $d_{2}\mathrel{\mathop:}=\dim\rho_{2}=\dim\sigma_{2}$ and, in terms of these, define $r\mathrel{\mathop:}=(d_{1}-1)^{-1}$, $s\mathrel{\mathop:}=(d_{2}-1)^{-1}$ and $t\mathrel{\mathop:}=(d_{1}d_{2}-1)^{-1}$ {[}or, equivalently, $t=rs/(1+r+s)${]}, in such a way that $r,s\in(0,1]$. Then, the statements of the supermultiplicativity of $\mathcal{F}_{\mbox{\tiny C}}$ (to be proved) and of $\mathcal{F}_{\mbox{\tiny N}}$ (proved in~\cite{mendonca2008}) can be expressed, respectively, as
\begin{eqnarray} 2(1-t)+2(1+t)\mathcal{F}_{\mbox{\tiny N}}(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2}) & \geq\nonumber \\ \left[(1-r)+(1+r)x\right]\left[(1-s)+(1+s)y\right]\,,\label{eq:tbp}\\ 2(1-t)+2(1+t)\mathcal{F}_{\mbox{\tiny N}}(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2}) & \geq\nonumber \\ 2(1-t)+2(1+t)xy\,,\label{eq:ap} \end{eqnarray} where, for brevity, we have defined $x\mathrel{\mathop:}=\mathcal{F}_{\mbox{\tiny N}}(\rho_{1},\sigma_{1})\in[0,1]$ and $y\mathrel{\mathop:}=\mathcal{F}_{\mbox{\tiny N}}(\rho_{2},\sigma_{2})\in[0,1]$. In what follows, the validity of inequality~(\ref{eq:tbp}) is established by showing that the r.h.s. of~(\ref{eq:ap}) dominates the r.h.s. of~(\ref{eq:tbp}); an inequality that can be written as \begin{equation} 2(1-t)-(1-r)(1-s)\geq f_{r,s}(x,y),\label{eq:tbp2} \end{equation} where we have defined the functions \begin{eqnarray} f_{r,s}(x,y) & \mathrel{\mathop:}=(1+r)(1-s)x+(1-r)(1+s)y\nonumber \\
 & -\left[2(1+t)-(1+r)(1+s)\right]xy. \end{eqnarray}
To see that inequality~(\ref{eq:tbp2}) holds, we seek the maximum of the function $f_{r,s}(x,y)$ over the domain $x,y\in[0,1]$. Stationary points of $f_{r,s}(x,y)$ are obtained by setting its partial derivatives with respect to $x$ and $y$ to zero. It can be verified that, unless $r,s\in\{0,1\}$, these stationary points lie outside the domain $x,y\in[0,1]$, so the maximum must lie on the boundary of the domain. To identify it, note that $f_{r,s}(x,y)$ is increasing in both $x$ and $y$ for all values of the parameters $r,s\in[0,1]$. This follows, for example, from the observation that its partial derivatives with respect to $x$ and $y$ are linear and assume non-negative values at the extremes of the domain $x,y\in[0,1]$. Indeed, \begin{eqnarray}
\left.\frac{\partial f_{r,s}(x,y)}{\partial x}\right|_{y=0} & =(1+r)(1-s)\geq0\nonumber \\
\left.\frac{\partial f_{r,s}(x,y)}{\partial x}\right|_{y=1} & =\frac{2r(1+r)}{1+r+s}>0\nonumber \\
\left.\frac{\partial f_{r,s}(x,y)}{\partial y}\right|_{x=0} & =(1-r)(1+s)\geq0\nonumber \\
\left.\frac{\partial f_{r,s}(x,y)}{\partial y}\right|_{x=1} & =\frac{2s(1+s)}{1+r+s}>0 \end{eqnarray} Consequently, it suffices to verify the validity of inequality~(\ref{eq:tbp2}) at $x=y=1$, where $f_{r,s}(x,y)$ is maximal. In this case, a straightforward simplification shows that the inequality is satisfied with saturation. \end{proof}
\begin{theorem} The measure $\mathcal{F}_{\mbox{\tiny Q}}$ is generally super-multiplicative under tensor products, but is multiplicative when both states are appended by the same uncorrelated ancillary state, or when considering tensor powers of the same states. \end{theorem}
\begin{proof} For given density matrices $\rho_{1}$, $\rho_{2}$, $\sigma_{1}$ and $\sigma_{2}$, we see that \begin{eqnarray} \mathcal{F}_{\mbox{\tiny Q}}(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2}) & =\min_{s}\,\,\tr\left[\left(\rho_{1}\otimes\rho_{2}\right)^{s}\left(\sigma_{1}\otimes\sigma_{2}\right)^{1-s}\right]\nonumber \\
& =\min_{s}\,\,\tr(\rho_{1}^{s}\,\sigma_{1}^{1-s})\tr(\rho_{2}^{s}\,\sigma_{2}^{1-s})\nonumber \\
& =\min_{s}\,\,f_{1}(s)\,f_{2}(s)\nonumber \\
& =f_{1}(s^{*})f_{2}(s^{*}) \end{eqnarray} where $f_{i}(s)\mathrel{\mathop:}=\tr(\rho_{i}^{s}\,\sigma_{i}^{1-s})$, $s^{*}$ is a minimizer of the function $f_{1}(s)\,f_{2}(s)$, and it is worth recalling that each $f_{i}(s)$ is a convex function of $s$~\cite{audenaert2007discriminating}.
On the other hand, it also follows from the definition of $\mathcal{F}_{\mbox{\tiny Q}}$ that \begin{eqnarray} \mathcal{F}_{\mbox{\tiny Q}}(\rho_{1},\sigma_{1})\mathcal{F}_{\mbox{\tiny Q}}(\rho_{2},\sigma_{2}) & =\min_{s_{1}}\,\,f_{1}(s_{1})\,\min_{s_{2}}\,\,f_{2}(s_{2})\nonumber \\
& =f_{1}(s_{1}^{*})\,f_{2}(s_{2}^{*}), \end{eqnarray} where $s_{i}^{*}\in\mathcal{S}_{i}$ is a minimizer of $f_{i}(s_{i})$, and $\mathcal{S}_{i}\subseteq[0,1]$ is the set of minimizers of $f_{i}(s_{i})$. Note that the convexity of $f_{i}$ guarantees that $\mathcal{S}_{i}$ is a convex interval in $[0,1]$.
Evidently, for general density matrices $\rho_{1}$, $\rho_{2}$, $\sigma_{1}$ and $\sigma_{2}$, the sets of minimizers of $f_{1}(s)$ and $f_{2}(s)$ differ, i.e., $\mathcal{S}_{1}\neq\mathcal{S}_{2}$. Without loss of generality, let us assume that $\min\mathcal{S}_{1}\le\min\mathcal{S}_{2}$. There are now two cases to consider. If $\max\mathcal{S}_{1}\ge\min\mathcal{S}_{2}$, we must have $\mathcal{S}_{2}\cap\mathcal{S}_{1}\neq\{\}$, and hence the minimizer $s^{*}$ of $f_{1}(s)f_{2}(s)$ must also minimize $f_{1}$ and $f_{2}$. In this case, we have \begin{equation} \mathcal{F}_{\mbox{\tiny Q}}(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2})=\mathcal{F}_{\mbox{\tiny Q}}(\rho_{1},\sigma_{1})\mathcal{F}_{\mbox{\tiny Q}}(\rho_{2},\sigma_{2}), \end{equation} meaning that multiplicativity holds true for this set of density matrices.
In the event that $\max\mathcal{S}_{1}<\min\mathcal{S}_{2}$, we have $\mathcal{S}_{2}\cap\mathcal{S}_{1}=\{\}$, which implies that any minimizer $s^{*}$ of $f_{1}(s)f_{2}(s)$ must be such that $s^{*}\not\in\mathcal{S}_{1}$ and/or $s^{*}\not\in\mathcal{S}_{2}$. As a result, we must have $f_{i}(s^{*})\ge f_{i}(s_{i}^{*})$ for all $i$, and thus \begin{eqnarray} \mathcal{F}_{\mbox{\tiny Q}}(\rho_{1}\otimes\rho_{2},\sigma_{1}\otimes\sigma_{2}) & =f_{1}(s^{*})f_{2}(s^{*}),\nonumber \\
& \ge f_{1}(s_{1}^{*})f_{2}(s_{2}^{*}),\nonumber \\
& =\mathcal{F}_{\mbox{\tiny Q}}(\rho_{1},\sigma_{1})\mathcal{F}_{\mbox{\tiny Q}}(\rho_{2},\sigma_{2}), \end{eqnarray} which demonstrates the super-multiplicativity of $\mathcal{F}_{\mbox{\tiny Q}}$.
For tensor powers of the same state, cf. Eq.~\eref{Eq:MultiplicativeTensorPower}, the fact that $\tr(A\otimes B)=\left(\tr\,A\right)\left(\tr\,B\right)$ and the above arguments make it evident that the minimizer for a single copy is also the minimizer for an arbitrary number of copies. Thus, $\mathcal{F}_{\mbox{\tiny Q}}$ is multiplicative under tensor powers. Similarly, for an uncorrelated ancillary state $\tau$, the definition of $\mathcal{F}_{\mbox{\tiny Q}}$, and axiom (J1b) immediately imply that the measure is multiplicative when each quantum state is appended by $\tau$, i.e., $\mathcal{F}_{\mbox{\tiny Q}}(\rho\otimes\tau,\sigma\otimes\tau)=\mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma)$. \end{proof}
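The minimizer argument can be illustrated with a direct grid search over $s$; the grid resolution, dimensions and seed below are assumptions of this sketch:

```python
import numpy as np

def random_rho(d, rng):
    """Random full-rank density matrix (full rank with probability 1)."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

def mat_pow(A, s):
    """Matrix power of a positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 1e-15, None) ** s) @ V.conj().T

def FQ(rho, sigma, grid=np.linspace(0.0, 1.0, 201)):
    """Non-logarithmic quantum Chernoff bound: min_s tr(rho^s sigma^(1-s))."""
    return min(np.trace(mat_pow(rho, s) @ mat_pow(sigma, 1 - s)).real
               for s in grid)

rng = np.random.default_rng(3)
r1, r2 = random_rho(2, rng), random_rho(2, rng)
s1, s2 = random_rho(2, rng), random_rho(2, rng)

joint = FQ(np.kron(r1, r2), np.kron(s1, s2))
# Super-multiplicativity: min_s f1(s) f2(s) >= (min f1)(min f2).
assert joint >= FQ(r1, s1) * FQ(r2, s2) - 1e-9
# Tensor powers: the single-copy minimizer also works for two copies.
assert abs(FQ(np.kron(r1, r1), np.kron(s1, s1)) - FQ(r1, s1) ** 2) < 1e-6
```

Since both sides are evaluated on the same grid and $f_{1},f_{2}\ge 0$, the inequality $\min_{s}f_{1}(s)f_{2}(s)\ge(\min_{s}f_{1})(\min_{s}f_{2})$ holds exactly even before refining the grid.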
\begin{theorem} The measure $\mathcal{F}_{\mbox{\tiny AM}}$ is multiplicative under an uncorrelated ancillary state, that is, \begin{equation} \mathcal{F}_{\mbox{\tiny AM}}(\rho\otimes\tau,\sigma\otimes\tau)=\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)\,. \end{equation} \end{theorem} \begin{proof} The proof follows trivially from the application of the tensor-product identities \begin{eqnarray} (A\otimes B)(C\otimes D) & = & AC\otimes BD\label{eq:otimesmult}\\ \tr(A\otimes B) & = & \tr(A)\tr(B)\label{eq:trotimes} \end{eqnarray} Explicitly, \begin{eqnarray} \mathcal{F}_{\mbox{\tiny AM}}(\rho\otimes\tau,\sigma\otimes\tau) & = & \frac{2\tr(\rho\,\sigma\otimes\tau^{2})}{\tr(\rho^{2}\otimes\tau^{2})+\tr(\sigma^{2}\otimes\tau^{2})}\nonumber \\
& = & \frac{2\tr(\rho\,\sigma)\tr(\tau^{2})}{[\tr(\rho^{2})+\tr(\sigma^{2})]\tr(\tau^{2})}\nonumber \\
& = & \frac{2\tr(\rho\,\sigma)}{[\tr(\rho^{2})+\tr(\sigma^{2})]}\nonumber \\
& = & \mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma), \end{eqnarray} where Eq.~(\ref{eq:otimesmult}) was used to establish the second equality and Eq.~(\ref{eq:trotimes}) was used to establish the third equality. \end{proof}
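The chain of equalities above can be confirmed numerically; the states, ancilla dimension and seed below are arbitrary choices of this sketch:

```python
import numpy as np

def random_rho(d, rng):
    """Random density matrix from a Wishart-type construction."""
    G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    R = G @ G.conj().T
    return R / np.trace(R).real

def F_AM(rho, sigma):
    # 2 tr(rho sigma) / [tr(rho^2) + tr(sigma^2)]
    return 2 * np.trace(rho @ sigma).real / (
        np.trace(rho @ rho).real + np.trace(sigma @ sigma).real)

rng = np.random.default_rng(4)
rho, sigma, tau = random_rho(3, rng), random_rho(3, rng), random_rho(2, rng)

# Appending the same uncorrelated ancilla tau leaves F_AM unchanged:
lhs = F_AM(np.kron(rho, tau), np.kron(sigma, tau))
assert abs(lhs - F_AM(rho, sigma)) < 1e-10
```

The common factor $\tr(\tau^{2})$ cancels between numerator and denominator, exactly as in the proof.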
\subsection*{Proofs of average fidelity properties}
Here, we give the proofs showing when the respective fidelity measures satisfy Eq.~\eref{Eq:AveVsMixed}. Throughout, as mentioned in Section~\ref{Sec:PureVsMixed} and Section~\ref{Sec:PureVsMixed-1}, we will assume that the error state $\rho_{0}$ is orthogonal to {\em all} of the signal states, i.e., \begin{equation} \rho_{j}\rho_{0}=\rho_{0}\rho_{j}=0\quad\forall\,j\neq0.\label{Eq:OrthoError} \end{equation} If, in addition to Eq.~\eref{Eq:OrthoError}, all signal states are orthogonal to each other, i.e., \begin{equation} \rho_{j}\rho_{k}=0\quad\forall\,j\neq k,\label{Eq:OrthoAll} \end{equation} then all $\rho_{j}$ commute pairwise, and are hence diagonalizable in the same basis. In this case, we will see that Eq.~\eref{Eq:AveVsMixed} holds for $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$ and $\mathcal{F}_{\mbox{\tiny A}}$ independent of the purity of the signal states $\rho_{j}$. \begin{proof} Recall from the main text that in the present discussion, the average state-by-state fidelity is \begin{eqnarray} \mathcal{F}_{{\rm ave}}(\bm{p},\bm{\rho},\bm{\sigma})=\sum_{i\neq0}p_{i}\mathcal{F}(\rho_{i},\sigma_{i}).\label{Eq:AveF} \end{eqnarray} Note, however, that Eq.~\eref{Eq:OrthoAll} implies that for all $i\neq0$, \begin{eqnarray} \mathcal{F}_{1}(\rho_{i},\sigma_{i}) & =\left(\tr\sqrt{\sqrt{\rho_{i}}\sigma_{i}\sqrt{\rho_{i}}}\right)^{2},\nonumber \\
& =\left(\tr\sqrt{\sqrt{\rho_{i}}\left[\epsilon\rho_{0}+(1-\epsilon)\rho_{i}\right]\sqrt{\rho_{i}}}\right)^{2},\nonumber \\
& =\left(\sqrt{1-\epsilon}\tr\sqrt{\sqrt{\rho_{i}}\rho_{i}\sqrt{\rho_{i}}}\right)^{2},\nonumber \\
& =(1-\epsilon)\left(\tr\rho_{i}\right)^{2}=1-\epsilon\label{Eq:F1:State-by-state} \end{eqnarray} while \begin{eqnarray*} \mathcal{F}_{1}(\rho,\sigma) & =\left(\tr\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right)^{2},\\
& =\left(\tr\sqrt{\sqrt{\rho}\left[\epsilon\rho_{0}+(1-\epsilon)\rho\right]\sqrt{\rho}}\right)^{2},\\
& =\left(\sqrt{1-\epsilon}\tr\sqrt{\sqrt{\rho}\,\rho\,\sqrt{\rho}}\right)^{2},\\
& =(1-\epsilon)\left(\tr\sqrt{\rho^{2}}\right)^{2}=1-\epsilon \end{eqnarray*} Substituting Eq.~\eref{Eq:F1:State-by-state} to Eq.~\eref{Eq:AveF} and comparing the resulting expression with the above equation then verifies Eq.~\eref{Eq:AveVsMixed} for $\mathcal{F}_{1}$.
Next, we shall prove the equivalence for $\mathcal{F}_{\mbox{\tiny Q}}$. As with $\mathcal{F}_{1}$, Eq.~\eref{Eq:OrthoAll} implies that for all $i\neq0$, \begin{eqnarray} \mathcal{F}_{\mbox{\tiny Q}}(\rho_{i},\sigma_{i}) & =\min_{0\le s\le1}\tr(\rho_{i}^{s}\,\sigma_{i}^{1-s}),\nonumber \\
& =\min_{0\le s\le1}\tr(\rho_{i}^{s}\,\left[\epsilon\rho_{0}+(1-\epsilon)\rho_{i}\right]^{1-s}),\nonumber \\
& =\min_{0\le s\le1}\tr(\rho_{i}^{s}\,\left[\epsilon^{1-s}\rho_{0}^{1-s}+(1-\epsilon)^{1-s}\rho_{i}^{1-s}\right]),\nonumber \\
& =\min_{0\le s\le1}(1-\epsilon)^{1-s}\tr(\rho_{i})=1-\epsilon,\label{Eq:Fq:State-by-state} \end{eqnarray} where the last equality follows from the fact that $0\le1-\epsilon\le1$, and thus $(1-\epsilon)^{1-s}\ge1-\epsilon$ for all $0\le s\le1$.
In a similar manner, the simultaneous diagonalizability of $\rho$ and $\rho_{0}$ gives \begin{eqnarray} \mathcal{F}_{\mbox{\tiny Q}}(\rho,\sigma) & =\min_{0\le s\le1}\tr(\rho^{s}\,\sigma^{1-s}),\nonumber \\
& =\min_{0\le s\le1}\tr\left\{ \rho^{s}\,\left[\epsilon\rho_{0}+(1-\epsilon)\rho\right]^{1-s}\right\} ,\nonumber \\
& =\min_{0\le s\le1}\tr\left\{ \rho^{s}\,\left[\epsilon^{1-s}\rho_{0}^{1-s}+(1-\epsilon)^{1-s}\rho^{1-s}\right]\right\} ,\nonumber \\
& =\min_{0\le s\le1}(1-\epsilon)^{1-s}\tr(\rho)=1-\epsilon. \end{eqnarray} Substituting Eq.~\eref{Eq:Fq:State-by-state} into Eq.~\eref{Eq:AveF} and comparing with the last equation immediately verifies Eq.~\eref{Eq:AveVsMixed} for $\mathcal{F}_{\mbox{\tiny Q}}$ whenever Eq.~\eref{Eq:OrthoAll} holds.
To prove the equivalence for $\mathcal{F}_{\mbox{\tiny A}}$, we note from Eq.~\eref{Eq:OrthoAll} that for all $i\neq0$, \begin{eqnarray} \mathcal{F}_{\mbox{\tiny A}}(\rho_{i},\sigma_{i}) & =\left[\tr\left(\sqrt{\rho_{i}}\sqrt{\sigma_{i}}\right)\right]^{2},\nonumber \\
& =\left[\tr\left(\sqrt{\rho_{i}}\sqrt{\epsilon\rho_{0}+(1-\epsilon)\rho_{i}}\right)\right]^{2},\nonumber \\
& =\left\{ \tr\left[\sqrt{\rho_{i}}\left(\sqrt{\epsilon}\sqrt{\rho_{0}}+\sqrt{1-\epsilon}\sqrt{\rho_{i}}\right)\right]\right\} ^{2},\nonumber \\
& =\left[\sqrt{1-\epsilon}\tr\left(\rho_{i}\right)\right]^{2}=1-\epsilon.\label{Eq:Fa:State-by-state} \end{eqnarray}
Similarly, Eq.~\eref{Eq:OrthoAll} implies that \begin{eqnarray} \mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma) & =\left[\tr\left(\sqrt{\rho}\sqrt{\sigma}\right)\right]^{2},\nonumber \\
& =\left[\tr\left(\sqrt{\rho}\sqrt{\epsilon\rho_{0}+(1-\epsilon)\rho}\right)\right]^{2},\nonumber \\
& =\left\{ \tr\left[\sqrt{\rho}\left(\sqrt{\epsilon}\sqrt{\rho_{0}}+\sqrt{1-\epsilon}\sqrt{\rho}\right)\right]\right\} ^{2},\nonumber \\
& =\left[\sqrt{1-\epsilon}\tr\left(\rho\right)\right]^{2}=1-\epsilon. \end{eqnarray} Hence, substituting Eq.~\eref{Eq:Fa:State-by-state} into Eq.~\eref{Eq:AveF} and comparing with the last equation immediately verifies Eq.~\eref{Eq:AveVsMixed} for $\mathcal{F}_{\mbox{\tiny A}}$ whenever Eq.~\eref{Eq:OrthoAll} holds. \end{proof}
Notice that if instead of Eq.~\eref{Eq:OrthoAll}, we have the promise that \begin{equation} \tr(\rho_{j}^{2})\ge\tr(\sigma_{j}^{2}),\quad\tr(\rho^{2})\ge\tr(\sigma^{2}), \end{equation} then \begin{eqnarray} \mathcal{F}_{2}(\rho,\sigma) & =\frac{\tr(\rho\,\sigma)}{\max\left[\tr(\rho^{2}),\tr(\sigma^{2})\right]},\nonumber \\
& =\frac{\tr\left\{ \rho\,\left[\epsilon\rho_{0}+(1-\epsilon)\rho\right]\right\} }{\tr(\rho^{2})},\nonumber \\
& =1-\epsilon. \end{eqnarray} Eq.~\eref{Eq:F2:compare} then follows by combining Eq.~\eref{Eq:F2:State-by-state}, Eq.~\eref{Eq:AveF} and the last equation above.
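The chains of equalities above are easy to verify numerically on an explicit commuting example obeying Eqs.~\eref{Eq:OrthoError} and \eref{Eq:OrthoAll}. The following sketch (an illustrative check, not part of the proofs) uses the definitions of $\mathcal{F}_{1}$, $\mathcal{F}_{\mbox{\tiny Q}}$, $\mathcal{F}_{\mbox{\tiny A}}$ and $\mathcal{F}_{2}$ quoted above; the grid minimization and the support convention $0^{0}:=0$ in $\mathcal{F}_{\mbox{\tiny Q}}$ are implementation choices:

```python
import numpy as np

eps = 0.2
# Diagonal (hence commuting) states on C^4 with mutually orthogonal supports,
# so that both orthogonality conditions hold.
rho0 = np.diag([1.0, 0.0, 0.0, 0.0])     # error state
rho1 = np.diag([0.0, 0.5, 0.5, 0.0])     # mixed signal state
rho2 = np.diag([0.0, 0.0, 0.0, 1.0])     # pure signal state
rho = 0.5 * rho1 + 0.5 * rho2            # averaged signal state
mix = lambda r: eps * rho0 + (1.0 - eps) * r   # sigma = eps*rho0 + (1-eps)*rho

def msqrt(m):
    # square root of a PSD diagonal matrix
    return np.diag(np.sqrt(np.diag(m)))

def F1(r, s):   # Uhlmann-Jozsa fidelity (tr sqrt(sqrt(r) s sqrt(r)))^2
    return np.trace(msqrt(msqrt(r) @ s @ msqrt(r))) ** 2

def FA(r, s):   # (tr sqrt(r) sqrt(s))^2
    return np.trace(msqrt(r) @ msqrt(s)) ** 2

def FQ(r, s):   # min_{0<=t<=1} tr(r^t s^(1-t)); grid search, support convention
    dr, ds = np.diag(r), np.diag(s)
    on = dr > 0
    return min(np.sum(dr[on] ** t * ds[on] ** (1 - t))
               for t in np.linspace(0.0, 1.0, 101))

def F2(r, s):   # tr(r s) / max(tr r^2, tr s^2)
    # the purity premise tr(r^2) >= tr(s^2) holds for every pair below
    return np.trace(r @ s) / max(np.trace(r @ r), np.trace(s @ s))

# state-by-state and averaged-state fidelities all equal 1 - eps
for r in (rho1, rho2, rho):
    for F in (F1, FA, FQ, F2):
        assert abs(F(r, mix(r)) - (1.0 - eps)) < 1e-9
```

All twelve checks pass, with the minimum in $\mathcal{F}_{\mbox{\tiny Q}}$ attained at $s=0$, consistent with Eq.~\eref{Eq:Fq:State-by-state}.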
Finally, note that if instead we have the premise that $\tr(\rho^{2})=1$, then $\rho=\rho_{j}$, $\sigma=\sigma_{j}$, and there is only one term in the sum of Eq.~\eref{Eq:AveF}. Consequently, we must also have $\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)=\mathcal{F}_{\mbox{\tiny N}}(\rho_{j},\sigma_{j})=\mathcal{F}_{{\rm ave}}(\bm{p},\bm{\rho},\bm{\sigma})$ in this case.
\subsection*{Counterexamples}
\label{App:CountExamples}
We provide here some counterexamples, omitted from the main text, which show that various fidelity measures fail certain desired properties.
To verify that $\mathcal{F}_{\mbox{\tiny AM}}$ is generally not (super)multiplicative, it suffices to consider $\rho=\frac{1}{5}(\Pi_{0}+4\Pi_{1})$ and $\sigma=\frac{1}{2}\mathbf{1}_{2}$. An explicit calculation gives $\mathcal{F}_{\mbox{\tiny AM}}(\rho\otimes\rho,\sigma\otimes\sigma)\approx0.702$ while $[\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)]^{2}\approx0.718>\mathcal{F}_{\mbox{\tiny AM}}(\rho\otimes\rho,\sigma\otimes\sigma)$, thus showing a violation of the desired (super)multiplicative property.
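This counterexample can be reproduced numerically. The sketch below assumes the arithmetic-mean normalization $\mathcal{F}_{\mbox{\tiny AM}}(\rho,\sigma)=\tr(\rho\sigma)/\{[\tr(\rho^{2})+\tr(\sigma^{2})]/2\}$ (an assumption suggested by the notation; the main text should be consulted for the actual definition), which indeed reproduces the quoted values $0.718$ and $0.702$:

```python
import numpy as np

def F_AM(r, s):
    # arithmetic-mean-normalized overlap (assumed form of F_AM)
    return np.trace(r @ s) / ((np.trace(r @ r) + np.trace(s @ s)) / 2)

rho = np.diag([1 / 5, 4 / 5])      # (Pi_0 + 4 Pi_1)/5
sigma = np.eye(2) / 2              # maximally mixed qubit

single = F_AM(rho, sigma) ** 2                           # ~ 0.718
double = F_AM(np.kron(rho, rho), np.kron(sigma, sigma))  # ~ 0.702
assert double < single             # violates (super)multiplicativity
```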
To see that $\mathcal{F}_{\mbox{\tiny C}}$, $\mathcal{F}_{\mbox{\tiny GM}}$ and $\mathcal{F}_{\mbox{\tiny AM}}$ can be contractive under the partial trace operation, it suffices to consider the following pair of two-qubit density matrices (written in the product basis): \begin{equation} \rho=\Pi_{0}\otimes\Pi_{0},\quad\sigma=\frac{1}{8}\left(\begin{array}{cccc} 3 & 0 & 0 & \sqrt{3}\\ 0 & 4 & 0 & 0\\ 0 & 0 & 0 & 0\\ \sqrt{3} & 0 & 0 & 1 \end{array}\right), \end{equation} and consider the partial trace of $\rho$ and $\sigma$ over the first qubit subsystem.
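For $\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)=\tr(\rho\sigma)/\sqrt{\tr(\rho^{2})\tr(\sigma^{2})}$ the contraction can be confirmed numerically as follows (the checks for $\mathcal{F}_{\mbox{\tiny C}}$ and $\mathcal{F}_{\mbox{\tiny AM}}$ are analogous once their definitions from the main text are substituted):

```python
import numpy as np

def F_GM(r, s):  # geometric-mean-normalized overlap
    return np.trace(r @ s).real / np.sqrt((np.trace(r @ r) * np.trace(s @ s)).real)

def ptrace_first(m):  # partial trace over the first qubit of a two-qubit matrix
    return m.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

P0 = np.diag([1.0, 0.0])
rho = np.kron(P0, P0)
sigma = np.array([[3, 0, 0, np.sqrt(3)],
                  [0, 4, 0, 0],
                  [0, 0, 0, 0],
                  [np.sqrt(3), 0, 0, 1]]) / 8

before = F_GM(rho, sigma)                              # ~ 0.530
after = F_GM(ptrace_first(rho), ptrace_first(sigma))   # ~ 0.514
assert after < before   # F_GM decreased under the partial trace
```

Here the value drops from about $0.530$ to about $0.514$ after tracing out the first qubit, exhibiting the claimed contraction.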
\section{Metric properties}
\label{Sec:Proofs-1}
In this Appendix, we give detailed proofs of the metric properties of the various fidelity measures.
\label{App:Met}
\subsection*{General definitions}
For a given fidelity measure $\mathcal{F}(\rho,\sigma)$, let us define the following functionals of $\mathcal{F}$: \begin{eqnarray} A[\mathcal{F}(\rho,\sigma)] & \mathrel{\mathop:}=\arccos[{\sqrt{\mathcal{F}(\rho,\sigma)}}],\nonumber \\ B[\mathcal{F}(\rho,\sigma)] & \mathrel{\mathop:}=\sqrt{2-2\sqrt{\mathcal{F}(\rho,\sigma)}},\nonumber \\ C[\mathcal{F}(\rho,\sigma)] & \mathrel{\mathop:}=\sqrt{1-\mathcal{F}(\rho,\sigma)}. \end{eqnarray}
In what follows, we will provide the proofs of the metric properties of these functionals of $\mathcal{F}$ for the various fidelity measures discussed in Sec.~\ref{Sec:Metric}. By a metric on a set $S$, we mean a mapping $\mathfrak{D}:S\times S\to\mathbb{R}$ that satisfies the following properties for every $a,b,c\in S$: \begin{enumerate} \item[(M1)] $\mathfrak{D}(a,b)\geq0$ (Nonnegativity)\,, \item[(M2)] $\mathfrak{D}(a,b)=0$ iff $a=b$ (Identity of Indiscernibles)\,, \item[(M3)] $\mathfrak{D}(a,b)=\mathfrak{D}(b,a)$ (Symmetry)\,, \item[(M4)] $\mathfrak{D}(a,c)\leq\mathfrak{D}(a,b)+\mathfrak{D}(b,c)$ (Triangle Inequality)\,. \end{enumerate} Our main tool is a simplified version of Schoenberg's theorem~\cite{Schoenberg1938}, reproduced as follows. \begin{theorem}[Schoenberg]\label{thm:schoenberg} Let $\mathcal{X}$ be a nonempty set and $K:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ a function such that $K(x,y)=K(y,x)$ and $K(x,y)\geq0$ for all $x,y\in\mathcal{X}$, with saturation iff $x=y$. If the implication \begin{equation} \sum_{i=1}^{n}{c_{i}}=0\Rightarrow\sum_{i,j=1}^{n}{K(x_{i},x_{j})c_{i}c_{j}}\leq0\label{eq:ndk} \end{equation} holds for all $n\geq2$, $\{x_{1},\ldots,x_{n}\}\subseteq\mathcal{X}$ and $\{c_{1},\ldots,c_{n}\}\subseteq\mathbb{R}$, then $\sqrt{K}$ is a metric. \end{theorem}
The theorem has previously been used in~\cite{mendonca2008} to show that $C[\mathcal{F}_{\mbox{\tiny N}}]$ is a metric for the space of density matrices. However, the alternative proof for the metric properties of $B[\mathcal{F}_{1}(\rho,\sigma)]$ and $C[\mathcal{F}_{1}(\rho,\sigma)]$ given in~\cite{mendonca2008} is flawed due to an erroneous application of the above theorem.
\subsection*{$\mathcal{F}_{2}$ metric properties}
Here, we will show that $C[\mathcal{F}_{2}(\rho,\sigma)]$ is a metric for the space of density matrices. Because $\mathcal{F}_{2}$ complies with Jozsa's axioms (J1) and (J2), it is immediate that $C[\mathcal{F}_{2}]$ is non-negative (M1), satisfies the identity of indiscernibles (M2) and is symmetric (M3). Hence, in order to establish $C[\mathcal{F}_{2}]$ as a genuine metric, we only have to prove that it satisfies the triangle inequality \begin{equation} C[\mathcal{F}_{2}(\rho,\sigma)]\leq C[\mathcal{F}_{2}(\rho,\tau)]+C[\mathcal{F}_{2}(\tau,\sigma)]\label{eq:triangineqrhosigmatau} \end{equation} for \emph{arbitrary} $d$-dimensional density matrices $\rho$, $\sigma$ and $\tau$. To this end, we shall first prove the following lemma:
\begin{lemma}\label{Lemma:trigonometric} For any $\vartheta,\varphi\in[0,2\pi]$ and $p,q\in[0,1]$, \begin{eqnarray} \sqrt{1-pq\cos(\vartheta+\varphi)} & \leq\sqrt{1-p\cos\vartheta}+\sqrt{1-q\cos\varphi},\nonumber \\ \sqrt{1-pq\cos(\vartheta-\varphi)} & \geq\sqrt{1-p\cos\vartheta}-\sqrt{1-q\cos\varphi}\nonumber \\ \,\label{eq:lemmaineq1} \end{eqnarray} \end{lemma}
\begin{proof} We start by showing that both inequalities hold in the special case of $p=q=1$. To see this, note that \begin{eqnarray}
\sqrt{1-\cos(\vartheta+\varphi)} & =\sqrt{2}\left|\sin\frac{\vartheta}{2}\cos\frac{\varphi}{2}+\sin\frac{\varphi}{2}\cos\frac{\vartheta}{2}\right|\nonumber \\
& \leq\sqrt{2}\left|\sin\frac{\vartheta}{2}\cos\frac{\varphi}{2}\right|+\sqrt{2}\left|\sin\frac{\varphi}{2}\cos\frac{\vartheta}{2}\right|\nonumber \\
& \leq\sqrt{2}\left|\sin\frac{\vartheta}{2}\right|+\sqrt{2}\left|\sin\frac{\varphi}{2}\right|\nonumber \\
& =\sqrt{1-\cos\vartheta}+\sqrt{1-\cos\varphi} \end{eqnarray}
where we have used the trigonometric identity $\sqrt{1-\cos{x}}=\sqrt{2}|\sin{\frac{x}{2}}|$
in the first and last lines, the inequality $|x+y|\leq|x|+|y|$ in the second line, and the fact that cosines are upper bounded by $1$ in the third line.
To see that the second inequality of (\ref{eq:lemmaineq1}) also holds for $p=q=1$, we rewrite it in the equivalent form \begin{equation}
\left|\sin\left(\frac{\vartheta-\varphi}{2}\right)\right|\geq\sin\left(\frac{\vartheta}{2}\right)-\sin\left(\frac{\varphi}{2}\right)\,,\label{eq:ineqsin} \end{equation} which is clearly valid if $\sin\frac{\vartheta}{2}\leq\sin\frac{\varphi}{2}$. If instead $\sin\frac{\vartheta}{2}>\sin\frac{\varphi}{2}$, then both sides of inequality (\ref{eq:ineqsin}) are non-negative and can thus be squared to yield an equivalent inequality that can be simplified to \begin{equation} 4\sin\left(\frac{\vartheta}{2}\right)\sin\left(\frac{\varphi}{2}\right)\sin^{2}\left(\frac{\vartheta-\varphi}{4}\right)\geq0. \end{equation} Since this clearly holds for $\vartheta,\varphi\in[0,2\pi]$, inequality~(\ref{eq:lemmaineq1}) for $p=q=1$ must also hold.
In order to show Lemma~\ref{Lemma:trigonometric} for general $p,q\in[0,1]$, let us {\em define} the angles $\Theta,\Phi\in[0,\pi]$ as follows \begin{eqnarray} \cos\Theta & =p\cos\vartheta,\quad\sin\Theta=\sqrt{1-p^{2}\cos^{2}\vartheta}\,,\nonumber \\ \cos\Phi & =q\cos\varphi,\quad\sin\Phi=\sqrt{1-q^{2}\cos^{2}\varphi}\,. \end{eqnarray} This gives \begin{eqnarray}
& \pm\cos(\Theta\pm\Phi)\nonumber \\ = & \pm\left(pq\cos\vartheta\cos\varphi\mp\sqrt{1-p^{2}\cos^{2}\vartheta}\sqrt{1-q^{2}\cos^{2}\varphi}\right)\nonumber \\ \leq & \pm\left(pq\cos\vartheta\cos\varphi\mp pq\sqrt{1-\cos^{2}\vartheta}\sqrt{1-\cos^{2}\varphi}\right)\nonumber \\
= & \pm pq(\cos\vartheta\cos\varphi\mp|\sin\vartheta\sin\varphi|)\nonumber \\ \leq & \pm pq\cos(\vartheta\pm\varphi).\label{eq:cossumrel} \end{eqnarray}
Inequalities (\ref{eq:lemmaineq1}) can now be obtained from inequality (\ref{eq:cossumrel}) as follows: \begin{eqnarray*} \pm\sqrt{1-pq\cos(\vartheta\pm\varphi)} & \leq\pm\sqrt{1-\cos(\Theta\pm\Phi)}\\
& \leq\pm\left(\sqrt{1-\cos\Theta}\pm\sqrt{1-\cos\Phi}\right)\\
& =\pm\left(\sqrt{1-p\cos{\vartheta}}\pm\sqrt{1-q\cos{\varphi}}\right), \end{eqnarray*} where the second inequality follows from the already verified inequality of (\ref{eq:lemmaineq1}) in the case of $p=q=1$. \end{proof}
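Lemma~\ref{Lemma:trigonometric} is also easy to stress-test numerically over random draws of $\vartheta,\varphi\in[0,2\pi]$ and $p,q\in[0,1]$; a minimal sketch (an independent sanity check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    th, ph = rng.uniform(0, 2 * np.pi, 2)
    p, q = rng.uniform(0, 1, 2)
    lhs_plus = np.sqrt(1 - p * q * np.cos(th + ph))
    lhs_minus = np.sqrt(1 - p * q * np.cos(th - ph))
    a = np.sqrt(1 - p * np.cos(th))
    b = np.sqrt(1 - q * np.cos(ph))
    assert lhs_plus <= a + b + 1e-12     # first inequality of the lemma
    assert lhs_minus >= a - b - 1e-12    # second inequality of the lemma
```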
\begin{theorem} $C[\mathcal{F}_{2}(\rho,\sigma)]$ is a metric for the space of density matrices \end{theorem} \begin{proof} Using equations~\eref{eq:param} and \eref{eq:f2-geom}, the triangle inequality of \eref{eq:triangineqrhosigmatau} can be rewritten as \begin{equation} \sqrt{1-\frac{\vec{r}\cdot\vec{s}}{\mu\left(\vec{r},\vec{s}\right)}}\leq\sqrt{1-\frac{\vec{r}\cdot\vec{t}}{\mu\left(\vec{r},\vec{t}\right)}}+\sqrt{1-\frac{\vec{t}\cdot\vec{s}}{\mu\left(\vec{t},\vec{s}\right)}},\label{Ineq:triangle} \end{equation} where $\mu\left(\vec{r},\vec{s}\right)\equiv\max(r^{2},s^{2})$ and the entries of the vectors $\vec{r},\vec{s},\vec{t}\in\mathbb{R}^{d^{2}}$ are the expansion coefficients of $\rho,\sigma,$ and $\tau$, respectively, in some basis of Hermitian matrices.
Without loss of generality, we henceforth assume that $\rho$ and $\tau$ are, respectively, the density matrices of maximal and minimal purities (i.e., $r\geq s\geq t$). Hence, \eref{eq:triangineqrhosigmatau} unfolds into three inequalities to be proven, which correspond to $C[\mathcal{F}_{2}(\rho,\sigma)]\leq C[\mathcal{F}_{2}(\rho,\tau)]+C[\mathcal{F}_{2}(\tau,\sigma)]$, $C[\mathcal{F}_{2}(\sigma,\tau)]\leq C[\mathcal{F}_{2}(\rho,\sigma)]+C[\mathcal{F}_{2}(\rho,\tau)]$ and $C[\mathcal{F}_{2}(\rho,\tau)]\leq C[\mathcal{F}_{2}(\rho,\sigma)]+C[\mathcal{F}_{2}(\tau,\sigma)]$, respectively. In terms of the expansion coefficient vectors $\vec{r}$, $\vec{s}$, and $\vec{t}$, these can be written as:
\begin{eqnarray} \sqrt{1-\frac{s}{r}\cos\theta_{rs}} & \leq\sqrt{1-\frac{t}{r}\cos\theta_{rt}}+\sqrt{1-\frac{t}{s}\cos\theta_{ts}}\,,\nonumber \\ \sqrt{1-\frac{t}{s}\cos\theta_{ts}} & \leq\sqrt{1-\frac{t}{r}\cos\theta_{rt}}+\sqrt{1-\frac{s}{r}\cos\theta_{rs}}\,,\nonumber \\ \sqrt{1-\frac{t}{r}\cos\theta_{rt}} & \leq\sqrt{1-\frac{s}{r}\cos\theta_{rs}}+\sqrt{1-\frac{t}{s}\cos\theta_{ts}}\,,\nonumber \\ \,\label{eq:ineqtoproveangles3} \end{eqnarray} where the angles $\theta_{rs}$, $\theta_{rt}$, and $\theta_{ts}\in[0,\pi]$ have been {\em defined} (in accordance with the Cauchy-Schwarz inequality) as follows: \begin{equation} \cos\theta_{rs}=\frac{\vec{r}\cdot\vec{s}}{rs}\,,\quad\cos\theta_{rt}=\frac{\vec{r}\cdot\vec{t}}{rt}\,,\quad\cos\theta_{ts}=\frac{\vec{t}\cdot\vec{s}}{ts}\,. \end{equation}
Besides, since \begin{equation} \cos(\theta_{rs}+\theta_{ts})\leq\cos\theta_{rt}\leq\cos(\theta_{rs}-\theta_{ts})\,,\label{eq:costhetartineq} \end{equation} (a demonstration of which will be given at the end of this proof), the replacement of $\cos\theta_{rt}$ with either $\cos(\theta_{rs}-\theta_{ts})$ or $\cos(\theta_{rs}+\theta_{ts})$ in inequality (\ref{eq:ineqtoproveangles3}), yields the following \emph{stronger} set of inequalities to be proven, where we define $\Delta^{\pm}=\theta_{rs}\pm\theta_{ts}$:
\begin{eqnarray} \sqrt{1-\frac{t}{r}\cos(\Delta^{-})} & \geq\sqrt{1-\frac{s}{r}\cos\theta_{rs}}-\sqrt{1-\frac{t}{s}\cos\theta_{ts}}\nonumber \\ \sqrt{1-\frac{t}{r}\cos(\Delta^{-})} & \geq\sqrt{1-\frac{t}{s}\cos\theta_{ts}}-\sqrt{1-\frac{s}{r}\cos\theta_{rs}}\nonumber \\ \sqrt{1-\frac{t}{r}\cos(\Delta^{+})} & \leq\sqrt{1-\frac{s}{r}\cos\theta_{rs}}+\sqrt{1-\frac{t}{s}\cos\theta_{ts}}\,\nonumber \\ \,\label{eq:Ineq:stronger} \end{eqnarray}
Note that $\frac{t}{r}=\frac{s}{r}\cdot\frac{t}{s}$ and $\frac{t}{r},\frac{s}{r},\frac{t}{s}\in(0,1]$ by our assumption that $r\ge s\ge t>0$. Thus, through appropriate identifications of these ratios with $p,q$ and applying Lemma~\ref{Lemma:trigonometric}, inequalities~(\ref{eq:Ineq:stronger}), and hence the desired triangle inequalities~(\ref{Ineq:triangle}) follow.
We complete this proof with a demonstration of inequalities (\ref{eq:costhetartineq}). Consider, first, the following pair of vectors, each of which is orthogonal to $\vec{s}$ \begin{equation} \vec{u}=\vec{r}-\frac{r}{s}\cos\theta_{rs}\vec{s}\quad\mbox{and}\quad\vec{v}=\vec{t}-\frac{t}{s}\cos\theta_{ts}\vec{s}. \end{equation} Using these and the orthogonality of $\vec{u}$, $\vec{v}$ to $\vec{s}$, we may expand the scalar product between $\vec{r}$ and $\vec{t}$ as \begin{equation} \vec{r}\cdot\vec{t}=rt\cos\theta_{rt}=\vec{u}\cdot\vec{v}+rt\cos\theta_{rs}\cos\theta_{ts}. \end{equation} Then, using $\vec{u}\cdot\vec{v}=uv\cos\theta_{uv}$ with $u=r\sin\theta_{rs}$, $v=t\sin\theta_{ts}$, and $\theta_{uv}\in[0,\pi]$, we arrive at \begin{equation} \cos\theta_{rt}=\sin\theta_{rs}\sin\theta_{ts}\cos\theta_{uv}+\cos\theta_{rs}\cos\theta_{ts}. \end{equation} From here, the trivial inequalities $\cos\theta_{uv}\geq-1$ and $\cos\theta_{uv}\leq1$ imply, respectively, the lower and upper bounds on $\cos\theta_{rt}$ in (\ref{eq:costhetartineq}). \end{proof}
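The triangle inequality just established can be stress-tested numerically on random density matrices; drawing the states from the Ginibre ensemble below is an implementation choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_rho(d):  # random density matrix from a Ginibre matrix
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

def C2(r, s):     # C[F_2] = sqrt(1 - F_2)
    f2 = np.trace(r @ s).real / max(np.trace(r @ r).real, np.trace(s @ s).real)
    return np.sqrt(max(1 - f2, 0.0))

for _ in range(2000):
    r, s, t = (rand_rho(3) for _ in range(3))
    assert C2(r, s) <= C2(r, t) + C2(t, s) + 1e-10   # triangle inequality
```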
\subsection*{$\mathcal{F}_{\mbox{\tiny C}}$ metric properties}
\begin{theorem} $C[\mathcal{F}_{\mbox{\tiny C}}(\rho,\sigma)]$ is a metric for the space of density matrices of fixed Hilbert space dimension. \end{theorem}
\begin{proof} For density matrices $\rho_{i}$ acting on $d$-dimensional complex Hilbert space $\mathbb{C}^{d}$, note that $r=\frac{1}{d-1}$ is a constant. Thus, for any $c_{i}\in\mathbb{R}$ such that $\sum_{i}c_{i}=0$, we have \begin{eqnarray}
& \sum_{i,j}c_{i}c_{j}\left[1-\mathcal{F}_{\mbox{\tiny C}}(\rho_{i},\rho_{j})\right]\nonumber \\ = & -\sum_{i,j}c_{i}c_{j}\left[\frac{1-r}{2}+\frac{1+r}{2}\mathcal{F}_{\mbox{\tiny N}}(\rho_{i},\rho_{j})\right]\nonumber \\ = & -\frac{1+r}{2}\sum_{i,j}c_{i}c_{j}\,\mathcal{F}_{\mbox{\tiny N}}(\rho_{i},\rho_{j}). \end{eqnarray} {From the proof of} the metric property of $C[\mathcal{F}_{\mbox{\tiny N}}(\rho,\sigma)]$ {given in}~\cite{mendonca2008}, we know that the last expression above must be non-positive. Hence, $\sqrt{1-\mathcal{F}_{\mbox{\tiny C}}(\rho,\sigma)}$ is a metric for the space of density matrices of fixed dimension. \end{proof}
\subsection*{$\mathcal{F}_{\mbox{\tiny GM}}$ metric properties}
\begin{theorem} $C[\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)]$ is a metric for the space of density matrices. \end{theorem}
\begin{proof} For any $c_{i}\in\mathbb{R}$ such that $\sum_{i}c_{i}=0$, note that \begin{eqnarray}
& \sum_{i,j}c_{i}c_{j}\left[1-\mathcal{F}_{\mbox{\tiny GM}}(\rho_{i},\rho_{j})\right]=-\sum_{i,j}c_{i}c_{j}\frac{\tr(\rho_{i}\,\rho_{j})}{\sqrt{\tr(\rho_{i}^{2})\tr(\rho_{j}^{2})}}\nonumber \\
& =-\tr\left[\left(\sum_{i}\frac{c_{i}\rho_{i}}{\sqrt{\tr(\rho_{i}^{2})}}\right)^{2}\right]\le0. \end{eqnarray} Hence, $C[\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)]=\sqrt{1-\mathcal{F}_{\mbox{\tiny GM}}(\rho,\sigma)}$ is a metric for the space of density matrices.
\end{proof}
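The computation in this proof amounts to the identity $\sum_{i,j}c_{i}c_{j}[1-\mathcal{F}_{\mbox{\tiny GM}}(\rho_{i},\rho_{j})]=-\tr(S^{2})$ with $S=\sum_{i}c_{i}\rho_{i}/\sqrt{\tr(\rho_{i}^{2})}$, which can be checked numerically on random inputs:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_rho(d):  # random density matrix from a Ginibre matrix
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

n, d = 6, 3
rhos = [rand_rho(d) for _ in range(n)]
c = rng.normal(size=n)
c -= c.mean()                      # enforce sum_i c_i = 0

fgm = lambda a, b: np.trace(a @ b).real / np.sqrt((np.trace(a @ a) * np.trace(b @ b)).real)
quad = sum(c[i] * c[j] * (1 - fgm(rhos[i], rhos[j]))
           for i in range(n) for j in range(n))

S = sum(ci * r / np.sqrt(np.trace(r @ r).real) for ci, r in zip(c, rhos))
assert quad <= 1e-10                              # Schoenberg condition holds
assert abs(quad + np.trace(S @ S).real) < 1e-10   # quad = -tr(S^2)
```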
\subsection*{$\mathcal{F}_{\mbox{\tiny A}}$ metric properties}
The metric properties of $B[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ were first mentioned in~\cite{Raggio1984}, while those of $A[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ and $C[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ were numerically investigated in~\cite{ma2008geometric}, suggesting that they may indeed be metrics for the space of density matrices. In Section C.3 of~\cite{mendonca2008}, it was briefly mentioned that both $B[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ and $C[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ can be proved to be metrics using Schoenberg's theorem. Here, we shall provide an explicit proof of these facts. An alternative proof for $B[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ can also be found in~\cite{ma2011}.
\begin{proof} For any $c_{i}\in\mathbb{R}$ such that $\sum_{i}c_{i}=0$, note that \begin{eqnarray}
& \sum_{i,j}c_{i}c_{j}\left[1-\sqrt{\mathcal{F}_{\mbox{\tiny A}}(\rho_{i},\rho_{j})}\right]=-\sum_{i,j}c_{i}c_{j}\tr\left(\sqrt{\rho_{i}}\sqrt{\rho_{j}}\right)\nonumber \\
& =-\tr\left[\left|\sum_{i}c_{i}\sqrt{\rho_{i}}\right|^{2}\right]\le0. \end{eqnarray} Likewise, we can show that: \begin{eqnarray}
& \sum_{i,j}c_{i}c_{j}\left[1-\mathcal{F}_{\mbox{\tiny A}}(\rho_{i},\rho_{j})\right]=-\sum_{i,j}c_{i}c_{j}\left[\tr\left(\sqrt{\rho_{i}}\sqrt{\rho_{j}}\right)\right]^{2}\nonumber \\ = & -\sum_{i,j}c_{i}c_{j}\tr\left[\left(\sqrt{\rho_{i}}\otimes\sqrt{\rho_{i}}\right)\left(\sqrt{\rho_{j}}\otimes\sqrt{\rho_{j}}\right)\right]\nonumber \\
= & -\tr\left[\left|\sum_{i}c_{i}\sqrt{\rho_{i}}\otimes\sqrt{\rho_{i}}\right|^{2}\right]\le0. \end{eqnarray} This concludes our proof for the metric properties of $B[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$ and $C[\mathcal{F}_{\mbox{\tiny A}}(\rho,\sigma)]$. \end{proof}
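Both quadratic forms appearing in this proof can likewise be verified numerically; computing the positive-semidefinite square roots by eigendecomposition is an implementation choice:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_rho(d):  # random density matrix from a Ginibre matrix
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m).real

def msqrt(m):     # PSD square root via eigendecomposition
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

n, d = 6, 3
roots = [msqrt(rand_rho(d)) for _ in range(n)]
c = rng.normal(size=n)
c -= c.mean()                      # enforce sum_i c_i = 0

# quadratic form for 1 - sqrt(F_A); equals -tr(S^2) with S = sum_i c_i sqrt(rho_i)
S = sum(ci * r for ci, r in zip(c, roots))
q_B = sum(c[i] * c[j] * (1 - np.trace(roots[i] @ roots[j]).real)
          for i in range(n) for j in range(n))
assert q_B <= 1e-10 and abs(q_B + np.trace(S @ S).real) < 1e-10

# quadratic form for 1 - F_A; the tensor-product trick makes it -tr(T^2) <= 0
q_C = sum(c[i] * c[j] * (1 - np.trace(roots[i] @ roots[j]).real ** 2)
          for i in range(n) for j in range(n))
assert q_C <= 1e-10
```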
\section*{References}
}
\end{document} |
\begin{document}
\title[Finite abelian group actions on the Razak-Jacelon algebra]{Approximate representability of finite abelian group actions on the Razak-Jacelon algebra} \author{Norio Nawata} \address{Department of Pure and Applied Mathematics, Graduate School of Information Science and Technology, Osaka University, Yamadaoka 1-5, Suita, Osaka 565-0871, Japan} \email{[email protected]} \keywords{Razak-Jacelon algebra; Approximate representability; Rohlin property; Kirchberg's central sequence C$^*$-algebra.} \subjclass[2020]{Primary 46L55, Secondary 46L35; 46L40} \thanks{This work was supported by JSPS KAKENHI Grant Number 20K03630}
\begin{abstract} Let $A$ be a simple separable nuclear monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. In this paper, we show that $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ is approximately representable if and only if the characteristic invariant of $\tilde{\alpha}$ is trivial, where $\mathcal{W}$ is the Razak-Jacelon algebra and $\tilde{\alpha}$ is the induced action on the injective II$_1$ factor $\pi_{\tau_{A}}(A)^{''}$. As an application of this result, we classify such actions up to conjugacy and cocycle conjugacy. We also construct the model actions. \end{abstract} \maketitle
\section{Introduction}
In \cite{Jones}, Jones gave a complete classification of finite group actions on the injective II$_1$ factor up to conjugacy. This can be regarded as a generalization of Connes' classification \cite{C3} of periodic automorphisms of the injective II$_1$ factor. In this paper, we study a C$^*$-analog of these results.
There exist some difficulties for the classification of (amenable) group actions on ``classifiable'' C$^*$-algebras because of $K$-theoretical obstructions. We refer the reader to \cite{I} for details. In spite of these difficulties, Gabe and Szab\'o classified outer actions of countable discrete amenable groups on Kirchberg algebras up to cocycle conjugacy by equivariant $KK$-theory in \cite{GS}. This classification can be regarded as a C$^*$-analog in Kirchberg algebras of Ocneanu's classification theorem \cite{Oc}. For stably finite classifiable C$^*$-algebras, such a classification is an interesting open problem.
The Razak-Jacelon algebra $\mathcal{W}$ (\cite{J} and \cite{Raz}) is the simple separable nuclear monotracial $\mathcal{Z}$-stable C$^*$-algebra which is $KK$-equivalent to $\{0\}$. We regard $\mathcal{W}$ as a monotracial analog of the Cuntz algebra $\mathcal{O}_2$. Indeed, if $A$ is a simple separable nuclear monotracial C$^*$-algebra, then $A\otimes\mathcal{W}$ is isomorphic to $\mathcal{W}$ by classification results in \cite{CE} and \cite{EGLN} (see also \cite{Na4}). Let $\alpha$ be an action on a simple separable nuclear monotracial C$^*$-algebra $A$. Then the action $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ is an action on $\mathcal{W}$. We call such a tensor product type action a \textit{$\mathcal{W}$-type action}. Note that there exist no $K$-theoretical obstructions for $\mathcal{W}$-type actions. Therefore we can recognize difficulties due to stable finiteness by studying $\mathcal{W}$-type actions. In \cite{Na5}, the author showed that if $\alpha$ is a strongly outer action of a countable discrete amenable group, then $\mathcal{W}$-type actions are unique up to cocycle conjugacy. (Note that an action $\alpha$ of a discrete countable group on a simple separable monotracial C$^*$-algebra $A$ is \textit{strongly outer} if the action $\tilde{\alpha}$ induced by $\alpha$ on $\pi_{\tau_A}(A)^{''}$ is outer, where $\pi_{\tau_A}$ is the Gelfand-Naimark-Segal (GNS) representation associated with the unique tracial state $\tau_A$ on $A$.) This result can be regarded as a monotracial analog of Szab\'o's equivariant Kirchberg-Phillips type absorption theorem for $\mathcal{O}_2$ \cite{Sza4}. In this paper, we study $\mathcal{W}$-type outer actions of finite abelian groups, which include ``weakly inner'' (or non-strongly outer) actions.
One of the main results in this paper is the characterization of approximate representability of $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ by using the characteristic invariant of $\tilde{\alpha}$. Note that approximate representability is the dual notion of the Rohlin property \cite{I1} (see also \cite{GHS}, \cite{GSan1}, \cite{Na0} and \cite{San}). Also, we can classify approximately representable actions by using classification results of C$^*$-algebras (see \cite{GSan2}, \cite{I1} and \cite{Na0}). As an application of this result, we can classify such actions up to conjugacy and cocycle conjugacy. For an action $\alpha$ on a simple separable monotracial C$^*$-algebra $A$ of a countable discrete group $\Gamma$, put
$N(\tilde{\alpha}):=\{g\in \Gamma \; | \; \tilde{\alpha}_g=\mathrm{Ad}(u)\; \text{for some}\; u\in \pi_{\tau_A}(A)^{''}\}$ and let $i(\tilde{\alpha})$ be the inner invariant of $\tilde{\alpha}$ defined in \cite{Jones}. We show the following classification result in this paper.
\begin{mainthm} (Corollary \ref{main:cor}) \ \\ Let $A$ and $B$ be simple separable nuclear monotracial C$^*$-algebras, and let $\alpha$ and $\beta$ be outer actions of a finite abelian group $\Gamma$ on $A$ and $B$, respectively. Assume that the characteristic invariants of $\tilde{\alpha}$ and $\tilde{\beta}$ are trivial. Then \ \\ (i) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are cocycle conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$; \ \\ (ii) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$ and $i(\tilde{\alpha})=i(\tilde{\beta})$. \end{mainthm}
To the author's best knowledge, this classification is the first abstract classification result for ``weakly inner'' (or non-strongly outer) actions on stably finite C$^*$-algebras without inductive limit type structures. Also, we construct the model actions of the actions in the theorem above. This construction might be of independent interest.
\section{Preliminaries}\label{sec:pre}
\subsection{Group actions}
Let $\alpha$ and $\beta$ be actions of a discrete group $\Gamma$ on C$^*$-algebras $A$ and $B$, respectively. We say that $\alpha$ is \textit{conjugate} to $\beta$ if there exists an isomorphism $\theta$ from $A$ onto $B$ such that $\theta\circ \alpha_g= \beta_g\circ \theta$ for any $g\in\Gamma$. An \textit{$\alpha$-cocycle} is a map $u$ from $\Gamma$ to the unitary group of the multiplier algebra $M(A)$ of $A$ such that $u_{gh}=u_g\alpha_g(u_h)$. Note that we denote the induced action on $M(A)$ by the same symbol $\alpha$ for simplicity. We say that $\alpha$ is \textit{cocycle conjugate} to $\beta$ if there exist an isomorphism $\theta$ from $A$ to $B$ and a $\beta$-cocycle $u$ such that $\theta\circ \alpha_g=\mathrm{Ad}(u_g)\circ \beta_g \circ \theta$ for any $g\in \Gamma$. Let $$
N(\alpha):= \{g\in\Gamma \; |\; \alpha_g=\mathrm{Ad}(u)\; \text{for some}\; u\in M(A)\}. $$ We say that $\alpha$ is \textit{outer} if $N(\alpha)=\{\iota\}$ where $\iota$ is the identity element in $\Gamma$.
We denote by $A^{\alpha}$ and $A\rtimes_{\alpha}\Gamma$ the fixed point subalgebra and the reduced crossed product C$^*$-algebra, respectively. Let $E_{\alpha}$ denote the canonical conditional expectation from $A\rtimes_{\alpha}\Gamma$ onto $A$. If $A$ is simple and $\alpha$ is outer, then $A\rtimes_{\alpha}\Gamma$ is simple by \cite{K}. Assume that $\Gamma$ is a finite abelian group. Let $$ e_{\alpha}:=
\frac{1}{|\Gamma|}\sum_{g\in\Gamma}\lambda_g\in M(A\rtimes_{\alpha}\Gamma) $$ where $\lambda_g$ is the implementing unitary of $\alpha_g$ in $M(A\rtimes_{\alpha}\Gamma)$
and $|\cdot|$ denotes cardinality. Then $e_{\alpha}$ is a projection and $e_{\alpha}(A\rtimes_{\alpha}\Gamma)e_{\alpha}$ is isomorphic to $A^{\alpha}$. We denote by $\hat{\alpha}$ the dual action of $\alpha$, that is, $\hat{\alpha}_{\eta}(\sum_{g\in \Gamma}a_g\lambda_g)= \sum_{g\in \Gamma}\eta(g)a_g\lambda_g$ for any $\sum_{g\in \Gamma}a_g\lambda_g\in A\rtimes_{\alpha}\Gamma$ and $\eta\in \hat{\Gamma}$. Note that $\hat{\Gamma}$ is isomorphic to $\Gamma$ since we assume that $\Gamma$ is a finite abelian group.
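That $e_{\alpha}$ is a projection follows from averaging the unitary representation $g\mapsto\lambda_{g}$, and the dual action merely twists the average by a character. A toy numerical check with the left regular representation of $\mathbb{Z}/3\mathbb{Z}$ (this only tests the group-algebra identities, not the crossed product itself):

```python
import numpy as np

n = 3                                   # Gamma = Z/3Z
shift = np.roll(np.eye(n), 1, axis=0)   # regular representation of the generator
lam = [np.linalg.matrix_power(shift, k) for k in range(n)]  # lambda_g, g in Gamma

e = sum(lam) / n                        # e_alpha = (1/|Gamma|) sum_g lambda_g
assert np.allclose(e @ e, e)            # idempotent
assert np.allclose(e, e.conj().T)       # self-adjoint

omega = np.exp(2j * np.pi / n)          # a character eta(k) = omega^k
e_eta = sum(omega ** k * lam[k] for k in range(n)) / n
assert np.allclose(e_eta @ e_eta, e_eta)  # the dual action sends e_alpha to
                                          # another projection
```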
Let $T_1(A)$ denote the tracial state space of $A$. Every tracial state on $A$ can be uniquely extended to a tracial state on $M(A)$. We denote it by the same symbol for simplicity. If $\varphi$ is a nondegenerate homomorphism from $A$ to $B$, then $\varphi$ induces an affine continuous map $T(\varphi)$ from $T_1(B)$ to $T_1(A)$ by $T(\varphi)(\tau)= \tau\circ \varphi$ for any $\tau\in T_1(B)$. Hence every action $\alpha$ on $A$ induces an action $T(\alpha)$ on $T_1(A)$. We denote by $T_1(A)^{\alpha}$ the fixed point set of this induced action. Straightforward arguments show the following proposition.
\begin{pro}\label{pro:conjugacy-trace-spaces} Let $\alpha$ and $\beta$ be actions of a finite abelian group $\Gamma$ on C$^*$-algebras $A$ and $B$, respectively. Then \ \\ (i) if $\alpha$ and $\beta$ are cocycle conjugate, then there exists an affine homeomorphism $F$ from $T_1(B\rtimes_{\beta}\Gamma)$ onto $T_1(A\rtimes_{\alpha}\Gamma)$ such that $F\circ T(\hat{\beta}_{\eta})=T(\hat{\alpha}_{\eta})\circ F$ for any $\eta\in \hat{\Gamma}$; \ \\ (ii) if $\alpha$ and $\beta$ are conjugate, then there exists an affine homeomorphism $F$ from $T_1(B\rtimes_{\beta}\Gamma)$ onto $T_1(A\rtimes_{\alpha}\Gamma)$ such that $F(\tau)(e_{\alpha})=\tau (e_\beta)$ for any $\tau \in T_1(B\rtimes_{\beta}\Gamma)$ and $F\circ T(\hat{\beta}_{\eta})=T(\hat{\alpha}_{\eta})\circ F$ for any $\eta\in \hat{\Gamma}$. \end{pro}
Let $\tau$ be an $\alpha$-invariant tracial state on $A$, that is, $\tau\in T_1(A)^{\alpha}$. Then $\alpha$ induces an action $\tilde{\alpha}$ on $\pi_{\tau}(A)^{''}$ where $\pi_{\tau}$ is the GNS representation associated with $\tau$.
\subsection{Finite abelian group actions on the injective II$_1$ factor}
We shall recall some results in \cite{Jones}. We refer the reader to \cite{Jones}, \cite{Jones2} and \cite{Oc} for details. Let $M$ be a II$_1$ factor, and let $\delta$ be an action of a finite abelian group $\Gamma$ on $M$. Although one can work in a more general setting, we assume that $\Gamma$ is a finite abelian group for simplicity. Also, note that the von Neumann algebraic crossed product of $(M, \Gamma, \delta)$ is isomorphic to (the reduced crossed product C$^*$-algebra) $M\rtimes_{\delta}\Gamma$ by this assumption. By definition of $N(\delta)$, there exists a map $v$ from $N(\delta)$ to the unitary group of $M$ such that $\delta_h=\mathrm{Ad}(v_h)$ for any $h\in N(\delta)$. For any $g\in \Gamma$ and $h\in N(\delta)$, there exists a complex number
$\lambda_{\delta}(g, h)$ with $|\lambda_{\delta} (g, h)|=1$ such that $\delta_g(v_h)=\lambda_{\delta}(g,h)v_h$. It is easy to see that $\lambda_{\delta}(g, h)$ does not depend on the choice of $v$. We say that \textit{the characteristic invariant of $\delta$ is trivial} if $\lambda_{\delta}(g,h)=1$ for any $g\in \Gamma$ and $h\in N(\delta)$. We refer the reader to \cite[Section 1.2]{Jones} for the precise definition of the characteristic invariant. Note that we may assume that $v$ is a unitary representation since $N(\delta)$ is a finite abelian group. Indeed, if $N(\delta)$ is a cyclic group generated by $g$ of order $n$, then there exists a
complex number $\gamma$ with $|\gamma| =1$ such that $v_g^n=\gamma 1$. Choose an $n$-th root $\gamma^{\prime}$ of $\gamma$ and define a map $v^{\prime}$ from $N(\delta)$ to the unitary group of $M$ by $v^{\prime}_{g^k}:= \gamma^{\prime k}v_g^k$ for any $1\leq k\leq n$. Then $v^{\prime}$ is a unitary representation such that $\delta_h=\mathrm{Ad}(v_h^{\prime})$ for any $h\in N(\delta)$. Since every finite abelian group is a finite direct sum of cyclic groups of finite order, if $N(\delta)$ is a finite abelian group, then there exists such a unitary representation.
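A finite-dimensional toy illustration of the invariant $\lambda_{\delta}$ (not the II$_1$-factor setting): for $\Gamma=\mathbb{Z}/2\times\mathbb{Z}/2$ acting on $M_{2}(\mathbb{C})$ by $\delta_{(a,b)}=\mathrm{Ad}(X^{a}Z^{b})$ with Pauli matrices $X,Z$, every $\delta_{g}$ is inner, so $N(\delta)=\Gamma$, and the relation $XZ=-ZX$ produces phases $\lambda_{\delta}(g,h)=\pm1$, i.e., a nontrivial characteristic invariant:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def u(g):              # implementing unitary v_g = X^a Z^b for g = (a, b)
    a, b = g
    return np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)

def char_phase(g, h):  # lambda_delta(g, h): delta_g(v_h) = lambda * v_h
    lhs = u(g) @ u(h) @ u(g).conj().T
    # v_h has a nonzero entry; compare it to read off the scalar
    idx = np.unravel_index(np.argmax(np.abs(u(h))), (2, 2))
    return lhs[idx] / u(h)[idx]

G = [(a, b) for a in (0, 1) for b in (0, 1)]
phases = {(g, h): complex(round(char_phase(g, h).real)) for g in G for h in G}
assert phases[((1, 0), (0, 1))] == -1        # delta_(1,0)(Z) = X Z X = -Z
assert any(v != 1 for v in phases.values())  # characteristic invariant nontrivial
```

In this toy example the action is far from outer, so it only illustrates the defining relation $\delta_{g}(v_{h})=\lambda_{\delta}(g,h)v_{h}$, not the factor-theoretic context.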
\begin{pro}\label{pro:non-trivial-characteristic} Let $M$ be a II$_1$ factor, and let $\delta$ be an action of a finite abelian group $\Gamma$ on $M$. If the characteristic invariant of $\delta$ is not trivial, then the dual action $\hat{\delta}$ is not outer. \end{pro} \begin{proof} Since the characteristic invariant of $\delta$ is not trivial, there exist $g_0\in\Gamma$ and $h_0\in N(\delta)$ such that $\lambda_{\delta}(g_0, h_0)\neq 1$. Define a map $\eta_0$ from $\Gamma$ to $\mathbb{C}$ by $\eta_0(g):=\lambda_{\delta} (g, h_0)$ for any $g\in\Gamma$. Then $\eta_0$ is a nontrivial character on $\Gamma$. Let $v_{h_0}$ be a unitary element in $M$ such that $\delta_{h_0}=\mathrm{Ad}(v_{h_0})$. Then we have \begin{align*} \mathrm{Ad}(\lambda_{h_0}v_{h_0}^*)\left(\sum_{g\in \Gamma}a_g\lambda_g\right) &=\sum_{g\in\Gamma}\lambda_{h_0}v_{h_0}^*a_g \lambda_gv_{h_0}\lambda_{h_0}^* \\ &= \sum_{g\in\Gamma}\lambda_{h_0}v_{h_0}^*a_g \delta_{g}( v_{h_0})\lambda_g\lambda_{h_0}^* \\ &=\sum_{g\in\Gamma}\lambda_{\delta}(g,h_0) \lambda_{h_0}v_{h_0}^*a_gv_{h_0}\lambda_{h_0}^* \lambda_{g} \\ &=\sum_{g\in\Gamma}\lambda_{\delta}(g,h_0) \lambda_{h_0}\delta_{h^{-1}_0}(a_g)\lambda_{h_0}^* \lambda_{g} \\ &= \sum_{g\in\Gamma}\eta_0(g)a_g\lambda_{g} \\ &= \hat{\delta}_{\eta_0}\left( \sum_{g\in \Gamma}a_g\lambda_g\right) \end{align*} for any $\sum_{g\in \Gamma}a_g\lambda_g \in M\rtimes_{\delta}\Gamma$. Therefore $\hat{\delta}$ is not outer. \end{proof}
In the rest of this section, we assume that the characteristic invariant of $\delta$ is trivial and that $v$ is a unitary representation of $N(\delta)$ on $M$ such that $\delta_h=\mathrm{Ad}(v_h)$ for any $h\in N(\delta)$. Define a homomorphism $\Phi_v$ from the group algebra $\mathbb{C}N(\delta)$ to $M$ by $\Phi_v(\sum_{h\in N(\delta)}c_h h)=\sum_{h\in N(\delta)}c_hv_h$. Let $P_{N(\delta)}$ be the set of minimal projections in $\mathbb{C}N(\delta)$, and let $M(P_{N(\delta)})$ be the set of probability measures on $P_{N(\delta)}$.
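Concretely, the minimal projections of $\mathbb{C}N(\delta)$ are indexed by characters: for $\eta\in\hat{N(\delta)}$, put $p_{\eta}:=\frac{1}{|N(\delta)|}\sum_{h\in N(\delta)}\eta(h)h$. A short computation (a sketch, using only the multiplicativity of $\eta$) shows that each $p_{\eta}$ is a self-adjoint idempotent:
$$
p_{\eta}^2=\frac{1}{|N(\delta)|^2}\sum_{l\in N(\delta)}\Big(\sum_{h\in N(\delta)}\eta(h)\eta(h^{-1}l)\Big)l
=\frac{1}{|N(\delta)|}\sum_{l\in N(\delta)}\eta(l)\,l=p_{\eta},
$$
and $p_{\eta}^{\,*}=p_{\eta}$ since $\overline{\eta(h)}=\eta(h^{-1})$. The $p_{\eta}$, $\eta\in\hat{N(\delta)}$, are $|N(\delta)|$ pairwise orthogonal nonzero projections summing to the identity of $\mathbb{C}N(\delta)$; since $\dim \mathbb{C}N(\delta)=|N(\delta)|$, each is minimal.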
Note that we have $P_{N(\delta)}=\left\{\frac{1}{|N(\delta)|}\sum_{h\in N(\delta)} \eta (h)h\; |\; \eta\in \hat{N(\delta)}\right\}$. Define a probability measure $m_{v}(\delta)$ on $P_{N(\delta)}$ by $m_v(\delta)(p):=\tau_{M}(\Phi_v(p))$ for any $p\in P_{N(\delta)}$, where $\tau_{M}$ is the unique tracial state on $M$. Note that $m_v(\delta)$ depends on the choice of $v$. For any character $\eta$ of $N(\delta)$, define an automorphism $\partial(\eta)$ of $\mathbb{C}N(\delta)$ by $\partial(\eta) (\sum_{h\in N(\delta)}c_h h)= \sum_{h\in N(\delta)}\eta(h)c_h h$. Define an equivalence relation $\sim$ on $M(P_{N(\delta)})$ by $m\sim m^{\prime}$ if there exists a character $\eta$ of $N(\delta)$ such that $m = m^{\prime }\circ \partial(\eta)$. Put $i(\delta):= [m_v(\delta)]\in M(P_{N(\delta)})/\sim$. Then $i(\delta)$ does not depend on the choice of $v$, and it is a conjugacy invariant. The following theorem is part of Ocneanu's classification theorem and Jones' classification theorem.
\begin{thm}\label{thm:jones} (Cf. \cite[Theorem 2.6]{Oc} and \cite[Theorem 1.4.8]{Jones}) \ \\ Let $\delta$ and $\delta^{\prime}$ be actions of a finite abelian group $\Gamma$ on the injective II$_1$ factor $M$. Assume that the characteristic invariants of $\delta$ and $\delta^{\prime}$ are trivial. Then \ \\ (i) $\delta$ and $\delta^{\prime}$ are cocycle conjugate if and only if $N(\delta)=N(\delta^{\prime})$; \ \\ (ii) $\delta$ and $\delta^{\prime}$ are conjugate if and only if $N(\delta)=N(\delta^{\prime})$ and $i(\delta)=i(\delta^{\prime})$. \end{thm}
Define a map $\Pi_v$ from $\mathbb{C}N(\delta)$ to $M\rtimes_{\delta}\Gamma$ by $\Pi_v(\sum_{h\in N(\delta)}c_h h)=\sum_{h\in N(\delta)}c_h v_h\lambda_h^*$. Then $\Pi_v$ is an isomorphism from $\mathbb{C}N(\delta)$ onto the center $Z(M\rtimes_{\delta}\Gamma)$ of $M\rtimes_{\delta}\Gamma$ by \cite[Corollary 2.2.2]{Jones}.
This implies that $T_1(M\rtimes_{\delta}\Gamma)$ is an $|N(\delta)|$-simplex. Indeed, for any $p\in P_{N(\delta)}$, define a tracial state $\tau_{p}$ on
$M\rtimes_{\delta}\Gamma$ by $\tau_{p}(x):= |N(\delta)|\tau_{M}\circ E_{\delta}(\Pi_v(p)x)$ for any $x\in M\rtimes_{\delta}\Gamma$. Then $\tau_{p}$ restricts to the unique tracial state on $\Pi_v(p)(M\rtimes_{\delta}\Gamma)$ since $\Pi_v(p)(M\rtimes_{\delta}\Gamma)$ is a factor. If $\tau$ is a tracial state on $M\rtimes_{\delta}\Gamma$, then $\tau(x)=\sum_{p\in P_{N(\delta)}}\tau (\Pi_v(p)x)=\sum_{p\in P_{N(\delta)}}\tau (\Pi_v(p))\tau_{p}(x)$ for any $x\in M\rtimes_{\delta}\Gamma$.
Hence $T_1(M\rtimes_{\delta}\Gamma)$ is an $|N(\delta)|$-simplex and the set of extremal tracial states on $M\rtimes_{\delta}\Gamma$ is equal to
$\{\tau_{p}\; |\; p\in P_{N(\delta)}\}$. An easy computation shows that
$\tau_{p}(e_{\delta})=\frac{|N(\delta)|}{|\Gamma|}\tau_{M}(\Phi_v(p))$ for any $p\in P_{N(\delta)}$. Therefore we can recover $i(\delta)$ by considering the extremal tracial states on $M\rtimes_{\delta}\Gamma$.
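A sketch of this computation, assuming the standard description $e_{\delta}=\frac{1}{|\Gamma|}\sum_{g\in\Gamma}\lambda_g$ of the projection $e_{\delta}$, and writing $p=\frac{1}{|N(\delta)|}\sum_{h\in N(\delta)}\eta(h)h$: since $E_{\delta}$ is $M$-bimodular and $E_{\delta}(\lambda_k)=0$ for $k\neq\iota$,
$$
E_{\delta}(\Pi_v(p)e_{\delta})
=\frac{1}{|N(\delta)||\Gamma|}\sum_{h\in N(\delta)}\sum_{g\in\Gamma}\eta(h)\,v_h\,E_{\delta}(\lambda_{h}^*\lambda_{g})
=\frac{1}{|N(\delta)||\Gamma|}\sum_{h\in N(\delta)}\eta(h)\,v_h
=\frac{1}{|\Gamma|}\Phi_v(p),
$$
because only the terms with $g=h$ survive. Hence $\tau_{p}(e_{\delta})=|N(\delta)|\,\tau_{M}\bigl(E_{\delta}(\Pi_v(p)e_{\delta})\bigr)=\frac{|N(\delta)|}{|\Gamma|}\tau_{M}(\Phi_v(p))$.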
\subsection{Finite abelian group actions on monotracial C$^*$-algebras}
We say a C$^*$-algebra $A$ is \textit{monotracial} if $A$ has a unique tracial state and no unbounded traces. For a monotracial C$^*$-algebra $A$, we denote by $\tau_{A}$ the unique tracial state on $A$ unless otherwise specified. Let $A$ be a simple separable monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Then $\pi_{\tau_{A}}(A)^{''}$ is a II$_1$ factor. Note that $A$ is not of type I since $A$ has an outer action. Of course, $N(\tilde{\alpha})$ and $i(\tilde{\alpha})$ are a cocycle conjugacy invariant and a conjugacy invariant for $\alpha$ on $A$, respectively. Since we assume that $A$ is simple, $\tau_A\circ E_{\alpha}$ is faithful. Hence we can regard $A\rtimes_{\alpha}\Gamma$ and $M(A\rtimes_{\alpha}\Gamma)$ as subalgebras of $\pi_{\tau_A\circ E_{\alpha}}(A\rtimes_{\alpha}\Gamma)^{''}\cong \pi_{\tau_{A}}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma$.
\begin{lem}\label{lem:trace-spaces-crossed-products} Let $B$ be a simple separable C$^*$-algebra with a compact tracial state space $T_1(B)$, and let $\beta$ be an action of a finite group $\Gamma$ on $B$ with $T_1(B)^{\beta}=\{\tau_{0}\}$. Assume that $\pi_{\tau_{0}}(B)^{''}$ has finitely many extremal tracial states (this is equivalent to the condition that every tracial state on $\pi_{\tau_{0}}(B)^{''}$ is normal).
Then the restriction map $T_1(\pi_{\tau_{0}}(B)^{''})\ni \tau \mapsto \tau|_B\in T_1(B)$ is an affine homeomorphism. \end{lem} \begin{proof} It is obvious that the restriction map is a continuous affine map. Since $\pi_{\tau_{0}}(B)$ is weakly dense in $\pi_{\tau_{0}}(B)^{''}$ and every tracial state on $\pi_{\tau_{0}}(B)^{''}$ is normal, the restriction map is injective. We shall show the surjectivity. Let $\{\tau_1,\tau_2,...,\tau_{k}\}$ be the set of extremal tracial states on $\pi_{\tau_{0}}(B)^{''}$.
Note that $\tau_{1}|_B$, $\tau_{2}|_B$,..., $\tau_{k}|_B$ are extremal tracial states on $B$ because a tracial state is extremal if and only if it is factorial. Since $T_1(B)$ is compact, it is enough to show that the set of extremal tracial states on $B$ is equal to
$\{\tau_1|_B,\tau_2|_B,...,\tau_{k}|_B\}$ by the Krein-Milman theorem. Suppose, to the contrary, that there exists an extremal tracial state $\sigma$ on $B$ such that
$\sigma\notin \{\tau_1|_B,\tau_2|_B,...,\tau_{k}|_B\}$. Since $\tau_0$ is $\beta$-invariant, $\beta$ induces an action $\tilde{\beta}$ on $\pi_{\tau_{0}}(B)^{''}$. It is easy to see that $\{\tau_1,\tau_2,...,\tau_{k}\}$ is
a $\tilde{\beta}$-invariant set. Hence $\{\tau_1|_B,\tau_2|_B,...,\tau_{k}|_B\}$ is a $\beta$-invariant set
and $\sigma\circ \beta_g\notin \{\tau_1|_B,\tau_2|_B,...,\tau_{k}|_B\}$ for any $g\in \Gamma$. Since $\tau_0$ is the unique $\beta$-invariant tracial state on $B$, $$
\tau_0=\frac{1}{|\Gamma|}\sum_{g\in\Gamma}\tau_1|_B\circ \beta_g=
\frac{1}{|\Gamma|}\sum_{g\in\Gamma}\sigma \circ \beta_g . $$ Since $T_1(B)$ is a Choquet simplex, $\tau_0$ has a unique representation as a convex combination of extremal tracial states; but the two convex combinations above are supported on disjoint sets of extremal tracial states, a contradiction. \end{proof}
\begin{pro}\label{pro:trace-spaces-crossed-products} Let $A$ be a simple separable monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Then the restriction map $T_1(\pi_{\tau_A\circ E_{\alpha}}(A\rtimes_{\alpha}\Gamma)^{''})\ni \tau \mapsto
\tau|_{A\rtimes_{\alpha}\Gamma}\in T_1(A\rtimes_{\alpha}\Gamma)$ is an affine homeomorphism. \end{pro} \begin{proof} By the outerness of $\alpha$, $A\rtimes_{\alpha}\Gamma$ is simple. \cite[Corollary 2.2.3]{Jones} implies that $\pi_{\tau_A\circ E_{\alpha}}(A\rtimes_{\alpha}\Gamma)^{''}\cong \pi_{\tau_{A}}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma$ has finitely many extremal tracial states. Since $A$ is monotracial, \cite[Lemma 2.2]{Na0} implies that $T_1(A\rtimes_{\alpha}\Gamma)$ is compact. Furthermore, we see that $T_1(A\rtimes_{\alpha}\Gamma)^{\hat{\alpha}}$ is a one-point set by the Takesaki-Takai duality theorem. Applying Lemma \ref{lem:trace-spaces-crossed-products} to $B=A\rtimes_{\alpha}\Gamma$ and $\beta=\hat{\alpha}$, we obtain the conclusion. \end{proof}
The following corollary is an immediate consequence of the proposition above and the previous subsection.
\begin{cor} Let $A$ be a simple separable monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Assume that the characteristic invariant of $\tilde{\alpha}$ is trivial. Then
$T_1(A\rtimes_{\alpha}\Gamma)$ is an $|N(\tilde{\alpha})|$-simplex. \end{cor}
A probability measure $m$ on a finite set $P$ is said to have \textit{full support} if $m(p)>0$ for any $p\in P$.
\begin{pro}\label{pro:realized-invariant} Let $A$ be a simple separable monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Assume that the characteristic invariant of $\tilde{\alpha}$ is trivial. If $m$ is a probability measure on $P_{N(\tilde{\alpha})}$ such that $i(\tilde{\alpha})=[m]$, then $m$ has full support. \end{pro} \begin{proof} By Proposition \ref{pro:trace-spaces-crossed-products} and the previous subsection, we may assume that
$m(p)=\frac{|\Gamma|}{|N(\tilde{\alpha})|}\tau_{p}(e_{\alpha})$ for any $p\in P_{N(\tilde{\alpha})}$, where $\tau_{p}$ is the extremal tracial state on $A\rtimes_{\alpha}\Gamma$ corresponding to $p$. Since $\alpha$ is outer, $A\rtimes_{\alpha}\Gamma$ is simple. Hence we have $\tau_{p}(e_{\alpha})>0$ for any $p\in P_{N(\tilde{\alpha})}$ because $\tau_{p}$ is faithful and $e_{\alpha}$ is a non-zero projection in $M(A\rtimes_{\alpha}\Gamma)$. Therefore we obtain the conclusion. \end{proof}
\subsection{Kirchberg's relative central sequence C$^*$-algebras}
Fix a free ultrafilter $\omega$ on $\mathbb{N}$. Let $A$ and $B$ be C$^*$-algebras, and let $\Phi$ be a homomorphism from $A$ to $B$. Put $$ B^{\omega}:=\ell^{\infty}(\mathbb{N}, B)/\{\{x_n\}_{n\in\mathbb{N}}\in \ell^{\infty}(\mathbb{N}, B)\;
|\; \lim_{n\to\omega} \|x_n\|=0\} $$ and we regard $B$ as a C$^*$-subalgebra of $B^{\omega}$ consisting of equivalence classes of constant sequences. We denote by $(x_n)_n$ a representative of an element in $B^{\omega}$. Set $$
F(\Phi (A), B)=B^{\omega}\cap \Phi(A)^{\prime}/ \{(x_n)_n\in B^{\omega}\cap \Phi(A)^{\prime}\; | \; (x_n \Phi(a))_n=0\; \text{for any}\; a\in A\} $$ and we call it \textit{Kirchberg's relative central sequence C$^*$-algebra}. If $A=B$ and $\Phi=\mathrm{id}_A$, then we denote $F(\Phi(A), B)$ by $F(B)$. Every action $\alpha$ of a discrete group $\Gamma$ on $B$ with $\alpha_g(\Phi(A))=\Phi(A)$ for any $g\in\Gamma$ induces an action on $F(\Phi(A), B)$. We denote it by the same symbol $\alpha$ for simplicity unless otherwise specified.
Let $A$ be a simple separable non-type I nuclear monotracial C$^*$-algebra and $B$ a monotracial C$^*$-algebra with strict comparison, and let $\Phi$ be a homomorphism from $A$ to $B$. Assume that $\tau_{B}$ is faithful and $\tau_A=\tau_B\circ \Phi$. Then $F(\Phi(A), B)$ has a tracial state $\tau_{B, \omega}$ such that $\tau_{B,\omega}([(a_n)_n])=\lim_{n\to\omega}\tau_B(a_n)$ for any $[(a_n)_n]\in F(\Phi(A), B)$ by \cite[Proposition 2.1]{Na4}. Put $$ \mathcal{M}:= \ell^{\infty}(\mathbb{N}, \pi_{\tau_B}(B)^{''})/
\{\{x_n\}_{n\in\mathbb{N}}\in \ell^{\infty}(\mathbb{N}, \pi_{\tau_B}(B)^{''})\; |\; \lim_{n\to\omega}\tilde{\tau}_B(x_n^*x_n)=0\} $$ where $\tilde{\tau}_B$ is the unique normal extension of $\tau_B$ to $\pi_{\tau_B}(B)^{''}$, and let $$ \mathcal{M}(\Phi(A), B):= \mathcal{M}\cap \pi_{\tau_B}(\Phi(A))^{\prime}. $$ If $A=B$ and $\Phi=\mathrm{id}_A$, then we denote $\mathcal{M}(\Phi(A), B)$ by $\mathcal{M}(B)$. Note that $\mathcal{M}(B)$ is equal to the von Neumann algebraic central sequence algebra of $\pi_{\tau_{B}}(B)^{''}$. For any homomorphism $\Phi$ from $A$ to $B$, $\mathcal{M}(B)$ is a subalgebra of $\mathcal{M}(\Phi(A), B)$.
Let $\beta$ be an action of a finite abelian group $\Gamma$ on $B$ such that $\beta_g(\Phi(A))=\Phi(A)$ for any $g\in\Gamma$. Then $\beta$ induces an action $\tilde{\beta}$ on $\mathcal{M}(\Phi(A), B)$. By \cite[Proposition 3.11]{Na5}, we have the following proposition. Note that \cite[Proposition 3.11]{Na5} is based on Matui and Sato's techniques \cite{MS}, \cite{MS2} and \cite{MS3} (with pioneering works \cite{Sa0} and \cite{Sa}). See also \cite{Sa2} and \cite{Sza6}.
\begin{pro}\label{pro:strict-comparison} With notation as above, assume that $\mathcal{M}(\Phi(A), B)^{\tilde{\beta}}$ is a factor
and $\beta_g|_{\Phi(A)}$ is outer for any $g\in\Gamma\setminus \{\iota\}$. If $a$ and $b$ are positive elements in $F(\Phi(A), B)^{\beta}$ satisfying $d_{\tau_{B},\omega}(a)< d_{\tau_{B}, \omega}(b)$, then there exists an element $r$ in $F(\Phi(A), B)^{\beta}$ such that $r^*br=a$. \end{pro}
\section{Approximate representability and classification}\label{sec:app}
We shall recall the definition of the Rohlin property and approximate representability for finite abelian group actions.
\begin{Def} Let $A$ be a separable C$^*$-algebra, and let $\alpha$ be an action of a finite abelian group $\Gamma$ on $A$. \ \\ (i) We say that $\alpha$ has the \textit{Rohlin property} if there exists a partition of unity $\{p_g\}_{g\in\Gamma}$ consisting of projections in $F(A)$ such that $$ \alpha_g(p_h)=p_{gh} $$ for any $g,h\in \Gamma$. \ \\ (ii) We say that $\alpha$ is \textit{approximately representable} if there exists a map $w$ from $\Gamma$ to $(A^{\alpha})^{\omega}$ such that the map $u$ from $\Gamma$ to $F(A^{\alpha})$ given by $u_g=[w_g]$ is a unitary representation of $\Gamma$ and $$ \alpha_g(a)=w_gaw_g^* \quad \text{in} \quad A^{\omega} $$ for any $g\in\Gamma$ and $a\in A$. \end{Def}
We refer the reader to \cite{I1}, \cite{GSan1}, \cite{Na0} and \cite{San} for basic properties of the Rohlin property and approximate representability. See \cite{GHS} for some generalization.
\begin{pro}\label{pro:rohlin-outer} Let $A$ be a separable C$^*$-algebra, and let $\alpha$ be an action of a finite group $\Gamma$ on $A$. Assume that $\tau$ is an $\alpha$-invariant tracial state on $A$. If $\alpha$ has the Rohlin property, then $\tilde{\alpha}$ on $\pi_{\tau}(A)^{''}$ is outer. \end{pro} \begin{proof} Let $\mathcal{M}_{\omega}$ be the von Neumann algebraic central sequence algebra of $\pi_{\tau}(A)^{''}$. Then there exists a unital homomorphism from $F(A)$ to $\mathcal{M}_{\omega}$ (see, for example, \cite[Proposition 2.2]{Na2}). Hence there exists a partition of unity $\{P_g\}_{g\in\Gamma}$ consisting of projections in $\mathcal{M}_{\omega}$ such that $$ \tilde{\alpha}_g(P_h)=P_{gh} $$ for any $g,h\in \Gamma$ since $\alpha$ has the Rohlin property. This shows that $\tilde{\alpha}$ is outer. Indeed, if $\tilde{\alpha}_g$ is an inner automorphism of $\pi_{\tau}(A)^{''}$, then $\tilde{\alpha}_g$ acts trivially on $\mathcal{M}_{\omega}$, and hence $\tilde{\alpha}_g(P_{\iota})=P_{\iota}$. Therefore we have $P_{g}=P_{\iota}$, and hence $g=\iota$. \end{proof}
\begin{pro}\label{pro:unitary} Let $A$ be a separable C$^*$-algebra, and let $\alpha$ be an action of a finite group $\Gamma$ on $A$. For any $v\in (A^{\alpha})^{\omega}\cap (A^{\alpha})^{\prime}$, if $[v]$ is a unitary element in $F(A^{\alpha})$, then $v^*va=vv^*a=av^*v=avv^*=a$ for any $a\in A \subset A^{\omega}$. \end{pro} \begin{proof} Let $\{h_n\}_{n=1}^{\infty}$ be an approximate unit for $A^{\alpha}$. Then $v^*vh_n=h_n$ in $A^{\omega}$ for any $n\in\mathbb{N}$ because $[v]$ is a unitary element in $F(A^{\alpha})$. Since $\Gamma$ is a finite group, $A^{\alpha}\subset A$ is a nondegenerate inclusion. Hence $\{h_n\}_{n=1}^{\infty}$ is an approximate unit for $A$. Therefore, for any $a\in A$, we have $$
\| v^*va-a\| =\|v^*va- v^*vh_na+ v^*vh_na-a\| \leq \|v^*v\| \|a-h_na\|+ \| h_na-a\|\to 0 $$ as $n\to \infty$. Consequently, $v^*va=a$. Similar arguments show $vv^*a=av^*v=avv^*=a$. \end{proof}
Let $A$ be a simple separable nuclear monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. We shall consider the action $\gamma:=\alpha\otimes\mathrm{id}_\mathcal{W}$ on $A\otimes\mathcal{W}$. We denote by $M_{n^{\infty}}$ the uniformly hyperfinite (UHF) algebra of type $n^{\infty}$. The following lemma is based on \cite[Lemma 3.10]{I1} and \cite[Proposition 2.1.3]{sut}.
\begin{lem}\label{lem:sutherland} Let $A$ be a separable C$^*$-algebra, and let $\alpha$ be an action of a finite abelian group $\Gamma$ on $A$ and put $\gamma=\alpha\otimes\mathrm{id}_{\mathcal{W}}$. Assume that for any $g\in\Gamma$, there exists an element $v_g$ in $((A\otimes\mathcal{W})^{\gamma})^{\omega}$ such that $$ \gamma_g(a)=v_gav_g^* \quad \text{in} \quad (A\otimes\mathcal{W})^{\omega} $$ for any $a\in A\otimes\mathcal{W}$ and $[v_{g}]$ is a unitary element in $F((A\otimes\mathcal{W})^{\gamma})$. Then $\gamma$ on $A\otimes\mathcal{W}$ is approximately representable. \end{lem} \begin{proof}
Since $\mathcal{W}$ is isomorphic to $\mathcal{W}\otimes M_{|\Gamma|^{\infty}}$, there exists a
unital homomorphism $\psi$ from $M_{|\Gamma|}(\mathbb{C})$ to $F(A\otimes\mathcal{W})^{\gamma}$. For any $g,h\in \Gamma$, let $E_{g,h}\in (A\otimes\mathcal{W})^{\omega}\cap (A\otimes\mathcal{W})^{\prime}$ be a representative of $\psi (e_{g,h})$ where $\{e_{g,h}\}_{g,h\in \Gamma}$ are the matrix units
of $M_{|\Gamma|}(\mathbb{C})$.
Taking suitable subsequences, we may assume that $E_{g,h}\in \{v_k, v_k^*\; |\; k\in\Gamma\}^{\prime}$ for any $g,h\in\Gamma$. Moreover, we may assume that $E_{g,h}\in ((A\otimes\mathcal{W})^{\gamma})^{\omega}$ for any
$g,h\in \Gamma$ by replacing $E_{g,h}$ with $\frac{1}{|\Gamma|}\sum_{k\in \Gamma} \gamma_k(E_{g,h})$. For any $g\in\Gamma$, let $z_g:=\sum_{h\in \Gamma}v_gv_hv_{gh}^*E_{h, gh}$. Note that we have $v_ga=\gamma_g(a)v_g$ and $\gamma_{g^{-1}}(a)v_g^*=v_g^*a$ in $(A\otimes \mathcal{W})^{\omega}$ for any $a\in A\otimes\mathcal{W}$ and $g\in\Gamma$ by Proposition \ref{pro:unitary}. Hence we have $z_g\in ((A\otimes\mathcal{W})^{\gamma})^{\omega}\cap (A\otimes\mathcal{W})^{\prime}$ for any $g\in\Gamma$. Define a map $w$ from $\Gamma$ to $((A\otimes\mathcal{W})^{\gamma})^{\omega}$ by $w_g:= z_g^*v_g$ for any $g\in \Gamma$. Note that we have \begin{align*} z_g^*z_ga &= \sum_{h\in \Gamma}E_{gh,h}v_{gh}v_h^*v_g^*\sum_{k\in \Gamma}v_gv_kv_{gk}^*E_{k, gk}a = \sum_{h,k\in \Gamma}v_{gh}v_h^*v_g^*v_gv_kv_{gk}^*E_{gh, h}E_{k, gk}a \\ &= \sum_{h\in \Gamma}v_{gh}v_h^*v_g^*v_gv_hv_{gh}^*E_{gh, gh}a = \sum_{h\in \Gamma}v_{gh}v_h^*v_g^*v_g\gamma_{g^{-1}}(a)v_hv_{gh}^*E_{gh, gh} \\ &= \sum_{h\in \Gamma} v_{gh}v_h^*\gamma_{g^{-1}}(a)v_hv_{gh}^*E_{gh, gh} = \sum_{h\in \Gamma} v_{gh}v_h^*v_h\gamma_{h^{-1}g^{-1}}(a)v_{gh}^*E_{gh, gh} \\ &= \sum_{h\in \Gamma} v_{gh}\gamma_{h^{-1}g^{-1}}(a)v_{gh}^*E_{gh, gh} = \sum_{h\in \Gamma} v_{gh}v_{gh}^*aE_{gh, gh} = \sum_{h\in \Gamma} aE_{gh, gh}=a \end{align*} in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A\otimes\mathcal{W}$ and $g\in \Gamma$. Hence we have $$ w_gaw_g^*=z_g^*v_gav_g^*z_g=z_g^*\gamma_g(a)z_g =z_g^*z_g\gamma_g(a)=\gamma_g(a) $$ in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A\otimes\mathcal{W}$ and $g\in\Gamma$. We shall show that the map $u$ from $\Gamma$ to $F((A\otimes\mathcal{W})^{\gamma})$ given by $u_g=[w_g]$ is a unitary representation. In a similar way as above, we see that $ z_gz_g^*a=a $ in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A\otimes\mathcal{W}$ and $g\in \Gamma$. Hence the image of $u$ is contained in the unitary group of $F((A\otimes\mathcal{W})^{\gamma})$.
Note that we have \begin{align*} v_gz_hv_g^*z_gz_{gh}^*a &=v_gz_hv_g^*\sum_{k\in \Gamma}v_gv_kv_{gk}^*E_{k, gk}\sum_{k^{\prime}\in \Gamma}E_{ghk^{\prime}, k^{\prime}} v_{ghk^{\prime}}v_{k^{\prime}}^*v_{gh}^*a \\ &= v_gz_hv_g^*\sum_{k,k^{\prime}\in \Gamma}v_gv_kv_{gk}^*v_{ghk^{\prime}}v_{k^{\prime}}^*v_{gh}^* E_{k, gk}E_{ghk^{\prime},k^{\prime}}a \\ &= v_gz_h\sum_{k\in \Gamma}v_g^*v_gv_kv_{gk}^*v_{gk}v_{h^{-1}k}^*v_{gh}^*E_{k,h^{-1}k}a \\ &= v_gz_h\sum_{k\in \Gamma}v_kv_{h^{-1}k}^*v_{gh}^*E_{k,h^{-1}k}a \\ &= v_g\sum_{k, k^{\prime}\in \Gamma}v_hv_{k^{\prime}}v_{hk^{\prime}}^* v_kv_{h^{-1}k}^*v_{gh}^*E_{k^{\prime}, hk^{\prime}}E_{k, h^{-1}k}a \\ &= v_g\sum_{k^{\prime}\in \Gamma}v_hv_{k^{\prime}}v_{hk^{\prime}}^* v_{hk^{\prime}}v_{k^{\prime}}^*v_{gh}^*E_{k^{\prime}, k^{\prime}}a \\ &= v_{g}v_{h}v_{gh}^*\sum_{k^{\prime}\in \Gamma}E_{k^{\prime}, k^{\prime}}a =v_{g}v_{h}v_{gh}^*a \end{align*} in $(A\otimes\mathcal{W})^{\omega}$ for any $a\in A\otimes\mathcal{W}$ and $g, h\in \Gamma$. This implies that $$ z_{gh}^*v_{gh}\gamma_{h^{-1}g^{-1}}(a) =z^*_gv_gz_{h}^*v_{h} \gamma_{h^{-1}g^{-1}}(a) \quad \text{in} \quad (A\otimes\mathcal{W})^{\omega} $$ for any $a\in A\otimes\mathcal{W}$ and $g, h\in \Gamma$. Consequently, we have $u_{gh}=u_{g}u_{h}$ for any $g,h\in \Gamma$. \end{proof}
We have the following lemma by \cite[Corollary 4.6]{Na5}.
\begin{lem}\label{lem:corollary 4.6} With notation as above, let $p$ and $q$ be projections in $F(A\otimes\mathcal{W})^{\gamma}$ such that $0<\tau_{A\otimes\mathcal{W}, \omega}(p)\leq 1$. Then $p$ and $q$ are Murray-von Neumann equivalent if and only if $\tau_{A\otimes\mathcal{W}, \omega}(p)=\tau_{A\otimes\mathcal{W}, \omega}(q)$. \end{lem}
Fix $g_0\in \Gamma$. Define a homomorphism $\Phi_{g_0}$ from $A\otimes\mathcal{W}$ to $M_2(A\otimes\mathcal{W})$ by $$ \Phi_{g_0}(a)= \left(\begin{array}{cc}
a & 0 \\
0 & \gamma_{g_0}(a)
\end{array} \right). $$ Since $\Gamma$ is abelian, we have $(\gamma_g \otimes\mathrm{id}_{M_2(\mathbb{C})})(\Phi_{g_0}(A\otimes\mathcal{W}))=\Phi_{g_0}(A\otimes\mathcal{W})$ for any $g\in\Gamma$. Therefore $\gamma\otimes\mathrm{id}_{M_2(\mathbb{C})}$ induces an action on $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))$. We denote it by $\beta$. Also, let $\tau_{\omega}$ denote the tracial state on $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))$ induced by $\tau_{M_2(A\otimes\mathcal{W})}$.
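The invariance of $\Phi_{g_0}(A\otimes\mathcal{W})$ is a one-line check using the commutativity of $\Gamma$:
$$
(\gamma_g\otimes\mathrm{id}_{M_2(\mathbb{C})})(\Phi_{g_0}(a))
=\left(\begin{array}{cc} \gamma_g(a) & 0 \\ 0 & \gamma_g(\gamma_{g_0}(a)) \end{array}\right)
=\left(\begin{array}{cc} \gamma_g(a) & 0 \\ 0 & \gamma_{g_0}(\gamma_g(a)) \end{array}\right)
=\Phi_{g_0}(\gamma_g(a))
$$
for any $a\in A\otimes\mathcal{W}$ and $g\in\Gamma$.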
\begin{lem}\label{lem:strict-comparison} With notation as above, assume that the characteristic invariant of $\tilde{\alpha}$ is trivial. If $a$ and $b$ are positive elements in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\beta}$ satisfying $d_{\tau_{\omega}}(a)< d_{\tau_{\omega}}(b)$, then there exists an element $r$ in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\beta}$ such that $r^*br=a$. \end{lem} \begin{proof} By Proposition \ref{pro:strict-comparison}, it suffices to show that $\mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\tilde{\beta}}$ is a factor. Since the characteristic invariant of $\tilde{\alpha}$ is trivial, there exists a group homomorphism $v$ from $N(\tilde{\alpha})$ to the unitary group of $(\pi_{\tau_{A}}(A)^{''})^{\tilde{\alpha}}$ such that $\tilde{\alpha}_g=\mathrm{Ad}(v_g)$ for any $g\in N(\tilde{\alpha})$. Since we have $$ \tilde{\gamma}_g\otimes\mathrm{id}_{M_2(\mathbb{C})}= \mathrm{Ad}\left( \left(\begin{array}{cc}
v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}} & 0 \\
0 & v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}
\end{array} \right)\right) $$ and $\tilde{\beta}_g([(x_n)_n])=[(\tilde{\gamma}_g\otimes\mathrm{id}_{M_2(\mathbb{C})}(x_n))_n]$, $$ \tilde{\beta_g}([(x_n)_n])= \left[\left(\mathrm{Ad}\left( \left(\begin{array}{cc}
v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}} & 0 \\
0 & v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}
\end{array} \right)\right)(x_n)\right)_n\right] $$ for any $g\in N(\tilde{\alpha})$ and $[(x_n)_n]$ in $\mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))$. Note that $$ \left(\begin{array}{cc}
v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}} & 0 \\
0 & v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}
\end{array} \right) \in \pi_{\tau_{\omega}}(\Phi_{g_0}(A\otimes\mathcal{W}))^{''} $$ because we have $v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}\in \pi_{\tau_{A\otimes\mathcal{W}}}(A\otimes\mathcal{W})^{''}$ and $$ \left(\begin{array}{cc}
v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}} & 0 \\
0 & v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}
\end{array} \right) =\left(\begin{array}{cc}
v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}} & 0 \\
0 & \tilde{\gamma}_{g_0}(v_g\otimes 1_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}})
\end{array} \right). $$ Hence $\tilde{\beta}_g$ is the trivial automorphism for any $g\in N(\tilde{\alpha})$. Therefore $\gamma\otimes\mathrm{id}_{M_2(\mathbb{C})}$ induces an action $\delta$ of $\Gamma /N(\tilde{\alpha})$ on $\mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))$ such that $$ \mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\delta} = \mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\tilde{\beta}}. $$ Note that the restriction of $\delta$ on $\mathcal{M}(M_2(A\otimes\mathcal{W}))$ is strongly free or $\Gamma /N(\tilde{\alpha})=\{\iota\}$ by \cite[Theorem 3.2]{C3}, \cite[Lemma 5.6]{Oc} and the definition of $N(\tilde{\alpha})$. (Note that every centrally nontrivial automorphism of a factor is properly centrally nontrivial by definition. See \cite[Section 5.2]{Oc}.) Therefore \cite[Proposition 3.14]{Na5}(see also \cite[Remark 3.15]{Na5}) implies that $ \mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\delta} $ is a factor. Consequently, $\mathcal{M}(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\tilde{\beta}}$ is a factor. \end{proof}
The following theorem is one of the main results in this paper.
\begin{thm}\label{thm:main} Let $A$ be a simple separable nuclear monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Then $\gamma=\alpha\otimes\mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ is approximately representable if and only if the characteristic invariant of $\tilde{\alpha}$ is trivial. \end{thm} \begin{proof} First, we shall show the only if part. Assume that the characteristic invariant of $\tilde{\alpha}$ is not trivial. By Proposition \ref{pro:non-trivial-characteristic}, the dual action of $\tilde{\alpha}$ on $\pi_{\tau_{A}}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma$ is not outer, and hence the dual action of $\tilde{\gamma}$ on $\pi_{\tau_{A}\otimes\tau_{\mathcal{W}}}(A\otimes\mathcal{W})^{''}\rtimes_{\tilde{\gamma}}\Gamma \cong\pi_{\tau_A\otimes\tau_{\mathcal{W}}\circ E_{\gamma}}((A\otimes\mathcal{W})\rtimes_{\gamma}\Gamma)^{''}$ is not outer. Proposition \ref{pro:rohlin-outer} implies that $\hat{\gamma}$ does not have the Rohlin property. Therefore $\gamma$ is not approximately representable by \cite[Proposition 4.4]{Na0}.
We shall show the if part. Fix $g_0\in \Gamma$. Let $\{h_n\}_{n\in\mathbb{N}}$ be an approximate unit for $(A\otimes\mathcal{W})^{\gamma}$. Note that $\{h_n\}_{n\in\mathbb{N}}$ is also an approximate unit for $A\otimes\mathcal{W}$. Put $$ P:=\left[\left(\left(\begin{array}{cc}
h_n & 0 \\
0 & 0
\end{array} \right)\right)_n\right] \quad \text{and} \quad Q:=\left[\left(\left(\begin{array}{cc}
0 & 0 \\
0 & h_n
\end{array} \right)\right)_n\right] $$ in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))$. Then $P$ and $Q$ are projections in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\beta}$. Using \cite[Proposition 4.2]{Na5}, Lemma \ref{lem:corollary 4.6} and Lemma \ref{lem:strict-comparison} instead of \cite[Proposition 2.6]{Na3}, \cite[Corollary 5.5]{Na3} and \cite[Lemma 6.1]{Na3}, arguments similar to those in the proof of \cite[Lemma 6.2]{Na3} show that $P$ is Murray-von Neumann equivalent to $Q$ in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\beta}$. (See also the proof of \cite[Lemma 4.2]{Na4}.) Hence there exists an element $V$ in $F(\Phi_{g_0}(A\otimes\mathcal{W}), M_2(A\otimes\mathcal{W}))^{\beta}$ such that $V^*V=P$ and $VV^*=Q$. It is easy to see that there exists an element $v_{g_0}=(v_{g_0, n})_n$ in $(A\otimes\mathcal{W})^{\omega}$ such that $$ V=\left[\left(\left(\begin{array}{cc}
0 & 0 \\
v_{g_0,n} & 0
\end{array} \right)\right)_n\right]. $$ Since we have $\beta_g(V)=V$ for any $g\in\Gamma$, we see that $$ \left[\left(\left(\begin{array}{cc}
0 & 0 \\
v_{g_0,n} & 0
\end{array} \right)\right)_n\right] = \left[\left(\left(\begin{array}{cc}
0 & 0 \\
\frac{1}{|\Gamma|}\sum_{g\in\Gamma}\gamma_g(v_{g_0,n}) & 0
\end{array} \right)\right)_n\right]. $$ Hence we may assume that $v_{g_0}$ is an element in $((A\otimes\mathcal{W})^{\gamma})^{\omega}$. Since we have $V^*V=P$ and $VV^*=Q$, $$ av_{g_0}^*v_{g_0}=av_{g_0}v_{g_0}^*=a $$ for any $a\in A\otimes\mathcal{W}$. Furthermore, we have $$ v_{g_0}a=\gamma_{g_0}(a)v_{g_0} $$ for any $a\in A\otimes\mathcal{W}$ since $$ \left(\left(\begin{array}{cc}
0 & 0 \\
v_{g_0,n} & 0
\end{array} \right)\right)_n \in M_2(A\otimes\mathcal{W})^{\omega}\cap \Phi_{g_0}(A\otimes\mathcal{W})^{\prime}. $$ These imply that $$ \gamma_{g_0}(a)=v_{g_0}av_{g_0}^* $$ for any $a\in A\otimes\mathcal{W}$ and $[v_{g_0}]$ is a unitary element in $F((A\otimes\mathcal{W})^{\gamma})$. Since $g_0\in\Gamma$ is arbitrary, $\gamma$ is approximately representable by Lemma \ref{lem:sutherland}. \end{proof}
Since $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ is isomorphic to $(A\rtimes_{\alpha}\Gamma)\otimes \mathcal{W}$, we see that $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ is in the class of Elliott-Gong-Lin-Niu's classification theorem \cite[Theorem 7.5]{EGLN} (see also \cite[Theorem A]{CE}). Furthermore, we see that $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ is in the class of Robert's classification theorem \cite{Rob}. In particular, these C$^*$-algebras and automorphisms can be classified by using trace spaces. Note that the map from $T_1(A\rtimes_{\alpha}\Gamma)$ to $T_1( (A\rtimes_{\alpha}\Gamma)\otimes \mathcal{W})$ given by $\tau \mapsto \tau\otimes \tau_{\mathcal{W}}$ is an affine homeomorphism. As an application of the theorem above and these classification theorems, we obtain the following classification result.
\begin{thm} Let $A$ and $B$ be simple separable nuclear monotracial C$^*$-algebras, and let $\alpha$ and $\beta$ be outer actions of a finite abelian group $\Gamma$ on $A$ and $B$, respectively. Assume that the characteristic invariants of $\tilde{\alpha}$ and $\tilde{\beta}$ are trivial. Then \ \\ (i) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are cocycle conjugate if and only if $\tilde{\alpha}$ on $\pi_{\tau_A}(A)^{''}$ and $\tilde{\beta}$ on $\pi_{\tau_B}(B)^{''}$ are cocycle conjugate; \ \\ (ii) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are conjugate if and only if $\tilde{\alpha}$ on $\pi_{\tau_A}(A)^{''}$ and $\tilde{\beta}$ on $\pi_{\tau_B}(B)^{''}$ are conjugate. \end{thm} \begin{proof} (i) First, we shall show the only if part. Since $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ are cocycle conjugate, $\tilde{\alpha}\otimes\mathrm{id}_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}$ and $\tilde{\beta}\otimes\mathrm{id}_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}$ are cocycle conjugate. Since $\tilde{\alpha}\otimes\mathrm{id}_{\pi_{\tau_{\mathcal{W}}}(\mathcal{W})^{''}}$ is conjugate to $\tilde{\alpha}$ by \cite[Corollary 5.2.3]{Jones}, we see that $\tilde{\alpha}$ and $\tilde{\beta}$ are cocycle conjugate.
We shall show the if part. Since $\tilde{\alpha}$ and $\tilde{\beta}$ are cocycle conjugate,
there exists an affine homeomorphism $F$ from $T_1(\pi_{\tau_B}(B)^{''}\rtimes_{\tilde{\beta}}\Gamma)$ onto $T_1(\pi_{\tau_A}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma)$ such that $F\circ T(\hat{\tilde{\beta}}_{\eta})=T(\hat{\tilde{\alpha}}_{\eta})\circ F$ for any $\eta\in \hat{\Gamma}$ by Proposition \ref{pro:conjugacy-trace-spaces}. Proposition \ref{pro:trace-spaces-crossed-products} implies that the restriction map
$F|_{T_1(B\rtimes_{\beta}\Gamma)}$ is an affine homeomorphism from $T_1(B\rtimes_{\beta}\Gamma)$ onto $T_1(A\rtimes_{\alpha}\Gamma)$. Define a map $G$ from $T_1((B\otimes\mathcal{W})\rtimes_{\beta\otimes\mathrm{id}_{\mathcal{W}}}\Gamma)$ to $T_1((A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma)$ by $G(\tau\otimes\tau_{\mathcal{W}})= F(\tau)\otimes \tau_{\mathcal{W}}$ for any $\tau\in T_1(B\rtimes_{\beta}\Gamma)$. Then $G$ is an affine homeomorphism such that $G\circ T(\hat{\beta}_\eta\otimes\mathrm{id}_{\mathcal{W}}) = T(\hat{\alpha}_{\eta}\otimes\mathrm{id}_{\mathcal{W}})\circ G$ for any $\eta\in\hat{\Gamma}$. By Elliott-Gong-Lin-Niu's classification theorem \cite[Theorem 7.5]{EGLN}, there exists an isomorphism $\theta$ from $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ onto $(B\otimes\mathcal{W})\rtimes_{\beta\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ such that $T(\theta)=G$. Since we have $$ T(\theta \circ (\hat{\alpha}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}) \circ \theta^{-1}) =G^{-1}\circ T(\hat{\alpha}_{\eta}\otimes\mathrm{id}_{\mathcal{W}})\circ G =T(\hat{\beta}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}), $$ $\hat{\beta}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}$ is approximately unitarily equivalent to $\theta \circ (\hat{\alpha}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}) \circ \theta^{-1}$ for any $\eta\in \hat{\Gamma}$ by \cite[Theorem 1.0.1]{Rob} and \cite[Proposition 6.2.3]{Rob}. Therefore \cite[Theorem 3.5]{Na0} implies that $\hat{\alpha}\otimes\mathrm{id}_{\mathcal{W}}$ and $\hat{\beta}\otimes\mathrm{id}_{\mathcal{W}}$ are conjugate because $\hat{\alpha}\otimes\mathrm{id}_{\mathcal{W}}$ and $\hat{\beta}\otimes\mathrm{id}_{\mathcal{W}}$ have the Rohlin property by Theorem \ref{thm:main} and \cite[Proposition 4.4]{Na0}. Consequently, \cite[Proposition 5.4]{Na0} implies that $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ and $\beta\otimes\mathrm{id}_{\mathcal{W}}$ are cocycle conjugate.
\ \\ (ii) The ``only if'' part follows by the same argument as in (i), so we show the ``if'' part. By Proposition \ref{pro:conjugacy-trace-spaces}, there exists an affine homeomorphism $F$ from $T_1(\pi_{\tau_B}(B)^{''}\rtimes_{\tilde{\beta}}\Gamma)$ onto $T_1(\pi_{\tau_A}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma)$ such that $F(\tau)(e_{\tilde{\alpha}})=\tau (e_{\tilde{\beta}})$ for any $\tau\in T_1(\pi_{\tau_B}(B)^{''}\rtimes_{\tilde{\beta}}\Gamma)$ and $F\circ T(\hat{\tilde{\beta}}_{\eta})=T(\hat{\tilde{\alpha}}_{\eta})\circ F$ for any $\eta\in \hat{\Gamma}$. Note that we have $e_{\alpha}=e_{\tilde{\alpha}}$ and $e_{\beta}=e_{\tilde{\beta}}$ because we regard $M(A\rtimes_{\alpha}\Gamma)$ and $M(B\rtimes_{\beta}\Gamma)$ as subalgebras of $\pi_{\tau_A}(A)^{''}\rtimes_{\tilde{\alpha}}\Gamma$ and $\pi_{\tau_B}(B)^{''}\rtimes_{\tilde{\beta}}\Gamma$, respectively. By the same argument as in (i), we see that there exists an isomorphism $\theta$ from $(A\otimes\mathcal{W})\rtimes_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ onto $(B\otimes\mathcal{W})\rtimes_{\beta\otimes\mathrm{id}_{\mathcal{W}}}\Gamma$ such that $\hat{\beta}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}$ is approximately unitarily equivalent to $\theta\circ (\hat{\alpha}_{\eta}\otimes\mathrm{id}_{\mathcal{W}}) \circ \theta^{-1}$ for any $\eta\in \hat{\Gamma}$. Since we have $e_{\alpha\otimes\mathrm{id}_{\mathcal{W}}}= e_{\alpha}\otimes 1_{\mathcal{W}^{\sim}}$ and $e_{\beta\otimes\mathrm{id}_{\mathcal{W}}}= e_{\beta}\otimes 1_{\mathcal{W}^{\sim}}$, $$ \tau\otimes\tau_{\mathcal{W}} (\theta (e_{\alpha\otimes\mathrm{id}_{\mathcal{W}}})) =F(\tau)\otimes \tau_{\mathcal{W}}(e_{\alpha}\otimes1_{\mathcal{W}^{\sim}}) =\tau \otimes\tau_{\mathcal{W}}(e_{\beta}\otimes1_{\mathcal{W}^{\sim}}) =\tau \otimes\tau_{\mathcal{W}}(e_{\beta\otimes\mathrm{id}_{\mathcal{W}}}) $$ for any $\tau\in T_1(B\rtimes_{\beta}\Gamma)$. 
Therefore \cite[Corollary 4.5]{Na0} implies that $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ and $\beta\otimes\mathrm{id}_{\mathcal{W}}$ are conjugate because $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ and $\beta\otimes\mathrm{id}_{\mathcal{W}}$ are approximately representable by Theorem \ref{thm:main}. \end{proof}
The following corollary is an immediate consequence of the theorem above and Theorem \ref{thm:jones}.
\begin{cor}\label{main:cor} Let $A$ and $B$ be simple separable nuclear monotracial C$^*$-algebras, and let $\alpha$ and $\beta$ be outer actions of a finite abelian group $\Gamma$ on $A$ and $B$, respectively. Assume that the characteristic invariants of $\tilde{\alpha}$ and $\tilde{\beta}$ are trivial. Then \ \\ (i) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are cocycle conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$; \ \\ (ii) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$ and $i(\tilde{\alpha})=i(\tilde{\beta})$. \end{cor}
If $\delta$ is an action of a finite cyclic group $\Gamma$ with prime order, then $N(\delta)=\Gamma$ or $N(\delta)=\{\iota\}$. Hence we have the following corollary.
\begin{cor} Let $A$ and $B$ be simple separable nuclear monotracial C$^*$-algebras, and let $\alpha$ and $\beta$ be outer actions of a finite cyclic group $\Gamma$ with prime order on $A$ and $B$, respectively. Then \ \\ (i) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are cocycle conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$; \ \\ (ii) $\alpha\otimes \mathrm{id}_{\mathcal{W}}$ on $A\otimes\mathcal{W}$ and $\beta\otimes \mathrm{id}_{\mathcal{W}}$ on $B\otimes\mathcal{W}$ are conjugate if and only if $N(\tilde{\alpha})=N(\tilde{\beta})$ and $i(\tilde{\alpha})=i(\tilde{\beta})$. \end{cor}
\section{Model actions}
In this section, we shall construct simple separable nuclear monotracial C$^*$-algebras $A$ and outer actions $\alpha$ on $A$ realizing arbitrary invariants $(N(\tilde{\alpha}), i(\tilde{\alpha}))$ in Corollary \ref{main:cor}, subject to the restrictions demanded by Proposition \ref{pro:realized-invariant}. In particular, $A$ can be chosen to be an approximately finite-dimensional (AF) algebra.
We shall give a summary of the construction. First, we construct ``inner'' actions $\alpha$ on a simple monotracial AF algebra $A$ with arbitrary invariants in Lemma \ref{lem:jones}. Note that if $\beta$ is an action of $\Gamma$ on a simple monotracial C$^*$-algebra $B$ with $N(\beta)=\{\iota\}$ and $N(\tilde{\beta})=\Gamma$, then we have $N(\alpha \otimes \beta)=\{\iota\}$ and $N(\tilde{\alpha}\otimes\tilde{\beta})=N(\tilde{\alpha})$ for any action $\alpha$ of $\Gamma$ on a simple monotracial C$^*$-algebra $A$. Hence if we can construct a simple monotracial AF algebra $B$ and an action $\beta$ of $\Gamma$ on $B$ with $N(\beta)=\{\iota\}$ and $N(\tilde{\beta})=\Gamma$, then we obtain actions $\alpha\otimes\beta$ with arbitrary invariant $N(\tilde{\alpha}\otimes\tilde{\beta})$. Note that constructing such an action $\beta$ is equivalent to constructing a unitary representation $u$ of $\Gamma$ on $\pi_{\tau_B}(B)^{''}$ such that $u$ induces an action on $B$. We construct such representations in Lemma \ref{lem:outer}. (We construct unitary representations of cyclic groups for simplicity.) On the other hand, the inner invariant $i(\tilde{\alpha}\otimes\mathrm{Ad}(u))$ is not equal to
$i(\tilde{\alpha})$ unless $|\tilde{\tau}_{B}(u_g)|=1$ for any $g\in \Gamma$.
(Note that $|\tilde{\tau}_{B}(u_g)|=1$ for any $g\in \Gamma$ is equivalent to $\mathrm{Ad}(u_g)= \mathrm{id}_{\pi_{\tau_B}(B)^{''}}$ for any $g\in \Gamma$.) But if $\tilde{\tau}_{B}(u_g)$ is ``near'' $1$ for any $g\in \Gamma$, then $i(\tilde{\alpha}\otimes\mathrm{Ad}(u))$ is ``near'' $i(\tilde{\alpha})$. Hence applying Lemma \ref{lem:jones} to a ``small'' perturbation of the desired inner invariant (we need to assume that $m$ has full support here) and considering the tensor product type action, we obtain the conclusion. Of course, we need to construct unitary representations $u$ in Lemma \ref{lem:outer} such that $\tilde{\tau}_{B}(u_g)$ is ``near'' $1$ for any $g\in \Gamma$.
The following lemma is based on \cite[Proposition 1.5.8 and Theorem 1.5.11]{Jones}. Recall that $\Phi_{v}$ is the homomorphism from $\mathbb{C}N$ to $A$ defined by $\Phi_v(\sum_{h\in N}c_hh)=\sum_{h\in N}c_hv_h$ where $v$ is a unitary representation of $N$ on $A$.
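As an illustrative numerical check (not part of the argument), one can realize $\mathbb{C}N$ for a small cyclic group in its left regular representation and verify that the elements $p_{\eta}=\frac{1}{|N|}\sum_{h\in N}\eta(h)h$ are minimal projections summing to the identity, each of normalized trace $1/|N|$. The choice $N=\mathbb{Z}_4$ and the NumPy realization below are illustrative.

```python
import numpy as np

k = 4                                       # illustrative: N = Z_k
zeta = np.exp(2j * np.pi / k)
S = np.roll(np.eye(k), 1, axis=0)           # generator acts as the cyclic shift

def p(j):
    """Minimal projection attached to the character eta(h) = zeta^{jh}."""
    return sum(zeta**(j * l) * np.linalg.matrix_power(S, l)
               for l in range(k)) / k

projs = [p(j) for j in range(k)]
for q in projs:
    assert np.allclose(q @ q, q) and np.allclose(q.conj().T, q)
    assert np.linalg.matrix_rank(q) == 1    # minimal: rank one
assert np.allclose(sum(projs), np.eye(k))   # resolution of the identity
assert np.allclose(projs[0] @ projs[1], np.zeros((k, k)))

# The normalized trace of each p_eta is 1/|N|, matching
# tau_B(Phi_lambda(p)) = 1/|N| in the proof of Lemma lem:jones below.
assert all(np.isclose(np.trace(q) / k, 1 / k) for q in projs)
```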
\begin{lem}\label{lem:jones} Let $\Gamma$ be a finite abelian group, and let $N$ be a subgroup of $\Gamma$. Suppose that $m$ is a probability measure on the set $P_{N}$ of minimal projections in $\mathbb{C}N$. Then there exist a simple unital monotracial AF algebra $A$, an action $\alpha$ on $A$ of $\Gamma$ and a unitary representation $v$ of $N$ on $A$ such that $N(\alpha)=N(\tilde{\alpha})=N$, $\alpha_h=\mathrm{Ad}(v_h)$, $\alpha_g(v_h)=v_h$ for any $h\in N$ and $g\in \Gamma$ and $\tau_{A}(\Phi_{v}(p))=m(p)$ for any $p\in P_N$. \end{lem} \begin{proof}
Define an action $\mu^{\Gamma}$ of $\Gamma$ on $M_{|\Gamma|^{\infty}}
\cong \bigotimes_{n=1}^{\infty} M_{|\Gamma|}(\mathbb{C})$ by $\mu^{\Gamma}:= \bigotimes_{n=1}^{\infty} \mathrm{Ad}(\lambda)$ where $\lambda$ is the left regular representation of $\Gamma$. Then $\mu^{\Gamma}$ has the Rohlin property (see, for example, \cite[Example 3.2]{I1}
and \cite[1.5]{Jones}). Let $B:=M_{|\Gamma|^{\infty}}\rtimes_{\mu^{\Gamma}|_N}N$. Then $B$ is a unital AF algebra because $\mu^{\Gamma}|_N$ is an action of product type.
Since $N(\tilde{\mu}^{\Gamma}|_N)=\{\iota\}$, $B$ is simple and monotracial. Note that the unique tracial state on $B$ is given by
$\tau_{M_{|\Gamma|^{\infty}}}\circ E_{\mu^{\Gamma}|_N}$. Define an action $\beta$ on $B$ of $\Gamma$ by $\beta_g(\sum_{h\in N}a_h\lambda_h)= \sum_{h\in N}\mu^{\Gamma}_g(a_h)\lambda_h$ for any $g\in \Gamma$. By the same argument as in the proof of \cite[Proposition 1.5.8]{Jones}, we see that $N(\tilde{\beta})=N(\beta)=N$. In particular, the map $\lambda$ given by $N \ni h \mapsto \lambda_h\in B$ is a unitary representation of $N$ on $B$ such that $\beta_g(\lambda_h)=\lambda_h$ for any $g\in\Gamma$ and $h\in N$.
The Effros-Handelman-Shen theorem \cite{EHS} (or \cite[Theorem 2.2]{Ell-order}) implies that there exists a simple unital monotracial AF algebra $C$ such that $K_0(C)$ is the additive subgroup of $\mathbb{R}$ generated by $\mathbb{Q}$ and
$\{m(p)\; |\; p\in P_{N}\}$, $K_0(C)_{+}=K_0(C)\cap \mathbb{R}_{+}$ and $[1]_0=1$. For any $p\in P_N$, there exists a projection $q_p$ in $C$ such that $\tau_{C}(q_p)=m(p)$, and put $e_{p}=\Phi_{\lambda}(p)\otimes q_p\in B\otimes C$. Then we have $e_{p}\in (B\otimes C)^{\beta\otimes\mathrm{id}_{C}}
\cap \{\lambda_h\otimes 1\; |\; h\in N\}^{\prime}$ for any $p\in P_N$. For each $p\in P_N$ there exists an element $\eta$ in $\hat{N}$ such that $\Phi_{\lambda}(p)=\frac{1}{|N|}\sum_{h\in N}\eta (h) \lambda_h$. Since $\tau_{B}(\lambda_h)=0$ for any $h\in N\setminus \{\iota\}$, we have $\tau_{B} (\Phi_{\lambda}(p))= \frac{1}{|N|}$ for any $p\in P_N$, and hence $\tau_{B\otimes C}(e_p)= \frac{m(p)}{|N|}$ for any $p\in P_N$.
Put $e:= \sum_{p\in P_N}e_p$, and let $A:= e(B\otimes C)e$. Then $A$ is a simple unital monotracial AF algebra. Since we have $e\in (B\otimes C)^{\beta\otimes\mathrm{id}_{C}}$, $\beta\otimes\mathrm{id}_{C}$ induces an action $\alpha$ on $A$ of $\Gamma$. For the same reason as in the proof of \cite[Theorem 1.5.11]{Jones}, we have $N(\tilde{\alpha})=N(\tilde{\beta}\otimes \mathrm{id}_{C})=N$. Define a unitary representation $v$ of $N$ on $A$ by $v_h:= (\lambda_h\otimes 1)e$ for any $h\in N$. It is easy to see that $\alpha_h=\mathrm{Ad}(v_h)$ and $\alpha_g(v_h)=v_h$ for any $h\in N$ and $g\in \Gamma$. This also implies that $N(\alpha)=N$. Since we have $\Phi_{v}(p)=(\Phi_{\lambda}(p)\otimes 1)e =e_{p}$, $$ \tau_{A}(\Phi_{v}(p))=\frac{\tau_{B\otimes C}(e_p)}{\tau_{B\otimes C}(e)}
=\frac{\frac{m(p)}{|N|}}{\frac{1}{|N|}}=m(p) $$ for any $p\in P_N$. Therefore the proof is complete. \end{proof}
We recall properties of characters of finite abelian groups. If $N$ is a finite abelian group, then we have $$ \sum_{h\in N}\eta (h) = \left\{\begin{array}{cl}
|N| & \text{if}\quad \eta=\iota \in \hat{N} \\ 0 & \text{if}\quad \eta\in \hat{N}\setminus \{\iota\} \end{array} \right. \quad\text{and}\quad \sum_{\eta \in \hat{N}}\eta (h) = \left\{\begin{array}{cl}
|N| & \text{if}\quad h=\iota \in N \\ 0 & \text{if}\quad h\in N\setminus \{\iota \}. \end{array} \right. $$ We denote by $\mathbb{Z}_k$ the cyclic group of order $k$. For any natural number $k$, let $\zeta_{k}:=e^{\frac{2\pi i}{k}}$. Note that $\hat{\mathbb{Z}}_k$ can be identified with
$\{\zeta_k^l\; | \; 1 \leq l \leq k\}$ by the pairing $\mathbb{Z}_k \times \hat{\mathbb{Z}}_k\to \mathbb{T}$ given by $([l], \zeta_k^{j})\mapsto \zeta_k^{lj}$. Also, if $\zeta$ is a $k$th root of unity and not equal to $1$, then we have $\sum_{j=1}^{k}\zeta^j=0$.
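These orthogonality relations are elementary, but they are also easy to verify numerically; the following sketch (the group order $k=6$ is an illustrative choice) checks both displayed sums and the root-of-unity identity.

```python
import numpy as np

k = 6                                   # illustrative: N = Z_6
zeta = np.exp(2j * np.pi / k)           # primitive k-th root of unity

# Characters of Z_k: eta_j(h) = zeta^{jh}.
def char_sum_over_group(j):             # sum_{h in N} eta(h)
    return sum(zeta**(j * h) for h in range(k))

def char_sum_over_dual(h):              # sum_{eta in N-hat} eta(h)
    return sum(zeta**(j * h) for j in range(k))

assert np.isclose(char_sum_over_group(0), k)                       # eta = iota
assert all(np.isclose(char_sum_over_group(j), 0) for j in range(1, k))
assert np.isclose(char_sum_over_dual(0), k)                        # h = iota
assert all(np.isclose(char_sum_over_dual(h), 0) for h in range(1, k))

# Any k-th root of unity zeta^l != 1 satisfies sum_{j=1}^{k} (zeta^l)^j = 0,
# including non-primitive ones such as zeta^2.
assert np.isclose(sum(zeta**j for j in range(1, k + 1)), 0)
assert np.isclose(sum((zeta**2)**j for j in range(1, k + 1)), 0)
```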
\begin{lem}\label{lem:outer} Let $k$ be a natural number with $k\geq 2$ and $r$ a real number with $0<r<1$. Then there exist a simple unital monotracial AF algebra $A$ and a unitary element $V$ in $\pi_{\tau_{A}}(A)^{''}$ such that $\mathrm{Ad}(V)$ induces an outer action of $\mathbb{Z}_k$ on $A$, $V^{k}=1$ and $\tilde{\tau}_{A}(V^{l})=r$ for any $1\leq l \leq k-1$. \end{lem} \begin{proof} By the Effros-Handelman-Shen theorem \cite{EHS} (or \cite[Theorem 2.2]{Ell-order}), there exists a simple unital monotracial AF algebra $B$ such that $$ K_0(B)=\left\{a_0+\sum_{n=1}^{m}a_nr^{\frac{1}{2^{n}}}\in\mathbb{R}\;
|\; m\in\mathbb{N}, a_0, a_1,..., a_m\in\mathbb{Q}\right\}, $$ $K_0(B)_{+}=K_0(B)\cap \mathbb{R}_{+}$ and $[1]_0=1$. For any $n\in\mathbb{N}$, there exist mutually orthogonal projections $p_{1,n}$,..., $p_{k, n}$ in $B$ such that $\sum_{j=1}^{k}p_{j, n}=1$, $\tau_{B}(p_{k, n})=\frac{1+(k-1)r^{\frac{1}{2^n}}}{k}$ and $\tau_{B}(p_{j, n})=\frac{1-r^{\frac{1}{2^n}}}{k}$ for any $1\leq j \leq k-1$ because we have $\frac{1+(k-1)r^{\frac{1}{2^n}}}{k}\in K_0(B)_{+}\cap (\frac{1}{k},1)$, $\frac{1-r^{\frac{1}{2^n}}}{k}\in K_0(B)_{+}\cap (0,\frac{1}{k})$ and $\frac{1+(k-1)r^{\frac{1}{2^n}}}{k}+(k-1)\times \frac{1-r^{\frac{1}{2^n}}}{k}=1$. For any $n\in\mathbb{N}$, put $$ u_n:= \sum_{j=1}^{k}\zeta_{k}^{j}p_{j,n}\in B. $$ Then $u_n$ is a unitary element such that $u_n^{k}=1$ and we have \begin{align*} \tau_{B}(u_n^l) &= \tau_{B}\left(\sum_{j=1}^{k}\zeta_{k}^{lj}p_{j,n}\right) = \sum_{j=1}^{k}\zeta_k^{lj}\tau_{B}(p_{j,n}) = \sum_{j=1}^{k-1}\zeta_k^{lj} \times \frac{1-r^{\frac{1}{2^n}}}{k}+ \frac{1+(k-1)r^{\frac{1}{2^n}}}{k} \\ &=-\frac{1-r^{\frac{1}{2^n}}}{k} + \frac{1+(k-1)r^{\frac{1}{2^n}}}{k}=r^{\frac{1}{2^n}} \end{align*} for any $1\leq l \leq k-1$. Note that $\zeta_k^{l}$ is a root of unity and not equal to $1$. Let $A:=\bigotimes_{n=1}^{\infty} B$. Then $A$ is a simple unital monotracial AF algebra. Note that the unique tracial state $\tau_{A}$ is a product of traces $\tau_{B}$ in each component. For any $n\in\mathbb{N}$, put $$ w_n:= u_1\otimes u_2\otimes \cdots \otimes u_n\otimes 1 \otimes \cdots \in \bigotimes_{n=1}^{\infty} B=A. $$ Then $w_n$ is a unitary element such that $w_n^k=1$. We shall show that $\{w_n\}_{n\in\mathbb{N}}$ is a Cauchy sequence with respect to the 2-norm. Let $\varepsilon>0$. Take a natural number $N$ such that $\sum_{j=n+1}^{m}\frac{1}{2^{j}}<\log_{r}{(1-\varepsilon)}$ for any $m> n\geq N$. 
Then we have \begin{align*} \tau_{A}((w_{m}-w_{n})^*(w_{m}-w_{n})) &=2-2\mathrm{Re}\, \tau_{A}(1\otimes \cdots \otimes 1\otimes u_{n+1}\otimes \cdots \otimes u_{m} \otimes 1 \otimes \cdots) \\ &=2- 2\prod_{j=n+1}^{m}r^{\frac{1}{2^j}} =2- 2r^{\sum_{j=n+1}^{m}\frac{1}{2^j}}< 2\varepsilon \end{align*} for any $m> n\geq N$. Hence there exists a unitary element $V$ in $\pi_{\tau_{A}}(A)^{''}$ such that $\{w_n\}_{n\in\mathbb{N}}$ converges to $V$ in the strong-$^*$ topology. We have $V^{k}=1$ and \begin{align*} \tilde{\tau}_{A}(V^l)= \lim_{n\to\infty}\tau_{A}(w_n^l)=\lim_{n\to \infty}\tau_A(u_1^l)\tau_{A}(u_2^l)\cdots \tau_{A}(u_n^l)=\prod_{n=1}^{\infty}r^{\frac{1}{2^n}}=r \end{align*} for any $1\leq l\leq k-1$. It is easy to see that $\mathrm{Ad}(V)$ induces an action $\alpha$ of $\mathbb{Z}_k$ on $A$. Note that we have $$ \alpha_{[l]} (x_1\otimes x_2\otimes \cdots \otimes x_n\otimes 1 \otimes \cdots) =u_1^{l}x_1u_1^{*l}\otimes u_2^{l}x_2u_2^{*l}\otimes \cdots \otimes u_n^{l}x_nu_n^{*l}\otimes 1 \otimes \cdots $$ for any $1\leq l \leq k$. We shall show that $\alpha$ is outer. Let $l$ be a natural number with $1\leq l \leq k-1$. Since we have $\tau_{B}(p_{k,n})>\tau_{B}(p_{1,n})$ for any $n\in\mathbb{N}$, there exists a partial isometry $s_n$ in $B$ such that $s_ns_n^*=p_{1,n}$ and $s_n^*s_n \leq p_{k,n}$. Put $$ x_n:= \overbrace{1\otimes \cdots \otimes 1}^{n-1}\otimes s_n \otimes 1 \otimes \cdots. $$ Then $(x_n)_n$ is a central sequence in $A$ of norm one. Since we have $u_n^{l}s_nu_n^{*l}=\zeta_k^{l}s_n$, $\alpha_{[l]}(x_n)=\zeta_k^{l}x_n$ for any $n\in\mathbb{N}$. Therefore $\alpha_{[l]}$ induces a non-trivial automorphism of $F(A)$. This implies that $\alpha_{[l]}$ is not an inner automorphism of $A$. Consequently, $\alpha$ is outer. \end{proof}
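The trace computations in the proof above can be tested numerically. The sketch below (the parameters $k=3$, $r=0.5$ are illustrative) checks $\tau_B(u_n^l)=r^{1/2^n}$, the telescoping product $\tilde{\tau}_A(V^l)=r$, and the $2$-norm Cauchy estimate.

```python
import numpy as np

k, r = 3, 0.5                          # illustrative parameters
zeta = np.exp(2j * np.pi / k)

def trace_u_power(n, l):
    """tau_B(u_n^l), computed from the projection traces in the proof."""
    s = r ** (1.0 / 2**n)
    tr_p = [(1 - s) / k] * (k - 1) + [(1 + (k - 1) * s) / k]  # p_{1,n},...,p_{k,n}
    return sum(zeta**(l * j) * tr_p[j - 1] for j in range(1, k + 1))

# tau_B(u_n^l) = r^(1/2^n) for 1 <= l <= k-1 ...
for n in range(1, 6):
    for l in range(1, k):
        assert np.isclose(trace_u_power(n, l), r ** (1.0 / 2**n))

# ... hence tau_A(w_n^l) = prod_{j=1}^{n} r^(1/2^j) = r^(1 - 1/2^n) -> r.
w_traces = [np.prod([r ** (1.0 / 2**j) for j in range(1, n + 1)])
            for n in range(1, 40)]
assert np.isclose(w_traces[-1], r)

# 2-norm estimate: ||w_m - w_n||_2^2 = 2 - 2 r^{sum_{j=n+1}^m 1/2^j},
# which is small once n is large, so (w_n) is Cauchy in the 2-norm.
m, n = 30, 20
assert 2 - 2 * r ** sum(1.0 / 2**j for j in range(n + 1, m + 1)) < 1e-5
```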
The following theorem is the main result in this section.
\begin{thm} Let $\Gamma$ be a finite abelian group, and let $N$ be a subgroup of $\Gamma$. Suppose that $m$ is a probability measure with full support on the set $P_{N}$ of minimal projections in $\mathbb{C}N$. Then there exist a simple unital monotracial AF algebra $A_{(\Gamma, N, m)}$ and an outer action $\alpha^{(\Gamma, N, m)}$ on $A_{(\Gamma, N, m)}$ such that the characteristic invariant of $\tilde{\alpha}^{(\Gamma, N, m)}$ is trivial, $N(\tilde{\alpha}^{(\Gamma, N, m)})=N$ and $i(\tilde{\alpha}^{(\Gamma, N, m)})=[m]$. \end{thm} \begin{proof} Since $\Gamma$ is a finite abelian group, we may assume that there exist a natural number $n$ and prime powers $k_1$,..., $k_{n}$ such that $\Gamma=\bigoplus_{j=1}^{n} \mathbb{Z}_{k_j}$. For any $0\leq i \leq n$, let $$
N_{i}:=\left\{([l_j])_{j=1}^n\in N\subseteq \bigoplus_{j=1}^{n} \mathbb{Z}_{k_j}\; |\;
i=|\{j\in\{1,...,n\}\; |\; [l_j]\neq [0]\}| \right\}. $$ Then we have $N=\bigsqcup_{i=0}^n N_{i}$ (which is a disjoint union) and $N_i^{-1}=N_{i}$ for any $0\leq i \leq n$. We identify $P_{N}$ with
$\left\{p_{\eta}=\frac{1}{|N|}\sum_{h\in N} \eta (h)h\; |\; \eta\in \hat{N}\right\}$. Since $m$ has full support, there exists a real number $r$ with $0<r<1$ such that $$
\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h) \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})\geq 0 $$ for any $\eta\in \hat{N}$. Indeed, we have \begin{align*}
\lim_{r\to 1-0}\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h)\overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})
&= \frac{1}{|N|}\sum_{h\in N}\sum_{\eta^{\prime}\in \hat{N}}\eta(h)\overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}}) \\
&=\frac{1}{|N|}\sum_{\eta^{\prime}\in \hat{N}}m(p_{\eta^{\prime}})\sum_{h\in N}\eta\eta^{\prime-1}(h) \\
&= \frac{1}{|N|}\times m(p_{\eta})\times |N| =m(p_{\eta})>0 \end{align*} and \begin{align*}
\overline{\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h) \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})}
&=\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}} \overline{\eta(h)}\eta^{\prime}(h)m(p_{\eta^{\prime}}) \\
&= \frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}} \eta(h^{-1})\overline{\eta^{\prime}(h^{-1})}m(p_{\eta^{\prime}}) \\
&= \frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h) \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}}) \end{align*} for any $\eta\in \hat{N}$. Hence a sufficiently large $r<1$ satisfies the property above. Define a map $m^{\prime}$ from $P_{N}$ to $\mathbb{R}_{+}$ by $$ m^{\prime}(p_{\eta})
=\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h) \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}}) $$ for any $\eta\in \hat{N}$. Since we have \begin{align*} \sum_{\eta\in \hat{N}} m^{\prime}(p_{\eta})
&=\frac{1}{|N|}\sum_{\eta\in \hat{N}}\sum_{j=0}^{n}\frac{1}{r^j} \sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}}\eta(h) \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}}) \\
&=\frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j} \sum_{h\in N_j}\sum_{\eta^{\prime}\in \hat{N}} \overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})\sum_{\eta\in \hat{N}}\eta(h) \\
&= \frac{1}{|N|}\sum_{\eta^{\prime}\in \hat{N}}
\overline{\eta^{\prime}(\iota)}m(p_{\eta^{\prime}})\times |N|= \sum_{\eta^{\prime}\in \hat{N}}m(p_{\eta^{\prime}}) =1, \end{align*} $m^{\prime}$ is a probability measure on $P_{N}$.
Lemma \ref{lem:jones} implies that there exist a simple unital monotracial AF algebra $B_0$, an action $\beta^{(0)}$ on $B_0$ of $\Gamma$ and a unitary representation $u$ of $N$ on $B_0$ such that $N(\beta^{(0)})=N(\tilde{\beta}^{(0)})=N$, $\beta^{(0)}_h=\mathrm{Ad}(u_h)$, $\beta^{(0)}_g(u_h)=u_h$ for any $h\in N$ and $g\in \Gamma$ and $\tau_{B_0}(\Phi_{u}(p_{\eta}))=m^{\prime}(p_{\eta})$ for any $\eta\in \hat{N}$. Note that we have \begin{align*} \sum_{\eta\in\hat{N}}\overline{\eta(h)}\Phi_{u}\left(p_{\eta}\right)
= \frac{1}{|N|}\sum_{\eta\in\hat{N}}\sum_{h^{\prime}\in N} \overline{\eta(h)} \eta (h^{\prime})u_{h^{\prime}}
= \frac{1}{|N|}\sum_{h^{\prime}\in N}u_{h^{\prime}}\sum_{\eta\in\hat{N}}\eta(h^{-1}h^{\prime})=u_{h} \end{align*} for any $h\in N$. Hence if $h$ is an element in $N_{i}$ for some $0\leq i \leq n$, then \begin{align*} \tau_{B_0}(u_{h})&= \tau_{B_0}\left(\sum_{\eta\in\hat{N}}\overline{\eta(h)}\Phi_{u}\left(p_{\eta}\right)\right) =\sum_{\eta\in \hat{N}}\overline{\eta(h)}m^{\prime}(p_{\eta}) \\
&= \frac{1}{|N|}\sum_{\eta\in\hat{N}} \sum_{j=0}^{n}\frac{1}{r^j}\sum_{h^{\prime}\in N_j}\sum_{\eta^{\prime}\in \hat{N}} \overline{\eta(h)}\eta(h^{\prime})\overline{\eta^{\prime}(h^{\prime})}m(p_{\eta^{\prime}}) \\
&= \frac{1}{|N|}\sum_{j=0}^{n}\frac{1}{r^j}\sum_{h^{\prime}\in N_j}\sum_{\eta^{\prime}\in \hat{N}} \overline{\eta^{\prime}(h^{\prime})}m(p_{\eta^{\prime}}) \sum_{\eta\in\hat{N}}\eta(h^{-1}h^{\prime}) \\
&= \frac{1}{|N|} \times \frac{1}{r^i}\sum_{\eta^{\prime}\in \hat{N}}
\overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})\times |N| =\frac{1}{r^i}\sum_{\eta^{\prime}\in \hat{N}}\overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}}). \end{align*} For any natural number $j$ with $1\leq j\leq n$, there exist a simple unital monotracial AF algebra $B_j$ and a unitary element $V_j$ in $\pi_{\tau_{B_j}}(B_j)^{''}$ such that $\mathrm{Ad}(V_j)$ induces an outer action of $\mathbb{Z}_{k_j}$ on $B_j$, $V_{j}^{k_j}=1$ and $\tilde{\tau}_{B_j}(V_j^{l})=r$ for any $1\leq l \leq k_j-1$ by Lemma \ref{lem:outer}. Let $\beta^{(j)}$ be the induced action by $\mathrm{Ad}(V_j)$ on $B_j$ of $\mathbb{Z}_{k_j}$. Put $$ A_{(\Gamma, N, m)}:= \bigotimes_{j=0}^n B_j $$ and define an action $\alpha^{(\Gamma, N, m)}$ on $A_{(\Gamma, N, m)}$ of $\Gamma$ by $$ \alpha^{(\Gamma, N, m)}_{g}:=\beta^{(0)}_{g}\otimes \bigotimes_{j=1}^{n} \beta^{(j)}_{[l_j]} $$ for any $g=([l_j])_{j=1}^{n}\in \Gamma=\bigoplus_{j=1}^{n} \mathbb{Z}_{k_j}$. Then $A_{(\Gamma, N, m)}$ is a simple unital monotracial AF algebra. We denote by $\tau$ the unique tracial state on $A_{(\Gamma, N, m)}$. It can be easily checked that $N(\alpha^{(\Gamma, N, m)})=\{\iota\}$ and $N(\tilde{\alpha}^{(\Gamma, N, m)})=N$ since we have $N(\beta^{(j)})=\{\iota \}$ for any $1\leq j\leq n$ and $N(\tilde{\beta}^{(0)})=N$.
Define a unitary representation $v$ of $N$ on $\pi_{\tau}( A_{(\Gamma, N, m)})^{''}$ by $$ v_{h}:=\pi_{\tau_{B_0}}(u_{h})\otimes \bigotimes_{j=1}^nV_j^{l_j} \in \pi_{\tau_{B_0}}(B_0)^{''}\otimes \bigotimes_{j=1}^n \pi_{\tau_{B_j}}(B_j)^{''} \cong \pi_{\tau}( A_{(\Gamma, N, m)})^{''} $$ for any $h=([l_j])_{j=1}^n\in N$. Then $\tilde{\alpha}^{(\Gamma, N, m)}_{h}=\mathrm{Ad}(v_{h})$ and $\tilde{\alpha}^{(\Gamma, N, m)}_{g}(v_h)=v_h$ for any $h\in N$ and $g\in \Gamma$. This implies that the characteristic invariant of $\tilde{\alpha}^{(\Gamma, N, m)}$ is trivial. Since we have \begin{align*} \tilde{\tau} (\Phi_{v}(p_{\eta}))
&=\frac{1}{|N|}\sum_{h\in N}\eta(h) \tilde{\tau} (v_h) \\ &
=\frac{1}{|N|}\sum_{h=([l_j])_{j=1}^n\in N}\eta(h) \tau_{B_0} (u_{h}) \times \prod_{j=1}^{n}\tilde{\tau}_{B_j}(V_j^{l_j}) \\
&= \frac{1}{|N|}\sum_{j=0}^n\sum_{h\in N_{j}}\eta(h)\times \frac{1}{r^j}\sum_{\eta^{\prime}\in \hat{N}}\overline{\eta^{\prime}(h)}m(p_{\eta^{\prime}})\times r^{j} \\
&= \frac{1}{|N|}\sum_{h\in N}\sum_{\eta^{\prime}\in \hat{N}}\eta\eta^{\prime-1}(h)m(p_{\eta^{\prime}}) \\
&= \frac{1}{|N|}\sum_{\eta^{\prime}\in \hat{N}}m(p_{\eta^{\prime}})\sum_{h\in N}\eta\eta^{\prime-1}(h)\\
&=\frac{1}{|N|}\times m(p_{\eta})\times |N| =m(p_{\eta}), \end{align*} $i(\tilde{\alpha}^{(\Gamma, N, m)})=[m]$. Consequently, the proof is complete. \end{proof}
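The bookkeeping in the proof can be tested numerically in the simplest case $\Gamma=N=\mathbb{Z}_k$ with $n=1$, where $N_0=\{\iota\}$ and $N_1=N\setminus\{\iota\}$. The sketch below (the values $k=3$, $r=0.95$ and the random full-support $m$ are illustrative) checks that $m^{\prime}$ is a probability measure and that tensoring with the trace values $\tilde{\tau}_{B_1}(V^l)=r$ recovers $m$.

```python
import numpy as np

k, r = 3, 0.95                          # illustrative: Gamma = N = Z_k, r near 1
zeta = np.exp(2j * np.pi / k)
rng = np.random.default_rng(0)
m = rng.random(k)
m /= m.sum()                            # full-support probability on P_N

def weight(h):                          # 1/r^{i(h)}: here N_0 = {0}, N_1 = rest
    return 1.0 if h == 0 else 1.0 / r

# m'(p_a) = (1/|N|) sum_h (1/r^{i(h)}) sum_b eta_a(h) conj(eta_b(h)) m(p_b)
m_prime = np.array([
    sum(weight(h) * zeta**(a * h)
        * sum(np.conj(zeta**(b * h)) * m[b] for b in range(k))
        for h in range(k)) / k
    for a in range(k)])
assert np.allclose(m_prime.imag, 0) and (m_prime.real >= 0).all()
assert np.isclose(m_prime.real.sum(), 1)        # m' is a probability measure

# tau_{B_0}(u_h) = sum_a conj(eta_a(h)) m'(p_a) = (1/r^{i(h)}) sum_b conj(eta_b(h)) m(p_b)
tau_u = np.array([sum(np.conj(zeta**(a * h)) * m_prime[a] for a in range(k))
                  for h in range(k)])

# Tensoring with V (tilde-tau_{B_1}(V^l) = r for l != 0) recovers m exactly.
recovered = np.array([
    sum(zeta**(a * h) * tau_u[h] * (1.0 if h == 0 else r) for h in range(k)) / k
    for a in range(k)])
assert np.allclose(recovered, m)
```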
The following corollary is an immediate consequence of Proposition \ref{pro:realized-invariant}, Corollary \ref{main:cor} and the theorem above.
\begin{cor} Let $A$ be a simple separable nuclear monotracial C$^*$-algebra, and let $\alpha$ be an outer action of a finite abelian group $\Gamma$ on $A$. Assume that the characteristic invariant of $\tilde{\alpha}$ is trivial. Then there exists a probability measure $m$ with full support on $P_{N(\tilde{\alpha})}$ such that $\alpha\otimes\mathrm{id}_{\mathcal{W}}$ on $A\otimes \mathcal{W}$ is conjugate to $\alpha^{(\Gamma, N(\tilde{\alpha}), m)}\otimes \mathrm{id}_{\mathcal{W}}$ on $A_{(\Gamma, N(\tilde{\alpha}), m)}\otimes \mathcal{W}$. \end{cor}
\section*{Acknowledgments} The author would like to thank Eusebio Gardella for pointing out a misleading terminology.
\end{document} |
\begin{document}
\title{Weak Decoherence and Quantum Trajectory Graphs}
\begin{abstract} Griffiths' ``quantum trajectories'' formalism is extended to describe weak decoherence. The decoherence conditions are shown to severely limit the complexity of histories composed of fine-grained events. \end{abstract}
\begin{flushright} CPP-94-33 \end{flushright}
In response to the increasingly popular opinion that the Copenhagen interpretation of quantum mechanics raises more questions than it answers \cite{om2} and a desire to treat the entire universe quantum mechanically, Gell-Mann and Hartle \cite{SFI,misner,D47} have worked to create an alternative interpretation of quantum theory, expanding upon earlier work by Griffiths \cite{consis} and Omn\`{e}s \cite{om1}. Their scheme emphasizes not individual events but Griffiths' notion of a {\em history}, a sequence of events at a succession of times, and they assert that the histories to which one assigns probabilities are distinguished not by measurements made by an external classical ``observer'' but by the extent to which they satisfy certain ``consistency'' or ``decoherence'' conditions guaranteeing compliance with the classical rules of probability.
Yet basic questions remain largely unanswered: How restrictive are the decoherence conditions? What kinds of histories decohere? Do they occur in sufficient variety to describe the physical world?
These questions have led us to investigate several aspects of decoherence. We have extended Griffiths' ``quantum trajectories'' formalism \cite{graph} to describe {\em weakly} decohering sets of histories. We have found severe limits on the structure of fine-grained decohering histories.
Following Gell-Mann and Hartle, we let an {\em event} be described by a projection operator $P_{\alpha}$. If $P_{\alpha}$ is one-dimensional we say the event is {\em fine-grained}; otherwise the event is {\em coarse-grained}. A {\em complete} set of events $\{ P_{\alpha} \}$ forms a resolution of the identity: \begin{equation}
\sum_{\alpha} P_{\alpha} = I \ \ {\rm and} \ \ P_{\alpha} P_{\beta} =
\delta_{\alpha\beta} P_{\beta}. \end{equation} Let $\{P_{\alpha_{k}}(t_{k})\}$ be the complete set of events (in the Heisenberg picture) at time $t_{k}$. The probability that event $P_{\alpha_{1}} (t_{1})$ will occur at time $t_{1}$, $P_{\alpha_{2}}(t_{2})$ at time $t_{2}$, \ldots, and $P_{\alpha_{n}}(t_{n})$ at time $t_{n}$ is \cite{misner} \begin{eqnarray}
\lefteqn{ p(\alpha_{1}, \alpha_{2}, \ldots, \alpha_{n}) = } \nonumber \\
& & {\rm Tr\,}(P_{\alpha_{n}}(t_{n}) \ldots P_{\alpha_{2}}(t_{2})
P_{\alpha_{1}}(t_{1}) \rho P_{\alpha_{1}}(t_{1})
P_{\alpha_{2}}(t_{2}) \ldots P_{\alpha_{n}}(t_{n}))
\label{projprop} \end{eqnarray} for an initial state described by a density operator $\rho$. With this sequence of events we associate the {\em history} $C_{\alpha}$ defined by \begin{equation}
C_{\alpha} = P_{\alpha_{n}}(t_{n}) \ldots P_{\alpha_{2}}(t_{2})
P_{\alpha_{1}}(t_{1}), \end{equation} in terms of which (\ref{projprop}) becomes \begin{equation}
p(\alpha) = {\rm Tr\,}(C_{\alpha} \rho \mbox{$C_{\alpha}$}^{\dag}).
\label{prob} \end{equation} With this expression in mind, we define the {\em decoherence functional} between histories $C_{\alpha}$ and $C_{\beta}$ for an initial state $\rho$ by \begin{equation}
D(\alpha, \beta) = {\rm Tr\,}(C_{\alpha} \rho \mbox{$C_{\beta}$}^{\dag}), \end{equation} and we say $\{C_{\alpha}\}$ forms a {\em weakly decohering} set of histories iff \begin{equation}
{\rm Re\,}D(\alpha, \beta) = 0 \ \ {\rm for\ all} \ \ \alpha \neq
\beta. \label{decohere1} \end{equation} This condition guarantees that the probabilities associated with the histories in $\{C_{\alpha}\}$ obey the classical rules of probability \cite{misner}. For such a set of histories, the decoherence condition (\ref{decohere1}) and the probability formula (\ref{prob}) may be combined in the equation \begin{equation}
{\rm Re\,}D(\alpha, \beta) = p(\alpha)\delta_{\alpha\beta}.
\label{decohere2} \end{equation} A set of histories which satisfies the stronger condition \begin{equation}
D(\alpha, \beta) = p(\alpha)\delta_{\alpha\beta} \label{stdec} \end{equation} is said to exhibit {\em medium decoherence}. This condition is sufficient but not necessary to ensure compliance with the classical rules of probability. We will consider both kinds of decoherence. Finally, we will speak of individual histories $C_{\alpha}$ and $C_{\beta}$ decohering if they satisfy (\ref{decohere2}) (or (\ref{stdec})).
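A minimal example (ours, not drawn from the references) illustrates both conditions: take a qubit with trivial Hamiltonian, initial state $\rho=|0\rangle\langle 0|$, events at $t_1$ in the $y$-basis and at $t_2$ in the $x$-basis. This set of histories satisfies the weak decoherence condition (\ref{decohere1}) but not the medium decoherence condition (\ref{stdec}): the nonzero off-diagonal entries of the decoherence functional are purely imaginary.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ys = [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]  # t1 events
xs = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]    # t2 events

proj = lambda v: np.outer(v, v.conj())      # fine-grained event P = |v><v|
rho = proj(ket0)

# Histories C_alpha = P_{a2}(t2) P_{a1}(t1), one for each pair of events
# (trivial Hamiltonian: Heisenberg and Schrodinger pictures coincide).
histories = [proj(x) @ proj(y) for y in ys for x in xs]

# Decoherence functional D(alpha, beta) = Tr(C_alpha rho C_beta^dagger).
D = np.array([[np.trace(Ca @ rho @ Cb.conj().T) for Cb in histories]
              for Ca in histories])

off_diag = D - np.diag(np.diag(D))
assert np.allclose(off_diag.real, 0)            # weak decoherence holds
assert not np.allclose(off_diag, 0)             # medium decoherence fails
assert np.allclose(np.real(np.diag(D)), 0.25)   # each history has p = 1/4
```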
Histories with different final events always decohere, and any history which occurs with zero probability decoheres with all other histories. Further, any decohering set of histories can be extended by inserting between any two times a set of events identical (in the Heisenberg picture) to those at the earlier or later time; the set of histories that results still decoheres. This corresponds to inserting a set of events in the Schr\"{o}dinger picture which matches the earlier or later set aside from unitary evolution to the new time. We call this a {\em congruent extension}, since the new events are congruent with the old ones. In light of this, we will look for sets of histories with more than one nonzero-probability history but without congruent extensions.
We will use Griffiths' graphical representation \cite{graph} of ``consistent histories,'' in which he represents the set of possible events at each time with an orthonormal basis of the system's state space. (In this formalism, every event is fine-grained.) He represents the set of histories produced by this choice of events with a {\em trajectory graph}, in which each event at time $t_{j}$ corresponds to a node in the $j^{th}$ column of the graph and a line is drawn between nodes in adjacent columns iff the transition amplitude between the corresponding events is nonzero. Figure~\ref{unfor} presents two examples of such graphs. Each path (unbroken line through two or more nodes) through a trajectory graph represents a nonzero-probability history with initial state given by the first node in the path. The set of histories described by the graph satisfies the {\em noninterference condition} if any two nodes are connected by at most one path. We will show immediately below that the noninterference condition is equivalent to {\em medium} decoherence of the set of histories with any node in the graph as the initial state. However, we wish to follow in the spirit of Gell-Mann and Hartle, in which decoherence is a function of the initial state as well as the histories themselves. Further, since medium decoherence is a more stringent requirement than is actually necessary, we would like to have a condition for {\em weak} decoherence in terms of these graphs. As we will also prove below, the required condition is that {\em at most {\rm two} distinct paths connect any two events, and if there are two paths, the phases of the corresponding amplitudes differ by $\frac{\pi}{2}$}. Thus, we will use a modified form of Griffiths' quantum trajectory formalism in which (1) we specify the initial state (producing what Griffiths would call an {\em elementary family} of trajectories) and (2) we impose the requirement of weak, not medium, decoherence. 
Figures~\ref{twolev} and \ref{maxnon} provide examples of such graphs.
Both decoherence conditions mentioned above are special cases of the following theorem.
\\ {\bf Theorem 1.} Suppose $\{C_{\alpha}\}$ is a decohering set of histories
with initial state $\rho$ and suppose that two possible events $|j \rangle
\langle j|$ at time $t_{j}$ and $|k \rangle \langle k|$ at a later time
$t_{k}$ are fine-grained. If at least one history leading to event $|j
\rangle \langle j|$ occurs with nonzero probability, then of all histories
which lead from $|j \rangle \langle j|$ to $|k \rangle \langle k|$, at
most {\em two} occur with nonzero probability. If two occur, then the
phases of the corresponding amplitudes differ by $\frac{\pi}{2}$. If the
set $\{C_{\alpha}\}$ exhibits {\em medium} decoherence, at most {\em one}
history leading from $|j \rangle \langle j|$ to $|k \rangle \langle k|$
occurs with nonzero probability.
\\
{\bf Proof.} Any history leading from $| j \rangle \langle j |$ to $|k \rangle
\langle k|$ can be written as \begin{equation}
C_{\alpha} = |k \rangle \langle k| D_{\alpha}|j \rangle \langle j|
\label{abc} \end{equation} and the decoherence condition (\ref{decohere1}) applied to any two histories which include $C_{\alpha}$ and $C_{\beta}$ where $\alpha \neq \beta$ can be reduced to \begin{equation}
{\rm Re\,} \langle k | D_{\alpha} | j \rangle \langle k | D_{\beta} | j
\rangle^{*} = 0. \label{orth} \end{equation} If both amplitudes are nonvanishing, then \begin{equation}
\arg (\langle k | D_{\alpha} | j \rangle) - \arg (\langle k | D_{\beta} |
j \rangle) = \pm \frac{\pi}{2}. \label{pi2} \end{equation}
Thus, any two numbers in the set $\{\langle k | D_{\alpha} | j \rangle\}$ are orthogonal in the complex plane. Since the complex plane is
{\em two-dimensional}, at most {\em two} of the $\langle k | D_{\alpha} | j \rangle$ are nonzero. If there are two, they have the promised phase difference of $\frac{\pi}{2}$. Had we assumed that the histories exhibited {\em medium} decoherence, we would have used the decoherence condition (\ref{stdec}) and in (\ref{orth}) we would not have taken the real part; then
at most {\em one} member of the set $\{\langle k | D_{\alpha} | j \rangle\}$ would be nonvanishing.
$\Box$
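The dimension count at the heart of this proof is easy to verify concretely. The following sketch (ours, not part of the original argument) treats amplitudes as vectors in $\mathbb{R}^2$ and checks that a third amplitude weakly orthogonal to two independent ones must vanish:

```python
import numpy as np

def weakly_orthogonal(z1, z2):
    # Condition (orth): Re(z1 * conj(z2)) = 0, i.e. the two amplitudes
    # are orthogonal when viewed as vectors in the real plane R^2.
    return abs(np.real(z1 * np.conj(z2))) < 1e-12

z1, z2 = 1.0 + 0j, 2j            # phases differ by pi/2
assert weakly_orthogonal(z1, z2)

# Any z3 = x + i*y weakly orthogonal to both z1 and z2 must satisfy
# Re(z3 * conj(z1)) = 0 and Re(z3 * conj(z2)) = 0; since z1, z2 span R^2,
# the only solution is z3 = 0.
A = np.array([[np.real(z1), np.imag(z1)],
              [np.real(z2), np.imag(z2)]])   # rows: z1, z2 as R^2 vectors
x, y = np.linalg.solve(A, np.zeros(2))       # unique solution, det(A) != 0
assert abs(x) < 1e-12 and abs(y) < 1e-12
```

The same two-dimensionality of the complex plane is what limits weakly decohering histories to at most double connections below.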
If the initial state of the system is pure, then the theorem is still valid if
we replace $| j \rangle \langle j |$ with $\rho$. Thus, if the initial state is pure and the set $\{C_{\alpha}\}$ exhibits weak (medium) decoherence, then at most two histories connect (one history connects) the initial state to any fine-grained event with nonzero probability.
An immediate consequence of this theorem is interesting enough to be a theorem of its own.
\\ {\bf Theorem 2.} If $t_{j} < t_{k} < t_{l}$ and a nonzero-probability history leads to a fine-grained event at $t_{j}$ which does not occur at $t_{k}$ but occurs again at $t_{l}$, then no set of histories containing these events can decohere.
\\ {\bf Proof.} At least two nonzero-probability histories must connect the event at $t_{j}$ to its twin at $t_{l}$. Further, the product of the amplitude for one history and the complex conjugate of that for the other is real (and positive), because the factors linking $t_{j}$ to $t_{k}$ are the complex conjugates of those linking $t_{k}$ to $t_{l}$. Thus condition (\ref{orth}) of Theorem~1 cannot be satisfied.
$\Box$
With Theorem~1 in hand we can immediately describe all possible decohering sets of histories of a two-level system (spin $\frac{1}{2}$) with a pure initial state. All sets of histories with one event after the initial state exhibit (medium) decoherence automatically; thus we begin by considering two-event sets. We assume the system is initially polarized in the direction {\boldmath $\vec{\imath}$}, polarized parallel or antiparallel to {\boldmath $\vec{n}$} at $t_{1}$, and parallel or antiparallel to {\boldmath $\vec{f}$} at $t_{2}$. Writing the corresponding projection operators in the standard way using Pauli matrices, one discovers that the weak decoherence condition (\ref{decohere1}) becomes \begin{equation}
( \mbox{\boldmath $\vec{\imath}$} \times \mbox{\boldmath $\vec{n}$} ) \cdot
( \mbox{\boldmath $\vec{n}$} \times \mbox{\boldmath $\vec{f}$} ) = 0. \label{geocond} \end{equation} (This result is not new \cite{om2}.) Only these sets of two-event histories weakly decohere. Further, every decohering set of histories with three or more events is a congruent extension of a two-event set; if it were not, the number of nonzero-probability histories would be at least five, so at least three would lead from the initial state to one of the two final events, which Theorem~1 does not allow.
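The geometric condition (\ref{geocond}) can be checked numerically. The sketch below is our illustration (it assumes trivial unitary evolution between the events, and the helper names are ours); it builds the projectors from Pauli matrices and verifies that the real part of the off-diagonal decoherence-functional element vanishes exactly when $(\vec{\imath}\times\vec{n})\cdot(\vec{n}\times\vec{f})=0$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def proj(v):
    """Projector (1 + v.sigma)/2 for spin-up along the unit vector v."""
    return (I2 + v[0] * sx + v[1] * sy + v[2] * sz) / 2

def off_diag(i_vec, n_vec, f_vec):
    """Off-diagonal decoherence-functional element Tr[P_f P_n+ rho P_n-],
    with initial state rho = proj(i_vec) and trivial time evolution."""
    rho = proj(i_vec)
    return np.trace(proj(f_vec) @ proj(n_vec) @ rho @ proj(-n_vec))

i_vec, n_vec = np.array([1., 0., 0.]), np.array([0., 0., 1.])

f_ok = np.array([0., 1., 0.])   # (i x n).(n x f) = 0: weak decoherence holds
f_bad = np.array([1., 0., 0.])  # condition violated

assert abs(np.real(off_diag(i_vec, n_vec, f_ok))) < 1e-12
assert abs(np.real(off_diag(i_vec, n_vec, f_bad))) > 0.1
assert abs(np.dot(np.cross(i_vec, n_vec), np.cross(n_vec, f_ok))) < 1e-12
assert abs(np.dot(np.cross(i_vec, n_vec), np.cross(n_vec, f_bad))) > 0.1
```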
If we were to impose {\em medium} decoherence instead, the allowed sets of histories would simplify considerably. Theorem~1 allows at most one nonzero-probability history to lead from the initial to each of the final states; thus the total number of nonzero-probability histories would be at most two. Any set of histories which is not a congruent extension of a one-event set will have at least three nonzero-probability histories; thus it would not decohere. Therefore the only sets of fine-grained histories of a two-level system which exhibit medium decoherence are one-event sets and their congruent extensions. In Griffiths' language, we have shown that weakly decohering sets of histories corresponding to the graph in Figure~\ref{twolev}(a) exist, but the only sets exhibiting medium decoherence are represented by graphs like the one in Figure~\ref{twolev}(b), a congruent extension of a one-event set.
We call an event in a trajectory graph {\em connected} if its node leads back to the initial state through at least one path (if at least one history leading to the event from the initial state occurs with nonzero probability). We call it {\em singly} connected if exactly one path leads back to the initial state, {\em doubly} connected if two paths lead back to the initial state. In these terms, Theorem~1 demands that every fine-grained event in a decohering set of histories be at most doubly connected (or singly connected if the set exhibits medium decoherence). An event is unconnected iff it has no overlap with the connected events at the previous time; thus the unconnected events at any time lie in the span of the unconnected events at the previous time. Therefore the number of {\em connected} events is a nondecreasing function of time.
\\ {\bf Theorem 3.} In every transition between times in a decohering set of
histories represented by a trajectory graph, either \begin{enumerate} \item the connected events before and after the transition are identical; \item the number of connected events increases by at least one; \item the number of doubly connected events increases by at least {\em two};
or \item both 2 and 3 occur. \end{enumerate}
{\bf Proof.} All we need to prove is that if 1 and 2 do not occur, then 3 must occur. Thus, suppose the connected events before and after the transition from time $t_{j}$ to time $t_{j+1}$ are not identical, yet the number of connected events does not increase. Then at least one event at $t_{j+1}$ must be connected to two events at $t_{j}$, as shown in Figure~\ref{unfor}(a). However, that one event at $t_{j+1}$ cannot be the {\em only} one linked to two events at $t_{j}$; since the first event at $t_{j}$ is connected to only the first event at $t_{j+1}$, the two differ at most by a phase, and because the first and second events at $t_{j}$ are orthogonal, the first event at $t_{j+1}$ and the second event at $t_{j}$ must also be orthogonal. Thus at least {\em two} events at $t_{j+1}$ must be connected to two events (each) at $t_{j}$, as shown in Figure~\ref{unfor}(b). None of the doubly connected events from $t_{j}$ can be involved in this part of the transition (since that would make one of the events at $t_{j+1}$ at least triply connected); therefore, the number of doubly connected events increases by at least two.
$\Box$
In a set of histories represented by a trajectory graph, the system's behavior is specified at only a finite number of times. We might have hoped to better approximate continuous time evolution by inserting additional sets of events between those already in the graph. However, as the next theorem shows, the possibilities for this are very limited.
\\ {\bf Theorem 4.} Suppose that between times $t_{j}$ and $t_{j+1}$ in a
decohering set of histories represented by a trajectory graph, exactly
one step of change occurs: either the number of connected events
increases by {\em exactly} one or the number of doubly connected events
increases by {\em exactly} two (but not both). Then if an additional set
of events is inserted between $t_{j}$ and $t_{j+1}$ while maintaining
decoherence, it must be identical to either the set at $t_{j}$ or the
set at $t_{j+1}$.
\\ {\bf Proof.} Suppose that the new set is identical to neither the set before nor the set after. Then in the transition from $t_{j}$ to $t_{j+1}$ at least two steps of change must occur (one for the transition from $t_{j}$ to the intermediate time, one for the transition from the intermediate time to $t_{j+1}$).
$\Box$
The histories are restricted even more drastically if only {\em finitely} many events occur with nonzero probability (so only that many events are connected).
\\ {\bf Theorem 5.} Consider a trajectory graph representing a set of decohering
histories with a finite number $n$ of connected events at a particular
time. Excluding congruent extensions, the number of transitions prior to
that time is at most \mbox{$n + [\frac{n}{2}] - 2$}, where $[\,\cdot\,]$ denotes the
greatest integer function.
\\ {\bf Proof.} Suppose that the given set of decohering histories contains no congruent extensions. The number of connected events at time $t_{1}$ is therefore at least two, so the number of transitions that increase the number of connected events is at most \mbox{$n - 2$}. The number of transitions that increase the number of doubly connected events is at most $[\frac{n}{2}]$. Thus the total number of transitions is at most \mbox{$n + [\frac{n}{2}] - 2$}.
$\Box$
\\ This bound is the strongest possible, because for every $n$ there is a set of decohering histories in an $n$-dimensional space with this maximum number of noncongruent steps (the $n = 5$ case is illustrated in Figure~\ref{maxnon}). The consequences of this theorem are avoided only if the number of connected events is infinite right at the start, so that infinitely many events occur with nonzero probability at each time.
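The counting in the proof of Theorem~5 can be sketched in a few lines (our illustration, not part of the original argument):

```python
def max_noncongruent_transitions(n):
    # At most n - 2 transitions can raise the number of connected events
    # (it starts at >= 2 and never exceeds n), and at most n // 2 transitions
    # can raise the number of doubly connected events by two each.
    assert n >= 2
    return (n - 2) + n // 2    # equals n + [n/2] - 2

assert max_noncongruent_transitions(5) == 5   # the case of Figure "maxnon"
assert max_noncongruent_transitions(2) == 1
```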
Comparison with Figure~\ref{twolev} shows that in each of the last two transitions in Figure~\ref{maxnon} the system can be decomposed into two subspaces, in one of which the transition is to congruent events while in the other the transition is that of a two-level system. In fact, a large class of transitions is of this general type, as we show with our final theorem.
\\ {\bf Theorem 6.} In every transition in which the number of connected events
is finite and does not increase, the matrix describing the transition
between the {\em connected} events is block-diagonal (to within
rearrangement of the rows and columns), and each block is either \mbox{$2
\times 2$} or \mbox{$1 \times 1$}.
\\ {\bf Proof.} Let the transition from $t_{j}$ to $t_{j+1}$ leave the number of connected events $n$ unchanged. Since the span of the connected events at $t_{j}$ lies in the span of the connected events at $t_{j+1}$ and both have dimension $n$, the two subspaces are the same; so they have the same orthogonal complement. Thus each (un)connected event at $t_{j+1}$ overlaps {\em only} the (un)connected events at $t_{j}$. Since each connected event at $t_{j+1}$ is at most doubly connected, each is linked to either one or two connected events at $t_{j}$ and no others; thus the matrix describing the entire transition has at most two nonzero entries in each column representing a connected event at $t_{j+1}$. If a column has only one nonzero entry, then unitarity guarantees that the entry is also the only nonzero entry in its row; this yields all of the \mbox{$1 \times 1$} blocks. If a column has two nonzero entries, then the orthogonality of different rows and columns demands that the entries in the same two rows of {\em one} and {\em only} one other column are also nonzero. Those two columns together form a \mbox{$2 \times 2$} block; all other entries in their rows and columns are zero.
$\Box$
\\ This theorem reduces the allowed transitions to an extremely simple form; its restrictions are avoided only if the number of connected events (the number of events that occur with nonzero probability) increases continually over time.
These results suggest that the decoherence conditions strongly favor histories dominated overwhelmingly by congruent extensions. It is not surprising that decoherence selects out the histories that conform with the system's unitary evolution, but the extent to which they are preferred is remarkable. For example, only congruent events can occur between congruent events (Theorem~2), and if continuous classical evolution is to be approached by inserting events at more and more times, almost all insertions must be congruent extensions (Theorems 2, 4, and 5). Probabilities that are periodic in time, and that refer to only finitely many events at some time, must be probabilities of congruent events and are therefore constant in time. (The number of connected events cannot decrease, so if it is periodic it must be constant, and Theorem~6 applies.)
\begin{figure}
\caption{Griffiths trajectory graphs. (a) A candidate for a nontrivial
transition in which the number of connected events does not
increase. This graph is forbidden by the orthogonality of different
events at $t_{j}$. (b) Another candidate for the same transition.
The orthogonality of different events at $t_{j}$ demands that at
least two events at $t_{j+1}$ be doubly connected.}
\label{unfor}
\end{figure}
\begin{figure}
\caption{Trajectory graphs with a specified initial state. (a) A graph
corresponding to a weakly decohering set of histories of a two-level
system. (b) A graph corresponding to a set of histories exhibiting
medium decoherence.}
\label{twolev}
\end{figure}
\begin{figure}
\caption{A trajectory graph achieving, for $n = 5$ connected events, the
  maximum number \mbox{$n + [\frac{n}{2}] - 2$} of noncongruent
  transitions.}
\label{maxnon}
\end{figure}
\end{document}
\begin{document}
\title[A Note on Average of Roots of Unity]
{A Note on Average of Roots of Unity}
\author[C. Panraksa]{Chatchawan Panraksa} \address[Chatchawan Panraksa]{Science Division, Mahidol University International College\\ 999 Phutthamonthon 4 Road, Salaya, Nakhonpathom, Thailand 73170}
\email{[email protected]}
\author[P. Ruengrot]{Pornrat Ruengrot} \address[Pornrat Ruengrot]{Science Division, Mahidol University International College\\ 999 Phutthamonthon 4 Road, Salaya, Nakhonpathom, Thailand 73170}
\email{[email protected]}
\date{\today}
\begin{abstract}
We consider the problem of characterizing all functions $f$ defined on the set of integers modulo $n$ with the property that an average of some $n$th roots of unity determined by $f$ is always an algebraic integer. Examples of functions with this property are linear functions. We show that, when $n$ is a prime number, the converse also holds; that is, any function with this property is representable by a linear polynomial. Finally, we give an application of the main result to the problem of determining self perfect isometries for the cyclic group of prime order $p$. \end{abstract}
\maketitle
\section{Introduction}
Let $n$ be a positive integer. Denote by $\mathbb{Z}_n=\{0,1,\ldots,n-1\}$ the ring of integers modulo $n$. Let $\omega=e^{2\pi i/n}$ be a primitive $n$th root of unity. In this work, we consider the following problem.
\begin{problem}\label{the-problem}
Suppose $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ is a function such that the average \begin{equation}\label{eqn:integrality}
\mu_f^{a,b} = \frac{1}{n}\sum_{x=0}^{n-1}\omega^{af(x)+bx} \quad\text{is an algebraic integer for every } a,b\in\mathbb{Z}_n. \end{equation} What can be said about the function $f$? \end{problem}
When $n$ is a power of a prime, this problem is related to the problem of finding self perfect isometries (as defined in \cite{Broue1990isometries}) for cyclic $p$-groups, since the condition \eqref{eqn:integrality} is the integrality condition for perfect characters. This relationship is explained further in the last section.
\section{Preliminaries} We shall use the symbol $(=)$ to denote ordinary equality. The symbol $(\equiv)$ will be used to denote congruence $\pmod n$ (i.e., equality in $\mathbb{Z}_n$).
\begin{definition}
We say that a function $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ is a \emph{polynomial function} if there exists a polynomial $F\in \mathbb{Z}_n[X]$ such that
\[f(x)\equiv F(x) \quad\text{ for all }x=0,1,\ldots, n-1.\] \end{definition}
First we show that any function representable by a linear polynomial function satisfies the condition \eqref{eqn:integrality}.
\begin{lemma}\label{lem:linear-implies-integrality}
Suppose that $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ is a linear polynomial function. Then $\mu_f^{a,b}$ is an algebraic integer for every $a,b\in\mathbb{Z}_n$. \end{lemma}
\begin{proof}
If $f$ is given by a linear polynomial, then so is $af(x)+bx$ for any $a,b$. Suppose $af(x)+bx\equiv \alpha x+\beta$ for some $\alpha, \beta\in\mathbb{Z}_n$. If $\alpha\equiv 0$, then
\begin{align*}
\mu_f^{a,b} &= \frac{1}{n}\sum_{x=0}^{n-1}\omega^{\beta} = \omega^{\beta},
\end{align*} which is an algebraic integer. If $\alpha\not\equiv 0$, then $\omega^\alpha\ne 1$. Consequently,
\begin{align*}
\mu_f^{a,b} &= \frac{1}{n}\sum_{x=0}^{n-1}\omega^{\alpha x+\beta} = \frac{\omega^\beta}{n}\left(\frac{1-\omega^{\alpha n}}{1-\omega^\alpha}\right) = 0,
\end{align*} which is also an algebraic integer. \end{proof}
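Lemma~\ref{lem:linear-implies-integrality} is easy to confirm numerically; the following sketch (ours) evaluates $\mu_f^{a,b}$ for a linear $f$ modulo $n=12$ and checks that each average is either $0$ or a root of unity:

```python
import cmath

def mu(f, n, a, b):
    """Average (1/n) * sum_x omega^(a*f(x) + b*x), omega = exp(2*pi*i/n)."""
    omega = cmath.exp(2j * cmath.pi / n)
    return sum(omega ** ((a * f(x) + b * x) % n) for x in range(n)) / n

n = 12
f = lambda x: (5 * x + 7) % n          # a linear polynomial function
for a in range(n):
    for b in range(n):
        m = abs(mu(f, n, a, b))
        # For linear f, each average is 0 or a root of unity, exactly as
        # in the two cases of the proof above.
        assert m < 1e-9 or abs(m - 1) < 1e-9
```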
The next result shows that it suffices to check~\eqref{eqn:integrality} for $a\equiv 0$ (which is trivial), for $b\equiv 0$, or for $a, b$ relatively prime.
\begin{proposition}
Let $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ be a function. Suppose that $\mu_f^{a,b}$ is an algebraic integer. Then $\mu_f^{ka,kb}$ is also an algebraic integer for any $k$ relatively prime to $n$. \end{proposition}
\begin{proof} Since $k$ and $n$ are coprime, we can define an automorphism $\sigma_k\in\mathrm{Aut}(\mathbb{Q}(\omega)/\mathbb{Q})$ by $\sigma_k(\omega)=\omega^k$. Then \begin{align*}
\mu_f^{ka,kb} &= \frac{1}{n}\sum_{x=0}^{n-1}\omega^{k(af(x)+bx)} = \frac{1}{n}\sum_{x=0}^{n-1}\sigma_k(\omega)^{af(x)+bx} = \sigma_k(\mu_f^{a,b}).
\end{align*} Since $\mu_f^{a,b}$ is an algebraic integer and $\sigma_k\in\mathrm{Aut}(\mathbb{Q}(\omega)/\mathbb{Q})$, it follows that $\mu_f^{ka,kb}= \sigma_k(\mu_f^{a,b})$ is also an algebraic integer. \end{proof}
Finally, we give a necessary and sufficient condition for the average of roots of unity to be an algebraic integer. This is a standard result in algebraic number theory.
\begin{lemma}\label{lem:average}
Let $\omega_1,\ldots, \omega_n$ be roots of unity. Their average is an algebraic integer if and only if either $\omega_1+\cdots +\omega_n=0$ or $\omega_1=\cdots=\omega_n$. \end{lemma}
\begin{proof}
The sufficiency is clear. Let $\mu$ denote the average of $\omega_1,\ldots, \omega_n$ and assume that it is an algebraic integer. By the triangle inequality, $|\mu|\le 1$ with equality if and only if $\omega_1=\cdots=\omega_n$. Moreover, $|\mu'|\le 1$ for all algebraic conjugates $\mu'$ of $\mu$. If not all $\omega_i$'s are equal, then $|\mu|< 1$. As a result, we also have $|\alpha|<1$, where $\alpha$ is the product of all algebraic conjugates of $\mu$. But $\alpha$ is an integer, which implies that $\alpha$ must be 0. It follows that $\mu=0$. \end{proof}
\section{The case where $n$ is a prime}
Let $n=p$ be a prime number. Our main result is to show that any function $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$ satisfying \eqref{eqn:integrality} must be representable by a linear polynomial. To study a function $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$, it suffices to study polynomials of degree at most $p-1$. The following lemma was proved (for a general finite field) by Dickson~\cite{dickson1896analytic}.
\begin{lemma}
For any function $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$, there exists a unique polynomial $F\in \mathbb{Z}_p[X]$ of degree at most $p-1$ such that
\[f(x)\equiv F(x) \pmod p \quad\text{ for all }x=0,1,\ldots, p-1.\] \end{lemma}
Henceforth, we shall identify a function $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$ with its corresponding polynomial of degree at most $p-1$. For the proof of our main result, we will need to consider the following set. \begin{definition}
For a polynomial $f$ in $\mathbb{Z}_p[X]$, define
\[W_f=\{\lambda\in\mathbb{Z}_p: x\mapsto (f(x)+\lambda x) \text{ is a permutation on } \mathbb{Z}_p\}.\] \end{definition}
\begin{lemma}[Stothers {\cite[Theorem 2]{stothers1990permutation}}]\label{lem:suff-to-be-linear}
Let $f$ be a polynomial in $\mathbb{Z}_p[X]$ of degree at most $p-1$. If $|W_f|>(p-3)/2$, then $\deg(f)\le 1$. \end{lemma}
We now state our main result.
\begin{theorem}\label{thm:prime-linear}
Let $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$ be a function. Suppose that the average \begin{equation*}
\mu_f^{1,b} = \frac{1}{p}\sum_{x=0}^{p-1}\omega^{f(x)+bx} \end{equation*}
is an algebraic integer for every $b\in\mathbb{Z}_p$. Then $f$ is a linear polynomial function. \end{theorem}
\begin{proof}
By Lemma~\ref{lem:average}, we have that, for each $b$, either $f(x)+bx$ is constant modulo $p$ or $\mu_f^{1,b}=0$. Suppose there is $b_0\in\mathbb{Z}_p$ such that $f(x)+b_0x$ is constant modulo $p$. Then it is clear that $f$ is of the form $f(x)\equiv\alpha x+\beta$ for all $x\in \mathbb{Z}_p$.
If there is no $b$ with $f(x)+bx$ constant modulo $p$, then $\sum_{x=0}^{p-1}\omega^{f(x)+bx} = 0$ for all $b$. Since the minimal polynomial of $\omega$ is $1+X+\cdots+X^{p-1}$, this vanishing forces each residue class modulo $p$ to occur exactly once among the exponents $f(x)+bx$; that is, $x\mapsto f(x)+bx$ is a permutation modulo $p$, for all $b$. This means that the set $W_f$ has cardinality $p$. Hence, by Lemma~\ref{lem:suff-to-be-linear}, $f$ is a linear polynomial. \end{proof}
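For a concrete nonlinear example, take $f(x)=x^2$ modulo $p=7$. By the classical evaluation of quadratic Gauss sums, $|\sum_x \omega^{f(x)+bx}|=\sqrt{p}$ for every $b$, so $|\mu_f^{1,b}|=1/\sqrt{p}$ lies strictly between $0$ and $1$; by Lemma~\ref{lem:average} such an average cannot be an algebraic integer. A numerical check (our illustration):

```python
import cmath, math

p = 7
omega = cmath.exp(2j * cmath.pi / p)
f = lambda x: x * x % p                 # nonlinear, so the theorem must fail

for b in range(p):
    S = sum(omega ** ((f(x) + b * x) % p) for x in range(p))
    # Quadratic Gauss sum: |S| = sqrt(p) for every b, so |mu| = 1/sqrt(p)
    # is strictly between 0 and 1 -- not an algebraic-integer average.
    assert abs(abs(S) - math.sqrt(p)) < 1e-9
```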
\begin{corollary}\label{cor:prime-linear}
Let $f:\mathbb{Z}_p\longrightarrow \mathbb{Z}_p$ be a function. Suppose that the average \begin{equation*}
\mu_f^{a,b} = \frac{1}{p}\sum_{x=0}^{p-1}\omega^{af(x)+bx} \end{equation*}
is an algebraic integer for every $a,b\in\mathbb{Z}_p$. Then $f$ is a linear polynomial function. \end{corollary} \begin{proof}
Take $a=1$ and apply Theorem~\ref{thm:prime-linear}. \end{proof}
We have seen (in Lemma \ref{lem:linear-implies-integrality}) that if a function $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ is representable by a linear polynomial, then it satisfies \eqref{eqn:integrality}. Corollary~\ref{cor:prime-linear} implies that the converse also holds for $n$ prime. It is natural to make the following conjecture.
\begin{conjecture}
Suppose $f:\mathbb{Z}_n\longrightarrow \mathbb{Z}_n$ is a function such that the average $\mu_f^{a,b}$ is an algebraic integer for every $a,b\in\mathbb{Z}_n$. Then there exist $\alpha,\beta\in\mathbb{Z}_n$ such that $f(x)\equiv \alpha x+\beta$ for all $x$. \end{conjecture}
\begin{remark}
It is possible that a function $f$ may have a non-linear form and still satisfy \eqref{eqn:integrality}. The conjecture says that such a function must be representable by a linear polynomial in $\mathbb{Z}_n[X]$.
For example, when $n=6$, it can be checked that $f(x)\equiv x^3+x$ satisfies \eqref{eqn:integrality}. However $f$ is representable by a linear polynomial, since $x^3+x\equiv 2x$ for all $x\in\mathbb{Z}_6$. \end{remark}
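The remark can be verified directly (our sketch):

```python
import cmath

n = 6
omega = cmath.exp(2j * cmath.pi / n)

# Pointwise, x^3 + x reduces to the linear polynomial 2x modulo 6.
for x in range(n):
    assert (x**3 + x) % n == (2 * x) % n

# Consequently every average mu_f^{a,b} is 0 or a root of unity.
f = lambda x: (x**3 + x) % n
for a in range(n):
    for b in range(n):
        m = abs(sum(omega ** ((a * f(x) + b * x) % n) for x in range(n)) / n)
        assert m < 1e-9 or abs(m - 1) < 1e-9
```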
\section{Connection to Perfect Isometries}
Corollary~\ref{cor:prime-linear} has an application in the representation theory of finite groups, specifically to the problem of finding self perfect isometries for the cyclic group $C_p$ of prime order $p$. For purposes of illustration, we will define perfect isometries only for this special case. Interested readers are referred to \cite{Broue1990isometries} for the definition of perfect isometries for general blocks of finite groups.
Throughout this section, let $G=C_p$. Denote by $\mathcal{R}(G)$ the free abelian group generated by $\Irr(G)$, the set of all irreducible complex characters of $G$. We will regard $\mathcal{R}(G)$ as lying in $CF(G)$\footnote{This is an inner product space with the standard inner product of group characters.}, the space of complex-valued class functions of $G$.
Let $I:\mathcal{R}(G)\longrightarrow\mathcal{R}(G)$ be a linear map. Define a generalized character $\mu_I$ of $G\times G$ by \[\mu_I(g,h)=\sum_{\chi\in\Irr(G)}I(\chi)(g)\chi(h),\quad\text{for all } g,h\in G.\]
\begin{definition}(Cf. Definition 1.1 in \cite{Broue1990isometries}) An isometry $I:\mathcal{R}(G)\longrightarrow\mathcal{R}(G)$ is said to be a \emph{perfect isometry} if $\mu_I$ satisfies the following two conditions. \begin{enumerate}
\item[(i)] (Integrality) For all $g, h\in G$, the number $\mu_I(g,h)/p$ is an algebraic integer.
\item[(ii)] (Separation) If $\mu_I(g,h)\ne 0$, then both $g$ and $h$ are the identity element or both are not. \end{enumerate} \end{definition}
Let $\omega=e^{2\pi i/p}$. Suppose that $G$ is generated by an element $u\in G$. For $x=0,1,\ldots, p-1$, let $\chi_x$ be the irreducible complex character of $G$ such that \[\chi_x(u^a) =\omega^{ax},\quad a=0,1,\ldots, p-1.\] In particular, $\chi_0$ is the trivial character.
Any bijection $f$ on the set $\{0,1,\ldots, p-1\}$ gives rise to an isometry $I_f:\mathcal{R}(G)\longrightarrow\mathcal{R}(G)$ defined (on the basis) by \[I_f(\chi_x)=\chi_{f(x)},\quad x=0,1,\ldots, p-1.\]
\begin{proposition}\label{prop-perfect-iso}
An isometry $I_f$ is perfect if and only if $f$ is a linear bijection. \end{proposition} \begin{proof}
Since every element in $G$ is of the form $u^a$ for some $a$, we have
\[\mu_I(g,h)=\mu_I(u^a,u^b)=\sum_{x=0}^{p-1}I(\chi_x)(u^a)\chi_x(u^b)= \sum_{x=0}^{p-1} \omega^{af(x)+bx}.\] Thus, we see that the condition in (1) is precisely the requirement that $\mu_I$ satisfies the integrality condition. This is the only condition to consider, as the separation condition is satisfied for any bijection $f$.
If $f$ is a linear bijection, then by Lemma \ref{lem:linear-implies-integrality}, the integrality condition is satisfied. Thus, $I_f$ is a perfect isometry.
Conversely, if $I_f$ is a perfect isometry, then $\mu_I(u^a,u^b)/p$ is an algebraic integer for all $a,b$. It follows from Corollary~\ref{cor:prime-linear} that $f$ must be linear. \end{proof}
\begin{remark} The following actions on $\Irr(G)$ are well known to give bijections on the set.
\begin{itemize}
\item (Multiplication by a linear character) For a fixed $\chi_\beta\in \Irr(G)$, multiplication by $\chi_\beta$ gives a bijection
\[I_\beta:\Irr(G)\longrightarrow\Irr(G), \quad I_\beta(\chi_x)(g)=(\chi_\beta\chi_x)(g)=\chi_{\beta+x}(g).\]
\item (Automorphism action) For a fixed $\alpha\in\{1,2,\ldots,p-1\}$, the automorphism $g\mapsto g^\alpha$ induces a bijection
\[I_\alpha:\Irr(G)\longrightarrow\Irr(G), \quad I_\alpha(\chi_x)(g)=\chi_x(g^\alpha)=\chi_{\alpha x}(g).\]
\end{itemize}
Proposition \ref{prop-perfect-iso} implies that, for an isometry induced by a bijection on $\Irr(G)$ to be a perfect isometry, it must be a composition of the above two types of isometries. \end{remark}
\end{document}
\begin{document}
\title{\bf\Large A Unified Approach to Construct Correlation Coefficient Between Random Variables} \author{ \textbf{ Majid Asadi\footnote{{Department of Statistics, University of Isfahan, Isfahan 81744, Iran \& School of Mathematics, Institute of Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran} (E-mail: [email protected])} \ \ and \ \ \textbf{Somayeh Zarezadeh\footnote{Department of Statistics, Shiraz University, Shiraz 71454, Iran (E-mail: [email protected]) }} }} \date{} \maketitle
\begin{abstract}
Measuring the correlation (association) between two random variables is one of the important goals in statistical
applications. In the literature, the covariance between two random variables is a widely used criterion in measuring
the linear association between two random variables. In this paper, first we propose a covariance-based unified
measure of variability for a continuous random variable $X$ and we show that several measures of variability and
uncertainty, such as variance, Gini mean difference, cumulative residual entropy, etc., can be considered as special
cases. Then, we propose a unified measure of correlation between two continuous random variables $X$ and $Y$, with
distribution functions (DFs) $F$ and $G$, based on the covariance between $X$ and $H^{-1}G(Y)$ (known as the
{\it Q-transformation} of $H$ on $G$) where $H$ is a continuous DF. We show that our proposed measure of association subsumes some of the existing measures of correlation. Under some mild conditions on $H$, it is shown that the suggested index lies in the interval $[-1,1]$, where the extreme values of the range, i.e., $-1$ and $1$, are attained by the Fr\'echet bivariate minimal and maximal DFs, respectively.
A special case of the proposed correlation measure leads to a variant of the Pearson correlation coefficient which,
as a measure of the strength and direction of the linear relationship between $X$ and $Y$, has absolute value greater
than or equal to that of the Pearson correlation. The results are examined numerically for some well known bivariate DFs. \end{abstract} {\bf Keywords:} Association; Correlation coefficient; Gini's mean difference; Cumulative residual entropy; Fr\'echet bounds; Q-transformation; Bivariate copula.
\maketitle
\section{Introduction}\label{intro}
One of the fundamental issues in statistical theory and applications is to measure the correlation (association) between two random phenomena. The problem of assessing the correlation between two random variables (r.v.s) has a long history and, because of the importance of the subject, several criteria have been proposed in the statistical literature. Let $X$ and $Y$ be two continuous r.v.s with joint distribution function (DF) $F(x,y)=P(X\leq x,Y\leq y)$, $(x,y)\in \mathbb{R}^2$, and continuous marginal DFs $F(x)=P(X\leq x)$ and $G(y)=P(Y\leq y)$, respectively. In the parametric framework, the Pearson correlation coefficient, which is the most commonly used correlation index, measures the strength and direction of the linear relationship between $X$ and $Y$. The Pearson correlation coefficient, denoted by $\rho(X,Y)$, is defined as the ratio of the covariance between $X$ and $Y$ to the product of their standard deviations. That is \begin{equation}\label{prhop} \rho(X,Y)=\frac{\mathrm{Cov}(X,Y)}{\sigma_{X}\sigma_{Y}}=\frac{E(XY)-E(X)E(Y)}{\sigma_{X}\sigma_{Y}}, \end{equation} where $\sigma_{X}>0$ $(\sigma_{Y}>0)$ denotes the standard deviation of $X$ $(Y)$. An application of the Cauchy-Schwarz inequality shows that $\rho(X,Y)$ lies in the interval $[-1, 1]$. In the nonparametric framework, the widely used measures of association between two r.v.s are Kendall's coefficient and Spearman's coefficient. The Spearman correlation coefficient is defined as the Pearson correlation coefficient
between the ranks of $X$ and $Y$ while the Kendall's coefficient (of concordance) is expressed with respect to
the probabilities of the concordant and discordant pairs of observations from $X$ and $Y$. For more information
on properties and applications of these indexes of correlation we refer, among others, to \cite{Samuel et al. (2001),
Shevlyakov and Oja (2016)} and references therein. Although these correlation coefficients have been widely used in many disciplines, other indexes of association have also been defined which are particularly useful in certain areas of application; see, for example, { \cite{Yin (2004), R3, Nolde (2014), Grothe et al. (2014)}}. In economic and financial studies a commonly used measure of association between r.v.s $X$ and $Y$, based on Gini's mean difference, was defined by \cite{R1}. The Gini mean difference corresponding to the r.v. $X$, denoted by $\mathrm{GMD}(X)$ (or alternatively by $\mathrm{GMD}(F)$), is defined as \begin{eqnarray}
\mathrm{GMD}(X)= E(|X_1-X_2|)=2\int F(x)\bar{F}(x)dx,\label{gmd} \end{eqnarray} where $X_1$ and $X_2$ are independent r.v.s distributed as $X$ and $\bar{F}(x)=1-F(x)$. The $\mathrm{GMD}(X)$, as a measure of variability (which also equals $4\mathrm{Cov}(X,F(X))$), shares many properties of the variance of $X$ and is more informative than the variance for distributions that are far from normality (see \cite{R2}). \cite{R1} defined the association between $X$ and $Y$ as the covariance between $X$ and $G(Y)$ divided by the covariance between $X$ and $F(X)$. In other words, they proposed the measure of association between $X$ and $Y$ as \begin{equation} \Gamma(X,Y)=\frac{\mathrm{Cov}(X,G(Y))}{\mathrm{Cov}(X,F(X))}.\label{eee1} \end{equation} Since, for a continuous r.v. $Y$, $G(Y)$ is distributed uniformly on $(0,1)$, the index $\Gamma(X,Y)$ measures the association between $X$ and a uniform r.v. on the interval $(0,1)$ which corresponds to the rank of $Y$. The index $\Gamma(X,Y)$ satisfies the requirements of a correlation coefficient and has been applied in a series of research works in economics and finance by Yitzhaki and his coauthors. We refer the readers, for more details
on applications of $\Gamma(X,Y)$ and its extensions, to \cite{R3} and references therein.
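As a numerical aside (not part of the original development), the Gini correlation (\ref{eee1}) can be estimated by a plug-in rule that replaces $F$ and $G$ with empirical DFs. The helper name `gini_correlation` is hypothetical; the sketch assumes `numpy` and `scipy` are available and that the samples are continuous (no ties).

```python
import numpy as np
from scipy.stats import rankdata

def gini_correlation(x, y):
    """Plug-in estimate of Gamma(X,Y) = Cov(X, G(Y)) / Cov(X, F(X)),
    with F and G replaced by empirical DFs (ranks / n); assumes no ties."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    Gy = rankdata(y) / n          # empirical G evaluated at each y_i
    Fx = rankdata(x) / n          # empirical F evaluated at each x_i
    return np.cov(x, Gy)[0, 1] / np.cov(x, Fx)[0, 1]

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)
print(gini_correlation(x, x))     # identical samples give exactly 1
print(gini_correlation(x, -x))    # reversed ranks give exactly -1
```

Because $\Gamma$ depends on $Y$ only through $G(Y)$, the estimate is invariant under strictly increasing transformations of the second argument.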
Recently, \cite{Asadi(2017)} proposed a new measure of association between two continuous r.v.s $X$ and $Y$.
This measure is defined on the basis of $\mathrm{Cov}(X, \phi(X))$, where $\phi(x)=\log\frac{F(x)}{\bar{F}(x)}$
is the log-odds rate associated with $X$. The cited author provided some interpretations of this covariance and showed
that it arises naturally as a measure of variability. For instance, it is shown that $\mathrm{Cov}(X, \phi(X))$ can
be expressed as a function of the cumulative residual entropy (a measure of uncertainty defined in \cite{R8}). The
measure of association between r.v.s $X$ and $Y$ is then defined as the covariance between $X$ and the log-odds
rate of $Y$ divided by the covariance between $X$ and the log-odds rate of $X$. If we denote this measure by
$\alpha(X,Y)$, then
\begin{eqnarray}
\alpha(X,Y)=\frac{\mathrm{Cov}(X, \phi_Y(Y))}{\mathrm{Cov}(X, \phi_{X}(X))}.\label{eqq2}
\end{eqnarray}
It should be noted that, for a continuous r.v. $X$, $\phi_{X}(X)$ has the standard logistic distribution.
Hence $\alpha(X,Y)$ measures the correlation between $X$ and a standard logistic r.v., namely the log-odds transformation of the r.v. $Y$.
\qquad The aim of the present paper is to give a unified approach to constructing measures of association
between two r.v.s. In this regard, we assume that $X$ and $Y$ have continuous DFs $F$ and $G$, respectively.
First, we consider the following covariance, which we call the $G$-covariance between $X$ and $Y$,
\begin{equation}
{\cal C}(X,Y)=\mathrm{Cov}\left(X,G^{-1}F(X)\right),\label{eq1}
\end{equation}
where, for $p\in[0,1]$,
\[ G^{-1}(p)=\inf\{x\in \mathbb{R}: G(x)\geq p\},\] is the inverse function of the DF $G$. The quantity $G^{-1}F(.)$ is known in the literature under different names. \cite{Gilchrist(2000)} called it the {\it Q-transformation} (Q-T) and \cite{Shaw and Buckley(2009)} named it the {\it sample transmutation map}. Throughout the paper, we use the abbreviation Q-T for quantities of the form $G^{-1}F(.)$. Note that the covariance in (\ref{eq1}) measures the linear dependency between $X$ and the r.v. $G^{-1}F(X)$, where the latter is distributed as $Y$. Based on the covariance (\ref{eq1}), we propose a unified index of correlation between $X$ and $Y$ which leads to new measures of correlation and subsumes some of the existing measures, such as the Pearson correlation coefficient (in the case that $X$ and $Y$ have identical marginal DFs) and the Gini correlation coefficient (and its extensions). Then, we study several properties of our unified index of association.
\qquad The rest of the paper is organized as follows: In Section 2, we first briefly give some background and applications of the quantity Q-T that have already been presented in the literature. Then, we give the motivation for using the covariance (\ref{eq1}) by showing that some measures of variability, such as the variance, the GMD (and its extensions) and the cumulative residual entropy, can be considered as special cases of (\ref{eq1}). In Section 3, we propose our unified measure of association between the r.v.s $X$ and $Y$ based on the covariance between $X$ and $H^{-1}G(Y)$, where $H$ is a continuous DF. We call this unified correlation the {\it $H$-transformed correlation} between $X$ and $Y$ and denote it by $\beta_H(X,Y)$. It is shown that $\beta_H(X,Y)$ has almost all the requirements of a correlation index. For example, it is proved that for any continuous symmetric DF $H$, $-1\leq \beta_H(X,Y)\leq 1$, where $\beta_H(X,Y)=0$ if $X$ and $Y$ are independent. When the joint distribution of $X$ and $Y$ is bivariate normal with Pearson correlation $\rho(X,Y)=\rho$, we show that $\beta_H(X,Y)=\rho$ for any $H$. We prove that for the association index $\beta_H(X,Y)$ the lower and upper bounds of the interval $[-1,1]$ are attainable. In fact, it is proved that $\beta_H(X,Y)=-1$ $(+1)$ if $X$ and $Y$ are jointly distributed
as the Fr\'{e}chet bivariate minimal (maximal) distribution. A special case of $\beta_H(X,Y)$, which we call the {\it $\rho$-transformed correlation} and denote by $\rho_t(X,Y)$, provides a variant of the Pearson correlation
coefficient $\rho(X,Y)$ whose absolute value is always greater than or equal to that of the Pearson
correlation $\rho(X,Y)$. That is, $\rho_t(X,Y)$ provides a wider range than $\rho(X,Y)$ for measuring the
linear correlation between two r.v.s. The correlation $\beta_H(X,Y)$ provides, in general, an asymmetric class of
correlation measures in terms of $X$ and $Y$; we propose some symmetric versions of it in Section 3.
The index $\beta_H(X,Y)$ is computed for several bivariate distributions under different special cases of the DF $H$.
In Section 4, a decomposition formula is given for the $G$-covariance of a sum of nonnegative r.v.s, which yields some
applications to redundant systems. The paper ends with some concluding remarks in Section 5.
\section{Motivations} Let $X$ and $Y$ be two continuous r.v.s with joint DF $F(x,y)$, $(x,y)\in \mathbb{R}^2$, and marginal DFs $F(x)$ and
$G(y)$, respectively. In developing our results, the quantity Q-T, $G^{-1}F(x)$, plays a central role.
\cite{Balanda and MacGillivray(1990)} showed that the behavior of Q-T can be used to assess the kurtosis of two
distributions (see also \cite{Groeneveld(1998)}). They showed that for symmetric distributions the so-called
{\it spread-spread} function is essentially a function of Q-T. \cite{Shaw and Buckley(2009)} mentioned that
among the applications of Q-T is sampling from exotic distributions, e.g., the Student-$t$. Authors have also used
plots of the sample version of Q-T, in which the empirical DFs are substituted in $G^{-1}(F(x))$, to
assess the symmetry of distributions; see \cite{Doksum et al. (1977)} and references therein.
\cite{Aly and Bleuer(1986)} referred to the function Q-T as the Q-Q plot and obtained some confidence intervals for it.
In comparing probability distributions, the concept of dispersive (variability) ordering is used to measure the
variability of r.v.s (see \cite{Shaked and Shanthikumar(2007)}). The concept of dispersive ordering relies mainly
on the quantity $G^{-1}F(x)$. A DF $F$ is said to be less than a DF $G$ in dispersive ordering if $G^{-1}F(x)-x$
is nondecreasing in $x$. (The dispersive ordering was already employed by \cite{Doksum (1975)}, who used the
terminology \lq\lq $F$ is tail-ordered with respect to $G$\rq\rq.) \cite{Zwet(1964)} used the
quantity Q-T to compare the skewness of two probability density functions. The DF $G$ is more right-skewed,
respectively more left-skewed, than the DF $F$ if $G^{-1}F(x)-x$ is a nondecreasing convex, respectively concave, function (see also \cite{Yeo and Johnson(2000)}). In reliability theory, the convexity of the function Q-T is used, in a general
setting, to study the aging properties of lifetime r.v.s with support $[0,\infty)$
(see \cite{Barlow and Proschan(1981)}). In the particular case where $G$ is the exponential distribution,
the convexity of Q-T is equivalent to the property that $F$ has increasing failure rate. Also, according to the
latter cited authors, a lifetime DF $F$ is said to be less than a lifetime DF $G$ in star-shaped order if
$\frac{G^{-1}F(x)}{x}$ is increasing in $x$. In the special case where $G$ is exponential, the star-shaped property
of Q-T is equivalent to the property that $F$ has increasing failure rate in average.
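The equivalence just described (convex Q-T with exponential $G$ if and only if $F$ is IFR) can be checked numerically on a concrete IFR example. The snippet below is an illustrative sketch assuming `numpy`; it uses a Weibull DF with shape 2, for which $G^{-1}F(x)=x^2$.

```python
import numpy as np

# Q-T of the Weibull(shape 2) DF F through the unit-exponential DF G:
# G^{-1}(u) = -log(1-u), so G^{-1}F(x) = -log(bar F(x)) = x**2 here.
F = lambda x: 1.0 - np.exp(-x**2)      # Weibull(2): an IFR distribution
Ginv = lambda u: -np.log1p(-u)         # Exp(1) quantile function

x = np.linspace(0.1, 3.0, 200)
qt = Ginv(F(x))                        # numerically equal to x**2
print(np.diff(qt, 2).min() >= -1e-9)   # nonnegative second differences: convex
```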
In the following, we use Q-T to define a variant of the covariance between $X$ and $Y$, which we call the $G$-covariance. Throughout the paper, we assume that all the required expectations exist. \begin{Definition}
{\rm Let $X$ and $Y$ be two r.v.s with DFs $F$ and $G$, respectively. The $G$-covariance between $X$ and $Y$
is defined as
\begin{equation}
{\cal C}(X,Y)=\mathrm{Cov}\left(X,G^{-1}F(X)\right).
\end{equation}} \end{Definition} As $G^{-1}{F(x)}$ is an increasing function of $x$, we clearly have $0\leq \mathrm{Cov}\left(X,G^{-1}F(X)\right)$, where equality holds if and only if $F$ (or $G$) is degenerate. With $\sigma_X^2$ and $\sigma_Y^2$ denoting the variances of $X$ and $Y$, respectively, the Cauchy--Schwarz inequality gives \begin{eqnarray} \mathrm{Cov}^{{2}}(X, G^{-1}F(X))&\leq&\mathrm{Var}(X)\mathrm{Var}{ (G^{-1}F(X))}\nonumber\\
&=& \sigma^{2}_{X}\sigma^{2}_{Y}\label{covee} \end{eqnarray}
where the equality follows from the fact that $G^{-1}F(X)$ is distributed as $Y$. Hence, we get
\begin{eqnarray} 0\leq {\cal C}(X,Y)\leq \sigma_{X}\sigma_{Y}.\label{cove} \end{eqnarray} It can easily be shown that equality holds in the right inequality of (\ref{cove}) if and only if $X$ and $Y$ are identically distributed up to a location--scale transformation.
Note that ${\cal C}(X,Y)$ can be represented as \begin{eqnarray}
{\cal C}(X,Y)&=& \mathrm{Cov}(X,G^{-1}F(X))\nonumber\\
&=& E\big(XG^{-1}F(X)\big)-E\big(G^{-1}F(X)\big)E(X)\nonumber\\
&=& E\big(XG^{-1}F(X)\big)-E(Y)E(X)\nonumber\\
&=&\int xG^{-1}F(x)dF(x)-E(Y)E(X)\nonumber\\
&=&\int yF^{-1}G(y)dG(y)-E(Y)E(X)\nonumber\\
&=& \mathrm{Cov}(Y,F^{-1}G(Y))={\cal C}(Y,X).\label{eqe11} \end{eqnarray} An alternative representation of ${\cal C}(X,Y)$ is
\begin{eqnarray*}
{\cal C}(X,Y)&=& \int xG^{-1}F(x)dF(x)-\int G^{-1}F(x)dF(x)\int xdF(x)\\
&=& \int_{0}^{1} F^{-1}(u)G^{-1}(u)du-\int_{0}^{1} G^{-1}(u)du\int_{0}^{1} F^{-1}(u)du\\
&=& \mathrm{Cov}(F^{-1}(U),G^{-1}(U)), \end{eqnarray*} where $U$ is a uniform r.v. distributed on $(0,1)$.
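The last representation, ${\cal C}(X,Y)=\mathrm{Cov}(F^{-1}(U),G^{-1}(U))$, is convenient for direct computation. As an illustrative sketch (assuming `scipy`), take $F$ the unit exponential and $G$ the uniform DF on $(0,1)$; the exact value is ${\cal C}(X,Y)=3/4-1/2=1/4$.

```python
import numpy as np
from scipy.integrate import quad

# C(X,Y) = Cov(F^{-1}(U), G^{-1}(U)) by quadrature, for F = Exp(1) and
# G = Uniform(0,1) (an illustrative choice of marginals).
Finv = lambda u: -np.log1p(-u)        # Exp(1) quantile
Ginv = lambda u: u                    # Uniform(0,1) quantile

cross = quad(lambda u: Finv(u) * Ginv(u), 0, 1)[0]   # E[F^{-1}(U) G^{-1}(U)] = 3/4
mF = quad(Finv, 0, 1)[0]                             # = E(X) = 1
mG = quad(Ginv, 0, 1)[0]                             # = E(Y) = 1/2
C = cross - mF * mG
print(round(C, 6))                                   # 0.25
```

The value $1/4$ equals $\mathrm{Cov}(X,F(X))$ for the unit exponential, consistent with taking $G^{-1}$ to be the identity on $(0,1)$.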
In the following we show that some well-known measures of disparity and variability have a covariance representation and can be considered as special cases of the $G$-covariance ${\cal C}(X,Y)$. \begin{description}
\item [(a)] If $G=F$, then we get
$${\cal C}(X,Y)={\cal C}(X,X)=\mathrm{Cov}(X,F^{-1}(F(X)))=\mathrm{Cov}(X,X)=\mathrm{Var}(X).$$
In particular if the vector $(X,Y)$ has an exchangeable DF then
$${\cal C}(X,Y)=\mathrm{Var}(X)=\mathrm{Var}(Y)={\cal C}(Y,X).$$
\item[(b)] If $G$ is the uniform distribution on $(0,1)$, then we get
$${\cal C}(X,Y)=\mathrm{Cov}(X,F(X))=\frac{1}{4}\mathrm{GMD}(X),$$
where $\mathrm{GMD}(X)$ is the Gini's mean difference in (\ref{gmd}). The Gini coefficient, which is a widely
used measure in economic studies, is defined as the $\mathrm{GMD}(X)$ divided by twice the mean of the
population. It should also be noted that the $\mathrm{GMD}(X)$ can be represented as the difference between
the expected values of the maxima and the minima in a sample of two independent and identically distributed
(i.i.d.) r.v.s $X_1$ and $X_2$. That is, \[ \mathrm{GMD}(X)= 4 \mathrm{Cov}(X, F(X)) = E \left(\max(X_1, X_2) - \min(X_1, X_2)\right); \] see, e.g., \cite{R3}.
In reliability theory and survival analysis, the mean residual life (MRL) and mean inactivity time (MIT) are important concepts for assessing the lifetime and aging properties of devices and living organisms. These concepts, denoted respectively by $m(t)$ and ${\tilde m}(t)$, are defined at any time $t$ as $m(t)=E(X-t|X>t)$ and ${\tilde m}(t)=E(t-X|X<t)$. Recently, \cite{R4} have shown that, when $X$ is a nonnegative r.v., $\mathrm{GMD}(X)$ (and hence $4{\cal C}(X,Y)$) can also be expressed as the sum of the expectations of the MRL and MIT of the minimum of a random sample of size 2. \item[(c)] In the case that $G(y)=1-e^{-y}$, $y>0$, the exponential distribution with mean 1, we obtain
$${\cal C}(X,Y)=\mathrm{Cov}(X,\Lambda(X)),$$
where $\Lambda(x)=G^{-1}F(x)=-\log\bar{F}(x)$, in which $\bar{F}(x)=1-F(x)$.
The function $\Lambda(x)$, corresponding to a nonnegative r.v., is known in reliability theory as the
cumulative failure rate and plays a crucial role in the study of aging properties of system lifetimes. \cite{Asadi(2017)} has shown that the following equality holds for a nonnegative r.v.:
\begin{equation}\label{CRE}
\mathrm{Cov}(X,\Lambda(X))=-\int_{0}^{\infty}\bar{F}(x)\log\bar{F}(x)dx,
\end{equation}
where the right hand side is known, in the literature, as the cumulative residual entropy (CRE) defined by \cite{R8}.
As an alternative to the Shannon entropy, the cited authors argued that the CRE can be considered as a measure of uncertainty. They obtained several properties of the CRE and illustrated that this measure is useful in computer vision and image processing. \cite{R16} showed that the CRE is closely related to the mean residual life, $m(t)$, of a nonnegative r.v. $X$. In fact, the CRE can always be represented as $\mathrm{CRE}=E(m(X))$.
Another interesting fact that can be concluded from the discussion here is that the differential Shannon entropy of the equilibrium distribution (ED) corresponding to $F$ has a covariance representation. The density function of the ED is given by \[ f_e(x)= \frac{\bar{F}(x)}{\mu},\] where $0<\mu<\infty$ is the mean of the DF $F$. In a renewal process, the ED arises as the asymptotic distribution of the waiting time until the next renewal and of the time since the last renewal at time $t$. Also, a delayed renewal process has stationary increments if and only if the distribution of the actual remaining life is $f_{e}(x)$. Such a process is known in the literature as the stationary renewal process or equilibrium renewal process; see \cite{Ross}. If $H(f_e)$ denotes the differential Shannon entropy of $f_e$, then \begin{eqnarray*}
H(f_e)&=& -\int_{0}^{\infty}f_{e}(x)\log f_{e}(x)dx\\
&=& -\int_{0}^{\infty}\frac{\bar{F}(x)}{\mu}\log \frac{\bar{F}(x)}{\mu}dx\\
&=& \frac{1}{\mu}\mathrm{Cov}(X,\Lambda(X))+\log \mu.
\end{eqnarray*}
Finally, we mention the concept of the generalized cumulative residual entropy (GCRE), introduced by \cite{Psar-Nav} as \begin{align} {\cal E}_n(X)= \frac{1}{n!} \int_0^\infty {\bar F}(x) [\Lambda(x)]^n dx. \label{psar} \end{align} For $n=1$, we get the CRE of $X$. One can easily verify that, with $G_n(y)=1-e^{-\sqrt[n]{y}}$, ${\cal E}_n(X)$ has the following covariance representation: \begin{align} {\cal E}_n(X)=\frac{1}{n!} \mathrm{Cov}\big(X,G^{-1}_{n}F(X)\big)-\frac{1}{(n-1)!} \mathrm{Cov}\big(X,G^{-1}_{n-1}F(X)\big).\label{gre2} \end{align}
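The covariance representation (\ref{gre2}) can be verified numerically. For $X$ unit exponential, $\Lambda(x)=x$, so $G_{n}^{-1}F(x)=x^{n}$ and ${\cal E}_n(X)=1$ for every $n$; the sketch below (assuming `scipy`) checks this for $n=3$.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

pdf = lambda x: np.exp(-x)      # density of Exp(1)
sf = lambda x: np.exp(-x)       # survival function bar F of Exp(1)

def cov_X_Lambda_pow(k):
    # Cov(X, Lambda(X)^k) with Lambda(x) = x for Exp(1)
    mk1 = quad(lambda x: x**(k + 1) * pdf(x), 0, np.inf)[0]   # E[X^{k+1}]
    mk = quad(lambda x: x**k * pdf(x), 0, np.inf)[0]          # E[X^k]
    return mk1 - 1.0 * mk                                     # E(X) = 1

def gcre(n):
    # direct definition: (1/n!) * integral of bar F(x) * Lambda(x)^n dx
    return quad(lambda x: sf(x) * x**n, 0, np.inf)[0] / factorial(n)

n = 3
rhs = cov_X_Lambda_pow(n) / factorial(n) - cov_X_Lambda_pow(n - 1) / factorial(n - 1)
print(round(gcre(n), 6), round(rhs, 6))    # both equal 1.0
```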
\item [(d)] In the case that $G$ is logistic with DF $G(y)=\frac{1}{1+e^{-y}}$, $y\in \mathbb{R}$, we obtain
$${\cal C}(X,Y)=\mathrm{Cov}\left(X,\phi(X)\right),$$ where $\phi(x)=\log\frac{F(x)}{\bar{F}(x)}$ is the log-odds rate associated with the r.v. $X$. The log-odds rate is used in survival analysis to model the failure process of lifetime data and to assess the survival function of observations (see \cite{R5}). It is easy to show that
\begin{eqnarray*}
{\cal C}(X,Y)&=&\mathrm{Cov}\left(X,\phi(X)\right)\\
&=& -\int_{0}^{\infty}\bar{F}(x)\log\bar{F}(x)dx-\int_{0}^{\infty}{F}(x)\log{F}(x)dx,
\end{eqnarray*}
where the last term on the right-hand side is known as the cumulative past entropy. For some discussion and interpretations of the quantity ${\cal C}(X,Y)$ presented in this part, see \cite{Asadi(2017)}. \item[(e)] Let \begin{align}\label{exten-gini-dis}
G(y)=\left\{
\begin{array}{ll}
1-\Big(\frac{1}{y}\Big)^{\frac{1}{1-\nu}}, & \hbox{$y>1, ~ 0<\nu<1$; {\rm \ \ Pareto distribution,}}\\
1-(1-y)^\frac{1}{\nu-1}, & \hbox{$0<y<1, ~ \nu>1$; {\rm\ \ Power distribution,} } \\
0, & \hbox{o.w.}
\end{array}
\right. \end{align} Then it can be shown, in this case, that \begin{equation*}
{\cal C}(X,Y)=\left[I(0< \nu < 1)-I(\nu>1)\right]\mathrm{Cov}(X,\bar{F}^{\nu-1}(X)), \end{equation*} where $I(A)$ is the indicator function, equal to 1 when $A$ holds and to 0 otherwise. Hence, we get the extended Gini, $\mathrm{EGini}_\nu(X)$, defined as a parametric extension of $\mathrm{GMD}(X)$ of the form \[ \mathrm{EGini}_\nu(X)=\nu \left[I(\nu>1)-I(0< \nu < 1)\right] {\cal C}(X,Y), \] where $\nu$ is a parameter that ranges from 0 to infinity and determines the relative weight attributed to various portions of the probability distribution. For $\nu=2$, the extended Gini reduces to $\mathrm{GMD}(X)$ (up to a constant). For more interpretations and applications of $\mathrm{EGini}_\nu(X)$ in economic studies based on different values of $\nu$, we refer to \cite{R3}.
\item[(f)] The upper and lower record values, in a sequence of i.i.d. r.v.s $X_1, X_2, \dots$, have applications in different areas of applied probability; see \cite{Arnold et al. (1998)}. Let the $X_i$'s have a common continuous DF $F$ with survival function ${\bar F}$. Define a sequence of upper record times $U(n)$, $n = 1, 2, \dots$, as follows: $$ U(n + 1) = \min \{j : j > U(n), X_j > X_{U(n)}\}, \quad n \geq 1, $$ with $U(1) = 1$. Then, the sequence of upper record values $\{R_n, n\geq 1\}$ is defined by $R_n= X_{U(n)}$, $n\geq 1$, where $R_1 = X_1$. The survival function of $R_n$ is given by \[ {\bar F}_n^U(t)={\bar F}(t)\sum_{x=0}^{n-1} \frac{(\Lambda(t))^x}{x!}, \qquad t>0, n=1,2,\dots, \] where $\Lambda(t)=-\log{\bar F}(t)$. If $R_n$ denotes the $n$th upper record value, then it can easily be shown that, with $G_n(y)=1-e^{-\sqrt[n]{y}}$, the mean difference between $R_n$ and $R_1$ has the following covariance representation: \begin{align*}
E(R_n-R_1)=&E(R_n-\mu)=\frac{1}{(n-1)!} \mathrm{Cov}\big(X,G_{n-1}^{-1}F(X)\big),\ \ \ n\geq 1, \end{align*} where $\mu=E(R_1)=E(X_1)$.
The lower record values in a sequence of i.i.d. r.v.s $X_1, X_2, \dots$ can be defined in a similar manner. The sequence of lower record times $L(n)$, $n = 1, 2, \dots$, is defined as $L(1) = 1$ and \[ L(n + 1) = \min \{j : j > L(n), X_j < X_{L(n)}\}, \quad n\geq1. \] Then the $n$th lower record value is defined by ${\tilde R}_n=X_{L(n)}$. The survival function of ${\tilde R}_n$ is given by \[ {\bar F}_n^L(t)=F(t)\sum_{x=n}^{\infty} \frac{[{\tilde \Lambda}(t)]^x}{x!}, \quad t > 0,~n = 1, 2,\dots, \] in which ${\tilde \Lambda}(t)=-\log F(t)$; see \cite{Arnold et al. (1998)}.
Let ${\tilde R}_n$ denote the $n$th lower record. Then, it can be shown that \begin{align*}
E({\tilde R}_n-{\tilde R}_1)= E({\tilde R}_n-\mu)=\frac{1}{(n-1)!} \mathrm{Cov}\big(X,[{\tilde \Lambda}(X)]^{n-1}\big),\ \ \ n\geq 1, \end{align*}
where ${\tilde \Lambda}(t)=-\log F(t)$. Therefore, the expectation of the difference between the $n$th upper and lower records has the following covariance representation: \begin{align*} E({R}_n-{\tilde R}_n)=&\frac{1}{(n-1)!} \mathrm{Cov}\big(X,[{\Lambda}(X)]^{n-1}\big)-\frac{1}{(n-1)!} \mathrm{Cov}\big(X,[{\tilde \Lambda}(X)]^{n-1}\big)\\
=& \frac{1}{(n-1)!} \mathrm{Cov}\big(X,K_n^{-1}(F(X))\big), \end{align*} where $K_n(x)$ is a DF with inverse $K_n^{-1}(u)=(-\ln(1-u))^{n-1}-(-\ln(u))^{n-1}$, $0<u<1$. \end{description}
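For $X$ unit exponential these representations are easy to check: $\Lambda(t)=t$, the $n$th upper record has mean $n$, and $E(R_n-R_1)=n-1$. The sketch below (assuming `scipy`) computes both sides independently, the left from the record survival function quoted above and the right from the covariance form.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

# Check E(R_n - R_1) = Cov(X, Lambda(X)^{n-1}) / (n-1)! for X ~ Exp(1),
# where Lambda(t) = t, using the record survival function
# bar F_n^U(t) = bar F(t) * sum_{x=0}^{n-1} Lambda(t)^x / x!.
n = 3
sf_record = lambda t: np.exp(-t) * sum(t**k / factorial(k) for k in range(n))
ER_n = quad(sf_record, 0, np.inf)[0]          # E(R_n) via the survival function
mu = 1.0                                      # E(R_1) = E(X) = 1 for Exp(1)

pdf = lambda x: np.exp(-x)
m = lambda k: quad(lambda x: x**k * pdf(x), 0, np.inf)[0]     # E[X^k]
rhs = (m(n) - m(1) * m(n - 1)) / factorial(n - 1)             # covariance form
print(round(ER_n - mu, 6), round(rhs, 6))     # both equal 2.0 when n = 3
```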
\section{A Unified Measure of Correlation}
We define our unified measure of correlation between $X$ and $Y$ as follows:
\begin{Definition}\em
Let $X$ and $Y$ be two continuous r.v.s with joint DF $F(x,y)$, $(x,y)\in \mathbb{R}^2$, and continuous marginal DFs $F(x)$ and $G(y)$, respectively. Let $H$ be a continuous DF. Then the $H$-transformed correlation between $X$ and $Y$, denoted by $\beta_H(X,Y)$, is defined as
\begin{eqnarray}
\beta_H(X,Y)=\frac{\mathrm{Cov}(X, H^{-1}G(Y))}{\mathrm{Cov}(X, H^{-1}F(X))},\label{index}
\end{eqnarray}
provided that all expectations exist and $\mathrm{Cov}(X, H^{-1}F(X))>0$.
\end{Definition}
It is clear that, for a continuous r.v. $Y$, the r.v. $H^{-1}G(Y)$ is distributed as a r.v. $W$ with DF $H$. Hence, $\beta_H(X,Y)$ measures the association between $X$ and a function of $Y$, namely the transformation $H^{-1}$ applied to $G(Y)$. The $H$-transformed correlation between $Y$ and $X$ can be defined similarly as
\begin{eqnarray*}
\beta_H(Y,X)=\frac{\mathrm{Cov}(Y, H^{-1}F(X))}{\mathrm{Cov}(Y, H^{-1}G(Y))},
\end{eqnarray*}
provided that $\mathrm{Cov}(Y, H^{-1}G(Y))>0$.
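A plug-in estimator of $\beta_H$ replaces $F$ and $G$ by empirical DFs, with plotting positions $r/(n+1)$ so that $H^{-1}$ is evaluated inside $(0,1)$. The helper `beta_H` below is a hypothetical sketch assuming `numpy`/`scipy` and no ties; it reproduces the exact values $\pm 1$ for comonotone and (with symmetric $H$) antimonotone pairs.

```python
import numpy as np
from scipy.stats import rankdata, logistic

def beta_H(x, y, Hinv=logistic.ppf):
    """Plug-in estimate of beta_H(X,Y) = Cov(X, H^{-1}G(Y)) / Cov(X, H^{-1}F(X));
    ranks/(n+1) play the roles of G(y_i) and F(x_i). Assumes no ties."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    num = np.cov(x, Hinv(rankdata(y) / (n + 1)))[0, 1]
    den = np.cov(x, Hinv(rankdata(x) / (n + 1)))[0, 1]
    return num / den

rng = np.random.default_rng(2)
x = rng.normal(size=500)
print(beta_H(x, x))      # exactly 1 when Y = X
print(beta_H(x, -x))     # -1, since the standard logistic H is symmetric
```

The default choice `Hinv=logistic.ppf` corresponds to the log-odds measure $\alpha(X,Y)$ of (\ref{eqq2}); any other continuous quantile function can be passed instead.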
In what follows, we study the properties of $\beta_H(X,Y)$ and show that, {under some mild conditions on $H$}, it has the necessary requirements of a correlation coefficient. Before that, we give the following corollary, showing that $\beta_H(X,Y)$ subsumes some well-known measures of association as special cases. \begin{Corollary}
{\rm The correlation index $\beta_H(X,Y)$ in (\ref{index}) gives the following measures of association as special cases:
\begin{description}
\item [(a)] { If we assume that $H= G$ then we have
\begin{eqnarray}
\beta_H(X,Y)
&=& \frac{\mathrm{Cov}(X, Y)}{\mathrm{Cov}(X, G^{-1}F(X))}\nonumber\\
&=& \frac{\mathrm{Cov}(X, Y)}{\mathrm{Cov}^{\frac{1}{2}}(X, G^{-1}F(X))\mathrm{Cov}^{\frac{1}{2}}(Y, F^{-1}G(Y))},\label{rhort}
\end{eqnarray}
where the last equality follows from (\ref{eqe11}). In the following, we call (\ref{rhort}) the
{\it $\rho$-transformed correlation} between $X$ and $Y$ and denote it by $\rho_t(X,Y)$.
The measure $\rho_t(X,Y)$ is a correlation index proportional to the Pearson correlation coefficient
$\rho(X,Y)$ in (\ref{prhop}). In fact
$\rho_t(X,Y)= a \rho(X,Y),$
where
\[a=\frac{\sigma_X\sigma_Y}{\mathrm{Cov}^{\frac{1}{2}}(X, G^{-1}F(X))\mathrm{Cov}^{\frac{1}{2}}(Y, F^{-1}G(Y))}.\]
In particular, if the marginal DFs $F$ and $G$ are identical, then $a=1$.
(Note that a sufficient condition for $F=G$ is that the joint DF of $(X,Y)$ be exchangeable.
Recall that a random vector $(X,Y)$ is said to have an exchangeable DF if the vectors $(X,Y)$ and $(Y,X)$ are identically distributed.)
In the general case, based on (\ref{covee}), we always have
$$\mathrm{Cov}^{{2}}(X, G^{-1}F(X))=\mathrm{Cov}^{{2}}(Y, F^{-1}G(Y))\leq \sigma^{2}_{X}\sigma^{2}_{Y}.$$
Hence, we get that $a^2\geq1$. This, in turn, implies that the following interesting inequality holds between
$\rho(X,Y)$ and $\rho_t(X,Y)$:
\begin{equation}\label{erho}
{ 0\leq |\rho(X,Y)|\leq |\rho_t(X,Y)|.}
\end{equation}
{We will show in Theorem \ref{th1q} that when $X$ and $Y$ are positively correlated, then $\rho(X,Y)\leq \rho_t(X,Y)\leq 1$, and when $X$ and $Y$ are negatively correlated and $G$ or $F$ is a symmetric DF, then $-1\leq \rho_{t}(X,Y)\leq \rho(X,Y)$. These inequalities indicate that $\rho_t(X,Y)$, as a measure of the strength and direction of the linear relationship between two r.v.s, shows a stronger correlation than the Pearson correlation $\rho(X,Y)$ does. This may be due to the fact that the normalizing factor $\sigma_X$ $(\sigma_Y)$ in the denominator of $\rho(X,Y)$ depends only on the distribution $F$ $(G)$, while the normalizing factor $\mathrm{Cov}^{\frac{1}{2}}(X, G^{-1}F(X))$ $(\mathrm{Cov}^{\frac{1}{2}}(Y, F^{-1}G(Y)))$ in the denominator of $\rho_t(X,Y)$ depends on both DFs $F$ and $G$.
}}
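The inequality (\ref{erho}) also holds for the empirical counterparts: the plug-in $\hat G^{-1}\hat F(x_i)$ is the $y$ order statistic with the same rank as $x_i$, and the resulting denominator can never exceed $s_X s_Y$ by the Cauchy--Schwarz inequality. The helper `rho_t` below is a hypothetical sketch (assuming `numpy`/`scipy`, continuous data with no ties).

```python
import numpy as np
from scipy.stats import rankdata

def rho_t(x, y):
    """Plug-in rho-transformed correlation: hat G^{-1} hat F(x_i) is the
    y order statistic sharing the rank of x_i (and symmetrically for y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xs, ys = np.sort(x), np.sort(y)
    rx = rankdata(x).astype(int) - 1      # 0-based ranks (no ties assumed)
    ry = rankdata(y).astype(int) - 1
    num = np.cov(x, y)[0, 1]
    den = np.sqrt(np.cov(x, ys[rx])[0, 1] * np.cov(y, xs[ry])[0, 1])
    return num / den

rng = np.random.default_rng(3)
x = rng.normal(size=2000)
y = 0.5 * x + rng.exponential(size=2000)        # a skewed, non-normal pair
print(abs(rho_t(x, y)) >= abs(np.corrcoef(x, y)[0, 1]))   # True: eq. (erho)
```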
\item [(b)] If $H$ is uniform on interval $(0,1)$, i.e., $H(x)=x$, $0<x<1$, then $\beta_H(X,Y)$ reduces to the Gini correlation in (\ref{eee1}),
$$\Gamma(X,Y)=\frac{\mathrm{Cov}(X,G(Y))}{\mathrm{Cov}(X,F(X))}.$$
\item [(c)] { If $H$ is the Pareto distribution ($0<\nu<1$) or the power distribution ($\nu>1$) given below,
\begin{align*}
H(x)=\left\{
\begin{array}{ll}
1-\Big(\frac{1}{x}\Big)^{\frac{1}{1-\nu}}, & \hbox{$x>1, ~ 0<\nu<1$;}\\
1-(1-x)^\frac{1}{\nu-1}, & \hbox{$0<x<1, ~ \nu>1$;}\\
0, & \hbox{o.w.,} \end{array}
\right. \end{align*} we get the extended Gini $(\mathrm{EGini_{\nu}})$ correlation defined as \[ \Gamma(\nu, X,Y)=\frac{\mathrm{Cov}(X,\bar{G}^{\nu-1}(Y))}{\mathrm{Cov}(X,\bar{F}^{\nu-1}(X))}, \quad \nu>0.\] Note that for $\nu=2$ we arrive at the Gini correlation.}
\item [(d)] If $H(x)=\frac{1}{1+e^{-x}}$, $x\in \mathbb{R}$, the standard Logistic distribution, then $\beta_H(X,Y)$ becomes the association measure in (\ref{eqq2}), defined by \cite{Asadi(2017)}, which measures the correlation between $X$ and the log-odds rate of $Y$.
\end{description}} \end{Corollary}
\qquad Before giving the main properties of the correlation in (\ref{index}), we present the following expressions, which indicate that the correlation coefficient $\beta_H(X,Y)$ has representations in terms of the joint DF $F(x,y)=P(X\leq x,Y\leq y)$ and the joint survival function $\bar{F}(x,y)=P(X>x,Y>y)$. In the sequel, we assume that all the integrals are from $-\infty$ to $\infty$ unless stated otherwise. The correlation $\beta_{H}(X,Y)$ can be expressed as
\begin{align*}
\beta_H(X,Y)=&\frac{1}{\mathrm{Cov}(X,H^{-1}F(X))}\int\int\left({F}(x,y)-{F}(x){G}(y)\right)dxdH^{-1}G(y)\\
=&\frac{1}{\mathrm{Cov}(X,H^{-1}F(X))}\int\int\left(\bar{F}(x,y)-\bar{F}(x)\bar{G}(y)\right)dxdH^{-1}G(y).
\end{align*}
The validity of these expressions can be verified from Theorem 1 of \cite{R9} under the assumptions that the expectations exist and $H^{-1}G(y)$ is a function of bounded variation.
The following theorem gives some properties of ${\beta}_H(X,Y)$. \begin{Theorem}\label{th1q}
The correlation ${\beta}_H(X,Y)$ satisfies the following properties: \begin{description}
\item [(a)] { For continuous r.v.s $X$ and $Y$, ${\beta}_H(X,Y)\leq 1$ and when $H$ is a symmetric DF, $-1\leq {\beta}_H(X,Y)\leq 1$.}
\item [(b)] The maximum (minimum) value of ${\beta}_H(X,Y)$ is achieved, if $Y$ is a monotone increasing (decreasing) function of $X$.
\item[(c)] For independent r.v.s $X$ and $Y$, ${\beta}_H(X,Y)={\beta}_H(Y,X)=0$.
\item[(d)] ${\beta}_H(X,Y)=-{\beta}_H(-X,Y)=-{\beta}_H(X,-Y)={\beta}_H(-X,-Y).$
\item[(e)] The correlation measure ${\beta}_H(X,Y)$ is invariant under all strictly increasing transformations of $Y$.
\item[(f)] ${\beta}_H(X,Y)$ is invariant under changing the location and scale of $X$ and $Y$.
\item [(g)] If the joint DF of $X$ and $Y$ is exchangeable, then ${\beta}_H(X,Y)={\beta}_H(Y,X)$. \end{description} \end{Theorem}
\begin{proof} We provide the proofs of parts (a) and (g). The proofs of the other parts are straightforward (see \cite{R3}, p. 41, where the authors study the properties of the Gini correlation $\Gamma(X,Y)$). \begin{description} \item[(a)] {First, we show that ${\beta}_H(X,Y)\leq 1$ for any continuous DF $H$. To this end, we need to show that $E(XH^{-1}G(Y))\leq E(XH^{-1}F(X)).$ Both $X$ and $H^{-1}F(X)$ are increasing functions of $X$. Hence $E(XH^{-1}G(Y))$ achieves its maximum value when $H^{-1}G(Y)$ is an increasing function of $X$ (see \cite{R3}, p. 41). This implies that $H^{-1}F(X)=H^{-1}G(Y)$, which in turn implies that the maximum value is achieved at $E(XH^{-1}F(X))$, and hence ${\beta}_H(X,Y)\leq 1$.
Now, let $H$ be a symmetric DF about a constant $a$. To have $-1\leq {\beta}_H(X,Y)$, it suffices to show that $-\mathrm{Cov}(X,H^{-1}F(X))\leq \mathrm{Cov}(X,H^{-1}G(Y)).$
From \cite{R3}, p. 41, $E(XH^{-1}G(Y))$ achieves its minimum value when $H^{-1}G(Y)$ is a decreasing function of $X$. This results in $H^{-1}G(Y)=H^{-1}(1-F(X))=2a-H^{-1}(F(X))$, which in turn implies that $2aE(X)-E(XH^{-1}F(X))\leq E(XH^{-1}G(Y))$ and hence $-\mathrm{Cov}(X,H^{-1}F(X))\leq \mathrm{Cov}(X,H^{-1}G(Y))$. Hence, we have $-1\leq {\beta}_H(X,Y)$. }
\item[(g)] As the random vector $(X,Y)$ has exchangeable distribution, $(X,Y)$ is identically distributed as $(Y,X)$ and hence the marginal distributions of $X$ and $Y$ are identical, i.e., $F=G$. Hence, we can write \begin{align*} {\beta}_H(X,Y)=&~\frac{\mathrm{Cov}(X,H^{-1}G(Y))}{\mathrm{Cov}(X,H^{-1}F(X))}\\
=&~\frac{\mathrm{Cov}(X,H^{-1}F(Y))}{\mathrm{Cov}(X,H^{-1}G(X))}\\
=&~\frac{\mathrm{Cov}(Y,H^{-1}F(X))}{\mathrm{Cov}(Y,H^{-1}G(Y))}={\beta}_H(Y,X). \end{align*} \end{description} \end{proof}
The following theorem proves that for the bivariate normal distribution, the correlation $\beta_{H}(X,Y)$ equals the Pearson correlation $\rho(X,Y)$.
\begin{Theorem}
Let $X$ and $Y$ have bivariate normal distribution with Pearson correlation coefficient $\rho(X,Y)=\rho$.
Then, for any continuous DF $H$ with finite mean $\mu_{H}$, $${\beta}_H(X,Y)={\beta}_H(Y,X)=\rho.$$ \end{Theorem} \begin{proof} Assume that the marginal DFs of $X$ and $Y$ are $F$ and $G$, with means $\mu_F$ and $\mu_G$ and positive variances $\sigma^2_{F}$ and $\sigma^2_{G}$, respectively. Further let $Z$ denote the standard normal r.v. with DF $\Phi$. It is well known that for the bivariate normal distribution we have
$$E(X|Y)=\mu_F +\rho \sigma_F\frac{(Y-\mu_G)}{\sigma_G}.$$ Using this we can write \begin{align*}
\mathrm{Cov}(X,H^{-1}G(Y))=&~E_Y\left[\left(E(X|Y)-\mu_{F}\right)\left(H^{-1}G(Y)-\mu_{H}\right)\right]\\ =&~ \rho\sigma_{F} E_Y\Big[\big(\frac{Y-\mu_G}{\sigma_G}\big)H^{-1}G(Y)\Big]\\
=&~\rho\sigma_F \int \big(\frac{y-\mu_G}{\sigma_G}\big) H^{-1}G(y) dG(y)\\ =&~\rho\frac{\sigma_F}{\sigma_G} \int \big({G^{-1}\Phi(z)-\mu_G}\big) H^{-1}\Phi(z) d\Phi(z)\\ =& ~ \rho\frac{\sigma_F}{\sigma_G} \big( \int G^{-1}\Phi(z) H^{-1}\Phi(z) d\Phi(z)-\mu_G\mu_H\big)\\ =&~\rho\frac{\sigma_F}{\sigma_G} \mathrm{Cov}(G^{-1}\Phi(Z),H^{-1}\Phi(Z))\\ =&~\rho\sigma_{F} \mathrm{Cov} (Z, H^{-1}\Phi(Z)), \end{align*} where the last equality follows from the fact that $G^{-1}\Phi(z)=\sigma_G z+\mu_G$. On the other hand, we can similarly show that $\mathrm{Cov}(X,H^{-1}F(X))=\sigma_{F}\mathrm{Cov}(Z,H^{-1}\Phi(Z)).$ Hence, we have ${\beta}_H(X,Y)=\rho.$ \end{proof}
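The theorem can be illustrated by a seeded Monte Carlo sketch (assuming `numpy`/`scipy`): for a simulated bivariate normal pair with $\rho=0.6$, the plug-in estimate of $\beta_H$ is close to $0.6$ for two quite different choices of $H$.

```python
import numpy as np
from scipy.stats import rankdata, logistic, norm

rng = np.random.default_rng(4)
rho = 0.6
z1, z2 = rng.normal(size=(2, 100000))
x = z1
y = rho * z1 + np.sqrt(1 - rho**2) * z2       # Corr(x, y) = 0.6

def beta_H(x, y, Hinv):
    # plug-in estimate with empirical DFs evaluated at ranks/(n+1)
    n = len(x)
    num = np.cov(x, Hinv(rankdata(y) / (n + 1)))[0, 1]
    den = np.cov(x, Hinv(rankdata(x) / (n + 1)))[0, 1]
    return num / den

for Hinv in (logistic.ppf, norm.ppf):         # two choices of H
    print(round(beta_H(x, y, Hinv), 2))       # both close to 0.6
```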
\qquad Assume that $X$ and $Y$ have joint bivariate DF $F(x,y)$ with marginal DFs $F(x)$ and $G(y)$. Then $F(x,y)$ satisfies the Fr\'{e}chet bounds inequality
$$F_0(x,y)=\max\{F(x)+G(y)-1,0\}\leq F(x,y)\leq \min\{F(x),G(y)\}=F_1(x,y).$$
The Fr\'{e}chet bounds $F_0(x,y)$ and $F_1(x,y)$ are themselves bivariate distributions, known as the minimal and maximal distributions, respectively. These distributions represent perfect negative and positive dependence between the corresponding r.v.s $X$ and $Y$, respectively, in the sense that \lq\lq the joint distribution of $X$ and $Y$ is $F_0(x,y)$ ($F_1(x,y)$) if and only if $Y$ is a decreasing (increasing) function of $X$\rq\rq\ (see \cite{R13}).
In the following theorem we prove that, under some conditions, the extremes of the range of ${\beta}_H(X,Y)$,
i.e., $-1$ and $1$, are attainable by the Fr\'{e}chet bivariate minimal and maximal distributions,
respectively. In other words, we show that for the lower and upper bounds of the Fr\'{e}chet inequality
we have ${\beta}_H(X,Y)=-1$ and ${\beta}_H(X,Y)=1$, respectively.
\begin{Theorem}\label{Fer}
Let $X$ and $Y$ be two continuous r.v.s with DFs ${F}(x)$ and ${G}(y)$, respectively, and $H$ be a continuous DF.
\begin{itemize}
\item[(a)] If $(X,Y)$ has joint DF $F_1(x,y)$ then ${{\beta}_H}(X,Y)=1$,
\item[(b)] If $H$ is symmetric and $(X,Y)$ has joint DF $F_0(x,y)$ then ${{\beta}_H}(X,Y)=-1$.
\end{itemize}
\end{Theorem} \begin{proof} \begin{description}
\item[(a)] Let us define the sets $A_x=\{y|y\geq{G}^{-1}({F}(x))\}$ and $A^c_x=\{y|y<{G}^{-1}({F}(x))\}$.
Then, we have
\begin{align}
\mathrm{Cov}(X,H^{-1}G(Y))=&\int\int\left({F}(x,y)-F(x)G(y)\right)dH^{-1}G(y)dx\nonumber\\
=& \int\int\Big(\min\{{F}(x),{G}(y)\}-F(x)G(y)\Big)dH^{-1}G(y)dx\nonumber\\
=&\int{F}(x)\int_{A_x}\bar{G}(y)dH^{-1}G(y)dx+\int\bar{F}(x)\int_{A^c_x}{G}(y)dH^{-1}G(y)dx. \label{eqf}
\end{align}
Under the assumptions of the theorem, we have
\begin{equation}
\int_{A_x}\bar{G}(y)dH^{-1}G(y)=-\bar{F}(x)H^{-1}F(x)+\int_{F(x)}^{1}H^{-1}(u)du,\label{eqf1}
\end{equation}
and
\begin{equation}
\int_{A^c_x}{G}(y)dH^{-1}G(y)=F(x)H^{-1}F(x)-\int_{0}^{F(x)}H^{-1}(u)du.\label{eqf2}
\end{equation} From (\ref{eqf}), (\ref{eqf1}) and (\ref{eqf2}), we get
\begin{align}
\mathrm{Cov}(X,H^{-1}G(Y))=&\lim_{a\rightarrow -\infty}\Big\{\int_{a}^{\infty} F(x)\int_{F(x)}^{1}H^{-1}(u)dudx-\int_{a}^{\infty}\bar{F}(x)\int_{0}^{F(x)}H^{-1}(u)dudx\Big\}\nonumber\\
=&\lim_{a\rightarrow -\infty}\Big\{\int_{a}^{\infty} F(x)\int_{F(x)}^{1}H^{-1}(u)dudx\nonumber \\
&-\int_{a}^{\infty}\bar{F}(x)\Big(\int_{0}^{1}H^{-1}(u)du-
\int_{F(x)}^{1}H^{-1}(u)du\Big)dx\Big\}\nonumber\\
=&\lim_{a\rightarrow -\infty}\Big\{ \int_{a}^{\infty}\int^{1}_{F(x)}H^{-1}(u)dudx-\int_{a}^{\infty}\bar{F}(x)dx \int_{0}^{1}H^{-1}(u)du\Big\}\nonumber\\
=& \lim_{a\rightarrow -\infty}\Big\{\int_{0}^{1}H^{-1}(u)\Big(\int_a^{F^{-1}(u)} dx -\int_a^\infty {\bar F}(x)dx\Big)du\Big\}\nonumber\\
=& \lim_{a\rightarrow -\infty}\Big\{\int_{0}^{1}H^{-1}(u)\big(F^{-1}(u)-a-\mu_F+a\big)du\Big\}\nonumber\\
=&\int_{0}^{1}F^{-1}(u)H^{-1}(u)du-\mu_F\mu_H\nonumber\\
=&~\mathrm{Cov}(F^{-1}(U),H^{-1}(U))\nonumber\\
=&~\mathrm{Cov}(X,H^{-1}(F(X)))\nonumber.
\end{align}
This shows that $\beta_{H}(X,Y)=1$.
\item[(b)] In this case we define $B_x=\{y|y\geq{G}^{-1}(\bar{F}(x))\} $ and $B^c_x=\{y|y<{G}^{-1}(\bar{F}(x))\}.$ Then
\begin{align*}
\mathrm{Cov}(X,H^{-1}G(Y))=& \lim_{a\rightarrow -\infty}\int_{a}^{\infty}\int\Big(\max\{{F}(x)+{G}(y)-1,0\}-F(x)G(y)\Big)dH^{-1}G(y)dx\\
=&\lim_{a\rightarrow -\infty}\Big\{-\int_{a}^{\infty}\int_{B_x}\bar{F}(x)\bar{G}(y)dH^{-1}G(y)dx-\int_{a}^{\infty}\int_{B^c_x}F(x)G(y)dH^{-1}G(y)dx\Big\}.
\end{align*} Therefore, using the same procedure as in part (a), we can write \begin{align*} \mathrm{Cov}(X,H^{-1}G(Y))=&\lim_{a\rightarrow -\infty}\Big\{\int_{a}^{\infty}F(x)\int_{0}^{\bar{F}(x)}H^{-1}(u)dudx-
\int_{a}^{\infty}\bar{F}(x)\int_{\bar{F}(x)}^{1}H^{-1}(u)dudx\Big\}\\
=& \lim_{a\rightarrow -\infty}\Big\{ \int_{a}^{\infty}\int_{0}^{\bar{F}(x)}H^{-1}(u)dudx
-\int_{a}^{\infty}\bar{F}(x) dx \int_{0}^{1}H^{-1}(u)du\Big\}\\
=& \lim_{a\rightarrow -\infty}\Big\{ \int_0^1 H^{-1}(1-u) \big(F^{-1}(u)-a-\mu_F+a\big)du\Big\}\\
\stackrel{c}{=}& \int_0^1 (2\mu_H-H^{-1}(u)) F^{-1}(u) du-\mu_F\mu_H\\
=& -\int_0^1 H^{-1}(u) F^{-1}(u) du+\mu_F\mu_H\\
=&-\mathrm{Cov}(H^{-1}(U),F^{-1}(U))\\
=&-\mathrm{Cov}(X,H^{-1}(F(X))), \end{align*}
where the equality ($c$) follows from the assumption that $H$ is symmetric. Hence, we get that $\beta_{H}(X,Y)=-1$. This completes the proof of the theorem. \end{description} \end{proof} \begin{Remark}\em { It should be pointed out that the symmetry condition imposed on $H$ in part (b) of Theorem
\ref{Fer} cannot be dropped in general. As a counterexample, it can easily be verified that if $H$ is
exponential, the upper bound $1$ for $\beta_{H}(X,Y)$ is attainable by the Fr\'echet bivariate maximal
distribution; however, the lower bound $-1$ is not attainable by the Fr\'echet bivariate minimal
distribution.}
\end{Remark}
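The attainability statement and the counterexample in the remark can be illustrated with a small Monte Carlo sketch of our own (not part of the paper): take $X\sim\mathrm{Exp}(1)$ and $H$ exponential; under the Fréchet maximal (comonotone) coupling $G(Y)=F(X)$, while under the minimal (countermonotone) one $G(Y)=1-F(X)$, so the marginal $G$ itself cancels from $\beta_H$.

```python
import math, random

# Monte Carlo sketch (illustrative, our own): with X ~ Exp(1) and H
# exponential, beta_H attains +1 for the comonotone coupling but stays
# strictly above -1 for the countermonotone one, as the remark states.
random.seed(1)
n = 200_000
xs = [random.expovariate(1.0) for _ in range(n)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

F = lambda x: 1.0 - math.exp(-x)          # marginal DF of X
Hinv = lambda u: -math.log(1.0 - u)       # H^{-1} for H(x) = 1 - e^{-x}

denom = cov(xs, [Hinv(F(x)) for x in xs])                # Cov(X, H^{-1}F(X))
b_max = cov(xs, [Hinv(F(x)) for x in xs]) / denom        # G(Y) = F(X)
b_min = cov(xs, [Hinv(1.0 - F(x)) for x in xs]) / denom  # G(Y) = 1 - F(X)
print(round(b_max, 4), round(b_min, 4))   # b_max = 1.0, b_min ≈ -0.645 > -1
```

The countermonotone value is approximately $-(\pi^2/6-1)\approx-0.645$, confirming that $-1$ is not reached for exponential $H$.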
\qquad A well-known class of bivariate distributions, extensively studied in the statistical literature, is the FGM family (see \cite{R10}). The joint DF $F(x,y)$ of the r.v.s $X$ and $Y$ with continuous marginal DFs $F(x)$ and $G(y)$, respectively, is said to be a member of the FGM family if $$F(x,y)=F(x)G(y)\left(1+\gamma\bar{F}(x)\bar{G}(y)\right), $$ where $\gamma\in[-1,1]$ is the dependence parameter between $X$ and $Y$. Clearly, for $\gamma=0$, $X$ and $Y$ are independent. It is well known that for the FGM family the Pearson correlation coefficient $\rho(X,Y)$ lies in the interval $[-1/3,1/3]$, where the maximum is attained when the marginal distributions are uniform (\cite{R11}). \cite{R12} proved that in the FGM family the Gini correlation $\Gamma(X,Y)$ lies in $[-{1}/{3},{1}/{3}]$ for any marginal DFs $F$ and $G$.
The following theorem gives an expression for ${\beta}_H(X,Y)$ in FGM family. \begin{Theorem}
Under the assumption that $F$, $G$ and $H$ have finite means, the association measure ${\beta}_H(X,Y)$, for the FGM class, is given by
\begin{eqnarray}\label{betaFGM}
{\beta}_H(X,Y)=\gamma\frac{\mathrm{GMD}(F)\mathrm{GMD}(H)}{4\mathrm{Cov}(X,H^{-1}F(X))}.
\end{eqnarray}
\end{Theorem} \begin{proof} \begin{align*}
\mathrm{Cov}(X,H^{-1}G(Y))=& \int\int\left({F}(x,y)-{F}(x){G}(y)\right)dH^{-1}G(y)dx\\
=&~\gamma\int\int F(x)\bar{F}(x)G(y)\bar{G}(y)dH^{-1}G(y)dx\\
=&~\gamma\int F(x)\bar{F}(x)dx \int G(y)\bar{G}(y)dH^{-1}G(y)\\
=&~\gamma\int F(x)\bar{F}(x)dx\int H(u)\bar{H}(u)du
=&\frac{\gamma}{4}\mathrm{GMD}(F) \mathrm{GMD}(H), \end{align*}
where $\bar{H}=1-H$. Hence, ${\beta}_H (X,Y)$ can be represented as \begin{eqnarray*}
{\beta}_H(X,Y)=\gamma\frac{\mathrm{GMD}(F) \mathrm{GMD}(H)}{4\mathrm{Cov}(X,H^{-1}F(X))}.
\end{eqnarray*} This completes the proof. \end{proof}
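As a quick numerical sanity check of the formula (our own illustration, not part of the derivation): for $F=H=\mathrm{Exp}(1)$ we have $\mathrm{GMD}(F)=\mathrm{GMD}(H)=1$ and $\mathrm{Cov}(X,H^{-1}F(X))=\mathrm{Var}(X)=1$, so the theorem gives $\beta_H=\gamma/4$, matching the $\mp 0.25$ exponential entries reported in the tables.

```python
import math

# Numeric sanity check (illustrative): GMD(F) = 2 * int F(x)(1-F(x)) dx
# equals 1 for Exp(1), so for F = H = Exp(1) the FGM formula gives
# beta_H = gamma * GMD(F) * GMD(H) / (4 Cov(X, H^{-1}F(X))) = gamma / 4.
def gmd_exp(upper=50.0, steps=200_000):
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h                 # midpoint rule
        Fx = 1.0 - math.exp(-x)
        total += Fx * (1.0 - Fx) * h
    return 2.0 * total

gamma = 1.0
gmd = gmd_exp()
beta = gamma * gmd * gmd / (4.0 * 1.0)    # Cov(X, H^{-1}F(X)) = Var(X) = 1
print(round(gmd, 4), round(beta, 4))      # ≈ 1.0, 0.25
```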
{It should be pointed out that the correlation index ${\beta}_H(X,Y)$ in the FGM family does not depend on the DF $G$, which is the one transmuted by $H$. Also, in the case where $H$ is the uniform DF on the interval $(0,1)$, ${\beta}_H(X,Y)$ reduces to the Gini correlation, which is free of $F$, and its values lie in $[-1/3,1/3]$. If $H=G$, we arrive at the following formula for $\rho_t(X,Y)$: \begin{eqnarray*}
{\rho}_t(X,Y) &=&\frac{\gamma}{4}\frac{\mathrm{GMD}(F)}{\mathrm{Cov}^{1/2}(X,G^{-1}F(X))} \frac{\mathrm{GMD}(G)}{\mathrm{Cov}^{1/2}(Y,F^{-1}G(Y))}.
\end{eqnarray*} Table \ref{tabrhotfgm} gives the range of possible values of $\rho(X,Y)$ and $\rho_{t}(X,Y)$ in the FGM family,
for different choices of DFs $F$ and $G$. When one of the two r.v.s is a uniform r.v. $U$, we obtain the Gini correlation and hence \[ \rho_t(U,X)=\rho_t(X,U)=\gamma\frac{\mathrm{GMD}(X)\mathrm{GMD}(U)}{4\mathrm{Cov}(X,F(X))}=\frac{\gamma}{3}. \] This implies that the range of possible values of $\rho_t(X,U)$ is $[-1/3,1/3]$. As seen in the table, $\rho_t(X,Y)$, compared to the Pearson correlation $\rho(X,Y)$, shows a wider range of correlation between the two r.v.s.
\begin{table}[!h] \centering \caption{\small The ranges of $\rho$, and $\rho_t$ correlations for some distributions in FGM family.}\label{tabrhotfgm} \small \begin{tabular}{lccccccc}
\toprule
& & \multicolumn{5}{c}{DF of $Y$} \\ \cline{3-7}
DF of $X$ & Index &
Uniform & Exponential & Rayleigh & Logistic & Normal \\[2mm]
\hline\hline
Uniform & $\rho(X,Y)$ & $\mp0.33333$ & $\mp0.28867$ & $\mp 0.32352$ & $\mp 0.31831$ & $\mp 0.32573$\\
& $\rho_t(X,Y)$ & $\mp0.33333$ & $\mp0.33333$ & $\mp0.33333$ & $\mp0.33333$ & $\mp0.33333$ \\ \hline
Exponential & $\rho(X,Y)$ & $\mp 0.28867$ & $\mp 0.25000$ & $\mp 0.28016$ & $\mp 0.27566$ & $\mp 0.28209$ \\
& $\rho_t(X,Y)$ & $\mp 0.33333$ & $\mp 0.25000$ & $\mp 0.29289$ & $\mp 0.30396$ & $\mp 0.31233$ \\
\hline
Rayleigh & $\rho(X,Y)$ & $\mp 0.32352$ & $\mp 0.28016$ & $\mp0.31396$ & $\mp 0.30892$ & $\mp 0.31613$ \\
& $\rho_t(X,Y)$ & $\mp 0.33333$ & $\mp 0.29289$ & $\mp0.31396$ & $\mp 0.31549$ & $\mp 0.32057$
\\ \hline
Logistic & $\rho(X,Y)$ & $\mp 0.31831$ & $\mp 0.27566$ & $\mp 0.30892$ & $\mp 0.30396$ & $\mp 0.31105$\\
& $\rho_t(X,Y)$ & $\mp 0.33333$ & $\mp 0.30396$ & $\mp 0.31549$ & $\mp 0.30396$ & $\mp 0.31233$ \\ \hline
Normal & $\rho(X,Y)$ & $\mp 0.32573$ & $\mp 0.28209$ & $\mp 0.31613$ & $\mp 0.31105$ & $\mp 0.31831$\\
& $\rho_t(X,Y)$ & $\mp 0.33333$ & $\mp 0.31233$ & $\mp 0.32057$ & $\mp 0.31233$ & $\mp 0.31831$ \\
\bottomrule
\end{tabular} \end{table} }
In the following, we give some examples in which $\beta_{H}(X,Y)$ in (\ref{index}) is computed for different transformation DFs $H$. The following choices for $H$ are considered: \begin{itemize}
\item Exponential distribution $H(x)=1-e^{-x},~ x>0$: Cumulative residual entropy based (CRE-Based) correlation.
\item Logistic distribution, $H(x)= \frac{1}{1+e^{-x}},~x\in \mathbb{R}$: Odds ratio based (OR-Based) correlation. \item Pareto distribution, $H(x)=1-\Big(\frac{1}{x}\Big)^{2},~ x>1$: Extended Gini correlation with parameter $\nu=0.5$ ($\mathrm{EGini}_{0.5})$.
\item Uniform distribution, $H(x)=x,~~0<x<1$: Gini correlation.
\item Power distribution, $H(x)= 1-(1-x)^{\frac{1}{2}}, ~ 0<x<1$: Extended Gini correlation with parameter $\nu=3$ ($\mathrm{EGini}_3$). \end{itemize} \begin{Example} {\rm Table \ref{tablefgm} represents the values of $\beta_{H}(X,Y)$, in FGM family, for different choices of transformation DFs $H$ and different DFs $F$.} \end{Example} \begin{table}[!h] \caption{\small The range of $\beta_{H}(X,Y)$ for different choices of $H$ and $F$ in FGM family.}\label{tablefgm} \small \centering \begin{tabular}{llcccc} \toprule \multicolumn{5}{r}{The ranges of correlation coefficients} \\ \cline{3-6}
Distribution & $F(x)$ & CRE-Based & OR-Based & EGini$_{0.5}$ & EGini$_{3}$ \\ \midrule\midrule
Weibull (1,0.5) &$1-e^{-\sqrt{x}},~ x>0$ & $\mp 0.18750$ & $\mp 0.26344$ & $\mp 0.08333$ & $\mp 0.42187$\\[2mm] Exponential (1) &$1-e^{-x},~ x>0$ & $\mp 0.25000$ & $\mp 0.30396$ & $\mp 0.16667$ & $\mp 0.37500$\\[2mm]
Weibull (1,2) &$1-e^{-x^2},~ x>0$ & $\mp 0.29289$ & $\mp 0.31549$ & $\mp 0.23570$ & $\mp 0.34650$\\[2mm]
Logistic (0,1) & $(1+e^{-x})^{-1}, ~ x\in \mathbb{R}$ &$\mp 0.30396$ & $\mp 0.30396$ & $\mp 0.24045$ & $\mp 0.33333$\\[2mm] Extreme value (0,1) &$~~e^{-e^{-x}}, ~ x\in \mathbb{R}$ & $\mp 0.27555$ & $\mp 0.30701$ & $\mp 0.19951$ & $\mp 0.35335$\\ Laplace (0,1) &$\left\{
\begin{array}{ll}
\frac{1}{2}e^{x}, & \hbox{$x<0$;} \\
1-\frac{1}{2}e^{-x}, & \hbox{$x\geq 0$.}
\end{array}
\right.
$ & $\mp 0.29403$ & $\mp 0.29403$ & $\mp 0.21832$ & $\mp 0.33333$\\ \bottomrule
\end{tabular}
\end{table}
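One entry of the table can be reproduced by direct numerical integration (an illustration of our own): for exponential $F$ and the Pareto $H(x)=1-x^{-2}$ (EGini$_{0.5}$), $H^{-1}(u)=(1-u)^{-1/2}$ gives $H^{-1}F(x)=e^{x/2}$, and the theorem yields $\beta_H=\gamma/6\approx 0.16667$.

```python
import math

# Illustrative check of the EGini_{0.5} entry for exponential F:
# GMD(F) = 1, GMD(H) = 4/3, Cov(X, e^{X/2}) = 2, so beta_H = gamma/6.
def integrate(f, a, b, steps=200_000):
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h   # midpoint rule

pdf = lambda x: math.exp(-x)                        # Exp(1) density
gmd_F = 2 * integrate(lambda x: (1 - math.exp(-x)) * math.exp(-x), 0, 60)
gmd_H = 2 * integrate(lambda x: (1 - x**-2) * x**-2, 1, 10_000)
mean_X = integrate(lambda x: x * pdf(x), 0, 60)
mean_T = integrate(lambda x: math.exp(x / 2) * pdf(x), 0, 60)
cov_XT = integrate(lambda x: x * math.exp(x / 2) * pdf(x), 0, 60) - mean_X * mean_T

beta = 1.0 * gmd_F * gmd_H / (4 * cov_XT)           # gamma = 1
print(round(beta, 4))                               # ≈ 0.1667 = 1/6
```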
\begin{Example}\label{example}\em In this example we consider two bivariate distributions and compute the correlation index $\beta_{H}(X,Y)$ for different choices of $H$: \begin{itemize} \item [(a)] The first bivariate distribution we consider is a special case of the Gumbel-Barnett family of copulas, introduced by \cite{Barnett(1980)}, given as \begin{align}\label{copula} C_\theta(u, v) = u + v - 1 + (1-u)(1-v)e^{-\theta \log(1-u)\log(1-v)}, \qquad 0\leq \theta \leq 1. \end{align} If in this copula we take standard exponential DFs as the marginals of $X$ and $Y$, we arrive at Gumbel's bivariate exponential DF (Gumbel 1960), whose joint DF is written as \begin{align} F_\theta(x,y)=1-e^{-x}-e^{-y}+e^{-x-y-\theta xy},\qquad x>0,~y>0,~0\leq\theta\leq1. \label{gumbel} \end{align}
For $\theta = 0$, $X$ and $Y$ are independent and $\rho(X,Y) = 0$. As $\theta$ increases, the absolute value of the Pearson correlation, $|\rho(X,Y)|$, increases and takes the value $\rho(X,Y)=-0.40365$ at $\theta =1$. This distribution is used to describe r.v.s with negative correlation. (Of course, positive correlation can be obtained by changing $X$ to $-X$ or $Y$ to $-Y$.) In Table \ref{table-1}, the range of the Pearson correlation and the range of the $H$-transformed correlation are given for Gumbel's bivariate exponential distribution.
\item [(b)]
The second bivariate distribution considered in Table \ref{table-1} is the bivariate Logistic distribution, which belongs to the Ali-Mikhail-Haq family of copulas (\cite{Hutchinson and Lai(1990)}) with the following structure \begin{align} C_\theta(u,v)=\frac{uv}{1-\theta(1-u)(1-v)}, \qquad -1\leq \theta \leq 1. \label{ali-Mik-copula} \end{align} With standard Logistic distributions as the marginal DFs of $X$ and $Y$, we arrive at the joint DF of the bivariate Logistic distribution as follows \begin{align}\label{LogDF}
F_\theta(x,y)=\left(1+e^{-x}+e^{-y}+(1-\theta)e^{-x-y}\right)^{-1},\qquad x\in \mathbb{R},~y\in \mathbb{R},~-1\leq \theta\leq1. \end{align} Note that Gumbel's bivariate Logistic distribution is the special case of the bivariate Logistic distribution with $\theta = 1$. \end{itemize}
Both bivariate DFs in \eqref{gumbel} and \eqref{LogDF} are exchangeable. Hence, for both cases we obtain $\rho(X,Y)=\rho_t(X,Y)$. The range of possible values of $\beta_{H}(X,Y)$ is given on the basis of the five DFs $H$ introduced above. For each $H$, the lower and upper bounds of the $H$-transformed correlation for Gumbel's bivariate exponential distribution, attained at $\theta=1$ and $\theta=0$, respectively, are given in the first panel of Table \ref{table-1}. It is seen from the table that the widest range of correlation is achieved by $\mathrm{EGini}_3$ among all the correlations, and that the range of the Pearson correlation $\rho(X,Y)$ is even smaller than those of the Gini and OR-based correlations. The narrowest range corresponds to
$\mathrm{EGini}_{0.5}$. When the DF $H$ equals the marginal DFs of the bivariate distribution,
the associated correlation $\beta_{H}(X,Y)$ becomes the Pearson correlation, which here is
the CRE-Based correlation. The second panel of Table \ref{table-1} gives the correlation $\beta_{H}(X,Y)$, based on the above-mentioned distributions $H$, for the bivariate Logistic distribution. The lower and upper bounds of all the correlations are attained at $\theta=-1$ and $\theta=1$, respectively. In this case the widest range of correlation is again achieved by $\mathrm{EGini}_3$ and the narrowest by $\mathrm{EGini}_{0.5}$. \begin{table}[!h] \centering \caption{\small The ranges of $\rho$, and $\beta_H$ correlations for two exchangeable distributions.}\label{table-1} \small \begin{tabular}{ lcc } \toprule \multicolumn{3}{ l}{{\bf Gumbel's Type I Bivariate Exponential Distribution}} \\[2mm] \multicolumn{3}{ l }{$F_\theta(x,y)=1-e^{-x}-e^{-y}+e^{-x-y-\theta xy},~~x>0,~y>0,~0\leq\theta\leq1.$} \\[2mm] \hline
Correlation index & Lower bound & Upper bound \\ \hline
Pearson & $-0.40365$ & $0$ \\
CRE-Based & $-0.40365$ & $0$\\
OR-Based & $-0.51267$ & $0$ \\
$\mathrm{EGini}_{0.5}$ & $-0.26927$ & $0$ \\
$\mathrm{Gini}$ & $-0.55469$ & $0$ \\
$\mathrm{EGini}_3$ & $-0.64125$ & $0$ \\ \hline\hline \multicolumn{3}{l}{{\bf Bivariate Logistic Distribution}} \\[2mm] \multicolumn{3}{l}{$F_\theta(x,y)=\left(1+e^{-x}+e^{-y}+(1-\theta)e^{-x-y}\right)^{-1},~~~x\in \mathbb{R},~y\in \mathbb{R},~-1\leq \theta\leq1.$} \\[2mm] \hline
Correlation index & Lower bound & Upper bound \\ \hline
Pearson & $-0.25000$ & $0.50000$\\
CRE-Based & $-0.26516$ & $0.39207$\\
OR-Based & $-0.25000$ & $0.50000$ \\
EGini$_{0.5}$ & $-0.22135$ & $0.27865$ \\
Gini & $-0.27259$ & $0.50000$ \\
EGini$_{3}$ & $-0.26272$ & $0.55556$ \\ \bottomrule \end{tabular} \end{table} \end{Example}
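The Pearson bound $-0.40365$ for Gumbel's bivariate exponential at $\theta=1$ can be verified with Hoeffding's covariance identity $\mathrm{Cov}(X,Y)=\int\!\!\int[F(x,y)-F(x)G(y)]\,dx\,dy$ (a sketch of our own; integrating out $x$ analytically leaves a one-dimensional integral, and $\mathrm{Var}(X)=\mathrm{Var}(Y)=1$ makes the value the correlation itself):

```python
import math

# Hoeffding identity applied to F_1(x,y) = 1 - e^{-x} - e^{-y} + e^{-x-y-xy}:
# the x-integral of e^{-x-y-xy} - e^{-x-y} is e^{-y} (1/(1+y) - 1),
# so rho = int_0^inf e^{-y} (1/(1+y) - 1) dy  (Var(X) = Var(Y) = 1).
def rho_gumbel(steps=1_000_000, upper=60.0):
    h = upper / steps
    s = 0.0
    for k in range(steps):
        y = (k + 0.5) * h                 # midpoint rule
        s += math.exp(-y) * (1.0 / (1.0 + y) - 1.0) * h
    return s

r = rho_gumbel()
print(round(r, 5))    # ≈ -0.40365
```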
\begin{Example}\label{example-non-exchangeable}\em
In this example, we consider again the copulas given in \eqref{copula} and \eqref{ali-Mik-copula}.
However, here we assume that the marginal DFs are not the same (the bivariate distribution is not exchangeable).
In the first bivariate distribution the marginals are two different Weibull DFs (with different shape parameters)
and in the second case the marginals are two different power DFs (with different shape parameters), respectively.
In Table \ref{table-2}, the ranges of possible values of $\rho(X,Y)$, $\rho_t(X,Y)$, and $\beta_{H}(X,Y)$ are
presented for both bivariate DFs. The lower and upper bounds of the $H$-transformed correlation for the two bivariate distributions, attained at $\theta=1$ and $\theta=0$, and at $\theta=-1$ and $\theta=1$, respectively, are numerically computed for different DFs $H$. In the first panel, which corresponds to the Gumbel-Barnett copula with Weibull-Weibull marginals, it is seen that the widest range is attained by $\mathrm{EGini}_3$ and the narrowest by the Pearson correlation.
Also, as shown in inequality (\ref{erho}), the results in the table confirm that the $\rho$-transformed correlation has a wider range than the Pearson correlation.
The second panel of the table presents the correlations between $X$ and $Y$ for
Ali-Mikhail-Haq copula with power-power marginal DFs.
In this case, we see that the widest range again corresponds to $\mathrm{EGini}_3$, followed by the OR-Based and Gini correlations, while the narrowest range is obtained for $\mathrm{EGini}_{0.5}$.
{ Also, we see that $\rho_{t}(X,Y)$ indicates a wider range of correlation between $X$ and $Y$ compared to the Pearson correlation $\rho(X,Y)$.}
\begin{table}[!h] \centering \caption{\small The ranges of $\rho$, $\rho_t$, and $\beta_H$ correlations for two distributions with non-equal marginals.}\label{table-2} \small \begin{tabular}{ llcc } \toprule \multicolumn{3}{ l}{{\bf Gumbel-Barnett copula with Weibull-Weibull marginals}} \\[2mm] \multicolumn{3}{ l }{$F_\theta(x,y)=1-e^{-x^2}-e^{-\sqrt{y}}+e^{-x^2-\sqrt{y}-\theta x^2\sqrt{y}},\quad x>0,~y>0,~0\leq\theta\leq1.$} \\[2mm] \hline Correlation index & Lower bound & Upper bound \\ \hline
Pearson & $-0.32420$ & $0$\\
$\rho$-transformed & $-0.43307$ & $0$\\
CRE-Based & $-0.48426$ & $0$ \\
OR-Based & $-0.51759$ & $0$ \\
EGini$_{0.5}$ & $-0.41563$ & $0$ \\
Gini & $-0.53692$ & $0$ \\
EGini$_{3}$ & $-0.55776$ & $0$ \\ \hline\hline \multicolumn{3}{ l}{{\bf Ali-Mikhail-Haq copula with power-power marginals}} \\[2mm] \multicolumn{3}{ l }{ $F_\theta(x,y)=\dfrac{x(2-x)y(y^2-3y+3)}{(1+\theta(y-1)^3(x-1)^2)}, ~~~~0<x<1,~0<y<1,~-1\leq \theta\leq1$}\\[3mm] \hline Correlation index & Lower bound & Upper bound \\ \hline
Pearson & $-0.27099$ & $0.39668$\\
$\rho$-transformed & $-0.27212$ & $0.39833$\\
CRE-Based & $-0.26589$ & $0.36447$ \\
OR-Based & $-0.27387$ & $0.45685$\\
EGini$_{0.5}$ & $-0.24790$ & $0.29890$\\
Gini & $-0.27887$ & $0.45177$ \\
EGini$_{3}$ & $-0.28324$ & $0.51025$\\
\bottomrule \end{tabular} \end{table} \end{Example}
\subsection{Some Symmetric Versions} We point out that Pearson's and Spearman's correlation coefficients are both symmetric measures of correlation. However, the association measure $\beta_H(X,Y)$ introduced in this paper is not generally symmetric unless the two r.v.s are exchangeable. There are several ways to introduce a symmetric version of the correlation coefficient considered in this paper, i.e., a correlation coefficient with the property $\beta_{H}(X,Y) = \beta_{H}(Y,X)$. Motivated by the works of \cite{R7, Yitzhaki and Olkin(1991), R2}, in the following we introduce three measures of correlation based on $\beta_{H}(X,Y)$ which are symmetric in $F$ and $G$. \begin{description}
\item [(a)] The first symmetric version of correlation can be considered as
\begin{equation}
\tau_{H}(X,Y)=\frac{1}{2}\left(\beta_{H}(X,Y)+\beta_{H}(Y,X)\right).
\end{equation}
\item[(b)] The second symmetric version is constructed following the approach of \cite{R2}. Let
$\eta_X=Cov(X,H^{-1}F(X))$ and $\eta_Y=Cov(Y,H^{-1}G(Y))$. Define $\nu_H(X,Y)$ as follows $$\nu_{H}(X,Y)=\frac{\eta_X\beta_{H}(X,Y)+\eta_Y\beta_{H}(Y,X)}{\eta_X+\eta_Y}.$$ Then $\nu_{H}(X,Y)$, as a weighted average of $\beta_H(X,Y)$ and $\beta_H(Y,X)$, is a symmetric measure of correlation that lies in $[-1,1]$ and satisfies the requirements of a correlation coefficient described in Theorem \ref{th1q}. \item [(c)] The third symmetric index based on $\beta_{H}(X,Y)$ is as follows (see \cite{R7}). With $\eta_X$ and $\eta_Y$ as defined in (b), let ${\bar \beta}_H(X,Y)=1-\beta_{H}(X,Y)$ and ${\bar \beta}_H(Y,X)=1-\beta_{H}(Y,X).$ Consider ${\bar \nu}_H(X,Y)$ as \begin{align*}
{\bar \nu}_{H}(X,Y)=&\frac{\eta_X{\bar \beta}_H(X,Y)+\eta_Y{\bar \beta}_H(Y,X)}{\eta_X+\eta_Y}\\
=& 1-{\nu}_{H}(X,Y). \end{align*} Then ${\bar \nu}_{H}(X,Y)$, which is a weighted average of ${\bar \beta}_H(X,Y)$ and ${\bar \beta}_H(Y,X)$, is symmetric in $F$ and $G$ and ranges over $[0,2]$. \cite{R7} showed that, in the case where $H$ is the uniform distribution, ${\bar \nu}_{H}(X,Y)$ gives a measure, called the Gini index of mobility, that provides a consistent setting for the analysis of mobility, inequality and horizontal equity. It can easily be shown that ${\bar \nu}_{H}(X,Y)$ can also be represented as $${\bar \nu_{H}}(X,Y)=\frac{Cov\left(X-Y,H^{-1}F(X)-H^{-1}G(Y)\right)}{\eta_X+\eta_Y}.$$ \end{description} In the following example these symmetric measures are calculated. \begin{Example}\label{examp-sym} \em Consider the Gumbel-Barnett copula with two different Weibull distributions as marginals and the joint DF given in Example \ref{example-non-exchangeable}. Let $\theta=1$, which corresponds to the highest dependency between the r.v.s $X$ and $Y$. Then the joint DF of $X$ and $Y$ is written as \[ F(x,y)=1-e^{-x^2}-e^{-\sqrt{y}}+e^{-x^2-\sqrt{y}- x^2\sqrt{y}},\qquad x>0,~y>0. \] Table \ref{table-sym} presents the values of the correlations $\beta_H(X,Y)$ and $\beta_H(Y,X)$, and the symmetric correlations $\tau_H(X,Y)$, $\nu_H(X,Y)$ and ${\bar \nu}_H(X,Y)$, for different distributions $H$.
\begin{table}[!h] \centering \caption{\small The values of symmetric correlation coefficients for Example \ref{examp-sym}.}\label{table-sym} \small \begin{tabular}{ llcccc } \toprule
Index & $\beta_H(X,Y)$ & $\beta_H(Y,X)$ & $\tau_H(X,Y)$ & $\nu_H(X,Y)$ & ${\bar \nu}_H(X,Y)$ \\ \midrule
CRE-Based & $-0.48426$ & $-0.29817$ & $-0.39121$ & $-0.31673$ & $1.31673$\\
OR-Based & $-0.51759$ & $-0.47762$ & $-0.49761$ & $-0.48267$ & $1.48267$\\
EGini$_{0.5}$ & $-0.41563$ & $-0.12179$ & $-0.26871$ & $-0.13873$ & $1.13873$\\
Gini & $-0.53692$ & $-0.59375$ & $-0.56534$ & $-0.58537$ & $1.58537$\\
EGini$_{3}$ & $-0.55776$ & $-0.80720$ & $-0.68248$ & $-0.76379$ & $1.76379$\\ \bottomrule \end{tabular} \end{table} \end{Example}
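The three symmetric constructions can be sketched as small helper functions (names of our own choosing); since the table does not report $\eta_X$ and $\eta_Y$, $\nu_H$ and ${\bar\nu}_H$ are shown generically, and only the $\tau_H$ entry and the relation ${\bar\nu}_H=1-\nu_H$ are checked.

```python
# Sketch (not from the paper) of the three symmetric versions; the beta
# values below are the CRE-Based row of the example's table.
def tau(b_xy, b_yx):                       # version (a): plain average
    return 0.5 * (b_xy + b_yx)

def nu(b_xy, b_yx, eta_x, eta_y):          # version (b): eta-weighted average
    return (eta_x * b_xy + eta_y * b_yx) / (eta_x + eta_y)

def nu_bar(b_xy, b_yx, eta_x, eta_y):      # version (c): equals 1 - nu
    return 1.0 - nu(b_xy, b_yx, eta_x, eta_y)

b_xy, b_yx = -0.48426, -0.29817            # CRE-Based entries
print(round(tau(b_xy, b_yx), 4))           # -0.3912, matching the table
```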
\section{A Decomposition Formula}
In this section we give a decomposition formula for ${\cal C}(T,Y)$, which relates the variability of a sum of r.v.s to the sum of the variabilities of the individual r.v.s.
From a reliability engineering point of view, consider a system with standby components with the following structure.
We assume that the system is built of $n$ units with lifetimes $X_1,\dots,X_n$ which are connected to
each other sequentially as follows. Unit number 1, with lifetime $X_1$, starts operating, and at the time of its failure
unit number 2, with lifetime $X_2$, starts working automatically, and so on until the $n$th unit, with lifetime
$X_n$, fails. Hence, the lifetime of the system, denoted by $T$, is $T=\sum_{i=1}^{n}X_i$. Assume that $\mu_i=E(X_i)$ denotes the mean time to failure of unit number $i$ and $\mu=E(T)=\sum_{i=1}^{n}\mu_i$ denotes the mean time to failure of the system.
As before, for any two r.v.s $X$ and $Y$ with DFs $F$ and $G$, respectively, we denote $\mathrm{Cov}\left(X,G^{-1}F(X)\right)$ by ${\cal C}(X,Y)$. We now have the following result. \begin{Theorem}\label{th-decomps} For any r.v. $Y$ with DF $G$, we have the following decomposition of ${\cal C}(T,Y)$ in terms of ${\cal C}(X_i,Y)$, $i=1,2,\dots,n$: \begin{eqnarray*} {\cal C}(T,Y)=\sum_{i=1}^{n} {\beta}_G(X_i,T){\cal C}(X_i,Y), \end{eqnarray*} where ${\beta}_G(X_i,T)$ is the $G$-transformed correlation between the system lifetime $T$ and the component lifetime $X_i$ defined in (\ref{index}). \end{Theorem} \begin{proof} Let $F_{X_i}$ and $F_T$ denote the DFs of the component lifetime $X_i$ and the system lifetime $T$, respectively. From the properties of the covariance of sums of r.v.s, we can write
\begin{eqnarray*}
{\cal C}(T,Y)&=& Cov(T,G^{-1}F_T(T))\\
&=&\sum_{i=1}^{n}Cov(X_i,G^{-1}F_T(T))\\
&=&\sum_{i=1}^{n}\frac{Cov(X_i,G^{-1}F_T(T))}{Cov(X_i,G^{-1}F_{X_i}(X_i))}Cov(X_i,G^{-1}F_{X_i}(X_i))\\
&=&\sum_{i=1}^{n}\beta_G(X_i,T){\cal C}(X_i,Y).
\end{eqnarray*} \end{proof}
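The decomposition can be checked on a toy two-unit standby system (a Monte Carlo sketch of our own, with $X_1,X_2\sim\mathrm{Exp}(1)$ and reference DF $G=\mathrm{Exp}(1)$): then $T\sim\mathrm{Gamma}(2,1)$, so $F_T(t)=1-(1+t)e^{-t}$ and $G^{-1}F_T(T)=-\log(1-F_T(T))=T-\log(1+T)$.

```python
import math, random

# Monte Carlo sketch of Theorem on decomposition: for T = X1 + X2,
# sum_i beta_G(X_i,T) C(X_i,Y) = sum_i Cov(X_i, G^{-1}F_T(T)), which equals
# Cov(T, G^{-1}F_T(T)) = C(T,Y) exactly by bilinearity of the covariance.
random.seed(7)
n = 100_000
x1 = [random.expovariate(1.0) for _ in range(n)]
x2 = [random.expovariate(1.0) for _ in range(n)]
t = [a + b for a, b in zip(x1, x2)]
z = [ti - math.log(1.0 + ti) for ti in t]      # G^{-1}F_T(T) for G = Exp(1)

def cov(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n

lhs = cov(t, z)                        # C(T, Y)
rhs = cov(x1, z) + cov(x2, z)          # sum_i beta_G(X_i,T) C(X_i, Y)
bound = cov(x1, x1) + cov(x2, x2)      # sum_i C(X_i, Y), since G = F_{X_i}
print(abs(lhs - rhs) < 1e-9, lhs <= bound)     # True True
```

The second printed value illustrates the corollary below the theorem: ${\cal C}(T,Y)\le\sum_i{\cal C}(X_i,Y)$.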
\begin{Corollary}\label{cor1} \em It is interesting to note that the correlation between the system lifetime $T$ and its component lifetime $X_i$, i.e., $\beta_G(X_i,T)$, is always nonnegative. This is so because $G^{-1}F_T(T)$ is trivially an increasing function of $X_i$, since $T$ is an increasing function of $X_i$. Hence, $\mathrm{Cov}(X_i,G^{-1}F_T(T))$ is nonnegative, which, in turn, implies that $\beta_G(X_i,T)$ is nonnegative. Thus, we have \begin{align}\label{inequal-alpha}
0\leq \beta_G(X_i,T)\leq 1. \end{align} This result shows that the $G$-covariance between the system lifetime $T$ and the r.v. $Y$ can be decomposed as a combination of the $G$-covariances between the component lifetimes and $Y$.
From Theorem \ref{th-decomps} and relation \eqref{inequal-alpha}, we conclude that \begin{eqnarray}\label{inequality}
{\cal C}(T,Y)\leq \sum_{i=1}^{n} {\cal C}(X_i,Y). \end{eqnarray} That is, the $G$-covariance between the system lifetime and the r.v. $Y$ is at most the sum of the $G$-covariances between its components and $Y$. In particular, when the $X_i$'s are identically distributed, we have ${\cal C}(T,Y)\leq n {\cal C}(X_1,Y)$. In this situation, if we assume that $G=F_{X_1}$, then $${\cal C}(T,X_i)\leq n \mathrm{Var}(X_1), \qquad i=1,\dots,n.$$ \end{Corollary}
Based on Corollary \ref{cor1}, the following inequalities are obtained for some well known measures of disparity as special cases: \begin{itemize}
\item [(a)] If $G=F_T$, then we get
$$\mathrm{Var}(T)\leq \sum_{i=1}^n {\cal C}(X_i,T).$$ \item[(b)] In the case that $G(x)=1-e^{-\sqrt[k]{x}}, \ x>0, \ k>0,$ the Weibull distribution with shape parameter $1/k$, we obtain
$${\cal E}_k(T)\leq \sum_{i=1}^n {\cal E}_k(X_i), \qquad k=1,2,\dots,$$
where ${\cal E}_k(\cdot)$ is the GCRE defined in (\ref{psar}). In the special case $k=1$, we obtain the following inequality regarding the CRE.
$${\cal E}_1(T)\leq \sum_{i=1}^n {\cal E}_1(X_i).$$ Thus, it is concluded that the uncertainty of a standby system lifetime, in the sense of CRE, is at most the sum of the uncertainties of its components' lifetimes. Equivalently, for the system described above, $$E(m_{T}(T))\leq \sum_{i=1}^{n} E(m_{X_i}(X_i)),$$ where $m_T$ and $m_{X_i}$ are the MRLs of the system and the components, respectively; see also \cite{nasr}. \item[(c)] Consider $G(\cdot)$ as the DF given in \eqref{exten-gini-dis}. Then, for $\nu>0$, \begin{align*} \mathrm{EGini}_\nu(T)\leq \sum_{i=1}^n \mathrm{EGini}_\nu(X_i). \end{align*}
For $\nu=2$, which corresponds to $G(x)$ as uniform distribution on $(0,1)$, we get
\begin{align}\label{inequality-Gini}
\mathrm{GMD}(T)\leq \sum_{i=1}^n \mathrm{GMD}(X_i),
\end{align}
where $\mathrm{GMD}(\cdot)$ is Gini's mean difference. This result was already obtained by \cite{R3}. \end{itemize} \section{Concluding Remarks} In the present article, we introduced a unified approach to construct a correlation coefficient between two continuous r.v.s. We assumed that the continuous r.v.s $X$ and $Y$ have a joint distribution function $F(x,y)$ with marginal distribution functions $F$ and $G$, respectively. We first considered the covariance between $X$ and the transformation $G^{-1}F(X)$, i.e., $\mathrm{Cov}(X,G^{-1}F(X))$. The function $G^{-1}F(\cdot)$ is known in the literature as the {\it Q-transformation} (or {\it sample transmutation map}). We showed that some well-known measures of variability, such as the variance, the Gini mean difference and its extended version, the cumulative residual entropy and some other disparity measures, can be considered as special cases of $\mathrm{Cov}(X, G^{-1}F(X))$. Motivated by this, we proposed a unified measure of correlation between the r.v.s $X$ and $Y$ based on $\mathrm{Cov}(X,H^{-1}G(Y))$, where $H$ is a continuous distribution function. We showed that the introduced measure, which subsumes some well-known measures of association such as the Gini and Pearson correlations for special choices of $H$, has all the requirements of a correlation index {under some mild conditions on the DF $H$}. For example, it was shown that it lies in $[-1, 1]$. When the joint distribution of $X$ and $Y$ is bivariate normal, we showed that the proposed measure, for any choice of $H$, equals the Pearson correlation coefficient. We proved, under some conditions, that for our unified association index the lower and upper bounds of the interval $[-1, 1]$ are attainable by the joint Fr\'echet bivariate minimal and maximal distribution functions, respectively.
A special case of the correlation introduced in this paper provides a variant of the Pearson correlation coefficient $\rho(X,Y)$ with the property that its absolute value is always greater than or equal to that of $\rho(X,Y)$. Since the proposed measure of correlation is asymmetric, some symmetric versions of it were also discussed. Several examples of bivariate DFs of $X$ and $Y$ were presented in which the correlation is computed for different choices of $H$. Finally, we presented a decomposition formula for $\mathrm{Cov}(X,G^{-1}F(X))$ in which the r.v. $X$ is the sum of $n$ r.v.s. As an application of the decomposition formula, some results were provided on the connection between the variability measures of a standby system and those of its components.
The r.v.s considered in this article were assumed to be continuous. An interesting problem for future study is to extend the results to arbitrary r.v.s (in particular, discrete r.v.s). Another important problem is to propose estimators of $\beta_{H}(X,Y)$ for different choices of $H$. In particular, we believe that providing estimators for $\rho_{t}(X,Y)$ and exploring their properties may be of special importance for measuring the linear correlation in real data collected in different disciplines and applications.
\end{document} |
\begin{document}
\title{Hamiltonian System Approach to \ Distributed Spectral Decomposition in Networks
} \thispagestyle{empty} \pagestyle{empty}
\begin{abstract} Because of the significant increase in the size and complexity of networks, the distributed computation of eigenvalues and eigenvectors of graph matrices has become very challenging, yet it remains as important as before. In this paper we develop efficient distributed algorithms to detect, with higher resolution, closely situated eigenvalues and the corresponding eigenvectors of symmetric graph matrices. We model the system of graph spectral computation as a physical system with Lagrangian and Hamiltonian dynamics. The spectrum of the Laplacian matrix, in particular, is framed as a classical spring-mass system with Lagrangian dynamics. The spectrum of any general symmetric graph matrix turns out to have a simple connection with quantum systems, and it can thus be formulated as the solution of a Schr\"odinger-type differential equation. Taking into account the higher resolution requirement in the spectrum computation and the related stability issues in the numerical solution of the underlying differential equation, we propose the application of symplectic integrators to the computation of the eigenspectrum. The effectiveness of the proposed techniques is demonstrated with numerical simulations on real-world networks of different sizes and complexities.
\end{abstract}
\section{Introduction}
Consider an undirected graph $G=(V,E)$ with $V$ as the vertex set ($n:= \lvert V \rvert$) and $E$ as the edge set ($m:= \lvert E \rvert$). Let $M$ be any symmetric matrix associated with $G$. Broadly speaking, we say that $M$ is a graph matrix if it has non-zero elements on the edge set. Due to symmetry, the eigenvalues of $M$ are real and can be ranked in ascending order as $\lambda_1 \le \lambda_2\le \ldots \le\lambda_n$. We investigate efficient techniques to compute the eigenvalues $\lambda_1,\ldots, \lambda_n$ and the corresponding eigenvectors ${\boldsymbol u}_1,\ldots,{\boldsymbol u}_n$, in a distributed way.
We define two typical matrices which appear frequently in network analysis. The first one is the adjacency matrix $A \in \mathbb{R}^{n \times n}$, whose individual entries are given by \[ a_{uv} = \left\{ \begin{array}{ll} 1 & \mbox{if} \ u \ \mbox{is a neighbour of} \ v,\\ 0 & \mbox{otherwise}. \end{array}\right. \] Since the focus in this paper is on undirected graphs, $A^{\intercal}=A$. The matrix $A$ is also called the unweighted adjacency matrix; one can also define a weighted version in which the weight $1$ for an edge is replaced by any weight such that $a_{uv} = a_{vu}$. Another matrix which is very common in many graph-theoretic problems is the Laplacian matrix $L= [\ell_{i,j}]:= D-A$. Here $D$ is a diagonal matrix with diagonal elements equal to the degrees of the nodes $d_1, \ldots, d_n$.
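As a concrete illustration of these definitions (a toy example of our own, not from the paper), the matrices $A$ and $L=D-A$ for the 4-node path graph $0{-}1{-}2{-}3$:

```python
# Build the adjacency matrix A and Laplacian L = D - A for the path graph
# 0-1-2-3, and check the defining property that every row of L sums to 0.
n = 4
edges = [(0, 1), (1, 2), (2, 3)]
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1                      # undirected: A^T = A
deg = [sum(row) for row in A]                  # diagonal of D
L = [[(deg[i] if i == j else 0) - A[i][j] for j in range(n)] for i in range(n)]
print(deg, [sum(row) for row in L])            # [1, 2, 2, 1] [0, 0, 0, 0]
```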
\subsection{Applications of graph spectrum} The knowledge of the $\{\lambda_i\}$'s and $\{{\boldsymbol u}_i\}$'s can be made use of in many ways. For instance, spectral clustering is a prominent solution which exploits the first $k$ eigenvectors of the Laplacian matrix for identifying the clusters in a network \cite{VonLuxburg07}. Another classical use of the Laplacian eigenvalues is in computing the number of spanning trees of a graph $G$, which is ${n^{-1}\,\lambda_2 \lambda_3 \ldots \lambda_n}$. Many studies have been conducted on the use of the spectrum of the adjacency matrix, such as counting the triangles of a network (both locally and globally) \cite{Tsourakakis_ICDM08}, graph dimensionality reduction and link reduction \cite{Kempe04}, etc.
Two applications relevant to multi-agent and multi-dimensional systems will be explained in detail in Section~\ref{sec:applications}.
\subsection{Our basic approach}
Let $M$ be any symmetric graph matrix. Consider the following Schr\"odinger-type differential equation \begin{equation} \frac{\partial}{\partial t}{\boldsymbol \psi}(t) = iM\,{\boldsymbol \psi}(t), \label{eq:de_infocom16} \end{equation} where ${\boldsymbol \psi}(t)$ is an $n$-dimensional complex-valued vector, which can be interpreted as the wave function of a hypothetical quantum system. The solution of this differential equation with the boundary condition ${\boldsymbol \psi}(0)={\boldsymbol a}_0$ is $\exp (iMt) {\boldsymbol a}_0$. Subjecting this solution to the Fourier transform provides a decomposition in terms of eigenvalues and eigenvectors as follows: \begin{eqnarray} \int_{-\infty}^{+\infty}e^{i M t}{\boldsymbol a}_0 e^{-it\theta}dt = 2 \pi \sum_{j=1}^n \delta_{\lambda_j}(\theta){\boldsymbol u}_j ({\boldsymbol u}_j^\intercal {\boldsymbol a}_0), \label{eq:dirac_decompsn} \end{eqnarray} where $\delta_{\lambda_j}$ is the Dirac delta function shifted by $\lambda_j$. This follows from the eigen-decomposition of the matrix $M$: $e^{iMt} = \sum_j e^{i t \lambda_j} {\boldsymbol u}_j {\boldsymbol u}_j^{\intercal}$. In order to avoid the harmonic oscillations created by the finite and discretized version of the above Fourier transform, which would mask the Dirac peaks, the following Gaussian smoothing can be performed. For $v>0$, \begin{eqnarray} \lefteqn{\frac{1}{2\pi} \int_{-\infty}^{+\infty}e^{i M t} {\boldsymbol a}_0 e^{-t^2v/2}e^{-it\theta}dt}{\nonumber} \\ &=&\sum_{j=1}^n \frac{1}{\sqrt{{2 \pi} v}} \exp(-\frac{(\lambda_j-\theta)^2}{2v}){\boldsymbol u}_j ({\boldsymbol u}_j^\intercal {\boldsymbol a}_0). 
\label{eq:gauss_approxn} \end{eqnarray} The right-hand side of the above expression yields, at each node $k$, a plot with Gaussian peaks at the eigenvalues; the amplitude of the peak at the $j$th eigenvalue is ${(\sqrt{2 \pi v})}^{-1} ({\boldsymbol u}_j^{\intercal} {\boldsymbol a}_0){\boldsymbol u}_j(k)$, proportional to the $k$th component of the eigenvector ${\boldsymbol u}_j$.
The idea is to estimate the solution of \eqref{eq:de_infocom16} at intervals of $\varepsilon$ in time, for a total of $s$ samples. One way to form an estimate of the left-hand side of \eqref{eq:gauss_approxn} is: \begin{equation} \varepsilon \Re \Big( {\boldsymbol b}_0+2\sum_{\ell= 1}^{s} e^{i\varepsilon \ell M} {{\boldsymbol b}}_0 e^{-i \ell \varepsilon \theta}e^{-\ell^2\varepsilon^2v/2} \Big). \label{eq:f_expn} \end{equation}
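As a sanity check, the estimate \eqref{eq:f_expn} can be evaluated directly on a small test matrix. The sketch below (Python) uses a hypothetical $4$-node path-graph Laplacian as $M$; the eigendecomposition serves only as a shortcut to apply $e^{i\varepsilon \ell M}$, which the distributed algorithm instead computes by diffusion, and the sampling parameters are illustrative choices satisfying \eqref{eq:ft_constraints}.

```python
import numpy as np

# Hypothetical test matrix: 4-node path-graph Laplacian (eigenvalues 0, 2-sqrt(2), 2, 2+sqrt(2)).
M = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)
lam, U = np.linalg.eigh(M)          # used only as a shortcut for e^{i*eps*l*M}
b0 = np.random.default_rng(0).standard_normal(4)

eps, s, v = 0.1, 2000, 1e-3         # eps <= pi/lam_n and s >= 2*pi/(eps*lam_diff)

def spectrum_estimate(theta):
    """Finite, Gaussian-smoothed estimate of the Fourier transform at theta."""
    acc = b0.astype(complex)        # the l = 0 term
    for l in range(1, s + 1):
        expMl_b0 = (U * np.exp(1j * eps * l * lam)) @ (U.T @ b0)
        acc = acc + 2 * expMl_b0 * np.exp(-1j * l * eps * theta
                                          - l**2 * eps**2 * v / 2)
    return eps * acc.real

# Gaussian peaks appear at the eigenvalues and (almost) nothing in between.
peak = np.linalg.norm(spectrum_estimate(lam[2]))
off  = np.linalg.norm(spectrum_estimate(lam[2] + 0.5))
assert peak > 100 * off
```

The comparison is scale-invariant, so the missing $1/(2\pi)$ normalization of the continuous transform does not affect the check.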
In \cite{AvJaSr_Infocom16}, we use the above approximation with various approaches based on diffusion, gossiping and quantum random walks for distributed computation of the eigenvalues and the eigenvectors. Let us discuss some challenges in this approach.
\subsection*{Issues in the computation} While doing numerical experiments, we observed that the approaches in \cite{AvJaSr_Infocom16} work well for the larger eigenvalues of the adjacency matrix of a graph, but they do not perform as well when one needs to distinguish between eigenvalues which are very close to each other. The main technique proposed there to solve \eqref{eq:de_infocom16} and to find $e^{i\varepsilon \ell M}$ in \eqref{eq:f_expn} is the $r$th order Runge-Kutta method and its implementation as a diffusion process in the network. The $r$th order Runge-Kutta method has a convergence rate of $\mathcal{O}(\varepsilon^r)$. We observed the resulting instability while checking the trajectory of the solution of the associated differential equation: the solution diverges, and this happens precisely when a large number of iterations $s$ is required (see Section~\ref{sec:numerical_res}).
A larger value for $s$ is anticipated from our approximation in \eqref{eq:f_expn} due to the following facts. From the theory of Fourier transform and Nyquist sampling, the following conditions must be satisfied: \begin{equation} \varepsilon \leq \frac{\pi}{\lambda_{n}} \text{ and } s \geq \frac{2 \pi}{\varepsilon \lambda_{\text{diff}}}, \label{eq:ft_constraints}
\end{equation} where $\lambda_{\text{diff}}$ is the maximum resolution we require in the frequency (eigenvalue) domain, ideally $\min_{i} |\lambda_i -\lambda_{i+1}|$. This explains why, when dealing with graph matrices with larger $\lambda_n$ that also require higher resolution, $s$ takes large values. For instance, in the case of the Laplacian matrix, where the maximum eigenvalue is bounded as $\frac{n}{n-1} \Delta(G) \leq \lambda_{n} \leq 2 \Delta(G),$ with $\Delta(G)$ the maximum degree of the graph, and where the lower eigenvalues are very close to each other, $s$ typically turns out to be very large.
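For concreteness, the constraints \eqref{eq:ft_constraints} already force a large $s$ for moderate parameters. The values below (maximum degree $50$, resolution $0.05$) are hypothetical:

```python
import numpy as np

# Nyquist-style constraints for hypothetical values: maximum degree
# Delta = 50, so lambda_n <= 2*Delta = 100 for the Laplacian, and a
# desired eigenvalue resolution lambda_diff = 0.05.
lam_n, lam_diff = 100.0, 0.05
eps = np.pi / lam_n                            # largest admissible sampling step
s = int(np.ceil(2 * np.pi / (eps * lam_diff)))
# s comes out to about 4000 samples for this modest resolution alone.
assert s >= 4000
```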
In what follows, we propose solutions to the above-mentioned issues.
\subsection{Related works} The general idea of using mechanical oscillatory behaviour for the detection of eigenvalues has appeared in a few previous works, see e.g., \cite{franceschelli2013decentralized,sahai2012hearing}.
Though the technique in \cite{franceschelli2013decentralized} is close to ours, our methods differ by focussing on a Schr\"odinger-type equation and numerical integrators specific to it. Moreover, we demonstrate the efficiency and stability of the methods in real-world networks of varying sizes, in contrast to a small size synthetic network considered in \cite{franceschelli2013decentralized}, and our methods can be used to estimate eigenvectors as well.
In comparison to \cite{sahai2012hearing}, we do not deform the system, and we use new symplectic numerical integrators \cite{blanes2006symplectic,blanes2008splitting}. For the problem of distributed spectral decomposition in networks, one of the first and most prominent works is \cite{Kempe04}. However, their algorithm requires distributed orthonormalization at each step, a difficult operation which they carry out via random walks; if the graph is not well connected (low conductance), this task takes a very long time to converge. Our distributed implementation, based on fluid diffusion in the network, does not require such orthonormalization.
\subsection{Contributions} We make the following contributions, significantly improving the algorithms from our previous work: \begin{enumerate} \item We observe from our previous studies that the stability of the trajectory of the differential equation solver has a significant influence on the eigenvalue-eigenvector technique based on \eqref{eq:de_infocom16}. Thus, we resort to geometric integrators to ensure stability. In particular, by modeling the problem as a Hamiltonian system, we use symplectic integrators (SI), which preserve the volume-preservation property of the Hamiltonian dynamics, and thus preserve stability and improve accuracy. \item We propose algorithms that are easy to design and that do not involve many interdependent parameters, in contrast to the algorithms proposed in \cite{AvJaSr_Infocom16}. \end{enumerate}
In the rest of the paper, for clarity of presentation, we mostly concentrate on the Laplacian matrix $L$ as an example of a graph matrix $M$. We design algorithms based on Lagrangian as well as Hamiltonian dynamics to compute the smallest $k$ eigenvalues and the respective eigenvectors of the Laplacian matrix efficiently. For simplicity, in this paper we do not consider the Gaussian smoothing \eqref{eq:gauss_approxn}, but the proposed algorithms can readily be extended to include it.
The paper is organized as follows. In Section \ref{sec:mechanical_lag}, we explain a mass-spring analogy specific to the Laplacian matrix and derive a method to identify the spectrum. Section \ref{sec:ham_dynamics} focuses on general symmetric matrices and develops techniques based on solving the Schr\"odinger-type equation efficiently. Section \ref{sec:distributed_impln} details a distributed implementation of the proposed algorithm. Section \ref{sec:numerical_res} contains numerical simulations on networks of different sizes. Section~\ref{sec:applications} contains two relevant applications to multi-agent and multi-dimensional systems. Section \ref{sec:conclusions} concludes the paper.
For convenience we summarize the important notation used in this paper in Table \ref{tab:list_symbols}. \vspace*{-0.25cm} \begin{table}[h]
\begin{center}
\begin{tabular}[h!]{|c||l|}\hline
Notation & Meaning\\ \hline \hline
$G$, $(V,E)$ & Graph, Node set and edge set \\ \hline
$n$, $m$ & No.\ of nodes, no.\ of edges \\ \hline
$A$ & Adjacency matrix \\ \hline
$L$ & Laplacian matrix\\ \hline
$\lambda_1, \ldots, \lambda_k$ & Smallest $k$ eigenvalues of $L$ in ascending order \\ \hline
${\boldsymbol u}_1, \ldots, {\boldsymbol u}_k$ & Eigenvectors corresponding to $\lambda_1, \ldots, \lambda_k$ \\ \hline
$d_j$ & Degree of node $j$ without including self loop \\ \hline
$\Delta(G)$ & $\max\{d_1,\ldots,d_n\}$ \\ \hline
${\cal N}(m)$ & Neighbor list of node $m$ without including self loop \\ \hline
$\varepsilon$ & Sampling interval in time domain \\ \hline
$T,s$ & Total time frame and no.\ of samples $\lceil T/\varepsilon \rceil$ \\ \hline
${\boldsymbol x}(t), {\boldsymbol x}_i$ & vector ${\boldsymbol x}$ with index in continuous and discrete time \\ \hline
${\boldsymbol x}_i[k]$ & $k$th component of a vector ${\boldsymbol x}_i$ \\ \hline \end{tabular} \vspace*{0.2cm} \caption{List of important notations} \label{tab:list_symbols} \end{center} \vspace*{-0.75cm} \end{table}
\section{Mechanical spring analogy with Lagrangian dynamics} \label{sec:mechanical_lag} Consider a hypothetical mechanical representation of the graph $G$ in which unit masses are placed on the vertices and the edges are replaced by mechanical springs of unit stiffness. Using either Lagrangian or Newtonian mechanics, the dynamics of this system are described by the following system of differential equations \begin{equation} \label{eq:LargDiff} \ddot{{\boldsymbol x}}(t) + L {\boldsymbol x}(t) = {\boldsymbol 0}. \end{equation} The system has the Hamiltonian function ${\cal H} = \frac{1}{2} \dot{{\boldsymbol x}}^\intercal I \dot{{\boldsymbol x}} + \frac{1}{2} {\boldsymbol x}^\intercal L {\boldsymbol x}$, $I$ being the identity matrix of order $n$.
We note that once we obtain, by some identification method, the frequencies $\omega_k$ of the above oscillatory system, the eigenvalues of the Laplacian $L$ can be immediately retrieved by the simple formula $\lambda_k = |\omega_k^2|$. This will be made clearer later in this section.
Starting with a random initial vector ${\boldsymbol x}(0)$, we can simulate the motion of this spring system. For the numerical integration, the Leapfrog or Verlet method \cite{verlet1967computer} can be applied. Being an example of a geometric integrator, the Verlet method has several remarkable properties. It has the same computational complexity as the Euler method (which is employed in \cite{AvJaSr_Infocom16} as a first order distributed diffusion), but it is a second order method. In addition, the Verlet method is stable for oscillatory motion and keeps the errors in energy and computation bounded \cite[Chapter~4]{leimkuhler2004simulating}. It has the following two forms. Let ${\boldsymbol p}(t):= \dot{{\boldsymbol x}}(t)$, let ${\boldsymbol x}_i$ be the approximation of ${\boldsymbol x}(i\varepsilon)$, and similarly ${\boldsymbol p}_i$ for ${\boldsymbol p}(i\varepsilon)$. Here $\varepsilon$ is the step size for integration. First, define \[{\boldsymbol p}_{1/2} = {\boldsymbol p}_0 + \varepsilon/2 (-L {\boldsymbol x}_0).\] Then, perform the following iterations \begin{eqnarray} {\boldsymbol x}_i & = & {\boldsymbol x}_{i-1} + \varepsilon {\boldsymbol p}_{i-1/2} {\nonumber} \\ {\boldsymbol p}_{i+1/2} & = & {\boldsymbol p}_{i-1/2} + \varepsilon (-L {\boldsymbol x}_i). {\nonumber} \end{eqnarray} Equivalently, one can do the updates as \begin{eqnarray} {\boldsymbol x}_{i+1} & = & {\boldsymbol x}_{i} + \varepsilon {\boldsymbol p}_{i} + \varepsilon^2/2 (-L {\boldsymbol x}_i) {\nonumber} \\ {\boldsymbol p}_{i+1} & = & {\boldsymbol p}_{i} + \varepsilon/2\, [(-L {\boldsymbol x}_i) + (-L {\boldsymbol x}_{i+1})].{\nonumber} \end{eqnarray} We call the above algorithm Order-2 Leapfrog.
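The two Verlet forms above can be sketched in a few lines (Python; the $3$-node path Laplacian, step size and number of steps are illustrative). The bounded energy error is precisely what distinguishes the method from explicit Euler:

```python
import numpy as np

# Order-2 Leapfrog for x'' + L x = 0 on a hypothetical 3-node path graph
# (unit masses, unit spring stiffness).
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

eps, s = 0.01, 5000
rng = np.random.default_rng(1)
x, p = rng.standard_normal(3), rng.standard_normal(3)

def energy(x, p):                         # the Hamiltonian H
    return 0.5 * p @ p + 0.5 * x @ (L @ x)

E0 = energy(x, p)
p_half = p + (eps / 2) * (-L @ x)         # initial half step
for _ in range(s):
    x = x + eps * p_half                  # drift
    p_half = p_half + eps * (-L @ x)      # kick
p = p_half - (eps / 2) * (-L @ x)         # re-synchronize momentum with x

# The energy error stays bounded -- the hallmark of a geometric integrator.
assert abs(energy(x, p) - E0) < 1e-3 * max(1.0, abs(E0))
```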
The solution of the differential equation \eqref{eq:LargDiff} subject to the initial values ${\boldsymbol x}(0)={\boldsymbol a}_0$ and ${\boldsymbol p}(0)={\boldsymbol b}_0$ is \begin{equation} {\boldsymbol x}(t) = \left(\frac{1}{2}{\boldsymbol a}_0-i\frac{{\boldsymbol b}_0}{\sqrt{\Lambda}} \right) e^{i t \sqrt{L}} +
\left(\frac{1}{2}{\boldsymbol a}_0+i\frac{{\boldsymbol b}_0}{\sqrt{\Lambda}} \right) e^{-i t \sqrt{L}}, {\nonumber} \end{equation} where we assume the decomposition of $L$ given by the spectral theorem, i.e., $L=U \Lambda U^\intercal$ with $U$ the orthonormal matrix whose columns are the eigenvectors and $\Lambda$ the diagonal matrix formed from the eigenvalues. Further simplification of the above expression, using the fact that $f(L)=U f(\Lambda) U^\intercal$ for any function $f$ which can be expressed as a power series, gives \begin{equation} {\boldsymbol x}(t) = \cos(t\sqrt{L}) {\boldsymbol a}_0 + (\sqrt{L})^{-1} \sin (t \sqrt{L}) {\boldsymbol b}_0, {\nonumber} \end{equation} whose $k$th component (in the eigenbasis) is \[{\boldsymbol a}_0[k] \cos (t\sqrt{\lambda_k})+\frac{{\boldsymbol b}_0[k]}{\sqrt{\lambda_k}} \sin(t \sqrt{\lambda_k}).\] Now we have \begin{eqnarray} \lefteqn{\int_{-\infty}^{+\infty} {\boldsymbol x}(t) e^{-it\theta}dt} {\nonumber} \\ & = & \int_{-\infty}^{+\infty} \sum_{k=1}^n \cos (t\sqrt{\lambda_k}) {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol a}_0) e^{-it\theta}dt {\nonumber} \\ & & +\int_{-\infty}^{+\infty} (\sqrt{L})^{-1} \sum_{k=1}^n \sin(t\sqrt{\lambda_k}) {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol b}_0) e^{-it\theta}dt {\nonumber} \\ & = & \sum_{k=1}^n {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol a}_0) \left(\pi[\delta(\theta-\sqrt{\lambda_k})+\delta(\theta+\sqrt{\lambda_k})] \right) {\nonumber} \\ & & \hspace*{- 0.7 cm} +(\sqrt{L})^{-1} {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol b}_0) \left(-\pi i [\delta(\theta-\sqrt{\lambda_k})-\delta(\theta+\sqrt{\lambda_k})] \right).{\nonumber} \end{eqnarray} Taking the real and positive part of the spectrum gives $\pi \sum_{k=1}^n {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol a}_0) \delta(\theta-\sqrt{\lambda_k})$. 
The whole operation can be approximated by applying an $s$-point FFT on $\{{\boldsymbol x}_i, 0\leq i < s\}$ and taking the real values. (To be exact, there is a phase factor multiplying the $k$th point of the FFT approximation, given by $({\sqrt{2 \pi}})^{-1}\varepsilon \exp(-i t_0 k \lambda_{\text{diff}} )$, where we consider the time interval $[t_0, t_0+s \varepsilon]$.)
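The trajectory-plus-FFT pipeline can be sketched as follows (Python; the $3$-node path Laplacian, whose eigenvalues are $0, 1, 3$, and all parameters are illustrative). The peaks of the spectrum observed at one node sit near $\sqrt{\lambda_k}$:

```python
import numpy as np

# Order-2 Leapfrog trajectory at one node, followed by an FFT.
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
lam_true = np.sort(np.linalg.eigvalsh(L))     # 0, 1, 3

eps, s = 0.05, 4096
x = np.random.default_rng(2).standard_normal(3)   # a0 random, b0 = 0
p_half = (eps / 2) * (-L @ x)
traj = np.empty((s, 3))
for i in range(s):
    traj[i] = x
    x = x + eps * p_half
    p_half = p_half + eps * (-L @ x)

spec = np.abs(np.fft.rfft(traj[:, 0]))        # spectrum observed at node 0
freqs = 2 * np.pi * np.fft.rfftfreq(s, d=eps) # angular frequencies
peak_freq = freqs[np.argmax(spec[1:]) + 1]    # skip the DC bin (lambda = 0)
# The detected peak lies near sqrt(lambda_k) for some nonzero eigenvalue.
assert np.min(np.abs(peak_freq - np.sqrt(lam_true[1:]))) < 0.1
```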
Note that \eqref{eq:LargDiff} is different from the original differential equation in \cite{AvJaSr_Infocom16} where it is $\dot{{\boldsymbol x}}(t)= i L {\boldsymbol x}(t)$ containing complex coefficients.
\section{Hamiltonian Dynamics and Relation with Quantum Random Walk} \label{sec:ham_dynamics} In \cite{AvJaSr_Infocom16} we studied the Schr\"{o}dinger-type equation of the form \eqref{eq:de_infocom16} with $M$ taken as the adjacency matrix $A$ and ${\boldsymbol \psi}$ as the wave function. Let us now consider a similar equation with respect to the graph Laplacian \begin{equation} \label{eq:SrodL} \dot{{\boldsymbol \psi}}(t) = i L {\boldsymbol \psi}(t). \end{equation} The solution of this dynamics is closely related to the evolution of a continuous-time quantum random walk, and algorithms were developed in \cite{AvJaSr_Infocom16} based on this observation.
Now since the matrix $L$ is real and symmetric, it is sufficient to use the real-imaginary representation of the wave function ${\boldsymbol \psi}(t) = {\boldsymbol x}(t) + i {\boldsymbol y}(t)$, ${\boldsymbol x}(t), {\boldsymbol y}(t) \in \mathbb{R}^n$. Substituting this representation into equation (\ref{eq:SrodL}) and taking real and imaginary parts, we obtain the following system of equations \begin{eqnarray} \dot{{\boldsymbol x}}(t) & = & -L {\boldsymbol y}(t), {\nonumber} \\ \dot{{\boldsymbol y}}(t) & = & L {\boldsymbol x}(t), {\nonumber} \end{eqnarray} or equivalently in matrix form \begin{equation} \label{eq:HamDiff} \frac{d}{dt} \left[\begin{array}{c} {\boldsymbol x}(t) \\ {\boldsymbol y}(t) \end{array}\right] = \left[\begin{array}{cc} 0 & -L \\ L & 0 \end{array}\right] \left[\begin{array}{c} {\boldsymbol x}(t) \\ {\boldsymbol y}(t) \end{array}\right]. \end{equation} Such a system has the following Hamiltonian function \begin{equation} {\cal H} = \frac{1}{2} {\boldsymbol x}^\intercal L {\boldsymbol x} + \frac{1}{2} {\boldsymbol y}^\intercal L {\boldsymbol y}. 
\label{eq:ham_schroding} \end{equation} Next, the very helpful decomposition \[ \left[\begin{array}{cc} 0 & -L \\ L & 0 \end{array}\right] = \left[\begin{array}{cc} 0 & 0 \\ L & 0 \end{array}\right] + \left[\begin{array}{cc} 0 & -L \\ 0 & 0 \end{array}\right], \] together with the observation that \[ \exp \left( \left[\begin{array}{cc} 0 & 0 \\ L & 0 \end{array}\right] \right) = \left[\begin{array}{cc} I & 0 \\ L & I \end{array}\right], \] leads us to another modification of the leapfrog method, known as the symplectic split operator algorithm \cite{blanes2006symplectic}: Initialize with \[ {\boldsymbol \delta} {\boldsymbol y} = -L {\boldsymbol x}_0, \] then perform the iterations \begin{eqnarray} {\boldsymbol y}_{i-1/2} & = & {\boldsymbol y}_{i-1} - \frac{\varepsilon}{2} {\boldsymbol \delta} {\boldsymbol y}, \label{eq:ham_alg1_0} \\ {\boldsymbol x}_i & = & {\boldsymbol x}_{i-1} - \varepsilon L {\boldsymbol y}_{i-1/2}, \label{eq:ham_alg1} \end{eqnarray} and update \begin{eqnarray} {\boldsymbol \delta} {\boldsymbol y} & = & -L {\boldsymbol x}_i, \label{eq:ham_alg2_0}\\ {\boldsymbol y}_i & = & {\boldsymbol y}_{i-1/2} - \frac{\varepsilon}{2} {\boldsymbol \delta} {\boldsymbol y}. \label{eq:ham_alg2} \end{eqnarray} The above modified leapfrog method belongs to the class of symplectic integrator (SI) methods \cite{blanes2008splitting,leimkuhler2004simulating,iserles2009first}. We call the above algorithm {\it order-2 SI}.
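A minimal sketch of the order-2 SI iterations \eqref{eq:ham_alg1_0}--\eqref{eq:ham_alg2}, run side by side with the explicit Euler scheme on $\dot{{\boldsymbol \psi}} = iL{\boldsymbol \psi}$ for contrast (Python; the matrix and parameters are illustrative):

```python
import numpy as np

# Order-2 SI (symplectic split operator) vs. explicit Euler on psi' = iL psi.
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

eps, s = 0.02, 20000
rng = np.random.default_rng(3)
x, y = rng.standard_normal(3), rng.standard_normal(3)
psi_eu = x + 1j * y                       # same starting point for Euler
H0 = 0.5 * x @ (L @ x) + 0.5 * y @ (L @ y)

dy = -L @ x                               # initialization
for _ in range(s):
    y_half = y - (eps / 2) * dy           # half step in y
    x = x - eps * (L @ y_half)            # full step in x
    dy = -L @ x                           # refresh delta-y
    y = y_half - (eps / 2) * dy           # second half step in y
    psi_eu = psi_eu + eps * 1j * (L @ psi_eu)   # explicit Euler step

H = 0.5 * x @ (L @ x) + 0.5 * y @ (L @ y)
assert abs(H - H0) < 1e-2 * max(1.0, H0)        # SI: Hamiltonian stays put
assert np.abs(psi_eu).max() > 10 * np.abs(x + 1j * y).max()  # Euler drifts
```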
The Hamiltonian system approach can be implemented in two ways: \begin{enumerate} \item Form the complex vector ${\boldsymbol x}_k+i{\boldsymbol y}_k$ at each of the $\varepsilon$ intervals. Then $\{{\boldsymbol x}_k+i{\boldsymbol y}_k, 0\leq k < s\}$ with ${\boldsymbol x}_0 = {\boldsymbol a}_0$ and ${\boldsymbol y}_0 = {\boldsymbol b}_0$ approximates $\exp(iLt)({\boldsymbol a}_0+i{\boldsymbol b}_0)$ at $t = 0, \varepsilon,\ldots,(s-1)\varepsilon$. A direct application of an $s$-point FFT with appropriate scaling will give the spectral decomposition as in \eqref{eq:dirac_decompsn}. \item Note that the formulation in \eqref{eq:HamDiff} is equivalent to the following differential equations \begin{equation} \ddot{{\boldsymbol y}}(t) + L^2 {\boldsymbol y}(t) = 0,\quad \ddot{{\boldsymbol x}}(t) + L^2 {\boldsymbol x}(t) = 0, \label{eq:Hamiltonian_2nd_tech} \end{equation} which are similar to \eqref{eq:LargDiff} except for the term $L^2$. Along the same lines of analysis as in the previous section, taking the real and positive part of the spectrum of the ${\boldsymbol y}$ component alone gives $\pi \sum_{k=1}^n {\boldsymbol u}_k ({\boldsymbol u}_k^{\intercal} {\boldsymbol a}_0) \delta(\theta-\lambda_k)$. \end{enumerate}
\subsection{Fourth order integrator} The Hamiltonian ${\cal H}$ in \eqref{eq:ham_schroding} associated to the Schr\"odinger-type equation has the special characteristic that it is separable into two quadratic forms, which helps in developing higher order integrators. The $r$-stage integrator has the following form. Between times $t$ and $t+\varepsilon$, we run, for $j =1,\ldots,r$, \begin{eqnarray} {\boldsymbol y}_j &=& {\boldsymbol y}_{j-1} + p_j \varepsilon L {\boldsymbol x}_{j-1} {\nonumber} \\ {\boldsymbol x}_j &=& {\boldsymbol x}_{j-1} - q_j \varepsilon L {\boldsymbol y}_j.{\nonumber} \end{eqnarray} The order of the resulting integrator is determined by the number of stages $r$ and the choice of the coefficients $\{p_j\}$ and $\{q_j\}$. For our numerical studies we take the optimized coefficients for order $4$ derived in \cite{gray1996symplectic}. We call this algorithm {\it order-4 SI}.
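The $r$-stage loop can be written generically (Python sketch). The coefficient choice used below reproduces the order-2 SI and is included only to make the sketch easy to sanity-check; the optimized order-4 coefficients are those of \cite{gray1996symplectic} and are not reproduced here:

```python
import numpy as np

def r_stage_si(L, x, y, eps, steps, p, q):
    """Generic r-stage splitting integrator for x' = -L y, y' = L x.
    The coefficient vectors p, q determine the order of the method."""
    for _ in range(steps):
        for pj, qj in zip(p, q):
            y = y + pj * eps * (L @ x)
            x = x - qj * eps * (L @ y)
    return x, y

# Hypothetical 3-node path Laplacian and random initial data.
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)
rng = np.random.default_rng(4)
x0, y0 = rng.standard_normal(3), rng.standard_normal(3)

# With p = (1/2, 1/2), q = (1, 0) the stage loop reduces to order-2 SI,
# so the Hamiltonian must stay bounded over a long run.
x, y = r_stage_si(L, x0, y0, 0.02, 10000, (0.5, 0.5), (1.0, 0.0))
H0 = 0.5 * x0 @ (L @ x0) + 0.5 * y0 @ (L @ y0)
H = 0.5 * x @ (L @ x) + 0.5 * y @ (L @ y)
assert abs(H - H0) < 1e-2 * max(1.0, H0)
```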
\section{Distributed Implementation} \label{sec:distributed_impln}
The order-2 symplectic integrator algorithm \eqref{eq:ham_alg1_0}--\eqref{eq:ham_alg2} can be implemented in a distributed fashion such that each node communicates only with its neighbors. The matrix-vector multiplications in \eqref{eq:ham_alg1} and \eqref{eq:ham_alg2_0}, during one iteration of the algorithm, require the diffusion of packets or fluids to the neighbors of every node, and the fusion of the received fluids from all the neighbors at each node. Each iteration of the algorithm thus has two diffusion-fusion cycles and three synchronization points. A diffusion-fusion cycle consists of $|E|$ packets sent in parallel, and hence the total number of packets exchanged in one iteration of the algorithm is $2|E|$. Since order-2 SI does not require orthonormalization (unlike the classical power iteration and inverse iteration methods for computing eigenelements), and the diffusion-fusion cycle stays within the one-hop neighborhood, the time delay of the algorithm will not be significant. The synchronization points do pose some constraints and demand extra resources. We have also considered an asynchronous version of the distributed algorithm, which will be presented in an extended version of this work.
\section{Numerical Results} \label{sec:numerical_res} The parameters $\varepsilon$ and $s$ are chosen in the numerical studies to satisfy the constraints in \eqref{eq:ft_constraints}. We assume that the maximum degree is known to us.
Note that if the only purpose is to detect the eigenvalues, not to compute the eigenvectors, then instead of taking the real part of the FFT in the Hamiltonian solution, it is clearly better to compute the absolute value of the complex quantity to obtain higher peaks. But in the following simulations we look for the eigenvectors as well.
For the numerical studies, in order to show the effectiveness of the distributed implementation, we focus on one particular node and plot the spectrum observed at that node. In the plots, $f_{\theta}(k)$ indicates the approximated spectrum at frequency $\theta$ observed on node $k$.
\subsection{Les Mis{\'e}rables network} In the Les Mis{\'e}rables network, nodes are the characters of the well-known novel of the same name, and edges are formed when two characters appear in the same chapter. The number of nodes is $77$ and the number of edges is $254$. We look at the spectral plot at a specific node, Valjean (node ID $11$), a character in the novel.
The instability of the Euler method is clear from Figure~\ref{fig:lesmiserables_eu_traj}, whereas Figure \ref{fig:lesmiserables_hm_traj} shows the guaranteed stability of the Hamiltonian SI. Here the $y$-axis represents the absolute value of ${\boldsymbol \psi}(t)$. (Note the difference in the $y$-axis scales between the figures.) Figure \ref{fig:lesmiserables_or_2_lag} shows the result given by the Lagrangian Leapfrog method from Section~\ref{sec:mechanical_lag}. It can be observed that very few of the smallest eigenvalues are detected using order-2 Leapfrog compared to the order-2 SI technique in Figure \ref{fig:lesmiserables_or_2_ham}. This demonstrates the superiority of the Hamiltonian system approach. Figure \ref{fig:lesmiserables_or_4} shows order-4 SI with far fewer iterations. The precision of the order-4 plot can be significantly improved by increasing the number of iterations.
\begin{figure}
\caption{Trajectories}
\label{fig:lesmiserables_eu_traj}
\label{fig:lesmiserables_hm_traj}
\label{fig:lesmiserables_trajectories}
\end{figure}
\vspace*{-0.75 cm} \begin{figure}
\caption{Les Mis{\'e}rables network: Order-2 Leapfrog}
\label{fig:lesmiserables_or_2_lag}
\end{figure} \vspace*{-0.75 cm} \begin{figure}
\caption{Les Mis{\'e}rables network: Order-2 SI}
\label{fig:lesmiserables_or_2_ham}
\end{figure}
\begin{figure}
\caption{Les Mis{\'e}rables network: Order-4 SI}
\label{fig:lesmiserables_or_4}
\end{figure}
\subsection{Coauthorship graph in network science} The coauthorship graph represents a collaborative network of scientists working in network science, as compiled by M.\ Newman \cite{Newman_netscience_data}. The numerical experiments are done on the largest connected component, with $n=379$ and $m=914$. Figure \ref{fig:netscience_or_4} displays the order-4 SI simulation, and it can be seen that even though the eigenvalues are very close, the algorithm is able to distinguish them clearly. \begin{figure}
\caption{Coauthorship graph in Network Science: Order-4 SI}
\label{fig:netscience_or_4}
\end{figure}
\subsection{Enron email network} The nodes in this network are the email IDs of the employees of the Enron corporation, and edges are formed when two employees communicated by email\footnote{Data collected from SNAP: \url{http://snap.stanford.edu/data}}. Since the graph is not connected, we take the largest connected component, with $33,696$ nodes and $180,811$ edges. The standard MATLAB procedures for eigenelement computation have difficulty coping with such network sizes. The node under focus is the highest-degree node in that component. The simulation result is shown in Figure \ref{fig:enron_or_4}. \begin{figure}
\caption{Enron graph: Order-4 SI}
\label{fig:enron_or_4}
\end{figure}
\vspace*{-0.4 cm}
\section{Applications} \label{sec:applications} In this section we consider two related applications of the distributed spectrum computation to multi-agent and multi-dimensional (nD) systems.
First, we consider the consensus protocol in the wireless sensor networks \cite{AEN11}.
We model a wireless sensor network as a random geometric graph, with nodes corresponding to agents (sensors) located on the unit square in $\mathbb{R}^2$ and edges corresponding to possible communication links within radius $R$. The consensus protocol is described as follows: let $x_k(t)$ be the value of sensor $k$ at time slot $t$; then $$ x_k(t+1) = \sum_{\ell \in N_{[k]}} w_{k \ell} x_{\ell}(t), \quad k=1,...,n, $$ or in matrix form $x(t+1)=Wx(t)$, where $N_{[k]}$ is the set of neighbour nodes of node $k$ including node $k$ itself, and $W=[w_{k \ell}]$ is the matrix of edge weights. Let the initial value of sensor $k$ be $x_k(0)=m_k$. Then, if the consensus protocol converges, we have $$ \lim_{t \to \infty} x(t) = \bar{m}{\bf 1}, \quad \bar{m}=\frac{1}{n}\sum_{k=1}^{n}m_k. $$ It has been demonstrated in \cite{AEN11} that the performance of the best constant consensus protocol is very good in sparse wireless networks. The optimal weight value is given by \cite{xiao2004fast} $$ w_{k \ell} = \frac{2}{\lambda_1(L)+\lambda_{n-1}(L)}, $$ where $\lambda_t(L)$ denotes the $t$-th largest eigenvalue of the graph Laplacian $L=D-W$, $D=\text{diag}(W{\bf 1})$. Using our distributed approach to eigenvalue computation, we can propose a completely distributed, self-tuning, best constant consensus protocol.
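A sketch of the resulting protocol on a toy graph (Python). The fixed $4$-node graph and the initial readings are hypothetical stand-ins for a random geometric graph, and the Laplacian eigenvalues are computed centrally here only for illustration; in the self-tuning protocol they come from our distributed algorithm:

```python
import numpy as np

# Hypothetical 4-node sensor graph: a triangle 0-1-2 with a pendant node 3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(1)) - A
lam = np.sort(np.linalg.eigvalsh(Lap))[::-1]   # descending: 4, 3, 1, 0
w = 2.0 / (lam[0] + lam[-2])                   # largest + smallest nonzero = 2/5
W = np.eye(4) - w * Lap                        # rows sum to one

m = np.array([1.0, 2.0, 3.0, 10.0])            # initial sensor readings
x = m.copy()
for _ in range(500):
    x = W @ x                                  # consensus iterations
assert np.allclose(x, m.mean(), atol=1e-6)     # converges to the average
```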
We note that in the above example of the consensus protocol, perfect consensus is actually reached only in the limit. In practice we would often like to obtain consensus in finite time. The finite-time consensus problem is very challenging and there is no simple solution for its design (see e.g., \cite{ENA13}). In \cite{Meng12} it has been suggested to use Iterative Learning Control (ILC) to achieve finite-time consensus. As in \cite{Meng12}, we assume that each agent $k$ can be described by fairly general Markovian dynamics $$ y_k(t,r) = g_k(t) + h_k(q)u_k(t,r), \quad k=1,...,n, $$
where $t$ indicates the time slot, whereas $r$ indicates the ILC iteration. Here $g_k(t)$ is the zero-input response function, $h_k(q)$ is the transfer operator with Markovian parameters, and $u_k(t,r)$ is the control input.
The authors of \cite{Meng12} have proposed the following update rule for the control $$ u_k(t,r+1) = u_k(t,r) + \gamma_k \sum_{\ell \in {\cal N}(k)} a_{k \ell}(y_{\ell}(T,r)-y_k(T,r)), $$ with $a_{k \ell}$ being elements of the graph adjacency matrix and $\gamma_k$ being the learning gains to be designed, and have shown that, under such an update rule, the system ``learns'' finite-time consensus, i.e., $$ \lim_{r \to \infty} (y_{\ell}(T,r)-y_k(T,r)) = 0, \quad \forall k, \ell \in \{1,...,n\}, $$ if the following condition holds $$ \rho(I-H\Gamma L) < 1, $$ where $\Gamma=\text{diag}\{\gamma_1,...,\gamma_n\}$ is the diagonal matrix of gains, $H=\text{diag}\{h_1(T),...,h_n(T)\}$ is the diagonal matrix of the response function at time $T$ and $\rho(A)$ is the spectral radius of matrix $A$. In fact, if the communication graph is undirected and connected, the condition $\rho(I-H\Gamma L) < 1$ is always satisfied. However, in practice it is advisable not to be too aggressive in learning \cite{Longman2002}, and hence our distributed procedure can be used to choose the gains $\{\gamma_k\}$ so that the value $\rho(I-H\Gamma L)$ estimated by our algorithm is neither too small nor too close to one.
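The gain-design condition can be checked numerically as follows (Python sketch; the graph, the gains $\gamma_k$ and the response values $h_k(T)$ are hypothetical). Since $L{\bf 1}={\bf 0}$, the matrix $I-H\Gamma L$ always has the eigenvalue $1$ along the consensus direction, so in this sketch we assume the spectral-radius condition is intended on the disagreement subspace and check the remaining eigenvalues:

```python
import numpy as np

# Hypothetical 3-node path communication graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(1)) - A
H = np.diag([1.0, 0.8, 1.2])        # h_k(T): hypothetical response values
Gamma = np.diag([0.2, 0.2, 0.2])    # learning gains under test

eig = np.linalg.eigvals(np.eye(3) - H @ Gamma @ Lap)
mags = np.sort(np.abs(eig))
# One eigenvalue is always 1 (consensus direction, Lap @ 1 = 0); the
# learning condition concerns the remaining (disagreement) eigenvalues.
assert np.isclose(mags[-1], 1.0) and mags[-2] < 1.0
```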
\section{Conclusions and future research} \label{sec:conclusions} We have proposed a distributed approach to the eigenvalue-eigenvector problem for graph matrices based on Hamiltonian dynamics, symplectic integrators and the smoothed Fourier transform. We have demonstrated on networks of various sizes that the proposed approach scales efficiently and finds, with high resolution, closely situated eigenvalues and the associated eigenvectors of graph matrices.
In the future we hope to present asynchronous versions of the introduced algorithms, to extend the proposed approaches to some classes of non-symmetric matrices and to design an automatic or a semi-automatic procedure for the identification of dominant eigenvalues and eigenvectors.
\end{document}
\begin{document}
\newtheorem{defn}{Definition}[section] \newtheorem{thm}{Theorem}[section] \newtheorem{prop}{Proposition}[section] \newtheorem{exam}{Example}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{rem}{Remark}[section] \newtheorem{lem}{Lemma}[section] \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{K}}{\mathbb{K}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \def{\alpha}{{\alpha}} \def{\beta}{{\beta}} \def{\delta}{{\delta}} \def{\gamma}{{\gamma}} \def{\lambda}{{\lambda}} \def{\mathfrak g}{{\mathfrak g}} \def\mathcal {\mathcal }
\title{The classification of Leibniz superalgebras of nilindex $n+m$ ($m\neq 0$)} \author{J. R. G\'{o}mez, A.Kh. Khudoyberdiyev and B.A. Omirov} \address{[J.R. G\'{o}mez] Dpto. Matem\'{a}tica Aplicada I. Universidad de Sevilla. Avda. Reina Mercedes, s/n. 41012 Sevilla. (Spain)} \email{[email protected]} \address{[A.Kh. Khudoyberdiyev -- B.A. Omirov] Institute of Mathematics and Information Technologies of Academy of Uzbekistan, 29, F.Hodjaev srt., 100125, Tashkent (Uzbekistan)} \email{[email protected] --- [email protected]}
\thanks{The first author was supported by the PAI, FQM143 of the Junta de Andaluc\'{\i}a (Spain) and the last author was supported by grant NATO-Reintegration ref. CBP.EAP.RIG.983169}
\maketitle
\begin{abstract} In this paper we investigate the description of complex Leibniz superalgebras with nilindex $n+m$, where $n$ and $m$ ($m\neq 0$) are the dimensions of the even and odd parts, respectively. In fact, such superalgebras with characteristic sequence equal to
$(n_1, \dots, n_k | m_1, \dots, m_s)$ (where $n_1+\dots +n_k=n, \
m_1+ \dots + m_s=m$) for $n_1\geq n-1$, and those with characteristic sequence $(n_1, \dots, n_k | m)$,
were classified in the works \cite{FilSup}--\cite{C-G-O-Kh1}. Here we prove that in the case of $(n_1, \dots, n_k| m_1, \dots, m_s)$, where $n_1\leq n-2$ and $m_1 \leq m-1$, the Leibniz superalgebras have nilindex less than $n+m.$ Thus, we complete the classification of Leibniz superalgebras with nilindex $n+m.$ \end{abstract}
\textbf{Mathematics Subject Classification 2000}: 17A32, 17B30, 17B70, 17A70.
\textbf{Key Words and Phrases}: Lie superalgebras, Leibniz superalgebras, nilindex, characteristic sequence, natural gradation.
\section{Introduction}
For many years the theory of Lie superalgebras has been actively studied by mathematicians and physicists. A systematic exposition of the basics of Lie superalgebra theory can be found in \cite{Kac}. Many works have been devoted to the study of this topic, but unfortunately most of them do not deal with nilpotent Lie superalgebras. In the works \cite{2007Yu}, \cite{GL}, \cite{G-K-N} the problem of describing some classes of nilpotent Lie superalgebras has been studied. It is well known that Lie superalgebras are a generalization of Lie algebras. In the same way, the notion of Leibniz algebras, which were introduced in \cite{Lod}, can be generalized to Leibniz superalgebras \cite{Alb}, \cite{Liv}. Some elementary properties of Leibniz superalgebras were obtained in \cite{Alb}.
In the work \cite{G-K-N} the Lie superalgebras with maximal nilindex were classified. Such superalgebras are two-generated and their nilindex equals $n+m$ (where $n$ and $m$ are the dimensions of the even and odd parts, respectively). In fact, there exists a unique Lie superalgebra of maximal nilindex. This superalgebra is a filiform Lie superalgebra (with characteristic sequence equal to
$(n-1,1 | m)$); we also mention the paper \cite{2007Yu}, where some crucial properties of filiform Lie superalgebras are given.
For nilpotent Leibniz superalgebras the description of the case of maximal nilindex (nilpotent Leibniz superalgebras distinguished by being single-generated) is not difficult and was given in \cite{Alb}.
However, the description of Leibniz superalgebras of nilindex
$n+m$ is much more problematic and requires solving many technical tasks. Therefore, they can be studied by applying restrictions on their characteristic sequences. In the present paper we consider Leibniz superalgebras with characteristic sequence $(n_1, \dots, n_k | m_1, \dots, m_s)$ ($n_1\leq n-2$ and $m_1\leq m-1$) and nilindex $n+m.$ Recall that such superalgebras for $n_1\geq n-1$ or $m_1=m$ have already been classified in the works
\cite{FilSup}--\cite{C-G-O-Kh1}. Namely, we prove that a Leibniz superalgebra with characteristic sequence equal to $(n_1, \dots, n_k | m_1, \dots, m_s)$ ($n_1\leq n-2$ and $m_1\leq m-1$) has nilindex less than $n+m.$ Therefore, we complete the classification of Leibniz superalgebras with nilindex $n+m.$
It should be noted that in our study the natural gradation of the even part of the Leibniz superalgebra plays a crucial role. In fact, we use some properties of naturally graded Lie and Leibniz algebras to obtain a convenient basis of the even part of the superalgebra (the so-called adapted basis).
Throughout this work we consider spaces and (super)algebras over the field of complex numbers. Asterisks $(*)$ denote appropriate coefficients of the basis elements of a superalgebra.
\section{Preliminaries}
Recall the notion of Leibniz superalgebras.
\begin{defn} A $\mathbb{Z}_2$-graded vector space $L=L_0\oplus L_1$ is called a Leibniz superalgebra if it is equipped with a product $[-, -]$ which satisfies the following conditions:

1. $[L_\alpha,L_\beta]\subseteq L_{\alpha+\beta \pmod 2},$

2. $[x, [y, z]]=[[x, y], z] - (-1)^{\alpha\beta} [[x, z], y]$ (the Leibniz superidentity),\\
for all $x\in L,$ $y \in L_\alpha,$ $z \in L_\beta$ and $\alpha,\beta\in \mathbb{Z}_2.$ \end{defn}
The vector spaces $L_0$ and $L_1$ are called the even and odd parts of the superalgebra $L$, respectively. Evidently, the even part of a Leibniz superalgebra is a Leibniz algebra.
Note that if in a Leibniz superalgebra $L$ the identity $$[x,y]=-(-1)^{\alpha\beta} [y,x]$$ holds for any $x \in L_{\alpha}$ and $y \in L_{\beta},$ then the Leibniz superidentity turns into the Jacobi superidentity. Thus, Leibniz superalgebras generalize both Lie superalgebras and Leibniz algebras.
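Let us indicate the verification (a routine computation). Suppose the graded antisymmetry above holds and let $x\in L_\gamma.$ Since $[x,z]\in L_{\gamma+\beta},$ antisymmetry gives $[[x,z],y]=-(-1)^{\alpha(\gamma+\beta)}[y,[x,z]],$ and substituting this into the Leibniz superidentity yields
$$[x,[y,z]]=[[x,y],z]-(-1)^{\alpha\beta}[[x,z],y]=[[x,y],z]+(-1)^{\alpha\gamma}[y,[x,z]],$$
where we used $(-1)^{2\alpha\beta}=1;$ this is exactly the graded Jacobi identity.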
The set of all Leibniz superalgebras whose even and odd parts have dimensions $n$ and $m,$ respectively, is denoted by $Leib_{n,m}.$
For a given Leibniz superalgebra $L$ we define the descending central sequence as follows: $$ L^1=L,\quad L^{k+1}=[L^k,L], \quad k \geq 1. $$
\begin{defn} A Leibniz superalgebra $L$ is called nilpotent if there exists $s\in\mathbb N$ such that $L^s=0.$ The minimal number $s$ with this property is called the nilindex of the superalgebra $L.$ \end{defn}
\begin{defn} The set $$\mathcal{R}(L)=\left\{ z\in L\ |\ [L, z]=0\right\}$$ is called the right annihilator of a superalgebra $L.$ \end{defn}
Using the Leibniz superidentity it is easy to see that $\mathcal{R}(L)$ is an ideal of the superalgebra $L$. Moreover, the elements of the form $[a,b]+(-1)^{\alpha \beta}[b,a],$ ($a \in L_{\alpha}, \ b \in L_{\beta}$) belong to $\mathcal{R}(L)$.
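For completeness, let us check the last claim. For any $x\in L$ and $a \in L_{\alpha}, \ b \in L_{\beta}$ the Leibniz superidentity gives
$$[x,[a,b]]=[[x,a],b]-(-1)^{\alpha\beta}[[x,b],a], \qquad [x,[b,a]]=[[x,b],a]-(-1)^{\alpha\beta}[[x,a],b],$$
whence
$$\bigl[x,\,[a,b]+(-1)^{\alpha\beta}[b,a]\bigr]=[[x,a],b]-(-1)^{2\alpha\beta}[[x,a],b]=0,$$
i.e. $[a,b]+(-1)^{\alpha \beta}[b,a]\in\mathcal{R}(L).$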
The following theorem describes nilpotent Leibniz superalgebras with maximal nilindex. \begin{thm} \label{t1} \cite{Alb} Let $L$ be a Leibniz superalgebra of $Leib_{n,m }$ with nilindex equal to $n+m+1.$ Then $L$ is isomorphic to one of the following non-isomorphic superalgebras: $$ [e_i,e_1]=e_{i+1},\ 1\le i\le n-1, \ m=0;\quad \left\{ \begin{array}{ll} [e_i,e_1]=e_{i+1},& 1\le i\le n+m-1, \\ {[}e_i,e_2{]}=2e_{i+2}, & 1\le i\le n+m-2,\\ \end{array}\right. $$ (omitted products are equal to zero). \end{thm}
\begin{rem} {\em From the assertion of Theorem \ref{t1} we have that in the case of non-trivial odd part $L_1$ of the superalgebra $L$ there are two possibilities for $n$ and $m$, namely, $m=n$ if $n+m$ is even and $m=n+1$ if $n+m$ is odd. Moreover, it is clear that a Leibniz superalgebra has maximal nilindex if and only if it is single-generated.} \end{rem}
Let $L=L_0\oplus L_1$ be a nilpotent Leibniz superalgebra. For an arbitrary element $x\in L_0,$ the operator of right multiplication $R_x:L \rightarrow L$ (defined by $R_x(y)=[y,x]$) is a nilpotent endomorphism of each space $L_i,$ $i\in \{0, 1\}.$ Since we work over the field of complex numbers, we may consider the Jordan form of $R_x.$ Denote by $C_i(x)$ ($i\in \{0, 1\}$) the descending sequence of the dimensions of the Jordan blocks of $R_x$ restricted to $L_i.$ Consider the lexicographical order on the set $C_i(L_0)$.
\begin{defn} \label{d4}A sequence $$C(L)=\left( \left.\max\limits_{x\in L_0\setminus L_0^2} C_0(x)\
\right|\ \max\limits_{\widetilde x\in L_0\setminus L_0^2} C_1\left(\widetilde x\right) \right) $$ is said to be the characteristic sequence of the Leibniz superalgebra $L.$ \end{defn}
Similarly to \cite{GL} (Corollary 3.0.1) it can be proved that the characteristic sequence is invariant under isomorphisms.
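In practice, the sequences $C_i(x)$ of Definition \ref{d4} can be read off from the ranks of the powers of the nilpotent operator $R_x.$ The following Python sketch (not part of the paper; the function name and tolerance are our own choices) recovers the descending list of Jordan block sizes of a nilpotent matrix via the classical rank formula.

```python
import numpy as np

def jordan_block_sizes(N, tol=1e-9):
    """Descending sequence of Jordan block sizes of a nilpotent matrix N.

    Uses the classical formula: the number of Jordan blocks of size >= k
    equals rank(N^(k-1)) - rank(N^k).
    """
    n = N.shape[0]
    ranks = []
    P = np.eye(n)
    while True:
        r = int(np.linalg.matrix_rank(P, tol=tol))
        ranks.append(r)
        if r == 0:
            break
        P = P @ N
    # blocks_ge[k] = number of Jordan blocks of size >= k+1
    blocks_ge = [ranks[k] - ranks[k + 1] for k in range(len(ranks) - 1)]
    blocks_ge.append(0)  # sentinel: no blocks exceed the nilpotency degree
    sizes = []
    for k in range(len(blocks_ge) - 2, -1, -1):
        sizes.extend([k + 1] * (blocks_ge[k] - blocks_ge[k + 1]))
    return sorted(sizes, reverse=True)

# The operator R_{e_1} of the single-generated algebra [e_i, e_1] = e_{i+1}
# (Theorem \ref{t1} with m = 0) is a single nilpotent Jordan block:
R = np.zeros((4, 4))
for i in range(3):
    R[i + 1, i] = 1.0
print(jordan_block_sizes(R))  # -> [4]
```

For that single-generated algebra the script returns one block of full size, i.e. the characteristic sequence is $(n).$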
Since Leibniz superalgebras from $Leib_{n,m}$ with nilindex $n+m$
and with characteristic sequence equal to $(n_1, \dots, n_k |
m_1, \dots, m_s),$ where either $n_1\geq n-1$ or $m_1=m,$ were already classified, we reduce our investigation to the case of the characteristic sequence $(n_1, \dots, n_k| m_1, \dots, m_s),$ where $n_1\leq n-2$ and $m_1 \leq m-1.$
From Definition \ref{d4} it follows that the Leibniz algebra $L_0$ has characteristic sequence $(n_1, \dots, n_k).$ Let $l \in \mathbb{N}$ be the nilindex of the Leibniz algebra $L_0.$ Since $n_1 \leq n-2,$ we have $l \leq n-1,$ and the Leibniz algebra $L_0$ has at least two generators (elements of the set $L_0\setminus L_0^2$).
For completeness, below we present the classifications from the papers \cite{FilSup}--\cite{C-G-O-Kh} and \cite{G-K-N}.
$Leib_{1,m}:$ $$\small \left\{\begin{array}{l} [y_i,x_1]=y_{i+1}, \ \ 1\leq i \leq m-1. \end{array}\right.$$ $Leib_{n,1}:$ $$ \small\left\{\begin{array}{ll} [x_i,x_1]=x_{i+1},& 1 \leq i \leq n-1,\\{} [y_1,y_1]=\alpha x_n, & \alpha = \{0, \ 1\}.\end{array}\right.$$ $Leib_{2,2}:$ $$\small\begin{array}{ll} \left\{\begin{array}{l} [y_1,x_1]=y_2, \\ {[}x_1,y_1]=\displaystyle \frac12 y_2, \\[2mm] {[}x_2,y_1]=\displaystyle y_2, \\[2mm] [y_1,x_2] = 2y_2, \\ {[}y_1,y_1]=x_2, \\ \end{array}\right.&
\left\{\begin{array}{l} [y_1,x_1]=y_2, \\ {[}x_2,y_1]=\displaystyle y_2, \\[2mm] {[}y_1,x_2]= 2y_2, \\ {[}y_1,y_1]=x_2. \\ \end{array}\right. \end{array}$$ $Leib_{2,m}, \ m \ \mbox{odd}:$ $$\small\begin{array}{ll} \left\{\begin{array}{ll} [x_1,x_1]=x_2, \ & m\geq 3, \\{} [y_i,x_1]=y_{i+1},& 1\leq i\leq m-1,\\{} [x_1,y_i]=-y_{i+1},&1\leq i\leq m-1,\\{} [y_i,y_{m+1-i}]=(-1)^{i+1}x_2, & 1\leq i\leq \frac{m+1}2. \end{array}\right.& \left\{\begin{array}{ll} [y_i,x_1]= [x_1, y_i]= -y_{i+1},& 1\leq i\leq m-1,\\{} [y_{m+1-i},y_i]=(-1)^{i+1}x_2, & 1\leq i\leq \frac{m+1}2. \end{array}\right. \end{array}$$
In order to present the classification of Leibniz superalgebras with characteristic sequence $(n-1,1 | m)$, $n \geq 3$ and nilindex $n+m$ we need to introduce the following families of superalgebras: $$\bf Leib_{n,n-1}:$$ $L(\alpha_4, \alpha_5, \ldots, \alpha_n, \theta):$ $$ \left\{\begin{array}{ll} [x_1,x_1]=x_3,& \\[1mm] {[}x_i,x_1]=x_{i+1},& 2 \le i \le n-1, \\[1mm] {[}y_j,x_1]=y_{j+1},& 1 \le j \le n-2, \\[1mm] {[}x_1,y_1]= \frac12 y_2,& \\[1mm] {[}x_i,y_1]= \frac12 y_i, & 2 \le i \le n-1, \\[1mm] {[}y_1,y_1]=x_1,& \\[1mm] {[}y_j,y_1]=x_{j+1},& 2 \le j \le n-1, \\[1mm] {[}x_1,x_2]=\alpha_4x_4+ \alpha_5x_5+ \ldots + \alpha_{n-1}x_{n-1}+ \theta x_n,& \\[1mm] {[}x_j,x_2]= \alpha_4x_{j+2}+ \alpha_5x_{j+3}+ \ldots + \alpha_{n+2-j}x_n,& 2 \le j \le n-2, \\[1mm] {[}y_1,x_2]= \alpha_4y_3+ \alpha_5y_4+ \ldots + \alpha_{n-1}y_{n-2}+\theta y_{n-1},& \\[1mm] {[}y_j,x_2]= \alpha_4y_{j+2}+ \alpha_5y_{j+3}+ \ldots + \alpha_{n+1-j}y_{n-1},& 2 \le j \le n-3. \end{array} \right.$$
$G(\beta_4,\beta_5, \ldots, \beta_n, \gamma):$ $$ \left\{\begin{array}{ll} [x_1,x_1]=x_3, \\[1mm] {[}x_i,x_1]=x_{i+1},& 3 \le i \le n-1, \\[1mm] {[}y_j,x_1]=y_{j+1}, & 1 \le j \le n-2, \\[1mm] {[}x_1,x_2]= \beta_4x_4+\beta_5x_5+\ldots+\beta_nx_n,& \\[1mm] {[}x_2,x_2]= \gamma x_n,& \\[1mm] {[}x_j,x_2]= \beta_4x_{j+2}+\beta_5x_{j+3}+\ldots+\beta_{n+2-j}x_n,& 3\le j\le n-2, \\[1mm] {[}y_1,y_1]=x_1,& \\[1mm] {[}y_j,y_1]=x_{j+1},& 2 \le j \le n-1, \\[1mm] {[}x_1,y_1]= \frac12 y_2,& \\[1mm] {[}x_i,y_1]= \frac12 y_i,& 3\le i\le n-1, \\[1mm] {[}y_j,x_2]= \beta_4y_{j+2}+\beta_5y_{j+3}+ \ldots + \beta_{n+1-j}y_{n-1},& 1\le j\le n-3. \end{array} \right.$$ $$\bf Leib_{n,n}:$$ $M(\alpha_4, \alpha_5, \ldots, \alpha_n, \theta, \tau):$
$$ \left\{ \begin{array}{ll} [x_1,x_1]=x_3,& \\[1mm] {[}x_i,x_1]=x_{i+1},& 2 \le i \le n-1, \\[1mm] {[}y_j,x_1]=y_{j+1}, & 1 \le j \le n-1, \\[1mm] {[}x_1,y_1]= \frac12 y_2,& \\[1mm] {[}x_i,y_1]= \frac12 y_i, & 2 \le i \le n, \\[1mm] {[}y_1,y_1]=x_1,& \\[1mm] {[}y_j,y_1]=x_{j+1},& 2 \le j \le n-1, \\[1mm] {[}x_1,x_2]=\alpha_4x_4+ \alpha_5x_5+ \ldots + \alpha_{n-1}x_{n-1}+ \theta x_n,& \\[1mm] {[}x_2,x_2]=\gamma_4x_4,&\\[1mm]
{[}x_j,x_2]= \alpha_4x_{j+2}+ \alpha_5x_{j+3}+ \ldots + \alpha_{n+2-j}x_n,&3 \le j \le n-2, \\[1mm] {[}y_1,x_2]= \alpha_4y_3+ \alpha_5y_4+ \ldots + \alpha_{n-1}y_{n-2}+\theta y_{n-1}+\tau y_n,& \\[1mm] {[}y_2,x_2]= \alpha_4y_4+ \alpha_5y_4+ \ldots + \alpha_{n-1}y_{n-1}+\theta y_n,& \\[1mm] {[}y_j,x_2]= \alpha_4y_{j+2}+ \alpha_5y_{j+3}+ \ldots + \alpha_{n+2-j}y_{n},& 3 \le j \le n-2.\end{array} \right.$$
$H(\beta_4, \beta_5, \ldots,\beta_n, \delta , \gamma ):$ $$ \left\{ \begin{array}{ll} [x_1,x_1]=x_3,& \\[1mm] {[}x_i,x_1]=x_{i+1},& 3 \le i \le n-1, \\[1mm] {[}y_j,x_1]=y_{j+1}, & 1 \le j \le n-2, \\[1mm] {[}x_1,x_2]= \beta_4x_4+\beta_5x_5+\ldots+\beta_nx_n,& \\[1mm] {[}x_2,x_2]= \gamma x_n, &\\[1mm] {[}x_j,x_2]= \beta_4x_{j+2}+\beta_5x_{j+3}+\ldots+\beta_{n+2-j}x_n,& 3\le j\le n-2, \\[1mm] {[}y_1,y_1]=x_1,& \\[1mm] {[}y_j,y_1]=x_{j+1},& 2 \le j \le n-1, \\[1mm] {[}x_1,y_1]= \frac12 y_2,& \\[1mm] {[}x_i,y_1]= \frac12 y_i,& 3\le i\le n-1, \\[1mm] {[}y_1,x_2]= \beta_4y_3+\beta_5y_4+ \ldots + \beta_ny_{n-1}+\delta y_n,& \\[1mm] {[}y_j,x_2]= \beta_4y_{j+2}+\beta_5y_{j+3}+ \ldots + \beta_{n+2-j}y_n,& 2\le j\le n-2. \end{array} \right.$$
Analogously, for the Leibniz superalgebras with characteristic sequence $(n | m-1,1)$, $n \geq 2$ we introduce the following families of superalgebras: $$\bf Leib_{n,n+1}:$$ $E\left( \gamma, \beta_{\left[ \frac{n+4}2\right]}, \beta_{\left[ \frac{n+4}2\right]+1}, \ldots, \beta_n,\beta\right):$
$$ \left\{ \begin{array}{ll} [x_i,x_1]=x_{i+1},& 1 \le i \le n-1, \\[1mm] [y_j,x_1]=y_{j+1}, & 1 \le j \le n-1, \\[1mm] [x_i,y_1]=\frac12 y_{i+1}, &1\le i\le n-1, \\[1mm] [y_j,y_1]=x_{j}, & 1\le j\le n, \\[1mm] [y_{n+1},y_{n+1}]=\gamma x_n, & \\[1mm] [x_i,y_{n+1}]=\sum\limits_{k=\left[\frac{n+4}2\right]}^{n+1-i} \beta_k y_{k-1+i}, & 1\le i\le \left[ \frac{n-1}2\right], \\[1mm] [y_1,y_{n+1}]=-2\sum\limits_{k=\left[\frac{n+4}2\right]}^{n} \beta_k x_{k-1}+\beta x_n,& \\[1mm] [y_j,y_{n+1}]=-2\sum\limits_{k=\left[\frac{n+4}2\right]}^{n+2-j} \beta_k x_{k-2+j},& 2\le j\le \left[\frac{n+1}2\right]. \\[1mm] \end{array} \right.$$
$$\bf Leib_{n,n+2}:$$ $F\left( \beta_{\left[\frac{n+5}2\right]}, \beta_{\left[\frac{n+5}2\right]+1}, \ldots, \beta_{n+1}\right):$
$$ \left\{ \begin{array}{ll} [x_i,x_1]=x_{i+1},& 1 \le i \le n-1, \\[1mm] [y_j,x_1]=y_{j+1}, & 1 \le j \le n, \\[1mm] [x_i,y_1]=\frac12 y_{i+1}, &1\le i\le n, \\[1mm] [y_j,y_1]=x_{j}, & 1\le j\le n, \\[1mm] [x_i,y_{n+2}]=\sum\limits_{k=\left[\frac{n+5}2\right]}^{n+2-i} \beta_k y_{k-1+i}, & 1\le i\le \left[ \frac{n}2\right], \\[1mm] [y_j,y_{n+2}]=-2\sum\limits_{k=\left[\frac{n+5}2\right]}^{n+2-j} \beta_k x_{k-2+j}, & 1\le j\le \left[ \frac{n}2\right] \end{array} \right.$$
Let us also introduce the following operators, which act on $k$-dimensional vectors (here $\stackrel{j}{1}$ indicates that the entry $1$ stands in the $j$-th coordinate): $$ V^0_{j,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = ( 0, \ldots, 0, \stackrel{j}{1}, \ \delta \sqrt[j]{\delta ^{j+1}} S_{m,j}^{j+1} \alpha_{j+1}, \ \delta \sqrt[j]{\delta^{j+2}} S_{m,j}^{j+2} \alpha_{j+2}, \ \ldots , \ \delta \sqrt[j]{\delta ^{k}} S_{m,j}^{k} \alpha_{k} ) ; $$ $$ V^1_{j,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = ( 0, \ldots, 0, \stackrel{j}{1}, \ S_{m,j}^{j+1} \alpha_{j+1}, \ S_{m,j}^{j+2} \alpha_{j+2}, \ \ldots , \ S_{m,j}^{k} \alpha_{k} ) ; $$ $$ V^2_{j,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = ( 0, \ldots, 0, \stackrel{j}{1}, \ S_{m,2j+1}^{2(j+1)+1} \alpha_{j+1}, \ S_{m,2j+1}^{2(j+2)+1} \alpha_{j+2}, \ \ldots , \ S_{m,2j+1}^{2k+1} \alpha_{k} ) ; $$ $$ V^0_{k+1,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = V^1_{k+1,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = V^2_{k+1,k}(\alpha_1, \alpha_2,\ldots, \alpha_k) = (0, 0, \ldots, 0); $$ $$ W_{s,k}(0,0,\ldots,\stackrel{j-1}{0},\stackrel{j}{1},S_{m,j}^{j+1}\alpha_{j+1},S_{m,j}^{j+2}\alpha_{j+2},\ldots, S_{m,j}^k\alpha_k,\gamma)= $$ $$ =( 0, 0,\ldots, \stackrel{j}{1} ,0,\ldots,\stackrel{s+j}{1}, S_{m,s}^{s+1}\alpha_{s+j+1}, S_{m,s}^{s+2}\alpha_{s+j+2},\ldots, S_{m,s}^{k-j}\alpha_k, S_{m,s}^{k+6-2j}\gamma), $$ $$ W_{k+1-j,k}(0,0,\ldots,\stackrel{j-1}{0},\stackrel{j}{1},S_{m,j}^{j+1}\alpha_{j+1},S_{m,j}^{j+2}\alpha_{j+2},\ldots, S_{m,j}^k\alpha_k,\gamma)=$$ $\qquad =(0,0,\ldots,\stackrel{j}{1},0,\ldots,1),$ $$ W_{k+2-j,k}(0,0,\ldots,\stackrel{j-1}{0},\stackrel{j}{1},S_{m,j}^{j+1} \alpha_{j+1},S_{m,j}^{j+2}\alpha_{j+2},\ldots, S_{m,j}^k\alpha_k,\gamma)=$$ $\qquad =(0,0,\ldots,\stackrel{j}{1},0,\ldots,0), $ \\ where $k\in \mathbb{N},$ $\delta=\pm 1,$ $1\le j\le k,$ $1\le s\le k-j,$ and $\displaystyle S_{m,t}=\cos\frac{2\pi m}t+i\sin\frac{2\pi m}t$ $(m=0,1,\ldots, t-1).$
Below we present the complete list of pairwise non-isomorphic Leibniz superalgebras with nilindex $n+m$:
with characteristic sequence equal to $(n-1,1|m):$ $$ \begin{array}{l} L\left( V^1_{j,n-3}\left( \alpha_4,\alpha_5,\ldots, \alpha_n\right),S_{m,j}^{n-3}\theta\right),\qquad \ \ 1\le j\le n-3, \\[2mm] L(0,0,\ldots,0,1), \ L(0,0,\ldots,0), \ G(0,0,\ldots,0,1), \ G(0,0,\ldots,0), \\[2mm] G\left( W_{s,n-2}\left( V^1_{j,n-3}\left( \beta_4,\beta_5,\ldots,\beta_n\right),\gamma\right)\right),\quad 1\le j\le n-3,\ 1\le s\le n-j, \\[2mm] M\left( V^1_{j,n-2}\left( \alpha_4,\alpha_5,\ldots,\alpha_n\right),S_{m,j}^{n-3}\theta\right), \qquad \ 1\le j\le n-2, \\[2mm] M(0,0,\ldots,0,1), \ M(0,0,\ldots,0), \ H(0,0,\ldots,0,1), \ H(0,0,\ldots,0), \\[2mm] H\left( W_{s,n-1}\left( V^1_{j,n-2}\left( \beta_4,\beta_5,\ldots,\beta_n\right),\gamma\right)\right),\quad 1\le j\le n-2,\ 1\le s\le n+1-j, \\\end{array}
$$ with characteristic sequence equal to $(n|m-1,1)$ if $n$ is odd (i.e. $n=2q-1$): $$ \begin{array}{lll} E\left(1,\delta\beta_{q+1}, V_{j,q-2}^0(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n),0\right), & \displaystyle \beta_{q+1}\ne \pm\frac12, & 1\le j\le q-1, \\[2mm] E\left(1,\beta_{q+1}, V_{j,q-1}^0(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n,\beta)\right), & \beta_{q+1}=\displaystyle \pm\frac12, & 1\le j\le q, \\[2mm] E(0,1,V_{j,q-2}^0(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n),0), & 1\le j\le q-1, & \\[2mm] E(0,0, W_{s,q-1}(V^1_{j,q-1}(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n, \beta))), & 1\le j\le q-1, & 1\le s\le q-j, \\[2mm] E(0,0,\ldots,0); \\ \end{array} $$
if $n$ is even (i.e. $n=2q$): $$ \begin{array}{lll} E(1,V^2_{j,q-1}(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n), 0), & 1\le j\le q, & \\[2mm] E(0, W_{s,q}(V^1_{j,q}(\beta_{q+2}, \beta_{q+3}, \ldots, \beta_n,\beta))), & 1\le j\le q, & 1\le s\le q+1-j, \\[2mm] E(0,0,\ldots,0). \\ \end{array} $$ $$ F\left( W_{s,n+2-\left[\frac{n+5}2\right]} \left( V^1_{j, n+2-\left[\frac{n+5}2\right]} \left( \beta_{\left[ \frac{n+5}2 \right]}, \beta_{\left[\frac{n+5}2\right]+1}, \ldots, \beta_{n+1}\right)\right)\right), $$ where $1\le j\le n+2-\displaystyle \left[\frac{n+5}2\right],$ $1\le s\le n+3-\displaystyle \left[ \frac{n+5}2\right]-j,$ $$ F(0,0,\ldots,0). $$
For a given Leibniz algebra $A$ of nilindex $l$ we put $gr(A)_i = A^i / A^{i+1}, \ 1 \leq i \leq l-1,$ and $gr(A) = gr(A)_1 \oplus gr(A)_2 \oplus \dots \oplus gr(A)_{l-1}.$ Then $[gr(A)_i, gr(A)_j] \subseteq gr(A)_{i+j},$ and we obtain the graded algebra $gr(A).$
\begin{defn} \label{d5} The gradation constructed in this way is called the natural gradation, and if a Leibniz algebra $G$ is isomorphic to $gr(A),$ we say that $G$ is a naturally graded Leibniz algebra. \end{defn}
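For instance (a standard example, not taken from the classifications above), consider the $3$-dimensional Leibniz algebra $A$ with multiplications $[z_1,z_1]=z_2,$ $[z_2,z_1]=z_3$ (omitted products are zero). Then $A^2=\langle z_2, z_3\rangle,$ $A^3=\langle z_3\rangle,$ $A^4=0,$ so
$$gr(A)_1=\langle \overline{z}_1\rangle, \quad gr(A)_2=\langle \overline{z}_2\rangle, \quad gr(A)_3=\langle \overline{z}_3\rangle,$$
and the multiplications pass to the quotients unchanged, i.e. $A$ is naturally graded.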
\section{The main result} Let $L$ be a Leibniz superalgebra with characteristic sequence
$(n_1, \dots, n_k| m_1, \dots, m_s),$ where $ n_1 \leq n-2,$ $m_1 \leq m-1,$ and nilindex $n+m.$ Since the second part of the characteristic sequence of the Leibniz superalgebra $L$ equals $(m_1, \dots, m_s),$ by the definition of the characteristic sequence there exists a nilpotent endomorphism $R_x$ ($x\in L_0\setminus L_0^2$) of the space $L_1$ whose Jordan form consists of $s$ Jordan blocks. Therefore, we may assume the existence of an adapted basis $\{y_1, y_2, \dots, y_m\}$ of the subspace $L_1$ such that $$ \left\{\begin{array}{ll} [y_j,x]=y_{j+1}, \ & j \notin \{m_1, m_1+m_2, \dots, m_1+m_2+ \dots + m_s\},\\{} [y_j,x]=0,& j \in \{m_1, m_1+m_2, \dots, m_1+m_2+ \dots + m_s\}, \end{array}\right. \eqno(1) $$ for some $x \in L_0\setminus L_0^2.$
Further we shall use a homogeneous basis $\{x_1, \dots, x_n\}$ with respect to the natural gradation of the Leibniz algebra $L_0,$ which also agrees with the descending central sequence of $L.$
The main result of the paper is that the nilindex of a Leibniz superalgebra $L$ with characteristic sequence $(n_1, \dots, n_k| m_1, \dots, m_s),$ where $ n_1 \leq n-2$ and $m_1 \leq m-1,$ is less than $n+m.$
According to Theorem \ref{t1} we have the description of single-generated Leibniz superalgebras, which have nilindex $n+m+1.$ If the number of generators is greater than two, then the superalgebra has nilindex less than $n+m.$ Therefore, it remains to consider two-generated superalgebras.
The possible cases for the generators are:
1. Both generators lie in $L_0,$ i.e. $dim(L^2)_0 = n-2$ and $dim(L^2)_1 = m;$
2. One generator lies in $L_0$ and another one lies in $L_1,$
i.e. $dim(L^2)_0 = n-1$ and $dim(L^2)_1 = m-1;$
3. Both generators lie in $L_1,$ i.e. $dim(L^2)_0 = n$ and $dim(L^2)_1 = m-2.$
Moreover, a two-generated superalgebra $L$ has nilindex $n+m$ if and only if $dim L^k = n+m-k$ for $2 \leq k \leq n+m.$
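Let us indicate why. A two-generated superalgebra satisfies $dim L^2 = n+m-2,$ and if the nilindex equals $n+m,$ then the chain
$$L^2 \supsetneq L^3 \supsetneq \dots \supsetneq L^{n+m-1} \supsetneq L^{n+m}=0$$
consists of exactly $n+m-2$ strict inclusions. Hence each quotient satisfies $dim\, L^k/L^{k+1}=1$ for $2\leq k\leq n+m-1,$ which gives $dim L^k = n+m-k.$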
Since products of even elements are even, two generators lying in $L_0$ would generate a subalgebra contained in $L_0;$ as $m\neq 0,$ this case is impossible and we omit it.
\subsection{The case of one generator in $L_0$ and another one in $L_1$}
\
Since $dim(L^2)_0 = n-1$ and $dim(L^2)_1 = m-1,$ there exists some $m_j,$ $0 \leq j \leq s-1$ (where we set $m_0=0$), such that $y_{m_1+ \dots + m_j + 1} \notin L^2.$ By shifting the basis elements we may assume that $m_j=m_0,$ i.e. the basis element $y_1$ can be chosen as a generator of the superalgebra $L.$ This shifting may break the condition $m_1 \geq m_2 \geq \dots \geq m_s$ from the definition of the characteristic sequence, but we shall not use this condition in what follows.
Let $L=L_0 \oplus L_1$ be a two-generated Leibniz superalgebra from $Leib_{n,m}$ with characteristic sequence equal to $(n_1,
\dots, n_k| m_1, \dots, m_s),$ and let $\{x_1, \dots, x_n, y_1, \dots, y_m\}$ be a basis of $L.$
\begin{lem}\label{l1} Let one generator lie in $L_0$ and the other in $L_1.$ Then $x_1$ and $y_1$ can be chosen as generators of $L.$ Moreover, in equality (1) the element $x$ can be replaced by $x_1.$ \end{lem} \begin{proof} As mentioned above, $y_1$ can be chosen as the first generator of $L$. If $x\in L\setminus L^2,$ the assertion of the lemma is evident. If $x\in L^2,$ then there exists some $i_0$ ($i_0\geq 2$) such that $x_{i_0}\in L\setminus L^2.$ For $A\neq 0$ set $x_1'=Ax + x_{i_0};$ then $x_1'$ is a generator of the superalgebra $L$ (since $x_1'\in L\setminus L^2$). Moreover, transforming the basis of $L_1$ as follows $$ \left\{\begin{array}{ll} y_j'= y_j, \ & j \in \{1, m_1+1, \dots, m_1+m_2+ \dots + m_{s-1}+1\},\\{} y_j'= [y_{j-1}',x_1'],& j \notin \{1, m_1+1, \dots, m_1+m_2+ \dots + m_{s-1}+1\}, \end{array}\right.$$ and taking a sufficiently large value of the parameter $A,$ we preserve equality (1). Thus, in the basis $\{x_1', x_2, \dots, x_n, y_1', y_2', \dots, y_m'\}$ the elements $x_1'$ and $y_1'$ are generators. \end{proof}
Due to Lemma \ref{l1}, further we assume that $\{x_1, y_1\}$ are generators of the Leibniz superalgebra $L.$ Therefore, $$L^2 = \{x_2, x_3, \dots, x_n, y_2, y_3, \dots, y_m\}.$$ Let us introduce the notations: $$[x_i,y_1]= \sum\limits_{j=2}^m \alpha_{i,j}y_j,\ 1 \le i \le n, \ \ \ [y_i,y_1]= \sum\limits_{j=2}^n \beta_{i,j}x_j, \ 1 \le i \le m. \eqno (2)$$
Without loss of generality we can assume that $y_{m_1+\dots+m_i+1}\in L^{t_i}\setminus L^{{t_i}+1}$, where $t_i<t_j$ for $1\leq i<j\leq s-1.$
First we consider the case $dim(L^3)_0 = n-1,$ and then the case $dim(L^3)_0 = n-2.$\\[0.5mm]
{\bf Case $dim(L^3)_0 = n-1$.}\\[0.5mm]
In this subcase we have $$L^3 = \{x_2, x_3, \dots, x_n, y_3, \dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\},$$ where $(B_1,B_2)\neq(0,0).$
Analyzing the way the element $x_2$ can be obtained, we conclude that there exists $i_0 \ (2 \leq i_0 \leq m)$ such that $[y_{i_0},y_1]= \sum\limits_{j=2}^n \beta_{i_0,j}x_j$ with $\beta_{i_0,2} \neq 0.$
Let us show that $i_0 \notin\{m_1+1, \dots, m_1+ \dots + m_{s-1}+1\}.$ It is known that the elements $y_{m_1+ m_2+1}, \dots, y_{m_1+ \dots + m_{s-1}+1}$ are generated from the products $[x_i, y_1] \ (2 \leq i \leq n).$ Due to the nilpotency of $L$ we get $i_0 \notin \{m_1+m_2+1, \dots, m_1+ \dots + m_{s-1}+1\}.$ If $y_{m_1+1}$ is generated by $[x_1, y_1],$ i.e. $\alpha_{1, m_1+1} \neq 0$ in the expression $[x_1, y_1] = \sum\limits_{j=2}^m \alpha_{1,j}y_j,$ then we consider the product $$[[x_1, y_1],y_1] = [\sum\limits_{j=2}^m \alpha_{1,j}y_j, y_1] = \alpha_{1, m_1+1}\beta_{m_1+1,2}x_2 + \sum\limits_{i \geq 3} (*)x_i.$$ On the other hand, $$[[x_1, y_1],y_1] = \frac 1 2 [x_1,[y_1, y_1]] = \frac 1 2 [x_1, \sum\limits_{j=2}^n \beta_{1,j}x_j]
= \sum\limits_{i \geq 3} (*)x_i.$$
Comparing the coefficients of the corresponding basis elements we obtain $\alpha_{1, m_1+1}\beta_{m_1+1,2}=0,$ which implies $\beta_{m_1+1,2}=0.$ Hence $i_0 \neq m_1+1.$ Therefore, $\beta_{i_0,2} \neq 0$ with $i_0 \notin\{m_1+1, \dots, m_1+ \dots + m_{s-1}+1\}.$
\
\textbf{Case $y_2 \notin L^3.$} Then $B_2 \neq 0.$ Let $h\in\mathbb{N}$ be a number such that $x_2 \in L^h\setminus L^{h+1},$ that is $$ L^h = \{x_2, x_3, \dots, x_n, y_h,\dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}, \ h \geq 3,$$ $$L^{h+1} = \{x_3, x_4, \dots, x_n, y_h, \dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2},\dots, y_m\}.$$
Since the elements $B_1y_2 +B_2y_{m_1+1}, y_{m_1+ m_2+1}, \dots, y_{m_1+ \dots +m_{s-1}+1}$ are generated from the products $[x_i, y_1], \ 2 \leq i \leq n,$ it follows that $h \leq m_1 +1.$
So $x_2$ can be obtained only from the product $[y_{h-1},y_1],$ and hence $\beta_{h-1,2} \neq 0.$ Making the change of basis $x_2'= \sum\limits_{j=2}^n \beta_{h-1,j}x_j,$ we may assume that $[y_{h-1}, y_1] = x_2.$
Now let $p$ be the number such that $y_h \in L^{h+p}\setminus L^{h+p+1}.$ Then for the terms of the descending central sequence of the superalgebra $L$ we have $$L^{h+p} = \{x_{p+2}, x_{p+3}, \dots, x_n, y_h, \dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}, \ p \geq 1,$$ $$L^{h+p+1} = \{x_{p+2}, x_{p+3}, \dots, x_n, y_{h+1}, \dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}.$$
The following lemma gives a useful expression for the products $[y_i, y_j].$ \begin{lem}\label{le2} The equality $$[y_i, y_j] = (-1)^{h-1-i}C_{j-1}^{h-1-i}x_{i+j+2-h} + \sum\limits_{t > i+j+2-h}(*)x_t \eqno (3)$$ holds for $1 \leq i \leq h-1, \ h-i \leq j \leq \min\{h-1, h-1+p-i\}.$ \end{lem} \begin{proof} The proof proceeds by induction on $j$ for each value of $i.$ \end{proof}
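As a sanity check, for $i=h-1,$ $j=1$ (which lies in the admissible range, since $p\geq 1$) formula (3) gives
$$[y_{h-1},y_1]=(-1)^{0}C_{0}^{0}\,x_{2}+\sum\limits_{t>2}(*)x_t,$$
which agrees with the normalization $[y_{h-1}, y_1] = x_2$ obtained above.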
For the natural number $p$ we have the following lemma. \begin{lem}\label{l3} Under the above conditions, $p=1.$ \end{lem} \begin{proof} Assume the contrary, i.e. $p> 1.$ Then we may suppose $$[x_i, x_1] = x_{i+1}, \ 2 \leq i \leq p, \quad [x_{p+1}, y_1] = \sum\limits_{j = h}^m\alpha_{p+1,j} y_j, \quad \alpha_{p+1,h} \neq 0.$$
Using equality (3), we obtain the following chain of equalities:
$$[y_1, [y_{h-1},x_1]] = [[y_1, y_{h-1}],x_1] - [[y_1,x_1], y_{h-1}]= (-1)^{h-2}x_3 + \sum\limits_{t \geq 4}(*)x_t-$$ $$-(-1)^{h-3}(h-2)x_3 + \sum\limits_{t \geq 4}(*)x_t= (-1)^{h}(h-1)x_3 + \sum\limits_{t \geq 4}(*)x_t.$$
If $h \leq m_1,$ then $[y_1, [y_{h-1},x_1]] = [y_1, y_h]$. Since $y_h \in L^{h+p}$ and $p > 1,$ the coefficients of the basis elements $x_2$ and $x_3$ in the decomposition of $[y_1, y_h]$ are equal to zero. Therefore, the above equalities contradict the assumption $p>1.$
If $h = m_1 +1,$ then $[y_1, [y_{h-1},x_1]] = 0,$ and we again obtain the contradictory equality $(-1)^{h}(h-1)x_3 + \sum\limits_{t \geq 4}(*)x_t=0.$ This completes the proof of the lemma. \end{proof} We summarize the result for the considered case in the following theorem.
\begin{thm}\label{t2} Let $L=L_0 \oplus L_1$ be a Leibniz superalgebra from
$Leib_{n,m}$ with characteristic sequence equal to $(n_1, \dots, n_k | m_1, \dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1,$ and let $dim(L^3)_0 = n-1$ with $y_2 \notin L^3.$ Then $L$ has nilindex less than $n+m.$ \end{thm}
\begin{proof} Let us assume the contrary, i.e. that the nilindex of the superalgebra $L$ equals $n+m.$ Then according to Lemma \ref{l3} we have $$L^{h+2} = \{x_3, \dots, x_n, y_{h+1},\dots, y_{m_1}, B_1y_2 +B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}.$$
Since $y_h \notin L^{h+2},$ it follows that $$ \alpha_{2,h}\neq 0, \quad \alpha_{i,h}=0 \quad \mbox{for}\quad i>2.$$
Consider the product $$[[y_{h-1}, y_1], y_1] = \frac 1 2 [ y_{h-1}, [ y_1, y_1]] = \frac 1 2 [y_{h-1}, \sum\limits_{i=2}^n\beta_{1,i}x_i] .$$
The element $y_{h-1}$ belongs to $L^{h-1}$ and elements $x_2, x_3, \dots, x_n$ lie in $L^3.$ Hence $\frac 1 2 [y_{h-1}, \sum\limits_{i=2}^n\beta_{1,i}x_i] \in L^{h+2}.$ Since $y_h \notin L^{h+2},$ we obtain that $[[y_{h-1}, y_1], y_1] =\sum\limits_{j\geq h+1}(*)y_j.$
On the other hand, $$[[y_{h-1}, y_1], y_1] = [x_2, y_1] = \alpha_{2,h}y_h
+ \sum\limits_{j= h+1}^m\alpha_{2,j}y_j.$$
Comparing the coefficients of the basis elements we obtain $ \alpha_{2,h}=0,$ which contradicts the assumption that the superalgebra $L$ has nilindex $n+m.$ The theorem is proved. \end{proof}
\textbf{Case $y_2 \in L^3.$} Then $B_2=0,$ and the following theorem holds.
\begin{thm}\label{t3} Let $L=L_0 \oplus L_1$ be a Leibniz superalgebra from
$Leib_{n,m}$ with characteristic sequence equal to $(n_1, \dots, n_k | m_1, \dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1,$ and let $dim(L^3)_0 = n-1$ with $y_2 \in L^3.$ Then $L$ has nilindex less than $n+m.$ \end{thm} \begin{proof} We argue by contradiction: assume that the nilindex of the superalgebra $L$ equals $n+m.$ The condition $y_2 \in L^3$ implies $$L^3 = \{x_2, x_3, \dots, x_n, y_2, \dots, y_{m_1}, y_{m_1+2}, \dots, y_m\}.$$
Then $\alpha_{1, m_1+1} \neq 0$ and $\alpha_{i, m_1+1} =
0$ for $i \geq 2.$ The element $y_2$ is generated from the products $[x_i, y_1],$ $i\geq 2,$ which implies $y_2 \in L^4.$ Since $[y_{m_{1} +1}, y_1]=[[x_1,y_1],y_1]=\frac{1}{2}[x_1,[y_1,y_1]]=\frac{1}{2}[x_1,\sum(*)x_i]$ and $x_2$ is a generator of the Leibniz algebra $L_0,$ the element $x_2$ cannot be generated from the product $[y_{m_{1} +1}, y_1].$ Hence $x_2$ also belongs to $L^4.$
Consider the equality $$[[x_1, y_1], x_1] = [x_1,[ y_1, x_1]] + [[x_1, x_1], y_1] = [x_1, y_2] -[\sum\limits_{i\geq 3}(*)x_i, y_1].$$
From this it follows that the product $[[x_1, y_1], x_1]$ belongs to $L^5$ (and therefore to $L^4$).
On the other hand, $$[[x_1, y_1], x_1] = [\sum\limits_{j=2}^m \alpha_{1,j} y_j, x_1] = \alpha_{1,2}y_3 + \dots + \alpha_{1,m_1-1}y_{m_1} +\alpha_{1,m_1+1}y_{m_1+2} + \dots +\alpha_{1,m-1}y_{m}.$$
Since $\alpha_{1, m_1+1} \neq 0,$ we obtain $y_{m_1+2} \in L^4.$ Thus, $L^4 = \{x_2, x_3, \dots, x_n, \\ y_2, \dots, y_{m_1}, y_{m_1+2}, \dots, y_m\},$ that is, $L^4 = L^3,$ which contradicts the nilpotency of the superalgebra $L.$
Thus, the assumption that the superalgebra $L$ has nilindex $n+m$ is false, and the theorem is proved. \end{proof}
From Theorems \ref{t2} and \ref{t3} we conclude that a Leibniz superalgebra $L$ with $dim (L^3)_0 = n-1$ has nilindex less than $n+m.$
The investigation of Leibniz superalgebras with $dim (L^3)_0 = n-2$ shows that the restriction on the nilindex depends on the structure of the Leibniz algebra $L_0.$ Below we present some necessary remarks on nilpotent Leibniz algebras.
Let $A = \{z_1, z_2, \dots, z_n\}$ be an $n$-dimensional nilpotent Leibniz algebra of nilindex $l$ ($l < n$). Note that the algebra $A$ is not single-generated.
\begin{prop} \label{c1} \cite{C-G-O-Kh1} Let $gr(A)$ be a naturally graded non-Lie Leibniz algebra. Then $dim A^3 \leq n-4.$ \end{prop}
The result on nilindex of the superalgebra under the condition $dim(L^3)_0 = n-2$ is established in the following two theorems.
\begin{thm}\label{t4} Let $L=L_0 \oplus L_1$ be a Leibniz superalgebra from $Leib_{n,m}$
with characteristic sequence $(n_1, \dots, n_k | m_1, \dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1,$ $dim(L^3)_0 = n-2$ and $dim L_0^3 \leq n-4.$ Then $L$ has nilindex less than $n+m.$ \end{thm} \begin{proof} Let us assume the contrary, i.e. that the nilindex of the superalgebra $L$ equals $n+m.$ According to the condition $dim(L^3)_0 = n-2$ we have
$$ L^3 = \{x_3, x_4, \dots, x_n, y_2, y_3, \dots, y_m\}.$$
From the condition $dim L_0^3 \leq n-4$ it follows that at least two basis elements do not belong to $L_0^3.$ Without loss of generality, we may assume $x_3, x_4 \notin L_0^3.$
Let $h$ be a natural number such that $x_3 \in L^{h+1}\setminus L^{h+2}.$ Then we have
$$L^{h+1} = \{x_3, x_4, \dots, x_n, y_h, y_{h+1}, \dots, y_m\}, \ h \geq 2, \ \beta_{h-1, 3} \neq 0.$$ $$L^{h+2} = \{x_4, \dots, x_n, y_h, y_{h+1}, \dots, y_m\}.$$
Suppose that $x_3 \notin L_0^2.$ Then $x_3$ cannot be obtained from the products $[x_i, x_1],$ $2 \leq i\leq n.$ Therefore, it is generated by the products $[y_j, y_1], \ 2 \leq j \leq m,$ which implies $h \geq 3$ and $\alpha_{2,2}\neq 0.$
If $h=3,$ then $\beta_{2,3}\neq 0.$
Consider the chain of equalities $$[[x_2, y_1], y_1] = [\sum \limits_{j=2}^m \alpha_{2,j}y_j, y_1] = \sum \limits_{j=2}^m \alpha_{2,j}[y_j, y_1]= \alpha_{2,2}\beta_{2,3}x_3 + \sum \limits_{i\geq 4}(*)x_i.$$
On the other hand, $$[[x_2, y_1], y_1] = \frac 1 2 [x_2, [ y_1, y_1]] = \frac 1 2 [x_2, \sum \limits_{i=2}^n\beta_{1,i}x_i] = \frac 1 2 \sum \limits_{i=2}^n\beta_{1,i} [x_2, x_i] = \sum \limits_{i\geq 4}(*)x_i.$$
Comparing the coefficients of the corresponding basis elements, we get $\beta_{2,3} = 0,$ which contradicts $\beta_{2,3}\neq 0.$ Thus, $h \geq 4.$
Since $y_2 \in L^3$ and $h \geq 4,$ we have $y_{h-2} \in L^{h-1},$ which implies $[y_{h-2}, y_2] \in L^{h+2} = \{x_4, \dots, x_n, y_h, y_{h+1}, \dots, y_m\}.$ It means that in the decomposition of $[y_{h-2}, y_2]$ the coefficient of the basis element $x_3$ is equal to zero.
On the other hand, $$[y_{h-2}, y_2] = [y_{h-2}, [y_1, x_1]] = [[y_{h-2}, y_1], x_1] - [[y_{h-2}, x_1], y_1] =$$ $$=[ \sum\limits_{i=2}^n \beta_{h-2,i}x_i, x_1] - [y_{h-1}, y_1]= - \beta_{h-1,3}x_3 + \sum\limits_{i\geq 4}(*)x_i.$$
Hence, we get $\beta_{h-1,3} = 0,$ which contradicts $\beta_{h-1,3} \neq 0;$ thus the assumption $x_3 \notin L_0^2$ is false.
Therefore, we have $x_3, x_4 \in L_0^2\setminus L_0^3.$ The condition $x_4 \notin L_0^3$ implies that $x_4$ cannot be obtained from the products $[x_i, x_1],$ $3 \leq i\leq n.$ Therefore, it is generated by the products $[y_j, y_1], \ h \leq j \leq m.$ Hence, $L^{h+3} = \{x_4, \dots, x_n, y_{h+1}, \dots, y_m\}$ and $y_h \in L^{h+2} \setminus L^{h+3},$ which implies $\alpha_{3,h} \neq 0.$
Let $p$ ($ p \geq 3$) be a natural number such that $x_4 \in L^{h+p} \setminus L^{h+p+1}.$
Suppose that $p=3.$ Then $\beta_{h,4}\neq 0.$
Consider the chain of equalities $$[[x_3, y_1], y_1] = [\sum \limits_{j=h}^m \alpha_{3,j}y_j, y_1] = \sum \limits_{j=h}^m \alpha_{3,j}[y_j, y_1]= \alpha_{3,h}\beta_{h,4}x_4 + \sum \limits_{i\geq 5}(*)x_i.$$
On the other hand, $$[[x_3, y_1], y_1] = \frac 1 2 [x_3, [ y_1, y_1]] = \frac 1 2[x_3, \sum \limits_{i=2}^n\beta_{1,i}x_i] = \frac 1 2 \sum \limits_{i=2}^n\beta_{1,i} [x_3, x_i] = \sum \limits_{i\geq 5}(*)x_i.$$
Comparing the coefficients at the corresponding basic elements in these equations, we get $\alpha_{3,h}\beta_{h,4} = 0,$ which implies $ \beta_{h,4} = 0.$ This contradicts the assumption $p=3.$ Therefore, $p\geq 4$ and for the terms of the descending lower sequence we have $$L^{h+p-2} = \{x_4, \dots, x_n, y_{h+p-4}, \dots, y_m\},$$ $$L^{h+p-1} = \{x_4, \dots, x_n, y_{h+p-3}, \dots, y_m\},$$ $$L^{h+p} = \{x_4, \dots, x_n, y_{h+p-2}, \dots, y_m\},$$ $$L^{h+p+1} = \{x_5, \dots, x_n, y_{h+p-2}, \dots, y_m\}.$$
It is easy to see that in the decomposition $[y_{h+p-3}, y_1] = \sum\limits_{i=4}^n \beta_{h+p-3,i}x_i$ we have $\beta_{h+p-3,4} \neq 0.$
Consider the equalities $$[y_{h+p-4}, y_2] = [y_{h+p-4}, [y_1, x_1]] = [[y_{h+p-4}, y_1], x_1] - [[y_{h+p-4}, x_1], y_1] =$$ $$=[ \sum\limits_{i=4}^n \beta_{h+p-3,i}x_i, x_1] - [y_{h+p-3}, y_1]= - \beta_{h+p-3,4}x_4 + \sum\limits_{i\geq 5}(*)x_i.$$
Since $y_{h+p-4} \in L^{h+p-2},$ $y_2 \in L^3$ and $\beta_{h+p-3,4} \neq 0,$ the element $x_4$ should lie in $L^{h+p+1},$ but this contradicts $L^{h+p+1} = \{x_5, \dots, x_n, y_{h+p-2}, \dots, y_m\}.$ Thus, the superalgebra $L$ has a nilindex less than $n+m.$ \end{proof}
From Theorem \ref{t4} we conclude that a Leibniz superalgebra $L=
L_0 \oplus L_1$ with the characteristic sequence $(n_1, \dots, n_k| m_1, \dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1,$ and nilindex $n+m$ can appear only if $\dim L_0^3 \geq n-3.$ Taking into account the condition $n_1 \leq n-2$ and the properties of the naturally graded subspaces $gr(L_0)_1,$ $gr(L_0)_2,$ we get $\dim L_0^3 = n-3.$
Let $\dim L_0^3 =n-3.$ Then $$gr(L_0)_1 = \{\overline{x}_1, \overline{x}_2\}, \ gr(L_0)_2 = \{\overline{x}_3\}.$$
From Proposition \ref{c1} the naturally graded Leibniz algebra $gr(L_0)$ is a Lie algebra, i.e. the following multiplication rules hold $$ \left\{ \begin{array}{l} [\overline{x}_1,\overline{x}_1]=0, \\{} [\overline{x}_2,\overline{x}_1]=\overline{x}_3, \\{}
[\overline{x}_1,\overline{x}_2]=-\overline{x}_3, \\ {}[\overline{x}_2,\overline{x}_2]=0. \end{array}\right. $$
Writing the corresponding products in the Leibniz algebra $L_0$ with the basis $\{x_1, x_2, \dots, x_n\}$ we have $$ \left\{ \begin{array}{l} [x_1,x_1]=\gamma_{1,4}x_4 + \gamma_{1,5}x_5 + \dots + \gamma_{1,n}x_n, \\{} [x_2,x_1]=x_3, \\{} [x_1,x_2]=-x_3 + \gamma_{2,4}x_4 + \gamma_{2,5}x_5 + \dots + \gamma_{2,n}x_n, \\ {}[x_2,x_2]=\gamma_{3,4}x_4 + \gamma_{3,5}x_5 + \dots + \gamma_{3,n}x_n. \end{array} \right. \eqno(4)$$ \begin{thm}\label{t5} Let $L=L_0\oplus L_1$ be a Leibniz superalgebra from $Leib_{n,m}$
with characteristic sequence $(n_1, \dots, n_k | m_1, \dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1,$ $\dim (L^3)_0=n-2$ and $\dim L_0^3=n-3.$ Then $L$ has a nilindex less than $n+m.$ \end{thm}
\begin{proof} Let us suppose the contrary, i.e. the nilindex of the superalgebra $L$ equals $n+m.$ Then from the condition $\dim (L^3)_0=n-2$ we obtain $$ L^2 = \{x_2, x_3,\dots, x_n, y_2, \dots, y_m\},$$ $$L^3 = \{x_3, x_4, \dots, x_n, y_2, \dots, y_m\},$$ $$L^4 \supset \{x_4, \dots, x_n, y_3, \dots, y_{m_1}, B_1y_2 + B_2y_{m_1+1}, y_{m_1+2} \dots, y_m\}, \quad (B_1, B_2) \neq (0,0).$$
Suppose $x_3 \notin L^4.$ Then $$L^4 = \{x_4, \dots, x_n, y_2, \dots, y_{m_1}, y_{m_1+1}, \dots, y_m\}.$$ Let $B'_1y_2 + B'_2y_{m_1+1}$ be an element which disappears earlier in the descending lower sequence for $L$. Then this element can not be generated from the products $[x_i, y_1], \ 2 \leq i \leq n.$ Indeed, since $x_3 \notin L^4,$ the element can not be generated from $[x_2, y_1].$ Due to the structure of $L_0$ the elements $x_i, (3 \leq i \leq n)$ are in $L_0^2,$ i.e. they are generated by the linear combinations of the products of elements from $L_0.$ The equalities $$[[x_i, x_j], y_1] = [x_i,[ x_j, y_1]] + [[x_i, y_1], x_j] = [x_i,\sum\limits_{t=2}^m \alpha_{j,t}y_t] + [\sum\limits_{t=2}^m \alpha_{i,t}y_t, x_j]$$ imply that the element $B'_1y_2 + B'_2y_{m_1+1}$ can not be obtained from the products $[x_i, y_1], 3 \leq i \leq n.$ However, this means that $x_3\in L^4.$ Thus, we have $$L^4 = \{x_3, x_4, \dots, x_n, y_3, \dots, y_{m_1}, B_1y_2 + B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\},$$ where $(B_1, B_2) \neq (0,0)$ and $B_1B'_2 - B_2B'_1 \neq 0.$
A simple analysis of the descending lower sequences $L^3$ and $L^4$ implies $$[x_2, y_1] = \alpha'_{2,2}(B'_1y_2 + B'_2y_{m_1+1}) + \alpha'_{2,m_1+1}(B_1y_2 + B_2y_{m_1+1})+ \sum\limits_{\begin{array}{c}j=3\\j\neq m_1+1 \end{array}} ^m \alpha_{2,j}y_j,\quad \alpha'_{2,2} \neq 0.$$
Let $h$ be a natural number such that $x_3 \in L^{h+1}\setminus L^{h+2},$ i.e.
$$L^h = \{x_3, x_4, \dots, x_n, y_{h-1}, y_h, \dots, y_{m_1}, B_1y_2 + B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}, h \geq 3,$$ $$L^{h+1} = \{x_3, x_4, \dots, x_n, y_h, y_{h+1},\dots, y_{m_1}, B_1y_2 + B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\},$$ $$L^{h+2} = \{x_4, \dots, x_n, y_h, y_{h+1},\dots, y_{m_1}, B_1y_2 + B_2y_{m_1+1}, y_{m_1+2}, \dots, y_m\}.$$
If $h=3,$ then $[B'_1y_2 + B'_2y_{m_1+1}, y_1] = \beta'_{2,3}x_3 + \sum\limits_{i\geq 4}(*)x_i,$ $\beta'_{2,3} \neq 0$ and we consider the product $$[[x_2, y_1], y_1] = [ \alpha'_{2,2}(B'_1y_2 + B'_2y_{m_1+1}) +\alpha'_{2,m_1+1}(B_1y_2 + B_2y_{m_1+1})+ \sum\limits_{\begin{array}{c}j=3\\j\neq m_1+1 \end{array}}^m \alpha_{2,j}y_j, y_1] =$$ $$= \alpha'_{2,2}[B'_1y_2 + B'_2y_{m_1+1},y_1] + \alpha'_{2,m_1+1}[B_1y_2 + B_2y_{m_1+1},y_1]+ $$ $$+\sum\limits_{\begin{array}{c}j=3\\j\neq m_1+1 \end{array}}^m \alpha_{2,j}[y_j, y_1] = \alpha'_{2,2} \beta'_{2,3}x_3 + \sum\limits_{i\geq 4}(*)x_i .$$
On the other hand, due to (4) we have
$$[[x_2, y_1], y_1] = \frac 1 2 [x_2, [ y_1, y_1]] = \frac 1 2 [x_2, \sum\limits_{i=2}^n\beta_{1,i}x_i] = \sum\limits_{i\geq 4}(*)x_i.$$
Comparing the coefficients at the corresponding basic elements, we get the equality $\alpha'_{2,2}\beta'_{2,3} = 0,$ i.e. we have a contradiction with the supposition $h=3.$
If $h \geq 4,$ then we obtain $\beta'_{h-1,3} \neq 0.$ Consider the chain of equalities $$[y_{h-2}, y_2] = [y_{h-2}, [y_1, x_1]] = [[y_{h-2}, y_1], x_1] - [[y_{h-2}, x_1], y_1] =$$ $$= [ \sum\limits_{i=3}^n\beta_{h-2,i}x_i , x_1] - [y_{h-1}, y_1] = - \beta_{h-1,3}x_3 + \sum\limits_{i\geq 4}(*)x_i.$$
Since $y_{h-2} \in L^{h-1}$ and $y_2 \in L^3,$ we get $x_3 \in L^{h+2} = \{x_4, \dots, x_n, y_{h-1}, \dots, y_m\},$ which contradicts the assumption that the nilindex of $L$ is equal to $n+m.$ \end{proof}
\begin{rem} In this subsection we used the product $[y_1,x_1]=y_2.$ However, it is not difficult to check that the obtained results are also true under the condition $[y_1,x_1]=0.$ \end{rem}
\subsection{The case where both generators lie in $L_1$}
\begin{thm}\label{t6} Let $L=L_0 \oplus L_1$ be a Leibniz superalgebra from
$Leib_{n,m}$ with characteristic sequence equal to $(n_1, \dots, n_k | m_1,\dots, m_s),$ where $n_1\leq n-2, \ m_1\leq m-1$ and let both generators lie in $L_1.$ Then $L$ has a nilindex less than $n+m.$ \end{thm} \begin{proof}
Since both generators of the superalgebra $L$ lie in $L_1,$ they are linear combinations of the elements $\{y_1, y_{m_1+1}, \dots, y_{m_1+\dots+m_{s-1}+1}\}.$ Without loss of generality we may assume that $y_1$ and $y_{m_1+1}$ are generators.
Let $L^{2t} = \{x_i, x_{i+1}, \dots, x_n, y_j, \dots, y_m\}$ for some natural number $t$ and let $z \in L$ be an arbitrary element such that $z \in L^{2t} \setminus L^{2t+1}.$ Then $z$ is obtained by products of an even number of generators. Hence $z \in L_0$ and $L^{2t+1} = \{x_{i+1}, \dots, x_n, y_j, \dots, y_m\}.$ In a similar way, having $L^{2t+1} = \{x_{i+1}, \dots, x_n, y_j, \dots, y_m\}$ we obtain $L^{2t+2} = \{x_{i+1}, \dots, x_n, y_{j+1}, \dots, y_m\}.$
From the above arguments we conclude that $n = m-1$ or $n = m-2$ and $$L^3 = \{x_2, \dots, x_n, y_2, y_3, \dots, y_{m_1}, y_{m_1+2}, \dots, y_m\}.$$ Applying the above arguments we get that an element of the form $B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}$ disappears in $L^4.$ Moreover, there exist two elements $B'_1y_2 + B'_2y_{m_1+2} + B'_3y_{m_1+m_2+1}$ and $B''_1y_2 + B''_2y_{m_1+2} + B''_3y_{m_1+m_2+1}$ which belong to $L^4,$ where $$\mathrm{rank} \left(\begin{array}{lll} B_1&B_2&B_3\\ B'_1&B'_2&B'_3\\ B''_1&B''_2&B''_3\end{array}\right) =3.$$ Since $x_2$ does not belong to $L^5,$ the elements $B'_1y_2 + B'_2y_{m_1+2} + B'_3y_{m_1+m_2+1},$ $B''_1y_2 + B''_2y_{m_1+2} + B''_3y_{m_1+m_2+1}$ lie in $L^5.$ Hence, from the notations $$[x_1, y_1] = \alpha_{1,2}(B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}) + \alpha_{1,m_1+2}(B'_1y_2 + B'_2y_{m_1+2} + B'_3y_{m_1+m_2+1}) +$$$$+ \alpha_{1,m_1+m_2+1}(B''_1y_2 + B''_2y_{m_1+2} + B''_3y_{m_1+m_2+1})+ \sum\limits_{j=3, j\neq m_1+2, m_1+m_2+1}^m \alpha_{1,j}y_j,$$ $$[x_1, y_{m_1+1}] = \delta_{1,2}(B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}) + \delta_{1,m_1+2}(B'_1y_2 + B'_2y_{m_1+2} + B'_3y_{m_1+m_2+1}) +$$$$+ \delta_{1,m_1+m_2+1}(B''_1y_2 + B''_2y_{m_1+2} + B''_3y_{m_1+m_2+1})+ \sum\limits_{j=3, j\neq m_1+2, m_1+m_2+1}^m \delta_{1,j}y_j,$$ we have $(\alpha_{1,2},\delta_{1,2}) \neq (0,0).$
Similarly, from the notations $$[B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}, y_1 ] = \beta_{2,2}x_2 + \beta_{2,3}x_3 + \dots + \beta_{2,n}x_n,$$ $$[B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}, y_{m_1+1}] = \gamma_{2,2}x_2 + \gamma_{2,3}x_3 + \dots + \gamma_{2,n}x_n,$$ we obtain the condition $(\beta_{2,2}, \gamma_{2,2}) \neq (0, 0).$
Consider the product $$[x_1, [y_1, y_1]] = 2 [[x_1, y_1], y_1] = 2\alpha_{1,2}[B_1y_2 + B_2y_{m_1+2} + B_3y_{m_1+m_2+1}, y_1]+$$ $$ +2\alpha_{1,m_1+2}[B'_1y_2 + B'_2y_{m_1+2}+B'_3y_{m_1+m_2+1}, y_1] +$$$$ 2\alpha_{1,m_1+m_2+1}[B''_1y_2 + B''_2y_{m_1+2} + B''_3y_{m_1+m_2+1},y_1]+$$ $$+2\sum\limits_{j=3, j\neq m_1+2, m_1+m_2+1}^m \alpha_{1,j}[y_j,y_1]=2 \alpha_{1,2}\beta_{2,2}x_2 + \sum\limits_{i\geq 3}(*)x_i .$$ On the other hand, $$[x_1, [y_1, y_1]] = [x_1, \beta_{1,1}x_1 + \beta_{1,2}x_2 + \dots + \beta_{1,n}x_n] = \sum\limits_{i\geq 3}(*)x_i.$$
Comparing the coefficients at the basic elements in these equations we obtain $\alpha_{1,2}\beta_{2,2} = 0.$
Analogously, considering the product $[x_1, [y_{m_1+1}, y_{m_1+1}]],$ we obtain $\delta_{1,2}\gamma_{2,2} = 0.$
From these equations and the conditions $(\beta_{2,2}, \gamma_{2,2}) \neq (0, 0),$ $(\alpha_{1,2},\delta_{1,2}) \neq (0,0)$ we easily obtain that either $\alpha_{1,2}\gamma_{2,2} \neq 0,\ \beta_{2,2} =\delta_{1,2} =0$ or $\beta_{2,2}\delta_{1,2}\neq 0,\ \alpha_{1,2} = \gamma_{2,2}=0.$
Consider the following product $$[[x_1, y_1], y_{m_1+1}] = [x_1,[ y_1, y_{m_1+1}]] - [[x_1, y_{m_1+1}], y_1] =-\delta_{1,2}\beta_{2,2}x_2+ \sum\limits_{i\geq 3}(*)x_i.$$ On the other hand, $$[[x_1, y_1], y_{m_1+1}] = \alpha_{1,2}\gamma_{2,2}x_2+ \sum\limits_{i\geq 3}(*)x_i.$$ Comparing the coefficients of the basic elements in these equations we obtain the equality $\alpha_{1,2}\gamma_{2,2} = -\beta_{2,2}\delta_{1,2},$ which is incompatible with both of the alternatives obtained above. This contradicts the supposition that the nilindex of the superalgebra equals $n+m,$ and the theorem is proved. \end{proof}
Thus, the results of Theorems \ref{t2}--\ref{t6} show that the Leibniz superalgebras with nilindex $n+m$ ($m\neq 0$) are exactly the superalgebras mentioned in Section 2. Hence, the classification of the Leibniz superalgebras with nilindex $n+m$ is complete.
\end{document} |
\begin{document}
\parskip 4pt \baselineskip 16pt
\title[Bernoulli numbers and sums of powers of integers of higher order] {Bernoulli numbers and sums of powers of integers of higher order}
\author[Andrei K. Svinin]{Andrei K. Svinin} \address{Andrei K. Svinin, Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences, P.O. Box 292, 664033 Irkutsk, Russia} \email{[email protected]}
\author[Svetlana V. Svinina]{Svetlana V. Svinina} \address{Svetlana V. Svinina, Matrosov Institute for System Dynamics and Control Theory of Siberian Branch of Russian Academy of Sciences, P.O. Box 292, 664033 Irkutsk, Russia} \email{[email protected]}
\date{\today}
\begin{abstract} We give an expression for the polynomials computing higher-order sums of powers of integers via the higher-order Bernoulli numbers. \end{abstract}
\maketitle
\section{Introduction}
As is known, the sum of powers of integers \cite{Graham}, \cite{Knut} \[ S_m(n):=\sum_{q=1}^{n}q^{m} \] can be computed with the help of an appropriate polynomial $\hat{S}_m(n)$ for any $m\geq 0$. The exponential generating function for the sums $S_m(n)$ is given by \begin{equation} S(n, t)=\sum_{q=1}^{n}e^{qt}=\frac{e^{(n+1)t}-e^t}{e^t-1}. \label{genf} \end{equation} Expanding (\ref{genf}) in a series yields an infinite set of polynomials $\{\hat{S}_m(n) : m\geq 0\}$, that is, \[ S(n, t)=\sum_{q\geq 0}\hat{S}_{q}(n)\frac{t^q}{q!}. \] It is a classical result that these polynomials can be expressed as \cite{Jacobi} \begin{equation} \hat{S}_{m}(n)=\frac{1}{m+1}\sum_{q=0}^m(-1)^q{m+1\choose q}B_{q}n^{m+1-q}, \label{2} \end{equation} where $B_q$ are the Bernoulli numbers that can be derived from the exponential generating function \begin{equation} \frac{t}{e^t-1}=\sum_{q\geq 0}B_q\frac{t^q}{q!}. \label{Bernoulli} \end{equation} It follows from (\ref{Bernoulli}) that the Bernoulli numbers satisfy the recurrence relation \begin{equation} \sum_{q=0}^{m}{m+1\choose q}B_{q}=\delta_{0, m}. \label{rec-rel} \end{equation} This relation is in fact the simplest one of many known recurrence relations involving the Bernoulli numbers (see, for example, \cite{Agoh} and references therein). One can derive, for example, an infinite number of recurrence relations of the form \begin{equation} \sum_{q=0}^{m}{m+k\choose q}S(m+k-q, k)B_{q}=\frac{m+k}{k}S(m+k-1, k-1),\;\; \forall k\geq 1. \label{rec-rel1} \end{equation}
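As a quick numerical sanity check of the recurrence (\ref{rec-rel}) and the classical formula (\ref{2}), the following Python sketch (exact rational arithmetic; the function names are ours, not from the paper) computes the Bernoulli numbers and the polynomials $\hat{S}_m(n)$:

```python
from fractions import Fraction
from math import comb

def bernoulli(M):
    # B_0..B_M from the recurrence sum_{q=0}^{m} C(m+1, q) B_q = delta_{0,m}
    B = [Fraction(1)]
    for m in range(1, M + 1):
        B.append(-sum(comb(m + 1, q) * B[q] for q in range(m)) / (m + 1))
    return B

def S_hat(m, n):
    # formula (2): (1/(m+1)) sum_q (-1)^q C(m+1, q) B_q n^{m+1-q}
    B = bernoulli(m)
    return sum((-1) ** q * comb(m + 1, q) * B[q] * n ** (m + 1 - q)
               for q in range(m + 1)) / Fraction(m + 1)
```

For every small $m$ and $n$ tried, $\hat{S}_m(n)$ agrees with the direct sum $1^m+\cdots+n^m$.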
In this paper we investigate a class of sums that correspond to the $k$-th power of the generating function (\ref{genf}) for $k\geq 1$. Our main result is a formula for the polynomials that allow one to calculate these sums. It turns out that these polynomials are expressed via the higher-order Bernoulli numbers.
\section{The power sums of higher order}
Let us now consider a power of the generating function (\ref{genf}): \[ \left(S(n, t)\right)^k:=\sum_{q\geq 0}S_{q}^{(k)}(n)\frac{t^q}{q!}. \] We have \begin{equation} \left(S(n, t)\right)^k=\left(\sum_{q=1}^{n}e^{qt}\right)^k=\sum_{q=0}^{k(n-1)}{k\choose q}_ne^{(k+q)t}. \label{1} \end{equation} The coefficients ${k\choose q}_n$, which obviously generalize the binomial coefficients, originate from the works of Abraham de Moivre and Leonhard Euler \cite{Moivre}, \cite{Euler} and have been extensively studied in the literature due to their applicability. From (\ref{1}), we see that it is a generating function for the sums of the form \begin{equation} S_{m}^{(k)}(n):=\sum_{q=0}^{k(n-1)}{k\choose q}_{n}\left(k+q\right)^m. \label{sums} \end{equation} It is natural to call (\ref{sums}) the sums of powers of integers of higher order. Expanding \[ \left(\frac{e^{(n+1)t}-e^t}{e^t-1}\right)^k=\sum_{q\geq 0}\hat{S}_{q}^{(k)}(n)\frac{t^q}{q!}, \] we get an infinite number of polynomials $\hat{S}_{m}^{(k)}(n)$.
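The coefficients ${k\choose q}_n$ and the sums (\ref{sums}) can be computed directly by expanding $(1+z+\cdots+z^{n-1})^k$. The Python sketch below (our function names) checks the definition against the brute-force $k$-fold sum:

```python
from itertools import product

def extended_binomial_coeffs(k, n):
    # C(k, q)_n = coefficient of z^q in (1 + z + ... + z^{n-1})^k, q = 0..k(n-1)
    coeffs = [1]
    for _ in range(k):
        new = [0] * (len(coeffs) + n - 1)
        for deg, c in enumerate(coeffs):
            for j in range(n):
                new[deg + j] += c
        coeffs = new
    return coeffs

def S_higher(m, k, n):
    # the sum (sums): S_m^{(k)}(n) = sum_{q=0}^{k(n-1)} C(k, q)_n (k + q)^m
    return sum(c * (k + q) ** m
               for q, c in enumerate(extended_binomial_coeffs(k, n)))
```

By construction, $S_m^{(k)}(n)$ equals $\sum (q_1+\cdots+q_k)^m$ over all tuples with $1\leq q_i\leq n$.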
Our goal in the paper is to prove that \begin{equation} \hat{S}_{m}^{(k)}(n)=\frac{1}{{m+k\choose k}}\sum_{q=0}^{m}(-1)^q{m+k\choose q}B_q^{(k)}S(m+k-q, k)n^{m+k-q}, \label{higher-polynomials} \end{equation} where $B_q^{(k)}$ are the higher order Bernoulli numbers defined as \begin{equation} \frac{t^k}{(e^t-1)^k}=\sum_{q\geq 0}B^{(k)}_q\frac{t^q}{q!}. \label{Bern-high} \end{equation} The Bernoulli numbers of higher order appeared in \cite{Norlund} in connection with a theory of finite differences and were then investigated by many authors from different points of view (see, for example, \cite{Carlitz}). These numbers are known to satisfy \cite{Norlund} \[ B_n^{(k+1)}=\frac{k-n}{k}B_n^{(k)}-nB_{n-1}^{(k)}. \] The number $B^{(k)}_n$ with fixed $n\geq 0$ turns out to be a polynomial in $k$. Polynomials of this kind are known as N\"orlund polynomials. One can find a number of these polynomials in \cite{Norlund}. For convenience, we have written out several N\"orlund polynomials in the Appendix. The numbers $S(n, k)$ in (\ref{higher-polynomials}) are the Stirling numbers of the second kind that satisfy the recurrence relation \begin{equation} S(n, k)=S(n-1, k-1)+kS(n-1, k) \label{rr} \end{equation} with appropriate boundary conditions \cite{Weisstein2}, \cite{Graham}.
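Formula (\ref{higher-polynomials}) can be verified numerically for small parameters. The sketch below (our function names; exact rationals) builds $B_q^{(k)}$ as the $k$-fold Cauchy product of the ordinary Bernoulli sequence, the Stirling numbers from (\ref{rr}), and compares the resulting polynomial with the brute-force sum:

```python
from fractions import Fraction
from itertools import product
from math import comb

def bernoulli_higher(M, k):
    # B_0^{(k)}..B_M^{(k)} from the k-th power of the EGF t/(e^t - 1)
    B1 = [Fraction(1)]
    for m in range(1, M + 1):
        B1.append(-sum(comb(m + 1, q) * B1[q] for q in range(m)) / (m + 1))
    Bk = [Fraction(int(q == 0)) for q in range(M + 1)]  # k = 0: the EGF is 1
    for _ in range(k):
        Bk = [sum(comb(n, j) * Bk[j] * B1[n - j] for j in range(n + 1))
              for n in range(M + 1)]
    return Bk

def stirling2(n, k):
    # Stirling numbers of the second kind via S(n,k) = S(n-1,k-1) + k S(n-1,k)
    row = [1] + [0] * k
    for _ in range(n):
        row = [0] + [row[j - 1] + j * row[j] for j in range(1, k + 1)]
    return row[k]

def S_hat_higher(m, k, n):
    # formula (higher-polynomials) for \hat S_m^{(k)}(n)
    Bk = bernoulli_higher(m, k)
    total = sum((-1) ** q * comb(m + k, q) * Bk[q]
                * stirling2(m + k - q, k) * n ** (m + k - q)
                for q in range(m + 1))
    return total / Fraction(comb(m + k, k))
```

For instance, $\hat S_2^{(2)}(2)=38$, in agreement with both the direct sum and the table in the Appendix.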
It is easy to prove that the higher order Bernoulli numbers satisfy the recurrence relation \begin{equation} \sum_{q=0}^{m}{m+k\choose q}S(m+k-q, k)B_{q}^{(k)}=\delta_{0, m}. \label{impl} \end{equation} The most general relation involving (\ref{rec-rel}), (\ref{rec-rel1}) and (\ref{impl}) as particular cases is \begin{equation} \sum_{q=0}^{m}{m+k\choose q}S(m+k-q, k)B_{q}^{(r)}=\frac{{m+k\choose k}}{{m+k-r\choose k-r}}S(m+k-r, k-r),\;\; \forall k\geq r. \label{impl1} \end{equation}
As is known, $S(m+k, k)$, for any fixed $m\geq 0$, is expressed as a polynomial $f_m(k)$ of degree $2m$, which satisfies the identity \[ f_m(k)-f_m(k-1)=kf_{m-1}(k) \] that follows from the identity (\ref{rr}). Therefore we can replace $S(m+k-q, k)$ by $f_{m-q}(k)$ in (\ref{higher-polynomials}). In the literature the polynomials $f_m(k)$ are known as the Stirling polynomials \cite{Gessel}, \cite{Jordan}. They are known to be expressed via the N\"orlund polynomials as (see, for example, \cite{Adelberg}) \[ f_m(k)={m+k\choose m}B_m^{(-k)}. \]
The following proposition also gives the relationship between the higher-order Bernoulli numbers and the Stirling numbers. \begin{proposition} One has \begin{equation} B_m^{(k)}=\sum_{q=1}^{m} \frac{s(q+k, k)}{{q+k\choose k}}S(m, q). \label{8} \end{equation} \end{proposition} In (\ref{8}), $s(n, k)$ stands for the Stirling numbers of the first kind \cite{Weisstein1}.
It is evident that in the case $k=1$, (\ref{8}) becomes \[ B_m=\sum_{q=1}^{m}(-1)^{q} \frac{q!}{q+1}S(m, q), \] while in the case $k=2$, it takes the following form: \[ B_m^{(2)}=2\sum_{q=1}^{m}(-1)^{q} \frac{(q+1)!H_{q+1}}{(q+1)(q+2)}S(m, q), \] where $H_m$ are the harmonic numbers defined by $H_m:=\sum_{q=1}^{m}1/q$.
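Proposition (\ref{8}) is easy to test numerically. The sketch below (our function names) generates the signed Stirling numbers of the first kind from $s(n,k)=s(n-1,k-1)-(n-1)\,s(n-1,k)$ and compares both sides of (\ref{8}):

```python
from fractions import Fraction
from math import comb

def stirling1(N):
    # signed Stirling numbers of the first kind s(n, k), 0 <= k <= n <= N
    s = [[0] * (N + 1) for _ in range(N + 1)]
    s[0][0] = 1
    for n in range(1, N + 1):
        for k in range(1, n + 1):
            s[n][k] = s[n - 1][k - 1] - (n - 1) * s[n - 1][k]
    return s

def stirling2(n, k):
    # Stirling numbers of the second kind via the standard recurrence
    row = [1] + [0] * k
    for _ in range(n):
        row = [0] + [row[j - 1] + j * row[j] for j in range(1, k + 1)]
    return row[k]

def bernoulli_higher(M, k):
    # B_0^{(k)}..B_M^{(k)} from the k-th power of the EGF t/(e^t - 1)
    B1 = [Fraction(1)]
    for m in range(1, M + 1):
        B1.append(-sum(comb(m + 1, q) * B1[q] for q in range(m)) / (m + 1))
    Bk = [Fraction(int(q == 0)) for q in range(M + 1)]
    for _ in range(k):
        Bk = [sum(comb(n, j) * Bk[j] * B1[n - j] for j in range(n + 1))
              for n in range(M + 1)]
    return Bk

def rhs(m, k):
    # right-hand side of (8): sum_{q=1}^m s(q+k, k)/C(q+k, k) * S(m, q)
    s = stirling1(m + k)
    return sum(Fraction(s[q + k][k], comb(q + k, k)) * stirling2(m, q)
               for q in range(1, m + 1))
```

The check also confirms the N\"orlund polynomial $B_2^{(k)}=\frac{1}{12}k(3k-1)$ from the Appendix.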
To prove (\ref{higher-polynomials}), we need the following lemma: \begin{lemma} \label{le2} By virtue of (\ref{impl1}) we have \begin{eqnarray} R_m^{(k, r)}(n)&:=&\sum_{q=0}^{m}(-1)^{q}{m+k\choose q}S(m+k-q, k)\hat{S}_q^{(r)}(n)\label{le1}\\
&=&\frac{1}{{k\choose r}}\sum_{j=0}^{m}(-1)^j{m+k\choose m+k-r-j}S(m+k-r-j, k-r)\nonumber\\
&&\times S(r+j, r)n^{r+j},\;\; \forall m\geq 0,\;\;k\geq r.
\label{lee} \end{eqnarray} \end{lemma} It should be remarked that in the case $k=r$, (\ref{lee}) becomes \begin{equation} R_m^{(k, k)}(n)=(-1)^mS(m+k, k)n^{m+k}. \label{rec-rel2} \end{equation}
\noindent \textbf{Proof of lemma \ref{le2}}. We can rewrite (\ref{le1}) as \[ R_{m}^{(k, r)}(n)=\sum_{0\leq j\leq q \leq m}a_qb_{q, j}n^{r+q-j}, \] where \[ a_{q}:=\frac{{m+k\choose q}}{{r+q\choose r}}S(m+k-q, k) \] and \[ b_{q, j}:=(-1)^{q-j}{r+q\choose j}B_j^{(r)}S(r+q-j, r). \] Let $\tilde{j}=q-j$ and \[ b_{q, \tilde{j}}=(-1)^{\tilde{j}}{r+q\choose q- \tilde{j}}B_{q- \tilde{j}}^{(r)}S(r+\tilde{j}, r). \] In what follows, for simplicity, let us write $\tilde{j}$ without the tilde. Making use of the identity \[ {r+q\choose q- j}={r+q\choose r+j}={q\choose j}\frac{{r+q\choose r}}{{r+j\choose r}}, \] we get \[ a_{q}b_{q, j}=(-1)^{j}\frac{S(r+j, r)}{{r+j\choose r}}{m+k\choose q}{q\choose j}S(m+k-q, k)B_{q- j}^{(r)} \] and therefore \begin{eqnarray} R_{m}^{(k, r)}(n)&=&\sum_{0\leq j\leq q \leq m}a_qb_{q, j}n^{r+j}\nonumber\\
&=&\sum_{0\leq j\leq m}(-1)^{j}\frac{S(r+j, r)}{{r+j\choose r}}n^{r+j}\sum_{j\leq q\leq m}{m+k\choose q}{q\choose j}S(m+k-q, k)B_{q- j}^{(r)}.\nonumber \end{eqnarray} In turn, making use of the identity \[ {m+k\choose q}{q\choose j}={m+k\choose j}{m+k-j\choose q-j}, \] we get \begin{eqnarray} R_{m}^{(k, r)}(n)&=&\sum_{0\leq j\leq m}(-1)^{j}\frac{S(r+j, r)}{{r+j\choose r}}{m+k\choose j}n^{r+j}\nonumber\\ &&\times\sum_{j\leq q\leq m}{m+k-j\choose q-j}S(m+k-q, k)B_{q- j}^{(r)}. \nonumber \end{eqnarray} Finally, by virtue of (\ref{impl1}), we get \begin{eqnarray} &&\sum_{j\leq q\leq m}{m+k-j\choose q-j}S(m+k-q, k)B_{q- j}^{(r)}\nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\; =\sum_{0\leq q\leq m-j}{m+k-j\choose q}S(m+k-j-q, k)B_{q}^{(r)}\nonumber\\ &&\;\;\;\;\;\;\;\;\;\;\; =\frac{{m+k-j\choose k}}{{m+k-j-r\choose k-r}}S(m+k-j-r, k-r)\nonumber \end{eqnarray} and hence \begin{eqnarray} R_{m}^{(k, r)}(n)&=&\sum_{0\leq j\leq m}(-1)^{j}\frac{{m+k\choose j}}{{r+j\choose r}}\frac{{m+k-j\choose k}}{{m+k-j-r\choose k-r}}S(m+k-j-r, k-r)\nonumber\\
&&\times S(r+j, r)n^{r+j}\nonumber\\
&=&\frac{1}{{k\choose r}}\sum_{0\leq j\leq m}(-1)^{j}{m+k\choose m+k-j-r}S(m+k-j-r, k-r)\nonumber\\
&&\times S(r+j, r)n^{r+j}.\nonumber \end{eqnarray} Therefore the lemma is proved. $\Box$
The recurrence relation, for example (\ref{rec-rel2}), uniquely determines an infinite set of polynomials $\{\hat{S}_m^{(k)}(n) : m\geq 0\}$. We have written out some of them in the Appendix. For example, $\hat{S}_{0}^{(k)}(n)=n^k$. On the other hand \[
S_{0}^{(k)}(n):=\sum_{q=0}^{k(n-1)}{k\choose q}_{n}=\Biggl(\sum_{q=0}^{n-1}t^q\Biggr)^{k}\Bigl|_{t=1}=n^k. \] \begin{lemma} \label{le3} The higher sums $S_m^{(k)}(n)$ satisfy the same recurrence relations as in lemma \ref{le2}, that is, \begin{eqnarray} &&\sum_{q=0}^{m}(-1)^{q}{m+k\choose q}S(m+k-q, k)S_q^{(r)}(n) \nonumber \\
&&\;\;\;=\frac{1}{{k\choose r}}\sum_{j=0}^{m}(-1)^j{m+k\choose m+k-r-j}S(m+k-r-j, k-r)S(r+j, r)n^{r+j}.
\label{id} \end{eqnarray}
\end{lemma} In the case $k=r=1$, (\ref{id}) becomes the well-known identity for the sums of powers \cite{Riordan}.
\noindent \textbf{Proof of lemma \ref{le3}}. This lemma is proved by using standard arguments. Let us replace the argument of the generating function, $t\rightarrow -t$, to get \begin{equation} \sum_{q\geq 0}(-1)^qS^{(r)}_{q}(n)\frac{t^q}{q!}=(-1)^r\left(\frac{e^{-nt}-1}{e^{t}-1}\right)^{r}. \label{id1} \end{equation} Multiplying both sides of (\ref{id1}) by $(e^t-1)^k$ and taking into account that \[ (e^t-1)^k=k!\left(\sum_{q\geq 0} S(q, k)\frac{t^q}{q!}\right), \] we get (\ref{id}). $\Box$
Now, we are in a position to prove our theorem. \begin{theorem} One has \[ S_{m}^{(k)}(n)=\hat{S}_{m}^{(k)}(n), \] where $\hat{S}_{m}^{(k)}(n)$ are the polynomials (\ref{higher-polynomials}). \end{theorem}
\noindent \textbf{Proof}. This theorem is a simple consequence of lemma \ref{le2} and lemma \ref{le3} since the sums ${S}_m^{(k)}(n)$ satisfy the same recurrence relations as the polynomials $\hat{S}_m^{(k)}(n)$. $\Box$
\section{The relationship of the sums $S_{m}^{(k)}(n)$ to other sums}
In \cite{Svinin} we considered sums of the form \begin{equation} \mathcal{S}_{m}^{(k)}(n):=\sum_{\{\lambda\}\in B_{k, kn}}\left(\lambda_1^{m}+(\lambda_2-n)^{m}+\cdots+(\lambda_k-kn+n)^{m}\right), \label{sums11} \end{equation}
where it is supposed that $m$ is odd. Here $B_{k, kn}:=\{\lambda_q : 1\leq \lambda_1\leq \cdots \leq \lambda_k\leq kn\}$. Let us remark that there are some terms of the form $r^m$ with negative $r$ in (\ref{sums11}). It is evident that in this case $r^m=-|r|^m$. By this rule, the sum (\ref{sums11}) can be rewritten as \begin{equation} \mathcal{S}_{m}^{(k)}(n)=\sum_{q=0}^{kn}c_q(k, n)q^m \label{defin} \end{equation} with some integer coefficients $c_q(k, n)$.
It was conjectured in \cite{Svinin} that in the case of odd $m$ the sums $\mathcal{S}_{m}^{(k)}(n)$ and $S_{m}^{(k)}(n)$ are related with each other by \begin{equation} \mathcal{S}_{m}^{(k)}(n)=\sum_{q=0}^{k-1}{k(n+1)\choose q} S_{m}^{(k-q)}(n). \label{relationsh} \end{equation} Let us define the sums $\mathcal{S}_{m}^{(k)}(n)$ with even $m$ by (\ref{defin}). It is evident that the conjectural relation (\ref{relationsh}) remains valid for both odd and even $m$. More exactly, actual calculations show that \[ \mathcal{S}_{m}^{(k)}(n)-\sum_{q=0}^{k-1}{k(n+1)\choose q} S_{m}^{(k-q)}(n)=c_0(k, n)\delta_{m, 0}. \] \section*{Appendix}
\subsection*{N\"orlund polynomials}
The first six of the N\"orlund polynomials are given by \[ B_0^{(k)}=1,\;\; B_1^{(k)}=-\frac{1}{2}k,\;\; B_2^{(k)}=\frac{1}{12}k\left(3k-1\right),\;\; B_3^{(k)}=-\frac{1}{8}k^2\left(k-1\right), \] \[ B_4^{(k)}=\frac{1}{240}\left(15k^3-30k^2+5k+2\right),\;\; B_5^{(k)}=-\frac{1}{96}k^2\left(k-1\right)\left(3k^2-7k-2\right). \]
\subsection*{The polynomials $\hat{S}_m^{(k)}(n)$}
The first six of these polynomials are given by \[ \hat{S}_0^{(k)}(n)=n^k,\;\; \hat{S}_1^{(k)}(n)=\frac{k}{2}n^k\left(n+1\right),\;\; \] \[ \hat{S}_2^{(k)}(n)=\frac{k}{12}n^k(n+1)\left((3k+1)n+3k-1\right), \] \[ \hat{S}_3^{(k)}(n)= \frac{k^2}{8}n^k(n+1)^2\left((k+1)n+k-1\right), \] \begin{eqnarray} \hat{S}_{4}^{(k)}(n)&=&\frac{k}{240}n^k(n+1)\left((15k^3+30k^2+5k-2)n^3+(45k^3+30k^2-5k+2)n^2\right. \nonumber\\
&&\left.+(45k^3-30k^2-5k-2)n+15k^3-30k^2+5k+2\right),\nonumber \end{eqnarray} \begin{eqnarray} \hat{S}_{5}^{(k)}(n)&=&\frac{k^2}{96}n^k(n+1)^2\left((3k^3+10k^2+5k-2)n^3+(9k^3+10k^2-5k+2)n^2\right.\nonumber\\
&&\left.+(9k^3-10k^2-5k-2)n+3k^3-10k^2+5k+2\right).\nonumber \end{eqnarray}
\end{document} |
\begin{document}
\title{Substitutions over infinite alphabet generating $(-\beta)$-integers}
\section{Introduction}
This contribution is devoted to the study of positional numeration systems with negative base introduced by Ito and Sadahiro in 2009, called $(-\beta)$-expansions. We give an admissibility criterion for a more general case of $(-\beta)$-expansions and discuss the properties of the set of $(-\beta)$-integers, denoted by $\mathbb{Z}_{-\beta}$. We give a description of distances within $\mathbb{Z}_{-\beta}$ and show that this set can be coded by an infinite word over an infinite alphabet, which is a fixed point of a non-erasing non-trivial morphism.
\section{Numeration with negative base}
In 1957, R\'enyi introduced the positional numeration system with a positive real base $\beta>1$ (see \cite{Renyi}). The $\beta$-expansion of $x\in[0,1)$ is defined as the digit string $d_\beta(x)=0\bullet x_1x_2x_3\cdots$, where \[x_i=\lfloor\beta T_\beta^{i-1}(x)\rfloor\quad\text{and}\quad T_\beta(x)=\beta x-\lfloor\beta x\rfloor\,.\] It holds that \[x=\frac{x_1}{\beta}+\frac{x_2}{\beta^2}+\frac{x_3}{\beta^3}+\cdots\,.\] Note that this definition can be naturally extended so that any real number has a unique $\beta$-expansion, which is usually denoted $d_\beta(x)=x_k x_{k-1}\cdots x_1x_0\bullet x_{-1}x_{-2}\cdots$, where $\bullet$, the fractional point, separates negative and non-negative powers of $\beta$. In analogy with the standard integer base, the set $\mathbb{Z}_\beta$ of $\beta$-integers is defined as the set of real numbers having a $\beta$-expansion of the form $d_\beta(x)=x_k x_{k-1}\cdots x_1x_0\bullet 0^\omega$.
$(-\beta)$-expansions, a numeration system built in analogy with R\'enyi $\beta$-expansions, were introduced in 2009 by Ito and Sadahiro (see \cite{ItoSadahiro}). They gave a lexicographic criterion for deciding whether some digit string is the $(-\beta)$-expansion of some $x$ and also described several properties of $(-\beta)$-expansions concerning symbolic dynamics and ergodic theory. Note that dynamical properties of $(-\beta)$-expansions were also studied by Frougny and Lai (see \cite{ChiaraFrougny}). We take the liberty of defining $(-\beta)$-expansions in a more general way, while the analogy with positive base numeration can still be easily seen.
\begin{de}\label{de:minusbeta} Let $-\beta<-1$ be a base and consider $x\in[l,l+1)$, where $l\in\mathbb{R}$ is arbitrary fixed. We define the $(-\beta)$-expansion of $x$ as the digit string $d(x)=x_1x_2x_3\cdots$, with digits $x_i$ given by \begin{equation}\label{predpis_cifry} x_i=\lfloor-\beta T^{i-1}(x)-l\rfloor\,, \end{equation} where $T(x)$ stands for the generalised $(-\beta)$-transformation \begin{equation}\label{transformace} T:[l,l+1)\rightarrow[l,l+1)\,,\quad T(x)=-\beta x-\lfloor-\beta x-l\rfloor\,. \end{equation} \end{de}
\noindent It holds that \[x=\frac{x_1}{-\beta}+\frac{x_2}{(-\beta)^2}+\frac{x_3}{(-\beta)^3}+\cdots\] and the fractional point is again used in the notation, $d(x)=0\bullet x_1x_2x_3\cdots$.
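Definition~\ref{de:minusbeta} is directly implementable. The Python sketch below (our function names; with exact rationals, the identity $x=\sum_{i=1}^N \frac{x_i}{(-\beta)^{i}}+\frac{T^N(x)}{(-\beta)^{N}}$ holds exactly) computes the first digits of the expansion:

```python
from fractions import Fraction
import math

def minus_beta_expansion(x, beta, l, ndigits):
    # digits x_i = floor(-beta*T^{i-1}(x) - l), where T(x) = -beta*x - floor(-beta*x - l)
    assert l <= x < l + 1
    digits = []
    for _ in range(ndigits):
        d = math.floor(-beta * x - l)
        digits.append(d)
        x = -beta * x - d  # this is T applied to the previous x
    return digits, x       # x now equals T^ndigits of the initial point
```

For example, with $\beta=2$ and the Ito-Sadahiro choice $l=-\frac23$, the digits of any rational $x\in[l,l+1)$ reconstruct $x$ exactly.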
The set of digits used in $(-\beta)$-expansions of numbers (in the latter referred to as the alphabet of $(-\beta)$-expansions) depends on the choice of $l$ and can be calculated directly from (\ref{predpis_cifry}) as \begin{equation}\label{abeceda} \mathcal{A}_{-\beta,l}=\big\{\lfloor-l(\beta+1)-\beta\rfloor,\ldots,\lfloor-l(\beta+1)\rfloor\big\}\,. \end{equation}
We may demand that the numeration system possesses various properties. Let us summarise the most natural ones:
\begin{itemize} \item The most common requirement is that zero is an allowed digit. We see that $0\in\mathcal{A}_{-\beta,l}$ is equivalent to $0\in[l,l+1)$ and consequently $l\in(-1,0]$. Note that this implies $d(0)=0\bullet 0^\omega$. \item We may require that $\mathcal{A}_{-\beta,l}=\{0,1,\ldots,\lfloor\beta\rfloor\}$. This is equivalent to the choice $l\in\big(-\frac{\lfloor\beta\rfloor+1}{\beta+1},-\frac{\beta}{\beta+1}\big]$. \item So far, $(-\beta)$-expansions were defined only for numbers from $[l,l+1)$. In R\'enyi numeration, the $\beta$-expansion of arbitrary $x\in\mathbb{R}^+$ (expansions of negative numbers differ only by the ``$-$'' sign) is defined as $d_\beta(x)=x_kx_{k-1}\cdots x_1x_0\bullet x_{-1}x_{-2}\cdots$, where $k\in\mathbb{N}$ satisfies $\frac{x}{\beta^k}\in[0,1)$ and $d_\beta\big(\frac{x}{\beta^k}\big)=0\bullet x_kx_{k-1}x_{k-2}\cdots$. The same procedure does not work for $(-\beta)$-expansions in general. A necessary and sufficient condition for the existence of a unique $d(x)$ for all $x\in\mathbb{R}$ is that $-\frac{1}{\beta}[l,l+1)\subset[l,l+1)$. This is equivalent to the choice $l\in\big(-\frac{\beta}{\beta+1},-\frac{1}{\beta+1}\big]$. Note that this choice is disjoint from the previous one, so one cannot have uniqueness of $(-\beta)$-expansions and non-negative digits bounded by $\beta$ at the same time. \end{itemize}
Let us stress that in the following we will need $0$ to be a valid digit. Therefore, we shall always assume $l\in(-1,0]$. Note that we may easily derive that the digits in the alphabet $\mathcal{A}_{-\beta,l}$ are then bounded by $\lceil\beta\rceil$ in modulus.
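The alphabet formula (\ref{abeceda}) and the claims above are easy to check on examples (Python sketch, our function names; exact rationals avoid floating-point rounding in the floors):

```python
from fractions import Fraction
import math

def alphabet(beta, l):
    # A_{-beta, l} = {floor(-l(beta+1) - beta), ..., floor(-l(beta+1))}
    hi = math.floor(-l * (beta + 1))
    lo = math.floor(-l * (beta + 1) - beta)
    return list(range(lo, hi + 1))
```

For $\beta=\frac52$, the Ito-Sadahiro choice $l=-\frac{\beta}{\beta+1}$ gives $\{0,1,2\}=\{0,\ldots,\lfloor\beta\rfloor\}$, and $l=-\frac12$ gives the symmetric alphabet $\{-1,0,1\}$.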
\section{Admissibility}
In R\'enyi numeration there is a natural correspondence between ordering on real numbers and lexicographic ordering on their $\beta$-expansions. In $(-\beta)$-expansions, standard lexicographic ordering is not suitable anymore, hence a different ordering on digit strings is needed.
The so-called alternate order was used in the admissibility condition by Ito and Sadahiro and it will work also in the general case. Let us recall the definition. For the strings \[u,v\in(\mathcal{A}_{-\beta,l})^\mathbb{N}\,,\quad u=u_1u_2u_3\cdots\quad\hbox{and}\quad v=v_1v_2v_3\cdots\] we say that $u\prec_{alt} v$ ($u$ is less than $v$ in the alternate order) if $u_m(-1)^m<v_m(-1)^m$, where $m=\min\{k\in\mathbb{N}\ \mid\ u_k\neq v_k\}$. Note that standard ordering between reals in $[l,l+1)$ corresponds to the alternate order on their respective $(-\beta)$-expansions.
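The alternate order has a one-line implementation; the sketch below (our name `alt_less`) compares two digit strings position by position with the sign $(-1)^m$:

```python
def alt_less(u, v):
    # u <_alt v: compare at the first index m where u and v differ, weighted by (-1)^m
    for m, (a, b) in enumerate(zip(u, v), start=1):
        if a != b:
            return (-1) ** m * a < (-1) ** m * b
    return False  # equal prefixes are not strictly smaller
```

For $\beta=2$, the first test pair matches the ordering of the corresponding values: $1\cdot(-2)^{-1}=-\tfrac12 < \tfrac14 = 1\cdot(-2)^{-2}$.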
\begin{de} An infinite string $x_1x_2x_3\cdots$ of integers is called $(-\beta)$-admissible (or just admissible), if there exists an $x\in[l,l+1)$ such that $x_1x_2x_3\cdots$ is its $(-\beta)$-expansion, i.e. $x_1x_2x_3\cdots=d(x)$. \end{de}
We give the criterion for $(-\beta)$-admissibility (proven in \cite{DMP}) in a form similar to both the Parry lexicographic condition (see \cite{Parry}) and the Ito-Sadahiro admissibility criterion (see \cite{ItoSadahiro}).
\begin{thm}\emph{(\cite{DMP})}\label{thm:hlavni} An infinite string $x_1x_2x_3\cdots$ of integers is $(-\beta)$-admissible, if and only if \begin{equation}\label{eq:admis} l_1l_2l_3\cdots\preceq_{alt} x_ix_{i+1}x_{i+2}\cdots\prec_{alt} r_1r_2r_3\cdots\,, \qquad\hbox{for all $\ i\geq 1$}, \end{equation} where $l_1l_2l_3\cdots=d(l)$ and $r_1r_2r_3\cdots=d^*(l+1)=\lim_{\epsilon\to 0+}d(l+1-\epsilon)$. \end{thm}
\begin{pozn}\label{pozn:IS} Ito and Sadahiro have described the admissibility condition for their numeration system considered with $l=-\frac{\beta}{\beta+1}$. This choice implies, for any $\beta$, the alphabet of the form $\mathcal{A}_{-\beta,l}=\{0,1,\ldots,\lfloor\beta\rfloor\}$. They have shown that in this case the reference strings used in the condition in Theorem~\ref{thm:hlavni} (i.e. $d(l)=l_1l_2l_3\cdots$ and $d^*(l+1)=r_1r_2r_3\cdots$) are related in the following way: \[r_1r_2r_3\cdots = 0l_1l_2l_3\cdots\] if $d(l)$ is not purely periodic with odd period length, and \[r_1r_2r_3\cdots = \big(0l_1l_2\cdots l_{q-1}(l_{q}-1)\big)^\omega\,,\] if $d(l) = \big(l_1l_2\cdots l_{q}\big)^\omega$, where $q$ is odd. \end{pozn}
\begin{pozn}\label{pozn:bal} Besides the Ito-Sadahiro case and the general one, we may consider another interesting example: the choice $l=-\frac{1}{2}$, $\beta\notin 2\mathbb{Z}+1$. This leads to a numeration defined on the ``almost symmetric'' interval $[-\frac{1}{2},\frac{1}{2})$ with the symmetric alphabet \[\mathcal{A}_{-\beta,-\frac{1}{2}}= \Big\{\overline{\Big\lfloor\frac{\beta+1}{2}\Big\rfloor}, \ldots, \overline{1}, 0, 1, \ldots, \Big\lfloor\frac{\beta+1}{2}\Big\rfloor\Big\}\,.\]
Note that we use the notation $(-a)=\overline{a}$ as a shorthand for negative digits. If we denote the reference strings as usual, i.e. $d\big(-\frac{1}{2}\big)=l_1l_2l_3\cdots$ and $d^*\big(\frac{1}{2}\big)=r_1r_2r_3\cdots$, the following relation can be shown: \[r_1r_2r_3\cdots = \overline{l_1l_2l_3\cdots}\] if $d(l)$ is not purely periodic with odd period length, and \[r_1r_2r_3\cdots = \big(\overline{l_1l_2\cdots l_{q-1}(l_{q}-1)}l_1l_2\cdots l_{q-1}(l_{q}-1)\big)^\omega\,,\] if $d(l) = \big(l_1l_2\cdots l_{q}\big)^\omega$, where $q$ is odd. \end{pozn}
\section{$(-\beta)$-integers}
We have already discussed basic properties of $(-\beta)$-expansions and the question of admissibility of digit strings. In the following, $(-\beta)$-admissibility will be used to define the set of $(-\beta)$-integers.
Let us define a ``value function'' $\gamma$. For a finite digit string $x_{k-1}\cdots x_1 x_0$, we set $\gamma(x_{k-1}\cdots x_1 x_0)=\sum_{i=0}^{k-1}x_i(-\beta)^i$.
\begin{de}\label{de:integers}
We call $x\in\mathbb{R}$ a $(-\beta)$-integer, if there exists a $(-\beta)$-admissible digit string $x_kx_{k-1}\cdots x_1x_00^\omega$ such that $d(x)=x_kx_{k-1}\cdots x_1x_0\bullet 0^\omega$. The set of $(-\beta)$-integers is then defined as \[\mathbb{Z}_{-\beta}=\{x\in\mathbb{R}\ |\ x=\gamma(a_{k-1}a_{k-2}\cdots a_1 a_0), \text{ $a_{k-1}a_{k-2}\cdots a_1 a_0 0^\omega$ is $(-\beta)$-admissible}\}\,,\] or equivalently \[\mathbb{Z}_{-\beta}=\bigcup_{i\geq 0}(-\beta)^i T^{-i}(0)\,.\] \end{de}
Note that $(-\beta)$-expansions of real numbers are not necessarily unique. As was said before, uniqueness holds if and only if $l\in\big(-\frac{\beta}{\beta+1},-\frac{1}{\beta+1}\big]$. Let us demonstrate this ambiguity on the following example.
\begin{ex}\label{ex:prusvih} Let $\beta$ be the larger root of the polynomial $x^2-2x-1$, i.e. $\beta=1+\sqrt{2}$, and let $[l,l+1)=\big[-\frac{\beta^9}{\beta^9+1},\frac{1}{\beta^9+1}\big)$. Note that $[l,l+1)$ is not invariant under division by $(-\beta)$.
If we want to find the $(-\beta)$-expansion of a number $x\notin[l,l+1)$, we have to find $k\in\mathbb{N}$ such that $\frac{x}{(-\beta)^k}\in[l,l+1)$, compute $d\big(\frac{x}{(-\beta)^k}\big)$ by definition and then shift the fractional point by $k$ positions to the right. The problem is that, in general, different choices of the exponent $k$ may give different $(-\beta)$-admissible digit strings which all represent the same number $x$.
Let us find possible $(-\beta)$-expansions of $1$. It can be shown that $\frac{1}{(-\beta)^k}\in[l,l+1)$ if and only if $k\in\mathbb{N}\setminus\{0,2,4,6,8\}$ and there are $5$ $(-\beta)$-admissible digit strings representing $1$, computed from $(-\beta)$-expansions of $\frac{1}{(-\beta)^k}$ for $k=1,3,5,7,9$ respectively: \[1\bullet 0^\omega\quad=\quad 120\bullet 0^\omega\quad=\quad 13210\bullet 0^\omega\quad=\quad 1322210\bullet 0^\omega\quad=\quad 132222210\bullet 0^\omega\,.\] \end{ex}
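The statements of the example can be checked numerically. The sketch below verifies, in floating point, which powers $(-\beta)^{-k}$ lie in $[l,l+1)$ and that all five listed strings have value $1$ (strings are written with the most significant digit first).

```python
beta = 1 + 2 ** 0.5                      # larger root of x^2 - 2x - 1
l = -beta ** 9 / (beta ** 9 + 1)

def gamma(digits, beta):
    """Value of a finite digit string, most significant digit first."""
    return sum(d * (-beta) ** i for i, d in enumerate(reversed(digits)))

# k with 1/(-beta)^k in [l, l+1): every k in range except 0, 2, 4, 6, 8
ks = [k for k in range(12) if l <= 1 / (-beta) ** k < l + 1]

# the five admissible strings representing 1
reps = [[1], [1, 2, 0], [1, 3, 2, 1, 0],
        [1, 3, 2, 2, 2, 1, 0], [1, 3, 2, 2, 2, 2, 2, 1, 0]]
values = [gamma(r, beta) for r in reps]
```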
Let us mention some straightforward observations on the properties of $\mathbb{Z}_{-\beta}$:
\begin{itemize} \item $\mathbb{Z}_{-\beta}$ is nonempty if and only if $0\in\mathcal{A}_{-\beta,l}$, i.e. if and only if $l\in(-1,0]$. \item The definition implies $-\beta\mathbb{Z}_{-\beta}\subset\mathbb{Z}_{-\beta}$. \item A phenomenon unseen in R\'enyi numeration arises: there are cases when the set of $(-\beta)$-integers is trivial, i.e. when $\mathbb{Z}_{-\beta}=\{0\}$. This happens if and only if both numbers $\frac{1}{\beta}$ and $-\frac{1}{\beta}$ lie outside the interval $[l,l+1)$. This can be reformulated as \[\mathbb{Z}_{-\beta}=\{0\}\quad\Leftrightarrow\quad \beta<-\frac{1}{l}\ \text{ and }\ \beta\leq\frac{1}{l+1}\,,\] and it can be seen that the strictest limitation for $\beta$ arises when $l=-\frac{1}{2}$. This implies for any choice of $l\in\mathbb{R}$:\[\mathbb{Z}_{-\beta}\neq\emptyset\ \text{ and }\ \beta\geq 2\quad\Rightarrow\quad\mathbb{Z}_{-\beta}\supsetneq\{0\}\,.\] \item It holds that $\mathbb{Z}_{-\beta}=\mathbb{Z}$ if and only if $\beta\in\mathbb{N}$. \end{itemize}
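The triviality criterion in the third item reduces to two interval membership tests; a minimal sketch:

```python
def Z_is_trivial(beta, l):
    """True iff Z_{-beta} = {0}, i.e. neither 1/beta nor -1/beta
    lies in [l, l+1).  (Assumes 0 is in the alphabet, i.e. l in (-1, 0].)"""
    in_interval = lambda x: l <= x < l + 1
    return not in_interval(1 / beta) and not in_interval(-1 / beta)

# With l = -1/2 the limitation is strictest: beta >= 2 already forces a
# nontrivial Z_{-beta}, while a smaller beta can give Z_{-beta} = {0}.
assert Z_is_trivial(1.5, -0.5)
assert not Z_is_trivial(2.0, -0.5)
```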
\begin{pozn} As was shown in Example~\ref{ex:prusvih}, in a completely general case of $(-\beta)$-expansions, there is a problem with ambiguity. Because of this, in the following we shall limit ourselves to the choice $l\in\big[-\frac{\beta}{\beta+1},-\frac{1}{\beta+1}\big]$. Note that we allow Ito-Sadahiro case $l=-\frac{\beta}{\beta+1}$, which also contains ambiguities, but only in countably many cases, which can be avoided by introducing a notion of strong $(-\beta)$-admissibility. \end{pozn}
\begin{de} Let $x_1x_2x_3\cdots\in(\mathcal{A}_{-\beta,l})^\mathbb{N}$. We say that \[x_1x_2x_3\cdots\text{ is strongly $(-\beta)$-admissible}\quad\text{ if }\quad 0x_1x_2x_3\cdots\text{ is $(-\beta)$-admissible}\,.\] \end{de}
\begin{pozn} Note that if $l\in\big(-\frac{\beta}{\beta+1},-\frac{1}{\beta+1}\big]$, the notions of strong admissibility and admissibility coincide. In the case $l=-\frac{\beta}{\beta+1}$, the only numbers with non-unique expansions are those of the form $(-\beta)^kl$, which have exactly two possible expansions using digit strings $l_1l_2l_3\cdots$ and $1l_1l_2l_3\cdots$. While both are $(-\beta)$-admissible, only the latter is also strongly $(-\beta)$-admissible. \end{pozn}
In order to describe distances between adjacent $(-\beta)$-integers, we will study ordering of finite digit strings in the alternate order. Denote by $\mathcal{S}(k)$ the set of infinite $(-\beta)$-admissible digit strings such that erasing a prefix of length $k$ yields $0^\omega$, i.e. for $k\geq 0$, we have \[ \mathcal{S}(k)=\{a_{k-1}a_{k-2}\cdots a_00^\omega \mid a_{k-1}a_{k-2}\cdots a_00^\omega \text{ is $(-\beta)$-admissible}\}\,, \] in particular $\mathcal{S}(0) = \{0^\omega\}$. For a fixed $k$, the set $\mathcal{S}(k)$ is finite. Denote by $\mathrm{Max}(k)$ the string $a_{k-1}a_{k-2}\cdots a_00^\omega$ which is maximal in $\mathcal{S}(k)$ with respect to the alternate order and by $\max(k)$ its prefix of length $k$, i.e. $\mathrm{Max}(k) = \max(k)0^\omega$. Similarly, we define $\mathrm{Min}(k)$ and $\min(k)$. Thus, \[ \mathrm{Min}(k) \preceq_{alt} r \preceq_{alt} \mathrm{Max}(k)\,, \qquad\text{ for all digit strings $r \in \mathcal{S}(k)$.} \]
With this notation we can give a theorem describing distances in $\mathbb{Z}_{-\beta}$, valid for $l\in\big[-\frac{\beta}{\beta+1},-\frac{1}{\beta+1}\big]$. Note that for the case $l=-\frac{\beta}{\beta+1}$ it was proven in \cite{ADMP}.
\begin{thm}\label{prop:candidates_for_distances} Let $x < y$ be two consecutive $(-\beta)$-integers. Then there exist a finite string $w$ over the alphabet $\mathcal{A}_{-\beta,l}$, a non-negative integer $k\in\{0,1,2,\dots\}$ and a positive digit $d \in \mathcal{A}_{-\beta,l}\setminus \{0\}$ such that $w(d-1)\mathrm{Max}(k)$ and $wd\mathrm{Min}(k)$ are strongly $(-\beta)$-admissible strings and $$ \begin{array}{lcll} x = \gamma(w(d-1)\max(k)) &<& y =\gamma(wd\min(k))\quad &\hbox{for $k$ even},\\ x = \gamma(wd\min(k)) &<& y = \gamma(w(d-1)\max(k)) \quad &\hbox{for $k$ odd}. \end{array} $$ In particular, the distance $y-x$ between these $(-\beta)$-integers depends only on $k$ and equals \begin{equation}\label{Delty}
\Delta_k:=\Big|(-\beta)^k + \gamma\big(\mathrm{min}(k)\big) -
\gamma\big(\mathrm{max}(k)\big)\Big|\,.
\end{equation} \end{thm}
\section{Coding $\mathbb{Z}_{-\beta}$ by an infinite word}
Note that in order to get an explicit formula for the distances in Theorem~\ref{prop:candidates_for_distances}, knowledge of the strings $\min(k)$ and $\max(k)$ is necessary. These depend on both reference strings $d(l)$ and $d^*(l+1)$. Concerning the form of $\min(k)$ and $\max(k)$, we provide the following proposition.
\begin{prop}\label{prop_tvary_minmax} Let $\beta>1$. Denote $d(l)=l_1l_2l_3\cdots$, $d^*(l+1)=r_1r_2r_3\cdots$. \begin{itemize} \item $\min(0)=\max(0)=\varepsilon$, \item for $k\geq 1$ either $\min(k)=l_1l_2\cdots l_k$ or there exists $m(k)\in\{0,\cdots\!,k\!-\!1\}$ such that \[\min(k)=\left\{\begin{array}{ll} \!l_1l_2\cdots(l_{k-m(k)}\!+\!1)\min(m(k)) & \text{if }k\!-\!m(k)\text{ even}\\ &\\ \!l_1l_2\cdots(l_{k-m(k)}\!-\!1)\max(m(k)) & \text{if }k\!-\!m(k)\text{ odd} \end{array}\right.\] \item for $k\geq 1$ either $\max(k)=r_1r_2\cdots r_k$ or there exists $m'(k)\in\{0,\cdots\!,k\!-\!1\}$ such that \[\max(k)=\left\{\begin{array}{ll} \!r_1r_2\cdots(r_{k-m'(k)}\!-\!1)\max(m'(k)) & \text{if }k\!-\!m'(k)\text{ even}\\ &\\ \!r_1r_2\cdots(r_{k-m'(k)}\!+\!1)\min(m'(k)) & \text{if }k\!-\!m'(k)\text{ odd} \end{array}\right.\] \end{itemize} \end{prop}
Computing $\min(k)$ and $\max(k)$ for a general choice of $l$ may require an involved discussion; in special cases, however, an important relation between $d(l)$ and $d^*(l+1)$ arises and eases the computation. Examples were given in Remarks~\ref{pozn:IS} and \ref{pozn:bal}.
Let us now describe how we can code the set of $(-\beta)$-integers by an infinite word over the infinite alphabet $\mathbb{N}$.
Let $(z_n)_{n\in\mathbb{Z}}$ be a strictly increasing sequence satisfying \[z_0=0\quad\text{and}\quad\mathbb{Z}_{-\beta}=\{z_n\ |\ n\in\mathbb{Z}\}\,.\] We define a bidirectional infinite word ${\bf v}_{-\beta}\in\mathbb{N}^\mathbb{Z}$ over the infinite alphabet $\mathbb{N}$, which codes the set of $(-\beta)$-integers. According to Theorem~\ref{prop:candidates_for_distances}, for any $n\in\mathbb{Z}$ there exist a unique $k\in\mathbb{N}$, a word $w$ with prefix $0$ and a letter $d$ such that \[z_{n+1}-z_n=\big|\gamma(w(d-1)\max(k))-\gamma(wd\min(k))\big|\,.\] We define the word ${\bf v}_{-\beta}=(v_i)_{i\in\mathbb{Z}}$ by $v_n=k$.
\begin{thm} Let ${\bf v}_{-\beta}$ be the word associated with $(-\beta)$-integers. There exists an antimorphism $\Phi:\mathbb{N}^*\rightarrow\mathbb{N}^*$ such that $\Psi=\Phi^2$ is a non-erasing non-identical morphism and $\Psi({\bf v}_{-\beta})={\bf v}_{-\beta}$. $\Phi$ is always of the form \[\Phi(2l)=S_{2l}(2l+1)\widetilde{R_{2l}}\quad\text{and}\quad\Phi(2l+1)=R_{2l+1}(2l+2)\widetilde{S_{2l+1}}\,,\] where $\widetilde{u}$ denotes the reversal of the word $u$ and words $R_j$, $S_j$ depend only on $j$ and on $\min(k),\max(k)$ with $k\in\{j,j+1\}$. \end{thm}
The proof is based on the self-similarity of $\mathbb{Z}_{-\beta}$, i.e. $-\beta\mathbb{Z}_{-\beta}\subset\mathbb{Z}_{-\beta}$, and on the following idea. Let $x=\gamma(w(d-1)\max(k)) < y=\gamma(wd\min(k))$ be two neighbours in $\mathbb{Z}_{-\beta}$ with gap $\Delta_k$, and suppose for simplicity that $k$ is even. If we multiply both $x$ and $y$ by $(-\beta)$, we get a longer gap with possibly more $(-\beta)$-integers in between. It can be shown that between $-\beta y$ and $-\beta x$ there is always a gap $\Delta_{k+1}$. Hence the description is of the form $\Phi(k)=S_{k}(k+1)\widetilde{R_{k}}$, where the word $S_k$ codes the distances between $(-\beta)$-integers in $[\gamma(wd\min(k)0), \gamma(wd\min(k+1))]$ and, similarly, $R_k$ encodes distances within the interval $[\gamma(w(d-1)\max(k)0),\gamma(w(d-1)\max(k+1))]$.
As it turns out, in some cases (mostly when the reference strings $l_1l_2l_3\cdots$ and $r_1r_2r_3\cdots$ are eventually periodic of a particular form) we can find a letter-to-letter projection $\Pi:\mathbb{N}\rightarrow\mathcal{B}$ onto a finite alphabet $\mathcal{B}\subset\mathbb{N}$, such that ${\bf u}_{-\beta}=\Pi{\bf v}_{-\beta}$ also encodes $\mathbb{Z}_{-\beta}$ and is a fixed point of an antimorphism $\varphi=\Pi\circ\Phi$ over the finite alphabet $\mathcal{B}$. Clearly, the square of $\varphi$ is then a non-erasing morphism over $\mathcal{B}$ which fixes ${\bf u}_{-\beta}$.
Let us mention that $(-\beta)$-integers in the Ito-Sadahiro case $l=-\frac{\beta}{\beta+1}$ are also the subject of \cite{Wolfgang}. For $\beta$ with eventually periodic $d(l)$, Steiner finds a coding of $\mathbb{Z}_{-\beta}$ by a finite alphabet and shows, using only the properties of the $(-\beta)$-transformation, that the word is a fixed point of a non-trivial morphism. Our approach is of a combinatorial nature, follows a similar idea as in \cite{ADMP} and shows existence of an antimorphism for any base $\beta$.
To illustrate the results, let us conclude this contribution with an example.
\begin{ex} Let $\beta$ be the real root of $x^3-3x^2-4x-2$ ($\beta$ Pisot, $\beta\approx 4.1$) and $l=-\frac{1}{2}$. The admissibility condition gives us, for any admissible digit string $(x_i)_{i\geq 0}$: \[201^\omega\preceq_{alt}x_ix_{i+1}x_{i+2}\cdots\prec_{alt}\overline{2}0\overline{1}^\omega\,\quad\text{for all } i\geq 0\,.\]
We obtain \[\min(0)=\varepsilon, \quad\min(1)=2, \quad\min(2)=20\] and \[\min(2k+1)=20(11)^{k-1}0,\quad \min(2k+2)=20(11)^k \quad\text{ for $k\geq 1$}\,.\] Clearly it holds that $\max(i)=\overline{\min(i)}$ for all $i\in\mathbb{N}$.
Theorem~\ref{prop:candidates_for_distances} gives us the following distances within $\mathbb{Z}_{-\beta}$: \[\Delta_0=1, \quad\Delta_1=-1+\frac{4}{\beta}+\frac{2}{\beta^2}, \quad\text{ and }\quad\Delta_{2k}=1-\frac{2}{\beta}-\frac{2}{\beta^2},\quad\Delta_{2k+1}=1+\frac{2}{\beta}+\frac{2}{\beta^2}\quad\text{ for $k\geq 1$}\,.\]
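These distances can be cross-checked against formula (\ref{Delty}). The sketch below computes the real root of $x^3-3x^2-4x-2$ by bisection and uses $\gamma(\max(k))=-\gamma(\min(k))$, which holds here since $\max(k)=\overline{\min(k)}$.

```python
def real_root():
    """Real root of x^3 - 3x^2 - 4x - 2 by bisection (it lies in (4, 5))."""
    f = lambda x: x ** 3 - 3 * x ** 2 - 4 * x - 2
    lo, hi = 4.0, 5.0
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def gamma(digits, beta):
    """Value of a finite digit string, most significant digit first."""
    return sum(d * (-beta) ** i for i, d in enumerate(reversed(digits)))

beta = real_root()

def delta(k, min_k):
    # gamma(max(k)) = -gamma(min(k)) since max(k) is min(k) with digits negated
    return abs((-beta) ** k + 2 * gamma(min_k, beta))
```

For instance, `delta(1, [2])` reproduces $\Delta_1=-1+\frac{4}{\beta}+\frac{2}{\beta^2}$ and `delta(2, [2, 0])` reproduces $\Delta_2=1-\frac{2}{\beta}-\frac{2}{\beta^2}$.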
Finally, the antimorphism $\Phi:\mathbb{N}^*\rightarrow\mathbb{N}^*$ is given by \begin{align*} 0 \rightarrow&\ 0^210^2\,,\\ 1 \rightarrow&\ 2\,,\\ 2 \rightarrow&\ 3\,,\\ \intertext{and for $k\geq 1$} 2k+1 \rightarrow&\ 0^210(2k+2)010^2\,,\\ 2k+2 \rightarrow&\ 2k+3\,. \end{align*} It can be easily seen that a projection from $\mathbb{N}$ to a finite alphabet exists and a final antimorphism $\varphi:\{0,1,2,3\}^*\rightarrow\{0,1,2,3\}^*$ is of the form \begin{align*} 0 \rightarrow&\ 0^210^2,\\ 1 \rightarrow&\ 2,\\ 2 \rightarrow&\ 3,\\ 3 \rightarrow&\ 0^2102010^2. \end{align*} \end{ex}
\providecommand{\urlalt}[2]{\href{#1}{#2}} \providecommand{\doi}[1]{doi:\urlalt{http://dx.doi.org/#1}{#1}}
\end{document} |
\begin{document}
\title{Bell Test Over Extremely High-Loss Channels:\\ Towards Distributing Entangled Photon Pairs Between Earth and Moon }
\author{Yuan Cao} \thanks{These authors contributed equally to this work} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Yu-Huai Li} \thanks{These authors contributed equally to this work} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Wen-Jie Zou} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Zheng-Ping Li} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Qi Shen} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Sheng-Kai Liao} 
\affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Ji-Gang Ren} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Juan Yin} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Yu-Ao Chen} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Cheng-Zhi Peng} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \author{Jian-Wei Pan} \affiliation{Shanghai Branch, National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of 
Science and Technology of China, Shanghai 201315, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\date{\today}
\begin{abstract} Quantum entanglement was termed ``spooky action at a distance'' in the well-known paper by Einstein, Podolsky, and Rosen. Entanglement is expected to be distributed over longer and longer distances in both practical applications and fundamental research into the principles of nature. Here, we present a proposal for distributing entangled photon pairs between the Earth and Moon using a Lagrangian point at a distance of 1.28 light seconds. One of the most fascinating features of this long-distance distribution of entanglement is that we can perform a Bell test in which humans supply the random measurement settings and record the results while still maintaining space-like intervals. To realize a proof-of-principle experiment, we develop an entangled photon source with a 1 GHz generation rate, about two orders of magnitude higher than previous results. Violation of Bell's inequality was observed under a total simulated loss of 103 dB with measurement settings chosen by two experimenters. This demonstrates the feasibility of such a long-distance Bell test over extremely high-loss channels, paving the way for the ultimate test of the foundations of quantum mechanics. \end{abstract} \maketitle
Ever since it was first established, quantum mechanics has been the subject of intense debate about its intrinsic probabilistic and nonlocal nature, triggered by the Einstein-Podolsky-Rosen (EPR) paradox \cite{EPR_35}. In particular, Bell’s inequality \cite{Bell_Ineq_64} manifests the contradiction between quantum mechanics and local hidden variable theories. Increasing the entanglement distribution distance is of fundamental interest in studying its behavior. Generally speaking, testing Bell's inequality while closing the known loopholes requires the entangled photon pairs to be distributed to distant parties. For example, if the distance could be extended far enough, it would be possible to use human free choice to address the ``freedom-of-choice'' loophole \cite{Bell:speakable:2004,Hall:measurementindependence:prl2010, RevModPhys.86.419,PhysRevA.93.032115,0264-9381-29-22-224011, PhysRevLett.106.100406}, or even allow the outcome to be observed by a conscious human observer to address the ``collapse locality loophole'' \cite{PhysRevA.72.012107,Manasseh2013, PhysRevLett.103.113601} in a Bell test. This would also make it possible to test the effect of gravity on entanglement decorrelation \cite{0264-9381-29-22-224011}. In terms of practical applications, extending entanglement to large scales would provide an essential physical resource for quantum information protocols such as quantum key distribution \cite{Bennett:BB84:1984,Ekert:QKD:1991,Bennett:BBM92:1992}, quantum teleportation \cite{BBCJPW_93, BPMEWZ_97}, and quantum networks \cite{Kimble2008}. Distributing the entangled photon pairs as widely as possible is therefore an extremely important goal for a variety of reasons.
After more than 20 years of efforts, the maximum possible distance has been increased from a few meters to $\sim$ 100 km in optical fibers \cite{Honjo:08,Inagaki:13} or terrestrial free space \cite{yin:100kmtelenature:2012, Fedrizzi2009}, and has reached a limit on Earth due to photon loss in the channel. The most promising approach to dealing with this limitation is to use satellite- and space-based technologies. Excitingly, the first quantum science experiment satellite, ``Micius'', was successfully launched on 16 August, 2016 from Jiuquan, China. Very recently, satellite-based quantum entanglement distribution over more than 1200 km has been demonstrated using ``Micius'' \cite{Yin1140}, taking the first step toward bringing Bell tests to space.
In this Letter, we design an experimental scheme for carrying out a Bell test between Earth and Moon. To overcome the extremely high losses over the free-space optical links, we develop a new generation of ultra-high brightness entangled photon source and a high-time-resolution data acquisition system. It is worth noting that, at such a separation distance, the locality and freedom-of-choice loopholes can be completely closed, even using human free choice and human recorders instead of physical devices. We also implement a Bell test utilizing two human observers' choices with a high channel loss of 103 dB, demonstrating the scheme's feasibility.
The Lagrangian points of the Earth-Moon system are ideal places to put an entangled photon source, as shown in Fig.~\ref{Fig:Lag}. We chose the L4 or L5 points for the entangled photon source because they are stable and have the most appropriate space arrangement among the five Lagrangian points. The three points L4 (or L5), Earth, and Moon form an approximately equilateral triangle with a side length of $3.8\times10^{5}$ km. In this scheme, we define $A (B)$ and $a (b)$ as the events of Alice's (Bob's) measurement and setting choice, respectively. The event where the entangled photon pairs are emitted by the entanglement source is defined as $S$. \begin{figure}
\caption{(color online). Scheme for conducting a Bell test involving human free will. There are five Lagrangian points in the Earth-Moon system, denoted by L1, L2, L3, L4, and L5. Since only L4 and L5 are stable and have the most appropriate space arrangement of the five points, they were chosen as the position of the entanglement source satellite. This satellite contains two telescopes, one aimed at Moon and the other at Earth. Two large telescopes must also be built, one on or near Moon and one on Earth, to create the entanglement distribution channels. }
\label{Fig:Lag}
\end{figure}
This experimental scheme allows the locality loophole to be naturally closed, as shown in Fig.~\ref{Fig:spactime}(a). Because the distances from the entanglement source to Moon and Earth are approximately equal, $A$ and $B$ can be considered to be simultaneous. Clearly, $A$ and $B$ satisfy the space-like criterion. In addition, we define $a^\prime (b^\prime)$ as the event where Alice's (Bob's) measurement setting is prepared and $\Delta T_a (\Delta T_b)$ as the delay between $a (b)$ and $a^\prime (b^\prime)$. For the interval between $B$ and $a$ ($A$ and $b$) to be space-like, the time from $a (b)$ to $A (B)$ must be less than 1.28 s. That is, once the measurement setting is ready, only photons that arrive within $1.28 ~s - \Delta T_a (\Delta T_b)$ are considered valid. Second, Fig.~\ref{Fig:spactime}(b) shows the requirements for satisfying the freedom-of-choice assumption. For the interval between $S$ and $a$ ($b$) to be space-like, the time from $a (b)$ to $A (B)$ must be less than 2.56 s, so the time from $a^\prime (b^\prime)$ to $A (B)$ needs to be less than $2.56 ~s - \Delta T_a (\Delta T_b)$. In general, human reaction times are between 0.2 and 0.4 s \cite{BORGHI:1965vt}. Including a system delay of 50 ms, defined as the time between the human observer pressing a key and $a^\prime (b^\prime)$, an upper bound of 0.5 s is reasonable for $\Delta T_a (\Delta T_b)$. An exciting deduction from these two constraints is that it is feasible to perform a Bell test between Earth and Moon while avoiding the locality and freedom-of-choice loopholes, even when using humans to make the random selections and record the results, as long as we only consider photons that arrive within 0.78 s of the measurement setting being selected. \begin{figure}
\caption{(color online). Space-time diagram for the scheme. The events $A (B)$, $a (b)$, and $a^\prime(b^\prime)$ represent Alice's (Bob's) measurement, setting choice and the measurement setting being prepared, respectively. The event $S$ represents the generation of the entangled photon pairs. (a). Closing the locality loophole. Due to the symmetry of $A (a)$ and $B (b)$, we only analyse $A$ and $b$ here without loss of generality. The measurement events $A$ and $B$ happen nearly simultaneously. To ensure that the interval between $A$ and $b$ is space-like, the delay between $b$ and $B$ should not exceed 1.28 s, the flight time required for light to travel from Moon to Earth. Therefore, accounting for a delay of $\Delta T_b = 0.5 ~s$ due to human reaction time and system delay, photons that arrive within 0.78 s of the measurement setting being prepared are valid. (b). Closing the measurement independence loophole. $A$ ($B$) lies on the future light cone of $S$. For the interval between $S$ and $a (b)$ to be space-like, the delay between $a (b)$ and $A(B)$ must not exceed 2.56 s. Thus, photons arriving within 2.06 s of the measurement setting being prepared are valid. }
\label{Fig:spactime}
\end{figure}
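The quoted windows follow from light-travel arithmetic alone; a minimal sketch, assuming the paper's 0.5 s bound on $\Delta T_a$ ($\Delta T_b$):

```python
c = 299792.458        # speed of light, km/s
arm = 3.8e5           # km, side of the approximately equilateral L4/L5 triangle
t_light = arm / c     # one-way light time, ~1.27 s (rounded to 1.28 s in the text)
dT = 0.5              # assumed bound on human reaction plus system delay, s

# Locality: setting choice to measurement must stay within one light-travel time.
locality_window = t_light - dT        # ~0.78 s of valid photon arrivals
# Freedom of choice: the interval to the source emission allows twice that.
freedom_window = 2 * t_light - dT     # ~2.06 s
```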
With current technologies, utilizing a 2.4-meter telescope (similar to the Hubble Space Telescope) in space and a 30-meter receiving telescope on Earth\cite{Sanders2013} (see Table I), the total loss in distributing the entanglement between Earth and Moon would be at least 100 dB, three orders of magnitude higher than in previous works \cite{yin:100kmtelenature:2012, cao:biasedbasis:2013, Yin2013Experimental}. This will be a big challenge, even for ground-based demonstrations. To overcome this extremely high entanglement distribution loss, two technologies will need to be developed. \begin{table}\center
\begin{tabular}{|c|c|c|} \hline & {Earth arm} & {Moon arm}\\ \hline ~~Geometry attenuation~~ & $~~~~~~32~dB~~~~~~$ & $~~~~~53.5~dB~~~~$ \\ \hline Atmospheric attenuation & $3~dB$ & $0~dB$ \\ \hline Optical components & $6~dB$ & $6~dB$ \\ \hline Detection efficiency & $0.5~dB$ & $0.5~dB$ \\ \hline Total loss & 41.5 dB & 60 dB\\
\hline \multicolumn{3}{|c|}{Two arms total loss: 101.5 dB}\\ \hline \end{tabular} \caption{ The estimated total loss for the entangled photon pairs between Earth and Moon can be divided into two components, namely the Earth and Moon arms. With existing technology, a beam divergence of $3~\mu rad$ can be achieved for the satellite's transmitting telescope, while the diameter of the receiving telescopes can be up to 30 m and 2.4 m for Earth and Moon, respectively \cite{Sanders2013}. For a transmission distance of $3.8\times10^{5}$ km, the geometry attenuations would be 32 dB and 53.5 dB, respectively. The atmospheric attenuation, which only occurs for the Earth arm, would be approximately 3 dB. The optical components also contribute to the attenuation, mainly due to the fiber coupling efficiencies in the entanglement source and receiving telescopes (about 10 dB for the two arms) and the transmittance of the optical antennas (about 2 dB for the two arms). The efficiency of state-of-the-art single-photon detectors is at least 80\% \cite{0953-2048-25-6-063001, Marsili:2013fs}. } \label{tab:channalloss} \end{table}
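The geometry-attenuation entries of Table~\ref{tab:channalloss} can be reproduced with a crude flat-top beam model (collected power $\approx$ aperture area over far-field spot area); real link budgets also include pointing error and the actual beam profile, so this is an order-of-magnitude sketch only.

```python
import math

def geometric_loss_db(divergence_rad, distance_m, aperture_m):
    """Loss from collecting an aperture's worth of a diverged beam (flat-top model)."""
    spot_diameter = divergence_rad * distance_m
    collected_fraction = (aperture_m / spot_diameter) ** 2
    return -10 * math.log10(collected_fraction)

L = 3.8e8                                  # m, transmission distance
earth = geometric_loss_db(3e-6, L, 30.0)   # 30 m telescope on Earth, ~32 dB
moon = geometric_loss_db(3e-6, L, 2.4)     # 2.4 m telescope near Moon, ~53.5 dB
```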
First, employing the same entanglement source used in previous experiments \cite{Scheidl:freedomofchoice:pnas2010,yin:100kmtelenature:2012,Yin:testspeed:prl2013} would mean it might take years to perform the Bell test \cite{SM}, which would pose a significant challenge. Instead, we have created a new type of quantum entanglement source, based on a Type-0 periodically-poled potassium titanyl phosphate (PPKTP) crystal and a Sagnac interferometer. This can generate $1.0\times10^9$ entangled photon pairs per second, about two orders of magnitude higher than the sources in Ref.~\cite{Scheidl:freedomofchoice:pnas2010,yin:100kmtelenature:2012,Yin:testspeed:prl2013,Yin1140}. Under certain conditions, the generation rate of a spontaneous parametric down-conversion (SPDC) process can be approximated as \cite{Grice:1997ht,Grice:2001jc}, \begin{equation} \label{ }
G\propto{[\chi_{eff}^{(2)}]}^2\cdot\frac{1}{|n_s^\prime-n_i^\prime|}. \end{equation} Here, $\chi_{eff}^{(2)}$ stands for the effective second-order nonlinear coefficient, while $n_s^\prime$ and $n_i^\prime$ are the group refractive indices of the signal and idler photons in the PPKTP crystal, respectively. The first and second factors represent the relative spectral intensity and the spectral width, respectively. In an SPDC process going from a 405 nm pump laser to approximately 810 nm parametric photons, the relative spectral intensity and spectral width are both higher for Type-0 than for Type-II, leading to a generation rate that is $\sim$ 2 orders of magnitude higher \cite{SM}.
Second, a much higher-resolution time-to-digital converter (TDC) system must be developed if we are to achieve a sufficiently high signal-to-noise ratio at this higher entangled photon pair generation rate. Based on field-programmable gate array (FPGA) carry chains used as tapped delay lines \cite{Shen:2013dc}, a homemade TDC, which has a full width at half maximum (FWHM) time resolution of approximately 60 ps for measuring coincident events in two channels \cite{Qi:2014gg}, is employed to record the photon detection results. \begin{figure*}
\caption{(color online). Simulated experiment setup. Entangled photon pairs are sent to two independent detection modules to perform the collective measurement. In each detection module, a Pockels Cell (PC) is used to select the basis chosen by the experimenter. After being split using a polarized beam splitter (PBS), the photons are coupled to two single-mode fibers for detection. To measure the CHSH inequality, the PCs are aligned at $22.5^\circ$ and an extra half-wave plate (HWP) at $11.25^\circ$ is placed in front of one of them. HVD: high-voltage driver. }
\label{Fig:Setup}
\end{figure*}
Even though this scheme is feasible with current techniques, building a satellite equipped with an ultra-high-brightness entangled photon source and a large telescope on or near Moon are still enormous challenges. Here, we perform a proof-of-principle experiment to demonstrate the feasibility via an ultra-high-attenuation channel. With two independent human observers choosing the bases, the CHSH inequality was measured.
As shown in Fig.~\ref{Fig:Setup}, signal and idler photons generated by a Type-0 PPKTP entanglement source were sent in different directions and measured by separate detection modules to distinguish their polarizations. A large attenuation was applied to both arms to simulate the high losses of the satellite-Earth and satellite-Moon channels. Each detection module was equipped with a Pockels Cell (PC) to select the basis, along with a polarizing beam splitter (PBS) and two single-mode-fiber-coupled single-photon detectors (SPDs) to measure the polarization. The PC contains two potassium dihydrogen phosphate (KDP) crystals with half-wave voltages of $V_{\frac{\pi}{2}}\approx760$ V at $780$ nm and $V_{\frac{\pi}{2}}\approx820$ V at $842$ nm. When the half-wave voltage was applied, the PC became equivalent to a half-wave plate (HWP) and could thus rotate the photon's polarization. By pressing a key on a keyboard, each human observer could trigger a customized high-voltage driver to output either $V_{0}= 0$ V or the corresponding PC's half-wave voltage. The delay between the key being pressed and the PC reaching the desired voltage is less than 50 ms. The photons transmitted through or reflected at the PBS are coupled into two single-mode fibers and detected by two Si SPDs with a jitter of approximately 40 ps and a quantum efficiency of 10\%. Finally, the detector outputs are recorded by the custom TDC. In total, the system's time resolution is approximately 82 ps (FWHM).
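As a consistency check on the quoted system resolution, independent timing jitters combine in quadrature. A minimal sketch using the figures above (60 ps TDC, 40 ps per detector), assuming the quoted 82 ps is indeed such a quadrature sum:

```python
import math

# Independent Gaussian-like timing jitters add in quadrature (all FWHM, ps).
tdc_fwhm = 60.0    # homemade TDC, two-channel coincidence measurement
spd_jitter = 40.0  # each of the two Si single-photon detectors

system_fwhm = math.sqrt(tdc_fwhm**2 + 2 * spd_jitter**2)
print(round(system_fwhm))  # 82, matching the quoted ~82 ps (FWHM)
```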
To conduct the simulated experiment, an observer sat near each of the two detection modules. Each pressed keys on a keyboard at a frequency of 2--4 Hz to select measurement bases, without communicating with the other or negotiating a strategy. The optical axes of the two PCs were rotated to $22.5^\circ$. Each photon suffered a total loss of 51.5 dB, including 38.5 dB introduced by the attenuator, 3 dB by the single-mode fiber's coupling efficiency, and 10 dB by the SPDs' detection efficiency. The total loss for a pair of entangled photons was therefore 103 dB. It is reasonable to include the SPDs' detection efficiency in the simulated attenuation here, because a state-of-the-art SPD can achieve a quantum efficiency of 80\% with a jitter of 40 ps \cite{0953-2048-25-6-063001, Marsili:2013fs}. The S-value for the CHSH inequality was measured to be $S=2.28\pm 0.061$ over 3 hours, with 537 coincidences.
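The loss budget and coincidence count above can be cross-checked with a short calculation, assuming the $\sim$1 GHz pair generation rate of the Type-0 source and treating the per-arm losses as independent:

```python
# Per-arm loss budget (dB), figures taken from the text.
attenuator     = 38.5   # simulated channel attenuation
fiber_coupling = 3.0    # single-mode fiber coupling efficiency
detector_eff   = 10.0   # SPD detection efficiency (10%)
per_arm   = attenuator + fiber_coupling + detector_eff  # 51.5 dB
pair_loss = 2 * per_arm                                 # 103 dB for a pair

pair_rate  = 1e9                                  # ~1 GHz entangled-pair source
coinc_rate = pair_rate * 10 ** (-pair_loss / 10)  # surviving pairs per second
expected   = coinc_rate * 3 * 3600                # over the 3-hour run
print(f"{coinc_rate:.3f} cps, ~{expected:.0f} coincidences")  # ~0.050 cps, ~541

# Significance of the CHSH violation S = 2.28 ± 0.061:
n_sigma = (2.28 - 2.0) / 0.061
print(f"{n_sigma:.1f} standard deviations above the classical bound")  # ~4.6
```

The predicted $\sim$541 coincidences agree well with the 537 observed, and the measured $S$ violates the classical bound of 2 by roughly 4.6 standard deviations.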
The idea of using human free will to decide the measurement settings in a Bell test may originate with Bell himself \cite{Bell:speakable:2004,SM}. If free will is assumed to exist, it naturally becomes a promising candidate for a stochastic source, owing to its intrinsic attributes of freedom and independence. The ``freedom-of-choice'' loophole is receiving increasing attention in the field, and some researchers have suggested that human free will might be an effective means of addressing it \cite{RevModPhys.86.419,PhysRevA.93.032115,Scheidl:freedomofchoice:pnas2010,0264-9381-29-22-224011,PhysRevLett.106.100406,Hall:measurementindependence:prl2010}, although this remains controversial, mainly because of the imperfect randomness of human choices \cite{Wagenaar:1972hw}. For example, nine scientific research institutions around the world came together to conduct a worldwide experiment called ``the big Bell test'' using human randomness on November 30, 2016 \cite{tbbt}. However, given that the Earth is only $\sim$43 light-ms in diameter, human choices are too slow to allow the separation between the measurements and the entanglement source to be space-like. The scheme proposed in this Letter offers a practical approach to solving this problem.
It should be noted that another way to address this loophole is to use cosmological signals coming from distant regions of space to determine the measurement settings. This would confine any local-realist model to a limited space-time region, e.g., one originating no more recently than approximately 600 years ago \cite{Handsteiner2017Cosmic}. Nonetheless, because this method requires detectable signals, even utilizing the Cosmic Microwave Background would only push this limit back to 380,000 years after the Big Bang \cite{PhysRevLett.112.110405}. In addition, some extra assumptions must be introduced when using this method, such as the hidden-variable model not influencing the photons' energy and momentum \cite{Handsteiner2017Cosmic} or their arrival times \cite{PhysRevLett.118.140402}.
It is also worth noting that the other man-made device that could be replaced by a human in a Bell test is the detector. Specifically, it may be possible for human observers to detect entangled photons with the naked eye \cite{PhysRevA.78.052110} and to record the results \cite{PhysRevA.72.012107}, involving human consciousness more deeply in the Bell test. In the current scheme, the entangled photons are detected directly after distribution, so the heralding efficiency is extremely low on both sides. This is a problem: since humans can only detect or record a few photons per second, it would take an unacceptably long time to obtain a sufficient number of coincidence events. However, using an event-ready scheme \cite{PhysRevLett.71.4287,PhysRevLett.91.110405} and a quantum memory technique \cite{Yang:2016du} could increase the heralding efficiency significantly, making it possible to introduce human recorders. At the same time, such a proposal could even close the ``fair-sampling loophole'' and realize a loophole-free Bell test. In addition, electroencephalogram (EEG) devices can predict human choices $\sim$0.1 s before the corresponding muscle actions \cite{Hardy2017}, which could be a useful way of slightly relaxing the space-like interval condition.
We also want to emphasize that even though the natures of consciousness and free will remain unresolved problems, they can be treated with an increasingly open and scientific attitude as technology in the fields of neuroscience and quantum physics continues to progress \cite{Brembs930}.
In this work, we have proposed a scheme for conducting a Bell test between the Earth and the Moon that would simultaneously address the measurement-independence and locality loopholes, even when conscious human minds with free will are involved. We have estimated the total loss for this scheme and analyzed the space-time relationships. To overcome the extremely high loss, we have realized a quantum entanglement source with a generation rate of approximately 1 GHz. Using this ultra-high-brightness source, we conducted an experiment in which quantum entanglement was distributed with a loss of 103 dB to two human observers, who selected the measurement bases. These results demonstrate that it is feasible to handle such long-range entanglement distributions, potentially providing a new way to address these fascinating issues.
We acknowledge W.-Q. Cai for insightful discussions. This work has been supported by CAS Center for Excellence and Synergetic Innovation Center in Quantum Information and Quantum Physics, Shanghai Branch, University of Science and Technology of China, by the National Fundamental Research Program (under grant no. 2013CB336800), the National Natural Science Foundation of China, and by the Strategic Priority Research Program on Space Science, the Chinese Academy of Sciences.
\begin{thebibliography}{46} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Einstein}\ \emph {et~al.}(1935)\citenamefont
{Einstein}, \citenamefont {Podolsky},\ and\ \citenamefont {Rosen}}]{EPR_35}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Einstein}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Podolsky}},
\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rosen}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.}\
}\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {777} (\bibinfo {year}
{1935})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bell}(1964)}]{Bell_Ineq_64}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Bell}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physics}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {195}
(\bibinfo {year} {1964})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bell}(2004)}]{Bell:speakable:2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Bell}},\ }\href@noop {} {\emph {\bibinfo {title} {Speakable and Unspeakable
in Quantum Mechanics}}}\ (\bibinfo {publisher} {Cambridge Univ Press,
Cambridge, Berlin},\ \bibinfo {year} {2004})\ pp.\ \bibinfo {pages}
{243--244}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hall}(2010)}]{Hall:measurementindependence:prl2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hall}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical
Review Letters}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages}
{250404} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brunner}\ \emph {et~al.}(2014)\citenamefont
{Brunner}, \citenamefont {Cavalcanti}, \citenamefont {Pironio}, \citenamefont
{Scarani},\ and\ \citenamefont {Wehner}}]{RevModPhys.86.419}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Brunner}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Cavalcanti}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Pironio}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Scarani}}, \ and\ \bibinfo
{author} {\bibfnamefont {S.}~\bibnamefont {Wehner}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf
{\bibinfo {volume} {86}},\ \bibinfo {pages} {419} (\bibinfo {year}
{2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kofler}\ \emph {et~al.}(2016)\citenamefont {Kofler},
\citenamefont {Giustina}, \citenamefont {Larsson},\ and\ \citenamefont
{Mitchell}}]{PhysRevA.93.032115}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Kofler}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Giustina}},
\bibinfo {author} {\bibfnamefont {J.-A.}\ \bibnamefont {Larsson}}, \ and\
\bibinfo {author} {\bibfnamefont {M.~W.}\ \bibnamefont {Mitchell}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\
}\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {032115} (\bibinfo
{year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rideout}\ \emph {et~al.}(2012)\citenamefont
{Rideout}, \citenamefont {Jennewein}, \citenamefont {Amelino-Camelia},
\citenamefont {Demarie}, \citenamefont {Higgins}, \citenamefont {Kempf},
\citenamefont {Kent}, \citenamefont {Laflamme}, \citenamefont {Ma},
\citenamefont {Mann}, \citenamefont {Mart\'{\i}n-Mart\'{\i}nez}, \citenamefont
{Menicucci}, \citenamefont {Moffat}, \citenamefont {Simon}, \citenamefont
{Sorkin}, \citenamefont {Smolin},\ and\ \citenamefont
{Terno}}]{0264-9381-29-22-224011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Rideout}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jennewein}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Amelino-Camelia}},
\bibinfo {author} {\bibfnamefont {T.~F.}\ \bibnamefont {Demarie}}, \bibinfo
{author} {\bibfnamefont {B.~L.}\ \bibnamefont {Higgins}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Kempf}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Kent}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Laflamme}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Ma}},
\bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Mann}}, \bibinfo
{author} {\bibfnamefont {E.}~\bibnamefont {Mart�n-Mart�nez}}, \bibinfo
{author} {\bibfnamefont {N.~C.}\ \bibnamefont {Menicucci}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Moffat}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Simon}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Sorkin}}, \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Smolin}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~R.}\
\bibnamefont {Terno}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Classical and Quantum Gravity}\ }\textbf {\bibinfo {volume}
{29}},\ \bibinfo {pages} {224011} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Barrett}\ and\ \citenamefont
{Gisin}(2011)}]{PhysRevLett.106.100406}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Barrett}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Gisin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. Lett.}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {100406}
(\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kent}(2005)}]{PhysRevA.72.012107}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kent}},\ }\href {\doibase 10.1103/PhysRevA.72.012107} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {72}},\
\bibinfo {pages} {012107} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Manasseh}\ \emph {et~al.}(2013)\citenamefont
{Manasseh}, \citenamefont {de~Balthasar}, \citenamefont {Sanguinetti},
\citenamefont {Pomarico}, \citenamefont {Gisin}, \citenamefont {de~Peralta},\
and\ \citenamefont {Andino}}]{Manasseh2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Manasseh}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{de~Balthasar}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Sanguinetti}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Pomarico}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gisin}},
\bibinfo {author} {\bibfnamefont {R.~G.}\ \bibnamefont {de~Peralta}}, \ and\
\bibinfo {author} {\bibfnamefont {S.~L.~G.}\ \bibnamefont {Andino}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Frontiers in
Psychology}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {845}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sekatski}\ \emph {et~al.}(2009)\citenamefont
{Sekatski}, \citenamefont {Brunner}, \citenamefont {Branciard}, \citenamefont
{Gisin},\ and\ \citenamefont {Simon}}]{PhysRevLett.103.113601}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Sekatski}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Brunner}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Branciard}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Gisin}}, \ and\ \bibinfo {author}
{\bibfnamefont {C.}~\bibnamefont {Simon}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {103}},\ \bibinfo {pages} {113601} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ and\ \citenamefont
{Brassard}(1984)}]{Bennett:BB84:1984}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Brassard}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Proceedings of
the IEEE International Conference on Computers, Systems and Signal
Processing}}}\ (\bibinfo {publisher} {IEEE Press},\ \bibinfo {address} {New
York},\ \bibinfo {year} {1984})\ pp.\ \bibinfo {pages} {175--179}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Ekert}(1991)}]{Ekert:QKD:1991}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont
{Ekert}},\ }\href {\doibase 10.1103/PhysRevLett.67.661} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {67}},\
\bibinfo {pages} {661} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ \emph {et~al.}(1992)\citenamefont
{Bennett}, \citenamefont {Brassard},\ and\ \citenamefont
{Mermin}}]{Bennett:BBM92:1992}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}}, \
and\ \bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {Mermin}},\
}\href {\doibase 10.1103/PhysRevLett.68.557} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {68}},\ \bibinfo
{pages} {557} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ \emph {et~al.}(1993)\citenamefont
{Bennett}, \citenamefont {Brassard}, \citenamefont {Cr\'{e}peau},
\citenamefont {Jozsa}, \citenamefont {Peres},\ and\ \citenamefont
{Wootters}}]{BBCJPW_93}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cr\'{e}peau}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Jozsa}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Peres}}, \ and\ \bibinfo {author}
{\bibfnamefont {W.~K.}\ \bibnamefont {Wootters}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.~}\ }\textbf {\bibinfo
{volume} {70}},\ \bibinfo {pages} {1895} (\bibinfo {year}
{1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bouwmeester}\ \emph {et~al.}(1997)\citenamefont
{Bouwmeester}, \citenamefont {Pan}, \citenamefont {Mattle}, \citenamefont
{Eibl}, \citenamefont {Weinfurter},\ and\ \citenamefont
{Zeilinger}}]{BPMEWZ_97}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Bouwmeester}}, \bibinfo {author} {\bibfnamefont {J.-W.}\ \bibnamefont
{Pan}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mattle}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Eibl}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Zeilinger}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {390}},\
\bibinfo {pages} {575} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kimble}(2008)}]{Kimble2008}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}},\ }\href {\doibase 10.1038/nature07127} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {453}},\ \bibinfo
{pages} {1023} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Honjo}\ \emph {et~al.}(2008)\citenamefont {Honjo},
\citenamefont {Nam}, \citenamefont {Takesue}, \citenamefont {Zhang},
\citenamefont {Kamada}, \citenamefont {Nishida}, \citenamefont {Tadanaga},
\citenamefont {Asobe}, \citenamefont {Baek}, \citenamefont {Hadfield},
\citenamefont {Miki}, \citenamefont {Fujiwara}, \citenamefont {Sasaki},
\citenamefont {Wang}, \citenamefont {Inoue},\ and\ \citenamefont
{Yamamoto}}]{Honjo:08}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Honjo}}, \bibinfo {author} {\bibfnamefont {S.~W.}\ \bibnamefont {Nam}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Takesue}}, \bibinfo
{author} {\bibfnamefont {Q.}~\bibnamefont {Zhang}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Kamada}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Nishida}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Tadanaga}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Asobe}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Baek}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Hadfield}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Miki}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fujiwara}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Sasaki}}, \bibinfo {author}
{\bibfnamefont {Z.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{K.}~\bibnamefont {Inoue}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Yamamoto}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume} {16}},\
\bibinfo {pages} {19118} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Inagaki}\ \emph {et~al.}(2013)\citenamefont
{Inagaki}, \citenamefont {Matsuda}, \citenamefont {Tadanaga}, \citenamefont
{Asobe},\ and\ \citenamefont {Takesue}}]{Inagaki:13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Inagaki}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Matsuda}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Tadanaga}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Asobe}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Takesue}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Opt. Express}\ }\textbf {\bibinfo {volume}
{21}},\ \bibinfo {pages} {23241} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2012)\citenamefont {Yin},
\citenamefont {Ren}, \citenamefont {Lu}, \citenamefont {Cao}, \citenamefont
{Yong}, \citenamefont {Wu}, \citenamefont {Liu}, \citenamefont {Liao},
\citenamefont {Zhou}, \citenamefont {Jiang}, \citenamefont {Cai},
\citenamefont {Xu}, \citenamefont {Pan}, \citenamefont {Jia}, \citenamefont
{Huang}, \citenamefont {Yin}, \citenamefont {Wang}, \citenamefont {Chen},
\citenamefont {Peng},\ and\ \citenamefont {Pan}}]{yin:100kmtelenature:2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {J.-G.}\ \bibnamefont {Ren}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Lu}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont
{H.-L.}\ \bibnamefont {Yong}}, \bibinfo {author} {\bibfnamefont {Y.-P.}\
\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont {Liao}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhou}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{X.-D.}\ \bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {G.-S.}\
\bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont {J.-J.}\ \bibnamefont
{Jia}}, \bibinfo {author} {\bibfnamefont {Y.-M.}\ \bibnamefont {Huang}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Yin}}, \bibinfo {author}
{\bibfnamefont {J.-Y.}\ \bibnamefont {Wang}}, \bibinfo {author}
{\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}}, \bibinfo {author}
{\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {488}},\
\bibinfo {pages} {185} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fedrizzi}\ \emph {et~al.}(2009)\citenamefont
{Fedrizzi}, \citenamefont {Ursin}, \citenamefont {Herbst}, \citenamefont
{Nespoli}, \citenamefont {Prevedel}, \citenamefont {Scheidl}, \citenamefont
{Tiefenbacher}, \citenamefont {Jennewein},\ and\ \citenamefont
{Zeilinger}}]{Fedrizzi2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Fedrizzi}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ursin}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Herbst}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Nespoli}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Prevedel}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Scheidl}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Tiefenbacher}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Jennewein}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Zeilinger}},\ }\href {\doibase 10.1038/nphys1255}
{\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf
{\bibinfo {volume} {5}},\ \bibinfo {pages} {389} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2017)\citenamefont {Yin},
\citenamefont {Cao}, \citenamefont {Li}, \citenamefont {Liao}, \citenamefont
{Zhang}, \citenamefont {Ren}, \citenamefont {Cai}, \citenamefont {Liu},
\citenamefont {Li}, \citenamefont {Dai}, \citenamefont {Li}, \citenamefont
{Lu}, \citenamefont {Gong}, \citenamefont {Xu}, \citenamefont {Li} \emph
{et~al.}}]{Yin1140}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo
{author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{J.-G.}\ \bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\
\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {W.-Y.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Dai}}, \bibinfo {author}
{\bibfnamefont {G.-B.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{Q.-M.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\
\bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Xu}}, \bibinfo {author} {\bibfnamefont {S.-L.}\ \bibnamefont {Li}}, \emph
{et~al.},\ }\href {\doibase 10.1126/science.aan3211} {\bibfield {journal}
{\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {356}},\ \bibinfo
{pages} {1140} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Borghi}(1965)}]{BORGHI:1965vt}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont
{Borghi}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Perceptual and Motor Skills}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo
{pages} {212} (\bibinfo {year} {1965})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sanders}(2013)}]{Sanders2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~H.}\ \bibnamefont
{Sanders}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Astrophysics and Astronomy}\ }\textbf {\bibinfo {volume} {34}},\
\bibinfo {pages} {81} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2013)\citenamefont {Cao},
\citenamefont {Liang}, \citenamefont {Yin}, \citenamefont {Yong},
\citenamefont {Zhou}, \citenamefont {Wu}, \citenamefont {Ren}, \citenamefont
{Li}, \citenamefont {Pan}, \citenamefont {Yang}, \citenamefont {Ma},
\citenamefont {Peng},\ and\ \citenamefont {Pan}}]{cao:biasedbasis:2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Cao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Liang}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Yin}}, \bibinfo {author}
{\bibfnamefont {H.-L.}\ \bibnamefont {Yong}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont
{Y.-P.}\ \bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {J.-G.}\
\bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont
{Li}}, \bibinfo {author} {\bibfnamefont {G.-S.}\ \bibnamefont {Pan}},
\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {X.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont
{C.-Z.}\ \bibnamefont {Peng}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.-W.}\ \bibnamefont {Pan}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Optics Express}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo
{pages} {27260} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2013{\natexlab{a}})\citenamefont
{Yin}, \citenamefont {Cao}, \citenamefont {Liu}, \citenamefont {Pan},
\citenamefont {Wang}, \citenamefont {Yang}, \citenamefont {Zhang},
\citenamefont {Yang}, \citenamefont {Chen},\ and\ \citenamefont
{Peng}}]{Yin2013Experimental}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo
{author} {\bibfnamefont {S.~B.}\ \bibnamefont {Liu}}, \bibinfo {author}
{\bibfnamefont {G.~S.}\ \bibnamefont {Pan}}, \bibinfo {author} {\bibfnamefont
{J.~H.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {Z.~P.}\
\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont
{Yang}}, \bibinfo {author} {\bibfnamefont {Y.~A.}\ \bibnamefont {Chen}}, \
and\ \bibinfo {author} {\bibfnamefont {C.~Z.}\ \bibnamefont {Peng}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Optics Express}\
\end{thebibliography}
\end{document}
\begin{document}
\title{Moufang Twin Trees of prime order}
\author{ Matthias Grüninger\thanks{\url{[email protected]}} \and Max Horn\thanks{\url{[email protected]}} \and Bernhard Mühlherr\thanks{\url{[email protected]}} }
\maketitle
\begin{abstract} We prove that the unipotent horocyclic group of a Moufang twin tree of prime order is nilpotent of class at most 2. \end{abstract}
\section{Introduction}
The classification of spherical buildings asserts that each irreducible spherical building of rank at least 3 is of algebraic origin. By this we mean that it is the building of a classical group, a semi-simple algebraic group, or some variation thereof. In the rank 2 case this is no longer true; in particular, there are free constructions of generalized polygons. (Generalized polygons are precisely the spherical buildings of rank 2.) In order to characterize the generalized polygons of algebraic origin, Tits introduced the Moufang condition for spherical buildings in the 1970s \cite{Ti77}. This condition is automatically satisfied for irreducible spherical buildings of rank at least 3. The Moufang polygons were classified in \cite{TW}. It follows from this classification that the Moufang condition indeed characterizes the generalized polygons of algebraic origin.
In the late 1980s Ronan and Tits introduced twin buildings, motivated by the theory of Kac-Moody groups. Twin buildings are generalizations of spherical buildings. For the latter there is a natural opposition relation on the set of chambers, due to the existence of a unique longest element in the finite Weyl group. Many important results about spherical buildings (e.g.\ their classification in higher rank) rely on the presence of this opposition relation. For Kac-Moody groups over fields there is a natural notion of \Defn{opposite Borel groups}, even if their Weyl group is infinite. The idea underlying the definition of twin buildings is to translate this algebraic fact into combinatorics. Roughly speaking, the existence of an opposition relation for spherical buildings is axiomatized by the notion of a twinning between two buildings of the same (possibly non-spherical) type. It turns out that many important notions and concepts from the theory of spherical buildings have natural analogues in the context of twin buildings. In particular, the Moufang condition makes sense for twin buildings, and it is natural to ask to which extent the ``spherical'' results can be generalized to the twin case. In this paper we contribute to this question in the context of twin trees, which are precisely the non-spherical twin buildings of rank 2.
In view of the main result of \cite{TW} it is natural to ask whether a classification of Moufang twin trees is feasible. Our main result can be seen as a major step towards a classification of Moufang twin trees of prime order (i.e.\ of regular Moufang twin trees of valency $p+1$ for some prime $p$). This is of course a rather small subclass of all Moufang twin trees. As we shall explain below, however, a classification of all Moufang twin trees seems to be out of reach at the moment. In view of our result, there is some hope that a classification of the locally finite Moufang twin trees might be feasible. The latter are precisely the ones which are interesting for the theory of lattices in locally compact groups. Indeed, by a construction of Tits in \cite{Ti89} and an important observation of R\'emy in \cite{RemyCRAS99}, locally finite Moufang twin trees provide a large class of lattices in locally compact groups. The examples in this class are irreducible and non-uniform lattices in the full automorphism group of the product of two locally finite trees. Combining this with a result of Caprace and R\'emy in \cite{CR12}, many of them turn out to be simple as abstract groups. To our knowledge these are the only known examples of lattices with these properties. A classification of all locally finite Moufang twin trees would in particular provide a better understanding of these examples.
As already announced in the previous paragraph, we now provide more information about the classification problem for Moufang twin trees. We first recall the natural question whether the Moufang condition characterizes the twin trees of algebraic origin, i.e., the examples provided by Kac-Moody groups and ``their variations''. An important invariant of a Moufang twin tree is a subgroup of its automorphism group called its \Defn{unipotent horocyclic group}. In \cite{Ti89} a general construction of Moufang twin trees is given which uses this invariant as an essential ingredient. In \cite[Section 2]{RR06} (see also \cite[Example 67]{AR09}) this construction was made ``concrete'' for certain parameters in order to construct ``exotic'' examples of Moufang twin trees with abelian unipotent horocyclic groups. In this way one gets classes of Moufang twin trees which one would not like to call of algebraic origin. Therefore the Moufang condition is not sufficient for characterizing the algebraic examples. Even worse, in \cite{Ti96} it is shown that there are uncountably many non-isomorphic twin trees of valency 3. In view of the fact that for each value of $n$ there is at most one Moufang $n$-gon of valency 3, one has to accept that the analogy between twin trees and generalized $n$-gons has its limitations.
On the other hand, at present it is not clear whether Moufang twin trees are ``wild'' or whether there is a powerful structure theory for them. This problem is discussed in \cite{Ti89}, and an abstract construction given therein provides a tool to obtain all Moufang twin trees. However, this has to be taken with a grain of salt, because the procedure requires some group theoretical parameters. Hence, the construction given in \cite{Ti89} translates the classification problem for Moufang twin trees into the problem of classifying these parameters. The question whether these parameter sets can be classified is also discussed in \cite{Ti89}, and we briefly recall its outcome. First of all, it turns out that a classification of all Moufang twin trees would provide a classification of all Moufang sets. Moufang sets have been studied intensively over the last 15 years, and at present it seems that their classification is far beyond reach. As the finite Moufang sets are known (see e.g.\ \cite{HKS72}), this difficult problem is not an obstacle if we restrict our attention to locally finite Moufang twin trees. However, there remains the problem of describing all possible commutation relations between the root groups in a Moufang twin tree for a given pair of Moufang sets. The main result of this paper provides a major step towards solving this problem for Moufang twin trees of prime order. The commutation relations of a Moufang twin tree are in fact encoded in its unipotent horocyclic group mentioned before. The first step in our solution is to introduce $\mathbb{Z}$-systems, which axiomatize groups that are candidates for being the unipotent horocyclic group of a Moufang twin tree. We then prove \cref{hauptsatz}, a purely group theoretical result whose statement requires some preparation. In order to give at least an idea of its implications for Moufang twin trees, we state the following consequence of it.
As the precise definition of a Moufang twin tree won't be needed in the paper, we refer to \cite{RT94} for an excellent introduction.
\begin{mainthm} \label{mainthm:detailed} The unipotent horocyclic group of a Moufang twin tree of prime order is nilpotent of class at most 2. \end{mainthm} As already mentioned, \cref{mainthm:detailed} is a consequence of our purely group theoretical \cref{hauptsatz}. We indicate how \cref{mainthm:detailed} is deduced from \cref{hauptsatz} in \cref{sketch proof mainthm}.
Let us finally point out the following two remarks on \cref{mainthm:detailed}:
\begin{enumerate} \item As explained before, the theory of twin buildings was developed in order to provide the appropriate structures associated to Kac-Moody groups. Roughly speaking, the ingredients for defining such a group consist of a generalized Cartan matrix $A$ and a field $\mathbb{F}$; the resulting group is denoted by
$G_A(\mathbb{F})$. If the Cartan matrix $A$ is a $2 \times 2$-matrix with non-positive determinant, then the twin building associated to $G_A(\mathbb{F})$ is a Moufang twin tree of order $|\mathbb{F}|$ whose automorphism group essentially coincides with the adjoint version of $G_A(\mathbb{F})$. If $A$ is of affine type (i.e.\ $\det(A) = 0$), then $G_A(\mathbb{F})$ can be realized as a matrix group over $\mathbb{F}(t)$. In fact, the examples given in \cref{sec trees and RGD} correspond to Kac-Moody groups of affine type. In most cases, however, $G_A(\mathbb{F})$ cannot be realized as a matrix group over a field (see \cite[Theorem 7.1]{Cap09}).
\item We already mentioned that there are uncountably many pairwise non-isomorphic trivalent Moufang twin trees due to a construction of Tits given in \cite{Ti96}. In view of our result above, one might hope that Tits' construction provides all trivalent Moufang twin trees which would give a classification of these objects. By modifying Tits' ideas we have constructed new examples which show that this is definitively not the case. Nevertheless we are confident that a classification of Moufang twin trees of prime order is feasible. We intend to come back to this question in a subsequent paper.
\end{enumerate}
\paragraph{Some conventions.} \begin{itemize} \item We consider $0$ to be a natural number, i.e., $\mathbb{N}=\{0,1,2,\ldots\}$. \item For a prime $p\in\mathbb{N}$, let $\mathbb{Z}_p:=\{0,\dots,p-1\}\subset\mathbb{N}$ and $\mathbb{Z}_p^*:=\{1,\dots,p-1\}\subset\mathbb{N}$. Moreover, let $\mathbb{F}_p:=\mathbb{Z}/p\mathbb{Z}$ be the prime field of order $p$. \item For a group $G$, let $G^*:=G\setminus\{1\}$. \item For $A,B,C\leq G$, set $[A,B,C]:=[[A,B],C]$. \item For $U\subseteq G$, let $\gen{U}$ be the subgroup of $G$ generated by $U$. \end{itemize}
\section{Moufang twin trees and RGD-systems} \label{sec trees and RGD}
As explained in the introduction, the classification problem for Moufang twin trees can be translated into a purely group theoretical classification problem. The key notion on the group theoretic side is that of an RGD-system. We first outline what RGD-systems are, then review the interplay between Moufang twin trees and RGD-systems. This will provide the motivation for our main result and enable us to state it properly.
RGD-systems were introduced by Tits in \cite{Ti92} in order to investigate groups of Kac-Moody type and Moufang buildings. The abbreviation ``RGD'' stands for ``root group data''. The axioms for an RGD-system are somewhat technical, and we refer to \cite{AB08} and \cite{CR09} for the general theory of RGD-systems.
Here we are only interested in RGD-systems of type $\tilde{A}_1$, i.e.\ in RGD-systems whose type is the Coxeter system associated with the infinite dihedral group. The RGD-axioms given below are adapted to this special case in which they simplify considerably. This is because the root system $\Phi$ of type $\tilde{A}_1$ has the following concrete description.
\begin{defn} For each $z \in \mathbb{Z}$ we put $\epsilon_z := 1$ if $z \leq 0$ and $\epsilon_z := -1$ if $z > 0$. We set $\Phi := \mathbb{Z} \times \{ 1,-1\}$, $\Phi^+ := \{ (z,\epsilon_z) \mid z \in \mathbb{Z} \}$ and $\Phi^- := \Phi \setminus \Phi^+$. For $i=0,1$ we define $r_i \in \Sym(\Phi)$ by $(z,\epsilon) \mapsto (2i-z,-\epsilon)$ and we put $\alpha_i := (i,\epsilon_i)$. Finally, for $\alpha = (z,\epsilon) \in \Phi$ we put $-\alpha := (z,-\epsilon)$. \end{defn}
\begin{figure}
\caption{Root system of type $\tilde{A}_1$; black nodes are positive roots, white nodes negative roots.}
\end{figure}
\begin{defn} An \Defn{RGD-system of type $\tilde{A}_1$} is a triple $\Pi=(G,(U_{\alpha})_{\alpha \in \Phi},H)$ consisting of a group $G$, a subgroup $H$ of $G$ and a family $(U_{\alpha})_{\alpha \in \Phi}$ of subgroups of $G$ (the \Defn{root subgroups}) such that the following holds. \begin{axioms}{RGD}
\item For all $\alpha \in \Phi$ we have $|U_{\alpha}| > 1$.
\item For all $z < z' \in \mathbb{Z}$ and all $\epsilon \in \{ 1,-1 \}$ we have \[ [U_{(z,\epsilon)},U_{(z',\epsilon)}] \leq \gen{U_{(n,\epsilon)} \mid z < n < z'}. \]
\item For $i=0,1$ there exists a function $m_i:U_{\alpha_i}^* \to G$ such that for all $u \in U_{\alpha_i}^*$ and $\alpha \in \Phi$ we have \[ m_i(u) \in U_{-\alpha_i} u U_{-\alpha_i} \quad\text{and}\quad m_i(u)U_{\alpha}m_i(u)^{-1} = U_{r_i(\alpha)}. \] Moreover, $m_i(u)^{-1}m_i(v) \in H$ for all $u,v \in U_{\alpha_i}^*$.
\item For $i = 0,1$ the group $U_{-\alpha_i}$ is not contained in $\gen{U_{\alpha} \mid \alpha \in \Phi^+}$.
\item The group $G$ is generated by the family $(U_{\alpha})_{\alpha \in \Phi}$ and the group $H$.
\item The group $H$ normalizes $U_{\alpha}$ for each $\alpha \in \Phi$. \end{axioms} \end{defn}
\begin{remark} We refer to \cite[Definition 7.82 and Subsection 8.6.1]{AB08} for the definition of RGD-systems of arbitrary type. In the following discussion ``RGD-system'' shall always mean ``RGD-system of type $\tilde{A}_1$''. \end{remark}
\begin{example}[\textbf{The standard example}] \label{ex std} Let $\mathbb{F}$ be a field and set
\[ G:= \SL_2(\mathbb{F}[t,t^{-1}]) \leq \SL_2(\mathbb{F}(t)), \qquad H :=\left\{ \mtr{\lambda}{0}{0}{\lambda^{-1}} \;\middle|\; 0 \neq \lambda \in \mathbb{F} \right\} \leq G. \] For each $z \in \mathbb{Z}$ we put
\[ U_{(z, 1)} := \left\{ \mtr{1}{\lambda t^z}{0}{1} \;\middle|\; \lambda \in \mathbb{F} \right\},
\qquad
U_{(z,-1)} := \left\{ \mtr{1}{0}{\lambda t^{-z}}{1} \;\middle|\; \lambda \in \mathbb{F} \right\}. \] We point out the following facts: \begin{enumerate} \item $\Pi = (G, (U_{\alpha})_{\alpha \in \Phi},H)$ is an RGD-system. \item Let $U_{++} := \gen{ U_{(z,1)} \mid z \in \mathbb{Z} }$. Then
$U_{++} = \left\{ \smtr{1}{f}{0}{1} \;\middle|\; f \in \mathbb{F}[t,t^{-1}] \right\}$ and in particular $[U_{(z,1)},U_{(z',1)}] = 1$ for all $z,z' \in \mathbb{Z}$. \item $\gen{ U_{(z,1)},U_{(z,-1)} }$ is isomorphic to $\SL_2(\mathbb{F})$ for all $z \in \mathbb{Z}$. \end{enumerate} \end{example}
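Facts (i)--(iii), and axiom (RGD3) in particular, can be confirmed by direct matrix computation. The following self-contained Python sketch (an illustration, not part of the paper) models Laurent polynomials over $\mathbb{Q}$ as dictionaries mapping exponents to rational coefficients, and checks for sample parameters that the positive root groups commute and that $u_+\,u_-\,u_+$ yields a monomial matrix conjugating $U_{(z,1)}$ onto $U_{(-z,-1)} = U_{r_0((z,1))}$:

```python
# Illustration only (not from the paper): rational Laurent polynomials are
# dicts {exponent: coefficient}; 2x2 matrices are nested lists over them.
from fractions import Fraction
from collections import defaultdict

def padd(*ps):
    r = defaultdict(Fraction)
    for p in ps:
        for e, c in p.items():
            r[e] += c
    return {e: c for e, c in r.items() if c}

def pmul(p, q):
    r = defaultdict(Fraction)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] += c1 * c2
    return {e: c for e, c in r.items() if c}

ONE, ZERO = {0: Fraction(1)}, {}

def mmul(*ms):
    out = ms[0]
    for b in ms[1:]:
        out = [[padd(pmul(out[i][0], b[0][j]), pmul(out[i][1], b[1][j]))
                for j in range(2)] for i in range(2)]
    return out

def u_plus(z, c):   # element of U_{(z,1)}: upper entry c * t**z
    return [[ONE, {z: Fraction(c)}], [ZERO, ONE]]

def u_minus(z, c):  # element of U_{(z,-1)}: lower entry c * t**(-z)
    return [[ONE, ZERO], [{-z: Fraction(c)}, ONE]]

# (ii) the positive root groups commute pairwise, so U_{++} is abelian:
a, b = u_plus(2, 3), u_plus(-3, 5)
assert mmul(a, b) == mmul(b, a)

# (RGD3) for u = u_plus(0, lam), the product u_+ u_- u_+ below is the
# monomial matrix [[0, lam], [-1/lam, 0]]:
lam = Fraction(3)
m = mmul(u_plus(0, lam), u_minus(0, -1 / lam), u_plus(0, lam))
assert m == [[ZERO, {0: lam}], [{0: -1 / lam}, ZERO]]

# m conjugates U_{(z,1)} onto U_{(-z,-1)}, as r_0((z,1)) = (-z,-1):
m_inv = [[ZERO, {0: -lam}], [{0: 1 / lam}, ZERO]]     # m has determinant 1
assert mmul(m, u_plus(2, 9), m_inv) == u_minus(-2, -1)
```

The exact rational arithmetic avoids any floating-point issues; over a general field $\mathbb{F}$ the same identities hold verbatim.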
\begin{remark} The following aspect of the standard example is relevant in our context: Let $\nu$ be a place of $\mathbb{F}(t)$. Then $\SL_2(\mathbb{F}(t))$ acts on the Bruhat-Tits tree $T_{\nu}$ associated with $\nu$. We consider the two rational places $\infty$ and $0$ and set $T_+ := T_{\infty}$ and $T_- := T_0$. It is a fact that there is a twinning $\delta^*$ between $T_+$ and $T_-$ such that $G=\SL_2(\mathbb{F}[t,t^{-1}])$ acts on the corresponding Moufang twin tree $T = (T_+,T_-,\delta^*)$ (see \cite{RT94} for details). Moreover, the unipotent horocyclic group associated with $T$ can be identified with the group $U_{++}$ defined above.
The interplay between the RGD-system of $\SL_2(\mathbb{F}[t,t^{-1}])$ and the twin tree $T$ is actually a special case of a general correspondence between RGD-systems and Moufang twin trees: It follows from \cite[Proposition 8.22]{AB08} that each Moufang twin tree $T$ yields an RGD-system $\mathbf{\Pi}(T)$ in a canonical way. Conversely, for each RGD-system $\Pi$, by \cite[Theorem 8.81]{AB08} there is a canonical associated twin tree $\mathbf{T}(\Pi)$. This correspondence is not one-to-one, but it can be made one-to-one by restricting to RGD-systems of ``adjoint type''.
The following two facts about the correspondence between RGD-systems and Moufang twin trees are important in our context. Let $\Pi = (G,(U_{\alpha})_{\alpha \in \Phi},H)$ be an RGD-system and let ${\bf T}(\Pi)$ be the Moufang twin tree associated with $\Pi$.
\begin{enumerate} \item As a byproduct of the proof of \cite[Theorem 8.81]{AB08} one observes that the Moufang twin tree $\mathbf{T}(\Pi)$ is biregular of degree
$(|U_{\alpha_0}|+1,|U_{\alpha_1}|+1)$. In analogy to the theory of projective planes, we say a tree is of \Defn{order} $q \in \mathbb{N}$ if it is a regular tree of degree $q+1$.
\item The group $U_{++} := \gen{U_{(z,1)} \mid z \in \mathbb{Z}}$ corresponds to the unipotent horocyclic group of ${\bf T}(\Pi)$. \end{enumerate} \end{remark}
\begin{example}[\textbf{The unitary example}] \label{ex unitary} \cref{mainthm:detailed} in the introduction asserts that the unipotent horocyclic group of a Moufang twin tree of order $p$ is nilpotent of class at most 2. In the following we provide an example of an RGD-system $\Pi$ which can be realized as a matrix group and such that the unipotent horocyclic group of ${\bf T}(\Pi)$ is non-abelian. As this won't be used in the sequel, we omit the details.
Let $\mathbb{F}$ be a field with $\mathrm{char}(\mathbb{F})\neq2$. We define the following elements of $\SL_3(\mathbb{F}(t))$ for $z\in\mathbb{Z}$ and $\lambda\in\mathbb{F}$: \begin{align*} x_{2z}(\lambda) &:= \begin{pmatrix}
1 & -\lambda t^{z} & (-1)^{z+1} \frac{\lambda^2}{2} t^{2z} \\
& 1 & \lambda (-t)^{z} \\
& & 1 \end{pmatrix}, & x_{2z+1}(\lambda) &:= \begin{pmatrix}
1 & 0 & (-1)^z \lambda t^{2z+1} \\
& 1 & 0 \\
& & 1 \end{pmatrix}, \\ h(\lambda) &:= \begin{pmatrix} \lambda&&\\&1&\\&&\lambda^{-1} \end{pmatrix}. \end{align*} Moreover, we define the following subgroups: \begin{align*}
U_{(2z+1,1)} &:= \{ x_{2z+1}(\lambda) \mid \lambda \in \mathbb{F} \}, & U_{(2z,1)} &:= \{ x_{2z}(\lambda) \mid \lambda \in \mathbb{F} \}, \\
U_{(2z+1,-1)} &:= U_{(2z+1,1)}^t, & U_{(2z,-1)} &:= U_{(2z,1)}^t, \\ H &:= \{ h(\lambda) \mid \lambda \in \mathbb{F}^* \}. \end{align*}
We set $G := \gen{ U_{\alpha} \mid \alpha \in \Phi }$. The following can be verified by straightforward calculations.
\begin{itemize} \item We have $H \leq G$. \item $\Pi = (G, (U_{\alpha})_{\alpha \in \Phi},H)$ is an RGD-system. \item Each $U_{\alpha}$ is isomorphic to the additive group of $\mathbb{F}$. \item $U_{++} := \gen{ U_{(z,1)} \mid z \in \mathbb{Z} }$ is non-abelian. Indeed, while the root groups $U_{(2z+1,1)}$ are central, we have for $z,z'\in\mathbb{Z}$ and $\lambda,\mu\in\mathbb{F}$ that \begin{align*}
[ x_{4z}(\lambda),\ x_{4z'+2}(\mu) ] &= x_{2z+2z'+1}(2\lambda\mu),\\
[ x_{4z+2}(\lambda),\ x_{4z'}(\mu) ] &= x_{2z+2z'+1}(-2\lambda\mu),\\
[ x_{4z}(\lambda),\ x_{4z'}(\mu) ] &= [ x_{4z+2}(\lambda),\ x_{4z'+2}(\mu) ] = 1_G. \end{align*}
\end{itemize} \end{example}
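The displayed commutation relations can be verified by direct matrix computation. The following sketch is our addition and not part of the example; it uses sympy over $\mathbb{Q}(t)$ as a stand-in for $\mathbb{F}(t)$ with $\mathrm{char}(\mathbb{F})\neq2$, and checks the case $z=z'=0$.

```python
# Our sanity check of the commutation relations in the unitary example,
# in the case z = z' = 0, over Q(t) (any field of characteristic != 2
# would do).  Requires sympy.
import sympy as sp

t, lam, mu = sp.symbols('t lam mu')

def x_even(z, a):
    # x_{2z}(a) as defined in the example
    return sp.Matrix([
        [1, -a * t**z, (-1)**(z + 1) * a**2 / 2 * t**(2 * z)],
        [0, 1, a * (-t)**z],
        [0, 0, 1],
    ])

def x_odd(z, a):
    # x_{2z+1}(a) as defined in the example
    return sp.Matrix([
        [1, 0, (-1)**z * a * t**(2 * z + 1)],
        [0, 1, 0],
        [0, 0, 1],
    ])

def comm(A, B):
    # group commutator [A, B] = A^{-1} B^{-1} A B
    return (A.inv() * B.inv() * A * B).expand()

# [x_0(lam), x_2(mu)] = x_1(2*lam*mu)
assert comm(x_even(0, lam), x_even(1, mu)) == x_odd(0, 2 * lam * mu)
# [x_2(lam), x_0(mu)] = x_1(-2*lam*mu)
assert comm(x_even(1, lam), x_even(0, mu)) == x_odd(0, -2 * lam * mu)
# the odd-indexed root groups are central in U_{++}
assert comm(x_odd(0, lam), x_even(0, mu)) == sp.eye(3)
```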
\section{The main result}
As a consequence of the discussion in the previous section, we conclude that the classification of Moufang twin trees of prime order $p$ is equivalent to the classification of RGD-systems in which all $U_{\alpha}$ have order $p$. The Moufang sets of cardinality $p+1$ are classified. Thus, the main obstacle remaining in the classification of Moufang twin trees of prime order is the classification of the possible commutation relations. In order to make this more concrete, we first consider the following basic observation about RGD-systems.
\begin{lemma} \label{Zsystemmotivation} Let $\Pi=(G,(U_{\alpha})_{\alpha \in \Phi},H)$ be an RGD-system. Let $X_n := U_{(n,1)}$ for each $n \in \mathbb{Z}$ and $X := \gen{X_n \mid n \in \mathbb{Z}}$. Then the following hold. \begin{enumerate} \item For all $n \leq m \in \mathbb{Z}$ the product map $X_n \times X_{n+1} \times \dots \times X_m \to \gen{X_i \mid n \leq i \leq m}$ is a bijection. \item There exists $t \in \Aut(X)$ such that $t(X_n) = X_{n+2}$ for all $n \in \mathbb{Z}$. \end{enumerate} \end{lemma}
\begin{proof} Assertion (i) follows from Assertion (i) of Corollary 8.34 in \cite{AB08}. For $i =0,1$, using the function $m_i$ from (RGD3) we can construct $s_i \in G$ such that $U_{\alpha}^{s_i} = U_{r_i(\alpha)}$ for all $\alpha \in \Phi$. Then the mapping $t:X \to X, x \mapsto x^{s_0s_1}$ has the required properties. \end{proof}
As we are dealing with Moufang twin trees of prime order, we have to consider RGD-systems in which all the $U_{\alpha}$ have order $p$ for some prime number $p$. Let $\Pi=(G,(U_{\alpha})_{\alpha \in \Phi},H)$ be such an RGD-system, and let $X$, $(X_n)_{n \in \mathbb{Z}}$ and $t$ be as in the previous lemma. By choosing $1 \neq x_i \in U_{(i,1)}$ for $i=0,1$ and setting $x_{2n} := t^n(x_0)$ and $x_{2n+1} := t^n(x_1)$, we obtain a pair $(X,(x_n)_{n \in \mathbb{Z}})$ conforming to the following definition.
\begin{defn} \label{def zsys} Let $p$ be a prime. A \Defn{$\mathbb{Z}$-system (of order $p$)} is a pair $(X,(x_n)_{n \in \mathbb{Z}})$ consisting of a group $X$ and a family $(x_n)_{n \in \mathbb{Z}}$ of elements in $X$ such that the following conditions are satisfied. \begin{axioms}{ZS} \item $X = \gen{x_n \mid n \in \mathbb{Z}}$. \item For all $n \leq m \in \mathbb{Z}$ the group $\gen{x_k \mid n \leq k \leq m}$ is of order $p^{m-n+1}$. \item There exists an automorphism $t$ of $X$ such that $t(x_n) = x_{n+2}$ for all $n \in \mathbb{Z}$. \end{axioms} \end{defn}
\begin{example} Let $p$ be a prime and let $\mathbb{F}:=\mathbb{F}_p$. \begin{enumerate} \item Let everything be as in \cref{ex std}. For $n\in\mathbb{Z}$ let $u_n:=\smtr{1}{t^n}{0}{1}$. Then $(U_{++},(u_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system of order $p$. Indeed, the map \[ U_{++}\to U_{++},\ g \mapsto g^\sigma,\quad\text{ where }\quad \sigma:=\mtr{t^{-1}}{0}{0}{t}\] is an automorphism of $U_{++}$ which maps $u_n$ to $u_{n+2}$ for all $n\in\mathbb{Z}$.
\item Let everything be as in \cref{ex unitary}. For $n\in\mathbb{Z}$ let $u_n:=x_{n}(1_{\mathbb{F}_p})$. Then $(U_{++},(u_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system of order $p$. Indeed, the map \[ U_{++}\to U_{++},\ g \mapsto g^\sigma,\quad\text{ where }\quad \sigma:=\begin{pmatrix} t^{-1}&&\\&1&\\&&-t \end{pmatrix}\] is an automorphism of $U_{++}$ which maps $u_n$ to $u_{n+2}$ for all $n\in\mathbb{Z}$. \end{enumerate} \end{example}
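Both shift automorphisms can be checked symbolically. The following sketch is our addition (using sympy); it verifies that conjugation by the respective $\sigma$ maps $u_n$ to $u_{n+2}$ in each example.

```python
# Our symbolic check (requires sympy) that conjugation by the given
# sigma shifts u_n to u_{n+2} in both examples.
import sympy as sp

t = sp.symbols('t')
n = sp.symbols('n', integer=True)

# (i) the standard example in SL_2: u_n = (1  t^n; 0  1)
u = lambda k: sp.Matrix([[1, t**k], [0, 1]])
sigma1 = sp.diag(1/t, t)
assert sp.simplify(sigma1.inv() * u(n) * sigma1 - u(n + 2)) == sp.zeros(2, 2)

# (ii) the unitary example in SL_3: u_n = x_n(1)
x_even = lambda z: sp.Matrix([          # x_{2z}(1)
    [1, -t**z, sp.Rational((-1)**(z + 1), 2) * t**(2 * z)],
    [0, 1, (-t)**z],
    [0, 0, 1]])
x_odd = lambda z: sp.Matrix([           # x_{2z+1}(1)
    [1, 0, (-1)**z * t**(2 * z + 1)],
    [0, 1, 0],
    [0, 0, 1]])
sigma2 = sp.diag(1/t, 1, -t)
for z in range(3):  # u_{2z} -> u_{2z+2} and u_{2z+1} -> u_{2z+3}
    assert sp.simplify(sigma2.inv() * x_even(z) * sigma2 - x_even(z + 1)) == sp.zeros(3, 3)
    assert sp.simplify(sigma2.inv() * x_odd(z) * sigma2 - x_odd(z + 1)) == sp.zeros(3, 3)
```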
We already mentioned in the introduction that Tits gave a construction of uncountably many pairwise non-isomorphic trivalent twin trees. The idea behind his construction can be generalized to produce uncountably many non-isomorphic $\mathbb{Z}$-systems of order $p$ for each prime $p$. It is conceivable that only very few of them can be realized as matrix groups. In a sense, Axiom (ZS3) requires an analogue of the conjugation by a diagonal matrix in the non-linear context.
We are now in a position to state our main result, which we prove in \cref{sect hauptbeweis}.
\begin{thm} \label{hauptsatz} Let $(X,(x_n)_{n \in \mathbb{Z}})$ be a $\mathbb{Z}$-system of prime order. Then $X$ is nilpotent of class at most 2. \end{thm}
\begin{remark}[Sketch of the proof of \cref{mainthm:detailed}] \label{sketch proof mainthm} Let ${\bf T}$ be a Moufang twin tree of order $p$ and let $\Pi({\bf T}) = (G, (U_{\alpha})_{\alpha \in \Phi},H)$ be the RGD-system associated with ${\bf T}$. As ${\bf T}$ is of order $p$, each $U_{\alpha}$ has order $p$. By \cref{Zsystemmotivation} we therefore obtain a $\mathbb{Z}$-system $(X,(x_n)_{n \in \mathbb{Z}})$ of order $p$, and the unipotent horocyclic group of ${\bf T}$ coincides with $X$. Thus \cref{mainthm:detailed} is a consequence of \cref{hauptsatz}. \end{remark}
\section{$\mathbb{Z}$-systems} \label{ZS-Section}
For the rest of this paper, we assume that $p$ is a prime and that $\Theta = (X,(x_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system of order $p$, together with an automorphism $t\in\Aut(X)$ as in (ZS3), the \Defn{shift automorphism} of $\Theta$. In the following we collect some basic properties of $\mathbb{Z}$-systems.
\begin{defn} For $n\leq m\in\mathbb{Z}$, we set \begin{align*} X_{n,m} &:=\gen{x_k \mid n \leq k \leq m}, & X_{-\infty,m}&:=\gen{x_k \mid k \leq m}, & X_{n,\infty}&:=\gen{x_k \mid n \leq k}. \end{align*} \end{defn}
\begin{lemma} The following statements are true. \begin{axioms}[start=4]{ZS} \item For each $n \in \mathbb{Z}$ we have $x_n^p = 1 \neq x_n$. \item For $n < m \in \mathbb{Z}$ we have $[x_n,x_m] \in X_{n+1,m-1}$. \item For each $x \in X^*$ there exist $n \leq m \in \mathbb{Z}$ and $e_n,\ldots,e_m \in \mathbb{Z}_p$ with $e_n\neq0\neq e_m$ such that $x = x_n^{e_n}\cdots x_m^{e_m}$. Moreover, $n,m,e_n,\ldots,e_m$ are uniquely determined by $x$. \end{axioms} \end{lemma}
\begin{proof} (ZS4) is immediate from (ZS2) with $m=n$. Now recall that a subgroup of index $p$ in a finite $p$-group is normal. Hence for any $n\leq m\in\mathbb{Z}$, we obtain the following normal series, where each group has index $p$ in the preceding one: \[ X_{n,m}\rhd X_{n+1,m}\rhd \dots \rhd X_{m,m}\rhd 1.\] Thus $x_n,\dots,x_m$ form a polycyclic generating sequence of $X_{n,m}$. Then (ZS6) follows. From this it also follows that $X_{n,m}'\leq X_{n+1,m}$. By a symmetric argument $X_{n,m}'\leq X_{n,m-1}$ and hence (ZS5) follows. \end{proof}
\begin{defn} Let $x \in X^*$. By (ZS6) there exist unique $n \leq m \in\mathbb{Z}$ and $e_n,\ldots,e_m \in \mathbb{Z}_p$ such that $e_n\neq0\neq e_m$ and $x = x_n^{e_n}\cdots x_m^{e_m}$. This is the \Defn{normal form} of $x$, and we set \[ \mathbf{n}(x) := n, \quad \mathbf{m}(x) := m. \] The \Defn{width} of $x\in X^*$ is $\mathbf{w}(x) := m-n+1$. Additionally we set $\mathbf{w}(1):=0$, $\mathbf{n}(1):=\infty$ and $\mathbf{m}(1):=-\infty$. \end{defn}
Finally we point out some useful direct consequences of (ZS5) and (ZS6), which we use extensively in the sequel.
\begin{lemma} Let $x,y\in X^*$. \begin{enumerate}[ref={\thethm (\roman*)}] \item \label[lemma]{startindex} Let $k \in \mathbb{Z}$ be such that $\mathbf{n}(x)\neq k$. Then $\mathbf{n}(x_kx) = \min(k, \mathbf{n}(x))$.
\item \label[lemma]{cancel start} If $\mathbf{n}(x) = \mathbf{n}(y)$, then there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{n}(x)<\mathbf{n}(y^\lambda x)$ and $\mathbf{w}(y^\lambda x) < \max(\mathbf{w}(x),\mathbf{w}(y))$.
\item \label[lemma]{cancel end} If $\mathbf{m}(x) = \mathbf{m}(y)$, then there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{m}(y^\lambda x)<\mathbf{m}(x)$ and $\mathbf{w}(y^\lambda x) < \max(\mathbf{w}(x),\mathbf{w}(y))$.
\item \label[lemma]{cancel power} $\mathbf{w}(x^p)<\mathbf{w}(x)$. \end{enumerate} \end{lemma}
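For intuition, these statements are easy to test in the simplest example: the elementary abelian $\mathbb{Z}$-system of finitely supported functions $\mathbb{Z}\to\mathbb{Z}_p$, with $x_n$ the indicator function of $n$. The following sketch is our toy model (written additively in Python, not part of the text) and checks \cref{startindex}, \cref{cancel start} and \cref{cancel power} on random elements.

```python
# Our toy model of the lemma: the elementary abelian Z-system of
# finitely supported functions Z -> Z_p (written additively), with
# x_n the indicator of n.  We test the statements on random elements.
import random

P = 5  # any prime

def supp(x):                      # x : dict {index: nonzero residue mod P}
    return {k for k, v in x.items() if v % P}

def n_(x): return min(supp(x)) if supp(x) else float('inf')
def m_(x): return max(supp(x)) if supp(x) else float('-inf')
def w_(x): return m_(x) - n_(x) + 1 if supp(x) else 0

def add(x, y):                    # the (additively written) group law
    z = dict(x)
    for k, v in y.items():
        z[k] = (z.get(k, 0) + v) % P
    return {k: v for k, v in z.items() if v}

def scal(c, x):                   # x^c in additive notation
    return {k: (c * v) % P for k, v in x.items() if (c * v) % P}

random.seed(0)
for _ in range(200):
    x = {k: random.randrange(1, P) for k in random.sample(range(-5, 6), 4)}
    y = {k: random.randrange(1, P) for k in random.sample(range(-5, 6), 4)}
    # (i) "startindex": n(x_k x) = min(k, n(x)) whenever k != n(x)
    k = n_(x) + random.choice([-1, 1, 2])
    assert n_(add({k: 1}, x)) == min(k, n_(x))
    # (ii) "cancel start": a suitable power of y cancels the lead term of x
    if n_(x) == n_(y):
        lam = next(c for c in range(1, P) if (c * y[n_(y)] + x[n_(x)]) % P == 0)
        z = add(scal(lam, y), x)
        assert n_(z) > n_(x) and w_(z) < max(w_(x), w_(y))
    # (iv) "cancel power": here x^p = 1, so w(x^p) = 0 < w(x)
    assert scal(P, x) == {} and w_(scal(P, x)) < w_(x)
```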
\section{Abelian $\mathbb{Z}$-systems}
In this section we establish a criterion for proving that a $\mathbb{Z}$-system is abelian, stated as \cref{abelian<=>single-shift-inv}.
\begin{defn} The \Defn{lower cutoff} of $\Theta$ is defined as \[ \ell(\Theta) := \begin{cases} \infty & \text{ if } X \text{ is abelian}, \\
\min \{ |m-n| \mid [x_n,x_m] \neq 1 \} & \text{ if } X \text{ is non-abelian}. \end{cases} \] \end{defn}
Recall that by (ZS3) there is an automorphism $t$ of $X$ mapping $x_n$ onto $x_{n+2}$ for all $n \in \mathbb{Z}$.
\begin{lemma} \label{MaxProposition5.4(1)} Let $X$ be non-abelian and let $n := \ell(\Theta)$ be the lower cutoff of $\Theta$. If $[x_0,x_n] \neq 1$, then $[x_1,x_{n+1}] = 1$; if $[x_1,x_{n+1}] \neq 1$, then $[x_0,x_n] = 1$. \end{lemma}
\begin{proof} Suppose $w:=[x_0,x_n] \neq 1$. As $n$ is the lower cutoff of $\Theta$, the subgroup $X_{-(n-1),n-1}$ centralizes $x_0$. Similarly $X_{1,2n-1}$ centralizes $x_n$.
Thus, for $0 \leq j <n$, \[ [x_0,x_{n+j}] \in X_{1,n+j-1}\leq X_{1,2n-1}, \quad\text{ implying }\quad [[x_0,x_{n+j}],x_n] = 1. \] Since $j<n$ we have also $[x_n,x_{n+j}] = 1$, hence $[[x_n,x_{n+j}],x_0] = 1$. Then the Three Subgroup Lemma (see e.g.\ \cite[5.1.10]{Rob96}) implies $[w,x_{n+j}] = [[x_0,x_n],x_{n+j}] = 1$.
Let $i := \mathbf{n}(w)$. As $w \in X_{1,n-1}$ it follows that $1\leq i<n$ and hence $[w,x_{n+i}] = 1$. But $w$ can be written as $w = x_i^{e_i}\dots x_{n-1}^{e_{n-1}}$ with $e_i,\dots,e_{n-1} \in \mathbb{Z}_p$ and $e_i\neq0$. Since the lower cutoff is $n$, we have $[x_j,x_{n+i}]=1$ for $i+1\leq j\leq n-1$. Thus $[x_i^{e_i},x_{n+i}] = 1$, and since $e_i\neq0$ and $x_i$ has order $p$, also $[x_i,x_{n+i}] = 1$.
As $[x_{2k},x_{2k+n}] = t^k([x_0,x_n]) = t^k(w) \neq 1$ for all $k \in \mathbb{Z}$, it follows that $i$ must be odd. So there is $m\in\mathbb{Z}$ with $i=2m+1$, therefore $[x_1,x_{n+1}] = t^{-m}([x_i,x_{n+i}]) = t^{-m}(1) = 1$. This proves the first assertion, the second follows by a symmetric argument. \end{proof}
\begin{prop} \label{abelian<=>single-shift-inv} The following are equivalent: \begin{enumerate} \item The group $X$ is abelian. \item The group $X$ is elementary abelian (i.e.\ abelian and of exponent $p$). \item The mapping $x_k \mapsto x_{k+1}$ extends to an automorphism of $X$. \end{enumerate} \end{prop}
\begin{proof} By (ZS2), the generators $x_n$ have order $p$. Thus if $X$ is abelian, then $X$ has exponent $p$, so (i) implies (ii). The converse implication is trivial. That (ii) implies (iii) is now readily verified.
Assume that $X$ is not abelian and let $n:= \ell(\Theta)$. By definition of the lower cutoff there is $m\in\mathbb{Z}$ with $[x_m,x_{m+n}]\neq1$; applying a suitable power of $t$, we may assume $m\in\{0,1\}$. By \cref{MaxProposition5.4(1)}, exactly one of $[x_0,x_n]$ and $[x_1,x_{n+1}]$ is trivial. An automorphism extending $x_k \mapsto x_{k+1}$ would map $[x_0,x_n]$ to $[x_1,x_{n+1}]$, so no such automorphism exists. Thus (iii) implies (i). \end{proof}
\section{Shift-invariant subgroups}
In this section we study subgroups of $X$ which are invariant under the shift map $t$. We prove that such subgroups are close to forming $\mathbb{Z}$-systems again. Moreover, those of infinite index are necessarily abelian.
\begin{defn} A subgroup $Y\leq X$ is called \Defn{shift-invariant} if $t(Y) = Y$. We set \begin{align*} Y_{even} &:= \{ y \in Y^* \mid \mathbf{n}(y) \in 2\mathbb{Z} \},\\ Y_{odd} &:= \{ y \in Y^* \mid \mathbf{n}(y) \in 1+2\mathbb{Z} \}. \end{align*}
For $n\leq m\in\mathbb{Z}\cup\{\pm\infty\}$, set $Y_{n,m}:=Y\cap X_{n,m}$. \end{defn}
\begin{remark} By shift-invariance of $Y$, we have $t(Y_{n,m})=Y_{n+2,m+2}$. \end{remark}
\begin{lemma} \label{MaxProposition5.7(1)} Let $Y \leq X$ be shift-invariant. Then the following are equivalent: \begin{enumerate} \item The index of $Y$ in $X$ is finite. \item Both $Y_{even}$ and $Y_{odd}$ are non-empty. \end{enumerate} \end{lemma}
\begin{proof} Let $Y$ be of finite index in $X$. Suppose, by contradiction, that $\mathbf{n}(y)$ is even for all $y \in Y^*$. Then $\mathbf{n}(Y^*) = 2\mathbb{Z}$ because $Y$ is shift-invariant. In view of \cref{startindex}, for each odd integer $m$ we have \[\mathbf{n}(x_{m}Y) = \{ m \} \cup \{2k\in2\mathbb{Z}\mid 2k<m\}.\] Hence for any two odd integers $m \neq m'$ we have $x_mY \neq x_{m'}Y$. Thus we get infinitely many cosets of $Y$, which is a contradiction.
Similarly the assumption that $\mathbf{n}(y)$ is odd for all $y \in Y^*$ leads to a contradiction and hence (i) implies (ii).
For the converse, let $a$ (resp. $b$) be of minimal width in $Y_{even}$ (resp. $Y_{odd}$). Since $Y$ is shift-invariant, we may assume that $\mathbf{n}(a) = 0$ and $\mathbf{n}(b) = 1$.
We claim that $\mathbf{m}(a)$ and $\mathbf{m}(b)$ have different parity. Suppose that this is not the case. Then there exists an element $k \in \mathbb{Z}$ such that $\mathbf{m}(t^k(a)) = \mathbf{m}(b)$. Using \cref{cancel end} it follows that there is $\lambda\in\mathbb{Z}_p^*$ such that $y:= b^\lambda t^k(a)$ satisfies either $y\in Y_{even}$ and $\mathbf{w}(y) < \mathbf{w}(a)$, or $y \in Y_{odd}$ and $\mathbf{w}(y) < \mathbf{w}(b)$. Either case contradicts the minimality of $a$ resp.\ $b$.
Let $m := \max \{ \mathbf{w}(a),\mathbf{w}(b) \}$. Since $\mathbf{n}(a)=0$ and $\mathbf{n}(b) =1$, by using \cref{cancel start} and induction, it follows that $X_{-\infty,m}Y \subseteq X_{0,m}Y$. As $\mathbf{m}(a)$ and $\mathbf{m}(b)$ have different parity, one also sees that $X_{0,\infty} Y \subseteq X_{0,m}Y$. As $X = X_{-\infty,m}X_{0,\infty}$ it follows that $X=X_{0,m}Y$. Thus $|X:Y|\leq |X_{0,m}| = p^{m+1}$. \end{proof}
\begin{prop} \label{MaxProposition5.7(3)}
Let $1\neq Y\leq X$ be shift-invariant with $|X:Y| = \infty$, let $u \in Y^*$ be of minimal width in $Y$ and $y_n := t^n(u)$ for $n \in \mathbb{Z}$. Then the following hold. \begin{enumerate} \item $Y = \gen{y_n \mid n \in \mathbb{Z}}$. \item $(Y,(y_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system. \item $Y$ is elementary abelian of exponent $p$. \end{enumerate} \end{prop}
\begin{proof} \begin{enumerate}
\item By shift-invariance of $Y$ we have $y_n\in Y$, thus $U := \gen{y_n \mid n \in \mathbb{Z}} \leq Y$. For $y\in Y$ we will show by induction on $\mathbf{w}(y)$ that $y\in U$, and hence $Y=U$. If $\mathbf{w}(y)=0$ then $y=1\in U$. So suppose $\mathbf{w}(y)>0$. Now $|X:Y| = \infty$, therefore $\mathbf{n}(y)$ and $\mathbf{n}(u)$ have the same parity by \cref{MaxProposition5.7(1)}. Hence there is $k\in\mathbb{Z}$ such that \[ \mathbf{n}(y)=\mathbf{n}(u)+2k=\mathbf{n}(t^k(u))=\mathbf{n}(y_k).\] Moreover, $\mathbf{w}(y)\geq \mathbf{w}(y_k)=\mathbf{w}(u)$. Thus by \cref{cancel start} there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{w}(y_k^\lambda y)<\mathbf{w}(y)$. Hence by the induction hypothesis $y_k^\lambda y\in U$. Since also $y_k\in U$ we get $y\in U$.
\item (ZS1) follows from Assertion (i). (ZS3) follows from the fact that $t(Y) = Y$ and $t(y_n) = y_{n+1}$ for all $n \in \mathbb{Z}$, hence $s:=t^2$ is a shift automorphism for $(Y,(y_n)_{n \in \mathbb{Z}})$. It remains to verify (ZS2). Without loss of generality, assume $\mathbf{n}(u)\in\{0,1\}$ and thus $\mathbf{n}(y_n)\in\{2n,2n+1\}$ for $n\in\mathbb{Z}$.
For $n\leq m\in\mathbb{Z}$ let $U_{n,m}:=\gen{y_n,\ldots,y_m}\leq X_{2n,\infty}$. As $\mathbf{n}(y_n) \in\{2n,2n+1\}$, we have $y_n\notin X_{2n+2,\infty}$, hence $y_n\notin U_{n+1,m}\leq X_{2n+2,\infty}$. \Cref{cancel power} implies that $\mathbf{w}(u^p) < \mathbf{w}(u)$. Since $u$ was of minimal width, we conclude $u^p=1$. Thus $y_n$ has order $p$. Since $p$ is prime, we get $\gen{y_n}\cap U_{n+1,m}=1$.
Now we claim that $U_{n,m}=\gen{y_n}U_{n+1,m}$. To see this, pick $y\in U_{n,m}$. If $\mathbf{n}(y)>\mathbf{n}(y_n)$ then $y\in U_{n+1,m}$. Otherwise $\mathbf{n}(y)=\mathbf{n}(y_n)$, and then \cref{cancel start} implies that there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{n}(y_n^\lambda y) > \mathbf{n}(y_n)$, hence $y_n^\lambda y \in U_{n+1,m}$. The claim follows.
But $\gen{y_n}\cap U_{n+1,m}=1$ and $U_{n,m}=\gen{y_n}U_{n+1,m}$
imply $|U_{n,m}|=p\cdot |U_{n+1,m}|$. By induction it follows that $|U_{n,m}| = p^{m-n+1}$. Thus (ZS2) holds.
\item By $(ii)$, $(Y,(y_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system. The shift map $t$ of $(X,(x_n)_{n \in \mathbb{Z}})$ leaves $Y$ invariant and thus restricts to an automorphism of $Y$ which extends the mapping $y_k\mapsto y_{k+1}$. The claim thus follows from \cref{abelian<=>single-shift-inv}.
\qedhere \end{enumerate} \end{proof}
\begin{lemma} \label{MaxProposition7.1(2)} Let $Y\leq X$ be shift-invariant with $Y_{even} \neq \emptyset \neq Y_{odd}$. Let $a$ (resp. $b$) be of minimal width in $Y_{even}$ (resp. $Y_{odd}$) such that $\mathbf{n}(a) = 0$ and $\mathbf{n}(b) = 1$. For $n \in \mathbb{Z}$ let $y_{2n} := t^n(a)$ and $y_{2n+1} := t^n(b)$.
Then the following hold: \begin{enumerate} \item $Y = \gen{y_n \mid n \in \mathbb{Z}}$. \item If $\mathbf{w}(a) = \mathbf{w}(b)$, then $(Y,(y_n)_{n \in \mathbb{Z}})$ is a $\mathbb{Z}$-system. \end{enumerate} \end{lemma}
\begin{proof} \begin{enumerate} \item By shift-invariance of $Y$ we have $y_n\in Y$, thus $U := \gen{y_n \mid n \in \mathbb{Z}} \leq Y$. For $y\in Y$ we will show by induction on $\mathbf{w}(y)$ that $y\in U$, and hence $Y=U$. If $\mathbf{w}(y)=0$ then $y=1\in U$. So suppose $\mathbf{w}(y)>0$ and let $n:=\mathbf{n}(y)$. Then $\mathbf{w}(y)\geq \mathbf{w}(y_n)$. Since $\mathbf{n}(y_n)=n=\mathbf{n}(y)$, by \cref{cancel start} there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{w}(y_n^\lambda y)<\mathbf{w}(y)$. Hence by the induction hypothesis $y_n^\lambda y\in U$. Since also $y_n\in U$ we get $y\in U$.
\item This follows by a similar argument as in the proof of Assertion (ii) in \cref{MaxProposition5.7(3)}. (Note that we do not make use of this observation in this paper.) \qedhere \end{enumerate} \end{proof} Combining the previous statements yields the following:
\begin{lemma} \label{shift-inv-gens} Let $Y$ be a shift-invariant subgroup of $X$. Then there are elements $a,b\in Y$ such that $Y=\gen{t^k(a), t^k(b) \mid k\in \mathbb{Z}}$. \end{lemma}
\begin{proof} If $Y$ has finite index in $X$, this follows from \cref{MaxProposition5.7(1),MaxProposition7.1(2)}. If $Y$ is trivial, we can choose $a=b=1$. Finally, if $Y$ is non-trivial but has infinite index, this follows from \cref{MaxProposition5.7(3)}. \end{proof}
\begin{remark} We can make the choice of generators $a,b$ unique by requiring that each should either be trivial; or else start at index 0 or 1, be of minimal width amongst all such elements, and have ``lead exponent'' equal to 1.
The resulting generating system is close to being a $\mathbb{Z}$-system again. However, the generators are not necessarily independent anymore; in particular, it can happen that $a^p=b$. \end{remark}
\begin{lemma} \label{decompose shift inv} Let $Y$ be a shift-invariant subgroup of $X$. Then for every $n\in\mathbb{Z}$, there is $m \in \mathbb{Z}$ such that $Y = Y_{-\infty,m} Y_{n,\infty}$. \end{lemma}
\begin{proof} Pick $a,b\in Y$ as in \cref{shift-inv-gens}. Since $Y$ is generated by all shifts of $a$ and $b$, it suffices to choose $m$ large enough such that $Y_{-\infty,m}$ contains all the shifts of $a$ and $b$ which are not in $Y_{n,\infty}$. For example, choose $m:= \max\{n + \mathbf{w}(a), n + \mathbf{w}(b) \}$. \end{proof}
\section{One-sided normal subgroups}
Throughout this section, let $Y$ be a shift-invariant subgroup of $X$.
\begin{nota} Let $G$ be a group. The \Defn{normal closure} of $U\subseteq G$ is $\gen{U}^G := \gen{U^G} = \gen{ g^{-1}Ug \mid g\in G}$. \end{nota}
\begin{remark} Recall that a group $G$ is \Defn{locally nilpotent} if every finitely generated subgroup of $G$ is nilpotent.
Now every finitely generated subgroup $H$ of $X$ is contained in some $X_{n,m}$ with $n\leq m\in\mathbb{Z}$, which is a finite $p$-group by (ZS2). Hence $H$ is a finite $p$-group; in particular, $X$ is locally finite and locally nilpotent. \end{remark}
\begin{lemma} \label{A<=[AK] then A=1} Let $K$ be nilpotent and $A\leq K$ with $A\leq [A,K]$. Then $A=1$. \end{lemma}
\begin{proof} $K$ is nilpotent, hence its lower central series $K\unrhd [K,K]\unrhd [K,K,K] \unrhd \dots$ vanishes after finitely many steps. Since $A\leq K$, also $[A,K,\dots,K]$ eventually vanishes. From $A\leq [A,K]$ we deduce, by forming the commutator with $K$, that \[ A\leq [A,K] \leq [A,K,K] \leq \dots \leq 1. \qedhere\] \end{proof}
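As a sanity check (our addition, not part of the proof), the lemma can be confirmed by brute force in a small nilpotent group, say the Heisenberg group over $\mathbb{Z}_2$ (order 8, nilpotency class 2):

```python
# Brute-force check of the lemma in the Heisenberg group over F_2:
# every subgroup A with A <= [A, K] is trivial.
from itertools import combinations, product

P = 2  # entries mod 2; elements are triples (a, b, c)

def mul(x, y):
    (a, b, c), (d, e, f) = x, y
    return ((a + d) % P, (b + e) % P, (c + f + a * e) % P)

def inv(x):
    a, b, c = x
    return ((-a) % P, (-b) % P, (a * b - c) % P)

def comm(x, y):
    # group commutator [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

E = (0, 0, 0)
K = list(product(range(P), repeat=3))

def closure(gens):
    # subgroup generated by gens (K is finite, so this terminates)
    S = {E} | set(gens)
    while True:
        new = {mul(x, y) for x in S for y in S} - S
        if not new:
            return S
        S |= new

# every subgroup of a group of order 8 is 3-generated, so this
# enumerates all subgroups A of K
subgroups = {frozenset(closure(g)) for r in range(4) for g in combinations(K, r)}
for A in subgroups:
    AK = closure([comm(a, k) for a in A for k in K])
    if A <= AK:          # A <= [A, K] ...
        assert A == {E}  # ... forces A = 1, as the lemma predicts
```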
\begin{lemma} \label{A-notin-[AU]_U}
Let $G$ be a locally nilpotent group and let $A\leq G$ be finitely generated. Then $A\leq \gen{[A,G]}^G$ if and only if $|A|=1$. \end{lemma}
\begin{proof}
If $|A|=1$, the assertion is obvious. So suppose $A\leq \gen{[A,G]}^G$ and $A=\gen{a_1,\ldots,a_n}$. Then for $1\leq i\leq n$, there exist $\ell_i\in\mathbb{N}$ and elements $a_{ij}\in A$, $g_{ij},h_{ij}\in G$ such that \begin{equation} \label{eqn:normal-closure}
a_i = [a_{i1},g_{i1}]^{h_{i1}} \cdots [a_{i\ell_i},g_{i\ell_i}]^{h_{i\ell_i}}. \end{equation} We now define the finitely generated subgroup \[ H:=\gen{ h_{ij}, g_{ij} \mid 1\leq i\leq n, 1\leq j\leq \ell_i }. \] Moreover, we set $K:=\gen{A, H}$, and observe that \[ A^H := \gen{A}^H =\gen{a^h\mid a\in A, h\in H} \leq K. \] Since $A$ and $H$ are finitely generated, so is $K$, hence $K$ is nilpotent. From \Cref{eqn:normal-closure} we then conclude $A\leq [A^H,H]$ hence $A^H\leq [A^H,H] \leq [A^H,K]$. Applying \cref{A<=[AK] then A=1}, we conclude that $A^H=1$. Hence $A=1$. \end{proof}
\begin{lemma} \label{lem:100} Let $n\in\mathbb{Z}$. Then there is $y_{n-1}\in Y_{n-1,\infty}$ such that \[ Y_{n-1,\infty} = \gen{ y_{n-1}, Y_{n,\infty} }. \] Moreover, for any $N\geq \mathbf{w}(y_{n-1})-2$, we have \[ Y_{n-1,n+N} = \gen{ y_{n-1}, Y_{n,n+N} }.\] \end{lemma}
\begin{proof} If $Y_{n-1,\infty}= Y_{n,\infty}$ set $y_{n-1}:=1$ and the first assertion clearly holds. Otherwise there exists $y_{n-1}\in Y_{n-1,\infty}$ with $\mathbf{n}(y_{n-1})=n-1$. Let $y\in Y_{n-1,\infty}$. If $\mathbf{n}(y)\geq n$, then $y\in Y_{n,\infty}$. Otherwise, if $\mathbf{n}(y)=n-1$, then by \cref{cancel start} there is $\lambda\in\mathbb{Z}_p^*$ such that $\mathbf{n}(y_{n-1}^\lambda y)\geq n$, hence $y\in \gen{ y_{n-1}, Y_{n,\infty} }$. Thus $Y_{n-1,\infty} \leq \gen{y_{n-1}, Y_{n,\infty}}$. The reverse inclusion is obvious.
The second assertion follows analogously, after observing that $y_{n-1}\in Y_{n-1,n+N}$. Indeed, $\mathbf{n}(y_{n-1})=n-1$ and $\mathbf{m}(y_{n-1})=\mathbf{w}(y_{n-1})+\mathbf{n}(y_{n-1})-1 = n + (\mathbf{w}(y_{n-1})-2)$. \end{proof}
\begin{lemma} \label{lem:105} Let $n\in\mathbb{Z}$. \begin{enumerate} \item If $Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X$, then there is
$M\in\mathbb{N}$ such that $Y_{n-1,n+N}\leq\gen{Y_{n,n+N}}^X$ for all $N \geq M$. \item If $Y_{-\infty,n+1}\leq\gen{Y_{-\infty,n}}^X$, then there is
$M\in\mathbb{N}$ such that $Y_{n-N,n+1}\leq\gen{Y_{n-N,n}}^X$ for all $N \geq M$. \end{enumerate} \end{lemma}
\begin{proof} We prove the first case, the second follows by a symmetric argument. Suppose $Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X$. If $Y_{n-1,\infty}=Y_{n,\infty}$ we are done, as then $Y_{n-1,n+N}=Y_{n,n+N}$ for all $N\in\mathbb{N}$.
Otherwise, let $y_{n-1}\neq 1$ be as in \cref{lem:100}. Since $y_{n-1}\in Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X$, there are $\ell\in\mathbb{N}$, $a_1,\ldots,a_\ell\in Y_{n,\infty}^*$ and $g_1,\ldots,g_\ell\in X$ such that $y_{n-1} = a_1^{g_1}\cdots a_\ell^{g_\ell}$. Let $M:=\max\{\mathbf{m}(a_1),\ldots,\mathbf{m}(a_\ell),\mathbf{m}(y_{n-1})\}-n$. Then $a_1,\ldots,a_\ell\in Y_{n,n+M}$, hence for $N\geq M$ we have \[ y_{n-1} \in \gen{Y_{n,n+M}}^X \leq \gen{Y_{n,n+N}}^X.\] Moreover, by definition \[ M\geq \mathbf{m}(y_{n-1}) - n = \mathbf{m}(y_{n-1}) - \mathbf{n}(y_{n-1}) - 1 = \mathbf{w}(y_{n-1}) - 2. \] Thus for $N\geq M$, \cref{lem:100} yields \[ Y_{n-1,n+N} = \gen{y_{n-1}, Y_{n,n+N}} \leq \gen{Y_{n,n+N}}^X. \qedhere \] \end{proof}
\begin{lemma}\label{finite cut}\ \nopagebreak \begin{enumerate} \item If $Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X$ for all $n\in\mathbb{Z}$, then there is $M\in\mathbb{N}$ with $Y_{-\infty,M}\leq \gen{Y_{0,M}}^X$. \item If $Y_{-\infty,n+1}\leq\gen{Y_{-\infty,n}}^X$ for all $n\in\mathbb{Z}$, then there is $M\in\mathbb{N}$ with $Y_{0,\infty}\leq \gen{Y_{0,M}}^X$. \end{enumerate} \end{lemma}
\begin{proof} We prove the first case, the second follows by a symmetric argument. The hypothesis implies for all $n\in\mathbb{N}$ that \[ Y_{n-2,\infty}\leq\gen{Y_{n-1,\infty}}^X \quad\text{ and }\quad Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X, \quad\text{ hence }\quad Y_{n-2,\infty}\leq\gen{Y_{n,\infty}}^X.
\] Thus for all $n,k\in\mathbb{N}$ we have \[ Y_{n-k,\infty}\leq\gen{Y_{n,\infty}}^X.\] But this implies $Y\leq\gen{Y_{n,\infty}}^X$. By \cref{shift-inv-gens}, there are elements $a,b\in Y$ such that $Y=\gen{t^k(a), t^k(b) \mid k\in \mathbb{Z}}$. As $Y$ is shift-invariant, we may assume $\mathbf{n}(a)=-2$ or $a=1$, and $\mathbf{n}(b)=-1$ or $b=1$. Then \[ a,b\in Y_{-2,\infty}\leq \gen{Y_{0,\infty}}^X.\] By applying \cref{lem:105} twice we deduce the existence of some value $M\in \mathbb{N}$ such that $a,b\in \gen{Y_{0,M}}^X$. But then for all $N\geq M$ \begin{equation} \label{eqn:Y_{-2}-sub-Y_0^X} Y_{-2,N} = \gen{a,b,Y_{0,N}}\leq \gen{Y_{0,N}}^X. \end{equation} By shift-invariance, we can now conclude that \[Y_{-4,M} = t^{-1}(Y_{-2,M+2}) \overset{(\ref{eqn:Y_{-2}-sub-Y_0^X})}\leq t^{-1}(\gen{Y_{0,M+2}}^X) = \gen{Y_{-2,M}}^X \overset{(\ref{eqn:Y_{-2}-sub-Y_0^X})}\leq \gen{Y_{0,M}}^X .\] By induction it follows that $Y_{-\infty,M}\leq \gen{Y_{0,M}}^X$. \end{proof}
\begin{lemma}\label{lem:110} If for all $n\in\mathbb{Z}$ we have \[ Y_{n-1,\infty}\leq\gen{Y_{n,\infty}}^X \quad\text{ and }\quad Y_{-\infty,n+1}\leq\gen{Y_{-\infty,n}}^X \] then there exists $M\in\mathbb{N}$ such that $Y\leq\gen{Y_{0,M}}^X$. \end{lemma}
\begin{proof} This is an immediate consequence of \cref{finite cut}. \end{proof}
\begin{prop}\label{[XY]=Y} Suppose $[X,Y]=Y$. Then either $Y=1$, or there exists $n\in\mathbb{Z}$ such that at least one of the following holds: \begin{enumerate} \item $Y_{n-1,\infty}\nleq\gen{Y_{n,\infty}}^X$. \item $Y_{-\infty,n+1}\nleq\gen{Y_{-\infty,n}}^X$. \end{enumerate} \end{prop}
\begin{proof} Suppose the claim is false. Then by \cref{lem:110} there is $M\in\mathbb{N}$ such that $Y\leq \gen{Y_{0,M}}^X$. The group $Y_{0,M}$ is finite, so we can pick a finite generating set $Y_{0,M}=\gen{z_1,\ldots,z_k}$. Then for $1\leq i\leq k$, since $z_i\in Y=[X,Y]$, there are $\ell_i\in\mathbb{N}$ and $y_{ij}\in Y$, $g_{ij}\in X$ for $1\leq j\leq \ell_i$ such that \[ z_i = [g_{i1},y_{i1}]\cdots[g_{i\ell_i},y_{i\ell_i}].\] Let \begin{align*} n&:=\min\{\mathbf{n}(y_{ij}) \mid 1\leq i\leq k, 1\leq j\leq \ell_i \},\\ m&:=\max\{\mathbf{m}(y_{ij}) \mid 1\leq i\leq k, 1\leq j\leq \ell_i \}. \end{align*} Then $z_i\in [X,Y_{n,m}]$, hence $Y_{0,M}\subseteq [X,Y_{n,m}]$. But then \begin{equation} \label{eqn Ynm=1} Y_{n,m} \leq Y \leq \gen{Y_{0,M}}^X \leq \gen{[X,Y_{n,m}]}^X. \end{equation} Since $X$ is locally nilpotent, \cref{A-notin-[AU]_U} implies $Y_{n,m}=1$. Inserting this into \Cref{eqn Ynm=1} yields $Y=1$. \end{proof}
\begin{lemma}\label{normal-aux}
Suppose $Y\unlhd X$ and $|X:Y| =\infty$. Then the following hold. \begin{enumerate} \item If $Y_{n,\infty}\nleq\gen{Y_{n+1,\infty}}^X$ for some $n\in\mathbb{Z}$, then $Y_{n,\infty}\unlhd X$. \item If $Y_{-\infty,n}\nleq\gen{Y_{-\infty,n-1}}^X$ for some $n\in\mathbb{Z}$, then $Y_{-\infty,n}\unlhd X$. \end{enumerate} \end{lemma}
\begin{proof} We prove the first case, the second follows by a symmetric argument. So suppose that there is $n\in\mathbb{Z}$ such that $Y_{n,\infty}\nleq\gen{Y_{n+1,\infty}}^X$; in particular $Y\neq1$. By \cref{MaxProposition5.7(3)} there is $y\in Y$ such that $(Y,(t^k(y))_{k \in \mathbb{Z}})$ is a $\mathbb{Z}$-system. As $Y$ is shift-invariant, we may assume that $\mathbf{n}(y)=n$, and so $y\in Y_{n,\infty}$ but $y\notin \gen{Y_{n+1,\infty}}^X$.
Suppose now that there is $m<n$ with $[x_m,y] \neq 1$. Then there are integers $i_1 < \dots < i_s < 0$ and exponents $e_1,\dots,e_s\in\mathbb{Z}_p^*$, such that $[x_m,y] =t^{i_1}(y)^{e_1} \cdots t^{i_s}(y)^{e_s}$. Applying $t^{-i_1}$ we get \[ [x_{m-2i_1},t^{-i_1}(y)] =y^{e_1} \cdot t^{i_2-i_1}(y)^{e_2} \cdots t^{i_s -i_1}(y)^{e_s} \] and, since $-i_1$, $i_2-i_1$, \ldots, $i_s-i_1$ all are positive, we conclude \[ y^{e_1} = [x_{m-2i_1}, t^{-i_1}(y)] \cdot \left(t^{i_2-i_1}(y)^{e_2} \cdots t^{i_s -i_1}(y)^{e_s}\right)^{-1} \in \gen{Y_{n+1,\infty}}^X. \] But $\mathbf{n}(y^{e_1})=\mathbf{n}(y)=n$, thus $Y_{n,\infty}=\gen{y^{e_1},Y_{n+1,\infty}} \leq\gen{Y_{n+1,\infty}}^X$, contradicting the hypothesis. Therefore $[x_m,y] =1$ for all $m <n$. Since $Y_{n,\infty} =\gen{t^i(y) \mid i\in\mathbb{N}}$, we get $X_{-\infty,n-1} \leq C_X(Y_{n,\infty})$ and so, using that $Y\unlhd X$, \[ [X,Y_{n,\infty}] =[X_{n,\infty},Y_{n,\infty}] \leq [X_{n,\infty},Y] \cap X_{n,\infty} \leq Y \cap X_{n,\infty} = Y_{n,\infty}. \qedhere \] \end{proof}
Thus we obtain the main result of this section:
\begin{prop} \label{normal}
Let $(X,(x_n)_{n \in \mathbb{Z}})$ be a $\mathbb{Z}$-system of prime order $p$. Suppose $Y$ is a shift-invariant subgroup of $X$, with $|X:Y|=\infty$ and $[X,Y]=Y$. Then there is $n\in\mathbb{Z}$ such that $Y_{n,\infty}\unlhd X$ or $Y_{-\infty,n}\unlhd X$, where $Y_{n,\infty}:=Y\cap X_{n,\infty}$ and $Y_{-\infty,n}:=Y\cap X_{-\infty,n}$. \end{prop}
\begin{proof} This follows by first applying \cref{[XY]=Y}, then \cref{normal-aux}. \end{proof}
\section{Infinite abelianization}
\begin{nota} Let $G$ be a group. Then let $G^{(0)}:=G$, let $G':=[G,G]$ be the \Defn{derived subgroup} and for $k\in\mathbb{N}$ let $G^{(k+1)}:=[G^{(k)},G^{(k)}]$. \end{nota}
\begin{lemma}\label{non perfect} Let $1\neq Y\unlhd X$ be shift-invariant. Then $[Y,Y]< Y$. \end{lemma}
\begin{proof} Suppose that $[Y,Y] =Y$. Then also $[X,Y]=Y$, since $Y=[Y,Y]\leq[X,Y]\leq Y$ by normality of $Y$. Since also $Y\neq1$, by \cref{[XY]=Y} this implies that there exists $n\in \mathbb{Z}$ such that $Y_{n-1,\infty}\nleq\gen{Y_{n,\infty}}^X$ or $Y_{-\infty,n+1}\nleq\gen{Y_{-\infty,n}}^X$ holds. Suppose that $Y_{n-1,\infty}\nleq\gen{Y_{n,\infty}}^X$ (the other case is dealt with by a symmetric argument).
Let $N:=\gen{Y_{n,\infty}}^X$. Then $N\unlhd X$ and by what we just said $Y_{n-1,\infty}\nleq N$, thus $Y\neq N$. On the other hand, from $Y_{n,\infty}\leq Y\unlhd X$ it follows that $N\unlhd Y$.
By \cref{decompose shift inv} there is $m \in \mathbb{Z}$ such that \[ Y \overset{\ref{decompose shift inv}}= Y_{-\infty,m} Y_{n,\infty} \leq Y_{-\infty,m} N \leq Y, \] hence $Y = Y_{-\infty,m} N$. Choose $m\in\mathbb{N}$ minimal with this property. Then $Y_{-\infty,m}' \leq Y_{-\infty,m-1}$ by (ZS5) and so \[ [Y,Y] = Y' \leq Y_{-\infty,m}'N \leq Y_{-\infty,m-1}N < Y,\] a contradiction. \end{proof}
\begin{cor} \label{derived index arbitrary}
For $k\in\mathbb{N}$, we have $|X:X^{(k)}|\geq p^k$. \end{cor}
\begin{proof} The claim follows by induction on $k$, and the following observations:
$X^{(k)}$ is a characteristic subgroup of $X$, hence shift-invariant and normal. Thus if $X^{(k)}\neq 1$, then $X^{(k+1)}< X^{(k)}$ by \cref{non perfect}. And if $X^{(k)}= 1$, then $|X:X^{(k)}|=|X:1|=|X|=\infty$. \end{proof}
\begin{lemma}[{\cite[Lemma 5.9]{MKS66}}] \label{lift-nilp-gens} Let $G$ be a nilpotent group. If $z_1,\ldots,z_\ell\in G$ satisfy $G/G'=\gen{z_1G',\ldots,z_\ell G'}$, then $G=\gen{z_1,\ldots,z_\ell}$. \end{lemma}
\begin{lemma} \label{inf-derived-index}
There is $k\in\mathbb{N}$ such that $|X:X^{(k)}|=\infty$. \end{lemma}
\begin{proof}
Suppose $|X:X^{(k)}|<\infty$ for all $k\in\mathbb{N}$. Choose $z_1,\ldots,z_\ell\in X$ such that $X/X'=\gen{z_1X',\ldots,z_\ell X'}$.
For $k\in\mathbb{N}$, the groups $G_k:=X/X^{(k)}$ are finite $p$-groups and hence nilpotent. Next observe that \[ G_k/G_k' \cong X/X' = \gen{z_1X',\ldots,z_\ell X'} \] implies that \[ G_k/G_k' = \gen{\hat{z}_1G_k',\ldots,\hat{z}_\ell G_k'}, \] where $\hat{z}_1:=z_1X^{(k)},\ldots,\hat{z}_\ell:=z_\ell X^{(k)}$. Therefore, by \cref{lift-nilp-gens} we conclude \[G_k=\gen{z_1X^{(k)},\ldots,z_\ell X^{(k)}}.\]
Now let $Z:=\gen{z_1,\ldots,z_\ell}\leq X$. Since $X$ is locally finite, $|Z|<\infty$. It follows that $X=ZX^{(k)}$ for all $k\in\mathbb{N}$, hence $|X:X^{(k)}|\leq |Z|$. But this is a contradiction, as $|X:X^{(k)}|$ becomes arbitrarily large by \cref{derived index arbitrary}. \end{proof}
\begin{lemma}[{\cite[5.2.6]{Rob96}}] \label{G finite}
A nilpotent group $G$ with $|G:G'| < \infty$ is finite. \end{lemma}
\begin{lemma} \label{nilpotent-extension}
Let $G$ be a $p$-group, $N\unlhd G$ nilpotent of finite exponent and $|G:N| < \infty$. Then $G$ is nilpotent of finite exponent. \end{lemma}
\begin{proof}
We will assume $|G:N|=p$; the general case follows by induction on $|G:N|$. Let \[ Z_0:=1 \leq Z_1 := Z(N) \leq Z_2 \leq \dots \leq Z_n=N \] be the upper central series of $N$. Then for all $i$, the $Z_i$ are characteristic in $N$ and hence normal in $G$. Since $N$ has finite exponent, we can refine this series to a series \[ W_0:=1 \leq W_1 \leq \dots \leq W_m = N \] such that $W_i$ is normal in $G$ and $M_i:=W_i/W_{i-1}$ has exponent $p$ for all $i>0$. In fact, since we refined a central series, the $M_i$ are elementary abelian $p$-groups, in other words, vector spaces over the field $\mathbb{F}_p$ of order $p$.
Let $x \in G \setminus N$. Since $N$ acts trivially on $M_i$, and since $|G:N|=p$, it follows for all $i>0$ that $x$ induces an automorphism $x_i$ of order at most $p$ on the vector space $M_i$. Since $x_i^p=1$, the linear map $x_i$ has a minimal polynomial dividing $t^p -1 =(t-1)^p$.
But then $[v,\underbrace{x_i, \ldots, x_i}_{p}]=1$ for all $v \in M_i$. Hence we can refine the series in such a way that $G$ acts trivially on each factor. Therefore $G$ is nilpotent, and since $N$ and $G/N$ have finite exponent, the exponent of $G$ is also finite. \end{proof}
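\begin{remark} The last step in the proof of \cref{nilpotent-extension} is the ``freshman's dream'' in disguise: writing the action of $x_i$ on the $\mathbb{F}_p$-vector space $M_i$ additively (a notational convention used only in this remark), we have $[v,x_i]=v(x_i-1)$, and hence \[ [v,\underbrace{x_i,\ldots,x_i}_{p}] = v(x_i-1)^p = v(x_i^p-1) = 0, \] since $(t-1)^p=t^p-1$ over $\mathbb{F}_p$ and $x_i^p=1$. \end{remark}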
\begin{remark} Note that the condition that the exponent of $N$ is finite is essential. For example, let $G$ be the injective limit of dihedral groups $(D_{2^n})_{n\geq 1}$, that is \[ G = \gen{ s,r_1,r_2,r_3,\dots \mid s^2=1=r_1^2,\ r_{n+1}^2=r_n,\
r_ns=sr_n^{-1}\ \text{ for } n\geq 1}. \] Let $N$ be the normal subgroup generated by the rotations $r_n$. Then $N$ is an abelian $2$-group and $G/N$ has order $2$, but $[G,N]=N$. \end{remark}
\begin{thm}\label{commutator group} Let $(X,(x_n)_{n \in \mathbb{Z}})$ be a $\mathbb{Z}$-system of prime order $p$. Then $X$ has infinite abelianization $X/X'$. \end{thm}
\begin{proof} For $k\in\mathbb{N}$, let $G_k:=X/X^{(k)}$ and $H_k:=X^{(k)}/X^{(k+1)}$. Since $G_0$ is trivial, \cref{inf-derived-index} implies that there is $k\in\mathbb{N}$ such that
$|G_k|<\infty$ and $|G_{k+1}|=\infty$. We have $X^{(k+1)} \leq X^{(k)} \leq X$ and therefore
\[ |G_{k+1}| = |X:X^{(k+1)}| = |X:X^{(k)}|\cdot |X^{(k)} : X^{(k+1)}| = |G_k| \cdot |H_k|.\]
Thus $|H_k| = \infty$.
Since $X^{(k)}$ is shift-invariant, by \cref{shift-inv-gens} it is generated by the shifts of two elements $a,b\in X^{(k)}$, and hence \[ H_k = \gen{ t^m(a)X^{(k+1)},\ t^m(b)X^{(k+1)} \mid m\in\mathbb{Z} }.\]
Since $H_k$ is an abelian $p$-group, there is $n\in\mathbb{N}$ such that these generators all have orders dividing $p^n$. Thus $H_k$ has finite exponent and as $|G_{k+1}:H_k| = |G_k| < \infty$, the group $G_{k+1}$ is nilpotent by \cref{nilpotent-extension}. But $G_{k+1}$ is infinite, so $G_{k+1}/G_{k+1}'\cong X/X'$ must also be infinite by \cref{G finite}. \end{proof}
\section{Nilpotency class 2} \label{sect hauptbeweis}
\begin{lemma} \label{commutator bilinear} Let $Y\unlhd X$, $y,y'\in Y$ and $x\in X$. Then $[yy',x] \in [y,x][y',x] [Y,X,X]$. \end{lemma}
\begin{proof} We have $[Y,X,X]\unlhd X$, hence \[ [yy',x] = [y,x]^{y'} [y',x] = [y,x]\underbrace{[y,x]^{-1} [y,x]^{y'}}_{=[[y,x],y']} [y',x] \in [y,x][y',x] [Y,X,X]. \qedhere \] \end{proof}
\begin{lemma}\label{[YX]=[YXX]}
Let $Y\unlhd X$ be shift-invariant, and suppose $|X:Y| = \infty$. Then $[Y,X,X] =[Y,X]$. \end{lemma}
\begin{proof} For $Y=1$ the claim is obvious, so we suppose $Y\neq 1$. Since $[Y,X,X]\leq[Y,X]$, it suffices to show the reverse inclusion.
As $|X:Y| = \infty$, by \cref{MaxProposition5.7(3)} the shifts of any element $y\in Y^*$ of minimal width in $Y^*$ generate the group $Y$, which is abelian. Set $n:=\mathbf{n}(y)$ and $m:=\mathbf{m}(y)$. Then $Y_{n+1,m}=1$ as $y$ is of minimal width in $Y^*$. We will now show by induction on $N\geq n$ that $[y,x_N]\in[Y,X,X]$. Indeed, for $n\leq N\leq m+1$, we have $[y,x_N]\in Y_{n+1,m}=1\leq[Y,X,X]$.
So suppose $N>m+1$, and $[y,x_N]\neq 1$. Since $[y,x_N]\in Y_{n+1,N-1}$, applying (ZS6) to the $\mathbb{Z}$-system $(Y,t^k(y)_{k \in \mathbb{Z}})$ yields that there are uniquely determined values $s\in\mathbb{N}$, $i_1,\dots,i_s\in\mathbb{N}$ and $\lambda_1,\dots,\lambda_s\in\mathbb{Z}_p^*$ such that \begin{equation} \label{eqn:yxN=...} 0 < 2i_1 < \ldots < 2i_s \leq N-1-m \quad\text{ and }\quad [y,x_N] = t^{i_1}(y)^{\lambda_1} \cdots t^{i_s}(y)^{\lambda_s}. \end{equation} If $s>1$, then for $k=2,\ldots, s$, the preceding inequality together with $0<i_1<i_k$ implies \[ m+1\leq N-2i_k < N+2i_1-2i_k = N - 2(i_k-i_1) < N, \] hence by the induction hypothesis and by the shift-invariance of $[Y,X,X]$ we have \begin{equation} \label{eqn:comms in YXX}
[t^{i_k}(y),x_{N+2i_1}] = t^{i_k}([y,x_{N+2i_1-2i_k}]) \in [Y,X,X]. \end{equation} Applying \cref{commutator bilinear} repeatedly, we find \begin{align*}
[[y,x_N],x_{N+2i_1}] &\overset{(\ref{eqn:yxN=...})}= [t^{i_1}(y)^{\lambda_1} \cdots t^{i_s}(y)^{\lambda_s},x_{N+2i_1}] \\ &\overset{\ref{commutator bilinear}}{\in} [t^{i_1}(y)^{\lambda_1},x_{N+2i_1}] \cdots [t^{i_s}(y)^{\lambda_s},x_{N+2i_1}] [Y,X,X] \\ &\overset{(\ref{eqn:comms in YXX})}{=} [t^{i_1}(y)^{\lambda_1},x_{N+2i_1}][Y,X,X] \\ &\overset{\ref{commutator bilinear}}{=} [t^{i_1}(y),x_{N+2i_1}]^{\lambda_1}[Y,X,X] \\ &= t^{i_1}([y,x_N])^{\lambda_1}[Y,X,X] \\ &= t^{i_1}([y,x_N]^{\lambda_1})[Y,X,X]. \end{align*} Therefore $t^{i_1}([y,x_N]^{\lambda_1}) \in [Y,X,X]$. But $[Y,X,X]$ is shift-invariant, hence we also have $[y,x_N]^{\lambda_1} \in [Y,X,X]$. And $Y$ has prime exponent $p$, thus also $[y,x_N] \in [Y,X,X]$. This concludes the proof of the claim that $[y,x_N] \in [Y,X,X]$ for all $N \geq n$.
A similar argument shows that $[y,x_N] \in [Y,X,X]$ also holds for all $N < n$. But $[Y,X] =\gen{t^k([y,x_N]) \mid k,N \in \mathbb{Z}}^X$, therefore $[Y,X] = [Y,X,X]$. \end{proof}
\begin{remark} Suppose that $G$ and $V$ are groups and that $G$ acts on $V$ from the right by automorphisms. Then we define \[ [V,G] := \gen{ v^{-1} \cdot v^g \mid g\in G, v\in V}. \] This is a natural extension of the commutator group notation, e.g.\ for $V\unlhd G$. \end{remark}
\begin{lemma} \label{[VG]<V} Let $G$ and $V$ be $p$-groups, with $G$ acting on $V$ by automorphisms. If $V$ is finite and non-trivial, then $[V,G]$ is a proper subgroup of $V$. \end{lemma}
\begin{proof} Let $\alpha: G\to\Aut(V)$ be the action homomorphism associated to the action of $G$ on $V$. Since $V$ is finite, also $\Aut(V)$ is finite, and hence $\widetilde{G}:=\alpha(G)$ is finite. Clearly $[V,G] = [V,\widetilde{G}]$. Form the semidirect product $K:=V\rtimes \widetilde{G}$. Then $[V,\widetilde{G}]\leq[V,K]$. Since $V\unlhd K$ we have $[V,K]\leq V$. Moreover, $K$ is a finite $p$-group, and thus it is nilpotent. Hence if $[V,K]=V$, then by \cref{A<=[AK] then A=1} we get $V=1$, a contradiction. Thus \[ [V,G] = [V,\widetilde{G}] \leq [V,K]<V. \qedhere\] \end{proof}
\begin{lemma} \label{finite idx submod} Let $Y\unlhd X$ be shift-invariant with $[X,Y]\neq 1$. Suppose there is $y \in Y^*$ such that $(Y,(t^k(y))_{k \in \mathbb{Z}})$ is a $\mathbb{Z}$-system. Then $Y$ is elementary abelian, and for $m:=\mathbf{m}(y)$, the group $M:=Y_{-\infty,m}=Y\cap X_{-\infty,m}$ is an $\mathbb{F}_p X_{-\infty,m}$-module, and $M_0:=[M,X_{-\infty,m}]$ is a proper, non-trivial submodule of finite index. \end{lemma}
\begin{proof} The group $Y$ is elementary abelian by \cref{abelian<=>single-shift-inv}, hence so is $M$. As $Y\unlhd X$, the group $M$ is an $\mathbb{F}_p X_{-\infty,m}$-module. We compute \[ [Y,X] = \bigcup_{k\in\mathbb{N}} [Y_{-\infty,m+2k},X_{-\infty,m+2k}] = \bigcup_{k\in\mathbb{N}} t^k(M_0). \] The hypothesis states $[X,Y]\neq 1$, so we must have $M_0\neq 1$.
Moreover $y\in M$, but \[ M_0 = [Y_{-\infty,m},X_{-\infty,m}] \leq [X_{-\infty,m},X_{-\infty,m}] \leq X_{-\infty,m-1}, \] and $y\notin X_{-\infty,m-1}$, hence $y\notin M_0$. We conclude that $M_0\neq M$, i.e.\ $M_0$ is a proper, non-trivial submodule.
Since $M=\gen{t^{-k}(y)\mid k\in\mathbb{N}}$, we may also regard $M$ as an $\mathbb{F}_p[t^{-1}]$-module, which is generated by $y\in M$. Hence it is a free $\mathbb{F}_p[t^{-1}]$-module of rank $1$. Now $M_0$ is a proper non-trivial $\mathbb{F}_p[t^{-1}]$-submodule of $M$, thus $M_0$ must have finite index in $M$. \end{proof}
We are now ready to prove our main theorem.
\begin{proof}[Proof of \cref{hauptsatz}] Set $Y:=[X,X,X]$. Our goal is to prove $Y=1$.
Clearly $Y\unlhd X$ and also $Y\unlhd X'$ hold. By \cref{commutator group} we have $|X:X'|=\infty$. We may thus apply \cref{[YX]=[YXX]} to $X'$, which yields \[ Y \overset{\mathrm{def.}}= [X',X] \overset{\ref{[YX]=[YXX]}}= [X',X,X] \overset{\mathrm{def.}}= [Y,X]. \]
In addition, $Y\leq X'$ and $|X:X'|=\infty$ imply $|X:Y|=\infty$. Therefore \cref{normal} is applicable, and proves that there is $n \in \mathbb{Z}$ such that $Y_{n,\infty}\unlhd X$ or $Y_{-\infty,n}\unlhd X$. We may assume (up to a relabeling of the generators of $X$) without loss of generality that the first case holds.
We proceed by assuming that $Y\neq1$ and derive a contradiction. By \cref{MaxProposition5.7(3)} there is $y \in Y^*$ with $\mathbf{n}(y) =n$ and $Y_{n,\infty}=\gen{t^k(y)\mid k\in\mathbb{N}}$. Let \[ N := Y_{n+2,\infty}, \quad m:=\mathbf{m}(y), \quad Y_0:=[Y/N, X_{-\infty,m}], \] where we regard $Y/N$ as an $\mathbb{F}_p X$-module, which is feasible since $Y\unlhd X$ and also \[N=t(Y_{n,\infty})\unlhd t(X) = X. \]
We claim that $Y_0$ is an $\mathbb{F}_p X$-submodule of $Y/N$.
Indeed, we have \[[X_{-\infty,m},X] \leq X' \leq C_X(Y), \quad\text{ implying }\quad X_{-\infty,m}^g \subseteq X_{-\infty,m} C_X(Y)\] for all $g\in X$. Moreover, from $Y=Y^g$ and $[a,bc] = [a,c][a,b]^c$ it follows that \[ [Y, X_{-\infty,m}]^g = [Y^g, X_{-\infty,m}^g] \leq [Y, X_{-\infty,m}C_X(Y)] = [Y, X_{-\infty,m}].\] Hence $Y_0$ is indeed an $\mathbb{F}_p X$-submodule of $Y/N$.
By \cref{finite idx submod}, we have $1 < |M : M_0| < \infty$ for \[M:=Y_{-\infty,m}, \quad M_0 := [M, X_{-\infty,m}]. \]
Since $Y/N$ is an $\mathbb{F}_p X$-module, it is also an $X_{-\infty,m}$-module. In fact $Y/N$ and $M$ are isomorphic as $X_{-\infty,m}$-modules: Indeed, $Y$ is the inner direct product of $N$ and $M$, thus we get the isomorphism \[ M \to Y/N,\ g \mapsto g N.\] This isomorphism maps $M_0$ to $Y_0$, and so
\cref{finite idx submod} implies $1 < |Y/N : Y_0| < \infty$.
Therefore $A:=(Y/N)/Y_0$ is a non-trivial, finite $p$-group on which $X$ acts by automorphisms, and so \cref{[VG]<V} implies $[A,X]<A$. But we proved above that $[Y,X]=Y$, which implies \[ [A,X] = [(Y/N)/Y_0, X] = (Y/N)/Y_0 = A, \] a contradiction. Hence our initial assumption that $Y\neq 1$ was wrong, and so $Y$ is trivial. Since by definition $Y=[X,X,X]$, this completes the proof. \end{proof}
\paragraph{Acknowledgments.}
At an early stage of this project, we had proven a weaker result, namely that \emph{nilpotent} $\mathbb{Z}$-systems of order 2 are nilpotent of class 2. We would like to point out that this preliminary result has been proved independently by Bettina Wilkens. The idea of making use of $\mathbb{F}_p[t]$-modules in this context, which we adopted for the proof of \cref{hauptsatz}, is due to her.
We are grateful to Barbara Baumeister, Maximilian Parr and Richard Weiss for careful proofreading and helpful comments. We also thank Pierre-Emmanuel Caprace and the anonymous referee for useful suggestions.
The research for this paper was undertaken while the first author was on a post-doc position at UCLouvain, in the research group of Pierre-Emmanuel Caprace, and funded by ERC grant \#278469.
The project was also partially supported by DFG grant MU 1281/5-4.
\begin{bibdiv} \begin{biblist}
\bib{AB08}{book}{
author={Abramenko, Peter},
author={Brown, Kenneth~S.},
title={Buildings -- theory and applications},
series={Graduate Texts in Mathematics},
publisher={Springer},
address={Berlin},
date={2008},
volume={248}, }
\bib{AR09}{article}{
author={Abramenko, Peter},
author={R{\'e}my, Bertrand},
title={Commensurators of some non-uniform tree lattices and Moufang twin
trees},
conference={
title={Essays in geometric group theory},
},
book={
series={Ramanujan Math. Soc. Lect. Notes Ser.},
volume={9},
publisher={Ramanujan Math. Soc., Mysore},
},
date={2009},
pages={79--104}, }
\bib{Cap09}{article}{
author={Caprace, Pierre-Emmanuel},
title={``Abstract'' homomorphisms of split Kac-Moody groups},
journal={Mem. Amer. Math. Soc.},
volume={198},
date={2009},
number={924},
pages={xvi+84}, }
\bib{CR09}{article}{
author={Caprace, Pierre-Emmanuel},
author={R{\'e}my, Bertrand},
title={Groups with a root group datum},
date={2009},
journal={Innov. Incidence Geom.},
volume={9},
pages={5--77}, }
\bib{CR12}{article}{
author={Caprace, Pierre-Emmanuel},
author={R{\'e}my, Bertrand},
title={Simplicity of twin tree lattices with non-trivial commutation relations},
date={2012},
note={To appear in the Proceedings of the special year on Geometric Group Theory at OSU} }
\bib{HKS72}{article}{
author={Hering, Christoph},
author={Kantor, William M.},
author={Seitz, Gary M.},
title={Finite groups with a split $BN$-pair of rank $1$. I},
date={1972},
journal={J.\ Algebra},
volume={20},
pages={435--475}, }
\bib{MKS66}{book}{
author={Magnus, Wilhelm},
author={Karrass, Abraham},
author={Solitar, Donald},
title={Combinatorial group theory: Presentations of groups in terms of
generators and relations},
publisher={Interscience Publishers [John Wiley \& Sons, Inc.], New
York-London-Sydney},
date={1966},
pages={xii+444}, }
\bib{RemyCRAS99}{article}{
author={R{\'e}my, Bertrand},
title={Construction de r{\'e}seaux en th{\'e}orie de Kac-Moody},
date={1999},
journal={C. R. Acad. Sci. Paris S{\'e}r. I Math.},
volume={329},
number={6},
pages={475--478}, }
\bib{RR06}{article}{
author={R{\'e}my, Bertrand},
author={Ronan, Mark~A.},
title={Topological groups of {K}ac-{M}oody type, right-angled twinnings
and their lattices},
date={2006},
journal={Comment. Math. Helv.},
volume={81},
number={1},
pages={191--219},
url={http://dx.doi.org/10.4171/CMH/49}, }
\bib{Rob96}{book}{
author={Robinson, Derek~J.S.},
title={A course in the theory of groups},
edition={2},
series={Graduate Texts in Mathematics},
publisher={Springer},
address={Berlin},
date={1996},
volume={80}, }
\bib{RT94}{article}{
author={Ronan, Mark~A.},
author={Tits, Jacques},
title={Twin trees. {I}},
date={1994},
journal={Invent. Math.},
volume={116},
number={1-3},
pages={463--479},
url={http://dx.doi.org/10.1007/BF01231569}, }
\bib{Ti77}{article}{
author={Tits, Jacques},
title={Endliche Spiegelungsgruppen, die als Weylgruppen auftreten},
date={1977},
journal={Invent. Math.},
volume={43},
number={3},
pages={283--295}, }
\bib{Ti89}{incollection}{
author={Tits, Jacques},
title={Immeubles jumel\'es (cours 1988--1989)},
pages={157--172},
date={1989},
book={
title={R{\'e}sum{\'e} de cours},
series={Documents Math\'ematiques},
publisher={Soci\'et\'e Math\'ematique de France},
date={2013},
} }
\bib{Ti92}{inproceedings}{
author={Tits, Jacques},
title={Twin buildings and groups of Kac-Moody type},
date={1992},
pages={249--286},
book={
title={Groups, combinatorics and geometry},
editor={Liebeck, Martin~W.},
editor={Saxl, Jan},
series={LMS Lecture Note Series},
volume={165},
publisher={Cambridge University Press},
address={Cambridge},
} }
\bib{Ti96}{incollection}{
author={Tits, Jacques},
title={Arbres jumel\'es (cours 1995--1996)},
pages={275--298},
date={1996},
book={
title={R{\'e}sum{\'e} de cours},
series={Documents Math\'ematiques},
publisher={Soci\'et\'e Math\'ematique de France},
date={2013},
} }
\bib{TW}{book}{
author={Tits, Jacques},
author={Weiss, Richard},
title={Moufang polygons},
publisher={Springer},
address={Berlin},
date={2002}, }
\end{biblist} \end{bibdiv}
\end{document} |
\begin{document}
\title[On complete conformally flat submanifolds with nullity]{On complete conformally flat submanifolds with nullity in Euclidean space} \author{Christos-Raent Onti}
\date{} \maketitle
\begin{abstract} In this note, we investigate conformally flat submanifolds of Euclidean space with positive index of relative nullity. Let $M^n$ be a complete conformally flat manifold and let $f\colon M^n\to \mathord{\mathbb R}^m$ be an isometric immersion. We prove the following results: (1) If the index of relative nullity is at least two, then $M^n$ is flat and $f$ is a cylinder over a flat submanifold. (2) If the scalar curvature of $M^n$ is non-negative and the index of relative nullity
is positive, then $f$ is a cylinder over a submanifold with constant non-negative sectional curvature. (3) If the scalar curvature of $M^n$ is non-zero and the index of relative nullity is constant and equal to one, then $f$ is a cylinder over an $(n-1)$-dimensional submanifold with non-zero constant sectional curvature. \end{abstract}
\renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{2010 Mathematics Subject Classification.} Primary 53B25, 53C40, 53C42.} \renewcommand{\thefootnote}{\arabic{footnote}}
\renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnotetext{\emph{Keywords.} Conformally flat submanifolds, index of relative nullity, scalar curvature} \renewcommand{\thefootnote}{\arabic{footnote}}
\section{Introduction}
A Riemannian manifold $M^n$ is said to be \emph{conformally flat} if each point lies in an open neighborhood conformal to an open subset of the Euclidean space $\mathord{\mathbb R}^n$. The geometry and topology of such Riemannian manifolds have been investigated by several authors from the intrinsic point of view. Some of the many papers are \cite{cat16,ku49,ku50,cadjnd11,cahe06,no93,sy88}.
Around 1919, Cartan \cite{car17} initiated the investigation of such Riemannian manifolds from the submanifold point of view by studying the case of conformally flat Euclidean hypersurfaces (see also \cite{mmf85,pin85}). In 1977, Moore \cite{mo77} extended Cartan's result in higher (but still low) codimension (see also \cite{df96,df99,mm78}). Recently, the author, in collaboration with Dajczer and Vlachos, investigated in \cite{dov18} the case of conformally flat submanifolds with flat normal bundle in arbitrary codimension (see also \cite{dote11}).
In this short note, we address and deal with the following:
{\noindent{\bf Problem.}} {\it Classify complete conformally flat submanifolds of Euclidean space with positive index of relative nullity and arbitrary codimension. }
Recall that the {\it index of relative nullity} at a point $x\in M^n$ of a submanifold $f\colon M^n\to \mathord{\mathbb R}^m$ is defined as the dimension of the kernel of its second fundamental form $\alpha\colon TM\times TM\to N_fM$, with values in the normal bundle.
The first result provides a complete answer in the case where the index of relative nullity is at least two, and is stated as follows:
\begin{theorem}\label{main2} Let $M^n$ be a complete, conformally flat manifold and let $f\colon M^n\to\mathord{\mathbb R}^m$ be an isometric immersion with index of relative nullity at least two at any point of $M^n$. Then $M^n$ is flat and $f$ is a cylinder over a flat submanifold. \end{theorem}
The next result provides a complete answer in the case where the scalar curvature is non-negative.
\begin{theorem}\label{main1} Let $M^n$ be a complete, conformally flat manifold with non-negative scalar curvature and let $f\colon M^n\to\mathord{\mathbb R}^m$ be an isometric immersion with positive index of relative nullity. Then $f$ is a cylinder over a submanifold with constant non-negative sectional curvature. \end{theorem}
Observe that there are complete conformally flat manifolds such that the scalar curvature is non-negative while the sectional curvature is not. Easy examples are the Riemannian products $M^n=\mathord{\mathbb S}^{n-m}\times\mathord{\mathbb H}^m,\ n\geq 2m,$ where $\mathord{\mathbb S}^{n-m}$ and $\mathord{\mathbb H}^m$ are the sphere and the hyperbolic space of sectional curvature $1$ and $-1$, respectively.
Finally, the next result provides a complete answer (both local and global) in the case where the scalar curvature is non-zero and the index of relative nullity is constant and equal to one.
\begin{theorem}\label{main3} Let $M^n$ be a conformally flat manifold with non-zero scalar curvature and let $f\colon M^n\to\mathord{\mathbb R}^m$ be an isometric immersion with constant index of relative nullity equal to one. Then $f$ is locally either a cylinder over an $(n-1)$-dimensional submanifold with non-zero constant sectional curvature or a cone over an $(n-1)$-dimensional spherical submanifold with constant sectional curvature. Moreover, if $M^n$ is complete, then $f$ is globally a cylinder over an $(n-1)$-dimensional submanifold with non-zero constant sectional curvature. \end{theorem}
\begin{remarks}{\rm (I) If the ambient space form in Theorem \ref{main1} is replaced by the sphere $\mathord{\mathbb S}_c^m$ of constant sectional curvature $c$, then an intrinsic classification can be obtained, provided that ${\rm{scal}}(M^n)\geq c(n-1)$. This classification follows from a result of Carron and Herzlich \cite{cahe06}, since in this case $M^n$ turns out to have non-negative Ricci curvature. However, we do not obtain any (direct) information on the immersion $f$. \\[1mm] (II) If $f\colon M^n\to\mathord{\mathbb Q}_c^m$ is an isometric immersion of a conformally flat manifold into a space form of constant sectional curvature $c$, then one can prove the following: (i) If the index of relative nullity is at least two, then $M^n$ has constant sectional curvature $c$. In particular, if $f$ is also minimal, then $f$ is totally geodesic. (ii) If the index of relative nullity is constant and equal to one, then $f$ is a $1$-generalized cone (for the definition, see \cite{dov18}) over an isometric immersion $F\colon \Sigma^{n-1}\to \mathord{\mathbb Q}_{\tilde c}^{m-1}$ into an umbilical submanifold of $\mathord{\mathbb Q}_c^m$. } \end{remarks}
\begin{notes} {\rm The special case of minimal conformally flat hypersurfaces $f\colon M^n\to \mathord{\mathbb Q}_c^{n+1},$ $n\geq 4,$ was treated by do Carmo and Dajczer in \cite{DCD} (without any additional assumption on the index of relative nullity), where they showed that these are actually generalized catenoids, thereby extending a previous result due to Blair \cite{Blair} for the case $c=0$.
For the \quotes{neighbor} class of Einstein manifolds one can prove the following: any minimal isometric immersion $f\colon M^n\to \mathord{\mathbb Q}_c^m$ of an Einstein manifold with positive index of relative nullity is totally geodesic. A related result of Di Scala \cite{discala}, in the case where the ambient space is the Euclidean one, states that any minimal isometric immersion $f\colon M^n\to \mathord{\mathbb R}^m$ of a K\"{a}hler-Einstein manifold is totally geodesic. However, it is not yet known whether the K\"{a}hler assumption can be dropped (this was conjectured by Di Scala in the same paper). Of course, in some special cases the conjecture is true, as has already been pointed out in \cite{discala}. Finally, we note that Di Scala's theorem still holds true if the K\"{a}hler (intrinsic) assumption is replaced by the (extrinsic) assumption that $f$ has flat normal bundle. This follows directly from N\"{o}lker's theorem \cite{no90}, since, in this case, $f$ has homothetical Gauss map. } \end{notes}
\section{Preliminaries}
In this section we recall some basic facts and definitions. Let $M^n$ be a Riemannian manifold and let $f\colon M^n\to \mathord{\mathbb R}^m$ be an isometric immersion. The \emph{index of relative nullity} $\nu(x)$ at $x\in M^n$ is the dimension of the \emph{relative nullity subspace} $\Delta(x)\subset T_xM$ given by $$ \Delta(x)=\{X\in T_xM: \alpha(X,Y)=0\ \ \mbox{for all}\ \ Y\in T_xM\}. $$ It is a standard fact that on any open subset where the index of relative nullity $\nu(x)$ is constant, the relative nullity distribution $x\mapsto \Delta(x)$ is integrable and its leaves are totally geodesic in $M^n$ and $\mathord{\mathbb R}^m$. Moreover, if $M^n$ is complete then the leaves are also complete along the open subset where the index reaches its minimum (see \cite{dajczer}). If $M^n$ splits as a Riemannian product $M^n=\Sigma^{n-k}\times \mathord{\mathbb R}^k$ and there is an isometric immersion $F\colon \Sigma^{n-k}\to \mathord{\mathbb R}^{m-k}$ such that $f=F\times {\rm id}_{\mathord{\mathbb R}^k}$, then we say that $f$ is a $k$-cylinder (or simply a cylinder) over $F$.
The following is due to Hartman \cite{har70}; cf. \cite{dt}.
\begin{theorem}\label{hartman} Let $M^n$ be a complete Riemannian manifold with non-negative Ricci curvature and let $f\colon M^n\to \mathord{\mathbb R}^m$ be an isometric immersion with minimal index of relative nullity $\nu_0>0$. Then $f$ is a $\nu_0$-cylinder. \end{theorem}
A smooth tangent distribution $D$ is called {\it totally umbilical} if there exists a smooth section $\delta\in \Gamma(D^\perp)$ such that $$ \langle\nabla_X Y, T\rangle=\<X,Y\rangle\langle\delta,T\rangle $$ for all $X,Y\in D$ and $T\in D^\perp$. The following is contained in \cite{dt}.
\begin{proposition}\label{prop} Let $f\colon M^n\to \mathord{\mathbb R}^m$ be an isometric immersion of a Riemannian manifold with constant index of relative nullity $\nu=1$. Assume that the conullity distribution $\Delta^\perp$ is totally umbilical (respectively, totally geodesic). Then $f$ is locally a cone over an isometric immersion $F\colon \Sigma^{n-1}\to\mathord{\mathbb S}^{m-1}\subset\mathord{\mathbb R}^m$ (respectively, a cylinder over an isometric immersion $F\colon \Sigma^{n-1}\to\mathord{\mathbb R}^{m-1}\subset\mathord{\mathbb R}^m$). \end{proposition}
We also need the following two well-known results; cf. \cite{dt}.
\begin{proposition}\label{conflat} A Riemannian product is conformally flat if and only if one of the following possibilities holds: \begin{enumerate}[(i)]
\item One of the factors is one-dimensional and the other one has constant sectional curvature.
\item Both factors have dimension greater than one and are either both flat or have opposite
constant sectional curvatures. \end{enumerate} \end{proposition}
\begin{proposition}\label{conflat2} Let $M=M_1\times_\rho M_2$ be a warped product manifold. If $M_1$ has dimension one then $M$ is conformally flat if and only if $M_2$ has constant sectional curvature. \end{proposition}
\section{The proofs}
Let $M^n$ be a conformally flat manifold and let $f\colon M^n\to \mathord{\mathbb R}^m$ be an isometric immersion. It is well-known that in this case the curvature tensor has the form $$ R(X,Y,Z,W) = L(X,W)\<Y,Z\rangle-L(X,Z)\<Y,W\rangle+L(Y,Z)\<X,W\rangle-L(Y,W)\<X,Z\rangle $$ in terms of the Schouten tensor given by \begin{equation}\label{shouten} L(X,Y)=\frac{1}{n-2}\left({\rm{Ric}}(X,Y)-\frac{s}{2(n-1)}\<X,Y\rangle\right) \end{equation} where $s$ denotes the scalar curvature. In particular, the sectional curvature is given by \begin{equation}\label{seccur} K(X,Y)=L(X,X)+L(Y,Y) \end{equation} where $X,Y\in TM$ are orthonormal vectors.
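For orthonormal vectors $X,Y\in T_xM$, formula \eqref{seccur} is obtained by evaluating the displayed form of the curvature tensor (with the sign convention $K(X,Y)=R(X,Y,Y,X)$ implicit in \eqref{seccur}): \[ R(X,Y,Y,X) = L(X,X)\langle Y,Y\rangle - L(X,Y)\langle Y,X\rangle + L(Y,Y)\langle X,X\rangle - L(Y,X)\langle X,Y\rangle = L(X,X)+L(Y,Y), \] since $\langle X,Y\rangle=0$ and $\langle X,X\rangle=\langle Y,Y\rangle=1$.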
A straightforward computation of the Ricci tensor using the Gauss equation \begin{equation}\label{eqgauss} R(X,Y,Z,W)=\langle\alpha(X,W),\alpha(Y,Z)\rangle-\langle\alpha(X,Z),\alpha(Y,W)\rangle \end{equation} yields \begin{equation}\label{ricci} {\rm{Ric}}(X,Y) =\<nH,\alpha(X,Y)\rangle-\sum_{j=1}^n\langle\alpha(X,X_j),\alpha(Y,X_j)\rangle \end{equation} where $X_1,\dots,X_n$ is an orthonormal tangent basis.
We obtain from \eqref{seccur} and \eqref{eqgauss} that \begin{equation}\label{dec2}
L(X,X)+L(Y,Y)=\langle\alpha(X,X),\alpha(Y,Y)\rangle-\|\alpha(X,Y)\|^2 \end{equation} for any pair $X,Y\in TM$ of orthonormal vectors. Using \eqref{shouten} it follows from \eqref{dec2} that \begin{equation}
{\rm{Ric}}(X,X)+{\rm{Ric}}(Y,Y) = \frac{s}{n-1}+(n-2)(\langle\alpha(X,X),\alpha(Y,Y)\rangle-\|\alpha(X,Y)\|^2) \label{ric} \end{equation} for any pair $X,Y\in TM$ of orthonormal vectors. Now, assume that $\nu>0$ and choose a unit length $X\in \Delta$. Using \eqref{ricci}, it follows from \eqref{ric} that \begin{equation}\label{ric1} {\rm{Ric}}(Y,Y) = \frac{s}{n-1} \end{equation} for all unit length $Y\perp X$.
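To spell out the last step: for a unit length $X\in\Delta$ we have $\alpha(X,\,\cdot\,)=0$, so \eqref{ricci} gives \[ {\rm{Ric}}(X,X)=\langle nH,\alpha(X,X)\rangle-\sum_{j=1}^n\langle\alpha(X,X_j),\alpha(X,X_j)\rangle=0, \] while the extrinsic terms $\langle\alpha(X,X),\alpha(Y,Y)\rangle$ and $\langle\alpha(X,Y),\alpha(X,Y)\rangle$ in \eqref{ric} vanish as well; substituting into \eqref{ric} yields \eqref{ric1}.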
{\noindent {\it Proof of Theorem \ref{main2}:}} It follows from \eqref{ricci} and \eqref{ric1} that $s=0$. Thus, it follows from \eqref{ric1} that $M^n$ is Ricci flat. Since $M^n$ is conformally flat we obtain that $M^n$ is flat. The desired result follows from Theorem \ref{hartman} and Proposition \ref{conflat}. \qed
{\noindent {\it Proof of Theorem \ref{main1}:}} It follows from \eqref{ric1} that ${\rm{Ric}}\geq 0$. The desired result follows from Theorem \ref{hartman} and Proposition \ref{conflat}. \qed
{\noindent {\it Proof of Theorem \ref{main3}:}} It follows from \eqref{dec2}, \eqref{shouten} and \eqref{ricci} that \begin{equation}\label{eq1} L(Y,Y)=-L(X,X)=\frac{s}{2(n-1)(n-2)}=:h\ \ \text{ and }\ \ L(X,Y)=0 \end{equation} for any unit length vectors $X\in \Delta$ and $Y\in \Delta^\perp$. Moreover, we have that \begin{equation}\label{eq2} L(Y,Z)=0 \end{equation} for any pair $Y,Z\in \Delta^\perp$ of orthonormal vectors. Indeed, if $Y$ and $Z$ are two such vectors then using \eqref{eq1} we get $$ L(Y+Z,Y+Z)=2h, $$ and \eqref{eq2} follows. Now, since $M^n$ is conformally flat we have that $L$ is a Codazzi tensor. Thus $$ (\nabla_Y L)(X,X)=(\nabla_X L)(Y,X) $$ for all unit length $X\in \Delta$ and $Y\in \Delta^\perp$. It follows, using \eqref{eq1} and \eqref{eq2}, that $$ Y(h)=0, $$ for all unit length $Y\in \Delta^\perp$. Therefore $$ h=h(t)=h(\gamma(t)),\ t\in I, $$ where $\gamma\colon I\subset \mathord{\mathbb R}\to M^n$ is a leaf of the nullity distribution $\Delta$, parametrized by arc length. Using again the fact that $L$ is a Codazzi tensor, we get $$ (\nabla_{\gamma'(t)} L)(Y,Z)=(\nabla_Y L)(\gamma'(t),Z) $$ for all unit length $Y,Z\in \Delta^\perp$, or equivalently, $$
\<Y,Z\rangle\langle{\rm{grad}}\log\sqrt{|h(t)|},\gamma'(t)\rangle=\langle\nabla_Y Z,\gamma'(t)\rangle $$ for all unit length $Y,Z\in \Delta^\perp$, where we have used again equations \eqref{eq1} and \eqref{eq2}. Thus, if the scalar curvature is constant, then $\Delta^\perp$ is totally geodesic and the desired result follows from Propositions \ref{prop} and \ref{conflat}. On the other hand, if the scalar curvature is not constant, then $\Delta^\perp$ is totally umbilical and the desired result follows from Propositions \ref{prop} and \ref{conflat2}. Finally, if $M^n$ is complete then the result is immediate and the proof is complete. \qed
\end{document} |
\begin{document}
\title [Local Gromov-Witten Invariants are Log Invariants] {Local Gromov-Witten Invariants are Log Invariants}
\author{Michel van Garrel, Tom Graber, Helge Ruddat}
\address{\tiny University of Warwick, Mathematics Institute, Coventry CV4 7AL, UK}
\email{[email protected]}
\address{\tiny Caltech, Department of Mathematics, MC 253-37, 362 Sloan Laboratory, Pasadena, CA 91125, USA}
\email{[email protected]}
\address{\tiny JGU Mainz, Institut f\"ur Mathematik, Staudingerweg 9, 55128 Mainz, Germany}
\email{[email protected]} \thanks{This work was supported by HR's DFG Emmy-Noether grant RU1629/4-1 and by MvG's affiliation with the Korea Institute for Advanced Study.}
\maketitle \setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Let $X$ be a smooth projective variety, $D$ a smooth nef divisor on $X$, and $\beta$ a curve class on $X$ with $d:=\beta\cdot D >0$. The goal of this note is to prove a simple equivalence between two virtual counts of rational curves on $X$ that can be associated to this situation. First there are the {\em local invariants}, the Gromov-Witten invariants which virtually count curves in the total space of $\shO_X(-D)$. These can be defined as integrals against the virtual fundamental class of $\oM_{0,0}(\Tot (\shO_X(-D)),\beta)$. The conditions on $D$ and $\beta$ are exactly the ones under which these are well-defined. Second, we can consider the relative or logarithmic invariants which virtually count rational curves in $X$ which intersect $D$ in a single point, necessarily with multiplicity $d$. These could also be thought of as a virtual count of maps from $\AA^1$ to the open variety $X\backslash D$. There exist several moduli spaces that can be used to define these counts. We will use the space of logarithmic stable maps to the log smooth space $X(\log D)$ associated to the pair $X$ and $D$. This space parametrizes many types of maps with different specified contact orders, but the one directly relevant to this problem can be denoted $\oM_{0,(d)}(X(\log D),\beta)$, where the $(d)$ is meant to denote a single marked point having maximal contact order with $D$. Our main result is a comparison of these two virtual fundamental classes.
\begin{theorem} \label{mainthm-intro} $F_*[\oM_{0,(d)}(X(\log D),\beta)]^{\rm vir}=(-1)^{d+1}d[\oM_{0,0}(\Tot(\shO_X(-D)),\beta)]^{\rm vir}$. \end{theorem} Here $F$ denotes the map that takes a logarithmic map to $X(\log D)$ and forgets the logarithmic structure as well as the marked point. By the negativity of $\shO_X(-D)$, the space of maps to $\shO_X(-D)$ is the same as the space of maps to $X$, so both sides of the formula are being considered as elements of $A_*(\oM_{0,0}(X,\beta))$. We remark that we show the same formula holds if we add $n$ marked points to the moduli problems on each side (with contact order $0$ with $D$ in the case of the left hand side). This equality of virtual cycles leads to analogous equalities of certain numerical invariants after capping with constraints and integrating.
Precisely, let $\gamma=(\gamma_1,\ldots,\gamma_n)$ denote a collection of insertions, with each $\gamma_i$ either a primary insertion (i.e.\ no descendants) or a descendant of a cycle class which does not meet $D$. Let $N_\beta(\gamma)$ denote the genus zero local Gromov-Witten invariant for these insertions and $R_\beta(\gamma)$ the genus zero relative Gromov-Witten invariant with the same insertions (and one maximal contact order marking as before). Then, as a direct corollary of Theorem~\ref{mainthm-intro}, we find: \begin{corollary} \label{maincor-intro} $R_\beta(\gamma)=(-1)^{d+1}dN_\beta(\gamma)$. \end{corollary} This follows from Theorem~\ref{mainthm-intro} and the projection formula, since the restriction on the insertions corresponds exactly to the condition that the constraints on the space of log stable maps be pulled back from classes on $\oM_{0,n}(X,\beta)$.
This formula was first conjectured by Takahashi \cite{Ta01} for $X=\PP^2$ and $D$ a smooth cubic. It was proven in this case by Gathmann via explicit calculation of both sides \cite{Ga03}. In an unpublished note, Graber and Hassett gave a simple proof of the formula when $X$ is any smooth del Pezzo surface and $D$ is a smooth anti-canonical divisor. The proof we will give here follows the main idea of that argument which is to apply the degeneration formula for Gromov-Witten invariants \cite{LR01, Li02, IP04, AF11, Ch14, KLR17} to a twist of the degeneration to the normal cone of the preimage of $D$ in $\Tot(\shO(-D))$.
In the general setting, though, we need to take into account the existence of rational curves in $D$. In order to control these, we study the following situation, which may be of independent interest: comparing the virtual geometry of genus zero stable log maps to certain projective bundles with the geometry of stable maps to the base.
Let $\shA$ be a nef line bundle on a smooth projective variety $D$, $$Y=\mathbb{P}(\shO_D\oplus\shA) \stackrel{p}{\longrightarrow} D$$ the $\PP^1$-bundle over $D$ and $\beta$ a curve class on $Y$. Let $D_0$ be the section of $p$ with normal bundle $\shA^\vee$ and let $\oM_{0,\Gamma}(Y(\log D_0),\beta)$ be the space of genus zero basic stable log maps to $Y$ where $\Gamma$ is a list of $n$ points with prescribed contact orders to $D_0$. We have a pushforward map $$w: \oM_{0,\Gamma}(Y(\log D_0),\beta) \to \oM_{0,n}(D,p_*\beta)$$ which forgets the log structures, composes with the projection to $D$ and stabilizes. We would like to say that the virtual class on the space of log stable maps to $Y(\log D_0)$ is the pullback of the virtual class of the space of stable maps to $D$, but since the map $w$ need not be flat, there is no well-defined pullback in general. Nevertheless, we have the following result.
\begin{theorem}[Theorem \ref{thm-pullback} below] \label{thm-P1-bundle-pullback} The map $w$ factors through an intermediate space $\shM$, i.e. $\oM_{0,\Gamma}(Y(\log D_0),\beta)\stackrel{u}{\lra}\shM\stackrel{v}{\lra} \oM_{0,n}(D,p_*\beta)$ where $u$ is smooth and $v$ a pull-back of a map $\nu$ to a smooth stack, see \eqref{main-diagram-Y-D} for details. Then $$ [\oM_{0,\Gamma}(Y(\log D_0),\beta)]^{\rm vir} = u^*\nu^![\oM_{0,n}(D,p_*\beta)]^{\rm vir}. $$ \end{theorem}
In fact, this theorem is proven in more generality below -- the only feature needed of the map $p:Y(\log D_0) \to D$ is that it is a log smooth projective morphism whose relative log tangent bundle satisfies a positivity condition.
We believe that our main result generalizes as follows. \begin{conjecture} \label{conjecture} Let $D\subset X$ be a normal crossing divisor with smooth nef components $D_1,...,D_k$ and $\beta$ a curve class such that $d_i:=\beta\cdot D_i>0$ for all $i$. Let $$F:\oM_{0,(d_1),...,(d_k)}(X(\log D),\beta)\ra \oM_{0,0}(X,\beta)$$ be the forgetful map from the moduli space of genus zero basic stable log maps with one marking for each $D_i$ requiring maximal order of contact $d_i$ at $D_i$, then $$F_*[\oM_{0,(d_1),...,(d_k)}(X(\log D),\beta)]^{\rm vir}=\left(\prod_{i=1}^k(-1)^{d_i+1}d_i\right)[\oM_{0,0}(\Tot(\shO_X(-D_1)\oplus\cdots\oplus \shO_X(-D_k)),\beta)]^{\rm vir}.$$ \end{conjecture} We also conjecture the analogous statement where further markings are added on both sides (with zero contact orders to the $D_i$ for the left hand side). As evidence for this conjecture, consider first $\shO_{\PP^1}(-1,-1)$: the local invariant for degree $d$ curves is $\frac{1}{d^3}$, while the log invariant is $\frac1{d}$, computed from the unique Hurwitz cover of $\PP^1$ with maximal branching at $0$ and $\infty$, which carries a cyclic automorphism group of order $d$; see also \cite[Remark 4.17]{MR16}. We also checked Conjecture~\ref{conjecture} for $\PP^n$ with $D$ the toric boundary in the case of a single insertion of $\psi^{n-1}[pt]$: the log invariant can easily be computed from \cite[Theorem 1.1]{MR16}+\cite{MR19} and equals $1$ for all $n$ and $d$. We are grateful to Andrea Brini for computing for us the local invariant for this situation, confirming the conjecture in this case. We thank Dhruv Ranganathan for pointing out that \cite[Lemma~3.1]{PZ08} computes the virtual count of degree $d$ genus zero curves in $\Tot(\shO_{\PP^2}(-1)^{\oplus 3})$ passing through two points as $(-1)^{d-1}/d$; using \cite[Theorem 1.1]{MR16} for the computation of the log invariant confirms the conjecture in this case as well. Further evidence will appear in \cite{BBvG}.
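To make the first piece of evidence explicit, take $X=\PP^1$, $D_1=\{0\}$, $D_2=\{\infty\}$ and $\beta=d[\PP^1]$, so that $d_1=d_2=d$ and $\shO_X(-D_1)\oplus\shO_X(-D_2)=\shO_{\PP^1}(-1,-1)$. With the local invariant $\frac{1}{d^3}$, the conjectured formula predicts the log invariant
$$\left(\prod_{i=1}^2(-1)^{d_i+1}d_i\right)\cdot\frac{1}{d^3}=\frac{d^2}{d^3}=\frac{1}{d},$$
which is exactly the count of the unique degree $d$ cover totally branched over $0$ and $\infty$, weighted by $\frac{1}{d}$ for its automorphisms.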
Once the technology of punctured Gromov-Witten invariants and a more general degeneration formula is fully developed, we expect that the proof we give in this paper could be iterated to imply Conjecture~\ref{conjecture}. If the $D_i$ are disjoint (as in the first example above) the existing technology is already enough to establish the conjecture.
We are grateful to Dan Abramovich, Mark Gross, Davesh Maulik, Rahul Pandharipande and James Pascaleff for helpful and inspiring discussions, and to the anonymous referee for valuable suggestions.
\section{Deducing the main result from the degeneration formula} \label{section-setup}
First we briefly describe the strategy of proof. We have a smooth projective variety $X$ containing a nef divisor $D$. If we let $\shX = \Bl_{D\times\{0\}}(X\times \AA^1)\ra \AA^1$ be the degeneration to the normal cone of $D$ in $X$, then we get a family over $\AA^1$ whose general fiber is $X$ and whose special fiber is a union of a copy of $X$, which we denote by $X_0$, and a $\PP^1$-bundle over $D$, which we will denote by $Y$. Precisely, if we let $\shA=\shN_{D/X}$, then $Y=\mathbb{P}(\shO_D\oplus\shA)\stackrel{p}{\ra} D$. This $\PP^1$-bundle comes with two obvious sections whose images we denote by $D_0$ and $D_\infty$, which have normal bundles $\shA^\vee$ and $\shA$ respectively. The intersection of $X_0$ with $Y$ is given by $D$ in $X_0$ and by $D_0$ in $Y$.
To construct a degeneration of the total space of $\shO_X(-D)$, we can just look at the total space of any line bundle over $\shX$ whose restriction to a general fiber is $\shO(-D)$. Our choice, which we denote by $\shL$, is $\Tot(\shO(-\shD))$ where $\shD$ is the proper transform of $D \times \AA^1$. Then, since $\shD \cap X_0 = \emptyset$ and $\shD \cap Y = D_\infty$, we find that the projection $\shL \to \AA^1$ gives us a family whose general fiber is $\Tot\shO_X(-D)$ and whose special fiber is a union of two components $L_X$ and $L_Y$, where $L_X \cong X \times \AA^1$ and $L_Y \cong \Tot(\shO_Y(-D_\infty))$.
Applying the degeneration formula to this family will then relate the Gromov-Witten theory of $\Tot(\shO_X(-D))$ to the relative or log invariants of the two pairs $(X \times \AA^1, D\times \AA^1)$ and $(L_Y, D_0 \times \AA^1)$. While the degeneration formula in general is quite complicated and given by a sum over combinatorial types of curve degenerations, we will see that for this degeneration in genus zero, every term but one in that sum vanishes, and we can find explicitly the contribution from $L_Y$. We are thus left with an expression for the genus zero Gromov-Witten theory of $\shO_X(-D)$ in terms of that of $(X\times \AA^1, D\times \AA^1)$, which is just the same as that of $(X,D)$.
In order to carefully write down a proof of that vanishing, we need to fix our notational conventions for the degeneration formula. Because we will eventually need to make use of the theory of logarithmic stable maps, we state the version in that setting, although it is worth noting that thanks to the comparison theorems of \cite{AMW14}, we could equally well use the better known formalism for degeneration in terms of relative invariants.
\subsection{Degeneration formula} \label{sec-degenformula} We recall the degeneration formula, where for us the version in \cite[Theorem 1.4]{KLR17} is most convenient; further details on this subsection can be found there. The input is a space $\shX_0$ that is log smooth over the standard log point with the additional assumption that $\shX_0= X\sqcup_D Y$ is a union of two smooth components that meet along $D$, a smooth divisor in each component. We are interested in basic stable log maps to $\shX_0$. The domain curve components map into either $X$ or $Y$ (or both) and the degeneration formula uses this fact effectively to decompose the moduli space of stable maps, and hence the virtual fundamental class and Gromov-Witten invariants, as a sum of contributions from each type of stable map. We next give the details for the situation of genus zero with $n\ge 0$ markings. We recall \cite[\S2]{KLR17}: let a curve class $\beta$ in $\shX_0$ and an integer $n$ be given, and let $\Omega(\shX_0)$ denote the set of graphs $\Gamma$ with the following decorations and properties. The vertices of $\Gamma$ are partitioned into two sets indexed by $X$ and $Y$, and no edge has vertices labeled by only $X$ or only $Y$; such a graph is called \emph{bipartite}. We will henceforth speak of $X$- and $Y$-vertices, referring to the partition membership. The edges of $\Gamma$ are enumerated $e_1,...,e_r$ (where $r$ may vary); each edge $e$ is decorated with a positive integer $w_e$; each vertex $V$ is decorated with a set $n_V$ and a class $\beta_V$ that is an effective curve class in $X$ or $Y$ depending on the bipartition type of $V$. The set $n_V$ is a subset of $\{1,...,n\}$ to be thought of as the set of marking labels attached to $V$. Every $\Gamma\in \Omega(\shX_0)$ is subject to the following stability condition: if $\beta_V=0$, then the valency of $V$ is at least $3$.
Furthermore, $\beta =\sum_V\beta_V$ and $\beta_V\cdot D=\sum_{e\ni V} w_e$ and $\{1,...,n\}=\coprod_V n_V$. These conditions make $\Omega(\shX_0)$ a finite set.
Given $\Gamma\in\Omega(\shX_0)$ and a vertex $V$ of $\Gamma$, we define $\Gamma_V$ as the ``subgraph'' of $\Gamma$ at the vertex $V$. That is, $\Gamma_V$ has a single vertex $V$ that is decorated with the set $n_V$ and curve class $\beta_V$ and has as adjacent half-edges the edges adjacent to $V$ in $\Gamma$ with their weights. If $V$ is an $X$-vertex, we define $\oM_V:=\oM_{\Gamma_V}(X(\log D),\beta_V)$, that is, the moduli space of genus zero stable maps to $X(\log D)$ of class $\beta_V$, with edges of $\Gamma_V$ indexing the markings with contact order to $D$ given by the weight of an edge and $n_V$ enumerating additional markings (with contact order zero to $D$). Analogously, if $V$ is a $Y$-vertex, we set $\oM_V:=\oM_{\Gamma_V}(Y(\log D),\beta_V)$. We define $\bigodot _V \oM_V $ by the Cartesian square \begin{equation*} \label{gluediag} \vcenter{ \xymatrix{ \bigodot _V \oM_V \ar[r]\ar_\ev[d]& \prod_V \oM_V\ar^\ev[d]\\ \prod_{e} D \ar^(.4)\Delta[r]& \prod_V\prod_{e\ni V} D.\\ } } \end{equation*} For each $\Gamma\in\Omega(\shX_0)$, consider the moduli space $\oM_\Gamma$ of basic stable log maps to $\shX_0$ where the curves are marked by $\Gamma$, i.e. a subset of the nodes is marked by $e_1,...,e_r$ and the dual intersection graph collapses to $\Gamma$, for details see \cite[\S4]{KLR17}. There is an \'etale map that partially forgets log structure $\Phi:\oM_\Gamma\ra \bigodot _V \oM_V$ and another finite map that forgets the graph-marking $G:\oM_\Gamma\ra\oM_{0,n}(\shX_0,\beta)$ where the latter refers to the moduli space of $n$-marked basic stable log maps to the log space $\shX_0$ that is log smooth over the standard log point.
\begin{theorem}[Degeneration formula in genus zero] \label{degen-formula} We have $$[\oM_{0,n}(\shX_0,\beta)]^{\rm vir} = \sum_{\Gamma\in \Omega(\shX_0)} \frac{\lcm(w_{e_1},...,w_{e_r})}{r!} G_*\Phi^*\Delta^! \prod_V[\oM_V]^{\rm vir}.$$ \end{theorem}
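For orientation, consider a graph $\Gamma$ with a single edge $e$ joining one $X$-vertex to one $Y$-vertex. Then $r=1$ and the combinatorial prefactor reduces to
$$\frac{\lcm(w_{e_1},\ldots,w_{e_r})}{r!}=\frac{w_e}{1!}=w_e,$$
and the matching condition forces $w_e=\beta_V\cdot D$ at each of the two vertices; this is the source of the overall factor $d=\beta\cdot D$ when the formula is applied to $\shL_0$ below.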
\subsection{Setup} \label{sec-degen-formula}
Instead of writing $\oM_{0,0}(-)$ to refer to moduli spaces of genus zero basic stable log maps with no markings, we simply write $\oM(-)$ in the following. Since $\shL\ra\AA^1$ and $\shX\ra\AA^1$ are log smooth when given the divisorial log structure from their respective central fibers, by \cite[Theorem 0.2 and Theorem 0.3]{GS13}, \cite[Theorem 1.2.1]{Ch11}, we obtain moduli spaces of basic stable log maps $\oM(\shX/\AA^1,\beta)$ and $\oM(\shL/\AA^1,\beta)$ that are proper over $\AA^1$ and whose formation commutes with base change. These carry virtual fundamental classes $[\oM(\shX/\AA^1,\beta)]^{\rm vir},[\oM(\shL/\AA^1,\beta)]^{\rm vir}$ that are compatible with base change. Moreover, since $\beta\cdot c_1(\shO_\shX(-\shD))<0$, we get that $\oM(\shX/\AA^1,\beta)=\oM(\shL/\AA^1,\beta)$ and $\oM(\shX_0,\beta)=\oM(\shL_0,\beta)$. We furthermore consider the projection $\tilde p:\shX_0=Y\sqcup_D X_0\ra X$ induced by $p$ via the universal property of the co-product.
\begin{lemma} \label{lemma-Tot-is-p-pushforward} Let $P:\oM(\shL_0,\beta)\ra \oM(X,\beta)$ be the map that takes a basic stable log map to $\shL_0$, forgets the log structure, composes with $\tilde p$ and stabilizes. Then $$[\oM(\Tot(\shO_X(-D)),\beta)]^{\rm vir} = P_*[\oM(\shL_0,\beta)]^{\rm vir}.$$ \end{lemma}
\begin{proof} Consider the four Cartesian squares of proper morphisms \begin{equation} \label{eq-glue-square} \xymatrix@C=30pt { \oM(\shL_0,\beta)\ar@{^{(}->}[r]\ar_P[d] & \oM(\shL/\AA^1,\beta)\ar_Q[d] & \ar@{_{(}->}[l]\oM(\Tot(\shO_X(-D)),\beta)\ar@{=}[d] \\ \oM(X,\beta)\ar@{^{(}->}[r]\ar[d] & \oM(X\times\AA^1/\AA^1,\beta)\ar^p[d] & \ar@{_{(}->}[l]\oM(X,\beta)\ar[d] \\ \{0\}\ar^{i_0}@{^{(}->}[r]&\AA^1&\ar_{i_1}@{_{(}->}[l]\{1\} } \end{equation} and notice that $[\oM(\shL_0,\beta)]^{\rm vir}$ is a class in the top left corner that is the Gysin pullback of $[\oM(\shL/\AA^1,\beta)]^{\rm vir}$ from the top middle which in turn Gysin pulls back to \linebreak $[\oM(\Tot(\shO_X(-D)),\beta)]^{\rm vir}$ in the top right. The statement now follows from the commuting of Gysin pullbacks with proper pushforward applied to the top two squares, since we see that $$P_*[\oM(\shL_0,\beta)]^{\rm vir} = i_0^!Q_*[\oM(\shL/\AA^1,\beta)]^{\rm vir} = i_1^!Q_*[\oM(\shL/\AA^1,\beta)]^{\rm vir} = [\oM(\Tot(\shO_X(-D)),\beta)]^{\rm vir}$$ where the middle equality follows since $p$ is a trivial family and the last equality is a consequence of the family $p\circ Q$ being trivial and equal to $p$ in a neighborhood of $1$. \end{proof}
\subsection{Applying the degeneration formula to $\shL_0$} We are going to apply the degeneration formula, Theorem~\ref{degen-formula}, to the log smooth space $\shL_0$ over the standard log point, which is the central fiber of the log smooth family $\shL\ra\AA^1$ from the previous subsection. The following theorem will be proved in the subsequent sections.
\begin{theorem} \label{prop-vanishing-pushforward} Given $\Gamma\in\Omega(\shL_0)$, we have $P_*G_*\Phi^*\Delta^! \prod_V[\oM_V]^{\rm vir}=0$ unless $\Gamma$ is the graph \xymatrix{ \overset{V_1}{\bullet} \ar@{-}[r]^(-.1){}="a"^(1.1){}="b" \ar^e@{-} "a";"b" &\overset{V_2}{\bullet} } with $V_1$ an $X$-vertex and $V_2$ a $Y$-vertex and furthermore, $w_e=\beta\cdot D$, $\beta_{V_1}=\beta$, $n_{V_1}=\{1,...,n\}$, $n_{V_2}=\emptyset$ and $\beta_{V_2}$ is $w_e$ times the class of a fiber of $p:Y\ra D$. \end{theorem}
For the remainder of this section, we deduce the main Theorem~\ref{mainthm-intro} from Theorem~\ref{prop-vanishing-pushforward}. Set $L_D:=D\times\AA^1$ which we view as the intersection of the components $L_X$ and $L_Y$ of $\shL_0$ and thus as a divisor in $L_X$ as well as in $L_Y$. Set $d=\beta\cdot D$ and let $\Gamma$ in the following denote the exceptional graph given in Theorem~\ref{prop-vanishing-pushforward}. In light of Lemma~\ref{lemma-Tot-is-p-pushforward}, we conclude from Theorem~\ref{degen-formula} and Theorem~\ref{prop-vanishing-pushforward} that \begin{equation} \label{class-pulled-thru-degenform} \begin{array}{l} [\oM(\Tot(\shO_X(-D)),\beta)]^{\rm vir} \\ \ = d\cdot P_* G_*\Phi^*\Delta^!\left([\oM_{(d)}(L_X(\log L_D),\beta_{V_1})]^{\rm vir}\times [\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})]^{\rm vir}\right). \end{array} \end{equation} Note that $\bigodot _V \oM_V$ can be identified with the top left corner in the diagram\footnote{Despite the notation, $\oM_{(d)}(L_X(\log L_D),\beta_{V_1})$ is not proper, but this does not affect the argument.} $$ \xymatrix{ \oM_{(d)}(L_X(\log L_D),\beta)\times_{L_D} \oM_{(d)}(L_Y(\log L_D),\beta_{V_2})\ar_{\pr_1}[d]&\ar_(.13){\Phi}[l] \oM_\Gamma\ar^(.4)G[r]&\oM(\shL_0,\beta)\ar^P[d]\\ \oM_{(d)}(X(\log D),\beta)\ar^F[rr]&&\oM(X,\beta) } $$ and this diagram is commutative because the curves in $$\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})=\oM_{(d)}(Y(\log D),\beta_{V_2})$$ become entirely unstable when composing with $p:Y\ra D$. Therefore, the right hand side in \eqref{class-pulled-thru-degenform} equals $$ d\cdot F_*(\pr_1)_*\Phi_*\Phi^*\Delta^!\left([\oM_{(d)}(L_X(\log L_D),\beta)]^{\rm vir}\times [\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})]^{\rm vir}\right) $$ $$=d\deg(\Phi)\cdot F_*(\pr_1)_*\Delta^!\left([\oM_{(d)}(L_X(\log L_D),\beta)]^{\rm vir}\times [\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})]^{\rm vir}\right). $$ We note that $\deg(\Phi)=1$ by \cite[Equation (1.4)]{KLR17}.
So, in order to identify the last equation with the left hand side in Theorem~\ref{mainthm-intro} (up to moving the factor $(-1)^{d+1}d$ to the other side) and thereby via \eqref{class-pulled-thru-degenform} deducing the Theorem, it suffices to observe that $[\oM_{(d)}(L_X(\log L_D))]^{\rm vir}$ Gysin-restricts to $[\oM_{(d)}(X(\log D))]^{\rm vir}$ when confining the evaluation to be in $D\times\{0\}$ and then to prove the following result. \begin{proposition} Under the evaluation map $\ev:\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})\ra D$, we find $$\ev_*[\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})]^{\rm vir} = \frac{(-1)^{d+1}}{d^2}[D].$$ \end{proposition} \begin{proof} The maps $L_Y(\log L_D)\ra Y(\log D_0)\stackrel{p}\lra D\ra \pt$ are log smooth and $\beta_{V_2}$ is the $d$-th multiple of a fiber class of $p$, so we find $$[\oM_{(d)}(L_Y(\log L_D)/\pt,\beta_{V_2})]^{\rm vir}=[\oM_{(d)}(L_Y(\log L_D)/D,\beta_{V_2})]^{\rm vir}.$$ Since $\vdim \oM_{(d)}(Y(\log D_0)/D,\beta_{V_2})=\dim D$ (e.g. by inserting $h=0$ in \cite[\S6.3]{BP08}), necessarily $\ev_*[\oM_{(d)}(L_Y(\log L_D),\beta_{V_2})]^{\rm vir}$ is a multiple of $[D]$. We can compute the degree by Gysin pulling back to a point in $D$, and the formation of the virtual fundamental class is compatible with this pullback by \cite[Prop. 7.3]{BF97}. Hence, it remains to show that $$\deg \big([\oM_{(d)}(\Tot(\shO_{\PP^1}(-1))(\log (\{0\}\times\AA^1)),d[\PP^1])]^{\rm vir}\big)=\frac{(-1)^{d+1}}{d^2}.$$ The exact sequence
$$ 0 \ra \shT_{\PP^1(\log\{0\})}\ra \shT_{\Tot\big(\shO_{\PP^1}(-1)\big)(\log (\{0\}\times\AA^1))}|_{\PP^1}\ra \shO_{\PP^1}(-1) \ra 0$$ relates the local part of this moduli space to the twist by the obstruction bundle, hence we need to show that \begin{equation} \label{BP-result-eqn} \deg \big(e(O)\cap[\oM_{1}(\PP^1(\log \{0\}),d[\PP^1])]^{\rm vir}\big)=\frac{(-1)^{d+1}}{d^2} \end{equation} where $O=R^1\pi_*f^*\shO_{\PP^1}(-1)$ for $\oM_{1}(\PP^1(\log \{0\}))\stackrel{\pi}{\lla}\shC\stackrel{f}{\lra}\PP^1$ the maps from the universal curve to moduli space and target. By \cite[Lemma~6.3]{BP08}\footnote{This is also Theorem 5.1 of \cite{BP05}.}, specializing the equivariant parameter $t_2$ in loc.cit. to $1$, the left hand side of \eqref{BP-result-eqn} equals the coefficient of $1/u$ in $$
\GW(0|-1,0)_{(d)} = \frac{(-1)^{d+1}}{d}\left(2\sin\frac{du}{2}\right)^{-1} $$ which is readily seen to be $\frac{(-1)^{d+1}}{d^2}$. \end{proof}
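\begin{remark} The last step is the Laurent expansion $2\sin\frac{du}{2}=du-\frac{(du)^3}{24}+O(u^5)$, so that
$$\frac{(-1)^{d+1}}{d}\left(2\sin\frac{du}{2}\right)^{-1}=\frac{(-1)^{d+1}}{d}\left(\frac{1}{du}+\frac{du}{24}+O(u^3)\right),$$
whose coefficient of $1/u$ is $\frac{(-1)^{d+1}}{d}\cdot\frac{1}{d}=\frac{(-1)^{d+1}}{d^2}$ as claimed. \end{remark}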
\section{Excluding multiple gluing points of curves in $L_X$} \label{sec-exclude-mult-glue-X} In this section, we prove the statement of Theorem~\ref{prop-vanishing-pushforward} for graphs $\Gamma\in\Omega(\shL_0)$ that have an $X$-vertex with at least two adjacent edges. \begin{lemma} \label{lemma-no-mult-edges-X} Let $\Gamma\in\Omega(\shL_0)$ be a graph with an $X$-vertex $V$ with $r>1$ adjacent edges, then $[\oM_\Gamma]^{\rm vir}=0$. \end{lemma} \begin{proof} Let $s$ be the number of edges of $\Gamma$ not adjacent to $V$, so that $\Gamma$ has $r+s$ edges. Since maps from compact curves to $\AA^1$ are constant, the evaluation map $\oM_V \to (D\times\AA^1)^r$ factors through $D^r \times \AA^1$ where $\AA^1$ is embedded diagonally in $(\AA^1)^r$. The same is true for vertices corresponding to components in $L_Y$, since the bundle $\shO_Y(D_\infty)$ is nef. Using this, we rewrite the diagram \eqref{eq-glue-square} by separating out the factors for $V$ to find Cartesian squares. \[ \xymatrix@C=30pt { \oM_V\times_{L_D^r} \bigodot_{V'\neq V}\oM_{V'} \ar[r] \ar^{\ev}[d] & \oM_V\times \prod_{V'\neq V}\oM_{V'} \ar^{\ev}[d]\\ (D^r\times\AA^1)\times (D\times\AA^1)^{s} \ar^{(\id\times\diag)\times\id=:\delta}[d]\ar^{\Delta'}[r] & (D^r\times\AA^1)^2\times (D\times\AA^1)^{2s}\ar^{(\id\times\diag)\times\id}[d]\\ (D\times\AA^1)^{r}\times (D\times\AA^1)^{s}\ar^{\Delta}[r]& (D\times\AA^1)^{2r}\times (D\times\AA^1)^{2s}. } \] Let $N$ denote the normal bundle of the embedding $\Delta$, which has rank $(r+s)(\dim D+1)$, and $N'$ denote that of $\Delta'$, which has rank $r\dim D+1+s(\dim D+1)$. Set $E=(\delta^*N)/N'$, which is of rank $r-1$, and let $c_{r-1}(E)$ be its top Chern class.
For any $k$ and $\alpha\in A_k\big(\oM_V\times \prod_{V'\neq V}\oM_{V'}\big)$, the excess intersection formula says $$\Delta^!\alpha= c_{r-1}(E)\cap(\Delta')^!\alpha.$$ Note that the normal bundle of the bottom right vertical map is trivial and, by Cartesianness of the lower square, its pullback under $\Delta'$ is isomorphic to $E$, so $c_{r-1}(E)=0$ because $r>1$ by assumption. Applying this to the virtual fundamental class $\alpha=[\oM_V]^{\rm vir}\times \prod_{V'\neq V}[\oM_{V'}]^{\rm vir}$ proves the Lemma. \end{proof}
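\begin{remark} The rank count behind the proof is
$$\rk E=\rk(\delta^*N)-\rk N'=(r+s)(\dim D+1)-\big(r\dim D+1+s(\dim D+1)\big)=r-1,$$
so as soon as $r>1$, the excess bundle $E$ is a trivial bundle of positive rank, and its top Chern class vanishes. \end{remark}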
\section{Comparing stable maps to $Y(\log D_0)$ and $D$} We let $D$ be an arbitrary smooth projective variety (with the trivial log structure), and $Z$ a log scheme\footnote{We assume the log structure is in the Zariski topology to satisfy the assumptions of \cite{GS13}.} with a log smooth and projective morphism $p:Z\ra D$. This induces a morphism of spaces of (log) stable maps. The example relevant to our main theorem is given by $Z=Y(\log D_0)$ as in Section \ref{section-setup}. In this case $\shT_{Z/D} = \shO_Y(D_\infty)$ which is a nef line bundle, since $D_\infty$ is an effective divisor with nef normal bundle. This implies that the relative log tangent bundle has no higher cohomology when pulled back under any morphism $\PP^1 \to Z$, or more generally any morphism from a genus zero curve. This property is useful in studying the induced morphism on spaces of stable maps. In this section it is not necessary that the underlying scheme of $Z$ is smooth and we can state our main result in arbitrary genus (although most examples will be in genus zero). Consequently, our notation for the space of log maps will just be $\oM_{g,n}(Z,\beta)$ where we do not try to describe the type of contact at the points. This will just be the disjoint union over all possible conditions to impose at the markings.
Let ${H_2(Z)^+}$ denote the submonoid of $H_2(Z,\ZZ)$ spanned by effective curve classes (including zero). Fixing a class $\beta\in {H_2(Z)^+}$, we would like to compare the virtual fundamental classes of $\oM_{g,n}(Z, \beta)$ and $\oM_{g,n}(D,p_*\beta)$. The natural map between them is induced by forgetting the log structures, composing the maps, and stabilizing. We want to factor that map via an intermediate space in order to deal with these steps separately. To this end we need to discuss certain stacks of curves with extra structure.
First, we have $\foM_{g,n}$ which parametrizes prestable curves of genus $g$ with $n$ markings. The stack $\oM_{g,n}(D,p_*\beta)$ has a natural map to $\foM_{g,n}$, given by remembering the underlying curve, but forgetting the map to $D$. We would like to forget the map to $D$, but remember the homology class of each irreducible component of the source curve, so we want to factor this map through a stack $\mathfrak M_{g,n,{H_2(D)^+}}$ which parametrizes prestable genus $g$ curves together with an effective curve class on each irreducible component, satisfying the stability condition that components of degree and genus zero must contain three special points. Families of such curves are required to satisfy the obvious continuity condition that when a component degenerates, the sum of the classes on the degenerations is equal to the class of the original component. This stack was introduced by Costello in \cite{costello} and used for a similar purpose by Manolache in \cite{Ma12b}. The crucial fact for us will be that the deformation theory of such a decorated curve is identical to that of the undecorated curve, i.e. $\mathfrak M_{g,n,{H_2(D)^+}}$ is \'etale over $\mathfrak M_{g,n}$, see \cite[Proposition 2.0.2]{costello}. Therefore, instead of thinking of the standard obstruction theory on $\oM_{g,n}(D,p_*\beta)$ as being a relative obstruction theory over $\foM_{g,n}$, we can think of it as a relative obstruction theory over $\foM_{g,n,{H_2(D)^+}}$. Similarly, we can factor the structure map from $\oM_{g,n}(Z,\beta)$ to $\foM_{g,n}^{\log}$ (the stack parametrizing log smooth families of curves over log schemes) through a stack $\foM^{\log}_{g,n,{H_2(Z)^+}}$ which again records an effective curve class on each irreducible component. We arrive at the following commutative diagram, where $\shM$ is taken to make the right hand square Cartesian. (We suppress some indices in the subscripts for clarity both here and in what follows.)
\begin{equation} \label{main-diagram-Y-D} \begin{aligned} \xymatrix@C=30pt { \oM(Z,\beta)\ar^-u[r]\ar[d]& \shM\ar^-v[r]\ar[d]& \oM(D,p_*\beta)\ar[d]\\ \foM^{\log}_{{H_2(Z)^+}} \ar^{id}[r]& \foM^{\log}_{{H_2(Z)^+}} \ar^\nu[r]& \foM_{{H_2(D)^+}}. } \end{aligned} \end{equation}
The main reason for introducing the labeled curves is that the stabilization map is now defined on the bottom row, since we can tell which components become unstable just from the discrete data of the homology classes. (We remark that the existence of the relevant stabilization maps depends crucially on the fact that 0 is indecomposable in ${H_2(D)^+}$.) The mapping $\nu$ is given by forgetting the log structures, applying $p_*$ to the homology markings, and then stabilizing as necessary, so we have a natural morphism from the universal curve $\foC^{\rm log}_{{H_2(Z)^+}} $ to the pullback under $\nu$ of the universal curve $\foC_{{H_2(D)^+}}$ over $\foM_{{H_2(D)^+}}$. This map contracts rational curves that are destabilized by forgetting the log structures and pushing forward the homology class. In particular, any positive-dimensional fiber of this map is a union of $\PP^1$'s whose labelling becomes zero in ${H_2(D)^+}$.
Now we are in a position to state the main theorem of this section.
\begin{theorem}\label{thm-pullback} Let $p:Z \to D$ be a log smooth morphism where $D$ has trivial log structure. Suppose that for every log stable morphism $f:C \to Z$ of genus $g$ and class $\beta$ we have $H^1(C,f^*\shT_{Z/D}) = 0$, then
$$[\oM_{g,n}(Z,\beta)]^{\rm vir} = u^* \nu^! [\oM_{g,n}(D,p_*\beta)]^{\rm vir}$$ provided that $\oM_{g,n}(D,p_*\beta) \neq \emptyset$.
In particular, if $[\oM_{g,n}(D,p_*\beta)]^{\rm vir}$ can be represented by a cycle supported on some locus $W\subset \oM_{g,n}(D,p_*\beta)$, then $[\oM_{g,n}(Z,\beta)]^{\rm vir}$ can be represented by a cycle supported on $w^{-1}(W)$ where $w = v\circ u$ is the natural map between these stacks. \end{theorem}
\begin{remark} Note that while the last statement involves only the standard spaces of (log) stable maps, we do not know how to formulate the precise relationship without passing through $\shM$. Since $w$ does not seem to be flat in general, it is not apparent how to pull back classes under $w$. \end{remark}
To prove this result, the point will be to show that $\shM$ itself has a relative perfect obstruction theory such that the associated virtual class $[\shM]^{\rm vir}$ satisfies the equations \begin{equation}\label{first} \nu^![\oM_{g,n}(D,p_*\beta)]^{\rm vir} = [\shM]^{\rm vir}, \end{equation}
\begin{equation}\label{second} u^*[\shM]^{\rm vir} = [\oM_{g,n}(Z,\beta)]^{\rm vir}.
\end{equation} To obtain a relative perfect obstruction theory on $\shM$ we can just pull back the obstruction theory using $\nu$. The fact that Equation \ref{first} holds follows from the fundamental base change property of virtual fundamental classes which we will recall here for the reader's convenience.
\begin{proposition} Assume that we are given a fiber diagram of Artin stacks \begin{equation} \begin{aligned} \xymatrix@C=30pt { \shM\ar[r]^m\ar[d]& \shN\ar[d]^f\\
G \ar^\mu[r]& H. } \end{aligned} \end{equation} with $G$ and $H$ pure dimensional, $f$ of DM type, and so that $\shM$ admits a stratification by quotient stacks. If there is a relative perfect obstruction theory with virtual tangent bundle $\mathfrak E$ for $\shN$ over $H$, then there is an induced relative perfect obstruction theory with virtual tangent bundle $m^*{\mathfrak E}$ for $\shM$ over $G$, and the associated virtual fundamental classes satisfy $\mu^!([\shN]^{\rm vir}) = [\shM]^{\rm vir}$ if $\mu$ is either flat or an l.c.i. morphism. \end{proposition}
\begin{proof} This all follows immediately from results in \cite{Ma12a} using the definition that $[\shN]^{\rm vir} = f_{\mathfrak E}^!([H])$. The case of $\mu$ flat is a special case of Theorem 4.1 (ii) and the case where $\mu$ is lci is a special case of Theorem 4.3. \end{proof}
To apply the Proposition to our situation, we factor the morphism $\nu$ as the composition of the graph followed by the projection and use the fact that the latter is flat and the former is lci (since $\foM_{{H_2(D)^+}}$ is smooth). This is also how we define $\nu^!$ on Chow groups.
To obtain Equation \ref{second} we want to use that this pulled back obstruction theory has a geometric interpretation, which we describe now. The obstruction theory on $\oM_{g,n}(D,p_*\beta)$ arises from the fact that it is an open subset of the relative Hom stack $\Hom(\foC_{g,n}/\foM_{g,n},D)$. In keeping with our diagram above, we want to consider it instead as an open substack of $\Hom (\foC_{{H_2(D)^+}}/\foM_{{H_2(D)^+}}, D)$. For what follows, it will be convenient to notice that it is clearly contained in the open set where the labelling of the components of the fibers of the universal curve coming from the universal property of $\foM_{{H_2(D)^+}}$ agrees with the labelling coming from the homology of the image under the morphism. We denote this open subset by $\Hom^0(\foC_{H_2(D)^+} / \foM_{H_2(D)^+} , D)$. This stack parametrizes morphisms from homology labeled curves satisfying the condition that the pushforward of the homology class of a component is given by the label of that component. Since formation of the relative Hom scheme is compatible with base change, we know that $\shM$ is an open subset of $\Hom(\nu^{-1}\foC_{H_2(D)^+} / \foM^{\rm log}_{H_2(Z)^+}, D)$, and the obstruction theory obtained by pullback under $\nu$ is simply the natural obstruction theory for this Hom scheme.
The key observation for proving Equation \ref{second} is that $\shM$ can also be thought of as an open subset of $\Hom(\foC^{\rm log}_{H_2(Z)^+} / \foM^{\rm log}_{H_2(Z)^+}, D)$. Inside this space there is an analogous open $\Hom^0$ where we demand that the labelling obtained from the morphism to $D$ agrees with $p_*$ of the labelling coming from the universal property.
\begin{lemma} The natural morphism $$\Psi:\Hom^0(\nu^{-1}\foC_{H_2(D)^+} / \foM_{H_2(Z)^+}^{\rm log} , D) \to \Hom^0(\foC^{\rm log}_{H_2(Z)^+} / \foM^{\rm log}_{H_2(Z)^+} , D) $$ induced by the morphism $f: \foC^{\rm log}_{H_2(Z)^+} \to \nu^{-1}\foC_{H_2(D)^+} $ is an isomorphism.
\end{lemma} \begin{proof} The map $f$ just contracts some destabilized $\PP^1$'s. The superscripts 0 imply that the rational curves contracted by $f$ are necessarily contracted by every morphism parametrized by the $\Hom^0$ schemes we are considering. Given an $S$-valued point of $\foM_{H_2(Z)^+}^{\rm log}$ corresponding to a marked log curve with underlying nodal curve $C \to S$, denote by $C' \to S$ the family obtained by applying $\nu$ and $c:C \to C'$ the associated contraction mapping (which corresponds to $f$). A morphism $g: C' \to D$ corresponding to a point of $\Hom^0(\nu^{-1}\foC_{H_2(D)^+} / \foM_{H_2(Z)^+}^{\rm log} , D)$ is taken by $\Psi$ to $\Psi(g) = g\circ c : C \to D$. The statement that this morphism $\Psi$ gives an isomorphism amounts to the statement that any morphism $\tilde g : C \to D$ which is constant on those components of $C$ which are contracted by $c$ factors uniquely through $c$. This is obvious when $S$ is the spectrum of an algebraically closed field. To see that this holds over an arbitrary base, one needs to use the standard fact about contraction maps between families of curves that $c_*\shO_C = \shO_{C'}$. The result then follows from Lemma 2.2 of \cite{BM96}. \end{proof}
In addition to being isomorphic stacks, the natural obstruction theories on $\shM$ induced by these two descriptions agree, because the contracted genus zero components make no contribution to the cohomology of $f^*(\shT_D)$. To prove Equation \ref{second}, we note that we now have that $\oM(Z,\beta)$ and $\shM$ are two stacks with relative perfect obstruction theories over the same base, $\foM^{\rm log}_{H_2(Z)^+}$. They are both open subsets of stacks of morphisms from the same family of curves. In the case of the space of log stable maps to $Z$, we have that $\oM_{g,n}(Z,\beta)$ is an open subset of the logarithmic $\Hom$ stack $\Hom^{\rm log}(\foC^{\rm log}_{g,n}/\foM^{\rm log}_{g,n}, Z)$ parametrizing logarithmic maps from fibers of the universal family to $Z$. The obstruction theory for this scheme is given by the cohomology of the pullback of the logarithmic tangent bundle of $Z$. In particular, this obstruction complex depends only on the underlying morphism of schemes, not on the log structures. The obstruction theory of $\shM$ comes from the cohomology of $f^*(\shT_D)$. In order to compare them, we use the short exact sequence $$0\to \shT_{Z/D} \to \shT_Z\to \shT_D \to 0.$$
Given that we have $H^1(C,f^*\shT_{Z/D})=0$ for all stable maps $f:C \to Z$, it follows from the associated long exact sequence in cohomology that $u$ is smooth with relative cotangent complex supported in one term, given by $H^0(f^*\shT_{Z/D})$, and we have a very special case of a compatibility datum, implying via \cite[Corollary 4.9]{Ma12a} that $[\oM(Z)]^{\rm vir} = u^*[\shM]^{\rm vir}$, where $u^*$ denotes smooth pullback.
\qed
\section{Excluding nontrivial curves in $L_Y$} For given $\Gamma\in\Omega(\shL_0)$, recall the maps $G$ and $P$ from \S\ref{sec-degen-formula}. Let $r_V$ denote the number of edges of $\Gamma_V$, i.e.\ edges in $\Gamma$ adjacent to $V$. In the following, we abuse notation and write $n_V$ both for the set introduced in \S\ref{sec-degenformula} and for its size. Note that $P\circ G:\oM_\Gamma\ra \oM_{0,n}(X,\beta)$ factors through the \'etale map $\Phi$, say $P\circ G=P'\circ\Phi$ for $P':\bigodot_V\oM_V\ra \oM_{0,n}(X,\beta)$. Now $P'$ is induced by maps $P_V:\oM_V\ra\oM_{0,n_V+r_V}(X,\tilde p_*\beta_V)$ on the factors of $\bigodot_V\oM_V$; on each factor, $P_V$ is given by composing a stable map with $\tilde p$, which of course only has an effect for those $\oM_V$ with $V$ associated to $Y$. For a $Y$-vertex $V$, recall the forgetful morphism $w:\oM_{\Gamma_V}(Y(\log D_0),\beta_V)\ra\oM_{0,n_V+r_V}(D,p_*\beta_V)$ that we studied in the previous section.
\begin{lemma} \label{lem-zero-pushforward} If $\beta_V \in H^+_2(Y)=H_2^+(L_Y)$ is a curve class such that $p_*\beta_V \in H_2^+(D)$ is non-zero, then the class $[\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)]^{\rm vir}$ pushes forward to zero under the natural map to $\oM_{0,n_V+r_V}(D,p_*\beta_V)$. \end{lemma}
\begin{proof} As we said in the previous section, $\shT_{Y(\log D_0)/D}\cong\shO_Y(D_\infty)$. Thus, for any genus zero stable map $f:C\ra Y(\log D_0)$ of type $\Gamma_V$, we have $\rk H^0(C,f^*\shO_Y(D_\infty))=\beta_V\cdot D_\infty+1$ and this number is the relative virtual and actual dimension of the map $u$ in \eqref{main-diagram-Y-D}. This coincides with the relative dimension of $w$ because the map that forgets the log structure is of relative dimension zero. Denoting by $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)\stackrel{\pi}{\la}\shC\stackrel{f}{\ra} Y(\log D_0)$ the maps from the universal curve and setting $\shE=R^1\pi_*f^*\shO_Y(-D_\infty)$, we also have that $$ [\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)]^{\rm vir} = e(\shE) \cap [\oM_{\Gamma_V}(Y(\log D_0),\beta_V)]^{\rm vir}.$$ Since the rank of $\shE$ is $\beta_V \cdot D_\infty - 1$, we find the virtual dimension of $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)$ to be strictly greater than the virtual dimension of $\oM_{0,n_V+r_V}(D,p_*\beta_V)$. But Theorem~\ref{thm-pullback} implies that the pushforward of $[\oM_{\Gamma_V}(Y(\log D_0),\beta_V)]^{\rm vir}$ can be supported on any cycle that supports $[\oM_{0,n_V+r_V}(D,p_*\beta_V)]^{\rm vir}$. Since $\oM_{0,n_V+r_V}(D,p_*\beta_V)$ is a Deligne-Mumford stack and we are working in Chow groups with rational coefficients, this implies the desired vanishing. \end{proof}
\begin{remark} It was pointed out to us by Feng Qu that a result similar to Lemma~\ref{lem-zero-pushforward} was proved in \cite[Prop.~3.2.2]{LLQW16}, which would imply this result in the case $\shE = 0$. \end{remark}
\begin{proposition} \label{prop-nontrivial-in-Y-pushes-to-zero} If $\Gamma\in \Omega(\shL_0)$ has a vertex $V$ with bipartition membership in $Y$ and $\beta_V$ is not a multiple of the class of a fiber of $p$, then $P_*G_*\Phi^*\Delta^!\prod_V[\oM_V]^{\rm vir}=0$. \end{proposition}
\begin{proof} Let $\Gamma$ and $V$ be as in the assertion. Set $r:=r_V$. The argument here is simple, but we include a large diagram to remind the reader of the names of the maps. The obvious, but important, point is that the evaluation maps from $\oM_V = \oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)$ to $L_D$ factor through the map $P_V$ to $\oM_{0,n_V+r}(L_D, p_*\beta_V)$. The stack $M'$ below is defined to make the bottom right square Cartesian.
\[ \xymatrix@C=30pt { \oM_\Gamma \ar[d]_G\ar[r]^(.3){\Phi}& \oM_V\times_{L_D^r} \bigodot_{V'\neq V}\oM_{V'} \ar[r] \ar[d] & \oM_V\times \prod_{V'\neq V}\oM_{V'} \ar^{P_V\times id}[d]\\ \oM(X\cup Y)\ar[d]_P & M' \ar[ld]_\tau \ar[d]\ar[r] & \oM_{0,n_V+r}(L_D,p_*\beta_V) \times \prod_{V'\neq V}\oM_{V'} \ar^{\ev}[d]\\ \oM(X)&L_D^r\ar^{\Delta}[r]&( L_D)^{2r}. } \] The only point not yet explained is the existence of the map $\tau$ but this is just the usual clutching construction for boundary strata. Since $p_*\beta_V$ is nonzero, Lemma~\ref{lem-zero-pushforward} applies and we have $$P_{V*}[\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)]^{\rm vir}=0.$$ Rewriting the class in the proposition as $$\deg(\Phi) \cdot \tau_*\Delta^! \big(P_{V*}[\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)]^{\rm vir} \times \prod_{V'\neq V}[\oM_{V'}]^{\rm vir}\big)$$ the result follows immediately. \end{proof}
\begin{proof}[Proof of Theorem~\ref{prop-vanishing-pushforward}] Let us collect what is implied for the graph $\Gamma\in\Omega(\shL_0)$ if the pushforward to $\oM_{0,n}(X,\beta)$ of the corresponding virtual fundamental class is non-trivial: by Lemma~\ref{lemma-no-mult-edges-X}, the $X$-vertices of the graph have no more than one adjacent edge, and by Proposition~\ref{prop-nontrivial-in-Y-pushes-to-zero}, every curve component in $Y$ needs to be a multiple of a fiber. If $P_*G_*[\oM_\Gamma]^{\rm vir}$ is non-trivial, it already follows that $\Gamma$ has a unique $Y$-vertex $V$, and it remains to show that $V$ has only a single adjacent edge. If $\beta_V$ is a multiple of a fiber class, borrowing notation from the proof of Proposition~\ref{prop-nontrivial-in-Y-pushes-to-zero} and setting $\oM^\circ:=\prod_{V'\neq V}\oM_{V'}$, the map from $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V) \times_{(D\times\AA^1)^{r_V}} \oM^\circ$ to $\oM_{0,n}(X)$ factors through $D \times_{(D \times \AA^1)^{r_V}}\oM^\circ$. Hence, it suffices to check that the pushforward of the virtual class from $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)$ to $D$ vanishes, which is precisely what the next lemma achieves. \end{proof}
\begin{lemma} If $\beta_V$ is a multiple of the class of a fiber of $p:Y\ra D$, then the pushforward of $[\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)]^{\rm vir}$ under the evaluation map to $(D \times \AA^1)^{r_V}$ is trivial if $n_V+r_V>1$. \end{lemma}
\begin{proof} Set $r:=r_V$. The evaluation map from $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)$ to $D^r\times \AA^r$ factors through the embedding $D \to D^r\times \AA^r$ given by $\diag \times \{0\}$. However, the virtual dimension of $\oM_{\Gamma_V}(L_Y(\log L_D),\beta_V)$ is $\dim(D) + n_V+r-1$, so for $n_V+r>1$ the pushforward is a class of dimension strictly greater than $\dim(D)$ and hence vanishes. \end{proof}
\end{document} |
\begin{document}
\begin{abstract}
In this paper, we present monotone sequences of lower and upper bounds on the Perron value of a nonnegative matrix, and we study their strict monotonicity. Using these sequences, we give two combinatorial applications. One is to improve bounds on Perron values of rooted trees in combinatorial settings, with a view toward locating characteristic sets of trees. The other is to generate log-concave and log-convex sequences from the monotone sequences. \end{abstract}
\maketitle \tableofcontents
\section{Introduction and preliminaries}
The \textit{Perron value} $\rho(A)$ of a square nonnegative matrix $A$, which is the spectral radius of $A$, together with a \textit{Perron vector}, which is an eigenvector of $A$ associated with $\rho(A)$, has been exploited and has played important roles in many applications \cite{berman1994nonnegative, horn2012matrix,seneta2006non}. In particular, iterative analysis has contributed to approximating Perron values, to estimating how fast iterative methods converge, and so on \cite{sewell2014computational, hogben2013handbook, varga1962iterative}. In this article, rather than dealing with such generic questions in numerical analysis for Perron values, we concentrate our attention on sequences of lower and upper bounds on Perron values that are induced from particular iterative methods, which are presented in this section; and we investigate under what circumstances those sequences are strictly monotone. Using the sequences, we improve bounds on Perron values of bottleneck matrices for trees that are presented in \cite{andrade2017combinatorial, molitierno2018tight}. Furthermore, we show that the sequences, under some extra conditions, generate log-concave and log-convex sequences, so they can be used as a tool to see whether a given sequence is log-concave or log-convex.
To elaborate our aim in this article, we begin with some notation and terminology, and then we present the sequences of lower and upper bounds on Perron values. Any bold-faced letter denotes a column vector, and all matrices are assumed to be real and square throughout this paper. A matrix is \textit{nonnegative} (resp. \textit{positive}) if all the entries are nonnegative (resp. positive). Analogous definitions for a nonnegative vector and a positive vector follow. Let $\mathbb{R}^n_+$ be the set of all nonnegative vectors in $\mathbb{R}^n$, and $\mathbb{R}^n_{++}$ be the set of all positive vectors in $\mathbb{R}^n$. We denote by $\mathbf{1}_k$ (resp. \( \mathbf{0}_k \)) the all ones vector (resp. the zero vector) of size $k$, and by $J_{p,q}$ (resp. \( \mathbf{O}_{p,q} \)) the all ones matrix (resp. the zero matrix) of size $p\times q$. If $p=q=k$, we write $J_{p,q}$ and $\mathbf{O}_{p,q}$ as $J_k$ and $\mathbf{O}_{k}$. The subscripts are omitted if the sizes are clear from the context. We denote by $\mathbf e_i$ the column vector whose $i^{\text{th}}$ component is $1$ and whose other components are zero. For a vector \( \mathbf x \), \( (\mathbf x)_i \) denotes the \( i^{th} \) component of \( \mathbf x \). A matrix $A$ is said to be \textit{reducible} if there exists a permutation matrix $P$ such that $PAP^T$ is a block upper triangular matrix. If $A$ is not reducible, then we say that $A$ is \textit{irreducible}. We say that a nonnegative matrix $A$ is \textit{primitive} if there exists a positive integer $N$ such that $A^N$ is positive. A symmetric matrix $A$ is said to be \textit{positive definite} (resp. \textit{positive semidefinite}) if all eigenvalues of $A$ are positive (resp. nonnegative).
We state two well-known results giving bounds on Perron values. In particular, we state the Rayleigh--Ritz theorem in a form adapted to our purpose.
\begin{theorem}[The Collatz--Wielandt formula \cite{hogben2013handbook}]\label{CollatzWielandt}
Let $A$ be an $n\times n$ irreducible nonnegative matrix. Then,
\begin{align*}
\rho(A)=\max_{\mathbf x\in \mathbb{R}_{+}^n\backslash\{\mathbf{0}\}}\min_{\{i | (\mathbf x)_i>0\}}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}=\min_{\mathbf y\in\mathbb{R}_{++}^n}\max_{i}\frac{\left(A\mathbf y\right)_i}{\left(\mathbf y\right)_i}.
\end{align*}
This implies that for $\mathbf x\in \mathbb{R}_{+}^n\backslash\{\mathbf{0}\}$ and $\mathbf y\in \mathbb{R}_{++}^n$,
\begin{align*}
\min_{\{i | (\mathbf x)_i>0\}}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}\leq \rho(A)\leq \max_{i}\frac{\left(A\mathbf y\right)_i}{\left(\mathbf y\right)_i}.
\end{align*} \end{theorem}
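As a quick numerical illustration of the Collatz--Wielandt bounds, the following minimal sketch (assuming NumPy; the matrix $A$ and the test vector are illustrative choices, not taken from the paper) checks that the componentwise ratios of $A\mathbf y$ to a positive $\mathbf y$ sandwich the Perron value:

```python
import numpy as np

# Collatz--Wielandt: for irreducible nonnegative A and positive y,
#   min_i (Ay)_i / y_i  <=  rho(A)  <=  max_i (Ay)_i / y_i.
A = np.array([[0.0, 2.0],
              [3.0, 1.0]])        # irreducible and nonnegative
y = np.array([1.0, 1.0])          # an arbitrary positive test vector

ratios = (A @ y) / y              # componentwise ratios (Ay)_i / y_i
lower, upper = ratios.min(), ratios.max()

rho = max(abs(np.linalg.eigvals(A)))   # spectral radius; equals 3 here
print(lower <= rho <= upper)           # True (2 <= 3 <= 4)
```

Better choices of $\mathbf y$ tighten the interval $[\min_i, \max_i]$; iterating $\mathbf y \mapsto A\mathbf y$ is exactly what the sequences introduced below do.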
\begin{theorem}[The Rayleigh--Ritz theorem \cite{hogben2013handbook}]\label{thm:min-max}
Let \( A \) be an \( n\times n\) nonnegative symmetric matrix. Then,
\begin{align*}
\rho(A)=\max_{\mathbf x\in\mathbb{R}^n\backslash\{\mathbf{0}\}} \frac{\mathbf x^TA\mathbf x}{\mathbf x^T\mathbf x}.
\end{align*} \end{theorem}
Now we introduce the sequences of lower and upper bounds on Perron values that we shall deal with in this article. Let $A$ be an $n\times n$ nonnegative matrix with $A\neq \mathbf{O}$. For each integer $k\geq 1$, if $A$ is irreducible and $\mathbf x\in\mathbb{R}_{++}^n$, we define $a_k(A,\mathbf x):=\max_i\frac{(A^k\mathbf x)_i}{(A^{k-1}\mathbf x)_i}$ and $b_k(A,\mathbf x):=\min_i\frac{(A^k\mathbf x)_i}{(A^{k-1}\mathbf x)_i}$; and if $A$ is positive semidefinite and $\mathbf x\in\mathbb{R}_{+}^n$, then $c_k(A,\mathbf x):=\frac{\mathbf x^TA^k\mathbf x}{\mathbf x^TA^{k-1}\mathbf x}$. Note that the conditions on \( A \) and \( \mathbf x \) differ between the definitions of \( a_k(A,\mathbf x) \) and \( b_k(A,\mathbf x) \) and the definition of \( c_k(A,\mathbf x) \).
By \Cref{CollatzWielandt}, $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ are sequences of upper and lower bounds on $\rho(A)$, respectively; and taking $\mathbf y=A^{\frac{k-1}{2}}\mathbf x$ in \Cref{thm:min-max}, where $A^{\frac{k-1}{2}}$ denotes the positive semidefinite square root of $A^{k-1}$, we find that $(c_k(A,\mathbf x))_{k\geq 1}$ is a sequence of lower bounds on $\rho(A)$. In addition to those sequences, we refer the reader to \cite{szyld1992sequence,tacscci1998sequence} for other sequences of lower and upper bounds on Perron values.
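The three sequences are straightforward to compute. The sketch below (assuming NumPy; the particular $A$ and $\mathbf x$ are illustrative) uses a matrix that satisfies both sets of hypotheses at once, being irreducible, nonnegative, symmetric and positive definite, and checks the chain $b_k \le c_k \le \rho(A) \le a_k$ for small $k$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])     # irreducible, nonnegative, positive definite
x = np.array([1.0, 2.0])       # positive vector

def a_k(A, x, k):              # upper bounds: max_i (A^k x)_i / (A^{k-1} x)_i
    v = np.linalg.matrix_power(A, k) @ x
    w = np.linalg.matrix_power(A, k - 1) @ x
    return (v / w).max()

def b_k(A, x, k):              # lower bounds: min_i (A^k x)_i / (A^{k-1} x)_i
    v = np.linalg.matrix_power(A, k) @ x
    w = np.linalg.matrix_power(A, k - 1) @ x
    return (v / w).min()

def c_k(A, x, k):              # Rayleigh-quotient lower bounds
    return (x @ np.linalg.matrix_power(A, k) @ x) / \
           (x @ np.linalg.matrix_power(A, k - 1) @ x)

rho = max(abs(np.linalg.eigvals(A)))   # equals 3 for this A
for k in (1, 2, 3):
    print(b_k(A, x, k) <= c_k(A, x, k) <= rho <= a_k(A, x, k))   # True, True, True
```

Note that $c_k$ is a weighted average of the componentwise ratios, with weights $x_i(A^{k-1}\mathbf x)_i$, which is why it always lies between $b_k$ and $a_k$.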
We review some known results regarding monotonicity and convergence for the three sequences.
\begin{theorem}\cite{varga1962iterative}\label{thm:known}
Let $A$ be an $n\times n$ irreducible nonnegative matrix, and $\mathbf x\in\mathbb{R}_{++}^n$. Then,
$$b_1(A,\mathbf x)\leq b_2(A,\mathbf x)\leq\cdots\leq\rho(A)\leq\cdots\leq a_2(A,\mathbf x)\leq a_1(A,\mathbf x).$$ \end{theorem}
\begin{remark}\label{Remark:convergence}
Let $A$ be irreducible and nonnegative. The sequences in Theorem \ref{thm:known} are not necessarily convergent to the Perron value. A sufficient condition for the convergence of \( a_k(A,\mathbf x) \) and \( b_k(A,\mathbf x) \) is that \( A \) is primitive. By the Perron--Frobenius theorem, if $A$ is primitive, then $\rho(A)$ is greater in absolute value than all other eigenvalues of $A$. Hence it follows from \cite[Theorem 3.5.1]{sewell2014computational} that $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ converge to the Perron value as $k\rightarrow\infty$. \end{remark}
\begin{remark}\label{Remark:convergence2}
Let $A$ be nonnegative and positive semidefinite. From the power method, one can find that $(c_k(A,\mathbf x))_{k\geq 1}$ converges to $\rho(A)$ as $k\rightarrow\infty$. \end{remark}
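The convergence in \Cref{Remark:convergence} can be observed numerically. The following minimal sketch (assuming NumPy; the matrix is an illustrative primitive example whose Perron value is the golden ratio) shows the gap between $a_k$ and $b_k$ closing under power iteration:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])     # primitive: A^2 is positive; rho(A) = golden ratio
x = np.array([1.0, 1.0])

v = x.copy()
for _ in range(30):
    v = A @ v                  # v = A^k x (unnormalized power iteration)
ratios = (A @ v) / v           # componentwise ratios: max is a_k, min is b_k

rho = (1 + 5 ** 0.5) / 2
print(ratios.min() <= rho <= ratios.max())   # True: still valid bounds
print(ratios.max() - ratios.min() < 1e-9)    # True: the bounds have converged
```

In floating-point practice one would normalize $v$ at each step to avoid overflow; the small sizes here make that unnecessary.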
Here we describe the aim of this paper and its motivations. Our approach in \Cref{Sec2:sequences} is informed by the following motivation: in \cite{andrade2017combinatorial}, the so-called \textit{combinatorial Perron value}, which is a lower bound on ``the Perron value of a rooted (unweighted) tree'', may be used to estimate ``characteristic sets'' of trees, as will be explained in \Cref{subsec 1.1}. Not only the combinatorial Perron value but also other bounds may be used for the estimation, and sharper bounds improve the accuracy of locating the characteristic set. This leads us in \Cref{Sec2:sequences} to explore the strict monotonicity of $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$, and $(c_k(A,\mathbf x))_{k\geq 1}$, which produce sharper bounds than the combinatorial Perron value and other bounds in \cite{andrade2017combinatorial}, as will be shown in Subsection \ref{Subsec:3.1}. Furthermore, those strictly monotone sequences may be used for solving problems concerning bounds on Perron values of nonnegative matrices (especially combinatorial matrices) using only small powers of the matrices. In Subsection \ref{Subsec:3.1}, we improve the bound in \cite{molitierno2018tight} in order to demonstrate this capability.
The other motivation for \Cref{Sec2:sequences} is that the monotonicity of $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ under some additional conditions, as well as the monotonicity of $(c_k(A,\mathbf x))_{k\geq 1}$, enables us to obtain log-concave and log-convex sequences, as will be elaborated in Subsection \ref{subsec 1.2}. This can serve as a tool for proving that a given sequence is log-concave or log-convex, by checking whether the sequence corresponds to one of the three sequences. In \Cref{Sec2:sequences}, we study under what circumstances $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ generate log-concave and log-convex sequences. Using our findings, we present combinatorial sequences in Subsection \ref{Subsec:3.2}.
The structure of this paper is as follows. Subsections \ref{subsec 1.1} and \ref{subsec 1.2} contain the introduction and necessary background for the combinatorial applications in Subsections \ref{Subsec:3.1} and \ref{Subsec:3.2}, respectively. \Cref{Sec2:sequences} provides the results stated above, which are used in Subsections \ref{Subsec:3.1} and \ref{Subsec:3.2}. For the readability of this article, we defer to \Cref{appendix} parts of the proofs of some results in Subsection \ref{Subsec:3.1} that involve tedious calculations and basic techniques.
\subsection{Combinatorial application I}\label{subsec 1.1}
In this subsection, we aim to understand why we shall study the strict monotonicity of $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$, and $(c_k(A,\mathbf x))_{k\geq 1}$. We assume familiarity with basic material on graph theory. We refer the reader to \cite{chartrand1996graphs} for necessary background. All graphs are assumed to be simple and undirected.
Given a weighted, connected graph $G$ on vertices $1,\dots, n$, the \textit{Laplacian matrix} of $G$ is the $n\times n$ matrix $L(G)=[l_{i,j}]$, where $l_{i,j}$ is the negative of the weight of the edge joining $i$ and $j$ if $i$ and $j$ are adjacent, $l_{i,i}$ is the sum of the weights of the edges incident to vertex $i$ (the degree of vertex $i$ in the unweighted case), and $l_{i,j}=0$ for the remaining entries. The \textit{algebraic connectivity} $a(G)$ of $G$ is the second smallest eigenvalue of $L(G)$, and a corresponding eigenvector is called a \textit{Fiedler vector} of $G$. As the name suggests, this parameter is related to other parameters in terms of connectivity of graphs \cite{fiedler1973algebraic}. For a vertex $v$ of $G$, the \textit{bottleneck matrix $M$ at $v$} is the inverse of the matrix obtained from $L(G)$ by removing the row and column indexed by $v$. If $G$ is an unweighted tree, then the $(i,j)$-entry of $M$ is the number of edges which are simultaneously on the path from $i$ to $v$ and the path from $j$ to $v$; if $G$ is a weighted tree, or in other cases, we refer the interested reader to \cite{kirkland1996characteristic, kirkland1997distances} for the combinatorial interpretation of the entries of $M$. Suppose that $C_1,\dots,C_k$ are the components of the graph obtained from $G$ by removing $v$ and all incident edges, for some $k\geq 1$ (if $k\geq 2$, then $v$ is called a \textit{cut-vertex}). For $i=1,\dots,k$, we refer to the inverse of the principal submatrix of $L(G)$ corresponding to the vertices of $C_i$ as the \textit{bottleneck matrix for $C_i$}. It is known from \cite{fallat1998extremizing} that $M$ can be expressed as a block diagonal matrix in which the main diagonal blocks consist of the bottleneck matrices for $C_1,\dots,C_k$, which are symmetric and positive matrices. The \textit{Perron value of $C_i$} is defined as the Perron value of the bottleneck matrix for $C_i$. Then, the Perron value of $M$ is the maximum among the Perron values of $C_1,\dots,C_k$.
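The combinatorial description of the bottleneck matrix can be verified on a small instance. The sketch below (assuming NumPy) takes the unweighted path $1$--$2$--$3$--$4$ with $v=4$, inverts the corresponding Laplacian block, and checks that the $(i,j)$-entry counts the edges shared by the paths from $i$ and from $j$ to $v$:

```python
import numpy as np

# Laplacian of the unweighted path 1-2-3-4 (degrees 1,2,2,1, off-diagonals -1)
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

M = np.linalg.inv(L[:3, :3])   # bottleneck matrix at vertex 4

# e.g. paths 1->4 and 2->4 share the two edges {2,3} and {3,4}, so M[0,1] = 2
M_expected = np.array([[3, 2, 1],
                       [2, 2, 1],
                       [1, 1, 1]], dtype=float)
print(np.allclose(M, M_expected))   # True
```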
We say that $C_i$ is a \textit{Perron component at $v$} if its Perron value is the maximum among the Perron values of $C_1,\dots,C_k$. In the context of trees, regarding bottleneck matrices and their Perron values, the word ``component'' is conventionally replaced by ``branch'', and the related terminology above is adapted accordingly; for instance, Perron components at $v$ in a tree are referred to as \textit{Perron branches} at $v$.
As described in the earlier part of this subsection, we now consider unweighted trees instead of weighted, connected graphs. (Some results for weighted, connected graphs will be provided in \Cref{Subsec:3.1}.) In order to understand the characteristic set of a tree, we shall describe two characterizations of trees: one in terms of Fiedler vectors, and one in terms of Perron branches at particular vertices. (Generalized characterizations for weighted, connected graphs can be found in \cite{fiedler1975property,kirkland1998perron,molitierno2016applications}.)
Let $\mathcal T$ be a tree, and let $\mathbf x$ be a Fiedler vector of $\mathcal T$. It is shown in \cite{fiedler1975property} that exactly one of the following cases occurs: \begin{enumerate}[label=(\roman*)]
\item\label{typeII} No entry of $\mathbf x$ is zero. Then, there exist unique vertices $i$ and $j$ in $\mathcal T$ such that $i$ and $j$ are adjacent with $x_i>0$ and $x_j<0$. Further, the entries of $\mathbf x$ corresponding to vertices along any path in $\mathcal T$ which starts at $i$ and does not contain $j$ are increasing, while the entries of $\mathbf x$ corresponding to vertices along any path in $\mathcal T$ which starts at $j$ and does not contain $i$ are decreasing.
\item\label{typeI} There is a zero entry in $\mathbf x$. In this case, the subgraph induced by the set of vertices corresponding to $0$'s in $\mathbf x$ is connected. Moreover, there is a unique vertex $i$ such that $x_i=0$ and $i$ is adjacent to at least one vertex $j$ with $x_j\neq 0$. The entries of $\mathbf x$ corresponding to vertices along any path in $\mathcal T$ which starts at $i$ are either increasing, decreasing, or identically $0$. \end{enumerate} Trees corresponding to \ref{typeI} are said to be \textit{Type I}, and the vertex $i$ described in \ref{typeI} is called the \textit{characteristic vertex}; trees corresponding to \ref{typeII} are said to be \textit{Type II}, and the vertices $i$ and $j$ described in \ref{typeII} are called the \textit{characteristic vertices}. The \textit{characteristic set} of a tree is the set of its characteristic vertices. As shown in \cite{merris1987characteristic}, the characteristic set of a tree is independent of the choice of Fiedler vector. In \cite{andrade2017combinatorial,kirkland1998perron}, the authors regard the characteristic set as a notion of the ``middle'' of a tree, in the rough sense that the farther a vertex is from the characteristic set, the larger its corresponding entry in a Fiedler vector is in absolute value. Indeed, the authors of \cite{abreu2017characteristic} studied the characteristic set together with other notions of the middle of a tree---the distance between a centroid of a tree and its characteristic vertices, and the distance between a centre, a standard notion in graph theory, and its characteristic vertices; they show that the ratios to $n$ of the maximum such distances, taken over all trees on $n$ vertices, converge as $n\rightarrow\infty$.
For instance, the maximum distance between centroids and characteristic vertices taken over all trees on $n$ vertices asymptotically equals around $0.1129n$---that is, one may expect that the distance between a centroid of a tree on $n$ vertices and its characteristic vertex is less than $0.1129n$.
In order to find the characteristic set of a tree, one may use a Fiedler vector by observing its sign patterns. As an alternative, one may use the following characterization from \cite{kirkland1996characteristic}, which describes a connection between the algebraic connectivity and Perron branches at characteristic vertices. A tree $\mathcal T$ is Type I if and only if there exists a unique vertex $v$ in $\mathcal T$ such that there are two or more Perron branches $B_1,\dots,B_k$ at $v$ for some $k\geq 2$. In this case, \begin{align*}
a(\mathcal T)=\frac{1}{\rho(M_i)} \end{align*} where $M_i$ is the bottleneck matrix for $B_i$ for $i=1,\dots,k$. A tree $\mathcal T$ is Type II if and only if there exist unique adjacent vertices $i$ and $j$ in $\mathcal T$ such that the branch at $j$ containing $i$ is the unique Perron branch at $j$, and the branch at $i$ containing $j$ is the unique Perron branch at $i$. In this case, there exists $0< \gamma <1$ such that \begin{align*}
a(\mathcal T)=\frac{1}{\rho(M_1-\gamma J)}=\frac{1}{\rho(M_2-(1-\gamma) J)}, \end{align*} where $M_1$ (resp. $M_2$) is the bottleneck matrix for the branch at $j$ containing $i$ (resp. at $i$ containing $j$). As suggested in \cite{abreu2017characteristic}, we may estimate characteristic sets of trees through bounds on Perron values of branches---for this purpose, the combinatorial Perron value was introduced in that paper. That is, if the Perron values of the branches at some vertex in a tree are well understood, then one may decide from the characterization whether that vertex belongs to the characteristic set; further, if the algebraic connectivity is also known, then one may find at which vertices Perron branches have Perron values close to the reciprocal of the algebraic connectivity.
When it comes to finding the characteristic set through Perron branches and their Perron values, we need to understand Perron values of branches at some vertex $v$. Such branches may be identified with rooted trees, by considering the vertex of a branch adjacent to $v$ as the root. Henceforth, we shall focus on rooted trees instead of branches at some vertex in a tree. Given a rooted tree with root $x$, consider the tree $\mathcal T$ formed by adding a new pendant vertex $v$ adjacent to $x$. We define the \textit{bottleneck matrix $M$ of the rooted tree} as the bottleneck matrix at $v$ in $\mathcal T$; see Figure \ref{Fig:illustration} for an example. Since the rooted tree is unweighted, the $(i,j)$-entry of $M$ is the number of vertices (not edges) which are simultaneously on the path from $i$ to $x$ and on the path from $j$ to $x$. We also define the \textit{Perron value of the rooted tree} to be the Perron value of the bottleneck matrix of the rooted tree. This convention also appears in \cite{andrade2017combinatorial,ciardo2021perron}. \begin{figure}
\caption{An illustration of the bottleneck matrix of a rooted tree. The matrix $M$ is the bottleneck matrix for the branch at $v$ in $\mathcal T$, and $M$ is also the bottleneck matrix of a rooted tree with vertex set $\{x,2,3,4\}$ and root $x$.}
\label{Fig:illustration}
\end{figure}
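The rooted-tree convention can likewise be checked on a small example. The sketch below (assuming NumPy; the star-shaped rooted tree is an illustrative choice) roots a star at $x$ with leaves $2$ and $3$, attaches the pendant vertex $v$, and confirms that the entries of $M$ count common vertices on paths to the root:

```python
import numpy as np

# Tree T: root x with leaves 2, 3, plus a new pendant vertex v attached to x.
# Vertex order: x, 2, 3, v (v last, so we delete its row and column).
L = np.array([[ 3, -1, -1, -1],
              [-1,  1,  0,  0],
              [-1,  0,  1,  0],
              [-1,  0,  0,  1]], dtype=float)   # Laplacian of T

M = np.linalg.inv(L[:3, :3])   # bottleneck matrix of the rooted tree

# (i,j)-entry = number of VERTICES shared by the paths i -> x and j -> x,
# e.g. paths 2 -> x and 3 -> x share only the vertex x, so M[1,2] = 1
M_expected = np.array([[1, 1, 1],
                       [1, 2, 1],
                       [1, 1, 2]], dtype=float)
print(np.allclose(M, M_expected))    # True

rho = max(np.linalg.eigvalsh(M))     # Perron value of the rooted tree
print(rho > 0)                       # True: M is symmetric positive definite
```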
\subsection{Combinatorial application II}\label{subsec 1.2}
A sequence $(z_n)_{n\geq 1}$ is \textit{log-concave} (resp. \textit{log-convex}) if $z_{n-1}z_{n+1}\leq z_n^2$ (resp. $z_{n-1}z_{n+1}\geq z_n^2$) for $n\geq 2$; if the inequality is strict, $(z_n)_{n\geq 1}$ is \textit{strictly log-concave} (resp. \textit{strictly log-convex}). We refer the reader to \cite{brenti1989unimodal,huh2018combinatorial,stanley1989log} for an introduction and applications. The paper \cite{gross2015log} deals with a conjecture arising in topological graph theory that the genus distribution of every graph is log-concave. In \cite{liu2007log}, one can find operations preserving log-convexity and conditions, in terms of recurrence relations, for a sequence to be log-concave. For the log-concavity of symmetric functions, see \cite{sagan1992log}.
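The definition can be checked directly on two classical examples: the rows of Pascal's triangle are log-concave, while the factorials are log-convex. A minimal sketch (standard library only):

```python
import math

# Binomial row C(10, 0), ..., C(10, 10): log-concave
binom = [math.comb(10, k) for k in range(11)]
print(all(binom[k - 1] * binom[k + 1] <= binom[k] ** 2
          for k in range(1, 10)))                          # True

# Factorials 0!, 1!, ..., 9!: log-convex, since (n-1)!(n+1)!/n!^2 = (n+1)/n
fact = [math.factorial(n) for n in range(10)]
print(all(fact[n - 1] * fact[n + 1] >= fact[n] ** 2
          for n in range(1, 9)))                           # True
```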
We shall provide a systematic way of generating (strictly) log-concave and (strictly) log-convex sequences from the sequences $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$, and $(c_k(A,\mathbf x))_{k\geq 1}$.
\begin{definition}\label{def:log index}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. For $i_0\in\{1,\dots,n\}$, we say that $i_0$ is a \textit{log-concavity (resp. log-convexity) index of $A$ associated with $\mathbf x$}, or equivalently that $A$ has a \textit{log-concavity (resp. log-convexity) index $i_0$ associated with $\mathbf x$}, if there exists a positive number $K$ such that for $k\geq K$,
\begin{align*}
\frac{(A^k\mathbf x)_{i_0}}{(A^{k-1}\mathbf x)_{i_0}}=\max_{i}\frac{(A^k\mathbf x)_{i}}{(A^{k-1}\mathbf x)_{i}}\; \left(\text{resp.\ } \frac{(A^k\mathbf x)_{i_0}}{(A^{k-1}\mathbf x)_{i_0}}=\min_{i}\frac{(A^k\mathbf x)_{i}}{(A^{k-1}\mathbf x)_{i}}\right).
\end{align*} \end{definition}
From the following proposition, we can see that the names ``log-concavity index'' and ``log-convexity index'' suggest log-concave and log-convex sequences, respectively.
\begin{proposition}\label{Prop:logconcavity index}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. Then, the following hold:
\begin{enumerate}[label=(\roman*)]
\item Suppose that $i_1$ is a log-concavity index of $A$ associated with $\mathbf x$. Let $g_k=(A^k\mathbf x)_{i_1}$ for $k\geq 0$. Then, there exists some $k_1>0$ such that $(g_k)_{k\geq k_1-1}$ is log-concave. Moreover, if the sequence $(a_k(A,\mathbf x))_{k\geq k_1}$ is strictly decreasing, then $(g_k)_{k\geq k_1-1}$ is strictly log-concave.
\item Suppose that $i_2$ is a log-convexity index of $A$ associated with $\mathbf x$. Let $h_k=(A^k\mathbf x)_{i_2}$ for $k\geq 0$. Then, there exists some $k_2>0$ such that $(h_k)_{k\geq k_2-1}$ is log-convex. Moreover, if the sequence $(b_k(A,\mathbf x))_{k\geq k_2}$ is strictly increasing, then $(h_k)_{k\geq k_2-1}$ is strictly log-convex.
\end{enumerate} \end{proposition} \begin{proof}
Suppose that $i_1$ is a log-concavity index of $A$ associated with $\mathbf x$. Then, there exists some $k_1>0$ such that $\frac{g_{k}}{g_{k-1}}=a_k(A,\mathbf x)$ for $k\geq k_1$. By \Cref{thm:known}, $(a_k(A,\mathbf x))_{k\geq 1}$ is decreasing, so $\frac{g_{k+1}}{g_{k}}\leq \frac{g_{k}}{g_{k-1}}$ for $k\geq k_1$. Hence, $(g_k)_{k\geq k_1-1}$ is log-concave. Moreover, if the sequence $(a_k(A,\mathbf x))_{k\geq k_1}$ is strictly decreasing, then $(g_k)_{k\geq k_1-1}$ is strictly log-concave. From a similar argument, one can establish the remaining conclusions. \end{proof}
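A small numerical instance of this proposition (assuming NumPy; the matrix and vector are illustrative choices): for $A=\begin{psmallmatrix}2&1\\1&2\end{psmallmatrix}$ and $\mathbf x=(1,2)^T$, one computes $A^k\mathbf x=(\tfrac32\cdot 3^k-\tfrac12,\ \tfrac32\cdot 3^k+\tfrac12)^T$, so the first coordinate attains $\max_i (A^k\mathbf x)_i/(A^{k-1}\mathbf x)_i$ for every $k$, i.e.\ it is a log-concavity index with $k_1=1$, and $g_k=(A^k\mathbf x)_1 = 1, 4, 13, 40, 121, \dots$ is strictly log-concave:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 2.0])

g = [x[0]]                     # g_k = first coordinate of A^k x
v = x
for _ in range(8):
    v = A @ v
    g.append(v[0])             # g = 1, 4, 13, 40, 121, ...

# strict log-concavity: g_{k-1} g_{k+1} < g_k^2 for all interior k
print(all(g[k - 1] * g[k + 1] < g[k] ** 2
          for k in range(1, len(g) - 1)))    # True
```

Here $g_{k-1}g_{k+1}-g_k^2=-3^k<0$, matching the strict case of the proposition since $(a_k(A,\mathbf x))_{k\geq 1}$ is strictly decreasing for this pair $(A,\mathbf x)$.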
\begin{proposition}\label{prop:log-concave from c_k}
Let $A$ be an $n\times n$ nonnegative, positive semidefinite matrix and $\mathbf x\in\mathbb{R}_{+}^n$. Let $s_k=\mathbf x^TA^k\mathbf x$ for $k\geq 0$. If the sequence $(c_k(A,\mathbf x))_{k\geq 1}$ is (strictly) increasing, then $(s_k)_{k\geq 0}$ is (strictly) log-convex. \end{proposition}
We remark that given a sequence $(x_k)_{k\geq 1}$, if there exist a nonnegative matrix $A$ and a nonnegative vector $\mathbf x$ such that one of $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$, and $(c_k(A,\mathbf x))_{k\geq 1}$ generates $(x_k)_{k\geq 1}$ via \Cref{Prop:logconcavity index,prop:log-concave from c_k}, then $(x_k)_{k\geq 1}$ is log-concave or log-convex.
In \Cref{Sec2:sequences}, we examine under what circumstances the sufficient conditions of \Cref{Prop:logconcavity index,prop:log-concave from c_k} are satisfied; the related results are presented in \Cref{thm:logconcave seq,Thm:log-concave from c_k}.
\section{Strictly monotone sequences of lower and upper bounds on Perron values}\label{Sec2:sequences}
The main goal of this section is to find a condition for an irreducible nonnegative matrix to have a log-concavity or log-convexity index associated with a positive vector (\Cref{prop:log index}), in order to improve \Cref{Prop:logconcavity index}, and to find conditions for the sequences $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$ and $(c_k(A,\mathbf x))_{k\geq 1}$ to be strictly monotone. Regarding the latter, we explore conditions on an $n\times n$ irreducible nonnegative matrix $A$ and $\mathbf x\in\mathbb{R}_{++}^n$ under which the sequences $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ are strictly monotone (Theorems \ref{thm:UpperMonocity} and \ref{thm:UpperMonocity2}), and conditions on an $n\times n$ nonnegative, positive semidefinite matrix $A$ and $\mathbf x\in\mathbb{R}_{+}^n$ under which the sequence $(c_k(A,\mathbf x))_{k\geq 1}$ is strictly increasing (Theorem \ref{Cor:LowerMonocity}).
\subsection{Log-concavity and log-convexity indices}
We begin with the following proposition, which is used for obtaining \Cref{lem:index domnates others in two vectors}.
\begin{proposition}\label{prop:index domnates others in two vectors}
Let $n\geq 2$. Let $\mathbf x=(x_1,\dots,x_n)^T\in\mathbb{R}_{++}^n$, and $\mathbf y=(y_1,\dots,y_n)^T\in\mathbb{R}^n$. Suppose that either $\mathbf y\in\mathbb{R}_{++}^n$ or $-\mathbf y\in\mathbb{R}_{++}^n$. Then,
\begin{enumerate}[label=(\roman*)]
\item\label{statement1} there exists $i_1\in\{1,\dots,n\}$ such that for all $j_1\in\{1,\dots, n\}$,
\begin{align*}
\det\begin{bmatrix}
x_{i_1} & y_{i_1}\\
x_{j_1} & y_{j_1}
\end{bmatrix}\geq 0;
\end{align*}
\item\label{statement2} there exists $i_2\in\{1,\dots,n\}$ such that for all $j_2\in\{1,\dots, n\}$,
\begin{align*}
\det\begin{bmatrix}
x_{i_2} & y_{i_2}\\
x_{j_2} & y_{j_2}
\end{bmatrix}\leq 0.
\end{align*}
\end{enumerate} \end{proposition} \begin{proof}
Suppose that $\mathbf y\in\mathbb{R}_{++}^n$. We shall prove the statement \ref{statement1} by induction on $n$. Clearly, it holds for $n=2$. Let $n\geq 3$. Consider $x_2,\dots,x_n$ and $y_2,\dots,y_n$. By the induction hypothesis, there exists $k_1$ in $\{2,\dots,n\}$ such that $x_{k_1}y_{j_1}-y_{k_1}x_{j_1}\geq 0$ for all $j_1\in\{2,\dots, n\}$.
Suppose that $x_1y_{k_1}-y_1x_{k_1}\geq0$. Since $x_1y_{k_1}\geq y_1x_{k_1}>0$ and $x_{k_1}y_{j_1}\geq y_{k_1}x_{j_1}>0$, we have $x_1y_{k_1}x_{k_1}y_{j_1}\geq y_1x_{k_1}y_{k_1}x_{j_1}$, and so $x_1y_{j_1}\geq y_1x_{j_1}$. Hence, $\det\begin{bmatrix}
x_{1} & y_{1}\\
x_{j} & y_{j}
\end{bmatrix}\geq 0$ for $j=2,\dots,n$, and so $1$ is our desired index in \ref{statement1}. If $x_1y_{k_1}-y_1x_{k_1}\leq 0$, then $\det\begin{bmatrix}
x_{k_1} & y_{k_1}\\
x_{j} & y_{j}
\end{bmatrix}\geq 0$ for all $j\in\{1,\dots, n\}$. Hence, by induction, the statement \ref{statement1} holds for $\mathbf y>0$.
If $-\mathbf y\in\mathbb{R}_{++}^n$, an analogous argument establishes \ref{statement1}.
Note that for a square matrix, changing the sign of a column switches the sign of the determinant.
Therefore, statement \ref{statement2} follows from \ref{statement1} by changing the sign of the vector \( \mathbf y \) above. \end{proof}
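When $\mathbf x$ and $\mathbf y$ are both positive, the proposition admits a concrete restatement: since $x_iy_j-y_ix_j\geq 0$ is equivalent to $y_j/x_j\geq y_i/x_i$ for positive entries, an index minimizing (resp. maximizing) $y_i/x_i$ serves as $i_1$ (resp. $i_2$). A small sketch with illustrative data (not from the text):

```python
from fractions import Fraction as F

# Illustrative positive data (not from the text).
x = [F(3), F(1), F(4), F(2)]
y = [F(2), F(5), F(1), F(3)]
n = len(x)

def det2(i, j):
    # det [[x_i, y_i], [x_j, y_j]]
    return x[i] * y[j] - y[i] * x[j]

i1 = min(range(n), key=lambda i: y[i] / x[i])  # candidate for statement (i)
i2 = max(range(n), key=lambda i: y[i] / x[i])  # candidate for statement (ii)

ok1 = all(det2(i1, j) >= 0 for j in range(n))  # all determinants >= 0
ok2 = all(det2(i2, j) <= 0 for j in range(n))  # all determinants <= 0
```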
\begin{lemma}\label{lem:index domnates others in two vectors}
Let $n\geq 2$. Let $\mathbf x=(x_1,\dots,x_n)^T\in\mathbb{R}_{++}^n$, and $\mathbf y=(y_1,\dots,y_n)^T\in\mathbb{R}^n$. Then, the statements \ref{statement1} and \ref{statement2} in Proposition \ref{prop:index domnates others in two vectors} hold. \end{lemma} \begin{proof}
Let $R_1=\{1\leq i\leq n\,|\,y_i<0\}$, $R_2=\{1\leq i\leq n\,|\,y_i=0\}$, and $R_3=\{1\leq i\leq n\,|\,y_i>0\}$. If $R_1=\emptyset$, then any index in $R_2$ (or, when $R_2$ is also empty, the index supplied by Proposition \ref{prop:index domnates others in two vectors}) serves as the index in \ref{statement1}; the case $R_3=\emptyset$ for \ref{statement2} is analogous. So we may assume that $R_1$ and $R_3$ are nonempty. By Proposition \ref{prop:index domnates others in two vectors}, there exist $k_1\in R_1$ and $k_3\in R_3$ such that $\mathrm{det}\begin{bmatrix}
x_{k_1} & y_{k_1}\\
x_{j_1} & y_{j_1}
\end{bmatrix}\geq 0$ for $j_1\in R_1$ and $\mathrm{det}\begin{bmatrix}
x_{k_3} & y_{k_3}\\
x_{j_3} & y_{j_3}
\end{bmatrix}\leq 0$ for $j_3\in R_3$. We can readily see that $\mathrm{det}\begin{bmatrix}
x_{k_1} & y_{k_1}\\
x_{j} & y_{j}
\end{bmatrix}>0$ for $j\in R_2\cup R_3$; and $\mathrm{det}\begin{bmatrix}
x_{k_3} & y_{k_3}\\
x_{j} & y_{j}
\end{bmatrix}<0$ for $j\in R_1\cup R_2$. Therefore, from the indices $k_1$ and $k_3$, our desired conclusion follows. \end{proof}
Here is an interim result used to deduce the main results of this section (Theorems \ref{thm:logconcave seq} and \ref{thm:UpperMonocity2}).
\begin{proposition}\label{prop:log index}
Let $A$ be an $n\times n$ irreducible, nonnegative, positive semidefinite matrix, and let $\mathbf x\in\mathbb{R}_{++}^n$. Then, there exists a log-concavity (resp. log-convexity) index of $A$ associated with $\mathbf x$. \end{proposition}
\begin{proof}
Let $F(p,q;k)=\left(\mathbf e_p^T A^k\mathbf x\right)\left(\mathbf e_q^T A^{k-1}\mathbf x\right)-\left(\mathbf e_q^T A^k\mathbf x\right)\left(\mathbf e_p^T A^{k-1}\mathbf x\right)$ for $k\geq 1$ and $p,q\in\{1,\dots,n\}$. Then, $F(p,q;k)\ge0$ if and only if $\frac{(A^k\mathbf x)_{p}}{(A^{k-1}\mathbf x)_{p}}\ge\frac{(A^k\mathbf x)_{q}}{(A^{k-1}\mathbf x)_{q}}$. In order to show the existence of a log-concavity index of $A$ associated with $\mathbf x$, we shall prove the following claim.
\begin{center}
\begin{enumerate}[label=(C\arabic*)]
\item\label{claim 1} There exist $\hat{p}\in\{1,\dots,n\}$ and $K\geq 1$ such that for each $q\in\{1,\dots,n\}$, $F(\hat{p},q;k)\ge 0$ for all $k\geq K$.
\end{enumerate}
\end{center}
Let $\mu_1,\dots,\mu_\ell$ be the distinct eigenvalues of $A$ with $\mu_1>\dots>\mu_\ell\geq 0$ for some $\ell\geq 2$. Let $E_i$ be the orthogonal projection matrix onto the eigenspace of $A$ associated with $\mu_i$ for $i=1,\dots,\ell$. From the spectral decomposition, we have
\begin{align*}
A^k=\sum_{i=1}^{\ell}\mu_i^kE_i.
\end{align*}
Let $\mathbf y_i=E_i\mathbf x$ for $i=1,\dots,\ell$, and let \( y_{s,t}=(\mathbf y_t)_s \). We can find that
\begin{align}\nonumber
F(p,q;k)=&\left(\sum_{i=1}^{\ell}\mu_i^ky_{p,i}\right)\left(\sum_{i=1}^{\ell}\mu_i^{k-1}y_{q,i}\right)-\left(\sum_{i=1}^{\ell}\mu_i^ky_{q,i}\right)\left(\sum_{i=1}^{\ell}\mu_i^{k-1}y_{p,i}\right)\\\nonumber
=&\sum_{1\leq i<j\leq \ell}(\mu_i\mu_j)^{k-1}(\mu_i-\mu_j)y_{p,i}y_{q,j}-\sum_{1\leq i<j\leq \ell}(\mu_i\mu_j)^{k-1}(\mu_i-\mu_j)y_{q,i}y_{p,j}\\\label{det condition}
=&\sum_{1\leq i<j\leq \ell}(\mu_i\mu_j)^{k-1}(\mu_i-\mu_j)\det\begin{bmatrix}
y_{p,i} & y_{p,j}\\
y_{q,i} & y_{q,j}
\end{bmatrix}.
\end{align}
Before we consider the claim \ref{claim 1}, we shall find the dominant term of $F(p,q;k)$ for sufficiently large \( k \), which determines the sign of $F(p,q;k)$. For brevity, we write \( p\succ q \) (resp. $p\succeq q$) if \( F(p,q;k)>0 \) (resp. $F(p,q;k)\geq 0$) for sufficiently large \( k \). Let $p,q\in\{1,\dots,n\}$. Suppose that $F(p,q;k)\neq 0$ for some $k\geq 1$. Then,
$$m_0=\min\left\{m=2,\dots,\ell\,\middle|\,\det\begin{bmatrix}
y_{p,1} & y_{p,m}\\
y_{q,1} & y_{q,m}
\end{bmatrix}\neq 0\right\}$$ is well-defined. Suppose that $\mu_{m_1}\mu_{m_2}\geq \mu_1\mu_{m_0}$ for some $1\leq m_1<m_2\leq \ell$ with $(m_1,m_2)\neq (1,m_0)$. Then, $m_1$ and $m_2$ must lie between $1$ and ${m_0}$ (note that $y_{p,1}$ and $y_{q,1}$ are positive, since $\mathbf y_1=E_1\mathbf x$ is a positive Perron vector). Since $\det\begin{bmatrix}
y_{p,1} & y_{p,m_1}\\
y_{q,1} & y_{q,m_1}
\end{bmatrix}=\det\begin{bmatrix}
y_{p,1} & y_{p,m_2}\\
y_{q,1} & y_{q,m_2}
\end{bmatrix}=0$, we have $\det\begin{bmatrix}
y_{p,m_1} & y_{p,m_2}\\
y_{q,m_1} & y_{q,m_2}
\end{bmatrix}=0$. Hence, the dominant term of $F(p,q;k)$ is
\begin{equation}\label{eq:dominant term of F}
(\mu_1-\mu_{m_0})\det\begin{bmatrix}
y_{p,1} & y_{p,{m_0}}\\
y_{q,1} & y_{q,{m_0}}
\end{bmatrix}(\mu_1\mu_{m_0})^{k-1}.
\end{equation}
Thus, if $\det\begin{bmatrix}
y_{p,1} & y_{p,{m_0}}\\
y_{q,1} & y_{q,{m_0}}
\end{bmatrix}>0$, then \( p\succ q \). Note that $\det\begin{bmatrix}
y_{p,1} & y_{p,{m_0}}\\
y_{q,1} & y_{q,{m_0}}
\end{bmatrix}=0$ does not imply that $F(p,q;k)=0$ for sufficiently large $k$.
In order to establish the claim \ref{claim 1}, it suffices to show the following claim.
\begin{center}
\begin{enumerate}[label=(C2)]
\item\label{claim 2} There exists $\hat{p}\in\{1,\dots,n\}$ such that $\hat{p}\succeq q$ for all $q\in\{1,\dots,n\}$.
\end{enumerate}
\end{center}
We note that if $A^{l_1}\mathbf x$ is a Perron vector of $A$ for some $l_1\geq 0$, then so is $A^{l_2}\mathbf x$ for every $l_2\geq l_1$. If there exists an integer $K\geq 1$ such that $F(p,q;k)=0$ for all $p,q\in\{1,\dots,n\}$ and $k\geq K$, \textit{i.e.}, $A^{k-1}\mathbf x$ is a Perron vector of $A$ for each $k\geq K$, then every index is a log-concavity index. We suppose instead that for each $k\geq 1$, $F(p,q;k)\neq 0$ for some $p,q\in\{1,\dots,n\}$, that is, $A^{k-1}\mathbf x$ is not a Perron vector for any $k\geq 1$. Then, we may choose $$j_0=\min\left\{j=2,\dots,\ell\,\middle|\,\det\begin{bmatrix}
y_{p,1} & y_{p,j}\\
y_{q,1} & y_{q,j}
\end{bmatrix}\neq 0\;\text{for some}\; 1\leq p,q\leq n\right\}.$$
By Lemma \ref{lem:index domnates others in two vectors}, there exists $p_0\in\{1,\dots, n\}$ such that
$$\det\begin{bmatrix}
y_{p_0,1} & y_{p_0,j_0}\\
y_{q,1} & y_{q,j_0}
\end{bmatrix}\geq 0$$
for $q\in\{1,\dots, n\}$. Let $$X_1=\left\{q=1,\dots,n\,\middle|\,\det\begin{bmatrix}
y_{p_0,1} & y_{p_0,j_0}\\
y_{q,1} & y_{q,j_0}
\end{bmatrix}=0\right\}.$$
For each $q\in\{1,\dots, n\}\backslash X_1$, the dominant term of $F(p_0,q;k)$ is positive, and thus \( p_0\succ q \). If $X_1=\{p_0\}$, then $p_0$ is a log-concavity index.
Suppose $|X_1|>1$. Then, to complete the proof, we need to find some $p_1\in X_1$ such that $p_1\succeq q$ for $q\in X_1$, which implies $p_1\succeq q$ for $q\in \{1,\dots,n\}$. Note that considering how $j_0$ is defined, we have $\det\begin{bmatrix}
y_{p,1} & y_{p,j}\\
y_{q,1} & y_{q,j}
\end{bmatrix}=0$ for $p,q\in X_1$ and \( 2\leq j\le j_0 \). If $\det\begin{bmatrix}
y_{p,1} & y_{p,j}\\
y_{q,1} & y_{q,j}
\end{bmatrix}=0$ for all $p,q\in X_1$ and $j_0+1\leq j\leq \ell$, then we have $F(p,q;k)= 0$ for all $p,q\in X_1$ and $k\geq 1$, and it follows that all elements in $X_1$ are log-concavity indices. We now suppose that
$$j_1=\min\left\{j=j_0+1,\dots,\ell\,\middle|\,\det\begin{bmatrix}
y_{p,1} & y_{p,j}\\
y_{q,1} & y_{q,j}
\end{bmatrix}\neq 0\;\text{for some}\; p,q\in X_1\right\}$$
is well-defined. Applying Lemma \ref{lem:index domnates others in two vectors} to the vectors $(y_{q,1})_{q\in X_1}$ and $(y_{q,j_1})_{q\in X_1}$, we obtain $p_1\in X_1$ such that
$$\det\begin{bmatrix}
y_{p_1,1} & y_{p_1,j_1}\\
y_{q,1} & y_{q,j_1}
\end{bmatrix}\geq 0$$
for $q\in X_1$. Let
$$X_2=\left\{q\in X_1\,\middle|\,\det\begin{bmatrix}
y_{p_1,1} & y_{p_1,j_1}\\
y_{q,1} & y_{q,j_1}
\end{bmatrix}=0\right\}.$$
If $X_2=\{p_1\}$, then it follows that $p_1$ is a log-concavity index. If $|X_2|>1$, one can apply a similar argument as in the case $|X_1|>1$. Since this process terminates after finitely many steps, we conclude that there exists a log-concavity index, as claimed.
Regarding the existence of a log-convexity index of $A$ associated with $\mathbf x$, an analogous argument shows that there exists an index $\hat{r}$ such that $q \succeq \hat{r}$ for all $q\in\{1,\dots,n\}$. \end{proof}
\begin{remark}\label{Remark:finding index}
Continuing with the notation of Proposition \ref{prop:log index} and examining its proof, one can find log-concavity and log-convexity indices in a few steps, as will be demonstrated in \Cref{Subsec:3.2}, in the following particular cases: \begin{enumerate*}[label=(\alph*)]
\item\label{case a} $|X_1|=1$, \item\label{case b} $|X_1|>1$ and $\det\begin{bmatrix}
y_{p,1} & y_{p,j}\\
y_{q,1} & y_{q,j}
\end{bmatrix}=0$ for all $p,q\in X_1$ and $2\leq j\leq \ell$.
\end{enumerate*}
For the former case, the element of $X_1$ is a log-concavity index of $A$. For the latter, all elements of $X_1$ are log-concavity indices of $A$. In both cases, if $A^{k-1}\mathbf x$ is not a Perron vector for any $k\geq 1$, then the set of log-concavity indices of $A$ is exactly $X_1$. A similar argument applies to log-convexity indices of $A$. \end{remark}
\begin{theorem}\label{thm:logconcave seq}
Let $A$ be an $n\times n$ irreducible, nonnegative, positive semidefinite matrix and $\mathbf x\in\mathbb{R}_{++}^n$. From Proposition \ref{prop:log index}, we may define $i_1$ and $i_2$ to be a log-concavity index and log-convexity index, respectively. We let $g_k=(A^{k-1}\mathbf x)_{i_1}$ and $h_k=(A^{k-1}\mathbf x)_{i_2}$ for $k\geq 1$. Then, $(g_k)_{k\geq k_1}$ is log-concave for some $k_1\geq 1$, and $(h_k)_{k\geq k_2}$ is log-convex for some $k_2\geq 1$. \end{theorem} \begin{proof}
It follows from \Cref{Prop:logconcavity index}. \end{proof}
\subsection{Strict monotonicity of sequences $(a_k(A,\mathbf x))_{k\geq 1}$, $(b_k(A,\mathbf x))_{k\geq 1}$, and $(c_k(A,\mathbf x))_{k\geq 1}$}
First, we shall find two conditions for sequences $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ to be strictly monotone. We begin with a simple lemma, which will be used in Propositions \ref{prop:inequality} and \ref{prop:inequality2}.
\begin{lemma}\label{lem:SumOfRatios}
Let $a_i$ and $b_i$ be positive numbers for $i=1,\dots,n$. Then,
\begin{enumerate}[label=(\roman*)]
\item\label{fact1} if $\frac{a_1}{b_1}\geq \frac{a_j}{b_j}$ for $1\leq j\leq n$, then
$
\frac{a_1}{b_1}\geq \frac{a_1+\cdots+a_n}{b_1+\cdots+b_n}
$; and\\
\item\label{fact2} if $\frac{a_1}{b_1}\leq \frac{a_j}{b_j}$ for $1\leq j\leq n$, then
$
\frac{a_1}{b_1}\leq \frac{a_1+\cdots+a_n}{b_1+\cdots+b_n}.
$
\end{enumerate} \end{lemma}
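Lemma \ref{lem:SumOfRatios} is the classical mediant inequality: parts \ref{fact1} and \ref{fact2} together say that the aggregate ratio $\frac{a_1+\cdots+a_n}{b_1+\cdots+b_n}$ is squeezed between the smallest and largest of the individual ratios $a_i/b_i$. A small sketch with illustrative data (not from the text):

```python
from fractions import Fraction as F

# Illustrative positive data (not from the text).
a = [F(7), F(2), F(3)]
b = [F(2), F(1), F(4)]

ratios = [ai / bi for ai, bi in zip(a, b)]  # 7/2, 2, 3/4
mediant = sum(a) / sum(b)                   # (7+2+3)/(2+1+4) = 12/7

# The mediant lies between the extreme individual ratios.
```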
\begin{remark}\label{lem:notperron}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. It can be readily seen that $\mathbf x$ is not a Perron vector if and only if there exists $j\in\{1,\dots,n\}$ such that $\frac{\left(A\mathbf x\right)_{j}}{\left(\mathbf x\right)_{j}}\neq \max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}$. \end{remark}
\begin{proposition}\label{prop:inequality}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. Suppose that $\mathbf x$ is not a Perron vector.
Then, we have the following:
\begin{enumerate}[label=(\roman*)]
\item\label{result1} If $\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}\leq \frac{\left(A\mathbf x\right)_{j}}{\left(\mathbf x\right)_{j}}$ for $j=1,\dots,n$, then $\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}< \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$.
\item\label{result2} Suppose that there exist $i_0,j_0\in\{1,\dots,n\}$ such that $\frac{\left(A\mathbf x\right)_{j_0}}{\left(\mathbf x\right)_{j_0}}<\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}=\frac{\left(A^2\mathbf x\right)_{i_0}}{\left(A\mathbf x\right)_{i_0}}$. Assume $a_{i_0,j_0}\neq 0$. Then, $\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}< \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$.
\end{enumerate} \end{proposition} \begin{proof} Under the hypothesis of \ref{result1}, since \( \mathbf x \) is not a Perron vector, Remark \ref{lem:notperron} provides an index $j$ with $\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}<\frac{\left(A\mathbf x\right)_{j}}{\left(\mathbf x\right)_{j}}\leq \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$, and hence \ref{result1} follows.
Now we suppose that the hypotheses of \ref{result2} hold. Considering Theorem~\ref{thm:known}, we assume to the contrary that $$\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}=\max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}.$$ Let $i_0$ and $j_0$ be the indices in the hypotheses. Then, $$\frac{\left(A^{2}\mathbf x\right)_{i_0}}{\left(A\mathbf x\right)_{i_0}}=\max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}\geq \frac{\left(A\mathbf x\right)_j}{\left(\mathbf x\right)_j}$$ for all $1\leq j\leq n$. Let $\mathbf a_i^T$ be the $i^\text{th}$ row of the matrix $A$ for $i=1,\dots, n$. For each $j=1,\dots,n$, \begin{align}\nonumber
&\frac{\mathbf e_{i_0}^TA^2\mathbf x}{\mathbf e_{i_0}^TA\mathbf x}-\frac{\mathbf e_j^TA\mathbf x}{\mathbf e_j^T\mathbf x}\geq 0\\\label{Temp:inequality0}
\iff & \frac{\mathbf e_{i_0}^TA^2\mathbf x}{\mathbf e_{i_0}^TA\mathbf x}\mathbf e_j^T\mathbf x-\mathbf e_j^TA\mathbf x\geq 0\\\nonumber
\iff & (\mathbf e_{i_0}^TA^2\mathbf x)(\mathbf e_j^T\mathbf x)-(\mathbf e_{i_0}^TA\mathbf x)(\mathbf e_j^TA\mathbf x)\geq 0\\\label{Temp:inequality1}
\iff & (\mathbf a_{i_0}^TA\mathbf x)x_j-(\mathbf a_{i_0}^T\mathbf x)(\mathbf a_j^T\mathbf x)\geq 0\\\label{Temp:inequality}
\iff & \mathbf a_j^T\mathbf x\leq \frac{\left(\sum_{k=1}^n a_{i_0,k}\mathbf a_k^T\mathbf x\right)x_j}{\mathbf a_{i_0}^T\mathbf x}. \qquad (\text{since } \mathbf a_{i_0}^T\mathbf x>0) \end{align}
By the hypothesis, the inequality in \eqref{Temp:inequality} is strict for \( j=j_0 \). Since $a_{i_0,j_0}\neq 0$, multiplying both sides of \eqref{Temp:inequality} by $a_{i_0,j}$ and summing over $j$, we obtain \[
\sum_{l=1}^n a_{i_0,l}\mathbf a_l^T\mathbf x<\sum_{l=1}^n a_{i_0,l}\left(\frac{\left(\sum_{k=1}^n a_{i_0,k}\mathbf a_k^T\mathbf x\right)x_l}{\mathbf a_{i_0}^T\mathbf x}\right). \]
Now let $\frac{\left(A\mathbf x\right)_{k_0}}{\left(\mathbf x\right)_{k_0}}=\max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$ for some $k_0$. Since \( \max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}>\frac{\left(A\mathbf x\right)_{j_0}}{\left(\mathbf x\right)_{j_0}} \), we must have $j_0\neq k_0$. Starting from \eqref{Temp:inequality1} for $j=k_0$, we obtain a contradiction from the following argument: \begin{align}\nonumber
0\leq&\left(\sum_{l=1}^n a_{i_0,l}\mathbf a_l^T\mathbf x\right)x_{k_0}-\left(\sum_{l=1}^na_{i_0,l}x_l\right)(\mathbf a_{k_0}^T\mathbf x)\\\nonumber
<&\left(\sum_{l=1}^n a_{i_0,l}\left(\frac{\left(\sum_{k=1}^n a_{i_0,k}\mathbf a_k^T\mathbf x\right)x_l}{\mathbf a_{i_0}^T\mathbf x}\right)\right)x_{k_0}-\left(\sum_{l=1}^na_{i_0,l}x_l\right)(\mathbf a_{k_0}^T\mathbf x)\\\nonumber
=&\sum_{l=1}^n a_{i_0,l}x_l\left(\frac{\sum_{k=1}^n a_{i_0,k}\mathbf a_k^T\mathbf x}{\sum_{k=1}^na_{i_0,k}x_k}x_{k_0}-\mathbf a_{k_0}^T\mathbf x\right)\\\label{Temp:inequality2}
\leq&\sum_{l=1}^n a_{i_0,l}x_l\left(\frac{\mathbf a_{k_0}^T\mathbf x}{x_{k_0}}x_{k_0}-\mathbf a_{k_0}^T\mathbf x\right)=0. \end{align} The inequality in \eqref{Temp:inequality2} follows from \ref{fact1} of Lemma \ref{lem:SumOfRatios} with \( \frac{\mathbf a_{k_0}^T\mathbf x}{x_{k_0}}\ge \frac{\mathbf a_k^T\mathbf x}{x_k}=\frac{a_{i_0,k}\mathbf a_k^T\mathbf x}{a_{i_0,k}x_k} \) for \( 1\le k\le n \). Therefore, $\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i} < \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$. \end{proof}
\begin{remark}
Consider the following irreducible nonnegative matrix $A$:
\begin{align*}
A=\begin{bmatrix}
2 & 2 & 2\\
3 & 3 & 0\\
1 & 1 & 1
\end{bmatrix}.
\end{align*}
Let $\mathbf x=\mathbf{1}$. It can be readily checked that $6=\max_i\frac{(A^2\mathbf x)_i}{(A\mathbf x)_i}=\frac{(A^2\mathbf x)_2}{(A\mathbf x)_2}>\frac{(A^2\mathbf x)_1}{(A\mathbf x)_1}=\frac{(A^2\mathbf x)_3}{(A\mathbf x)_3}$; and $6=\frac{(A\mathbf x)_1}{(\mathbf x)_1}=\frac{(A\mathbf x)_2}{(\mathbf x)_2}>\frac{(A\mathbf x)_3}{(\mathbf x)_3}$. Since $a_{2,3}=0$, we can see that the hypothesis of \ref{result2} in Proposition \ref{prop:inequality} is not satisfied. Moreover,
$\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}=\max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$. \end{remark}
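The computations in this remark can be checked directly; the following sketch reproduces the stated ratios:

```python
# Checking the computations in the preceding remark.
A = [[2, 2, 2], [3, 3, 0], [1, 1, 1]]
x = [1, 1, 1]

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

Ax = matvec(A, x)     # [6, 6, 3]
AAx = matvec(A, Ax)   # [30, 36, 15]

r1 = [p / q for p, q in zip(Ax, x)]    # (A x)_i / x_i
r2 = [p / q for p, q in zip(AAx, Ax)]  # (A^2 x)_i / (A x)_i

# max r2 = max r1 = 6; the max of r2 is attained only at the second
# coordinate, and the (2,3) entry of A is 0, so hypothesis (ii) fails.
```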
\begin{remark}
In this remark, we shall provide a concrete example of \textit{non-semipositive vectors} \cite{tsatsomeros2016geometric} for non-singular $M$-matrices.
Continuing \ref{result2} of Proposition~\ref{prop:inequality} with the same notation, the left side of the inequality in \eqref{Temp:inequality0} can be written as
\begin{align*}
\left(\left(\frac{\mathbf e_{i_0}^TA^2\mathbf x}{\mathbf e_{i_0}^TA\mathbf x}\right)I-A \right)\mathbf x.
\end{align*}
Let $B=\left(\frac{\mathbf e_{i_0}^TA^2\mathbf x}{\mathbf e_{i_0}^TA\mathbf x}\right)I-A$. Suppose that $\frac{\mathbf e_{i_0}^TA^2\mathbf x}{\mathbf e_{i_0}^TA\mathbf x}>\rho(A)$. Then, $B$ is a non-singular $M$-matrix, and so $B$ is \textit{semipositive} (see \cite[Chapter 6]{berman1994nonnegative}). Let $K_B=\{ \mathbf y\in\mathbb{R}^n_+ \,|\, B\mathbf y\in\mathbb{R}^n_+\}$. The set $K_B$ is the so-called \textit{semipositive cone} of $B$, and $K_B$ is a proper polyhedral cone in $\mathbb{R}^n$ (see \cite{sivakumar2018semipositive}). Clearly, a Perron vector of $A$ is in $K_B$. Examining the proof of Proposition \ref{prop:inequality}, one sees that $B\mathbf x$ contains at least one negative entry. Hence, $\mathbf x\notin K_B$. \end{remark}
Arguing as in the proof of Proposition~\ref{prop:inequality}, one can establish the following using \ref{fact2} of Lemma \ref{lem:SumOfRatios}.
\begin{proposition}\label{prop:inequality2}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. Suppose that $\mathbf x$ is not a Perron vector. Then, we have the following:
\begin{enumerate}[label=(\roman*)]
\item If $\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}\geq \frac{\left(A\mathbf x\right)_{j}}{\left(\mathbf x\right)_{j}}$ for $j=1,\dots,n$, then $\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i} > \min_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$.
\item Suppose that there exist indices $i_0$ and $j_0$ such that $\frac{\left(A\mathbf x\right)_{j_0}}{\left(\mathbf x\right)_{j_0}}>\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}=\frac{\left(A^2\mathbf x\right)_{i_0}}{\left(A\mathbf x\right)_{i_0}}$ and $a_{i_0,j_0}\neq 0$. Then, $\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i} > \min_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}$.
\end{enumerate} \end{proposition}
We obtain the following corollaries from Propositions \ref{prop:inequality} and \ref{prop:inequality2}.
\begin{corollary}\label{cor:ineqality2}
Let $A$ be an $n\times n$ irreducible nonnegative matrix and $\mathbf x\in\mathbb{R}_{++}^n$. Suppose that $\mathbf x$ is not a Perron vector. If there exists an index $i_1\in\{1,\dots,n\}$ such that $\frac{\left(A^2\mathbf x\right)_{i_1}}{\left(A\mathbf x\right)_{i_1}}=\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}$ and the $i_1^\text{th}$ row of $A$ consists of nonzero entries, then
\begin{align*}
\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}< \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}.
\end{align*}
Similarly, if there exists an index $i_2\in\{1,\dots,n\}$ such that $\frac{\left(A^2\mathbf x\right)_{i_2}}{\left(A\mathbf x\right)_{i_2}}=\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}$ and the $i_2^\text{th}$ row of $A$ consists of nonzero entries, then
\begin{align*}
\min_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}<\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}.
\end{align*} \end{corollary}
\begin{corollary}\label{cor:ineqality}
Let $A$ be a positive matrix, and $\mathbf x\in\mathbb{R}_{++}^n$. If $\mathbf x$ is not a Perron vector, then
\begin{align*}
\max_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}&< \max_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i},\\
\min_{i}\frac{\left(A\mathbf x\right)_i}{\left(\mathbf x\right)_i}&<\min_{i}\frac{\left(A^{2}\mathbf x\right)_i}{\left(A\mathbf x\right)_i}.
\end{align*} \end{corollary}
Here is a condition for sequences $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ to be strictly decreasing and increasing, respectively.
\begin{theorem}\label{thm:UpperMonocity}
Let $A$ be a positive matrix, and let $\mathbf x$ be a positive vector. Then,
\begin{enumerate}[label=(\roman*)]
\item\label{case1} If $A^{r}\mathbf x$ is not a Perron vector for $r\geq 0$, then $(a_k(A,\mathbf x))_{k\geq 1}$ is strictly decreasing, and $(b_k(A,\mathbf x))_{k\geq 1}$ is strictly increasing. Moreover, both converge to $\rho(A)$.
\item\label{case2} If $r_0\geq 1$ is the least integer such that $A^{r_0}\mathbf x$ is a Perron vector of $A$ (so that $A^{r}\mathbf x$ is a Perron vector for all $r\geq r_0$), then
\begin{align*}
&b_1(A,\mathbf x)<\cdots<b_{r_0}(A,\mathbf x)<b_{r_0+1}(A,\mathbf x)=b_{r_0+2}(A,\mathbf x)=\cdots=\rho(A),\\
&a_1(A,\mathbf x)>\cdots>a_{r_0}(A,\mathbf x)>a_{r_0+1}(A,\mathbf x)=a_{r_0+2}(A,\mathbf x)=\cdots=\rho(A).
\end{align*}
(If $\mathbf x$ itself is a Perron vector, then trivially $a_k(A,\mathbf x)=b_k(A,\mathbf x)=\rho(A)$ for all $k\geq 1$.)
\end{enumerate} \end{theorem} \begin{proof}
Let $\mathbf x^{(r)}=A^r\mathbf x$ for $r\geq 0$. Suppose first that $\mathbf x^{(r)}$ is not a Perron vector for any $r\geq 0$. By Corollary \ref{cor:ineqality}, for each $r\geq 1$ we have $\max_{i}\frac{\left(A^{2}\mathbf x^{(r-1)}\right)_i}{\left(A\mathbf x^{(r-1)}\right)_i}<\max_{i}\frac{\left(A\mathbf x^{(r-1)}\right)_i}{\left(\mathbf x^{(r-1)}\right)_i}$ and $\min_{i}\frac{\left(A\mathbf x^{(r-1)}\right)_i}{\left(\mathbf x^{(r-1)}\right)_i}<\min_{i}\frac{\left(A^{2}\mathbf x^{(r-1)}\right)_i}{\left(A\mathbf x^{(r-1)}\right)_i}$. Hence, $a_r(A,\mathbf x)>a_{r+1}(A,\mathbf x)$ and $b_r(A,\mathbf x)<b_{r+1}(A,\mathbf x)$, and the conclusion of \ref{case1} follows from Remark \ref{Remark:convergence}. For \ref{case2}, the same argument applies for $1\leq r\leq r_0$, while $a_k(A,\mathbf x)=b_k(A,\mathbf x)=\rho(A)$ for $k\geq r_0+1$ since $A^{k-1}\mathbf x$ is then a Perron vector. \end{proof}
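As a numerical sketch of \ref{case1}, take an illustrative positive matrix with eigenvalues $5$ and $2$ and a positive non-eigenvector $\mathbf x$, so that no $A^r\mathbf x$ is a Perron vector (Remark \ref{remark:x is not a perron vector}); the data below are hypothetical choices, not from the text:

```python
# Illustrative sketch: A is a positive matrix with eigenvalues 5 and 2,
# and x is a positive non-eigenvector, so no A^r x is a Perron vector.

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

A = [[4.0, 1.0], [2.0, 3.0]]
x = [1.0, 5.0]

a, b, prev = [], [], x
for k in range(1, 21):
    cur = matvec(A, prev)
    ratios = [c / p for c, p in zip(cur, prev)]
    a.append(max(ratios))   # upper bounds a_k(A, x)
    b.append(min(ratios))   # lower bounds b_k(A, x)
    prev = cur

rho = 5.0  # Perron value of A
assert all(a[k] > a[k + 1] for k in range(len(a) - 1))  # strictly decreasing
assert all(b[k] < b[k + 1] for k in range(len(b) - 1))  # strictly increasing
assert abs(a[-1] - rho) < 1e-6 and abs(b[-1] - rho) < 1e-6
```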
\begin{remark}\label{remark:x is not a perron vector}
Suppose that $A$ is invertible. If $A^r\mathbf y$ is an eigenvector of $A$ for some $r\geq 0$, then $\mathbf y$ is an eigenvector of $A$. Hence, if $\mathbf x$ is not an eigenvector, then $A^r\mathbf x$ is not an eigenvector for all $r\geq 0$. \end{remark}
\begin{example}
Let $A=\begin{bmatrix}
2 & 1 & 1\\
1 & 1 & 1\\
1 & 1 & 1
\end{bmatrix}$, and let $\mathbf x=\begin{bmatrix}
\sqrt{2}-1\\
\frac{3}{2}-\sqrt{2}\\
\frac{1}{2}
\end{bmatrix}$. Then, it can be verified that $\mathbf x$ is not a Perron vector, but $A\mathbf x$ is a Perron vector. By \ref{case2} of Theorem \ref{thm:UpperMonocity}, $$b_1(A,\mathbf x)<b_2(A,\mathbf x)=b_3(A,\mathbf x)=\cdots=\rho(A)=\cdots=a_3(A,\mathbf x)=a_2(A,\mathbf x)<a_1(A,\mathbf x).$$ \end{example}
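The claims of this example can be verified numerically; the following sketch checks that $\mathbf x$ is not a Perron vector while $A\mathbf x$ is, with $\rho(A)=2+\sqrt 2$:

```python
import math

# Verifying the example: x is not a Perron vector, but A x is.
A = [[2.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
s2 = math.sqrt(2)
x = [s2 - 1, 1.5 - s2, 0.5]

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

Ax = matvec(A, x)    # equals (sqrt(2), 1, 1) up to rounding
AAx = matvec(A, Ax)

r_x = [p / q for p, q in zip(Ax, x)]     # ratios for x: not all equal
r_Ax = [p / q for p, q in zip(AAx, Ax)]  # ratios for A x: all 2 + sqrt(2)

assert max(r_x) - min(r_x) > 1e-6    # x is not a Perron vector
assert max(r_Ax) - min(r_Ax) < 1e-9  # A x is a Perron vector
assert min(r_x) < 2 + s2 < max(r_x)  # b_1 < rho(A) < a_1
```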
Here is another condition for the sequences $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ to be strictly monotone and to generate strictly log-concave and log-convex sequences, respectively.
\begin{theorem}\label{thm:UpperMonocity2}
Let $A$ be an $n\times n$ irreducible, nonnegative, positive semidefinite matrix and $\mathbf x\in\mathbb{R}_{++}^n$. From Proposition \ref{prop:log index}, we may define $i_1$ and $i_2$ to be a log-concavity index and log-convexity index, respectively. We let $g_k=(A^k\mathbf x)_{i_1}$ and $h_k=(A^k\mathbf x)_{i_2}$ for $k\geq 0$. Assume that $\mathbf e_{i_1}^TA$ and $\mathbf e_{i_2}^TA$ are positive, and $A^r\mathbf x$ is not a Perron vector for $r\geq 0$. Then, the following hold:
\begin{enumerate}[label=(\roman*)]
\item There exists $k_1\geq 1$ such that $(a_k(A,\mathbf x))_{k\geq k_1}$ is strictly decreasing and $(g_k)_{k\geq k_1-1}$ is strictly log-concave.
\item There exists $k_2\geq 1$ such that $(b_k(A,\mathbf x))_{k\geq k_2}$ is strictly increasing and $(h_k)_{k\geq k_2-1}$ is strictly log-convex.
\end{enumerate} \end{theorem} \begin{proof}
By Definition \ref{def:log index}, there exist $k_1,k_2\geq 1$ such that for $r\geq k_1$ and $s\geq k_2$,
$$\frac{(A^r\mathbf x)_{i_1}}{(A^{r-1}\mathbf x)_{i_1}}=\max_{i}\frac{(A^r\mathbf x)_{i}}{(A^{r-1}\mathbf x)_{i}},\;\text{and}\; \frac{(A^s\mathbf x)_{i_2}}{(A^{s-1}\mathbf x)_{i_2}}=\min_{i}\frac{(A^s\mathbf x)_{i}}{(A^{s-1}\mathbf x)_{i}}.$$
Let $\mathbf x^{(p)}=A^p\mathbf x$ for $p\geq 0$. Since $\mathbf x^{(p)}$ is not a Perron vector for any $p\geq 0$, Corollary \ref{cor:ineqality2} gives $\max_{i}\frac{\left(A^{2}\mathbf x^{(p-1)}\right)_i}{\left(A\mathbf x^{(p-1)}\right)_i}<\max_{i}\frac{\left(A\mathbf x^{(p-1)}\right)_i}{\left(\mathbf x^{(p-1)}\right)_i}$ for $p\geq k_1$ and $\min_{i}\frac{\left(A\mathbf x^{(p-1)}\right)_i}{\left(\mathbf x^{(p-1)}\right)_i}<\min_{i}\frac{\left(A^{2}\mathbf x^{(p-1)}\right)_i}{\left(A\mathbf x^{(p-1)}\right)_i}$ for $p\geq k_2$.
By \Cref{Prop:logconcavity index}, the remaining conclusion follows. \end{proof}
Finally, we shall show the last result of this section, which gives a condition for the sequence $(c_k(A,\mathbf x))_{k\geq 1}$ to be strictly increasing and to generate a strictly log-convex sequence.
\begin{lemma}\label{lem:quotientineq}
Let $A$ be an \( n\times n \) positive definite matrix and \( \mathbf x\in\mathbb{R}^n\backslash\{\mathbf{0}\} \). Then, for any integer \( r \),
\begin{align}\label{eq:quotientineq}
\frac{\mathbf x^TA^r\mathbf x}{\mathbf x^TA^{r-1}\mathbf x}\le \frac{\mathbf x^TA^{r+1}\mathbf x}{\mathbf x^TA^{r}\mathbf x},
\end{align}
where the equality holds if and only if $\mathbf x$ is an eigenvector of $A$. \end{lemma} \begin{proof} Since \( A \) is positive definite, there exists an orthonormal basis \( \{\mathbf v_1,\dots,\mathbf v_n\} \) such that \(A\mathbf v_i=\lambda_i\mathbf v_i \) for some $\lambda_i>0$ for $i=1,\dots,n$. Let \( \mathbf x=\sum_{i=1}^n\alpha_i\mathbf v_i \). Then, for any integer \( r \), we have \begin{align}\label{temp:eqn00}
\mathbf x^TA^r\mathbf x=\left(\sum_{i=1}^n\alpha_i\mathbf v_i^T\right)\left(\sum_{i=1}^n\alpha_iA^r\mathbf v_i\right)=\left(\sum_{i=1}^n\alpha_i\mathbf v_i^T\right)\left(\sum_{i=1}^n\alpha_i\lambda_i^r\mathbf v_i\right)= \sum_{i=1}^n\alpha_i^2\lambda_i^r. \end{align} Note that $\alpha_i^2\ge0$ and $\lambda_i>0$. Using \eqref{temp:eqn00}, one can verify that \eqref{eq:quotientineq} is equivalent to \[
\left(\sum_{i=1}^n\alpha_i^2\lambda_i^{r-1}\right)\left(\sum_{i=1}^n\alpha_i^2\lambda_i^{r+1}\right)-\left(\sum_{i=1}^n\alpha_i^2\lambda_i^r\right)^2=\sum_{1\le i<j\le n}\alpha_i^2\alpha_j^2\lambda_i^{r-1}\lambda_j^{r-1}\left(\lambda_i-\lambda_j\right)^2\ge0. \]
Let us consider the equality. We can find that \( \sum_{1\le i<j\le n}\alpha_i^2\alpha_j^2\lambda_i^{r-1}\lambda_j^{r-1}\left(\lambda_i-\lambda_j\right)^2=0 \) if and only if \( \lambda_s=\lambda_t \) whenever \( \alpha_s\neq0 \) and \( \alpha_t\neq0 \) for any \( s \) and \( t \) with \( s\neq t \). In other words, \( \mathbf x=\sum_{i=1}^n\alpha_i\mathbf v_i \) is a linear combination of eigenvectors corresponding to the same eigenvalue, that is, \( \mathbf x \) is an eigenvector. \end{proof}
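A numerical sketch of the lemma, with an illustrative symmetric positive definite matrix and a non-eigenvector (hypothetical choices, not from the text); the ratios $\mathbf x^TA^r\mathbf x/\mathbf x^TA^{r-1}\mathbf x$ then strictly increase:

```python
# Illustrative sketch of the lemma: A symmetric positive definite,
# x not an eigenvector; the ratios x^T A^r x / x^T A^{r-1} x strictly
# increase with r.

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

A = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 3 and 1
x = [1.0, 2.0]                # not an eigenvector

s, w = [], x[:]
for _ in range(6):
    s.append(sum(p * q for p, q in zip(x, w)))  # s[r] = x^T A^r x
    w = matvec(A, w)

ratios = [s[r] / s[r - 1] for r in range(1, 6)]
assert all(ratios[i] < ratios[i + 1] for i in range(len(ratios) - 1))
```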
\begin{remark}
When the condition on $A$ in Lemma \ref{lem:quotientineq} is relaxed to $A$ being positive semidefinite with $A\neq \mathbf{O}$, the inequality \eqref{eq:quotientineq} still holds, by examining the proof, but the characterization of equality in that lemma fails. For instance, given $A=\begin{bmatrix}
1 & 0\\
0 & 0
\end{bmatrix}$ and $\mathbf x=\begin{bmatrix}
1 \\ 1
\end{bmatrix}$, we have $\mathbf x^TA^r\mathbf x=1$ for $r\geq 1$. \end{remark}
\begin{theorem}\label{Cor:LowerMonocity}
Let $A$ be a nonnegative, positive definite matrix, and $\mathbf x\in\mathbb{R}_{+}^n$. If $\mathbf x$ is not a Perron vector, then $(c_k(A,\mathbf x))_{k\ge1}$ is strictly increasing and convergent to $\rho(A)$. \end{theorem} \begin{proof}
The conclusion follows from Lemma \ref{lem:quotientineq} and \Cref{Remark:convergence2}. \end{proof}
\begin{theorem}\label{Thm:log-concave from c_k}
Let $A$ be an $n\times n$ nonnegative positive semidefinite matrix and $\mathbf x\in\mathbb{R}_{+}^n$. Let $s_k=\mathbf x^TA^k\mathbf x$ for $k\geq 0$. Then, $(s_k)_{k\geq 0}$ is log-convex. In particular, if $A$ is positive definite and $\mathbf x$ is not a Perron vector, then $(s_k)_{k\geq 0}$ is strictly log-convex. \end{theorem} \begin{proof}
It follows from \Cref{prop:log-concave from c_k} and \Cref{Cor:LowerMonocity}. \end{proof}
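As a quick numerical sanity check (our own toy example, not from the paper; the paper's symbolic computations are done in MATLAB\textsuperscript{\textregistered}), the following Python sketch verifies both the strict monotonicity of $(c_k(A,\mathbf x))_{k\ge1}$ and the strict log-convexity of $(s_k)_{k\ge0}$ for a sample matrix:

```python
# Numerical sanity check (our own toy example; the paper's computations use
# MATLAB) of the two statements above, for A = [[2, 1], [1, 2]] (nonnegative,
# positive definite, rho(A) = 3) and x = (1, 0), which is not a Perron vector
# (the Perron vector of A is proportional to (1, 1)).

def matvec(A, v):
    return [sum(a * w for a, w in zip(row, v)) for row in A]

A = [[2, 1], [1, 2]]
x = [1, 0]

s = []                       # s_k = x^T A^k x for k = 0, 1, ...
v = x[:]
for _ in range(12):
    s.append(sum(xi * vi for xi, vi in zip(x, v)))
    v = matvec(A, v)

c = [s[k] / s[k - 1] for k in range(1, len(s))]      # c_k(A, x)

strictly_increasing = all(c[k] < c[k + 1] for k in range(len(c) - 1))
strictly_log_convex = all(s[k - 1] * s[k + 1] > s[k] ** 2
                          for k in range(1, len(s) - 1))
print(strictly_increasing, strictly_log_convex, c[-1])
```

Here $c_k$ climbs $2,\,2.5,\,2.8,\dots$ toward $\rho(A)=3$, and $s_{k-1}s_{k+1}>s_k^2$ holds at every index.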
\section{Combinatorial applications}\label{Sec3:applications}
As seen in Theorems \ref{thm:UpperMonocity}, \ref{thm:UpperMonocity2} and \ref{Cor:LowerMonocity}, under extra conditions on a nonnegative matrix $A$, we can find (strictly) monotone sequences of lower and upper bounds on the Perron value of $A$ that may induce log-concave or log-convex sequences. With those sequences, we consider two combinatorial applications in this section.
\subsection{Lower and upper bounds on Perron values of rooted trees}\label{Subsec:3.1}
As discussed in \Cref{subsec 1.1}, we shall give lower and upper bounds on Perron values of rooted trees, which can be used for estimating characteristic sets of trees. In \cite{andrade2017combinatorial}, the authors explore such lower and upper bounds with a combinatorial interpretation---that is, those bounds can be attained by counting combinatorial objects. Here we provide sharper bounds in the same combinatorial setting, which may be regarded as a generalization, although a full combinatorial interpretation of them is still to be understood. Moreover, we shall verify that the bounds obtained by computing small powers of bottleneck matrices or ``neckbottle matrices'' are sharper than known bounds.
We first consider a weighted connected graph $G$ on $n$ vertices as a general case, and then focus on unweighted rooted trees. Let $M$ be the bottleneck matrix at a vertex $v$ in $G$. It is well known that each principal submatrix of the Laplacian matrix $L(G)$ is non-singular and its inverse is positive definite (see \cite[Chapter 6]{berman1994nonnegative}). Since the Perron value of $M$ is determined by the Perron value of Perron components at $v$, we shall assume that $v$ is not a cut-vertex; otherwise, $M$ would be a block diagonal matrix, which is not irreducible.
\begin{theorem}\label{thm:sequence for M}
Let $M$ be the bottleneck matrix at a vertex $v$ of a weighted, connected graph $G$ on $n$ vertices. Let $\mathbf x\in\mathbb{R}_{++}^n$. Suppose that $v$ is not a cut-vertex and $\mathbf x$ is not a Perron vector. Then, $(a_k(M,\mathbf x))_{k\ge1}$ is a strictly decreasing sequence convergent to $\rho(M)$, whereas $(b_k(M,\mathbf x))_{k\ge1}$ and $(c_k(M,\mathbf x))_{k\ge1}$ are strictly increasing sequences convergent to $\rho(M)$. \end{theorem} \begin{proof}
Note that $M$ is a positive matrix. By \Cref{remark:x is not a perron vector}, $M^r\mathbf x$ is not a Perron vector for $r\geq 0$. The conclusion then follows from Theorems \ref{thm:UpperMonocity} and \ref{Cor:LowerMonocity} together with Remarks \ref{Remark:convergence} and \ref{Remark:convergence2}. \end{proof}
\begin{figure}
\caption{An example of the path matrix of $\mathcal T$ with root $1$.}
\label{Fig:path matrix}
\end{figure}
Let $M$ be the bottleneck matrix of a rooted tree $\mathcal T$ with vertex set $\{1,\dots,n\}$ and root $x$. We shall introduce another object with the same Perron value as $M$, which we can also use to approximate $\rho(M)$. The \textit{path matrix} of $\mathcal T$, denoted by \( N \), is the matrix whose $j^\text{th}$ column is the $(0,1)$ vector where the $i^\text{th}$ component is $1$, if $i$ lies on the path from $j$ to $x$, and $0$ otherwise. See Figure \ref{Fig:path matrix} for an example. Then, $M$ can be written as $M=N^TN$ (see \cite{andrade2017combinatorial}). Labelling the vertices appropriately, we can arrange that the matrix $N$ is upper triangular with ones on the main diagonal and that the row of $N$ corresponding to $x$ is the all ones vector. The matrix $Q=NN^T$ is called the \textit{neckbottle matrix} of $\mathcal T$, which was introduced in \cite{ciardo2021perron}. Then, $Q$ is positive definite, but is not a positive matrix.
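To make these objects concrete, the following Python sketch (our own small example, not from the paper) builds the path matrix $N$, the bottleneck matrix $M=N^TN$ and the neckbottle matrix $Q=NN^T$ for the rooted tree with root $1$, children $2$ and $3$, and a pendant vertex $4$ attached to $2$; since $N^TN$ and $NN^T$ share their spectrum, $c_k(M,\mathbf 1)$ and $c_k(Q,\mathbf 1)$ approach the same Perron value.

```python
# Illustration (our own small example) of the path matrix N, the bottleneck
# matrix M = N^T N and the neckbottle matrix Q = N N^T for the rooted tree
# with root 1, children 2 and 3, and a pendant vertex 4 attached to 2.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Column j of N marks the vertices on the path from j to the root 1.
N = [[1, 1, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

M = matmul(transpose(N), N)   # bottleneck matrix
Q = matmul(N, transpose(N))   # neckbottle matrix

def c_k(A, k):
    # c_k(A, 1) = 1^T A^k 1 / 1^T A^{k-1} 1, computed by iterating v <- A v.
    v = [1] * len(A)
    sums = [sum(v)]
    for _ in range(k):
        v = [sum(a * w for a, w in zip(row, v)) for row in A]
        sums.append(sum(v))
    return sums[-1] / sums[-2]

print(M)                       # M_{ij} = #vertices shared by the two root paths
print(c_k(M, 40), c_k(Q, 40))  # both approach rho(M) = rho(Q)
```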
\begin{theorem}\label{thm:sequence for Q}
Let $\mathcal T$ be a rooted tree on $n$ vertices and $Q$ be its neckbottle matrix. Let $\mathbf x\in\mathbb{R}_{++}^n$. Suppose that $\mathbf x$ is not a Perron vector. Then, \( (c_k(Q,\mathbf x))_{k\ge1} \) is a strictly increasing sequence convergent to $\rho(Q)$. \end{theorem} \begin{proof}
It follows from Theorem \ref{Cor:LowerMonocity}. \end{proof}
\begin{remark}
Let $M$ be the bottleneck matrix of a rooted tree. In \cite{andrade2017combinatorial}, $\|M\|_1$ appears as an upper bound on $\rho(M)$, where $\|\cdot\|_1$ is the $\ell_1$ matrix norm. Note that $\|M\|_1=\max_i\frac{(M\mathbf{1})_i}{(\mathbf{1})_i}=a_1(M,\mathbf{1})$. We can see from \Cref{thm:sequence for M} that $a_k(M,\mathbf{1})<\|M\|_1$ for $k\geq 2$. \end{remark}
\begin{remark}
Let $N$ and $M$ be the path matrix and the bottleneck matrix of a rooted tree, respectively. In \cite{andrade2017combinatorial}, two lower bounds on $\rho(M)$ appear: one is the \textit{combinatorial Perron value} $\rho_c(N)$ given by $\rho_c(N)=\frac{\mathbf{1}^T(NN^T)^2\mathbf{1}}{\mathbf{1}^TNN^T\mathbf{1}}$, and the other is given by $\pi(N)=\left(\frac{\mathbf{1}^T(NN^T)^3\mathbf{1}}{\mathbf{1}^TNN^T\mathbf{1}}\right)^\frac{1}{2}$. Let $Q$ be the neckbottle matrix. Then, $\rho_c(N)=\frac{\mathbf{1}^TQ^2\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}=c_2(Q,\mathbf{1})$ and $\pi(N)=\left(\frac{\mathbf{1}^TQ^3\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}\right)^{1/2}$. By \Cref{thm:sequence for Q}, we have $\frac{\mathbf{1}^TQ^2\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}<\frac{\mathbf{1}^TQ^3\mathbf{1}}{\mathbf{1}^TQ^2\mathbf{1}}$. Multiplying both sides by $\frac{\mathbf{1}^TQ^2\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}$ and taking the square root of both sides yield $\rho_c(N)<\pi(N)$. Hence,
$$\frac{\mathbf{1}^TQ^2\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}< \left(\frac{\mathbf{1}^TQ^3\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}\right)^{1/2}=\left(\frac{\mathbf{1}^TQ^2\mathbf{1}}{\mathbf{1}^TQ\mathbf{1}}\frac{\mathbf{1}^TQ^3\mathbf{1}}{\mathbf{1}^TQ^2\mathbf{1}}\right)^{1/2}< \frac{\mathbf{1}^TQ^3\mathbf{1}}{\mathbf{1}^TQ^2\mathbf{1}}.$$
Therefore, $\rho_c(N)<\pi(N)<c_k(Q,\mathbf{1})$ for $k\geq 3$. \end{remark}
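As an independent numerical check of this chain of inequalities (our own example, not from the paper), the sketch below evaluates $\rho_c(N)$, $\pi(N)$ and $c_3(Q,\mathbf{1})$ for the rooted tree with root $1$, children $2$ and $3$, and a pendant vertex $4$ attached to $2$:

```python
# Check (on our own 4-vertex example) of the chain rho_c(N) < pi(N) < c_3(Q, 1)
# from the remark above, where N is the path matrix of the tree with root 1,
# children 2 and 3, and a pendant vertex 4 attached to 2, and Q = N N^T.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

N = [[1, 1, 1, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]
Q = matmul(N, [list(col) for col in zip(*N)])    # Q = N N^T

# s[k-1] = 1^T Q^k 1 for k = 1, 2, 3
v = [1] * 4
s = []
for _ in range(3):
    v = [sum(q * w for q, w in zip(row, v)) for row in Q]
    s.append(sum(v))

rho_c = s[1] / s[0]               # combinatorial Perron value c_2(Q, 1)
pi = (s[2] / s[0]) ** 0.5         # pi(N)
c3 = s[2] / s[1]                  # c_3(Q, 1)
print(rho_c, pi, c3)
```

For this tree the three bounds come out to roughly $5.667<5.720<5.775$.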
In addition to the bounds in the above remarks, the author of \cite{molitierno2018tight} investigated a tight upper bound on Perron values of ``rooted brooms'' with some particular root, by virtue of the fact that for any unweighted and connected graph, Perron values of bottleneck matrices at vertices with the same eccentricity as that of the root in the broom are bounded below. Here we provide a tighter upper bound than the one in \cite{molitierno2018tight} except for a few cases. Moreover, we give lower bounds on Perron values of rooted trees, provided their roots have the same eccentricity.
Let $B(d,r)$ denote the tree, called a \textit{broom}, formed from a path on $d$ vertices by adding $r$ pendent vertices to an end-vertex $v$ of the path. If $v$ is designated as the root, then we use $B_1(d,r)$ to denote the corresponding rooted tree (see \Cref{Figure:Broom2}); and if the other end-vertex of the path is chosen as the root, then $B_2(d,r)$ denotes the resulting rooted tree (see \Cref{Figure:Broom1}).
For two nonnegative matrices $A$ and $B$, we use the notation $A\geq B$ to mean that $A$ is entry-wise greater than or equal to $B$, that is, $A-B$ is nonnegative. If $A\geq B$ then $\rho(A)\geq\rho(B)$ (see \cite{horn2012matrix}). Let $M$ be the bottleneck matrix of a rooted tree on $n$ vertices such that the eccentricity of its root is $d$. Let $r=n-d$. Let $M_1$ and $M_2$ be the bottleneck matrices of $B_1(d,r)$ and $B_2(d,r)$, respectively. As mentioned in \cite{kirkland1997algebraic}, one can prove $M\leq M_2$ by using induction on $r$; similarly, it can also be verified that $M_1\leq M$.
We now provide a sharper upper bound on $\rho(M_2)$, and lower bounds on $\rho(M_1)$.
\begin{theorem}\label{thm:upperForPerron}
Let $G$ be an unweighted, connected graph on $n+1$ vertices with a vertex $v$, and let $M$ be the bottleneck matrix at $v$. Suppose that $d+1$ is the eccentricity of $v$. Let $M_2$ be the bottleneck matrix of $B_2(d,r)$ where $r=n-d$. Then, an upper bound $a_3(M_2,\mathbf{1})$ on $\rho(M)$ is given by
\begin{align}\label{UpperBound for rho T}
a_3(M_2,\mathbf{1})=\frac{S_1(d,r)}{S_2(d,r)},
\end{align}
where
\begin{align*}
S_1(d,r)=&d^3r^3 +\frac{1}{6}d^2(7d^2 + 9d + 20)r^2+\frac{1}{120}d(61d^4 + 140d^3 + 315d^2 + 280d + 404)r\\
&+\frac{1}{720}(61d^6 + 183d^5 + 385d^4 + 465d^3 + 634d^2 + 432d + 720),\\
S_2(d,r)=&d^2r^2 +\frac{1}{6}d(5d^2 + 6d + 13)r+\frac{1}{24}(5d^4 + 10d^3 + 19d^2 + 14d + 24).
\end{align*} \end{theorem} \begin{proof}
It is known in \cite{molitierno2018tight} that $\rho(M)\leq \rho(M_2)$. It is straightforward from Theorem \ref{thm:sequence for M} that $\rho(M_2)< a_3(M_2,\mathbf{1})$. See \Cref{subappen:1} for the completion of the proof. \end{proof}
\begin{remark}\label{Remark:sharper}
Continuing \Cref{thm:upperForPerron}, if $d\geq 5$ and $r>\frac{2d^5+37d^4+5d^3-395d^2-376d-2}{16d^4-260d^2-360d-116}\approx\frac{d}{8}$, then the following upper bound for $\rho(M)$ appears in \cite{molitierno2018tight}:
\begin{align*}
f(d,r)=dr+\frac{4d^4+20d^3+25d^2+40d+1}{10d^2+45d+5}\geq \rho(M).
\end{align*}
We shall show that $a_3(M,\mathbf{1})$ is a sharper bound than $f(d,r)$ except for a few values of $d$ and $r$ with $r\geq \frac{d}{8}$. Using MATLAB\textsuperscript{\textregistered}, we can find that
\begin{align*}
f(d,r)-a_3(M,\mathbf{1})=\frac{N(d,r)}{D(d,r)},
\end{align*}
where
\begin{align*}
N(d,r)=&24d^2(2d + 1)(d + 1)(2d^2 - 3d - 29)r^2\\
&+12d(2d + 1)(d + 1)(d^2 - d + 4)(2d^2 - 3d - 29)r\\
&-(d + 3)(d + 1)(2d^6 + 67d^5 - 202d^4 - 131d^3 - 568d^2 + 100d + 192),\\
D(d,r)=&30(2d^2 + 9d + 1)(5d^4 + 20d^3r + 10d^3 + 24d^2r^2 + 24d^2r + 19d^2 + 52dr + 14d + 24).
\end{align*}
Clearly, $D(d,r)>0$, and $N(d,r)$ is a quadratic polynomial in $r$. One can verify that the coefficients of $r$ and $r^2$ in $N$ are positive and the constant term is negative for $d\geq 5$. This implies that if \( d\ge5 \) and $N(d,r_0)>0$ for some $r_0>0$, then $N(d,r)>0$ for all $r\geq r_0$. In terms of $d$, one can verify that $N(d,d/8)>0$ for $d\geq 17$. Hence, $a_3(M,\mathbf{1})$ is a sharper upper bound on \( \rho(M) \) for $d\geq 17$ and $r\geq\frac{d}{8}$. Furthermore, one can check that given $5\leq d \leq 16$, there exists $2\le r_0\le 4$ such that $a_3(M,\mathbf{1})<f(d,r)$ for $r\geq r_0$. \end{remark}
\begin{theorem}\label{thm:lowerForPerron}
Let $\mathcal T$ be a rooted tree on $n$ vertices, and $M$ be the bottleneck matrix of $\mathcal T$. Suppose that $d$ is the eccentricity of the root. Let $r=n-d$. Suppose that $Q_1$ and $M_1$ are the neckbottle matrix and the bottleneck matrix of $B_1(d,r)$, respectively. (Note $\rho(M_1)=\rho(Q_1)$.) Then, two lower bounds $c_3(Q_1,\mathbf{1})$ and $c_3(M_1,\mathbf{1})$ on $\rho(M)$ are given by $c_3(Q_1,\mathbf{1})=\frac{U_1(d,r)}{U_2(d,r)}$ and $c_3(M_1,\mathbf{1})=\frac{V_1(d,r)}{V_2(d,r)}$ where
\begin{align*}
U_1(d,r)=&4r^3 + 2(d^2 + 3d + 4)r^2 +\frac{1}{12}(13d^4 + 26d^3 + 41d^2 + 28d + 48)r\\
&+\frac{1}{2520}d(d+1)(2d+1)(68d^4 + 136d^3 + 133d^2 + 65d + 18),\\
U_2(d,r)=&4r^2 + 2(d^2 + d + 2)r +\frac{1}{30}d(d+1)(2d+1)(2d^2 + 2d + 1),\\
V_1(d,r)=&r^4 + (4d + 3)r^3 +\frac{1}{2}(2d+3)(d+2)(d+1)r^2\\
&+\frac{1}{15}(d+1)(4d^4 + 16d^3 + 19d^2 + 21d + 15)r\\
&+\frac{1}{2520}d(2d+1)(d+1)(68d^4 + 136d^3 + 133d^2 + 65d + 18),\\
V_2(d,r)=&r^3 + (3d + 2)r^2 +\frac{1}{3}(d+1)(2d^2+4d+3)r\\
&+\frac{1}{30}d(2d+1)(d+1)(2d^2+2d+1).
\end{align*}
Moreover, for each $d\geq 3$, the difference \( c_3(M_1,\mathbf{1})-c_3(Q_1,\mathbf{1}) \), viewed as a function of \( r \), has a unique positive root $r_0$, which lies in the open interval \( (0.4d^2-1,0.42d^2+2) \); further, $c_3(M_1,\mathbf{1})<c_3(Q_1,\mathbf{1})$ for $r<r_0$ and $c_3(M_1,\mathbf{1})>c_3(Q_1,\mathbf{1})$ for $r>r_0$. \end{theorem} \begin{proof}
By Theorems \ref{thm:sequence for M} and \ref{thm:sequence for Q}, we obtain $c_3(Q_1,\mathbf{1})< \rho(M)$ and $c_3(M_1,\mathbf{1})<\rho(M)$. See \Cref{subappen:2} for the completion of the proof. \end{proof}
\begin{remark}\label{remark:Comparison for lowerbounds}
As another lower bound on the Perron value of a rooted tree with root $x$, one may consider $b_k(M,\mathbf{1})=\min\limits_{i}{\frac{(M^k\mathbf{1})_i}{(M^{k-1}\mathbf{1})_i}}$ for $k\geq 1$, where $M$ is the bottleneck matrix of the rooted tree. Since the row and column of $M$ corresponding to $x$ are the all ones vector, we can see from Theorem \ref{thm:sequence for M} that
\begin{align*}
b_k(M,\mathbf{1})=\min\limits_{i}{\frac{(M^k\mathbf{1})_i}{(M^{k-1}\mathbf{1})_i}}\leq \frac{(M^k\mathbf{1})_x}{(M^{k-1}\mathbf{1})_x}=\frac{\mathbf e_x^T M^k\mathbf{1}}{\mathbf e_x^T M^{k-1}\mathbf{1}}=\frac{\mathbf{1}^T M^{k-1}\mathbf{1}}{\mathbf{1}^T M^{k-2}\mathbf{1}}<\frac{\mathbf{1}^T M^k\mathbf{1}}{\mathbf{1}^T M^{k-1}\mathbf{1}}=c_k(M,\mathbf{1}).
\end{align*}
Hence, $c_k(M,\mathbf{1})$ is a sharper lower bound on $\rho(M)$ than $b_k(M,\mathbf{1})$. \end{remark}
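A small numerical illustration of this remark (our own example, not from the paper): for the bottleneck matrix $M$ of the rooted tree with root $x=1$, children $2,3$ and a pendant vertex $4$ attached to $2$, the root row of $M$ is the all-ones vector, so $\mathbf e_x^TM^k\mathbf{1}=\mathbf{1}^TM^{k-1}\mathbf{1}$, and indeed $b_3(M,\mathbf 1)<c_3(M,\mathbf 1)$.

```python
# Our own example illustrating the remark above: M is the bottleneck matrix of
# the rooted tree with root x = 1, children 2 and 3, and a pendant vertex 4
# attached to 2.  The root row of M is all ones, so e_x^T M^k 1 = 1^T M^{k-1} 1,
# and b_k(M, 1) < c_k(M, 1).

def matvec(A, v):
    return [sum(a * w for a, w in zip(row, v)) for row in A]

M = [[1, 1, 1, 1],
     [1, 2, 1, 2],
     [1, 1, 2, 1],
     [1, 2, 1, 3]]

powers = [[1, 1, 1, 1]]           # powers[k] = M^k 1
for _ in range(3):
    powers.append(matvec(M, powers[-1]))

k = 3
b3 = min(powers[k][i] / powers[k - 1][i] for i in range(4))   # b_3(M, 1)
c3 = sum(powers[k]) / sum(powers[k - 1])                      # c_3(M, 1)
root_identity = powers[k][0] == sum(powers[k - 1])            # e_x^T M^3 1 = 1^T M^2 1
print(b3, c3, root_identity)
```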
\begin{example}\label{Observation:comparison}
Let $M$ be the bottleneck matrix of $B_1(16,r)$. As seen in Figure \ref{fig:comparison}, the sharpness of $c_3(Q,\mathbf{1})$ and $c_3(M,\mathbf{1})$, as lower bounds, is inverted at a particular value of $r$. By Theorem \ref{thm:lowerForPerron}, $c_3(M,\mathbf{1})-c_3(Q,\mathbf{1})$ has exactly one positive root $r_0$ in the open interval $(101.4,109.52)$. Indeed, using MATLAB\textsuperscript\textregistered, we have $r_0\approx 108.1708$.
\begin{figure}
\caption{Comparison of lower bounds $c_3(M,\mathbf{1})$ and $c_3(Q,\mathbf{1})$ for $\rho(M)$.}
\label{fig:comparison}
\end{figure} \end{example}
\subsection{Log-convexity and log-concavity of recurrence relations}\label{Subsec:3.2}
As seen in \Cref{thm:logconcave seq} and \Cref{Thm:log-concave from c_k}, we can construct log-convex and log-concave sequences. In this subsection, we examine various examples.
\begin{example}\label{ex:path bottleneck}
Let $n\geq 2$. Let $P_n$ be the rooted path on vertices $1,\dots,n$, where $i$ is adjacent to $i+1$ for $i=1,\dots,n-1$ and vertex $1$ is the root. We use \( M_{P_n} \) to denote the bottleneck matrix of the rooted path. Then,
\[
M_{P_n}=\begin{bmatrix}
1 & & & \cdots & & 1 \\
& 2 & & \cdots & & 2 \\
& & 3 & \cdots & & 3 \\
\vdots & \vdots & \vdots & \ddots & & \vdots \\
& & & & n-1 & n-1 \\
1 & 2 & 3 & \cdots & n-1 & n \\
\end{bmatrix}.
\] For the path matrix $N$, we have \begin{align*}
M_{P_n}=N^TN=\begin{bmatrix}
1 & & \mathbf{O} \\
\vdots & \ddots & \\
1 &\cdots & 1 \\
\end{bmatrix}\begin{bmatrix}
1 & \cdots & 1 \\
& \ddots & \vdots \\
\mathbf{O} & & 1 \\
\end{bmatrix}. \end{align*} Let \( s_k^{(n,i,j)}=\mathbf e_i^TM_{P_n}^k\mathbf e_j \) for \( 1\le i,j\le n \). Regarding $N$ as the adjacency matrix of a directed graph and interpreting $\mathbf e_i^T(N^TN)^k\mathbf e_j$ as the number of walks of length $2k$ from $i$ to $j$ on that graph, where all edge directions are reversed after each step of the walk, one can find that \( s_k^{(n,i,j)} \) is the number of sequences \( (a_0,\dots,a_{2k-2}) \), where \( 1\le a_t\le n \) for \( t=0,\dots,2k-2 \), satisfying \( i\ge a_0\le a_1\ge\dots\le a_{2k-3}\ge a_{2k-2}\le j \). Note by symmetry that \( s_k^{(n,i,j)}=s_k^{(n,j,i)} \) for any \( i,j,k,n \). By \Cref{Thm:log-concave from c_k}, the combinatorial sequence \( \left(s_k^{(n,j,j)}\right)_{k\ge0} \) is log-convex for any \( 1\le j\le n \). (In particular, \( s_k^{(2,1,1)}=F_{2k-1} \), where \( F_{k} \) is the \( k^\text{th} \) Fibonacci number with \( F_0=0,F_1=1 \).) \end{example}
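The parenthetical Fibonacci claim is easy to test numerically; the sketch below (ours, not from the paper) compares $s_k^{(2,1,1)}=\mathbf e_1^TM_{P_2}^k\mathbf e_1$ with $F_{2k-1}$ and checks log-convexity:

```python
# Check (ours) that s_k^{(2,1,1)} = e_1^T M_{P_2}^k e_1 equals F_{2k-1} and
# that the sequence is log-convex, with M_{P_2} = [[1, 1], [1, 2]].

M = [[1, 1], [1, 2]]

s = []
v = [1, 0]                        # v = M^k e_1; s_k = (M^k e_1)_1
for _ in range(10):
    s.append(v[0])
    v = [sum(a * w for a, w in zip(row, v)) for row in M]

fib = [0, 1]                      # F_0 = 0, F_1 = 1
while len(fib) < 2 * len(s):
    fib.append(fib[-1] + fib[-2])

matches = all(s[k] == fib[2 * k - 1] for k in range(1, len(s)))
log_convex = all(s[k - 1] * s[k + 1] >= s[k] ** 2 for k in range(1, len(s) - 1))
print(s[:6], matches, log_convex)
```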
Given an $n\times n$ irreducible nonnegative matrix $A$ with $\mathbf x\in\mathbb{R}_{++}^n$, the sequences $(a_k(A,\mathbf x))_{k\geq 1}$ and $(b_k(A,\mathbf x))_{k\geq 1}$ may produce log-concave and log-convex sequences, respectively.
\begin{example}
Let \( A=\begin{bmatrix}
a & b \\
c & d \\
\end{bmatrix}
\) be irreducible and nonnegative, and \( \mathbf x=\begin{bmatrix}
x_1 \\
x_2 \\
\end{bmatrix}
\) be positive. Let $k\geq 1$. Using induction on $k$, we can find that
\begin{align*}
D(k)=\left(A^k\mathbf x\right)_{2}\left(A^{k-1}\mathbf x\right)_{1}-\left(A^k\mathbf x\right)_{1}\left(A^{k-1}\mathbf x\right)_{2}=(x_1x_2(d-a)-bx_2^2+cx_1^2)(ad-bc)^{k-1}.
\end{align*}
Then, $\det (A)\geq 0$ if and only if $D(k)$ is either nonpositive or nonnegative for all $k\geq 1$. Let $g_k=(A^k\mathbf x)_1$ and $h_k=(A^k\mathbf x)_2$. It follows from \Cref{Prop:logconcavity index} that $\det (A)\geq 0$ if and only if one of the two sequences $(g_k)_{k\geq 1}$ and $(h_k)_{k\geq 1}$ is log-concave and the other is log-convex, according to the sign of $x_1x_2(d-a)-bx_2^2+cx_1^2$. This shows the existence of log-concavity and log-convexity indices for a \( 2\times2 \) irreducible nonnegative matrix \( A \) even if \( A \) is not symmetric. (We note that \Cref{prop:log index} only provides the existence of log-concavity and convexity indices when $A$ is positive semidefinite.) As a concrete example, consider $A=\begin{bmatrix}
2 & 1 \\
2 & 2
\end{bmatrix}$ and $\mathbf x=\begin{bmatrix}
1 \\ 1
\end{bmatrix}$. Then, $\det(A)>0$ and \( D(k)>0 \) for all \( k \). Moreover, $(g_k)_{k\geq 1}$ is log-convex and $(h_k)_{k\geq 1}$ is log-concave. \end{example}
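The closed form for $D(k)$ and the resulting behaviour of $(g_k)$ and $(h_k)$ can be verified directly; here is a short Python sketch (ours, not from the paper) for the concrete example above:

```python
# Verification (ours) of the closed form for D(k), of the log-convexity of
# g_k = (A^k x)_1 and of the log-concavity of h_k = (A^k x)_2, for the
# concrete example A = [[2, 1], [2, 2]], x = (1, 1) from the text.

a, b, c, d = 2, 1, 2, 2
x = [1, 1]

vs = [x]                           # vs[k] = A^k x
for _ in range(10):
    v = vs[-1]
    vs.append([a * v[0] + b * v[1], c * v[0] + d * v[1]])

factor = x[0] * x[1] * (d - a) - b * x[1] ** 2 + c * x[0] ** 2   # = 1 here
det = a * d - b * c                                              # = 2 here
D_ok = all(vs[k][1] * vs[k - 1][0] - vs[k][0] * vs[k - 1][1]
           == factor * det ** (k - 1) for k in range(1, len(vs)))

g = [v[0] for v in vs]             # 3, 10, 34, ...  for k >= 1
h = [v[1] for v in vs]             # 4, 14, 48, ...
g_log_convex = all(g[k - 1] * g[k + 1] >= g[k] ** 2 for k in range(2, len(g) - 1))
h_log_concave = all(h[k - 1] * h[k + 1] <= h[k] ** 2 for k in range(2, len(h) - 1))
print(D_ok, g_log_convex, h_log_concave)
```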
\begin{remark}
Given a \( k\times k \) matrix \( A \),
let \( p(x) \) be the characteristic polynomial of \( A \). Since \( p(A)=\mathbf{O} \), multiplying \( \mathbf x \) on the right of both sides, we obtain a recurrence relation for \( r_n^{(i)}=(A^n\mathbf x)_{i} \). If \( A \) is a \( 2\times 2 \) matrix where \( {\rm tr}(A)>0 \) and \( \det(A)>0 \), then \( r_n^{(i)} \) satisfies a recurrence relation \( r_{n+1}^{(i)}=c_1r_n^{(i)}+c_2r_{n-1}^{(i)} \) for some positive \( c_1 \) and \( c_2 \). Then we can determine whether \( r_n^{(i)} \) is log-convex or log-concave by \cite{liu2007log}. Indeed, if $\left( r_0^{(i)},r_1^{(i)},r_2^{(i)} \right)$ is log-convex (resp. log-concave), then $\left(r_n^{(i)} \right)_{n\ge0}$ is log-convex (resp. log-concave) for each \( i \). \end{remark}
\begin{example}
Let \( A=\begin{bmatrix}
2 & 1 & 1 \\
1 & 2 & 0 \\
1 & 0 & 2 \\
\end{bmatrix} \) and \( \mathbf x=\begin{bmatrix}
1\\1\\1
\end{bmatrix} \). Then, \( A \) is an irreducible, nonnegative, positive semidefinite matrix. Its orthonormal eigenvectors are given by
\[
\mathbf u_1=\begin{bmatrix}
1/\sqrt{2}\\1/2\\1/2
\end{bmatrix},
\mathbf u_2=\begin{bmatrix}
0\\-1/\sqrt{2}\\1/\sqrt{2}
\end{bmatrix},
\mathbf u_3=\begin{bmatrix}
-1/\sqrt{2}\\1/2\\1/2
\end{bmatrix}
\]
and hence
\[
\mathbf y_1=\mathbf u_1\mathbf u_1^T\mathbf x=\begin{bmatrix}
\frac{1+\sqrt{2}}{2}\\
\frac{2+\sqrt{2}}{4}\\
\frac{2+\sqrt{2}}{4}
\end{bmatrix},
\mathbf y_2=\mathbf u_2\mathbf u_2^T\mathbf x=\begin{bmatrix}
0\\0\\0
\end{bmatrix},
\mathbf y_3=\mathbf u_3\mathbf u_3^T\mathbf x=\begin{bmatrix}
\frac{1-\sqrt{2}}{2}\\
\frac{2-\sqrt{2}}{4}\\
\frac{2-\sqrt{2}}{4}
\end{bmatrix}.
\]
Following the notation in the proof of Proposition~\ref{prop:log index}, we have \( j_0=3 \) since \( \mathbf y_2=\mathbf{0}\). One can directly have that \( p_0=1 \) and \( X_1=\{p_0\} \). Furthermore,
\[
F(p,q,k)=(\mu_1\mu_3)^{k-1}(\mu_1-\mu_3)\det\begin{bmatrix}
y_{p,1} & y_{p,3}\\
y_{q,1} & y_{q,3}
\end{bmatrix},
\]
where \( \mu_1=2+\sqrt{2} \) and \( \mu_3=2-\sqrt{2} \). Since \( F(1,q,k)>0 \) for \( q=2,3 \) and any \( k \), the log-concavity index is \( 1 \). The produced log-concave sequence is
\[
1, 4, 14, 48, 164, 560, 1912,\cdots.
\]
For more details, see A007070 in OEIS~\cite{OEIS}. In the same way, one can find that \( F(2,1,k)<0 \) and \( F(2,3,k)=0 \) for any \( k \), and hence the log-convexity indices are \( 2 \) and \( 3 \). (They produce the same sequence.) The produced log-convex sequence is
\[
1, 3, 10, 34, 116, 396, 1352, \cdots,
\]
see A007052 in OEIS. \end{example} \begin{example}
In this example, we consider a family of particular symmetric tridiagonal matrices. Let $A=aI+bP$ for some $a,b>0$, where $P$ is the adjacency matrix of the path graph on $n$ vertices. A Motzkin path is a lattice path using the step set \( \{\text{up}=(1,1),\ \text{level}=(1,0),\ \text{down}=(1,-1)\} \) that never goes below the \( x \)-axis. Then, the $i^\text{th}$ entry of $A^k\mathbf{1}$ is the sum of the weights of weighted Motzkin paths of length \( k \) starting at \( (0,i) \), where each level step has weight \( a \) and each up or down step has weight \( b \).
It can be found in \cite{brouwer2011spectra} that eigenvalue $\lambda_l$ of $P$ for $l=1,\dots,n$ is given by $\lambda_l=2\cos\left(\frac{l\pi}{n+1}\right)$, and the corresponding eigenvector $\mathbf u_l$ is given by $(\mathbf u_l)_j=\sin\left(\frac{lj\pi}{n+1}\right)$. So, the eigenvalues of $A$ are given by $a+b\lambda_l$ for $l=1,\dots,n$. Hence, if $a\geq 2b$, then $A$ is irreducible and positive semidefinite. Therefore, by \Cref{thm:logconcave seq}, $(a_k(A,\mathbf{1}))_{k\geq 1}$ and $(b_k(A,\mathbf{1}))_{k\geq 1}$ produce log-concave and log-convex sequences.
We now determine log-concavity and log-convexity indices, considering the case \ref{case b} in \Cref{Remark:finding index}. All eigenvalues of $A$ are distinct and $\lambda_1>\dots>\lambda_n$. It can be checked that $\mathbf u_{l}^T\mathbf{1}\geq 0$, with equality exactly when $l$ is even. Let $\mathbf y_l = (\mathbf u_{l}^T\mathbf{1})\mathbf u_l$ for $l=1,\dots,n$. We can find from $\sin(3\theta) = -4\sin^3(\theta)+3\sin(\theta)$ that for $j=2,\dots,n-1$,
\begin{align*}
\det\begin{bmatrix}
(\mathbf y_1)_1 & (\mathbf y_3)_1 \\
(\mathbf y_1)_j & (\mathbf y_3)_j \\
\end{bmatrix} = (\mathbf u_{1}^T\mathbf{1})(\mathbf u_{3}^T\mathbf{1})\det\begin{bmatrix}
\sin\left(\frac{\pi}{n+1}\right) & \sin\left(\frac{3\pi}{n+1}\right) \\
\sin\left(\frac{j\pi}{n+1}\right) & \sin\left(\frac{3j\pi}{n+1}\right)
\end{bmatrix}<0.
\end{align*}
Furthermore, $\det\begin{bmatrix}
(\mathbf y_1)_1 & (\mathbf y_j)_1 \\
(\mathbf y_1)_n & (\mathbf y_j)_n \\
\end{bmatrix}=0$ for $j=2,\dots,n$. Therefore, $1$ and $n$ are the log-convexity indices of $A$ associated with $\mathbf{1}$. Similarly, one can verify that $\lfloor\frac{n}{2}\rfloor$ and $\lceil\frac{n}{2}\rceil$ are the log-concavity indices of $A$ associated with $\mathbf{1}$. \end{example}
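The spectral data used in this example is easy to confirm numerically; the sketch below (ours, not from the paper) checks that $(\mathbf u_l)_j=\sin\left(\frac{lj\pi}{n+1}\right)$ is an eigenvector of $P$ for the eigenvalue $2\cos\left(\frac{l\pi}{n+1}\right)$, and that $a\ge 2b$ makes all eigenvalues $a+b\lambda_l$ of $A=aI+bP$ nonnegative:

```python
# Numerical confirmation (ours) of the spectral data used above: for the path
# graph on n vertices, u_l with (u_l)_j = sin(l*j*pi/(n+1)) is an eigenvector
# of the adjacency matrix P with eigenvalue 2*cos(l*pi/(n+1)); with a >= 2b > 0,
# all eigenvalues a + b*lambda_l of A = aI + bP are nonnegative.

import math

n, a, b = 5, 2.0, 1.0
P = [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

max_residual = 0.0
eigs = []
for l in range(1, n + 1):
    lam = 2 * math.cos(l * math.pi / (n + 1))
    u = [math.sin(l * j * math.pi / (n + 1)) for j in range(1, n + 1)]
    Pu = [sum(p * w for p, w in zip(row, u)) for row in P]
    max_residual = max(max_residual,
                       max(abs(pu - lam * ui) for pu, ui in zip(Pu, u)))
    eigs.append(a + b * lam)

print(max_residual, min(eigs))    # tiny residual; smallest eigenvalue of A >= 0
```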
\begin{remark}
To find the log-convexity and log-concavity indices, we have to calculate eigenvalues and eigenvectors. It would be desirable to develop a tool for obtaining these indices by simple hand calculations.
\end{remark}
\section{Appendices}\label{appendix}
\appendix
\section{Proofs for some results in Subsection \ref{Subsec:3.1}} \label{appendix:1}
We remark that tedious calculations based on recurrence relations in this appendix will not be displayed in detail, and they are performed by MATLAB\textsuperscript{\textregistered}. We denote by \( M_{P_d} \) the bottleneck matrix of the rooted path in Example~\ref{ex:path bottleneck}.
\subsection{Proof pertaining to the upper bound in Theorem \ref{thm:upperForPerron}}\label{subappen:1}
Here, we complete the proof of the remaining argument in Theorem \ref{thm:upperForPerron}.
\begin{figure}
\caption{A visualization of $B_2(d,r)$ with root \( 1 \).}
\label{Figure:Broom1}
\end{figure}
Let $d$ and $r$ be positive integers, and let $n=d+r$. Let $M$ be the bottleneck matrix of $B_2(d,r)$ on $n$ vertices in Figure \ref{Figure:Broom1}. Then $M$ is given by \begin{align*}
M=\begin{bmatrix}
M_{P_d} & M_{21}^T\\
M_{21} & M_{22}
\end{bmatrix}, \end{align*} where $M_{21}=\mathbf{1}_r\begin{bmatrix}
1 & 2 & \cdots & d \end{bmatrix}$, and $M_{22}=dJ_r+I_r$. Let $m_i^{(0)}=1$ for $i=1,\dots,n$. Suppose that for $k\geq 1$, $$M\begin{bmatrix}
m_1^{(k-1)}\\ \vdots \\ m_n^{(k-1)} \end{bmatrix}=\begin{bmatrix}
m_1^{(k)}\\ \vdots \\ m_n^{(k)} \end{bmatrix}.$$ From the structure of $M$, it can be readily checked that $m_{d+1}^{(k)}=\cdots=m_{n}^{(k)}$ for $k\geq 0$. Furthermore, we can see that \begin{align}\label{recurrence1}
m_{l}^{(k)}=\begin{cases*}
lrm_{d+1}^{(k-1)}+\sum\limits_{i=1}^l\sum\limits_{j=i}^d m_{j}^{(k-1)}& \text{if $1\leq l\leq d$,}\\
m_{d}^{(k)}+m_{d+1}^{(k-1)}& \text{if $d+1\leq l\leq n$.}
\end{cases*} \end{align}
We claim that \begin{align*}
\frac{\left(M^3\mathbf{1}\right)_{n}}{\left(M^{2}\mathbf{1}\right)_{n}}=\max_{i}{\frac{\left(M^3\mathbf{1}\right)_{i}}{\left(M^{2}\mathbf{1}\right)_{i}}}. \end{align*} To verify \( \left(M^3\mathbf{1}\right)_{l+1}\left(M^{2}\mathbf{1}\right)_{l}-\left(M^3\mathbf{1}\right)_{l}\left(M^{2}\mathbf{1}\right)_{l+1}>0 \), we first consider the case \( 1\le l\le d-1 \). Set \( d=l+k \) for $k\ge1$. With the aid of MATLAB\textsuperscript\textregistered, we can find from the recurrence relations \eqref{recurrence1} that \begin{align*}
&\left(M^3\mathbf{1}\right)_{l+1}\left(M^{2}\mathbf{1}\right)_{l}-\left(M^3\mathbf{1}\right)_{l}\left(M^{2}\mathbf{1}\right)_{l+1}\\
=&m_{l+1}^{(3)}m_{l}^{(2)}-m_{l}^{(3)}m_{l+1}^{(2)}\\
=&\frac{1}{360}l(l+1)(l+k)(7l^3+(35k-7)l^2+(40k^2-5k+2)l+20k^2-2)r^3\\
&+\frac{1}{720}l(l+1)(6l^5+(56k+8)l^4+(168k^2+70k+76)l^3+(200k^3+177k^2+305k-26)l^2\\
&+(80k^4+160k^3+313k^2+55k-40)l+40k^4+60k^3+122k^2-6k-24)r^2\\
&+\frac{1}{2880}l(l+1)(3l^6+(48k+9)l^5+(224k^2+120k+61)l^4+(448k^3+448k^2+440k+107)l^3\\
&+(400k^4+672k^3+972k^2+540k+560)l^2+(128k^{5}+400k^{4}+768k^{3}+748k^{2}+1084k\\
&+508)l+64k^{5}+160k^{4}+272k^{3}+248k^{2}+456k+192)r\\
&+\frac{1}{8640}kl\left( 2k+l\right)
\left( l+1\right) \left( 2k+l+1\right)(9l^{4}+\left( 36k+18\right) l^{3}+\left( 44k^{2}+54k+25\right)l^{2}\\
&+\left( 16k^{3}+44k^{2}+50k+16\right) l+8k^{3}+20k^{2}+16k+4). \end{align*} We can view $m_{l+1}^{(3)}m_{l}^{(2)}-m_{l}^{(3)}m_{l+1}^{(2)}$ as a polynomial in $r$. Let $C(i)$ be the coefficient of $r^i$ for $0\leq i\leq 3$. One can easily check that for \( k\ge1\) and \( l\ge1 \), $C(i)>0$ for each $i=0,\dots,3$. Thus, for $1\leq l\leq d-1$, we have $$ \left(M^3\mathbf{1}\right)_{l+1}\left(M^{2}\mathbf{1}\right)_{l}-\left(M^3\mathbf{1}\right)_{l}\left(M^{2}\mathbf{1}\right)_{l+1}>0. $$ Moreover, if \( l=d \), then \begin{align*}
&\left(M^3\mathbf{1}\right)_{d+1}\left(M^{2}\mathbf{1}\right)_{d}-\left(M^3\mathbf{1}\right)_{d}\left(M^{2}\mathbf{1}\right)_{d+1}\\
=&\frac{1}{360}d^2(d-1)(d+1)(7d^2+2)r^2\\
&+\frac{1}{360}d(d-1)(d+1)(3d^4 + 7d^3 + 45d^2 + 32d + 12)r\\
&+\frac{1}{2880}d(d+1)(3d^6 + 9d^5 + 61d^4 + 107d^3 + 560d^2 + 508d + 192)>0. \end{align*} Recall $m_{d+1}^{(k)}=\cdots=m_{n}^{(k)}=\left(M^k\mathbf{1}\right)_{n}$ for $k\geq 0$. Therefore, \begin{align*}
\frac{\left(M^3\mathbf{1}\right)_{n}}{\left(M^{2}\mathbf{1}\right)_{n}}=\max_{i}{\frac{\left(M^3\mathbf{1}\right)_{i}}{\left(M^{2}\mathbf{1}\right)_{i}}}, \end{align*} where \begin{align*}
(M^3\mathbf{1})_n
=&d^3r^3 +\frac{1}{6}d^2(7d^2 + 9d + 20)r^2+\frac{1}{120}d(61d^4 + 140d^3 + 315d^2 + 280d + 404)r\\
&+\frac{1}{720}(61d^6 + 183d^5 + 385d^4 + 465d^3 + 634d^2 + 432d + 720), \end{align*} and \begin{align*}
(M^2\mathbf{1})_n=&d^2r^2 +\frac{1}{6}d(5d^2 + 6d + 13)r+\frac{1}{24}(5d^4 + 10d^3 + 19d^2 + 14d + 24). \end{align*}
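The two displayed polynomials can be cross-checked against a direct computation. The sketch below (ours, using exact rational arithmetic in place of the MATLAB\textsuperscript{\textregistered} computation) assembles $M$ for $B_2(d,r)$ block by block and compares $(M^2\mathbf{1})_n$ and $(M^3\mathbf{1})_n$ with the displayed closed forms for several small $d$ and $r$:

```python
# Cross-check (ours) of the closed forms for (M^3 1)_n and (M^2 1)_n against a
# direct computation with the bottleneck matrix of B_2(d, r) assembled as in
# the text: top-left M_{P_d}, M_21 = 1_r [1 2 ... d], M_22 = d J_r + I_r.

from fractions import Fraction as F

def broom2_M(d, r):
    n = d + r
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i >= d and j >= d:
                M[i][j] = d + (1 if i == j else 0)   # M_22 = d J_r + I_r
            else:
                M[i][j] = min(i, j) + 1              # path block and M_21 blocks
    return M

def S1(d, r):   # displayed closed form for (M^3 1)_n
    return (d**3 * r**3 + F(1, 6) * d**2 * (7*d**2 + 9*d + 20) * r**2
            + F(1, 120) * d * (61*d**4 + 140*d**3 + 315*d**2 + 280*d + 404) * r
            + F(1, 720) * (61*d**6 + 183*d**5 + 385*d**4 + 465*d**3
                           + 634*d**2 + 432*d + 720))

def S2(d, r):   # displayed closed form for (M^2 1)_n
    return (d**2 * r**2 + F(1, 6) * d * (5*d**2 + 6*d + 13) * r
            + F(1, 24) * (5*d**4 + 10*d**3 + 19*d**2 + 14*d + 24))

ok = True
for d in (1, 2, 3):
    for r in (1, 2):
        M = broom2_M(d, r)
        pows = [[1] * (d + r)]                        # pows[k] = M^k 1
        for _ in range(3):
            pows.append([sum(m * w for m, w in zip(row, pows[-1])) for row in M])
        ok = ok and pows[2][-1] == S2(d, r) and pows[3][-1] == S1(d, r)
print(ok)
```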
\subsection{Proofs pertaining to the lower bounds in Theorem \ref{thm:lowerForPerron}}\label{subappen:2} We now complete the remaining argument in Theorem \ref{thm:lowerForPerron}.
\begin{figure}
\caption{A visualization of $B_1(d,r)$ with root \( 1 \).}
\label{Figure:Broom2}
\end{figure}
Let $d$ and $r$ be positive integers, and let $n=d+r$. We consider the bottleneck matrix $M$ of the broom $B_1(d,r)$ as in Figure \ref{Figure:Broom2}. Then $M$ is given by \begin{align*}
M=\begin{bmatrix}
M_{P_d} & J_{d,r}\\
J_{r,d} & I_r+J_r
\end{bmatrix}. \end{align*}
Let $m_i^{(0)}=1$ for $i=1,\dots,n$. Suppose that for $k\geq 1$, $$M\begin{bmatrix}
m_1^{(k-1)}\\ \vdots \\ m_n^{(k-1)} \end{bmatrix}=\begin{bmatrix}
m_1^{(k)}\\ \vdots \\ m_n^{(k)} \end{bmatrix}.$$
Then we have \begin{align*}
m_{l}^{(k)}=\begin{cases*}
rm_{d+1}^{(k-1)}+\sum\limits_{i=1}^l\sum\limits_{j=i}^d m_{j}^{(k-1)}& \text{if $1\leq l\leq d$,}\\
m_{1}^{(k)}+m_{d+1}^{(k-1)}& \text{if $d+1\leq l\leq n$.}
\end{cases*} \end{align*} Using MATLAB\textsuperscript\textregistered, we obtain \begin{align*}
\mathbf{1}^TM^3\mathbf{1}=&r^4 + (4d + 3)r^3 +\frac{1}{2}(2d+3)(d+2)(d+1)r^2\\
&+\frac{1}{15}(d+1)(4d^4 + 16d^3 + 19d^2 + 21d + 15)r\\
&+\frac{1}{2520}d(2d+1)(d+1)(68d^4 + 136d^3 + 133d^2 + 65d + 18),\\
\mathbf{1}^TM^2\mathbf{1}=&r^3 + (3d + 2)r^2 +\frac{1}{3}(d+1)(2d^2+4d+3)r+\frac{1}{30}d(2d+1)(d+1)(2d^2+2d+1). \end{align*}
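As a check on these closed forms (ours; exact rational arithmetic replacing the MATLAB\textsuperscript{\textregistered} computation), the sketch below assembles the bottleneck matrix of $B_1(d,r)$ block by block and compares $\mathbf{1}^TM^2\mathbf{1}$ and $\mathbf{1}^TM^3\mathbf{1}$ with the displayed polynomials for several small $d$ and $r$:

```python
# Cross-check (ours) of the displayed formulas for 1^T M^3 1 and 1^T M^2 1,
# with M the bottleneck matrix of B_1(d, r) assembled as in the text:
# M = [[M_{P_d}, J_{d,r}], [J_{r,d}, I_r + J_r]].

from fractions import Fraction as F

def broom1_M(d, r):
    n = d + r
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < d and j < d:
                M[i][j] = min(i, j) + 1              # (M_{P_d})_{ij}
            elif i >= d and j >= d:
                M[i][j] = 1 + (1 if i == j else 0)   # I_r + J_r
            else:
                M[i][j] = 1                          # all-ones coupling blocks
    return M

def V1(d, r):   # displayed closed form for 1^T M^3 1
    return (r**4 + (4*d + 3) * r**3
            + F(1, 2) * (2*d + 3) * (d + 2) * (d + 1) * r**2
            + F(1, 15) * (d + 1) * (4*d**4 + 16*d**3 + 19*d**2 + 21*d + 15) * r
            + F(1, 2520) * d * (2*d + 1) * (d + 1)
              * (68*d**4 + 136*d**3 + 133*d**2 + 65*d + 18))

def V2(d, r):   # displayed closed form for 1^T M^2 1
    return (r**3 + (3*d + 2) * r**2
            + F(1, 3) * (d + 1) * (2*d**2 + 4*d + 3) * r
            + F(1, 30) * d * (2*d + 1) * (d + 1) * (2*d**2 + 2*d + 1))

ok = True
for d in (1, 2, 3):
    for r in (1, 2):
        M = broom1_M(d, r)
        v = [1] * (d + r)
        sums = [sum(v)]                              # sums[k] = 1^T M^k 1
        for _ in range(3):
            v = [sum(m * w for m, w in zip(row, v)) for row in M]
            sums.append(sum(v))
        ok = ok and sums[2] == V2(d, r) and sums[3] == V1(d, r)
print(ok)
```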
\begin{figure}
\caption{A visualization of $B_1(d,r)$ with root \( d \).}
\label{Figure:Broom3}
\end{figure}
We now consider the neckbottle matrix $Q$ of $B_1(d,r)$ with the labeling of vertices as in Figure \ref{Figure:Broom3}. The path matrix $N$ of $B_1(d,r)$ is given by \begin{align*}
N=\begin{bmatrix}
N_{11} & \mathbf e_d\mathbf{1}_r^T\\
\mathbf{O} & I_r
\end{bmatrix}, \end{align*} where $\mathbf e_d$ is the column vector of size $d$ with a single $1$ in the $d^\text{th}$ position and zeros elsewhere, and $$N_{11}=\begin{bmatrix}
1 & & \mathbf{O} \\
\vdots & \ddots & \\
1 &\cdots & 1 \\ \end{bmatrix}.$$ Then $Q$ is given by \begin{align*}
Q=\begin{bmatrix}
M_{P_d}+r\mathbf e_d\mathbf e_d^T & \mathbf e_d\mathbf{1}_r^T\\
\mathbf{1}_r\mathbf e_d^T & I_r
\end{bmatrix}. \end{align*} Let $q_i^{(0)}=1$ for $i=1,\dots,n$. Suppose that for $k\geq 1$, $$Q\begin{bmatrix}
q_1^{(k-1)}\\ \vdots \\ q_n^{(k-1)} \end{bmatrix}=\begin{bmatrix}
q_1^{(k)}\\ \vdots \\ q_n^{(k)} \end{bmatrix}.$$ From the structure of $Q$, the following recurrence relations can be found: \begin{align*}
q_{l}^{(k)}=\begin{cases*}
\sum\limits_{i=1}^l\sum\limits_{j=i}^d q_{j}^{(k-1)}& \text{if $1\leq l\leq d-1$,}\\
r\left(q_{d}^{(k-1)}+q_{d+1}^{(k-1)}\right)+\sum\limits_{i=1}^d\sum\limits_{j=i}^d q_{j}^{(k-1)}& \text{if $l=d$,}\\
q_{d}^{(k-1)}+q_{d+1}^{(k-1)}& \text{if $d+1\leq l\leq n$.}
\end{cases*} \end{align*} Using MATLAB\textsuperscript\textregistered, we obtain \begin{align*}
\mathbf{1}^TQ^3\mathbf{1}=&4r^3 + 2(d^2 + 3d + 4)r^2 +\frac{1}{12}(13d^4 + 26d^3 + 41d^2 + 28d + 48)r\\
&+\frac{1}{2520}d(d+1)(2d+1)(68d^4 + 136d^3 + 133d^2 + 65d + 18),\\
\mathbf{1}^TQ^2\mathbf{1}=&4r^2 + 2(d^2 + d + 2)r +\frac{1}{30}d(d+1)(2d+1)(2d^2 + 2d + 1). \end{align*}
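These two closed forms admit the same kind of cross-check (ours; exact rational arithmetic replacing the MATLAB\textsuperscript{\textregistered} computation), assembling the neckbottle matrix of $B_1(d,r)$ block by block:

```python
# Cross-check (ours) of the displayed formulas for 1^T Q^3 1 and 1^T Q^2 1,
# with Q the neckbottle matrix of B_1(d, r) assembled as in the text:
# Q = [[M_{P_d} + r e_d e_d^T, e_d 1_r^T], [1_r e_d^T, I_r]].

from fractions import Fraction as F

def broom1_Q(d, r):
    n = d + r
    Q = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i < d and j < d:                      # M_{P_d} + r e_d e_d^T
                Q[i][j] = min(i, j) + 1 + (r if i == d - 1 and j == d - 1 else 0)
            elif i >= d and j >= d:                  # I_r
                Q[i][j] = 1 if i == j else 0
            else:                                    # e_d 1_r^T and 1_r e_d^T
                Q[i][j] = 1 if (i == d - 1 or j == d - 1) else 0
    return Q

def W1(d, r):   # displayed closed form for 1^T Q^3 1
    return (4 * r**3 + 2 * (d**2 + 3*d + 4) * r**2
            + F(1, 12) * (13*d**4 + 26*d**3 + 41*d**2 + 28*d + 48) * r
            + F(1, 2520) * d * (d + 1) * (2*d + 1)
              * (68*d**4 + 136*d**3 + 133*d**2 + 65*d + 18))

def W2(d, r):   # displayed closed form for 1^T Q^2 1
    return (4 * r**2 + 2 * (d**2 + d + 2) * r
            + F(1, 30) * d * (d + 1) * (2*d + 1) * (2*d**2 + 2*d + 1))

ok = True
for d in (1, 2, 3):
    for r in (1, 2):
        Q = broom1_Q(d, r)
        v = [1] * (d + r)
        sums = [sum(v)]                              # sums[k] = 1^T Q^k 1
        for _ in range(3):
            v = [sum(q * w for q, w in zip(row, v)) for row in Q]
            sums.append(sum(v))
        ok = ok and sums[2] == W2(d, r) and sums[3] == W1(d, r)
print(ok)
```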
We shall show that given $d\geq 3$, $c_3(M,\mathbf{1})-c_3(Q,\mathbf{1})$ has exactly one root in $(0,\infty)$. Using MATLAB\textsuperscript\textregistered, we have \begin{align*}
F(d,r)=&\frac{1}{dr}(\mathbf{1}^TM^2\mathbf{1})(\mathbf{1}^TQ^2\mathbf{1})\left(c_3(M,\mathbf{1})-c_3(Q,\mathbf{1})\right) \\
=&\frac{1}{dr}\left((\mathbf{1}^TM^3\mathbf{1})(\mathbf{1}^TQ^2\mathbf{1})-(\mathbf{1}^TQ^3\mathbf{1})(\mathbf{1}^TM^2\mathbf{1})\right)\\
=&\frac{1}{60}(d-1)(d-2)(8d^2 - 21d + 11)r^3-\frac{1}{2520}(136d^6 - 868d^5 + 1540d^4 - 1015d^3\\
& + 2674d^2 - 4417d - 570)r^2-\frac{1}{840}(d-1)(d+1)(24d^5 - 16d^4 - 242d^3 + 355d^2 \\
&- 116d - 184)r-\frac{1}{37800}(d+1)(2d+1)(8d^7 + 58d^6 + 284d^5 - 575d^4 - 3763d^3\\
& + 2677d^2 + 4641d + 2970). \end{align*} (Note that $F(1,r)=r^2-1$ and $F(2,r)=\frac{1}{2}r^2+\frac{3}{2}r+1$.) Let $d\geq 3$. It can be checked that the coefficients of $r$ and $r^2$ and the constant in $F(d,r)$ are negative. It follows that $F(d,r)$ has only one positive root. Note that \( F(d,r)=0 \) if and only if \( c_3(M,\mathbf{1})=c_3(Q,\mathbf{1}) \) since \( \frac{1}{dr}(\mathbf{1}^TM^2\mathbf{1})(\mathbf{1}^TQ^2\mathbf{1})>0 \) for \( d,r\ge1 \). Plugging \( r=0.4d^2-1 \) and \( r=0.42d^2+2 \) into \( F(d,r) \), we obtain \begin{align*}
F(d,0.4d^2-1)
=&-\frac{1}{945000}d(96d^9 + 4480d^8 + 17660d^7 - 193050d^6 + 435414d^5 +\\
& 129255d^4 - 2490255d^3 + 1902425d^2 + 3428125d - 2629350), \end{align*} and \begin{align*}
F(d,0.42d^2+2)
=&\frac{1}{472500000}(169344d^{10} - 3415835d^9 + 27443570d^8 - 85370100d^7\\& + 338305446d^6 - 912780225d^5 + 2072349300d^4 - 3997675000d^3 \\&+ 4998385000d^2 - 1712137500d + 1569375000). \end{align*} It is easy to see that $F(d,0.4d^2-1)<0$ and $F(d,0.42d^2+2)>0$. By the intermediate value theorem, there exists a real number $r_0$ in the open interval \( (0.4d^2-1,0.42d^2+2) \) such that \( c_3(M,\mathbf{1})=c_3(Q,\mathbf{1}) \). Furthermore, $c_3(M,\mathbf{1})<c_3(Q,\mathbf{1})$ for $r<r_0$ and $c_3(M,\mathbf{1})>c_3(Q,\mathbf{1})$ for $r>r_0$.
\end{document} |
\begin{document}
\title[A Remark on the First Eigenvalue...Symmetric Spaces] {A Remark on the First Eigenvalue of the Laplace Operator on $1$-forms for Compact Inner Symmetric Spaces} \author{Jean-Louis Milhorat} \address{Nantes Universit\'e, CNRS, Laboratoire de Math\'ematiques Jean Leray, LMJL, UMR 6629, F-44000 Nantes, France} \email{[email protected]}
\thanks{\textit{Acknowledgements: The author thanks Francis Burstall for having pointed out an incorrect statement in a first version of the present paper, and for the references about spherical representations.}}
\begin{abstract} We remark that on a compact inner symmetric space $G/K$, endowed with the Riemannian metric given by the sign-changed Killing form of $G$, the first (non-zero) eigenvalue of the Laplace operator on $1$-forms is the Casimir eigenvalue of the highest long or short root of $G$, according as the highest weight of the isotropy representation is long or short. Some results for the first (non-zero) eigenvalue on functions are derived. \end{abstract}
\maketitle
\section{Introduction} It is well-known that symmetric spaces provide examples where the spectrum of Laplace or Dirac operators can be (theoretically) explicitly computed. However this explicit computation is far from being simple in general and only a few examples are known. On the other hand, several classical results in geometry involve the first (non-zero) eigenvalue of those spectra, so it seems interesting to get this eigenvalue without computing the whole spectrum. The present paper is a proof of the following remark: \begin{prop} \label{result} Let $G/K$ be a compact inner symmetric space of ``{type I}'', endowed with the Riemannian metric given by the sign-changed Killing form of $G$. The first eigenvalue\footnote{by Bochner's vanishing theorem, there are no harmonic $1$-forms on the symmetric spaces considered here, since their Ricci curvature is positive.} of the Laplace operator acting on $1$-forms is given by the Casimir eigenvalue of the highest either long or short root of $G$ (relative to the choice of a common maximal torus $T$ in $G$ and $K$), according as the highest weight of the isotropy representation is long or short. \end{prop}
\noindent Note that, although the result involves the choice of a basis of roots, it does not depend on this particular choice, by the transitivity of the Weyl group $W_G$ of $G$ on root bases. Indeed, by the Freudenthal formula, the Casimir eigenvalue of the highest (long or short) root $\beta$ is given by $$\langle \beta+2\delta_G,\beta\rangle\,,$$ where $\delta_G$ is the half-sum of the positive roots of $G$, and $\langle\,,\,\rangle$ the scalar product on the set of weights induced by the sign-changed Killing form of $G$. Hence, by the $W_G$-invariance of the scalar product, two choices of a basis of roots (relative to the choice of a common maximal torus $T$ in $G$ and $K$) will lead to the same Casimir eigenvalue. On the other hand, recall that for the symmetric spaces considered here, the group $G$ is simple, hence at most two lengths occur among the roots (cf. for instance \cite{Hum}). So, if only one length occurs, only the Casimir eigenvalue of the highest root has to be considered.
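As an illustration of the Freudenthal formula, the following sketch (our own, assuming the standard $B_n$ root data and the scalar product $\langle\mu,\mu'\rangle=\frac{1}{2(2n-1)}\sum_k\mu_k\mu'_k$ induced by the sign-changed Killing form, computed in Section~3 below for $G=\mathrm{Spin}(2n+1)$) evaluates $\langle\beta+2\delta_G,\beta\rangle$ for the highest long and short roots:

```python
# Illustration (assuming the standard B_n root data): for G = Spin(2n+1),
# the sign-changed Killing form induces <mu,mu'> = sum_k mu_k mu'_k / (2(2n-1))
# on weights written in the basis (x_1, ..., x_n).
from fractions import Fraction

def casimir_Bn(n, beta):
    """<beta + 2*delta_G, beta> for B_n with the Killing normalization."""
    # positive roots: x_i - x_j, x_i + x_j (i < j) and x_i,
    # hence delta_G = sum_k (n - k + 1/2) x_k.
    delta = [Fraction(2*(n - k) + 1, 2) for k in range(1, n + 1)]
    scale = Fraction(1, 2*(2*n - 1))
    return scale*sum((b + 2*d)*b for b, d in zip(beta, delta))

for n in range(2, 10):
    long_root = [1, 1] + [0]*(n - 2)    # highest (long) root x_1 + x_2
    short_root = [1] + [0]*(n - 1)      # highest short root x_1
    assert casimir_Bn(n, long_root) == 1               # adjoint representation
    assert casimir_Bn(n, short_root) == Fraction(n, 2*n - 1)
print("Freudenthal values confirmed for B_n, n = 2..9")
```

For $q=0$ (so $n=p$) the short-root value $n/(2n-1)$ is exactly the entry $p/(2p-1)$ for $S^{2p}$ in the table below.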
\noindent The study of subgroups of maximal rank in a compact Lie group was initiated by A. Borel and J. De Siebenthal in \cite{BdS}, with an explicit description for compact simple groups, resulting in the following complete list of irreducible compact simply-connected Riemannian inner symmetric spaces $G/K$ of type I (cf. J. A. Wolf's book \cite{Wol}), together with the first eigenvalue in each case.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline $G/K$ & \tiny{Number of} & \tiny{Length of the highest} & \tiny{First non-zero eigenvalue}\\
&\tiny{$G$-root} & \tiny{weight of the} & \tiny{of the Laplacian}\\
&\tiny{lengths} & \tiny{isotropy representation} &\tiny{on $1$-forms}\\
\hline $\frac{\mathrm{SU}(p+q)}{\mathrm{S}(\mathrm{U}(p)\times \mathrm{U}(q))}$, $1\leq p\leq q$ &$1$ & long & $1$\\
\hline $\frac{\mathrm{SO}(2p+2q+1)}{\mathrm{SO}(2p)\times \mathrm{SO}(2q+1)}$, $p\geq 1$, $q\geq 0$ & $2$ &short if $q=0$
& $\frac{p}{2p-1}$\, \tiny{$(G/K=S^{2p})$}\\
&&long if $q\geq 1$ & $1$ \\
\hline $\frac{\mathrm{Sp}(p+q)}{\mathrm{Sp}(p)\times\mathrm{Sp}(q)}$, $1\leq p\leq q$ & $2$& short &$\frac{p+q}{p+q+1}$\\
\hline $\frac{\mathrm{Sp}(n)}{\mathrm{U}(n)}$ & $2$& long & $1$\\
\hline $\frac{\mathrm{SO}(2p+2q)}{\mathrm{SO}(2p)\times \mathrm{SO}(2q)}$, $1\leq p \leq q$ & $1$& long & $1$\\
\hline $\frac{\mathrm{SO}(2n)}{\mathrm{U}(n)}$, $n>2$ & $1$& long & $1$\\
\hline $\frac{\mathrm{G}_2}{\mathrm{SO}(4)}$ & $2$& long & $1$ \\
\hline $\frac{\mathrm{F}_4}{\mathrm{Sp}(3)\cdot \mathrm{Sp}(1)}$ & $2$&long &$1$\\
\hline $\frac{\mathrm{F}_4}{\mathrm{Spin}(9)}$ & $2$& short& $2/3$\\
\hline $\frac{\mathrm{E}_6}{\mathrm{SO}(10)\cdot \mathrm{SO}(2)}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_6}{\mathrm{SU}(6)\cdot \mathrm{SU}(2)}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_7}{\mathrm{E}_6 \cdot \mathrm{SO}(2)}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_7}{\mathrm{SU}(8)/\{\pm I\}}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_7}{\mathrm{SO}'(12)\cdot \mathrm{SU}(2)}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_8}{\mathrm{SO}'(16)}$ & $1$&long&$1$\\
\hline $\frac{\mathrm{E}_8}{\mathrm{E}_7\cdot \mathrm{SU}(2)}$ & $1$&long&$1$\\
\hline
\end{tabular}
\end{center} (By convention, if all roots have the same length, they are called long. The notation $\mathrm{SO}'(n)$ indicates that $\mathrm{SO}(n)$ acts by means of a spin representation.) The (rather puzzling) fact that the eigenvalue is equal to $1$ in most of the cases is explained below: it is the Casimir eigenvalue of the adjoint representation.
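Two of the non-trivial table entries can be cross-checked numerically. The sketch below (ours, not from the paper) exploits the fact that the Killing normalization is fixed by requiring the maximal root, i.e. the adjoint representation, to have Casimir eigenvalue $1$, so Euclidean root coordinates suffice:

```python
# Cross-check of two table entries (an illustration, not a proof), using
# Freudenthal's formula <beta + 2 delta_G, beta>; the Killing metric is the
# Euclidean one rescaled so that the maximal root theta gives the value 1.
from fractions import Fraction

def casimir(pos_roots, beta):
    """<beta + 2 delta, beta> / <theta + 2 delta, theta>, theta maximal root."""
    dim = len(pos_roots[0])
    delta = [sum((r[i] for r in pos_roots), Fraction(0)) / 2
             for i in range(dim)]
    def form(mu):
        return sum((m + 2*d)*m for m, d in zip(mu, delta))
    theta = max(pos_roots, key=form)   # the maximal root maximizes the form
    return Fraction(form(beta), form(theta))

def c_roots(n):
    """Positive roots of C_n: x_i +- x_j (i < j) and 2 x_i."""
    roots = []
    for i in range(n):
        for j in range(i + 1, n):
            for s in (1, -1):
                r = [0]*n; r[i] = 1; r[j] = s
                roots.append(r)
    for i in range(n):
        r = [0]*n; r[i] = 2
        roots.append(r)
    return roots

def f4_roots():
    """Positive roots of F_4: e_i, e_i +- e_j (i < j), (e_1 +- e_2 +- e_3 +- e_4)/2."""
    roots = []
    for i in range(4):
        r = [0]*4; r[i] = 1
        roots.append(r)
    for i in range(4):
        for j in range(i + 1, 4):
            for s in (1, -1):
                r = [0]*4; r[i] = 1; r[j] = s
                roots.append(r)
    for s2 in (1, -1):
        for s3 in (1, -1):
            for s4 in (1, -1):
                roots.append([Fraction(1, 2), Fraction(s2, 2),
                              Fraction(s3, 2), Fraction(s4, 2)])
    return roots

for n in range(2, 8):       # Sp(p+q)/Sp(p)xSp(q) row: short root, n = p+q
    assert casimir(c_roots(n), [1, 1] + [0]*(n - 2)) == Fraction(n, n + 1)
# F4/Spin(9) row: highest short root e_1
assert casimir(f4_roots(), [1, 0, 0, 0]) == Fraction(2, 3)
print("table entries confirmed")
```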
\noindent Some results for the spectrum on functions may be derived. Indeed, if a function $f$ satisfies $\Delta f=\lambda\, f$, where $\lambda$ is the first (nonzero) eigenvalue, then $\Delta df=\lambda \,df$, hence $\lambda\geq \mu$, where $\mu$ is the first eigenvalue on $1$-forms. Among the cases considered here, we determine in the following the symmetric spaces for which this inequality is an equality.
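\noindent A worked instance of the equality case (a standard computation, assuming the classical fact that the spherical representations of the pair $(\mathrm{Spin}(2p+1),\mathrm{Spin}(2p))$ are exactly those with highest weight $l\,e_1$, $l\in\mathbb{N}$, in the standard weight coordinates): on $S^{2p}=\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$ the Freudenthal formula gives $$c_l=\langle l\,e_1+2\,\delta_G\,,\,l\,e_1\rangle=\frac{l\,(l+2p-1)}{2\,(2p-1)}\,,$$ so the first nonzero eigenvalue on functions is $c_1=\frac{p}{2p-1}$, which coincides with the value on $1$-forms given in the table: here $\lambda=\mu$.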
\noindent It can be checked that the values given in the above table agree with already known results (mainly on the spectrum of functions): compare with the table given in \cite{Nag61}, with the explicit computations of the whole spectrum given in \cite{CaWol76} and \cite{Bes78} for Compact Rank One Symmetric Spaces: $\mathbb{R} P^{n}$, $\mathbb{C} P^{n}$, $\mathbb{H} P^{n}$, $\mathbb{C a} P^2=\mathrm{F}_4/\mathrm{Spin}(9)$, with the partial results (not always very explicit) obtained for the spectrum of Grassmannians (\cite{IT78}, \cite{Str80b}, \cite{Tsu}, \cite{TK03a}, \cite{ECh04}, \cite{H07}, \cite{ECh12}) and the spectrum of $\mathrm{Sp}(n)/\mathrm{U}(n)$, (\cite{TK03}, \cite{HC11})\footnote{Many references for the explicit computations of spectra may be found in\\ https://mathoverflow.net/questions/219109/explicit-eigenvalues-of-the-laplacian.}.
\section{Preliminaries for the proof} We consider a compact simply connected irreducible symmetric space $G/K$ of ``type I'', where $G$ is a simple compact and simply-connected Lie group and $K$ is the connected subgroup formed by the fixed elements of an involution $\sigma$ of $G$. This involution induces the Cartan decomposition of the Lie algebra $\mathfrak{g}$ of $G$ into $$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}\,,$$ where $\mathfrak{k}$ is the Lie algebra of $K$ and $\mathfrak{p}$ is the vector space $\{X\in\mathfrak{g}\,;\, \sigma_{*}\cdot X=-X\}$. This space $\mathfrak{p}$ is canonically identified with the tangent space to $G/K$ at the point $o$, $o$ being the class of the neutral element of $G$. We consider here irreducible symmetric spaces, that is, the isotropy representation $$\begin{array}{crcl} \rho :&K&\longrightarrow& \mathrm{GL}(\mathfrak{p})\\
& k&\longmapsto& \mathrm{Ad}(k)_{|\mathfrak{p}} \end{array} $$ is irreducible. Hence all $G$-invariant scalar products on $\mathfrak{p}$, and so all
$G$-invariant Riemannian metrics on $G/K$ are proportional. We consider the metric induced by the Killing form of $G$ sign-changed. With this metric, $G/K$ is an Einstein space with scalar curvature $\mathrm{Scal}=n/2$, (cf. for instance Theorem~7.73 in \cite{Bes}).
\noindent As $G/K$ is a homogeneous space, the bundle of $p$-forms on $G/K$ may be identified with the bundle $G\times_{\wedge^{p}\rho} \wedge^{p} \mathfrak{p}$. Any $p$-form $\omega$ is then identified with a $K$-equivariant function $ G\rightarrow \wedge^{p} \mathfrak{p}$, that is, a function satisfying $$\forall g\in G\,,\quad \forall k\in K\,,\quad \omega(gk)=\wedge^{p}\rho(k^{-1})\, \omega(g)\,.$$ Let $L_{K}^{2}(G,\wedge^{p} \mathfrak{p})$ be the Hilbert space of $L^{2}$ $K$-equivariant functions $G\rightarrow \wedge^{p} \mathfrak{p}$. The Laplace operator $\Delta_p$ extends to a self-adjoint operator on $L_{K}^{2}(G,\wedge^{p} \mathfrak{p})$. Since it is an elliptic operator, it has a (real) discrete spectrum. By the Peter-Weyl theorem,
the natural unitary representation of $G$ on the Hilbert space
$L_{K}^{2}(G,\wedge^{p} \mathfrak{p})$
decomposes into the Hilbert sum
$$\bigoplus_{\gamma \in \widehat{G}}V_{\gamma}\otimes \mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})\,,$$ where $\widehat{G}$ is the set of equivalence classes of irreducible unitary complex representations of $G$, $(\rho_{\gamma},V_{\gamma})$ represents an element $\gamma\in\widehat{G}$ and $\mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})$ is the vector space of $K$-equivariant homomorphisms $V_{\gamma}\rightarrow \wedge^{p} \mathfrak{p}$, i.e. $$\mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})=\{ A\in \mathrm{Hom}(V_{\gamma},\wedge^{p} \mathfrak{p})\; \mathrm{s.t.}\; \forall k\in K\,, A\circ \rho_{\gamma}(k)=\wedge^{p}\rho(k)\circ A\}\,.$$ The injection $ V_{\gamma}\otimes \mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})\hookrightarrow L_{K}^{2}(G,\wedge^{p} \mathfrak{p})$ is given by $$v\otimes A \mapsto \Big(g\mapsto (A\circ\rho_{\gamma}(g^{-1})\,)\cdot v\Big)\,.$$ The Laplacian $\Delta_p$ respects the above decomposition, and its restriction to the space $V_{\gamma}\otimes \mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})$ is nothing else but the Casimir operator $\mathcal{C}_{\gamma}$ of the representation $(\rho_{\gamma},V_{\gamma})$ (see \cite{IT78}): \begin{equation*} \Delta_p (v\otimes A)= v\otimes (A\circ \mathcal{C}_{\gamma})\,. \end{equation*} But since the representation is irreducible, the Casimir operator is a scalar multiple of the identity, $\mathcal{C}_{\gamma}=c_{\gamma}\, \mathrm{id}$, where the eigenvalue $c_{\gamma}$ only depends on $\gamma\in\widehat{G}$. Hence the spectrum of $\Delta_p$ is the set of the $c_\gamma$ for which $\mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})$ is nontrivial.
\noindent Denote by $\wedge^{p}\rho=\oplus^{N}_{j=1} \rho^{p}_j$ the decomposition of the representation $K\rightarrow\wedge^{p} \mathfrak{p}$ into irreducible components. Note that for $p=0$ or $1$, the decomposition has only one component, since the representations are respectively the trivial one and the isotropy representation, which are both irreducible.
\noindent Now, by the Frobenius reciprocity theorem, one has $$\dim\big(\mathrm{Hom}_{K}(V_{\gamma},\wedge^{p} \mathfrak{p})\big)=\sum_{j=1}^{N} \mathrm{mult}(\rho^{p}_j,\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\,, $$ where $\mathrm{Res}^{G}_{K}(\rho_{\gamma})$ is the restriction to $K$ of the representation $\rho_{\gamma}$.
\noindent So, finally, \begin{equation*} \mathrm{Spec}(\Delta_p)=\{c_{\gamma}\;;\; \gamma\in\widehat{G}\;\mathrm{s.t.}\;\exists j\;\mathrm{s.t.}\; \mathrm{mult}(\rho^{p}_j,\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\neq 0\}\,.\end{equation*} In particular \begin{align}\label{spect0} \mathrm{Spec}(\Delta_0)&=\{c_{\gamma}\;;\; \gamma\in\widehat{G}\;\mathrm{s.t.}\; \mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\neq 0\}\,,\\ \intertext{and} \mathrm{Spec}(\Delta_1)&=\{c_{\gamma}\;;\; \gamma\in\widehat{G}\;\mathrm{s.t.}\; \mathrm{mult}(\rho,\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\neq 0\}\,. \end{align}
\section{Proof of Proposition \ref{result}} \noindent We furthermore assume that $G$ and $K$ have the same rank and consider a fixed common maximal torus $T$.
\noindent Let $\Phi$ be the set of non-zero roots of the group $G$
with respect to $T$. According to a classical terminology, a root
$\theta$ is called compact if the corresponding root space is contained
in $\mathfrak{k}_{\mathbb{C}}$ (that is, $\theta$ is a root of $K$ with respect to $T$) and
noncompact if the root space is contained in $\mathfrak{p}_{\mathbb{C}}$. Let $\Phi_{G}^{+}$ be the set of positive roots of $G$, with respect to a choice of a basis of simple roots. The half-sum of the positive roots of $G$ is denoted by $\delta_{G}$. The space of weights is endowed with the $W_{G}$-invariant scalar product $<\,,\,>$ induced by the Killing form of $G$ sign-changed.
\noindent The symmetric spaces considered here being irreducible, the space $\mathfrak{p}_{\mathbb{C}}$ is irreducible. Let $\alpha$ be the highest weight of this representation. As the group $G$ is simple, there are at most two root lengths, and all roots of a given length are conjugate under the Weyl group $W_G$ of $G$ (see \cite{Hum}, \S~10.4, Lemma~C). So $\alpha$ is conjugate to either the maximal root of $G$ or the highest short root. Denote by $\beta$ this root, and let $w$ be any element in $W_G$ such that $w\cdot \alpha=\beta$. We claim that \begin{lem} \label{lem.mult} The multiplicity $\mathrm{mult}(\rho,\mathrm{Res}^{G}_{K}(\rho_{\beta}))$ is $\neq 0$. \end{lem} As the proof differs according as $\alpha$ is long or short, we first take a glance at the symmetric spaces for which $\alpha$ is short. \subsection{Symmetric spaces for which the highest weight of the isotropy representation of $K$ is a short root}\label{sh.root} First note that if $G$ has only one root length, then the highest weight $\alpha$ of the isotropy representation is necessarily a long root. So we only have to consider symmetric spaces $G/K$ for which $G$ has two root lengths. Using for instance Table~2, p.~66 in \cite{Hum}, we have to look at the following symmetric spaces. \begin{enumerate}
\item $\mathrm{SO}(2p+2q+1)/\mathrm{SO}(2p)\times \mathrm{SO}(2q+1)$. We consider here $G=\mathrm{Spin}(2p+2q+1)$. Identifying $\mathbb{R}^{2p}$ and
$\mathbb{R}^{2q+1}$ with the subspaces spanned respectively by $e_1,\ldots, e_{2p}$ and $e_{2p+1},\ldots,e_{2p+2q+1}$, where $(e_1,\ldots,e_{2p+2q+1})$ is the canonical basis of $\mathbb{R}^{2p+2q+1}$, $K$ is the subgroup of $G$ defined by
\begin{align*}
\mathrm{Spin}(2p)\cdot\mathrm{Spin}(2q+1)=&\big\{\psi \in \mathrm{Spin}(2p+2q+1)\,;\, \psi=\varphi\,\phi\,,\\
&\qquad\varphi\in \mathrm{Spin}(2p)\,,\phi\in \mathrm{Spin}(2q+1)\big\}\,,
\end{align*}
(Note that $K=\mathrm{Spin}(2p)$ when $q=0$.)
\noindent We consider the common torus of $G$ and $K$ defined by $$T=\left\{ \sum_{k=1}^{p+q}\big(\cos (\beta_{k})+\sin(\beta_{k})\,e_{2k-1}\cdot e_{2k}\big)\,;\, \beta_{1} ,\ldots, \beta_{p+q}\, \in \mathbb{R}\right\}\,.$$ The Lie algebra of $T$ is $$\mathfrak{t}=\left\{ \sum_{k=1}^{p+q}\beta_{k}\,e_{2k-1}\cdot e_{2k}\,;\, \beta_{1} ,\ldots, \beta_{p+q}\, \in \mathbb{R}\right\}\,.$$ We denote by $(x_1, \ldots , x_{p+q})$ the basis of ${\mathfrak{t}}^{*}$ given by $$ x_{k}\cdot \sum_{j=1}^{p+q}\beta_{j}\,e_{2j-1}\cdot e_{2j}=\beta_{k}\,.$$ We introduce the basis $(\widehat{x}_1,\ldots , \widehat{x}_{p+q})$ of $i\,\mathfrak{t}^{*}$ defined by $$\widehat{x}_k:=2i\,x_{k}\,,\quad k=1,\ldots,p+q\,.$$ A vector $\mu\in i\, \mathfrak{t}^{*}$ such that $\mu=\sum_{k=1}^{p+q} \mu_{k}\, \widehat{x}_{k}$, is denoted by $$ \mu= (\mu_{1} ,\mu_{2},\ldots ,\mu_{p+q})\, .$$ The restriction to $\mathfrak{t}$ of the Killing form $\mathrm{B}$ of $G$ is given by $$ \mathrm{B}(e_{2k-1}\cdot e_{2k},e_{2l-1}\cdot e_{2l})=-8(2p+2q-1)\,\delta_{kl}\,.$$ It is easy to verify that the scalar product on $i \, \mathfrak{t}^{*}$ induced by the sign-changed Killing form is given by \begin{equation}\label{scp1}\begin{split} \forall\mu=(\mu_1,\ldots ,\mu_{p+q})\in i \, \mathfrak{t}^{*}\,&,\,\forall \mu'=(\mu'_1,\ldots , \mu'_{p+q})\, \in i\, \mathfrak{t}^*\,,\\ <\mu ,\mu'>&=\frac{1}{2(2p+2q-1)}\,\sum_{k=1}^{p+q} \mu_k\, \mu'_k \,.\end{split} \end{equation} Considering the decomposition of the complexified Lie algebra of $G$ under the action of $T$, it is easy to verify that $T$ is a common maximal torus of $G$ and $K$, and that the respective roots are given by (see for instance chapter~12.4 in \cite{BH3M} for details), \begin{align*} &\begin{cases}\pm(\widehat{x}_{i}+\widehat{x}_{j})\,,\; \pm(\widehat{x}_{i}-\widehat{x}_{j})\,,\; 1\leq i<j\leq p+q\,,\\ \pm \widehat{x}_i\,,\; 1\leq i\leq p+q\,,\end{cases} &&&\text{for } G\,,\\ &\begin{cases}\pm(\widehat{x}_{i}+\widehat{x}_{j})\,,\,
\pm(\widehat{x}_{i}-\widehat{x}_{j})\,,\,1\leq i<j\leq p\,,\,p+1\leq i<j\leq p+q\,,\\ \pm \widehat{x_i}\,,\; p+1\leq i\leq p+q\,,\end{cases}
&&&\text{for } K\,. \end{align*} We consider as sets of positive roots \begin{align*}\Phi_{G}^{+}&=\left\{\widehat{x}_{i}+\widehat{x}_{j}\,,\, \widehat{x}_{i}-\widehat{x}_{j}\,,\;1\leq i<j\leq p+q\,,\; \widehat{x}_i\,,\, 1\leq i\leq p+q\right\}\,,\\ \intertext{and} \Phi_{K}^{+}&= \left\{\begin{cases}\widehat{x}_{i}+\widehat{x}_{j}\,,\\ \widehat{x}_{i}-\widehat{x}_{j}\,,\end{cases}\;\begin{cases}1\leq i<j\leq p\,,\\p+1\leq i<j\leq p+q\,,\end{cases}\; \widehat{x}_i\,,\, p+1\leq i\leq p+q
\right\}\,. \end{align*} so the set of positive non-compact roots is $$\Phi^{+}_{\mathfrak{p}}=\left\{\begin{cases}\widehat{x}_{i}+\widehat{x}_{j}\,,\\ \widehat{x}_{i}-\widehat{x}_{j}\,,\end{cases}\; 1\leq i\leq p\,,\, p+1\leq j \leq p+q\,,\;\widehat{x}_i\,,\, 1\leq i\leq p\right\}\,.$$ Note that the sets \begin{align*} \Delta_{G}&=\{\widehat{x}_i-\widehat{x}_{i+1}\,,\;1\leq i\leq p+q-1\,, \widehat{x}_{p+q}\}\\ \intertext{and} \Delta_{K}&=\left\{\begin{cases}\widehat{x}_i-\widehat{x}_{i+1}\,,\;1\leq i\leq p-1\,,\; \widehat{x}_{p-1}+\widehat{x}_{p}\,,\\ \widehat{x}_i-\widehat{x}_{i+1}\,,\;p+1\leq i\leq p+q-1\,,\; \widehat{x}_{p+q}\,,\end{cases} \right\}\,, \end{align*} are bases of $G$-roots and $K$-roots respectively.
\noindent Any $\mu=(\mu_1,\ldots,\mu_{p+q})\in i\, \mathfrak{t}^{*}$ \begin{itemize}
\item is a dominant $G$-weight if and only if
$$\mu_1\geq \mu_2\geq \cdots \geq \mu_{p+q}\geq 0\,,$$
and the $\mu_i$ are all simultaneously integers or half-integers,
\item is a dominant $K$-weight if and only if
$$\begin{cases}\mu_1\geq \mu_2\geq \cdots \geq \mu_{p-1}\geq |\mu_p|\,,\\
\mu_{p+1}\geq \mu_{p+2}\geq \cdots \geq \mu_{p+q}\geq 0\,,
\end{cases}$$
and the $\mu_i$, for $1\leq i\leq p$ or $p+1\leq i\leq p+q$, are all simultaneously integers or half-integers. \end{itemize} Hence \begin{itemize}
\item If $q=0$, the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the short root $\alpha=\widehat{x}_1$ (which is also the highest short root $\beta$ of $G$).
\item If $q>0$, the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the long root $\alpha=\widehat{x}_1+\widehat{x}_{p+1}$. \end{itemize} \item $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times\mathrm{Sp}(q)$. The space $\mathbb{H}^{p+q}$ is viewed as a right vector space over $\mathbb{H}$ in such a way that $G$ may be identified with the group $$ \left\{ A\in \mathrm{M}_{p+q}(\mathbb{H})\,;\,{}^{t}\overline{A}A=I_{p+q}\right\}\,,$$ acting on the left on $\mathbb{H}^{p+q}$ in the usual way. The group $K$ is identified with the subgroup of $G$ defined by $$ \left\{ A\in \mathrm{M}_{p+q}(\mathbb{H})\,;\,A=\begin{pmatrix} B&0\\0&C \end{pmatrix}\,,\,{}^{t}\overline{B}B=I_{p}\,,\,\,{}^{t}\overline{C}C=I_{q} \right\}\,.$$ Let $T$ be the common torus of $G$ and $K$ \begin{equation}\label{maxT}T:=\left\{\begin{pmatrix} \mathrm{e}^{\mathbf{i}\beta_{1}}&&\\ &\ddots&\\ && \mathrm{e}^{\mathbf{i}\beta_{p+q}}\end{pmatrix}\;,\; \beta_{1} ,\ldots, \beta_{p+q}\, \in \mathbb{R} \right\}\; , \end{equation}where $$\forall \beta \in \mathbb{R}\,,\quad \mathrm{e}^{\mathbf{i}\beta}:=\cos(\beta)+\sin(\beta)\,\mathbf{i}\,,$$ $(1,\mathbf{i},\mathbf{j},\mathbf{k})$ being the standard basis of $\mathbb{H}$.
\noindent The Lie algebra of $T$ is $$\mathfrak{t}=\left\{ \begin{pmatrix} \mathbf{i}\beta_{1}&&\\ &\ddots&\\ && \mathbf{i}\beta_{p+q}\end{pmatrix}\; ;\; \beta_{1},\ldots, \beta_{p+q}\, \in \mathbb{R} \right\}\; .$$ We denote by $(x_1, \ldots , x_{p+q})$ the basis of ${\mathfrak{t}}^{*}$ given by $$ x_k\cdot\begin{pmatrix} \mathbf{i}\beta_{1}&&\\ &\ddots&\\ && \mathbf{i}\beta_{p+q}\end{pmatrix}= \beta_{k}\, .$$ A vector $\mu\in i\, \mathfrak{t}^{*}$ such that $\mu=\sum_{k=1}^{p+q} \mu_{k}\, \widehat{x}_{k}$, in the basis\\ $(\widehat {x}_k\equiv i\, x_k)_{k=1,\ldots ,p+q}$, is denoted by $$ \mu= (\mu_{1},\mu_{2},\ldots ,\mu_{p+q})\, .$$ The restriction to $\mathfrak{t}$ of the Killing form $\mathrm{B}$ of $G$ is given by $$\forall X\in\mathfrak{t}\,,\,\forall Y \in\mathfrak{t}\,,\quad \mathrm{B}(X,Y)= 4\, (p+q+1)\,\Re\, \big(\,\mathrm{tr} (X\,Y)\,\big)\,.$$ It is easy to verify that the scalar product on $i \, \mathfrak{t}^{*}$ induced by the sign-changed Killing form is given by \begin{equation}\label{scp2} \begin{split} \forall\mu=(\mu_1,\ldots ,\mu_{p+q})\in i \, \mathfrak{t}^{*}\,&,\,\forall \mu'=(\mu'_1,\ldots , \mu'_{p+q})\, \in i\, \mathfrak{t}^*\,,\\ <\mu ,\mu'>&=\frac{1}{4(p+q+1)}\,\sum_{k=1}^{p+q} \mu_k\, \mu'_k \,.\end{split}\end{equation} Now, considering the decomposition of the complexified Lie algebra of $G$ under the action of $T$, it is easy to verify that $T$ is a common maximal torus of $G$ and $K$, and that the respective roots are given by \begin{align*} &\begin{cases}\pm(\widehat{x}_{i}+\widehat{x}_{j})\,,\\ \pm(\widehat{x}_{i}-\widehat{x}_{j})\,,\\ 1\leq i<j\leq p+q\,,\end{cases}\; && \pm 2\,\widehat{x}_{i}\,,\; 1\leq i\leq p+q &&\text{for}\; G\,,\\ &\begin{cases}\pm(\widehat{x}_{i}+\widehat{x}_{j})\,,\\ \pm(\widehat{x}_{i}-\widehat{x}_{j})\,,\\ 1\leq i<j\leq p\,,\\ p+1\leq i<j\leq p+q\,, \end{cases}\;&& \pm 2\,\widehat{x}_{i}\,,\; 1\leq i\leq p+q &&\text{for}\; K\,, \end{align*} We consider as sets of positive
roots \begin{align*}
\Phi^{+}_G&=\left\{\widehat{x}_{i}+\widehat{x}_{j}\,,\widehat{x}_{i}-\widehat{x}_{j}\,,\, 1\leq i<j\leq p+q\,, 2\,\widehat{x}_{i}\,,\; 1\leq i\leq p+q\right\}\\ \intertext{and} \Phi^{+}_K&=\left\{\begin{cases}\widehat{x}_{i}+\widehat{x}_{j}\\ \widehat{x}_{i}-\widehat{x}_{j}\,,\end{cases}\, \begin{cases} 1\leq i<j\leq p\\ p+1\leq i<j\leq p+q\,,\end{cases}\,2\,\widehat{x}_{i}\,,\; 1\leq i\leq p+q \right\}\,, \end{align*} so the set of positive non-compact roots is $$\Phi^{+}_{\mathfrak{p}}=\{\widehat{x}_{i}+\widehat{x}_{j}\,,\widehat{x}_{i}-\widehat{x}_{j}\,, 1\leq i\leq p\,, p+1\leq j\leq p+q \}\,.$$ Note that the sets \begin{align*}
\Delta_G&=\left\{ \widehat{x}_i-\widehat{x}_{i+1}\,, 1\leq i\leq p+q-1\,,\; 2\,\widehat{x}_{p+q}\right\}\,,\\
\intertext{and}
\Delta_K&=\left\{ \begin{cases}\widehat{x}_i-\widehat{x}_{i+1}\,, 1\leq i\leq p-1\,, 2\,\widehat{x}_{p}\,,\\
\widehat{x}_i-\widehat{x}_{i+1}\,,
p+1\leq i\leq p+q-1\,,2\,\widehat{x}_{p+q}\end{cases} \right\}\,, \end{align*} are bases of $G$-roots and $K$-roots respectively.
\noindent Any $\mu=(\mu_1,\ldots,\mu_{p+q})\in i\, \mathfrak{t}^{*}$ \begin{itemize}
\item is a dominant $G$-weight if and only if
$$\mu_1\geq \mu_2\geq \cdots \geq \mu_{p+q}\geq 0\,,$$
and the $\mu_i$ are all integers,
\item is a dominant $K$-weight if and only if
$$\mu_1\geq \mu_2\geq \cdots \geq \mu_{p}\geq 0 \;,\;\mu_{p+1}\geq \mu_{p+2}\geq \cdots \geq \mu_{p+q}\geq 0$$
and the $\mu_i$ are all integers. \end{itemize} Hence the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the short root $\alpha=\widehat{x}_1+\widehat{x}_{p+1}$. \item $\mathrm{Sp}(n)/\mathrm{U}(n)$. With the same notations as above, we consider the subgroup $K$ of $G=\mathrm{Sp}(n)$: $$K=\left\{A=(a_{ij})\in \mathrm{Sp}(n)\,;\, a_{ij}\in \mathbb{R}+\mathbf{i}\, \mathbb{R}\right\}
\simeq \mathrm{U}(n)\,.$$ Note that $K$ is the set of fixed points of the inner involution:
$$\sigma:\mathrm{Sp}(n)\rightarrow \mathrm{Sp}(n)\,,\; A\mapsto \mathbf{I} A \mathbf{I}^{-1}\,,$$
where
$$ \mathbf{I}=\mathbf{i}\, I_n\,.$$
The subspace $\mathfrak{p}=\{A\in\mathfrak{sp}(n)\,;\, \sigma_{*}(A)=-A\}$ is then the set
$$\mathfrak{p}=\left\{A=(a_{ij})\in \mathfrak{sp}(n)\,;\, a_{ij}=\mathbf{j}\, \mathbb{R}+\mathbf{k}\, \mathbb{R}\right\}\,.$$
The torus $T$ introduced above (\ref{maxT}) is a common torus of $G$ and $K$. Considering the decomposition of the complexified Lie algebra of $G$ under the action of $T$, it is easy to verify that $T$ is a common maximal torus of $G$ and $K$, and that the respective roots are given by \begin{align*} &\left\{\pm(\widehat{x}_i-\widehat{x}_j)\,,\; 1\leq i<j\leq n\,,\;\pm(\widehat{x}_i+\widehat{x}_j)\,,\; 1\leq i\leq j\leq n\right\} &&\text{for}\; G\,,\\ &\left\{\pm(\widehat{x}_i-\widehat{x}_j)\,,\; 1\leq i< j\leq n\right\} &&\text{for}\; K\,, \end{align*} We consider as sets of positive roots \begin{align*}
\Phi^{+}_G&=\left\{\widehat{x}_i-\widehat{x}_j\,,\; 1\leq i<j\leq n\,,\;\widehat{x}_i+\widehat{x}_j\,,\; 1\leq i\leq j\leq n\right\}\\ \intertext{and} \Phi^{+}_K&=\left\{\widehat{x}_i-\widehat{x}_j\,,\; 1\leq i< j\leq n\right\}\,, \end{align*} so the set of positive non-compact roots is $$\Phi^{+}_{\mathfrak{p}}=\{\widehat{x}_{i}+\widehat{x}_{j}\,,\, 1\leq i\leq j\leq n\}\,.$$ Note that the sets \begin{align*}
\Delta_G&=\left\{ \widehat{x}_i-\widehat{x}_{i+1}\,, 1\leq i\leq n-1\,,\; 2\,\widehat{x}_{n}\right\}\,,\\
\intertext{and}
\Delta_K&=\left\{\widehat{x}_i-\widehat{x}_{i+1}\,, 1\leq i\leq n-1\right\}\,, \end{align*} are bases of $G$-roots and $K$-roots respectively (there are only $n-1$ simple $K$-roots as $K$ is not semi-simple).
\noindent Any $\mu=(\mu_1,\ldots,\mu_{n})\in i\, \mathfrak{t}^{*}$ is
\begin{itemize}
\item a dominant $G$-weight if and only if
$$\mu_1\geq \mu_2\geq \cdots \geq \mu_{n}\geq 0\,,$$
and the $\mu_i$ are all integers,
\item a dominant $K$-weight if and only if
$$ \mu_{i}-\mu_{i+1}\in \mathbb{N}\,,\; 1\leq i\leq n-1\,.$$
\end{itemize} Hence there are two dominant $K$-weights in the representation $\mathfrak{p}_{\mathbb{C}}$: $2\,\widehat{x}_1$ and $\widehat{x}_1+\widehat{x}_2$; since $\widehat{x}_1+\widehat{x}_2\prec 2\,\widehat{x}_1$, the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the long root $\alpha=2\,\widehat{x}_1$. \item $\mathrm{G}_2/\mathrm{SO}(4)$. We use here the results given in \cite{CG} (see page~226), which follow from a general result of Borel-de Siebenthal, \cite{BdS}. A set of $G$-roots $\Phi_G$ is given by the elements $x\in \mathbb{R}^3$ whose coordinates are integers satisfying
$$\sum_{i=1}^3 x_i=0 \quad \text{and} \quad \|x\|^2 = 2\; \text{or}\; 6\,,$$ hence $$\Phi_G=\{\pm(e_1-e_2)\,,\pm(e_2-e_3)\,,\pm(e_3-e_1)\,,\pm(2\,e_1-e_2-e_3)\,,\pm(2\,e_2-e_1-e_3),\pm(2\,e_3-e_1-e_2)\}\,.$$ The following system of positive $G$-roots is chosen: $$\Phi_G^{+}=\{e_1-e_2\,, e_3-e_2\,,e_3-e_1\,,-2\, e_1+e_2+e_3\,,-2\, e_2+e_3+e_1\,,2\,e_3-e_1-e_2\}\,.$$ It can be checked that a basis of $G$-roots is given by $$\Delta_G=\{e_1-e_2, -2\, e_1+e_2+e_3\}\,.$$ A system of positive $K$-roots (which is also a basis of $K$-roots) is then given by $$\Phi_K^{+}=\{e_1-e_2\,, -e_1-e_2+2\, e_3\}\,.$$ The set of positive non-compact roots is $$\Phi_{\mathfrak{p}}^{+}=\{e_3-e_2\,,e_3-e_1\,,-2\,e_1+e_2+e_3\,,-2\, e_2+e_3+e_1\}\,.$$ There are two dominant weights in $\mathfrak{p}_{\mathbb{C}}$: $-2\, e_2+e_3+e_1$ and $e_3-e_2$; since $e_3-e_2\prec -2\, e_2+e_3+e_1$, the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the long root $\alpha=-2\, e_2+e_3+e_1$.
\item $\mathrm{F}_4/\mathrm{Sp}(3)\cdot \mathrm{Sp}(1)$. We use here also results given in \cite{CG} (see page~227). A set $\Phi_G$ of $G$-roots is given by the elements $x\in \mathbb{R}^4$ whose coordinates are integers or half-integers satisfying $\|x\|^2=1$ or $2$ (cf. \cite{Hum}), hence $$\Phi_G=\left\{\pm e_i\,,1\leq i\leq 4\,,\, \pm e_i\pm e_j\,,\,1\leq i<j\leq 4\,,\, \frac{1}{2}(\pm e_1\pm e_2 \pm e_3 \pm e_4)\right\}\,. $$ We consider the system of positive roots $$\Phi^{+}_G=\left\{e_i\,,\, 1\leq i\leq 4\,,\, e_i\pm e_j\,,\,1\leq i<j\leq 4\,,\, \frac{1}{2}(e_1\pm e_2 \pm e_3 \pm e_4)\right\}\,.$$ It can be checked that a basis of $G$-roots is given by $$\Delta_G=\{\alpha_1:=e_2-e_3\,,\, \alpha_2:=e_3-e_4\,,\, \alpha_3:=e_4\,,\, \alpha_4:=\frac{1}{2}(e_1-e_2-e_3-e_4)\}\,.$$ A system of positive $K$-roots is then given by \begin{align*} \Phi_K^{+}=\Big\{ &e_3\,,e_4\,,e_1+e_2\,,e_1-e_2\,,e_3+e_4\,,e_3-e_4\,, \frac{1}{2}\,(e_1-e_2-e_3-e_4)\,,\\ &\frac{1}{2}\,(e_1-e_2-e_3+e_4)\,,\frac{1}{2}\,(e_1-e_2+e_3-e_4)\,,\frac{1}{2}\,(e_1-e_2+e_3+e_4)\Big\}\,. \end{align*} It can be checked that a basis of $K$-roots is given by $$\Delta_K=\left\{e_1+e_2\,,e_4\,,e_3-e_4\,, \frac{1}{2}\,(e_1-e_2-e_3-e_4)\right\}\,.$$ The set of positive non-compact roots is \begin{align*}
\Phi_{\mathfrak{p}}^{+}=\Big\{ &e_1\,,e_2\,,e_1+e_3\,,e_1+e_4\,,e_2+e_3,e_2+e_4\,, e_1-e_3\,,e_1-e_4\,,\\
& e_2-e_3\,,e_2-e_4\,,\frac{1}{2}\,(e_1+e_2+e_3+e_4)\,,\frac{1}{2}\,(e_1+e_2+e_3-e_4)\,,\\
& \frac{1}{2}\,(e_1+e_2-e_3+e_4)\,,\frac{1}{2}\,(e_1+e_2-e_3-e_4)\Big\}\,. \end{align*} There are two dominant weights in $\mathfrak{p}_{\mathbb{C}}$: $e_1$ and $e_1+e_3$; since $e_1\prec e_1+e_3$, the highest weight of $\mathfrak{p}_{\mathbb{C}}$ is the long root $\alpha=e_1+e_3$.
\item $\mathrm{F}_4/\mathrm{Spin}(9)$. In that case, a system of positive $K$-roots is $$\Phi^{+}_K=\{e_i\,,\, 1\leq i\leq 4\,,\, e_i\pm e_j\,,\,1\leq i<j\leq 4\}\,.$$ A basis of simple $K$-roots is given by $$\Delta_K=\left\{e_1-e_2, e_2-e_3, e_3-e_4, e_4\right\}\,.$$ The set of positive non-compact roots is $$\Phi^{+}_{\mathfrak{p}}=\left\{\frac{1}{2}(e_1\pm e_2 \pm e_3 \pm e_4)\right\}\,,$$ hence the highest weight of the representation $\mathfrak{p}_{\mathbb{C}}$ is the only dominant weight: the short root $\alpha= \frac{1}{2}(e_1+ e_2 + e_3 +e_4)$.
\end{enumerate}
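The long/short identifications obtained in the enumeration above can be sanity-checked by comparing squared lengths in Euclidean coordinates (the Killing metric is a scalar multiple of the Euclidean one on each space of weights, so length comparisons are unaffected); the following sketch is ours:

```python
# Squared-length checks for the highest weights alpha found above,
# in Euclidean root coordinates.
from fractions import Fraction

def sq(v):
    """Euclidean squared length of a coordinate vector."""
    return sum(x*x for x in v)

# B_{p+q} (SO(2p+2q+1)-case): roots x_i +- x_j (long) and x_i (short)
assert sq([1, 0]) < sq([1, 1])                 # q = 0: alpha = x_1 is short
assert sq([1, 0, 1, 0]) == sq([1, 1, 0, 0])    # q > 0: alpha = x_1 + x_{p+1} is long

# C_{p+q} (Sp-cases): roots x_i +- x_j (short) and 2 x_i (long)
assert sq([1, 1]) < sq([2, 0])                 # alpha = x_1 + x_{p+1} is short
                                               # Sp(n)/U(n): alpha = 2 x_1 is long

# G_2: alpha = -2 e_2 + e_3 + e_1; long G_2-roots have squared length 6
assert sq([1, -2, 1]) == 6

# F_4: long roots have squared length 2, short roots 1
assert sq([1, 0, 1, 0]) == 2                   # F4/Sp(3).Sp(1): alpha = e_1 + e_3, long
assert sq([Fraction(1, 2)]*4) == 1             # F4/Spin(9): alpha short
print("root-length checks passed")
```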
\subsection{Proof of Lemma \ref{lem.mult}}
\begin{proof} The proof is very simple if there is only one root length, or if $\alpha$ is a long root. In that case, $\beta$ is necessarily the maximal root, hence the highest weight of the adjoint representation of the simple group $G$ in its complexified Lie algebra $\mathfrak{g}_{\mathbb{C}}$. But the decomposition of $\mathfrak{g}_{\mathbb{C}}=\mathfrak{k}_{\mathbb{C}}\oplus \mathfrak{p}_{\mathbb{C}}$ into $K$-invariant subspaces implies at once that $\rho$ is contained in the restriction of $\rho_{\beta}$ to $K$.
\noindent The proof is a little more involved when two root lengths occur and $\alpha$ is a short root. As we saw just before, there are only three cases to be considered here: $\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$, $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times \mathrm{Sp}(q)$, and $\mathrm{F}_4/\mathrm{Spin}(9)$.
\noindent Let $v_{\beta}$ be the maximal vector (unique up to a scalar multiple) of the representation $V_{\beta}$ and let $g\in G$ be some representative of $w^{-1}\in W_G$. First $\rho_{\beta}(g)\cdot v_{\beta}$ is a weight-vector for the weight $\alpha$ since for all $ X \in \mathfrak{t}$, \begin{align*}
\rho_{\beta *}(X)\cdot(\rho_{\beta}(g)\cdot v_{\beta})&= {\frac{d}{dt}}_{|t=0}\left(\rho_{\beta}(\exp(t\,X)g)\cdot v_{\beta}\right)\\
&={\frac{d}{dt}}_{|t=0}\left(\rho_{\beta}(gg^{-1}\exp(t\,X)g)\cdot v_{\beta}\right)\\
&=\rho_{\beta}(g)\cdot \rho_{\beta *}(\mathrm{Ad}(g^{-1})\cdot X)\cdot v_{\beta}\\
&= \rho_{\beta}(g)\cdot \beta( w\cdot X)\,v_{\beta}\\
&= w^{-1}(\beta)(X)\, \rho_{\beta}(g)\cdot v_{\beta}\\
&= \alpha(X) \, \rho_{\beta}(g)\cdot v_{\beta}\,. \end{align*} Now, we claim that there exists $w\in W_G$ such that $\beta=w\cdot \alpha$, and the weight-vector $\rho_{\beta}(g)\cdot v_{\beta}$, where $g$ is a representative of $w^{-1}$, is a maximal vector for the action of $K$. We may consider only the action of a basis of simple $K$-roots $\{\theta_1,\ldots,\theta_r\}$. First note that if $\theta$ is a positive $K$-root, and $E_{\theta}$ a root-vector associated to it, then $\mathrm{Ad}(g^{-1})\cdot E_{\theta}$ is a root-vector for the root $w\cdot \theta$, since
for all $X\in \mathfrak{t}$, \begin{align*}
[X,\mathrm{Ad}(g^{-1})\cdot E_{\theta}]&=\mathrm{Ad}(g^{-1})\cdot[\mathrm{Ad}(g)\cdot X,E_{\theta}]\\
&= \mathrm{Ad}(g^{-1})\cdot[w^{-1}\cdot X, E_{\theta}]\\
&= \theta(w^{-1}\cdot X)\, \mathrm{Ad}(g^{-1})\cdot E_{\theta}\\
&= (w\cdot\theta)(X)\, \mathrm{Ad}(g^{-1})\cdot E_{\theta}\,.
\end{align*} Now, if $w\cdot \theta$ is a positive root, then, since $v_{\beta}$ is a maximal vector killed by any root-vector associated to a positive root, \begin{align*}
\rho_{\beta *}(E_{\theta})\cdot (\rho_{\beta}(g)\cdot v_{\beta})&={\frac{d}{dt}}_{|t=0}\left(\rho_{\beta}(gg^{-1}\exp(t\,E_{\theta})g)\cdot v_{\beta}\right)\\
&=\rho_{\beta}(g)\cdot\rho_{\beta *}(\mathrm{Ad}(g^{-1})\cdot E_{\theta})\cdot v_{\beta}\\
&=\rho_{\beta}(g)\cdot 0=0\,. \end{align*} So $\rho_{\beta}(g)\cdot v_{\beta}$ is killed by any root-vector associated to a positive $K$-root $\theta$ such that $w\cdot \theta$ is a positive $G$-root. Hence we may conclude by proving that, for each symmetric space under consideration, there exists $w\in W_G$ such that $w\cdot\alpha=\beta$ and $w\cdot \theta_i$ is a positive $G$-root for any simple $K$-root $\theta_i$. \begin{enumerate}
\item $\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$. We saw above that the highest weight of the representation $\mathfrak{p}_{\mathbb{C}}$ is the short root $\alpha=\widehat{x}_1$, which is also the highest short root $\beta$ of $G$. We may choose $w=\mathrm{id}$, and as all the simple $K$-roots in $\Delta_K$ are positive $G$-roots, the claim is verified in that case. \item $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times \mathrm{Sp}(q)$. We saw above that the highest weight of the representation $\mathfrak{p}_{\mathbb{C}}$ is the short root $\alpha=\widehat{x}_1+\widehat{x}_{p+1}$. Now the highest short $G$-root is $\beta=\widehat{x}_1+\widehat{x}_{2}$. Let $w$ be the element of the Weyl group $W_G$ given by the $p$-cycle permutation $(2\,3\cdots p+1)$. One has $w\cdot \alpha=\beta$, and it is easily verified that $w\cdot \theta_i$ is a positive $G$-root for any simple $K$-root $\theta_i$, hence the claim is also proved in that case. \item $\mathrm{F}_4/\mathrm{Spin}(9)$. We saw above that the highest weight of the representation $\mathfrak{p}_{\mathbb{C}}$ is the short root $\alpha= \frac{1}{2}(e_1+ e_2 + e_3 +e_4)$. Now, the highest short $G$-root is $\beta= e_1$, and the reflection $\sigma_{\alpha_4}$ across the hyperplane $\alpha_4^{\bot}$ satisfies $\sigma_{\alpha_4}\cdot \alpha=\beta$. It is easily verified that $\sigma_{\alpha_4}\cdot \theta_i$ is a positive $G$-root for any simple $K$-root $\theta_i$ since \begin{align*}
\sigma_{\alpha_4}: e_1-e_2&\mapsto e_3+e_4\,,\\
e_2-e_3 &\mapsto e_2-e_3\,,\\
e_3-e_4 &\mapsto e_3 -e_4\,,\\
e_4 &\mapsto \frac{1}{2}\,(e_1-e_2-e_3+e_4)\,. \end{align*} Hence the claim is also proved in that case. \end{enumerate} \end{proof}
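The listed images determine $\sigma_{\alpha_4}$ completely, so the claim can be checked by direct computation. The following sketch does this in exact arithmetic; the coordinate expression $\alpha_4=\frac{1}{2}(e_1-e_2-e_3-e_4)$ used for the simple root defining the reflection is an assumption (one common normalization for $\mathrm{F}_4$, consistent with the images above).

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def reflect(v, r):
    # orthogonal reflection across the hyperplane r^perp:
    # sigma_r(v) = v - 2 <v,r>/<r,r> r
    c = 2 * dot(v, r) / dot(r, r)
    return tuple(a - c * b for a, b in zip(v, r))

h = F(1, 2)
alpha4 = (h, -h, -h, -h)             # assumed coordinates of the simple root alpha_4
alpha = (h, h, h, h)                 # highest weight of p_C (short root)
beta = (F(1), F(0), F(0), F(0))      # highest short root e_1

assert reflect(alpha, alpha4) == beta          # sigma_{alpha_4} . alpha = beta
# images of the simple K-roots of Spin(9), as listed in the text:
images = {
    (F(1), F(-1), F(0), F(0)): (F(0), F(0), F(1), F(1)),    # e1-e2 -> e3+e4
    (F(0), F(1), F(-1), F(0)): (F(0), F(1), F(-1), F(0)),   # e2-e3 fixed
    (F(0), F(0), F(1), F(-1)): (F(0), F(0), F(1), F(-1)),   # e3-e4 fixed
    (F(0), F(0), F(0), F(1)): (h, -h, -h, h),               # e4 -> (e1-e2-e3+e4)/2
}
for theta, image in images.items():
    assert reflect(theta, alpha4) == image
    assert image > (F(0),) * 4       # each image is a positive G-root (lexicographically)
```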
\subsection{First eigenvalue of the Laplace operator acting on $1$-forms} \noindent In order to conclude, we have to verify that the Casimir eigenvalue $c_{\beta}$ is the lowest eigenvalue of the Laplacian. \begin{lem} Let $(\rho_{\gamma}, V_{\gamma})$ be an irreducible $G$-representation such that the multiplicity $\mathrm{mult}(\rho,\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\not =0$. Then $c_{\beta}\leq c_{\gamma}$. \end{lem} \begin{proof}
Recall that the highest weight of $\rho$ is $\alpha$, hence if $\mathrm{mult}(\rho,\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\not =0$, then $\alpha$ and $\beta=w\cdot \alpha$ are actually weights of the representation $\rho_\gamma$. But then, as $\gamma$ is the highest weight,
$$\langle \beta+\delta_G, \beta+\delta_G\rangle \leq \langle \gamma+\delta_G,
\gamma+\delta_G\rangle\,,$$
cf. Lemma~C, p.71 in \cite{Hum}, hence
$$c_{\beta}=\|\beta+\delta_G\|^2-\|\delta_G\|^2\leq \|\gamma+\delta_G\|^2-\|\delta_G\|^2=c_{\gamma}\,.$$ \end{proof}
\begin{remark} The fact that $c_{\beta}=1$ when $\beta$ is the highest long root may seem rather puzzling. It is in fact a consequence of Freudenthal's formula (cf. 48.2 in \cite{FdV} or p.~123 in \cite{Hum}): for any $G$-weight $\mu$ of the representation $(\rho_{\beta},V_{\beta})=(\mathrm{Ad}, \mathfrak{g}_{\mathbb{C}})$, the multiplicity $\mathrm{mult}(\mu)$ of $\mu$ is given recursively by the formula
$$(\langle \beta+\delta_G,\beta+\delta_G\rangle -\langle \mu+\delta_G,\mu+\delta_G\rangle)\, \mathrm{mult}(\mu)=2\, \sum_{\theta\succ 0}\,\sum_{i=1}^{\infty}
\mathrm{mult}(\mu+i\theta)\, \langle \mu +i\theta,\theta\rangle\,.$$
Applying this formula to the weight $\mu=0$, and using the facts that
\begin{itemize}
\item $\mathrm{mult}(0)=\dim(\mathfrak{t})$, as $T$ is a maximal common torus, and
\item for any integer $i\geq 1$, $\mathrm{mult}(i\theta)\not =0\Leftrightarrow i=1$, with
$\mathrm{mult}(\theta)=1$, by properties of roots, since the only multiples of a root $\theta$ which are themselves roots are $\pm \theta$,
\end{itemize}
one obtains
$$(\|\beta+\delta_G\|^2-\|\delta_G\|^2)\, \dim(\mathfrak{t})=2\, \sum_{\theta\succ 0} \|\theta\|^2\,.$$ But Gordon Brown's formula (\cite{Bro64} or 21.5 in \cite{FdV}) states that
$$2\, \sum_{\theta\succ 0} \|\theta\|^2=\dim(\mathfrak{t})\,,$$ hence $$c_{\beta}=1\,.$$ \end{remark}
\section{The first non-zero eigenvalue of the Laplace operator on functions}
\noindent As noticed in the introduction, any (non-zero) eigenvalue $\lambda$ of the Laplace operator on functions is greater than or equal to the first eigenvalue of the Laplace operator on $1$-forms. We now examine in which cases this inequality is an equality.
\begin{prop} Let $\lambda$ be the first non-zero eigenvalue of the Laplace operator on functions, and let $\mu$ be the first eigenvalue of the Laplace operator on $1$-forms.
\noindent If $G/K$ is a Hermitian symmetric space: \begin{equation*}\begin{split} \mathrm{SU}(p+q)/\mathrm{S}(\mathrm{U}(p)\times \mathrm{U}(q))\quad,\quad \mathrm{SO}(n+2)/\mathrm{SO}(n)\times \mathrm{SO}(2)\quad,\quad \mathrm{Sp}(n)/\mathrm{U}(n)\,,\\ \mathrm{SO}(2n)/\mathrm{U}(n)\quad ,\quad \mathrm{E}_6/\mathrm{SO}(10)\cdot \mathrm{SO}(2)\quad ,\quad \mathrm{E}_7/\mathrm{E}_6\cdot \mathrm{SO}(2)\,, \end{split} \end{equation*} or if $G/K$ is one of the symmetric spaces for which the highest weight of the isotropy representation is a short root: $$\mathrm{SO}(2p+1)/\mathrm{SO}(2p)\quad ,\quad \mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times\mathrm{Sp}(q)\quad \text{or}\quad \mathrm{F}_4/\mathrm{Spin}(9)\,,$$ then $\lambda=\mu$.
\noindent In all the other cases, $\lambda>\mu$.
\end{prop}
\begin{proof} \noindent By (\ref{spect0}), the spectrum of the Laplace operator on functions is obtained by considering the irreducible representations $\gamma\in \widehat{G}$ such that $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\neq 0$, or equivalently, such that there exists a non-zero vector on which the group $K$ acts trivially. Such representations are called spherical representations \cite{Sug62}, \cite{Tak94}.
\noindent Hence, to prove the equality $\lambda=\mu$, we have to verify that\\ $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\beta}))\neq 0$, where $\beta$ is the highest long or short root. Indeed, if that is verified, then $c_\beta\geq \lambda$, as $c_{\beta}$ belongs to the spectrum. But, as remarked in the introduction, $\lambda\geq c_{\beta}$, since $\lambda$ has to be greater than or equal to the first eigenvalue of the Laplace operator on $1$-forms.
\noindent Let us first examine the case when $\beta$ is the highest long root. In that case, $\rho_{\beta}$ is the adjoint representation of the simple group $G$ on its complexified Lie algebra $\mathfrak{g}_{\mathbb{C}}$, hence a non-zero vector is acted on trivially by the subgroup $K$ if and only if it belongs to the center of $\mathfrak{k}_{\mathbb{C}}$. But this is possible if and only if $G/K$ is Hermitian, see for instance \cite{Wol}.
\noindent Let us now consider the three symmetric spaces where two root lengths occur and $\beta$ is the highest short root: $\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$, $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times \mathrm{Sp}(q)$, and $\mathrm{F}_4/\mathrm{Spin}(9)$. Going back to the case-by-case study of these three symmetric spaces above, one gets: \begin{enumerate}
\item $\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$. The highest short root is $\beta=\widehat{x}_1$, which is the highest weight of the fundamental standard representation of the group $\mathrm{Spin}(2p+1)$ (or $\mathrm{SO}(2p+1)$) in the space $\mathbb{C}^{2p+1}$, see for instance chapter~12 in
\cite{BH3M}. Now, the group $K$ acts trivially on the last vector $e_{2p+1}$ of the canonical basis $(e_1,\ldots,e_{2p+1})$ of $\mathbb{C}^{2p+1}$
since the inclusion $K\subset G$ is induced by the natural inclusion
$$A\in \mathrm{SO}(2p)\longmapsto \begin{pmatrix} A & 0\\ 0 & 1\end{pmatrix}\in \mathrm{SO}(2p+1)\,.$$
Hence $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\beta}))\neq 0$. \item $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times\mathrm{Sp}(q)$. The highest short root is $\beta=\widehat{x}_1+\widehat{x}_{2}$, which is the highest weight of the fundamental representation of the group $\mathrm{Sp}(p+q)$ in the space $\wedge^{2}_{0}(\mathbb{C}^{2(p+q)})$. To be more explicit,
$\mathbb{H}^{p+q}$ is identified with $\mathbb{C}^{2(p+q)}$ here, such that $\mathrm{Sp}(p+q)\simeq \mathrm{SU}(2(p+q))\cap \mathrm{Sp}(2(p+q),\mathbb{C})$, and the representation of $\mathrm{Sp}(p+q)$ in the space $\wedge^2(\mathbb{C}^{2(p+q)})$ decomposes into two irreducible pieces:
\begin{equation}\label{decomp}
{\wedge}^2(\mathbb{C}^{2(p+q)})={\wedge}^2_{0}(\mathbb{C}^{2(p+q)})\oplus \mathbb{C}\cdot \omega_{p+q}\,,
\end{equation}
where the first piece is the fundamental representation with highest weight $\widehat{x}_1+\widehat{x}_{2}$, and the second one the trivial representation on the space generated by the symplectic $2$-form:
$$\omega_{p+q}:=e_1\wedge e_{-1}+e_2\wedge e_{-2}+\cdots +e_{p+q}\wedge e_{-(p+q)}\,,$$ where $(e_1,e_2,\ldots,e_{p+q},e_{-1},e_{-2},\ldots,e_{-(p+q)})$ is the canonical basis of\\ $\mathbb{C}^{2(p+q)}$, see for instance chapter~12 in
\cite{BH3M}.
\noindent Note then that the $2$-form $$ e_1\wedge e_{-1}+e_2\wedge e_{-2}+\cdots+ e_{p}\wedge e_{-p}\,,$$ is $K=\mathrm{Sp}(p)\times\mathrm{Sp}(q)$-invariant, so its (non-zero) component in $\wedge^2_{0}(\mathbb{C}^{2(p+q)})$ under the decomposition (\ref{decomp}) is also an invariant of the group $K$, hence $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\beta}))\neq 0$.
\item $\mathrm{F}_4/\mathrm{Spin}(9)$. The highest short root is $\beta= e_1$. The function ``{branch}'' in the LiE program \cite{Lie}, $$\texttt{branch([0,0,0,1],B4,res\_mat(B4,(F4)),(F4))}\,,$$ returns that the dominant weights in the decomposition of $\mathrm{Res}^{G}_{K}(\rho_{\beta})$ (expressed in the basis of fundamental weights) are $$[0,0,0,0]\,,\quad [0,0,0,1] \quad \text{and}\quad [1,0,0,0]\,,$$ hence $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\beta}))=1\neq 0$.
\noindent The result may also be verified with the help of the Satake diagram of $\mathrm{F}_4/\mathrm{Spin}(9)$ (see \cite{Sug62}). \end{enumerate} \noindent Consider now all the remaining symmetric spaces. The highest weight of their isotropy representation is long. Let $\gamma\in \widehat{G}$ be such that $c_{\gamma}=\lambda$ and\\ $\mathrm{mult}(\mathrm{triv.repr.},\mathrm{Res}^{G}_{K}(\rho_{\gamma}))\neq 0$. The irreducible representation $\gamma$ is not the adjoint representation of the group $G$ on its complexified Lie algebra $\mathfrak{g}_{\mathbb{C}}$, since otherwise $G/K$ would be Hermitian. Let $v$ be a non-zero vector on which the group $K$ acts trivially. There is at least one root-vector, associated to a simple (non-compact) root, which does not kill $v$, since otherwise $\gamma$ would not be irreducible. The corresponding positive non-compact root then belongs to the set of weights of $\gamma$. We claim that such a root is long. This is obviously true if there is only one root length. But it is also true if there are two root lengths: going back to the symmetric spaces considered in section~\ref{sh.root}, it can be checked that in all the cases under consideration, any short simple root of $G$ is always also a simple root of $K$. Now, since all roots of a given length are conjugate under the Weyl group (see for instance Lemma~C, p.~71 in \cite{Hum}), one can deduce that the highest long root $\beta$ belongs to the set of weights of $\gamma$. As $\beta$ is not the highest weight of $\gamma$, since otherwise $G/K$ would be Hermitian, one can conclude using Lemma~C, p.~71 in \cite{Hum} that $c_{\gamma}>c_{\beta}=1$, hence $\lambda>\mu$. \end{proof}
\noindent Francis Burstall has pointed out to us that the three cases $\lambda<1$, $\lambda=1$ and $\lambda>1$ correspond exactly to the three possibilities for the second homotopy group of $G/K$: $\lambda<1$ if $\pi_2(G/K)$ is trivial, $\lambda=1$ if $\pi_2(G/K)=\mathbb{Z}$, and $\lambda>1$ if $\pi_2(G/K)=\mathbb{Z}/2\mathbb{Z}$, see chapter~3 in \cite{BR90}.
\noindent To conclude, we have to compute the Casimir eigenvalue $c_{\beta}$ for the three symmetric spaces for which the highest weight of the isotropy representation is a short root. \begin{enumerate}
\item $\mathrm{SO}(2p+1)/\mathrm{SO}(2p)$. Here $\beta=\widehat{x}_1$. The half-sum of the positive $G$-roots is
$$\delta_G=\left(p-\frac{1}{2},p-\frac{3}{2},\ldots,\frac{3}{2},\frac{1}{2}\right)\,.$$
Using (\ref{scp1}), one gets
$$c_{\beta}=\langle\beta+2\,\delta_G,\beta\rangle=\frac{2\,p}{2\,(2p-1)}=\frac{p}{2p-1}\,.$$
\item $\mathrm{Sp}(p+q)/\mathrm{Sp}(p)\times\mathrm{Sp}(q)$. Here $\beta=\widehat{x}_1+\widehat{x}_2$. The half-sum of the positive roots is
$$\delta_G=\left(p+q,p+q-1,\ldots, 2,1\right)\,.$$
Using (\ref{scp2}), one gets
$$c_{\beta}=\langle\beta+2\,\delta_G,\beta\rangle=\frac{4\,(p+q)}{4\,(p+q+1)}=\frac{p+q}{p+q+1}\,.$$
\item $\mathrm{F}_4/\mathrm{Spin}(9)$. Here $\beta= e_1$. The half-sum of the positive roots is
$$\delta_G=\frac{1}{2}\,(11\, e_1+5\, e_2+3\, e_3+e_4)\,.$$
We considered above the scalar product on weights induced by the usual scalar product on $\mathbb{R}^4$. In order to compare it with the scalar product induced by the Killing form sign-changed, we use the ``{strange formula}'' of Freudenthal and de Vries (see 47-11 in \cite{FdV}).
For the scalar product $(\;,\;)$ induced by the usual scalar product on $\mathbb{R}^4$:
$$(\delta_G,\delta_G)=39\,,$$
whereas for the scalar product $\langle\;,\;\rangle$ induced by Killing form sign-changed:
$$\langle \delta_G,\delta_G\rangle=\frac{\dim(G)}{24}=\frac{13}{6}\,.$$
Hence, as the two $\mathrm{Ad}_G$-invariant scalar products have to be proportional since
$G$ is simple,
$$ \langle\;,\;\rangle= \frac{1}{18}\,(\;,\;)\,,$$
hence
$$c_{\beta}=\langle\beta+2\,\delta_G,\beta\rangle=\frac{12}{18}=\frac{2}{3}\,.$$
\end{enumerate}
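The three values above, together with the value $c_{\beta}=1$ for the highest long root, can be double-checked mechanically from the root data alone. The following sketch, in exact rational arithmetic, rebuilds the relevant root systems in the usual coordinates, rescales the scalar product so that $\sum_{\theta}\|\theta\|^{2}=\dim(\mathfrak{t})$ (the normalization induced by the sign-changed Killing form), and recomputes $c_{\beta}=\langle\beta+2\,\delta_G,\beta\rangle$. The root-system coordinates are the standard tables, assumed here rather than taken from the text.

```python
from fractions import Fraction as F
from itertools import combinations, product

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def roots_B(p):
    """Root system of so(2p+1) (type B_p): +-e_i +- e_j and +-e_i."""
    rts = []
    for i, j in combinations(range(p), 2):
        for si, sj in product((1, -1), repeat=2):
            v = [F(0)] * p
            v[i], v[j] = F(si), F(sj)
            rts.append(tuple(v))
    for i in range(p):
        for s in (1, -1):
            v = [F(0)] * p
            v[i] = F(s)
            rts.append(tuple(v))
    return rts

def roots_C(n):
    """Root system of sp(n) (type C_n): +-e_i +- e_j and +-2e_i."""
    rts = []
    for i, j in combinations(range(n), 2):
        for si, sj in product((1, -1), repeat=2):
            v = [F(0)] * n
            v[i], v[j] = F(si), F(sj)
            rts.append(tuple(v))
    for i in range(n):
        for s in (2, -2):
            v = [F(0)] * n
            v[i] = F(s)
            rts.append(tuple(v))
    return rts

def roots_F4():
    """Root system of F_4: the B_4 roots plus (+-e_1 +- e_2 +- e_3 +- e_4)/2."""
    h = F(1, 2)
    return roots_B(4) + list(product((h, -h), repeat=4))

def casimir(rts, beta):
    rank = len(beta)
    pos = [r for r in rts if r > (F(0),) * rank]     # lexicographic positivity
    two_delta = tuple(sum(r[k] for r in pos) for k in range(rank))
    # scale so that sum over all roots of |theta|^2 = dim t = rank
    scale = F(rank) / sum(dot(r, r) for r in rts)
    return scale * (dot(beta, beta) + dot(two_delta, beta))

def e(rank, *idx):
    return tuple(F(1) if k in idx else F(0) for k in range(rank))

p, n = 5, 7                                              # sample ranks
assert casimir(roots_B(p), e(p, 0)) == F(p, 2 * p - 1)   # SO(2p+1)/SO(2p): p/(2p-1)
assert casimir(roots_C(n), e(n, 0, 1)) == F(n, n + 1)    # Sp(p+q), p+q=n: n/(n+1)
assert casimir(roots_F4(), e(4, 0)) == F(2, 3)           # F_4/Spin(9): 2/3
# the highest long roots give Casimir eigenvalue 1, as in the remark above:
assert casimir(roots_B(p), e(p, 0, 1)) == 1
assert casimir(roots_F4(), e(4, 0, 1)) == 1
# the Killing normalization factor for F_4 is 1/18:
assert F(4) / sum(dot(r, r) for r in roots_F4()) == F(1, 18)
```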
\end{document} |
\begin{document}
\section{Introduction} In the 1950ies, Hodgkin and Huxley \cite{HodgkinHuxley} derived a system of nonlinear equations describing the dynamics of a single neuron in terms of the membrane potential, experimentally verified in the squid's giant axon. In particular, they found a model for the propagation of an action potential along the axon, which is essentially the basis for all subsequent conductance based models for active nerve cells. The system consists of one partial differential equation for the membrane potential $U$ \begin{equation}\label{eq:HHU} \tau \partial_t U = \lambda^2 \partial_{xx} U - g_{\text{K}} (\mathbf{X}) (U - E_{\text{K}}) - g_{\text{Na}} (\mathbf{X}) (U - E_{\text{Na}}) - g_{\text{L}} (U - E_{\text{L}}) \end{equation} and three ordinary differential equations for the gating variables $\mathbf{X} = (n, m, h)$ -- describing the probability of certain ion channels being open -- given by \begin{equation}\label{eq:HHX} \partial_t \mathbf{X} = \mathbf{a} (U) (1 - \mathbf{X}) - \mathbf{b}(U) \mathbf{X}. \end{equation} For a more detailed description we refer the reader to \cite{Ermentrout} or Section \ref{sec:HH}, where we also introduce all coefficients and constants used above. For well-posedness of these equations we refer to \cite{Mascagni}.
In this article, we study equations of this type under random fluctuations. For ``The What and Where of Adding Channel Noise to the Hodgkin-Huxley Equations'' we refer to \cite{GoldwynSheaBrown}; in the following, let us simply assume that the noise under consideration is justified. This procedure eventually leads to a stochastic partial differential equation, in particular a stochastic reaction diffusion equation for the variable $U$ coupled to a possibly vector-valued auxiliary variable $\mathbf{X}$. The main mathematical challenge with such equations is that their coefficients satisfy neither a global Lipschitz condition nor the standard monotonicity and coercivity conditions. Thus the results of this article are twofold, concerning both well-posedness and numerical approximation.
For the question of existence and uniqueness of solutions the standard methods from \cite{dpzSPDE} or \cite{prevot}, \cite{LiuLocallyMonotone}, \cite{LiuGeneralCoercive} do not apply. However, in the uncoupled system -- with fixed $\mathbf{X}$ and $U$ variable, respectively -- monotonicity is restored for each equation individually. This allows us to extend the existing results on variational solutions to cover such stochastic nerve axon equations via a fixed point iteration. Here, the key ingredient is a certain $L^\infty$-bound for the membrane potential $U$.
Concerning the numerical approximation we consider the well-known finite difference method for the spatial variable only. This method has been studied intensively, e.\,g. by Gy\"ongy \cite{Gyongy99}, Shardlow \cite{Shardlow}, Pettersson \& Signahl \cite{Pettersson05} and more recently by the authors \cite{SauerLattice}, applied to the simpler FitzHugh-Nagumo equations. Although the method is heavily used in the applied sciences, to the best of our knowledge none of the existing literature provides a convergence result with explicit rates for such non-globally Lipschitz and non-monotone coefficients. For similar results using different approximation schemes we refer among others to \cite{GyongyMillet05}, \cite{GyongyMillet09} for some abstract approximation schemes and strong convergence rates, \cite{Hausenblas}, \cite{LordRougement}, \cite{GinzburgLandau}, \cite{Jentzen} for spectral Galerkin methods also for nonlinearities without a global Lipschitz condition, \cite{BloemkerBurgers} for Galerkin methods with non-diagonal covariance operator, \cite{Kruse} for optimal error estimates of a finite element discretization with Lipschitz continuous nonlinearities, \cite{AnderssonLarsson} for weak convergence of such a finite element method and, finally, a more or less recent overview concerning the numerical approximation of SPDE in \cite{KloedenJentzen}. Our proofs are based on It\^o's formula for the variational solution, the $L^\infty$-bound for the membrane potential and some uniform improved regularity estimates of the approximate solution. We deduce explicit error estimates, which a priori do not yield a strong convergence rate but only pathwise convergence with a smaller rate $\sfrac12-$. In special cases, e.\,g. when the drift satisfies a one-sided Lipschitz condition as in \cite{SauerLattice}, one can improve the result to obtain a strong convergence rate of $\sfrac1n$.
The article is structured as follows. In the next section we describe the precise mathematical setting and all assumptions on the coefficients. Section \ref{sec:Wellposed} is then devoted to the existence and uniqueness Theorem \ref{thm:Ex+Unique}, while Section \ref{sec:Approx} introduces the approximation scheme and states Theorems \ref{thm:Approx} and \ref{thm:Monotone} on convergence and explicit rates. We finish with two examples: the Hodgkin-Huxley system mentioned before and the FitzHugh-Nagumo equations studied in \cite{SauerLattice}. In particular, we are able to generalize and improve the results obtained there. The appendix contains more or less well-known facts about the stochastic convolution, presented here for the reader's convenience.
\section{Mathematical Setting and Assumptions} Let us first fix some notation. By $c$ and $C$ we denote constants which may change from line to line, the letter $K$ is reserved for a numerical constant that may be explicitly calculated. Let $\mathcal{O} = (0,1)$ and define $H \df L^2(\mathcal{O})$ as well as $H_d \df \prod_{i=1}^d H$ and the same notation for other product spaces, too.
In this work we consider the following stochastic reaction-diffusion equation on $H$ coupled nonlinearly to a system of $d \geq 1$ equations on $H_d$ without diffusion. \begin{equation}\label{eq:SRDE} \begin{split} \mathrm{d} U(t) &= \Big( A U(t) + f\big(U(t), \mathbf{X}(t)\big) \Big) \dt + B \dwt\\ \mathrm{d} X_i(t) &= f_i \big(U(t), X_i(t)\big) \dt + B_i \big( U(t), \mathbf{X}(t)\big) \,\mathrm{d}W_i(t), \quad 1 \leq i \leq d\\ \end{split} \end{equation} subject to initial conditions $U(0) = u_0$, $\mathbf{X}(0) = \mathbf{x}_0$ and driven by $d+1$ independent cylindrical Wiener processes $W$, $W_i$ on $H$ with underlying complete, filtered probability space $(\Omega, \algF, \algF_t, \PP)$ and coefficients to be specified below. Note that we use bold symbols for vector fields $\mathbf{x} \in \R^d$ with components $x_i$, to distinguish them from the scalar variable $U$.
For the linear part of the drift in \eqref{eq:SRDE} we assume $(A u)(x) \df \partial_{xx} u(x)$ equipped with homogeneous Neumann boundary conditions $\partial_x u(0) = 0$ and $\partial_x u(1) = 0$, hence a linear operator $(A, D(A))$ on $H$. It is well known that $A$ is non-positive and self-adjoint with corresponding closed, symmetric form $\mathcal{E}(u,v) = - \int \partial_x u \partial_x v \dx$, $D(\mathcal{E}) = W^{1,2}(\mathcal{O}) \fd V$ and thus can be uniquely extended to an operator $A : V \to V^\ast$. Here, $V^\ast$ denotes the dual space of $V$. In order to study \eqref{eq:SRDE} in the framework of variational solutions we introduce the Gelfand triple $V \hookrightarrow H \hookrightarrow V^\ast$ with continuous and dense embeddings. Denote by $\dualp{\cdot}{\cdot}$ the dualization between $V$ and $V^\ast$; then it follows that $\dualp{u}{v} = \scp{u}{v}_H$ for all $u \in H$, $v \in V$. \begin{rem} From the modeling perspective the homogeneous Neumann boundary conditions are called \emph{sealed ends}, meaning no currents can pass the boundary. It is reasonable to assume that an input signal is received via an \emph{injected current} at one end (in our case $x=0$); thus we should instead impose $\partial_x u(0) = I(t)$, where $I \in C_b^\infty([0,T])$ is the input signal. However, it is standard to transform such a problem to a homogeneous boundary condition with a modified right hand side. In particular, the drift $f$ then depends on the time $t$ and the space variable $x$. Under the assumptions on $I$ above, this does not modify the essential parts of the analysis and we neglect it for the sake of a concise presentation. \end{rem}
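A discrete counterpart of $(A, D(A))$, of the kind used later for the spatial finite-difference approximation, can be sketched as a second-difference matrix with a zero-flux closure at both ends. The particular symmetric closure below is an assumption (it is only first-order accurate at the boundary); the check confirms that the matrix shares the structural properties of $A$: symmetry, non-positivity, and the constant functions as kernel.

```python
import numpy as np

def neumann_laplacian(J):
    """Second-difference matrix for d^2/dx^2 on [0,1] with zero-flux ends,
    on the grid x_k = k/J; symmetric closure at the boundary."""
    h = 1.0 / J
    A = -2.0 * np.eye(J + 1) + np.eye(J + 1, k=1) + np.eye(J + 1, k=-1)
    A[0, 0] = A[J, J] = -1.0       # one common symmetric Neumann closure
    return A / h ** 2

A = neumann_laplacian(128)
w = np.sort(np.linalg.eigvalsh(A))[::-1]

assert np.allclose(A, A.T)         # symmetric, like the self-adjoint (A, D(A))
assert abs(w[0]) < 1e-6            # kernel: the constant functions
assert np.all(w[1:] < 0.0)         # non-positive: u.Au = -sum |u_{k+1}-u_k|^2 / h^2
assert abs(w[1] + np.pi ** 2) < 0.2  # first nonzero Neumann eigenvalue of d_xx is -pi^2
```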
The reaction part of the drift should satisfy the following conditions.
\begin{assum}\label{assum:Drift} Let $f \in C^1(\R \times \R^d; \R)$ and $f_i \in C^1(\R \times \R; \R)$ with \begin{align*} \abs{f(u,\mathbf{x})}, \abs{\nabla f(u,\mathbf{x})} &\leq L \big( 1 + \abs{u}^{r - 1} \big) \big( 1 + \rho(\mathbf{x}) \big), &\partial_u f(u, \mathbf{x}) &\leq L \big( 1 + \rho(\mathbf{x})\big)\\ \abs{f_i(u, x_i)}, \abs{\nabla f_i(u,x_i)} &\leq L \big( 1 + \rho_i(u) \big) \big( 1 + \abs{x_i}\big), &\partial_{x_i} f_i(u, x_i) &\leq L \end{align*} for constants $L> 0$, $2 \leq r \leq 4$, some locally bounded functions $\rho: \R^d \to \R^+$, $\rho_i:\R \to \R^+$ and all $u \in \R, \mathbf{x} \in \R^d$. Concerning the growth we only assume that there exists a constant $\alpha > 0$ such that $\rho_i(u) \leq \mathrm{e}^{\alpha \abs{u}}$ for all $u \in \R$. \end{assum} Formulated in words, we assume that all functions are locally Lipschitz continuous and do not prescribe any a priori control on the constants. Furthermore, $f$ as a function of $u$ and $f_i$ as a function of $x_i$, with all other variables fixed, satisfy a one-sided Lipschitz condition. In order to deal with the growth of the Lipschitz constants in terms of $\rho$ and $\rho_i$, our analysis is based on $L^\infty$-estimates for both variables $U$ and $\mathbf{X}$, and we therefore have to impose the following additional monotonicity condition.
\begin{assum}\label{assum:Invariance} There exist $K \geq 0$ and $\kappa_K > 0$ such that $\partial_u f(u,\mathbf{x}) \leq - \kappa_K$ for all $\abs{u} > K$ and $\mathbf{x} \in [0,1]^d$. Furthermore, $f_i(u, x_i) \geq 0$ if $x_i \leq 0$ and $f_i(u,x_i) \leq 0$ if $x_i \geq 1$, for all $u \in \R$. \end{assum} From a physical point of view, the second assumption corresponds to the invariance of $[0,1]^d$ for $\mathbf{X}$, which is natural since $\mathbf{X}$ represents some proportion or density. Concerning the noise in equation \eqref{eq:SRDE}, we choose additive noise in $U$ and allow for multiplicative noise in $\mathbf{X}$, which then has to respect the natural bounds $0$ and $1$, see e.\,g. \cite[Section 2.1]{Faugeras} for a reasonable choice. The precise assumptions are stated below.
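For gating dynamics of Hodgkin-Huxley type, $f_i(u,x_i)=a_i(u)(1-x_i)-b_i(u)x_i$ with non-negative rate functions, the invariance part of Assumption \ref{assum:Invariance} is immediate: $f_i(u,0)=a_i(u)\geq 0$ and $f_i(u,1)=-b_i(u)\leq 0$. A quick numerical sanity check, with one classical rate pair inserted as an assumption:

```python
import numpy as np

# one classical rate pair (alpha_h, beta_h), inserted as an assumption:
a = lambda u: 0.07 * np.exp(-u / 20.0)
b = lambda u: 1.0 / (np.exp((30.0 - u) / 10.0) + 1.0)

# gating drift f_i(u, x) = a(u)(1 - x) - b(u) x
f = lambda u, x: a(u) * (1.0 - x) - b(u) * x

u = np.linspace(-50.0, 150.0, 401)
assert np.all(f(u, 0.0) >= 0.0)   # f_i(u, 0) = a(u) >= 0
assert np.all(f(u, 1.0) <= 0.0)   # f_i(u, 1) = -b(u) <= 0: [0,1] is forward invariant
```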
\begin{assum}\label{assum:Kernel} Let $B \in L_2(H,V)$, in particular it admits an integral kernel of the form \[ (B u)(x) = \int_0^1 b(x,y) u(y) \dy, \quad x \in \mathcal{O}, b \in W^{1,2}\big(\mathcal{O}^2\big). \] Also, let $B_i: H \times H_d \to L_2(H)$ with integral kernels \[ \big(B_i(u, \mathbf{x})v\big)(x) = \mathbbm{1}_{\{0 \leq \mathbf{x} \leq 1\}} \int_0^1 b_i\big(u(x), \mathbf{x}(x), x, y) v(y) \dy, \quad x \in \mathcal{O}, b_i(u, \mathbf{x}) \in L^2\big(\mathcal{O}^2\big) \] being Lipschitz continuous in the first two variables, i.\,e. \[ \abs{b_i(u, \mathbf{x}, x, y) - b_i(v, \mathbf{y}, x, y)} \leq L \big( \abs{u - v} + \abs{\mathbf{x} - \mathbf{y}} \big) \] for all $x, y \in \mathcal{O}$, $u, v \in \R$ and $\mathbf{x}, \mathbf{y} \in \R^d$. Furthermore, assume that $B_i : V \times V_d \to L_2(H,V)$, in particular $b_i(u, \mathbf{x}) \in W^{1,2}(\mathcal{O}^2)$ for $u \in V, \mathbf{x} \in V_d$ with \[ \norm{B_i(u, \mathbf{x})}_{L_2(H,V)} = \norm{b_i(u, \mathbf{x})}_{W^{1,2}(\mathcal{O}^2)} \leq L \big( \norm{u}_V + \norm{\mathbf{x}}_{V_d}\big). \] \end{assum}
\section{Existence and Uniqueness}\label{sec:Wellposed} As mentioned before, the existence and uniqueness result relies on $L^\infty$-bounds for both variables, based on the observation that $[0,1]^d$ is forward invariant for the dynamics of $\mathbf{X}$, as is easily seen in e.\,g. the Hodgkin-Huxley equations. For this purpose define the set \[ \mathcal{X} \df \Big\{ \mathbf{X} \in C\big([0,T]; H_d\big) ~\algF_t\text{-adapted}: 0 \leq \mathbf{X}(t)\leq 1 ~\PP-\text{a.\,s. for a.\,e. } x \text{ and all } t \in [0,T]\Big\}, \] i.\,e. the a priori solution set for the auxiliary variables. We will show later on that, given an initial value $\mathbf{x}_0$ in between these bounds, it indeed holds that $\mathbf{X} \in \mathcal{X}$. Moreover, let us introduce the Ornstein-Uhlenbeck process $Y$ as the solution to \[ \mathrm{d}Y(t) = A Y(t) \dt + B \dwt, \quad Y(0) = 0. \] Let $E \df C(\overline{\mathcal{O}}; \R)$. The statistics of $Y$ are well known; in particular, by Lemma \ref{app:Lem1} it follows that \begin{equation}\label{eq:RYt} R_t^Y \df \sup_{s \in [0,t]} \norm{Y(s)}_E < \infty \quad \PP\text{-a.\,s.} \end{equation} for all $t \in [0,T]$ is a random variable with Gaussian moments. This motivates the following definition of a natural solution set for the $U$ variable. \[ \mathcal{U} \df \left. \begin{cases} U \in \algF_t\text{-adapted}: \norm{U(t)}_{L^\infty(\mathcal{O})} \leq R_t ~\PP\text{-a.\,s. for all } t \in [0,T] \text{ for some $\algF_t$-adapted}\\\text{process $R_t$ with Gaussian moments, i.\,e. $\EV{\exp(\frac{\alpha}{2} R_T^2)}< \infty$ for some $\alpha > 0$.}\end{cases} {\kern -1em} \right\}. \] Due to the additive noise, a process $U$ that is part of a solution to \eqref{eq:SRDE} cannot be uniformly bounded; however, such a pathwise estimate is reasonable, as we will show below. With this preliminary work we are able to state the following theorem.
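The process $Y$ and the bound \eqref{eq:RYt} can be illustrated by expanding $Y$ in the cosine eigenbasis of the Neumann Laplacian and updating each mode with its exact Ornstein-Uhlenbeck transition. The diagonal coefficients chosen for $B$ below are an assumption; any decay fast enough to give $B \in L_2(H,V)$ would do.

```python
import numpy as np

rng = np.random.default_rng(0)
M, nsteps, dt = 64, 1000, 1e-3               # M cosine modes, horizon T = 1
k = np.arange(1, M + 1)
lam = (np.pi * k) ** 2                       # Neumann eigenvalues of -d_xx on (0,1)
bk = 1.0 / k ** 2                            # assumed diagonal coefficients of B
x = np.linspace(0.0, 1.0, 201)
ek = np.sqrt(2.0) * np.cos(np.pi * np.outer(k, x))   # eigenfunctions e_k(x)

# exact mode-wise OU transition for dy_k = -lam_k y_k dt + b_k dW_k
decay = np.exp(-lam * dt)
std = bk * np.sqrt((1.0 - decay ** 2) / (2.0 * lam))
y = np.zeros(M)
sup_norm = 0.0
for _ in range(nsteps):
    y = decay * y + std * rng.standard_normal(M)
    sup_norm = max(sup_norm, np.abs(ek.T @ y).max())

assert 0.0 < sup_norm < np.inf               # R_T^Y is finite P-a.s.
```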
\begin{thm}\label{thm:Ex+Unique} Let $p \geq \max \{2(r-1), 4\}$, let $u_0 \in L^p(\Omega, \algF_0, \PP; H)$ be independent of $W$ with Gaussian moments in $E$, and let $\mathbf{x}_0 \in L^p( \Omega, \algF_0, \PP; H_d)$ with $0 \leq \mathbf{x}_0 \leq 1$ $\PP$-a.\,s. for a.\,e. $x$. Then there exists a unique variational solution $(U, \mathbf{X})$ to \eqref{eq:SRDE} with $U \in \mathcal{U}$ and $\mathbf{X} \in \mathcal{X}$. \end{thm} The proof of this theorem is based on a fixed-point iteration, solving each equation separately without the coupling. This is carried out in the next three subsections.
\subsection{Solving the Equation for $U$} Let us fix $\mathbf{X} \in \mathcal{X}$, then \begin{equation}\label{eq:SPDEforU} \mathrm{d}U(t) = \Big( AU(t) + f\big(U(t), \mathbf{X}(t)\big)\Big)\dt + B \dwt, \quad U(0) = u_0. \end{equation}
\begin{lem}\label{lem:ExforU} Let $u_0 \in L^p(\Omega, \algF_0, \PP; H)$, $p\geq 2(r-1)$. Given $\mathbf{X} \in \mathcal{X}$, there exists a unique variational solution $U$ satisfying \[ \EV{ \sup_{t \in [0,T]} \norm{U(t)}_H^p + \int_0^T \norm{U(t)}_V^2 \dt} < \infty. \] \end{lem} \begin{proof} This lemma is a more or less immediate application of \cite[Theorem 1.1]{LiuLocallyMonotone}. As $\mathbf{X} \in \mathcal{X}$ it follows that $\rho(\mathbf{X}(t,x)) \leq \rho_0$ for some uniform (in $t$ and $x$) constant $\rho_0 > 0$. It remains to check the monotonicity and coercivity conditions (H1)--(H4) in \cite{LiuLocallyMonotone}. Of course, for $u, v, w \in V$ the map $s \mapsto \dualp{A(u+sv) + f\big(u+sv, \mathbf{X}(t)\big)}{w}$ is continuous in $\R$. Monotonicity is also quite obvious, since we have a one-sided Lipschitz condition that implies \[ 2\dualp{A(u-v) + f\big(u, \mathbf{X}(t)\big) - f\big(v, \mathbf{X}(t)\big)}{u-v} \leq - 2\norm{u-v}_V^2 + 2L(1 + \rho_0) \norm{u-v}_H^2. \] This directly yields coercivity with the choice $v = 0$ and $\abs{f(0,\mathbf{X}(t))} \leq L(1 + \rho_0)$, \[ 2\dualp{Au + f\big( u, \mathbf{X}(t)\big)}{u} \leq - 2 \norm{u}_V^2 + 3L(1 + \rho_0) \norm{u}_H^2 + L (1 + \rho_0). \] The growth condition is based on the polynomial growth of $f$ of order $r-1 \leq 3$ and the Sobolev embedding $V \hookrightarrow L^\infty(\mathcal{O})$ in dimension one; in detail, \begin{align*} \norm{Au + f\big(u, \mathbf{X}(t)\big)}_{V^\ast} &\leq \norm{u}_V + \sup_{\norm{\phi}_V=1} \int_0^1 \abs{f \big(u, \mathbf{X}(t)\big)} \abs{\phi} \dx\\ &\leq \norm{u}_V + \sup_{\norm{\phi}_V=1} L ( 1 + \rho_0) \norm{\phi}_{L^\infty(\mathcal{O})} \int_0^1 \big(1 + \abs{u}^{r-1}\big) \dx\\ &\leq C \big(1 + \norm{u}_V\big) \big(1 + \norm{u}_H^{r-2}\big).\qedhere \end{align*} \end{proof} \begin{rem} When there is no auxiliary variable $\mathbf{X}$, we can obtain strong solutions to \eqref{eq:SRDE} by \cite{GessGradient} without the upper bound $r \leq 4$. 
This approach makes use of the fact that the drift can be written as the gradient of a quasi-convex potential, and it also yields more regularity, in particular $U \in L^2(\PP; L^2([0,T]; W^{2,2}(\mathcal{O})))$. We would rather use such a result instead; however, in the present case the drift is time-dependent and it is unclear whether and how the results of \cite{GessGradient} generalize. Instead of using such an improved regularity for the solution itself, the proofs of the approximation results, e.\,g. the convergence rate obtained in Theorem \ref{thm:Approx}, are based on more than the canonical regularity for the approximate solution, see Lemma \ref{lem:ImprovedAPriori}. \end{rem} Instead of proving the existence of more regular solutions and using Sobolev embedding to deduce $L^\infty$-estimates, we follow a different strategy that yields a pathwise bound and moreover is dimension independent.
\begin{lem}\label{lem:PathwiseBoundforU} Let $U$ be the solution to \eqref{eq:SPDEforU} from Lemma \ref{lem:ExforU} and assume $u_0$ independent of $W$ having Gaussian moments in $E$. Then there exists an $\algF_t$-adapted stochastic process $R_t$, given by \[ R_t \df \norm{u_0}_E + R + 2 R_t^Y \quad \PP\text{-a.\,s.} \] for some constant $R > 0$, with $R_t^Y$ the supremum of the corresponding Ornstein-Uhlenbeck process as specified in \eqref{eq:RYt}, such that the solution $U$ remains bounded by \[ \norm{U(t)}_{L^\infty(\mathcal{O})} \leq R_t, \quad \PP\text{-a.\,s.} \] for all $t \in [0,T]$. Moreover, $R_t$ has Gaussian moments, thus in particular $U \in \mathcal{U}$. \end{lem} \begin{proof} At first, define $Z \df U - Y$, hence this difference satisfies a deterministic evolution equation with a random parameter \begin{equation} \ddt Z(t) = A Z(t) + f\big(Z(t) + Y(t), \mathbf{X}(t)\big), \quad Z(0) = u_0. \end{equation} Let $R > 0$ be some (possibly) large constant, essentially dependent on the shape of $f$. Then, define for fixed $t \in [0,T]$, with abuse of notation, another process $R_t \df \norm{u_0}_{L^\infty} + R + R_t^Y$, of course with implicit $\omega$-dependence. We show that $U(s)$ does not leave the desired interval up to time $s \leq t$, using a cutoff via a normal contraction. To this end, introduce for all $\eps > 0$ a normal contraction $\phi_\eps$ with $\phi_\eps \in C^\infty(\R)$, $\phi_\eps \leq \eps$, $\phi_\eps(u) = u + R_t$ for $u \leq - R_t$, $\abs{\phi_\eps'} \leq 1$ and $\abs{\phi_\eps''}\leq 2/\eps$. As $\eps \to 0$, $\phi_\eps$ approximates $\phi(u) \df \min \{0, u + R_t\}$. Obviously, $\phi_\eps(u) \in H$ (or $V$) if $u \in H$ (or $V$), by the contraction property. 
Thus we can calculate for $s \in [0,t]$ \begin{align*} \frac{\mathrm{d}}{\mathrm{d}s} \norm{\phi_\eps\big(Z(s)\big)}_H^2 &= 2 \dualp{A Z(s) + f\big(Z(s) + Y(s), \mathbf{X}(s)\big)}{\phi_\eps'\big(Z(s)\big) \phi_\eps\big(Z(s)\big)}\\ &\leq - 2\int_0^1 \abs{\partial_x Z(s)}^2 \Big(\abs{\phi_\eps'\big(Z(s)\big)}^2 + \phi_\eps''\big(Z(s)\big) \phi_\eps\big(Z(s)\big) \Big) \dx\\ &\quad + 2 \scp{f\big(Z(s) + Y(s), \mathbf{X}(s)\big)}{\phi_\eps'\big(Z(s)\big) \phi_\eps\big(Z(s)\big)}_H. \end{align*} Concerning the first summand we know that $\abs{\phi_\eps'} \leq 1$, hence this term is finite and negative. Also, $\phi_\eps'' \phi_\eps \to 0$ point-wise as $\eps \to 0$ and $\abs{\phi_\eps'' \phi_\eps} \leq 2$. For the nonlinear part it holds that $\phi_\eps' \phi_\eps \to \phi$ point-wise as $\eps \to 0$ and $\abs{(\phi_\eps' \phi_\eps)(x)} \leq \abs{\phi_\eps(x)} \leq \abs{x} + R_t(\omega)$. We can integrate the inequality from $0$ up to $t$, and by Lebesgue's dominated convergence theorem we can interchange all integrals and the limit $\eps \to 0$ to obtain \begin{align*} \norm{\phi\big(Z(t)\big)}_H^2 &\leq \norm{\phi(u_0)}_H^2 + \int_0^t \scp{f \big(Z(s) + Y(s), \mathbf{X}(s)\big)}{\phi\big(Z(s)\big)}_H \ds\\ &= \norm{\phi(u_0)}_H^2 + \int_0^t \scp{f\big(Y(s) -R_t, \mathbf{X}(s)\big)}{\phi \big(Z(s)\big)}_H \ds \\ &\quad + \int_0^t \scp{f\big(Z(s) + Y(s), \mathbf{X}(s)\big) - f\big( Y(s) - R_t, \mathbf{X}(s)\big)}{\phi \big(Z(s)\big)}_H \ds. \end{align*} Now, the monotonicity in Assumption \ref{assum:Invariance} on $f$ implies that both of the integrals are less than or equal to zero. In detail, for $R > K > 0$ large enough the function $f(\cdot, \mathbf{X}(s))$ is monotone decreasing on $\R \setminus [-R, R]$ and in particular it changes its sign from $+$ to $-$. In both summands the integrand is zero if $Z(s) \geq - R_t$ because $\phi$ vanishes there.
In the opposite case \[ Z(s) + Y(s) \leq Y(s) - R_t \leq - R \] and the integrand in the second integral is of the form \[ \mathbbm{1}_{\{Z(s) \leq -R_t\}} \Big(f\big(Z(s) + Y(s), \mathbf{X}(s)\big) - f\big( Y(s) - R_t, \mathbf{X}(s)\big)\Big) \big( Z(s) + R_t\big) \leq 0. \] In the first integral we only need that $f(Y(s) - R_t, \mathbf{X}(s)) \geq 0$ since $\phi(Z(s)) \leq 0$. In conclusion, we have shown that \[ \norm{\phi(Z(t))}_H \leq \norm{\phi(u_0)}_H \quad \Rightarrow \quad \einf_{x \in \mathcal{O}} Z(t) \geq -R_t \quad \PP\text{-a.\,s.} \] The corresponding upper bound can be obtained in the exact same way with $\tilde{\phi}(u) \df \max \{0, u - R_t\}$. This concludes the proof via the final estimate \[ \norm{U(t)}_{L^\infty(\mathcal{O})} \leq \norm{Z(t)}_{L^\infty(\mathcal{O})} + \norm{Y(t)}_E \leq R_t + R_t^Y. \] Thus $U(t)$ is $\PP$-a.\,s. bounded by $R_t \df \norm{u_0}_E + R + 2 R_t^Y$ and the integrability follows from Lemma \ref{app:Lem1}. \end{proof}
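For a concrete picture of the cutoff argument, the following sketch constructs one explicit family of smooth approximations of $\phi(u) = \min\{0, u + R\}$ via a scaled softplus. This particular choice is our own illustration, not the one used in the proof (which only needs the abstract properties such as $\abs{\phi_\eps'} \leq 1$ and closeness to $\phi$), and the helper names are hypothetical.

```python
import numpy as np

def phi(u, R):
    # Limit cutoff phi(u) = min(0, u + R).
    return np.minimum(0.0, u + R)

def phi_eps(u, R, eps):
    # Smooth approximation via a scaled softplus:
    #   phi_eps(u) = -eps * log(1 + exp(-(u + R)/eps)),
    # computed stably with logaddexp; nonpositive and within eps*log(2) of phi.
    z = -(u + R) / eps
    return -eps * np.logaddexp(0.0, z)

def dphi_eps(u, R, eps):
    # First derivative: a logistic sigmoid, so values lie strictly in (0, 1).
    z = -(u + R) / eps
    return 1.0 / (1.0 + np.exp(-z))
```

A quick check on a grid confirms the contraction properties: the derivative stays in $(0,1)$ and the approximation error is of order $\eps$, matching the role of $\phi_\eps$ in the proof.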
\subsection{Solving the Equation for $\mathbf{X}$}
Let us now fix $U \in \mathcal{U}$ and consider \begin{equation}\label{eq:SPDEforX} \mathrm{d}X_i(t) = f_i\big(U(t), X_i(t)\big) \dt + B_i \big(U(t), \mathbf{X}(t)\big) \,\mathrm{d}W_i(t), \quad 1 \leq i \leq d \end{equation} with initial condition $\mathbf{X}(0) = \mathbf{x}_0$. For a compact notation introduce the vector fields $\mathbf{f}(t,\mathbf{x}) \df (f_i(U(t), x_i))_{1\leq i \leq d}$ and $\mathbf{B}(t,\mathbf{x}) \df (B_i(U(t), \mathbf{x}))_{1\leq i \leq d}$. \begin{lem}\label{lem:ExforX} Let $p \geq 4$ and $\mathbf{x}_0 \in L^p(\Omega, \algF_0, \PP; H_d)$. Given $U \in \mathcal{U}$, there exists a unique strong solution $\mathbf{X}$ to \eqref{eq:SPDEforX} satisfying \begin{equation} \EV{ \sup_{t \in [0,T]} \norm{\mathbf{X}(t)}_{H_d}^p} < \infty. \end{equation} \end{lem} \begin{proof} This lemma is again an application of \cite[Theorem 1.1]{LiuLocallyMonotone}. We need to verify (H1)--(H4), this time in the Gelfand triple $H_d \hookrightarrow H_d \hookrightarrow H_d$. Again, the hemicontinuity is straightforward to obtain since everything is a composition of continuous mappings. Also, the monotonicity follows from the one-sided Lipschitz condition for each $f_i$ and the global Lipschitz assumption on $b_i$, \[ \scp{ \mathbf{f}(t,\mathbf{x}) - \mathbf{f}(t,\mathbf{y})}{\mathbf{x} - \mathbf{y}}_{H_d} + \norm{\mathbf{B}(t,\mathbf{x}) - \mathbf{B}(t,\mathbf{y})}_{L_2(H_d)}^2 \leq \big(L + 2L^2\big) \norm{\mathbf{x} - \mathbf{y}}_{H_d}^2. \] Concerning coercivity we see that with the upper bound for $\abs{f_i(u,x_i)}$ \begin{align*} \scp{\mathbf{f}(t, \mathbf{x})}{\mathbf{x}}_{H_d} &\leq L \norm{\mathbf{x}}_{H_d}^2 + \scp{\mathbf{f}(t, 0)}{\mathbf{x}}_{H_d} \leq L \norm{\mathbf{x}}_{H_d}^2 + L \Big(\sum_{i=1}^d \big( 1 + \rho_i (R_t)\big)^2\Big)^\frac12 \norm{\mathbf{x}}_{H_d}\\ &\leq (L + 1) \norm{\mathbf{x}}_{H_d}^2 + \tfrac14 L^2 \sum_{i=1}^d \big( 1 + \rho_i (R_t)\big)^2 \fd (L + 1) \norm{\mathbf{x}}_{H_d}^2 + g_t. 
\end{align*} The stochastic process $g_t$ is $\algF_t$-adapted and in $L^p([0,T] \times \Omega, \dt \otimes \PP)$ for every $1 \leq p < \infty$ because each $\rho_i$ is of at most exponential growth. Also, the linear growth condition on $\mathbf{B}$ is immediate by the assumptions on $b_i$, since \begin{align*} \norm{\mathbf{B}(t,\mathbf{x})}_{L_2(H_d)}^2 &\leq 2 \sum_{i=1}^d \Big( \norm{B_i\big( 0, 0 \big)}_{L_2(H)}^2 + \norm{B_i\big(U(t),\mathbf{x}\big) - B_i(0,0)}_{L_2(H)}^2\Big)\\ &\leq C + 4 L^2 \big( \norm{U(t)}_H^2 + \norm{\mathbf{x}}_{H_d}^2\big) \leq \tilde{g}_t + 4 L^2 \norm{\mathbf{x}}_{H_d}^2, \end{align*} where the stochastic process $\tilde{g}_t$ is again $\algF_t$-adapted and in every $L^p$. In a similar manner it follows that \[ \norm{\mathbf{f}(t, \mathbf{x})}_{H_d} = \Big( \sum_{i=1}^d \int_0^1 \abs{f_i \big(U(t), x_i\big)}^2 \dx \Big)^\frac12 \leq K g_t^\frac12 \big( 1 + \norm{\mathbf{x}}_{H_d}\big), \] hence the growth condition holds with $\beta =2$ and this guarantees the existence of a unique variational solution $\mathbf{X}$, which is indeed a strong solution since everything is $H_d$-valued. In particular, $\mathbf{X}$ satisfies \begin{equation}\label{eq:IntegralEquationForX} \mathbf{X}(t) = \mathbf{x}_0 + \int_0^t \mathbf{f}\big(s, \mathbf{X}(s)\big) \ds + \int_0^t \mathbf{B}\big(s, \mathbf{X}(s)\big) \,\mathrm{d} \mathbf{W}(s), \quad t \in [0,T] \end{equation} $\PP$-a.\,s. where $\mathbf{W} \df (W_i)_{1 \leq i \leq d}$. \end{proof}
\begin{lem}\label{lem:InvarianceX} Let $\mathbf{X}$ be the strong solution to \eqref{eq:IntegralEquationForX}. If $0 \leq \mathbf{x}_0 \leq 1$ $\PP$-a.\,s. for a.\,e. $x$, then $0 \leq \mathbf{X}(t) \leq 1$ $\PP$-a.\,s. for a.\,e. $x$ and all $t \in [0,T]$. \end{lem} \begin{proof} The proof is similar to the one of Lemma \ref{lem:PathwiseBoundforU} and involves the functions $\phi_0 (x) \df \min\{0, x\}$ and $\phi_1(x) \df \max \{0, x - 1\}$. For $\eps > 0$ denote by $\phi_{j,\eps}$ the smooth normal contractions approximating $\phi_j$, $j=0,1$. Consider It\^o's formula for $\norm{\phi_{j, \eps}(\mathbf{X}(t))}_{H_d}^2$, applied component-wise. \begin{align*} \mathrm{d} \norm{ \phi_{j,\eps}\big(\mathbf{X}(t)\big)}_{H_d}^2 &= 2 \scp{ \phi_{j,\eps} \big(\mathbf{X}(t)\big) \phi_{j,\eps}' \big( \mathbf{X}(t)\big)}{ \mathbf{f}\big(t, \mathbf{X}(t)\big)}_{H_d} \dt\\ &\quad + 2\scp{ \phi_{j,\eps} \big( \mathbf{X}(t)\big) \phi_{j,\eps}' \big( \mathbf{X}(t)\big)}{ \mathbf{B}\big(t,\mathbf{X}(t)\big) \,\mathrm{d} \mathbf{W}(t)}_{H_d}\\ &\quad + \norm{\mathbf{B}\big(t,\mathbf{X}(t)\big)^\ast \phi_{j, \eps}'\big(\mathbf{X}(t)\big)}_{H_d}^2 \dt\\ &\quad + \scp{ \phi_{j,\eps} \big( \mathbf{X}(t)\big) \phi_{j,\eps}'' \big(\mathbf{X}(t)\big) }{\norm{ \mathbf{B}\big(t,\mathbf{X}(t)\big)}_{L_2(H_d)}^2}_{H_d} \dt. \end{align*} In the limit $\eps \to 0$ the stochastic integral vanishes $\PP$-a.\,s., as does the It\^o correction term, since the integrands are nonzero on disjoint sets. Moreover, the convergence $\phi_{j,\eps} \phi_{j,\eps}'' \to 0$ as $\eps \to 0$ makes the last summand disappear. Thus, by Lebesgue's dominated convergence theorem only the drift part remains, that is, \begin{align*} \mathrm{d} \norm{ \phi_j\big(\mathbf{X}(t)\big)}_{H_d}^2 &= 2 \scp{ \phi_j \big(\mathbf{X}(t)\big)}{ \mathbf{f}\big(t, \mathbf{X}(t)\big)}_{H_d} \dt\\ &= 2 \sum_{i=1}^d \int_0^1 \phi_j\big(X_i(t)\big) f_i \big(U(t), \mathbf{X}(t)\big) \dx \dt \leq 0 \end{align*} by Assumption \ref{assum:Invariance}. 
It follows that $\norm{ \phi_j\big(\mathbf{X}(t)\big)}_{H_d}^2 \leq \norm{ \phi_j\big(\mathbf{x}_0 \big)}_{H_d}^2$ $\PP$-a.\,s. Obviously, this implies $\phi_j (\mathbf{X}(t)) = 0$ $\PP$-a.\,s. for all $t \in [0,T]$ and a.\,e. $x \in \mathcal{O}$, and in conclusion $0 \leq \mathbf{X}(t) \leq 1$. \end{proof}
\subsection{Proof of Theorem \ref{thm:Ex+Unique}} Define the approximating sequence $(U^n, \mathbf{X}^n)$ as follows. Let $U^0 \equiv u_0$ and $\mathbf{X}^0 \equiv \mathbf{x}_0$. For $n \geq 1$ let $U^n$ be the solution to \begin{equation}\label{eq:Un} \mathrm{d} U^n(t) = \Big( A U^n(t) + f \big( U^n(t), \mathbf{X}^{n-1}(t)\big) \Big) \dt + B \dwt \end{equation} with initial condition $U^n(0) = u_0$. Furthermore, let $\mathbf{X}^n$ be the solution to \begin{equation}\label{eq:Xn} \mathrm{d} X_i^n(t) = f_i \big(U^n(t), \mathbf{X}^n(t)\big) \dt + B_i \big( U^n(t), \mathbf{X}^n(t)\big) \,\mathrm{d}W_i(t), \quad 1 \leq i \leq d, \end{equation} with $\mathbf{X}^n(0)=\mathbf{x}_0$. According to Lemmas \ref{lem:ExforU}--\ref{lem:InvarianceX} these processes exist and are unique. In particular, $U^n \in \mathcal{U}$ and $\mathbf{X}^n \in \mathcal{X}$ for all $n \geq 0$. We can therefore study the differences $U^{n+1} - U^n$ and $\mathbf{X}^{n+1} - \mathbf{X}^n$ for $n\geq 1$ in $H$ and $H_d$, respectively. More precisely, it holds that \begin{align} \begin{split}\label{eq:DifferenceU} \mathrm{d} \big(U^{n+1}(t) - U^n(t)\big) &= A \big(U^{n+1}(t) - U^n(t)\big) \dt\\ &\quad + \Big( f\big( U^{n+1}(t), \mathbf{X}^n(t)\big) - f\big(U^n(t), \mathbf{X}^{n-1}(t)\big)\Big)\dt, \end{split}\\ \begin{split}\label{eq:DifferenceX} \mathrm{d} \big(X_i^{n+1}(t) - X_i^n(t)\big) &= \Big( f_i\big( U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - f_i\big(U^n(t), \mathbf{X}^n(t)\big)\Big)\dt\\ &\quad + \Big( B_i\big( U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - B_i\big(U^n(t), \mathbf{X}^n(t)\big)\Big)\,\mathrm{d}W_i(t), \end{split} \end{align} for $1 \leq i \leq d$. Let us start with a useful but elementary estimate due to Assumption \ref{assum:Drift}. 
For all $u,v \in [-R_t, R_t]$, $\mathbf{x}, \mathbf{y} \in [0,1]^d$ it holds that \begin{equation}\label{eq:LocalLipschitz} \begin{split} \abs{f(u, \mathbf{x}) - f(v, \mathbf{y})} &\leq L \big( 1 + R_t^{r-1}\big) ( 1 + \rho_0) \big( \abs{u-v}^2 + \abs{\mathbf{x} - \mathbf{y}}^2 \big)^\frac12,\\ \abs{f_i(u, \mathbf{x}) - f_i(v, \mathbf{y})} &\leq 2 L \big( 1 + \rho_i (R_t)\big) \big( \abs{u-v}^2 + \abs{\mathbf{x} - \mathbf{y}}^2 \big)^\frac12. \end{split} \end{equation} At this point it is noteworthy that the pathwise $L^\infty$-estimate from Lemma \ref{lem:PathwiseBoundforU} is uniform in $n$, since each equation is driven by the same realization of the cylindrical Wiener process $W$. Corresponding to these Lipschitz constants define the process $(G_t)_{t \in [0,T]}$ by \begin{equation}\label{eq:ProcessG} G_t \df \int_0^t \Big(2L^2 \big( 1 + R_s^{r-1}\big)^2 (1 + \rho_0)^2 + 4L^2 \sum_{i=1}^d \big( 1 + \rho_i(R_s)\big)^2 + K \big(dL^2 + 1\big) \Big)\ds. \end{equation} The next step is to derive differential inequalities for the differences in $H$ and $H_d$. Combining \eqref{eq:DifferenceU}, inequality \eqref{eq:LocalLipschitz} for $f$ and Young's inequality, we obtain \begin{align*} &\ddt \norm{U^{n+1}(t) - U^n(t)}_H^2 = 2\dualp{A \big( U^{n+1}(t) - U^n(t)\big)}{U^{n+1}(t) - U^n(t)}\\ &\qquad + 2\dualp{f\big( U^{n+1}(t), \mathbf{X}^n(t)\big) - f\big(U^n(t), \mathbf{X}^{n-1}(t)\big)}{U^{n+1}(t) - U^n(t)}\\ &\quad \leq - 2\norm{U^{n+1}(t) - U^n(t)}_V^2\\ &\qquad + 2 \norm{f\big( U^{n+1}(t), \mathbf{X}^n(t)\big) - f\big(U^n(t), \mathbf{X}^{n-1}(t)\big)}_H \norm{U^{n+1}(t) - U^n(t)}_H\\ &\quad \leq - 2\norm{U^{n+1}(t) - U^n(t)}_V^2 + \norm{\mathbf{X}^n(t) - \mathbf{X}^{n-1}(t)}_{H_d}^2\\ &\qquad + \Big(L^2 \big( 1 + R_t^{r-1}\big)^2 ( 1 + \rho_0)^2 + 1\Big) \norm{U^{n+1}(t) - U^n(t)}_H^2. \end{align*} As a consequence, we can obtain the following pathwise estimate. 
\begin{equation}\label{eq:FinalDiffUn} \begin{split} &\sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_H^2 + 2 \int_0^T \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_V^2 \dt\\ &\quad \leq \int_0^T \mathrm{e}^{-G_t} \norm{\mathbf{X}^n(t) - \mathbf{X}^{n-1}(t)}_{H_d}^2 \dt \quad \PP\text{-a.\,s.} \end{split} \end{equation} In contrast to the case above, the multiplicative noise in $\mathbf{X}$ only allows for mean square estimates. However, in a similar fashion one can obtain the analogous inequality for the second variable using \eqref{eq:DifferenceX}, the local Lipschitz conditions for each $f_i$ from \eqref{eq:LocalLipschitz} together with the global Lipschitz continuity of each $b_i$. \begin{align*} &\mathrm{d} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2 = 2 \scp{\mathbf{f}\big( U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - \mathbf{f} \big(U^n(t), \mathbf{X}^n(t)\big)}{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d} \dt \\ &\qquad+ 2 \scp{ \mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}{ \Big( \mathbf{B}\big(U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - \mathbf{B} \big( U^n(t), \mathbf{X}^n(t)\big)\Big) \,\mathrm{d}\mathbf{W}(t)}_{H_d}\\ &\qquad+ \norm{\mathbf{B}\big(U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - \mathbf{B} \big( U^n(t), \mathbf{X}^n(t)\big)}_{L_2(H_d)}^2 \dt\\ &\quad \leq (2dL^2 + 1) \norm{U^{n+1}(t) - U^n(t)}_H^2 \dt\\ &\qquad + \Big(4 L^2 \sum_{i=1}^d \big(1 + \rho_i(R_t)\big)^2 + 2dL^2 + 1\Big) \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2 \dt\\ &\qquad+ 2 \scp{ \mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}{\Big( \mathbf{B}\big(U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - \mathbf{B}\big(U^n(t), \mathbf{X}^n(t)\big)\Big) \,\mathrm{d}\mathbf{W}(t)}_{H_d}. \end{align*} Here, the additional terms are due to the multiplicative noise. Similarly to \eqref{eq:FinalDiffUn}, we need the exponential factor $\mathrm{e}^{-G_t}$ in the final estimate, as follows. 
\begin{align*} &\EV{\int_0^T \Big( G_t - 4L^2 \sum_{i=1}^d \big(1 + \rho_i(R_t)\big)^2 - 2 dL^2 - 1 \Big) \mathrm{e}^{-G_t} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2 \dt}\\ &+ \EV{\sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2} \leq \big(2 d L^2 +1\big) \EV{ \int_0^T \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_H^2 \dt}\\ &+ \EV{2{\kern -0.5em}\sup_{t \in [0,T]} \abs{ \int_0^t \mathrm{e}^{-G_s} \scp{ \mathbf{X}^{n+1}(s) - \mathbf{X}^n(s)}{\Big( \mathbf{B}\big(U^{n+1}(s), \mathbf{X}^{n+1}(s)\big) - \mathbf{B}\big(U^n(s), \mathbf{X}^n(s)\big)\Big) \,\mathrm{d}\mathbf{W}(s)}_{H_d}}}. \end{align*} The supremum of the local martingale is controlled via the Burkholder-Davis-Gundy inequality by its quadratic variation. Denote, for the moment, the stochastic integral by $M_t$; then it is straightforward to obtain \begin{align*} &\langle M\rangle_T = \int_0^T \mathrm{e}^{-2 G_t} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2 \norm{ \mathbf{B}\big(U^{n+1}(t), \mathbf{X}^{n+1}(t)\big) - \mathbf{B}\big(U^n(t), \mathbf{X}^n(t)\big)}_{L_2(H_d)}^2 \dt\\ &\quad \leq 2dL^2 \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2\\ &\qquad \times \int_0^T \mathrm{e}^{-G_t}\Big( \norm{U^{n+1}(t) - U^n(t)}_H^2 + \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2 \Big)\dt. \end{align*} Young's inequality allows us to absorb all factors except the difference $U^{n+1} - U^n$ into the left-hand side, and we obtain \begin{equation}\label{eq:FinalDiffXn} \EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{\mathbf{X}^{n+1}(t) - \mathbf{X}^n(t)}_{H_d}^2}\leq C \int_0^T \EV{ \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_H^2} \dt. \end{equation} Thus, \eqref{eq:FinalDiffUn} and \eqref{eq:FinalDiffXn} can be iterated in order to obtain an estimate independent of $n$. 
In more detail, using Fubini's theorem we can calculate as follows \begin{align*} &\EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_H^2} \leq \int_0^T \EV{\sup_{s \in [0,t]} \mathrm{e}^{-G_s} \norm{\mathbf{X}^n(s) - \mathbf{X}^{n-1}(s)}_{H_d}^2} \dt\\ &\quad \leq C \int_0^T \int_0^t \EV{ \sup_{r \in [0,s]} \mathrm{e}^{-G_r} \norm{U^n(r) - U^{n-1}(r)}_H^2} \ds \dt\\ &\quad = C \int_0^T (T-s) \EV{ \sup_{r \in [0,s]} \mathrm{e}^{-G_r} \norm{U^n(r) - U^{n-1}(r)}_H^2} \ds\\ &\quad\leq C^2 \int_0^T \int_r^T (T-s)(s-r) \ds\, \EV{ \sup_{t \in [0,r]} \mathrm{e}^{-G_t} \norm{U^{n-1}(t) - U^{n-2}(t)}_H^2} \dr \end{align*} for $n\geq 2$. In general this involves integrals of the form \begin{equation}\label{eq:IntegralRelation} \int_r^T (T-s)^\alpha (s-r) \ds = \frac{(T-r)^{\alpha+2}}{(\alpha+1) (\alpha + 2)}. \end{equation} With this information the desired inequality is \begin{equation}\label{eq:FinalRecursionIneq} \begin{split} &\EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_H^2 + 2 \int_0^T \mathrm{e}^{-G_t} \norm{U^{n+1}(t) - U^n(t)}_V^2 \dt}\\
&\quad \leq \frac{C^n}{(2n-1)!} \int_0^T (T-t)^{2n+1} \EV{ \phantom{\Big|}{\kern -0.2em}\mathrm{e}^{-G_t} \norm{U^1(t) - U^0(t)}_H^2} \dt. \end{split} \end{equation} By \eqref{eq:FinalDiffXn} we get a similar inequality for the differences of $\mathbf{X}^n$, thus the Borel--Cantelli lemma yields $\PP$-a.\,s. convergence $U^n \to U$ in $C([0,T]; H) \cap L^2([0,T];V)$ as well as $\mathbf{X}^n \to \mathbf{X}$ in $C([0,T]; H_d)$. Also, all $L^\infty$-bounds are uniform in $n$, thus in particular $U \in \mathcal{U}$ and $\mathbf{X} \in \mathcal{X}$. This immediately yields an integrable dominating function for all of the integrals in \eqref{eq:Un} and \eqref{eq:Xn} involving $f$ and $f_i$, as well as for the quadratic variation of the multiplicative noise in \eqref{eq:Xn}. Thus, by Lebesgue's dominated convergence theorem $(U, \mathbf{X})$ indeed solves \eqref{eq:SRDE}, and the standard a priori estimates follow from Lemmas \ref{lem:ExforU} and \ref{lem:ExforX}.\qed
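The Beta-type integral identity \eqref{eq:IntegralRelation} used in the iteration above can be sanity-checked numerically; a minimal sketch with hypothetical helper names:

```python
import numpy as np

def lhs(T, r, alpha, m=200001):
    # Trapezoidal approximation of \int_r^T (T - s)^alpha (s - r) ds.
    s = np.linspace(r, T, m)
    f = (T - s)**alpha * (s - r)
    h = (T - r) / (m - 1)
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def rhs(T, r, alpha):
    # Closed form (T - r)^{alpha+2} / ((alpha + 1)(alpha + 2)).
    return (T - r)**(alpha + 2) / ((alpha + 1) * (alpha + 2))
```

For smooth integrands the trapezoidal rule on this fine grid agrees with the closed form to high precision.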
\section{Finite Difference Approximation}\label{sec:Approx}
In this second part we study spatial approximations of equation \eqref{eq:SRDE} needed for the numerical simulation of the neuronal dynamics using the well-known finite difference method. The domain $\mathcal{O}$ is approximated by the equidistant grid $\frac1n \{0, \dots, n\}$ and the vectors $U^n, X_i^n \in \R^{n+1}$ denote the functions $U, X_i$ evaluated on this grid. For a compact notation we use bold symbols for matrices $\mathbf{x} \in \R^{d \times (n+1)}$ in this section, too. Furthermore, for a vector $u \in \R^{n+1}$ let $\tilde{u}$ denote the linear interpolation with respect to the space variable $x$, \[ \tilde{u}(x) \df (nx - k +1) u_k + (k -nx) u_{k-1},\quad x \in \left[\frac{k-1}{n}, \frac{k}{n}\right], \] together with zero outer normal derivative at the boundary points $x=0$ and $x=1$. Also, denote by $\tilde{\mathbf{x}}(x)$ the component-wise linear interpolation and by $\iota_n: V_n \to V; u \mapsto \tilde{u}$ the embedding into $V$ (or similarly $V_d$).
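The interpolation operator $\iota_n$ above translates directly into code; the following sketch is our own illustration (the clamping of the interval index at the boundary is an implementation detail):

```python
import numpy as np

def interpolate(u, x):
    """Piecewise linear interpolation of the grid vector u in R^{n+1} on the
    grid {0, 1/n, ..., 1}, following
      u_tilde(x) = (n x - k + 1) u_k + (k - n x) u_{k-1}  on [(k-1)/n, k/n]."""
    n = len(u) - 1
    x = np.asarray(x, dtype=float)
    # Index k of the interval [(k-1)/n, k/n] containing x, clamped to {1,...,n}.
    k = np.clip(np.ceil(n * x).astype(int), 1, n)
    return (n * x - k + 1) * u[k] + (k - n * x) * u[k - 1]
```

The interpolant reproduces the grid values, averages adjacent values at the midpoints, and is exact on affine functions.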
At every interior point of the grid we approximate the second derivative by \begin{equation}\label{eq:DiscreteLaplace} (A^n v)_k \df n^2 (v_{k+1} - 2 v_k + v_{k-1}),\quad 1 \leq k \leq n-1, v \in \R^{n+1}. \end{equation} It is standard to choose centered differences modeling the Neumann boundary condition in order to retain the order of convergence of the interior points. This introduces the artificial variables $v_{-1}, v_{n+1}$ and the discrete boundary condition reads as \begin{equation}\label{eq:DiscreteBoundary} \frac{n}{2}(v_1 - v_{-1}) = 0,\quad \frac{n}{2} (v_{n+1} - v_{n-1}) = 0. \end{equation} Together with \eqref{eq:DiscreteLaplace} for $k=0, n$ we can eliminate the artificial variables and obtain \begin{equation} (A^n v)_0 = 2 n^2 (v_1 - v_0),\quad (A^n v)_n = -2 n^2 (v_n - v_{n-1}). \end{equation} Note that $A^n$ is not symmetric with respect to the standard inner product as in the case of a Neumann boundary approximation of order $n^{-1}$. However, introduce the spaces $V_n \cong \R^{n+1}$ with norm \[ \abs{v}_n^2 \df \frac{1}{2n} \big( v_0^2 + v_n^2 \big) + \frac{1}{n} \sum_{k=1}^{n-1} v_k^2 \] and corresponding inner product $\scp{\cdot}{\cdot}_n$. Furthermore, we need the semi-norm \[ \norm{v}_n^2 \df n \sum_{k=1}^n \big(v_k - v_{k-1}\big)^2. \] We use the same notation also for the matrices $\mathbf{x}$, meaning $\abs{\mathbf{x}}_n^2 = \sum_{i=1}^d \abs{\mathbf{x}_{i \cdot}}_n^2$ or with $\norm{\cdot}_n$ instead. With respect to $\scp{\cdot}{\cdot}_n$ the matrix $A^n$ is again symmetric and thus the following summation by parts formula holds. \begin{equation}\label{eq:SummationByParts} \scp{A^n v}{u}_n = - n \sum_{k=1}^n \big(v_k - v_{k-1}\big)\big(u_k - u_{k-1}\big),\quad \forall u,v \in V_n. \end{equation} In the next step let us construct the approximating noise in terms of the given realization of the driving cylindrical Wiener process $W$. 
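The discrete Laplacian \eqref{eq:DiscreteLaplace} with the modified boundary rows, the weighted inner product $\scp{\cdot}{\cdot}_n$ and the summation by parts formula \eqref{eq:SummationByParts} can be checked numerically; a minimal sketch (helper names ours):

```python
import numpy as np

def discrete_laplacian(n):
    """Matrix of A^n on R^{n+1}: centered second differences in the interior,
    (A^n v)_0 = 2 n^2 (v_1 - v_0) and (A^n v)_n = -2 n^2 (v_n - v_{n-1})."""
    A = np.zeros((n + 1, n + 1))
    for k in range(1, n):
        A[k, k - 1:k + 2] = n**2 * np.array([1.0, -2.0, 1.0])
    A[0, 0], A[0, 1] = -2.0 * n**2, 2.0 * n**2
    A[n, n - 1], A[n, n] = 2.0 * n**2, -2.0 * n**2
    return A

def ip_n(u, v):
    """Weighted inner product <u, v>_n with half weights at the boundary."""
    n = len(u) - 1
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 1.0 / (2 * n)
    return np.sum(w * u * v)
```

With these weights $A^n$ is symmetric and the summation by parts identity holds exactly, which is what makes the energy estimates of the next subsection carry over to the discrete level.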
Denote by $I_k \df ( \frac{2k-1}{2n}, \frac{2k+1}{2n})$ for $1 \leq k \leq n-1$, and set $I_0 \df (0, \frac{1}{2n})$, $I_n \df (\frac{2n-1}{2n}, 1)$. Recall that \begin{equation}\label{eq:BM} \scp{W(t)}{\abs{I_k}^{-\frac12} \mathbbm{1}_{I_k}}_H \fd \beta^n_{k}(t), \quad 0 \leq k \leq n \end{equation} defines a family of $n+1$ iid real-valued Brownian motions. Similarly, we define the $d(n+1)$ additional independent Brownian motions $\{ \beta^n_{j,k}\}$ due to $\mathbf{W}$. The spatial covariance structure given by the kernels $b$ and $b_i$ is discretized as follows, \begin{align*} b^n_{k,l} &\df \big(\abs{I_k}\abs{I_l}\big)^{-1} \int_{I_k}\int_{I_l} b(x, y) \dy \dx, \quad 0 \leq k,l \leq n \text{ and}\\ b^n_{i,k,l}(u, \mathbf{x}) &\df \big(\abs{I_k}\abs{I_l}\big)^{-1} \int_{I_k}\int_{I_l} b_i(\tilde{u}, \tilde{\mathbf{x}}, x, y) \dy \dx, \quad 0 \leq k,l \leq n, \end{align*} for $u \in V_n, \mathbf{x} \in \R^{d \times (n+1)}$. These discrete matrices allow us to replace both $b$ and $b_i$ by a piecewise constant kernel, i.\,e. for $ 0\leq k \leq n$ and $x \in I_k$ \begin{align*} &B W(t) \approx \sum_{l=0}^n b^n_{k,l} \scp{W(t)}{\mathbbm{1}_{I_l}} = \sum_{l=0}^n \abs{I_l}^{\frac12} b^n_{k,l} \beta^n_l(t) \fd \Big( B^n P_n W(t)\Big)_k, \end{align*} where $P_n u = (\scp{u}{\abs{I_l}^{-\frac12} \mathbbm{1}_{I_l}})_{0 \leq l \leq n}$. In the same way we obtain $B_i^n(u,\mathbf{x}) P_n W_i(t)$, $1\leq i \leq d$. Denote by $W^n \df P_n W$ and $W_i^n \df P_n W_i$, $1\leq i\leq d$, the resulting $(n+1)$-dimensional Brownian motions; the finite-dimensional system of stochastic differential equations approximating equation \eqref{eq:SRDE} is \begin{equation}\label{eq:Approximation} \begin{aligned} \mathrm{d} U^n(t) &= \Big( A^n U^n(t) + f^n \big(U^n(t), \mathbf{X}^n(t)\big) \Big) \dt + B^n \, \mathrm{d} W^n(t), & U^n(0) &= P_n u_0,\\ \mathrm{d} X_i^n(t) &= f_i\big(U^n(t),\mathbf{X}^n(t)\big) \dt + B^n_i\big(U^n(t), \mathbf{X}^n(t)\big) \, \mathrm{d} W_i^n(t), & X_i^n(0) &= P_n X_i(0). 
\end{aligned} \end{equation}
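The cell decomposition $I_0, \dots, I_n$ and the cell-averaged kernels $b^n_{k,l}$ can be sketched as below. The uniform interior quadrature nodes with `m` points per cell are our own simplification of the exact normalized cell averages, and the helper names are hypothetical:

```python
import numpy as np

def cells(n):
    """Edges of the cells I_0, ..., I_n: I_k = ((2k-1)/2n, (2k+1)/2n),
    truncated to (0, 1) at the two ends; returns n+2 edge points."""
    return np.clip((2 * np.arange(n + 2) - 1) / (2 * n), 0.0, 1.0)

def averaged_kernel(b, n, m=40):
    """Approximate cell averages b^n_{k,l} of a kernel b(x, y) by averaging
    over m uniform interior nodes per cell (stands in for the exact
    (|I_k||I_l|)^{-1} double integral)."""
    e = cells(n)
    pts = [np.linspace(e[k], e[k + 1], m + 2)[1:-1] for k in range(n + 1)]
    B = np.empty((n + 1, n + 1))
    for k in range(n + 1):
        for l in range(n + 1):
            X, Y = np.meshgrid(pts[k], pts[l], indexing="ij")
            B[k, l] = b(X, Y).mean()
    return B
```

A constant kernel is averaged exactly, and a symmetric kernel produces a symmetric matrix, mirroring the symmetry of $b$ itself.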
Standard results on stochastic differential equations imply the existence of a unique strong solution $(U^n(t),\mathbf{X}^n(t))$ to \eqref{eq:Approximation}, see e.\,g. \cite[Chapter 3]{prevot}. In order to compute the error made by using the approximate solution we embed $U^n$ and $X^n_i$ into $C([0,T]; V)$ by linear interpolation in the space variable as $(\tilde{U}^n, \tilde{\mathbf{X}}^n)$. We can now state the main result of this part.
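To illustrate how a system of the form \eqref{eq:Approximation} is integrated in practice, here is a toy Euler--Maruyama sketch. The drift `f`, the gating dynamics `f_gate`, the scalar additive noise amplitude `sigma`, and the omission of the multiplicative noise on $\mathbf{X}$ are all our own simplifications; in particular the kernel operators $B^n$, $B^n_i$ are not implemented here.

```python
import numpy as np

def simulate(n, d, T, steps, f, f_gate, sigma, u0, x0, seed=0):
    """Toy Euler-Maruyama scheme: dU = (A^n U + f(U, X)) dt + sigma dW with
    the Neumann boundary rows from above, and deterministic gating dynamics
    dX_i = f_gate_i(U, X) dt clipped to [0, 1] at the discrete level."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    U, X = u0.copy(), x0.copy()
    for _ in range(steps):
        AU = np.empty(n + 1)
        AU[1:-1] = n**2 * (U[2:] - 2.0 * U[1:-1] + U[:-2])
        AU[0] = 2.0 * n**2 * (U[1] - U[0])
        AU[-1] = -2.0 * n**2 * (U[-1] - U[-2])
        U = U + (AU + f(U, X)) * dt + sigma * np.sqrt(dt) * rng.normal(size=n + 1)
        X = np.clip(X + f_gate(U, X) * dt, 0.0, 1.0)
    return U, X
```

Note that the explicit scheme requires $\dt \lesssim n^{-2}$ for stability of the stiff linear part; the clipping step enforces the invariance of $[0,1]^d$ that Lemma \ref{lem:InvarianceX} provides on the continuous level.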
\begin{thm}\label{thm:Approx} Suppose the assumptions from Theorem \ref{thm:Ex+Unique} are satisfied and recall the definition of $G_t$ in \eqref{eq:ProcessG}. Define the error as $E^n(t) \df ( U(t) - \tilde{U}^n(t), \mathbf{X}(t) - \tilde{\mathbf{X}}^n(t))$, then there exists a constant $C_{\ref{thm:Approx}}$ such that \begin{equation}\label{eq:NotSoStrongError} \EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{E^n(t)}_{H_{d+1}}^2} \leq 2\EV{\norm{E^n(0)}_{H_{d+1}}^2} + \frac{C_{\ref{thm:Approx}}}{n^2}. \end{equation} \end{thm} There is an immediate corollary using the Borel--Cantelli lemma. \begin{cor}\label{cor:PathwiseRate} For every $\eps \in (0,1)$ there exists a $\PP$-a.\,s. finite random variable $C_\eps(\omega)$ such that \[ \sup_{t \in [0,T]} \norm{E^n(t)}_{H_{d+1}}^2 \leq \frac{C_\eps}{n^{1-\eps}} \quad \PP\text{-a.\,s.} \] \end{cor} \begin{rem} Possible generalizations of this theorem include error estimates in $L^p$ for general $2 \leq p < \infty$ as in e.\,g. \cite[Theorem 3.1]{SauerLattice}. In principle this is only a technical matter and involves more general a priori estimates for $U^n$ and $\mathbf{X}^n$ than the ones obtained here, but no new techniques or ideas. Having \eqref{eq:NotSoStrongError} for all finite $p$ would also imply a pathwise convergence rate of almost $\sfrac1n$, see e.\,g. \cite[Lemma 2.1]{KloedenNeuenkirch}. \end{rem} Theorem \ref{thm:Approx} does not yield a strong convergence rate, because $\exp[G_t]$ is not necessarily integrable. However, if it is integrable, we can easily deduce a strong convergence rate of $\sfrac1n$ in $L^p$, $p \leq p^\ast < 2$ (with $p^\ast$ depending on the integrability of $G_t$), by H\"older's inequality. One can also say more about the strong convergence rate when the drift is one-sided Lipschitz, or in other words quasi-monotone, see \cite{SauerLattice}. We prove another theorem under the following additional assumptions on $f, f_i$. 
\begin{assum}\label{assum:Monotone} Let $f$ and $\mathbf{f} = (f_i)_{1 \leq i \leq d}$ satisfy \[ \vek{f(u, \mathbf{x}) - f(v, \mathbf{y})}{ \mathbf{f}(u, \mathbf{x}) - \mathbf{f}(v, \mathbf{y})} \cdot \vek{u - v}{\mathbf{x} - \mathbf{y}} \leq L \big( \abs{u-v}^2 + \abs{\mathbf{x} - \mathbf{y}}^2 \big) \] for some $L > 0$ and all $u,v \in \R$, $\mathbf{x}, \mathbf{y} \in \R^d$. \end{assum} \begin{thm}\label{thm:Monotone} With the additional Assumption \ref{assum:Monotone}, there exists a constant $C_{\ref{thm:Monotone}}$ such that \begin{equation}\label{eq:StrongError} \EV{ \sup_{t \in [0,T]} \norm{E^n(t)}_{H_{d+1}}^2} \leq 2 \mathrm{e}^{LT}\EV{ \norm{E^n(0)}_{H_{d+1}}^2} + \frac{C_{\ref{thm:Monotone}}}{n^2}. \end{equation}\end{thm} The proofs of these theorems are contained in the following subsections.
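Assumption \ref{assum:Monotone} is a joint one-sided Lipschitz (quasi-monotonicity) condition. As an illustration of our own, a cubic drift such as $u \mapsto u - u^3$ satisfies the scalar version with $L = 1$, since $(f(u)-f(v))(u-v) = (u-v)^2\big(1 - (u^2+uv+v^2)\big) \leq (u-v)^2$, while $u \mapsto u^2$ fails it for every fixed $L$ on all of $\R$. A small randomized probe (helper name hypothetical):

```python
import numpy as np

def check_one_sided_lipschitz(f, L, samples=10000, box=5.0, seed=0):
    """Probe the scalar condition (f(u) - f(v))(u - v) <= L (u - v)^2
    on random pairs drawn from [-box, box]; a necessary check only."""
    rng = np.random.default_rng(seed)
    u, v = rng.uniform(-box, box, size=(2, samples))
    return bool(np.all((f(u) - f(v)) * (u - v) <= L * (u - v)**2 + 1e-12))
```

Such a randomized probe can of course only refute, not prove, the condition, but it is a useful sanity check before running the scheme of Theorem \ref{thm:Monotone}.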
\subsection{Uniform A Priori Estimates} In addition to the a priori estimates on $(U, \mathbf{X})$ from Theorem \ref{thm:Ex+Unique}, uniform a priori estimates for $(U^n, \mathbf{X}^n)$ are essential for the proof of Theorem \ref{thm:Approx}. Let us start with a statement concerning the $L^\infty$-bounds.
\begin{lem}\label{lem:BoundApprox} Under the assumptions of Theorem \ref{thm:Approx}, $\tilde{\mathbf{X}}^n \in \mathcal{X}$ and $\tilde{U}^n \in \mathcal{U}$ with the same uniform bound $R_t$ as for $U$. \end{lem} \begin{proof} Obviously $\tilde{U}^n \in C([0,T], C_b^1(\overline{\mathcal{O}}, \R))$. We can apply Lemma \ref{lem:InvarianceX}, or rather imitate its proof, to obtain $\tilde{\mathbf{X}}^n \in \mathcal{X}$ for all $n \in \N$. Since all arguments are point-wise in $x \in \mathcal{O}$, they also apply to the finite subset $\frac{1}{n} \{0, \dots, n\}$.
Concerning the uniform bound, note that for the solution to \[ \mathrm{d}Y^n(t) = A^n Y^n(t) \dt + B^n \,\mathrm{d}W^n(t), \quad Y^n(0)=0, \] it holds that $R_t^{Y^n} \df \sup_{s \in [0,t]} \max_{0\leq k\leq n} \abs{Y^n_k(s)} \leq R_t^Y$ by Lemma \ref{app:Lem2}. In particular, the right-hand side is independent of $n$ and we can apply the proof of Lemma \ref{lem:PathwiseBoundforU} with the same uniform cut-off. \end{proof} Next, we derive an improved a priori estimate providing more than the canonical regularity for $U^n$. This is similar to the question whether $U$ is a strong solution to \eqref{eq:SPDEforU}, in particular whether $U \in D(A)$.
\begin{lem}\label{lem:ImprovedAPriori} With the assumptions of Theorem \ref{thm:Approx} and arbitrary $n \in \N$ it holds that \begin{equation}\label{eq:ImprovedAPriori} \begin{split} &\EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \Big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \Big) + 2 \int_0^T \mathrm{e}^{-G_t} \abs{A^n U^n(t)}_n^2 \dt}\\ &\quad\leq K \EV{ \Big( \norm{U^n(0)}_n^2 + \norm{\mathbf{X}^n(0)}_n^2 + T \norm{B}_{L_2(H,V)}^2\Big)}. \end{split} \end{equation} \end{lem} \begin{proof} Consider It\^o's formula applied to $\norm{U^n(t)}_n^2$ \begin{align*} \mathrm{d} \norm{U^n(t)}_n^2 &= 2n \sum_{k=1}^n \big( U^n_k(t) - U^n_{k-1}(t) \big) \Big( \big(A^n U^n(t)\big)_k - \big(A^n U^n(t)\big)_{k-1}\Big) \dt\\ &\quad + 2n \sum_{k=1}^n \big( U^n_k(t) - U^n_{k-1}(t) \big) \Big( f\big(U^n_k(t), \mathbf{X}^n_k(t) \big) - f \big(U^n_{k-1}(t), \mathbf{X}^n_{k-1}(t) \big) \Big) \dt\\ &\quad + 2 \sqrt{n} {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \big(U^n_k(t) - U^n_{k-1}(t)\big) \big(b_{k,l}^n - b_{k-1,l}^n\big) \,\mathrm{d} \beta_l^n(t) + {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \big( b^n_{k,l} - b^n_{k-1,l}\big)^2 \dt.\\ \intertext{With the summation by parts formula \eqref{eq:SummationByParts} for the linear part and inequality \eqref{eq:LocalLipschitz} for $f$ it is straightforward to obtain} &\leq \Big( - 2 \abs{A^n U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 + \big( L^2 \big(1 + R_t^{r-1}\big)^2 (1 + \rho_0)^2 + 1\big) \norm{U^n(t)}_n^2 \Big.\\ &\qquad \Big. + \norm{B}_{L_2(H,V)}^2 \Big) \dt + 2 \sqrt{n} {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \big(U^n_k(t) - U^n_{k-1}(t)\big) \big(b_{k,l}^n - b_{k-1,l}^n\big) \,\mathrm{d} \beta_l^n(t). \end{align*} The norm of $B$ in $L_2(H,V)$ appears naturally as described below in detail for the multiplicative noise in the $\mathbf{X}$-variable. The estimate above involves $\mathbf{X}^n$ in a stronger norm, thus in a similar fashion apply It\^o's formula to $\norm{X_i^n(t)}_n^2$, $1 \leq i \leq d$. 
\begin{align*} &\mathrm{d} \norm{X^n_i(t)}_n^2 = 2n \sum_{k=1}^n \big( X^n_{i,k}(t) - X^n_{i, k-1}(t)\big) \Big( f_i \big( U^n_k(t), \mathbf{X}^n_k(t)\big) - f_i\big( U^n_{k-1}(t), \mathbf{X}^n_{k-1}(t)\big)\Big)\dt\\ &\quad + 2 \sqrt{n} {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \big( X^n_{i,k}(t) - X^n_{i, k-1}(t)\big) \Big( b^n_{i,k,l} \big( U^n(t), \mathbf{X}^n(t)\big) - b^n_{i,k-1,l}\big( U^n(t), \mathbf{X}^n(t)\big)\Big) \,\mathrm{d}\beta_{i,l}^n(t)\\ &\quad + {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \Big( b^n_{i,k,l}\big(U^n(t), \mathbf{X}^n(t)\big) - b^n_{i,k-1,l}\big(U^n(t), \mathbf{X}^n(t)\big)\Big)^2 \dt. \end{align*} Again, \eqref{eq:LocalLipschitz} and the linear growth condition together with $b_i(\tilde{U}^n, \tilde{\mathbf{X}}^n) \in W^{1,2}(\mathcal{O}^2)$, in particular \begin{align*} &{\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} {\kern 0.25em} \int_{I_k} {\kern -0.4em}\int_{I_{k-1}} {\kern -0.35em}\int_{I_l}{\kern -0.4em} \frac{\big( b_i \big(\tilde{U}^n(t,x), \tilde{\mathbf{X}}^n(t,x), x, y\big) - b_i\big(\tilde{U}^n(t,x'), \tilde{\mathbf{X}}^n(t,x'), x', y\big)\big)^2}{\abs{I_k} \abs{I_{k-1}} \abs{I_l}} \dy \dx' \dx\\ &\quad \leq {\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} {\kern 0.25em}\int_{I_l} \int_{I_{k-1} \cup I_k} \Big(\partial_z (b_i(\tilde{U}^n(t,z), \tilde{\mathbf{X}}^n(t,z), z, y))\Big)^2 \dz \dy \leq \norm{b_i \big(\tilde{U}^n(t), \tilde{\mathbf{X}}^n(t)\big)}_{W^{1,2}(\mathcal{O}^2)}^2 \end{align*} imply \begin{align*} &\mathrm{d} \norm{\mathbf{X}^n(t)}_n^2 \leq \Big( \Big( 4L^2 \sum_{i=1}^d \big(1 + \rho_i(R_t)\big)^2 + 2dL^2 + 1\Big) \norm{\mathbf{X}^n(t)}_n^2 + (2dL^2 + 1) \norm{U^n(t)}_n^2\Big) \dt\\ & + 2 \sqrt{n} \sum_{i=1}^d{\kern -0.5em}\sum_{k=1,l=0}^n {\kern -0.5em} \big( X^n_{i,k}(t) - X^n_{i, k-1}(t)\big) \Big( b^n_{i,k,l} \big( U^n(t), \mathbf{X}^n(t)\big) - b^n_{i,k-1,l}\big( U^n(t), \mathbf{X}^n(t)\big)\Big) \,\mathrm{d}\beta_{i,l}^n(t). \end{align*} Recall the definition of $G_t$ in \eqref{eq:ProcessG} and combine 
the inequalities above to \begin{align*} &\EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \Big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \Big) + 2 \int_0^T \mathrm{e}^{-G_t} \abs{A^n U^n(t)}_n^2 \dt}\\ &\quad\leq \EV{ \Big( \norm{U^n(0)}_n^2 + \norm{\mathbf{X}^n(0)}_n^2 + T \norm{B}_{L_2(H,V)}^2\Big) + \sup_{t \in [0,T]} \abs{M^U(t)} + \sup_{t \in [0,T]} \abs{M^{\mathbf{X}}(t)}}, \end{align*} where the two stochastic integrals are denoted by $M^U$ and $M^{\mathbf{X}}$, respectively. The Burkholder-Davis-Gundy inequality and some standard estimates yield \[ \EV{\sup_{t \in [0,T]} \abs{M^U(t)}} \leq \frac12 \EV{\sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{U^n(t)}_n^2} + K T \norm{B}_{L_2(H,V)}^2 \] and similarly \[ \EV{\sup_{t \in [0,T]} \abs{M^{\mathbf{X}}(t)}} \leq \frac12 \EV{\sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{\mathbf{X}^n(t)}_n^2} + K dL^2 \EV{\int_0^T \mathrm{e}^{-G_t} \big(\norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2\big) \dt}, \] which concludes the proof. \end{proof}
\subsection{Proof of Theorem \ref{thm:Approx}} In this part we estimate all the contributions to the error $E^n(t) \df ( U(t) - \tilde{U}^n(t), \mathbf{X}(t) - \tilde{\mathbf{X}}^n(t))$. Recall the equations for the linear interpolations $\tilde{U}^n = \iota_n U^n$ and $\tilde{\bfX}^n = \iota_n \mathbf{X}^n$: \begin{align*} \mathrm{d} \tilde{U}^n(t) &= \Big( \iota_n A^n U^n(t) + \iota_n f^n \big(U^n(t), \mathbf{X}^n(t)\big) \Big) \dt + \iota_n B^n \, \mathrm{d} W^n(t), & \tilde{U}^n(0) &= \iota_n P_n u_0,\\ \mathrm{d} \tilde{X}_i^n(t) &= \iota_n f_i\big(U^n(t),\mathbf{X}^n(t)\big) \dt + \iota_n B^n_i\big(U^n(t), \mathbf{X}^n(t)\big) \, \mathrm{d} W_i^n(t), & \tilde{X}_i^n(0) &= \iota_n P_n X_i(0). \end{align*} In the following we consider everything at some fixed time $t$ and, for clarity of presentation, we drop the explicit time dependence from the notation.
\noindent\textbf{Approximation Error of the Laplacian:} We have that \begin{align*} &2\dualp{A U - \iota_n A^n U^n}{U - \tilde{U}^n}\\ &\quad= 2\dualp{A U - A \tilde{U}^n}{ U - \tilde{U}^n} + 2\dualp{A \tilde{U}^n - \iota_n A^n U^n}{U - \tilde{U}^n}\\ &\quad\leq - 2\norm{U - \tilde{U}^n}_V^2 + 2\norm{A \tilde{U}^n - \iota_n A^n U^n}_{V^\ast} \norm{U - \tilde{U}^n}_V\\ &\quad \leq - \norm{U - \tilde{U}^n}_V^2 + \norm{A \tilde{U}^n - \iota_n A^n U^n}_{V^\ast}^2. \end{align*} Furthermore, it holds that \begin{equation} A \tilde{U}^n = \frac{1}{2n} \Big( \big(A^n U^n\big)_0 \delta_0(x) + \big(A^n U^n\big)_n \delta_1(x)\Big) + \frac{1}{n} \sum_{k=1}^{n-1} \big(A^n U^n\big)_k \delta_{\frac{k}{n}}(x) \end{equation} and \begin{equation} \iota_n A^n U^n = \sum_{k=1}^n \big(A^n U^n\big)_k (nx - k+1) \mathbbm{1}_{[ \frac{k-1}{n}, \frac{k}{n}]}(x) + \big(A^n U^n\big)_{k-1} (k-nx) \mathbbm{1}_{[ \frac{k-1}{n}, \frac{k}{n}]}(x). \end{equation} Now let $\phi \in C^\infty(\mathcal{O})$; then \begin{align*} &\dualp{A \tilde{U}^n - \iota_n A^n U^n}{\phi}\\ &= \sum_{k=1}^{n-1} \big(A^n U^n \big)_k \Big( \frac{1}{n} \phi \big(\tfrac{k}{n}\big) - \int_{\frac{k-1}{n}}^{\frac{k}{n}} (nx-k+1) \phi(x) \dx - \int_{\frac{k}{n}}^{\frac{k+1}{n}} (k + 1 - nx) \phi(x) \dx \Big)\\ &\quad+ \big(A^n U^n\big)_0 \Big( \frac{1}{2n} \phi(0) - \int_0^{\frac{1}{n}} (1 - nx) \phi(x) \dx \Big)\\ &\quad + \big(A^n U^n\big)_n \Big( \frac{1}{2n} \phi(1) - \int_{1-\frac{1}{n}}^1 (nx - n+1) \phi(x) \dx \Big)\\ &= \sum_{k=1}^{n-1} \big(A^n U^n \big)_k \Big( \int_{\frac{k-1}{n}}^{\frac{k}{n}} {\kern -0.5em}(nx-k+1) \big( \phi \big(\tfrac{k}{n}\big) - \phi(x)\big) \dx + \int_{\frac{k}{n}}^{\frac{k+1}{n}} {\kern -0.5em} (k + 1 - nx) \big( \phi \big(\tfrac{k}{n}\big) - \phi(x)\big) \dx \Big)\\ &\quad+ \big(A^n U^n\big)_0 \int_0^{\frac{1}{n}} (1 - nx)\big( \phi(0) - \phi(x)\big) \dx + \big(A^n U^n\big)_n \int_{1-\frac{1}{n}}^1 (nx - n+1) \big(\phi(1) - \phi(x) \big) \dx.
\end{align*} Clearly, we can write $\phi(\frac{k}{n}) - \phi(x) = \int_x^{\frac{k}{n}} \phi' (y) \dy$, and together with the Cauchy-Schwarz and Jensen inequalities it follows that \begin{align*} \dualp{A \tilde{U}^n - \iota_n A^n U^n}{\phi} \leq \frac{2}{n} \abs{A^n U^n}_n \Bigg( \sum_{k=1}^n \int_{\frac{k-1}{n}}^{\frac{k}{n}} \phi'(y)^2 \dy \Bigg)^\frac12 = \frac{2}{n} \abs{A^n U^n}_n \norm{\phi}_V. \end{align*} This inequality extends to all $\phi \in V$, and in particular we obtain \begin{equation} 2\dualp{AU - \iota_n A^n U^n}{U - \tilde{U}^n} \leq - \norm{U - \tilde{U}^n}_V^2 + \frac{4}{n^2} \abs{A^n U^n}_n^2. \end{equation}
\textbf{Approximation Error of the Nonlinear Drift:} We start with the error coming from the first variable, where we obtain \begin{align*} &2 \scp{f\big(U, \mathbf{X}\big) - \iota_n f \big(U^n, \mathbf{X}^n\big)}{ U - \tilde{U}^n}_H\\ &\quad\leq 2\norm{ f\big(U, \mathbf{X}\big) - f \big( \tilde{U}^n, \tilde{\bfX}^n\big)}_H \norm{ U - \tilde{U}^n}_H + 2\scp{f \big( \tilde{U}^n, \tilde{\bfX}^n\big) - \iota_n f \big(U^n, \mathbf{X}^n\big)}{U - \tilde{U}^n}_H\\ &\quad\leq \Big(L^2 \big( 1 + R_t^{r-1} \big)^2 ( 1 + \rho_0)^2 + 1 \Big) \norm{U - \tilde{U}^n}_H^2 + \norm{\mathbf{X} - \tilde{\bfX}^n}_{H_d}^2 \\ &\qquad+ 2\scp{f \big( \tilde{U}^n, \tilde{\bfX}^n\big) - \iota_n f \big(U^n, \mathbf{X}^n\big)}{U - \tilde{U}^n}_H \end{align*} by \eqref{eq:LocalLipschitz}. The latter term can be estimated as follows: \begin{align*} &2\scp{f \big( \tilde{U}^n, \tilde{\bfX}^n\big) - \iota_n f \big(U^n, \mathbf{X}^n\big)}{U - \tilde{U}^n}_H\\ &\quad= \sum_{k=1}^n \intwithlimits{k} \Big[ (nx - k+1) \Big( f\big(\tilde{U}^n(x), \tilde{\bfX}^n(x)\big) - f\big(U^n_k, \mathbf{X}^n_k\big)\Big)\Big.\\ &\qquad\qquad\qquad \Big.+ (k - nx)\Big( f\big(\tilde{U}^n(x), \tilde{\bfX}^n(x)\big) - f\big( U^n_{k-1}, \mathbf{X}^n_{k-1}\big) \Big)\Big] \big( U(x) - \tilde{U}^n(x)\big) \dx.\\ \intertext{With \eqref{eq:LocalLipschitz}, the relations $\abs{\tilde{U}^n(x) - U^n_k} = (k-nx) \abs{U^n_k - U^n_{k-1}}$ and $\abs{\tilde{U}^n(x) - U^n_{k-1}} = (nx-k+1) \abs{U^n_k - U^n_{k-1}}$, and the analogous relations for $\mathbf{X}^n$, it follows that} &\quad\leq 4 L \big( 1 + R_t^{r-1} \big) ( 1 + \rho_0) \sum_{k=1}^n \big( \abs{U^n_k - U^n_{k-1}}^2 + \abs{\mathbf{X}^n_k - \mathbf{X}^n_{k-1}}^2\big)^\frac12\\ &\qquad \times \intwithlimits{k} (nx-k+1) (k-nx) \abs{U(x) - \tilde{U}^n(x)} \dx\\ &\quad \leq L^2 \big( 1 + R_t^{r-1} \big)^2 ( 1 + \rho_0)^2 \norm{U-\tilde{U}^n}_H^2 + \frac{K}{n^2} \Big( \norm{U^n}_n^2 + \norm{\mathbf{X}^n}_n^2 \Big).
\end{align*} Thus, in the end we have obtained the following estimate. \begin{equation}\label{eq:ErrorF} \begin{split} &2 \scp{f\big(U, \mathbf{X}\big) - \iota_n f \big(U^n, \mathbf{X}^n\big)}{ U - \tilde{U}^n}_H \leq \frac{K}{n^2} \Big( \norm{U^n}_n^2 + \norm{\mathbf{X}^n}_n^2 \Big)\\ &\qquad +\Big(2L^2 \big( 1 + R_t^{r-1} \big)^2 ( 1 + \rho_0)^2 + 1 \Big) \norm{U-\tilde{U}^n}_H^2 + \norm{\mathbf{X} - \tilde{\bfX}^n}_{H_d}^2. \end{split} \end{equation} The estimates for each $f_i$ work the same way, in particular it holds that \begin{equation}\label{eq:ErrorFi} \begin{split} &2 \scp{\mathbf{f}\big(U, \mathbf{X}\big) - \iota_n \mathbf{f} \big(U^n, \mathbf{X}^n\big)}{ \mathbf{X} - \tilde{\bfX}^n}_{H_d} \leq \frac{K}{n^2} \Big( \norm{U^n}_n^2 + \norm{\mathbf{X}^n}_n^2 \Big)\\ &\qquad + \Big(4L^2 \sum_{i=1}^d \big( 1 + \rho_i(R_t) \big)^2 + 1 \Big) \norm{\mathbf{X}-\tilde{\bfX}^n}_{H_d}^2 + \norm{U - \tilde{U}^n}_H^2. \end{split} \end{equation}
\textbf{Approximation Error of the Covariance Operators:} We have \begin{align*} &\norm{B - \iota_n B^n P_n}_{L_2(H)}^2\\ &\quad= {\kern -0.5em} \sum_{k=1, l=0}^n {\kern -0.25em} \intwithlimits{k} \intwithlimits{l} \Big[ (nx-k+1) \big( b(x,y) - b^n_{k,l}\big) + (k-nx) \big( b(x,y) - b^n_{k-1,l} \big) \Big]^2 \dy \dx. \end{align*} Observe that, for $x \in I_k$ and $y \in I_l$, \begin{align*} b(x,y) - b^n_{k,l} &= \big( \abs{I_k} \abs{I_l}\big)^{-1} \int_{I_k} \int_{I_l} \big( b(x,y) - b(x', y) + b(x', y) - b(x', y') \big) \dy' \dx'\\ &= \big( \abs{I_k} \abs{I_l}\big)^{-1} \int_{I_k} \int_{I_l} \Big( \int_{x'}^x \partial_z b(z,y) \dz + \int_{y'}^y \partial_z b(x', z) \dz \Big) \dy' \dx'. \end{align*} With this relation it is straightforward to obtain \begin{equation}\label{eq:ErrorB} \begin{split} \norm{B - \iota_n B^n P_n}_{L_2(H)}^2 &\leq \frac{2}{n^2} \int_0^1 \int_0^1 \Big( \abs{\partial_x b(x,y)}^2 + \abs{\partial_y b(x,y)}^2 \Big) \dy \dx\\ &\leq \frac{2}{n^2} \norm{b}_{W^{1,2}(\mathcal{O}^2)}^2 = \frac{2}{n^2} \norm{B}_{L_2(H,V)}^2. \end{split} \end{equation} Of course, the It\^o correction coming from the $\mathbf{X}$-variable can be estimated similarly, with a combination of the calculation above and the one for the nonlinear drift. We have \begin{align*} &\norm{B_i(U, \mathbf{X}) - \iota_n B_i^n (U^n, \mathbf{X}^n) P_n}_{L_2(H)}^2\\ &\quad\leq 2 \norm{B_i(U, \mathbf{X}) - B_i(\tilde{U}^n, \tilde{\bfX}^n)}_{L_2(H)}^2 + 2 \norm{B_i(\tilde{U}^n, \tilde{\bfX}^n) - \iota_n B_i^n (U^n, \mathbf{X}^n) P_n}_{L_2(H)}^2\\ &\quad\leq 4 L^2 \Big( \norm{U- \tilde{U}^n}_H^2 + \norm{\mathbf{X} - \tilde{\bfX}^n}_{H_d}^2\Big) + 2 \norm{B_i(\tilde{U}^n, \tilde{\bfX}^n) - \iota_n B_i^n (U^n, \mathbf{X}^n) P_n}_{L_2(H)}^2 \end{align*} and the latter term satisfies \[ \norm{B_i(\tilde{U}^n, \tilde{\bfX}^n) - \iota_n B_i^n (U^n, \mathbf{X}^n) P_n}_{L_2(H)}^2 \leq \frac{K}{n^2} \norm{B_i\big( \tilde{U}^n, \tilde{\mathbf{X}}^n\big)}_{L_2(H,V)}^2 \leq \frac{K L^2}{n^2} \big( \norm{U^n}_n^2 + \norm{\mathbf{X}^n}_n^2\big). \]
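In more detail, the step leading to \eqref{eq:ErrorB} can be sketched as follows (a sketch using the Cauchy-Schwarz and Jensen inequalities together with $\abs{I_k} \leq \sfrac1n$): for $x, x' \in I_k$ and $y, y' \in I_l$, \[ \Big( \int_{x'}^{x} \partial_z b(z,y) \dz \Big)^2 \leq \frac1n \int_{I_k} \big( \partial_x b(z,y) \big)^2 \dz \quad\text{and}\quad \Big( \int_{y'}^{y} \partial_z b(x',z) \dz \Big)^2 \leq \frac1n \int_{I_l} \big( \partial_y b(x',z) \big)^2 \dz, \] hence, averaging over $x' \in I_k$ and $y' \in I_l$ and using $(a+b)^2 \leq 2a^2 + 2b^2$, \[ \big( b(x,y) - b^n_{k,l} \big)^2 \leq \frac2n \int_{I_k} \big( \partial_x b(z,y) \big)^2 \dz + \frac2n \frac{1}{\abs{I_k}} \int_{I_k} \int_{I_l} \big( \partial_y b(x',z) \big)^2 \dz \dx'. \] The convex weights $(nx-k+1)$ and $(k-nx)$ are handled by convexity of the square, and integrating over $x \in I_k$ and $y \in I_l$ and summing over $k$ and $l$ then yields \eqref{eq:ErrorB}.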
\begin{proof}[Proof of Theorem \ref{thm:Approx}] We can apply It\^o's formula to the square of the $H_{d+1}$-norm of $E^n(t)$ and obtain \begin{align*} \mathrm{d} \norm{E^n(t)}_{H_{d+1}}^2 &= 2 \dualp{A U(t) - \iota_n A^n U^n(t)}{U(t) - \tilde{U}^n(t)} \dt\\ &\quad + 2 \scp{f\big( U(t), \mathbf{X}(t)\big) - \iota_n f \big( U^n(t), \mathbf{X}^n(t)\big)}{U(t) - \tilde{U}^n(t)}_H \dt\\ &\quad + 2 \scp{ \mathbf{f} \big(U(t), \mathbf{X}(t)\big) - \iota_n \mathbf{f} \big( U^n(t), \mathbf{X}^n(t)\big)}{ \mathbf{X}(t) - \tilde{\bfX}^n(t)}_{H_d} \dt\\ &\quad + 2 \scp{U(t) - \tilde{U}^n(t)}{\Big( B - \iota_n B^n P_n\Big) \dwt}_H + \norm{ B - \iota_n B^n P_n}_{L_2(H)}^2 \dt\\ &\quad + 2 \scp{\mathbf{X}(t) - \tilde{\bfX}^n(t)}{ \Big( \mathbf{B} \big( U(t), \mathbf{X}(t)\big) - \iota_n \mathbf{B}^n \big(U^n(t), \mathbf{X}^n(t)\big) P_n \Big) \, \mathrm{d} \mathbf{W}(t)}_{H_d}\\ &\quad + \sum_{i=1}^d \norm{ B_i\big( U(t), \mathbf{X}(t)\big) - \iota_n B_i^n \big( U^n(t), \mathbf{X}^n(t)\big) P_n}_{L_2(H)}^2 \dt. \end{align*} Everything above has been estimated in the three preceding steps, except for the stochastic integrals. Recall the process $G_t$; we need to estimate the corresponding equation, weighted by $\exp[-G_t]$, in the supremum over $t \in [0,T]$. This involves the Burkholder-Davis-Gundy inequality after integration with respect to $\PP$. The standard decomposition, see e.\,g. Lemma \ref{lem:ImprovedAPriori}, bounds the quadratic variation of the stochastic integral from above in terms of the left hand side, and together with the It\^o correction term investigated above this yields the following inequality in a straightforward way: \begin{align*} \EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{ E^n(t)}_{H_{d+1}}^2} &\leq 2 \EV{ \norm{ E^n(0)}_{H_{d+1}}^2} + \frac{8}{n^2} \EV{ \int_0^T \mathrm{e}^{-G_t} \abs{A^n U^n(t)}_n^2 \dt}\\ &\quad + \frac{K}{n^2} \big(dL^2 + 1 \big) \EV{ \int_0^T \mathrm{e}^{-G_t} \big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \big) \dt}.
\end{align*} By Lemma \ref{lem:ImprovedAPriori} we know that the expectations on the right hand side are uniformly bounded in $n$, hence there exists $C_{\ref{thm:Approx}}$ with \[ \EV{ \sup_{t \in [0,T]} \mathrm{e}^{-G_t} \norm{ E^n(t)}_{H_{d+1}}^2} \leq 2\EV{ \norm{ E^n(0)}_{H_{d+1}}^2} + \frac{C_{\ref{thm:Approx}}}{n^2}.\qedhere \] \end{proof}
\subsection{Proof of Theorem \ref{thm:Monotone}} The problematic term $\exp[-G_t]$ is evidently due to the nonlinear drift. Going back to the derivation of the error estimates \eqref{eq:ErrorF} and \eqref{eq:ErrorFi}, we observe that a slightly different application of Young's inequality together with the one-sided Lipschitz condition yields \begin{equation} \begin{split} &2\langle \vek{f(U, \mathbf{X}) - \iota_n f(U^n, \mathbf{X}^n)}{\mathbf{f}(U, \mathbf{X}) - \iota_n \mathbf{f}(U^n, \mathbf{X}^n)}, \vek{U-\tilde{U}^n}{\mathbf{X} - \tilde{\mathbf{X}}^n}\rangle_{H_{d+1}}\\ &\quad\leq 2L \big( \norm{U - \tilde{U}^n}_H^2 + \norm{\mathbf{X} - \tilde{\mathbf{X}}^n}_{H_d}^2 \big) + \frac{K}{n^2} G_t \big( \norm{U^n}_n^2 + \norm{\mathbf{X}^n}_n^2 \big). \end{split} \end{equation} Gronwall's inequality now implies \begin{align*} \EV{ \sup_{t \in [0,T]} \norm{ E^n(t)}_{H_{d+1}}^2} &\leq \mathrm{e}^{4L T}\EV{ \norm{ E^n(0)}_{H_{d+1}}^2} + \frac{4}{n^2} \mathrm{e}^{4L T} \EV{ \int_0^T \abs{A^n U^n(t)}_n^2 \dt}\\ &\quad + \frac{K}{n^2} \big(dL^2 + 1 \big)\mathrm{e}^{4L T} \EV{ \int_0^T G_t \big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \big) \dt}. \end{align*} It remains to show that the right hand side is finite; this requires a modification of Lemma \ref{lem:ImprovedAPriori} that yields a priori estimates in $L^p$ for $p>2$. Then, with H\"older's inequality and the Gaussian moments of $G_t$, we can immediately conclude Theorem \ref{thm:Monotone}. \begin{cor}\label{cor:ImprovedAPriori} For all $n \in \N$ and $1\leq p < \infty$ it holds that \begin{gather*} \EV{\int_0^T \abs{A^n U^n(t)}_n^2 \dt} \leq \mathrm{e}^{(L + dL^2) T} \EV{ \norm{U^n(0)}_n^2 + \norm{\mathbf{X}^n(0)}_n^2} + T \norm{B}_{L_2(H,V)}^2,\\ \EV{ \sup_{t \in [0,T]} \Big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \Big)^p} \leq C \EV{ \Big( \norm{U^n(0)}_n^{2p} + \norm{\mathbf{X}^n(0)}_n^{2p} + T \norm{B}_{L_2(H,V)}^{2p}\Big)}.
\end{gather*} \end{cor} Clearly, this yields the assertion via H\"older's inequality, since $G_t$ has finite moments of any order. The proof of this a priori estimate is essentially the same as the one of Lemma \ref{lem:ImprovedAPriori} with the observation that we only need \[ \Big\langle \vek{f(U^n_k, \mathbf{X}^n_k) - f(U^n_{k-1}, \mathbf{X}^n_{k-1})}{ \mathbf{f}(U^n_k, \mathbf{X}^n_k) - \mathbf{f}(U^n_{k-1}, \mathbf{X}^n_{k-1})}, \vek{U^n_k - U^n_{k-1}}{\mathbf{X}^n_k - \mathbf{X}^n_{k-1}} \Big\rangle \leq L \big( \abs{U^n_k - U^n_{k-1}}^2 + \abs{\mathbf{X}^n_k - \mathbf{X}^n_{k-1}}^2 \big) \] and $G_t$ does not appear at all; in particular, for all $t \in [0,T]$ it holds that \begin{align*} \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 + 2 \int_0^t \abs{A^n U^n(s)}_n^2 \ds &\leq \big( 2L + 2dL^2\big) \int_0^t \big( \norm{U^n(s)}_n^2 + \norm{\mathbf{X}^n(s)}_n^2 \big) \ds\\ &\quad + \norm{B}_{L_2(H,V)}^2 t + M^U(t) + M^{\mathbf{X}}(t), \end{align*} where again the stochastic integrals are denoted by $M^U$ and $M^{\mathbf{X}}$, respectively. Integration with respect to $\PP$ and Gronwall's inequality directly yield the first a priori estimate. For the second one, raise the inequality above to the power $p$ for some $1< p < \infty$: \begin{align*} K \sup_{t \in [0,T]} \big(\norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2\big)^p &\leq \big( 2L + 2dL^2\big)^p T^{p-1} \int_0^T \big( \norm{U^n(t)}_n^2 + \norm{\mathbf{X}^n(t)}_n^2 \big)^p \dt\\ &\quad + \norm{B}_{L_2(H,V)}^{2p} T^p + \sup_{t \in [0,T]} \abs{M^U(t)}^p + \sup_{t \in [0,T]} \abs{M^{\mathbf{X}}(t)}^p. \end{align*} The Burkholder-Davis-Gundy inequality and Gronwall's lemma yield the result. \qed
\section{Applications} As an application of our main results we consider two equations describing the propagation of the action potential along the axon of a neuron. More precisely, we study the Hodgkin-Huxley equations and the FitzHugh-Nagumo equations, the latter being especially popular in the mathematical literature. In particular, one can use Theorem \ref{thm:Monotone} to extend the results of \cite{SauerLattice}. \subsection{Stochastic Hodgkin-Huxley Equations}\label{sec:HH} As already described in the introduction, the Hodgkin-Huxley equations, see \cite{HodgkinHuxley}, are the basis for all subsequent conductance based models for active nerve cells. In the case of the squid's giant axon, the currents generating the action potential are primarily due to sodium and potassium ions and the membrane potential satisfies \begin{equation}\label{eq:HHmembrane} \tau \partial_t U = \lambda^2 \partial_{xx} U - g_{\text{Na}} ( U - E_{\text{Na}}) - g_{\text{K}} (U - E_{\text{K}}) - g_{\text{L}} (U - E_{\text{L}}). \end{equation} The last term is the leak current with $g_{\text{L}} > 0$; here $E_{\text{Na}}, E_{\text{K}} \in \R$ are the resting potentials of sodium and potassium and $g_{\text{Na}}, g_{\text{K}}$ are their conductances. $\tau$ and $\lambda$ are the specific time and space constants of the axon. Due to the opening and closing of ion channels the conductances $g_{\text{Na}}, g_{\text{K}}$ may change with time, in particular \[ g_{\text{Na}} = \overline{g}_{\text{Na}} m^3 h \quad \text{and} \quad g_{\text{K}} = \overline{g}_{\text{K}} n^4, \quad \overline{g}_{\text{Na}}, \overline{g}_{\text{K}} > 0, \] where $(n,m,h)$ are gating variables describing the probability of ion channels being open.
For $x \in \{n, m, h\}$ we have \begin{equation}\label{eq:HHgating} \frac{\mathrm{d} x}{\dt} = \alpha_x(U) ( 1 - x) - \beta_x(U) x \end{equation} with typical shapes \[ \alpha_x(U) = a_x^1 \frac{U + A_x}{1 - \mathrm{e}^{-a_x^2 (U + A_x)}} \geq 0 \quad \text{and} \quad \beta_x(U) = b_x^1 \mathrm{e}^{-b_x^2 (U + B_x)}\geq 0 \] for some constants $a_x^i, b_x^i > 0$, $A_x, B_x \in \R$. For the data matching the ``standard Hodgkin-Huxley neuron'' we refer to e.\,g. \cite[Section 1.9]{Ermentrout}.
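For illustration, in the sign convention commonly used in the literature cited above (membrane resting near $-65$\,mV; the exact constants depend on the chosen convention and play no role in what follows), the rate functions of the potassium gating variable $n$ are typically taken as \[ \alpha_n(U) = \frac{0.01\,(U + 55)}{1 - \mathrm{e}^{-(U+55)/10}} \quad\text{and}\quad \beta_n(U) = 0.125\, \mathrm{e}^{-(U+65)/80}, \] that is, $a_n^1 = 0.01$, $a_n^2 = 0.1$, $A_n = 55$, $b_n^1 = 0.125$, $b_n^2 = \sfrac{1}{80}$ and $B_n = 65$ in the notation above.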
Let $\mathbf{x} = (n,m,h)$; the nonlinearities $f$ and $f_i$ have been implicitly defined above. It is now immediate that Assumption \ref{assum:Drift} is satisfied with $r=2$, \[ \rho(\mathbf{x}) \df \max \{ \abs{n}^4, \abs{m}^3 \abs{h}, \abs{m}^3, \abs{m}^2 \abs{h}, \abs{n}^3\}, \] and \[ \rho_i(u) \df \max \{ \alpha_{x_i}(u) + \beta_{x_i}(u), \alpha_{x_i}'(u) + \beta_{x_i}'(u)\}. \] Note that the Lipschitz constants do indeed grow exponentially. Concerning Assumption \ref{assum:Invariance} we observe that for $\mathbf{x} \in [0,1]^3$ it holds that \[ \partial_u f(u, \mathbf{x}) = - \overline{g}_{\text{Na}} m^3 h - \overline{g}_{\text{K}} n^4 - g_{\text{L}} \leq - g_{\text{L}} < 0, \] hence $K = 0$. Also, the invariance of $[0,1]^3$ follows by \[ f_i(u, x_i) \mathbbm{1}_{\{x_i \leq 0\}} = \Big(\alpha_{x_i} (u) - \big( \alpha_{x_i}(u) + \beta_{x_i}(u)\big) x_i \Big) \mathbbm{1}_{\{x_i \leq 0\}} \geq \alpha_{x_i} (u)\mathbbm{1}_{\{x_i \leq 0\}} \geq 0 \] and \[ f_i(u, x_i) \mathbbm{1}_{\{x_i \geq 1 \}} = \Big(\big( \alpha_{x_i}(u) + \beta_{x_i}(u)\big) (1 - x_i) - \beta_{x_i}(u) \Big) \mathbbm{1}_{\{x_i \geq 1\}} \leq - \beta_{x_i} (u)\mathbbm{1}_{\{x_i \geq 1\}} \leq 0. \] So far we have only stated the deterministic system; however, adding noise in both variables is physiologically reasonable, see \cite{GoldwynSheaBrown}, and we can study the following system. \begin{align*} \tau \mathrm{d}U(t) &= \Big(\lambda^2 A U(t) - \overline{g}_{\text{Na}} m(t)^3 h(t) \big( U(t) - E_{\text{Na}}\big) \Big.\\ &\qquad \Big.
- \overline{g}_{\text{K}} n(t)^4 \big(U(t) - E_{\text{K}}\big) - g_{\text{L}} \big(U(t) - E_{\text{L}}\big) \Big) \dt + B \dwt,\\ \mathrm{d}n(t) &= \Big( \alpha_n \big(U(t)\big) \big( 1 - n(t)\big) - \beta_n\big(U(t)\big) n(t)\Big) \dt\\ &\qquad + \mathbbm{1}_{\{0 \leq n(t) \leq 1\}} \sigma_n n(t) \big( 1 - n(t)\big) B_n \,\mathrm{d}W_n(t),\\ \mathrm{d}m(t) &= \Big( \alpha_m \big(U(t)\big) \big( 1 - m(t)\big) - \beta_m\big(U(t)\big) m(t)\Big) \dt\\ &\qquad + \mathbbm{1}_{\{0 \leq m(t) \leq 1\}} \sigma_m m(t) \big( 1 - m(t)\big) B_m \,\mathrm{d}W_m(t),\\ \mathrm{d}h(t) &= \Big( \alpha_h \big(U(t)\big) \big( 1 - h(t)\big) - \beta_h\big(U(t)\big) h(t)\Big) \dt\\ &\qquad + \mathbbm{1}_{\{0 \leq h(t) \leq 1\}} \sigma_h h(t) \big( 1 - h(t)\big) B_h \,\mathrm{d}W_h(t). \end{align*} Here, $W, W_n, W_m$ and $W_h$ are cylindrical Wiener processes on $H= L^2(\mathcal{O})$ and $B, B_n, B_m$ and $B_h$ are Hilbert-Schmidt operators in $L_2(H,V)$. Hence, both Assumptions \ref{assum:Drift} and \ref{assum:Invariance} are satisfied, and we can apply Theorems \ref{thm:Ex+Unique} and \ref{thm:Approx} to the stochastic version of the system \eqref{eq:HHmembrane}--\eqref{eq:HHgating}.
\subsection{Stochastic FitzHugh-Nagumo Equations} As in \cite{SauerLattice}, we consider the spatially extended stochastic FitzHugh-Nagumo equations as a second example. These equations were originally stated by FitzHugh \cite{FitzHugh1, FitzHugh2} as a system of ODEs that simplifies the Hodgkin-Huxley model to only two variables, $U$ and $\mathbf{X} = w$, the so-called recovery variable. See e.\,g. the monograph \cite{Ermentrout} for more details on the deterministic case. With the original parameters, the equations are \begin{equation}\label{eq:FHN} \begin{split} \mathrm{d}U(t) &= \Big( A U(t) + \big( U(t) - \tfrac13 U(t)^3\big) - w(t) \Big) \dt + B \dwt,\\ \mathrm{d}w(t) &= 0.08 \big( U(t) - 0.8 w(t) + 0.7\big) \dt, \end{split} \end{equation} where $W$ is a cylindrical Wiener process on $H = L^2(\mathcal{O})$ and $B \in L_2(H,V)$. One can easily check that Assumptions \ref{assum:Drift}, \ref{assum:Kernel} and \ref{assum:Monotone} are satisfied, but not the second part of Assumption \ref{assum:Invariance}. This is not surprising, since $w$ no longer represents a proportion and thus $[0,1]$ is not forward invariant for the dynamics. On the other hand, we have \[ \abs{\nabla f(u,w)} \leq 1 + \abs{u}^2 \quad \text{and}\quad \abs{\nabla f_w(u,w)} \leq L, \] hence $w$ does not appear in the Lipschitz constants; in particular, \[ G_t \df \int_0^t \Big( 4\big(1 + R_s^4\big) + K \big(L^2 + 1\big)\Big) \ds \] is independent of any $L^\infty$-bound for $w$, and therefore the invariance of $[0,1]$ for $w$ is not needed. With these modifications we can use Theorem \ref{thm:Monotone} to obtain a strong rate of convergence of $\sfrac1n$, improving \cite[Theorem 3.1]{SauerLattice}.
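For completeness, the key structural estimate behind the one-sided Lipschitz condition for the cubic drift $u \mapsto u - \tfrac13 u^3$ in \eqref{eq:FHN} can be verified directly: for all $u, v \in \R$, \[ \big( u - \tfrac13 u^3 - v + \tfrac13 v^3 \big) (u - v) = (u-v)^2 \Big( 1 - \tfrac13 \big( u^2 + uv + v^2 \big) \Big) \leq (u-v)^2, \] since $u^2 + uv + v^2 = \big( u + \tfrac12 v \big)^2 + \tfrac34 v^2 \geq 0$.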
\appendix \section{Appendix} This final part provides some straightforward estimates, used throughout this article, concerning the stochastic convolution and its approximation. Here, we do not aim for the greatest generality or even optimality of the results, since this is a separate matter.
Let $E \df C( \overline{\mathcal{O}}, \R)$ and $Y$ the unique (mild) solution to \begin{equation}\label{app:OU} \mathrm{d} Y(t) = A Y(t) \dt + B \dwt, \quad Y(0) = 0. \end{equation} \begin{lem}\label{app:Lem1} The quantity $\xi \df \norm{Y}_{C([0,T]; E)}$ is finite $\PP$-a.\,s. and defines a random variable on $\Omega$ that satisfies $\EV{ \exp [\alpha \xi^2]} < \infty$ for some $\alpha > 0$. In particular, $\xi$ has finite moments of any order. \end{lem} \begin{proof} Since $A$ generates an exponentially stable, analytic semigroup $\{\mathrm{e}^{tA}\}_{t \geq 0}$ on $E$, we can integrate by parts to obtain a different representation of the mild solution $Y$ to \eqref{app:OU}. \[ Y(t) = \int_0^t \mathrm{e}^{(t-s)A} B \dws = \int_0^t A \mathrm{e}^{(t-s)A} B \big( W(s) - W(t)\big) \ds + \mathrm{e}^{tA} B W(t). \] In particular, this representation no longer involves a stochastic integral, thus standard estimates for $A$ yield \begin{align*} \norm{Y(t)}_E &\leq \int_0^t \norm{A \mathrm{e}^{(t-s)A} B \big( W(s) - W(t) \big)}_E \ds + \norm{ \mathrm{e}^{tA} BW(t)}_E\\ &\leq C_A \int_0^t (t-s)^{-1} \norm{BW(s) - BW(t)}_E \ds + C_A \norm{BW(t)}_E. \end{align*} Given $0 < \eta < \sfrac{1}{2}$, the process $\{BW(t)\}_{t \geq 0}$ is $\eta$-H\"older continuous in $E$ by the Sobolev embedding $V \hookrightarrow E$ in $d=1$. Hence, define $\zeta \df \norm{BW}_{C^\eta([0,T];E)}$, which is a Gaussian random variable, see \cite[Theorem 5.1]{VeraarHytoenen}. The integrability then follows from \cite[Corollary 3.2]{LedouxTalagrand}. \end{proof} The second lemma concerns a simple uniform estimate for the solution to the approximating problem \begin{equation}\label{app:OUApprox} \mathrm{d} Y^n(t) = A^n Y^n(t) \dt + B^n \,\mathrm{d}W^n(t),\quad Y^n(0) = 0.
\end{equation} \begin{lem}\label{app:Lem2} For all $n \in \N$ it holds that \[ \xi^n \df \norm{Y^n}_{C([0,T]; (\R^{n+1}, \norm{\cdot}_{\max}))} = \sup_{t \in [0,T]} \max_{0 \leq k \leq n} \abs{ Y^n_k(t)} \leq C_A \zeta \] with the Gaussian random variable $\zeta$ from Lemma \ref{app:Lem1}. In particular, this bound is uniform in $n$ and, like $\xi$, has finite moments of any order. \end{lem} \begin{proof} As in the proof of Lemma \ref{app:Lem1} we can conclude that $\xi^n \leq C_A \zeta^n$, where $\zeta^n \df \norm{B^n W^n}_{C^\eta([0,T]; (\R^{n+1}, \norm{\cdot}_{\max}))}$ for some $0 < \eta < \sfrac12$. We write \[ \big( B^n W^n(t)\big)_k = \sum_{l=0}^n b^n_{k,l} \scp{W(t)}{ \mathbbm{1}_{I_l}}_H = n \int_{I_k} \scp{\pi^n b(x, \cdot)}{ W(t)}_H \dx, \] where $\pi^n u \df \sum_{l=0}^n \scp{u}{\abs{I_l}^{-\sfrac12} \mathbbm{1}_{I_l}}_H \abs{I_l}^{-\sfrac12} \mathbbm{1}_{I_l}$ is the projection onto these finitely many orthonormal indicator functions. For $t \neq s$ it follows that \begin{align*} &\abs{\big(B^n W^n(t) \big)_k - \big(B^n W^n(s)\big)_k} \abs{t-s}^{-\eta} \leq \abs{I_k}^{-1} \int_{I_k} \abs{ \scp{\pi^n b(x,\cdot)}{W(t)-W(s)}_H} \dx \abs{t-s}^{-\eta}\\ &\quad \leq \norm{\pi^n}_{L(H)} \norm{BW(t) - BW(s)}_E \abs{t-s}^{-\eta} \leq \norm{BW}_{C^\eta([0,T]; E)} = \zeta. \end{align*} Since the right hand side is independent of $k$ as well as $t$ and $s$, the assertion follows. \end{proof}
\section*{Acknowledgment} This work is supported by the BMBF, FKZ 01GQ1001B.
\end{document} |
\begin{document}
\title[The Binary $\Theta_3$-closed Matroids]{A binary-matroid analogue of a graph connectivity theorem of Jamison and Mulder}
\author{Cameron Crenshaw} \address{Mathematics Department\\ Louisiana State University\\ Baton Rouge, Louisiana} \email{[email protected]}
\author{James Oxley} \address{Mathematics Department\\ Louisiana State University\\ Baton Rouge, Louisiana} \email{[email protected]}
\begin{abstract} Jamison and Mulder characterized the set of graphs that can be built from cycles and complete graphs via 1-sums and parallel connections as those graphs $G$ such that, whenever two vertices $x$ and $y$ of $G$ are joined by three internally disjoint paths, $x$ and $y$ are adjacent. This paper proves an analogous result for the set of binary matroids constructible from direct sums and parallel connections of circuits, complete graphs, and projective geometries. \end{abstract}
\maketitle
\section{Introduction} \label{introduction}
Jamison and Mulder~\cite{JM} defined a graph $G$ to be \emph{$\Theta_3$-closed} if, whenever distinct vertices $x$ and $y$ of $G$ are joined by three internally disjoint paths, $x$ and $y$ are adjacent. For disjoint graphs $G_1$ and $G_2$, a 1-\emph{sum} of $G_1$ and $G_2$ is a graph that is obtained by identifying a vertex of $G_1$ with a vertex of $G_2$. Following Jamison and Mulder, we define a \emph{$2$-sum} of $G_1$ and $G_2$ to be a graph that is obtained by identifying an edge of $G_1$ with an edge of $G_2$. Note that, in contrast to some other definitions of this operation, we retain the identified edge as an edge of the resulting graph. The main result of Jamison and Mulder's paper is the following.
\begin{theorem} \label{JMtheorem} A connected graph $G$ is $\Theta_3$-closed if and only if $G$ can be built via $1$-sums and $2$-sums from cycles and complete graphs. \end{theorem}
This paper generalizes Theorem~\ref{JMtheorem} to binary matroids; all matroids considered here are binary unless stated otherwise. The terminology and notation follow~\cite{oxley} with the following additions. We will use $P_r$ to denote the rank-$r$ binary projective geometry, $PG(r-1,2)$. A \emph{theta-graph} is a graph that consists of two distinct vertices and three internally disjoint paths between them. A theta-graph in a matroid $M$ is a restriction of $M$ that is isomorphic to the cycle matroid of a theta-graph. Equivalently, it is a restriction of $M$ that is isomorphic to a matroid that is obtained from $U_{1,3}$ by a sequence of series extensions. The series classes of a theta-graph are its \emph{arcs}. Let $T$ be a theta-graph of $M$ with arcs $A_1$, $A_2$, and $A_3$. If $M$ has an element $e$ such that, for every $i$, either $A_i\cup e$ is a circuit of $M$, or $A_i=\{e\}$, then $e$ \emph{completes} $T$ in $M$, and $T$ is said to be \emph{complete}. A matroid $M$ is \emph{matroid $\Theta_3$-closed} if every theta-graph of $M$ is complete. The next theorem is the main result of this paper.
\begin{theorem} \label{mainresult} A matroid $M$ is matroid $\Theta_3$-closed if and only if $M$ can be built via direct sums and parallel connections from circuits, cycle matroids of complete graphs, and projective geometries. \end{theorem}
Suppose $M$ is isomorphic to the cycle matroid of a graph $G$. Two vertices in $G$ that are joined by three internally disjoint paths are adjacent via an edge $e$ exactly when the corresponding theta-graph of $M$ is completed by $e$. In other words, $G$ is $\Theta_3$-closed if and only if $M$ is matroid $\Theta_3$-closed. This allows us to refer to $M$ as \emph{$\Theta_3$-closed} without ambiguity. We will denote the class of $\Theta_3$-closed matroids by $\Theta_3$.
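To illustrate these definitions (with edge labels of our own choosing), consider $M(K_4)$, where $K_4$ has vertex set $\{1,2,3,4\}$. The paths $1\,3\,2$ and $1\,4\,2$ together with the edge $12$ form a theta-graph $T$ with arcs \[ A_1 = \{13, 32\}, \quad A_2 = \{14, 42\}, \quad A_3 = \{12\}. \] The element $e = 12$ completes $T$, since $A_3 = \{e\}$ while $A_1\cup e$ and $A_2\cup e$ are triangles of $K_4$ and hence circuits of $M(K_4)$.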
Section~\ref{prelims} introduces supporting results. The 3-connected matroids that are $\Theta_3$-closed are characterized in Section~\ref{3conn}, and the proof of Theorem~\ref{mainresult} appears in Section~\ref{main}.
\section{Preliminaries} \label{prelims}
Our first proposition collects some essential properties of $\Theta_3$-closed matroids. These properties will be used frequently and often implicitly.
\begin{proposition} \label{basics} If $M\in \Theta_3$, then \begin{itemize} \item[(i)]{$\si(M)\in\Theta_3$;} \item[(ii)]{$M\vert F\in\Theta_3$ for every flat $F$ of $M$; and} \item[(iii)]{$M/e\in \Theta_3$ for every $e\in E(M)$.} \end{itemize} \end{proposition}
\begin{proof} Parts (i) and (ii) are straightforward. For part (iii), let $T$ be a theta-graph of $M/e$. Then $[(M/e)\vert T]^\ast$ is obtained from $U_{2,3}$ by adding elements in parallel to the existing elements. Since $M$ is binary, it follows that $M^\ast/(E(M)-(T\cup e))$ is obtained from $[(M/e)\vert T]^\ast$, that is, from $M^\ast/(E(M)-(T\cup e))\backslash e$, by adding $e$ as a coloop or by adding $e$ in parallel to one of the existing elements. Thus, $e$ is either a loop in $M\vert (T\cup e)$, or is in series with another element. Hence, since $T$ is complete in $M$, it is complete in $M/e$. \end{proof}
Evidently, a matroid is in $\Theta_3$ if and only if its connected components are in $\Theta_3$. This will also be used implicitly throughout the paper. The following is an immediate consequence of Proposition~\ref{basics}.
\begin{corollary} \label{parallelminorclosed} If $M\in\Theta_3$ and $N$ is a parallel minor of $M$, then $N\in\Theta_3$. \end{corollary}
From, for example,~\cite[Exercise 8.3.3]{oxley}, if $M=M_1\oplus_2 M_2$, then $M_1$ and $M_2$ are parallel minors of $M$. The next result now follows from Corollary~\ref{parallelminorclosed}.
\begin{corollary} \label{2summands} If $M\oplus_2 N$ is in $\Theta_3$, then $M$ and $N$ are in $\Theta_3$. \end{corollary}
To see that the converse of the last corollary fails, observe that $M(K_{2,4})$ is not in $\Theta_3$ although it is the 2-sum of two copies of a matroid in $\Theta_3$.
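Concretely (again with our own labelling), let $K_{2,4}$ have vertex classes $\{x,y\}$ and $\{a,b,c,d\}$. Any three of the four internally disjoint $x$-$y$ paths, say those through $a$, $b$, and $c$, form a theta-graph of $M(K_{2,4})$ whose arcs are the $2$-element series classes \[ A_1 = \{xa, ay\}, \quad A_2 = \{xb, by\}, \quad A_3 = \{xc, cy\}. \] No element $e$ completes this theta-graph, since $A_i\cup e$ is a circuit only if $e$ is an edge joining $x$ and $y$, and $K_{2,4}$ has no such edge.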
We conclude this section with a result about constructing larger matroids in $\Theta_3$ from smaller ones. Recall that, for sets $X$ and $Y$ in a matroid $M$, the \emph{local connectivity} between $X$ and $Y$, denoted $\sqcap(X,Y)$, is defined by $\sqcap(X,Y)=r(X)+r(Y)-r(X\cup Y)$. We will use the following result about local connectivity from, for example,~\cite[Lemma 8.2.3]{oxley}.
\begin{lemma} \label{staplelemma} Let $X_1$, $X_2$, $Y_1$, and $Y_2$ be subsets of the ground set of a matroid $M$. If $X_1\supseteq Y_1$ and $X_2\supseteq Y_2$, then $\sqcap(X_1,X_2)\geq\sqcap(Y_1,Y_2)$. \end{lemma}
\begin{proposition} \label{parconnprop} For matroids $M$ and $N$, the parallel connection $P(M,N)$ is in $\Theta_3$ if and only if $M\in \Theta_3$ and $N\in \Theta_3$. \end{proposition}
\begin{proof} Let $p$ be the basepoint of the parallel connection. When $p$ is a loop or a coloop of $M$, the matroid $P(M,N)$ is $M\oplus (N/p)$ or $(M/ p)\oplus N$, respectively. In these cases, it follows using Proposition~\ref{basics} that the result holds. Thus we may assume that $p$ is neither a loop nor a coloop of $M$ or $N$. Suppose $P(M,N)\in\Theta_3$. Let $B_M$ be a basis for $M$ containing $p$. Extend $B_M$ to a basis $B$ for $P(M,N)$. After contracting both the elements of $B-B_M$ in $P(M,N)$ as well as all of the resulting loops, the remaining elements of $E(N)-p$ are parallel to $p$. We deduce that $M$, and similarly $N$, is a parallel minor of $P(M,N)$. Hence, by Corollary~\ref{parallelminorclosed}, $M$ and $N$ are in $\Theta_3$.
Conversely, suppose that $M, N\in \Theta_3$ and let $T$ be a theta-graph of $P(M,N)$ with arcs $A_1$, $A_2$, and $A_3$. Then we may assume that $\vert A_i\vert\geq 2$ for each $i$, as otherwise $T$ is complete. Suppose $p\in A_1$. Then $A_1\cup A_2$ is a circuit containing $p$, so it is contained in $E(M)$ or $E(N)$ depending on which of these sets contains $A_1-p$. It follows that the same set contains $A_2$ and, likewise, $A_3$, so $T$ is complete. Hence we may assume that $p\notin T$.
Suppose that each of the arcs of $T$ meets both $E(M\backslash p)$ and $E(N\backslash p)$. Let $T_M=E(T)\cap E(M)$, and similarly for $T_N$. Note that $T_M$ and $T_N$ are independent, and $T_M\cup T_N=E(T)$, so \begin{align*} \sqcap(T_M,T_N)&=r(T_M)+r(T_N)-r(T_M\cup T_N)\\ &=\vert T_M\vert + \vert T_N\vert - (\vert T_M\vert + \vert T_N\vert - 2)\\ &=2. \end{align*} However, $\sqcap(E(M),E(N))=1$, contradicting Lemma~\ref{staplelemma}.
Next, suppose that each of $A_1$ and $A_2$ meets both $E(M\backslash p)$ and $E(N\backslash p)$. Then, from above, we may assume that $A_3\subseteq E(M\backslash p)$. The circuits $A_1\cup A_3$ and $A_2\cup A_3$ have the form $(C_1-p)\cup(D_1-p)$ and $(C_2-p)\cup(D_2-p)$, respectively, for circuits $C_1$ and $C_2$ of $M$ containing $p$, and circuits $D_1$ and $D_2$ of $N$ containing $p$. Because $A_3\subseteq E(M)$ and $A_1\cap A_2=\emptyset$, it follows that $D_1-p$ and $D_2-p$ are disjoint. However, since $M$ is binary, $D_1\triangle D_2$ contains a circuit of $P(M,N)$ that is properly contained in the circuit $A_1\cup A_2$, a contradiction.
Now, suppose that $A_1$ meets both $E(M\backslash p)$ and $E(N\backslash p)$. Then, from above, each of the remaining arcs of $T$ lies in $E(M\backslash p)$ or $E(N\backslash p)$. We may assume that $A_2\subseteq E(M\backslash p)$. Suppose $A_3\subseteq E(N\backslash p)$. Then the circuits $A_1\cup A_2$ and $A_3\cup A_2$ have the form $(C_1-p)\cup (D_1-p)$ and $(C_3-p)\cup(D_3-p)$, respectively, for circuits $C_1$ and $C_3$ of $M$ containing $p$, and circuits $D_1$ and $D_3$ of $N$ containing $p$. Now, since $A_2\subseteq E(M\backslash p)$ and $A_1$ meets $E(M\backslash p)$, the set $C_1-p$ properly contains $A_2$. Further, as $A_3$ does not meet $E(M\backslash p)$, we have that $A_2=C_3-p$. This means $A_2\cup p$ is the circuit $C_3$, but $A_2\cup p$ is properly contained in $C_1$, a contradiction.
We conclude that $A_3\subseteq E(M\backslash p)$. Form $T'$ from $T$ by replacing the portion of $A_1$ in $E(N\backslash p)$ by $p$. Observe that $T'$ is isomorphic to a series minor of $T$, so $T'$ is a theta-graph. Moreover, $T'$ is a theta-graph of $M$, so it is completed in $M$ by an element $f$. Now, since $T$ and $T'$ share an arc, $f$ also completes $T$ in $P(M,N)$.
We are left to consider the case when each arc of $T$ is contained in either $E(M)$ or $E(N)$. If all three arcs belong to $E(M)$, say, then $T$ is complete in $M$, and so is complete in $P(M,N)$. Otherwise, $p$ completes $T$. \end{proof}
\section{The $3$-Connected $\Theta_3$-closed Matroids} \label{3conn}
The proof of Theorem~\ref{mainresult} will use the canonical tree decomposition of Cunningham and Edmonds~\cite{cunned} and, in support of that approach, this section proves the following 3-connected form of Theorem~\ref{mainresult}.
\begin{theorem} \label{mainres3conn} Let $M$ be a simple $3$-connected $\Theta_3$-closed matroid. Then $M$ is a projective geometry or the cycle matroid of a complete graph. \end{theorem}
The proof of this theorem relies on the next two propositions.
\begin{proposition} \label{mkorpgprop} If $M$ is a simple matroid in $\Theta_3$ and $M$ has a spanning $M(K_{r+1})$-restriction, then $M\cong M(K_{r+1})$ or $M\cong P_r$. \end{proposition}
\begin{proof} Take a standard binary representation for $P_r$, and view $M$ as the restriction of $P_r$ to the set $X$ of vectors. Recall that the number of nonzero entries of a vector is its \emph{weight}, and that the \emph{distance} between two vectors is the number of coordinates upon which they disagree. Because $M$ has an $M(K_{r+1})$-restriction, we may assume that $X$ contains the set $Z$ of vectors of weight one or two. We may assume that $Z\neq X$, as otherwise $M\cong M(K_{r+1})$. Then $M$ has an element $e$ of weight at least three. We shall establish that $M\cong P_r$ by proving the following three assertions. \begin{itemize} \item[(i)]{$M$ has an element of weight three;} \item[(ii)]{if the matroid $M$ has every element of weight $k-1$ and an element of weight $k$, for some $k$ exceeding two, then $M$ has every element of weight $k$; and} \item[(iii)]{if $M$ has every element of weight $k$, where $3\leq k< r$, then $M$ has an element of weight $k+1$.} \end{itemize}
Let $e_i$ denote the weight-1 element whose nonzero entry is in the $i$th position. To show (i), we may assume $e$ has weight $k\geq 4$. Say $e=e_1+e_2+\cdots+e_k$. Let $Y=\{e,e_1,e_2,e_4,e_5,\dots,e_k,e_1+e_3,e_2+e_3\}$. Then $M\vert Y$ is a theta-graph having arcs $\{e,e_4,e_5,\dots,e_k\}$, $\{e_1,e_2+e_3\}$, and $\{e_2,e_1+e_3\}$. This theta-graph forces $e_1+e_2+e_3$ to be an element of $M$, so (i) holds.
To prove (ii), we may assume $k<r$. Suppose $g$ is an element of weight $k$ not in $M$, and let $f$ be an element of weight $k$ in $M$ with minimum distance from $g$. Let $s$ label a row where $f$ is 1 and $g$ is 0, and let $t$ label a row where $g$ is 1 and $f$ is 0. Next, as $k\geq 3$, there are two additional rows, $u$ and $v$, distinct from $s$ where $f$ is 1. Now, the set $\{f,e_u,e_v,e_s+e_t\}$ is independent, so the arcs $\{f, e_s+e_t\}$, $\{e_u, f+e_u+e_s+e_t\}$, and $\{e_v,f+e_v+e_s+e_t\}$ form a theta-graph in $M$. This theta-graph implies that $f+e_s+e_t$ belongs to $M$. However, $f+e_s+e_t$ has weight $k$ and is at a smaller distance from $g$ than $f$ is, a contradiction. Thus (ii) holds.
Finally, to prove (iii), let $f$ be an element of $P_r$ of weight $k+1$, where $3\leq k<r$. By symmetry, we may assume that the set of rows in which $f$ is nonzero contains $\{1,2,3\}$. Then $\{f,e_1,e_2,e_3\}$ is independent in $M$, and the sets $\{e_1,f+e_1\}$, $\{e_2,f+e_2\}$, and $\{e_3,f+e_3\}$ are the arcs of a theta-graph in $M$. This theta-graph shows that $f$ belongs to $M$. Thus (iii) holds. Hence the proposition holds as well. \end{proof}
The second proposition that we use to prove Theorem~\ref{mainres3conn} will follow from the following three results.
\begin{lemma} \label{pglift} Let $M$ be a simple rank-$r$ matroid in $\Theta_3$. Suppose that \begin{itemize} \item[(i)]$r\geq 4$; \item[(ii)] $E(M)$ has a subset $P$ such that $M\vert P\cong P_{r-1}$; and \item[(iii)]$E(M)-P$ contains at least three elements. \end{itemize} Then $M\cong P_r$. \end{lemma}
\begin{proof} View $M$ as a restriction of $P_r$, and let $\{e,f,g\}$ be a subset of $E(M)-P$. Let $p$ be a point in $E(P_r)- P$ that is not in $\{e,f,g\}$. Observe that, for each $x$ in $\{e,f,g\}$, the third point on the line in $P_r$ containing $\{x,p\}$ is in $P$. Thus there are three lines of $M$ that meet at $p$. Provided $p$ is not coplanar with $\{e,f,g\}$, these lines define a theta-graph in $M$ that is completed by $p$, so $p$ is in $M$. It remains to show that the point $q$ of $E(P_r)- (P\cup \{e,f,g\})$ that is coplanar with $\{e,f,g\}$ belongs to $M$. But one easily checks that $P_r\backslash q$ is not in $\Theta_3$ when $r\geq 4$. Thus $M\cong P_r$. \end{proof}
\begin{corollary} \label{prfromf7s} Let $M$ be a simple rank-$r$ matroid in $\Theta_3$ with $r\geq 3$. If $M$ has a basis $B$ and an element $b$ in $B$ so that, for each $\{x,y\}\subseteq B-b$, the set $\{b,x,y\}$ spans an $F_7$-restriction of $M$, then $M\cong P_r$. \end{corollary}
\begin{proof} Let $B=\{b_1,b_2,\dots,b_r\}$ with $b=b_1$. If $r=3$, then the result is immediate, so suppose $r\geq 4$. By induction, $M\vert\cl(B-b_r)$ is isomorphic to $P_{r-1}$. Since $M\vert\cl(\{b_1,b_2,b_r\})\cong F_7$, we see that this restriction contains an independent set of three elements that avoids $\cl(B-b_r)$. Lemma~\ref{pglift} now implies that $M\cong P_r$. \end{proof}
The next result was proved by McNulty and Wu~\cite[Lemma 2.10]{mcnultywu}.
\begin{lemma} \label{connhyp} Let $M$ be a $3$-connected binary matroid with at least four elements. Then, for any two distinct elements $e$ and $f$ of $M$, there is a connected hyperplane containing $e$ and avoiding $f$. \end{lemma}
For a simple binary matroid $M$, we now define the smallest $\Theta_3$-closed matroid whose ground set contains $E(M)$. Let $M_0=M$, and let $r=r(M)$. Suppose $M_0, M_1, \dots, M_k$ have been defined. The simple binary matroid $M_{k+1}$ is obtained from $M_k$ by ensuring that, whenever $T$ is an incomplete theta-graph of $M_k$, the element $x$ that completes $T$ is in $E(M_{k+1})$. Since each $M_i$ is a restriction of $P_r$, there is a $j$ for which $M_{j+1}=M_j$. When this first occurs, we call $M_j$ the \emph{$\Theta_3$-closure} of $M$. Evidently this is well defined. By associating $M$ with its ground set, the $\Theta_3$-closure is a closure operator (but not necessarily a matroid closure operator) on the set of subsets of the ground set of any projective geometry containing $M$.
\begin{proposition} \label{pginflater} Let $M$ be a simple $3$-connected matroid in $\Theta_3$, and let $k$ be an integer exceeding two. If $M$ has a simple minor $N$ whose $\Theta_3$-closure is $P_k$, then $M$ is a projective geometry. \end{proposition}
\begin{proof} Take subsets $X$ and $Y$ of $E(M)$ such that $M/X\backslash Y=N$ with $X$ independent and $Y$ coindependent in $M$. The matroid $M/X$ is in $\Theta_3$ and has $N$ as a spanning restriction. Therefore $M/X$ has $P_k$ as a restriction, so $P_k$ is a minor of $M$. From here, the proof is by induction on the rank, $r$, of $M$.
If $r=k$, the result is immediate, so assume $r>k$. By Seymour's Splitter Theorem, $P_k$ can be obtained from $M$ by a sequence of single-element contractions and deletions, all while staying 3-connected. Let $e$ be the first element that is contracted in this sequence. Note that $\si(M/e)$ is a 3-connected member of $\Theta_3$ that has $P_k$ as a minor. By induction, $\si(M/e)\cong P_{r-1}$. Fix an embedding of $M$ in $P_{r}$. We may assume that $M\not\cong P_r$. Then some line $\ell$ of $P_{r}$ through $e$ is not contained in $E(M)$. For each subset $Z$ of $E(P_r)$, let $\cl_P(Z)$ be its closure in $P_r$. Since $\si(M/e)\cong P_{r-1}$, there is an element $s$ of $E(M)$ that is in $\ell-\{e\}$. Let $t$ be the point of $P_{r}$ in $\ell-\{e\}$ that is not in $E(M)$.
\begin{sublemma} \label{ellpyramids} Let $F$ be a rank-$4$ flat of $M$ containing $\ell-t$. Then $M\vert F$ is isomorphic to one of $P(F_7,U_{2,3})$ or $F_7\oplus U_{1,1}$ where the $F_7$-restriction of $M\vert F$ contains $s$ but avoids $e$. \end{sublemma}
\begin{figure}
\caption{In the proof of \ref{ellpyramids}, each of $M\vert \pi_2$ and $M\vert \pi_3$ has this form.}
\label{ellplanes}
\end{figure}
To see this, first note that, by Proposition~\ref{basics}(ii), $M\vert F$ is in $\Theta_3$. Recall that each line of $\cl_P(F)$ through $e$ contains another point of $F$. There are three planes, $\pi_1$, $\pi_2$, and $\pi_3$, of $M\vert F$ containing $\ell-t$; let $\mathcal{P}$ be this set of planes. It follows that each plane in $\mathcal{P}$ has at least one pair of points that are not on $\ell$ such that these two points are collinear with either $s$ or $t$. Call such a pair of points a \emph{target pair}. The rest of the proof frequently relies on finding theta-graphs, particularly in rank 4. It may be helpful to note that such a theta-graph will be isomorphic to $M(K_{2,3})$. Geometrically, this is three non-coplanar lines, each full except for a shared point. This common point completes the theta-graph.
Suppose $\pi_1$ has a target pair collinear with $t$. Note that if two distinct planes in $\mathcal{P}$ each have a target pair collinear with $t$, then these pairs, along with $\{e,s\}$, are the arcs of an incomplete theta-graph in $M\vert F$, an impossibility. Consequently, neither $\pi_2$ nor $\pi_3$ has a target pair collinear with $t$, so both have the form in Figure~\ref{ellplanes}. The restriction of $M$ to $\pi_2\cup \pi_3$ is given in Figure~\ref{pi2pi3}.
The plane $\pi_1$ adds a pair of points collinear with $t$ to $M\vert (\pi_2\cup\pi_3)$, and one readily checks that the addition of this pair gives a restriction of $M\vert F$ that is isomorphic to $F_7\oplus_2 U_{2,3}$. A theta-graph of $M\vert F$ now gives that $M\vert F$ has a restriction isomorphic to $P(F_7,U_{2,3})$. It follows that $M\vert F\cong P(F_7, U_{2,3})$, for otherwise, by Lemma~\ref{pglift}, $M\vert F\cong P_4$ and we obtain the contradiction that $t\in F$.
We may now suppose that no $\pi_i$ has a target pair collinear with $t$. It follows that each target pair in each $\pi_i$ is collinear with $s$, and that $e$ is the only element of $F$ outside of the target pairs. Observe that the target pairs must be coplanar as, otherwise, we can find an incomplete theta-graph in $M\vert F$. Thus $M\vert F\cong F_7\oplus U_{1,1}$. Noting that $e$ is not in the $F_7$-restriction of $M\vert F$, we conclude that~\ref{ellpyramids} holds.
\begin{figure}
\caption{The matroid $M\vert(\pi_2\cup\pi_3)$ in the proof of~\ref{ellpyramids}.}
\label{pi2pi3}
\end{figure}
Since $M$ is 3-connected, it follows from~\ref{ellpyramids} that $r\geq 5$. By Lemma~\ref{connhyp}, there is a connected hyperplane $H$ of $M$ containing $s$ and avoiding $e$. Let $s=b_1$ and let $\{b_1,b_2,\dots,b_{r-1}\}$ be a basis $B$ of $M\vert H$. For distinct elements $i$ and $j$ of $\{2,3,\dots,r-1\}$, let $F_{i,j}$ denote the rank-4 flat $M\vert\cl(\{e,s,b_i,b_j\})$ of $M$. By~\ref{ellpyramids}, $M\vert F_{i,j}$ is isomorphic to $F_7\oplus U_{1,1}$ or $P(F_7,U_{2,3})$. Let $X$ be the subset of $F_{i,j}$ such that $M\vert X\cong F_7$, and recall that $e\not\in X$ and $s\in X$. The hyperplane $H$ either contains $X$ or meets $X$ along one of the lines $\cl(\{s,b_i\})$ or $\cl(\{s,b_j\})$. We deduce the following.
\begin{sublemma} \label{sswitcher} For each pair $\{i,j\} \subseteq \{2,3,\dots,r-1\}$, at least one of $s+b_i$ or $s+b_j$ is in $E(M)$. \end{sublemma}
Suppose $s+b_2$ is not in $E(M)$. By~\ref{sswitcher}, the element $s+b_i$ belongs to $E(M)$ for every $i$ in $\{3,4,\dots,r-1\}$. Consequently, for each pair $\{i,j\}$ in $\{3,4,\dots,r-1\}$, the hyperplane $H$ contains the copy of $F_7$ in $M\vert F_{i,j}$, and this $F_7$ is spanned by $\{s,b_i, b_j\}$. Corollary~\ref{prfromf7s} now implies that $M\vert \cl(B-b_2)\cong P_{r-2}$.
Now, since $M\vert H$ is connected, $H$ contains an element $f$ that is not in $\cl(B-b_2)\cup b_2$. The line $\cl(\{b_2,f\})$ meets the projective geometry $\cl(B-b_2)$, so $b_2+f$ is also in $M$. Now consider $Y=\cl(\{e,s,b_2,b_2+f\})$. The intersection $H\cap Y$ contains the line $\{b_2,f,b_2+f\}$ and also the element $s$. By applying~\ref{ellpyramids} to $M\vert Y$, we see that $H\cap Y$ is an $F_7$-restriction containing $s$ and $b_2$. Since $s+b_2$ is not in $E(M)$, we have a contradiction.
We conclude that $s+b_i$ is in $E(M)$ for every $i$ in $\{2,3,\dots,r-1\}$. The flat $M\vert F_{i,j}$ now meets $H$ in an $F_7$-restriction for every pair $\{i,j\}\subseteq \{2,3,\dots,r-1\}$ so, by Corollary~\ref{prfromf7s}, $M\vert H$ is a projective geometry. Finally, as $M$ is 3-connected, there is an independent set of three elements in $E(M)$ avoiding $H$. Hence, by Lemma~\ref{pglift}, $M\cong P_r$. \end{proof}
We are now ready to prove the main result of this section.
\begin{proof}[Proof of Theorem~\ref{mainres3conn}] Let $r$ be the rank of $M$. If $M$ is graphic, then Theorem~\ref{JMtheorem} gives that $M\cong M(K_{r+1})$, so we may assume that $M$ is not graphic. Thus $M$ has a minor $N$ isomorphic to $F_7$, $F_7^\ast$, $M^\ast(K_{3,3})$, or $M^\ast(K_5)$. By Proposition~\ref{pginflater}, it now suffices to show that the $\Theta_3$-closure, $\Theta(N)$, of $N$ is a projective geometry.
This is immediate when $N\cong F_7$, so suppose $N$ is isomorphic to $F_7^\ast$, labelled as in Figure~\ref{f7star}. The theta-graphs of $N$ imply that, in $\Theta(N)$, the plane containing $\{1,2,5,6\}$ is isomorphic to $F_7$. Proposition~\ref{pginflater} now implies that $M$ is isomorphic to $P_r$.
\begin{figure}
\caption{The matroid $F_7^\ast$.}
\label{f7star}
\end{figure}
Next, suppose $N\cong M^\ast(K_{3,3})$. The complement of $N$ in $P_4$ is $U_{2,3}\oplus U_{2,3}$; let $x$ be an element of this complement. The elementary quotient of $N$ obtained by extending $N$ by $x$ and then contracting $x$ is shown in Figure~\ref{mk33quo}. The three pairwise-skew 2-element circuits of this quotient correspond to three lines in the extension of $N$ by $x$ where the union of these lines has rank four. Thus $N$ contains a theta-graph that is completed by $x$. It follows that $x$ and, symmetrically, every point in the complement of $N$ in $P_4$, belongs to $\Theta(N)$. Lemma~\ref{pglift} now implies that $\Theta(N)\cong P_4$.
\begin{figure}
\caption{A quotient of $M^\ast(K_{3,3})$.}
\label{mk33quo}
\end{figure}
\begin{figure}
\caption{The graph $K_5$.}
\label{k5}
\end{figure} \begin{figure}
\caption{A binary representation of $M^\ast(K_5)$.}
\label{m*k5}
\end{figure}
Now suppose $N\cong M^\ast(K_5)$, where $K_5$ is labelled as in Figure~\ref{k5}. Figure~\ref{m*k5} gives a corresponding binary representation of $N$. The dual of a theta-graph with arcs of size at least two is a triangle with no trivial parallel classes. Therefore, we can detect theta-graph restrictions of $N$ by contracting elements of $N^\ast$ to produce such a triangle. For example, $N^\ast/5,8$ is the dual of a theta-graph in $N$ with arcs $\{1,2\}$, $\{3,7\}$, and $\{0,4,6,9\}$. This theta-graph is completed by the element $[1\ 1\ 0\ 0\ 0\ 0]^T$, so this element belongs to $\Theta(N)$. The following is a list of duals of theta-graphs of $N$ and the corresponding elements of $\Theta(N)$ that they produce using this reasoning.
\begin{itemize} \item $N^\ast/5,8$ gives $[1\ 1\ 0\ 0\ 0\ 0]^T\in \Theta(N)$. \item $N^\ast/4,9$ gives $[1\ 0\ 1\ 0\ 0\ 0]^T\in \Theta(N)$. \item $N^\ast/3,9$ gives $[1\ 0\ 0\ 1\ 0\ 0]^T\in \Theta(N)$. \item $N^\ast/2,8$ gives $[1\ 0\ 0\ 0\ 1\ 0]^T\in \Theta(N)$. \item $N^\ast/0,6$ gives $[0\ 1\ 1\ 0\ 0\ 0]^T\in \Theta(N)$. \item $N^\ast/1,8$ gives $[0\ 1\ 0\ 0\ 1\ 0]^T\in \Theta(N)$. \item $N^\ast/0,3$ gives $[0\ 1\ 0\ 0\ 0\ 1]^T\in \Theta(N)$. \item $N^\ast/1,9$ gives $[0\ 0\ 1\ 1\ 0\ 0]^T\in \Theta(N)$. \item $N^\ast/0,2$ gives $[0\ 0\ 1\ 0\ 0\ 1]^T\in \Theta(N)$. \item $N^\ast/6,7$ gives $[0\ 0\ 0\ 1\ 1\ 0]^T\in \Theta(N)$. \item $N^\ast/5,7$ gives $[0\ 0\ 0\ 1\ 0\ 1]^T\in \Theta(N)$. \item $N^\ast/4,7$ gives $[0\ 0\ 0\ 0\ 1\ 1]^T\in \Theta(N)$. \end{itemize}
It is now straightforward to find theta-graphs in $\Theta(N)$ that are completed by the elements $[1\ 0\ 0\ 0\ 0\ 1]^T$, $[0\ 1\ 0\ 1\ 0\ 0]^T$, and $[0\ 0\ 1\ 0\ 1\ 0]^T$, so $\Theta(N)$ contains every vector of weight 1 or 2. Thus $\Theta(N)$ properly contains $M(K_7)$, so, by Proposition~\ref{mkorpgprop}, $\Theta(N)\cong P_6$. \end{proof}
\section{The Main Result} \label{main}
After a review of canonical tree decompositions, this section proves Theorem~\ref{mainresult}. For a set $\{M_1,M_2,\dots,M_n\}$ of matroids, a \emph{matroid-labelled tree} with vertex set $\{M_1,M_2,\dots,M_n\}$ is a tree $T$ such that \begin{enumerate} \item[(i)]{if $e$ is an edge of $T$ with endpoints $M_i$ and $M_j$, then $E(M_i)\cap E(M_j)=\{e\}$, and $\{e\}$ is not a separator of $M_i$ or $M_j$; and} \item[(ii)]{$E(M_i)\cap E(M_j)$ is empty if $M_i$ and $M_j$ are non-adjacent.} \end{enumerate} The matroids $M_1, M_2,\dots,M_n$ are called the \emph{vertex labels} of $T$. Now suppose $e$ is an edge of $T$ with endpoints $M_i$ and $M_j$. We obtain a new matroid-labelled tree $T/e$ by contracting $e$ and relabelling the resulting vertex with $M_i\oplus_2 M_j$. As the matroid operation of 2-sum is associative, $T/X$ is well defined for all subsets $X$ of $E(T)$.
Let $T$ be a matroid-labelled tree for which $V(T)=\{M_1,M_2,\dots,M_n\}$ and\\ $E(T)=\{e_1,e_2,\dots,e_{n-1}\}$. Then $T$ is a \emph{tree decomposition} of a connected matroid $M$ if \begin{enumerate} \item[(i)]{$E(M)=(E(M_1)\cup E(M_2)\cup\cdots\cup E(M_n))-\{e_1,e_2,\dots,e_{n-1}\}$;} \item[(ii)]{$\vert E(M_i)\vert\geq 3$ for all $i$ unless $\vert E(M)\vert<3$, in which case $n=1$ and $M=M_1$; and} \item[(iii)]{$M$ labels the single vertex of $T/E(T)$.} \end{enumerate} In this case, the elements $\{e_1,e_2,\dots,e_{n-1}\}$ are the \emph{edge labels} of $T$. Cunningham and Edmonds~\cite{cunned} (see also~\cite[Theorem 8.3.10]{oxley}) proved the next theorem, which says that $M$ has a \emph{canonical tree decomposition} that is unique to within relabelling of its edges.
\begin{theorem} \label{treedecomp} Let $M$ be a $2$-connected matroid. Then $M$ has a tree decomposition $T$ in which every vertex label is $3$-connected, a circuit, or a cocircuit, and there are no two adjacent vertices that are both labelled by circuits or are both labelled by cocircuits. Moreover, $T$ is unique to within relabelling of its edges. \end{theorem}
We now complete the proof of our main result.
\begin{proof}[Proof of Theorem~\ref{mainresult}] Since circuits, cycle matroids of complete graphs, and projective geometries are in $\Theta_3$, by Proposition~\ref{parconnprop}, every matroid that can be built from such matroids by a sequence of parallel connections is in $\Theta_3$.
To prove the converse, we begin by noting that loops can be added via direct sums and that parallel elements can be added via parallel connections of circuits, so we may assume $M$ is simple. Let $T$ be the canonical tree decomposition of $M$. The proof is by induction on $\vert V(T)\vert$.
If $\vert V(T)\vert = 1$, then $M$ is 3-connected and the result holds by Theorem~\ref{mainres3conn}. Now assume $T$ has at least two vertices, and let $N$ be a matroid labelling a leaf of $T$. Since $M$ is simple, $N$ is not a cocircuit. We may now write $M=N\oplus_2 M_1$, where, by Corollary~\ref{2summands}, $N$ and $M_1$ are in $\Theta_3$. Thus, by Theorem~\ref{mainres3conn}, $N$ is a circuit, the cycle matroid of a complete graph of rank at least three, or a projective geometry of rank at least three. Moreover, by induction, $M_1$ is a parallel connection of circuits, cycle matroids of complete graphs, and projective geometries.
Let $N_1$ be the label of the neighbor of $N$ in $T$, and suppose $N_1$ is not a cocircuit. In this case, each of $N$ and $N_1$ is a circuit, the cycle matroid of a complete graph of rank at least three, or a projective geometry of rank at least three, and they are not both circuits. Therefore, if $p$ is the basepoint of the 2-sum $N\oplus_2 N_1$, there are circuits in $N$ and $N_1$ that form a theta-graph that is completed by $p$, a contradiction. Thus $N_1$ is a cocircuit.
Now let $k$ be the degree of $N_1$ in $T$. Evidently $N_1$ has at least $k$ elements, but, since $M$ is simple, $N_1$ has at most $k+1$ elements. If $k=2$, then $N_1$ has three elements as $N_1$ labels a vertex of $T$. Otherwise $k\geq 3$, so there are circuits in $M$ that form a theta-graph that is completed by an element of $N_1$. Hence $N_1$ has $k+1$ elements, and therefore corresponds to a parallel connection of its neighbors. It now follows that $M$ is the parallel connection of circuits, cycle matroids of complete graphs, and projective geometries. \end{proof}
\end{document} |
\begin{document}
\title{Total Variation Restoration of Speckled Images\\ Using a Split-Bregman Algorithm}
\begin{abstract} Multiplicative noise models occur in the study of several coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. This type of noise is also commonly referred to as {\it speckle}. Multiplicative noise introduces two additional layers of difficulties with respect to the popular Gaussian additive noise model: (1) the noise is multiplied by (rather than added to) the original image, and (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of the multiplicative noise model preclude the direct application of state-of-the-art restoration methods, such as those based on the combination of total variation or wavelet-based regularization with a quadratic observation term. In this paper, we tackle these difficulties by: (1) using the common trick of converting the multiplicative model into an additive one by taking logarithms, and (2) adopting the recently proposed split Bregman approach to estimate the underlying image under total variation regularization. This approach is based on formulating a constrained problem equivalent to the original unconstrained one, which is then solved using Bregman iterations (equivalently, an augmented Lagrangian method). A set of experiments show that the proposed method yields state-of-the-art results. \end{abstract}
\begin{keywords} Speckle, multiplicative noise, total variation, Bregman iterations, augmented Lagrangian, synthetic aperture radar. \end{keywords}
\section{Introduction} \subsection{Coherent Imaging and Speckle Noise} The standard statistical models of coherent imaging systems, such as synthetic aperture radar/sonar (SAR/SAS), ultrasound imaging, and laser imaging, are supported on multiplicative noise mechanisms. With respect to a given resolution cell of the imaging device, a coherent system acquires the so-called in-phase and quadrature components\footnote{The in-phase and quadrature components are the outputs of two demodulators with respect to, respectively, $\cos(\omega_0 t)$ and $\sin(\omega_0 t)$, where $\omega_0$ is the carrier angular frequency.} which are collected in a complex reflectivity (with the in-phase and quadrature components corresponding to the real and imaginary parts, respectively). The complex reflectivity of a given resolution cell results from the contributions of all the individual scatterers present in that cell, which interfere in a destructive or constructive manner, according to their spatial configuration. When this configuration is random, it yields random fluctuations of the complex reflectivity, a phenomenon which is termed {\em speckle}. The statistical properties of speckle have been widely studied and there is a large body of literature \cite{conf:Goodman:JOSA:76}, \cite{Oliver}. Assuming no strong specular reflectors and a large number of randomly distributed scatterers in each resolution cell (relative to the carrier wavelength), the squared amplitude ({\em intensity}) of the complex reflectivity is exponentially distributed \cite{Oliver}. The term {\it multiplicative noise} is clear from the following observation: an exponential random variable can be written as the product of its mean value (parameter of interest) by an exponential variable of unit mean (noise). The scenario just described, known as {\em fully developed speckle}, leads to observed intensity images with a characteristic granular appearance due to the very low {\em signal to noise ratio} (SNR).
Notice that the SNR, defined as the ratio between the squared intensity mean and the intensity variance, is equal to one ($0\,$dB).
\subsection{Restoration of Speckled Images: Previous Work} A common approach to improving the SNR in coherent imaging consists in averaging independent observations of the same pixel. In SAR/SAS systems, this procedure is called {\em multi-look} ($M$-look, in the case of $M$ looks), and each independent observation may be obtained by a different segment of the sensor array. For fully developed speckle, the SNR of an $M$-look image is $M$. Another way to obtain an $M$-look image is to low-pass filter (with a moving average kernel with support size $M$) a 1-look fully developed speckle image, making evident the tradeoff between SNR and spatial resolution. A great deal of research has been devoted to developing nonuniform filters which average large numbers of pixels in homogeneous regions yet avoid smoothing across discontinuities in order to preserve image detail/edges \cite{art:FrostStilesPAMI82}. Many other speckle reduction techniques have been proposed; see \cite{Oliver} for a comprehensive literature review up to 1998.
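The SNR claim above is easy to check numerically: for fully developed speckle each look has a unit-mean exponential intensity, and averaging $M$ independent looks multiplies the SNR by $M$. A small Monte Carlo sketch (the values of $M$ and the sample size are our own arbitrary choices):

```python
import random

random.seed(1)
M, n = 8, 50_000          # number of looks and of simulated pixels (assumptions)

# Each M-look intensity is the average of M iid unit-mean exponential looks.
looks = [sum(random.expovariate(1.0) for _ in range(M)) / M for _ in range(n)]

mean = sum(looks) / n
var = sum((x - mean) ** 2 for x in looks) / n
snr = mean * mean / var   # squared intensity mean over intensity variance
print(snr)                # ≈ M = 8
```

For $M=1$ the same computation gives SNR $\approx 1$ ($0\,$dB), matching the remark at the end of the previous subsection.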
A common assumption is that the underlying reflectivity image is piecewise smooth. In image restoration under multiplicative noise, this assumption has been formalized using Markov random fields, under the Bayesian framework \cite{Oliver}, \cite{Bioucas-Dias98} and, more recently, using {\em total variation} (TV) regularization \cite{AubertAujol08}, \cite{Huang09}, \cite{RudinLionsOsher03}, \cite{ShiOsher07}.
\subsection{Contribution} In this paper, we adopt TV regularization. In comparison with the canonical additive Gaussian noise model, we face two difficulties: the noise is multiplicative; the noise is non-Gaussian, but follows Rayleigh or Gamma distributions. We tackle these difficulties by first converting the multiplicative model into an additive one (which is a common procedure) and then adopting the recently proposed split Bregman approach to solve the optimization problem that results from adopting a total variation regularization criterion.
Other works that have very recently addressed the restoration of speckled images using TV regularization include \cite{AubertAujol08}, \cite{Huang09}, \cite{RudinLionsOsher03}, \cite{ShiOsher07}. The commonalities and differences between our approach and the ones followed in those papers will be discussed after the detailed description of our method, since this discussion requires notation and concepts which will be introduced in the next section.
\section{Problem Formulation} Let $\bo{y}\in \mathbb{R}_{+}^{n}$ denote an $n$-pixel observed image, assumed to be a sample of a random image $\bo{Y}$, the mean of which is the underlying reflectivity image $\bo{x}\in \mathbb{R}_{+}^{n}$, {\it i.e.}, $\mathbb{E}[\bo{Y}] = \bo{x}$. Adopting a conditionally independent multiplicative noise model, we have \begin{equation} Y_i = x_i N_i ,\;\; \mbox{for\ $\; i=1,...,n$,}\label{eq:multiply} \end{equation} where ${\bf N}\in \mathbb{R}_{+}^{n}$ is an image of independent and identically distributed (iid) noise random variables with unit mean, $\mathbb{E}(N_i)=1$, following a common density $p_N$. For $M$-look fully developed speckle noise, $p_N$ is a Gamma density with $E[N]=1$, and $\sigma_N^2=1/M$, {\it i.e.}, \begin{equation}
\label{eq:Gamma}
p_N(n) = \frac{M^M}{\Gamma(M)}\; n^{M-1}e^{-nM}. \end{equation}
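As a sanity check, the density (\ref{eq:Gamma}) is a Gamma with shape $M$ and scale $1/M$, so it has unit mean and variance $1/M$. A quick simulation confirms this (the choice $M=4$ is ours):

```python
import random

random.seed(0)
M, n = 4, 200_000                 # looks (assumption) and sample size
# Gamma with shape M and scale 1/M has exactly the density p_N above.
noise = [random.gammavariate(M, 1.0 / M) for _ in range(n)]

mean = sum(noise) / n
var = sum((x - mean) ** 2 for x in noise) / n
print(mean, var)                  # ≈ 1.0 and 1/M = 0.25
```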
An additive noise model is obtained by taking logarithms of (\ref{eq:multiply}). For some pixel of the image, the observation model becomes \begin{equation}
\label{eq:log_observation}
\underbrace{\log Y}_G = \underbrace{\log x}_z + \underbrace{\log N}_W. \end{equation} The density of the random variable $W = \log N$ is \begin{equation}
p_W(w) = p_N(n=e^w)\,e^w = \frac{M^M}{\Gamma(M)}\; e^{Mw}e^{-e^w M}, \end{equation} thus \begin{equation}
p_{G|Z}(g|z) = p_W(g-z). \end{equation}
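The change of variables giving $p_W$ can be verified numerically: the density of $W=\log N$ should integrate to one. The sketch below is our own check (again with $M=4$), using `math.lgamma` to evaluate $\log\Gamma(M)$ stably:

```python
import math

M = 4                                   # number of looks (assumption)

def p_W(w):
    """Density of W = log N for M-look Gamma speckle, as derived above."""
    return math.exp(M * math.log(M) - math.lgamma(M) + M * w - M * math.exp(w))

# Left Riemann sum over a range that captures essentially all of the mass.
h = 0.001
total = sum(p_W(-8 + i * h) * h for i in range(int(16 / h)))
print(total)                            # ≈ 1.0
```

Note that $p_W$ is unimodal with its mode at $w=0$ (set $\frac{d}{dw}(Mw - Me^w)=0$), consistent with the noise having unit mean in the original domain.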
Under the regularization and Bayesian frameworks, the original image is inferred by solving a minimization problem with the form \begin{equation}
\widehat{\bo{z}} \in \arg\min_{\bo{z}} L(\bo{z}), \label{eq:L_uncons} \end{equation} where $L(\bo{z})$ is the penalized minus log-likelihood, \begin{eqnarray}
L(\bo{z}) & = & -\log p_{\bo{G}|\bo{Z}}(\bo{g}|\bo{z}) +\lambda\, \phi(\bo{z})\nonumber\\ & = & M \sum_{s=1}^n \left(z_s + e^{g_s-z_s}\right) + \lambda\, \phi(\bo{z}) + A,\label{eq:neg_like} \end{eqnarray} with $A$ an irrelevant additive constant, $\phi$ the penalty/regularizer (the negative log-prior, from the Bayesian perspective), and $\lambda$ the regularization parameter.
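Each data term $z_s + e^{g_s-z_s}$ in (\ref{eq:neg_like}) is strictly convex in $z_s$, and setting its derivative $1-e^{g_s-z_s}$ to zero shows the minimum is at $z_s=g_s$: without regularization, the estimate is just the log-observation. A quick numerical confirmation (the example values and $M=1$ are our own choices):

```python
import math

def data_term(z, g, M=1):
    """Negative log-likelihood term M * sum(z_s + exp(g_s - z_s))."""
    return M * sum(zs + math.exp(gs - zs) for zs, gs in zip(z, g))

g = [0.3, -1.2, 2.0]
best = data_term(g, g)                   # at z = g each exponential equals 1
for eps in (0.1, -0.1):
    z = [gs + eps for gs in g]
    assert data_term(z, g) > best        # any uniform perturbation increases it
print(best)                              # sum(g_s) + len(g) = 1.1 + 3 = 4.1
```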
In this work, we adopt the TV regularizer, that is, \begin{equation}
\phi(\bo{z}) = \mbox{TV}(\bo{z}) = \sum_{s=1}^n \sqrt{(\Delta^h_s\bo{z})^2+(\Delta^v_s\bo{z})^2},\label{eq:theTV} \end{equation} where $\Delta^h_s\bo{z}$ and $\Delta^v_s\bo{z}$ denote the horizontal and vertical first-order differences at pixel $s\in\{1,\dots,n\}$, respectively.
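A direct Python transcription of this isotropic TV, using forward first-order differences with zero differences at the image boundary (a common convention, assumed here rather than stated in the text), reads:

```python
import math

# Isotropic TV of a 2-D image z (list of rows), forward differences,
# zero-difference (replicate) boundary handling.
def tv(z):
    rows, cols = len(z), len(z[0])
    total = 0.0
    for i in range(rows):
        for j in range(cols):
            dh = z[i][j + 1] - z[i][j] if j + 1 < cols else 0.0
            dv = z[i + 1][j] - z[i][j] if i + 1 < rows else 0.0
            total += math.hypot(dh, dv)
    return total

# A constant image has TV = 0; a vertical edge of height 1 spanning two
# rows contributes 2.
flat = [[3.0, 3.0], [3.0, 3.0]]
edge = [[0.0, 1.0], [0.0, 1.0]]
```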
Each term $\left(z_s + e^{g_s-z_s} \right)$ of (\ref{eq:neg_like}), corresponding to the negative log-likelihood, is strictly convex and coercive, and therefore so is their sum. Since the TV regularizer is also convex (though not strictly so), the objective function $L$ possesses a unique minimizer \cite{CombettesSIAM}. In terms of optimization, these are desirable properties that would not hold had we formulated the inference in terms of the original variables $\bo{x}$, since the resulting negative log-likelihood is not convex; that was the approach followed in \cite{AubertAujol08} and \cite{RudinLionsOsher03}.
\section{Bregman/Augmented Lagrangian Approach}
There are several efficient algorithms to compute the TV regularized solution for a quadratic data term; for recent work, see \cite{Chambolle04}, \cite{ChanGolubMulet}, \cite{conf:Mario:BJN:ICIP:00}, \cite{GoldsteinOsher}, \cite{WangYangYinZhang}, \cite{ZhuWrightChan}, and other references therein. When the data term is not quadratic, as in (\ref{eq:neg_like}), the problem is more difficult and far less studied. Herein, we follow the split Bregman approach \cite{GoldsteinOsher} which is composed of the following two steps: (splitting) a constrained problem equivalent to the original unconstrained one is formulated; (Bregman) this constrained problem is solved using the Bregman iterative approach \cite{YinOsherGoldfarbDarbon}. Before describing these two steps in detail, we briefly review the Bregman iterative approach. The reader is referred to \cite{GoldsteinOsher}, \cite{YinOsherGoldfarbDarbon}, for more details.
\subsection{Bregman Iterations} Consider a constrained optimization problem of the form \begin{eqnarray} \nonumber \min_{\bf x} & & E({\bf x})\\ \mbox{s.t.} & &H({\bf x}) = 0,\label{eq:constrained_general} \end{eqnarray} with $E$ and $H$ convex, $H$ differentiable, and $\min_{\bf x}H({\bf x}) = 0$ . The so-called Bregman divergence associated with the convex function $E$ is defined as \begin{equation}
D_{E}^{\mathbf{p}}(\mathbf{x},\mathbf{y}) \equiv E(\mathbf{x})-E(\mathbf{y}) - \langle\mathbf{p},\mathbf{x}-\mathbf{y}\rangle, \end{equation} where $\mathbf{p}$ belongs to the subgradient of $E$ at ${\bf y}$, {\it i.e.},
$$\mathbf{p}\ \in\ \partial E(\mathbf{y})=\{\mathbf{u}:E(\mathbf{x})\geq E(\mathbf{y})+\langle\mathbf{u},\mathbf{x}-\mathbf{y}\rangle, \ \forall \mathbf{x} \in \mathbf{dom} E \}. $$
The Bregman iteration is given by \begin{eqnarray}
{\bf x}^{k+1} &=& \arg\min_{\bf x} D_{E}^{\mathbf{p}^k}(\mathbf{x},\mathbf{x}^{k}) + H({\bf x}) \label{eq:updatex} \\ &=& \arg\min_{\bf x} E({\bf x}) - \langle\mathbf{p}^k,\mathbf{x}-\mathbf{x}^k\rangle + H({\bf x}),\label{eq:updatex2} \end{eqnarray} where $\mathbf{p}^k\in\partial E(\mathbf{x}^k)$. It has been shown that this procedure converges to a solution of (\ref{eq:constrained_general}) \cite{GoldsteinOsher},\cite{YinOsherGoldfarbDarbon}.
Concerning the update of $\mathbf{p}^k$, we have from \eqref{eq:updatex} that $\mathbf{0}\in\partial (D_{E}^{\mathbf{p}}(\mathbf{x},\mathbf{x}^{k}) + H({\bf x}))$, when this sub-differential is evaluated at $\mathbf{x}^{k+1}$, that is, $$\mathbf{0}\in\partial (D_{E}^{\mathbf{p}}(\mathbf{x}^{k+1},\mathbf{x}^{k}) + H({\bf x}^{k+1})).$$ Since $H$ was assumed differentiable, and since $\mathbf{p}^{k+1}$ must belong to $\partial E(\mathbf{x}^{k+1})$, $\mathbf{p}^{k+1}$ should be chosen as \begin{equation}
\mathbf{p}^{k+1} = \mathbf{p}^{k} - \nabla H(\mathbf{x}^{k+1}). \end{equation}
In the particular case where $H(\bo{x}) = (\tau/2)\|\bo{Ax-b}\|_2^2$, it can be shown (see \cite{YinOsherGoldfarbDarbon}) that the iteration (\ref{eq:updatex2}) is equivalent to \begin{eqnarray}
{\bf x}^{k+1} &=& \arg\min_{\bf x} E(\bo{x}) + \frac{\tau}{2} \|\bo{Ax - b}^k\|_2^2 \label{eq:Bregman1a}\\
\bo{b}^{k+1} & = & \bo{b} + \bo{b}^k - \bo{A}\bo{x}^{k+1} .\label{eq:Bregman1b} \end{eqnarray}
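The iteration (\ref{eq:Bregman1a})-(\ref{eq:Bregman1b}) can be illustrated on a toy scalar problem; the sketch below (all values illustrative, not from the text) minimizes $E(x)=(x-c)^2$ subject to $x=0$, i.e.\ $A=1$, $\bo{b}=0$:

```python
# Toy scalar Bregman iteration: minimize (x - c)^2 subject to x = 0.
c, tau = 1.0, 1.0
x, bk = 0.0, 0.0
for _ in range(100):
    # x-update: argmin_x (x - c)^2 + (tau/2) * (x - bk)^2  (closed form)
    x = (2.0 * c + tau * bk) / (2.0 + tau)
    # "add back the residual": b^{k+1} = b + b^k - A x^{k+1}, with b = 0
    bk = bk - x
# x converges to the constrained solution x = 0, bk to -2c/tau
```

The residual feedback through $b^k$ is what lets each subproblem stay well conditioned while still enforcing the constraint exactly in the limit.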
\subsection{Splitting the Problem into a Constrained Formulation} The original unconstrained problem (\ref{eq:L_uncons}) is equivalent to the constrained formulation
\begin{eqnarray}
\label{eq:L_cons}
(\widehat{\bo{z}},\widehat{\bo{u}}) & = & \arg\min_{\bo{z},\bo{u}} L(\bo{z},\bo{u})\\
\label{eq:z_u_c}
\mbox{s.t.} & & \|\bo{z}-\bo{u}\|_2^2 = 0, \end{eqnarray}
with \begin{eqnarray}
\label{eq:L_cons_obj}
L(\bo{z},\bo{u}) = M \sum_{s=1}^n \left( z_s + e^{g_s-z_s}\right) + \lambda\,\mbox{TV}(\bo{u}). \end{eqnarray}
Notice how the original variable (image) $\bo{z}$ is split into a pair of variables $(\bo{z,u})$, which are decoupled in the objective function (\ref{eq:L_cons_obj}).
\subsection{Applying Bregman Iterations} Notice that the problem (\ref{eq:L_cons})-(\ref{eq:z_u_c}) has exactly the form (\ref{eq:constrained_general}), with ${\bf x} \equiv [\bo{z}^T \bo{u}^T]^T$,
$E({\bf x}) \equiv L(\bo{z,u})$, and $H({\bf x}) \equiv (\tau/2) \|{\bf A x}-{\bf b}\|_2^2$, with ${\bf A} = [{\bf I}, -{\bf I}]$ and ${\bf b} = \bo{0}$. Using this equivalence, the Bregman iteration (\ref{eq:Bregman1a})-(\ref{eq:Bregman1b}) becomes \begin{eqnarray}
\label{eq:Breg_simpler_iter}
(\bo{z}^{k+1}, \bo{u}^{k+1}) & = & \arg\min_{\bo{z},\bo{u}}
L(\bo{z},\bo{u})+\frac{\tau}{2}\| \bo{z}-\bo{u}-\bo{b}^k\|^2,\label{eq:Bregman_u_z}\\
\bo{b}^{k+1} & = & \bo{b}^{k} - (\bo{z}^{k+1} -\bo{u}^{k+1}). \end{eqnarray}
We address the minimization in (\ref{eq:Bregman_u_z}) using an alternating minimization scheme with respect to ${\bf u}$ and ${\bf z}$. The complete resulting algorithm is summarized in Algorithm 1.
\begin{algorithm} \label{alg:} \caption{TV restoration of multilook images.} \begin{algorithmic}[1] \REQUIRE $\bo{z}=0$, $\bo{u}=0$, $\bo{b}=0$, $\lambda$, $\tau$, $k := 1$.
\REPEAT
\FOR{$t=1:t_m$}
\STATE $\bo{z}^k := \arg\min_{\bo{z}} \sum_{s=1}^n \left( z_s +
e^{g_s-z_s}\right)+\frac{\tau}{2M}\| \bo{z}\!-\!\bo{u}^k\!-\!\bo{b}^k\|^2$
\STATE $\bo{u}^k := \arg\min_{\bo{u}} \frac{1}{2}\|
\bo{u}-\bo{z}^k+\bo{b}^k\|^2+\frac{\lambda}{\tau}\,\mbox{TV}(\bo{u})$.
\ENDFOR
\STATE $\bo{b}^{k+1} := \bo{b}^{k} - (\bo{z}^{k} -\bo{u}^{k})$
\STATE $k := k + 1$
\UNTIL{$\|{\bf z}^{k}-{\bf z}^{k-1}\|_2^2/\|{\bf z}^{k-1}\|_2^2 < 10^{-4}$}
\end{algorithmic}
\end{algorithm}
The minimization with respect to $\bo{z}$, in line 3, has a closed-form solution in terms of the Lambert W function \cite{Corless}. However, we found that running just four iterations of Newton's method yields a faster solution. Notice that the minimization in line 3 is in fact a set of $n$ decoupled scalar minimizations. For the minimization with respect to $\bo{u}$ (line 4), which is a TV denoising problem, we run a few iterations (typically 10) of Chambolle's algorithm \cite{Chambolle04}. The number of inner iterations $t_m$ was set to one in all the experiments reported below. The stopping criterion (line 8) is the same as in \cite{Huang09}. The estimate of ${\bf x}$ produced by the algorithm is naturally $\widehat{\bf x} = e^{{\bf z}^k}$, component-wise.
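The scalar Newton solve used in line 3 can be sketched as follows; the starting point and the parameter values are assumptions for illustration (in the algorithm, $a$ would be $u_s+b_s$ and $t=\tau/M$):

```python
import math

# Scalar z-update: minimize z + exp(g - z) + (t/2) * (z - a)^2.
# Optimality condition: f(z) = 1 - exp(g - z) + t*(z - a) = 0,
# with f'(z) = exp(g - z) + t > 0 (the problem is strictly convex).
def z_update(g, a, t, iters=4, z=None):
    if z is None:
        z = g  # a reasonable starting point (illustrative choice)
    for _ in range(iters):
        f = 1.0 - math.exp(g - z) + t * (z - a)
        fp = math.exp(g - z) + t
        z -= f / fp
    return z

z = z_update(g=0.0, a=1.0, t=1.0, iters=6)
# with these values the optimality condition reduces to z = exp(-z)
```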
Notice how the split Bregman approach converted a difficult problem involving a non-quadratic term and a TV regularizer into two simpler problems: a decoupled minimization problem (line 3) and a TV denoising problem with a quadratic data term (line 4).
\subsection{Remarks} In the case of linear constraints, the Bregman iterative procedure defined in (\ref{eq:updatex2}) is equivalent to an augmented Lagrangian method \cite{Nocedal}; see \cite{TaiWu}, \cite{YinOsherGoldfarbDarbon} for proofs. It is known that the augmented Lagrangian is better conditioned than the standard Lagrangian for the same problem, so better numerical behavior is to be expected.
TV-based image restoration under multiplicative noise was recently addressed in \cite{ShiOsher07}. The authors apply an inverse scale space flow, which converges to the solution of the constrained problem of minimizing $\mbox{TV}({\bf z})$ under an equality constraint on the log-likelihood; this requires a carefully chosen stopping criterion, because the solution of this constrained problem is not a good estimate.
In \cite{Huang09}, a splitting of the variable is also used to obtain an objective function with the form \begin{equation}
E({\bf z,u}) = L({\bf z,u}) + \alpha \|{\bf z - u}\|_2^2; \end{equation} this is the so-called splitting-and-penalty method. Notice that the minimizers of $E({\bf z,u})$ converge to those of (\ref{eq:L_cons})-(\ref{eq:z_u_c}) only when $\alpha$ approaches infinity. However, since $E({\bf z,u})$ becomes severely ill-conditioned when $\alpha$ is very large, causing numerical difficulties, it is only practical to minimize $E({\bf z,u})$ with moderate values of $\alpha$; consequently, the solutions obtained are not minima of the regularized negative log-likelihood (\ref{eq:neg_like}).
\section{Experiments} In this section we report experimental results comparing the performance of the proposed approach with that of the recent state-of-the-art methods in \cite{AubertAujol08} and \cite{Huang09}. All the experiments use synthetic data, in the sense that the observed image is generated according to (\ref{eq:multiply})-(\ref{eq:Gamma}), where ${\bf x}$ is a clean image. As in \cite{Huang09}, we select the regularization parameter $\lambda$ by searching for the value leading to the lowest mean squared error with respect to the true image. The algorithm is initialized with the observed noisy image. The quality of the estimates is assessed using the relative error (as in \cite{Huang09}), \[
\mbox{Err} = \frac{\|\widehat{\bf x} - {\bf x}\|_2}{\|{\bf x}\|_2}. \]
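In Python, this figure of merit is simply (vectors are plain lists here, for illustration):

```python
import math

# Relative error ||x_hat - x||_2 / ||x||_2 used to assess the estimates.
def rel_err(x_hat, x):
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(x_hat, x)))
    den = math.sqrt(sum(b ** 2 for b in x))
    return num / den

# e.g. a uniform 10% per-pixel overestimate gives Err = 0.1
x = [1.0, 2.0, 3.0]
x_hat = [1.1, 2.2, 3.3]
```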
Table~\ref{tab:results} reports the results obtained using Lena and Cameraman as original images, for the same values of the number of looks ($M$ in (\ref{eq:Gamma})) as used in \cite{Huang09}. In these experiments, our method always achieves lower relative errors with fewer iterations than the methods from \cite{Huang09} and \cite{AubertAujol08} (the results for the algorithm from \cite{AubertAujol08} are those reported in \cite{Huang09}). It is important to point out that the computational cost of each iteration of the algorithm of \cite{Huang09} is essentially the same as that of our algorithm.
\begin{table} \centering \caption{Experimental results (Iter denotes the number of iterations; Cam. is the Cameraman image).}\label{tab:results}
\begin{tabular}{l l | l l | l l | l l } \hline\hline
& & \multicolumn{2}{c|}{Proposed} & \multicolumn{2}{c|}{\cite{Huang09}} & \multicolumn{2}{c}{\cite{AubertAujol08}}\\ Image & $M$ & Err & Iter & Err & Iter & Err & Iter \\ \hline {\footnotesize Lena} & 5 & {\footnotesize 0.1134} & 53 & {\footnotesize 0.1180} & 115 & {\footnotesize 0.1334} & 652 \\ {\footnotesize Lena} & 33 & {\footnotesize 0.0688} & 23 & {\footnotesize 0.0709} & 178 & {\footnotesize 0.0748} & 379 \\ {\footnotesize Cam.} & 3 & {\footnotesize 0.1331} & 100 & {\footnotesize 0.1507} & 182 & {\footnotesize 0.1875} & 1340 \\ {\footnotesize Cam.} & 13 & {\footnotesize 0.0892} & 97 & {\footnotesize 0.0989} & 196 & {\footnotesize 0.1079} & 950 \\ \hline \end{tabular} \end{table}
Figure~\ref{fig:images} shows the noisy and restored images, for the same experiments reported in Table~\ref{tab:results}. Finally, Figure~\ref{fig:plots}
plots the evolution of the objective function $L({\bf z}^k)$ and of the constraint function $\|{\bf z}^k - {\bf u}^k\|_2^2$ along the iterations, for the example with the Cameraman image and $M=3$. Observe the extremely low value of $\|{\bf z}^k - {\bf u}^k\|_2^2$ at the final iterations, showing that, for all practical purposes, the constraint (\ref{eq:z_u_c}) is satisfied.
\begin{figure}
\caption{Left column: observed noisy images. Right column: image estimates. First and second rows: Lena, $M=5$ and $M=33$. Third and fourth rows: Cameraman, $M=3$ and $M=13$.}
\label{fig:images}
\end{figure}
\begin{figure}
\caption{Evolution of the objective function $L({\bf z}^k)$ and of the constraint function
$\|{\bf z}^k - {\bf u}^k\|_2^2$, along the iterations of the algorithm, for the experiment with the Cameraman image and $M=3$. }
\label{fig:plots}
\end{figure}
\section{Concluding Remarks} We have proposed an approach to total variation denoising of images contaminated by multiplicative noise, by exploiting a split Bregman technique. The proposed algorithm is very simple and, in the experiments reported herein, exhibited state-of-the-art performance and speed. We are currently working on extending our method to problems involving linear observation operators ({\it e.g.}, blur) and other related noise models, such as Poisson noise.
\footnotesize
\end{document}
\begin{document}
\title{Stabilities of one-dimensional stationary states of Bose-Einstein condensates} \begin{abstract}
We explore the dynamical stabilities of a quasi-one-dimensional (1D) Bose-Einstein condensate (BEC) consisting of a fixed number $N$ of atoms in a time-independent external potential. For the stationary states with zero flow density, the general solution of the perturbed time-evolution equation is constructed, and stability criteria concerning the initial conditions and system parameters are established. Taking the lattice-potential case as an example, the stability and instability regions in parameter space are found. The results suggest a method for selecting experimental parameters and adjusting initial conditions so as to suppress the instabilities.
\textbf{Keywords}: Bose-Einstein condensate, Lyapunov stability, stability criterion, stability region, lattice potential
PACS numbers: 03.75.Kk, 32.80.Pj, 03.75.Lm, 05.45.-a
\end{abstract}
\section{Introduction}
Experimental observation of atomic-gas Bose-Einstein condensates (BECs) has greatly stimulated the study of macroscopic quantum phenomena with nonlinearity. In the mean-field regime, where BECs are governed by the Gross-Pitaevskii equation (GPE), a stationary state can be observed in experiments only if it corresponds to a stable solution of the GPE. For the purpose of applications, studies of the stability and instability of the solutions of the GPE are therefore necessary and important \cite{Chin}-\cite{Saito}. Recently, the instabilities of BECs have attracted much interest, and corresponding experimental \cite{Chin, Fallani, Burger} and theoretical \cite{Zheng}-\cite{FAbdullaev} works have been reported for various BEC systems. Several different notions of instability, such as the Landau instability \cite{Wu, Machholm}, dynamical instability \cite{Bronski}, quantum instability \cite{Shchesnovich}, parametric instability \cite{Genkin} and modulational instability \cite{Konotop}, have been employed. The methods used include the characteristic-exponent technique \cite{Zheng}, the Gaussian variational approach \cite{Abdullaev}, and numerical simulations of the partial differential equations \cite{Wu, Bronski, Deconinck}. The reported results show that the instabilities are associated with BEC collapse \cite{Konotop, Kagan}, implosion and chaos \cite{Saito2}-\cite{Xia}, the dynamical superfluid-insulator transition \cite{Smerzi}, and the formation and evolution of matter-wave bright solitons \cite{Strecker, Carr, Salasnich}. In order to stabilize BECs \cite{Saito}, some stability criteria \cite{Berge} and parameter regions \cite{Zheng, Wu, Luo, Montina} have been demonstrated. Most of these works focus on stability under random perturbations. Investigating stability under controllable perturbations, experimentally \cite{Burger} and theoretically \cite{Wu}, has also become a challenging problem.
In the sense of Lyapunov, instability entails that initially small deviations from the unperturbed state grow without bound. We shall restrict the dynamical instability to the particular case of nonzero characteristic exponents, such that small deviations from the unperturbed state grow exponentially fast \cite{Wu, Bronski}. All of the above-mentioned investigations of dynamical stability and instability are based on this type of instability. By control of instability we mean inducing transitions from unstable states to stable ones. Realizing such control requires selecting the system parameters to lie in the stability regions, or initially applying a controllable perturbation as a control signal to suppress the growth of the perturbed solutions. Any experiment contains a degree of noise, which leads to random perturbations of the system. Therefore, in order to suppress known unstable motions, we have to adjust the system initially by using a control signal stronger than the noise.
In previous work, we investigated the stabilities of BECs for time-dependent chaotic states \cite{whai, Chong} and dissipative cases \cite{Luo}. In this paper, we consider the dynamical stability of the stationary states of a quasi-1D BEC consisting of a fixed number $N$ of atoms, with a time-independent external potential and atomic scattering length. It will be demonstrated that, in the case of zero flow density, the bounded perturbed solutions depend on the external potential, the condensed atom number, and the initial disturbances. This dependence implies that a stationary state of the BEC is certainly stable only in a given parameter region, and that possible instabilities can be suppressed by suitable initial adjustments. We take a BEC held in an optical lattice as an example to illustrate the results on stability, instability and undetermined stability. The results contain the known analytical assertions for the optical-potential case \cite{Wu, Bronski} and supply a method for selecting experimental parameters and adjusting initial conditions to establish stable motions of the BEC.
\section{Linearized equations and their solutions in the case of zero flow density} We start with the dimensionless quasi-1D GPE \cite{Bronski,Dalfovo, Leggett} \begin{eqnarray}
i \psi_t=- \frac 1 2 \psi_{xx} + [V(x) +g_{1}|\psi|^2]\psi, \end{eqnarray} where the suitable units with $\hbar=m=1$ have been considered, $V(x)$ denotes the external potential, the quasi-1D interaction intensity $g_{1}$ is related to the $s$-wave scattering length $a_s$, atomic mass $m$ and the transverse trap frequency $\omega_r$
\cite{Gardiner, Hai2}, and the normalized wave-function $\psi$ has norm $|\psi|^2$ equal to the linear density of atomic number \cite{Bronski, Leggett}. It is well known that different solutions of a nonlinear equation may possess different stabilities. Here we study stability only for the stationary-state solution of the form \begin{eqnarray} \psi_0=R(x)\exp [i\theta(x)-i\mu t], \end{eqnarray} where $\mu$ is the chemical potential, and $R(x)$ and $\theta(x)$ represent the module and phase, both real functions. In the considered units, the phase gradient $\theta_x$ equals the flow velocity field. Given the module, we define the useful Hermitian operators \cite{Bronski} \begin{eqnarray} L_n=-\frac 1 2 \frac{\partial ^2}{\partial x ^2}+n g_1 R^2+V(x)-\mu, \ \ \ for \ \ n=1,3. \end{eqnarray} Then inserting Eq. (2) into Eq. (1) gives the equation \begin{eqnarray} L_1 R(x)=0. \end{eqnarray} Here we have assumed that the flow velocity field and current density are zero.
We now investigate the stability of stationary state Eq. (2) by using the linear stability analysis, which is associated with boundedness of the perturbed solution \cite{Bronski, Berge} \begin{eqnarray} \psi=[R(x)+\varepsilon \phi_1(x,t)+i \varepsilon \phi_2(x,t)]\exp [i\theta(x)-i\mu t], \end{eqnarray}
where the perturbed correction $\varepsilon\phi_i(x,t)$ is a real function with constant $|\varepsilon|\ll 1$. Substituting Eqs. (5) and (4) into Eq. (1) yields the linearized equations \cite{Bronski} \begin{eqnarray} \phi_{1t}=L_1 \phi_2, \ \ \ \ \ \phi_{2t}=-L_3\phi_1. \end{eqnarray} For most external potentials $V(x)$ we cannot derive exact solutions from Eq. (1) or Eq. (4), so the operators $L_n$ cannot be determined exactly. In the case of an optical lattice potential some special exact solutions have been found \cite{Bronski, Deconinck, Hai2}; however, solving Eq. (6) for the general solution is still difficult. Therefore, we focus our attention on the dynamical stability associated with perturbed solutions of space-time separated form, \begin{eqnarray} \phi_i(x,t)=T_i(t)\varphi_i(x), \ \ \ for \ \ \ i=1,2. \end{eqnarray} Note that since $\phi_i$ is real, $T_i$ and $\varphi_i$ must be simultaneously real or simultaneously imaginary, the difference between the two cases being only a sign $``-"$ of $\phi_i$. We take $T_1, \varphi_1, T_2$, and $\varphi_2$ real without loss of generality, since changes of the signs of $\phi_i$ do not affect the stability analysis. We shall discuss how to establish the sufficient conditions for stability as follows.
Combining Eq. (6) with Eq. (7), we get the coupled ordinary differential equations \begin{eqnarray} \dot T_1(t)&=&\lambda_1 T_2(t), \ \ \dot T_2(t)=-\lambda_2 T_1(t); \\ L_3 \varphi_1(x)&=&\lambda_2 \varphi_2(x), \ \ L_1 \varphi_2(x)=\lambda_1 \varphi_1(x). \end{eqnarray} Here $\lambda_i$ is the real eigenvalue determined by the initial perturbations $\dot T_i(0),T_i(0)$. The corresponding decoupled equations are derived easily from the coupled ones as \begin{eqnarray} \ddot T_i(t)=&-& \lambda_1 \lambda_2 T_i(t), \lambda_1=\frac{\dot T_1(0)}{T_2(0)}, \lambda_2=-\frac{\dot T_2(0)}{T_1(0)}; \\ L_1 L_3 \varphi_1&=&\lambda_1 \lambda_2 \varphi_1, \ \ \ \ L_3 L_1 \varphi_2=\lambda_1 \lambda_2 \varphi_2. \end{eqnarray} Obviously, the general solutions of Eq. (10) can be written as the exponential functions \begin{eqnarray} T_i=A_ie^{\lambda t}+B_i e^{-\lambda t}, \ \ \ \lambda=\sqrt{-\lambda_1 \lambda_2}, \ \ \ \ \ \ \ \ \ \ \ \nonumber \\ A_i=\frac 1 2 \Big[T_i(0)+\frac{1}{\lambda}\dot T_i(0)\Big], \ B_i=\frac 1 2 \Big[T_i(0)-\frac{1}{\lambda}\dot T_i(0)\Big], \end{eqnarray} where $A_i, B_i$ are real or complex constants, which make $T_i(t)$ the real functions. Based on the existence of bounded eigenstates $\varphi_i(x)$, the results are classified as the three cases:
(i) {\bf Stability criterion}: The eigenstates of Eq. (11) are bounded if and only if their eigenvalues are positive, $\lambda_1 \lambda_2=-\lambda^2>0$, which makes $\lambda$ an imaginary constant and $T_i$ periodic functions.
(ii) {\bf Instability criterion}: One can find a negative eigenvalue $\lambda_1 \lambda_2=-\lambda^2<0$ associated with a set of bounded eigenstates of Eq. (11), which makes $T_i$ a real exponential function.
(iii) {\bf Undetermined stability}: One cannot determine whether all eigenvalues of the bounded eigenstates of Eq. (11) are positive. In this case, we can use criterion (i) to control the possible instability of case (ii).
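These cases can be checked numerically. The Python sketch below (all parameter values are illustrative assumptions) compares the closed-form solution (12) with a direct Runge-Kutta integration of the coupled system (8); since $\lambda_1\lambda_2>0$ here, $\lambda$ is imaginary and $T_1(t)$ stays bounded:

```python
import cmath

# Illustrative values: l1*l2 > 0, so lam = sqrt(-l1*l2) is imaginary.
l1, l2 = 2.0, 3.0
T10, T20 = 1.0, 0.5
dT10 = l1 * T20          # dT1/dt at t = 0, from Eq. (8)

lam = cmath.sqrt(-l1 * l2)
A1 = 0.5 * (T10 + dT10 / lam)
B1 = 0.5 * (T10 - dT10 / lam)

def T1_closed(t):
    # closed form (12); the imaginary parts cancel, leaving a real signal
    return (A1 * cmath.exp(lam * t) + B1 * cmath.exp(-lam * t)).real

def rhs(T1, T2):
    # coupled system (8): dT1/dt = l1*T2, dT2/dt = -l2*T1
    return l1 * T2, -l2 * T1

# RK4 integration up to t = 1
T1, T2 = T10, T20
h, steps = 1e-3, 1000
for _ in range(steps):
    k1 = rhs(T1, T2)
    k2 = rhs(T1 + 0.5 * h * k1[0], T2 + 0.5 * h * k1[1])
    k3 = rhs(T1 + 0.5 * h * k2[0], T2 + 0.5 * h * k2[1])
    k4 = rhs(T1 + h * k3[0], T2 + h * k3[1])
    T1 += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
    T2 += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
# T1 should agree with the closed form at t = 1
```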
\section{Stability regions on the parameter space and control of instability}
It is interesting to note that if the initial perturbations can be determined, the dynamical instability of the real-$\lambda$ case can be controlled by adjusting the initial disturbances to obey $A_i=0$, which suppresses the exponentially rapid growth of $T_i$ in Eq. (12). From Eqs. (12) and (10) we establish the control criterion for the instability as $ \dot T_i(0)=- \lambda T_i(0).$ However, for random initial perturbations such control is difficult to achieve, since we cannot determine the initial values $ \dot T_i(0)$ and $ T_i(0).$ Therefore, in the case of random perturbations we are interested in determining the common eigenvalue $\lambda_1 \lambda_2$ of the operators $L_1 L_3$ and $L_3 L_1$, since stability is established if and only if this eigenvalue is positive, so that $\lambda^2=-\lambda_1 \lambda_2<0$. Let $\alpha\ge \alpha_g$ and $\beta\ge \beta_g$ be the eigenvalues of the operators $L_1$ and $L_3$, determined by the eigenequations $L_1 u(x)=\alpha u(x)$ and $L_3v(x)=\beta v(x)$, with $u$ and $v$ their eigenfunctions, where $\alpha_g$ and $\beta_g$ denote the corresponding ground-state eigenvalues, respectively. From Eq. (3) we have the relation $L_3= L_1+2g_1 R^2$, which implies $\alpha_g <\beta_g$ for $g_1>0$ and $\alpha_g>\beta_g$ for $g_1<0$. Clearly, Eq. (4) is one of the eigenequations of $L_1$, with eigenvalue $\alpha=0$, so the ground-state eigenvalue obeys $\alpha_g\le 0$ for any $g_1$. Then $\beta_g$ can be positive or negative for $g_1>0$, while $\beta_g<0$ for $g_1<0$.
From the above-mentioned results we establish the stability and instability conditions:
Case $g_1>0$: The sufficient condition of stability is $\alpha_g = 0$, since such a ground-state eigenvalue implies $\alpha \ge 0$ and $\beta> 0$ for all eigenstates, so that the well-known spectral theorem gives \cite{Bronski, Deconinck, Courant} $\lambda_1 \lambda_2\ge 0$. The corresponding sufficient condition of instability reads $\alpha_g < 0$ and $\beta_g> 0$.
Case $g_1<0$: The ground state eigenvalues satisfy the inequality $\beta_g<\alpha_g\le 0$. So the sufficient condition of instability is $\alpha_g= 0$.
In all other cases, we do not know whether $\lambda_1 \lambda_2$ is certainly positive or negative, so the linear stabilities are analytically undetermined. It is worth noting that Eq. (4) implies that $R(x)$ is one of the eigenstates of $L_1$, with eigenvalue $\alpha_R=0$. Therefore, if $R(x)$ is a ground state, the above stability and instability conditions indicate that this state is stable for $g_1>0$, and unstable (or metastable) for $g_1<0$.
\bf Note that all of the above results are valid for an arbitrary time-independent potential. \rm We take a BEC held in an optical lattice as a concrete physical example to illustrate these results. \bf In the lattice potential case, \rm the above sufficient conditions agree with the stability and instability criteria established by the authors of Ref. \cite{Bronski}. We shall apply the sufficient stability and instability conditions to find the corresponding stability and instability regions in parameter space, and apply these results to study the stabilization of the considered BEC system.
For an arbitrary time-independent potential, the eigenequation $L_1 u=\alpha u$ can be rewritten in the integral form \cite{whai2} \begin{eqnarray} u&=&u_1+u_2, \nonumber \\ u_1&=&q^{-1}e^{-qx}\int e^{qx}f u dx, \ u_2=-q^{-1}e^{qx}\int e^{-qx}f u dx, \nonumber \\ f&=&q^2/2+\alpha+\mu-V(x)-g_1R^2(x), \end{eqnarray} where $q>0$ is a real constant. This integral equation can be verified directly by taking the second derivative of both sides. The integrals in Eq. (13) are indefinite, which means that the solutions are defined up to two additive constants. Meanwhile, the eigenequation $L_1 u=\alpha u$ is a second-order equation, which likewise implies two arbitrary constants determined by the boundary conditions. It is these two additive constants that make the integral equation (13) completely equivalent to the eigenequation. Stability requires the eigenstate to be bounded, and any possible bounded solution $u$ must satisfy the boundedness condition $\lim_{x\rightarrow\pm\infty}\int e^{\mp qx}f udx=0$. Under this condition, and for the \bf lattice potential case\rm, we can apply l'H\^{o}pital's rule to get the superior limit \cite{Chong} \begin{eqnarray} \overline {\lim_{x\rightarrow\pm\infty}}u\le \overline{\lim_{x\rightarrow\pm\infty}}u_1+\overline{\lim_{x\rightarrow\pm\infty}}u_2 =2q^{-2}\overline{\lim_{x\rightarrow\pm\infty}}(f u). \end{eqnarray} Note that the usual limit does not exist here, because of the periodicity of the lattice potential. It is clear that the solution of the linear equation $L_1 u=\alpha u$ can be taken as $u(x)=Au'(x)$, with arbitrary constant $A$ and any solution $u'(x)$, so that one can always select $u$ to obey $\overline{\lim}_{x\rightarrow\pm\infty}u>0$. Thus Eq. (14) implies $2q^{-2}\overline{\lim}_{x\rightarrow\pm\infty}f\ge 1$, namely \begin{eqnarray} \alpha\ge -\{\mu+\overline{\lim_{x\rightarrow\pm\infty}}[-V(x)-g_1R^2(x)]\}=\alpha_g. 
\end{eqnarray} For the eigenequation $L_3 v=\beta v$, replacing $g_1$ by $3g_1$, the same calculation gives \begin{eqnarray} \beta\ge -\{\mu+\overline{\lim_{x\rightarrow\pm\infty}}[-V(x)-3g_1R^2(x)]\}=\beta_g. \end{eqnarray} Combining Eq. (15) with the sufficient stability condition \cite{Bronski} $\alpha_g= 0$ for $g_1>0$, we get the parameter region of stability \begin{eqnarray} \mu=\mu_s = -\overline{\lim_{x\rightarrow\pm\infty}}\ [-V(x)-g_1R^2(x)] \ \ \ for \ \ g_1>0, \end{eqnarray} which contains the relation among $\mu, g_1$ and the potential parameters. Applying Eqs. (15) and (16) to the sufficient instability conditions \cite{Bronski} $\alpha_g< 0,\ \beta_g> 0$ for $g_1>0$ and $\alpha_g= 0$ for $g_1<0$, we get the parameter regions of instability \begin{eqnarray} &-&\overline{\lim_{x\rightarrow\pm\infty}}\ [-V(x)-g_1R^2(x)]< \mu=\mu_{in} < -\overline{\lim_{x\rightarrow\pm\infty}}\ [-V(x)-3g_1R^2(x)] \ \ for \ g_1>0;\nonumber \\ & & \mu_{in} = -\overline{\lim_{x\rightarrow\pm\infty}}\ [-V(x)-g_1R^2(x)] \ \ \ for \ \ g_1<0. \end{eqnarray} By these sufficient conditions we mean that the stationary state $R(x)e^{-i\mu t}$ of Eq. (1) is certainly stable for $\mu$ values in the parameter region fixed by Eq. (17), and certainly unstable for $\mu$ values in either region of Eq. (18). Outside these regions, the dynamical stabilities are undetermined.
We now examine the physical meaning of the stability relation in Eq. (17) for the stationary states of a BEC with zero current density. Writing the sum of the external potential and the internal interaction as $U(x)=V(x)+g_1R^2(x)$, with periodic $V(x)$ and bounded $R(x)$, if $U(x)\ge B$ holds for all $x$ and a fixed constant $B$, then Eq. (17) implies $\mu_s= B\le U(x)$. That is, the sufficient stability condition means that if the chemical potential equals the minimum of $U(x)$, the considered states are certainly stable. For a known state, stability can easily be examined by using Eq. (17). We have tested the exact solutions given in Ref.
\cite{Bronski} for the potential $V(x)=-V_0 sn^2(x,k)$ and found that some of them have the instabilities and undetermined stabilities, where $|V_0|$ is the potential depth and $sn(x,k)$ the Jacobian elliptic sine function with $k$ being the modulus. Substituting one of the exact solutions, $g_1R^2(x)=-(1+V_0/k^2)[1-k^2 sn^2(x,k)]$ with the potential depth $-V_0\ge k^2$ and chemical potential $\mu=-1-V_0/k^2+k^2/2$ [see Eq. (12) of Phys. Rev. E63, 036612(2001)], into Eq. (17) yields the stability parameter relation $\mu_s=-1-V_0/k^2$. A difference of $k^2/2$ exists between the $\mu_s$ value required by the stability condition and the chemical potential $\mu$ in the exact solution, namely the stability criterion (17) is not met here. This assertion differs from the result of Ref. \cite{Bronski}, where this solution fits their stability criterion and the stability is independent of the parameters $k$ and $V_0$. However, when the potential depth
$|V_0|$ is much greater than the modulus $k$ (e.g. $V_0=-1$ and $k=0.2$), the chemical potential is close to the stability relation (17) ($ \mu=24.02=\mu_s+0.02\approx \mu_s$). This suggests that higher stability is associated with a smaller value of the modulus $k$
and a relatively greater value of $|V_0|$. Thus our stability parameter criterion suggests that, for a known solution with instability or undetermined stability, one can raise the practical stability by adjusting the system parameters (e.g. the above $k$ and $|V_0|$) toward the values of the stability region in Eq. (17).
Generally, constructing a stable exact solution of the GPE is not easy, because of the non-integrability of Eq. (4) with a periodic potential. However, in the large-$N$ limit we can fulfil criterion (17) in the case of a repulsive nonlinearity, since the Thomas-Fermi (TF) approximation \cite{Dalfovo} $U(x)=\mu_{TF}$ exactly fits the stability relation. Therefore, it is practically relevant to prepare such a stable TF state $R(x,\mu_{TF})$ by increasing the condensed atom number $N$. Given the number $N$ and the periodic boundary condition experimentally, from the normalization condition $N=n\int_0^{\pi} R^2(x,\mu_{TF})dx=n\int_0^{\pi} [\mu_{TF}-V(x)]dx/g_1$ we derive the chemical potential of the stable TF state \begin{eqnarray} \mu_{TF}=\mu_s=\frac{Ng_1}{n\pi}+\frac {1}{\pi}\int_0^{\pi} V(x)dx, \end{eqnarray} which is related to the atom number $N$, the potential strength $V_0$, and the period $K (k)$, where $n\sim 100$ is the lattice number. In fact, noticing the dependence of $R=R(x,\mu)$ on $\mu$ in Eq. (4), the normalization condition of any known state also leads to $\mu=\mu (N)$ and $R=R(x, N)$. Applying these to eliminate $\mu$ in Eqs. (17) and (18) gives the corresponding relationships among the experimental parameters $N, g_1,V_0$ and $K(k)$. So we can control the instability of a known state by selecting the experimental parameters to fit, or to approach, the stability region of Eq. (17). In many practical cases we cannot obtain the exact solution of Eq. (4) for a given periodic potential, which necessitates numerical investigation. In order to fit (or approach) the stability region in Eq. (17) and to avoid the instability regions of Eq. (18), we can use Eq. (19) to estimate and adjust the chemical potential in the region $\mu\approx \mu_{TF}$, so that the stability of the numerical solutions of Eq. (4) can also be established or improved.
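The Thomas-Fermi chemical potential in Eq. (19) is straightforward to evaluate numerically; in the Python sketch below, the lattice potential $V(x)=-V_0\sin^2(x)$ and all parameter values are illustrative assumptions, not taken from the text:

```python
import math

# mu_TF = N*g1/(n*pi) + (1/pi) * integral_0^pi V(x) dx  (Eq. (19)),
# evaluated for an illustrative lattice potential V(x) = -V0*sin(x)^2.
N, g1, n_latt, V0 = 1.0e4, 0.01, 100, 1.0

def V(x):
    return -V0 * math.sin(x) ** 2

# trapezoidal rule for integral_0^pi V(x) dx  (analytically -V0*pi/2)
m = 10_000
h = math.pi / m
integral = 0.5 * (V(0.0) + V(math.pi)) + sum(V(i * h) for i in range(1, m))
integral *= h

mu_TF = N * g1 / (n_latt * math.pi) + integral / math.pi
# analytically: N*g1/(n*pi) - V0/2
```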
On the other hand, in the case of an arbitrary time-independent potential, for some known unstable solutions $R=R(x,\mu_{in})$ from Eqs. (10) and (12) we can experimentally set and adjust the initially controllable perturbation as a control signal \cite{Burger} to suppress the exponentially fast growth of $T_i(t)$. Although the phase $\theta$ and amplitude $R$ are time-independent in the considered case, the initial perturbations result in nontrivial, time-dependent corrections to the phase and atomic-number density. From Eq. (5) we find their first corrections as \begin{eqnarray} &&\triangle\theta(x,t) \approx \arctan [\varepsilon T_2(t)\varphi_2(x)/R(x)]\approx \varepsilon T_2(t)\varphi_2(x)/R(x),
\nonumber \\ && \triangle |\psi|^2(x,t)\approx 2\varepsilon T_1(t)\varphi_1(x)R(x), \end{eqnarray} which are initially proportional to $T_1(0)$ and $T_2(0)$
respectively. Making use of Eq. (20), the initially controllable perturbations can be adjusted by trimming the number density $|\psi|^2$, the velocity field $(\triangle\theta)_x$, and their time derivatives, which are proportional to the corresponding trimming velocities. Given Eqs. (10) and (12), the initial stability criterion reads \begin{eqnarray} \lambda^2=-\lambda_1 \lambda_2=\dot T_1(0)\dot T_2(0)/[T_1(0)T_2(0)]<0. \end{eqnarray} Once Eq. (21) is satisfied by the adjusted initial perturbations, Eq. (12) becomes a periodic solution, which implies stability. Although we cannot determine the initial values $\dot T_i(0)$ and $T_i(0)$ directly, experimentally the number density can be adjusted by varying the condensed atom number, and the superfluid velocity may be adjusted through a displacement $\triangle x$ of a magnetic potential \cite{Burger}. According to Eqs. (20) and (21), if we initially increase (or decrease) both the relative derivative $\dot T_2(0)/ T_2(0)=\frac{\partial
\triangle\theta_x(x,t)}{\partial t}|_{t=0}/\triangle\theta_x(x,0)$ of the flow velocity and the relative derivative $\dot T_1(0)/
T_1(0)=\frac{\partial \triangle |\psi|^2(x,t)}{\partial t}|_{t=0}/\triangle |\psi|^2(x,0)$ of the atomic number density, the initial stability criterion (21) is violated and the system becomes unstable. But when one of them is increased and the other is decreased simultaneously, the stability criterion (21) is satisfied and the possible instability is suppressed. These assertions may be tested experimentally.
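The sign condition of Eq. (21) is simple enough to state as code. A minimal sketch (the function name is ours) encoding the criterion and the control strategy described above:

```python
def stability_criterion(dT1_0, T1_0, dT2_0, T2_0):
    """Initial stability criterion, Eq. (21):
    lambda^2 = dT1(0)*dT2(0)/[T1(0)*T2(0)] < 0 implies periodic (stable) T_i(t)."""
    return (dT1_0 * dT2_0) / (T1_0 * T2_0) < 0

# Opposite-sign adjustments of the two relative derivatives satisfy Eq. (21) ...
assert stability_criterion(+0.1, 1.0, -0.2, 1.0)
# ... while increasing (or decreasing) both simultaneously violates it.
assert not stability_criterion(+0.1, 1.0, +0.2, 1.0)
```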
\section{Conclusions and discussions}
In conclusion, we have investigated the dynamical stability, instability and undetermined stability of a quasi-1D BEC in stationary states, for a time-independent external potential and atomic scattering length and a fixed atomic number. After space-time separation of variables, we derive the general solutions of the linearized time-evolution equations for the trivial-phase case and give a stability criterion related to the initial conditions. As an important example, we verify the stability criterion analytically for a BEC held in an optical lattice potential. By using the known sufficient conditions of stability and instability \cite{Bronski}, several parameter regions of stability and instability are exhibited. Our results contain some new stability predictions which can be tested with current experimental setups. Finally, we stress that by applying our initial stability criterion and parameter regions one can stabilize the considered BEC system by adjusting the system parameters experimentally to enter or approach the stability region of Eq. (17) in parameter space. For parameters outside the stability region we can also establish or improve the stability by adjusting the initial flow velocity and atomic number density to fit or approach the initial stability criterion.
\bf Acknowledgment \rm This work was supported by the National Natural Science Foundation of China under Grant No. 10575034 and by the Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics of China under Grant No. T152504.
\end{document}
\begin{document}
\title{Roads towards fault-tolerant universal quantum computation}
\author{Earl T. Campbell} \affiliation{Department of Physics and Astronomy, University of Sheffield, Sheffield, UK} \author{Barbara M. Terhal} \affiliation{JARA Institute for Quantum Information, RWTH Aachen University, 52056 Aachen, Germany} \author{Christophe Vuillot} \affiliation{JARA Institute for Quantum Information, RWTH Aachen University, 52056 Aachen, Germany} \date{\today}
\begin{abstract} Current experiments are taking the first steps toward noise-resilient logical qubits. Crucially, a quantum computer must not merely store information, but also process it. A fault-tolerant computational procedure ensures that errors do not multiply and spread. This review compares the leading proposals for promoting a quantum memory to a quantum processor. We compare magic state distillation, color code techniques and other alternative ideas, paying attention to relative resource demands. We discuss several no-go results that hold for low-dimensional topological codes and outline the potential rewards of using high-dimensional quantum (LDPC) codes in modular architectures.
\end{abstract}
\maketitle
\section{Introduction} \label{sec:intro}
The next decade will likely herald controllable quantum systems with 30 or more physical qubits on various quantum technology platforms, such as ion-traps \cite{HRB:ions,ballance:highFidelity} or superconducting qubits \cite{DS:outlook}. It may be difficult to simulate such partially-coherent, dynamically-driven, many-body systems on a classical computer, since the elementary two-qubit entangling gate time can be as much as 1000 times shorter than the single-qubit dephasing and relaxation times ($T_2$ and $T_1$). On the other hand, a system in which one out of 1000 components fails is unlikely to perform well in executing large quantum algorithms designed for fault-free components. We must either figure out what computational tasks a noisy many-body quantum system can perform well, or use partially-coherent qubits as the elementary constituents of more robust logical qubits through quantum error correction. The choice of quantum error correcting architecture determines all operations at the physical hardware level. It constrains the compilation from quantum software and quantum algorithms to actions on elementary qubits in hardware. For superconducting qubits, efforts to build a first logical qubit of the surface code are underway at places such as IBM Research \cite{corcoles:surface, IBM:surface}, UCSB in partnership with Google \cite{kelly:repetition} and the TU Delft \cite{riste+:3bit}.
\begin{figure*}
\caption{(a) One logical qubit is encoded in a surface code sheet consisting of $d^2$ physical qubits at the vertices of the lattice ($d=5$ in the Figure). Black faces represent $X$-parity checks and white faces represent $Z$-parity checks on the qubits. (b) A CNOT circuit is equivalent to performing non-destructive $XX$ and $ZZ$ measurements for the control and target qubits with an ancillary qubit. Lattice code surgery provides a method for non-destructively measuring a logical $ZZ$ or $XX$ between two encoded qubits. (c) Lattice code surgery between two surface code sheets which realizes a logical $ZZ$ measurement. By measuring and merging the parity checks between two surface code sheets, we merge the two sheets into one. At the same time one learns the value of the logical $ZZ$ as it equals the product of the newly measured grey faces. The two sheets can then be split again for further operations.}
\label{fig:surface49}
\end{figure*}
Quantum error correction works by making quantum information highly redundant so that errors affecting a few degrees of freedom become correctable \cite{terhal:rmp}. One can formulate some rough desiderata of a quantum error correcting architecture which aim at minimizing experimental complexity: (1) the architecture has a high noise threshold, providing logical qubits that have a lower logical error probability per logical gate than their physical constituent qubits, (2) it allows for the implementation of a universal~\footnote{A discrete set of gates is universal if the gates in the set can be composed to make any desired unitary gate up to arbitrary precision.} set of logical gates, and (3) it achieves these goals with low spatial overhead (number of physical qubits per logical qubit) and temporal overhead (time duration of a logical gate versus a physical gate). In addition, (4) it should be possible to process error information sufficiently fast, keeping up with the advancing quantum computation. A last desired property (5) may be that the code is an LDPC (low-density parity check) code: each parity check\footnote{Parity check operators are operators that define the code space and return a +1 outcome when measured in the absence of errors.} involves at most $k$ qubits (parity check weight $k$) and each qubit participates in at most $l$ parity checks (qubit degree $l$), where both $l$ and $k$ are small constants. Strongly preferred for solid-state qubits is \etc{a code for which the parity checks act on neighboring qubits in a 2D or 3D array.}
An important universal gate set is the Clifford+$T$ set. The subset of Hadamard $H$, CNOT and $S={\rm diag}(1,e^{i \pi/2})$ are Clifford gates. A quantum \etc{circuit} comprising only Clifford gates is not universal and confers no quantum computational advantage, as it can be classically simulated by the Gottesman-Knill theorem~\cite{thesis:gottesman,AG:stabilizer}. When single-qubit gates come about through resonant driving fields, rotating the qubit vector around its Bloch sphere, a $T={\rm diag}(1,e^{i \pi/4})$ gate is similar in complexity to an $S$ or $H$ gate. For a logical qubit, say, the one encoded by Steane's 7-qubit code (Box~\ref{box:steanecode}), the logical Hadamard is implemented by applying a Hadamard gate on each of the seven physical qubits. This is advantageous since it takes the same time as an elementary Hadamard gate, and the transversal~\footnote{For a single block of code transversal logical gates are realized as a product of single-qubit unitary gates. Transversal logical gates between multiple blocks may use non-product unitary gates provided that these interactions are only between the different blocks.} character of the logical gate ensures that errors do not spread between qubits of the code. The $S$ gate and the CNOT gate are similarly transversal for the Steane code\etc{, but the $T$ gate is not.} Certainly {\em some} sequence of single and two-qubit gates can be designed to enact a logical $T$ gate, but the presence of two-qubit gates in such a construction entirely negates the benefits of using a logical qubit: a sequence of two-qubit gates can spread correctable single-qubit errors to uncorrectable multi-qubit errors, \etc{making} the logical qubit error probability higher than the error probability of a single constituent qubit.
Transversal logical gates are the easiest example of fault-tolerant logical gates, meaning logical gates which do not convert correctable errors into uncorrectable ones. Transversal gates are optimal in both spatial and temporal overhead. However, it was proved \cite{Chen:earlyEK, EK:nogo} that no non-trivial code allows for the transversal implementation of all gates needed for universality, demonstrating the need for other constructions.
\begin{figure*}\label{box:steanecode}
\end{figure*}
\etc{A promising architecture uses the surface code,} which was first put forward as a topological quantum memory\cite{dennis+:top}. It has a high noise threshold \etc{$p_c \approx 0.6-1\%$}\cite{RH:topoPrl,fowler+:unisurf,FMMC:review} and requires only a 2D qubit connectivity with qubit degree 4. One logical qubit comprises $d^2$ physical qubits (plus $d^2-1$ ancilla qubits for parity check measurements) for a code distance $d$, see Fig.~\ref{fig:surface49}. Besides this encoding, there are at least two other ways of defining logical qubits in the surface code. A logical qubit can comprise two holes in an extended surface code sheet \cite{RHG:threshold} or a logical qubit could be represented by two pairs of lattice defects or twists \cite{bombin:twist, HG:dislocation}. The logical error probability per round of parity check measurements $P_L$ is determined by the distance, i.e. $P_L \propto (p/p_c )^{d/2}$. Numerical studies\cite{FMMC:review} estimate that, assuming a depolarizing error probability $p<10^{-3}$ per elementary gate, a logical qubit will consist of more than $10^4$ physical qubits in order for $P_L< 10^{-15}$.
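The scaling $P_L \propto (p/p_c)^{d/2}$ gives a quick estimate of the code distance needed for a target logical error probability. The sketch below drops the $O(1)$ prefactor, so it is optimistic compared with the full numerical studies quoted above; the function name is our own.

```python
def required_distance(p, p_c, target):
    """Smallest odd distance d with (p/p_c)**(d/2) < target,
    using the rough scaling P_L ~ (p/p_c)^(d/2) and ignoring prefactors."""
    d = 3
    while (p / p_c) ** (d / 2) >= target:
        d += 2
    return d

d = required_distance(p=1e-3, p_c=1e-2, target=1e-15)  # d = 31
qubits = 2 * d ** 2 - 1  # d^2 data qubits plus d^2 - 1 ancilla qubits
```

With the prefactor restored, such estimates grow toward the $>10^4$ physical qubits per logical qubit cited in the text.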
How does the surface code architecture handle the fault-tolerant implementation of gates? This is partially achieved by code deformation, a versatile technique for executing logical gates fault-tolerantly: the code is altered by changing which parity check measurements are performed where. A logical CNOT can be obtained with the encoding in Fig.~\ref{fig:surface49} through a deformation technique called lattice code surgery \cite{horsman+:suture}, see Fig.~\ref{fig:surface49}b and \ref{fig:surface49}c. The encoding of Fig.~\ref{fig:surface49} can also be deformed to a twist defect encoding in which the four corners of the lattice correspond to four defects\cite{brown+:poking}. Braiding the four twist defects through the bulk of the lattice then generates the single-qubit $S$ and Hadamard gates \cite{brown+:poking}. Hence all Clifford gates could be done in situ on 2D surface code sheets. Another common approach to logical $S$ gates is shown in Fig.~\ref{fig:Ttele}b and uses state injection of a $\ket{Y}\propto \ket{0}+i\ket{1}$ state, which is an inexpensive resource as it may be used many times~\cite{Aliferis07,jones:reuseable}.
\section{Towards Universality}
\begin{figure*}
\caption{(a) Implementation of a $T$ gate via preparing the magic ancilla $\ket{A}=T H\ket{0}$. (b) Implementation of a $S$ gate via preparing the magic ancilla $\ket{Y}=S H\ket{0}$. (c) Fault-tolerant implementation of the projection of a code state onto an eigenstate of a transversal logical Clifford gate $C_L = \prod_{j=1}^nC_j$ assuming that the Clifford gate $C$ has $\pm 1$ eigenvalues. To prepare a $T$ magic state one takes $C = TXT^\dagger \propto SX$, using a transversal logical $S$ gate for the base code. Since a single error can flip the outcome $p$, the circuit has to be repeated and a majority vote over the outcomes taken.}
\label{fig:Ttele}
\end{figure*}
\subsection{First Ideas}
The necessity of finding effective means to implement a universal set of gates was realized from the earliest beginnings of the field. One useful tool is the simple circuit, shown in Fig.~\ref{fig:Ttele}a, where we replace executing the $T$ gate by preparing the ancilla state $\ket{A}=T H\ket{0}$, called a $T$ magic state. The circuit can be executed at the logical level where one \etc{encodes} the qubit and ancilla in a base code.
Peter Shor provided the first construction\cite{shor:faulttol} for fault-tolerantly preparing a Toffoli magic state. A similar construction\cite{KLZ:res,KLZ:arxiv} was proposed for \etc{fault-tolerantly} preparing a $T$ magic state. In these constructions one uses the fact that the logical magic state is an eigenstate of a logical Clifford gate which is transversal for the base code. Ancillary cat states are used to fault-tolerantly project onto this eigenstate, see Fig.~\ref{fig:Ttele}c\etc{, but the approach does not scale favourably for topological codes.}
\begin{figure*}
\caption{Sketch of magic state distillation using a $[[n_2, 1, d_2]]$ distillation code with a transversal $T$ gate, while Clifford operations are protected by a $[[n_1, 1, d_1]]$ base code. Given fewer than $d_2$ errors in the noisy magic states, they are detected in step 3. This implies that the logical error probability is suppressed from $\epsilon$ to $O(\epsilon^{d_2})$. Iterating $r$ times, the error probability reduces to $O(\epsilon^{d_2^r})$.}
\label{fig:magic}
\end{figure*}
\subsection{Magic State Distillation}
The mindset of Magic State Distillation (MSD) is to accept a preparation procedure providing noisy magic states, and then proceed by filtering many noisy magic states into fewer, yet better quality states. Efficient protocols for magic state distillation are designed, almost exclusively, using an error correction code with a transversal non-Clifford gate, typically the $T$ gate. We call this code the distillation code. \etc{The $[[15,1,3]]$ quantum Reed-Muller code was shown\cite{KLZ:arxiv} to have a transversal $T$, and using this code for magic state distillation was proposed by Bravyi and Kitaev\cite{BK:magicdistill}.} The $[[15,1,3]]$ code is now also recognized as the smallest member of a 3D color code family, see Fig.~\ref{fig:colorcodes}. All MSD protocols work at the logical level of an underlying base code and so assume reliable Clifford operations. In Fig.~\ref{fig:magic} we outline one variant of MSD.
One figure of merit is the number of noisy magic states consumed per single $T$ gate. Advances in distillation codes have improved the asymptotic efficiency by this metric (see Box~\ref{box:MSD}). However, the number of physical qubits involved (space cost) and protocol duration (time cost) are more realistic metrics, although they depend on the choice of base code. When optimizing full space-time costs, an important trick is to increase the base code distance with successive distillation rounds (see step 5 of Fig.~\ref{fig:magic}). Using this trick in conjunction with the surface code, resource overheads become dominated by the surface code cost in the final distillation round~\cite{RHG:threshold,FMMC:review,fowler:blockMSD,Ogorman:realisticMSD}. This results in a space-time cost for the $T$ gate \etc{that} is only a constant multiple of a surface code overhead, namely a $O(d^2)$ spatial cost and a $O(d)$ temporal cost. More precisely, the space-time cost of a $T$ gate realized in a distance-$d$ surface code is $C_T d^3$ with $C_T\approx 160-310$ when employing Bravyi-Haah codes~\cite{FMMC:review,fowler:blockMSD,Ogorman:realisticMSD}. For Clifford gates the overhead per logical gate is also $O(d^3)$ but the constant prefactor is of order of unity. Using even higher yield MSD protocols (see Box~\ref{box:MSD}) may reduce the $C_T$ factor further, with the Bravyi-Haah codes already shown~\cite{fowler:blockMSD} to have three times lower space-time costs than [[15,1,3]].
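The iterated error suppression $\epsilon \to O(\epsilon^{d_2^r})$ of Fig.~\ref{fig:magic} translates into very few rounds in practice. A sketch (constants dropped, function name ours) for the $[[15,1,3]]$ code, whose 15-to-1 protocol consumes $15^r$ noisy states per output state:

```python
def msd_rounds(eps_in, d2, target):
    """Number of distillation rounds r such that eps_in**(d2**r) < target,
    using the idealized per-round suppression eps -> eps**d2 (prefactors dropped)."""
    r = 0
    eps = eps_in
    while eps >= target:
        r += 1
        eps = eps_in ** (d2 ** r)
    return r

r = msd_rounds(eps_in=1e-2, d2=3, target=1e-15)  # two rounds suffice
raw_states_per_T = 15 ** r                       # 15-to-1 protocol: 225 noisy states
```

In a full accounting the prefactors and the per-round base code distance matter, which is exactly why the space-time metrics discussed above are the more realistic figures of merit.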
Obtaining a fault-tolerant logical $T$ gate is only a partial goal as Clifford+$T$ gates are then used to synthesize other logical gates needed in quantum algorithms~\cite{KSV:computation,RS14,bocharov:PQF,amy:TPAR}. A more efficient solution can be to directly distill magic states providing the most frequently required logic gates. Magic state distillation has been shown for smaller angle $Z$ rotations~\cite{landahl13,duclos15,campbell16}, the Toffoli gate~\cite{eastin13,jones13b}, and a general class of multi-qubit circuits~\cite{campbell:unified2}.
Given that magic state distillation takes up space and time, it requires an allocation of resources, together with communication infrastructure (in the form of logical roads) leading to these resources, inside the 2D surface code architecture. Clifford gates in such \etc{an} architecture could be done in situ on a `Clifford substrate' of 2D surface code sheets.
Throughout this 2D array of sheets, areas are reserved for magic state distillation factories for non-Clifford gates. The optimal spatial density of these factories depends on the typical quantum algorithmic use (frequency, parallelism) of non-Clifford gates. Clearly, any design of such a quantum computer will require a huge effort in integrated quantum circuit design and optimization, a quantum analog to VLSI design. This effort has barely gotten underway\cite{PDF:circuit}.
\begin{figure*}\label{box:MSD}
\end{figure*}
\begin{table*}[htb] \begin{center} \begin{tabular}{ccccccc}
\rowcolor{headerrow}
& & Parity Check & Threshold & Threshold & & \\ \rowcolor{headerrow} Code & Qubit Degree & Weight & (Phen. Model) & (\etc{Circuit} Model) & Single-Shot & Logic \\
2D Surface& 4 & 4 & $2.9\%$\cite{WHP:threshold} & $0.6\%$\cite{RH:topoPrl}-$1\%$\cite{fowler+:unisurf,FMMC:review} & No & Clifford \\
2D 6.6.6 Color & 6 & 6 & $2.8\%$ \cite{beverland:perscom,beverlandThesis} & $0.3 \%$\cite{beverland:perscom,beverlandThesis} & No & Clifford \\
3D Gauge Color & 12 & 6 & $0.31\%$ \cite{brown:singleshot} & Unknown & Yes & Clifford \\
3D Color& 10 & 24 & Unknown & Unknown & No & Transversal $T$ \\
4D Surface& 8 & 6 & $1.59\%$ \cite{BDMT:local} & Unknown & Yes & $^\dagger$
\end{tabular} \end{center} \caption{Parity check weight is given for the bulk of the code lattice. \etc{The threshold depends on the noise model: The phenomenological model assigns probability $p$ equally to $X$ \& $Z$ errors and an error in the parity check measurements; The circuit model applies depolarizing noise with probability $p$ to every elementary component in the circuit implementing the parity checks. The phenomenological threshold is always higher than the circuit model threshold, especially} for codes with high parity check weight. $^\dagger$It has been shown\cite{KYP:unfolding} that one can perform a \etc{fault-tolerant} non-Clifford 4-qubit-controlled-Z using a \etc{constant depth circuit}.}
\label{table:codes} \end{table*}
\begin{figure*}
\caption{(a) Smallest example $[[15,1,3]]$ of a tetrahedral 3D color code with qubits on the vertices. Each colored cell corresponds to a weight-8 $X$-check and each face corresponds to a weight-4 $Z$-check. A logical $Z$ is any weight-3 $Z$-string along an edge of the entire tetrahedron. The logical $X$ is any weight-7 $X$-face of the entire tetrahedron. A logical $T^\dagger$ is implemented by applying a $T$ gate on every qubit. The online Supplementary Information\cite{SuppMat::colorcode} has a movie of a larger 3D color code. (b) A 2D triangular [[31,1,7]] code, generalizing Steane's code, based on a 4.8.8. lattice. The qubits are associated with vertices and each colored face correspond\etc{s} to both an $X$ and a $Z$-check. A logical $X$ or $Z$ is a $X$-string (resp. $Z$-string) running along any of the edges of the entire triangle.}
\label{fig:colorcodes}
\end{figure*}
\subsection{Color Codes}
The [[7,1,3]] and [[15,1,3]] codes are the smallest members of a family of 2D and 3D, respectively, color codes\cite{BM:colorcodes,BM:transT,bombin:homological}. Examples are given in Fig.~\ref{fig:colorcodes}. These color codes retain the transversality properties of their respective smallest instances. \etc{Therefore,} the 2D color codes have transversal Clifford gates~\cite{katzgraber:fullClifford,KB:simple,bombin:gauge}. The 3D color codes can have a transversal non-Clifford \etc{gate}~\cite{BM:transT,KYP:unfolding}, such as a $T$ gate in the tetrahedral 3D color code. Lattice code surgery \cite{landahl14} can again be used to locally perform CNOT gates. Color codes can also be extended to higher dimensions (see Box~\ref{box:CC}), with transversality properties related to the dimensionality.
The 3D color code does not have the symmetry between $X$- and $Z$-checks, and \etc{so} lacks a transversal Hadamard gate. Ideas to get around this caused a surge of interest in 3D color codes. One idea is that of switching between different (3D color) codes via the important concept of gauge fixing~\cite{paetznick:gauge,bombin:gauge}. Using gauge fixing it is possible to use the transversal $T$ gate of the 3D color code while only doing error correction and the Hadamard gate with the 3D gauge color code\cite{bombin:singleshot}. The advantages of the 3D gauge color code over the 3D color code (see Table \ref{table:codes}) are its lower parity check weights and its capacity for single-shot error correction (see Sec.~\ref{Sec:singleshot}). A CNOT gate realized via lattice code surgery allows for the injection of the $T$ gate from a 3D gauge color code into a 2D color code~\cite{VT:color_arch,bombin:dimensionalJumps}. \etc{One can imagine} a 2D color code architecture augmented with 3D gauge color code $T$-stations where logical qubits can undergo a $T$ gate, similar to a 2D surface code with non-Clifford processing occurring at dedicated locations.
We summarize some of the known thresholds and properties of codes in Table \ref{table:codes}. Note that the 3D color and gauge color codes have a cubic spatial overhead $O(d^3)$ for a given distance $d$ while this overhead is $O(d^2)$ for 2D codes. The complexity of decoding 3D color and 3D and 4D surface codes poses new challenges and is not fully understood while good algorithms exist for surface code decoding.
The best thresholds for 2D color codes are lower than those of the surface code, possibly due to the fact that parity checks have higher weight. However, 2D color code decoding is also more computationally complex than surface code decoding \cite{delfosse:colordecoding}. The best threshold numbers for circuit-based noise in which each gate undergoes depolarizing noise with probability $p$ \etc{are $0.3\%$\cite{beverland:perscom,beverlandThesis} for} a triangular color code and $0.41\%$ for a half-color or $[[4,2,2]]$-concatenated toric code \cite{CT:422} (compare with $0.6-1\%$ for the surface code).
\begin{figure*}\label{box:CC}
\end{figure*}
\subsection{Alternative Code Constructions} \label{Sec::AltCode}
An alternative to topological error correction is concatenated coding in which the physical qubits in a code block are repeatedly replaced by logical qubits. Extensive work \cite{CDT:study} has been performed on comparing the overheads and noise thresholds of various schemes. For a (concatenated) [[23,1,7]] Golay code (with a transversal Clifford set) it has been shown that the asymptotic noise threshold is at least $0.13\%$ \cite{PR:golay} (compare with the numerical value $0.6-1.0\%$ of the asymptotic surface code). Any concatenated scheme with easy Clifford gates could be combined with magic state distillation. The performance comparison with the surface code would largely rely on how much spatial overhead one pays for a logical Clifford gate.
Another concatenation idea is to combine the transversality of different gates in two different codes \cite{oconnor_laflamme}, thereby getting rid of magic state distillation. For example, one can choose [[7,1,3]] as a top code, i.e. replacing each physical qubit by 7, and then take [[15,1,3]] as the bottom code, replacing each of the 7 qubits again by a block of 15. The resulting code is $[[105,1, 9]]$. Due to the non-transversality of the Hadamard at the bottom level and the non-transversality of the $T$ gate at the top level, the total logical error probability of these gates will suffer, but single-qubit errors can still be corrected in this construction. The asymptotic noise threshold of this construction was lower-bounded by $0.28\%$ \cite{chamberland:analysis}.
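Under concatenation the block length and distance simply multiply (for the distance, the product is in general a lower bound, which is attained here). A one-line sketch of the $[[7,1,3]]$-over-$[[15,1,3]]$ bookkeeping; the helper name is ours:

```python
def concatenate(top, bottom):
    """[[n1,k,d1]] top code with each physical qubit replaced by an
    [[n2,1,d2]] bottom block: n multiplies, k is inherited from the
    top code, and the distance is (at least) d1*d2."""
    (n1, k1, d1), (n2, _k2, d2) = top, bottom
    return (n1 * n2, k1, d1 * d2)

assert concatenate((7, 1, 3), (15, 1, 3)) == (105, 1, 9)
```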
In Box~\ref{box:steanecode} it was stated that the Steane code has a {\em pieceable fault-tolerant} \cite{YTC:piece} Controlled-$S$ gate. This means that we can break down the execution of the gate into rounds or pieces, each round containing $X$ error correction to maintain fault-tolerance, while holding off on $Z$ error correction until the entire gate is done. This idea does not easily scale to topological codes, but it could be analyzed for concatenated codes. It obviates the need for magic state distillation, but trades this, most likely, for a poorer asymptotic noise threshold.
Any scheme based on the concatenation of small codes can be converted to a coding scheme \etc{that} is local in 1D or 2D at the cost of some additional overhead for gates which move qubits. If a large 3D color code is implemented in pure 2D hardware, it requires non-planar connections whose length grows with the size of the color code. Recent work \cite{bravyi:doubled, JBH:2D, jochym:stacked} has shown how to systematically construct codes that have a transversal $T$ gate and convert these codes to so-called doubled color or 2D gauge color codes. However, by making all connections local on a 2D lattice the resulting 2D codes are non-topological. This means that the code performance is maximal for a certain code size and declines for larger code sizes. The performance of doubled color or 2D gauge color codes in producing a low-noise logical $T$ ancilla (which can then be transferred to the 2D Clifford substrate) has not yet been compared with MSD or the usage of 3D $T$ stations.
\subsection{Comparison of Resource Overheads}
The combination of MSD with the surface code is currently considered a competitive scheme since it combines a high-noise threshold, a 2D architecture and a $T$ gate \etc{that is a few hundred times} as costly as Clifford gates in terms of its space-time overhead. Replacing MSD by 3D $T$-gate stations and/or the surface code substrate by a color code substrate is an alternative whose appeal depends on the physical error probability versus the 2D and 3D color code thresholds. The use of the 3D gauge color code for a $T$ gate requires $O(d^3)$ qubits, but single-shot error correction makes the space-time cost again $O(d^3)$\cite{bombin:singleshot}.
An analysis\cite{chamberland:analysis} of the concatenation scheme discussed in Sec.~\ref{Sec::AltCode} shows that the spatial overhead is still not favorable as compared to \etc{using surface codes with MSD}. This analysis includes the consideration of a smaller, more efficient, 49-qubit code\cite{nikahd:nonuniformCC}. To get to the target error probability $P_L < 10^{-15}$, starting with a physical error probability of $O(10^{-5})$, it is estimated that the concatenated scheme uses at least $10^7$ physical qubits per logical qubit (versus $10^4$ for \etc{surface codes with MSD}).
\section{The Blessing of Dimensionality?}
\subsection{Transversality and Dimensionality}
A deep connection between transversality and dimensionality of a topological stabilizer code was proved by Bravyi and Koenig \cite{Bk:uni}. Their theorem says that for $D$-dimensional topological stabilizer codes, the only logical gates that one can implement via a transversal or constant depth circuit are in the so-called $m^{\mathrm{th}}$ level of the Clifford hierarchy ${\cal C}_m$. Here ${\cal C}_1$ is the group of $n$-qubit Pauli operators, ${\cal C}_2$ is the Clifford group, ${\cal C}_3$ contains gates such as $T$ and Toffoli and the $m^{\rm th}$ level includes a small rotation gate such as ${\rm diag}(1,\exp(\frac{2 \pi i}{2^{m}}))$. While the gate set Clifford+$T$ is universal, it can be shown that using gates in higher levels of the Clifford hierarchy reduces the time overhead associated with gate synthesis. The gates that one can realize with $D$-dimensional color and surface codes saturate the Bravyi-Koenig theorem \cite{bombin:gauge,KB:simple,KYP:unfolding}.
This theorem does not prove that there are no good 2D alternatives to magic state distillation. It does suggest that any alternative might come at the price of a lower threshold for universal quantum logic as it requires a non-trivial fault-tolerant gate construction.
\subsection{Tradeoff Bounds}
For the design of a storage medium, a quantum hard-drive, one can drop the universal gate set desideratum (2) as long as quantum information can be read and written to storage. Ideally, the storage code is a $[[n,k,d]]$ code with high rate $k/n$ and high distance $d$ scaling as some function of $n$. It was shown~\cite{BPT:tradeoff} that 2D Euclidean topological stabilizer codes are constrained by $k d^2 \leq c n$ for some constant $c$: the surface code clearly saturates this bound. The adjective Euclidean means that the qubits can be placed on a 2D regular grid with each qubit connecting to O(1) neighbors. \etc{Consequently,} the rate of these codes vanishes with increasing distance, leading to a substantial overhead as a storage code. Hyperbolic surface codes \cite{BT:hyperbolic} are bounded only by $k d^2 \leq c (\log k)^2 n$ \cite{delfosse:tradeoffs}. There is a simple hyperbolic surface code in which qubits are placed on the edges of square tiles and five tiles meet at a vertex. Such $\{5,4\}$-hyperbolic surface codes have an asymptotic rate $\frac{k}{n}=\frac{1}{10}$ and logarithmically growing distance\cite{BT:hyperbolic}.
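The tradeoff bound makes the contrast between Euclidean and hyperbolic codes concrete. A sketch (function name ours, constant $c=1$ assumed for the surface code family):

```python
def euclidean_max_rate(d, c=1.0):
    """Largest rate k/n allowed by the 2D Euclidean bound k*d^2 <= c*n."""
    return c / d ** 2

# The surface code [[d^2, 1, d]] saturates the bound with c = 1, so its
# rate 1/d^2 vanishes as the distance grows ...
for d in (3, 5, 7, 31):
    n, k = d ** 2, 1
    assert k * d ** 2 <= n
    assert abs(euclidean_max_rate(d) - k / n) < 1e-12
# ... whereas {5,4}-hyperbolic surface codes keep a constant rate k/n = 1/10.
```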
\subsection{Single-shot Error Correction} \label{Sec:singleshot} 2D topological codes have an intrinsic temporal overhead in executing code deformation fault-tolerantly, making gates that rely on this technique take $O(d)$ time. The reason is that in code deformation the new parity check measurements are repeated $O(d)$ times in order for the measurement record to be sufficiently reliable. The absence of redundancy in the parity check measurement record is an immediate consequence of the lack of self-correction of 2D topological stabilizer codes \cite{BT:mem}. A 4D hypercubic lattice version of the surface code \cite{dennis+:top} allows for {\em single-shot} error correction instead: due to redundancy in the parity check data of this code, it is possible to correct the record for measurement errors after a single round of measurement. Codes with single-shot error correction thus potentially allow for higher noise thresholds and faster logical gates. Interestingly, Bombin showed that single-shot error correction is possible for 3D gauge color codes \cite{bombin:singleshot}. For such codes the values of the stabilizer parity checks are acquired by measuring non-commuting, lower-weight gauge checks whose products determine the stabilizer parity checks. Curiously, in the 3D gauge color code these gauge checks hold redundant information, which allows one to construct a robust record for the stabilizer parity checks in $O(1)$ time.
This result shows the power of using subsystem codes with gauge degrees of freedom, since single-shot error correction is not expected to be possible for 3D stabilizer codes.
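The role of redundancy among parity checks can be illustrated with a classical toy model (illustrative only, not the 3D gauge color code construction): when a set of check operators multiplies to the identity, their measurement outcomes must multiply to $+1$, and a violated constraint flags a measurement error in a single round:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: m parity-check outcomes s_i in {+1, -1} whose check operators
# multiply to the identity, so the outcomes are constrained to multiply to +1.
m = 6
outcomes = rng.choice([1, -1], size=m - 1)
# Appending the product of the first m-1 outcomes enforces the constraint:
true_syndrome = np.append(outcomes, np.prod(outcomes))
assert np.prod(true_syndrome) == 1

# A single measurement error flips one recorded outcome:
measured = true_syndrome.copy()
measured[2] *= -1

# The 'metacheck' (product of all recorded outcomes) detects the faulty
# record immediately, without repeating the measurements O(d) times:
assert np.prod(measured) == -1
```

In the 4D surface code and the 3D gauge color code there are enough such constraints to not only detect but correct the measurement-error pattern in the syndrome record.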
\section{Outlook}
We have discussed several ideas for adding universal computing power to a quantum device. Presently, using surface codes with magic state distillation is the most practical solution. We have seen that there is a wealth of fascinating alternatives, but so far none has demonstrated a comparably high threshold or a significant improvement in resource scaling. An interesting direction is to move away from the constraints of low-dimensional topological codes.
More general LDPC codes could be considered, for example the 4D surface code in Table \ref{table:codes}. Homological quantum (LDPC) codes can in principle be constructed from tilings of any $D$-dimensional manifold. Generalizations of classical LDPC codes based on expander graphs to quantum codes are also known to exist \cite{TZ:codes, FH:hypergraphs}.
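One such generalization is the hypergraph-product construction of Tillich and Z\'emor \cite{TZ:codes}, which turns any classical parity-check matrix into a quantum CSS code; the commutation condition $H_X H_Z^T = 0 \pmod 2$ holds by construction. A minimal numpy sketch, assuming the standard block form $H_X = (H_1 \otimes I \,|\, I \otimes H_2^T)$ and $H_Z = (I \otimes H_2 \,|\, H_1^T \otimes I)$:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """X and Z parity-check matrices of the hypergraph product code
    built from two classical binary parity-check matrices (mod-2 arithmetic)."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return HX % 2, HZ % 2

# Classical repetition code on 3 bits; its product with itself yields a
# small surface-code-like CSS code on n1*n2 + m1*m2 = 13 qubits.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
HX, HZ = hypergraph_product(H, H)

# CSS condition: every X-check commutes with every Z-check,
# since HX @ HZ.T = H1 (x) H2.T + H1 (x) H2.T = 0 mod 2.
assert np.all((HX @ HZ.T) % 2 == 0)
```

Taking $H_1 = H_2$ to be the check matrix of a classical expander code gives the constant-rate quantum LDPC codes alluded to above.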
Such approaches require hardware that supports long-range connectivity. Fortunately, various experimental platforms, such as ion-trap qubits or nuclear spins coupled to NV centers in diamond, do not necessarily conform to the paradigm of a 2D `sea' of qubits. One may expect such architectures to work with modular components with photonic interconnects \cite{monroe+:arch, NFB:photons}, which would allow for more flexible, long-range connectivity.
The advantage of using a higher-dimensional LDPC code or a more general quantum LDPC code for computation or storage in a concrete coding architecture remains to be fully explored; in particular, efficient decoding software needs to be developed. Independent of whether these codes can be used in a coding architecture, we expect that the study and development of quantum LDPC codes will lead to new insights into robust macroscopic quantum information processing.
\begin{thebibliography}{85} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Haeffner}\ \emph {et~al.}(2008)\citenamefont
{Haeffner}, \citenamefont {Roos},\ and\ \citenamefont {Blatt}}]{HRB:ions}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Haeffner}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Roos}}, \
and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href
{\doibase http://dx.doi.org/10.1016/j.physrep.2008.09.003} {\bibfield
{journal} {\bibinfo {journal} {Physics Reports}\ }\textbf {\bibinfo {volume}
{469}},\ \bibinfo {pages} {155 } (\bibinfo {year} {2008})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Ballance}\ \emph {et~al.}(2016)\citenamefont
{Ballance}, \citenamefont {Harty}, \citenamefont {Linke}, \citenamefont
{Sepiol},\ and\ \citenamefont {Lucas}}]{ballance:highFidelity}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ballance}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Harty}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Linke}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Sepiol}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Lucas}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {117}},\ \bibinfo {pages} {060504} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Devoret}\ and\ \citenamefont
{Schoelkopf}(2013)}]{DS:outlook}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Devoret}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont
{Schoelkopf}},\ }\href {\doibase 10.1126/science.1231930} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{339}},\ \bibinfo {pages} {1169} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {{C{\'o}rcoles}}\ \emph {et~al.}(2015)\citenamefont
{{C{\'o}rcoles}}, \citenamefont {{Magesan}}, \citenamefont {{Srinivasan}},
\citenamefont {{Cross}}, \citenamefont {{Steffen}}, \citenamefont
{{Gambetta}},\ and\ \citenamefont {{Chow}}}]{corcoles:surface}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont
{{C{\'o}rcoles}}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{{Magesan}}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{{Srinivasan}}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{{Cross}}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Steffen}}},
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {{Gambetta}}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {{Chow}}},\ }\href
{\doibase 10.1038/ncomms7979} {\bibfield {journal} {\bibinfo {journal}
{Nature Communications}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {eid}
{6979} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Gambetta}}\ \emph {et~al.}(2015)\citenamefont
{{Gambetta}}, \citenamefont {{Chow}},\ and\ \citenamefont
{{Steffen}}}]{IBM:surface}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{{Gambetta}}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{{Chow}}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{{Steffen}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{ArXiv e-prints}\ } (\bibinfo {year} {2015})},\ \Eprint
{http://arxiv.org/abs/1510.04375} {arXiv:1510.04375 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Kelly}\ \emph {et~al.}(2015)\citenamefont {Kelly},
\citenamefont {Barends}, \citenamefont {Fowler}, \citenamefont {Megrant},
\citenamefont {Jeffrey}, \citenamefont {White}, \citenamefont {Sank},
\citenamefont {Mutus}, \citenamefont {Campbell}, \citenamefont {Chen} \emph
{et~al.}}]{kelly:repetition}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Kelly}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fowler}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Mutus}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Campbell}},
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Chen}}, \emph {et~al.},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf
{\bibinfo {volume} {519}},\ \bibinfo {pages} {66} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Rist{\`e}}}\ \emph {et~al.}(2015)\citenamefont
{{Rist{\`e}}}, \citenamefont {{Poletto}}, \citenamefont {{Huang}},
\citenamefont {{Bruno}}, \citenamefont {{Vesterinen}}, \citenamefont
{{Saira}},\ and\ \citenamefont {{Dicarlo}}}]{riste+:3bit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{{Rist{\`e}}}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Poletto}}}, \bibinfo {author} {\bibfnamefont {M.-Z.}\ \bibnamefont
{{Huang}}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {{Bruno}}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {{Vesterinen}}}, \bibinfo
{author} {\bibfnamefont {O.-P.}\ \bibnamefont {{Saira}}}, \ and\ \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {{Dicarlo}}},\ }\href {\doibase
10.1038/ncomms7983} {\bibfield {journal} {\bibinfo {journal} {Nature
Communications}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {eid} {6983}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Terhal}(2015)}]{terhal:rmp}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Terhal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {307}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{Note1()}]{Note1}
\BibitemOpen
\bibinfo {note} {A discrete set of gates is universal if the gates in the set
can be composed to make any desired unitary gate up to arbitrary
precision.}\BibitemShut {Stop} \bibitem [{Note2()}]{Note2}
\BibitemOpen
\bibinfo {note} {Parity checks operators are operators that define the code
space and return a +1 outcome when measured in the absence of
errors.}\BibitemShut {Stop} \bibitem [{\citenamefont {Gottesman}(1997)}]{thesis:gottesman}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }\emph {\bibinfo {title} {Stabilizer Codes and Quantum Error
Correction}},\ \href@noop {} {Ph.D. thesis},\ \bibinfo {school} {CalTech}
(\bibinfo {year} {1997}),\ \bibinfo {note}
{\url{http://arxiv.org/abs/quant-ph/9705052}}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aaronson}\ and\ \citenamefont
{Gottesman}(2004)}]{AG:stabilizer}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Aaronson}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Gottesman}},\ }\href {\doibase 10.1103/PhysRevA.70.052328} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{70}},\ \bibinfo {pages} {052328} (\bibinfo {year} {2004})}\BibitemShut
{NoStop} \bibitem [{Note3()}]{Note3}
\BibitemOpen
\bibinfo {note} {For a single block of code transversal logical gates are
realized as a product of single-qubit unitary gates. Transversal logical
gates between multiple blocks may use non-product unitary gates provided that
these interactions are only between the different blocks.}\BibitemShut
{Stop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2008)\citenamefont {Chen},
\citenamefont {Chung}, \citenamefont {Cross}, \citenamefont {Zeng},\ and\
\citenamefont {Chuang}}]{Chen:earlyEK}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Chung}},
\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Cross}}, \bibinfo
{author} {\bibfnamefont {B.}~\bibnamefont {Zeng}}, \ and\ \bibinfo {author}
{\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href {\doibase
10.1103/PhysRevA.78.012353} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo {pages} {012353}
(\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Eastin}}\ and\ \citenamefont
{{Knill}}(2009)}]{EK:nogo}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{{Eastin}}}\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{{Knill}}},\ }\href {\doibase 10.1103/PhysRevLett.102.110502} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {102}},\ \bibinfo {eid} {110502} (\bibinfo {year}
{2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knill}\ \emph
{et~al.}(1998{\natexlab{a}})\citenamefont {Knill}, \citenamefont {Laflamme},\
and\ \citenamefont {Zurek}}]{KLZ:arxiv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Knill}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laflamme}}, \
and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zurek}},\
}\href@noop {} {\enquote {\bibinfo {title} {Threshold accuracy for quantum
computation},}\ } (\bibinfo {year} {1998}{\natexlab{a}}),\ \bibinfo {note}
{quant-ph/9610011}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nigg}\ \emph {et~al.}(2014)\citenamefont {Nigg},
\citenamefont {Mueller}, \citenamefont {Martinez}, \citenamefont {Schindler},
\citenamefont {Hennrich}, \citenamefont {Monz}, \citenamefont
{Martin-Delgado},\ and\ \citenamefont {Blatt}}]{nigg:steanecode}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Nigg}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mueller}},
\bibinfo {author} {\bibfnamefont {E.~A.}\ \bibnamefont {Martinez}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Schindler}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Hennrich}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Monz}}, \bibinfo {author} {\bibfnamefont
{M.~A.}\ \bibnamefont {Martin-Delgado}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{345}},\ \bibinfo {pages} {302} (\bibinfo {year} {2014})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Dennis}\ \emph {et~al.}(2002)\citenamefont {Dennis},
\citenamefont {Kitaev}, \citenamefont {Landahl},\ and\ \citenamefont
{Preskill}}]{dennis+:top}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Dennis}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kitaev}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Landahl}}, \ and\
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys.}\ }\textbf
{\bibinfo {volume} {43}},\ \bibinfo {pages} {4452} (\bibinfo {year}
{2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Raussendorf}\ and\ \citenamefont
{Harrington}(2007)}]{RH:topoPrl}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Raussendorf}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Harrington}},\ }\href {doi:10.1103/PhysRevLett.98.190504} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {98}},\ \bibinfo {pages} {190504} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Fowler}}\ \emph {et~al.}(2009)\citenamefont
{{Fowler}}, \citenamefont {{Stephens}},\ and\ \citenamefont
{{Groszkowski}}}]{fowler+:unisurf}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{{Fowler}}}, \bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{{Stephens}}}, \ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{{Groszkowski}}},\ }\href {\doibase 10.1103/PhysRevA.80.052312} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{80}},\ \bibinfo {pages} {052312} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Fowler}\ \emph {et~al.}(2012)\citenamefont {Fowler},
\citenamefont {Mariantoni}, \citenamefont {Martinis},\ and\ \citenamefont
{Cleland}}]{FMMC:review}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Fowler}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mariantoni}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Martinis}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Cleland}},\ }\href
{\doibase 10.1103/PhysRevA.86.032324} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo
{pages} {032324} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Raussendorf}}\ \emph {et~al.}(2007)\citenamefont
{{Raussendorf}}, \citenamefont {{Harrington}},\ and\ \citenamefont
{{Goyal}}}]{RHG:threshold}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{{Raussendorf}}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Harrington}}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{{Goyal}}},\ }\href {\doibase 10.1088/1367-2630/9/6/199} {\bibfield
{journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume}
{9}},\ \bibinfo {pages} {199} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb\'{\i}n}(2010)}]{bombin:twist}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}},\ }\href {\doibase 10.1103/PhysRevLett.105.030403} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {105}},\ \bibinfo {eid} {030403} (\bibinfo {year}
{2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Hastings}}\ and\ \citenamefont
{{Geller}}(2014)}]{HG:dislocation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{{Hastings}}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Geller}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{ArXiv e-prints}\ } (\bibinfo {year} {2014})},\ \Eprint
{http://arxiv.org/abs/1408.3379} {arXiv:1408.3379 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {{Horsman}}\ \emph {et~al.}(2012)\citenamefont
{{Horsman}}, \citenamefont {{Fowler}}, \citenamefont {{Devitt}},\ and\
\citenamefont {{Van Meter}}}]{horsman+:suture}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{{Horsman}}}, \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{{Fowler}}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {{Devitt}}},
\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Van Meter}}},\
}\href {\doibase 10.1088/1367-2630/14/12/123011} {\bibfield {journal}
{\bibinfo {journal} {New Journal of Physics}\ }\textbf {\bibinfo {volume}
{14}},\ \bibinfo {pages} {123011} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {{Brown}}\ \emph
{et~al.}(2016{\natexlab{a}})\citenamefont {{Brown}}, \citenamefont
{{Laubscher}}, \citenamefont {{Kesselring}},\ and\ \citenamefont
{{Wootton}}}]{brown+:poking}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{{Brown}}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{{Laubscher}}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont
{{Kesselring}}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~R.}\
\bibnamefont {{Wootton}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {ArXiv e-prints}\ } (\bibinfo {year} {2016}{\natexlab{a}})},\
\Eprint {http://arxiv.org/abs/1609.04673} {arXiv:1609.04673 [quant-ph]}
\BibitemShut {NoStop} \bibitem [{\citenamefont {Aliferis}(2007)}]{Aliferis07}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Aliferis}},\ }\emph {\bibinfo {title} {Level reduction and the quantum
threshold theorem}},\ \href@noop {} {Ph.D. thesis},\ \bibinfo {school}
{Caltech} (\bibinfo {year} {2007})\BibitemShut {NoStop} \bibitem [{\citenamefont {Jones}\ \emph {et~al.}(2012)\citenamefont {Jones},
\citenamefont {Van~Meter}, \citenamefont {Fowler}, \citenamefont {McMahon},
\citenamefont {Kim}, \citenamefont {Ladd},\ and\ \citenamefont
{Yamamoto}}]{jones:reuseable}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont
{Jones}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Van~Meter}},
\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {Fowler}}, \bibinfo
{author} {\bibfnamefont {P.~L.}\ \bibnamefont {McMahon}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Kim}}, \bibinfo {author} {\bibfnamefont
{T.~D.}\ \bibnamefont {Ladd}}, \ and\ \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Yamamoto}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {2}},\
\bibinfo {pages} {031007} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shor}(1996)}]{shor:faulttol}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~W.}\ \bibnamefont
{Shor}},\ }in\ \href {\doibase 10.1109/SFCS.1996.548464} {\emph {\bibinfo
{booktitle} {37th Annual Symposium on Foundations of Computer Science, {FOCS}
'96, Burlington, Vermont, USA, 14-16 October, 1996}}}\ (\bibinfo {year}
{1996})\ pp.\ \bibinfo {pages} {56--65}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knill}\ \emph
{et~al.}(1998{\natexlab{b}})\citenamefont {Knill}, \citenamefont {Laflamme},\
and\ \citenamefont {Zurek}}]{KLZ:res}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Knill}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laflamme}}, \
and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Zurek}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf
{\bibinfo {volume} {279}},\ \bibinfo {pages} {342} (\bibinfo {year}
{1998}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Kitaev}(2005)}]{BK:magicdistill}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kitaev}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {022316}
(\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fowler}\ \emph {et~al.}(2013)\citenamefont {Fowler},
\citenamefont {Devitt},\ and\ \citenamefont {Jones}}]{fowler:blockMSD}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Fowler}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont {Devitt}},
\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Jones}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Scientific
reports}\ }\textbf {\bibinfo {volume} {3}} (\bibinfo {year}
{2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {O'Gorman}\ and\ \citenamefont
{Campbell}(2016)}]{Ogorman:realisticMSD}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{O'Gorman}}\ and\ \bibinfo {author} {\bibfnamefont {E.~T.}\ \bibnamefont
{Campbell}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1605.07197}\ } (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Kitaev}\ \emph {et~al.}(2002)\citenamefont {Kitaev},
\citenamefont {Shen},\ and\ \citenamefont {Vyalyi}}]{KSV:computation}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont
{Kitaev}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shen}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Vyalyi}},\ }\href@noop {}
{\emph {\bibinfo {title} {Classical and Quantum Computation. Vol. 47 of
Graduate Studies in Mathematics.}}}\ (\bibinfo {publisher} {American
Mathematical Society},\ \bibinfo {address} {Providence, RI},\ \bibinfo {year}
{2002})\BibitemShut {NoStop} \bibitem [{\citenamefont {Ross}\ and\ \citenamefont {Selinger}(2016)}]{RS14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont
{Ross}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Selinger}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quant. Inf. and Comp.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages}
{901} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bocharov}\ \emph {et~al.}(2015)\citenamefont
{Bocharov}, \citenamefont {Roetteler},\ and\ \citenamefont
{Svore}}]{bocharov:PQF}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bocharov}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Roetteler}},
\ and\ \bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Svore}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\
}\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {052317} (\bibinfo
{year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Amy}\ \emph {et~al.}(2013)\citenamefont {Amy},
\citenamefont {Maslov}, \citenamefont {Mosca},\ and\ \citenamefont
{Roetteler}}]{amy:TPAR}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Amy}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Maslov}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mosca}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Roetteler}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {IEEE Transactions on
Computer-Aided Design of Integrated Circuits and Systems}\ }\textbf {\bibinfo
{volume} {32}},\ \bibinfo {pages} {818} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Landahl}\ and\ \citenamefont
{Cesare}(2013)}]{landahl13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Landahl}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Cesare}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv
preprint arXiv:1302.3240}\ } (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Duclos-Cianci}\ and\ \citenamefont
{Poulin}(2015)}]{duclos15}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Duclos-Cianci}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Poulin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {042315}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campbell}\ and\ \citenamefont
{O'Gorman}(2016)}]{campbell16}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~T.}\ \bibnamefont
{Campbell}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{O'Gorman}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quant. Sci. Tech.}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages}
{015007} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Eastin}(2013)}]{eastin13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Eastin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {032321}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jones}(2013)}]{jones13b}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Jones}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {022328}
(\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Campbell}\ and\ \citenamefont
{Howard}(2016)}]{campbell:unified2}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~T.}\ \bibnamefont
{Campbell}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Howard}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv
preprint arXiv:1606.01904}\ } (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Paler}}\ \emph {et~al.}(2016)\citenamefont
{{Paler}}, \citenamefont {{Devitt}},\ and\ \citenamefont
{{Fowler}}}]{PDF:circuit}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Paler}}}, \bibinfo {author} {\bibfnamefont {S.~J.}\ \bibnamefont
{{Devitt}}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{{Fowler}}},\ }\href {\doibase 10.1038/srep30600} {\bibfield {journal}
{\bibinfo {journal} {Scientific Reports}\ }\textbf {\bibinfo {volume} {6}},\
\bibinfo {eid} {30600} (\bibinfo {year} {2016})},\ \Eprint
{http://arxiv.org/abs/1604.08621} {arXiv:1604.08621 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Meier}\ \emph {et~al.}(2013)\citenamefont {Meier},
\citenamefont {Eastin},\ and\ \citenamefont {Knill}}]{Meier13}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{Meier}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Eastin}}, \
and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Quant. Inf. and
Comp.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {195} (\bibinfo
{year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Bravyi}}\ and\ \citenamefont
{{Haah}}(2012)}]{BH:magic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Bravyi}}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Haah}}},\ }\href {\doibase 10.1103/PhysRevA.86.052329} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{86}},\ \bibinfo {eid} {052329} (\bibinfo {year} {2012})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {{Jones}}(013a)}]{jones:dist}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{{Jones}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages} {042305}
(\bibinfo {year} {2013a})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Wang}}\ \emph {et~al.}(2003)\citenamefont {{Wang}},
\citenamefont {{Harrington}},\ and\ \citenamefont
{{Preskill}}}]{WHP:threshold}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{{Wang}}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Harrington}}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Preskill}}},\ }\href {\doibase 10.1016/S0003-4916(02)00019-2} {\bibfield
{journal} {\bibinfo {journal} {Annals of Physics}\ }\textbf {\bibinfo
{volume} {303}},\ \bibinfo {pages} {31} (\bibinfo {year} {2003})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Beverland}\ \emph {et~al.}()\citenamefont
{Beverland}, \citenamefont {Kubica}, \citenamefont {Brandao}, \citenamefont
{Preskill},\ and\ \citenamefont {Svore}}]{beverland:perscom}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Beverland}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kubica}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Brandao}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Preskill}}, \ and\ \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Svore}},\ }\href@noop {}
{}\bibinfo {note} {To appear}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beverland}(2016)}]{beverlandThesis}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Beverland}},\ }\emph {\bibinfo {title} {Toward realizable quantum
computers}},\ \href@noop {} {Ph.D. thesis},\ \bibinfo {school} {Caltech}
(\bibinfo {year} {2016})\BibitemShut {NoStop} \bibitem [{\citenamefont {{Brown}}\ \emph
{et~al.}(2016{\natexlab{b}})\citenamefont {{Brown}}, \citenamefont
{{Nickerson}},\ and\ \citenamefont {{Browne}}}]{brown:singleshot}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~J.}\ \bibnamefont
{{Brown}}}, \bibinfo {author} {\bibfnamefont {N.~H.}\ \bibnamefont
{{Nickerson}}}, \ and\ \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont
{{Browne}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Nature Communications}\ }\textbf {\bibinfo {volume} {7}} (\bibinfo {year}
{2016}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Breuckmann}}\ \emph {et~al.}(2016)\citenamefont
{{Breuckmann}}, \citenamefont {{Duivenvoorden}}, \citenamefont {{Michels}},\
and\ \citenamefont {{Terhal}}}]{BDMT:local}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont
{{Breuckmann}}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{{Duivenvoorden}}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{{Michels}}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{{Terhal}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {In
press at QIC, ArXiv e-prints}\ } (\bibinfo {year} {2016})},\ \Eprint
{http://arxiv.org/abs/1609.00510} {arXiv:1609.00510 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Kubica}\ \emph {et~al.}(2015)\citenamefont {Kubica},
\citenamefont {Yoshida},\ and\ \citenamefont {Pastawski}}]{KYP:unfolding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kubica}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yoshida}}, \
and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pastawski}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of
Physics}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {083026}
(\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{Sup()}]{SuppMat::colorcode}
\BibitemOpen
\href@noop {} {}\bibinfo {note}
{\url{https://youtu.be/erkeCxQ0-g4}}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb\'{\i}n}\ and\ \citenamefont
{{Martin-Delgado}}(2006)}]{BM:colorcodes}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{{Martin-Delgado}}},\ }\href {\doibase 10.1103/PhysRevLett.97.180501}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {97}},\ \bibinfo {eid} {180501} (\bibinfo {year}
{2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb\'{\i}n}\ and\ \citenamefont
{{Martin-Delgado}}(2007)}]{BM:transT}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{{Martin-Delgado}}},\ }\href {\doibase 10.1103/PhysRevLett.98.160502}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {98}},\ \bibinfo {eid} {160502} (\bibinfo {year}
{2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bomb\'{\i}n}\ and\ \citenamefont
{Martin-Delgado}(2007)}]{bombin:homological}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Martin-Delgado}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of mathematical physics}\ }\textbf {\bibinfo {volume} {48}},\
\bibinfo {pages} {052105} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Katzgraber}\ \emph {et~al.}(2010)\citenamefont
{Katzgraber}, \citenamefont {Bombin}, \citenamefont {Andrist},\ and\
\citenamefont {Martin-Delgado}}]{katzgraber:fullClifford}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~G.}\ \bibnamefont
{Katzgraber}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Bombin}},
\bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Andrist}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Martin-Delgado}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Physical Review
A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {012319} (\bibinfo
{year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kubica}\ and\ \citenamefont
{Beverland}(2015)}]{KB:simple}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kubica}}\ and\ \bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont
{Beverland}},\ }\href {\doibase 10.1103/PhysRevA.91.032330} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{91}},\ \bibinfo {pages} {032330} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Bomb\'{\i}n}(2015{\natexlab{a}})}]{bombin:gauge}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{New Journal of Physics}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo
{pages} {083002} (\bibinfo {year} {2015}{\natexlab{a}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Landahl}\ and\ \citenamefont
{Ryan-Anderson}(2014)}]{landahl14}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Landahl}}\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Ryan-Anderson}},\ }\href@noop {} {\enquote {\bibinfo {title} {Quantum
computing by color-code lattice surgery},}\ } (\bibinfo {year} {2014}),\
\bibinfo {note} {report SAND2014-15911J and arXiv:1407.5103}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Paetznick}\ and\ \citenamefont
{Reichardt}(2013)}]{paetznick:gauge}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Paetznick}}\ and\ \bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont
{Reichardt}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo {pages}
{090505} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont
{Bomb\'{\i}n}(2015{\natexlab{b}})}]{bombin:singleshot}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb\'{\i}n}},\ }\href {\doibase 10.1103/PhysRevX.5.031043} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume}
{5}},\ \bibinfo {pages} {031043} (\bibinfo {year}
{2015}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vuillot}\ and\ \citenamefont
{Terhal}()}]{VT:color_arch}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Vuillot}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Terhal}},\ }\href@noop {} {}\bibinfo {note} {To appear, 2016}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Bomb{\'\i}n}(2016)}]{bombin:dimensionalJumps}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Bomb{\'\i}n}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{New Journal of Physics}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo
{pages} {043038} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Delfosse}}(2014)}]{delfosse:colordecoding}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{{Delfosse}}},\ }\href {\doibase 10.1103/PhysRevA.89.012317} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{89}},\ \bibinfo {eid} {012317} (\bibinfo {year} {2014})},\ \Eprint
{http://arxiv.org/abs/1308.6207} {arXiv:1308.6207 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {{Criger}}\ and\ \citenamefont
{{Terhal}}(2016)}]{CT:422}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{{Criger}}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{{Terhal}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quant. Inf. and Comp.}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages}
{1261} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cross}\ \emph {et~al.}(2009)\citenamefont {Cross},
\citenamefont {DiVincenzo},\ and\ \citenamefont {Terhal}}]{CDT:study}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Cross}}, \bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont
{DiVincenzo}}, \ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Terhal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quantum Info. and Comput.}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo
{pages} {541} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Paetznick}}\ and\ \citenamefont
{{Reichardt}}(2012)}]{PR:golay}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Paetznick}}}\ and\ \bibinfo {author} {\bibfnamefont {B.~W.}\ \bibnamefont
{{Reichardt}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quant. Inf. and Comp.}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages}
{1034} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Jochym-O'Connor}}\ and\ \citenamefont
{{Laflamme}}(2014)}]{oconnor_laflamme}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{{Jochym-O'Connor}}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{{Laflamme}}},\ }\href {\doibase 10.1103/PhysRevLett.112.010505} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {112}},\ \bibinfo {eid} {010505} (\bibinfo {year} {2014})},\ \Eprint
{http://arxiv.org/abs/1309.3310} {arXiv:1309.3310 [quant-ph]} \BibitemShut
{NoStop} \bibitem [{\citenamefont {Chamberland}\ \emph {et~al.}(2016)\citenamefont
{Chamberland}, \citenamefont {Jochym-O'Connor},\ and\ \citenamefont
{Laflamme}}]{chamberland:analysis}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Chamberland}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jochym-O'Connor}}, \ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Laflamme}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{arXiv preprint arXiv:1609.07497}\ } (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Yoder}\ \emph {et~al.}(2016)\citenamefont {Yoder},
\citenamefont {Takagi},\ and\ \citenamefont {Chuang}}]{YTC:piece}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Yoder}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Takagi}}, \
and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\
}\href {\doibase 10.1103/PhysRevX.6.031039} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages}
{031039} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont
{Cross}(2015)}]{bravyi:doubled}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cross}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1509.03239}\ } (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Jones}}\ \emph {et~al.}(2016)\citenamefont
{{Jones}}, \citenamefont {{Brooks}},\ and\ \citenamefont
{{Harrington}}}]{JBH:2D}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{{Jones}}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {{Brooks}}}, \
and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {{Harrington}}},\
}\href {\doibase 10.1103/PhysRevA.93.052332} {\bibfield {journal} {\bibinfo
{journal} {\pra}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {eid} {052332}
(\bibinfo {year} {2016})},\ \Eprint {http://arxiv.org/abs/1512.04193}
{arXiv:1512.04193 [quant-ph]} \BibitemShut {NoStop} \bibitem [{\citenamefont {Jochym-O'Connor}\ and\ \citenamefont
{Bartlett}(2016)}]{jochym:stacked}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Jochym-O'Connor}}\ and\ \bibinfo {author} {\bibfnamefont {S.~D.}\
\bibnamefont {Bartlett}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {022323} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nikahd}\ \emph {et~al.}(2016)\citenamefont {Nikahd},
\citenamefont {Sedighi},\ and\ \citenamefont {Zamani}}]{nikahd:nonuniformCC}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Nikahd}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sedighi}}, \
and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Zamani}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint
arXiv:1605.07007}\ } (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Bravyi}}\ and\ \citenamefont
{{Koenig}}(2013)}]{Bk:uni}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Bravyi}}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{{Koenig}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages}
{170503} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Bravyi}}\ \emph {et~al.}(2010)\citenamefont
{{Bravyi}}, \citenamefont {{Poulin}},\ and\ \citenamefont
{{Terhal}}}]{BPT:tradeoff}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Bravyi}}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {{Poulin}}},
\ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont {{Terhal}}},\
}\href {\doibase 10.1103/PhysRevLett.104.050503} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {104}},\
\bibinfo {eid} {050503} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Breuckmann}\ and\ \citenamefont
{Terhal}(2016)}]{BT:hyperbolic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont
{Breuckmann}}\ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Terhal}},\ }\href {\doibase 10.1109/TIT.2016.2555700} {\bibfield {journal}
{\bibinfo {journal} {IEEE Transactions on Information Theory}\ }\textbf
{\bibinfo {volume} {62}},\ \bibinfo {pages} {3731} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Delfosse}(2013)}]{delfosse:tradeoffs}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Delfosse}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Information
Theory Proceedings (ISIT), 2013 IEEE International Symposium on}}}\ (\bibinfo
{organization} {IEEE},\ \bibinfo {year} {2013})\ pp.\ \bibinfo {pages}
{917--921}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont {Terhal}(2009)}]{BT:mem}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Terhal}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New
Journal of Physics}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages}
{043029} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tillich}\ and\ \citenamefont
{Z{\'e}mor}(2009)}]{TZ:codes}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-P.}\ \bibnamefont
{Tillich}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Z{\'e}mor}},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {Proceedings
of the IEEE Symposium on Information Theory}}}\ (\bibinfo {year} {2009})\
pp.\ \bibinfo {pages} {799--803},\ \bibinfo {note}
{\url{http://arxiv.org/abs/0903.0566}}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Freedman}}\ and\ \citenamefont
{{Hastings}}(2014)}]{FH:hypergraphs}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{{Freedman}}}\ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{{Hastings}}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Quant. Inf. and Comp.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages}
{144} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Monroe}\ \emph {et~al.}(2014)\citenamefont {Monroe},
\citenamefont {Raussendorf}, \citenamefont {Ruthven}, \citenamefont {Brown},
\citenamefont {Maunz}, \citenamefont {Duan},\ and\ \citenamefont
{Kim}}]{monroe+:arch}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Monroe}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Raussendorf}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Ruthven}}, \bibinfo
{author} {\bibfnamefont {K.}~\bibnamefont {Brown}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Maunz}}, \bibinfo {author} {\bibfnamefont
{L.-M.}\ \bibnamefont {Duan}}, \ and\ \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Kim}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {89}},\ \bibinfo
{pages} {022317} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nickerson}\ \emph {et~al.}(2014)\citenamefont
{Nickerson}, \citenamefont {Fitzsimons},\ and\ \citenamefont
{Benjamin}}]{NFB:photons}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~H.}\ \bibnamefont
{Nickerson}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont
{Fitzsimons}}, \ and\ \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont
{Benjamin}},\ }\href {\doibase 10.1103/PhysRevX.4.041041} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume}
{4}},\ \bibinfo {pages} {041041} (\bibinfo {year} {2014})}\BibitemShut
{NoStop} \end{thebibliography}
\end{document}
\begin{document}
\setcounter{page}{1}
\vspace*{1.0cm} \title[A circumcentered-reflection method for FPP] {A circumcentered-reflection method for finding common fixed points of firmly nonexpansive operators}
\author[R. Arefidamghani, R. Behling, A.N. Iusem, L.-R. Santos]{R. Arefidamghani$^1$, R. Behling$^2$, A.N. Iusem$^{3,*}$, L.-R. Santos$^4$} \maketitle
\begin{center} {\footnotesize {\it $^1$Instituto de Matem\'atica Pura e Aplicada, Estrada Dona Castorina 110, Jardim Bot\^anico, CEP 22460-320, Rio de Janeiro, RJ, Brazil\\ $^2$School of Applied Mathematics, Funda\c c\~ao Get\'ulio Vargas, Rio de Janeiro, Brazil\\ $^3$Instituto de Matem\'atica Pura e Aplicada, Estrada Dona Castorina 110, Jardim Bot\^anico, CEP 22460-320, Rio de Janeiro, RJ, Brazil\\ $^4$Department of Mathematics, Federal University of Santa Catarina, Blumenau, SC, Brazil}} \end{center}
\centerline{\bf Honoring Prof. Yair Censor on his 80th birthday}
\vskip 4mm {\small\noindent {\bf Abstract.} The circumcentered-reflection method (CRM) has been recently proposed as a methodology for accelerating several algorithms for solving the Convex Feasibility Problem (CFP), equivalent to finding a common fixed-point of the orthogonal projections onto a finite number of closed and convex sets. In this paper, we apply CRM to the more general Fixed Point Problem (denoted as FPP), consisting of finding a common fixed-point of operators belonging to a larger family, namely firmly nonexpansive operators. We prove that, in this setting, CRM is globally convergent to a common fixed-point (provided at least one exists). We also establish linear convergence of the sequence generated by CRM applied to FPP, under a not too demanding error bound assumption, and provide an estimate of the asymptotic constant. We provide solid numerical evidence of the superiority of CRM when compared to the classical Parallel Projection Method (PPM). Additionally, we present certain results on convex combinations of orthogonal projections, which are of some interest in their own right.
\noindent{\bf Keywords}: Common fixed points, Firmly nonexpansive operators, Circumcentered-reflection method, Alternating projections, Convergence rate, Error bound.
\renewcommand{\thefootnote}{} \footnotetext{ $^*$Corresponding author. \par E-mail addresses: [email protected] (R. Arefidamghani), [email protected] (R. Behling), [email protected] (A.N. Iusem), [email protected] (L.-R. Santos). \par Received ; Accepted} \section{Introduction}\label{s1}
We start by recalling the Convex Feasibility Problem (CFP), which consists of finding a point in the intersection of a finite number of closed convex subsets of $\mathbb R^n$. CFP is clearly equivalent to solving a finite system of convex inequalities in $\mathbb R^n$, and it can also be rephrased as the problem of finding a common fixed-point of the orthogonal projections onto such subsets. A natural extension of CFP is the problem of finding a common fixed-point of a finite set of operators other than orthogonal projections, but sharing some of their properties. A vast literature on the subject has been developed; we cite just a few references, namely \cite{CeS}, \cite{Mou}, \cite{YLY} and \cite{ZhH}. In this paper we will consider a particular generalization of orthogonal projections, namely {\it firmly nonexpansive operators}.
We define next this family of operators, together with two related families.
\begin{Def}\label{d1} An operator $T:\mathbb R^n\to\mathbb R^n$ is said to be: \begin{itemize} \item[i)] {\rm nonexpansive} when $\left\Vert T(x)-T(y)\right\Vert \le\left\Vert x-y\right\Vert $ for all $x,y\in\mathbb R^n$. \item[ii)] {\rm nonexpansive plus} when it is nonexpansive, and whenever $\left\Vert T(x)-T(y)\right\Vert =\left\Vert x-y\right\Vert $ it holds that $T(x)-T(y)=x-y$. \item[iii)] {\rm firmly nonexpansive} when \begin{equation}\label{e1} \left\Vert T(x)-T(y)\right\Vert ^2\le\left\Vert x-y\right\Vert ^2-\left\Vert (T(x)-T(y))-(x-y)\right\Vert ^2 \end{equation} for all $x,y\in\mathbb R^n$. \end{itemize} \end{Def}
It is immediate that firmly nonexpansive operators are nonexpansive plus, and nonexpansive plus operators are nonexpansive. It is well known and easy to prove that orthogonal projections onto closed and convex sets are firmly nonexpansive. The notation {\it nonexpansive plus} is not standard; we adopt it because of the analogy with copositive plus matrices.
Let $T_1, \dots ,T_m:\mathbb R^n\to\mathbb R^n$ be firmly nonexpansive operators. The problem of finding a common fixed-point of $T_1, \dots, T_m$ (i.e., a point $\bar x\in\mathbb R^n$ such that $T_i(\bar x)=\bar x$ for all $i\in\{1, \dots ,m\}$) will be denoted as FPP. The set of common fixed-points of the $T_i$'s will be denoted as Fix$(T_1, \dots ,T_m)$. Two classical methods for FPP are the Sequential Projection Method (SPM) and the Parallel Projection Method (PPM), which can be traced back to \cite{Kac} and \cite{Cim}, respectively, and are defined as follows. Consider the operators $\widehat T, \overline T:\mathbb R^n\to\mathbb R^n$ given by $\widehat T=T_m\circ\dots\circ T_1$, $\overline T=\frac{1}{m}\sum_{i=1}^m T_i$. Starting from an arbitrary $x^0\in\mathbb R^n$, SPM and PPM generate sequences $\{x^k\}$ given by $x^{k+1}=\widehat T(x^k)$, $x^{k+1}=\overline T(x^k)$ respectively. When Fix$(T_1, \dots, T_m)\ne\emptyset$, the sequences generated by both methods are known to be globally convergent to points in Fix$(T_1, \dots, T_m)$, i.e., both methods solve FPP. See \cite{CeZ} for an in-depth study of these and other projection methods for FPP.
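To make the two iterations concrete, here is a small numerical sketch (our own illustration, not part of the paper): the $T_i$ are taken to be orthogonal projections onto two lines through the origin in $\mathbb R^2$, whose unique common fixed point is the origin; the line directions and the starting point are assumptions made for the example.

```python
import numpy as np

# Illustrative firmly nonexpansive operators: orthogonal projections onto
# two lines through the origin in R^2 (directions chosen for this sketch).
def proj_line(d):
    d = np.asarray(d, dtype=float) / np.linalg.norm(d)
    return lambda x: np.dot(x, d) * d

ops = [proj_line([1.0, 0.0]), proj_line([1.0, 1.0])]

def spm_step(x, ops):
    # SPM: x^{k+1} = T_m(...(T_1(x^k)))
    for T in ops:
        x = T(x)
    return x

def ppm_step(x, ops):
    # PPM: x^{k+1} = (1/m) * sum_i T_i(x^k)
    return sum(T(x) for T in ops) / len(ops)

x_spm = x_ppm = np.array([3.0, 2.0])
for _ in range(200):
    x_spm = spm_step(x_spm, ops)
    x_ppm = ppm_step(x_ppm, ops)
# Both sequences approach the unique common fixed point, the origin.
```

Both iterates shrink toward the origin geometrically, SPM with ratio $1/2$ per sweep here and PPM with the ratio given by the dominant eigenvalue of $\overline T$.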
An interesting relation between SPM and PPM was found in \cite{Pie}. Given firmly nonexpansive operators $T_1, \dots, T_m:\mathbb R^n\to\mathbb R^n$, define the operator $\widetilde T:\mathbb R^{nm}\to\mathbb R^{nm}$ as $\widetilde T(x^1, \dots ,x^m)=(T_1(x^1), \dots ,T_m(x^m))$, with $x^i\in\mathbb R^n$ ($1\le i\le m$). It is rather immediate to check that $\widetilde T$ is firmly nonexpansive. Consider the set $\widetilde U=\{(x,\dots,x): x\in\mathbb R^n\}\subset\mathbb R^{nm}$, and let $P_{\widetilde U}:\mathbb R^{nm}\to\widetilde U$ be the orthogonal projection onto $\widetilde U$. Define $\{\bar x^k\}\subset\mathbb R^{nm}$ as the sequence resulting from applying SPM, as defined above, to the operators $\widetilde T, P_{\widetilde U}$, starting from a point $\bar x^0=(x^0, \dots ,x^0)\in\widetilde U$, i.e., take $\bar x^{k+1}=P_{\widetilde U}(\widetilde T(\bar x^k))$. Clearly, $\bar x^k$ belongs to $\widetilde U$ for all $k$, so that we may write $\bar x^k=(x^k,\dots,x^k)$ with $x^k\in\mathbb R^n$. It was proved in \cite{Pie} that $x^{k+1}=\overline T(x^k)$, i.e., a step of SPM applied to two specific firmly nonexpansive operators in the product space $\mathbb R^{nm}$ is equivalent to a step of PPM in the original space $\mathbb R^n$. Thus, SPM with just two operators plays a sort of special role, and deserves a name of its own. We will call it the {\it Method of Alternating Projections} (MAP from now on). Observe that in the equivalence above, one of the two operators is the orthogonal projection onto a linear subspace, namely $\widetilde U$. This fact will be essential for the convergence of the Circumcentered-Reflection Method (CRM from now on), applied for solving FPP.
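Pierra's equivalence is easy to verify numerically. The following sketch (with sample projection operators of our own choosing) checks that one MAP step applied to $\widetilde T$ and $P_{\widetilde U}$, started on the diagonal $\widetilde U$, reproduces one PPM step in the original space; note that $P_{\widetilde U}$ simply replaces each block by the average of all blocks.

```python
import numpy as np

def proj_line(d):
    # Orthogonal projection onto the line spanned by d (sample operator).
    d = np.asarray(d, dtype=float) / np.linalg.norm(d)
    return lambda x: np.dot(x, d) * d

ops = [proj_line([1.0, 0.0]), proj_line([1.0, 1.0]), proj_line([0.0, 1.0])]
m = len(ops)

def P_U(blocks):
    # Projection onto the diagonal subspace U~ = {(x,...,x)}:
    # each n-dimensional block is replaced by the average of all blocks.
    avg = sum(blocks) / len(blocks)
    return [avg.copy() for _ in blocks]

x = np.array([3.0, 2.0])
# One MAP step in R^{nm}: apply T~ blockwise, then project onto U~.
blocks = P_U([T(x) for T in ops])
# One PPM step in R^n.
x_ppm = sum(T(x) for T in ops) / m
# The diagonal point produced by MAP equals the PPM iterate.
```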
We acknowledge that the use of the word ``projections'' in the names of SPM, PPM and MAP applied to FPP is an abuse of terminology, since in general there are no projections involved in FPP. Indeed, they correspond to these methods applied to CFP, a particular case of FPP. We keep them because the structure of the methods applied to either CFP or FPP is basically the same.
We proceed to describe CRM. Take three non-collinear points $x,y,z\in\mathbb R^n$, and let $M$ be their affine hull. The {\it circumcenter} circ$(x,y,z)$ is the center of the circle in $M$ passing through $x,y,z$ (or, equivalently, the point in $M$ equidistant from $x,y,z$). It is easy to check that circ$(x,y,z)$ is well defined. Now we take two firmly nonexpansive operators $A,B:\mathbb R^n\to\mathbb R^n$ and define $Q=A\circ B$. Under adequate assumptions, the sequence $\{x^k\}\subset\mathbb R^n$ defined by \begin{equation}\label{e0} x^{k+1}=Q(x^k)=A(B(x^k)) \end{equation} is expected to converge to a common fixed-point of $A$ and $B$. Note that, if $A,B$ are orthogonal projections onto convex sets $K_1, K_2$, then MAP turns out to be a special case of this iteration, and Fix$(A,B)=K_1\cap K_2$. CRM can be seen as an acceleration technique for the sequence defined by \eqref{e0}. Define the reflection operators $A^R, B^R:\mathbb R^n\to\mathbb R^n$ as $A^R=2A-I, B^R=2B-I$, where $I$ stands for the identity operator in $\mathbb R^n$. The CRM operator $C:\mathbb R^n\to\mathbb R^n$ is defined as $C(x)= $circ$(x, B^R(x), A^R(B^R(x)))$, i.e., the circumcenter of the points $x, B^R(x),A^R(B^R(x))$. The CRM sequence $\{x^k\}\subset\mathbb R^n$, starting at some $x^0\in\mathbb R^n$, is then defined as $x^{k+1}=C(x^k)$.
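The circumcenter admits a simple closed-form computation: writing $c = x + Vs$ with $V=[\,y-x \;\; z-x\,]$, equidistance from $x,y,z$ reduces to the $2\times 2$ linear system $2V^\top V s = (\left\Vert y-x\right\Vert^2, \left\Vert z-x\right\Vert^2)^\top$. The sketch below (our own illustration) implements this together with one CRM step; the example operators are projections onto two lines through the origin, where both reflections preserve the norm, so the three points lie on a circle centered at the intersection and CRM reaches it in a single step.

```python
import numpy as np

def circumcenter(x, y, z):
    # Point in the affine hull of x, y, z equidistant from all three.
    # With c = x + V s and V = [y - x, z - x], equidistance gives
    # 2 V^T V s = (|y - x|^2, |z - x|^2).
    V = np.column_stack([y - x, z - x])
    rhs = 0.5 * np.array([np.dot(y - x, y - x), np.dot(z - x, z - x)])
    s = np.linalg.solve(V.T @ V, rhs)
    return x + V @ s

def crm_step(x, A, B):
    # C(x) = circ(x, B^R(x), A^R(B^R(x))), with S^R = 2S - I.
    xB = 2 * B(x) - x
    xAB = 2 * A(xB) - xB
    return circumcenter(x, xB, xAB)

# Example: projections onto the x-axis and the diagonal x2 = x1.
PA = lambda v: np.array([v[0], 0.0])
PB = lambda v: np.array([(v[0] + v[1]) / 2] * 2)
x_next = crm_step(np.array([3.0, 2.0]), PA, PB)  # lands at the origin
```

One-step convergence is special to pairs of lines through a common point; in general CRM iterates the operator $C$ as described above.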
CRM was introduced in \cite{BBS1}, \cite{BBS2} and has been successfully applied for accelerating several methods for solving CFP, like MAP, PPM and the Douglas-Rachford Method (DRM), outperforming all of them. It was further enhanced in \cite{AABBIS}, \cite{ABBIS}, \cite{BOW1}, \cite{BOW2}, \cite{BOW3}, \cite{BOW4}, \cite{BOW5}, \cite{BBS3}, \cite{BBS4}, \cite{DHL1}, \cite{DHL2} and \cite{Ouy}. CRM was shown in \cite{BBS2} to converge to a solution of CFP. In \cite{ABBIS} it was proved that, under a not too demanding error bound condition, the sequences generated by MAP and CRM for solving CFP converge linearly, but the asymptotic constant for CRM is better than the one for MAP. This superiority was widely confirmed in the numerical experiments exhibited in \cite{AABBIS}.
Here, we will apply CRM for solving FPP with firmly nonexpansive operators $T_1, \dots, T_m:\mathbb R^n\to\mathbb R^n$ in the following way. We will apply it to two operators in $\mathbb R^{nm}$, namely $\widetilde T$ and $P_{\widetilde U}$ as defined above, starting from a point in $\widetilde U$. Note that, since $\widetilde U$ is a linear subspace, the operator $P_{\widetilde U}$ is affine.
The main purpose of this paper consists of establishing that CRM, when applied to FPP, is globally convergent, that linear convergence is achieved by both CRM and MAP under an error bound condition, and that CRM is computationally much faster than MAP, as corroborated by solid numerical evidence. We were not able to prove the superiority of CRM in terms of the asymptotic constant of linear convergence, but our numerical experiments suggest that a theoretical superiority is likely to hold. This issue is left as a subject for future research.
The paper is organized as follows. In Section \ref{s2} we present certain results, of some interest in their own right, on convex combinations of orthogonal projections, which we take as a prototypical family of firmly nonexpansive operators (beyond orthogonal projections themselves). In Section \ref{s3} we prove global convergence of CRM applied for solving FPP. We prove in Section \ref{s4} that, under a reasonable error bound assumption, convergence of CRM applied for solving FPP is linear, and we provide as well an estimate of the asymptotic constant, which holds also for MAP. In Section \ref{s5} we present our numerical experiments, which show that CRM categorically outperforms PPM. In these experiments, we use the family of firmly nonexpansive operators studied in Section \ref{s2}.
\section{Some properties of firmly nonexpansive operators}\label{s2}
We start with some elementary properties of nonexpansive plus and firmly nonexpansive operators (see Definition \ref{d1}).
\begin{proposition}\label{p1} \begin{itemize} \item[i)] Compositions of nonexpansive plus operators are nonexpansive plus. \item[ii)] Convex combinations of firmly nonexpansive operators are firmly nonexpansive. \end{itemize} \end{proposition}
\begin{proof} \begin{itemize} \item[i)] Suppose that $S,T$ are nonexpansive plus operators. Then \begin{equation}\label{e2} \left\Vert S(T(x))-S(T(y))\right\Vert \le\left\Vert T(x)-T(y)\right\Vert \le\left\Vert x-y\right\Vert , \end{equation} by nonexpansiveness of $S,T$, and if $\left\Vert S(T(x))-S(T(y))\right\Vert =\left\Vert x-y\right\Vert $, then equality holds throughout \eqref{e2}, so that, using the ``plus'' property of $S,T$, we have $S(T(x))-S(T(y))=T(x)-T(y)=x-y$, establishing the result. \item[ii)] Take firmly nonexpansive operators $T_1,\dots, T_m$ and nonnegative scalars $\alpha_1,\dots ,\alpha_m$ such that $\sum_{i=1}^m\alpha_i=1$. Let $\overline T=\sum_{i=1}^m\alpha_iT_i$. We prove next that $\overline T$ is firmly nonexpansive.
Note that \eqref{e1} is equivalent to \begin{equation}\label{e3} \left\Vert T(x)-T(y)\right\Vert ^2\le\langle T(x)-T(y), x-y\rangle. \end{equation} It suffices to check that $\overline T$ satisfies \eqref{e3}, and we proceed to do so. $$ \left\Vert \overline T(x)-\overline T(y)\right\Vert ^2=\left\Vert \sum_{i=1}^m\alpha_i(T_i(x)-T_i(y))\right\Vert ^2\le \sum_{i=1}^m\alpha_i\left\Vert T_i(x)-T_i(y)\right\Vert ^2 $$ $$ \le\sum_{i=1}^m\alpha_i\langle T_i(x)-T_i(y),x-y\rangle =\Big\langle\sum_{i=1}^m\alpha_i\left(T_i(x)-T_i(y)\right),x-y\Big\rangle=\langle\overline T(x)-\overline T(y),x-y\rangle, $$ using the convexity of $\left\Vert \cdot\right\Vert ^2$ in the first inequality and the fact that the $T_i$'s satisfy \eqref{e3} in the second one. \end{itemize} \end{proof}
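Proposition \ref{p1}(ii) can also be sanity-checked numerically. The sketch below (our own, using projections onto two closed balls as the firmly nonexpansive operators) tests inequality \eqref{e3} for a convex combination at random pairs of points; the ball centers, radii, and weight are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(c, r):
    # Orthogonal projection onto the closed ball B(c, r),
    # a standard example of a firmly nonexpansive operator.
    c = np.asarray(c, dtype=float)
    def P(x):
        d = x - c
        n = np.linalg.norm(d)
        return x.copy() if n <= r else c + (r / n) * d
    return P

P1, P2 = proj_ball([0.0, 0.0], 1.0), proj_ball([3.0, 0.0], 2.0)
alpha = 0.3
Tbar = lambda x: alpha * P1(x) + (1 - alpha) * P2(x)

# Randomized check of (e3): |T(x) - T(y)|^2 <= <T(x) - T(y), x - y>.
for _ in range(1000):
    x, y = rng.normal(size=2, scale=4.0), rng.normal(size=2, scale=4.0)
    d = Tbar(x) - Tbar(y)
    assert np.dot(d, d) <= np.dot(d, x - y) + 1e-10
```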
For an operator $T:\mathbb R^n\to\mathbb R^n$, we denote as $F(T)$ the set of its fixed points, i.e., $F(T)=\{x\in\mathbb R^n: T(x)=x\}$ (we comment that Fix$(\cdot, \cdot)$ denotes the set of common fixed points of two or more operators). We will also need the following ``acute angle'' property of firmly nonexpansive operators.
\begin{proposition}\label{pp1} Let $T:\mathbb R^n\to\mathbb R^n$ be a firmly nonexpansive operator. Then $0\ge\langle T(x)-y,T(x)-x\rangle$ for all $x\in\mathbb R^n$ and all $y\in F(T)$. \end{proposition} \begin{proof} Immediate from \eqref{e1}. \end{proof}
We continue by stating, for future reference, some elementary and well known properties of orthogonal projections onto closed and convex sets.
Let $C\subset\mathbb R^n$ be closed and convex. The {\it orthogonal projection} $P_C:\mathbb R^n\to C$ is defined as $P_C(x)={\rm argmin}_{y\in C} \left\Vert x-y\right\Vert $.
\begin{proposition}\label{p2} If $C\subset\mathbb R^n$ is closed and convex, then \begin{itemize} \item[i)] Given $x\in\mathbb R^n$, $z=P_C(x)$ if and only if $z\in C$ and $\langle x-z,y-z\rangle\le 0$ for all $y\in C$. \item[ii)] $P_C$ is firmly nonexpansive. \item[iii)] $F\left(P_C\right)=C$. \item[iv)] Take $x\in\mathbb R^n$ and let $z=P_C(x)$. Then, $P_C(z+\alpha (x-z))=P_C(x)$ for all $\alpha\ge 0$. \item[v)] Define $h:\mathbb R^n\to\mathbb R$ as $h(x)=\left\Vert x-P_C(x)\right\Vert ^2$. Then $h$ is continuously differentiable and $\nabla h(x)=2\left(x-P_C(x)\right)$. \end{itemize} \end{proposition}
\begin{proof} Elementary. \end{proof}
It is worthwhile to comment at this point that the composition of two firmly nonexpansive operators may fail to be firmly nonexpansive: consider $A=\{(x_1,x_2)\in\mathbb R^2: x_2=0\}$, $B=\{(x_1,x_2)\in\mathbb R^2: x_2=x_1\}$. $P_A$ and $P_B$ are firmly nonexpansive by Proposition \ref{p2}(ii), but their composition $P_B\circ P_A$ fails to satisfy \eqref{e3} with $x=(0,0)$ and $y=(2,-1)$.
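This counterexample is easy to check numerically. The following Python sketch (purely illustrative, not part of the formal development; the helper names \texttt{proj\_A}, \texttt{proj\_B}, \texttt{firm\_gap} are ours) evaluates the gap $\langle T(x)-T(y),x-y\rangle-\left\Vert T(x)-T(y)\right\Vert ^2$, which is nonnegative at a pair $(x,y)$ exactly when \eqref{e3} holds there:

```python
import numpy as np

def proj_A(p):
    """Projection onto A = {(x1, x2): x2 = 0}."""
    return np.array([p[0], 0.0])

def proj_B(p):
    """Projection onto B = {(x1, x2): x2 = x1}."""
    m = (p[0] + p[1]) / 2.0
    return np.array([m, m])

def firm_gap(T, x, y):
    """<T(x)-T(y), x-y> - ||T(x)-T(y)||^2; nonnegative at (x, y) iff (e3) holds there."""
    d = T(x) - T(y)
    return float(np.dot(d, x - y) - np.dot(d, d))

x, y = np.array([0.0, 0.0]), np.array([2.0, -1.0])
T = lambda p: proj_B(proj_A(p))  # the composition: apply P_A first, then P_B

print(firm_gap(proj_A, x, y))  # 0.0: (e3) holds here for P_A
print(firm_gap(proj_B, x, y))  # 0.0: (e3) holds here for P_B
print(firm_gap(T, x, y))       # -1.0: the composition violates (e3)
```

Here $T(x)=(0,0)$ and $T(y)=(1,1)$, so that $\left\Vert T(x)-T(y)\right\Vert ^2=2$ while $\langle T(x)-T(y),x-y\rangle=1$.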
We present next some properties of the set of fixed points of combinations of orthogonal projections. They have been proved, e.g., in \cite{DeI}, \cite{IuD}, but we include the proofs for the sake of completeness. From now on, for $C\subset\mathbb R^n$ and $x\in\mathbb R^n$, dist$(x,C)$ will denote the Euclidean distance between $x$ and $C$.
\begin{proposition}\label{p3} Consider closed and convex sets $C_1,\dots, C_m\subset\mathbb R^n$ and nonnegative scalars $\alpha_1,\dots ,\alpha_m$ such that $\sum_{i=1}^m\alpha_i=1$. Denote $P_i=P_{C_i}$ and let $\overline P=\sum_{i=1}^m\alpha_iP_i$. Define $g:\mathbb R^n\to\mathbb R$ as $g(x)=\sum_{i=1}^m\alpha_i\left\Vert x-P_i(x)\right\Vert ^2=\sum_{i=1}^m\alpha_i{\rm dist}(x,C_i)^2$ and let $C=\cap_{i=1}^mC_i$. Then, \begin{itemize} \item[i)] $F(\overline P) =\{x\in\mathbb R^n:\nabla g(x)=0\}$, i.e., since $g$ is convex, the set of fixed points of $\overline P$ (if nonempty) is precisely the set of minimizers of $g$. \item[ii)] If $C\ne\emptyset$, then $F(\overline P)=C$. \end{itemize} \end{proposition}
\begin{proof} \begin{itemize} \item[i)] By Proposition \ref{p2}(v), $$ \nabla g(x)=2\sum_{i=1}^m\alpha_i(x-P_i(x))=2\left(x-\sum_{i=1}^m\alpha_iP_i(x)\right)=2(x-\overline P(x)), $$ so that $\nabla g(x)=0$ iff $x=\overline P(x)$ iff $x\in F(\overline P)$. \item[ii)] Clearly, $C\subset F(\overline P)$. For the converse inclusion note that when $C\ne\emptyset$, we have $g(x)=0$ for all $x\in C$, so that the minimum value of $g$ is indeed $0$, and the set of minimizers of $g$ coincides with the set of its zeroes, which is $C$, because $g(x)>0$ whenever $x\notin C$. The result follows then from item (i). \end{itemize} \end{proof}
The next result provides a more accurate description of the set $F(\overline P)$ when $m=2$, i.e., for the case of a convex combination of the orthogonal projections onto two closed and convex sets.
Let $A,B\subset\mathbb R^n$ be two closed and convex sets. Take $\alpha\in (0,1)$ and let $\overline P=(1-\alpha)P_A+\alpha P_B$. Define $D\subset A\times B$ as $D=\{(x,y)\in A\times B: \left\Vert x-y\right\Vert ={\rm dist}(A,B)\}$. $S_A, S_B$ will denote the projections of $D$ onto $A,B$ respectively, i.e., $S_A=\{x\in A: \exists y\in B\,\,{\rm with}\,\, (x,y)\in D\}$, $S_B=\{y\in B: \exists x\in A \,\, {\rm with} \,\, (x,y)\in D\}$. In other words, $D$ consists of the pairs in $A\times B$ which realize the distance between $A$ and $B$, $S_A$ is the set of points in $A$ which realize the distance to $B$, and $S_B$ is the set of points in $B$ which realize the distance to $A$. We remark that $D$ may be empty; take for instance $A=\{(x_1,x_2)\in\mathbb R^2: x_2\le 0\}$, $B=\{(x_1,x_2)\in\mathbb R^2: x_2\ge e^{x_1}\}$.
\begin{proposition}\label{p4} With the notation above, \begin{itemize} \item[i)] For all $(x,y), (x',y')\in D$, it holds that $x-y=x'-y'$. \item[ii)] Take $(x,y)\in D, \alpha\in (0,1)$ and define $w=(1-\alpha)x+\alpha y$. Then $P_A(w)=x, P_B(w)=y$. \item[iii)] $F(\overline P)=\{w=(1-\alpha)x+\alpha y: (x,y)\in D\}$. \end{itemize} \end{proposition}
\begin{proof} \begin{itemize} \item[i)] Since, for any $(x,y)\in D$, the pair $(x,y)$ realizes the distance between $A$ and $B$, it follows that $P_B(x)=y, P_A(y)=x$ for all $(x,y)\in D$, and hence $P_A(P_B(x))=x$ for all $x\in S_A$. So, for all $(x,y), (x',y') \in D$, we have \begin{equation}\label{e4} \left\Vert x-x'\right\Vert =\left\Vert P_A(P_B(x))-P_A(P_B(x'))\right\Vert \le\left\Vert P_B(x)-P_B(x')\right\Vert \le\left\Vert x-x'\right\Vert , \end{equation} using Proposition \ref{p2}(ii). It follows that equality holds throughout \eqref{e4}, and since $P_A\circ P_B$ is nonexpansive plus by Proposition \ref{p1}(i), because both $P_A$ and $P_B$ are firmly nonexpansive (and so nonexpansive plus) by Proposition \ref{p2}(ii), we conclude from Definition \ref{d1}(ii) that $x-x'=P_B(x)-P_B(x')=y-y'$, which implies that $x-y=x'-y'$. \item[ii)] Take $(x,y)\in D$, so that $x\in A$. Then $w=y+(1-\alpha)(x-y)=P_B(x)+(1-\alpha)\left(x-P_B(x)\right)$. Since $1-\alpha>0$, it follows from Proposition \ref{p2}(iv) that $P_B(w)=y$. A similar argument establishes that $P_A(w)=x$. \item[iii)] Take $w=(1-\alpha)x+\alpha y$ with $(x,y)\in D$. Then, by (ii), $w=(1-\alpha)P_A(w)+\alpha P_B(w)=\overline P(w)$, and hence $w\in F(\overline P)$, so that
$\{w=(1-\alpha)x+\alpha y: (x,y)\in D\}\subset F(\overline P)$. For the converse inclusion, consider any $x\in F(\overline P)$, i.e., \begin{equation}\label{e5} x=(1-\alpha)P_A(x)+\alpha P_B(x). \end{equation} Let $\delta=$ dist$(A,B), \eta=\left\Vert P_A(x)-P_B(x)\right\Vert $. It suffices to check that $\left(P_A(x),P_B(x)\right)\in D$, i.e., that \begin{equation}\label{e6} \eta=\delta. \end{equation} From \eqref{e5}, we get $$ \left\Vert x-P_A(x)\right\Vert =\alpha\left\Vert P_B(x)-P_A(x)\right\Vert =\alpha\eta, $$ $$ \left\Vert x-P_B(x)\right\Vert =(1-\alpha)\left\Vert P_B(x)-P_A(x)\right\Vert =(1-\alpha)\eta, $$ implying that \begin{equation}\label{e7} g(x)=(1-\alpha)\left\Vert x-P_A(x)\right\Vert ^2+\alpha\left\Vert x-P_B(x)\right\Vert ^2=[(1-\alpha)\alpha^2+\alpha(1-\alpha)^2]\eta^2=(1-\alpha)\alpha\eta^2. \end{equation} Take now any pair $(u,v)\in D$, so that $\left\Vert u-v\right\Vert =\delta$, and let $w=(1-\alpha)u+\alpha v$. By item (ii), $u=P_A(w), v=P_B(w)$, so that $$ \left\Vert w-P_A(w)\right\Vert =\alpha\left\Vert P_B(w)-P_A(w)\right\Vert =\alpha\left\Vert u-v\right\Vert =\alpha\delta, $$ $$ \left\Vert w-P_B(w)\right\Vert =(1-\alpha)\left\Vert P_B(w)-P_A(w)\right\Vert =(1-\alpha)\left\Vert u-v\right\Vert =(1-\alpha)\delta, $$ and hence, \begin{equation}\label{e8} g(w)=(1-\alpha)\left\Vert w-P_A(w)\right\Vert ^2+\alpha\left\Vert w-P_B(w)\right\Vert ^2=[(1-\alpha)\alpha^2+\alpha(1-\alpha)^2]\delta^2=(1-\alpha)\alpha\delta^2. \end{equation} By Proposition \ref{p3}(i), $x$ is a minimizer of $g$, so that $g(x)\le g(w)$, which implies, in view of \eqref{e7}, \eqref{e8}, and the fact that $\alpha\in (0,1)$, that $\eta\le\delta$. On the other hand, $\eta=\left\Vert P_A(x)-P_B(x)\right\Vert $ with $P_A(x)\in A, P_B(x)\in B$, so that $\eta\ge$ dist$(A,B)=\delta$. We conclude that \eqref{e6} holds, and the result is established. \end{itemize} \end{proof}
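Proposition \ref{p4}(iii) can be visualized with a minimal numerical sketch (illustrative only; the helper names are ours). For the disjoint half-planes $A=\{x_1\le -1\}$ and $B=\{x_1\ge 1\}$ in $\mathbb R^2$, $D$ consists of the pairs $((-1,t),(1,t))$, so $F(\overline P)$ should be the vertical line $x_1=(1-\alpha)(-1)+\alpha$:

```python
import numpy as np

def proj_A(p):
    """Projection onto the half-plane A = {x_1 <= -1}."""
    return np.array([min(p[0], -1.0), p[1]])

def proj_B(p):
    """Projection onto the half-plane B = {x_1 >= 1}."""
    return np.array([max(p[0], 1.0), p[1]])

alpha = 0.25
Pbar = lambda p: (1 - alpha) * proj_A(p) + alpha * proj_B(p)

# D = {((-1, t), (1, t))}, so F(Pbar) should be the line x_1 = (1-alpha)(-1) + alpha = -0.5.
w = np.array([-0.5, 3.0])
print(Pbar(w))                      # equals w: a fixed point of Pbar
print(Pbar(np.array([0.7, 3.0])))   # maps to (-0.5, 3.0): not a fixed point
```

Note that here $A\cap B=\emptyset$ and yet $F(\overline P)$ is nonempty, in agreement with the proposition.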
We deal now with the main result of this section, which we describe next. The prototypical examples of firmly nonexpansive operators are the orthogonal projections onto closed and convex sets. Proposition \ref{p1}(ii) provides a larger class of firmly nonexpansive operators, namely convex combinations of orthogonal projections. It is therefore relevant to check that the second class is indeed larger, i.e., that, generically, convex combinations of orthogonal projections are not orthogonal projections themselves. We will prove that this is indeed the case when the intersection of the convex sets is nonempty. However, when this intersection is empty, a convex combination of orthogonal projections may be itself an orthogonal projection. We will establish a necessary and sufficient condition for this situation to occur, for the case of two convex sets.
\begin{proposition}\label{p5} Consider closed and convex sets $C_1,\dots, C_m\subset\mathbb R^n$ and nonnegative scalars $\alpha_1,\dots ,\alpha_m$ such that $\sum_{i=1}^m\alpha_i=1$. Denote $C=\cap_{i=1}^m C_i$, $P_i=P_{C_i}$ and let $\overline P=\sum_{i=1}^m\alpha_iP_i$. Assume that $C\ne\emptyset$. If there exists $E\subset\mathbb R^n$ such that $\overline P=P_E$ then $E=C_1= \dots = C_m$. \end{proposition}
\begin{proof} By Propositions \ref{p3}(ii) and \ref{p2}(iii), \begin{equation}\label{e9} C=F(\overline P)=F(P_E)=E. \end{equation} Take $x\in C_i$. Let $\ell={\rm argmax}_{1\le j\le m}\{\left\Vert x-P_j(x)\right\Vert \}$, $w=\sum_{j=1}^m\alpha_jP_j(x)=\overline P(x)=P_E(x)$, so that $w\in $ Im$(P_E)=E=C$, using \eqref{e9}, and hence $w\in C_\ell$. It follows that $$ \left\Vert x-P_\ell(x)\right\Vert \le\left\Vert x-w\right\Vert =\left\Vert \sum_{j=1}^m\alpha_j(x-P_j(x))\right\Vert \le\sum_{j=1}^m\alpha_j\left\Vert x-P_j(x)\right\Vert $$ $$ =\sum_{j=1, j\ne i}^m\alpha_j\left\Vert x-P_j(x)\right\Vert \le \sum_{j=1, j\ne i}^m\alpha_j\left\Vert x-P_\ell(x)\right\Vert = $$ \begin{equation}\label{e10} \left(\sum_{j=1, j\ne i}^m\alpha_j\right)\left\Vert x-P_\ell(x)\right\Vert =(1-\alpha_i)\left\Vert x-P_\ell(x)\right\Vert , \end{equation} using the fact that $w\in C_\ell$ in the first inequality, the convexity of $\left\Vert \cdot\right\Vert $ in the second inequality, the fact that $x\in C_i$ in the second equality and the definition of $\ell$ in the third inequality. It follows from \eqref{e10} that $\alpha_i\left\Vert x-P_\ell(x)\right\Vert \le 0$, so that $\left\Vert x-P_\ell(x)\right\Vert =0$. Since $0\le\left\Vert x-P_j(x)\right\Vert \le\left\Vert x-P_\ell(x)\right\Vert $ for all $j$ by definition of $\ell$, we conclude that $\left\Vert x-P_j(x)\right\Vert =0$ for all $j$, i.e., $x\in C_j$ for all $j$. Since $x$ is an arbitrary point in $C_i$, we get that $C_i\subset C_j$ for all $i,j$, i.e., $C_1= \dots =C_m$, and the result follows immediately from \eqref{e9}. \end{proof}
Next, we fully characterize the situation for the case of $2$ convex sets. For $A\subset\mathbb R^n$, we denote the affine hull of $A$ as aff$(A)$.
\begin{proposition}\label{p6} Take closed and convex sets $A,B\subset\mathbb R^n$ and $\alpha\in (0,1)$. Define $\overline P=(1-\alpha)P_A+\alpha P_B$. Then, there exists a nonempty, closed and convex set $E\subset\mathbb R^n$ such that $\overline P=P_E$ if and only if there exists $c\in {\rm aff}(A)^\perp$ such that $B=A+c$. \end{proposition}
\begin{proof} We start with the ``only if" statement. We claim that the result holds with $E=A+\alpha c$. First we prove that $P_B(x)=P_A(x)+c$ for all $x\in\mathbb R^n$. Let $z=P_A(x)+c$, and note that $z\in A+c=B$. By Proposition \ref{p2}(i), it suffices to prove that $\langle x-z, y-z\rangle\le 0$ for all $x\in\mathbb R^n$ and all $y\in B=A+c$, i.e., that for all $y\in A$ we have $$ 0\ge\langle x-z,y+c-z\rangle=\langle x-P_A(x)-c, y+c-P_A(x)-c\rangle $$ \begin{equation}\label{e11} =\langle x-P_A(x), y-P_A(x)\rangle - \langle c,y-P_A(x)\rangle=\langle x-P_A(x),y-P_A(x)\rangle, \end{equation} using in the last equality the facts that $c\in$ aff$(A)^\perp$ and $y,P_A(x)\in A$, so that $y-P_A(x)\in$ aff$(A)$, and hence $\langle c,y-P_A(x)\rangle =0$. Note that $\langle x-P_A(x),y-P_A(x)\rangle\le 0$ by Proposition \ref{p2}(i), so that the inequality in \eqref{e11} holds, and hence we have proved that $P_B(x)=P_A(x)+c$ for all $x\in\mathbb R^n$. It follows that $\overline P=(1-\alpha)P_A+\alpha P_B=(1-\alpha)P_A+\alpha P_A+\alpha c=P_A+\alpha c$.
Now, the same argument used to prove that $P_{A+c}=P_A+c$ allows us to conclude that $P_A+\alpha c=P_{A+\alpha c}$, so that $(1-\alpha)P_A+\alpha P_B=P_E$ with $E=A+\alpha c$.
Now we prove the ``if" statement. First we must identify the appropriate vector $c$. By assumption, $\overline P=P_E$, so that $F(\overline P)=E\ne\emptyset$ by Proposition \ref{p2}(iii). It follows that $D$, as defined in Proposition \ref{p4}, is nonempty. We take any pair $(u,v)\in D$ and set $c=v-u$. By Proposition \ref{p4}(i), $c$ does not depend on the chosen pair $(u,v)$. We must prove that $B=A+c$, and we first claim that \begin{equation}\label{e12} S_B=S_A+c, \end{equation} with $S_A, S_B$ as in Proposition \ref{p4}. Take $v\in S_B$, so that there exists $u\in S_A$ such that $(u,v)\in D$, and hence $v=u+(v-u)=u+c$, showing that $v\in S_A+c$, and therefore $S_B\subset S_A+c$. Reversing the roles of $A,B$ we get the reverse inclusion, and then \eqref{e12} holds.
We show next that the assumption $\overline P=P_E$ implies that $A=S_A, B=S_B$. Take any $x\in A$. We must prove that $x$ realizes the distance to $B$. Let $z=\overline P(x)=(1-\alpha)P_A(x)+\alpha P_B(x)=(1-\alpha)x+\alpha P_B(x)$. It follows from Proposition \ref{p2}(iv) that $P_B(z)=P_B(x)$. Note that \begin{equation}\label{e13} (1-\alpha)(x-P_B(x))=z-P_B(x)=z-P_B(z). \end{equation} Now $z=\overline P(x)=P_E(x)$, so that $z\in E=F(P_E)=F(\overline P)$. By Proposition \ref{p4}(ii) and (iii), $z=(1-\alpha)P_A(z)+\alpha P_B(z)$, with $(P_A(z),P_B(z))\in D$. It follows that \begin{equation}\label{e14} z-P_B(z)=(1-\alpha)(P_A(z)-P_B(z)). \end{equation} Since $\alpha \in (0,1)$, we conclude from \eqref{e13}, \eqref{e14} that $x-P_B(x)=P_A(z)-P_B(z)$, so that, in view of the fact that $(P_A(z),P_B(z))\in D$, $$ {\rm dist}(x,B)=\left\Vert x-P_B(x)\right\Vert =\left\Vert P_A(z)-P_B(z)\right\Vert ={\rm dist}(A,B). $$ We have proved that $x$ realizes the distance between $A$ and $B$, i.e., that $x\in S_A$. Since $x$ is an arbitrary point in $A$, we have $A\subset S_A\subset A$, so that $A=S_A$. By the same token, $B=S_B$. In view of \eqref{e12}, we have that $B=A+c$.
It only remains to be verified that $c\in$ aff$(A)^\perp$. Let relint$(A)$ be the relative interior of $A$ (i.e., the interior of $A$ with respect to aff$(A)$). Take any $x\in$ relint$(A)$ and any $z\in$ aff$(A)$. Since $x\in$ relint$(A)$, there exists $\varepsilon >0$ such that both $x+\varepsilon(z-x)$ and $x-\varepsilon(z-x)$ belong to $A$. Since $x\in A=S_A$, there exists $v\in S_B$ such that $(x,v)\in D$, so that $x=P_A(v)$ and $c=v-x=v-P_A(v)$. Hence, by Proposition \ref{p2}(i), $\langle c,y-x\rangle=\langle v-P_A(v),y-P_A(v)\rangle\le 0$ for all $y\in A$. Taking first $y=x+\varepsilon (z-x)$ and then $y=x-\varepsilon(z-x)$, we conclude that $\varepsilon\langle c,z-x\rangle\le 0$, $-\varepsilon\langle c,z-x\rangle\le 0$, implying that $\langle c,z-x\rangle =0$ for all $z\in$ aff$(A)$, and hence $c\in$ aff($A)^\perp$, completing the proof. \end{proof}
\begin{Cor}\label{c1} Assume that either of the equivalent statements in Proposition \ref{p6} holds and that $A\ne B$. Then $A$ has empty interior and $A\cap B=\emptyset$. \end{Cor}
\begin{proof} Since $B=A+c$ and $A\ne B$, we have $c\ne 0$. Since $c\in$ aff$(A)^\perp$, we obtain that aff$(A)\ne\mathbb R^n$, i.e. aff$(A)$ is not full dimensional and hence $A$ has empty interior.
For the second statement, assume that $A\cap B\ne\emptyset$ and take $x\in A\cap B$. Since $B=A+c$, we have $x=x'+c$ with $x'\in A$, so that $\left\Vert c\right\Vert ^2=\langle c,x-x'\rangle =0$, because $c\in $aff$(A)^\perp$ and $x,x'\in A$, so that $x-x'\in$ aff$(A)$. It follows that $c=0$, and the resulting contradiction entails the result. \end{proof}
We mention that the second statement of the corollary follows also from Proposition \ref{p5}.
The ``only if" statement of Proposition \ref{p6} can be easily generalized to the case of $m$ convex sets; unfortunately we do not have at this point a proof for the much more interesting generalization of the ``if" statement. The following corollary contains the generalization of the ``only if" statement.
\begin{Cor}\label{c2} Consider closed and convex sets $C_1,\dots, C_m\subset\mathbb R^n$ and nonnegative scalars $\alpha_1,\dots ,\alpha_m$ such that $\sum_{i=1}^m\alpha_i=1$. Denote $P_i=P_{C_i}$ and let $\overline P=\sum_{i=1}^m\alpha_iP_i$. Take $\beta_2, \dots ,\beta_m\in\mathbb R, c\in$ aff$(C_1)^\perp$, and assume that $C_i=C_1+\beta_i c$ for $i=2,\dots ,m$. Define $\bar\beta=\sum_{i=2}^m\alpha_i\beta_i, E= C_1+\bar\beta c$. Then $\overline P=P_E$. \end{Cor}
\begin{proof} The argument used in the proof of Proposition \ref{p6} shows that $P_i(x)=P_1(x)+\beta_i c$ for $i=2, \dots ,m$, and all $x\in\mathbb R^n$, so that $\overline P(x)=P_1(x)+\bar\beta c$ for all $x\in\mathbb R^n$. The same argument then shows that $P_1+\bar\beta c=P_E$. \end{proof}
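A concrete instance of Corollary \ref{c2} (with $m=2$) can be checked numerically. In the sketch below (illustrative only; \texttt{proj\_segment} is our ad-hoc helper), $C_1$ is the horizontal segment $[0,1]\times\{0\}$ in $\mathbb R^2$, $c=(0,1)\in{\rm aff}(C_1)^\perp$, and $C_2=C_1+2c$; the convex combination of the two projections should coincide with the projection onto $E=C_1+\bar\beta c$:

```python
import numpy as np

def proj_segment(p, h):
    """Projection onto the segment {(t, h): 0 <= t <= 1} at height h."""
    return np.array([min(max(p[0], 0.0), 1.0), h])

# C1 = [0,1] x {0}; c = (0,1) lies in aff(C1)^perp; C2 = C1 + 2c.
alpha1, alpha2, beta2 = 0.3, 0.7, 2.0
bar_beta = alpha2 * beta2                  # = 1.4, so E = C1 + 1.4c

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-3, 3, size=2)
    Pbar = alpha1 * proj_segment(x, 0.0) + alpha2 * proj_segment(x, 2.0)
    PE = proj_segment(x, bar_beta)
    assert np.allclose(Pbar, PE)           # Pbar = P_E, as the corollary predicts
print("Pbar agrees with P_E on 100 random points")
```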
\section{Convergence of CRM applied to FPP}\label{s3}
In this section, we establish convergence of CRM applied to finding a point in Fix$(T,P_U)$, where $T:\mathbb R^n\to\mathbb R^n$ is firmly nonexpansive and $P_U:\mathbb R^n\to\mathbb R^n$ is the orthogonal projection onto an affine manifold $U\subset\mathbb R^n$. As explained in Section \ref{s1}, through Pierra's formalism in the product space $\mathbb R^{nm}$, this result entails convergence of CRM applied to finding a point in Fix$(T_1, \dots ,T_m)$, where $T_i:\mathbb R^n\to\mathbb R^n$ is firmly nonexpansive for $1\le i\le m$.
Our convergence analysis for CRM requires comparing the CRM and the MAP sequences, so that we start by proving convergence of the second one, defined as \begin{equation}\label{ee14} z^{k+1}=P_U(T(z^k)), \end{equation} starting at some $z^0\in\mathbb R^n$. This is a classical result, but we include it for the sake of self-containment. We start with the following intermediate result.
\begin{proposition}\label{p7} For all $x\in\mathbb R^n$ and all $y\in {\rm Fix}(T,P_U)$ it holds that \begin{equation}\label{e15} \left\Vert P_U(T(x))-y\right\Vert ^2\le\left\Vert x-y\right\Vert ^2-\left\Vert T(x)-x\right\Vert ^2-\left\Vert P_U(T(x))-T(x)\right\Vert ^2. \end{equation} \end{proposition}
\begin{proof} By firm nonexpansiveness of $P_U$, we have \begin{equation}\label{aab11} \left\Vert P_U(x)-y\right\Vert ^2\le\left\Vert x-y\right\Vert ^2-\left\Vert P_U(x)-x\right\Vert ^2 \end{equation} for all $x\in\mathbb R^n$, using the fact that $y\in U$. Substituting $T(x)$ for $x$ in \eqref{aab11}, we obtain \begin{equation}\label{et16} \left\Vert P_U(T(x))-y\right\Vert ^2\le\left\Vert T(x)-y\right\Vert ^2-\left\Vert P_U(T(x))-T(x)\right\Vert ^2 . \end{equation} Since $T$ is firmly nonexpansive, \begin{equation}\label{et162} \left\Vert T(x)-y\right\Vert ^2\le\left\Vert x-y\right\Vert ^2-\left\Vert T(x)-x\right\Vert ^2. \end{equation} Now combining \eqref{et16} with \eqref{et162}, we get \begin{equation}\label{etr}
\left\Vert P_U(T(x))-y\right\Vert ^2\le\left\Vert x-y\right\Vert ^2-\left\Vert T(x)-x\right\Vert ^2-\left\Vert P_U(T(x))-T(x)\right\Vert ^2 \end{equation} which implies the result. \end{proof}
Using Proposition \ref{p7} we get convergence of $\{z^k\}$ using the classical argument for MAP applied to CFP, as we show next:
\begin{proposition}\label{p8} If ${\rm Fix}(T,P_U)\ne\emptyset$, then the sequence $\{z^k\}$ defined by \eqref{ee14} converges to a point $\bar z\in {\rm Fix}(T,P_U)$. \end{proposition}
\begin{proof} Take any $y\in$ Fix$(T,P_U)$. By \eqref{ee14}, $z^{k+1}=P_U(T(z^k))$. Using Proposition \ref{p7}, we get \begin{equation}\label{et17} \left\Vert z^{k+1}-y\right\Vert ^2\le\left\Vert z^k-y\right\Vert ^2-\left\Vert P_U(T(z^k))-T(z^k)\right\Vert ^2-\left\Vert T(z^k)-z^k\right\Vert ^2 \le\left\Vert z^k-y\right\Vert ^2. \end{equation}
It follows from \eqref{et17} that $\left\Vert z^{k+1}-y\right\Vert \le\left\Vert z^k-y\right\Vert $ for all $k\in\mathbb N$, so that $\{z^k\}$ is bounded and $\{\left\Vert z^k-y\right\Vert \}$ is nonincreasing and nonnegative, therefore convergent.
Hence, rewriting \eqref{et17} as $$ \left\Vert P_U(T(z^k))-T(z^k)\right\Vert ^2+\left\Vert T(z^k)-z^k\right\Vert ^2\le\left\Vert z^k-y\right\Vert ^2-\left\Vert z^{k+1}-y\right\Vert ^2, $$ we conclude that \begin{equation}\label{ata18} \lim_{k\to\infty}\left\Vert T(z^k)-z^k\right\Vert =0. \end{equation} Let $\bar z$ be a cluster point of the bounded sequence $\{z^k\}$. Taking limits in \eqref{ata18} along a subsequence converging to $\bar z$, and using the continuity of $T$, resulting from its nonexpansiveness, we get that $T(\bar z)=\bar z$. Since $z^k\in U$ for all $k\in\mathbb N$ by \eqref{ee14}, we have that $\bar z\in U$, so that $\bar z\in$ Fix$(T,P_U)$. Taking now $y=\bar z$ in \eqref{et17}, we conclude that $\{\left\Vert z^k-\bar z\right\Vert \}$ is convergent, and since a subsequence of this sequence converges to $0$, the whole sequence $\{\left\Vert z^k-\bar z\right\Vert \}$ converges to $0$, i.e., $\lim_{k\to\infty}z^k=\bar z\in$ Fix$(T,P_U)$. \end{proof}
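The MAP iteration \eqref{ee14} is straightforward to run numerically. The following Python sketch (an illustration, not part of the formal development; names are ours) takes $T$ as the projection onto a closed ball, a prototypical firmly nonexpansive operator, and $U$ as the horizontal axis in $\mathbb R^2$, so that ${\rm Fix}(T,P_U)$ is the chord $[-\sqrt{3}/2,\sqrt{3}/2]\times\{0\}$; starting from $z^0=(3,0)$, the iterates converge to the endpoint $(\sqrt{3}/2,0)$:

```python
import numpy as np

center, radius = np.array([0.0, 0.5]), 1.0

def T(p):
    """Projection onto the closed ball B(center, radius): firmly nonexpansive."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + radius * d / n

def P_U(p):
    """Projection onto the affine manifold U = {x in R^2 : x_2 = 0}."""
    return np.array([p[0], 0.0])

z = np.array([3.0, 0.0])
for _ in range(100):          # MAP iteration: z^{k+1} = P_U(T(z^k))
    z = P_U(T(z))

# The limit lies in Fix(T, P_U) = ball ∩ U = [-sqrt(3)/2, sqrt(3)/2] x {0}.
print(z, np.linalg.norm(T(z) - z))
```

Here the convergence is linear (with factor $1/4$ near the limit), consistent with the Fej\'er-monotonicity argument above.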
Now we proceed to the convergence analysis of CRM applied to FPP. Let $T:\mathbb R^n\to\mathbb R^n$ be a firmly nonexpansive operator, $U\subset\mathbb R^n$ an affine manifold, and $P_U:\mathbb R^n\to\mathbb R^n$ the orthogonal projection onto $U$. We assume that Fix$(T,P_U)\ne\emptyset$. We denote as $R, R_U$ the reflection operators related to $T,P_U$ respectively, i.e., $R(x)=2T(x)-x, R_U(x)=2P_U(x)-x$. We define $C:\mathbb R^n\to\mathbb R^n$ as the CRM operator, i.e., $C(z)=$ circ$\{z,R(z),R_U(R(z))\}$, where ``circ" denotes the circumcenter of three points, as defined in Section \ref{s1}. We also define $S:\mathbb R^n\to\mathbb R^n$ as $S(x)=P_U(T(x))$, so that $S$ can be seen as the MAP operator.
We will prove that, starting from any initial point $x^0\in U,$ the sequence $\{ x^k\}$ generated by CRM, defined as $x^{k+1}=C(x^k)$, converges to a point in Fix$(T,P_U)$.
Our convergence analysis is close to the one in \cite{ABBIS} for CRM applied to CFP, but with several differences, resulting from the fact that now $T$ is an arbitrary firmly nonexpansive operator, rather than the orthogonal projection onto a convex set. One of the differences is the use of the next property of circumcenters, which will substitute for a specific property of orthogonal projections.
\begin{proposition}\label{p9} For all $x\in\mathbb R^n$, $\langle x-T(x),C(x)-T(x)\rangle =0$. \end{proposition}
\begin{proof} By the definition of the reflection, for all $x\in\mathbb R^n$, \begin{equation}\label{e18} T(x)=\frac{1}{2}(R(x)+x). \end{equation} By the definition of circumcenter, for all $x\in\mathbb R^n$, \begin{equation}\label{e19} \left\Vert C(x)-x\right\Vert ^2=\left\Vert C(x)-R(x)\right\Vert ^2. \end{equation} Expanding \eqref{e19} and rearranging, we get \begin{equation}\label{e20} 2\langle x-R(x),C(x)\rangle=\left\Vert x\right\Vert ^2-\left\Vert R(x)\right\Vert ^2. \end{equation} Subtracting $2\langle x-R(x),T(x)\rangle$ from both sides of \eqref{e20} and using \eqref{e18}, we obtain $$ 4\langle x-T(x),C(x)-T(x)\rangle = 2\langle x-R(x),C(x)-T(x)\rangle=\left\Vert x\right\Vert ^2-\left\Vert R(x)\right\Vert ^2 -2\langle x-R(x),T(x)\rangle $$ $$ =\left\Vert x\right\Vert ^2-\left\Vert R(x)\right\Vert ^2 -\langle x-R(x),x+R(x)\rangle =0, $$ which implies the result. \end{proof}
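Proposition \ref{p9} is easy to verify numerically. In the sketch below (illustrative only; the formal definition of ``circ" in Section \ref{s1} is the authority, and we realize it here, under the assumption of non-collinearity, by solving the $2\times 2$ Gram system for the equidistant point in the affine hull), $T$ is the projection onto a ball and $U$ is the horizontal axis:

```python
import numpy as np

def circumcenter(a, b, c):
    """Point of the affine hull of a, b, c equidistant from the three points
    (assumes a, b, c are not collinear)."""
    v1, v2 = b - a, c - a
    G = np.array([[v1 @ v1, v1 @ v2], [v1 @ v2, v2 @ v2]])
    s, t = np.linalg.solve(G, 0.5 * np.array([v1 @ v1, v2 @ v2]))
    return a + s * v1 + t * v2

center, radius = np.array([0.0, 1.0]), 1.0

def T(p):
    """Projection onto B(center, radius): a firmly nonexpansive operator."""
    d = p - center
    n = np.linalg.norm(d)
    return p if n <= radius else center + radius * d / n

def P_U(p):
    """Projection onto U = {x_2 = 0}."""
    return np.array([p[0], 0.0])

z = np.array([2.0, 0.0])
Rz = 2 * T(z) - z             # reflection in T
RURz = 2 * P_U(Rz) - Rz       # reflection in U
Cz = circumcenter(z, Rz, RURz)
print(np.dot(z - T(z), Cz - T(z)))   # ~ 0, as Proposition p9 asserts
```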
Next we establish a basic property of the circumcenter, which ensures that the CRM sequence, starting at a point in $U$, remains in $U$.
\begin{proposition}\label{pp9} If $z\in U$ then $C(z)\in U$. \end{proposition}
\begin{proof} We consider three cases. If $R(z)\in U$ then $R_U(R(z))=R(z)$, in which case $z,R(z),R_U(R(z))\in U$, so that the affine hull of these three points is contained in $U$. Since by definition $C(z)$ belongs to this affine hull, the result holds.
If $z=P_U(R(z))$ then the affine hull of $\{z,R(z),R_U(R(z))\}$ is the line determined by $z$ and $R(z)$ and $C(z)=$ circ$\{z,R(z),R_U(R(z))\}=P_U(R(z))=z\in U$, so that the result holds.
Assume that $z\ne P_U(R(z))$ and that $R(z)\notin U$. We claim that $C(z)$ belongs to the line $L$ passing through $z$ and $P_U(R(z))$. Observe that, since $\left\Vert C(z)-R(z)\right\Vert =\left\Vert C(z)-R_U(R(z))\right\Vert $, $C(z)$ belongs to the hyperplane $H$ orthogonal to $R(z)-R_U(R(z))$ passing through $\frac{1}{2}\left(R(z)+R_U(R(z))\right)=P_U(R(z))$. Note that $U\subset H$, because $R(z)-R_U(R(z))=2\left(R(z)-P_U(R(z))\right)$ is orthogonal to $U$ and $P_U(R(z))\in U$. On the other hand, by definition, $C(z)$ belongs to the affine manifold $E$ spanned by $z,R(z),R_U(R(z))$, so that $C(z)\in E\cap H$. Since $R(z)\notin U$, we have $R(z)\notin H$, and hence dim$(E\cap H)<$ dim$(E)\le 2$. Now, both $z$ and $P_U(R(z))=\frac{1}{2}\left(R(z)+R_U(R(z))\right)$ belong to $E\cap H$, so that $L\subset E\cap H$, and since $z\ne P_U(R(z))$, a dimensionality argument shows that $L=E\cap H$. It follows that $C(z)\in L$. Since $z,P_U(R(z))$ belong to $U$ and $U$ is affine, we have that $C(z)\in L\subset U$, completing the proof. \end{proof}
We continue with an important intermediate result.
\begin{proposition}\label{p10} Consider the operators $C,S:\mathbb R^n\to\mathbb R^n$ defined above. Then $S(x)$ belongs to the segment between $x$ and $C(x)$ for all $x\in U$. \end{proposition} \begin{proof} Let $E$ denote the affine manifold spanned by $x, R(x)$ and $R_U(R(x))$. By definition, the circumcenter of these three points, namely $C(x)$, belongs to $E$. We claim that $S(x)$ also belongs to $E$. We proceed to prove the claim. Since $U$ is an affine manifold, $P_U$ is an affine operator, so that $P_U(\alpha x+(1-\alpha)x')=\alpha P_U(x)+(1-\alpha)P_U(x')$ for all $\alpha\in\mathbb R$ and all $x,x'\in\mathbb R^n$. Thus $R_U(R(x))=2P_U(R(x))-R(x)$, so that \begin{equation}\label{e21} P_U(R(x))=\frac{1}{2}\left(R_U(R(x))+R(x)\right). \end{equation} On the other hand, using the affinity of $P_U$, the definition of $S$ and the assumption that $x\in U$, we have \begin{equation}\label{e22} P_U(R(x))=P_U(2T(x)-x)=2P_U(T(x))-P_U(x)=2S(x)-x, \end{equation} so that \begin{equation}\label{e23} S(x)=\frac{1}{2}\left(P_U(R(x))+x\right). \end{equation} Combining \eqref{e21} and \eqref{e23}, $$ S(x)=\frac{1}{2}x+\frac{1}{4}R_U(R(x))+\frac{1}{4}R(x), $$ i.e., $S(x)$ is a convex combination of $x, R_U(R(x))$ and $R(x)$. Since these three points belong to $E$, the same holds for $S(x)$ and the claim holds.
We observe now that $x\in U$ by assumption, $S(x)\in U$ by definition, and $C(x)\in U$ by Proposition \ref{pp9}. Now we consider three cases: if dim$(E\cap U)=0$ then $x,S(x)$ and $C(x)$ coincide and the result holds trivially. If dim$(E\cap U)=2$ then $E\subset U$, so that $R(x)\in U$ and hence $R_U(R(x))=R(x)$, in which case $C(x)$ is the midpoint between $x$ and $R(x)$, which is precisely $T(x)$. Hence, $T(x)\in U$, so that $S(x)=P_U(T(x))=T(x)=C(x)$, implying that $S(x)$ and $C(x)$ coincide, and the result holds trivially. The interesting case is the remaining one, i.e., dim$(E\cap U)=1$. In this case $x, S(x)$ and $C(x)$ lie on a line, so that we can write $C(x)=x+\eta(S(x)-x)$ with $\eta\in\mathbb R$, and it suffices to prove that $\eta\ge 1$.
Note that if $S(x)=x$, the result holds trivially, so we assume that $S(x)\ne x$. By the definition of $\eta$, \begin{equation}\label{e24} \left\Vert C(x)-x\right\Vert =\left\vert \eta\right\vert\,\left\Vert S(x)-x\right\Vert . \end{equation} Since $C(x)\in U$, nonexpansiveness of $P_U$ implies that \begin{equation}\label{e25} \left\Vert C(x)-R(x)\right\Vert \ge\left\Vert C(x)-P_U(R(x))\right\Vert . \end{equation} Then $$ \left\Vert C(x)-x\right\Vert =\left\Vert C(x)-R(x)\right\Vert \ge\left\Vert C(x)-P_U(R(x))\right\Vert =\left\Vert \left(C(x)-x\right)-\left(P_U(R(x))-x\right)\right\Vert $$ \begin{equation}\label{e26} =\left\Vert \eta\left(S(x)-x\right)-2\left(S(x)-x\right)\right\Vert =\left\vert \eta-2\right\vert\,\left\Vert S(x)-x\right\Vert , \end{equation} using the definition of the circumcenter in the first equality, \eqref{e25} in the inequality, and the definition of $\eta$ together with \eqref{e22} in the third equality. Combining \eqref{e24} and \eqref{e26}, we get $$ \left\vert \eta\right\vert\,\left\Vert S(x)-x\right\Vert \ge\left\vert \eta-2\right\vert\,\left\Vert S(x)-x\right\Vert , $$ implying that $\left\vert \eta\right\vert\ge\left\vert 2-\eta\right\vert$, which holds only when $\eta\ge 1$, completing the proof. \end{proof}
We continue with a key result for the convergence analysis of CRM, comparing the behavior of the CRM and the MAP operators. Again the argument in this proof differs from the one for CRM applied to CFP, presented in \cite{ABBIS}.
\begin{proposition}\label{p11} With the notation of Proposition \ref{p10}, for all $y\in {\rm Fix}(T,P_U)$ and all $z\in U$, it holds that \begin{itemize} \item[i)] $\left\Vert C(z)-y\right\Vert \le\left\Vert S(z)-y\right\Vert $, \item[ii)] ${\rm dist}\left(C(z),{\rm Fix}(T,P_U)\right)\le {\rm dist}\left(S(z),{\rm Fix}(T,P_U)\right)$, \end{itemize} \end{proposition}
\begin{proof} \begin{itemize} \item[i)] Take $z\in U,y\in$ Fix$(T,P_U)$. If $z \in F(T)$, then the result follows trivially, because then $P_U(T(z))=z=C(z)$ and there is nothing to prove. So, assume that $z\in U\setminus F(T)$. We claim that \begin{equation}\label{e27} \left\Vert P_U(T(z))-z\right\Vert \le\left\Vert T(z)-z\right\Vert \le\left\Vert C(z)-z\right\Vert . \end{equation} For proving the first inequality in \eqref{e27}, we conclude, from the fact that $z\in U$ and an elementary property of orthogonal projections, that \begin{equation}\label{e28} \left\Vert P_U(T(z))-z\right\Vert \le\left\Vert T(z)-z\right\Vert . \end{equation} Since $R(z)=2T(z)-z$, we get that \begin{equation}\label{e29} \left\Vert R(z)-z\right\Vert =2\left\Vert T(z)-z\right\Vert . \end{equation} Using \eqref{e28} and \eqref{e29}, $$ \left\Vert T(z)-z\right\Vert = \frac{1}{2}\left\Vert R(z)-z\right\Vert = \frac{1}{2}\left\Vert (R(z)-C(z)+C(z)-z)\right\Vert $$ \begin{equation}\label{e30} \le\frac{1}{2}\left(\left\Vert R(z)-C(z)\right\Vert +\left\Vert C(z)-z\right\Vert \right) =\frac{1}{2}\left(\left\Vert z-C(z)\right\Vert +\left\Vert C(z)-z\right\Vert \right) =\left\Vert C(z)-z\right\Vert . \end{equation} where the third equality holds because $C(z)$ is equidistant from $z,R(z),$ and $R_U(R(z)).$ The claim follows then from \eqref{e28} and \eqref{e30}.
By Proposition \ref{p10}, $S(z)$ belongs to the segment between $z$ and $C(z)$, i.e., there exists $\alpha\in [0,1]$ such that $S(z)=\alpha C(z)+(1-\alpha)z$, and $\alpha>0$, because $\alpha=0$ would give $S(z)=z$, i.e., $z\in F(S)={\rm Fix}(T,P_U)\subset F(T)$, contradicting the assumption that $z\notin F(T)$. It follows that \begin{equation}\label{e31} S(z)-C(z)=\dfrac{1-\alpha}{\alpha}(z-S(z)).
\end{equation} Note that \begin{equation}\label{e32} \langle z-S(z),C(z)-y\rangle =\langle z-T(z),C(z)-T(z)\rangle+\langle z-T(z),T(z)-y\rangle+
\langle T(z)-S(z),C(z)-y\rangle. \end{equation} Now we look at the three terms in the right hand side of \eqref{e32}. The first one vanishes as a consequence of Proposition \ref{p9}. The third one vanishes because $S(z)=P_U(T(z))$, and $U$ is an affine manifold, so that $T(z)-S(z)$ is orthogonal to any vector in $U$, as is the case for $C(z)-y$, since $y\in U$ by assumption and $C(z)\in U$ by Proposition \ref{pp9}. The second term is nonnegative by Proposition \ref{pp1}. It follows hence from \eqref{e32} that \begin{equation}\label{e35} \langle z-S(z),C(z)-y\rangle\ge 0. \end{equation}
Now, \eqref{e35} together with \eqref{e31} gives us
\begin{equation}\label{e36}
\langle S(z)-C(z),y-C(z)\rangle=\frac{1-\alpha}{\alpha}\langle z-S(z),y-C(z)\rangle\le 0.
\end{equation}
It follows from \eqref{e36} that $\left\Vert C(z)-y\right\Vert \le\left\Vert S(z)-y\right\Vert $ for all $y\in$ Fix$(T,P_U)$ and all $z\in U$, establishing (i).
\item[ii)] Let $\bar{z}, \hat{z}\in$ Fix$(T,P_U)$ realize the distance from $C(z),S(z)$ to Fix$(T,P_U)$ respectively. Then, in view of (i), $$ {\rm dist}(C(z),{\rm Fix}(T,P_U))=\left\Vert C(z)-\bar z\right\Vert \le\left\Vert C(z)-\hat z\right\Vert \le\left\Vert S(z)-\hat z\right\Vert ={\rm dist}(S(z),{\rm Fix}(T,P_U)) $$ proving (ii). \end{itemize} \end{proof}
Next we complete the convergence analysis of CRM applied to FPP. Here again, the line of proof differs from the one in \cite{ABBIS}, where a specific property of orthogonal projections was used to characterize $C(z)$ as the projection onto a certain set, which does not work when $T$ is an arbitrary firmly nonexpansive operator.
\begin{Thm}\label{t1} Let $T:\mathbb R^n\to\mathbb R^n$ be a firmly nonexpansive operator and $U\subset\mathbb R^n$ an affine manifold. Assume that {\rm Fix}$(T,P_U)\ne\emptyset$. Let $\{x^k\}$ be the sequence generated by CRM for solving FPP$(T,P_U)$, i.e., $x^{k+1}=C(x^k)$. If $x^0\in U$, then $\{x^k\}$ is contained in $U$ and converges to a point in {\rm Fix}$(T,P_U)$. \end{Thm}
\begin{proof} The fact that $\{x^k\}\subset U$ results from invoking Proposition \ref{pp9} in an inductive way, starting with the assumption that $x^0\in U$.
Take any $y\in$ Fix$(T,P_U)$. Then, \begin{equation}\label{e37} \left\Vert x^{k+1}-y\right\Vert ^2=\left\Vert C(x^k)-y\right\Vert ^2 \le\left\Vert S(x^k)-y\right\Vert ^2\le\left\Vert x^k-y\right\Vert ^2-\left\Vert S(x^k)-x^k\right\Vert ^2, \end{equation} where the first inequality follows from Proposition \ref{p11}(i), and the second one follows from Proposition \ref{p7}, since $P_U(x^k)=x^k$ by Proposition \ref{pp9} and $S=P_U\circ T$.
Inequality \eqref{e37} says that $\{x^k\}$ is Fej\'er monotone with respect to Fix$(T,P_U)$, and the remainder of the proof is standard. By \eqref{e37},
$\{x^k\}$ is bounded and $\{\left\Vert x^k-y\right\Vert \}$ is nonincreasing and nonnegative, hence convergent, for all $y\in$ Fix$(T,P_U)$. It follows also from \eqref{e37} that \begin{equation}\label{e38} \lim_{k\to\infty} S(x^k)-x^k=0. \end{equation} Let $\bar x$ be any cluster point of $\{x^k\}$. Taking limits in \eqref{e38} along a subsequence converging to $\bar x$, we conclude that $S(\bar x)=\bar x$, i.e., $\bar x\in F(S)=$ Fix$(T,P_U)$, so that all cluster points of $\{x^k\}$ belong to Fix$(T,P_U)$. Looking now at \eqref{e37} with $\bar x$ substituting for $y$, we get that $\{\left\Vert x^k-\bar x\right\Vert \}$ is a nonincreasing sequence with a subsequence converging to $0$, so that the whole sequence $\{\left\Vert x^k-\bar x\right\Vert \}$ converges to $0$. It follows that $\bar x$ is the unique cluster point of $\{x^k\}$, so that $\lim_{k\to\infty}x^k=\bar x\in$ Fix$(T,P_U)$. \end{proof}
For future reference, we state the Fej\'er monotonicity of $\{x^k\}$ with respect to Fix$(T,P_U)$ as a corollary. \begin{Cor}\label{c3} With the notation of Theorem \ref{t1}, $\left\Vert x^{k+1}-y\right\Vert ^2 \le\left\Vert x^k-y\right\Vert ^2-\left\Vert S(x^k)-x^k\right\Vert ^2$ for all $y\in {\rm Fix}(T,P_U)$ and all $k\in\mathbb N$. \end{Cor} \begin{proof} The result follows from \eqref{e37}. \end{proof}
\section{Linear convergence of CRM applied to FPP under an error bound condition}\label{s4}
In \cite{ABBIS}, when dealing with CFP with two convex sets, namely $K,U$, the following {\it global error bound}, which we will call {\bf EB1}, was considered:
\noindent {\bf EB1}: There exists $\bar\omega >0$ such that ${\rm dist}(x,K)\ge\bar\omega\, {\rm dist}(x,K\cap U)$ for all $x\in U$.
Let us comment on the connection between {\bf EB1} and other notions of error bounds which have been introduced in the past, all of them related to regularity assumptions imposed on the solutions of certain problems. If the problem at hand consists of solving $H(x)=0$ with a smooth $H:\mathbb R^n\to\mathbb R^m$, a classical regularity condition demands that $m=n$ and the Jacobian matrix of $H$ be nonsingular at a solution $x^*$, in which case Newton's method, for instance, is known to enjoy superlinear or quadratic convergence. This condition implies local uniqueness of the solution $x^*$. For problems with nonisolated solutions, a less demanding assumption is the notion of {\it calmness} (see \cite{RoW}, Chapter 8, Section F), which requires that \begin{equation}\label{e39} \frac{\left\Vert H(x)\right\Vert }{{\rm dist}(x,S^*)}\ge\omega \end{equation} for all $x\in\mathbb R^n\setminus S^*$ and some $\omega>0$, where $S^*$ is the solution set, i.e., the set of zeros of $H$. Calmness, also called upper-Lipschitz continuity (see \cite{Rob}), is a classical example of an error bound, and it holds in many situations, e.g., when $H$ is affine, by virtue of Hoffman's Lemma (see \cite{Hof}). It implies that the solution set is locally a Riemannian manifold (see \cite{BeI}), and it has been used for establishing superlinear convergence of Levenberg-Marquardt methods in \cite{KYF}.
When dealing with convex feasibility problems, it seems reasonable to replace the numerator of \eqref{e39} by the distance from $x$ to some of the convex sets, as was done, for instance, in \cite{ABBIS}, giving rise to {\bf EB1}. In \cite{ABBIS}, it was proved that under {\bf EB1}, MAP converges linearly, with asymptotic constant bounded above by $\sqrt{1-\bar\omega^2}$, and that CRM also converges linearly, with a better upper bound for the asymptotic constant, namely $\sqrt{(1-\bar\omega^2)/(1+\bar\omega^2)}$. In this section we will prove that in the FPP case both sequences converge linearly, with asymptotic constant bounded by $\sqrt{1-\bar\omega^2}$.
In the case of FPP, dealing with a firmly nonexpansive $T:\mathbb R^n\to\mathbb R^n$, and an affine manifold $U\subset\mathbb R^n$, the appropriate error bound turns out to be:
\noindent {\bf EB}: There exists $\omega >0$ such that $\left\Vert x-T(x)\right\Vert \ge\omega\, {\rm dist}(x,{\rm Fix}(T,P_U))$ for all $x\in U$.
We mention here that it suffices to consider an error bound less demanding than {\bf EB}, namely a local one, where the inequality above is required to hold only for points in $U\cap V$, where $V$ is a given set, e.g., a ball around the limit of the sequence generated by the algorithm, assumed to be convergent. An error bound of this type was used in \cite{AABBIS}. We refrain from doing so just for the sake of a simpler exposition.
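As a quick sanity check (not taken from the paper), condition {\bf EB} can be verified directly on a toy instance: take the firmly nonexpansive operator $T(x)=x/2$ on $\mathbb R^n$ and $U=\mathbb R^n$, so that ${\rm Fix}(T,P_U)=\{0\}$ and $\left\Vert x-T(x)\right\Vert =\left\Vert x\right\Vert /2$, i.e., {\bf EB} holds with $\omega=1/2$. The sketch below checks this numerically; the operator and the value of $\omega$ are illustrative assumptions.

```python
import numpy as np

def T(x):
    # Toy firmly nonexpansive operator (illustrative): T = I/2 is the
    # resolvent of the identity, hence firmly nonexpansive, with F(T) = {0}.
    return 0.5 * x

omega = 0.5  # EB constant for this toy instance
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(5)
    # EB: ||x - T(x)|| >= omega * dist(x, Fix(T, P_U)), with Fix = {0} here
    assert np.linalg.norm(x - T(x)) >= omega * np.linalg.norm(x) - 1e-12
```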
\begin{proposition}\label{p12} Let $T:\mathbb R^n\to\mathbb R^n$ be a firmly nonexpansive operator, $U\subset\mathbb R^n$ an affine manifold and $C,S:\mathbb R^n\to\mathbb R^n$ the CRM and the MAP operators respectively. Assume that {\rm Fix}$(T,P_U)\ne\emptyset$ and that {\bf EB} holds. Then
\begin{equation}\label{e40} {\rm dist}(C(x),{\rm Fix}(T,P_U))^2\le {\rm dist}(S(x),{\rm Fix}(T,P_U))^2\le (1-\omega^2){\rm dist}(x,{\rm Fix}(T,P_U))^2,
\end{equation}
for all $x\in U$, with $\omega$ as in {\bf EB}. \end{proposition} \begin{proof}
First note that if $x\in F(T)$, then \eqref{e40} holds trivially, so that we assume from now on that $T(x)\ne x$. Take any $y\in$ Fix$(T,P_U)$. Since $T$ is firmly nonexpansive and $y\in F(T)$, we have \begin{equation}\label{e41} \left\Vert x-y\right\Vert ^2\ge\left\Vert T(x)-T(y)\right\Vert ^2+\left\Vert (x-y)-(T(x)-T(y))\right\Vert ^2=\left\Vert T(x)-y\right\Vert ^2+\left\Vert x-T(x)\right\Vert ^2. \end{equation} We now take a specific point in Fix$(T,P_U)$, namely $\bar y=P_{{\rm Fix}(T,P_U)}(x)$, and rewrite {\bf EB} as \begin{equation}\label{e42} \left\Vert x-T(x)\right\Vert ^2\ge\omega^2\left\Vert x-\bar y\right\Vert ^2. \end{equation} Combining \eqref{e41} and \eqref{e42}, we get \begin{equation}\label{e43} \left\Vert x-\bar y\right\Vert ^2\ge\left\Vert x- T(x)\right\Vert ^2+\left\Vert T(x)-\bar y\right\Vert ^2 \ge\omega^2\left\Vert x-\bar y\right\Vert ^2+\left\Vert T(x)-\bar y\right\Vert ^2. \end{equation} Rearranging \eqref{e43}, we conclude that \begin{equation}\label{e44} (1-\omega^2) \left\Vert x-\bar y\right\Vert ^2\ge\left\Vert T(x)-\bar y\right\Vert ^2. \end{equation}
Note that $\langle T(x)-S(x),\bar y-T(x)\rangle=\langle T(x)-P_U(T(x)),\bar y-T(x)\rangle\le 0$, by an elementary property of orthogonal projections, since $\bar y\in U$. Hence, \begin{equation}\label{e45} \left\Vert T(x)-\bar y\right\Vert ^2\ge\left\Vert T(x)-S(x) \right\Vert ^2+\left\Vert S(x)-\bar y\right\Vert ^2. \end{equation} Let $\hat y=P_{{\rm Fix}(T,P_U)}(S(x))$. From \eqref{e44} and \eqref{e45} we obtain \begin{equation}\label{e46} (1-\omega^2)\left\Vert x-\bar y\right\Vert ^2\ge\left\Vert T(x)-\bar y\right\Vert ^2\ge\left\Vert T(x)-S(x)\right\Vert ^2+\left\Vert S(x)-\bar y\right\Vert ^2 \ge\left\Vert S(x)-\bar y\right\Vert ^2\ge\left\Vert S(x)-\hat y\right\Vert ^2, \end{equation} where the second inequality holds by \eqref{e45} and the last one follows from the definition of orthogonal projection. From \eqref{e46} we conclude, recalling the definitions of $\bar y,\hat y$, that \begin{equation}\label{e47} {\rm dist}(S(x),{\rm Fix}(T,P_U))^2\le(1-\omega^2){\rm dist}(x,{\rm Fix}(T,P_U))^2, \end{equation} which shows that the second inequality in \eqref{e40} holds. Next we look at the first one. Let $\tilde y=P_{{\rm Fix}(T,P_U)}(C(x))$. We have that \begin{equation}\label{e48} \left\Vert C(x)-\tilde y\right\Vert ^2\le\left\Vert C(x)-\hat y\right\Vert ^2\le\left\Vert S(x)-\hat y\right\Vert ^2 \le\left\Vert S(x)-\bar y\right\Vert ^2 \le(1-\omega^2)\left\Vert x-\bar y\right\Vert ^2, \end{equation} where the first and the third inequality hold by the definition of orthogonal projection, the second one from Proposition \ref{p11}(i) and the last one holds by \eqref{e46}. Note that the first inequality in \eqref{e40} follows immediately from \eqref{e48}, in view of the definitions of $\tilde y, \bar y$. \end{proof}
\begin{Cor}\label{c4}
Under the assumptions of Proposition \ref{p12}, let $\{z^k\},\{x^k\}$ be the sequences generated by MAP and CRM respectively, for solving FPP$(T,P_U)$, i.e., $z^{k+1}=S(z^k),$ and $x^{k+1}=C(x^k),$ starting from some $z^0\in \mathbb R^n$ and $x^0\in U$. Then the scalar sequences $\{a^k\}, \{b^k\}$, defined as $a^k={\rm dist}(z^k,{\rm Fix}(T,P_U))$ and $b^k={\rm dist}(x^k,{\rm Fix}(T,P_U))$, converge Q-linearly to zero with asymptotic constants bounded above by $\sqrt{1-\omega^2},$ with $\omega$ as in {\bf EB}.
\end{Cor} \begin{proof} It follows from \eqref{e40} that, for all $x\in U$, \begin{equation}\label{e49} {\rm dist}(S(x),{\rm Fix}(T,P_U))^2\le(1-\omega^2){\rm dist}(x,{\rm Fix}(T,P_U))^2, \end{equation} and that, for all $x\in U$, \begin{equation}\label{e50} {\rm dist}(C(x),{\rm Fix}(T,P_U))^2\le(1-\omega^2){\rm dist}(x,{\rm Fix}(T,P_U))^2. \end{equation} In view of the definitions of $\{x^k\}, \{z^k\}$, and remembering that both sequences are contained in $U$, by Proposition \ref{pp9} in the case of $\{x^k\}$ and by definition of $S$ in the case of $\{z^k\}$, we get from \eqref{e49}, \eqref{e50}, \begin{equation}\label{e51} \frac{{\rm dist}(z^{k+1},{\rm Fix}(T,P_U))}{{\rm dist}(z^k,{\rm Fix}(T,P_U))}\le\sqrt{1-\omega^2}, \end{equation} \begin{equation}\label{e52} \frac{{\rm dist}(x^{k+1},{\rm Fix}(T,P_U))}{{\rm dist}(x^k,{\rm Fix}(T,P_U))}\le\sqrt{1-\omega^2}. \end{equation} The result follows immediately from \eqref{e51}, \eqref{e52}. \end{proof}
Note that the results of Corollary \ref{c4} do not entail immediately that the sequences $\{x^k\}, \{z^k\}$ themselves converge linearly; a sequence $\{y^k\}$ may converge to a point $y\in M\subset\mathbb R^n$, in such a way that $\{{\rm dist}(y^k,M)\}$ converges linearly to $0$ but $\{y^k\}$ itself converges sublinearly. Take for instance $M=\{(s,0)\in\mathbb R^2\}$, $y^k=\left(1/k,2^{-k}\right)$. This sequence converges to $0\in M$, ${\rm dist}(y^k,M)=2^{-k}$ converges linearly to $0$ with asymptotic constant equal to $1/2$, but the first component of $y^k$ converges to $0$ sublinearly, and hence the same holds for the sequence $\{y^k\}$. The next well known lemma establishes that this situation cannot occur when $\{y^k\}$ is Fej\'er monotone with respect to $M$, i.e., $\left\Vert y^{k+1}-y\right\Vert \le\left\Vert y^k-y\right\Vert $ for all $y\in M$.
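This example is easy to verify numerically. The sketch below (illustrative only) computes the successive ratios for $y^k=(1/k,2^{-k})$: the distances to $M$ decrease with exact ratio $1/2$, while the ratios of the first components tend to $1$.

```python
import numpy as np

# y^k = (1/k, 2^{-k}) converges to 0 in M = {(s, 0)}, the horizontal axis.
ks = np.arange(1, 40)
dist_to_M = 2.0 ** (-ks)   # dist(y^k, M) = |second component| = 2^{-k}
first_comp = 1.0 / ks      # first component, converging only sublinearly

dist_ratios = dist_to_M[1:] / dist_to_M[:-1]    # exactly 1/2: Q-linear
comp_ratios = first_comp[1:] / first_comp[:-1]  # k/(k+1) -> 1: sublinear

assert np.allclose(dist_ratios, 0.5)
assert comp_ratios[-1] > 0.97
```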
\begin{Lem}\label{l1} Consider $M\subset \mathbb R^n$, $\{y^k\}\subset\mathbb R^n$. Assume that $\{y^k\}$ is Fej\'er monotone with respect to $M$, and that ${\rm dist}(y^k,M)$ converges R-linearly to $0$. Then $\{y^k\}$ converges R-linearly to some point $y^*\in M$, with asymptotic constant bounded above by the asymptotic constant of $\{{\rm dist}(y^k,M)\}$. \end{Lem}
\begin{proof} See, e.g., Lemma 1 in \cite{ABBIS}. \end{proof}
We show next that the sequences $\{x^k\}$ and $\{z^k\}$ are R-linearly convergent under Assumption {\bf EB}, with asymptotic constants bounded by $\sqrt{1-\omega^2}$, where $\omega$ is the {\bf EB} parameter.
\begin{Thm}\label{t2} Let $T:\mathbb R^n\to\mathbb R^n$ be a firmly nonexpansive operator and $U\subset\mathbb R^n$ an affine manifold. Assume that Fix$(T,P_U)\ne\emptyset$ and that condition {\bf EB} holds. Consider the sequences $\{z^k\},\{x^k\}$ generated by MAP and CRM respectively, for solving FPP$(T,P_U)$, i.e., $z^{k+1}=S(z^k)$ and $x^{k+1}=C(x^k)$, starting from some $z^0\in \mathbb R^n$ and some $x^0\in U$. Then both sequences converge R-linearly to points in ${\rm Fix}(T,P_U)$, with asymptotic constants bounded above by $\sqrt{1-\omega^2},$ with $\omega$ as in assumption {\bf EB}. \end{Thm} \begin{proof} By Corollary \ref{c4}, both scalar sequences $a^k={\rm dist}(z^k,{\rm Fix}(T,P_U))$ and $b^k={\rm dist}(x^k,{\rm Fix}(T,P_U))$ are Q-linearly convergent to $0$ with asymptotic constant bounded above by $\sqrt{1-\omega^2}<1$, and hence R-linearly convergent to $0$, with the same asymptotic constant. By Corollary \ref{c3}, the sequence $\{x^k\}$ is Fej\'er monotone with respect to Fix$(T,P_U)$, and the same holds for the sequence $\{z^k\}$, in view of \eqref{et17}. By Theorem \ref{t1}, both sequences converge to points in Fix$(T,P_U)$. Finally, by Lemma \ref{l1}, both sequences converge R-linearly to their limit points in ${\rm Fix}(T,P_U)$, with asymptotic constants bounded by $\sqrt{1-\omega^2}$. \end{proof}
We mention that in \cite{ABBIS} we showed that for CFP under {\bf EB1}, CRM achieves an asymptotic constant of linear convergence better than MAP's. We have not been able to prove such superiority in the case of FPP. However, the numerical results exhibited in Section \ref{s5} strongly suggest that the asymptotic constant of CRM is indeed better than the MAP one. The task of establishing such theoretical superiority is left as an open problem.
\section{Numerical experiments}\label{s5}
We report here numerical comparisons between CRM and PPM for solving FPP with $p$ firmly nonexpansive operators.
All operators in this section belong to the family studied in Section \ref{s2}, i.e., they are convex combinations of orthogonal projections onto a finite number of closed and convex sets with nonempty intersection. In view of Proposition \ref{p3}(ii), these operators are ensured to have fixed points. Moreover, in view of Proposition \ref{p5}, they are not orthogonal projections themselves.
The construction of the problems is as follows: for each instance we choose randomly a number $r\in\{3,4,5\}$ ($r$ is the number of convex sets in the convex combination). Then we sample values $\lambda_1, \dots, \lambda_r\in(0,1)$ with uniform distribution. We define $\mu_i=\lambda_i/\left(\sum_{\ell=1}^r\lambda_\ell\right)$, and we take the firmly nonexpansive operator $T$ as $T=\sum_{i=1}^r\mu_iP_{\mathcal{E}_i}$, where $\mathcal{E}_i$ is an ellipsoid and $P_{\mathcal{E}_i}$ is the orthogonal projection onto it.
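The operator $T$ above can be sketched in code. Since projecting onto an ellipsoid requires an inner solver (the ADMM subroutine described below), the sketch uses Euclidean balls instead, whose projections are available in closed form; the balls, radii and dimension are illustrative assumptions, not the paper's actual test data.

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_ball(x, center, radius):
    # Closed-form orthogonal projection onto a Euclidean ball (a stand-in
    # for the ellipsoid projections, which the paper computes via ADMM).
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + radius * d / nd

r = 4  # number of sets in the convex combination (illustrative)
centers = [rng.standard_normal(3) for _ in range(r)]
radii = [np.linalg.norm(c) + 1.0 for c in centers]  # every ball contains 0

lam = rng.uniform(size=r)
mu = lam / lam.sum()   # mu_i = lambda_i / sum_l lambda_l

def T(x):
    # Convex combination of projections: firmly nonexpansive, with fixed
    # point set equal to the (nonempty) intersection of the balls.
    return sum(m * proj_ball(x, c, rad) for m, c, rad in zip(mu, centers, radii))

assert abs(mu.sum() - 1.0) < 1e-12
assert np.allclose(T(np.zeros(3)), np.zeros(3))  # 0 is a common point
```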
The ellipsoid $\mathcal{E}_i$ is of the form $\mathcal{E}_i:=\{x\in\mathbb R^n:g_i(x)\le 0\}$, where $g_i:\mathbb R^n\to\mathbb R$ is given as $g_i(x)= x^\top A_ix +2 (b^i)^\top x-\alpha_i$, with $A_i\in\mathbb R^{n\times n}$ symmetric positive definite, $b^i\in\mathbb R^n$ and $0<\alpha_i\in\mathbb R$.
Each matrix $A_i$ is of the form $A_i = \gamma I+B_i^\top B_i$, with $B_i \in\mathbb R^{n\times n}$, $\gamma \in \mathbb R_{++}$, where $I$ stands for the identity matrix. The matrix $B_i$ is a sparse matrix sampled from the standard normal distribution with sparsity density $p=2 n^{-1}$, and each vector $b^i$ is sampled from the uniform distribution on $[0,1]$. We then choose each $\alpha_i$ so that $\alpha_i > (b^i)^\top A_ib^i$, which ensures that $0$ belongs to every $\mathcal{E}_i$, so that the intersection of the ellipsoids is nonempty. As explained above, this ensures that each instance of FPP has solutions.
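Under these assumptions, the generation of one matrix $A_i$ can be sketched as follows (the dimension, $\gamma$ and the random seed are arbitrary choices for illustration); positive definiteness is immediate since $B_i^\top B_i$ is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(2)
n, gamma = 50, 1.0

# Sparse B with density 2/n: keep each standard-normal entry with prob 2/n.
mask = rng.uniform(size=(n, n)) < 2.0 / n
B = rng.standard_normal((n, n)) * mask

A = gamma * np.eye(n) + B.T @ B   # symmetric, all eigenvalues >= gamma > 0

assert np.allclose(A, A.T)
assert np.linalg.eigvalsh(A).min() >= gamma - 1e-8
```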
In order to compute the projection onto the ellipsoids we use a version of the Alternating Direction Method of Multipliers (ADMM) suited for this purpose; see \cite{JCH}. The stopping criterion for ADMM is as follows: we stop the ADMM iterative process when the norm of the difference between 2 consecutive ADMM iterates is less than $10^{-8}$. We also fix a maximum number of $10\, 000$ ADMM iterations.
For CRM, we use Pierra's product space reformulation, as explained in Section \ref{s1}. We implement PPM directly from its definition (see Section \ref{s1}). The stopping criterion for both CRM and PPM is similar to the one for the ADMM subroutine, but with a different tolerance: the iterative process stops when the norm of the difference between 2 consecutive CRM or PPM iterates is less than $10^{-6}$. The maximum number of iterations is fixed at $50\, 000$ for both algorithms.
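The iterate-until-small-change logic common to both solvers can be sketched for MAP, $z^{k+1}=S(z^k)=P_U(T(z^k))$, on a toy instance: here $U$ is a hypothetical coordinate hyperplane and $T$ averages the identity with the projection onto the unit ball, both illustrative stand-ins (the CRM circumcenter step and Pierra's product space reformulation are omitted).

```python
import numpy as np

def P_U(x):
    # Projection onto the illustrative affine manifold U = {x : x[-1] = 0}.
    y = x.copy()
    y[-1] = 0.0
    return y

def T(x):
    # Toy firmly nonexpansive operator: the average of the identity and
    # the projection onto the closed unit ball.
    n = np.linalg.norm(x)
    px = x if n <= 1.0 else x / n
    return 0.5 * (x + px)

z = np.full(4, 10.0)
for k in range(50_000):                   # iteration cap, as in the experiments
    z_new = P_U(T(z))                     # MAP step: S = P_U o T
    if np.linalg.norm(z_new - z) < 1e-6:  # stop on small successive change
        z = z_new
        break
    z = z_new

assert z[-1] == 0.0                       # the limit lies in U
assert np.linalg.norm(z) <= 1.0 + 1e-3    # ... and (nearly) in the unit ball
```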
The experiments consist of solving, with CRM and PPM, $250$ instances of FPP selected as follows. We consider the following values for the dimension $n$: $\{10,30, 50, 100, 200\}$, and for each $n$ we take $p$ firmly nonexpansive operators with $p\in\{10,25,50,100,200\}$. For each of these 25 pairs $(n,p)$, we randomly generate 10 instances of FPP with the above explained procedure.
The initial point $x^0$ is of the form $(\eta,\dots,\eta)\in\mathbb R^n$, with $\eta<0$ and $\left\vert \eta\right\vert$ sufficiently large so as to guarantee that $x^0$ is far from all the ellipsoids.
The computational experiments were carried out on an Intel Xeon W-2133 3.60GHz with 32GB of RAM running Ubuntu 20.04. We implemented all experiments in the Julia programming language, v1.6 (see \cite{BEKS}). The codes of our experiments are fully available at \url{https://github.com/Mirza-Reza/FPP}.
We report in Table \ref{table:fneellipsoidd_all} the following descriptive statistics for CRM and PPM: mean, maximum (max), minimum (min) and standard deviation (std) for iteration count (it) and CPU time in seconds (CPU (s)). In particular, the ratio of the CPU time (averaged over all instances) of PPM with respect to CRM is $7.69$, meaning that CRM is, on average, almost eight times faster than PPM.
\begin{table}[ht]
\centering
\caption{Statistics for all instances, reporting number of iterations and CPU time}
\label{table:fneellipsoidd_all}
\sisetup{ table-parse-only, table-figures-decimal = 4, table-format = +3.4e-1, table-auto-round,
}
\begin{tabular}{lrSccS}
\toprule \textbf{Method} & & {\textbf{mean}} & {\textbf{max}} & {\textbf{min}} & {\textbf{std}} \\ \cmidrule(lr){3-6}
CRM & \texttt{it} & 144.288 & 554 &23 & 95.2581 \\
& \texttt{CPU(s)} & \num{14.6048} & \num{120.3020} & \num{0.2729} &\num{ 22.4890} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 5977.352 & 25000 &209 & 6385.9388 \\
& \texttt{CPU(s)} & \num{ 112.3315} & 1085.9685& \num{1.2483} & \num{190.3078} \\
\bottomrule \end{tabular} \end{table}
We report next similar statistics, but separately for each dimension $n$. Looking at Table \ref{table:fneellipsoid_n}, we observe that the CPU time for PPM grows linearly with the dimension $n$, while the growth of the CRM CPU time is somewhat higher than linear. As a consequence, the superiority of CRM over PPM, measured in terms of the quotient between the PPM CPU time and the CRM CPU time, is slightly decreasing with $n$: it goes from a ratio of $9.17$ for $n=10$ to a ratio of $7.56$ for $n=200$. This said, it is clear that CRM vastly outperforms PPM in terms of CPU time for all the values of $n$ tested in our experiments.
\begin{table}[ht]
\centering
\caption{Statistics for instances of each dimension $n$, reporting number of iterations and CPU time}
\label{table:fneellipsoid_n}
\sisetup{ table-parse-only, table-figures-decimal = 2, table-format = +3.4e-1, table-auto-round,
}
\begin{tabular}{lcSccS}
\toprule \textbf{Method} & & {\textbf{mean}} & {\textbf{max}} & {\textbf{min}} & {\textbf{std}} \\ \cmidrule(lr){3-6}
CRM & \texttt{it} & 141.84 & 512 & 28 & 99.10284758774593 \\ $n=10$& \texttt{CPU(s)} & \num{2.3247} & \num{6.8150} & \num{0.2729} &\num{ 1.9756} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 6024.54 & 19163 &209 & 6425.574393655403 \\
$n=10$& \texttt{CPU(s)} & \num{21.3369} & 92.19569& \num{1.2483} & \num{22.4132} \\ \bottomrule CRM & \texttt{it} & 153.5 & 526 & 46 & 92.21502046846815 \\
$n=30$& \texttt{CPU(s)} & \num{ 4.6989} & \num{16.6523} & \num{0.7607} &\num{ 4.1754} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 5608.44 & 18353 &500 & 5956.758800421585 \\
$n=30$& \texttt{CPU(s)} & \num{42.9296} & 174.9737& \num{2.9861} & \num{46.2969} \\ \bottomrule \\ CRM & \texttt{it} &129.5 & 469 & 23 & 91.70523431080693 \\
$n=50$& \texttt{CPU(s)} & \num{6.8152} & \num{17.1668} & \num{1.0480} &\num{ 5.0391} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 5288.52 & 24680 &423 & 5548.505204971876 \\
$n=50$& \texttt{CPU(s)} & \num{53.3709} & 222.7307& \num{3.5744} & \num{55.7054} \\ \bottomrule \\ CRM & \texttt{it} & 152.04 & 399 & 28 & 84.19238920472563 \\
$n=100$& \texttt{CPU(s)} & \num{15.5937} & \num{41.2581} & \num{1.9661} &\num{ 12.4246} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 7224.42 & 21978 &540 & 7663.860453035403 \\
$n=100$& \texttt{CPU(s)} & \num{114.4037} & 428.8247& \num{6.3108} & \num{108.4765} \\ \bottomrule \\ CRM & \texttt{it} & 144.56 & 554 & 42 & 105.72438886084895 \\
$n=200$& \texttt{CPU(s)} & \num{43.5915} & \num{ 120.3019} & \num{ 5.0157} &\num{ 34.3053} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 5740.84 & 22378 &370 & 5948.570740472034 \\
$n=200$& \texttt{CPU(s)} & \num{ 329.6167} & 1085.9685& \num{19.0842} & \num{315.8783} \\ \bottomrule \end{tabular} \end{table}
Next, we report similar statistics, but separately for problems involving $p$ firmly nonexpansive operators, for each value of $p$. Table \ref{table:fneellipsoid-Random p} indicates that both the CRM and the PPM CPU times grow slightly less than linearly in $p$, the number of firmly nonexpansive operators in each instance of FPP, but the growth in both cases seems to become linear for $p\ge 50$. Consistent with this behavior, the ratio between the PPM CPU time and the CRM CPU time is about $3$ for $p=10,25$ and about $8$ for $p=50,100,200$. Again, for all values of $p$, CRM turns out to be highly superior to PPM in terms of CPU time.
\begin{table}[ht]
\centering
\caption{Statistics for instances of FPP problems with $p$ firmly nonexpansive operators, reporting number of iterations and CPU time}
\label{table:fneellipsoid-Random p}
\sisetup{ table-parse-only, table-figures-decimal = 2, table-format = +3.4e-1, table-auto-round,
}
\begin{tabular}{lcSccS}
\toprule \textbf{Method} & & {\textbf{mean}} & {\textbf{max}} & {\textbf{min}} & {\textbf{std}} \\ \cmidrule(lr){3-6}
CRM & \texttt{it} & 91.0 & 263 & 28 & 50.174495513158874 \\
$p=10$& \texttt{CPU(s)} & \num{ 2.8569} & \num{13.1619} & \num{0.2729} &\num{ 2.8807} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 1316.68 & 6765 &209 & 1264.5452849147 \\
$p=10$& \texttt{CPU(s)} & \num{ 13.1578} & 50.8767& \num{1.2483} & \num{11.4271} \\ \bottomrule
CRM & \texttt{it} & 113.7 & 469 & 36 & 83.5955142337195 \\
$p=25$& \texttt{CPU(s)} & \num{ 6.5062} & \num{ 45.2021} & \num{0.6664} &\num{ 8.9416} \\ \cmidrule(lr){2-6} PPM & \texttt{it} &2865.92 & 14617 &650 & 2651.0789036918536 \\ $p=25$& \texttt{CPU(s)} & \num{34.6541} & 242.2805& \num{ 2.9785} & \num{ 47.5093} \\ \bottomrule \\ CRM & \texttt{it} &128.8 & 331 & 23 & 76.92437845052763 \\
$p=50$& \texttt{CPU(s)} & \num{ 10.43880} & \num{46.8045} & \num{1.3100} &\num{ 11.8677} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 4949.42 & 25000 & 870 & 5531.401251364793 \\
$p=50$& \texttt{CPU(s)} & \num{ 88.4859} & 602.8599& \num{6.5347} & \num{125.0821} \\ \bottomrule \\ CRM & \texttt{it} & 166.28 & 526 & 49 & 91.18882387661331 \\
$p=100$& \texttt{CPU(s)} & \num{ 18.8065} & \num{70.6532} & \num{ 2.4265} &\num{ 20.1719} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & 7077.46 & 25000 & 1586 & 4970.777481279966 \\
$p=100$& \texttt{CPU(s)} & \num{143.0699} & 729.1966& \num{12.0125} & \num{ 171.2874} \\ \bottomrule \\ CRM & \texttt{it} & 221.66 & 554 & 88 & 105.57890130134903 \\
$p=200$& \texttt{CPU(s)} & \num{ 34.4157} & \num{ 120.3019} & \num{ 4.7277} &\num{ 35.5202} \\ \cmidrule(lr){2-6} PPM & \texttt{it} & \num{13677.28 } & 25000 & 4015 & 6856.3914 \\
$p=200$& \texttt{CPU(s)} & 282.2900 & 1085.9685 & \num{ 31.8832} & \num{ 295.7094} \\ \bottomrule \end{tabular} \end{table}
Finally, we exhibit in Figure \ref{fig:performance-profile-ppm} the performance profile, in the sense of \cite{DoM}, for all the instances. Again, the superiority of CRM with respect to PPM is fully corroborated.
\begin{figure}
\caption{Performance profile of experiments with ellipsoidal feasibility – CRM vs PPM}
\label{fig:performance-profile-ppm}
\end{figure}
\end{document}
\begin{document}
\frontmatter
\title{On Non-Generic Finite Subgroups of Exceptional Algebraic Groups}
\author{Alastair J. Litterick} \address{University of Auckland, Auckland, New Zealand} \email{[email protected]} \thanks{The author acknowledges support from the University of Auckland, as well as a Doctoral Training Award from the UK's Engineering and Physical Sciences Research Council. A large portion of the results presented here were derived during the author's doctoral study at Imperial College London, under the supervision of Professor Martin Liebeck. The author would like to thank Professor Liebeck for guidance and support received. Finally, the author would like to thank the anonymous referee for their numerous helpful suggestions.}
\subjclass[2010]{Primary 20G15, 20E07}
\keywords{algebraic groups, exceptional groups, finite simple groups, Lie primitive, subgroup structure, complete reducibility}
\begin{abstract} The study of finite subgroups of a simple algebraic group $G$ reduces in a sense to those which are almost simple. If an almost simple subgroup of $G$ has a socle which is not isomorphic to a group of Lie type in the underlying characteristic of $G$, then the subgroup is called \emph{non-generic}. This paper considers non-generic subgroups of simple algebraic groups of exceptional type in arbitrary characteristic.
A finite subgroup is called Lie primitive if it lies in no proper subgroup of positive dimension. We prove here that many non-generic subgroup types, including the alternating and symmetric groups $\Alt_{n}$, $\Sym_{n}$ for $n \ge 10$, do not occur as Lie primitive subgroups of an exceptional algebraic group.
A subgroup of $G$ is called $G$-completely reducible if, whenever it lies in a parabolic subgroup of $G$, it lies in a conjugate of the corresponding Levi factor. Here, we derive a fairly short list of possible isomorphism types of non-$G$-completely reducible, non-generic simple subgroups.
As an intermediate result, for each simply connected $G$ of exceptional type, and each non-generic finite simple group $H$ which embeds into $G/Z(G)$, we derive a set of \emph{feasible characters}, which restrict the possible composition factors of $V \downarrow S$, whenever $S$ is a subgroup of $G$ with image $H$ in $G/Z(G)$, and $V$ is either the Lie algebra of $G$ or a non-trivial Weyl module for $G$ of least dimension.
This has implications for the subgroup structure of the finite groups of exceptional Lie type. For instance, we show that for $n \ge 10$, $\Alt_n$ and $\Sym_n$, as well as numerous other almost simple groups, cannot occur as a maximal subgroup of an almost simple group whose socle is a finite simple group of exceptional Lie type. \end{abstract}
\maketitle
\tableofcontents
\mainmatter
\chapter{Introduction and Results} \label{chap:intro}
This paper concerns the closed subgroup structure of simple algebraic groups of exceptional type over an algebraically closed field. Along with the corresponding study for classical-type groups, this topic has been studied extensively ever since Chevalley's classification in the 1950s of reductive groups via their root systems.
The subgroup structure of a simple algebraic group divides naturally according to properties of the subgroups under consideration. Some of the first major results were produced by Dynkin \cites{MR0049903,MR0047629}, who classified maximal \emph{connected} subgroups in characteristic zero. The corresponding study over fields of positive characteristic was initiated by Seitz \cites{MR888704,MR1048074} and extended by Liebeck and Seitz \cites{MR1066572,MR2044850}, giving a classification of maximal connected closed subgroups, and more generally maximal closed subgroups of positive dimension.
Here, we consider the opposite extreme, the case of finite subgroups. Let $G$ be a simple algebraic group of exceptional type, over an algebraically closed field $K$ of characteristic $p \ge 0$. When considering finite subgroups of $G$, it is natural to restrict attention to those not lying in any intermediate closed subgroup of positive dimension. Such a finite subgroup is called \emph{Lie primitive}, and a result due to Borovik (Theorem \ref{thm:boro}) reduces the study of Lie primitive subgroups to those which are \emph{almost simple}, that is, those groups $H$ such that $H_{0} \le H \le \textup{Aut}(H_{0})$ for some non-abelian finite simple group $H_{0}$. The study then splits naturally according to whether or not $H_{0}$ is isomorphic to a member of $\textup{Lie}(p)$, the collection of finite simple groups of Lie type in characteristic $p$ (with the convention $\textup{Lie}(0) = \varnothing$).
Subgroups isomorphic to a member of $\textup{Lie}(p)$ are generally well-understood. If $G$ has a semisimple subgroup $X$ and $p > 0$, then $G$ has a subgroup isomorphic to a central extension of the corresponding finite simple group $X(q)$ for each power $q$ of $p$. In \cite{MR1458329} it is shown that, for $q$ above some explicit bound depending on the root system of $G$, any such subgroup of $G$ arises in this manner.
Turning to those simple groups not isomorphic to a member of $\textup{Lie}(p)$, the so-called \emph{non-generic} case, we have two problems: which of these admit an embedding into $G$, and how do they embed? For example, are any such subgroups Lie primitive? If not, what can we say about the positive-dimensional overgroups which occur?
The question of which simple groups admit an embedding now has a complete answer, due to the sustained efforts of many authors \cites{MR1220771,MR1087228,MR1721818,MR1767573,MR689418,MR1416728,MR933426,MR1653177,MR1717629}. The end product of these efforts is Theorem 2 of \cite{MR1717629}, where Liebeck and Seitz present tables detailing precisely which non-generic simple groups admit an embedding into an exceptional algebraic group, and for which characteristics this occurs. The following is a summary of this information.
\addtocounter{maintheorem}{-1} \addtocounter{table}{-1}
\begin{maintheorem}[{\cite{MR1717629}}] \label{thm:subtypes} Let $G$ be an adjoint exceptional algebraic group, over an algebraically closed field of characteristic $p \ge 0$, and let $H$ be a non-abelian finite simple group, not isomorphic to a member of $\textup{Lie}(p)$. Then $G$ has a subgroup isomorphic to $H$ if and only if $(G,H,p)$ appears in Table \ref{tab:subtypes}. \end{maintheorem}
\begin{table}[htbp] \small \onehalfspacing \caption{Non-generic subgroup types of adjoint exceptional $G$.} \label{tab:subtypes}
\begin{tabularx}{\linewidth}{l|>{\centering\arraybackslash}X} \hline $G$ & $H$ \\ \hline $G_2$ & $\Alt_5$, $L_2(q) \ (q = 7,8,13)$, $U_3(3)$, \\
& $J_1 \ (p = 11)$, $J_2 \ (p = 2)$
\\ \hline $F_4$ & $\Alt_{5-6}$, $L_2(q) \ (q = 7,8,13,17,25,27)$, $L_3(3)$, $U_3(3)$, $^{3}D_4(2)$, \\
& $\Alt_{7} \ (p = 2, 5)$, $\Alt_{9-10} \ (p = 2)$, $M_{11} \ (p = 11)$, $J_1 \ (p = 11)$, $J_2 \ (p = 2)$, $L_4(3) \ (p = 2)$
\\ \hline $E_6$ & $\Alt_{5-7}$, $M_{11}$, $L_2(q) \ (q = 7,8,11,13,17,19,25,27)$, $L_3(3)$, $U_3(3)$, $U_4(2)$, $^{3}D_4(2)$, $^{2}F_4(2)'$, \\
& $\Alt_{9-12} \ (p = 2)$, $M_{12} \ (p = 2, 5)$, $M_{22} \ (p = 2)$, $J_1 \ (p = 11)$, $J_2 \ (p = 2)$, $J_3 \ (p = 2)$, $Fi_{22} \ (p = 2)$, $L_4(3) \ (p = 2)$, $U_4(3) \ (p = 2)$, $\Omega_7(3) \ (p = 2)$, $G_2(3) \ (p = 2)$
\\ \hline $E_7$ & $\Alt_{5-9}$, $M_{11}$, $M_{12}$, $J_2$, $L_2(q) \ (q = 7,8,11,13,17,19,25,27,29,37)$, $L_3(3)$, $L_3(4)$, $U_3(3)$, $U_3(8)$, $U_4(2)$, $Sp_6(2)$, $\Omega_8^{+}(2)$, $^{3}D_4(2)$, $^{2}F_4(2)'$, \\
& $\Alt_{10} \ (p = 2,5)$, $\Alt_{11-13} \ (p = 2)$, $M_{22} \ (p = 5)$, $J_1 \ (p = 11)$, $Ru \ (p = 5)$, $HS \ (p = 5)$, $L_4(3) \ (p = 2)$, ${}^{2}B_{2}(8)$ $(p = 5)$\footnotemark
\\ \hline $E_8$ & $\Alt_{5-10}$, $M_{11}$, $L_2(q) \ (q = 7,8,11,13,16,17,19,25,27,29,31,32,41,49,61)$, $L_3(3)$, $L_3(5)$, $U_3(3)$, $U_3(8)$, $U_4(2)$, $Sp_6(2)$, $\Omega_8^{+}(2)$, $G_2(3)$, $^{3}D_4(2)$, $^{2}F_4(2)'$, $^{2}B_2(8)$, \\
& $\Alt_{11} \ (p = 2,11)$, $\Alt_{12-17} \ (p = 2)$, $M_{12} \ (p = 2, 5)$, $J_1 \ (p = 11)$, $J_2 \ (p = 2)$, $J_3 \ (p = 2)$, $Th \ (p = 3)$, $L_2(37) \ (p = 2)$, $L_4(3) \ (p = 2)$, $L_4(5) \ (p = 2)$, $PSp_4(5) \ (p = 2)$, $^{2}B_2(32) \ (p = 5)$ \\ \hline \end{tabularx} \end{table} \footnotetext{This subgroup was erroneously omitted from \cite{MR1717629}, however such a subgroup is easily seen to exist, since an $8$-dimensional module for the double-cover of ${}^{2}B_{2}(8)$ gives an embedding into $A_7$.}
Note that we have the following isomorphisms: \[ \begin{array}{c} \Alt_5 \cong L_2(4) \cong L_2(5), \quad \Alt_6 \cong L_2(9) \cong Sp_4(2)', \quad \Alt_8 \cong L_4(2),\\ L_2(7) \cong L_3(2), \quad U_4(2) \cong PSp_4(3), \quad U_3(3) \cong G_2(2)', \end{array} \] and we therefore consider these groups to be of Lie type in each such characteristic.
\addtocontents{toc}{\setcounter{tocdepth}{-1}} \section*{Lie Primitivity} \addtocontents{toc}{\setcounter{tocdepth}{1}}
At this point it remains to determine, for each pair $(G,H)$ above, properties of the possible embeddings $H \to G$. Here, our aim is to determine whether or not $H$ occurs as a Lie primitive subgroup of $G$, and if not, to determine useful information regarding the intermediate subgroups of positive dimension.
Our main result is the following; here $\textup{Aut}(G)$ denotes the abstract group generated by all inner automorphisms of $G$, together with all graph and field morphisms.
\begin{maintheorem} \label{THM:MAIN} Let $G$ be an adjoint exceptional simple algebraic group, over an algebraically closed field of characteristic $p \ge 0$, and let $S$ be a subgroup of $G$, isomorphic to the finite simple group $H \notin \textup{Lie}(p)$.
If $G$, $H$, $p$ appear in Table \ref{tab:main} then $S$ lies in a proper, closed, connected subgroup of $G$ which is stable under all automorphisms in $N_{\textup{Aut}(G)}(S)$. \end{maintheorem}
\begin{table}[htbp] \centering \small \onehalfspacing \caption{Imprimitive subgroup types.} \label{tab:main}
\begin{tabularx}{\linewidth}{l|>{\centering\arraybackslash}X} \hline $G$ & $H$ \\ \hline $F_4$ & $\Alt_5$, $\Alt_{7}$, $\Alt_{9-10}$, $M_{11}$, $J_1$, $J_2$, \\ & $\Alt_6 \ (p = 5)$, $L_2(7) \ (p = 3)$, $L_2(17) \ (p = 2)$, $U_3(3) \ (p \neq 7)$ \\ \hline $E_6$ & $\Alt_{9-12}$, $M_{22}$, $L_2(25)$, $L_2(27)$, $L_4(3)$, $U_4(2)$, $U_4(3)$, $^{3}D_4(2)$, \\ & $\Alt_{5} \ (p \neq 3)$, $\Alt_7 \ (p \neq 3, 5)$, $M_{11} \ (p \neq 3, 5)$, $M_{12} \ (p = 2)$, $L_2(7) \ (p = 3)$, $L_{2}(8) \ (p = 7)$, $L_{2}(11) \ (p = 5)$, $L_{2}(13) \ (p = 7)$, $L_2(17) \ (p = 2)$, $L_3(3) \ (p = 2)$, $U_3(3) \ (p = 7)$ \\ \hline $E_7$ & $\Alt_{10-13}$, $M_{11}$, $J_1$, $L_2(17)$, $L_2(25)$, $L_3(3)$, $L_4(3)$, $U_4(2)$, $Sp_{6}(2)$, $^{3}D_4(2)$, $^{2}F_4(2)'$, \\ & $\Alt_{9} \ (p \neq 3)$, $\Alt_{8} \ (p \neq 3,5)$, $\Alt_{7} \ (p \neq 5)$, $M_{12} \ (p \neq 5)$, $J_2 \ (p \neq 2)$, $L_2(8) \ (p \neq 3,7)$ \\ \hline $E_8$ & $\Alt_{8}$, $\Alt_{10-17}$, $M_{12}$, $J_1$, $J_2$, $L_2(27)$, $L_2(37)$, $L_4(3)$, $U_3(8)$, $Sp_{6}(2)$, $\Omega_{8}^{+}(2)$, $G_2(3)$, \\ & $\Alt_{9} \ (p \neq 2,3)$, $M_{11} \ (p \neq 3,11)$, $U_{3}(3) \ (p \neq 2,3,7)$, $^{2}F_4(2)' \ (p \neq 3)$ \\ \hline \end{tabularx} \end{table}
The `normaliser stability' condition here is important for applications to the subgroup structure of finite groups of Lie type. For instance, if $G = F_{4}(K)$ where $K$ has characteristic $5$, then each subgroup $S \cong \Alt_{6}$ of $G$ lies in a proper, closed connected subgroup of $G$, which is stable under $N_{G}(S)$ and under every Frobenius morphism of $G$ stabilising $S$. This can be used to deduce that no finite almost simple group with socle $F_{4}(5^{r})$ $(r \ge 1)$ contains a maximal subgroup with socle $\Alt_{6}$, as in Theorem \ref{THM:FIN} below.
The proof of Theorem \ref{THM:MAIN} proceeds by considering the action of the finite groups in question on certain modules for the algebraic group. We consider in particular the adjoint module $L(G)$, and a Weyl module of least dimension, denoted $V_{\textup{min}}$, for the simply connected cover $\tilde{G}$ of $G$. Each such module has dimension at most 248. For $G$ of type $E_{6}$ and $E_{7}$, the group $\tilde{G}$ has a centre of order $3/(3,p)$ and $2/(2,p)$ respectively, acting by scalars on $V_{\textup{min}}$. Thus to make use of $V_{\textup{min}}$, we must consider embeddings $\tilde{H} \to \tilde{G}$, where $\tilde{H}$ is a perfect central extension of the simple group $H$, and where the image of $Z(\tilde{H})$ lies in $Z(\tilde{G})$.
For each $(G,H,p)$ in Table \ref{tab:subtypes}, we calculate \emph{compatible feasible characters} of $\tilde{H}$ on $L(G)$ and $V_{\textup{min}}$, which are Brauer characters of $\tilde{H}$ that agree with potential restrictions of $L(G)$ and $V_{\textup{min}}$ to a subgroup $\tilde{S} \cong \tilde{H}$ of $\tilde{G}$ with $Z(\tilde{S}) \le Z(\tilde{G})$ (see Definition \ref{def:feasible}). This requires knowledge of the irreducible Brauer characters of $\tilde{H}$ of degree at most $\textup{dim}(L(G))$ for each subgroup type $H$, as well as knowledge of the eigenvalues of various semisimple elements of $\tilde{G}$ on the relevant $K\tilde{G}$-modules. The necessary theory is already well-developed, and we give an outline in Chapter \ref{chap:disproving}.
\begin{maintheorem} \label{THM:FEASIBLES} Let $G$ be a simple algebraic group of type $F_4$, $E_6$, $E_7$ or $E_8$ over an algebraically closed field of characteristic $p \ge 0$. Let $S$ be a non-abelian finite simple subgroup of $G$, not isomorphic to a member of $\textup{Lie}(p)$, and let $\tilde{S}$ be a minimal preimage of $S$ in the simply connected cover of $G$. Then the composition factors of $L(G) \downarrow \tilde{S}$ and $V_{\textup{min}} \downarrow \tilde{S}$ are given by a line of the appropriate table in Chapter \ref{chap:thetables}. \end{maintheorem}
Once these composition factors are known, the representation theory of the finite subgroup can be used to determine further information on the structure of $L(G)$ and $V_{\textup{min}}$ as an $\tilde{S}$-module, which in turn allows us to determine information on the inclusion of $S$ into $G$. In particular, if $\tilde{S}$ fixes a nonzero vector on a non-trivial $G$-composition factor of $L(G)$ or $V_{\textup{min}}$, then $\tilde{S}$ lies in the full stabiliser of this vector, which is positive-dimensional and proper. In Chapter \ref{chap:disproving}, we present several techniques for determining the existence of such a fixed vector.
Theorem \ref{THM:MAIN} complements a number of existing results. Frey \cites{Fre1,MR1617620,MR1839999,MR1423302} and Lusztig \cite{MR1976697} give much information on embeddings of alternating groups $\Alt_{n}$ and their proper covers into exceptional groups in characteristic zero. Lifting and reduction results such as \cite{MR1369427}*{Lemme 4 and Proposition 8} and \cite{MR1320515}*{Corollary 2.2 and Theorem 3.4} can then be used to pass results between characteristic zero and positive characteristic not dividing the order of the subgroup in question.
For $G$ of type $G_2$, embeddings of non-generic finite simple groups are well understood, and hence we omit these from our study. In particular, by \cite{MR1717629}*{Corollary 4}, the only non-generic finite simple subgroups of $G_2(K)$ which are not Lie primitive are isomorphic to $\Alt_{5}$ or $L_2(7)$. From \cite{MR898346}*{Theorems 8, 9}, it follows that $\Alt_{5}$ does not occur as a Lie primitive subgroup of $G_{2}(K)$, while $L_2(7)$ occurs both as a Lie primitive subgroup and as a non-Lie primitive subgroup. In addition, Magaard \cite{MR2638705} gives a necessary condition for a given non-generic simple group to occur as a maximal subgroup of $F_4(F)$, where $F$ is a finite or algebraically closed field of characteristic $\neq 2,3$. This can be used to limit the possible isomorphism types of Lie primitive finite subgroups. The methods used here differ from those of the above references, as well as from relevant papers of Aschbacher \cites{MR892190,MR928524,MR986684,MR1054997,MR898346}. There, results focus on the geometry of certain low-dimensional modules supporting a multilinear form or algebra product. Here, however, we primarily use techniques from the representation theory of finite groups and reductive algebraic groups.
For each adjoint exceptional group $G$, Corollary 4 of \cite{MR1717629} lists those non-generic finite simple groups $H$ which embed into $G$ and occur only as Lie primitive subgroups there. Since we will refer to this in Section \ref{sec:reps}, we record this information below in Table \ref{tab:onlyprim}. Combining this with Theorem \ref{THM:MAIN} gives the following:
\begin{maincorollary} \label{cor:prim} Let $G$ be an adjoint simple algebraic group of type $F_4$, $E_6$, $E_7$ or $E_8$, over an algebraically closed field of characteristic $p \ge 0$, and let $H \notin \textup{Lie}(p)$ be a finite non-abelian simple group which embeds into $G$. Then exactly one of the following holds: \begin{itemize} \item[\textup{(i)}] $G$, $H$ appear in Table \ref{tab:main} and $G$ has no Lie primitive subgroups $\cong H$. \item[\textup{(ii)}] $G$, $H$ appear in Table \ref{tab:onlyprim} and every subgroup $S \cong H$ of $G$ is Lie primitive. \item[\textup{(iii)}] $G$, $H$ appear in Table \ref{tab:unclassified}. \end{itemize} \end{maincorollary}
\begin{table}[ht] \centering \small \onehalfspacing \caption{Subgroup types occurring only as Lie primitive subgroups of $G$} \label{tab:onlyprim}
\begin{tabularx}{.99\linewidth}{c|>{\centering\arraybackslash}X} \hline $G$ & $H$ \\ \hline $F_4$ & $L_2(25)$, $L_2(27)$, $L_3(3)$, $^{3}D_4(2)$, $L_4(3) \ (p = 2)$ \\ \hline $E_6$ & $L_2(19)$, $^{2}F_4(2)'$, $\Omega_7(3) \ (p = 2)$, $G_2(3) \ (p = 2)$, $M_{12} \ (p = 5)$, $J_3 \ (p = 2)$, $Fi_{22} \ (p = 2)$ \\ \hline $E_7$ & $L_2(29)$, $L_2(37)$, $U_3(8)$, $M_{22} \ (p = 5)$, $Ru \ (p = 5)$, $HS \ (p = 5)$ \\ \hline $E_8$ & $L_2(q) \ (q = 31,32,41,49,61)$, $L_3(5)$, $L_4(5) \ (p = 2)$, $^{2}B_2(8) \ (p \neq 5, 13)$, $^{2}B_2(32) \ (p = 5)$, $Th \ (p = 3)$ \\ \hline \end{tabularx} \end{table}
\begin{table}[htbp] \centering \small \onehalfspacing \caption{Subgroup types $H \notin \textup{Lie}(p)$ possibly occurring both as a Lie primitive subgroup and as a Lie imprimitive subgroup of $G$}
\begin{tabularx}{.99\linewidth}{c|>{\centering\arraybackslash}X} \hline $G$ & $H$ \\ \hline $F_4$ & $L_2(q) \ (q = 8,13)$, $\Alt_{6} \ (p \neq 5)$, $L_2(7) \ (p \neq 3)$, $L_2(17) \ (p \neq 2)$, $U_3(3) \ (p = 7)$
\\ \hline $E_6$ & $\Alt_{6}$, $\Alt_5 \ (p = 3)$, $\Alt_7 \ (p = 3,5)$, $M_{11} \ (p = 3,5)$, $J_1 \ (p = 11)$, $J_2 \ (p = 2)$, $L_2(7) \ (p \neq 3)$, $L_2(8) \ (p \neq 7)$, $L_{2}(11) \ (p \neq 5)$, $L_{2}(13) \ (p \neq 7)$, $L_2(17) \ (p \neq 2)$, $L_3(3) \ (p \neq 2)$, $U_3(3) \ (p \neq 7)$
\\ \hline $E_7$ & $\Alt_{5}$, $\Alt_{6}$, $L_2(q) \ (q = 7,11,13,19,27)$, $L_3(4)$, $U_3(3)$, $\Omega_8^{+}(2)$, $\Alt_{7} \ (p = 5)$, $\Alt_8 \ (p = 3, 5)$, $\Alt_9 \ (p = 3)$, $M_{12} \ (p = 5)$, $J_2 \ (p = 2)$, $L_2(8) \ (p = 3,7)$, \\ & ${}^{2}B_{2}(8)$ $(p = 5)$
\\ \hline $E_8$ & $\Alt_{5-7}$, $L_2(q) \ (q = 7,8,11,13,16,17,19,25,29)$, $L_3(3)$, $U_4(2)$, $^{3}D_4(2)$, $\Alt_9 \ (p = 2,3)$, $M_{11} \ (p = 3,11)$, $J_3 \ (p = 2)$, $U_3(3) \ (p = 7)$, $PSp_4(5) \ (p = 2)$, $^{2}B_2(8) \ (p = 5, 13)$, $^{2}F_4(2)' \ (p = 3)$ \\ \hline \end{tabularx} \label{tab:unclassified} \end{table}
We remark that many of the groups appearing in Table \ref{tab:onlyprim} admit only a single feasible character on $L(G)$ and $V_{\textup{min}}$ in each characteristic where they embed. The exceptions to this occur primarily for subgroups $L_{2}(q)$, and these tend to have elements of large order, whose Brauer character values we have not considered (more detail is given at the start of Chapter \ref{chap:thetables}). It is therefore likely that a small number of the feasible characters for these groups are not realised by an embedding, so that these groups also have very few possible actions on $L(G)$ and $V_{\textup{min}}$.
\addtocontents{toc}{\setcounter{tocdepth}{-1}} \section*{Complete Reducibility} \addtocontents{toc}{\setcounter{tocdepth}{1}}
In the course of proving Theorem \ref{THM:MAIN}, we derive much information on the intermediate positive-dimensional subgroups which occur when a finite group is shown not to appear as a Lie primitive subgroup of $G$. Recall that a subgroup $X$ of a reductive algebraic group $G$ is called \emph{$G$-completely reducible} (in the sense of Serre \cite{Ser3}) if, whenever $X$ is contained in a parabolic subgroup $P$ of $G$, it lies in a Levi factor of $P$. In case $G = GL_n(K)$ or $SL_n(K)$, this coincides with $X$ acting completely reducibly on the natural $n$-dimensional module.
The concept of $G$-complete reducibility has seen much attention recently, for instance in the context of classifying reductive subgroups of $G$; see for example \cites{MR2604850,MR3075783}. Similar techniques can be brought to bear in the case of finite simple subgroups.
\begin{maintheorem} Let $G$ be an adjoint exceptional simple algebraic group in characteristic $p > 0$, and let $H$ be a non-abelian finite simple group, not isomorphic to a member of $\textup{Lie}(p)$, which admits an embedding into $G$.
If $G$ has a subgroup isomorphic to $H$, which is not $G$-completely reducible, then $(G,H)$ appears in Table \ref{tab:nongcr}, with $p$ one of the primes given there. \label{THM:NONGCR} \end{maintheorem}
\begin{table}[htbp] \centering \small \onehalfspacing \caption{Isomorphism types of potential non-$G$-cr subgroups}
\begin{tabularx}{.99\linewidth}{c|c|>{\raggedright\arraybackslash}X} \hline $G$ & $p$ & Subgroup types $H \notin \textup{Lie}(p)$ \\ \hline $F_4$ & $2$ & $J_{2}$, $L_{2}(13)$ \\ & $3$ & $\Alt_{5}$, $L_{2}(7)$, $L_{2}(8)$ \\ & $7$ & $L_{2}(8)$ \\ \hline
$E_6$ & $2$ & $M_{22}$, $L_{2}(q) \ (q = 11,13,17)$, $U_{4}(3)$ \\ & $3$ & $\Alt_{5}$, $\Alt_{7}$, $M_{11}$, $L_{2}(q) \ (q = 7,8,11,17)$ \\ & $5$ & $\Alt_{6}$, $M_{11}$, $L_{2}(11)$ \\ & $7$ & $\Alt_{7}$, $L_{2}(8)$ \\ \hline
$E_7$ & $2$ & $\Alt_{n} \ (n = 7,8,10,12)$, $M_{11}$, $M_{12}$, $J_{2}$, $L_{2}(q) \ (q = 11,13,17,19,27)$, $L_{3}(3)$ \\ & $3$ & $\Alt_{n} \ (n = 5,7,8,9)$, $M_{11}$, $M_{12}$, $L_{2}(q) \ (q = 7,8,11,13,17,25)$, $^{3}D_4(2)$ \\ & $5$ & $\Alt_{6}$, $\Alt_{7}$, $M_{11}$, $L_{2}(11)$, $^{2}F_4(2)'$ \\ & $7$ & $\Alt_{7}$, $L_{2}(8)$, $L_{2}(13)$, $U_{3}(3)$ \\ & $11$ & $M_{11}$ \\ & $13$ & $L_{3}(3)$ \\ \hline
$E_8$ & $2$ & $\Alt_{n} \ (n = 7,9,10,12,16)$, $M_{11}$, $M_{12}$, $J_{2}$, $L_{2}(q) \ (q = 11,13,17,19,25,27,37)$, $L_{3}(3)$, $PSp_{4}(5)$ \\ & $3$ & $\Alt_{n} \ (n = 5,7,8,9)$, $M_{11}$, $L_{2}(q) \ (q = 7,8,11,13,17,19,25)$, $U_{3}(8)$, $^{3}D_4(2)$ \\ & $5$ & $\Alt_{n} \ (n = 6,7,10)$, $M_{11}$, $L_{2}(11)$, $L_{2}(19)$, $U_{4}(2)$, $^{2}F_{4}(2)'$ \\ & $7$ & $\Alt_{7}$, $L_{2}(8)$, $L_{2}(13)$, $U_{3}(3)$ \\ & $11$ & $M_{11}$ \\ & $13$ & $L_{3}(3)$ \\ \hline \end{tabularx} \label{tab:nongcr} \end{table}
\begin{maincorollary} \label{cor:nongcr} If $G$ is an exceptional simple algebraic group over an algebraically closed field of characteristic $p = 0$ or $p > 13$, then all non-generic finite simple subgroups of $G$ are $G$-completely reducible. \end{maincorollary}
Theorem \ref{THM:NONGCR} and Corollary \ref{cor:nongcr} mirror results of Guralnick \cite{MR1717357}*{Theorem A}, which state that if $G = GL_{n}(K)$, where $K$ has characteristic $p > n + 1$, then all finite subgroups of $G$ having no non-trivial normal $p$-subgroup are $G$-completely reducible. Another general result in this direction appears in \cite{MR2178661}*{Remark 3.43(ii)}, which tells us that if $p > 3$ for $G$ of type $G_{2}$, $F_{4}$, $E_{6}$ or $E_{7}$, or if $p > 5$ for $G$ of type $E_{8}$, then a subgroup $S$ of $G$ is $G$-completely reducible if $L(G)$ is a completely reducible $S$-module. All of these results are in the spirit of those of Serre in \cite{MR2167207}*{\S 4, 5}.
At this stage we do not attempt to classify those non-generic subgroups which are not $G$-completely reducible. For most triples $(G,H,p)$ in Table \ref{tab:nongcr} it is straightforward to show the existence of such a subgroup. However, some cases are quite involved; we comment briefly on this in Section \ref{rem:existence} on page \pageref{rem:existence}.
\addtocontents{toc}{\setcounter{tocdepth}{-1}} \section*{Almost Simple Subgroups} \addtocontents{toc}{\setcounter{tocdepth}{1}}
The `normaliser stability' of the connected subgroups in Theorem \ref{THM:MAIN} also allows us to extend our results to almost simple finite subgroups. Let $S_0$ be almost simple, with simple socle $S$, and suppose that the isomorphism type of $S$ appears in Table \ref{tab:main}. If $\bar{S}$ is the $N_{\textup{Aut}(G)}(S)$-stable connected subgroup given by Theorem \ref{THM:MAIN}, then $S_0 < N_G(\bar{S})$, and this latter group is positive-dimensional and proper since $G$ is simple.
\begin{maincorollary} \label{cor:almostsimple} If $G$, $H$, $p$ appear in Table \ref{tab:main}, then $G$ has no Lie primitive finite almost simple subgroup whose socle is isomorphic to $H$. In particular, no group $\Alt_{n}$ or $\Sym_{n}$ $(n \ge 10)$ occurs as a Lie primitive subgroup of an exceptional simple algebraic group in any characteristic. \end{maincorollary}
In the case that the field of definition has characteristic $0$, we obtain the following. For $\Alt_n$, this is proved independently in forthcoming work of D. Frey \cite{Fre1}. \begin{maincorollary} \label{cor:almostsimple_zero} No exceptional simple algebraic group over an algebraically closed field of characteristic $0$ has a Lie primitive subgroup isomorphic to $\Alt_n$ or $\Sym_n$ for $n \ge 8$. \end{maincorollary}
\addtocontents{toc}{\setcounter{tocdepth}{-1}} \section*{Application: Finite Groups of Lie Type} \addtocontents{toc}{\setcounter{tocdepth}{1}}
Ever since the classification of the finite simple groups, a question of primary importance in finite group theory has been to understand their subgroups, and in particular their maximal subgroups. Recall that a group of Lie type arises as a subquotient of the group of fixed points of a simple algebraic group under a Frobenius morphism (more details are given in Section \ref{sec:notation} of Chapter \ref{chap:background}). If $\sigma$ is a Frobenius morphism of an adjoint group $G$ over a field of characteristic $p > 0$, then the group $O^{p'}(G_\sigma)$ is usually simple. Simple groups of Lie type make up all the non-abelian finite simple groups besides the alternating and sporadic groups.
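To fix ideas with a standard example (included only for illustration; the facts here are classical): take $G = PGL_n(K)$, the adjoint group of type $A_{n-1}$ over a field of characteristic $p > 0$, and let $\sigma$ be the standard Frobenius morphism induced by raising matrix entries to the $q$-th power, where $q$ is a power of $p$. Then
\[ G_{\sigma} = PGL_n(q), \qquad O^{p'}(G_{\sigma}) = PSL_n(q), \]
and the latter is simple whenever $n \ge 2$ and $(n,q) \neq (2,2)$, $(2,3)$.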
For the alternating groups and classical groups of Lie type, the O'Nan-Scott Theorem \cite{MR1409812}*{Theorem 4.1A} and Aschbacher's Theorem \cite{MR746539} reduce the study of maximal subgroups to understanding primitive permutation actions and modular representations of almost simple groups. For the exceptional groups of Lie type, understanding maximal subgroups again reduces naturally to embeddings of almost simple groups, by an analogous result of Borovik (Theorem \ref{thm:boro}; see \cite{MR1066315} for the full statement). As with the corresponding algebraic groups, we expect more explicit results here than in the classical case.
In Section \ref{sec:pfcorfin} we use Theorem \ref{THM:MAIN} to prove the following; note that again we have not considered the groups of type $G_2$ since a complete description of their maximal subgroups is already available in \cite{MR618376}, \cite{MR955589} and \cite{MR898346}.
\begin{maintheorem} \label{THM:FIN} Let $G$ be an adjoint exceptional simple algebraic group over an algebraically closed field of positive characteristic $p$. Let $\sigma$ be a Frobenius morphism of $G$ such that $L = O^{p'}(G_\sigma)$ is simple, and let $L \le L_1 \le \textup{Aut}(L)$.
If $(G,H,p)$ appears in Table \ref{tab:main}, then $L_{1}$ has no maximal subgroup with socle isomorphic to $H$. \end{maintheorem}
Finally we remark that while the proofs of Theorems \ref{THM:MAIN} and \ref{THM:FIN} are independent of the classification of finite simple groups, Theorem \ref{thm:subtypes} does rely on this, and hence so do Theorems \ref{THM:FEASIBLES} and \ref{THM:NONGCR}, as well as Corollaries \ref{cor:prim} and \ref{cor:nongcr}.
\section*{Layout}
Our analysis requires an amount of background on the representation theory of reductive algebraic groups and finite groups, and the subgroup structure of reductive groups. Chapter \ref{chap:background} gives a concise overview of the necessary theory. In Chapter \ref{chap:disproving} we describe the calculations used to deduce Theorem \ref{THM:FEASIBLES}, illustrating with a typical example. The feasible characters themselves appear in Chapter \ref{chap:thetables}.
For most triples $(G,H,p)$ in Table \ref{tab:main}, the proof that a subgroup $S \cong H$ of $G$ lies in a proper connected subgroup is simply a matter of inspecting the feasible characters and applying an elementary bound on the number of trivial composition factors (Proposition \ref{prop:substab}). In Chapter \ref{chap:thetables} we have marked with `\textbf{P}' those feasible characters which do \emph{not} satisfy the bound; thus every Lie primitive subgroup of $G$ gives rise to one of these feasible characters. The groups with no characters marked `\textbf{P}' are collected in Table \ref{tab:substabprop}. Table \ref{tab:main} consists of these together with some additional cases, which are considered in Proposition \ref{prop:algorithm}.
Chapter \ref{chap:stab} begins by assuming that $(G,H,p)$ appears in Table \ref{tab:main} and that if $S \cong H$ is a subgroup of $G$, then $S$ lies in a proper connected subgroup of $G$. A variety of techniques are then applied in order to show that $S$ in fact lies in a proper, connected, $N_{\textup{Aut}(G)}(S)$-stable subgroup. Theorem \ref{THM:FIN} follows from Theorem \ref{THM:MAIN} by a short argument, independent of the remainder of the paper, which we give in Section \ref{sec:pfcorfin}. Finally, Chapter \ref{chap:gcr} considers $G$-complete reducibility of the finite subgroups occurring, and Appendix \ref{chap:auxiliary} contains auxiliary data on representations of finite simple groups used in Chapters \ref{chap:disproving}, \ref{chap:stab} and \ref{chap:gcr}.
\section*{Notation}
Throughout, unless stated otherwise, all algebraic groups are affine and defined over $K$, a fixed algebraically closed field of characteristic $p \ge 0$. Subgroups are assumed to be closed, and modules are assumed to be rational and of finite dimension. A homomorphism between algebraic groups is assumed to be a morphism of varieties, unless stated otherwise.
For a group $X$, the derived subgroup is denoted by $X'$. The image of $X$ under a homomorphism $\phi$ is denoted $X^{\phi}$, and if $\phi$ is an endomorphism of $X$, then $X_{\phi}$ denotes the fixed points of $\phi$ in $X$. If $X$ is finite then $O^{p'}(X)$ denotes the smallest normal subgroup of $X$ with index coprime to $p$. If $X_{1}$, $X_{2}$, $\ldots$, $X_{r}$ are simple algebraic groups or tori, then $X_{1} X_{2} \ldots X_{r}$ denotes a commuting product with pairwise finite intersections.
If $V$ is a $KX$-module, then $V^{*}$ denotes the dual module $\textup{Hom}(V,K)$. If $Y$ is a subgroup of $X$, then $V \downarrow Y$ denotes the restriction of $V$ to $Y$. If $M_{1}$, $M_{2}$, $\ldots$, $M_{r}$ are $KX$-modules and $n_{1}$, $\ldots$, $n_{r}$ are integers then we write \[ M_{1}^{n_{1}}/M_{2}^{n_{2}}/\ldots/M_{r}^{n_{r}} \] to denote a module having the same composition factors as the direct sum $M_{1}^{n_{1}} \oplus M_{2}^{n_{2}} \oplus \ldots \oplus M_{r}^{n_{r}}$. Finally, the notation
\[ M_{1}|M_{2}|\ldots|M_{r} \] denotes a $KX$-module $V$ with a series $V = V_{1} > V_{2} > \ldots > V_{r+1} = 0$ of submodules such that $\textup{soc}(V/V_{i+1}) = V_{i}/V_{i+1} \cong M_{i}$ for $1 \le i \le r$.
\chapter{Background} \label{chap:background}
Good general references for the theory of algebraic groups covered in this chapter are \cite{MR1102012}, \cite{MR0396773}, \cite{MR2850737} and \cite{MR2015057} and we will also give references to specific results when appropriate.
\section{Affine Algebraic Groups} \label{sec:notation}
\subsection{Reductive and semisimple groups; root systems} For an algebraic group $G$, the connected component containing the identity is denoted $G^{\circ}$. The soluble and unipotent radicals of $G$ are respectively denoted $R(G)$ and $R_u(G)$; they are the maximal normal connected soluble (respectively unipotent) subgroups of $G$. Then $G$ is called \emph{reductive} if $R_u(G) = 1$ and \emph{semisimple} if $R(G) = 1$. An algebraic group is \emph{simple} if it has no non-trivial, proper, connected normal subgroups, and a semisimple algebraic group is a commuting product of simple subgroups whose pairwise intersections are finite. \label{term:radicals}
Let $G$ be reductive. We let $T$ denote a fixed maximal torus of $G$, and $B$ a Borel subgroup (maximal connected soluble subgroup) containing $T$. The \emph{rank} of $G$ is $r = \textup{dim}(T)$. The \emph{character group} $X(T) \stackrel{\textup{def}}{=} \textup{Hom}(T,K^{*})$ is a free abelian group of rank $r$, written additively. Any $KG$-module $V$ restricts to $T$ as a direct sum of \emph{weight spaces} \[ V_{\lambda} \stackrel{\textup{def}}{=} \{ v \in V \ : \ t.v = \lambda(t)v \textup{ for all }t \in T \} \] where $\lambda$ runs over elements of $X(T)$. Those $\lambda$ with $V_{\lambda} \neq \{0\}$ are the \emph{weights} of $V$. The Lie algebra of $G$, denoted $L(G)$\label{term:lg}, is a $KG$-module under the adjoint action, and the nonzero weights of $L(G)$ are the \emph{roots} of $G$; these form an abstract root system in the sense of \cite{MR0396773}*{Appendix}. We let $\Phi$ denote the set of roots. The choice of $B$ gives a base of simple roots $\Pi = \{\alpha_1,\ldots,\alpha_r\}$ and a partition of $\Phi$ into positive roots $\Phi^{+}$ and negative roots $\Phi^{-}$, which are those roots expressible as a positive (respectively negative) integer sum of simple roots.
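As a minimal illustration of these notions (a standard example, not used in our later analysis): let $G = SL_2(K)$ with maximal torus $T$ the subgroup of diagonal matrices. Then $X(T) \cong \mathbb{Z}$, generated by the character $\lambda \, : \, \textup{diag}(t,t^{-1}) \mapsto t$. The natural $2$-dimensional module $V$ decomposes as $V = V_{\lambda} \oplus V_{-\lambda}$, while $L(G) = \mathfrak{sl}_2(K)$ has weights $\alpha$, $0$, $-\alpha$ with $\alpha = 2\lambda$, realised respectively by the strictly upper triangular, diagonal and strictly lower triangular trace-zero matrices. Thus
\[ \Phi = \{\pm\alpha\}, \]
a root system of type $A_1$.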
Abstract root systems are classified by their \emph{Dynkin diagram}; nodes correspond to simple roots, with bonds indicating relative lengths and angle of incidence between non-orthogonal roots. A semisimple group is simple if and only if its Dynkin diagram is connected, and the well-known classification of possible connected diagrams, as well as our chosen numbering of the nodes, are as follows: \begin{center} \includegraphics{AJL-dynkin.eps} \end{center} where type $E_n$ occurs for $n \in \{6,7,8\}$.
For a simple algebraic group $G$, the corresponding label ($A_n$, etc.) is called the (Lie) \emph{type} of $G$. An \emph{isogeny} is a surjective homomorphism of algebraic groups with finite kernel. For each Lie type there exist a \emph{simply connected} group $G_{\textup{sc}}$ and an \emph{adjoint} group $G_{\textup{ad}}$, each unique up to isomorphism, such that every simple group $G$ over $K$ of that type admits isogenies $G_{\textup{sc}} \to G$ and $G \to G_{\textup{ad}}$.
\subsection{Weights and rational modules} The group $\mathbb{Z}\Phi$ is naturally a lattice in the Euclidean space $E \stackrel{\textup{def}}{=} X(T) \otimes_{\mathbb{Z}} \mathbb{R}$. In this space, an \emph{abstract weight} is a point $\lambda$ satisfying \[ \left<\lambda,\alpha\right> \stackrel{\textup{def}}{=} 2\frac{(\lambda,\alpha)}{(\alpha,\alpha)} \in \mathbb{Z} \] for each simple root $\alpha$. The abstract weights form a lattice $\Lambda$ (the \emph{weight lattice}); the quotient $\Lambda/\mathbb{Z}\Phi$ is finite, and $\mathbb{Z}\Phi \le X(T) \le \Lambda$. The possible subgroups between $\mathbb{Z}\Phi$ and $\Lambda$ correspond to isogeny types of simple group; $G$ is simply connected if and only if $X(T) = \Lambda$, and adjoint if and only if $X(T) = \mathbb{Z}\Phi$.
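For example (a standard fact recorded here only for illustration), in type $A_1$ we have $\Lambda = \mathbb{Z}\lambda_1$ and $\Phi = \{\pm 2\lambda_1\}$, so that
\[ \mathbb{Z}\Phi = 2\mathbb{Z}\lambda_1, \qquad \Lambda/\mathbb{Z}\Phi \cong \mathbb{Z}/2\mathbb{Z}, \]
and the two possible lattices correspond to the two isogeny types: $X(T) = \Lambda$ for the simply connected group $SL_2(K)$, and $X(T) = \mathbb{Z}\Phi$ for the adjoint group $PGL_2(K)$.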
A weight $\lambda \in \Lambda$ is called \emph{dominant} if $\left<\lambda,\alpha\right> \ge 0$ for all $\alpha \in \Pi$, and $\Lambda$ is free abelian on the set $\{\lambda_1,\ldots,\lambda_r\}$ of \emph{fundamental dominant weights}, which are defined by $\left<\lambda_i,\alpha_j\right> = \delta_{ij}$ for all $i$ and $j$. If $\lambda$, $\mu$ are weights, we say $\mu \le \lambda$ if $\lambda - \mu$ is a non-negative integer combination of positive roots.
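To illustrate the ordering (a routine check with the Cartan matrix of type $A_2$, given for illustration only): expressing simple roots in terms of fundamental weights gives $\alpha_1 = 2\lambda_1 - \lambda_2$ and $\alpha_2 = -\lambda_1 + 2\lambda_2$, hence
\[ \lambda_1 + \lambda_2 = \alpha_1 + \alpha_2, \]
so the zero weight satisfies $0 \le \lambda_1 + \lambda_2$. Indeed $\lambda_1 + \lambda_2$ is the highest root of $A_2$, the high weight of the adjoint module.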
If $V$ is a (rational, finite-dimensional) $KG$-module, the Lie-Kolchin theorem \cite{MR0396773}*{\S 17.6} implies that $B$ stabilises a 1-space of $V$; an element spanning a $B$-stable 1-space is called a \emph{maximal vector} of $V$. If $V$ is generated as a $KG$-module by a maximal vector of weight $\lambda$, we say that $V$ is a \emph{highest weight module} for $G$, of \emph{high weight} $\lambda$. Then $\lambda$ is dominant, and all other weights of $V$ are strictly less than $\lambda$ under the above ordering.
A $B$-stable 1-space of a $KG$-module generates a $KG$-submodule; thus an irreducible $KG$-module is generated by any maximal vector, has a unique $B$-stable 1-space, and is a highest weight module. Conversely, for any dominant $\lambda \in X(T)$ there exists an irreducible module of highest weight $\lambda$, which we denote $V_G(\lambda)$\label{term:vglambda}. Then $V_G(\lambda) \cong V_G(\mu)$ implies $\lambda \le \mu$ and $\mu \le \lambda$, hence $\mu = \lambda$.
The \emph{Weyl group} $W = N_G(T)/T$ acts naturally on $X(T)$, inducing an action on $E = X(T) \otimes_{\mathbb{Z}} \mathbb{R}$ which preserves the set of roots, as well as the root and weight lattices. This identifies $W$ with the group of isometries of $E$ generated by \emph{simple reflections} $s_i \, : \, x \mapsto x - \left<x,\alpha_i\right>\alpha_i$ for each $i$. The \emph{length} of an element $w \in W$ is the smallest $k$ such that $w$ is expressible as a product of $k$ simple reflections. The \emph{longest element} of the Weyl group is the unique element of maximal length; it is also characterised as the unique element of $W$ sending every positive root to a negative one. Hence if $w_{\circ}$ is the longest element, then the linear transformation $-w_{\circ}$ of $E$ permutes the dominant weights. Since the weights of a module are precisely the negatives of those of its dual, the simple modules $V_G(-w_{\circ}\lambda)$ and $V_G(\lambda)^{*}$ have identical weight spaces, and thus $V_G(-w_{\circ}\lambda) \cong V_G(\lambda)^{*}$. When no confusion can arise, we abbreviate $V_G(\lambda)$ to simply $\lambda$.
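For example (standard facts, stated for illustration): in type $A_r$ one has $-w_{\circ}\lambda_i = \lambda_{r+1-i}$, so that
\[ V_G(\lambda_1)^{*} \cong V_G(\lambda_r); \]
for $SL_{r+1}(K)$ this is the familiar statement that the dual of the natural module is its $r$-th exterior power. In types $A_1$, $B_r$, $C_r$, $D_r$ ($r$ even), $G_2$, $F_4$, $E_7$ and $E_8$, the longest element acts as $-1$ on $E$, so $-w_{\circ}$ is the identity and every irreducible module is self-dual.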
For each dominant $\lambda \in X(T)$, we also have a \emph{Weyl module}, denoted $W_G(\lambda)$\label{term:wglambda}, which is universal in the sense that if $V$ is a finite-dimensional rational module generated by a maximal vector of weight $\lambda$, then $V$ is a quotient of $W_G(\lambda)$; in particular, any such $V$ satisfies $V/\textup{Rad}(V) \cong V_{G}(\lambda)$. The Weyl module can be defined in terms of the 1-dimensional $B$-module $K_{\lambda}$, with $R_u(B)$ acting trivially and $T \cong B/R_u(B)$ acting with weight $\lambda$. Then $H^{0}(G/B,\lambda) = \textup{Ind}_{B}^{G}(K_{\lambda})$ has socle $\cong V_G(\lambda)$, and we set $W_G(\lambda) \stackrel{\textup{def}}{=} H^{0}(G/B,-w_{\circ}\lambda)^{*}$.
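As a small standard example of how Weyl modules can fail to be irreducible (given for illustration only): for $G = SL_2(K)$ with $p = 2$, the $3$-dimensional Weyl module $W_G(2\lambda_1)$ has composition factors $V_G(2\lambda_1)$, of dimension $2$, and the trivial module $V_G(0)$; in the notation fixed earlier,
\[ W_G(2\lambda_1) = V_G(2\lambda_1)|V_G(0). \]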
Recall that a rational module is called \emph{tilting} if it has a filtration by Weyl modules and a filtration by duals of Weyl modules. It is well-known (for instance, see \cite{MR1200163}) that the class of tilting modules is closed under taking direct sums and summands. An indecomposable tilting module has a unique highest weight, occurring with multiplicity $1$, and moreover for each dominant $\lambda \in X(T)$ there exists a unique such indecomposable tilting module, denoted $T_{G}(\lambda)$ or $T(\lambda)$ when $G$ is understood.
If the ambient field $K$ has characteristic zero then $T_{G}(\lambda)$, $W_G(\lambda)$ and $V_G(\lambda)$ coincide for each $\lambda$; in general, the module structure of $W_G(\lambda)$ can be quite complicated, although much can still be said without too much effort. For instance, the weights and multiplicities of $W_G(\lambda)$ are independent of the field $K$, and can be calculated using the `Weyl character formula' (see \cite{MR1153249}*{\S 24}).
We will make frequent use of the module $V_{\textup{min}}$, a non-trivial Weyl module for $G$ of least dimension, when $G$ is simply connected of exceptional type, with high weight and dimension as follows: \begin{center}
\begin{tabular}{c|cccc} $G$ & $F_4$ & $E_6$ & $E_7$ & $E_8$ \\ \hline $V_{\textup{min}}$ & $W_G(\lambda_4)$ & $W_G(\lambda_1)$ & $W_G(\lambda_7)$ & $W_G(\lambda_8) = L(G)$\\ $\textup{dim }V_{\textup{min}}$ & $26$ & $27$ & $56$ & $248$ \end{tabular} \end{center} We will give further information on the structure of Weyl modules when discussing representations of reductive groups in Section \ref{sec:algreps}.
\subsection{Automorphisms, endomorphisms, groups of Lie type} With $X$ a semisimple algebraic group over the algebraically closed field $K$, let $\phi \, : \, X \to X$ be an abstract group automorphism of $X$. If $\phi$ is a morphism of varieties, then by \cite{MR0230728}*{10.3} exactly one of the following holds: \begin{itemize} \item The map $\phi$ is an automorphism of algebraic groups, that is, $\phi^{-1}$ is also a morphism, or \item the fixed-point subgroup $X_\phi$ is finite. In this case, $\phi$ is called a \emph{Frobenius morphism}. \end{itemize}
We reserve the term `automorphism' for maps of the first type. As well as inner automorphisms, a simple group of type $A_n$ $(n \ge 2)$, $D_n$ $(n \ge 3)$ or $E_6$ additionally has \emph{graph automorphisms}, induced from symmetries of the Dynkin diagram.
Turning to Frobenius morphisms, when $K$ has positive characteristic $p$, for each power $q$ of $p$ there exists a field automorphism of $K$ sending $x \in K$ to $x^{q}$. This induces a Frobenius morphism $F_q$ of $GL_n(K)$ for each $n > 0$. A surjective endomorphism $\sigma \, : \, X \to X$ is called a \emph{field morphism} or \emph{standard Frobenius morphism} if there exist some $n > 0$, some power $q$ of $p$ and an injective morphism $i \, : \, X \to GL_n(K)$ such that $i(\sigma(x)) = F_q(i(x))$ for each $x \in X$. If $X$ is simple of type $B_2$ or $F_4$ with $p = 2$, or of type $G_2$ with $p = 3$, then $X$ additionally has \emph{exceptional graph morphisms}, which are Frobenius morphisms corresponding to symmetries of the Dynkin diagram when the root lengths are ignored. For every Frobenius morphism $\sigma$, there is an integer $m$ such that $\sigma^{m}$ is a standard Frobenius morphism \cite{MR794307}*{p.\ 31}.
If $\sigma$ is a Frobenius morphism of $G$, the fixed-point set $G_{\sigma}$ is a \emph{finite group of Lie type}, and if $G$ is simple of adjoint type, the group $O^{p'}(G_\sigma)$ generated by elements of order a power of $p$ is usually simple, and is then called a \emph{simple group of Lie type}. For example, if $G = {\rm PGL}_n(K)$ is adjoint of type $A_{n-1}$ and $\sigma = \sigma_q$ is a $q$-power field morphism, then $G_{\sigma} = {\rm PGL}_n(q)$ and $O^{p'}(G_\sigma) = L_n(q)$. When $\sigma$ involves an exceptional graph morphism of $G$, the corresponding finite simple group is called a \emph{Suzuki-Ree group}. Further Frobenius morphisms and groups of Lie type arise through composition with an automorphism of $G$; for instance, if $G = SL_n(K)$ (type $A_{n-1}$), then composition of a $q$-power Frobenius automorphism with a graph automorphism of $G$ gives rise to the finite group $SU_n(q)$, hence to the simple unitary group $U_n(q)$. We let $\textup{Lie}(p)$ denote the collection of finite simple groups of Lie type in characteristic $p$, adopting the convention $\textup{Lie}(0) = \varnothing$.
\section{Finite Subgroups}
\label{sec:generic} \label{sec:nongeneric}
When considering finite subgroups of an algebraic group, Lie primitivity is a natural maximality condition. If $X$ is an algebraic group of positive dimension, and if $S$ is a finite subgroup of $X$, by dimension considerations there exists a positive-dimensional subgroup $Y$ containing $S$ such that $S$ is Lie primitive in $Y$, and then $S$ normalises $Y^{\circ}$. In this way, understanding embeddings of finite groups into $X$ reduces to understanding Lie primitive subgroups of normalisers of connected subgroups; we can then appeal to the broad array of available results concerning connected subgroups.
A natural idea when studying finite subgroups is to make use of the \emph{socle} (product of minimal normal subgroups). A minimal normal subgroup of a finite group is characteristically simple, hence is a product of isomorphic finite simple groups. We then have a dichotomy between the case that the finite subgroup is \emph{local}, that is, normalises a non-trivial elementary abelian subgroup, and the case that the socle is a product of non-abelian simple groups. Local maximal subgroups of exceptional algebraic groups and finite groups of Lie type are well understood; see \cite{MR1132853} for more details.
This dichotomy is typified by the theorem of Borovik below, which reduces the study of Lie primitive finite subgroups of $G$ to \emph{almost simple} groups, that is, those finite groups $H$ satisfying $H_0 \le H \le \textup{Aut}(H_0)$ for some non-abelian simple group $H_0$. In this case, the socle is $\textup{soc}(H) = H_{0}$.
Recall from \cite{MR0379748} that a \emph{Jordan subgroup} of a simple algebraic group $G$ in characteristic $p$ is an elementary abelian $p_0$-subgroup $E$ (where $p_0$ is a prime not equal to $p$) such that: \begin{itemize} \item $N_G(E)$ is finite, \item $E$ is a minimal normal subgroup of $N_G(E)$, and \item $N_G(E) \ge N_G(A)$ for any abelian $p_0$-subgroup $A \unlhd N_G(E)$ containing $E$. \end{itemize}
\begin{theorem}[{\cite{MR1066315}}] \label{thm:boro} Let $G$ be an adjoint simple algebraic group over an algebraically closed field $K$ of characteristic $p \ge 0$, and let $S$ be a Lie primitive finite subgroup of $G$. Then one of the following holds: \begin{itemize} \item[\textup{(i)}] $S \le N_G(E)$ where $E$ is a Jordan subgroup of $G$. \item[\textup{(ii)}] $G = E_8(K)$, $p \neq 2,3,5$ and $\textup{soc}(S) \cong \Alt_5 \times \Alt_6$. \item[\textup{(iii)}] $S$ is almost simple. \end{itemize} \end{theorem} Cases (i) and (ii) here are well understood. For each adjoint simple algebraic group $G$, the Jordan subgroups have been classified up to $\textup{Aut}(G)$-conjugacy by Alekseevskii \cite{MR0379748} when $p = 0$, and by Borovik \cite{MR1065645} when $p > 0$. Case (ii), sometimes known as `the Borovik subgroup', is unique up to conjugacy and is described by Borovik in \cite{MR1066315}. This theorem thus focuses our attention on the almost simple groups in (iii).
If $G$ is a simple algebraic group in positive characteristic $p$ and $X$ is a connected simple subgroup of $G$, then a Frobenius endomorphism of $X$ gives rise to a finite subgroup as in Section \ref{sec:notation}, which will be a (possibly trivial) central extension of an almost simple group. Due to the abundance of such subgroups, we say that an almost simple subgroup $S$ of $G$ is \emph{generic} if $\textup{soc}(S)$ is isomorphic to a member of $\textup{Lie}(p)$ and \emph{non-generic} otherwise.
Generic almost simple subgroups of exceptional simple algebraic groups are mostly well understood. In \cite{MR1458329}*{Theorem 1}, Liebeck and Seitz prove that if $S = S(q)$ is a generic simple subgroup of $G$, then $S$ arises in the manner above if $q$ is larger than an explicit bound depending on the Lie type of $S$ and the root system of $G$ ($q > 9$ is usually sufficient, and $q > 2624$ always suffices). Subject to this bound on $q$, they construct a closed, connected subgroup $\bar{S}$ containing $S$, such that $S$ and $\bar{S}$ fix precisely the same submodules on $L(G)$, and if $S < G_{\sigma}$ for a Frobenius morphism $\sigma$, then $\bar{S}$ is also $\sigma$-stable and $N_G(S)$-stable.
In the same article \cite{MR1458329}*{Theorem 6}, this latter statement is used to deduce information on the subgroup structure of the finite groups of Lie type; roughly, an almost simple subgroup of $G_{\sigma}$ with generic socle is either of the same Lie type as $G$, or arises via the fixed points $X_{\sigma}$ of a maximal closed, connected, reductive $\sigma$-stable subgroup $X$ of $G$. We will apply a similar argument in Section \ref{sec:pfcorfin} to subgroups with non-generic socle.
As mentioned in the introduction, it is now known precisely which non-generic simple groups admit an embedding to each exceptional simple algebraic group, and it remains to classify these embeddings. Theorems \ref{THM:MAIN} and \ref{THM:FIN} are an analogue for non-generic subgroups of the aforementioned results of Liebeck and Seitz. If $S$ is a non-generic finite simple subgroup of the exceptional simple algebraic group $G$, we wish to construct a `sufficiently small' connected subgroup $\bar{S}$ of positive dimension containing $S$. In the generic case, `sufficiently small' was taken to mean that each $S$-submodule of $L(G)$ is $\bar{S}$-invariant. This captures the idea that the representation theory of $S$ is in some sense `close' to that of $\bar{S}$ when considering the action on $L(G)$. In the non-generic setting, we do not expect such a strong result to hold in general. On the other hand, as we shall see in Chapter \ref{chap:stab}, it is often possible to find a subgroup $\bar{S}$ which stabilises `sufficiently many' $S$-submodules, for instance, all those of a particular dimension. This is sufficient to prove Theorem \ref{THM:MAIN}.
\section{Subgroups of Positive Dimension} \label{sec:posdim}
Once a given finite simple group is shown not to occur as a Lie primitive subgroup of $G$, proving the remainder of Theorem \ref{THM:MAIN} and subsequent results requires knowledge of the possible intermediate subgroups of positive dimension which can occur.
\subsection{Maximal connected subgroups} In characteristic zero, connected subgroups of an (affine) algebraic group $G$ are in 1-1 correspondence with Lie subalgebras of $L(G)$ (see \cite{MR0396773}*{\S 13}). Thus Dynkin's classification of the maximal Lie subalgebras of simple Lie algebras over $\mathbb{C}$ \cites{MR0047629,MR0049903} also gives a classification of maximal connected subgroups of a simple algebraic group. This has been extended into positive characteristic by Seitz \cite{MR1048074} and by Liebeck and Seitz \cites{MR1066572,MR2044850}. The following result enumerates the maximal connected subgroups of the exceptional simple algebraic groups, in arbitrary characteristic. Thus a finite subgroup of $G$ which lies in a proper, connected subgroup lies in a conjugate of one of these.
\begin{theorem}[\cite{MR2044850}*{Corollary 2}] \label{thm:connected} Let $G$ be an adjoint exceptional simple algebraic group over an algebraically closed field of characteristic $p \ge 0$, and let $X$ be maximal among connected closed subgroups of $G$. Then either $X$ is parabolic or semisimple of maximal rank, or $X$ appears in the table below, and is given up to ${\rm Aut}(G)$-conjugacy. \end{theorem}
\begin{table}[htbp] \centering
\begin{tabular}{|l|l|l|} \hline $G$ & $X$ simple & $X$ not simple \\ \hline \hline $G_2$ & $A_1$ $(p \ge 7)$ & \\ \hline $F_4$ & $A_1$ $(p \ge 13)$, $G_2$ $(p = 7)$ & $A_1 G_2$ $(p \neq 2)$ \\ \hline $E_6$ & $A_2$ $(p \neq 2,3)$, $G_2$ $(p \neq 7)$, & $A_2G_2$ \\ & $C_4$ $(p \neq 2)$, $F_4$ & \\ \hline $E_7$ & $A_1$ (2 classes, $p \ge 17, 19$ resp.), & $A_1 A_1$ $(p \neq 2,3)$, $A_1 G_2$ $(p \neq 2)$, \\ & $A_2$ $(p \ge 5)$ & $A_1F_4$, $G_2 C_3$ \\ \hline $E_8$ & $A_1$ (3 classes, $p \ge 23,29,31$ resp.), & $A_1A_2$ $(p \neq 2,3)$, \\ & $B_2$ $(p \ge 5)$ & $G_2 F_4$ \\ \hline \end{tabular} \end{table}
Since semisimple subgroups of maximal rank are examples of \emph{subsystem subgroups}, i.e.\ semisimple subgroups normalised by a maximal torus, they can be enumerated by an algorithm of Borel and de Siebenthal \cite{MR0032659}. Similarly, up to conjugacy a parabolic subgroup corresponds to a choice of nodes in the Dynkin diagram; then the parabolic has a non-trivial unipotent radical, whose structure is discussed in Section \ref{sec:radfilt}, and the Dynkin diagram generated by the chosen nodes determines the (reductive) Levi complement.
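For orientation, here is the standard smallest instance of the Borel--de Siebenthal algorithm, for $G$ of type $G_2$ (well-known facts, sketched under the usual convention that the highest root $\tilde{\alpha}$ is long):

```latex
% Extended Dynkin diagram of G_2: the extra node \alpha_0 = -\tilde{\alpha}
% attaches to the long simple root by a single bond.  Deleting one node of
% the extended diagram at a time:
\begin{itemize}
  \item Deleting the short simple root leaves $\{\alpha_0, \alpha_{\mathrm{long}}\}$,
        a diagram of type $A_2$: the long-root subsystem subgroup $A_2 < G_2$.
  \item Deleting the long simple root leaves $\alpha_0$ and $\alpha_{\mathrm{short}}$
        disconnected: the subsystem subgroup $A_1 \tilde{A}_1 < G_2$.
\end{itemize}
```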
\subsection{Parabolic subgroups and complete reducibility} \label{sec:parabs}
Let $G$ be a reductive algebraic group, with maximal torus $T$ and set of roots $\Phi$. Then for each $\alpha \in \Phi$, there is a homomorphism $x_{\alpha} \, : \, (K,+) \to G$ of algebraic groups, whose image $U_{\alpha}$ is $T$-stable; this $U_{\alpha}$ is the \emph{root subgroup} corresponding to $\alpha$. We have $tx_{\alpha}(c)t^{-1} = x_{\alpha}(\alpha(t)c)$ for all $t \in T$ and all $c \in K$.
Let $\Pi$ be a base of simple roots, with corresponding positive roots $\Phi^{+}$. Then a subset $I \subseteq \Pi$ generates a root subsystem $\Phi_I \subseteq \Phi$ in a natural way, and we have a corresponding \emph{standard parabolic subgroup} \[ P_I \stackrel{\textup{def}}{=} \left<T, U_{\alpha},U_{\pm \beta} \ : \ \alpha \in \Phi^{+},\ \beta \in I \right>. \] The unipotent radical of $P_I$ is then \[ Q_I \stackrel{\textup{def}}{=} \left<U_{\alpha} \ : \ \alpha \in \Phi^{+},\ \alpha \notin \Phi_I \right>\] and in $P_I$, the unipotent radical has a reductive closed complement, the \emph{Levi factor} \[ L_I \stackrel{\textup{def}}{=} \left<T, U_{\pm \beta} \ : \ \beta \in I \right> \] whose root system is precisely $\Phi_I$. A \emph{parabolic subgroup} is any $G$-conjugate of a standard parabolic subgroup, and each is conjugate to precisely one standard parabolic subgroup. A \emph{Levi subgroup} is any conjugate of some $L_I$; since each Levi subgroup $L$ contains a maximal torus of $G$, the derived subgroup $L'$ is a subsystem subgroup of $G$.
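A concrete instance may help fix the notation; the following is the standard picture for $G = SL_4$ (type $A_3$) with $I = \{\alpha_1, \alpha_2\}$:

```latex
% G = SL_4 with base {a_1, a_2, a_3} and I = {a_1, a_2}:
\[
P_I = \begin{pmatrix} * & * & * & * \\ * & * & * & * \\ * & * & * & * \\ 0 & 0 & 0 & * \end{pmatrix},
\qquad
Q_I = \begin{pmatrix} 1 & 0 & 0 & * \\ 0 & 1 & 0 & * \\ 0 & 0 & 1 & * \\ 0 & 0 & 0 & 1 \end{pmatrix},
\qquad
L_I = \begin{pmatrix} * & * & * & 0 \\ * & * & * & 0 \\ * & * & * & 0 \\ 0 & 0 & 0 & * \end{pmatrix},
\]
% so L_I' = SL_3 is a subsystem subgroup with root system
% \Phi_I = \{\pm\alpha_1, \pm\alpha_2, \pm(\alpha_1+\alpha_2)\}.
```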
Recall that a subgroup $X$ of a reductive group $G$ is \emph{$G$-completely reducible} ($G$-cr) if, whenever $X$ lies in a parabolic subgroup $P = QL$ of $G$, $X$ then lies in a conjugate of the Levi factor $L$. Similarly, $X$ is \emph{$G$-irreducible} ($G$-irr) if there is no parabolic subgroup of $G$ containing $X$, and \emph{$G$-reducible} otherwise. A finite subgroup which is Lie primitive in $G$ is necessarily $G$-irreducible.
There now exists much literature on the concept of $G$-complete reducibility, particularly with regard to classifying connected $G$-completely reducible subgroups. Such a subgroup is necessarily reductive, by the following theorem of Borel and Tits:
\begin{theorem}[\cite{MR2850737}*{Theorem 17.10}] \label{thm:boreltits} Let $U$ be a unipotent subgroup, not necessarily closed or connected, of a reductive algebraic group $G$. Then there exists a parabolic subgroup $P$ of $G$ with $U \le R_u(P)$ and $N_G(U) \le P$. \end{theorem} In particular, if $X$ is a connected subgroup with non-trivial unipotent radical, then $X$ lies in some parabolic $P$ with $X \cap R_u(P) = R_u(X) \neq 1$, so $X$ is not contained in a Levi factor of $P$.
In \cite{MR1329942}, Liebeck and Seitz produce a constant $N(X,G) \le 7$, for each exceptional simple algebraic group $G$ and each connected simple subgroup type $X$, such that if $p > N(X,G)$ then all subgroups of $G$ of type $X$ are $G$-completely reducible, and this has been further refined by D.\ Stewart \cite{MR3282997}. Theorem \ref{THM:NONGCR} can be viewed as an analogue of this result for finite simple subgroups of $G$.
Knowledge of reductive subgroups, and $G$-cr subgroups in particular, is of use in proving Theorem \ref{THM:MAIN} since the precise action of these subgroups on $L(G)$ and $V_{\textup{min}}$ has either already been determined or is straightforward to determine. For example \cite{MR1329942}*{Tables 8.1-8.7} gives the composition factors of all connected simple subgroups of $G$ of rank $\ge 2$ on $L(G)$ or $V_{\textup{min}}$, provided $p$ is greater than the largest entry of the above table. The action is usually completely reducible, and if not, it is usually straightforward to determine the precise action from the composition factors, using techniques outlined shortly.
\subsection{Irreducibility in classical groups} If $G$ is a simple algebraic group of classical type $A_n$, $B_n$, $C_n$ or $D_n$, then $G$ is closely related to $SL(V)$, $SO(V)$ or $Sp(V)$ for some vector space $V$ with an appropriate bilinear or quadratic form. Then $V$ is isomorphic to the Weyl module $W_G(\lambda_1)$, and parabolic subgroups of $G$ have a straightforward characterisation in terms of the irreducible module $V_G(\lambda_1)$ \cite{MR1329942}*{pp.\ 32-33}:
\begin{lemma} \label{lem:classicalparabs} Let $X$ be a $G$-irreducible subgroup of a simple algebraic group $G$ of classical type, and let $V = V_G(\lambda_1)$. Then one of the following holds: \begin{itemize} \item $G = A_n$ and $X$ is irreducible on $V$, \item $G = B_n$, $C_n$ or $D_n$ and $V \downarrow X = V_1 \perp \dots \perp V_k$ with the $V_i$ all nondegenerate, irreducible and inequivalent as $X$-modules, \item $G = D_n$, $p = 2$ and $X$ fixes a non-singular vector $v \in V$, such that $X$ is then $C_G(v)$-irreducible, where $C_G(v)$ is simple of type $B_{n-1}$. \end{itemize} \end{lemma} Thus determining when a finite simple group $S$ admits a $G$-irreducible embedding into some classical $G$ is straightforward if we have sufficient information about the representation theory of $KS$. For us, the necessary information is available in the literature, and we give a summary in Section \ref{sec:reps}.
\subsection{Internal modules of parabolic subgroups} \label{sec:radfilt} We recall some information from \cite{MR1047327}. With $G$ a semisimple algebraic group, let $P$ be a parabolic subgroup of $G$, and without loss of generality assume that $P = P_I$ is a standard parabolic subgroup for some $I \subseteq \Pi$, with unipotent radical $Q = Q_I$ and Levi factor $L = L_I$ as before.
An arbitrary root $\beta \in \Phi$ can be written as $\beta = \beta_I + \beta_{I'}$ where $\beta_I = \sum_{\alpha_i \in I}c_i\alpha_i$ and $\beta_{I'}=\sum_{\alpha_j \in \Pi - I}d_j\alpha_j$. We then define \begin{align*} \textup{height}(\beta) &= \sum c_i + \sum d_j, \\ \textup{level}(\beta) &= \sum d_j, \\ \textup{shape}(\beta) &= \beta_{I'}, \end{align*}
and for each $i \ge 1$ we define the subgroup $Q(i) = \left<U_{\beta} \ : \ \textup{level}(\beta) \ge i \right>$.
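These definitions are easy to experiment with. The following minimal Python sketch (root data hard-coded for type $A_3$; the helper names are ours, not from the text) computes levels and shapes for the standard parabolic with $I = \{\alpha_1,\alpha_3\}$, whose Levi has type $A_1 A_1$:

```python
# Positive roots of A_3, written as coefficient vectors over the simple
# roots (alpha_1, alpha_2, alpha_3).
pos_roots = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
             (1, 1, 0), (0, 1, 1), (1, 1, 1)]

I = {0, 2}  # the subset I = {alpha_1, alpha_3} of the base

def level(beta):
    # sum of the coefficients of the simple roots *outside* I
    return sum(c for k, c in enumerate(beta) if k not in I)

def shape(beta):
    # the part of beta supported on the simple roots outside I
    return tuple(c if k not in I else 0 for k, c in enumerate(beta))

# Roots generating the unipotent radical Q_I: those of level >= 1.
radical_roots = [b for b in pos_roots if level(b) >= 1]
print(radical_roots)
print({shape(b) for b in radical_roots})
```

All four radical roots have level $1$ and the single shape $\alpha_2$, so here $Q = Q(1)$, and $Q(1)/Q(2)$ is a single $4$-dimensional module $V_S$, as expected for the $(2,2)$-block parabolic of $SL_4$.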
Call $(G,\textup{char }K)$ \emph{special} if it is one of $(B_n,2)$, $(C_n,2)$, $(F_4,2)$, $(G_2,2)$ or $(G_2,3)$.
\begin{lemma}[\cite{MR1047327}] The subgroups $Q(i)$ are each normal in $P$. There is a natural $KL$-module structure on the quotient groups $Q(i)/Q(i+1)$, with decomposition $Q(i)/Q(i+1) = \prod V_S$, the product being over all shapes $S$ of level $i$. Each $V_S$ is an indecomposable $KL$-module of highest weight $\beta$ where $\beta$ is the unique root of maximal height and shape $S$. If $(G,p)$ is not special, then $V_S$ is irreducible. \end{lemma}
Given $G$ and the subset $I$ of simple roots corresponding to $P$, it is a matter of straightforward combinatorics to calculate the modules occurring in this filtration, as laid out in \cite{MR1047327}. A quick summary for the exceptional groups is provided by the following lemma, which is \cite{MR1329942}*{Lemma 3.1}.
\begin{lemma} \label{lem:radfilts} If $P = QL$ is a parabolic subgroup of an exceptional algebraic group $G$ and $L_0$ is a simple factor of $L$, then the possible high weights $\lambda$ of non-trivial $L_0$-composition factors occurring in the module filtration of $Q$ are as follows: \begin{itemize} \item $L_0 = A_n$: $\lambda = \lambda_j$ or $\lambda_{n+1-j}$ $(j = 1,2,3)$; \item $L_0 = A_1$ or $A_2$ ($G = F_4$ only): $\lambda = 2\lambda_1$ or $2\lambda_2$; \item $L_0 = A_1$ ($G = G_2$ only): $\lambda = 3\lambda_1$; \item $L_0 = B_n$, $C_n$ $(G = F_4$, $n = 2$ or $3)$: $\lambda = \lambda_1$, $\lambda_2$ or $\lambda_3$; \item $L_0 = D_n$: $\lambda = \lambda_1$, $\lambda_{n-1}$ or $\lambda_n$; \item $L_0 = E_6$: $\lambda = \lambda_1$ or $\lambda_6$; \item $L_0 = E_7$: $\lambda = \lambda_7$. \end{itemize} \end{lemma}
In Section \ref{sec:complements} we will use this information and some basic cohomology theory to parametrise complements to $Q$ in the subgroup $QX$, for $X$ a finite simple subgroup of $G$ contained in $P$.
\section{Representation Theory of Semisimple Groups}
\subsection{Structure of Weyl Modules} \label{sec:algreps}
The following proposition summarises information on the module structure of Weyl modules which we will use throughout subsequent chapters. This information is well-known (for example, see \cite{MR1901354}), and can quickly be verified with computer calculations, for example using the \emph{Weyl Modules} GAP package of S. Doty \cite{Dot1}.
\begin{proposition} \label{prop:weyls} Let $G$ be a simple algebraic group in characteristic $p \ge 0$, and let $\lambda$ be a dominant weight for $G$. \begin{itemize} \item If $(G,\lambda)$ appears in Table \ref{tab:irredweyls}, then $W_G(\lambda) = V_G(\lambda)$ is irreducible in all characteristics, with the given dimension. \item If $(G,\lambda)$ appears in Table \ref{tab:redweyls}, then $W_G(\lambda)$ is reducible in each characteristic given there, with the stated factors, and is irreducible in all other characteristics. \end{itemize} \end{proposition}
\begin{table}[H]\small \centering \onehalfspacing \caption{Irreducible Weyl modules}
\begin{tabular}{c|c|c} $G$ & $\lambda$ & Dimension \\ \hline $A_n$ & $\lambda_i$ $(i = 1,\ldots,n)$ & $\binom{n+1}{i}$ \\ $B_n$ & $\lambda_n$ & $2^{n}$ \\ $C_n$ & $\lambda_1$ & $2n$ \\ $D_n$ & $\lambda_1$, $\lambda_{n-1}$, $\lambda_n$ & $2n$, $2^{n-1}$, $2^{n-1}$ \\ $E_6$ & $\lambda_1$, $\lambda_6$ & $27$, $27$ \\ $E_7$ & $\lambda_7$ & $56$\\ $E_8$ & $\lambda_8$ & $248$ \end{tabular} \label{tab:irredweyls} \end{table}
Note that for the classical types, the module $W_G(\lambda_1)$ is the natural module. For type $A_n$ we have $W_G(\lambda_i) = V_G(\lambda_i) = \bigwedge^{i} V_G(\lambda_1)$ for $i = 1,\ldots,n$.
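As a quick sanity check on such dimensions, the Weyl dimension formula for type $A_n$ can be evaluated directly. The sketch below is illustrative only (the helper name is ours), working in $e_i$-coordinates for $\mathfrak{gl}_{n+1}$, and recovers $\dim W_G(\lambda_k) = \binom{n+1}{k}$:

```python
from fractions import Fraction
from math import comb

def dim_fundamental_An(n, k):
    # Weyl dimension formula for A_n in e_i-coordinates: lambda_k + rho
    # corresponds to the strictly decreasing tuple a_i = [i < k] + (n - i),
    # and dim = prod_{i < j} (a_i - a_j) / (j - i).
    a = [(1 if i < k else 0) + (n - i) for i in range(n + 1)]
    d = Fraction(1)
    for i in range(n + 1):
        for j in range(i + 1, n + 1):
            d *= Fraction(a[i] - a[j], j - i)
    return int(d)

print(dim_fundamental_An(3, 2))  # 6 = C(4,2)
```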
The composition factors of the adjoint module $L(G)$ are the same as those of $W_G(\lambda)$ as in the following table (for a proof, see \cite{MR1458329}*{Proposition 1.10}): \begin{center}
\begin{tabular}{c|*{9}{c}} Type of $G$ & $A_n$ & $B_n$ & $C_n$ & $D_n$ & $E_6$ & $E_7$ & $E_8$ & $F_4$ & $G_2$ \\ \hline $\lambda$ & $\lambda_1 + \lambda_n$ & $\lambda_2$ & $2\lambda_1$ & $\lambda_2$ & $\lambda_2$ & $\lambda_1$ & $\lambda_8$ & $\lambda_1$ & $\lambda_2$ \end{tabular} \end{center}
\begin{table}[p] \centering \onehalfspacing \caption{Composition Factors of Reducible Weyl modules}
\begin{tabular}{c|c|c|l|l} $G$ & $\lambda$ & $p$ & High Weights & Factor dimensions \\ \hline $A_n$ & $\lambda_1 + \lambda_n$ & $p \mid n+1$ & $0$, $\lambda_1 + \lambda_n$ & $1$, $n^{2} + 2n - 1$ \\
& $2\lambda_1$ $(n > 1)$ & 2 & $\lambda_2$, $2\lambda_1$ & $\binom{n+1}{2}$, $n+1$\\ $B_n$ & $\lambda_1$ & 2 & $0$, $\lambda_1$ & $1$, $2n$ \\
& $\lambda_2$ $(n > 2)$ & 2 & $0^{(n,2)}$, $\lambda_1$, $\lambda_2$ & $1$, $2n$, \\
& & & & $2n^{2} - n - (2,n)$ \\ $D_n$ & $\lambda_2$ & 2 & $0^{(n,2)}$, $\lambda_2$ & $1$, $\binom{2n}{2}-(2,n)$ \\ $A_3$ & $2\lambda_2$ & 2 & $2\lambda_2$, $\lambda_1+\lambda_3$ & $6$, $14$ \\
& & 3 & $0$, $2\lambda_2$ & $1$, $19$ \\ $B_3$ & $2\lambda_1$ & 2 & $0$, $\lambda_1$, $\lambda_2$, $2\lambda_1$ & $1$, $6$, $14$, $6$ \\
& & 7 & $0$, $2\lambda_1$ & $1$, $26$ \\
& $2\lambda_3$ & 2 & $0$, $\lambda_1^{2}$, $\lambda_2$, $2\lambda_3$ & $1$, $6$, $14$, $8$ \\ $B_4$ & $\lambda_3$ & 2 & $0^{2}$, $\lambda_1$, $\lambda_2$, $\lambda_3$ & $1$, $8$, $26$, $48$ \\
& $\lambda_1 + \lambda_4$ & 3 & $\lambda_4$, $\lambda_1+\lambda_4$ & $16$, $112$ \\
& $2\lambda_1$ & 2 & $0^{2}$, $\lambda_1$, $\lambda_2$, $2\lambda_1$ & $1$, $8$, $26$, $8$ \\
& & 3 & $0$, $2\lambda_1$ & $1$, $43$ \\ $C_3$ & $\lambda_3$ & 2 & $\lambda_1$, $\lambda_3$ & $6$, $8$ \\ $C_4$ & $2\lambda_1$ & 2 & $0^{2}$, $2\lambda_1$, $\lambda_2$ & $1$, $8$, $26$ \\
& $\lambda_2$ & 2 & $0$, $\lambda_2$ & $1$, $26$ \\
& $\lambda_4$ & 2 & $\lambda_2$, $\lambda_4$ & $26$, $16$ \\
& & 3 & $0$, $\lambda_4$ & $1$, $41$ \\ $D_4$ & $2\lambda_i$ $(i = 1,3,4)$ & 2 & $0$, $\lambda_2$, $2\lambda_i$ & $1$, $26$, $8$ \\
& $\lambda_i + \lambda_j$, where & 2 & $\lambda_i+\lambda_j$, $\lambda_k$ & $48$, $8$ \\
& $\{i,j,k\} = \{1,3,4\}$ & & \\ $E_6$ & $\lambda_2$ & 3 & $0$, $\lambda_2$ & $1$, $77$ \\ $E_7$ & $\lambda_1$ & 2 & $0$, $\lambda_1$ & $1$, $132$ \\ $F_4$ & $\lambda_1$ & 2 & $\lambda_1$, $\lambda_4$ & $26$, $26$ \\
& $\lambda_4$ & 3 & $0$, $\lambda_4$ & $1$, $25$ \\ $G_2$ & $\lambda_1$ & 2 & $0$, $\lambda_1$ & $1$, $6$ \\
& $\lambda_2$ & 3 & $\lambda_1$, $\lambda_2$ & $7$, $7$ \\
& $\lambda_1 + \lambda_2$ & 3 & $0$, $\lambda_1$, $\lambda_2$, $\lambda_1+\lambda_2$ & $1$, $7$, $7$, $49$ \\
& & 7 & $\lambda_1+\lambda_2$, $2\lambda_1$ & $38$, $26$ \\
& $2\lambda_1$ & 2 & $0$, $\lambda_1$, $\lambda_2$, $2\lambda_1$ & $1$, $6$, $14$, $6$ \\
& & 7 & $0$, $2\lambda_1$ & $1$, $26$ \\
& $3\lambda_{1}$ & 2 & $0^{3}$, $\lambda_1^{2}$, $\lambda_2$, $2\lambda_{1}^{2}$, $3\lambda_1$ & $1$, $6$, $14$, $6$, $36$ \\
& & 3 & $\lambda_1$, $\lambda_2^{2}$, $\lambda_1 + \lambda_2$, $3\lambda_1$ & $7$, $7$, $49$, $7$ \end{tabular} \label{tab:redweyls} \end{table}
Note also that the simply connected groups form a chain \[ F_4(K) < E_6(K) < E_7(K) < E_8(K) \] and up to composition factors, we have the following well-known restrictions (see, for example \cite{MR1329942}*{Tables 8.1-8.4}): \begin{align*} L(E_8) \downarrow E_7 = L(&E_7)/V_{E_7}(\lambda_7)^{2}/0^{3},\\ L(E_7) \downarrow E_6 = L(E_6)/V_{E_6}(\lambda_1)/V_{E_6}(\lambda_6)/0, \quad&\quad V_{E_7}(\lambda_7) \downarrow E_6 = V_{E_6}(\lambda_1)/V_{E_6}(\lambda_6)/0^{2},\\ L(E_6) \downarrow F_4 = L(F_4)/W_{F_4}(\lambda_4), \quad&\quad V_{E_6}(\lambda_1) \downarrow F_4 = W_{F_4}(\lambda_4)/0. \end{align*}
In addition, the longest element $w_{\circ}$ of the Weyl group of $G$ induces the scalar transformation $-1$ on $X(T) \otimes_{\mathbb{Z}} \mathbb{R}$ if $G$ is simple of type $B_n$, $C_n$, $D_{2n}$, $G_2$, $F_4$, $E_7$ or $E_8$. In this case, every irreducible $KG$-module is self-dual, as $V_G(\lambda)^{*} \cong V_G(-w_{\circ}\lambda) = V_G(\lambda)$. If $G$ is instead simple of type $A_n$, $D_{2n+1}$ or $E_6$, then $w_{\circ}$ induces $-\tau$, where $\tau$ corresponds to a non-trivial symmetry of the Dynkin diagram. This gives rise to isomorphisms: \begin{itemize} \item $V_{G}(\lambda_i) \cong V_{G}(\lambda_{n+1-i})^{*}$ for $G$ of type $A_n$ and any $1 \le i \le n$, \item $V_G(\lambda_1) \cong V_G(\lambda_6)^{*}$ for $G$ of type $E_6$, \item $V_G(\lambda_{2n}) \cong V_G(\lambda_{2n+1})^{*}$ for $G$ of type $D_{2n+1}$. \end{itemize} For $G$ of any type, if $\lambda$ is fixed by each graph automorphism of $G$, the module $V_G(\lambda)$ is self-dual.
Finally, recall that a \emph{spin module} for $X$ is the irreducible module $V_X(\lambda_n)$, of dimension $2^{n}$, for $X$ of type $B_n$, or one of the modules $V_X(\lambda_{n-1})$ or $V_X(\lambda_{n})$, of dimension $2^{n-1}$, for $X$ of type $D_n$.
\begin{lemma}[\cite{MR1329942}*{Lemma 2.7}] \label{lem:spins} Let $X = B_{n}$ $(n \ge 3)$ or $D_{n+1}$ $(n \ge 4)$, and let $Y$ be either a Levi subgroup of type $B_{r}$ $(r \ge 1)$ or $D_{r}$ $(r \ge 3)$ of $X$, or a subgroup $B_{n}$ of $X = D_{n+1}$. If $V$ is a spin module for $X$, then all composition factors of $V \downarrow Y$ are spin modules for $Y$. \end{lemma}
\subsection{Tilting Modules}
Tilting modules enjoy several properties that will be of use here. For instance, $L(G)$ and $V_{\textup{min}}$ are often tilting; this only fails for an exceptional simple algebraic group $G$ in the following cases: \begin{itemize}
\item $G = E_{7}$, $p = 2$, $L(G) = V_{G}(\lambda_1)/0$, $T(\lambda_1) = 0|V_{G}(\lambda_1)|0$;
\item $G = E_{6}$, $p = 3$, $L(G) = V_{G}(\lambda_2)/0$, $T(\lambda_2) = 0|V_{G}(\lambda_2)|0$;
\item $G = F_{4}$, $p = 2$, $L(G) = V_{G}(\lambda_1)/V_{G}(\lambda_4)$, $T(\lambda_1) = V_{G}(\lambda_4)|V_{G}(\lambda_1)|V_{G}(\lambda_4)$;
\item $G = F_{4}$, $p = 3$, $V_{\textup{min}} = V_{G}(\lambda_4)/0$, $T(\lambda_4) = 0|V_{G}(\lambda_4)|0$. \end{itemize}
In every scenario encountered here, it is a trivial task to determine the structure of a tilting module $T_{G}(\lambda)$ from the (known) structure of the Weyl modules $W_{G}(\mu)$ for $\mu \le \lambda$. Moreover, the following will be useful in determining the action of subgroups of $G$ on the various $G$-modules. This is parts (i)--(iii) of \cite{MR1200163}*{Proposition 1.2} (cf.\ also \cite{MR1072820}).
\begin{lemma} Let $X$ be a reductive algebraic group. \begin{itemize} \item[\textup{(i)}]If $M$ and $N$ are tilting modules for $X$, then so is $M \otimes N$; \item[\textup{(ii)}] If $V$ is a tilting module for $X$ and $L$ is a Levi subgroup of $X$, then $V \downarrow L$ is a tilting module for $L$; \item[\textup{(iii)}] $T_{X}(\lambda)^{*} = T_{X}(-w_{\circ}\lambda)$; \end{itemize} \label{lem:tilting} \end{lemma}
\subsection{Extensions of Rational Modules} Suppose that a finite subgroup $S$ of the simple algebraic group $G$ is known to lie in some proper, connected subgroup of $G$. Then $S$ lies in either a reductive subgroup of $G$, and then in the semisimple derived subgroup, or in a parabolic subgroup of $G$. In the latter case, $S$ can be studied using its image under the projection to a Levi factor. Thus, as we shall see in Chapter \ref{chap:stab}, useful information on embeddings $S \to G$ can be procured by comparing the possible actions of $S$ on $L(G)$ and $V_{\textup{min}}$ with those of the various semisimple subgroups of $G$ admitting an embedding of $S$.
We therefore require some results on the representation theory of semisimple groups. A comprehensive reference for the material of this section is \cite{MR2015057}.
Let $X$ be a semisimple algebraic group over the algebraically closed field $K$. Recall that for rational $KX$-modules $V$ and $W$, we denote by $\textup{Ext}_X^{1}(V,W)$ the set of all \emph{rational} extensions of $V$ by $W$ up to equivalence.
\begin{lemma}[\cite{MR2015057}*{p.183, Proposition}] \label{lem:exthom} If $\lambda$, $\mu$ are dominant weights for $X$ with $\lambda$ not less than $\mu$, then \[ \textup{Ext}_{X}^{1}(V_X(\lambda),V_X(\mu)) \cong \textup{Hom}_X(\textup{rad}(W_X(\lambda)),V_X(\mu)). \] \end{lemma}
This lemma is particularly useful in light of the information on the structure of Weyl modules above. Another result of use in this direction is the following. \begin{lemma}[\cite{MR1048074}*{Lemma 1.6}] \label{lem:2step} In a short exact sequence of $KX$-modules \[ 0 \to V_X(\lambda) \to M \to V_X(\mu) \to 0, \] one of the following occurs: \begin{itemize} \item[\textup{(i)}] The sequence splits, so $M \cong V_X(\lambda) \oplus V_X(\mu)$, \item[\textup{(ii)}] $\lambda < \mu$ and $M$ is a quotient of the Weyl module $W_X(\mu)$, \item[\textup{(iii)}] $\mu < \lambda$ and $M^{*}$ is a quotient of the Weyl module $W_X(-w_{\circ}\lambda)$. \end{itemize} \end{lemma}
\begin{corollary} \label{cor:2step} If $V$ is a rational $KX$-module with high weights $\{\mu_1, \ldots, \mu_t\}$, and if $W_X(\mu_i)$ has no composition factor of high weight $\mu_j$ whenever $i \neq j$, then $V$ is completely reducible.
In particular, if each $W_X(\mu_i)$ is irreducible, then $V$ is completely reducible. \end{corollary}
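For example (a routine check against Tables \ref{tab:irredweyls} and \ref{tab:redweyls}, recorded here purely for illustration):

```latex
% X of type G_2 with p >= 5: the tables show W_X(\lambda_1) is reducible
% only when p = 2, and W_X(\lambda_2) only when p = 3.  Hence any rational
% KX-module V whose composition factors have high weights among
% {\lambda_1, \lambda_2} is completely reducible:
\[
  V \cong V_X(\lambda_1)^{\oplus a} \oplus V_X(\lambda_2)^{\oplus b}.
\]
```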
Each of these will be useful in narrowing down the possibilities for intermediate subgroups $S < X < G$ when $S$ is known not to be Lie primitive in $G$.
\chapter{Calculating and Utilising Feasible Characters} \label{chap:disproving}
With $G$ an adjoint exceptional simple algebraic group over $K$, our strategy for studying embeddings of finite groups into $G$ is to determine the possible composition factors of restrictions of $L(G)$ and $V_{\textup{min}}$. Here we describe the tools which allow us to achieve this.
\section{Feasible Characters} \label{sec:brauer}
\subsection{Definitions} Let $H$ be a finite group and let the field $K$ have characteristic $p \ge 0$. Recall that to a $KH$-module $V$ we can assign a \emph{Brauer character}, a complex-valued function which encodes information about $V$ in much the same way as an ordinary character encodes information about a module in characteristic zero. Let $n$ be the exponent of $H$ if $p = 0$, or the $p'$-part of the exponent if $p > 0$. Then the eigenvalues of $p$-regular elements of $H$ (those of order coprime to $p$; all of $H$ if $p = 0$) on $V$ are $n$-th roots of unity in $K$. Fix an isomorphism $\phi$ between the group of $n$-th roots of unity in $K$ and in $\mathbb{C}$. The Brauer character of $V$ is then defined by mapping each $p$-regular $h \in H$ to $\sum \phi(\zeta)$, the sum being over the eigenvalues $\zeta$ of $h$ on $V$, counted with multiplicity.
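Concretely, once the isomorphism $\phi$ is fixed, a Brauer character value is obtained by lifting each eigenvalue to $\mathbb{C}$ and summing. The following sketch illustrates this; the function \verb|brauer_value|, its inputs and the closing example are illustrative assumptions, not notation from the text.

```python
import cmath

def brauer_value(exponents, n):
    """Brauer character value of an element whose eigenvalues on V are
    omega^e for e in `exponents`, where omega is a fixed primitive n-th
    root of unity in K: lift each eigenvalue to the complex n-th root of
    unity phi(omega^e) = zeta^e and sum."""
    zeta = cmath.exp(2j * cmath.pi / n)  # phi(omega), a primitive n-th root of unity in C
    return sum(zeta ** e for e in exponents)

# An element of order 5 acting in characteristic 2 with eigenvalues
# omega, omega^2, omega^3, omega^4 (as on a 4-dimensional deleted
# permutation module for a 5-cycle) has Brauer character value -1.
val = brauer_value([1, 2, 3, 4], 5)
```
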
When $K$ has characteristic zero, a Brauer character is simply an ordinary character (a Galois conjugate of the usual character). In general, a Brauer character has a unique expression as a sum of irreducible Brauer characters (those arising from irreducible modules), though the decomposition of the Brauer character into irreducibles only determines the corresponding module up to composition factors, not up to isomorphism.
We now give some crucial definitions, the first of which is found in Frey \cite{MR1617620}. \begin{definition} A \emph{fusion pattern} from $H$ to $G$ is a map $f$ from the $p'$-conjugacy classes of $H$ to the conjugacy classes of $G$, which preserves element orders and is compatible with power maps, i.e.\ for each $i \in \mathbb{Z}$, $f$ maps the class $(x^{i})^{H}$ to the class of $i$-th powers of elements in $f(x^H)$. \end{definition}
\begin{definition} \label{def:feasible} A \emph{feasible decomposition} of $H$ on a finite-dimensional $G$-module $V$ is a $KH$-module $V_0$ such that for some fusion pattern $f$, the Brauer character value of $V_0$ at any $p'$-element $x \in H$ is equal to the trace of the elements of $f(x^{H})$ on $V$. The Brauer character of $V_0$ is then called a \emph{feasible character}. We say that a collection of feasible decompositions or feasible characters of $H$ on various $G$-modules is \emph{compatible} if they all correspond to the same fusion pattern. \end{definition}
Any subgroup $S$ of $G$ gives rise to a fusion pattern (map the $S$-conjugacy class to the $G$-conjugacy class of its elements), and the restriction of any set of finite-dimensional $G$-modules gives a compatible collection of feasible decompositions. Not all feasible characters (or fusion patterns) are necessarily realised by an embedding; however, determining all feasible characters places a strong restriction on possible embeddings. Note that our definition of a feasible character is more restrictive than that given by Cohen and Wales \cite{MR1416728}*{p.\ 113}, since the definition there does not take power maps into account.
\subsection{Determining feasible characters}
Given $G$, a finite group $H$ and a collection $\left\{V_i\right\}$ of rational $KG$-modules, determining the compatible collections of feasible characters of $H$ on $\left\{V_i\right\}$ is a three-step process. \begin{itemize} \item Firstly, we need the Brauer character values of all irreducible $KH$-modules of dimension at most $\textup{Max}(\textup{dim}(V_i))$. The $\{V_i\}$ used here each have dimension at most $248 = \textup{dim } L(E_8)$. The necessary information on Brauer characters either exists in the literature or can be calculated directly. We give more details on this in Section \ref{sec:reps}. \item Secondly, for each $m$ coprime to $p$ such that $H$ has elements of order $m$, we will need to know the eigenvalues of elements of $G$ of order $m$ on each module $V_i$. These can be determined using the weight theory of $G$, which we outline in Section \ref{sec:sselts}. We then take the corresponding sum in $\mathbb{C}$ under the bijection determining the Brauer characters. \item Finally, determining feasible characters becomes a matter of enumerating non-negative integer solutions to simultaneous equations, one equation for each class in $H$. Each solution gives the irreducible character multiplicities in a feasible character. This step is entirely routine, and we give an illustrative example in Section \ref{sec:example}. \end{itemize}
In Chapter \ref{chap:thetables}, proceeding as above we give the compatible feasible characters of each finite simple group $H \notin \textup{Lie}(p)$ on the $KG$-modules $L(G)$ and $V_{\textup{min}}$ defined in Section \ref{sec:algreps}.
\subsection{Irreducible Modules for Finite Quasisimple Groups} \label{sec:reps}
Let $S$ be a finite simple subgroup of a semisimple algebraic group $G$, let $\tilde{G}$ be the simply connected cover of $G$ and let $\tilde{S}$ be a minimal preimage of $S$ under the natural projection $\tilde{G} \twoheadrightarrow G$, so that $\tilde{S}$ is a \emph{cover} (perfect central extension) of $S$. If $\tilde{S} \cong S$ then we have an induced action of $S$ on each $K\tilde{G}$-module and no isogeny issues are encountered. However, if $\tilde{S}$ has non-trivial centre, then in order to make use of any faithful $\tilde{G}$-modules (in particular, the module $V_{\textup{min}}$ for $G = E_6$ and $E_7$), we will need to consider the action of $\tilde{S}$ rather than $S$. Note that in this case, we have $Z(\tilde{S}) \le \text{Ker}(\tilde{G} \twoheadrightarrow G) \le Z(\tilde{G})$.
A perfect finite group has, up to isomorphism, a unique covering group of maximal order, the \emph{universal cover}. All covers are then quotients of this, and the centre of the universal cover is called the \emph{Schur multiplier} of $S$. The Schur multipliers of the finite simple groups are all known.
In order to carry out the calculations described above, for each finite simple group $H$ appearing in Table \ref{tab:subtypes}, we will need to know information on the irreducible $K\tilde{H}$-modules of dimension $\le 248$, where $\tilde{H}$ is a cover of $H$ with $|Z(\tilde{H})| \le 3$. For most such groups encountered here, the necessary Brauer characters appear in the Atlas \cite{MR827219} or the Modular Atlas \cite{MR1367961}. In addition, Hiss and Malle \cite{MR1942256} list, for each finite quasisimple group, every dimension at most $250$ in which it has an absolutely irreducible module (excluding modules in the defining characteristic when the group is of Lie type).
The groups from Table \ref{tab:subtypes} whose Brauer characters are not given explicitly in the Modular Atlas are $\Alt_{n}$ ($13 \le n \le 17$), $L_2(q)$ ($q = 37$, $41$, $49$, $61$), $Fi_{22}$, $Ru$, $Th$, $L_4(5)$ and $\Omega_7(3)$. The modular character tables of $L_{2}(q)$ for $q$ and $p$ coprime are well-known, cf.\ \cite{MR0480710}. For the alternating groups, the following reciprocity result is of use in determining all irreducible Brauer characters of a given degree. \begin{lemma}[\cite{MR860771}*{p.58}] Let $Y \le X$ be finite groups, $V$ a finite-dimensional $KY$-module and $U$ a finite-dimensional $KX$-module. Then \[ \textup{Hom}_{KX}(U, V\uparrow X) \cong \textup{Hom}_{KY}(U \downarrow Y, V). \] In particular, if $U$ is simple and $V$ is any $KY$-module quotient of $U \downarrow Y$, then $U$ is a submodule of $V \uparrow X$. \end{lemma}
In particular, the irreducible Brauer characters of degree $\le 248$ for $\Alt_{n}$ can all be found by inducing and decomposing the irreducible Brauer characters of degree $\le 248$ for $\Alt_{n-1}$. For completeness, in Appendix \ref{chap:auxiliary} we include the Brauer character values used for $\Alt_{13}$ to $\Alt_{17}$ in characteristic $2$.
According to Corollary 4 of \cite{MR1717629} (cf. Table \ref{tab:onlyprim}), the remaining groups from the above list occur only as Lie primitive subgroups of an exceptional algebraic group. Then according to \cite{MR1942256}, the faithful irreducible modules of dimension at most 248 for these groups and their double or triple covers are as follows:
\begin{center}
\begin{tabular}{r|l|c}
\multicolumn{1}{c|}{$H$} & \multicolumn{1}{c|}{Module Dimensions} & Adjoint $G$ containing $H$ \\ \hline $Fi_{22}$ $(p = 2)$ & $78$ & $E_6$ \\ $3.Fi_{22}$ $(p = 2)$ & $27$ (not self-dual) \\ $Ru$ $(p = 5)$ & $133$ & $E_7$ \\ $2.Ru$ $(p = 5)$ & $28$ (not self-dual) \\ $Th$ $(p = 3)$ & $248$ & $E_8$ \\ $L_4(5)$ $(p = 2)$ & $154$, $248$ & $E_8$ \\ $\Omega_7(3)$ $(p = 2)$ & $78$, $90$, $104$ & $E_6$ \\ $3.\Omega_7(3)$ $(p = 2)$ & $27$ (not self-dual) \end{tabular} \end{center}
We see immediately that each embedding of $Fi_{22}$, $\Omega_7(3)$ and $Th$ must be irreducible on both $L(G)$ and $V_{\textup{min}}$. An embedding of $Ru$ into $E_7$ must arise from an embedding of $2\cdot Ru$ into the simply connected group, acting irreducibly on the Lie algebra, and with composition factors $28/28^{*}$ on the 56-dimensional self-dual module $V_{\textup{min}}$. Finally, since $L_4(5)$ has no non-trivial irreducible modules of dimension less than $154$, it must be irreducible on $L(E_8)$ in any embedding, otherwise it would fix a nonzero vector on $L(G)$, and would not be Lie primitive in $G$ by Lemma \ref{lem:properembed} below. This determines all the feasible characters for these groups on $L(G)$ and $V_{\textup{min}}$.
\subsection{Frobenius-Schur indicators} The (Frobenius-Schur) indicator of a $KH$-module $V$ encodes whether the image of $H$ in $GL(V)$ lies in an orthogonal or symplectic group. An irreducible $KH$-module $V$ supports a nondegenerate $H$-invariant bilinear form if and only if $V \cong V^{*}$, and the form is then symmetric or alternating. If $K$ has characteristic $\neq 2$, then $H$ preserves a symmetric form if and only if it preserves a quadratic form. In characteristic $2$, any $H$-invariant bilinear form on $V$ is symmetric, and a nondegenerate $H$-invariant quadratic form on $V$ gives rise to a nondegenerate $H$-invariant bilinear form, but not conversely.
The indicator of $V$ is then defined as \[ \textup{ind}(V) = \left\{ \begin{array}{rll} 0 & (\textup{or }\circ) & \textup{ if } V \ncong V^{*},\\ 1 & (\textup{or } +) & \textup{ if } H \textup{ preserves a nondegenerate quadratic form on } V,\\ -1 & (\textup{or } -) & \textup{ otherwise.} \end{array}\right. \] Thus $V$ has indicator $-$ if and only if $V$ is self-dual, supports an $H$-invariant alternating bilinear form, and does not support an $H$-invariant quadratic form.
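In characteristic zero the indicator of an irreducible module can be computed from its ordinary character via the classical Frobenius--Schur formula $\frac{1}{|H|}\sum_{h \in H} \chi(h^2)$. The following sketch (an illustration, not part of the text's machinery) evaluates this for the $2$-dimensional irreducible character of the symmetric group $S_3$, realised as permutations of three points; in positive characteristic the indicator is determined differently, by the invariant forms as described above.

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

# S3 as all permutations of {0,1,2}; chi is the ordinary character of
# the 2-dimensional irreducible: chi(g) = (number of fixed points) - 1.
group = list(permutations(range(3)))
chi = lambda g: sum(g[i] == i for i in range(3)) - 1

# Classical Frobenius-Schur indicator: (1/|H|) * sum of chi(h^2).
ind = sum(chi(compose(g, g)) for g in group) / len(group)
# ind == 1, so the representation is orthogonal (indicator +).
```
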
\subsection{First cohomology groups} The last piece of information we will use is the \emph{cohomology group} $H^{1}(H,V)$ for various $KH$-modules $V$. Recall that this is the quotient of the additive group $Z^{1}(H,V)$ of 1-cocycles (maps $\phi\, : \, H \to V$ satisfying $\phi(xy) = \phi(x) + x.\phi(y)$) by the subgroup $B^{1}(H,V)$ of 1-coboundaries (cocycles $\phi$ such that $\phi(x) = x.v - v$ for some $v \in V$).
The cohomology group is a $K$-vector space which parametrises conjugacy classes of complements to $V$ in the semidirect product $VH$, via \[ \phi \mapsto \{\phi(h)h \ : \ h \in H\}, \] and also parametrises short exact sequences of $KH$-modules \[ 0 \to V \to E \to K \to 0 \] under equivalence, where an equivalence is an isomorphism $E \to E'$ of $KH$-modules inducing the identity map on $V$ and $K$.
Knowledge of cohomology groups will be useful in determining the existence of fixed points in group actions (see Proposition \ref{prop:substab} for a good example). Computational routines exist for determining the dimension of $H^{1}(H,V)$ (for instance, Magma implements such routines). The information we have made use of here is summarised as follows:
\begin{lemma} \label{lem:reps} Let $K$ be an algebraically closed field of characteristic $p$ such that the simple group $H \notin \textup{Lie}(p)$ embeds into an adjoint exceptional simple algebraic group over $K$ (i.e.\ $H$ appears in Table \ref{tab:subtypes}). Then Tables \ref{tab:altreps} to \ref{tab:ccreps} give every non-trivial irreducible $KH$-module of dimension $\le 248$. \end{lemma} We also give there the Frobenius-Schur indicator $\textup{ind}(V)$ of each module, as well as $\textup{dim}(H^{1}(H,V))$ when this has been used.
Note that the computational packages used to calculate these cohomology group dimensions do not perform calculations over an algebraically closed field, but rather over a \emph{finite} field. This is sufficient for us if we ensure that the field being used is a \emph{splitting field} for the group $H$ (see for example \cite{MR1110581}*{\S 1}). Such fields always exist; for instance if $|H| = p^{e}r$ with $r$ coprime to $p$, then a field containing a primitive $r$-th root of unity suffices.
\subsection{Semisimple Elements of Exceptional Groups} \label{sec:sselts}
In order to calculate feasible characters, we need to know the eigenvalues of semisimple elements of small order in $F_4(K)$, $E_6(K)$, $E_7(K)$ and $E_8(K)$. A semisimple element necessarily has order coprime to the characteristic $p$ of $K$.
Let $H$ be a group such that $(G,H)$ appears in Table \ref{tab:subtypes}. Let $n$ be the $p'$-part of the exponent of $H$. Let $\omega_n$ be an $n$-th root of unity in $K$, let $\zeta_n$ be an $n$-th root of unity in $\mathbb{C}$, and let $\phi$ be the isomorphism of cyclic groups sending $\omega_n$ to $\zeta_n$. For each $m$ dividing $n$, let $\omega_m = \omega_n^{(n/m)}$, and similarly for $\zeta_m$.
If $m > 0$ is coprime to $p$, then an element of order $m$ in $G$ is semisimple, hence lies in some maximal torus $T$. From the existence and uniqueness of the Bruhat decomposition for elements in $G$, it follows (see for example \cite{MR794307}*{Section 3.7}) that two elements of order $m$ in $T$ are conjugate in $G$ if and only if they are in the same orbit under the action of the Weyl group $W = N_G(T)/T$.
Now, let $\{\chi_i\}$ be a free basis of the character group $X(T)$. If $t \in T$ has order $m$, then each $\chi_i(t)$ is a power of $\omega_m$, say $\chi_i(t) = \omega_m^{n_i}$, where $0 \le n_{i} < m$ and $\gcd(n_1,\ldots,n_r,m) = 1$. Conversely, for any $r$-tuple of integers $(n_1,\ldots,n_r)$ satisfying these conditions, there exists $t \in T$ with $\chi_i(t) = \omega_{m}^{n_i}$ (see \cite{MR0396773}*{Lemma 16.2C}), which then has order $m$.
Thus, for a fixed basis of $X(T)$, elements of $T$ of order $m$ correspond to $r$-tuples of integers $(n_1,\ldots,n_r)$ as above, where $r = \textup{rank}(G)$, and the action of $W$ on $T$ induces an action on these. This latter action makes no reference to $K$, only to $X(T)$ and the basis. In particular, if $G_1$ is a simple algebraic group over $\mathbb{C}$ with an isomorphism of root systems $\Phi(G) \to \Phi(G_1)$ identifying the character groups and Weyl groups, then classes of elements of order $m$ in $G$ and in $G_1$ are each in 1-1 correspondence with orbits of the Weyl group on these $r$-tuples, hence are in 1-1 correspondence with each other.
This correspondence respects Brauer character values; suppose $V$ and $W$ are respectively modules for $G$ and $G_1$, whose weight spaces correspond under the isomorphism of root systems. If we express a weight $\lambda$ as a linear combination of basis elements $\lambda = \sum m_i(\lambda)\chi_i$, then an element $g \in G$ which is represented by $(n_1,\ldots,n_r)$ has a corresponding eigenvalue $\prod_{i=1}^{r} \omega_m^{n_i m_i(\lambda)}$, and thus the Brauer character value of $g$ on $V$ is \[ \sum_{\lambda \textup{ a weight of }V} \left(\prod_{i=1}^{r}\zeta_m^{n_i m_i(\lambda)}\right) \] which is equal to the (true) character value of the corresponding elements in $G_1$.
There now exists a well-established theory of elements of finite order in simple complex Lie groups, and in \cite{MR752042}, Moody and Patera provide an algorithmic approach to enumerating elements of finite order, as well as determining their eigenvalues on rational modules. Classes of semisimple elements in a simply connected simple group $G$ over $\mathbb{C}$ are in 1-1 correspondence with $(r+1)$-tuples $(s_0,s_1,\ldots,s_r)$ with $\textup{gcd}(s_0,s_1,\ldots,s_r) = 1$, where $r = \textup{rank}(G)$. Under projection to $G/Z(G)$, elements represented by $(s_0,s_1,\ldots,s_r)$ have order $\sum_{i = 0}^{r} n_is_i$, where $n_0 = 1$ and $\alpha_0 = \sum_{i=1}^{r} n_i \alpha_i$ is the highest root. The full order of the element is also determined by the $s_i$ (see \cite{MR752042}*{Section 4}), and the algorithm additionally tells us precisely when elements represented by distinct $(r+1)$-tuples are conjugate in the adjoint group.
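This parametrisation admits a quick sanity check by direct enumeration. In Kac's classification, classes of elements of order $m$ in the adjoint group correspond to tuples $(s_0,\ldots,s_r)$ of non-negative integers with greatest common divisor $1$ and $s_0 + \sum n_i s_i = m$, where the $n_i$ are the marks of the highest root; since $E_8$ has no diagram automorphisms, no further identification of tuples is needed. The sketch below (an illustration under these assumptions, not the Moody--Patera algorithm itself) counts such tuples for $m = 2, 3$, recovering the two classes of involutions and four classes of order-$3$ elements of $E_8$.

```python
from math import gcd
from functools import reduce

MARKS_E8 = [2, 3, 4, 6, 5, 4, 3, 2]  # coefficients of the E8 highest root (Bourbaki order)

def kac_tuples(m, marks):
    """All non-negative integer tuples (s_0,...,s_r) with gcd 1 and
    s_0 + sum(marks[i] * s_i) == m, parametrising classes of order-m
    elements of the adjoint group."""
    cs = [1] + marks  # c_0 = 1, then the marks
    def rec(i, rem):
        if i == len(cs):
            return [()] if rem == 0 else []
        return [(s,) + t for s in range(rem // cs[i] + 1)
                         for t in rec(i + 1, rem - cs[i] * s)]
    return [t for t in rec(0, m) if reduce(gcd, t) == 1]

n2 = len(kac_tuples(2, MARKS_E8))  # classes of involutions in E8
n3 = len(kac_tuples(3, MARKS_E8))  # classes of elements of order 3
```
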
By implementing the procedure above in Magma, we have calculated the eigenvalues of all semisimple elements of order at most 37 in the simply connected groups $F_4(\mathbb{C})$, $E_6(\mathbb{C})$, $E_7(\mathbb{C})$ and $E_8(\mathbb{C})$ on the adjoint and minimal modules. There are a total of 2,098,586 conjugacy classes of such elements between these groups, and it is impractical to give representatives of each. However, all but a few tables of Chapter \ref{chap:thetables} can be verified using only elements of order at most 7, and the eigenvalues of such elements are already known. In particular, our calculations have been checked against \cite{MR933426} (elements of $E_8$ and simply connected $E_7$) and \cite{MR1416728} (elements of $F_4$ and simply connected $E_6$).
\section{Deriving Theorem \ref{THM:FEASIBLES}}
\subsection{Example: Feasible characters of $\Alt_{17}$ on $L(E_8)$, $p = 2$} \label{sec:example}
We now have sufficient information to derive the feasible characters for each $(G,H,p)$ in Table \ref{tab:subtypes}, which is the content of Theorem \ref{THM:FEASIBLES}. We illustrate with the case $H \cong \Alt_{17}$ and $G = E_8(K)$, where $p = \textup{char}(K) = 2$, on the 248-dimensional adjoint module $L(G)$. It transpires that there is a unique feasible character, up to a permutation of irreducible $H$-modules corresponding to an outer automorphism of $H$. Note that $G$ does indeed have a subgroup isomorphic to $H$, since $G$ has a simple subgroup of type $D_8$, which is abstractly isomorphic to $SO_{16}(K)$ since $p = 2$, and it is well-known that $\Alt_{17}$ has a 16-dimensional irreducible module which supports a nondegenerate quadratic form.
As given in the Appendix (Table \ref{tab:altreps}), in characteristic 2 there is a unique $KH$-module of each dimension 1, 16 and 118 up to isomorphism, and two irreducible modules of dimension 128, which are interchanged by an outer automorphism of $H$. There are no other irreducible $KH$-modules of dimension $\le 248$. The 16-dimensional module is a quotient of the natural 17-dimensional permutation module, the 118-dimensional module is a section of $\bigwedge^{2} 16$ and the 128-dimensional `spin' modules arise from embeddings $\Alt_{17} \le \textup{SO}_{16}^{+}(2) \le {\rm SL}_{2^7}(2)$.
In this case it suffices to consider \emph{rational} elements of $H$, that is, elements which are $H$-conjugate to all their proper powers of the same order. All Brauer character values of rational elements are integers, and the character values of such elements of $H$ can be calculated by hand. It is well-known that the Brauer character of $h \in H$ on the 16-dimensional deleted permutation module is $|\textup{fix}(h)| - 1$, giving also a formula for the Brauer character of the alternating square. The eigenvalues of $h$ on a spin module can be inferred directly from the eigenvalues of $h$ on the 16-dimensional module; this is done, for example, in \cite{MR1057341}*{pp.\ 195-196}. For the elements of orders $3$, $5$ and $7$, we obtain the following:
\begin{center}\begin{tabular}{c|ccccccccccc} $\chi$ & $e$ & $3$ & $3^2$ & $3^3$ & $3^4$ & $3^5$ & 5 & $5^2$ & $5^3$ & $7$ & $7^2$\\ \hline $\chi_{16}$ & $16$ & $13$ & $10$ & $7$ & $4$ & $1$ & $11$ & $6$ & $1$ & $9$ & $2$ \\ $\chi_{118}$ & $118$ & $76$ & $43$ & $19$ & $4$ & $-2$ & $53$ & $13$ & $-2$ & $34$ & $-1$\\ $\chi_{128_a}$, $\chi_{128_b}$ & $128$ & $-64$ & $32$ & $-16$ & $8$ & $-4$ & $-32$ & $8$ & $-2$ & $16$ & $2$ \end{tabular}\end{center}
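The first two rows of this table can be reproduced directly from cycle types: an element fixing $f$ of the $17$ points has Brauer character $f - 1$ on the 16-dimensional module, and the alternating square has Brauer character $(\chi(h)^2 - \chi(h^2))/2$. Since $\bigwedge^{2}(16)$ has dimension $120 = 118 + 2$, we subtract the two remaining composition factors, assumed trivial as the dimensions suggest. A minimal sketch:

```python
# Fixed-point counts of representatives of the rational classes of
# elements of order 3, 5, 7 in Alt(17), labelled as in the table:
fixed = {"3": 14, "3^2": 11, "3^3": 8, "3^4": 5, "3^5": 2,
         "5": 12, "5^2": 7, "5^3": 2, "7": 10, "7^2": 3}

# Deleted permutation module: chi_16(h) = |fix(h)| - 1.
chi16 = {c: f - 1 for c, f in fixed.items()}

# For these odd-order classes h^2 has the same cycle type as h, so
# chi_16(h^2) = chi_16(h).  The alternating square has Brauer character
# (chi(h)^2 - chi(h^2))/2; subtracting the two trivial composition
# factors of the 120-dimensional module wedge^2(16) gives chi_118.
chi118 = {c: (v * v - v) // 2 - 2 for c, v in chi16.items()}
```

Comparing with the table, for example, gives $\chi_{16} = 13$ and $\chi_{118} = 76$ on the class of $3$-cycles.
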
Finding the possible Brauer characters of $L(G) \downarrow H$ then involves finding non-negative integers $a$, $b$, $c$, $d_1$, $d_2$ such that \[ L(G) \downarrow H = 1^a/16^b/118^c/128_a^{d_1}/128_b^{d_2}\] where we are denoting $KH$-modules by their degree. Let $d = d_1+d_2$ be the total number of 128-dimensional factors in the feasible character.
From the calculations described in Section \ref{sec:sselts}, we find that $G$ has two classes of rational elements of order 5; elements of these have traces $-2$ and $23$ on $L(G)$ (see also \cite{MR933426}*{Table 1}). There is also a class of elements with trace $3$ on $L(G)$, however such elements are not rational (they have non-integer trace on the 3875-dimensional $G$-module). As all elements of $H$ of order 5 are rational, each feasible character of $H$ on $L(G)$ must take a value in $\{-2,23\}$ on each such class.
Thus, evaluating the character of $L(G) \downarrow H$ on the classes $e$, $5$ and $5^{2}$ gives the following equations: \begin{align} 248 &= a + 16b + 118c + 128d \\ -2 {\rm\ or\ } 23 &= a + 11b + 53c - 32d \\ -2 {\rm\ or\ } 23 &= a + 6b + 13c + 8d \end{align} By (1) we have $c \le 2$, $d \le 1$. All terms on the right of (3) are non-negative, so its left-hand side must be $23$ rather than $-2$. Subtracting (3) from (1) and dividing by $5$ then gives: \[ 45 = 2b + 21c + 24d \] and so $c$ must be odd, hence $c = 1$. Taking $d = 0$ forces $b = 12$, making (1) inconsistent. Therefore $d = 1$, which gives $b = 0$ and $a = 2$. Thus we have \[ L(G) \downarrow H = 1^2/118/128_a \quad \textup{or}\quad 1^{2}/118/128_b \] which are the same up to the action of an outer automorphism of $H$. This also determines a fusion pattern from $H$ to $G$, which is unique up to interchanging classes according to an outer automorphism of $H$. In similar calculations with $G$ not of type $E_8$, it may be necessary to calculate a feasible character on both $L(G)$ and another non-trivial $KG$-module before a fusion pattern is determined. Here, we have
\begin{center}\begin{tabular}{c|cccccccccccc} $x \in H$ & $e$ & $3$ & $3^2$ & $3^3$ & $3^4$ & $3^5$ & $5$ & $5^2$ & $5^3$ & $7$ & $7^2$\\ \hline $\chi_{L(G) \downarrow H}(x)$ & $248$ & $14$ & $77$ & $5$ & $14$ & $-4$ & $23$ & $23$ & $-2$ & $52$ & $3$ \\ Class in $G$ & 1A & 3C & 3D & 3B & 3C & 3A & 5G & 5G & 5C & 7N & 7H \end{tabular}\end{center} where the conjugacy class labels are taken from \cite{MR933426}*{Table 1}.
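The elimination in the preceding calculation can also be confirmed by brute force: enumerating every non-negative integer solution of equation (1) that satisfies both trace constraints shows that $(a,b,c,d) = (2,0,1,1)$ is the only possibility. A minimal sketch:

```python
# Non-negative solutions of the three trace equations for
# L(E_8) restricted to Alt(17) in characteristic 2:
solutions = [(a, b, c, d)
             for c in range(3)    # 118*c <= 248
             for d in range(2)    # 128*d <= 248
             for b in range(16)   # 16*b  <= 248
             for a in range(249)
             if a + 16*b + 118*c + 128*d == 248       # trace on e
             and a + 11*b + 53*c - 32*d in (-2, 23)   # trace on class 5
             and a + 6*b + 13*c + 8*d in (-2, 23)]    # trace on class 5^2
# solutions == [(2, 0, 1, 1)]
```
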
Given knowledge of the necessary Brauer characters and semisimple elements, identical calculations to the above are possible for each pair $(G,H)$ in Table \ref{tab:subtypes}, as well as their double and triple covers when $G$ is respectively of type $E_7$ and $E_6$. This is entirely routine, and we have used Magma to facilitate calculations and help avoid errors. The results are the tables of feasible characters of Chapter \ref{chap:thetables}.
\section{Finding Fixed Vectors} \label{sec:notprim}
Let $G$ be an exceptional simple algebraic group and let $S$ be a finite quasisimple subgroup of $G$ with $Z(S) \le Z(G)$. Our approach to studying the embedding of $S$ into $G$ is to use the representation theory of $S$ to find a nonzero fixed vector in the action of $S$ on $L(G)$ or $V_{\textup{min}}$, since then $S < C_G(v)$, a closed subgroup, and \[ \textup{dim}(C_G(v)) = \textup{dim}(G) - \textup{dim}(G.v). \] In particular, if $\textup{dim}(V) < \textup{dim}(G)$ then $C_G(v)$ is of positive dimension. If also $v$ is not fixed by $G$, then $C_G(v)$ is proper, and thus $S$ is not Lie primitive in $G$. Note that $G$ can only fix a nonzero vector on $V$ if $(G,V,p) = (E_6,L(G),3)$, $(E_{7},L(G),2)$ or $(F_{4},V_{G}(\lambda_4),3)$.
If $V = L(G)$, then although $\textup{dim}(V) = \textup{dim}(G)$, a similar conclusion nevertheless holds. By the uniqueness of the Jordan decomposition, any endomorphism fixing $v \in L(G)$ must also fix the semisimple and nilpotent parts of $v$. Thus if $S$ fixes a nonzero vector in its action on $L(G)$, then it fixes a nonzero vector which is semisimple or nilpotent. We then appeal to the following result; note that the proof given in \cite{MR1048074} is valid for an arbitrary reductive group $G$. \begin{lemma}[\cite{MR1048074}*{Lemma 1.3}] \label{lem:properembed} Let $0 \neq v \in L(G)$. \begin{itemize} \item[\textup{(i)}] If $v$ is semisimple then $C_G(v)$ contains a maximal torus of $G$. \item[\textup{(ii)}] If $v$ is nilpotent, then $R_u(C_G(v)) \neq 1$ and hence $C_G(v)$ is contained in a proper parabolic subgroup of $G$. \end{itemize} \end{lemma} We are therefore interested in conditions on a feasible character which will guarantee the existence of a fixed vector.
\subsection{Group cohomology}
To begin, recall that if $V$ and $W$ are $KS$-modules then we denote by Ext$_{S}^{1}(V,W)$ the set of equivalence classes of short exact sequences of $S$-modules: \[ 0 \to W \to E \to V \to 0, \] and we have isomorphisms (see \cite{MR2015057}*{I, Chapter 4}): \[ \textup{Ext}_{S}^{1}(V,W) \cong \textup{Ext}_{S}^{1}(K,V^{*} \otimes W) \cong H^{1}(S,V^{*}\otimes W), \] where $H^{1}(S,V^{*}\otimes W)$ is the first cohomology group.
The following result is based on \cite{MR1367085}*{Lemma 1.2}, and is a highly useful tool for deducing the existence of a fixed vector in the action of $S$ on a module $V$.
\begin{proposition} \label{prop:substab} Let $S$ be a finite group and $M$ a finite-dimensional $KS$-module, with composition factors $W_1,\ldots,W_r$, of which $m$ are trivial. Set $n = \sum {\rm dim\ } H^{1}(S,W_i)$, and assume $H^{1}(S,K)=\{0\}$. \begin{itemize} \item[\textup{(i)}] If $n < m$ then $M$ contains a trivial submodule of dimension at least $m - n$. \label{substabi} \item[\textup{(ii)}] If $m = n$ and $M$ has no nonzero trivial submodules, then $H^{1}(S,M) = \{0\}$. \label{substabii} \item[\textup{(iii)}] Suppose that $m = n > 0$, and that for each $i$ we have \[ H^{1}(S,W_i) = \left\{0\right\} \Longleftrightarrow H^{1}(S,W_i^{*}) = \left\{0\right\}. \] Then $M$ has a nonzero trivial submodule or quotient. \label{substabiii} \end{itemize} \end{proposition}
\proof In each case, we proceed by induction on the number $r$ of composition factors of $M$.
(i) If $r = 1$ or if $M$ has only trivial composition factors, the result is immediate. So let $r > 1$ and assume that $M$ has a non-trivial composition factor. Let $W \subseteq M$ be a submodule which is maximal such that $M/W$ has a non-trivial composition factor. Let $n' = \sum \textup{dim}(H^1(S,W_i))$, the sum being over composition factors of $W$, and let $m'$ be the number of trivial composition factors of $W$. If $n - n' < m - m'$, then by induction $M/W$ would have a trivial submodule, contradicting the choice of $W$. Thus $n - n' \ge m - m'$ and so $n' \le m' + (n - m) < m'$; by induction, $W$ contains a trivial submodule of dimension $m' - n' \ge m - n$.
(ii) Suppose that $n = m$, that $M$ has no trivial submodule and that $H^{1}(S,M)$ is nonzero, so that there exists a non-split extension $0 \rightarrow M \rightarrow N \rightarrow K \rightarrow 0.$ Since $M$ contains no trivial submodule, neither does $N$. This contradicts (i), since $N$ has $m + 1$ trivial composition factors, while the sum $\sum{\rm dim\ }H^1(S,W_i)$ over composition factors $W_i$ of $N$ is equal to $m$.
(iii) Now suppose $n = m > 0$. Assume that $M$ has no trivial submodules. We will show that $M$ has a nonzero trivial quotient. Let $N$ be a maximal submodule of $M$. Since $M$ has no nonzero trivial submodules, neither does $N$. Hence $H^{1}(S,M/N) = \{0\}$, otherwise $N$ would have a nonzero trivial submodule by part (i). By induction on $r$, we deduce that $N$ has a nonzero trivial quotient. Let $Q$ be a maximal submodule of $N$ such that $N/Q$ is trivial. Then $M/Q$ is an extension of a trivial module by the irreducible module $M/N$. By our hypothesis on cohomology groups, we have \[ \textup{Ext}_S^{1}(M/N,K) \cong \textup{Ext}_S^{1}(K,(M/N)^{*}) \cong H^{1}(S,(M/N)^{*}) = \{0\}. \] Thus the extension splits and $M$ has a trivial quotient, as required. \qed
\subsection{Some representation theory of finite groups} \label{sec:finreps}
While Proposition \ref{prop:substab} is widely applicable, it is sometimes possible to infer the existence of a fixed vector even when it does not apply. In Section \ref{sec:thealg} we detail a representation-theoretic approach to determining whether a module can exist with a prescribed set of composition factors and no nonzero trivial submodules. To describe this approach properly, we give here a survey of preliminary results from the representation theory of finite groups. A good reference is \cite{MR860771}.
Let $S$ be a finite group and $K$ be an algebraically closed field of characteristic $p$. The group algebra $KS$ admits a $K$-algebra decomposition into indecomposable \emph{block algebras} $B_i$, giving also a decomposition of the identity element: \begin{align*} KS &= B_1 \oplus B_2 \oplus \dots \oplus B_n,\\ e &= e_1 + e_2 + \ldots + e_n. \end{align*} In turn, this gives a canonical direct-sum decomposition of any $KS$-module $M$: \[ M = e_1 M + e_2 M + \dots + e_n M. \] We say that a module $M$ \emph{belongs to the block} $B_i$ if $M = e_i M$ (in which case $e_j M = 0$ for all $j \neq i$). It is immediate that any indecomposable module lies in a unique block, and that if $M$ lies in the block $B_i$, then so do all submodules and quotients of $M$. Hence if we know \emph{a priori} to which block each irreducible $KS$-module belongs, and if we know the composition factors of some $KS$-module, we know that it must split into direct summands accordingly.
Next, recall that the \emph{Jacobson radical} $\textup{rad}(V)$ of a $KS$-module $V$ is the intersection of all maximal submodules of $V$, or equivalently, is the smallest submodule $J$ of $V$ such that $V/J$ is completely reducible; we call $V/\textup{rad}(V)$ the \emph{head} of $V$. Dually, the \emph{socle} $\textup{soc}(V)$ of $V$ is the sum of all irreducible submodules of $V$, or equivalently, is the unique maximal semisimple submodule of $V$.
A $KS$-module is called \emph{projective} if it is a direct summand of a free module, or equivalently, if every surjection onto it splits. We have the following basic facts (see \cite{MR860771}*{Chapter II}): \begin{lemma} \label{lem:projcoverdef} Let $V$ be a finite-dimensional $KS$-module. Then there exists a finite-dimensional projective $KS$-module $P$ such that \begin{itemize} \item[\textup{(i)}] $V/\textup{rad}(V) \cong P/\textup{rad}(P)$. \item[\textup{(ii)}] $P$ is determined up to isomorphism by $V$. \item[\textup{(iii)}] $V$ is a homomorphic image of $P$. \item[\textup{(iv)}] $P/\textup{rad}(P) \cong \textup{soc}(P)$. \item[\textup{(v)}] $\textup{dim}(P)$ is divisible by the order of a Sylow $p$-subgroup of $S$. \end{itemize} \end{lemma} Such a $P$ is then called the \emph{projective cover} of $V$. Projective covers provide a highly useful computational tool for studying the submodule structure of $KS$-modules with known composition factors. Every finite-dimensional projective module is a direct sum of indecomposable projective modules, and these are in 1-1 correspondence with the irreducible $KS$-modules $\{S_i\}$ via $P_i/\textup{rad}(P_i) \cong \textup{soc}(P_i) \cong S_i$. The projective indecomposable modules are the indecomposable direct summands of the free module $KS$, and the projective module $P_i$ occurs precisely $\textup{dim}(S_i)$ times in a direct-sum decomposition of $KS$. We thus obtain the formula
\[ \sum_{i = 1}^{r} \textup{dim}(P_i)\ \textup{dim}(S_i) = \textup{dim}(KS) = |S|. \]
It is well-known that the number of isomorphism types of irreducible $KS$-modules is equal to the number of conjugacy classes of $S$ of elements of order coprime to $p$. Let $m$ be the number of such classes. Then if a $KS$-module $V$ has composition factors $S_1^{r_1}/S_2^{r_2}/.../S_m^{r_m}$ where $r_i \ge 0$, then by the above lemma the projective cover of $V$ has the form \[ P = P_1^{n_1} + P_2^{n_2} + \ldots + P_m^{n_m} \] with $n_i \le r_i$ for each $i$.
Additionally, the Brauer characters of the irreducible $KS$-modules can be used to determine the composition factors of the indecomposable projective modules (cf.\ \cite{MR661045}*{Chapter IV}). If the Brauer character of the irreducible $KS$-module $S_i$ is $\chi_i$, we extend each $\chi_i$ to a class function on $S$ by setting $\chi_i(x) = 0$ whenever $x$ has order divisible by $p$. We can then define the inner product $\left<\chi_i,\chi_j\right> = \frac{1}{|S|}\sum_{x \in S}\chi_i(x)\chi_j(x^{-1})$, as in the case of ordinary characters. The $(m \times m)$ matrix $(\left<\chi_i,\chi_j\right>)_{i,j}$ is invertible, and the $(i,j)$-entry of its inverse is the multiplicity of $S_i$ as a composition factor of $P_j$, which also equals the multiplicity of $S_j$ as a composition factor of $P_i$.
Finally, a result involving the defect group of a $KS$-module tells us that if $p^{e}$ is the order of a Sylow $p$-subgroup of $S$, where $p$ is the characteristic of $K$, and if $p^{e} \mid \textup{dim }S_i$, then $S_i \cong P_i$ is its own projective cover \cite{MR661045}*{Theorem IV.4.5}.
As a quick example, let us determine the structure of the projective indecomposables for $S \cong \Alt_{5}$ with $p = 3$. From \cite{MR1942256}, we know that $S$ has four irreducible modules $S_1$, ..., $S_4$, of dimension $1$, $3$, $3$ and $4$, respectively, each of which is self-dual. Immediately, the 3-dimensional modules are projective since their dimension is divisible by 3 (the order of a Sylow 3-subgroup), and the other two modules are not, since their dimension is not divisible by 3.
If $P_1$ and $P_4$ are the projective covers of the 1- and 4-dimensional $KS$-modules, we have \[ 60 = \textup{dim}(P_1) + 9 + 9 + 4\,\textup{dim}(P_4). \]
Since $3 \mid \textup{dim}(P_i)$ for each $i$, and hence $\textup{dim}(P_1) \ge 3$, the displayed equation forces $\textup{dim}(P_4) \le 9$. As the head and socle of $P_4$ are both isomorphic to $S_4$ while $P_4$ is not irreducible, $P_4$ has composition factors $S_4^{2}/S_1$, and is therefore uniserial (has a unique composition series) with composition series $S_4 | S_1 | S_4$. By the symmetry of the Cartan matrix, $P_1$ then has a single 4-dimensional composition factor, and is thus uniserial of shape $S_1 | S_4 | S_1$.
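To illustrate the inner-product recipe described above, the following minimal numerical sketch recovers the Cartan matrix for this example ($S = \Alt_5$, $p = 3$, modules ordered $1$, $3_a$, $3_b$, $4$). The Brauer character values and class sizes used are the standard ones; the two 3-dimensional characters take the golden-ratio values on the two classes of 5-cycles.

```python
import numpy as np

# Brauer characters of Alt(5) in characteristic 3, on the 3-regular
# classes 1a, 2a, 5a, 5b (all classes of Alt(5) are real, so
# chi(x^{-1}) = chi(x)).
phi = (1 + 5 ** 0.5) / 2
phibar = (1 - 5 ** 0.5) / 2
chars = np.array([
    [1, 1, 1, 1],          # trivial
    [3, -1, phi, phibar],  # 3a
    [3, -1, phibar, phi],  # 3b
    [4, 0, -1, -1],        # 4
])
sizes = np.array([1, 15, 12, 12])  # class sizes of 1a, 2a, 5a, 5b
order = 60                         # |Alt(5)|; the class of 3-cycles
                                   # (20 elements) contributes 0

# Gram matrix of inner products <chi_i, chi_j>, each character having
# been extended by zero on the 3-singular elements.
gram = (chars * sizes) @ chars.T / order

# Its inverse is the Cartan matrix: the (i,j)-entry is the multiplicity
# of S_i as a composition factor of P_j.
cartan = np.linalg.inv(gram).round().astype(int)
print(cartan)
```

The columns for $3_a$ and $3_b$ are unit vectors (these modules are projective), while the columns for $P_1$ and $P_4$ record composition factors $1^{2}/4$ and $1/4^{2}$, matching the uniserial structures just computed.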
Methods for constructing and manipulating projective indecomposable modules have now been implemented in various computational algebra packages. As with calculating cohomology groups, these implementations are designed to work over finite extensions of the prime field $\mathbb{Q}$ or $\mathbb{F}_p$. This is sufficient for determining the submodule structure of the projective indecomposables, in particular their socle series and radical series, since each projective $KS$-module can be obtained by extending scalars from a projective $kS$-module, whenever $k \subset K$ is a splitting field for $S$.
\subsection{Projective covers and fixed vectors} \label{sec:thealg}
As indicated above, the submodule structure of projective indecomposable modules for many of the simple groups in Table \ref{tab:subtypes} can be determined either by hand or using computational techniques. They thus provide a powerful tool for studying the possible structure of a module with known composition factors.
Let $S$ be a finite group, $K$ an algebraically closed field, $\{S_i\}$ the irreducible $KS$-modules and $\{P_i\}$ the corresponding projective indecomposable modules. Let $V$ be a $KS$-module with composition factors $S_1^{r_1}/\ldots/S_m^{r_m}$. As above, the projective cover of $V$ has the form \[ P = P_1^{n_1} + P_2^{n_2} + \ldots + P_m^{n_m} \] with $n_i \le r_i$ for each $i$. To simplify calculations, it will be useful to find smaller upper bounds for the $n_i$ such that $V$ must still be a quotient of $P$.
\begin{lemma} \label{lem:projcover} Let $S_1$, $\ldots$, $S_m$ be the irreducible $KS$-modules and $P_1$, $\ldots$, $P_m$ the corresponding projective indecomposables. Let $V = S_{1}^{r_1}/S_{2}^{r_2}/\ldots/S_{m}^{r_m}$ be a self-dual $KS$-module with no irreducible direct summands, and let $P = \bigoplus P_i^{n(P_i)}$ be the projective cover of $V$.
Then $n(P_i) + n(P_i^{*}) \le r_i$ for all $i$. In particular, $n(P_i) \le r_i/2$ when $S_i$ is self-dual. \end{lemma}
\proof If $\textup{soc}(V) \nsubseteq \textup{rad}(V)$, then we may pick an irreducible submodule $W \subseteq \textup{soc}(V)$ such that $W \,\cap\, \textup{rad}(V) = \left\{0\right\}$. Since $V/\textup{rad}(V)$ is semisimple, we then have the composition of surjective maps \[ V \twoheadrightarrow V/\textup{rad}(V) \twoheadrightarrow W \] whose kernel does not intersect $W$, hence $W$ is an irreducible direct summand, contrary to hypothesis. Therefore we have $\textup{soc}(V) \subseteq \textup{rad}(V)$. As $V$ is self-dual, we have $V/\textup{rad}(V) \cong \textup{soc}(V)^{*}$. Hence we have \[ P/\textup{rad}(P) \cong V/\textup{rad}(V) \cong \textup{soc}(V)^{*} \cong \bigoplus_{i=1}^{m}S_i^{n(P_i)} \] and the result follows as the multiplicity of $S_i$ as a composition factor of $V$ is at least the sum of multiplicities of $S_i$ in $V/\textup{rad}(V)$ and $\textup{soc}(V)$. \qed
Now, suppose that $V = S_1^{r_1}/S_2^{r_2}/\ldots/S_m^{r_m}$ is a self-dual $KS$-module (in practice $V$ will usually be the restriction to $S$ of a self-dual $KG$-module for an algebraic group $G$). Suppose that $V$ has at least one trivial composition factor, but does not necessarily satisfy the hypotheses of Proposition \ref{prop:substab}, and that we wish to deduce that $V$ nevertheless contains a nonzero trivial $KS$-submodule.
Let $W$ be a direct summand of $V$ which is minimal subject to being self-dual and containing all trivial composition factors of $V$. Then $W$ lies in the principal block (that is, the block to which the trivial irreducible module belongs), and has no irreducible direct summands. Suppose, for a contradiction, that $V$ has no nonzero trivial submodule. Then since $W$ is self-dual and has no nonzero trivial submodules, it has no nonzero trivial quotients, and hence the projective cover of $W$ has no projective indecomposable summand corresponding to the trivial module. Applying Lemma \ref{lem:projcover}, we deduce that $W$ is an image of $P = \bigoplus P_i^{m(P_i)}$, where \[ m(P_i) = \left\{ \begin{array}{cl} 0 & : S_i \text{ is trivial or does not lie in the principal block,} \\ \lfloor r_i/2 \rfloor & : S_i \text{ is non-trivial}, S_i \cong S_i^{*} \text{ and $S_i$ lies in the principal block,} \\ r_i & : S_i \text{ is non-trivial}, S_i \ncong S_i^{*} \text{ and $S_i$ lies in the principal block.} \end{array} \right. \]
We therefore proceed by taking this module $P$, and looking for quotients which: \begin{itemize} \item are self-dual, \item have composition factor multiplicities bounded above by those of $V$, \item have precisely as many trivial composition factors as $V$, and \item have no trivial submodules. \end{itemize} If no such quotients exist, then $V$ must contain a nonzero trivial submodule.
We adopt this approach in Proposition \ref{prop:algorithm}, considering pairs $(G,H)$ not satisfying Proposition \ref{prop:substab}(i) or (iii).
\begin{remark} In the course of proving Proposition \ref{prop:algorithm}, we refer on occasion to calculations performed over a finite splitting field for $S$, say $k \subset K$. Since $KS = kS \otimes_{k} K$, it follows that a projective $KS$-module $P$ is equal to $P_{0} \otimes_{k} K$, where $P_{0}$ is a projective $kS$-module. However, it need not be the case that every $KS$-module quotient of $P$ is obtained by extending scalars from a $kS$-module quotient of $P_{0}$. Here, the only calculations performed over a finite field are the determination of the socle and radical series of $P$, and of quotients $P/M$, where $M$ is the smallest submodule of $P$ such that $P/M$ has no composition factors of a given isomorphism type. Such an $M$ is equal to $M_{0} \otimes_{k} K$, where $M_{0}$ is the smallest submodule of $P_{0}$ such that $(P_{0}/M_{0}) \otimes_{k} K$ has no composition factors of the given isomorphism type. It suffices to work over a finite splitting field for these calculations. \end{remark}
\subsection{Connectedness of Proper Overgroups}
Once we have deduced that a finite simple group $H$ does not occur as a Lie primitive subgroup of the adjoint exceptional simple algebraic group $G$, we must next show that each subgroup $S \cong H$ of $G$ lies in a proper \emph{connected} subgroup of $G$. Proposition \ref{prop:consub} below guarantees this in all but a few cases. We begin with a preliminary lemma.
\begin{lemma} \label{lem:maxtorus} Let $S$ be a non-abelian finite simple subgroup of an adjoint exceptional simple algebraic group $G$. Suppose that $S$ normalises a maximal torus $T$ of $G$, so that $S$ is isomorphic to a subgroup of $W(G) \cong N_G(T)/T$. Then either $S$ lies in a proper subsystem subgroup of $G$, or $G = E_6$ and $S \cong U_{4}(2)$, or $G = E_7$ and $S \cong L_2(8)$, $U_3(3)$ or $Sp_6(2)$. \end{lemma}
\proof The exceptional Weyl groups and their subgroup structure are well known (for instance, see \cite{MR2562037}*{\S\S 2.8.4, 3.12.4}). The Weyl groups of type $G_2$ and $F_4$ are soluble, and hence $G$ is not one of these types. The remaining groups are $W(E_6)$, which has a subgroup of index 2 isomorphic to $U_{4}(2)$; $W(E_7) \cong 2 \times Sp_6(2)$; and $W(E_8) \cong 2\cdot\Omega_8^{+}(2).2$. The maximal subgroups of each classical group here appear in the Atlas \cite{MR827219}.
If $G$ is of type $E_6$, then besides the subgroup $U_{4}(2)$ as in the statement of the lemma, there are three conjugacy classes of non-abelian simple subgroups of $W = W(E_6)$. There are two classes of subgroups isomorphic to $\Alt_5$, and each such subgroup lies in a subgroup isomorphic to $\Alt_6$, which is unique up to $W$-conjugacy. On the other hand, $G$ has a subsystem subgroup of type $A_5$, giving rise to a class of subgroups of $W$ which are isomorphic to $W(A_5) \cong \textup{Sym}_6$. Thus each simple alternating subgroup of $W$ lies in one of these, and if $S \cong \Alt_5$ or $\Alt_6$ then $TS$ lies in a Levi subgroup of $G$ of type $A_5$, as required.
Similarly, for $G = E_7$ we have $W \cong 2 \times Sp_{6}(2)$, and the non-abelian simple subgroups of $W$ are either isomorphic to one of $L_2(8)$, $U_3(3)$ or $Sp_6(2)$, which appear in the statement of the lemma, or lie in the Weyl group of a subsystem subgroup, call it $X$. The possibilities are $S \cong \Alt_5$ or $\Alt_6$ with $X$ of type $D_6$; $L_{2}(7)$, $\Alt_{7}$ or $\Alt_8$ with $X$ of type $A_7$; or $U_{4}(2)$ with $X$ of type $E_6$.
Similarly, for $G = E_8$, $W$ is a double cover of $\Omega_{8}^{+}(2):2$. The non-abelian simple subgroups of $W$ each lie in either a subgroup $L_2(7) \le W(D_8)$ (2 classes), $Sp_6(2) \le W(E_7)$, $\Alt_9 \le W(A_8)$, or $\Alt_5 \le W(A_4 A_4)$ (2 classes), and in each case $S$ lies in a proper subsystem subgroup. \qed
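The order bookkeeping underlying this proof can be checked directly. In the following sketch the group orders are the standard ones (e.g.\ as in the Atlas), with the classical-group orders computed from the usual formulas:

```python
# Orders of the exceptional Weyl groups.
W_E6, W_E7, W_E8 = 51840, 2903040, 696729600

# Orders of the classical groups appearing in the identifications
# W(E6) > U4(2) (index 2), W(E7) = 2 x Sp6(2), W(E8) = 2.O8+(2).2.
U4_2 = 25920
Sp6_2 = 2 ** 9 * (2 ** 2 - 1) * (2 ** 4 - 1) * (2 ** 6 - 1)
Om8_2 = 2 ** 12 * (2 ** 4 - 1) ** 2 * (2 ** 2 - 1) * (2 ** 6 - 1)

# Indices of the simple groups in the respective Weyl groups.
print(W_E6 // U4_2, W_E7 // Sp6_2, W_E8 // Om8_2)  # prints: 2 2 4
```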
\begin{proposition} Let $S$ be a non-abelian finite simple subgroup of an adjoint exceptional simple algebraic group $G$ which is not Lie primitive in $G$. Then either $S$ lies in a proper connected subgroup of $G$, or the type of $G$ and the isomorphism type of $S$ appear in the table below. \label{prop:consub} \end{proposition} \begin{center}
\begin{tabular}{c|c} $G$ & Type of $S$ \\ \hline $E_{6}$ & $U_{4}(2)$ \\ $E_{7}$ & $L_{2}(7)$, $L_{2}(8)$, $U_{3}(3)$, $Sp_{6}(2)$ \\ $E_{8}$ & $\Alt_{5}$, $L_{2}(7)$ \end{tabular} \end{center}
\proof Since $S$ is not Lie primitive in $G$, it lies in some maximal closed subgroup $X$ of positive dimension, and normalises the identity component $X^{\circ}$. If $S$ does not lie in $X^{\circ}$, then the image of $S$ under $X \to X/X^{\circ}$ is isomorphic to $S$. The possible maximal closed subgroups $X$, as well as $N_{G}(X)/N_{G}(X)^{\circ}$, are given by \cite{MR2044850}*{Corollary 2(i)}. In particular, the subgroups such that $N_{G}(X)/N_{G}(X)^{\circ}$ contains a finite simple group are either maximal tori, in which case $S$ is isomorphic to a subgroup of the Weyl group as in Lemma \ref{lem:maxtorus}, or $(G,X,S) = (E_{7},A_{1}^{7},L_{2}(7))$, $(E_{8},A_{1}^{8},L_{2}(7))$ or $(E_{8},A_{1},\Alt_{5})$, where this last subgroup exists only if $p \neq 2,3,5$. \qed
\section{Lie Imprimitivity of Subgroups in Theorem \ref{THM:MAIN}: Standard Cases}
In this section, we prove that if a triple $(G,H,p)$ appears in Table \ref{tab:main}, then any subgroup $S \cong H$ of $G$ must lie in a proper, connected subgroup of $G$. We split the proof into two general propositions, as well as some cases requiring ad-hoc arguments. Recall that $\tilde{G}$ denotes the simply connected cover of $G$, and that $V_{\textup{min}}$ is a Weyl module for $\tilde{G}$ of least dimension; this has highest weight $\lambda_4$, $\lambda_1$, $\lambda_7$ or $\lambda_8$, and dimension $26$, $27$, $56$ or $248$, for $G$ respectively of type $F_4$, $E_6$, $E_7$ or $E_8$.
\begin{proposition} \label{prop:notprim} Let $H$ be a finite simple group, not isomorphic to a member of $\textup{Lie}(p)$, and let $S \cong H$ be a finite simple subgroup of the adjoint exceptional simple algebraic group $G$, in characteristic $p \ge 0$. Let $\tilde{S}$ be a minimal preimage of $S$ in the simply connected cover $\tilde{G}$ of $G$, and let $L$ and $V$ respectively denote the quotient of $L(G)$ and $V_{\textup{min}}$ by any trivial $\tilde{G}$-submodules.
If $(G,H,p)$ appears in Table \ref{tab:substabprop}, then $\tilde{S}$ fixes a nonzero vector on $L$, $V$ or $V^{*}$, and is therefore not Lie primitive in $\tilde{G}$.
Furthermore, $\tilde{S}$ lies in a proper connected subgroup of $G$. \end{proposition}
\begin{table}[htbp] \caption{{Subgroup types satisfying Proposition \ref{prop:substab}}} \label{tab:substabprop} \centering \small \onehalfspacing
\begin{tabularx}{\linewidth}{l|>{\centering\arraybackslash}X} \hline $G$ & $H$ \\ \hline $F_4$ & $\Alt_{7-10}$, $M_{11}$, $J_1$, $J_2$,\\ & $L_2(17) \ (p = 2)$, $U_3(3) \ (p \neq 7)$ \\ \hline $E_6$ & $\Alt_{10-12}$, $M_{22}$, $L_2(25)$, $L_4(3)$, $U_4(2)$, $U_4(3)$, ${}^{3}D_4(2)$,\\ & $\Alt_{5} \ (p \neq 3)$, $\Alt_7 \ (p \neq 3, 5)$, $M_{11} \ (p \neq 3, 5)$, $M_{12} \ (p = 2)$, $L_2(8) \ (p = 7)$, $L_2(11) \ (p = 5)$, $L_2(13) \ (p = 7)$, $L_2(17) \ (p = 2)$, $L_2(27) \ (p \neq 2)$, $L_3(3) \ (p = 2)$, $U_3(3) \ (p = 7)$ \\ \hline $E_7$ & $\Alt_{11-13}$, $M_{11}$, $J_1$, $L_2(17)$, $L_2(25)$, $L_3(3)$, $L_4(3)$, $U_4(2)$, $Sp_{6}(2)$, $^{3}D_4(2)$, $^{2}F_{4}(2)'$,\\ & $\Alt_{10}$ $(p = 2)$, $\Alt_{9} \ (p = 2)$, $\Alt_{8} \ (p \neq 3,5)$, $\Alt_{7} \ (p \neq 5)$, $M_{12}$ $(p \neq 5)$, $J_2$ $(p \neq 2)$, $L_2(8) \ (p \neq 3,7)$ \\ \hline $E_8$ & $\Alt_{17}$, $\Alt_{12-15}$, $\Alt_{8}$, $M_{12}$, $J_1$, $J_2$, $L_2(27)$, $L_2(37)$, $L_4(3)$, $U_{3}(8)$, $Sp_{6}(2)$, $\Omega_{8}^{+}(2)$, $G_{2}(3)$,\\ & $\Alt_{11}$ $(p = 2)$, $\Alt_{9} \ (p \neq 2,3)$, $M_{11} \ (p \neq 3,11)$, $U_{3}(3)$ $(p \neq 7)$, $^{2}F_{4}(2)'$ $(p \neq 3)$ \\ \hline \end{tabularx} \end{table}
\proof Since the composition factors of $L \downarrow \tilde{S}$ and $V \downarrow \tilde{S}$ appear in the appropriate table in Chapter \ref{chap:thetables}, proving that $\tilde{S}$ is not Lie primitive in $\tilde{G}$ comes down to inspecting the corresponding table and comparing this with the information in Appendix \ref{chap:auxiliary} to decide whether the conditions in Proposition \ref{prop:substab}(i) or (iii) hold for $L \downarrow \tilde{S}$ or $V \downarrow \tilde{S}$. For convenience, in Chapter \ref{chap:thetables} we have labelled with `\textbf{P}' those feasible characters for which we \emph{cannot} infer the existence of a fixed vector using Proposition \ref{prop:substab}. Thus $(G,H,p)$ appears in Table \ref{tab:substabprop} if and only if the corresponding table in Chapter \ref{chap:thetables} has no rows labelled `\textbf{P}'.
As a typical example, take $G = E_6$, $H = U_3(3)$, $p = 7$. Here, we have four pairs of compatible feasible characters on $L(G)$ and $V_{\textup{min}}$, given by Table \ref{u33e6p7} on page \ref{u33e6p7}. As stated in Table \ref{tab:sporadicreps} of the Appendix, we know that $H^{1}(U_3(3),26)$ is 1-dimensional, while the corresponding group for other composition factors vanishes. Thus any subgroup $S \cong H$ of $G$, having composition factors as in Cases 2), 3) or 4) of Table \ref{u33e6p7}, satisfies Proposition \ref{prop:substab}(i) in its action on $L = L(G)$ and fixes a nonzero vector.
In Case 1), the feasible character has no trivial composition factors on $L(G)$. On the other hand, the corresponding composition factors of $V = V_{27} = V_{\textup{min}}$ are `$1$' and `$26$'. Hence by Proposition \ref{prop:substab}(iii), if $S \cong H$ gives rise to these feasible characters, a preimage $\tilde{S}$ of $S$ in $\tilde{G}$ must fix a nonzero vector on either $V$ or its dual, as required.
Now, with the exception of $(G,H) = (E_{7},Sp_{6}(2))$ or $(E_{7},L_{2}(8))$, Proposition \ref{prop:consub} applies and so $\tilde{S}$ lies in a proper, connected subgroup of $\tilde{G}$. To show that the same holds for these two cases, for a contradiction assume that $S$ lies in no proper connected subgroup of $G$. Inspecting Tables \ref{l28e7p0}, \ref{sp62e7p0}, \ref{sp62e7p7}, \ref{sp62e7p5}, \ref{sp62e7p3}, we see that $S$ fixes a nonzero vector $v \in L(G)$. Since $S$ lies in no parabolic subgroup of $G$ by assumption, by Lemma \ref{lem:properembed} it follows that $v$ is semisimple and $C_G(v)$ contains a maximal torus, say $T$. Then $S$ normalises $C_{G}(v)^{\circ}$ and moreover, since $H$ does not occur as a subgroup of $\textup{Sym}_{7}$, the proof of Proposition \ref{prop:consub} shows that $S$ cannot normalise a non-trivial connected semisimple subgroup of $G$. It follows that $C_{G}(v)^{\circ} = T$ is a maximal torus of $G$. Now since $H = Sp_{6}(2)$ or $L_2(8)$ and $H \notin \textup{Lie}(p)$, the ambient characteristic $p$ is not $2$, hence a non-trivial $KH$-module of least dimension is $7$-dimensional. Since $S$ normalises $T$ it follows that $S$ acts irreducibly on $L(T)$. Now, we have a well-known decomposition \[ L(G) \downarrow T = L(T) \oplus \bigoplus_{\alpha \in \Phi} L_{\alpha} \] where $\Phi$ is the set of roots corresponding to $T$ and each $L_{\alpha}$ is a non-trivial $1$-dimensional $T$-module. Since $S$ acts irreducibly on $L(T)$ it follows that $TS$ cannot fix a nonzero vector on $L(G)$, which contradicts $TS \le C_G(v)$. \qed
\begin{proposition} \label{prop:algorithm} With the notation of the previous Proposition, if $(G,H,p)$ appears in Table \ref{tab:algprop}, then $\tilde{S}$ lies in a proper, connected subgroup of $G$. \end{proposition}
\begin{table}[htbp] \caption{Further simple groups not arising as Lie primitive subgroups of $G$} \label{tab:algprop}
\begin{tabular}{c|l} \hline $G$ & $(H,p)$ \\ \hline $F_4$ & $(\Alt_{6},5)$, $(\Alt_{5},p \neq 2,5)$, $(L_2(7),3)$ \\ $E_6$ & $(\Alt_{9},2)$, $(L_2(7),3)$, $(L_2(27),2)$ \\ $E_8$ & $(\Alt_{10},3)$, $(\Alt_{10},2)$ \\ \hline \end{tabular} \end{table}
\proof For each $(G,H,p)$, we let $S$ be a hypothetical Lie primitive subgroup of $G$, and derive a contradiction by showing that a minimal pre-image $\tilde{S}$ of $S$ in $\tilde{G}$ must fix a nonzero vector on some non-trivial $\tilde{G}$-composition factor of $L(G)$ or $V_{\textup{min}}$, using the approach described in Section \ref{sec:thealg}. Since Proposition \ref{prop:consub} applies to each $(G,H)$ in Table \ref{tab:algprop}, the conclusion follows. Since $S$ is Lie primitive, recall that the composition factors of $L(G) \downarrow \tilde{S}$ and $V_{\textup{min}} \downarrow \tilde{S}$ are given by a row in the appropriate table of Chapter \ref{chap:thetables} which is marked with `\textbf{P}'.
\subsection*{Case: $(G,H,p) = (F_4,\Alt_{6},5)$} As given in Table \ref{tab:altreps}, there are four non-trivial irreducible $KH$-modules, of dimensions 5, 5, 8 and 10. Since $|H| = 2^{3}.3^{2}.5$, the 5- and 10-dimensional modules are projective. If $P_1$ and $P_8$ are respectively the projective covers of the trivial and 8-dimensional irreducible modules, we have
\[ |H| = 360 = \textup{dim}(P_1) + 25 + 25 + 8\,\textup{dim}(P_8) + 100 \]
and thus $\textup{dim}(P_8) \le 26$. Since $5 \mid \textup{dim}(P_{8})$, and since the head and socle of $P_8$ are both isomorphic to the 8-dimensional module while $P_8$ is not irreducible, $P_8$ has at least two 8-dimensional composition factors, hence has composition factors $1^{4}/8^{2}$, $1^{9}/8^{2}$ or $1/8^{3}$. By the symmetry of the Cartan matrix, the multiplicity of the trivial module in $P_8$ equals the multiplicity of the 8-dimensional module in $P_1$. In the case $1^{9}/8^{2}$, the above equation gives $\textup{dim}(P_1) = 10$, while $P_1$ would contain nine 8-dimensional composition factors; a contradiction. In the case $1^{4}/8^{2}$, the above equation gives $\textup{dim}(P_1) = 50$, so that the principal block has Cartan matrix $\left(\begin{smallmatrix} 18 & 4 \\ 4 & 2 \end{smallmatrix}\right)$, whose determinant $20$ is not a power of $5$; this contradicts the fact that the determinant of the Cartan matrix of $KS$ is a power of $p$. Denoting modules by their dimension, we therefore deduce that $P_8 = 8|(1+8)|8$, $P_1 = 1|8|1$.
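As a consistency check, these structures give $\textup{dim}(P_1) = 10$ and $\textup{dim}(P_8) = 25$, and the dimension formula $\sum_i \textup{dim}(P_i)\,\textup{dim}(S_i) = |S|$ stated earlier is satisfied:

```latex
\[
\sum_{i} \textup{dim}(P_i)\,\textup{dim}(S_i)
  = 10\cdot 1 + 5\cdot 5 + 5\cdot 5 + 25\cdot 8 + 10\cdot 10
  = 10 + 25 + 25 + 200 + 100 = 360 = |H|.
\]
```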
Suppose that $S \cong H$ is a Lie primitive subgroup of $G$, so its composition factors on $L(G)$ and $V_{G}(\lambda_4)$ are given by Case 1) of Table \ref{a6f4p5}, and $V_G(\lambda_4) \downarrow S$ has no nonzero trivial submodules. If $V_{G}(\lambda_4) \downarrow S$ had an 8-dimensional $S$-direct summand, then its complement would satisfy Proposition \ref{prop:substab}(iii), and $S$ would fix a nonzero vector. Thus $V_G(\lambda_4) \downarrow S$ must be indecomposable, and is therefore an image of the projective module $P_8$ by Lemma \ref{lem:projcover}. But this has only a single trivial composition factor; a contradiction.
\subsection*{Case: $(G,H,p) = (F_4,\Alt_{5},p \neq 2,3,5)$} Here there is a unique compatible pair of fixed-point-free feasible characters of $H$ on $L(G)$ and $V_{\textup{min}}$. It is proved in \cite{MR2638705}*{pp.\ 117--118} that a subgroup $\Alt_{5}$ having these composition factors on these modules is not Lie primitive (more precisely, the subgroup stabilises a certain subalgebra of a 27-dimensional `Jordan algebra' on which $G$ acts; the stabiliser of such a structure is a positive-dimensional subgroup of $G$).
\subsection*{Case: $(G,H,p) = (F_4,\Alt_{5},3)$} The projective $KH$-modules are $3_{a}$, $3_{b}$, $P_{1} = 1|4|1$ and $P_{4} = 4|1|4$, where the latter two are uniserial. A Lie primitive subgroup $S \cong H$ of $G$ must act on $L(G)$ with composition factors as in Case 2) or 3) of Table \ref{a5f4p3}, fixing no nonzero vector. Now, all the trivial composition factors of $S$ on $L(G)$ and all the composition factors `4' must occur in a single indecomposable $S$-direct summand $W$, otherwise Proposition \ref{prop:substab}(iii) would apply to an $S$-direct summand of $L(G)$. Furthermore $W$ is self-dual since $W^{*}$ is also a direct summand of $L(G)$ with trivial composition factors. By Lemma \ref{lem:projcover} therefore, $W$ is an image of $P_4^{3}$. But this has only three trivial composition factors; a contradiction.
\subsection*{Case: $(G,H,p) = (F_4\ \textup{or}\ E_{6},L_2(7),3)$} The projective $KH$-modules here are $3_{a}$, $3_{b}$, $6$, $P_{1} = 1|7|1$ and $P_7 = 7|1|7$. Let $L = L(G)$ if $G$ has type $F_{4}$, or $V_{G}(\lambda_2)$, of dimension 77, if $G$ has type $E_{6}$. Using Tables \ref{l27f4p3} and \ref{l27e6p3}, we see that $L \downarrow S$ has four trivial composition factors, and a similar argument to the above shows that if $S \cong H$ is a subgroup of $G$ fixing no nonzero vectors on $L$, then $L \downarrow S$ has an indecomposable self-dual direct summand $W$ containing all trivial composition factors and at least four 7-dimensional composition factors. This summand is then an image of $P_{7}^{3}$ by Lemma \ref{lem:projcover}. But this has only three trivial composition factors; a contradiction.
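These projective structures are consistent with the dimension formula $\sum_i \textup{dim}(P_i)\,\textup{dim}(S_i) = |H|$: here $\textup{dim}(P_1) = 9$ and $\textup{dim}(P_7) = 15$, while $3_a$, $3_b$ and $6$ are their own projective covers, so

```latex
\[
1 \cdot 9 + 3\cdot 3 + 3\cdot 3 + 6\cdot 6 + 7\cdot 15
  = 9 + 9 + 9 + 36 + 105 = 168 = |L_2(7)|.
\]
```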
\subsection*{Case: $(G,H,p) = (E_6,\Alt_{9},2)$} Here a Lie primitive subgroup $S \cong H$ of $G$ must give rise to Case 1) of Table \ref{a9e6p2}. Since $V_{27} \downarrow \tilde{S}$ then only has two composition factors, one of which is trivial, it follows that either $V_G(\lambda_1)$ or its dual $V_G(\lambda_6)$ has a nonzero trivial $\tilde{S}$-submodule; a contradiction.
\subsection*{Case: $(G,H,p) = (E_6,L_2(27),2)$}
Here a Lie primitive subgroup $S \cong H$ of $G$ must give rise to Case 1) of Table \ref{l227e6p2}. Since $\tilde{S}$ fixes no nonzero vector on $V_{G}(\lambda_1)$ or its dual, we see that $V_{G}(\lambda_1) \downarrow \tilde{S} = 13|1|13^{*}$ or $13^{*}|1|13$. This is therefore an image of $P_{13}$ or $P_{13}^{*}$.
Now, the six 28-dimensional irreducible $KH$-modules are projective. Calculations with dimensions, and the fact that $P/\textup{rad}(P) \cong \textup{soc}(P)$ for any projective $KH$-module, quickly show that the projective cover $P_{26_a} = 26_a|26_a$ is uniserial, and similarly for $26_b$ and $26_c$; these involve no 13-dimensional factors, hence $P_{13}$ and $P_{13^{*}}$ have no 26-dimensional composition factors. Since $P_{1}$ is self-dual and involves only the modules $1$, $13$, $13^{*}$, dimension considerations imply that $P_1 = 1|(13 + 13^{*})|1$, and it follows that $P_{13} = 13|(1 + 13^{*})|13$. Thus neither $P_{13}$ nor $P_{13}^{*}$ has a uniserial quotient $13|1|13^{*}$ or $13^{*}|1|13$; a contradiction.
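The dimension bookkeeping behind these deductions can be summarised as follows: with $\textup{dim}(P_1) = 28$, $\textup{dim}(P_{13}) = \textup{dim}(P_{13^{*}}) = 40$, $\textup{dim}(P_{26_x}) = 52$ for $x \in \{a,b,c\}$, and the six 28-dimensional modules projective, the formula $\sum_i \textup{dim}(P_i)\,\textup{dim}(S_i) = |H|$ reads

```latex
\[
1 \cdot 28 + 2\,(13 \cdot 40) + 3\,(26 \cdot 52) + 6\,(28 \cdot 28)
  = 28 + 1040 + 4056 + 4704 = 9828 = |L_2(27)|.
\]
```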
\subsection*{Case: $(G,H,p) = (E_8,\Alt_{10},3)$}
Here, every composition factor of the unique feasible character of $H$ on $L(G)$ (Table \ref{a10e8p3}) is self-dual, and all except one (that of dimension 84) have multiplicity 1. It follows that any irreducible $S$-submodule of dimension $\neq 84$ is in fact a direct summand. Thus if $L(G) \downarrow S$ has a reducible, indecomposable direct summand, say $W$, then $W$ is a quotient of the projective module $P_{84}$. The radical series of $P_{84}$ begins $84 | 1 + 41 + 84 | 34 + 41 + 84^{2} | \ldots$. Now if $N$ is a submodule of $P_{84}$ such that $N/\textup{rad}(N) \cong 41$, then $N$ lies in the kernel of $P_{84} \to W$, since $41$ is not a composition factor of $L(G) \downarrow S$. Using Magma to facilitate calculations, we find that the quotient of $P_{84}$ by the sum of all such submodules is self-dual with shape $84 | (1 + 84) | 84$. Since $W$ is a self-dual image of this module with at most two 84-dimensional composition factors, it follows that either $L(G) \downarrow S$ is completely reducible, or
\[ L(G) \downarrow S = 1+9+34+36+(84|84). \] In either case, $S$ fixes a 1-space on $L(G)$, and hence cannot be Lie primitive in $G$.
\subsection*{Case: $(G,H,p) = (E_8,\Alt_{10},2)$} Here a Lie primitive subgroup $S \cong H$ of $G$ gives rise to Case 4) of Table \ref{a10e8p2}, so $L(G) \downarrow S = 1^{8}/8^{5}/26^{4}/48^{2}$. Hence the sum $\bigoplus H^{1}(S,W)$, taken over the $S$-composition factors $W$ of $L(G)$ counted with multiplicity, is 9-dimensional. Now, let $L(G) = W_1 \oplus \ldots \oplus W_t$ where each $W_i$ is indecomposable. If more than one $W_i$ had a trivial composition factor, then Proposition \ref{prop:substab}(iii) would apply to at least one such summand. Since $S$ fixes no nonzero vectors of $L(G)$, it follows that $L(G) \downarrow S$ has an indecomposable direct summand $W$ containing all trivial composition factors, and also all $S$-composition factors with nonzero first cohomology group. Moreover $W$ is an image of $P_8^{2}+P_{26}^{2}+P_{48}$, by Lemma \ref{lem:projcover}. Now, let $Q_{8}$, $Q_{26}$ and $Q_{48}$ be the quotients of $P_{8}$, $P_{26}$ and $P_{48}$, respectively, by the sum of all submodules with a unique maximal submodule, and corresponding quotient not isomorphic to a member of $\{1,8,26,48\}$. Using Magma to facilitate calculations, we find that these quotients have the following structure: \begin{align*}
Q_{8} &= 8 | 1 | 26 | 1 | (8 + 48) | (1 + 26) | (1 + 26) | (1 + 8) | (1 + 8),\\
Q_{26} &= 26 | (1 + 48) | (8 + 26) | 1^{2} | (8 + 26) | (1^{2} + 48) | (8 + 26) | 1, \\
Q_{48} &= 48 | 26 | 1 | 8 | (1 + 48) | 26 | 1 \end{align*} Now, if $W/\textup{rad}(W)$ has more than one composition factor `$8$' or `$26$', then $\textup{rad}(W)$ satisfies Proposition \ref{prop:substab}(i) and has a trivial submodule. So we may assume that $W$ is an image of $Q_{48} \oplus Q_{26}$ or $Q_{48} \oplus Q_{8}$. Each of these modules has exactly nine trivial composition factors, so the kernel of the projection to $W$ can have at most one trivial factor. But also, both $Q_{48} \oplus Q_{26}$ and $Q_{48} \oplus Q_{8}$ contain a $2$-dimensional trivial submodule, and it follows that $S$ fixes a nonzero vector of $W \subseteq L(G)$. \qed
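As a bookkeeping check on the final count in this proof, the trivial composition factors in the radical layers displayed above can be tallied mechanically (the layer data below is transcribed from that display):

```python
# Radical layers of Q8, Q26 and Q48, recorded as lists of
# composition-factor dimensions per layer.
Q8 = [[8], [1], [26], [1], [8, 48], [1, 26], [1, 26], [1, 8], [1, 8]]
Q26 = [[26], [1, 48], [8, 26], [1, 1], [8, 26], [1, 1, 48], [8, 26], [1]]
Q48 = [[48], [26], [1], [8], [1, 48], [26], [1]]

def trivials(layers):
    # number of trivial (1-dimensional) composition factors
    return sum(layer.count(1) for layer in layers)

# Both candidate covers have exactly nine trivial composition factors.
print(trivials(Q48) + trivials(Q26), trivials(Q48) + trivials(Q8))  # prints: 9 9
```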
\section{Postponed Cases of Theorem \ref{THM:MAIN}} \label{sec:postponed}
Those pairs $(G,H)$ in Table \ref{tab:main} for which the above propositions do not apply are listed below. For these, a special argument is required, either because the groups have feasible characters with no trivial composition factors on $L(G)$ or $V_{\textup{min}}$, or because their representation theory allows for the existence of modules having appropriate composition factors but no fixed vectors. We defer the proof of the conclusion of Theorem \ref{THM:MAIN} for these cases until Section \ref{sec:special}, where ad-hoc arguments are applied.
\begin{table}[H]\small
\begin{tabular}{c|c} $G$ & $(H,p)$ \\ \hline $E_7$ & $\Alt_{10} \ (p = 5)$, $\Alt_{9} \ (p \neq 2, 3)$ \\ $E_8$ & $\Alt_{16} \ (p = 2)$, $\Alt_{11}\ (p = 11)$, $\Alt_{10} \ (p > 3)$ \end{tabular} \end{table}
\chapter{Normaliser Stability} \label{chap:stab}
In this chapter we complete the proof of Theorem \ref{THM:MAIN}. At this stage we have shown that for $(G,H,p)$ as in Table \ref{tab:main}, with the exception of the `postponed cases' considered in Section \ref{sec:special}, every subgroup $S \cong H$ of $G$ lies in a proper connected subgroup of $G$. It remains to show the existence of a proper connected $N_{\textup{Aut}(G)}(S)$-stable subgroup. We apply a variety of techniques to achieve this; the following proposition summarises the results of this chapter.
\begin{proposition} Let $G$ be an adjoint exceptional simple algebraic group in characteristic $p$, let $H$ be a finite simple group, not isomorphic to a member of $\textup{Lie}(p)$, and let $S \cong H$ be a subgroup of $G$. If $G$, $H$, $p$ appear in Table \ref{tab:main}, then one of the following holds: \begin{itemize} \item $S$ is not $G$-completely reducible, hence Lemma \ref{lem:gcrstab} applies; \item $S$ is Lie primitive in a semisimple subgroup $X$ of $G$ as in Proposition \ref{prop:primstab}; \item $(G,H,p)$ appears in one of the tables \ref{tab:f4thm}--\ref{tab:e8thm}, hence Proposition \ref{prop:gcrstab} applies; \item $(G,H,p)$ appears in Table \ref{tab:adhoc} in Section \ref{sec:special}. \end{itemize} Hence $S$ is contained in a proper, $N_{\textup{Aut}(G)}(S)$-stable connected subgroup of $G$. \end{proposition}
\section{Complete Reducibility and Normaliser Stability}
Recall that a subgroup $S$ of a reductive group $G$ is called \emph{$G$-completely reducible} ($G$-cr) if whenever $S$ is contained in a parabolic subgroup $P$ of $G$, it is contained in a Levi subgroup of $P$. In \cite{MR2178661} it is shown that a subgroup of a reductive algebraic group $G$ is $G$-cr if and only if it is `strongly reductive' in the sense of Richardson \cite{MR952224}. A result of Liebeck, Martin and Shalev then states: \begin{lemma}[\cite{MR2145743}*{Proposition 2.2 and Remark 2.4}] \label{lem:gcrstab} Let $S$ be a finite subgroup of an adjoint simple algebraic group $G$. Then either $S$ is $G$-completely reducible, or $S$ is contained in a proper $N_{\textup{Aut}(G)}(S)$-stable parabolic subgroup of $G$. \end{lemma}
It thus remains to prove Theorem \ref{THM:MAIN} for $G$-cr subgroups $S \cong H$. In this case, let $L$ be minimal among Levi subgroups of $G$ containing $S$, so that $S$ lies in the semisimple subgroup $L'$ and is $L'$-irreducible. Then any connected subgroup of $L'$ containing $S$ is also $L'$-irreducible, and is therefore $G$-cr and semisimple. Thus there exists a proper semisimple subgroup $X = X_{1} \ldots X_{t}$ such that $S$ projects to an $X_{i}$-irreducible subgroup of each simple factor $X_{i}$. If we pick $X$ to be minimal among semisimple subgroups of $L'$ containing $S$, then the image of $S$ under projection to each simple factor is in fact Lie primitive in that factor.
\section{Subspace Stabilisers and Normaliser Stability}
We now assume that $S$ is $G$-completely reducible. Our strategy is to construct a `small' $G$-cr semisimple subgroup $X$ of $G$ containing $S$. Informally, `small' means that the actions of $X$ and $S$ on $L(G)$ or $V_{\textup{min}}$ should be similar, which allows us to apply a number of results that we state shortly. Note that this does not require $S$ to be Lie primitive in $X$, although this will be true in many cases.
Our first result of interest is Proposition 1.12 of \cite{MR1458329}. Recall that if $M$ is a $G$-module with corresponding representation $\rho : G \to GL(M)$, the \emph{conjugate} $M^{\tau}$ of $M$ by an (abstract) automorphism $\tau$ of $G$ is the module corresponding to the representation $\tau \rho : G \to GL(M)$. If $G$ is an algebraic group, $\tau$ is a morphism and $M$ is rational, then clearly $M^{\tau}$ is also rational.
Recall that a Suzuki-Ree group is the fixed-point subgroup $G_{\phi}$ when $\phi$ is an exceptional graph morphism of $G$. For a subspace $M$ of a $G$-module $V$, let $G_{M}$ denote the corresponding subspace stabiliser.
\begin{proposition}[\cite{MR1458329}*{Proposition 1.12}] \label{prop:onetwelve} Let $G$ be a simple algebraic group over $K$, and let $\phi : G \to G$ be a morphism which is an automorphism of abstract groups. \begin{itemize} \item[\textup{(i)}] Suppose that $G_{\phi}$ is not a finite Suzuki or Ree group, and let $V$ be a $G$-composition factor of $L(G)$. If $M$ is a subspace of $V$, then $(G_M)^{\phi} = G_{M'}$ for some subspace $M'$ of $V$. \item[\textup{(ii)}] Suppose $G_{\phi}$ is a finite Suzuki or Ree group, and let $V_1$, $V_2$ be the two $G$-composition factors of $L(G)$. If $M$ is a subspace of $V_i$ $(i = 1,2)$, then $G_M^{\phi} = G_{M'}$ for some subspace $M'$ of $V_{3-i}$. \item[\textup{(iii)}] Let $S$ be a $\phi$-stable subgroup of $G$, and let $\mathcal{M}$ be the collection of all $S$-invariant subspaces of all $G$-composition factors of $L(G)$. Then the subgroup $\bigcap_{W \in \mathcal{M}} G_W$ of $G$ is $\phi$-stable. \end{itemize} \end{proposition} Of particular interest to us here is part (iii). If $S$ is a finite subgroup of $G$, and if $X$ is a connected subgroup containing $S$ such that every $S$-submodule of every $G$-composition factor of $L(G)$ is an $X$-submodule, then the group $\bigcap_{W \in \mathcal{M}} G_W$ in (iii) contains $X$, and is therefore also of positive dimension. Applying this result for each morphism $\phi \in N_{\textup{Aut}(G)}(S)$, we obtain a positive-dimensional, $N_{\textup{Aut}(G)}(S)$-stable subgroup of $G$, whose identity component contains $X$ and therefore $S$.
It will be useful for us to extend the above result, since we will encounter cases when, for $X$ a minimal semisimple subgroup containing $S$, not every $S$-submodule of $L(G)$ is an $X$-submodule. We now do this by mimicking the proof given in \cite{MR1458329}.
\begin{proposition} \label{prop:newonetwelve} Let $G$ be a simple algebraic group over $K$, and let $\phi : G \to G$ be a morphism which is an automorphism of abstract groups. \begin{itemize} \item[\textup{(i)}] Let $V = \bigoplus V_G(\lambda_i)$ be a completely reducible $KG$-module such that the set $\{\lambda_i\}$ is stable under all graph morphisms of $G$. If $M$ is a subspace of $V$, then $(G_M)^{\phi} = G_{M\delta}$ where $\delta$ is an invertible semilinear transformation $V \to V$ depending on $\phi$ but not on $M$. \item[\textup{(ii)}] If $S$ is a subgroup of $G$, then for each $KS$-submodule $M$ of $V$, the subspace $M\delta$ is a $KS^{\phi}$-submodule, of the same dimension as $M$, which is irreducible, indecomposable or completely reducible if and only if $M$ has the same property. \end{itemize} \end{proposition}
\proof (i) Let $V$ correspond to the representation $\rho : G \to GL(V)$. We may write $\phi = y\tau\sigma$ where $y$, $\tau$ and $\sigma$ are (possibly trivial) inner, graph and field morphisms of $G$, respectively. By assumption, the representations $\rho$ and $\tau\rho$ of $G$ are equivalent, since they are completely reducible with identical high weights. Hence if $\sigma$ is a $q$-power field automorphism, where $q = p^{e} \ge 1$, then the high weights of $\phi\rho$ are $\{q\lambda_i\}$. There is therefore a $q$-power field automorphism $\omega$ of $GL(V)$ such that $\phi\rho$ and $\rho\omega$ are equivalent. The automorphism $\omega$ is induced by a semilinear transformation $V \to V$ which we shall also denote by $\omega$. Then $h^{\omega} = \omega^{-1}h\omega$ for $h \in GL(V)$. Thus, identifying each $g \in G$ with its image $g\rho \in GL(V)$, there exists $x \in GL(V)$ such that $g^{\phi} = g^{\omega x} = x^{-1}\omega^{-1}g\omega x$ for all $g \in G$. Writing $\delta = \omega x$, this gives $\delta g^{\phi} = g\delta$ for all $g \in G$, and we have \[ (v\delta)g^{\phi} = (vg)\delta \] for all $v \in V$, $g \in G$. If $M$ is a subspace of $V$, and $m \in M$, $g \in G_M$, then $(m\delta)g^{\phi} = (mg)\delta \in M\delta$, and hence $g^{\phi} \in G_{M\delta}$. Therefore $(G_M)^{\phi} \le G_{M\delta}$. For the reverse inclusion, if $g \in G_{M\delta}$, then by the displayed equality above, for any $m \in M$ we have \[ (m\delta)g = (m\delta).(g^{\phi^{-1}})^{\phi} = (mg^{\phi^{-1}})\delta = m'\delta \] for some $m' \in M$. Therefore $mg^{\phi^{-1}} = m'$ and $g^{\phi^{-1}} \in G_M$, so $g \in (G_M)^{\phi}$ as required.
(ii) If $M$ is a $KS$-submodule of $V$, then the displayed equation above, applied to the elements of $S^{\phi}$, tells us that $S^{\phi}$ preserves the subspace $M\delta$ of $V$. It is clear that $M$ and $M\delta$ have the same dimension, since $\delta$ is invertible. If $W \subseteq M$ is a nonzero $KS$-submodule of $M$, then $W\delta$ is a nonzero $KS^{\phi}$-submodule of $M \delta$, and $M = M_1 + M_2$ as $KS$-modules if and only if $M\delta = M_1\delta + M_2\delta$, proving the final claim. \qed
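The displayed relation $(v\delta)g^{\phi} = (vg)\delta$ can be made concrete in the smallest nontrivial case. The following sketch (ours, not part of the source) fixes $G = SL_2$ over $\mathbb{F}_4$, takes $\phi$ to be the $2$-power Frobenius morphism, and takes $\delta$ to be the entrywise squaring map, a $2$-power semilinear transformation of $V = \mathbb{F}_4^{2}$; it then verifies the identity numerically:

```python
# Sanity check (ours): the relation (v.delta) g^phi = (v g).delta from the
# proof above, for G = SL_2 over GF(4), phi the 2-power Frobenius morphism,
# and delta the entrywise squaring map on V = GF(4)^2.

import itertools, random

# GF(4) = {0, 1, w, w^2} encoded as 0..3 (a + b*w  <->  a + 2b), w^2 = w + 1.
# Addition is XOR; multiplication uses discrete logs base w.
LOG = {1: 0, 2: 1, 3: 2}
EXP = [1, 2, 3]

def add(a, b): return a ^ b
def mul(a, b): return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 3]
def frob(a):   return mul(a, a)          # the field automorphism x -> x^2

def vecmat(v, g):                        # row vector times 2x2 matrix
    return tuple(add(mul(v[0], g[0][j]), mul(v[1], g[1][j])) for j in (0, 1))

def matmul(g, h):
    return tuple(tuple(add(mul(g[i][0], h[0][j]), mul(g[i][1], h[1][j]))
                       for j in (0, 1)) for i in (0, 1))

def phi(g):                              # entrywise Frobenius on SL_2(GF(4))
    return tuple(tuple(frob(x) for x in row) for row in g)

def delta(v):                            # the semilinear transformation of V
    return tuple(frob(x) for x in v)

# Random elements of SL_2(GF(4)) as words in the root subgroups
# E(t) = [[1,t],[0,1]] and F(t) = [[1,0],[t,1]] (all of determinant 1).
def random_sl2(rng):
    g = ((1, 0), (0, 1))
    for _ in range(6):
        t = rng.randrange(4)
        u = ((1, t), (0, 1)) if rng.random() < 0.5 else ((1, 0), (t, 1))
        g = matmul(g, u)
    return g

rng = random.Random(0)
for _ in range(100):
    g = random_sl2(rng)
    for v in itertools.product(range(4), repeat=2):
        assert vecmat(delta(v), phi(g)) == delta(vecmat(v, g))
print("(v delta) g^phi = (v g) delta verified on SL_2(GF(4))")
```

In this example the correcting factor $x \in GL(V)$ of the proof can be taken to be the identity (so $\delta = \omega$), since $\phi$ is a pure field morphism and the natural representation of $SL_2$ is defined over the prime field.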
Thus if $S = S^{\phi}$, then $\phi$ induces a permutation on the $S$-submodules of $L(G)$, and we immediately deduce: \begin{corollary} \label{cor:phistab} Let $S$ be a subgroup of $G$, and let $V$ be a $KG$-module as in Proposition \ref{prop:newonetwelve}\textup{(i)}, or the direct sum of the $G$-composition factors of $L(G)$ if $(G,p) = (F_4,2)$. Let $\phi$ be a morphism in $N_{\textup{Aut}(G)}(S)$, and let $\mathcal{M}$ be one of: \begin{itemize} \item The set of all $KS$-submodules of $V$, or all irreducible $KS$-submodules, or all indecomposable $KS$-submodules. \item Those members of one of the above collections, with a prescribed set of composition factor dimensions. \end{itemize} Then the intersection $H \stackrel{\textup{def}}{=} \bigcap_{M \in \mathcal{M}} G_M$ is $\phi$-stable.
Further, if some member of $\mathcal{M}$ is not $G$-stable, then $H$ is proper. If $S$ lies in a positive-dimensional subgroup $X$ such that each member of $\mathcal{M}$ is $X$-invariant, then $H$ is positive-dimensional. If $X$ is connected, then $S$ lies in $H^{\circ}$, which is connected and $\phi$-stable. \end{corollary}
Thus with $S$ a finite simple subgroup of $G$, lying in the connected subgroup $X$, we are interested in techniques for spotting when $KS$-submodules of a given $KX$-module are $X$-invariant. The following result of Liebeck and Seitz provides such a method. \begin{lemma}[\cite{MR1458329}*{Proposition 1.4}] Let $X$ be an algebraic group over $K$ and let $S$ be a finite subgroup of $X$. Suppose $V$ is a finite-dimensional rational $KX$-module satisfying the following conditions: \begin{itemize} \item[\textup{(i)}] every $X$-composition factor of $V$ is $S$-irreducible, \item[\textup{(ii)}] for any $X$-composition factors $M$, $N$ of $V$, the restriction map ${\rm Ext}_{X}^{1}(M,N) \to {\rm Ext}_{S}^{1}(M,N)$ is injective, \item[\textup{(iii)}] for any $X$-composition factors $M$, $N$ of $V$, if $M \downarrow S \cong N \downarrow S$, then $M \cong N$ as $X$-modules. \end{itemize} Then $X$ and $S$ fix precisely the same subspaces of $V$. \label{lem:samesubs} \end{lemma} Conditions (i) and (iii) are straightforward to verify. Condition (ii) can often be checked by showing that the groups ${\rm Ext}_{X}^{1}(M,N)$ are trivial, for example using Lemmas \ref{lem:exthom} and \ref{lem:2step}.
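To see why condition (iii) cannot be dropped, the following sketch (ours, not from \cite{MR1458329}) records the standard obstruction:

```latex
% If $M \not\cong N$ as $X$-modules but there is an $S$-isomorphism
% $\psi \colon M \downarrow S \to N \downarrow S$, then for each
% $\lambda \in K^{*}$ the graph
\[
  M_{\lambda} \ = \ \{\, m + \lambda(m\psi) \ : \ m \in M \,\}
  \ \subseteq \ M \oplus N
\]
% is an $S$-submodule which is not $X$-invariant: an $X$-invariant graph
% would give an $X$-isomorphism $M \to N$ via its two projections.
```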
The proof of Lemma \ref{lem:samesubs} given in \cite{MR1458329} uses condition (ii) only to deduce that an indecomposable $X$-module section of $V$ remains indecomposable as an $S$-module. This allows the following generalisation: \begin{proposition} \label{prop:newsamesubs} The conclusion of Lemma \ref{lem:samesubs} holds if we replace condition \textup{(ii)} with either: \begin{itemize} \item[\textup{(ii$'$)}] Each indecomposable $KX$-module section of $V$ is indecomposable as a $KS$-module. \item[\textup{(ii$''$)}] As a $KX$-module, $V$ is completely reducible. \end{itemize} \end{proposition}
\proof Note that, assuming condition (i), we have implications (ii$''$) $\Rightarrow$ (ii$'$), and (ii) $\Rightarrow$ (ii$'$). Thus it suffices to assume that (i), (ii$'$) and (iii) hold. From here we proceed as in \cite{MR1458329}, by induction on $\textup{dim }V$, noting that the case $\textup{dim }V = 1$ is trivial.
For a contradiction, suppose that some $KS$-submodule of $V$ is not $X$-stable, and let $W$ be minimal among such submodules. If $W'$ is a proper $KS$-submodule of $W$, then $W'$ is $X$-invariant, and $W/W'$ is a $KS$-submodule of $V/W'$ which is not $X$-invariant. The inductive hypothesis thus forces $W' = 0$, so $W$ is irreducible.
Now let $U = \left<W^{X}\right>$ be the $X$-submodule generated by $W$; then $U \neq W$, since $W$ is not $X$-stable. If $U$ were irreducible for $X$, it would be irreducible for $S$ by (i), and then the nonzero $S$-submodule $W$ would equal $U$, contradicting $U \neq W$. Thus there exists a proper, nonzero, irreducible $X$-submodule $W_0$ of $U$.
Consider $V/W_0$. Then $S$ fixes the subspace $(W + W_0)/W_0$ of this, and by induction we deduce that $X$ fixes $W + W_0$. Hence $U = W + W_0$ (vector space direct sum). Since $W$ and $W_0$ are irreducible as $S$-modules, this is also a direct-sum decomposition of $U$ into $S$-submodules. Thus $U$ is not indecomposable as an $S$-module, and hence by (ii$'$) is also not indecomposable as an $X$-module. Hence there is an $X$-submodule $W_1$ of $U$ such that \[ U = W + W_0 = W_1 + W_0 \] where $W$ is $S$-isomorphic to $W_1$. If $W$ is not $S$-isomorphic to $W_0$, then $W$ and $W_0$ are the only irreducible $S$-submodules of $U$, so $W = W_1$ and $W$ is $X$-invariant, a contradiction. Thus $W$ is $S$-isomorphic to $W_0$, and then by (iii) the modules $W_0$ and $W_1$ are $X$-isomorphic. Now $W \subseteq W_1 + W_0$, and we have \[ W = \left\{ w + w\psi \ : \ w \in W_1 \right\} \] for some $S$-isomorphism $\psi : W_1 \to W_0$. But if $\alpha$ is any $X$-isomorphism $W_1 \to W_0$, then $\alpha\psi^{-1} : W_1 \to W_1$ is an $S$-isomorphism, hence by Schur's Lemma we have $\alpha \psi^{-1} = \lambda.\textup{id}_{W_1}$ for some $\lambda \in K^{*}$. Hence $\psi = \lambda^{-1}\alpha$ is an $X$-isomorphism, and so $W$ is fixed by $X$. This final contradiction completes the proof. \qed
\subsection{Restriction of $G$-modules to $G$-cr semisimple subgroups}
In order to compare the action of a finite simple subgroup of $G$ on various $G$-modules with the action of a $G$-cr semisimple subgroup $X$, we need to determine some details of how such a subgroup $X$ acts. To begin, let $L$ be minimal among Levi subgroups of $G$ containing $S$. We can thus assume that $S < X \le L'$. The action of $L'$ on $L(G)$ and $V_{\textup{min}}$ now follows from the known composition factors of $L'$, stated in \cite{MR1329942}*{Tables 8.1-8.7}, and Lemma \ref{lem:tilting}.
If we work with the simply connected cover $\tilde{G}$ of $G$, and a minimal preimage $\tilde{S}$ of $S$ in $\tilde{G}$, the derived subgroup of a Levi subgroup $L$ of $\tilde{G}$ is simply connected \cite{MR2850737}*{Proposition 12.14}, hence is a direct product of simply connected simple groups. The image of $\tilde{S}$ under projection to a simple factor $L_{0}$ of $L'$ is $L_{0}$-irreducible. If $L_{0}$ is classical, we can use Lemma \ref{lem:classicalparabs} to find a smaller connected subgroup of $L_{0}$ containing the image of $\tilde{S}$, such as the stabiliser of a direct-sum decomposition of the natural module. On the other hand, if $L_{0}$ is exceptional, then we can use Propositions \ref{prop:notprim} and \ref{prop:algorithm}, and the feasible characters in Chapter \ref{chap:thetables}, to find a smaller semisimple subgroup of $L_{0}$ containing the image of $S$ (if one exists).
This gives a `small' semisimple subgroup $X$ of $L'$ which contains $S$. The known action of $L'$ on the various $G$-modules is then usually enough information to determine the action of $X$. If $X$ is simple of rank $> \frac{1}{2}\textup{rank}(G)$, then $X$ is given up to conjugacy by \cite{MR1367085}*{Theorem 1}, and the restrictions of $L(G)$ and $V_{\textup{min}}$ to such a subgroup are given, at least up to composition factors, by \cite{MR1329942}*{Tables 8.1-8.7} or \cite{MR3075783}*{Chapter 5}. Once the composition factors are known, more precise information about the module structure can be inferred using Proposition \ref{prop:weyls} and Lemma \ref{lem:2step}.
The following lemma summarises the module restrictions we need that are most difficult to verify from the above sources.
\begin{lemma} \label{lem:c4d4} Let $G$ be a simply connected simple algebraic group of exceptional type in characteristic $p = 2$. If $X$ is a $G$-cr simple subgroup of type $C_4$ or $D_4$, then either $X = L'$ for a Levi subgroup $L$ of $G$, or $X$ is conjugate to a subgroup in Table \ref{tab:c4d4res}, acting on the $G$-module $V$ as stated. \end{lemma}
\begin{table}[htbp] \centering \small \caption{Non-Levi, $G$-cr simple subgroups of type $C_{4}$, $D_{4}$, $p = 2$}
\begin{tabular}{c|c|c|c} $G$ & $X$ & $V$ & $V \downarrow X$ \\ \hline $F_4$ & $C_4$ & $V_G(\lambda_1)$ & $0^{2}/V_X(\lambda_4)/V_X(2\lambda_1)$ \\ & & $V_G(\lambda_4)$ & $V_X(\lambda_2)$ \\ & $D_4$ (long) & $V_G(\lambda_1)$ & $V_{X}(\lambda_2)$ \\ & & $V_G(\lambda_4)$ & $0^{2} \oplus \lambda_1 \oplus \lambda_3 \oplus \lambda_4$ \\ & $D_4$ (short) & $V_G(\lambda_1)$ & $0^{2} \oplus V_X(2\lambda_1) \oplus V_X(2\lambda_3) \oplus V_X(2\lambda_4)$ \\ & & $V_G(\lambda_4)$ & $V_{X}(\lambda_2)$ \\ $E_6$ & $C_4 < F_4$ & $V_G(\lambda_1)$ & $0 \oplus V_X(\lambda_2)$ \\ & $D_4 < F_4$ (short) & $V_G(\lambda_1)$ & $0 \oplus V_X(\lambda_2)$ \\ $E_7$ & $C_4 < F_4$ & $V_G(\lambda_7)$ & $0^{4} \oplus V_X(\lambda_2)^{2}$ \\
& $C_{4} < A_{7}$ & $V_{G}(\lambda_7)$ & $(0|V_{X}(\lambda_2)|0)^{2}$ \\ & $D_4 < F_4$ (short) & $V_G(\lambda_7)$ & $0^{4} \oplus V_X(\lambda_2)^{2}$ \\
$E_{8}$ & $C_{4} < A_{7}$ Levi & $L(G)$ & $(\lambda_1 \otimes \lambda_1) \oplus \lambda_1^{4} \oplus (0|V_{X}(\lambda_2)|0)^{2} \oplus \lambda_3^{2}$ \\ & $C_{4} < F_{4} $ & $L(G)$ & $(0^{2}/V_{X}(\lambda_2)^{2}/V_{X}(\lambda_4)/V_{X}(2\lambda_1)) \oplus 0^{14} \oplus V_{X}(\lambda_2)^{6}$\\
& $D_{4} < A_{7}$ Levi & $L(G)$ & $(\lambda_1 \otimes \lambda_1) \oplus \lambda_1^{4} \oplus (0|V_{X}(\lambda_2)|0)^{2} \oplus V_X(\lambda_3 + \lambda_4)^{2}$ \\ & $D_4 < F_4$ (short) & $L(G)$ & $(0^{2}/V_{X}(\lambda_2)^{2}/V_{X}(2\lambda_1)/V_{X}(2\lambda_3)/V_{X}(2\lambda_4))$ \\ & & & ${}\oplus 0^{14} \oplus V_{X}(\lambda_2)^{6}$ \\ & $D_{4} < D_{4}D_{4}$ & & Infinitely many, $G$-irreducible, cf.\ \cite{MR3283715}*{Theorem 3} \\ \end{tabular} \label{tab:c4d4res} \end{table}
\proof Theorem 1 of \cite{MR1367085} lists all subgroups of type $C_{4}$ or $D_{4}$ when $G$ has type $F_{4}$, $E_{6}$ or $E_{7}$. The non-Levi subgroups $D_{4} < E_{7}$ given in part (IV) there are non-$E_7$-cr, except for the subgroup $D_{4} < F_{4}$ generated by short root subgroups. The $C_{4}$ and $D_{4}$ subgroups of $A_{7} < E_{7}$ are non-$E_{7}$-cr, by \cite{MR1274094}*{Lemma 4.9} (see also \cite{MR3283715}*{Lemma 6.1}).
For $G$ of type $E_{8}$, Theorem 3 of \cite{MR3283715} states that the only $G$-irreducible subgroups of type $C_{4}$ or $D_{4}$ are the subgroups $D_{4} < D_{4}D_{4}$ listed above. Each remaining $G$-cr subgroup lies in some Levi subgroup $L$ of $G$. Using the list of subgroups for $E_{6}$ and $E_{7}$, it follows that $L'$ is simple of type $A_{7}$ or $E_{6}$, which gives the remaining subgroups for $E_{8}$.
The composition factors of $V \downarrow X$ now follow from \cite{MR1329942}*{Tables 8.1--8.7}. For $G$ of type $F_{4}$, the module structure of $V_G(\lambda_4) \downarrow X$ is stated in \cite{MR3075783}*{Chapter 5}. Since $V_G(\lambda_1)$ and $V_G(\lambda_4)$ are conjugate by an exceptional graph morphism of $G$, which swaps the long and short $D_4$ subgroups, a direct-sum decomposition of $V_G(\lambda_4)$ as a module for a long subgroup $D_4$ implies a decomposition of $V_G(\lambda_1)$ for a short $D_4$, and vice-versa. The given module structures for $G = F_4$ now follow.
For $G \neq F_{4}$ the stated module structures follow by first considering $V \downarrow F_4$ or $V \downarrow A_{7}$, which are straightforward to derive using the known composition factors and Lemmas \ref{lem:tilting} and \ref{lem:exthom}. Since $C_{4}$ and $D_{4}$ each support a unique nondegenerate bilinear form on their natural modules, there exists a unique nonzero $KX$-module homomorphism $\bigwedge^{2}(\lambda_1) \to K$ (up to scalars), and thus $V_{A_7}(\lambda_2) \downarrow X = V_{A_7}(\lambda_6) \downarrow X = 0 | V_{X}(\lambda_2)|0$ for each $X$. Finally, for $X = C_{4}$, $\bigwedge^{3}(\lambda_1) = \lambda_1 + \lambda_3$, by consideration of high weights and Lemma \ref{lem:exthom}. The remaining restrictions follow. \qed
\section{Proof of Theorem \ref{THM:MAIN}: Standard Cases}
In view of the above results, Propositions \ref{prop:primstab} and \ref{prop:gcrstab} below complete the proof of Theorem \ref{THM:MAIN} except for the `postponed cases' considered in Section \ref{sec:special}.
\begin{proposition} \label{prop:primstab} Let $G$ be a simple algebraic group of exceptional type, let $S$ be a finite subgroup of $G$, and suppose that $S$ is Lie primitive in a $G$-cr subgroup $X$ of $G$. If $X$ is the derived subgroup of a Levi subgroup of $G$, or if $X$ is one of the following: \begin{itemize} \item $X = G_{2}$ in a Levi subgroup of type $D_{4}$ or $B_{3}$; \item $X = B_{n-1}$ in a Levi subgroup of type $D_{n}$ $(n \ge 3)$; \item $X < F_{4}$, and $X$ has type $A_{3}$, $B_{4}$, $C_{4}$, $D_{4}$ or $F_{4}$; \end{itemize} then $X$ is $N_{\textup{Aut}(G)}(S)$-stable. \end{proposition}
\proof To begin, note that if $\textup{dim}(X) > \frac{1}{2}\textup{dim}(G)$, then $X \cap X^{\sigma}$ has positive dimension for any $\sigma \in \textup{Aut}(G)$, since $\textup{dim}(X \cap X^{\sigma}) \ge \textup{dim}(X) + \textup{dim}(X^{\sigma}) - \textup{dim}(G) > 0$. In particular, if $\sigma \in N_{\textup{Aut}(G)}(S)$ then $X \cap X^{\sigma} = X$ since $S$ is Lie primitive in $X$ and $X$ is connected. This gives the result when $G = F_{4}$ and $X$ is a proper subgroup of maximal rank listed above, so we now assume that this is not the case.
Let $V$ be the direct sum of non-trivial $G$-composition factors of either $L(G)$ or $V_{\textup{min}} \oplus V_{\textup{min}}^{*}$. For the remaining groups $X$, we prove that every fixed point of $S$ on $V$ is a fixed point of $X$ on $V$, and that $X$ has a nonzero fixed point on some such $V$. The desired conclusion then follows from Corollary \ref{cor:phistab}.
Firstly, the composition factors of $X$ on $V$ are known by \cite{MR1329942}*{Tables 8.1-8.7}. Then Lemmas \ref{lem:radfilts} and \ref{lem:exthom} show that $X$ fixes a nonzero vector on $V$. Moreover every composition factor of $V \downarrow X$ has dimension at most $\textup{dim}(X)$, and equality holds only if this composition factor is isomorphic to $L(X)$. Thus by Lemma \ref{lem:properembed}, since $S$ is Lie primitive in $X$ it cannot fix a nonzero vector in its action on any nontrivial $X$-composition factor of $V \downarrow X$.
Next, if $V \downarrow X$ has an indecomposable section of the form $K|W$, where $W$ is irreducible of dimension at most $\textup{dim}(X) - 2$, then this extension cannot split as an $S$-module (since the corresponding vector centraliser has positive dimension). It remains to show that this must also hold if $W$ instead has dimension $\ge \textup{dim}(X) - 1$. From the known action of $X$ on $V$, this can only occur in the following cases:
\begin{itemize} \item $X = A_{n}$, $p \mid n+1$, $W = V_{X}(\lambda_1 + \lambda_n)$; \item $X = D_{n}$ ($n$ odd) or $B_{n}$, $p = 2$, $W = V_{X}(\lambda_2)$; \item $X = E_{7}$, $p = 2$, $W = V_{X}(\lambda_1)$; \item $X = E_{6}$, $p = 3$, $W = V_{X}(\lambda_2)$. \end{itemize}
In each case, $H^{1}(X,W)$ is $1$-dimensional, hence there is a unique indecomposable extension $K|W$ up to isomorphism. Moreover the representation $X \to GL(K|W)$ factors through the adjoint group $X_{\textup{ad}}$, and $K|W$ is isomorphic to $L(X_{\textup{ad}})$. In particular since $S$ is Lie primitive in $X$, its image in $X_{\textup{ad}}$ is also Lie primitive and so $S$ cannot fix a nonzero vector on $L(X_{\textup{ad}})$, by Lemma \ref{lem:properembed}. Thus every fixed point of $S$ on $W$ is fixed by $X$, as required. \qed
For reference, the following table lists the types $H$ of non-generic finite simple subgroups of $G$ such that each subgroup $S \cong H$ of $G$ is necessarily Lie primitive in some simple subgroup $X$ as in Proposition \ref{prop:primstab}. For each type $H$, the types of $X$ which may occur are straightforward to determine from Lemma \ref{lem:classicalparabs} and Propositions \ref{prop:notprim} and \ref{prop:algorithm}. For instance, when $p = 5$, $H = \Alt_{6}$ has irreducible modules of dimension $5$ and $8$, giving embeddings into $B_{2}$ and $D_{4}$. Since $\Alt_{6}$ has no nontrivial irreducible modules of dimension $4$ or less, it cannot occur as a subgroup of a (simply connected) subgroup of type $B_{2}$ or $A_{3}$ in $G = F_{4}$. Hence such a subgroup $\Alt_{6}$ of $G$ must be Lie primitive in a subgroup of type $D_{4}$.
\begin{table}[htbp] \small \onehalfspacing \caption{Types of subgroup necessarily in some $X$ satisfying Proposition \ref{prop:primstab}}
\begin{tabular}{c|c} \hline $G$ & $H$ \\ \hline $F_{4}$ & $\Alt_{7}$, $\Alt_{9-10}$, $M_{11}$, $J_{1}$, $J_{2}$, \\ & $\Alt_{6}$ $(p = 5)$, $L_{2}(17)$ $(p = 2)$, $U_{3}(3)$ $(p \neq 2,3,7)$ \\ \hline $E_{6}$ & $\Alt_{9-12}$, $M_{22}$, $L_{2}(25)$, $L_{2}(27)$, $L_{4}(3)$, $U_{4}(3)$, ${}^{3}D_{4}(2)$, \\ & $M_{11}$ $(p \neq 3,5)$, $M_{12}$ $(p = 2)$, $L_{2}(11)$ $(p = 5)$, $L_{2}(17)$ $(p = 2)$, $L_{3}(3)$ $(p = 2)$ \\ \hline $E_{7}$ & $\Alt_{11-13}$, $M_{11}$, $L_{2}(25)$, $L_{3}(3)$, $L_{4}(3)$, ${}^{3}D_{4}(2)$, ${}^{2}F_{4}(2)'$,\\ & $\Alt_{9-10}$ $(p = 2)$, $M_{12}$ $(p \neq 5)$ \\ \hline $E_{8}$ & $\Alt_{12-15}$, $\Alt_{17}$, $M_{12}$, $L_{2}(27)$, $L_{2}(37)$, $L_{4}(3)$, $U_{3}(8)$, $\Omega_{8}^{+}(2)$, $G_{2}(3)$, \\ & $\Alt_{11}$ $(p = 2)$, $M_{11}$ $(p \neq 3,11)$, $^{2}F_{4}(2)'$ $(p \neq 3)$ \\ \hline \end{tabular} \label{tab:primstab} \end{table}
\begin{lemma} \label{lem:nstab} Let $S$ be a finite simple subgroup of a simple algebraic group $G$. If $S$ is contained in a semisimple subgroup $X$ of $G$ such that the following conditions all hold: \begin{itemize} \item[\textup{(i)}] $X$ is $G$-conjugate to $X^{\sigma}$ for all $\sigma \in N_{\textup{Aut}(G)}(S)$; \item[\textup{(ii)}] if $g \in G$ and $S^{g} \le X$, then $S^{g} = S^{x}$ for some $x \in N_{G}(X)$; \item[\textup{(iii)}] $N_{G}(S) \le N_{G}(X)$; \end{itemize} then $X$ is $N_{\textup{Aut}(G)}(S)$-stable. \end{lemma}
\proof If $\sigma \in N_{\textup{Aut}(G)}(S)$, then $X^{\sigma} = X^{g}$ for some $g \in G$, by (i). Thus $S^{g^{-1}} \le X$, and $S^{g^{-1}} = S^{x}$ for some $x \in N_{G}(X)$, by (ii). Then $xg \in N_{G}(S) \le N_{G}(X)$, and $X^{\sigma} = X^{g} = X^{xg} = X$, as required. \qed
In view of the above, the following proposition now proves the conclusion of Theorem \ref{THM:MAIN} for those triples $(G,H,p)$ not postponed until Section \ref{sec:special}.
\begin{proposition} \label{prop:gcrstab} Let $G$ be an adjoint exceptional simple algebraic group in characteristic $p$, let $H \notin \textup{Lie}(p)$ be a non-abelian finite simple group, and let $S \cong H$ be a $G$-cr subgroup of $G$. Let $\tilde{S}$ denote a minimal preimage of $S$ in the simply connected cover $\tilde{G}$ of $G$, and let $\tilde{H}$ denote the isomorphism type of $\tilde{S}$.
If $(G,\tilde{H},p)$ appears in one of Tables \ref{tab:f4thm} to \ref{tab:e8thm}, then $\tilde{S}$ is an $X$-irreducible subgroup of some subgroup $X$ listed there. Moreover one of the following holds: \begin{itemize} \item[\textup{(a)}] $S$ is Lie primitive in $X$ and Proposition \ref{prop:primstab} applies; \item[\textup{(b)}] Lemma \ref{lem:nstab} applies to $X$; \item[\textup{(c)}] $X$ has a submodule $W$ on $V$ (listed in the column headed $W$) such that every $S$-submodule of $W$ is $X$-stable, and the collection $\mathcal{M}$ of such submodules has the necessary form to apply Corollary \ref{cor:phistab}. \end{itemize} Thus $S$ lies in a proper connected $N_{\textup{Aut}(G)}(S)$-stable subgroup of $G$. \end{proposition}
In Tables \ref{tab:f4thm}--\ref{tab:e8thm} we use the following notation. When $X$ is contained in a semisimple subgroup $Y$ with classical factors, we write `$X < Y$ via $\lambda$' to indicate the action of $X$ on the natural module for $Y$. If the simple factors of $Y$ are $Y_{i}$ $(i = 1,\ldots,r)$ we write $(V_1,V_2,\ldots,V_{r})$ to indicate a tensor product $V_{1} \otimes \ldots \otimes V_{r}$, where $V_{i}$ is a module for $Y_{i}$. Also $V^{[r]}$ denotes the conjugate of the module $V$ by a $p^{r}$-power Frobenius morphism. Finally, we write `$X$ (fpf)' to indicate that $S$ can be assumed to fix no nonzero vectors on any nontrivial $X$-composition factor of $V$ (otherwise $S$ lies in some other listed subgroup).
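As an illustration of this notation (our own reading of one table entry, with dimensions valid since $p \neq 2,3,5$):

```latex
% In Table~\ref{tab:f4thm}, the entry `$A_{1} < B_{4}$ via
% $(1 \otimes 1^{[1]}) \oplus 4$' means that the subgroup $X = A_1$ acts on
% the natural $9$-dimensional module of $B_4$ as
\[
  (1 \otimes 1^{[1]}) \oplus 4 \qquad (\dim = 2 \cdot 2 + 5 = 9),
\]
% i.e.\ the sum of the $4$-dimensional module
% $V_X(1) \otimes V_X(1)^{[1]}$ and the $5$-dimensional restricted module
% $V_X(4)$ (restricted since $p \ge 7$).
```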
\begin{center} \small \onehalfspacing
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $F_4$, $p \neq 2$, $V = V_{G}(\lambda_4)$, $\delta = \delta_{p,3}$} \\ $\tilde{H}$ & $p$ & $X$ & $V \downarrow X$ & $W$ \\ \hline $\Alt_{5}$ & $\ge 13$ & $A_{1}$ max $F_4$ & Lemma \ref{lem:nstab} applies \\ & $\neq 2,3,5$ & $A_{1} < B_{4}$ via $(1 \otimes 1^{[1]}) \oplus 4$ & $0 \oplus (1 \otimes 1^{[1]}) \oplus 4^{2} \oplus 2 \oplus (1^{[1]} \otimes 3)$ & $0$ \\ & & $A_{1} < B_{4}$ via $0 \oplus 2 \oplus 4$ & $0^{2} \oplus 2^{3} \oplus 4^{3} \oplus (1 \otimes 3)^{2}$ & $0^{2}$ \\ & $\neq 2,5$ & $A_{1}C_{3}$ (fpf) & Lemma \ref{lem:nstab} applies \\ & $\neq 2,3,5$ & $A_{1} < A_{2}^{2}$ via $(2,2)$ & $0^{2} \oplus 2^{3} \oplus 4^{3}$ & $2^{3}$ \\
& $p = 3$ & $A_{1} < A_{2}^{2}$ via $(2,2)$ & $(0|4|0)^{2} \oplus 2^{3} \oplus 4$ & $2^{3}$ \\ & $p \neq 2,5$ & $A_{1} < A_{2}^{2}$ via $(2^{[1]},2)$, & $(2^{[1]} \otimes 2)^{2} \oplus 2 \oplus V_{X}(4)$ & $2$ \\ & & $A_{1} < B_{3}$ via $(1 \otimes 1^{[1]}) \oplus 2$ & $0^{5-\delta} \oplus 2^{3} \oplus (1 \otimes 1^{[1]})^{3}$ & $V$ \\ & & $A_{1} < B_{3}$ via $0 \oplus 2 \oplus 2^{[1]}$ & $0^{4 - \delta} \oplus 2 \oplus 2^{[1]} \oplus (1 \otimes 1^{[1]})^{4}$ & $V$\\ & & $A_{1} <$ long $A_{2}$ Levi via $2$ & $0^{8 - \delta} \oplus 2^{6}$ & $V$ \\ & & $A_{1} <$ short $A_{2}$ Levi via $2$ & $2^{7} \oplus V_{A_1}(4)$ & $2^{7}$ \\
$L_{2}(7)$ & $3$ & $A_{2} < A_{2}\tilde{A}_{2}$ via $(10,10)$ & $(0|11|0)^{2} \oplus V_{X}(11)$ & $V$ \\ & & $A_{2} < A_{2}\tilde{A}_{2}$ via $(10,01)$ & $20 \oplus 02 \oplus 10 \oplus 01 \oplus V_{X}(11)$ & $V$ \\ & & $A_{2} < B_{3}$ via $V_{X}(11)$ & $0^{4} \oplus V_{X}(11)^{3}$ & $V$ \\ & & $A_{2}$ Levi & Prop.\ \ref{prop:primstab} applies \label{tab:f4thm} \end{longtable}
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $E_{6}$, $V = V_G(\lambda_2)$} \\ $\tilde{H}$ & $p$ & $X$ & $V \downarrow X$ & $W$ \\ \hline $3\cdot\Alt_{7}$ & $\neq 3,5$ & $A_5$ & Prop.\ \ref{prop:primstab} applies \\ $\Alt_{7}$ & $\neq 2,3,5,7$ & $A_3 < A_5$ & $0^{3} \oplus (\lambda_1+\lambda_3) \oplus 2\lambda_1^{2} \oplus 2\lambda_2 \oplus 2\lambda_3^{2}$ & $0^{3}$ \label{tab:e6thml2} \end{longtable}
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $E_6$, $V = V_G(\lambda_1) \oplus V_G(\lambda_6)$}\\ $\tilde{H}$ & $p$ & $X$ & $V \downarrow X$ & $W$ \\ \hline $\Alt_{7}$ & $7$ & $D_5$ & Prop.\ \ref{prop:primstab} applies \\ & & $B_{2} < A_{4}$ & $0^{4} \oplus \lambda_1^{6} \oplus 2\lambda_{2}^{2}$ & $V$ \\ & $2$ & $A_3$ Levi & Prop.\ \ref{prop:primstab} applies \\ & & $A_3 < A_5$ & $\lambda_2^{4} \oplus (0^{2}/V_X(\lambda_1+\lambda_3)^{2})$ & $\lambda_2^{4}$ \\ $\Alt_{5}$ & $\neq 2,3,5$ & $A_{1} < A_{2}^{3}$ via $(2,2,2)$ & $0^{6} \oplus 2^{6} \oplus 4^{6}$ & $V$ \\ & & $A_{1} < A_{2}^{3}$ via $(2,2,2^{[1]})$ & $0^{2} \oplus 2^{2} \oplus 4^{2} \oplus (2 \otimes 2^{[1]})^{4}$ & $0^{2}$ \\ & & $A_{1} < A_{1}A_{5}$ via $(1,5)$ & $0^{2} \oplus (4/6/W_{X}(8)/W_{X}(4))^{2}$ & $0^{2}$ \\ & & $F_{4}$ or $A_{1}C_{3} < F_{4}$ (fpf) & $(V_{F_4}(\lambda_4)\downarrow X)^{2} \oplus 0^{2}$ & $0^{2}$ \\ & & $A_{1} < B_{4}$ via $4 \oplus (1 \otimes 1^{[1]})$ & $0^{4} \oplus 2^{2} \oplus 4^{4}$ & $0^{4}$ \\ & & & ${} \oplus (1 \otimes 1^{[1]})^{2} \oplus (1^{[1]} \otimes 3)^{2}$ & \\ & & $A_{1} < B_{3}$ via $2 \oplus (1 \otimes 1^{[1]})$ & $0^{12} \oplus 2^{6} \oplus (1 \otimes 1^{[1]})^{6}$ & $V$ \\ & & $A_{1} < D_{4}$ via $2 \oplus 4$ & $0^{6} \oplus 2^{6} \oplus 4^{6}$ & $V$ \\ & & $A_{1} < A_{4}$ via $4$ & $0^{4} \oplus 2^{2} \oplus 4^{6} \oplus 6^{2}$ & $0^{4}$ \\ & & $A_{1} < A_{3}$ Levi via $1 \otimes 1^{[1]}$ & $0^{10} \oplus 2^{2} \oplus (2^{[1]})^{2}$ & $V$ \\ & & & ${}\oplus (1 \otimes 1^{[1]})^{8}$ & \\ & & $A_{1} < A_{2}$ Levi via $2$ & $0^{18} \oplus 2^{12}$ & $V$ \\ & & $A_{1} < A_{2}^{2}$ Levi via $(2,2)$ & $0^{2} \oplus 2^{14} \oplus 4^{2}$ & $V$ \\ & & $A_{1} < A_{2}^{2}$ Levi via $(2,2^{[1]})$ & $(2 \oplus 2^{[1]})^{6} \oplus (2 \otimes 2^{[1]})^{2}$ & $(2 \oplus 2^{[1]})^{6}$ \\
$L_2(7)$ & $3$ & $A_{2} < A_{2}^{3}$ via $(10,10,10)$ & $(0|11|0)^{6}$ & $V$ \\
& & $A_{2} < A_{2}^{3}$ via $(10,10,01)$ & $(0|11|0)^{2} \oplus 10^{2} \oplus 01^{2}$ & $V$ \\ & & & ${} \oplus 20^{2} \oplus 02^{2}$ & \\
& & $A_{2} < A_{2}^{2}$ Levi via $(10,10)$ & $(0|11|0)^{2} \oplus 10^{6} \oplus 01^{6}$ & $V$ \\ & & $A_{2} < A_{2}^{2}$ Levi via $(10,01)$ & $20 \oplus 02 \oplus 10^{7} \oplus 01^{7}$ & $V$ \\ & & $A_{2}$ Levi & Prop.\ \ref{prop:primstab} applies \\ & & $A_{2} < G_{2} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $A_{3} < A_{5}$ & $\lambda_2^{2} \oplus (\lambda_1 + \lambda_3)$ & $\lambda_2^{2}$ \\ $L_{2}(8)$ & $7$ & $G_{2}$ max $F_{4}$ & $0^{2} \oplus V_{X}(20)^{2}$ & $0^{2}$ \\ & & $A_{2} < D_{4}$ via $V_{X}(11)$ & $0^{12} \oplus V_{X}(11)^{6}$ & $V$ \\ & & $G_{2}$ or $B_{3} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $D_{4}$, $F_{4}$ & Prop.\ \ref{prop:primstab} applies \\ $L_{2}(13)$ & $7$ & $G_{2}$ max $F_{4}$ & $0^{2} \oplus V_{X}(20)^{2}$ & $0^{2}$ \\ & & $G_{2} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $F_{4}$ & Prop.\ \ref{prop:primstab} applies \\ $U_{3}(3)$ & $7$ & $G_{2}$ max\ $F_{4}$ & $0^{2} \oplus V_{X}(20) ^{2}$ & $0^{2}$ \\ & & $C_{3} < A_{5}$ & $0 \oplus \lambda_1^{2} \oplus \lambda_2$ & $V$ \\ & & $G_{2} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $F_{4}$ & Prop.\ \ref{prop:primstab} applies \\ $U_4(2)$ & $\neq 2,3$ & $A_3 < A_5$ & $\lambda_2^{4} \oplus (\lambda_1 + \lambda_3)^{2}$ & $V$ \\
& & $A_4$ & Prop.\ \ref{prop:primstab} applies \label{tab:e6thml1l6} \end{longtable}
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $E_7$, $V = V_G(\lambda_7)$}\\ $\tilde{H}$ & $p$ & $X$ & $V \downarrow X$ & $W$ \\ \hline $\Alt_{8}$ & $\neq 2,3,5$ & $B_3 < A_6$ & $\lambda_1^{2} \oplus \lambda_2^{2}$ & $V$ \\ $\Alt_{7}$ & $\neq 2,5,7$ & $A_{3} < A_{5}$ & $2\lambda_1 \oplus 2\lambda_3 \oplus \lambda_2^{6}$ & $V$ \\ & & $A_{3} < A_{5}'$ & $0^{2} \oplus \lambda_2^{4} \oplus (\lambda_1 + \lambda_4)^{2}$ & $V$ \\ & $7$ & $B_{2} < A_{4}$ & $0^{6} \oplus 10^{6} \oplus 02^{2}$ & $V$ \\ & & $D_{5}$ & Prop.\ \ref{prop:primstab} applies \\
& $2$ & $A_{3} < A_{5}$ & $\lambda_2^{6} \oplus (\lambda_2|(V_{X}(2\lambda_1) \oplus V_{X}(2\lambda_3))|\lambda_2)$ & $\lambda_2^{7}$ \\ & & $A_{3} < A_{5}'$ & $(0^{4}/V_{X}(\lambda_1 + \lambda_3)^{2}) \oplus \lambda_2^{4}$ & $\lambda_{2}^{4}$ \\ & & $A_{3}$ Levi & Prop.\ \ref{prop:primstab} applies \\ $2 \cdot \Alt_{7}$ & $3$ & $C_{3} < A_{5}$ & $\lambda_1^{7} \oplus \lambda_3$ & $\lambda_1^{7}$ \\ $J_1$ & $11$ & $G_2 < A_6$ & $10^{4} \oplus 01^{2}$ & $V$ \\ & & $G_{2}$ max $E_{6}$ & $0^{2} \oplus 20^{2}$ & $V$ \\ & & $G_{2} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $E_{6}$ & Prop.\ \ref{prop:primstab} applies \\ $2 \cdot J_2$ & $\neq 2$ & $C_3 < A_5$ & $\lambda_1^{7} \oplus \lambda_3$ & $V$ \\ $L_{2}(8)$ & $\neq 2,3,7$ & $G_{2}$ max $E_{6}$ & $0^{2} \oplus 20^{2}$ & $0^{2}$ \\ & & $F_{4}$, $E_{6}$ & Prop.\ \ref{prop:primstab} applies \\ $L_2(17)$ & $2$ & $C_{4} < F_{4}$ & Prop.\ \ref{prop:primstab} applies \\
& & $C_{4} < A_{7}$ & $(0|V_{X}(\lambda_2)|0)^{2}$ & $0^{2}$ \\ & & $B_{4} < D_{5}$ & Prop.\ \ref{prop:primstab} applies \\ & $\neq 2,17$ & $B_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $C_{4}$ & $0^{2} \oplus \lambda_2^{2}$ & $0^{2}$ \\ & & $F_{4}$, $E_{6}$ & Prop.\ \ref{prop:primstab} applies \\ $U_4(2)$ & $\neq 2,3$ & $A_4$ & Prop.\ \ref{prop:primstab} applies \\ & & $A_3 < A_3 A_3$ & $0^{2} \oplus (\lambda_1 + \lambda_3)^{2} \oplus \lambda_2^{4}$ & $V$ \\ & & (2 classes) & $2\lambda_1 \oplus 2\lambda_3 \oplus \lambda_2^{6}$ & $V$ \\ $Sp_{6}(2)$ & $\neq 2$ & $B_3 < A_6$ & $\lambda_1^{2} \oplus \lambda_2$ & $V$ \label{tab:e7thml7} \end{longtable}
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $E_7$, $V = V_G(\lambda_1)$}\\ $\tilde{H}$ & $p$ & $X$ & $V \downarrow X$ & $W$ \\ \hline $L_{2}(8)$ & $\neq 2,3,7$ & $B_{4}$, $F_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $G_{2}$ or $B_{3} < D_{4}$ Levi & Prop.\ \ref{prop:primstab} applies \\ & & $B_{3} < A_{6}$ & $0 \oplus \lambda_1^{2} \oplus 2\lambda_1 \oplus \lambda_2 \oplus 2\lambda_3^{2}$ & $0$ \\ & & $G_{2} < A_{6}$ & $0^{3} \oplus 10^{5} \oplus 01 \oplus 20^{3}$ & $0^{3}$ \label{tab:e7thml1} \end{longtable}
\begin{longtable}{c|c|c|c|c} \caption{$G$-cr overgroups $X$: $G$ of type $E_8$, $V = L(G)$}\\ $\tilde{H}$ & $p$ & $X$ & $L(G) \downarrow X$ & $W$ \\ \hline $\Alt_{10}$ & $3$ & $B_4 < D_{8}$ & $0 \oplus V_{X}(2\lambda_1) \oplus \lambda_2 \oplus \lambda_3^{2}$ & $\lambda_3^{2}$ \\ & 2 & $B_{4} < D_{8}$ via $\lambda_4$ & $(W_X(\lambda_2)/W_X(\lambda_3)) \oplus (\lambda_1 + \lambda_4)$ & $\lambda_1 + \lambda_4$ \\ & & $B_{4} < D_{5}$ & Prop.\ \ref{prop:primstab} applies \\
& & $C_{4} < A_{7}$ & $(\lambda_1 \otimes \lambda_1) \oplus \lambda_1^{4} \oplus (0|V_{X}(\lambda_2)|0)^{2} \oplus \lambda_3^{2}$ & $\lambda_3^{2}$ \\ & & $C_{4} < F_{4}$ & Prop.\ \ref{prop:primstab} applies \\ $\Alt_{9}$ & $\neq 2,3$ & $D_4 < A_7$ Levi & $0 \oplus \lambda_1^{2} \oplus \lambda_2^{3} \oplus 2\lambda_1 \oplus (\lambda_3 + \lambda_4)^{2}$ & $(\lambda_3 + \lambda_4)^{2}$ \\ $\Alt_{8}$ & $\neq 2$ & $B_{3} < A_{6}$ & $0^{4} \oplus \lambda_1^{6} \oplus \lambda_2^{5} \oplus 2\lambda_1 \oplus 2\lambda_3^{2}$ & $\lambda_2^{5}$ \\ & $3$, $5$ & $E_{7}$ & Prop.\ \ref{prop:primstab} applies \\ $J_1$ & $11$ & $G_2 < D_4$ Levi & Prop.\ \ref{prop:primstab} applies \\ & & $E_{6}$ & Prop.\ \ref{prop:primstab} applies \\ & & $G_2 < A_6$ & $0^{6} \oplus 10^{13} \oplus 01^{5} \oplus 20^{3}$ & $V$ \\ & & $G_2 < D_7$ via $01$ & $0 \oplus 01^{3} \oplus 30 \oplus 11^{2}$ & $V$ \\ & & $G_2 \textup{ max } E_6$ & $0^{8} \oplus 01 \oplus 20^{6} \oplus 11$ & $V$ \\ $J_{2}$ & $2$ & $E_{6}$ (fpf) & $0^{8} \oplus \lambda_2 \oplus \lambda_1^{3} \oplus \lambda_6^{3}$ & $0^{8}$ \\ & & $G_{2}$ max $E_{6}$ & $0^{8} \oplus 11 \oplus 01$ & $0^{8}$ \\
& & & ${} \oplus (01|20|00|10)^{3} \oplus (10|00|20|01)^{3}$ & \\
& & $E_{7}$ (fpf) & $0^{2} \oplus (0|V_{X}(\lambda_1)|0) \oplus \lambda_7^{2} \oplus \lambda_1$ & $0^{3}$ \\ & & $G_{2} < D_{7}$ via $01$ & $0 \oplus 11^{2} \oplus 01^{2} \oplus (30/01)$ & $11^{2}$ \\ & & $G_{2} < D_{4}$ Levi & Prop.\ \ref{prop:primstab} applies \\
& & $G_{2} < A_{5}$ & $0^{16} \oplus ((0 \oplus 01)|V_{X}(20)|(0 \oplus 01)) \oplus $ & $0^{17}$ \\
& & & ${} \oplus 01^{6} \oplus (V_{X}(10)|0|V_{X}(20)|0|V_{X}(10))^{2}$ & \\ $Sp_{6}(2)$ & $\neq 2$ & $B_3 < A_6$ & $(0^{4}/W_X(2\lambda_1)) \oplus \lambda_1^{6} \oplus \lambda_2^{5} \oplus 2\lambda_3^{2}$ & $\lambda_1^{6}$ \\ $U_{3}(3)$ & $\neq 2,3,7$ & $A_{6}$ & Prop.\ \ref{prop:primstab} applies \\ & & $E_{7}$ (fpf) & $0^{3} \oplus \lambda_1 \oplus \lambda_7^{2}$ & $0^{3}$ \\ & & $E_{6}$ & Prop.\ \ref{prop:primstab} applies \\ & & $G_{2}$ max $E_{6}$ & $0^{8} \oplus 20^{6} \oplus 01 \oplus 11$ & $01$ \\ & & $C_{3} < A_{5}$ & $0^{17} \oplus 2\lambda_1 \oplus \lambda_1^{14} \oplus \lambda_2^{7} \oplus \lambda_3^{2}$ & $0^{17}$ \\ & & $C_{3} < D_{7}$ & $0 \oplus 2\lambda_1 \oplus \lambda_2^{2} \oplus (\lambda_1 + \lambda_3) \oplus (\lambda_2 + \lambda_3)^{2}$ & $0$ \\ & & $G_{2} < D_{7}$ via $01$ & $0 \oplus 01^{3} \oplus 30 \oplus 11^{2}$ & $0$ \\ & & $G_{2} < D_{4}$ & Prop.\ \ref{prop:primstab} applies \\ & & $G_{2} < A_{6}$ & $0^{6} \oplus 10^{13} \oplus 01^{5} \oplus 20^{3}$ & $V$ \label{tab:e8thm} \end{longtable} \end{center}
\proof This is straightforward to verify case by case. We give a general outline, then illustrate by supplying details in the most involved cases.
The groups $X$ are representatives of the $G$-cr semisimple subgroups which contain a copy of $\tilde{S}$ `minimally', in the sense that $\tilde{S}$ centralises no simple factor of $X$ and does not lie in a diagonal subgroup when $X$ has two or more isomorphic factors. Moreover if $X$ has a classical factor, then $\tilde{S}$ does not lie in the subgroup of $X$ corresponding to the stabiliser of a direct-sum decomposition, tensor-product decomposition or another bilinear or quadratic form on the natural module. If $X$ is exceptional, then the image of $\tilde{S}$ must correspond to a character marked `\textbf{P}' in Chapter \ref{chap:thetables}.
Since $X$ is $G$-cr, it is $L'$-irreducible for some Levi subgroup $L$ of $G$. The structure of $V \downarrow L'$ is easy to determine from the composition factors given in \cite{MR1329942}*{Tables 8.1-8.7}, and Lemma \ref{lem:tilting}. This is enough information to determine $V \downarrow X$ in each case here. This restriction then limits the possible feasible characters of $\tilde{S}$ on $V$, and identifying the submodule $W$ is usually straightforward. In many cases, Proposition \ref{prop:newsamesubs} applies to the whole of $V$, and we take $W = V$.
The most complicated constructions of $X$ and $W$ are as follows.
\subsection*{Case: $(G,H,p) = (F_4,L_{2}(7),3)$ or $(E_{6},L_{2}(7),3)$} Here, $H$ has irreducible modules of dimension $3$, $6$ and $7$, giving irreducible embeddings into groups of type $A_{2}$, $A_{3}$ and $B_{3}$. Moreover an embedding of $H$ into $B_{3}$ lies in a subgroup $G_{2}$. Further, an embedding of $H$ into $G_{2}$ stabilises a vector on $V_{G_2}(01)$ and therefore lies in a subgroup $A_{2}$; this holds since $\bigwedge^{2}V_{G_2}(10) = V_{G_2}(10)^{2}/V_{G_2}(01)$ while the exterior square of the $7$-dimensional $H$-module has composition factors $7^{2}/3/3^{*}/1$, and $3$ and $3^{*}$ are projective since their dimension is the largest power of $3$ dividing $|H|$.
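For convenience, here is the routine dimension count behind the last step:
\[
\dim \bigwedge^{2}V_{G_2}(10) = \binom{7}{2} = 21 = 2 \cdot 7 + 7,
\]
matching both $V_{G_2}(10)^{2}/V_{G_2}(01)$ (recall that $\dim V_{G_2}(01) = 7$ when $p = 3$) and the $H$-factors $7^{2}/3/3^{*}/1$, since $2 \cdot 7 + 3 + 3 + 1 = 21$.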
First suppose that $G = F_{4}$. Inspecting \cite{MR1329942}*{Table 8.4} we see that each simple subgroup $A_{3}$ of $G$ is simply connected. Since $S$ has no irreducible $4$-dimensional modules, $S$ cannot be contained irreducibly in such a subgroup $A_{3}$.
If $S$ is $G$-irreducible, then from Proposition \ref{prop:algorithm} we know that $S$ fixes a point on $L(G)$, and therefore lies in a proper subsystem subgroup of $G$. Now $S$ has no irreducible embeddings into a subgroup of type $B_{4}$ or $A_{1}C_{3}$ (Lemma \ref{lem:classicalparabs}), hence $S$ lies in a subgroup $A_{2}A_{2}$. This acts on $V = V_{G}(\lambda_4)$ as $(10,10) + (01,01) + (0,V_{A_2}(11))$. Since the $3$-dimensional irreducible $S$-modules are projective, so is any tensor product of them, hence the $A_{2}A_{2}$-modules $(01,10)$ and $(10,01)$ restrict to $S$ either as $3 + 6$ or $3^{*} + 6$, or as a uniserial projective module $1|7|1$. In any case, the embedding of $S$ factors through a diagonal subgroup $A_{2}$ and has a unique $7$-dimensional irreducible submodule on $V_{G}(\lambda_4)$, which is the restriction of the unique $A_{2}$-submodule $V_{A_2}(11)$.
If instead $S$ lies in a proper Levi subgroup $L$ of $G$, then $L'$ has type $A_{2}$ or $B_{3}$. In the former case, $S$ is Lie primitive in $X = L'$ and Proposition \ref{prop:primstab} applies. In the latter case, since $S$ is $L'$-irreducible and is not contained irreducibly in a subgroup $A_{3}$, it follows that $S$ is irreducible on $V_{B_3}(\lambda_1)$. Since the unique $7$-dimensional irreducible $S$-module is a section of $3 \otimes 3^{*}$, $S$ lies in an irreducible subgroup $X = A_{2}$ of $B_{3}$ acting as $V_{A_2}(11)$ on the natural module. The given action of $X$ follows and Proposition \ref{prop:newsamesubs} applies to the action of $S$ and $X$ on $V$.
Now suppose $G = E_{6}$ and $V = V_{G}(\lambda_1) \oplus V_{G}(\lambda_6)$. Then similar reasoning to the above applies, where we also note that a maximal connected subgroup $A_{2}^{3}$ acts on $V$ as a direct sum of tensor products of two $3$-dimensional modules (cf.\ \cite{MR1329942}*{Proposition 2.3}), so the same reasoning as for $S < A_{2}^{2}$ shows that $S$ lies in a diagonal subgroup $A_{2}$ stabilising every $S$-submodule of $V$.
\subsection*{Case: $(G,H) = (F_4,\Alt_{5})$} Here $p \neq 2,5$. In this case, every irreducible module for $S$ and its double cover $2 \cdot S \cong SL_{2}(5)$ is obtained as a symmetric power or tensor product of $2$-dimensional $SL_{2}(5)$-modules. Hence an embedding of $S$ into a classical simple algebraic group factors through an embedding of an adjoint simple subgroup of type $A_{1}$. Furthermore if $S$ lies in a subgroup of type $G_{2}$ then it stabilises a $3$- or $4$-dimensional subspace of the $7$-dimensional module, hence lies in a subgroup $A_{1}A_{1}$ or $A_{2}$ (cf.\ \cite{MR898346}*{Theorem 8}), and then in a further subgroup $A_{1}$.
We now break the proof up into several cases: (1) $S$ lies in no proper subsystem subgroup of $G$; (2) $S$ is $G$-irreducible and lies in a proper subsystem subgroup of $G$; (3) $S$ lies in a proper Levi subgroup of $G$. Furthermore we divide case (2) into the sub-cases where $S$ lies in a subgroup: (2a) $B_{4}$; (2b) $A_{1}C_{3}$; or (2c) $A_{2}A_{2}$.
(1) In this case $C_{G}(S) = 1$ and $S$ cannot fix a nonzero vector on $L(G)$ by Lemma \ref{lem:properembed}, and cannot fix a nonzero vector on $V_{\textup{min}}$ since the corresponding stabiliser has dimension at least $52 - 26$ and is either $G$-reducible or a subsystem subgroup $B_{4}$ or $D_{4}$. Thus $p \neq 3$ and $S$ corresponds to Case 1) of Table \ref{a5f4p0}. If $S$ is contained in a maximal subgroup $A_{1}G_{2}$ or $G_{2}$ $(p = 7)$, then the image of $S$ in $G_{2}$ lies in a subgroup $A_{1}A_{1}$ or $A_{2}$, hence centralises a non-trivial semisimple element of $G$, contradicting the fact that $S$ lies in no subsystem subgroup. Therefore $p \ge 13$ and $S$ lies in a maximal subgroup $X$ of type $A_{1}$. This is unique up to conjugacy in $G$, hence condition (i) of Lemma \ref{lem:nstab} holds. Moreover $SO_{3}(K)$ has a unique subgroup $\Alt_{5}$ up to conjugacy, hence condition (ii) of Lemma \ref{lem:nstab} also holds. Finally, the two $3$-dimensional irreducible $S$-modules are interchanged by an outer automorphism of $S$. Since these occur with different multiplicities as composition factors of $L(G)$, it follows that $G$ does not induce an outer automorphism on $S$, hence $N_{G}(S) = SC_{G}(S) = S$. Thus condition (iii) of Lemma \ref{lem:nstab} also holds, and $X$ is $N_{\textup{Aut}(G)}(S)$-stable.
(2a) If $S$ lies in $X = B_{4}$ and is $X$-irreducible then by Lemma \ref{lem:classicalparabs}, $S$ acts on $V_{B_4}(\lambda_1)$ with composition factor dimensions $1$, $3$ and $5$ or $4$ and $5$ (hence $p \neq 3$). The image of $S$ therefore lies in a subgroup $A_{1}$ acting as $0 \oplus 2 \oplus 4$ or $(1 \otimes 1^{[1]}) \oplus 4$. The restriction of $V_{G}(\lambda_4)$ to these respective subgroups follows from the restriction $V_{G}(\lambda_4) \downarrow B_{4} = 0 \oplus \lambda_1 \oplus \lambda_4$ and Lemma \ref{lem:spins}. Each nontrivial summand for such a subgroup $A_{1}$ restricts to $S$ with no trivial composition factors, and it follows that each trivial $S$-submodule of $V_{G}(\lambda_4)$ is a trivial submodule also for the relevant subgroup $A_{1}$.
(2b) Here $S$ lies in a subgroup $A_{1}C_{3}$; we show that Lemma \ref{lem:nstab} holds in this case. Firstly, since $G$ has a unique class of subgroups $A_{1}C_{3}$, condition (i) is immediate. Next, since the double cover $2\cdot S$ has a unique $6$-dimensional symplectic module and $SL_{2}(5)$ has a unique $2$-dimensional irreducible module up to conjugation by an outer automorphism, it follows that $A_{1}C_{3}$ has a unique class of subgroups $\Alt_{5}$ which centralise neither factor. Thus condition (ii) of Lemma \ref{lem:nstab} holds.
Finally, since $S$ lies in no Levi subgroup of $G$, if we assume that $S$ lies in no maximal subgroup $A_{2}A_{2}$ or $B_{4}$ of $G$ then the only nontrivial elements of $C_{G}(S)$ are involutions which are $G$-conjugate to the central involution $t$ of $A_{1}C_{3}$. Therefore $C_{G}(S)$ is commutative (a group in which every nontrivial element is an involution is abelian), and is thus contained in $C_{G}(t) = A_{1}C_{3}$. Now $V_{G}(\lambda_4) \downarrow A_{1}C_{3} = (1,\lambda_1) \oplus (0,\lambda_2)$ restricts to $S$ as $(2 \otimes 6) \oplus \bigwedge^{2}(6)$, where `$2$' and `$6$' are irreducible modules for the double cover $2\cdot S$. The first summand has a unique $3$-dimensional composition factor, while the second has none. Thus the two $3$-dimensional $S$-modules occur with differing multiplicities, hence $N_{G}(S)$ does not induce an outer automorphism on $S$. This shows that $N_{G}(S) = S C_{G}(S) \le A_{1}C_{3} = N_{G}(A_{1}C_{3})$, so condition (iii) of Lemma \ref{lem:nstab} holds.
(2c) Here $S$ lies in a subgroup $A_{2}A_{2}$, acting irreducibly on the natural module for each factor. Since the two $3$-dimensional $S$-modules are the symmetric squares of the $2$-dimensional modules for $2 \cdot S$, and since these are conjugate under a Frobenius morphism of $SL_{2}(K)$, the image of $S$ lies in a diagonal subgroup $A_{1}$ acting as $2$ or $2^{[1]}$ on the natural module for each $A_{2}$ factor, giving two subgroups of type $A_{1}$, one of which contains $S$. Now, $V_{G}(\lambda_4) \downarrow A_{2}A_{2} = (10,01) + (01,10) + (0,V_{A_2}(11))$. The restrictions $V_{G}(\lambda_4) \downarrow A_{1}$ given in Table \ref{tab:f4thm} follow easily. Since $2^{[1]} \otimes 2$ restricts to $S$ as $3_{a} \otimes 3_{b} = 1/4^{2}$ $(p = 3)$ or $4/5$ $(p \neq 3)$, it follows that every $3$-dimensional $S$-submodule of $V_{G}(\lambda_4)$ is preserved by the subgroup $A_{1}$.
(3) Let $L$ be minimal among Levi subgroups of $G$ containing $S$. Since $G$ is simply connected, so is $L'$. Since $S$ itself has no nontrivial $2$-dimensional modules or irreducible symplectic modules, it follows that $L'$ has no factor $A_{1}$, $B_{2}$ or $C_{3}$. Thus $L'$ has type $B_{3}$ or $A_{2}$. If $L' = B_{3}$ then $S$ acts as $3_{a} + 4$ or $1 + 3_{a} + 3_{b}$ on the natural $L'$-module, hence lies in a subgroup $A_{1} < B_{3}$ via $(1 \otimes 1^{[1]}) + 2$ or $0 + 2 + 2^{[1]}$, as in Table \ref{tab:f4thm}. The restrictions for $X$ follow from $V_{G}(\lambda_4) \downarrow B_{3} = 0^{3} + \lambda_1 + \lambda_3^{2}$ and Lemma \ref{lem:spins}. If $L' = A_{2}$ then $S$ acts irreducibly on the natural $L'$-module. Since there are two $G$-classes of Levi subgroups $A_{2}$, this gives the final subgroups $A_{1} < A_{2}$ for $\Alt_{5}$ in Table \ref{tab:f4thm}.
\subsection*{Case: $(G,H) = (E_{6},\Alt_{5})$} Here $p \neq 2,3,5$. As argued for $G = F_{4}$ above, $S$ lies in an adjoint subgroup $A_{1}$ of $G$, and if $L'$ is minimal among Levi subgroups of $G$ containing $S$ then $L'$ is simply connected and therefore has no factors $A_{1}$ or $A_{5}$ as $S$ has no irreducible $2$- or $6$-dimensional modules. Thus $L'$ has type $E_{6}$, $D_{5}$, $D_{4}$, $A_{4}$, $A_{3}$, $A_{2}$ or $A_{2}^{2}$. In all but the first case, using Lemma \ref{lem:classicalparabs} and proceeding as in $F_{4}$ gives a list of possible subgroups of type $A_{1}$ containing $S$; the restrictions of the various $G$-modules are again straightforward to determine, and it is clear that the given summand $W$ satisfies the required property.
So now assume that $S$ is $G$-irreducible. If $S$ fixes a nonzero vector on $V_{G}(\lambda_1)$ then $S$ lies in a subgroup of dimension $\ge 78 - 27 = 51$, and such a subgroup is either simple of type $F_{4}$, or is $G$-reducible. If $S$ fixes no nonzero vector on $V_{G}(\lambda_1)$, then inspecting Table \ref{a5e6p0} we see that $S$ must fix a nonzero vector on $L(G)$, and therefore lies in a subgroup containing a maximal torus. This shows that $S$ lies in a subgroup $A_{2}^{3}$, $A_{1}A_{5}$ or $F_{4}$. In the first two cases, we again derive a list of possible subgroups $A_{1}$ containing $S$, which appear in Table \ref{tab:e6thml2} or \ref{tab:e6thml1l6}.
So now we assume that $S$ lies in a subgroup $F_{4}$ of $G$ and in no subsystem subgroup of $G$. Then $S$ must be fixed-point free on $V_{F_4}(\lambda_4)$, otherwise it lies in a subsystem subgroup of $F_{4}$, hence centralises a non-central semisimple element of $G$ and lies in the corresponding subsystem subgroup of $G$. Since $V_{G}(\lambda_1) \oplus V_{G}(\lambda_6) \downarrow F_{4} = 0^{2} \oplus V_{F_4}(\lambda_4)^{2}$, every trivial $S$-submodule of this is a trivial submodule for $F_{4}$.
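As a quick dimension check (here $p \neq 2,3,5$, so $\dim V_{G}(\lambda_1) = \dim V_{G}(\lambda_6) = 27$ and $\dim V_{F_4}(\lambda_4) = 26$):
\[
27 + 27 = 54 = 2 + 2 \cdot 26 = \dim\left(0^{2} \oplus V_{F_4}(\lambda_4)^{2}\right).
\]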
\subsection*{Remaining cases} For the remaining triples $(G,H,p)$, the possible subgroups $X$ are straightforward to determine using the representation theory of $H$ and Propositions \ref{prop:notprim} and \ref{prop:algorithm}. Moreover comparing $V \downarrow X$ with the appropriate table of feasible characters in Chapter \ref{chap:thetables} shows that the conditions of Proposition \ref{prop:onetwelve} hold for the action of $X$ and $S$ on the submodule $W$. There are a few cases where further argument is required to show that the submodules of $W$ are of the form necessary to apply Corollary \ref{cor:phistab}, as follows:
\begin{itemize} \item When $(G,p) = (E_7,7)$ and $H = L_{2}(8)$, $L_{2}(13)$ or $U_{3}(3)$, if $S \cong H$ lies in a subgroup $X = G_{2}$ such that $V \downarrow X$ is a sum of trivial modules and 26-dimensional modules $V_{G_2}(20)$, we claim that every trivial $S$-submodule of $V$ is an $X$-submodule. The symmetric square of each $7$-dimensional orthogonal $S$-module has a unique $1$-dimensional submodule and a unique $1$-dimensional quotient. Since $V_{G_2}(20)$ is a $26$-dimensional section of the $28$-dimensional symmetric square of $V_{G_2}(10)$, it follows that $S$ fixes no nonzero vectors on $V_{G_2}(20)$, and the claim follows.
\item Similarly if $S \cong L_{2}(17)$ lies in a subgroup $C_{4}$ when $p = 2$, we must show that $S$ fixes no nonzero vectors on $V_{C_4}(\lambda_2)$. This follows because this is a $26$-dimensional section of $\bigwedge^{2}V_{C_4}(\lambda_1)$, and the exterior square of the $8$-dimensional irreducible $H$-module has shape $1|8|1|8|1|8|1$.
\item If $S \cong \Alt_{7}$ is a subgroup of $G = E_{7}$ when $p = 2$, we must justify the stated structure of $V_{G}(\lambda_7) \downarrow X$ where $X = A_{3} < A_{5}$, given in Table \ref{tab:e7thml7}. This follows since $V_{G}(\lambda_7) \downarrow A_{5} = \lambda_1^{3} + \lambda_5^{3} + \lambda_3$; the factors $\lambda_1$ and $\lambda_5$ restrict to $X$ as $V_{X}(\lambda_2)$, while $\lambda_3$ restricts with high weights $\lambda_2^{2}$, $2\lambda_1$ and $2\lambda_3$. We verify computationally that the $\Alt_{7}$-module $\bigwedge^{3}(6)$ is indecomposable with shape $6|(4 + 4^{*}) | 6$.
\item If $(G,H,p) = (E_{8},J_{2},2)$ and $S \cong H$ lies in a subgroup $X = E_{6}$ or $E_{7}$ and in no proper subsystem subgroup of this, then as in the proof of Proposition \ref{prop:primstab} it follows that the only nonzero vectors of $L(G)$ fixed by $S$ are also fixed by $X$; thus $X$ and $S$ fix exactly the same trivial submodules on $L(G)$, and $W$ is the sum of these.
\item Also with $(G,H,p) = (E_{8},J_{2},2)$, if $S \cong H$ lies either in a subgroup $G_{2}$ which is maximal in a subgroup $E_{6}$, or in a subgroup $G_{2} < A_{5}$, then the action of $G_{2}$ on $V_{E_6}(\lambda_1)$ is given by \cite{MR2044850}*{Table 10.2} or \cite{MR1329942}*{Table 8.1}. It follows that if $S$ fixes a vector which is not fixed by this $G_{2}$, then there is an indecomposable $G_{2}$-module $0|10$ or $0|20$ on which $S$ fixes a nonzero vector. This is impossible, since it would place $S$ in the full centraliser of this vector, a proper subgroup of $G_{2}$ of dimension at least $7$. \end{itemize}
\section{Proof of Theorem \ref{THM:MAIN}: Special Cases} \label{sec:special}
We now complete the proof of Theorem \ref{THM:MAIN} by applying ad hoc arguments for the triples $(G,H,p)$ not covered above. Recall from Section \ref{sec:postponed} that these remaining cases are as follows:
\begin{table}[H]\small
\begin{tabular}{c|c} $G$ & $(H,p)$ \\ \hline $E_7$ & $\Alt_{10} \ (p = 5)$, $\Alt_{9} \ (p \neq 2, 3)$ \\ $E_8$ & $\Alt_{16} \ (p = 2)$, $\Alt_{11}\ (p = 11)$, $\Alt_{10} \ (p \neq 2, 3)$ \end{tabular} \label{tab:adhoc} \end{table}
In each case, we let $S \cong H$ be a subgroup of $G$, take a proper simple subgroup $A$ of $S$, construct a proper, connected subgroup $X$ of $G$ containing $A$, and show that $Y \stackrel{\textup{def}}{=} \left<S,X\right>^{\circ}$ is proper and stabilises an appropriate collection $\mathcal{M}$ of $S$-submodules of $L(G)$ or $V_{\textup{min}}$, so that Corollary \ref{cor:phistab} applies and $Y$ is contained in the $N_{\textup{Aut}(G)}(S)$-stable proper connected subgroup $\left(\bigcap_{M \in \mathcal{M}} G_{M}\right)^{\circ}$. Since $Y \cap S$ is normal in $S$ and contains $A$, and $S$ is simple, it follows that $S \le Y \le \left(\bigcap_{M \in \mathcal{M}} G_{M}\right)^{\circ}$, and the conclusion of Theorem \ref{THM:MAIN} holds for $(G,H,p)$.
\subsection{Remark} In order to construct an appropriate subgroup $X$ as above, we assume that the subgroup $A$ of $S$ is $G$-completely reducible. For this, we appeal to Theorem \ref{THM:NONGCR}, hence Theorem \ref{THM:MAIN} depends on Theorem \ref{THM:NONGCR} in these few cases. We take this opportunity to note that the proof of Theorem \ref{THM:NONGCR} given in Chapter \ref{chap:gcr} depends only on the feasible characters of simple subgroups (Theorem \ref{THM:FEASIBLES}), and not on Theorem \ref{THM:MAIN}.
\subsection*{Case: $G = E_7$, $H = \Alt_{10}$ or $\Alt_{9}$, $p \neq 2,3$} \label{sec:a10e7p5} \label{sec:a9e7p5} \label{sec:a9e7p7} \label{sec:a9e7p0} Here the composition factors of $S$ on $L(G)$ and $V_{\textup{min}}$ are specified by one of the tables on pages \pageref{a10e7p5}--\pageref{a9e7p5}. Let $A \cong \Alt_{8}$ be a subgroup of $S$, lying in an intermediate subgroup isomorphic to $\Alt_{9}$. Then this subgroup $\Alt_{9}$ has an $8$-dimensional composition factor on $L(G)$, whose restriction to $A$ contains a trivial submodule. Thus Proposition \ref{prop:substab} applies to $L(G) \downarrow A$, and $A$ is not Lie primitive in $G$. Since $p \neq 2,3$, Theorem \ref{THM:NONGCR} holds and $A$ is $G$-completely reducible. In addition, by Theorem \ref{thm:subtypes} there does not exist an embedding of $\Alt_{8}$ into a smaller exceptional algebraic group, and thus $A$ lies in a $G$-completely reducible semisimple subgroup, whose simple factors are all of classical type.
Now $\Alt_{8}$ has a $7$-dimensional irreducible module, giving an embedding into $B_3$, and no other non-trivial irreducible modules of dimension $\le 12$. In addition, the only non-trivial faithful module for $2\cdot \Alt_{8}$ of dimension $\le 12$ is $8$-dimensional, giving an embedding into $D_4$. If $2\cdot \Alt_{8}$ embeds into a simply connected group of type $D_4$ with its centre contained in $Z(D_4)$, then its centre acts trivially on one of the three $D_4$-modules $\lambda_1$, $\lambda_3$ or $\lambda_4$. Since these are self-dual, the module in question restricts as a direct sum $1 \oplus 7$, and therefore the quotient $\Alt_{8}$ in $D_4$ lies in a proper subgroup of type $B_3$.
Thus $A$ lies in a subgroup $B_3$ of $G$. The two conjugacy classes of such subgroups and their action on $V_G(\lambda_7)$ are given by \cite{MR1329942}*{Table 8.6}. Comparing these with the feasible characters of $A$ (Tables \ref{a8e7p5}, \ref{a8e7p7}, \ref{a8e7p0}), we see that $A$ lies in a subgroup $X = B_3 < A_6$ with $V_G(\lambda_7) \downarrow X = \lambda_1^{2}/\lambda_2^{2}$. By Lemma \ref{lem:exthom} and Proposition \ref{prop:weyls}, this is completely reducible. Thus Proposition \ref{prop:newsamesubs} applies and every $A$-submodule of $V_{G}(\lambda_7)$ is $X$-stable. In particular, every $S$-submodule of $V_G(\lambda_7)$ is preserved by $Y = \left<S,X\right>$, hence if $\mathcal{M}$ denotes the (non-empty) collection of proper $S$-submodules of $V_G(\lambda_7)$, we have $S < Y < \left(\bigcap_{M \in \mathcal{M}} G_M \right)^{\circ}$, and this latter group is proper, connected and $N_{\textup{Aut}(G)}(S)$-stable.
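As a routine consistency check, the restriction $V_G(\lambda_7) \downarrow X$ passes the obvious dimension count:
\[
\dim V_{G}(\lambda_7) = 56 = 2 \cdot 7 + 2 \cdot 21 = 2\dim V_{B_3}(\lambda_1) + 2\dim V_{B_3}(\lambda_2).
\]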
\subsection*{Case: $(G,H,p) = (E_8,\Alt_{16},2)$}
Let $A \cong \Alt_{15}$ be a subgroup of $S$. Then Proposition \ref{prop:notprim} and Theorem \ref{THM:NONGCR} apply to $A$: thus $A$ is not Lie primitive in $G$, and is $G$-cr. Since the smallest nontrivial $A$-module is $14$-dimensional and $H^{1}(A,14) = 0$, and since $A$ does not embed into a smaller exceptional algebraic group, we deduce that $A$ lies in a Levi subgroup of type $D_{7}$, call it $X$. From \cite{MR1329942}*{Table 8.1} it follows that \[ L(G) \downarrow X = (0^{2}/V_X(\lambda_2)) \oplus \lambda_1^{2} \oplus \lambda_6 \oplus \lambda_7. \] Comparing this with $L(G) \downarrow A$, whose factors are given by Table \ref{a15e8p2}, we find that $A$ must be irreducible on each $X$-composition factor. In particular every $14$- and $64$-dimensional $A$-submodule of $L(G)$ is the restriction of an $X$-submodule.
Now, if $S$ has no irreducible submodules of dimension $1$, $14$ or $64$ on $L(G)$, then $L(G) \downarrow S$ is an image of the projective cover of the 90-dimensional $S$-module. But this is absurd, since this composition factor is self-dual and occurs with multiplicity 1. Thus either $S$ has a nonzero trivial submodule on $L(G)$ or has irreducible submodules of dimension $14$ or $64$, in which case $\left<S,X\right>^{\circ}$ preserves every such submodule, and is therefore a proper, connected subgroup containing $S$.
So $S$ is not Lie primitive in $G$; hence by Lemma \ref{lem:gcrstab} we may assume that $S$ is $G$-cr. Since the smallest nontrivial irreducible $S$-module is $14$-dimensional and supports a quadratic form, we deduce that $S$ lies in a subgroup $D_{7}$; this is the subgroup $X$ constructed above, and we deduce that every $14$- and $64$-dimensional irreducible $S$-submodule of $L(G)$ is in fact an $X$-submodule. Thus $S < X < \left( \bigcap_{M} G_M \right)^{\circ}$, the intersection over $14$- and $64$-dimensional $S$-submodules, and by Corollary \ref{cor:phistab} this latter group is proper, connected and $N_{\textup{Aut}(G)}(S)$-stable.
\subsection*{Case: $(G,H,p) = (E_8,\Alt_{11},11)$ or $(E_8,\Alt_{10},p \neq 2,3,5)$} \label{sec:a11e8p11} \label{sec:a10e8p0} \label{sec:a10e8p7} Assume first that $p \neq 7$. Note that each feasible character of $\Alt_{11}$ or $\Alt_{10}$ on $L(G)$ has two 84-dimensional composition factors (Table \ref{a11e8p11}). Let $S \cong H$ be a subgroup of $G$, and let $A \cong \Alt_{9}$ be a subgroup of $S$. Matching up Tables \ref{a11e8p11} and \ref{a9e8p0}, we see that \[ L(G) \downarrow A = 1/8^{3}/27/28^{3}/56^{2}, \]
which is completely reducible since $p \nmid |A|$. Thus $A$ fixes a nonzero vector on $L(G)$ and lies in a proper subgroup of positive dimension, hence also in a proper, connected subgroup by Proposition \ref{prop:consub}. By Theorem \ref{THM:NONGCR}, $A$ is also $G$-completely reducible.
Inspecting \cite{MR1942256}*{Table 2} we see that the smallest irreducible modules for $A$ and its double cover $2 \cdot A$ are of dimension $8$ and $21$, and the $8$-dimensional modules give an embedding into $D_{4}$. Since $\Alt_9$ admits no embeddings into a proper exceptional subgroup of $G$ by Theorem \ref{thm:subtypes}, it follows that a minimal connected reductive subgroup containing $A$ can only involve factors of type $D_4$. Comparing the above decomposition with \cite{MR1329942}*{Table 8.1}, we deduce that $A$ lies in a simple subgroup $X$ of type $D_4$, contained in a Levi subgroup $A_7$. Then \[ L(G) \downarrow X = 0/\lambda_1^{2}/\lambda_2^{3}/2\lambda_1/(\lambda_3+\lambda_4)^{2}, \] which is completely reducible by Corollary \ref{cor:2step} and Proposition \ref{prop:weyls}. Now, let $W$ be the span of all $28$-dimensional and $56$-dimensional $A$-submodules of $L(G)$. Then $W$ is the restriction of an $X$-submodule with factors $\lambda_2^{3}/(\lambda_3+\lambda_4)^{2}$, which contains every $28$- and $56$-dimensional $X$-composition factor of $L(G)$. Proposition \ref{prop:newsamesubs} then applies, so that every $A$-submodule of $W$ is $X$-invariant.
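The restriction $L(G) \downarrow X$ above is consistent with a routine dimension count. In the characteristics arising here one has $\dim V_X(\lambda_1) = 8$, $\dim V_X(\lambda_2) = 28$, $\dim V_X(2\lambda_1) = 35$ and $\dim V_X(\lambda_3+\lambda_4) = 56$, so that
\[
1 + 2 \cdot 8 + 3 \cdot 28 + 35 + 2 \cdot 56 = 1 + 16 + 84 + 35 + 112 = 248 = \dim L(G),
\]
in agreement with the composition factors $1/8^{3}/27/28^{3}/56^{2}$ of $A$, whose dimensions likewise sum to $248$.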
Now, as $L(G)$ is self-dual, $S$ has either a unique 84-dimensional submodule or has a summand of shape $84+84$. In either case, this is contained in $W$, since `84' must restrict to $A$ with composition factors $28/56$. Thus each 84-dimensional irreducible $S$-submodule of $L(G)$ is an $X$-submodule, hence preserved by $Y \stackrel{\textup{def}}{=} \left<S,X\right>$. Thus $S < Y \le (\bigcap G_M)^{\circ}$, the intersection over $84$-dimensional irreducible $S$-submodules of $L(G)$. This latter group is $N_{\textup{Aut}(G)}(S)$-stable by Corollary \ref{cor:phistab}.
If $p = 7$, then the same argument goes through with the caveat that $L(G) \downarrow A$ may no longer be completely reducible; however, it still has a trivial submodule since $H^{1}(\Alt_{9},M)$ vanishes for each composition factor $M$ in a feasible character (cf.\ Table \ref{tab:altreps}). The $28$- and $56$-dimensional $A$-modules are projective, since their dimensions are divisible by the order of a Sylow $7$-subgroup of $A$, and therefore they are $A$-direct summands of $L(G)$.
\subsection*{Case: $(G,H,p) = (E_8,\Alt_{10},5)$} \label{sec:a10e8p5} Here we have two feasible characters (Table \ref{a10e8p5}). The only $H$-module $W$ occurring in either character such that $H^{1}(H,W) \neq \{0\}$ is the $8$-dimensional module. Thus in Case 2), Proposition \ref{prop:substab} applies, and $G$ has no Lie primitive subgroups isomorphic to $H$ having this action on $L(G)$.
So let $S \cong H$ be a subgroup of $G$ giving rise to Case 1), and let $A \cong \Alt_{9}$ be a subgroup of $S$. Inspecting Table \ref{a9e8p5}, $A$ must act with composition factors $1/8^{3}/27/28^{3}/56^{2}$ on $L(G)$. Each factor has zero first cohomology group, and so $A$ fixes a nonzero vector, and is not Lie primitive in $G$. By Proposition \ref{prop:consub}, $A$ lies in a proper connected subgroup of $G$. It is also $G$-completely reducible by Proposition \ref{prop:levisub}. Now, $A$ does not lie in a subgroup $E_7$ (as this has three trivial composition factors on $L(G)$), and does not embed into a group of type $G_{2}$, $F_{4}$ or $E_{6}$, and so $A$ lies in a semisimple subgroup having only classical factors. The smallest non-trivial $A$-module has dimension $8$ and gives an embedding into a group of type $D_{4}$, hence $A$ lies in a subgroup $D_4$ of $G$. The conjugacy classes of these, and their action on $L(G)$, are given by \cite{MR1329942}*{Table 8.1}. Comparing this with the factors above, we deduce that $A$ lies in a subgroup $X$ of type $D_4$, such that \[ L(G) \downarrow X = 0/\lambda_1^{2}/\lambda_2^{3}/2\lambda_1/(\lambda_3+\lambda_4)^2, \] which is completely reducible by Corollary \ref{cor:2step} and Proposition \ref{prop:weyls}. Thus every irreducible $A$-submodule of dimension $1$, $28$ or $56$ is an $X$-submodule. In particular, if $S$ has any irreducible submodules of dimension $1$, $28$ or $56$ on $L(G)$, then $S < \left<S,X\right>^{\circ} \le \left(\bigcap_{M} G_M\right)^{\circ}$, the intersection over such $S$-submodules, and this is a proper, connected subgroup of $G$, which is $N_{\textup{Aut}(G)}(S)$-stable by Corollary \ref{cor:phistab}.
So suppose $L(G)$ has no irreducible $S$-submodules of dimension $1$, $28$ or $56$. Since $L(G)$ is self-dual it has no such irreducible quotients either, hence Lemma \ref{lem:projcover} implies that $L(G)$ is an image of the projective module $P_{8}^{2} \oplus P_{35}^{2}$. However, neither $P_8$ nor $P_{35}$ has a $56$-dimensional composition factor, a contradiction.
\subsection{Remark: Scope of Theorem \ref{THM:MAIN}} \label{rem:limitations} The representation theory outlined here is not sufficient to prove results along the lines of Theorems \ref{THM:MAIN} and \ref{THM:NONGCR} for all the non-generic simple subgroups appearing in Table \ref{tab:subtypes} which are not known to occur as a Lie primitive subgroup of an exceptional group $G$.
For instance, when $p = 7$ the maximal subgroup $G_{2}$ of $G = F_{4}$ gives rise to a subgroup $U_{3}(3)$, which is fixed-point free on $L(G)$ and $V_{\textup{min}}$, but is not Lie primitive in $G$. In order to handle such subgroups in the manner of Theorems \ref{THM:MAIN}, \ref{THM:NONGCR} and their corollaries, it will be necessary to incorporate more information, such as the Lie algebra structure of $L(G)$.
\section{Proof of Theorem \ref{THM:FIN}} \label{sec:pfcorfin}
With Theorem \ref{THM:MAIN} now proved, the following result implies Theorem \ref{THM:FIN}. The proof is similar to that of \cite{MR1458329}*{Theorem 6}.
\begin{proposition} Let $G$ be an adjoint simple algebraic group and let $S$ be a non-abelian finite simple subgroup of $G$, not isomorphic to a member of $\textup{Lie}(p)$. Let $\sigma$ be a Frobenius morphism of $G$ such that $L = O^{p'}(G_\sigma)$ is simple, and let $L \le L_1 \le \textup{Aut}(L)$.
If there exists a proper closed, connected, $N_{\textup{Aut}(G)}(S)$-stable subgroup of $G$ containing $S$, then $L_{1}$ has no maximal subgroup with socle $S$. \end{proposition}
\proof Suppose that $X < L_1$ is a maximal subgroup with socle $S$. It is well known that a simple group of Lie type has soluble outer automorphism group, hence the image of $S$ under the quotient map $L_{1} \to L_{1}/L$ is trivial. Thus $S \le L$ and $S$ is fixed pointwise by $\sigma$, and in particular $\sigma \in N_{\textup{Aut}(G)}(S)$. In addition, every automorphism of $L$ extends to a morphism $G \to G$ (cf.\ \cite{MR0407163}*{\S 12.2}). Since $S$ is normal in $X$, we can view $X\!\left<\sigma\right>$ as a subgroup of $N_{\textup{Aut}(G)}(S)$.
So let $\bar{S}$ be a proper connected, $N_{\textup{Aut}(G)}(S)$-stable subgroup of $G$ which contains $S$. Then $\bar{S}$ is $X\!\left<\sigma\right>$-stable. Let $Y$ be maximal among proper, connected, $X\!\left<\sigma\right>$-stable subgroups containing $\bar{S}$. Then $O^{p'}(Y_{\sigma})$ contains $S$ and is thus non-trivial. Now, we have containments \[ X \le N_{L_1}(Y) \le L_1 \] and if $N_{L_1}(Y) = L_1$ then $L$ normalises the non-trivial subgroup $O^{p'}(Y_{\sigma})$, which is a proper subgroup of $L$ since $Y$ is connected and proper in $G$, and this contradicts the simplicity of $L$. From the maximality of $X$ in $L_1$ it follows that $X = N_{L_1}(Y) \ge O^{p'}(Y_\sigma)$. In particular, if $Y$ is not reductive then $X$ normalises the non-trivial $p$-subgroup $R_u(Y)_\sigma$ of $L_1$, a contradiction. Hence $Y$ is reductive, and therefore $S = O^{p'}(Y_\sigma)$, contradicting $S \notin \textup{Lie}(p)$. \qed
\chapter{Complete Reducibility} \label{chap:gcr}
In this chapter we prove Theorem \ref{THM:NONGCR}. We first require some background material concerning rational cohomology and complements in parabolic subgroups.
\section{Cohomology and Complements} \label{sec:complements}
Let $X$ be an algebraic group acting on a commutative algebraic group $V$, such that the action map $X \times V \to V$ is a morphism of varieties. We define $H^{1}(X,V)$ to be the quotient of the additive group $Z^{1}(X,V)$ of \emph{rational} 1-cocycles, by the subgroup $B^{1}(X,V)$ of rational 1-coboundaries. If $X$ is finite, then all cocycles are rational and we recover the usual first cohomology group.
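Concretely (in one of the two standard sign conventions; the opposite convention yields the same quotient), writing the action of $X$ on $V$ as $v \mapsto {}^{x}v$ and $V$ additively, we have
\[
Z^{1}(X,V) = \left\{ \gamma : X \to V \ \textup{rational} \ : \ \gamma(xy) = \gamma(x) + {}^{x}\gamma(y) \ \textup{for all } x,y \in X \right\},
\]
\[
B^{1}(X,V) = \left\{ x \mapsto {}^{x}v - v \ : \ v \in V \right\} \subseteq Z^{1}(X,V).
\]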
This rational cohomology group parametrises conjugacy classes of closed complements to $V$ in $VX$, where a complement to $V$ (as an algebraic group) is now a subgroup $X'$ not only satisfying $VX' = VX$ and $V \cap X' = 1$, but also $L(V) \cap L(X') = 0$ (cf.\ \cite{MR2015057}*{I.7.9(2)}). Note that this latter condition is trivially satisfied when $X'$ is finite.
If $X$ is a closed subgroup of a parabolic subgroup $P$ of $G$ with $X \cap R_{u}(P) = 1$, then $R_{u}(P)$ admits a filtration by modules for the Levi factor, and we can use the cohomology groups of these to study complements to $R_{u}(P)$ in $R_{u}(P)X$, as follows.
\begin{proposition} \label{prop:levisub} Let $P = QL$ be a parabolic subgroup of $G$, and let $X$ be a finite subgroup of $P$ with $X \cap Q = 1$. Then the $KL$-modules $V_i = Q(i)/Q(i+1)$ also have the structure of $KX$-modules, and if $H^{1}(X,V_i) = \{0\}$ for all $i$, then all complements to $Q$ in $QX$ are $Q$-conjugate, and $X$ lies in a conjugate of $L$. \end{proposition}
\proof The conjugation action of $X$ on $P$ induces an action on each $V_i$. If $\pi \ : \ P \twoheadrightarrow L$ is the natural quotient map, then we have $(qQ(i+1))^x = (qQ(i+1))^{\pi(x)}$ for all $q \in Q(i)$ and $x \in X$, since $Q(i)/Q(i+1)$ is central in $Q/Q(i+1)$. Thus the (linear) action of $L$ on each $V_i$ gives rise to a linear action of $X$.
To prove that all closed complements to $Q$ in $QX$ are $Q$-conjugate, we work by induction on $i$, proving that all copies of $X$ in $(Q/Q(i))X$ are $Q/Q(i)$-conjugate. When $i = 1$ we have $Q/Q(i) = V_1$ and the vanishing of $H^{1}(X,V_1)$ gives the result. Now assume this holds for some $i \ge 1$. If $Q(i) = \{0\}$ then we are done, so suppose not and let $Y$ be a complement to $Q/Q(i+1)$ in $(Q/Q(i+1))X$. Now, consider the projection $(Q/Q(i+1))X \to (Q/Q(i))X$. By the inductive hypothesis, we may replace $Y$ by a conjugate whose image under this projection is $X$. Then we have \[ Y = \{ \phi(x).x \ : \ x \in X \} \] for some rational map $\phi \ : \ X \to Q/Q(i+1)$, whose image lies in the kernel of the projection $Q/Q(i+1) \to Q/Q(i)$, which is $Q(i)/Q(i+1)$. Hence $\phi \in Z^1(X,V_i) = B^{1}(X,V_i)$ and $Y$ is $(Q/Q(i+1))$-conjugate to $X$, as required.
Finally, since $Q \cap X = 1$, the projection $P \twoheadrightarrow L$ restricts to an isomorphism of $X$ onto its image $\bar{X} \le L$. This is a complement to $Q$ in $QX$, and is thus conjugate to $X$ by the above. \qed
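It may be worth making the cocycle in the proof explicit; this is a routine check, carried out here in one standard sign convention. Writing the conjugation action of $X$ on $V_i = Q(i)/Q(i+1)$ as ${}^{x}v = xvx^{-1}$ and $V_i$ additively, closure of $Y$ under multiplication forces
\[
\phi(x)x \cdot \phi(y)y \;=\; \phi(x)\,{}^{x}\phi(y)\,xy \;=\; \phi(xy)\,xy,
\qquad \textup{i.e.} \qquad \phi(xy) = \phi(x) + {}^{x}\phi(y),
\]
which is precisely the 1-cocycle identity. Similarly, conjugating $X$ by an element $q$ of $Q/Q(i+1)$ yields $q^{-1}xq = ({}^{x}q - q)\,x$, so the coboundaries $x \mapsto {}^{x}q - q$ correspond precisely to the $(Q/Q(i+1))$-conjugates of $X$.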
Next, we note that if $Y$ is a complement to $Q$ in $QX$, then the composition factors of $Y$ and of $X$ in the filtration of $Q$ correspond. This is proved for reductive $X$ in \cite{MR3075783}*{Lemma 3.4.3}; the proof here is identical.
\begin{lemma} \label{lem:complement_factors} Let $Q$ be a unipotent algebraic group over $K$, and let $X$ be a finite group, with no non-trivial normal $p$-subgroups if $K$ has characteristic $p > 0$. If $Y$ is a complement to $Q$ in the semidirect product $QX$, and if $V$ is a rational $QX$-module, then the composition factors of $V \downarrow X$ correspond to the composition factors of $V \downarrow Y$ under an isomorphism $X \to Y$. \end{lemma}
\proof Without loss of generality we can assume that $V$ is an irreducible $QX$-module. Since $Q$ is unipotent, the fixed-point space of $Q$ in $V$ is non-trivial \cite{MR0396773}*{Theorem 17.5}, and since $Q$ is normal in $QX$, the space of $Q$-fixed points is $QX$-invariant, hence equal to $V$. Thus $Q$ acts trivially on $V$, hence the representation of $QX$ on $V$ factors through the projection $QX \to X$, and the composed map $Y \hookrightarrow QX \twoheadrightarrow X$ is the required isomorphism. \qed
\begin{corollary} \label{cor:levisub} If $X$ is a non-abelian finite simple subgroup of $G$ which is not $G$-completely reducible, then $L(G) \downarrow X$ has a trivial composition factor, as well as a factor $W$ such that \begin{itemize} \item $H^{1}(X,W) \neq \{0\}$, \item Either $W$ has multiplicity $\ge 2$, or $W \ncong W^{*}$, \item $W$ has dimension at most $14$, $20$, $35$, $64$ when $G$ is respectively of type $F_4$, $E_6$, $E_7$, $E_8$. \end{itemize} \end{corollary}
\proof Let $P = QL$ be a parabolic subgroup of $G$ containing $X$, such that $X$ is not contained in a conjugate of the Levi factor $L$. The central torus $Z(L)^{\circ}$ gives rise to a 1-dimensional $X$-composition factor on $L(G)$, which must be trivial since $X$ is simple.
By Proposition \ref{prop:levisub}, some $X$-composition factor $W$ exists in the filtration of $Q$ by $X$-modules, such that $H^{1}(X,W) \neq \{0\}$, which is then an $X$-composition factor of $L(P) \subset L(G)$. Further, $W^{*}$ occurs as a composition factor of $L(G)/L(P) \cong L(Q^{\textup{op}})$, where $Q^{\textup{op}}$ is the unipotent radical of the opposite parabolic subgroup (see \cite{MR1047327}*{Remark 6, p.\ 561}), hence either $W$ has multiplicity $\ge 2$ as a factor of $L(G)$, or $W \ncong W^{*}$.
Finally, Lemma \ref{lem:radfilts} gives us the high weights of each simple factor of $L$ on the modules occurring in the filtration of $Q$, which allows us to determine the largest dimension of a module occurring for each possible $L$. For instance, if $G$ has type $F_{4}$ then the types of Levi subgroup and the highest dimension of an irreducible module in the filtration of $Q$ are as follows:
\begin{center}
\begin{tabular}{c|ccccccc} Levi subgroup & $B_{3}$ & $C_{3}$ & $A_{1} A_{2}$ & $A_{2}$ & $B_{2}$ & $A_{1} A_{1}$ & $A_{1}$ \\ \hline High weight & $\lambda_{3}$ & $\lambda_{3}$ & $2 \otimes \lambda_{1}$ & $2\lambda_{1}$ & $\lambda_{1}$ & $1 \otimes 2$ & $2$ \\ Dimension & $8$ & $14$ & $12$ & $9$ & $5$ & $6$ & $3$ \end{tabular} \end{center} where we note that the high weights $2$ for a factor of type $A_{1}$, and $2\lambda_{1}$, $2\lambda_{2}$ for type $A_{2}$, can only occur when this factor contains short root subgroups; hence no module $2 \otimes 2$ can occur for $L$ of type $A_{1} A_{1}$, and no module $2 \otimes 2\lambda_{i}$ for $i \in \{ 1,2 \}$ can occur for $L$ of type $A_{1} A_{2}$. Hence the largest module dimension occurring is $14$.
Similarly, for $G$ of type $E_{6}$, the highest-dimensional modules for a Levi subgroup with weights as in Lemma \ref{lem:radfilts} are the 20-dimensional module $\lambda_{3}$ for $L$ of type $A_{5}$, and the 20-dimensional module $\lambda_{1} \otimes \lambda_{2}$ for $L$ of type $A_{1}A_{4}$.
For $G$ of type $E_{7}$, and $L$ of type $A_{6}$ we get a 35-dimensional module $\lambda_{3}$. The only higher-dimensional module for a Levi subgroup with weights as in Lemma \ref{lem:radfilts} is the $40$-dimensional module $1 \otimes \lambda_{3}$ for $L$ of type $A_{1}A_{5}$. However, $G$ has a unique standard parabolic subgroup of this type, and we check directly that the high weights of modules occurring in the unipotent radical are $1 \otimes \lambda_{2}$, $0 \otimes \lambda_{4}$ and $1 \otimes 0$, and all such modules have dimension less than $35$.
For $G$ of type $E_{8}$ and $L$ of type $D_{7}$ we get a 64-dimensional module $\lambda_{7}$. The only modules of larger dimension for a Levi factor with weights as in Lemma \ref{lem:radfilts} are the 70-dimensional modules $1 \otimes \lambda_{3}$ and $1 \otimes \lambda_{4}$ when $L$ has type $A_{1} A_{6}$, and again we check directly that these do not occur in a standard parabolic subgroup with this Levi subgroup type. \qed
\section{Completely Reducible Subgroups}
Corollary \ref{cor:levisub} above places restrictions on the feasible characters potentially arising from a non-$G$-cr subgroup of an adjoint exceptional simple algebraic group $G$.
Let $S \cong H$ be a non-$G$-cr subgroup of $G$, and let $\tilde{S}$ be a minimal preimage of $S$ in the simply connected group $\tilde{G}$. Then $\tilde{S}$ lies in a proper parabolic subgroup $P$ of $\tilde{G}$, with Levi decomposition $P = QL$. Since $\tilde{G}$ is simply connected, so is $L'$, hence $L'$ is a direct product of simply connected simple groups. In addition the image of $\tilde{S}$ under the projection $P \twoheadrightarrow L$ lies in $L'$, since $\tilde{S}$ is perfect. Thus if $L_{0}$ is a simple factor of $L'$, then $\tilde{S}$ acts on the natural module $V_{L_{0}}(\lambda_1)$ if $L_{0}$ is classical, and if $L_{0}$ is exceptional then the image of $\tilde{S}$ under $L' \twoheadrightarrow L_{0}$ corresponds to a feasible character in Chapter \ref{chap:thetables}.
Thus it is straightforward to determine whether $\tilde{S}$ admits an embedding into a proper Levi subgroup of $\tilde{G}$. In Chapter \ref{chap:thetables} we have labelled `\textbf{N}' those feasible characters which satisfy the conclusion of Corollary \ref{cor:levisub}, for subgroup types of $\tilde{S}$ which admit an embedding into a proper Levi subgroup of $G$. In Table \ref{tab:candidates}, we collect together all triples $(G,H,p)$ with such a feasible character. Thus if $G$ has a non-$G$-cr subgroup isomorphic to the group $H \notin \textup{Lie}(p)$, then $(G,H,p)$ appears in Table \ref{tab:candidates}.
Recall that the groups $\Alt_5 \cong L_2(4) \cong L_2(5)$, $\Alt_6 \cong L_2(9) \cong Sp_4(2)'$, $\Alt_8 \cong L_4(2)$, $L_2(7) \cong L_3(2)$, $U_4(2) \cong PSp_4(3)$ and $U_3(3) \cong G_2(2)'$ are considered to be of Lie type in each corresponding characteristic, hence do not appear in those characteristics in Table \ref{tab:candidates}.
Note also that if $G$ is of type $G_2$, then according to Table \ref{tab:onlyprim}, the only non-generic finite simple subgroups of $G$ which are not Lie primitive in $G$ are isomorphic to $\Alt_{5}$ or $L_2(7)$, in which case $p \neq 2$. But neither $\Alt_{5}$ nor $L_2(7)$ has a non-trivial 2-dimensional module for $p \neq 2$, and so these groups have no embeddings into a Levi subgroup of $G$, hence all non-generic finite simple subgroups of $G$ are $G$-irreducible in this case.
\begin{table}[H]\small \caption{Candidate Non-$G$-cr subgroup types}
\begin{tabularx}{.9\linewidth}{c|X} $G$ & \multicolumn{1}{c}{$(H,p)$} \\ \hline $F_4$ & $(\Alt_{5},3)$, $(J_2,2)$, $(L_2(7),3)$, $(L_2(8),7)$, $(L_2(8),3)$, $(L_2(13),2)$ \\ \hline
$E_6$ & $(\Alt_{10},2)$, $(\Alt_{9},2)$, $(\Alt_{7},7)$, $(\Alt_{7},3)$, $(\Alt_{7},2)$, $(\Alt_{6},5)$, $(\Alt_{5},3)$, $(M_{11},5)$, $(M_{11},3)$, $(M_{11},2)$, $(M_{22},2)^{\dag}$, $(J_2,2)$, $(L_2(7),3)$, $(L_2(8),7)$, $(L_2(8),3)$, $(L_2(11),5)$, $(L_2(11),3)$, $(L_2(11),2)$, $(L_2(13),7)$, $(L_2(13),2)$, $(L_2(17),3)$, $(L_2(17),2)$, $(U_4(3),2)^{\dag}$ \\ \hline
$E_7$ & $(\Alt_{12},2)$, $(\Alt_{10},2)$, $(\Alt_{9},3)$, $(\Alt_9,2)$, $(\Alt_{8},3)$, $(\Alt_{7},7)$, $(\Alt_{7},5)$, $(\Alt_{7},3)$, $(\Alt_{7},2)$, $(\Alt_{6},5)$, $(\Alt_{5},3)$, $(M_{11},11)$, $(M_{11},5)$, $(M_{11},3)$, $(M_{11},2)$, $(M_{12},3)^{\dag}$, $(M_{12},2)$, $(J_2,3)^{\dag}$, $(J_2,2)$, $(L_2(7),3)$, $(L_2(8),7)$, $(L_2(8),3)$, $(L_2(11),5)$, $(L_2(11),3)$, $(L_2(11),2)$, $(L_2(13),7)$, $(L_2(13),3)$, $(L_2(13),2)$, $(L_2(17),3)$, $(L_2(17),2)$, $(L_2(19),5)^{\dag}$, $(L_2(19),3)$, $(L_2(19),2)$, $(L_2(25),3)$, $(L_2(25),2)$, $(L_2(27),13)^{\dag}$, $(L_2(27),7)$, $(L_2(27),2)$, $(L_3(3),13)$, $(L_3(3),2)$, $(L_{3}(4),3)^{\dag}$, $(U_3(3),7)$, $(^{3}D_4(2),3)$, $(^{2}F_4(2)',5)$ \\ \hline
$E_8$ & $(\Alt_{16},2)$, $(\Alt_{14},2)$, $(\Alt_{12},2)$, $(\Alt_{10},5)$, $(\Alt_{10},2)$, $(\Alt_{9},3)$, $(\Alt_{9},2)$, $(\Alt_{8},7)$, $(\Alt_{8},3)$, $(\Alt_{7},7)$, $(\Alt_{7},5)$, $(\Alt_{7},3)$, $(\Alt_{7},2)$, $(\Alt_{6},5)$, $(\Alt_{5},3)$, $(M_{11},11)$, $(M_{11},5)$, $(M_{11},3)$, $(M_{11},2)$, $(M_{12},2)$, $(J_2,2)$, $(L_2(7),3)$, $(L_2(8),7)$, $(L_2(8),3)$, $(L_2(11),5)$, $(L_2(11),3)$, $(L_2(11),2)$, $(L_2(13),7)$, $(L_2(13),3)$, $(L_2(13),2)$, $(L_2(17),3)$, $(L_2(17),2)$, $(L_2(19),5)$, $(L_2(19),3)$, $(L_2(19),2)$, $(L_2(25),3)$, $(L_2(25),2)$, $(L_2(27),7)$, $(L_2(27),2)$, $(L_2(29),2)$, $(L_2(37),2)$, $(L_3(3),13)$, $(L_3(3),2)$, $(U_3(3),7)$, $(U_3(8),3)$, $(U_4(2),5)$, $(PSp_4(5),2)$, $(^{3}D_4(2),3)$, $(^{2}F_4(2)',5)$ \end{tabularx}\\ If a subgroup type is marked with $^{\dag}$, then any non-$G$-cr subgroup of this type lifts to a proper cover in the simply connected cover of $G$. \label{tab:candidates} \end{table}
\section{Proof of Theorem \ref{THM:NONGCR}}
We now prove that certain triples $(G,H,p)$ appearing in Table \ref{tab:candidates} do not give rise to non-$G$-cr subgroups. This proves Theorem \ref{THM:NONGCR} and Corollary \ref{cor:nongcr}.
\begin{proposition} \label{prop:moregcrtypes} If $G$ is an adjoint exceptional simple algebraic group in characteristic $p$, if $H \notin \textup{Lie}(p)$, and if $(G,H,p)$ appears in Table \ref{tab:onlycgr}, then all subgroups of $G$ isomorphic to $H$ are $G$-completely reducible. \end{proposition}
\begin{table}[H]\small \caption{Candidates giving rise only to $G$-cr subgroups}
\begin{tabularx}{.9\linewidth}{c|X} $G$ & $(H,p)$ \\ \hline $E_6$ & $(\Alt_{10},2)$, $(\Alt_{9},2)$, $(\Alt_{7},2)$, $(M_{11},2)$, $(J_2,2)$, $(L_2(13),7)$ \\ $E_7$ & $(J_{2},3)$, $(L_2(19),5)$, $(L_2(19),3)$, $(L_2(25),2)$, $(L_2(27),13)$, $(L_2(27),7)$, $(L_{3}(4),3)$ \\ $E_8$ & $(\Alt_{14},2)$, $(\Alt_{8},7)$, $(L_2(27),7)$, $(L_2(29),2)$ \end{tabularx} \label{tab:onlycgr} \end{table}
\proof In each case, let $S \cong H$ be a subgroup of $G$, let $\tilde{S}$ be a preimage of $S$ in the simply connected cover $\tilde{G}$ of $G$, and suppose that $P$ is a proper parabolic subgroup, with Levi decomposition $P = QL$, which is minimal among parabolic subgroups of $\tilde{G}$ containing $\tilde{S}$, so that the image of $\tilde{S}$ in $L$ is an $L'$-irreducible subgroup of $L'$.
If $G$ is of type $E_7$, note that no cover of any of the corresponding groups $L_{2}(q)$ in Table \ref{tab:onlycgr} has a faithful, irreducible orthogonal module of dimension $\le 12$, a symplectic module of dimension $\le 10$, or any faithful module of dimension $\le 7$ (cf.\ \cite{MR1835851}*{Table 2}). In particular, this means that such a group cannot have a non-trivial homomorphism into a simple factor of $L$ of classical type. Thus $L'$ is simple of type $E_6$. The corresponding unipotent radical is abelian and irreducible as an $L'$-module, with high weight $\lambda_1$. Since $V_{E_7}(\lambda_7) \downarrow L' = 0^{2}/\lambda_1/\lambda_6$ by \cite{MR1329942}*{Table 8.2}, we deduce that a non-$G$-cr subgroup isomorphic to $H$ must have a composition factor on both $L(G)$ and $V_{E_7}(\lambda_7)$ with a nonzero first cohomology group. Inspecting the relevant tables (\ref{l219e7p5}, \ref{l219e7p3}, \ref{l225e7p2}, \ref{l227e7p13}, \ref{l227e7p7}), we see that this never occurs.
If $(G,H,p) = (E_{7},J_{2},3)$, the double cover $2 \cdot J_{2}$ has a faithful 6-dimensional symplectic irreducible module, giving an irreducible embedding into a Levi subgroup of type $A_{5}$. Now $H^{1}(2 \cdot J_2,6_{a}) = H^{1}(2 \cdot J_2,6_{b}) = 0$, and also $J_{2}$ does not embed into a smaller exceptional group when $p = 3$ by Theorem \ref{thm:subtypes}. It follows that $L'$ is simple of type $A_{5}$. Comparing Table \ref{2j2e7p3} with the composition factors of $L'$ on $L(G)$ (given by \cite{MR1329942}*{Table 8.2}), we deduce that $L(G) \downarrow L' = 0^{8}/\lambda_2^{3}/\lambda_4^{3}/(\lambda_1+\lambda_5)$, and in particular, the non-trivial irreducible $L'$-modules occurring in the filtration of $Q$ each have high weight $\lambda_2$ or $\lambda_4$. The corresponding Weyl modules are irreducible and are isomorphic to the alternating squares of the modules $\lambda_1$ and $\lambda_5$, respectively, which restrict to irreducible 6-dimensional $\tilde{S}$-modules. Using Magma to help with calculations, we find that $\bigwedge^{2}(6_{a})$ and $\bigwedge^{2}(6_{b})$ are each self-dual and uniserial with composition factor dimensions $1$, $13$, $1$. Since $H^{1}(J_2,13_{a}) \cong H^{1}(J_2,13_b)$ is $1$-dimensional, from Proposition \ref{prop:substab}(ii) it follows that $H^{1}(\tilde{S},\lambda_2) = H^{1}(\tilde{S},\lambda_4) = 0$, and so Proposition \ref{prop:levisub} applies and $\tilde{S}$ is $G$-completely reducible.
If $(G,H,p) = (E_{7},L_{3}(4),3)$, the double cover $2 \cdot L_{3}(4)$ of $H$ has a faithful 6-dimensional irreducible module, giving an embedding into a subgroup of type $A_{5}$. Since the image of $S$ in $L'$ is $L'$-irreducible, and since $L_{3}(4)$ does not embed into a smaller exceptional group by Theorem \ref{thm:subtypes}, we deduce that $L'$ is simple of type $A_{5}$. But by \cite{MR1329942}*{Table 8.6}, each subgroup of $G$ of type $A_{5}$ has at least four six-dimensional composition factors on $V_{56}$. Thus the embedding of $H$ must correspond to Case 3) of Table \ref{2l34e7p3}, which cannot come from a non-$G$-cr subgroup by Corollary \ref{cor:levisub}.
If $(G,H,p) = (E_6,\Alt_{10},2)$, we must have $L'$ simple of type $D_5$, as no other Levi subgroup admits an embedding of $H$. Since $\Alt_{10}$ does not preserve a nondegenerate quadratic form on its $8$-dimensional irreducible module, and has no other non-trivial irreducible modules of dimension $\le 10$, a subgroup $S \cong H$ of $L'$ must act uniserially on the natural 10-dimensional $L'$-module, with shape $1|8|1$. Inspecting Table \ref{a10e6p2} we see that the $L'$-module of high weight $\lambda_5$ occurring in the filtration of $Q$ restricts to $S$ as an irreducible 16-dimensional module. Applying Proposition \ref{prop:substab}(ii) to $1|8|1$, in each case we deduce that $H^{1}(S,V) = \{0\}$, and so all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
If $(G,H,p) = (E_6,\Alt_{9},2)$, then $L'$ is simple of type $D_4$. Every non-trivial $L'$-module occurring in the filtration of the unipotent radical has dimension $8$, hence restricts to a subgroup $S \cong H$ with only trivial or 8-dimensional composition factors. The first cohomology group vanishes for each such $S$-module, hence by Corollary \ref{cor:levisub} all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
If $(G,H,p) = (E_6,\Alt_{7},2)$, a non-$G$-cr subgroup isomorphic to $H$ must correspond to Case 1) or 3) of Table \ref{a7e6p2}, or to Case 1) of Table \ref{3a7e6p2}. The Levi subgroups of $G$ containing a copy of $H$ are of type $A_3$ and $A_5$. The only factors in the feasible character with nonzero first cohomology group are $14$- or $20$-dimensional. By Lemma \ref{lem:radfilts}, the non-trivial modules occurring in the filtration of the unipotent radical of an $A_{3}$-parabolic subgroup containing $S$ have high weight $\lambda_1$, $\lambda_2$ or $\lambda_3$, hence dimension 4 or 6, and hence have trivial first cohomology group, so such a parabolic cannot contain a non-$G$-cr subgroup isomorphic to $H$.
On the other hand, an $A_5$ Levi subgroup of a parabolic $P$ acts on $R_u(P)$ with high weights $\lambda_3$ and $0$. It is routine to check that the module $\lambda_3 \downarrow S = \bigwedge^{3}(\lambda_1) \downarrow S = \bigwedge^{3}(6_a)$ or $\bigwedge^{3}(6_b)$ has no 14- or 20-dimensional composition factors, when $S \cong 3.\Alt_{7}$ is a subgroup of $A_5$. Thus no non-$G$-cr subgroups isomorphic to $H$ occur here, either.
If $(G,H,p) = (E_6,M_{11},2)$, the only parabolic subgroups of $G$ admitting an embedding of $H$ have Levi factor $L$ with $L'$ simple of type $D_5$. Then $Q$ is a 16-dimensional irreducible $L'$-module, which by comparison with Table \ref{m11e6p2} must restrict irreducibly to a subgroup $S \cong H$. Thus $H^{1}(H,Q) = \{0\}$ and so all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
If $(G,H,p) = (E_6,J_{2},2)$, a parabolic subgroup $P = QL$ containing an $L'$-irreducible copy of $H$ must have $L'$ of type $D_4$ or $A_5$. In the first case, the unipotent radical has a filtration by 8-dimensional $D_4$-modules which each admit a nondegenerate quadratic form. Since $H$ does not preserve a nonzero quadratic form on its 6-dimensional modules, and has no other faithful modules of dimension $\le 8$, each of these $D_4$-modules must restrict to $H$ with shape $1|6_x|1$, where $x \in \{a,b\}$. In particular, $H^{1}(H,V)$ vanishes for each such module (Proposition \ref{prop:substab}(ii)), hence $H^{1}(H,Q) = \{0\}$ and no non-$G$-cr subgroups arise here. If instead $L'$ is of type $A_5$, then the only non-trivial $L'$-module occurring in $Q$ has high weight $\lambda_3$. We find that $\bigwedge^{3}(6_a)$ is uniserial of shape $6_a|1|6_b|1|6_a$, and similarly for $6_b$. Now, if $H^{1}(H,\bigwedge^{3}(6_a)) \neq \{0\}$, let $E$ be a non-split extension of $\bigwedge^{3}(6_a)$ by the 1-dimensional trivial $KH$-module. Then $E$ has socle $6_a$, and so $E^{*}$ is an image of the projective cover $P_{6_a}$. Using Magma to help with calculations, we find that the largest quotient of $P_{6_a}$ having only 1- and 6-dimensional composition factors has only two trivial composition factors, and is thus not isomorphic to $E^{*}$. It follows that $H^{1}(H,\bigwedge^{3}(6_a))$ vanishes, and similarly $H^{1}(H,\bigwedge^{3}(6_b))$ vanishes, so all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
If $(G,H,p) = (E_6,L_2(13),7)$ we must have $L'$ of type $D_{4}$. Then $V_{27} \downarrow L'$ has composition factor dimensions $8,8,8,1,1,1$ or $10,16,1$. Now, if there exists a non-$G$-cr subgroup $S \cong H$, then as well as a trivial composition factor, $S$ must have at least two composition factors of dimension 12, since these are self-dual and are the only factors with nonzero first cohomology group. Thus we are in Case 2) of Table \ref{l213e6p7}, and so $V_{27} \downarrow S = 1/12/14_b$. This is not compatible with the $L'$-composition factors of $V_{27}$, hence all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
For $(G,H,p) = (E_8,\Alt_{14},2)$, noting that $H$ does not preserve a nonzero quadratic form on its 12-dimensional irreducible module, we necessarily have $L'$ simple of type $D_7$, with $S$ acting on the natural 14-dimensional $L'$-module as a self-dual uniserial module of shape $1|12|1$. By Table \ref{a14e8p2}, we have $L(G) \downarrow S = 1^{8}/12^{4}/64_a/64_b^{2}$. Now, $R_u(P)$ has two levels, which are irreducible $L'$-modules of high weight $\lambda_6$ and $\lambda_1$. These restrict to $S$ as $64_b$ and a uniserial module of shape $1|12|1$, respectively. In each case we have $H^{1}(H,V) = \{0\}$ for every composition factor $V$, and hence $S$ is $G$-completely reducible.
For $(G,H,p) = (E_8,\Alt_{8},7)$, suppose first that $L'$ has type $E_{7}$. Then the non-trivial $L'$-modules occurring in the corresponding unipotent radical have high weight $\lambda_7$, and by Table \ref{a8e7p7} we have $H^{1}(S,M) = 0$ for each $S$-composition factor $M$ of each such module, hence $S$ is $G$-completely reducible in this case.
So now assume that $L'$ has no exceptional factors. Then $L'$ is simple of type $D_{7}$, $A_{6}$ or $D_{4}$, since the smallest non-trivial irreducible $H$-modules have dimensions $7$ and $14$. If $L'$ has type $D_{7}$ then $V_{D_7}(\lambda_1)$ occurs with multiplicity 2 as a composition factor of $L(G)$ (cf. \cite{MR1329942}*{Table 8.1}). Since $S$ is irreducible on this module, this contradicts the fact that $L(G) \downarrow S$ has at most one 14-dimensional composition factor (Table \ref{a8e8p7}). Similarly if $L'$ has type $D_{4}$ then $L'$ has at least $28$ trivial composition factors on $L(G)$, contradicting the feasible characters in Table \ref{a8e8p7}. Hence $L'$ has type $A_{6}$. By Lemma \ref{lem:radfilts}, the $L$-modules occurring in the corresponding unipotent radical $Q$ have high weights $\lambda_1$, $\lambda_2$, $\ldots$, $\lambda_6$, and these are respectively isomorphic to $\bigwedge^{1}(V_{A_6}(\lambda_1))$, $\bigwedge^{2}(V_{A_6}(\lambda_1))$, $\ldots$, $\bigwedge^{6}(V_{A_6}(\lambda_1))$. Using Magma to help with calculations, we find that the $S$-modules $\bigwedge^{1}(7)$, $\bigwedge^{2}(7)$, $\ldots$, $\bigwedge^{6}(7)$ are each irreducible, and in particular none of these involve a 19-dimensional composition factor. Hence $H^{1}(S,M) = 0$ for each $S$-composition factor of each $L'$-module occurring in the filtration of $Q$, and so $S$ is $G$-completely reducible.
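For the record, the dimensions involved in the $A_6$ case follow from an elementary count: the $k$-th exterior power of the natural 7-dimensional module has dimension
\[
\dim \textstyle\bigwedge^{k}\!\big(V_{A_6}(\lambda_1)\big) = \binom{7}{k} = 7,\ 21,\ 35,\ 35,\ 21,\ 7 \qquad (k = 1, \dots, 6),
\]
consistent with the irreducible restrictions to $S$ found above.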
For $(G,H,p) = (E_8,L_2(27),7)$, the only Levi subgroups of $G$ admitting an embedding of $H$ have derived subgroup $E_6$ or $E_7$. In the corresponding parabolic subgroups, the unipotent radicals have filtrations by modules of high weight $0$, $\lambda_1$ or $\lambda_6$ for $E_6$, or $\lambda_7$ for $E_7$. On the other hand, the composition factors of a subgroup $S \cong H$ on these modules are given by Tables \ref{l227e6p7} and \ref{l227e7p7}; they are always 1- or 13-dimensional, and hence their first cohomology groups vanish. Thus Proposition \ref{prop:levisub} applies to any such subgroup $S$, so all subgroups $S \cong H$ of $G$ are $G$-completely reducible.
For $(G,H,p) = (E_8,L_2(29),2)$, if $P = QL$ is a parabolic subgroup containing a copy of $H$, then $L'$ has type $E_7$. From Table \ref{l229e7p2}, the 56-dimensional module $V_{L'}(\lambda_7)$ restricts to a subgroup isomorphic to $H$ with composition factors of dimension 28, and $H^{1}(H,V)$ vanishes for each such factor $V$. Now, $Q$ has two levels, which are irreducible $L'$-modules of respective high weights $\lambda_7$ and $0$. Thus $H^{1}(H,Q)$ vanishes and all subgroups $S \cong H$ of $G$ are $G$-completely reducible. \qed
\subsection{Remark: Existence of Non-Completely Reducible Subgroups} \label{rem:existence}
At this stage, we do not attempt the converse problem of classifying non-$G$-cr subgroups of each candidate type. In many cases, finding a non-$G$-cr subgroup isomorphic to $H$ is straightforward using the existence of indecomposable $H$-modules; for instance, since $\Alt_{5}$ has an indecomposable module of shape $1|4$ when $p = 3$, this gives a non-completely reducible embedding into $SL_5(K)$. A result of Serre \cite{MR2167207}*{Proposition 3.2} then implies that the image of $\Alt_{5}$ is non-$G$-completely reducible whenever $SL_5(K)$ is embedded as a Levi subgroup into $G = E_6(K)$, $E_7(K)$ or $E_8(K)$.
In general, however, classifying non-$G$-cr subgroups requires determining properties of the \emph{non-abelian cohomology set} $H^{1}(S,Q)$, where $S$ is a finite subgroup of $G$ and $Q$ is the unipotent radical of a parabolic subgroup containing $S$. When $Q$ is not abelian, this is not a group but only a pointed set, and its exact structure is not straightforward to determine. For instance, consider the case $(G,H,p) = (E_6,M_{22},2)$, where $3.M_{22}$ admits an $L'$-irreducible embedding into $L' = SL_6(K)$, for $L$ a Levi subgroup of $G$. The corresponding unipotent radical has two levels, and $H^{1}(M_{22},Q)$ fits into an exact sequence of pointed sets:
\[ \{0\} \to H^{1}(M_{22},Q) \to H^{1}(M_{22},10|10^{*}) \to H^{2}(M_{22},K) \]
where $H^{1}(M_{22},10|10^{*}) \cong H^{2}(M_{22},K) \cong K$.
\chapter{Tables of Feasible Characters} \label{chap:thetables}
\addtocontents{toc}{\setcounter{tocdepth}{-1}} \section*{Notes on the Tables} \addtocontents{toc}{\setcounter{tocdepth}{1}} If $G$ is a simply connected exceptional simple algebraic group, and $H$ is a quasisimple finite group which embeds into $G$, and $H/Z(H) \notin \textup{Lie}(p)$, then the following tables give all feasible characters of $H$ on the adjoint and minimal modules for $G$, such that $Z(H)$ acts as a group of scalars on $V_{\textup{min}}$. These tables therefore contain the composition factors of the possible restrictions $L(G) \downarrow \tilde{S}$ and $V_{\textup{min}} \downarrow \tilde{S}$, whenever $S$ is a simple subgroup of $G/Z(G)$ and $\tilde{S}$ is a minimal preimage of $S$ in $G$.
The relevant $KG$-modules are denoted as follows: \begin{center}
\begin{tabular}{c|c|c} $G$ & $L(G)$ & $V_{\textup{min}}$ \\ \hline $E_8$ & $W(\lambda_1) = V_{248}$ & $V_{248}$ \\ $E_7$ & $W(\lambda_1) = V_{133}$ & $W(\lambda_7) = V_{56}$ \\ $E_6$ & $W(\lambda_2) = V_{78}$ & $W(\lambda_1) = V_{27}$ \\ $F_4$ & $W(\lambda_1) = V_{52}$ & $W(\lambda_4) = V_{26}$ \\ \end{tabular} \end{center}
In calculating these tables, we have used information on elements of $\tilde{S}$ and $\tilde{G}$ of orders $2$ to $37$. A very small number of characters given here may be ruled out as occurring via an embedding $\tilde{S} \to \tilde{G}$ by consideration of elements of higher order. For example, $PSL_2(61)$ has two 30-dimensional irreducible modules in characteristic 2, whose Brauer characters differ only on elements of order 61.
If two sets of feasible characters differ only by permuting the module isomorphism types, we list only one member of each orbit. For example, line 1) of the first table for $\Alt_{6}$ overleaf corresponds to \emph{two} sets of compatible feasible characters of $\Alt_6$ on $L(G)$ and $V_G(\lambda_4)$, when $G = F_4$ and $p = 0$ or $p > 5$; these respectively have factors $8_b^{4}/10^{2}$ and $8_a^{4}/10^{2}$ on $L(G)$ (and similarly for $V_{\textup{min}}$). When we have shortened a table in this way, the permutations used will be noted underneath the table.
For $G$ not of type $E_6$, each irreducible $G$-module is self-dual, and hence each irreducible factor of a feasible character occurs with the same multiplicity as its dual. For $G$ of type $E_6$, note that a feasible character on $V_G(\lambda_1)$ gives rise to a feasible character on $V_G(\lambda_6) \cong V_G(\lambda_1)^{*}$ by the taking of duals. We therefore omit characters which arise by taking duals.
Finally, since several of the results of this paper follow by inspecting each of these tables, for convenience we attach labels to feasible characters satisfying certain conditions, as follows. Let $L$ and $V$ respectively be the sum of non-trivial $G$-composition factors of $L(G)$ and $V_{\textup{min}}$.
\begin{center}
\begin{tabularx}{\textwidth}{c|X} \textbf{P} & Possibly Lie \textbf{P}rimitive. A row has this label if and only if a subgroup $\tilde{S}$ of $G$ with these composition factors on $L(G)$ and $V_{\textup{min}}$ fails to satisfy the conditions of Proposition \ref{prop:substab}(i) and (iii) in its action on both $L$ and $V$. Every non-generic Lie primitive finite simple subgroup of $G$ must give rise to feasible characters with this label, though the converse is not true. \\ \hline \textbf{N} & Possibly \textbf{N}on-$G$-cr. A feasible character has this label if and only if both:\\ & \ $\bullet$ \ $H$ admits an embedding into a proper Levi subgroup of $G$, and\\ & \ $\bullet$ \ The feasible character of $H$ on $L$ satisfies the conclusion of Corollary \ref{cor:levisub}, that is, it contains a trivial factor, as well as a factor $W$ such that $H^{1}(H,W)$ is nonzero, and either $W$ occurs with multiplicity at least two, or $W \ncong W^{*}$. Furthermore, $W$ has dimension at most $14$, $20$, $35$ or $64$, if the type of $G$ is respectively $F_{4}$, $E_{6}$, $E_{7}$ or $E_{8}$.\\ & Every non-$G$-cr non-generic finite simple subgroup of $G$ must give rise to feasible characters with this label, though the converse is not true. \end{tabularx} \end{center} Thus a triple $(G,H,p)$ appears in Table \ref{tab:substabprop} (page \pageref{tab:substabprop}) if and only if none of the corresponding feasible characters are marked with \textbf{P}, and the triple appears in Table \ref{tab:candidates} (page \pageref{tab:candidates}) if and only if some corresponding feasible character is marked with \textbf{N}.
\section{$F_{4}$} \label{sec:F4tabs}
\subsection{Alternating Groups} \
\begin{table}[H]\small \caption{Alt$_{10} < F_4$, $p = 2$}
\begin{tabular}{r|cccc|cccc}
& \multicolumn{4}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$} \\ & 1 & 8 & 16 & 26 & 1 & 8 & 16 & 26 \\ \hline 1) & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 2) & 2 & 1 & 1 & 1 & 2 & 1 & 1 & 0 \end{tabular} \label{a10f4p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < F_4$, $p = 2$}
\begin{tabular}{r|ccccc|ccccc}
& \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 \\ \hline 1) & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 2) & 2 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(8_b,8_c)$. \label{a9f4p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < F_4$, $p = 5$}
\begin{tabular}{r|cccc|cccc}
& \multicolumn{4}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$} \\ & 1 & 8 & 10 & 10$^{*}$ & 1 & 8 & 10 & 10$^{*}$ \\ \hline 1) & 0 & 4 & 1 & 1 & 2 & 3 & 0 & 0 \end{tabular} \label{a7f4p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < F_4$, $p = 2$}
\begin{tabular}{r|cccccc|cccccc}
& \multicolumn{6}{c|}{$V_{52}$} & \multicolumn{6}{c}{$V_{26}$} \\ & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 \\ \hline 1) & 4 & 2 & 2 & 2 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 2) & 4 & 2 & 2 & 3 & 1 & 0 & 0 & 0 & 0 & 2 & 1 & 0 \\ 3) & 4 & 2 & 2 & 2 & 0 & 1 & 4 & 2 & 2 & 1 & 0 & 0 \\ 4) & 4 & 2 & 2 & 3 & 1 & 0 & 4 & 2 & 2 & 1 & 0 & 0 \\ 5) & 4 & 2 & 2 & 3 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \end{tabular} \label{a7f4p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_{6} < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccccc|ccccccc}
& & \multicolumn{7}{c|}{$V_{52}$} & \multicolumn{7}{c}{$V_{26}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 \\ \hline 1) & & 0 & 0 & 0 & 0 & 4 & 0 & 2 & 2 & 0 & 0 & 0 & 3 & 0 & 0 \\ 2) & \textbf{P} & 0 & 0 & 0 & 1 & 3 & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 2 & 0 \\ 3) & & 0 & 0 & 0 & 2 & 2 & 0 & 2 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ 4) & & 2 & 1 & 1 & 0 & 0 & 0 & 4 & 1 & 0 & 0 & 1 & 1 & 1 & 0 \\ 5) & & 3 & 0 & 0 & 0 & 0 & 1 & 4 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 6) & & 3 & 0 & 0 & 0 & 0 & 1 & 4 & 1 & 0 & 3 & 0 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(5_a,5_b)$, $(8_a,8_b)$. \label{a6f4p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{6} < F_4$, $p = 5$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8 & 10 & 1 & 5$_a$ & 5$_b$ & 8 & 10 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 4 & 2 & 2 & 0 & 0 & 3 & 0 \\ 2) & & 2 & 1 & 1 & 0 & 4 & 2 & 0 & 0 & 3 & 0 \\ 3) & & 4 & 0 & 0 & 1 & 4 & 0 & 1 & 1 & 2 & 0 \\ 4) & & 4 & 0 & 0 & 1 & 4 & 1 & 0 & 3 & 0 & 1 \end{tabular}\\ Permutations: $(5_a,5_b)$. \label{a6f4p5} \end{table}
\begin{table}[H]\small
\caption{Alt$_{5} < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & $1$ & $3_a$ & $3_b$ & $4$ & $5$ & $1$ & $3_a$ & $3_b$ & $4$ & $5$\\ \hline 1) & \textbf{P} & 0 & 3 & 5 & 2 & 4 & 0 & 1 & 0 & 2 & 3\\ 2) & & 3 & 3 & 5 & 5 & 1 & 0 & 1 & 0 & 2 & 3\\ 3) & & 14& 1 & 0 & 0 & 7 & 0 & 7 & 0 & 0 & 1\\ 4) & & 0 & 4 & 4 & 2 & 4 & 1 & 1 & 1 & 1 & 3\\ 5) & & 3 & 4 & 4 & 5 & 1 & 1 & 1 & 1 & 1 & 3\\ 6) & & 0 & 6 & 2 & 2 & 4 & 2 & 3 & 0 & 0 & 3\\ 7) & & 3 & 6 & 2 & 5 & 1 & 2 & 3 & 0 & 0 & 3\\ 8) & & 3 & 3 & 5 & 5 & 1 & 3 & 1 & 0 & 5 & 0\\ 9) & & 3 & 4 & 4 & 5 & 1 & 4 & 1 & 1 & 4 & 0\\ 10)& & 3 & 6 & 2 & 5 & 1 & 5 & 3 & 0 & 3 & 0\\ 11)& & 8 & 0 & 13& 0 & 1 & 8 & 0 & 6 & 0 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5f4p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{5} < F_4$, $p = 3$}
\begin{tabular}{rc|cccc|cccc}
& & \multicolumn{4}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$}\\ & & 1 & 3$_a$ & 3$_b$ & 4 & 1 & 3$_a$ & 3$_b$ & 4 \\ \hline 1) & \textbf{N} & 21 & 1 & 0 & 7 & 1 & 7 & 0 & 1\\ 2) & \textbf{P}, \textbf{N} & 4 & 3 & 5 & 6 & 3 & 1 & 0 & 5\\ 3) & \textbf{P}, \textbf{N} & 4 & 4 & 4 & 6 & 4 & 1 & 1 & 4\\ 4) & \textbf{N} & 4 & 6 & 2 & 6 & 5 & 3 & 0 & 3\\ 5) & & 9 & 0 & 13 & 1 & 8 & 0 & 6 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5f4p3} \end{table}
\subsection{Sporadic Groups} \
\begin{table}[H]\small \caption{$M_{11} < F_4$, $p = 11$}
\begin{tabular}{r|ccc|ccc}
& \multicolumn{3}{c|}{$V_{52}$} & \multicolumn{3}{c}{$V_{26}$} \\ & 10 & $10^{*}$ & 16 & 1 & 9 & 16 \\ \hline 1) & 1 & 1 & 2 & 1 & 1 & 1 \end{tabular} \label{m11f4p11} \end{table}
\begin{table}[H]\small \caption{$J_{1} < F_4$, $p = 11$}
\begin{tabular}{r|ccc|cc}
& \multicolumn{3}{c|}{$V_{52}$} & \multicolumn{2}{c}{$V_{26}$} \\ & 1 & 7 & 14 & 1 & 7 \\ \hline 1) & 3 & 5 & 1 & 5 & 3 \end{tabular} \label{j1f4p11} \end{table}
\begin{table}[H]\small \caption{$J_{2} < F_4$, $p = 2$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ \\ \hline 1) & \textbf{N} & 8 & 2 & 3 & 1 & 0 & 0 & 2 & 0 & 1 & 0 \\ 2) & \textbf{N} & 8 & 5 & 0 & 1 & 0 & 8 & 3 & 0 & 0 & 0 \end{tabular} \\ Permutations: $(6_a,6_b)(14_a,14_b)$. \label{j2f4p2} \end{table}
\subsection{Cross-characteristic Groups $L_2(q)$ $(q \neq 4$, $5$, $9)$} \
\begin{table}[H]\small
\caption{$L_2(7) < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|cccccc|ccccccc}
& & \multicolumn{6}{c|}{$V_{52}$} & \multicolumn{6}{c}{$V_{26}$} \\ & & 1 & 3 & $3^{*}$ & 6 & 7 & 8 & 1 & 3 & $3^{*}$ & 6 & 7 & 8 \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 0 & 2 & 4 & 0 & 1 & 1 & 2 & 0 & 1 \\ 2) & & 0 & 1 & 1 & 0 & 2 & 4 & 2 & 0 & 0 & 0 & 0 & 3 \\ 3) & & 3 & 1 & 1 & 0 & 5 & 1 & 0 & 1 & 1 & 2 & 0 & 1 \\ 4) & & 3 & 1 & 1 & 0 & 5 & 1 & 2 & 0 & 0 & 0 & 0 & 3 \\ 5) & & 3 & 1 & 1 & 0 & 5 & 1 & 5 & 0 & 0 & 0 & 3 & 0 \\ 6) & & 8 & 0 & 0 & 6 & 0 & 1 & 0 & 3 & 3 & 0 & 0 & 1 \\ 7) & & 8 & 6 & 6 & 0 & 0 & 1 & 8 & 3 & 3 & 0 & 0 & 0 \end{tabular} \label{l27f4p0} \end{table}
\begin{table}[H]\small \caption{$L_2(7) < F_4$, $p = 3$}
\begin{tabular}{rc|ccccc|cccccc}
& & \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & 1 & 3 & $3^{*}$ & $6_a$ & 7 & 1 & 3 & $3^{*}$ & $6_a$ & 7 \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 1 & 1 & 0 & 6 & 1 & 1 & 1 & 2 & 1 \\ 2) & \textbf{N} & 4 & 1 & 1 & 0 & 6 & 5 & 0 & 0 & 0 & 3 \\ 3) & \textbf{N} & 5 & 2 & 2 & 0 & 5 & 0 & 0 & 0 & 2 & 2 \\ 4) & & 9 & 0 & 0 & 6 & 1 & 1 & 3 & 3 & 0 & 1 \\ 5) & & 9 & 6 & 6 & 0 & 1 & 8 & 3 & 3 & 0 & 0 \end{tabular} \label{l27f4p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(8) < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{9}{c}|*{9}{c}}
& & \multicolumn{9}{c|}{$V_{52}$} & \multicolumn{9}{c}{$V_{26}$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{P} & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ 2) & & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 3) & & 1 & 2 & 1 & 1 & 1 & 2 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 4) & & 1 & 2 & 1 & 1 & 1 & 2 & 0 & 0 & 0 & 3 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 5) & & 3 & 1 & 0 & 5 & 1 & 0 & 0 & 0 & 0 & 5 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)(9_a,9_b,9_c)$. \label{l28f4p0} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < F_4$, $p = 7$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{52}$} & \multicolumn{6}{c}{$V_{26}$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 2 & 1 & 1 & 1 & 2 & 2 & 0 & 0 & 0 & 0 & 3 \\ 2) & \textbf{N} & 1 & 2 & 1 & 1 & 1 & 2 & 3 & 1 & 0 & 0 & 0 & 2 \\ 3) & & 3 & 1 & 0 & 1 & 5 & 0 & 5 & 0 & 0 & 0 & 3 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)$. \label{l28f4p7} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < F_4$, $p = 3$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & 1 & 7 & $9_a$ & $9_b$ & $9_c$ & 1 & 7 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 6 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 \\ 2) & \textbf{N} & 3 & 7 & 0 & 0 & 0 & 5 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b,9_c)$. \label{l28f4p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(13) < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{52}$} & \multicolumn{7}{c}{$V_{26}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $14_a$ & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $14_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 2) & & 3 & 0 & 5 & 0 & 0 & 0 & 1 & 5 & 0 & 3 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_{a},12_{b},12_{c})$. \label{l213f4p0} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < F_4$, $p = 7$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{52}$} & \multicolumn{6}{c}{$V_{26}$} \\ & & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\ 2) & & 3 & 0 & 5 & 0 & 1 & 0 & 5 & 0 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$. \label{l213f4p7} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < F_4$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{52}$} & \multicolumn{7}{c}{$V_{26}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ \\ \hline 1) & \textbf{P} & 0 & 2 & 2 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 2) & & 3 & 1 & 6 & 0 & 0 & 0 & 0 & 5 & 0 & 3 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_{a},12_{b},12_{c})$. \label{l213f4p3} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < F_4$, $p = 2$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{52}$} & \multicolumn{7}{c}{$V_{26}$} \\ & & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 2) & \textbf{N} & 8 & 0 & 5 & 0 & 0 & 0 & 1 & 8 & 0 & 3 & 0 & 0 & 0 & 0 \\ 3) & \textbf{N} & 8 & 2 & 3 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(6_a,6_b)$, $(12_{a},12_{b},12_{c})$. \label{l213f4p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(17) < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{3}{c}|*{5}{c}}
& & \multicolumn{3}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & $16_a$ & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & $16_a$ & $17$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 2) & & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217f4p0} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < F_4$, $p = 3$}
\begin{tabular}{rc|*{4}{c}|*{4}{c}}
& & \multicolumn{4}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$} \\ & & 16 & $18_a$ & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & 16 \\ \hline 1) & \textbf{P} & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217f4p3} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < F_4$, $p = 2$}
\begin{tabular}{r|*{4}{c}|*{4}{c}}
& \multicolumn{4}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$} \\ & 1 & $8_a$ & $8_b$ & $16_a$ & 1 & $8_a$ & $8_b$ & $16_a$ \\ \hline 1) & 4 & 2 & 2 & 1 & 2 & 0 & 1 & 1 \\ 2) & 4 & 2 & 2 & 1 & 2 & 1 & 2 & 0 \end{tabular}\\ Permutations: $(8_a,8_b)$. \label{l217f4p2} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < F_4$, $p \neq 2,3,5$}
\begin{tabular}{rc|*{2}{c}|*{1}{c}}
& & \multicolumn{2}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & $26_b$ & $26_c$ & $26_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular}\\ $26_b$, $26_c$ are $\textup{Aut}(L_2(25))$-conjugate. \label{l225f4p0} \label{l225f4p13} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < F_4$, $p = 3$}
\begin{tabular}{rc|*{1}{c}|*{2}{c}}
& & \multicolumn{1}{c|}{$V_{52}$} & \multicolumn{2}{c}{$V_{26}$} \\ & & 26 & 1 & 25 \\ \hline 1) & \textbf{P} & 2 & 1 & 1 \end{tabular} \label{l225f4p3} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < F_4$, $p = 2$}
\begin{tabular}{rc|*{1}{c}|*{1}{c}}
& & \multicolumn{1}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & 26 & 26 \\ \hline 1) & \textbf{P} & 2 & 1 \end{tabular} \label{l225f4p2} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < F_4$, $p \neq 2,3,7$}
\begin{tabular}{rc|*{3}{c}|*{3}{c}}
& & \multicolumn{3}{c|}{$V_{52}$} & \multicolumn{3}{c}{$V_{26}$} \\ & & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)(26_d,26_e,26_f)$. \label{l227f4p0} \label{l227f4p13} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < F_4$, $p = 7$}
\begin{tabular}{rc|c|cc}
& & \multicolumn{1}{c|}{$V_{52}$} & \multicolumn{2}{c}{$V_{26}$} \\ & & 26 & $13_a$ & $13_b$ \\ \hline 1) & \textbf{P} & 2 & 1 & 1 \end{tabular} \label{l227f4p7} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < F_4$, $p = 2$}
\begin{tabular}{rc|*{2}{c}|*{1}{c}}
& & \multicolumn{2}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & $26_a$ & $26_b$ & $26_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \\ \end{tabular}\\ Permutations: $(26_a,26_b,26_c)$. \label{l227f4p2} \end{table}
\subsection{Cross-characteristic Groups $\ncong L_2(q)$} \
\begin{table}[H]\small \caption{$L_3(3) < F_4$, $p \neq 2, 3$}
\begin{tabular}{rc|*{2}{c}|*{1}{c}}
& & \multicolumn{2}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & $26_b$ & $(26_b)^{*}$ & $26_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{l33f4p13} \label{l33f4p0} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < F_4$, $p = 2$}
\begin{tabular}{rc|c|c}
& & \multicolumn{1}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & 26 & 26 \\ \hline 1) & \textbf{P} & 2 & 1 \end{tabular} \label{l33f4p2} \end{table}
\begin{table}[H]\small \caption{$L_4(3) < F_4$, $p = 2$}
\begin{tabular}{rc|cc|c}
& & \multicolumn{2}{c|}{$V_{52}$} & \multicolumn{1}{c}{$V_{26}$} \\ & & $26_a$ & $26_b$ & $26_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular}\\ Permutations: $(26_a,26_b)$. \label{l43f4p2} \end{table}
\begin{table}[H]\small
\caption{$U_3(3) < F_4$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}|*{4}{c}}
& \multicolumn{7}{c|}{$V_{52}$} & \multicolumn{4}{c}{$V_{26}$} \\ & 1 & 6 & $7_a$ & $(7_a)^{*}$ & $7_b$ & 14 & $21_b$ & 1 & 6 & $7_b$ & 14 \\ \hline 1) & 3 & 0 & 0 & 0 & 5 & 1 & 0 & 5 & 0 & 3 & 0 \\ 2) & 3 & 0 & 2 & 2 & 0 & 0 & 1 & 0 & 2 & 0 & 1 \end{tabular} \label{u33f4p0} \end{table}
\begin{table}[H]\small \caption{$U_3(3) < F_4$, $p = 7$}
\begin{tabular}{rc|*{8}{c}|*{5}{c}}
& & \multicolumn{8}{c|}{$V_{52}$} & \multicolumn{5}{c}{$V_{26}$} \\ & & 1 & 6 & $7_a$ & $7_b$ & $(7_b)^{*}$ & 14 & $21_a$ & 26 & 1 & 6 & $7_a$ & 14 & 26 \\ \hline 1) & \textbf{P} & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 1 \\ 2) & & 3 & 0 & 0 & 2 & 2 & 0 & 1 & 0 & 0 & 2 & 0 & 1 & 0 \\ 3) & & 3 & 0 & 5 & 0 & 0 & 1 & 0 & 0 & 5 & 0 & 3 & 0 & 0 \end{tabular} \label{u33f4p7} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \begin{center} \arabic{chapter}.\arabic{section}.\arabic{table}: $^{3}D_4(2) < F_4$, $p \neq 2, 3$. Irreducible on $V_{52}$ and $V_{26}$. \textbf{P} \end{center} \label{3d4f4p7} \label{3d4f4p13} \label{3d4f4p0} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < F_4$, $p = 3$}
\begin{tabular}{rc|*{1}{c}|*{2}{c}}
& & \multicolumn{1}{c|}{$V_{52}$} & \multicolumn{2}{c}{$V_{26}$} \\ & & 52 & 1 & 25 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{3d4f4p3} \end{table}
\section{$E_6$} \label{sec:E6tabs}
\subsection{Alternating Groups} \
\begin{table}[H]\small \caption{Alt$_{12} < E_6$, $p = 2$}
\begin{tabular}{r|*{4}{c}|*{4}{c}}
& \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 16 & 16$^{*}$ & 44 & 1 & 10 & 16 & 16$^{*}$\\ \hline 1) & 2 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \end{tabular} \label{a12e6p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{11} < E_6$, $p = 2$}
\begin{tabular}{r|cccc|cccc}
& \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 16 & 16$^{*}$ & 44 & 1 & 10 & 16 & 16$^{*}$ \\ \hline 1) & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 0 \end{tabular} \label{a11e6p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{10} < E_6$, $p = 2$}
\begin{tabular}{r|cccc|cccc}
& \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 8 & 16 & 26 & 1 & 8 & 16 & 26 \\ \hline 1) & 2 & 1 & 1 & 2 & 1 & 0 & 0 & 1\\ 2) & 4 & 2 & 2 & 1 & 3 & 1 & 1 & 0 \end{tabular} \label{a10e6p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < E_6$, $p = 2$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 \\ \hline 1) & \textbf{P} & 2 & 1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 & 1\\ 2) & & 4 & 2 & 2 & 2 & 1 & 3 & 1 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(8_b,8_c)$. \label{a9e6p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_{7} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}|*{3}{c}}
& \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & 1 & $6_a$ & 10 & 10$^{*}$ & 14$_a$ & 14$_b$ & $15_a$ & 1 & $6_a$ & $15_a$ \\ \hline 1) & 3 & 1 & 2 & 2 & 1 & 0 & 1 & 0 & 2 & 1 \end{tabular}\\ Permutations: $(10,10^{*})$. \\ $14_a$ is a section of $6_a \otimes 6_a$. \label{a7e6p0} \end{table}
\begin{table}[H]\small
\caption{$3\cdot$Alt$_{7} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{8}{c}|*{2}{c}}
& \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & 1 & 6$_a$ & 10 & 10$^{*}$ & 14$_a$ & 14$_b$ & 15$_a$ & 21$_a$ & 6$_b$ & $15_c$ \\ \hline 1) & 3 & 0 & 2 & 2 & 1 & 0 & 0 & 1 & 2 & 1 \end{tabular}\\ $14_a$ is a section of $6_a \otimes 6_a$. \label{3a7e6p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_6$, $p = 7$}
\begin{tabular}{rc|ccccc|ccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & & 1 & 5 & 10 & 14$_a$ & 14$_b$ & 1 & 5 & 10 \\ \hline 1) & \textbf{N} & 4 & 2 & 5 & 1 & 0 & 2 & 3 & 1 \end{tabular}\\ $14_a$ is a section of $5 \otimes 5$. \label{a7e6p7} \end{table}
\begin{table}[H]\small \caption{$3\cdot$Alt$_{7} < E_6$, $p = 7$}
\begin{tabular}{r|ccccccc|*{6}{c}}
& \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & 1 & 5 & 10 & 14$_a$ & 14$_b$ & 21 & 35 & 6 & 6$^{*}$ & 9 & 9$^{*}$ & 15 & 15$^{*}$\\ \hline 1) & 3 & 0 & 4 & 1 & 0 & 1 & 0 & 0 & 2 & 0 & 0 & 0 & 1 \end{tabular}\\ $14_a$ is a section of $5 \otimes 5$. \label{3a7e6p7} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_6$, $p = 5$}
\begin{tabular}{rc|ccccccc|ccccc}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 6 & 8 & 10 & 10$^{*}$ & 15 & 35 & 1 & 6 & 8 & 13 & 15 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 0 & 0 & 0 & 2 & 0 & 1 & 1 & 1 & 0 \\ 2) & & 3 & 2 & 1 & 2 & 2 & 1 & 0 & 0 & 2 & 0 & 0 & 1 \\ 3) & & 2 & 0 & 7 & 1 & 1 & 0 & 0 & 3 & 0 & 3 & 0 & 0 \end{tabular} \label{a7e6p5} \end{table}
\begin{table}[H]\small \caption{$3\cdot$Alt$_{7} < E_6$, $p = 5$}
\begin{tabular}{rc|cccccccc|*{5}{c}}
& & \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 6$_a$ & 8 & 10 & 10$^{*}$ & 13 & 15$_a$ & 35 & $3$ & 6$_b$ & $(6_b)^{*}$ & 15$_b$ & 21 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 1 & 0 & 1 \\ 2) & & 3 & 1 & 2 & 2 & 2 & 1 & 0 & 0 & 0 & 0 & 2 & 1 & 0 \\ 3) & & 14 & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 7 & 1 & 0 & 0 & 0 \end{tabular}\\ $15_b$ is a section of $6_a \otimes 6_b$. \label{3a7e6p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_6$, $p = 3$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 1 & 2 & 2 & 1 & 1 & 0 & 2 & 0 & 0 & 0 & 1 \end{tabular} \label{a7e6p3} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_6$, $p = 2$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 4 & 4$^{*}$ & $6_a$ & 14 & 20 & 1 & 4 & 4$^{*}$ & $6_a$ & 14 & 20 \\ \hline 1) & \textbf{N} & 4 & 2 & 2 & 3 & 0 & 2 & 1 & 0 & 0 & 1 & 0 & 1\\ 2) & & 4 & 2 & 2 & 4 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1\\ 3) & \textbf{N} & 4 & 2 & 2 & 5 & 2 & 0 & 1 & 0 & 0 & 2 & 1 & 0\\ 4) & & 8 & 4 & 4 & 3 & 0 & 1 & 5 & 2 & 2 & 1 & 0 & 0\\ 5) & & 8 & 4 & 4 & 4 & 1 & 0 & 5 & 2 & 2 & 1 & 0 & 0 \end{tabular} \label{a7e6p2} \end{table}
\begin{table}[H]\small \caption{$3\cdot$Alt$_{7} < E_6$, $p = 2$}
\begin{tabular}{rc|cccccc|cccc}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & 4 & 4$^{*}$ & 6$_a$ & 14 & 20 & 6$_b$ & $(6_b)^{*}$ & 15 & $15^{*}$ \\ \hline 1) & \textbf{N} & 4 & 2 & 2 & 3 & 0 & 2 & 0 & 2 & 0 & 1\\ 2) & & 4 & 2 & 2 & 4 & 1 & 1 & 0 & 2 & 0 & 1 \end{tabular} \label{3a7e6p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_{6} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccccc|ccccccc}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{7}{c}{$V_{27}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 2 & 3 & 2 & 2 & 0 & 1 & 1 & 1 & 0 & 1 & 0\\ 2) & & 2 & 1 & 1 & 0 & 1 & 2 & 4 & 0 & 1 & 1 & 1 & 0 & 1 & 0\\ 3) & & 0 & 0 & 0 & 2 & 3 & 2 & 2 & 1 & 0 & 0 & 1 & 0 & 2 & 0\\ 4) & & 2 & 1 & 1 & 0 & 1 & 2 & 4 & 1 & 0 & 0 & 1 & 0 & 2 & 0\\ 5) & & 1 & 0 & 0 & 3 & 3 & 1 & 2 & 1 & 1 & 1 & 1 & 1 & 0 & 0\\ 6) & & 3 & 1 & 1 & 1 & 1 & 1 & 4 & 1 & 1 & 1 & 1 & 1 & 0 & 0\\ 7) & & 1 & 0 & 0 & 3 & 3 & 1 & 2 & 2 & 0 & 0 & 1 & 1 & 1 & 0\\ 8) & & 3 & 1 & 1 & 1 & 1 & 1 & 4 & 2 & 0 & 0 & 1 & 1 & 1 & 0\\ 9) & & 4 & 0 & 3 & 0 & 0 & 1 & 5 & 2 & 0 & 3 & 0 & 0 & 0 & 1\\ 10) & & 2 & 0 & 0 & 7 & 0 & 0 & 2 & 3 & 0 & 0 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(5_a,5_b)(8_a,8_b)$. \label{a6e6p0} \end{table}
\begin{table}[H]\small
\caption{$3\cdot$Alt$_{6} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccccc|*{6}{c}}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_ {27}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9$_a$ & 10 & $3_a$ & $3_b$ & 6 & 6$^{*}$ & 9$_b$ & 15 \\ \hline 1) & & 1 & 0 & 0 & 3 & 3 & 1 & 2 & 0 & 0 & 0 & 2 & 0 & 1\\ 2) & & 3 & 1 & 1 & 1 & 1 & 1 & 4 & 0 & 0 & 0 & 2 & 0 & 1\\ 3) & \textbf{P} & 0 & 0 & 0 & 2 & 3 & 2 & 2 & 0 & 1 & 1 & 0 & 2 & 0\\ 4) & & 2 & 1 & 1 & 0 & 1 & 2 & 4 & 0 & 1 & 1 & 0 & 2 & 0\\ 5) & & 2 & 0 & 0 & 7 & 0 & 0 & 2 & 0 & 3 & 3 & 0 & 0 & 0\\ 6) & & 14 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 7 & 1 & 0 & 0 & 0\\ 7) & & 1 & 0 & 0 & 3 & 3 & 1 & 2 & 1 & 1 & 2 & 0 & 1 & 0\\ 8) & & 3 & 1 & 1 & 1 & 1 & 1 & 4 & 1 & 1 & 2 & 0 & 1 & 0\\ 9) & & 8 & 0 & 0 & 1 & 1 & 6 & 0 & 3 & 3 & 0 & 0 & 1 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)(8_a,8_b)$. \label{3a6e6p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{6} < E_6$, $p = 5$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8 & 10 & 1 & 5$_a$ & 5$_b$ & 8 & 10 \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 0 & 0 & 7 & 2 & 1 & 1 & 1 & 2 & 0\\ 2) & \textbf{N} & 4 & 1 & 1 & 3 & 4 & 1 & 1 & 1 & 2 & 0\\ 3) & & 5 & 0 & 3 & 1 & 5 & 2 & 0 & 3 & 0 & 1\\ 4) & \textbf{N} & 2 & 0 & 0 & 7 & 2 & 3 & 0 & 0 & 3 & 0\\ 5) & \textbf{N} & 4 & 1 & 1 & 3 & 4 & 3 & 0 & 0 & 3 & 0 \end{tabular}\\ Permutations: $(5_a,5_b)$. \label{a6e6p5} \end{table}
\begin{table}[H]\small \caption{$3\cdot$Alt$_{6} < E_6$, $p = 5$}
\begin{tabular}{rc|ccccc|*{6}{c}}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8 & 10 & 3 & 3$^{*}$ & 6 & 6$^{*}$ & 15 & 15$^{*}$ \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 0 & 0 & 7 & 2 & 0 & 0 & 0 & 2 & 1 & 0 \\ 2) & \textbf{N} & 4 & 1 & 1 & 3 & 4 & 0 & 0 & 0 & 2 & 1 & 0 \\ 3) & \textbf{P}, \textbf{N} & 2 & 0 & 0 & 7 & 2 & 0 & 3 & 3 & 0 & 0 & 0 \\ 4) & \textbf{N} & 4 & 1 & 1 & 3 & 4 & 0 & 3 & 3 & 0 & 0 & 0 \\ 5) & \textbf{N} & 14 & 0 & 0 & 8 & 0 & 0 & 7 & 1 & 0 & 0 & 0 \end{tabular} \label{3a6e6p5} \end{table}
\begin{table}[H]\small
\caption{Alt$_{5} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|ccccc|ccccc}
& \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & 1 & 3$_a$ & 3$_b$ & 4 & 5 & 1 & 3$_a$ & 3$_b$ & 4 & 5 \\ \hline 1) & 5 & 1 & 1 & 3 & 11 & 0 & 3 & 3 & 1 & 1\\ 2) & 8 & 1 & 1 & 6 & 8 & 0 & 3 & 3 & 1 & 1\\ 3) & 0 & 4 & 5 & 4 & 7 & 1 & 1 & 0 & 2 & 3\\ 4) & 3 & 4 & 5 & 7 & 4 & 1 & 1 & 0 & 2 & 3\\ 5) & 14 & 8 & 0 & 0 & 8 & 1 & 7 & 0 & 0 & 1\\ 6) & 1 & 5 & 5 & 3 & 7 & 2 & 1 & 1 & 1 & 3\\ 7) & 4 & 5 & 5 & 6 & 4 & 2 & 1 & 1 & 1 & 3\\ 8) & 2 & 9 & 2 & 2 & 7 & 3 & 3 & 0 & 0 & 3\\ 9) & 5 & 9 & 2 & 5 & 4 & 3 & 3 & 0 & 0 & 3\\ 10) & 6 & 4 & 5 & 10 & 1 & 4 & 1 & 0 & 5 & 0\\ 11) & 7 & 5 & 5 & 9 & 1 & 5 & 1 & 1 & 4 & 0\\ 12) & 8 & 9 & 2 & 8 & 1 & 6 & 3 & 0 & 3 & 0\\ 13) & 16 & 0 & 19 & 0 & 1 & 9 & 0 & 6 & 0 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5e6p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{5} < E_6$, $p = 3$}
\begin{tabular}{rc|cccc|cccc}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$}\\ & & 1 & 3$_a$ & 3$_b$ & 4 & 1 & 3$_a$ & 3$_b$ & 4 \\ \hline 1) & \textbf{P}, \textbf{N} & 15 & 0 & 1 & 15 & 0 & 3 & 2 & 3 \\ 2) & \textbf{N} & 16 & 1 & 1 & 14 & 1 & 3 & 3 & 2\\ 3) & \textbf{N} & 22 & 8 & 0 & 8 & 2 & 7 & 0 & 1\\ 4) & \textbf{P}, \textbf{N} & 7 & 4 & 5 & 11 & 4 & 1 & 0 & 5\\ 5) & \textbf{N} & 8 & 5 & 5 & 10 & 5 & 1 & 1 & 4\\ 6) & \textbf{N} & 9 & 9 & 2 & 9 & 6 & 3 & 0 & 3\\ 7) & & 17 & 0 & 19 & 1 & 9 & 0 & 6 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5e6p3} \end{table}
\subsection{Sporadic Groups} \
\begin{table}[H]\small
\caption{$M_{11} < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|cccc|ccccc}
& \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & 1 & $16$ & $16^{*}$ & 45 & 1 & 10$_a$ & 11 & $16$ & $16^{*}$ \\ \hline 1) & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 2) & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 \end{tabular}\\ $10_a$ is self-dual. \label{m11e6p0} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_6$, $p = 11$}
\begin{tabular}{r|ccccc|cccc}
& \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 9 & 10 & $10^{*}$ & 16 & 1 & 9 & 11 & 16 \\ \hline 1) & 1 & 1 & 1 & 1 & 3 & 0 & 0 & 1 & 1 \\ 2) & 1 & 1 & 1 & 1 & 3 & 2 & 1 & 0 & 1 \end{tabular} \label{m11e6p11} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_6$, $p = 5$}
\begin{tabular}{rc|cccc|ccccc}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 16 & $16^{*}$ & 45 & 1 & $10_a$ & 11 & 16 & $16^{*}$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 1\\ 2) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0\\ 3) & \textbf{N} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1\\ 4) & \textbf{N} & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 \end{tabular}\\ $10_a$ is self-dual. \label{m11e6p5} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_6$, $p = 3$}
\begin{tabular}{rc|cccccccc|cccccc}
& & \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 5 & $5^{*}$ & 10$_a$ & 10$_b$ & $(10_b)^{*}$ & 24 & 45 & 1 & 5 & $5^{*}$ & 10$_a$ & 10$_b$ & $(10_b)^{*}$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 2 & 0 & 1 & 1 & 0 & 1\\ 2) & \textbf{P}, \textbf{N} & 3 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 2 & 1 & 2 & 0 & 0 & 1\\ 3) & \textbf{P}, \textbf{N} & 4 & 1 & 1 & 0 & 2 & 2 & 1 & 0 & 2 & 1 & 2 & 0 & 1 & 0 \end{tabular} \label{m11e6p3} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_6$, $p = 2$}
\begin{tabular}{rc|ccccc|cccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & 10 & 16 & $16^{*}$ & 44 & 1 & 10 & 16 & $16^{*}$ \\ \hline 1) & & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\ 2) & \textbf{N} & 4 & 3 & 0 & 0 & 1 & 1 & 1 & 0 & 1 \end{tabular} \label{m11e6p2} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_6$, $p = 5$}
\begin{tabular}{rc|c|*{3}{c}}
& & \multicolumn{1}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & & 78 & $11_a$ & $16$ & $16^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(11_a,11_b)$, $(16,16^{*})$. \label{m12e6p5} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_6$, $p = 2$}
\begin{tabular}{r|ccccc|*{4}{c}}
& \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 10 & 16 & $16^{*}$ & 44 & 1 & 10 & 16 & $16^{*}$ \\ \hline 1) & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \end{tabular} \label{m12e6p2} \end{table}
\begin{table}[H]\small \caption{$3\cdot M_{22} < E_6$, $p = 2$}
\begin{tabular}{rc|cccc|cc}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & & 1 & 10 & $10^{*}$ & 34 & $6$ & $15$ \\ \hline 1) & \textbf{N} & 4 & 2 & 2 & 1 & 2 & 1 \end{tabular}\\ Permutations: $(6,6^{*})(15,15^{*})$. \\ $15 = \bigwedge^{2}6^{*}$. \label{3m22e6p2} \end{table}
\begin{table}[H]\small \caption{$J_{1} < E_6$, $p = 11$}
\begin{tabular}{rc|cccc|ccc}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & & 1 & 7 & 14 & 64 & 1 & 7 & 27 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 & 0 & 0 & 1 \\ 2) & & 8 & 8 & 1 & 0 & 6 & 3 & 0 \end{tabular} \label{j1e6p11} \end{table}
\begin{table}[H]\small \caption{$J_{2} < E_6$, $p = 2$}
\begin{tabular}{rc|cccccccc|ccccc}
& & \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ & 36 & $64_a$ & $64_b$ & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ \\ \hline 1) & \textbf{N} & 8 & 3 & 4 & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 1 \\ 2) & \textbf{P} & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 3) & \textbf{N} & 16 & 8 & 0 & 1 & 0 & 0 & 0 & 0 & 9 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)(14_a,14_b)(64_a,64_b)$. \label{j2e6p2} \end{table}
\begin{table}[H]\small \caption{$3 \cdot J_{3} < E_6$, $p = 2$}
\begin{tabular}{rc|cc|cccccc}
& & \multicolumn{2}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & $78_a$ & $78_b$ & $9$ & $9^{*}$ & $18_a$ & $(18_a)^{*}$ & $18_b$ & $(18_b)^{*}$ \\ \hline 1) & \textbf{P} & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 2) & \textbf{P} & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 \\ 3) & \textbf{P} & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 4) & \textbf{P} & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 \end{tabular} \label{3j3e6p2} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \begin{center} \arabic{chapter}.\arabic{section}.\arabic{table}: $3 \cdot Fi_{22} < E_6$, $p = 2$. Irreducible on $V_{78}$ and $V_{27}$. \textbf{P} \end{center} \label{3fi22e6p2} \end{table}
\subsection{Cross-characteristic Groups $L_2(q)$ $(q \neq 4$, $5$, $9)$} \
\begin{table}[H]\small
\caption{$L_2(7) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|cccccc|ccccccc}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 3 & $3^{*}$ & 6 & 7 & 8 & 1 & 3 & $3^{*}$ & 6 & 7 & 8 \\ \hline 1) & \textbf{P} & 0 & 2 & 2 & 2 & 2 & 5 & 0 & 0 & 0 & 2 & 1 & 1 \\ 2) & & 0 & 2 & 2 & 2 & 2 & 5 & 1 & 1 & 1 & 2 & 0 & 1 \\ 3) & & 2 & 1 & 1 & 0 & 2 & 7 & 3 & 0 & 0 & 0 & 0 & 3 \\ 4) & & 3 & 2 & 2 & 2 & 5 & 2 & 0 & 0 & 0 & 2 & 1 & 1 \\ 5) & & 3 & 2 & 2 & 2 & 5 & 2 & 1 & 1 & 1 & 2 & 0 & 1 \\ 6) & & 5 & 1 & 1 & 0 & 5 & 4 & 3 & 0 & 0 & 0 & 0 & 3 \\ 7) & & 8 & 1 & 1 & 0 & 8 & 1 & 6 & 0 & 0 & 0 & 3 & 0 \\ 8) & & 8 & 3 & 3 & 6 & 0 & 2 & 1 & 3 & 3 & 0 & 0 & 1 \\ 9) & & 14 & 0 & 0 & 0 & 0 & 8 & 0 & 0 & 7 & 1 & 0 & 0 \\ 10) & & 14 & 0 & 0 & 0 & 0 & 8 & 0 & 7 & 0 & 1 & 0 & 0 \\ 11) & & 16 & 9 & 9 & 0 & 0 & 1 & 9 & 3 & 3 & 0 & 0 & 0 \end{tabular} \label{l27e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(7) < E_6$, $p = 3$}
\begin{tabular}{rc|ccccc|cccccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 3 & $3^{*}$ & 6 & 7 & 1 & 3 & $3^{*}$ & 6 & 7 \\ \hline 1) & \textbf{P}, \textbf{N} & 5 & 2 & 2 & 2 & 7 & 1 & 0 & 0 & 2 & 2 \\ 2) & \textbf{N} & 5 & 2 & 2 & 2 & 7 & 2 & 1 & 1 & 2 & 1 \\ 3) & \textbf{N} & 9 & 1 & 1 & 0 & 9 & 6 & 0 & 0 & 0 & 3 \\ 4) & \textbf{N} & 10 & 3 & 3 & 6 & 2 & 2 & 3 & 3 & 0 & 1 \\ 5) & & 17 & 9 & 9 & 0 & 1 & 9 & 3 & 3 & 0 & 0 \\ 6) & \textbf{N} & 22 & 0 & 0 & 0 & 8 & 0 & 0 & 7 & 1 & 0 \\ 7) & \textbf{N} & 22 & 0 & 0 & 0 & 8 & 0 & 7 & 0 & 1 & 0 \end{tabular} \label{l27e6p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(8) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{9}{c}|*{9}{c}}
& & \multicolumn{9}{c|}{$V_{78}$} & \multicolumn{9}{c}{$V_{27}$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & & 0 & 2 & 1 & 1 & 1 & 2 & 0 & 1 & 2 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\ 2) & \textbf{P} & 0 & 2 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\ 3) & & 1 & 3 & 1 & 1 & 1 & 1 & 0 & 1 & 2 & 2 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 4) & & 3 & 2 & 1 & 1 & 1 & 5 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 5) & & 4 & 3 & 1 & 1 & 1 & 4 & 0 & 0 & 0 & 4 & 1 & 0 & 0 & 0 & 2 & 0 & 0 & 0 \\ 6) & & 8 & 1 & 0 & 1 & 8 & 0 & 0 & 0 & 0 & 6 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)(9_a,9_b,9_c)$. \label{l28e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < E_6$, $p = 7$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 \\ \hline 1) & \textbf{N} & 3 & 2 & 1 & 1 & 1 & 5 & 3 & 0 & 0 & 0 & 0 & 3 \\ 2) & \textbf{N} & 4 & 3 & 1 & 1 & 1 & 4 & 4 & 1 & 0 & 0 & 0 & 2 \\ 3) & & 8 & 1 & 0 & 1 & 8 & 0 & 6 & 0 & 0 & 0 & 3 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)$. \label{l28e6p7} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < E_6$, $p = 3$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 7 & $9_a$ & $9_b$ & $9_c$ & 1 & 7 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{N} & 2 & 7 & 0 & 1 & 2 & 2 & 1 & 0 & 1 & 1 \\ 2) & \textbf{P}, \textbf{N} & 2 & 7 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 3) & \textbf{N} & 8 & 10 & 0 & 0 & 0 & 6 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b,9_c)$. \label{l28e6p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(11) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{7}{c}|*{8}{c}}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{8}{c}{$V_{27}$} \\ & & 1 & 5 & $5^{*}$ & $10_b$ & 11 & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & $12_a$ & $12_b$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 2 & 1 & 2 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 2) & & 1 & 1 & 1 & 1 & 3 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 3) & & 4 & 1 & 1 & 4 & 0 & 1 & 1 & 2 & 1 & 2 & 0 & 1 & 0 & 0 & 0 \\ \end{tabular}\\ Permutations: $(12_a,12_b)$. \label{l211e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_6$, $p = 5$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & $11$ & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & $11$ \\ \hline 1) & \textbf{N} & 3 & 1 & 1 & 0 & 1 & 5 & 1 & 0 & 1 & 1 & 0 & 1 \\ 2) & \textbf{N} & 6 & 1 & 1 & 0 & 4 & 2 & 2 & 2 & 1 & 0 & 1 & 0 \end{tabular} \label{l211e6p5} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_6$, $p = 3$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 1 & 1 & 3 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 0 \\ 2) & \textbf{N} & 4 & 1 & 1 & 4 & 1 & 1 & 2 & 2 & 1 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(12_a,12_b)$. \label{l211e6p3} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_6$, $p = 2$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ \\ \hline 1) & & 2 & 0 & 0 & 4 & 1 & 2 & 0 & 0 & 1 & 1 & 1 & 0 \\ 2) & \textbf{P}, \textbf{N} & 2 & 3 & 3 & 1 & 1 & 2 & 0 & 0 & 1 & 1 & 1 & 0 \\ 3) & \textbf{N} & 4 & 1 & 1 & 4 & 1 & 1 & 2 & 1 & 2 & 1 & 0 & 0 \\ 4) & \textbf{P}, \textbf{N} & 4 & 4 & 4 & 1 & 1 & 1 & 2 & 1 & 2 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(12_a,12_b)$. \label{l211e6p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(13) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{9}{c}|*{6}{c}}
& & \multicolumn{9}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & 13 & $14_a$ & $14_b$ & 1 & $7_a$ & $7_b$ & $12_a$ & 13 & $14_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 2) & & 0 & 0 & 0 & 2 & 1 & 0 & 0 & 2 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 3) & & 1 & 2 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 4) & & 1 & 2 & 2 & 2 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 5) & & 8 & 0 & 8 & 0 & 0 & 0 & 0 & 1 & 0 & 6 & 0 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_a,12_b,12_c)$. \label{l213e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_6$, $p = 7$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ \\ \hline 1) & & 0 & 0 & 0 & 3 & 2 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 2) & \textbf{N} & 2 & 2 & 2 & 4 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 1 \\ 3) & \textbf{N} & 8 & 0 & 8 & 0 & 1 & 0 & 6 & 0 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$. \label{l213e6p7} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_6$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{7}{c}{$V_{27}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ \\ \hline 1) & \textbf{P} & 1 & 2 & 2 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 2 \\ 2) & & 1 & 2 & 2 & 2 & 1 & 0 & 1 & 2 & 0 & 0 & 1 & 0 & 0 & 1 \\ 3) & & 8 & 1 & 9 & 0 & 0 & 0 & 0 & 6 & 0 & 3 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_a,12_b,12_c)$. \label{l213e6p3} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_6$, $p = 2$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{7}{c}{$V_{27}$} \\ & & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 1 & 1 & 3 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 2) & & 0 & 0 & 0 & 2 & 0 & 1 & 3 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 3) & & 2 & 0 & 1 & 0 & 0 & 0 & 5 & 1 & 0 & 2 & 0 & 0 & 0 & 1 \\ 4) & \textbf{N} & 6 & 3 & 3 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 5) & \textbf{N} & 6 & 3 & 3 & 2 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 6) & \textbf{N} & 8 & 3 & 4 & 0 & 0 & 0 & 2 & 1 & 0 & 2 & 0 & 0 & 0 & 1 \\ 7) & \textbf{N} & 16 & 0 & 8 & 0 & 0 & 0 & 1 & 9 & 0 & 3 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)$, $(12_a,12_b,12_c)$. \label{l213e6p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(17) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{7}{c}|*{6}{c}}
& & \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & & 1 & $9_a$ & $9_b$ & $16_d$ & $17$ & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & $16_d$ & $17$ & $18_a$ \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 \\ 2) & & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ 3) & & 1 & 0 & 1 & 2 & 0 & 1 & 1 & 2 & 0 & 1 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_6$, $p = 3$}
\begin{tabular}{rc|*{6}{c}|*{5}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & $9_a$ & $9_b$ & 16 & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & 16 & $18_a$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 0 & 1 & 2 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 2) & \textbf{N} & 1 & 0 & 1 & 2 & 1 & 1 & 2 & 0 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217e6p3} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_6$, $p = 2$}
\begin{tabular}{rc|*{4}{c}|*{4}{c}}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & $8_a$ & $8_b$ & $16_a$ & 1 & $8_a$ & $8_b$ & $16_a$ \\ \hline 1) & \textbf{N} & 6 & 2 & 3 & 2 & 3 & 0 & 1 & 1 \\ 2) & \textbf{N} & 6 & 3 & 4 & 1 & 3 & 1 & 2 & 0 \end{tabular}\\ Permutations: $(8_a,8_b)$. \label{l217e6p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(19) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & $18_c$ & $18_d$ & $20_b$ & $20_c$ & $20_d$ & 9 & $9^{*}$ & $18_a$ & $18_b$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(18_a,18_b)(18_c,18_d)$. \label{l219e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_6$, $p = 5$}
\begin{tabular}{rc|*{4}{c}|*{2}{c}}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & & 18 & $20_b$ & $20_c$ & $20_d$ & 9 & $9^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 2 \end{tabular} \label{l219e6p5} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_6$, $p = 3$}
\begin{tabular}{rc|*{4}{c}|*{4}{c}}
& & \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & $18_c$ & $18_d$ & 19 & 9 & $9^{*}$ & $18_a$ & $18_b$ \\ \hline 1) & \textbf{P} & 3 & 0 & 1 & 3 & 0 & 1 & 1 & 0 \\ \end{tabular}\\ Permutations: $(18_a,18_b)(18_c,18_d)$. \label{l219e6p3} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_6$, $p = 2$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & $18_a$ & $18_b$ & $20_b$ & $20_c$ & $20_d$ & 9 & $9^{*}$ & $18_a$ & $18_b$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \end{tabular}\\ Permutations: $(18_a,18_b)$. \label{l219e6p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(25) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{6}{c}|*{3}{c}}
& \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & 1 & 25 & $26_a$ & $26_c$ & $26_d$ & $26_e$ & 1 & $26_b$ & $26_c$ \\ \hline 1) & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 2) & 1 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 1 \end{tabular}\\ $26_b$, $26_c$ are $\textup{Aut}(L_2(25))$-conjugate. \label{l225e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_6$, $p = 13$}
\begin{tabular}{r|*{6}{c}|*{3}{c}}
& \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & 1 & 24 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & 1 & $26_a$ & $26_e$ \\ \hline 1) & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0 \\ 2) & 2 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 1 \\ 3) & 2 & 1 & 0 & 2 & 0 & 0 & 1 & 1 & 0 \end{tabular}\\ $26_b$, $26_c$ are $\textup{Aut}(L_2(25))$-conjugate. \label{l225e6p13} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_6$, $p = 3$}
\begin{tabular}{r|*{3}{c}|*{4}{c}}
& \multicolumn{3}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & 1 & 25 & 26 & 1 & $13_a$ & $13_b$ & 25 \\ \hline 1) & 1 & 1 & 2 & 1 & 1 & 1 & 0 \\ 2) & 1 & 1 & 2 & 2 & 0 & 0 & 1 \end{tabular} \label{l225e6p3} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_6$, $p = 2$}
\begin{tabular}{r|*{4}{c}|*{2}{c}}
& \multicolumn{4}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & 1 & $12_a$ & $12_b$ & 26 & 1 & 26 \\ \hline 1) & 0 & 0 & 0 & 3 & 1 & 1 \\ 2) & 6 & 3 & 3 & 0 & 1 & 1 \end{tabular} \label{l225e6p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(27) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{6}{c}|*{4}{c}}
& \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ & 1 & $26_d$ & $26_e$ & $26_f$ \\ \hline 1) & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)(26_d,26_e,26_f)$. \label{l227e6p0} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_6$, $p = 13$}
\begin{tabular}{r|*{6}{c}|*{4}{c}}
& \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ & 1 & $26_d$ & $26_e$ & $26_f$ \\ \hline 1) & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)(26_d,26_e,26_f)$. \label{l227e6p13} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_6$, $p = 7$}
\begin{tabular}{r|*{3}{c}|*{3}{c}}
& \multicolumn{3}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & $13_a$ & $13_b$ & 26 & 1 & $13_a$ & $13_b$ \\ \hline 1) & 1 & 1 & 2 & 1 & 1 & 1 \end{tabular} \label{l227e6p7} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_6$, $p = 2$}
\begin{tabular}{r|*{3}{c}|*{4}{c}}
& \multicolumn{3}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & $26_a$ & $26_b$ & $26_c$ & 1 & 13 & $13^{*}$ & $26_a$ \\ \hline 1) & 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 2) & 2 & 0 & 1 & 1 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)$. \label{l227e6p2} \end{table}
\subsection{Cross-characteristic Groups $\ncong L_2(q)$} \
\begin{table}[H]\small
\caption{$L_3(3) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{9}{c}|*{3}{c}}
& & \multicolumn{9}{c|}{$V_{78}$} & \multicolumn{3}{c}{$V_{27}$} \\ & & 1 & 13 & $16_a$ & $(16_a)^{*}$ & $16_b$ & $(16_b)^{*}$ & $26_a$ & $26_b$ & $(26_b)^{*}$ & 1 & $26_a$ & 27 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 1 \\ 2) & & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 0 \\ 3) & & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \\ 4) & & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 0 \end{tabular} \label{l33e6p0} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_6$, $p = 13$}
\begin{tabular}{rc|*{6}{c}|*{4}{c}}
& & \multicolumn{6}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & 13 & 16 & $26_a$ & $26_b$ & $(26_b)^{*}$ & 1 & 11 & 16 & $26_a$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 0 \\ 2) & & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 \\ 3) & & 1 & 1 & 4 & 0 & 0 & 0 & 0 & 1 & 1 & 0 \\ 4) & & 1 & 1 & 4 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \end{tabular} \label{l33e6p13} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_6$, $p = 2$}
\begin{tabular}{r|*{7}{c}|*{2}{c}}
& \multicolumn{7}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & 1 & 12 & $16_a$ & $(16_a)^{*}$ & $16_b$ & $(16_b)^{*}$ & 26 & 1 & 26 \\ \hline 1) & 0 & 0 & 0 & 0 & 0 & 0 & 3 & 1 & 1 \\ 2) & 2 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 \end{tabular} \label{l33e6p2} \end{table}
\begin{table}[H]\small \caption{$L_4(3) < E_6$, $p = 2$}
\begin{tabular}{r|*{2}{c}|*{2}{c}}
& \multicolumn{2}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & $26_a$ & $26_b$ & 1 & $26_a$ \\ \hline 1) & 2 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(26_a,26_b)$. \label{l43e6p2} \end{table}
\begin{table}[H]\small
\caption{$U_3(3) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{11}{c}|*{5}{c}}
& & \multicolumn{11}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & & 1 & 6 & $7_a$ & $7_b$ & $(7_b)^{*}$ & 14 & $21_a$ & 28 & $ 28^{*}$ & $32$ & $32^{*}$ & 1 & 6 & $7_a$ & 14 & 27 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 2) & & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 3) & & 3 & 2 & 0 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 2 & 0 & 1 & 0 \\ 4) & & 8 & 0 & 8 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 6 & 0 & 3 & 0 & 0 \end{tabular} \label{u33e6p0} \end{table}
\begin{table}[H]\small \caption{$U_3(3) < E_6$, $p = 7$}
\begin{tabular}{r|*{10}{c}|*{5}{c}}
& \multicolumn{10}{c|}{$V_{78}$} & \multicolumn{5}{c}{$V_{27}$} \\ & 1 & 6 & $7_a$ & $7_b$ & $(7_b)^{*}$ & 14 & $21_a$ & 26 & 28 & $ 28^{*}$ & 1 & 6 & $7_a$ & 14 & 26 \\ \hline 1) & 0 & 2 & 0 & 0 & 0 & 1 & 0 & 2 & 0 & 0 & 1 & 0 & 0 & 0 & 1 \\ 2) & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\ 3) & 3 & 2 & 0 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 1 & 2 & 0 & 1 & 0 \\ 4) & 8 & 0 & 8 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 6 & 0 & 3 & 0 & 0 \end{tabular} \label{u33e6p7} \end{table}
\begin{table}[H]\small
\caption{$U_4(2) < E_6$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{8}{c}|*{6}{c}}
& \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & 1 & 5 & $5^{*}$ & 10 & $10^{*}$ & $15_b$ & 20 & 24 & 1 & 5 & $5^{*}$ & 6 & 10 & $15_b$ \\ \hline 1) & 3 & 0 & 0 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 1 \\ 2) & 4 & 1 & 1 & 2 & 2 & 0 & 0 & 1 & 2 & 1 & 2 & 0 & 1 & 0 \end{tabular} \label{u42e6p0} \end{table}
\begin{table}[H]\small \caption{$U_4(2) < E_6$, $p = 5$}
\begin{tabular}{r|*{8}{c}|*{6}{c}}
& \multicolumn{8}{c|}{$V_{78}$} & \multicolumn{6}{c}{$V_{27}$} \\ & 1 & 5 & $5^{*}$ & 10 & $10^{*}$ & $15_b$ & 20 & 23 & 1 & 5 & $5^{*}$ & 6 & 10 & $15_b$ \\ \hline 1) & 3 & 0 & 0 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 0 & 1 \\ 2) & 5 & 1 & 1 & 2 & 2 & 0 & 0 & 1 & 2 & 1 & 2 & 0 & 1 & 0 \end{tabular} \label{u42e6p5} \end{table}
\begin{table}[H]\small \caption{$3 \cdot U_4(3) < E_6$, $p = 2$}
\begin{tabular}{rc|*{3}{c}|*{4}{c}}
& & \multicolumn{3}{c|}{$V_{78}$} & \multicolumn{4}{c}{$V_{27}$} \\ & & 1 & 20 & $34_a$ & 6 & $6^{*}$ & 15 & $15^{*}$ \\ \hline 1) & \textbf{N} & 4 & 2 & 1 & 0 & 2 & 0 & 1 \\ 2) & \textbf{N} & 4 & 2 & 1 & 2 & 0 & 1 & 0 \end{tabular}\\ There are two triple covers of $U_4(3)$ up to isomorphism; however, one of these has no faithful 27-dimensional modules, and hence no feasible characters here. \label{u43e6p2} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \thetable: $3 \cdot \Omega_7(3) < E_6$, $p = 2$. Irreducible on $V_{78}$ and $V_{27}$. \textbf{P} \label{omega73e6p2} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \thetable: $3 \cdot G_2(3) < E_6$, $p = 2$. Irreducible on $V_{78}$ and $V_{27}$. \textbf{P} \label{3g23e6p2} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_6$, $p \neq 2,3$}
\begin{tabular}{r|*{2}{c}|*{2}{c}}
& \multicolumn{2}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & 26 & 52 & 1 & 26 \\ \hline 1) & 1 & 1 & 1 & 1 \end{tabular} \label{3d4e6p7} \label{3d4e6p13} \label{3d4e6p0} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_6$, $p = 3$}
\begin{tabular}{r|*{3}{c}|*{2}{c}}
& \multicolumn{3}{c|}{$V_{78}$} & \multicolumn{2}{c}{$V_{27}$} \\ & 1 & 25 & 52 & 1 & 25 \\ \hline 1) & 1 & 1 & 1 & 2 & 1 \end{tabular} \label{3d42e6p3} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \thetable: $^{2}F_4(2)' < E_6$, $p \neq 2$, $3$. Irreducible on $V_{78}$ and $V_{27}$. \textbf{P} \label{2f42e6p5} \label{2f42e6p13} \label{2f42e6p0} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_6$, $p = 3$}
\begin{tabular}{rc|*{2}{c}|*{1}{c}}
& & \multicolumn{2}{c|}{$V_{78}$} & \multicolumn{1}{c}{$V_{27}$} \\ & & 1 & 77 & 27 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{2f42e6p3} \end{table}
\section{$E_7$} \label{sec:E7tabs}
\subsection{Alternating Groups} \
\begin{table}[H]\small \caption{Alt$_{13} < E_7$, $p = 2$}
\begin{tabular}{r|cccc|ccc}
& \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & 32$_a$ & 32$_b$ & 64 & 12 & 32$_a$ & 32$_b$ \\ \hline 1) & 5 & 0 & 2 & 1 & 2 & 0 & 1 \end{tabular}\\ Permutations: $(32_a,32_b)$. \label{a13e7p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{12} < E_7$, $p = 2$}
\begin{tabular}{rc|ccccc|cccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 10 & 16 & 16$^{*}$ & 44 & 1 & 10 & 16 & 16$^{*}$ \\ \hline 1) & \textbf{N} & 5 & 2 & 2 & 2 & 1 & 4 & 2 & 1 & 1 \end{tabular} \label{a12e7p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{11} < E_7$, $p = 2$}
\begin{tabular}{r|ccccc|cccc}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & 1 & 10 & 16 & 16$^{*}$ & 44 & 1 & 10 & 16 & 16$^{*}$ \\ \hline 1) & 5 & 2 & 2 & 2 & 1 & 4 & 2 & 1 & 1 \end{tabular} \label{a11e7p2} \end{table}
\begin{table}[H]\small \caption{Alt$_{10} < E_7$, $p = 5$}
\begin{tabular}{rc|cccc|c}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & 28 & 35$_a$ & 35$_b$ & 35$_c$ & 28 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 2 \end{tabular} \label{a10e7p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{10} < E_7$, $p = 2$}
\begin{tabular}{rc|cccc|cccc}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 8 & 16 & 26 & 1 & 8 & 16 & 26 \\ \hline 1) & \textbf{N} & 5 & 1 & 1 & 4 & 4 & 0 & 0 & 2 \\ 2) & \textbf{N} & 11 & 4 & 4 & 1 & 8 & 2 & 2 & 0 \end{tabular} \label{a10e7p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_{9} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccc|c}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & 8 & 27 & 28 & 35$_a$ & 35$_b$ & 28 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 2 \end{tabular} \label{a9e7p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < E_7$, $p = 7$}
\begin{tabular}{rc|ccccc|c}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & 8 & 19 & 28 & 35$_a$ & 35$_b$ & 28 \\ \hline 1) & \textbf{P} & 2 & 1 & 1 & 1 & 1 & 2 \end{tabular} \label{a9e7p7} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < E_7$, $p = 5$}
\begin{tabular}{rc|ccccc|c}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & 8 & 27 & 28 & 35$_a$ & 35$_b$ & 28 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 2 \end{tabular} \label{a9e7p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < E_7$, $p = 3$}
\begin{tabular}{rc|ccccc|cc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & 7 & 21 & 27 & 35 & 7 & 21 \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 2 & 1 & 1 & 2 & 2 & 2 \end{tabular} \label{a9e7p3} \end{table}
\begin{table}[H]\small \caption{Alt$_{9} < E_7$, $p = 2$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 & 1 & 8$_a$ & 8$_b$ & 8$_c$ & 26 \\ \hline 1) & \textbf{N} & 5 & 1 & 1 & 1 & 4 & 4 & 0 & 0 & 0 & 2 \\ 2) & & 11 & 4 & 4 & 4 & 1 & 8 & 2 & 2 & 2 & 0 \end{tabular} \label{a9e7p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_{8} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|ccccc|cc}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & 20 & 21$_a$ & 35 & 7 & 21$_a$ \\ \hline 1) & 1 & 3 & 1 & 1 & 2 & 2 & 2 \end{tabular} \label{a8e7p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{8} < E_7$, $p = 7$}
\begin{tabular}{r|ccccc|cc}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & 19 & 21$_a$ & 35 & 7 & 21$_a$ \\ \hline 1) & 2 & 3 & 1 & 1 & 2 & 2 & 2 \end{tabular} \label{a8e7p7} \end{table}
\begin{table}[H]\small \caption{Alt$_{8} < E_7$, $p = 5$}
\begin{tabular}{rc|cccccccc|ccc}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & & 1 & 7 & 13 & 20 & 21$_a$ & 35 & 43 & 70 & 7 & 21$_a$ & 21$_b$\\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 2 & 0 & 2\\ 2) & & 1 & 2 & 1 & 0 & 0 & 1 & 0 & 1 & 2 & 0 & 2\\ 3) & & 1 & 3 & 0 & 1 & 1 & 2 & 0 & 0 & 2 & 2 & 0 \end{tabular} \label{a8e7p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{8} < E_7$, $p = 3$}
\begin{tabular}{rc|*{5}{c}|cc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & 7 & 13 & 21 & 35 & 7 & 21 \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 4 & 1 & 1 & 2 & 2 & 2 \end{tabular} \label{a8e7p3} \end{table}
\begin{table}[H]\small
\caption{Alt$_{7} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{6}{c}|*{5}{c}}
& \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & 1 & 6 & 10 & 10$^{*}$ & 14$_a$ & 15 & 1 & 6 & 10 & 10$^{*}$ & 15 \\ \hline 1) & 8 & 1 & 0 & 0 & 1 & 7 & 0 & 6 & 1 & 1 & 0\\ 2) & 4 & 5 & 2 & 2 & 1 & 3 & 2 & 4 & 0 & 0 & 2 \end{tabular}\\ $14_a$ is a section of $5 \otimes 5$. \label{a7e7p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_7$, $p = 7$}
\begin{tabular}{rc|*{4}{c}|*{3}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & & 1 & 5 & 10 & 14$_a$ & 1 & 5 & 10 \\ \hline 1) & \textbf{N} & 9 & 8 & 7 & 1 & 6 & 6 & 2 \end{tabular}\\ $14_a$ is a section of $5 \otimes 5$. \label{a7e7p7} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_7$, $p = 5$}
\begin{tabular}{rc|cccccccc|ccccccc}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & 6 & 8 & 10 & 10$^{*}$ & 13 & 15 & 35 & 1 & 6 & 8 & 10 & 10$^{*}$ & 13 & 15 \\ \hline 1) & \textbf{P} & 0 & 3 & 7 & 1 & 1 & 3 & 0 & 0 & 0 & 0 & 2 & 2 & 2 & 0 & 0\\ 2) & & 8 & 2 & 1 & 0 & 0 & 0 & 7 & 0 & 0 & 6 & 0 & 1 & 1 & 0 & 0\\ 3) & \textbf{N} & 1 & 2 & 3 & 0 & 0 & 2 & 0 & 2 & 2 & 2 & 2 & 0 & 0 & 2 & 0\\ 4) & & 1 & 3 & 2 & 0 & 0 & 1 & 1 & 2 & 2 & 2 & 2 & 0 & 0 & 2 & 0\\ 5) & \textbf{N} & 4 & 1 & 2 & 1 & 1 & 4 & 0 & 1 & 2 & 2 & 2 & 0 & 0 & 2 & 0\\ 6) & \textbf{N} & 4 & 2 & 1 & 1 & 1 & 3 & 1 & 1 & 2 & 2 & 2 & 0 & 0 & 2 & 0\\ 7) & & 4 & 6 & 1 & 2 & 2 & 0 & 3 & 0 & 2 & 4 & 0 & 0 & 0 & 0 & 2\\ 8) & & 9 & 0 & 13 & 1 & 1 & 0 & 0 & 0 & 8 & 0 & 6 & 0 & 0 & 0 & 0 \end{tabular} \label{a7e7p5} \end{table}
\begin{table}[H]\small \caption{$2\cdot$Alt$_{7} < E_7$, $p = 5$}
\begin{tabular}{rc|*{4}{c}|*{5}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 8 & 10 & 10$^{*}$ & 35 & 4 & 4$^{*}$ & 14$_a$ & 14$_b$ & 20$_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 3 & 1 & 1 & 0 & 2 & 1 \end{tabular}\\ Permutations: $(14_a,14_b)$. \label{2a7e7p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_7$, $p = 3$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 \\ \hline 1) & & 9 & 1 & 0 & 0 & 1 & 7 & 0 & 6 & 1 & 1 & 0 & 0\\ 2) & \textbf{N} & 5 & 5 & 2 & 2 & 1 & 3 & 2 & 4 & 0 & 0 & 0 & 2 \end{tabular} \label{a7e7p3} \end{table}
\begin{table}[H]\small \caption{$2\cdot$Alt$_{7} < E_7$, $p = 3$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 10 & 10$^{*}$ & 13 & 15 & 4 & 4$^{*}$ & 6$_b$ & 6$_c$ \\ \hline 1) & \textbf{N} & 22 & 1 & 1 & 7 & 0 & 1 & 1 & 1 & 7 \end{tabular}\\ Permutations: $(6_b,6_c)$. \label{2a7e7p3} \end{table}
\begin{table}[H]\small \caption{Alt$_{7} < E_7$, $p = 2$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 \\ \hline 1) & \textbf{N} & 15 & 0 & 0 & 1 & 8 & 0 & 0 & 1 & 1 & 8 & 0 & 0\\ 2) & \textbf{N} & 7 & 2 & 2 & 5 & 0 & 4 & 4 & 0 & 0 & 2 & 0 & 2\\ 3) & \textbf{N} & 7 & 2 & 2 & 6 & 1 & 3 & 4 & 0 & 0 & 2 & 0 & 2\\ 4) & \textbf{N} & 7 & 2 & 2 & 9 & 4 & 0 & 4 & 0 & 0 & 4 & 2 & 0\\ 5) & & 19 & 8 & 8 & 5 & 0 & 1 & 12 & 4 & 4 & 2 & 0 & 0\\ 6) & & 19 & 8 & 8 & 6 & 1 & 0 & 12 & 4 & 4 & 2 & 0 & 0 \end{tabular} \label{a7e7p2} \end{table}
{\small
\begin{longtable}{rc|ccccccc|ccccccc}
\caption{Alt$_{6} < E_7$, $p = 0$ or $p \nmid |H|$} \\
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & $8_b$ & 9 & 10 & 1 & 5$_a$ & 5$_b$ & 8$_a$ & $8_b$ & 9 & 10 \\ \hline 1) & \textbf{P} & 0 & 3 & 3 & 3 & 4 & 3 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 2) & & 1 & 2 & 5 & 2 & 3 & 3 & 3 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 3) & & 2 & 1 & 1 & 3 & 4 & 5 & 2 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 4) & & 2 & 4 & 4 & 1 & 2 & 3 & 4 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 5) & & 3 & 0 & 3 & 2 & 3 & 5 & 3 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 6) & & 4 & 2 & 2 & 1 & 2 & 5 & 4 & 0 & 0 & 0 & 0 & 2 & 0 & 4 \\ 7) & & 3 & 0 & 0 & 3 & 4 & 6 & 2 & 0 & 4 & 4 & 0 & 2 & 0 & 0 \\ 8) & & 5 & 1 & 1 & 1 & 2 & 6 & 4 & 0 & 4 & 4 & 0 & 2 & 0 & 0 \\ 9) & & 2 & 2 & 2 & 3 & 5 & 3 & 2 & 1 & 2 & 2 & 1 & 0 & 3 & 0 \\ 10)& & 3 & 1 & 4 & 2 & 4 & 3 & 3 & 1 & 2 & 2 & 1 & 0 & 3 & 0 \\ 11)& & 4 & 3 & 3 & 1 & 3 & 3 & 4 & 1 & 2 & 2 & 1 & 0 & 3 & 0 \\ 12)& & 1 & 2 & 2 & 3 & 4 & 4 & 2 & 2 & 2 & 2 & 0 & 2 & 2 & 0 \\ 13)& & 2 & 1 & 4 & 2 & 3 & 4 & 3 & 2 & 2 & 2 & 0 & 2 & 2 & 0 \\ 14)& & 3 & 3 & 3 & 1 & 2 & 4 & 4 & 2 & 2 & 2 & 0 & 2 & 2 & 0 \\ 15)& & 4 & 0 & 0 & 3 & 5 & 5 & 2 & 3 & 0 & 0 & 1 & 0 & 5 & 0 \\ 16)& & 6 & 1 & 1 & 1 & 3 & 5 & 4 & 3 & 0 & 0 & 1 & 0 & 5 & 0 \\ 17)& & 3 & 0 & 0 & 3 & 4 & 6 & 2 & 4 & 0 & 0 & 0 & 2 & 4 & 0 \\ 18)& & 5 & 1 & 1 & 1 & 2 & 6 & 4 & 4 & 0 & 0 & 0 & 2 & 4 & 0 \\ 19)& & 4 & 2 & 2 & 5 & 5 & 1 & 2 & 4 & 2 & 2 & 2 & 2 & 0 & 0 \\ 20)& & 5 & 1 & 4 & 4 & 4 & 1 & 3 & 4 & 2 & 2 & 2 & 2 & 0 & 0 \\ 21)& & 6 & 3 & 3 & 3 & 3 & 1 & 4 & 4 & 2 & 2 & 2 & 2 & 0 & 0 \\ 22)& & 6 & 0 & 0 & 5 & 5 & 3 & 2 & 6 & 0 & 0 & 2 & 2 & 2 & 0 \\ 23)& & 8 & 1 & 1 & 3 & 3 & 3 & 4 & 6 & 0 & 0 & 2 & 2 & 2 & 0 \\ 24)& & 9 & 0 & 9 & 0 & 0 & 1 & 7 & 6 & 0 & 6 & 0 & 0 & 0 & 2 \\ 25)& & 9 & 0 & 0 & 0 & 13 & 0 & 2 & 8 & 0 & 0 & 0 & 6 & 0 & 0 \\ \multicolumn{16}{c}{Permutations: $(5_a,5_b)$, $(8_a,8_b)$.} \label{a6e7p0} \end{longtable} }
{\small
\begin{longtable}{rc|ccccccc|cccccc}
\caption{$2\cdot$Alt$_{6} < E_7$, $p = 0$ or $p \nmid |H|$} \\
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10$_a$ & 4$_a$ & 4$_b$ & 8$_c$ & 8$_d$ & 10$_b$ & 10$_c$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 4 & 3 & 3 & 5 & 0 & 0 & 0 & 2 & 1 & 3 \\ 2) & & 2 & 1 & 1 & 2 & 1 & 3 & 7 & 0 & 0 & 0 & 2 & 1 & 3 \\ 3) & \textbf{P} & 0 & 0 & 0 & 4 & 3 & 3 & 5 & 0 & 0 & 0 & 2 & 3 & 1 \\ 4) & & 2 & 1 & 1 & 2 & 1 & 3 & 7 & 0 & 0 & 0 & 2 & 3 & 1 \\ 5) & & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 0 & 3 & 2 & 1 & 0 & 2 \\ 6) & & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 0 & 3 & 2 & 1 & 2 & 0 \\ 7) & & 1 & 0 & 0 & 5 & 3 & 2 & 5 & 1 & 1 & 1 & 0 & 1 & 3 \\ 8) & & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 1 & 1 & 1 & 0 & 1 & 3 \\ 9) & & 1 & 0 & 0 & 5 & 3 & 2 & 5 & 1 & 1 & 1 & 0 & 3 & 1 \\ 10)& & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 1 & 1 & 1 & 0 & 3 & 1 \\ 11)& & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 3 & 0 & 2 & 1 & 0 & 2 \\ 12)& & 3 & 1 & 1 & 3 & 1 & 2 & 7 & 3 & 0 & 2 & 1 & 2 & 0 \\ \multicolumn{15}{c}{Permutations: $(4_a,4_b)(5_a,5_b)$, $(8_a,8_b)(8_c,8_d)(10_b,10_c)$.} \label{2a6e7p0} \end{longtable} }
\begin{table}[H]\small \caption{Alt$_{6} < E_7$, $p = 5$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8 & 10 & 1 & 5$_a$ & 5$_b$ & 8 & 10 \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 3 & 3 & 10 & 2 & 0 & 0 & 0 & 2 & 4\\ 2) & \textbf{P}, \textbf{N} & 4 & 2 & 5 & 8 & 3 & 0 & 0 & 0 & 2 & 4\\ 3) & \textbf{P}, \textbf{N} & 5 & 4 & 4 & 6 & 4 & 0 & 0 & 0 & 2 & 4\\ 4) & \textbf{P}, \textbf{N} & 7 & 1 & 1 & 12 & 2 & 0 & 0 & 0 & 2 & 4\\ 5) & \textbf{P}, \textbf{N} & 8 & 0 & 3 & 10 & 3 & 0 & 0 & 0 & 2 & 4\\ 6) & \textbf{N} & 9 & 2 & 2 & 8 & 4 & 0 & 0 & 0 & 2 & 4\\ 7) & \textbf{P}, \textbf{N} & 9 & 0 & 0 & 13 & 2 & 0 & 4 & 4 & 2 & 0\\ 8) & \textbf{N} & 11 & 1 & 1 & 9 & 4 & 0 & 4 & 4 & 2 & 0\\ 9) & \textbf{N} & 5 & 2 & 2 & 11 & 2 & 4 & 2 & 2 & 4 & 0\\ 10) & \textbf{N} & 6 & 4 & 1 & 9 & 3 & 4 & 2 & 2 & 4 & 0\\ 11) & \textbf{N} & 7 & 3 & 3 & 7 & 4 & 4 & 2 & 2 & 4 & 0\\ 12) & & 10 & 0 & 9 & 1 & 7 & 6 & 0 & 6 & 0 & 2\\ 13) & \textbf{N} & 9 & 0 & 0 & 13 & 2 & 8 & 0 & 0 & 6 & 0\\ 14) & \textbf{N} & 11 & 1 & 1 & 9 & 4 & 8 & 0 & 0 & 6 & 0 \label{a6e7p5} \end{tabular}\\ Permutations: $(5_a,5_b)$. \end{table}
\begin{table}[H]\small \caption{$2\cdot$Alt$_{6} < E_7$, $p = 5$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 5$_a$ & 5$_b$ & 8 & $10_a$ & 4$_a$ & 4$_b$ & 10$_b$ & 10$_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 0 & 0 & 10 & 5 & 2 & 2 & 1 & 3 \\ 2) & \textbf{P}, \textbf{N} & 5 & 1 & 1 & 6 & 7 & 2 & 2 & 1 & 3 \\ 3) & \textbf{P}, \textbf{N} & 5 & 1 & 1 & 6 & 7 & 3 & 6 & 2 & 0 \end{tabular}\\ Permutations: $(4_a,4_b)(5_a,5_b)$, $(10_b,10_c)$. \label{2a6e7p5} \end{table}
{\small
\begin{longtable}{r|ccccc|ccccc}
\caption{Alt$_{5} < E_7$, $p = 0$ or $p \nmid |H|$}\\
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & 1 & 3$_a$ & $3_b$ & 4 & 5 & 1 & 3$_a$ & $3_b$ & 4 & 5 \\ \hline 1) & 7 & 5 & 7 & 10 & 10 & 0 & 1 & 0 & 2 & 9\\ 2) & 8 & 6 & 10 & 3 & 13 & 0 & 1 & 9 & 4 & 2\\ 3) & 11 & 6 & 10 & 6 & 10 & 0 & 1 & 9 & 4 & 2\\ 4) & 3 & 5 & 6 & 8 & 13 & 0 & 4 & 6 & 4 & 2\\ 5) & 6 & 5 & 6 & 11 & 10 & 0 & 4 & 6 & 4 & 2\\ 6) & 6 & 5 & 6 & 11 & 10 & 1 & 0 & 2 & 1 & 9\\ 7) & 6 & 7 & 7 & 5 & 13 & 2 & 6 & 6 & 2 & 2\\ 8) & 9 & 7 & 7 & 8 & 10 & 2 & 6 & 6 & 2 & 2\\ 9) & 4 & 5 & 7 & 7 & 13 & 3 & 1 & 0 & 5 & 6\\ 10) & 7 & 5 & 7 & 10 & 10 & 3 & 1 & 0 & 5 & 6\\ 11) & 3 & 5 & 6 & 8 & 13 & 4 & 0 & 2 & 4 & 6\\ 12) & 6 & 5 & 6 & 11 & 10 & 4 & 0 & 2 & 4 & 6\\ 13) & 17 & 0 & 22 & 0 & 10 & 4 & 0 & 14 & 0 & 2\\ 14) & 9 & 2 & 15 & 2 & 13 & 4 & 4 & 10 & 0 & 2\\ 15) & 12 & 2 & 15 & 5 & 10 & 4 & 4 & 10 & 0 & 2\\ 16) & 6 & 7 & 7 & 5 & 13 & 6 & 2 & 2 & 2 & 6\\ 17) & 9 & 7 & 7 & 8 & 10 & 6 & 2 & 2 & 2 & 6\\ 18) & 9 & 2 & 15 & 2 & 13 & 8 & 0 & 6 & 0 & 6\\ 19) & 12 & 2 & 15 & 5 & 10 & 8 & 0 & 6 & 0 & 6\\ 20) & 16 & 5 & 7 & 19 & 1 & 9 & 1 & 0 & 11 & 0\\ 21) & 15 & 5 & 6 & 20 & 1 & 10 & 0 & 2 & 10 & 0\\ 22) & 18 & 7 & 7 & 17 & 1 & 12 & 2 & 2 & 8 & 0\\ 23) & 21 & 2 & 15 & 14 & 1 & 14 & 0 & 6 & 6 & 0\\ 24) & 35 & 0 & 31 & 0 & 1 & 20 & 0 & 12 & 0 & 0\\ \multicolumn{11}{c}{Permutations: $(3_a,3_b)$.} \label{a5e7p0} \end{longtable} }
\begin{table}[H]\small
\caption{$2\cdot$Alt$_{5} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccc|cccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 3$_a$ & 3$_b$ & 4$_a$ & 5 & 2$_a$ & 2$_b$ & 4$_b$ & 6 \\ \hline
1) & \textbf{P} & 0 & 8 & 9 & 8 &10 & 0 & 2 & 4 & 6 \\
2) & & 1 & 8 &10 & 7 &10 & 2 & 1 & 5 & 5 \\
3) & & 3 & 5 &15 & 5 &10 & 5 & 0 & 7 & 3 \\
4) & & 3 & 8 & 9 &11 & 7 & 0 & 2 & 4 & 6 \\
5) & & 3 & 8 & 9 &11 & 7 & 3 & 5 & 1 & 6 \\
6) & & 3 &10 &10 & 5 &10 & 0 & 0 & 2 & 8 \\
7) & & 4 & 8 &10 &10 & 7 & 2 & 1 & 5 & 5 \\
8) & & 4 & 8 &10 &10 & 7 & 5 & 4 & 2 & 5 \\
9) & & 6 & 5 &15 & 8 & 7 & 5 & 0 & 7 & 3 \\
10) & & 6 & 5 &15 & 8 & 7 & 8 & 3 & 4 & 3 \\
11) & & 6 &10 &10 & 8 & 7 & 0 & 0 & 2 & 8 \\
12) & & 8 & 0 & 1 & 8 &18 & 0 & 2 & 4 & 6 \\
13) & & 8 & 9 &13 & 6 & 7 & 0 & 8 & 1 & 6 \\
14) & & 9 & 0 & 2 & 7 &18 & 2 & 1 & 5 & 5 \\
15) & &11 & 0 & 1 &11 &15 & 0 & 2 & 4 & 6 \\
16) & &11 & 0 & 1 &11 &15 & 3 & 5 & 1 & 6 \\
17) & &11 & 2 & 2 & 5 &18 & 0 & 0 & 2 & 8 \\
18) & &12 & 0 & 2 &10 &15 & 2 & 1 & 5 & 5 \\
19) & &12 & 0 & 2 &10 &15 & 5 & 4 & 2 & 5 \\
20) & &13 & 9 &18 & 1 & 7 & 7 & 0 & 9 & 1 \\
21) & &13 & 9 &18 & 1 & 7 &10 & 3 & 6 & 1 \\
22) & &14 & 0 &28 & 0 & 7 &14 & 0 & 7 & 0 \\
23) & &14 & 2 & 2 & 8 &15 & 0 & 0 & 2 & 8 \\
24) & &16 & 1 & 5 & 6 &15 & 0 & 8 & 1 & 6 \\
25) & &21 & 1 &10 & 1 &15 & 7 & 0 & 9 & 1 \\
26) & &21 & 1 &10 & 1 &15 &10 & 3 & 6 & 1 \\
27) & &36 & 1 &10 &16 & 0 &16 & 9 & 0 & 1 \\
28) & &52 & 0 &27 & 0 & 0 &26 & 0 & 1 & 0 \end{tabular}\\
Permutations: $(2_a,2_b)(3_a,3_b)$. \\
$4_b$ is faithful for $2\cdot\Alt_5$, $4_a$ is not. \label{2a5e7p0} \end{table}
\begin{table}[H]\small \caption{Alt$_{5} < E_7$, $p = 3$}
\begin{tabular}{rc|cccc|cccc}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$}\\ & & 1 & 3$_a$ & 3$_b$ & 4 & 1 & 3$_a$ & 3$_b$ & 4\\ \hline 1) & \textbf{P}, \textbf{N} & 17 & 5 & 7 & 20 & 1 & 5 & 4 & 7\\ 2) & \textbf{N} & 21 & 6 & 10 & 16 & 2 & 1 & 9 & 6\\ 3) & \textbf{P}, \textbf{N} & 16 & 5 & 6 & 21 & 2 & 4 & 6 & 6\\ 4) & \textbf{N} & 19 & 7 & 7 & 18 & 4 & 6 & 6 & 4\\ 5) & \textbf{N} & 27 & 0 & 22 & 10 & 6 & 0 & 14 & 2\\ 6) & \textbf{N} & 22 & 2 & 15 & 15 & 6 & 4 & 10 & 2\\ 7) & \textbf{P}, \textbf{N} & 17 & 5 & 7 & 20 & 9 & 1 & 0 & 11\\ 8) & \textbf{N} & 16 & 5 & 6 & 21 & 10 & 0 & 2 & 10\\ 9) & \textbf{N} & 19 & 7 & 7 & 18 & 12 & 2 & 2 & 8\\ 10) & \textbf{N} & 22 & 2 & 15 & 15 & 14 & 0 & 6 & 6\\ 11) & \textbf{N} & 36 & 0 & 31 & 1 & 20 & 0 & 12 & 0 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5e7p3} \end{table}
\begin{table}[H]\small \caption{$2\cdot$Alt$_{5} < E_7$, $p = 3$}
\begin{tabular}{rc|cccc|ccc}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$}\\ & & 1 & 3$_a$ & 3$_b$ & 4 & $2_a$ & $2_b$ & 6 \\ \hline 1) & \textbf{N} & 15 & 9 & 13 & 13 & 1 & 9 & 6\\ 2) & \textbf{N} & 31 & 1 & 5 & 21 & 1 & 9 & 6 \\ 3) & & 52 & 27 & 0 & 0 & 1 & 27 & 0\\ 4) & \textbf{P}, \textbf{N} & 13 & 10 & 10 & 15 & 2 & 2 & 8 \\ 5) & \textbf{N} & 29 & 2 & 2 & 23 & 2 & 2 & 8\\ 6) & \textbf{P}, \textbf{N} & 10 & 8 & 9 & 18 & 4 & 6 & 6 \\ 7) & \textbf{N} & 26 & 0 & 1 & 26 & 4 & 6 & 6\\ 8) & \textbf{P}, \textbf{N} & 11 & 8 & 10 & 17 & 7 & 6 & 5 \\ 9) & \textbf{N} & 27 & 0 & 2 & 25 & 7 & 6 & 5\\ 10)& \textbf{P}, \textbf{N} & 13 & 15 & 5 & 15 & 7 & 12 & 3 \\ 11)& \textbf{N} & 20 & 18 & 9 & 8 & 9 & 16 & 1\\ 12)& \textbf{N} & 36 & 10 & 1 & 16 & 9 & 16 & 1 \\ 13)& \textbf{N} & 21 & 0 & 28 & 7 & 21 & 7 & 0 \end{tabular}\\ Permutations: $(2_a,2_b)(3_a,3_b)$. \label{2a5e7p3} \end{table}
\subsection{Sporadic Groups} \
\begin{table}[H]\small
\caption{$M_{11} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|ccccccc|ccccccc}
& \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & 1 & 10$_a$ & 11 & $16$ & $16^{*}$ & 45 & 55 & 1 & 10$_a$ & 11 & $16$ & $16^{*}$ & 45 & 55\\ \hline 1) & 2 & 0 & 2 & 2 & 2 & 1 & 0 & 2 & 0 & 2 & 1 & 1 & 0 & 0 \\ 2) & 3 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 0 & 2 & 1 & 1 & 0 & 0 \\ 3) & 4 & 2 & 0 & 2 & 2 & 1 & 0 & 4 & 2 & 0 & 1 & 1 & 0 & 0 \\ \end{tabular} \label{m11e7p0} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_7$, $p = 11$}
\begin{tabular}{rc|cccccccc|cccccccc}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{8}{c}{$V_{56}$} \\ & & 1 & 9 & 10 & $10^{*}$ & 11 & 16 & 44 & 55 & 1 & 9 & 10 & $10^{*}$ & 11 & 16 & 44 & 55 \\ \hline 1) & & 2 & 1 & 1 & 1 & 2 & 5 & 0 & 0 & 2 & 0 & 0 & 0 & 2 & 2 & 0 & 0 \\ 2) & & 3 & 0 & 0 & 0 & 1 & 4 & 0 & 1 & 2 & 0 & 0 & 0 & 2 & 2 & 0 & 0 \\ 3) & \textbf{N} & 6 & 3 & 1 & 1 & 0 & 5 & 0 & 0 & 6 & 2 & 0 & 0 & 0 & 2 & 0 & 0 \end{tabular} \label{m11e7p11} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_7$, $p = 5$}
\begin{tabular}{rc|ccccccc|ccccc}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & $10_a$ & 11 & 16 & $16^{*}$ & 45 & 55 & 1 & $10_a$ & 11 & 16 & $16^{*}$ \\ \hline 1) & & 1 & 1 & 2 & 0 & 0 & 1 & 1 & 2 & 0 & 2 & 1 & 1 \\ 2) & \textbf{N} & 2 & 0 & 2 & 2 & 2 & 1 & 0 & 2 & 0 & 2 & 1 & 1 \\ 3) & & 2 & 1 & 1 & 0 & 0 & 0 & 2 & 2 & 0 & 2 & 1 & 1 \\ 4) & \textbf{N} & 3 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 0 & 2 & 1 & 1 \\ 5) & & 3 & 3 & 0 & 0 & 0 & 1 & 1 & 4 & 2 & 0 & 1 & 1 \\ 6) & \textbf{N} & 4 & 2 & 0 & 2 & 2 & 1 & 0 & 4 & 2 & 0 & 1 & 1 \\ \end{tabular} \label{m11e7p5} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_7$, $p = 3$}
\begin{tabular}{rc|cccccccc|cccccc}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & 10$_a$ & 10$_b$ & $(10_b)^{*}$ & 24 & 45 & 1 & 5 & $5^{*}$ & 10$_a$ & 10$_b$ & $(10_b)^{*}$ \\ \hline 1) & \textbf{N} & 8 & 2 & 2 & 2 & 2 & 2 & 0 & 1 & 6 & 1 & 1 & 2 & 1 & 1 \\ 2) & \textbf{N} & 8 & 4 & 4 & 0 & 2 & 2 & 0 & 1 & 6 & 3 & 3 & 0 & 1 & 1 \\ 3) & \textbf{N} & 9 & 4 & 4 & 0 & 3 & 3 & 1 & 0 & 6 & 3 & 3 & 0 & 1 & 1 \end{tabular} \label{m11e7p3} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_7$, $p = 2$}
\begin{tabular}{rc|ccccc|cccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 10 & 16 & $16^{*}$ & 44 & 1 & 10 & 16 & $16^{*}$ \\ \hline 1) & \textbf{N} & 5 & 2 & 2 & 2 & 1 & 4 & 2 & 1 & 1 \\ 2) & \textbf{N} & 7 & 5 & 1 & 1 & 1 & 4 & 2 & 1 & 1 \end{tabular} \label{m11e7p2} \end{table}
\begin{table}[H]\small
\caption{$2\cdot M_{12} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|cccc|cc}
& \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 16 & $16^{*}$ & 66 & 12 & 32 \\ \hline 1) & 3 & 2 & 2 & 1 & 2 & 1 \end{tabular} \label{2m12e7p0} \end{table}
\begin{table}[H]\small \caption{$2\cdot M_{12} < E_7$, $p = 11$}
\begin{tabular}{r|ccc|cc}
& \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 16 & 66 & 12 & 32 \\ \hline 1) & 3 & 4 & 1 & 2 & 1 \end{tabular} \label{2m12e7p11} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_7$, $p = 5$}
\begin{tabular}{r|ccccc|cccc}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & 1 & $11_a$ & $16$ & $16^{*}$ & 78 & 1 & $11_a$ & $16$ & $16^{*}$ \\ \hline 1) & 1 & 2 & 1 & 1 & 1 & 2 & 2 & 1 & 1 \end{tabular}\\ Permutations: $(11_a,11_b)$. \label{m12e7p5} \end{table}
\begin{table}[H]\small \caption{$2\cdot M_{12} < E_7$, $p = 5$}
\begin{tabular}{rc|cccccc|cc}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & 16 & $16^{*}$ & $55_c$ & 66 & 78 & 12 & 32 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 0 & 1 & 2 & 1 \\ 2) & & 3 & 2 & 2 & 0 & 1 & 0 & 2 & 1 \end{tabular}\\ $55_c$ is fixed by $\textup{Out}(G)$. \label{2m12e7p5} \end{table}
\begin{table}[H]\small \caption{$2\cdot M_{12} < E_7$, $p = 3$}
\begin{tabular}{rc|cccc|cccc}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 15 & $15^{*}$ & 34 & $6$ & $6^{*}$ & $10_c$ & $(10_c)^{*}$ \\ \hline 1) & \textbf{N} & 9 & 3 & 3 & 1 & 3 & 3 & 1 & 1 \end{tabular} \label{2m12e7p3} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_7$, $p = 2$}
\begin{tabular}{rc|ccccc|cccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 10 & 16 & $16^{*}$ & 44 & 1 & 10 & $16$ & $16^{*}$ \\ \hline 1) & \textbf{N} & 5 & 2 & 2 & 2 & 1 & 4 & 2 & 1 & 1 \end{tabular} \label{m12e7p2} \end{table}
\begin{table}[H]\small \caption{$2\cdot M_{22} < E_7$, $p = 5$}
\begin{tabular}{rc|c|cc}
& & \multicolumn{1}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 133 & $28$ & $28^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{2m22e7p5} \end{table}
\begin{table}[H]\small \caption{$J_{1} < E_7$, $p = 11$}
\begin{tabular}{r|ccccc|cccc}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & 1 & 7 & 14 & 27 & 64 & 1 & 7 & 14 & 27 \\ \hline 1) & 3 & 5 & 1 & 3 & 0 & 0 & 4 & 2 & 0 \\ 2) & 1 & 0 & 1 & 2 & 1 & 2 & 0 & 0 & 2 \\ 3) & 21 & 14 & 1 & 0 & 0 & 14 & 6 & 0 & 0 \end{tabular} \label{j1e7p11} \end{table}
\begin{table}[H]\small
\caption{$2 \cdot J_{2} < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|ccccc|*{3}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & $14_a$ & $14_b$ & $21_a$ & $21_b$ & $6_a$ & $6_b$ & $14_c$ \\ \hline 1) & 14 & 7 & 0 & 1 & 0 & 0 & 7 & 1 \end{tabular}\\ Permutations: $(6_a,6_b)(14_a,14_b)(21_a,21_b)$. \label{2j2e7p0} \end{table}
\begin{table}[H]\small \caption{$2 \cdot J_{2} < E_7$, $p = 7$}
\begin{tabular}{r|ccccc|*{3}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & $14_a$ & $14_b$ & $21_a$ & $21_b$ & $6_a$ & $6_b$ & $14_c$ \\ \hline 1) & 14 & 0 & 7 & 0 & 1 & 0 & 7 & 1 \end{tabular}\\ Permutations: $(6_a,6_b)(14_a,14_b)(21_a,21_b)$. \label{2j2e7p7} \end{table}
\begin{table}[H]\small \caption{$2 \cdot J_{2} < E_7$, $p = 5$}
\begin{tabular}{r|*{3}{c}|*{2}{c}}
& \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & $14_a$ & $21_a$ & $6$ & $14_b$ \\ \hline 1) & 14 & 7 & 1 & 7 & 1 \end{tabular}\\ Outer automorphism fixes these isomorphism types. \label{2j2e7p5} \end{table}
\begin{table}[H]\small \caption{$2 \cdot J_{2} < E_7$, $p = 3$}
\begin{tabular}{rc|ccccc|ccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & $13_a$ & $13_b$ & $21_a$ & $21_b$ & $6_a$ & $6_b$ & 14 \\ \hline 1) & \textbf{N} & 21 & 7 & 0 & 1 & 0 & 0 & 7 & 1 \end{tabular}\\ Permutations: $(6_a,6_b)(13_a,13_b)(21_a,21_b)$. \label{2j2e7p3} \end{table}
\begin{table}[H]\small \caption{$J_{2} < E_7$, $p = 2$}
\begin{tabular}{rc|ccccccccc|cccccc}
& & \multicolumn{9}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ & 36 & $64_a$ & $64_b$ & 84 & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ & 36 \\ \hline 1) & \textbf{P} & 1 & 1 & 0 & 1 & 2 & 0 & 0 & 0 & 1 & 2 & 1 & 2 & 0 & 0 & 1 \\ 2) & & 15 & 1 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 2 & 1 & 8 & 0 & 0 & 0 \\ 3) & \textbf{N} & 11 & 3 & 8 & 0 & 4 & 0 & 0 & 0 & 0 & 4 & 0 & 4 & 0 & 2 & 0 \\ 4) & \textbf{N} & 3 & 2 & 2 & 0 & 3 & 0 & 0 & 1 & 0 & 4 & 2 & 2 & 0 & 2 & 0 \\ 5) & \textbf{N} & 35 & 14 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 20 & 6 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)(14_a,14_b)(64_a,64_b)$. \label{j2e7p2} \end{table}
\begin{table}[H]\small \caption{$2 \cdot Ru < E_7$, $p = 5$}
\begin{tabular}{rc|c|cc}
& & \multicolumn{1}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 133 & $28$ & $28^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{2Rue7p5} \end{table}
\begin{table}[H]\small \caption{$2 \cdot HS < E_7$, $p = 5$}
\begin{tabular}{rc|c|cc}
& & \multicolumn{1}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 133 & 28 & $28^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{2HSe7p5} \end{table}
\subsection{Cross-characteristic Groups $L_2(q)$ $(q \neq 4$, $5$, $9)$} \
{\small
\begin{longtable}{rc|*{6}{c}|*{6}{c}}
\caption{$L_2(7) < E_7$, $p = 0$ or $p \nmid |H|$} \\
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 3 & $3^{*}$ & $6_a$ & 7 & $8_a$ & 1 & 3 & $3^{*}$ & $6_a$ & 7 & $8_a$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 6 & 5 & 7 & 0 & 2 & 2 & 0 & 4 & 2 \\ 2) & & 1 & 2 & 2 & 6 & 4 & 7 & 2 & 0 & 0 & 4 & 2 & 2 \\ 3) & & 2 & 3 & 3 & 6 & 3 & 7 & 0 & 2 & 2 & 0 & 4 & 2 \\ 4) & & 3 & 1 & 1 & 6 & 8 & 4 & 0 & 2 & 2 & 0 & 4 & 2 \\ 5) & & 3 & 4 & 4 & 6 & 2 & 7 & 4 & 2 & 2 & 4 & 0 & 2 \\ 6) & & 4 & 2 & 2 & 6 & 7 & 4 & 2 & 0 & 0 & 4 & 2 & 2 \\ 7) & & 5 & 0 & 0 & 2 & 4 & 11 & 0 & 1 & 1 & 6 & 2 & 0 \\ 8) & & 5 & 3 & 3 & 6 & 6 & 4 & 0 & 2 & 2 & 0 & 4 & 2 \\ 9) & & 6 & 4 & 4 & 6 & 5 & 4 & 4 & 2 & 2 & 4 & 0 & 2 \\ 10) & & 7 & 2 & 2 & 2 & 2 & 11 & 2 & 3 & 3 & 6 & 0 & 0 \\ 11) & & 8 & 0 & 0 & 2 & 7 & 8 & 0 & 1 & 1 & 6 & 2 & 0 \\ 12) & & 9 & 1 & 1 & 0 & 2 & 13 & 8 & 0 & 0 & 0 & 0 & 6 \\ 13) & & 10 & 2 & 2 & 2 & 5 & 8 & 2 & 3 & 3 & 6 & 0 & 0 \\ 14) & & 11 & 9 & 9 & 6 & 0 & 4 & 4 & 6 & 6 & 0 & 0 & 2 \\ 15) & & 12 & 1 & 1 & 0 & 5 & 10 & 8 & 0 & 0 & 0 & 0 & 6 \\ 16) & & 15 & 7 & 7 & 2 & 0 & 8 & 2 & 7 & 7 & 2 & 0 & 0 \\ 17) & & 21 & 1 & 1 & 0 & 14 & 1 & 14 & 0 & 0 & 0 & 6 & 0 \\ 18) & & 35 & 15 & 15 & 0 & 0 & 1 & 20 & 6 & 6 & 0 & 0 & 0 \label{l27e7p0} \end{longtable} }
\begin{table}[H]\small
\caption{$SL_2(7) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{6}{c}|*{5}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 3 & $3^{*}$ & $6_a$ & 7 & $8_a$ & 4 & $4^{*}$ & $6_b$ & $6_c$ & $8_b$ \\ \hline 1) & \textbf{P} & 0 & 4 & 4 & 3 & 5 & 7 & 2 & 2 & 1 & 3 & 2 \\ 2) & & 3 & 4 & 4 & 3 & 8 & 4 & 2 & 2 & 1 & 3 & 2 \\ 3) & & 14 & 1 & 1 & 7 & 1 & 8 & 1 & 1 & 1 & 7 & 0 \end{tabular}\\ Permutations: $(6_{b},6_{c})$. \label{2l27e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(7) < E_7$, $p = 3$}
\begin{tabular}{rc|ccccc|cccccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 3 & $3^{*}$ & 6 & 7 & 1 & 3 & $3^{*}$ & 6 & 7 \\ \hline 1) & \textbf{P}, \textbf{N} & 7 & 1 & 1 & 6 & 12 & 2 & 2 & 2 & 0 & 6 \\ 2) & \textbf{N} & 8 & 2 & 2 & 6 & 11 & 4 & 0 & 0 & 4 & 4 \\ 3) & \textbf{P}, \textbf{N} & 9 & 3 & 3 & 6 & 10 & 2 & 2 & 2 & 0 & 6 \\ 4) & \textbf{N} & 10 & 4 & 4 & 6 & 9 & 6 & 2 & 2 & 4 & 2 \\ 5) & \textbf{N} & 15 & 9 & 9 & 6 & 4 & 6 & 6 & 6 & 0 & 2 \\ 6) & \textbf{N} & 16 & 0 & 0 & 2 & 15 & 0 & 1 & 1 & 6 & 2 \\ 7) & \textbf{N} & 18 & 2 & 2 & 2 & 13 & 2 & 3 & 3 & 6 & 0 \\ 8) & \textbf{N} & 22 & 1 & 1 & 0 & 15 & 14 & 0 & 0 & 0 & 6 \\ 9) & \textbf{N} & 23 & 7 & 7 & 2 & 8 & 2 & 7 & 7 & 2 & 0 \\ 10) & \textbf{N} & 36 & 15 & 15 & 0 & 1 & 20 & 6 & 6 & 0 & 0 \end{tabular} \label{l27e7p3} \end{table}
\begin{table}[H]\small \caption{$SL_2(7) < E_7$, $p = 3$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 3 & $3^{*}$ & $6_a$ & 7 & 4 & $4^{*}$ & $6_b$ & $6_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 7 & 4 & 4 & 3 & 12 & 4 & 4 & 1 & 3 \\ 2) & \textbf{P}, \textbf{N} & 22 & 1 & 1 & 7 & 9 & 1 & 1 & 1 & 7 \end{tabular}\\ Permutations: $(6_{b},6_{c})$. \label{2l27e7p3} \end{table}
{\small \begin{table}[H]\small
\caption{$L_2(8) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{9}{c}|*{9}{c}}
& \multicolumn{9}{c|}{$V_{133}$} & \multicolumn{9}{c}{$V_{56}$} \\ & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & 1 & 2 & 1 & 1 & 1 & 2 & 3 & 3 & 3 & 0 & 2 & 2 & 2 & 2 & 0 & 0 & 0 & 0 \\ 2) & 1 & 2 & 1 & 1 & 1 & 2 & 3 & 3 & 3 & 2 & 0 & 0 & 0 & 0 & 0 & 2 & 2 & 2 \\ 3) & 2 & 2 & 1 & 1 & 1 & 3 & 2 & 3 & 3 & 2 & 0 & 0 & 0 & 0 & 0 & 2 & 3 & 1 \\ 4) & 3 & 1 & 1 & 3 & 2 & 0 & 3 & 3 & 3 & 0 & 2 & 2 & 3 & 1 & 0 & 0 & 0 & 0 \\ 5) & 3 & 1 & 5 & 1 & 0 & 0 & 3 & 3 & 3 & 0 & 2 & 4 & 2 & 0 & 0 & 0 & 0 & 0 \\ 6) & 3 & 2 & 1 & 1 & 1 & 4 & 0 & 4 & 3 & 4 & 0 & 0 & 0 & 0 & 2 & 0 & 2 & 2 \\ 7) & 5 & 2 & 1 & 1 & 1 & 6 & 0 & 4 & 1 & 3 & 0 & 0 & 0 & 0 & 1 & 0 & 5 & 0 \\ 8) & 6 & 5 & 1 & 1 & 1 & 1 & 0 & 4 & 3 & 6 & 2 & 0 & 0 & 0 & 0 & 0 & 2 & 2 \\ 9) & 10 & 2 & 1 & 1 & 1 & 11 & 0 & 0 & 0 & 8 & 0 & 0 & 0 & 0 & 6 & 0 & 0 & 0 \\ 10) & 13 & 5 & 1 & 1 & 1 & 8 & 0 & 0 & 0 & 10 & 2 & 0 & 0 & 0 & 4 & 0 & 0 & 0 \\ 11) & 21 & 1 & 0 & 14 & 1 & 0 & 0 & 0 & 0 & 14 & 0 & 0 & 6 & 0 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)(9_a,9_b,9_c)$. \label{l28e7p0} \end{table} }
\begin{table}[H]\small \caption{$L_2(8) < E_7$, $p = 7$}
\begin{tabular}{rc|cccccc|cccccc}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 \\ \hline 1) & \textbf{P}, \textbf{N} & 10 & 2 & 1 & 1 & 1 & 11 & 0 & 2 & 2 & 2 & 2 & 0 \\ 2) & \textbf{N} & 10 & 2 & 1 & 1 & 1 & 11 & 8 & 0 & 0 & 0 & 0 & 6 \\ 3) & \textbf{N} & 12 & 1 & 0 & 1 & 5 & 9 & 0 & 2 & 0 & 2 & 4 & 0 \\ 4) & \textbf{N} & 12 & 1 & 1 & 2 & 3 & 9 & 0 & 2 & 2 & 1 & 3 & 0 \\ 5) & \textbf{N} & 13 & 2 & 0 & 1 & 5 & 8 & 0 & 5 & 0 & 0 & 3 & 0 \\ 6) & \textbf{N} & 13 & 5 & 1 & 1 & 1 & 8 & 10 & 2 & 0 & 0 & 0 & 4 \\ 7) & & 21 & 1 & 0 & 1 & 14 & 0 & 14 & 0 & 0 & 0 & 6 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)$. \label{l28e7p7} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < E_7$, $p = 3$}
\begin{tabular}{rc|ccccc|ccccc}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 7 & $9_a$ & $9_b$ & $9_c$ & 1 & 7 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 7 & 3 & 3 & 3 & 0 & 8 & 0 & 0 & 0 \\ 2) & \textbf{N} & 3 & 7 & 3 & 3 & 3 & 2 & 0 & 2 & 2 & 2 \\ 3) & \textbf{N} & 5 & 8 & 2 & 3 & 3 & 2 & 0 & 2 & 1 & 3 \\ 4) & \textbf{N} & 7 & 9 & 0 & 3 & 4 & 6 & 2 & 0 & 2 & 2 \\ 5) & \textbf{N} & 11 & 11 & 0 & 1 & 4 & 4 & 1 & 0 & 0 & 5 \\ 6) & \textbf{N} & 21 & 16 & 0 & 0 & 0 & 14 & 6 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b,9_c)$. \label{l28e7p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(11) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{8}{c}|*{8}{c}}
& \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{8}{c}{$V_{56}$} \\ & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & $12_a$ & $12_b$ \\ \hline 1) & 1 & 2 & 2 & 2 & 1 & 2 & 2 & 3 & 2 & 1 & 1 & 2 & 0 & 0 & 0 & 2 \\ 2) & 2 & 0 & 0 & 4 & 2 & 1 & 2 & 3 & 2 & 1 & 1 & 2 & 0 & 0 & 0 & 2 \\ 3) & 4 & 2 & 2 & 2 & 1 & 5 & 1 & 1 & 4 & 1 & 1 & 2 & 0 & 2 & 0 & 0 \\ 4) & 5 & 0 & 0 & 4 & 2 & 4 & 1 & 1 & 4 & 1 & 1 & 2 & 0 & 2 & 0 & 0 \\ 5) & 9 & 4 & 4 & 0 & 6 & 0 & 1 & 1 & 6 & 3 & 3 & 0 & 2 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(12_{a},12_{b})$. \label{l211e7p0} \end{table}
\begin{table}[H]\small
\caption{$SL_2(11) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{8}{c}|*{7}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & $12_a$ & $12_b$ & 6 & $6^{*}$ & $10_c$ & $10_d$ & $10_e$ & $12_c$ & $12_d$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 3 & 0 & 3 & 2 & 3 & 1 & 1 & 0 & 1 & 1 & 2 & 0 \\ 2) & & 1 & 1 & 1 & 3 & 0 & 4 & 1 & 3 & 0 & 0 & 0 & 1 & 1 & 1 & 2 \\ 3) & & 3 & 1 & 1 & 3 & 0 & 6 & 1 & 1 & 3 & 3 & 0 & 1 & 1 & 0 & 0 \\ 4) & & 8 & 3 & 3 & 0 & 6 & 1 & 1 & 1 & 3 & 3 & 0 & 1 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(12_{a},12_{b})(12_{c},12_{d})$. \label{2l211e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_7$, $p = 5$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & $11$ & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & $11$ \\ \hline 1) & \textbf{N} & 6 & 2 & 2 & 2 & 1 & 7 & 4 & 1 & 1 & 2 & 0 & 2 \\ 2) & \textbf{N} & 7 & 0 & 0 & 4 & 2 & 6 & 4 & 1 & 1 & 2 & 0 & 2 \\ 3) & \textbf{N} & 11 & 4 & 4 & 0 & 6 & 2 & 6 & 3 & 3 & 0 & 2 & 0 \end{tabular} \label{l211e7p5} \end{table}
\begin{table}[H]\small \caption{$SL_2(11) < E_7$, $p = 5$}
\begin{tabular}{rc|*{6}{c}|*{5}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & 6 & $6^{*}$ & $10_c$ & $10_d$ & $10_e$ \\ \hline 1) & \textbf{P}, \textbf{N} & 5 & 1 & 1 & 0 & 3 & 8 & 3 & 3 & 0 & 1 & 1 \\ 2) & \textbf{N} & 10 & 3 & 3 & 6 & 0 & 3 & 3 & 3 & 0 & 1 & 1 \end{tabular} \label{2l211e7p5} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_7$, $p = 3$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & $10_a$ & $12_a$ & $12_b$ \\ \hline 1) & \textbf{N} & 3 & 4 & 4 & 3 & 2 & 3 & 2 & 3 & 3 & 0 & 0 & 2 \\ 2) & \textbf{N} & 3 & 4 & 4 & 3 & 3 & 2 & 2 & 3 & 3 & 0 & 2 & 0 \\ 3) & \textbf{N} & 9 & 4 & 4 & 6 & 1 & 1 & 6 & 3 & 3 & 2 & 0 & 0 \end{tabular} \label{l211e7p3} \end{table}
\begin{table}[H]\small \caption{$SL_2(11) < E_7$, $p = 3$}
\begin{tabular}{rc|*{6}{c}|*{5}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & $10_a$ & $12_a$ & $12_b$ & 6 & $6^{*}$ & $10_b$ & $12_c$ & $12_d$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 1 & 1 & 6 & 2 & 3 & 1 & 1 & 2 & 0 & 2 \\ 2) & \textbf{P}, \textbf{N} & 5 & 1 & 1 & 7 & 1 & 3 & 0 & 0 & 2 & 2 & 1 \\ 3) & \textbf{N} & 9 & 1 & 1 & 9 & 1 & 1 & 3 & 3 & 2 & 0 & 0 \\ 4) & & 9 & 9 & 9 & 1 & 1 & 1 & 3 & 3 & 2 & 0 & 0 \end{tabular}\\ Permutations: $(12_a,12_b)(12_c,12_d)$. \label{2l211e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_7$, $p = 2$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ \\ \hline 1) & \textbf{N} & 3 & 1 & 1 & 6 & 2 & 3 & 2 & 1 & 1 & 2 & 0 & 2 \\ 2) & \textbf{N} & 3 & 4 & 4 & 3 & 2 & 3 & 2 & 1 & 1 & 2 & 0 & 2 \\ 3) & \textbf{N} & 5 & 2 & 2 & 6 & 1 & 3 & 0 & 0 & 0 & 2 & 2 & 1 \\ 4) & \textbf{P}, \textbf{N} & 5 & 5 & 5 & 3 & 1 & 3 & 0 & 0 & 0 & 2 & 2 & 1 \\ 5) & \textbf{N} & 9 & 4 & 4 & 6 & 1 & 1 & 6 & 3 & 3 & 2 & 0 & 0 \\ 6) & \textbf{N} & 9 & 7 & 7 & 3 & 1 & 1 & 6 & 3 & 3 & 2 & 0 & 0 \end{tabular}\\ Permutations: $(12_a,12_b)$. \label{l211e7p2} \end{table}
{\small \begin{table}[H]\small
\caption{$L_2(13) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{9}{c}|*{7}{c}}
& \multicolumn{9}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & 13 & $14_a$ & $14_b$ & 1 & $7_a$ & $7_b$ & $12_a$ & 13 & $14_a$ & $14_b$ \\ \hline 1) & 1 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 0 & 2 & 2 & 0 & 0 & 2 & 0 \\ 2) & 1 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 2 & 0 & 0 & 0 & 2 & 0 & 2 \\ 3) & 2 & 2 & 2 & 1 & 1 & 1 & 3 & 0 & 2 & 0 & 2 & 2 & 0 & 0 & 2 & 0 \\ 4) & 2 & 2 & 2 & 1 & 1 & 1 & 3 & 0 & 2 & 2 & 0 & 0 & 0 & 2 & 0 & 2 \\ 5) & 3 & 0 & 0 & 4 & 0 & 1 & 0 & 2 & 3 & 4 & 0 & 0 & 2 & 0 & 0 & 2 \\ 6) & 3 & 0 & 5 & 0 & 0 & 0 & 3 & 1 & 3 & 0 & 0 & 4 & 0 & 0 & 2 & 0 \\ 7) & 4 & 2 & 2 & 4 & 0 & 1 & 1 & 0 & 2 & 4 & 0 & 0 & 2 & 0 & 0 & 2 \\ 8) & 21 & 0 & 14 & 0 & 0 & 0 & 0 & 1 & 0 & 14 & 0 & 6 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_a,12_b,12_c)$. \label{l213e7p0} \end{table} }
{\small \begin{table}[H]\small
\caption{$SL_2(13) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{8}{c}|*{6}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & $12_{a,b}$ & $12_c$ & 13 & $14_a$ & $14_b$ & $6_a$ & $12_d$ & $12_e$ & $12_f$ & $14_c$ & $14_{d,e}$ \\ \hline 1) & \textbf{P} & 0 & 0 & 2 & 1 & 1 & 1 & 1 & 4 & 1 & 1 & 1 & 1 & 1 & 0 \\ 2) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 4 & 0 & 0 & 0 & 0 & 2 & 1 \\ 3) & & 1 & 0 & 2 & 1 & 2 & 0 & 1 & 4 & 1 & 2 & 1 & 0 & 1 & 0 \\ 4) & & 1 & 2 & 4 & 1 & 1 & 2 & 0 & 2 & 1 & 1 & 1 & 1 & 1 & 0 \\ 5) & & 1 & 3 & 3 & 1 & 1 & 2 & 0 & 2 & 0 & 0 & 0 & 0 & 2 & 1 \\ 6) & & 2 & 2 & 4 & 1 & 2 & 1 & 0 & 2 & 1 & 2 & 1 & 0 & 1 & 0 \\ 7) & &14 & 0 & 1 & 0 & 0 & 0 & 7 & 1 & 7 & 0 & 0 & 0 & 1 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)(7_a,7_b)$, $(12_a,12_b,12_c)(12_d,12_e,12_f)$.\\ $12_a$ and $12_b$ occur with equal multiplicities, as do $14_d$ and $14_e$. \label{2l213e7p0} \end{table} }
\begin{table}[H]\small \caption{$L_2(13) < E_7$, $p = 7$}
\begin{tabular}{rc|*{6}{c}|*{6}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 0 & 0 & 5 & 2 & 3 & 0 & 2 & 2 & 0 & 2 & 0 \\ 2) & \textbf{N} & 3 & 0 & 0 & 5 & 2 & 3 & 4 & 0 & 0 & 2 & 0 & 2 \\ 3) & \textbf{P}, \textbf{N} & 5 & 2 & 2 & 6 & 0 & 2 & 0 & 2 & 2 & 0 & 2 & 0 \\ 4) & \textbf{N} & 5 & 2 & 2 & 6 & 0 & 2 & 4 & 0 & 0 & 2 & 0 & 2 \\ 5) & \textbf{N} & 6 & 0 & 5 & 3 & 1 & 3 & 0 & 0 & 4 & 0 & 2 & 0 \\ 6) & & 21 & 0 & 14 & 0 & 1 & 0 & 14 & 0 & 6 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$. \label{l213e7p7} \end{table}
\begin{table}[H]\small \caption{$SL_2(13) < E_7$, $p = 7$}
\begin{tabular}{rc|*{6}{c}|*{5}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & 12 & $14_a$ & $14_b$ & $6_a$ & $6_b$ & $14_c$ & $14_d$ & $14_e$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 0 & 2 & 4 & 4 & 1 & 4 & 3 & 1 & 0 & 0 \\ 2) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 4 & 4 & 1 & 0 & 0 & 2 & 1 & 1 \\ 3) & \textbf{P}, \textbf{N} & 3 & 2 & 4 & 5 & 2 & 0 & 4 & 3 & 1 & 0 & 0 \\ 4) & \textbf{P}, \textbf{N} & 3 & 3 & 3 & 5 & 2 & 0 & 0 & 0 & 2 & 1 & 1 \\ 5) & & 14 & 0 & 1 & 0 & 1 & 7 & 7 & 0 & 1 & 0 & 0 \\ \end{tabular}\\ Permutations: $(7_{a},7_{b})(6_{a},6_{b})$. \label{2l213e7p7} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_7$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & $13$ \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 2 & 2 & 1 & 1 & 1 & 5 & 0 & 4 & 4 & 0 & 0 & 0 & 0 \\ 2) & \textbf{N} & 4 & 2 & 2 & 1 & 1 & 1 & 5 & 4 & 0 & 0 & 0 & 0 & 0 & 4 \\ 3) & \textbf{N} & 6 & 1 & 6 & 0 & 0 & 0 & 6 & 0 & 2 & 6 & 0 & 0 & 0 & 0 \\ 4) & \textbf{N} & 6 & 2 & 2 & 0 & 4 & 1 & 3 & 6 & 0 & 0 & 0 & 2 & 0 & 2 \\ 5) & & 21 & 1 & 15 & 0 & 0 & 0 & 0 & 14 & 0 & 6 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(7_a,7_b)$, $(12_a,12_b,12_c)$. \label{l213e7p3} \end{table}
\begin{table}[H]\small \caption{$SL_2(13) < E_7$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{6}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & 13 & $6_a$ & $6_b$ & $12_d$ & $12_e$ & $12_f$ & 14 \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 4 & 6 & 1 & 1 & 1 & 2 & 1 & 0 & 1 & 1 & 1 & 1 \\ 2) & \textbf{P}, \textbf{N} & 1 & 5 & 5 & 1 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 4 \\ 3) & & 2 & 6 & 4 & 2 & 1 & 1 & 1 & 0 & 1 & 1 & 2 & 0 & 1 \\ 4) & \textbf{N} & 21 & 1 & 2 & 0 & 0 & 0 & 7 & 7 & 0 & 0 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(6_a,6_b)(7_a,7_b)$, $(12_a,12_b,12_c)(12_d,12_e,12_f)$. \label{2l213e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_7$, $p = 2$}
\begin{tabular}{rc|*{7}{c}|*{7}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 \\ \hline 1) & & 3 & 0 & 0 & 0 & 1 & 4 & 5 & 4 & 0 & 0 & 0 & 0 & 2 & 2 \\ 2) & \textbf{N} & 3 & 0 & 2 & 1 & 1 & 2 & 5 & 2 & 2 & 1 & 0 & 2 & 1 & 0 \\ 3) & \textbf{P}, \textbf{N} & 3 & 1 & 3 & 1 & 1 & 1 & 5 & 2 & 2 & 1 & 1 & 1 & 1 & 0 \\ 4) & \textbf{N} & 3 & 2 & 2 & 1 & 1 & 1 & 5 & 4 & 2 & 2 & 0 & 0 & 0 & 2 \\ 5) & \textbf{N} & 5 & 0 & 5 & 0 & 0 & 0 & 7 & 4 & 0 & 4 & 0 & 0 & 0 & 2 \\ 6) & \textbf{N} & 9 & 3 & 3 & 0 & 1 & 4 & 2 & 4 & 0 & 0 & 0 & 0 & 2 & 2 \\ 7) & \textbf{N} & 9 & 3 & 5 & 1 & 1 & 2 & 2 & 2 & 2 & 1 & 0 & 2 & 1 & 0 \\ 8) & \textbf{P}, \textbf{N} & 9 & 4 & 6 & 1 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 1 & 1 & 0 \\ 9) & \textbf{N} & 9 & 5 & 5 & 1 & 1 & 1 & 2 & 4 & 2 & 2 & 0 & 0 & 0 & 2 \\ 10)& \textbf{N} &11 & 3 & 8 & 0 & 0 & 0 & 4 & 4 & 0 & 4 & 0 & 0 & 0 & 2 \\ 11)& \textbf{N} &15 & 0 & 1 & 0 & 0 & 0 & 8 & 2 & 8 & 1 & 0 & 0 & 0 & 0 \\ 12)& \textbf{N} & 35& 0 & 14& 0 & 0 & 0 & 1 & 20 & 0 & 6 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)$, $(12_a,12_b,12_c)$. \label{l213e7p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(17) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{8}{c}|*{6}{c}}
& \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & 1 & $9_a$ & $9_b$ & $16_a$ & 17 & $18_a$ & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & $16_a$ & 17 & $18_a$ \\ \hline 1) & 1 & 0 & 3 & 1 & 1 & 2 & 1 & 1 & 2 & 0 & 2 & 0 & 0 & 2 \\ 2) & 3 & 0 & 3 & 1 & 3 & 0 & 1 & 1 & 2 & 2 & 4 & 0 & 0 & 0 \\ 3) & 3 & 0 & 3 & 1 & 3 & 0 & 1 & 1 & 4 & 0 & 2 & 0 & 2 & 0 \\ 4) & 6 & 0 & 3 & 4 & 0 & 0 & 1 & 1 & 6 & 0 & 2 & 2 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_7$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{5}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & $9_a$ & $9_b$ & 16 & $18_a$ & $18_b$ & $18_c$ & 1 & $9_a$ & $9_b$ & 16 & $18_a$ \\ \hline 1) & \textbf{N} & 2 & 0 & 3 & 2 & 2 & 1 & 1 & 2 & 0 & 2 & 0 & 2 \\ 2) & \textbf{N} & 6 & 0 & 3 & 4 & 0 & 1 & 1 & 2 & 2 & 4 & 0 & 0 \\ 3) & \textbf{N} & 6 & 0 & 3 & 4 & 0 & 1 & 1 & 6 & 0 & 2 & 2 & 0 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_7$, $p = 2$}
\begin{tabular}{rc|*{4}{c}|*{4}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $8_a$ & $8_b$ & $16_a$ & 1 & $8_a$ & $8_b$ & $16_a$ \\ \hline 1) & \textbf{N} & 13 & 2 & 5 & 4 & 8 & 0 & 2 & 2 \\ 2) & \textbf{N} & 13 & 5 & 8 & 1 & 8 & 2 & 4 & 0 \end{tabular}\\ Permutations: $(8_a,8_b)$. \label{l217e7p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(19) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{10}{c}|*{5}{c}}
& \multicolumn{10}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $18_c$ & $18_d$ & $20_b$ & $20_c$ & $20_d$ & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ \\ \hline 1) & 1 & 1 & 1 & 0 & 2 & 1 & 0 & 1 & 1 & 1 & 2 & 1 & 1 & 0 & 2 \\ 2) & 1 & 1 & 1 & 2 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 2 & 0 \end{tabular} \label{l219e7p0} \end{table}
\begin{table}[H]\small
\caption{$SL_2(19) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{6}{c}|*{7}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & $18_b$ & $18_c$ & 19 & $20_b$ & $20_c$ & $20_d$ & 10 & $10^{*}$ & $18_f$ & $18_g$ & $18_h$ & $18_i$ & $20_e$ \\ \hline 1)& \textbf{P} &1&2&1&1&1&1& 1&1&0&1&0&1&0 \\ 2)& \textbf{P} &1&2&1&1&1&1& 0&0&0&1&0&1&1 \\ \end{tabular}\\ Permutations: $(18_b,18_c)(18_f,18_g)(18_h,18_i)$. \label{2l219e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_7$, $p = 5$}
\begin{tabular}{r|*{8}{c}|*{3}{c}}
& \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & 9 & $9^{*}$ & $18_a$ & $20_a$ & $20_b$ & $20_c$ & $20_d$ & 1 & 9 & $9^{*}$ \\ \hline 1) & 1 & 3 & 3 & 1 & 0 & 1 & 1 & 1 & 2 & 3 & 3 \end{tabular} \label{l219e7p5} \end{table}
\begin{table}[H]\small \caption{$SL_2(19) < E_7$, $p = 5$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $18_a$ & $20_b$ & $20_c$ & $20_d$ & 10 & $10^{*}$ & $18_b$ & $20_e$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 4 & 1 & 1 & 1 & 0 & 0 & 2 & 1 \\ 2) & \textbf{P}, \textbf{N} & 1 & 4 & 1 & 1 & 1 & 1 & 1 & 2 & 0 \end{tabular} \label{2l219e7p5} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_7$, $p = 3$}
\begin{tabular}{rc|*{8}{c}|*{5}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $18_c$ & $18_d$ & $19$ & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ \\ \hline 1) & \textbf{N} & 4 & 1 & 1 & 0 & 2 & 0 & 1 & 3 & 2 & 1 & 1 & 0 & 2 \\ 2) & \textbf{N} & 4 & 1 & 1 & 2 & 0 & 1 & 0 & 3 & 2 & 1 & 1 & 2 & 0 \end{tabular} \label{l219e7p3} \end{table}
\begin{table}[H]\small \caption{$SL_2(19) < E_7$, $p = 3$}
\begin{tabular}{rc|*{4}{c}|*{7}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & & 1 & $18_b$ & $18_c$ & $19$ & 10 & $10^{*}$ & $18_e$ & $18_f$ & $18_g$ & $18_h$ & $18_i$ \\ \hline 1) & \textbf{P}, \textbf{N} & 3 & 1 & 2 & 4 & 1 & 1 & 0 & 0 & 1 & 0 & 1 \\ 2) & \textbf{P}, \textbf{N} & 3 & 2 & 1 & 4 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\ \end{tabular} \label{2l219e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_7$, $p = 2$}
\begin{tabular}{rc|*{8}{c}|*{6}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $20_b$ & $20_c$ & $20_d$ & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $20_a$ \\ \hline 1) & & 1 & 0 & 0 & 1 & 3 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\ 2) & & 1 & 0 & 0 & 1 & 3 & 1 & 1 & 1 & 2 & 2 & 2 & 1 & 0 & 0 \\ 3) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 2 & 0 & 1 \\ 4) & \textbf{N} & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 2 & 0 & 0 \end{tabular} \label{l219e7p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(25) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}|*{3}{c}}
& \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & 25 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & 1 & $26_a$ & $26_b$ \\ \hline 1)&3&0&3&0&0&1&1&4&2&0 \\ 2)&4&1&0&2&2&0&0&4&0&2 \\ 3)&4&1&2&0&2&0&0&4&2&0 \end{tabular} \label{l225e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_7$, $p = 13$}
\begin{tabular}{r|*{7}{c}|*{3}{c}}
& \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & 24 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & 1 & $26_a$ & $26_e$ \\ \hline 1) & 3 & 0 & 3 & 0 & 1 & 1 & 0 & 4 & 2 & 0 \\ 2) & 5 & 1 & 0 & 2 & 0 & 0 & 2 & 4 & 0 & 2 \\ 3) & 5 & 1 & 2 & 2 & 0 & 0 & 0 & 4 & 2 & 0 \end{tabular} \label{l225e7p13} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_7$, $p = 3$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $13_a$ & $13_b$ & 25 & 26 & 1 & $13_a$ & $13_b$ & 25 \\ \hline 1) & & 4 & 2 & 2 & 1 & 2 & 4 & 2 & 2 & 0 \\ 2) & \textbf{N} & 6 & 0 & 0 & 3 & 2 & 6 & 0 & 0 & 2 \end{tabular} \label{l225e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_7$, $p = 2$}
\begin{tabular}{rc|*{4}{c}|*{2}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & $12_a$ & $12_b$ & 26 & 1 & 26 \\ \hline 1) & & 3 & 0 & 0 & 5 & 4 & 2 \\ 2) & \textbf{N} & 9 & 3 & 3 & 2 & 4 & 2 \end{tabular} \label{l225e7p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(27) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}|*{4}{c}}
& \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & 1 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ & 1 & $26_c$ & $26_d$ & $26_e$ \\ \hline 1)&3&0&1&0&3&0&1&4&0&2&0\\ 2)&3&1&0&0&0&3&1&4&0&0&2\\ 3)&3&1&1&3&0&0&0&4&2&0&0 \end{tabular} \label{l227e7p0} \end{table}
\begin{table}[H]\small
\caption{$SL_2(27) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{6}{c}|*{4}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $26_{c}$ & $26_{d}$ & $26_{e}$ & 27 & $28_a$ & 14 & $14^{*}$ & $28_g$ & $28_h$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ 2) & & 1 & 1 & 1 & 1 & 2 & 0 & 2 & 2 & 0 & 0 \end{tabular}\\ Permutations: $(28_a,28_b,\ldots,28_f)(28_g,28_h,\ldots,28_l)$. \label{2l227e7p0} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_7$, $p = 13$}
\begin{tabular}{r|*{7}{c}|*{4}{c}}
& \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & 1 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ & 1 & $26_d$ & $26_e$ & $26_f$ \\ \hline 1)& 3& 0& 1& 1& 0& 3& 0& 4& 0& 2& 0 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)(26_d,26_e,26_f)$. \label{l227e7p13} \end{table}
\begin{table}[H]\small \caption{$SL_2(27) < E_7$, $p = 13$}
\begin{tabular}{rc|*{5}{c}|*{2}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & $26_c$ & $26_e$ & $26_f$ & 27 & 14 & $14^{*}$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 1 & 2 & 2 & 2 \end{tabular} \label{2l227e7p13} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_7$, $p = 7$}
\begin{tabular}{rc|*{4}{c}|*{3}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & & 1 & 13 & $13^{*}$ & $26_a$ & 1 & 13 & $13^{*}$ \\ \hline 1)& \textbf{N} & 3& 3& 3& 2& 4& 2& 2 \end{tabular} \label{l227e7p7} \end{table}
\begin{table}[H]\small \caption{$SL_2(27) < E_7$, $p = 7$}
\begin{tabular}{rc|*{3}{c}|*{4}{c}}
& & \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $26_a$ & $28_a$ & 14 & $14^{*}$ & $28_g$ & $28_h$ \\ \hline 1)& \textbf{P}, \textbf{N} & 1& 4& 1& 0& 0& 1& 1 \\ 2)& \textbf{P}, \textbf{N} & 3& 5& 0& 2& 2& 0& 0 \end{tabular}\\ Permutations: $(28_a,28_b,\ldots,28_f)(28_g,28_h,\ldots,28_l)$. \\ \label{2l227e7p7} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_7$, $p = 2$}
\begin{tabular}{rc|*{7}{c}|*{6}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{6}{c}{$V_{56}$} \\ & & 1 & $13$ & $13^{*}$ & $26_a$ & $26_b$ & $26_c$ & $28_a$ & 1 & $13$ & $13^{*}$ & $26_c$ & $28_d$ & $28_f$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 \\ 2) & & 3 & 0 & 0 & 0 & 1 & 4 & 0 & 4 & 0 & 0 & 2 & 0 & 0 \\ 3) & \textbf{N} & 3 & 2 & 2 & 1 & 1 & 1 & 0 & 4 & 2 & 2 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)$, $(28_a,28_b,28_c,28_d,28_e,28_f)$. \label{l227e7p2} \end{table}
\begin{table}[H]\small
\caption{$SL_2(29) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{7}{c}|*{4}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & $15_a$ & $15_b$ & $28_b$ & $28_c$ & $30_a$ & $30_b$ & $30_c$ & $28_i$ & $28_j$ & $28_k$ & $28_l$ \\ \hline 1)& \textbf{P} & 0&1&0&1&1&1&1&1&0&1&0\\ \end{tabular}\\ Permutations: $(15_a,15_b)$, $(28_a,28_b)(28_i,28_j)(28_k,28_l)$. \label{2l229e7p0} \end{table}
\begin{table}[H]\small \caption{$SL_2(29) < E_7$, $p = 7$}
\begin{tabular}{rc|*{4}{c}|*{4}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & $15_a$ & $15_b$ & $28_b$ & $28_c$ & $28_k$ & $28_l$ & $28_m$ & $28_n$\\ \hline 1)& \textbf{P} & $3$ & $4$ & 0& 1& 1& 0& 1& 0 \end{tabular}\\ Permutations: $(15_a,15_b)$, $(28_k,28_l)(28_m,28_n)$. \label{2l229e7p7} \end{table}
\begin{table}[H]\small \caption{$SL_2(29) < E_7$, $p = 5$}
\begin{tabular}{rc|*{6}{c}|*{1}{c}}
& & \multicolumn{6}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & $15_a$ & $15_b$ & $28_b$ & $30_a$ & $30_b$ & $30_c$ & $28_c$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 2 \end{tabular}\\ Permutations: $(15_a,15_b)$. \label{2l229e7p5} \end{table}
\begin{table}[H]\small \caption{$SL_2(29) < E_7$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{2}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & $15_a$ & $15_b$ & $28_b$ & $28_c$ & $30_a$ & $30_d$ & $30_e$ & $28_d$ & $28_e$\\ \hline 1)& \textbf{P} & 0& 1& 0& 1& 1& 1& 1& 2& 0 \end{tabular}\\ Permutations: $(15_a,15_b)$, $(28_b,28_c)(28_d,28_e)$. \label{2l229e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(29) < E_7$, $p = 2$}
\begin{tabular}{rc|*{8}{c}|*{4}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $14_a$ & $14_b$ & $28_b$ & $28_c$ & $30_a$ & $30_b$ & $30_c$ & $28_d$ & $28_e$ & $28_f$ & $28_g$ \\ \hline 1)& \textbf{P} & 1& 0& 1& 0& 1& 1& 1& 1& 1& 0& 1& 0 \end{tabular}\\ Permutations: $(14_a,14_b)$, $(28_b,28_c)(28_d,28_e)(28_f,28_g)$. \label{l229e7p2} \end{table}
\begin{table}[H]\small
\caption{$SL_2(37) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{5}{c}|*{5}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & $19_a$ & $19_b$ & $38_c$ & $38_d$ & $38_e$ & $18_a$ & $18_b$ & $38_i$ & $38_j$ & $38_k$ \\ \hline 1)& \textbf{P} & 0&1&1&1&1&0&1&0&0&1 \end{tabular}\\ Permutations: $(18_a,18_b)$, $(19_a,19_b)$, $(38_i,38_j,38_k)$. \label{2l237e7p0} \end{table}
\begin{table}[H]\small \caption{$SL_2(37) < E_7$, $p = 19$}
\begin{tabular}{rc|*{5}{c}|*{5}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & $19_a$ & $19_b$ & $38_a$ & $38_c$ & $38_g$ & $18_a$ & $18_b$ & $38_i$ & $38_l$ & $38_m$ \\ \hline 1)& \textbf{P} & 0& 1& 1& 1& 1& 0& 1& 0& 0& 1 \end{tabular}\\ Permutations: $(18_a,18_b)$, $(19_a,19_b)$, $(38_i,38_l,38_m)$. \label{2l237e7p19} \end{table}
\begin{table}[H]\small \caption{$SL_2(37) < E_7$, $p = 3$}
\begin{tabular}{rc|*{2}{c}|*{2}{c}}
& & \multicolumn{2}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & $19_a$ & $19_b$ & $18_b$ & 38 \\ \hline 1) & \textbf{P} & $r$ & $7 - r$ & 1 & 1 \end{tabular}\\ where $r = 1,\ldots,7$.\\ Permutations: $(18_a,18_b)$. \label{2l237e7p3} \end{table}
\begin{table}[H]\small \caption{$L_2(37) < E_7$, $p = 2$}
\begin{tabular}{rc|*{5}{c}|*{4}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & $18_a$ & $38_b$ & $38_c$ & $38_d$ & 1 & $18_a$ & $18_b$ & $38_a$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 \\ 2) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 2 & 0 & 3 & 0 \\ 3) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 2 & 0 \end{tabular}\\ Permutations: $(18_a,18_b)$. \label{l237e7p2} \end{table}
\subsection{Cross-characteristic Groups $\ncong L_2(q)$} \
\begin{table}[H]\small
\caption{$L_3(3) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{8}{c}|*{5}{c}}
& \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & 1 & 13 & $16_a$ & $16_b$ & $26_a$ & $26_b$ & 27 & 39 & 1 & 12 & $16_a$ & $26_a$ & 27 \\ \hline 1) & 1 & 0 & 0 & 0 & 1 & 1 & 2 & 0 & 2 & 0 & 0 & 0 & 2 \\ 2) & 2 & 1 & 0 & 2 & 0 & 0 & 2 & 0 & 0 & 2 & 1 & 0 & 0 \\ 3) & 2 & 1 & 1 & 1 & 0 & 0 & 2 & 0 & 2 & 0 & 0 & 0 & 2 \\ 4) & 3 & 0 & 0 & 0 & 3 & 1 & 0 & 0 & 4 & 0 & 0 & 2 & 0 \\ 5) & 3 & 0 & 0 & 2 & 0 & 0 & 1 & 1 & 0 & 2 & 1 & 0 & 0 \\ 6) & 3 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 2 \\ 7) & 4 & 1 & 1 & 1 & 2 & 0 & 0 & 0 & 4 & 0 & 0 & 2 & 0 \end{tabular}\\ $16_a^{*}$, $16_b^{*}$ and $26_b^{*}$ occur with the same multiplicities as their duals.\\ Permutations: $(16_a,16_b)(16_a^{*},16_b^{*})$. \label{l33e7p0} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_7$, $p = 13$}
\begin{tabular}{rc|*{8}{c}|*{4}{c}}
& & \multicolumn{8}{c|}{$V_{133}$} & \multicolumn{4}{c}{$V_{56}$} \\ & & 1 & 11 & 13 & 16 & $26_a$ & $26_b$ & $(26_b)^{*}$ & 39 & 1 & 11 & 16 & $26_a$ \\ \hline 1)& \textbf{N} & 1& 2& 0& 2& 1& 1& 1& 0& 2& 2& 2& 0\\ 2)& \textbf{N} & 2& 2& 1& 6& 0& 0& 0& 0& 2& 2& 2& 0\\ 3)& & 3& 0& 0& 0& 3& 1& 1& 0& 4& 0& 0& 2\\ 4)& & 3& 1& 0& 5& 0& 0& 0& 1& 2& 2& 2& 0\\ 5)& & 4& 0& 1& 4& 2& 0& 0& 0& 4& 0& 0& 2 \end{tabular} \label{l33e7p13} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_7$, $p = 2$}
\begin{tabular}{rc|*{7}{c}|*{5}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & 12 & $16_a$ & $(16_a)^{*}$ & $16_b$ & $(16_b)^{*}$ & 26 & 1 & 12 & $16_b$ & $(16_b)^{*}$ & 26 \\ \hline 1) & \textbf{N} & 3 & 0 & 0 & 0 & 0 & 0 & 5 & 4 & 0 & 0 & 0 & 2 \\ 2) & \textbf{N} & 5 & 1 & 2 & 2 & 0 & 0 & 2 & 0 & 2 & 1 & 1 & 0 \\ 3) & \textbf{N} & 5 & 1 & 1 & 1 & 1 & 1 & 2 & 4 & 0 & 0 & 0 & 2 \end{tabular}\\ Permutations: $(16_a,16_b)(16_a^{*},16_b^{*})$. \label{l33e7p2} \end{table}
\begin{table}[H]\small
\caption{$2 \cdot L_3(4) < E_7$, $p = 0$ or $p \nmid |H|$} Note that although $L_3(4)$ has Schur multiplier $C_3 \times C_4 \times C_4$, all double-covers of $L_3(4)$ are isomorphic.\\
\begin{tabular}{rc|*{4}{c}|*{2}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & $35_b$ & $35_c$ & $63_a$ & $63_b$ & $28_a$ & $28_b$\\ \hline 1)& \textbf{P} & 1& 1& 0& 1& 0& 2 \\ 2)& \textbf{P} & 1& 1& 1& 0& 2& 0 \end{tabular} \label{2l34e7p0} \end{table}
\begin{table}[H]\small \caption{$2 \cdot L_3(4) < E_7$, $p = 7$}
\begin{tabular}{rc|*{4}{c}|*{2}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & $35_a$ & $35_b$ & $63_a$ & $63_b$ & $28_a$ & $28_b$ \\ \hline 1)& \textbf{P} & 1& 1& 0& 1& 0& 2 \\ 2)& \textbf{P} & 1& 1& 1& 0& 2& 0 \end{tabular} \label{2l34e7p7} \end{table}
\begin{table}[H]\small \caption{$2 \cdot L_3(4) < E_7$, $p = 5$}
\begin{tabular}{rc|*{3}{c}|*{1}{c}}
& & \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & $35_a$ & $35_c$ & 63 & 28 \\ \hline 1)& \textbf{P} & 1& 1& 1& 2 \end{tabular} \label{2l34e7p5} \end{table}
\begin{table}[H]\small \caption{$2 \cdot L_3(4) < E_7$, $p = 3$}
\begin{tabular}{rc|*{7}{c}|*{5}{c}}
& & \multicolumn{7}{c|}{$V_{133}$} & \multicolumn{5}{c}{$V_{56}$} \\ & & 1 & $15_a$ & $15_b$ & $15_c$ & 19 & $63_a$ & $63_b$ & 6 & $10$ & $10^{*}$ & $22_a$ & $22_b$ \\ \hline 1)& \textbf{P}, \textbf{N} & 2& 1& 0& 1& 2& 0& 1& 2& 0& 0& 0& 2 \\ 2)& \textbf{P}, \textbf{N} & 2& 1& 0& 1& 2& 1& 0& 2& 0& 0& 2& 0 \\ 3)& & 9& 0& 7& 0& 1& 0& 0& 6& 1& 1& 0& 0 \end{tabular}\\ $15_{b} = \bigwedge^{2}(6)$. \label{2l34e7p3} \end{table}
\begin{table}[H]\small \caption{$L_4(3) < E_7$, $p = 2$}
\begin{tabular}{r|*{3}{c}|*{3}{c}}
& \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & $26_a$ & $26_b$ & 1 & $26_a$ & $26_b$ \\ \hline
1)& 3& 1& 4& 4& 0& 2 \\
2)& 3& 4& 1& 4& 2& 0 \end{tabular} \label{l43e7p2} \end{table}
\begin{table}[H]\small
\caption{$U_3(3) < E_7$, $p = 0$ or $p \nmid |H|$} Here, the duals of $7_b$, $21_b$, 28 and 32 also occur, with the same multiplicity. \\
\begin{tabular}{rc|*{10}{c}} & & \multicolumn{10}{c}{$V_{133}$} \\ & & 1 & 6 & $7_a$ & $7_b$ & 14 & $21_a$ & $21_b$ & 27 & $28$ & $32$ \\ \hline
1)& \textbf{P} & 0& 0& 0& 0& 0& 0& 1& 1& 0& 1 \\
2)& \textbf{P} & 0& 0& 1& 0& 1& 0& 1& 1& 0& 1 \\
3)& & 1& 0& 0& 0& 1& 0& 2& 0& 0& 1 \\
4)& & 1& 0& 0& 0& 1& 0& 2& 0& 0& 1 \\
5)& & 1& 0& 2& 1& 0& 0& 1& 1& 1& 0 \\
6)& & 2& 0& 1& 1& 0& 0& 2& 0& 1& 0 \\
7)& & 2& 0& 1& 1& 0& 0& 2& 0& 1& 0 \\
8)& & 3& 0& 5& 0& 1& 0& 3& 0& 0& 0 \\
9)& & 6& 6& 0& 2& 3& 0& 0& 1& 0& 0 \\ 10)& &14& 0& 0& 0& 7& 0& 0& 1& 0& 0 \\ 11)& &21& 0& 14& 0& 1& 0& 0& 0& 0& 0 \end{tabular}
\begin{tabular}{r|*{9}{c}} & \multicolumn{9}{c}{$V_{56}$} \\ & 1 & 6 & $7_a$ & $7_b$ & 14 & $21_a$ & $21_b$ & 27 & $28$ \\ \hline 1) & 0& 0& 0& 0& 0& 0& 0& 0& 1 \\ 2) & 0& 0& 0& 1& 0& 0& 1& 0& 0 \\ 3)& 0& 0& 2& 0& 0& 2& 0& 0& 0 \\ 4)& 2& 0& 0& 0& 0& 0& 0& 2& 0 \\ 5)& 0& 0& 0& 1& 0& 0& 1& 0& 0 \\ 6)& 0& 0& 2& 0& 0& 2& 0& 0& 0 \\ 7)& 2& 0& 0& 0& 0& 0& 0& 2& 0 \\ 8)& 0& 0& 4& 0& 2& 0& 0& 0& 0 \\ 9)& 4& 4& 0& 0& 2& 0& 0& 0& 0 \\ 10)& 0& 7& 0& 1& 0& 0& 0& 0& 0 \\ 11)&14 & 0& 6& 0& 0& 0& 0& 0& 0 \end{tabular} \label{u33e7p0} \end{table}
\begin{table}[H]\small \caption{$U_3(3) < E_7$, $p = 7$} Here, the duals of $7_b$, $21_b$ and $28$ also occur, with the same multiplicity.
\begin{tabular}{rc|*{9}{c}} & & \multicolumn{9}{c}{$V_{133}$} \\ & & 1 & 6 & $7_a$ & $7_b$ &14 & $21_a$ & $21_b$ & 26 & $28$ \\ \hline
1)& \textbf{P}, \textbf{N} & 1& 2& 0& 0& 0& 0& 1& 3& 0 \\
2)& \textbf{P}, \textbf{N} & 1& 2& 1& 0& 1& 1& 0& 3& 0 \\
3)& & 2& 0& 2& 1& 0& 1& 0& 1& 1 \\
4)& \textbf{P}, \textbf{N} & 3& 2& 0& 0& 1& 0& 0& 4& 0 \\
5)& \textbf{N} & 3& 2& 0& 0& 1& 0& 0& 4& 0 \\
6)& \textbf{N} & 4& 0& 1& 1& 0& 0& 0& 2& 1 \\
7)& \textbf{N} & 4& 0& 1& 1& 0& 0& 0& 2& 1 \\
8)& \textbf{N} & 6& 0& 5& 0& 1& 0& 0& 3& 0 \\
9)& & 6& 6& 0& 2& 3& 1& 0& 0& 0 \\ 10)& & 14& 0& 0& 0& 7& 1& 0& 0& 0 \\ 11)& & 21& 0&14& 0& 1& 0& 0& 0& 0 \\ \end{tabular}\\
\begin{tabular}{r|*{9}{c}} & \multicolumn{9}{c}{$V_{56}$} \\ & 1 & 6 & $7_a$ & $7_b$ &14 & $21_a$ & $21_b$ & 26 & $28$ \\ \hline 1) & 0& 0& 0& 0& 0& 0& 0& 0& 1 \\ 2) & 0& 0& 0& 1& 0& 0& 1& 0& 0 \\ 3) & 0& 0& 0& 1& 0& 0& 1& 0& 0 \\ 4) & 0& 0& 2& 0& 0& 2& 0& 0& 0 \\ 5) & 4& 0& 0& 0& 0& 0& 0& 2& 0 \\ 6) & 0& 0& 2& 0& 0& 2& 0& 0& 0 \\ 7) & 4& 0& 0& 0& 0& 0& 0& 2& 0 \\ 8) & 0& 0& 4& 0& 2& 0& 0& 0& 0 \\ 9) & 4& 4& 0& 0& 2& 0& 0& 0& 0 \\ 10) & 0& 7& 0& 1& 0& 0& 0& 0& 0 \\ 11) &14& 0& 6& 0& 0& 0& 0& 0& 0 \end{tabular} \label{u33e7p7} \end{table}
\begin{table}[H]\small \caption{$U_3(8) < E_7$, $p \neq 2$}
\begin{tabular}{rc|*{3}{c}|*{1}{c}}
& & \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & $133_a$ & $133_b$ & $133_c$ & 56 \\ \hline 1) & \textbf{P} & 1 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(133_a,133_b,133_c)$. \label{u38e7p3} \label{u38e7p7} \label{u38e7p19} \label{u38e7p0} \end{table}
\begin{table}[H]\small
\caption{$U_4(2) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{9}{c}|*{7}{c}}
& \multicolumn{9}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ & 20 & 24 & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ \\ \hline 1)& 4& 0& 0& 4& 2& 2& 3& 1& 0& 2& 0& 0& 4& 0& 0& 2\\ 2)& 8& 0& 0& 0& 0& 0& 7& 1& 0& 0& 0& 0& 6& 1& 1& 0\\ 3)& 9& 4& 4& 0& 3& 3& 0& 0& 1& 6& 3& 3& 0& 1& 1& 0 \end{tabular} \label{u42e7p0} \end{table}
\begin{table}[H]\small \caption{$U_4(2) < E_7$, $p = 5$}
\begin{tabular}{r|*{9}{c}|*{7}{c}}
& \multicolumn{9}{c|}{$V_{133}$} & \multicolumn{7}{c}{$V_{56}$} \\ & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ & 20 & 23 & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ \\ \hline 1)& 4& 0& 0& 4& 2& 2& 3& 1& 0& 2& 0& 0& 4& 0& 0& 2 \\ 2)& 8& 0& 0& 0& 0& 0& 7& 1& 0& 0& 0& 0& 6& 1& 1& 0 \\ 3)&10& 4& 4& 0& 3& 3& 0& 0& 1& 6& 3& 3& 0& 1& 1& 0 \end{tabular} \label{u42e7p5} \end{table}
\begin{table}[H]\small
\caption{$Sp_6(2) < E_7$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{5}{c}|*{2}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & $21_a$ & 27 & $35_b$ & 7 & $21_a$ \\ \hline
1)& 1& 2& 1& 1& 2& 2& 2 \end{tabular} \label{sp62e7p0} \end{table}
\begin{table}[H]\small \caption{$Sp_6(2) < E_7$, $p = 7$}
\begin{tabular}{r|*{5}{c}|*{2}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & $21_a$ & 26 & $35_b$ & 7 & $21_a$ \\ \hline 1)& 2& 2& 1& 1& 2& 2& 2 \end{tabular} \label{sp62e7p7} \end{table}
\begin{table}[H]\small \caption{$Sp_6(2) < E_7$, $p = 5$}
\begin{tabular}{r|*{5}{c}|*{2}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & $21_a$ & 27 & $35_b$ & 7 & $21_a$ \\ \hline 1)& 1& 2& 1& 1& 2& 2& 2 \end{tabular} \label{sp62e7p5} \end{table}
\begin{table}[H]\small \caption{$Sp_6(2) < E_7$, $p = 3$}
\begin{tabular}{r|*{5}{c}|*{2}{c}}
& \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 7 & 21 & 27 & 35 & 7 & 21 \\ \hline 1)& 1& 2& 1& 1& 2& 2& 2 \end{tabular} \label{sp62e7p3} \end{table}
\begin{table}[H]\small \caption{$\Omega_8^{+}(2) < E_7$, $p \neq 2$}
\begin{tabular}{rc|*{4}{c}|*{1}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{1}{c}{$V_{56}$} \\ & & 28 & $35_a$ & $35_b$ & $35_c$ & $28$ \\ \hline 1)& \textbf{P} & 1& 1& 1& 1& 2 \end{tabular} \label{omega8plus2e7p3} \label{omega8plus2e7p5} \label{omega8plus2e7p7} \label{omega8plus2e7p0} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_7$, $p \neq 2, 3$}
\begin{tabular}{r|*{3}{c}|*{2}{c}}
& \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & 1 & 26 & 52 & 1 & 26 \\ \hline 1) & 3 & 3 & 1 & 4 & 2 \end{tabular} \label{3d42e7p7} \label{3d42e7p13} \label{3d42e7p0} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_7$, $p = 3$}
\begin{tabular}{rc|*{3}{c}|*{2}{c}}
& & \multicolumn{3}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & 1 & 25 & 52 & 1 & 25 \\ \hline 1) & \textbf{N} & 6 & 3 & 1 & 6 & 2 \end{tabular} \label{3d42e7p3} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_7$, $p \neq 2, 3, 5$}
\begin{tabular}{r|*{4}{c}|*{3}{c}}
& \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & $27$ & $27^{*}$ & 78 & 1 & 27 & $27^{*}$ \\ \hline 1) & 1 & 1 & 1 & 1 & 2 & 1 & 1 \end{tabular} \label{2f42e7p13} \label{2f42e7p0} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_7$, $p = 5$}
\begin{tabular}{rc|*{4}{c}|*{3}{c}}
& & \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & & 1 & $27$ & $27^{*}$ & 78 & 1 & 27 & $27^{*}$ \\ \hline 1) & \textbf{N} & 1 & 1 & 1 & 1 & 2 & 1 & 1 \end{tabular} \label{2f42e7p5} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_7$, $p = 3$}
\begin{tabular}{r|*{4}{c}|*{3}{c}}
& \multicolumn{4}{c|}{$V_{133}$} & \multicolumn{3}{c}{$V_{56}$} \\ & 1 & 27 & $27^{*}$ & 77 & 1 & 27 & $27^{*}$ \\ \hline 1) & 2 & 1 & 1 & 1 & 2 & 1 & 1 \end{tabular} \label{2f42e7p3} \end{table}
\begin{table}[H]\small \caption{$^{2}B_2(8) < E_7$, $p = 5$}
\begin{tabular}{rc|*{5}{c}|*{2}{c}}
& & \multicolumn{5}{c|}{$V_{133}$} & \multicolumn{2}{c}{$V_{56}$} \\ & & $14$ & $14^{*}$ & $35_a$ & $35_b$ & $35_c$ & $14$ & $14^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{tabular} \end{table}
\section{$E_8$} \label{sec:E8tabs}
\subsection{Alternating Groups} \
Each group in the following table has a unique feasible character on $L(E_8(K))$: \begin{table}[H]\small \caption{Alt$_{n} < E_8$, $n \ge 10$, unique feasible character}
\begin{tabular}{c|c|l|l}
$n$ & $p$ & \multicolumn{1}{c|}{Factors} & \multicolumn{1}{c}{Notes}\\ \hline 17 & 2 & $1^2$/$118$/$128_a$ & Permutation: $(128_a,128_b)$ \\ 16 & 2 & $1^2$/$14^2$/$64$/$64^{*}$/$90$ & \textbf{P}, \textbf{N} \\ 15 & 2 & $1^2$/$14^2$/$64$/$64^{*}$/$90$ \\ 14 & 2 & $1^8$/$12^4$/$64_a$/$64_b^{2}$ & $64_a$ is a section of $\bigwedge^{2} 12$. \textbf{N} \\ 13 & 2 & $1^8$/$12^4$/$32_a^2$/$32_b^2$/$64$ \\ 12 & 2 & $1^{16}$/$10^6$/$16^4$/$(16^{*})^4$/$44$ & \textbf{N} \\ 11 & 11 & $36/44/84^{2}$ & \textbf{P} \\ & 2 & $1^{16}$/$10^6$/$16^4$/$(16^{*})^4$/$44$ \\ 10 & $p \neq 2,3,5$ & $9/35/36/84^{2}$ & \textbf{P} \\ & 3 & $1/9/34/36/84^{2}$ & \textbf{P} \\ \end{tabular} \label{a17e8p2} \label{a16e8p2} \label{a15e8p2} \label{a14e8p2} \label{a13e8p2} \label{a12e8p2} \label{a11e8p11} \label{a11e8p2} \label{a10e8p0} \label{a10e8p7} \label{a10e8p3} \end{table}
\begin{table}[H]\small \caption{Alt$_{10} < E_8$, $p = 5$}
\begin{tabular}{rc|*{7}{c}} & & 1 & 8 & 28 & $35_a$ & $35_b$ & $35_c$ & 56 \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 2 & 3 & 1 & 0 & 0 & 2 \\ 2) & & 3 & 0 & 5 & 1 & 1 & 1 & 0 \end{tabular} \label{a10e8p5} \end{table}
\begin{table}[H]\small \caption{Alt$_{10} < E_8$, $p = 2$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 8 & 16 & 26 & 48 & 64$_a$ & 64$_b$ & 198 \\ \hline 1) & & 2 & 0 & 0 & 0 & 1 & 0 & 0 & 1 \\ 2) & \textbf{N} & 4 & 2 & 0 & 2 & 1 & 1 & 1 & 0 \\ 3) & \textbf{N} & 4 & 2 & 1 & 2 & 2 & 0 & 1 & 0 \\ 4) & \textbf{P}, \textbf{N} & 8 & 5 & 0 & 4 & 2 & 0 & 0 & 0 \\ 5) & \textbf{N} & 16 & 1 & 1 & 8 & 0 & 0 & 0 & 0 \\ 6) & \textbf{N} & 30 & 8 & 8 & 1 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(64_a,64_b)$. \label{a10e8p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_9 < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}} & 1 & 8 & 27 & 28 & 35$_a$ & 35$_b$ & 56 \\ \hline 1) & 1 & 3 & 1 & 3 & 0 & 0 & 2 \\ 2) & 3 & 1 & 1 & 5 & 1 & 1 & 0 \end{tabular} \label{a9e8p0} \end{table}
\begin{table}[H]\small \caption{Alt$_9 < E_8$, $p = 7$}
\begin{tabular}{r|*{7}{c}} & 1 & 8 & 19 & 28 & 35$_a$ & 35$_b$ & 56 \\ \hline 1) & 1 & 4 & 1 & 3 & 0 & 0 & 2 \\ 2) & 3 & 2 & 1 & 5 & 1 & 1 & 0 \end{tabular} \label{a9e8p7} \end{table}
\begin{table}[H]\small \caption{Alt$_9 < E_8$, $p = 5$}
\begin{tabular}{r|*{9}{c}} & 1 & 8 & 21 & 27 & 28 & 35$_a$ & 35$_b$ & 56 & 134 \\ \hline 1) & 1 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\ 2) & 1 & 3 & 0 & 1 & 3 & 0 & 0 & 2 & 0 \\ 3) & 3 & 1 & 0 & 1 & 5 & 1 & 1 & 0 & 0 \end{tabular} \label{a9e8p5} \end{table}
\begin{table}[H]\small \caption{Alt$_9 < E_8$, $p = 3$}
\begin{tabular}{rc|ccccc} & & 1 & 7 & 21 & 27 & 35 \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 6 & 5 & 1 & 2 \end{tabular} \label{a9e8p3} \end{table}
\begin{table}[H]\small \caption{Alt$_9 < E_8$, $p = 2$}
\begin{tabular}{rc|*{9}{c}} & & 1 & 8$_a$ & $8_b$ & $8_c$ & 20 & 20$^{*}$ & 26 & 48 & 78 \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 2 & 1 & 1 & 1 & 1 & 0 & 2 & 1 \\ 2) & \textbf{N} & 4 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 2 \\ 3) & \textbf{P}, \textbf{N} & 4 & 0 & 0 & 1 & 2 & 2 & 0 & 0 & 2 \\ 4) & \textbf{P}, \textbf{N} & 4 & 2 & 2 & 3 & 1 & 1 & 2 & 2 & 0 \\ 5) & \textbf{P}, \textbf{N} & 4 & 2 & 3 & 3 & 2 & 2 & 2 & 1 & 0 \\ 6) & \textbf{P}, \textbf{N} & 6 & 0 & 0 & 3 & 1 & 1 & 2 & 1 & 1 \\ 7) & \textbf{P}, \textbf{N} & 6 & 0 & 2 & 2 & 2 & 2 & 2 & 0 & 1 \\ 8) & \textbf{P}, \textbf{N} & 8 & 0 & 2 & 4 & 1 & 1 & 4 & 1 & 0 \\ 9) & \textbf{P}, \textbf{N} & 8 & 0 & 2 & 5 & 2 & 2 & 4 & 0 & 0 \\ 10) & \textbf{N} & 8 & 5 & 0 & 0 & 0 & 0 & 4 & 2 & 0 \\ 11) & \textbf{N} & 16 & 1 & 1 & 1 & 0 & 0 & 8 & 0 & 0 \\ 12) & & 30 & 8 & 8 & 8 & 0 & 0 & 1 & 0 & 0 \end{tabular}\\ Permutations: $(8_b,8_c)$. \label{a9e8p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_8 < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|ccccccccc} & 1 & 7 & 14 & 20 & 21$_a$ & 28 & 35 & 64 & 70 \\ \hline 1) & 3 & 1 & 0 & 1 & 0 & 3 & 0 & 1 & 1 \\ 2) & 3 & 2 & 1 & 0 & 0 & 4 & 1 & 0 & 1 \\ 3) & 4 & 7 & 0 & 1 & 5 & 0 & 2 & 0 & 0 \end{tabular}\\ $21_a = \bigwedge^{2}7$. \label{a8e8p0} \end{table}
\begin{table}[H]\small \caption{Alt$_8 < E_8$, $p = 7$}
\begin{tabular}{rc|*{9}{c}} & & 1 & 7 & 14 & 19 & 21$_a$ & 28 & 35 & 45 & 70 \\ \hline 1) & & 3 & 2 & 1 & 0 & 0 & 4 & 1 & 0 & 1 \\ 2) & \textbf{N} & 4 & 1 & 0 & 2 & 0 & 3 & 0 & 1 & 1 \\ 3) & & 5 & 7 & 0 & 1 & 5 & 0 & 2 & 0 & 0 \end{tabular}\\ $21_a = \bigwedge^{2}7$. \label{a8e8p7} \end{table}
\begin{table}[H]\small \caption{Alt$_8 < E_8$, $p = 5$}
\begin{tabular}{r|ccccccccc} & 1 & 7 & 13 & 20 & 21$_a$ & 21$_b$ & 35 & 43 & 70 \\ \hline 1) & 3 & 4 & 0 & 1 & 0 & 4 & 0 & 1 & 1 \\ 2) & 3 & 4 & 0 & 1 & 1 & 3 & 0 & 1 & 1 \\ 3) & 4 & 6 & 1 & 0 & 0 & 4 & 1 & 0 & 1 \\ 4) & 4 & 7 & 0 & 1 & 5 & 0 & 2 & 0 & 0 \end{tabular}\\ $21_a = \bigwedge^{2}7$. \label{a8e8p5} \end{table}
\begin{table}[H]\small \caption{Alt$_8 < E_8$, $p = 3$}
\begin{tabular}{rc|cccccc} & & 1 & 7 & 13 & 21 & 28 & 35 \\ \hline 1) & \textbf{N} & 4 & 3 & 1 & 0 & 5 & 2 \\ 2) & \textbf{N} & 4 & 8 & 1 & 5 & 0 & 2 \end{tabular} \label{a8e8p3} \end{table}
{\small \begin{table}[H]\small
\caption{Alt$_7 < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{9}{c}} & & 1 & 6 & 10 & 10$^{*}$ & 14$_a$ & 14$_b$ & 15 & 21 & 35 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 & 2 & 0 & 4 & 0 & 4 \\ 2) & & 1 & 0 & 2 & 2 & 1 & 2 & 4 & 0 & 3 \\ 3) & & 2 & 0 & 3 & 3 & 0 & 4 & 4 & 0 & 2 \\ 4) & & 2 & 1 & 0 & 0 & 5 & 4 & 2 & 4 & 0 \\ 5) & & 2 & 1 & 0 & 0 & 2 & 7 & 2 & 4 & 0 \\ 6) & & 2 & 2 & 0 & 0 & 5 & 4 & 3 & 3 & 0 \\ 7) & & 2 & 2 & 0 & 0 & 2 & 7 & 3 & 3 & 0 \\ 8) & & 4 & 1 & 0 & 0 & 5 & 4 & 0 & 2 & 2 \\ 9) & & 4 & 1 & 0 & 0 & 2 & 7 & 0 & 2 & 2 \\ 10) & & 4 & 2 & 0 & 0 & 5 & 4 & 1 & 1 & 2 \\ 11) & & 4 & 2 & 0 & 0 & 2 & 7 & 1 & 1 & 2 \\ 12) & & 5 & 1 & 1 & 1 & 4 & 6 & 0 & 2 & 1 \\ 13) & & 5 & 2 & 1 & 1 & 4 & 6 & 1 & 1 & 1 \\ 14) & & 11 & 13 & 2 & 2 & 1 & 0 & 7 & 0 & 0 \end{tabular}\\ $14_a$ is a section of $6 \otimes 6$. \label{a7e8p0} \end{table} }
\begin{table}[H]\small \caption{Alt$_7 < E_8$, $p = 7$}
\begin{tabular}{rc|ccccccc} & & 1 & 5 & 10 & 14$_a$ & 14$_b$ & 21 & 35 \\ \hline 1) & \textbf{P} & 0 & 0 & 8 & 0 & 4 & 2 & 2\\ 2) & \textbf{P} & 0 & 2 & 7 & 1 & 2 & 1 & 3 \\ 3) & \textbf{P} & 0 & 4 & 6 & 2 & 0 & 0 & 4\\ 4) & \textbf{P}, \textbf{N} & 1 & 2 & 9 & 0 & 4 & 1 & 2 \\ 5) & \textbf{P}, \textbf{N} & 1 & 4 & 8 & 1 & 2 & 0 & 3\\ 6) & \textbf{P}, \textbf{N} & 2 & 3 & 0 & 3 & 5 & 4 & 1 \\ 7) & \textbf{P}, \textbf{N} & 2 & 3 & 0 & 6 & 2 & 4 & 1\\ 8) & \textbf{P}, \textbf{N} & 2 & 4 & 10 & 0 & 4 & 0 & 2 \\ 9) & \textbf{N} & 3 & 3 & 2 & 2 & 7 & 4 & 0\\ 10) & \textbf{N} & 3 & 3 & 2 & 5 & 4 & 4 & 0 \\ 11) & \textbf{P}, \textbf{N} & 3 & 5 & 1 & 3 & 5 & 3 & 1\\ 12) & \textbf{P}, \textbf{N} & 3 & 5 & 1 & 6 & 2 & 3 & 1 \\ 13) & \textbf{P}, \textbf{N} & 4 & 5 & 3 & 2 & 7 & 3 & 0\\ 14) & \textbf{P}, \textbf{N} & 4 & 5 & 3 & 5 & 4 & 3 & 0 \\ 15) & & 5 & 1 & 0 & 2 & 7 & 2 & 2\\ 16) & & 5 & 1 & 0 & 5 & 4 & 2 & 2 \\ 17) & & 6 & 1 & 2 & 4 & 6 & 2 & 1\\ 18) & \textbf{N} & 6 & 3 & 1 & 2 & 7 & 1 & 2 \\ 19) & \textbf{N} & 6 & 3 & 1 & 5 & 4 & 1 & 2\\ 20) & \textbf{N} & 7 & 3 & 3 & 4 & 6 & 1 & 1 \\ 21) & \textbf{N} & 15 & 2 & 2 & 1 & 0 & 9 & 0\\ 22) & \textbf{N} & 24 & 20 & 11 & 1 & 0 & 0 & 0 \end{tabular}\\ $14_a$ is a section of $5 \otimes 5$. \label{a7e8p7} \end{table}
\begin{table}[H]\small \caption{Alt$_7 < E_8$, $p = 5$}
\begin{tabular}{rc|cccccccc} & & 1 & 6 & 8 & 10 & 10$^{*}$ & 13 & 15 & 35\\ \hline 1) & \textbf{P} & 0 & 0 & 4 & 1 & 1 & 2 & 2 & 4 \\ 2) & \textbf{P} & 0 & 1 & 3 & 1 & 1 & 1 & 3 & 4 \\ 3) & \textbf{P} & 0 & 2 & 2 & 1 & 1 & 0 & 4 & 4 \\ 4) & \textbf{P} & 0 & 4 & 12 & 4 & 4 & 1 & 0 & 1 \\ 5) & & 2 & 0 & 2 & 1 & 1 & 0 & 0 & 6 \\ 6) & \textbf{N} & 3 & 0 & 2 & 2 & 2 & 3 & 3 & 3 \\ 7) & \textbf{N} & 3 & 1 & 1 & 2 & 2 & 2 & 4 & 3 \\ 8) & \textbf{N} & 3 & 3 & 11 & 5 & 5 & 3 & 0 & 0 \\ 9) & \textbf{P}, \textbf{N} & 3 & 7 & 14 & 0 & 0 & 7 & 0 & 0 \\ 10) & & 5 & 0 & 0 & 2 & 2 & 1 & 1 & 5 \\ 11) & \textbf{N} & 6 & 0 & 0 & 3 & 3 & 4 & 4 & 2 \\ 12) & \textbf{P}, \textbf{N} & 6 & 6 & 9 & 0 & 0 & 8 & 2 & 0 \\ 13) & \textbf{P}, \textbf{N} & 6 & 7 & 8 & 0 & 0 & 7 & 3 & 0 \\ 14) & \textbf{N} & 8 & 6 & 7 & 0 & 0 & 6 & 0 & 2 \\ 15) & \textbf{N} & 8 & 7 & 6 & 0 & 0 & 5 & 1 & 2 \\ 16) & \textbf{P}, \textbf{N} & 9 & 3 & 6 & 0 & 0 & 11 & 2 & 0 \\ 17) & \textbf{P}, \textbf{N} & 9 & 4 & 5 & 0 & 0 & 10 & 3 & 0 \\ 18) & \textbf{N} & 11 & 3 & 4 & 0 & 0 & 9 & 0 & 2 \\ 19) & \textbf{N} & 11 & 4 & 3 & 0 & 0 & 8 & 1 & 2 \\ 20) & \textbf{N} & 11 & 5 & 6 & 1 & 1 & 8 & 0 & 1 \\ 21) & \textbf{N} & 11 & 6 & 5 & 1 & 1 & 7 & 1 & 1 \\ 22) & & 11 & 14 & 1 & 2 & 2 & 0 & 7 & 0 \\ 23) & & 28 & 0 & 25 & 1 & 1 & 0 & 0 & 0 \end{tabular} \label{a7e8p5} \end{table}
\begin{table}[H]\small \caption{Alt$_7 < E_8$, $p = 3$}
\begin{tabular}{rc|cccccc||rc|cccccc} & & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 & & & 1 & 6 & 10 & 10$^{*}$ & 13 & 15 \\ \hline 1) & \textbf{P}, \textbf{N} & 10 & 0 & 5 & 5 & 6 & 4 & 2) & \textbf{P}, \textbf{N} & 11 & 5 & 0 & 0 & 9 & 6 \\ 3) & \textbf{N} & 12 & 13 & 2 & 2 & 1 & 7 & 4) & \textbf{P}, \textbf{N} & 17 & 3 & 2 & 2 & 11 & 2 \end{tabular} \label{a7e8p3} \end{table}
\begin{table}[H]\small \caption{Alt$_7 < E_8$, $p = 2$}
\begin{tabular}{rc|cccccc||rc|cccccc} & & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 & & & 1 & 4 & 4$^{*}$ & 6 & 14 & 20 \\ \hline 1) & \textbf{P}, \textbf{N} & 8 & 1 & 1 & 0 & 8 & 6 & 2) & \textbf{P}, \textbf{N} & 8 & 1 & 1 & 1 & 9 & 5 \\ 3) & \textbf{P}, \textbf{N} & 8 & 1 & 1 & 2 & 10 & 4 & 4) & \textbf{P}, \textbf{N} & 8 & 4 & 4 & 4 & 6 & 5 \\ 5) & \textbf{P}, \textbf{N} & 8 & 4 & 4 & 5 & 7 & 4 & 6) & \textbf{P}, \textbf{N} & 8 & 4 & 4 & 6 & 8 & 3 \\ 7) & \textbf{N} & 8 & 7 & 7 & 8 & 4 & 4 & 8) & \textbf{N} & 8 & 7 & 7 & 9 & 5 & 3 \\ 9) & \textbf{N} & 8 & 7 & 7 & 10 & 6 & 2 & 10) & \textbf{N} & 18 & 2 & 2 & 9 & 0 & 8 \\ 11) & \textbf{N} & 18 & 2 & 2 & 10 & 1 & 7 & 12) & \textbf{N} & 18 & 2 & 2 & 17 & 8 & 0 \\ 13) & & 46 & 16 & 16 & 9 & 0 & 1 & 14) & & 46 & 16 & 16 & 10 & 1 & 0 \end{tabular} \label{a7e8p2} \end{table}
\begin{table}[H]\small
\caption{Alt$_6 < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccccc||rc|ccccccc} & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 & & & 1 & 5$_a$ & 5$_b$ & 8$_a$ & 8$_b$ & 9 & 10 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 6 & 6 & 8 & 8 & 2) & \textbf{P} & 0 & 0 & 6 & 4 & 4 & 6 & 10 \\ 3) & \textbf{P} & 0 & 2 & 2 & 7 & 7 & 4 & 8 & 4) & \textbf{P} & 0 & 3 & 3 & 4 & 4 & 6 & 10 \\ 5) & & 1 & 1 & 4 & 6 & 6 & 4 & 9 & 6) & & 1 & 2 & 2 & 5 & 10 & 3 & 8 \\ 7) & & 1 & 2 & 5 & 3 & 3 & 6 & 11& 8) & & 1 & 7 & 7 & 6 & 6 & 9 & 0 \\ 9) & & 2 & 0 & 0 & 7 & 7 & 6 & 8 & 10) & & 2 & 0 & 6 & 5 & 5 & 4 & 10 \\ 11)& & 2 & 1 & 1 & 4 & 4 & 8 & 10 & 12) & & 2 & 1 & 4 & 4 & 9 & 3 & 9 \\ 13)& & 2 & 3 & 3 & 5 & 5 & 4 & 10 & 14) & & 2 & 4 & 4 & 2 & 2 & 6 & 12 \\ 15)& & 2 & 6 & 9 & 5 & 5 & 9 & 1 & 16) & & 3 & 0 & 0 & 5 & 10 & 5 & 8 \\ 17)& & 3 & 0 & 3 & 3 & 3 & 8 & 11 & 18) & & 3 & 0 & 6 & 3 & 8 & 3 & 10 \\ 19)& & 3 & 2 & 5 & 4 & 4 & 4 & 11 & 20) & & 3 & 3 & 3 & 3 & 8 & 3 & 10 \\ 21)& & 3 & 5 & 5 & 6 & 6 & 11 & 0 & 22) & & 3 & 5 & 11 & 4 & 4 & 9 & 2 \\ 23)& & 3 & 7 & 7 & 7 & 7 & 7 & 0 & 24) & & 3 & 8 & 8 & 4 & 4 & 9 & 2 \\ 25)& & 4 & 1 & 1 & 5 & 5 & 6 & 10 & 26) & & 4 & 2 & 2 & 2 & 2 & 8 & 12 \\ 27)& & 4 & 2 & 5 & 2 & 7 & 3 & 11 & 28) & & 4 & 4 & 4 & 3 & 3 & 4 & 12 \\ 29)& & 4 & 4 & 7 & 5 & 5 & 11 & 1 & 30) & & 4 & 6 & 9 & 6 & 6 & 7 & 1 \\ 31)& & 4 & 7 & 7 & 5 & 10 & 6 & 0 & 32) & & 4 & 7 & 10 & 3 & 3 & 9 & 3 \\ 33)& & 5 & 0 & 3 & 4 & 4 & 6 & 11 & 34) & & 5 & 1 & 1 & 3 & 8 & 5 & 10 \\ 35)& & 5 & 3 & 9 & 4 & 4 & 11 & 2 & 36) & & 5 & 4 & 4 & 1 & 6 & 3 & 12 \\ 37)& & 5 & 5 & 5 & 7 & 7 & 9 & 0 & 38) & & 5 & 5 & 11 & 5 & 5 & 7 & 2 \\ 39)& & 5 & 6 & 6 & 4 & 4 & 11 & 2 & 40) & & 5 & 6 & 9 & 4 & 9 & 6 & 1 \\ 41)& & 5 & 8 & 8 & 5 & 5 & 7 & 2 & 42) & & 5 & 9 & 9 & 2 & 2 & 9 & 4 \\ 43)& & 6 & 0 & 3 & 2 & 7 & 5 & 11 & 44) & & 6 & 2 & 2 & 3 & 3 & 6 & 12 \\ 45)& & 6 & 4 & 7 & 6 & 6 & 9 & 1 & 46) & & 6 & 5 & 5 & 5 & 10 & 8 & 0 \\ 47)& & 6 & 5 & 8 & 3 & 3 & 11 & 3 & 48) & & 6 & 5 & 11 & 3 & 8 & 6 & 2 \\ 49)& & 6 & 7 & 10 & 4 & 4 & 7 & 3 & 50) & & 6 & 8 & 8 
& 3 & 8 & 6 & 2 \\ 51)& & 7 & 2 & 2 & 1 & 6 & 5 & 12 & 52) & & 7 & 3 & 9 & 5 & 5 & 9 & 2 \\ 53)& & 7 & 4 & 7 & 4 & 9 & 8 & 1 & 54) & & 7 & 6 & 6 & 5 & 5 & 9 & 2 \\ 55)& & 7 & 7 & 7 & 2 & 2 & 11 & 4 & 56) & & 7 & 7 & 10 & 2 & 7 & 6 & 3 \\ 57)& & 7 & 9 & 9 & 3 & 3 & 7 & 4 & 58) & & 8 & 0 & 0 & 0 & 20 & 0 & 8 \\ 59)& & 8 & 3 & 9 & 3 & 8 & 8 & 2 & 60) & & 8 & 5 & 8 & 4 & 4 & 9 & 3 \\ 61)& & 8 & 6 & 6 & 3 & 8 & 8 & 2 & 62) & & 8 & 9 & 9 & 1 & 6 & 6 & 4 \\ 63)& & 9 & 5 & 8 & 2 & 7 & 8 & 3 & 64) & & 9 & 7 & 7 & 0 & 20 & 1 & 0 \\ 65)& & 9 & 7 & 7 & 3 & 3 & 9 & 4 & 66) & & 10 & 7 & 7 & 1 & 6 & 8 & 4 \\ 67)& & 11 & 0 & 0 & 4 & 4 & 17 & 2 & 68) & & 11 & 5 & 5 & 0 & 20 & 3 & 0 \\ 69)& & 13 & 0 & 0 & 5 & 5 & 15 & 2 & 70) & & 13 & 1 & 1 & 2 & 2 & 17 & 4 \\ 71)& & 13 & 5 & 5 & 11 & 11 & 1 & 0 & 72) & & 14 & 0 & 0 & 3 & 8 & 14 & 2 \\ 73)& & 14 & 4 & 7 & 10 & 10 & 1 & 1 & 74) & & 15 & 1 & 1 & 3 & 3 & 15 & 4 \\ 75)& & 15 & 3 & 9 & 9 & 9 & 1 & 2 & 76) & & 15 & 6 & 6 & 9 & 9 & 1 & 2 \\ 77)& & 16 & 1 & 1 & 1 & 6 & 14 & 4 & 78) & & 16 & 5 & 8 & 8 & 8 & 1 & 3 \\ 79)& & 17 & 7 & 7 & 7 & 7 & 1 & 4 & 80) & & 21 & 0 & 0 & 9 & 9 & 7 & 2 \\ 81)& & 23 & 1 & 1 & 7 & 7 & 7 & 4 & 82) & & 24 & 0 & 21 & 0 & 0 & 1 & 11 \\ 83)& & 28 & 0 & 0 & 0 & 25 & 0 & 2 \end{tabular} Permutations: $(5_a,5_b)$, $(8_a,8_b)$. \label{a6e8p0} \end{table}
\begin{table}[H]\small \caption{Alt$_6 < E_8$, $p = 5$}
\begin{tabular}{rc|ccccc||rc|ccccc} & & 1 & $5_a$ & $5_b$ & 8 & 10 & & & 1 & $5_a$ & $5_b$ & 8 & 10 \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 2 & 2 & 18 & 8 & 2) & \textbf{P}, \textbf{N} & 5 & 1 & 4 & 16 & 9 \\ 3) & \textbf{P}, \textbf{N} & 6 & 0 & 6 & 14 & 10 & 4) & \textbf{P}, \textbf{N} & 6 & 3 & 3 & 14 & 10 \\ 5) & \textbf{P}, \textbf{N} & 7 & 2 & 5 & 12 & 11 & 6) & \textbf{P}, \textbf{N} & 8 & 0 & 0 & 20 & 8 \\ 7) & \textbf{P}, \textbf{N} & 8 & 4 & 4 & 10 & 12 & 8) & \textbf{P}, \textbf{N} & 10 & 1 & 1 & 16 & 10 \\ 9) & \textbf{P}, \textbf{N} & 10 & 7 & 7 & 21 & 0 & 10) & \textbf{P}, \textbf{N} & 11 & 0 & 3 & 14 & 11 \\ 11) & \textbf{P}, \textbf{N} & 11 & 6 & 9 & 19 & 1 & 12) & \textbf{N} & 12 & 2 & 2 & 12 & 12 \\ 13) & \textbf{P}, \textbf{N} & 12 & 5 & 11 & 17 & 2 & 14) & \textbf{P}, \textbf{N} & 12 & 8 & 8 & 17 & 2 \\ 15) & \textbf{P}, \textbf{N} & 13 & 7 & 10 & 15 & 3 & 16) & \textbf{P}, \textbf{N} & 14 & 5 & 5 & 23 & 0 \\ 17) & \textbf{N} & 14 & 9 & 9 & 13 & 4 & 18) & \textbf{P}, \textbf{N} & 15 & 4 & 7 & 21 & 1 \\ 19) & \textbf{P}, \textbf{N} & 16 & 3 & 9 & 19 & 2 & 20) & \textbf{P}, \textbf{N} & 16 & 6 & 6 & 19 & 2 \\ 21) & \textbf{N} & 17 & 5 & 8 & 17 & 3 & 22) & \textbf{N} & 18 & 7 & 7 & 15 & 4 \\ 23) & & 20 & 0 & 24 & 1 & 10 & 24) & & 25 & 0 & 21 & 1 & 11 \\ 25) & \textbf{N} & 28 & 0 & 0 & 25 & 2 & 26) & \textbf{N} & 30 & 1 & 1 & 21 & 4 \end{tabular} Permutations: $(5_a,5_b)$. \label{a6e8p5} \end{table}
\begin{table}[H]\small
\caption{Alt$_5 < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|ccccc||r|ccccc} & & 1 & $3_a$ & $3_b$ & 4 & 5 & & 1 & $3_a$ & $3_b$ & 4 & 5 \\ \hline 1) & \textbf{P} & 0 & 14 & 14 & 16 & 20 & 2) & 2 & 15 & 15 & 14 & 20 \\ 3) & & 3 & 13 & 18 & 13 & 20 & 4) & 3 & 14 & 14 & 19 & 17 \\ 5) & & 5 & 15 & 15 & 17 & 17 & 6) & 6 & 13 & 18 & 16 & 17 \\ 7) & & 6 & 14 & 14 & 22 & 14 & 8) & 8 & 6 & 6 & 16 & 28 \\ 9) & & 8 & 8 & 28 & 8 & 20 & 10) & 8 & 15 & 15 & 20 & 14 \\ 11) & & 9 & 13 & 18 & 19 & 14 & 12) & 10 & 7 & 7 & 14 & 28 \\ 13) & & 10 & 19 & 19 & 6 & 20 & 14) & 11 & 5 & 10 & 13 & 28 \\ 15) & & 11 & 6 & 6 & 19 & 25 & 16) & 11 & 8 & 28 & 11 & 17 \\ 17) & & 13 & 7 & 7 & 17 & 25 & 18) & 13 & 19 & 19 & 9 & 17 \\ 19) & & 14 & 5 & 10 & 16 & 25 & 20) & 14 & 6 & 6 & 22 & 22 \\ 21) & & 14 & 8 & 28 & 14 & 14 & 22) & 16 & 0 & 20 & 8 & 28 \\ 23) & & 16 & 7 & 7 & 20 & 22 & 24) & 16 & 19 & 19 & 12 & 14 \\ 25) & & 17 & 5 & 10 & 19 & 22 & 26) & 18 & 11 & 11 & 6 & 28 \\ 27) & & 19 & 0 & 20 & 11 & 25 & 28) & 20 & 10 & 35 & 2 & 17 \\ 29) & & 21 & 11 & 11 & 9 & 25 & 30) & 22 & 0 & 20 & 14 & 22 \\ 31) & & 23 & 10 & 35 & 5 & 14 & 32) & 24 & 11 & 11 & 12 & 22 \\ 33) & & 28 & 0 & 50 & 0 & 14 & 34) & 28 & 2 & 27 & 2 & 25 \\ 35) & & 31 & 2 & 27 & 5 & 22 & 36) & 35 & 6 & 6 & 43 & 1 \\ 37) & & 37 & 7 & 7 & 41 & 1 & 38) & 38 & 5 & 10 & 40 & 1 \\ 39) & & 43 & 0 & 20 & 35 & 1 & 40) & 45 & 11 & 11 & 33 & 1 \\ 41) & & 52 & 2 & 27 & 26 & 1 & 42) & 78 & 0 & 55 & 0 & 1 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5e8p0} \end{table}
\begin{table}[H]\small \caption{Alt$_5 < E_8$, $p = 3$}
\begin{tabular}{rc|*{4}{c}||rc|*{4}{c}} & & 1 & $3_a$ & $3_b$ & 4 & & & 1 & $3_a$ & $3_b$ & 4 \\ \hline 1) & \textbf{P}, \textbf{N} & 20 & 14 & 14 & 36 & 2) & \textbf{P}, \textbf{N} & 22 & 15 & 15 & 34 \\ 3) & \textbf{P}, \textbf{N} & 23 & 18 & 13 & 33 & 4) & \textbf{N} & 28 & 28 & 8 & 28 \\ 5) & \textbf{N} & 30 & 19 & 19 & 26 & 6) & \textbf{P}, \textbf{N} & 36 & 6 & 6 & 44 \\ 7) & \textbf{N} & 37 & 35 & 10 & 19 & 8) & \textbf{P}, \textbf{N} & 38 & 7 & 7 & 42 \\ 9) & \textbf{P}, \textbf{N} & 39 & 10 & 5 & 41 & 10) & \textbf{N} & 42 & 50 & 0 & 14 \\ 11) & \textbf{N} & 44 & 20 & 0 & 36 & 12) & \textbf{N} & 46 & 11 & 11 & 34 \\ 13) & \textbf{N} & 53 & 27 & 2 & 27 & 14) & & 79 & 55 & 0 & 1 \end{tabular}\\ Permutations: $(3_a,3_b)$. \label{a5e8p3} \end{table}
\subsection{Sporadic Groups} \
\begin{table}[H]\small
\caption{$M_{11} < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}} & 1 & $10_a$ & 11 & 16 & $16^{*}$ & 45 & 55 \\ \hline 1) & 9 & 0 & 6 & 4 & 4 & 1 & 0 \\ 2) & 10 & 0 & 5 & 4 & 4 & 0 & 1 \\ 3) & 15 & 6 & 0 & 4 & 4 & 1 & 0 \\ \end{tabular} \label{m11e8p0} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_8$, $p = 11$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 9 & 10 & $10^{*}$ & 11 & 16 & 44 & 55 \\ \hline 1) & \textbf{P} & 0 & 2 & 3 & 3 & 0 & 1 & 1 & 2 \\ 2) & \textbf{P}, \textbf{N} & 2 & 3 & 0 & 0 & 1 & 2 & 4 & 0 \\ 3) & \textbf{P}, \textbf{N} & 3 & 4 & 0 & 0 & 2 & 0 & 3 & 1 \\ 4) & \textbf{P}, \textbf{N} & 4 & 6 & 4 & 4 & 0 & 0 & 0 & 2 \\ 5) & & 9 & 1 & 1 & 1 & 6 & 9 & 0 & 0 \\ 6) & & 10 & 0 & 0 & 0 & 5 & 8 & 0 & 1 \\ 7) & \textbf{N} & 21 & 7 & 1 & 1 & 0 & 9 & 0 & 0 \end{tabular} \label{m11e8p11} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_8$, $p = 5$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $10_a$ & 11 & 16 & $16^{*}$ & 45 & 55 \\ \hline 1) & & 3 & 1 & 0 & 0 & 0 & 4 & 1 \\ 2) & \textbf{N} & 4 & 0 & 0 & 2 & 2 & 4 & 0 \\ 3) & \textbf{N} & 7 & 1 & 7 & 2 & 2 & 2 & 0 \\ 4) & \textbf{N} & 8 & 1 & 6 & 2 & 2 & 1 & 1 \\ 5) & \textbf{N} & 9 & 0 & 6 & 4 & 4 & 1 & 0 \\ 6) & \textbf{N} & 9 & 1 & 5 & 2 & 2 & 0 & 2 \\ 7) & \textbf{N} & 10 & 0 & 5 & 4 & 4 & 0 & 1 \\ 8) & \textbf{N} & 14 & 7 & 0 & 2 & 2 & 1 & 1 \\ 9) & \textbf{N} & 15 & 6 & 0 & 4 & 4 & 1 & 0 \end{tabular} \label{m11e8p5} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_8$, $p = 3$}
\begin{tabular}{rc|cccccccc} & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & $(10_b)^{*}$ & 24 & 45 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 2 & 0 & 0 & 2 & 4 \\ 2) & \textbf{P}, \textbf{N} & 1 & 2 & 2 & 0 & 1 & 1 & 3 & 3 \\ 3) & \textbf{P}, \textbf{N} & 2 & 2 & 2 & 0 & 2 & 2 & 4 & 2 \\ 4) & \textbf{P}, \textbf{N} & 4 & 5 & 5 & 5 & 0 & 0 & 6 & 0 \\ 5) & \textbf{P}, \textbf{N} & 4 & 7 & 7 & 3 & 0 & 0 & 6 & 0 \\ 6) & \textbf{N} & 23 & 4 & 4 & 6 & 4 & 4 & 0 & 1 \\ 7) & \textbf{N} & 23 & 10 & 10 & 0 & 4 & 4 & 0 & 1 \\ 8) & \textbf{N} & 24 & 10 & 10 & 0 & 5 & 5 & 1 & 0 \end{tabular} \label{m11e8p3} \end{table}
\begin{table}[H]\small \caption{$M_{11} < E_8$, $p = 2$}
\begin{tabular}{rc|ccccc} & & 1 & 10 & 16 & $16^{*}$ & 44 \\ \hline 1) & \textbf{N} & 14 & 3 & 5 & 5 & 1 \\ 2) & \textbf{N} & 16 & 6 & 4 & 4 & 1 \\ 3) & \textbf{N} & 18 & 9 & 3 & 3 & 1 \end{tabular} \label{m11e8p2} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_8$, $p = 5$}
\begin{tabular}{r|*{5}{c}} & 1 & $11_a$ & 16 & $16^{*}$ & 78 \\ \hline 1) & 8 & 6 & 3 & 3 & 1 \end{tabular}\\ Permutations: $(11_a,11_b)$. \label{m12e8p5} \end{table}
\begin{table}[H]\small \caption{$M_{12} < E_8$, $p = 2$}
\begin{tabular}{rc|*{5}{c}} & & 1 & $10$ & 16 & $16^{*}$ & 44 \\ \hline 1) & \textbf{N} & 16 & 6 & 4 & 4 & 1 \end{tabular} \label{m12e8p2} \end{table}
\begin{table}[H]\small \caption{$J_{1} < E_8$, $p = 11$}
\begin{tabular}{r|*{6}{c}} & 1 & 7 & 14 & 27 & 64 & $77_a$ \\ \hline 1) & 1 & 0 & 3 & 0 & 2 & 1 \\ 2) & 6 & 13 & 5 & 3 & 0 & 0 \\ 3) & 8 & 0 & 1 & 6 & 1 & 0 \\ 4) & 52 & 26 & 1 & 0 & 0 & 0 \end{tabular}\\ $77_a$ is a section of $\bigwedge^{2}14$. \label{j1e8p11} \end{table}
\begin{table}[H]\small \caption{$J_{2} < E_8$, $p = 2$}
\begin{tabular}{rc|*{9}{c}} & & 1 & $6_a$ & $6_b$ & $14_a$ & $14_b$ & 36 & $64_a$ & $64_b$ & 84 \\ \hline 1) & & 2 & 1 & 0 & 1 & 3 & 1 & 1 & 0 & 1 \\ 2) & \textbf{N} & 4 & 2 & 2 & 0 & 4 & 1 & 0 & 2 & 0 \\ 3) & \textbf{N} & 8 & 3 & 4 & 1 & 2 & 2 & 0 & 0 & 1 \\ 4) & \textbf{N} & 14 & 6 & 6 & 0 & 7 & 0 & 0 & 1 & 0 \\ 5) & \textbf{N} & 16 & 8 & 8 & 1 & 1 & 3 & 0 & 0 & 0 \\ 6) & \textbf{N} & 22 & 3 & 16 & 0 & 8 & 0 & 0 & 0 & 0 \\ 7) & \textbf{N} & 78 & 0 & 26 & 0 & 1 & 0 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(6_a,6_b)(14_a,14_b)(64_a,64_b)$. \label{j2e8p2} \end{table}
\begin{table}[H]\small \caption{$J_{3} < E_8$, $p = 2$}
\begin{tabular}{rc|ccc} & & 80 & 84 & $84^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 \end{tabular} \label{j3e8p2} \end{table}
\refstepcounter{table} \begin{table}[H]\small Table \thetable: $Th < E_8$, $p = 3$. Irreducible on $V_{248}$. \textbf{P} \label{the8p3} \end{table}
\subsection{Cross-characteristic Groups $L_2(q)$ $(q \neq 4, 5, 9)$} \
\begin{table}[H]\small
\caption{$L_2(7) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|cccccc||r|cccccc} & & 1 & 3 & $3^{*}$ & 6 & 7 & 8 & & 1 & 3 & $3^{*}$ & 6 & 7 & 8 \\ \hline 1) & \textbf{P} & 0 & 5 & 5 & 6 & 10 & 14 & 2) & 2 & 7 & 7 & 6 & 8 & 14 \\ 3) & & 3 & 0 & 0 & 14 & 7 & 14 & 4) & 3 & 5 & 5 & 6 & 13 & 11 \\ 5) & & 5 & 2 & 2 & 14 & 5 & 14 & 6) & 5 & 7 & 7 & 6 & 11 & 11 \\ 7) & & 6 & 0 & 0 & 14 & 10 & 11 & 8) & 6 & 5 & 5 & 6 & 16 & 8 \\ 9) & & 8 & 2 & 2 & 14 & 8 & 11 & 10) & 8 & 7 & 7 & 6 & 14 & 8 \\ 11) & & 9 & 0 & 0 & 14 & 13 & 8 & 12) & 11 & 2 & 2 & 14 & 11 & 8 \\ 13) & & 14 & 8 & 8 & 14 & 2 & 11 & 14) & 17 & 8 & 8 & 14 & 5 & 8 \\ 15) & & 22 & 21 & 21 & 6 & 0 & 8 & 16) & 28 & 1 & 1 & 0 & 2 & 25 \\ 17) & & 31 & 1 & 1 & 0 & 5 & 22 & 18) & 52 & 1 & 1 & 0 & 26 & 1 \\ 19) & & 78 & 27 & 27 & 0 & 0 & 1 \end{tabular} \label{l27e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(7) < E_8$, $p = 3$}
\begin{tabular}{rc|ccccc||rc|ccccc} & & 1 & 3 & $3^{*}$ & 6 & 7 & & & 1 & 3 & $3^{*}$ & 6 & 7 \\ \hline 1) & \textbf{P}, \textbf{N} & 14 & 5 & 5 & 6 & 24 & 2) & \textbf{P}, \textbf{N} & 16 & 7 & 7 & 6 & 22 \\ 3) & \textbf{P}, \textbf{N} & 17 & 0 & 0 & 14 & 21 & 4) & \textbf{N} & 19 & 2 & 2 & 14 & 19 \\ 5) & \textbf{N} & 25 & 8 & 8 & 14 & 13 & 6) & \textbf{N} & 30 & 21 & 21 & 6 & 8 \\ 7) & \textbf{N} & 53 & 1 & 1 & 0 & 27 & 8) & & 79 & 27 & 27 & 0 & 1 \end{tabular} \label{l27e8p3} \end{table}
\begin{longtable}{rc|*{9}{c}}
\caption{$L_2(8) < E_8$, $p = 0$ or $p \nmid |H|$} \\ & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{P} & 0 & 3 & 5 & 5 & 5 & 4 & 3 & 3 & 4 \\ 2) & \textbf{P} & 0 & 6 & 3 & 5 & 4 & 4 & 3 & 3 & 4 \\ 3) & & 1 & 3 & 5 & 5 & 5 & 5 & 3 & 3 & 3 \\ 4) & & 1 & 4 & 5 & 5 & 5 & 3 & 3 & 3 & 4 \\ 5) & & 1 & 6 & 3 & 5 & 4 & 5 & 3 & 3 & 3 \\ 6) & & 2 & 2 & 4 & 8 & 6 & 2 & 3 & 3 & 4 \\ 7) & & 2 & 2 & 5 & 6 & 7 & 2 & 3 & 3 & 4 \\ 8) & & 2 & 3 & 5 & 5 & 5 & 6 & 1 & 2 & 5 \\ 9) & & 2 & 4 & 5 & 5 & 5 & 4 & 3 & 3 & 3 \\ 10) & & 2 & 5 & 2 & 8 & 5 & 2 & 3 & 3 & 4 \\ 11) & & 2 & 6 & 3 & 5 & 4 & 6 & 1 & 2 & 5 \\ 12) & & 3 & 2 & 4 & 8 & 6 & 3 & 3 & 3 & 3 \\ 13) & & 3 & 2 & 5 & 6 & 7 & 3 & 3 & 3 & 3 \\ 14) & & 3 & 4 & 5 & 5 & 5 & 5 & 1 & 2 & 5 \\ 15) & & 3 & 5 & 2 & 8 & 5 & 3 & 3 & 3 & 3 \\ 16) & & 3 & 6 & 5 & 5 & 5 & 1 & 3 & 3 & 4 \\ 17) & & 4 & 2 & 4 & 8 & 6 & 4 & 1 & 2 & 5 \\ 18) & & 4 & 2 & 5 & 6 & 7 & 4 & 1 & 2 & 5 \\ 19) & & 4 & 5 & 2 & 8 & 5 & 4 & 1 & 2 & 5 \\ 20) & & 4 & 6 & 5 & 5 & 5 & 2 & 3 & 3 & 3 \\ 21) & & 4 & 7 & 2 & 2 & 11 & 0 & 3 & 3 & 4 \\ 22) & & 5 & 0 & 1 & 1 & 1 & 3 & 7 & 7 & 8 \\ 23) & & 5 & 6 & 5 & 5 & 5 & 3 & 1 & 2 & 5 \\ 24) & & 5 & 7 & 2 & 2 & 11 & 1 & 3 & 3 & 3 \\ 25) & & 6 & 0 & 1 & 1 & 1 & 4 & 7 & 7 & 7 \\ 26) & & 6 & 5 & 0 & 13 & 5 & 0 & 3 & 3 & 3 \\ 27) & & 6 & 5 & 4 & 5 & 9 & 0 & 3 & 3 & 3 \\ 28) & & 6 & 7 & 2 & 2 & 11 & 2 & 1 & 2 & 5 \\ 29) & & 7 & 0 & 1 & 1 & 1 & 5 & 2 & 12 & 6 \\ 30) & & 7 & 0 & 1 & 1 & 1 & 5 & 5 & 6 & 9 \\ 31) & & 7 & 2 & 1 & 1 & 1 & 1 & 7 & 7 & 8 \\ 32) & & 7 & 5 & 0 & 13 & 5 & 1 & 1 & 2 & 5 \\ 33) & & 7 & 5 & 4 & 5 & 9 & 1 & 1 & 2 & 5 \\ 34) & & 8 & 2 & 1 & 1 & 1 & 2 & 7 & 7 & 7 \\ 35) & & 8 & 12 & 0 & 11 & 1 & 0 & 1 & 2 & 5 \\ 36) & & 9 & 2 & 1 & 1 & 1 & 3 & 2 & 12 & 6 \\ 37) & & 9 & 2 & 1 & 1 & 1 & 3 & 5 & 6 & 9 \\ 38) & & 10 & 1 & 0 & 1 & 5 & 0 & 7 & 7 & 7 \\ 39) & & 11 & 1 & 0 & 1 & 5 & 1 & 2 & 12 & 6 \\ 40) & & 11 & 1 & 0 & 1 & 5 & 1 & 5 & 6 & 9 \\ 41) & & 12 & 0 & 1 & 1 & 1 & 10 & 0 & 8 & 7 
\\ 42) & & 12 & 0 & 1 & 1 & 1 & 10 & 0 & 14 & 1 \\ 43) & & 14 & 2 & 1 & 1 & 1 & 8 & 0 & 8 & 7 \\ 44) & & 14 & 2 & 1 & 1 & 1 & 8 & 0 & 14 & 1 \\ 45) & & 16 & 1 & 0 & 1 & 5 & 6 & 0 & 8 & 7 \\ 46) & & 16 & 1 & 0 & 1 & 5 & 6 & 0 & 14 & 1 \\ 47) & & 21 & 9 & 1 & 1 & 1 & 1 & 0 & 8 & 7 \\ 48) & & 21 & 9 & 1 & 1 & 1 & 1 & 0 & 14 & 1 \\ 49) & & 27 & 0 & 1 & 1 & 1 & 25 & 0 & 0 & 0 \\ 50) & & 29 & 2 & 1 & 1 & 1 & 23 & 0 & 0 & 0 \\ 51) & & 31 & 1 & 0 & 1 & 5 & 21 & 0 & 0 & 0 \\ 52) & & 36 & 9 & 1 & 1 & 1 & 16 & 0 & 0 & 0 \\ 53) & & 52 & 1 & 0 & 26 & 1 & 0 & 0 & 0 & 0 \\ \multicolumn{11}{c}{Permutations: $(7_b,7_c,7_d)$, $(9_a,9_b,9_c)$.} \label{l28e8p0} \end{longtable}
\begin{table}[H]\small \caption{$L_2(8) < E_8$, $p = 7$}
\begin{tabular}{rc|*{6}{c}||rc|*{6}{c}} & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 & & & 1 & $7_a$ & $7_b$ & $7_c$ & $7_d$ & 8 \\ \hline 1) & \textbf{P}, \textbf{N} & 10 & 3 & 5 & 5 & 5 & 14 & 2) & \textbf{P}, \textbf{N} & 10 & 6 & 3 & 4 & 5 & 14 \\ 3) & \textbf{P}, \textbf{N} & 11 & 4 & 5 & 5 & 5 & 13 & 4) & \textbf{N} & 12 & 2 & 4 & 6 & 8 & 12 \\ 5) & \textbf{N} & 12 & 2 & 5 & 7 & 6 & 12 & 6) & \textbf{N} & 12 & 5 & 2 & 5 & 8 & 12 \\ 7) & \textbf{N} & 13 & 6 & 5 & 5 & 5 & 11 & 8) & \textbf{N} & 14 & 7 & 2 & 2 & 11 & 10\\ 9) & \textbf{N} & 14 & 7 & 2 & 11 & 2 & 10 & 10) & \textbf{N} & 15 & 5 & 0 & 5 & 13 & 9\\ 11) & \textbf{N} & 15 & 5 & 4 & 9 & 5 & 9 & 12) & \textbf{N} & 16 & 12 & 0 & 1 & 11 & 8\\ 13) & \textbf{N} & 20 & 13 & 5 & 5 & 5 & 4 & 14) & \textbf{N} & 27 & 0 & 1 & 1 & 1 & 25\\ 15) & \textbf{N} & 29 & 2 & 1 & 1 & 1 & 23 & 16) & \textbf{N} & 31 & 1 & 0 & 5 & 1 & 21\\ 17) & \textbf{N} & 36 & 9 & 1 & 1 & 1 & 16 & 18) & & 52 & 1 & 0 & 1 & 26 & 0 \end{tabular}\\ Permutations: $(7_b,7_c,7_d)$. \label{l28e8p7} \end{table}
\begin{table}[H]\small \caption{$L_2(8) < E_8$, $p = 3$}
\begin{tabular}{rc|ccccc||rc|ccccc} & & 1 & 7 & $9_a$ & $9_b$ & $9_c$ & & & 1 & 7 & $9_a$ & $9_b$ & $9_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 22 & 3 & 3 & 4 & 2) & \textbf{P}, \textbf{N} & 6 & 23 & 3 & 3 & 3 \\ 3) & \textbf{N} & 8 & 6 & 7 & 7 & 8 & 4) & \textbf{P}, \textbf{N} & 8 & 24 & 1 & 5 & 2 \\ 5) & \textbf{N} & 10 & 7 & 7 & 7 & 7 & 6) & \textbf{N} & 12 & 8 & 2 & 6 & 12 \\ 7) & \textbf{N} & 12 & 8 & 5 & 9 & 6 & 8) & \textbf{N} & 22 & 13 & 0 & 1 & 14 \\ 9) & \textbf{N} & 22 & 13 & 0 & 7 & 8 & 10) & \textbf{N} & 52 & 28 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(9_a,9_b,9_c)$. \label{l28e8p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(11) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{8}{c}c} & & 1 & 5 & $5^{*}$ & $10_a$ & $10_b$ & 11 & $12_a$ & $12_b$ \\ \hline 1) & \textbf{P} & 0 & 2 & 2 & 0 & 4 & 4 & 6 & 6 \\ 2) & & 1 & 0 & 0 & 2 & 5 & 3 & 6 & 6 \\ 3) & & 2 & 2 & 2 & 0 & 4 & 6 & 5 & 5 \\ 4) & & 3 & 0 & 0 & 2 & 5 & 5 & 5 & 5 \\ 5) & & 3 & 2 & 2 & 0 & 4 & 7 & 2 & 7 \\ 6) & & 4 & 0 & 0 & 2 & 5 & 6 & 2 & 7 \\ 7) & & 7 & 1 & 1 & 1 & 9 & 1 & 5 & 5 \\ 8) & & 7 & 1 & 1 & 9 & 1 & 1 & 5 & 5 \\ 9) & & 7 & 4 & 4 & 6 & 1 & 1 & 5 & 5 \\ 10) & & 8 & 1 & 1 & 1 & 9 & 2 & 2 & 7 \\ 11) & & 8 & 1 & 1 & 9 & 1 & 2 & 2 & 7 \\ 12) & & 8 & 2 & 2 & 8 & 2 & 0 & 5 & 5 \\ 13) & & 8 & 4 & 4 & 6 & 1 & 2 & 2 & 7 \\ 14) & & 8 & 8 & 8 & 2 & 2 & 0 & 5 & 5 \\ 15) & & 9 & 2 & 2 & 8 & 2 & 1 & 2 & 7 \\ 16) & & 9 & 8 & 8 & 2 & 2 & 1 & 2 & 7 \\ 17) & & 10 & 2 & 2 & 0 & 4 & 14 & 1 & 1 \\ 18) & & 11 & 0 & 0 & 2 & 5 & 13 & 1 & 1 \\ 19) & & 15 & 1 & 1 & 1 & 9 & 9 & 1 & 1 \\ 20) & & 15 & 1 & 1 & 9 & 1 & 9 & 1 & 1 \\ 21) & & 15 & 4 & 4 & 6 & 1 & 9 & 1 & 1 \\ 22) & & 16 & 2 & 2 & 8 & 2 & 8 & 1 & 1 \\ 23) & & 16 & 8 & 8 & 2 & 2 & 8 & 1 & 1 \\ 24) & & 24 & 10 & 10 & 0 & 10 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(12_a,12_b)$. \label{l211e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_8$, $p = 5$}
\begin{tabular}{rc|*{6}{c}||rc|*{6}{c}} & & 1 & $5$ & $5^{*}$ & $10_a$ & $10_b$ & 11 & & & 1 & $5$ & $5^{*}$ & $10_a$ & $10_b$ & 11 \\ \hline 1) & \textbf{P}, \textbf{N} & 12 & 2 & 2 & 0 & 4 & 16 & 2) & \textbf{P}, \textbf{N} & 13 & 0 & 0 & 2 & 5 & 15 \\ 3) & \textbf{N} & 17 & 1 & 1 & 1 & 9 & 11 & 4) & \textbf{N} & 17 & 1 & 1 & 9 & 1 & 11\\ 5) & \textbf{N} & 17 & 4 & 4 & 6 & 1 & 11 & 6) & \textbf{N} & 18 & 2 & 2 & 8 & 2 & 10 \\ 7) & \textbf{N} & 18 & 8 & 8 & 2 & 2 & 10 & 8) & \textbf{N} & 26 & 10 & 10 & 0 & 10 & 2 \end{tabular} \label{l211e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_8$, $p = 3$}
\begin{tabular}{rc|*{6}{c}|rc|*{6}{c}} & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ & & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 2 & 2 & 8 & 6 & 6 & 2) & & 4 & 10 & 10 & 0 & 6 & 6 \\ 3) & \textbf{P}, \textbf{N} & 8 & 2 & 2 & 10 & 5 & 5 & 4) & \textbf{N} & 8 & 10 & 10 & 2 & 5 & 5 \\ 5) & \textbf{P}, \textbf{N} & 10 & 2 & 2 & 11 & 7 & 2 & 6) & \textbf{N} & 10 & 10 & 10 & 3 & 7 & 2 \\ 7) & \textbf{N} & 24 & 2 & 2 & 18 & 1 & 1 & 8) & \textbf{N} & 24 & 10 & 10 & 10 & 1 & 1 \end{tabular} \label{l211e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(11) < E_8$, $p = 2$}
\begin{tabular}{rc|*{6}{c}||rc|*{6}{c}} & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ & & & 1 & 5 & $5^{*}$ & 10 & $12_a$ & $12_b$ \\ \hline 1) & & 4 & 0 & 0 & 10 & 6 & 6 & 2) & \textbf{P}, \textbf{N} & 4 & 3 & 3 & 7 & 6 & 6 \\ 3) & \textbf{P}, \textbf{N} & 4 & 6 & 6 & 4 & 6 & 6 & 4) & \textbf{N} & 8 & 2 & 2 & 10 & 5 & 5 \\ 5) & \textbf{P}, \textbf{N} & 8 & 5 & 5 & 7 & 5 & 5 & 6) & \textbf{P}, \textbf{N} & 8 & 8 & 8 & 4 & 5 & 5 \\ 7) & \textbf{N} & 10 & 3 & 3 & 10 & 7 & 2 & 8) & \textbf{P}, \textbf{N} & 10 & 6 & 6 & 7 & 7 & 2 \\ 9) & \textbf{P}, \textbf{N} & 10 & 9 & 9 & 4 & 7 & 2 & 10) & \textbf{N} & 24 & 10 & 10 & 10 & 1 & 1 \\ 11) & \textbf{P}, \textbf{N} & 24 & 13 & 13 & 7 & 1 & 1 & 12) & \textbf{P}, \textbf{N} & 24 & 16 & 16 & 4 & 1 & 1 \end{tabular} \label{l211e8p2} \end{table}
{\small
\begin{longtable}{rc|*{9}{c}}
\caption{$L_2(13) < E_8$, $p = 0$ or $p \nmid |H|$} \\ & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & 13 & $14_a$ & $14_b$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 2 & 3 & 3 & 2 & 6 & 2 \\ 2) & & 1 & 1 & 1 & 3 & 3 & 3 & 1 & 6 & 2 \\ 3) & & 1 & 3 & 3 & 2 & 3 & 3 & 3 & 4 & 1 \\ 4) & & 2 & 1 & 1 & 1 & 5 & 4 & 0 & 6 & 2 \\ 5) & & 2 & 2 & 2 & 0 & 1 & 1 & 2 & 8 & 4 \\ 6) & & 2 & 3 & 3 & 3 & 3 & 3 & 2 & 4 & 1 \\ 7) & & 2 & 5 & 5 & 2 & 3 & 3 & 4 & 2 & 0 \\ 8) & & 3 & 2 & 2 & 1 & 1 & 1 & 1 & 8 & 4 \\ 9) & & 3 & 3 & 3 & 1 & 5 & 4 & 1 & 4 & 1 \\ 10)& & 3 & 4 & 4 & 0 & 1 & 1 & 3 & 6 & 3 \\ 11) & & 3 & 5 & 5 & 3 & 3 & 3 & 3 & 2 & 0 \\ 12) & & 4 & 0 & 0 & 0 & 1 & 1 & 4 & 2 & 10 \\ 13) & & 4 & 4 & 4 & 1 & 1 & 1 & 2 & 6 & 3 \\ 14) & & 4 & 5 & 5 & 1 & 5 & 4 & 2 & 2 & 0 \\ 15) & & 4 & 6 & 6 & 0 & 1 & 1 & 4 & 4 & 2 \\ 16) & & 5 & 0 & 0 & 1 & 1 & 1 & 3 & 2 & 10 \\ 17) & & 5 & 2 & 2 & 0 & 1 & 1 & 5 & 0 & 9 \\ 18) & & 5 & 6 & 6 & 1 & 1 & 1 & 3 & 4 & 2 \\ 19) & & 6 & 0 & 13 & 0 & 0 & 0 & 3 & 5 & 3 \\ 20) & & 6 & 2 & 2 & 1 & 1 & 1 & 4 & 0 & 9 \\ 21) & & 7 & 0 & 0 & 0 & 1 & 1 & 7 & 2 & 7 \\ 22) & & 8 & 0 & 0 & 1 & 1 & 1 & 6 & 2 & 7 \\ 23) & & 8 & 2 & 2 & 0 & 1 & 1 & 8 & 0 & 6 \\ 24) & & 9 & 2 & 2 & 1 & 1 & 1 & 7 & 0 & 6 \\ 25) & & 14 & 0 & 0 & 0 & 1 & 8 & 0 & 2 & 7 \\ 26) & & 15 & 2 & 2 & 0 & 1 & 8 & 1 & 0 & 6 \\ 27) & & 52 & 0 & 26 & 0 & 0 & 0 & 0 & 1 & 0 \\ \multicolumn{11}{c}{Permutations: $(7_a,7_b)$, $(12_a,12_b,12_c)$.} \label{l213e8p0} \end{longtable} }
\begin{table}[H]\small \caption{$L_2(13) < E_8$, $p = 7$}
\begin{tabular}{rc|*{6}{c}||rc|*{6}{c}} & & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ & & & 1 & $7_a$ & $7_b$ & $12$ & $14_a$ & $14_b$ \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 1 & 1 & 10 & 6 & 2 & 2) & \textbf{N} & 4 & 2 & 2 & 4 & 8 & 4 \\ 3) & \textbf{P}, \textbf{N} & 4 & 3 & 3 & 11 & 4 & 1 & 4) & \textbf{N} & 6 & 4 & 4 & 5 & 6 & 3 \\ 5) & \textbf{P}, \textbf{N} & 6 & 5 & 5 & 12 & 2 & 0 & 6) & \textbf{N} & 8 & 0 & 0 & 6 & 2 & 10 \\ 7) & \textbf{N} & 8 & 6 & 6 & 6 & 4 & 2 & 8) & \textbf{N} & 9 & 0 & 13 & 3 & 5 & 3 \\ 9) & \textbf{N} & 9 & 13 & 0 & 3 & 5 & 3 & 10) & \textbf{N} & 10 & 2 & 2 & 7 & 0 & 9 \\ 11) & \textbf{N} & 14 & 0 & 0 & 9 & 2 & 7 & 12) & \textbf{N} & 16 & 2 & 2 & 10 & 0 & 6 \\ 13) & & 52 & 0 & 26 & 0 & 1 & 0 & 14) & & 52 & 26 & 0 & 0 & 1 & 0 \end{tabular} \label{l213e8p7} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_8$, $p = 3$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $7_a$ & $7_b$ & $12_a$ & $12_b$ & $12_c$ & 13 \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 7 & 7 & 2 & 3 & 3 & 4\\ 2) & \textbf{N} & 3 & 7 & 7 & 3 & 3 & 3 & 3 \\ 3) & \textbf{N} & 4 & 7 & 7 & 1 & 4 & 5 & 2 \\ 4) & \textbf{N} & 6 & 10 & 10 & 0 & 1 & 1 & 6 \\ 5) & \textbf{N} & 7 & 10 & 10 & 1 & 1 & 1 & 5 \\ 6) & \textbf{N} & 9 & 5 & 18 & 0 & 0 & 0 & 6 \\ 7) & \textbf{N} & 14 & 2 & 2 & 0 & 1 & 1 & 14 \\ 8) & \textbf{N} & 15 & 2 & 2 & 1 & 1 & 1 & 13 \\ 9) & \textbf{N} & 21 & 2 & 2 & 0 & 8 & 1 & 7 \\ 10) & & 52 & 1 & 27 & 0 & 0 & 0 & 0 \\ \end{tabular}\\ Permutations: $(7_{a},7_{b})$, $(12_a,12_b,12_c)$. \label{l213e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(13) < E_8$, $p = 2$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $6_a$ & $6_b$ & $12_a$ & $12_b$ & $12_c$ & 14 \\ \hline 1) & & 4 & 1 & 1 & 1 & 5 & 4 & 8 \\ 2) & \textbf{N} & 4 & 2 & 2 & 3 & 3 & 3 & 8 \\ 3) & \textbf{P}, \textbf{N} & 4 & 3 & 3 & 2 & 3 & 3 & 8 \\ 4) & \textbf{N} & 8 & 3 & 3 & 1 & 1 & 1 & 12 \\ 5) & \textbf{N} & 8 & 4 & 4 & 0 & 1 & 1 & 12 \\ 6) & \textbf{N} & 10 & 4 & 4 & 1 & 5 & 4 & 5 \\ 7) & \textbf{N} & 10 & 5 & 5 & 3 & 3 & 3 & 5 \\ 8) & \textbf{P}, \textbf{N} & 10 & 6 & 6 & 2 & 3 & 3 & 5 \\ 9) & & 14 & 0 & 0 & 0 & 1 & 8 & 9 \\ 10) & \textbf{N} & 14 & 6 & 6 & 1 & 1 & 1 & 9 \\ 11) & \textbf{N} & 14 & 7 & 7 & 0 & 1 & 1 & 9 \\ 12) & \textbf{N} & 16 & 0 & 13 & 0 & 0 & 0 & 11 \\ 13) & \textbf{N} & 16 & 2 & 2 & 2 & 3 & 10 & 2 \\ 14) & \textbf{N} & 16 & 7 & 7 & 1 & 5 & 4 & 2 \\ 15) & \textbf{N} & 16 & 8 & 8 & 3 & 3 & 3 & 2 \\ 16) & \textbf{P}, \textbf{N} & 16 & 9 & 9 & 2 & 3 & 3 & 2 \\ 18) & \textbf{N} & 20 & 3 & 3 & 0 & 1 & 8 & 6 \\ 19) & \textbf{N} & 20 & 9 & 9 & 1 & 1 & 1 & 6 \\ 20) & \textbf{N} & 20 & 10 & 10 & 0 & 1 & 1 & 6 \\ 21) & \textbf{N} & 22 & 3 & 16 & 0 & 0 & 0 & 8 \\ 23) & \textbf{N} & 78 & 0 & 26 & 0 & 0 & 0 & 1 \\ \end{tabular}\\ Permutations: $(6_{a},6_{b})$, $(12_a,12_b,12_c)$. \label{l213e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(16) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{12}{c}} & & 1 & $15_a$ & $15_b$ & $15_c$ & $15_d$ & $15_e$ & $15_f$ & $15_g$ & $15_h$ & 16 & $17_{a,b,c}$ & $17_{d,e,f,g}$ \\ \hline 1) & \textbf{P} & 0 & 0 & 2 & 1 & 1 & 1 & 2 & 3 & 2 & 0 & 0 & 1 \\ 2) & \textbf{P} & 0 & 2 & 2 & 2 & 1 & 1 & 2 & 1 & 1 & 0 & 0 & 1 \\ 3) & \textbf{P} & 0 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 0 & 0 & 1 \\ 4) & & 1 & 0 & 2 & 1 & 1 & 1 & 2 & 3 & 2 & 1 & 1 & 0 \\ 5) & & 1 & 1 & 1 & 2 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 0 \\ 6) & & 1 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 1 & 0 \\ 7) & & 2 & 0 & 0 & 3 & 1 & 1 & 1 & 4 & 3 & 0 & 1 & 0 \\ 8) & & 2 & 0 & 3 & 3 & 2 & 2 & 1 & 1 & 1 & 0 & 1 & 0 \end{tabular}\\ Permutations: $(15_a,15_b,\ldots,15_h)$. \\ $17_a$, $17_b$ and $17_c$ occur with equal multiplicities, as do $17_d$, $17_e$, $17_f$ and $17_g$. \label{l216e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(16) < E_8$, $p = 17$}
\begin{tabular}{rc|*{9}{c}} & & 1 & 15 & $17_a$ & $17_b$ & $17_c$ & $17_d$ & $17_e$ & $17_f$ & $17_g$ \\ \hline 1) & \textbf{P} & 0 & 12 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 2) & \textbf{P} & 2 & 13 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 3) & & 10 & 0 & 0 & 3 & 3 & 4 & 4 & 0 & 0 \\ 4) & & 10 & 0 & 1 & 0 & 6 & 3 & 4 & 0 & 0 \\ 5) & & 10 & 0 & 1 & 1 & 2 & 1 & 4 & 1 & 4 \\ 6) & & 10 & 0 & 8 & 0 & 0 & 1 & 3 & 2 & 0 \\ 7) & & 10 & 0 & 8 & 1 & 2 & 1 & 2 & 0 & 0 \\ 8) & & 10 & 0 & 8 & 3 & 3 & 0 & 0 & 0 & 0 \\ 9) & & 12 & 1 & 7 & 1 & 1 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(17_b,17_c)(17_d,17_e,17_f,17_g)$. \label{l216e8p17} \end{table}
\begin{table}[H]\small \caption{$L_2(16) < E_8$, $p = 5$}
\begin{tabular}{rc|*{11}{c}} & & 1 & $15_a$ & $15_b$ & $15_c$ & $15_d$ & $15_e$ & $15_f$ & $15_g$ & $15_h$ & 16 & 17 \\ \hline
1) & \textbf{P} & 0 & 1 & 2 & 1 & 1 & 2 & 2 & 2 & 1 & 0 & 4 \\
2) & \textbf{P} & 0 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 0 & 4 \\
3) & \textbf{P} & 0 & 2 & 3 & 2 & 1 & 1 & 1 & 2 & 0 & 0 & 4 \\
4) & & 3 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 3 & 1 \\
5) & & 3 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 1 & 3 & 1 \\
6) & & 3 & 2 & 3 & 2 & 1 & 1 & 1 & 2 & 0 & 3 & 1 \\
7) & & 4 & 3 & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 2 & 1 \\
8) & & 4 & 3 & 4 & 1 & 1 & 1 & 3 & 0 & 0 & 2 & 1 \\
9) & & 5 & 1 & 4 & 2 & 0 & 1 & 4 & 2 & 0 & 1 & 1 \\ 10) & & 5 & 3 & 0 & 0 & 5 & 1 & 3 & 1 & 1 & 1 & 1 \\ 11) & & 5 & 3 & 2 & 2 & 0 & 2 & 2 & 0 & 3 & 1 & 1 \end{tabular}\\ Permutations: $(15_a,15_b,\ldots,15_h)$. \label{l216e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(16) < E_8$, $p = 3$}
\begin{tabular}{rc|*{12}{c}} & & 1 & $15_a$ & $15_b$ & $15_c$ & $15_d$ & $15_e$ & $15_f$ & $15_g$ & $15_h$ & 16 & $17_a$ & $17_b$ \\ \hline
1) & \textbf{P} & 0 & 0 & 1 & 3 & 2 & 1 & 2 & 1 & 2 & 0 & 2 & 2 \\
2) & \textbf{P} & 0 & 1 & 1 & 2 & 1 & 1 & 2 & 2 & 2 & 0 & 2 & 2 \\
3) & \textbf{P} & 0 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 0 & 2 & 2 \\
4) & & 2 & 0 & 1 & 3 & 2 & 1 & 2 & 1 & 2 & 2 & 1 & 1 \\
5) & & 2 & 1 & 1 & 2 & 1 & 1 & 2 & 2 & 2 & 2 & 1 & 1 \\
6) & & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 2 & 1 & 1 \\
7) & & 3 & 0 & 1 & 3 & 3 & 1 & 0 & 1 & 4 & 1 & 1 & 1 \\
8) & & 3 & 0 & 2 & 1 & 3 & 2 & 1 & 3 & 1 & 1 & 1 & 1 \\
9) & & 4 & 0 & 0 & 3 & 2 & 2 & 3 & 2 & 2 & 0 & 1 & 1 \\ 10) & & 4 & 0 & 1 & 1 & 0 & 1 & 5 & 3 & 3 & 0 & 1 & 1 \\ 11) & & 4 & 0 & 1 & 4 & 2 & 0 & 1 & 4 & 2 & 0 & 1 & 1 \\ \end{tabular}\\ Permutations: $(15_a,15_b,\ldots,15_h)$. \label{l216e8p3} \end{table}
\begin{table}[H]\small
\caption{$L_2(17) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{11}{c}} & & 1 & $9_a$ & $9_b$ & $16_a$ & $16_b$ & $16_c$ & $16_d$ & 17 & $18_a$ & $18_b$ & $18_c$ \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 0 & 2 & 2 & 2 & 1 & 1 & 3 & 3 \\ 2) & \textbf{P} & 0 & 0 & 1 & 3 & 0 & 2 & 1 & 1 & 1 & 3 & 3 \\ 3) & & 1 & 0 & 1 & 1 & 2 & 2 & 2 & 0 & 1 & 3 & 3 \\ 4) & & 7 & 1 & 8 & 1 & 0 & 0 & 0 & 0 & 6 & 1 & 1 \\ 5) & & 8 & 0 & 7 & 1 & 0 & 0 & 0 & 1 & 6 & 1 & 1 \\ 6) & & 10 & 4 & 11 & 1 & 0 & 0 & 0 & 3 & 0 & 1 & 1 \\ 7) & & 14 & 0 & 7 & 1 & 0 & 0 & 0 & 7 & 0 & 1 & 1 \\ 8) & & 21 & 0 & 7 & 8 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(9_a,9_b)$, $(16_b,16_c,16_d)$. \label{l217e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_8$, $p = 3$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $9_a$ & $9_b$ & $16$ & $18_a$ & $18_b$ & $18_c$ \\ \hline 1) & \textbf{P}, \textbf{N} & 1 & 0 & 1 & 7 & 1 & 3 & 3 \\ 2) & & 7 & 1 & 8 & 1 & 6 & 1 & 1 \\ 3) & \textbf{N} & 9 & 0 & 7 & 2 & 6 & 1 & 1 \\ 4) & \textbf{N} & 13 & 4 & 11 & 4 & 0 & 1 & 1 \\ 5) & \textbf{N} & 21 & 0 & 7 & 8 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(9_a,9_b)$. \label{l217e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(17) < E_8$, $p = 2$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $8_a$ & $8_b$ & $16_a$ & $16_b$ & $16_c$ & $16_d$ \\ \hline 1) & \textbf{N} & 16 & 3 & 4 & 2 & 1 & 6 & 2 \\ 2) & \textbf{N} & 16 & 5 & 6 & 3 & 2 & 2 & 2 \\ 3) & \textbf{N} & 16 & 7 & 8 & 1 & 2 & 2 & 2 \\ 4) & \textbf{P}, \textbf{N} & 16 & 8 & 9 & 0 & 2 & 2 & 2 \\ 5) & \textbf{P}, \textbf{N} & 16 & 8 & 9 & 3 & 0 & 1 & 2 \\ 6) & \textbf{N} & 32 & 2 & 9 & 8 & 0 & 0 & 0 \\ 7) & \textbf{N} & 32 & 9 & 16 & 1 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(8_a,8_b)$, $(16_b,16_c,16_d)$. \label{l217e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(19) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{12}{c}} & & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $18_c$ & $18_d$ & 19 & $20_a$ & $20_b$ & $20_c$ & $20_d$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 0 & 0 & 2 & 2 & 2 & 0 & 1 & 3 & 2 \\ 2) & \textbf{P} & 0 & 1 & 1 & 0 & 0 & 2 & 2 & 2 & 3 & 1 & 1 & 1 \\ 3) & & 1 & 0 & 0 & 0 & 1 & 3 & 2 & 1 & 0 & 1 & 3 & 2 \\ 4) & & 1 & 0 & 0 & 0 & 1 & 3 & 2 & 1 & 3 & 1 & 1 & 1 \\ 5) & & 1 & 1 & 1 & 0 & 0 & 2 & 2 & 3 & 2 & 1 & 1 & 1 \\ 6) & & 2 & 0 & 0 & 0 & 1 & 3 & 2 & 2 & 2 & 1 & 1 & 1 \\ 7) & & 3 & 1 & 1 & 0 & 0 & 2 & 2 & 5 & 0 & 1 & 1 & 1 \\ 8) & & 4 & 0 & 0 & 0 & 1 & 3 & 2 & 4 & 0 & 1 & 1 & 1 \\ 9) & & 8 & 3 & 3 & 0 & 6 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(18_a,18_b)(18_c,18_d)$, $(20_b,20_c,20_d)$. \label{l219e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_8$, $p = 5$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 9 & $9^{*}$ & $18_a$ & $20_a$ & $20_b$ & $20_c$ & $20_d$ \\ \hline 1) & \textbf{P}, \textbf{N} & 2 & 1 & 1 & 6 & 0 & 1 & 2 & 3 \\ 2) & \textbf{P}, \textbf{N} & 2 & 1 & 1 & 6 & 3 & 1 & 1 & 1 \\ 3) & \textbf{P}, \textbf{N} & 4 & 1 & 1 & 7 & 2 & 1 & 1 & 1 \\ 4) & \textbf{P}, \textbf{N} & 8 & 1 & 1 & 9 & 0 & 1 & 1 & 1 \\ 5) & \textbf{N} & 8 & 9 & 9 & 1 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(20_b,20_c,20_d)$. \label{l219e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_8$, $p = 3$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $18_c$ & $18_d$ & $19$ \\ \hline 1) & \textbf{P}, \textbf{N} & 6 & 1 & 1 & 0 & 0 & 2 & 2 & 8 \\ 2) & \textbf{N} & 7 & 0 & 0 & 0 & 1 & 3 & 2 & 7 \\ 3) & \textbf{N} & 11 & 3 & 3 & 0 & 6 & 1 & 0 & 3 \end{tabular}\\ Permutations: $(18_a,18_b)(18_c,18_d)$. \label{l219e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(19) < E_8$, $p = 2$}
\begin{tabular}{rc|*{9}{c}} & & 1 & 9 & $9^{*}$ & $18_a$ & $18_b$ & $20_a$ & $20_b$ & $20_c$ & $20_d$ \\ \hline 1) & & 2 & 0 & 0 & 1 & 6 & 0 & 1 & 3 & 2 \\ 2) & & 2 & 0 & 0 & 1 & 6 & 3 & 1 & 1 & 1 \\ 3) & \textbf{N} & 2 & 1 & 1 & 3 & 3 & 0 & 1 & 3 & 2 \\ 4) & \textbf{N} & 2 & 1 & 1 & 3 & 3 & 3 & 1 & 1 & 1 \\ 5) & \textbf{P}, \textbf{N} & 2 & 3 & 3 & 2 & 2 & 0 & 1 & 3 & 2 \\ 6) & \textbf{P}, \textbf{N} & 2 & 3 & 3 & 2 & 2 & 3 & 1 & 1 & 1 \\ 7) & \textbf{N} & 4 & 1 & 1 & 1 & 6 & 2 & 1 & 1 & 1 \\ 8) & \textbf{N} & 4 & 2 & 2 & 3 & 3 & 2 & 1 & 1 & 1 \\ 9) & \textbf{P}, \textbf{N} & 4 & 4 & 4 & 2 & 2 & 2 & 1 & 1 & 1 \\ 10) & \textbf{N} & 8 & 3 & 3 & 1 & 6 & 0 & 1 & 1 & 1 \\ 11) & \textbf{N} & 8 & 4 & 4 & 3 & 3 & 0 & 1 & 1 & 1 \\ 12) & \textbf{P}, \textbf{N} & 8 & 6 & 6 & 2 & 2 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(18_a,18_b)$, $(20_b,20_c,20_d)$. \label{l219e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(25) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{13}{c}} & & 1 & $24_a$ & $24_b$ & $24_c$ & $24_d$ & $24_e$ & $24_f$ & 25 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\ 2) & \textbf{P} & 0 & 1 & 0 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\ 3) & \textbf{P} & 0 & 1 & 1 & 0 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 2 & 2 \\ 4) & \textbf{P} & 0 & 1 & 1 & 1 & 0 & 2 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\ 5) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\ 6) & \textbf{P} & 0 & 1 & 2 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\ 7) & \textbf{P} & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 2 & 2 \\ 8) & & 14 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 0 & 1 & 1 \\ 9) & & 15 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 6 & 2 & 0 & 0 \\ 10) & & 15 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 6 & 0 & 2 & 0 & 0 \end{tabular} \label{l225e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_8$, $p = 13$}
\begin{tabular}{rc|*{7}{c}} & & 1 & 24 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ \\ \hline 1) & \textbf{P} & 0 & 6 & 0 & 0 & 2 & 2 & 0 \\ 2) & & 14 & 0 & 7 & 0 & 1 & 1 & 0 \\ 3) & & 16 & 1 & 0 & 2 & 0 & 0 & 6 \\ 4) & & 16 & 1 & 6 & 2 & 0 & 0 & 0 \end{tabular} \label{l225e8p13} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_8$, $p = 3$}
\begin{tabular}{rc|*{11}{c}} & & 1 & $13_a$ & $13_b$ & $24_a$ & $24_b$ & $24_c$ & $24_d$ & $24_e$ & $24_f$ & 25 & 26 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 0 & 4 \\ 2) & & 15 & 6 & 6 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 \\ 3) & \textbf{N} & 21 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 2 \end{tabular}\\ Permutations: $(24_a,24_b,\ldots,24_f)$. \label{l225e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(25) < E_8$, $p = 2$}
\begin{tabular}{rc|*{10}{c}} & & 1 & $12_a$ & $12_b$ & $24_a$ & $24_b$ & $24_c$ & $24_d$ & $24_e$ & $24_f$ & 26 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 2 & 4 \\ 2) & \textbf{P} & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 4 \\ 3) & \textbf{N} & 6 & 2 & 2 & 0 & 1 & 2 & 2 & 0 & 2 & 1 \\ 4) & \textbf{N} & 6 & 3 & 3 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 5) & \textbf{N} & 6 & 3 & 3 & 1 & 1 & 1 & 1 & 2 & 0 & 1 \\ 6) & \textbf{N} & 10 & 2 & 7 & 0 & 0 & 0 & 0 & 0 & 0 & 5 \\ 7) & & 14 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 9 \\ 8) & \textbf{N} & 16 & 5 & 10 & 0 & 0 & 0 & 0 & 0 & 0 & 2 \\ 9) & \textbf{N} & 20 & 3 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 6 \end{tabular}\\ Permutations: $(12_a, 12_b)$, $(24_a,24_b, \ldots, 24_f)$. \label{l225e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(27) < E_8$, $p = 0$, $13$ or $p \nmid |H|$}
\begin{tabular}{r|*{7}{c}} & 1 & $26_a$ & $26_b$ & $26_c$ & $26_d$ & $26_e$ & $26_f$ \\ \hline 1) & 14 & 0 & 1 & 1 & 0 & 0 & 7 \end{tabular}\\ Permutations: $(26_a,26_b,26_c)(26_d,26_e,26_f)$. \label{l227e8p0} \label{l227e8p13} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_8$, $p = 7$}
\begin{tabular}{rc|*{4}{c}} & & 1 & 13 & $13^{*}$ & $26$ \\ \hline 1) & \textbf{N} & 14 & 7 & 7 & 2 \end{tabular} \label{l227e8p7} \end{table}
\begin{table}[H]\small \caption{$L_2(27) < E_8$, $p = 2$}
\begin{tabular}{rc|*{12}{c}} & & 1 & 13 & $13^{*}$ & $26_a$ & $26_b$ & $26_c$ & $28_a$ & $28_b$ & $28_c$ & $28_d$ & $28_e$ & $28_f$ \\ \hline 1) & & 2 & 0 & 0 & 1 & 1 & 1 & 0 & 2 & 1 & 1 & 1 & 1 \\ 2) & & 2 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 3) & \textbf{N} & 2 & 1 & 1 & 0 & 1 & 1 & 0 & 2 & 1 & 1 & 1 & 1 \\ 4) & \textbf{N} & 2 & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 5) & \textbf{N} & 4 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 4 & 1 \\ 6) & \textbf{N} & 4 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 2 & 0 & 2 & 1 \\ 7) & \textbf{N} & 4 & 2 & 2 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 4 & 1 \\ 8) & \textbf{N} & 4 & 2 & 2 & 0 & 1 & 1 & 0 & 0 & 2 & 0 & 2 & 1 \\ 9) & & 14 & 0 & 0 & 0 & 1 & 8 & 0 & 0 & 0 & 0 & 0 & 0 \\ 10) & \textbf{N} & 14 & 6 & 6 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 11) & \textbf{N} & 14 & 7 & 7 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{tabular} Permutations: $(26_a,26_b,26_c)$, $(28_a,28_b, \ldots, 28_f)$. \label{l227e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(29) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{13}{c}} & & $15_a$ & $15_b$ & $28_a$ & $28_b$ & $28_c$ & $28_d$ & $28_e$ & $28_f$ & $28_g$ & 29 & $30_a$ & $30_b$ & $30_c$ \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ 2) & \textbf{P} & 1 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(15_a,15_b)$, $(30_a,30_b,30_c)$. \label{l229e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(29) < E_8$, $p = 7$}
\begin{tabular}{rc|*{11}{c}} & & 1 & $15_a$ & $15_b$ & $28_a$ & $28_b$ & $28_c$ & $28_d$ & $28_e$ & $28_f$ & $28_g$ & 29 \\ \hline 1) & \textbf{P} & 0 & 4 & 5 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 \\ 2) & & 1 & 4 & 5 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\ \end{tabular}\\ Permutations: $(15_a,15_b)$. \label{l229e8p7} \end{table}
\begin{table}[H]\small \caption{$L_2(29) < E_8$, $p = 5$}
\begin{tabular}{rc|*{8}{c}} & & 1 & $15_a$ & $15_b$ & $28_a$ & $28_b$ & $30_a$ & $30_b$ & $30_c$ \\ \hline 1) & & 1 & 0 & 1 & 0 & 4 & 1 & 1 & 2 \\ 2) & \textbf{P} & 1 & 0 & 1 & 3 & 1 & 1 & 1 & 2 \\ 3) & & 1 & 0 & 3 & 0 & 4 & 1 & 1 & 1 \\ 4) & \textbf{P} & 1 & 0 & 3 & 3 & 1 & 1 & 1 & 1 \\ 5) & & 1 & 1 & 2 & 0 & 4 & 1 & 1 & 1 \\ 6) & \textbf{P} & 1 & 1 & 2 & 3 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(15_a,15_b)$, $(30_a,30_b,30_c)$. \label{l229e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(29) < E_8$, $p = 3$}
\begin{tabular}{rc|*{9}{c}} & & 1 & $15_a$ & $15_b$ & $28_a$ & $28_b$ & $28_c$ & $30_a$ & $30_b$ & $30_c$ \\ \hline 1) & & 1 & 0 & 1 & 0 & 2 & 2 & 1 & 1 & 2 \\ 2) & \textbf{P}, \textbf{N} & 1 & 0 & 1 & 2 & 1 & 1 & 1 & 1 & 2 \\ 3) & & 1 & 1 & 2 & 0 & 2 & 2 & 1 & 1 & 1 \\ 4) & \textbf{P}, \textbf{N} & 1 & 1 & 2 & 2 & 1 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(15_a,15_b)$, $(30_a,30_b,30_c)$. \label{l229e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(29) < E_8$, $p = 2$}
\begin{tabular}{rc|*{13}{c}} & & 1 & $14_a$ & $14_b$ & $28_a$ & $28_b$ & $28_c$ & $28_d$ & $28_e$ & $28_f$ & $28_g$ & $30_a$ & $30_b$ & $30_c$ \\ \hline 1) & & 2 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 2 \\ 2) & \textbf{P}, \textbf{N} & 2 & 1 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ 3) & & 4 & 0 & 1 & 0 & 0 & 1 & 2 & 0 & 2 & 0 & 1 & 1 & 1 \\ 4) & & 4 & 0 & 1 & 1 & 0 & 1 & 0 & 2 & 1 & 0 & 1 & 1 & 1 \\ 5) & \textbf{N} & 4 & 2 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 6) & \textbf{P} & 4 & 3 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(14_a,14_b)$, $(30_a,30_b,30_c)$, $(28_b,28_c)(28_d,28_e,28_f,28_g)$. \label{l229e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(31) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{14}{c}} & & 1 & $15$ & $15^{*}$ & $30_a$ & $30_b$ & $30_c$ & 31 & $32_a$ & $32_b$ & $32_c$ & $32_d$ & $32_e$ & $32_f$ & $32_g$ \\ \hline
1) & \textbf{P} & 0 & 0 & 0 & 0 & 2 & 2 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
2) & \textbf{P} & 0 & 0 & 0 & 2 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
3) & \textbf{P} & 0 & 1 & 1 & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
4) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
5) & \textbf{P} & 0 & 1 & 1 & 1 & 2 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
6) & & 1 & 0 & 0 & 0 & 2 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
7) & & 1 & 0 & 0 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
8) & & 1 & 1 & 1 & 1 & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
9) & & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 10) & & 1 & 1 & 1 & 1 & 2 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{tabular} \label{l231e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(31) < E_8$, $p = 5$}
\begin{tabular}{rc|*{9}{c}} & & 1 & $15$ & $15^{*}$ & $30_a$ & $30_b$ & $30_c$ & 31 & $32$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 2 & 2 & 0 & 4 \\ 2) & \textbf{P} & 0 & 0 & 0 & 2 & 1 & 1 & 0 & 4 \\ 3) & \textbf{P} & 0 & 1 & 1 & 1 & 0 & 2 & 0 & 4 \\ 4) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 4 \\ 5) & \textbf{P} & 0 & 1 & 1 & 1 & 2 & 0 & 0 & 4 \\ 6) & & 3 & 0 & 0 & 0 & 2 & 2 & 3 & 1 \\ 7) & & 3 & 0 & 0 & 2 & 1 & 1 & 3 & 1 \\ 8) & & 3 & 1 & 1 & 1 & 0 & 2 & 3 & 1 \\ 9) & & 3 & 1 & 1 & 1 & 1 & 1 & 3 & 1 \\ 10) & & 3 & 1 & 1 & 1 & 2 & 0 & 3 & 1 \\ 11) & & 5 & 2 & 2 & 2 & 1 & 1 & 1 & 1 \\ 12) & & 5 & 3 & 3 & 1 & 1 & 1 & 1 & 1 \\ 13) & & 6 & 0 & 0 & 5 & 1 & 1 & 0 & 1 \end{tabular} \label{l231e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(31) < E_8$, $p = 3$}
\begin{tabular}{rc|*{9}{c}} & & 1 & $15$ & $15^{*}$ & $30_a$ & $30_b$ & $30_c$ & 31 & $32_a$ & $32_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 2 & 2 & 0 & 2 & 2 \\ 2) & \textbf{P} & 0 & 0 & 0 & 2 & 1 & 1 & 0 & 2 & 2 \\ 3) & \textbf{P} & 0 & 1 & 1 & 1 & 0 & 2 & 0 & 2 & 2 \\ 4) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 2 & 2 \\ 5) & \textbf{P} & 0 & 1 & 1 & 1 & 2 & 0 & 0 & 2 & 2 \\ 6) & & 2 & 0 & 0 & 0 & 2 & 2 & 2 & 1 & 1 \\ 7) & & 2 & 0 & 0 & 2 & 1 & 1 & 2 & 1 & 1 \\ 8) & & 2 & 1 & 1 & 1 & 0 & 2 & 2 & 1 & 1 \\ 9) & & 2 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 1 \\ 10) & & 2 & 1 & 1 & 1 & 2 & 0 & 2 & 1 & 1 \\ 11) & & 4 & 2 & 2 & 2 & 1 & 1 & 0 & 1 & 1 \\ 12) & & 4 & 3 & 3 & 1 & 1 & 1 & 0 & 1 & 1 \end{tabular} \label{l231e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(31) < E_8$, $p = 2$}
\begin{tabular}{rc|*{10}{c}} & & 1 & $15$ & $15^{*}$ & $32_a$ & $32_b$ & $32_c$ & $32_d$ & $32_e$ & $32_f$ & $32_g$ \\ \hline 1) & \textbf{P} & 0 & 4 & 4 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 2) & \textbf{P} & 2 & 5 & 5 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{tabular} \label{l231e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(32) < E_8$, $p = 0$, $31$ or $p \nmid |H|$}
\begin{tabular}{rc|*{8}{c}} & & $31_a$ & $31_b$ & $31_c$ & $31_d$ & $31_f$ & $31_g$ & $31_k$ & $31_l$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(31_a,31_b,\ldots,31_e)(31_f,31_g,\ldots,31_j)(31_k,31_l,\ldots,31_o)$.\\ The outer automorphism group induces this permutation. Elements of order 3 have Brauer character value $-2$ on $31_a,\ldots,31_e$, and $1$ on the rest. \label{l232e8p0} \label{l232e8p31} \end{table}
\begin{table}[H]\small \caption{$L_2(32) < E_8$, $p = 11$}
\begin{tabular}{rc|*{2}{c}} & & $31_a$ & $31_b$ \\ \hline 1) & \textbf{P} & 1 & 7 \\ 2) & \textbf{P} & 4 & 4 \end{tabular}\\ Elements of order 3 have Brauer character value $-2$ on $31_a$, and $1$ on $31_b$. \label{l232e8p11} \end{table}
\begin{table}[H]\small \caption{$L_2(32) < E_8$, $p = 3$}
\begin{tabular}{rc|*{5}{c}} & & $31_a$ & $31_b$ & $31_c$ & $31_d$ & $31_e$ \\ \hline 1) & \textbf{P} & 3 & 0 & 1 & 1 & 3 \\ 2) & \textbf{P} & 2 & 2 & 1 & 2 & 1 \end{tabular}\\ Permutations: $(31_a,31_b,31_c,31_d,31_e)$.\\ These modules are all conjugate by an outer automorphism. \label{l232e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(37) < E_8$, $p = 2$}
\begin{tabular}{rc|*{8}{c}} & & $1$ & $18_a$ & $18_b$ & $38_a$ & $38_b$ & $38_c$ & $38_d$ \\ \hline 1) & & 2 & 0 & 1 & 0 & 1 & 3 & 2 \\ 2) & & 2 & 0 & 1 & 3 & 1 & 1 & 1 \\ 3) & \textbf{N} & 4 & 0 & 3 & 2 & 1 & 1 & 1 \\ 4) & \textbf{N} & 4 & 1 & 2 & 2 & 1 & 1 & 1 \\ 5) & \textbf{N} & 8 & 0 & 7 & 0 & 1 & 1 & 1 \\ 6) & \textbf{N} & 8 & 1 & 6 & 0 & 1 & 1 & 1 \\ 7) & \textbf{N} & 8 & 2 & 5 & 0 & 1 & 1 & 1 \\ 8) & \textbf{N} & 8 & 3 & 4 & 0 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(18_a,18_b)$, $(38_b,38_c,38_d)$. \label{l237e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(41) < E_8$, $p = 0$, $3$ or $p \nmid |H|$}
\begin{tabular}{rc|*{6}{c}} & & $40_c$ & $40_d$ & $42_f$ & $42_g$ & $42_h$ & $42_i$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(40_b,40_c,40_d)$. \label{l241e8p0} \label{l241e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(41) < E_8$, $p = 7$}
\begin{tabular}{rc|*{5}{c}} & & $40_a$ & $42_f$ & $42_g$ & $42_h$ & $42_i$ \\ \hline 1) & \textbf{P} & 2 & 1 & 1 & 1 & 1 \end{tabular} \label{l241e8p7} \end{table}
\begin{table}[H]\small \caption{$L_2(41) < E_8$, $p = 5$}
\begin{tabular}{rc|*{4}{c}} & & $40_c$ & $40_d$ & $42$ \\ \hline 1) & \textbf{P} & 1 & 1 & 4 \end{tabular}\\ Permutations: $(40_b,40_c,40_d)$. \label{l241e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(41) < E_8$, $p = 2$}
\begin{tabular}{rc|*{12}{c}} & & 1 & $20_a$ & $20_b$ & $40_a$ & $40_b$ & $40_c$ & $40_d$ & $40_f$ & $40_h$ & $40_j$ & $42_a$ & $42_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 2 & 2 \\ 2) & & 4 & 0 & 2 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 3) & & 4 & 0 & 2 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ 4) & & 4 & 0 & 4 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ 5) & & 4 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ 6) & & 4 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ 7) & & 4 & 1 & 3 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \\ 8) & & 4 & 2 & 2 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(20_a,20_b)$, $(40_b,40_c,40_d)$, $(40_e,40_f)(40_g,40_h)(40_i,40_j)$. \label{l241e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(49) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{5}{c}} & &$48_a$ & $50_f$ & $50_g$ & $50_i$ & $50_j$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 \end{tabular}\\ Permutations: $(48_a,48_b)$. \label{l249e8p0} \end{table}
\begin{table}[H]\small \caption{$L_2(49) < E_8$, $p = 5$}
\begin{tabular}{rc|*{5}{c}} & & $48$ & $50_f$ & $50_g$ & $50_h$ & $50_i$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 & 1 \end{tabular} \label{l249e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(49) < E_8$, $p = 3$}
\begin{tabular}{rc|*{3}{c}} & & $48_a$ & $50_b$ & $50_c$ \\ \hline 1) & \textbf{P} & 1 & 2 & 2 \end{tabular}\\ Permutations: $(48_a,48_b)$. \label{l249e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(49) < E_8$, $p = 2$}
\begin{tabular}{rc|*{6}{c}} & & 1 & $24_a$ & $24_b$ & $48_{a}$ & $48_{b}$ & 50 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 0 & 4 \\ 2) & & 6 & 1 & 1 & 1 & 2 & 1 \\ 3) & & 6 & 3 & 3 & 1 & 0 & 1 \end{tabular}\\ Permutations: $(48_a,48_b)$. \label{l249e8p2} \end{table}
\begin{table}[H]\small
\caption{$L_2(61) < E_8$, $p = 0$, $31$ or $p \nmid |H|$}
\begin{tabular}{rc|*{4}{c}} & & $62_g$ & $62_h$ & $62_i$ & $62_j$ \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 1 \end{tabular} \label{l261e8p0} \label{l261e8p31} \end{table}
\begin{table}[H]\small \caption{$L_2(61) < E_8$, $p = 5$}
\begin{tabular}{rc|*{1}{c}} & & $62_a$ \\ \hline 1) & \textbf{P} & 4 \end{tabular} \label{l261e8p5} \end{table}
\begin{table}[H]\small \caption{$L_2(61) < E_8$, $p = 3$}
\begin{tabular}{rc|*{2}{c}} & & $62_b$ & $62_e$ \\ \hline 1) & \textbf{P} & 2 & 2 \end{tabular} \label{l261e8p3} \end{table}
\begin{table}[H]\small \caption{$L_2(61) < E_8$, $p = 2$}
\begin{tabular}{rc|*{10}{c}} & & 1 & $30_a$ & $30_b$ & $62_a$ & $62_b$ & $62_c$ & $62_d$& $62_e$ & $62_f$ & $62_g$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 2) & & 2 & 2 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 3) & & 2 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 4) & & 2 & 0 & 2 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{tabular} \label{l261e8p2} \end{table}
\subsection{Cross-characteristic Groups $\ncong L_2(q)$}
\begin{table}[H]\small
\caption{$L_3(3) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{12}{c}} & & 1 & 12 & 13 & $16_a$ & $(16_a)^{*}$ & $16_b$ & $(16_b)^{*}$ & $26_a$ & $26_b$ & $(26_b)^{*}$ & 27 & 39 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 1 & 2 \\ 2) & \textbf{P} & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 4 & 0 & 0 & 3 & 1 \\ 3) & & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2 & 2 & 0 & 3 \\ 4) & & 1 & 0 & 2 & 0 & 0 & 0 & 0 & 3 & 2 & 2 & 0 & 1 \\ 5) & & 1 & 0 & 2 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 2 \\ 6) & & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 & 1 & 0 & 3 \\ 7) & & 2 & 0 & 3 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 0 & 1 \\ 8) & & 3 & 0 & 4 & 2 & 2 & 2 & 2 & 1 & 0 & 0 & 0 & 1 \\ 9) & & 4 & 4 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 0 \\ 10) & & 5 & 4 & 1 & 2 & 2 & 2 & 2 & 0 & 0 & 0 & 2 & 0 \\ 11) & & 6 & 4 & 0 & 2 & 2 & 2 & 2 & 0 & 0 & 0 & 1 & 1 \\ 12) & & 8 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 6 & 0 \\ 13) & & 9 & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 6 & 0 \\ 14) & & 10 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 5 & 1 \\ 15) & & 14 & 0 & 0 & 0 & 0 & 0 & 0 & 7 & 1 & 1 & 0 & 0 \\ 16) & & 15 & 0 & 1 & 1 & 1 & 1 & 1 & 6 & 0 & 0 & 0 & 0 \end{tabular} \label{l33e8p0} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_8$, $p = 13$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 11 & 13 & 16 & $26_a$ & $26_b$ & $(26_b)^{*}$ & 39 \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 1 & 1 & 2 & 2 & 2 \\ 2) & & 1 & 0 & 0 & 0 & 1 & 2 & 2 & 3 \\ 3) & & 1 & 0 & 2 & 0 & 3 & 2 & 2 & 1 \\ 4) & & 1 & 1 & 2 & 5 & 0 & 1 & 1 & 2 \\ 5) & & 2 & 0 & 1 & 4 & 0 & 1 & 1 & 3 \\ 6) & & 2 & 0 & 3 & 4 & 2 & 1 & 1 & 1 \\ 7) & \textbf{P}, \textbf{N} & 2 & 5 & 0 & 3 & 4 & 0 & 0 & 1 \\ 8) & & 3 & 0 & 4 & 8 & 1 & 0 & 0 & 1 \\ 9) & \textbf{N} & 8 & 6 & 0 & 6 & 1 & 1 & 1 & 0 \\ 10) & \textbf{N} & 9 & 6 & 1 & 10 & 0 & 0 & 0 & 0 \\ 11) & \textbf{N} & 10 & 5 & 0 & 9 & 0 & 0 & 0 & 1 \\ 12) & & 14 & 0 & 0 & 0 & 7 & 1 & 1 & 0 \\ 13) & & 15 & 0 & 1 & 4 & 6 & 0 & 0 & 0 \end{tabular} \label{l33e8p13} \end{table}
\begin{table}[H]\small \caption{$L_3(3) < E_8$, $p = 2$}
\begin{tabular}{rc|*{7}{c}} & & 1 & $12$ & $16_a$ & $(16_a)^{*}$ & $16_b$ & $(16_b)^{*}$ & 26 \\ \hline 1) & \textbf{P}, \textbf{N} & 4 & 3 & 0 & 0 & 0 & 0 & 8 \\ 2) & \textbf{P}, \textbf{N} & 6 & 4 & 1 & 1 & 1 & 1 & 5 \\ 3) & \textbf{N} & 8 & 5 & 2 & 2 & 2 & 2 & 2 \\ 4) & \textbf{N} & 14 & 0 & 0 & 0 & 0 & 0 & 9 \\ 5) & \textbf{N} & 16 & 1 & 1 & 1 & 1 & 1 & 6 \end{tabular} \label{l33e8p2} \end{table}
\begin{table}[H]\small \caption{$L_3(5) < E_8$, $p \neq 2$, $5$}
\begin{tabular}{rc|*{2}{c}} & & $124_a$ & $124_b$ \\ \hline 1) & \textbf{P} & 1 & 1 \end{tabular} \label{l35e8p3} \label{l35e8p31} \label{l35e8p0} \end{table}
\begin{table}[H]\small \caption{$L_3(5) < E_8$, $p = 2$}
\begin{tabular}{rc|*{4}{c}} & & 1 & 30 & $124_a$ & $124_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 2 \\ 2) & & 4 & 4 & 1 & 0 \end{tabular} \label{l35e8p2} \end{table}
\begin{table}[H]\small \caption{$L_4(3) < E_8$, $p = 2$}
\begin{tabular}{r|*{6}{c}} & 1 & $26_a$ & $26_b$ & 38 & $208_a$ & $208_b$ \\ \hline 1) & 2 & 0 & 0 & 1 & 0 & 1 \\ 2) & 2 & 0 & 0 & 1 & 1 & 0 \\ 3) & 14 & 1 & 8 & 0 & 0 & 0 \\ 4) & 14 & 8 & 1 & 0 & 0 & 0 \end{tabular} \label{l43e8p2} \end{table}
\addtocounter{table}{1} \begin{table}[H]\small \thetable: $L_4(5) < E_8$, $p = 2$. Irreducible on $V_{248}$. \textbf{P} \label{l45e8p2p2} \end{table}
\begin{table}[H]\small
\caption{$U_3(3) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{r|*{14}{c}} & 1 & 6 & $7_a$ & $7_{b}$ & $(7_b)^{*}$ & 14 & $21_a$ & $21_b$ & $(21_b)^{*}$ & 27 & 28 & $28^{*}$ & $32$ & $32^{*}$ \\ \hline
1) & 1 & 0 & 0 & 0 & 0 & 3 & 1 & 0 & 0 & 0 & 1 & 1 & 2 & 2 \\
2) & 2 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 2 & 2 & 1 & 1 \\
3) & 2 & 0 & 1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 & 2 & 2 & 1 & 1 \\
4) & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 1 & 1 \\
5) & 3 & 0 & 1 & 2 & 2 & 1 & 1 & 2 & 2 & 1 & 0 & 0 & 1 & 1 \\
6) & 3 & 0 & 2 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 3 & 3 & 0 & 0 \\
7) & 4 & 0 & 4 & 0 & 0 & 1 & 4 & 0 & 0 & 2 & 0 & 0 & 1 & 1 \\
8) & 4 & 0 & 2 & 3 & 3 & 0 & 1 & 2 & 2 & 1 & 1 & 1 & 0 & 0 \\
9) & 5 & 0 & 5 & 1 & 1 & 0 & 4 & 0 & 0 & 2 & 1 & 1 & 0 & 0 \\ 10) & 6 & 0 & 13& 0 & 0 & 5 & 0 & 0 & 0 & 3 & 0 & 0 & 0 & 0 \\ 11) & 8 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 6 & 0 & 0 & 1 & 1 \\ 12) & 9 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 6 & 1 & 1 & 0 & 0 \\ 13) & 17& 14& 0 & 2 & 2 & 7 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 14) & 52& 0 & 26& 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{tabular} \label{u33e8p0} \end{table}
\begin{table}[H]\small \caption{$U_3(3) < E_8$, $p = 7$}
\begin{tabular}{rc|*{12}{c}} & & 1 & 6 & $7_a$ & $7_b$ & $(7_b)^{*}$ & 14 & $21_a$ & $21_b$ & $(21_b)^{*}$ & 26 & $28$ & $28^{*}$ \\ \hline
1) & \textbf{P}, \textbf{N} & 1 & 2 & 0 & 0 & 0 & 0 & 3 & 1 & 1 & 5 & 0 & 0 \\
2) & \textbf{P}, \textbf{N} & 1 & 4 & 0 & 0 & 0 & 3 & 1 & 0 & 0 & 4 & 1 & 1 \\
3) & \textbf{N} & 2 & 2 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 \\
4) & \textbf{N} & 2 & 2 & 1 & 1 & 1 & 2 & 1 & 0 & 0 & 2 & 2 & 2 \\
5) & & 3 & 0 & 2 & 2 & 2 & 1 & 1 & 0 & 0 & 0 & 3 & 3 \\
6) & \textbf{N} & 4 & 2 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 3 & 2 & 2 \\
7) & \textbf{N} & 4 & 2 & 1 & 2 & 2 & 1 & 1 & 2 & 2 & 3 & 0 & 0 \\
8) & & 5 & 0 & 2 & 3 & 3 & 0 & 1 & 2 & 2 & 1 & 1 & 1 \\
9) & \textbf{N} & 6 & 2 & 4 & 0 & 0 & 1 & 4 & 0 & 0 & 4 & 0 & 0 \\ 10) & \textbf{N} & 7 & 0 & 5 & 1 & 1 & 0 & 4 & 0 & 0 & 2 & 1 & 1 \\ 11) & \textbf{N} & 9 & 0 & 13& 0 & 0 & 5 & 0 & 0 & 0 & 3 & 0 & 0 \\ 12) & \textbf{N} & 14 & 2 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 8 & 0 & 0 \\ 13) & \textbf{N} & 15 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 6 & 1 & 1 \\ 14) & & 17 & 14 & 0 & 2 & 2 & 7 & 1 & 0 & 0 & 0 & 0 & 0 \\ 15) & & 52 & 0 & 26 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{tabular} \label{u33e8p7} \end{table}
\begin{table}[H]\small \caption{$U_3(8) < E_8$, $p \neq 2, 3$}
\begin{tabular}{r|*{7}{c}} & 1 & 56 & $57$ & $57^{*}$ & $133_a$ & $133_b$ & $133_c$ \\ \hline 1) & 1 & 0 & 1 & 1 & 0 & 0 & 1 \\ 2) & 3 & 2 & 0 & 0 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(133_a,133_b,133_c)$. \label{u38e8p7} \label{u38e8p19} \label{u38e8p0} \end{table}
\begin{table}[H]\small \caption{$U_3(8) < E_8$, $p = 3$}
\begin{tabular}{rc|*{5}{c}} & & 1 & 56 & $133_a$ & $133_b$ & $133_c$ \\ \hline 1) & \textbf{N} & 3 & 2 & 0 & 0 & 1 \end{tabular}\\ Permutations: $(133_a,133_b,133_c)$. \label{u38e8p3} \end{table}
\begin{table}[H]\small
\caption{$U_4(2) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{14}{c}} & & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ & 20 & 24 & $40$ & $40^{*}$ & 45 & $45^{*}$ & 64 \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ 2) & \textbf{P} & 0 & 1 & 1 & 0 & 1 & 1 & 0 & 0 & 2 & 1 & 1 & 1 & 1 & 0 \\ 3) & &11 & 0 & 0 &12 & 2 & 2 & 7 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4) & &24 &10 &10 & 0 & 5 & 5 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{tabular} \label{u42e8p0} \end{table}
\begin{table}[H]\small \caption{$U_4(2) < E_8$, $p = 5$}
\begin{tabular}{rc|*{14}{c}} & & 1 & 5 & $5^{*}$ & 6 & 10 & $10^{*}$ & $15_b$ & 20 & 23 & $40$ & $40^{*}$ & 45 & $45^{*}$ & 58 \\ \hline 1) & \textbf{P} & 0 & 0 & 0& 2 & 0 & 0 & 2 & 0 & 0 & 0 & 0 & 1 & 1 & 2 \\ 2) & \textbf{N} & 2 & 1 & 1& 0 & 1 & 1 & 0 & 0 & 2 & 1 & 1 & 1 & 1 & 0 \\ 3) & &11 & 0 & 0&12 & 2 & 2 & 7 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 4) & &25 &10 &10& 0 & 5 & 5 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \end{tabular} \label{u42e8p5} \end{table}
\begin{table}[H]\small \caption{$PSp_4(5) < E_8$, $p = 2$}
\begin{tabular}{rc|*{6}{c}} & & 1 & $12_a$ & $12_b$ & 40 & 64 & $104_b$ \\ \hline 1) & \textbf{P} & 0 & 0 & 0 & 1 & 0 & 2 \\ 2) & \textbf{P}, \textbf{N} & 8 & 4 & 4 & 2 & 1 & 0 \end{tabular} \label{psp45e8p2} \end{table}
\begin{table}[H]\small
\caption{$Sp_6(2) < E_8$, $p = 0$, $5$ or $p \nmid |H|$}
\begin{tabular}{r|*{5}{c}} & 1 & 7 & $21_a$ & 27 & $35_b$ \\ \hline
1) & 4 & 6 & 5 & 1 & 2 \end{tabular} \label{sp62e8p0} \label{sp62e8p5} \end{table}
\begin{table}[H]\small \caption{$Sp_6(2) < E_8$, $p = 7$}
\begin{tabular}{r|*{5}{c}} & 1 & 7 & $21_a$ & 26 & $35_b$ \\ \hline 1) & 5 & 6 & 5 & 1 & 2 \end{tabular} \label{sp62e8p7} \end{table}
\begin{table}[H]\small \caption{$Sp_6(2) < E_8$, $p = 3$}
\begin{tabular}{r|*{5}{c}} & 1 & 7 & 21 & 27 & 35 \\ \hline 1) & 4 & 6 & 5 & 1 & 2 \end{tabular} \label{sp62e8p3} \end{table}
\begin{table}[H]\small \caption{$\Omega_8^{+}(2) < E_8$, $p \neq 2$}
\begin{tabular}{r|*{5}{c}} & 1 & 28 & $35_a$ & $35_b$ & $35_c$ \\ \hline 1) & 3 & 5 & 1 & 1 & 1 \end{tabular} \label{omega8plus2e8p3} \label{omega8plus2e8p5} \label{omega8plus2e8p7} \label{omega8plus2e8p0} \end{table}
\begin{table}[H]\small \caption{$G_2(3) < E_8$, $p \neq 2, 3$}
\begin{tabular}{r|*{6}{c}} & 1 & 14 & $64$ & $64^{*}$ & 78 & $91_c$ \\ \hline 1) & 1 & 0 & 0 & 0 & 2 & 1 \\ 2) & 1 & 2 & 1 & 1 & 0 & 1 \end{tabular} \label{g23e8p7} \label{g23e8p13} \label{g23e8p0} \end{table}
\begin{table}[H]\small \caption{$G_2(3) < E_8$, $p = 2$}
\begin{tabular}{r|*{6}{c}} & 1 & 14 & $64$ & $64^{*}$ & 78 & $90_a$ \\ \hline 1) & 2 & 0 & 0 & 0 & 2 & 1 \\ 2) & 2 & 2 & 1 & 1 & 0 & 1 \end{tabular} \label{g23e8p2} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_8$, $p \neq 2, 3$}
\begin{tabular}{rc|*{4}{c}} & & 1 & 26 & 52 & 196 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 \\ 2) & & 14 & 7 & 1 & 0 \end{tabular} \label{3d42e8p7} \label{3d42e8p13} \label{3d42e8p0} \end{table}
\begin{table}[H]\small \caption{$^{3}D_4(2) < E_8$, $p = 3$}
\begin{tabular}{rc|*{4}{c}} & & 1 & 25 & 52 & 196 \\ \hline 1) & \textbf{P} & 0 & 0 & 1 & 1 \\ 2) & \textbf{N} & 21 & 7 & 1 & 0 \end{tabular} \label{3d42e8p3} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_8$, $p \neq 2, 3, 5$}
\begin{tabular}{r|*{4}{c}} & 1 & 27 & $27^{*}$ & 78 \\ \hline 1) & 8 & 3 & 3 & 1 \end{tabular} \label{2f42e8p13} \label{2f42e8p0} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_8$, $p = 5$}
\begin{tabular}{rc|*{4}{c}} & & 1 & 27 & $27^{*}$ & 78 \\ \hline 1) & \textbf{N} & 8 & 3 & 3 & 1 \end{tabular} \label{2f42e8p5} \end{table}
\begin{table}[H]\small \caption{$^{2}F_4(2)' < E_8$, $p = 3$}
\begin{tabular}{r|*{6}{c}} & 1 & 27 & $27^{*}$ & 77 & $124_a$ & $124_b$ \\ \hline 1) & 0 & 0 & 0 & 0 & 1 & 1 \\ 2) & 9 & 3 & 3 & 1 & 0 & 0 \end{tabular} \label{2f42e8p3} \end{table}
\begin{table}[H]\small
\caption{$^{2}B_2(8) < E_8$, $p = 0$ or $p \nmid |H|$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 14 & $14^{*}$ & $35_b$ & $35_c$ & 64 & $65_a$ & 91 \\ \hline 1) & \textbf{P} & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 2) & & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 2 \\ 3) & & 1 & 1 & 1 & 0 & 0 & 2 & 0 & 1 \\ 4) & & 2 & 0 & 0 & 0 & 0 & 1 & 0 & 2 \\ 5) & & 3 & 0 & 0 & 4 & 3 & 0 & 0 & 0 \end{tabular}\\ Permutations: $(35_a,35_b,35_c)(65_a,65_b,65_c)$. \label{2b28e8p0} \end{table}
\begin{table}[H]\small \caption{$^{2}B_2(8) < E_8$, $p = 13$}
\begin{tabular}{rc|*{6}{c}} & & 1 & 14 & $14^{*}$ & $35$ & $65_a$ & $91$ \\ \hline 1) & & 1 & 0 & 0 & 0 & 1 & 2 \\ 2) & \textbf{P} & 1 & 2 & 2 & 1 & 1 & 1 \\ 3) & & 3 & 0 & 0 & 7 & 0 & 0 \\ 4) & & 3 & 1 & 1 & 1 & 0 & 2 \\ 5) & \textbf{P} & 3 & 3 & 3 & 2 & 0 & 1 \end{tabular}\\ Permutations: $(65_a,65_b,65_c)$. \label{2b28e8p13} \end{table}
\begin{table}[H]\small \caption{$^{2}B_2(8) < E_8$, $p = 7$}
\begin{tabular}{rc|*{7}{c}} & & 1 & 14 & $14^{*}$ & $35_b$ & $35_c$ & 64 & 91 \\ \hline 1) & \textbf{P} & 1 & 1 & 1 & 0 & 0 & 2 & 1 \\ 2) & & 2 & 0 & 0 & 0 & 0 & 1 & 2 \\ 3) & & 3 & 0 & 0 & 3 & 4 & 0 & 0 \\ 4) & & 8 & 4 & 4 & 0 & 0 & 2 & 0 \\ 5) & & 9 & 3 & 3 & 0 & 0 & 1 & 1 \end{tabular}\\ Permutations: $(35_a,35_b,35_c)$. \label{2b28e8p7} \end{table}
\begin{table}[H]\small \caption{$^{2}B_2(8) < E_8$, $p = 5$}
\begin{tabular}{rc|*{8}{c}} & & 1 & 14 & $14^{*}$ & $35_a$ & $35_b$ & $35_c$ & 63 & $65_a$ \\ \hline 1) & \textbf{P} & 1 & 2 & 2 & 0 & 0 & 0 & 2 & 1 \\ 2) & & 3 & 0 & 0 & 0 & 3 & 4 & 0 & 0 \\ 3) & & 3 & 2 & 2 & 0 & 0 & 0 & 3 & 0 \\ 4) & & 3 & 5 & 5 & 1 & 1 & 1 & 0 & 0 \\ \end{tabular}\\ Permutations: $(35_a,35_b,35_c)(65_a,65_b,65_c)$. \label{2b28e8p5} \end{table}
\begin{table}[H]\small \caption{$^{2}B_2(32) < E_8$, $p = 5$}
\begin{tabular}{rc|*{2}{c}} & & 124 & $124^{*}$ \\ \hline 1) & \textbf{P} & 1 & 1 \end{tabular} \label{2b232e8p5} \end{table}
\appendix
\chapter{Auxiliary Data} \label{chap:auxiliary}
\label{sec:altreps} \label{sec:sporadicreps} \label{sec:l2qreps} \label{sec:ccreps}
We collect here various data calculated for use in Chapters \ref{chap:disproving}--\ref{chap:gcr}, as well as references to data already in the literature. All of the following information either appears in the Atlas of Finite Groups \cite{MR827219}, the modular Atlas \cite{MR1367961} or the list of Hiss and Malle \cites{MR1835851,MR1942256}, or is straightforward to calculate using well-known routines implemented in Magma \cite{MR1484478}.
In calculating the tables of feasible characters, we have made use of Brauer characters for various finite groups in Table \ref{tab:subtypes} and their proper covers. Of the Brauer characters we require, the only ones not appearing elsewhere in the literature are those of the alternating groups $\Alt_{n}$ with $13 \le n \le 17$ in characteristic $2$. Here, we give enough character values to verify that each feasible character appears in the relevant table of Chapter \ref{chap:thetables}.
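As an informal sanity check on rows of these tables (our own illustration, not part of the original calculations), the composition-factor dimensions of a feasible character, weighted by their multiplicities, must sum to $\dim E_8 = 248$. A minimal sketch in Python, using rows taken verbatim from the tables above:

```python
# Illustrative check (not from the text): a feasible character on the
# adjoint module of E8 must have its factor dimensions, weighted by
# multiplicity, sum to dim(E8) = 248.

E8_DIM = 248

def total_dim(factors):
    """factors: iterable of (module_dimension, multiplicity) pairs."""
    return sum(d * m for d, m in factors)

# L2(32) < E8, p = 0: one copy of each of eight 31-dimensional modules.
assert total_dim([(31, 1)] * 8) == E8_DIM

# L2(61) < E8, p = 0: one copy of each of 62_g, 62_h, 62_i, 62_j.
assert total_dim([(62, 1)] * 4) == E8_DIM

# L2(25) < E8, p = 13, row 1: six 24-dimensional and four 26-dimensional factors.
assert total_dim([(24, 6), (26, 4)]) == E8_DIM
```

The same check applies to every row; it catches transcription slips but of course does not verify that a given row really arises from an embedding.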
\begin{table}[H]\small \caption{Alt$_{n}$ Brauer Character Values, Degree $\le 248$, $p = 2$}
\begin{tabular}{c|*{12}{r}} & \multicolumn{12}{c}{Cycle type/Character Value} \\ \hline $H = \Alt_{17}$ & $3$ & $3^{2}$ & $3^{3}$ & $3^{4}$ & $3^{5}$ & $5$ & $5^{2}$ & $5^{3}$ & $7$ & $7^{2}$ & $11$ & $13$ \\ \hline $16$ & $13$ & $10$ & $7$ & $4$ & $1$ & $11$ & $6$ & $1$ & $9$ & $2$ & $5$ & $3$ \\ $118$ & $76$ & $43$ & $19$ & $4$ & $-2$ & $53$ & $13$ & $-2$ & $34$ & $-1$ & $8$ & $1$ \\ $128_a$, $128_b$ & $-64$ & $32$ & $-16$ & $8$ & $-4$ & $-32$ & $8$ & $-2$ & $16$ & $2$ & $-4$ & $-2$ \\ \hline \hline $H = \Alt_{16}$ & $3$ & $3^{2}$ & $3^{3}$ & $3^{4}$ & $3^{5}$ & $5$ & $5^{2}$ & $5^{3}$ & $7$ & $7^{2}$ & $11$ & $13$ \\ \hline $14$ & $11$ & $8$ & $5$ & $2$ & $-1$ & $9$ & $4$ & $-1$ & $7$ & $0$ & $3$ & $1$ \\ $64_a$, $64_b$ & $-32$ & $16$ & $-8$ & $4$ & $-2$ & $-16$ & $4$ & $-1$ & $8$ & $1$ & $-2$ & $-1$ \\ $90$ & $54$ & $27$ & $9$ & $0$ & $0$ & $35$ & $5$ & $0$ & $20$ & $-1$ & $2$ & $-1$ \\ \hline \hline $H = \Alt_{15}$ & \multicolumn{12}{c}{Same as $\Alt_{16}$} \\ \hline \hline $H = \Alt_{14}$ & $3$ & $3^{2}$ & $3^{3}$ & $3^{4}$ & $5$ & $5^{2}$ & $7$ & $7^{2}$ & $11$ & $13$ \\ \cline{1-11} $12$ & $9$ & $6$ &$3$ &$0$& $7$& $2$ &$5$& $-2$ &$1$ &$-1$ \\ $64_a$ & $34$ & $13$ &$1$ &$-2$ &$19$ &$-1$ &$8$ &$1$ &$-2$ &$-1$ \\ $64_b$ & $-32$ & $16$ &$-8$& $4$& $-16$& $4$ &$8$& $1$& $-2$ &$-1$ \\ $208$ & $76$ &$16$& $1$& $4$& $28$& $-2$& $5$& $-2$& $-1$& $0$ \\ \hline \hline $H = \Alt_{13}$ & $3$ & $3^{2}$ & $3^{3}$ & $3^{4}$ & $5$ & $5^{2}$ & $7$ & $11$ & \multicolumn{2}{c}{$13_a$} & \multicolumn{2}{c}{$13_b$} \\ \cline{1-13} $12$ & $9$ & $6$ & $3$ & $0$ & $7$ & $2$ & $5$ & $1$ & \multicolumn{2}{c}{$-1$} & \multicolumn{2}{c}{$-1$} \\ $32_a$ & $-16$ & $8$ & $-4$ & $2$ & $-8$ & $2$ & $4$ & $-1$ & \multicolumn{2}{c}{$\frac{-1 + \sqrt{13}}{2}$} & \multicolumn{2}{c}{$\frac{-1 - \sqrt{13}}{2}$} \\ $32_b$ & $-16$ & $8$ & $-4$ & $2$ & $-8$ & $2$ & $4$ & $-1$ & \multicolumn{2}{c}{$\frac{-1 - \sqrt{13}}{2}$} & \multicolumn{2}{c}{$\frac{-1 + \sqrt{13}}{2}$} \\ $64$ & $34$ & $13$ & 
$1$ & $-2$ & $19$ & $-1$ & $8$ & $-2$ & \multicolumn{2}{c}{$-1$} & \multicolumn{2}{c}{$-1$} \\ $144_a$, $144_b$ & $-48$ & $12$ & $0$ & $-3$ & $-16$ & $-1$ & $4$ & $1$ & \multicolumn{2}{c}{$1$} & \multicolumn{2}{c}{$1$} \\ $208$ & $76$ & $16$ & $1$ & $4$ & $28$ & $-2$ & $5$ & $-1$ & \multicolumn{2}{c}{$0$} & \multicolumn{2}{c}{$0$} \end{tabular} \end{table}
In addition to the Brauer characters, for many of the irreducible $KH$-modules encountered we have made use of the Frobenius-Schur indicator, as well as the dimension of the first cohomology group.
In the following tables we list all $KH$-modules of dimension at most 248, where $H$ is a finite simple group which embeds into an adjoint exceptional simple algebraic group over the algebraically closed field $K$. Frobenius-Schur indicators are taken from \cite{MR1942256}, and the dimension of the cohomology group $H^{1}(H,V)$, when given, has been calculated using Magma. Note that the list of \cite{MR1942256} does not distinguish between non-isomorphic $KH$-modules of the same dimension having the same Frobenius-Schur indicator; we nevertheless list all modules in such cases, to make clear when their cohomology groups differ.
Whenever $V \ncong V^{*}$ but $H^{1}(H,V) \cong H^{1}(H,V^{*})$, we omit $V^{*}$. Subscripts denote a collection of modules with identical properties; for example, `($15_{a-h}$, $+$, 0)' denotes eight pairwise non-isomorphic modules of dimension 15, each with Frobenius-Schur indicator $+$, such that $H^{1}(H,V)$ vanishes for each such module $V$.
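For readers processing these tables mechanically, the subscript-range shorthand can be unpacked as follows. This small helper (the name `expand` and its plain-text input format are our own hypothetical choices, not part of the text) turns a label such as `15_{a-h}` into the individual module labels:

```python
# Hypothetical helper: expand the shorthand "15_{a-h}" used in the
# module tables into the eight labels 15_a, 15_b, ..., 15_h.
import string

def expand(label):
    dim, _, sub = label.partition("_")
    sub = sub.strip("{}")
    if "-" in sub:
        lo, hi = sub.split("-")
        letters = string.ascii_lowercase
        return [f"{dim}_{c}" for c in letters[letters.index(lo):letters.index(hi) + 1]]
    return [label]

assert expand("15_{a-h}") == [f"15_{c}" for c in "abcdefgh"]
assert expand("26_a") == ["26_a"]
```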
\begin{longtable}{c|c|l} \caption{$H$ alternating}\\ \hline $n$ & char $K = p$ & \multicolumn{1}{c}{($V$, ind($V$), dim $H^{1}(H,V)$)} \\ \hline 5 & 2 & ($2_{a,b}$, $-$, 1), (4, $+$, 0) \\ & 3 & ($3_{a,b}$, $+$, 0), (4, $+$, 1) \\ & 5 & (3, $+$, 1), (5, $+$, 0) \\ & $p \neq 2, 3, 5$ & ($3_{a,b}$, $+$), ($4_a$, $+$), (5, $+$) \\ \hline 6 & 2 & ($4_{a,b}$, $-$, 1), ($8_{a,b}$, $+$, 0), \\ & 3 & ($3_{a,b}$, $+$, 0), (4, $+$, 2), (9, $+$, 0) \\ & 5 & ($5_{a,b}$, $+$, 0), (8, $+$, 1), ($10_a$, $+$, 0) \\ & $p \neq 2, 3, 5$ & ($5_{a,b}$, $+$), ($8_{a,b}$, $+$), (9, $+$), (10, $+$) \\ \hline 7 & 2 & ($4$, $\circ$, 0), ($6_a$, $+$, 0), ($14$, $+$, 1), (20, $-$, 1) \\ & 3 & (6, $+$, 0), (10, $\circ$, 1), (13, $+$, 2), (15, $+$, 0) \\ & 5 & ($6_a$, $+$, 0), (8, $+$, 0), (10, $\circ$, 0), (13, $+$, 1), ($15_a$, $+$, 0),\\ & & (35, $+$, 0) \\ & 7 & (5, $+$, 1), (10, $+$, 0), ($14_{a,b}$, $+$, 0), ($21_a$, $+$, 0), (35, $+$, 0) \\ & $p \neq 2,3,5,7$ & ($6_a$, $+$), (10, $\circ$), ($14_{a,b}$, $+$), ($15_a$, $+$), ($21_a$, $+$), (35, $+$) \\ \hline 8 & 2 & ($4$, $\circ$, 0), ($6$, $+$, 1), ($14$, $+$, 1), (20, $\circ$, 1), (64, $+$, 0) \\ & 3 & (7, $+$, 0), (13, $+$, 1), (21, $+$, 0), (28, $+$, 0), (35, $+$, 1), \\ & & (45, $\circ$, 0) \\ & 5 & (7, $+$, 0), (13, $+$, 1), (20, $+$, 0), ($21_{a,b}$, $+$, 0), (35, $+$, 0), \\ & & (43, $+$, 0), (45, $\circ$, 0), (70, $+$, 0) \\ & 7 & (7, $+$, 0), (14, $+$, 0), (19, $+$, 1), ($21_a$, $+$, 0), ($21_b$, $\circ$, 0), \\ & & (28, $+$, 0), (35, $+$, 0), (45, $+$, 0), (56, $+$, 0), (70, $+$, 0) \\ & $p \neq 2, 3, 5,7$ & (7, $+$), (14, $+$), (20, $+$), ($21_a$, $+$), ($21_b$, $\circ$), (28, $+$), \\ & & (35, $+$), (45, $\circ$), ($56_a$, $+$), ($64_a$, $+$), (70, $+$) \\ \hline 9 & 2 & ($8_{a,b,c}$, $+$, 0), (20, $\circ$, 1), (26, $+$, 2), (48, $+$, 0), (78, $+$, 1), \\ & & (160, $+$, 0) \\ & 3 & (7, $+$, 1), (21, $+$, 0), (27, $+$, 0), (35, $+$, 1), (41, $+$, 1), \\ & & (162, $+$, 0), (189, $+$, 0) \\ & 5 & ($8_a$, $+$, 0), (21, $+$, 0), 
(27, $+$, 0), (28, $+$, 0), (34, $+$, 0), ($35_{a,b}$, $+$, 0), \\ & & ($56_a$, $+$, 0), (83, $+$, 1), (105, $+$, 0), (120, $+$, 0), (133, $+$, 0), \\ & & (134, $+$, 0) \\ & 7 & ($8_a$, $+$, 0), (19, $+$, 0), (21, $\circ$, 0), (28, $+$, 0), ($35_{a,b}$, $+$, 0),\\ & & (42, $+$, 0), (47, $+$, 1), ($56_a$, $+$, 0), (84, $+$, 0), (101, $+$, 0),\\ & & (105, $+$, 0), (115, $+$, 0), (168, $+$, 0), (189, $+$, 0) \\ & $p \neq 2,3,5,7$ & ($8_a$, $+$), (21, $\circ$), (27, $+$), (28, $+$), ($35_{a,b}$, $+$),\\ & & (42, $+$), ($48_a$, $+$), ($56_a$, $+$), (84, $+$), (105, $+$), \\ & & ($120_a$, $+$), (162, $+$), (168, $+$), (189, $+$), (216, $+$) \\ \hline 10 & 2 & (8, $-$, 1), (16, $+$, 0), (26, $+$, 1), (48, $+$, 0), ($64_{a,b}$, $+$, 0), \\ & & (160, $+$, 0), (198, $+$, 2), (200, $+$, 1) \\ & 3 & (9, $+$, 0), (34, $+$, 1), (36, $+$, 0), (41, $+$, 1), (84, $+$, 1), \\ & & (90, $+$, 0), (126, $+$, 0), (224, $+$, 1) \\ & 5 & ($8_a$, $+$, 1), (28, $+$, 0), (34, $+$, 0), ($35_{a,b,c}$, $+$, 0), (55, $+$, 0), \\ & & ($56_a$, $+$, 0), (75, $+$, 0), ($133_{a,b}$, $+$, 0), \\ & & (155, $+$, 0), ($160_a$, $+$, 0), (217, $+$, 1), (225, $+$, 0) \\ & 7 & (9, $+$, 0), (35, $+$, 0), (36, $+$, 0), (42, $+$, 0), (66, $+$, 0), \\ & & (84, $+$, 0), (89, $+$, 1), (101, $+$, 0), (124, $+$, 0), (126, $+$, 0), \\ & & (199, $+$, 0), (210, $+$, 0), ($224_{a,b}$, $+$, 0) \\ & $p \neq 2, 3, 5, 7$ & (9, $+$), (35, $+$), (36, $+$), (42, $+$), (75, $+$), (84, $+$), (90, $+$), \\ & & (126, $+$), (160, $+$), (210, $+$), ($224_{a,b}$, $+$), (225, $+$) \\ \hline 11 & 2 & (10, $+$, 0), (16, $\circ$, 0), (44, $+$, 1), (100, $+$, 0), (144, $+$, 0),\\ & & (164, $-$, 1), (186, $+$, 1), (198, $+$, 2) \\ & 11 & (9, $+$, 1), (36, $+$, 0), (44, $+$, 0), (84, $+$, 0), (110, $+$, 0), \\ & & (126, $+$, 0), (132, $+$, 0), (165, $+$, 0), (231, $+$, 0) \\ \hline 12 & 2 & (10, $+$, 1), (16, $\circ$, 0), (44, $+$, 1), (100, $+$, 0), (144, $\circ$, 0),\\ & & (164, $-$, 1) \\ 13 & 2 &(12, $+$, 0), (32$_{a,b}$, $+$, 0), (64, 
$-$, 2), (144, $\circ$, 0), (208, $+$, 0) \\ 14 & 2 & (12, $-$, 1), (64$_a$, $-$, 1), (64$_b$, $+$, 0), (208, $+$) \\ 15 & 2 & (14, $+$, 0), (64, $\circ$, 0), (90, $+$, 1) \\ 16 & 2 & (14, $+$, 1), (64, $\circ$, 0), (90, $+$, 1) \\ 17 & 2 & (16, $+$, 0), (118, $+$, 2), ($128_{a,b}$, $+$, 0) \label{tab:altreps} \end{longtable}
\begin{longtable}{c|c|l} \caption{$H$ sporadic}\\ \hline $H$ & char $K = p$ & \multicolumn{1}{c}{($V$, ind($V$), dim $H^{1}(H,V))$} \\ \hline \endfirsthead $M_{11}$ & 2 & (10, $+$, 1), (16, $\circ$, 0), (44, $+$, 1) \\ & 3 & (5, $\circ$, 0), ($5^{*}$, $\circ$, 1), ($10_a$, $+$, 0), ($10_b$, $\circ$, 1), ($(10_b)^{*}$, $\circ$, 0),\\ & & (24, $+$, 0), (45, $+$, 0) \\ & 5 & ($10_a$, $+$, 0), ($10_b$, $\circ$, 0), (11, $+$, 0), (16, $\circ$, 0), ($16^{*}$, $\circ$, 1), \\ & & (45, $+$, 0), (55, $+$, 0) \\ & 11 & (9, $+$, 1), (10, $\circ$, 0), (11, $+$, 0), (16, $+$, 0), (44, $+$, 0), \\ & & (55, $+$, 0) \\ & $p \neq 2,3,5,11$ & ($10_a$, $+$), ($10_b$, $\circ$), (11, $+$), (16, $\circ$), (44, $+$), (45, $+$),\\ & & (55, $+$) \\ \hline $M_{12}$ & 2 & (10, $+$, 2), (16, $\circ$, 0), (44, $+$, 1), (144, $+$, 0) \\ & 3 & ($10_{a,b}$, $+$, 1), (15, $\circ$, 1), (34, $+$, 1), ($45_{a-c}$, $+$, 0),\\ & & ($54$, $+$, 0), ($99$, $+$, 0) \\ & 5 & ($11_{a,b}$, $+$, 0), (16, $\circ$, 0), (45, $+$, 0), ($55_{a-c}$, $+$, 0),\\ & & (66, $+$, 0), (78, $+$, 0), (98, $+$, 1), ($120_a$, $+$, 0) \\ & 11 & ($11_{a,b}$, $+$, 0), (16, $+$, 0), (29, $+$, 0), (53, $+$, 1), \\ & & ($55_{a-c}$, $+$, 0), (66, $+$, 0), (91, $+$, 0), (99, $+$, 0), (176, $+$, 0) \\ & $p\neq 2,3,5,11$ & ($11_{a,b}$, $+$), (16, $\circ$), (44, $+$), (45, $+$), (54, $+$), ($55_{a-c}$, $+$),\\ & & (66, $+$), (99, $+$), ($120_a$, $+$), (144, $+$), (176, $+$)\\ \hline $M_{22}$ & 2 & (10, $\circ$, 1), ($10^{*}$, $\circ$, 0), (34, $-$, 1), (70, $\circ$, 1), (98, $-$, 1) \\ & 5 & ($21_a$, $+$, 0), ($45_a$, $\circ$, 0), (55, $+$, 0), (98, $+$, 1), (133, $+$, 0),\\ & & ($210_a$, $+$, 0) \\ \hline $J_1$ & 11 & (7, $+$, 0), (14, $+$, 0), (27, $+$, 0), (49, $+$, 0), (56, $+$, 0),\\ & & (64, $+$, 0), (69, $+$, 0), ($77_{a-c}$, $+$, 0), (106, $+$, 0), \\ & & (119, $+$, 1), (209, $+$, 0) \\ \hline $J_2$ & 2 & ($6_{a,b}$, $-$, 1), ($14_{a,b}$, $+$, 0), (36, $+$, 0), ($64_{a,b}$, $+$, 0),\\ & & (84, $+$, 1), (160, $+$, 0) \\ & 3 & 
($13_{a,b}$, $+$, 1), ($21_{a,b}$, $+$, 0), ($36_a$, $+$, 0), ($57_{a,b}$, $+$, 0),\\ & & (63, $+$, 0), (90, $+$, 0), (133, $+$, 0), ($189_{a,b}$, $+$, 0),\\ & & (225, $+$, 0) \\ & 5 & ($14_a$, $+$, 0), (21, $+$, 1), (41, $+$, 0), (70, $+$, 0), (85, $+$, 0), \\ & & (90, $+$, 0), (175, $+$, 0), (189, $+$, 0), (225, $+$, 0) \\ & 7 & ($14_{a,b}$, $+$, 0), ($21_{a,b}$, $+$, 0), (36, $+$, 0), (63, $+$, 0), \\ & & ($70_{a,b}$, $+$, 0), (89, $+$, 1), (101, $+$, 0), (124, $+$, 0),\\ & & (126, $+$, 0), (175, $+$, 0), ($189_{a,b}$, $+$, 0), (199, $+$, 0), \\ & & ($224_{a,b}$, $+$, 0) \\ & $p \neq 2,3,5,7$ & ($14_{a,b}$, $+$), ($21_{a,b}$, $+$), (36, $+$), (63, $+$), $(70_{a,b},+)$, (90, $+$),\\ & & ($126$, $+$), (160, $+$), (175, $+$), $(189_{a,b},+)$, \\ & & ($224_{a,b}$, $+$), ($225$, $+$)\\ \hline $J_3$ & 2 & ($78_{a,b}$, $+$, 0), (80, $+$, 0), (84, $\circ$, 1), (244, $+$, 1) \label{tab:sporadicreps} \end{longtable}
\begin{longtable}{c|c|l} \caption{$H \cong L_{2}(q)$}\\ \hline $q$ & char $K = p$ & \multicolumn{1}{c}{($V$, ind($V$), dim $H^{1}(H,V)$)} \\ \hline \endfirsthead 7 & 3 & ($3$, $\circ$, 0), ($6_a$, $+$, 0), (7, $+$, 1) \\
& $p \neq 2,3,7$ & (3, $\circ$), ($6_a$, $+$), (7, $+$), (8, $+$) \\ \hline 8 & 3 & (7, $+$, 1), ($9_{a-c}$, $+$, 0), \\
& 7 & ($7_{a-d}$, $+$, 0), (8, $+$, 1) \\
& $p \neq 2,3,7$ & ($7_{a-d}$, $+$), (8, $+$), $(9_{a-c}$, $+$) \\ \hline 11 & 2 & (5, $\circ$, 1), (10, $+$, 0), ($12_{a,b}$, $+$, 0) \\
& 3 & (5, $\circ$, 0), ($10_a$, $+$, 1), ($12_{a,b}$, $+$, 0) \\
& 5 & (5, $\circ$, 0), ($10_{a,b}$, $+$, 0), (11, $+$, 1) \\
& $p \neq 2,3,5,11$ & (5, $\circ$), ($10_{a,b}$, $+$), (11, $+$), ($12_{a,b}$, $+$) \\ \hline 13 & 2 & ($6_{a,b}$, $-$, 1), ($12_{a-c}$, $+$, 0), (14, $+$, 0) \\
& 3 & ($7_{a,b}$, $+$, 0), ($12_{a-c}$, $+$, 0), (13, $+$, 1) \\
& 7 & ($7_{a,b}$, $+$, 0), ($12$, $+$, 1), ($14_{a,b}$, $+$, 0) \\
& $p \neq 2,3,7,13$ & ($7_{a,b}$, $+$), ($12_{a-c}$, $+$), (13, $+$), ($14_{a,b}$, $+$) \\ \hline 16 & 3 & ($15_{a-h}$, $+$, 0), (16, $+$, 1), ($17_{a,b}$, $+$, 0) \\
& 5 & ($15_{a-h}$, $+$, 0), (16, $+$, 1), (17, $+$, 0) \\
& 17 & (15, $+$, 1), ($17_{a-g}$, $+$, 0) \\
& $p \neq 2,3,5,17$ & ($15_{a-h}$, $+$), ($16_a$, $+$), ($17_{a-g}$, $+$) \\ \hline 17 & 2 & ($8_{a,b}$, $-$, 1), ($16_{a-d}$, $+$, 0) \\
& 3 & ($9_{a,b}$, $+$, 0), (16, $+$, 1), ($18_{a-c}$, $+$, 0) \\
& $p \neq 2,3,17$ & ($9_{a,b}$, $+$), ($16_{a-d}$, $+$), (17, $+$), ($18_{a-c}$, $+$) \\ \hline 19 & 2 & (9, $\circ$, 1), ($18_{a,b}$, $+$, 0), ($20_{a-d}$, $+$, 0) \\
& 3 & (9, $\circ$, 0), ($18_{a-d}$, $+$, 0), (19, $+$, 1) \\
& 5 & (9, $\circ$, 0), ($18_a$, $+$, 1), ($20_{a-d}$, $+$, 0) \\
& $p \neq 2,3,5,19$ & (9, $\circ$), ($18_{a-d}$, $+$), (19, $+$), ($20_{a-d}$, $+$) \\ \hline 25 & 2 & ($12_{a,b}$, $-$, 1), ($24_{a-f}$, $+$, 0), (26, $+$, 0) \\
& 3 & ($13_{a,b}$, $+$, 0), ($24_{a-f}$, $+$, 0), (25, $+$, 1), ($26_a$, $+$, 0) \\
& 13 & ($13_{a,b}$, $+$, 0), (24, $+$, 1), ($26_{a-e}$, $+$, 0) \\
& $p \neq 2,3,5,13$ & ($13_{a,b}$, $+$), ($24_{a-f}$, $+$), (25, $+$), ($26_{a-e}$, $+$) \\ \hline 27 & 2 & (13, $\circ$, 1), ($26_{a-c}$, $+$, 0), ($28_{a-f}$, $+$, 0) \\
& 7 & (13, $\circ$, 0), ($26_a$, $+$, 1), ($28_{a-f}$, $+$, 0) \\
& 13 & (13, $\circ$, 0), ($26_{a-f}$, $+$, 0), (27, $+$, 1) \\
& $p \neq 2,3,7,13$ & (13, $\circ$), ($26_{a-f}$, $+$), (27, $+$), ($28_{a-f}$, $+$) \\ \hline 29 & 2 & ($14_{a,b}$, $-$, 1), ($28_{a-g}$, $+$, 0), ($30_{a-c}$, $+$, 0) \\
& 3 & ($15_{a,b}$, $+$, 0), ($28_a$, $+$, 1), ($28_{b,c}$, $+$, 0), ($30_{a-f}$, $+$, 0) \\
& 5 & ($15_{a,b}$, $+$, 0), ($28_a$, $+$, 1), ($28_b$, $+$, 0), ($30_{a-f}$, $+$, 0) \\
& 7 & ($15_{a,b}$, $+$, 0), ($28_{a-g}$, $+$, 0), (29, $+$, 1) \\
& $p \neq 2,3,5,7,29$ & ($15_{a,b}$, $+$), ($28_{a-g}$, $+$), (29, $+$), ($30_{a-f}$, $+$) \\ \hline 31 & 2 & (15, $\circ$, 1), ($32_{a-g}$, $+$, 0) \\
& 3 & (15, $\circ$, 0), ($30_{a-g}$, $+$, 0), (31, $+$, 1), ($32_{a,b}$, $+$, 0) \\
& 5 & (15, $\circ$, 0), ($30_{a-g}$, $+$, 0), (31, $+$, 1), (32, $+$, 0) \\
& $p \neq 2,3,5,31$ & (15, $\circ$), ($30_{a-g}$, $+$), (31, $+$), ($32_{a-g}$, $+$) \\ \hline 32 & 3 & ($31_a$, $+$, 1), ($31_{b-f}$, $+$, 0), ($33_{a-o}$, $+$, 0) \\
& 11 & ($31_a$, $+$, 1), ($31_{b}$, $+$, 0), ($33_{a-o}$, $+$, 0) \\
& 31 & ($31_{a-p}$, $+$, 0), (32, $+$, 1) \\
& $p \neq 2,3,11,31$ & ($31_{a-p}$, $+$), (32, $+$), ($33_{a-o}$, $+$) \\ \hline 37 & 2 & ($18_{a,b}$, $-$, 1), ($36_{a-i}$, $+$, 0), ($38_{a-d}$, $+$, 0) \\
& 3 & ($19_{a,b}$, $+$, 0), ($36_{a-i}$, $+$, 0), (37, $+$, 1) \\
& 19 & ($19_{a,b}$, $+$, 0), (36, $+$, 1), ($38_{a-h}$, $+$, 0) \\
& $p \neq 2,3,19,37$ & ($19_{a,b}$, $+$), ($36_{a-i}$, $+$), (37, $+$), ($38_{a-h}$, $+$) \\ \hline 41 & 2 & ($20_{a,b}$, $-$, 1), ($40_{a-j}$, $+$, 0), ($42_{a,b}$, $+$, 0) \\
& 3 & ($21_{a,b}$, $+$, 0), ($40_a$, $+$, 1), ($40_{b-d}$, $+$, 0), ($42_{a-i}$, $+$, 0) \\
& 5 & ($21_{a,b}$, $+$, 0), ($40_{a-j}$, $+$, 0), (41, $+$, 1), ($42_a$, $+$, 0) \\
& 7 & ($21_{a,b}$, $+$, 0), ($40_a$, $+$, 1), ($40_{b}$, $+$, 0), ($42_{a-i}$, $+$, 0) \\
& $p \neq 2,3,5,7,41$ & ($21_{a,b}$, $+$), ($40_{a-j}$, $+$), (41, $+$), ($42_{a-i}$, $+$) \\ \hline 49 & 2 & ($24_{a,b}$, $-$, 1), ($48_{a-l}$, $+$, 0), (50, $+$, 0) \\
& 3 & ($25_{a,b}$, $+$, 0), ($48_{a-l}$, $+$, 0), (49, $+$, 1), ($50_{a-c}$, $+$, 0) \\
& 5 & ($25_{a,b}$, $+$, 0), (48, $+$, 1), ($50_{a-k}$, $+$, 0) \\
& $p \neq 2,3,5,7$ & ($25_{a,b}$, $+$), ($48_{a-l}$, $+$), (49, $+$), ($50_{a-k}$, $+$) \\ \hline 61 & 2 & ($30_{a,b}$, $-$, 1), ($60_{a-o}$, $+$, 0), ($62_{a-g}$, $+$, 0) \\
& 3 & ($31_{a,b}$, $+$, 0), ($60_{a-o}$, $+$, 0), (61, $+$, 1), ($62_{a-d}$, $+$, 0) \\
& 5 & ($31_{a,b}$, $+$, 0), ($60_{a-o}$, $+$, 0), (61, $+$, 1), ($62_{a,b}$, $+$, 0) \\
& 31 & ($31_{a,b}$, $+$, 0), (60, $+$, 1), ($62_{a-n}$, $+$, 0) \\
& $p \neq 2,3,5,31,61$ & ($31_{a,b}$, $+$), ($60_{a-o}$, $+$), (61, $+$), ($62_{a-n}$, $+$) \label{tab:l2qreps} \end{longtable}
\begin{longtable}{c|c|>{\raggedright\arraybackslash}p{0.65\textwidth}} \caption{$H$ of Lie type, $H \ncong L_{2}(q)$}\\ \hline $H$ & char $K = p$ & \multicolumn{1}{c}{($V$, ind($V$), dim $H^{1}(H,V)$)} \\ \hline \endfirsthead $L_3(3)$ & 2 & $(12,+,1)$, $(16_a,\circ,0)$, $(16_b,\circ,0)$, $(26,+,1)$ \\
& 13 & $(11,+,1)$, $(13,+,0)$, $(16,+,0)$, $(26_a,+,0)$, $(26_b,\circ,0)$, $(39,+,0)$ \\
& $p \neq 2,3,13$ & $(12,+)$, $(13,+)$, $(16_{a,b},\circ)$, $(26_a,+)$, $(26_b,\circ)$, $(27,+)$, $(39,+)$ \\ \hline $L_3(4)$ & 3 & $(15_{a-c},+,0)$, $(19,+,2)$, $(45,\circ,0)$, $(63_{a,b},+,0)$ \\
& 5 & $(20,+,0)$, $(35_{a-c},+,0)$, $(45_a,\circ,0)$, $(63,+,1)$ \\
& 7 & $(19,+,1)$, $(35_{a-c},+,0)$, $(45,+,0)$, $(63_{a,b},+,0)$ \\
& $p \neq 2,3,5,7$ & $(20,+)$, $(35_{a-c},+)$, $(45_a,\circ)$, $(63_{a,b},+)$, $(64,+)$ \\ \hline $L_3(5)$ & 2 & $(30,+,1)$, $(96_{a-e},\circ,0)$, $(124_a,+,0)$, $(124_b,-,1)$ \\
& 3 & $(30,+,0)$, $(31_a,+,0)$, $(31_b,\circ,0)$, $(96_{a-e},\circ,0)$, $(124_a,+,1)$, $(124_b,+,0)$, $(124_{c,d},\circ,0)$, $(186,+,0)$ \\
& 31 & $(29,+,1)$, $(31_a,+,0)$, $(31_b,\circ,0)$, $(96,+,0)$, $(124_{a,b},+,0)$, $(124_{c-f},\circ,0)$, $(155_a,+,0)$, $(155_b,\circ,0)$, $(186,+,0)$ \\
& $p \neq 2,3,5,31$ & $(30,+)$, $(31_a,+)$, $(31_b,\circ)$, $(96_{a-e},\circ)$, $(124_{a,b},+)$, $(124_{c-f},\circ)$, $(125,+)$, $(155_a,+)$, $(155_b,\circ)$, $(186,+)$ \\ \hline $L_4(3)$ & 2 & $(26_{a,b},+,0)$, $(38,+,2)$, $(208_{a,b},+,0)$ \\ \hline $L_4(5)$ & 2 & (154,+,2), $(248_{a,b},+,0)$ \\ \hline $U_3(3)$ & 7 & $(6,-,0)$, $(7_a,+,0)$, $(7_b,\circ,0)$, $(14,+,0)$, $(21_a,+,0)$, $(21_b,\circ,0)$, $(26,+,1)$, $(28,\circ,0)$ \\
& $p \neq 2,3,7$ & $(6,-)$, $(7_a,+)$, $(7_b,\circ)$, $(14,+)$, $(21_a,+)$, $(21_b,\circ)$, $(27,+)$, $(28,\circ)$, $(32,\circ)$ \\ \hline $U_3(8)$ & 3 & $(56,-,1)$, $(133_{a-c},+,0)$ \\
& 7 & $(56,-,0)$, $(57,\circ,0)$, $(133_{a-c},+,0)$ \\
& 19 & $(56,-,0)$, $(57,\circ,0)$, $(133_{a-c},+,0)$ \\
& $p \neq 2,3,7,19$ & $(57,\circ)$, $(133_{a-c},+)$ \\ \hline $U_4(2)$ & 5 & $(5,\circ,0)$, $(6,+,0)$, $(10,\circ,0)$, $(15_{a,b},+,0)$, $(20_a,+,0)$, $(23,+,1)$, $(30_a,+,0)$, $(30_b,\circ,0)$, $(40,\circ,0)$, $(45,\circ,0)$, $(58,+,0)$, $(60_a,+,0)$ \\
& $p \neq 2,3,5$ & $(5,\circ)$, $(6,+)$, $(10,\circ)$, $(15_{a,b},+)$, $(20_a,+)$, $(24,+)$, $(30_a,+)$, $(30_b,\circ)$, $(40,\circ)$, $(45,\circ)$, $(60_a,+)$, $(64_a,+)$, $(81,+)$ \\ \hline $U_4(3)$ & 2 & $(20,+,1)$, $(34_{a,b},-,1)$, $(70_{a,b},\circ,0)$, $(120,+,0)$ \\ \hline $PSp_4(5)$ & 2 & $(12_{a,b},-,1)$, $(40,+,0)$, $(64,-,1)$, $(104_{a,b},+,0)$, $(208_{a,b},+,0)$, $(248_{a,b},+,0)$ \\ \hline $Sp_6(2)$ & 3 & $(7,+,0)$, $(14,+,1)$, $(21,+,0)$, $(27,+,0)$, $(34,+,1)$, $(35,+,0)$, $(49,+,0)$, $(91,+,0)$, $(98,+,1)$, $(189_{a-c},+,0)$, $(196,+,0)$ \\
& 5 & $(7,+,0)$, $(15,+,0)$, $(21_{a,b},+,0)$, $(27,+,0)$, $(35_{a,b},+,0)$, $(56,+,0)$, $(70,+,0)$, $(83,+,1)$, $(105_{a-c},+,0)$, $(120,+,0)$, $(133,+,0)$, $(141,+,0)$, $(168_{a,b},+,0)$, $(210_{a,b},+,0)$ \\
& 7 & $(7,+,0)$, $(15,+,0)$, $(21_{a,b},+,0)$, $(26,+,1)$, $(35_{a,b},+,0)$, $(56,+,0)$, $(70,+,0)$, $(84,+,0)$, $(94,+,0)$, $(105_{a-c},+,0)$, $(168,+,0)$, $(189_{a-c},+,0)$, $(201,+,0)$, $(210_{a,b},+,0)$ \\ & $p \neq 2,3,5,7$ & $(7,+)$, $(15,+)$, $(21_{a,b},+)$, $(27,+)$, $(35_{a,b},+)$, $(56,+)$, $(70,+$), $(84,+)$, $(105_{a-c},+)$, $(120,+)$, $(168,+)$, $(189_{a-c},+)$, $(210_{a,c},+)$, $(216,+)$ \\ \hline $\Omega_8^{+}(2)$ & 3 & $(28,+,0)$, $(35_{a-c},+,0)$, $(48,+,2)$, $(147,+,0)$ \\ & 5 & $(28,+,0)$, $(35_{a-c},+,0)$, $(50,+,0)$, $(83_{a-c},+,1)$, $(175,+,0)$, $(210_{a-c},+,0)$ \\ & $p \neq 2,3,5$ & $(28,+,0)$, $(35_{a-c},+,0)$, $(50,+,0)$, $(84_{a-c},+,0)$, $(175,+,0)$, $(210_{a-c},+,0)$ \\ \hline $G_2(3)$ & 2 & $(14,+,0)$, $(64,\circ,0)$, $(78,+,0)$, $(90_{a-c},+,1)$ \\ & 7 & $(14,+,0)$, $(64,\circ,0)$, $(78,+,0)$, $(91_{a-c},+,0)$, $(103,+,1)$, $(168,+,0)$, $(182_{a,b},+,0)$ \\ & 13 & $(14,+,0)$, $(64,\circ,0)$, $(78,+,0)$, $(91_{a-c},+,0)$, $(104,+,0)$, $(167,+,1)$, $(182_{a,b},+,0)$ \\ & $p \neq 2,3,7,13$ & $(14,+)$, $(64,\circ)$, $(78,+)$, $(91_{a-c},+)$, $(104,+)$, $(168,+)$, $(182_{a,b},+)$ \\ \hline $^{3}D_4(2)$ & 3 & $(25,+,1)$, $(52,+,0)$, $(196,+,0)$ \\ & $p \neq 2,3$ & $(26,+,0)$, $(52,+,0)$, $(196,+,0)$ \\ \hline $^{2}F_4(2)'$ & 3 & $(26,\circ,0)$, $(27,\circ,0)$, $(77,\circ,1)$, $(124_{a,b},+,0)$ \\ & 5 & $(26,\circ,0)$, $(27,\circ,1)$, $(27^{*},\circ,0)$, $(78,+,0)$, $(109_{a,b},+,0)$ \\ & $p \neq 2,3,5$ & $(26,\circ,0)$, $(27,\circ,0)$, $(78,+,0)$ \\ \hline $^{2}B_2(8)$ & 5 & $(14,\circ,0)$, $(35_{a-c},+,0)$, $(63,+,1)$, $(65_{a-c},+,0)$ \\ & 7 & $(14,\circ,0)$, $(35_{a-c},+,0)$, $(64,+,1)$, $(91,+,0)$\\ & 13 & $(14,\circ,0)$, $(14^{*},\circ,1)$, $(35,+,0)$, $(65_{a-c},+,0)$, $(91,+,0)$ \\ & $p \neq 5,7,13$ & $(14,\circ)$, $(35_{a-c},\circ)$, $(64,+)$, $(65_{a-c},+)$, $(91,+)$ \\ \hline $^{2}B_2(32)$ & 5 & $(124,\circ)$ \label{tab:ccreps} \end{longtable}
\backmatter
\printindex
\end{document}
\begin{document}
\title{Maximal Levi Subgroups Acting on the Euclidean Building of $\GL_n(F)$} \begin{abstract} In this paper we give a complete invariant of the action of $\text{GL}_n(F)\times \text{GL}_m(F)$ on the Euclidean building $\BB_e\GL_{n+m}(F)$, where $F$ is a non-archimedean field. We then use this invariant to give a natural metric on the resulting quotient space. In the special case of the torus acting on the tree $\BB_e\GL_2(F)$ this gives a method for calculating the distance from any vertex to any fixed apartment. \end{abstract} \section{Introduction}
\label{secBe} To understand distance in the $1$-skeleton of a building $\mathcal{B} G$ associated to a reductive algebraic group $G$, one may look at the stabilizer $K$ of a point, and then study the action of $K$ on $\mathcal{B} G$. When working over a non-archimedean field, vertices correspond to maximal compact subgroups. This analysis gives rise to information about $K\backslash G/K$, and therefore about the Hecke algebra \cite{She:1},\cite{She:2}.\\ \\ In this paper we specialize to $G=\text{GL}_n(F)$ and are interested in the double cosets $L\backslash G/K$, where $L\cong \text{GL}_{n_1}(F)\times \text{GL}_{n_2}(F)$ is a maximal Levi subgroup of $G$. The study of the action of $L$ on the building $\mathcal{B}_e\text{GL}_n(F)$ will lead to a description of the distance from any vertex to a certain subbuilding stabilized by $L$. In the case when $n=2$ and $L=T$ is a maximal split torus, our description gives a way of calculating the distance from a given point to a fixed apartment.\\ \\
We also give a combinatorial description of the quotient space $L\backslash \mathcal{B}_e \text{GL}_n(F)$ as follows. Let $A^n=\{(\alpha_i)_{i=1}^n\,|\,\alpha_i\in \mathbb{N},\ \alpha_i\geq \alpha_{i+1}\}$. Then if $n_1\leq n_2$ there is a graph isometry between $L\backslash \mathcal{B}_e \text{GL}_n(F)$ and $A^{n_1}$ when $A^{n_1}$ is endowed with the following metric: $d(\alpha,\beta)=\max_{i=1}^n |\alpha_i-\beta_i|$ where $\alpha,\beta\in A^n$. This result shows that the $1$-skeleton of the resulting quotient space depends only on $\min(n_1,n_2)$. \\ This paper is broken up into two main sections. The first gives a description of the building in terms of $\mathcal{O}$-lattices and describes an invariant of the action of $L$ on this building. The second section gives a geometric interpretation of this invariant, yielding a combinatorial description of the quotient space $L\backslash \BB_e\GL_n(F)$.
\section{Orbits of Maximal Levi Factors on $\BB_e\GL(V)$} \subsection{$\mathcal{O}$-Lattices and $\BB_e\GL(V)$}
Throughout this paper let $F$ be a non-archimedean field. We will denote the ring of integers in $F$ by $\mathcal{O}$, and fix once and for all a uniformizer $\varpi$ of $\mathcal{O}$. Let the unique maximal ideal be denoted by $\mathcal{P}=(\varpi)$, and let the residue field $\mathcal{O}/\mathcal{P}$ of order $p^k=q$ be denoted by $\mathfrak{k}$. Let $\mathcal{P}^k=(\varpi^k)$ for $k\in\mathbb{Z}$. Then $\log_\mathcal{P}(\mathcal{P}^k)=k$. Let $V$ be a vector space defined over $F$. We will describe the Euclidean building $\mathcal{B}_e\text{GL}(V)$ associated to $\text{GL}(V)$. For more details see \cite{Br} or \cite{Ga}. Let $\Lambda\subset V$ be a finitely generated free $\mathcal{O}$-module. Denote by $[\Lambda]$ the homothety class of $\Lambda$, that is $[\Lambda]=\{a\Lambda\,|\,a\in F^\times\}$. \\ \\ Homothety classes of lattices form the vertices of $\mathcal{B}_e\text{GL}(V)$. Two vertices $\lambda_1,\lambda_2\in \mathcal{B}_e\text{GL}(V)$ are incident if there are representatives $\Lambda_i\in \lambda_i$ so that $\varpi\Lambda_1\subset \Lambda_2 \subset \Lambda_1$, i.e. $\Lambda_2/\varpi\Lambda_1$ is a $\mathfrak{k}$-subspace of $\Lambda_1/\varpi\Lambda_1$. The chambers in $\mathcal{B}_e\text{GL}(V)$ are collections of maximally incident vertices. To put this more concretely, assume the dimension of $V$ is $n$. Then a chamber is a collection of $n$ vertices $\lambda_0,\ldots,\lambda_{n-1}$ with representatives $\Lambda_0,\ldots, \Lambda_{n-1}$ satisfying $\varpi\Lambda_0\subsetneq \Lambda_1 \subsetneq \cdots \subsetneq \Lambda_{n-1}\subsetneq \Lambda_0$. A wall of a chamber is any subset of $n-1$ vertices in the given chamber. We will denote by $\mathcal{B}_e\text{GL}(V)^k$ the set of all facets of $\mathcal{B}_e\text{GL}(V)$ of dimension $k$.\\ \\ A frame $\mathcal{F}$ in $V$ is a collection of lines $l_1,\ldots, l_n\subset V$ which are linearly independent and span all of $V$. We now describe certain subcomplexes of $\mathcal{B}_e\text{GL}(V)$.
Define $\mathcal{A}_\mathcal{F}$ to be the subcomplex consisting of vertices $[\Lambda]$ of the following form: \begin{equation} \Lambda=\bigoplus_{i=1}^n \mathcal{O} e_i \end{equation} where $e_i\in l_i\in \mathcal{F}$. Then $\mathcal{A}_\mathcal{F}$ is an apartment of $\mathcal{B}_e\text{GL}(V)$, and every apartment is uniquely determined by a frame in this way. \\ \\ The group $\text{GL}(V)$ has a natural action on $\mathcal{B}_e\text{GL}(V)$, namely the one induced from the action of $\text{GL}(V)$ on $V$. This action preserves distance in the building.
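The apartment $\mathcal{A}_\mathcal{F}$ admits a concrete coordinate model: its vertices are the classes $[\bigoplus_i \mathcal{P}^{a_i}e_i]$, hence correspond to integer tuples $(a_i)$ up to a common shift; two classes are incident exactly when some shift puts every coordinate difference in $\{0,1\}$, and the graph distance in the $1$-skeleton between two such classes is $\max_i(b_i-a_i)-\min_i(b_i-a_i)$. The following Python sketch encodes this standard model as a sanity check; it is our own illustration (the function names are ours), not code from the paper.

```python
def normalize(a):
    """Represent the homothety class of the diagonal lattice
    sum_i P^{a_i} e_i: shifting all exponents by a constant only
    rescales the lattice, so subtract the minimum exponent."""
    m = min(a)
    return tuple(x - m for x in a)

def incident(a, b):
    """Two distinct classes are incident iff for some global shift c,
    a_i <= b_i + c <= a_i + 1 for every i (the exponent form of
    varpi*Lambda_1 inside Lambda_2 inside Lambda_1)."""
    if normalize(a) == normalize(b):
        return False
    diffs = [x - y for x, y in zip(a, b)]
    return any(all(ai <= bi + c <= ai + 1 for ai, bi in zip(a, b))
               for c in range(min(diffs), max(diffs) + 1))

def apartment_distance(a, b):
    """Graph distance in the 1-skeleton between two apartment classes:
    max_i(b_i - a_i) - min_i(b_i - a_i)."""
    d = [bi - ai for ai, bi in zip(a, b)]
    return max(d) - min(d)
```

For the tree of $\GL_2$ this recovers the familiar picture: $(0,0)$ and $(0,1)$ are adjacent, while $(0,0)$ and $(0,2)$ lie at distance $2$.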
\subsection{$GL(W_1)\times GL(W_2)$ acting on $\mathcal{B}_e(W_1\oplus W_2)$}\label{secW1W2} Let $V$ be a vector space over $F$. Fix a maximal Levi subgroup $L$ of $\text{GL}(V)$. Associated to $L$ are subspaces $W_1,W_2\subset V$ satisfying $V=W_1\oplus W_2$. Then $L\cong \text{GL}(W_1)\times \text{GL}(W_2)$. In this section we will describe the orbits of the action of $\text{GL}(W_1)\times \text{GL}(W_2)$ on $\mathcal{B}_e\text{GL}(V)^0$ in terms of an invariant $Q$. Additionally we will give a representative of each orbit.\\ \\ Let $p_i$ be the projection of $V$ onto $W_i$ with respect to our given decomposition. We will use these maps to define invariants of the vertices and then show that, for our action, these invariants classify all orbits. \\ \\ Let $\Lambda$ be an $\mathcal{O}$-lattice. We make the following definitions for $i=1,2$: \begin{eqnarray}
P_i(\Lambda)&=&\text{Im}(p_i|_{\Lambda})\\
K_i(\Lambda)&=&\text{Ker}(p_{i'}|_{\Lambda})=\Lambda\cap W_i \end{eqnarray} where $i'=(i\text{ mod } 2)+1$.\\ \\ These are lattices in $W_i$.
\begin{lemma}\label{K_in_P} $K_i(\Lambda)\subset P_i(\Lambda)$ \end{lemma} \begin{proof} If $v\in K_i(\Lambda)=\Lambda \cap W_i$, then $v\in \Lambda$, so $p_i(v)\in P_i(\Lambda)$. But $p_i(v)=v$ since $v\in W_i$. \end{proof}
Another basic lemma which will not be used immediately but will be useful later on is the following.
\begin{lemma}\label{Lam_eq} Let $\Lambda,\Lambda'$ be $\mathcal{O}$-lattices, and assume that $\Lambda\subset \Lambda'$. Furthermore, assume that $P_i(\Lambda)=P_i(\Lambda')$ and $K_i(\Lambda)=K_i(\Lambda')$. Then $\Lambda=\Lambda'$. \end{lemma} \begin{proof} Let $v'\in\Lambda'$; we wish to show $v'\in \Lambda$. Write $p_i(v')=w_i'$. Because $w_2'\in P_2(\Lambda)$ there is a $w_1\in W_1$ so that $w_1+w_2'\in \Lambda$. Then $w_1'-w_1\in \Lambda'$. Hence $w_1'-w_1\in K_1(\Lambda')=K_1(\Lambda)$, and so $w_1'-w_1\in \Lambda$. Therefore $(w_1'-w_1)+(w_1+w_2')=v'\in\Lambda$. \end{proof}
By lemma \ref{K_in_P} we can define $Q_i(\Lambda)=P_i(\Lambda)/K_i(\Lambda)$. This is a finite $\mathcal{O}$-module.
\begin{prop}\label{theta_map} $Q_1(\Lambda)\cong Q_2(\Lambda)$ as $\mathcal{O}$-modules. This isomorphism class will be denoted by $Q(\Lambda)$. \end{prop} \begin{proof} We make slight modifications to the proof found in \cite{Ne}. Let $p_i':\Lambda\rightarrow Q_i(\Lambda)$ be the composition of $p_i$ with the natural projection map $\pi_i:P_i(\Lambda)\rightarrow Q_i(\Lambda)$. We define a map so that $\forall v\in \Lambda$
\begin{equation}\label{thetamap} \begin{array}{rll} \Theta:Q_1(\Lambda)&\rightarrow & Q_2(\Lambda)\\ p_1'(v)&\mapsto& p_2'(v) \end{array} \end{equation}
We will show that $\Theta$ is well defined, and is an isomorphism.\\ \\ Let $w_1+w_2,w_1'+w_2'\in \Lambda$ with $w_i,w_i'\in W_i$ and $\pi_1(w_1)=\pi_1(w_1')$. Then $\pi_1(w_1-w_1')=0$, and therefore $w_1-w_1'\in K_1(\Lambda)$. Therefore $w_2-w_2'\in K_2(\Lambda)$ and $\pi_2(w_2)=\pi_2(w_2')$, showing $\Theta$ is well defined. It is an isomorphism, because the map $\theta$, defined by reversing the roles of $1$ and $2$, is an inverse map. \end{proof} We now show that $Q$ is a complete invariant of the action of $L$ on $\mathcal{B}_e\text{GL}(V)^0$. \begin{thm}\label{Q_invar} Let $\Lambda,\Lambda'$ be $\mathcal{O}$-lattices. Then $\Lambda$ and $\Lambda'$ are in the same $\text{GL}(W_1)\times \text{GL}(W_2)$ orbit if and only if $Q(\Lambda)=Q(\Lambda')$. \end{thm} \begin{proof} $Q(\Lambda)$ is a $\text{GL}(W_1)\times\text{GL}(W_2)$-invariant since each factor of $\text{GL}(W_i)$ commutes with the projection map $p_i$. We must show that if $Q(\Lambda)=Q(\Lambda')$ then $\exists g\in \text{GL}(W_1)\times \text{GL}(W_2)$ so that $\Lambda=g\Lambda'$.\\ \\ By \cite{Ne} we know $\exists g_1\in \text{GL}(W_1)$ and $g_2\in \text{GL}(W_2)$ so that $g_i P_i(\Lambda')=P_i(\Lambda)$ and $g_i K_i(\Lambda')=K_i(\Lambda)$. So we may replace $\Lambda'$ with $\Lambda''=(g_1,g_2)\Lambda'$. Let $\Theta$ be the map from proposition \ref{theta_map} associated to $\Lambda$, and $\Theta''$ the map associated to $\Lambda''$.\\ \\ We claim $\Lambda=\Lambda''$ if and only if $\Theta=\Theta''$. To prove this we show that one can reconstruct $\Lambda$ from $\Theta$ (which implicitly encodes $Q_i(\Lambda)$ as the domain and range of the map), by taking \begin{equation}
\Lambda_\Theta =\{w_1+w_2\,|\,w_i\in P_i(\Lambda)\text{ and } \Theta(\pi_1(w_1))=\pi_2(w_2)\} \end{equation} First we show $\Lambda\subset \Lambda_\Theta$. Let $w=w_1+w_2\in\Lambda$; then by definition of $\Theta$ we have $\Theta(\pi_1(w_1))=\pi_2(w_2)$, and so $w\in \Lambda_\Theta$. We now show $\Lambda_\Theta\subset \Lambda$. Let $w_1+w_2\in \Lambda_\Theta$. Then $w_1\in P_1(\Lambda)$, so there is a $w_2'\in P_2(\Lambda)$ so that $w_1+w_2'\in \Lambda\subset \Lambda_\Theta$. Then $0+(w_2-w_2')\in \Lambda_\Theta$. So $\pi_2(w_2-w_2')=0$, which implies $w_2-w_2'\in K_2(\Lambda)\subset \Lambda$. Hence $w_1+w_2=(w_1+w_2')+(w_2-w_2')\in \Lambda$ as desired.\\ \\ To complete the proof, we will show there is a $g\in \text{stab}(P_2(\Lambda))\cap \text{stab}(K_2(\Lambda))$ which takes $\Theta''$ to $\Theta$. There is an $\overline{h}\in \text{GL}(P_2(\Lambda)/K_2(\Lambda))$ so that $(1,\overline{h})\Theta''=\Theta$. Let $h\in \text{GL}(W_2)$ be a lift of $\overline{h}$ lying in $\text{stab}(P_2(\Lambda))\cap \text{stab}(K_2(\Lambda))$. Then $(1,h)\Lambda''=\Lambda$. \end{proof}
Now let $[\Lambda]\in\mathcal{B}_e\text{GL}(V)^0$, and $c\in F^\times$. Since $Q(\Lambda)=Q(c\Lambda)$ we will abuse notation and write $Q([\Lambda])=Q(\Lambda)$. \begin{cor} $Q([\Lambda])$ is a complete invariant of the action of $GL(W_1)\times GL(W_2)$ on the space of vertices $\mathcal{B}_e\text{GL}(V)^0$. \end{cor} \subsection{Orbit Representatives} We now give a set of representatives of the orbits. We first do this in the case when $V$ is two dimensional, and then use this case to determine representatives for higher dimensions. \subsubsection{dim$(V)=2$} Let $V$ be a two dimensional vector space over $F$, with decomposition $V=W_1\oplus W_2$. Assume that $W_i$ is spanned by the vector $e_i$. We then define the following class of lattices: \begin{equation}\label{Lamk} \Lambda^k=\text{span}_\mathcal{O} \langle e_1,\varpi^{-k}e_1+e_2\rangle \end{equation} \begin{prop} $Q([\Lambda^k])\cong \mathcal{O}/\mathcal{P}^k$ \end{prop} \begin{proof} $P_1(\Lambda^k)=\langle\varpi^{-k}e_1\rangle$ and $K_1(\Lambda^k)=\langle e_1\rangle$. Therefore $Q(\Lambda^k)\cong \mathcal{P}^{-k}/\mathcal{O}\cong \mathcal{O}/\mathcal{P}^k$. \end{proof} \begin{cor}\label{2dim_reps} $\{[\Lambda^k]\}_{k=0}^\infty$ is a complete set of representatives for the action of $\text{GL}(W_1)\times \text{GL}(W_2)$ on $\mathcal{B}_e\text{GL}(V)^0$ \end{cor} \begin{proof} Let $[\Lambda]\in \mathcal{B}_e\text{GL}(V)^0$. Then $Q([\Lambda])\cong \mathcal{O}/\mathcal{P}^k$ for some $k\in \mathbb{N}$. By theorem \ref{Q_invar} $[\Lambda]$ is in the orbit of $[\Lambda^k]$. \end{proof} \subsubsection{General $V$} We now describe representatives when $V$ is $n$ dimensional. We may assume that $\text{dim}W_i=n_i$ and $n_1\leq n_2$. Choose a basis $\{e_1,\ldots, e_{n_1}\}$ of $W_1$ and a basis $\{f_1,\ldots,f_{n_2}\}$ of $W_2$, and let $Y_i=\text{span}_F(e_i,f_i)$, for $1\leq i \leq n_1$. Let $\alpha=(\alpha_i) \in \mathbb{N}^{n_1}$. Let $[\Lambda^{\alpha_i}]\in \mathcal{B}_e\text{GL}(Y_i)$ be defined as in equation \ref{Lamk} with respect to the basis $\{e_i,f_i\}$.
This allows us to define the following class of lattices: \begin{equation}\label{Lamal} \Lambda^\alpha =\bigoplus_{i=1}^{n_1} \Lambda^{\alpha_i}\oplus\bigoplus_{i=n_1+1}^{n_2} \mathcal{O} f_i \end{equation} \begin{prop}\label{ndim_reps}
Let $A^n=\{\alpha=(\alpha_i)\in \mathbb{N}^n\,|\, \alpha_i \geq \alpha_{i+1}\}$. Then $\{[\Lambda^\alpha]\}_{\alpha\in A^{n_1}}$ is a complete set of representatives of the orbits of $\text{GL}(W_1)\times \text{GL}(W_2)$ acting on $\mathcal{B}_e\text{GL}(V)^0$. \end{prop} \begin{proof} By \cite{Ne} $Q_1([\Lambda])\cong \bigoplus_{i=1}^{n_1} \mathcal{O}/\mathcal{P}^{\alpha_i}$ where $\alpha_i\in \mathbb{N}$. We may assume $\alpha_i\geq \alpha_{i+1}$. Then by theorem \ref{Q_invar} $[\Lambda]$ is in the same orbit as $[\Lambda^\alpha]$.
\end{proof}
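For two dimensional $V$ the invariant can be computed numerically. Write $\Lambda=\mathcal{O}g_1+\mathcal{O}g_2$ with $g_1=(a,b)$, $g_2=(c,d)$ in the coordinates $e_1,e_2$; a direct computation from the definitions (our own derivation, not carried out in the paper, and valid whenever $b,d$ are not both zero) gives $P_1(\Lambda)=\mathcal{P}^{\min(v(a),v(c))}e_1$ and $K_1(\Lambda)=\mathcal{P}^{v(ad-bc)-\min(v(b),v(d))}e_1$, so $Q([\Lambda])\cong\mathcal{O}/\mathcal{P}^k$ with $k$ the difference of the two exponents. The sketch below implements this over $F=\mathbb{Q}$ with the $p$-adic valuation, as a sanity check on the $2$-dimensional representatives $\Lambda^k$.

```python
from fractions import Fraction
from math import inf

def val(p, x):
    """p-adic valuation of a rational number; val(0) = +infinity."""
    x = Fraction(x)
    if x == 0:
        return inf
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def q_invariant(p, g1, g2):
    """k such that Q([Lambda]) = O/P^k for Lambda = O*g1 + O*g2,
    where g = (e_1-component, e_2-component).  Uses
    log K_1 = v(det) - min(v(b), v(d)) and log P_1 = min(v(a), v(c))."""
    (a, b), (c, d) = g1, g2
    det = Fraction(a) * Fraction(d) - Fraction(b) * Fraction(c)
    log_K1 = val(p, det) - min(val(p, b), val(p, d))
    log_P1 = min(val(p, a), val(p, c))
    return log_K1 - log_P1
```

For $\Lambda^k=\langle e_1,\varpi^{-k}e_1+e_2\rangle$ this returns $k$, and the value is unchanged when the generators are replaced by another $\mathcal{O}$-basis of the same lattice, as it must be.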
\section{Geometric interpretation of $Q$} \subsection{Distance Between Orbits} The main result of section \ref{secW1W2} gives an invariant $Q$ of the action of $L=\text{GL}(W_1)\times \text{GL}(W_2)$ on $\mathcal{B}_e\text{GL}(W_1\oplus W_2)^0$. In this section we give a geometric interpretation of this invariant in terms of a distance between orbits.\\ \\ By proposition \ref{ndim_reps} we may identify the space of orbits $L\backslash \mathcal{B}_e\text{GL}(V)^0$ with $A^{n_1}$. We define a function called the orbital distance as follows: \begin{equation} \begin{array}{cll} d_O:A^n\times A^n &\rightarrow & \mathbb{N}\\
(\alpha,\beta) & \mapsto & \displaystyle\max_{1\leq i\leq n} |\alpha_i-\beta_i| \end{array} \end{equation} The main result of this section is that the name ``orbital distance'' is justified. That is, $d_O$ is actually the minimum distance between two orbits as measured in the $1$-skeleton of the building $\mathcal{B}_e\text{GL}(V)$.\\ \\ For simplicity, if $[\Lambda]\in\mathcal{B}_e\text{GL}(V)^0$ then let $L[\Lambda]$ denote the orbit of $[\Lambda]$ under $L$.
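In coordinates, the orbital distance and the distance-decreasing construction used in lemma \ref{dis_dec} below are easy to make effective: every coordinate of $\beta$ realizing the maximal gap $k$ moves one unit toward $\alpha$. A minimal Python sketch (the names are ours, not from the paper):

```python
def d_O(alpha, beta):
    """Orbital distance on A^n: the sup-metric max_i |alpha_i - beta_i|."""
    return max(abs(a - b) for a, b in zip(alpha, beta))

def reduction_step(alpha, beta):
    """Coordinate form of the Lambda_3 construction (assumes the
    distance k is positive): each coordinate of beta at maximal gap k
    moves one unit toward alpha, giving a tuple one step closer."""
    k = d_O(alpha, beta)
    return tuple(b - 1 if b - a == k else (b + 1 if a - b == k else b)
                 for a, b in zip(alpha, beta))

def orbit_path(alpha, beta):
    """Iterate the step, exhibiting a path of length d_O(alpha, beta)
    from the orbit of beta to the orbit of alpha."""
    path = [tuple(beta)]
    while d_O(alpha, path[-1]) > 0:
        path.append(reduction_step(alpha, path[-1]))
    return path
```

Successive tuples along the path correspond to incident orbit representatives, so the path realizes the orbital distance in the $1$-skeleton.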
\begin{prop}\label{dis<1} Let $[\Lambda_1],[\Lambda_2]\in \mathcal{B}_e\text{GL}(V)$ be incident, then $d_O(L[\Lambda_1],L[\Lambda_2])\leq 1$. \end{prop} \begin{proof} Let $[\Lambda_1],[\Lambda_2]$ be two incident vertices with $\varpi\Lambda_1\subset \Lambda_2\subset \Lambda_1$. Let $L[\Lambda_1]$ be identified with $\alpha\in A^{n_1}$ and $L[\Lambda_2]$ with $\beta\in A^{n_1}$. We have \begin{eqnarray} \varpi P_i(\Lambda_1)\subset P_i(\Lambda_2)\subset P_i(\Lambda_1)\\ \varpi K_i(\Lambda_1)\subset K_i(\Lambda_2)\subset K_i(\Lambda_1) \end{eqnarray} There are two extreme cases. First $P_1(\Lambda_2)=P_1(\Lambda_1)$ and $K_1 (\Lambda_2)=\varpi K_1(\Lambda_1)$. In this case $\alpha_i=\beta_i+1$ for all $i$.\\ In the second case $P_1(\Lambda_2)=\varpi P_1(\Lambda_1)$, and $K_1(\Lambda_2)=K_1(\Lambda_1)\cap \varpi P_1(\Lambda_1)\supset \varpi K_1(\Lambda_1)$. In this case $\alpha_i=\beta_i-1$ or $\alpha_i=\beta_i$.\\
The above argument shows that no matter what $P_1(\Lambda_2)$ and $K_1(\Lambda_2)$ are, we have $|\alpha_i-\beta_i|\leq 1$ as desired. \end{proof} Proposition \ref{dis<1} shows that if two incident vertices are in different orbits, then their $L$-orbits have orbital distance 1. To show $d_O$ is actually the proposed metric we need to show that if two orbits have orbital distance 1, then there are incident representatives of each orbit. The following technical lemma proves this. \begin{lemma}\label{dis_dec} Let $[\Lambda_1],[\Lambda_2]\in \mathcal{B}_e\text{GL}(V)$. Assume $d_O(L[\Lambda_1],L[\Lambda_2])=k>0$. Then there is an $[\Lambda_3]\in\mathcal{B}_e\text{GL}(V)$ incident to $[\Lambda_2]$ so that $d_O(L[\Lambda_1],L[\Lambda_3])=k-1$. \end{lemma} \begin{proof} Let $[\Lambda_1],[\Lambda_2]$ be as in the statement of the lemma. Since we are working in $L$-orbits, and $L$ preserves distance in $\mathcal{B}_e\text{GL}(V)$, we may choose any representatives for $[\Lambda_1]$ and $[\Lambda_2]$ that we like. In particular if $L[\Lambda_1],L[\Lambda_2]$ are identified with $\alpha,\beta\in A^{n_1}$ respectively, we may take for our representatives $\Lambda^\alpha,\Lambda^\beta$ respectively, as in proposition \ref{ndim_reps}. \\ \\ Recall that if $W_1$ has basis $\{e_i\}_{i=1}^{n_1}$ and $W_2$ has basis $\{f_i\}_{i=1}^{n_2}$ then $\Lambda^\alpha=\bigoplus_{i=1}^{n_1} \Lambda^{\alpha_i}\oplus\bigoplus_{i=n_1+1}^{n_2}\mathcal{O} f_i$ where $\Lambda^{\alpha_i}=\langle e_i,\varpi^{-\alpha_i}e_i+f_i\rangle$. We now define a series of sublattices $M^{\alpha_i},N^{\alpha_i}$ which will allow us to define a lattice $\Lambda_3$ with the desired properties. Let $M^{\alpha_i}=\langle e_i,\varpi^{-\alpha_i+1}e_i+\varpi f_i\rangle$ if $\alpha_i>0$, and $N^{\alpha_i}=\langle\varpi e_i,\varpi^{-\alpha_i}e_i+f_i\rangle$. We have that $\varpi\Lambda^{\alpha_i}\subset M^{\alpha_i},N^{\alpha_i}\subset \Lambda^{\alpha_i}$.\\ \\ We now calculate $Q(M^{\alpha_i})$ and $Q(N^{\alpha_i})$ with respect to $E_i=\text{span}(e_i)$ and $F_i=\text{span}(f_i)$.
$P_1(M^{\alpha_i})=<\varpi^{-\alpha_i+1}e_i>$ and $K_1(M^{\alpha_i})=<e_i>$. So $Q(M^{\alpha_i})$ is represented by $\alpha_i-1\in A^1$. $P_1(N^{\alpha_i})=<\varpi^{-\alpha_i}e_i>$ and $K_1(N^{\alpha_i})=<\varpi e_i>$. Hence $Q(N^{\alpha_i})$ is represented by $\alpha_i+1\in A^1$.\\ \\
We now construct $\Lambda_3$. Let $M=\{i\,|\,\alpha_i-\beta_i=-k\}$ and $N=\{i\,|\,\alpha_i-\beta_i=k\}$, and set $S=\{1,2,\ldots,n_1\}\backslash (M\cup N)$. Then define $\Lambda_3$ as follows: \begin{equation} \Lambda_3=\bigoplus_{i\in S}\Lambda^{\beta_i}\bigoplus_{i\in M} M^{\beta_i} \bigoplus_{i\in N} N^{\beta_i}\bigoplus_{i=n_1+1}^{n_2} \mathcal{O} f_i \end{equation} By construction, $[\Lambda_2]$ and $[\Lambda_3]$ are incident and $d_O(L[\Lambda_1],L[\Lambda_3])=k-1$, as desired. \end{proof} Together, proposition \ref{dis<1} and lemma \ref{dis_dec} give us the following theorem. \begin{thm}\label{orbit_dis} Let $[\Lambda_1],[\Lambda_2]\in\mathcal{B}_e\text{GL}(V)^0$. Then $d_O(L[\Lambda_1],L[\Lambda_2])$ is the minimal distance between any two representatives of the orbits, as measured in the $1$-skeleton of $\mathcal{B}_e\text{GL}(V)$. \end{thm} Theorem \ref{orbit_dis} gives a complete combinatorial description of the geometry of the orbit space $L\backslash\mathcal{B}_e\text{GL}(V)^0$. The following figure shows the quotient space $L\backslash\mathcal{B}_e\text{GL}(V)$ when $V$ is $4$-dimensional and $n_1=n_2=2$: \begin{equation} \xymatrix@=0.12cm{
& & & & & & & & & & & & & & \\
&\ar@{.}[ul] & & & & & & & & & & & & & \ar@{.}[u] \\
& & \bullet\atop{(4,3)} \ar@{-}[ur] \ar@{-}[u] \ar@{-}[ul] & & & & \bullet\atop{(4,2)} \ar@{-}[ur] \ar@{-}[u] \ar@{-}[ul]\ar@{-}[llll] & & & & \bullet\atop{(4,1)} \ar@{-}[ur] \ar@{-}[u] \ar@{-}[ul]\ar@{-}[llll] & & & & \bullet\atop{(4,0)} \ar@{-}[u] \ar@{-}[ul]\ar@{-}[llll]\\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & \bullet\atop{(2,2)} \ar@{-}[uuuu] \ar@{-}[uuuullll]\ar@{-}[uuuurrrr] & & & & \bullet\atop{(2,1)} \ar@{-}[uuuu] \ar@{-}[uuuullll] \ar@{-}[llll] \ar@{-}[uuuurrrr] & & & & \bullet\atop{(2,0)} \ar@{-}[uuuu] \ar@{-}[uuuullll] \ar@{-}[llll] \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\
& & & & & & & & & & \bullet\atop{(1,1)} \ar@{-}[uuuu] \ar@{-}[uuuullll] \ar@{-}[uuuurrrr]& & & & \bullet\atop{(1,0)} \ar@{-}[uuuu] \ar@{-}[uuuullll] \ar@{-}[llll] \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \\ & & & & & & & & & & & & & & \bullet\atop{(0,0)} \ar@{-}[uuuu] \ar@{-}[uuuullll] \\ } \end{equation}
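The proof of lemma \ref{dis_dec} reduces the largest coordinate gap between the invariant tuples by one at each step, so in coordinates theorem \ref{orbit_dis} suggests computing $d_O$ as the Chebyshev (sup) distance on $A^{n_1}$. The sketch below assumes this reading, which is our gloss rather than a statement made above:

```python
def orbital_distance(alpha, beta):
    # Chebyshev (sup) distance between invariant tuples in A^{n_1};
    # conjecturally equal to d_O on L-orbits (our reading of lemma dis_dec).
    return max(abs(a - b) for a, b in zip(alpha, beta))

# Vertices from the quotient figure above (n_1 = 2):
d1 = orbital_distance((4, 3), (2, 1))   # largest coordinate gap is 2
d2 = orbital_distance((1, 1), (1, 0))   # incident orbits, distance 1
```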
\subsection{Distance to $\overline{\mathcal{A}_{\mathcal{F}_1\cup \mathcal{F}_2}}$ in $\mathcal{B}_e(W_1\oplus W_2)$} There is an important special case of theorem \ref{orbit_dis}. The orbit for which $Q(\Lambda)=0$ is distinguished. In this section we give both a description of this orbit and a description of the distance from a given point to this orbit.\\ \\ Recall from section \ref{secBe} that an apartment $\mathcal{A}_\mathcal{F}$ is specified by a frame $\mathcal{F}$ in $W_1\oplus W_2$. Denote by $\text{Frame}(V)$ the set of all frames in a vector space $V$. We will be interested in the following collection of apartments: \begin{equation} \overline{\Ac_{\FF_1\cup \FF_2}}=\bigcup_{\mathcal{F}_1\in\text{Frame}(W_1)\atop \mathcal{F}_2\in \text{Frame}(W_2)}\mathcal{A}_{\mathcal{F}_1\cup \mathcal{F}_2} \end{equation} \begin{prop} $\overline{\Ac_{\FF_1\cup \FF_2}}$ is a subbuilding of $\mathcal{B}_e\text{GL}(V)$. \end{prop} \begin{proof} Since $\overline{\Ac_{\FF_1\cup \FF_2}}$ is a union of apartments from an actual building, all that needs to be shown is that any two chambers $C_1,C_2\in \overline{\Ac_{\FF_1\cup \FF_2}}$ are in a common apartment. Let $\Lambda_1\supset \Lambda_2\supset\ldots \supset\Lambda_{n}\supset \varpi\Lambda_1$ be a chain of $\mathcal{O}$-lattices corresponding to a chamber $C\in \overline{\Ac_{\FF_1\cup \FF_2}}$, and $M_1\supset M_2 \supset \ldots \supset M_{n}\supset \varpi M_1$ a chain of lattices corresponding to a chamber $D\in \overline{\Ac_{\FF_1\cup \FF_2}}$. Since each $[\Lambda_i]\in\overline{\Ac_{\FF_1\cup \FF_2}}$, we can write $\Lambda_i=\Lambda_i^1\oplus \Lambda^2_i$ with $[\Lambda_i^j]\in \mathcal{B}_e(W_j)$. Similarly for the $M_i$. 
The $\{[\Lambda_i^j]\}_{i=1}^{n}$, $\{[M_i^j]\}_{i=1}^{n}$ specify facets $C_j,D_j\in \mathcal{B}_e(W_j)$, since $\Lambda_1^j\supset \Lambda_i^j\supset \varpi\Lambda_1^j$ (it will be the case that some of the $\Lambda_i^j=\Lambda_{i+1}^j$, but this will not matter), and similarly for the $M_i^j$. Then there are apartments $\mathcal{A}_j\subset \mathcal{B}_e\text{GL}(W_j)$ which contain $C_j$ and $D_j$. Each $\mathcal{A}_j$ is specified by a frame $\mathcal{F}_j$ in $W_j$, and $\mathcal{A}_{\mathcal{F}_1\cup\mathcal{F}_2}$, the apartment specified by $\mathcal{F}_1\cup \mathcal{F}_2$, contains the chambers $C$ and $D$. \end{proof}
Now let $[\Lambda]\in \mathcal{B}_e\text{GL}(V)^0$. We define a function on $\mathcal{B}_e\text{GL}(V)^0$ as follows: \begin{equation} \begin{array}{cll} d_\mathcal{A}:\mathcal{B}_e\text{GL}(V)^0&\rightarrow & \mathbb{N}\\ {[\Lambda]} & \mapsto & \log_\mathcal{P}[\text{Ann}(Q(\Lambda))] \end{array} \end{equation} \begin{thm} Let $[\Lambda] \in \mathcal{B}_e\text{GL}(V)^0$. Then $d_\mathcal{A}([\Lambda])=d_O(L[\Lambda],\overline{\Ac_{\FF_1\cup \FF_2}} )$. \end{thm} \begin{proof} This follows from theorem \ref{orbit_dis} and the fact that $\overline{\Ac_{\FF_1\cup \FF_2}}$ is associated to $(0)\in A^n$. \end{proof} In the special case when $n=1$, $\overline{\Ac_{\FF_1\cup \FF_2}}$ is just an apartment of $\mathcal{B}_e\text{GL}(V)^0$. Then $d_\mathcal{A}$ is just measuring the distance of a given point to a fixed apartment. This suggests that one may be able to find the distance of a vertex to a fixed apartment by studying the action of a maximal split torus on the building.
\end{document} |
\begin{document}
\title{Generation and Direct Detection of Broadband Mesoscopic \\Polarization-Squeezed Vacuum} \author{Timur Iskhakov} \affiliation{Department of Physics, M.V.Lomonosov Moscow State University, \\ Leninskie Gory, 119992 Moscow, Russia} \author{Maria~V.~Chekhova} \affiliation{Max Planck Institute for the Science of Light, G\"unther-Scharowsky-Stra\ss{}e 1/Bau 24, 91058 Erlangen, Germany} \affiliation{Department of Physics, M.V.Lomonosov Moscow State University, \\ Leninskie Gory, 119992 Moscow, Russia} \author{Gerd Leuchs} \affiliation{Max Planck Institute for the Science of Light, G\"unther-Scharowsky-Stra\ss{}e 1/Bau 24, 91058 Erlangen, Germany} \affiliation{University Erlangen-N\"urnberg, Staudtstrasse 7/B2, 91058 Erlangen, Germany} \begin{abstract} Using a traveling-wave OPA with two orthogonally oriented type-I BBO crystals pumped by picosecond pulses, we generate vertically and horizontally polarized squeezed vacuum states within a broad range of wavelengths and angles. Depending on the phase between these states, fluctuations in one or another Stokes parameter are suppressed below the shot-noise limit. Due to the large number of photon pairs produced, no local oscillator is required, and 3 dB squeezing is observed by means of direct detection.
\end{abstract} \pacs{42.50.Lc, 42.65.Yj, 42.50.Dv}
\maketitle \narrowtext
Squeezed light is the basic resource for continuous-variable quantum communication and computation~\cite{Bachor}. Nowadays, the techniques of generating squeezed light are well developed and include $\chi^{(3)}$ interactions in fibres~\cite{Kerr} and $\chi^{(2)}$-based optical parametric amplifiers (OPAs), seeded~\cite{Aytur,seeded OPA} or not seeded~\cite{unseeded OPA}, cavity-based~\cite{cavity} or single-pass~\cite{single-pass}. Squeezed light, as a rule, is detected through homodyne measurement, using a local oscillator, or, in the case of bright two-mode squeezing, also by means of direct detection. More advanced techniques of measuring squeezed light include Wigner-function tomography~\cite{WFtomo} and polarization tomography~\cite{Karmas,Marquardt}.
Among the variety of squeezed states, a special role belongs to the squeezed vacuum, a state generated at the output of a parametric down-converter with a vacuum input state~\cite{MandelWolf}. In the limit of weak parametric gain, squeezed vacuum is known as biphoton light~\cite{DNK}, while in the strong-gain limit it manifests quadrature squeezing~\cite{MandelWolf}. At any value of the parametric gain, squeezed vacuum contains only even-photon-number Fock states and is therefore always nonclassical. Initially such states of light were therefore called two-photon coherent states~\cite{Yuen}. Thus, squeezed vacuum plays an important role in the quantum optics of both discrete and continuous variables and provides a useful resource for quantum information protocols.
In particular, two-mode squeezed vacuum generated at the output of an unseeded two-mode parametric downconverter~\cite{Braunstein} has the important advantage of perfect two-mode squeezing: regardless of the parametric gain, the photon-number difference for the two modes does not fluctuate. Two-mode squeezed vacuum generated in a single-pass OPA is especially interesting, first, because its correlations are not reduced by the cavity losses and second, because it has a rich broadband (multimode) frequency and angular spectrum, which can be used for quantum information protocols in higher-dimensional Hilbert spaces. However, because of the difficulties in producing and detecting such states, there are only a few reports on the direct detection of broadband squeezed vacuum, and all of them show only a small degree of squeezing~\cite{Lugiato,Bondani,Brida,JETPLett}.
In this paper, we achieve a considerable degree of two-mode broadband squeezing by combining two coherent strongly pumped single-pass type-I OPAs, which generate, via parametric down-conversion (PDC), squeezed vacuums in two orthogonal polarization modes. This creates a squeezed vacuum with the variance of one of the Stokes operators suppressed below the shot-noise level, which in this case is given by the mean number of photons~\cite{hidden}. This special type of two-mode squeezing is often referred to as polarization squeezing~\cite{hidden}, and the state can be called polarization-squeezed vacuum. The state is measured by means of direct detection.
\begin{figure}
\caption{The experimental setup.}
\end{figure}
In our setup (Fig.1), two 1-mm BBO crystals with the optic axes oriented in orthogonal (vertical and horizontal) planes were placed into the beam of a Nd:YAG laser third harmonic (wavelength $355$nm). The fundamental and second-harmonic radiation of the laser was eliminated with the help of a prism and a UV filter. The pump pulse width was 17 ps, the repetition rate 1 kHz, and the mean power up to 120 mW. The pump was focused into the crystals either with a lens of focal length 100 cm, which resulted in a beam waist of $70 \mu$m, or with a telescope, providing softer focusing (beam waist about $500\mu$m). Using a half-wave plate, we aligned the pump polarization to be at $45^{\circ}$ to the planes of the crystals' optic axes, so that the pump provided equal contributions to PDC in both crystals. The crystals were cut for PDC with type-I collinear frequency-degenerate phase matching. After the crystals, the pump radiation was cut off by two dichroic mirrors, having high reflection for the pump and $98.5\%$ transmission for PDC radiation. The detection part of the setup, including a polarizing beam splitter, a half- or quarter-wave plate and two detectors, provided a standard Stokes measurement. All surfaces of the optic elements had a standard broadband antireflection coating. The angular spectrum of the detected light was restricted by an aperture to $0.8^{\circ}$, and the wavelength range was restricted only by PDC phase matching and was $130$ nm broad, with the central wavelength $709$nm. The detectors, Hamamatsu S3883 p-i-n diodes followed by pulsed charge-sensitive amplifiers~\cite{Hansen} based on Amptek A250 and A275 chips, with peaking time \unit{2.77}{\micro\second}, had a quantum efficiency of $90\%$ and electronic noise equivalent to $180$ electrons/pulse. At their outputs, they produced pulses of a given shape, with the duration \unit{8}{\micro\second} and the amplitude proportional to the integral number of photons per light pulse. 
The phase between the states generated in the two crystals could be varied by tilting two quartz plates with thicknesses 532 and 523 $\mu$m, placed into the pump beam and having the optic axes oriented vertically.
\begin{figure}
\caption{Dependence of the OPA output on the pump power (a) for \unit{70}{\micro\meter} beam waist and (b) for \unit{500}{\micro\meter} beam waist. Solid lines show a fit with Eq.(\ref{sinh}).}
\end{figure}
The output signals of the detectors were measured by means of an AD card integrating the electronic pulses over time. The result coincided, up to the amplification factor $A$, with the photon numbers incident on the detectors during a light pulse. The amplification factors for detectors 1 and 2 were independently calibrated to be $A_1=\unit{9.96\cdot 10^{-3}}{\nano\volt\second}$/photon and $A_2=\unit{1.107\cdot 10^{-2}}{\nano\volt\second}$/photon. The difference between the detectors' amplification factors was eliminated numerically, by multiplying the result of the measurement for detector 2 by a factor of $0.9-0.92$, depending on the alignment. As a result, the output signals of the detectors were balanced to an accuracy of $0.1\%$. From the dataset obtained for 30000 pulses, mean photon numbers per pulse were measured, as well as the variances of the photon-number difference and photon-number sum for the two detectors. Since the electronic noise was not small compared to the shot-noise level, it had to be subtracted. The shot-noise level was measured independently, by using a shot-noise limited laser source, and was on the order of several hundreds of photons per pulse. The degree of two-mode squeezing was characterized~\cite{Aytur,Lugiato} by the noise reduction factor (NRF) defined as the ratio of the photon-number difference variance to the mean photon-number sum,
\begin{equation} \hbox{NRF}=\frac{Var(N_1-N_2)}{\langle N_1+N_2\rangle}. \label{NRF} \end{equation}
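For intuition about Eq.(\ref{NRF}): two independent shot-noise-limited beams have uncorrelated Poissonian photon numbers and give $\hbox{NRF}\approx 1$, while ideal twin beams with a non-fluctuating photon-number difference give $\hbox{NRF}=0$. A simulated sketch (illustrative numbers, not the measured data):

```python
import numpy as np

def nrf(n1, n2):
    # Noise reduction factor: Var(N1 - N2) / <N1 + N2>, Eq. (NRF).
    n1, n2 = np.asarray(n1), np.asarray(n2)
    return np.var(n1 - n2) / np.mean(n1 + n2)

rng = np.random.default_rng(0)
mean_n = 1000          # mean photons per pulse (illustrative)
pulses = 30000         # same number of pulses as in the measurement
# Two independent shot-noise-limited beams: NRF ~ 1.
a = rng.poisson(mean_n, pulses)
b = rng.poisson(mean_n, pulses)
# Perfectly correlated twin beams: identical photon numbers, NRF = 0.
twins = rng.poisson(mean_n, pulses)
```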
The dependence of the detector output signal on the input pump power $P$ is shown in Fig.2. The measurement was made for a 10 nm wavelength band selected by means of an interference filter and a collection angle of $0.4^{\circ}$ selected by an aperture. With the pump focused tightly (using a lens with focal length 100 cm), the dependence is strongly nonlinear (Fig.2a), and by fitting it with the well-known expression for a single-pass OPA output~\cite{QE},
\begin{equation} N=m\sinh^2\Gamma, \label{sinh} \end{equation} with $m$ denoting the number of modes and the parametric gain scaling as the square root of the pump power, $\Gamma=\kappa \sqrt{P}$, $\kappa$ and $m$ being the fitting parameters, we estimate the highest parametric gain achieved in our measurement as $\Gamma=3.4$. With soft focusing, by means of a telescope, the $N(P)$ dependence is much less nonlinear (Fig.2b), and the highest gain achieved, according to the fit, is only $0.8$. Note that the dependences shown in Figs. 2a,b were obtained with the pump polarized vertically, so that only one crystal contributed to the signal; with the pump polarized at $45^{\circ}$, both crystals contributed but the gain $\Gamma$ in each of them was reduced by a factor of $\sqrt{2}$. From the value of the gain we see that the number of photons per mode ranges between one and a hundred, i.e., we are in the regime of mesoscopic twin beams~\cite{Perina}. At the same time, the largest total number of photons per pulse is about 4000 for tight focusing and about 1500 for soft focusing. Without the interference filter, photon numbers per pulse are about one order of magnitude higher.
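The gain estimate amounts to a two-parameter least-squares fit of Eq.(\ref{sinh}) with $\Gamma=\kappa\sqrt{P}$. A hedged sketch on synthetic, noiseless data (the values of $m$ and $\kappa$ below are illustrative, not the measured ones):

```python
import numpy as np
from scipy.optimize import curve_fit

def opa_output(P, m, kappa):
    # Eq. (sinh): N = m * sinh^2(Gamma), with Gamma = kappa * sqrt(P).
    return m * np.sinh(kappa * np.sqrt(P)) ** 2

# Synthetic "measurement": m = 50 modes, kappa = 0.31, chosen so that
# Gamma = kappa * sqrt(120) ~ 3.4 at the highest pump power.
P = np.linspace(5.0, 120.0, 24)            # pump power, mW
N = opa_output(P, 50.0, 0.31)              # photons per pulse
(m_fit, kappa_fit), _ = curve_fit(opa_output, P, N, p0=(40.0, 0.3))
gamma_max = kappa_fit * np.sqrt(P.max())   # highest parametric gain
```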
\begin{figure}
\caption{Noise reduction factor in $S_2$ (squares, solid line) and $S_3$ (circles, dashed line) depending on the quartz plates tilt. }
\end{figure}
Figure 3 shows the variances of the second and the third Stokes operators measured versus the delay introduced by the two quartz plates between the vertically and horizontally polarized components of the pump beam. The measurement was performed at low parametric gain, with the $80$ mW pump beam softly focused into the crystals. When the phase delay was equal to $\pi$, squeezing was obtained in the $S_2$ Stokes observable, while $S_3$ was anti-squeezed. When the phase delay was equal to zero, the squeezed Stokes observable was $S_3$. At the same time, the observable $S_1$ was always anti-squeezed. This behavior is clear from the following simple considerations: the states generated at the outputs of the two crystals consist of horizontally and vertically polarized photon pairs, $|HH\rangle$ and $|VV\rangle$. Due to quantum interference, they become orthogonally polarized pairs. In particular, the sum $|HH\rangle+|VV\rangle$ gives two pairs of right- and left-circularly polarized photons, $|RL\rangle$, while the difference, $|HH\rangle-|VV\rangle$, gives two pairs of diagonally and anti-diagonally polarized photons, $|AD\rangle$. Therefore, the $\pi$ phase delay between the squeezed vacuum fields generated in the two crystals will lead to a state with suppressed fluctuations in the second Stokes operator, while for the case of the zero phase delay, the state will be squeezed in the third Stokes operator. Note that the mean values of all Stokes observables $\langle S_1\rangle,\langle S_2\rangle,\langle S_3\rangle$ are equal to zero; the state is unpolarized in the 'classical optics' sense but polarized in second- and higher-order moments. This property is called in the literature 'hidden polarization'~\cite{hidden,Chirkin,DNK92} and led to the introduction of various definitions for the degree of polarization~\cite{Chirkin, dp}.
A rigorous calculation can be made in the framework of the Heisenberg approach. Denoting the photon creation operators in the horizontal and vertical polarization modes as $a_h^{\dagger}$ and $a_v^{\dagger}$, we can write the Hamiltonian of the OPA as the sum of the Hamiltonians corresponding to separate crystals,
\begin{equation} \hat{H}=G[(a_h^{\dagger})^2+e^{i\phi}(a_v^{\dagger})^2]+\hbox{h.c.}, \label{Hamiltonian} \end{equation} where $G$ is proportional to the pump field amplitude and $\phi$ is the phase delay introduced between the vertical and horizontal pump components. Then the Hamiltonian (\ref{Hamiltonian}) can be expressed via photon creation operators in orthogonal elliptically polarized modes with the axes of the ellipses oriented at $\pm 45^{\circ}$, $a_{\phi}^{\dagger}\equiv1/\sqrt{2}(a_h^{\dagger}+ie^{i\phi/2}a_v^{\dagger})$ and $b_{\phi}^{\dagger}\equiv1/\sqrt{2}(a_h^{\dagger}-ie^{i\phi/2}a_v^{\dagger})$~\cite{Burlakov}:
\begin{equation} \hat{H}=2Ga_{\phi}^{\dagger}b_{\phi}^{\dagger}+\hbox{h.c.} \label{Ham1} \end{equation}
From the form of the Hamiltonian it is clear that the output state has perfectly correlated fluctuations in modes corresponding to the operators $a_{\phi}, b_{\phi}$. The mean values and variances of all Stokes operators can be calculated using the Bogoliubov transformations following from the Hamiltonian (\ref{Ham1}):
\begin{equation} a_{\phi}=U a_{0\phi}+V b_{0\phi}^{\dagger},\,\,b_{\phi}=U b_{0\phi}+V a_{0\phi}^{\dagger}, \label{Bogoliubov} \end{equation} where $U=\cosh\Gamma$ and $V=\sinh\Gamma$, $\Gamma$ is dimensionless gain proportional to $G$, and $a_{0\phi}, b_{0\phi}$ are the input (vacuum) photon annihilation operators. Losses in the optical elements and non-unity quantum efficiencies of the detectors can be taken into account in the usual way, by introducing a beamsplitter in front of each detector, with the amplitude transmission coefficient $\sqrt{\eta}$ and the amplitude reflection coefficient $\sqrt{1-\eta}$.
Calculation of the variances of the Stokes operators $\hat{S}_2\equiv a_h^{\dagger}a_v+a_v^{\dagger}a_h$ and $\hat{S}_3\equiv -i(a_h^{\dagger}a_v-a_v^{\dagger}a_h)$ is performed by passing to $a_{\phi}, b_{\phi}$ operators, then applying transformations (\ref{Bogoliubov}), and then averaging the corresponding second-order moments over the vacuum state. The result, taking the losses $\eta$ into account, is
\begin{eqnarray} \frac{\hbox{Var}(S_2)}{2\eta V^2}=(1-\eta)\sin^2\frac{\phi}{2}+(2\eta U^2+1-\eta)\cos^2\frac{\phi}{2},\nonumber\\ \frac{\hbox{Var}(S_3)}{2\eta V^2}=(1-\eta)\cos^2\frac{\phi}{2}+(2\eta U^2+1-\eta)\sin^2\frac{\phi}{2}. \label{Var} \end{eqnarray}
Because the total number of registered photons is given by the expression $\langle S_0\rangle=2\eta V^2$, the left-hand sides of Eqs.(\ref{Var}), according to Eq.(\ref{NRF}), give the NRF values for the $\hat{S}_2$ and $\hat{S}_3$ Stokes operators. In the low-gain limit (as is the case for the above-described measurement), $U\approx1$, and Eqs. (\ref{Var}) become
\begin{equation} \hbox{NRF}(S_2)=1+\eta\cos\phi, \, \hbox{NRF}(S_3)=1-\eta\cos\phi \label{lg} \end{equation}
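The reduction from Eqs.(\ref{Var}) to Eqs.(\ref{lg}) can be checked symbolically. The sketch below sets $U=1$, the $\Gamma\to0$ limit of $\cosh\Gamma$, and verifies both identities:

```python
import sympy as sp

eta, phi = sp.symbols('eta phi', positive=True)
U = 1  # low-gain limit: U = cosh(Gamma) -> 1 as Gamma -> 0

# Right-hand sides of Eqs. (Var), i.e. Var(S_i)/(2 eta V^2) = NRF(S_i).
nrf_s2 = (1 - eta)*sp.sin(phi/2)**2 + (2*eta*U**2 + 1 - eta)*sp.cos(phi/2)**2
nrf_s3 = (1 - eta)*sp.cos(phi/2)**2 + (2*eta*U**2 + 1 - eta)*sp.sin(phi/2)**2

# Both differences simplify to zero, recovering Eqs. (lg).
ok_s2 = sp.simplify(nrf_s2 - (1 + eta*sp.cos(phi)))
ok_s3 = sp.simplify(nrf_s3 - (1 - eta*sp.cos(phi)))
```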
Equations (\ref{lg}) were used to fit the data shown in Fig.3, taking into account the relation between the tilt angle $\alpha$ of the plates and the phase $\phi$. With the efficiency $\eta$ and the initial phase delay as the only fitting parameters, the theoretical dependence is in good agreement with the experimental data. The resulting quantum efficiency is found to be $\eta=0.45$, which is much less than we expect from the values of the detectors' quantum efficiencies and the optical losses (about 7\%).
However, in our experimental scheme there is one more source of losses. Because the nonlinear crystals are placed into the pump beam one after another, squeezed vacuum generated in the first crystal passes through the second crystal. As was shown in Ref.~\cite{spectra} for the case of two-photon light, the desired state is only produced in the central part of the frequency-angular spectrum. The state produced at the 'slopes', due to the group-velocity dispersion and optical anisotropy, is a different one. In our case the angular bandwidth is restricted by the aperture, but the frequency spectrum contains the whole band allowed by phasematching. If the crystals are aligned to produce, for instance, the $S_2$-squeezed state at the central wavelength (709 nm), the state generated at the 'slopes' will be $S_3$-squeezed and anti-squeezed in $S_2$. According to our numerical calculation, the fraction of the 'correct' state for the case of degenerate phase matching is $0.71$. This explains the low degree of noise suppression in our experiment.
\begin{figure}
\caption{Noise reduction factor in $S_2$ depending on the quartz plates tilt for the degenerate (squares) and non-degenerate (circles) phase-matching.}
\end{figure}
The effect of group-velocity dispersion in the second crystal reduces the degree of squeezing even more drastically in the case of non-degenerate phase matching. For instance, for the phase matching at $650$ nm and $780$ nm, numerical calculation shows that the fraction of the squeezed state in the bandwidth is only $0.52$. To test this in experiment, we measured the $S_2$ variance for two cases: the crystals aligned for degenerate phase matching and the crystals aligned for phasematching at wavelengths $650$ nm and $780$ nm (Fig.4). While in the first case the NRF reaches values below $0.5$, in the second one the minimal value of NRF is $0.75$. Note that the phase of the interference dependence is also changed in the degenerate regime, because of the additional tilt of the crystals.
We would like to stress that this effect is not inevitable and is only present in configurations where the two crystals are placed one after another. For instance, a setup with a Mach-Zehnder interferometer~\cite{Burlakov} is free from this drawback. Here, we used the construction with two crystals in a single beam only for the sake of higher stability; however, if the goal is to achieve a high degree of squeezing, a Mach-Zehnder interferometer is a much better choice. Note that there is a recent proposal on a similar scheme employing a high-stability Mach-Zehnder interferometer based on birefringent beam displacers~\cite{fiorentino}.
In conclusion, we report a source of broadband twin beams (squeezed vacuum) covering the wavelength range from 650 to 780 nm and the angular range up to $0.8^{\circ}$. The source is a mesoscopic one, providing a number of photons per mode between 1 and 100. Because of its highly multimode character, the resulting number of photons per pulse is up to $5\cdot 10^5$. Polarization properties of the produced state reveal the 'hidden polarization' effect: depending on the polarization of the pump beam, the output state has fluctuations in either the $S_2$ or the $S_3$ Stokes observable squeezed 50\% below the shot-noise level. To the best of our knowledge, this is the first report on such a high degree of squeezing observed via direct detection of twin beams from a single-pass OPA.
We acknowledge the financial support of the European Union under project COMPAS No. 212008 (FP7-ICT). M.V.C. acknowledges the support of the DFG foundation. We are grateful to Alan Huber from Amptek company for his valuable advice on constructing low-noise charge-integrating detectors and to Bruno Menegozzi for the realization of the electronic circuit.
\begin{references}
\bibitem{Bachor} H.-A.~Bachor, T.~C.~Ralph, \emph{A Guide to Experiments in Quantum Optics} (Wiley-VCH, sec. ed., Weinheim, 2004).
\bibitem{Kerr} R.M. Shelby et al., Phys. Rev. Lett., \textbf{57}, 691 (1986); J.~Heersink et al., Phys. Rev. A \textbf{68}, 013815 (2003); J.~Heersink et al., Opt. Lett. \textbf{30}, 1192 (2005).
\bibitem{Aytur}O. Aytur, P. Kumar, Phys. Rev. Lett. \textbf{65}, 1551 (1990).
\bibitem{seeded OPA} D.T. Smithey et al., Phys. Rev. Lett. \textbf{69}, 2650 (1992).
\bibitem{unseeded OPA} A. Heidmann et al., Phys. Rev. Lett. \textbf{59}, 2555 (1987).
\bibitem{cavity}Ling-An Wu et al., Phys. Rev. Lett. \textbf{57}, 2520 (1986); J.~Gao et al., Opt. Lett. \textbf{23}, 870 (1998).
\bibitem{single-pass} J.~A.~Levenson et al., Quantum Semiclass. Opt. \textbf{9}, 221 (1997); T.~Hirano et al., Opt. Lett. \textbf{30}, 1722 (2005).
\bibitem{WFtomo} K.~Vogel and H.~Risken, Phys. Rev. A \textbf{40}, 2847 (1989); D.T. Smithey et al., Phys. Rev. Lett. \textbf{70}, 1244 (1993).
\bibitem{Karmas} V.~P.~Karassiov and A.~V.~Masalov, J. Opt. B: Quantum Semiclass. Opt. \textbf{4}, S366 (2002); P.~A.~Bushev et al., Optics and Spectroscopy \textbf{91}, 526 (2001).
\bibitem{Marquardt} Ch.~Marquardt et al., Phys. Rev. Lett. \textbf{99}, 220401 (2007).
\bibitem{MandelWolf} L.~Mandel and E.~Wolf, \emph{Optical Coherence and Quantum Optics} (Cambridge University Press, Cambridge, 1995).
\bibitem{DNK} D.N.~Klyshko, \emph{Photons and Nonlinear Optics} (Gordon and Breach, New York, 1988).
\bibitem{Yuen} H.P. Yuen, Phys.Rev. A \textbf{13}, 2226 (1976).
\bibitem{Braunstein} S.~L.~Braunstein, Phys. Rev. A \textbf{71}, 055801 (2005).
\bibitem{Lugiato} O. Jedrkiewicz et al., Phys. Rev. Lett. \textbf{93}, 243601 (2004).
\bibitem{Bondani} M. Bondani et al., Phys. Rev. A \textbf{76}, 013833 (2007).
\bibitem{Brida} G.~Brida et al., Journal of Modern Optics, to appear (2008).
\bibitem{JETPLett} T.~Sh.~Iskhakov et al., JETP Lett., to appear (2008).
\bibitem{hidden} V. P. Karasev, A.V. Masalov, Opt. Spectrosc. 74, 551 (1993); P. Usachev et al. Opt. Commun. 193, 161 (2001).
\bibitem{Hansen} H.~Hansen et al., Optics Letters \textbf{26}, 1714 (2001).
\bibitem{QE}O.~A.~Ivanova et al., Quant. Electron. \textbf{36} (10), 951 (2006).
\bibitem{Perina} J.~Perina et al., Phys. Rev. A \textbf{76}, 043806 (2007).
\bibitem{Chirkin} A. S. Chirkin, A. A. Orlov, D. Yu. Paraschuk, Quantum Electron. 23, 870 (1993); A. P. Alodjants, S. M. Arakelyan, and A. S. Chirkin, JETP \textbf{81}, 34 (1995).
\bibitem{DNK92} D.~N.~Klyshko, Physics Letters A \textbf{163} 349, (1992).
\bibitem{dp} V. P. Karassiov, J. Phys. A \textbf{26}, 4345 (1993); D.~N.~Klyshko, JETP \textbf{84}, 1065 (1997); A. Luis, Phys. Rev. A \textbf{66}, 013806 (2002); G.~Bj\"ork et al., Proc. SPIE \textbf{4750} (2002); A. B. Klimov et al., Phys. Rev. A \textbf{72}, 033813 (2005).
\bibitem{Burlakov} A.~V.~Burlakov et al., Phys. Rev. A \textbf{64}, 041803(R) (2001).
\bibitem{spectra} G.~Brida et al., Phys. Rev. A \textbf{76}, 053807 (2007).
\bibitem{fiorentino} M.~Fiorentino and R.G. Beausoleil, Opt. Exp. \textbf{16}, 20149 (2008).
\end{references}
\end{document} |
\begin{document}
\title{Relations enumerable from positive information} \begin{abstract}
We study countable structures from the viewpoint of enumeration reducibility. Since enumeration reducibility is based only on positive information, in this setting it is natural to consider structures given by their positive atomic diagram -- the computable join of all relations of the structure. Fixing a structure ${\mathcal{A}}$, a natural class of relations in this setting is the class of relations $R$ such that $R^{\hat {\mathcal{A}}}$ is enumeration reducible to the positive atomic diagram of $\hat {\mathcal{A}}$ for every $\hat {\mathcal{A}}\cong {\mathcal{A}}$ -- the \emph{relatively intrinsically positively enumerable (r.i.p.e.)} relations. We show that the r.i.p.e.\ relations are exactly the relations that are definable by $\Sigma^p_1$ formulas, a subclass of the infinitary $\Sigma^0_1$ formulas. We then introduce a new natural notion of the jump of a structure and study its interaction with other notions of jumps. Finally, we show that positively enumerable functors, a notion studied by Csima, Rossegger, and Yu, are equivalent to a notion of interpretability using $\Sigma^p_1$ formulas. \end{abstract} While in computable structure theory countable structures are classically studied up to Turing reducibility, researchers have successfully used enumeration reducibility to both contribute to the classical study and develop a beautiful theory on its own. One example of a contribution to classical theory is Soskov's work on degree spectra and co-spectra~\cite{soskov2004} which allowed him to show that there is a degree spectrum of a structure such that the set of degrees whose $\omega$-jump is in this spectrum is not the spectrum of a structure~\cite{soskov2013a}. Another example is Kalimullin's study of reducibilities between classes of structures~\cite{kalimullin2012} where he studied enumeration reducibility versions of Muchnik and Medvedev reducibility between classes of structures. 
This topic has also been studied by Knight, Miller, and Vanden Boom~\cite{knight2007} and Calvert, Cummins, Knight, and Miller~\cite{calvert2004a}. There appears to be a rich theory on these notions with interesting questions on the relationship between the enumeration reducibility and the classical versions. In this article we develop a novel approach to study relations that are enumeration reducible to every copy of a given structure.
More formally, given a countable relational structure ${\mathcal{A}}$, we say that a relation $R$ is \emph{relatively intrinsically positively enumerable} (\emph{r.i.p.e.}) in ${\mathcal{A}}$ if for every copy ${\mathcal{B}}$ of ${\mathcal{A}}$ and every enumeration of the basic relations on ${\mathcal{B}}$, we can compute an enumeration of $R^{\mathcal{B}}$.
We obtain a syntactic classification of the r.i.p.e.\ relations using the infinitary logic $L_{\omega_1\omega}$. \cref{ripe} shows that the r.i.p.e.\ relations on a structure are precisely those that are definable by computable infinitary $\Sigma_1^0$ formulas in which neither negations nor implications occur.
This classification is motivated by a related classification of the r.i.c.e.\ relations. A relation $R$ on a structure ${\mathcal{A}}$ is relatively intrinsically computably enumerable (r.i.c.e.) if for every ${\mathcal{B}}\cong {\mathcal{A}}$, $R^{\mathcal{B}}$ is c.e.\ relative to the atomic diagram of ${\mathcal{B}}$. The following classification of r.i.c.e.\ relations is a special case of a theorem by Ash, Knight, Manasse and Slaman~\cite{ash1989} and, independently, Chisholm~\cite{chisholm1990}: A relation is r.i.c.e.\ in a structure ${\mathcal{A}}$ if and only if it is definable in ${\mathcal{A}}$ by a computable $\Sigma^0_1$ formula in the logic $L_{\omega_1\omega}$.
Since this result appeared there has been much work on r.i.c.e.\ relations and related concepts. For a summary, see Fokina, Harizanov, and Melnikov~\cite{fokina2014}. One particularly interesting generalization of r.i.c.e.\ relations is due to Montalb\'an~\cite{montalban2012}. He extended the definition of r.i.c.e.\ relations from relations on $\omega^n$ to $\omega^{<\omega}$ and to sequences of relations, obtaining a classification similar to the one given in~\cite{ash1989,chisholm1990}. This extension allows the development of a rich theory, including an intuitive definition of the jump of a structure and an effective version of interpretability with a category-theoretic analogue: a structure ${\mathcal{A}}$ is effectively interpretable in a structure ${\mathcal{B}}$ if and only if there is a computable functor from ${\mathcal{B}}$ to ${\mathcal{A}}$, as shown in Harrison-Trainor, Melnikov, Miller, and Montalb\'an~\cite{HTMMM2015}. For a complete development of this theory see Montalb\'an~\cite{montalban2021a}. The main goal of this article is to develop a similar theory for r.i.p.e.\ relations.
Let ${\mathcal{A}}$ be a countable structure in a relational language $L$. We may assume that the universe of ${\mathcal{A}}$ is $\omega$ and, in order to measure its computational complexity, identify it with the set $=^{\mathcal{A}}\oplus \neq^{\mathcal{A}}\oplus\bigoplus_{R_i\in L}R_i^{\mathcal{A}}$, which we call $P({\mathcal{A}})$, the \emph{positive diagram of ${\mathcal{A}}$}. It is Turing equivalent to the standard definition of the atomic diagram of a structure, which can be viewed as the set $=^{\mathcal{A}}\oplus \neq^{\mathcal{A}}\oplus\bigoplus_{R_i\in L}R_i^{\mathcal{A}}\oplus \bar{R_i}^{\mathcal{A}}$. Now, a relation $R\subseteq\omega^{<\omega}$ is r.i.c.e.\ if for every copy ${\mathcal{B}}\cong {\mathcal{A}}$, $R^{\mathcal{B}}$ is c.e.\ in $P({\mathcal{B}})$. The r.i.p.e.\ relations are the natural analogue of the r.i.c.e.\ relations for enumeration reducibility. Recall that a set of natural numbers $A$ is \emph{enumeration reducible} to $B$, written $A\leq_e B$, if there exists a c.e.\ set $\Psi$ consisting of pairs $\langle D,x\rangle$, where $D$ is a finite set under some fixed coding, such that \[ A=\Psi^B=\{x: \exists D\, [\langle D,x \rangle \in \Psi \land D\subseteq B]\}. \] Enumeration reducibility allows us to formally define the notion of a r.i.p.e.\ relation. \begin{defn}\label{ripedefn}
Let ${\mathcal{A}}$ be a structure. A relation $R \subseteq A^{< \omega}$ is \emph{relatively intrinsically positively enumerable} in ${\mathcal{A}}$, r.i.p.e.\ for short, if for every copy $({\mathcal{B}},R^{\mathcal{B}})$ of $({\mathcal{A}}, R)$, $R^{\mathcal{B}}\leq_e P({\mathcal{B}})$. The relation is \textit{uniformly relatively intrinsically positively enumerable} in ${\mathcal{A}}$ if the above enumeration reducibility is uniform in the copies of ${\mathcal{A}}$, that is, if there is a fixed enumeration operator $\Psi$ such that $R^{\mathcal{B}} = \Psi^{P({\mathcal{B}})}$ for every copy ${\mathcal{B}}$ of ${\mathcal{A}}$. \end{defn} The study of the computability-theoretic properties of structures with respect to enumeration reducibility is an active research topic; see Soskova and Soskova~\cite{soskova2017} for a summary of results in this area. \subsection{Structure of the paper} In \cref{sec:ripe} we show that a relation is r.i.p.e.\ in a structure ${\mathcal{A}}$ if and only if it is definable by a $\Sigma_1^p$ formula in the language of ${\mathcal{A}}$, that is, a computable infinitary $\Sigma_1^0$ formula in which neither negations nor implications occur. We continue by defining a notion of reducibility between r.i.p.e.\ relations and exhibiting a complete relation. Towards the end of \cref{sec:ripe} we study sets of natural numbers r.i.p.e.\ in a given structure and show that these are precisely the sets whose degrees lie in the structure's co-spectrum, the set of enumeration degrees below every copy of the structure.
\cref{sec:jumps} is devoted to the study of the positive jump of a structure. We define the positive jump and study its degree-theoretic properties. The main result of this section is that the enumeration degree spectrum of the positive jump of a structure is the set of jumps of enumeration degrees of the structure. The main tool used to prove this result is r.i.p.e.\ generic enumerations, which are studied at the beginning of the section. To finish the section we compare the enumeration degree spectra obtained by applying various operations to structures, such as the positive jump, the classical jump, and the totalization.
In \cref{sec:functors} we define an effective version of interpretability using $\Sigma^p_1$ formulas, positive interpretability. We show that a structure ${\mathcal{A}}$ is positively interpretable in a structure ${\mathcal{B}}$ if and only if there is a positive enumerable functor from the isomorphism class of ${\mathcal{B}}$ to the isomorphism class of ${\mathcal{A}}$. Positive enumerable functors and related effectivizations of functors were studied by Csima, Rossegger, and Yu~\cite{CsimaRY21}. Positive interpretability allows for a useful strengthening that preserves most properties of a structure, positive bi-interpretability. We show that two structures ${\mathcal{A}}$ and ${\mathcal{B}}$ are positively bi-interpretable if and only if their isomorphism classes are enumeration adjoint, that is, there is an adjoint equivalence between $Iso({\mathcal{A}})$ and $Iso({\mathcal{B}})$ witnessed by enumeration operators. \section{First results on r.i.p.e.\ relations}\label{sec:ripe}
In our proofs we will often build structures in stages by copying increasing pieces of finite substructures of a given structure ${\mathcal{A}}$. The following definitions will be useful for this. \begin{defn}
Given an $\mathcal L$-structure ${\mathcal{A}}$ and $\overline a\in A^{<\omega}$, let $\restupa{\overline a}$ denote the positive diagram of the substructure of ${\mathcal{A}}$ with universe $\overline a$ in the restriction of $\mathcal L$ to the first $|\overline a|$ relation symbols. \end{defn}
\begin{defn}\label{partial}
Let ${\mathcal{A}}$ be an $\mathcal L$-structure and $\overline{a}= \langle a_0, \dots , a_{s-1} \rangle\in A^s$. The set $P_{{\mathcal{A}}}(\overline a)$ is the pullback of $\restupa{\overline a}$ along the index function of $\overline a$, i.e.,
\[ \langle i,n_0,\dots, n_{r_i}\rangle\in P_{{\mathcal{A}}}(\overline a) \Leftrightarrow \langle i, a_{n_0},\dots, a_{n_{r_i}}\rangle \in \restupa{\overline a}.\] \end{defn} The main feature of \cref{partial} is that if we approximate the positive diagram of a structure ${\mathcal{A}}$ in stages by considering larger and larger tuples, i.e., $\lim_{s\in\omega} \restupa{\overline a_s}=P({\mathcal{A}})$, then the limit of $P_{{\mathcal{A}}}(\overline a_s)$ gives a structure isomorphic to ${\mathcal{A}}$. We will use this fact to obtain generic copies of a given structure.
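The pullback of \cref{partial} is mechanical enough to sketch in code. The following Python fragment is purely illustrative and not part of the formal development: it represents a finite positive diagram as a set of tuples $(i, a_1, \dots, a_r)$, read as ``relation $R_i$ holds of $(a_1,\dots,a_r)$'', an encoding chosen here for readability rather than the paper's pairing function.

```python
# Hypothetical sketch: pull back a finite positive diagram along the
# index function of a tuple abar, replacing elements by their indices.

def pullback(diagram, abar):
    idx = {a: n for n, a in enumerate(abar)}  # index function of abar
    return {(i,) + tuple(idx[a] for a in args)
            for (i, *args) in diagram
            if all(a in idx for a in args)}   # keep only facts about abar

# Elements 7 and 4 sit at indices 0 and 1 of abar = (7, 4), so the
# fact "R_0(4, 7)" pulls back to the index tuple (0, 1, 0).
print(pullback({(0, 4, 7)}, (7, 4)))  # {(0, 1, 0)}
```

Approximating a structure in stages then amounts to computing these pullbacks along longer and longer tuples.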
We denote by $\Psi_e$ the $e$-th enumeration operator in a fixed computable enumeration of all enumeration operators and by $\Psi_{e,s}$ its stage~$s$ approximation. Without loss of generality we make the common assumption that $\Psi_{e,s}$ is finite and contains no pair $\langle n,D\rangle$ with code greater than $s$. Notice that $\Psi_{e,s}$ is itself an enumeration operator.
In our proofs we also frequently argue that a set $A$ is enumeration reducible to a set $B$ by using a characterization of enumeration reducibility due to Selman~\cite{selman1971a}. \begin{thm}[Selman~\cite{selman1971a}]
For any $A, B \subseteq \omega$
\begin{equation*}
A \leq_e B \quad \text{ iff } \quad \forall X[B \text{ is c.e. in } X \Rightarrow A \text{ is c.e. in } X]
\end{equation*} \end{thm} We refer the reader to Cooper~\cite{cooper2003} for a proof of this result and further background on enumeration reducibility and enumeration degrees.
Notice that by \cref{ripedefn} $({\mathcal{A}},R)$ is technically not a first-order structure, as $R\subseteq \omega^{<\omega}$. We may however still think of it as a first-order structure in the language expanded by relation symbols $(Q_i)_{i\in\omega}$, each $Q_i$ of arity $i$, where $Q_i^{\mathcal{A}}=\{ \bar a \in A^{i}: \bar a\in R^{\mathcal{A}}\}$. The positive diagrams $P({\mathcal{A}},R^{\mathcal{A}})$ and $P({\mathcal{A}},(Q_i^{\mathcal{A}})_{i\in\omega})$ are enumeration equivalent. \subsection{A syntactic characterization} The main purpose of this section is to show that being r.i.p.e.\ in a structure ${\mathcal{A}}$ is equivalent to being definable in ${\mathcal{A}}$ by infinitary formulas of the following type. \begin{defn}
A \emph{positive computable infinitary $\Sigma_1^p$ formula} is a formula of the infinitary logic $L_{\omega_1\omega}$ of the form
\begin{equation*}
\varphi(\overline{x}) \equiv \bigdoublevee_{i \in I} \exists \overline{y}_i \varphi_i (\overline{x},\overline{y}_i)
\end{equation*}
where each $\varphi_i$ is a finite conjunction of atomic formulas, and the index set $I$ is c.e. \end{defn} Notice that the above definition includes all c.e.\ disjunctions of finitary $\Sigma^0_1$ formulas without negation and implication symbols, as every such formula is equivalent to a finite disjunction of conjunctions with each existential quantifier occurring in front of the conjunctions.
\begin{defn}
A relation $R\subseteq \omega^{<\omega}$ is \emph{$\Sigma^p_1$-definable with parameters $\bar c$} in a structure ${\mathcal{A}}$ if there exists a uniformly computable sequence of $\Sigma^p_1$ formulas $(\varphi_i(x_1,\dots,x_i, y_1,\dots,y_{|\bar c|}))_{i\in\omega}$ such that for all $\bar a\in \omega^{<\omega}$
\[ \bar a\in R \Leftrightarrow {\mathcal{A}}\models \varphi_{|\bar a|}(\bar a, \bar c).\] \end{defn} \begin{thm}\label{ripe}
Let ${\mathcal{A}}$ be a structure and $R \subseteq A^{< \omega}$ a relation on it. Then the following are equivalent:
\begin{enumerate}\tightlist
\item[(i)] $R$ is relatively intrinsically positively enumerable in ${\mathcal{A}}$,
\item[(ii)] $R$ is $\Sigma^p_1$-definable in ${\mathcal{A}}$ with parameters.
\end{enumerate} \end{thm}
\begin{proof} Assuming (ii), there is a uniformly computable sequence $(\varphi_i(\overline x,\overline z))_{i\in\omega}$ of $\Sigma_1^p$ formulas where each $\varphi_i$ is of the form $\bigdoublevee_{j \in \omega} \exists \overline{y}_{i,j} \psi_{i,j} (\overline{x},\overline{y}_{i,j},\overline{z})$ with the property that for every ${\mathcal{B}}\cong {\mathcal{A}}$ there is a tuple $\bar c\in \omega^{|\bar z|}$ such that for all $i\in\omega$ and $\bar a\in\omega^{i}$, ${\mathcal{B}}\models \varphi_i(\bar a,\bar c)$ if and only if $\bar a \in R^{{\mathcal{B}}}$.
Recall that each $\psi_{i,j}$ is a finite conjunction of atomic formulas, i.e., $\psi_{i,j}=\theta_1(\overline x, \overline y_{i,j},\overline z)\land \dots \land \theta_n(\overline x, \overline y_{i,j},\overline z)$ for some $n\in \omega$. For $\theta(\overline x)$ an atomic formula, let $\ulcorner \theta(\overline a)\urcorner$ denote the code of $\theta(\overline a)$ in the positive diagram of a structure. For example, if $\theta(\overline x)=R_i(x_3,x_5)$, then $\ulcorner \theta(\overline a)\urcorner=\langle i+2, \langle a_3,a_5\rangle\rangle$ for $\overline a\in \omega^{<\omega}$. Consider the set $X^{\overline a, \overline b,\overline c}_{i,j}=\{\ulcorner \theta_k(\overline a,\overline b,\overline c)\urcorner: k<n\}$. Clearly, $X^{\overline a,\overline b,\overline c}_{i,j}\subseteq P({\mathcal{A}})$ if and only if ${\mathcal{A}}\models \psi_{i,j}(\overline a,\overline b,\overline c)$ for any $L$-structure ${\mathcal{A}}$. We define an enumeration operator $\Psi$ by enumerating all pairs $\langle X_{i,j}^{\overline a,\overline b,\overline c},\overline a\rangle$ into $\Psi$. We have that \begin{align*}
\overline a\in \Psi^{P({\mathcal{B}})}&\Leftrightarrow \exists j, \overline b\ [\langle X_{|\overline a|,j}^{\overline a, \overline b, \overline c},\overline a\rangle \in \Psi \land X_{|\overline a|,j}^{\overline a, \overline b,\overline c}\subseteq P({\mathcal{B}})]\\
&\Leftrightarrow \exists j\ {\mathcal{B}}\models \exists \overline y_{|\overline a|,j}\psi_{|\overline a|,j}(\overline a, \overline y_{|\overline a|,j},\overline c)\\
&\Leftrightarrow {\mathcal{B}}\models \varphi_{|\overline a|}(\bar a, \bar c) \end{align*} and thus $R$ is r.i.p.e.
To show that (i) implies (ii) we build an enumeration $g:\omega \rightarrow {\mathcal{A}}$ by constructing a nested sequence of tuples $\{\overline{p}_s\}_{s \in \omega} \subseteq A^{< \omega}$ and letting $g = \bigcup_s \overline{p}_s$. We then define ${\mathcal{B}}=g^{-1}({\mathcal{A}})$. Our goal is to produce a $\Sigma^p_1$ definition of $R^{\mathcal{B}}$. To obtain it we try to force during the construction that $R^{\mathcal{B}}=g^{-1}(R)\neq \Psi_e^{P({\mathcal{B}})}$. As $R$ is r.i.p.e.\ this will fail for some $e$ and we will use the failure to get the syntactic definition.
To construct ${\mathcal{B}}$ we do the following. Let $\overline p_0$ be the empty sequence. At stage $s+1=2e+1$, if the $e$-th element of $A$ is not already in $\overline{p}_s$, then we let $\overline{p}_{s+1}=\overline{p}_s^\frown e$; otherwise let $\overline{p}_{s+1}=\overline{p}_s$. This guarantees that $g$ is onto. At stage $s+1=2e$ we try to force $\Psi_e^{P({\mathcal{B}})}\not \subseteq g^{-1}(R)$, for which we need a tuple $\langle j_1,\dots, j_l \rangle \in \Psi_e^{P({\mathcal{B}})}$ with $\langle g(j_1),\dots, g(j_l) \rangle \not \in R$. To do this we ask if there is an extension $\overline{q}$ of $\overline{p}_s$ in the set \begin{equation*}
Q_e = \{ \overline{q} \in A^{< \omega} : \exists l, j_1 ,\dots, j_l < |\overline{q}| \, [\langle j_1,\dots, j_l \rangle \in \Psi_e^{P_{\mathcal{A}}(\overline{q})} \text{ and } \langle q_{j_1},\dots, q_{j_l} \rangle \not \in R] \}. \end{equation*} If there is one, let $\overline{p}_{s+1} = \overline{q}$; otherwise let $\overline{p}_{s+1}=\overline{p}_s$. This concludes the construction.
If at any stage $s+1 = 2e$ we succeed in defining $\overline{p}_{s+1}=\overline{q}$ for some $\overline{q} \in Q_e$ then we will have made $\Psi_e^{P({\mathcal{B}})} \neq g^{-1}(R)$. But by our assumption this must fail for some $e \in \omega$. For this $e$, at stage $s+1=2e$ we will not have been able to find an extension of $\bar p_{s}$ in $Q_e$. We will use this to give a $\Sigma^p_1$ definition of $R$ with parameters $\bar p_s$.
Notice that if there is some $\bar q \supseteq \bar p_s$ and sub-tuple $\langle q_{j_1} , \dots, q_{j_l} \rangle$ such that $\langle j_1,\dots , j_l \rangle \in \Psi_e^{P_{\mathcal{A}}(\overline{q})}$ then we must have $\langle q_{j_1} , \dots, q_{j_l} \rangle \in R$ or else we will have that $\bar q \in Q_e$. We now show that $R$ is equal to the set \begin{align*}
S = \{ \langle q_{j_1},\dots , q_{j_l} \rangle \in A^{< \omega} : \text{ for some } \bar q \in A^{< \omega} \text{ and } l , j_1, \dots , j_l < |\bar q | \text{ satisfying } \bar q \supseteq \bar p_s \\ \text{ and } \langle j_1 ,\dots , j_l \rangle \in \Psi_e^{P_{\mathcal{A}}(\bar q)}\} \end{align*}
By the previous paragraph $S \subseteq R$. If $\bar a \in R$ then there are indices $j_1,\dots, j_{|\bar a|}$ such that $\bar a = \langle g(j_1), \dots g(j_{|\bar a|}) \rangle$ and so if we take a long enough initial segment of $g$ it will witness the fact that $\bar a \in S$. Fix an enumeration $(\varphi_i^{at})_{i\in\omega}$ of all atomic formulas where without loss of generality the free variables of $\varphi_i^{at}$ are a subset of $\{x_0,\dots, x_{i}\}$. The following is a $\Sigma^p_1$ definition of $S$ \[
\bigdoublevee_{ C \subset_{\text{fin}} \omega} \bigvee_{\langle j_1,\dots , j_{|\bar a|} \rangle \in \Psi_e^{C}} \exists\, \bar q \supseteq \bar p_s \, [\langle q_{j_1}, \dots ,q_{j_{|\bar a|}} \rangle = \bar a \, \wedge \, \bigwedge_{i \in C} [\varphi^{at}_i]\frac{q_0}{x_0}\cdots\frac{q_{|\bar q|}}{x_{|\bar q|}} ] \] where the latter half of the formula simply says that $C \subseteq P_{\mathcal{A}}(\bar q)$. \end{proof}
\begin{cor}\label{uripe}
Let ${\mathcal{A}}$ be a structure and $R \subseteq A^{< \omega}$ be a relation on it. Then the following are equivalent:
\begin{enumerate}\tightlist
\item[(i)] $R$ is uniformly relatively intrinsically positively enumerable in ${\mathcal{A}}$,
\item[(ii)] $R$ is $\Sigma_1^p$-definable in ${\mathcal{A}}$ without parameters.
\end{enumerate} \end{cor} \begin{proof}
For $(i) \Rightarrow (ii)$ we mimic the proof of Theorem \ref{ripe}. Let $\Psi_e$ be the fixed enumeration operator such that $R^{{\mathcal{B}}} = \Psi_e^{P({\mathcal{B}})}$ for every copy ${\mathcal{B}}$, and let $Q_e$ be as above. Note that no $\bar q$ can be in $Q_e$, and so we mimic the construction of our set $S$ with $\bar p_s$ being the empty tuple.
For $(ii) \Rightarrow (i)$ we again mimic the same direction in Theorem \ref{ripe}, excluding the parametrizing tuple $\bar c$ to make the process uniform. \end{proof} \subsection{R.i.p.e.\ completeness} Similar to the study of computably enumerable sets, we want to investigate notions of completeness for r.i.p.e.\ relations on a given structure. Before we obtain a natural example of a r.i.p.e.\ complete relation we have to define a suitable notion of reduction. \begin{defn} Given a structure ${\mathcal{A}}$ and two relations $P,R\subseteq A^{<\omega}$, we say that $P$ is \emph{positively intrinsically one reducible} to $R$, and write $P\redu R$, if for all ${\mathcal{B}}\cong {\mathcal{A}}$, $P({\mathcal{B}})\oplus P^{\mathcal{B}}\leq_1 P({\mathcal{B}})\oplus R^{\mathcal{B}}$. \end{defn} \begin{proposition}
Positive intrinsic one reducibility ($\redu$) is a reducibility. \end{proposition} \begin{proof}
Let ${\mathcal{A}}$ be a structure with relations $P, Q, R \subseteq A^{<\omega}$. It is easy to see that $\redu$ is reflexive, since for any structure ${\mathcal{B}} \cong {\mathcal{A}}$ we have that $P({\mathcal{B}}) \oplus P^{\mathcal{B}} \leq_1 P({\mathcal{B}}) \oplus P^{\mathcal{B}}$, which means $P \redu P$. To see that it is transitive assume that $P\redu R$ and $R \redu Q$ and let ${\mathcal{B}} \cong {\mathcal{A}}$. By assumption $P({\mathcal{B}}) \oplus P^{\mathcal{B}} \leq_1 P({\mathcal{B}}) \oplus R^{\mathcal{B}} \leq_1 P({\mathcal{B}}) \oplus Q^{\mathcal{B}}$ and so $P \redu Q$. \end{proof} \begin{defn}
Fix a structure ${\mathcal{A}}$. A relation $R\subseteq A^{<\omega}$ is \emph{r.i.p.e.\ complete} if $R$ is r.i.p.e.\ and for every r.i.p.e.\ relation $P$ on ${\mathcal{A}}$, $P\redu R$. \end{defn} The most natural way to obtain a complete set is to follow the construction of the Kleene set and take the computable join of all r.i.p.e.\ sets. The result is a relation $R\subseteq \omega\times A^{<\omega}$ which can be seen as a uniform sequence of r.i.p.e.\ relations in the sense that there is an enumeration operator $\Psi$ such that $(\Psi^{P({\mathcal{A}})})^{[i]}=R_{i}$. The issue is that this does not behave well under isomorphism. However, this can easily be overcome using the following coding. Given a relation $R \subseteq \omega \times A^{<\omega}$, we can identify it with a subset $R'\subseteq A^{<\omega}$ as follows. For any two distinct elements $b,c \in A$ let \[ \overbrace{b\dots b}^{i\times}c\bar a \in R'\iff\langle i, \bar a \rangle \in R. \] One can now easily see that $R$ is a uniform sequence of r.i.p.e.\ relations if and only if the relation $R'$ so obtained is uniformly r.i.p.e.
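The coding of $\omega\times A^{<\omega}$ into $A^{<\omega}$ above is easy to make explicit. The Python sketch below is illustrative only: the helpers \texttt{encode} and \texttt{decode} are ours, and we fix one choice of distinct elements $b,c$, whereas the definition ranges over all such pairs.

```python
# Hypothetical sketch of the coding <i, abar> |-> b...b c abar,
# for two fixed distinct elements b and c of the structure.

def encode(i, abar, b, c):
    return (b,) * i + (c,) + tuple(abar)

def decode(t, b, c):
    i = 0
    while t[i] == b:          # count the leading b's
        i += 1
    assert t[i] == c          # the first non-b entry marks the separator
    return i, tuple(t[i + 1:])

t = encode(3, (5, 9), 0, 1)
print(t)                       # (0, 0, 0, 1, 5, 9)
print(decode(t, 0, 1))         # (3, (5, 9))
```

Since $b\neq c$, the first occurrence of $c$ unambiguously separates the counter from the payload tuple, so the coding is injective for each fixed pair $b,c$.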
We can now give a natural candidate for a r.i.p.e.\ complete relation. \begin{defn}
Let $\varphi_{i,j}^{\Sigma^p_1}$ be the $i$-th formula with free variables $x_1,\dots, x_j$ in a computable enumeration of all $\Sigma^p_1$ formulas.
The \emph{positive Kleene predicate} relative to ${\mathcal{A}}$ is $\vec K^{\mathcal{A}}_p = \left(K^{\mathcal{A}}_{i}\right)_{i\in\omega}$, where
\begin{equation*}
K^{{\mathcal{A}}}_{i} = \bigcup_{j\in\omega}\{ \bar a \in A^{j} : {\mathcal{A}} \models \varphi^{\Sigma_1^p}_{i,j}(\bar a)\}.
\end{equation*} \end{defn} Notice that we defined the positive Kleene predicate as a sequence of relations instead of a single relation. It is slightly more convenient as we do not have to deal with coding. Another alternative definition would be to break down the sequence even further and let the Kleene predicate be the sequence \[ (\{\overline a: {\mathcal{A}}\models \varphi_{i,j}^{\Sigma_1^p}(\overline a)\})_{\langle i,j\rangle\in\omega}\] so that $({\mathcal{A}},\vec K_p^{\mathcal{A}})$ is a first order structure. However, as all of these definitions are computationally equivalent these distinctions are irrelevant for our purpose.
\begin{proposition} The positive Kleene predicate $\vec K_p^{\mathcal{A}}$ is uniformly r.i.p.e., and r.i.p.e.\ complete. \end{proposition} \begin{proof}
Since $\vec K^{\mathcal{A}}_p$ is defined in a $\Sigma_1^p$ way without parameters, we can use Corollary \ref{uripe} to see that it is uniformly relatively intrinsically positively enumerable. Let $R$ be any r.i.p.e.\ relation on ${\mathcal{A}}$ of arity $a_R$. By Theorem \ref{ripe}, $R$ is $\Sigma_1^p$ definable, and so there is a formula $\varphi_{i,a_R}^{\Sigma_1^p}$ such that $\bar a \in R^{\mathcal{B}} \Leftrightarrow {\mathcal{B}} \models \varphi_{i,a_R}^{\Sigma_1^p}(\bar a)$. This shows that $P({\mathcal{B}}) \oplus R^{\mathcal{B}} \leq_1 P({\mathcal{B}}) \oplus \vec K_p^{\mathcal{B}}$. \end{proof}
\subsection{R.i.p.e.\ sets of natural numbers} Our above discussion of sequences of r.i.p.e.\ relations allows us to code sets of natural numbers as r.i.p.e. relations. \begin{defn}
A set $X\subseteq\omega$ is \emph{r.i.p.e.\ in a structure ${\mathcal{A}}$} if the relation $R_X=X\times \{\emptystring\}$ is a r.i.p.e.\ relation, i.e., if the following relation is r.i.p.e.:
\[ \{\overbrace{b\dots b}^{i\times}c: i\in X, b,c\in A\}\] \end{defn} A natural question is which sets of natural numbers are r.i.p.e.\ in a given structure. One characterization can be derived directly from the definitions: The sets $X$ such that $X\leq_e P({\mathcal{B}})$ for all ${\mathcal{B}}\cong {\mathcal{A}}$. Another one can be given using co-spectra, a notion defined by Soskov~\cite{soskov2004}. Intuitively, the co-spectrum of a structure ${\mathcal{A}}$ is the maximal ideal in the enumeration degrees such that every member of it is below every copy of ${\mathcal{A}}$. More formally. \begin{defn}\label{def:cospectra}
The \emph{co-spectrum} of a structure ${\mathcal{A}}$ is
\[ Co({\mathcal{A}})=\bigcap_{{\mathcal{B}}\cong {\mathcal{A}}}\{ \mathbf d : \mathbf d\leq deg_e(P({\mathcal{B}}))\}.\] \end{defn} Let us point out that Soskov's definition of co-spectra appears to be different from ours. We will prove in \cref{sec:jumps} that the two definitions are equivalent. Given a tuple $\bar a$ in some structure ${\mathcal{A}}$, let $\Sigma_1^p\text{-}tp_{\mathcal{A}}(\bar a)$ be the set of positive finitary $\Sigma_1^0$ formulas true of $\bar a$ in ${\mathcal{A}}$. The equivalence of \cref{it:ripe} and \cref{it:enum} in \cref{thm:ripesets} is the analogue of a well-known theorem of Knight~\cite{knight1986} for total structures.
\begin{thm}\label{thm:ripesets}
The following are equivalent for every structure ${\mathcal{A}}$ and $X\subseteq \omega$.
\begin{enumerate}\tightlist
\item\label{it:ripe} $X$ is r.i.p.e.\ in ${\mathcal{A}}$,
\item\label{it:cosp} $deg_e(X)\in Co({\mathcal{A}})$,
\item\label{it:enum} $X$ is enumeration reducible to $\Sigma_1^p\text{-}tp_{\mathcal{A}}(\bar a)$ for some tuple $\bar a\in A^{<\omega}$.
\end{enumerate} \end{thm} \begin{proof}
If $deg_e(X)\in Co({\mathcal{A}})$, then for all ${\mathcal{B}}\cong {\mathcal{A}}$, $X\leq_e P({\mathcal{B}})$. Given an enumeration of $X$, enumerate $b^nc$ into $R_X$ for all elements $b,c\in B$ whenever you see $n$ enter $X$. The relation $R_X$ clearly witnesses that $X$ is r.i.p.e.\ in ${\mathcal{A}}$. This shows that \cref{it:cosp} implies \cref{it:ripe}. On the other hand if $X$ is r.i.p.e.\ in ${\mathcal{A}}$, then given any $B\cong {\mathcal{A}}$ and an enumeration of $R_X^{\mathcal{B}}$ build a set $S$ by enumerating $n$ into $S$ whenever you see $b^n c$ enumerated into $R_X^{\mathcal{B}}$ for any two elements $b,c\in B$. Clearly $n\in S$ if and only if $n\in X$ and thus \cref{it:ripe} implies \cref{it:cosp}.
To see that \cref{it:enum} implies \cref{it:cosp}, assume that $X$ is enumeration reducible to the positive $\Sigma_1$ type of a tuple $\bar b$ in any copy ${\mathcal{B}}$ of ${\mathcal{A}}$. As $\Sigma_1^p\text{-}tp_{\mathcal{B}}(\bar b)$ is enumeration reducible to $P({\mathcal{B}})$, by transitivity $X\leq_e P({\mathcal{B}})$ for every ${\mathcal{B}}\cong {\mathcal{A}}$ and thus $deg_e(X)\in Co({\mathcal{A}})$.
Finally, we show that \cref{it:ripe} implies \cref{it:enum}. Assume that $X$ is r.i.p.e.\ in ${\mathcal{A}}$. Then there is a computable sequence of $\Sigma^p_1$ formulas $(\psi_n)_{n\in\omega}$ such that for some parameter tuple $\bar a\in A^{<\omega}$, $n\in X \Leftrightarrow {\mathcal{A}}\models \psi_n(\bar a)$. Simultaneously enumerate $\Sigma^p_1\text{-}tp_{\mathcal{A}}(\bar a)$ and the disjuncts of the formulas $\psi_n$. Whenever you see a disjunct of $\psi_n$ that is also in $\Sigma^p_1\text{-}tp_{\mathcal{A}}(\bar a)$, enumerate $n$. This gives an enumeration of $X$ from any enumeration of $\Sigma^p_1\text{-}tp_{\mathcal{A}}(\bar a)$, and thus $X\leq_e \Sigma^p_1\text{-}tp_{\mathcal{A}}(\bar a)$ as required.
\end{proof}
\section{The positive jump and degree spectra}\label{sec:jumps} In this section we compare various definitions of the jump of a structure with respect to their enumeration degree spectra, a notion first studied by Soskov~\cite{soskov2004}. Before we introduce it, let us recall the definition of the jump of an enumeration degree. \begin{defn}
Let $K_A = \{x \, | \, x \in \Psi^A_x\}$. The \emph{enumeration jump} of a set $A$ is $J_e(A) := A \oplus \overline{K}_A$. The jump of an $e$-degree $\mathbf{a} = \text{deg}_e(A) = \{X : X \equiv_e A\}$ is $\mathbf{a}' = \text{deg}_e(J_e(A))$. \end{defn} \begin{defn}
The \emph{positive jump} of a structure ${\mathcal{A}}$ is the structure
\begin{equation*}
PJ({\mathcal{A}})=({\mathcal{A}}, \overline{\vec K^{{\mathcal{A}}}_p})=({\mathcal{A}},(\overline{K^{{\mathcal{A}}}_i})_{i\in\omega}).
\end{equation*} \end{defn} We are interested in the degrees of enumerations of $P({\mathcal{A}})$ and $P(PJ({\mathcal{A}}))$. To be precise, let $f$ be an enumeration of $\omega$, that is, a surjective mapping $\omega \rightarrow \omega$, and for $X\subseteq \omega^{<\omega}$ let \[ f^{-1}(X)=\{ \langle x_1,\dots,x_k\rangle: \langle f(x_1),\dots, f(x_k)\rangle\in X\}.\] Then, given a structure ${\mathcal{A}}$ let $f^{-1}({\mathcal{A}})=(\omega, f^{-1}(=),f^{-1}(\neq),f^{-1}(R_1^{\mathcal{A}}),f^{-1}(R_2^{\mathcal{A}}),\dots)$. This definition differs from the definition given in Soskov~\cite{soskov2004} where $f^{-1}({\mathcal{A}})$ means what we will denote as $P(f^{-1}({\mathcal{A}}))$, i.e., \[ =\oplus\neq \oplus f^{-1}(=)\oplus f^{-1}(\neq)\oplus \bigoplus_{i\in\omega} f^{-1}(R_i^{\mathcal{A}}). \]
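The pullback $f^{-1}(X)$ along an enumeration $f$ can likewise be sketched on finite fragments. The Python fragment below is illustrative only: it checks arities $1$ and $2$ over a finite set of indices, while the actual definition quantifies over all tuples.

```python
# Illustrative sketch: the pullback f^{-1}(X) of a relation X along an
# enumeration f, computed over a finite set of indices and small arities.
from itertools import product

def pullback_relation(x, f, indices, arities=(1, 2)):
    """Index tuples whose f-images form a tuple in the relation x."""
    return {t for r in arities
            for t in product(indices, repeat=r)
            if tuple(f[i] for i in t) in x}

f = {0: 'a', 1: 'b'}              # an enumeration listing a, b, ...
x = {('a',), ('b', 'a')}          # a relation on the structure
print(sorted(pullback_relation(x, f, [0, 1])))  # [(0,), (1, 0)]
```

Since $f$ is onto, every fact about the structure is hit by some index tuple, which is why $f^{-1}({\mathcal{A}})$ is a copy of ${\mathcal{A}}$ whenever $f$ is, in addition, injective.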
Using this notation we can see that for any structure ${\mathcal{A}}$, and any enumeration $f$ of $\omega$ we have that $PJ(f^{-1}({\mathcal{A}}))$ is the structure $(f^{-1}({\mathcal{A}}),\overline{ \vec K_p^{f^{-1}({\mathcal{A}})}})$. Thus \begin{equation*}
P(PJ(f^{-1}({\mathcal{A}}))) = f^{-1}(=) \oplus f^{-1}(\neq)\oplus \bigoplus_{j \in \omega} \overline{K_j^{f^{-1}({\mathcal{A}})}} \oplus\bigoplus_{i \in \omega} R^{f^{-1}({\mathcal{A}})}_i. \end{equation*} If we instead apply the enumeration to $PJ({\mathcal{A}})$ we will get the structure $f^{-1}(PJ({\mathcal{A}}))$. Now, for every relation $R$ on ${\mathcal{A}}$, $R^{f^{-1}({\mathcal{A}})} = f^{-1}(R^{\mathcal{A}})$ and thus $\overline{ K_j^{f^{-1}({\mathcal{A}})}} = f^{-1}\left(\overline{ K_j^{{\mathcal{A}}}}\right)$. So $P(f^{-1}(PJ({\mathcal{A}}))) = P(PJ(f^{-1}({\mathcal{A}})))$.
\subsection{R.i.p.e.\ generic presentations} \begin{defn}
Let $A^* = \{\sigma \in A^{<\omega} : (\forall i \neq j < |\sigma|)[\sigma(i) \neq \sigma(j)]\}$. We say that $\gamma \in A^*$ \emph{decides} an upwards closed subset $R\subseteq A^*$ if $\gamma \in R$ or $\sigma \not \in R$ for all $\sigma \supseteq \gamma$. A $1{-}1$ function $g:\omega \rightarrow {\mathcal{A}}$ is a \emph{r.i.p.e.-generic enumeration} if for every r.i.p.e.\ subset $R \subseteq A^*$ there is an initial segment of $g$ that decides $R$. ${\mathcal{B}}$ is a \emph{r.i.p.e.-generic presentation} of ${\mathcal{A}}$ if ${\mathcal{B}}=g^{-1}({\mathcal{A}})$ for some r.i.p.e.-generic enumeration $g$ of ${\mathcal{A}}$. \end{defn} The following lemma is an analogue of the well-known result that there is a $\Delta^0_2$ 1-generic. \begin{lem}\label{lemma:generic}
Every structure ${\mathcal{A}}$ has a r.i.p.e.-generic enumeration $g$ such that $Graph(g) \leq_e P(PJ({\mathcal{A}}))$. In particular, $P(g^{-1}(PJ({\mathcal{A}})))\leq_e P(PJ({\mathcal{A}}))$. \end{lem} \begin{proof}
Fix a $P({\mathcal{A}})$-computable enumeration $(S_i)_{i\in\omega}$ of all r.i.p.e.\ subsets of $A^*$.
We define our enumeration $g$ to be the limit of an increasing sequence of strings $\{\bar p_s \in A^* : s \in \omega\}$. The strings $\bar p_s$ are defined as follows:
\begin{enumerate}
\item $\bar p_{-1}=\emptystring$
\item To define $\bar p_{s+1}$, check if there is a $\bar q \in S_{s+1}$ such that $\bar q \supset \bar p_s$. If there is, let $\bar p_{s+1}$ be the least such tuple; otherwise define $\bar p_{s+1} = \bar p_s$.
\end{enumerate}
Notice that $g=\bigcup_{s\in\omega} \bar p_s$ is onto, as for every $n\in\omega$ the set $D_n=\{ \bar a\in A^*: \exists j<|\bar a|\ \bar a(j)=n\}$ is a dense r.i.p.e.\ subset of $A^*$. Thus there is an $e$ such that $S_e=D_n$ and $\bar p_e$ forces $g$ into $D_n$.
Thus, $g$ is a $1{-}1$ r.i.p.e.-generic enumeration of ${\mathcal{A}}$.
The set $\{ \bar p : \exists \bar q \supset \bar p \,, \bar q \in S_e\}$ is $\Sigma_1^p$-definable in ${\mathcal{A}}$ and so enumerable from $P(PJ({\mathcal{A}}))$, which contains $P({\mathcal{A}})$. The set $\{ \bar p : \forall \bar q \supset \bar p, \bar q \not \in S_e\}$ is co-r.i.p.e.\ and so enumerable from $\overline{\vec K^{\mathcal{A}}_p}$. Hence $P(PJ({\mathcal{A}}))$ can decide whether a tuple $\bar p_s$ belongs to the upward closure of $S_e$. Thus $Graph(g) \leq_e P(PJ({\mathcal{A}}))$.
If we can enumerate the graph of $g$ and also $PJ({\mathcal{A}})$, then to enumerate $g^{-1}(PJ({\mathcal{A}}))$ we wait for a pair $(x,g(x)) \in Graph(g)$ to appear with $g(x) \in PJ({\mathcal{A}})$ and enumerate $x$ into the corresponding slice of $g^{-1}(PJ({\mathcal{A}}))$. \end{proof} R.i.p.e.\ generic presentations have many useful properties. One of them is that they are minimal in the sense that the only sets enumeration reducible to a r.i.p.e.\ generic presentation are the r.i.p.e.\ sets. \begin{lem}\label{generic-ish}
If ${\mathcal{B}}$ is a r.i.p.e.-generic presentation of ${\mathcal{A}}$, then $X\subseteq \omega$ is r.i.p.e. in ${\mathcal{B}}$ if and only if $X\leq_e P({\mathcal{B}})$. \end{lem} \begin{proof}
If $X$ is r.i.p.e., then it is enumerable from $P({\mathcal{B}})$ by definition. Conversely, assume $X$ is enumerable from $P({\mathcal{B}})$. Then $X = \Psi^{P({\mathcal{B}})}_e$ for some $e$. Recall the set $Q_e$ from the proof of \cref{ripe}, which we know to be r.i.p.e.\ because we gave a $\Sigma_1^p$ description of it:
\begin{equation*}
Q_e = \{ \overline{q} \in A^{< \omega} : \exists l, j_1 ,\dots j_l < |\overline{q}| \, [\langle j_1,\dots, j_l \rangle \in \Psi_e^{P_{\mathcal{B}}(\overline{q})} \text{ and } \langle q_{j_1},\dots, q_{j_l} \rangle \not \in X] \}.
\end{equation*}
Now, because ${\mathcal{B}}$ is r.i.p.e.-generic, there is some tuple $\langle 0,\dots ,k-1 \rangle$ that decides $Q_e$. We cannot have $\langle 0,\dots ,k-1 \rangle \in Q_e$, as this would contradict the fact that $X = \Psi^{P({\mathcal{B}})}_e$, and so $\langle 0,\dots ,k-1 \rangle$ serves as the parameterizing tuple $\bar p_s$ in the definition of the set $S$ from the proof of \cref{ripe}. \end{proof}
Another useful property is that the degree of the positive jump of a r.i.p.e.\ generic presentation agrees with the enumeration jump of the degree of its positive diagram. \begin{proposition}\label{propn generic} Let ${\mathcal{A}}$ be a structure. For an arbitrary enumeration $f$ of ${\mathcal{A}}$, let ${\mathcal{B}} = f^{-1}({\mathcal{A}})$. Then $P(PJ({\mathcal{B}})) \leq_e J_e(P({\mathcal{B}}))$. Furthermore, if ${\mathcal{B}}$ is a r.i.p.e.-generic presentation then $P(PJ({\mathcal{B}})) \equiv_e J_e(P({\mathcal{B}}))$. \end{proposition} \begin{proof} Recall that $PJ({\mathcal{B}}) = ({\mathcal{B}}, \overline{\vec K_p^{{\mathcal{B}}}})$ and $J_e(P({\mathcal{B}})) = P({\mathcal{B}}) \oplus \overline{K}_{P({\mathcal{B}})}$, so to show that $P(PJ({\mathcal{B}})) \leq_e J_e(P({\mathcal{B}}))$ it is sufficient to show that $\overline{\vec K_p^{{\mathcal{B}}}} \leq_e \overline{K}_{P({\mathcal{B}})}$. As $\vec K_p^{{\mathcal{B}}}$ is r.i.p.e., $\vec K_p^{{\mathcal{B}}} = \Psi^{P({\mathcal{B}})}_e$ for some $e$. Thus $\overline{\vec K_p^{{\mathcal{B}}}}$ appears as a slice of $\{\langle x,i \rangle : x \not \in \Psi^{P({\mathcal{B}})}_i\}$ and hence $\overline{\vec K_p^{{\mathcal{B}}}}\leq_e\{\langle x,i \rangle : x \not \in \Psi^{P({\mathcal{B}})}_i\}$. The latter set is $m$-equivalent to $\overline{K}_{P({\mathcal{B}})}$ and thus $\overline{\vec K_p^{{\mathcal{B}}}}\leq_e\overline{K}_{P({\mathcal{B}})}$.
It remains to show that $J_e(P({\mathcal{B}}))\leq_e P(PJ({\mathcal{B}}))$ if ${\mathcal{B}}$ is r.i.p.e.-generic. For every $e$ consider the set \[ R_e = \{\bar q\in B^* : e \in \Psi_e^{P_{\mathcal{B}}(\bar q)}\}.\] Note that the $R_e$ are closed upwards as subsets of $B^*$ and r.i.p.e. Since ${\mathcal{B}}$ is r.i.p.e.-generic, for every $e$ there is an initial segment of $B$ that either is in $R_e$ or has no extension in $R_e$. The set $Q_e=\{ \bar p\in B^* :\forall (\bar q\supseteq \bar p)\, (\bar q \not \in R_e)\}$ of elements of $B^*$ that have no extension in $R_e$ is co-r.i.p.e. Note also that these sets are uniform in $e$; that is, given $e$ we can compute the indices of $R_e$ and $Q_e$ as r.i.p.e., respectively co-r.i.p.e., subsets of ${\mathcal{B}}$. Thus, given an enumeration of $P(PJ({\mathcal{B}}))$ we can enumerate $P({\mathcal{B}})$ and the sets $Q_e$ and $R_e$. By genericity, for every $e\in \omega$ there is an initial segment $\bar b$ of $B$ such that either $\bar b\in Q_e$ or $\bar b \in R_e$. Whenever we see such a $\bar b$ in $Q_e$ we enumerate $e$, and in this way we obtain an enumeration of $\overline K_{P({\mathcal{B}})}$. \end{proof}
The above properties of r.i.p.e.\ generics are very useful for studying how the enumeration degree spectra of structures and their positive jumps relate. \begin{defn}[\cite{soskov2004}]\label{def:enumerationspectrum}
Given a structure ${\mathcal{A}}$, define the set of enumerations of ${\mathcal{A}}$ as $Enum({\mathcal{A}})=\{P({\mathcal{B}}): {\mathcal{B}} = f^{-1}({\mathcal{A}}) \text{ for } f \text{ an enumeration of } \omega\}$.
Further, let the \emph{enumeration degree spectrum} of ${\mathcal{A}}$ be the set
\[ eSp({\mathcal{A}})=\{ d_e(P({\mathcal{B}})): P({\mathcal{B}})\in Enum({\mathcal{A}})\}.\]
If $\mathbf a$ is the least element of $eSp({\mathcal{A}})$, then $\mathbf a$ is called the
\emph{enumeration degree} of ${\mathcal{A}}$. \end{defn} As mentioned after \cref{def:cospectra}, Soskov's definition of the co-spectrum of a structure was slightly different~\cite{soskov2004}. He defined the co-spectrum of a structure ${\mathcal{A}}$ as the set \[\{\mathbf d: \forall (\mathbf a\in eSp({\mathcal{A}}))\, \mathbf d\leq \mathbf a\}. \] We now show that the two definitions are equivalent. \begin{proposition}
For every structure ${\mathcal{A}}$, $Co({\mathcal{A}})=\{\mathbf d: \forall (\mathbf a\in eSp({\mathcal{A}})) \mathbf d\leq_e \mathbf a\}$. \end{proposition} \begin{proof}
First note that $\{\mathbf d: \forall (\mathcal C\cong{\mathcal{A}}) \mathbf d\leq deg_e(P(\mathcal C))\}\supseteq \{\mathbf d: \forall (\mathbf a\in eSp({\mathcal{A}})) \mathbf d\leq \mathbf a\}$ as every $P(\mathcal C)$ is the pullback of ${\mathcal{A}}$ along an injective enumeration.
On the other hand, for any enumeration $f:\omega\to\omega$, $f^{-1}({\mathcal{A}})/f^{-1}(=)\cong{\mathcal{A}}$. Consider the substructure ${\mathcal{B}}$ of $f^{-1}({\mathcal{A}})$ consisting of the least element of each $f^{-1}(=)$ equivalence class. Since $f^{-1}({\mathcal{A}})/f^{-1}(=)\cong{\mathcal{A}}$, we get that ${\mathcal{B}}\cong {\mathcal{A}}$. As $P(f^{-1}({\mathcal{A}}))$ contains both $f^{-1}(=)$ and $f^{-1}(\neq)$, we can compute an enumeration of $B\oplus \overline B$ from any enumeration of $P(f^{-1}({\mathcal{A}}))$, and thus also the graph of its principal function $p_B$. Let $\mathcal C=p_B^{-1}({\mathcal{B}})$. Then $\mathcal C \cong {\mathcal{A}}$ and $P(\mathcal C)\leq_e P(f^{-1}({\mathcal{A}}))$.
Thus,
\[\{\mathbf d: \forall (\mathbf a\in eSp({\mathcal{A}})) \mathbf d\leq \mathbf a\}=\{\mathbf d: \forall (\mathcal C\cong {\mathcal{A}}) \mathbf d\leq deg_e(P(\mathcal C))\}=Co({\mathcal{A}}).\] \end{proof} A set $A$ is said to be \emph{total} if $A \equiv_e A \oplus \bar A$. An enumeration degree is said to be total if it contains a total set, and a structure ${\mathcal{A}}$ is total if $P(f^{-1}({\mathcal{A}}))$ is a total set for every enumeration $f$.
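For orientation, a standard example: for any $A\subseteq\omega$, the set $A\oplus\overline A$ is total, since
\[
\overline{A\oplus\overline A}\;=\;\overline A\oplus A\;\leq_e\; A\oplus \overline A,
\]
where the reduction simply swaps the even and odd columns of the join, something an enumeration operator can do uniformly.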
\begin{proposition} For every structure ${\mathcal{A}}$, $PJ({\mathcal{A}})$ is a total structure. \end{proposition} \begin{proof}
First, note that the following equivalences hold for an arbitrary relation $R$ and enumeration $f$.
\[ x\in f^{-1}(\bar R) \Leftrightarrow f(x)\in \bar R \Leftrightarrow f(x)\not\in R \Leftrightarrow x\not\in f^{-1}(R)\Leftrightarrow x\in \bar{f^{-1}(R)}\]
Recall that for every $R_i$ in the language of ${\mathcal{A}}$, $R_i^{\mathcal{A}}$ is r.i.p.e. and that
\[ P(f^{-1}(PJ({\mathcal{A}}))) = f^{-1}(=) \oplus f^{-1}(\neq) \oplus f^{-1}(\overline{\vec K^{{\mathcal{A}}}_p}) \oplus f^{-1}(R^{\mathcal{A}}_1) \oplus \cdots\]
and we observe that all $R_i^{\mathcal{A}}$ are trivially r.i.p.e., uniformly in $i$, so in particular $\overline{f^{-1}(R_i^{\mathcal{A}})} \leq_e \overline{f^{-1}(\vec K^{\mathcal{A}}_p)}=f^{-1}(\overline{\vec K^{\mathcal{A}}_p})$. Also, $f^{-1}(\vec K^{\mathcal{A}}_p)\leq_e P(f^{-1}({\mathcal{A}}))\leq_e P(f^{-1}(PJ({\mathcal{A}})))$. Hence the complement of every column of $P(f^{-1}(PJ({\mathcal{A}})))$ is enumeration reducible to $P(f^{-1}(PJ({\mathcal{A}})))$, so this set is total. \end{proof} We will now see that the positive jump of a structure jumps, in the sense that the enumeration degree spectrum of the positive jump is indeed the set of jumps of the degrees in the enumeration degree spectrum of the structure. The following version of a theorem by Soskova and Soskov~\cite{soskova2009} is essential to our proof. \begin{thm}[{\cite[Theorem 1.2]{soskova2009}}]\label{Soskov}
Let $B$ be an arbitrary set of natural numbers. There exists a total set $F$ such that
\begin{equation*}
B \leq_e F \quad \text{and} \quad J_e(B) \equiv_e J_e(F).
\end{equation*} \end{thm}
\begin{thm}\label{theorem_espjumps}
For any structure ${\mathcal{A}}$,
\begin{equation*}
eSp(PJ({\mathcal{A}})) = \{ \mathbf{a'} : \mathbf{a} \in eSp({\mathcal{A}}) \}
\end{equation*} \end{thm}
\begin{proof}
To show that $\{ \mathbf{a'} : \mathbf{a} \in eSp({\mathcal{A}}) \}\subseteq eSp(PJ({\mathcal{A}}))$, consider an arbitrary enumeration $f$ of $\omega$ and let ${\mathcal{B}} = f^{-1}({\mathcal{A}})$. Then from Proposition \ref{propn generic} we know that $P(PJ({\mathcal{B}})) \leq_e J_e(P({\mathcal{B}}))$. Note that $P(PJ({\mathcal{B}}))=P(PJ(f^{-1}({\mathcal{A}}))) = P(f^{-1}(PJ({\mathcal{A}})))$, and $d_e(P(f^{-1}(PJ({\mathcal{A}})))) \in eSp(PJ({\mathcal{A}}))$. Since the enumeration jump is always total and enumeration degree spectra are closed upwards with respect to total degrees, $d_e(J_e(P({\mathcal{B}}))) \in eSp(PJ({\mathcal{A}}))$.
To show $eSp(PJ({\mathcal{A}})) \subseteq \{ \mathbf{a'} : \mathbf{a} \in eSp({\mathcal{A}}) \}$, let us look at $P(f^{-1}(PJ({\mathcal{A}})))$ for some enumeration $f$ of $\omega$, and again write ${\mathcal{B}} = f^{-1}({\mathcal{A}})$ so that $P(f^{-1}(PJ({\mathcal{A}}))) = P(PJ({\mathcal{B}}))$. Since $PJ({\mathcal{A}})$ is a total structure, we know that $P(PJ({\mathcal{B}}))$ is total. By \cref{lemma:generic} we can use $P(PJ({\mathcal{B}}))$ to enumerate a r.i.p.e.-generic enumeration $g$ of ${\mathcal{B}}$ such that $P(g^{-1}(PJ({\mathcal{B}}))) \leq_e P(PJ({\mathcal{B}}))$. Then, letting $\mathcal{C} = g^{-1}({\mathcal{B}})$ and using the latter part of \cref{propn generic} we know that $J_e(P(\mathcal{C})) \equiv_e P(PJ(\mathcal{C}))$ which makes $J_e(P(\mathcal{C})) \leq_e P(PJ({\mathcal{B}}))$. Using \cref{Soskov} there is a total set $F$ such that $P(\mathcal{C}) \leq_e F$ and $J_e(F) \equiv_e J_e(P(\mathcal{C})) \leq_e P(PJ({\mathcal{B}}))$.
As $F$ and $P(PJ({\mathcal{B}}))$ are total, we have that $F' \leq_T P(PJ({\mathcal{B}}))$.
We can now apply the relativized jump inversion theorem for the Turing degrees to get a set $Z\geq_T F$ such that $Z' \equiv_T P(PJ({\mathcal{B}}))$. For this $Z$, we have that $\iota(deg(Z')) = d_e(J_e(\chi_Z)) = d_e(P(PJ({\mathcal{B}}))) \in eSp(PJ({\mathcal{A}}))$ and $d_e(P({\mathcal{B}})) \leq d_e(F) \leq d_e(\chi_Z)$. Since $\chi_Z$ is total and enumeration degree spectra are upwards closed in the total degrees, this means $d_e(\chi_Z) \in eSp({\mathcal{B}}) \subseteq eSp({\mathcal{A}})$. So, in particular, $d_e(J_e(\chi_Z))=d_e(P(PJ({\mathcal{B}})))\in \{\mathbf a': \mathbf a\in eSp({\mathcal{A}})\}$. \end{proof} We now compare the enumeration degree spectrum of the positive jump of a structure ${\mathcal{A}}$ with the spectrum of the original structure and the spectrum of its traditional jump. The latter is the spectrum of the structure obtained by adding the sequence of relations \[ \vec K^{{\mathcal{A}}} = \left(\bigcup_{j\in\omega}\{ \bar a\in \omega^j : {\mathcal{A}}\models \varphi_{i,j}^{\Sigma_1^c}(\bar a)\}\right)_{i\in\omega} \] where $\varphi_{i,j}^{\Sigma_1^c}(\bar x)$ is the $i^{th}$ formula with $j$ free variables in a fixed computable enumeration of all computable infinitary $\Sigma_1$ formulas. Let $J({\mathcal{A}})=({\mathcal{A}},\vec K^{{\mathcal{A}}})$ denote the traditional jump of ${\mathcal{A}}$. This notion of jump was defined by Montalb\'an~\cite{montalban2012} and is similar to several other notions of jumps of structures that arose independently, such as Soskova's notion using Marker extensions~\cite{soskova2009} or Stukachev's version using $\Sigma$-definability~\cite{stukachev2007}.
We will also consider the structure ${\mathcal{A}}^+$, the totalization of ${\mathcal{A}}$ given by \[ {\mathcal{A}}^+=(A,(R_i^{\mathcal{A}},\bar{ R_i^{\mathcal{A}}})_{i\in\omega}).\]
We will not compare the enumeration degree spectra directly but instead the sets of enumerations. This gives more insight than comparing the degree spectra as we can make use of the following analogues to Muchnik and Medvedev reducibility for enumeration degrees. Given $\mf A,\mf B\subseteq P(\omega)$ we say that $\mf A\leq_{we} \mf B$, $\mf A$ is \emph{weakly enumeration reducible} to $\mf B$, if for every $B\in \mf B$ there is $A\in \mf A$ such that $A\leq_e B$. We say that $\mf A\leq_{se}\mf B$, $\mf A$ is \emph{strongly enumeration reducible} to $\mf B$, if there is an enumeration operator $\Psi$ such that for every $B\in\mf B$, $\Psi^B\in \mf A$.
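Note that strong enumeration reducibility implies weak enumeration reducibility: if $\Psi$ witnesses $\mf A\leq_{se}\mf B$, then for every $B\in\mf B$ the set $A=\Psi^B$ lies in $\mf A$ and satisfies $A\leq_e B$ via $\Psi$. Thus
\[
\mf A\leq_{se}\mf B\;\Longrightarrow\;\mf A\leq_{we}\mf B.
\]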
It is not hard to see that from an enumeration of ${\mathcal{A}}^+$ one can obtain an enumeration of $J({\mathcal{A}})$, and the converse holds trivially. We thus have the following. \begin{proposition}\label{prop:tradjumpiscompletion}
For every structure ${\mathcal{A}}$, $deg_e(J({\mathcal{A}}))=deg_e({\mathcal{A}}^+)$. In particular, $Enum(J({\mathcal{A}}))\equiv_{se} Enum({\mathcal{A}}^+)$. \end{proposition} \begin{proof} By replacing every subformula of the form $\neg R_i(x_1,\dots, x_m)$ by the formula $\bar R_i(x_1,\dots,x_m)$ we get a $\Sigma^p_1$ formula in the language of ${\mathcal{A}}^+$. Similarly, given a $\Sigma^p_1$ formula in the language of ${\mathcal{A}}^+$, we can obtain a $\Sigma^c_1$ formula in the language of ${\mathcal{A}}$ by substituting subformulas of the form $\bar R_i(x_1,\dots,x_m)$ with $\neg R_i(x_1,\dots,x_m)$. Indeed, we get a computable bijection between the $\Sigma^p_1$ formulas in the language of ${\mathcal{A}}^+$ and the $\Sigma^c_1$ formulas of ${\mathcal{A}}$. Thus $J({\mathcal{A}})\equiv_e ({\mathcal{A}}^+,\vec K_p^{{\mathcal{A}}^+})$, and ${\mathcal{A}}^+\equiv_e ({\mathcal{A}}^+,\vec K_p^{{\mathcal{A}}^+})$ by the first equivalence. All of these equivalences are witnessed by fixed enumeration operators, and thus $Enum(J({\mathcal{A}}))\equiv_{se} Enum({\mathcal{A}}^+)$. \end{proof} \cref{prop:tradjumpiscompletion} is not very surprising, as adding a r.i.c.e.\ set such as $\vec{K}^{\mathcal{A}}$ to the totalization of ${\mathcal{A}}$ does not change its enumeration degree. We will thus consider a slightly different notion of jump, obtained by adding the complement of $\vec{K}^{\mathcal{A}}$: \[ T({\mathcal{A}})=({\mathcal{A}},\overline{\vec{K}^{{\mathcal{A}}}})\] Clearly $T({\mathcal{A}})\equiv_T J({\mathcal{A}})$; however, the two structures are not necessarily enumeration equivalent, as the following two propositions show. \begin{proposition}
Let ${\mathcal{A}}$ be a structure. For every presentation $\hat {\mathcal{A}}$ of ${\mathcal{A}}$, $P(\hat {\mathcal{A}})\leq_e D(\hat{\mathcal{A}})=P(\hat{\mathcal{A}}^+)\leq_e P(PJ(\hat{\mathcal{A}}))\leq_e P(T(\hat{\mathcal{A}}))$. In particular,
\[Enum({\mathcal{A}})\leq_{se}Enum({\mathcal{A}}^+)\leq_{se} Enum(PJ({\mathcal{A}}))\leq_{se} Enum(T({\mathcal{A}})).\] \end{proposition} \begin{proof} Straightforward from the definitions. \end{proof} \begin{proposition} There is a structure ${\mathcal{A}}$ such that \[Enum({\mathcal{A}})\not\geq_{we}Enum({\mathcal{A}}^+)\not\geq_{we} Enum(PJ({\mathcal{A}}))\not\geq_{we}Enum(T({\mathcal{A}})).\]
\end{proposition} \begin{proof}
Sacks showed that there is an incomplete c.e.\ set $X$ of high Turing degree. Thus, $X$ has enumeration degree $deg_e(X)=\mathbf 0_e$ and $deg_T(X')=\mathbf 0''$. Let ${\mathcal{A}}$ be the graph coding $X$ as follows. ${\mathcal{A}}$ contains a single element $a$ with a loop, i.e., $aE^{{\mathcal{A}}}a$, and one cycle of length $n+1$ for every $n\in\omega$. Let $y$ be the least element in ${\mathcal{A}}$ that is part of the cycle of length $n+1$. If $n\in X$, then connect $a$ to $y$, i.e., $aE^{\mathcal{A}} y$. This finishes the construction.
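To spell out the coding (writing $y_n$ for the least element of the cycle of length $n+1$ and $C_{n+1}$ for its computable edge set), the edge relation of ${\mathcal{A}}$ decomposes as
\[
E^{{\mathcal{A}}}\;=\;\{(a,a)\}\,\cup\,\bigcup_{n\in\omega}C_{n+1}\,\cup\,\{(a,y_n): n\in X\},
\]
so from any enumeration of $X$ one can enumerate $P({\mathcal{A}})$: the loop and the cycles are listed outright, and the edge $(a,y_n)$ is listed as soon as $n$ is enumerated into $X$.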
As $X$ is c.e., ${\mathcal{A}}$ has enumeration degree $\mathbf 0_e$. However, $\mathbf 0_e\not\in eSp({\mathcal{A}}^+)$, as ${\mathcal{A}}^+$ has enumeration degree $deg_e(X\oplus\overline X)> \mathbf 0_e$. So, in particular, $Enum({\mathcal{A}})\not\geq_{we} Enum({\mathcal{A}}^+)$.
By \cref{theorem_espjumps} the enumeration degree of $PJ({\mathcal{A}})$ is $\mathbf 0'_e$ and $\mathbf 0'_e\ni \emptyset'\oplus \overline{\emptyset'}>_e X\oplus\overline X$, so $Enum({\mathcal{A}}^+)\not\geq_{we}Enum(PJ({\mathcal{A}})) $.
For the last inequality notice that both $T({\mathcal{A}})$ and $PJ({\mathcal{A}})$ are total structures and that by the analogue of \cref{theorem_espjumps} for the traditional jumps of structures we have that the enumeration degree of $T({\mathcal{A}})$ is $deg_e(\emptyset''\oplus \overline{\emptyset''})$. As mentioned above the enumeration degree of $PJ({\mathcal{A}})$ is $\mathbf 0'_e$, $\emptyset'\oplus \overline{\emptyset'}\in \mathbf 0'_e$, and $\emptyset'\oplus \overline{\emptyset'}<_e \emptyset''\oplus \overline{\emptyset''}$. So, in particular, $Enum(PJ({\mathcal{A}}))\not \geq_{we} Enum(T({\mathcal{A}}))$. \end{proof} For total structures the traditional notion of the jump and the positive jump coincide. \begin{proposition}
Let ${\mathcal{A}}$ be a structure, then $Enum(PJ({\mathcal{A}}^+))\equiv_{se}Enum(T({\mathcal{A}}))$ and $eSp(PJ({\mathcal{A}}^+))=eSp(T({\mathcal{A}}))$. \end{proposition} \begin{proof}
The proof is, mutatis mutandis, the same as that of \cref{prop:tradjumpiscompletion}. \end{proof}
\section{Functors}\label{sec:functors} When comparing structures with respect to their enumerations, it is natural to want to use only positive information. Csima, Rossegger, and Yu \cite{CsimaRY21} introduced the notion of a positive enumerable functor, which uses only the positive diagrams of structures. Reductions based on this notion preserve desired properties such as enumeration degree spectra of structures.
Recall that a functor $F$ from a class of structures $\mf C$ to a class $\mf D$ maps structures in $\mf C$ to structures in $\mf D$ and isomorphisms $f: {\mathcal{A}}\to {\mathcal{B}}$ to isomorphisms $F(f): F({\mathcal{A}})\to F({\mathcal{B}})$ with the property that $F(id_{\mathcal{A}})=id_{F({\mathcal{A}})}$ and $F(f\circ g)=F(f)\circ F(g)$ for all isomorphisms $f$ and $g$ and structures ${\mathcal{A}}\in \mf C$. \begin{defn}\label{def:positiveenumerablefunctor}
A functor $F:\mf C\to\mf D$ is \emph{positive enumerable} if there is a pair
$(\Psi,\Psi_*)$ where $\Psi$ and $\Psi_*$ are enumeration
operators such that for all ${\mathcal{A}},{\mathcal{B}}\in\mf C$
\begin{enumerate}\tightlist
\item $\Psi^{P({\mathcal{A}})}=P(F({\mathcal{A}}))$,
\item for all $f\in Hom({\mathcal{A}},{\mathcal{B}})$, $\Psi_*^{P({\mathcal{A}})\oplus Graph(f)\oplus P({\mathcal{B}})}=Graph(F(f))$.
\end{enumerate} \end{defn} For ease of notation, when a graph of a function occurs in an oracle, we will simply write the name of the function to represent it.
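As a trivial example, the identity functor on any class $\mf C$ is positive enumerable: it is witnessed by operators that simply copy the relevant column of their oracle,
\[
\Psi^{P({\mathcal{A}})}=P({\mathcal{A}}),\qquad
\Psi_*^{P({\mathcal{A}})\oplus Graph(f)\oplus P({\mathcal{B}})}=Graph(f),
\]
so that conditions (1) and (2) of \cref{def:positiveenumerablefunctor} hold with $F=Id$.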
An alternative and purely syntactical method of comparing classes of structures is through the model-theoretic notion of interpretability. Since we are restricting ourselves to positive information, we introduce a new notion of interpretability that uses $\Sigma^p_1$ formulas.
\begin{defn} A structure ${\mathcal{A}}=(A,P_0^{\mathcal{A}},\dots)$ is \emph{positively interpretable} in a structure ${\mathcal{B}}$ if there exists a $\Sigma^p_1$ definable sequence of relations $(Dom_{\mathcal{A}}^{\mathcal{B}},\overline{Dom_{\mathcal{A}}^{\mathcal{B}}},\sim,\not\sim,R_0,\dots)$ in the language of ${\mathcal{B}}$ such that \begin{enumerate}\tightlist
\item $Dom_{\mathcal{A}}^{\mathcal{B}}\subseteq B^{<\omega}$, $\overline{Dom_{\mathcal{A}}^{\mathcal{B}}}=B^{<\omega}\setminus Dom_{\mathcal{A}}^{\mathcal{B}}$,
\item $\sim$ is an equivalence relation on $Dom_{\mathcal{A}}^{\mathcal{B}}$ and $\not\sim$ its corelation,
\item $R_i\subseteq (B^{<\omega})^{a_{P_i}}$\footnote{$a_{P_i}$ is the arity of $P_i$} is closed under $\sim$ on $Dom_{\mathcal{A}}^{\mathcal{B}}$, \end{enumerate} and there exists a function $f_{\mathcal{B}}^{\mathcal{A}}:Dom_{\mathcal{A}}^{\mathcal{B}}\to{\mathcal{A}}$, the \emph{interpretation of ${\mathcal{A}}$ in ${\mathcal{B}}$}, which induces an isomorphism: \[ (Dom_{\mathcal{A}}^{\mathcal{B}},R_0,\dots)/{\sim} \cong (A,P_0^{\mathcal{A}},\dots)\] \end{defn}
We seek to provide enumeration analogues to the results proven in~\cite{HTMMM2015}, starting with the following.
\begin{thm}\label{functhm}
There is a positive enumerable functor $F:Iso({\mathcal{B}})\to Iso({\mathcal{A}})$ if and only if ${\mathcal{A}}$ is positively interpretable in ${\mathcal{B}}$. \end{thm}
\begin{proof}
Suppose that ${\mathcal{A}}$ is positively interpretable in ${\mathcal{B}}$ using $(Dom_{\mathcal{A}}^{\mathcal{B}},\overline{Dom_{\mathcal{A}}^{\mathcal{B}}}, \sim,\not\sim,R^{\mathcal{B}}_0,\dots)$. We want to construct a functor $F:Iso({\mathcal{B}})\rightarrow Iso({\mathcal{A}})$, so let $\tilde{{\mathcal{B}}} \in Iso({\mathcal{B}})$. Since the relations are all $\Sigma_1^p$ definable, they are uniformly r.i.p.e.\ and so uniformly enumerable from $P(\tilde{{\mathcal{B}}})$ by Corollary \ref{uripe}. Fix an enumeration of $\omega^{<\omega}$. Using the fact that $Dom_{\mathcal{A}}^{\tilde{\mathcal{B}}}$, $\sim$, and their complements are uniformly enumerable from $P(\tilde {\mathcal{B}})$, we can uniformly enumerate a map $\tilde \tau: \omega\to Dom_{\mathcal{A}}^{\tilde {\mathcal{B}}}/{\sim}$ sending elements of $\omega$ to $\sim$-equivalence classes in $Dom_{\mathcal{A}}^{\tilde{{\mathcal{B}}}}$. We define $F(\tilde{{\mathcal{B}}})$ to be the pullback along $\tilde{\tau}$ of the structure $(Dom_{\mathcal{A}}^{\tilde{{\mathcal{B}}}}/ \sim,\overline{Dom_{\mathcal{A}}^{\tilde{{\mathcal{B}}}}}/ \sim,R^{\tilde{{\mathcal{B}}}}_0/ \sim,\dots)$. Since our relations are all uniformly enumerable from $P(\tilde {\mathcal{B}})$, we have that $P(F(\tilde {\mathcal{B}}))$ is enumerable from $P(\tilde{\mathcal{B}})$.
Now to define our functor on isomorphisms, if we have $f:\tilde{{\mathcal{B}}} \rightarrow \hat{{\mathcal{B}}}$ then letting $\tilde{\tau}:\omega \rightarrow Dom^{\tilde{{\mathcal{B}}}}_{\mathcal{A}}{/}{\sim}$ and $\hat{\tau}: \omega \rightarrow Dom^{\hat{{\mathcal{B}}}}_{\mathcal{A}}{/}{\sim}$ be defined as above, and extending $f$ to $\tilde{{\mathcal{B}}}^{<\omega}$ in the natural way, we let \begin{equation*}
F(f) = \hat{\tau}^{-1} \circ f \circ \tilde{\tau} : F(\tilde{{\mathcal{B}}}) \rightarrow F(\hat{{\mathcal{B}}}). \end{equation*} Since $\tilde \tau$ and $\hat \tau$ are uniformly enumerable from $P(\tilde {\mathcal{B}})$ and $P(\hat {\mathcal{B}})$ respectively, we have that the set $Graph(F(f))$ is uniformly enumerable from $P(\tilde {\mathcal{B}}) \oplus Graph(f) \oplus P(\hat {\mathcal{B}})$.
Now suppose that there is a positive enumerable functor $F = (\Psi, \Psi_*)$ from $Iso({\mathcal{B}})$ to $Iso({\mathcal{A}})$. We want to produce the $\Sigma_1^p$ sequence of relations providing the positive interpretation of ${\mathcal{A}}$ in ${\mathcal{B}}$.
In what follows, we will often write $P(\bar b)$ instead of $P_{{\mathcal{B}}}(\bar b)$ when it is clear from context which structure ${\mathcal{B}}$ is meant.
We can view any finite tuple $\bar b$ of distinct elements as a map $i\mapsto b_i$ for $i<|\bar b|$. Note that, viewing $\bar b$ as such a map, if $f$ is any permutation of $\omega$ extending $\bar b$, then $P_{\mathcal{B}}(\bar b)\subseteq P({\mathcal{B}}_f)$ where ${\mathcal{B}}_f=f^{-1}({\mathcal{B}})$.
Let $Dom_{\mathcal{A}}^{\mathcal{B}}$ be the set of pairs $(\bar b, i) \in {\mathcal{B}}^{<\omega} \times \omega$ such that \begin{equation*}
(i,i) \in \Psi_{*,|\bar b|}
^{P(\bar b) \oplus \lambda \restriction |\bar b | \oplus P(\bar b)} \end{equation*} where $\lambda$ is the identity function. Both $Dom_{\mathcal{A}}^{\mathcal{B}}$ and its corelation $\overline{Dom_{\mathcal{A}}^{\mathcal{B}}}$ are clearly uniformly r.i.p.e.
For $(\bar b, i), (\bar c, j) \in Dom_{\mathcal{A}}^{\mathcal{B}}$ we let $(\bar b, i) \sim (\bar c, j)$ exactly if there exists a finite tuple $\bar d$ which does not mention elements from $\bar b$ or $\bar c$, such that if $\bar b'$ lists the elements that occur in $\bar b$ but not $\bar c$ and $\bar c'$ lists the elements in $\bar c$ but not in $\bar b$, and if $\sigma = (\bar c \bar b ' \bar d)^{-1} \circ \bar b \bar c' \bar d$ then \begin{equation*}
(i,j) \in \Psi_{*,|\bar b \bar c \bar d|}^{P(\bar b \bar c' \bar d) \oplus \sigma \oplus P(\bar c \bar b ' \bar d)} \quad \text{and} \quad (j,i) \in \Psi_{*,|\bar b \bar c \bar d|}^{P(\bar c \bar b' \bar d) \oplus \sigma^{-1} \oplus P(\bar b \bar c' \bar d)}. \end{equation*}
It is easy to see that $\sim$ is uniformly r.i.p.e. Rather than showing directly that the complement of $\sim$ is uniformly r.i.p.e., we define a clearly uniformly r.i.p.e.\ relation $\not\sim$, and then show that it is indeed the complement of $\sim$.
For $(\bar b, i), (\bar c, j) \in Dom_{\mathcal{A}}^{\mathcal{B}}$ we say $(\bar b, i) \not \sim (\bar c, j)$ if there exist $k\neq j,l \neq i$, and a finite tuple $\bar d$ which does not mention elements from $\bar b$ or $\bar c$, such that if $\bar b'$ lists the elements that occur in $\bar b$ but not $\bar c$ and $\bar c'$ lists the elements in $\bar c$ but not in $\bar b$, and if $\sigma = (\bar c \bar b ' \bar d)^{-1} \circ \bar b \bar c' \bar d$ then \begin{equation*}
(i,k) \in \Psi^{P(\bar b \bar c' \bar d) \oplus \sigma \oplus P(\bar c \bar b' \bar d)}_{*,|\bar b \bar c \bar d|} \quad \text{or} \quad (j,l) \in \Psi_{*,|\bar b \bar c \bar d|}^{P(\bar c \bar b' \bar d) \oplus \sigma^{-1} \oplus P(\bar b \bar c' \bar d)}. \end{equation*}
\begin{claim}\label{complement} $\not \sim$ is the complement of $\sim$. \end{claim} \begin{proof}
We want to show that, for any tuples $(\bar b, i), (\bar c , j) \in Dom_{\mathcal{A}}^{\mathcal{B}}$, exactly one of $(\bar b,i) \sim (\bar c,j)$ and $(\bar b,i) \not \sim (\bar c,j)$ holds. Let $\bar b'$ list the elements in $\bar b$ but not $\bar c$, and let $\bar c'$ list the elements in $\bar c$ but not $\bar b$. Let $\sigma = (\bar c \bar b')^{-1} \circ \bar b \bar c'$. Let $f,g: \omega \rightarrow B$ be bijections extending $\bar b \bar c', \bar c \bar b'$ respectively which agree on all inputs $i > |\bar b\bar c' | = |\bar c \bar b'|$. We can then pull back $f$ and $g$ to get structures ${\mathcal{B}}_f, {\mathcal{B}}_g$. Then $h = g^{-1} \circ f$ is an isomorphism extending $\sigma$ which is the identity on all $i > |\sigma|$. Hence
\begin{equation*}
(i,h(i)) \in \Psi_*^{P({\mathcal{B}}_f) \oplus h \oplus P({\mathcal{B}}_g)} \quad \text{and} \quad (j,h(j)) \in \Psi_*^{P({\mathcal{B}}_f) \oplus h \oplus P({\mathcal{B}}_g)}
\end{equation*}
If $h(i)=j$ and $h(j)=i$, then taking a long enough initial segment of $h$ witnesses $(\bar b,i) \sim (\bar c,j)$. If however $h(i)\not= j$ or $h(j)\not= i$, then a long enough initial segment of $h$ witnesses $(\bar b,i) \not \sim (\bar c,j)$.
We now assume towards a contradiction that both $(\bar b,i) \sim (\bar c,j)$ and $(\bar b, i) \not \sim (\bar c ,j)$ hold, that their equivalence is witnessed by $\bar d_1, \sigma$, and that their inequivalence is witnessed by $k$, $l$, $\bar d_2$, and $\tau$. Without loss of generality assume that $(i,k) \in \Psi_*^{P(\bar b \bar c' \bar d_2) \oplus \tau \oplus P(\bar c \bar b' \bar d_2)}$. We also have $(i,j) \in \Psi_*^{P(\bar b \bar c' \bar d_1) \oplus \sigma \oplus P(\bar c \bar b' \bar d_1)}$. Let
\begin{equation*}
f_1 \supset \bar b \bar c' \bar d_1 \quad g_1 \supset \bar c \bar b' \bar d_1 \quad f_2 \supset \bar b \bar c' \bar d_2 \quad g_2 \supset \bar c \bar b' \bar d_2
\end{equation*}
Then we have isomorphisms as shown below.
\begin{center}
\begin{tikzcd}
& & \mathcal{B} & & \\
\mathcal{B}_{f_2} \arrow[r, "f_1^{-1} \circ f_2"] \arrow[rrrr, "g_2^{-1} \circ f_2", bend right] \arrow[rru, "f_2", bend left] & \mathcal{B}_{f_1} \arrow[rr, "g_1^{-1}\circ f_1"] \arrow[ru, "f_1", bend left] & & \mathcal{B}_{g_1} \arrow[r, "g_2^{-1} \circ g_1"] \arrow[lu, "g_1"', bend right] & \mathcal{B}_{g_2} \arrow[llu, "g_2"', bend right]
\end{tikzcd}
\end{center}
Since $F$ is a functor,
\begin{equation*}
F(g_2^{-1} \circ f_2) = F(g^{-1}_2 \circ g_1) \circ F(g_1^{-1} \circ f_1) \circ F(f^{-1}_1 \circ f_2).
\end{equation*}
Since $(\bar b, i)\in Dom_{\mathcal{A}}^{\mathcal{B}}$ we know that $(i,i) \in \Psi_*^{P(\bar b) \oplus \lambda \restriction |\bar b| \oplus P(\bar b)}$. Notice that $f_1^{-1} \circ f_2 \supset \lambda \restriction |\bar b\bar c' |$ and $P({\mathcal{B}}_{f_1}) \supset P(\bar b), P({\mathcal{B}}_{f_2}) \supset P(\bar b)$. Thus
\begin{equation*}
(i,i) \in \Psi_*^{P({\mathcal{B}}_{f_2}) \oplus (f_1^{-1} \circ f_2) \oplus P({\mathcal{B}}_{f_1})} = Graph(F(f_1^{-1} \circ f_2)) \Rightarrow F(f_1^{-1} \circ f_2)(i) = i.
\end{equation*}
Similarly, since $(\bar c, j)\in Dom_{\mathcal{A}}^{\mathcal{B}}$,
\begin{equation*}
(j,j) \in \Psi_*^{P(\bar c) \oplus \lambda \restriction |\bar c| \oplus P(\bar c)} \Rightarrow (j,j) \in \Psi_*^{P({\mathcal{B}}_{g_1}) \oplus (g_2^{-1} \circ g_1) \oplus P({\mathcal{B}}_{g_2})}= Graph(F(g_2^{-1} \circ g_1)).
\end{equation*}
Following from our choices for $f_1,g_1,f_2,g_2$ we have that $g_1^{-1} \circ f_1 \supset \sigma$ and $g_2^{-1} \circ f_2 \supset \tau$, so
\begin{equation*}
(i,j) \in \Psi_*^{P(\bar b\bar c'\bar d_1) \oplus \sigma \oplus P(\bar c\bar b'\bar d_1)} \Rightarrow (i,j) \in \Psi_*^{P({\mathcal{B}}_{f_1}) \oplus (g_1^{-1} \circ f_1) \oplus P({\mathcal{B}}_{g_1})} = Graph(F(g_1^{-1} \circ f_1))
\end{equation*}
\begin{equation*}
(i,k) \in \Psi_*^{P(\bar b\bar c'\bar d_2) \oplus \tau \oplus P(\bar c\bar b'\bar d_2)} \Rightarrow (i,k) \in \Psi_*^{P({\mathcal{B}}_{f_2}) \oplus (g_2^{-1} \circ f_2) \oplus P({\mathcal{B}}_{g_2})} = Graph(F(g_2^{-1} \circ f_2)).
\end{equation*}
The first three equations tell us that
\begin{equation*}
F(g^{-1}_2 \circ g_1) \circ F(g_1^{-1} \circ f_1) \circ F(f^{-1}_1 \circ f_2)(i) = F(g^{-1}_2 \circ g_1) \circ F(g_1^{-1} \circ f_1)(i) = F(g^{-1}_2 \circ g_1)(j) = j
\end{equation*}
whereas the fourth equation tells us that $F(g_2^{-1} \circ f_2)(i) = k$. This contradicts the functoriality of $F$, and so $\sim$ and $\not\sim$ cannot hold simultaneously. \end{proof}
\begin{claim} The relation $\sim$ is an equivalence relation on $Dom_{\mathcal{A}}^{\mathcal{B}}$. \end{claim} \begin{proof}
Let $(\bar a, i), (\bar b,j),(\bar c,k) \in Dom_{\mathcal{A}}^{\mathcal{B}}$. The relation $\sim$ is reflexive: $(\bar a,i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$ means that $(i,i) \in \Psi_*^{P(\bar a) \oplus \lambda \restriction |\bar a| \oplus P(\bar a)}$, and so the equivalence is witnessed by the empty tuple and $\lambda \restriction |\bar a|$. It is symmetric: if $(\bar a,i) \sim (\bar b,j)$ via $\bar d, \sigma$, then $(\bar b,j) \sim (\bar a,i)$ via $\bar d, \sigma^{-1}$. For transitivity, assume that $(\bar a,i) \sim (\bar b,j)$ is witnessed by $\bar d', \sigma$ (with $\bar a'$, $\bar b'$ listing the elements of $\bar a\setminus\bar b$ and $\bar b\setminus\bar a$ respectively) and that $(\bar b,j) \sim (\bar c,k)$ is witnessed by $\bar d'', \tau$ (with $\bar b''$, $\bar c''$ defined analogously). Let $\bar a'''$ and $\bar c'''$ list the elements of $\bar a\setminus \bar c$ and $\bar c\setminus \bar a$ respectively. Choose bijections as follows
\begin{align*}
&f_1 \supset \bar a \bar b' \bar d' \quad g_1 \supset \bar b \bar c''\bar d'' \quad h_1 \supset \bar a \bar c ''' \\
&f_2 \supset \bar b \bar a' \bar d' \quad g_2 \supset \bar c \bar b''\bar d'' \quad h_2 \supset \bar c \bar a'''
\end{align*}
such that $h_1$ and $h_2$ agree outside an initial segment of length $|\bar a| + |\bar c'''|$.
\begin{center}
\begin{tikzcd}
& & & \mathcal{B} & & & \\
\mathcal{B}_{h_1} \arrow[rrrrrr, "h_2^{-1} \circ h_1", bend right] \arrow[r, "f_1^{-1} \circ h_1"] \arrow[rrru, "h_1", bend left] & \mathcal{B}_{f_1} \arrow[r, "f_2^{-1} \circ f_1"] \arrow[rru, "f_1", bend left] & \mathcal{B}_{f_2} \arrow[rr, "g_1^{-1} \circ f_2"] \arrow[ru, "f_2", bend left] & & \mathcal{B}_{g_1} \arrow[r, "g_2^{-1} \circ g_1"] \arrow[lu, "g_1"', bend right] & \mathcal{B}_{g_2} \arrow[llu, "g_2"', bend right] \arrow[r, "h_2^{-1} \circ g_2"] & \mathcal{B}_{h_2} \arrow[lllu, "h_2"', bend right] \\
& & & & & & \\
\bar a \bar c''' \arrow[rrrrrr, "(\cdot)_{\rho}", bend right] & \bar a \bar b' \bar d ' & \bar b \bar a ' \bar d ' \arrow[l, "(\cdot)_\sigma"] & & \bar b \bar c'' \bar d'' \arrow[r, "(\cdot )_\tau"] & \bar c \bar b'' \bar d'' & \bar c \bar a'''
\end{tikzcd}
\end{center}
Since $F$ is a functor we have
\begin{equation*}
F(h_2^{-1} \circ h_1) = F(h_2^{-1} \circ g_2)\circ F(g_2^{-1} \circ g_1) \circ F(g_1^{-1} \circ f_2) \circ F(f_2^{-1} \circ f_1)\circ F(f_1^{-1} \circ h_1).
\end{equation*}
Since $(\bar a,i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$, and $f_1^{-1} \circ h_1 \supset \lambda \restriction |\bar a|$ we can show as we did in Claim \ref{complement} that $F(f_1^{-1} \circ h_1)(i) = i$. Similarly, $F(g_1^{-1} \circ f_2)(j) = j, F(h_2^{-1} \circ g_2)(k)=k$. By assumption $F(f_2^{-1} \circ f_1)(i)=j$ and $F(g_2^{-1} \circ g_1)(j)=k$. Thus
\begin{equation*}
(i,k) \in Graph(F(h_2^{-1} \circ h_1))= \Psi_*^{P({\mathcal{B}}_{h_1}) \oplus (h_2^{-1} \circ h_1) \oplus P({\mathcal{B}}_{h_2})}
\end{equation*}
Similarly one can show that $(k,i) \in \Psi_*^{P({\mathcal{B}}_{h_2}) \oplus (h_1^{-1} \circ h_2) \oplus P({\mathcal{B}}_{h_1})}$. Since $h_1$ and $h_2$ agree outside of the initial segment of length $|\bar a| + |\bar c'''|$, if we take a long enough $\bar d'''$ and let $\rho \subset h_2^{-1} \circ h_1$ be the permutation sending $\bar a \bar c''' \bar d'''$ to $\bar c \bar a''' \bar d'''$, then we witness that $(\bar a, i) \sim (\bar c, k)$. \end{proof}
\begin{claim}\label{claim:exists initial seg in Dom for each i} For all $i \in \omega$, there is some $n\in \omega$ such that for $\bar c = {\mathcal{B}} \restriction n$, we have $(\bar c,i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$. \end{claim}
\begin{proof} Let $\lambda$ be the identity function. Since functors map the identity to the identity, $(i,i) \in Graph(\lambda)= \Psi_*^{P({\mathcal{B}}) \oplus \lambda \oplus P({\mathcal{B}})}$, so by the use principle there is a sufficiently long initial segment $\bar c$ of ${\mathcal{B}}$ with $(\bar c, i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$. \end{proof}
\begin{claim}\label{claim:isoisonto}
For $(\bar b,i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$, there is an initial segment $\bar c = {\mathcal{B}} \restriction n$ of ${\mathcal{B}}$ and $j\in\omega$ such that $(\bar b,i) \sim (\bar c ,j)$. \end{claim} \begin{proof}
Let $m$ be greater than the maximum number in the tuple $\bar b$, and let $\bar c'$ list the numbers less than or equal to $m$ not occurring in $\bar b$. Let $\bar c = {\mathcal{B}} \restriction m$, and let $f \supset \bar c^{-1} \circ \bar b \bar c'$ be defined by $f(n)=n$ for all $n \geq m$. Then $(i,j) \in Graph(F(f)) = \Psi_*^{P({\mathcal{B}}_f) \oplus f \oplus P({\mathcal{B}})}$ for some $j$. So by the use principle, there exists $\bar d$ such that $(i,j) \in \Psi_{*,|\bar c \bar d|}^{P(\bar b\bar c'\bar d) \oplus \sigma \oplus P(\bar c \bar d)}$ where $\sigma = (\bar c \bar d)^{-1} \circ \bar b \bar c' \bar d$, witnessing that $(\bar b,i) \sim (\bar c, j)$. \end{proof}
\begin{claim}\label{claim:isois11}
If $(\bar b, i), (\bar c, j) \in Dom_{\mathcal{A}}^{\mathcal{B}}$ and $\bar b \subseteq \bar c$ then $(\bar b, i) \sim (\bar c, j)$ iff $i=j$. \end{claim} \begin{proof}
To see this we note that since $(\bar b, i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$, we have $(i,i) \in \Psi_*^{P(\bar b) \oplus \lambda \restriction |\bar b| \oplus P(\bar b)}$. So since $\bar b \subseteq \bar c$, by the use principle $(i,i) \in \Psi_*^{P(\bar c) \oplus \lambda \restriction |\bar c| \oplus P(\bar c)}$, so $(\bar b, i) \sim (\bar c, i).$ Now let $\bar d,\sigma$ witness that $(\bar b,i) \sim (\bar c , j)$. Then $\sigma \supseteq \lambda \restriction |\bar b|$ and the oracle $P(\bar b\bar c'\bar d) \oplus \sigma \oplus P(\bar c\bar d)$ extends $P(\bar b) \oplus \lambda \restriction |\bar b| \oplus P(\bar b)$. So by the use principle $(i,i) \in \Psi_*^{P(\bar b\bar c'\bar d) \oplus \sigma \oplus P(\bar c\bar d)}$. As $\Psi_*^{P(\bar b\bar c'\bar d) \oplus \sigma \oplus P(\bar c\bar d)}$ must extend to the graph of a function, this shows that $j=i$. \end{proof}
We now define relations $R_i$ on $Dom_{\mathcal{A}}^{\mathcal{B}}$. For each relation $P_i$ of arity $p(i)$ we let $(\bar b_1,k_1),\dots, (\bar b_{p(i)},k_{p(i)})$ be in $R_i$ if there is an initial segment $\bar c={\mathcal{B}}\restriction n$ of ${\mathcal{B}}$ and $j_1,\dots,j_{p(i)} \in \omega$ such that $(\bar b_l,k_l) \sim (\bar c,j_l)$ for all $l$ and $P_i(j_1,\dots,j_{p(i)}) \in \Psi^{P(\bar c)}$. Note that by \cref{claim:isois11}, it does not matter which initial segment is chosen.
We are now in position to define the isomorphism \begin{equation*}
\mathfrak{F} : ({\mathcal{A}},P_0,P_1\dots)\rightarrow(Dom_{\mathcal{A}}^{\mathcal{B}}{/}{\sim}, R_0{/}{\sim}, R_1{/}{\sim}, \dots ) \end{equation*} by $\mathfrak F(i)=(\bar c,i)$ where $\bar c = {\mathcal{B}} \restriction n$ for the least $n$ such that $(\bar c,i) \in Dom_{\mathcal{A}}^{\mathcal{B}}$. Note that such $(\bar c,i)$ exists by \cref{claim:exists initial seg in Dom for each i}. It then follows from \cref{claim:isoisonto} and \cref{claim:isois11} that $\mathfrak F$ is a bijection. The bijection respects the relations by definition, and so $\mathfrak{F}$ is an isomorphism. \end{proof}
In the above theorem we not only show the existence of an interpretation given a functor, but provide a method for transforming a functor $F$ into an interpretation. Using the other direction of the proof we can turn this interpretation back into a new functor. We shall call this new induced functor $I^F$. We would like $I^F$ to agree with our original functor $F$ in some fashion, and so we introduce the following definitions.
\begin{defn} A functor $F: \mathfrak{C} \to \mathfrak{D}$ is \emph{enumeration isomorphic} to a functor $G: \mathfrak{C} \to \mathfrak{D}$ if there is an enumeration operator $\Lambda$ such that for any $\mathcal{A} \in \mathfrak{C}$, $\Lambda^{P(\mathcal{A})} : F(\mathcal{A}) \to G(\mathcal{A})$ is an isomorphism. Moreover, for any morphism $h\in Hom(\mathcal{A}, \mathcal{B})$ in $\mathfrak{C}$, viewing $\Lambda^{P({\mathcal{A}})}$ and $\Lambda^{P({\mathcal{B}})}$ as isomorphisms, we require that $\Lambda^{P(\mathcal{B})} \circ F(h) = G(h) \circ \Lambda^{P(\mathcal{A})}$. That is, the diagram below commutes. \begin{center}
\begin{tikzpicture}
\node (FA) at (0,2) {$F(\mathcal{A})$};
\node (FB) at (0,0) {$F(\mathcal{B})$};
\node (GA) at (3,2) {$G(\mathcal{A})$};
\node (GB) at (3,0) {$G(\mathcal{B})$};
\draw[->] (FA) -- node[above] {$\Lambda^{P(\mathcal{A})}$} (GA);
\draw[->] (FB) -- node[above] {$\Lambda^{P(\mathcal{B})}$} (GB);
\draw[->] (FA) -- node[left] {$F(h)$} (FB); \draw[->] (GA) -- node[right] {$G(h)$} (GB); \end{tikzpicture} \end{center} \end{defn}
\begin{proposition}\label{Inducedprop}
Let $F:Iso({\mathcal{B}})\to Iso({\mathcal{A}})$ be positive enumerable and $I^F:Iso({\mathcal{B}})\to Iso({\mathcal{A}})$ be the functor obtained from Theorem \ref{functhm}. Then $F$ and $I^F$ are enumeration isomorphic. \end{proposition} \begin{proof}
Given a presentation ${\mathcal{B}} \in Iso({\mathcal{B}})$ let $\mathfrak{F}^{\mathcal{B}}: {\mathcal{B}} \rightarrow Dom_{\mathcal{A}}^{\mathcal{B}}$ be the map obtained in Theorem \ref{functhm} and let $\tau^{\mathcal{B}}: \omega \rightarrow Dom_{\mathcal{A}}^{\mathcal{B}} /\sim$ be the bijection obtained in the other direction of the same proof. $\tau^{\mathcal{B}}$ gives rise to an isomorphism $\mathcal{I}^F({\mathcal{B}}) \rightarrow Dom_{\mathcal{A}}^{\mathcal{B}}$. We know that both can be enumerated uniformly from a given presentation of ${\mathcal{B}}$ and so $(\tau^{\mathcal{B}})^{-1} \circ \mathfrak{F}^{\mathcal{B}}$ is also uniformly enumerable from ${\mathcal{B}}$. Thus there is some enumeration operator $\Lambda$ such that for any presentation ${\mathcal{B}}$ \begin{equation*}
\Lambda^{P({\mathcal{B}})} = (\tau^{\mathcal{B}})^{-1} \circ \mathfrak{F}^{\mathcal{B}}: F({\mathcal{B}}) \rightarrow \mathcal{I}^F({\mathcal{B}}) \end{equation*} To show $\Lambda$ is an enumeration isomorphism we want to show the following diagram commutes for all $\tilde {\mathcal{B}}, \hat {\mathcal{B}} \in Iso({\mathcal{B}})$ and all morphisms $h:\tilde {\mathcal{B}} \rightarrow \hat {\mathcal{B}}$. We extend $h$ to a map $\tilde {\mathcal{B}}^{<\omega} \rightarrow \hat {\mathcal{B}}^{<\omega}$ and then restrict to $Dom_{\mathcal{A}}^{\tilde {\mathcal{B}}} \rightarrow Dom_{\mathcal{A}}^{\hat {\mathcal{B}}}$. \begin{center}
\begin{tikzcd}
F(\tilde{\mathcal{B}}) \arrow[r, "\mathfrak{F}^{\tilde{\mathcal{B}}}"] \arrow[d, "F(h)"'] \arrow[rr, "\Lambda^{P(\tilde{\mathcal{B}})}", bend left] & Dom_{\mathcal{A}}^{\tilde{{\mathcal{B}}}} \arrow[d, "h"'] & \mathcal{I}^F(\tilde{\mathcal{B}}) \arrow[l, "\tau^{\tilde{\mathcal{B}}}"'] \arrow[d, "\mathcal{I}^F(h)"] \\
F(\hat{\mathcal{B}}) \arrow[r, "\mathfrak{F}^{\hat{\mathcal{B}}}"'] \arrow[rr, "\Lambda^{P(\hat{\mathcal{B}})}"', bend right] & Dom_\mathcal{A}^{\hat{\mathcal{B}}} & \mathcal{I}^F(\hat{\mathcal{B}}) \arrow[l, "\tau^{\hat{\mathcal{B}}}"]
\end{tikzcd} \end{center}
The right hand square commutes since $I^F(h)$ is defined to be $(\tau^{\hat {\mathcal{B}}})^{-1} \circ h \circ \tau^{\tilde{{\mathcal{B}}}}$. To see that the left square commutes take $i \in F(\tilde {\mathcal{B}})$. Then $F(h)(i) = j$ for some $j \in F(\hat {\mathcal{B}})$ and $\mathfrak{F}^{\tilde {\mathcal{B}}}(i) = (\bar a, i)$, $\mathfrak{F}^{\hat {\mathcal{B}}}(j) = (\bar b, j)$ where $\bar a$ and $\bar b$ are initial segments of $\omega$. We want to show that $h(\bar a,i) = (h(\bar a),i) \sim^{\hat {\mathcal{B}}} (\bar b , j)$.
Since $(i,j) \in \Psi_*^{P(\tilde {\mathcal{B}}) \oplus h \oplus P(\hat {\mathcal{B}})}$ we can get $(i,j) \in \Psi_{*,|\bar a||\bar b|}^{P_{\tilde {\mathcal{B}}}(\bar a) \oplus h\restriction |\bar a| \oplus P_{\hat {\mathcal{B}}}(\bar b)}$ by extending $\bar a$ and $\bar b$. Note that $P_{\tilde{{\mathcal{B}}}}(\bar a) = P_{\hat{{\mathcal{B}}}}(h(\bar a))$ and assume without loss of generality that $\bar b$ contains both $\bar a$ and $h(\bar a)$. Since $\bar b$ is an initial segment, the map associated to it is the identity. So the map $\sigma = \bar b ^{-1} \circ h(\bar a) \bar b'$ is an initial segment of $h \restriction |\bar a|$. Hence \begin{equation*}
(i,j) \in \Psi_{*,|\bar a \bar b|}^{P_{\hat {\mathcal{B}}}(h(\bar a)) \oplus \sigma \oplus P_{\hat {\mathcal{B}}}(\bar b)} \quad \text{and} \quad (j,i) \in \Psi_{*,|\bar a \bar b|}^{P_{\hat {\mathcal{B}}}(\bar b) \oplus \sigma^{-1} \oplus P_{\hat {\mathcal{B}}}(h(\bar a))}, \end{equation*} witnessing that $(h(\bar a), i) \sim^{\hat {\mathcal{B}}} (\bar b, j)$, as required.
\end{proof} Clearly if we have a functor $F:Iso({\mathcal{A}})\to Iso({\mathcal{B}})$, then every enumeration of ${\mathcal{A}}$ computes an enumeration of ${\mathcal{B}}$. In order to preserve enumeration degree spectra of structures we need the relationship between the two isomorphism classes to be even stronger. In~\cite{CsimaRY21} positive enumerable bi-transformability was introduced and it was shown that two positive enumerable bi-transformable structures have the same enumeration degree spectra. The next definition is the same as positive enumerable bi-transformability. We chose to rename it, as we learned that the notion is not new, but rather an effectivization of the highly influential notion of an adjoint equivalence of categories in category theory. \begin{defn}
An \emph{enumeration adjoint equivalence} of categories $\mathfrak C$ and $\mathfrak D$ consists of a tuple $(F,G,\Lambda_\mathfrak{C},\Lambda_\mathfrak{D})$ where $F:\mathfrak C\to \mathfrak D$ and $G:\mathfrak D\to \mathfrak C$ are positive enumerable functors, $\Lambda_\mf{C}$ and $\Lambda_\mf{D}$ witness enumeration isomorphisms between the compositions $G \circ F$ and $Id_\mf{C}$, respectively $F\circ G$ and $Id_\mf{D}$, and the two isomorphisms are mapped to each other. I.e.,
\[ F(\Lambda_\mathfrak{C}^{P({\mathcal{A}})})=\Lambda_{\mathfrak D}^{P(F({\mathcal{A}}))} \text{ and } G(\Lambda_\mathfrak{D}^{P({\mathcal{B}})})=\Lambda_\mathfrak{C}^{P(G({\mathcal{B}}))}\]
for all ${\mathcal{A}}\in \mathfrak C$ and ${\mathcal{B}}\in \mathfrak D$. If there is an enumeration adjoint equivalence between $Iso({\mathcal{A}})$ and $Iso({\mathcal{B}})$ then we say that ${\mathcal{A}}$ and ${\mathcal{B}}$ are \emph{enumeration adjoint}. \end{defn} We will show that the following notion based on positive interpretability is equivalent to enumeration adjointness. \begin{defn}\label{definition:effectivebiint}
Two structures $\mathcal{A}$ and $\mathcal{B}$ are \emph{positively bi-interpretable} if there are effective interpretations of one in the other such that the compositions
\[ f_{\mathcal{B}}^{\mathcal{A}}\circ \hat f^\mathcal{B}_\mathcal{A}: Dom_\mathcal{B}^{Dom^\mathcal{B}_\mathcal{A}}\to \mathcal{B} \quad \mbox{and} \quad f_\mathcal{A}^\mathcal{B} \circ \hat f_\mathcal{B}^\mathcal{A}: Dom_\mathcal{A}^{Dom^\mathcal{A}_\mathcal{B}} \to \mathcal{A} \]
are uniformly ripe in $\mathcal{B}$ and $\mathcal{A}$ respectively. (Here the function $\hat{f^\mathcal{B}_\mathcal{A}}:(Dom_{\mathcal{A}}^{\mathcal{B}})^{<\omega}\to\mathcal{A}^{<\omega}$ is the canonical extension of $f_\mathcal{A}^\mathcal{B}: Dom^\mathcal{B}_\mathcal{A} \to \mathcal{A}$ mapping $Dom^{Dom^\mathcal{B}_\mathcal{A}}_\mathcal{B}$ to $Dom^\mathcal{A}_\mathcal{B}$.) \end{defn}
\begin{thm}\label{bi}
${\mathcal{A}}$ and ${\mathcal{B}}$ are positively bi-interpretable if and only if they are enumeration adjoint. \end{thm}
Since the proof of Theorem \ref{functhm} works in the enumeration setting, this proof will go through exactly as in \cite{HTMMM2015} mutatis mutandis.
\section{Conclusion}
When we restrict ourselves to only the positive information of a structure, the notion of a r.i.p.e.\ relation is a natural analogue of r.i.c.e.\ relations. Hence Theorem \ref{ripe} shows that $\Sigma_1^p$ relations are the correct notion of formula to consider in the enumeration setting. We see further evidence for this in Section 2, where r.i.p.e.\ formulas are used to define the positive jump of a structure. Theorem \ref{theorem_espjumps} supports the claim that the positive jump is the proper enumeration jump for structures, as it behaves well with the regular enumeration jump of sets.
The authors in \cite{CsimaRY21} showed that when comparing classes of structures with respect to enumeration reducibility, positive enumerable functors are the correct effectivization of functors to consider as they preserve enumeration degree spectra. The equivalence given by Theorem \ref{bi} between enumeration adjointness and positive bi-interpretability thus justifies the choice of positive bi-interpretability as the enumeration analogue to bi-interpretability.
We strongly believe that r.i.p.e.\ relations are a valuable addition to the field of computable structure theory. Developing the idea of definability by positive formulas further, a Lopez-Escobar theorem for positive infinitary formulas is proven in upcoming work by Bazhenov, Fokina, Rossegger, Soskova, Soskova and Vatev: the sets of structures definable by $\Sigma^p_\alpha$ formulas are precisely the $\pmb \Sigma^0_\alpha$ sets in the Scott topology on the space of structures. This result, as well as the results in this article, shows promising signs of being useful for answering questions in other areas, such as algorithmic learning theory. \printbibliography
\end{document} |
\begin{document}
\title{On the structure of the commutator subgroup of certain homeomorphism groups}
\begin{abstract} An important theorem of Ling states that if $G$ is any factorizable non-fixing group of homeomorphisms of a paracompact space then its commutator subgroup $[G,G]$ is perfect. This paper is devoted to further studies on the algebraic structure (e.g. uniform perfectness, uniform simplicity) of $[G,G]$ and $[\tilde G,\tilde G]$, where $\tilde G$ is the universal covering group of $G$. In particular, we prove that if $G$ is a bounded factorizable non-fixing group of homeomorphisms then $[G,G]$ is uniformly perfect (Corollary 3.4). The case of open manifolds is also investigated. Examples of homeomorphism groups illustrating the results are given. \end{abstract}
\section{Introduction}
Given groups $G$ and $H$, by $G\leq H$ (resp. $G\lhd H$) we denote that $G$ is a subgroup (resp. normal subgroup) of $H$. Throughout by $\mathcal{H}(X)$ we denote the group of all homeomorphisms of a topological space $X$. Let $U$ be an open subset of $X$ and let $G$ be a subgroup of $\mathcal{H}(X)$. The symbol $\mathcal{H}_U(X)$ (resp. $G_U$) stands for the subgroup of elements of $\mathcal{H}(X)$ (resp. $G$) with support in $U$. For $g\in \mathcal{H}(X)$ the support of $g$, $\supp(g)$, is the closure of $\{x\in X:\, g(x)\neq x\}$. Let $\mathcal{H}_c(M)$ (resp. $G_c$) denote the subgroup of $\mathcal{H}(M)$ (resp. $G$) of all its compactly supported elements.
\begin{dff}\label{fac} Let $\mathcal{U}$ be an open cover of $X$. A group of homeomorphisms $G$ of a space $X$ is called \emph{$\mathcal{U}$-factorizable} if for every $g\in G$ there are $g_1,\ldots, g_r\in G$ with $g=g_1\ldots g_r$ and such that $\supp(g_i)\subset U_i$, $i=1,\ldots, r$, for some $U_1,\ldots, U_r\in\mathcal{U}$. $G$ is called \emph{factorizable} if for every open cover $\mathcal{U}$ of $X$ it is $\mathcal{U}$-factorizable.
Next $G$ is said to be \wyr{non-fixing} if $G(x)\neq \{ x \}$ for every $x \in X$, where $G(x):= \{ g(x)|g \in G \}$ is the orbit of $G$ at $x$. \end{dff} Given a group $G$, denote by $[f,g]=fgf^{-1}g^{-1}$ the commutator of $f,g\in G$, and by $[G,G]$ the commutator subgroup. Now the theorem of Ling can be formulated as follows.
\begin{thm} \cite{li}\label{ling} Let $X$ be a paracompact topological space and let $G$ be a factorizable non-fixing group of homeomorphisms of $X$. Then the commutator subgroup $[G,G]$ is perfect, that is $[[G,G],[G,G]]=[G,G]$. \end{thm}
Recall that a group $G$ is called \wyr{uniformly perfect} \cite{bip} if $G$ is perfect (i.e. $G=[G,G]$) and there exists a positive integer $r$ such that any element of $G$ can be expressed as a product of at most $r$ commutators of elements of $G$. For $g\in [G,G]$, $g\neq e$, the least $r$ such that $g$ is a product of $r$ commutators is called the \wyr{commutator length} of $g$ and is denoted by $\cl_G(g)$. By definition we put $\cl_G(e)=0$.
Throughout we adopt the following notation. Let $M$ be a paracompact manifold of class $C^r$, where $r=0,1,\ldots,\infty$. Then $\mathcal{D}^r(M)$ (resp. $\mathcal{D}^r_c(M)$) denotes the group of all $C^r$-diffeomorphisms of $M$ which can be joined with the identity by a (resp. compactly supported) $C^r$-isotopy. For simplicity by $C^0$-diffeomorphism we mean a homeomorphism.
Observe that in view of recent results (Burago, Ivanov and Polterovich \cite{bip}, Tsuboi \cite{Tsu2}) the diffeomorphism groups $\mathcal{D}^{\infty}_c(M)$ are uniformly perfect for most types of manifolds $M$, though some open problems are left.
Our first aim is to prove the following generalization of Theorem 1.2.
\begin{thm} Let $X$ be a paracompact topological space and let $G$ be a factorizable non-fixing group of homeomorphisms of $X$. Assume that $\cl_G$ is bounded on $[G,G]$ and that $G$ is bounded with respect to all fragmentation norms $\frag^{\mathcal{U}}$ (c.f. section 2), where $\mathcal{U}$ runs over all open covers of $X$. Then the commutator subgroup $[G,G]$ is uniformly perfect. \end{thm}
The proof of Theorem 1.3 and further results concerning the uniform perfectness of $[G,G]$ will be given in section 3.
Ling's theorem (Theorem 1.2) constitutes an essential improvement of Epstein's simplicity theorem \cite{ep} in at least two aspects. First, contrary to \cite{ep}, it provides algebraic information on nontransitive homeomorphism groups. Second, it enables one to strengthen the theorem of Epstein itself. We will recall Epstein's theorem and Ling's improvement of it in section 4. Also in section 4 we formulate conditions which ensure the uniform simplicity of $[G,G]$ (Theorem 4.3).
As usual $\tilde G$ stands for the universal covering group of $G$. In section 5 we will prove the following
\begin{thm} Suppose that $G\leq\mathcal{H}(X)$ is isotopically factorizable (Def. 5.2) and that $G_0$, the identity component of $G$, is non-fixing. Then the commutator subgroup $[\tilde G,\tilde G]$ is perfect. \end{thm}
In section 6 we will consider the case of a noncompact manifold $M$ such that $M$ is the interior of a compact manifold $\bar M$, and groups of homeomorphisms on $M$ with no restriction on support. Consequently such groups are not factorizable in the usual way but only in a wider sense (Def. 6.1). It is surprising that for a large class of homeomorphism or diffeomorphism groups of an open manifold the assertions of Theorems 1.2 and 1.3 still hold (see Theorems 6.9 and 6.10).
In the final section we will present some examples and open problems which are of interest in the context of the above results.
{\bf Acknowledgments.} A correspondence with Paul Schweitzer and his recent paper \cite{Sch} were helpful when we were preparing section 6. We would like to thank him very much for his kind help.
\section{Conjugation-invariant norms}
The notion of the conjugation-invariant norm is a basic tool in studies on the structure of groups. Let $G$ be a group. A \wyr{conjugation-invariant norm} (or \emph{norm} for short) on $G$ is a function $\nu:G\rightarrow[0,\infty)$ which satisfies the following conditions. For any $g,h\in G$ \begin{enumerate} \item $\nu(g)>0$ if and only if $g\neq e$; \item $\nu(g^{-1})=\nu(g)$; \item $\nu(gh)\leq\nu(g)+\nu(h)$; \item $\nu(hgh^{-1})=\nu(g)$. \end{enumerate} Recall that a group is called \emph{bounded} if it is bounded with respect to any bi-invariant metric. It is easily seen that $G$ is bounded if and only if any conjugation-invariant norm on $G$ is bounded.
Observe that the commutator length $\cl_G$ is a conjugation-invariant norm on $[G,G]$. In particular, if $G$ is a perfect group then $\cl_G$ is a conjugation-invariant norm on $G$. For any perfect group $G$ denote by $\cld_G$ the commutator length diameter of $G$, i.e. $\cld_G:=\sup_{g\in G}\cl_G(g)$. Then $G$ is uniformly perfect iff $\cld_G<\infty$.
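Conjugation invariance of $\cl_G$, stated above, follows from a routine identity (a short check, not spelled out in the text): conjugation carries a product of commutators to a product of commutators of the same length.

```latex
\begin{equation*}
  h\,[f_1,g_1]\cdots[f_r,g_r]\,h^{-1}
  = [hf_1h^{-1},\,hg_1h^{-1}]\cdots[hf_rh^{-1},\,hg_rh^{-1}],
\end{equation*}
```

so $\cl_G(hgh^{-1})\leq\cl_G(g)$ for all $h\in G$, and conjugating by $h^{-1}$ gives the reverse inequality.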
Assume now that $G\leq\mathcal{H}(X)$ is $\mathcal{U}$-factorizable (Def.1.1), and that $\mathcal{U}$ is a $G$-invariant open cover of $X$. The latter means that $g(U)\in\mathcal{U}$ for all $g\in G$ and $U\in\mathcal{U}$. Then we may introduce the following conjugation-invariant norm $\frag^{\mathcal{U}}$ on $G$. Namely, for $g\in G$, $g\neq\id$, we define $\frag^{\mathcal{U}}(g)$ to be the least integer $\rho>0$ such that $g=g_1\ldots g_{\rho}$ with $\supp(g_i)\subset U_i$ for some $U_i\in\mathcal{U}$, where $i=1,\ldots, \rho$. By definition $\frag^{\mathcal{U}}(\id)=0$.
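The $G$-invariance of $\mathcal{U}$ is exactly what makes $\frag^{\mathcal{U}}$ conjugation-invariant; a short verification: if $g=g_1\ldots g_{\rho}$ with $\supp(g_i)\subset U_i\in\mathcal{U}$, then for any $h\in G$

```latex
\begin{align*}
  hgh^{-1} &= (hg_1h^{-1})(hg_2h^{-1})\cdots(hg_{\rho}h^{-1}),\\
  \supp(hg_ih^{-1}) &= h(\supp(g_i)) \subset h(U_i)\in\mathcal{U},
\end{align*}
```

hence $\frag^{\mathcal{U}}(hgh^{-1})\leq\frag^{\mathcal{U}}(g)$, and conjugating by $h^{-1}$ gives the reverse inequality.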
Define $\fragd^{\mathcal{U}}_G:=\sup_{g\in G}\frag^{\mathcal{U}}(g)$, the diameter of $G$ in $\frag^{\mathcal{U}}$. Consequently, $\frag^{\mathcal{U}}$ is bounded iff $\fragd^{\mathcal{U}}_G<\infty$.
Observe that $\frag^{\{X\}}$ is the trivial norm on $G$, i.e. equal to 1 for all $g\in G\setminus\{\id\}$. Observe as well that $\frag^{\mathcal{V}}\geq\frag^{\mathcal{U}}$ provided $\mathcal{V}$ is finer than $\mathcal{U}$.
The significance of $\frag^{\mathcal{U}}$ consists in the following version of Proposition 1.15 in \cite{bip}. \begin{prop} Let $M$ be a $C^r$-manifold, $r=0,1,\ldots,\infty$. Then $\mathcal{D}^r_c(M)$ is bounded if and only if $\mathcal{D}^r_c(M)$ is bounded with respect to $\frag^{\mathcal{U}}$, where $\mathcal{U}$ is some cover by embedded open balls. \end{prop}
Indeed, it is a consequence of Theorem 1.18 in \cite{bip} stating that for a portable manifold $M$ the group $\mathcal{D}^r_c(M)$ is bounded, and the fact that $\rz^n$ is portable.
\section{Uniform perfectness of $[G,G]$}
In Theorems 3.5 and 3.8 below we also need stronger notions than that of non-fixing group (Def. 1.1).
\begin{dff} Let $\mathcal{U}$ be an open cover of $X$, $G\leq\mathcal{H}(X)$ and let $r\in\mathbb N$. \begin{enumerate} \item $G$ is called \emph{$r$-non-fixing} if for any $x\in X$ there are $f_1,\ldots, f_r,g_1,\ldots, g_r\in G$ (possibly $=\id$) such that $([f_r,g_r]\ldots[f_1,g_1])(x)\neq x$. \item $G$ is said to be \emph{$\mathcal{U}$-moving} if for every $U\in\mathcal{U}$ there is $g\in G$ such that $g(U)\cap U=\emptyset$. \item $G$ is said to be \emph{$r$-$\mathcal{U}$-moving} if for any $U\in \mathcal{U}$ there are $2r$ elements of $G$ (possibly $=\id$), say $f_1,\ldots, f_r, g_1,\ldots, g_r$, such that the sets $U$ and $([f_r,g_r]\ldots[f_1,g_1])(U)$ are disjoint. \item $G$ is said to be \emph{strongly $\mathcal{U}$-moving} if for every $U, V\in\mathcal{U}$ there is $g\in G$ such that $g(U)\cap (U\cup V)=\emptyset$. \item $G$ is called \emph{locally moving} if for any open set $U\subset X$ and $x\in U$ there is $g\in G_U$ such that $g(x)\neq x$. \end{enumerate} \end{dff}
Of course, if $G$ is either $r$-non-fixing, or $\mathcal{U}$-moving, or locally moving then it is non-fixing. Likewise, if $G$ is $r$-$\mathcal{U}$-moving then it is $s$-$\mathcal{U}$-moving for $r<s$ and $\mathcal{U}$-moving. Notice that if $\mathcal{V}$ is finer than $\mathcal{U}$ and $G$ is (resp. strongly) $\mathcal{U}$-moving then $G$ is (resp. strongly) $\mathcal{V}$-moving.
\begin{prop} Let $X$ be paracompact and let
$G\leq\mathcal{H}(X)$.\begin{enumerate} \item
If $G$ is non-fixing and factorizable (Def. 1.1) then $G$ is locally moving. \item If $G$ is locally moving then so is $[G,G]$. \item If $G$ is non-fixing and factorizable then $[G,G]$ is $1$-non-fixing (Def. 3.1(1)). \end{enumerate} \end{prop}
\begin{proof} (1) Let $x\in U$ and $g\in G$ such that $g(x)=y\neq x$. Choose $\mathcal{U}=\{U_1, U_2\}$, where $x\in U_1\setminus U_2$, $y\in U_2\setminus U_1$, $U_1\subset U$ and $X=U_1\cup U_2$. By assumption
we may write $g=g_r\ldots g_1$, where all $g_i$ are supported in elements of $\mathcal{U}$. Let $s:=\min\{i\in\{1,\ldots, r\}:\; \supp(g_i)\subset U_1 \text{\; and\;} g_i(x)\neq x\}$. Then $g_s\in G_U$ satisfies $g_s(x)\neq x$.
(2) Let $x\in U$. There is $g\in G_U$ with $g(x)\neq x$. Take an open $V$ such that $x\in V\subset U$ and $g(x)\not\in V$. Choose $f\in G_V$ with $f(x)\neq x$. It follows that $f(g(x))=g(x)\neq g(f(x))$ and, therefore, $[f,g](x)\neq x$. (3) follows from (1) and the proof of (2). \end{proof}
The following property of paracompact spaces is well-known. \begin{lem}\label{cover} If $X$ is a paracompact space and $\mathcal{U}$ is an open cover of $X$, then there exists an open cover $\mathcal{V}$ star finer than $\mathcal{U}$, that is
for all $V\in \mathcal{V}$ there is $U\in\mathcal{U}$ such that $\sta^{\mathcal{V}}(V)\subset U$. Here $\sta^{\mathcal{V}}(V):=\bigcup\{V'\in\mathcal{V}:\; V'\cap V\neq\emptyset\}$. In particular, for all $V_1, V_2\in \mathcal{V}$ with $V_1\cap V_2\neq\emptyset$ there is $U\in\mathcal{U}$ such that $V_1\cup V_2\subset U$. \end{lem} If $\mathcal{V}$ and $\mathcal{U}$ are as in Lemma 3.3 then we will write $\mathcal{V}\prec\mathcal{U}$.
For an open cover $\mathcal{U}$ let $\mathcal{U}^G:=\{g(U):\; g\in G \text{\; and \;} U\in\mathcal{U}\}$.
\emph{Proof of Theorem 1.3.} In view of Proposition 3.2 and the assumption, for any $x\in X$ there are $f,g\in[G,G]$ such that $[f,g](x)\neq x$. It follows that there exists an open cover $\mathcal{U}$ such that for any $U\in\mathcal{U}$ there are $f,g\in[G,G]$ such that $[f,g](U)\cap U=\emptyset$. Hence also for any $U\in\mathcal{U}^G$ there are $f,g\in[G,G]$ such that $[f,g](U)\cap U=\emptyset$. In fact, if $N\lhd G$ and $U\in\mathcal{U}$ are such that $n(U)\cap U=\emptyset$ for some $n\in N$, then for $g\in G$ we get $(\bar ng)(U)\cap g(U)=\emptyset$, where $\bar n=gng^{-1}\in N$.
Due to Lemma 3.3 we can find $\mathcal{V}$ such that $\mathcal{V}\prec\mathcal{U}$. We denote \begin{equation*}G^{\mathcal{U}}=\prod\limits_{U\in \mathcal{U}^G}[G_U,G_U].\end{equation*} Assume that $G$ is $\mathcal{V}$-factorizable and $\fragd^{\mathcal{V}}_G= \rho$.
First we show that $[G,G]\subset G^{\mathcal{U}}$ and that any $[g_1,g_2]\in[G,G]$ can be expressed as a product of at most $\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U$. In fact, it is an immediate consequence of the following commutator formulae for all $f,g,h\in G$ \begin{equation} [fg,h]=f[g,h]f^{-1}[f,h],\quad [f,gh]=[f,g]g[f,h]g^{-1}, \end{equation} and the fact that $\mathcal{V}\prec\mathcal{U}$. Now if $\cld_G=d$, then every element of $[G,G]$ is a product of at most $d\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U$.
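The identities (3.1) can be checked by direct expansion; for instance, for the first one:

```latex
\begin{align*}
  f[g,h]f^{-1}[f,h]
    &= f\,(ghg^{-1}h^{-1})\,f^{-1}\,(fhf^{-1}h^{-1})
     = fghg^{-1}h^{-1}hf^{-1}h^{-1}\\
    &= fghg^{-1}f^{-1}h^{-1}
     = fgh\,(fg)^{-1}h^{-1} = [fg,h],
\end{align*}
```

and the second identity follows by an analogous cancellation.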
Next, fix arbitrarily $U\in \mathcal{U}$. We have to show that for every $f,g\in G_U$ the bracket $[f,g]$ can be represented as a product of four commutators of elements of $[G,G]$.
By assumption on $\mathcal{U}^G$, there are $h_1,h_2\in [G,G]$ such that $h(U)\cap U=\emptyset$, where $h=[h_1,h_2]$. It follows that $[hfh^{-1}, g]=\id$. Therefore, $[[f,h],g]=[f,g]$. Observe that indeed $[[f,h],g]$ is a product of four commutators of elements of $[G,G]$. Thus any element of $[G,G]$ is a product of at most $4d\rho^2$ commutators of elements of $[G,G]$. \quad $\square$
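For completeness, here is the computation behind this step. Write $A=hf^{-1}h^{-1}$, so that $[f,h]=fA$; since $\supp(A)\subset h(U)$ is disjoint from $U$, the element $A$ commutes with $g$, and hence

```latex
\begin{equation*}
  [[f,h],g] = (fA)\,g\,(fA)^{-1}g^{-1}
            = f\,(AgA^{-1})\,f^{-1}g^{-1}
            = fgf^{-1}g^{-1} = [f,g].
\end{equation*}
```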
\begin{cor} Let $X$ be a paracompact space and let $G\leq\mathcal{H}(X)$ be a bounded, factorizable and non-fixing group. Then the commutator subgroup $[G,G]$ is uniformly perfect. \end{cor} \begin{proof} The only thing we need is that $\cl_G$ should be bounded (on $[G,G]$), and this fact is a consequence of Proposition 1.4 in \cite{bip}. \end{proof}
A more refined version of Theorem 1.3 is the following
\begin{thm} Let $X$ be a paracompact topological space, let $G\leq\mathcal{H}(X)$ with $\cl_G$ bounded (as the norm on $[G,G]$) and let $\mathcal{U}$ be a $G$-invariant open cover of $X$ such that \begin{enumerate} \item $G$ is strongly $\mathcal{U}$-moving (Def. 3.1(4)), and \item there is an open cover $\mathcal{V}$ satisfying $\mathcal{V}\prec\mathcal{U}$ such that $G$ is $\mathcal{V}$-factorizable and $G$ is bounded with respect to the fragmentation norm $\frag^{\mathcal{V}}$.\end{enumerate} Then the commutator subgroup $[G,G]$ is uniformly perfect. Furthermore, if $\fragd^{\mathcal{V}}_G= \rho$ and $\cld_G=d$ then $\cld_{[G,G]}\leq d\rho^2$. \end{thm} \begin{proof}
Let $\mathcal{U}$ and $\mathcal{V}$ satisfy the assumption. We denote $$G^{\mathcal{U}}=\prod\limits_{U\in \mathcal{U}}[G_U,G_U].$$ As in the proof of 1.3, first we show, due to (3.1) and $G$-invariance of $\mathcal{U}$, that $[G,G]\subset G^{\mathcal{U}}$ and that any $[f,g]\in[G,G]$ can be written as a product of at most $\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U$. This implies that every element of $[G,G]$ is a product of at most $d\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U$.
For $U\in \mathcal{U}$ we will show that for every $f,g\in G_U$ the bracket $[f,g]$ is a commutator of two elements of $[G,G]$.
By assumption and Def. 3.1(4), there is $h\in G$ such that $h(U)\cap U=\emptyset$. It follows that $[hfh^{-1}, g]=\id$. Next, for $U, h(U)\in\mathcal{U}$ there is $k\in G$ such that $k(U)\cap(U\cup h(U))=\emptyset$. Consequently, $[f, kgk^{-1}]=\id$ and $[hfh^{-1}, kgk^{-1}]=\id$. Therefore, in view of (3.1), $[f,g]=[[f,h],[g,k]]$, that is $[f,g]$ is a commutator of elements of $[G,G]$. Thus $[G,G]$ is uniformly perfect and $\cld_{[G,G]}\leq d\rho^2$, as required. \end{proof}
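The identity $[f,g]=[[f,h],[g,k]]$ used here can also be verified directly. Set $A=hf^{-1}h^{-1}$ and $B=kg^{-1}k^{-1}$, so that $[f,h]=fA$ and $[g,k]=gB$; the supports of $A$ and $B$ lie in $h(U)$ and $k(U)$, which are disjoint from $U$ and from each other, so $A$ commutes with $g$ and $B$, and $B$ commutes with $f$. Then

```latex
\begin{align*}
  [[f,h],[g,k]] &= (fA)(gB)(fA)^{-1}(gB)^{-1}
                 = fAgBA^{-1}f^{-1}B^{-1}g^{-1}\\
                &= fg\,(ABA^{-1})\,f^{-1}B^{-1}g^{-1}
                 = fgBf^{-1}B^{-1}g^{-1}\\
                &= fgf^{-1}\,(BB^{-1})\,g^{-1}
                 = [f,g].
\end{align*}
```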
From the proof of Theorem 3.5 we get \begin{cor} If $\mathcal{U}$ is a $G$-invariant open cover of $X$ such that $G$ is strongly $\mathcal{U}$-moving and $\mathcal{V}$-factorizable for some open cover $\mathcal{V}$ satisfying $\mathcal{V}\prec\mathcal{U}$ then $[G,G]$ is perfect. \end{cor}
\begin{prop} (1) Let $G$ be $\mathcal{U}$-moving. Assume that $\mathcal{V}$ is a $G$-invariant open cover
such that $\mathcal{V}\prec\mathcal{U}$, $G$ is $\mathcal{V}$-factorizable and $\fragd^{\mathcal{V}}_G=\rho$. Then $G$ is $\rho$-$\mathcal{V}$-moving.
(2) Let $\mathcal{U}$, $\mathcal{V}$, $\mathcal{W}$ and $\mathcal{T}$ be such that $\mathcal{T}\prec\mathcal{W}\prec\mathcal{V}\prec\mathcal{U}$, and $\mathcal{V}$, $\mathcal{W}$ and $\mathcal{T}$ are $G$-invariant. If $G$ is $\mathcal{U}$-moving and $\mathcal{T}$-factorizable with $\fragd^{\mathcal{T}}_G=\rho$, then $[G,G]$ is $\rho^2$-$\mathcal{W}$-moving. \end{prop} \begin{proof} (1) Suppose that $\mathcal{V}\prec\mathcal{U}$ and let $V\in\mathcal{V}$. Then there is $g\in G$ such that $g(V)\cap V=\emptyset$. By assumption there exist $V_1,\ldots, V_{\rho}\in\mathcal{V}$ and $g_1,\ldots, g_{\rho}\in G$ such that $g=g_{\rho}\ldots g_1$ and $\supp(g_i)\subset V_i$, $i=1,\ldots, \rho$ (possibly $g_i=\id$).
Let us consider two cases: $(a)$ $g_1(V)\cap V=\emptyset$ and $(b)$ $g_1(V)\cap V\neq\emptyset$. In case $(a)$ we have $ g_1(V)\cup V\subset\supp(g_1)\subset U\in\mathcal{U}$. Choose $f_1\in G$ such that $f_1(U)\cap U=\emptyset$. Then $[g_1,f_1](V)=g_1(V)$ and we are done. In case $(b)$ we have $V\cup g_1(V)\subset U_1$ for some $U_1\in\mathcal{U}$, and $f_1(U_1)\cap U_1=\emptyset$ for some $f_1\in G$. Again $[g_1,f_1](V)=g_1(V)$. Now we continue as before. In case $(g_2g_1)(V)\cap g_1(V)=\emptyset$ we get $V\cap\bar g_2(V)=\emptyset$, where $\bar g_2=g_1^{-1}g_2^{-1}g_1$, and we are done as in $(a)$. Otherwise, $(g_2g_1)(V)\cup g_1(V)\subset U_2$ for some $U_2\in\mathcal{U}$, and $f_2(U_2)\cap U_2=\emptyset$ for some $f_2\in G$. Therefore, $[g_2,f_2](g_1(V))=(g_2g_1)(V)$. Proceeding by induction we get $$ ([g_{\rho},f_{\rho}]\ldots [g_1,f_1])(V)=(g_{\rho}\ldots g_1)(V)=g(V),$$ and the claim follows.
(2) It follows from the hypotheses that $G$ is $\mathcal{V}$-factorizable and $\fragd^{\mathcal{V}}_G\leq \rho$. Moreover, as in the proof of Theorem 1.3 we get that $[G,G]$ is $\mathcal{W}$-factorizable and $\fragd^{\mathcal{W}}_{[G,G]}\leq \rho^2$. Hence by (1) $G$ is $\rho$-$\mathcal{V}$-moving. In particular $[G,G]$ is $\mathcal{V}$-moving. Then again (1) implies that $[G,G]$ is $\rho^2$-$\mathcal{W}$-moving. \end{proof}
In the following version of Theorem 1.3 we avoid the assumption that $G$ is strongly $\mathcal{U}$-moving. \begin{thm} Let $X$ be a paracompact topological space, let $G\leq\mathcal{H}(X)$ with $\cl_G$ bounded, and let $\mathcal{U}$ be an open cover of $X$ such that \begin{enumerate} \item $G$ is $\mathcal{U}$-moving, and \item there are $G$-invariant open covers $\mathcal{V}$, $\mathcal{W}$, and $\mathcal{T}$ fulfilling the relation $\mathcal{T}\prec\mathcal{W}\prec\mathcal{V}\prec\mathcal{U}$,
and such that $G$ is $\mathcal{T}$-factorizable and it is bounded with respect to $\frag^{\mathcal{T}}$.\end{enumerate} Then $[G,G]$ is uniformly perfect and $\cld_{[G,G]}\leq 4d\rho^4$ provided $\fragd^{\mathcal{T}}_G= \rho$ and $\cld_G=d$. \end{thm} \begin{proof} Let $\fragd_G^{\mathcal{T}}=\rho$. Then a fortiori $\fragd_G^{\mathcal{W}}\leq \rho$. In view of Proposition 3.7, $[G,G]$ is $\rho^2$-$\mathcal{W}$-moving.
Let $[f,g]\in[G,G]$. Applying to $\mathcal{T}\prec\mathcal{W}$ the same reasoning as in the proof of Theorem 1.3 for $\mathcal{V}\prec\mathcal{U}$, we see that $[f,g]$ can be written as a product of at most $\rho^2$ elements from $G^{\mathcal{W}}=\prod_{W\in\mathcal{W}}[G_W,G_W]$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_W$ for some $W\in \mathcal{W}$. Consequently, every element of $[G,G]$ can be expressed as a product of at most $d\rho^2$ elements of $G^{\mathcal{W}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_W$ for some $W\in\mathcal{W}$.
Now take arbitrary $W\in\mathcal{W}$ and $f,g\in G_W$. Since $[G,G]$ is $\rho^2$-$\mathcal{W}$-moving, there are $h_1,\ldots, h_{\rho^2},h'_1,\ldots, h'_{\rho^2}\in [G,G]$ such that for $h=[h_1,h'_1]\ldots [h_{\rho^2},h'_{\rho^2}]$ we have $h(W)\cap W=\emptyset$ and, consequently, $[[h,f],g]=[f,g]$. It is easily seen that $[[h,f],g]$ is a product of $4\rho^2$ commutators of elements of $[G,G]$. Thus any element of $[G,G]$ is a product of at most $4d\rho^4$ commutators of elements of $[G,G]$.
\end{proof} As a consequence of the above proof we have \begin{cor} If $G$ is $\mathcal{U}$-moving and $\mathcal{T}$-factorizable for some $G$-invariant open covers $\mathcal{V}$, $\mathcal{W}$, and $\mathcal{T}$ such that $\mathcal{T}\prec\mathcal{W}\prec\mathcal{V}\prec\mathcal{U}$, then $[G,G]$ is perfect. \end{cor}
\section{Simplicity and uniform simplicity of $[G,G]$}
Let us recall Epstein's theorem. \begin{thm}\cite{ep}\label{eps} Let $X$ be a paracompact space, let $G$ be a group of~homeomorphisms of $X$ and let $\,\mathcal{B}$ be a basis of open sets of $X$ satisfying the following axioms:\\ \noindent{Axiom 1.} If $U\in \mathcal{B}$ and $g\in G$, then $g(U)\in \mathcal{B}$.\\ \noindent{Axiom 2.} $G$ acts transitively on $\mathcal{B}$ (i.e. $\forall\, U,V \in \mathcal{B} \; \exists\, g \in G : g(U)=V$).\\ \noindent{Axiom 3.} Let $g\in G$, $U\in \mathcal{B}$ and let $\mathcal{U}\subset \mathcal{B}$ be a cover of $X$. Then there exist an integer $n$, elements $g_1,\dots ,g_n\in G$ and $V_1,\dots ,V_n\in \mathcal{U}$ such that $g=g_ng_{n-1}\dots g_1$, $\supp (g_i)\subset V_i$ and $$\supp (g_i)\cup (g_{i-1}\dots g_1(\overline{U}))\neq X\; \text{for}\; 1\leqslant i\leqslant n.$$ Then $[G,G]$, the commutator subgroup of $G$, is simple. \end{thm}
It is worth noting that Theorem 4.1 was an indispensable ingredient in the proofs of celebrated simplicity theorems on diffeomorphism groups and their generalizations (c.f. \cite{Thu}, \cite{Mat}, \cite{ban1}, \cite{ban2}, \cite{ha-ry}, \cite{ry1}).
We say that $G\leq\mathcal{H}(X)$ acts \emph{transitively inclusively} (cf. \cite{li}) on a topological basis $\mathcal{B}$ if for all $U,V\in\mathcal{B}$ there is $g\in G$ such that $g(U)\subset V$. It is not difficult to derive from Theorem 1.2 the following improvement of Theorem 4.1; see \cite{li}.
\begin{thm}\cite{li} Let $X$ be a paracompact space, let $G\leq \mathcal{H}(X)$ and let $\,\mathcal{B}$ be a basis of open sets of $X$ satisfying the following axioms:\\ \noindent{Axiom 1.} $G$ acts transitively inclusively on $\mathcal{B}$.\\ \noindent{Axiom 2.} $G$ is $\mathcal{U}$-factorizable (Def. 1.1) for all covers $\mathcal{U}\subset\mathcal{B}$.\\
Then $[G,G]$ is a simple group. \end{thm}
Now we wish to provide conditions ensuring that the commutator group of a homeomorphism group is uniformly simple. Recall that a group $G$ is called \emph{uniformly simple} if there is $d>0$ such that for all $f,g\in G$ with $f\neq e$ we have $g=h_1fh_1^{-1}\ldots h_sfh_s^{-1}$, where $s\leq d$ and $h_1,\ldots, h_s\in G$. Given a uniformly simple group $G$, denote by $\usd_G$ the least $d$ as above.
Note that recently Tsuboi \cite{Tsu3} showed that $\mathcal{D}^{\infty}_c(M)$ is uniformly simple for many types of manifolds $M$. However, for some types of $M$ the problem remains open.
\begin{thm} Let $\mathcal{B}$ be a topological basis of $X$. Suppose that $G\leq\mathcal{H}(X)$ satisfies the following conditions:\begin{enumerate} \item $\cl_G$ is bounded; \item $G$ acts transitively inclusively on $\mathcal{B}$; \item there is an open cover $\mathcal{U}\prec\mathcal{B}$ such that $G$ is $\mathcal{U}$-factorizable and $G$ is bounded w.r.t. the fragmentation norm $\frag_G^{\mathcal{U}}$.\end{enumerate} Then the group $[G,G]$ is uniformly simple. Moreover, if $\cld_G=d$ and $\fragd_G^{\mathcal{U}}=\rho$ then $\usd_{[G,G]}\leq 4d\rho^2$. \end{thm}
\begin{proof}
In view of Theorem 4.2, $[G,G]$ is simple. Let $f,g\in [G,G]$ with $f\neq e$. Then there is $x\in X$ with $f(x)\neq x$, and hence there is $B\in\mathcal{B}$ satisfying $f(B)\cap B=\emptyset$.
First assume that $g=[g_1,g_2]\in[G,G]$. If $\fragd_G^{\mathcal{U}}=\rho$, then $g$ can be expressed as a product of at most $\rho^2$ elements of $G^{\mathcal{B}}=\prod_{U\in \mathcal{B}^G}[G_U,G_U]$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U\in\mathcal{B}^G$. Here $\mathcal{B}^G=\{g(U):\; g\in G,\; U\in\mathcal{B}\}$. In fact, we repeat the use of (3.1) as in the proof of Theorem 3.1. Now if $\cld_G=d$, then every $g\in[G,G]$ is a product of at most $d\rho^2$ elements of $G^{\mathcal{B}}$ of the form $[h_1,h_2]$, where $h_1,h_2\in G_U$ for some $U\in\mathcal{B}^G$.
Since $G$ acts transitively inclusively on $\mathcal{B}$ (and, consequently, on $\mathcal{B}^G$), any $[h_1,h_2]$ as above is conjugate to $[k_1,k_2]$ with $k_1,k_2\in G_B$. Then $[k_1,k_2]=[[f,k_1],k_2]$. Hence $[k_1,k_2]$ is a product of four conjugates of $f$ and $f^{-1}$. It follows that $g$ is a product of at most $4d\rho^2$ conjugates of $f$ and $f^{-1}$, as claimed.
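The count of four conjugates can be made explicit. With the convention $[a,b]=aba^{-1}b^{-1}$ (an assumption; the paper does not spell out its convention), each commutator with $f$ unfolds into conjugates of $f^{\pm 1}$:
\[
[f,k_1]=f\cdot\bigl(k_1 f^{-1} k_1^{-1}\bigr),\qquad
[[f,k_1],k_2]=[f,k_1]\cdot k_2[f,k_1]^{-1}k_2^{-1}
= f\,\bigl(k_1 f^{-1} k_1^{-1}\bigr)\,\bigl(k_2k_1 f k_1^{-1}k_2^{-1}\bigr)\,\bigl(k_2 f^{-1} k_2^{-1}\bigr),
\]
a product of exactly four conjugates of $f$ and $f^{-1}$, as claimed.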
\end{proof} \begin{cor} If $G\leq\mathcal{H}(X)$ is factorizable and bounded, and $G$ acts transitively inclusively on some basis $\mathcal{B}$ of $X$, then $[G,G]$ is uniformly simple. \end{cor} In fact, in view of Proposition 1.4 in \cite{bip}, $[G,G]$ is then bounded with respect to $\cl_G$, and the remaining hypotheses of Theorem 4.3 are fulfilled too.
\section{Perfectness of $[\tilde G,\tilde G]$}
Let $G$ be a topological group. By $\mathcal{P}G$ we denote the totality of paths (or isotopies) $\gamma:I\rightarrow G$ with $\gamma(0)=e$ (where $I=[0,1]$). Then $\mathcal{P}G$ endowed with pointwise multiplication is a topological group. Next, $\tilde G$ will stand for the universal covering group of $G$, that is, $\tilde G=\mathcal{P}G/\!\sim$, where $\sim$ denotes the relation of homotopy rel.\ endpoints.
We introduce the following two operations on the space of paths $\mathcal{P} G$. Let $\mathcal{P}^{\star}G=\{ \gamma \in \mathcal{P}G: \gamma (t)=e \quad \textrm{for} \quad t \in [0,\frac{1}{2}] \}$. For all $\gamma \in \mathcal{P} G$ we define $\gamma^{\star}$ as follows:
\begin{equation}\nonumber
\gamma^{\star}(t)= \left\{ \begin{array}{lcl}
e& \text{for} & t \in [0,\frac{1}{2}]\\ \gamma(2t-1)& \text{for}& t \in [\frac{1}{2},1] \end{array} \right. \end{equation} Then $\gamma^{\star}\in\mathcal{P}^{\star}G$ and the subgroup $\mathcal{P}^{\star}G$ is the image of $\mathcal{P} G$ under the mapping $\star:\gamma\mapsto \gamma^{\star}$. The elements of $\mathcal{P}^{\star}G$ are said to be \wyr{special} paths in $G$. Clearly, the group of special paths is preserved by conjugations, i.e. $\conj_g(\mathcal{P}^{\star}G)\subset\mathcal{P}^{\star}G$ for every $g\in\mathcal{P} G$, where $\conj_g(h)=ghg^{-1}$ for $h\in\mathcal{P} G$.
Next, let $\mathcal{P}^{\square}G=\{ \gamma \in \mathcal{P}G: \gamma (t)=\gamma(1) \quad \textrm{for} \quad t \in [\frac{1}{2}, 1] \}$. For all $\gamma \in \mathcal{P} G$ we define $\gamma^{\square}$ by:
\begin{equation}\nonumber
\gamma^{\square}(t)= \left\{ \begin{array}{lcl}
\gamma(2t)& \text{for} & t \in [0,\frac{1}{2}]\\ \gamma(1)& \text{for}& t \in [\frac{1}{2},1] \end{array} \right. \end{equation} As before, $\gamma^{\square}\in\mathcal{P}^{\square}G$ and the subgroup $\mathcal{P}^{\square}G$ coincides with the image of $\mathcal{P} G$ under the mapping $\square:\gamma\mapsto \gamma^{\square}$.
\begin{lem}\label{zero} For any $\gamma \in \mathcal{P} G$ we have $\gamma \sim \gamma^{\star}$ and $\gamma\sim\gamma^{\square}$. \end{lem}
\begin{proof} We have to find a homotopy $\Gamma$ rel.\ endpoints between $\gamma$ and $\gamma^{\star}$. For all $s\in I$ define $\Gamma$ as follows: \begin{equation}\nonumber \Gamma(t,s)= \left\{ \begin{array}{lcl} e& \text{for} & t\in [0,\frac{s}{2}]\\ \gamma(\frac{2t-s}{2-s})& \text{for}& t\in [\frac{s}{2},1] \end{array} \right. \end{equation} It is easy to check that $\Gamma$ fulfils all the requirements. The second claim follows analogously. \end{proof}
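For completeness, the routine check (spelled out here, though the proof leaves it to the reader) that $\Gamma$ is the desired homotopy reads:
\[
\Gamma(t,0)=\gamma(t),\qquad \Gamma(t,1)=\gamma^{\star}(t),\qquad
\Gamma(0,s)=e,\qquad \Gamma(1,s)=\gamma\!\left(\tfrac{2-s}{2-s}\right)=\gamma(1),
\]
and the two branches agree at $t=\frac{s}{2}$ because $\gamma(0)=e$, so $\Gamma$ is continuous and fixes both endpoints.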
After these prerequisites let us return to homeomorphism groups.
Let $X$ be a paracompact space and let $G\leq \mathcal{H}(X)$. Here $\mathcal{H}(X)$ is endowed with the compact-open topology and $G$ with the induced topology. If $f\in\mathcal{P} G$ then we define $\supp(f):=\bigcup_{t\in[0,1]}\supp(f_t)$. By $G_0$ we denote the subgroup of all $g\in G$ such that there is $f\in\mathcal{P} G$ with $f_1=g$. $G_0$ is called the \emph{identity component} of $G$. Clearly $G_0\lhd G$. \begin{dff} We say that $G$ is \emph{isotopically factorizable} if for every open cover $\mathcal{U}$ and every isotopy $f\in\mathcal{P} G$ there are $U_1,\ldots, U_r\in\mathcal{U}$ and $f_1,\ldots, f_r\in\mathcal{P} G$ such that $f=f_1\ldots f_r$ and $\supp(f_i)\subset U_i$ for all $i$. \end{dff}
Clearly, if $G$ is isotopically factorizable then $G_0$ is factorizable.
\emph{Proof of Theorem 1.4.} For $f\in\mathcal{P} G$, denote by $\langle f\rangle_{\sim}$ the homotopy rel.\ endpoints class of $f$.
Due to Proposition 3.2 and the assumption, for any $x\in X$ there are $g,\bar g\in[G_0,G_0]$ such that $[g,\bar g](x)\neq x$. Consequently, there exists an open cover $\mathcal{U}$ such that for all $U\in\mathcal{U}$ there are $g,\bar g\in[G_0,G_0]$ with $[g,\bar g](U)\cap U=\emptyset$. Since $G_0\lhd G$, the same holds for $\mathcal{U}^G$ instead of $\mathcal{U}$. In view of Lemma 5.1, there are $f, \bar f\in\mathcal{P}^{\square}G$ such that $f_1=g$ and $\bar f_1=\bar g$.
Choose $\mathcal{V}$ such that $\mathcal{V}\prec\mathcal{U}$ (Lemma 3.3) and denote \begin{equation*}\mathcal{P} G^{\mathcal{U}}=\prod\limits_{U\in \mathcal{U}^G}[\mathcal{P} G_U,\mathcal{P} G_U].\end{equation*}
First we notice that $[\mathcal{P} G,\mathcal{P} G]\subset \mathcal{P} G^{\mathcal{U}}$. As in the proof of Theorem 1.3 we use (3.1) for elements of $\mathcal{P} G$ and the fact that $\mathcal{P} G$ is $\mathcal{V}$-factorizable.
Next, fix arbitrarily $U\in \mathcal{U}$ and let $f, \bar f\in\mathcal{P}^{\square}G$ as above. Put $\hat f=[f,\bar f]$.
Then $\hat f_t(U)\cap U=\emptyset$ for all $t\in [\frac{1}{2}, 1]$. We will show that for every $h, \bar h\in \mathcal{P} G_U$ the bracket $[\langle h\rangle_{\sim},\langle\bar h\rangle_{\sim}]$ is represented as a product of four commutators of elements of $[\tilde G,\tilde G]$. In view of Lemma 5.1 choose $k,\bar k\in\mathcal{P}^{\star}G$ such that $\langle k\rangle_{\sim}=\langle h\rangle_{\sim}$ and $\langle \bar k\rangle_{\sim}=\langle \bar h\rangle_{\sim}$. It follows that $[\hat fk\hat f^{-1}, \bar k]=\id$ and $[[\hat f,k],\bar k]=[k,\bar k]$. Therefore, $[\langle h\rangle_{\sim},\langle\bar h\rangle_{\sim}]$ is a product of four commutators of elements of $[\tilde G,\tilde G]$. \quad $\square$
\begin{rem} (1) Observe that one can formulate some results for $[\tilde G,\tilde G]$, analogous to Theorems 1.3, 3.5 and 3.8, by assuming that $G$ is isotopically factorizable, $G_0$ satisfies some conditions in Def. 3.1, $\cl_{\mathcal{P} G}$ is bounded, and $\mathcal{P} G$ is bounded in $\frag^{\mathcal{U}}$.
(2) Obviously, $\tilde G$ and $[\tilde G,\tilde G]$ are not simple, since $\pi(G)\lhd\tilde G$ and $[\pi(G),\pi(G)]\lhd[\tilde G,\tilde G]$, where $\pi(G)$ is the fundamental group of $G$. \end{rem}
\section{The commutator subgroup of a diffeomorphism group on an open manifold}
Assume $r=0,1,\ldots,\infty$. Let a manifold $M$ be the interior of a compact, connected manifold $\bar M$ of class $C^r$ with non-empty boundary $\partial$. By a \wyr{product neighborhood} of $\partial$ we mean a closed subset $P=\partial\times[0,1)$ of $M$ such that $\partial\times[0,1]$ is embedded in $\bar M$, and $\partial\times\{1\}$ is identified with $\partial$.
A \wyr{translation system} on the product manifold $N\times[0,\infty)$ (c.f. \cite{li1}, p.168) is a family $\{P_j\}_{j=1}^{\infty}$ of closed product neighborhoods of $N\times\{\infty\}$ such that $P_{j+1}\subset\intt P_j$ and $\bigcap_{j=1}^{\infty}P_j=\emptyset$. By a {\it ball} we mean an open ball with its closure compact and contained in a chart domain.
Let $G\leq\mathcal{D}^r(M)$, where $r=0,1,\ldots,\infty$. For a subset $U\subset M$ denote by $G(U)$ the subgroup of all elements of $G$ which can be joined with the identity by an isotopy in $G$ compactly supported in $U$.
\begin{dff} Let $\mathcal{B}$ be a cover of $M$ by balls.
$G$ is called \wyr{$\mathcal{B}$-factorizable} if for any $f\in G$ there are a product neighborhood $P=\partial\times[0,1)$, and a family of diffeomorphisms $g,g_1,\ldots, g_{\rho}\in G$ such that:
(1) $f=g g_1\cdots g_{\rho}$ with $g\in G(P)$ and $g_j\in G(B_j)$, where $B_j\in\mathcal{B}$ for $j=1,\ldots, \rho$.
Furthermore, for any product neighborhood $P$ and for any $g\in G(P)$ there are a sequence of reals in $(0,1)$ tending to 1, \[0<a_1<\bar a_1<\bar b_1<b_1<a_2<\ldots<a_n<\bar a_n<\bar b_n<b_n<\ldots<1,\] and $h\in G(P)$ such that
(2) $h=g$ on $\bigcup_{n=1}^{\infty} \partial\times[\bar a_n,\bar b_n]$;
(3) $h=\id$ if $g=\id$.
Put $D_n:=\partial\times(a_n,b_n)$ and $D:=\bigcup_{n=1}^{\infty}D_n$. Then we also assume that:
(4) $\supp(h)\subset D$;
(5) for the resulting decomposition $h=h_1h_2\ldots$ with respect to $D=\bigcup_{n=1}^{\infty}D_n$ we have $h_n\in G(D_n)$ for all $n$.
$G$ is called \wyr{factorizable (in the wider sense)} if it is $\mathcal{B}$-factorizable for every cover $\mathcal{B}$ of $M$ by balls.
Finally, if $G$ is factorizable, for any $f\in G$ we define
$\Frag_{G}(f)$ as the smallest $\rho$ such that there are a family of balls
$\{B_{j}\}$, a product neighborhood $P$,
and a decomposition of $f$ as in (1). Then
$\Frag_{G}$ is a conjugation-invariant norm on $G$, called the \wyr{fragmentation norm}. In fact, since $G\leq\mathcal{D}^r(M)$, any $g\in G$ preserves the ends of $M$, so that it takes (by conjugation) any decomposition as in (1) into another such decomposition.
Define $\Fragd_{G}:=\sup_{g\in G}\Frag_{G}(g)$, the diameter of $G$ in $\Frag_{G}$. Consequently, $\Frag_{G}$ is bounded iff $\Fragd_{G}<\infty$.
\end{dff}
\begin{rem} The reason for introducing Def. 6.1 is the absence of isotopy extension theorems or fragmentation theorems for some geometric structures. Roughly speaking, $G$ satisfies Def. 6.1 if all its elements can be joined with the identity by an isotopy in $G$ and appropriate versions of the above-mentioned theorems are available. \end{rem}
Let $\diff^r(M)$ (resp. $\diff^r_c(M)$) be the group of all $C^r$ diffeomorphisms of $M$ (resp. with compact support). To illustrate Def. 6.1 we consider the following \begin{exa} The group $\diff^r(\rz^n)$ does not satisfy Def. 6.1. The reason is that in this case any $f\in\diff^r(\rz^n)$ would be isotopic to the identity due to 6.1(1), which is not true. Next, any $f\in\diff^r_c(\rz^n)$ is isotopic to the identity, but the isotopy need not be compactly supported. It follows that $\diff^r_c(\rz^n)$ does not fulfil Def. 6.1(1). The exception is $r=0$, when the Alexander trick applies (see e.g. \cite{ed-ki}, p.70) and any compactly supported homeomorphism of $\rz^n$ is isotopic to the identity by a compactly supported isotopy. It follows that $\diff^0_c(\rz^n)$ is factorizable in view of \cite{ed-ki}.
Let $C=\rz\times\mathbb S^1$ be the annulus. Then there is the twisting number epimorphism $\diff^r_c(C)\rightarrow\mathbb Z$. It follows that $\diff^r_c(C)$ is unbounded in view of Lemma 1.10 in \cite{bip}. On the other hand, $\diff^r_c(C)$ is not factorizable. \end{exa}
\begin{dff} \begin{enumerate}
\item $G$ is said to be \wyr{determined on compact subsets}
if the following is satisfied. Let $f\in\mathcal{D}^r(M)$.
If there are a sequence of relatively compact subsets $U_1\subset\overline U_1\subset
U_2\subset\ldots\subset U_n\subset\overline{U}_n\subset
U_{n+1}\subset\ldots$ with $\bigcup U_n=M$ and a sequence $\{g_n\}$, $n=1,2,\ldots,$ of elements
of $G$ such that $f|_{U_n}=g_n|_{U_n}$ for $n=1,2,\ldots,$ then we have $f\in G$.
\item We say that $G$ \wyr{admits translation systems} if for any sequence $\{\lambda_n\}$, $n=0,1,\ldots$, with $\lambda_n\in(0,1)$, tending increasingly to 1, there exists a $C^r$-mapping $[0,\infty)\ni t\mapsto f_t\in G$ supported in the interior of $P$, with $f_0=\id$, $f_j=(f_1)^j$ for $j=2,3,\dots$, and such that for the translation system $P_n=\partial\times[\lambda_n,1)$ one has $f_1(P_n)=P_{n+1}$ for $n=0,1,2,\ldots$. \end{enumerate} \end{dff}
By using suitable isotopy extension theorems (c.f. \cite{ed-ki}, \cite{hir}, \cite{ban2}) we have
\begin{prop} \cite{ry6} The groups $\mathcal{D}^r(M)$, $r=0,1,\ldots,\infty$, satisfy Definitions 6.1 and 6.4. \end{prop}
The following result is essential for describing the structure of $[G,G]$. Though it was proved in \cite{ry6}, we give the proof here for the sake of completeness.
\begin{lem} If $G$ satisfies Definitions 6.1 and 6.4, then any $g\in G(P)$, where $P$ is a product neighborhood of $\partial$, can be written as a product of two commutators of elements of $G(P)$. \end{lem} \begin{proof} We may assume that $g\in G(\intt( P))$. Choose as in Def. 6.1 a sequence $0<a_1<\bar a_1<\bar b_1<b_1<a_2<\ldots<a_n<\bar a_n<\bar b_n<b_n<\ldots<1$ and $h\in G(P)$ such that conditions (2)-(5) in Def. 6.1 are fulfilled.
Put $\bar h=h^{-1}g$, that is, $g=h\bar h$. Then $\supp(\bar h)$ is contained in $\partial\times\bigl((0,\bar a_1)\cup\bigcup_{n=1}^{\infty}(\bar b_n,\bar a_{n+1})\bigr)$, and $\bar h=g$ on $\partial\times\bigl([0, a_1]\cup\bigcup_{n=1}^{\infty}[ b_n, a_{n+1}]\bigr)$. We show that $h$ is a commutator of elements in $G(\intt(P))$.
Choose arbitrarily $\lambda_0\in (0,a_1)$ and $\lambda_n\in(b_n,a_{n+1})$ for $n=1,2,\ldots$. In light of Def. 6.4(2) there exists an isotopy $[0,\infty)\ni t\mapsto f_t\in G$ supported in $\partial\times(0,1)$, such that $f_0=\id$ and $f_j(P_n)=P_{n+j}$ for $j=1,2,\ldots$ and for $n=0,1,2,\ldots$, where $P_n=\partial\times[\lambda_n,1)$ for $n=0,1,\ldots$. Now define $\tilde h\in G(\intt(P))$ as follows. Set $\tilde h=h$ on $\partial\times[0,\lambda_1)$, and $\tilde h=h(f_1hf_1^{-1})\ldots(f_nhf_n^{-1})$ on $\partial\times[0,\lambda_{n+1})$
for $n=1,2\ldots$. Here $f_n=(f_1)^n$. Then $\tilde h|_{\partial\times[0,\lambda_n)}$ is a consistent family of functions, and
$\tilde h=\bigcup_{n=1}^{\infty} \tilde h|_{\partial\times[0,\lambda_n)}$ is a local diffeomorphism. It is easily checked that $\tilde h$ is a bijection. Due to Def. 6.4(1) $\tilde h\in G(\intt (P))$.
By definition we have the equality $\tilde h=hf_1\tilde h f_1^{-1}$. It follows that $h=\tilde h f_1 \tilde h^{-1}f_1^{-1}=[\tilde h,f_1]$. Similarly, $\bar h$ is a commutator of elements of $G(P)$. The claim follows. \end{proof}
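The mechanism behind the last step is the classical ``infinite swindle'': conjugation by $f_1$ shifts the infinite product by one factor, so the product absorbs into itself. Schematically (with $f_n=(f_1)^n$, as in the proof above):
\[
\tilde h=h\cdot\prod_{n=1}^{\infty} f_n h f_n^{-1},
\qquad
f_1\tilde h f_1^{-1}=\prod_{n=1}^{\infty} f_n h f_n^{-1},
\qquad\text{whence}\quad
\tilde h=h\,\bigl(f_1\tilde h f_1^{-1}\bigr),
\]
and therefore $h=\tilde h f_1 \tilde h^{-1}f_1^{-1}=[\tilde h,f_1]$.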
\begin{dff} Let $G$ satisfy Def. 6.1. Then \begin{enumerate} \item the symbol $G_c$ stands for the subgroup of all $f\in G$ such that there is a decomposition $f=gg_1\ldots g_{\rho}$ as in Def. 6.1(1) with $g=\id$; \item $G$ is said to be \emph{localizable} if for any $f\in G$ and any compact $C\subset M$ there is $g\in G_c$ such that $f=g$ on $C$. \end{enumerate} \end{dff} Clearly $G_c$ is a subgroup of the group of compactly supported members of $G$. However, the converse inclusion does not hold in general: for $G=\mathcal{D}^r(C)$ take a compactly supported diffeomorphism of $C$ with nonzero twisting number (Example 6.3). For the reason for introducing localizable groups, see Remark 6.2. It follows from the isotopy extension theorems (\cite{ed-ki}, \cite{hir}) that $\mathcal{D}^r(M)$ is localizable.
\begin{prop} Let $\frag_G=\frag_G^{\mathcal{B}}$, where $\mathcal{B}$ is the family of all balls on $M$ (cf. Section 2). We have $\fragd_{G_c}=\Fragd_{G_c}$. \end{prop} \begin{proof} If $g\in G_c$ then $\Frag_{G_c}(g)\leq\frag_{G_c}(g)$, since any fragmentation of $g$ supported in balls is of the form from Def. 6.1(1). On the other hand, if $g=g_0g_1\ldots g_{\rho'}$ with $\rho'<\rho=\frag_{G_c}(g)$ is as in 6.1(1), then $g_0^{-1}g\in G_c$ and $\frag_{G_c}(g_0^{-1}g)\leq \rho'$. Thus $\fragd_{G_c}=\Fragd_{G_c}$. \end{proof}
For any $M$ as above a theorem of McDuff \cite{md} states that $\mathcal{D}^r(M)$ is perfect. We generalize it as follows.
\begin{thm} Let $M$ be an open $C^r$-manifold ($r=0,1,\ldots,\infty$) such that $M=\intt\bar M$, where $\bar M$ is a compact manifold. Suppose that $G\leq\mathcal{D}^r(M)$ satisfies Definitions 6.1, 6.4 and 6.7, and that $G_c$ is non-fixing. Then $[G,G]$ is perfect. \end{thm}
\begin{proof} In view of Def. 6.1, an arbitrary $f\in G$ can be written as $f=gh$, where $g\in G(P)$ and $h\in G_c$. Let $[f_1,f_2]\in [G,G]$ with $f_1=g_1h_1$ and $f_2=g_2h_2$ as above. Since $G$ is localizable, we have $[g_1,h_2], [g_2,h_1]\in [G_c,G_c]$. Due to Lemma 6.6, $G(P)$ is perfect; in particular $g_1,g_2\in[G,G]$. It follows from (3.1) that $[f_1,f_2]=\varphi[k_1,k_2][k'_1,k'_2][k''_1,k''_2]$, where $\varphi\in[[G,G],[G,G]]$ and $k_1,k_2,k'_1,k'_2,k''_1,k''_2\in G_c$. But by Theorem 1.2, $[G_c,G_c]$ is also perfect. It follows that $[G,G]$ is perfect too. \end{proof}
\begin{thm} Under the assumptions of Theorem 6.9, if $\cl_{G_c}$ and $\frag_G^{\mathcal{V}}$ are bounded, where $\mathcal{V}$ is an arbitrary open cover with $\mathcal{V}\prec\mathcal{B}$, then $[G,G]$ is uniformly perfect. \end{thm} \begin{proof} By Theorem 6.9, $[G,G]$ is perfect. In view of Proposition 3.2, $[G,G]$ is 1-non-fixing. Due to this fact and Lemma 3.3 we can find an open cover $\mathcal{U}$ such that $\mathcal{U}\prec\mathcal{B}$ and such that for each $U\in\mathcal{U}$ there are $h_1,h_2\in[G,G]$ with $U\cap [h_1,h_2](U)=\emptyset$. We denote \begin{equation*}G^{\mathcal{U}}=\prod\limits_{U\in \mathcal{U}^G}[G_U,G_U].\end{equation*} Here $\mathcal{U}^G:=\{g(U):\; g\in G \text{\; and \;} U\in\mathcal{U}\}$. Then also for each $U\in\mathcal{U}^G$ there are $h_1,h_2\in[G,G]$ with $U\cap [h_1,h_2](U)=\emptyset$.
Assume that $\mathcal{V}\prec\mathcal{U}$ and $\fragd^{\mathcal{V}}_G= \rho$. Let $[f_1,f_2]\in[G,G]$. As in the proof of Theorem 6.9 we have $$[f_1,f_2]=[g_1,g_2][h_1,h_2][h'_1,h'_2][h''_1,h''_2],$$ where $g_1,g_2\in G(P)$ and $h_1,\ldots, h''_2\in G_c$. By Lemma 6.6 and (3.1), $[g_1,g_2]$ is a product of four commutators of elements of $[G,G]$.
Next, any $[h_1,h_2]\in[G_c,G_c]$ can be expressed as a product of at most $\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[k_1,k_2]$, where $k_1,k_2\in G_U$ for some $U$. In fact, it is a consequence of (3.1) and the fact that $\mathcal{V}\prec\mathcal{U}$. Now if $\cld_{G_c}=d$, then every element of $[G_c,G_c]$ is a product of at most $d\rho^2$ elements of $G^{\mathcal{U}}$ of the form $[k_1,k_2]$, where $k_1,k_2\in G_U$ for some $U$.
Finally, fix arbitrarily $U\in \mathcal{U}^G$. We wish to show that for every $k_1,k_2\in G_U$ the bracket $[k_1,k_2]$ can be represented as a product of four commutators of elements of $[G,G]$.
By assumption on $\mathcal{U}^G$, there are $h_1,h_2\in [G,G]$ such that $h(U)\cap U=\emptyset$ for $h=[h_1,h_2]$. It follows that $[hk_1h^{-1}, k_2]=\id$. Therefore, $[[h,k_1],k_2]=[k_1,k_2]$. Observe that indeed $[[h,k_1],k_2]$ is a product of four commutators of elements of $[G,G]$. Thus any element of $[G,G]$ is a product of at most $4d(1+\rho^2)$ commutators of elements of $[G,G]$.
\end{proof} \begin{cor} Suppose that the assumptions of Theorem 6.9 are fulfilled and that $G$ is bounded. Then $[G,G]$ is uniformly perfect. \end{cor} In fact, $\cl_G$ is bounded in view of Proposition 1.4 in \cite{bip}, and $\frag_{G_c}$ is bounded in view of Proposition 6.8.
\begin{rem} By using Theorems 3.5 and 3.8, Lemma 6.6 and (3.1) we can obtain some estimates on $\cl_{[G,G]}$. \end{rem}
\section{Examples and open problems}
Let $M$ be a paracompact manifold, possibly with boundary, of class $C^{r}$, $r=0,1, \ldots,\infty$.
{\bf 1.} Let $M$ be a manifold with boundary, $\mathrm{dim}(M)=n\geqslant 2$. Then $G=\mathcal{D}^{r}_{c}(M)$, where $r=0,1,\dots,\infty$ with $r\neq n$ and $r\neq n+1$, is perfect (\cite{ry2}, \cite{ry}) and non-simple. Recently, Abe and Fukui \cite{af09}, using results of Tsuboi \cite{Tsu2} and their own methods, showed that $G$ is also uniformly perfect for many types of $M$. In the remaining cases, where we do not know whether $G$ is perfect or uniformly perfect, our results are of use.
{\bf 2.}
Let $N$ be a submanifold of $M$ of class $C^r$, {$r=0,1,\ldots ,\infty$}, and $\dim N\geq 1$.
It was proved in \cite{ry3} that $G_c$, where $G=\mathcal{D}^r(M,N)$ is the identity component of the group
of $C^r$-diffeomorphisms preserving $N$, is perfect.
The same was proved in the Lipschitz category in \cite{af}. All these groups are clearly non-simple.
It follows from \cite{af09} that $G_c$ is also uniformly
perfect for many types of pairs $(M,N)$. Several results of
the present paper give new information on the structure of $G$
and $G_c$.
{\bf 3.} Given a foliation $\mathcal{F}$ of dimension $k$ on a
manifold $M$, let $G=\mathcal{D}^{r}(M,\mathcal{F})$ be the identity
component of the group of all diffeomorphisms of class $C^r$ taking each leaf to
itself. Due to results of Rybicki \cite{ry1}, Fukui and Imanishi \cite{fi} and Tsuboi \cite{Tsu1},
the group $G_c$ is
perfect provided $r=0,1,\dots ,k$ or $r=\infty$.
It is very
likely that for large (but finite) $r$ the group $\mathcal{D}_c^{r}(M,\mathcal{F})$ is not
perfect (c.f. a discussion on this problem in \cite{le-ry}). It is
a highly non-trivial problem whether $G_c$ is uniformly
perfect. Several results of the present paper apply to $G_c$ or
$G$.
{\bf 4.} Let $\mathcal{F}$ be a foliation of dimension $k$ on the Lipschitz
manifold $M$ and let $G=\mathrm{Lip}(M,\mathcal{F})$ be the
group of all Lipschitz homeomorphisms taking each leaf of $\mathcal{F}$ to
itself. In view of results of Fukui and Imanishi \cite{fi1},
the group $G_c$ is
perfect. Further results may be concluded from our paper.
{\bf 5.}
Assume now that $\mathcal{F}$ is a singular foliation,
i.e. the dimensions of its leaves need not be equal (see \cite{st}). One can
consider the group of~leaf-preserving
diffeomorphisms of $\mathcal{F}$,
$G=\mathcal{D}^{\infty}(M,\mathcal{F})$. However, it is
hopeless to obtain any perfectness results for this group. On the other
hand,
Theorem 1.2 still works in this case and we know that the
commutator group $[G_c,G_c]$ is perfect. We do not know whether
$[G_c,G_c]$ is uniformly perfect.
{\bf 6.}
Let us recall the definition of a Jacobi manifold (see \cite{dlm}).
Let $M$ be a $C^{\infty}$ manifold, let $\frak{X}(M)$ be the
Lie algebra of the vector fields on $M$ and denote by
$C^{\infty}(M,\mathbb{R})$ the algebra of $C^{\infty}$
real-valued functions on $M$. A \emph{Jacobi structure} on $M$
is a pair $(\Lambda, E)$, where $\Lambda$ is a 2-vector field and
$E$ is a vector field on $M$ satisfying
$$[\Lambda, \Lambda]=2E \wedge \Lambda,\quad [E,\Lambda]=0.$$
Here, $[\, ,\,]$ is the Schouten-Nijenhuis bracket. The
manifold $M$ endowed with a Jacobi structure is called a \emph{Jacobi
manifold}. If $E=0$ then
$(M,\Lambda)$ is a Poisson manifold.
Observe that the notion of a Jacobi manifold also generalizes symplectic, locally conformally
symplectic and contact manifolds.
Now, let $(M,\Lambda,E)$ be a Jacobi manifold.
A diffeomorphism $f$ of $M$ is called a \emph{hamiltonian diffeomorphism}
if there exists a hamiltonian isotopy $f_t$, $t\in
[0,1]$, such that $f_0=\id$ and $f_1=f$. An isotopy $f_t$ is
\emph{hamiltonian} if the corresponding time-dependent vector
field $X_t=\dot{f}_t\circ f_{t}^{-1}$ is hamiltonian.
Let $G=\mathcal{H}(M,\Lambda,E)$ be
the compactly supported identity component of the group of all hamiltonian
diffeomorphisms of class $C^{\infty}$ of $(M,\Lambda,E)$. It
is not known whether $G$ is perfect,
even in the case of a regular Poisson manifold (\cite{ry4}). However, by
Theorem 1.2 the commutator group $[G,G]$ is
perfect. It is an interesting and difficult problem to determine when $[G,G]$ is uniformly perfect.
In the transitive cases, the compactly supported
identity components of the hamiltonian symplectomorphism group
and the contactomorphism group are simple (\cite{ban1}, \cite{ha-ry}, \cite{ry5}). In general, $G$ and $\tilde G$ are not uniformly perfect in the symplectic case; see \cite{bip}. An obstacle to the uniform simplicity of the first group is condition (2) in Theorem 4.3. On the other hand, the contactomorphism group satisfies this condition and it is likely that for some contact manifolds it is uniformly simple.
\end{document} |
\begin{document}
\draft \title{Optimizing Completely Positive Maps using Semidefinite Programming} \author{Koenraad Audenaert\cite{KAmail} and Bart De Moor\cite{BDMmail}} \address{Katholieke Universiteit Leuven, Dept. of Electrical Engineering (ESAT-SISTA) \\ Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium }
\maketitle \begin{abstract} Recently, a lot of attention has been devoted to finding physically realisable operations that realise as closely as possible certain desired transformations between quantum states, e.g.\ quantum cloning, teleportation, quantum gates, etc. Mathematically, this problem boils down to finding a completely positive trace-preserving (CPTP) linear map that maximizes the (mean) fidelity between the map itself and the desired transformation. In this note we want to draw attention to the fact that this problem belongs to the class of so-called semidefinite programming (SDP) problems. As SDP problems are convex, it immediately follows that they do not suffer from local optima. Furthermore, this implies that the numerical optimization of the CPTP map can, and should, be done using methods from the well-established SDP field, as these methods exploit convexity and are guaranteed to converge to the real solution. Finally, we show how the duality inherent to convex and SDP problems can be exploited to prove analytically the optimality of a proposed solution. We give an example of how to apply this proof method by proving the optimality of Hardy and Song's proposed solution for the universal qubit $\theta$-shifter (quant-ph/0102100). \end{abstract} \pacs{03.65.Bz, 03.67.-a, 89.70.+c}
The basic problem considered by a number of authors \cite{app1,app2,fiur} is: what physically realisable quantum operation comes closest to a given, but potentially unphysical, transformation between quantum states? The operation is most generally described by a linear map $\$$; the physical realisability requires that the map is completely positive and trace-preserving (CPTP). The desired transformation can be specified in a number of ways, for example by enumerating all possible input-output pairs of pure states $\{\ket{\text{in},k},\ket{\text{out},k}\}$. The dimensions of the input and output Hilbert spaces, ${\cal H}_{\text{in}}$ and ${\cal H}_{\text{out}}$, denoted $d_1$ and $d_2$, respectively, can in general be different. The symbol $k$ labels the different pairs and can either be discrete or continuous.
In the most commonly used formalism, the CPTP map $\$$ that is to implement the transformation is represented by an operator $X$ acting on the Hilbert space ${\cal H}_{\text{in}}\otimes{\cal H}_{\text{out}}$. The requirements of complete positivity and trace preservation result in the constraints \begin{eqnarray*} X&\ge&0 \\ \mathop{\rm Tr}\nolimits\mbox{}_{\text{out}} X &=& \openone_{\text{in}}. \end{eqnarray*}
The requirement that the map must implement the transformation as closely as possible can be quantified by the mean fidelity $F$: $$ F = \sum_k \bra{\text{out},k} \$(\ket{\text{in},k}\bra{\text{in},k})\ket{\text{out},k}. $$ The sum in this equation must be replaced by an integral with an appropriate measure for $k$ if $k$ is continuous. In terms of the operator $X$, the fidelity is given by $$ F = \mathop{\rm Tr}\nolimits XR, $$ with $$ R = \sum_k (\ket{\text{in},k}\bra{\text{in},k})^T \otimes \ket{\text{out},k}\bra{\text{out},k}. $$ The great virtue of this measure-of-goodness of the map is that the fidelity is linear in the operator $X$. In this way the problem has been formulated as an optimization problem: $$ \text{(P):} \left\{ \begin{array}{l} \text{maximize } \mathop{\rm Tr}\nolimits XR \\ X\ge 0 \\ \mathop{\rm Tr}\nolimits\mbox{}_{\text{out}} X = \openone_{\text{in}} \end{array} \right. $$
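To make the objects in problem (P) concrete, the following small NumPy sketch (our illustration, not taken from this paper) builds $R$ from three input/output pairs for the identity channel on a qubit, verifies the CPTP constraints on the corresponding operator $X$, and evaluates the fidelity $\mathop{\rm Tr} XR$; the choice of states and dimensions is for the example only.

```python
import numpy as np

# Illustration: the identity channel on a qubit, d1 = d2 = 2, with the
# desired transformation given by three input/output pairs
# |0>, |1>, |+>  (here out_k = in_k).
d1 = d2 = 2
kets = [np.array([1.0, 0.0]),
        np.array([0.0, 1.0]),
        np.array([1.0, 1.0]) / np.sqrt(2)]

# R = sum_k (|in_k><in_k|)^T  (x)  |out_k><out_k|
R = sum(np.kron(np.outer(v, v).T, np.outer(v, v)) for v in kets)

# Operator X of the identity channel: sum_{ij} |i><j| (x) |i><j|
I = np.eye(d1)
X = sum(np.kron(np.outer(I[i], I[j]), np.outer(I[i], I[j]))
        for i in range(d1) for j in range(d1))

# Complete positivity: X >= 0 (all eigenvalues nonnegative)
assert np.linalg.eigvalsh(X).min() > -1e-12

# Trace preservation: Tr_out X = identity on the input space
tr_out = X.reshape(d1, d2, d1, d2).trace(axis1=1, axis2=3)
assert np.allclose(tr_out, np.eye(d1))

# Mean fidelity F = Tr XR; the identity channel reproduces each
# pure pair exactly, so every term contributes 1 and F = 3.
F = np.trace(X @ R).real
assert abs(F - 3.0) < 1e-9
```

Solving (P) for a general $R$ would replace the fixed $X$ by a positive-semidefinite matrix variable handed to an SDP solver; the construction of $R$ and the two constraint checks stay exactly as above.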
In general, optimization problem (P) cannot be solved analytically and one must resort to numerical methods. Most authors try to solve (P) using ad hoc iteration schemes involving Lagrange multipliers. Using these schemes, various useful results have been obtained. However, in our view, the convergence properties of these schemes are questionable, as it has not been proved that the solution obtained is actually the global optimum. In fact, these methods reportedly get stuck now and then in suboptimal local optima \cite{perscomm}.
In this note we wish to draw attention to the fact that problem (P) belongs to a well-studied class of optimization problems called semidefinite programs (SDP). The importance of this fact cannot be overestimated. First of all, semidefinite programs are a subclass of the class of convex optimization problems, and convex problems have the very desirable property that a local optimum is automatically a global optimum. Keeping this in mind we see that the reported presence of local optima in the above iteration schemes is due to the scheme itself, and not to the problem being solved.
Secondly, very efficient numerical methods have been devised to solve SDPs, as these problems occur over and over again in various engineering disciplines, operations research, etc. These methods have very good convergence properties, and, moreover, they yield numerical intervals within which the solution must lie. Using a sufficient number of iterations, the width of this interval can be made arbitrarily small (apart from numerical errors and given the validity of some technical requirements). In other words: convergence to the real solution is almost always guaranteed. This is to be contrasted with ordinary methods, which typically yield one outcome only, so that it is difficult to know how far that value is from the true optimum, especially when the optimization problem has multiple local optima.
Thirdly, the way in which these numerical methods work can be exploited to prove analytically that a given proposed solution, e.g.\ an analytical Ansatz based on an educated guess and on the outcome of numerical experiments, is actually the correct solution.
In the rest of this section we will first discuss the basic mathematical facts of semidefinite programming and then apply them to the problem at hand. For a short introduction to the subject, we refer to \cite{lieven}, and for an in-depth treatment to \cite{nn}. Note that \cite{rains} presents another application of SDP to quantum mechanics, namely to finding bounds on the distillable entanglement of mixed bipartite quantum states.
The basic SDP problem is the minimization of a linear function of a real variable $x\in \mathbb{R}^m$, subject to a matrix inequality: $$ \begin{array}{l} \text{minimize } c^T x \\ F(x)=F_0+\sum_{i=1}^m x_i F_i\ge 0 \end{array} $$ where the $\ge$-sign means that $F(x)$ is positive semidefinite (hence the term SDP). The problem data are the vector $c\in \mathbb{R}^m$ and the $m+1$ real symmetric matrices $F_i$. Alternatively, the $F_i$ can also be complex Hermitean, but this is an atypical formulation within the SDP community (in engineering one typically deals with real quantities).
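A minimal numerical sketch of this primal form, on a toy instance of our own choosing ($m=1$, $2\times 2$ matrices): with $F_0=\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$ and $F_1=\openone$, the constraint $F_0+x_1F_1\ge 0$ holds iff $x_1\ge 1$, so the optimal value is $1$. Feasibility is tested via the smallest eigenvalue; a crude grid search stands in for a real SDP solver.

```python
import numpy as np

# Toy primal SDP (our own illustrative instance): minimize c^T x subject to
# F(x) = F0 + x1*F1 >= 0.  With F0 = [[0,1],[1,0]] and F1 = I, the constraint
# holds iff x1 >= 1, so the primal optimal value is p* = 1.
F0 = np.array([[0.0, 1.0], [1.0, 0.0]])
F1 = np.eye(2)
c = np.array([1.0])

def feasible(x1):
    # primal feasibility: F(x) positive semidefinite (smallest eigenvalue >= 0)
    return np.linalg.eigvalsh(F0 + x1 * F1).min() >= -1e-12

# crude grid search standing in for a real SDP solver
xs = np.linspace(-2.0, 4.0, 6001)
p_star = min(c[0] * x1 for x1 in xs if feasible(x1))
```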
This problem is called the {\em primal} problem. Vectors $x$ that satisfy the constraint $F(x)\ge 0$ are called {\em primal feasible points}, and if they satisfy $F(x)>0$ they are called {\em strictly feasible points}. The minimal objective value $c^T x$ is by convention denoted as $p^*$ (no complex conjugation!) and is called the {\em primal optimal value}.
Of paramount importance is the corresponding {\em dual} problem, associated to the primal one: $$ \begin{array}{l} \text{maximize } -\mathop{\rm Tr}\nolimits F_0 Z \\ Z\ge 0 \\ \mathop{\rm Tr}\nolimits F_i Z=c_i, \; i=1,\dots,m \end{array} $$ Here the variable is the real symmetric (or Hermitean) matrix $Z$, and the data $c,F_i$ are the same as in the primal problem. Correspondingly, matrices $Z$ satisfying the constraints are called {\em dual feasible} (or {\em strictly dual feasible} if $Z>0$). The maximal objective value $-\mathop{\rm Tr}\nolimits F_0 Z$, the {\em dual optimal value}, is denoted as $d^*$.
The objective value of a primal feasible point is an upper bound on $p^*$, and the objective value of a dual feasible point is a lower bound on $d^*$. The main reason why one is interested in the dual problem is that one can prove that, under relatively mild assumptions, $p^*=d^*$. This holds, for example, if either the primal problem or the dual problem is strictly feasible, i.e.\ there exist either strictly feasible primal points or strictly feasible dual points. If this or other conditions are not fulfilled, we still have that $d^*\le p^*$. Furthermore, when both the primal and dual problem are strictly feasible, one proves the following optimality condition on $x$: $x$ is optimal if and only if $x$ is primal feasible and there is a dual feasible $Z$ such that $ZF(x)=0$. This latter condition is called the {\em complementary slackness} condition.
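These facts can be checked numerically on a small instance of our own choosing (minimize $x_1$ subject to $F_0+x_1F_1\ge 0$ with $F_0=\left[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right]$, $F_1=\openone$, $c_1=1$): weak duality $d\le p$ holds for arbitrary feasible points, and the analytic optimal pair has zero duality gap and satisfies $ZF(x)=0$.

```python
import numpy as np

# Weak duality and complementary slackness on a toy instance (our own choice):
# primal: minimize x1  s.t. F(x) = F0 + x1*F1 >= 0
# dual:   maximize -Tr(F0 Z)  s.t. Z >= 0, Tr(F1 Z) = c1 = 1
F0 = np.array([[0.0, 1.0], [1.0, 0.0]])
F1 = np.eye(2)

def F(x1):
    return F0 + x1 * F1

# arbitrary feasible points: primal x1 = 2, dual Z = I/2
p = 1.0 * 2.0                      # primal objective c^T x
Z_feas = np.eye(2) / 2             # Z >= 0 and Tr Z = 1
d = -np.trace(F0 @ Z_feas)         # dual objective; weak duality gives d <= p

# optimal pair: x1* = 1 and Z* below; zero duality gap and Z* F(x*) = 0
x_opt = 1.0
Z_opt = np.array([[0.5, -0.5], [-0.5, 0.5]])
p_opt = 1.0 * x_opt
d_opt = -np.trace(F0 @ Z_opt)
```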
In one way or another, numerical methods for solving SDP problems always exploit the inequality $d\le d^*\le p^*\le p$, where $d$ and $p$ are the objective values for any dual feasible point and primal feasible point, respectively. The difference $p-d$ is called the duality gap, and the optimal value $p^*$ is always ``bracketed'' inside the interval $[d,p]$. These numerical methods try to minimize the duality gap by successively choosing better feasible points. Under the requirements of the above-mentioned theorem, the duality gap can be made arbitrarily small (as far as numerical precision allows). This is precisely the reason why one should be happy when an optimization problem turns out to be an SDP problem.
We now apply these generalities to our problem at hand. Problem (P) can immediately be rewritten as a (primal) SDP problem by noting that the set of Hermitean matrices forms a {\em real} vector space whose dimension is the square of the matrix dimension. Since we are dealing with matrices over the bipartite Hilbert space ${\cal H}_{\text{in}}\otimes{\cal H}_{\text{out}}$ it is convenient to choose the basis vectors of the matrix space accordingly. Let $\{\sigma^j\}$ and $\{\tau^k\}$ be orthogonal bases for the Hermitean matrices over ${\cal H}_{\text{in}}$ and ${\cal H}_{\text{out}}$, respectively; then $\{\sigma^j\otimes\tau^k\}$ forms an orthogonal basis for the Hermitean matrices over ${\cal H}_{\text{in}}\otimes{\cal H}_{\text{out}}$. Furthermore, choose the bases so that both $\sigma^0$ and $\tau^0$ are the identity matrix (of appropriate dimension) and all other $\sigma^j$ and $\tau^k$ are traceless Hermitean matrices. An obvious choice would be the set of Pauli matrices $\{\sigma^x,\sigma^y,\sigma^z\}$ or generalisations thereof to higher dimensions. We thus have the following parameterisation of the matrix $X$: $$ X = \sum_{j=0}^{d_1^2-1} \sum_{k=0}^{d_2^2-1} x_{jk}\sigma^j\otimes \tau^k. $$ With this parameterisation, the trace-preservation (TP) requirement can be expressed in a straightforward way. The condition $\mathop{\rm Tr}\nolimits\mbox{}_{\text{out}} X = \openone_{\text{in}} = \sigma^0$ is fulfilled if and only if $x_{j0}=0$ for all $j>0$, and $x_{00}=1/d_2$. By changing the parameterisation of $X$, this can be taken care of implicitly: $$ \begin{array}{rcl} X &=& \sum_{j=1}^{d_1^2-1} \sum_{k=1}^{d_2^2-1} x_{jk}\sigma^j\otimes \tau^k \\ && + \sum_{k=1}^{d_2^2-1} x_{0k}\sigma^0\otimes \tau^k \\ && + \openone/d_2. \end{array} $$ From this parameterisation, and the additional requirement $X\ge0$, it immediately follows that the matrices $F_i$ (in the SDP problem) are given by \begin{eqnarray*} F_0 &=& \openone/d_2 \\ F_{\text{``$i$''}} &=& \sigma^j\otimes \tau^k,\text{ with }k\neq 0. 
\end{eqnarray*} The index ``$i$'' in the left-hand side refers to the $i$ of the SDP problem, and corresponds to all possible pairs $(j,k)$ of right-hand side indices with $k\neq 0$. As a shorthand for summation over all these pairs we will use the symbol $\sum_{j,k}^*$.
Finally, we can assign values to the vector coefficients $c_i$ as follows. The fidelity $F$ is to be maximized, so we need an additional minus sign; furthermore, in terms of $x_{jk}$, $F$ equals $$ F=\sum_{j,k}^* x_{jk} \mathop{\rm Tr}\nolimits(\sigma^j\otimes\tau^k R) + 1/d_2, $$ where we have used the fact that $\mathop{\rm Tr}\nolimits R=1$. This yields for the coefficients $c_i$: $$ c_{\text{``$i$''}} = -\mathop{\rm Tr}\nolimits(\sigma^j\otimes\tau^k R), $$ and for the optimal fidelity, in terms of the primal optimal value: $$ F_{\text{opt}} = -p^* +1/d_2. $$
Using these expressions for the vector $c$ and the matrices $F_i$ (which are only dependent on the dimensions of the problem!), one can go about solving the problem (P) numerically. As some of the $F_i$ are complex, one has to use SDP software that explicitly allows complex entries (e.g.\ \cite{sturm}).
Using the above assignments, the dual problem can now be formulated in a rather nice way. The dual objective, to be maximized over all $Z\ge0$, is $$ d=-\mathop{\rm Tr}\nolimits F_0 Z = -\mathop{\rm Tr}\nolimits Z/d_2. $$ The constraint $\mathop{\rm Tr}\nolimits F_i Z=c_i$ gets an interesting form: $$ \mathop{\rm Tr}\nolimits(\sigma^j\otimes \tau^k(Z+R))=0,\text{ with }k\neq0. $$ As $Z$ and $R$ are both Hermitean, this means that the matrix $Z+R$ must be of the form $Z+R = a_0\openone+\sum_{j\neq0} a_j \sigma^j\otimes\openone$, or, in other words, $$ Z=a_0\openone+A\otimes\openone-R, $$ with $A$ a traceless Hermitean matrix. With this parameterisation for all dual feasible $Z$, the dual objective becomes $$ d=-d_1 a_0+1/d_2. $$ Maximizing $d$ thus amounts to minimizing $a_0$ over all traceless Hermitean matrices $A$ such that the resulting $Z$ is still positive semidefinite. From the parameterisation of $Z$ one sees that the smallest feasible value of $a_0$ for a fixed matrix $A$ is given by $$ a_0(A) = -\lambda_{\text{min}}(A\otimes \openone-R), $$ where $\lambda_{\text{min}}$ signifies the minimal eigenvalue of the matrix. The dual problem finally becomes: find the optimal traceless Hermitean matrix $A$ such that this $a_0(A)$ is minimal. The dual optimal value is then $$ d^* = -d_1 \min_A a_0(A)+1/d_2. $$ Note that we have significantly reduced the number of unknown parameters: from $(d_1d_2)^2$ for $Z$ to $d_1^2-1$ for $A$.
These expressions for the primal and dual problem can be used for proving that a certain proposed solution is optimal. To that purpose one needs to propose primal and dual feasible points $x$ and $A$; if the resulting primal and dual objective values $p$ and $d$ turn out to be equal to each other, then $x$ and $A$ are optimal feasible points and $p=d=p^*=d^*$. Alternatively, any feasible choice for $x$ and $A$ gives upper and lower bounds on the optimal value $p^*$, resulting in lower and upper bounds, respectively, for the fidelity of problem (P). For example, setting $A=0$ gives $a_0(A) = \lambda_{\text{max}}(R)$ resulting in the upper bound $F\le d_1\lambda_{\text{max}}(R)$, which was already derived in \cite{fiur}.
Using the method of the previous paragraph, one can test whether the feasible points are optimal or not, but it does not solve the problem of finding these points. As there is no hope for solving the primal and dual problems analytically for all but the simplest problems, one must resort to numerical methods. Luckily, efficient methods abound and some implementations are freely available on the web. From the numerical results one can then try to guess the analytical form of the solution, or at least try to propose an Ansatz containing a few unknown parameters. If the number of parameters is small they could be found by solving the primal and dual problem using the Ansatz.
Even this could be relatively complicated, especially for the dual problem, as this is an eigenvalue problem. An alternative for solving the dual problem is offered by the complementary slackness (CS) condition, which does not require solving an eigenvalue equation. Supposing that a correct guess has been made for $X$ of the primal problem, one then has to solve the linear equation $$ (a_0\openone+A\otimes\openone-R)X=0 $$ in the unknowns $a_0$ and $A$. Of course, one then still has to prove that the resulting $Z$ is dual feasible, i.e.\ is positive semidefinite, and this could still require solving an eigenvalue problem.
As an example of this proof technique, we now consider the problem of constructing an optimal qubit $\theta$-shifter, first considered by Hardy and Song \cite{hardy}, and prove that their ``quantum scheme'' shifter (see also \cite{fiur}) is optimal.
A qubit $\theta$-shifter is a device that transforms a pure state $\psi(\theta,\phi) = \cos(\theta/2)\ket{0}+\exp(i\phi)\sin(\theta/2)\ket{1}$ into another pure state $\psi(\theta+\alpha,\phi)$. This is a non-physical operation and has, therefore, to be approximated. Hardy and Song consider both a universal approximate shifter, with fidelity independent of $\theta$, and a shifter with $\theta$-dependent fidelity optimizing the {\em mean} fidelity. The mean fidelity of the non-universal shifter is better than for the universal one, but its mean fidelity has so far been proven {\em optimal} only for values of $\alpha$ equal to integer multiples of $\pi/2$ \cite{fiur}. We will now prove optimality for {\em all} values of $\alpha$.
The matrix $R$ for the shifter is given by $$ R=\left[ \begin{array}{cccc} r_1 & 0 & 0 & r_5 \\ 0 & r_2 & 0 & 0 \\ 0 & 0 & r_3 & 0 \\ r_5 & 0 & 0 & r_4 \end{array} \right], $$ with $$ \begin{array}{rcl} r_1&=&1/4+c-s \\ r_2&=&1/4-c+s \\ r_3&=&1/4-c-s \\ r_4&=&1/4+c+s \\ r_5&=&2c \end{array} \mbox{ and } \begin{array}{rcl} c&=& \frac{1}{12}\cos\alpha \\ s&=& \frac{\pi}{16}\sin\alpha. \end{array} $$
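The matrix above is easy to transcribe numerically. The sketch below (our own check, not part of the original derivation) builds $R(\alpha)$, so that one can verify $\mathop{\rm Tr}R=1$ and $R\ge 0$, and evaluates the $A=0$ dual bound $F\le d_1\lambda_{\text{max}}(R)$ mentioned earlier.

```python
import numpy as np

# Our own numerical check: build R(alpha) as given above, and evaluate the
# A = 0 dual bound F <= d1 * lambda_max(R).
def shifter_R(alpha):
    c = np.cos(alpha) / 12
    s = np.pi * np.sin(alpha) / 16
    r1, r2, r3, r4, r5 = 0.25 + c - s, 0.25 - c + s, 0.25 - c - s, 0.25 + c + s, 2 * c
    return np.array([[r1, 0, 0, r5],
                     [0, r2, 0, 0],
                     [0, 0, r3, 0],
                     [r5, 0, 0, r4]])

def fidelity_bound(alpha, d1=2):
    # upper bound on the mean fidelity obtained from the dual with A = 0
    return d1 * np.linalg.eigvalsh(shifter_R(alpha)).max()
```

At $\alpha=\pi/2$ this bound evaluates to $1/2+\pi/8$, which is exactly the optimal mean fidelity at that angle.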
The Ansatz for the primal feasible point is \cite{fiur} $$ X=\left[ \begin{array}{cccc} \cos^2\beta & 0 & 0 & \cos\beta \\ 0 & \sin^2\beta & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \cos\beta & 0 & 0 & 1 \end{array} \right]. $$ There appear to be two regimes, depending on the value of $\alpha$. For $\alpha\le\alpha_0=\arctan(8/(3\pi))$, put $\cos\beta=1$, and for $\alpha\ge\alpha_0$, $\cos\beta=c/(s-c)$. This gives as primal objective fidelity \begin{eqnarray*} F &=& (1+\cos\alpha)/2, \text{ for }\alpha\le\alpha_0 \\ F &=& 1/2+2s+2c^2/(s-c), \text{ for }\alpha\ge\alpha_0. \end{eqnarray*}
Going over to the dual problem, we now present our own Ansatz for the dual feasible point $A$, which was inspired by numerical results: consider diagonal $A$ only. This means that $A$ is parameterised by a single number, say $\delta$, and equals $A=\delta\sigma^z$. This gives for $Z$: $$ Z=\left[ \begin{array}{cccc} a_0+\delta-r_1 & 0 & 0 & -r_5 \\ 0 & a_0+\delta-r_2 & 0 & 0 \\ 0 & 0 & a_0-\delta-r_3 & 0 \\ -r_5 & 0 & 0 & a_0-\delta-r_4 \end{array} \right]. $$ To prove optimality of both Ansatzes, we use the complementary slackness condition (for finding the optimal value for $a_0$ and $\delta$). The CS condition $ZX=0$ gives rise to just three independent equations: \begin{eqnarray*} (a_0+\delta-r_1)\cos\beta-r_5&=&0 \\ (a_0+\delta-r_2)\sin^2\beta&=&0 \\ (a_0-\delta-r_4)-r_5\cos\beta&=&0 \end{eqnarray*} As could be expected, there are two different solutions: \begin{eqnarray*} a_0&=&1/4+3c \\ \delta&=&-s \\ \cos\beta&=&1 \end{eqnarray*} and \begin{eqnarray*} a_0&=&1/4+s+c^2/(s-c) \\ \delta &=& -cs/(s-c) \\ (r_2-r_1)\cos\beta&=&r_5. \end{eqnarray*} The third equation of each set shows us that the first solution pertains to the case $\alpha\le \alpha_0$ and the second solution to the other case. The first solution gives mean fidelity $$ F=2a_0=(1+\cos\alpha)/2, $$ and the second solution $$ F=1/2+2s+2c^2/(s-c). $$ These values are exactly the ones obtained in the primal problem, so this proves the optimality of our Ansatzes, provided $Z\ge0$ in both cases. It is a basic exercise in linear algebra to calculate the eigenvalues of $Z$ in both cases; noting that $0\le s\le c$ in the case $\alpha\le\alpha_0$, and $c\le s$ in the other case, one can indeed show that $Z$ is always positive semidefinite, proving its feasibility.
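The whole argument can be verified numerically (our own sanity check): for a sample $\alpha$ in each regime, the complementary-slackness solutions indeed give $Z\ge 0$, $ZX=0$, and a dual objective $2a_0$ equal to the primal fidelity. Note that solving the CS equations yields $\delta=-cs/(s-c)$ in the second regime, which is what the code uses.

```python
import numpy as np

# Our own numerical verification of the complementary-slackness solutions.
def rs(alpha):
    c = np.cos(alpha) / 12
    s = np.pi * np.sin(alpha) / 16
    return c, s, (0.25 + c - s, 0.25 - c + s, 0.25 - c - s, 0.25 + c + s, 2 * c)

def cs_solution(alpha):
    c, s, _ = rs(alpha)
    if alpha <= np.arctan(8 / (3 * np.pi)):          # first regime
        return 0.25 + 3 * c, -s, 1.0                 # a0, delta, cos(beta)
    # second regime; delta = -c*s/(s-c) solves the CS equations
    return 0.25 + s + c**2 / (s - c), -c * s / (s - c), c / (s - c)

def Z_and_X(alpha):
    c, s, (r1, r2, r3, r4, r5) = rs(alpha)
    a0, delta, cb = cs_solution(alpha)
    Z = np.diag([a0 + delta - r1, a0 + delta - r2,
                 a0 - delta - r3, a0 - delta - r4])
    Z[0, 3] = Z[3, 0] = -r5
    X = np.array([[cb**2, 0, 0, cb],
                  [0, 1 - cb**2, 0, 0],
                  [0, 0, 0, 0],
                  [cb, 0, 0, 1]])
    return Z, X

def primal_fidelity(alpha):
    c, s, _ = rs(alpha)
    if alpha <= np.arctan(8 / (3 * np.pi)):
        return (1 + np.cos(alpha)) / 2
    return 0.5 + 2 * s + 2 * c**2 / (s - c)
```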
To conclude, we have noted that the problem (P), which must be solved to find CPTP maps that optimally approximate certain desired transformations between quantum states, is a semidefinite programming (SDP) problem. From this observation, it follows that (P) can be efficiently solved using standard SDP software, and that there is no need for ad hoc solution methods, which could suffer from bad convergence properties. Furthermore, we presented a method for proving analytically that an Ansatz for the solution of (P) is optimal. We hope that the present work will be useful for those working in the field of determining optimal CP maps or optimal quantum measurements.
This work has been supported by the IUAP-P4-02 program of the Belgian state.
\end{document}
\begin{document}
\title{Upper-semicomputable sumtests for lower semicomputable semimeasures} \author{Bruno Bauwens\footnote{
LORIA, Universit\'e de Lorraine.
This research was supported by a specialisation grant from
the Flemish agency for innovation through science and technology (IWT-Vlaanderen).
I'm grateful for social and practical support from the SYSTMeS and neurology
departments of Ghent University by Georges Otte, Luc Boullart, Bart Wyns and Patrick Santens
in the period 2008--2010.
The research was finished with support from Universit\'e de Lorraine in November 2013.
} \footnote{
The results have appeared in my PhD thesis~\cite{BauwensPhd} (with cumbersome proofs).
I thank Sebastiaan Terwijn for sharing some proof attempts for the results below,
and Alexander (Sasha) Shen for the simplified proof of Theorem~\ref{th:u_hierarchy}
and for much interesting discussion about Kolmogorov complexity.
} }
\date{}
\maketitle
\begin{abstract}
A sumtest for a discrete semimeasure $P$ is a function $f$ mapping
bitstrings to non-negative rational numbers such that
\[
\sum P(x)f(x) \le 1 \,.
\]
Sumtests are the discrete analogue of Martin-L\"of tests.
The behavior of sumtests for computable $P$ seems well understood,
but for some applications lower semicomputable $P$ seem more appropriate.
In the case of tests for independence,
it is natural to consider upper semicomputable tests (see
[B.Bauwens and S.Terwijn, Theory of Computing Systems 48.2 (2011): 247-268]).
In this paper, we characterize upper semicomputable sumtests relative to any lower semicomputable semimeasure
using Kolmogorov complexity.
It is studied to what extent such tests are pathological: can
upper semicomputable sumtests for $m(x)$ be large?
It is shown that the logarithm of such tests does not exceed $\log |x| + O(\log^{(2)} |x|)$ (where $|x|$ denotes
the length of $x$ and $\log^{(2)} = \log\log$) and that this bound is tight, i.e.\
there is a test whose logarithm exceeds $\log |x| - O(\log^{(2)} |x|)$ infinitely often.
Finally, it is shown that for each such test $e$
the mutual information of a string with the Halting problem is at least $\log e(x)-O(1)$;
thus $e$ can only be large for ``exotic'' strings.
\end{abstract}
\begin{keywords} Kolmogorov complexity -- universal semimeasure -- sumtest \end{keywords}
\section{Introduction} \label{sec:introduction}
A (discrete) {\em semimeasure} $P$ is a function from strings to non-negative rational numbers such that $\sum P(x) \le 1$. A {\em sumtest} $f$ for a semimeasure $P$ is a function from bitstrings to non-negative reals such that \[
\sum_x P(x)f(x) \le 1\,. \] Sumtests provide a rough model for statistical significance testing for a hypothesis about the generation of data~\cite{LiVitanyi, statCourse, Young}. This hypothesis should be sufficiently specific so that it determines a unique semimeasure $P$ describing the generation process of observable data $x$ in a statistical experiment. The value $f(x)$ plays the role of significance in a statistical test. The condition implies that the $P$-probability of observing $x$ such that $f(x) \ge k$ is bounded by $1/k$. If $f(x)$ is high, it is concluded that either a rare event has occurred or the hypothesis is not consistent with the generation process of the data.
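The significance guarantee just described is Markov's inequality applied to the defining sum. A quick numerical illustration (our own, with a randomly generated distribution $P$ and test $f$ on a finite set of ``strings''):

```python
import numpy as np

# Toy illustration of the sumtest guarantee: if sum_x P(x) f(x) <= 1, then the
# P-probability of {x : f(x) >= k} is at most 1/k (Markov's inequality).
rng = np.random.default_rng(1)
n = 1000
P = rng.random(n)
P /= P.sum()                     # a probability distribution on n "strings"
f = rng.random(n) / P
f /= (P * f).sum()               # normalise so that sum_x P(x) f(x) = 1

tail = {k: P[f >= k].sum() for k in (2, 10, 100)}   # each value is <= 1/k
```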
For many hypotheses, such as the hypothesis that two observables are independent, not enough information is available to infer a unique probability distribution. In such a case, one might consider a set of semimeasures that are consistent with the hypothesis. We say that $m^{\mathcal H}$ is {\em universal} in a set of semimeasures if $m^{\mathcal H}$ is in the set and if for each $P$ in the set there is a constant $c$ such that $P \le c\cdot m^{\mathcal H}$. (If $P \le c\cdot Q$ for some $c$, we say that $Q$ {\em dominates} $P$.)
It might happen that the class of lower semicomputable $P$ consistent with a hypothesis has a universal element $m^{\mathcal H}(x)$ and that each $P$ in the class satisfies \[ P(x) \le O\left(2^{-\K(P)}\right)m^{\mathcal H}(x) \,, \] where $\K(P)$ is the minimal length of a program that enumerates $P$ from below. This happens for the hypothesis of independence of two strings, or of directed influences in time series (see~\cite[Proposition 2.2.6]{BauwensPhd}). In this case, if observed data results in a high value of a sumtest for $m^{\mathcal H}$, it can be concluded that either: \begin{itemize}
\item a rare event has occurred,
\item the hypothesis is not consistent with the
generation of the data,
\item the data was generated by a process that is only consistent with
semimeasures of high Kolmogorov complexity. \end{itemize}
Unfortunately, the use of approximations of universal tests seems not to be practical, and this interpretation is rather philosophical.
However, one can raise the question whether sequences of improving tests (for example general tests for independence) reported in literature tend to approach some ideal limit. This motivates the question whether there exist large sumtests in some computability classes, and whether they have universal elements.
Let $\mathcal{P}^\uparrow$ be the class of lower semicomputable semimeasures. In algorithmic information theory, it is well known that this class has a universal element~$m$ \cite{LiVitanyi}, and $m$ can be characterized in terms of prefix Kolmogorov complexity: $-\log m(x) = \K(x) + O(1)$ for all~$x$ (see~\cite{LiVitanyi} for more background on Kolmogorov complexity). For all computable $P$ it is also known that: \begin{itemize}
\item
no universal element exists in the class of computable tests,
\item
no upper semicomputable test exists that dominates all lower semicomputable ones,
\item
a universal sumtest in the class of lower semicomputable tests is
\[
m(x)/P(x) \,.
\] \end{itemize} In \cite{BauwensTerwijn}, these statements are studied for lower semicomputable $P$. In this case all of the results above become false: some semimeasures have a computable universal test (for example $f(x) = 1$ is a universal test for~$m$, even among the lower semicomputable tests), upper semicomputable tests can exceed lower semicomputable ones (every $P \in \mathcal{P}^\uparrow$ has an unbounded upper semicomputable test~\cite[Proposition 5.1]{BauwensTerwijn}, thus also~$m$), and for some $P$ no universal lower semicomputable test exists~\cite[Proposition 4.4]{BauwensTerwijn}.
Consider the hypothesis of independence for bivariate semimeasures. The corresponding class of semimeasures for this hypothesis is $P(x)Q(y)$ where $P$ and $Q$ are univariate semimeasures. The subset of lower semicomputable semimeasures has a universal element given by $m(x)m(y)$. A sumtest relative to this semimeasure can be called an independence test.
It is not hard to show that any lower semicomputable independence test is bounded by a constant\footnote{
	Suppose that a sumtest $f$ for $m(x)m(y)$ is unbounded.
	For each $k$ one can search for a pair $(x,y)$ such that $f(x,y) \ge 2^k$. For the first such pair
	that appears we have $-\log m(x) = \K(x) \le \K(x,y) \le \K(k) \le O(\log k)$ up to $O(1)$ terms,
	and similarly for $m(y)$. This implies $f(x,y) m(x)m(y) \ge 2^{k - O(\log k)}$,
	which contradicts the condition in the definition of a sumtest.
}. In~\cite{BauwensTerwijn} it is shown that upper semicomputable independence tests exist whose logarithms equal $n + O(\log n)$ for all $n$ and pairs $(x,y)$ of strings of length $n$. Moreover, there exists a generic upper semicomputable test $u_h$, defined for each computable $h$, and this test is increasing in $h$: if $h \le g$ then $u_h \le u_g$. The tests obtained in this way somehow ``cover'' all tests: every upper semicomputable test is dominated by $u_h$ for some computable $h$. Moreover, $u_h$ has a characterization in terms of (time bounded) Kolmogorov complexity, and for fixed $(x,y)$ and increasing $h$, the value $\log u_h(x,y)$ approaches algorithmic mutual information $I(x;y) = \K(x) + \K(y) - \K(x,y)$. Unfortunately, no universal upper semicomputable sumtest exists.
The first result of the paper is to define such a generic test $u_h$ for all lower semicomputable semimeasures in terms of Kolmogorov complexity. This generalizes the result mentioned above. The remaining goal of the paper is to investigate whether upper semicomputable tests are pathological; in particular, can such tests for the universal semimeasure $m$ be large? This would imply that these tests identify ``structure'' which cannot be identified by compression algorithms.
We show that there are sumtests for $m$
whose logarithm exceeds $\log |x| - O(\log\log |x|)$ infinitely often,
but that the logarithm of every such test is at most $\log |x| + O(\log\log |x|)$. This is small compared to the logarithm of the independence tests discussed above, which equals $n + O(\log n)$ for pairs $(x,y)$ of length $n$. We show that the logarithm of each test is bounded by the mutual information with the Halting problem $\mathbf{0'}$ given by $\K(x) - \KH(x)$, up to an additive constant (that depends on the test). Hence, strings $x$ for which such a test is large are unlikely to be produced in a statistical experiment. We also show that there is no universal upper semicomputable test for~$m$.
\section{A generic upper semicomputable sumtest} \label{sec:hierarchy}
Let $P$ be a lower semicomputable semimeasure. For every computable two-argument function $h$, we define an upper semicomputable sumtest $u_h$ for $P$. For increasing $h$ these sumtests are increasing, and we show that for each upper semicomputable sumtest $f$ there exists a computable $h$ such that $u_h$ exceeds $f$ up to a multiplicative constant.
In our construction we use $m(\cdot|\cdot)$,
which is a bivariate function that is universal for all lower semicomputable conditional semimeasures. Let $m_t(\cdot|\cdot)$ and $P_t(\cdot)$ represent approximations of $m(\cdot|\cdot)$ and $P(\cdot)$ from below.
For any computable $h$, let $$
u_{h}(x) = \inf_s \left\{\frac{m_{h(x,s)}(x|s)}{P_s(x)}\right\}. $$
\begin{theorem} \label{th:u_hierarchy}
$u_{h}$ is an upper semicomputable sumtest for $P$. For any upper semicomputable sumtest $e$ for~$P$,
	there exist a $c$ and a computable $h$ such that $e \le c \cdot u_{h}$. \end{theorem}
{\em Remark.} Let $t$ be a number. We can define a generic test $u_h$ using time bounded Kolmogorov complexity, given by \[
K^t\,(x) = \min \left\{ |p|:U(p) \text{ outputs $x$ and halts in at most $t$ steps} \right\}. \]
Indeed, we simply fix the approximation $m_t(x|s)$ to be $2^{-K^t(x|s)}$,
and by the conditional coding theorem, this defines a universal conditional semimeasure $m(\cdot|\cdot)$.
\begin{proof}\hspace{-1mm}\footnote{
This simplified proof was suggested by Alexander Shen.
}\hspace{1mm}
Clearly, $u_{h}$ is upper semicomputable, so we need to show it is a sumtest.
For each fixed $s$, the function $m_{h(x,s)}(x|s)/P_s(x)$ is a sumtest for $P_s$,
and $u_h(x)$ is not larger; thus for all $s$:
\[
\sum_x u_h(x)P_s(x) \le 1\,.
\]
This implies that the relation also holds in the limit, thus $u_h$ is a sumtest for~$P$.
For the second claim of the theorem, note that we can
choose an approximation $e_s$ (from above) of $e$ such that
$e_s$ has computably bounded support and $\sum_x P_s(x)e_s(x) \le 1$ for all~$s$.
By universality of $m(\cdot|\cdot)$, this implies that there exists a $c$ such
	that $P_s(x)e_s(x) \le c\cdot m(x|s)$, and this $c$ does not depend on $x$ and~$s$.
We can wait for the first stage in the approximation of $m(x|s)$ for which this equation becomes
true and let this stage define the function $h(x,s)$.
This implies $e_s(x) \le c\cdot m_{h(x,s)}(x|s)/P_s(x)$ for all $x$ and $s$, thus $e(x) \le c\cdot u_h(x)$.
$h(x,s)$ is defined for all $x$ and $s$, thus $h$ is computable. \end{proof}
Let $\KH(x)$ be the Kolmogorov complexity of $x$ on an optimal machine with an oracle for the Halting problem. \begin{corollary}\label{cor:high_u_muchHaltingInformation}
If $e$ is an upper semicomputable sumtest for a universal semimeasure, then
$\log e(x) \le \K(x) - \KH(x) + O(1)$.
(The constants implicit in the $O(\cdot)$ notation depend on $e$.) \end{corollary}
\begin{proof}
It suffices to show the corollary for $e = u_h$.
Let $m^\mathbf{0'}(x) = 2^{-\KH(x)}$.
We use~\cite[Theorem 2.1]{limitComplexitiesRevisited}, which states
\[
m^\mathbf{0'}(x) = \Theta \left( \liminf_t m(x|t) \right) \,.
\]
By definition of $u_h$:
\[
u_h \le \liminf_t \frac{m_{h(x,t)}(x|t)}{m_t(x)} \,.
\]
For all but finitely many $t$, the denominator exceeds $m(x)/2$, and
	$m_{h(x,t)}(x|t) \le m(x|t)$, where by the theorem mentioned above
	$\liminf_t m(x|t)$ is $O(m^\mathbf{0'}(x))$.
	By choice of $m^\mathbf{0'}(x)$, the latter equals $2^{-\KH(x)}$. Thus
	\[
	u_h(x) \le \liminf_t \frac{m(x|t)}{m(x)/2} \le O\left( \frac{m^\mathbf{0'}(x)}{m(x)}\right) = O\left(
	\frac{2^{-\KH(x)}}{2^{-\K(x)}} \right) \,.
	\qedhere
\] \end{proof}
\section{Upper bound for upper semicomputable tests for~$m$}
\begin{theorem}\label{th:upperbound}
If $e$ is an upper semicomputable sumtest for $m$, then
\[
e(x) \le O\left( |x|(\log |x|)^2 \right) \,.
\] \end{theorem}
\begin{proof}
By Theorem~\ref{th:u_hierarchy}, it suffices to show the theorem for $u_h$ for all computable $h$.
Note that $\sum_x 2^{-2|x|} = \sum_{n \ge 0} 2^{-n} = 2$, thus for each universal semimeasure~$m$
	there exists $c>0$ such that $m(x) > c2^{-2|x|}$.
	Assume that $m_t(\cdot)$ is an approximation from below of such an $m(\cdot)$ such that
	$m_1(x) \ge c2^{-2|x|}$.
The idea is as follows. We consider some times $t_1, t_2, \dots$
	at which $m(t)$ is large.
	(For any number $n$, let $m(n)$ be
	the universal semimeasure evaluated on the string of $n$ zeros.)
	This implies that
	$m_{h(x,t_i)}(x|t_i)$ is not much above $m_{t_{i+1}}(x)$ if $t_{i+1}$
	is sufficiently above $t_i$. From the definition of $u_h(x)$ with $t = t_i$
	it follows that if $u_h(x)$ is large, then $m_{t_{i+1}}(x)/m_{t_i}(x)$ must also be large.
	On the other hand, $m(x)/m_1(x)$ is bounded, thus this
	ratio can only be large for few $t_i$; yet
	our construction implies that for large $u_h(x)$ the ratio must be large
	for many $t_i$.
We show the following claim
\textit{
For all computable $h$ there exists a sequence of numbers $t_1, t_2, t_3, \dots$ and a constant $c > 0$
such that for all $i \ge |x|$ either
\[
\frac{m_{t_{i+1}}(x) }{m_{t_i}(x)} \ge 2
\quad\quad \text{or} \quad\quad
u_h(x) < 2ci(\log i)^2 \,.
\]
}
Let us first show how this implies the theorem.
The definition of a semimeasure implies $m(x) \le 1$, thus
$
\frac{m(x)}{m_1(x)} \le O\left(2^{2|x|}\right)
$ by assumption on $m_1(\cdot)$, and hence
\[
\frac{m_{t_2}(x)}{m_{t_1}(x)} \frac{m_{t_3}(x)}{m_{t_2}(x)}\dots
\frac{m_{t_{4|x|}}(x)}{m_{t_{4|x|-1}}(x)} \le \frac{m(x)}{m_1(x)} \le O\left(2^{2|x|}\right)\,.
\]
For large $x$, at most $2|x|+O(1) < 3|x|$ elements in the sequence $t_{|x|}, t_{|x| + 1},
	\dots, t_{4|x|-1}$ can satisfy the left condition. Thus, some element does not satisfy the condition
and hence
\[
u_h(x) < 2c\left(4|x|\right)\left(\log (4|x|)\right)^2\,.
\]
This implies the theorem.
We now construct a sequence $t_1, t_2, t_3,\dots$ satisfying the conditions of the claim.
This construction depends on a parameter $c$, which will be chosen later. Let $t_1 = 1$.
For $i \ge 1$, let $t_{i+1}$ be the first stage in the approximation of~$m(\cdot)$ such that
\begin{equation}\label{eq:construction_t}
\frac{m_{h(x,t_i)}(x|t_i)}{ i(\log i)^2} \le c\cdot m_{t_{i+1}}(x) \,,
\end{equation}
for all $x$ of length at most~$i$.
We first argue why, for an appropriate $c$, such a stage $t_{i+1}$ exists, i.e., why
\eqref{eq:construction_t} holds for all $x$.
Note that $t_i$ is computable from $i$ and $c$ (the sequence is enumerable uniformly in~$c$), and $1/(i(\log i)^2) \le O(m(i))$ because $\sum_i 1/(i(\log i)^2)$ converges;
thus $m(c)/(i(\log i)^2) \le O(m(i)\,m(c)) \le O(m(t_i))$.
On the other hand,
\[
m_{h(x,t_i)}(x|t_i)m(t_i) \le O(m(x))\,,
\]
thus for some $c'$ independent of $c$, $i$ and $x$:
\[
\frac{m_{h(x,t_i)}(x|t_i) \,m(c)}{ i(\log i)^2} \le c'\cdot m(x) \,,
\]
and~\eqref{eq:construction_t} is satisfied if $c \ge c'/m(c)$.
This relation holds if we choose $c$ to be a large power of two
(indeed $m(2^l) \ge \alpha/l^2$ for some $\alpha>0$, thus choose $l$ such that $2^l \ge c'l^2/\alpha$).
It remains to prove the claim.
Assume that the right condition fails for some $i$, and choose $t = t_i$ in the definition of $u_h$:
\begin{eqnarray*}
2ci(\log i)^2 \le u_h(x) &\le& \frac{m_{h(x,t_i)}(x|t_i)}{m_{t_{i}}(x)}
= \frac{m_{h(x,t_i)}(x|t_i)}{m_{t_{i+1}}(x)}\frac{m_{t_{i+1}}(x)}{m_{t_{i}}(x)} \\
& \le & ci (\log i)^2 \frac{m_{t_{i+1}}(x)}{m_{t_{i}}(x)}\,.
\end{eqnarray*}
This implies the left condition. \end{proof}
\section{Construction of large upper semicomputable tests}
For each pair $(f,g)$ of computable functions, an upper semicomputable function
$e_{f,g}$ is constructed. Afterwards, it is shown that $\sum_x m(x)e_{f,g}(x) \le O(1)$ for appropriate~$f$. Finally, we construct $g$ such that $e_{f,g}(x)$ equals $|x|/(\log |x|)^5$ for infinitely many $x$. Before constructing $e_{f,g}$, we prove a technical lemma.
\begin{lemma}\label{lem:t_k}
There exists a sequence of numbers $t_1,t_2,\dots$ such that for all $k$
\[
\sum_x \left\{m(x) : \frac{m(x)}{m_{t_k}(x)} \ge 2 \right\} \le 2^{-k+1}
\]
and such that for some computable function $f$ and all large $k$
\[
m_{f(t_k)}(t_k) \ge 2^{-k}/k^2 \,.
\] \end{lemma}
\begin{proof}
The proof of the lemma is closely related to the proof that strings for which the Kolmogorov complexity
of the Kolmogorov complexity, given the string, is high are rare~\cite{GacsNotes} (see~\cite{msoph} for more on
the technique).
The construction of $t_k$ uses the function $k_t$, which is in turn
defined using an approximation from below for the famous number $\Omega$ (see~\cite{Chaitin75}):
\[
\Omega = \sum_x m(x) \quad \quad \text{and} \quad \quad
\Omega_t = \sum_{|x| \le t} m_t(x)\,.
\]
For each $t$, let $k_t$ be the position of the leftmost bit in which the binary expansions of $\Omega_t$
and $\Omega_{t-1}$ differ. Note that $k_t$ tends to infinity as $t$ increases.
Let
\[
t_k = \max\left\{ t : k_t \le k \right\}\,,
\]
i.e. the largest $t$ for which there is a change in the first $k$ bits of $\Omega_t$.
Clearly,
\[
\sum_x m(x) - \sum_{|x|\le t_k} m_{t_k}(x) = \Omega - \Omega_{t_k} \le 2^{-k}\,,
\]
and this implies the first inequality of the lemma: if $m(x)/m_{t_k}(x) \ge 2$, then $m(x) - m_{t_k}(x) \ge m(x)/2$, so the $m$-mass of such $x$ is at most $2(\Omega - \Omega_{t_k}) \le 2^{-k+1}$.
For the second, we show that there exists a computable $f$ such that for all large $t$:
\[
m_{f(t)}(t) \ge 2^{-k_t}/k_t^2 \,.
\]
Indeed, given the first $k_t$ bits $y$ of $\Omega_t$ we can compute $t$
(by waiting until the first stage $s$ such that $y$ is a prefix of $\Omega_s$).
This implies $m(y) \le O(m(t))$.
Note that $2^{-|z|}/|z|^2 \le O(m(z))$ for all $z$, since $\sum_z 2^{-|z|}/|z|^2 = \sum_n 1/n^2$ converges;
thus
\[
2^{-k_t}/k_t^2 \le O(m(t))
\]
for large $t$.
$k_t$ is computable from $t$, so we can wait until the current approximation of $m(t)$
is large enough to satisfy the inequality. Let this stage be $f(t)$. Note that it
is defined for all large $t$ (and may be set arbitrarily for the remaining $t$),
and hence $f$ is computable and satisfies the inequality at the start of the paragraph. \end{proof}
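The mechanics of $k_t$ and $t_k$ can be illustrated with a toy enumeration; the increments below are an arbitrary, made-up stand-in for newly discovered $m$-mass, not the actual universal semimeasure:

```python
from fractions import Fraction

def first_changed_bit(a, b):
    # 1-indexed position of the first binary digit after the point
    # where the expansions of a and b (both in [0, 1)) differ
    j = 1
    while int(a * 2 ** j) == int(b * 2 ** j):
        j += 1
    return j

# toy increments standing in for newly discovered m-mass at stages 1..4
incs = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 8), Fraction(1, 16)]
omega = [Fraction(0)]
for d in incs:
    omega.append(omega[-1] + d)          # omega[t] plays the role of Omega_t

k = {t: first_changed_bit(omega[t - 1], omega[t]) for t in range(1, 5)}
t_k = {j: max(t for t in k if k[t] <= j) for j in range(1, 5)}

# after stage t_k the first k bits of Omega_t are frozen,
# so Omega - Omega_{t_k} <= 2^{-k}
```

Exact rational arithmetic avoids any floating-point ambiguity in reading off binary digits.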
Let $e_{f,g}(x)$ be equal to $|x|/(\log |x|)^5$ if for all $t$ either
\[ m_{f(t)}(t) \le \frac{6\log |x|}{|x|}
\quad\quad \text{or} \quad\quad
\frac{m_{g(x,t)}(x)}{m_t(x)} \ge 2\,, \] otherwise let $e_{f,g}(x) = 1$.
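The case analysis in this definition can be sketched as follows. The tables below are made-up finite stand-ins for $m_{f(t)}(t)$ and $m_{g(x,t)}(x)/m_t(x)$, and the quantification over all $t$ is truncated to a finite range, so this illustrates only the logic, not the semicomputation:

```python
import math

def e_value(x_len, ts, mf, ratio):
    # e_{f,g}(x) is large iff at every stage t either m_{f(t)}(t) is below
    # the threshold 6 log|x| / |x|, or the approximation of m(x) has doubled
    threshold = 6 * math.log2(x_len) / x_len
    if all(mf[t] <= threshold or ratio[t] >= 2 for t in ts):
        return x_len / math.log2(x_len) ** 5
    return 1

n = 2 ** 25                               # large enough that n/(log n)^5 > 1
mf = {1: 1e-7, 2: 0.2, 3: 1e-8}           # m_{f(t)}(t): large only at t = 2
doubles = {1: 1.0, 2: 2.5, 3: 1.0}        # m doubles at the critical stage
fails = {1: 1.0, 2: 1.5, 3: 1.0}          # m fails to double at t = 2
```

Only the stages where `mf[t]` exceeds the threshold constrain the ratio, exactly as in the definition above.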
\begin{proposition}\label{prop:e_is_sumtest}
For some $c > 0$, some computable $f$, and all computable $g$, the function $c \cdot e_{f,g}$
is an upper semicomputable sumtest for~$m$. \end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:e_is_sumtest}.]
$e_{f,g}$ is upper semicomputable. Indeed, the relation $|x|/(\log |x|)^5 \ge 1$ fails for at most finitely many $x$;
for all other $x$, let the test equal this value until we find a $t$ that
satisfies neither condition.
It remains to construct $f$ such that
\[
\sum_x e_{f,g}(x)m(x) \le O(1)\,.
\]
For $x$ of length $n$, let $k = \log n - 3\log\log n - 3$, and suppose that $t_k$ and $f$ satisfy the
conditions of Lemma~\ref{lem:t_k}.
One can calculate that $m_{f(t_k)}(t_k) \ge 2^{-k}/k^2 \ge (6\log n)/n$.
Thus, if $e_{f,g}(x) > 1$ then the right condition must hold at $t = t_k$ (the left condition fails there), so $m(x)/m_{t_k}(x) \ge 2$; by Lemma~\ref{lem:t_k}, the set of all such $x$ of length $n$ has measure at most
\[
2^{-k + 1} \le O\left( \frac{(\log n)^3}{n} \right)\,.
\]
We are now ready to bound the sum at the beginning of the proof.
It is sufficient to consider $x$ such that $e_{f,g}(x)>1$.
\[
\sum_x \left\{ e_{f,g}(x)m(x): e_{f,g}(x) > 1 \right\}
\le \sum_n \frac{n}{(\log n)^5}\sum_{|x|=n} \left\{m(x): e_{f,g}(x)>1 \right\} \,,
\]
by the above bound on the measure of the sets in the right sum, this is at most
\[
\le O(1) \sum_n \frac{n}{(\log n)^5} \frac{(\log n)^3}{n} \le O(1)\sum_n \frac{1}{(\log n)^2} \le O(1)\,.
\qedhere
\] \end{proof}
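The arithmetic behind the choice $k = \log n - 3\log\log n - 3$ in the proof above can be checked numerically (a sanity check only; logs are base~2, and rounding $k$ down to an integer only increases $2^{-k}$):

```python
import math

def k_large_enough(n):
    # with k = log n - 3 log log n - 3 (rounded down), 2^{-k} >= 8 (log n)^3 / n
    # and k <= log n, so 2^{-k}/k^2 >= 8 (log n)/n >= 6 (log n)/n
    log_n = math.log2(n)
    k = math.floor(log_n - 3 * math.log2(log_n) - 3)
    return 2.0 ** (-k) / k ** 2 >= 6 * log_n / n
```
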
\begin{theorem}\label{th:largeTest}
There exists an upper semicomputable sumtest for $m$ that exceeds
\[
\Omega\left(|x|/(\log |x|)^5\right)
\]
for some~$x$ of every large length. \end{theorem}
\begin{proof}
By Proposition~\ref{prop:e_is_sumtest} it suffices to construct
a computable $g$ and, for each large $n$, an $x$ of length $n$ such that $e_{f,g}(x) = |x|/(\log |x|)^5$.
Our construction works for any computable function~$f$.
Fix a large~$n$.
By construction of $e_{f,g}$, the function exceeds one on some $x$ of length $n$
only if $\log (1/m_t(x))$ decreases by at least $1$ each time $m_{f(t)}(t)$ is large.
Let $t_1, t_2, \dots$ be the set of all $t$ such that $m_{f(t)}(t) > 6(\log n)/n$. Clearly,
there are fewer than $n/(6\log n)$ such $t$ (since $\sum_t m_{f(t)}(t) \le \sum_t m(t) \le 1$), and the sequence can be enumerated uniformly in~$n$.
To construct $x$ we maintain a lexicographically ordered list that initially contains all strings
of length~$n$.
At stages $t_i$ we remove all $x$ from the list for which
\[
\log (1/m_{t_i}(x)) \le n - 6i\log n \,.
\]
Let $x$ be the first string in the list that is never removed.
Such an $x$ exists for large $n$, because in total fewer than $2^{n - 6\log n + 1}$ strings are removed.
By construction
\[
\log \left(1/m_{t_i}(x)\right) > n - 6i\log n\,.
\]
Now we argue why there is a computable $g$ such that
\begin{equation}\label{eq:gaolLarge}
\log \left(1/m_{g(x,t_i)}(x)\right) \le n - 6i\log n - 2\log n + O(1)
\end{equation}
for all $i$ such that $t_i$ exists; and by construction of $e_{f,g}$
this is sufficient for the theorem.
After stage $t_i$, we remove at most
\[
\exp (n - 6(i+1)\log n) + \exp(n - 6(i+2)\log n) + \ldots < \exp (n + 1 - 6(i+1)\log n)
\]
strings. Let $P(y|i,n)$ be the uniform distribution over the first $\exp (n + 1 - 6(i+1)\log n)$
strings that remain in the lexicographically ordered list.
Note that we have $i < n/(6\log n)$, thus this list is never empty, and hence contains $x$.
By universality there exists a $c$ such that for all strings $y$
\[
c \cdot m(y) \ge P(y|i,n)/(i^2n^2) \,.
\]
Now we construct the function $g(y,t)$. From $t$ and $n$ we can compute the largest $i$ such that
$t_i \le t$, if such an $i$ exists (for $t < t_1$ the value of $g(y,t)$ may be anything).
From $t_i$ and $i$ we can compute the lexicographically ordered list at stage $t_i$
and evaluate $P(z|i,n)$ for all $z$ in the list.
Then we wait for the first stage at which the inequality above, with $m$ replaced by its approximation,
is satisfied for all $z$ in the list. Let this stage be $g(y,t)$.
This implies
\[
c \cdot m_{g(x,t_i)}(x) \ge P(x|i,n)/(i^2n^2) \ge \exp \left(-n - 1 + 6(i+1)\log n\right)/n^4\,,
\]
and this implies~\eqref{eq:gaolLarge}. \end{proof}
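The counting in the construction of $x$ above (fewer than $2^{n-6\log n+1}$ removals among $2^n$ strings) can be verified with exact integer arithmetic; here $n$ is taken to be a power of two so that $\log n$ is an integer:

```python
def survivor_exists(n):
    # at stage t_i at most 2^(n - 6 i log n) strings are removed, since each
    # removed x carries mass at least 2^-(n - 6 i log n) and total mass is <= 1;
    # summing the geometric series keeps the total below 2^(n - 6 log n + 1) < 2^n
    L = n.bit_length() - 1                # log2(n) exactly, for n a power of two
    removed = sum(2 ** (n - 6 * i * L) for i in range(1, n // (6 * L) + 1))
    return removed < 2 ** (n - 6 * L + 1) < 2 ** n
```
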
The next corollary follows from the proof above.
\begin{corollary}\label{cor:nouniversal}
There exists no test that is universal in the set of upper semicomputable sumtests for~$m$. \end{corollary}
\begin{proof}
We show that for every test $e$ we can construct $f$, $g$ and infinitely many~$x$ such that
$e(x) \le O(1)$ and $e_{f,g}(x) = |x|/(\log |x|)^5$.
Recall the construction of $u_h$; by Theorem~\ref{th:u_hierarchy} there exists
a computable $h$ such that $e(x) \le O(u_h(x))$.
Let $m_t(\cdot)$ be an approximation of $m(\cdot)$ from below such that
$m_1(x) \ge \Omega\left(2^{-|x|}\right)$. Now, for every $n$,
we follow the construction of $x$ from the proof above with the following
modification: we do not start from a list of all $x$ of length $n$,
but from all $x$ of length $n$ such that
\[
m_{h(x,1)}(x|1) \le 2^{-n+1}\,.
\]
There are fewer than $2^{n-1}$ strings $x$ with $m_{h(x,1)}(x|1) > 2^{-n+1}$, thus the list
contains at least $2^n - 2^{n-1} = 2^{n-1}$ strings, and this is sufficient for the proof above.
Let $t=1$ in the definition of $u_h$. This implies
\[
u_h(x) \le \frac{m_{h(x,1)}(x|1)}{m_1(x)} \le \frac{2^{-n+1}}{\Omega\left( 2^{-n} \right)} \le O(1) \,,
\]
thus $e(x) \le O(1)$. On the other hand, we can follow the proof above to construct $g$ and $x$
such that $e_{f,g}(x) = n/(\log n)^5$.
(The function $f$ is still obtained from Lemma~\ref{lem:t_k}.
The function $g$, however, might be larger than in the proof above, and depends
on $h$: indeed, it equals the stage at which $m(x)$
increases sufficiently above $2^{-n+1}$, and this stage is related to $h(x,1)$.) \end{proof}
\end{document} |
\begin{document}
\title{A Trefftz Discontinuous Galerkin Method for Time Harmonic Waves with a Generalized Impedance Boundary Condition}
\author{Shelvean Kapita$^{\rm a}$$^{\ast}$\thanks{$^\ast$Corresponding author. Current address: Department of Mathematics, University of Georgia, 220 DW Brooks Dr, Athens, GA 30602. Email: [email protected]
}, Peter Monk$^{\rm b}$ and Virginia Selgas$^{\rm c}$\\
$^{a}${\em{Institute for Mathematics and its Applications, College of Science and Engineering, 306 Lind Hall, Minneapolis, MN USA 55455}}; $^{b}${\em{Department of Mathematical Sciences, University of Delaware, Newark DE 19716, USA}}; $^{c}${\em{Departamento de Matem\'{a}ticas, Universidad de Oviedo, EPIG, 33203 Gij\'{o}n, Spain}}}
\maketitle
\begin{abstract} We show how a Trefftz Discontinuous Galerkin (TDG) method for the displacement form of the Helmholtz equation can be used to approximate problems having a generalized impedance boundary condition (GIBC) involving surface derivatives of the solution. Such boundary conditions arise naturally when modeling scattering from a scatterer with a thin coating. The thin coating can then be approximated by a GIBC. GIBCs also arise as higher order absorbing boundary conditions, and this paper covers both cases. Because the TDG scheme has discontinuous elements, we propose to couple it to a surface discretization of the GIBC using continuous finite elements. We prove convergence of the resulting scheme and demonstrate it with two numerical examples.
\begin{keywords} Helmholtz equation, Trefftz, Discontinuous Galerkin, Generalized Impedance Boundary Condition, Error estimate, Artificial Boundary Condition \end{keywords}
\begin{classcode}\end{classcode} \end{abstract}
\section{Introduction} \pmr{The Trefftz method, in which a linear combination of simple solutions of the underlying partial differential equation on the whole solution domain is used to approximate the solution of the desired problem, dates back to the 1926 paper of Trefftz~\cite{Trefftz}. A historical discussion in relation to Ritz and Galerkin methods can be found in~\cite{Gander}. From our point of view, a key paper in this area is that of Cessenat and Despr\'es~\cite{cessenat03}, who analyzed the use of a local Trefftz space on a finite element grid to approximate the solution of the Helmholtz equation. This was later shown to be a special case of the Trefftz Discontinuous Galerkin (TDG) method~\cite{buf07,git07}, which opened the way for a more general error analysis. For more recent work in which boundary integral operators are used to construct the Trefftz space, see for example \cite{bar,hof}. The aforementioned work all concerns the standard pressure field formulation of acoustics, which results in a scalar Helmholtz equation. Indeed, TDG methods are well developed for the Helmholtz, Maxwell and Navier equations with standard boundary conditions, and a recent survey can be found in~\cite{tsurvey}. For the displacement form, a TDG method has been proposed by Gabard~\cite{gabard}, also using simple boundary conditions. }
\pmr{Because of the unusual boundary conditions considered in this paper, we propose to use the displacement form of the Trefftz Discontinuous Galerkin (TDG) method for approximating solutions of the Helmholtz equation governing scattering of an acoustic wave (or suitably polarized electromagnetic wave) by a bounded object. This is because the scatterer is assumed to be modeled by a Generalized Impedance Boundary condition (GIBC). These boundary conditions arise as approximate asymptotic models of thin coatings or gratings (\cite{EngquistNedelec,BCH,BCHprevious,Vernhet}). Importantly, they also arise as approximate absorbing boundary conditions (ABCs) and our paper shows how to handle these boundary conditions. As
far as we are aware, the displacement TDG method has not been analyzed to date. We provide such an analysis, and this is one of the
contributions of our paper.}
In order to define the problem under consideration more precisely, let $D\subset \mathbb{R}^2$ denote the region occupied by the scatterer. We assume that $D$ is an open bounded domain with connected complement having a smooth boundary $\Gamma =\partial D$. Then we can define $\nabla _{\Gamma} $ to be the surface gradient and $\nabla_{\Gamma}\cdot$ to be the surface divergence on $\Gamma$ (see for example \cite{ColtonKress}). In addition $\boldsymbol{\nu}$ denotes the outward unit normal on $\Gamma$.
Let $k\in\mathbb{R}$, $k\neq 0$, denote the wave number of the field, and suppose that a given incident field $u^i$ impinges on the scatterer. We want to approximate the scattered field $u\in H^1_{\rm{}loc}(\mathbb{R}^2\setminus\overline{D})$ that is the solution of \begin{equation}\label{eq-fwdprob} \begin{array}{rcl} \displaystyle\Delta u+k^2u&=&0 \quad \mbox{ in }\Omega=\mathbb{R}^2\setminus \overline{D} \, ,\\[2ex] \displaystyle\nabla_{\Gamma } \cdot \left(\beta \nabla_{\Gamma } u\right)+\frac{\partial u}{\partial \boldsymbol{ \nu } } +\lambda u &=&-g \quad \mbox{ on }\Gamma :=\partial D \, ,\\[2ex]
\displaystyle\lim _{r:=|\textbf{{x}}|\to\infty} \sqrt{r} \big( \frac{\partial u}{\partial r}-iku\big) &=& 0 \, . \end{array} \end{equation} The last equation, the Sommerfeld radiation condition (SRC), holds uniformly in $\hat{\textbf{{x}}}=\textbf{{x}}/r$. In addition \[ g=\nabla_{\Gamma } \cdot \left(\beta \nabla_{\Gamma } u^i \right)+\frac{\partial u^i}{\partial \boldsymbol{\nu} }+\lambda u^i \, , \] where $u^i$ is the given incident field and is assumed to be a smooth solution of the Helmholtz equation $\Delta u^i+k^2u^i=0$ in a neighborhood of $D$. For example, if $u^i$ is a plane wave then $u^{i}=\exp(ik\mathbf{x}\cdot\mathbf{d})$, where $\mathbf{d}$ is the direction of propagation of the plane wave and $\vert \mathbf{d}\vert=1$. Alternatively $u^i$ could be the field due to a point source in $\mathbb{R}^2\setminus \overline{D}$. The coefficient functions $\beta$ and $\lambda$ are used to model the thin coating on $\Gamma$ and we shall give details of the assumptions on these coefficients in the next section.
As can be seen from the second equation in (\ref{eq-fwdprob}), the GIBC involves a non-homogeneous second order partial differential equation on the boundary of the scatterer, and this complicates the implementation using a TDG method which uses discontinuous local solutions of the homogeneous equation element by element. In addition, because the problem is posed on an infinite domain we need to truncate the domain to apply the TDG method, and then apply a suitable artificial boundary condition (ABC) on the outer boundary. \pmr{Because TDG methods have discontinuous basis functions, when the GIBC or ABC involve derivatives, these boundary conditions} can be applied more easily if we convert them to displacement based equations, so we propose to solve (\ref{eq-fwdprob}) by converting it to a vector problem. To this end, we introduce $\textbf{{v}}=\nabla u$, in which case $\nabla\cdot\textbf{{v}}=\Delta u=-k^2u$. Using this relationship we see that $\textbf{{v}}$ should satisfy \begin{equation}\label{vbasic} \begin{array}{rcl} \nabla \nabla\cdot\textbf{{v}}+k^2\textbf{{v}}&=&0\mbox{ in }\mathbb{R}^2\setminus\overline{D}\, ,\\[2ex] \displaystyle\nabla_{\Gamma } \cdot \left(\beta \nabla_{\Gamma } (\nabla\cdot\textbf{{v}})\right)-{k^2}\textbf{{v}}\cdot\boldsymbol{\nu } +\lambda \nabla\cdot\textbf{{v}} &=&\displaystyle{k^2}g \quad \mbox{ on }\Gamma :=\partial D \, ,\\[2ex]
\displaystyle\lim _{r:=|\textbf{{x}}|\to\infty} \sqrt{r} \big( \nabla\cdot\textbf{{v}}-ik\textbf{{v}}\cdot\hat{\mathbf{x}}\big) &=& 0 \, , \end{array} \end{equation} where the radiation condition (last equation) holds uniformly in all directions $\hat{\mathbf{x}}:=\mathbf{x}/\vert\mathbf{x}\vert$.
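For the reader's convenience, we verify the boundary condition in (\ref{vbasic}): substituting $u = -k^{-2}\,\nabla\cdot\textbf{{v}}$ and $\partial u/\partial\boldsymbol{\nu} = \textbf{{v}}\cdot\boldsymbol{\nu}$ into the GIBC in (\ref{eq-fwdprob}) gives
\[
-\frac{1}{k^2}\,\nabla_{\Gamma}\cdot\left(\beta \nabla_{\Gamma}(\nabla\cdot\textbf{{v}})\right) + \textbf{{v}}\cdot\boldsymbol{\nu} - \frac{\lambda}{k^2}\,\nabla\cdot\textbf{{v}} = -g \quad \mbox{ on }\Gamma\,,
\]
and multiplying through by $-k^2$ yields the second equation of (\ref{vbasic}).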
The use of the displacement variable for the Helmholtz equation with standard boundary conditions in the context of plane wave methods was considered by Gabard in \cite{gabard}, but no error estimates were proved. In particular he used the PUFEM \cite{PUFEM} and DEM \cite{DEM} approaches, not TDG. The use of the displacement vector as the primary variable is often necessary in studies of fluid-structure interaction (see e.g. \cite{wang}). To date, no error estimates have been proved for the displacement based formulation with or without the GIBC. The vector formulation is useful in its own right. For example, using finite element methods, Brenner et al. \cite{brenner} show that a vector formulation can also be advantageous for sign changing materials, although we do not consider that problem here.
Our approach to discretizing (\ref{vbasic}) is to use TDG in a bounded subdomain of $\mathbb{R}^2\setminus \overline{D}$, and standard finite elements or trigonometric polynomial based methods to discretize the GIBC on the boundary. The domain is truncated using the Neumann-to-Dirichlet (NtD) map on an artificial boundary that is taken to be a circle. Other truncation conditions could be used. Since it is not the focus of the paper, we assume for simplicity that the NtD map is computed exactly. The discretization of the NtD map could be analyzed using the techniques from \cite{shelvean_phd,shelvean_paper}, and it is also possible to use an integral equation approach to approximate the NtD on a more general artificial boundary but this remains to be analyzed.
Our analysis of the discrete problem follows the pattern of the analysis of finite element methods for approximating the standard problem of scattering by an impenetrable scatterer using the Dirichlet-to-Neumann boundary condition from \cite{koyama}. We first show that the GIBC can be discretized leaving the displacement equation continuous. Then we show that this semi-discrete problem can also be discretized successfully. The analysis of the error in the TDG part of the problem is motivated by the analysis of TDG for Maxwell's equations in \cite{HMP_Maxwell} and uses the Helmholtz decomposition of the vector field satisfying (\ref{vbasic}) as a critical tool.
The contributions of this paper are 1) a first application and analysis of TDG to the displacement Helmholtz problem; 2) a method for incorporating a discretization of the GIBC into the TDG scheme using novel numerical fluxes from \cite{shelvean_phd}; 3) an error analysis of the fully discrete problem (except for the NtD map as described earlier), and the first numerical results for TDG applied to this problem.
In the remainder of the paper we use bold font to represent vector fields and we will work in $\mathbb{R}^2$. We utilize the usual gradient and divergence operators (both in the domain and on the boundary), and also a vector and scalar curl defined by \[ {\rm\bf{} curl}\;v=\left(\!\!\begin{array}{r} \frac{\partial v\,}{\partial x_2} \\[1ex] -\frac{\partial v\,}{\partial x_1} \end{array}\!\!\right) \quad \mbox{ and } \quad
{\rm curl}\;\textbf{{v}}=\frac{\partial v_2}{\partial x_1}-\frac{\partial v_1}{\partial x_2} \, , \] for any $v:\mathbb{R}^2\to\mathbb{C}$ and $\textbf{{v}}:\mathbb{R}^2\to\mathbb{C}^2$.
The paper proceeds as follows. In the next section we formulate problem (\ref{vbasic}) in a variational way and show it is well posed using the theory of Buffa~\cite{Buffa2005}. Then in Section \ref{semi} we describe and analyze the discretization of the GIBC using finite elements (or trigonometric basis functions). The fully discrete TDG scheme is described in Section \ref{trefftz} where we also prove a basic error estimate and show well-posedness of the fully discrete problem. We then prove convergence in a special mesh independent norm. In Section~\ref{num} we provide a preliminary numerical test of the algorithm, and in Section~\ref{concl} we draw some conclusions.
\section{Variational Formulation of the Displacement Method} In this section we give details of our assumptions on the coefficients in the GIBC, and formulate the displacement problem (\ref{vbasic}) in a variational setting suitable for analysis. Then we show that the problem is well-posed. The functions $\beta ,\lambda \in L^{\infty} (\Gamma )$ in (\ref{eq-fwdprob}) are complex valued and we assume that there exists a constant $c>0$ such that \begin{equation}\label{hyp-E!forwardprob} \Re(\beta)\geq c ,\, \Im(\beta)\leq 0 \mbox{ and } \Im(\lambda)\geq 0\quad\mbox{ a.e. on }\Gamma. \end{equation} Of key importance will be the operator $G_\Gamma:H^{-1}(\Gamma)\to H^{1}(\Gamma)$ defined weakly as the solution operator for the boundary condition on $\Gamma$ relating the Neumann and Dirichlet boundary data there. More precisely, for each $\eta\in H^{-1}(\Gamma)$ we define $G_{\Gamma}\eta\in H^1(\Gamma)$ to be the solution of \begin{equation} \int_{\Gamma } (\beta \,\nabla_{\Gamma } (G_{\Gamma\!}\eta) \cdot \nabla_{\Gamma } \overline{\xi }-\lambda \, G_{\Gamma\!}\eta \, \overline{\xi })\,dS = \int_{\Gamma } \! \eta \,\overline{\xi}\,dS \qquad \forall \xi\in H^1(\Gamma ) \, . \label{Gdef} \end{equation} An essential assumption is the following. \begin{assumption}\label{A} The only solution $u\in H^1(\Gamma)$ of \[ \displaystyle\nabla_{\Gamma } \cdot \left(\beta \nabla_{\Gamma } u\right) +\lambda u =0 \] is $u=0$. \end{assumption} We will show that Assumption \ref{A} together with the conditions (\ref{hyp-E!forwardprob}) ensure that the operator $G_{\Gamma}: H^{-1}(\Gamma )\to H^1(\Gamma )$ is well-defined.
\begin{remark} One possible condition under which Assumption~\ref{A} holds is the following: \[ \mbox{Either }\Im(\beta)\leq -c< 0 \mbox{ or }\Im(\lambda)\geq c>0\quad \mbox{ a.e. on a segment }\Lambda\subset \Gamma. \] \end{remark}
\begin{remark} On the one hand, the assumptions in (\ref{hyp-E!forwardprob}) concerning the imaginary parts of $\beta$ and $\lambda$ are governed by physics, since these quantities represent absorption when our model is deduced as an approximation of the Engquist-N\'ed\'elec condition modeling the diffraction of a time-harmonic electromagnetic wave by a perfectly conducting object covered by a thin dielectric layer (see \cite{EngquistNedelec}).
On the other hand, the hypothesis in (\ref{hyp-E!forwardprob}) on the real part of $\beta$ is technical and ensures ellipticity (see \cite[Th.2.1]{BCHprevious}); however, this property is fulfilled in the example of a medium with a thin coating (see \cite{EngquistNedelec}). It would also be possible to allow $\Re(\beta)\leq -c<0$ on $\Gamma$ as might be encountered modeling meta-materials, but a sign changing coefficient would require a more elaborate study.
The role of these properties will be clarified in Lemma \ref{lemma-Gok}. \end{remark}
The assumptions on the coefficients in (\ref{hyp-E!forwardprob}) together with Assumption~\ref{A} ensure that problem (\ref{eq-fwdprob}) has a unique weak solution $u$ in the space
$V = \{ v \in H^1_{loc} (\Omega ) \, ; \,\, v |_{\Gamma } \in H^1 (\Gamma ) \}$ (see later and \cite[Th.2.1]{BCHprevious}).
To solve (\ref{vbasic}) we first truncate the domain. \pmr{We wish to analyze the error introduced in approximating a scattering problem, concentrating on the discretization of the GIBC, so we truncate the domain using a simple analytic Neumann-to-Dirichlet map. Obviously other more general truncation approaches such as integral equations could be used. Indeed, in the numerical section, we shall consider a GIBC that arises from approximating the Neumann-to-Dirichlet map (a higher order ABC).}
Let $B_R$ denote the ball of radius $R$ centered at the origin and let $\Omega_R=B_R\setminus \overline{D}$ be our computational domain (i.e. the bounded domain that we will mesh for the UWVF) and $ \Sigma _R=\partial B_R$, where the radius $R$ is taken large enough to enclose $\overline{D}$ (see Fig.~\ref{cartoon} for a diagram illustrating the major geometric elements of the problem). The following Neumann-to-Dirichlet (NtD) map $N_R:H^{-1/2}(\Sigma_R)\to H^{1/2}(\Sigma_R)$ will provide the ABC on $\Sigma_R$. In particular let $v\in H^1_{\rm{}loc}(\mathbb{R}^2\setminus \overline{B_R})$ solve the exterior problem \begin{equation}\label{eq-NR} \begin{array}{rcll} \displaystyle\Delta v+k^2v&=&0 &\quad \mbox{ in }\mathbb{R}^2\setminus \overline{B_R} \, ,\\[2ex] \displaystyle\frac{\partial v}{\partial r}&=&f&\quad\mbox{ on }\Sigma_R\,,\\[2ex]
\displaystyle\lim _{r:=|\textbf{{x}}|\to\infty} \sqrt{r} \big( \frac{\partial v}{\partial r}-ikv\big) &=& 0 & \quad \mbox{ uniformly in direction }\hat{\mathbf{x}}=\mathbf{x}/\vert\mathbf{x}\vert \, , \end{array} \end{equation}
for given $f\in H^{-1/2}(\Sigma_R)$; then $N_R(f)=v|_{\Sigma_R}$. Let us recall that $N_R : H^{- 1/2} ( \Sigma _R ) \to H^{1/2} ( \Sigma _R )$ is an isomorphism since its inverse, the Dirichlet-to-Neumann
map, is also an isomorphism~\cite{ColtonKress}. Obviously, the solution of (\ref{eq-fwdprob}) satisfies $u|_{\Sigma_R}=N_R(\partial u/\partial r)$ and, in consequence, using the fact that $\nabla\cdot\textbf{{v}}=-k^2u$, \[
(\nabla\cdot\textbf{{v}} )|_{\Sigma _R} =-k^2N_R(\textbf{{v}}\cdot\boldsymbol{\nu}) \, , \] where we denote by $\boldsymbol{\nu}:=\mathbf{x}/R$ the outward unit normal on $\Sigma_R$.
In the same way, for the solution of (\ref{eq-fwdprob}) we have that \[
(\nabla\cdot\textbf{{v}} ) |_\Gamma=-k^2G_\Gamma(\textbf{{v}}\cdot\boldsymbol{\nu}+g) \, . \]
Now we can write down a weak form for the boundary value problem (\ref{vbasic}) in the usual way, multiplying the first equation in (\ref{vbasic}) by a test vector function $\mathbf{w}$ and integrating by parts: \begin{eqnarray*} 0&=& \int_{\Omega_R}(\nabla \nabla\cdot\textbf{{v}}+k^2\textbf{{v}})\cdot \overline{\mathbf{w}}\,d\textbf{{x}} \\ &=& \int_{\Omega_R}(-\nabla\cdot\textbf{{v}}\,\nabla\cdot\overline{\mathbf{w}}+k^2\textbf{{v}}\cdot \overline{\mathbf{w}})\,d\textbf{{x}}
+\int_{\Sigma_R}\nabla \cdot \textbf{{v}}\,\boldsymbol{\nu}\cdot\overline{\mathbf{w}}\,dS
-\int_{\Gamma}\nabla \cdot \textbf{{v}}\,\boldsymbol{\nu}\cdot\overline{\mathbf{w}}\,dS \, , \end{eqnarray*} where the minus sign in the last term is due to the normal field $\boldsymbol{\nu}$ pointing outward $D$. Using the NtD map $N_R$ and the boundary solution map $G_\Gamma$, the above equation can be rewritten as the problem of finding $\textbf{{v}}\in H({\rm{}div};\Omega_R)$ such that \begin{equation}\label{prob-FwdProbV} \begin{array}{rcl} &&\displaystyle\int_{\Omega_R}\! (\frac{1}{k^2}\,\nabla\cdot\textbf{{v}}\,\overline{\nabla\cdot\textbf{{w}}}-\textbf{{v}}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}} +\int_{\Sigma _R}\!\! N_R(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS\\[2ex] &&\displaystyle\qquad- \int_{\Gamma }\!\! G_{\Gamma \! }(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS=
\int_{\Gamma } \!\! G_{\Gamma }(g) \,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS \, , \end{array} \end{equation} for any $\textbf{{w}}\in H({\rm{}div};\Omega_R)$. It will be convenient to associate with the left hand side of (\ref{prob-FwdProbV}) the sesquilinear form $a: H({\rm{}div};\Omega_R) \times H({\rm{}div};\Omega_R) \to \mathbb{C}$ defined by \begin{equation}
a(\textbf{{v}},\mathbf{w}) =\displaystyle\int_{\Omega_R}\! (\frac{1}{k^2}\,\nabla\cdot\textbf{{v}}\,\overline{\nabla\cdot\textbf{{w}}}-\textbf{{v}}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}}
+\int_{\Sigma _R}\!\! N_R(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS- \int_{\Gamma }\!\! G_{\Gamma \! }(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS.\label{adef} \end{equation}
\begin{figure}
\caption{A cartoon showing the geometric features of the problem. The
bounded scatterer $D$ is covered by a thin coating giving rise to a
GIBC on $\Gamma$. An incident wave $u^i$ on this scatterer causes a scattered field $u$
in the exterior of $D$. The artificial boundary $\Sigma_R$ is introduced to truncate the domain resulting in a bounded computational domain $\Omega_R$ and is taken to be a circle for simplicity.}
\label{cartoon}
\end{figure}
In order to prove the well-posedness of this variational formulation, we now summarize some of the properties of the NtD map $N_R$ and GIBC boundary map $G_\Gamma$. For a given function $f\in H^{-1/2}(\Sigma_R)$ the NtD map $N_R:H^{-1/2}(\Sigma_R)\to H^{1/2}(\Sigma_R)$ is given by \begin{equation} (N_Rf)(\theta)=\sum_{n=-\infty}^{\infty}\gamma_n f_n\exp(in\theta) \, ,
\label{NRdef}
\end{equation} where $f_n =\frac{1}{2\pi R} \int_{\Sigma_R}f(R,\theta)\exp (-in\theta) \;d\theta $ are the Fourier coefficients of $f$ on $\Sigma_R$ and \[ \gamma_n=\frac{1}{k}\frac{H_n^{(1)}(kR)}{(H^{(1)}_n)'(kR)}
\, . \] According to \cite[page 97]{ColtonCakoni} there are constants $C_1>0$ and $C_2<\infty$ such that \[
\frac{C_1}{\sqrt{1+n^2}}\leq |\gamma_n|\leq\frac{C_2}{\sqrt{1+n^2}} \, , \] for all $n\in\mathbb{Z}$. Now define $\tilde{N}_R:H^{-1/2}(\Sigma_R)\to H^{1/2}(\Sigma_R)$ by \[ (\tilde{N}_Rf)(\theta)=-\sum_{n=-\infty}^{\infty}\frac{R}{\sqrt{1+n^2}}\, f_n\, \exp(in\theta) \, . \] Clearly $\tilde{N}_R$ is negative definite and \[
-\int_{\Sigma_R}(\tilde{N}_Rf)\, \overline{f} \,d\theta=\sum_{n=-\infty}^{\infty}\frac{2\pi R^2}{\sqrt{1+n^2}}\, |f_n|^2=2\pi R\, \Vert f\Vert_{H^{-1/2}(\Sigma_R)}^2\, . \] Also from \cite[page 97]{ColtonCakoni} we can obtain the asymptotic estimate \[ \gamma_n=\frac{R}{n}\left(1+O(\frac{1}{n}) \right) \quad\mbox{ when } n \to\infty\, , \] so that $\gamma_n-{R}/{\sqrt{1+n^2}}=O(1/n^2)$ when $ n \to\infty$. Hence, \begin{equation}\label{Nrsplit} N_R=\hat{N_R}+\tilde{N}_R\, , \end{equation} where
$\hat{N}_R:H^{-1/2}(\Sigma_R)\to H^{3/2}(\Sigma_R)$ is well-defined and bounded, in particular $\hat{N}_R:H^{-1/2}(\Sigma_R)\to H^{1/2}(\Sigma_R)$ is compact.
We next state some properties of the NtD map which follow from the properties of the better known DtN map. \begin{lemma}\label{lemmaNtDsign} For all $f\in H^{-1/2}(\Sigma_R)$, it holds $$\Re \left( \int_{\Sigma_R} N_R f\;\overline{f}\,dS\right) <0 \quad\mbox{and}\quad \Im\left( \int_{\Sigma_R} N_R f\;\overline{f}\,dS\right) \leq 0 \, .$$ \end{lemma} \begin{proof} The first inequality follows from \cite[Lemma 3.2]{koyama}, whereas the second is proved as follows: For any $f\in H^{-1/2}(\Sigma_R)$, we may write \begin{eqnarray*} \int_{\Sigma_R}N_R f\;\overline{f}\,dS
&=& 2\pi R\sum_{n=-\infty}^\infty |f_n|^2 \frac{H_n^{(1)}(kR)\overline{H_n^{(1)'}(kR)}}{k\, |H_n^{(1)'}(kR)|^2} \, ,\\ \end{eqnarray*} where, as above, $$f_n = \frac{1}{2\pi R}\int_{\Sigma_R}f(R,\theta)\exp(-in\theta)\;d\theta$$ are the Fourier coefficients of $f$ on $\Sigma_R$. Since $H_n^{(1)}(kR) = J_n(kR)+iY_n(kR)$, taking the imaginary part \begin{eqnarray*}
\Im \left( \int_{\Sigma_R} N_R f\;\overline{f}\,dS \right) &=& 2\pi R\sum_{n=-\infty}^\infty |f_n|^2 \, \frac{J'_n(kR) \, Y_n(kR)-J_n(kR)\, Y'_n(kR)}{k\, |H_n^{(1)'}(kR)|^2}\\
&=& -4\sum_{n=-\infty}^\infty\frac{|f_n|^2}{k^2\, |H_n^{(1)'}(kR)|^2}\, , \end{eqnarray*} by the Wronskian formula for Bessel functions (see e.g. \cite[9.1.16]{Stegun}). \end{proof} We note that the foregoing results yield a direct proof that $N_R$ is an isomorphism: the splitting (\ref{Nrsplit}) and Lemma \ref{lemmaNtDsign} allow us to apply the Fredholm alternative. \begin{corollary}\label{corNtDisomorph} The operator $N_R: H^{-1/2} (\Sigma _R) \to H^{1/2}(\Sigma _R)$ is an isomorphism. \end{corollary}
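In more detail, the proof of Corollary \ref{corNtDisomorph} can be sketched as follows: the exact norm identity above shows that $\tilde{N}_R$ is an isomorphism of $H^{-1/2}(\Sigma_R)$ onto $H^{1/2}(\Sigma_R)$, while $\hat{N}_R:H^{-1/2}(\Sigma_R)\to H^{1/2}(\Sigma_R)$ is compact, so $N_R=\tilde{N}_R+\hat{N}_R$ is a Fredholm operator of index zero. Injectivity follows from the proof of Lemma \ref{lemmaNtDsign}: if $N_Rf=0$, then \[ 0\,=\,\Im\left(\int_{\Sigma_R}N_Rf\,\overline{f}\,dS\right)\,=\,-4\sum_{n=-\infty}^{\infty}\frac{|f_n|^2}{k^2\, |H_n^{(1)'}(kR)|^2}\, , \] so that $f_n=0$ for all $n\in\mathbb{Z}$, that is, $f=0$.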
Next we show that $G_\Gamma$ is well defined. \begin{lemma}\label{lemma-Gok} Under Assumption \ref{A} and the conditions (\ref{hyp-E!forwardprob}), the operator $G_{\Gamma }\! : H^{-1}(\Gamma )\to H^{1}(\Gamma )$ defined in (\ref{Gdef}) is an isomorphism. In particular, $G_{\Gamma }\! : H^{-1}(\Gamma )\to H^1(\Gamma )$ is well-defined, linear and continuous. \end{lemma} \begin{proof}
We start the proof by defining the bounded sesquilinear forms $a_{\Gamma }, b_{\Gamma } : H^1(\Gamma )\times H^1(\Gamma ) \to\mathbb{C}$ by
\[
a _{\Gamma } (\xi _1,\xi _2) = \int_{\Gamma } \beta \,(\nabla_{\Gamma } \xi _1 \cdot \nabla_{\Gamma } \overline{\xi }_2+\xi _1 \, \overline{\xi }_2)\,dS
\quad\mbox{ and }\quad
b_{\Gamma } (\xi _1,\xi _2) = -\int_{\Gamma } (\lambda+\beta) \, \xi _1 \, \overline{\xi}_2 \,dS
\, , \] for any $ \xi_1,\xi_2\in H^1(\Gamma )$. Thanks to the Riesz representation theorem, we can consider the associated operators $A_{\Gamma }, B_{\Gamma }\!: H^1(\Gamma )\to H^{-1}(\Gamma )$ that satisfy
\[
(A_{\Gamma } \xi _1 , \xi _2 ) _{ H^{-1} (\Gamma )\times H^1 (\Gamma ) } = a _{\Gamma } (\xi _1,\xi _2)
\quad\mbox{ and }\quad
(B_{\Gamma } \xi _1 , \xi _2 ) _{ H^{-1} (\Gamma )\times H^1 (\Gamma ) } = b_{\Gamma } (\xi _1,\xi _2)
\, , \] for any $ \xi_1,\xi_2\in H^1(\Gamma )$. Notice that, under assumption (\ref{hyp-E!forwardprob}),
\[
\Re \left( a _{\Gamma } (\xi,\xi)\right) = \int_{\Gamma } \Re (\beta ) \, (|\nabla_{\Gamma } \xi |^2+| \xi |^2) \,dS \geq c \, \Vert \xi\Vert ^2 _{H^1(\Gamma) } \qquad \forall \xi \in H^1(\Gamma ) \, ,
\] and, in consequence, the Lax-Milgram theorem guarantees that $A_{\Gamma }\! : H^1(\Gamma )\to H^{-1}(\Gamma )$ is an isomorphism.
Also notice that, by Rellich's theorem we know that $H^1 (\Gamma )$ is compactly embedded into $L^2 (\Gamma )$, so that $B_{\Gamma }\! : H^1(\Gamma )\to H^{-1}(\Gamma )$ is compact.
Moreover, under Assumption \ref{A}, $A_{\Gamma }+B_{\Gamma }\! : H^1(\Gamma )\to H^{-1}(\Gamma )$ is injective. Therefore, by the Fredholm alternative, $A_{\Gamma }+B_{\Gamma }\! : H^1(\Gamma )\to H^{-1}(\Gamma )$ is an isomorphism. \end{proof}
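For later reference, notice that the two forms add up to the sesquilinear form underlying the definition (\ref{Gdef}) of $G_{\Gamma }$: \[ a _{\Gamma } (\xi _1,\xi _2) + b_{\Gamma } (\xi _1,\xi _2) \,=\, \int_{\Gamma } \big(\beta \,\nabla_{\Gamma } \xi _1 \cdot \nabla_{\Gamma } \overline{\xi }_2-\lambda\, \xi _1 \, \overline{\xi}_2\big) \,dS \qquad \forall\, \xi_1,\xi_2\in H^1(\Gamma )\, , \] so that $G_{\Gamma }\eta$ is characterized by $(A_{\Gamma }+B_{\Gamma })(G_{\Gamma }\eta )=\eta$; in other words, $G_{\Gamma }=(A_{\Gamma }+B_{\Gamma })^{-1}$, which is why the isomorphism property of $A_{\Gamma }+B_{\Gamma }$ proves the lemma.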
The next lemma shows that the impedance boundary condition does not cause a loss of uniqueness for the scattering problem. \begin{lemma}\label{lemmaGIBCsign} For any $\eta\in H^{-1}(\Gamma )$, it holds that $$\Im \left( ( G_{\Gamma}\eta , {\eta})_{H^{1}( \Gamma)\times H^{-1}(\Gamma )} \right)
\geq 0\, .$$ \end{lemma} \begin{proof} Using the variational definition of $G_{\Gamma}:H^{-1}(\Gamma)\rightarrow H ^{1} (\Gamma )$ for $\eta\in H^{-1}(\Gamma)$, and choosing the test function $\xi=G_{\Gamma}\eta\in H^1(\Gamma)$, gives
$$\int_{\Gamma}\left(\beta\,\left|\nabla_{\Gamma}G_{\Gamma} \eta\right|^2 - \lambda\,\left| G_{\Gamma}\eta\right|^2\right)\,dS = \int_{\Gamma}\eta\;\overline{G_{\Gamma}\eta}\,dS.$$ This implies
\begin{eqnarray*} \Im\left( \int_{\Gamma}G_{\Gamma}\eta \;\overline{\eta }\,dS\right) &=& -\Im \left(\int_{\Gamma}\eta \;\overline{G_{\Gamma}\eta}\,dS\right) \\
&=& \int_{\Gamma}\left(-\Im ( \beta) \left|\nabla_{\Gamma}G_{\Gamma} \eta\right|^2 + \Im ( \lambda)\left|G_{\Gamma}\eta \right|^2\right)\,dS \, \geq\, 0 \, . \end{eqnarray*} The last inequality follows from the assumptions $\Im (\beta) \leq 0$ and $\Im(\lambda) \geq 0$ in (\ref{hyp-E!forwardprob}). \end{proof}
We begin our analysis of (\ref{prob-FwdProbV}) by showing that its solution, if it exists, is unique.
\begin{lemma}\label{lemma-FwdProbVok}
Problem (\ref{prob-FwdProbV}) has at most one solution. \end{lemma} \begin{proof} Let us consider any solution of its homogeneous counterpart, that is, $\textbf{{v}}\in H({\rm{}div};\Omega_R)$ such that \begin{equation}\label{prob-FwdProbV_0} \int_{\Omega_R}\! (\frac{1}{k^2}\,\nabla\cdot\textbf{{v}}\,\overline{\nabla\cdot\textbf{{w}}}-\textbf{{v}}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}} +\int_{ \Sigma_R}\!\! N_R(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS - \int_{\Gamma } G_{\Gamma }(\textbf{{v}}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS=
0 \, , \end{equation} for all $\textbf{{w}}\in H({\rm{}div};\Omega_R)$.
Since $\Omega_R$ is connected, by the Helmholtz decomposition theorem (see \cite[Th.2.7-Ch.I]{GiraultRaviart}) we can rewrite $\textbf{{v}}$ as $$ \textbf{{v}} =\nabla u +\boldsymbol{\psi} \qquad\text{in } \Omega _R \, , $$ for some $u\in H^1(\Omega _R)$ and $\boldsymbol{\psi}\in H_0 (\mathrm{div}^0;\Omega _R)$, where $$ H_0 (\mathrm{div}^0;\Omega _R)=\Big\{ \textbf{{w}}\in H (\mathrm{div};\Omega _R)\, ;\,\, \nabla\cdot\textbf{{w}}=0\,\,\text{in }\Omega _R ,\, \textbf{{w}}\cdot\boldsymbol{\nu}=0\,\,\text{on } \partial\Omega _R =\Gamma\cup \Sigma _R \Big\}\, . $$ Then, the homogeneous problem (\ref{prob-FwdProbV_0}) may be rewritten as
\begin{equation}\label{prob-FwdProbV_0_bis} \int_{\Omega_R}\! \Big(\frac{1}{k^2}\,\Delta u\,\overline{\nabla\cdot\textbf{{w}}}-(\nabla u +\boldsymbol{\psi})\cdot\overline{\textbf{{w}}}\Big) d\textbf{{x}} +\int_{ \Sigma_R}\!\! N_R\Big(\frac{\partial u }{\partial\boldsymbol{\nu}}\Big)\,\overline{\textbf{{w}}\!\cdot\!\boldsymbol{\nu}}\,dS - \int_{\Gamma } G_{\Gamma \! }\Big(\frac{\partial u }{\partial\boldsymbol{\nu}}\Big)\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS=
0 \, , \end{equation} for all $\textbf{{w}}\in H({\rm{}div};\Omega_R)$. In particular, taking $\textbf{{w}}\in\mathcal{C}^{\infty}_0 (\Omega _R)^2$ we deduce that \begin{equation*}
\displaystyle \nabla (\frac{1}{k^2}\,\Delta u + u) + \boldsymbol{\psi} = \boldsymbol{0} \qquad \text{in }\Omega _R \, . \end{equation*} Noticing that $\displaystyle \nabla (\frac{1}{k^2}\,\Delta u + u) = -\boldsymbol{\psi} \in H_0 (\mathrm{div}^0;\Omega _R)$ leads to $$ \displaystyle \Delta (\frac{1}{k^2}\,\Delta u + u) = 0\,\mbox{ in } \Omega _R \qquad \mbox{ and }\qquad \displaystyle\frac{\partial \, }{\partial\boldsymbol{\nu}} (\frac{1}{k^2}\,\Delta u + u)=0 \,\mbox{ on } \partial\Omega_R=\Sigma _R\cup\Gamma \, . $$ Hence, by uniqueness of the solution (up to a constant) of the interior Neumann problem for Laplace operator in $\Omega_R$, we have that $\displaystyle\frac{1}{k^2} \Delta u + u=C_u $ in $\Omega _R$ for some constant $C_u\in \mathbb{C}$; in particular, $\boldsymbol{\psi} = -\displaystyle \nabla (\frac{1}{k^2}\,\Delta u + u) =\boldsymbol{0}$ in $\Omega _R$. Furthermore, $\tilde{u}=u-C_u\in H^1(\Omega_R)$ satisfies \begin{equation*}
\displaystyle \Delta \tilde{u} + k^2\,\tilde{u} = 0\qquad \text{in }\Omega _R \, , \end{equation*} so that $\frac{1}{k^2}\Delta u=-\tilde{u}$ in $\Omega_R$ and we deduce from (\ref{prob-FwdProbV_0_bis}) that \begin{equation*}
\begin{array}{cl}
\displaystyle -\tilde{u} + N_R\Big(\frac{\partial \tilde{u} }{\partial\boldsymbol{\nu}}\Big) = 0 \quad & \text{on } \Sigma_R \, ,\\[1.5ex]
\displaystyle \tilde{u}+ G_{\Gamma \! }\Big(\frac{\partial \tilde{u} }{\partial\boldsymbol{\nu}}\Big) = 0 \quad & \text{on }\Gamma \, .
\end{array} \end{equation*} In consequence, by the invertibility of $N_R$ and $G_{\Gamma}$ and the uniqueness of solution of the forward problem with GIBC (see \cite[Th.2.1]{BCHprevious}), we have that $\tilde{u}=0$ in $\Omega _R$; that is to say, ${u}=C_u$ in $\Omega _R$. Summing up, we conclude that $$ \textbf{{v}} =\nabla u + \boldsymbol{\psi} = \boldsymbol{0} \qquad\text{in } \Omega _R \, . $$
\end{proof}
Using this uniqueness result and a suitable stable splitting of $H({\rm{}div};\Omega_R)$, we will be able to apply \cite[Theorem 1.2]{Buffa2005} to prove the well-posedness of the continuous problem. In particular, we write \[ H({\rm{}div};\Omega_R) =H({\rm{}div}^0;\Omega_R )\oplus \nabla S \, , \] where $$ H (\mathrm{div}^0;\Omega _R)=\Big\{ \textbf{{w}}\in H (\mathrm{div};\Omega _R)\, ;\,\, \nabla\cdot\textbf{{w}}=0\,\,\text{in }\Omega _R \Big\}$$ and $$S=\Big\{p\in H_0^1(\Omega_R)\, ; \,\, \Delta p\in L^2(\Omega_R)\Big\} \, , $$ and $S$ is endowed with the inner product \[ (p,q)_S \, =\, \int_{\Omega_R} (\Delta p\, \Delta \overline{q}+\nabla p\cdot\nabla\overline{q}) \,d\mathbf{x}
\, . \] Notice that the orthogonality of the above splitting implies that $\mathbf{u}\in H({\rm{}div}^0;\Omega_R)$ if, and only if, $\mathbf{u}\in H({\rm{}div};\Omega_R)$ and $(\mathbf{u},\nabla q)_{L^2(\Omega_R)^2}=0$ for all $q\in S$. We also need to define the duality pairing $\llangle\cdot ,\cdot\rrangle$ between $H({\rm{}div};\Omega_R)$ and its dual space $H({\rm{}div};\Omega_R) '$, with respect to the pivot space $L^2(\Omega_R)^2$; note that it is defined without complex conjugation: \[ \llangle \mathbf{u},\textbf{{w}}\rrangle =\int_{\Omega_R} \mathbf{u}\cdot\textbf{{w}}\,d\mathbf{x} \quad \forall \mathbf{u}\in H({\rm{}div};\Omega_R)' , \, \textbf{{w}}\in H({\rm{}div};\Omega_R) \, . \] According to the above splitting, any $\mathbf{u}\in H({\rm{}div};\Omega_R)$ has the form $\mathbf{u}=\mathbf{u}_0+\nabla p$ for some $\mathbf{u}_0\in H({\rm{}div}^0;\Omega_R)$ and $p\in S$. By the orthogonality of the splitting, and the fact that $\nabla\cdot\mathbf{u}_0=0$, we have that \[ \Vert \nabla p\Vert^2_{H({\rm{}div};\Omega_R)}+ \Vert \mathbf{u}_0\Vert^2_{L^2(\Omega_R )^2} = \Vert \mathbf{u}\Vert^2_{H({\rm{}div};\Omega_R)}\] and, in particular, the splitting is stable. Moreover, it allows us to define the linear continuous operator $\theta:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)$ by $\theta\mathbf{u}=\nabla p-\mathbf{u}_0$.
Next we define $A:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)'$ such that if $\mathbf{u}\in H({\rm{}div};\Omega_R)$ then $A\mathbf{u}\in H({\rm{}div};\Omega_R)'$ is given via the Riesz representation theorem by \[ \llangle A\mathbf{u},\overline{\textbf{{w}}}\rrangle=a(\mathbf{u},\textbf{{w}})\quad \mbox{ for all }\textbf{{w}} \in H({\rm{}div};\Omega_R)\, , \] where $a(\cdot,\cdot)$ is the sesquilinear form defined in (\ref{adef}).
We can now state and prove the following result. \begin{theorem}\label{th-FwdProbVok} Problem (\ref{prob-FwdProbV}) is well-posed and the Babu\v{s}ka-Brezzi condition is satisfied. \end{theorem}
\begin{proof} Let $\mathbf{u}\in H({\rm{}div};\Omega_R)$ be split into $\mathbf{u}=\mathbf{u}_0+\nabla p$ for some $\mathbf{u}_0\in H({\rm{}div}^0;\Omega_R )$ and $p\in S$, and similarly $\textbf{{w}}=\textbf{{w}}_0+\nabla q \in H({\rm{}div};\Omega_R)$. Then \begin{equation}\label{aexpand} \begin{array}{rcl} a(\mathbf{u},\theta\textbf{{w}})&=&\displaystyle \int_{\Omega_R}\left(\frac{1}{k^2} \Delta p\cdot\Delta \overline{q}+\nabla p\cdot\nabla \overline{q}+\mathbf{u}_0\cdot\overline{\textbf{{w}}_0}\right)\,d\textbf{{x}}\\ &&\quad +\displaystyle\int_{\Sigma_R}N_R((\mathbf{u}_0+\nabla p)\cdot\boldsymbol{\nu}) (\overline{\nabla q-\textbf{{w}}_0})\cdot\boldsymbol{\nu}\,dS\\&& - \int_{\Gamma}G_\Gamma ((\mathbf{u}_0+\nabla p)\cdot\boldsymbol{\nu}) (\overline{\nabla q-\textbf{{w}}_0})\cdot\boldsymbol{\nu}\,dS \\ && \quad -2\,\displaystyle \int_{\Omega_R}\nabla p\cdot\nabla \overline{q}\,d\textbf{{x}} \, . \end{array} \end{equation} We expand the troublesome term \begin{eqnarray*}
\lefteqn{\int_{\Sigma_R} N_R ((\mathbf{u}_0+\nabla p)\cdot\boldsymbol{\nu})\,(\overline{\nabla q-\textbf{{w}}_0})\cdot\boldsymbol{\nu}\,dS}\\
&=&
-\int_{\Sigma_R} N_R (\mathbf{u}_0\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}_0}\cdot\boldsymbol{\nu}\,dS +
\int_{\Sigma_R} N_R(\mathbf{u}_0\cdot\boldsymbol{\nu})\,\nabla \overline{q}\cdot\boldsymbol{\nu}\,dS \\ &&\qquad-
\int_{\Sigma_R} N_R(\nabla p\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}_0}\cdot\boldsymbol{\nu}\,dS+
\int_{\Sigma_R} N_R(\nabla p\cdot\boldsymbol{\nu})\,\nabla \overline{q}\cdot\boldsymbol{\nu}\,dS \, . \end{eqnarray*}
So we can define the sesquilinear form \[ a_+(\mathbf{u},\theta\textbf{{w}})= \int_{\Omega_R}\left(\frac{1}{k^2}\,\Delta p\,\Delta \overline{q} + \nabla p\cdot\nabla \overline{q} + \mathbf{u}_0\cdot\overline{\textbf{{w}}_0}\right)\,d\textbf{{x}} - \int_{\Sigma_R} N _R (\mathbf{u}_0\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}_0}\cdot\boldsymbol{\nu}\,dS \, , \] and use the remaining terms in (\ref{aexpand}) to define the sesquilinear form \begin{eqnarray*} b(\mathbf{u},\theta\textbf{{w}})&=&
\int_{\Sigma_R}N_R(\mathbf{u}_0\cdot\boldsymbol{\nu})\,\nabla \overline{q}\cdot\boldsymbol{\nu}\,dS -
\int_{\Sigma_R} N_R(\nabla p\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}_0}\cdot\boldsymbol{\nu}\,dS +
\int_{\Sigma_R} N_R(\nabla p\cdot\boldsymbol{\nu})\,\nabla \overline{q}\cdot\boldsymbol{\nu}\,dS
\\&&\quad - \int_{\Gamma } G_\Gamma((\mathbf{u}_0+\nabla p)\cdot\boldsymbol{\nu})\,(\overline{\nabla q - \textbf{{w}}_0})\cdot\boldsymbol{\nu}\,dS-2\, \int_{\Omega_R}\nabla p\cdot\nabla \overline{q}\,d\textbf{{x}} \, . \end{eqnarray*} First, since $\Re( N _R )$ is negative definite (see Lemma \ref{lemmaNtDsign}) and the splitting of the space is stable, we have that there is a constant $\alpha>0$ independent of $\mathbf{u}\in H({\rm{}div};\Omega_R ) $ such that \[ \Re(a_+(\mathbf{u},\theta\mathbf{u}))\, \geq\, \alpha\, \Vert\mathbf{u} \Vert_{H({\rm{}div};\Omega_R)}^2 \, . \] Now define the operator $T:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)'$ by \[ \llangle T\mathbf{u},\overline{\textbf{{w}}} \rrangle =-b(\mathbf{u},\textbf{{w}} ) \qquad \forall \mathbf{u},\textbf{{w}}\in H({\rm{}div};\Omega_R) \, . \] Notice that $T$ is compact because each sesquilinear form in its definition defines a compact operator. For example, the sesquilinear form
\[
\int_{\Sigma_R} N_R (\mathbf{u}_0\cdot\boldsymbol{\nu})\,\nabla \overline{q}\cdot\boldsymbol{\nu}\,dS \] is compact by \cite[Theorem 1.3]{Kress77}, because the normal trace operator from $S$ into $H^{1/2}(\Sigma_R)$ is compact; indeed, $S$ is a subset of $H^2(\Omega_R )$ due to our assumption of a smooth boundary $\partial \Omega_R = \Gamma \cup \Sigma _R$, and the normal derivative operator is compact from $H^2(\Omega_R )$ into $H^{1/2}(\Sigma_R)$. The remaining sesquilinear forms are also compact by the same reasoning. Hence $T$ is compact. Then we conclude that \[ \Re\left( \llangle (A+T)\mathbf{u},\theta\overline{\mathbf{u}}\rrangle \right)\, =\, \Re\left( a_+(\mathbf{u},\theta\mathbf{u})\right) \geq \alpha \Vert \mathbf{u}\Vert_{H({\rm{}div};\Omega_R)}^2\, . \] Hence all the conditions of \cite[Assumption 1]{Buffa2005} are satisfied and the existence of a unique solution to (\ref{prob-FwdProbV}) is shown by \cite[Theorem 2.1]{Buffa2005}. In addition this theorem shows that there is an isomorphism $\tilde{\theta}:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)$ such that \[ \vert \llangle A\mathbf{u},\tilde{\theta}\overline{\mathbf{u}}\rrangle \vert \, \geq\,
\alpha\, \Vert \mathbf{u}\Vert_{H({\rm{}div};\Omega_R)}^2 \, . \] This in turn implies that the Babu\v{s}ka-Brezzi condition is satisfied. \end{proof}
\section{A Semidiscrete Problem}\label{semi} In this section we consider a semidiscrete problem in which the GIBC boundary operator is discretized but the space where we search for the solution in $\Omega_R$ is not. As discussed in the introduction, we shall not consider the truncation of the NtD map here.
We shall need an additional assumption on the boundary operator $G_\Gamma$. In particular we need to know that it smooths the solution on the boundary $\Gamma$, so we make the following second assumption. \begin{assumption}\label{G} For each $-1 \leq s \leq -1/2$, it holds that $G_\Gamma ( H^{s}(\Gamma)) \subseteq H^{s+2}(\Gamma)$ and there exists $C_s^{\Gamma}>0$ such that $\Vert G_\Gamma\lambda\Vert_{H^{s+2}(\Gamma)}\leq C_s^{\Gamma} \, \Vert \lambda\Vert_{H^{s}(\Gamma)}$ for any $\lambda\in H^s(\Gamma )$.
\end{assumption}
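Although we do not pursue a characterization of the admissible coefficients here, Assumption \ref{G} is natural for smooth $\beta$ and $\lambda$: formally, $\xi =G_\Gamma \eta$ solves the elliptic equation on the closed curve $\Gamma$ \[ -\nabla_{\Gamma }\cdot (\beta \,\nabla_{\Gamma }\xi )-\lambda\, \xi \,=\, \eta \qquad \text{on }\Gamma \, , \] so, when $\beta$ and $\lambda$ are smooth, standard elliptic regularity on $\Gamma$ (together with the invertibility provided by Lemma \ref{lemma-Gok}) suggests precisely the two-order gain of smoothness required in Assumption \ref{G}.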
\begin{remark} Note that if Assumption \ref{G} holds, since $G^*_\Gamma\lambda=\overline{G_{\Gamma}\lambda}$, it also holds for $G_\Gamma^*$. \end{remark} Notice that Assumption \ref{G} further constrains the choice of the coefficients $\beta$ and $\lambda$ in the generalized impedance boundary condition on $\Gamma$. Its role will be clarified in Lemma \ref{lemma-GH}, where we apply Schatz's analysis \cite{Schatz} in order to show that the finite element approximation of $G_\Gamma$, defined shortly, converges.
On the inner boundary $\Gamma $ we consider a finite dimensional subspace $S_H\subset H^1(\Gamma )$ of continuous piecewise polynomials of degree at least $P$ (with $P\geq 1$) on a mesh ${\cal
T}_H^\Gamma$. We assume that the mesh ${\cal T}_H^\Gamma$ consists of segments of the boundary $\Gamma$ of maximum length $H>0$, and that it is regular and quasi-uniform: the latter means that there exists a constant $\sigma^\Gamma \in [1,\infty)$ such that \[ \frac{H}{H_e}\leq \sigma^\Gamma\quad \mbox{ for all edges }e\in\mathcal{T}^{\Gamma}_H \mbox{ and all }H>0 \, , \] where $H_e$ denotes the arc length of the edge $e$ in the mesh.
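Regularity and quasi-uniformity of ${\cal T}_H^\Gamma$ provide, in particular, the standard inverse estimate on $S_H$, which will be used below: for each $0\leq s<1/2$ there is a constant $C>0$, independent of $H$, such that \[ \Vert \xi_H\Vert_{H^{1+s}(\Gamma )}\,\leq\, C\, H^{-s}\, \Vert \xi_H\Vert_{H^{1}(\Gamma )} \qquad \forall\, \xi_H\in S_H \, . \]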
\begin{remark} Other choices of the discretization space on $\Gamma$ are possible. For example we could use a trigonometric basis or a smoother spline space on $\Gamma$; these particular choices have advantages in that they would provide faster convergence of the UWVF scheme. We shall not discuss them explicitly here but will give an example of the use of a trigonometric space in Section \ref{num}. \end{remark}
Then we approximate $G_{\Gamma }\! :H^{-1}(\Gamma )\to H^{1}(\Gamma )$ by $G_{\Gamma }^H\! :H^{-1}(\Gamma )\to S_H$ using a discrete counterpart of (\ref{Gdef}). Indeed, each $\eta\in H^{-1}(\Gamma )$ is mapped onto $G^H_{\Gamma }\eta\in S_{H}$, the unique solution of \begin{equation} \int_{\Gamma }\! \Big(\beta \,\nabla_{\Gamma } (G^H_{\Gamma } \!\eta) \cdot \nabla_{\Gamma }\overline{\xi} -\lambda\, G^H_{\Gamma }\! \eta \,\overline{\xi}\Big) \, dS=\int_{\Gamma } \eta\,\overline{\xi}\,dS \qquad\forall\xi\in S_{H} . \label{defGH} \end{equation} Notice that, as at the continuous level, this definition extends to functions in a larger space, namely $S_{H}'$, the dual space of $S_{H}$ with pivot space $L^2(\Gamma )$.
Indeed, Assumptions \ref{A} and \ref{G}, and the conditions on the coefficients in (\ref{hyp-E!forwardprob}), allow us to show that this operator is well-defined for $H$ small enough by applying Schatz's analysis~\cite{Schatz} of non-coercive sesquilinear forms. This argument is standard and we do not give the details here: it applies not only because of the approximation properties of $S_H$, but also because the operator $G_{\Gamma}: H^{-1}(\Gamma)\to H^1(\Gamma )$ can be understood as the solution operator of a bounded sesquilinear form which is the sum of a compact and a coercive sesquilinear form; see the proof of Lemma \ref{lemma-Gok}.
\begin{lemma}\label{lemma-GH}
The operator $G^H_{\Gamma }\! : S_{H}'\to S_{H}$ is an isomorphism for any $H>0$ small enough.
Furthermore, if $\lambda\in H^{-1/2}(\Gamma)$ is smooth enough that $G_\Gamma\lambda\in H^t(\Gamma)$ for some $t\in [1,P+1]$, then the following error estimate holds: \[ \Vert (G_\Gamma-G_\Gamma^H)\lambda\Vert_{H^{s}(\Gamma)}\, \leq\, CH^{t-s}\Vert G_\Gamma\lambda \Vert_{H^{t}(\Gamma)} \] for any $s\in [0,1]$, and where $C$ is independent of $\lambda$. \end{lemma}
We now consider the semidiscrete counterpart of problem (\ref{prob-FwdProbV}), which consists of computing $\textbf{{v}}^{H}\in H({\rm div};\Omega_R)$ that satisfies \begin{eqnarray} &&\hspace*{-1cm} \int_{\Omega_R}\!\! \Big(\frac{1}{k^2} \nabla\!\cdot\!\textbf{{v}}^{H}\,\overline{\nabla\!\cdot\!\textbf{{w}}}-\textbf{{v}}^{H}\!\cdot\overline{\textbf{{w}}} \Big) d\textbf{{x}} +\int_{ \Sigma _R} \!\!\! N_R(\textbf{{v}}^{H}\!\cdot\!\boldsymbol{\nu})\,\overline{\textbf{{w}}\!\cdot\!\boldsymbol{\nu}}\,dS -\int_{\Gamma } {G^H_{\Gamma\! }} (\textbf{{v}}^{H}\!\cdot\!\boldsymbol{\nu}) \, \overline{\textbf{{w}}\!\cdot\!\boldsymbol{\nu}}\,dS \nonumber \\ &&\qquad\qquad =\, \int_{\Gamma } { G^H_{\Gamma \! } g} \,\overline{\textbf{{w}}\!\cdot\!\boldsymbol{\nu}}\,dS\qquad\mbox{ for all }\textbf{{w}}\in H({\rm{}div};\Omega_R) \, . \label{prob-FwdProbV_HM} \end{eqnarray}
As at continuous level, it is useful to associate to the left hand side of (\ref{prob-FwdProbV_HM}) the sesquilinear form
$a^H: H({\rm{}div};\Omega_R) \times H({\rm{}div};\Omega_R) \to \mathbb{C}$ defined by \begin{equation}
a^H(\mathbf{u},\mathbf{w}) =\displaystyle\int_{\Omega_R}\! (\frac{1}{k^2}\,\nabla\cdot\mathbf{u}\,\overline{\nabla\cdot\textbf{{w}}}-\mathbf{u}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}}
+\int_{\Sigma _R}\!\! N_R(\mathbf{u}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS- \int_{\Gamma }\!\! G^H_{\Gamma \! }(\mathbf{u}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS\, , \label{aHdef} \end{equation} which is just the semidiscrete counterpart of $a(\cdot,\cdot )$.
Moreover, to study the problem (\ref{prob-FwdProbV_HM}), we define the operator $A^H:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)'$ by \[ \llangle A^H \mathbf{u} , \overline{\textbf{{w}}}\rrangle=a^H(\mathbf{u} ,\textbf{{w}})\quad \mbox{ for all } \mathbf{u} , \textbf{{w}}\in H({\rm{}div};\Omega_R) \, . \] We can now show that $A^H$ converges to $A$ in norm. \begin{lemma}\label{Ahconv} There is a constant $C$, independent of $H$, such that for all $H>0$ sufficiently small $$\Vert A - A^H\Vert_{H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)'}\,\leq\, C H \, .$$ \end{lemma}
\begin{proof} For any $\mathbf{u} ,\textbf{{w}}\in H({\rm{}div};\Omega_R)$, from the definitions of $A$ and $A^H$ we have that
\begin{eqnarray*}
\vert \llangle (A-A^H) \mathbf{u} , \overline{\textbf{{w}}} \rrangle \vert &=&\Big\vert\int_\Gamma(G^H_{\Gamma}-G_{\Gamma})(\mathbf{u}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\, dS \, \Big\vert \\
&\leq&\Vert (G^H_{\Gamma}-G_{\Gamma})(\mathbf{u}\cdot\boldsymbol{\nu})\Vert_{H^{1/2}(\Gamma)} \, \Vert\textbf{{w}}\cdot\boldsymbol{\nu}\Vert_{H^{-1/2}(\Gamma)}\\
&\leq&C\Vert (G^H_{\Gamma}-G_{\Gamma})(\mathbf{u}\cdot\boldsymbol{\nu})\Vert_{H^{1/2}(\Gamma)} \, \Vert\textbf{{w}}\Vert_{H({\rm{}div};\Omega_R)}\, .
\end{eqnarray*}
But, by Lemma~\ref{lemma-GH} and Assumption \ref{G}, we conclude that \begin{eqnarray*}
\Vert (G_{\Gamma}^H-G_{\Gamma})(\mathbf{u} \cdot\boldsymbol{\nu})\Vert_{H^{1/2}(\Gamma)} &\leq& CH\, \Vert G_{\Gamma} ( \mathbf{u} \cdot\boldsymbol{\nu})\Vert_{H^{3/2} (\Gamma)}\\ & \leq& CH\, \Vert \mathbf{u} \cdot\boldsymbol{\nu} \Vert_{H^{-1/2} (\Gamma)} \,\leq\, CH\, \Vert \mathbf{u} \Vert_{H({\rm{}div};\Omega_R)} \, .
\end{eqnarray*}
\end{proof}
Using \cite[Theorem 10.1]{Kress} we have the following result. \begin{theorem}\label{lemma-E!sol:semidiscrGIBCfwdprob_vec} For all $H$ sufficiently small, the operator $A^H:H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R)'$
is invertible and its inverse is bounded independently of $H$.
If $\mathbf{v}$ satisfies
(\ref{prob-FwdProbV}) and $\mathbf{v}^H$ satisfies
(\ref{prob-FwdProbV_HM}), then there is a constant $C$ independent of
$H$ such that
\begin{equation}\label{eq-boundsemidiscr}
\Vert\mathbf{v}^H-\mathbf{v}\Vert_{H({\rm{}div};\Omega_R)} \, \leq\, C \,\Big( \Vert (G^H_{\Gamma}-G_{\Gamma})(\mathbf{v}\cdot\boldsymbol{\nu})\Vert_{H^{1/2}(\Gamma)} + \Vert (G^H_{\Gamma}-G_{\Gamma})g\Vert_{H^{1/2}(\Gamma)}\Big) \, .
\end{equation} \end{theorem} \begin{proof}
Recall that, as shown in the proof of Theorem \ref{th-FwdProbVok}, the operator $A: H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R) ' $ is an isomorphism. Further, Lemma \ref{Ahconv} shows the convergence of $A^H$ to $A$ in the norm $\Vert\cdot\Vert _{H({\rm{}div};\Omega_R)\to H({\rm{}div};\Omega_R) '}$. Then Theorem 10.1 of \cite{Kress} shows that, for $H$ small enough, $(A^H)^{-1}$ exists and is uniformly bounded in $H$. \\ Finally, to deduce the error bound (\ref{eq-boundsemidiscr}), we notice that $$A^H (\textbf{{v}} ^H-\textbf{{v}})\, = \, (A-A^H) \textbf{{v}} + f^H - f
\quad \mbox{ in } H({\rm{}div};\Omega_R) ' \, ,$$ where $A\textbf{{v}}=f$ and $A^H\textbf{{v}}^H=f^H$ in $H({\rm{}div};\Omega_R) '$: \[ f^H(\mathbf{w})=\int_{\Gamma}G^H_\Gamma g\, \mathbf{w}\cdot\boldsymbol{\nu}\,dS \quad\text{and}\quad f(\mathbf{w})=\int_{\Gamma}G_\Gamma g\, \mathbf{w}\cdot\boldsymbol{\nu} \,dS \qquad\forall \mathbf{w}\in H({\rm{}div};\Omega_R) \, . \] But we can estimate $\Vert (A-A^H)\textbf{{v}} \Vert _{H({\rm{}div};\Omega_R) '} $ as in the proof of Lemma \ref{Ahconv}, which gives us the first term on the right hand side of (\ref{eq-boundsemidiscr}). Similarly, we can bound $\Vert f^H - f \Vert _{H({\rm{}div};\Omega_R) '}$, which gives us the second term on the right hand side of (\ref{eq-boundsemidiscr}).
\end{proof}
Our final result of this section shows that $\textbf{{v}}^H$ is smooth enough that the trace of $\nabla\cdot\textbf{{v}}^H$ is well defined on line segments (edges of elements) in $\Omega _R$.
\begin{lemma}
For each $0\leq s<1/2$, there exists a constant $C$ (depending on $s$ but independent of $g\in H^{-1}(\Gamma)$) such that the solution $\mathbf{v}^H\in H({\rm{}div};\Omega_R) $ of (\ref{prob-FwdProbV_HM}) satisfies $$
\Vert\mathbf{v}^H\Vert_{H^{1/2+s}(\Omega_R)} \leq CH^{-s}\Vert g\Vert_{H^{-1}(\Gamma)} \quad\mbox{ and } \quad
\Vert \nabla\cdot\mathbf{v}^H\Vert_{H^{1}(\Omega_R)} \leq C\Vert g\Vert_{H^{-1}(\Gamma)}\, . $$ \end{lemma} \begin{proof}
Following the proof of the uniqueness result in Lemma \ref{lemma-FwdProbVok} and replacing there the operator $G_{\Gamma}$ by its discrete counterpart $G^H_{\Gamma}$, we see that $\textbf{{v}}^H=\nabla u^H$ where $u^H\in H^1 (\Omega_R)$ satisfies
$$ \begin{array}{rcll}
\Delta u^H+k^2u^H&=&0 &\mbox{ in }\Omega_R\, ,\\ \displaystyle -u^H+N_R(\frac{\partial u^H}{\partial \boldsymbol{\nu} })&=&0 &\mbox{ on }\Sigma_R\, ,\\[1ex] \displaystyle u^H+G_\Gamma^H(\frac{\partial u^H}{\partial\boldsymbol{\nu} } )&=&G_\Gamma^H g\quad & \mbox{ on }\Gamma \, .
\end{array} $$ Since $S_H$ consists of continuous piecewise polynomials,
we know that for each $0\leq s <1/2$ it holds that $S_H \subseteq H^{1+s}(\Gamma) $ and, in particular, $u^H|_{\Gamma}\in G_{\Gamma}^H (H^{-1} (\Gamma )) \subseteq S_H\subset H^{1+s}(\Gamma)$. Moreover, $N_R$ can be inverted to give the classical Dirichlet-to-Neumann map, so that $u^H$ can be extended to the exterior of $B_R$ as a radiating solution of the Helmholtz equation in the whole of $\Omega$. Identifying this extension with $u^H$ itself, we have that $u^H$ solves the exterior Dirichlet problem for the Helmholtz equation in $\Omega$ with Dirichlet data $u^H|_{\Gamma} \in H^{1+s}(\Gamma)$. Hence, using a priori estimates for the exterior Dirichlet problem, $u^H\in H^{3/2+s}_{loc}(\Omega)$ and it satisfies $$
\Vert u^H\Vert_{H^{3/2+s}(\Omega_R)}\, \leq \, \Vert u^H\Vert_{H^{3/2+s}_{loc}(\Omega )}\, \leq \, C\Vert u^H\Vert_{H^{1+s}(\Gamma)} \, = \,C\,\Vert G_\Gamma^H(\frac{\partial u^H}{\partial\boldsymbol{\nu}})-G_\Gamma^H g \Vert_{H^{1+s}(\Gamma)}\, . $$ By our quasi-uniformity assumption on the mesh $\mathcal{T}_H^{\Gamma}$, we know that a standard inverse estimate holds for $S_H$ and hence $$ \Vert u^H\Vert_{H^{3/2+s}(\Omega_R)}\, \leq \, CH^{-s}\,\Vert G_\Gamma^H(\frac{\partial u^H}{\partial\boldsymbol{\nu}}\! )-G_\Gamma^H g\Vert_{H^{1}(\Gamma)} \, \leq \, CH^{-s}\left(\Vert \frac{\partial u^H}{\partial\boldsymbol{\nu}} \Vert_{H^{-1/2}(\Gamma)}+\Vert g\Vert_{H^{-1}(\Gamma)}\right). $$ Now note that $\textbf{{v}} ^H=\nabla u^H$ in $\Omega _R$, so that $$ \Vert \textbf{{v}} ^H\Vert_{H^{1/2+s}(\Omega _R)} \,\leq \, \Vert u^H\Vert_{H^{3/2+s}(\Omega_R)}\, \leq \, CH^{-s}\left(\Vert \textbf{{v}}^H\!\cdot\!\boldsymbol{\nu}\Vert_{H^{-1/2}(\Gamma)}+\Vert g\Vert_{H^{-1}(\Gamma)}\right) . $$ Similarly, $\nabla\cdot\textbf{{v}}^H=\Delta u^H=-k^2u^H$ and we deduce that
\begin{eqnarray*}
\Vert \nabla\cdot\textbf{{v}}^H\Vert_{H^1(\Omega_R)}&=&\, k^2\, \Vert u^H\Vert_{H^1(\Omega_R)}
\leq \,C\,\Vert G_\Gamma^H(\frac{\partial u^H}{\partial\boldsymbol{\nu}}\! )-G_\Gamma^H g \Vert_{H^{1/2}(\Gamma)} \\&\leq & C\left(\Vert \textbf{{v}}^H\!\!\cdot\!\boldsymbol{\nu}\Vert_{H^{-1/2}(\Gamma)}+\Vert g\Vert_{H^{-1}(\Gamma)}\right)\,.
\end{eqnarray*} We complete the estimate using the well-posedness of the semidiscrete problem and the continuity of normal traces from $H({\rm{}div};\Omega_R)$ into $H^{-1/2}(\Gamma )$. \end{proof}
\section{A Trefftz DG Method}\label{trefftz}
We want to use a Trefftz discontinuous Galerkin method to approximate the semidiscrete problem (\ref{prob-FwdProbV_HM}).
In particular, in the scalar case, typical examples of Trefftz spaces for the Helmholtz problems are linear combinations of plane waves in different directions, or linear combinations of circular/spherical waves. The gradient of such solutions provides a basis for the vector problem. In the following we seek a Trefftz Discontinuous Galerkin (TDG) method to approximate the semidiscrete vector formulation of the problem (\ref{prob-FwdProbV_HM}).
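For example (a standard construction; here $\mathbf{d}$ denotes a unit propagation direction, $|\mathbf{d}|=1$): the gradient of the scalar plane wave $\exp(ik\,\mathbf{d}\cdot\textbf{{x}})$ is $\textbf{{w}}(\textbf{{x}})=ik\,\mathbf{d}\,\exp(ik\,\mathbf{d}\cdot\textbf{{x}})$, and a direct computation confirms the vector Trefftz property: \[ \ddiv \textbf{{w}} \,=\, (ik)^2\, \exp(ik\,\mathbf{d}\cdot\textbf{{x}})\, , \qquad \grad\ddiv \textbf{{w}} \,=\, (ik)^2\, \nabla \exp(ik\,\mathbf{d}\cdot\textbf{{x}}) \,=\, -k^2\, \textbf{{w}}\, , \] so that $\grad\ddiv \textbf{{w}} +k^2\, \textbf{{w}} =\boldsymbol{0}$.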
Let us introduce a triangular mesh ${\cal T}_h$ of $\Omega _R$, possibly featuring hanging nodes, and allowing triangles to have curvilinear edges if they share an edge with $\Gamma$ or $\Sigma_R$. We write $h$ for the mesh width of ${\cal T}_h$, that is, $h = \max _{K\in {\cal T}_h} h_K$, where $h_K$ is the diameter of the triangle $K$. On ${\cal T}_h$ we will define our TDG method. To this
end, we denote by ${\cal F}_h = \cup _{K\in {\cal T}_h } \partial K$ the skeleton of the mesh ${\cal T}_h$, and set ${\cal F}_h^R = {\cal F}_h \cap \Sigma_R$, ${\cal F}_h^{\Gamma} = {\cal F}_h \cap \Gamma $, and ${\cal F}_h^I = {\cal F}_h \cap \Omega _R = {\cal F}_h \setminus ( {\cal F}_h^R \cup {\cal F}_h^{\Gamma}) $. We also introduce some standard DG notation: Write $\boldsymbol{\nu}^+$, $\boldsymbol{\nu}^-$ and $\boldsymbol{\nu}_K$ for the exterior unit normals on $\partial K ^+$, $\partial K ^-$ and $\partial K$, respectively, where $K^+,K^-,K \in \mathcal{T}_h$. Let $u$ and $\mathbf{v}$ denote a piecewise smooth scalar function and vector field respectively on $\mathcal{T}_h$. On any edge $e\in \mathcal{F}_h^I$ with $e=\partial K ^+\cap\partial K ^-$, where $K^+,K^-\in \mathcal{T} _h$, we define \begin{itemize} \item the averages: $\avg{u} := \frac{1}{2} (u^-+u^+)$, $\avg{\mathbf{v} } := \frac{1}{2} (\textbf{{v}}^-+\textbf{{v}}^+)$; \item the jumps: $\jmp{ u }_{\boldsymbol{\nu}}:= u^-\boldsymbol{\nu}^-+u^+\boldsymbol{\nu}^+$, $\jmp{\mathbf{v} }_{\boldsymbol{\nu}} := \mathbf{v }^-\cdot \boldsymbol{\nu}^-+ \mathbf{v} ^+\cdot \boldsymbol{\nu}^+$. \end{itemize} Furthermore, we will denote by $\nabla _{\! h}$ the elementwise application of $\nabla$, and by $\partial_{\boldsymbol{\nu},h}=\boldsymbol{\nu}\cdot\nabla _{\! h} $ the element-wise application of $\partial_{\boldsymbol{\nu}}$ on $\partial \Omega_R =\Gamma \cup \Sigma_R$.
We next introduce a suitable Trefftz space to approximate the semidiscrete problem written in vector form as (\ref{prob-FwdProbV_HM}).
To this end, we introduce the vector TDG spaces with local number of plane wave directions $\{ p_K \}_{K \in\mathcal{T}_h}$, $p_K>3$, given by $$
\mathbf{W}_h = \left\{ \textbf{{w}}_{h} \in L^2( \Omega _R)^2 \, ; \,\, \textbf{{w}}_h|_K \in \mathbf{W}_{p_K}(K)\,\,\forall K\in\mathcal{T} _h \right\} , $$ where each $\mathbf{W}_{p_K} ( K )$ \pmr{is the span of a set of $p_K$ linearly independent vector functions on} $K$ that enjoy the Trefftz property: $$ \grad\ddiv \textbf{{w}}_h + k^2 \textbf{{w}}_h = 0 \quad\forall \textbf{{w}}_h\in \mathbf{W}_{p_K} ( K ) \, . $$ Then, for any $\mathbf{u}_h \in \mathbf{W}_h$ and an arbitrary element $K\in\mathcal{T}_h$, we have the following integration by parts formula: \begin{eqnarray*} 0&=&\int_K(\grad\ddiv \mathbf{u}_h+k^2 \mathbf{u}_h)\cdot\overline{\textbf{{w}}}_h\,d\textbf{{x}}\\&=& \int_K\! (-\ddiv \mathbf{u}_h\ddiv \overline{\textbf{{w}}}_{h}+k^2\mathbf{u}_h\cdot\overline{\textbf{{w}}}_h)\,d\textbf{{x}}+\int_{\partial K}\!\!\ddiv \mathbf{u}_h\, \overline{\textbf{{w}}}_h\cdot\boldsymbol{\nu}_ K\,dS \, . \end{eqnarray*} Integrating by parts once more yields \[ \int_K \mathbf{u}_h\cdot(\grad\ddiv \overline{\textbf{{w}}}_h+k^2\overline{\textbf{{w}}}_h)\,d\textbf{{x}} +\int_{\partial K}\!\! \ddiv \mathbf{u}_h\, \overline{\textbf{{w}}}_h\cdot\boldsymbol{\nu}_K\,dS- \int_{\partial K}\!\!\mathbf{u}_h\cdot\boldsymbol{\nu}_K\ddiv\overline{\textbf{{w}}}_h\,dS \, =\, 0 \, . \] Now assuming that ${\textbf{{w}}}_h\in \mathbf{W}_{p_K} (K)$, we obtain the master equation linking the fluxes: \[ \int_{\partial K} \!\! \ddiv \mathbf{u} _h \, \overline{\textbf{{w}}}_h\!\cdot\!\boldsymbol{\nu}_K\,dS- \int_{\partial K}\!\! \mathbf{u} _h\! \cdot\!\boldsymbol{\nu}_K\,\ddiv\overline{\textbf{{w}}}_h\,dS\, =\, 0\, . \] This needs to be generalized to apply to discontinuous trial and test functions in $\mathbf{W}_h$. Let $\widehat{\ddiv\mathbf{u}_h}$ and $\hat{\mathbf{u}}_h$ denote numerical fluxes computed from the appropriate functions on either side of an edge $e$ in the mesh (or on one side if the edge is on the boundary), as we will describe next.
We then write the extended master equation \[ \int_{\partial K} \!\! \widehat{\ddiv \mathbf{u}_h} \, \overline{\textbf{{w}}}_h\cdot\boldsymbol{\nu}_K\,dS- \int_{\partial K} \!\! \hat{\mathbf{u}}_h\cdot\boldsymbol{\nu}_K \, \ddiv\overline{\textbf{{w}}}_h\,dS\, =\, 0\, . \] Adding over all triangles in the mesh, $K\in \mathcal{T}_h$, we may write the sum using the sets ${\cal F}_h^R$, ${\cal F}_h^I$ and ${\cal F}_h^{\Gamma}$ as defined previously and obtain: \begin{equation} \begin{array}{l} \displaystyle \int _{\mathcal{F} ^I_h} (\widehat{\ddiv \mathbf{u}}_h\, \jmp{\overline{\textbf{{w}}}_h}_{\boldsymbol{\nu}}-\hat{\mathbf{u}}_h\cdot\jmp{\ddiv\overline{\textbf{{w}}}_h}_{\boldsymbol{\nu}})\,dS \\[1ex] \hspace*{1cm} +\displaystyle \int _{\mathcal{F} ^R_h}
(\widehat{\ddiv \mathbf{u}}_h\, \overline{\textbf{{w}}}_h\cdot\boldsymbol{\nu}-\hat{\mathbf{u}}_h\cdot\boldsymbol{\nu} \,\ddiv\overline{\textbf{{w}}}_h) \,dS \\[1ex] \hspace*{2cm} -\displaystyle \int _{\mathcal{F} ^{\Gamma}_h} (\widehat{\ddiv \mathbf{u}}_h\, \overline{\textbf{{w}}}_h\cdot\boldsymbol{\nu}-\hat{\mathbf{u}}_h\cdot\boldsymbol{\nu}\,\ddiv\overline{\textbf{{w}}}_h ) \,dS \, =\, 0 \, , \end{array}\label{dgsum} \end{equation} where the negative sign appears on the last term because of the use of an outward pointing normal on $\Gamma$.
Defining numerical fluxes using conjugate variables, \pmr{we are led (see also \cite{buf07,git07})} to the following fluxes on edges in ${\cal F}_h^I$: \begin{eqnarray*} \widehat{\ddiv \mathbf{u}_h}=\avg{\ddiv\mathbf{u}_h}+ik\alpha_1 \, \jmp{\mathbf{u}_h}_{\boldsymbol{\nu}}\, ,\\ \hat{\mathbf{u}}_h=\avg{\mathbf{u}_h}+\frac{\alpha_2}{ik}\,\jmp{\ddiv\mathbf{u}_h}_{\boldsymbol{\nu}} \, . \end{eqnarray*} Here $\alpha_1$ and $\alpha_2$ \pmr{are strictly positive real numbers on each edge $e\in \mathcal{F}_h^I$. For the Ultra Weak Variational Formulation that we usually use, $\alpha_1=\alpha_2=1/2$~\cite{cessenat03}. More generally they could be mesh dependent~\cite{HMPhp,git07}. Since our numerical results are for constant $\alpha_1$ and $\alpha_2$ we shall not investigate these more general cases further.}
For the edges on the outer boundary, ${\cal F}_h^R$, following \cite{shelvean_phd} we take \begin{eqnarray*} \widehat{\ddiv \mathbf{u}_h}&=&-k^2N_R(\mathbf{u}_h\cdot\boldsymbol{\nu})+\delta ik\, N_R^{*}(\ddiv \mathbf{u}_h+k^2N_R(\mathbf{u}_h\!\cdot\!\boldsymbol{\nu})) \, ,\\ \hat{\mathbf{u}}_h&=&\mathbf{u}_h+\frac{\delta}{ik}\,\big(\ddiv \mathbf{u}_h+k^2N_R (\mathbf{u}_h\!\cdot\!\boldsymbol{\nu})\big) \boldsymbol{\nu} \, , \end{eqnarray*} where $N_R^{*}$ is the $L^2(\Sigma_R)$-adjoint of $N_R$, and $\delta>0$ is a parameter to be chosen.
Furthermore, for edges on the impedance boundary, ${\cal F}_h^{\Gamma}$, we consider
\begin{eqnarray*} \widehat{\ddiv \mathbf{u}}_h&=&-k^2G_\Gamma^H(\mathbf{u}_h\cdot\boldsymbol{\nu}+g)-ik\,\tau\, G^{H,*}_{\Gamma} (\ddiv \mathbf{u}_h+k^2G_\Gamma^H(\mathbf{u}_h\!\cdot\!\boldsymbol{\nu} +g))\, ,\\ \hat{\mathbf{u}}_h&=&\mathbf{u}_h-\frac{\tau}{ik}\, \big(\ddiv \mathbf{u}_h+k^2G_\Gamma^H(\mathbf{u}_h\cdot\boldsymbol{\nu}+g)\big) \boldsymbol{\nu}\, , \end{eqnarray*} where $G_{\Gamma}^{H,*}$ is the $L^2(\Gamma )$-adjoint of $G_{\Gamma}^H$, and $\tau>0$ is a parameter to be chosen. Note the sign change compared to the fluxes on the outer boundary $\Sigma _R$ because of the outward pointing $\boldsymbol{\nu}$.
Using these fluxes in (\ref{dgsum}) leads us to define the sesquilinear form \begin{eqnarray} && a^H_h (\mathbf{u},\textbf{{w}})=\int_{{\cal F}_h^I}\left(\avg{\ddiv\mathbf{u}}\overline{\jmp{\textbf{{w}}}}_{\boldsymbol{\nu}}-\avg{\mathbf{u}}\cdot\overline{\jmp{\ddiv\textbf{{w}}}}_{\boldsymbol{\nu}}\right)\,dS\nonumber\\ &&\quad -\frac{1}{ik}\int_{{\cal F}_h^I}\alpha_2\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}\cdot\overline{\jmp{\ddiv\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS-\int_{\Sigma_R}\left(k^2N_R(\mathbf{u}\cdot\boldsymbol{\nu})\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}+\mathbf{u}\cdot\boldsymbol{\nu}\overline{\ddiv\textbf{{w}}}\right)\,dS\nonumber\\ &&\quad -\frac{1}{ik}\int_{\Sigma_R}\delta(\ddiv\mathbf{u}+k^2N_R(\mathbf{u}\cdot\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2N_R(\textbf{{w}}\cdot\boldsymbol{\nu}))}\,dS\nonumber\\ && +\int_{\Gamma}\left(k^2G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu})\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}+\mathbf{u}\cdot\boldsymbol{\nu} \, \overline{\ddiv\textbf{{w}}}\right)\,dS+ik\int_{{\cal F}_h^I}\alpha_1\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\jmp{\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS\nonumber\\ &&\quad -\frac{1}{ik}\int_{\Gamma}\tau(\ddiv\mathbf{u}+k^2G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2G_\Gamma^H(\textbf{{w}}\cdot\boldsymbol{\nu}))}\,dS \, , \end{eqnarray} and the antilinear functional \[ f^H_h(\textbf{{w}})=-\frac{1}{ik}\int_\Gamma\tau k^2 G_\Gamma^H(g)\, (\ddiv\overline{\textbf{{w}}} +k^2G_\Gamma^H(\overline{\textbf{{w}}}\cdot\boldsymbol{\nu})) \,dS+ \int_\Gamma k^2G_\Gamma^H(g)\,\overline{\textbf{{w}}}\cdot\boldsymbol{\nu}\,dS \, . \] Then the discrete problem we wish to solve is to find $\mathbf{v}_h^H\in \mathbf{W}_h$ such that \begin{eqnarray}\label{DiscreteProblem} a^H_h (\mathbf{v}_h^H,\textbf{{w}})&=& f_h^H (\textbf{{w}} )\mbox{ for all }\textbf{{w}}\in \mathbf{W}_h.
\end{eqnarray} We start by showing that this problem has a unique solution for any $h>0$ and $k>0$ and $H$ small enough. It is useful to define the sesquilinear forms \begin{eqnarray*} a_{0,h}^H(\mathbf{u},\textbf{{w}})&=& \int _{\mathcal{F}_h^I} (\avg{\ddiv\mathbf{u}} \jmp{\overline{\textbf{{w}}}}_{\boldsymbol{\nu}}-\avg{\mathbf{u}}\cdot\jmp{\ddiv\overline{\textbf{{w}}}}_{\boldsymbol{\nu}}) \,dS \\ &&-\int_{\Sigma_R}\mathbf{u}\!\cdot\!\boldsymbol{\nu}\,\ddiv\overline{\textbf{{w}}} \,dS +\int_{\Gamma} \mathbf{u}\!\cdot\!\boldsymbol{\nu}\,\ddiv\overline{\textbf{{w}}}\,dS \, , \end{eqnarray*} and \begin{eqnarray*} b^H_h(\mathbf{u} ,\textbf{{w}}) &\! = \! & ik\int_{{\cal F}_h^I}\alpha_1\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\jmp{\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS-\frac{1}{ik}\int_{{\cal F}_h^I}\alpha_2\jmp{\ddiv\mathbf{u} }_{\boldsymbol{\nu}}\cdot\overline{\jmp{\ddiv\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS\nonumber\\ && -\frac{1}{ik}\int_{\Sigma_R}\!\!\! \delta\, (\ddiv\mathbf{u}+k^2N_R(\mathbf{u}\!\cdot\!\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2N_R (\textbf{{w}}\!\cdot\!\boldsymbol{\nu}))}\,dS -\int_{\Sigma_R}\!\!\! k^2 N_R(\mathbf{u}\!\cdot\!\boldsymbol{\nu})\, \overline{\textbf{{w}}\! \cdot\! \boldsymbol{\nu}}\,dS\nonumber\\&& +\int_{\Gamma}\!\! \!k^2\, G_\Gamma^H(\mathbf{u}\!\cdot\!\boldsymbol{\nu})\, \overline{\textbf{{w}}\!\cdot\!\boldsymbol{\nu}}\,dS-\frac{1}{ik}\int_{\Gamma}\! \!\! \tau\, (\ddiv\mathbf{u}+k^2G_\Gamma^H(\mathbf{u}\!\cdot\!\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2G_\Gamma^H(\textbf{{w}}\!\cdot\!\boldsymbol{\nu}))}\,dS \end{eqnarray*} Obviously $a^H_h(\mathbf{u} ,\textbf{{w}})=a_{0,h}^H (\mathbf{u} ,\textbf{{w}})+b_h^H(\mathbf{u} ,\textbf{{w}})$.
We start by rewriting $a_{0,h}^H$ in an equivalent form using the DG Magic Lemma \cite{EM}. In particular since $\textbf{{w}}$ satisfies the Trefftz condition, for all $\mathbf{u},\textbf{{w}}\in \mathbf{W}_h$ we have \begin{eqnarray*} \lefteqn{\sum_{K\in{\cal T}_h}\int_K(\ddiv\mathbf{u}\, \overline{\ddiv\textbf{{w}}}-k^2\mathbf{u}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}}=\sum_{K\in {\cal T}_h}\int_{\partial K}\mathbf{u}\cdot\boldsymbol{\nu}\overline{\ddiv \textbf{{w}}}\,dS}\\& =&\int_{\mathcal{F}_h^I}(\avg{\mathbf{u}}\cdot\overline{\jmp{\ddiv\textbf{{w}}}}_{\boldsymbol{\nu}}+\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\avg{\ddiv\textbf{{w}}}})\,dS +\int_{\Sigma_R}\!\!\!\mathbf{u}\cdot\boldsymbol{\nu}\, \overline{\ddiv \textbf{{w}}}\,dS -\int_{\Gamma}\mathbf{u}\cdot\boldsymbol{\nu}\, \overline{\ddiv \mathbf{w}}\,dS\, . \end{eqnarray*} Using this equality in the definition of $a_{0,h}^H(\mathbf{u},\textbf{{w}})$ we see that $a_{0,h}^H(\mathbf{u},\textbf{{w}})=a_{1,h}^H(\mathbf{u},\textbf{{w}})$ where \[ a_{1,h}^H(\mathbf{u},\textbf{{w}}):=-\sum_{K\in{\cal T}_h}\int_K(\ddiv\mathbf{u}\, \overline{\ddiv\textbf{{w}}}-k^2\,\mathbf{u}\cdot\overline{\textbf{{w}}})\,d\textbf{{x}}+ \int_{\mathcal{F}^I_h}
(\avg{\ddiv\mathbf{u}} \jmp{\overline{\textbf{{w}}}}_{\boldsymbol{\nu}}+\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\cdot\overline{\avg{\ddiv\textbf{{w}}}}) \,dS\, . \]
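The rewriting above rests on the edge-by-edge identity behind the DG Magic Lemma: for traces $(u^-,u^+)$ and $(v^-,v^+)$ on the two sides of an interior edge, $u^-v^- - u^+v^+ = \avg{u}(v^--v^+)+(u^--u^+)\avg{v}$. A minimal numeric check of this splitting (illustrative scalar traces, not tied to any particular mesh):

```python
import random

random.seed(0)
# Traces of two piecewise functions on the two sides of an interior edge
um, up = random.random(), random.random()
vm, vp = random.random(), random.random()

avg = lambda a, b: 0.5 * (a + b)   # {.}  average
jmp = lambda a, b: a - b           # [.]  jump, with the normal from "-" to "+"

# Sum of the two element-boundary contributions on this edge ...
lhs = um * vm - up * vp
# ... equals the average/jump splitting used edge by edge in the lemma
rhs = avg(um, up) * jmp(vm, vp) + jmp(um, up) * avg(vm, vp)
assert abs(lhs - rhs) < 1e-12
```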
Then choosing $\textbf{{w}}=\mathbf{u}$ we immediately have \[ \Im(a_{0,h}^H(\mathbf{u},\mathbf{u})) =\Im(a_{1,h}^H(\mathbf{u},\mathbf{u}))=0 \, . \]
Turning to the sesquilinear form $b_h^H(\mathbf{u},\textbf{{w}})$ if we choose $\textbf{{w}}=\mathbf{u}$ then \begin{eqnarray*} b_h^H (\mathbf{u},\mathbf{u}) & \!\! = \!\! &
ik\int_{{\cal F}_h^I}\alpha_1\, |\jmp{\mathbf{u}}_{\boldsymbol{\nu}}|^2\,dS -\frac{1}{ik}\int_{{\cal F}_h^I}\alpha_2\, |\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}|^2\,dS \nonumber\\
&&\quad -\frac{1}{ik}\int_{\Sigma_R}\!\! \delta\, |\ddiv\mathbf{u}+k^2N_R (\mathbf{u}\cdot\boldsymbol{\nu})|^2\,dS -\int_{\Sigma_R}\!\! k^2\, N_R(\mathbf{u}\cdot\boldsymbol{\nu})\overline{\mathbf{u}\cdot\boldsymbol{\nu}}\,dS\nonumber\\&& +\int_{\Gamma} k^2\, G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu})\overline{\mathbf{u}\cdot\boldsymbol{\nu}}\,dS
-\frac{1}{ik}\int_{\Gamma}\!\! \tau\, |\ddiv\mathbf{u}+k^2G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu})|^2\,dS \, . \end{eqnarray*}
Thus, since $\alpha_1$, $\alpha_2$, $\delta$ and $\tau$ are real valued, \begin{eqnarray*} \Im (b^H_h (\mathbf{u},\mathbf{u})) &\! =\! & \int _{\mathcal{F}_h^I}
(k\alpha_1\, |\jmp{\mathbf{u}}_{\boldsymbol{\nu}}|^2+\frac{\alpha_2}{k}\, |\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}|^2) \,dS \\ &&-\Im ( \int_{\Sigma_R} \!\! k^2 N_R(\mathbf{u}\!\cdot\! {\boldsymbol{\nu}})\overline{\mathbf{u}\!\cdot\! {\boldsymbol{\nu}}}\,dS ) + \Im (\int_{\Gamma} k^2G^H_{\Gamma}(\mathbf{u}\!\cdot\! \boldsymbol{\nu})\, \overline{\mathbf{u}\!\cdot\! \boldsymbol{\nu}}\,dS) \\
&&+\int_{\Sigma_R}\frac{\delta}{k}\,\left|\ddiv {\mathbf{u}}+k^2 N_R (\mathbf{u}\!\cdot\! {\boldsymbol{\nu}})\right|^2\,dS
+\int_{\Gamma}\frac{\tau}{k}\,\left|\ddiv \mathbf{u}+k^2 G^H_{\Gamma} (\mathbf{u}\!\cdot \! {\boldsymbol{\nu}})\right|^2\,dS\, . \end{eqnarray*} Note that, by Lemmas~\ref{lemmaNtDsign} and \ref{lemmaGIBCsign} (the latter is stated for $G_{\Gamma}$, but a similar argument shows that it also holds for $G^H_{\Gamma}$), \[ \Im (\int_{\Sigma_R} N_R (\mathbf{u}\!\cdot\! {\boldsymbol{\nu}})\, \overline{\mathbf{u}\!\cdot\! {\boldsymbol{\nu}}}\,dS) - \Im (\int_{\Gamma} G^H_{\Gamma}(\mathbf{u}\!\cdot\! {\boldsymbol{\nu}})\, \overline{\mathbf{u}\!\cdot\! {\boldsymbol{\nu}}}\,dS) \, \leq\, 0 \, , \] and so \begin{eqnarray*} \Im(b^H_h(\mathbf{u},\mathbf{u}))& \!\!\geq\!\! & \int_{\mathcal{F}^I_h}
(k\alpha_1\, |\jmp{\mathbf{u}}_{\boldsymbol{\nu}}|^2+\frac{\alpha_2}{k}\, |\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}|^2) \,dS \\
&&+\int_{\Sigma_R}\frac{\delta}{k}\,\left|\ddiv {\mathbf{u}}+k^2 N_R (\mathbf{u}\!\cdot\! {\boldsymbol{\nu}})\right|^2\,dS
+\int_{\Gamma}\frac{\tau}{k}\,\left|\ddiv \mathbf{u}+k^2 G^H_{\Gamma} (\mathbf{u}\!\cdot \! {\boldsymbol{\nu}})\right|^2\,dS\, . \end{eqnarray*}
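The sign bookkeeping in the interior-edge terms reduces to the elementary identity $\Im\big(ik\,a-\tfrac{1}{ik}\,b\big)=k\,a+b/k$ for real $a,b$ and $k>0$, since $-1/(ik)=i/k$. A one-line check (illustrative values standing in for $\alpha_1|\jmp{\mathbf{u}}_{\boldsymbol{\nu}}|^2$ and $\alpha_2|\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}|^2$):

```python
# Sign bookkeeping for Im(b): with -1/(ik) = i/k, both penalty terms
# contribute with a positive sign to the imaginary part.
k, a1, a2 = 1.3, 0.5, 0.5   # wavenumber and penalties (illustrative values)
J2, D2 = 0.7, 0.9           # stand-ins for |[u]_nu|^2 and |[div u]_nu|^2

b = 1j * k * a1 * J2 - (1.0 / (1j * k)) * a2 * D2
assert abs(b.imag - (k * a1 * J2 + a2 * D2 / k)) < 1e-12
```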
We may thus define the mesh-dependent semi-norm $\|\textbf{{w}}\|_{DG} = \sqrt{\Im(b^H_h (\textbf{{w}},\textbf{{w}}))}$
for any function $\textbf{{w}}\in\mathbf{W}^s(\mathcal{T}_h)$ where $\mathbf{W}^s(\mathcal{T}_h)$ is defined as follows and contains $\mathbf{W}_h$: $$
\mathbf{W}^s (\mathcal{T}_h) = \{ \textbf{{w}}\in L^2 (\Omega _R)^2 ; \, \textbf{{w}}|_K \in H^{1/2+s}(\mathrm{div};K) \, \text{s.t. } \nabla \nabla\cdot \textbf{{w}} + k^2 \textbf{{w}} = 0 \,\text{in } K , \, \forall K\in\mathcal{T}_h \, \} \, , $$ for any $s\in\mathbb{R}$ with $s>0$. We now have the following result. \begin{lemma}\label{lemDGnormbound} For any $s>0$ and all $H>0$ small enough, the semi-norm $\Vert\cdot\Vert_{DG}$ is a norm on $\mathbf{W}^s (\mathcal{T}_h)$, and
\begin{eqnarray}\label{DGnormbound} \Vert \mathbf{w} \Vert_{DG}^2&\geq & \int_{\mathcal{F}_h^I}
( k\alpha_1\, |\jmp{\mathbf{w} }_{\boldsymbol{\nu}} |^2
+\frac{\alpha_2}{k}\, |\jmp{\ddiv\mathbf{w} }_{\boldsymbol{\nu}} |^2) \,dS \\ &&
+\int_{\Sigma_R}\frac{\delta}{k}\, \left|\ddiv \mathbf{w} +k^2 \, N_R(\mathbf{w} \!\cdot\! {\boldsymbol{\nu}})\right|^2\,dS
+\int_{\Gamma }\frac{\tau}{k} \, \left| \ddiv \mathbf{w} +k^2\, G^H_{\Gamma} (\mathbf{w} \!\cdot\! {\boldsymbol{\nu}})\right|^2\,dS\, .\nonumber \end{eqnarray} \end{lemma}
\begin{proof}
On one hand, if $\|\textbf{{w}}\|_{DG}=0$ for some $\textbf{{w}}\in \mathbf{W}^s (\mathcal{T}_h)$, then the jumps of $\textbf{{w}}\cdot\boldsymbol{\nu}$ and $\nabla\cdot\textbf{{w}}$ vanish across interior edges, so that $\nabla\nabla\cdot\textbf{{w}}+k^2\textbf{{w}}=0$ holds in all of $\Omega_R$; moreover, $\nabla\cdot\textbf{{w}} + k^2 N_R(\textbf{{w}}\cdot\boldsymbol{\nu})=0$ on $\Sigma_R$ and $\nabla\cdot\textbf{{w}} + k^2 G^H_{\Gamma}(\textbf{{w}}\cdot\boldsymbol{\nu})=0$ on $\Gamma$. The well-posedness of the semi-discrete problem for all $H$ small enough, Theorem~\ref{lemma-E!sol:semidiscrGIBCfwdprob_vec}, implies that
$\textbf{{w}} =\boldsymbol{0}$, so that the semi-norm $\|\cdot\|_{DG}$ is a norm on $\mathbf{W}^s(\mathcal{T}_h)$. On the other hand, the norm bound follows from the argument preceding the lemma. \end{proof}
We now have the existence and uniqueness of solution for the discrete problem. \begin{proposition}\label{Prop:DiscrExistUnique}
For all $H$ small enough and any $h>0$ and $k>0$ there exists a unique solution $\mathbf{v}_h^H\in \mathbf{W}_h$ to the problem~(\ref{DiscreteProblem}) for every $g\in H^{-1}(\Gamma) $.
\end{proposition} \begin{proof} Since the space $\mathbf{W}_h$ is finite dimensional, it suffices to show uniqueness. To this end, we consider a solution of the homogeneous problem, that is, $\mathbf{v}_h^H\in\mathbf{W}_h$ such that $a_h^H(\textbf{{v}}^H_h,\textbf{{w}})=0$ for any $\textbf{{w}}\in \mathbf{W}_h$. Then
$a_h^H(\textbf{{v}}^H_h,\textbf{{v}}_h^H)=0$, so that, using $\Im(a_{0,h}^H(\textbf{{v}}_h^H,\textbf{{v}}_h^H))=0$, we obtain $\|\textbf{{v}}^H_h\|_{DG} ^2= \Im ( b^H_h (\textbf{{v}}_h^H,\textbf{{v}}_h^H) ) = \Im ( a_h^H(\textbf{{v}}_h^H,\textbf{{v}}_h^H)
) =0$. Hence $\textbf{{v}}^H_h=0$ since $\|\cdot\|_{DG}$ is a norm on $\mathbf{W}_h\subseteq \mathbf{W}^s( \mathcal{T}_h)$ (see Lemma \ref{lemDGnormbound}). \end{proof}
\subsection{A bound of the approximation error in the mesh-dependent norm $\Vert\cdot\Vert_{DG}$}
We now introduce the mesh-dependent norm $\|\cdot\|_{DG^+}$ on $\mathbf{W}^s(\mathcal{T}_h)$ as \begin{eqnarray*}
\|\textbf{{w}}\|^2_{DG^+}&=&\|\textbf{{w}}\|^2_{DG}+k^{-1}\int_{{\cal F}_h^I} \alpha^{-1}_1\, |\avg{\ddiv {\textbf{{w}}}}|^2\,dS
+ k\int_{{\cal F}_h^I} \alpha^{-1}_2\, |\avg{\textbf{{w}}}|^2\,dS\\
&& + k\int_{\Sigma_R}\!\! \delta^{-1}\, |\textbf{{w}}\cdot\boldsymbol{\nu}|^2\,dS + k\int_{\Gamma} \tau^{-1}\, |\textbf{{w}}\cdot\boldsymbol{\nu}|^2\,dS. \end{eqnarray*}
\begin{proposition}\label{BoundedForm} For any $\mathbf{u}, \mathbf{w}\in \mathbf{W}^s(\mathcal{T}_h)$, we have
$$|a^H _h (\mathbf{u},\mathbf{w})|\,\leq\, 2\, \|\mathbf{u}\|_{DG}\, \|\mathbf{w} \|_{DG^+}\, .$$ \end{proposition} \begin{proof} Using integration by parts, the Trefftz property of $\mathbf{u}\in \mathbf{W}^s(\mathcal{T}_h)$, and the DG Magic Lemma, we have \begin{eqnarray}\label{DGMagic1} && \sum_{K\in\mathcal{T}_h}\int_{K}\left(\nabla\cdot\mathbf{u}\, \overline{\nabla\cdot\textbf{{w}}}-k^2\mathbf{u}\cdot\overline{\textbf{{w}}}\right)\,d\textbf{{x}} \, = \, \sum_{K\in\mathcal{T}_h}\int_{\partial K}\nabla\cdot\mathbf{u}\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS\nonumber\\
&& \quad = \int_{{\cal{F}}_h^I}\avg{\nabla\cdot\mathbf{u}}\overline{\jmp{\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS + \int_{{\cal{F}}_h^I}\jmp{\ddiv{\mathbf{u}}}_{\boldsymbol{\nu}}\cdot\overline{\avg{\textbf{{w}}}}\,dS-\int_{\Gamma}\!\! \nabla\cdot\mathbf{u}\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS \nonumber\\
&& \qquad + \int_{\Sigma_R}\!\! \ddiv {\mathbf{u}}\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS. \end{eqnarray} Substituting the expression for $ \sum_{K\in\mathcal{T}_h}\int_{K}\left(\nabla\cdot\mathbf{u}\, \overline{\nabla\cdot\textbf{{w}}}-k^2\mathbf{u}\cdot\overline{\textbf{{w}}}\right)\,d\textbf{{x}} $
from equation~(\ref{DGMagic1}) above into the expression for $a_{1,h}^H (\mathbf{u},\textbf{{w}})$ leads to \begin{eqnarray*} a_{1,h}^H(\mathbf{u},\textbf{{w}}) &=& \int_{{\cal{F}}_h^I}\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\avg{\ddiv{\textbf{{w}}}}}\,dS - \int_{{\cal{F}}_h^I}\jmp{\ddiv{\mathbf{u}}}_{\boldsymbol{\nu}}\cdot\overline{\avg{\textbf{{w}}}}\,dS +\int_{\Gamma} \ddiv{\mathbf{u}}\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS
- \int_{\Sigma_R}\!\! \ddiv{\mathbf{u}}\,\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS. \end{eqnarray*} Then since $a^H_h (\mathbf{u},\textbf{{w}})=b^H_h(\mathbf{u},\textbf{{w}})+a_{1,h}^H(\mathbf{u},\textbf{{w}})$, we have that
\begin{eqnarray*} a^H_h (\mathbf{u},\textbf{{w}})& \! =\! & ik\int_{{\cal F}_h^I}\alpha_1\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\jmp{\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS-\frac{1}{ik}\int_{{\cal F}_h^I}\alpha_2\jmp{\ddiv\mathbf{u}}_{\boldsymbol{\nu}}\cdot\overline{\jmp{\ddiv\textbf{{w}}}}_{\boldsymbol{\nu}}\,dS\\ && -\int_{\Sigma_R}\left(\ddiv{\mathbf{u}}+k^2N_R (\mathbf{u}\cdot\boldsymbol{\nu})\right)\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS + \int_{\Gamma}\left(\ddiv{\mathbf{u}}+k^2G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu})\right)\overline{\textbf{{w}}\cdot\boldsymbol{\nu}}\,dS\nonumber\\ && -\frac{1}{ik}\int_{\Sigma_R}\delta(\ddiv\mathbf{u}+k^2N_R (\mathbf{u}\cdot\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2N_R (\textbf{{w}}\cdot\boldsymbol{\nu}))}\,dS\nonumber\\&& {-} \frac{1}{ik}\int_{\Gamma}\tau(\ddiv\mathbf{u}+k^2G_\Gamma^H(\mathbf{u}\cdot\boldsymbol{\nu}))\overline{(\ddiv\textbf{{w}}+k^2G_\Gamma^H(\textbf{{w}}\cdot\boldsymbol{\nu}))}\,dS\nonumber\\ &&+\int_{{\cal{F}}_h^I}\jmp{\mathbf{u}}_{\boldsymbol{\nu}}\overline{\avg{\ddiv{\textbf{{w}}}}}\,dS - \int_{{\cal{F}}_h^I}\jmp{\ddiv{\mathbf{u}}}_{\boldsymbol{\nu}}\cdot\overline{\avg{\textbf{{w}}}}\,dS. \end{eqnarray*} The result now follows from the weighted Cauchy--Schwarz inequality. \end{proof}
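The final step uses the elementary weighted Cauchy--Schwarz inequality $|\sum_j a_j\bar b_j|\le (\sum_j w_j|a_j|^2)^{1/2}(\sum_j w_j^{-1}|b_j|^2)^{1/2}$ for positive weights $w_j$, which is how the penalty weights migrate between the $DG$ and $DG^+$ factors. A minimal numeric sketch with random complex data (purely illustrative):

```python
import math
import random

random.seed(1)
n = 8
a = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
b = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
w = [random.uniform(0.1, 2.0) for _ in range(n)]  # positive weights

# |sum a_j conj(b_j)| <= (sum w_j|a_j|^2)^(1/2) * (sum |b_j|^2/w_j)^(1/2)
lhs = abs(sum(x * y.conjugate() for x, y in zip(a, b)))
rhs = math.sqrt(sum(wi * abs(x) ** 2 for wi, x in zip(w, a))) \
    * math.sqrt(sum(abs(y) ** 2 / wi for wi, y in zip(w, b)))
assert lhs <= rhs + 1e-12
```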
We now state a quasi-optimal error estimate with respect to the $DG$ and $DG^+$ norms. \begin{theorem}\label{TheoremQuasiOptimal}
Assume $\mathbf{v}\in\mathbf{W}^s(\mathcal{T}_h)$ is the analytical solution to problem~(\ref{prob-FwdProbV_HM}), and $\mathbf{v}_h^H$ the unique solution to problem~(\ref{DiscreteProblem}). Then $$\|\mathbf{v} - \mathbf{v}_h^H\|_{DG}\, \leq\, 2\inf_{\mathbf{w}_h\in\mathbf{W}_h}\, \|\mathbf{v} - \mathbf{w}_h\|_{DG^+}\, .$$ \end{theorem} \begin{proof}
Since $\textbf{{v}}-\textbf{{v}}_h^H\in \mathbf{W}^s(\mathcal{T}_h)$, for all $\textbf{{w}}_h\in\mathbf{W}_h$ we have
\begin{eqnarray*}
\|\textbf{{v}}-\textbf{{v}}^H_h\|^2_{DG} &=& \Im ( a^H_h (\textbf{{v}}-\textbf{{v}}^H_h,\textbf{{v}}-\textbf{{v}}^H_h) ) \, = \, \Im ( a^H_h (\textbf{{v}}-\textbf{{v}}^H_h,\textbf{{v}}-\textbf{{w}}_h) )
\, \leq
\, 2\|\textbf{{v}} - \textbf{{v}}^H_h\|_{DG}\, \|\textbf{{v}}-\textbf{{w}}_h\|_{DG^+} \, ,
\end{eqnarray*}
where the last inequality follows from the consistency of the discrete scheme (Galerkin orthogonality) and the continuity of the sesquilinear form in Proposition~\ref{BoundedForm}. Dividing by $\|\textbf{{v}}-\textbf{{v}}^H_h\|_{DG}$ and taking the infimum over $\textbf{{w}}_h\in\mathbf{W}_h$ completes the proof. \end{proof}
\subsection{An error bound in a mesh-independent norm}
We next derive a bound of the approximation error in terms of a mesh-independent norm. Ideally this would be the $L^2(\Omega_R)^2$-norm, but as in the case of Maxwell's equations (see \cite{HMP_Maxwell}) this is not possible and we derive an estimate in the $H({\rm{}div};\Omega_R) '$-norm. To this end, we start by bounding suitable mesh-independent norms in terms of the mesh-dependent norm $\Vert\cdot\Vert_{DG}$ on the vector space $\mathbf{W}^s (\mathcal{T}_h)$, $s>0$, which contains the Trefftz space $\mathbf{W}_h$.
Recall that any function in this space may be written using the $L^2(\Omega_R)^2$-orthogonal Helmholtz decomposition \begin{equation}
\textbf{{w}} = \textbf{{w}}_0 + \nabla p\quad\text{with } \textbf{{w}}_0\in H_0 (\mathrm{div}^0;\Omega _R),\, p\in H^1 (\Omega _R) \, .\label{helm1}
\end{equation}
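The two structural facts about (\ref{helm1}) used repeatedly below, that $\textbf{{w}}_0$ is divergence free and $L^2$-orthogonal to gradients, can be illustrated symbolically on the unit square; the stream function $\psi$ and potential $p$ below are illustrative choices, not taken from the analysis:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Divergence-free part: rotated gradient ("2D curl") of a stream function
# vanishing on the boundary of the unit square, so w0 . nu = 0 there
psi = sp.sin(sp.pi * x) * sp.sin(sp.pi * y)
w0 = sp.Matrix([sp.diff(psi, y), -sp.diff(psi, x)])

# Gradient part from an arbitrary smooth potential (illustrative choice)
p = x ** 2 * y
grad_p = sp.Matrix([sp.diff(p, x), sp.diff(p, y)])

# w0 is exactly divergence free ...
assert sp.simplify(sp.diff(w0[0], x) + sp.diff(w0[1], y)) == 0

# ... and L2-orthogonal to the gradient part over the unit square
ip = sp.integrate((w0.T * grad_p)[0], (x, 0, 1), (y, 0, 1))
assert sp.simplify(ip) == 0
```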
Also notice that, in terms of this decomposition, the property $ \textbf{{w}}|_K \in H^{1/2+s}(\mathrm{div};K) $ for all $K\in\mathcal{T}_h$ means that \begin{equation}\label{eq-regprop:p}
\textbf{{w}}_0|_K \in H^{1/2+s}(K)^2\, , \,\,
\Delta p|_K = \mathrm{div}\,\textbf{{w}}|_K \in H^{1/2+s}(K) \qquad \mbox{ for all } K\in\mathcal{T}_h \, . \end{equation} We will bound the $L^2(\Omega_R)^2$-norm of $\nabla p$ and also a weaker norm of $\textbf{{w}}_0$ by means of $\Vert\textbf{{w}}\Vert _{DG}$ using similar arguments to those in \cite{HMP_Maxwell}. In particular, we take the \emph{shape regularity} and \emph{quasi-uniformity} measures $$ s.r. (\mathcal{T}_h) = \max _{K\in\mathcal{T}_h}\frac{h_K}{d_K} \quad\text{ and }\quad q.u. (\mathcal{T}_h) = \min _{K\in\mathcal{T}_h}\frac{h_K}{h}\, , $$ where, for each $K\in\mathcal{T}_h$, we denote by $d_K$ the diameter of the largest ball contained in $K$.
\subsubsection{A bound of $\Vert\nabla p \Vert _{0,\Omega_R}$ by a duality argument}
We consider the adjoint problem of (\ref{prob-FwdProbV_HM}), which consists of finding $\boldsymbol{\phi}\in H(\mathrm{div};\Omega _R)$ such that \begin{equation}\label{prob-adjFwdProbV_HM} \begin{array}{l}
\displaystyle \int_{\Omega _R} \left( \frac{1}{k^2} \, \nabla\cdot\boldsymbol{\phi } \, \nabla\cdot\overline{\textbf{{z}}} - \boldsymbol{\phi}\cdot\overline{\textbf{{z}}} \right) d\textbf{{x}}
+ \int _{\Sigma _R} (N_R)^* (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, (\overline{\textbf{{z}}}\cdot\boldsymbol{\nu}) \, dS \\[1ex]
\displaystyle \hspace*{6cm} - \int _{\Gamma} {(G^H_{\Gamma})}^* (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, (\overline{\textbf{{z}}}\cdot\boldsymbol{\nu}) \, dS \, = \,
\int_{\Omega _R} \nabla p\cdot\overline{\textbf{{z}}} \, d\textbf{{x}} \, ,
\end{array} \end{equation} for all $\textbf{{z}}\in H(\mathrm{div};\Omega _R)$. Let us emphasize that (\ref{prob-adjFwdProbV_HM}) is well-posed and that its solution enjoys the following regularity.
\begin{lemma}\label{lemma-adjFwdProbVok}
For any $p\in H^1 (\Omega _R)$, if $H>0$ is sufficiently small then the adjoint problem (\ref{prob-adjFwdProbV_HM}) is well-posed. Moreover, for each $s\in (0,1/2)$
the solution has the regularity
$ \boldsymbol{\phi}\in H^{1/2+s} (\Omega_R)^2$, with
\[
\Vert\boldsymbol{\phi}\Vert _{1/2+s,\Omega_R} \leq CH^{-s}\Vert \nabla p \Vert _ {0,\Omega _R}\, , \] where $C>0$ is independent of $H$ and $p$. \end{lemma} \begin{proof}
The well-posedness of the adjoint problem (\ref{prob-adjFwdProbV_HM}) follows from our proof of the well-posedness of the original problem (\ref{prob-FwdProbV_HM}) in Theorem \ref{lemma-E!sol:semidiscrGIBCfwdprob_vec}.
Using the Helmholtz decomposition (\ref{helm1}) and reasoning as in the proof of Lemma \ref{lemma-FwdProbVok}, we see that the solution of the adjoint problem is the function $\boldsymbol{\phi}=\nabla q$ for $q\in H^1(\Omega _R)$ which solves the following equations in weak sense: \begin{equation*} \begin{array}{ll}
\displaystyle \Delta q+k^2q =-k^2p\quad & \mbox{in }\Omega _R \, ,\\
\displaystyle - q + (N_R)^* (\frac{\partial q}{\partial \boldsymbol{\nu}}) =0& \mbox{on }\Sigma_R \, ,\\[1ex] \displaystyle q + {(G^H_{\Gamma})}^* (\frac{\partial q}{\partial\boldsymbol{\nu}})=0 & \mbox{on }\Gamma \, .
\end{array} \end{equation*} Thus $q$ can be extended as a solution of a scattering problem to $\Omega$ with the adjoint radiation condition at infinity. Hence the regularity of $q$ is determined from the boundary condition on $\Gamma$ and in particular, since $(G^H_{\Gamma})^* (\frac{\partial q}{\partial \boldsymbol{\nu}})\in H^{1+s}(\Gamma)$ for all $0\leq s<1/2$ (see the remark after Assumption \ref{G}), we see that $q\in H^{3/2+s}(\Omega_R)$. Then using an inverse estimate guaranteed by the assumed quasi-uniformity of the boundary mesh, \begin{eqnarray*} \Vert q\Vert_{H^{3/2+s}(\Omega_R)}&\leq& C \Vert
{(G^H_{\Gamma})}^* (\frac{\partial q}{\partial\boldsymbol{\nu}})\Vert_{H^{1+s}(\Gamma)}
\leq C H^{-s}\Vert
{(G^H_{\Gamma})}^* (\frac{\partial q}{\partial\boldsymbol{\nu}})\Vert_{H^{1}(\Gamma)}
\\& \leq
& C H^{-s}\Vert
\nabla q\cdot\boldsymbol{\nu} \Vert_{H^{-1}(\Gamma)}\leq
\, C H^{-s}\Vert
\boldsymbol{\phi}\cdot\boldsymbol{\nu} \Vert_{H^{-1/2}(\Gamma)}\leq CH^{-s}\Vert\boldsymbol{\phi}\Vert_{H({\rm{}div};\Omega_R)}.
\end{eqnarray*}
Finally, the continuity estimate for the solution of the semidiscrete adjoint problem (\ref{prob-adjFwdProbV_HM}) provides the bound
$\Vert\boldsymbol{\phi}\Vert_{H({\rm{}div};\Omega_R)}\leq C\Vert \nabla p\Vert_{L^2(\Omega_R)}$ and completes the proof. \end{proof}
Notice that, by the $L^2(\Omega_R)^2$-orthogonality of the Helmholtz decomposition (\ref{helm1}), $$ \Vert\nabla p \Vert _{0,\Omega_R} ^2 \, = \, \int _{\Omega _R} \nabla p \cdot \overline{\nabla p } \, d\textbf{{x}} \, = \, \int _{\Omega _R}\nabla p \cdot \overline{\textbf{{w}}} \, d\textbf{{x}} \, . $$ Making use of the adjoint problem for $\textbf{{z}}=\textbf{{w}}$, $$ \begin{array}{l} \Vert\nabla p \Vert _{0,\Omega_R} ^2
\, = \,
\displaystyle \int_{\Omega _R} \left( \frac{1}{k^2} \, \nabla\cdot\boldsymbol{\phi } \, \nabla \cdot \overline{\textbf{{w}}} - \boldsymbol{\phi}\cdot\overline{\textbf{{w}}} \right) d\textbf{{x}}
+ \int _{\Sigma _R} (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, \overline{N_R(\textbf{{w}}\cdot\boldsymbol{\nu})} \, dS \\
\hspace*{6cm} \displaystyle
- \int _{\Gamma} (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, \overline{{G^H_{\Gamma}} (\textbf{{w}}\cdot\boldsymbol{\nu})} \, dS \, .
\end{array} $$ If we split the domain $\Omega_R$ according to the mesh $\mathcal{T}_h$ and then integrate by parts in each $K\in\mathcal{T}_h$, using the Trefftz property of $\mathbf{w}\in\mathbf{W}^s(\mathcal{T}_h)$, we arrive at $$ \begin{array}{l} \Vert\nabla p \Vert _{0,\Omega_R} ^2
\, = \,
\displaystyle
\int _{\mathcal{F}_h^I}\frac{1}{\sqrt{\alpha _2 \, k^3}}\, \boldsymbol{\phi} \cdot \boldsymbol{\nu} \,\frac{\sqrt{\alpha _2}}{\sqrt{k}} \, \jmp{ \nabla\cdot \overline{\textbf{{w}}} }_{\boldsymbol{\nu}} \, dS \\
\hspace*{4cm} \displaystyle
+ \int _{\Sigma _R} \frac{\sqrt{k}}{\sqrt{\delta } k^2} (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, \, \frac{\sqrt{\delta}}{\sqrt{k} } \left( \overline{ \nabla \cdot \textbf{{w}} + k^2\, N_R(\textbf{{w}} \cdot\boldsymbol{\nu}) } \right) dS \\
\hspace*{4cm} \displaystyle
- \int _{\Gamma} \frac{\sqrt{ k }}{\sqrt{\tau} \, k^2 }\, (\boldsymbol{\phi}\cdot\boldsymbol{\nu}) \, \frac{\sqrt{\tau} }{\sqrt{ k }} \left(\overline{ \nabla\cdot {\textbf{{w}}} + k^2\, G^H_{\Gamma} ( {\textbf{{w}}}\cdot\boldsymbol{\nu}) } \right) dS \, .
\end{array} $$ Then, by the Cauchy--Schwarz inequality and the lower bound of the DG-norm (\ref{DGnormbound}), \begin{equation}\label{eq-boundw0_1} \Vert\nabla p \Vert _{0,\Omega_R} ^2
\, \leq \,
(\mathcal{G}(\boldsymbol{\phi}) ) ^{1/2} \, \Vert \textbf{{w}}\Vert _{DG} \, , \end{equation} where we denote $$ \begin{array}{l} \mathcal{G}(\boldsymbol{\phi})
\, = \,
\displaystyle \frac{1}{k^3} \big(
\int_{\mathcal{F}_h^I} \frac{1}{\alpha _2 }\, |\boldsymbol{\phi}\cdot\boldsymbol{\nu}|^2 \, dS
+ \int _{\Sigma _R} \frac{1}{\delta } \, |\boldsymbol{\phi}\cdot\boldsymbol{\nu}|^2 \, dS
+ \int _{\Gamma} \frac{1}{\tau }\, |\boldsymbol{\phi}\cdot\boldsymbol{\nu}|^2 \, dS \big)
\, .
\end{array} $$ In order to deal with this last term, we use the following trace inequality (see \cite[eq. (24)]{HMPhp}): $$
\Vert \eta \Vert ^2_{0,\partial K} \,\leq\, C \left( \frac{1}{h_K} \, \Vert \eta\Vert _{0,K}^2 + h_K^{2s}\, |\eta|_{1/2+s,K}^2\right)\quad \forall \eta\in H^{1/2+s}( K), K\in\mathcal{T}_h; $$ indeed, taking $\eta $ to be each entry of $\boldsymbol{\phi} \in H^{1/2+s} (\Omega _R) ^2$, we deduce $$ \begin{array}{l} \mathcal{G}(\boldsymbol{\phi})
\, \leq \,
\displaystyle \frac{1}{k^3 \min \{ \underline{\alpha }_2 , \underline{\delta } , \underline{ \tau } \} }
\sum _{K\in\mathcal{T}_h }
\int_{\partial K } |\boldsymbol{\phi}\cdot\boldsymbol{\nu}|^2 \, dS \\
\hspace*{3cm} \displaystyle
\leq
\displaystyle \frac{C}{k^3\min \{ \underline{\alpha }_2, \underline{\delta } , \underline{ \tau } \} }
\sum _{K\in\mathcal{T}_h }
\left( \frac{1}{h_K} \, \Vert \boldsymbol{\phi} \Vert _{0, K}^2 + \frac{h_K^{2s}}{H^{2s}}\, |\boldsymbol{\phi}|_{1/2+s,K}^2\right)
\, ,
\end{array} $$ where $\underline{\alpha }_2 = \mathrm{inf} _{\mathcal{F}_h^I } \alpha _2$, $\underline{\delta } = \mathrm{inf} _{\mathcal{F}_h^R } \delta $ and $\underline{\tau } = \mathrm{inf} _{\mathcal{F}_h^{\Gamma} } \tau $ (we have assumed $\underline{\alpha }_2 , \underline{\delta },\underline{\tau} >0$). Recalling that $q.u. (\mathcal{T}_h) \, h\leq h_K \leq h$ and applying Lemma \ref{lemma-adjFwdProbVok}, we have $$ \begin{array}{l} \mathcal{G}(\boldsymbol{\phi})
\,
\leq \,
\displaystyle \frac{C}{k^3 \min \{ \underline{\alpha }_2 , \underline{\delta } , \underline{ \tau } \} }
\sum _{K\in\mathcal{T}_h }
\left( \frac{1}{q.u. (\mathcal{T}_h) \, h } \, \Vert \boldsymbol{\phi} \Vert _{0, K}^2 + h^{2s}H^{-2s}\, |\boldsymbol{\phi}|_{1/2+s,K}^2\right) \\[2ex]
\hspace*{2cm} \displaystyle
\,
\leq \,
\frac{C}{k^3\min \{ \underline{\alpha }_2 , \underline{\delta } , \underline{ \tau } \} }\,
\Big( \frac{1}{q.u. (\mathcal{T}_h) \, h } \, \Vert \boldsymbol{\phi} \Vert _{0,\Omega _R }^2 + h^{2s}H^{-2s}\, |\boldsymbol{\phi}|_{1/2+s,\Omega _R}^2\Big)
\\[2ex]
\hspace*{2cm} \displaystyle
\,\leq\, \frac{C}{\min \{ \underline{\alpha }_2 , \underline{\delta } , \underline{ \tau } \} }\,
\Big( \frac{1}{q.u. (\mathcal{T}_h) \, h } + h^{2s}H^{-2s}\Big)\, H^{-2s}\, \Vert \nabla p \Vert _ {0,\Omega _R} ^2
\, .
\end{array} $$ Therefore, using (\ref{eq-boundw0_1}), $$ \Vert \nabla p \Vert _{0 ,\Omega_R}
\, \leq \,
\left( \frac{C\, H^{-2s} }{\min \{ \underline{\alpha }_2, \underline{\delta } , \underline{ \tau } \} } \, \Big( \frac{1}{q.u. (\mathcal{T}_h) \, h } + h^{2s}\, H^{-2s} \Big) \right) ^{\! 1/2} \, \Vert \textbf{{w}}\Vert _{DG} \, , $$
so we have proved the following lemma: \begin{lemma}\label{lem1} For sufficiently small $H>0$ and any $s\in (0,1/2)$, there is a constant $C$ (depending on $s$ but independent of $h$ and $H$), such that $$ \Vert \nabla p \Vert _{0,\Omega_R}
\, \leq \,
CH^{-s} ( h^{-1/2} + h^{s}\, H^{-s} ) \, \Vert \mathbf{w}\Vert _{DG} \, , $$ where $\mathbf{w}\in \mathbf{W}^s(\mathcal{T}_h)$ and $p\in H^1 (\Omega_R)$ are related by (\ref{helm1}). \end{lemma}
\subsubsection{A bound of $\Vert\mathbf{w} _0 \Vert _{ H(\mathrm{curl};\Omega_R ) '}$ by a duality argument}
For any trial function $\mathbf{u}\in H(\mathrm{curl};\Omega_R)$ we consider its $L^2 (\Omega _R)^2$-orthogonal Helmholtz decomposition as in (\ref{helm1}): \begin{equation}\label{helm1bis} \mathbf{u} = \mathbf{u}_0 + \nabla q\quad\text{with } \mathbf{u}_0\in H_0(\mathrm{div}^0;\Omega _R),\, q\in H^1 (\Omega _R) \, . \end{equation} Then, using the $L^2 (\Omega _R)^2$-orthogonality of the Helmholtz decompositions as well as the elementwise Trefftz property $\textbf{{w}} = -k^{-2}\,\nabla\nabla\cdot\textbf{{w}}$, $$ \int_{\Omega _R }\textbf{{w}}_0 \cdot \mathbf{u} \, d\textbf{{x}} \, = \, \int_{\Omega _R }\textbf{{w}}_0 \cdot \mathbf{u}_0 \, d\textbf{{x}} \, = \, \int_{\Omega _R }\textbf{{w}} \cdot \mathbf{u}_0 \, d\textbf{{x}} \, = \, - \frac{1}{k^2 } \, \sum _{K\in\mathcal{T}_h }
\int_{ K } \nabla \nabla\cdot \textbf{{w}} \cdot \mathbf{u} _0 \, d\textbf{{x}} \, , $$ so that, integrating by parts on each element, and using that $\mathbf{u}_0\in H_0(\mathrm{div}^0;\Omega_R)$ (so $\mathrm{div}\,\mathbf{u}_0=0$, and the normal component $\mathbf{u}_0\cdot\boldsymbol{\nu}$ is continuous across interior edges and vanishes on $\Sigma_R$ and $\Gamma$), $$ \begin{array}{l} \displaystyle\int_{\Omega _R }\textbf{{w}}_0 \cdot \mathbf{u} \, d\textbf{{x}} \, = \, - \frac{1}{k^2 } \, \sum _{K\in\mathcal{T}_h }
\int_{ \partial K } \nabla\cdot \textbf{{w}} \, \mathbf{u}_0 \cdot\boldsymbol{\nu} \, dS \, =
\\ \hspace*{1cm} = \, - \displaystyle \frac{1}{k^2 } \left( \int _{\mathcal{F}^I_h } \jmp{ \nabla\cdot \textbf{{w}}} _{ \boldsymbol{\nu} } \cdot \mathbf{u}_0 \, dS + \int _{\mathcal{F}^R_h } \nabla\cdot \textbf{{w}} \, \boldsymbol{\nu} \cdot \mathbf{u}_0 \, dS - \int _{\mathcal{F}^{\Gamma}_h } \nabla\cdot \textbf{{w}} \, \boldsymbol{\nu} \cdot \mathbf{u}_0 \, dS \right) \, = \\ \hspace*{1cm}
= - \displaystyle \frac{1}{k^2 } \, \int _{\mathcal{F}^I_h } \jmp{ \nabla\cdot \textbf{{w}} } _{ \boldsymbol{\nu} } \cdot \mathbf{u} _0 \, dS \, .
\end{array} $$ Therefore, by the Cauchy--Schwarz inequality, $$
|\int_{\Omega _R }\textbf{{w}} _0 \cdot \mathbf{u} \, d\textbf{{x}} | \, \leq \,
\displaystyle \frac{1}{k^2 } \,
\Big( \int _{\mathcal{F}^I_h } \frac{\alpha _2}{k} \, |\jmp{ \nabla\cdot \textbf{{w}} _0 } _{ \boldsymbol{\nu} } |^2 \, dS \Big) ^{1/2} \,
\Big( \int _{\mathcal{F}^I_h } \frac{k}{\alpha _2 \, } \, |\mathbf{u} _0 \cdot \boldsymbol{\nu} |^2 \, dS \Big) ^{1/2} \, , $$ and we have $$
|\int_{\Omega _R }\textbf{{w}} _0 \cdot\mathbf{u} \, d\textbf{{x}} | \, \leq \,
\displaystyle \frac{1}{\sqrt{ k^3\, \underline{\alpha}_2 }} \,
\Vert \textbf{{w}} \Vert _{DG } \,
\Big( \sum _{K\in\mathcal{T}_h } \int_{ \partial K } | \mathbf{u}_0 \cdot \boldsymbol{\nu} |^2 \, dS \Big) ^{1/2} \, . $$ Note that, taking the $\mathrm{curl}$ of (\ref{helm1bis}), we obtain $\mathrm{curl}\,\mathbf{u}_0=\mathrm{curl}\,\mathbf{u}\in L^2(\Omega_R)$; this allows us to apply again the trace inequality \cite[eq. (24)]{HMPhp} for $\mathbf{u}_0\in H_0(\mathrm{div}^0;\Omega _R)\cap H (\mathrm{curl};\Omega _R)\hookrightarrow H^{1/2+s} (\Omega _R)$ for any $s\in [0,1/2)$: $$
\Vert \mathbf{u}_0 \cdot \boldsymbol{\nu} \Vert ^2_{0,\partial K} \,\leq \, \Vert \mathbf{u}_0 \Vert ^2_{0,\partial K} \,\leq\, C \left( \frac{1}{q.u.(\mathcal{T}_h ) \, h} \, \Vert \mathbf{u}_0\Vert _{0,K}^2 + h^{2s}\, | \mathbf{u}_0 |_{1/2+s,K}^2\right) . $$ Then, summing over $K\in\mathcal{T}_h$ and making use of the continuity of the embedding $H_0(\mathrm{div}^0;\Omega _R)\cap H (\mathrm{curl};\Omega _R)\hookrightarrow H^{1/2+s} (\Omega _R)$, we deduce $$ \begin{array}{l}
\displaystyle\sum _{K\in\mathcal{T}_h } \Vert \mathbf{u}_0\cdot \boldsymbol{\nu} \Vert ^2_{0,\partial K}
\,\leq\, C \left( \frac{1}{q.u.(\mathcal{T}_h ) \, h} \, \Vert \mathbf{u}_0\Vert _{0,\Omega _R }^2 + h^{2s}\, | \mathbf{u}_0 |_{1/2+s,\Omega _R }^2\right) \,\leq\\ \hspace*{2cm} \displaystyle \,\leq\, C \left( \frac{1}{q.u.(\mathcal{T}_h ) \, h} + h^{2s} \right) (\Vert \mathbf{u}_0 \Vert _{0,\Omega _R }^2 + \Vert \mathrm{curl} \mathbf{u}_0 \Vert _{0,\Omega _R }^2 ) \, .
\end{array} $$ But recalling the $L^2(\Omega_R)^2$-orthogonality of the Helmholtz decomposition (\ref{helm1bis}),
$$ \Vert \mathbf{u} \Vert _{\mathrm{curl},\Omega_R} ^2 \, = \, \Vert \mathbf{u}_0 \Vert _{0,\Omega_R} ^2 + \Vert \nabla q \Vert _{0,\Omega_R} ^2 + \Vert \mathrm{curl}\mathbf{u}_0 \Vert _{0,\Omega_R} ^2 \geq \Vert \mathbf{u}_0 \Vert _{0,\Omega_R} ^2 + \Vert \mathrm{curl}\mathbf{u}_0 \Vert _{0,\Omega_R} ^2 \, , $$ so that $$
\displaystyle\sum _{K\in\mathcal{T}_h } \Vert \mathbf{u}_0\cdot\boldsymbol{\nu} \Vert ^2_{0,\partial K} \,\leq\, C \left( \frac{1}{q.u.(\mathcal{T}_h ) \, h} + h^{2s} \right) \Vert \mathbf{u}\Vert _{\mathrm{curl},\Omega_R} ^2 \, . $$ Therefore $$
|\int_{\Omega _R }\textbf{{w}} _0 \cdot \mathbf{u} \, d\textbf{{x}} | \, \leq \,
\frac{C \, ( ( q.u.(\mathcal{T}_h ) \, h )^{-1} + h^{2s} ) ^{1/2} }{\sqrt{k^3\, \underline{\alpha}_2}} \,
\Vert \textbf{{w}} \Vert _{DG } \, \Vert \mathbf{u}\Vert _{\mathrm{curl},\Omega_R} \, , $$ and we conclude that $$ \Vert \textbf{{w}} _0 \Vert _{ H(\mathrm{curl};\Omega _R)' } \, \leq \,
\frac{C \, ( ( q.u.(\mathcal{T}_h ) \, h )^{-1} + h^{2s} ) ^{1/2} }{\sqrt{k^3 \, \underline{\alpha}_2} } \,
\Vert \textbf{{w}} \Vert _{DG } \, . $$ We have proved the following lemma: \begin{lemma}\label{lem2} For each $s\in (0,\infty)$, there exists a constant $C$ (depending on $s$ but independent of $h>0$ and $H>0$ small enough) such that $$ \Vert \mathbf{w} _0 \Vert _{ H(\mathrm{curl};\Omega _R)' } \, \leq \, C h^{-1/2} \Vert \mathbf{w} \Vert _{DG } \, , $$ for all $\mathbf{w}\in\mathbf{W}^s (\mathcal{T}_h)$ and $\mathbf{w}_0\in H_0(\mathrm{div}^0;\Omega_R)$ satisfying (\ref{helm1}). \end{lemma}
Combining Lemmas~\ref{lem1} and \ref{lem2}, we obtain the following theorem, which summarizes the results of this section.
\begin{theorem}\label{th-conv_Hh2H} For all sufficiently small $H$ and for each $s>0$, there is a constant $C$ depending on $s$ but independent of $h$, $H$, $\mathbf{v}^H$ and $\mathbf{v}_h^H$ such that
\[
\Vert \mathbf{v}^H-\mathbf{v}^H_h\Vert_{H(\mathrm{curl};\Omega_R)'}\,\leq\, C\,H^{-s}\, (h^{-1/2}+ h^{s}\, H^{-s})\, \inf_{\mathbf{w}_h\in\mathbf{W}_h} \Vert \mathbf{v}-\mathbf{w}_h\Vert_{DG^+} \, .
\]
\end{theorem}
\begin{proof}
Let $\mathbf{w}=\mathbf{v}^H-\mathbf{v}_h^H$ in (\ref{helm1}). Then
\[
\Vert \mathbf{v}^H-\mathbf{v}_h^H\Vert_{H(\mathrm{curl};\Omega_R)'}\leq
\Vert \mathbf{w}_0\Vert_{H(\mathrm{curl};\Omega_R)'}+
\Vert \nabla p\Vert_{H(\mathrm{curl};\Omega_R)'}.
\]
An application of Lemmas \ref{lem1} and \ref{lem2} completes the proof. \end{proof}
\begin{remark} We could avoid the growing factor $H^{-s}$ by using $\mathcal{C}^1$ functions to compute $G_\Gamma^H$; using a trigonometric basis is one case in which $H^{-s}$ can be removed. In the general case the $H^{-s}$ terms are not unexpected. Notice also that we can choose a fine mesh on the boundary $\Gamma$ and still approximate the scattering problem with sufficient accuracy.

Using approximation results from \cite{HMP_approx}, this theorem could be converted into order estimates. Because of the poor regularity of $\mathbf{v}^H$ (due to the reduced regularity imposed by the regularity of $G_\Gamma^H$) we expect to need a refined $h$-grid near the boundary unless we use smooth basis functions on $\Gamma$.
\end{remark}
Our final result combines Theorem \ref{lemma-E!sol:semidiscrGIBCfwdprob_vec} and Theorem \ref{th-conv_Hh2H} to give an error estimate for the fully discrete problem. \begin{corollary} Under the conditions of Theorem~\ref{th-conv_Hh2H}, there is a constant $C$ (depending on $s\in (0,\infty)$ but independent of $h$, of $H$ small enough, and of $\mathbf{v}^H$ and $\mathbf{v}_h^H$) such that
\[ \begin{array}{rl} \displaystyle \Vert \mathbf{v}-\mathbf{v}^H_h\Vert_{H(\mathrm{curl};\Omega _R)'}\, \leq\, & \displaystyle C H^{-s}\, (h^{-1/2}+h^{s}H^{-s}) \, \Big( \inf_{\mathbf{w}_h\in\mathbf{W}_h} \Vert \mathbf{v}-\mathbf{w}_h\Vert_{DG^+}+\Big. \\[1ex] & \displaystyle \qquad \Big. \Vert (G_\Gamma^H-G_\Gamma)(\mathbf{v}\!\cdot\!\boldsymbol{\nu})\Vert_{H^{1/2}(\Gamma)}
+\Vert (G_\Gamma^H-G_\Gamma)g\Vert_{H^{1/2}(\Gamma)}\Big) . \end{array}
\]
\end{corollary}
\begin{proof} We have already shown in Theorem \ref{lemma-E!sol:semidiscrGIBCfwdprob_vec} that $\mathbf{v}^H$ converges to $\mathbf{v}$ in $H(\mathrm{div};\Omega_R)$, a norm stronger than that of $H(\mathrm{curl};\Omega_R)'$. Moreover, we have shown in Theorem \ref{th-conv_Hh2H} that $\mathbf{v}^H_h$ converges to $\mathbf{v}^H$ in $H(\mathrm{curl};\Omega_R)'$. Combining these two results completes the proof.
\end{proof}
\input numerics.tex
\section{Conclusion}\label{concl} We have provided an error analysis of the UWVF discretization of the Helmholtz equation in the presence of a generalized impedance boundary condition. The error analysis is supported by limited numerical experiments.
Several extensions and further numerical tests clearly need to be performed. In particular, our analysis and numerical tests are for a smooth boundary; analysis for a non-smooth boundary and appropriate mesh refinement strategies near corners need to be developed.
\end{document} |
\begin{document}
\title{Stochastic homogenization of convolution type operators}
\author{ A.~Piatnitski$^{\circ,\sharp}$,
E.~Zhizhina$^\sharp$}
\date{}
\maketitle
\parskip 0.04 truein
\begin{center} $^\sharp$ Institute for Information Transmission Problems RAS\\ Bolshoi Karetny per., 19, Moscow, 127051, Russia \end{center}
\begin{center} $^\circ$ Arctic University of Norway, UiT, campus Narvik,\\ Postbox 385, 8505 Narvik, Norway\\ \end{center}
\begin{abstract} This paper deals with the homogenization problem for convolution type non-local operators in random statistically homogeneous ergodic media. Assuming that the convolution kernel has a finite second moment and satisfies the uniform ellipticity and certain symmetry conditions, we prove the almost sure homogenization result and show that the limit operator is a second order elliptic differential operator with constant deterministic coefficients. \end{abstract}
\noindent {\bf Keywords}: \ stochastic homogenization, non-local random operators, convolution type kernels \\
\noindent {\bf AMS Subject Classification}: \ 35B27, 45E10, 60H25, 47B25
\section{Introduction}
The paper deals with a homogenization problem for integral operators of convolution type in $\mathbb R^d$ with dispersal kernels that have random statistically homogeneous ergodic coefficients. For such operators, under natural integrability, moment and uniform ellipticity conditions, as well as a symmetry condition, we prove a homogenization result and study the properties of the limit operator.
Integral operators with a convolution type kernel are of great interest both from the mathematical point of view and due to various important applications in other fields. Among such applications are models of population dynamics and ecological models, see \cite{OFetal}, \cite{DEE} and references therein,
non-local diffusion problems, see \cite{AMRT, BCF},
continuous particle systems, see \cite{FKK, KPZ}, and image processing algorithms, see \cite{GiOs}. In the cited works only the case of homogeneous environments has been considered; in this case the corresponding dispersal kernel depends only on the displacement $y-x$. However, many applications deal with non-homogeneous environments. Such environments are described in terms of integral operators whose dispersal kernels depend not only on the displacement $x-y$ but also on the starting and ending positions $x$, $y$.
When studying the large-time behaviour of evolution processes in these environments it is natural to perform the diffusive scaling in the corresponding integral operators and to consider the homogenization problem for the obtained family of operators with a small positive parameter. In what follows we call this parameter $\varepsilon$.
The case of environments with periodic characteristics has been studied in the recent work \cite{PiZhi17}. It has been shown that under natural moment and symmetry conditions on the kernel the family of rescaled operators admits homogenization, and that the Central Limit Theorem and the Invariance Principle hold for the corresponding jump Markov process. Interesting homogenization problems for periodic operators containing both a second order elliptic operator and a nonlocal L\'evy type operator have been considered in \cite{Arisawa} and \cite{Sandric2016}.
In the present paper we consider the more realistic case of environments with random statistically homogeneous characteristics. More precisely, we assume that the dispersal kernel of the studied operators has the form $\Lambda(x,y)a(x-y)$, $x,\,y\in\mathbb R^d$, where $a(z)$ is a deterministic even function that belongs to $L^1(\mathbb R^d)\cap L^2_{\rm loc}(\mathbb R^d)$ and has finite second moments, while $\Lambda(x,y)=\Lambda(x,y,\omega)$ is a statistically homogeneous symmetric ergodic random field that satisfies the uniform ellipticity conditions $0<\Lambda^-\leq \Lambda(x,y)\leq \Lambda^+$.\\
Making a diffusive scaling we obtain the family of operators \begin{equation}\label{L_u_biseps} (L^\varepsilon u)(x) \ = \ \varepsilon^{-d-2} \int\limits_{\mathbb R^d} a\Big(\frac{x-y}{\varepsilon}\Big) \Lambda\Big(\frac{x}{\varepsilon},\frac{y}{\varepsilon}\Big) (u(y) - u(x)) dy, \end{equation}
where a positive scaling factor $\varepsilon$ is a parameter.
For simplicity of presentation we assume in this paper that $\Lambda(x,y)=\mu(x)\mu(y)$ with a statistically homogeneous ergodic field $\mu$. However, all our results remain valid for generic statistically homogeneous symmetric random fields $\Lambda(x,y)$ that satisfy the above ellipticity conditions.
The main goal of this work is to investigate the limit behaviour of $L^\varepsilon$ as $\varepsilon\to 0$. We are going to show that the family $L^\varepsilon$ converges almost surely to a second order elliptic operator with constant deterministic coefficients in the so-called $G$-topology, that is, for any $m>0$ the family of operators $(-L^\varepsilon+m)^{-1}$ almost surely converges strongly in $L^2(\mathbb R^d)$ to the operator $(-L^0+m)^{-1}$, where $L^0=\Theta^{ij}\frac{\partial^2}{\partial x^i\partial x^j}$ and $\Theta$ is a positive definite constant matrix.
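This convergence can be illustrated with a minimal numerical sketch (our own toy experiment, not part of the paper's argument). We take $d=1$ and $\mu\equiv 1$, so that no corrector is needed and the limit coefficient is the classical one, $\Theta=\sigma^2/2$; the kernel, datum and grid parameters below are our own choices.

```python
import numpy as np

# 1D toy: with mu = 1 the rescaled operator should converge, in the
# resolvent sense, to Theta u'' with Theta = sigma^2/2.  Kernel: standard
# Gaussian, so a_1 = 1, sigma^2 = 1 and Theta = 1/2.
L, N, m = 8.0, 321, 1.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
a = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
f = -np.exp(-x**2)                       # smooth, rapidly decaying datum

def solve_nonlocal(eps):
    """Solve (L^eps - m) u = f by a Riemann-sum discretization."""
    Z = (x[:, None] - x[None, :]) / eps
    W = a(Z) * dx / eps**3               # eps^{-d-2} scaling with d = 1
    M = W - np.diag(W.sum(axis=1))       # encodes the difference u(y) - u(x)
    return np.linalg.solve(M - m * np.eye(N), f)

# homogenized problem Theta u'' - m u = f via second-order differences
Theta = 0.5
D2 = (np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
      - 2.0 * np.eye(N)) / dx**2
u0 = np.linalg.solve(Theta * D2 - m * np.eye(N), f)

errs = [np.linalg.norm(solve_nonlocal(e) - u0) * np.sqrt(dx)
        for e in (0.5, 0.25)]
```

Halving $\varepsilon$ should roughly quarter the discrepancy, consistent with the $O(\varepsilon^2)$ consistency error of the two-term expansion.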
There is a vast literature devoted to the homogenization theory of differential operators, which is by now a well-developed area; see for instance the monographs \cite{BLP} and \cite{JKO}. The first homogenization results for divergence form differential operators with random coefficients were obtained in the pioneering works \cite{Ko78} and \cite{PaVa79}. In these works it was shown that the generic divergence form second order elliptic operator with random statistically homogeneous coefficients admits homogenization. Moreover, the limit operator has constant coefficients; in the ergodic case these coefficients are deterministic.
Later on a number of important homogenization results have been obtained for various elliptic and parabolic differential equations and system of equations in random stationary media. The reader can find many references in the book \cite{JKO}.
Homogenization of elliptic difference schemes and discrete operators in statistically homogeneous media has been performed in \cite{Ko87}, \cite{Ko86}. Also, in \cite{Ko86} several limit theorems have been proved for random walks in stationary discrete random media that possess different types of symmetry.
To the best of our knowledge, the existing literature contains no results on stochastic homogenization of convolution type integral operators with a dispersal kernel that has stationary rapidly oscillating coefficients.
In the one-dimensional case a homogenization problem for operators that have both local and non-local parts has been considered in \cite{Rho_Var2008}. This work deals with scaling limits of the solutions to stochastic differential equations in dimension one with stationary coefficients driven by Poisson random measures and Brownian motions. An annealed convergence theorem is proved, in which the limit exhibits a diffusive or superdiffusive behavior, depending on whether or not the Poisson random measure has a finite second moment. An essential assumption in that work is that the diffusion coefficient does not degenerate.
Our approach relies on asymptotic expansion techniques and on the so-called corrector. As often happens in the case of random environments, we cannot claim the existence of a stationary corrector. Instead, we construct a corrector which is a random field in $\mathbb R^d$ with stationary increments and which almost surely has sublinear growth in $L^2(\mathbb R^d)$. \\ When substituting the two leading terms of the expansion into the original equation, we obtain discrepancies that are oscillating functions with zero average. Some of these functions are not stationary.
In order to show that the contributions of these discrepancies are asymptotically negligible we add to the expansion two extra terms. The necessity of constructing these terms is essentially related to the fact that, in contrast with the case of elliptic differential equations, the resolvent of the studied operator is not locally compact in $L^2(\mathbb R^d)$.
The paper is organized as follows:
In Section \ref{s_pbmset} we provide the detailed setting of the problem and formulate the main result of this work.
The leading terms of the ansatz for a solution of equation $(L^\varepsilon-m)u^\varepsilon=f$ with $f\in C_0^\infty(\mathbb R^d)$ are introduced in Section \ref{s_asyexp}. Also in this section we outline the main steps of the proof of our homogenization theorem.
Then in Section \ref{s_corr} we construct the principal corrector in the asymptotic expansion and study the properties of this corrector.
Section \ref{s_addterms} is devoted to constructing two additional terms of the expansion of $u^\varepsilon$. Then we introduce the effective matrix and prove its positive definiteness.
Estimates for the remainder in the asymptotic expansion are obtained in Section \ref{s_estrem}.
Finally, in Section \ref{s_proofmain} we complete the proof of the homogenization theorem.
\section{Problem setup and main result}\label{s_pbmset}
\noindent We consider a homogenization problem for a random convolution type operator of the form \begin{equation}\label{L_u} (L_\omega u)(x) \ = \ \mu(x,\omega) \int\limits_{\mathbb R^d} a(x-y) \mu(y,\omega) (u(y) - u(x)) dy. \end{equation} For the function $a(z)$ we assume the following: \begin{equation}\label{A1} a(z) \in L^{1}(\mathbb R^d) \cap L^{2}_{\rm loc}(\mathbb R^d), \quad a(z) \ge 0; \quad a(-z) = a(z), \end{equation} and \begin{equation}\label{M2}
\| a \|_{L^1(\mathbb R^d)} = \int\limits_{\mathbb R^d} a(z) \ dz = a_1 < \infty; \quad \sigma^2 = \int\limits_{\mathbb R^d} |z|^2 a(z) \ dz < \infty. \end{equation} We also assume that \begin{equation}\label{add} \mbox{there exists a constant} \; c_0>0 \; \mbox{ and a cube } \; {\bf B} \subset \mathbb R^d, \; \mbox{ such that } \; a(z) \ge c_0 \quad \mbox{for all } \; z \in {\bf B}. \end{equation} This additional condition on $a(z)$ is naturally satisfied for regular kernels, and we introduce \eqref{add} for simplicity of presentation. Assumption \eqref{add} essentially simplifies the derivation of inequality \eqref{L2B}, on which the proof of the smallness of the first corrector is based, see Proposition \ref{1corrector} below. We notice that inequality \eqref{L2B} can also be derived without assumption \eqref{add}; however, in this case additional measure-theoretic arguments are required. \\[5pt] Let $(\Omega,\mathcal{F}, \mathbb P)$ be a standard probability space. We assume that the random field $ \mu(x,\omega)= {\bm\mu} (T_x \omega) $ is stationary and bounded from above and from below: \begin{equation}\label{lm} 0< \alpha_1 \le \mu(x,\omega) \le \alpha_2 < \infty; \end{equation} here ${\bm\mu} (\omega) $ is a random variable, and $T_x$, $x\in \mathbb R^d$, is an ergodic group of measurable transformations acting in the $\omega$-space $\Omega$, $T_x:\Omega \mapsto\Omega$, and possessing the following properties: \begin{itemize}
\item $T_{x+y}=T_x\circ T_y\quad\hbox{for all }x,\,y\in\mathbb R^d,\quad T_0={\rm Id}$,
\item $\mathbb P(A)=\mathbb P(T_xA)$ for any $A\in\mathcal{F}$ and any $x\in\mathbb R^d$,
\item $T_x$ is a measurable map from $\mathbb R^d\times \Omega$ to $\Omega$, where $\mathbb R^d$ is equipped
with the Borel $\sigma$-algebra. \end{itemize}
Let us consider the following family of operators \begin{equation}\label{L_eps} (L^{\varepsilon}_\omega u)(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \Big( \frac{x-y}{\varepsilon} \Big) \mu \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{y}{\varepsilon},\omega \Big) \Big( u(y) - u(x) \Big) dy. \end{equation} We are interested in the limit behavior of the operators $L^{\varepsilon}_\omega$ as $\varepsilon \to 0$. We are going to show that for a.e.\ $\omega$ the operators $L^{\varepsilon}_\omega$ converge to a differential operator with constant coefficients in the sense of resolvent convergence. Let us fix $m>0$ and $f \in L^2(\mathbb R^d)$, and define $u^{\varepsilon}$ as the solution of the equation \begin{equation}\label{u_eps} (L^{\varepsilon}_\omega - m) u^{\varepsilon} \ = \ f, \quad \mbox{ i.e. } \; u^{\varepsilon} \ = \ (L^{\varepsilon}_\omega - m)^{-1} f. \end{equation} Denote by $\hat L$ the following operator in $L^2(\mathbb R^d)$: \begin{equation}\label{L_hat} \hat L u \ = \ \sum_{i,j = 1}^d \Theta_{i j} \frac{\partial^2 u}{\partial x_i \ \partial x_j}, \quad {\cal D}(\hat L) = H^2(\mathbb R^d) \end{equation} with a positive definite matrix $\Theta = \{ \Theta_{i j} \}, \ i,j = 1, \ldots, d,$ defined below, see (\ref{Positive}). Let $u_0(x)$ be the solution of the equation \begin{equation}\label{u_0} \sum_{i,j = 1}^d \Theta_{i j} \frac{\partial^2 u_0}{\partial x_i \ \partial x_j} - m u_0 = f, \quad \mbox{ i.e. } \; u_0 \ = \ (\hat L - m)^{-1} f \end{equation} with the same right-hand side $f$ as in (\ref{u_eps}).
\begin{theorem}\label{T1} Almost surely for any $f \in L^2(\mathbb R^d)$ and any $m>0$ the convergence holds: \begin{equation}\label{t1}
\| (L^{\varepsilon}_\omega - m)^{-1} f - (\hat L - m)^{-1} f \|_{L^2(\mathbb R^d)} \ \to 0 \quad \mbox{ as } \; \varepsilon \to 0. \end{equation}
\end{theorem} The statement of Theorem \ref{T1} remains valid in the case of non-symmetric operators $L^\varepsilon$ of the form \begin{equation}\label{L_eps_ns} (L^{\varepsilon,{\rm ns}}_\omega u)(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \Big( \frac{x-y}{\varepsilon} \Big) \lambda \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{y}{\varepsilon},\omega \Big) \Big( u(y) - u(x) \Big) dy \end{equation} with $\lambda(z,\omega)=\bm{\lambda}(T_z\omega)$ such that $0< \alpha_1 \le \lambda(x,\omega) \le \alpha_2 < \infty$. In this case the equation \eqref{u_eps} reads \begin{equation}\label{u_eps_nssss} (L^{\varepsilon,{\rm ns}}_\omega - m) u^{\varepsilon} \ = \ f. \end{equation}
\begin{corollary}\label{cor_main}
Let $\lambda(z,\omega)$ and $\mu(z,\omega)$ satisfy condition \eqref{lm}. Then a.s. for any $f\in L^2(\mathbb R^d)$ and any $m>0$ the limit relation in \eqref{t1} holds true with $\hat L^{\rm ns} u \ = \ \sum_{i,j = 1}^d \Theta^{\rm ns}_{i j} \frac{\partial^2 u}{\partial x_i \ \partial x_j}$, \
$\Theta^{\rm ns}=\big(\mathbb E \big\{\frac{\bm\mu}{\bm\lambda}\big\}\big)^{-1} \Theta$, and $\Theta$ defined in
\eqref{Positive}. \end{corollary}
\section{Asymptotic expansion for $u^\varepsilon$ }\label{s_asyexp}
We begin this section by introducing a set of functions $f \in C_0^\infty(\mathbb R^d)$ such that $u_0 \ = \ (\hat L - m)^{-1} f\in C_0^\infty(\mathbb R^d)$. We denote this set by $ {\cal S}_0(\mathbb R^d)$. Observe that this set is dense in $L^2(\mathbb R^d)$. Indeed, if we take $\varphi(x)\in C^\infty(\mathbb R)$ such that $0\leq\varphi \leq 1$,
$\varphi=1$ for $x\leq 0$ and $\varphi=0$ for $x\geq 1$, then letting $f_n=(\hat L-m)\big(\varphi(|x|-n)(\hat L-m)^{-1}f(x)\big)$
one can easily check that $f_n\in C_0^\infty(\mathbb R^d)$ and $\|f_n-f\|_{L^2(\mathbb R^d)}\to0$, as $n\to\infty$.\\ We consider first the case when $f \in {\cal S}_0(\mathbb R^d)$ and denote by $Q$ a cube centered at the origin and such that $\mathrm{supp}(u_0)\subset Q$. We want to prove the convergence \begin{equation}\label{convergence1}
\| u^{\varepsilon} - u_0 \|_{L^2(\mathbb R^d)} \ \to 0, \quad \mbox{ as } \ \varepsilon \to 0, \end{equation} where the functions $u^\varepsilon$ and $u_0$ are defined in (\ref{u_eps}) and (\ref{u_0}), respectively. To this end we approximate the function $ u^\varepsilon (x, \omega)$ by means of the following ansatz \begin{equation}\label{v_eps} w^{\varepsilon}(x, \omega) \ = \ v^\varepsilon (x, \omega) + u_2^\varepsilon (x, \omega) + u_3^\varepsilon(x, \omega), \quad \mbox{ with } \; v^{\varepsilon}(x, \omega) \ = \ u_0(x)+ \varepsilon \theta \big(\frac{x}{\varepsilon}, \omega\big) \nabla u_0(x), \end{equation} where $\theta \big(z, \omega\big) $ is a vector function which is often called a corrector. It will be introduced later on as a solution of an auxiliary problem that does not depend on $\varepsilon$, see \eqref{korrkappa1}. A solution of this problem, $\theta(z,\omega)$ say, is defined up to an additive constant vector. \\ We set \begin{equation}\label{hi}
\chi^\varepsilon (z,\omega) = \theta (z,\omega)+ c^\varepsilon (\omega), \quad c^\varepsilon (\omega) = - \frac{1}{|Q|} \int\limits_Q \theta \big( \frac{x}{\varepsilon},\omega \big) dx. \end{equation} Observe that under such a choice of the vector $c^\varepsilon$ the function $\chi^\varepsilon \big(\frac x\varepsilon,\omega\big)$ has zero average in $Q$. We show in Proposition \ref{1corrector} that $\varepsilon c^\varepsilon\to 0$ a.s.
It should be emphasized that $\theta (y, \omega)$ need not be a stationary field, that is, we do not claim that $\theta(y, \omega) = {\bm\theta} (T_y \omega)$ for some random vector ${\bm\theta}(\omega)$.
Two other functions, $u_2^\varepsilon$ and $u_3^\varepsilon$, that appear in the ansatz in \eqref{v_eps} will be introduced in \eqref{corr-u2}, \eqref{u3}, respectively.
After substituting $v^{\varepsilon}$ for $u$ in (\ref{L_eps}) we get $$ (L^{\varepsilon} v^{\varepsilon})(x) \ = \ \frac{1}{\varepsilon^{d+2}} \int\limits_{\mathbb R^d} a \big( \frac{x-y}{\varepsilon} \big) \mu \big( \frac{x}{\varepsilon} \big) \mu \big( \frac{y}{\varepsilon} \big) \Big( u_0(y)+ \varepsilon \theta \big(\frac{y}{\varepsilon}\big) \nabla u_0(y) - u_0(x)-\varepsilon \theta \big(\frac{x}{\varepsilon} \big) \nabla u_0(x) \Big) dy; $$ here and in what follows we drop the argument $\omega$ in the random fields $\mu(y,\omega)$, $\theta(y,\omega)$, etc., if it does not lead to ambiguity. After the change of variables $\frac{x-y}{\varepsilon}=z$ we get \begin{equation}\label{ml_1} (L^{\varepsilon} v^{\varepsilon})(x) \ = \ \frac{1}{\varepsilon^{2}} \int\limits_{\mathbb R^d} dz \ a (z) \mu \big( \frac{x}{\varepsilon} \big) \mu \big( \frac{x}{\varepsilon} -z \big) \Big( u_0(x-\varepsilon z) - u_0(x) + \varepsilon \theta \big(\frac{x}{\varepsilon}-z \big) \nabla u_0 (x-\varepsilon z) -\varepsilon \theta \big( \frac{x}{\varepsilon} \big) \nabla u_0(x) \Big). \end{equation}
The Taylor expansion of a function $u(y)$ with a remainder in the integral form reads $$ \begin{array}{c} u(y) \ = \ u(x) + \int_0^1 \nabla u (x + (y-x)t) \cdot (y-x) \ dt \\[3pt] = \ u(x) + \nabla u(x) \cdot (y-x) + \int_0^1 \nabla \nabla u(x+(y-x)t) (y-x) (y-x) (1-t) \ dt \end{array} $$ and is valid for any $x, y \in \mathbb R^d$. Thus we can rewrite (\ref{ml_1}) as follows \begin{eqnarray} (L^{\varepsilon} v^{\varepsilon})(x) \hskip -1.7cm &&\nonumber\\[1.6mm] \label{K2_1} &&\!\!\!\!\!=\, \frac{1}{\varepsilon} \mu \Big( \frac{x}{\varepsilon}, \omega \Big)\nabla u_0(x)\! \cdot\! \int\limits_{\mathbb R^d} \Big[ -z + \theta \Big(\frac{x}{\varepsilon}-z, \omega \Big) - \theta \Big(\frac{x}{\varepsilon}, \omega \Big) \Big] a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \, dz \\[1mm] \nonumber
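The Taylor formula with integral remainder used above can be checked numerically. The following minimal sketch (an illustration only; the choice $u=\cos$ and the quadrature are ours) verifies the one-dimensional identity
$u(y)=u(x)+u'(x)(y-x)+\int_0^1 u''(x+t(y-x))\,(y-x)^2(1-t)\,dt$.

```python
import numpy as np

# Numerical check of the 1D Taylor formula with the remainder in
# integral form, for the test function u = cos.
u, du, d2u = np.cos, lambda s: -np.sin(s), lambda s: -np.cos(s)
x, y = 0.3, 1.1
t, dt = np.linspace(0.0, 1.0, 20001, retstep=True)
g = d2u(x + t * (y - x)) * (y - x)**2 * (1 - t)
remainder = np.sum((g[:-1] + g[1:]) / 2) * dt    # trapezoidal rule
lhs = u(y)
rhs = u(x) + du(x) * (y - x) + remainder
```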
&&\!\!\!\!\! +\,\mu \Big(\! \frac{x}{\varepsilon}, \omega \Big) \nabla \nabla u_0 (x)\!\cdot\! \int\limits_{\mathbb R^d}\! \Big[ \frac12 z\!\otimes\!z\! - z \!\otimes\!\theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big) \Big] a (z) \mu \Big( \frac{x}{\varepsilon}\! -\!z, \omega \Big) \, dz
+\, \ \phi_\varepsilon (x)
\\
\nonumber
&&=: \frac{1}{\varepsilon} I^\varepsilon_{-1} + \varepsilon^0 I^\varepsilon_0 + \phi_\varepsilon \end{eqnarray} with \begin{equation}\label{14} \begin{array}{rl} \displaystyle \!\!\!\!&\hbox{ }\!\!\!\!\!\!\!\!\!\!\!\!\phi_\varepsilon (x, \omega) =\\[3mm] & \!\!\!\!\!\!\!\!\displaystyle \!\! \int\limits_{\mathbb R^d}\! a (z) \mu \Big( \frac{x}{\varepsilon},\omega \Big) \mu \Big( \frac{x}{\varepsilon}\! -\!z,\omega \Big) \bigg(\int\limits_0^{1} \nabla \nabla u_0(x-\varepsilon z t) \!\cdot\! z\!\otimes\!z \,(1-t) \ dt - \frac{1}{2} \nabla \nabla u_0(x)\!\cdot\! z\!\otimes\!z \bigg) \, dz \\[4mm] &\!\!\!\!\!\!\!\!\! \displaystyle +\, \frac{1}{\varepsilon} \mu \Big( \frac{x}{\varepsilon},\omega \Big) \int\limits_{\mathbb R^d} \ a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big)\! \Big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x) \Big)\, dz \\[4mm] &\!\!\!\!\!\!\!\!\! \displaystyle + \mu \Big( \frac{x}{\varepsilon},\omega \Big) \nabla \nabla u_0(x) \int\limits_{\mathbb R^d} \ a (z) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) z \otimes \theta \Big(\frac{x}{\varepsilon}\!-\!z,\omega \Big)\, dz. \end{array} \end{equation} Here and in what follows $z\otimes z$ stands for the matrix $\{z_iz_j\}_{i,j=1}^d$.
Let us outline the main steps of the proof of relation \eqref{convergence1}.
In order to make the term $I^\varepsilon_{-1}$ in \eqref{K2_1} equal to zero, we should construct a random field $\theta \big(z, \omega\big)$ that satisfies the following equation \begin{equation}\label{korr1} \int\limits_{\mathbb R^d} \Big( -z + \theta \big(\frac{x}{\varepsilon}-z, \omega \big) - \theta \big(\frac{x}{\varepsilon}, \omega\big) \Big) \, a (z) \mu \big( \frac{x}{\varepsilon} -z,\omega \big) \ dz \ = \ 0. \end{equation} The goal of the first step is to construct such a random field $\theta(z,\omega)$. Next we show that the second term $I^\varepsilon_0$ can be represented as a sum $$ I^\varepsilon_0 = \hat L u_0 + S\Big(\frac x\varepsilon,\omega\Big)\nabla\nabla u_0 + f_2^\varepsilon (x,\omega), $$ where $S(z,\omega)$ is a stationary matrix-field with zero average, and $f_2^\varepsilon (x,\omega)$ is a non-stationary term; both of them are introduced below. We define $u_2^\varepsilon$ and $u_3^\varepsilon$ by $$ (L^\varepsilon - m) u_2^\varepsilon = - S\Big(\frac x\varepsilon,\omega\Big)\nabla\nabla u_0, \quad (L^\varepsilon - m) u_3^\varepsilon = - f_2^\varepsilon (x,\omega), $$
and prove that $\| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$, $\| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$. Then, using the properties of the corrector $\theta$ (see Theorem \ref{t_corrector}), we derive the limit relation
$\|\varepsilon \theta\big(\frac x\varepsilon\big) \nabla u_0(x) \|_{L^2(\mathbb R^d)} \to 0$, as $\varepsilon \to 0$. This yields $\| w^\varepsilon - u_0 \|_{L^2(\mathbb R^d)} \to 0$.
With this choice of $\theta$, $u_2^\varepsilon$ and $u_3^\varepsilon$ the expression $(L^\varepsilon - m) w^\varepsilon$ can be rearranged as follows: $$ (L^\varepsilon - m) w^\varepsilon = (L^\varepsilon - m) v^\varepsilon + (L^\varepsilon - m) (u_2^\varepsilon + u_3^\varepsilon) = (\hat L - m) u_0 + \phi_\varepsilon - m \varepsilon \theta \nabla u_0 $$ $$ = f + \phi_\varepsilon - m \varepsilon \theta \nabla u_0 = (L^\varepsilon - m) u^\varepsilon + \phi_\varepsilon - m \varepsilon \theta \nabla u_0. $$
We prove below in Lemma \ref{reminder} that $\|\phi_\varepsilon\|\big._{L^2(\mathbb R^d)}$ vanishes as $\varepsilon \to 0$.
This implies the convergence $\| w^\varepsilon - u^\varepsilon \|\big._{L^2(\mathbb R^d)} \to 0$ and, by the triangle inequality, the required relation in \eqref{convergence1}.
\section{First corrector}\label{s_corr}
In this section we construct a solution of equation \eqref{korr1}.
Denote \begin{equation}\label{fkorr1} r \big(\frac{x}{\varepsilon}, \omega\big) = \int\limits_{\mathbb R^d} z \, a (z) \, \mu \big( \frac{x}{\varepsilon} -z,\omega \big) \ dz, \end{equation}
then $r(\xi, \omega) = \mathbf{r}(T_\xi \omega), \; \xi = \frac{x}{\varepsilon},$ is a stationary field. Moreover, since $\mathbb{E} \mu ( \xi -z,\omega )= \mathbb{E}{\bm\mu}(T_{\xi-z} \omega) = \mathrm{const}$ for all $z$ and $\xi$, we have
$$ \mathbb{E} r (\xi, \omega) = \int\limits_{\mathbb R^d} z \, a (z) \, \mathbb{E}\mu ( \xi -z,\omega ) \ dz \ = \ 0. $$ Equation \eqref{korr1} takes the form \begin{equation}\label{korrkappa1} r (\xi, \omega) \ = \ \int\limits_{\mathbb R^d} a (z) \mu ( \xi -z,\omega ) \, \big( \theta (\xi-z, \omega ) - \theta (\xi, \omega) \big) \ dz. \end{equation} We are going to show now that equation \eqref{korrkappa1} has a solution that possesses the following properties: \\[1.5mm] {\bf A}) the increments $\zeta_z(\xi, \omega)
= \theta (z+\xi, \omega ) - \theta (\xi, \omega)$ are stationary for any given $z$, i.e. $$\zeta_z(\xi, \omega) = \zeta_z(0, T_\xi \omega);$$ {\bf B})
$\varepsilon \theta\big(\frac x\varepsilon,\omega\big) $ is a function of sub-linear growth in $L_{\rm loc}^2(\mathbb R^d)$: for any bounded Lipschitz domain $Q\subset \mathbb R^d$ $$
\Big\| \varepsilon \, \theta \big(\frac{x}{\varepsilon}, \omega \big) \Big\|_{L^2(Q)} \to 0 \quad \mbox{a.s.} \; \omega \in \Omega. $$ Here and in the sequel for presentation simplicity we write for the $L^2$ norm of a vector-function just $L^2(Q)$ instead of $L^2(Q\,;\,\mathbb R^d)$.
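Before stating the existence result, the corrector equation \eqref{korrkappa1} can be visualized in a discrete one-dimensional periodic toy model (our own illustration, not the paper's construction; the periodic field $\mu$ below is a deterministic stand-in for the random field, and all parameters are ours).

```python
import numpy as np

# Discrete periodic sketch of the corrector equation
#   r(xi) = int a(z) mu(xi - z) (theta(xi - z) - theta(xi)) dz
# on the unit circle, with n grid points xi_j and z_jk = xi_j - xi_k.
n = 201                                            # odd, so z and -z pair up exactly
xi = np.arange(n) / n
dz = 1.0 / n
mu = 1.0 + 0.5 * np.cos(2 * np.pi * xi)            # periodic stand-in for mu(.,omega)
Z = (xi[:, None] - xi[None, :] + 0.5) % 1.0 - 0.5  # signed periodic displacement
a = np.exp(-(Z / 0.1)**2 / 2)                      # even kernel, a(-z) = a(z)

W = a * mu[None, :] * dz                           # a(z_jk) mu(xi_k) dz
B = W - np.diag(W.sum(axis=1))                     # (B th)_j = sum_k W_jk (th_k - th_j)
r = (Z * a * mu[None, :] * dz).sum(axis=1)         # r(xi_j) = int z a(z) mu(xi_j - z) dz

# B is singular (constants lie in its kernel: the corrector is defined only
# up to an additive constant), so solve in the least-squares sense.
theta = np.linalg.lstsq(B, r, rcond=None)[0]
```

The least-squares solve succeeds because $r$ is orthogonal to the left kernel of the discrete operator (spanned by $\mu$), mirroring the mean-zero property $\mathbb{E}\,r=0$ established above; shifting $\theta$ by a constant leaves the equation unchanged, as in the paper.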
\begin{theorem}\label{t_corrector} There exists a unique (up to an additive constant vector) solution $\theta\in L^2_{\rm loc}(\mathbb R^d)$ of equation \eqref{korrkappa1} that satisfies conditions {\bf A}{\rm )} -- {\bf B}{\rm )}. \end{theorem}
\begin{proof}[Proof of Theorem \ref{t_corrector}] We divide the proof into several steps.\\ {\sl Step 1.} Consider the following operator
acting in $L^2(\Omega)$: \begin{equation}\label{A-omega} (A \varphi)(\omega) = \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) \big( \varphi (T_z \omega) - \varphi(\omega) \big) dz \end{equation}
\begin{proposition}\label{spectrA} The spectrum $\sigma(A) \subset (-\infty, 0]$. \end{proposition} \begin{proof} It is straightforward to check that the operator $A$ is bounded and symmetric in the weighted space $L^2(\Omega, P_\mu) = L^2_\mu(\Omega)$ with $d P_\mu(\omega) = {\bm\mu}(\omega) d P(\omega)$. Denoting $\tilde \omega = T_z \omega, \ s=-z$, using stationarity of $\mu$ and considering the relation $a(-z) = a(z)$ we get \begin{equation}\label{PropA1} \begin{array}{c} \displaystyle \int\limits_\Omega \int\limits_{\mathbb R^d} a(z){\bm\mu}(T_z \omega){\bm\mu}(\omega) \varphi^2(T_z \omega) \, dz \, dP(\omega)= \int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(\tilde \omega) {\bm\mu}(T_{-z} \tilde\omega) \varphi^2(\tilde\omega) \, dz \, dP(\tilde\omega) \\[3pt] \displaystyle
= \int\limits_\Omega \int\limits_{\mathbb R^d} a(s){\bm\mu}( \omega){\bm\mu}(T_s \omega) \varphi^2(\omega)\, ds \, dP(\omega). \end{array} \end{equation} Thus \begin{equation}\label{PropA1bis} \begin{array}{c} \displaystyle \big( A\varphi, \varphi \big)_{L^2_\mu} = \int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) \big( \varphi(T_z \omega) - \varphi(\omega) \big) \varphi(\omega) {\bm\mu}(\omega) dz dP(\omega) \\ \displaystyle = -\frac12 \int\limits_\Omega \int\limits_{\mathbb R^d} a(z) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varphi(T_z \omega) - \varphi(\omega) \big)^2 dz dP(\omega)\le 0. \end{array} \end{equation} Since the norms in $L^2(\Omega)$ and $L^2_\mu(\Omega)$ are equivalent, the desired statement follows. \end{proof}
Let us consider for any $\delta>0$ the equation \begin{equation}\label{A-delta} \delta \varphi(\omega) - \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_z \omega ) ( \varphi (T_z \omega ) - \varphi ( \omega) ) \ dz = r(\omega), \quad r(\omega) = \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_z \omega ) \ dz. \end{equation} By Proposition \ref{spectrA} the operator $(\delta I - A)^{-1}$ is bounded, hence there exists a unique solution $\varkappa^\delta (\omega) = (\delta I - A)^{-1} r (\omega)$ of \eqref{A-delta}.
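For intuition, the resolvent step above can be mimicked on a finite probability space. The following sketch is entirely our own toy setup ($\Omega = \mathbb{Z}_N$ with the shift $T_z i = i+z \bmod N$, an even kernel $a$, and a random positive density $\mu$; none of these come from the text): it checks numerically that the quadratic form of $A$ is nonpositive in the weighted space $L^2_\mu$, and that the increments of the solution $\varkappa^\delta$ of \eqref{A-delta} remain bounded as $\delta \to 0$.

```python
import numpy as np

# Toy finite analogue of the resolvent step (hypothetical setup, not the
# paper's construction): Omega = Z_N with shift T_z(i) = i + z mod N,
# a symmetric jump kernel a(z), and a positive stationary density mu.
rng = np.random.default_rng(0)
N = 64
zs = [-2, -1, 1, 2]                      # support of a; a(-z) = a(z)
a = {z: 1.0 for z in zs}
mu = 1.0 + 0.5 * rng.random(N)           # bounded: 1 <= mu < 1.5

# (A phi)(i) = sum_z a(z) mu(i+z) (phi(i+z) - phi(i))
A = np.zeros((N, N))
for i in range(N):
    for z in zs:
        j = (i + z) % N
        A[i, j] += a[z] * mu[j]
        A[i, i] -= a[z] * mu[j]

# r(i) = sum_z z a(z) mu(i+z); its weighted mean vanishes since a is even
r = np.array([sum(z * a[z] * mu[(i + z) % N] for z in zs) for i in range(N)])

# In L^2_mu the form matrix (diag mu) A is symmetric and nonpositive, as in
# the Proposition: (A phi, phi)_mu = -1/2 sum a(z) mu(i+z) mu(i) (dphi)^2
form = np.diag(mu) @ A
assert np.allclose(form, form.T)
assert np.linalg.eigvalsh(form).max() <= 1e-8

# kappa^delta solving (delta I - A) kappa = r: its increments stay bounded
for delta in (1e-1, 1e-3, 1e-6):
    kappa = np.linalg.solve(delta * np.eye(N) - A, r)
    print(delta, np.abs(np.roll(kappa, -1) - kappa).max())
```

The printed increment norms remain of order one as $\delta$ decreases, the discrete counterpart of the uniform bound \eqref{AB}.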
For any given $z \in \mathbb R^d$ we set $$
u^\delta(z,\omega) = \varkappa^\delta(T_z \omega) - \varkappa^\delta(\omega). $$ Then \begin{equation}\label{u-delta} u^\delta(z_1 + z_2,\omega) = u^\delta(z_2,\omega) + u^\delta(z_1, T_{z_2} \omega) \quad \forall \ z_1, z_2 \in \mathbb R^d. \end{equation} For any $\xi \in\mathbb R^d$ as an immediate consequence of \eqref{A-delta} we have \begin{equation}\label{A-delta-xi} \delta \varkappa^\delta (T_\xi \omega) - \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi+z} \omega ) ( \varkappa^\delta (T_{\xi+z} \omega ) - \varkappa^\delta ( T_\xi \omega) ) \ dz = \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi+z} \omega ) \ dz. \end{equation}
Next we obtain a priori estimates for $\| \varkappa^\delta (T_z \omega) - \varkappa^\delta (\omega)\|_{L^2_M}$ with $dM(z, \omega) = a(z) dz dP(\omega)$. \begin{proposition}\label{boundM}
The following estimate holds: \begin{equation}\label{AB}
\| u^\delta(z,\omega) \|_{L^2_M} = \| \varkappa^\delta (T_z \omega) - \varkappa^\delta (\omega) \|_{L^2_M} \ \le \ C \end{equation} with a constant $C$ that does not depend on $\delta$. \end{proposition} \begin{proof}
Multiplying equation \eqref{A-delta} by $\varphi(\omega)={\bm\mu}(\omega)\varkappa^\delta(\omega)$ and integrating the resulting relation over $\Omega$ yields \begin{equation}\label{Prop2} \begin{array}{c} \displaystyle \delta \int\limits_\Omega \big(\varkappa^\delta(\omega)\big)^2{\bm\mu}(\omega)\, dP(\omega)
- \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega) \\ \displaystyle
= \int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega). \end{array} \end{equation} The same change of variables as in \eqref{PropA1} results in the relation \begin{equation}\label{Prop2_eq} \int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta (\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega)= - \int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta (T_z \omega) {\bm\mu}(\omega) {\bm\mu}(T_z \omega)\, dz \, dP(\omega), \end{equation} therefore, the right-hand side of \eqref{Prop2} takes the form \begin{equation}\label{RHS}
\!\int\limits_{\mathbb R^d}\! \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) dz dP(\omega)= -\frac12 \int\limits_{\mathbb R^d}\! \int\limits_\Omega z a(z) \big( \varkappa^\delta(T_z \omega) - \varkappa^\delta(\omega) \big) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) dz dP(\omega). \end{equation} Equality \eqref{PropA1bis} implies that the second term on the left-hand side of \eqref{Prop2} can be rearranged in the following way \begin{equation}\label{LHS2} \begin{array}{c} \displaystyle - \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega) \\ \displaystyle = \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega a(z) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 dz \, dP(\omega). \end{array} \end{equation} Let us denote $$ J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 a(z) dz \, dP(\omega) = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega) $$ and $$
\int\limits_{\mathbb R^d} \int\limits_\Omega \big( \varkappa^\delta( T_z \omega) - \varkappa^\delta (\omega) \big)^2 a(z) dz \, dP(\omega) = \int\limits_{\mathbb R^d} \int\limits_\Omega (u^\delta (z,\omega))^2 dM(z,\omega) = \| u^\delta \|^2_{L^2_M}, $$ where $dM(z, \omega) = a(z) dz dP(\omega)$. Then \begin{equation}\label{B1}
J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega) \ge \alpha_1^2 \| u^\delta \|^2_{L^2_M} \end{equation} and, on the other hand, relations \eqref{Prop2}--\eqref{LHS2} imply the following upper bound on $J^\delta$: \begin{equation}\label{B2}
J^\delta = \int\limits_{\mathbb R^d} \int\limits_\Omega {\bm\mu}(T_z \omega) {\bm\mu}(\omega) (u^\delta (z,\omega))^2 dM(z,\omega) \le \frac12 \alpha_2^2 \sigma \| u^\delta \|_{L^2_M}. \end{equation} Bounds \eqref{B1}--\eqref{B2} together yield $$
\alpha_1^2 \| u^\delta \|^2_{L^2_M} \le J^\delta \le \frac12 \alpha_2^2 \sigma \| u^\delta \|_{L^2_M}. $$ Consequently we obtain the estimate \eqref{AB} with $C = \frac{\alpha_2^2}{2 \alpha_1^2} \sigma$, and this estimate is uniform in $\delta$. \end{proof}
\begin{corollary} For any $\delta>0$ the following upper bound holds: \begin{equation}\label{u-norm}
\sqrt{\delta} \, \| \varkappa^\delta \|_{L^2_\mu} \le C. \end{equation} \end{corollary}
\begin{proof} From \eqref{Prop2} we have \begin{equation}\label{Prop2-norm} \begin{array}{c} \displaystyle \delta \int\limits_\Omega \big(\varkappa^\delta(\omega)\big)^2{\bm\mu}(\omega)\, dP(\omega)
=\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_z \omega ) \big( \varkappa^\delta (T_z \omega ) - \varkappa^\delta ( \omega) \big) \varkappa^\delta(\omega){\bm\mu}(\omega) \, dz \, dP(\omega) \\ \displaystyle
+\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) \varkappa^\delta(\omega) {\bm\mu}(T_z \omega) {\bm\mu}(\omega) \, dz \, dP(\omega). \end{array} \end{equation} Then using \eqref{RHS}, \eqref{LHS2}, \eqref{B2} together with the Cauchy--Schwarz inequality and bound \eqref{AB}, we obtain that the expression on the right-hand side of \eqref{Prop2-norm} is uniformly bounded in $\delta$. \end{proof}
Proposition \ref{boundM} implies that the family $\{ u^\delta(z, \omega) \}_{\delta>0}$ is bounded in $L^2_M$. Consequently there exists a subsequence $u_j (z, \omega) = u^{\delta_j} (z, \omega)$, $j=1,2, \ldots,$ that converges weakly in $L^2_M$ as $\delta_j \to 0$. We denote this limit by $\theta(z,\omega)$: \begin{equation}\label{theta} w\,\mbox{-}\!\!\lim_{j \to \infty} u_j (z,\omega) = w\,\mbox{-}\!\!\lim_{\delta_j \to 0} \big( \varkappa^{\delta_j}(T_z \omega) - \varkappa^{\delta_j}(\omega) \big) = \theta(z,\omega). \end{equation} Clearly, $\theta(z,\omega) \in L^2_M$, i.e. \begin{equation}\label{thetaLM} \int\limits_{\mathbb R^d} \int\limits_\Omega \theta^2 (z,\omega) a(z) dz dP(\omega) < \infty, \end{equation} and by the Fubini theorem $\theta (z, \omega) \in L^2 (\Omega)$ for almost all $z$ from the support of the function $a(z)$. In addition $\theta(0,\omega) \equiv 0$ and for any $z$ \begin{equation}\label{Etheta} \mathbb{E} \theta(z,\omega) = \lim_{\delta_j \to 0} \Big( \mathbb{E} \varkappa^{\delta_j} (T_z \omega) - \mathbb{E} \varkappa^{\delta_j}(\omega) \Big) = 0. \end{equation}
\noindent {\sl Step 2.} {\sl Property A}. The function $\theta(z,\omega)$ introduced in \eqref{theta} is not originally defined on the set $\{z\in\mathbb R^d\,:\,a(z)=0\}$. \begin{proposition}\label{statincrements} The function $\theta(z, \omega)$, given by \eqref{theta}, can be extended to $\mathbb R^d\times\Omega$ in such a way that $\theta(z, \omega)$ satisfies relation \eqref{u-delta}, i.e. $\theta(z, \omega)$ has stationary increments: \begin{equation}\label{thetaVIP}
\theta(z+\xi,\omega) - \theta (\xi,\omega) = \theta(z, T_\xi \omega) = \theta(z, T_\xi \omega) - \theta(0, T_\xi \omega). \end{equation} \end{proposition}
\begin{proof} Applying Mazur's theorem \cite[Section V.1]{Yo65} we conclude that $\theta(z, \omega) = s\,\hbox{-}\!\lim\limits_{n \to \infty} w_n$ is the strong limit of a sequence $w_n$ of convex combinations of elements $u_j(z,\omega) = u^{\delta_j} (z,\omega)$. The strong convergence implies that there exists a subsequence of $\{w_n \}$ that converges a.s. to the same limit $\theta(z, \omega)$: $$ \lim\limits_{n_k \to \infty} w_{n_k} (z, \omega) = \theta(z, \omega) \quad \mbox{for a.e. } \; z \; \mbox{ and a.e.} \; \omega. $$ Since equality \eqref{u-delta} holds for all $u_j$, it also holds for any convex linear combination $w_n$ of $u_j$: \begin{equation}\label{wn} w_n (z_1 + z_2,\omega) = w_n(z_2,\omega) + w_n (z_1, T_{z_2} \omega) \quad \forall \ n. \end{equation} Thus, taking the subsequence $\{w_{n_k} \}$ in equality \eqref{wn} and passing to the point-wise limit $n_k \to \infty$ in each term of this equality, we obtain \eqref{thetaVIP}, at first only for those $z_1, z_2$ such that $z_1, z_2, z_1+ z_2$ belong to $\mathrm{supp}(a)$. Then we extend the function $\theta(z, \omega)$ to a.e. $z \in \mathbb R^d$ using relation \eqref{thetaVIP}: \begin{equation}\label{lim_sh_inv} \theta(z_1 + z_2, \omega) = \theta(z_2, \omega) + \theta(z_1, T_{z_2} \omega). \end{equation} Observe that this extension is well-defined because relation \eqref{thetaVIP} holds on the support of $a$.\\[1.5mm] Let us show that $\theta(z,\omega)$ is defined for all $z\in\mathbb R^d$. To this end we observe that, due to the properties of the dynamical system $T_z$, the function $\theta(z_1,T_{z_2}\omega)$ is a well-defined measurable function of $z_1$ and $\omega$ for all $z_2\in\mathbb R^d$. The function $\theta(z_1+z_2,\omega)$ possesses the same property due to its particular structure. Then according to \eqref{lim_sh_inv} the function $\theta(z_2, \omega)$ is defined for all $z_2\in\mathbb R^d$. \end{proof} Denote $\zeta_z (\xi, \omega)= \theta(z+\xi,\omega) - \theta (\xi,\omega) $,
then for $z\in\mathbb R^d$ relation \eqref{thetaVIP} yields \begin{equation}\label{thetaVIPbis} \zeta_z (\xi, \omega) = \zeta_z(0, T_\xi \omega), \end{equation} i.e.\ for all $z\in\mathbb R^d$ the field $\zeta_z(\xi,\omega)$ is statistically homogeneous in $\xi$, and \begin{equation}\label{zetatheta}
\zeta_z(0, \omega) = \theta(z, \omega). \end{equation} Thus by \eqref{theta}, \eqref{thetaVIP} -- \eqref{thetaVIPbis} the random function $\theta(z,\omega)$ is not stationary, but its increments $\zeta_z(\xi, \omega) = \theta (z+\xi, \omega ) - \theta (\xi, \omega)$ form a stationary field for any given $z$.
\noindent {\sl Step 3.} At this step we show that $\theta(z,\omega)$ defined by \eqref{theta} is a solution of equation \eqref{korr1} (or, equivalently, of \eqref{korrkappa1}).
To this end for an arbitrary function $\psi(\omega) \in L^2(\Omega)$ we multiply equality \eqref{A-delta-xi} by a function $\psi(\omega){\bm\mu}(\omega)$ and integrate the resulting relation over $\Omega$, then we have \begin{equation}\label{Solution} \begin{array}{c} \displaystyle \delta \int\limits_\Omega \varkappa^\delta(T_\xi \omega) \psi(\omega) {\bm\mu}(\omega)\, dP(\omega) \!=\!
\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) {\bm\mu} ( T_{\xi+z} \omega ) \big( \varkappa^\delta (T_{\xi+z} \omega ) - \varkappa^\delta (T_\xi \omega) \big) dz \psi(\omega) {\bm\mu}(\omega) dP(\omega) \\ \displaystyle
+\int\limits_{\mathbb R^d} \int\limits_\Omega z a(z) {\bm\mu}(T_{\xi+z} \omega) dz \, \psi(\omega) {\bm\mu}(\omega) \, dP(\omega). \end{array} \end{equation} By estimate \eqref{u-norm} and the Cauchy--Schwarz inequality for any $\psi \in L^2(\Omega)$ we get \begin{equation}\label{ud-norm} \delta \int\limits_\Omega \varkappa^\delta(T_\xi \omega) \psi(\omega) {\bm\mu}(\omega)\, dP(\omega) \to 0 \quad \mbox{as } \delta \to 0. \end{equation} Passing to the limit $\delta \to 0$ in equation \eqref{Solution} and taking into account \eqref{theta} and \eqref{ud-norm}, we obtain that for a.e. $\omega$ the function $\theta(z,T_\xi \omega)$ satisfies the equation \begin{equation*}\label{A-delta-xibis} \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi+z} \omega ) \theta(z, T_\xi \omega) \ dz = - \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi+z} \omega ) \ dz. \end{equation*} Using \eqref{thetaVIP} we get after the change of variables $z \to -z$ \begin{equation}\label{theta-xi-z} -\int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_{\xi-z} \omega ) ( \theta (\xi-z, \omega ) - \theta ( \xi, \omega) ) \ dz + \int\limits_{\mathbb R^d} z a (z) {\bm\mu} (T_{\xi-z} \omega ) \ dz =0, \end{equation} which is exactly \eqref{korr1}. Thus we have proved that $\theta(z,\omega)$ is a solution of \eqref{korrkappa1}.
\noindent {\sl Step 4.} {\sl Property B}.
Assumption \eqref{add} and inequality \eqref{thetaLM} imply that
$$ c_0 \int\limits_{{\bf B}} \int\limits_\Omega \theta^2 (z,\omega) dz dP(\omega) < \int\limits_{\mathbb R^d} \int\limits_\Omega \theta^2 (z,\omega) a(z) dz dP(\omega) < \infty, $$ and by the Fubini theorem we conclude that a.s. \begin{equation}\label{L2B}
\int\limits_{{\bf B}} \theta^2 (z,\omega) dz < \infty. \end{equation}
Thus $\theta(z,\omega) \in L^2({\bf B})$ with $\| \theta (z, \omega) \|_{L^2({\bf B})} = K(\omega)$ for a.e. $\omega$, and
${\mathbb E} (K(\omega))^2< \infty$.
\begin{proposition} [Sublinear growth of $\varepsilon\theta(\frac x\varepsilon) $ in $L_{\rm loc}^2(\mathbb R^d)$] \label{1corrector} Denote $\varphi_\varepsilon (z, \omega) = \varepsilon\, \theta \big(\frac z\varepsilon, \omega\big)$. Then a.s. \begin{equation}\label{1corrsmall}
\| \varphi_\varepsilon (\cdot, \omega) \|_{L^2(\mathcal{Q})} \ \to \ 0 \quad \mbox{ as } \; \varepsilon \to 0 \end{equation} for any bounded Lipschitz domain $\mathcal{Q}\subset\mathbb R^d$. \end{proposition}
\begin{proof} In the proof we use inequality \eqref{L2B} and assume in what follows, without loss of generality, that ${\bf B}=[0,1]^d$.
\begin{lemma}\label{LemmaC} The family of functions $\varphi_\varepsilon (z, \omega) = \varepsilon\, \theta \big(\frac z\varepsilon, \omega\big)$ is bounded and compact in $L^2(Q)$. \end{lemma} \begin{proof} Using the change of variables $\frac z\varepsilon = y$ we have $$
\|\varphi_\varepsilon \|^2_{L^2(Q)} = \| \varepsilon \, \theta \big(\frac z\varepsilon, \omega\big) \|^2_{L^2(Q)} = \int\limits_Q \varepsilon^2 \, \theta^2 \big(\frac z\varepsilon, \omega\big) dz = \int\limits_{\varepsilon^{-1} Q} \varepsilon^{d+2} \, \theta^2 (y, \omega) dy $$ $$ = \varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\varepsilon}} \ \int\limits_{B_j} \, \theta^2 (y, \omega) dy = \varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \ \int\limits_{B_j} \, (\theta (y, \omega) - \theta(j,\omega) + \theta(j,\omega))^2 dy $$ \begin{equation}\label{L-1} \le {2}\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\varepsilon}} \ \int\limits_{B_j} (\theta (y, \omega) -
\theta(j,\omega))^2 dy \ + \ {2}\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\varepsilon}} \theta^2 (j,\omega) \, |B_j|. \end{equation} Here $j \in \mathbb{Z}^d \cap \frac1\varepsilon Q = \mathbb{Z}_{ Q/\varepsilon}$, $B_j=j+[0,1)^d$.
If $y \in B_j$, then $y = j+z, \; z \in {\bf B} = [0,1)^d$, and we can rewrite the first term on the right-hand side of \eqref{L-1} as follows $$ {2}\,\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{ Q/\varepsilon}} \ \int\limits_{{\bf B}} (\theta (j + z, \omega) - \theta(j,\omega))^2 dz = {2}\,\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz. $$ Using the fact that $ \theta_B(j,\omega):=\int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz$ is a stationary field and $\theta(z,\omega) \in L^2({\bf B})$, by the Birkhoff ergodic theorem we obtain that $$
{2}\,\varepsilon^{d} \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz \ \to \ 2 |Q| \ \mathbb{E} \int\limits_{{\bf B}} \theta^2 (z, \omega) dz<\infty. $$ Consequently, the first term in \eqref{L-1} vanishes as $\varepsilon \to 0$: \begin{equation}\label{L-2} {2}\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \ \int\limits_{{\bf B}} \theta^2 (z, T_j \omega) dz \ \to \ 0. \end{equation} Let us now prove that a.s. the second term in \eqref{L-1} is bounded. Denoting $$ \widehat \varphi_\varepsilon (z) =\varepsilon \, \widehat \theta \big(\frac z\varepsilon, \omega\big),
$$ where $\widehat \theta$ is a piecewise constant function: $\widehat \theta \big(\frac z\varepsilon,\omega\big) =
\theta \big([\frac z\varepsilon],\omega\big) = \theta (j,\omega)$ for $z \in \varepsilon B_j$, the second term in \eqref{L-1} equals \begin{equation}\label{L-3}
{2}\,\varepsilon^{d+2} \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \theta^2 (j,\omega) = 2 \, \| \varepsilon \, \widehat \theta \big(\frac z\varepsilon, \omega\big) \|^2_{L^2(Q)}
=2\|\widehat \varphi_\varepsilon(z)\|^2_{L^2(Q)}. \end{equation} Let us estimate the difference gradient of $ \widehat \varphi_\varepsilon$: $$
\| {\rm grad} \, \widehat \varphi_\varepsilon\|^2_{(L^2(Q))^d} = \varepsilon^2 \int\limits_Q \sum_{k=1}^d \frac{\big( \theta\big([\frac1\varepsilon(z+\varepsilon e_k)], \omega\big) - \theta\big([\frac z\varepsilon],\omega\big) \big)^2}{\varepsilon^2} \, dz $$ $$ = \int\limits_Q \sum_{k=1}^d \big(\theta\big(\big[\frac z\varepsilon\big] + e_k, \omega\big) - \theta\big(\big[\frac z\varepsilon\big],\omega\big) \big)^2 \, dz = \varepsilon^d \sum_{k=1}^d \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \big(\theta(j+ e_k, \omega) - \theta(j,\omega) \big)^2. $$ But $\theta(j+ e_k, \omega) - \theta(j,\omega) = \theta(e_k, T_j \omega)$ is stationary for any given $e_k$, thus \begin{equation}\label{L-4}
\| {\rm grad} \, \widehat \varphi_\varepsilon\|^2_{(L^2(Q))^d} = \varepsilon^d \sum_{k=1}^d \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \big(\theta(j+ e_k, \omega) - \theta(j,\omega) \big)^2 \ \to \ |Q| \sum_{k=1}^d C_k, \end{equation} where $C_k = \mathbb{E} \theta^2 (e_k, \omega)$.
Next we prove that a.s. the following estimate holds: \begin{equation}\label{L-5} \big| \bar \theta_\varepsilon (\omega) \big| = \Big| \int\limits_Q \widehat \varphi_\varepsilon (z, \omega) dz \Big| = \varepsilon^d \Big| \sum\limits_{j \in \mathbb{Z}_{ Q/\varepsilon}} \varepsilon \, \theta(j,\omega) \Big| \le \widetilde C(\omega). \end{equation} We proceed by induction, starting with $d=1$. Using the stationarity of $\theta(j+1,\omega) - \theta(j,\omega)$, by the ergodic theorem we have $$
\varepsilon^2 \, \Big| \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \theta(j,\omega) \Big| \le \varepsilon^2 \,
\sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \sum_{k=0}^{j-1} |\theta(k+1,\omega) - \theta(k,\omega) | $$ $$
\le \varepsilon^2 \, \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \sum\limits_{k \in \mathbb{Z}_{Q/\varepsilon}} |\theta(k+1,\omega) - \theta(k,\omega) | = \varepsilon^2\frac{|Q|}\varepsilon \sum\limits_{k \in \mathbb{Z}_{Q/\varepsilon}} |\theta(e_1, T_k\omega) | \ \to \ |Q|^2 \mathbb{E} |\theta (e_1, \omega)| = \bar C_1. $$ Thus $$
\overline{\lim\limits_{\varepsilon \to 0}}\ \varepsilon^2 \, \Big| \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \theta(j,\omega) \Big| \le \bar C_1, $$ and this implies that for a.e. $\omega$ \begin{equation}\label{L-5A}
\sup_\varepsilon \Big| \varepsilon^2 \, \sum\limits_{j \in \mathbb{Z}_{Q/\varepsilon}} \theta(j,\omega) \Big| \le \widetilde C_1(\omega), \end{equation} where the constant $\widetilde C_1 (\omega)$ depends only on $\omega$.
Let us show how to derive the required upper bound in the dimension $d=2$ using \eqref{L-5A}. In this case $j~\in~\mathbb{Z}_{Q/\varepsilon}, \ j=(j_1, j_2)$, and we assume without loss of generality that $Q \subset [-q, q]^2$. Then $$ \theta ((j_1, j_2), \omega) = \sum_{k=0}^{j_2 -1} \big( \theta ((j_1, k+1), \omega) - \theta ((j_1, k), \omega) \big) \ + \ \theta ((j_1, 0), \omega), $$ and for any $j=(j_1, j_2) \in \mathbb{Z}_{Q/\varepsilon}$ we get $$
| \theta ((j_1, j_2), \omega)| \le \sum_{k= - q/\varepsilon}^{q/\varepsilon} \big| \theta ((j_1, k+1), \omega) - \theta ((j_1, k), \omega) \big| \ + \ |\theta ((j_1, 0), \omega)|. $$
Using \eqref{L-5A} and the ergodic property of the field $| \theta (e_2, T_j\omega)|$ we obtain the following upper bound $$
\varepsilon^3 \, \Big| \sum\limits_{(j_1, j_2) \in \mathbb{Z}_{Q/\varepsilon}} \theta ((j_1, j_2), \omega) \Big| \le \varepsilon^3 \sum_{j_1= - q/\varepsilon}^{q/\varepsilon} \frac{2q}\varepsilon \sum_{k= - q/\varepsilon}^{q/\varepsilon} | \theta (e_2, T_{(j_1, k)} \omega)| \ + \ \varepsilon^3
\sum_{j_1=- q/\varepsilon}^{q/\varepsilon} \frac{2q}\varepsilon |\theta ((j_1, 0), \omega)| $$ $$
= 2q\varepsilon^2 \sum\limits_{(j_1, k) \in \mathbb{Z}_{Q/\varepsilon}} | \theta (e_2, T_{(j_1, k)} \omega)| + 2q\varepsilon^2
\sum_{j_1=- q/\varepsilon}^{q/\varepsilon} |\theta ((j_1, 0), \omega)| \le \widetilde C_2(\omega) + 2q \widetilde C_1(\omega), $$ where $2q$ is the 1-d volume of slices of $Q$ that are orthogonal to $e_1$. The case of $d>2$ is considered in the same way.
Applying the standard discrete Poincar\'e inequality or the Poincar\'e inequality for piecewise linear approximations of discrete functions, we obtain from \eqref{L-4}--\eqref{L-5} that a.s. \begin{equation}\label{L-6}
\| \widehat \varphi_\varepsilon \|^2_{L^2(Q)} \le g_1 \Big(\int\limits_Q \widehat \varphi_\varepsilon (z, \omega) dz \Big)^2 + g_2
\| {\rm grad} \, \widehat \varphi_\varepsilon\|^2_{(L^2(Q))^d} \le K(\omega), \end{equation} where the constants $g_1, \; g_2$, and $K(\omega)$ do not depend on $\varepsilon$.
Thus, using the same piecewise linear approximations and the compactness of the embedding $H^1(Q)\subset L^2(Q)$, we derive from \eqref{L-4} and \eqref{L-6} that the set of functions $\{ \widehat \varphi_\varepsilon \}$ is compact in $L^2(Q)$. As follows from \eqref{L-1} -- \eqref{L-2} $$ \varphi_\varepsilon = \widehat \varphi_\varepsilon + \breve{\varphi}_\varepsilon, \quad \mbox{where } \; \breve{\varphi}_\varepsilon(x) = \varepsilon \big(\theta\big(\frac x\varepsilon\big) - \widehat \theta\big(\frac x\varepsilon\big)\big), \quad \| \breve{\varphi}_\varepsilon \|_{L^2(Q)} \to 0 \; (\varepsilon \to 0). $$ This together with the compactness of $\{ \widehat \varphi_\varepsilon \}$ implies the compactness of the family $\{ \varphi_\varepsilon \}$. The lemma is proved. \end{proof}
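The discrete Poincar\'e-type bound invoked in \eqref{L-6} admits a quick numerical sanity check. The sketch below is our own one-dimensional discretization with ad hoc constants $g_1 = 2$, $g_2 = 1$ (not the constants of the proof), verifying $\|u\|^2_{L^2} \le g_1 \big(\int_Q u\big)^2 + g_2 \|{\rm grad}\, u\|^2$ for grid functions on $Q=(0,1)$.

```python
import numpy as np

# One-dimensional sanity check of a discrete Poincare-type inequality on
# Q = (0,1); the constants g1, g2 are ad hoc (safe but not sharp), and the
# whole setup is our own illustration, not the paper's.
rng = np.random.default_rng(1)
n = 400
h = 1.0 / n
g1, g2 = 2.0, 1.0

def check(u):
    l2_sq = h * np.sum(u ** 2)                     # ||u||^2_{L^2(0,1)}
    mean_sq = (h * np.sum(u)) ** 2                 # (integral of u)^2
    grad_sq = h * np.sum((np.diff(u) / h) ** 2)    # difference gradient norm
    return l2_sq <= g1 * mean_sq + g2 * grad_sq + 1e-12

x = (np.arange(n) + 0.5) * h
assert check(np.sin(2 * np.pi * x))    # zero mean: gradient term controls
assert check(np.ones(n))               # constant: mean term controls
assert check(rng.standard_normal(n))   # rough function: gradient dominates
```

The check mirrors the decomposition $\|u\|^2 = \bar u^2 + \|u-\bar u\|^2$ combined with the Poincar\'e--Wirtinger bound on the fluctuation part.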
Next we show that any limit point of the family $\{\varphi_\varepsilon\}$ as $\varepsilon\to0$ is a constant function.
\begin{lemma}\label{Prop_constfun} Let a subsequence of $\{ \varphi_\varepsilon \}$ converge to $\varphi_0$ in $L^2(Q)$. Then $\varphi_0=const$. \end{lemma}
\begin{proof} According to \cite{LadSol} the set $\{\mathrm{div}\phi\,:\,\phi\in (C_0^\infty(Q))^d\}$ is dense in the subspace of functions from $L^2(Q)$ with zero average. It suffices to show that \begin{equation}\label{ortog_con} \int\limits_Q \mathrm{div}\phi(x) \varphi_\varepsilon(x)\,dx\longrightarrow 0, \ \ \hbox{as }\varepsilon\to0, \end{equation} for any $\phi=(\phi^1,\,\phi^2,\ldots,\phi^d)\in (C_0^\infty(Q))^d$. Clearly, $$ \frac 1\varepsilon(\phi^j(x+\varepsilon e_j)-\phi^j(x))=\partial_{x_j}\phi^j(x)+\varepsilon\upsilon_\varepsilon, $$
where $\|\upsilon_\varepsilon\|_{L^\infty(Q)}\leq C$. Then, for sufficiently small $\varepsilon$, we have $$ \int\limits_Q \mathrm{div}\phi(x) \varphi_\varepsilon(x)\,dx=\int\limits_Q (\phi^j(x+\varepsilon e_j)-\phi^j(x))
\theta\big(\frac x\varepsilon,\omega\big)\,dx\,+\,o(1) $$ $$ =\int\limits_Q \phi^j(x)\big(\theta\big(\frac x\varepsilon-e_j,\omega\big)-\theta\big(\frac x\varepsilon,\omega\big)\big)\,dx\,+\,o(1), $$ where $o(1)$ tends to zero as $\varepsilon\to0$ by Lemma \ref{LemmaC}. Since $\theta(z-e_j,\omega)-\theta(z,\omega)$ is a stationary field with zero mean (see \eqref{Etheta}), by the Birkhoff ergodic theorem the integral on the right-hand side converges to zero a.s. as $\varepsilon\to 0$, and the desired statement follows. \end{proof}
Our next goal is to show that almost surely the limit relation in \eqref{1corrsmall} holds. By Lemma \ref{LemmaC} the constants $\varepsilon c^\varepsilon$, with $c^\varepsilon$ defined in \eqref{hi}, are a.s. bounded uniformly in $\varepsilon$, that is, \begin{equation}\label{co_bou}
|\varepsilon c^\varepsilon|\leq K(\omega) \end{equation} for all sufficiently small $\varepsilon>0$.\\ Consider a convergent subsequence $\{\varphi_{\varepsilon_n}\}_{n=1}^\infty$. By Lemma \ref{Prop_constfun} the limit function is a constant, denote this constant by $\varphi_0$. Assume that $\varphi_0\not=0$. Then $$ \varphi_{\varepsilon_n}(z)=\varphi_0+\rho_{\varepsilon_n}(z), $$
where $\|\rho_{\varepsilon_n}\|_{L^2({Q})}\to0$ as $\varepsilon_n\to0$. Clearly, we have $$ \varphi_{2\varepsilon_n}(z)=2\varepsilon_n\theta\Big(\frac z{2\varepsilon_n}\Big)=2\varepsilon_n\theta\Big(\frac{z/2}{\varepsilon_n}\Big) =2\varphi_0+2\rho_{\varepsilon_n}\Big(\frac{z}{2}\Big)\to 2\varphi_0, $$
because $\|\rho_{\varepsilon_n}(\cdot/2)\|_{L^2({Q})}\to 0$ as $\varepsilon_n\to0$. Similarly, for any $M\in \mathbb Z^+$ we have $$ \varphi\big._{M\varepsilon_n}(z)\,\to\, M\varphi_0 \qquad \hbox{in }L^2({Q}). $$
Choosing $M$ in such a way that $M|\varphi_0|> K(\omega)$ we arrive at a contradiction with \eqref{co_bou}. Therefore, $\varphi_0=0$ for any convergent subsequence. This yields the desired convergence in \eqref{1corrsmall} and completes the proof of Proposition \ref{1corrector}.
\end{proof}
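Property B can be illustrated in dimension $d=1$ by a toy field with stationary mean-zero increments, namely a simple random walk (our own example, not the corrector itself): since $\theta(j)$ then grows like $\sqrt{j}$, one expects $\|\varepsilon\,\theta(\cdot/\varepsilon)\|_{L^2(0,1)} \sim \sqrt{\varepsilon} \to 0$.

```python
import numpy as np

# Toy illustration of Property B in d = 1 (our own example): theta(j) is the
# partial-sum process of i.i.d. mean-zero increments, so the increments are
# stationary and theta grows sublinearly, like sqrt(j).  Then
# ||eps * theta(./eps)||^2_{L^2(0,1)} ~ eps^3 * sum_{j < 1/eps} theta(j)^2 -> 0.
rng = np.random.default_rng(2)
theta = np.concatenate(([0.0], np.cumsum(rng.choice([-1.0, 1.0], size=10 ** 6))))

def corrector_norm(eps):
    n = int(round(1.0 / eps))
    # piecewise-constant approximation of eps * theta(x / eps) on (0, 1)
    return float(np.sqrt(eps ** 3 * np.sum(theta[:n] ** 2)))

vals = [corrector_norm(e) for e in (1e-2, 1e-3, 1e-4, 1e-5)]
print(vals)   # decays roughly like sqrt(eps)
```

A field with nonzero mean increments would instead grow linearly, and the norms above would stay bounded away from zero; this is exactly the dichotomy exploited in the contradiction argument of the proof.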
\noindent {\sl Step 5}. Uniqueness of $\theta$.
\begin{proposition}
[Uniqueness]\label{uniqueness} Problem \eqref{korrkappa1} has a solution $\theta(z,\omega) \in L^2_M$ with statistically homogeneous increments satisfying \eqref{1corrsmall}, and this solution is unique up to an additive constant.
\end{proposition}
\begin{proof} Consider two arbitrary solutions $\theta_1(z,\omega)$ and $\theta_2(z,\omega)$ of problem \eqref{korrkappa1}.
Then the difference $\Delta (z,\omega)=\theta_1(z,\omega)-\theta_2(z,\omega)$ satisfies the equation \begin{equation}\label{1A} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \ dz =0 \end{equation} for a.e. $\omega$ and for all $\xi \in \mathbb R^d$.
Let us remark that the function $\Delta (z,\omega)$ inherits properties {\bf A)} and {\bf B)} of $\theta_1(z,\omega)$ and $\theta_2(z,\omega)$. Consider the cut-off function $ \varphi (\frac{|\xi|}{R})$ parameterized by $R>0$, where $\varphi(r)$, $r\in\mathbb R$, is defined by $$ \varphi(r) = \left\{ \begin{array}{c} 1, \quad r \le 1, \\ 2 - r, \quad 1<r<2, \\ 0, \quad r \ge 2. \end{array} \right. $$
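The only property of the cut-off used in the estimates below is the Lipschitz bound $|\varphi(\frac{|x|}{R}) - \varphi(\frac{|y|}{R})| \le \frac{|x-y|}{R}$; the following discretized check (our own sketch, not part of the proof) confirms it on a grid.

```python
import numpy as np

# Grid check of the Lipschitz bound |phi(|x|/R) - phi(|y|/R)| <= |x - y| / R
# for the cut-off phi(r) = 1 (r <= 1), 2 - r (1 < r < 2), 0 (r >= 2).
# A discretized sanity check only: phi is 1-Lipschitz and | |x| - |y| | <= |x - y|.
def phi(r):
    return float(np.clip(2.0 - r, 0.0, 1.0))

R = 5.0
grid = np.linspace(-3 * R, 3 * R, 301)
for x in grid:
    for y in grid:
        assert abs(phi(abs(x) / R) - phi(abs(y) / R)) <= abs(x - y) / R + 1e-12
print("Lipschitz bound verified on the grid")
```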
For any $R>0$, multiplying equation \eqref{1A} by $\mu(\xi, \omega) \Delta (\xi, \omega ) \varphi (\frac{|\xi|}{R})$ and integrating the resulting relation in $\xi$ over $ \mathbb R^d$, we obtain the following equality \begin{equation}\label{1B}
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Delta(\xi, \omega) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi =0. \end{equation} Using the relation $a(-z)=a(z)$, after change of variables $z \to -z, \ \xi - z = \xi'$, we get \begin{equation}\label{2B}
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi'+ z, \omega ) \mu (\xi', \omega ) \big(\Delta (\xi',\omega ) - \Delta(\xi'+z, \omega) \big) \Delta(\xi'+z, \omega) \varphi (\frac{|\xi'+z|}{R}) \, dz \, d \xi' =0. \end{equation} Renaming $\xi'$ back to $\xi$ in the last equation and taking the sum of \eqref{1B} and \eqref{2B} we obtain $$
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Big( \Delta(\xi+z, \omega) \varphi (\frac{|\xi+z|}{R}) - \Delta(\xi, \omega) \varphi (\frac{|\xi|}{R}) \Big) dz \, d \xi $$ $$
= \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \Big)^2 \varphi (\frac{|\xi|}{R}) \, dz \, d \xi $$ $$
+ \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \big(\Delta (\xi+z,\omega ) - \Delta(\xi, \omega) \big) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi $$ \begin{equation}\label{2C} = J_1^R \ + \ J_2^R = 0. \end{equation} Letting $R=\varepsilon^{-1}$, we first estimate the contribution of $J_2^R $. \begin{lemma}\label{J2} The following limit relation holds a.s.: \begin{equation}\label{3A}
\frac{1}{R^d} |J_2^R| \ \to \ 0 \quad \mbox{ as } \; R \to \infty. \end{equation} \end{lemma}
\begin{proof} Denote $\Delta_z (T_\xi \omega ) = \Delta (\xi+z,\omega ) - \Delta(\xi, \omega)$; then $\Delta_z (T_\xi \omega )$ is stationary in $\xi$ for any given $z$.
We consider separately the integration over $|\xi| > 3R$ and $|\xi| \le 3R$ in the integral $J_2^R$: $$
J_2^R = \int\limits_{\mathbb R^d} \int\limits_{|\xi|>3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi $$ $$
+ \int\limits_{\mathbb R^d} \int\limits_{|\xi|\le 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi+z, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) dz \, d \xi. $$
If $|\xi| > 3R$, then $\varphi (\frac{|\xi|}{R}) = 0$. Also, if $|\xi| > 3R$ and $|z|\le R$, then $|\xi+z| > 2R$ and $\varphi (\frac{|\xi+z|}{R})=0$, so only $|z|>R$ contributes. Thus we obtain the following upper bound $$
\frac{1}{R^d} \int\limits_{\mathbb R^d} \int\limits_{|\xi|> 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) |\Delta_z (T_\xi \omega) | |\Delta(\xi+z, \omega)| \varphi (\frac{|\xi+z|}{R}) d\xi \, dz $$ \begin{equation}\label{estimm}
\le \frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \Big( \int\limits_{|z|>R} |z| a (z) |\Delta_z (T_{\eta-z} \omega) |\, dz \Big) \frac1R |\Delta(\eta, \omega)| \varphi (\frac{|\eta|}{R}) d\eta \end{equation} $$
\le \frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \phi (T_\eta \omega) \frac1R |\Delta(\eta, \omega)| \varphi (\frac{|\eta|}{R}) \, d\eta, $$ where $\eta=\xi+z$, $$
\phi (T_\eta \omega) = \int\limits_{\mathbb R^d} |z| a (z) |\Delta_z (T_{\eta -z} \omega)| \, dz, $$
and in the first inequality we have used the fact that $1< \frac{|z|}{R}$ if $|z|>R$. Since $\Delta_z(\omega) \in L^2_M$, we have $\phi(\omega) \in L^2(\Omega)$. Applying the Cauchy-Schwarz inequality to the last integral in \eqref{estimm} and recalling the relation $R=\varepsilon^{-1}$, we have \begin{equation}\label{5B}
\frac{\alpha_2^2 }{R^d} \int\limits_{|\eta|\le 2R} \phi (T_\eta \omega) \frac{|\Delta(\eta, \omega)|}{R} \varphi (\frac{|\eta|}{R}) \, d \eta \le \alpha_2^2 \Big( \frac{1}{R^d} \int\limits_{|\eta|\le 2R} \phi^2 (T_\eta \omega) d\eta \Big)^{\frac12} \Big( \frac{1}{R^d} \int\limits_{|\eta|\le 2R} \big(\frac{|\Delta(\eta, \omega)|}{R} \big)^2 d \eta\Big)^{\frac12} \to 0, \end{equation} as $R \to \infty$, because the first integral on the right-hand side is bounded due to the stationarity of $\phi (T_\eta \omega)$, and the second integral tends to $0$ due to the sublinear growth of $\Delta(\eta, \omega)$, see \eqref{1corrsmall}.
If $|\xi| \le 3R$, then the corresponding part of $R^{-d} J_2^R$ can be rewritten as a sum of two terms $$ \frac{1}{R^d}
\int\limits_{\mathbb R^d} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) (\Delta(\xi+z, \omega) - \Delta(\xi, \omega)) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz $$ $$
+ \frac{1}{R^d}\int\limits_{\mathbb R^d} \int\limits_{| \xi | \le 3R} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z(T_\xi \omega) \Delta(\xi, \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz = I_1 + I_2. $$
We estimate $|I_1|$ and $|I_2|$ separately. Using the inequality $|\varphi( \frac{|x|}{R}) - \varphi (\frac{|y|}{R}) | \le \frac{|x-y|}{R}$ and the same arguments as above, we get $$
|I_2| \le \frac{\alpha_2^2}{R^d} \int\limits_{\mathbb R^d} \int\limits_{|\xi| \le 3R} a (z) |\Delta_z(T_\xi \omega)| |\Delta(\xi, \omega)| \frac{|z|}{R} d\xi \, d z $$ $$
\le \alpha_2^2 \Big( \frac{1}{R^d} \int\limits_{|\xi|\le 3R} \phi^2 (T_\xi \omega) d\xi \Big)^{\frac12} \Big( \frac{1}{R^d} \int\limits_{|\xi|\le 3R} \big(\frac{|\Delta(\xi, \omega)|}{R} \big)^2 d \xi\Big)^{\frac12} \to 0. $$
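For completeness we justify the elementary inequality used here; we assume, as in the construction of the cutoff function, that $\varphi$ is Lipschitz with constant $1$. Then
$$
\Big|\varphi \big(\frac{|x|}{R}\big) - \varphi \big(\frac{|y|}{R}\big)\Big| \ \le \ \Big| \frac{|x|}{R} - \frac{|y|}{R} \Big| \ = \ \frac{\big| |x| - |y| \big|}{R} \ \le \ \frac{|x-y|}{R},
$$
where the last step is the triangle inequality.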
To estimate $I_1$ we divide the area of integration in $z$ into two parts: $|z|< \sqrt{R}$ and $|z| \ge \sqrt{R}$, and first consider the integral $$
I_1^{(<)} = \frac{1}{R^d} \int\limits_{|z| < \sqrt{R}} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2(T_\xi \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz $$
Since $|z|\leq\sqrt{R}$, we have $|\varphi( \frac{|\xi +z|}{R}) - \varphi (\frac{|\xi|}{R}) | \le \frac{1}{\sqrt{R}}$. Therefore, $$
|I_1^{(<)}| \le \alpha_2^2 \frac{1}{\sqrt{R}} \ \frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(T_\xi \omega) dz \, d \xi \to 0, $$ as $R \to \infty$; here we have used the fact that $$
\frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(T_\xi \omega) dz \, d \xi \to c_0 \mathbb{E} \Big( \int\limits_{\mathbb{R}^d} a (z) \Delta_z^2(\omega) dz \Big) $$ with a constant $c_0$ equal to the volume of a ball of radius $3$ in $\mathbb R^d$. We turn to the second integral $$
I_1^{(>)} = \frac{1}{R^d} \int\limits_{|z| \ge \sqrt{R}} \int\limits_{|\xi| \le 3R } a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2(T_\xi \omega) \big( \varphi (\frac{|\xi+z|}{R}) - \varphi (\frac{|\xi|}{R}) \big) d\xi \, dz. $$
Considering the inequality $|\varphi( \frac{|\xi +z|}{R}) - \varphi (\frac{|\xi|}{R}) | \le 1$ we obtain \begin{equation}\label{7A}
|I_1^{(>)}| \le \alpha_2^2 \frac{1}{R^d} \int\limits_{|\xi| \le 3R } \int\limits_{|z| \ge \sqrt{R}} a (z) \Delta_z^2(T_\xi \omega) \, dz \, d \xi. \end{equation} Denote by $\psi_{R}(\omega)$ the stationary function defined by $$
\psi_{R}(\omega) = \int\limits_{|z| \ge \sqrt{R}} a (z) \Delta_z^2( \omega) \, dz. $$ Since $\Delta_z( \omega) \in L^2_M$, \begin{equation}\label{5A} \mathbb{E} \psi_{R}(\omega) \to 0 \quad \mbox{ as } \; R \to \infty. \end{equation}
Moreover, the function $\psi_{R}(\omega)$ is a.s. nonincreasing in $R$. Using the ergodic theorem, \eqref{7A} and \eqref{5A}, we conclude that $ |I_1^{(>)}| $ tends to zero as $R \to \infty$.
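In more detail, for any fixed $R'>0$ and all $R \ge R'$ the monotonicity of $\psi_R$ and the ergodic theorem give
$$
\frac{1}{R^d} \int\limits_{|\xi| \le 3R } \psi_{R}(T_\xi \omega) \, d\xi \ \le \ \frac{1}{R^d} \int\limits_{|\xi| \le 3R } \psi_{R'}(T_\xi \omega) \, d\xi \ \to \ c_0\, \mathbb{E} \psi_{R'}(\omega) \quad \mbox{ as } \; R \to \infty,
$$
and by \eqref{5A} the right-hand side can be made arbitrarily small by choosing $R'$ sufficiently large.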
Thus we have proved that $|I_1| +|I_2| \to 0 $ as $R \to \infty$ a.s.
Together with \eqref{5B}
this implies \eqref{3A}. \end{proof} We proceed with the term $J_1^R$ in \eqref{2C}: $$
J_1^R = \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2 (T_\xi\omega ) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi. $$ Using the ergodic theorem, we get, as $R \to \infty$, \begin{equation}\label{6A} \frac{1}{R^d} J_1^R =
\frac{1}{R^d} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \mu ( \xi+ z, \omega ) \mu (\xi, \omega ) \Delta_z^2 (T_\xi\omega ) \varphi (\frac{|\xi|}{R}) \, dz \, d \xi \to c_1 \mathbb{E} \int\limits_{\mathbb R^d} a (z) {\bm\mu} ( T_z \omega ) {\bm\mu} (\omega ) \Delta_z^2 (\omega )dz, \end{equation}
where $c_1=\int_{\mathbb R^d}\varphi(|\xi|)d\xi>0$. Consequently, from \eqref{2C} and \eqref{3A} it follows that \begin{equation}\label{6B}
\frac{1}{R^d} |J_1^R| \ \to \ 0 \quad \mbox{ as } \; R \to \infty, \end{equation} and together with \eqref{6A} this implies that \begin{equation}\label{6C} \mathbb{E} \int\limits_{\mathbb R^d} a (z) {\bm\mu}( T_z \omega ) {\bm\mu} (\omega ) \Delta_z^2 (\omega )dz = 0. \end{equation} Using condition \eqref{add}, we conclude from \eqref{6C} that $\Delta_z (\omega) \equiv 0$ for a.e. $z$ and a.e. $\omega$, and hence $\theta_1(z,\omega)=\theta_2(z,\omega)$. The proposition is proved. \end{proof}
${ }$\\[-0.8cm] This completes the proof of Theorem \ref{t_corrector}.\end{proof}
\section{Additional terms of the asymptotic expansion}\label{s_addterms}
Recall that $I_0^\varepsilon$ stands for the sum of all terms of order $\varepsilon^{0}$ in (\ref{K2_1}) and that $u_0\in C_0^\infty(\mathbb R^d)$.
Our first goal is to determine the coefficients of the effective elliptic operator $\hat L$. To this end we consider the following scalar product of $I_0^\varepsilon$ with a function $\varphi \in L^2(\mathbb R^d)$: \begin{equation}\label{hatK2_1} (I^\varepsilon_0, \varphi) =
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \Big( \frac12 z\otimes z - z \otimes \theta
\big(\frac{x}{\varepsilon}-z, \omega \big) \Big) \ a (z) \mu \big( \frac{x}{\varepsilon}, \omega \big) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \ dz \ \nabla \nabla u_0 (x) \varphi(x) dx. \end{equation} After change of variables $x = \varepsilon \eta$ we have \begin{equation}\label{hatK2_2} \begin{array}{l} \displaystyle (I^\varepsilon_0, \varphi) = \varepsilon^d \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \frac12 a (z) \,z\otimes z \, \mu ( \eta, \omega ) \mu ( \eta -z, \omega ) \, dz \, \nabla \nabla u_0 (\varepsilon\eta) \, \varphi (\varepsilon \eta) \, d\eta \\ \displaystyle - \varepsilon^d \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) \,z \otimes \theta (\eta-z, \omega ) \mu ( \eta, \omega ) \mu ( \eta -z, \omega ) \, dz \, \nabla \nabla u_0 (\varepsilon\eta) \, \varphi (\varepsilon \eta) \, d\eta = I^\varepsilon_1(\varphi) - I^\varepsilon_2(\varphi). \end{array} \end{equation}
We consider the integrals $I^\varepsilon_1(\varphi)$ and $I^\varepsilon_2(\varphi)$ separately. Since $\int_{\mathbb R^d}|z|^2a(z)\,dz<\infty$, we have $$ \int\limits_{\mathbb R^d} z\otimes z \,a(z) \mu (0,\omega)\mu(-z,\omega)\,dz \in (L^\infty(\Omega))^{d^2}. $$ Therefore, by the Birkhoff ergodic theorem a.s. $$ \int\limits_{\mathbb R^d} \frac12\, z\otimes z\,a(z) \mu (\frac{x}{\varepsilon},\omega)\mu(\frac{x}{\varepsilon}-z,\omega)\,dz \rightharpoonup D_1\quad\hbox{weakly in } \ (L^2_{\rm loc}(\mathbb R^d))^{d^2} $$ with \begin{equation}\label{J_1} D_1 = \int\limits_{\mathbb R^d} \frac12 \, z\otimes z \, a (z) \, E\{ \mu ( 0, \omega ) \mu ( -z, \omega )\} \, dz. \end{equation} Recalling that $u_0\in C_0^\infty(\mathbb R^d)$, we obtain \begin{equation}\label{I_1} I^\varepsilon_1(\varphi)\to \int\limits_{\mathbb R^d}D_1\nabla\nabla u_0(x)\varphi(x)\,dx. \end{equation}
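The convergence in \eqref{I_1} is the standard pairing of a weakly convergent family with a fixed compactly supported function: writing $\Phi := \nabla\nabla u_0 \, \varphi$, we have $\Phi \in (L^2(\mathbb R^d))^{d^2}$ with ${\rm supp}\, \Phi \subset {\rm supp}\, u_0$, and, after returning to the variable $x$,
$$
I^\varepsilon_1(\varphi) = \int\limits_{\mathbb R^d} \Big( \int\limits_{\mathbb R^d} \frac12\, z \otimes z \, a(z)\, \mu \big(\frac{x}{\varepsilon}, \omega\big) \mu \big(\frac{x}{\varepsilon} - z, \omega\big)\, dz \Big) \Phi(x) \, dx \ \to \ \int\limits_{\mathbb R^d} D_1 \, \Phi(x) \, dx.
$$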
The second integral in \eqref{hatK2_2} contains the non-stationary random field $ \theta (z,\omega)$, and we rewrite $I^\varepsilon_2(\varphi)$ as a sum of two terms, such that the first term contains the stationary field $\zeta_z (\eta, \omega)$ and the contribution of the second one is asymptotically negligible. In order to estimate the contribution of the second term we construct an additional corrector $u_2^\varepsilon$, see formula \eqref{corr-u2} below.\\ Splitting $I^\varepsilon_2(\varphi)$ into two halves and changing variables in the second half (using the symmetry $a(-z)=a(z)$), we have \begin{equation}\label{I_2appr} \begin{array}{l} \displaystyle I^\varepsilon_2 (\varphi) = \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) \, d x \, dz \\ \displaystyle = \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) \, d x \, dz \\ \displaystyle - \, \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{y}{\varepsilon}, \omega ) \mu (\frac{y}{\varepsilon} -z, \omega ) \theta (\frac{y}{\varepsilon}, \omega ) \nabla \nabla u_0(y - \varepsilon z) \varphi(y-\varepsilon z) \, d y \, dz \\ \displaystyle = \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} \! a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \Big( \theta (\frac{x}{\varepsilon} - z, \omega ) \nabla \nabla u_0(x) \varphi(x) - \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z)\! \Big) d x dz \\ \displaystyle = \frac12 \int\limits_{\mathbb R^d}\!
\int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( \theta (\frac{x}{\varepsilon} - z, \omega ) - \theta (\frac{x}{\varepsilon}, \omega ) \big) \nabla \nabla u_0(x) \varphi(x) d x \, dz \\ \displaystyle + \frac12 \int\limits_{\mathbb R^d}\! \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) \varphi(x) - \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z) \big) d x \, dz, \end{array} \end{equation} here and in what follows $z\theta(z)\nabla\nabla u_0(x)$ stands for $z^i\theta^j(z)\partial_{x_i}\partial_{x_j}u_0(x)$. The field $\zeta_{-z} (\eta, \omega)= \theta(\eta -z,\omega) - \theta (\eta,\omega)$ is stationary for any given $z$, and \begin{equation}\label{PL1} \int\limits_{\mathbb R^d} a (z) z \otimes \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega ) \, dz \in (L^2(\Omega))^{d^2}. \end{equation} Indeed, in view of \eqref{thetaLM} and \eqref{zetatheta} by the Cauchy-Schwarz inequality we have $$
\int\limits_{\Omega}\bigg(\int\limits_{\mathbb R^d} |a (z) z \otimes \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )| \, dz\bigg)^2 d P(\omega) \le $$ $$
\alpha_2^2 \Big(\int\limits_{\mathbb R^d} a (z) |z|^2 dz \Big) \Big( \int\limits_{\mathbb R^d} \int\limits_{\Omega} a (z) \, |\theta(-z, \omega)|^2 dz d P(\omega) \Big) < \infty. $$
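The Cauchy-Schwarz step here uses the factorization $a(z)|z| = \big(a(z)|z|^2\big)^{1/2} \big(a(z)\big)^{1/2}$, which yields, for a.e. $\omega$,
$$
\Big( \int\limits_{\mathbb R^d} a(z)\, |z|\, |\zeta_{-z}(0,\omega)| \, dz \Big)^2 \ \le \ \Big( \int\limits_{\mathbb R^d} a(z)\, |z|^2 \, dz \Big) \Big( \int\limits_{\mathbb R^d} a(z)\, |\zeta_{-z}(0,\omega)|^2 \, dz \Big);
$$
the upper bound $\mu \le \alpha_2$, the relation \eqref{zetatheta} between $\zeta$ and $\theta$, and integration in $\omega$ then give the displayed estimate.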
Consequently, applying the ergodic theorem to the stationary field \eqref{PL1}, we obtain for the first integral in \eqref{I_2appr}, as $\varepsilon \to 0$, \begin{equation}\label{I2-stationary} \begin{array}{l} \displaystyle \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \zeta_{-z} (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \nabla \nabla u_0(x) \varphi(x) d x \, dz \ \to \\ \displaystyle \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega ) \} \nabla \nabla u_0(x) \varphi(x) d x \, dz = \int\limits_{\mathbb R^d} D_2 \, \nabla \nabla u_0 (x) \varphi(x) \, dx, \end{array} \end{equation} where we have used the notation \begin{equation}\label{D_2} D_2 = \frac12 \, \int\limits_{\mathbb R^d} a (z) z \otimes E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )\} \, dz. \end{equation} Denote the last integral on the right-hand side in \eqref{I_2appr} by $J_2^\varepsilon (\varphi)$: \begin{equation}\label{J2eps} J_2^\varepsilon (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) \varphi(x) - \nabla \nabla u_0(x - \varepsilon z) \varphi (x-\varepsilon z) \big) d x \, dz \end{equation} and consider this expression as a functional on $L^2(\mathbb R^d)$ acting on the function $\varphi$. In order to show that for each $\varepsilon>0$ the functional $J_2^\varepsilon$ is a bounded linear functional on $L^2(\mathbb R^d)$ we represent $J_2^\varepsilon$ as a combination $J_2^\varepsilon=J_2^{1,\varepsilon} - J_2^{2,\varepsilon} - J_2^{3,\varepsilon}$ with $J_2^{1,\varepsilon}$, $J_2^{2,\varepsilon}$ and $J_2^{3,\varepsilon}$ introduced below and estimate each of these functionals separately. By Proposition \ref{1corrector} a.s.
$ \theta (\frac{x}{\varepsilon},\omega)\in L^2_{\rm loc}(\mathbb R^d)$ for all $\varepsilon>0$. Therefore, $$ J_2^{1,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x) \varphi(x) d x \, dz $$ is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Similarly, $$ J_2^{2,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}-z, \omega ) \nabla \nabla u_0(x-\varepsilon z) \varphi(x-\varepsilon z) d x \, dz $$ is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Due to \eqref{thetaLM} and by the Birkhoff ergodic theorem the linear functional $$ J_2^{3,\varepsilon} (\varphi) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \Big( \theta (\frac{x}{\varepsilon}, \omega )-
\theta (\frac{x}{\varepsilon}-z, \omega )\Big) \nabla \nabla u_0(x-\varepsilon z) \varphi(x-\varepsilon z) d x \, dz $$ $$ = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}+z, \omega ) \mu (\frac{x}{\varepsilon} , \omega ) \, \Big( \theta (\frac{x}{\varepsilon}+z, \omega )-
\theta (\frac{x}{\varepsilon}, \omega )\Big) \nabla \nabla u_0(x) \varphi(x) d x \, dz $$ $$ = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}+z, \omega ) \mu (\frac{x}{\varepsilon} , \omega ) \, \theta
(z,T_{\frac{x}{\varepsilon}} \omega ) \nabla \nabla u_0(x) \varphi(x) d x \, dz $$ is a.s. a bounded linear functional on $L^2(\mathbb R^d)$. Since $J_2^{\varepsilon} (\varphi) =J_2^{1,\varepsilon} (\varphi) - J_2^{2,\varepsilon} (\varphi) - J_2^{3,\varepsilon} (\varphi)$, the desired boundedness of $J_2^{\varepsilon}$ follows. Then by the Riesz representation theorem, for a.e. $\omega$ there exists a function $f_2^\varepsilon = f_2^\varepsilon(u_0) \in L^2(\mathbb R^d)$ such that $J_2^\varepsilon(\varphi) = (f_2^\varepsilon,\varphi)$. We emphasize that we do not claim here that the norm of $J_2^\varepsilon$ admits an estimate uniform in $\varepsilon$.
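For the reader's convenience, the way the three functionals recombine can be checked directly: since
$$
\theta \big(\frac{x}{\varepsilon} - z, \omega \big) + \Big( \theta \big(\frac{x}{\varepsilon}, \omega \big) - \theta \big(\frac{x}{\varepsilon} - z, \omega \big) \Big) = \theta \big(\frac{x}{\varepsilon}, \omega \big),
$$
the sum $J_2^{2,\varepsilon}(\varphi) + J_2^{3,\varepsilon}(\varphi)$ equals $\frac12 \int\int a(z) z \, \mu(\frac{x}{\varepsilon},\omega) \mu(\frac{x}{\varepsilon}-z,\omega)\, \theta(\frac{x}{\varepsilon},\omega)\, \nabla\nabla u_0(x-\varepsilon z)\varphi(x-\varepsilon z)\, dx\, dz$, and subtracting this from $J_2^{1,\varepsilon}(\varphi)$ reproduces \eqref{J2eps}.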
Next we show that the contribution of $f_2^\varepsilon$ to $w^\varepsilon$ is vanishing. To this end consider the function (additional corrector) \begin{equation}\label{corr-u2} u_2^\varepsilon (x,\omega) = (-L^\varepsilon +m)^{-1} f_2^\varepsilon (x, \omega). \end{equation}
\begin{lemma}\label{l_u2small}
$\| u_2^\varepsilon\|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$ for a.e. $\omega$. \end{lemma}
\begin{proof} Taking $\varphi = u_2^\varepsilon$ we get \begin{equation}\label{L1}
((-L^\varepsilon +m) u_2^\varepsilon, u_2^\varepsilon) = (f_2^\varepsilon, u_2^\varepsilon). \end{equation} Considering \eqref{L_eps} the left-hand side of \eqref{L1} can be rearranged as follows: \begin{equation}\label{L1-LHS} \begin{array}{l} \displaystyle - \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x)) dz \, u_2^\varepsilon (x) dx + m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx \\ \displaystyle = \, \frac12 \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x))^2 dz dx + m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx. \end{array} \end{equation} We denote $$ G_1^2 = \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) ( u_2^\varepsilon (x-\varepsilon z) - u_2^\varepsilon(x))^2 dz dx, \quad G_2^2= m \int\limits_{\mathbb R^d} (u_2^\varepsilon)^2 (x) dx. 
$$ It follows from \eqref{J2eps} that the right-hand side of \eqref{L1} takes the form \begin{equation}\label{L1-RHS} \begin{array}{l} \displaystyle J_2^\varepsilon (u_2^\varepsilon) = \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) u_2^\varepsilon(x) - \nabla \nabla u_0(x - \varepsilon z) u_2^\varepsilon (x-\varepsilon z) \big) d x \, dz \\ \displaystyle = \frac12 \, \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big) d x \, dz \\[6mm] \displaystyle + \frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz =\! \frac12 (I_1 + I_2). \end{array} \end{equation}
It is proved in Proposition \ref{1corrector} that a.s. $\|\varepsilon \theta (\frac x\varepsilon,\omega)\|_{L^2(B)}\to 0$ as $\varepsilon\to0$ for any ball $B\subset\mathbb R^d$. By the Cauchy-Schwarz inequality we obtain the following upper bound for $I_1$: \begin{equation}\label{L1-RHS-I1} \begin{array}{l} \displaystyle I_1 \le \left( \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big)^2 d x \, dz \right)^{1/2} \\ \displaystyle
\left( \frac{1}{\varepsilon^2 } \, \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z)|z|^2 \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \varepsilon^2 \big|\theta (\frac{x}{\varepsilon}, \omega )\big|^2 (\nabla \nabla u_0(x))^2 d x \, dz \right)^{1/2} \\ \displaystyle \le \frac{1}{\varepsilon} \, o(1) \ \left(\frac12 \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \big( u_2^\varepsilon(x) - u_2^\varepsilon (x-\varepsilon z) \big)^2 d x \, dz \right)^{1/2} = G_1 \cdot o(1), \end{array} \end{equation} where $o(1)\to0$ as $\varepsilon\to0$. We turn to the second integral $I_2$. Let $B$ be a ball centered at the origin and such that $\mathrm{supp}(u_0)\subset B$, $\mathrm{dist}(\mathrm{supp}(u_0),\partial B)>1$. Then $$
\Big|\int\limits_{\mathbb R^d} \int\limits_{B} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big| $$ \begin{equation}\label{aaa1}
\leq C\int\limits_{\mathbb R^d} \int\limits_{B} \, a (z) |z|^2 \, \big|\varepsilon \theta (\frac{x}{\varepsilon}, \omega )\big|\,
| u_2^\varepsilon (x-\varepsilon z)| d x \, dz\le \| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \cdot o(1) = G_2 \cdot o(1). \end{equation} The integral over $B^c=\mathbb R^d\setminus B$ can be estimated in the following way: $$
\Big|\int\limits_{\mathbb R^d} \int\limits_{B^c} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \big( \nabla \nabla u_0(x) - \nabla \nabla u_0(x - \varepsilon z) \big) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big| $$ $$
= \Big|\int\limits_{\mathbb R^d} \int\limits_{B^c} \, a (z) z \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) \, \theta (\frac{x}{\varepsilon}, \omega ) \nabla \nabla u_0(x - \varepsilon z) u_2^\varepsilon (x-\varepsilon z) d x \, dz\Big|, $$ since $\nabla \nabla u_0(x)=0$ for $x\in B^c$. Moreover, for $x \in B^c$ the factor $\nabla \nabla u_0(x-\varepsilon z)$ vanishes unless $\varepsilon |z| > 1$, because ${\rm dist}({\rm supp}(u_0),\partial B)>1$; this explains the restriction $|z|\geq \frac1\varepsilon$ below: \begin{equation}\label{aaa2}
\leq C\int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{B^c} \, a (z) |z| \, \big| \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x - \varepsilon z)|\, |u_2^\varepsilon (x-\varepsilon z)|\, d x \, dz \end{equation} $$
\leq C\int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{\mathbb R^d} \, a (z) |z| \, \big| \theta (\frac{x}{\varepsilon}+z, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz $$ $$
\leq C\int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{\mathbb R^d} \, a (z) |z| \,\Big[ \big| \theta (\frac{x}{\varepsilon}+z, \omega )
- \theta (\frac{x}{\varepsilon}, \omega )\big|+\big| \theta (\frac{x}{\varepsilon}, \omega )\big|\Big]\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz. $$ We have $$
\int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{\mathbb R^d} \, a (z) |z| \,\big| \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz $$ $$ \leq
\int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) |z|^2 \,\big|\varepsilon \theta (\frac{x}{\varepsilon}, \omega )\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz
\leq G_2\cdot o(1) $$ and $$
\int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{\mathbb R^d} \, a (z) |z| \,\Big[ \big| \theta (\frac{x}{\varepsilon}+z, \omega )
- \theta (\frac{x}{\varepsilon}, \omega )\big|\Big]\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz $$ $$
\leq \int\limits_{|z|\geq \frac1\varepsilon} \int\limits_{\mathbb R^d} \, a (z) |z| \, \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|\,
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x \, dz $$ $$
\leq \left( \int\limits_{|z|\geq \frac1\varepsilon} \, a (z) |z|^2 \, dz \right)^{\frac12} \int\limits_{\mathbb R^d} \left( \int\limits_{\mathbb R^d} a(z) \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|^2 \, dz \right)^{\frac12}
|\nabla \nabla u_0(x)|\, |u_2^\varepsilon (x)|\, d x $$ $$
\leq o(1) \, \left( \int\limits_{\mathbb R^d} \, |u_2^\varepsilon (x)|^2 \, dx \right)^{\frac12} \left( \int\limits_{\mathbb R^d} \left( \int\limits_{\mathbb R^d} a(z) \big| \zeta_z (T_{\frac{x}{\varepsilon}}\omega )
\big|^2 \, dz \right)
|\nabla \nabla u_0(x)|^2\, d x \right)^{\frac12} = G_2\cdot o(1). $$ Since $\zeta_z (\omega) \in L^2_M$, the second integral on the right-hand side converges to a constant by the ergodic theorem.
Combining the last two estimates we conclude that the term on the right-hand side in \eqref{aaa2} does not exceed $G_2\cdot o(1)$. Therefore, considering \eqref{aaa1}, we obtain $I_2\leq G_2\cdot o(1)$. This estimate and \eqref{L1-RHS-I1} imply that $$ G_1^2 + G_2^2 = \frac12 (I_1 + I_2) \le (G_1 + G_2) \cdot o(1). $$
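The conclusion now follows from an elementary inequality:
$$
(G_1 + G_2)^2 \ \le \ 2\, (G_1^2 + G_2^2) \ \le \ 2\, (G_1 + G_2) \cdot o(1), \quad \mbox{hence} \quad G_1 + G_2 \ \le \ 2\, o(1).
$$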
Consequently, $G_1 \to 0$ and $G_2 = m^{1/2} \| u_2^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$. The lemma is proved. \end{proof}
Thus we can rewrite $I^\varepsilon_0$ (all the terms of the order $\varepsilon^{0}$) as follows: \begin{equation}\label{VV} I^\varepsilon_0 = (D_1 - D_2) \cdot \nabla\nabla u_0 + f_2^\varepsilon + S(\frac{x}{\varepsilon}, \omega) \cdot \nabla\nabla u_0, \qquad S(\frac{x}{\varepsilon}, \omega) = \Psi_1(\frac{x}{\varepsilon}, \omega) - \Psi_2(\frac{x}{\varepsilon}, \omega), \end{equation} where the matrices $D_1$ and $D_2$ are defined in \eqref{J_1} and \eqref{D_2}, respectively, and $ S(\frac{x}{\varepsilon}, \omega), \Psi_1(\frac{x}{\varepsilon}, \omega), \Psi_2(\frac{x}{\varepsilon}, \omega)$ are stationary fields with zero mean which are given by \begin{equation}\label{Psi-1} \Psi_1(\frac{x}{\varepsilon}, \omega) = \frac12 \int\limits_{\mathbb R^d} \, a (z)\, z\otimes z \Big[ \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) -
E\{ \mu ( 0, \omega ) \mu ( -z, \omega ) \} \Big] dz, \end{equation} \begin{equation}\label{Psi-2}
\Psi_2(\frac{x}{\varepsilon}, \omega) = \frac12 \int\limits_{\mathbb R^d} \, a (z) z \Big[ \zeta_{-z} (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega ) -
E\{ \zeta_{-z} (0, \omega) \mu ( 0, \omega ) \mu ( -z, \omega )\} \Big] dz.
\end{equation} Denote \begin{equation}\label{u3} u_3^\varepsilon(x,\omega) = (-L^\varepsilon+m)^{-1} F^\varepsilon(x,\omega), \quad \mbox{where } \; F^\varepsilon(x, \omega) = S(\frac{x}{\varepsilon}, \omega) \cdot \nabla\nabla u_0(x). \end{equation} Since $ {\rm supp} \, u_0 \subset B$ is a bounded subset of $\mathbb R^d$ and $$
\int\limits_{\mathbb R^d} \, a (z) |z|\, \big|\zeta_{-z} ( \omega )\big| \,dz \in L^2(\Omega), $$
the Birkhoff theorem yields $F^\varepsilon \in L^2(\mathbb R^d)$ a.s., and hence $u_3^\varepsilon \in L^2(\mathbb R^d)$. Our goal is to prove that $\|u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0$ as $\varepsilon \to 0$. We first show that the family $\{u_3^\varepsilon\}$ is bounded in $L^2(\mathbb R^d)$. \begin{lemma}\label{Bound}
The family of functions $u_3^\varepsilon$ defined by \eqref{u3} is uniformly bounded in $L^2(\mathbb R^d)$ for a.e. $\omega$: $\|u_3^\varepsilon \|_{L^2(\mathbb R^d)} \le C$ for any $0<\varepsilon<1$. \end{lemma}
\begin{proof}
Since the operator $(-L^\varepsilon+m)^{-1}$ is bounded ($\| (-L^\varepsilon+m)^{-1} \| \le \frac{1}{m}$), it is sufficient to prove that $\| F^\varepsilon(x,\omega) \|_{L^2(\mathbb R^d)} \le C$ uniformly in $\varepsilon$. By the Birkhoff ergodic theorem the functions $ \Psi_1(\frac{x}{\varepsilon}, \omega)$ and $\Psi_2(\frac{x}{\varepsilon}, \omega)$ a.s. converge to zero weakly in $L^2(B)$, and so does $S(\frac{x}{\varepsilon}, \omega)$. Then $S(\frac{x}{\varepsilon}, \omega)\cdot \nabla\nabla u_0$ a.s. converges to zero weakly in $L^2(\mathbb R^d)$. This implies the desired boundedness.
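More precisely, a weakly convergent family in $L^2(\mathbb R^d)$ is bounded in norm by the uniform boundedness principle, so that a.s. $\| F^\varepsilon \|_{L^2(\mathbb R^d)} \le C$ for all $0<\varepsilon<1$, and consequently
$$
\| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \ = \ \| (-L^\varepsilon + m)^{-1} F^\varepsilon \|_{L^2(\mathbb R^d)} \ \le \ \frac{1}{m}\, \| F^\varepsilon \|_{L^2(\mathbb R^d)} \ \le \ \frac{C}{m}.
$$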
\end{proof}
\begin{lemma}\label{Convergence} For any cube $B$ centered at the origin,
$\|u_3^\varepsilon \|_{L^2(B)} \ \to \ 0$ as $\varepsilon \to 0$ for a.e. $\omega$. \end{lemma}
\begin{proof} The first step of the proof is to show that any sequence $\{u_3^{\varepsilon_j} \}$, $\varepsilon_j \to 0$, is compact in $L^2(B)$. Using definition \eqref{u3} we have $$ ( (-L^\varepsilon+m) u_3^\varepsilon, u_3^\varepsilon) \ = \ ( F^\varepsilon, u_3^\varepsilon). $$ The left-hand side of this relation can be rewritten as \begin{equation}\label{L2-rhs} \begin{array}{l} \displaystyle \int\limits_{\mathbb R^d} (-L^\varepsilon+m) u_3^\varepsilon(x) u_3^\varepsilon(x) dx \\ \displaystyle = \, m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx - \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x)) u_3^\varepsilon (x) dz dx \\ \displaystyle = \, m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx + \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx. \end{array} \end{equation} Consequently we obtain the following equality \begin{equation}\label{u3-main}
m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx + \frac{1}{2 \varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx = ( F^\varepsilon, u_3^\varepsilon). \end{equation} Considering the uniform boundedness of $F^\varepsilon$ and $ u_3^\varepsilon$, see Lemma \ref{Bound}, we immediately conclude that \begin{equation}\label{C-main} \frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) \, \mu (\frac{x}{\varepsilon}, \omega ) \mu (\frac{x}{\varepsilon} -z, \omega) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx < K \end{equation} uniformly in $\varepsilon$ and for a.e. $\omega$. Therefore, \begin{equation}\label{C-main_pure}
m \int\limits_{\mathbb R^d} (u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) ( u_3^\varepsilon (x-\varepsilon z) - u_3^\varepsilon(x))^2 dz dx < K, \end{equation} where we have used the lower bound $\mu \ge \alpha_1 > 0$ and, with some abuse of notation, denoted the new constant again by $K$.
For the sake of definiteness assume that $B=[-1,1]^d$. Cubes of other sizes can be considered in exactly the same way. Let $\phi(s)$ be an even $C_0^\infty(\mathbb R)$ function such that $0\leq \phi\leq 1$, $\phi(s)=1$ for $|s|\leq 1$,
$\phi(s)=0$ for $|s|\geq 2$, and $|\phi'(s)|\leq 2$. Denote $\tilde u_3^\varepsilon(x)= \phi(|x|)u_3^\varepsilon(x)$.
It is straightforward to check that
\begin{equation}\label{C-main_modi1}
m \int\limits_{\mathbb R^d} (\tilde u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) (\tilde u_3^\varepsilon (x-\varepsilon z) - \tilde u_3^\varepsilon(x))^2 dz dx < K, \end{equation} possibly with a new constant $K$ independent of $\varepsilon$.
We also choose $\mathcal{R}$ in such a way that $\int_{|z|\leq \mathcal{R}}a(z)dz\geq \frac12$ and introduce $$
\tilde a(z) ={\bf 1}_{\{|z|\leq \mathcal{R}\}}\,a(z)\,\Big(\int_{|z|\leq \mathcal{R}}a(z)dz\Big)^{-1}. $$ Then
\begin{equation}\label{C-main_cut}
m \int\limits_{\mathbb R^d} (\tilde u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, \tilde a (z) (\tilde u_3^\varepsilon (x-\varepsilon z) - \tilde u_3^\varepsilon(x))^2 dz dx < K. \end{equation} Letting $\tilde B = [-\pi, \pi]^d$, we denote by $\hat u_3^\varepsilon(x)$ the $\tilde B$-periodic extension of $\tilde u_3^\varepsilon(x)$; this extension is well defined because $\tilde u_3^\varepsilon$ is supported in $[-2,2]^d \subset \tilde B$. For the extended function we have
\begin{equation}\label{C-main_per}
m \int\limits_{\tilde B} (\hat u_3^\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\tilde B} \int\limits_{\mathbb R^d} \, \tilde a (z) (\hat u_3^\varepsilon (x-\varepsilon z) - \hat u_3^\varepsilon(x))^2 dz dx < K. \end{equation} The functions $e_k(x) = \frac{1}{(2 \pi)^{d/2}} e^{ik\cdot x}, \; k \in \mathbb Z^d$, form an orthonormal basis in $L^2(\tilde B)$, and $$ \hat u_3^\varepsilon(x) = \sum_k \alpha_k^\varepsilon e_k(x), \quad \hat u_3^\varepsilon (x-\varepsilon z) = \sum_k \alpha_k^\varepsilon e^{-i\varepsilon kz} e_k(x); $$ $$
\| \hat u_3^\varepsilon(x)\|^2 = \sum_k (\alpha_k^\varepsilon)^2, \quad \|\hat u_3^\varepsilon (x-\varepsilon z) -
\hat u_3^\varepsilon(x) \|^2 =\sum_k (\alpha_k^\varepsilon)^2 |e^{-i\varepsilon k z} - 1|^2. $$ Then inequality \eqref{C-main_per} yields the following bound \begin{equation}\label{AAA1}
\frac{1}{\varepsilon^2} \sum_k (\alpha_k^\varepsilon)^2 \, \int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz < C. \end{equation}
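Note that
$$
|e^{-i\varepsilon k z} - 1|^2 \ = \ 4 \sin^2 \Big( \frac{\varepsilon\, k \cdot z}{2} \Big),
$$
so the integrand in \eqref{AAA1} is of order $\varepsilon^2 (k \cdot z)^2$ when $\varepsilon |k|\,|z|$ is small and stays bounded otherwise; this dichotomy is quantified in the following lemma.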
\begin{lemma}\label{Propc1c2} There exist constants $C_1, C_2>0$ (depending on $d$ and $\tilde a$) such that for any $k \in \mathbb Z^d$ and any $0<\varepsilon<1$ \begin{equation}\label{A2}
\int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz \ge \min \{ C_1 |k|^2 \varepsilon^2, \ C_2 \}. \end{equation} \end{lemma} \begin{proof}
For small $\varepsilon |k|$, the lower bound $C_1 k^2 \varepsilon^2$ follows from the Taylor expansion of $e^{-i \varepsilon k z}$ in a neighborhood of $0$. For $\varepsilon |k|\ge \varkappa_0>1$ we use the following inequality: $$
\int\limits_{\mathbb R^d} \, \tilde a (z) |e^{-i\varepsilon k z} - 1|^2 dz \ge c_0 \int\limits_{[0,1]^d} |e^{-i\varepsilon k z} - 1|^2 dz \ge c_0 \big(2-\frac{2}{\varkappa_0}\big)^d. $$ \end{proof}
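As an illustration of Lemma \ref{Propc1c2} (not part of the proof), the bound \eqref{A2} can be checked numerically in dimension $d=1$; the kernel $\tilde a$ and the constants $C_1$, $C_2$ below are assumptions chosen for this example.

```python
import numpy as np

# Illustrative check of the lower bound (A2) in d = 1, for an assumed kernel:
# \tilde a(z) uniform on [-R, R], R = 1 (normalized to integrate to 1).
R = 1.0
dz = 2 * R / 20000
z = np.arange(-R + dz / 2, R, dz)            # midpoint grid on [-R, R]
a = np.full_like(z, 1.0 / (2 * R))

def lhs(eps, k):
    # Riemann-sum approximation of \int \tilde a(z) |e^{-i eps k z} - 1|^2 dz
    return float(np.sum(a * np.abs(np.exp(-1j * eps * k * z) - 1) ** 2) * dz)

# For small eps*|k| the integrand is ~ (eps k z)^2, so the integral behaves like
# (eps k)^2 \int \tilde a(z) z^2 dz = (eps k)^2 R^2 / 3; C1 = R^2/6 leaves a margin.
C1, C2 = R ** 2 / 6.0, 0.5
for eps in (0.01, 0.1, 0.5):
    for k in (1, 5, 50):
        assert lhs(eps, k) >= min(C1 * (k * eps) ** 2, C2)
```

The two regimes of the minimum are visible directly: for $\varepsilon|k|\ll1$ the integral is quadratic in $\varepsilon k$, while for $\varepsilon|k|$ bounded away from zero it stays above a fixed constant.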
Let us consider a sequence $\varepsilon_j \to 0$. Using inequalities \eqref{AAA1}--\eqref{A2} we now construct, for any $\delta>0$, a finite $2 \delta$-net covering all elements of the sequence $\hat u_3^{\varepsilon_j}$. Given $\delta>0$, we choose $k_0$ and $j_0$ such that \begin{equation}\label{A3}
\frac{C}{\delta} < C_1 |k_0|^2 < \frac{C_2}{\varepsilon_{j_0}^2}, \end{equation} where $C,\, C_1, \, C_2$ are the same constants as in \eqref{AAA1}-\eqref{A2}. Then it follows from \eqref{AAA1}-\eqref{A3} that $$
\sum_{k:|k| \ge |k_0|} C_1 |k_0|^2 (\alpha_k^{\varepsilon_j})^2 < \sum_{k: |k| \ge |k_0|} \min \Big\{ C_1 |k|^2, \, \frac{C_2}{\varepsilon_j^2} \Big\} \, (\alpha_k^{\varepsilon_j})^2 < C \quad \mbox{ for any } \; j>j_0. $$ Consequently we obtain the uniform bound on the tails of $\hat u_3^{\varepsilon_j}$ for all $j>j_0$: \begin{equation}\label{A4}
\sum_{k:|k| \ge |k_0|} (\alpha_k^{\varepsilon_j})^2 < \frac{C}{C_1 |k_0|^2} < \delta. \end{equation}
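The role of the choice \eqref{A3} in producing the uniform tail bound \eqref{A4} can be illustrated with a short numerical sketch; the constants and the randomly generated coefficient sequence below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch of the tail estimate (A4) in d = 1, with assumed constants.
C, C1, C2 = 1.0, 0.5, 2.0
eps = 0.01
ks = np.arange(-200, 201)

# Draw random Fourier coefficients, then rescale them so that the weighted bound
# coming from (AAA1)-(A2), sum_k min(C1 k^2, C2/eps^2) alpha_k^2 < C, holds.
alpha = rng.normal(size=ks.size)
w = np.minimum(C1 * ks.astype(float) ** 2, C2 / eps ** 2)
alpha *= np.sqrt(0.99 * C / np.sum(w * alpha ** 2))

# Choose k0 with C/delta < C1 k0^2 < C2/eps^2, as in (A3); then the tail beyond
# |k0| is uniformly small, which is the content of (A4).
delta = 0.05
k0 = 7
assert C / delta < C1 * k0 ** 2 < C2 / eps ** 2
tail = float(np.sum(alpha[np.abs(ks) >= k0] ** 2))
assert tail < C / (C1 * k0 ** 2) < delta
```

Any coefficient sequence satisfying the weighted bound obeys the same tail estimate, which is exactly what makes the finite-dimensional projection argument below work uniformly in $j$.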
Denote by ${\cal H}_{k_0} \subset L^2(\tilde B)$ the linear span of the basis vectors $\{ e_k, \ |k|<|k_0| \}$; evidently, it is a finite-dimensional subspace. Then we have $$
\hat u_3^\varepsilon = w_{k_0}^\varepsilon + \sum_{k:|k| \ge |k_0|} \alpha_k^{\varepsilon} e_k, \quad \mbox{ where } \; w_{k_0}^\varepsilon = P_{{\cal H}_{k_0}} \hat u_3^\varepsilon. $$ Since we already know from Lemma \ref{Bound} that the functions $\hat u_3^{\varepsilon_j}$ are uniformly bounded in $L^2(\tilde B)$, the functions $w_{k_0}^{\varepsilon_j}$ are also uniformly bounded. Therefore there exists in ${\cal H}_{k_0}$ a finite $\delta$-net covering the functions $\{ w_{k_0}^{\varepsilon_j}, \, j>j_0 \}$. Estimate \eqref{A4} implies that the same net is a $2 \delta$-net for the functions $\{\hat u_3^{\varepsilon_j}, \, j>j_0 \}$. Finally, we add to this net $j_0$ further elements to cover the first $j_0$ functions $\hat u_3^{\varepsilon_j}, \, j=1, \ldots, j_0$.
Thus, for any $\delta>0$ we have constructed a finite $2 \delta$-net, which proves the compactness of the family $\{\hat u_3^{\varepsilon} \}$ in $L^2(\tilde B)$ as $\varepsilon \to 0$.
Since $u_3^{\varepsilon}(x)=\hat u_3^{\varepsilon}(x)$ for $x\in B$, we conclude that the family $\{u_3^{\varepsilon}\}$ is compact in $L^2(B)$. In the same way one can show that this family is compact on any cube $B=[-L,L]^d$. This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{l_u3small}
The following limit relation holds: $\|u_3^\varepsilon\|_{L^2(\mathbb R^d)}\to 0$, as $\varepsilon\to0$. \end{lemma} \begin{proof} We go back to formula \eqref{u3-main}. On the right-hand side of this equality we have the inner product of $F^\varepsilon$ and $u_3^\varepsilon$. Since $F^\varepsilon \rightharpoonup 0$ weakly in $L^2(B)$ and the family $u_3^\varepsilon$ is compact in $L^2(B)$, the inner product $(F^\varepsilon, u_3^\varepsilon) \to 0$ as $\varepsilon \to 0$.
Therefore, both integrals on the left-hand side of \eqref{u3-main} also tend to zero as $\varepsilon \to 0$, and we obtain that $\| u_3^\varepsilon \|_{L^2(\mathbb R^d)} \to 0, \ \varepsilon \to 0$. \end{proof}
Denote $\Theta = D_1 - D_2$, where $D_1, \, D_2$ are the matrices defined by \eqref{J_1} and \eqref{D_2}. Our next goal is to show that $\Theta$ is positive definite.
\begin{proposition} The matrix $\Theta = D_1 - D_2$ is positive definite: \begin{equation}\label{Positive}
\Theta \ = \ \frac12 \, \int\limits_{\mathbb R^d} \int\limits_{\Omega} \big(z\otimes z - z \otimes \zeta_{-z} (0, \omega ) \big) \, a (z) \, \mu (0, \omega ) \mu ( -z, \omega) \, dz \, d P(\omega) > 0. \end{equation} \end{proposition}
\begin{proof} We recall that $\varkappa^\delta(\omega)$ stands for a unique solution of equation \eqref{A-delta}. Letting $\varkappa_\eta^\delta(\omega)=\eta\cdot\varkappa^\delta(\omega)$, $\eta\in\mathbb R^d\setminus \{0\}$, one can easily obtain \begin{equation}\label{Prop2_eta} \begin{array}{c} \displaystyle \delta \int\limits_\Omega \big(\varkappa_\eta^\delta(\omega)\big)^2\mu(\omega)\, dP(\omega)
- \int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big) \varkappa_\eta^\delta ( \omega)\mu(\omega) \, dz \, dP(\omega) \\ \displaystyle
= \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z) \varkappa_\eta^\delta(\omega) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega). \end{array} \end{equation} In the same way as in the proof of Proposition \ref{spectrA}, we derive the following relation: \begin{equation}\label{Prop2_etabis} \begin{array}{c} \displaystyle \delta \int\limits_\Omega \big(\varkappa_\eta^\delta(\omega)\big)^2\mu(\omega)\, dP(\omega)
+\frac12\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big)^2\mu(\omega) \, dz \, dP(\omega) \\ \displaystyle
= - \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z)\big( \varkappa_\eta^\delta (T_z \omega ) - \varkappa_\eta^\delta ( \omega) \big) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega). \end{array} \end{equation} According to \eqref{theta} the sequence $\varkappa_\eta^{\delta_j} (T_z \omega ) - \varkappa_\eta^{\delta_j} ( \omega)$ converges weakly in $L^2_M$, as $\delta_j\to 0$, to $\eta\cdot\theta(z,\omega)$. Passing to the limit $\delta_j\to0$ in relation \eqref{Prop2_etabis} and using the lower semicontinuity of the $L^2_M$ norm with respect to the weak topology, we arrive at the following inequality: \begin{equation}\label{est_ineq} \frac12\int\limits_{\mathbb R^d} \int\limits_\Omega a (z) \mu ( T_z \omega ) \big(\eta\cdot\theta(z,\omega) \big)^2\mu(\omega) \, dz \, dP(\omega) \leq - \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega (\eta\cdot z) a(z)\big( \eta\cdot\theta(z,\omega) \big) \mu(T_z \omega) \mu(\omega) \, dz \, dP(\omega). \end{equation} Therefore, $$ \Theta \eta\cdot\eta= \frac12 \, \eta_i\eta_j\int\limits_{\mathbb R^d} \int\limits_{\Omega} \big(z^i z^j - z^i \zeta^j_{-z} (0, \omega ) \big) \, a (z) \, \mu (0, \omega ) \mu ( -z, \omega) \, dz \, d P(\omega) $$ $$ =\frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega \big((\eta\cdot z)^2+(\eta\cdot z) (\eta\cdot \theta(z,\omega))\big) \, a (z) \, \mu (0, \omega ) \mu ( z, \omega) \, dz \, d P(\omega). $$ Combining the latter relation with \eqref{est_ineq} we obtain $$ \Theta \eta\cdot\eta\geq \frac12 \int\limits_{\mathbb R^d} \int\limits_\Omega \big((\eta\cdot z)+ (\eta\cdot \theta(z,\omega))\big)^2 \, a (z) \, \mu (0, \omega ) \mu ( z, \omega) \, dz \, d P(\omega). $$ Since $\theta(z, \omega)$ is a.s. a function of sublinear growth in $z$, we have $ \eta\cdot\theta(z, \omega) \not \equiv -\eta\cdot z$, and consequently the integral on the right-hand side is strictly positive. This yields the desired positive definiteness. \end{proof}
\section{Estimation of the remainder $ \phi_\varepsilon $}\label{s_estrem}
In this section we consider the remainder $ \phi_\varepsilon (x, \omega)$ given by (\ref{14}) and prove that $\|\phi_\varepsilon\|_{L^2(\mathbb R^d)}$ vanishes a.s.\ as $\varepsilon \to 0$.
\begin{lemma}\label{reminder}
Let $u_0 \in {\cal{S}}(\mathbb R^d)$. Then a.s. \begin{equation}\label{fi}
\| \phi_\varepsilon (\cdot, \omega) \|_{L^2(\mathbb R^d)} \ \to \ 0 \quad \mbox{ as } \; \varepsilon \to 0. \end{equation} \end{lemma}
\begin{proof} The first term in (\ref{14}) can be written as $$ \phi_\varepsilon^{(1)} (x, \omega) = \int\limits_{\mathbb R^d} dz \ a (z) \mu \Big( \frac{x}{\varepsilon}, \omega \Big) \mu \Big( \frac{x}{\varepsilon} -z, \omega \Big) \int_0^{1} \ \Big( \nabla \nabla u_0(x - \varepsilon z t) - \nabla \nabla u_0(x) \Big) z \otimes z (1-t) \ dt. $$ It does not depend on the random corrector $\theta$ and can be treated exactly in the same way as in \cite[Proposition 5]{PiZhi17}. Thus we have \begin{equation}\label{phi_1bis}
\| \phi_\varepsilon^{(1)} \|_{L^2(\mathbb R^d)} \to 0 \quad \mbox{ as } \; \varepsilon \to 0. \end{equation} Let us denote by $\phi_\varepsilon^{(2)}$ the sum of the second and the third terms in (\ref{14}): \begin{equation}\label{reminder-2} \begin{array}{rl} \displaystyle \!\!\!\!&\hbox{ }\!\!\!\!\!\!\!\!\!\!\!\!\phi_\varepsilon^{(2)} (x, \omega) =\\[3mm] &\!\!\!\!\!\!\!\!\! \displaystyle \mu \big( \frac{x}{\varepsilon},\omega \big) \int\limits_{\mathbb R^d} \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz. \end{array} \end{equation}
We take $L>0$ sufficiently large so that ${\rm supp}\, u_0 \subset \{|x|<\frac12 L \}$ and estimate $\phi_\varepsilon^{(2)} (x, \omega)$ separately in the sets $\{|x|<L\}$ and $\{|x|>L\}$. If $|x|>L$, then $u_0(x) = 0$. Since $a(z)$ has a finite second moment in $\mathbb R^d$, for any $c>0$ we have \begin{equation}\label{ineqz2}
\frac{1}{\varepsilon^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) \, dz = \frac{1}{\varepsilon^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) \frac{z^2}{z^2} \, dz \le \frac{1}{c^2} \int\limits_{|z|> \frac{c}{\varepsilon}} a (z) z^2 \, dz \to 0 \quad \mbox{as } \; \varepsilon \to 0. \end{equation} Therefore, \begin{equation}\label{r-2out} \begin{array}{l} \displaystyle
\| \phi_\varepsilon^{(2)} \, \chi_{|x|>L} \|^2_{L^2(\mathbb R^d)}
=\!\! \int\limits_{|x|>L} \Big(\!\! \int\limits_{|x - \varepsilon z|< \frac12 L}\!\! \frac{1}{\varepsilon} \mu \big( \frac{x}{\varepsilon},\omega \big) a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \nabla u_0(x- \varepsilon z) \, dz \Big)^2 dx \\[3mm] \displaystyle < \alpha_2^4 \, \Big( \frac{1}{\varepsilon^2}
\int\limits_{|z|> \frac{L}{2\varepsilon}} \ a (z) \, dz \, \Big)^2 \|\varepsilon \theta \big(\frac{y}{\varepsilon},\omega \big)
\nabla u_0(y)\|_{L^2(\mathbb R^d)}^2 \to 0. \end{array} \end{equation}
Here we have also used the limit relation $\| \varepsilon \theta \big(\frac{y}{\varepsilon},\omega) \nabla u_0(y) \|_{L^2(\mathbb R^d)} \to 0$ that is ensured by Proposition \ref{1corrector}. Denote $\chi_{<L}(x) = \chi_{\{|x|<L\}}(x)$ and represent the function $\phi_\varepsilon^{(2)} (x,\omega) \, \chi_{<L}(x)$ as follows: \begin{equation}\label{r-2in-bis} \phi_\varepsilon^{(2)} (x, \omega) \, \chi_{<L} (x) = \gamma_\varepsilon^{<} (x, \omega) + \gamma_\varepsilon^{>} (x, \omega), \end{equation} where \begin{equation}\label{r-2in} \begin{array}{l} \displaystyle \gamma_\varepsilon^{<} (x, \omega) =\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x)\\[3mm]
\displaystyle
\times\int\limits_{|\varepsilon z|< 2L } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz; \\[9mm] \displaystyle \gamma_\varepsilon^{>} (x, \omega) = \mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \\[3mm]
\displaystyle
\times\int\limits_{|\varepsilon z|> 2L } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) \Big)\, dz. \end{array} \end{equation}
Since $u_0\in C_0^\infty(\mathbb R^d)$, the Taylor expansion applies to $\nabla u_0 (x- \varepsilon z)$, and we get $$ \frac{1}{\varepsilon} \big(\nabla u_0 (x- \varepsilon z) - \nabla u_0(x)\big) + z \, \nabla \nabla u_0(x) = \frac{\varepsilon}{2} \nabla\nabla\nabla u_0 (\xi)\, z \otimes z $$ with some $\xi \in \mbox{supp} \, u_0$; here the notation $\nabla\nabla\nabla u_0 (\xi)\, z \otimes z$ is used for the vector function
$(\nabla\nabla\nabla u_0 (\xi)\, z \otimes z)^i=\partial_{x^j}\partial_{x^k}\partial_{x^i}u_0(\xi)z^jz^k$. Then the right-hand side of the first formula in \eqref{r-2in} admits the estimate \begin{equation}\label{r-2in1} \begin{array}{l} \displaystyle
\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \Big|\!\!\!\int\limits_{|\varepsilon z|< 2L } \!\!\!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( \frac{1}{\varepsilon} \big(\nabla u_0(x- \varepsilon z) - \nabla u_0(x)\big) + z \nabla \nabla u_0(x)\! \Big) dz \Big| \\[3mm] \displaystyle
\le \frac{\alpha_2^2}{2} \max | \nabla\nabla\nabla u_0 | \int\limits_{\mathbb R^d } \, \varepsilon | \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big)| \, \chi_{<3L}(x-\varepsilon z) \, a (z) z^2 \, dz. \end{array} \end{equation} Taking into account the relation \begin{equation}\label{r-2in1add} \begin{array}{l} \displaystyle
\int\limits_{\mathbb R^d } \Big( \int\limits_{\mathbb R^d } \, \varepsilon | \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big)| \, \chi_{<3L}(x-\varepsilon z) \, a (z) z^2 \, dz \Big)^2 dx \\[3mm] \displaystyle
= \int\limits_{\mathbb R^d } a (z_1) z_1^2 dz_1 \int\limits_{\mathbb R^d } a (z_2) z_2^2 dz_2 \int\limits_{\mathbb R^d } \varepsilon^2 | \theta \big(\frac{x}{\varepsilon}\!-\!z_1,\omega \big)| | \theta \big(\frac{x}{\varepsilon}\!-\!z_2,\omega \big)| \chi_{<3L}(x-\varepsilon z_1) \chi_{<3L}(x-\varepsilon z_2) dx \end{array} \end{equation} and applying the Cauchy--Schwarz inequality to the last integral on its right-hand side
we conclude with the help of Proposition \ref{1corrector} that $\|\gamma_\varepsilon^{<} (x, \omega) \|_{L^2(\mathbb R^d) } \to 0$ as $\varepsilon \to 0$.
If $|x|<L$ and $|\varepsilon z|>2L$, then $|x-\varepsilon z|>L$, and $u_0 (x-\varepsilon z)=0$. The right-hand side of the second formula in \eqref{r-2in} can be rearranged as follows: \begin{equation}\label{r-2in2} \begin{array}{l} \displaystyle \gamma_\varepsilon^{>} (x, \omega) =
\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \int\limits_{|z|> \frac{2L}{\varepsilon} }\!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz \\[3mm] \displaystyle
=\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \!\! \int\limits_{|z|> \frac{2L}{\varepsilon} } \!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \big( \theta \big(\frac{x}{\varepsilon}\!-\!z,\omega \big) - \theta \big(\frac{x}{\varepsilon},\omega \big) \big) \Big(\!\! - \frac{1}{\varepsilon} \nabla u_0(x) + z \nabla \nabla u_0(x)\!\Big) dz \\[3mm] \displaystyle
+\mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \!\! \int\limits_{|z|> \frac{2L}{\varepsilon} } \!\!\!\! a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \theta \big(\frac{x}{\varepsilon},\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz \end{array} \end{equation}
The second term on the right-hand side in \eqref{r-2in2} is estimated in the same way as the function $\phi_\varepsilon^{(2)} \, \chi_{|x|>L}$ in \eqref{r-2out}.
Thus the $L^2(\mathbb R^d)$ norm of this term tends to 0 as $\varepsilon \to 0$.
The first term on the right-hand side of \eqref{r-2in2} admits the following upper bound: \begin{equation}\label{r-2in2bis} \begin{array}{l} \displaystyle
\Big| \mu \big( \frac{x}{\varepsilon},\omega \big) \chi_{<L}(x) \int\limits_{|z|> \frac{2L}{\varepsilon} } \ a (z) \mu \big( \frac{x}{\varepsilon} -z, \omega \big) \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big) \Big( - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x) \Big)\, dz \Big| \\[3mm] \displaystyle
\leq \alpha_2^2 \int\limits_{|z|> \frac{2L}{\varepsilon} } \ a (z) \Big| \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\Big|\ \Big| - \frac{1}{\varepsilon} \nabla u_0(x) + z \, \nabla \nabla u_0(x)\Big|\, dz \\[3mm] \displaystyle
\leq \alpha_2^2 C(L) \int\limits_{|z|> \frac{2L}{\varepsilon} } |z| a (z) \Big| \zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\Big| \, dz\ \big(\big| \nabla u_0(x)\big| + \big| \nabla \nabla u_0(x)\big|\big)
\\[3mm] \displaystyle
\leq \alpha_2^2 C(L) \Big(\int\limits_{|z|> \frac{2L}{\varepsilon} } |z|^2 a (z)dz\Big)^\frac12
\Big(\int\limits_{\mathbb R^d} a (z)
\big|\zeta_{-z} \big(T_{\frac{x}{\varepsilon}}\omega \big)\big|^2 \, dz\Big)^\frac12\ \big(\big| \nabla u_0(x)\big| + \big| \nabla \nabla u_0(x)\big|\big). \end{array} \end{equation} Since $\zeta_{-z} (\omega)\in L^2_M$, we have $$ \mathbb E\int\limits_{\mathbb R^d} a (z)
| \zeta_{-z} (\omega )|^2 \, dz<\infty. $$ Taking into account the convergence $$
\int\limits_{|z|> \frac{2L}{\varepsilon} } |z|^2 a (z)dz\to 0,\quad \hbox{as }\varepsilon\to0, $$ by the Birkhoff ergodic theorem we obtain that the $L^2(\mathbb R^d)$ norm of the first term on the right-hand side of \eqref{r-2in2}
tends to zero a.s., as $\varepsilon\to0$. Therefore, $\|\gamma_\varepsilon^{>} (x, \omega) \|_{L^2(\mathbb R^d) } \to 0$ as $\varepsilon \to 0$.
From \eqref{r-2in-bis} it follows that $\| \phi_\varepsilon^{(2)}(x,\omega) \chi_{<L} (x) \|_{L^2(\mathbb R^d)} \to 0$ as $ \varepsilon \to 0$, and together with \eqref{r-2out} this implies that \begin{equation}\label{rr}
\| \phi_\varepsilon^{(2)}(x,\omega) \|_{L^2(\mathbb R^d)} \to 0 \quad \mbox{as } \; \varepsilon \to 0. \end{equation} Finally, \eqref{fi} follows from \eqref{phi_1bis} and \eqref{rr}. The lemma is proved. \end{proof}
\section{Proof of the main results}\label{s_proofmain}
We begin this section by proving relation \eqref{convergence1} for $f\in \mathcal{S}_0(\mathbb R^d)$. For such $f$ we have $u_0\in C_0^\infty(\mathbb R^d)$. It follows from \eqref{v_eps}, Proposition \ref{1corrector} and Lemmas \ref{l_u2small}, \ref{l_u3small} that \begin{equation}\label{frstconv}
\|w^\varepsilon-u_0\|_{L^2(\mathbb R^d)}\to 0,\quad\hbox{as }\varepsilon\to 0. \end{equation} By the definition of $v^\varepsilon$, $u_2^\varepsilon$ and $u_3^\varepsilon$, $$ (L^\varepsilon-m)w^\varepsilon=(\hat L-m)u_0-m\varepsilon \theta \Big(\frac x\varepsilon\Big)\cdot\nabla u_0+\phi_\varepsilon =f-m\varepsilon \theta \Big(\frac x\varepsilon\Big)\cdot\nabla u_0+\phi_\varepsilon $$ $$ =(L^\varepsilon-m)u^\varepsilon-m\varepsilon \theta \Big(\frac x\varepsilon\Big)\cdot\nabla u_0+\phi_\varepsilon. $$ Therefore, $$ (L^\varepsilon-m)(w^\varepsilon-u^\varepsilon)=-m\varepsilon \theta \Big(\frac x\varepsilon\Big)\cdot\nabla u_0+\phi_\varepsilon. $$ According to Proposition \ref{1corrector} and Lemma \ref{reminder} the $L^2$ norm of the functions on the right-hand side of the last formula tends to zero as $\varepsilon\to0$. Consequently, $$
\|w^\varepsilon-u^\varepsilon\|_{L^2({\mathbb R^d})}\to 0,\quad\hbox{as }\varepsilon\to 0. $$ Combining this relation with \eqref{frstconv} yields the desired relation \eqref{convergence1} for $f\in\mathcal{S}_0(\mathbb R^d)$.
To complete the proof of Theorem \ref{T1}, it remains to show that the last convergence holds for any $f\in L^2(\mathbb R^d)$.
For any $f \in L^2(\mathbb R^d)$ there exists $f_\delta \in \mathcal{S}_0$ such that $\| f - f_\delta\|_{L^2(\mathbb R^d)} <\delta$. Since the operator $(L^\varepsilon - m)^{-1}$ is bounded uniformly in $\varepsilon$, we have \begin{equation}\label{delta_1}
\| u^{\varepsilon}_\delta - u^\varepsilon \|_{L^2(\mathbb R^d)} \le C_1 \delta, \qquad
\| u_{0,\delta} - u_0 \|_{L^2(\mathbb R^d)} \le C_1 \delta, \end{equation} where $$ u^{\varepsilon} \ = \ (L^{\varepsilon} - m)^{-1} f, \; \; u_{0} \ = \ (\hat L - m)^{-1} f, \; \; u^{\varepsilon}_\delta \ = \ (L^{\varepsilon} - m)^{-1} f_\delta, \; \; u_{0,\delta} \ = \ (\hat L - m)^{-1} f_\delta. $$
Recalling that $f_\delta\in\mathcal{S}_0$, we obtain $\| u^{\varepsilon}_\delta - u_{0, \delta} \|_{L^2(\mathbb R^d)} \to 0 $ as $\varepsilon\to0$. Therefore, by (\ref{delta_1}) and the triangle inequality, $$
\mathop{ \overline{\rm lim}}\limits_{\varepsilon \to 0} \| u^{\varepsilon} - u_0 \|_{L^2(\mathbb R^d)} \le 2 C_1 \delta $$ with an arbitrary $\delta>0$. This implies the desired convergence in \eqref{t1} for an arbitrary $f\in L^2(\mathbb R^d)$ and completes the proof of the main theorem.
\subsection{Proof of Corollary \ref{cor_main}}
Here we assume that the operator $L^{\varepsilon,{\rm ns}}$ is defined by \eqref{L_eps_ns}. Multiplying equation \eqref{u_eps_nssss} by $\rho^\varepsilon(x,\omega)=\rho\big(\frac{x}{\varepsilon},\omega\big)= \mu\big(\frac{x}{\varepsilon},\omega\big)\big(\lambda\big(\frac{x}{\varepsilon},\omega\big)\big)^{-1}$ we obtain \begin{equation}\label{eq_modfd} L^{\varepsilon}u_\varepsilon -m\rho^\varepsilon u_\varepsilon=\rho^\varepsilon f, \end{equation} where the symmetrized operator $L^{\varepsilon}$ is given by \eqref{L_eps}.
Letting $\langle\rho\rangle=\mathbb E\bm{\rho} =\mathbb E\big(\frac{\bm{\mu}}{\bm{\lambda}}\big)$ we consider an auxiliary equation \begin{equation}\label{eq_ns_aux} L^{\varepsilon}g_\varepsilon -m\langle\rho\rangle g_\varepsilon=\langle\rho\rangle f. \end{equation}
By Theorem \ref{T1} the functions $g_\varepsilon$ converge a.s. in $L^2(\mathbb R^d)$, as $\varepsilon\to0$, to a solution of the equation $\hat Lg -m\langle\rho\rangle g=\langle\rho\rangle f$. Our goal is to show that $\|g_\varepsilon-u_\varepsilon\|_{L^2(\mathbb R^d)}\to0$ as $\varepsilon\to0$. To this end we subtract equation \eqref{eq_modfd} from \eqref{eq_ns_aux}. After simple rearrangements this yields \begin{equation}\label{eq_ns_alpha} L^{\varepsilon}\alpha_\varepsilon -m\rho^\varepsilon \alpha_\varepsilon=\big(\langle\rho\rangle-\rho^\varepsilon\big)g_\varepsilon +\big(\langle\rho\rangle-\rho^\varepsilon\big) f \end{equation} with $\alpha_\varepsilon(x)=g_\varepsilon(x)-u_\varepsilon(x)$. In a standard way one can derive the following estimate: \begin{equation}\label{C_ns_pure}
m \int\limits_{\mathbb R^d} (\alpha_\varepsilon(x))^2 dx+\frac{1}{\varepsilon^2} \int\limits_{\mathbb R^d} \int\limits_{\mathbb R^d} \, a (z) ( \alpha_\varepsilon (x-\varepsilon z) - \alpha_\varepsilon(x))^2 dz dx < C. \end{equation} As was shown in the proof of Lemma \ref{Convergence}, this estimate implies compactness of the family $\{\alpha_\varepsilon\}$ in $L^2(B)$ for any cube $B$. Multiplying \eqref{eq_ns_alpha} by $\alpha_\varepsilon$ and integrating the resulting relation over $\mathbb R^d$ we obtain \begin{equation}\label{al_al}
\|\alpha_\varepsilon\|^2_{L^2(\mathbb R^d)}\leq C_1
\big|\big((\langle\rho\rangle-\rho^\varepsilon)g_\varepsilon, \alpha_\varepsilon\big)_{L^2(\mathbb R^d)}\big| +\big|\big((\langle\rho\rangle-\rho^\varepsilon) f,\alpha_\varepsilon\big)_{L^2(\mathbb R^d)}\big|. \end{equation} By the Birkhoff ergodic theorem, $\langle\rho\rangle-\rho^\varepsilon$ converges to zero weakly in $L^2_{\rm loc}(\mathbb R^d)$. Taking into account the boundedness of $\langle\rho\rangle-\rho^\varepsilon$ and the properties of $\alpha_\varepsilon$ and $g_\varepsilon$, we conclude that both terms on the right-hand side in \eqref{al_al} tend to zero, as $\varepsilon\to0$. So does
$\|\alpha_\varepsilon\|^2_{L^2(\mathbb R^d)}$. Therefore, $u_\varepsilon$ converges to the solution of equation $\hat Lu -m\langle\rho\rangle u=\langle\rho\rangle f$. Dividing this equation by $\langle\rho\rangle$, we rewrite the limit equation as follows: $$ \Big(\mathbb E\big\{\frac{\bm\mu}{\bm\lambda}\big\}\Big)^{-1}\Theta_{ij}\frac{\partial^2 u}{\partial x_i\partial x_j}-mu=f $$ with $\Theta$ defined in \eqref{Positive}. This completes the proof of the corollary.
\noindent {\large \bf Acknowledgements}\\[2mm] The work on this project was completed during the visit of Elena Zhizhina at the Arctic University of Norway, campus Narvik. She expresses her gratitude to the colleagues at this university for their hospitality.
\end{document}
\begin{document}
\preprint{APS/123-QED}
\title{Photonic resource state generation from a minimal number of quantum emitters}
\author{Bikun Li}
\email{[email protected]}
\affiliation{Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA}
\author{Sophia E. Economou}
\email{[email protected]}
\affiliation{Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA}
\author{Edwin Barnes}
\email{[email protected]}
\affiliation{Department of Physics, Virginia Tech, Blacksburg, Virginia 24061, USA}
\date{\today}
\begin{abstract}
Multi-photon entangled graph states are a fundamental resource in quantum communication networks, distributed quantum computing, and sensing. These states can in principle be created deterministically from quantum emitters such as optically active quantum dots or defects, atomic systems, or superconducting qubits. However, finding efficient schemes to produce such states has been a long-standing challenge. Here, we present an algorithm that, given a desired multi-photon graph state, determines the minimum number of quantum emitters and precise operation sequences that can produce it. The algorithm itself and the resulting operation sequence both scale polynomially in the size of the photonic graph state, allowing one to obtain efficient schemes to generate graph states containing hundreds or thousands of photons.
\end{abstract}
\maketitle
\section{Introduction}\label{sec:intro}
Entanglement is widely recognized as playing a critical role in quantum computation, error correction, communication, and sensing. A family of entangled states that features prominently in these applications are graph (or cluster) states. They are key resources in one-way quantum computing paradigms \cite{Raussendorf1WQC,bartolucci2021fusionbased} and in quantum error correction \cite{Schlingemann2001,ShorCode1995, KITAEV20032, nielsen_chuang_2010}. In addition, many quantum repeater schemes \cite{Briegel1998, Dur1999, Sangouard2011, Azuma2015, Muralidharan2016} and quantum sensing protocols \cite{Gottesman2012,Degen2017} rely on graph states. Photonic graph states are especially important because photons are the predominant platform for measurement- and fusion-based computing, and, as flying qubits, they are the only viable choice for quantum networks \cite{Gisin2007} and quantum imaging \cite{Lugiato_2002,Dowling2008}.
Unfortunately, creating photonic resource states is fundamentally difficult. Because photons do not interact with each other, most attempts have focused on probabilistic generation schemes using linear optics and postselection \cite{Browne2005}, which are very resource-intensive, severely limiting the size of the resulting states \cite{Gao2010,Li2020}. This bottleneck can in principle be overcome by instead using a deterministic approach in which entangled photons are produced directly from quantum emitters (i.e., matter qubits). One possibility would be to prepare a graph state on emitters~\cite{Nemoto_PRX_2014,Choi_npjQI_2019} and transduce it to photons, but this requires a number of emitters equal to the size of the target photonic graph state. This daunting resource overhead can be avoided by instead using sequential generation schemes. Refs.~\cite{Schon2005PRL,Schon2007PRA} put forward such an approach that works well for one-dimensional (1D) graph states~\cite{Lindner_2009PRL} and has led to experimental demonstrations~\cite{Schwartz2016,Besse_NatCommun_2020}. However, in the general case where the entanglement structure is more complicated, this method scales exponentially in the size of the target state and can lead to long generation circuits, motivating the search for more efficient approaches. Refs.~\cite{Economou_2010PRL,Gimeno-Segovia2019} put forward protocols for 2D lattice graphs that leverage the principle that entangled emitters can emit entangled photons. This idea was extended further to develop protocols that deterministically generate resource states for quantum repeaters \cite{Buterakos_2017PRX,Russo2018,Hilaire2021,Zhan2020}---tailored to color centers in Refs.~\cite{Borregaard2020,michaels2021multidimensional}---and one-way computing \cite{Pichler2017,Russo_2019}. Refs.~\cite{Pichler2017,Zhan2020} allowed for the re-interference of photons with emitters to further enhance flexibility in entanglement creation.
Despite this progress and the intense interest this approach has generated among experimentalists, existing graph state generation protocols are limited to a small subset of graphs or require a number of emitters that scales linearly with the graph size \cite{NJP_9_204_2007,Russo_2019}. This is extremely resource-intensive, especially in light of the schemes for generating repeater graph states presented in Refs.~\cite{Buterakos_2017PRX,Hilaire2021}, which require only two emitters regardless of the number of photons. The required resources (number of emitters and entangling gates) are a critical factor that determines the practical feasibility of the protocol. For a general graph state, finding resource-efficient generation protocols in polynomial time remains an open problem.
Here, we address this challenge by presenting a general approach to generating arbitrary photonic graph states from quantum emitters. Given a target graph state, we show how to determine in polynomial time both the minimal number of emitters required to create it and an explicit generation protocol. The latter consists of a sequence of gate operations and measurements performed on the emitters. Moreover, our protocol naturally takes into account the order in which photons should be emitted, which can be an important consideration for applications, as it is generally preferable to emit photons in the order they are measured to avoid photon storage. Our method provides a recipe for doing this. The broad applicability of our method, its practical relevance, and its efficient use of resources make it ideally suited to the generation of any photonic graph state from various types of quantum emitters.
\begin{figure*}
\caption{\textbf{Illustration of the protocol solver algorithm.} (a) An example of a 4-photon graph state. (b) The graph is mapped to a 1D lattice. (c) The height function is computed and found to have maximum value 2, implying 2 emitters are needed. These are added to the 1D lattice. (d) Starting from the target state and decoupled emitters, a time-reversed sequence of emitter gates, photon absorption events, and time-reversed emitter measurements is constructed, until all qubits are disentangled. Further details about this example can be found in the Supplementary Information.}
\label{fig:fig1new}
\end{figure*}
\section{Results and Discussion}\label{sec:results}
\subsection{Overview of the algorithm}\label{sec:overview}
Determining how to efficiently generate an arbitrary photonic graph state from a set of quantum emitters is highly nontrivial and markedly distinct from the problem of finding an efficient quantum circuit that creates a target state on a register of qubits \cite{HMP2006}. Several additional challenges arise in the former, including the fact that qubits are both created and removed, and that different types of qubits (photons vs. emitters), with different roles and allowed gates, are involved. Depending on the experimental setup, there may also be further restrictions, e.g., emitted photons cannot interact with any other qubits following their emission (although schemes that re-interfere photons with emitters have been proposed \cite{Pichler2017,Zhan2020}). Our method addresses these challenges by leveraging three main ingredients: the notion of the height function (which is related to the entanglement entropy), the stabilizer formalism, and the concept of time-reversed emission events and measurements, which we introduce here.
The first insight is to utilize the so-called height function, which is the entanglement entropy of the system as a function of the partition point when the system is arranged in a 1D lattice and partitioned into two subsystems \cite{PhysRevX.7.031016_Nahum,PhysRevB.100.134306}. This function provides information about the entanglement structure of the target state as well as the number of emitters required to produce it. The latter is equal to the maximum value of the height function (see below), which depends on the photon emission order. Optimizing this order is NP-hard in general, although we show that heuristic approaches exist for more structured graphs. Moreover, the height function plays a crucial role in determining the sequence of operations (gates and measurements) needed to generate the target graph state from the emitters.
A second key ingredient is the use of gates from the Clifford group. Given that arbitrary graph states can be generated solely with Clifford gates \cite{VandenNest2004_PRA,Hein2006}, which were also exclusively used in the protocols of Refs. \cite{Lindner_2009PRL,Schwartz2016,Economou_2010PRL,Buterakos_2017PRX,Russo2018,Russo_2019,Gimeno-Segovia2019,Hilaire2021}, restricting ourselves to this set does not affect the generality of our approach. Clifford gates enable the use of the stabilizer formalism, such that we can manipulate Pauli operators instead of keeping track of the whole state. This makes the problem of finding the emission operation sequence tractable, reducing it from exponential to polynomial scaling due to the Gottesman-Knill theorem \cite{Gottesman:1998hu}.
A final key element in our algorithm is that we time-reverse the emission sequence. That is, we start from a target multi-photon graph state and an appropriate number of decoupled emitters (obtained from the height function for the target state), and we determine a sequence of emitter gates, ``time-reversed measurements", and ``photon absorption" events such that the target state is converted to a product state. This is somewhat reminiscent of disentangling circuits used for quantum state tomography of 1D systems~\cite{Cramer_NatCommun_2010}. The final state is a product state because, without loss of generality, photons that have not yet been emitted can be described by qubits prepared in the computational basis state $\Ket{0}$. Photon emission is then modeled as a two-qubit photon-emitter gate that brings the photon from $\Ket{0}$ into an entangled state with the emitters \cite{Lindner_2009PRL}. Because the photon absorption steps are time-reversed versions of photon emission, these too are described by photon-emitter gates.
The run time of the protocol solver algorithm scales as ${\cal O}(n_p^4)$, where $n_p$ is the number of photons in the target graph state. This is a direct consequence of the fact that the algorithm is based on the stabilizer formalism (see Methods section). This is in contrast to previous methods \cite{Schon2005PRL,Schon2007PRA}, which scale exponentially in $n_p$ due to the need to perform singular value decompositions repeatedly. We also show that the number of gates in the final emission sequence scales at most as ${\cal O}(n_p^2)$ (see Methods). However, this assumes two-qubit gates can be applied between any pair of emitters. If this is not the case, then additional SWAP operations are needed, bringing the gate count up to ${\cal O}(n_p^3)$. Therefore, both the protocol solver and the resulting gate sequence it obtains scale polynomially in the size of the target graph state.
Now we provide a more detailed description of the protocol solver algorithm. We begin with a target graph state $\Ket{\psi_p}$ of $n_p$ photons and $n_e$ decoupled emitters, so that the total state is $\Ket{\Psi}=\Ket{\psi_p}\otimes\Ket{0}^{\otimes n_e}$. An $n_p=4$ photon example graph is shown in Fig.~\ref{fig:fig1new}(a). This is what the state of the total system should be at the end of the generation sequence. $n_p$ is set by the size of the desired photonic graph state $\Ket{\psi_p}$, while $n_e$ remains to be determined. We assume the graph representing $\Ket{\psi_p}$ is connected; if this is not the case, then the algorithm can be run separately for each connected subgraph. The state $\Ket{\Psi}$ is fully described by a set of $n=n_p+n_e$ stabilizers $g_m$, $m=1,\ldots,n$, defined such that $g_m\Ket{\Psi}=\Ket{\Psi}$. The full set of $n$ qubits can be arranged in a 1D lattice with site index $x\in\{0,1,2,\ldots,n\}$ (see Fig.~\ref{fig:fig1new}(b)). Sites $x=1,\ldots,n_p$ correspond to the photons and are ordered according to the desired photon emission ordering, while the sites $x=n_p+1,\ldots,n$ are the emitters. The additional $x=0$ site is included as a matter of convention. We can now define the height function $h(x)=S_A$ to be the bipartite entanglement entropy when the 1D lattice is divided into the subregion $A=\{1,2,\ldots,x\}$ and its complement. Note that $S_A=\frac{1}{1-\alpha}\log_2\mathrm{Tr}(\rho^\alpha_A)$ can be any of the R\'enyi entropies; for stabilizer states, they are all equal \cite{Hein2004PRA}. In Ref.~\cite{Schon2005PRL}, it was shown that the state of the emitted photons, $\Ket{\psi_p}$, can be represented by a matrix product state (MPS) with bond dimension $2^{n_e}$. Because the entanglement entropy of an MPS is given by the base-2 logarithm of the bond dimension \cite{Orus2014}, it follows that $n_e$ is equal to the maximum value of $h(x)$.
The height function for the graph in Fig.~\ref{fig:fig1new}(a) is shown in Fig.~\ref{fig:fig1new}(c). In this example, its maximum is 2, implying 2 emitters are needed. In general, the maximum of the height function is precisely the minimal number of emitters capable of generating the target graph state, since fewer emitters could not supply the bond dimension required by any exact MPS representation of the state.
The height function can be computed efficiently from the stabilizers. Because products of stabilizers are also stabilizers, there are many equivalent choices for the set $\{g_m\}$. Here, we focus on a particular choice of the stabilizers that we refer to as the echelon gauge \cite{Audenaert_2005NJP}, in which the stabilizer matrix has a row-reduced echelon form (see Methods). When the $g_m$ are in this gauge, the height function can be expressed as \cite{Audenaert_2005NJP} \begin{equation}\label{eq:height_function}
h(x)=n - x - \#\{g_m|\verb|l|(g_m)>x\}, \end{equation}
where $\verb|l|(g_m)$ is the index of the left-most (smallest index) site on which $g_m$ acts nontrivially. The last term in Eq.~\eqref{eq:height_function} counts the number of stabilizers that act nontrivially only on sites to the right of (i.e., larger than) $x$. Although Eq.~\eqref{eq:height_function} depends on $n_e$, this dependence cancels out for states like $\Ket{\Psi}$ in which the emitters are decoupled. Therefore we can obtain $n_e$ from the maximum of $h(x)$ on the photonic sites, using only the stabilizers of $\Ket{\psi_p}$.
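To make the echelon-gauge reduction and Eq.~\eqref{eq:height_function} concrete, both can be sketched in a few lines of Python. This is our illustrative code, not the authors' implementation: stabilizers are plain strings over $\{I,X,Y,Z\}$ and phases are dropped, which affects neither the leftmost sites $\verb|l|(g_m)$ nor the entropies.

```python
def pmul(a, b):
    """Site-wise Pauli product, phases ignored."""
    out = []
    for p, q in zip(a, b):
        if p == 'I':
            out.append(q)
        elif q == 'I':
            out.append(p)
        elif p == q:
            out.append('I')
        else:  # two distinct nontrivial Paulis multiply to the third
            out.append(({'X', 'Y', 'Z'} - {p, q}).pop())
    return ''.join(out)

def leftmost(g):
    """l(g): smallest site (1-based) on which g acts nontrivially."""
    for i, p in enumerate(g):
        if p != 'I':
            return i + 1
    return len(g) + 1

def echelon(gens):
    """Row-reduce so that at most two stabilizers start at each site,
    with distinct Paulis there (the echelon gauge, phases dropped)."""
    gens = list(gens)
    n = len(gens[0])
    for x in range(1, n + 1):
        pivots = {}                        # Pauli at site x -> pivot row
        for k, g in enumerate(gens):
            if leftmost(g) != x:
                continue
            p = g[x - 1]
            if p in pivots:                # same Pauli: cancel, row moves right
                gens[k] = pmul(g, pivots[p])
            elif len(pivots) == 2:         # third Pauli = product of the two pivots
                gens[k] = pmul(g, pmul(*pivots.values()))
            else:
                pivots[p] = g              # keep as a pivot (at most two per site)
    return gens

def height(gens, x):
    """Eq. (1): h(x) = n - x - #{g_m : l(g_m) > x}."""
    n = len(gens[0])
    return n - x - sum(1 for g in gens if leftmost(g) > x)
```

For example, the generators ZZI, ZIZ, XXX of a 3-qubit GHZ-class state reduce to rows with leftmost sites $\{1,1,2\}$ and heights $(0,1,1,0)$; appending one decoupled emitter (a fourth qubit stabilized by IIIZ) gives $\max_x h(x)=1$ over the photonic sites, i.e. a single emitter suffices, as expected for a GHZ-type state.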
Once we have the number of emitters $n_e$, we can run the protocol solver algorithm to determine the sequence of gates, time-reversed measurements, and photon absorption events needed to transform the target state $\Ket{\Psi}$ into the initial state $\Ket{0}^{\otimes n}$, which corresponds to decoupled emitters and no photons. We first introduce a photon index $j$ and initialize it to $j=n_p$. The algorithm then consists of four steps: \begin{enumerate}[(i)]
\item Transform the stabilizers $g_m$ into echelon gauge if they are not already, then compute the height function $h(x)$.
\item If $h(j) \ge h(j-1)$, skip to step (iii). Otherwise, apply a time-reversed measurement and update the $g_m$ accordingly.
\item Apply a photon absorption operation on the $j$-th photon and update the $g_m$ accordingly. If $j>1$, then set $j\to j-1$ and go to step (i). Otherwise, go to step (iv).
\item All photons are now in state $\Ket{0}$. Apply a series of gates on the emitters to disentangle them, bringing the total state to $\Ket{0}^{\otimes n}$. \end{enumerate} This algorithm involves repeated applications of two basic operational primitives: time-reversed measurement and photon absorption. During the algorithm, the height function of the current state tells us which of these we need to perform next to bring the state closer to $\Ket{0}^{\otimes n}$. Each photon absorption step disentangles one photon qubit from the rest, starting with the last-emitted photon, $j=n_p$, and working down to the first photon, $j=1$. For our 4-photon example, the graphs at intermediate steps of the algorithm are shown in Fig.~\ref{fig:fig1new}(d). A step-by-step explanation of this example is given in the Supplementary Information. When the algorithm concludes, we can reverse the entire sequence to obtain an operation sequence that generates $\Ket{\psi_p}$ starting from $n_e$ decoupled emitters. We now describe each of the two operational primitives in more detail, the precise gates they introduce into the generation sequence, and their connection to the height function.
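The control flow of steps (i)-(iv) can be sketched as follows. The stabilizer bookkeeping is left abstract: \texttt{height}, \texttt{absorb}, \texttt{reverse\_measure}, and \texttt{disentangle\_emitters} are hypothetical callbacks standing in for the tableau updates, and in the actual algorithm the height function is recomputed after every update (step (i)).

```python
def solve(n_p, height, absorb, reverse_measure, disentangle_emitters):
    """Return the generation sequence for an n_p-photon target state."""
    ops = []                                # time-reversed operation list
    for j in range(n_p, 0, -1):             # last-emitted photon first
        if height(j) < height(j - 1):       # step (ii): no stabilizer starts at j
            ops.append(reverse_measure(j))  # time-reversed emitter measurement
        ops.append(absorb(j))               # step (iii): absorb photon j
    ops.extend(disentangle_emitters())      # step (iv): emitter-only gates
    return ops[::-1]                        # reverse -> generation sequence
```

Reversing the accumulated list at the end converts the disentangling sequence into the emission protocol, with photons emitted in the order $j=1,\ldots,n_p$.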
\begin{figure*}
\caption{\textbf{Results for repeater graph states.} (a) 12-photon repeater graph state in which external photons are emitted first. (b) Same graph state as in (a), but with ``natural" emission ordering. (c) Same graph state as in (b) but with some unnecessary edges deleted. (d), (e) and (f) show the height functions of the states in (a), (b) and (c), respectively. (g) Emission circuit for state shown in (c), where $H$ is the Hadamard gate, $P=\mathrm{diag}(1,i)$ is the phase gate, and $X\equiv \sigma^x$.}
\label{fig:RGSs}
\end{figure*}
Photon absorption of the $j$-th photon refers to a time-reversed version of photon emission. For concreteness, we focus on the case where emission is described by a CNOT gate between the photon and its emitter (with the emitter as the control), as in Ref.~\cite{Lindner_2009PRL}, although our algorithm can be adapted to any Clifford gate describing photon emission. Mathematically, the task of absorbing photon $j$ requires finding a stabilizer $g_a$ that can be transformed to $\sigma_j^z$ by applying CNOT$_{ij}$, where $i$ is an emitter site.
It is possible to find such a stabilizer when $h(j)\ge h(j-1)$. From Eq.~\eqref{eq:height_function}, we see that this condition implies there must be at least one stabilizer, $g_a$, such that $\verb|l|(g_a)=j$. This stabilizer has the form \begin{equation}\label{eq:ga}
g_a = \sigma^\alpha_j \sigma_{i_1}^{\beta_1}\cdots \sigma_{i_s}^{\beta_s}, \end{equation} where $\alpha,\beta_k\in\{x,y,z\}$ label the nontrivial Pauli operators, and $1\le j \le n_p <i_1<\cdots<i_s\le n$. Note that we can assume $g_a$ acts trivially on all photons with index larger than $j$ since these have already been decoupled at this point in the algorithm. We also assume that $g_a$ acts nontrivially on at least one emitter site; if this is not the case, then photon absorption is unnecessary since the $j$-th photon is then already disconnected. To transform $g_a$ into $\sigma_j^z$, we can first apply a local Clifford operation on the $j$-th site and general Clifford operations on the emitters to transform $g_a\to\sigma_j^z\sigma_i^z$, where $i>n_p$ is an emitter site. This can be done for example by applying local Clifford operations to transform $g_a$ to $\sigma^z_j \sigma_{i_1}^{z}\cdots \sigma_{i_s}^{z}$, and then applying CNOT gates on pairs of emitters to transform this to $\sigma_j^z\sigma_i^z$. Applying CNOT$_{ij}$ brings this to $\sigma_j^z$, completing the absorption of the $j$-th photon. Note that we can choose any emitter to absorb the photon; typically, the emitter that requires the shortest circuit to transform $g_a$ into $\sigma_j^z$ is preferred. The resulting circuit is included in the time-reversed generation sequence.
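The conjugation rules used here can be verified with explicit matrices. The following numerical sketch (our own check, with qubit ordering chosen as photon $j$ first, emitter $i$ second) confirms that CNOT$_{ij}$ maps $\sigma_j^z\sigma_i^z$ to $\sigma_j^z$, completing the absorption, and that it spreads the control's $\sigma_i^x$ to $\sigma_j^x\sigma_i^x$:

```python
import numpy as np

# Qubit order (j, i): first tensor factor = photon j (target),
# second = emitter i (control).
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
# CNOT with control i and target j: flips j exactly when i = 1,
# i.e. it swaps the basis states |01> <-> |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=float)

# sigma_j^z sigma_i^z -> sigma_j^z (CNOT is self-inverse, so C M C = C M C^dag)
assert np.allclose(CNOT @ np.kron(Z, Z) @ CNOT, np.kron(Z, I2))
# sigma_i^x -> sigma_j^x sigma_i^x
assert np.allclose(CNOT @ np.kron(I2, X) @ CNOT, np.kron(X, X))
```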
Time-reversed measurements are applied whenever $h(j)<h(j-1)$, in which case photon absorption is not possible. Indeed, in this case, Eq.~\eqref{eq:height_function} implies $\#\{g_m|\verb|l|(g_m)=j\}=0$, or in other words, a suitable $g_a$ does not exist. In order to absorb the next photon, we must therefore first find a way to increase $h(j)$ relative to $h(j-1)$. This can be accomplished with a time-reversed measurement on an emitter. To perform this operation, we first rotate the state to $\Ket{\Phi}\otimes\Ket{0}_i$, where $\Ket{\Phi}$ is a stabilizer state involving photons $1,\ldots,j$ and emitters other than $i$. This can always be done using $\mathcal{O}(n_e)$ Clifford gates on emitters when $h(j)<h(j-1)$ (see Methods). Now notice that this state is obtained from the pre-measurement state CNOT$_{ij}\Ket{\Phi}\otimes\Ket{+}_i$ when emitter $i$ is measured to be in the state $\Ket{0}_i$. Therefore, starting from $\Ket{\Phi}\otimes\Ket{0}_i$, if we perform a Hadamard gate on emitter $i$ followed by the gate CNOT$_{ij}$, we effectively reverse the measurement on the emitter. These operations transform the stabilizers $g_m$ in such a way that $h(j)$ now satisfies $h(j)\ge h(j-1)$ (see Methods), and we can proceed with the next photon absorption. The emitter gates, Hadamard on $i$, and CNOT$_{ij}$ are all included in the time-reversed generation sequence.
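The time-reversal identity can also be checked numerically on a minimal two-qubit instance (our own sketch, with the same qubit ordering as above: photon $j$ first, emitter $i$ second). Starting from $\Ket{\Phi}\otimes\Ket{0}_i$, applying $H_i$ and then CNOT$_{ij}$ produces the pre-measurement state, and projecting emitter $i$ back onto $\Ket{0}_i$ recovers the original state up to the Born-rule factor $1/\sqrt{2}$:

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
I2 = np.eye(2)
e0 = np.array([1.0, 0.0])
# CNOT with control = emitter i (second qubit), target = photon j (first qubit)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=float)

rng = np.random.default_rng(7)
phi = rng.normal(size=2)
phi /= np.linalg.norm(phi)              # arbitrary single-photon state |Phi>

psi = np.kron(phi, e0)                  # |Phi> (x) |0>_i : post-measurement state
pre = CNOT @ np.kron(I2, H) @ psi       # time-reversed measurement: H_i, then CNOT_ij
proj0 = np.kron(I2, np.outer(e0, e0))   # measure emitter i, outcome |0>_i
assert np.allclose(proj0 @ pre, psi / np.sqrt(2))
```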
\subsection{Examples}\label{sec:examples}
We demonstrate our algorithm with several examples. The first is the important case of repeater graph states \cite{Azuma2015}, where we use our algorithm to obtain generation protocols that are more efficient than previously known ones. As a second example, we consider random graphs containing up to hundreds of photons and demonstrate the polynomial scaling of the resulting generation circuits. Additional examples, including modified repeater graph states, error-correcting codes, and a simple example that illustrates the algorithm in detail, can be found in Supplementary Notes 1-4.
We now apply our algorithm to find operation sequences that produce repeater graph states \cite{Azuma2015}. In addition to its importance in quantum network applications, this example also illustrates how different photon emission orderings impact the required number of emitters. Ref.~\cite{Buterakos_2017PRX} presented a generation protocol for a particular ordering that was devised essentially through guesswork. Our algorithm can be used to systematically find protocols for any ordering. An example of a 12-photon repeater graph state is shown in Fig.~\ref{fig:RGSs}(a). The graph contains a fully connected core of 6 photons, each of which is connected to a single external photon. Bell measurements are performed on pairs of these external photons, where the two photons in each pair come from different graph states. If a Bell measurement succeeds, then the two corresponding core photons are linked by an edge, and entanglement extends across two nodes of the repeater network. Having multiple external photons provides built-in redundancy that increases the likelihood that at least one Bell measurement between two repeater graph states is successful. Upon success, core photons are then measured in the $z$ or $x$ basis to remove photons connected to failed measurements or to create entanglement links between successful measurements, respectively. Because the external photons are measured first, it may be advantageous to emit these first when generating the graph state to reduce photon storage requirements. This corresponds to the photon ordering shown in Fig.~\ref{fig:RGSs}(a). The height function for this graph and photon ordering is shown in Fig.~\ref{fig:RGSs}(d), where it is evident that 6 emitters are needed to produce the state. However, if efficient photon storage is available, then the ordering shown in Fig.~\ref{fig:RGSs}(b) may be preferable, where now external and core photons are emitted in an alternating sequence.
This ordering reduces the number of emitters down to only 2, as shown in Fig.~\ref{fig:RGSs}(e). As we discuss further below, this illustrates our general finding that ``natural" orderings in which neighboring vertices are emitted around the same time reduce the requisite number of emitters. This reduction in quantum resources becomes still more dramatic as the size of the graph increases; for orderings as shown in Fig.~\ref{fig:RGSs}(a), the number of emitters scales linearly with photon number, while for the natural ordering of Fig.~\ref{fig:RGSs}(b), the number of emitters remains at 2 regardless of the number of photons. This is shown explicitly in the Supplementary Information.
As discussed in Ref.~\cite{Russo_2019}, some of the edges in the repeater graph can be removed without affecting the functionality of the repeater. Fig.~\ref{fig:RGSs}(c) shows an example of this in which 4 of the core edges are deleted. As shown in Fig.~\ref{fig:RGSs}(f), the number of emitters is still 2. However, removing the redundant edges reduces the depth of the resulting generation circuit, which is shown in Fig.~\ref{fig:RGSs}(g). This circuit contains 4 CNOTs between emitters and 1 intermediate measurement on an emitter, whereas the original protocol presented in Ref.~\cite{Buterakos_2017PRX} requires 5 two-qubit gates and 5 intermediate measurements.
To demonstrate how our algorithm scales with the number of photons in the target state, we run it for random graphs ranging in size from $n_p=16$ to $n_p=256$ photons. These graphs are generated using the Erd\H{o}s–R\'enyi model \cite{RandomGraphs}, in which each random graph is constructed by connecting $n_p$ vertices randomly with fixed probability $p$. We discard any graphs that contain disconnected vertices when sampling these realizations. The likelihood that such graphs arise becomes very small if $p$ is chosen sufficiently close to 1. In Fig.~\ref{fig:random}, we show the maximum value, $h_{\text{max}}$, of the height function averaged over 1024 realizations for each value of $n_p$. Averaged measurement and gate counts are also shown. It is evident that $h_{\text{max}}$, and hence the number of emitters, scales linearly with $n_p$ as $n_p$ becomes large. The same is also true of the number of measurements. On the other hand, the number of CNOTs and the total number of gates in the resulting generation circuits scale quadratically with the number of photons in the target state. These results confirm both the polynomial scaling of our algorithm, which allows us to easily find generation protocols for graph states containing hundreds of photons, and the polynomial scaling of the resulting protocols, which makes them practical for near-term experiments.
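The sampling procedure described above can be sketched with the standard library alone. This is our own minimal version (the adjacency-dictionary representation and function names are illustrative, not the authors' code): each possible edge is included with probability $p$, and realizations with disconnected vertices are discarded.

```python
import random
from collections import deque

def erdos_renyi(n, p, rng):
    """G(n, p): include each of the n*(n-1)/2 possible edges with probability p."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def is_connected(adj):
    """BFS from vertex 0; the graph is connected iff every vertex is reached."""
    seen, queue = {0}, deque([0])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)

def sample_connected(n, p, rng):
    """Resample until a realization with no disconnected component is found."""
    while True:
        adj = erdos_renyi(n, p, rng)
        if is_connected(adj):
            return adj
```

For $p$ close to 1 the rejection loop almost never triggers, consistent with the remark above that disconnected realizations become very rare.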
\begin{figure}\label{fig:random}
\end{figure}
\subsection{Photon emission ordering}\label{sec:discussion}
A powerful feature of our algorithm is that it readily incorporates a desired photon emission ordering. This is encoded when we arrange the photons and emitters in a 1D lattice to define the height function. If no specific ordering is preferred, then ideally we would want to choose the ordering that minimizes the number of emitters $n_e$. However, the task of finding this optimal ordering is NP-hard, as we show in Methods. Nevertheless, one can still look for heuristic solutions to the problem. In fact, the expression for the height function in Eq.~\eqref{eq:height_function} makes it clear that this function is suppressed for orderings in which the stabilizers, when expressed in the echelon gauge, are supported predominantly on high-index sites on the right side of the 1D lattice. This tends to occur for ``natural" orderings in which neighboring photons in the graph are emitted around the same time, because in this case the stabilizers are localized on the 1D lattice. This was illustrated with our repeater graph state example in the previous section. The extent to which the stabilizers can be localized in this way depends on the graph of course. For an $N\times M$ square lattice, it is inevitable that some neighboring vertices will be separated by $M$ steps in the emission sequence (assuming $M<N$), and so the number of emitters is of order $M$. On the other hand, for other graph structures like those of the repeater graph states, far fewer emitters may be needed, provided a natural photon ordering is used. Note that in this example, as for many graphs, edges between remote vertices cannot be avoided (see Fig.~\ref{fig:RGSs}(b)). Despite this, we showed that optimal orderings for which the height function remains small can still be found. Thus, emitting neighboring vertices around the same time is sufficient but not always necessary to keep the number of emitters small.
In summary, we presented an efficient algorithm to construct polynomial-depth operation sequences that produce arbitrary multi-photon graph states from a minimal number of quantum emitters. By reducing both the number of photon sources and the number of quantum operations that need to be performed on them, our method brings the wide range of quantum information applications that rely on entangled photon resource states closer to experimental reality.
\section{Methods}\label{sec:methods}
\subsection{Echelon gauge}
The echelon gauge was first defined in Ref.~\cite{Audenaert_2005NJP}, where it was called row reduced echelon form. In this gauge, the stabilizer tableau has a recursive row-reduced form based on the following three types of matrices: \begin{equation}\label{eq:echelon_gauge}
\left(\begin{array}{l|cr} \mathbbm{1} & *\cdots * & \\ \hline \mathbbm{1} & & \\ \vdots & M & \\ \mathbbm{1} & & \end{array}\right),\quad \left(\begin{array}{l|cr} \sigma & *\cdots * & \\ \hline \mathbbm{1} & & \\ \vdots & M & \\ \mathbbm{1} & & \end{array}\right), \quad \left(\begin{array}{l|cr} \sigma_1 & *\cdots * & \\ \sigma_2 & *\cdots * & \\ \hline \mathbbm{1} & & \\ \vdots & M & \\ \mathbbm{1} & & \end{array}\right), \end{equation}
where $\sigma$, $\sigma_1$, and $\sigma_2$ are nontrivial Pauli matrices, and $\sigma_1\ne\sigma_2$. In this work, we always choose $\sigma_2 = \sigma^z$, and $\sigma_1$ can be either $\sigma^x$ or $\sigma^y$. The full tableau cannot have the first form shown above (with only identities in the first column), because this case does not apply to pure states. However, the submatrix $M$ can follow any of the above three patterns, and the structure iterates recursively. The stabilizers can be transformed into this gauge starting from any other by performing a series of row reductions, as described in Ref.~\cite{Audenaert_2005NJP}. In the echelon gauge, the independent stabilizers acting on $\bar{A}=\{x+1,\ldots,n\}$ appear at the bottom right of the tableau. Therefore, starting from the formula for the entanglement entropy for subregion $\bar{A}$ of a stabilizer state \cite{Fattal2004}, $S_{\bar{A}} = n_{\bar{A}} - |\mathcal{G}_{\bar{A}}|$, where $n_{\bar{A}}$ is the size of $\bar{A}$ and $|\mathcal{G}_{\bar{A}}|$ is the number of independent stabilizers acting on $\bar{A}$, and using $h(x)=S_A = S_{\bar{A}}$, we obtain Eq.~\eqref{eq:height_function}.
\subsection{Time-reversed measurements}
Above, we saw that when the total state of the system has the form $\Ket{\Phi}\otimes\Ket{0}_i$, where $i$ is an emitter site, we can perform a time-reversed measurement to convert this to the pre-measurement state CNOT$_{ij}\Ket{\Phi}\otimes\Ket{+}_i$. Here, we clarify two important questions regarding this process: (i) When and how can we bring the system into the state $\Ket{\Phi}\otimes\Ket{0}_i$? (ii) How can we see that a time-reversed measurement on this state increases $h(j)$, as needed for a subsequent photon absorption process?
Regarding question (i), when $h(j)<h(j-1)$, we can always find a set of Clifford gates, acting purely on the emitters, that transforms the state of the system into $\Ket{\Phi}\otimes\Ket{0}_i$. To see this, first note that $h(j)=h(n_p)$, as follows from Eq.~\eqref{eq:height_function} when photons $j+1$ through $n_p$ are in state $\Ket{0}$. Using the fact that the height function is bounded from above by $n_e$, we then have $h(n_p)=h(j)<h(j-1)\le n_e$. On the other hand, from Eq.~\eqref{eq:height_function} we have $h(n_p)=n_e-\#\{g_m|\verb|l|(g_m)>n_p\}$. Together, these results imply $\#\{g_m|\verb|l|(g_m)>n_p\}>0$; in other words, there is at least one stabilizer that is supported solely on the emitter sites. We can therefore transform this stabilizer into $\sigma_i^z$ using at most $\mathcal{O}(n_e)$ Clifford gates on the emitters, bringing the state to $\Ket{\Phi}\otimes\Ket{0}_i$. We can then convert this stabilizer to $\sigma_i^x$ by applying a Hadamard gate on site $i$. This prepares the system for the second part of the time-reversed measurement process, which is the gate CNOT$_{ij}$.
We answer question (ii) by proving the following theorem:
\begin{theorem}\label{theorem}
If $h(j) < h(j-1)$ and the $i$-th qubit $(i>j)$ is stabilized by $\sigma^x_i$, then applying $\mathrm{CNOT}_{ij}$ will boost $h(x)\to h(x)+1$, $\forall x\in \{j,j+1,\cdots,i-1\}$.
\end{theorem}
\paragraph*{Proof}
We are assuming that $h(j)<h(j-1)$, which from Eq.~\eqref{eq:height_function} implies $\#\{g_m|\verb|l|(g_m)=j\}=0$. Now consider how the stabilizers transform under $\mathrm{CNOT}_{ij}$. If $\verb|l|(g_m)<j$ before the gate, then $\verb|l|(g_m)$ remains invariant, and the contributions of these stabilizers to $h(x)$ remain the same after the gate. The only potential changes to $h(x)$ come from stabilizers $g_m$ for which $\verb|l|(g_m)>j$. These stabilizers necessarily have $\mathbbm{1}$ on the $j$-th site. Stabilizers among this set that have $\mathbbm{1}$ or $\sigma_i^z$ on the $i$-th site will be unchanged by the $\mathrm{CNOT}_{ij}$ gate. However, if one or more of these stabilizers have $\sigma_i^x$ or $\sigma_i^y$ before the gate, then afterward, these stabilizers will contain $\sigma_j^x$. Consequently, $h(j)$ increases, while $h(j-1)$ remains the same. In the echelon gauge, there can only be one stabilizer with $\sigma_j^x$ as the left-most nontrivial Pauli. Therefore, $h(j)\to h(j)+1$ when $\mathrm{CNOT}_{ij}$ is applied. Moreover, if the $i$-th qubit is stabilized by $\sigma_i^x$, then this becomes $\sigma_j^x\sigma_i^x$ after the gate, and so the height function for all sites between $j-1$ and $i$ increases: $h(x)\to h(x)+1$ $\forall x\in \{j,j+1,\cdots,i-1\}$. $\Box$
\subsection{Scaling analyses}
Here, we determine the complexity of both the protocol solver algorithm itself and the resulting graph state generation circuit. Regarding the algorithm, the main factor that determines the complexity is the need to restore the stabilizers to the echelon gauge after each operation is applied. Transforming an $n$-qubit stabilizer state into the echelon gauge generally requires $\mathcal{O}(n^3)$ steps, which is the complexity of Gaussian elimination. Another important factor is the process of determining which gates need to be applied in preparation for photon absorption or time-reversed measurement. Solving for each set of gates takes no more than $\mathcal{O}(n_e n)$ steps, which is the number of entries in the emitter part of the stabilizer tableau. Thus, the Gaussian eliminations needed to restore echelon gauge dominate the scaling. In the worst case where $n_e\propto n$, our algorithm will then take $\mathcal{O}(n^4)$ steps, where the additional factor of $n$ comes from the fact that the algorithm requires $\mathcal{O}(n_p)\sim\mathcal{O}(n)$ iterations.
As for the complexity of the output generation circuit, there are at most $\mathcal{O}(n_e)$ operations between any two photon emissions. For example, $\mathcal{O}(n_e)$ gates are needed to transform $g_a$ into the appropriate form for photon absorption. Thus, the depth of the circuit acting on the emitter qubits is at most $\mathcal{O}(n_p n_e)$. In the worst case where $n_e\sim n_p$, the scaling is then $\mathcal{O}(n_p^2)$, which is consistent with Fig.~\ref{fig:random}. Nevertheless, because some long-range two-qubit gates may arise, and because these are usually decomposed into $\mathcal{O}(n_e)$ short-ranged two-qubit gates in real devices, the overall circuit depth may become $\mathcal{O}(n_p n_e^2)$.
\subsection{Complexity of finding optimal photon emission orderings}\label{sec:optimal_orderings}
We can show that the task of finding optimal emission orderings is NP-hard by mapping it to a known graph theory problem. Define $\Gamma_{ij}$ to be the adjacency matrix of the graph representing the target state $\Ket{\psi_p}$. Ref.~\cite{Hein2004PRA} showed that we can obtain the height function from $\Gamma_{ij}$ using the formula $h(x)=\mathrm{rank}_{2}(\Gamma_{A\bar{A}})$, where $\mathrm{rank}_2$ denotes the rank over GF(2), and $\Gamma_{A\bar{A}}$ is the sub-matrix of $\Gamma_{ij}$ with row indices $i\in A=\{1,2,\cdots,x\}$ and column indices $j\in \bar{A}$. Note that this expression does not simplify the computation of $h(x)$; it can take more steps to find the maximum than using Eq.~\eqref{eq:height_function}, since the former performs Gaussian eliminations for $\mathcal{O}(n_p)$ rounds, while the latter takes only one round.
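The rank formula can be checked directly with a small GF(2) computation (our illustrative code; the edge-list representation and helper names are ours). It reproduces known minimal emitter counts, e.g. $\max_x h(x)=1$ for both a 4-photon linear cluster graph and the complete graph $K_4$ (which is GHZ-equivalent):

```python
def gf2_rank(rows):
    """Rank over GF(2); rows are Python ints used as bit vectors."""
    rows = list(rows)
    rank = 0
    for bit in range(max(rows, default=0).bit_length() - 1, -1, -1):
        # find a remaining row with this bit set, to serve as the pivot
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> bit) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> bit) & 1:
                rows[i] ^= rows[rank]   # clear this bit in every other row
        rank += 1
    return rank

def height_via_rank(edges, n, x):
    """h(x) = rank_2 of the adjacency block between A = {1..x} and its complement."""
    E = {frozenset(e) for e in edges}
    rows = []
    for i in range(1, x + 1):
        r = 0
        for c, j in enumerate(range(x + 1, n + 1)):
            if frozenset({i, j}) in E:
                r |= 1 << c
        rows.append(r)
    return gf2_rank(rows)
```

For the path graph $1\!-\!2\!-\!3\!-\!4$ this gives $h=(0,1,1,1,0)$, matching the single emitter known to suffice for linear cluster states.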
However, the minimum over orderings of the maximum value of this alternative expression for $h(x)$ (i.e., $\max_x h(x)$) is precisely a graph-theoretic quantity known as the linear rank-width (LRW) \cite{OUM201715}. The task of finding an optimal photon emission ordering is therefore equivalent to computing the LRW of the corresponding graph, a quantity that has long been studied in coding theory in the context of optimizing block code trellises \cite{Massey1978}. Unfortunately, determining whether a simple connected graph has an LRW bounded from above by a positive integer $k$ has been shown to be NP-complete \cite{OUM200579,Jeong2017}, so it is unlikely this problem can be solved efficiently for large, arbitrary photonic graph states unless $\text{P}=\text{NP}$. Nevertheless, for $k=1$ the problem can be decided in polynomial time \cite{Adler2017}, and for larger $k$, a recent work \cite{Jeong2017} showed that it is fixed-parameter tractable: the answer, along with the ordering that achieves it (if it exists), can be determined in $\mathcal{O}(f(k)n^3_p)$ steps, where $f(k)$ grows exponentially in $k$. However, the growth of $f(k)$ is so rapid that this result is unlikely to be of practical use for photonic graph state generation.
\section*{Data Availability} The data that support the findings of this study are available from the authors upon request.
\section*{Code Availability} A custom MATLAB code to reproduce our results is available on GitHub and archived in Zenodo (https://doi.org/10.5281/zenodo.5652105).
\section*{Acknowledgments}
This work was in part supported by the National Science Foundation (grant no. 1741656). E.B. acknowledges support by National Science Foundation grant no. 2137953. S.E.E. acknowledges support by the Army Research Office (MURI grant no. W911NF2120214).
\section*{Author Contributions} E.B. and S.E.E. conceived and supervised the project. B.L. developed the general approach and performed all the calculations. E.B. and S.E.E. contributed to technical components of the project. All authors contributed to the writing of the manuscript.
\section*{Competing Interests} The authors declare that there are no competing interests.
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1} \begin{thebibliography}{60}
\bibitem{Raussendorf1WQC}
Robert Raussendorf and Hans~J. Briegel.
\newblock A one-way quantum computer.
\newblock {\em Phys. Rev. Lett.}, 86:5188--5191, (2001).
\bibitem{bartolucci2021fusionbased}
Sara Bartolucci et~al.
\newblock Fusion-based quantum computation.
\newblock {\em Preprint at https://arxiv.org/abs/2101.09310}, (2021).
\bibitem{Schlingemann2001}
D.~Schlingemann and R.~F. Werner.
\newblock Quantum error-correcting codes associated with graphs.
\newblock {\em Phys. Rev. A}, 65:012308, (2001).
\bibitem{ShorCode1995}
Peter~W. Shor.
\newblock Scheme for reducing decoherence in quantum computer memory.
\newblock {\em Phys. Rev. A}, 52:R2493--R2496, (1995).
\bibitem{KITAEV20032}
A.Yu. Kitaev.
\newblock Fault-tolerant quantum computation by anyons.
\newblock {\em Ann. Phys.}, 303(1):2--30, (2003).
\bibitem{nielsen_chuang_2010}
Michael~A. Nielsen and Isaac~L. Chuang.
\newblock {\em Quantum Computation and Quantum Information: 10th Anniversary
Edition}.
\newblock Cambridge University Press, (2010).
\bibitem{Briegel1998}
H.-J. Briegel, W.~D\"ur, J.~I. Cirac, and P.~Zoller.
\newblock Quantum repeaters: The role of imperfect local operations in quantum
communication.
\newblock {\em Phys. Rev. Lett.}, 81:5932--5935, (1998).
\bibitem{Dur1999}
W.~D\"ur, H.-J. Briegel, J.~I. Cirac, and P.~Zoller.
\newblock Quantum repeaters based on entanglement purification.
\newblock {\em Phys. Rev. A}, 59:169--181, (1999).
\bibitem{Sangouard2011}
Nicolas Sangouard, Christoph Simon, Hugues de~Riedmatten, and Nicolas Gisin.
\newblock Quantum repeaters based on atomic ensembles and linear optics.
\newblock {\em Rev. Mod. Phys.}, 83:33--80, (2011).
\bibitem{Azuma2015}
Koji Azuma, Kiyoshi Tamaki, and Hoi-Kwong Lo.
\newblock All-photonic quantum repeaters.
\newblock {\em Nat. Commun.}, 6(1):6787, (2015).
\bibitem{Muralidharan2016}
Sreraman Muralidharan et~al.
\newblock Optimal architectures for long distance quantum communication.
\newblock {\em Sci. Rep.}, 6(1):20463, (2016).
\bibitem{Gottesman2012}
Daniel Gottesman, Thomas Jennewein, and Sarah Croke.
\newblock Longer-baseline telescopes using quantum repeaters.
\newblock {\em Phys. Rev. Lett.}, 109:070503, (2012).
\bibitem{Degen2017}
C.~L. Degen, F.~Reinhard, and P.~Cappellaro.
\newblock Quantum sensing.
\newblock {\em Rev. Mod. Phys.}, 89:035002, (2017).
\bibitem{Gisin2007}
Nicolas Gisin and Rob Thew.
\newblock Quantum communication.
\newblock {\em Nat. Photonics}, 1(3):165--171, (2007).
\bibitem{Lugiato_2002}
L~A Lugiato, A~Gatti, and E~Brambilla.
\newblock Quantum imaging.
\newblock {\em J. Opt. B: Quantum Semiclass. Opt.}, 4(3):S176--S183, (2002).
\bibitem{Dowling2008}
Jonathan~P. Dowling.
\newblock Quantum optical metrology -- the lowdown on high-N00N states.
\newblock {\em Cont. Phys.}, 49(2):125--143, (2008).
\bibitem{Browne2005}
Daniel~E. Browne and Terry Rudolph.
\newblock Resource-efficient linear optical quantum computation.
\newblock {\em Phys. Rev. Lett.}, 95:010501, (2005).
\bibitem{Gao2010}
Wei-Bo Gao et~al.
\newblock Experimental demonstration of a hyper-entangled ten-qubit
schr{\"o}dinger cat state.
\newblock {\em Nat. Phys.}, 6(5):331--335, (2010).
\bibitem{Li2020}
Jin-Peng Li et~al.
\newblock Multiphoton graph states from a solid-state single-photon source.
\newblock {\em ACS Photonics}, 7(7):1603--1610, (2020).
\bibitem{Nemoto_PRX_2014}
Kae Nemoto et~al.
\newblock Photonic architecture for scalable quantum information processing in
diamond.
\newblock {\em Phys. Rev. X}, 4:031022, (2014).
\bibitem{Choi_npjQI_2019}
Hyeongrak Choi, Mihir Pant, Saikat Guha, and Dirk Englund.
\newblock Percolation-based architecture for cluster state creation using
photon-mediated entanglement between atomic memories.
\newblock {\em npj Quantum Information}, 5(1):104, (2019).
\bibitem{Schon2005PRL}
C.~Sch\"on, E.~Solano, F.~Verstraete, J.~I. Cirac, and M.~M. Wolf.
\newblock Sequential generation of entangled multiqubit states.
\newblock {\em Phys. Rev. Lett.}, 95:110503, (2005).
\bibitem{Schon2007PRA}
C.~Sch\"on, K.~Hammerer, M.~M. Wolf, J.~I. Cirac, and E.~Solano.
\newblock Sequential generation of matrix-product states in cavity qed.
\newblock {\em Phys. Rev. A}, 75:032311, (2007).
\bibitem{Lindner_2009PRL}
Netanel~H. Lindner and Terry Rudolph.
\newblock Proposal for pulsed on-demand sources of photonic cluster state
strings.
\newblock {\em Phys. Rev. Lett.}, 103:113602, (2009).
\bibitem{Schwartz2016}
I.~Schwartz et~al.
\newblock Deterministic generation of a cluster state of entangled photons.
\newblock {\em Science}, 354(6311):434--437, (2016).
\bibitem{Besse_NatCommun_2020}
Jean-Claude Besse et~al.
\newblock Realizing a deterministic source of multipartite-entangled photonic
qubits.
\newblock {\em Nat. Commun.}, 11(1):4877, (2020).
\bibitem{Economou_2010PRL}
Sophia~E. Economou, Netanel Lindner, and Terry Rudolph.
\newblock Optically generated 2-dimensional photonic cluster state from coupled
quantum dots.
\newblock {\em Phys. Rev. Lett.}, 105:093601, (2010).
\bibitem{Gimeno-Segovia2019}
Mercedes Gimeno-Segovia, Terry Rudolph, and Sophia~E. Economou.
\newblock Deterministic generation of large-scale entangled photonic cluster
state from interacting solid state emitters.
\newblock {\em Phys. Rev. Lett.}, 123:070501, (2019).
\bibitem{Buterakos_2017PRX}
Donovan Buterakos, Edwin Barnes, and Sophia~E. Economou.
\newblock Deterministic generation of all-photonic quantum repeaters from
solid-state emitters.
\newblock {\em Phys. Rev. X}, 7:041023, (2017).
\bibitem{Russo2018}
Antonio Russo, Edwin Barnes, and Sophia~E. Economou.
\newblock Photonic graph state generation from quantum dots and color centers
for quantum communications.
\newblock {\em Phys. Rev. B}, 98:085303, (2018).
\bibitem{Hilaire2021}
Paul Hilaire, Edwin Barnes, and Sophia~E. Economou.
\newblock Resource requirements for efficient quantum communication using
all-photonic graph states generated from a few matter qubits.
\newblock {\em {Quantum}}, 5:397, February 2021.
\bibitem{Zhan2020}
Yuan Zhan and Shuo Sun.
\newblock Deterministic generation of loss-tolerant photonic cluster states
with a single quantum emitter.
\newblock {\em Phys. Rev. Lett.}, 125:223601, (2020).
\bibitem{Borregaard2020}
Johannes Borregaard et~al.
\newblock One-way quantum repeater based on near-deterministic photon-emitter
interfaces.
\newblock {\em Phys. Rev. X}, 10:021071, (2020).
\bibitem{michaels2021multidimensional}
Cathryn~P. Michaels et~al.
\newblock Multidimensional cluster states using a single spin-photon interface
coupled strongly to an intrinsic nuclear register.
\newblock {\em Quantum}, 5:565, (2021).
\bibitem{Pichler2017}
Hannes Pichler, Soonwon Choi, Peter Zoller, and Mikhail~D. Lukin.
\newblock Universal photonic quantum computation via time-delayed feedback.
\newblock {\em PNAS}, 114(43):11362--11367, (2017).
\bibitem{Russo_2019}
Antonio Russo, Edwin Barnes, and Sophia~E Economou.
\newblock Generation of arbitrary all-photonic graph states from quantum
emitters.
\newblock {\em New J. Phys.}, 21(5):055002, (2019).
\bibitem{NJP_9_204_2007}
M.~Van den Nest, W.~D\"ur, A.~Miyake, and H.~J. Briegel.
\newblock Fundamentals of universality in one-way quantum computation.
\newblock {\em New J. Phys.}, 9(6):204, (2007).
\bibitem{HMP2006}
Peter H{{\o}}yer, Mehdi Mhalla, and Simon Perdrix.
\newblock {Resources Required for Preparing Graph States}.
\newblock In {\em {17th International Symposium on Algorithms and Computation
(ISAAC 2006)}}, volume 4288 of {\em Lecture Notes in Computer Science}, pages
638--649, Kolkata, India, (2006).
\bibitem{PhysRevX.7.031016_Nahum}
Adam Nahum, Jonathan Ruhman, Sagar Vijay, and Jeongwan Haah.
\newblock Quantum entanglement growth under random unitary dynamics.
\newblock {\em Phys. Rev. X}, 7:031016, (2017).
\bibitem{PhysRevB.100.134306}
Yaodong Li, Xiao Chen, and Matthew P.~A. Fisher.
\newblock Measurement-driven entanglement transition in hybrid quantum
circuits.
\newblock {\em Phys. Rev. B}, 100:134306, (2019).
\bibitem{VandenNest2004_PRA}
Maarten Van~den Nest, Jeroen Dehaene, and Bart De~Moor.
\newblock Graphical description of the action of local clifford transformations
on graph states.
\newblock {\em Phys. Rev. A}, 69:022316, (2004).
\bibitem{Hein2006}
M.~Hein et~al.
\newblock Entanglement in graph states and its applications.
\newblock {\em Preprint at https://arxiv.org/abs/quant-ph/0602096}, (2006).
\bibitem{Gottesman:1998hu}
Daniel Gottesman.
\newblock The {H}eisenberg representation of quantum computers.
\newblock {\em Preprint at https://arxiv.org/abs/quant-ph/9807006}, (1998).
\bibitem{Cramer_NatCommun_2010}
Marcus Cramer et~al.
\newblock Efficient quantum state tomography.
\newblock {\em Nat. Commun.}, 1(1):149, (2010).
\bibitem{Hein2004PRA}
M.~Hein, J.~Eisert, and H.~J. Briegel.
\newblock Multiparty entanglement in graph states.
\newblock {\em Phys. Rev. A}, 69:062311, (2004).
\bibitem{Orus2014}
Román Orús.
\newblock A practical introduction to tensor networks: Matrix product states
and projected entangled pair states.
\newblock {\em Ann. Phys.}, 349:117--158, (2014).
\bibitem{Audenaert_2005NJP}
Koenraad M.~R. Audenaert and Martin~B Plenio.
\newblock Entanglement on mixed stabilizer states: normal forms and reduction
procedures.
\newblock {\em New J. Phys.}, 7:170, (2005).
\bibitem{RandomGraphs}
E.~N. Gilbert.
\newblock {Random Graphs}.
\newblock {\em Ann. Math. Stat.}, 30(4):1141--1144, (1959).
\bibitem{Fattal2004}
David Fattal, Toby S.~Cubitt, Yoshihisa Yamamoto, Sergey Bravyi, and Isaac~L.
Chuang.
\newblock Entanglement in the stabilizer formalism.
\newblock {\em Preprint at https://arxiv.org/abs/quant-ph/0406168}, (2004).
\bibitem{OUM201715}
Sang-il Oum.
\newblock Rank-width: Algorithmic and structural results.
\newblock {\em Discret. Appl. Math.}, 231:15--24, (2017).
\newblock Algorithmic Graph Theory on the Adriatic Coast.
\bibitem{Massey1978}
J.~L. Massey.
\newblock Foundation and methods of channel encoding.
\newblock {\em Proc. Int. Conf. on Information Theory and Systems}, 65, (1978).
\newblock (Berlin, Germany, Sept. 1978).
\bibitem{OUM200579}
Sang-il Oum.
\newblock Rank-width and vertex-minors.
\newblock {\em J. Combin. Theory Ser. B}, 95(1):79--100, (2005).
\bibitem{Adler2017}
Isolde Adler, Mamadou~Moustapha Kant{\'e}, and O-joung Kwon.
\newblock Linear rank-width of distance-hereditary graphs {I}. {A}
polynomial-time algorithm.
\newblock {\em Algorithmica}, 78(1):342--377, (2017).
\bibitem{Jeong2017}
Jisu Jeong, Eun~Jung Kim, and Sang-il Oum.
\newblock The ``art of trellis decoding'' is fixed-parameter tractable.
\newblock {\em IEEE Trans. Inform. Theory}, 63(11):7178--7205, (2017).
\end{thebibliography}
\widetext
\begin{center}
\LARGE{\textbf{Supplementary Information} }
\end{center}
\setcounter{section}{0}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\makeatletter
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\section*{Supplementary Note 1} This supplementary material contains several additional examples of generation protocols produced using our algorithm. We begin with a simple example that illustrates in detail how our algorithm works. We then provide solutions for more complicated examples of practical importance, including error correcting codes and repeater graph states of arbitrary size. These more complicated examples are solved numerically using MATLAB codes that are available on GitHub \cite{MATLAB_CircuitSolver}. This code expresses the generation sequence in terms of an MPS \cite{Schon2005PRL_S} for bookkeeping purposes: \begin{equation}\label{eq:MPSsolution}
\Ket{\Psi}
=
U_{p,\mathrm{tot}}\langle\psi_f|\big[\prod_{j = 1}^{n_p}\left(\hat{M}_{j}U_{e,j}\hat{E}_{\eta_{j}}\right)\big]W_0 |\psi_0\rangle, \end{equation} in which the initial and final states of the emitter qubits are simply product states of $\Ket{0}$: $\Ket{\psi_f} = \Ket{\psi_0} = \Ket{0}^{\otimes n_e}$. We denote by $\eta_j$ the emitter qubit that emits the $j$-th photon and by $\mu_j$ the emitter qubit that is measured after emitting the $j$-th photon. $\hat{E}_{\eta_j}$ is the emission tensor, $\hat{E}_{\eta_j} = \Ket{0}_{j} \Ket{0}_{\eta_j}\!\!\Bra{0}_{\eta_j} + \Ket{1}_{j} \Ket{1}_{\eta_j}\!\!\Bra{1}_{\eta_j}$, which describes the emission of photon $j$ from emitter $\eta_j$ and can be represented as $\mathrm{CNOT}_{\eta_j,j}$. $U_{e,j}$ is the unitary operation obtained from the $j$-th photon absorption step, which transforms $g_a$ as explained in the main text (Sec.~II.A). $\hat{M}_j$ is the identity if no measurement happens ($\mu_j$ is not assigned); otherwise, $\hat{M}_j = W_{j}H_{\mu_j}X^{s_j}_{\mu_j}\hat{\pi}_{\mu_j}$, with the projection $\hat{\pi}_{\mu_j} \equiv \frac{1}{2}[\mathbbm{1}+(-1)^{s_j}Z_{\mu_j}]$ and its random outcome $s_j\in\{0,1\}$. Here, $W_j$ is the unitary operation obtained from the time-reversed measurement (Sec.~IV.B), and $W_0$ is the unitary operation that disentangles all emitters at the final stage of the time-reversed sequence. Finally, $U_{p,\mathrm{tot}} = \prod_j\left(X^{s_j}_j U_{p,j}\right)$ is the local Clifford operation that acts on the photons with conditional $X^{s_j}_j$ flips. The profile of the solution is stored as $\{U_{e,j},U_{p,j},\mu_j,\eta_j,W_j,W_0\}$. We note that such a solution is usually not unique, since there are multiple possible choices of emitter gates and emitter sites in each photon absorption and time-reversed measurement step.
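To make the role of the emission tensor concrete, the following minimal Python sketch (our own illustration, not part of the published MATLAB package; the helper name \texttt{emit\_photon} is ours) applies $\hat{E}_{\eta_j}$, i.e., a $\mathrm{CNOT}$ with the emitter as control, to an emitter state and a fresh photon in $\Ket{0}$:

```python
import numpy as np

# CNOT with the FIRST qubit (the emitter) as control, acting on the
# (emitter, photon) pair in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def emit_photon(emitter_state):
    """Append a photon in |0> and entangle it with the emitter via CNOT."""
    photon0 = np.array([1, 0], dtype=complex)
    joint = np.kron(emitter_state, photon0)  # emitter (control) x photon
    return CNOT @ joint

# (a|0> + b|1>)_e |0>_p  ->  a|00> + b|11>
a, b = 0.6, 0.8
out = emit_photon(np.array([a, b], dtype=complex))
```

Starting from $(a\Ket{0}+b\Ket{1})_{e}\Ket{0}_{p}$, the output is $a\Ket{00}+b\Ket{11}$: the photon inherits the emitter's amplitudes, which is exactly the action of $\hat{E}_{\eta_j}$ above.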
As discussed in the main text, the height function plays a central role in determining the number of emitters and the operation sequence needed to generate a target photonic graph state. As shown in Eq.~(1) of the main text, when the stabilizers $g_m$ are in the echelon gauge, the height function can be expressed as \begin{equation}\label{suppeq:height_function}
h(x)=n - x - \#\{g_m|\mathtt{l}(g_m)>x\}, \end{equation}
where $\mathtt{l}(g_m)$ is the index of the left-most (smallest-index) site on which $g_m$ acts nontrivially. In the main text, we showed that the difference in the height function across adjacent sites determines whether we perform a photon absorption or a time-reversed measurement at each step of the algorithm. Therefore, we define \begin{equation}\label{eq:deltah}
\delta h(x)\equiv h(x)-h(x-1) = \#\{g_m|\mathtt{l}(g_m)=x\}-1, \end{equation} from which it is apparent that this difference depends only on the number of stabilizers (in the echelon gauge) whose left-most site is $x$.
We begin by demonstrating our protocol solver algorithm in the case of the simple 4-photon graph state displayed in Supplementary Fig.~\hyperref[fig:SlashSquareSolving]{\ref*{fig:SlashSquareSolving}}(a). The stabilizers are given by \begin{equation}\label{eq:g1g2g3g4}
\begin{aligned}
g_1 &= \sigma^x_1\sigma^z_2\sigma^z_3,\quad
g_2 = \sigma^z_1\sigma^x_2\sigma^z_3\sigma^z_4,\\
g_3 &= \sigma^z_1\sigma^z_2\sigma^x_3\sigma^z_4,\quad
g_4 = \sigma^z_2\sigma^z_3\sigma^x_4.\\
\end{aligned} \end{equation} We can switch to the echelon gauge by redefining $g_3\to g_2g_3$. We then calculate the height function using Supplementary Eq.~\eqref{suppeq:height_function}, finding that the maximum is $2$. Therefore, at least $n_e=2$ emitter qubits are needed, and so we assemble a $6$-qubit lattice. We can depict the complete set of 6 stabilizers as a tableau, as shown in Supplementary Fig.~\hyperref[fig:SlashSquareSolving]{\ref*{fig:SlashSquareSolving}}(b).
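This height-function computation is easy to reproduce; the small Python sketch below (ours, independent of the MATLAB solver) evaluates Supplementary Eqs.~\eqref{suppeq:height_function} and \eqref{eq:deltah} directly from the left-most sites of the echelon-gauge generators:

```python
# Height function h(x) = n - x - #{g_m : l(g_m) > x} from the leftmost
# nontrivial sites l(g_m) of the echelon-gauge stabilizers (sites 1-based).
def height(leftmost, n):
    return [n - x - sum(1 for l in leftmost if l > x) for x in range(n + 1)]

# 4-photon example: after g3 -> g2*g3 the leftmost sites are
# l(g1)=1, l(g2)=1, l(g2*g3)=2, l(g4)=2.
h = height([1, 1, 2, 2], n=4)
deltas = [h[x] - h[x - 1] for x in range(1, 5)]  # delta h(1), ..., delta h(4)
```

The result $h=[0,1,2,1,0]$ has maximum $2$, matching the two-emitter count, and $\delta h=(+1,+1,-1,-1)$ reproduces the absorb/measure decisions used in the step-by-step walkthrough that follows.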
In Supplementary Fig.~\hyperref[fig:SlashSquareSolving]{\ref*{fig:SlashSquareSolving}}(c), we first obtain inset (1) by transforming Supplementary Fig.~\hyperref[fig:SlashSquareSolving]{\ref*{fig:SlashSquareSolving}}(b) to the echelon gauge. The upper left sub-block of the tableau is exactly Supplementary Eq.~\eqref{eq:g1g2g3g4} with $g_3\to g_2g_3$. Next we describe in detail how the generator set is updated from inset (1) to inset (17) step by step. The column label $j$ indicates which photon we are currently focusing on, and the labels (i),...,(iv) indicate the specific step of our algorithm. For each photon, we do the following steps: \begin{itemize}
\item $j=4$: (i) Obtain inset (1) by transforming to echelon gauge: $g_3\to g_2g_3$.
(ii) Supplementary Eq.~\eqref{eq:deltah} gives $\delta h(4)=-1$. Perform a time-reversed measurement on emitter site 5 by applying a Hadamard $H_5$ followed by $\mathrm{CNOT}_{54}$, which yields inset (2).
(iii) Let $g_a = g_5 = \sigma^x_4\sigma^x_5$ in inset (2). One gets inset (3) by performing Hadamards on sites 4 and 5. Then the $4$-th photon is absorbed into the emitter on site 5 by applying $\mathrm{CNOT}_{54}$. Replace $g_4\to g_4g_5$ to eliminate the redundant $\sigma^z_4$, yielding inset (4). ($\mu_4=5$, $\eta_4=5$, $U_{p,4}=H_4$, $U_{e,4} = H_5$, $W_4=H_5$.)
\item $j=3$: (i) Skip this step since inset (4) is already in echelon gauge.
(ii) Supplementary Eq.~\eqref{eq:deltah} gives $\delta h(3) =-1$. Perform a time-reversed measurement on emitter site 6 by applying $H_6$ followed by $\mathrm{CNOT}_{63}$. Inset (6) is then obtained by redefining $g_5\leftrightarrow g_6$.
(iii) Let $g_a = g_5 = \sigma^x_3\sigma^x_6$ in inset (6). One gets inset (7) by applying Hadamards on sites 3 and 6. Then the $3$rd photon is absorbed by applying $\mathrm{CNOT}_{63}$. Replace $g_3\to g_3g_5$ to eliminate the redundant $\sigma^z_3$. Thus, inset (7) becomes (8). ($\mu_3=6$, $\eta_3=6$, $U_{p,3}=H_3$, $U_{e,3} = H_6$, $W_3=H_6$.)
\item $j=2$: (i) Skip this step since inset (8) is already in echelon gauge.
(ii) Skip this step since Supplementary Eq.~\eqref{eq:deltah} gives $\delta h(2)=1$.
(iii) Choose $g_a = g_4 = \sigma^z_2\sigma^z_5\sigma^x_6$ in inset (10). One gets inset (11) by applying $H_6$ followed by $\mathrm{CNOT}_{65}$ on the emitters, so that $g_a\to\sigma^z_2\sigma^z_5$. Then the $2$nd photon is absorbed into emitter 5 by applying $\mathrm{CNOT}_{52}$. Redefine $g_k\to g_kg_4$ for $k=1,3$ to eliminate the redundant $\sigma^z$'s. Thus, inset (11) becomes (12).
($\eta_2=5$, $U_{p,2}=\mathbbm{1}$, $U_{e,2} = H_6\mathrm{CNOT}_{65}$, $\hat{M}_2 = \mathbbm{1}$.)
\item $j=1$: (i) Obtain inset (13) from (12) by transforming to echelon gauge: $(g_3,g_4,g_5,g_6)\rightarrow (g_4,g_5,g_6,g_3)$.
(ii) Skip this step since Supplementary Eq.~\eqref{eq:deltah} gives $\delta h(1)=1$.
(iii) Choose $g_a = g_2 = \sigma^z_1\sigma^x_5\sigma^z_6$ in inset (14). One gets inset (15) by applying $H_5$ and then $\mathrm{CNOT}_{65}$ to transform $g_a\to\sigma^z_1\sigma^z_5$. Then the $1$st photon is absorbed into emitter site 5 by applying $\mathrm{CNOT}_{51}$. Thus, inset (15) becomes (16).
($\eta_1=5$, $U_{p,1}=\mathbbm{1}$, $U_{e,1} = H_5\mathrm{CNOT}_{65}$, $\hat{M}_1 = \mathbbm{1}$.)
\item (iv) Finally, to recover the state $\Ket{0}^{\otimes n}$, one needs to disentangle the emitter qubits. This can be done with the following gate sequence: $H_5\mathrm{CNOT}_{56}H_5$. In the last step, we permute the $g_m$ to obtain inset (17). ($W_0=H_6\mathrm{CNOT}_{56}H_5$.) \end{itemize} \begin{figure}
\caption{\textbf{Step-by-step illustration of the protocol solver.} (a) A target graph state with 4 photons. (b) The set of generators $\mathcal{G}_f =\{g_m\}$ is depicted as a tableau in which each row corresponds to one generator. Different colors correspond to different Pauli operators. The first 4 columns correspond to photonic qubits, and the last 2 columns correspond to emitters. (c) Step by step demonstration of how to obtain the time-reversed generation sequence, where $\mathcal{G}_0=\{\sigma^z_i\}$ is finally obtained. Explanations are in the main text.
(d) Local Clifford equivalent graph state representations of tableaux in (c).}
\label{fig:SlashSquareSolving}
\end{figure} \begin{figure}\caption{\textbf{Generation circuits for the 4-photon graph state.} (a) The circuit obtained by reversing the operations found by the solver, which uses two emitter qubits. (b) An optimized circuit requiring only one emitter qubit, obtained by swapping the 1st and 3rd photons in the emission order.}\label{fig:SlashSquareCircuit}
\end{figure} Now that the algorithm is complete, we reverse all the operations to obtain the final generation sequence. This circuit is shown in Supplementary Fig.~\hyperref[fig:SlashSquareCircuit]{\ref*{fig:SlashSquareCircuit}}(a). It is worth noting that, in this example, the emission sequence can be further optimized by swapping the 1st and 3rd photons in the emission order, such that the maximum of $h(x)$ is reduced to $1$. Thus, only one emitter qubit is needed in this case, and the corresponding generation circuit is displayed in Supplementary Fig.~\hyperref[fig:SlashSquareCircuit]{\ref*{fig:SlashSquareCircuit}}(b).
\section*{Supplementary Note 2} In this note, we demonstrate how to generate a useful quantum error correction code, together with a continuous logical rotation. In particular, we present an emission sequence for the Shor code \cite{ShorCode1995_S} with 9 photonic qubits, which can protect a qubit from an arbitrary error on a single qubit. The stabilizer generators of this code are well known: $g_{j} = \sigma^z_j\sigma^z_{j+1}$ for $j = 1,2,4,5,7,8$, and $g_3 = \sigma^x_1\sigma^x_2\cdots \sigma^x_6$, $g_6 = \sigma^x_4\sigma^x_5\cdots \sigma^x_9$ \cite{nielsen_chuang_2010_S}. We can also define the logical operators $X_L \equiv \sigma^z_1\sigma^z_2\cdots \sigma^z_9$, $Z_L \equiv \sigma^x_1\sigma^x_2\cdots \sigma^x_9$, and $Y_L = iX_LZ_L$. Let the last stabilizer be $g_9 = \pm X_L$, which determines a pair of logical basis states $\Ket{\pm}_L$. For both choices, Supplementary Eq.~\eqref{suppeq:height_function} gives $h_{\max} = 2$, and the emission circuit solutions for $\Ket{\pm}_L$ are given in Supplementary Fig.~\hyperref[fig:ShorCircuit]{\ref*{fig:ShorCircuit}}, where $\Ket{+}_L$ and $\Ket{-}_L$ correspond to $R = \mathbbm{1}$ and $R = X_{e_1}$, respectively. Therefore, by replacing $R$ with a more general $x$-rotation, $e^{i\frac{\varphi}{2}X_{e_1}} \equiv \mathbbm{1}\cos\frac{\varphi}{2} + i X_{e_1}\sin\frac{\varphi}{2}$, we can obtain a rotated logical qubit $\Ket{\varphi}_L = e^{i\frac{\varphi}{2}Z_L}\Ket{+}_L$ with an arbitrary angle $\varphi$.
That is, the circuit in Supplementary Fig.~\hyperref[fig:ShorCircuit]{\ref*{fig:ShorCircuit}} allows us to transmit a rotated photonic logical qubit protected by the Shor code, with merely $2$ emitter qubits, $1$ two-qubit gate and $2$ measurements, which is surprisingly simple. \begin{figure}
\caption{\textbf{Shor code example.} The emission circuit that generates a logical state of the Shor code, controlled by a local operation $R$ (the yellow block). The inset displays the height function of $\Ket{\pm}_L$, which has $h_{\max} = 2$.
}
\label{fig:ShorCircuit}
\end{figure}
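The value $h_{\max}=2$ quoted above can be verified numerically. The sketch below (our own Python helper, separate from the MATLAB code on GitHub) brings the nine Shor-code stabilizers into an echelon form by symplectic Gaussian elimination (row operations only, so overall phases can be ignored) and evaluates the height function:

```python
def echelon_leftmost(stabs, n):
    """Row-reduce stabilizers, given as (x_mask, z_mask) bit masks over n
    sites, and return the sorted leftmost nontrivial sites l(g_m)."""
    rows = [list(s) for s in stabs]
    r = 0
    for site in range(n):
        for part in (0, 1):  # x-part column first, then z-part column
            bit = 1 << site
            piv = next((i for i in range(r, len(rows)) if rows[i][part] & bit), None)
            if piv is None:
                continue
            rows[r], rows[piv] = rows[piv], rows[r]
            for i in range(len(rows)):
                if i != r and rows[i][part] & bit:
                    rows[i][0] ^= rows[r][0]   # multiply g_i by g_r
                    rows[i][1] ^= rows[r][1]
            r += 1
    return sorted(min(s + 1 for s in range(n) if (x | z) >> s & 1)
                  for x, z in rows)

def height(leftmost, n):
    return [n - x - sum(1 for l in leftmost if l > x) for x in range(n + 1)]

# Shor-code stabilizers: g_j = Z_j Z_{j+1} (j = 1,2,4,5,7,8),
# g3 = X1..X6, g6 = X4..X9, and g9 = X_L = Z1..Z9.
def zz(j):               # Z_j Z_{j+1}, sites 1-based
    return (0, (1 << (j - 1)) | (1 << j))

def xs(a, b):            # X_a ... X_b
    return (sum(1 << s for s in range(a - 1, b)), 0)

stabs = [zz(1), zz(2), xs(1, 6), zz(4), zz(5), xs(4, 9), zz(7), zz(8),
         (0, (1 << 9) - 1)]
h = height(echelon_leftmost(stabs, 9), 9)
```

The computed profile is $h=[0,1,1,1,2,2,1,1,1,0]$, so $h_{\max}=2$, in agreement with the two-emitter circuit of Supplementary Fig.~\hyperref[fig:ShorCircuit]{\ref*{fig:ShorCircuit}}.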
\section*{Supplementary Note 3} In this section, we generalize the repeater graph state example from the main text and present explicit generation circuits for repeater graphs with arbitrarily many photons. As shown in Supplementary Fig.~\hyperref[fig:RGS2m]{\ref*{fig:RGS2m}}, for a repeater graph state with $2m$ photons, the maximum of the height function indicates that we need $2$ emitters regardless of how large $m$ is $(m\ge4)$. The unitary operations $A$, $B$ and $C$ displayed in Supplementary Fig.~\hyperref[fig:RGS2m]{\ref*{fig:RGS2m}}(a) depend on $m$: \begin{equation}
A = X_{e_2}^{\lfloor m/2\rfloor+1} ,\quad B = X_{e_1}^{\lfloor m/2\rfloor}, \quad C = P^{m}_{e_2} \;, \end{equation} where $X_i=\sigma_i^x$, and $P=\mathrm{diag}(1,i)$. Compared to the approach given in previous work \cite{Buterakos_2017PRX_S}, which requires $m-1$ two-qubit gates and $m$ measurements, the new solution in Supplementary Fig.~\hyperref[fig:RGS2m]{\ref*{fig:RGS2m}}(a) uses $2m - 3$ two-qubit gates and $m-1$ measurements, reducing the number of measurements needed to produce the state. We highlight that our method yields different solutions under different settings; in particular, the solution from Ref.~\cite{Buterakos_2017PRX_S} can also be obtained with our algorithm. \begin{figure}
\caption{
\textbf{Example of a large RGS.}
(a) and (b) show the emission circuit and height function for the repeater graph state of $2m$ photons displayed in (c). The boxed area of the circuit is repeated multiple times to generate photons $p_4,p_5,p_6,\cdots, p_{2m-6}$.
}
\label{fig:RGS2m}
\end{figure}
\section*{Supplementary Note 4}
Finally, we consider a modified repeater graph state that includes some additional redundancy to further boost the likelihood of successful Bell measurements \cite{Buterakos_2017PRX_S}. Supplementary Fig.~\hyperref[fig:RGS6m]{\ref*{fig:RGS6m}}(a) shows an example of such a repeater graph state with $6m$ photons $(m > 3)$. Note that compared to the state in Supplementary Fig.~\hyperref[fig:RGS2m]{\ref*{fig:RGS2m}}(c), this state contains twice as many external photon arms and is missing those internal edges that are not necessary for its functionality as a repeater. Supplementary Fig.~\hyperref[fig:RGS6m]{\ref*{fig:RGS6m}}(b) shows that the height function is at most $2$ for any $m > 3$, i.e., only two emitter qubits are needed to generate the state in (a). We list all operations in the generation circuit using the notation of Supplementary Eq.~\eqref{eq:MPSsolution}. Denoting the $e_1$-th and the $e_2$-th qubits as the emitter qubits, where $e_1 = 6m+1$ and $e_2 = 6m+2$, the circuit is given by: \begin{equation}
\begin{aligned}
W_0 &= \mathrm{CNOT}_{e_1e_2}H_{e_1}H_{e_2} \;\\
\eta_j &=
\left\{
\begin{aligned}
&e_2,\quad 6m-5\le j\le 6m - 3\\
&e_1,\quad \text{otherwise}
\end{aligned}
\right.\\
U_{p,j} &=
\left\{
\begin{aligned}
&\mathbbm{1},\quad j \equiv 1 \pmod{3}\\
&H_j,\quad \text{otherwise}\\
\end{aligned}
\right.\\
U_{e,j} &=
\left\{
\begin{aligned}
&\mathrm{CNOT}_{e_2e_1},\quad j = 3\\
&H_{e_1},\quad j = 3k \text{ with } 2\le k\le2m\\
&H_{e_2},\quad j = 6m-3\\
&\mathbbm{1},\quad \text{otherwise}
\end{aligned}
\right.
\end{aligned}
,\quad
\begin{aligned}
\mu_{j} &=\left\{
\begin{aligned}
& e_1,\quad j = 3k \text{ with } 2\le k\le 2m,\\
&\qquad\qquad\qquad\text{and}\;k\ne 2m -1\\
& e_2,\quad j = 6m - 3\\
& \text{not assigned},\quad\text{otherwise}
\end{aligned}
\right.\\
W_{j} &=\left\{
\begin{aligned}
& \mathrm{CNOT}_{e_1e_2},\quad j = 3k \text{ with } 2\le k\le 2m-2,\\
&\qquad\qquad\qquad\text{and}\;k\ne m -1\\
& H_{e_2}\mathrm{CNOT}_{e_1e_2},\quad j = 3m - 3\\
& H_{e_2},\quad j = 6m - 3\\
& H_{e_1},\quad j = 6m\\
& \mathbbm{1}, \quad \text{otherwise}
\end{aligned}\right.
\end{aligned}. \end{equation} In the above circuit, there are $2m-1$ measurements and $2m-1$ $\mathrm{CNOT}$ gates. Plugging these operations into Supplementary Eq.~\eqref{eq:MPSsolution} gives the full sequence. \begin{figure}
\caption{\textbf{Modified RGS example.} (a) The graph of the modified RGS, which has $6m$ vertices. The labels represent the emission sequence. (b) The height function $h(x)$ for the target graph state in (a).
}
\label{fig:RGS6m}
\end{figure}
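As a consistency check on the operation counts quoted above, the following short Python sketch (ours; the function name is hypothetical) tallies the measurements and $\mathrm{CNOT}$ gates prescribed by the case structure of the circuit:

```python
# Tally measurements and CNOTs in the 6m-photon modified-RGS sequence.
def circuit_counts(m):
    meas, cnots = 0, 1            # W_0 contributes one CNOT_{e1 e2}
    for j in range(1, 6 * m + 1):
        k, rem = divmod(j, 3)
        if rem == 0:
            if 2 <= k <= 2 * m and k != 2 * m - 1:
                meas += 1         # mu_j = e_1
            if 2 <= k <= 2 * m - 2:
                cnots += 1        # W_j contains CNOT_{e1 e2} (incl. k = m-1)
        if j == 3:
            cnots += 1            # U_{e,3} = CNOT_{e2 e1}
        if j == 6 * m - 3:
            meas += 1             # mu_j = e_2
    return meas, cnots
```

For every $m>3$ this reproduces the stated $2m-1$ measurements and $2m-1$ $\mathrm{CNOT}$ gates.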
\end{document} |
\begin{document}
\begin{center} \textbf{\large On uniform convergence of Fourier series} \end{center}
\begin{center} Vladimir Lebedev \end{center}
\begin{quotation} {\small \textsc{Abstract.} We consider the space $U(\mathbb T)$ of all continuous functions on the circle $\mathbb T$ with uniformly convergent Fourier series. We show that if $\varphi:
\mathbb T\rightarrow\mathbb T$ is a continuous piecewise linear but not linear map, then $\|e^{in\varphi}\|_{U(\mathbb T)}\simeq\log n$.
References: 5 items.
Keywords: uniformly convergent Fourier series.
AMS 2010 Mathematics Subject Classification. 42A20.}
\end{quotation}
\quad
Given any integrable function $f$ on the circle $\mathbb T=\mathbb R /2\pi \mathbb Z$ (where $\mathbb R$ is the real line and $\mathbb Z$ is the set of integers), consider its Fourier series $$ f(t)\sim \sum_{k\in \mathbb Z}\widehat f(k) e^{ikt}, \qquad \widehat{f}(k)=\frac{1}{2\pi}\int_{\mathbb T} f(t)e^{-ikt} dt. $$
Let $C(\mathbb T)$ be the space of continuous functions
$f$ on $\mathbb T$ with the usual norm $\|f\|_{C(\mathbb T)}=\sup_{t\in\mathbb T}|f(t)|$.
Let $U(\mathbb T)$ be the space of all functions
$f\in C(\mathbb T)$ whose Fourier series converges uniformly, that is, $\|S_N(f)-f\|_{C(\mathbb T)}\rightarrow 0$ as $N\rightarrow\infty$, where $S_N(f)$ stands for the $N$-th partial sum of the Fourier series of $f$: $$
S_N(f)(t)=\sum_{|k|\leq N}\widehat f(k) e^{ikt}. $$ Endowed with the natural norm $$
\|f\|_{U(\mathbb T)}=\sup_N\|S_N(f)\|_{C(\mathbb T)}, $$ the space $U(\mathbb T)$ is a Banach space.
Consider also the space $A(\mathbb T)$ of functions $f\in C(\mathbb T)$ with absolutely convergent Fourier series. We put $$
\|f\|_{A(\mathbb T)}=\|\widehat{f}\|_{l^1}=
\sum_{k\in\mathbb Z} |\widehat{f}(k)|. $$
The space $A(\mathbb T)$ is a Banach space. We have $A(\mathbb T)\subset U(\mathbb T)$ and obviously $\|\cdot\|_{U(\mathbb T)}\le\|\cdot\|_{A(\mathbb T)}$.
Let $\varphi$ be a continuous map of the circle $\mathbb T$ into itself, i.e. a continuous function $\varphi: \mathbb R\rightarrow\mathbb R$ satisfying
$\varphi(t+2\pi)=\varphi(t)~(\mathrm{mod}\,2\pi), ~t\in\mathbb R$. According to the well-known Beurling--Helson theorem [1] (see also [2], [3]), if $\|e^{in\varphi}\|_{A(\mathbb T)}=O(1), ~n\rightarrow\infty$, then $\varphi$ is linear. At the same time it is known that there exist nontrivial maps $\varphi : \mathbb T\rightarrow\mathbb T$ such that
$\|e^{in\varphi}\|_{U(\mathbb T)}=O(1)$. For a survey of certain results on the maps of the circle and the spaces $A(\mathbb T), ~U(\mathbb T)$ see [3], [4]. For more recent results see [5].
The following assertion is due to J.-P. Kahane
[2, Ch. IV]. If a nonlinear continuous map $\varphi : \mathbb T\rightarrow\mathbb T$ is piecewise linear (which means that $[0, 2\pi]$ is a finite union of intervals such that $\varphi$ is linear on each of them), then $\|e^{in\varphi}\|_{A(\mathbb T)}\simeq\log |n|$. (The sign $\simeq$ means that for all $n\in\mathbb Z$ with $|n|$ sufficiently large the ratio of the corresponding quantities is contained between two positive constants.) Here we shall obtain a similar assertion for the space $U(\mathbb T)$.
\quad
\textbf{Theorem.} \emph {Let $\varphi$ be a piecewise linear but not linear continuous map of the circle $\mathbb T$ into itself. Then} $ \|e^{in\varphi}\|_{U(\mathbb T)}\simeq\log |n|, ~n\in\mathbb Z.$
\quad
In particular, this theorem implies that, generally speaking, nontrivial piecewise linear changes of variable destroy the uniform convergence of Fourier series. Moreover, they do not act from $A(\mathbb T)$ to $U(\mathbb T)$. Indeed, assuming that for each function $f\in A(\mathbb T)$ the superposition $f\circ\varphi$ belongs to $U(\mathbb T)$, we would have
$\|e^{in\varphi}\|_{U(\mathbb T)}=O(1)$ (it suffices to apply the closed graph theorem to the operator $f\mapsto f\circ\varphi$).
To prove the theorem we only have to obtain the $\log |n|$ lower bound; the upper bound
$\|e^{in\varphi}\|_{U(\mathbb T)}=O(\log |n|)$ follows from the inequality $\|\cdot\|_{U(\mathbb T)}\leq \|\cdot\|_{A(\mathbb T)}$ and the above result of Kahane.
We shall need the following simple lemma that perhaps is of interest in itself.
\quad
\textbf{Lemma.} \emph {Let $m\in C(\mathbb T)$ be a function such that $$
\|m\|_*=\sum_{n\in\mathbb Z}|\widehat{m}(n)|\log(|n|+2)<\infty. $$ Then for each function $f\in U(\mathbb T)$ we have $mf\in U(\mathbb T)$ and $$
\|mf\|_{U(\mathbb T)}\leq c\|m\|_*\,\|f\|_{U(\mathbb T)}, $$ where the constant $c>0$ is independent of $f$ and $m$.}
\quad
\emph{Proof.} Let $f\in U(\mathbb T)$. For $n\in \mathbb Z$ we put $e_n(t)=e^{int}$. Let $n>0$. Then $$ S_N(e_n f)=e_nS_{N+n}(f)+e_N\widehat{f}(N-n)-e_{N+n}S_n(e_{-N}f). \eqno(1) $$ It is easy to see that this relation implies the inclusion $e_nf\in U(\mathbb T)$.
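Indeed, (1) can be checked directly: by the definition of the partial sums, $$ S_N(e_n f)=\sum_{|k|\leq N}\widehat f(k-n)e_k=e_n\sum_{j=-N-n}^{N-n}\widehat f(j)e_j, \qquad e_{N+n}S_n(e_{-N}f)=e_n\sum_{j=N-n}^{N+n}\widehat f(j)e_j, $$ where $e_j(t)=e^{ijt}$, so that $$ e_nS_{N+n}(f)-e_{N+n}S_n(e_{-N}f)=e_n\sum_{j=-N-n}^{N-n-1}\widehat f(j)e_j=S_N(e_nf)-\widehat f(N-n)e_N, $$ which is (1).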
For each function $g\in U(\mathbb T)$
we have $\|g\|_{C(\mathbb T)}\leq \|g\|_{U(\mathbb T)}$. At the same time (as is well known) for each function $g\in C(\mathbb T)$ we have $$
\|S_n(g)\|_{C(\mathbb T)}\leq c\|g\|_{C(\mathbb T)}\log (n+2) $$ with a constant $c>0$ independent of $n$ and $g$. So (1) yields $$
\|S_N(e_nf)\|_{C(\mathbb T)}\leq \|S_{N+n}(f)\|_{C(\mathbb T)}+\|f\|_{C(\mathbb T)}+\|S_n(e_{-N}f)\|_{C(\mathbb T)} $$ $$
\leq\|f\|_{U(\mathbb T)}+\|f\|_{C(\mathbb T)}+
c\|f\|_{C(\mathbb T)}\log (n+2)\leq c_1\|f\|_{U(\mathbb T)}\log (n+2). $$
Thus, $\|e_nf\|_{U(\mathbb T)}\leq c_1\|f\|_{U(\mathbb T)}\log
(|n|+2)$. The same estimate holds for $n<0$ (complex conjugation does not affect the norm of a function in $U(\mathbb T)$), and for $n=0$ it is trivial. Writing $mf=\sum_{n\in\mathbb Z}\widehat{m}(n)e_nf$, we obtain $$ \|mf\|_{U(\mathbb T)}\leq c_1\sum_{n\in\mathbb Z}|\widehat{m}(n)|\log(|n|+2)\,\|f\|_{U(\mathbb T)}= c_1\|m\|_*\|f\|_{U(\mathbb T)}, $$ and the lemma follows.
\quad
\emph{Proof of the theorem.} In the standard way we identify functions on $\mathbb T$ with functions on the interval $[-\pi, \pi]$. For $v\in\mathbb R$ define the function $e_v$ on $[-\pi, \pi]$ by $e_v(t)=e^{ivt}$. For an arbitrary interval $I\subseteq [-\pi, \pi]$ let $1_I$ denote its characteristic function: $1_I(t)=1$ for $t\in I$, $1_I(t)=0$ for $t\in [-\pi, \pi]\setminus I$. For $0<\varepsilon<\pi$ let $\Delta_\varepsilon$ be the ``triangle'' function supported on the interval $(-\varepsilon, \varepsilon)$, that is, the function on $[-\pi, \pi]$ defined by
$\Delta_\varepsilon (t)=\max (0, 1-|t|/\varepsilon)$.
Let $t_0$ be a point such that $\varphi$ is linear in some left half-neighborhood of $t_0$ and in some right half-neighborhood of $t_0$, but is not linear in any neighborhood of $t_0$. Replacing (if necessary) the function $\varphi(t)$ by $\varphi(t+t_0)-\varphi(t_0)$, we can assume that $t_0=0$ and $\varphi(t_0)=0$; thus we can assume that for a certain $\varepsilon, ~0<\varepsilon<\pi,$ we have $\varphi(t)=\alpha t$ for $t\in(-\varepsilon, 0]$ and $\varphi(t)=\beta t$ for $t\in[0, \varepsilon)$, where $\alpha\neq \beta$.
Direct calculation yields for $k\neq n\alpha, ~n\beta$ $$ \widehat{\Delta_\varepsilon e^{in\varphi}}(k)=\frac{1}{2\pi i}\bigg(\frac{1}{n\alpha -k}-\frac{1}{n\beta -k}\bigg)- $$ $$ -\frac{1}{i\varepsilon}\bigg(\frac{1}{n\alpha-k}\widehat{e_{n\alpha} 1_{(-\varepsilon, 0)}}(k)- \frac{1}{n\beta-k}\widehat{e_{n\beta} 1_{(0, \varepsilon)}}(k)\bigg). \eqno(2) $$
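Identity (2) can be confirmed numerically. The following sketch is our own check, not part of the proof; the sample parameters $\varepsilon=1$, $\alpha=1$, $\beta=2$, $n=3$, $k=5$ are hypothetical choices with $k\neq n\alpha,\,n\beta$. It compares a quadrature of the Fourier coefficient with the right-hand side of (2):

```python
import cmath
import math

# Hypothetical sample parameters with k != n*alpha, n*beta.
eps, alpha, beta, n, k = 1.0, 1.0, 2.0, 3, 5
mu, nu = n * alpha - k, n * beta - k

def fourier_coeff(k, N=200000):
    # (1/2pi) * integral of Delta_eps(t) e^{i n phi(t)} e^{-ikt} dt over
    # the support (-eps, eps), by the composite trapezoid rule.
    total = 0.0 + 0.0j
    h = 2 * eps / N
    for j in range(N + 1):
        t = -eps + j * h
        phi = alpha * t if t <= 0 else beta * t
        tri = max(0.0, 1.0 - abs(t) / eps)
        w = 0.5 if j in (0, N) else 1.0
        total += w * tri * cmath.exp(1j * (n * phi - k * t))
    return total * h / (2 * math.pi)

# Fourier coefficients of e_{n alpha} 1_{(-eps,0)} and e_{n beta} 1_{(0,eps)}.
hat_left = (1 - cmath.exp(-1j * mu * eps)) / (1j * mu) / (2 * math.pi)
hat_right = (cmath.exp(1j * nu * eps) - 1) / (1j * nu) / (2 * math.pi)

# Right-hand side of formula (2).
rhs = (1 / (2 * math.pi * 1j)) * (1 / mu - 1 / nu) \
    - (1 / (1j * eps)) * (hat_left / mu - hat_right / nu)
lhs = fourier_coeff(k)
assert abs(lhs - rhs) < 1e-6
```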
For $\lambda\in\mathbb R$ let $$
Q(\lambda)=\sum_{k\in\mathbb Z : ~|k-\lambda|\geq 1}\frac{1}{(k-\lambda)^2}. $$ It is easy to verify that $$ Q(\lambda)\leq 4, \qquad \lambda\in\mathbb R. \eqno (3) $$
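Estimate (3) can be checked by brute force. The sketch below is our own verification: $Q$ is $1$-periodic in $\lambda$, so a grid in $[0,1]$ suffices, and the tail of the sum beyond $|k-\lambda|\leq K$ contributes less than $2/(K-2)$:

```python
import math

K = 20000

def Q_truncated(lam):
    # Sum of 1/(k - lam)^2 over integers k with 1 <= |k - lam| <= K.
    base = math.floor(lam)
    return sum(1.0 / (k - lam) ** 2
               for k in range(base - K, base + K + 1)
               if abs(k - lam) >= 1)

# Q is 1-periodic in lam; check a grid of points in [0, 1].
qmax = max(Q_truncated(i / 20) for i in range(21))
tail = 2.0 / (K - 2)  # crude bound on the truncated tail
assert qmax + tail < 4.0   # estimate (3)
assert qmax > 3.0          # the maximum (lam integer) is 2*zeta(2) ~ 3.29
```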
We shall show first that for $n\in\mathbb Z, ~n\neq 0,$ we have $$
\|\Delta_\varepsilon e^{in\varphi}\|_{U(\mathbb T)}\geq \frac{1}{2\pi}\log
|n|+c(\varphi), \eqno (4) $$ where $c(\varphi)$ is independent of $n$.
If $g\in U(\mathbb T)$ then the function $g(-t)$ and the function $\overline{g(t)}$ (obtained by complex conjugation) belong to $U(\mathbb T)$ and have the same norm as that of $g$. So in the proof of estimate (4) we can assume that $\alpha>0$ and consider only the following three cases: 1)
$|\beta|>\alpha$; 2) $\beta=-\alpha$; 3) $\beta=0$.
We can also assume that $n$ is positive and is so large that $n\alpha\geq 2$.
In what follows we put $N=[n\alpha]-1$, where $[x]$ stands for the integer part of a number $x$.
\emph{Case} 1). We have $$
\bigg|\sum_{|k|\leq N}\frac{1}{n\alpha-k}\bigg|=\sum_{|k|\leq N}\frac{1}{n\alpha-k}\geq \sum_{|k|\leq N}\frac{1}{N+2-k} $$ $$ =\frac{1}{2}+\frac{1}{3}+\ldots +\frac{1}{2N+2}\geq \log(N+1)\geq\log\frac{n\alpha}{2}. \eqno (5) $$
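The elementary harmonic-sum bound used in (5) can be sanity-checked directly (our own check, not part of the proof):

```python
import math

def tail_harmonic(N):
    # 1/2 + 1/3 + ... + 1/(2N+2)
    return sum(1.0 / j for j in range(2, 2 * N + 3))

# Verify 1/2 + 1/3 + ... + 1/(2N+2) >= log(N+1) for a range of N.
ok = all(tail_harmonic(N) >= math.log(N + 1) for N in range(1, 1001))
assert ok
```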
At the same time for $|k|\leq N$ we have $|n\beta-k|\geq
|n\beta|-|n\alpha|=n(|\beta|-\alpha)$, so $$
\bigg|\sum_{|k|\leq N}\frac{1}{n\beta-k}\bigg|\leq
\frac{2N+1}{n(|\beta|-\alpha)}\leq\frac{3n\alpha}{n(|\beta|-\alpha)}=
\frac{3\alpha}{|\beta|-\alpha}. \eqno (6) $$
Note now that using the Cauchy inequality and the Parseval identity we obtain (see (3)) $$
\bigg|\sum_{|k|\leq N}\frac{1}{n\alpha-k}\widehat{e_{n\alpha}
1_{(-\varepsilon, 0)}}(k)\bigg|\leq (Q(n\alpha))^{1/2}\|e_{n\alpha}
1_{(-\varepsilon, 0)}\|_{L^2(\mathbb T)}\leq 2\varepsilon^{1/2}, \eqno (7) $$ and similarly $$
\bigg|\sum_{|k|\leq N}\frac{1}{n\beta-k}\widehat{e_{n\beta} 1_{(0,
\varepsilon)}}(k)\bigg|\leq (Q(n\beta))^{1/2}\|e_{n\beta} 1_{(0,
\varepsilon)}\|_{L^2(\mathbb T)}\leq 2\varepsilon^{1/2}. \eqno (8) $$
Together relations (5)--(8) imply (see (2)) $$
|S_N(\Delta_\varepsilon e^{in\varphi})(0)|=\bigg|\sum_{|k|\leq N}
\widehat{\Delta_\varepsilon e^{in\varphi}}(k)\bigg|\geq \frac{1}{2\pi}\bigg(\log\frac{n\alpha}{2}-\frac{3\alpha}
{|\beta|-\alpha}\bigg)-4\varepsilon^{-1/2}, $$ and we obtain (4).
\emph{Case} 2). We have $$
\bigg|\sum_{|k|\leq N}\bigg(\frac{1}{n\alpha-k}-\frac{1}{n\beta-k}\bigg)\bigg|=
\bigg|\sum_{|k|\leq N}\bigg(\frac{1}{n\alpha-k}+\frac{1}{n\alpha+k}\bigg)\bigg| $$ $$
=\bigg|2\sum_{|k|\leq N}\frac{1}{n\alpha-k}\bigg|\geq \log\frac{n\alpha}{2}. $$ Together with estimates (7), (8), which are true in Case 2), this estimate yields $$
|S_N(\Delta_\varepsilon e^{in\varphi})(0)|\geq \frac{1}{2\pi}\log\frac{n\alpha}{2}-4\varepsilon^{-1/2}, $$ and we obtain (4) again.
\emph{Case} 3). We have $$
\bigg|\sum_{1\leq |k|\leq N}\bigg(\frac{1}{n\alpha-k}-\frac{1}{n\beta-k}\bigg)\bigg|=
\bigg|\sum_{1\leq |k|\leq N}\bigg(\frac{1}{n\alpha-k}-\frac{1}{-k}\bigg)\bigg| $$ $$
=\bigg|\sum_{1\leq |k|\leq N}\frac{1}{n\alpha-k}\bigg|\geq \log\frac{n\alpha}{2}-1. $$
Note that estimates (7), (8) are true in Case 3) if we replace the range $|k|\leq N$ in the sums by $1\leq |k|\leq N$. Thus we see that $$
\bigg|\sum_{1\leq |k|\leq N} \widehat{\Delta_\varepsilon e^{in\varphi}}(k)\bigg|\geq \frac{1}{2\pi}\bigg(\log\frac{n\alpha}{2}-1\bigg)-4\varepsilon^{-1/2}, $$
and since $|\widehat{\Delta_\varepsilon e^{in\varphi}}(0)|\leq 1$, we obtain $$
|S_N(\Delta_\varepsilon e^{in\varphi})(0)|\geq \frac{1}{2\pi}\bigg(\log\frac{n\alpha}{2}-1\bigg)-4\varepsilon^{-1/2}-1. $$ Estimate (4) is proved.
Note now that
$\widehat{\Delta_\varepsilon}(k)=O(1/|k|^2)$ as
$|k|\rightarrow\infty$, so $$
\|\Delta_\varepsilon\|_*=\sum_{k\in\mathbb Z}|\widehat{\Delta_\varepsilon}(k)|\log(|k|+2)=M(\varepsilon)<\infty, $$ and from (4), using Lemma, we obtain $$
c(\varphi)+\frac{1}{2\pi}\log |n|\leq \|\Delta_\varepsilon e^{in\varphi}\|_{U(\mathbb T)}\leq c M(\varepsilon) \|e^{in\varphi}\|_{U(\mathbb T)}. $$ The theorem is proved.
\quad
\emph{Remark.} It would be interesting to describe the pointwise multipliers of the space $U(\mathbb T)$, i.e., the continuous functions $m$ on $\mathbb T$ such that $mf\in U(\mathbb T)$ whenever $f\in U(\mathbb T)$. By the lemma obtained above, the condition $$
\sum_{k\in\mathbb Z}|\widehat{m}(k)|\log(|k|+2)<\infty $$ is sufficient for a function $m$ to be a multiplier. The author does not know whether this condition is necessary. The indicated condition cannot be replaced by the weaker condition $m\in A(\mathbb T)$ (see [2, Ch. I, \S ~6]).
\begin{center} \textsc{References} \end{center}
\begin{enumerate}
\item A. Beurling, H. Helson, ``Fourier-Stieltjes transforms
with bounded powers'', \emph{Math. Scand.,} \textbf{1}
(1953), 120-126.
\item J.-P. Kahane, \emph{S\'eries de Fourier absolument
convergentes}, Springer-Verlag, Berlin--Heidelberg--New
York, 1970.
\item J.-P. Kahane, ``Quatre le\c cons sur les
hom\'eomorphismes du cercle et les s\'eries de Fourier'',
in: \emph{Topics in Modern Harmonic Analysis,} Vol. II,
Ist. Naz. Alta Mat. Francesco Severi, Roma, 1983, 955-990.
\item A. M. Olevskii, ``Modifications of functions and Fourier
series'', \emph{Russian Math. Surveys}, \textbf{40}:3
(1985), 181-224.
\item V. V. Lebedev, ``Quantitative estimates in Beurling
--Helson type theorems'', \emph{Sbornik: Mathematics},
\textbf{201}:12 (2010), 1811-1836.
\end{enumerate}
\quad
\noindent Dept. of Mathematical Analysis\\ Moscow State Institute of Electronics\\ and Mathematics (Technical University)\\ E-mail address: \emph {[email protected]}
\end{document} |
\begin{document}
\title[Extender sets and measures of maximal entropy for subshifts] {Extender sets and measures of maximal entropy for subshifts} \date{} \author{Felipe García-Ramos} \address{Felipe García-Ramos\\ CONACyT \& Physics Institute of the Universidad Aut\'{o}noma de San Luis Potos\'{\i}\\ Av. Manuel Nava \#6, Zona Universitaria, C.P. 78290 \\ San Luis Potosí, S.L.P.\\ Mexico} \email{[email protected]} \author{Ronnie Pavlov} \address{Ronnie Pavlov\\ Department of Mathematics\\ University of Denver \\ 2390 S. York St. \\ Denver, CO 80208 \\ USA} \email{[email protected]} \thanks{The second author gratefully acknowledges the support of NSF grant DMS-1500685.} \keywords{Symbolic dynamics, measure of maximal entropy, extender set, synchronized subshift} \subjclass[2010]{37B10, 37B40, 37D35}
\begin{abstract} For countable amenable finitely generated torsion-free $\mathbb{G}$, we prove inequalities relating $\mu(v)$ and $\mu(w)$ for any measure of maximal entropy $\mu$ on a $\mathbb{G}$-subshift and any words $v, w$ where the extender set of $v$ is contained in the extender set of $w$. Our main results are two generalizations of the main result of \cite{M}; the first applies to all such $v,w$ when $\mathbb{G} = \mathbb{Z}$, and the second to $v,w$ with the same shape for any $\mathbb{G}$. As a consequence of our results we give new and simpler proofs of several facts about synchronizing subshifts (including the main result from \cite{Th}) and we answer a question of Climenhaga. \end{abstract}
\maketitle
\section{Introduction}
\label{intro}
In this paper, we prove several results about measures of maximal entropy on symbolic dynamical systems (subshifts). Measures of maximal entropy are natural measures, defined via the classical Kolmogorov-Sinai entropy, which also connect to problems in statistical physics, such as existence of phase transitions.
Our dynamical systems are subshifts, which consist of a compact $X \subseteq \mathcal{A}^{\mathbb{G}}$ (for some finite alphabet $\mathcal{A}$ and a countable amenable finitely generated torsion-free group $\mathbb{G}$) and dynamics given by the $\mathbb{G}$-action of translation/shift maps $\{\sigma_{g}\}_{g \in\mathbb{G}}$ (under which $X$ must be invariant). Subshifts are useful both as discrete models for the behavior of dynamical systems on more general spaces, and as an interesting class of dynamical systems in their own right, with applications in physics and information theory.
Our main results show that when a word $v$ (i.e. an element of $\mathcal{A}^F$ for some finite $F \subset \mathbb{G}$) is replaceable by another word $w$ in $X$ (meaning that $\forall x \in X$, when any occurrence of $v$ is replaced by $w$, the resulting point is still in $X$), there is a simple inequality relating $\mu(v)$ and $\mu(w)$ for every measure of maximal entropy $\mu$. (As usual, the measure of a finite word is understood to mean the measure of its cylinder set; see Section~\ref{defs} for details.) A formal statement of our hypothesis uses extender sets (\cite{KM}, \cite{OP}); the condition ``$v$ is replaceable by $w$'' is equivalent to the containment $E_{X}(v) \subseteq E_{X}(w)$, where $E_X(u)$ denotes the extender set of a word $u$.
For $\mathbb{Z}$-subshifts specifically, it is possible to talk about replacing $v$ by $w$ (and thereby the containment $E_X(v) \subseteq E_X(w)$)
even if their lengths $|v|$ and $|w|$ are different, and our first results treat this case.
\begin{Htheorem}
Let $X$ be a $\mathbb{Z}$-subshift with positive topological entropy, $\mu$ a measure of maximal entropy of $X$
, and $w,v\in L(X)$. If $E_{X}(v)\subseteq E_{X}(w)$, then
\begin{equation*}
\mu(v)\leq\mu(w)e^{h_{top}(X)(|w|-|v|)}.
\end{equation*} \end{Htheorem}
\begin{Mcorollary}
Let $X$ be a $\mathbb{Z}$-subshift with positive topological entropy and $w,v\in L(X)$. If $E_{X}(v)=E_{X}(w)$, then for every measure of maximal entropy $\mu$ of $X$,
\begin{equation*}
\mu(v)=\mu(w)e^{h_{top}(X)(|w|-|v|)}.
\end{equation*} \end{Mcorollary}
In the class of synchronized subshifts (see Section 3.1 for the definition), $E_{X}(v)=E_{X}(w)$ holds for many pairs of words of different lengths, in which case Corollary~\ref{maincor} gives significant restrictions on the measures of maximal entropy. In Section~\ref{apps}, we use Corollary~\ref{maincor} to obtain results about synchronized subshifts. These applications include a new proof of uniqueness of measures of maximal entropy under the hypothesis of entropy minimality (see Theorem~\ref{synchunique}), which was previously shown in \cite{Th} via the much more difficult machinery of countable-state Markov shifts, and the following result which verifies a conjecture of Climenhaga (\cite{Cl}). (Here, $X_S$ represents a so-called $S$-gap subshift; see Definition~\ref{Sgap}.)
\begin{Lcorollary}Let $S\subseteq\mathbb{N}$ satisfy $\gcd(S+1)=1$, let
$\mu$ be the unique MME on $X_{S}$, and let $\lambda = e^{h_{top}(X_{S})}$. Then
$\displaystyle\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X_{S})\right\vert
}{\lambda^n}$ exists and is equal to
$\displaystyle \frac{\mu(1)\lambda}{(\lambda-1)^{2}}$ when $S$ is infinite and
$\displaystyle \frac{\mu(1)\lambda (1 - \lambda^{-(\max S) - 1})^2}{(\lambda-1)^{2}}$ when $S$ is finite. \end{Lcorollary}
In fact, we prove that this limit exists for all synchronized subshifts where the unique measure of maximal entropy is mixing.
Our second main result applies to countable amenable finitely generated torsion-free $\mathbb{G}$, but only to $v,w$ which have the same shape. This is unavoidable in a sense, since in general, for $F \neq F'$, there will be no natural way to compare the configurations with shapes $F^c$ and $F'^c$ in extender sets of words $v \in A^F$ and $w \in A^{F'}$ respectively.
\begin{Gtheorem}
Let $X$ be a $\mathbb{G}$-subshift, $\mu$ a measure of maximal entropy of $X$
, $F \Subset \mathbb{G}$, and $w,v\in\mathcal{A}^{F}$. If $E(v)\subseteq E(w)$
then
\begin{equation*}
\mu(v)\leq\mu(w).
\end{equation*} \end{Gtheorem}
As a direct consequence of this theorem we recover the following result due to Meyerovitch. \begin{theorem}[Theorem 3.1, \textrm{\protect\cite{M}}]
\label{tomthm} If $X$ is a $\mathbb{Z}^{d}$-subshift and $v,w\in\mathcal{A}
^{F}$ satisfy $E_{X}(v)=E_{X}(w)$, then for every measure of maximal entropy
$\mu$ on $X$, $\mu(v)=\mu(w)$. \end{theorem}
\begin{remark} In fact the theorem from \cite{M} is more general; it treats equilibrium states for a class of potentials $\phi$ with a property called $d$-summable variation, and the statement here for measures of maximal entropy corresponds to the $\phi=0$ case only. \end{remark}
Due to our weaker hypothesis, $E_X(v) \subseteq E_X(w)$, our proof techniques are different from those used in \cite{M}. In particular, the case of different length $v,w$ treated in Theorem~\ref{hardcase} requires some subtle arguments about the ways in which $v,w$ can overlap themselves and each other.
Much as Corollary~\ref{maincor} was applicable to the class of synchronized subshifts, Theorem~\ref{Gtheorem} has new natural applications to the class of hereditary subshifts (introduced in \cite{KL1}), where there exist many pairs of words satisfying $E_{X}(v)\subsetneq E_{X}(w)$; see Section~\ref{hered} for details.
Section~\ref{defs} contains definitions and results needed throughout our proofs, Section~\ref{zsec} contains our results for $\mathbb{Z}$-subshifts (including various applications in Section~\ref{apps}), and Section~\ref {gsec} contains our results for $\mathbb{G}$-subshifts.
\section*{Acknowledgments} We would like to thank the anonymous referee for their useful comments and suggestions.
\section{General definitions and preliminaries}
\label{defs}
We will use $\mathbb{G}$ to refer to a countable discrete group. We write $F \Subset \mathbb{G}$ to mean that $F$ is a finite subset of $\mathbb{G}$, and unless otherwise stated, $F$ always refers to such an object.
A sequence $\{F_{n}\}_{n\in\mathbb{N}}$ with $F_{n} \Subset \mathbb{G}$ is said to be \textbf{Følner} if for every $K\Subset\mathbb{G}$, we have that $
|(K \cdot F_{n}) \Delta F_{n}|/|F_{n}|\rightarrow 0$. We say that $\mathbb{G}$ is \textbf{amenable} if it admits a Følner sequence. In particular, $\mathbb{Z}$ is an amenable group, since any sequence $\{F_{n}\} = [a_n, b_n] \cap \mathbb{Z}$ with $b_n - a_n \rightarrow \infty$ is F\o lner.
Let $\mathcal{A}$ be any finite set (usually known as the alphabet). We call $\mathcal{A}^{\mathbb{G}}$ the \textbf{full $\mathcal{A}$-shift on $\mathbb{G }$}, and endow it with the product topology (using the discrete topology on $ \mathcal{A}$). For $x\in\mathcal{A}^{\mathbb{G}},$ we use $x_{i}$ to represent the $i$th coordinate of $x$, and $x_{F}$ to represent the restriction of $x$ to any $F\Subset\mathbb{G}$.
For any $g \in\mathbb{G}$, we use $\sigma_{g}$ to denote the left translation by $g$ on $\mathcal{A}^{\mathbb{G}}$, also called the \textbf{ shift by $g$}; note that each $\sigma_{g}$ is an automorphism. We say $ X \subseteq \mathcal{A}^{\mathbb{G}}$ is a $\mathbb{G}$-\textbf{subshift} if it is closed and $\sigma_{g}(X) = X$ for all $g\in\mathbb{G}$; when $\mathbb{G=Z }$ we simply call it a subshift.
For $F \Subset\mathbb{G}$, we call an element of $\mathcal{A}^{F}$ a \textbf{ word with shape $F$}.
For $w$ a word with shape $F$ and $x$ either a point of $\mathcal{A}^{ \mathbb{G}}$ or a word with shape $F^{\prime}\supset F$, we say that $w$ is a \textbf{subword of $x$} if $x_{g + F} = w$ for some $g \in\mathbb{G}$.
For any $F$, the \textbf{$F$-language of $X$} is the set $L_{F}(X):=\{x_{F}\ :\ x\in X\}\subseteq\mathcal{A}^{F}$ of words with shape $F$ that appear as subwords of points of $X.$ When $\mathbb{G} = \mathbb{Z}$, we use $L_{n}(X)$ to refer to $L_{\{0, \ldots, n-1\}}(X)$ for $n \in\mathbb{N}$. We define \begin{align*} L(X) & :=\bigcup_{F\Subset\mathbb{G}}L_{F}(X)\text{ if }\mathbb{G\neq Z} \text{ and} \\ L(X) & :=\bigcup_{n \in\mathbb{N}}L_{n}(X)\text{ if }\mathbb{G=Z}\text{.} \end{align*}
For any $\mathbb{G}$-subshift $X$ and $w\in L_{F}(X)$, we define the \textbf{ cylinder set of $w$} as \begin{equation*} \left[ w\right] :=\left\{ x\in X:x_{F}=w\right\} \text{.} \end{equation*}
Whenever we refer to an interval in $\mathbb{Z}$, we mean the intersection of the corresponding real interval with $\mathbb{Z}$. So, for instance, if $x\in\mathcal{A}^{
\mathbb{Z}}$ and $i < j$, $x_{\left[ i,j\right] }$ represents the subword of
$x$ that starts in position $i$ and ends in position $j$. Unless otherwise stated, a word $w\in\mathcal{A}^{n}$ is taken to have shape $[0, n)$. Every word $w \in L(\mathcal{A}^{\mathbb{Z}})$ is in some $\mathcal{A}^{n}$ by definition; we refer to this $n$ as the \textbf{length} of $w$ and denote it by $|w|$.
For any amenable $\mathbb{G}$ with Følner sequence $\left\{ F_{n}\right\} _{n\in\mathbb{N}}$ and any $\mathbb{G}$-subshift $X$, we define the \textbf{ topological entropy of $X$} as \begin{equation*} h_{top}(X)=\lim_{n\rightarrow\infty}\frac{1}{\left\vert F_{n} \right\vert }\log\left\vert L_{F_{n}}(X)\right\vert \end{equation*} (this definition is in fact independent of the Følner sequence used.)
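As a concrete illustration of this definition (our own sketch, with $\mathbb{G}=\mathbb{Z}$ and $F_n = [0,n)$): for the golden mean shift, the subshift of binary sequences with no two adjacent $1$s, the word counts $|L_n(X)|$ obey the Fibonacci recursion, so $h_{top}(X)=\log\frac{1+\sqrt{5}}{2}$, and the normalized log-counts converge to it:

```python
import math

def count_no_11(n):
    # |L_n| for the golden mean shift: binary words of length n with no "11".
    # Satisfies |L_{m+1}| = |L_m| + |L_{m-1}| (Fibonacci recursion).
    a, b = 1, 2  # word counts for lengths 0 and 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

n = 2000
h_approx = math.log(count_no_11(n)) / n       # (1/n) log |L_n(X)|
h_top = math.log((1 + math.sqrt(5)) / 2)      # log of the golden ratio
assert abs(h_approx - h_top) < 1e-2
```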
For any $w\in L(X)$, we define the \textbf{extender set of $w$} as \begin{equation*}
E_{X}(w):=\{x|_{F^{c}}\ :\ x\in\lbrack w]\}. \end{equation*}
\begin{example} For any $\mathbb{G}$, if $X$ is the full shift on two symbols, $\left\{ 0,1\right\}^\mathbb{G}$, then for any $F$, all words in $\{0,1\}^F$ have the same extender set, namely $\{0,1\}^{F^c}$. \end{example}
\begin{example} Take $\mathbb{G} = \mathbb{Z}^2$ and $X$ the hard-square shift on $\{0,1\}$ in which adjacent $1$s are forbidden horizontally and vertically. Then if we take $F = \{(0,0)\}$, we see that $E(0)$ is the set of all configurations on $\mathbb{Z}^2 \setminus F$ which are legal, i.e. which contain no adjacent $1$s. Similarly, $E(1)$ is the set of all legal configurations on $\mathbb{Z}^2 \setminus F$ which also contain $0$s at $(0, \pm 1)$ and $(\pm 1, 0)$. In particular, we note that here $E(1) \subsetneq E(0)$. \end{example}
In the specific case $\mathbb{G}=\mathbb{Z}$ and $w\in L_{n}(X)$, we may identify $E_{X}(w)$ with the set of sequences which are concatenations of the left side and the right side, i.e. $\{(x_{(-\infty,0)}x_{[n,\infty)})\ :\ x\in\lbrack w]\}$, and in this way can relate extender sets even for $v,w$ with different lengths. All extender sets in $\mathbb{Z}$ will be interpreted in this way.
\begin{example} If $X$ is the golden mean $\mathbb{Z-}$subshift on $\{0,1\}$ where adjacent $1$s are prohibited, then $E(000)$ is the set of all legal configurations on $\mathbb{Z} \setminus \{0, 1, 2\}$, which is identified with the set of all $\{0,1\}$ sequences $x$ which have no adjacent $1$s, with the exception that $x_{0} = x_{1} = 1$ is allowed. This is because $000$ may be preceded by a one-sided sequence ending with $1$ and followed by a one-sided sequence beginning with $1$, and after the identification with $\{0,1\}^{\mathbb{Z}}$, those $1$s could become adjacent.
Similarly, $E(01)$ is identified with the set of all $x$ on $\mathbb{Z}$ which have no adjacent $1$s and satisfy $x_{0} = 0$, and $E(1)$ is identified with the set of all $x$ on $\mathbb{Z}$ which have no adjacent $1$s and satisfy $x_{0} = x_{1} = 0$.
Therefore, even though they have different lengths, we can say here that $E(1) \subsetneq E(01) \subsetneq E(000)=E(0)$. \end{example}
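These containments can be sanity-checked on finite windows (our own sketch, not a proof: extender sets are infinite, so we only test all contexts of $8$ symbols on each side, where ``legal'' means containing no adjacent $1$s):

```python
from itertools import product

def legal(word):
    # A word is legal in the golden mean shift iff it has no adjacent 1s.
    return "11" not in word

contexts = ["".join(bits) for bits in product("01", repeat=8)]

# Every context legal around "1" is legal around "01", and every context
# legal around "01" is legal around "000" (finite-window analogue of
# E(1) contained in E(01) contained in E(000)).
ok = all(
    (not legal(l + "1" + r) or legal(l + "01" + r))
    and (not legal(l + "01" + r) or legal(l + "000" + r))
    for l in contexts for r in contexts
)
assert ok
```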
The next few definitions concern measures. Every measure in this work is assumed to be a Borel probability measure $\mu$ on a $\mathbb{G-}$subshift $X $ which is invariant under all shifts $\sigma_{g}$. By a generalization of the Bogolyubov-Krylov theorem, every $\mathbb{G-}$subshift $X$ has at least one such measure. For any such $\mu$ and any $w\in L(X)$, we will use $\mu(w) $ to denote $\mu(\left[ w\right] ).$
For any Følner sequence $\{F_{n}\}$, we define the \textbf{entropy} of any such $\mu$ as \begin{equation*} h_{\mu}(X):=\lim_{n\rightarrow\infty}\frac{1}{\left\vert F_{n}\right\vert } \sum\nolimits_{w\in\mathcal{A}^{F_{n}}}-\mu(w)\log\mu(w). \end{equation*}
Again, this limit does not depend on the choice of Følner sequence (see \cite {KL2} for proofs of this property and of other basic properties of entropy of amenable group actions).
It is always the case that $h_{\mu}(X)\leq h_{top}(X)$, and so a measure $\mu$ is called a \textbf{measure of maximal entropy} (or \textbf{MME}) if $h_{\mu}(X)=h_{top}(X).$ For amenable $\mathbb{G}$, every $\mathbb{G}$-subshift has at least one measure of maximal entropy \cite{Mi}.
We briefly summarize some classical results from ergodic theory. A measure $ \mu$ is \textbf{ergodic} if every set which is invariant under all $\sigma_{g}$ has measure $0$ or $1$. In fact, every measure $\mu$ can be written as a generalized convex combination (really an integral) of ergodic measures; this is known as the \textbf{ergodic decomposition} (e.g. see Section 8.7 of \cite{Gl}). The entropy map $ \mu\mapsto h_{\mu}$ is linear and so the ergodic decomposition extends to measures of maximal entropy as well; every MME can be written as a generalized convex combination of ergodic MMEs.
\begin{theorem}[Pointwise ergodic theorem \protect\cite{Li}] \label{ergthm} For any ergodic measure $\mu$ on a $\mathbb{G}-$subshift $X$, there exists a Følner sequence $\left\{ F_{n}\right\} $ such that for every $ f\in L^{1}(\mu)$, \begin{equation*} \mu\left( \left\{ x:\lim_{n\rightarrow\infty}\frac{1}{\left\vert F_{n}\right\vert }\sum_{g\in F_{n}}f(\sigma_{g}x)=\int f\ d\mu\right\} \right) =1. \end{equation*} \end{theorem}
\begin{theorem}[Shannon-McMillan-Breiman theorem for amenable groups \protect\cite{We}] \label{SMBthm} For any ergodic measure $\mu$ on a $\mathbb{G}$-subshift $X$, there exists a Følner sequence $\left\{ F_{n}\right\} $ such that
\begin{equation*} \mu\left( \left\{ x:\lim_{n\rightarrow\infty}-\frac{1}{\left\vert F_{n}\right\vert } \log \mu(x_{F_{n}})=h_{\mu}(X)\right\} \right) =1. \end{equation*} \end{theorem}
The classical pointwise ergodic and Shannon-McMillan-Breiman theorems were originally stated for $\mathbb{G=Z}$ and the Følner sequence $[0,n]$. We only need Theorem~\ref{SMBthm} for the following corollary (when $\mathbb{G=Z}$ this is essentially what is known as Katok's entropy formula; see \cite{Ka}).
\begin{corollary} \label{SMBcor} Let $\mu$ be an ergodic measure of maximal entropy on a $\mathbb{G}$-subshift $X$. There exists a Følner sequence $\{F_{n}\}$ such that for every sequence of sets $S_{n} \subseteq L_{F_{n}}(X)$ with $\mu(S_{n}) \rightarrow 1$, \begin{equation*}
\lim_{n \rightarrow\infty} \frac{1}{|F_{n}|} \log|S_{n}| = h_{top}(X). \end{equation*} \end{corollary}
\begin{proof} Take $X$, $\mu$, and $S_{n}$ as in the statement, and let $\{F_{n}\}$ be a Følner sequence satisfying the Shannon-McMillan-Breiman theorem. Fix any $\epsilon> 0$. By the definition of topological entropy, \begin{equation*}
\limsup_{n \rightarrow\infty} \frac{1}{|F_{n}|} \log|S_{n}| \leq\lim_{n
\rightarrow\infty} \frac{1}{|F_{n}|} \log|L_{F_{n}}(X)| = h_{top}(X). \end{equation*}
For every $n$, define \begin{equation*}
T_{n} = \{w \in\mathcal{A}^{F_{n}} \ : \ \mu(w) < e^{-|F_{n}|(h_{top}(X) - \epsilon)} \}. \end{equation*}
By the Shannon-McMillan-Breiman theorem, $\mu\left( \bigcup_{N} \bigcap_{n = N}^{\infty} T_{n} \right) = 1$, and so $\mu(T_{n}) \rightarrow1$. Therefore, $\mu(S_{n} \cap T_{n}) \rightarrow1$, and by definition of $T_{n}$ , \begin{equation*}
|S_{n} \cap T_{n}| \geq\mu(S_{n} \cap T_{n}) e^{|F_{n}| (h_{top}(X) - \epsilon)}. \end{equation*}
Therefore, for sufficiently large $n$, $|S_{n}| \geq|S_{n} \cap T_{n}|
\geq0.5 e^{|F_{n}| (h_{top}(X) - \epsilon)}$. Since $\epsilon> 0$ was arbitrary, the proof is complete. \end{proof}
Finally, several of our main arguments rely on the following elementary combinatorial lemma, whose proof we leave to the reader.
\begin{lemma}
\label{counting} If $S$ is a finite set, $\{A_{s}\}_{s \in S}$ is a collection of finite sets, $m = \min_{s \in S}|A_{s}|$, and $M = \max_{a \in\bigcup A_{s}} |\{s
\ | \ a \in A_{s}\}|$, then
\left| \bigcup_{s \in S} A_{s} \right| \geq|S| \frac{m}{M}. \end{equation*} \end{lemma}
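The lemma follows from double-counting: $\sum_{s}|A_s| \geq |S|\,m$, and each element of the union is counted at most $M$ times in that sum. A randomized check of the inequality (our own sketch; the instance sizes are arbitrary):

```python
import random

random.seed(0)

def check_once():
    # Random instance: sets A_s of size 1..8 drawn from a universe of 20.
    S = range(random.randint(1, 10))
    A = [set(random.sample(range(20), random.randint(1, 8))) for _ in S]
    union = set().union(*A)
    m = min(len(a) for a in A)                                # min |A_s|
    M = max(sum(1 for a in A if x in a) for x in union)       # max multiplicity
    return len(union) >= len(S) * m / M

ok = all(check_once() for _ in range(500))
assert ok
```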
\section{Results on $\mathbb{Z}$-Subshifts}
\label{zsec}
In this section we present the results for $\mathbb{G}=\mathbb{Z}$, beginning with some standard definitions about $\mathbb{Z}$-subshifts.
For words $v \in\mathcal{A}^{m}$ and $w \in\mathcal{A}^{n}$ with $m \leq n$, we say that $v$ is a \textbf{prefix} of $w$ if $w_{[0,m)} = v$, and $v$ is a \textbf{suffix} of $w$ if $w_{[n-m, n)} = v$.
\subsection{Main result}
We now need some technical definitions about replacing one or more occurrences of a word $v$ by a word $w$ inside a larger word $u$, which are key to most of our arguments in this section. First, for any $v\in L( \mathcal{A}^{\mathbb{Z}}),$ we define the function $O_{v} :L(\mathcal{A}^{ \mathbb{Z}}) \rightarrow \mathcal{P}(\mathbb{N})$ which sends any word $u$ to the set of locations where $v$ occurs as a subword in $u$, i.e. \begin{equation*} O_{v}(u):=\left\{ i\in\mathbb{N}:\sigma_{i}(u)\in\left[ v\right] \right\} . \end{equation*} For any $w \in L(\mathcal{A}^{\mathbb{Z}})$, we may then define the function $R_{u}^{v\rightarrow w}: O_{v}(u) \rightarrow L(\mathcal{A}^{\mathbb{Z}})$ which replaces the occurrence of $v$ within $u$ at some position in $O_{v}(u) $ by the word $w$. Formally, $R_{u}^{v\rightarrow w}(i)$ is the word $
u^{\prime }$ of length $|u| - |v| + |w|$ defined by $u^{\prime}_{[0,i)} =
u_{[0,i)}$, $u^{\prime}_{[i,i+|w|)} = w$, and $u^{
\prime}_{[i+|w|,|u|-|v|+|w|)} = u_{[i+|v|,|u|)}$.
Our arguments in fact require replacing many occurrences of $v$ by $ w$ within a word $u$, at which point some technical obstructions appear. For instance, if several occurrences of $v$ overlap in $u$, then replacing one by $w$ may destroy the other. The following defines conditions on $v$ and $w$ which obviate these and other problems which would otherwise appear in our counting arguments.
\begin{definition} \label{respect} For $v,w \in L(\mathcal{A}^{\mathbb{Z}})$, we say that $v$ \textbf{respects the transition to} $w$ if, for any $u\in L(\mathcal{A}^{ \mathbb{Z}})$ and any $i\in O_{v}(u)$, \begin{align*}
\mathrm{(i) } \ & j+|w|-|v| \in O_{v}(R_{u}^{v\rightarrow w}(i))\text{ for any }j\in O_{v}(u)\text{ with }i < j, \\ \mathrm{(ii) } \ & j \in O_{v}(R_{u}^{v\rightarrow w}(i))\text{ for any } j\in O_{v}(u)\text{ with }i>j, \\ \mathrm{(iii) } \ & j \in O_{w}(R_{u}^{v\rightarrow w}(i))\text{ for any } j\in O_{w}(u)\text{ with }i>j, \\
\mathrm{(iv) } \ & j + |w| - |v| > i \text{ for any }j\in O_{v}(u)\text{ with }i < j. \end{align*} \end{definition}
Informally, $v$ respects the transition to $w$ if, whenever a single occurrence of $v$ is replaced by $w$ in a word $u$, all other occurrences of $v$ in $u$ are unchanged, all occurrences of $w$ in $u$ to the left of the replacement are unchanged, and all occurrences of $v$ in $u$ which were to the right of the replaced occurrence remain on that side of the replaced occurrence.
When $v$ respects the transition to $w$, we are able to meaningfully define replacement of a set of occurrences of $v$ by $w$, even when those occurrences of $v$ overlap, as long as we move from left to right. For any $ u,v,w \in L(\mathcal{A}^{\mathbb{Z}})$, we define a function $R_{u}^{v\rightarrow w}: \mathcal{P}(O_{v}(u)) \rightarrow L(\mathcal{A}^{\mathbb{Z}})$ as follows. For any $S:=\left\{ s_{1} ,...,s_{n}\right\} \subseteq O_{v}(u)$ (where we always assume $s_1 < s_2 < \ldots < s_n$), we define sequential replacements $ \left\{ u^{m}\right\} _{m=1}^{n+1}$ by\
1) $u=u^{1}.$
2) $u^{m+1}=R_{u^{m}}^{v\rightarrow w}(s_{m}+(m-1)(|w|-|v|)).$
Finally, we define $R_{u}^{v\rightarrow w}(S)$ to be $u^{n+1}$.
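The occurrence set $O_v(u)$ and the sequential replacement $R_{u}^{v\rightarrow w}(S)$ can be sketched directly. The words $u$, $v$, $w$ below are our own illustrative choices from the golden mean language; we do not claim here that $v$ respects the transition to $w$, only that the maps behave as defined:

```python
def occurrences(v, u):
    # O_v(u): positions where v occurs as a subword of u.
    return [i for i in range(len(u) - len(v) + 1) if u[i:i + len(v)] == v]

def replace_one(u, v, w, i):
    # R_u^{v -> w}(i): replace the occurrence of v at position i by w.
    assert u[i:i + len(v)] == v
    return u[:i] + w + u[i + len(v):]

def replace_set(u, v, w, S):
    # R_u^{v -> w}(S): left-to-right replacements; the m-th replacement
    # happens at s_m shifted by the accumulated length change (m-1)(|w|-|v|).
    for m, s in enumerate(sorted(S)):
        u = replace_one(u, v, w, s + m * (len(w) - len(v)))
    return u

u, v, w = "0100101", "01", "001"
S = [0, 3]
result = replace_set(u, v, w, S)
assert result == "001000101"
assert len(result) == len(u) + len(S) * (len(w) - len(v))
# The replaced copies of w persist at the shifted positions
# s_m + (m-1)(|w| - |v|), here 0 and 4.
assert all(p in occurrences(w, result) for p in (0, 4))
```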
We first need some simple facts about $R_{u}^{v\rightarrow w}$ which are consequences of Definition~\ref{respect}.
\begin{lemma} \label{wsurvive} For any $u,v,w \in L(\mathcal{A}^{\mathbb{Z}})$ where $v$ respects the transition to $w$ and any $S=\left\{ s_{1},...,s_{n}\right\} \subseteq O_{v}(u)$, all replacements of $v$ by $w$ persist throughout, i.e. $\{s_{1},s_{2}
+(|w|-|v|),s_{3}+2(|w|-|v|),\ldots,s_{n}+(n-1)(|w|-|v|)\}\subseteq O_{w} (R_{u}^{v\rightarrow w}(S))$. \end{lemma}
\begin{proof}
Choose any $v,w,u,S$ as in the lemma, and any $s_{i} \in S$. Using the terminology above, clearly $s_{i} + (i-1)(|w| - |v|) \in O_{w}(u^{(i+1)})$. By property (iv) of a respected transition, $s_{1} < s_{2} + |w| - |v| <
\ldots< s_{n} + (n-1)(|w| - |v|)$. Then, since $s_{i} + (i-1)(|w| - |v|) <
s_{j} + (j-1)(|w| - |v|)$ for $j > i$, by property (iii) of respected transition, $s_{i} + (i-1)(|w| - |v|) \in O_{w}(u^{(j+1)})$ for all $j > i$, and so $s_{i} + (i-1)(|w| - |v|) \in O_{w}(R_{u}^{v \rightarrow w}(S))$. Since $i$ was arbitrary, this completes the proof. \end{proof}
\begin{lemma} \label{vsurvive} For any $u,v,w \in L(\mathcal{A}^{\mathbb{Z}})$ where $v$ respects the transition to $w$ and any $S=\left\{ s_{1},...,s_{n}\right\} \subseteq O_{v}(u)$, any occurrence of $v$ not explicitly replaced in the construction of $ R_{u}^{v\rightarrow w}$ also persists, i.e. if $m\in O_{v}(u)\setminus S$
and $s_{i}<m<s_{i+1}$, then $m+i(|w|-|v|)\in O_{v}(R_{u}^{v\rightarrow w}(S)) $. \end{lemma}
\begin{proof}
Choose any $v,w,u,S$ as in the lemma, and any $m \in O_{v}(u) \cap(s_{i}, s_{i+1})$ for some $i$. Using property (i) of a respected transition, a simple induction implies that $m + j(|w| - |v|) \in O_{v}(u^{(j+1)})$ for all $j \leq i$. By property (iv) of a respected transition, $m + i(|w| -
|v|) < s_{i+1} + i(|w| - |v|) < \ldots< s_{n} + (n-1)(|w| - |v|)$. Therefore, using property (ii) of a respected transition allows a simple induction which implies that $m + i(|w| - |v|) \in O_{v}(u^{(j+1)})$ for all
$j > i$, and so $m + i(|w| - |v|) \in O_{v}(R_{u}^{v \rightarrow w}(S))$. \end{proof}
We may now prove injectivity of $R_{u}^{v\rightarrow w}$ under some additional hypotheses, which is key for our main proofs.
\begin{lemma} \label{injective} Let $v,w\in L(\mathcal{A}^{\mathbb{Z}})$ such that $v$ respects the transition to $w$, $v$ is not a suffix of $w$, and $w$ is not a prefix of $v$. For any $ u\in L(\mathcal{A}^{\mathbb{Z}})$ and $m$, $R_{u}^{v\rightarrow w}$ is injective on the set of $m$ -element subsets of $O_{v}(u)$. \end{lemma}
\begin{proof} Assume that $v,w,u$ are as in the lemma, and choose $S=\left\{ s_{1} ,...,s_{m}\right\} \neq S^{\prime}=\left\{ s_{1}^{\prime},...,s_{m}^{\prime
}\right\} \subseteq O_{v}(u)$ with $|S|=|S^{\prime}|=m$.
We first treat the case where $|v| \geq|w|$, and recall that $w$ is not a prefix of $v$. Since $S \neq S^{\prime}$, we can choose $i$ maximal so that $ s_{j} = s^{\prime}_{j}$ for $j < i$. Then $s_{i} \neq s^{\prime}_{i}$; we assume without loss of generality that $s_{i} < s^{\prime}_{i}$. Since $ s_{i} \in S$, we know that $s_{i} \in O_{v}(u)$. Since $s^{\prime}_{i-1} = s_{i-1} < s_{i} < s^{\prime}_{i}$, by Lemma~\ref{vsurvive} $s_{i} +
(i-1)(|w| - |v|) \in O_{v}(R_{u}^{v \rightarrow w}(S^{\prime}))$. Also, by Lemma~\ref{wsurvive}, $s_{i} + (i-1)(|w| - |v|) \in O_{w}(R_{u}^{v \rightarrow w}(S))$. Since $w$ is not a prefix of $v$, this means that $ R_{u}^{v \rightarrow w}(S) \neq R_{u}^{v \rightarrow w}(S^{\prime})$, completing the proof of injectivity in this case.
We now treat the case where $|v| \leq|w|$, and recall that $v$ is not a suffix of $w$. Since $S \neq S^{\prime}$, we can choose $i$ maximal so that $ s_{m - j} = s^{\prime}_{m - j}$ for $j < i$. Then $s_{m - i} \neq s^{\prime}_{m - i} $; we assume without loss of generality that $s_{m - i} < s^{\prime}_{m - i}$. Since $s^{\prime}_{m - i} \in S^{\prime}$, we know that $s^{\prime}_{m - i} \in O_{v}(u)$. Since $s_{m-i} < s^{\prime}_{m-i} < s^{\prime}_{m-i+1} = s_{m-i+1}$, by Lemma~\ref{vsurvive} $s^{\prime}_{m-i} +
(m-i)(|w| - |v|) \in O_{v}(R_{u}^{v \rightarrow w}(S))$. Also, by Lemma~\ref
{wsurvive}, $s^{\prime }_{m-i} + (m-i-1)(|w| - |v|) \in O_{w}(R_{u}^{v \rightarrow w}(S^{\prime}))$. Since $v$ is not a suffix of $w$, this means that $R_{u}^{v \rightarrow w}(S) \neq R_{u}^{v \rightarrow w}(S^{\prime})$, completing the proof of injectivity in this case and in general. \end{proof}
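The injectivity statement can be illustrated concretely. The following sketch assumes that $R_{u}^{v\rightarrow w}(S)$ simply replaces the occurrences of $v$ starting at the indices in $S$ by $w$, working left to right (the precise definition appears earlier in the paper, and the sketch only handles non-overlapping chosen occurrences); the helper names are mine.

```python
from itertools import combinations

def occurrences(u, v):
    """All start indices of v in u, analogous to O_v(u)."""
    return [i for i in range(len(u) - len(v) + 1) if u[i:i + len(v)] == v]

def replace_occurrences(u, v, w, S):
    """Replace the occurrences of v starting at the indices in S by w,
    left to right -- a sketch of the map R_u^{v -> w}(S) for
    non-overlapping chosen occurrences."""
    out, prev = [], 0
    for s in sorted(S):
        assert u[s:s + len(v)] == v and s >= prev
        out.append(u[prev:s])
        out.append(w)
        prev = s + len(v)
    out.append(u[prev:])
    return "".join(out)

# v = "ab", w = "ba": v is not a suffix of w and w is not a prefix of v,
# so Lemma `injective` predicts that distinct m-element subsets of O_v(u)
# give distinct images.
u, v, w = "abxabxab", "ab", "ba"
O = occurrences(u, v)
for m in range(1, len(O) + 1):
    images = [replace_occurrences(u, v, w, S) for S in combinations(O, m)]
    assert len(set(images)) == len(images)
```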
\begin{lemma} \label{preimage} Let $v,w\in L(\mathcal{A}^{\mathbb{Z}})$ be such that $v$ respects the transition to $w$, $v$ is not a suffix of $w$, and $w$ is not a prefix of $v$. Then for any
$u^{\prime}\in L(\mathcal{A}^{\mathbb{Z}})$ and any $m \leq|O_{w}(u^{\prime})|$, \begin{equation*}
|\{(u, S) \ : \ |S| = m, S \subseteq O_{v}(u), u^{\prime}=
R_{u}^{v\rightarrow w}(S)\}| \leq{\binom{|O_{w}(u^{\prime})| }{m}}. \end{equation*} \end{lemma}
\begin{proof} Assume that $v,w,u^{\prime}$ are as in the lemma, and denote the set above by $f(u^{\prime})$. For any $(u,S)\in f(u^{\prime})$ we define $g(S)=\{s_{1}
,s_{2}+|w|-|v|,\ldots,s_{m}+(m-1)(|w|-|v|)\}$; note that by Lemma~\ref {wsurvive}, $g(S)\subseteq O_{w}(u^{\prime})$.
We claim that for any $S$, there is at most one $u$ for which $(u,S)\in f(u^{\prime})$. One can find this $u$ by simply reversing each of the replacements in the definition of $R_{u}^{v \rightarrow w}(S)$. Informally, the only such $u$ is $u = R_{u^{\prime}}^{v \leftarrow w}(g(S))$, where $ R_{u'}^{v\leftarrow w}$ is defined analogously to $R_{u}^{v\rightarrow w}$ with replacements of $w$ by $v$ made from right to left instead of $v$ by $w$ made from left to right.
Finally, since $g(S)\subseteq O_{w}(u^{\prime})$, and since $g$ is clearly injective, there are at most ${\binom{|O_{w}(u^{\prime})|}{m}}$ choices for $S$ with $(u,S)\in f(u^{\prime})$ for some $u$, completing the proof. \end{proof}
We may now prove the desired relation for $v$, $w$ with $E(v)\subseteq E(w)$ under additional assumptions on $v$ and $w$.
\begin{proposition} \label{easycase} Let $X$ be a subshift, $\mu$ a measure of maximal entropy of $X$, and $v,w\in L(X).$ If $v$ respects the transition to $w$, $v$ is not a suffix of $w$, $w$ is not a prefix of $v$, and $E_{X}(v)\subseteq E_{X}(w)$, then \begin{equation*}
\mu(v)\leq\mu(w)e^{h_{top}(X)(|w|-|v|)}. \end{equation*} \end{proposition}
\begin{proof} Let $\delta,\varepsilon\in\mathbb{Q}_{+}$. We may assume without loss of generality that $\mu$ is an ergodic MME, since proving the desired inequality for ergodic MMEs implies it for all MMEs by ergodic decomposition.
For every $n\in\mathbb{Z}_{+},$ we define \begin{equation*} S_{n}:=\left\{ u\in L_{n}(X):\left\vert O_{v}(u)\right\vert \geq n(\mu(v)-\delta)\text{ and }\left\vert O_{w}(u)\right\vert \leq n(\mu (w)+\delta)\right\} . \end{equation*}
By the pointwise ergodic theorem (applied to $\chi_{[v]}$ and $\chi_{[w]}$), $\mu(S_{n}) \rightarrow1$. Then, by Corollary~\ref{SMBcor}, there exists $N$ so that for $n>N$, \begin{equation}
|S_{n}|>e^{n(h_{top}(X)-\delta)}. \label{Sbound} \end{equation} For each $u\in S_{n}$, we define \begin{equation*} A_{u}:=\left\{ R_{u}^{v\rightarrow w}(S):S\subseteq O_{v}(u)\text{ and } \left\vert S\right\vert =\varepsilon n\right\} \end{equation*} (since $\varepsilon\in\mathbb{Q}_{+}$, we may restrict without loss of generality to those $n$ for which $\varepsilon n$ is an integer).
Since each word in $A_u$ is obtained by making $\varepsilon n$ replacements of $v$ by $w$ in a word of length $n$, all words in $A_u$ have length
$m := n + \varepsilon n (|w| - |v|)$. Since $E_{X}(v)\subseteq E_{X}(w)$, we have that $A_{u}\subset L(X)$. Also, by Lemma~\ref{injective}, \begin{equation*}
|A_{u}|={\binom{\left\vert O_{v}(u)\right\vert}{\varepsilon n}}\geq{\binom{n(\mu(v)-\delta)}{\varepsilon n}} \end{equation*} for every $u$.
On the other hand, for every $u^{\prime}\in\bigcup_{u\in S_{n}}A_{u}$ we have that \begin{equation*} \left\vert O_{w}(u^{\prime})\right\vert \leq n(\mu(w)+\delta)+n\varepsilon
(2|w|+1) \end{equation*}
(here, we use the fact that any replacement of $v$ by $w$ can create no more than $2|w|$ new occurrences of $w$, in addition to the inserted copy of $w$ itself.) Therefore, by Lemma~\ref{preimage}, \begin{equation*} \left\vert \left\{ u\in S_{n}:u^{\prime}\in A_{u}\right\} \right\vert \leq{
\binom{n\left(\mu(w)+\delta+(2|w|+1)\varepsilon\right)}{\varepsilon n}.} \end{equation*}
Then, by Lemma~\ref{counting}, we see that for $n>N$, \begin{multline}
|L_{m}(X)|\geq\left\vert \bigcup_{u\in S_{n}}A_{u}\right\vert \geq |S_{n}|{ \binom{n(\mu(v)-\delta)}{\varepsilon n}}{\binom{n(\mu(w)+\delta
+(2|w|+1)\varepsilon)}{\varepsilon n}}^{-1} \label{imagebound} \\ \geq e^{n(h_{top}(X)-\delta)}{\binom{n(\mu(v)-\delta)}{\varepsilon n}} {
\binom{n(\mu(w)+\delta+(2|w|+1)\varepsilon)}{\varepsilon n}}^{-1}. \end{multline}
For readability, we define $x=\mu(v)-\delta$ and $y=\mu(w)+\delta$. We recall that by Stirling's approximation, for $a > b > 0$, \[ \log{\binom{an}{bn}} =an\log(an)-bn\log(bn)-n(a-b)\log(n(a-b))+o(n) .\] Therefore, if we take logarithms and divide by $n$ on both sides of (\ref{imagebound}) and let $n$ approach infinity, we obtain \begin{multline*}
h_{top}(X)(1+\varepsilon(|w|-|v|))\geq h_{top}(X)-\delta+x\log x-(x-\varepsilon)\log(x-\varepsilon) \\
-(y+(2|w|+1)\varepsilon)\log(y+(2|w|+1)\varepsilon)+(y+2|w|\varepsilon
)\log(y+2|w|\varepsilon). \end{multline*} We subtract $h_{top}(X)$ from both sides, let $\delta\rightarrow0$, and simplify to obtain \begin{multline*}
h_{top}(X)\varepsilon(|w|-|v|)\geq\varepsilon\log\mu(v)+(\mu(v)-\varepsilon )\left( \log\frac{\mu(v)}{\mu(v)-\varepsilon}\right) \\
-\varepsilon\log(\mu(w)+(2|w|+1)\varepsilon)-(\mu(w)+2|w|\varepsilon
)\log\left( \frac{\mu(w)+(2|w|+1)\varepsilon}{\mu(w)+2|w|\varepsilon}\right) . \end{multline*}
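The Stirling estimate invoked above can be checked numerically; the sketch below computes exact log-binomials via the log-Gamma function (the sample values of $a$, $b$, $n$ are mine).

```python
import math

def log_binom(N, k):
    """Exact natural log of the binomial coefficient C(N, k)."""
    return math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)

# Dividing the displayed Stirling estimate by n, the n*log(n) terms cancel,
# so (1/n) log C(an, bn) tends to a log a - b log b - (a-b) log(a-b).
a, b, n = 0.5, 0.2, 10**6
per_letter = log_binom(int(a * n), int(b * n)) / n
limit = a * math.log(a) - b * math.log(b) - (a - b) * math.log(a - b)
assert abs(per_letter - limit) < 1e-3
```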
We have that \begin{align*} & \lim_{\varepsilon\rightarrow0}\frac{\mu(v)-\varepsilon}{\varepsilon} \log \frac{\mu(v)}{\mu(v)-\varepsilon} \\ & =\lim_{\varepsilon\rightarrow0}\frac{\mu(v)}{\varepsilon}\log\frac{\mu (v) }{\mu(v)-\varepsilon} \\ & =1, \end{align*} and \begin{align*}
& \lim_{\varepsilon\rightarrow0}-\frac{\mu(w)+2|w|\varepsilon}{\varepsilon }
\log\left( \frac{\mu(w)+(2|w|+1)\varepsilon}{\mu(w)+2|w|\varepsilon}\right) \\ & =\lim_{\varepsilon\rightarrow0}-\frac{\mu(w)}{\varepsilon}\log\left( \frac{
\mu(w)+(2|w|+1)\varepsilon}{\mu(w)+2|w|\varepsilon}\right) \\
& =\lim_{t\rightarrow0}-\frac{1}{t}\log\left( \frac{1+(2|w|+1)t} {1+2|w|t} \right) \\ & =-1. \end{align*}
Dividing the previous estimate by $\varepsilon$ and letting $\varepsilon\rightarrow0$, the two limits above imply that \begin{equation*}
h_{top}(X)(|w|-|v|)\geq\log\mu(v)-\log\mu(w). \end{equation*} Exponentiating both sides and solving for $\mu(v)$ completes the proof. \end{proof}
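As a sanity check on the inequality just proved, consider the full shift on $\{0,1\}$: every word belongs to the language, $h_{top}=\log 2$, and the unique MME is the Bernoulli $(1/2,1/2)$ measure with $\mu(v)=2^{-|v|}$, so the bound holds with equality for every pair of words. A numeric sketch (the sample words are mine):

```python
import math

h = math.log(2)                 # h_top of the full shift on {0, 1}

def mu(word):
    """Bernoulli(1/2, 1/2) measure of the cylinder [word]."""
    return 2.0 ** (-len(word))

# mu(v) <= mu(w) * exp(h * (|w| - |v|)) holds with equality on the full shift
for v, w in [("0", "01"), ("011", "0"), ("10", "1101")]:
    lhs = mu(v)
    rhs = mu(w) * math.exp(h * (len(w) - len(v)))
    assert abs(lhs - rhs) < 1e-12
```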
Our strategy is now to show that for any pair $v, w$, the cylinder sets $[v]$ and $[w]$ may each be partitioned into cylinder sets of the form $[\alpha v \beta]$ and $[\alpha w \beta]$ where the additional hypotheses of Proposition~\ref{easycase} hold on corresponding pairs. For this, we make the additional assumption that $X$ has positive entropy to avoid some pathological examples (for instance, note that if $X = \{0^{\infty}\}$, then it is not even possible to satisfy the hypotheses of Proposition~\ref{easycase}!)
\begin{definition}
Let $X$ be a subshift and $v\neq w\in L(X)$. We define
\begin{align*}
X_{resp(v\rightarrow w)} & :=\{x\in\left[ v\right] :\\
\exists N,M & \in\mathbb{Z}_{+} \text{ s.t. } \alpha v\beta=x_{[-N,M)}\text{ respects
the transition to }\alpha w\beta,\\
& \alpha v\beta\text{ is not a suffix of }\alpha w\beta\text{, and }\\
& \alpha w\beta\text{ is not a prefix of }\alpha v\beta\}.
\end{align*} \end{definition} \begin{proposition} \label{extend} \label{transition} Let $X$ be a subshift with positive topological entropy, $\mu$ an ergodic measure of maximal entropy of $X$, and $v\neq w\in L(X)$. There exists $G^{v,w}\subset X_{resp(v\rightarrow w)}$ such that $\mu(G^{v,w})=\mu(v)$.
\end{proposition}
\begin{proof} Define \begin{equation*} Q:=\left\{ \gamma\in L(X):\mu(\gamma)>0\right\} \end{equation*} and, for all $n \in\mathbb{N}$, define $Q_{n} := Q \cap A^{n}$.
Recall that \begin{equation*} h_{\mu}(X)=\lim_{n\rightarrow\infty}\frac{1}{n}\sum\nolimits_{\gamma\in A^{n}}-\mu(\gamma)\log\mu(\gamma). \end{equation*} The only positive terms of this sum are those corresponding to $\gamma \in Q_{n}$, and it's a simple exercise to show that when $\sum_{i=1}^{t} \alpha_{i} = 1$, $\sum_{i=1}^{t} -\alpha_{i} \log\alpha_{i}$ has a maximum value of $\log t$. Therefore, \begin{equation*}
h_{\mu}(X) \leq\liminf_{n \rightarrow\infty} \frac{1}{n} \log|Q_{n}|. \end{equation*} Since $h_{\mu}(X) > 0$, $\left\vert Q_{n}\right\vert$ grows exponentially. Therefore, there exists $n_{2}\in\mathbb{Z}_{+}$ such that for every $n\geq n_{2}$ we have that $\left\vert Q_{n}\right\vert \geq2n$.
Let \begin{align*} N & :=\max\left\{ n_{2},\left\vert v\right\vert \right\} +1, \\
P & :=\left\{ x \in X \ : \ x_{(-\infty, 0)} \text{ periodic with period less than } |w| \right\} , \\ S & :=\left\{ x \in X \ : \ \forall\gamma\in Q, \gamma\text{ is a subword of } x_{[0, \infty)} \right\} ,\text{ and} \\ G^{v,w} & :=([v]\cap S) \setminus P. \end{align*}
Since $\mu$ has positive entropy, it is not supported on points with period less than $|w|$, and so for each $i < |w|$, there exists a word $u_{i} \in L_{i+1}(X)$ with different first and last letters. Then the pointwise ergodic theorem
(applied to $\chi_{[u_{1}]}, \ldots, \chi_{[u_{|w| - 1}]}$ with $F_n = [-n,0)$) implies that $\mu(P) = 0$. The pointwise ergodic theorem (applied to $\chi_{[\gamma]}$ for $\gamma\in Q$ with $F_n = [0,n]$) shows that $\mu(S)=1$, and so $\mu(G^{v,w})=\mu(v)$.
Now we will prove that $G^{v,w}\subset X_{resp(v\rightarrow w)}$. Let $x\in G^{v,w}$. If for every $n$, $x_{(-n,0)}v$ is a suffix of $x_{(-n,0)}w$, then clearly $|w|\geq |v|$, and for any $i>0$, the $(i+|w|)$th letters from the end of $x_{(-\infty,0)}v$ and $x_{(-\infty,0)}w$ must be the same, i.e. $
x(-i)=x(-i-|w|+|v|)$. This would imply $x\in P$, which is not possible.
We can therefore define $N^{\prime}\geq N$ to be minimal so that for $ \alpha^x= x_{[-N^{\prime}, 0)}$, $\alpha^x v$ is not a suffix of $\alpha^x w$.
(Obviously if $|v| \geq|w|$, then $N^{\prime}= N$.)
Since $x \in S$, we can define the minimal $M$ so that all $N^{\prime} $ -letter words of positive $\mu$-measure are subwords of $x_{\left[ -N^{\prime},M\right) }$; for brevity we write this as $Q_{N^{\prime}} \sqsubset x_{\left[ -N^{\prime},M\right) }$.
Since $M$ is the first natural with $Q_{N^{\prime}}\sqsubset x_{\left[ -N^{\prime},M\right) }$, then \begin{equation*} \left\vert O_{x_{\left[ M-N^{\prime},M\right) }}(x_{\left[ -N^{\prime },M\right) })\right\vert =1, \end{equation*} i.e. the $N^{\prime}$-letter suffix of $x_{[-N^{\prime}, M)} = \alpha^x v \beta^x $ appears only at the end of $\alpha^x v \beta^x$. Since $N^{\prime}\geq N \geq n_{2}$, $\left\vert Q_{N^{\prime}}\right\vert $
$\geq 2N^{\prime}$, and so $M > 2N^{\prime} \geq N^{\prime} + |v|$, implying that the aforementioned $N^{\prime}$-letter suffix of $\alpha^x v \beta^x$ is also the $N^{\prime}$-letter suffix of $\alpha^x w \beta^x$.
First, it is clear that $\alpha^x v \beta^x$ is not a suffix of $\alpha^x w \beta^x$ , since $\alpha^x v$ was not a suffix of $\alpha^x w$ by definition of $\alpha^x$. Since the $N^{\prime}$-letter suffix of $\alpha^x w \beta^x$ appears only once within $\alpha^x v \beta^x$, we see that $\alpha^x w \beta^x$ cannot be a prefix of $ \alpha^x v \beta^x$ either.
It remains to show that $\alpha^x v\beta^x=x_{\left[ -N^{\prime},M\right) }$ respects the transition to $\alpha^x w\beta^x.$ Suppose that a word $u\in L(X)$ contains overlapping copies of $\alpha^x v \beta^x$, i.e. we have $i,j\in O_{\alpha^x v\beta^x}(u)$ with $j>i$. Since $\left\vert O_{x_{\left[ M-N^{\prime },M\right) }}(x_{\left[ -N^{\prime},M\right) })\right\vert =1$ we have that $ j > i + M$; otherwise the $N^{\prime}$-letter suffix of $\alpha^x v \beta^x= x_{[i, i+N^{\prime}+M)}$ would be a non-terminal subword of $
\alpha^x v \beta^x= x_{[j, j+N^{\prime}+M)}$. Then $j + |w| - |v| >
i + M + |w| - |v| > i$, and so property (iv) is verified. Since
$j > i + M$, the central $v$ within $x_{[i, i+N^{\prime}+M)}$ is disjoint from $x_{[j, j+N^{\prime}+M)}$, and so $j + |w| - |v| \in O_{v}(R_{u}^{v\rightarrow w}(i))$, verifying property (i).
For property (ii), the same argument as above shows that when $i,j\in O_{\alpha^x v\beta^x}(u)$ with $i>j$, $i > j + M$. Again this means that the central $v$ within $x_{[i, i+N^{\prime}+M)}$ is disjoint from $ x_{[j, j+N^{\prime}+M)}$, and so $j \in O_{v}(R_{u}^{v\rightarrow w}(i))$ , verifying property (ii) and completing the proof.
For property (iii), we simply note that the proof of (ii) is completely unchanged if we instead assumed $j \in O_{\alpha^x w\beta^x}(u)$, since the $ N^{\prime}$-letter suffixes of $\alpha^x w \beta^x$ and $\alpha^x v \beta^x$ are the same.
\end{proof}
\begin{remark}\label{cp} For $x\in G^{v,w}$ (as in Proposition~\ref{extend}) we denote by $\alpha^{x}$ and $\beta ^{x}$ the words $\alpha$ and $\beta$ constructed in the proof. \end{remark}
\begin{lemma}\label{cl} For $x \neq y\in G^{v,w}$, it is not possible for either of $\alpha^x$, $\alpha^y$ to be a proper suffix of the other, and if $\alpha^x = \alpha^y$, then it is not possible for either of $\beta^x, \beta^y$ to be a proper prefix of the other. \end{lemma}
\begin{proof}
Let $x \neq y\in G^{v,w}$. We write $\alpha^x v \beta^x = x_{[-N'_x, M_x)}$ and $\alpha^y v \beta^y= y_{[-N'_y, M_y)}$. We recall that $\alpha^x= x_{[-N'_x, 0)}$ was chosen as the minimal $N'_x$ (above a certain $N_x$ dependent only on $v$ and $X$) so that $\alpha^x v$ is not a suffix of $\alpha^x w$, and that $\alpha^y = y_{[-N'_y, 0)}$ was defined similarly using minimal $N'_y$ above some $N_y$. If $\alpha^y$ were a proper suffix of $\alpha^x$, then $N'_y < N'_x$ and $\alpha^y = x_{[-N'_y, 0)}$. Since by construction $\alpha^y v$ is not a suffix of $\alpha^y w$, this would contradict the minimality of $\alpha^x$. A trivially similar argument shows that $\alpha^x$ is not a proper suffix of $\alpha^y$.
Now, assume that $\alpha^x = \alpha^y$; we denote their common value by $\alpha$ and their common length by
$N'$. Recall that $\beta^x = x_{[|v|, M_x)}$ was chosen using the minimal $M_x$ so that $\alpha^x v \beta^x$ contains all $N'_x$-letter words of positive $\mu$-measure, and that $\beta^y$ was defined similarly using minimal $M_y$ for $y$. If $\beta^y$ were a proper prefix of $\beta^x$, then $M_y < M_x$ and
$\beta^y = x_{[|v|, M_y)}$. Since $\alpha v \beta^y$ contains all $N'$-letter words of positive $\mu$-measure, this would contradict the minimality of $\beta^x$. A trivially similar argument shows that $\beta^x$ is not a proper prefix of $\beta^y$.
\end{proof}
We may now prove the main result of this section.
\begin{theorem} \label{hardcase} Let $X$ be a subshift with positive entropy, $\mu$ a measure of maximal entropy of $X$, and $v,w\in L(X).$ If $E_{X}(v)\subseteq E_{X}(w)$ then \begin{equation*}
\mu(v)\leq\mu(w)e^{h_{top}(X)(|w|-|v|)}. \end{equation*} \end{theorem}
\begin{proof} Consider $X, \mu, v, w$ as in the theorem. It suffices to prove the result for ergodic $\mu$, since it then follows for all $\mu$ by ergodic decomposition.
If $v=w$ the result is trivial, so we assume $v\neq w$. Let $G^{v,w}$ be as in the proof of Proposition~\ref{extend}.
For any $x \in G^{v,w}$, by definition $\alpha^x v \beta^x \in L(X)$. Since $E_{X}(v)\subseteq E_{X} (w)$, we then know that $\alpha^x w \beta^x\in L(X)$ and $E_{X}(\alpha^x v \beta^x)\subseteq E_{X}(\alpha^x w \beta^x)$ for every $x\in G^{v,w}$. Now, using Proposition~\ref{easycase} we have that \begin{equation} \label{hardbound}
\mu(\alpha^x v \beta^x)\leq\mu(\alpha^x w \beta^x)e^{h_{top}(X)(|\alpha^x w \beta^x|-|\alpha^x v \beta^x|)}=\mu(\alpha^x w \beta^x)e^{h_{top}(X)(|w|-|v|)}. \end{equation}
For convenience, we adopt the notation $[\alpha^x v \beta^x] = [\alpha^x.v\beta^x]$ and $[\alpha^x w \beta^x] = [\alpha^x.w\beta^x]$ to emphasize the location of the words $\alpha^x v \beta^x$ and $\alpha^x w \beta^x$ within these cylinder sets.
We now claim that if $\alpha^x v \beta^x \neq \alpha^y v \beta^y$ for $x,y \in G^{v,w}$, then $[\alpha^x v \beta^x]\cap [\alpha^y v \beta^y]=\emptyset$. To verify this, choose any $x,y$ for which $\alpha^x v \beta^x \neq \alpha^y v \beta^y$; then either $\alpha^x\neq\alpha^{y}$ or $\alpha^x = \alpha^y$ and $\beta^x\neq\beta^{y}$. If $\alpha^x \neq \alpha^y$, then by Lemma~\ref{cl}, neither of $\alpha^x$ or $\alpha^y$ can be a suffix of the other, which means that the cylinder sets $[\alpha^x .v \beta^x]$ and $[\alpha^{y} .v \beta^{y}]$ are disjoint.
If instead $\alpha^x = \alpha^y$ and $\beta^x \neq \beta^y$, then again by Lemma~\ref{cl}, neither of $\beta^x$ or $\beta^y$ can be a prefix of the other, meaning that the cylinder sets $[\alpha^x .v \beta^x]$ and $[\alpha^{y} .v \beta^{y}]$ are again disjoint. This proves the claim.
Let $K=\{\alpha^x v \beta^x \ : \ x\in G^{v,w}\}$. Since any two of the cylinder sets $[\alpha^x .v \beta^x]$ are disjoint or equal, the collection $\{[\alpha .v \beta]\}_{\alpha v \beta \in K}$ is pairwise disjoint, its union contains $G^{v,w}$, and it is contained in $[v]$ (which has measure $\mu(v)=\mu(G^{v,w})$). The sets $\{[\alpha .w \beta]\}_{\alpha v \beta \in K}$ are likewise pairwise disjoint, and so
\begin{align*} \sum_{\alpha v \beta \in K}\mu(\alpha v \beta) & =\mu(G^{v,w})=\mu(v)\text{ and} \\ \sum_{\alpha v \beta \in K}\mu(\alpha w \beta) & \leq\mu(w). \end{align*} In fact one can show the final inequality is an equality but we will not use this. We may then sum (\ref{hardbound}) over $\alpha v \beta \in K$ yielding \begin{align*} \mu(v) & =\sum_{\alpha v \beta \in K}\mu(\alpha v \beta) \\
& \leq e^{h_{top}(X)(|w|-|v|)}\sum_{\alpha v \beta \in K} \mu(\alpha w \beta) \\
& \leq\mu(w)e^{h_{top}(X)(|w|-|v|)}, \end{align*} as desired.
\end{proof}
The following corollary is immediate.
\begin{corollary} \label{maincor} Let $X$ be a $\mathbb{Z}$-subshift with positive entropy and $v,w\in L(X).$ If $E_{X}(v)=E_{X}(w)$, then for every measure of maximal entropy $\mu$ of $X$, \begin{equation*}
\mu(v)=\mu(w)e^{h_{top}(X)(|w|-|v|)}. \end{equation*} \end{corollary}
\subsection{Applications to synchronized subshifts} \label{apps}
The class of synchronized subshifts provides many examples where $E_X(v) = E_X(w)$ is satisfied for many pairs $v,w$ of different lengths, allowing for the usage of Corollary~\ref{maincor}.
\begin{definition}
For a subshift $X$, we say that $v\in L(X)$ is \textbf{synchronizing} if for
every $uv,vw\in L(X)$, it is true that $uvw\in L(X).$ A subshift $X$ is
\textbf{synchronized} if $L(X)$ contains a synchronizing word. \end{definition}
The following fact is immediate from the definition of synchronizing word.
\begin{lemma}
\label{synchlem} If $w$ is a synchronizing word for a subshift $X$, then for
any $v \in L(X)$ which contains $w$ as both a prefix and suffix, $E_{X}(v) =
E_{X}(w)$. \end{lemma}
\begin{definition}
A subshift $X$ is \textbf{entropy minimal }if every subshift strictly
contained in $X$ has lower topological entropy. Equivalently, $X$ is entropy
minimal if every MME on $X$ is fully supported. \end{definition}
The following result was first proved in \cite{Th}, but we may also derive it as a consequence of Corollary~\ref{maincor} with a completely different proof.
\begin{theorem}
\label{synchunique} Let $X$ be a synchronized subshift. If $X$ is entropy
minimal then $X$ has a unique measure of maximal entropy. \end{theorem}
\begin{proof}
Let $\mu$ be an ergodic measure of maximal entropy of such an $X$. Let $w$ be
a synchronizing word, $u\in L(X)$ and
\[
R_{u}:=\left\{ x\in\left[ u\right] :\left\vert O_{w}(x_{\left( -\infty,0\right] })\right\vert \geq1\text{ and }\left\vert O_{w}(x_{\left( \left\vert u\right\vert ,\infty\right) })\right\vert \geq1\right\} .
\]
Since $X$ is entropy minimal, $\mu(w) > 0$, and so by the pointwise ergodic
theorem (applied to $\chi_{[w]}$ with $F_{n} = [-n,0]$ or $(|u|, n]$),
$\mu(R_{u}) = \mu(u)$.
For every $x\in R_{u}$ we define minimal $n \geq|w|$ and $m \geq|w| + |u|$ so
that $g_{u}(x):=x_{\left[ -n,m\right] }$ contains $w$ as both a prefix and a
suffix. Then $\{[g_{u}(x)]\}$ forms a partition of $R_{u}$.
By Lemma~\ref{synchlem}, $E_X(w) = E_X(wvw)$ for all $v$ s.t. $wvw \in L(X)$. Then
by Corollary \ref{maincor} we have that
\[
\mu(g_{u}(x))=\mu(w)e^{h_{top}(X)(\left\vert w\right\vert -\left\vert
g_{u}(x)\right\vert )}.
\]
Since $g_{u}(R_{u})$ is countable we can write
\[
\mu(u)=\mu(R_{u}) = \mu(w)\sum_{g_{u}(x)\in g_{u}(R_{u})}e^{h_{top}
(X)(\left\vert w\right\vert -\left\vert g_{u}(x)\right\vert )}.
\]
This implies that
\[
1=\sum_{a\in\mathcal{A}}\mu(a)=\mu(w)\sum_{a\in\mathcal{A}}\sum_{g_{a}(x)\in
g_{a}(R_{a})}e^{h_{top}(X)(\left\vert w\right\vert -\left\vert g_{a}
(x)\right\vert )}.
\]
We combine the two equations to yield
\begin{align*}
\mu(u) & =\frac{\sum\nolimits_{g_{u}(x)\in g_{u}(R_{u})}e^{h_{top}
(X)(\left\vert w\right\vert -\left\vert g_{u}(x)\right\vert )}}{\sum
_{a\in\mathcal{A}}\sum_{g_{a}(x)\in g_{a}(R_{a})}e^{h_{top}(X)(\left\vert
w\right\vert -\left\vert g_{a}(x)\right\vert )}}\\
& =\frac{\sum\nolimits_{g_{u}(x)\in g_{u}(R_{u})}e^{-h_{top}(X)\left\vert
g_{u}(x)\right\vert }}{\sum_{a\in\mathcal{A}}\sum\nolimits_{g_{a}(x)\in
g_{a}(R_{a})}e^{-h_{top}(X)\left\vert g_{a}(x)\right\vert }}.
\end{align*}
Since the right-hand side is independent of the choice of the measure we
conclude there can only be one ergodic measure of maximal entropy, which
implies by ergodic decomposition that there is only one measure of maximal entropy. \end{proof}
In \cite{CT}, one of the main tools used in proving uniqueness of the measure of maximal entropy for various subshifts was boundedness of the quantity
$\frac{|L_{n}(X)|}{e^{nh_{top}(X)}}$. One application of our results is to show that this quantity in fact converges to a limit for a large class of synchronized shifts.
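A concrete instance where this convergence can be observed directly is the golden mean shift (binary sequences with no two adjacent $1$s), which is synchronized (the word $0$ is synchronizing) and entropy minimal with mixing MME; here $|L_n(X)|$ satisfies the Fibonacci recursion and $e^{h_{top}(X)}$ is the golden ratio $\varphi$. A numeric sketch (the closed-form limit $\varphi^2/\sqrt{5}$ is a standard Fibonacci fact, not taken from this paper):

```python
import math

# golden mean shift: binary words with no "11"; the counts satisfy
# |L_n| = |L_{n-1}| + |L_{n-2}| with |L_0| = 1, |L_1| = 2
phi = (1 + math.sqrt(5)) / 2          # e^{h_top} for this subshift
counts = [1, 2]
for n in range(2, 60):
    counts.append(counts[-1] + counts[-2])

ratios = [c / phi**n for n, c in enumerate(counts)]
# the ratio |L_n| / lambda^n settles down to a limit, as predicted
assert abs(ratios[-1] - ratios[-2]) < 1e-9
assert abs(ratios[-1] - phi**2 / math.sqrt(5)) < 1e-9
```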
\begin{definition}
A measure $\mu$ on a subshift $X$ is \textbf{mixing} if, for all measurable
$A,B$,
\[
\lim_{n \rightarrow\infty} \mu(A \cap\sigma_{-n} B) = \mu(A) \mu(B).
\]
\end{definition}
\begin{theorem}
\label{limit}Let $X$ be a synchronized entropy minimal subshift whose
unique measure of maximal entropy is mixing. We have that
\[
\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X)\right\vert }{e^{nh_{top}
(X)}}\text{ exists.}
\]
\end{theorem}
\begin{proof}
We denote $\lambda:=e^{h_{top}(X)}$ and define $\mu$ to be the unique measure
of maximal entropy for $X$. Let $w\in L(X)$ be a synchronizing word and
\[
R_{n}:=\left\{ u\in L_{n}(X):w\text{ is a prefix and a suffix of }u\right\}
.
\]
Lemma~\ref{synchlem} and Corollary \ref{maincor} imply that for every $u\in
R_{n}$,
\[
\mu(u)=\mu(w)\lambda^{\left\vert w\right\vert -n}.
\]
This implies that
\[
\sum_{u\in R_{n}}\mu(u)=\left\vert R_{n}\right\vert \mu(w)\lambda^{\left\vert
w\right\vert -n}.
\]
On the other hand,
\[
\sum_{u\in R_{n}}\mu(u)=\mu(\left[ w\right] \cap\sigma_{\left\vert
w\right\vert -n}\left[ w\right] ).
\]
Since the measure is mixing we obtain that
\[
\lim_{n\rightarrow\infty}\mu(\left[ w\right] \cap\sigma_{\left\vert
w\right\vert -n}\left[ w\right] )=\mu(\left[ w\right] )^{2}.
\]
Combining the three equalities above yields
\[
\lim_{n\rightarrow\infty}\frac{\left\vert R_{n}\right\vert }{\lambda^{n}
}=\frac{\mu(w)}{\lambda^{\left\vert w\right\vert }}.
\]
For all $n\in\mathbb{N}$, we define
\begin{align*}
P_{n} & :=\left\{ u\in L_{n+|w|}(X):w\text{ is a prefix of }u,\ |O_{w}(u)|=1\right\} \text{ and}\\
S_{n} & :=\left\{ u\in L_{n+|w|}(X):w\text{ is a suffix of }u,\ |O_{w}(u)|=1\right\}
\end{align*}
to be the sets of $(n+|w|)$-letter words in $L(X)$ containing $w$ exactly once
as a prefix/suffix respectively. We also define
\[
K_{n}:=\left\{ u\in L_{n}(X):|O_{w}(u)|=0\right\}
\]
to be the set of $n$-letter words in $L(X)$ not containing $w$. Then
partitioning words in $L_{n}(X)\setminus K_{n}$ by the first and last
appearance of $w$, recalling that $w$ is synchronizing, gives the formula
\[
\left\vert L_{n}(X)\right\vert =\left\vert K_{n}\right\vert + \sum_{0\leq i <
j\leq n}|S_{i}||R_{j-i} ||P_{n-j}|,
\]
thus
\begin{equation}
\label{sumproduct}\frac{\left\vert L_{n}(X)\right\vert }{\lambda^{n}} =
\frac{\left\vert K_{n}\right\vert }{\lambda^{n}} + \sum_{0\leq i < j \leq n}
\frac{|S_{i}|}{\lambda^{i}} \frac{|R_{j-i}|} {\lambda^{j-i}} \frac{|P_{n-j}
|}{\lambda^{n-j}}.
\end{equation}
We now wish to take the limit as $n \rightarrow\infty$ of both sides of
(\ref{sumproduct}). First, we note that since $X$ is entropy minimal,
$h_{top}(X_{w}) < h_{top}(X)$, where $X_{w}$ is the subshift of points of $X$
not containing $w$. Therefore,
\[
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\left\vert K_{n}\right\vert <
h_{top}(X).
\]
Since all words in $P_{n}$ and $S_{n}$ are the concatenation of $w$ with a
word in $K_{n}$, $|P_{n}|, |S_{n}| \leq|K_{n}|$, and so
\[
\limsup_{n\rightarrow\infty}\frac{1}{n}\log\left\vert P_{n}\right\vert
,\limsup_{n\rightarrow\infty}\frac{1}{n}\log\left\vert S_{n}\right\vert <
h_{top}(X),
\]
implying that the infinite series
\[
\sum_{n=0}^{\infty}\frac{\left\vert P_{n}\right\vert }{\lambda^{n}} \text{ and
} \sum_{n=0}^{\infty}\frac{\left\vert S_{n}\right\vert }{\lambda^{n}}\text{
converge.}
\]
We now take the limit of the right-hand side of (\ref{sumproduct}).
\[
\lim_{n \rightarrow\infty} \frac{\left\vert K_{n}\right\vert }{\lambda^{n}} +
\sum_{0\leq i < j \leq n} \frac{|S_{i}|} {\lambda^{i}} \frac{|R_{j-i}
|}{\lambda^{j-i}} \frac{|P_{n-j}|}{\lambda^{n-j}} = \lim_{n \rightarrow\infty}
\sum_{0 \leq k \leq n} \left( \frac{|R_{k}|}{\lambda^{k}} \left( \sum_{i =
0}^{n-k} \frac{|S_{i}|}{\lambda^{i}} \frac{|P_{n - k - i}|} {\lambda^{n-k-i}
}\right) \right) .
\]
Since $\frac{|R_{k}|}{\lambda^{k}}$ converges to the limit $\frac{\mu
(w)}{\lambda^{\left\vert w\right\vert }}$ and the series
$\sum_{m = 0}^{\infty} \sum_{i = 0}^{m} \frac{|S_{i}|}{\lambda^{i}} \frac{|P_{m-i}
|}{\lambda^{m-i}}$ converges, the above can be rewritten as
\begin{multline*}
\lim_{n \rightarrow\infty} \sum_{0 \leq k \leq n} \left( \frac{|R_{k}
|}{\lambda^{k}} \left( \sum_{i = 0}^{n-k} \frac{|S_{i}|}{\lambda^{i}}
\frac{|P_{n - k - i}|}{\lambda^{n-k-i}}\right) \right) = \frac{\mu
(w)}{\lambda^{\left\vert w\right\vert }} \sum_{m =
0}^{\infty} \sum_{i = 0}^{m} \frac{|S_{i}|}{\lambda^{i}} \frac{|P_{m- i}
|}{\lambda^{m-i}}\\
= \frac{\mu(w)}{\lambda^{\left\vert w\right\vert }} \sum_{n=0}^{\infty}
\frac{\left\vert P_{n}\right\vert }{\lambda^{n}} \sum_{n=0}^{\infty}
\frac{\left\vert S_{n}\right\vert }{\lambda^{n}}.
\end{multline*}
Recalling (\ref{sumproduct}), we see that $\lim_{n \rightarrow\infty}
\frac{|L_{n}(X)|}{\lambda^{n}}$ converges to this limit as well, completing
the proof. \end{proof}
We will be able to say even more about a class of synchronized subshifts called the $S$-gap subshifts.
\begin{definition}\label{Sgap}
Let $S\subseteq\mathbb{N} \cup \{0\}$. We define the $S-$gap subshift $X_{S}$ by the set
of forbidden words $\{10^{n}1 \ : \ n \notin S\}$. Alternately, $X_{S}$ is the
set of bi-infinite $\{0,1\}$ sequences where the gap between any two nearest
$1$s has length in $S.$ \end{definition}
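Membership of a finite word in $L(X_S)$ can be tested by scanning the $0$-gaps between consecutive $1$s, since a finite word avoiding all forbidden words $10^{n}1$, $n\notin S$, extends to a point of $X_S$ by padding with $0$s on both sides. A minimal sketch (helper names mine):

```python
def internal_gaps(word):
    """Lengths of the 0-runs between consecutive 1s in a finite binary word."""
    ones = [i for i, c in enumerate(word) if c == "1"]
    return [b - a - 1 for a, b in zip(ones, ones[1:])]

def avoids_forbidden(word, S):
    """True iff word contains no forbidden word 1 0^n 1 with n not in S."""
    return all(g in S for g in internal_gaps(word))

S = {0, 1}                               # only gaps of length 0 or 1 allowed
assert avoids_forbidden("10110", S)      # gaps: 1, 0
assert not avoids_forbidden("10010", S)  # contains the forbidden word 1001
```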
It is immediate from the definition that $1$ is a synchronizing word for every $S-$gap subshift. Also, all $S$-gap subshifts are entropy minimal (see Theorem C, Remark 2.4 of \cite{CT2}), and as long as $\gcd(S+1)=1$, their unique measure of maximal entropy is mixing (in fact Bernoulli) by Theorem 1.6 of \cite{Cl2}. (This theorem guarantees that the unique MME is Bernoulli up to period $d$ given by the gcd of periodic orbit lengths, and it's clear that $S+1$ is contained in the set of periodic orbit lengths.)
In this case Climenhaga \cite{Cl} conjectured that the limit $\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X_{S})\right\vert }{e^{nh_{top}(X_{S})}}$ existed; we prove this and give an explicit formula for the limit.
\begin{corollary}
\label{limit2}Let $S\subseteq\mathbb{N}\cup\{0\}$ satisfy $\gcd(S+1)=1$, let
$\mu$ be the unique MME on $X_{S}$, and let $\lambda = e^{h_{top}(X_{S})}$. Then
$\displaystyle\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X_{S})\right\vert
}{\lambda^n}$ exists and is equal to
$\displaystyle \frac{\mu(1)\lambda}{(\lambda-1)^{2}}$ when $S$ is infinite and
$\displaystyle \frac{\mu(1)\lambda (1 - \lambda^{-(\max S) - 1})^2}{(\lambda-1)^{2}}$ when $S$ is finite. \end{corollary}
\begin{proof}
Using the notation of the proof of Theorem~\ref{limit}, we define $w=1$. If $S$ is infinite, it is easy to see that $\left\vert
P_{i}\right\vert =\left\vert S_{i}\right\vert = 1$ for all $i$. As noted above,
$X_{S}$ is entropy minimal and its unique measure of maximal entropy is
mixing, and so the proof of Theorem~\ref{limit} implies that
\[
\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X_{S})\right\vert
}{e^{nh_{top}(X_{S})}}=\frac{\mu(1)}{\lambda}\left( \sum_{i=0}^{\infty}
\frac{1}{\lambda^{i}}\right) ^{2}=\frac{\mu(1)}{\lambda}\left( \frac
{1}{1-\lambda^{-1}}\right) ^{2}=\frac{\mu(1)\lambda}{(\lambda-1)^{2}}.
\]
If instead $S$ is finite (say $M = \max S$), then the reader may check that
$\left\vert P_{i}\right\vert$ and $\left\vert S_{i}\right\vert$ are both equal to $1$
for all $i \leq M$ and equal to $0$ for all $i > M$. Then, the proof of Theorem~\ref{limit} implies that
\[
\lim_{n\rightarrow\infty}\frac{\left\vert L_{n}(X_{S})\right\vert
}{e^{nh_{top}(X_{S})}}=\frac{\mu(1)}{\lambda}\left( \sum_{i=0}^{M}
\frac{1}{\lambda^{i}}\right) ^{2}=\frac{\mu(1)}{\lambda}\left( \frac
{1 - \lambda^{-M-1}}{1-\lambda^{-1}}\right) ^{2} = \frac{\mu(1)\lambda(1 - \lambda^{-M-1})^2}{(\lambda - 1)^2},
\]
completing the proof.
\end{proof}
As noted in \cite{Cl}, a motivation for proving the existence of this limit is to fill a gap from \cite{spandl} for a folklore formula for the topological entropy of $X_{S}$. Two proofs of this formula are presented in \cite{Cl}, and Corollary \ref{maincor} yields yet another proof.
\begin{corollary}
Let $S\subseteq\mathbb{N} \cup \{0\}$ with $\gcd(S+1)=1$. Then $h_{top}(X_{S})=\log\lambda$,
where $\lambda$ is the unique solution of
\[
1=\sum_{n\in S}\lambda^{-n-1}.
\]
\end{corollary}
\begin{proof}
For any $S$-gap shift $X_S$, we can write \[ \left[1\right] = \left(\bigsqcup_{n = 0}^{\infty} \left[ 10^{n}1\right]\right) \cup \{x \in X_S \ : \ x_0 = 1 \textrm{ and } \forall n > 0, x_n = 0\}. \] The latter set is null, since its shifts are disjoint and of equal measure by shift-invariance, and so by Lemma~\ref{synchlem} and Corollary \ref{maincor},
\[
\mu(1)=\sum_{n\in S}\mu(10^{n}1)=\sum_{n\in S}\mu(1)e^{h_{top}(X_{S}
)(-n-1)}\text{.}
\] Dividing both sides by $\mu(1)$ completes the proof.
\end{proof}
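The entropy equation can be solved numerically. For $S=\{0,1\}$ it reads $1=\lambda^{-1}+\lambda^{-2}$, i.e. $\lambda^{2}=\lambda+1$, whose solution $\lambda>1$ is the golden ratio. A bisection sketch for finite $S$, using that the left side is strictly decreasing in $\lambda$ (the solver is mine):

```python
import math

def entropy_lambda(S, lo=1.0 + 1e-9, hi=4.0, iters=200):
    """Solve 1 = sum_{n in S} lambda^{-n-1} for lambda by bisection;
    the sum is strictly decreasing in lambda on (1, infinity)."""
    f = lambda lam: sum(lam ** (-n - 1) for n in S) - 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:       # sum still too large: root lies above mid
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# S = {0, 1}: 1 = 1/lam + 1/lam^2, i.e. lam^2 = lam + 1 -> golden ratio
lam = entropy_lambda({0, 1})
assert abs(lam - (1 + math.sqrt(5)) / 2) < 1e-9
```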
We also prove that for every $S-$gap subshift, the unique measure of maximal entropy has highly constrained values, which are very similar to those of the Parry measure for shifts of finite type.
\begin{theorem}
\label{value}Let $X_{S}$ be an $S-$gap subshift and $\mu$ the measure of
maximal entropy. Then $\mu(1)=\frac{1}{\sum_{n\in S}(n+1)e^{-h_{top}(X_{S})(n+1)}}$,
and for every $w\in L(X_{S})$, there exists a polynomial $f_{w}$ with integer
coefficients so that $\mu(w)=k_{w}+\mu(1)f_{w}(e^{-h_{top}(X_{S})})$ for some
integer $k_{w}$.
\end{theorem}
\begin{proof}
As noted above, $S$-gap shifts are synchronized and entropy minimal, and so
have unique measures of maximal entropy.
Denote by $\mu$ the unique measure of maximal entropy for some $S-$gap
subshift $X_{S}$, and for readability we define
\[
t=e^{-h_{top}(X)}.
\]
Since $X_S$ is entropy minimal, $\mu(1) > 0$, and so by the pointwise ergodic theorem
(applied to $\chi_{[1]}$), $\mu$-a.e. point of $X_S$ contains infinitely many $1$s.
Therefore, we can partition points of $X_S$ according to the closest
$1$ symbols to the left and right of the origin, and represent $X_S$ (up to a null set)
as the disjoint union
$\bigcup_{n\in S} \bigcup_{i=0}^{n} \sigma_i \left[10^{n}1\right]$. Then by
Lemma~\ref{synchlem} and Corollary \ref{maincor},
\begin{align*}
1 & =\sum_{n\in S}(n+1)\mu(10^{n}1)\\
& =\sum_{n\in S}(n+1)\mu(1)t^{n+1}\text{,}
\end{align*}
yielding the claimed formula for $\mu(1)$.
Now we prove the general formula for $\mu(w)$, and will proceed by induction on the length $n$ of $w$.
For the base case $n=1$, the claimed form holds trivially for $\mu(1)$, and $\mu(0) = 1 - \mu(1)$, verifying the theorem.
Now, assume that the theorem holds for every $n\leq N$ for some $N \geq1$. Let
$w\in L_{N-1}(X_{S})$, and we will verify the theorem for $1w1$, $1w0$, $0w1$,
and $0w0$. If $1w1 \notin L(X_{S})$, then
\begin{align*}
\mu(1w1) & =0,\\
\mu(1w0) & =\mu(1w) - \mu(1w1) = \mu(1w),\\
\mu(0w1) & =\mu(w1) - \mu(1w1) = \mu(w1), \text{ and}\\
\mu(0w0) & =\mu(w)-\mu(1w1) - \mu(1w0) - \mu(0w1)= \mu(w) - \mu(1w) - \mu(w1).
\end{align*}
The theorem now holds by the inductive hypothesis.
If $1w1 \in L(X_{S})$, then as before $E_{X_{S}}(1w1)=E_{X_{S}}(1)$, implying
\begin{align*}
\mu(1w1) & = \mu(1) t^{1 + |w|},\\
\mu(1w0) & = \mu(1w) - \mu(1w1) = \mu(1w) - \mu(1) t^{1 + |w|},\\
\mu(0w1) & = \mu(w1) - \mu(1w1) = \mu(w1) - \mu(1) t^{1 + |w|}, \text{
and}\\
\mu(0w0) & = \mu(w) - \mu(1w1) - \mu(1w0) - \mu(0w1) = \mu(w) - \mu(1w) - \mu(w1) +
\mu(1) t^{1 + |w|},
\end{align*}
again implying the theorem by the inductive hypothesis and completing the proof.
\end{proof}
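As an illustrative cross-check of Theorem \ref{value} (ours, not part of the paper), take $S=\{0,1\}$ again: $X_{S}$ is the shift of finite type forbidding $00$, whose unique measure of maximal entropy is the Parry measure of the transition matrix with rows $(0,1)$ and $(1,1)$, and the theorem's formula for $\mu(1)$ agrees with the Parry value $\varphi^{2}/(1+\varphi^{2})$, where $\varphi$ is the golden ratio.

```python
import math

phi = (1 + math.sqrt(5)) / 2   # Perron eigenvalue for S = {0, 1}
t = 1 / phi                    # t = e^{-h_top(X_S)}

# Theorem's formula with S = {0, 1}: mu(1) = 1 / sum_{n in S} (n+1) t^{n+1}.
mu1 = 1 / (1 * t + 2 * t ** 2)

# Parry measure of the matrix [[0, 1], [1, 1]]: the left and right Perron
# eigenvectors are both (1, phi), so the stationary weight of state 1 is
# phi * phi / (1 + phi * phi).
parry_mu1 = phi ** 2 / (1 + phi ** 2)
print(mu1, parry_mu1)  # both equal (phi + 1) / (phi + 2)
```

Both expressions simplify to $(\varphi+1)/(\varphi+2)\approx 0.7236$.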
\section{$\mathbb{G}$-subshifts}
\label{gsec}
Throughout this section, $\mathbb{G}$ will denote a countable torsion-free amenable group (i.e. for $g\neq e$, $g^{n}=e$ only if $n=0$) generated by a finite set $G=\left\{ g_{1},...,g_{d}\right\}$.
\subsection{Main result}
For any $N=(N_{1} ,...,N_{d})\in\mathbb{Z}_{+}^{d}$, we define $\mathbb{G}_{N}$ to be the subgroup generated by $\left\{ g_{1}^{N_{1}},...,g_{d}^{N_{d}}\right\} ,$ and use $\faktor{\mathbb{G}}{\mathbb{G}_{N}}$ to represent the collection $\left\{g\cdot\mathbb{G}_{N}:g\in\mathbb{G}\right\}$ of left cosets of $\mathbb{G}_N$. Clearly, $\left\vert \faktor{\mathbb{G}}{\mathbb{G}_{N}}\right\vert =N_{1}N_{2}\cdots N_{d}$.
We again must begin with some relevant facts and definitions. The following structural lemma is elementary, and we leave the proof to the reader.
\begin{lemma} \label{one} For any amenable $\mathbb{G}$ and $F\Subset\mathbb{G}$, there exists $N=(N_{1},...,N_{d})\in\mathbb{Z}_{+}^{d}$ such that for every nonidentity $g \in\mathbb{G}_{N}$, $g\cdot F\cap F=\varnothing.$
\end{lemma}
As in the $\mathbb{Z}$ case, if $v,w\in L_{F}(\mathcal{A}^{\mathbb{G}})$ for some $F\Subset\mathbb{G}$, we define the function $O_{v}:L(\mathcal{A}^{ \mathbb{G}})\rightarrow \mathcal{P}(\mathbb{G})$ which sends a word to the set of locations where $v$ appears as a subword, i.e. \begin{equation*} O_{v}(u):=\left\{ g\in\mathbb{G}:\sigma_{g}(u) \in [v]\right\} . \end{equation*} We also define the function $R_{u}^{v\rightarrow w}:O_{v}(u)\rightarrow L( \mathcal{A}^{\mathbb{G}})$, where $R_{u}^{v\rightarrow w}(g)$ is the word obtained by replacing the occurrence of $v$ at $g \cdot F$ within $u$ by $w$.
We now again must define a way to replace many occurrences of $v$ by $w$ within a word $u$, but will do this by restricting the sets of locations where the replacements occur rather than the pairs $(v,w)$. We say $S\subset\mathbb{G}$ is $F$-\textbf{sparse} if $g\cdot F\cap g^{\prime}\cdot F=\varnothing$ for every unequal pair $g,g^{\prime}\in S$. When $v,w \in L_{F}(X)$ and $S$ is $F$-sparse, we may simultaneously replace the occurrences of $v$ at the locations $g \cdot F$, $g \in S$, by $w$ without any of the complications dealt with in the one-dimensional case, and we denote the resulting word by $R_{u}^{v\rightarrow w}(S)$. Formally, $R_{u}^{v\rightarrow w}(S)$ is just the image of $u$ under the composition of $R_{u}^{v \rightarrow w}(s)$ over all $s \in S$.
The following lemmas are much simpler versions of Lemmas \ref{injective} and \ref{preimage} for $F$-sparse sets.
\begin{lemma} \label{Ginjective} For any $F$, $v,w\in L_{F}(X)$, and $F$-sparse set $T \subseteq O_{v}(u)$, $R_{u}^{v\rightarrow w}$ is injective on subsets of $T$. \end{lemma}
\begin{proof} Fix $F, u, v, w, T$ as in the lemma. If $S \neq S' \subseteq T$, then either $S \setminus S'$ or $S' \setminus S$ is nonempty; assume without loss of generality that it is the former. Then, if $s \in S \setminus S'$, by definition $(R_{u}^{v\rightarrow w}(S))_{s \cdot F} = w$ and $(R_{u}^{v\rightarrow w}(S'))_{s \cdot F} = v$, and so $R_{u}^{v\rightarrow w}(S) \neq R_{u}^{v\rightarrow w}(S')$. \end{proof}
\begin{lemma} \label{Gpreimage} For any $F$ and $v,w\in L_{F}(X)$, any $F$-sparse set $T
\subseteq O_{v}(u)$, any $u^{\prime}$, and any $m \leq|T \cap O_{w}(u^{\prime })|$, \begin{equation*}
|\{(u, S) \ : \ S \text{ is $F$-sparse}, |S| = m, S \subseteq T, u^{\prime}=
R_{u}^{v\rightarrow w}(S)\}| \leq{\binom{|T \cap O_{w}(u^{\prime})|}{m}}. \end{equation*} \end{lemma}
\begin{proof} Fix any such $F, u', v, w, T, m$ as in the lemma. Clearly, for any $S$, $S \subseteq O_w(R_{u}^{v\rightarrow w}(S))$, and so if $R_{u}^{v\rightarrow w}(S) = u'$, then $S \subseteq O_w(u')$. There are only
${\binom{|T \cap O_{w}(u^{\prime})|}{m}}$ choices for $S \subseteq T \cap O_w(u')$ with $|S| = m$, and an identical argument to that of Lemma~\ref{preimage} shows that for each such $S$, there is only one $u$ for which $R_{u}^{v\rightarrow w}(S) = u'$. \end{proof}
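To illustrate the sparse replacement map in the simplest setting ($\mathbb{G}=\mathbb{Z}$, $F=\{0,1,2\}$), the injectivity asserted in Lemma \ref{Ginjective} can be checked by brute force on a short word; the code and names below are our own illustration, not the general group-theoretic construction.

```python
from itertools import combinations

def replace_at(u, w, S):
    """Replace the length-|w| block of u starting at each s in S by w.
    S is assumed F-sparse: the intervals [s, s + |w|) are pairwise disjoint."""
    u = list(u)
    for s in S:
        u[s:s + len(w)] = w
    return "".join(u)

u, v, w = "abaabaaba", "aba", "cba"
occ = [i for i in range(len(u) - len(v) + 1) if u[i:i + len(v)] == v]
# occ == [0, 3, 6]; the intervals [0,3), [3,6), [6,9) are pairwise
# disjoint, so occ itself is F-sparse, as is every subset of it.

images = {frozenset(S): replace_at(u, w, S)
          for r in range(len(occ) + 1)
          for S in combinations(occ, r)}
# Injectivity: distinct sparse subsets give distinct words, since the
# replaced blocks differ at their leftmost coordinate.
print(len(images), len(set(images.values())))  # 8 8
```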
Whenever $v,w\in L_{F}(X)$ and $E_{X}(v)\subseteq E_{X}(w)$, clearly $ R_{u}^{v\rightarrow w}(S)\in L(X)$ for any $F$-sparse set $S\subseteq O_{v}(u)$; this, along with the use of Lemma \ref{one}, will be the keys to the counting arguments used to prove our main result for $\mathbb{G}$-subshifts.
\begin{theorem} \label{Gtheorem} Let $X$ be a $\mathbb{G}$-subshift, $\mu$ a measure of maximal entropy of $X$, $F\Subset\mathbb{G}$, and $v,w\in L_{F}(X).$ If $E_{X}(v)\subseteq E_{X}(w)$ then \begin{equation*} \mu(v)\leq\mu(w). \end{equation*} \end{theorem}
\begin{proof} Take $\mathbb{G}$, $X$, $\mu$, $F$, $v$, and $w$ as in the theorem, and suppose for a contradiction that $\mu(v) > \mu(w)$. Choose any $\delta \in \mathbb{Q}_{+}$ with $\delta< \frac{\mu(v) - \mu(w)}{5}$. Let $F_n$ be a Følner sequence satisfying Theorem~\ref{SMBthm}. For every $n\in\mathbb{Z}_{+},$ we define \begin{equation*} S_{n}:=\left\{ u\in L_{F_{n}}(X):\left\vert O_{v}(u)\right\vert \geq\left\vert F_{n}\right\vert (\mu(v)-\delta)\text{ and }\left\vert O_{w}(u)\right\vert \leq\left\vert F_{n}\right\vert (\mu(w)+\delta)\right\} . \end{equation*}
By the pointwise ergodic theorem (applied to $\chi_{[v]}$ and $\chi_{[w]}$), $\mu(S_{n})\rightarrow1$, and then by Corollary~\ref{SMBcor},
\begin{equation}
\lim_{n\rightarrow\infty}\frac{\log|S_{n}|}{|F_{n}|}=h_{top}(X). \label{Snbound} \end{equation}
Let $N\in\mathbb{Z}_{+}^{d}$ be a tuple obtained from Lemma \ref{one} (applied to $F$) that is minimal in the sense that decreasing any coordinate yields a tuple which no longer satisfies the conclusion of the lemma.
We note that for every $u\in S_{n}$, $|O_{v}(u)|-|O_{w}(u)|>3\delta|F_{n}|$. Therefore, for every $u\in S_{n}$, there exists $h(u)\in \faktor{ \mathbb{G}}{\mathbb{G}_{N}}$ such that \begin{equation} \left\vert O_{v}(u)\cap h(u) \right\vert -\left\vert O_{w}(u)\cap h(u)
\right\vert >\frac{3\delta}{M} |F_{n}|\text{,} \label{ineq} \end{equation}
where $M=\left| \faktor{\mathbb{G}}{\mathbb{G}_{N}}\right| .$
For every $u\in S_{n}$, define $k_{n}(u)\in\mathbb{N}$ satisfying $\left\vert O_{v}(u)\cap h(u)\right\vert \in \left[ k_{n}(u)|F_{n}|\frac{\delta}{M},\ (k_{n}(u)+1)|F_{n}|\frac{\delta}{M}\right]$.
Using $M=\left| \faktor{\mathbb{G}}{\mathbb{G}_{N}}\right| $ and the fact that $3\leq k_{n}(u)\leq\frac{M}{\delta}$, we may choose $S_{n}^{\prime}\subseteq S_{n}$ with $|S_{n}^{\prime}|\geq\frac{|S_{n}|}{M^{2}/\delta}$, $h_{n} \in\faktor{\mathbb{G}}{\mathbb{G}_{N}}$ and $k_{n}\in\mathbb{N}$ such that for every $u\in S_{n}^{\prime}$ we have $h(u)=h_{n}$ and $k_{n}(u)=k_{n}$. This implies that for every $u\in S_{n}^{\prime}$ \begin{align*}
\left\vert O_{v}(u)\cap h_{n}\right\vert & \geq k_{n}|F_{n}|\frac{\delta}{M}\text{, and} \\
\left\vert O_{w}(u)\cap h_{n}\right\vert & \leq(k_{n}-2)|F_{n}|\frac{\delta}{M}\text{ (using (\ref{ineq}) and }\left\vert O_{v}(u)\cap h_{n}\right\vert \leq(k_{n}+1)|F_{n}|\tfrac{\delta}{M}\text{).} \end{align*}
By the pigeonhole principle, we may pass to a sequence on which $h_{n} = h$ and $k_{n} = k$ are constant, and for the rest of the proof consider only $n$ in this sequence. Let $\varepsilon\in\mathbb{Q}_{+}$ with $\varepsilon <\frac{\delta}{|F \cdot F^{-1}|}$. For each $u\in S_{n}^{\prime}$, we define \begin{equation*} A_{u}:=\left\{ R_{u}^{v\rightarrow w}(S):S\subseteq O_{v}(u)\cap h \text{ and } \left\vert S\right\vert =\varepsilon\left\vert F_{n}\right\vert /M\right\} \end{equation*} (without loss of generality we may assume $\varepsilon\left\vert F_{n}\right\vert /M$ is an integer by restricting to suitable $n$).
Since $E_{X}(v)\subseteq E_{X}(w)$, we have that $A_{u}\subset L(X).$ By Lemma~\ref{Ginjective}, \begin{equation*}
|A_{u}| \geq{\binom{|O_{v}(u)\cap h|} {\varepsilon\left\vert F_{n}\right\vert /M}} \geq{\binom{\delta k |F_{n}|/M} {\varepsilon\left\vert F_{n}\right\vert /M}}. \end{equation*}
On the other hand, for every $u^{\prime}\in\bigcup_{u\in S_{n}^{\prime}}A_{u}$, we have that \begin{equation*}
\left\vert O_{w}(u^{\prime})\cap h\right\vert \leq\frac{|F_{n}|}{M}\left(
(k_{n}-2)\delta+\varepsilon|F \cdot F^{-1}|\right) \leq\frac{\delta|F_{n}|}{M} (k_{n}-1). \end{equation*}
(here, we use $\left\vert O_{w}(u)\cap h(u)\right\vert \leq(k_{n}-2)|F_{n}| \frac{\delta}{M}$ plus $\left\vert S\right\vert =\varepsilon \left\vert F_{n}\right\vert /M$ and the simple fact that a replacement of $v$ by $w$ in
$u$ can create at most $|F \cdot F^{-1}|$ new occurrences of $w$.) Therefore, by Lemma~\ref{Gpreimage}, \begin{equation*} \left\vert \left\{ u\in S_{n}^{\prime}:u^{\prime}\in A_{u}\right\}
\right\vert \leq{\binom{\delta(k_{n}-1)|F_{n}|/M}{\varepsilon\left\vert F_{n}\right\vert /M}.} \end{equation*}
By combining the two inequalities, we see that \begin{equation}
|L_{F_{n}}(X)|\geq\left\vert \bigcup_{u\in S_{n}^{\prime}}A_{u}\right\vert
\geq|S_{n}^{\prime}|{\binom{\delta k_{n}|F_{n}|/M}{\varepsilon\left\vert F_{n}\right\vert /M}}{\binom{\delta(k_{n}-1)|F_{n}|/M}{\varepsilon\left\vert F_{n}\right\vert /M}}^{-1}. \end{equation} Now, we take logarithms of both sides, divide by $\left\vert F_{n}\right\vert $, and let $n$ approach infinity (along the earlier defined sequence). Then we use the definition of entropy, the inequality $
|S_{n}^{\prime}|\geq \frac{|S_{n}|}{M^{2}/\delta}$, (\ref{Snbound}), and Stirling's approximation to yield
\begin{align*} h_{top}(X) & \geq h_{top}(X)+\frac{\varepsilon}{M}\bigg[\left( \frac{\delta k }{\varepsilon}\log\frac{\delta k}{\varepsilon}-\left( \frac{\delta k}{ \varepsilon}-1\right) \log\left( \frac{\delta k}{\varepsilon}-1\right) \right) \\ & -\left( \frac{\delta(k-1)}{\varepsilon}\log\frac{\delta(k-1)}{\varepsilon } -\left( \frac{\delta(k-1)}{\varepsilon}-1\right) \log\left( \frac { \delta(k-1)}{\varepsilon}-1\right) \right) \bigg]. \end{align*}
Since the function $x \log x - (x - 1) \log(x-1)$ has derivative $\log\frac{x}{x-1}>0$ and is therefore strictly increasing for $x > 1$ (and here $\frac{\delta(k-1)}{\varepsilon}>k-1\geq 2>1$ since $\varepsilon<\delta$), the right-hand side of the above is strictly greater than $h_{top}(X)$, a contradiction. Therefore, our original assumption does not hold and hence $\mu(v) \leq\mu(w)$. \end{proof}
\subsection{Applications to hereditary subshifts}
\label{hered}
One class of $\mathbb{G}$-subshifts with many pairs of words satisfying $E_{X}(v)\subsetneq E_{X}(w)$, allowing for the use of Theorem~\ref{Gtheorem}, is the class of hereditary subshifts (introduced in \cite{KL1}).
A partial order $\leq$ on a finite set $\mathcal{A}$ induces a partial order on $\mathcal{A}^{n}$ and $\mathcal{A}^{\mathbb{G}}$ (coordinatewise) which will also be denoted by $\leq$. When $\mathcal{A}=\left\{ 0,1,\ldots,m\right\}$ we will always use the linear order $0 \leq 1 \leq\ldots\leq m$.
\begin{definition}
Let $X\subseteq\mathcal{A}^{\mathbb{G}}$ be a subshift and $\leq$ a partial
order on $\mathcal{A}$. We say $X$ is $\leq$-\textbf{hereditary} (or simply
\textbf{hereditary}) if for every $x\in\mathcal{A}^{\mathbb{G}}$ for which there
exists $y\in X$ with $x\leq y$, we have $x\in X.$ \end{definition}
Examples of hereditary shifts include $\beta$-shifts \cite{Kw}, $\mathscr{B}$-free shifts \cite{KLW}, spacing shifts \cite{LZ}, multi-choice shifts \cite{LMP}, and bounded density shifts \cite{S}. Many of these examples have a unique measure of maximal entropy, but not every hereditary subshift has this property (see \cite{KLW}).
This definition immediately implies that whenever $x \leq y$ for $x, y \in L(X)$, $E_{X}(y) \subseteq E_{X}(x)$, yielding the following corollary of Theorem~\ref{Gtheorem}.
\begin{corollary}
Let $X$ be a $\leq$-hereditary $\mathbb{G}$-subshift, $\mu$ a measure of maximal entropy,
and $u,v\in L_{n}(X)$ for some $n \in\mathbb{N}.$ If $u\leq v$ then
$\mu(v)\leq\mu(u).$ \end{corollary}
In particular, if $\mathcal{A}=\left\{ 0,1,\ldots,m\right\}$, then $\mu(m)\leq\mu(m-1)\leq\cdots\leq\mu(1)\leq\mu(0)$.
Having $u\leq v$ is sufficient but not necessary for $E_{X}(v)\subseteq E_{X}(u)$. In particular, for $\beta$-shifts and bounded density shifts, there are many other pairs (with different lengths) where this happens. This is due to an additional property satisfied by these hereditary shifts.
\begin{definition}
Let $X\subseteq\left\{ 0,1,...,m\right\} ^{\mathbb{Z}}$ be a hereditary
$\mathbb{Z}$-subshift. We say $X$ is \textbf{$i$-hereditary} if for every $u\in L_{n}(X)$
and $u^{\prime}$ obtained by inserting a $0$ somewhere in $u$, it is the case
that $u^{\prime}\in L_{n+1}(X)$. \end{definition}
In particular, $\beta$-shifts and bounded density shifts are $i$-hereditary, but not every spacing shift is $i$-hereditary. It is immediate that any $i$-hereditary shift satisfies $E_{X}(0^{j}) \subseteq E_{X}(0^{k})$ whenever $j \geq k$. We can obtain equality if we additionally assume the specification property.
\begin{definition}
A $\mathbb{Z}$-subshift $X$ has the \textbf{specification property at distance $N$} if for every $u,w\in L(X)$ there exists $v\in L_{N}(X)$ such
that $uvw\in L(X).$ \end{definition}
Clearly, if $X$ is hereditary and has specification property at distance $N$, then $u0^{N}w$ and $u0^{N+1}w\in L(X)$ for all $u,w\in L(X)$, and so in this case $E_{X}(0^{N})=E_{X}(0^{N+1})$. We then have the following corollary of Theorem~\ref{hardcase}.
\begin{corollary}
\label{hereditary}Let $X\subseteq\left\{ 0,1,\ldots,m\right\} ^{\mathbb{Z}}$ be
an $i$-hereditary $\mathbb{Z}$-subshift and $\mu$ a measure of maximal entropy. Then for every $n\in\mathbb{Z}_{+}$
\[
h_{top}(X)\geq\log\frac{\mu(0^{n})}{\mu(0^{n+1})}.
\]
Furthermore, if $X$ has the specification property at distance $N$, then
\[
h_{top}(X)=\log\frac{\mu(0^{N})}{\mu(0^{N+1})}.
\]
\end{corollary}
We note that if $X$ has the specification property at distance $N$, then it also has it at any larger distance. Therefore, the final formula can be rewritten as \begin{multline*} h_{top}(X)=\lim_{N\rightarrow\infty}\log\frac{\mu(0^{N})}{\mu(0^{N+1})}
=\lim_{N\rightarrow\infty}-\log\mu(x(0)=0\ |\ x_{[-N,-1]}=0^{N})\\
=-\log\mu(x(0)=0\ |\ x_{(-\infty,-1]}=0^{\infty}), \end{multline*} recovering a formula (in fact a more general one for topological pressure of $\mathbb{Z}^{d}$ SFTs) proved under different hypotheses in \cite{MP}.
\end{document} |
\begin{document}
\title[Genealogies of two neutral loci after a selective sweep] {Genealogies of two linked neutral loci after a selective sweep in a large population of stochastically varying size}
\author{Rebekka Brink-Spalink} \address{Institute for Mathematical Stochastics, Georg-August-Universit\"at G\"ottingen, Goldschmidtstr. 7, 37077 G\"ottingen, Germany} \email{[email protected]}
\author{Charline Smadi} \address{Irstea, UR LISC, Laboratoire d'Ingénierie des Systèmes Complexes, 9 avenue Blaise Pascal-CS 20085, 63178 Aubière, France and Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, UK} \email{[email protected]}
\keywords{ Birth and death process;
Coalescent;
Coupling;
Eco-evolution;
Genetic hitch-hiking;
Selective sweep} \subjclass[2010]{92D25, 60J80, 60J27, 92D15, 60F15.}
\maketitle
\begin{abstract} We study the impact of a hard selective sweep on the genealogy of partially linked neutral loci in the vicinity of the positively selected allele. We consider a sexual population of stochastically varying size and, focusing on two neighboring loci, derive an approximate formula for the neutral genealogy of a sample of individuals taken at the end of the sweep. Individuals are characterized by ecological parameters depending on their genetic type and governing their growth rate and interactions with other individuals (competition). As a consequence, the ``fitness'' of an individual depends on the population state and is not an intrinsic characteristic of individuals. We provide deep insight into the dynamics of the mutant and wild type populations during the different stages of a selective sweep. \end{abstract}
\section*{Introduction} We study the hitchhiking effect of a beneficial mutation in a sexual haploid population of stochastically varying size. We assume that a mutation occurs in one individual of a monomorphic population
and that individuals carrying the new allele $a$ are better adapted to the current environment and spread in the population. We suppose that the mutant allele $a$ eventually replaces the resident one, $A$, and study the influence of this fixation on the neutral gene genealogy of a sample taken at the end of the selective sweep. That is, in each sampled individual we consider the same set of partially linked loci including the locus where the advantageous mutation occurred. We then trace back the ancestral lineages of all loci in the sample until the beginning of the sweep and update the genetic relationships whenever a coalescence or a recombination changes the ancestry of one or several loci. Our main result is the derivation of a sampling formula for the ancestral partition of two neutral loci situated in the vicinity of the selected allele.
The first studies of hitchhiking, initiated by Maynard Smith and Haigh \cite{smith1974hitch}, have modeled the mutant population size as the solution of a deterministic logistic equation \cite{ohta1975effect,kaplan1989hitchhiking,stephan1992effect, stephan2006hitchhiking}. Barton \cite{barton1998effect}
was the first to point out the importance of the stochasticity of the mutant population size. Following this paper, a series of works took into account this randomness during the sweep. In \cite{durrett2004approximating,schweinsberg2005random} Schweinsberg and Durrett based their analysis on a Moran model with selection and recombination, while Etheridge and coauthors \cite{etheridge2006approximate} worked with the diffusion limit of such discrete population models. Then Brink-Spalink \cite{brink2014multsweep}, Pfaffelhuber and Studeny \cite{pfaffelhuber2007approximating} and Leocard \cite{stephanie2009selective} extended the respective findings of these two approaches for the ancestry of one neutral locus to the two-locus (resp. multiple-locus) case.
However, in all these models, the population size was constant and each individual had a ``fitness'' only dependent on its type and not on {the population state}. The fundamental idea of Darwin is that the individual traits have an influence on the interactions between individuals, which in turn generate selection on the different traits. In this paper we aim at modeling precisely these interactions by extending the model introduced in \cite{smadi2014eco} where the author considered only one neutral locus. Such an eco-evolutionary approach has been introduced by Metz and coauthors \cite{metz1996adaptive} and has been made rigorous in the seminal paper of Fournier and M\'el\'eard \cite{fournier2004microscopic}. Then it was further developed by Champagnat, M\'el\'eard and coauthors (see \cite{champagnat2006microscopic, champagnat2011polymorphic,champagnat2014adaptation} and references therein) for the haploid asexual case and by Collet, M\'el\'eard and Metz \cite{collet2011rigorous} and Coron and coauthors \cite{coron2013slow} for the diploid sexual case.
The population dynamics, described in Section \ref{modelandresults}, is a multitype birth and death Markov process with competition. We represent the carrying capacity of the underlying environment by a scaling parameter $K\in \mathbb{N}$ and state results in the limit for large $K$. In \cite{champagnat2006microscopic} it was shown that such invasion processes can be divided into three phases (see Figure \ref{fig3loci}): an
initial phase in which the fraction of $a$-individuals does not exceed a fixed value $\varepsilon>0$ and where the dynamics of the wild type
population is nearly undisturbed by the invading type. A second phase where both types account for a non-negligible percentage of the
population and where the dynamics of the population can be well approximated by a deterministic competitive Lotka-Volterra system.
And finally a third phase where the roles of the types are interchanged and the wild type population is near extinction. The durations of the first and third phases of the selective sweep are of order $\log K$, whereas the second phase only lasts an amount of time of order $1$. This three-phase decomposition is commonly encountered in population genetics models and dates back to \cite{ohta1975effect}.
In Section \ref{sectioncoupling} we precisely describe these three phases and introduce two couplings of the population process, key tools to study the dynamics of the $A$- and $a$-populations. Section \ref{sectionproofmain} is devoted to the proofs of the main theorems on the ancestral partition of the two neutral alleles. Sections \ref{technicalsection} to \ref{secondphase} are dedicated to the proofs of auxiliary statements. In Section \ref{sectioncomparison} we compare our findings with previous results. Finally, we state technical results needed in the proofs in the Appendix.
\section{Model and results} \label{modelandresults}
We consider a three locus model: one locus under selection, $SL$, with alleles in $\mathcal{A}:=\{A,a\}$ and two neighboring neutral loci $N1$ and $N2$ with alleles in the finite sets $\mathcal{B}$ and $\mathcal{C}$ respectively. We denote by $\mathcal{E}=\mathcal{A}\times \mathcal{B} \times \mathcal{C}$ the type space. Two geometric alignments are possible: either the two neutral loci are adjacent (geometry $SL-N1-N2$), or they are separated by the selected locus (geometry $N1-SL-N2$). We introduce the model and notations for the adjacent geometry, their analogs for the separated one can be deduced in a straightforward manner.
Whenever a reproduction event takes place, recombinations between $SL$ and $N1$ or between $N1$ and $N2$ occur independently with probabilities $r_{1}$ and $r_{2}$, respectively. These probabilities depend on the parameter $K$, representing the environment's carrying capacity, but for the purpose of readability we do not indicate this dependence. We assume a regime of weak recombination:
\begin{align}\label{assrK}
\limsup_{K \rightarrow \infty}\ r_{j} \log K < \infty, \ j=1,2. \end{align} This is motivated by Theorem 2 in \cite{smadi2014eco}, which states that this is the right scale at which to observe a signature on the neutral allele distribution. If the recombination probabilities are larger (neutral loci more distant from the selected locus), there are many recombinations and the sweep does not modify the neutral diversity at these sites. Recombinations may lead to a mixing of the parental genetic material in the newborn, and hence, parents with types $\alpha \beta \gamma$ and $\alpha' \beta' \gamma'$ in $\mathcal{E}$ can generate the following offspring: \begin{align*} \renewcommand{\arraystretch}{1.4}{ \begin{array}{ccc}
\text{possible genotype} & \text{event} & \text{{probability}}\\
\alpha \beta \gamma,\alpha' \beta' \gamma' & \text{no recombination } & (1-r_{1})(1-r_{2})\\ \alpha \beta' \gamma',\alpha' \beta \gamma & \text{one recombination {between $SL$ and $N1$}} & r_{1}(1-r_{2})\\
\alpha \beta \gamma',\alpha' \beta' \gamma & \text{one recombination {between $N1$ and $N2$}} & (1-r_{1}) r_{2}\\ \alpha \beta' \gamma,\alpha' \beta \gamma' & \text{two recombinations} & r_{1} r_{2} \end{array}} \end{align*} We will see in the sequel that the probability to witness a birth event with two simultaneous recombinations in the neutral genealogy of a uniformly chosen individual is very small.
As we assume the loci $N1$ and $N2$ to be neutral, the ecological parameters of an individual only depend on the allele $\alpha$ at the locus under selection. Let us denote by $f_\alpha$ the fertility of an individual with type $\alpha$. In the spirit of \cite{collet2011rigorous}, such an individual gives birth at rate $f_\alpha$ (female role), and has a probability proportional to $f_\alpha$ to be chosen as the father in a given birth event (male role). Denoting the complementary type of the allele $\alpha$ by $\bar{\alpha}$ we get the following result for the birth rate of individuals of type $\alpha \beta\gamma \in \mathcal{E}$: \begin{multline}\label{birthrate}
b^{K}_{\alpha \beta \gamma }(n) = (1-r_1)(1-r_2) f_{\alpha}n_{\alpha \beta \gamma} + r_1(1-r_2)f_{\alpha}n_{\alpha}\frac{f_{\alpha}n_{\alpha \beta \gamma} +
f_{\bar{\alpha}}n_{\bar{\alpha}\beta \gamma}}{f_{a}n_a+f_An_A} + \\(1-r_1)r_2f_{\alpha} \frac{\sum_{(\beta',\gamma')\in (\mathcal{B},\mathcal{C})}n_{\alpha \beta \gamma'}( f_{\alpha}n_{\alpha \beta' \gamma}+ f_{\bar{\alpha}}n_{\bar{\alpha}\beta' \gamma})}{f_{a}n_a+f_An_A} +r_1r_2 f_{\alpha} \frac{\sum_{(\beta',\gamma')\in (\mathcal{B},\mathcal{C})}n_{\alpha \beta' \gamma }( f_{\alpha}n_{\alpha \beta\gamma' }+ f_{\bar{\alpha}}n_{\bar{\alpha}\beta \gamma'})}{f_{a}n_a+f_An_A}, \end{multline} where $n_{\alpha \beta \gamma}$ (resp. $n_\alpha$) denotes the current number of $\alpha \beta \gamma$-individuals (resp. $\alpha$-individuals) and $n=(n_{\alpha \beta \gamma}, (\alpha, \beta, \gamma) \in \mathcal{E})$ is the current state of the population. An $\alpha$-individual can die either from a natural death (rate $D_\alpha$), or from type-dependent competition:
the parameter $C_{\alpha,\alpha'}$ models the impact an individual of type $\alpha'$ has on an individual of type $\alpha$, where $(\alpha,\alpha') \in \mathcal{A}^2$. The strength of the competition also depends on the carrying capacity $K$. This results in the total death rate of individuals carrying the alleles $\alpha \beta \gamma \in \mathcal{E}$: \begin{align}\label{deathrate}
d^{K}_{\alpha \beta \gamma }(n) =\; & \left( D_{\alpha} + \frac{C_{\alpha,A}}{K}n_{A} + \frac{C_{\alpha,a}}{K} n_{a} \right) n_{\alpha \beta \gamma}. \end{align} Hence the population process \begin{align*} N^K= (N^{K}(t), t \geq 0)=\Big((N_{\alpha \beta \gamma}^{K}(t))_{(\alpha, \beta, \gamma)\in \mathcal{E}}, t \geq 0\Big), \end{align*} where $N_{\alpha \beta \gamma}^{K}(t)$ denotes the number of $\alpha \beta \gamma$-individuals at time $t$, is a multitype birth and death process with rates given in \eqref{birthrate} and \eqref{deathrate}. We will often work with the trait population process $( (N_A^K(t),N_a^K(t)),t \geq 0)$, where $N_{\alpha}^{K}(t)$ denotes the number of $\alpha$-individuals at time $t$. This is also a birth and death process with birth and death rates given by: \begin{align}\label{def:totalbd}
b^{K}_{\alpha }(n) &=\sum_{(\beta,\gamma) \in \mathcal{B}\times \mathcal{C}} b^{K}_{\alpha \beta \gamma }(n) = f_{\alpha} n_{\alpha}
\\
d^{K}_{\alpha }(n) &=\sum_{(\beta,\gamma) \in \mathcal{B}\times \mathcal{C}} d^{K}_{\alpha \beta \gamma }(n) =\Big( D_{\alpha} + \frac{C_{\alpha,A}}{K} n_{A} + \frac{C_{\alpha,a}}{K}n_{a}\Big) n_{\alpha}.\nonumber \end{align} As a quantity summarizing the advantage or disadvantage a mutant with allele type $\alpha$ has in an $\bar{\alpha}$-population at equilibrium, we introduce the so-called invasion fitness $S_{\alpha \bar{\alpha}}$ through \begin{equation} \label{deffitinv3loci}
S_{\alpha \bar{\alpha}} := f_{\alpha} -D_{\alpha} - C_{\alpha,\bar{\alpha}}\bar{n}_{\bar{\alpha}}, \end{equation} where the equilibrium density $\bar{n}_{{\alpha}}$ is defined by \begin{align}\label{defnbar} \bar{n}_{{\alpha}}: =\frac{f_{\alpha} -D_{\alpha}}{C_{\alpha,{\alpha}}}. \end{align} The role of the invasion fitness $S_{\alpha \bar{\alpha}}$ and the definition of the equilibrium density $\bar{n}_{{\alpha}}$ follow from the properties of the two-dimensional competitive Lotka-Volterra system: \begin{equation} \label{S}
\dot{n}_\alpha^{(z)}=(f_\alpha-D_\alpha-C_{\alpha,A}n_A^{(z)}-C_{\alpha,a}n_a^{(z)})n_\alpha^{(z)},\quad z \in \mathbb{R}_+^\mathcal{A}, \quad n_\alpha^{(z)}(0)=z_\alpha,\quad \alpha \in \mathcal{A}.
\end{equation} If we assume \begin{equation}\label{defnbara}
\bar{n}_{A}>0,\quad \bar{n}_{a}>0,\quad \text{and} \quad S_{Aa}<0<S_{aA}, \end{equation} then $\bar{n}_{\alpha}$ is the equilibrium size of a monomorphic $\alpha$-population and the system \eqref{S} has a unique stable equilibrium $(0,\bar{n}_a)$ and two unstable steady states $(\bar{n}_A,0)$ and $(0,0)$. Thanks to Theorem 2.1 p. 456 in \cite{ethiermarkov} we can prove that if $N_A^K(0)$ and $N_a^K(0)$ are of order $K$ and $K$ is large, the rescaled process $(N_A^K/K,N_a^K/K)$ is very close to the solution of \eqref{S} during any finite time interval.
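For intuition, here is a minimal numerical sketch of the system \eqref{S} (ours, with made-up parameter values chosen so that \eqref{defnbara} holds): a rare mutant invades a resident population at its equilibrium density $\bar{n}_A$, and the solution converges to the stable equilibrium $(0,\bar{n}_a)$.

```python
# Hypothetical parameters: nbar_A = 1, nbar_a = 2, S_Aa = -1 < 0 < S_aA = 1.
fA, fa, DA, Da = 1.0, 2.0, 0.0, 0.0
CAA = CAa = CaA = Caa = 1.0

def step(nA, na, dt):
    """One explicit Euler step of the competitive Lotka-Volterra system."""
    dnA = (fA - DA - CAA * nA - CAa * na) * nA
    dna = (fa - Da - CaA * nA - Caa * na) * na
    return nA + dt * dnA, na + dt * dna

nA, na = 1.0, 0.05          # resident at equilibrium, rare mutant
dt, T = 1e-3, 40.0
for _ in range(int(T / dt)):
    nA, na = step(nA, na, dt)
print(nA, na)               # close to the stable equilibrium (0, 2)
```

The trajectory reproduces the three phases described above: a long quasi-exponential growth of the mutant density, a fast Lotka-Volterra transition, and the decay of the resident density toward $0$.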
The invasion fitness $S_{aA}$ corresponds to the \textit{per capita} initial growth rate of the mutant $a$ when it appears in a monomorphic population of individuals $A$ at their equilibrium size $\bar{n}_AK$. Hence the dynamics of the allele $a$ is very dependent on the properties of the system \eqref{S} and it is proven in \cite{champagnat2006microscopic} that under Condition \eqref{defnbara} one mutant $a$ has a positive probability to fix in the population and replace a wild type $A$. More precisely, if we use the convention
\begin{equation}\label{convP} \mathbb{P}^{(K)}(.):=\mathbb{P}(.|N_A^K(0)=\lfloor \bar{n}_A K \rfloor,N_a^K(0)=1), \end{equation} Equation (39) in \cite{champagnat2006microscopic} states that \begin{equation}\label{probafix} \lim_{K \to \infty}\mathbb{P}^{(K)}(\text{Fix}^K)=\frac{S_{aA}}{f_a}=:s, \end{equation} where $s$ is called the rescaled invasion fitness, and the extinction time of the $A$-population and the event of fixation of the $a$-allele are rigorously defined as follows: \begin{align}\label{def:extfix}
T_{\text{ext}}^K:=\inf \big\{t \geq 0 : N_{A}^{K}(t) = 0 \big\}\quad \text{and}\quad \text{Fix}^K:=\big\{T_{\text{ext}}^K<\infty, N_{a}^{K}(T_{\text{ext}}^K)>0\big\}. \end{align} From this point onward, we fix $d$ in $\mathbb{N}$. We aim at quantifying the effect of the selective sweep on the neutral diversity. Our method consists in tracing back the neutral genealogies of $d$ individuals sampled uniformly at the end of the sweep (time $T_{\text{ext}}^K$) until time $0$. Two event types (see Definition \ref{defcoalreco}) may affect the relationships of the sampled neutral alleles: coalescences correspond to the merging of the neutral genealogies of two individuals at one or two neutral loci, and recombinations redistribute the selected and neutral alleles of one individual into two groups carried by its two parents. We will represent the neutral genealogies by a partition $\Theta^K_d$ which belongs to the set $\mathcal{P}_d^*$ of marked partitions of $\{ (i,k), i \in \{1,...,d\}, k \in \{1,2\} \}$ with (at most) one block distinguished by the mark $*$, which will correspond to the descendants of the original mutant $a$. In this notation $(i,1)$ and $(i,2)$ are the neutral alleles at loci $N1$ and $N2$ of the $i$th sampled individual. Let us define rigorously the random partition $\Theta^K_d$:
\begin{defi}\label{deftheta} Sample $d$ individuals uniformly and without replacement at the end of the sweep (time $T_{\text{ext}}^K$). Follow the genealogies of the first and second neutral alleles of the $i$-th sampled individual, $(i,1)$ and $(i,2)$ for $i \in \{1, ...,d\}$. Then the partition $\Theta^K_d \in \mathcal{P}_d^*$ is defined as follows: each block of the partition $\Theta^K_d$ is composed of all those neutral alleles which originate from the same individual alive at the beginning of the sweep; the block containing the descendants of the mutant $a$ (if such a block exists) is distinguished by the mark $*$. \end{defi}
We will show in Theorems \ref{mainresult} and \ref{mainresult2} that when $K$ is large the partition $\Theta^K_d$ belongs with a probability close to one to a subset $\Delta_d$ of $\mathcal{P}_d^*$, which is defined as follows:
\begin{defi}\label{defdelta} $\Delta_d$ is the subset of $\mathcal{P}_d^*$ consisting of those partitions whose unmarked blocks (if there are any)
are either singletons or pairs of the form $\{(i,1),(i,2)\}$ for some $i\in \{1,...,d\}$. \end{defi}
\begin{ex} In the example represented in Figure \ref{schemagen}, the marked partition $\pi^{(ex)}$ belongs to $\Delta_d$: $$\pi^{(ex)}=\Big\{\{ (1,1), (1,2), (2,1), (5,2) \}^*, \{(2,2)\},\{(3,1),(3,2)\},\{(4,1)\},\{(4,2)\},\{(5,1)\} \Big\}.$$ \end{ex}
\begin{figure}
\caption{Example of genealogy for a 5-sample: dark blue neutral alleles originate from the mutant and light blue ones from an $A$-individual. We indicate the selected allele, $A$ or $a$, associated with the neutral alleles during the sweep. It can change when a recombination occurs. Bold lines represent the $A$(green)- and $a$(red)-population sizes. In this example,
the two neutral alleles of the first individual, the first neutral allele of the second individual and the second neutral allele of the fifth individual originate from the mutant; the two neutral alleles of the third individual originate from the same $A$-individual, whereas the two neutral alleles of the fourth individual originate from two distinct $A$-individuals. }
\label{schemagen}
\end{figure}
For a partition $\pi \in \mathcal{P}_d^*$ and each of several possible ancestral relationships, we define the number of individuals in the sample whose two neutral loci are related in that particular way:
\begin{defi}\label{defpartition}
Let $d\in \mathbb{N}$ and $\pi\in \mathcal{P}_d^*$. Then we set:
\begin{enumerate}
\item[] $|\pi|_1=\#\{$\small$1\leq i \leq d$ such that $(i,1)$ and $(i,2)$ belong to the marked block \normalsize $\}$
\item[] $|\pi|_2=\#\{$\small$ 1\leq i \leq d$ such that $(i,1)$ belongs to the marked block and $\{(i,2)\}$ is an unmarked block\normalsize $\}$
\item[] $|\pi|_3=\#\{$\small$1\leq i \leq d$ such that $(i,2)$ belongs to the marked block and $\{(i,1)\}$ is an unmarked block\normalsize $\}$
\item[] $|\pi|_4=\#\{$\small$1\leq i \leq d$ such that $\{(i,1),(i,2)\}$ is an unmarked block\normalsize $\}$
\item[] $|\pi|_5=\#\{$\small$1\leq i \leq d$ such that $\{(i,1)\}$ and $\{(i,2)\}$ are two distinct unmarked blocks\normalsize $\}$
\end{enumerate} \end{defi}
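These five statistics are straightforward to compute from an explicit representation of a marked partition. The following Python sketch does so; the data layout (a collection of unmarked blocks plus one marked set) is an implementation choice of ours, not notation from the text:

```python
def partition_stats(blocks, marked, d):
    """Compute (|pi|_1, ..., |pi|_5) of Definition `defpartition`.

    blocks -- iterable of sets of (i, k) pairs: the unmarked blocks
    marked -- set of (i, k) pairs: the marked block (may be empty)
    d      -- sample size
    """
    unmarked = {frozenset(b) for b in blocks}
    counts = [0] * 5
    for i in range(1, d + 1):
        a, b = (i, 1), (i, 2)
        if a in marked and b in marked:
            counts[0] += 1                      # |pi|_1: both loci marked
        elif a in marked and frozenset({b}) in unmarked:
            counts[1] += 1                      # |pi|_2
        elif b in marked and frozenset({a}) in unmarked:
            counts[2] += 1                      # |pi|_3
        elif frozenset({a, b}) in unmarked:
            counts[3] += 1                      # |pi|_4: unmarked pair block
        elif frozenset({a}) in unmarked and frozenset({b}) in unmarked:
            counts[4] += 1                      # |pi|_5: two unmarked singletons
    return tuple(counts)
```

Applied to the partition $\pi^{(ex)}$ of the example above, it returns $(1,1,1,1,1)$.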
To express the limit distribution of the partition $\Theta_d^{K}$ we need to introduce: \begin{equation} \label{defq1q2} q_1:= e^{- \frac{f_ar_1\log K}{S_{aA}}},\ \ q_2:=e^{-\frac{f_ar_2\log K}{S_{aA}}}, \ \
\bar{q}_2:=e^{-\frac{f_ar_2\log K}{|S_{Aa}|}} \ \ \text{and} \ \ q_3:=\frac{r_1(q_2^{f_A/f_a}-q_1q_2)}{r_1+r_2(1-f_A/f_a)}, \end{equation} where the invasion fitnesses have been defined in \eqref{deffitinv3loci}. We made no assumption on the sign of $f_a(r_1+r_2)-f_Ar_2$, but $q_3$ can be written in the form $\delta(e^{-\mu}- e^{-\nu})/(\nu-\mu)$ for $(\delta,\mu,\nu) \in \mathbb{R}_+^3$, so that it is well defined and non-negative. It is easy to check that $q_3\leq 1$. The forms of $q_1$, $q_2$ and $\bar{q}_2$ are intuitive (see the comments after Proposition \ref{prop:1phase_probg1}). The form of $q_3$ is harder to explain and results from a combination of different possible genealogical scenarios during the first phase. We now define five non-negative numbers $(p_k, 1\leq k \leq 5)$ which will quantify the law of $\Theta_d^K$ for large $K$ in Theorem \ref{mainresult}: \begin{eqnarray}\label{defpi} p_1 := q_1q_2 [1-(1-q_1)(1- \bar{q}_2)], \quad p_2 := q_1[(1-q_1q_2) - q_2\bar{q}_2(1-q_1) ], \quad \\
p_3 := q_1q_2(1-\bar{q}_2)(1-q_1), \quad p_4 :=\bar{q}_2 q_3\quad \text{and} \quad p_5 := (1-q_1)(1-q_1q_2(1-\bar{q}_2)) -\bar{q}_2q_3.\nonumber\end{eqnarray}
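Since the expressions \eqref{defq1q2} and \eqref{defpi} are easy to mistype, a short numerical sketch can evaluate them and check that the $p_k$ are non-negative and sum to one. All parameter values below are illustrative (they are not taken from the text), and `S_Aa_abs` stands for $|S_{Aa}|$; when the denominator of $q_3$ nearly vanishes the code falls back to the limiting form $\delta e^{-\mu}$ mentioned above:

```python
import math

def neutral_probs(f_A, f_a, r1, r2, S_aA, S_Aa_abs, K):
    """Evaluate q_1, q_2, qbar_2, q_3 and p_1, ..., p_5 from the
    definitions defq1q2 and defpi in the text (illustrative sketch)."""
    L = math.log(K)
    q1 = math.exp(-f_a * r1 * L / S_aA)
    q2 = math.exp(-f_a * r2 * L / S_aA)
    qb2 = math.exp(-f_a * r2 * L / S_Aa_abs)
    den = r1 + r2 * (1.0 - f_A / f_a)
    if abs(den) > 1e-12:
        q3 = r1 * (q2 ** (f_A / f_a) - q1 * q2) / den
    else:
        # limit delta * e^{-mu} of delta*(e^{-mu}-e^{-nu})/(nu-mu) as nu -> mu
        q3 = r1 * f_a * L / S_aA * q2 ** (f_A / f_a)
    p1 = q1 * q2 * (1 - (1 - q1) * (1 - qb2))
    p2 = q1 * ((1 - q1 * q2) - q2 * qb2 * (1 - q1))
    p3 = q1 * q2 * (1 - qb2) * (1 - q1)
    p4 = qb2 * q3
    p5 = (1 - q1) * (1 - q1 * q2 * (1 - qb2)) - qb2 * q3
    return (q1, q2, qb2, q3), (p1, p2, p3, p4, p5)
```

Note that the $q_3$ terms cancel between $p_4$ and $p_5$, so the sum $\sum_k p_k$ equals one identically, whatever the value of $q_3$.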
Note that $\sum_{1 \leq k \leq 5}p_k=1$. Finally, we introduce an assumption which summarizes all the assumptions made in this work: \begin{ass}\label{asstotale} $(N_A^K(0),N_a^K(0))=(\lfloor\bar{n}_AK\rfloor,1)$ and Conditions \eqref{assrK} on the recombination probability and \eqref{defnbara} on the equilibrium densities and fitnesses hold. \end{ass} With Definitions \ref{deftheta}, \ref{defdelta} and \ref{defpartition} in mind, we can now state our main results:
\begin{theo}[Geometry $SL-N1-N2$] \label{mainresult} Under Assumption \ref{asstotale}, we have for every $ \pi \in \mathcal{P}_d^*$ $$ \underset{K \to \infty}{\lim} \
\Big| \mathbb{P}^{(K)}( \Theta^K_d=\pi |{\textnormal{Fix}}^K)-\mathbf{1}_{\{\pi \in \Delta_d\}}
{p_1}^{|\pi|_1}{p_2}^{|\pi|_2}{p_3}^{|\pi|_3}{p_4}^{|\pi|_4}{p_5}^{|\pi|_5} \Big|=0.$$ \end{theo}
Notice that when $K$ is large, $\Theta^K_d$ belongs to $\Delta_d$ with a probability close to one, and that
$$({p_1}^{|\pi|_1}{p_2}^{|\pi|_2}{p_3}^{|\pi|_3}{p_4}^{|\pi|_4}{p_5}^{|\pi|_5}, \pi \in \Delta_d)$$
is a probability on $\Delta_d$ (depending on $K$). Moreover, this result implies that the $d$ sampled individuals have asymptotically independent neutral genealogies. With high probability, the neutral alleles of a given sampled individual $i$ either originate from the first mutant $a$ and belong to the marked block,
or escape the sweep and originate from an $A$-individual. In this case, they belong to an unmarked block which is of the
form $\{(i,1)\}$, $\{(i,2)\}$ or $\{(i,1),(i,2)\}$, according to Definition \ref{defpartition}. As a consequence, if some neutral alleles of two distinct sampled individuals escape the sweep, they originate from distinct $A$-individuals with high probability. However, the genealogies of the two neutral alleles of a given individual are not independent. For example, the probability that $(i,1)$ and $(i,2)$ escape the sweep is $p_4+p_5$; the probability that $(i,1)$ (resp. $(i,2)$) escapes the sweep is $p_3+p_4+p_5$ (resp. $p_2+p_4+p_5$), and for every $K \in \mathbb{N}$ such that $r_1\neq 0$ $$ (p_3+p_4+p_5)(p_2+p_4+p_5)=(1-q_1)(1-q_1q_2)<(1-q_1)(1-q_1q_2+q_1q_2\bar{q}_2)=p_4+p_5. $$
This is due to the fact that if (backwards in time) a recombination first occurs between $SL$ and $N1$, the neutral allele at $N2$, linked to $N1$, also escapes the sweep. As the term $q_1q_2\bar{q}_2$ does not tend to $0$ when $K$ goes to infinity under Condition \eqref{assrK}, equality can hold in the limit only when $r_1 \log K\ll 1$, in other words when the probability of seeing a recombination between $SL$ and $N1$ is negligible.\\
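The displayed identity and inequality follow mechanically from \eqref{defpi}; the sketch below verifies them numerically for arbitrary illustrative values of $(q_1,q_2,\bar q_2,q_3)$ in $(0,1)$ (the precise value of $q_3$ is irrelevant here, as it cancels in each combination):

```python
# Positive correlation of the escape events of the two neutral loci of one
# individual in the adjacent geometry.  Values of q1, q2, qb2, q3 are
# illustrative placeholders, not outputs of the model.
q1, q2, qb2, q3 = 0.7, 0.5, 0.6, 0.1

p2 = q1 * ((1 - q1 * q2) - q2 * qb2 * (1 - q1))
p3 = q1 * q2 * (1 - qb2) * (1 - q1)
p4 = qb2 * q3
p5 = (1 - q1) * (1 - q1 * q2 * (1 - qb2)) - qb2 * q3

marg_N1 = p3 + p4 + p5   # P((i,1) escapes) = 1 - q1
marg_N2 = p2 + p4 + p5   # P((i,2) escapes) = 1 - q1*q2
joint = p4 + p5          # P(both escape), exceeds the product of marginals
```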
Let us now consider the separated geometry, $N1-SL-N2$:
\begin{theo}[Geometry $N1-SL-N2$] \label{mainresult2} Under Assumption \ref{asstotale}, we have for every $ \pi \in \mathcal{P}_d^*$ $$
\underset{K \to \infty}{\lim} \ \Big| \mathbb{P}^{(K)}( \Theta^K_d=\pi |\textnormal{Fix}^K)-
\mathbf{1}_{\{\pi \in \Delta_d\}}{[q_1q_2]}^{|\pi|_1}{[q_1(1-q_2)]}^{|\pi|_2}{[(1-q_1)q_2]}^{|\pi|_3}{[(1-q_1)(1-q_2)]}^{|\pi|_5}
\Big|=0. $$ \end{theo}
Again the neutral genealogies of the $d$ sampled individuals are asymptotically independent. Furthermore, we have independence between the neutral loci. Indeed, Theorem \ref{mainresult2} means that a neutral allele at locus $Nk$ escapes the sweep with probability $1-q_k$ independently of all other neutral alleles, including the allele at the other neutral locus of the same individual. This is due to the fact that in the separated geometry a recombination between $SL$ and one neutral locus has no impact on the genetic background of the allele at the other neutral locus. Note in particular that there is no block of the form $\{(i,1),(i,2)\}$ in the limit partition, as the two neutral alleles have a very small probability of recombining at the same time.
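The per-locus factorization in Theorem \ref{mainresult2} is easy to illustrate: each locus escapes independently with probability $1-q_k$, so the four per-individual limit weights are products of per-locus terms (the values of $q_1$, $q_2$ below are illustrative):

```python
# Limit weights per sampled individual in the separated geometry N1-SL-N2.
q1, q2 = 0.7, 0.55
weights = {
    "both from the mutant":    q1 * q2,              # exponent |pi|_1
    "only N2 escapes":         q1 * (1 - q2),        # exponent |pi|_2
    "only N1 escapes":         (1 - q1) * q2,        # exponent |pi|_3
    "both escape, separately": (1 - q1) * (1 - q2),  # exponent |pi|_5
}
# No weight for an unmarked pair block {(i,1),(i,2)}: the |pi|_4 type
# does not occur in the limit for this geometry.
```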
\section{Comparison with previous work} \label{sectioncomparison}
In \cite{schweinsberg2005random} the authors gave an approximate sampling formula for the genealogy of one neutral locus during a selective sweep. The population evolves as a two-locus modified Moran model with recombination and selection, and in particular has constant population size. They introduced the fitness $s^{SD}$ of the mutant $a$ as follows: when one of the iid exponential clocks of the living individuals rings, one picks two individuals uniformly at random (with replacement), one dies, and the other one gives birth. A replacement of an $a$-individual by an $A$-individual is rejected with probability $s^{SD}$; in this case, nothing happens. In \cite{smadi2014eco}, the author studied the one-neutral-locus version of the model presented here. It was shown that the ancestral relationships in a sample taken at the end of the sweep correspond to the ones derived in \cite{schweinsberg2005random} when we equate the fitness of \cite{schweinsberg2005random} with the rescaled invasion fitness
$s^{SD}=S_{aA}/f_a$ and when we have the equality $|S_{Aa}|/f_A=S_{aA}/f_a$ (in this case the first and third phases have the same duration, $S_{aA}\log K/f_a$).\\
In \cite{brink2014multsweep}, the author generalized the model introduced in \cite{schweinsberg2005random} to two neutral loci and used similar methods to derive a corresponding statement for the genealogy of a sample taken at the end of the sweep. If, however, we make the analogous comparison and try to match our result for the adjacent geometry with the statement from \cite{brink2014multsweep}, we observe an interesting phenomenon: the probabilities of the different types of ancestry only coincide if the birth rates of $a$- and $A$-individuals are the same, that is, if $f_a=f_A$. In biology, the fitness describes the ability to both survive and reproduce, and can be defined by the average contribution of an individual with a given genotype to the gene pool of the next generation. Hence a mutation which affects the fitness of an individual in a given environment can either act on the fertility ($f_\alpha$ in our model), or on the death rate, intrinsic ($D_\alpha$) or by competition ($C_{\alpha,\alpha'}$), or on both. Our result is comparable to that of \cite{schweinsberg2005random} if the mutation only affects the death rate
(and still if $s^{SD}=S_{aA}/f_a=|S_{Aa}|/f_A$).\\
In \cite{pfaffelhuber2007approximating}, instead of a birth and death process, the authors modeled the population with a structured coalescent. It is shown that this process can be approximated by a marked Yule tree where the different marks are realized by Poisson processes and indicate a recombination of one or two loci into the wild-type background. The impact of the third phase is taken into account by a certain refinement prior to the beginning of the coalescent, which leads to the same splitting effect between the two neutral loci as seen here. We again find similarities with our results when $f_A=f_a$. In contrast, the techniques and precision used in \cite{pfaffelhuber2007approximating} yield that coalescent events with $A$-individuals cannot be ignored, that is, there are neutral loci of different individuals from the sample which have the same type-$A$-ancestor. The structure of the sample is therefore different from our results here. Notice that this is also the case for the second approximate sampling formula stated in \cite{schweinsberg2005random}, which is more precise than the first one.
\section{Dynamics of the sweep and couplings} \label{sectioncoupling}
\subsection{Description of the three phases}\label{sectionP1P2P3}
We only need to focus on the trajectories of the population process where the mutant allele $a$ goes to fixation and replaces the resident allele $A$. Champagnat has described these trajectories in \cite{champagnat2006microscopic} and in particular divided the sweep into three phases with distinct $A$- and $a$-population dynamics (see Figure \ref{fig3loci}). In the sequel, $\varepsilon$ will be a positive real number independent of $K$, as small as needed for the different approximations to hold. Moreover, from this point onward we will write $N_\alpha$ (resp. $N_{\alpha\beta\gamma}$) instead of $N_\alpha^K$ (resp. $N^K_{\alpha\beta\gamma}$) and $\mathbb{P}$ instead of $\mathbb{P}^{(K)}$ for the sake of readability.
\begin{figure}
\caption{The three phases of a selective sweep. The $y$-axis corresponds to population sizes ($A$ in black, $a$ in red), and the $x$-axis to time. In this simulation, $K=1000$, $(f_A, f_a) =(2,3), D_\alpha=0.5, \alpha \in \mathcal{A}$, $C_{\alpha,\alpha'}=1, (\alpha,\alpha') \in \mathcal{A}^2$. We have also indicated
some of the notations introduced in Section \ref{sectionP1P2P3}.}
\label{fig3loci}
\end{figure}
\subsection*{First phase} The resident population size stays close to its equilibrium value $\bar{n}_AK$ as long as the mutant population size has not hit $\lfloor \varepsilon K \rfloor $: if we introduce the finite subset of $\mathbb{N}$ \begin{equation} \label{compact}I_\varepsilon^K:= \Big[K\Big(\bar{n}_A-2\varepsilon \frac{C_{A,a}}{C_{A,A}}\Big),K\Big(\bar{n}_A+2\varepsilon \frac{C_{A,a}}{C_{A,A}}\Big)\Big] \cap \mathbb{N}, \end{equation} and the stopping times $T^K_\varepsilon$ and $S^K_\varepsilon$, which denote respectively the hitting time of $\lfloor\varepsilon K \rfloor$ by the mutant population size
and the exit time of $I_\varepsilon^K$ by the resident population size, \begin{equation} \label{TKTKeps} T^K_\varepsilon := \inf \{ t \geq 0, N_a(t)= \lfloor \varepsilon K \rfloor \}\quad \text{and} \quad S^K_\varepsilon := \inf \{ t \geq 0, N_A(t)\notin I_\varepsilon^K \}, \end{equation} then we can deduce from \cite{champagnat2006microscopic} (see Equations (A.5) and (A.6) in \cite{smadi2014eco} for the details of the derivation) that the events $\text{Fix}^K$, $\{T_\varepsilon^K \leq S_\varepsilon^K\}$ and $\{T_\varepsilon^K<\infty\}$ are very close: \begin{equation}\label{A6frommywork}
\limsup_{K \to \infty} \mathbb{P}^{(K)}(\{ T_\varepsilon^K \leq S_\varepsilon^K \} \bigtriangleup \text{Fix}^K )\leq c \varepsilon,\quad \text{and} \quad
\limsup_{K \to \infty} \mathbb{P}^{(K)}(\{ T_\varepsilon^K <\infty \} \bigtriangleup \text{Fix}^K )\leq c \varepsilon, \end{equation} for a finite $c$ and $\varepsilon$ small enough, where we recall convention \eqref{convP}. In this context, $\bigtriangleup$ is the symmetric difference: for two sets $B$ and $C$, $B \bigtriangleup C=(B\cap C^c) \cup (C\cap B^c)$. From this point onwards, ``first phase'' will denote the time interval $[0,T_\varepsilon^K]$ when the $a$-population size is smaller than $\lfloor \varepsilon K \rfloor$.
\subsection*{Second phase} When $N_A$ and $N_a$ are of order $K$, the rescaled population process $(N_A/K,N_a/K)$ is well approximated by the Lotka-Volterra system \eqref{S}. Moreover, under {Condition \eqref{defnbara}} the system \eqref{S} has a unique attracting equilibrium $(0,\bar{n}_a)$ for initial conditions $z$ satisfying $z_a>0$, where $\bar{n}_a$ has been defined in \eqref{defnbar}. In particular, if we introduce for $(n_A, n_a) \in \mathbb{N}^2$ the notation,
\begin{equation}\label{convP_} \mathbb{P}_{(n_A,n_a)}(.):=\mathbb{P}(.|N_A(0)=n_A,N_a(0)=n_a), \end{equation} then Theorem 3 (b) in \cite{champagnat2006microscopic} implies: \begin{equation}\label{result_champa} \underset{K \to \infty}{\lim}\ \underset{z \in \Gamma}{\sup}\ \mathbb{P}_{{(\lfloor z_AK\rfloor,\lfloor z_aK\rfloor)}}\Big(\underset{0\leq t \leq t_\varepsilon,\alpha \in \mathcal{A}}
{\sup} \Big|\textstyle{\frac{N_\alpha(t)}{K}}-{n}^{(z)}_\alpha(t)\Big|\geq \delta \Big)=0,\end{equation} for every $\delta>0$, where \begin{align}
\label{tetaCfini}&\Gamma:=\Big\{ z \in \mathbb{R}_+^\mathcal{A}, \lfloor z_AK \rfloor \in I_\varepsilon^K, z_a\in [{\varepsilon}/{2},\varepsilon] \Big\},\\ \label{deftepsz} &t_{\varepsilon}(z):=\inf \big\{ s \geq 0,\forall t \geq s, {n}_A^{(z)}(t)\in [0,\varepsilon^2/2], {n}_a^{(z)}(t) \in[\bar{n}_a-\varepsilon/2,\bar{n}_a+\varepsilon/2] \big\},\\
&t_\varepsilon:=\sup \{ t_\varepsilon(z),z \in \Gamma\}<\infty. \nonumber \end{align}
In the sequel, "second phase" will denote the time interval $[T_\varepsilon^K,T_\varepsilon^K+t_\varepsilon]$ when the population process is close to the solution of the system \eqref{S}.
\subsection*{Third phase} Equation \eqref{result_champa} also implies that \begin{equation} \label{boundsecondphase} \underset{K \to \infty}{\lim}\
\mathbb{P}\Big( \textstyle{\frac{N_A(T_\varepsilon^K+t_\varepsilon)}{K}} \in [\omega_1,\omega_2],\Big|\textstyle{\frac{N_a(T_\varepsilon^K+t_\varepsilon)}{K}}
-\bar{n}_a\Big| \leq \varepsilon
\ \Big| \ \Big(\textstyle{\frac{N_A(T_\varepsilon^K)}{K},\frac{N_a(T_\varepsilon^K)}{K}} \Big) \in \Gamma \Big)=1,\end{equation} where \begin{equation}\label{defomega12} 2\omega_1:= \inf \ \{ n_A^{(z)}(t_\varepsilon), z \in \Gamma \}>0, \quad \text{and} \quad \omega_2:= 2 \sup \ \{ n_A^{(z)}(t_\varepsilon), z \in \Gamma \}\leq \varepsilon^2. \end{equation} The ``third phase'', which corresponds to the time interval $[T_\varepsilon^K +t_\varepsilon,T_{\text{ext}}^K]$, can be seen as the symmetric counterpart of the first phase, where the roles of $A$ and $a$ are interchanged: during the extinction of the $A$-population, the $a$-population size stays close to its equilibrium value $\bar{n}_aK$.
Let us introduce the positive real number
$M'':=3+(f_a+C_{a,A})/C_{a,a}$ and the finite subset of $\mathbb{N}$ \begin{equation} \label{compact2}J_\varepsilon^K:= \Big[K\Big(\bar{n}_a-M''\varepsilon \Big),K\Big(\bar{n}_a+M''\varepsilon \Big)\Big]\cap \mathbb{N}. \end{equation} The times $T_u^{(K,A)}$ and ${S}^{(K,a)}_\varepsilon$ are two stopping times for the process restarted after the second phase and denote respectively the hitting times of $\lfloor uK \rfloor$ by the $A$-population for $u \in \mathbb{R}_+$, and the exit time of $J_\varepsilon^K$ by the $a$-population during the third phase, \begin{equation} \label{T0K} T_u^{(K,A)} := \inf \{ t \geq 0, N_A(T_\varepsilon^K+t_\varepsilon+t)= \lfloor uK \rfloor \},\quad
{S}^{(K,a)}_\varepsilon := \inf \{ t \geq0, N_a(T_\varepsilon^K+t_\varepsilon+t)\notin J_\varepsilon^K \}. \end{equation} If we define the event \begin{equation}\label{defNepsK} \mathcal{N}_\varepsilon^K:=\{ T_\varepsilon^K \leq S_\varepsilon^K \} \cap \Big\{ \textstyle{\frac{N_A(T_\varepsilon^K+t_\varepsilon)}{K} }
\in [\omega_1,\omega_2],\Big|\textstyle{\frac{N_a(T_\varepsilon^K+t_\varepsilon)}{K}} -\bar{n}_a\Big| \leq \varepsilon \Big\}, \end{equation} we get from {the proof of Lemma 3 in} \cite{champagnat2006microscopic} that for a finite $c$ and $\varepsilon$ small enough, \begin{equation}\label{diffprobaphase3} \limsup_{K \to \infty}\Big\{\mathbb{P}(\text{Fix}^K \bigtriangleup [\mathcal{N}_\varepsilon^K \cap \{T_0^{(K,A)} <T_\varepsilon^{(K,A)}\wedge {S}^{(K,a)}_\varepsilon \} ])+\mathbb{P}(\text{Fix}^K \bigtriangleup [\mathcal{N}_\varepsilon^K \cap \{T_0^{(K,A)} <T_\varepsilon^{(K,A)} \} ] )\Big\}\leq c \varepsilon. \end{equation} To summarize, the fixation event $\text{Fix}^K$ is very close to the following succession of events: \begin{enumerate}
\item[$\bullet$] The $a$-population size hits $\lfloor \varepsilon K\rfloor$ before the $A$-population size has escaped the vicinity of its equilibrium $I_\varepsilon^K$ (first phase)
\item[$\bullet$] The rescaled {population} process $N/K$ is close to the deterministic competitive Lotka-Volterra system during the second phase
\item[$\bullet$] The $A$-population size gets extinct before hitting $\lfloor \varepsilon K\rfloor$ and before the
$a$-population size has escaped the vicinity of its equilibrium $J_\varepsilon^K$ (third phase) \end{enumerate}
\subsection{Couplings for the first and third phases}
We are interested in the law of the neutral genealogies on the event $\text{Fix}^K$. Equations \eqref{A6frommywork} and \eqref{diffprobaphase3} imply that it is enough to concentrate our attention on the event $\mathcal{N}_\varepsilon^K \cap \{T_0^{(K,A)} <T_\varepsilon^{(K,A)}\}$, but the dynamics of the population process $N$ conditionally on this event are difficult to study. Indeed, it boils down to studying the dynamics of a process conditioned on a future event ($\{ T_\varepsilon^K \leq S_\varepsilon^K \}$ for the first phase and $\{T_0^{(K,A)} <T_\varepsilon^{(K,A)}\}$ for the third one). Hence the idea is to couple the population process with two processes, $\tilde{N}$ and $\tilde{\tilde{N}}$, whose laws are easier to study. These processes will satisfy: \begin{equation}\label{couplage2.1}\underset{K \to \infty}{\limsup} \ \mathbb{P}(\{\exists t \leq T_{\varepsilon}^K, N(t) \neq \tilde{N}(t)\}, T_{\varepsilon}^K<\infty)\leq c \varepsilon\end{equation} and \begin{multline}\label{couplage2.2}\underset{K \to \infty}{\limsup} \ \mathbb{P}(\{\exists \ 0\leq t-(T_\varepsilon^K+t_\varepsilon) \leq T_0^{(K,A)}, N(t) \neq \tilde{\tilde{N}}(t)\},
T_0^{(K,A)}<T_\varepsilon^{(K,A)}
| \mathcal{N}_{\varepsilon}^K)\leq c \varepsilon.\end{multline}
Let $\alpha$ be in $\mathcal{A}$ and $n$ be in $\mathbb{N}^\mathcal{E}$. Denote $n^{(\alpha)}$ the $\alpha$ component of the population state: \begin{equation}\label{defn(alpha)}
n^{(\alpha)}=\sum_{(\beta,\gamma) \in \mathcal{B}\times \mathcal{C} }n_{\alpha \beta \gamma}e_{\alpha \beta\gamma}, \end{equation} where $(e_{\alpha \beta \gamma}, (\alpha, \beta ,\gamma)\in \mathcal{E})$ is the canonical basis of $\mathbb{R}^\mathcal{E}$. We are now able to introduce a process needed to describe the couplings:
\begin{defi}\label{defMR} We denote by \textup{Moran process of type $\alpha$ with recombination $r_2$} a
process $MR_\alpha^{(n^{(\alpha)})}$ with values in $\mathbb{N}^{\alpha \times \mathcal{B}\times \mathcal{C}}$, initial state $n^{(\alpha)}$, and the following dynamics: \begin{enumerate}
\item[$\bullet$] After an exponential time with parameter $f_\alpha\bar{n}_\alpha K$ we pick uniformly and with replacement three individuals and draw a Bernoulli variable $R$ with parameter $r_2$ \item[$\bullet$] The first individual dies, the second one gives birth to an individual carrying its alleles at loci $SL$ and $N1$, the third one is the potential second parent \item[$\bullet$] If $R=0$, there is no recombination and the allele at locus $N2$ of the newborn is also inherited from the second individual; if $R=1$ there is a recombination {between} $N1$ and $N2$ and the newborn inherits its second neutral allele from the third individual \item[$\bullet$] We again draw an exponential variable with parameter $f_\alpha\bar{n}_\alpha K$ and restart the procedure \end{enumerate} \end{defi}
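Definition \ref{defMR} lends itself to direct simulation. The Python sketch below (the tuple representation of genotypes and all names are our own, not from the text) implements one jump of the embedded chain; the exponential waiting times with parameter $f_\alpha\bar{n}_\alpha K$ are omitted, since only the sequence of jumps is needed to trace genealogies:

```python
import random

def moran_recomb_step(pop, r2, rng=random):
    """One jump of the Moran process of type alpha with recombination r2
    (Definition defMR).  `pop` is a list of genotypes (alpha, beta, gamma);
    all three picks are uniform with replacement, and the population size
    stays constant (one death, one birth)."""
    n = len(pop)
    dead = rng.randrange(n)     # first pick: this individual dies
    first = rng.randrange(n)    # second pick: gives birth, passes SL and N1
    second = rng.randrange(n)   # third pick: potential second parent
    alpha, beta, gamma = pop[first]
    if rng.random() < r2:       # recombination between N1 and N2
        gamma = pop[second][2]  # newborn takes its N2 allele from third pick
    pop[dead] = (alpha, beta, gamma)
    return pop
```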
\noindent \textbf{Coupling with $\tilde{N}$}: $N$ and $\tilde{N}$ are equal up to time $S_\varepsilon^K$; after this time the $A$ individuals in the population process $\tilde{N}$ follow a Moran process with recombination independent of the $a$-individuals. Let $\underline{\mathfrak{b}\mathfrak{c}}$ be in $\mathcal{B}\times \mathcal{C}$. We let the $a$-population evolve as if the $a$-individuals were interacting with $\tilde{N}_A(s)$ individuals with genotype $A\underline{\mathfrak{b}\mathfrak{c}}$: \begin{multline}\label{deftildeN}
\tilde{N}(t) = \mathbf{1}_{t < S_\varepsilon^K}N(t)+ \mathbf{1}_{t \geq S_\varepsilon^K} \Big(MR_A^{({N}^{(A)}(S_\varepsilon^K))}(t-S_\varepsilon^K)+
\underset{(\beta,\gamma)\in \mathcal{B}\times\mathcal{C}}{ \sum} e_{a\beta\gamma}\\ \int_{S_\varepsilon^K}^t\int_{\mathbb{R}_+} \Big[
Q_{\beta\gamma}^{(1)}(ds,d\theta) \mathbf{1}_{\{0<\theta\leq b^K_{a \beta\gamma}(\tilde{N}_{A}(s^-)
e_{A\underline{\mathfrak{b}\mathfrak{c}}},\tilde{N}^{(a)}(s^-))\}}
-
Q_{\beta \gamma}^{(2)}(ds,d\theta)
\mathbf{1}_{\{0<\theta \leq d^K_{a \beta\gamma}(\tilde{N}_{A}(s^-) e_{A\underline{\mathfrak{b}\mathfrak{c}}},\tilde{N}^{(a)}(s^-))\}}\Big]\Big), \end{multline} where $MR_A^{({N}^{(A)})}$ has been defined in Definition \ref{defMR} and $(Q_{\beta \gamma}^{(i)}, i \in \{1,2\},(\beta,\gamma)\in \mathcal{B}\times\mathcal{C} )$ are independent Poisson point processes with density $dsd\theta$, also independent of $MR_A^{({N}^{(A)})}$. The reason for the construction of such a coupling is that we need to control the $A$-population size and the number of births of $A$-individuals during the first phase in Section \ref{technicalsection}. With the process $\tilde{N}$, such control is easier to achieve.
\noindent \textbf{Coupling with $\tilde{\tilde{N}}$}:
we assume that $\mathcal{N}_\varepsilon^K$ from \eqref{defNepsK} holds;
$N$ and $\tilde{\tilde{N}}$ are equal up to time $T_\varepsilon^K+t_\varepsilon+S_\varepsilon^{(K,a)}\wedge T_\varepsilon^{(K,A)}$. Then the $a$-individuals in the population process $\tilde{\tilde{N}}$ follow a Moran process with recombination independent of the $A$-individuals, and each $A\beta\gamma$-population evolves as a birth and death process with individual birth and death rates $f_A$ and $f_A+|S_{Aa}|$, independent of the $a$-individuals and the $A\beta'\gamma'$-populations with $(\beta,\gamma)\neq (\beta',\gamma')$: \begin{multline}\label{deftildetildeN}
\tilde{\tilde{N}}(T_\varepsilon^K+t_\varepsilon+t) = \mathbf{1}_{t < S_\varepsilon^{(K,a)}}N(T_\varepsilon^K+t_\varepsilon+t)+ \mathbf{1}_{t \geq S_\varepsilon^{(K,a)}} \Big( MR_a^{({N}^{(a)}(S_\varepsilon^{(K,a)}))}(t-S_\varepsilon^{(K,a)})+\\
\underset{(\beta,\gamma)\in \mathcal{B}\times\mathcal{C}}{ \sum} e_{A\beta\gamma}\Big[\int_{T_\varepsilon^K+t_\varepsilon+S_\varepsilon^{(K,a)}}^{T_\varepsilon^K+t_\varepsilon+t}\int_{\mathbb{R}_+}
Q_{\beta\gamma}(ds,d\theta) \Big\{\mathbf{1}_{\{0<\theta \leq f_A\tilde{\tilde{N}}_{A\beta\gamma}(s^-)\}} -\mathbf{1}_{\{0<\theta
- f_A\tilde{\tilde{N}}_{A\beta\gamma}(s^-)\leq (f_A+|S_{Aa}|)\tilde{\tilde{N}}_{A\beta\gamma}(s^-) \}}\Big\}\Big]\Big), \end{multline} where $MR_a^{({N}^{(a)})}$ has been defined in Definition \ref{defMR} and is independent of the sequence of independent Poisson measures $(Q_{\beta\gamma}, (\beta,\gamma) \in \mathcal{B}\times\mathcal{C})$, with intensity $dsd\theta$. The $a$-population size and the number of births of $a$-individuals will be easy to control for the process $\tilde{\tilde{N}}$ during the third phase, and again we will need such control in Section \ref{technicalsection}.\\
Inequality \eqref{couplage2.1} follows from \eqref{A6frommywork}. Moreover, from the proof of Lemma 3 in \cite{champagnat2006microscopic} we know that
$$\liminf_{K \to \infty} \mathbb{P}(T_0^{(K,A)}<T_\varepsilon^{(K,A)}\wedge S_\varepsilon^{(K,a)} |\mathcal{N}_\varepsilon^K)\geq 1-c\varepsilon $$ for a finite $c$ and $\varepsilon$ small enough. Adding \eqref{diffprobaphase3} we get that \eqref{couplage2.2} is also satisfied. Hence we will study the processes $\tilde{N}$ and $\tilde{\tilde{N}}$ and deduce properties of the dynamics of the process $N$ during the first and third phases.
\section{Proofs of the main results}
\label{sectionproofmain}
As the proof of Theorem \ref{mainresult2} is simpler than that of Theorem \ref{mainresult}, and follows essentially the same ideas, we only prove Theorem \ref{mainresult}.
\subsection{Events impacting the genealogies in each phase}\label{subsec:eventsallphases}
Let us now summarize the results on the genealogies for the three successive phases of the sweep that we will derive in Sections \ref{firstphase} and \ref{secondphase}.\\
\noindent \textbf{First phase:} As explained in the previous section, we work with the process $\tilde{N}$ to study the first phase. Let us introduce the jump times of $\tilde{N}$: \begin{equation}\label{deftpssauts} \tau_0^K=0 \quad \text{and} \quad \tau_m^K= \inf \{ t> \tau_{m-1}^K, \tilde{N}(t)\neq \tilde{N}(\tau_{m-1}^K) \}, \quad m \geq 1. \end{equation} The number of jumps during the first phase is denoted by $J^K(1)$: \begin{equation}\label{defJK1}
J^K(1):= \inf \{ m \in \mathbb{N}, \tilde{N}_a(\tau_m^K)=\lfloor \varepsilon K \rfloor \}. \end{equation} Coalescence and recombination events are defined as follows (see Figure \ref{coalevent}):
\begin{defi}\label{defcoalreco} Sample two distinct individuals at time $\tau_m^K$ and denote $\alpha\beta\gamma$
and $\alpha'\beta'\gamma'$ their type.
We say that $\beta$ and $\beta'$ coalesce at time $\tau_m^K$ if they are carried by two distinct individuals at time $\tau_{m}^K$ and by the same individual at time $\tau_{m-1}^K$. Seen forwards in time it corresponds to a birth and hence a copy of the neutral allele. Seen backwards in time it corresponds to the fusion of two neutral alleles into one, carried by one parent of the newborn. We define in the same way coalescent events at locus $N2$ (resp. loci $N1$ and $N2$) for alleles $\gamma$ and $\gamma'$ (resp. allele pairings $(\beta, \gamma)$ and $(\beta', \gamma')$).
We say that $\beta$ (and/or $\gamma$) recombines at time $\tau_m^K$ from the $\alpha$- to the $\alpha'$-population if the individual carrying the allele $\beta$ (and/or $\gamma$)
at time $\tau^K_m$ is a newborn,
carries the allele $\alpha$ inherited from its first parent, and has inherited its allele $\beta$ (and/or $\gamma$) from a different individual carrying allele $\alpha'$. \end{defi}
We are only interested in recombinations which entail new associations of alleles. In particular we will not consider the simultaneous recombinations of a pair $(\beta,\gamma)$ within the $\alpha$-population.
\begin{figure}
\caption{Illustration of Definition \ref{defcoalreco}: the newborn (individual $k$) has inherited the selected allele from its "white" parent and the two neutral alleles from its "blue" parent; hence the encircled neutral loci (of individuals $i$ and $k$) coalesce at time $\tau_m^K$. In terms of recombinations, the two neutral loci of the newborn individual recombine at time $\tau_m^K$ from the $a$- to the $A$-population}
\label{coalevent}
\end{figure}
Let us now describe the genealogical scenarios which modify the ancestral relationships between the neutral alleles of one individual and occur with positive probability when $K$ is large. We first focus on the first phase and pick uniformly an individual $i$ from the $a$-population at time $\tilde{T}_\varepsilon^K$. We introduce: \label{defgenealogy}\begin{align*} \begin{aligned}
NR(i)^{(1)}: &\quad \text{there is no recombination into the $A$-population affecting $(i,1)$ or $(i,2)$}\\ &\quad \text{and both neutral loci of the $i$-individual
originate from the first mutant,}\\
R2(i)^{(1)}: &\quad \text{only the neutral allele $(i,2)$ is affected by a recombination with the $A$-population,}\\
&\quad \text{hence $(i,1)$ originates from the first mutant and $(i,2)$ from an $A$-individual,}\\
R12(i)^{(1)}: &\quad \text{one recombination between $SL$ and $N1$ from the $a$- into the $A$-population occurs}\\
&\quad \text{and both neutral alleles $(i,1)$ and $(i,2)$ originate from the same $A$-individual,}\\
[2,1]^{rec}_{A,i}: &\quad
\text{first (backwards in time) $(i,2)$ recombines into the $A$-population, then $(i,1)$}\\
&\quad \text{recombines into the $A$-population and connects to a different individual than $(i,2)$.}\\
[12,2]^{rec}_{A,i} : &\quad
\text{first (backwards in time) the tuple $\{(i,1),(i,2)\}$ recombines into the $A$-population, }\\ &\quad \text{then a second recombination splits the two neutral loci inside the $A$-population.}\\
R1|2(i)^{(1,ga)}: &\quad [2,1]^{rec}_{A,i}\cup [12,2]^{rec}_{A,i} \text{ (see Figure \ref{schemagendefevent})} \end{aligned} \end{align*}
\begin{figure}
\caption{Illustration of events $[2,1]^{rec}_{A,i}$ (individual $1$) and $[12,2]^{rec}_{A,i}$ (individual $2$)}
\label{schemagendefevent}
\end{figure}
Finally, we introduce a conditional probability for the process $\tilde{N}$: \begin{equation}\label{defP1}
\mathbb{P}^{(1)}(.)=\mathbb{P}(.|J^K(1)<\infty), \end{equation} where $J^K(1)$ has been defined in \eqref{defJK1}. Hence, recalling the definition of $(q_1,q_2,q_3)$ in \eqref{defq1q2} we will prove in Section \ref{firstphase}:
\begin{pro}[Neutral genealogies during the first phase]\label{prop:1phase_probg1} Let $i$ be an $a$-individual sampled uniformly at the end of the first phase (time $\tilde{T}_\varepsilon^K$). Under Assumption \ref{asstotale}, there exist two finite constants $c$ and $\varepsilon_0$ such that for every $\varepsilon\leq \varepsilon_0$, \begin{multline*}
\limsup_{K \to \infty} \left\{\Big|\mathbb{P}^{(1)}(NR(i)^{(1)})-q_1 q_2\Big|+ \Big|\mathbb{P}^{(1)}(R2(i)^{(1)})-q_1
(1-q_2)\Big|\right.\\
\left. + \Big|\mathbb{P}^{(1)}(R12(i)^{(1)})-q_3\Big|+ \Big|\mathbb{P}^{(1)}(R1|2(i)^{(1,ga)})-
(1-q_1 - q_3)\Big|\right\}\leq c\varepsilon. \end{multline*} \end{pro}
For large $K$, the sum of the four probabilities of Proposition \ref{prop:1phase_probg1} equals one up to a constant times $\varepsilon$.
Hence, in the limit we only observe the events described on page \pageref{defgenealogy}. The probabilities of the first two events are quite intuitive: broadly speaking, the probability of having no recombination at a birth event is $1-r_1-r_2$, the birth rate is $f_a$ and the duration of the first phase is $\log K/S_{aA}$. Hence under $\mathbb{P}^{(1)}$, the probability of the event $NR(i)^{(1)}$ is approximately $$ (1-(r_1+r_2))^{f_a\log K/S_{aA}}\sim \exp(-(r_1+r_2)f_a\log K/S_{aA})=q_1q_2. $$
Similarly, the probability of having no recombination between $SL$ and $N1$ is close to $q_1$, and subtracting the probability of $NR(i)^{(1)}$ we get that of $R2(i)^{(1)}$. The probabilities of $R12(i)^{(1)}$ and $R1|2(i)^{(1,ga)}$ are more involved.
The proofs rely on a fine study of the different possible scenarios.\\
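To see that the heuristic above is numerically sound, one can compare the exact product with its exponential approximation; the following sketch uses purely illustrative values of $r_1+r_2$, $f_a$, $S_{aA}$ and $K$ (not taken from this paper):

```python
import math

# Purely illustrative (hypothetical) values: total per-birth recombination
# probability r = r_1 + r_2, birth rate f_a, invasion fitness S_aA, scale K.
r, f_a, S_aA, K = 0.01, 2.0, 0.5, 10**6

n = f_a * math.log(K) / S_aA      # approximate number of birth events in phase 1
exact = (1.0 - r) ** n            # probability of no recombination over n births
approx = math.exp(-r * n)         # the exponential approximation giving q1*q2

# The two expressions agree up to a small relative error.
assert abs(exact - approx) / approx < 1e-2
```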
\noindent \textbf{Second phase:} We work with the process $N$ to study the second phase. This phase has a duration of order $1$, and the recombination probabilities are negligible compared to one (Condition \eqref{assrK}).
Consequently, no event impacting the genealogies of the neutral loci occurs during the second phase. More precisely, let us sample uniformly two distinct $a$-individuals $i$ and $j$ at the end of the second phase (time $T_\varepsilon^K+t_\varepsilon$) and introduce the events:\label{defgenealogy3} \begin{align*} \begin{aligned}
NR(i)^{(2)}: &\quad \text{there is no recombination affecting $(i,1)$ or $(i,2)$,}\\
NC(i,j)^{(2)}: &\quad \text{there is no coalescence between the neutral genealogies of $i$ and $j$.} \end{aligned} \end{align*}
Then we have the following result, which will be proven in Section \ref{secondphase}.
\begin{pro}[Neutral genealogies during the second phase]\label{prosecondphase} Let $i$ and $j$ be two distinct $a$-individuals sampled uniformly at the end of the second phase (time $T_\varepsilon^K+t_\varepsilon$).
Then under Assumption \ref{asstotale},
\begin{equation*}\label{reconegli2}
\lim_{K \to \infty} \mathbb{P}(NR(i)^{(2)} \cap NC(i,j)^{(2)}|T_\varepsilon^K\leq S_\varepsilon^K)=1.
\end{equation*} \end{pro}
\noindent \textbf{Third phase:} Finally, we focus on the process $\tilde{\tilde{N}}$. When $K$ is large, there is only one event occurring with positive probability during the third phase which may modify the ancestry of the neutral alleles of an individual $i$ sampled at the end of the sweep in the adjacent geometry:
\begin{align}\label{defgenealogy4} \begin{aligned}
R2(i)^{(3,ga)}: &\quad \text{a recombination between loci $N1$ and $N2$ occurs and separates}\\
&\quad \text{$(i,1)$ and $(i,2)$ within the $a$-population.} \end{aligned} \end{align} Indeed, if we also define the events \label{defgenealogy5}\begin{align*} \begin{aligned}
NR(i)^{(3)}: &\quad \text{there is no recombination affecting $(i,1)$ or $(i,2)$ and they both}\\ &\quad \text{originate
from the same $a$-individual at the end of the second phase,}\\
NC(i,j)^{(3)}: &\quad \text{defined as $NC(i,j)^{(2)}$ for two distinct individuals sampled}\\ &\quad \text{uniformly at the end of the sweep}, \end{aligned} \end{align*} and the conditional probability for the process $\tilde{\tilde{N}}$: \begin{equation}
\mathbb{P}^{(3)}(.):=\mathbb{P}(.|\mathcal{N}_\varepsilon^K ,\tilde{\tilde{T}}_0^{(K,A)}<\tilde{\tilde{T}}_\varepsilon^{(K,A)}), \end{equation} where $\tilde{\tilde{T}}_0^{(K,A)}$ and $\tilde{\tilde{T}}_\varepsilon^{(K,A)}$ are the analogs of
${T}_0^{(K,A)}$ and $T_\varepsilon^{(K,A)}$ (defined in \eqref{T0K}) for the process $\tilde{\tilde{N}}$, then we will prove in Section \ref{secondphase}:
\begin{pro}[Neutral genealogies during the third phase]\label{prothirdphase} Let $i$ and $j$ be two distinct $a$-individuals sampled uniformly at the end of the sweep. Under Assumption \ref{asstotale}, there exist two finite constants $c$ and $\varepsilon_0$ such that for every $\varepsilon\leq \varepsilon_0$, $$
\underset{K \to \infty}{\limsup}\left\{\Big| \mathbb{P}^{(3)}(R2(i)^{(3,ga)}) -
( 1- \bar{q}_2 ) \Big|+\Big| \mathbb{P}^{(3)}(NR(i)^{(3)}) -
\bar{q}_2 \Big|+\Big|\mathbb{P}^{(3)}(NC(i,j)^{(3)})-1\Big| \right\}\leq c \sqrt{\varepsilon}.$$ \end{pro} In particular, there is no recombination with the $A$-population during the third phase. As for the probabilities of the first two events
in Proposition \ref{prop:1phase_probg1}, this result is quite intuitive, as the duration of the third phase is close to $\log K/|S_{Aa}|$.\\
\noindent \textbf{Independence:} Finally, we again consider the population process $N$ and state a proposition which enables us to pass from statements about single sampled individuals to the statement of Theorem \ref{mainresult} for the whole sample jointly. To this end, let us introduce a partition $ \Theta^{(K,1)}_d \in \mathcal{P}_d^*$ which is the analog of $\Theta^K_d$ when the $d$ individuals are sampled at the end of the first phase and not at the end of the sweep. Recall Definitions \ref{defdelta} and \ref{defpartition}, and denote by $|R2^{(3,ga)}|_d$ (resp. $|NR^{(3)}|_d$) the number of $a$-individuals in a $d$-sample taken at the end of the sweep whose neutral alleles originate from two distinct $a$-individuals (resp. from the same $a$-individual) at the beginning of the third phase. Then we have the following result:
\begin{pro}\label{proindi} Let Assumption \ref{asstotale} hold. Then there exist two finite constants $c$ and $\varepsilon_0$ such that for every $\varepsilon\leq \varepsilon_0$, the ancestral relationships of a $d$-sample taken at the end of the first phase (time $T_\varepsilon^K$) satisfy for every $(m_k,1\leq k \leq 4) \in \mathbb{Z}_+^4$: \begin{multline*}
\limsup_{K\rightarrow \infty} \Big| \mathbb{P}( |\Theta^{(K,1)}_d|_k=m_k,1\leq k \leq 4|T_\varepsilon^K\leq S_\varepsilon^K)\\
- \mathbf{1}_{\{m_1+m_2+m_3+m_4=d\}}\frac{d!}{m_1!m_2!m_3!m_4!}(q_1q_2)^{m_1}(q_1(1-q_2))^{m_2}q_3^{m_3}(1-q_1-q_3)^{m_4} \Big| \leq c {\varepsilon}. \end{multline*} In the same way, the neutral genealogy of a $d$-sample taken at the end of the sweep satisfies for every $(m_k,1\leq k \leq 2) \in \mathbb{Z}_+^2$: \begin{equation*}
\limsup_{K\rightarrow \infty} \Big| \mathbb{P}((|R2^{(3,ga)}|_d,|NR^{(3)}|_d)=(m_1,m_2)|\mathcal{N}_\varepsilon^K) -\mathbf{1}_{\{m_1+m_2=d\}}
\frac{d!}{m_1!m_2!}(1-\bar{q}_2)^{m_1}\bar{q}_2^{m_2} \Big| \leq c{\varepsilon}. \end{equation*} \end{pro} Proposition \ref{proindi} is a key result: we only need to focus on individual neutral genealogies to get general results on the genealogy of a $d$-sample with respect to the neutral loci.
It will be proven in Section \ref{proofindep}.
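The multinomial limit in Proposition \ref{proindi} can be sanity-checked numerically: for any admissible choice of the limit frequencies, the limiting probabilities sum to one over all admissible $(m_k)$. A small sketch with hypothetical values of $q_1,q_2,q_3$ and of the sample size $d$:

```python
from itertools import product
from math import factorial, isclose

# Hypothetical limit frequencies: any 0 < q1, q2 < 1 and 0 < q3 < 1 - q1 work;
# d is the sample size.
q1, q2, q3, d = 0.6, 0.7, 0.1, 5
p = [q1 * q2, q1 * (1 - q2), q3, 1 - q1 - q3]  # the four limiting type probabilities

total = 0.0
for m in product(range(d + 1), repeat=4):
    if sum(m) != d:
        continue
    coeff = factorial(d)
    weight = 1.0
    for mk, pk in zip(m, p):
        coeff //= factorial(mk)
        weight *= pk ** mk
    total += coeff * weight

# The limiting multinomial probabilities sum to one over all admissible (m_k).
assert isclose(total, 1.0, rel_tol=1e-9)
```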
\subsection{Proof of Theorem \ref{mainresult}}
Let $i$ be an individual sampled uniformly at the end of the sweep. The idea of the proof is the following: in a first step, we list certain compositions of coalescent and recombination events leading to specific ancestral relationships, which can be described by blocks of a partition of $\Delta_d$. Then we approximate the probabilities of the described events and prove that these probabilities sum to one up to a constant times $\sqrt{\varepsilon}$ for some fixed small $\varepsilon$. This shows that in the limit of large $K$ the neutral genealogy of the individual $i$ belongs, with probability close to one, to those described on page \pageref{defgenealogy}. In a second step we use Proposition \ref{proindi} to treat the neutral genealogies of the $d$ sampled individuals independently.
\begin{enumerate}
\item[i)] We consider two possible trajectories such that the alleles at both neutral loci originate from the mutant: either the two neutral loci separate inside the $a$-population during the third phase and coalesce during the first phase,
or they stay in the $a$-population and do not separate during the whole sweep (see individual $1$ in Figure \ref{schemagen}):
\begin{multline*}
\Big(R2(i)^{(3,ga)}\cap NR(i1)^{(2)} \cap NR(i2)^{(2)} \cap NC(i1,i2)^{(2)} \cap [ NR(i1)^{(1)}\sqcup R2(i1)^{(1)}] \cap NR(i2)^{(1)}\Big)\\ \bigsqcup \Big(NR(i)^{(3)} \cap NR(i)^{(2)} \cap NR(i)^{(1)} \Big), \end{multline*}
where $\sqcup$ denotes the disjoint union, and we denote by $i1$ and $i2$ the labels of the parents of the first and second neutral loci of $i$, respectively,
at the end of the second phase (the way we label the $a$-individuals has no importance as they are exchangeable).
\item[ii)] We consider two possible trajectories such that $(i,1)$ originates from the mutant and $(i,2)$ originates
from some $A$-individual: \begin{multline*} \Big(R2(i)^{(3,ga)}\cap NR(i1)^{(2)} \cap NR(i2)^{(2)} \cap NC(i1,i2)^{(2)} \cap [NR(i1)^{(1)}\sqcup R2(i1)^{(1)}] \\
\cap [ R12(i2)^{(1)}\sqcup R1|2(i2)^{(1)}\sqcup R2(i2)^{(1)}] \Big) \bigsqcup \Big(NR(i)^{(3)} \cap NR(i)^{(2)} \cap R2(i)^{(1)} \Big).\end{multline*}
The first bracket considers a separation of the two neutral loci during the third phase. As a consequence, the fate of the first neutral locus of individual $i2$ during the first phase has no consequence on the neutral genealogy of $i$. This is why we consider the event $\{R12(i2)^{(1)}\sqcup R1|2(i2)^{(1)}\sqcup R2(i2)^{(1)}\}$ and not only $\{ R2(i2)^{(1)} \}$. The second bracket corresponds to individual $2$ in Figure \ref{schemagen}. \item[iii)] We consider one possible trajectory such that $(i,1)$ originates from some $A$-individual and $(i,2)$ originates from the mutant (see individual $5$ in Figure \ref{schemagen}) $$ R2(i)^{(3,ga)}\cap NR(i1)^{(2)} \cap NR(i2)^{(2)} \cap NC(i1,i2)^{(2)}
\cap [R12(i1)^{(1)}\sqcup R1|2(i1)^{(1,ga)}] \cap NR(i2)^{(1)}$$
\item[iv)] We consider one possible trajectory such that $(i,1)$ and $(i,2)$ originate from the same $A$-individual
(see individual $3$ in Figure \ref{schemagen}) $$ NR(i)^{(3)} \cap NR(i)^{(2)} \cap R12(i)^{(1)} $$
\item[v)] Finally, we consider two possible trajectories such that $(i,1)$ and $(i,2)$ originate from distinct $A$-individuals (see individual $4$ in Figure \ref{schemagen} for the second bracket):
\begin{multline*} \Big(R2(i)^{(3,ga)}\cap NR(i1)^{(2)} \cap NR(i2)^{(2)} \cap NC(i1,i2)^{(2)}\cap [R12(i1)^{(1)}\sqcup R1|2(i1)^{(1)}] \\
\cap [ R12(i2)^{(1)}\sqcup R1|2(i2)^{(1)}\sqcup R2(i2)^{(1)}] \Big)
\bigsqcup \Big(NR(i)^{(3)} \cap NR(i)^{(2)} \cap R1|2(i)^{(1,ga)} \Big).\end{multline*} \end{enumerate}
Thanks to \eqref{A6frommywork} and \eqref{diffprobaphase3} to \eqref{couplage2.2}, we know that for all non-negligible measurable events $C^{(1)}$, $C^{(2)}$ and $C^{(3)}$ occurring during the first, second and third phase respectively, { \begin{equation}\label{decompoC123}
\mathbb{P}( C^{(1)}, C^{(2)}, C^{(3)}, \text{Fix}^K )= \mathbb{P}( C^{(1)}, C^{(2)}, C^{(3)}, \mathcal{N}_\varepsilon^K,\{T_0^{(K,A)}<T_\varepsilon^{(K,A)} \wedge S_\varepsilon^{(K,A)}\}) +O_K(\varepsilon) \end{equation} where $O_K(\varepsilon)$ is a function of $K$ and $\varepsilon$ satisfying
\begin{equation}\label{defOKeps}\limsup_{K \to \infty}|O_K(\varepsilon)|\leq c \varepsilon,\end{equation} for $\varepsilon\leq \varepsilon_0$ where $\varepsilon_0$ and $c$ are finite. Using the same inequalities we can decompose the right hand side of \eqref{decompoC123} as follows \begin{multline*}
\mathbb{P}(C^{(1)},\{T_\varepsilon^K<\infty\})+\mathbb{P}(C^{(2)},\{ \textstyle{\frac{N_A(T_\varepsilon^K+t_\varepsilon)}{K} }
\in [\omega_1,\omega_2],|\textstyle{\frac{N_a(T_\varepsilon^K+t_\varepsilon)}{K}} -\bar{n}_a| \leq \varepsilon \}|C^{(1)},\{T_\varepsilon^K\leq S_\varepsilon^K\})\\
+ \mathbb{P}(C^{(3)},\{T_0^{(K,A)}<T_\varepsilon^{(K,A)} \wedge S_\varepsilon^{(K,A)}\}|C^{(1)},C^{(2)}, \mathcal{N}_\varepsilon^K)+O_K(\varepsilon). \end{multline*} Then from \eqref{couplage2.1} we get $$ \mathbb{P}(C^{(1)},\{T_\varepsilon^K<\infty\})=\mathbb{P}^{(1)}(\tilde{C}^{(1)})\mathbb{P}(T_\varepsilon^K<\infty)+O_K(\varepsilon) ,$$ from \eqref{boundsecondphase} $$ \mathbb{P}(C^{(2)},\{ \textstyle{\frac{N_A(T_\varepsilon^K+t_\varepsilon)}{K} }
\in [\omega_1,\omega_2],|\textstyle{\frac{N_a(T_\varepsilon^K+t_\varepsilon)}{K}} -\bar{n}_a| \leq \varepsilon \}|C^{(1)},\{T_\varepsilon^K\leq S_\varepsilon^K\})= \mathbb{P}(C^{(2)}
|C^{(1)},\{T_\varepsilon^K\leq S_\varepsilon^K\})+O_K(\varepsilon),$$ and from \eqref{diffprobaphase3} and \eqref{couplage2.2}
$$ \mathbb{P}(C^{(3)},\{T_0^{(K,A)}<T_\varepsilon^{(K,A)} \wedge S_\varepsilon^{(K,A)}\}|C^{(1)},C^{(2)}, \mathcal{N}_\varepsilon^K)=
\mathbb{P}^{(3)}(\tilde{\tilde{C}}^{(3)}|C^{(1)}, C^{(2)})+O_K(\varepsilon),$$ where $\tilde{C}^{(1)}$ (resp. $\tilde{\tilde{C}}^{(3)}$) corresponds to the event $C^{(1)}$ (resp. $C^{(3)}$)
expressed in terms of the process $\tilde{N}$ (resp. $\tilde{\tilde{N}}$). Putting everything together we finally obtain \begin{equation}\label{decomproba123}
\mathbb{P}( C^{(1)}, C^{(2)}, C^{(3)}| \text{Fix}^K )=
\mathbb{P}^{(3)}(\tilde{\tilde{C}}^{(3)}|C^{(1)}, C^{(2)})\mathbb{P}( C^{(2)}|C^{(1)},\{T_\varepsilon^K\leq S_\varepsilon^K\})\mathbb{P}^{(1)}(\tilde{C}^{(1)})+O_K(\varepsilon). \end{equation}} By applying Propositions \ref{prop:1phase_probg1}, \ref{prosecondphase}, \ref{prothirdphase} and \ref{proindi} we then can calculate the probabilities corresponding to the five sets from Definition \ref{defpartition} and get the probabilities $(p_k, 1 \leq k \leq 5)$ defined in \eqref{defpi}, which sum to one. Let us detail the calculations for the case $i)$: by applying \eqref{decomproba123}, Proposition \ref{prosecondphase} and the Markov property, the probability to see one of the two trajectories described in $i)$ is \begin{multline}\label{mathcalPi1} \mathcal{P}(i,1)= \mathbb{P}^{(3)}(R2(i)^{(3,ga)}) \mathbb{P}^{(1)}([ NR(i)^{(1)}\sqcup R2(i)^{(1)}] \cap NR(j)^{(1)})\\ +\mathbb{P}^{(3)}(NR(i)^{(3)})\mathbb{P}^{(1)}( NR(i)^{(1)})+O_K(\varepsilon) , \end{multline} where $i$ and $j$ are two distinct individuals (exchangeability). But thanks to Proposition \ref{proindi} we know that the neutral genealogies of individuals $i$ and $j$ are nearly independent. Hence adding Proposition \ref{prop:1phase_probg1} leads to $$ \mathbb{P}^{(1)}([ NR(i)^{(1)}\sqcup R2(i)^{(1)}] \cap NR(j)^{(1)})=(q_1q_2+q_1(1-q_2))q_1q_2+ O_K({\varepsilon}).$$ Applying Propositions \ref{prop:1phase_probg1} and \ref{prothirdphase} in \eqref{mathcalPi1} yields $$ \mathcal{P}(i,1)= (1-\bar{q}_2)q_1^2q_2+\bar{q}_2 q_1q_2+ O_K(\sqrt{\varepsilon})=p_1+ O_K(\sqrt{\varepsilon}),$$ where we recall the definition of $p_1$ in \eqref{defpi}.\\
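As a numerical consistency check, the five probabilities $(p_k,1\leq k \leq 5)$, reconstructed here from the trajectories i)--v) together with the limits in Propositions \ref{prop:1phase_probg1} and \ref{prothirdphase} (the actual definitions are in \eqref{defpi}, not displayed in this excerpt), do sum to one; the numerical values of $q_1,q_2,q_3,\bar{q}_2$ below are hypothetical:

```python
# Hypothetical values: any 0 < q1, q2, qbar2 < 1 and 0 < q3 < 1 - q1 work.
q1, q2, q3, qbar2 = 0.6, 0.7, 0.1, 0.8

p1 = (1 - qbar2) * q1**2 * q2 + qbar2 * q1 * q2                      # case i)
p2 = (1 - qbar2) * q1 * (1 - q1 * q2) + qbar2 * q1 * (1 - q2)        # case ii)
p3 = (1 - qbar2) * (1 - q1) * q1 * q2                                # case iii)
p4 = qbar2 * q3                                                      # case iv)
p5 = (1 - qbar2) * (1 - q1) * (1 - q1 * q2) + qbar2 * (1 - q1 - q3)  # case v)

# The five reconstructed case probabilities sum to one.
assert abs(p1 + p2 + p3 + p4 + p5 - 1.0) < 1e-12
```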
Finally, we get the asymptotic independence of the neutral genealogies of the $d$ sampled individuals during the first and third phases by applying the multinomial version of the de Finetti Representation Theorem (see \cite{diniz2014simple} Chapter 4 for a simple proof) to the result of Proposition \ref{proindi}. The asymptotic independence during the second phase follows from Proposition \ref{prosecondphase} as, with high probability, nothing happens.
\section{Number of births and deaths during the selective sweep}\label{technicalsection}
In this section we derive some results on birth and death numbers of the population processes $\tilde{N}$ and $\tilde{\tilde{N}}$,
needed in Sections \ref{firstphase} and \ref{secondphase}
to prove Propositions \ref{prop:1phase_probg1}, \ref{prosecondphase} and \ref{prothirdphase}.
\subsection{Coupling with supercritical birth and death processes during the first phase}
We are interested in the dynamics of the process $\tilde{N}_a$ during the first phase, that is, before the time $\tilde{T}^K_\varepsilon$. The idea is to couple this process with two supercritical birth and death processes, and deduce its dynamics from well-known results on birth and death processes. Recall the definition of the rescaled invasion fitness $s$ in \eqref{probafix},
and for $\varepsilon < S_{aA}/( 2 {C_{a,A}C_{A,a}}/{C_{A,A}}+C_{a,a} )$ define the two approximations, \begin{equation}\label{def_s_-s_+} s-\frac{ 2 {C_{a,A}C_{A,a}}+C_{a,a}{C_{A,A}}}{f_a{C_{A,A}}}\varepsilon=:s_-(\varepsilon)\leq s \leq s_+(\varepsilon):=s + 2 \frac{C_{a,A}C_{A,a}}{f_aC_{A,A}}\varepsilon. \end{equation} Then for $t < \tilde{T}_\varepsilon^K \wedge S_\varepsilon^K$ the death rate of $a$-individuals in the process $\tilde{N}$ equals that of the process $N$,
defined in \eqref{def:totalbd} and satisfies \begin{equation}\label{ineqtxmort} 1-s_+(\varepsilon)\leq \frac{{d}_{a}(\tilde{N}(t))}{f_a\tilde{N}_a(t)}= 1-s+\frac{C_{a,A}}{f_aK} (\tilde{N}_A(t)-\bar{n}_AK)+\frac{C_{a,a}}{f_aK}\tilde{N}_a(t)\leq 1-s_-(\varepsilon). \end{equation} For $S_\varepsilon^K\leq t <\tilde{T}_\varepsilon^K$, according to the definition of $\tilde{N}$ in \eqref{deftildeN}, the death rate of $a$-individuals also satisfies \begin{equation}\label{ineqtxmort2} 1-s_+(\varepsilon)\leq \frac{{d}_{a}^K(\tilde{N}_{A}{({S_\varepsilon^K}^-)} e_{A\underline{\mathfrak{b}\mathfrak{c}}},\tilde{N}^{(a)}(t))}{f_a\tilde{N}_a(t)}\leq 1-s_-(\varepsilon). \end{equation} Hence, following Theorem 2 in \cite{champagnat2006microscopic} we can construct the processes $Z^-_\varepsilon$, $(\tilde{N}_A,\tilde{N}_a)$ and $Z^+_\varepsilon$ on the same probability space such that almost surely: \begin{equation}\label{couplage1}
Z^-_\varepsilon(t) \leq \tilde{N}_a(t) \leq Z^+_\varepsilon(t), \quad \text{for all } t < \tilde{T}^K_\varepsilon, \end{equation} where for $*\in \{-,+\}$, $Z^*_\varepsilon$ is a birth and death process with initial state $1$, and individual birth and death rates $f_a$ and $f_a (1-s_*(\varepsilon))$.\\
Let $\sigma_u^K$ denote the time of the first hitting of $\lfloor u \rfloor$ by the process $\tilde{N}_a$: \begin{equation}\label{tpssigma}
\sigma^K_u:=\inf \{ t\geq 0, \tilde{N}_a(t)= \lfloor u \rfloor\}, \quad u \in \mathbb{R}_+. \end{equation} If for $-1<s<1$, $\tilde{Z}^{(s)}$ is a random walk with jumps $\pm 1$ where up-jumps occur with probability $1/(2-s)$ and down-jumps with probability $(1-s)/(2-s)$, we introduce
\begin{equation} \label{defPronde} \mathcal{P}_i^{(s)}:= \mathcal{L} \left( \tilde{Z}^{(s)} | \tilde{Z}^{(s)}(0)=i \right), \quad i \in \mathbb{N}, \end{equation} the law of $\tilde{Z}^{(s)}$ when the initial state is $i \in \mathbb{N}$, and for every $\rho \in \mathbb{R}_+$ the stopping time \begin{equation} \label{deftauas}
\tau_\rho:=\inf \ \{ n \in \mathbb{Z}_+, \tilde{Z}^{(s)}_n= \lfloor \rho \rfloor \}. \end{equation}
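The hitting probabilities of the walk $\tilde{Z}^{(s)}$ used throughout (via \eqref{hitting_times}, not displayed in this excerpt) are the classical gambler's-ruin probabilities, e.g. $\mathcal{P}_i^{(s)}(\tau_{\varepsilon K}<\tau_0)=(1-(1-s)^i)/(1-(1-s)^{\lfloor \varepsilon K \rfloor})$. A Monte Carlo sketch with illustrative parameters checks this formula:

```python
import random

random.seed(42)

def hits_top_first(i, N, s):
    """One path of Z^(s): up w.p. 1/(2-s), down w.p. (1-s)/(2-s); returns
    True iff the walk started at i reaches N before 0."""
    z = i
    while 0 < z < N:
        z += 1 if random.random() < 1.0 / (2.0 - s) else -1
    return z == N

# Illustrative parameters (N plays the role of the level eps*K).
s, i, N, runs = 0.3, 3, 20, 20000
mc = sum(hits_top_first(i, N, s) for _ in range(runs)) / runs
exact = (1 - (1 - s) ** i) / (1 - (1 - s) ** N)  # gambler's-ruin probability
assert abs(mc - exact) < 0.02
```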
\subsection{Number of jumps of $\tilde{N}_a$ during the first phase}
\subsubsection{Expectation of the number of upcrossings}
Let us recall Equation \eqref{deftpssauts} and consider $k<\lfloor \varepsilon K \rfloor $. Then the number of upcrossings from $k$ to $k+1$ during the first phase is: \begin{equation} \label{Umjk} U^K_{k}(1):=\# \{ m , \tau_m^K< \tilde{T}_\varepsilon^K, (\tilde{N}_a(\tau_{m}^K),\tilde{N}_a(\tau_{m+1}^K))=(k,k+1)\}, \end{equation} where $(1)$ stands for the first phase. Recall \eqref{compact} and \eqref{def_s_-s_+}, and introduce a real number $\lambda_\varepsilon$ \begin{equation}\label{deflambda}
\lambda_\varepsilon:=(1-s_-(\varepsilon))^3(1-s_+(\varepsilon))^{-2}, \end{equation} which belongs to $(0,1)$ for $\varepsilon$ small enough. We have the following result:
\begin{lem}\label{lemmaexpup} There exist three positive finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K\geq K_0$ and $\varepsilon\leq \varepsilon_0$: \begin{enumerate}
\item[] If $j\leq k<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$, \begin{equation} \label{expupa2}
\Big|\mathbb{E}^{(1)}_{(n_A,j)}[U^K_{k}{(1)}]-\frac{1-(1-s)^{\lfloor \varepsilon K \rfloor -k}-(1-s)^{k+1}}{s} \Big| \leq c \varepsilon . \end{equation} \item[] If $k<j<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$, \begin{equation} \label{expupa3}
\mathbb{E}^{(1)}_{(n_A,j)}[U^K_{k}(1)]\leq \frac{(1-s_-(\varepsilon))^{j-k}}{s_+(\varepsilon)s_-^2(\varepsilon)}. \end{equation} \item[] If $k'\leq k<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$,
\begin{equation}\label{covUmlk} \Big|\cov^{(1)}_{(n_A,j)}(U_{k}^{K}(1),U_{k'}^{K}(1))\Big|
\leq c \Big( \lambda_\varepsilon^{(k-k')/2} +\varepsilon\Big). \end{equation} \end{enumerate} \end{lem}
\begin{proof} The idea, which comes from \cite{schweinsberg2005random} and will be used several times throughout Section \ref{technicalsection}, is to compare the number of upcrossings with geometric random variables. Suppose first that $j\leq k$. Then on the event $\{\tilde{T}_{\varepsilon}^K<\infty\}$ the process $\tilde{N}_a$ necessarily jumps from $k$ to $k+1$. From $k+1$, it either reaches $\lfloor \varepsilon K \rfloor$ before returning to $k$, or it returns to $k$ and jumps again from $k$ to $k+1$, and so on. We first approximate the probability that there is only one jump from $k$ to $k+1$. As we do not know the value of $\tilde{N}_A$ when $\tilde{N}_a$ hits $k$ for the first time, we bound the probability using the extreme values it can take. Recall Definitions \eqref{tpssigma} and \eqref{deftauas}. The upper bound is derived as follows: \begin{eqnarray} \label{onlyonejump}
\mathbb{P}^{(1)}_{(n_A,j)}(U^K_{k}(1)=1)&\leq & \underset{n_A\in I_\varepsilon^K±1}{\sup}\mathbb{P}^{(1)}_{(n_A,k+1)}(\tilde{T}_{\varepsilon}^K<\sigma_{k}^K)\\ &=&\underset{n_A\in I_\varepsilon^K±1}{\sup} \frac{\mathbb{P}_{(n_A,k+1)}(\tilde{T}_{\varepsilon}^K<\sigma_{k}^K)} { \mathbb{P}_{(n_A,k+1)}(\tilde{T}_{\varepsilon}^K<\infty)} \leq
q^{(s_+(\varepsilon),s_-(\varepsilon))}_{k},\nonumber \end{eqnarray} where we use \eqref{defP1} and for $(s_1,s_2) \in (0,1)^2$ \begin{equation}\label{defqks1s2} q^{(s_1,s_2)}_{k}:=\frac{\mathcal{P}_{k+1}^{(s_1)}(\tau_{ \varepsilon K }<\tau_k)}{\mathcal{P}_{k+1}^{(s_2)}(\tau_{ \varepsilon K }<\tau_0)} .\end{equation} Similarly, we show that $ \mathbb{P}^{(1)}_{(n_A,j)}(U^K_{k}(1)=1)\geq q^{(s_-(\varepsilon),s_+(\varepsilon))}_{k}$. In the same way, we can approximate the probability that there are at least three jumps from $k$ to $k+1$ knowing that there are at least two jumps, and so on. We deduce that we can construct two geometric random variables $G_1$ and $G_2$, possibly on an enlarged space, with respective parameters $q^{(s_+(\varepsilon),s_-(\varepsilon))}_{k}\wedge 1$ and $q^{(s_-(\varepsilon),s_+(\varepsilon))}_{k}$ such that
\begin{equation}\label{compaU'|Uj} G_1\leq U_{k}^{K}(1) \leq G_2, \quad \text{a.s.} \end{equation} In particular, taking the expectation we get from \eqref{hitting_times}
\begin{multline} \label{encadexpUmjk}
\frac{(1-(1-s_+(\varepsilon))^{\lfloor \varepsilon K \rfloor -k})(1-(1-s_-(\varepsilon))^{k+1})}{s_+(\varepsilon)(1-(1-s_-(\varepsilon))^{\lfloor \varepsilon K \rfloor})} \leq \mathbb{E}^{(1)}_{(n_A,j)}[U^K_{k}(1)] \\ \leq \frac{(1-(1-s_-(\varepsilon))^{\lfloor \varepsilon K \rfloor -k})(1-(1-s_+(\varepsilon))^{k+1})}{s_-(\varepsilon)(1-(1-s_+(\varepsilon))^{\lfloor \varepsilon K \rfloor})}. \end{multline}
According to \eqref{probafix} and \eqref{def_s_-s_+}, $0<s<1$ and
$|s_+(\varepsilon)-s_-(\varepsilon)|\leq (4C_{a,A}C_{A,a}+C_{a,a}C_{A,A})\varepsilon/(f_aC_{A,A})$. Hence the last inequality and straightforward calculations lead to \eqref{expupa2}.\\
Let us now assume that $k<j$. Then we have \begin{multline*}
\mathbb{P}^{(1)}_{(n_A,j)}(U^K_{k}{(1)}\geq 1)\leq \underset{n_A\in I_\varepsilon^K±1}{\sup}\mathbb{P}_{(n_A,j)}^{(1)}(\sigma_k^K<\tilde{T}_\varepsilon^K)
= \underset{n_A\in I_\varepsilon^K±1}{\sup}\frac{\mathbb{P}_{(n_A,j)}(\tilde{T}_\varepsilon^K<\infty|\sigma_k^K<\tilde{T}_\varepsilon^K) \mathbb{P}_{(n_A,j)}(\sigma_k^K<\tilde{T}_\varepsilon^K)} {\mathbb{P}_{(n_A,j)}(\tilde{T}_\varepsilon^K<\infty)}\\
\leq \frac{\mathcal{P}_{k}^{(s_+(\varepsilon))}(\tau_{\varepsilon K}<\tau_0)\mathcal{P}_{j}^{(s_-(\varepsilon))}(\tau_k<\tau_{\varepsilon K})} {\mathcal{P}_{j}^{(s_-(\varepsilon))}(\tau_{\varepsilon K}<\tau_0)} \leq \frac{(1-s_-(\varepsilon))^{j-k}}{s_+(\varepsilon)s_-(\varepsilon)}, \end{multline*}
where we again used \eqref{defP1} and \eqref{hitting_times}. Moreover, the same proof as for \eqref{compaU'|Uj} leads to: \begin{equation*}
\mathbb{E}^{(1)}_{(n_A,j)}[U^K_{k}{(1)}|U^K_{k}{(1)}\geq 1]\leq \Big(q^{(s_-(\varepsilon),s_+(\varepsilon))}_{k}\Big)^{-1} \leq s_-^{-1}(\varepsilon) ,
\end{equation*} where we used Equation \eqref{minqk}. This ends the proof of \eqref{expupa3}. The last inequality, \eqref{covUmlk}, has been stated in \cite{smadi2014eco} (Equation $(7.26)$). \end{proof}
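The geometric-comparison argument can be illustrated on the pure biased walk $\tilde{Z}^{(s)}$ itself (dropping the $A$-coordinate): conditioned on hitting a level $N$ before $0$, the number of upcrossings from $k$ to $k+1$ has mean $(1-(1-s)^{N-k})(1-(1-s)^{k+1})/(s(1-(1-s)^{N}))$, the analog of \eqref{encadexpUmjk} with $s_\pm$ replaced by $s$. An illustrative Monte Carlo check:

```python
import random

random.seed(1)

def mean_upcrossings(k, N, s, runs):
    """Mean number of k -> k+1 upcrossings of the walk started at 1, averaged
    over trajectories that reach N before 0 (the analog of P^(1))."""
    p_up = 1.0 / (2.0 - s)
    total, succ = 0, 0
    while succ < runs:
        z, u = 1, 0
        while 0 < z < N:
            up = random.random() < p_up
            if up and z == k:
                u += 1
            z += 1 if up else -1
        if z == N:          # keep only paths that "fix"
            succ += 1
            total += u
    return total / succ

s, k, N = 0.3, 10, 20       # illustrative parameters
theta = 1 - s
exact = (1 - theta ** (N - k)) * (1 - theta ** (k + 1)) / (s * (1 - theta ** N))
mc = mean_upcrossings(k, N, s, 5000)
assert abs(mc - exact) < 0.25
```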
\subsubsection{Expectation of hitting numbers}
Let us recall \eqref{Umjk} and introduce for $0< j\leq k<\lfloor \varepsilon K \rfloor$ the total number of downcrossings from $k$ to $k-1$, \begin{equation} \label{Dk} D_{k}^K(1):=\# \{m, \tau_m^K\leq \tilde{T}_\varepsilon^K, (\tilde{N}_a(\tau_m^K),\tilde{N}_a(\tau_{m+1}^K))=(k,k-1) \},\end{equation} and the number of times the process $\tilde{N}_a$ hits the state $k$ before the time $\tilde{T}_\varepsilon^K$: \begin{equation} \label{Vk} {V}_{k}^K (1):=U_{k-1}^K(1)+D_{k+1}^K(1)=\# \{m ,\tau_m^K\leq \tilde{T}_\varepsilon^K, \tilde{N}_a(\tau_{m-1}^K)\neq k , \tilde{N}_a(\tau_m^K)=k\}.\end{equation}
Recall the definition of $\lambda_\varepsilon \in (0,1)$ in \eqref{deflambda}. We can state the following Lemma, which will be useful to get
bounds on the number of upcrossings of the $A$-population during the first phase (see Lemma \ref{lemespvarmathcalU}):
\begin{lem}\label{lemV} There exist three finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K\geq K_0$, $\varepsilon\leq \varepsilon_0$ and $k'< k <\lfloor \varepsilon K \rfloor $:
\begin{equation*}
\Big| \mathbb{E}^{(1)}[{V}_{k}^K (1)] -\frac{(2-s)(1-(1-s)^{\lfloor \varepsilon K\rfloor -k}-(1-s)^k)}{s}\Big|\leq c\varepsilon,
\ \ \text{and} \ \
|\cov^{(1)}({V}_{k'}^K (1),{V}_{k}^K (1))|\leq c(\varepsilon + \lambda_\varepsilon^{(k-k')/2}).
\end{equation*} \end{lem}
\begin{proof}
Under $\mathbb{P}^{(1)}$ the $a$-population size goes from $1$ to $\lfloor \varepsilon K \rfloor$, thus the number of downcrossings from $k+1$ to $k$ is equal to the number of upcrossings from $k$ to $k+1$ minus $1$. Combining this with \eqref{Vk} yields \begin{equation*} {V}_{k}^{K} (1)={U}_{k-1}^{K} (1)+{U}_{k}^{K} (1)-1, \quad \mathbb{P}^{(1)}-a.s. \end{equation*} We obtain the first part of the Lemma by taking the expectation and applying \eqref{expupa2}.
The proof of the second part follows that of \eqref{covUmlk}, and once again we can find the details in the proof of
Equation (7.26) in \cite{smadi2014eco}. \end{proof}
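The pathwise identity ${V}_{k}^{K}(1)={U}_{k-1}^{K}(1)+{U}_{k}^{K}(1)-1$ used in the proof can be checked on simulated paths of a simple biased walk conditioned to fix (an illustrative stand-in for $\tilde{N}_a$ under $\mathbb{P}^{(1)}$):

```python
import random

random.seed(7)

def fixation_path(N, s):
    """Sample paths of the biased walk started at 1 until one reaches N
    before 0, and return that successful path."""
    p_up = 1.0 / (2.0 - s)
    while True:
        path = [1]
        while 0 < path[-1] < N:
            path.append(path[-1] + (1 if random.random() < p_up else -1))
        if path[-1] == N:
            return path

N, s = 30, 0.2              # illustrative parameters
path = fixation_path(N, s)
steps = list(zip(path, path[1:]))
up = [sum(1 for a, b in steps if (a, b) == (k, k + 1)) for k in range(N)]
for k in range(1, N):
    visits = sum(1 for a, b in steps if b == k)  # entries into state k
    assert visits == up[k - 1] + up[k] - 1       # V_k = U_{k-1} + U_k - 1
```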
\subsubsection{Number of upcrossings during an excursion above or below a given level} \label{sectionexcu} We now focus on the number of upcrossings from $k$ to $k+1$ during an excursion above or below $l$. Let us denote by $\sigma_l^K(1)$ the jump number of the first hitting of $l$ before the end of the first phase: for $l<\lfloor\varepsilon K \rfloor$, \begin{equation}\label{defsigma}
\sigma_l^K(1):=\inf \{ m, \tau_m^K \leq \tilde{T}_\varepsilon^K, \tilde{N}_a(\tau_m^K)=l \}, \end{equation} and for $1\leq k,l<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$, \begin{equation}\label{defjumpexc}
U^K_{n_A,l,k}(1):= \# \Big\{ m<\sigma_l^K(1), (\tilde{N}_a(\tau_{m}^K),\tilde{N}_a(\tau_{m+1}^K))=(k,k+1) \Big\}. \end{equation} Then, if we denote by $\mu_\varepsilon$ the real number \begin{equation}\label{defmu} \mu_\varepsilon:= (1-s_-(\varepsilon))^2(1-s_+(\varepsilon))^{-1}, \end{equation} which belongs to $(0,1)$ for $\varepsilon$ small enough, we can derive the following bounds:
\begin{lem}\label{excunder} There exist three positive, finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K\geq K_0$, $\varepsilon\leq \varepsilon_0$, $1\leq k<l<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$, \begin{equation*}
\mathbb{E}^{(1)}_{(n_A,k+1)}[U^K_{n_A,k,l}(1)|\sigma_k^K(1)<\infty] \vee \mathbb{E}^{(1)}_{(n_A,l-1)}[U^K_{n_A,l,k}(1)]\leq c\mu_\varepsilon^{l-k}.
\end{equation*}
\end{lem}
\begin{proof} Equations (B.5) and (B.6) in \cite{smadi2014eco} state that for $k<l<\lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$,
$$ \mathbb{P}^{(1)}_{(n_A,k+1)}(U^K_{n_A,k,l}(1)\geq 1|\sigma_k^K(1)<\infty)\leq c(1-s_-(\varepsilon))^{l-k},$$ and
$$ \mathbb{P}^{(1)}_{(n_A,k+1)}(U^K_{n_A,k,l}(1)= 1|U^K_{n_A,k,l}(1)\geq 1,\sigma_k^K(1)<\infty) \geq c\Big(\frac{1-s_+(\varepsilon)}{1-s_-(\varepsilon)}\Big)^{l-k} $$ for a finite $c$. By comparing $U^K_{n_A,k,l}(1)$ with a geometric random variable we get the first inequality. To bound the expectation of the number of upcrossings from $k$ to $k+1$ during an excursion below $l$, we first bound the probability of having at least one jump from $k$ to $k+1$ during
such an excursion. By definition, $\tilde{N}_a$ necessarily hits $l-1$ during the excursion below $l$. Recall Definitions \eqref{defP1}, \eqref{tpssigma} and \eqref{deftauas}. Then for every $n_A$ in $I_\varepsilon^K±1$, \begin{eqnarray*}
\mathbb{P}^{(1)}_{(n_A,l-1)}(\sigma^K_k<\sigma^K_l|\sigma^K_l<\infty)&=& \frac{{\mathbb{P}}_{(n_A,l-1)}(\tilde{T}_\varepsilon^K<\infty |\sigma^K_k<\sigma^K_l) {\mathbb{P}}_{(n_A,l-1)}(\sigma^K_k<\sigma^K_l)} {{\mathbb{P}}_{(n_A,l-1)}(\tilde{T}_\varepsilon^K<\infty)}\\ & \leq & \frac{\mathcal{P}^{(s_+(\varepsilon))}_k(\tau_{\varepsilon K}<\tau_0)\mathcal{P}^{(s_-(\varepsilon))}_{l-1}(\tau_{k}<\tau_l)}{\mathcal{P}^{(s_-(\varepsilon))}_{l-1}(\tau_{\varepsilon K}<\tau_0)} \leq \frac{(1-s_-(\varepsilon))^{l-k-1}}{s_-(\varepsilon)}, \qquad \qquad \qquad \end{eqnarray*} where we used \eqref{hitting_times}. The next step consists in bounding the number of upcrossings from $k$ to $k+1$ during the excursion knowing that this number is larger than one: for $n_A \in I_\varepsilon^K±1$, \begin{eqnarray*}
\mathbb{P}^{(1)}_{(n_A,k+1)}(\sigma_l^K<\sigma_k^K)
&=& \frac{{\mathbb{P}}_{(n_A,k+1)}(\tilde{T}_\varepsilon^K<\infty |\sigma_l^K<\sigma_k^K) {\mathbb{P}}_{(n_A,k+1)}(\sigma_l^K<\sigma_k^K)} {{\mathbb{P}}_{(n_A,k+1)}(\tilde{T}_\varepsilon^K<\infty)}\\ & \geq & \frac{\mathcal{P}^{(s_-(\varepsilon))}_l(\tau_{\varepsilon K}<\tau_0)\mathcal{P}^{(s_-(\varepsilon))}_{k+1}(\tau_{l}<\tau_k)}{\mathcal{P}^{(s_+(\varepsilon))}_{k+1}(\tau_{\varepsilon K}<\tau_0)} \geq s_-^2(\varepsilon), \end{eqnarray*} where we again used \eqref{hitting_times}. Hence on the event $\{U^K_{n_A,l,k}(1)\geq 1\}$, $U^K_{n_A,l,k}(1)$ is smaller than a geometric random variable with parameter $s_-^2(\varepsilon)$ and we get: \begin{eqnarray*} \mathbb{E}^{(1)}_{(n_A,l-1)}[U^K_{n_A,l,k}(1)]&\leq & s_-^{-2}(\varepsilon)\mathbb{P}^{(1)}_{(n_A,l-1)}(U^K_{n_A,l,k}(1)\geq 1)\ \leq \ \frac{(1-s_-(\varepsilon))^{l-k-1}}{s_-^3(\varepsilon)},\end{eqnarray*} which ends the proof of Lemma \ref{excunder}. \end{proof}
\subsection{Number of jumps of $\tilde{N}_A$ during the first phase}\label{subsec:bdA1p}
We introduce for $k<\lfloor \varepsilon K \rfloor$ the number of upcrossings of the $A$-population when the $a$-population is of size $k$: \begin{equation} \label{mathcalUk} \mathcal{U}_{k}^K {(1)}:=\# \{m, \tau_m^K\leq \tilde{T}_\varepsilon^K, \tilde{N}_A(\tau_{m+1}^K)-\tilde{N}_A(\tau_m^K)=1 , \tilde{N}_a(\tau_m^K)=k \}.\end{equation} We are now able to get bounds for the expectations and covariances of these quantities:
\begin{lem}\label{lemespvarmathcalU}
There exist three finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K \geq K_0$, $\varepsilon\leq \varepsilon_0$ and $k<\lfloor \varepsilon K \rfloor$, $$\label{approxsummathcalUk}
\Big| \mathbb{E}^{(1)}\Big[ \sum_{i=1}^k \mathcal{U}_i^K(1) \Big]- \frac{f_A\bar{n}_AK\log k}{sf_a} \Big|\leq cK(1+ {\varepsilon}\log k) \quad \text{and} \quad \var^{(1)}\Big( \sum_{i=1}^k \mathcal{U}_i^K(1) \Big)\leq cK^2(1+{\varepsilon} \log^2 k). $$ \end{lem}
\begin{proof}
The proof is based on the comparison of the $A$- and $a$-population jump rates. Let us first focus on the $a$-population. For $k\leq \lfloor \varepsilon K \rfloor$ and $n_A \in I_\varepsilon^K±1$, \begin{align}\label{majkk+1}
\mathbb{P}^{(1)}_{(n_A,k)}(\tilde{N}_a(\delta t)\neq k) &=
\mathbb{P}_{(n_A,k)}(\tilde{T}_\varepsilon^K<\infty |\tilde{N}_a(\delta t)=k\pm 1) \mathbb{P}_{(n_A,k)}(\tilde{N}_a(\delta t)=k\pm 1)/\mathbb{P}_{(n_A,k)}(\tilde{T}_\varepsilon^K<\infty) \nonumber \\ &\leq \mathcal{P}_{k\pm 1}^{(s_+(\varepsilon))}(\tilde{T}_\varepsilon^K<\infty)
\mathbb{P}_{(n_A,k)}(\tilde{N}_a(\delta t)=k\pm 1)/ \mathcal{P}^{(s_{-}(\varepsilon))}_k(\tilde{T}_\varepsilon^K<\infty) \nonumber \\
& \leq \frac{(1+c\varepsilon)}{1-(1-s)^k}\Big((1-(1-s)^{k+1})f_ak+(1-s)(1-(1-s)^{k-1})(D_a+C_{aA}\bar{n}_A)k\Big) \delta t\nonumber\\ &= (1+c\varepsilon)f_a(2-s)k \delta t , \end{align} for a finite constant $c$ and $\varepsilon$ small enough, where
$\delta t$ is a small time step and, with a slight abuse of notation, we omit the $o(\delta t)$ terms.
We used the definition of $\mathbb{P}^{(1)}$ in \eqref{defP1} for the equality, Coupling \eqref{couplage1} for the first inequality, \eqref{hitting_times} for the second one, and the equality $ S_{aA}=f_a-D_a - C_{a,A}\bar{n}_A $ for the last one. Reasoning similarly we get: \begin{equation}\label{encaderatea}
(1-c\varepsilon)f_a(2-s)k \delta t\leq \mathbb{P}^{(1)}_{(n_A,k)}(\tilde{N}_a(\delta t)\neq k). \end{equation} Let us now focus on the number of upcrossings of the $A$-population. The definition of $\tilde{N}$ in \eqref{deftildeN} and Bayes' Theorem yield \begin{equation}\label{majstsA} (1-c\varepsilon)f_A\bar{n}_AK \delta t \leq \mathbb{P}^{(1)}_{(n_A,k)}(\tilde{N}_A( \delta t )= n_A+1) \leq (1+c\varepsilon)f_A\bar{n}_AK \delta t , \end{equation} for a finite $c $ and $\varepsilon$ small enough. Indeed, from Coupling \eqref{couplage1} and Equation \eqref{hitting_times} we get the following bound, independent of $n_A$ in $I_\varepsilon^K±1$: $$ \frac{1-(1-s_-(\varepsilon))^k}{1-(1-s_-(\varepsilon))^{\lfloor \varepsilon K \rfloor}}\leq \mathbb{P}_{(n_A,k)}(\tilde{T}_\varepsilon^K <\infty )\leq \frac{1-(1-s_+(\varepsilon))^k}{1-(1-s_+(\varepsilon))^{\lfloor \varepsilon K \rfloor}}. $$
Hence there exist two finite constants $c$ and $\varepsilon_0$ such that for every $\varepsilon\leq \varepsilon_0$,
if we introduce the parameters \begin{equation} \label{defqkeps} \frac{1}{q^{(1)}_k(\varepsilon)}:=1+(1-c\varepsilon)\frac{f_A\bar{n}_AK}{(2-s)f_ak}
\leq 1+(1+c\varepsilon)\frac{f_A\bar{n}_AK}{(2-s)f_ak}=:\frac{1}{q^{(2)}_k(\varepsilon)} ,\end{equation}
we can deduce from \eqref{majkk+1}--\eqref{majstsA} that
for $k <\lfloor \varepsilon K \rfloor$ \begin{equation}\label{tildemajV2} \sum_{V_k^K{(1)}}\Big({G}^i_{q^{(1)}_k(\varepsilon)}-1 \Big)\leq {\mathcal{U}}_k^K{(1)} \leq \sum_{V_k^K{(1)}}\Big({G}^i_{q^{(2)}_k(\varepsilon)}-1 \Big) ,\end{equation} where for $j \in \{1,2\}$, $({G}^i_{q^{(j)}_k(\varepsilon)}, i \in \mathbb{N})$ is a sequence of geometric random variables with parameter $q^{(j)}_k(\varepsilon)$
independent of $V_l^K{(1)}$ (defined in \eqref{Vk}) for all $l< \lfloor \varepsilon K \rfloor$. Hence a direct application of Lemmas \ref{lemV} and \ref{lemgeom} leads to \begin{equation}\label{majexpmathcalUDtilde}
\Big| \mathbb{E}^{(1)}\Big[ \mathcal{U}_k^K(1)\Big]-
\frac{f_A\bar{n}_AK}{sf_ak}(1-(1-s)^k-(1-s)^{\lfloor \varepsilon K \rfloor -k})\Big| \leq c\varepsilon \frac{K}{k}, \end{equation} for a finite $c$ and $\varepsilon$ small enough. This implies the first inequality of Lemma \ref{lemespvarmathcalU}.\\
Let us now bound the second moment of $ \mathcal{U}_k^K(1)$ and the expectation of ${\mathcal{U}}_k^K(1) {\mathcal{U}}_l^K(1)$ for $k\neq l$. The first upper bound follows again from a direct application of Lemmas \ref{lemV} and \ref{lemgeom}. We get \begin{equation}\label{majcarremathcalU} \mathbb{E}^{(1)}\Big[ (\mathcal{U}_k^K(1))^2\Big]\leq \mathbb{E}^{(1)}\Big[ \Big(\sum_{V_k^K{(1)}}{G}^i_{q^{(2)}_k(\varepsilon)} \Big)^2\Big] \leq \frac{2(\mathbb{E}^{(1)}[V_k^K{(1)}])^2}{(q^{(2)}_k(\varepsilon))^2}\leq 2(1+c\varepsilon)\Big(\frac{f_A\bar{n}_AK}{sf_ak} \Big)^2 ,\end{equation} for a finite $c$ and $\varepsilon$ small enough. A new application of the same Lemmas yields, for $k < l <\lfloor \varepsilon K \rfloor $ \begin{equation} \label{majprod}
\mathbb{E}^{(1)}\Big[ {\mathcal{U}}_k^K(1) {\mathcal{U}}_l^K(1)\Big]
\leq \frac{ \mathbb{E}^{(1)}[ V_k^K{(1)}V_l^K{(1)} ]}{q^{(2)}_k(\varepsilon)q^{(2)}_l(\varepsilon)}
\leq c(1+\varepsilon+ \lambda_\varepsilon^{(l-k)/2}) \frac{(f_A\bar{n}_AK)^2}{(f_as)^2 kl} , \end{equation} where we used that $\mathbb{E}^{(1)}[XY]=\mathbb{E}^{(1)}[X]\mathbb{E}^{(1)}[Y]+\cov^{(1)}(X,Y)$ for any real random variables $(X,Y)$. From \eqref{tildemajV2} to \eqref{majprod} and \eqref{lemma3.5} we deduce that there exists a finite $c$ such that for $\varepsilon$ small enough and $k <\lfloor \varepsilon K \rfloor $, \begin{equation} \label{majprod2} \mathbb{E}^{(1)}\Big[ \Big( \sum_{i=1}^{k} \mathcal{U}_i^K(1)\Big)^2\Big]
\leq
(1+c\varepsilon) \Big(\frac{f_A\bar{n}_AK\log k}{f_as }\Big)^2+cK^2 . \end{equation} Reasoning similarly to get the lower bound, we obtain
\begin{equation} \label{majprod3} \Big| \mathbb{E}^{(1)}\Big[ \Big( \sum_{i=1}^{k} \mathcal{U}_i^K(1)\Big)^2\Big]
- \Big(\frac{f_A\bar{n}_AK\log k}{f_as }\Big)^2\Big| \leq cK^2(1+\varepsilon \log^2 k) . \end{equation} Combining this with the first inequality of Lemma \ref{lemespvarmathcalU}, we conclude the proof. \end{proof}
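The $\log k$ term in Lemma \ref{lemespvarmathcalU} comes from summing expectations of order $f_A\bar{n}_AK/(sf_ai)$ over $i\leq k$. As an illustration only, the following sketch checks that the harmonic sum deviates from $\log k$ by at most a constant, a discrepancy absorbed in the $cK$ error term:

```python
# Illustration only: the harmonic sum behind the log(k) asymptotics of
# Lemma lemespvarmathcalU.  Summing terms of order C*K/i for i = 1..k
# gives C*K*log(k) up to an O(K) error, since 0 < H_k - log(k) <= 1.
import math

for k in (10, 1000, 100000):
    H_k = sum(1.0 / i for i in range(1, k + 1))
    assert 0.0 < H_k - math.log(k) <= 1.0  # difference tends to Euler's constant
```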
\subsection{Coupling with subcritical birth and death processes during the third phase}\label{subsec:newprob3phase}
We couple the process $\tilde{\tilde{N}}_A$ with two subcritical birth and death processes to control its dynamics. We recall the definition of ${\mathcal{N}}_\varepsilon^K$ in \eqref{defNepsK} and introduce
\begin{equation}\label{defsbar} \bar{s}:={|S_{Aa}|}/{f_A}. \end{equation} Let us define for $\varepsilon$ small enough, \begin{equation}\label{def_bars_-s_+} \bar{s}-\frac{ M''C_{A,a}}{f_A}\varepsilon=:\bar{s}_-(\varepsilon)<\bar{s}< \bar{s}_+(\varepsilon):=\bar{s} + \frac{C_{A,A}+M''C_{A,a}}{f_A}\varepsilon, \end{equation} where $M''$ has been defined just before Definition \eqref{compact2}. Then, according to the definition of $\tilde{\tilde{N}}$ in \eqref{deftildetildeN},
we can follow Theorem 2 in \cite{champagnat2006microscopic} and construct the processes $Y^+_\varepsilon$, $\tilde{\tilde{N}}$ and $Y^-_\varepsilon$ on the same probability space such that on the event $\mathcal{N}_\varepsilon^K$ \begin{equation}\label{couplage3}
Y^+_\varepsilon(t) \leq \tilde{\tilde{N}}_A(t) \leq Y^-_\varepsilon(t), \quad \text{for all } T_\varepsilon^K+t_\varepsilon\leq t< T_\varepsilon^K+t_\varepsilon+\tilde{\tilde{T}}_0^{(K,A)},\quad \text{a.s.}, \end{equation} where for $*\in \{-,+\}$, $Y^*_\varepsilon$ is a birth and death process with initial state $N_A(T_\varepsilon^K+t_\varepsilon)$ and individual birth and death rates $f_A$ and $f_A (1+\bar{s}_*(\varepsilon))$, and we recall that $\tilde{\tilde{T}}_0^{(K,A)}$ is the analog of ${{T}}_0^{(K,A)}$ (defined in \eqref{T0K}) for the process $\tilde{\tilde{N}}$.\\
Recall Definition \eqref{defPronde}, and let us introduce, for $i \in \mathbb{N}$, $\mathcal{Q}_i^{(s)}:= \mathcal{P}_i^{(-s)}$, and
for $\rho \in \mathbb{R}_+$ the stopping time \begin{equation} \label{defnuuas}
\nu_\rho:=\inf \{ n \in \mathbb{Z}_+, \tilde{Z}^{(-s)}_n= \lfloor \rho \rfloor \}. \end{equation}
\subsection{Number of jumps of $\tilde{\tilde{N}}_A$ during the third phase} As in \eqref{Vk}, we introduce for $1 \leq k <\lfloor \varepsilon K \rfloor$ the random variable $\mathcal{V}_k^K(3)$, which counts the number of hittings of state $k$ by the process $\tilde{\tilde{N}}_A$ during the third phase. Recall Definitions \eqref{defomega12}, \eqref{compact2} and \eqref{def_bars_-s_+}. We have the following approximations:
\begin{lem}\label{jumpA3} Let $u$ be in $[\omega_1,\omega_2]$.
There exist three finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K \geq K_0$, $\varepsilon\leq \varepsilon_0$
and $n_a$ in $J_\varepsilon^K±1$, if $\lfloor u K \rfloor< k<\lfloor \varepsilon K \rfloor,$
\begin{equation*} \mathbb{E}^{(3)}_{(\lfloor u K \rfloor,n_a)}[\mathcal{V}_{k}^K(3)] \leq (1+c\varepsilon) \frac{2+\bar{s}}{\bar{s}}(1+\bar{s}_-(\varepsilon))^{\lfloor u K \rfloor-k} , \end{equation*} and if $k\leq \lfloor u K \rfloor$, \begin{eqnarray*}
\Big|\mathbb{E}^{(3)}_{(\lfloor u K \rfloor,n_a)}[\mathcal{V}_{k}^K(3)]-\frac{2+\bar{s}}{\bar{s}}(1-(1+\bar{s})^{-k}
-(1+\bar{s})^{k-\lfloor \varepsilon K \rfloor})\Big| \leq c\varepsilon. \end{eqnarray*} \end{lem}
\begin{proof} The proof is very similar to that of \eqref{expupa2}; hence we do not detail all the calculations and refer to the proof of Lemma \ref{lemmaexpup}. First we consider $\lfloor u K \rfloor< k<\lfloor \varepsilon K \rfloor$ and approximate under $\mathbb{P}^{(3)}$ the probability for $\tilde{\tilde{N}}_A$ to hit $k$ before the extinction of the $A$-population (recall that if $k\leq \lfloor u K \rfloor$, then $\tilde{\tilde{N}}_A$ hits $k$ $\mathbb{P}^{(3)}$-a.s.).
Let $\lfloor u K \rfloor<k<\lfloor \varepsilon K \rfloor$. Then for every $n_a \in J_\varepsilon^K±1$, Equation \eqref{hitting_times} implies \begin{equation} \label{probahitk}
\mathbb{P}^{(3)}_{(\lfloor u K \rfloor,n_a)}(\tilde{\tilde{N}}_A \text{ hits }k) \leq \frac{\mathcal{Q}^{(\bar{s}_+(\varepsilon))}_{k}(\nu_0<\nu_{\varepsilon K}) \mathcal{Q}^{(\bar{s}_-(\varepsilon))}_{\lfloor u K \rfloor}(\nu_{k}<\nu_0)} {\mathcal{Q}^{(\bar{s}_-(\varepsilon))}_{\lfloor u K \rfloor}(\nu_0<\nu_{\varepsilon K})}\leq \frac{1+c\varepsilon} {(1+\bar{s}_-(\varepsilon))^{k-\lfloor u K \rfloor}}, \end{equation} for a finite $c$, $\varepsilon$ small enough and $K$ large enough. The second step consists in counting how many times the process $\tilde{\tilde{N}}_A$ hits $k$ during the third phase knowing that it happens at least once. Once again we will compare this number with geometric random variables, by approximating the probability to have only one jump. The following inequality follows the spirit of \eqref{onlyonejump}. The only difference is that in the third phase $\tilde{\tilde{N}}_A$ is coupled with subcritical birth and death processes, whereas in the first phase $\tilde{N}_a$ was coupled with supercritical birth and death processes. For every $n_a\in J_\varepsilon^K±1$ and $k<\lfloor \varepsilon K \rfloor$, \begin{multline*}
\mathbb{P}^{(3)}_{(k,n_a)}(\tilde{\tilde{N}}_A(t)\leq k,
\forall t \geq 0 )
\geq \frac{\mathcal{Q}^{(\bar{s}_-(\varepsilon))}_{k-1}(\nu_0<\nu_{k})\mathcal{Q}^{(\bar{s}_-(\varepsilon))}_{k}(\nu_{k-1}<\nu_{k+1})} {\mathcal{Q}^{(\bar{s}_+(\varepsilon))}_{k}(\nu_0<\nu_{\varepsilon K})} \geq \frac{(1-c\varepsilon)\bar{s}}{(2+\bar{s})(1-(1+\bar{s})^{-k}-(1+\bar{s})^{k-\lfloor \varepsilon K \rfloor})}. \end{multline*} We derive the upper bound similarly and end the proof by comparing the hitting numbers with geometric random variables. For $\lfloor u K \rfloor <k<\lfloor \varepsilon K \rfloor$ we have to multiply the expectation of the geometric random variables by the probability to hit $k$ at least once, approximated in \eqref{probahitk}. \end{proof}
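As an illustration of the constant $(2+\bar{s})/\bar{s}$ in Lemma \ref{jumpA3}, one can compute exactly the expected number of visits to a state $k$ for the embedded walk of a subcritical birth and death process (up-probability $1/(2+\bar{s})$, down-probability $(1+\bar{s})/(2+\bar{s})$), absorbed at $0$ and at $N$. The following sketch, which ignores the conditioning defining $\mathbb{P}^{(3)}$ and all $\varepsilon$-corrections, solves the corresponding tridiagonal system and recovers the leading term; here $N$ plays the role of $\lfloor \varepsilon K \rfloor$ and $n_0$ that of $\lfloor u K \rfloor$:

```python
# Illustration only (not part of the proof): expected number of visits to
# state k before absorption at 0 or N, for a nearest-neighbour walk with
# up-probability p = 1/(2+sbar) and down-probability q = (1+sbar)/(2+sbar),
# i.e. the embedded chain of a birth and death process with individual
# birth rate f_A and death rate f_A(1+sbar).  The visit counts v_i solve
# the tridiagonal system  v_i = 1_{i=k} + p*v_{i+1} + q*v_{i-1}.
def expected_visits(sbar, N, k, n0):
    p = 1.0 / (2.0 + sbar)
    q = (1.0 + sbar) / (2.0 + sbar)
    n = N - 1                       # transient states are 1..N-1
    b = [1.0] * (n + 1)             # diagonal coefficients (1-indexed)
    d = [0.0] * (n + 1)
    d[k] = 1.0
    for i in range(2, n + 1):       # forward elimination (Thomas algorithm)
        m = -q / b[i - 1]           # sub-diagonal is -q, super-diagonal is -p
        b[i] -= m * (-p)
        d[i] -= m * d[i - 1]
    v = [0.0] * (n + 2)             # v[0] = v[N] = 0 (absorbing states)
    v[n] = d[n] / b[n]
    for i in range(n - 1, 0, -1):   # back substitution
        v[i] = (d[i] - (-p) * v[i + 1]) / b[i]
    return v[n0]

sbar, N, k, n0 = 0.5, 60, 10, 30    # hypothetical parameter values
r = 1.0 + sbar
leading = (2.0 + sbar) / sbar * (1.0 - r ** (-k) - r ** (k - N))
assert abs(expected_visits(sbar, N, k, n0) - leading) < 1e-3
```

Since $k\leq n_0$ and the walk is nearest-neighbour with downward drift, it reaches $k$ with probability close to one, which is why the exact solve matches the leading term so closely.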
\subsection{Number of births of $a$-individuals during the third phase}
Recall \eqref{defn(alpha)} and let ${U}_{k}^K {(3)}$ be the number of births in the $a$-population during the third phase when $\tilde{\tilde{N}}_A$ equals $k\leq \lfloor \varepsilon K \rfloor$: \begin{multline} \label{UDk3phase} U_k^K {(3)}:=\# \{m, T_\varepsilon^K+t_\varepsilon<\tau_m^K\leq T_{\text{ext}}^K,
\tilde{\tilde{N}}_A(\tau_m^K)=k , \text{ and } \{\{\tilde{\tilde{N}}_a(\tau_{m+1}^K)-\tilde{\tilde{N}}_a(\tau_m^K)=1\}\\ \text{ or } \{\tilde{\tilde{N}}_a(\tau_{m+1}^K)=\tilde{\tilde{N}}_a(\tau_m^K),\tilde{\tilde{N}}^{(a)}(\tau_{m+1}^K)\neq \tilde{\tilde{N}}^{(a)}(\tau_m^K) \} \}.\end{multline} We now state an approximation for the expectation of ${U}_{k}^K(3)$. We do not prove this result as it is obtained in the same way as Lemma \ref{lemespvarmathcalU}: the birth rate of the $a$-population is close to $f_a\bar{n}_aK$, the jump rate of the $A$-population is of order $(2+\bar{s})f_Ak$ when $\tilde{\tilde{N}}_A=k$ and the expectations of the hitting numbers for the $A$-population are given in Lemma \ref{jumpA3}. The only difference is that the $A$-population size can hit values bigger than the initial value of the third phase, $\tilde{\tilde{N}}_A(T_\varepsilon^K+t_\varepsilon)$. However the probabilities to hit such values decrease geometrically (see Lemma \ref{jumpA3}) and they have a negligible influence on the final result. Thus we get
\begin{lem}\label{lemespvarU}
There exist three finite constants $c$, $\varepsilon_0$ and $K_0$ such that for $\varepsilon\leq \varepsilon_0$, $K \geq K_0$ and $k \leq \lfloor \varepsilon K \rfloor$ $$
\Big| \mathbb{E}^{(3)}\Big[ \sum_{i=1}^k {U}_i^K(3) \Big]- \frac{f_a\bar{n}_aK\log k}{\bar{s}f_A} \Big|\leq cK(1+\varepsilon\log k) \quad \text{and} \quad \var^{(3)}\Big( \sum_{i=1}^k {U}_i^K(3) \Big)\leq cK^2(1+\varepsilon \log^2 K). $$ \end{lem}
\section{First phase}\label{firstphase}
This section is dedicated to the proof of Proposition \ref{prop:1phase_probg1}. We prove that there are only four possible ancestral relationships for the two neutral loci and compute the probabilities of the non-negligible ones.
\subsection{Coalescence and recombination probabilities, negligible events}\label{subsec:RecombinationProbNegligible} Recall Definition \ref{defcoalreco} and define, for $j \in \{1,2\}$, $$ r^*_{j}:= r_1+\mathbf{1}_{\{j=2\}}(r_2-2r_1r_2), \quad \text{and} \quad r^*_{(1,2)}:=r_1r_2,$$ which denote the probability of having exactly one recombination somewhere before locus $Nj$ (resp. of having two recombinations before locus $N2$) at a birth event.
\begin{defi}\label{defpalphaalpha} For $(\alpha,\alpha') \in \mathcal{A}^2$, $j \in \{1,2\}$ and $n=(n_A,n_a)\in \mathbb{N}^\mathcal{A}$ we define: \begin{enumerate}
\item[$p_{\alpha\alpha'}^{(c,j)}(n)$]:= probability that two randomly chosen neutral alleles, located at locus $Nj$ and associated respectively with alleles $\alpha$ and $\alpha'$ at time $\tau_m^K$, coalesce at this time conditionally on $(N_A,N_a)(\tau_{m-1}^K)=n$ and on the birth of an individual carrying allele $\alpha$ at time $\tau_{m}^K$. \item[$p_{\alpha \alpha'}^{(j)}(n)$]:= probability to have one (and only one) recombination from the $\alpha$- into the $\alpha'$-population before locus $Nj$ conditionally on $(N_A,N_a)(\tau_{m-1}^K)=n$ and on the birth of an individual carrying allele $\alpha$ at time $\tau_{m}^K$. \item[$p_{\alpha \alpha'}^{(1,2)}(n)$]:= probability to have a double recombination under the same conditions
\end{enumerate} \end{defi}
Then we have the following result:
\begin{lem}\label{lempcoal} Let $\alpha \in \mathcal{A}$, $n=(n_A,n_a)\in \mathbb{N}^\mathcal{A}$ such that $n_a \leq \lfloor \varepsilon K \rfloor$, $n_A \in I_\varepsilon^K ±1$ and $j \in \{1,2\}$. Then there exists a finite $c$ such that, \begin{equation*} p_{aa}^{(c,j)}(n)=\frac{2}{n_a(n_a+1)}\Big( 1-\frac{r_j^* f_A n_A} {f_An_A+f_an_a} \Big),\quad p_{a A}^{(c,j)}(n) = \frac{r_j^*f_A}{(n_a+1)(f_An_A+f_an_a)} \quad \text{and} \quad p_{A\alpha}^{(c,j)}(n)\leq \frac{c}{K^2}.
\end{equation*}
\end{lem}
\begin{proof} The proof of the two equalities can be found in \cite{smadi2014eco} (Lemma 7.1), as the expression is the same for $n_A \in I_\varepsilon^K$ or $\mathrm{dist}(n_A,I_\varepsilon^K)=1$ (where $\mathrm{dist}$ is the canonical distance on $\mathbb{R}$). The only difference is that we consider two neutral loci and have to exclude the double recombination case. Indeed, if there are simultaneous recombinations, the alleles located at SL and N2 in the newborn originate from the same parent. The expressions of $p_{A\alpha}^{(c,j)}(n)$ in the case where $n_A \in I_\varepsilon^K$ are also stated in \cite{smadi2014eco} (Lemma 7.1), and from the definition of $\tilde{N}$ in \eqref{deftildeN} we get that when $\mathrm{dist}(n_A,I_\varepsilon^K)=1$, $p_{AA}^{(c,j)}(n)={2}/{n_A^2}$ and $p_{Aa}^{(c,j)}(n)=0$. This ends the proof. \end{proof}
Next we focus on the recombination probabilities:
\begin{lem}\label{lemma:recprob} Let $\alpha \in \mathcal{A}$, $n=(n_A,n_a) \in \mathbb{N}^\mathcal{A}$ such that $n_a \leq \lfloor \varepsilon K \rfloor$, $n_A \in I_\varepsilon^K ±1$ and $j \in \{1,2,(1,2)\}$. Then there exist two finite constants $c$ and $\varepsilon_0$ such that for every $\varepsilon\leq \varepsilon_0$, \begin{align*}
p_{aa}^{(j)}(n) =\frac{r_j^* f_{a}(n_{a}-1)}{(n_a+1)(f_An_A+f_an_a)}, \quad p_{a A}^{(j)}(n) =\frac{r_j^* f_An_A}{(n_a+1)(f_An_A+f_an_a)}, \end{align*} \begin{equation}
\label{rqpr2} p_{Aa}^{(j)}(n)\leq \frac{c\varepsilon}{K\log K} \quad \text{and} \quad
(1-c\varepsilon )\frac{r_2}{n_A} \leq p_{AA}^{(2)}(n_A,k) \leq \frac{r_2}{n_A}, \quad k \leq \lfloor \varepsilon K \rfloor. \end{equation} \end{lem}
\begin{proof} The second equality is stated in \cite{smadi2014eco} Equation $(7.2)$.
Conditionally on the birth of an $a$-individual and the state of the process at the $(m-1)$-th jump, the probability of picking the newborn when choosing an individual at random amongst the $a$-individuals is equal to $1/(n_{a}+1)$. A recombination before the locus $Nj$ (or before locus $N1$ and locus $N2$ if $j=(1,2)$) happens with probability $r_j^*$, independently of all other events. Finally, the probability that the second parent is an $a$-individual but is different from the first parent is equal to $f_{a}(n_{a}-1)/(f_An_A + f_an_a)$. This proves the first equality.
When $n_A \in I_\varepsilon^K$ we get similarly that $$p_{AA}^{(j)}(n) =\frac{r_j^* f_{A}(n_{A}-1)}{(n_A+1)(f_An_A+f_an_a)}\quad \text{and}\quad p_{Aa}^{(j)}(n) =\frac{r_j^* f_an_a}{(n_A+1)(f_An_A+f_an_a)},$$ and from the definition of $\tilde{N}$ in \eqref{deftildeN} we obtain that when $\mathrm{dist}(n_A,I_\varepsilon^K)=1$, $p_{AA}^{(2)}(n)=r_2(n_A-1)/n_A^2$ and $p_{Aa}^{(j)}(n)=0$. Condition \eqref{assrK} completes the proof. \end{proof}
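The proof of the first equality is a product of three elementary factors. The following sketch, given for illustration only and with hypothetical parameter values, reproduces this decomposition in exact rational arithmetic:

```python
# Illustration only: the three-factor decomposition used in the proof of
# the first equality above, in exact rational arithmetic.  The values of
# rj (playing r_j^*), fA, fa, nA, na below are hypothetical.
from fractions import Fraction

def p_aa_closed(rj, fA, fa, nA, na):
    # closed form: r_j^* f_a (n_a - 1) / ((n_a + 1)(f_A n_A + f_a n_a))
    return rj * fa * (na - 1) / ((na + 1) * (fA * nA + fa * na))

def p_aa_product(rj, fA, fa, nA, na):
    pick_newborn = Fraction(1, na + 1)               # newborn among n_a + 1
    recombination = rj                               # before locus Nj
    second_parent = Fraction(fa * (na - 1),          # distinct a-individual
                             fA * nA + fa * na)
    return pick_newborn * recombination * second_parent

rj, fA, fa, nA, na = Fraction(1, 100), 3, 2, 1000, 50
assert p_aa_product(rj, fA, fa, nA, na) == p_aa_closed(rj, fA, fa, nA, na)
```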
\begin{rem}\label{rem:recprob} Let us recall the definition of $I_\varepsilon^K$ in \eqref{compact}. Then there exist three finite constants $c$, $\varepsilon_0$ and $K_0$ such that for $\varepsilon\leq \varepsilon_0$, $K\geq K_0$, $j \in \{1,2,(1,2)\}$, $n_A \in I_\varepsilon^K±1$ and $k<\lfloor \varepsilon K \rfloor$, \begin{equation} \label{rqpr}
(1-c\varepsilon)\frac{r_j^*}{k+1}\leq p_{aA}^{(j)}(n_A,k)\leq \frac{r_j^*}{k+1}\quad \text{and}\quad
p_{aa}^{(2)}(n_A,k) \leq \frac{f_a}{f_A}\frac{r_2}{n_A}\leq \frac{c}{K\log K}. \end{equation} \end{rem}
Recalling the definitions of the $m$th jump time and the number of jumps in \eqref{deftpssauts}
and \eqref{defJK1}, we define for $j \in \{1,2,(1,2)\}$, $m \in \mathbb{N}$ and an individual $i$ uniformly picked at the end of the first phase, \begin{multline}\label{defassalpha}
(\alpha ij)_{m}:=\{m \leq J^K(1) \text{ and the $j$-th locus/loci of the $i$-th individual is/are associated} \\
\text{to an allele $\alpha$ at the $m$-th jump time}\}. \end{multline} The notation $(\alpha i1)_m, (\alpha' i2)_m$ here implies that the two neutral loci of individual $i$ are associated to two distinct individuals at the $m$th jump time, for any $\alpha,\alpha'\in \mathcal{A}$.\\
To approximate the genealogy of the neutral alleles sampled at the end of the first phase we will focus on the recombinations and coalescences which may happen during this time interval. Keep in mind that when looking at coalescing neutral loci, the parent's type may differ from the type of up to one child. We first prove that we can neglect some event combinations. Sample $2d$ distinct individuals uniformly at the end of the first phase (maximal number of ancestors for the $2d$ neutral alleles sampled at the end of the sweep) and define: \begin{enumerate}
\item[$aAa$:] a neutral allele recombines from the $a$-population to the $A$-population, and then (backwards in time) back into the $a$-population
\item[$CR$:] two neutral alleles coalesce in the $a$-population, and then (backwards in time) recombine into the $A$-population
\item[$CA$:] two neutral alleles coalesce and at least one of them carries the allele $A$ at the time of coalescence
\item[$2R$:] a neutral allele takes part in a double recombination (i.e. a recombination before $N1$ and a recombination
before $N2$ at the same birth event) \item[$R2a$:] a recombination separates the two neutral loci of an individual within the $a$-population \end{enumerate} We can bound the probability of these events as follows:
\begin{lem}\label{lemma_negevents}
There exist three positive finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $\varepsilon\leq \varepsilon_0$ and $K\geq K_0$ $$ \mathbb{P}^{(1)}(aAa)+\mathbb{P}^{(1)}(CR)+\mathbb{P}^{(1)}(2R) +\mathbb{P}^{(1)}(R2a)\leq \frac{c}{\log K}, \quad \text{and} \quad \mathbb{P}^{(1)}(CA)\leq \frac{c\log K}{K}. $$ \end{lem}
\begin{proof}
The probabilities of the events $aAa$, $CR$ and $CA$ are bounded in \cite{smadi2014eco}, Lemma $7.3$ and Equation (7.19), for the process $N$. But according to Lemmas \ref{lempcoal} and \ref{lemma:recprob}, when $\mathrm{dist}(n_A,I_\varepsilon^K)=1$ the coalescence and recombination probabilities for the process $\tilde{N}$ are very close to, or even smaller than, those obtained when $N$ and $\tilde{N}$ coincide.
Hence we just have to bound the probability of $2R$ and $R2a$. If a neutral allele experiences a double recombination, it happens either when it is associated with an allele $a$, or with an allele $A$. From Lemma \ref{lemma:recprob} and the fact that $r_1$ and $r_2$ are of order $1/\log K$ we get for $k <\lfloor \varepsilon K \rfloor$: $$ \sup_{n_A \in I_\varepsilon^K±1}\Big( p_{aa}^{(1,2)}+p_{aA}^{(1,2)}\Big)(n_A,k)\leq \frac{c}{(k+1)\log^2 K} \ \ \text{and} \ \sup_{n_A \in I_\varepsilon^K±1}\Big( p_{Aa}^{(1,2)}+p_{AA}^{(1,2)}\Big)(n_A,k)\leq \frac{c}{K\log^2 K}. $$ Recall the definitions of $U_k^K(1)$ and $\mathcal{U}_k^K(1)$ in \eqref{Umjk} and \eqref{mathcalUk} respectively. As a birth of an $\alpha$-individual is needed to have a recombination from the $\alpha$- to the $\alpha'$-population, we can bound the probability of a double recombination by: $$ \mathbb{P}^{(1)}(2R)\leq \frac{c}{\log^2 K} \mathbb{E}^{(1)} \Big[\sum_{k=1}^{\lfloor \varepsilon K \rfloor -1} \Big( \frac{U_k^K(1)}{k+1}+ \frac{\mathcal{U}_k^K(1)}{K} \Big) \Big].$$ By applying inequality \eqref{expupa2} and Lemma \ref{lemespvarmathcalU} we bound $\mathbb{P}^{(1)}(2R)$ by a constant times $1/\log K$. It remains to consider the event $R2a$ of a recombination within the $a$-population. Define the first time (with respect to the backwards in time process) that this event happens: \begin{align} \begin{aligned}\label{Raa(i)}
R^{(1)}_{aa}(i) := \sup \{ m ,&\ m \leq J^{K}(1) \text{ and both neutral loci of the $i$-th individual are}\\ &\ \text{ associated to distinct $a$-individuals at the $(m-1)$-th jump}\}, \end{aligned} \end{align} where $R^{(1)}_{aa}(i)=-\infty$ if the event does not happen during the first phase of the sweep. Then, \begin{multline*}
\mathbb{P}^{(1)}(R^{(1)}_{aa}(i)\geq 0) = \sum_{l=1}^{\lfloor \varepsilon K \rfloor -1} \mathbb{P}^{(1)}(R^{(1)}_{aa}(i)\geq 0, \tilde{N}_a(\tau^{K}_{R^{(1)}_{aa}(i)})=l)\\ =\sum_{l=1}^{\lfloor \varepsilon K \rfloor -1} \sum_{m< \infty} \mathbb{P}^{(1)}(m \leq J^{K}(1), \tilde{N}_a(\tau^{K}_{m-1})=l,\tilde{N}_a(\tau^{K}_{m})=l+1, (ai1)_m,(ai2)_m,\forall m'> m: (ai12)_{m'})\\ \leq \sum_{l=1}^{\lfloor \varepsilon K \rfloor -1} \sum_{m< \infty} \sup_{n_A\in I_\varepsilon^K±1}\Big( p_{aa}^{(2)} (n_A,l)
\mathbb{P}^{(1)}_{(n_A,l+1)}(\forall m\geq 0: (ai12)_{m})\Big) \mathbb{P}^{(1)}(m \leq J^{K}(1), \tilde{N}_a(\tau^{K}_{m-1})=l,\tilde{N}_a(\tau^{K}_{m})=l+1)\\ \leq \sum_{l=1}^{\lfloor \varepsilon K \rfloor -1} \frac{c}{K\log K}\mathbb{E}^{(1)} [U_l^K(1)] \leq \frac{c}{\log K}, \end{multline*} by \eqref{expupa2} and \eqref{rqpr}. \end{proof} To simplify the notations we will denote the union of all negligible events by \begin{align}\label{def:negevents}
NE := aAa \cup CR \cup CA \cup 2R \cup R2a. \end{align}
\subsection{The two loci of one individual separate within the $A$-population}\label{subsec:2locisep} Having excluded events of small probability, there are exactly two ways for the neutral alleles of an individual sampled at the end of the first phase to originate from two distinct $A$-individuals. The two possibilities were already described on page \pageref{defgenealogy} and represented in Figure \ref{schemagendefevent}. The ideas which are pursued in this section are similar to the ones from \cite{brink2014multsweep}, but there are extra difficulties due to the randomness of the population size.
\subsubsection{Event $[2,1]^{rec}_{A,i}$}
The aim of this section is to prove the following approximation:
\begin{pro}\label{lemma_21Arec} Let $i$ be an $a$-individual sampled uniformly at the end of the first phase. There exist two finite constants $c$ and $\varepsilon_0$ such that for $\varepsilon \leq \varepsilon_0$,
$$ \underset{K \to \infty}{\limsup}\hspace{.1cm}\Big| \mathbb{P}^{(1)}([2,1]^{rec}_{A,i})-
\Big[ \frac{r_2}{r_1+r_2}-e^{-\frac{r_1}{s}\log \lfloor \varepsilon K\rfloor }+\frac{r_1}{r_1+r_2}e^{-\frac{r_1+r_2}{s}\log \lfloor \varepsilon K\rfloor} \Big]\Big|\leq c\sqrt{\varepsilon}. $$ \end{pro} We first give a preliminary Lemma before proving Proposition \ref{lemma_21Arec}. Recall \eqref{defJK1} and define for $j \in \{1,2,(1,2)\}$ and $m \in \mathbb{N}$, \begin{multline}\label{def:Rij}
R(i,j):= \sup \{m , m \leq J^K(1) \text{ and the $j$-th locus/loci of the $i$-th individual} \\
\text{is/are associated to an allele $A$ at the $(m-1)$th jump time}\} , \end{multline} the last jump (forwards in time) when the $j$-th locus/loci of the $i$-th individual belongs to the A-population (with $\sup \emptyset=-\infty$).
To prove Proposition \ref{lemma_21Arec} the idea is to decompose the event $[2,1]^{rec}_{A,i}$ according to the different possible $a$-population sizes when
the first (backwards in time) recombination between $N1$ and $N2$ occurs. \begin{multline} \label{decotheo1}
\mathbb{P}^{(1)}([2,1]^{rec}_{A,i}) = \mathbb{P}^{(1)}(R(i,2)>R(i,1) \geq 0)\\ = \sum_{l=1}^{\lfloor\varepsilon K \rfloor} \mathbb{P}^{(1)}(R(i,1) \geq 0, R(i,2)>R(i,1),\tilde{N}_a({\tau^K_{R(i,2)}})=l) \\
= \sum_{l=1}^{\lfloor\varepsilon K \rfloor-1} \mathbb{P}^{(1)}(R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l) \mathbb{P}^{(1)}(R(i,1) \geq 0 | R(i,2)>R(i,1),\tilde{N}_a({\tau^K_{R(i,2)}})=l). \end{multline} In the following Lemma, which leads to the proof of Proposition \ref{lemma_21Arec}, we consider separately the two probabilities of the above product:
\begin{lem}\label{claim1} There exist three finite constants $c$, $K_0$ and $\varepsilon_0$ such that for $K \geq K_0$, $\varepsilon \leq \varepsilon_0$ and $l<\lfloor \varepsilon K \rfloor$,
\begin{multline}\label{541}
\Big| \mathbb{P}^{(1)}(R(i,2) > R(i,1), \tilde{N}_a(\tau^K_{R(i,2)})=l)-
r_2\frac{1-(1-s)^{\lfloor \varepsilon K \rfloor-l}-(1-s)^{l+1}}{s(l+1)}e^{- \frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}}
\Big|\leq \frac{c\sqrt{\varepsilon}}{l\log K}
\end{multline} and
\begin{equation}\label{542} \Big| \mathbb{P}^{(1)}(R(i,1) \geq 0 | R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l) -
\sum_{k=1}^{l-1} \frac{r_1}{s(k+1)} e^{ -\frac{r_1}{s} \log \frac{l-1}{k} }\Big|\leq c \sqrt{\varepsilon}.\end{equation} \end{lem}
\begin{proof}[Proof of Proposition \ref{lemma_21Arec}]
From Lemma \ref{claim1} and Equation \eqref{decotheo1} we get the existence of a finite $c$ such that for $K$ large enough and $\varepsilon$ small enough, \begin{equation}\label{firstineq}
\mathbb{P}^{(1)}([2,1]^{rec}_{A,i}) \leq
\sum_{l=1}^{\lfloor\varepsilon K \rfloor-1}\Big[ \frac{r_2}{s(l+1)} e^{ -\frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}}+\frac{c\sqrt{\varepsilon}}{l \log K}\Big] \Big[\sum_{k=1}^{l-1} \frac{r_1}{s(k+1)} e^{ -\frac{r_1}{s} \log \frac{l-1}{k} }+{c\sqrt{\varepsilon}}\Big]. \end{equation} Rewriting the second term in brackets and applying Lemma \ref{equivalent} with $c_N/\log N=r_1/s$ yields: \begin{eqnarray*}
e^{-\frac{r_1}{s}\log (l-1)} \frac{r_1}{s}\sum_{k=1}^{l-1} \frac{1}{k+1} e^{\frac{r_1}{s}\log k}+{c\sqrt{\varepsilon}}& \leq &
e^{-\frac{r_1}{s}\log (l-1)}\Big(e^{\frac{r_1}{s}\log l}-1+c\frac{r_1}{s}\Big)+c\sqrt{\varepsilon}\\ & \leq & 1-e^{-\frac{r_1}{s}\log l } + c\sqrt{\varepsilon}, \end{eqnarray*} for $K$ large enough, $\varepsilon$ small enough and a finite $c$, whose value can change from line to line and which can be chosen independently of $l$. We use in the last inequality Condition \eqref{assrK} which claims that $\limsup_{K \to \infty}r_1\log K<\infty$. Including the last inequality in \eqref{firstineq} gives \begin{eqnarray*}
\mathbb{P}^{(1)}([2,1]^{rec}_{A,i}) &\leq &
\sum_{l=1}^{\lfloor\varepsilon K \rfloor-1} \frac{r_2}{s(l+1)}e^{-\frac{r_1+r_2}{s}\log \lfloor\varepsilon K \rfloor}\Big(e^{\frac{r_1+r_2}{s}\log l}-e^{\frac{r_2}{s}\log l} \Big) + c\sqrt{\varepsilon}, \end{eqnarray*} for a finite $c$, $K$ large enough and $\varepsilon$ small enough, where we again use \eqref{assrK} which ensures that exponential terms are bounded away from zero and infinity in the following sense: $$ \frac{1}{c}\leq \liminf_{K \to \infty} e^{-\frac{r_1+r_2}{s}\log \lfloor \varepsilon K\rfloor}\leq \limsup_{K \to \infty} e^{\frac{r_1+r_2}{s}\log \lfloor \varepsilon K\rfloor}\leq c $$ for a positive and finite $c$. Applying again Lemma \ref{equivalent}, we get: $$ \mathbb{P}^{(1)}([2,1]^{rec}_{A,i}) \leq \Big( \frac{r_2}{r_1+r_2}-e^{-\frac{r_1}{s}\log \lfloor\varepsilon K \rfloor}+\frac{r_1}{r_1+r_2}e^{-\frac{r_1+r_2}{s}\log \lfloor\varepsilon K \rfloor} \Big)+c\sqrt{\varepsilon}. $$ The lower bound is obtained in the same way. Notice that it is a little bit more involved as we need to use \eqref{lemma3.5} in addition. \end{proof}
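The final summation can also be checked numerically. Writing $a=r_1/s$, $b=r_2/s$ and $n=\lfloor \varepsilon K\rfloor$, the sum in the last display is $\sum_{l=1}^{n-1}\frac{b}{l+1}\,n^{-(a+b)}(l^{a+b}-l^{b})$, whose limit Lemma \ref{equivalent} identifies with the bracket of Proposition \ref{lemma_21Arec}. The following sketch, an illustration only with arbitrary values of $a$ and $b$ consistent with \eqref{assrK}, confirms the closed form:

```python
# Illustration only: numerical check of the closed form obtained at the
# end of the proof above.  With a = r1/s, b = r2/s and n = floor(eps*K),
# the discrete sum below should be close to
#   b/(a+b) - n**(-a) + (a/(a+b)) * n**(-(a+b)).
def lhs_sum(a, b, n):
    scale = n ** (-(a + b))
    return sum(b / (l + 1) * scale * (l ** (a + b) - l ** b)
               for l in range(1, n))

def closed_form(a, b, n):
    return b / (a + b) - n ** (-a) + (a / (a + b)) * n ** (-(a + b))

a, b, n = 0.3, 0.5, 100000   # hypothetical values of r1/s, r2/s, floor(eps*K)
assert abs(lhs_sum(a, b, n) - closed_form(a, b, n)) < 0.01
```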
The end of this section is devoted to the proof of Lemma \ref{claim1}.
\begin{proof}[Proof of Equation \eqref{541}] We can decompose the event $ \{ R(i,2) > R(i,1), \tilde{N}_a(\tau^K_{R(i,2)})=l \} $ according to the jump number of the (backwards in time) first recombination. Recall the definition of $NR(i)^{(1)}$ on page \pageref{defgenealogy}. We will use this event with a different initial condition for $(\tilde{N}_A,\tilde{N}_a)$, which will not necessarily be $(\lfloor \bar{n}_AK \rfloor,1)$. It will however still correspond to the absence of any recombination before the end of the first phase. We recall conventions \eqref{convP} and \eqref{defP1}. With the definition of $(\alpha ik)_m$ in \eqref{defassalpha} we get \begin{multline} \label{decompproba1}
\mathbb{P}^{(1)}(R(i,2) > R(i,1), \tilde{N}_a(\tau^K_{R(i,2)})=l) \\ =\sum_{m>1} \mathbb{P}^{(1)}(m\leq J^{K}(1), \tilde{N}_a(\tau_{m-1}^{K})=l-1, \tilde{N}_a(\tau_{m}^{K})=l,
(ai1)_{m-1}, (Ai2)_{m-1}, \forall m\leq m'\leq J^{K}(1) : (ai12)_{m'})\\ \leq \sum_{m>1} \underset{n_A \in I_\varepsilon^K±1}{\sup} \Big\{p_{aA}^{(2)}(n_A,l-1)\mathbb{P}^{(1)}_{(n_A,l)}(NR(i)^{(1)}) \Big\} \mathbb{P}^{(1)}( m\leq J^{K}(1), \tilde{N}_a(\tau_{m-1}^{K})=l-1, \tilde{N}_a(\tau_{m}^{K})=l) \\
= \underset{n_A \in I_\varepsilon^K±1}{\sup} \Big\{p_{aA}^{(2)}(n_A,l-1)\mathbb{P}^{(1)}_{(n_A,l)}(NR(i)^{(1)}) \Big\}\mathbb{E}^{(1)}[U_{l-1}^K(1)], \end{multline} and the same expression with the infimum on $n_A \in I_\varepsilon^K±1$ for a lower bound. Adding \eqref{rqpr} and \eqref{lemPNRml} yields, \begin{eqnarray*}
\mathbb{P}^{(1)}(R(i,2) > R(i,1), \tilde{N}_a(\tau^K_{R(i,2)})=l) &\leq &(1+c{\varepsilon}) \frac{r_2}{l+1}(e^{- \frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}}+c\sqrt{\varepsilon})\mathbb{E}^{(1)}[U_{l-1}^K(1)] \\ &\leq &(1+c\sqrt{\varepsilon}) \frac{r_2}{l+1}e^{- \frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}} \mathbb{E}^{(1)}[U_{l-1}^K(1)], \end{eqnarray*} for a finite $c$, $\varepsilon$ small enough and $K$ large enough, where we used that $(r_1+r_2)\log K$ is bounded. We similarly get a lower bound and conclude the proof of Equation \eqref{541} by applying \eqref{expupa2}. \end{proof}
\begin{proof}[Proof of Equation \eqref{542}] We will decompose the event considered here according to the value of $\tilde{N}_a$ when the first (backwards in time) recombination occurs. Let us denote by $\zeta_k^K{(1)}$ the jump number of the last hitting of $k\leq \lfloor \varepsilon K \rfloor$
by $\tilde{N}_a$ during the first phase, \begin{equation}\label{zeta} \zeta_k^K{(1)}: = \sup \{ m, \tau_m^K \leq \tilde{T}_\varepsilon^K, \tilde{N}_a(\tau_m^K)=k \}, \end{equation} and recall \eqref{defsigma}. Then we can define the events \begin{multline}\label{NRlsigmai} NR(l,\xi,i):=\{\text{the first locus of individual $i$ sampled at jump time $\tau^K_{\xi}$}\\ \text{does not recombine from the $a$- to the $A$-population between $0$ and $\tau_{\xi}^K$}\} \end{multline} where $\xi \in \{\zeta_l^K(1),\sigma_l^K(1)\}$. Similarly as in \eqref{decompproba1}, Bayes' rule leads to: \begin{eqnarray} \label{decompoproba2} && \mathbb{P}^{(1)}(R(i,1) \geq 0 \mid R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l)\\ &&\quad =\sum_{k=1}^{\lfloor\varepsilon K \rfloor}\mathbb{P}^{(1)}(R(i,1) \geq 0, \tilde{N}_a(\tau^K_{R(i,1)}){=k} \mid {R(i,2)>R(i,1), }\tilde{N}_a(\tau^K_{R(i,2)})=l),\nonumber\\ &&\quad \leq \sum_{k=1}^{\lfloor\varepsilon K \rfloor} \Big(\underset{n_A \in I_\varepsilon^K±1}{\sup} p_{aA}^{(1)}(n_A,k-1)
\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\sigma,i))\Big)\mathcal{S}(k,l),\nonumber \end{eqnarray} where for the sake of simplicity we have introduced the notation $$ \mathcal{S}(k,l):= \sum_{m<\infty} \mathbb{P}^{(1)}(m< R(i,2), \tilde{N}_a(\tau_{m-1}^{K})=k-1, \tilde{N}_a(\tau_{m}^{K})=k \mid \tilde{N}_a(\tau^K_{R(i,2)})=l). $$ The lower bound is obtained by taking the infimum for $n_A$ in $I_\varepsilon^K±1$ and replacing $\sigma$ by $\zeta$. To lighten the proof, we bound the probability in the brackets for both $\sigma$ and $\zeta$ in Lemma \ref{probeventphase1}, Equation \eqref{prob:ai1_m'}.
First we prove that, with probability close to one, the $a$-population is larger at the (backwards in time) first recombination than at the second one, which involves locus $(i,1)$. Note that by \eqref{expupa2} and Lemma \ref{excunder}, there exists a finite $c$ such that for every $l< k < \lfloor \varepsilon K \rfloor$: $$ \mathcal{S}(k,l)
\leq \mathbb{E}^{(1)}[U_{l}^{K}(1)]\sup_{n_A \in I_\varepsilon^K±1}\hspace{.1cm} \mathbb{E}^{(1)}_{(n_A,l+1)}[U_{n_A,l,k-1}^K(1)|\sigma_l^K(1)<\infty] \leq c \mu_\varepsilon^{k-l}, $$ where we recall that $\mu_\varepsilon<1$ for $\varepsilon$ small enough. Hence, recalling \eqref{decompoproba2} and \eqref{rqpr}, we obtain for $k >l $ \begin{equation*}
\mathbb{P}^{(1)}(R(i,1) \geq 0, \tilde{N}_a^K(\tau^K_{R(i,1)})\geq l | R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l)\leq cr_1 \sum_{k=l+1}^{\lfloor\varepsilon K \rfloor} \frac{\mu_\varepsilon^{k-l}}{k}
\leq \frac{c}{\log K}, \end{equation*} for a finite $c$ and $\varepsilon$ small enough, which entails \begin{multline*} \mathbb{P}^{(1)}(R(i,1) \geq 0 \mid R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l)\\ \leq \sum_{k=1}^{l} \Big(\underset{n_A \in I_\varepsilon^K±1}{\sup} p_{aA}^{(1)}(n_A,k-1)
\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\sigma,i))\Big)\mathcal{S}(k,l) + O\left( \frac{1}{\log K} \right). \end{multline*} We therefore can ignore all $k> l$ in the sum in \eqref{decompoproba2} and continue with the case $k\leq l$. In this setting, we can bound the sum $\mathcal{S}(k,l)$ as follows: $$ \mathbb{E}^{(1)}[U_{k-1}^{K}(1)] - \underset{n_A \in I_\varepsilon^K±1}{\sup}\hspace{.1cm} \mathbb{E}^{(1)}_{(n_A,l-1)}[U_{n_A,l,k-1}^{K}(1)]\mathbb{E}^{(1)}[U_{l}^{K}(1)] \leq \mathcal{S}(k,l)\leq \mathbb{E}^{(1)}[U_{k-1}^{K}(1)].$$ Bounding the difference between the two bounds above within Equation \eqref{decompoproba2} then yields \begin{equation*}
\sum_{k=1}^{l} \frac{r_1}{k} \underset{n_A \in I_\varepsilon^K±1}{\sup}\hspace{.1cm} \mathbb{E}^{(1)}_{(n_A,l-1)}[U_{n_A,l,k-1}^{K}(1)]
\mathbb{E}^{(1)}[U_{l}^{K}(1)]\\ \leq cr_1 \sum_{k=1}^{l} \frac{\mu_\varepsilon^{l-k}}{k} \leq \frac{c }{\log K}, \end{equation*}
for a finite $c$ by \eqref{rqpr}, \eqref{expupa2} and Lemma \ref{excunder}. As a consequence, $$
\sum_{k=1}^{l} \Big(\underset{n_A \in I_\varepsilon^K±1}{\sup} p_{aA}^{(1)}(n_A,k-1)
\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\sigma,i))\Big)\Big|\mathcal{S}(k,l)-\mathbb{E}^{(1)}[U_{k-1}^{K}(1)]\Big|\leq O\left( \frac{1}{\log K} \right), $$ and thus we can work with $\mathbb{E}^{(1)}[U_{k-1}^{K}(1)]$ as an approximation for the sum $\mathcal{S}(k,l)$: \begin{multline*} \mathbb{P}^{(1)}(R(i,1) \geq 0 \mid R(i,2)>R(i,1),\tilde{N}_a(\tau^K_{R(i,2)})=l)\\ \leq \sum_{k=1}^{l} \Big(\underset{n_A \in I_\varepsilon^K±1}{\sup} p_{aA}^{(1)}(n_A,k-1)
\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\sigma,i))\Big)\mathbb{E}^{(1)}[U_{k-1}^{K}(1)] + O\left( \frac{1}{\log K} \right). \end{multline*}
Reasoning in the same way to get a lower bound and using \eqref{rqpr} and \eqref{prob:ai1_m'}
we get the existence of a finite $c$ such that for $K$ large enough and $\varepsilon$ small enough,
$$\Big| \mathbb{P}^{(1)}(R(i,1) \geq 0 | R(i,2)>R(i,1),\tilde{N}_a^K(\tau^K_{R(i,2)})=l) -
\sum_{ k=1}^{l-1} \frac{r_1}{k} e^{ -\frac{r_1}{s} \log \frac{l-1}{k} }\mathbb{E}^{(1)}[U_{k}^{K}(1)]\Big|\leq c \sqrt{\varepsilon}.$$ Applying \eqref{expupa2} and \eqref{lemma3.5} yields Equation \eqref{542}. Notice that we have replaced $1/k$ by $1/(k+1)$; this substitution is justified by Condition \eqref{assrK}. \end{proof}
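For the reader's convenience, we make explicit the elementary estimate behind the two bounds of order $1/\log K$ in the preceding proof. Since $\mu_\varepsilon<1$ and $\limsup_{K\to\infty}(r_1+r_2)\log K$ is finite by Condition \eqref{assrK}, there exists a finite $c'$, depending only on $\varepsilon$, such that
$$ r_1 \sum_{k=l+1}^{\lfloor\varepsilon K \rfloor} \frac{\mu_\varepsilon^{k-l}}{k} \leq \frac{r_1}{l+1}\sum_{j=1}^{\infty}\mu_\varepsilon^{j}= \frac{r_1}{l+1}\,\frac{\mu_\varepsilon}{1-\mu_\varepsilon}\leq \frac{c'}{\log K}, \qquad r_1 \sum_{k=1}^{l} \frac{\mu_\varepsilon^{l-k}}{k} \leq r_1\sum_{j=0}^{\infty}\mu_\varepsilon^{j}=\frac{r_1}{1-\mu_\varepsilon}\leq \frac{c'}{\log K}. $$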
\subsubsection{Event $[12,2]^{rec}_{A,i}$} Recall the definition of $[12,2]^{rec}_{A,i}$ on page \pageref{defgenealogy}. This section is devoted to the proof of the following result:
\begin{pro}\label{lemma_122Arec} Let $i$ be an individual sampled uniformly at the end of the first phase. There exist two finite constants $c$ and $\varepsilon_0$ such that for $\varepsilon\leq \varepsilon_0$, $$
\limsup_{K \to \infty}\Big|\mathbb{P}^{(1)}([12,2]^{rec}_{A,i})-r_1\Big[\frac{1-e^{- \frac{r_1+r_2}{s}\log\lfloor \varepsilon K \rfloor}}{r_1+r_2}
+ \frac{e^{- \frac{r_1+r_2}{s}\log\lfloor \varepsilon K \rfloor}-e^{- \frac{f_Ar_2}{f_as}\log\lfloor \varepsilon K \rfloor}}{r_1+r_2(1-f_A/f_a)}\Big]\Big| \leq c \sqrt{\varepsilon}. $$ \end{pro}
\begin{proof} As the proof is very similar to that of Proposition \ref{lemma_21Arec}, we only sketch the main ingredients. Let us introduce for $l<\lfloor \varepsilon K \rfloor$ the event: \begin{equation}\label{defRAli}
RA(l,i):=\{ [12,2]^{rec}_{A,i} \mid R(i,2)=R(i,1)\geq 0, \tilde{N}_a(\tau_{R(i,1)}^{K})=l \}. \end{equation} Then we can rewrite the probability of $[12,2]^{rec}_{A,i}$ as follows: \begin{equation}\label{P122}
\mathbb{P}^{(1)}([12,2]^{rec}_{A,i}) =\sum_{l=1}^{\lfloor\varepsilon K \rfloor} \mathbb{P}^{(1)}(RA(l,i))\mathbb{P}^{(1)}(R(i,2)=R(i,1)\geq 0, \tilde{N}_{a}^K(\tau_{R(i,1)}^{K})=l). \end{equation} Apart from the point of recombination, the second probability in the above sum coincides with the probability studied in \eqref{541} and we obtain for $\varepsilon$ small enough and $K$ large enough,
\begin{multline}\label{approxmirror}
\sup_{l \leq \lfloor \varepsilon K \rfloor}\ l {\cdot}\Big| \mathbb{P}^{(1)}(R(i,2)=R(i,1)\geq 0, \tilde{N}_{a}^K(\tau_{R(i,1)}^{K})=l)-\\
\frac{r_1(1-(1-s)^{\lfloor \varepsilon K \rfloor-l}-(1-s)^{l+1})}{s(l+1)}e^{- \frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}}
\Big|\leq c \frac{\sqrt{\varepsilon}}{\log K},
\end{multline} for a finite $c$, where $r_2$ has been replaced by $r_1$ in the factor reflecting the recombination probability. The probability of $RA(l,i)$ is derived in Lemma \ref{probeventphase1}. Inserting \eqref{approxmirror} and \eqref{approxhatPRAl} into \eqref{P122} yields \begin{eqnarray*}
\mathbb{P}^{(1)}([12,2]^{rec}_{A,i}) &\leq & \sum_{l=1}^{\lfloor\varepsilon K \rfloor} ( 1-e^{- \frac{f_A}{f_a} \frac{r_2}{s} \log l })\frac{r_1}{l+1}e^{-\frac{r_1+r_2}{s} \log\frac{\lfloor \varepsilon K \rfloor}{ l}}+c\sqrt{\varepsilon} \\ &\leq &r_1e^{- \frac{r_1+r_2}{s}\log\lfloor \varepsilon K \rfloor}\Big[ \frac{e^{\frac{r_1+r_2}{s}\log\lfloor \varepsilon K \rfloor} -1}{r_1+r_2} -\frac{e^{\frac{r_1+r_2-f_Ar_2/f_a}{s}\log\lfloor \varepsilon K \rfloor} -1}{r_1+r_2-f_Ar_2/f_a}\Big]+c\sqrt{\varepsilon} \end{eqnarray*} where we again applied Lemma \ref{equivalent} to express the sum in a different way, and used the {finiteness} of $\limsup_{K \to \infty}(r_1+r_2)\log K$ assumed in Condition \eqref{assrK}. Reasoning similarly for the lower bound and rearranging the terms end the proof of Proposition \ref{lemma_122Arec}. \end{proof}
\subsection{Proof of Proposition \ref{prop:1phase_probg1}}
\noindent \textbf{Event $R2(i)^{(1)}$:} By definition and from Lemma \ref{lemma_negevents}, $$ \mathbb{P}^{(1)}(R2(i)^{(1)})
=\mathbb{P}^{(1)}(R(i,2)\geq 0) - \mathbb{P}^{(1)}(R(i,1)\geq 0)+ O\Big(\frac{\log K}{K}\Big), $$ where $R(i,1)$ and $R(i,2)$ have been defined in \eqref{def:Rij}. But these probabilities have already been derived in Lemma 7.4 of \cite{smadi2014eco}, and we get: $$ \mathbb{P}^{(1)}(R2(i)^{(1)})
=(1-q_1q_2) - (1-q_1)+O_K(\varepsilon)=q_1(1-q_2)+O_K(\varepsilon), $$
where $O_K(\varepsilon)$ satisfies \eqref{defOKeps}.
\noindent \textbf{Event $R1|2(i)^{(1,ga)}$:} By definition (see page \pageref{defgenealogy})
$$ \mathbb{P}^{(1)}(R1|2(i)^{(1,ga)})=\mathbb{P}^{(1)}([2,1]^{rec}_{A,i})+\mathbb{P}^{(1)}([12,2]^{rec}_{A,i}) . $$
The result then follows from Propositions \ref{lemma_21Arec} and \ref{lemma_122Arec}.\\
\noindent \textbf{Event $R12(i)^{(1)}$:} From Definition \eqref{defRAli} and Equation \eqref{approxhatPRAl} we obtain for $K$ large enough, \begin{eqnarray*} \mathbb{P}^{(1)}( R12(i)^{(1)})&=& \sum_{l=1}^{\lfloor\varepsilon K \rfloor}(1-\mathbb{P}^{(1)}(RA(l,i))) \mathbb{P}^{(1)}(R(i,1)=R(i,2)\geq 0, \tilde{N}_a(\tau_{R(i,2)}^{K})=l)\\ &=& r_1\sum_{l=1}^{\lfloor\varepsilon K \rfloor} e^{- \frac{f_A}{f_a} \frac{r_2}{s} \log l } \frac{1-(1-s)^{\lfloor \varepsilon K \rfloor-l}-(1-s)^{l+1}}{s(l+1)} e^{- \frac{r_1+r_2}{s}\log\frac{\lfloor \varepsilon K \rfloor}{l}}+O_K(\sqrt{\varepsilon})\\ &= &\frac{r_1}{r_1+r_2-f_Ar_2/f_a}\Big(e^{-\frac{r_2}{s}\frac{f_A}{f_a}\log\lfloor \varepsilon K \rfloor}-e^{- \frac{r_1+r_2}{s}\log\lfloor \varepsilon K \rfloor}\Big)+O_K(\sqrt{\varepsilon}), \end{eqnarray*} where we again used the statement of Lemma \ref{equivalent} to substitute the sum, as well as Equation \eqref{lemma3.5}.\\
\noindent \textbf{Event $NR(i)^{(1)}$:} From Lemma \ref{lemma_negevents}, $$ \mathbb{P}^{(1)}(NR(i)^{(1)})=1-\mathbb{P}^{(1)}(R2(i)^{(1)})-\mathbb{P}^{(1)}( R12(i)^{(1)})-\mathbb{P}^{(1)}(R1|2(i)^{(1,ga)})+O\Big(\frac{\log K}{K} \Big). $$ This completes the proof of Proposition \ref{prop:1phase_probg1}.
$\square$
\section{Second and third phases}\label{secondphase}
This section is devoted to the proofs of Propositions \ref{prosecondphase} and \ref{prothirdphase}.
\subsection{Proof of Proposition \ref{prosecondphase}}
We need to show that two distinct lineages picked uniformly at the end of the second phase coalesce or recombine during that phase only with negligible probability. Let us recall the definition of the jumps $\tau_m^K$ in \eqref{deftpssauts} and denote by $U^K(2)$ the number of upcrossings of the $a$-population during the second phase: \begin{equation}\label{defU2K} U^K(2) :=\#\{ m, T_\varepsilon^K<\tau_m^K\leq T_\varepsilon^K + t_\varepsilon, N_{a}({\tau}_{m+1}^K)-N_a({\tau}_{m}^K)=1 \}. \end{equation} Let us introduce the event $C_\varepsilon^K$: $$ C_\varepsilon^K:= \{T_\varepsilon^K \leq S_\varepsilon^K\}\cap \{N_a^K(t) \geq \varepsilon^2K/4, \forall \ T_\varepsilon^K \leq t \leq T_\varepsilon^K +t_\varepsilon\}.$$ In particular on the event $C_\varepsilon^K$, for $T_\varepsilon^K \leq \tau_m^K \leq T_\varepsilon^K+t_\varepsilon$ and $j \in \{1,2\}$ $$p_{aA}^{(j)}(N(\tau_m^K))\leq \frac{8r_j}{\varepsilon^2 K}\quad \text{and} \quad
p_{aa}^{(c,j)}(N(\tau_m^K))\leq \frac{32}{\varepsilon^4 K^2}. $$ Then if we recall the definition of $NR(i)^{(2)}$ on page \pageref{defgenealogy3} we have for $m \in \mathbb{N}$,
\begin{equation}\label{stepprop2} \mathbb{P}^{(1)}(NR(i)^{(2)}|U^K(2)=m,C_\varepsilon^K ) \geq \Big( 1- \frac{8(r_1+r_2)}{\varepsilon^2 K} \Big)^m. \end{equation} But for $K$ large enough, $\log (1-8(r_1+r_2)/(\varepsilon^2 K))\geq -10(r_1+r_2)/(\varepsilon^2 K)$ and hence \begin{eqnarray*}
\mathbb{P}^{(1)}(NR(i)^{(2)}|C_\varepsilon^K)&\geq & \Big(1- \mathbb{P}^{(1)}(U^K(2)> K\log\log K|C_\varepsilon^K)\Big)
e^{ K\log\log K\log (1-\frac{8(r_1+r_2)}{\varepsilon^2 K})}\\
&\geq & \Big(1- \mathbb{P}^{(1)}(U^K(2)> K\log\log K|C_\varepsilon^K)\Big)
e^{ -\frac{10(r_1+r_2) \log\log K}{\varepsilon^2 }}. \end{eqnarray*} According to Condition \eqref{assrK}, the exponential term tends to $1$ as $K$ goes to infinity. Moreover, by \eqref{result_champa}, $N_a^K$ is smaller than $2\bar{n}_aK$ on the time interval $[T_\varepsilon^K, T_\varepsilon^K+t_\varepsilon]$ with probability close to $1$. When this property holds, we can bound the number of births $U^K(2)$ by a sum of $2\bar{n}_aK$ i.i.d.\ Poisson random variables with parameter $f_at_\varepsilon$. The strong law of large numbers then yields
$$ \underset{K \to\infty}{\lim}\mathbb{P}^{(1)}(U^K(2)> K\log\log K|C_\varepsilon^K)=0. $$
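Indeed, writing $P_1,P_2,...$ for i.i.d.\ Poisson random variables with parameter $f_at_\varepsilon$, the domination above and the strong law of large numbers give
$$ \frac{1}{\lceil 2\bar{n}_aK \rceil}\sum_{i=1}^{\lceil 2\bar{n}_aK \rceil} P_i \underset{K \to\infty}{\longrightarrow} f_at_\varepsilon \quad \text{a.s.}, \qquad \text{whereas} \qquad \frac{K\log\log K}{\lceil 2\bar{n}_aK \rceil}\underset{K \to\infty}{\longrightarrow} \infty, $$
so that the probability that $\sum_{i\leq \lceil 2\bar{n}_aK \rceil}P_i$ exceeds $K\log\log K$ vanishes as $K$ goes to infinity.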
Applying again \eqref{result_champa} to get
$ {\lim}_{K \to\infty}\mathbb{P}(C_\varepsilon^K|T_\varepsilon^K<\infty)=1 $ finally gives
$$ \underset{K \to\infty}{\lim}\mathbb{P}(NR(i)^{(2)}|T_\varepsilon^K<\infty)=1.$$ The coalescence part in Proposition \ref{prosecondphase} can be proven in the same way.
\subsection{Proof of Proposition \ref{prothirdphase}} The proof of the asymptotic probability of $R2(i)^{(3,ga)}$ is the same as for \eqref{approxhatPRAl}, except that the roles of $A$ and $a$ are exchanged. Hence we do not give more details. Note however that it extensively uses Lemma \ref{lemespvarU}. Let us now focus on the event $NR(i)^{(3)}$, and introduce $$NRA(i)^{(3)}:=\{\text{no neutral allele of individual $i$ recombines from the $a$ to the $A$ population}\}.$$ Recall the definitions of $\mathbb{P}^{(3)}$ and $U^K(3)$ in \eqref{defP1} and \eqref{UDk3phase} respectively. We decompose the probabilities according to the number of upcrossings of $\tilde{\tilde{N}}_a$ during the third phase and get in the same way as in \eqref{stepprop2}, for $m\in \mathbb{N}$
$$ \mathbb{P}^{(3)}(NRA(i)^{(3)}|U^K(3)=m,\{\tilde{\tilde{T}}_0^{(K,A)}< \tilde{\tilde{T}}_\varepsilon^{(K,A)}\wedge S_\varepsilon^{(K,a)}\}) \geq \Big( 1- \frac{f_A(r_1+r_2)\varepsilon}{f_a(\bar{n}_a-M''\varepsilon)^2 K} \Big)^m, $$ where we recall that $\tilde{\tilde{T}}_0^{(K,A)}$ and $\tilde{\tilde{T}}_\varepsilon^{(K,A)}$ are the analogs of
${T}_0^{(K,A)}$ and $T_\varepsilon^{(K,A)}$ (defined in \eqref{T0K}) for the process $\tilde{\tilde{N}}$. But for $K$ large enough and $\varepsilon$ small enough, $$\log \Big(1-\frac{f_A(r_1+r_2)\varepsilon}{f_a(\bar{n}_a-M''\varepsilon)^2 K}\Big)\geq -2f_A\frac{(r_1+r_2)\varepsilon}{f_a\bar{n}_a^2 K}.$$ Hence we get for a finite constant $c$ and $\varepsilon$ small enough: \begin{eqnarray*} \mathbb{P}^{(3)}(NRA(i)^{(3)})& \geq &
\Big(1- \mathbb{P}^{(3)}\Big(U^K(3)> \frac{K\log K}{\sqrt{\varepsilon}}\Big)\Big) \exp\Big( -\frac{2f_A(r_1+r_2)\sqrt{\varepsilon} \log K}{f_a\bar{n}_a^2 } \Big)\\ & \geq & \Big(1- \frac{\sqrt{\varepsilon}\mathbb{E}^{(3)}[U^K(3)]}{ K\log K }\Big)\Big( 1-\frac{2f_A(r_1+r_2)\sqrt{\varepsilon} \log K}{f_a\bar{n}_a^2 } \Big)
\geq (1- {c\sqrt{\varepsilon}})^2, \end{eqnarray*} where we used Lemma \ref{lemespvarU} and that $(r_1+r_2)\log K$ is bounded (Condition \eqref{assrK}).\\
The proof of the last part of Proposition \ref{prothirdphase} is very similar to that of Proposition \ref{prosecondphase}.
The key arguments are that the expectation of the birth number of $a$-individuals during the third phase under $\mathbb{P}^{(3)}$ is of order $K\log K$ (Lemma \ref{lemespvarU}),
whereas the probability for two neutral alleles associated with an allele $a$ to coalesce is of order $1/K^2$ at each birth of an $a$-individual (Lemma \ref{lempcoal}).
{\section{Independence of neutral lineages}\label{proofindep} This section is dedicated to the proof of Proposition \ref{proindi}. We sample $d$ distinct individuals uniformly at the end of the first phase. We recall the definitions of the genealogical events during the first phase on page \pageref{defgenealogy}
and introduce: $$
R(1|2):=\underset{1\leq i \leq d}{\sum} \mathbf{1}_{R1|2(i)^{(1,ga)}}, \quad R(1):=R(1|2)+ \underset{1\leq i \leq d}{\sum} \mathbf{1}_{R12(i)^{(1)}} \quad \text{and} \quad
R(2):=R(1)+ \underset{1\leq i \leq d}{\sum} \mathbf{1}_{R2(i)^{(1)}} $$ From Proposition \ref{prop:1phase_probg1} we know that
$R(1)$, $R(2)$ and $R(1|2)$ are sufficient to describe the neutral genealogies at the end of the first phase, up to an event of negligible probability for large $K$. Let $j,k,l$ be three integers such that $l\leq j$ and $j+k\leq d.$ We aim at approximating \begin{align}\label{decompoindep}
p(j,k,l):&=\mathbb{P}(R(1)=j,R(2)=j+k,{R(1|2)}=l|T_\varepsilon^K\leq S_\varepsilon^K)\\
&=\mathbb{P}(R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K)\mathbb{P}(R(2)=j+k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)\nonumber\\
& \hspace{1cm}\mathbb{P}({R(1|2)}=l|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j,R(2)=j+k).\nonumber \end{align} The approximations of the first two probabilities are direct adaptations of Lemma 5.2 and the proof of Proposition 2.6 in \cite{schweinsberg2005random}, pp.~1623--1624. More precisely, Lemma 7.3 in \cite{smadi2014eco}, which states that with high probability two neutral lineages do not coalesce and then recombine (backwards in time), allows us to get an analog of Lemma 5.2 (with $J=0$) in \cite{schweinsberg2005random}:
$$ \Big| \mathbb{P}(R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K)-{d\choose j} \mathbb{E}[F_1^j(1-F_1)^{d-j}|T_\varepsilon^K \leq S_\varepsilon^K] \Big|\leq c\Big(\frac{1}{\log K}+\varepsilon\Big), $$ for $\varepsilon$ small enough, where $c$ is a finite constant,
$$F_1:= \mathbb{P}(R(i,1)\geq 0|((N_A,N_a)(\tau_n^K),n\leq J^K(1)),T_\varepsilon^K \leq S_\varepsilon^K), $$ and $R(i,1)$ is defined in \eqref{def:Rij}. Then Equations (7.21), (7.23), (7.24) and (7.26) of \cite{smadi2014eco} yield
$$ \underset{K \to \infty}{\limsup}\ \Big| \mathbb{E}[F_1^j(1-F_1)^{d-j}|T_\varepsilon^K \leq S_\varepsilon^K] -(1-q_1)^jq_1^{(d-j)} \Big| \leq c \varepsilon, $$ where $q_1$ has been defined in \eqref{defq1q2}, which allows us to conclude
\begin{equation} \label{indepr1} \underset{K \to \infty}{\limsup} \ \Big| \mathbb{P}(R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K)-
{d\choose j}(1-q_1)^jq_1^{(d-j)} \Big|\leq c\varepsilon, \end{equation} for $\varepsilon$ small enough where $c$ is a finite constant.\\
The derivation of the second probability, $\mathbb{P}(R(2)=j+k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)$, follows the same outline. The lineages where $N1$ does not escape the sweep can be seen as lineages where $SL$ and $N1$ are the same locus and the recombination probability between $SL-N1$ and $N2$ is $r_2$. This is due to the independence of the recombinations between $SL$ and $N1$ and between $N1$ and $N2$. Hence we can rewrite the probability as follows:
$$ \mathbb{P}(R(2)=j+k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)=\mathbb{P}(R(2)-R(1)=k|T_\varepsilon^K\leq S_\varepsilon^K,d-R(1)=d-j). $$ We can then directly apply the result \eqref{indepr1} for the law of $R(1)$ and get:
\begin{equation}\label{indepr2} \underset{K \to \infty}{\limsup} \ \Big| \mathbb{P}(R(2)=j+k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)-
{d-j\choose k}(1-q_2)^kq_2^{(d-j-k)} \Big|\leq c\varepsilon, \end{equation} for $\varepsilon$ small enough where $c$ is a finite constant and $q_2$ has been defined in \eqref{defq1q2}.\\
The derivation of the last probability in \eqref{decompoindep} is more involved but proceeds in the same spirit. First note that we only have to focus on genealogies where $N1$ escapes the sweep. Hence the derivation of this probability comes down to that of
$ \mathbb{P}(R(1|2)=l|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)$.
The idea is to propose an alternative construction of the process with the same law and where we add the recombinations between $N1$ and $N2$ at the end: \begin{enumerate}
\item[$\bullet$] First we construct a trait population process $(N_A,N_a)$ with birth and death rates defined in \eqref{def:totalbd}.
\item[$\bullet$] Second we ``add'' the recombinations between $SL$ and $N1$: at each birth event we draw a Bernoulli variable with parameter $r_1$ to decide whether there
is a recombination or not. If there is a recombination, the parent giving its neutral allele at $N1$ is chosen with a probability proportional to
its fertility ($f_A$ or $f_a$). {\\After} this step of the construction we know the genealogies of the $d$ neutral alleles at $N1$ sampled at the end of the sweep. We label $(i_1,...,i_j)$ the $j$ sampled neutral alleles at $N1$ which experience a recombination between $SL$ and $N1$ in their genealogy.
\item[$\bullet$] Third we ``add'' the recombinations between $N1$ and $N2$ sequentially in the lineages where there is already a recombination between $SL$ and $N1$:
we first follow {backward} in time the lineage of $i_1$ and at each birth event we draw a Bernoulli variable with parameter $r_2$ to decide whether there
is a recombination or not, and choose the parent of neutral allele at $N2$ as in the second step. Then we do the same with the lineage of $i_2$, and so on until
the lineage of $i_j$.
\item[$\bullet$] Finally we ``add'' the recombinations between $N1$ and $N2$ in {those lineages which were not marked with any recombination between $SL$ and $N1$}. \end{enumerate} Such a construction generates a process distributed as the original process and facilitates the study of the dependencies between lineages $(i_1,...,i_j)$. According to Lemma \ref{lemma_negevents}, with high probability there is no recombination between $SL$ and $N1$ after (backwards in time) a coalescence at locus $N1$ among the $d$ sampled individuals. In the same way, there is no coalescence at locus $N1$ after a recombination between $SL$ and $N1$ in the $A$-population (this is due to the large number of $A$-individuals; the proof is similar to that of the last probability in Proposition \ref{prothirdphase}). Hence if we introduce $$ NC(j):=\{\text{there is no coalescence between lineages $(i_1,...,i_j)$ at locus $N1$}\},$$
we get:
$$ \mathbb{P}({R(1|2)}=l|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)= \mathbb{P}({R(1|2)}=l|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j,NC(j))+O\Big( \frac{\log K}{K}\Big). $$ {With the construction of the alternative process we can also define sequentially }for $1 \leq k \leq j$: \begin{align*}
NC(j,k):=\{&\text{there is no coalescence between lineages } (i_1,...,i_k)\text{ after completion of the}\\ &\text{process of adding the recombinations between $N1$ and $N2$ in the lineage $i_k$}\}. \end{align*} Then, if we introduce for $1\leq k \leq j$ and $\delta \in \{0,1\}$ \begin{align*} \{r_{i_k}=\delta\}:=& \{ \text{there is $\delta$ recombination between $N1$ and $N2$ in the lineage $i_k$} \},\end{align*} then for $(\delta_1,...,\delta_j) \in \{0,1\}^j$ \begin{multline*}
\mathbb{P}(r_{i_k}=\delta_k, 1\leq k \leq j|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)=\\
\underset{1 \leq k \leq j}{\prod}
\mathbb{P}(r_{i_k}=\delta_k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j,NC(j),NC(j,1),...,NC(j,k-1))+ O\Big( \frac{\log K}{K}\Big). \end{multline*} Indeed, the probability that the event $NC(j,k)$ is not realized after adding the recombinations between $N1$ and $N2$ in lineage $i_k$ has order $\log K/K$ according to Lemma \ref{lemma_negevents}. But for $1 \leq k \leq j$,
\begin{multline} \mathbb{P}(r_{i_k}=\delta_k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j,NC(j),...,NC(j,k-1))\\
=\frac{ \mathbb{P}(r_{i_k}=\delta_k,R(1)=j,NC(j),...,NC(j,k-1)|T_\varepsilon^K\leq S_\varepsilon^K) }
{\mathbb{P}(R(1)=j,NC(j),...,NC(j,k-1)|T_\varepsilon^K\leq S_\varepsilon^K) }\\
=\frac{ \mathbb{P}(r_{i_k}=\delta_k,R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K)- \mathbb{P}(r_{i_k}=\delta_k,R(1)=j,(NC(j)\cap...\cap NC(j,k-1))^c|T_\varepsilon^K\leq S_\varepsilon^K) }
{ \mathbb{P}(R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K)- \mathbb{P}(R(1)=j,(NC(j)\cap...\cap NC(j,k-1))^c|T_\varepsilon^K\leq S_\varepsilon^K) }, \end{multline} and according to Lemma \ref{lemma_negevents} and Coupling \eqref{couplage2.1}, there exists a finite $c$ such that for $K$ large enough and $\varepsilon$ small enough,
$$ \mathbb{P}((NC(j)\cap...\cap NC(j,k-1))^c|T_\varepsilon^K\leq S_\varepsilon^K)\leq c\Big(\frac{\log K}{K}+\varepsilon\Big). $$
As $ \mathbb{P}(r_{i_k}=\delta_k,R(1)=j|T_\varepsilon^K\leq S_\varepsilon^K) $ does not go to $0$ when $K$ goes to infinity, we get \begin{multline*}
\mathbb{P}(r_{i_k}=\delta_k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j,NC(j),...,NC(j,k-1))=
\mathbb{P}(r_{i_k}=\delta_k|T_\varepsilon^K\leq S_\varepsilon^K,R(1)=j)+O\Big( \frac{\log K}{K}+\varepsilon\Big) \\
= \delta_k\frac{\mathbb{P}(R1|2(i_k)^{(1,ga)}|T_\varepsilon^K\leq S_\varepsilon^K)}{\mathbb{P}(R(i_k,1)\geq 0|T_\varepsilon^K\leq S_\varepsilon^K)}+
(1-\delta_k)\Big(1-\frac{\mathbb{P}(R1|2(i_k)^{(1,ga)}|T_\varepsilon^K\leq S_\varepsilon^K)}{\mathbb{P}(R(i_k,1)\geq 0|T_\varepsilon^K\leq S_\varepsilon^K)}\Big) +O\Big( \frac{\log K}{K}+\varepsilon\Big)\\ = \delta_k\frac{1-q_1-q_3}{1-q_1}+ (1-\delta_k)\Big(1-\frac{1-q_1-q_3}{1-q_1}\Big) +O\Big( \frac{\log K}{K}+\varepsilon\Big), \end{multline*}
where we recall the definition of $R(i_k,1)$ in \eqref{def:Rij}, the definition of $R1|2(i_k)^{(1,ga)}$ on page \pageref{defgenealogy}, and we used Proposition \ref{prop:1phase_probg1}. Adding Equations \eqref{indepr1} and \eqref{indepr2} we finally obtain: \begin{eqnarray}
p(j,k,l)&=&{d\choose j}(1-q_1)^{j}q_1^{(d-j)}{d-j\choose k}(1-q_2)^kq_2^{(d-j-k)}{j\choose l}
\Big(1-\frac{q_3}{1-q_1}\Big)^l\Big(\frac{q_3}{1-q_1}\Big)^{j-l}+ O_K(\varepsilon)\nonumber \\
& =& \frac{d!}{l!(j-l)!k!(d-j-k)!}(q_1q_2)^{d-j-k}(q_1(1-q_2))^kq_3^{j-l}(1-q_1-q_3)^l+ O_K(\varepsilon). \end{eqnarray} This ends the proof of the independence between genealogies during the first phase.
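The combinatorial simplification used in the last equality is the standard identity (here $d$ denotes the sample size)
$$ {d\choose j}{d-j\choose k}{j\choose l} = \frac{d!}{j!(d-j)!}\cdot \frac{(d-j)!}{k!(d-j-k)!}\cdot \frac{j!}{l!(j-l)!}= \frac{d!}{l!(j-l)!k!(d-j-k)!}, $$
together with the grouping $(1-q_1)^{j}\big(\frac{q_3}{1-q_1}\big)^{j-l}\big(1-\frac{q_3}{1-q_1}\big)^{l}=q_3^{j-l}(1-q_1-q_3)^{l}$.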
The derivation of the asymptotic independence of neutral lineages during the third phase is an easy adaptation of Lemma 5.2 and the proof of Proposition 2.6 in \cite{schweinsberg2005random}, pp.~1623--1624, since with high probability two lineages do not coalesce during this phase.}
$\square$
\appendix
\section{Lemma \ref{probeventphase1}}\label{appA}
Recall the definition of $NR(i)^{(1)}$ on page \pageref{defgenealogy}, and Definitions \eqref{NRlsigmai} and \eqref{defRAli}. Then we have the following approximations for large $K$.
\begin{lem}\label{probeventphase1}
There exist three finite constants $c$, $K_0$ and $\varepsilon_0$ such that for every $K \geq K_0$ and $\varepsilon \leq \varepsilon_0$
\begin{equation}\label{lemPNRml} \sup_{n_A \in I_\varepsilon^K±1,l \leq \lfloor \varepsilon K \rfloor} \Big| \mathbb{P}^{(1)}_{(n_A,l)}(NR(i)^{(1)})-
\exp\Big(-\frac{r_1+r_2}{s} \log \frac{ \lfloor \varepsilon K \rfloor}{l} \Big)\Big| \leq c \sqrt{\varepsilon}, \end{equation} \begin{equation}\label{prob:ai1_m'}
\sup_{\tau \in \{\zeta,\sigma\}}\sup_{n_A \in I_\varepsilon^K\pm1,k \leq l \leq \lfloor \varepsilon K \rfloor}\Big|\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\tau,i)) -\exp\Big( -\frac{r_1}{s} \log \frac{l-1}{k} \Big)\Big| \leq c\sqrt{\varepsilon}, \end{equation} \begin{equation} \label{approxhatPRAl}
\sup_{l \leq \lfloor \varepsilon K \rfloor} \Big|\mathbb{P}^{(1)}(RA(l,i)) -\Big( 1-\exp\Big(- \frac{f_A}{f_a} \frac{r_2}{s} \log l \Big)\Big)\Big| \leq c \sqrt{\varepsilon}. \end{equation} \end{lem}
\begin{proof} Let us introduce the sigma-algebra generated by the trait population process \begin{equation*}\label{deftribu} \mathcal{F}:= \sigma \Big( (\tilde{N}_A,\tilde{N}_a)(\tau_m^K),\tau_m^K\leq \tilde{T}_{\varepsilon}^K \Big). \end{equation*}
We use some ideas developed in \cite{schweinsberg2005random} and extended in \cite{brink2014multsweep} to the two-locus case. The proof, although quite technical, can be summarized easily: for $(g,b,c,d,f) \in \mathbb{R}_+^5$, the Triangle Inequality and the Mean Value Theorem (which gives $|e^{-x}-e^{-y}|\leq |x-y|$ for $x,y\geq 0$) imply
$$ |g-e^{-b}|\leq |g-e^{-c}|+|c-d|+|d-f|+|f-b| .$$ Hence for all nonnegative random variables $(X_1,X_2)$ and every measurable event $C$: \begin{multline*}
\Big| \mathbb{P}^{(1)}(C| \mathcal{F})-e^{ -\frac{r_1+r_2}{s} \log \frac{\lfloor \varepsilon K \rfloor}{l} }\Big|
\leq \Big|\mathbb{P}^{(1)}(C| \mathcal{F})-e^{-X_1}\Big| +
\Big|X_1-X_2\Big|\\
+\Big|X_2-\mathbb{E}^{(1)}[X_2]\Big|+
\Big|\mathbb{E}^{(1)}[X_2]-\frac{r_1+r_2}{s} \log \frac{\lfloor \varepsilon K \rfloor}{l} \Big|. \end{multline*} By taking the expectation and applying Jensen and Cauchy-Schwarz Inequalities, we obtain: \begin{multline}\label{ineqX1X2}
\Big| \mathbb{P}^{(1)}(C)-e^{ -\frac{r_1+r_2}{s} \log \frac{\lfloor \varepsilon K \rfloor}{l}}\Big|\leq
\mathbb{E}^{(1)} \Big|\mathbb{P}^{(1)}(C|\mathcal{F})-e^{-X_1}\Big| +
\mathbb{E}^{(1)} \Big|X_1-X_2 \Big|\\+\sqrt{\var(X_2)}+
\Big|\mathbb{E}^{(1)}[X_2]-\frac{r_1+r_2}{s} \log \frac{\lfloor \varepsilon K \rfloor}{l}\Big|. \end{multline} Hence the idea is to find appropriate random variables $(X_1,X_2) \in \mathbb{R}_+^2$ making each term on the right-hand side small.\\
\noindent \textit{Proof of Equation \eqref{lemPNRml}:} The first step consists in working conditionally on $\mathcal{F}$: we express this probability as a product of conditional probabilities close to one and derive a Poisson approximation. To this aim, we define for $m\in \mathbb{N}$: $$ \theta^{(12)} (m):= \mathbf{1}_{\{\tau_m^K\leq \tilde{T}_\varepsilon^K\}}\mathbf{1}_{ \{\tilde{N}_a(\tau_{m}^{K})-\tilde{N}_a(\tau_{m-1}^{K})=1\}} (p_{aA}^{(1)}+p_{aA}^{(2)})(\tilde{N}_A,\tilde{N}_a)(\tau_{m-1}^{K}) , $$ where we recall the definition of the $p_{\alpha \alpha'}^{(i)}$ in Definition \ref{defpalphaalpha}. Notice that
Remark \ref{rem:recprob} p. \pageref{rem:recprob} implies that for $\rho \in \{1,2\}$, $n_A \in I_\varepsilon^K±1$ and $l<\lfloor \varepsilon K \rfloor$,
\begin{equation}\label{encadtheta} (1-c\varepsilon)(r_1+r_2)^\rho\Big( \sum_{k=1}^{l-1} \frac{\mathbb{E}^{(1)}_{(n_A,l)}U_{k}^{K}(1)}{(k+1)^\rho}\Big)\leq \mathbb{E}^{(1)}_{(n_A,l)} \Big[\sum_{m=1}^\infty(\theta^{(12)} (m)\mathbf{1}_{\{\tilde{N}_a(\tau_{m}^{K}) < l\}})^\rho \Big] \leq (r_1+r_2)^\rho\Big( \sum_{k=1}^{l-1} \frac{\mathbb{E}^{(1)}_{(n_A,l)}U_{k}^{K}(1)}{(k+1)^\rho}\Big).
\end{equation} Then, as in \cite{schweinsberg2005random}, we have for $n_A \in I_\varepsilon^K\pm1$ and $l<\lfloor \varepsilon K \rfloor$
$$\mathbb{P}^{(1)}_{(n_A,l)}(NR(i)^{(1)}|\mathcal{F}) =\prod_{m=1}^\infty(1-\theta^{(12)} (m)), \quad \mathbb{P}^{(1)}_{(n_A,l)}-\text{a.s.} $$ If we introduce the variable, \begin{equation*}
\eta^{(12)} := \sum_{m=1}^{\infty} \theta^{(12)} (m), \end{equation*} which will play the role of $X_1$ in \eqref{ineqX1X2}, we get by following the path of Lemma $3.6$ in \cite{schweinsberg2005random}: \begin{equation} \label{diffcondexp}
\mathbb{E}^{(1)}_{(n_A,l)} \Big|\prod_{m=1}^\infty(1-\theta^{(12)} (m))-\exp( -\eta^{(12)})\Big| \leq \mathbb{E}^{(1)}_{(n_A,l)} \Big[\sum_{m=1}^\infty(\theta^{(12)} (m))^2 \Big] \\
\leq \frac{c}{\log^2 K }, \end{equation} for $K$ large enough, $n_A \in I_\varepsilon^K\pm1$, $l<\lfloor \varepsilon K \rfloor$ and a finite $c$ (which can be chosen independently of $l$), where we used Equations \eqref{expupa2}, \eqref{expupa3} and \eqref{encadtheta}, and Condition \eqref{assrK} for the last inequality. Next we introduce an approximation of the random variable $\eta^{(12)}$, namely \begin{align} \label{defetatilde12}
\tilde{\eta}^{(12)} := \sum_{m=1}^{\infty} \theta^{(12)} (m)\mathbf{1}_{\{\tilde{N}_a(\tau_{m}^{K}) \geq \tilde{N}_a(0)\}}, \end{align} which will play the role of $X_2$ in \eqref{ineqX1X2}. For $n_A \in I_\varepsilon^K±1$ and $l \leq \lfloor \varepsilon K \rfloor$: \begin{align} \label{diffX1X2} 0\leq \mathbb{E}^{(1)}_{(n_A,l)}[ \eta^{(12)} - \tilde{\eta}^{(12)}] = & \mathbb{E}^{(1)}_{(n_A,l)}\Big[\sum_{m=1}^{\infty} \theta^{(12)} (m) \mathbf{1}_{\{\tilde{N}_a(\tau_{m}^{K}) < l\}}\Big]\\
\leq & \frac{r_1+r_2}{s_+(\varepsilon)s_-^2(\varepsilon)} \sum_{k=1}^{l-1} \frac{(1-s_-(\varepsilon))^{l-k}}{k+1} \leq c \frac{(r_1+r_2) }{l}, \end{align} for a finite $c$ and $\varepsilon$ small enough, where we used \eqref{encadtheta} and \eqref{expupa3} for the first inequality, and \eqref{lemma3.5} for the second one. The latter ensures that $c$ can be chosen independently of $l$. The expected value of $\tilde{\eta}^{(12)}$ can be bounded by using \eqref{encadtheta}, \eqref{expupa2} and \eqref{lemma3.5}: \begin{align}\label{approxtildeeta1}
\mathbb{E}^{(1)}_{(n_A,l)}[\tilde{\eta}^{(12)}] & \geq (1-c\varepsilon)(r_1+r_2)\sum_{k=l}^{\lfloor\varepsilon K \rfloor -1}\frac{1}{k+1}\Big(\frac{1-(1-s)^{\lfloor \varepsilon K \rfloor -k}-(1-s)^{k+1}}{s}-c\varepsilon \Big) \nonumber\\ &\geq(1-c\varepsilon) \frac{r_1 +r_2}{s} \log \frac{\lfloor\varepsilon K \rfloor}{l} - \frac{c}{\log K}, \end{align} for a finite $c$ and $\varepsilon$ small enough. For the upper bound we get similarly, \begin{equation}\label{approxtildeeta2}
\mathbb{E}^{(1)}_{(n_A,l)}[\tilde{\eta}^{(12)}] \leq (1+c\varepsilon) \frac{r_1 +r_2}{s} \log \frac{\lfloor\varepsilon K \rfloor}{l}. \end{equation} The last step consists in bounding the variance of $\tilde{\eta}^{(12)}$. As the calculation of this variance is quite involved, we introduce an approximation of $\tilde{\eta}^{(12)}$, namely \begin{eqnarray*} \tilde{\tilde{\eta}}^{(12)} :=
\sum_{m=1}^{\infty} \mathbf{1}_{\{\tilde{N}_a(\tau_{m-1}^K)\geq \tilde{N}_a(0)\}}
\mathbf{1}_{\{\tilde{N}_a(\tau_{m}^K)-\tilde{N}_a(\tau_{m-1}^K)=1\}}\frac{r_1+r_2}{\tilde{N}_a(\tau_{m-1}^K)+1} = \sum_{k=\tilde{N}_a(0)}^{\lfloor\varepsilon K \rfloor -1} \frac{r_1+r_2}{k+1} U_{k}^K(1). \end{eqnarray*} Equation \eqref{rqpr} yields $ (1-c\varepsilon)\tilde{\tilde{\eta}}^{(12)}\leq {\tilde{\eta}}^{(12)}\leq \tilde{\tilde{\eta}}^{(12)}$ for a finite $c$ and $\varepsilon$ small enough. Hence \begin{multline} \label{tildeeta}
\Big|{\var}^{(1)}_{(n_A,l)}\tilde{\eta}^{(12)}-{\var}^{(1)}_{(n_A,l)}\tilde{\tilde{\eta}}^{(12)}\Big|
\leq c \varepsilon \mathbb{E}^{(1)}_{(n_A,l)}\Big[\Big(\tilde{\tilde{\eta}}^{(12)} \Big)^2\Big] \\
\leq c \varepsilon(r_1+r_2)^2 \sum_{k,k'=l}^{\lfloor \varepsilon K \rfloor-1}\frac{\mathbb{E}^{(1)}[(U_{k}^K(1))^2]+\mathbb{E}^{(1)}[(U_{k'}^K(1))^2]}{(k+1)(k'+1)} \leq {c\varepsilon} , \end{multline}
where we used \eqref{compaU'|Uj} and \eqref{minqk} which ensure that
$U_{k}^K(1)$ is smaller than a geometric random variable with parameter $q_k^{(s_-(\varepsilon),s_+(\varepsilon))} \geq s_-(\varepsilon)$. Thus it is enough to bound ${\var}^{(1)}_{(n_A,l)}\tilde{\tilde{\eta}}^{(12)}$. Thanks to \eqref{covUmlk} and Condition \eqref{assrK} we get: \begin{eqnarray*}\label{varetatildetilde}
{\var}^{(1)}_{(n_A,l)}\tilde{\tilde{\eta}}^{(12)}&=&
{ (r_1+r_2)^2 }\sum_{k,k'=l}^{\lfloor \varepsilon K \rfloor -1}\frac{\cov^{(1)}_{(n_A,l)}(U_{k}^K(1),U_{k'}^K(1))}{(k+1)(k'+1)} \nonumber\\
& \leq & 2 (r_1+r_2)^2 \sum_{l \leq k\leq k' \leq \lfloor \varepsilon K \rfloor -1}\frac{\lambda_\varepsilon^{(k'-k)/2}+\varepsilon}{(k+1)(k'+1)}
\leq c \frac{\log \lfloor \varepsilon K \rfloor}{\log^2 K}(c+\varepsilon \log \lfloor \varepsilon K \rfloor). \end{eqnarray*} Recalling \eqref{tildeeta} and again Condition \eqref{assrK}, we finally obtain \begin{equation}\label{vartildeta}
\limsup_{K \to \infty} {\var}^{(1)}_{(n_A,l)}\tilde{{\eta}}^{(12)}\leq c \varepsilon, \end{equation} for a finite $c$ independent of $l$ and $\varepsilon$ small enough. Applying \eqref{ineqX1X2} with $X_1=\eta^{(12)}$ and $X_2=\tilde{\eta}^{(12)}$ yields \begin{multline*}
\Big|\mathbb{P}^{(1)}_{(n_A,l)}(NR(i)^{(1)})-e^{-\frac{r_1+r_2}{s}\log \frac{\lfloor \varepsilon K \rfloor}{l}} \Big|\leq
\mathbb{E}^{(1)}_{(n_A,l)} \Big|\prod_{m=1}^\infty(1-\theta^{(12)} (m))-\exp( -\eta^{(12)})\Big|\\ + \mathbb{E}^{(1)}_{(n_A,l)}[ \eta^{(12)} - \tilde{\eta}^{(12)}]+\sqrt{{\var}^{(1)}_{(n_A,l)}\tilde{{\eta}}^{(12)}}+
\Big|\mathbb{E}^{(1)}_{(n_A,l)}[\tilde{\eta}^{(12)}] - \frac{r_1 +r_2}{s} \log \frac{\lfloor\varepsilon K \rfloor}{l}\Big|. \end{multline*} We end the proof of Equation \eqref{lemPNRml} with Inequalities \eqref{diffcondexp}, \eqref{diffX1X2}, \eqref{vartildeta}, \eqref{approxtildeeta1} and \eqref{approxtildeeta2}.\\
\noindent \textit{Proof of \eqref{prob:ai1_m'}:} There is an additional difficulty due to the randomness of $\tilde{N}_a(\tau^K_{R(i,2)})$. In the previous case we were interested in an event before the first hitting of $\lfloor \varepsilon K \rfloor$, while in the current case, the conditioning on the value of $\tilde{N}_a(\tau^K_{R(i,2)})$ does not tell us how many times $\tilde{N}_a$ has hit this value before. This is why we have introduced $NR(l,\sigma,i)$ and $NR(l,\zeta,i)$ in \eqref{NRlsigmai}. Define for $m\geq 1$, $$ \theta^{(1)} (m):= \mathbf{1}_{\{\tau_m^K\leq \tilde{T}_\varepsilon^K\}}\mathbf{1}_{ \{\tilde{N}_a(\tau_{m}^{K})-\tilde{N}_a(\tau_{m-1}^{K})=1\}}p_{aA}^{(1)} ((\tilde{N}_A,\tilde{N}_a)(\tau_{m}^{K})). $$ We again condition on the trait population process and get for $n_A \in I_\varepsilon^K \pm 1$ and $k\leq l<\lfloor \varepsilon K \rfloor$,
\begin{equation}\label{lowupbound_P}\mathbb{P}^{(1)}_{(n_A,k)}(NR(l,\sigma,i)|\mathcal{F}) =\prod_{m=1}^{\sigma_{l}^{K}(1)}(1-\theta^{(1)} (m)), \quad \mathbb{P}^{(1)}_{(n_A,k)}-\text{a.s.},\end{equation} and the analogous expression for $NR(l,\zeta,i)$, with $\zeta_{l}^{K}(1)$ replacing $\sigma_{l}^{K}(1)$. We define the corresponding parameters for the Poisson approximation as follows: \begin{align*} \eta^{(1),-}_{l} := \sum_{m=1}^{\sigma_{l}^K(1)} \theta^{(1)} (m) ,\ \text{ and } \
\eta^{(1),+}_{l} := \sum_{m=1}^{\zeta_l^K(1)} \theta^{(1)} (m). \end{align*} They will play the role of $X_1$ in \eqref{ineqX1X2}. We will show that both can be approximated by: \begin{align}\label{defetatilde1}
\tilde{\eta}^{(1)}_{l} := \sum_{m=1}^{\zeta_l^K(1)} \theta^{(1)} (m)
\mathbf{1}_{\{\tilde{N}_a(0)\leq \tilde{N}_a(\tau_{m}^{K}) \leq l\}}, \end{align} which will play the role of $X_2$ in \eqref{ineqX1X2}. Recall Definitions \eqref{Umjk}, \eqref{Dk} and \eqref{defjumpexc}. On the one hand, for $n_A \in I_\varepsilon^K \pm 1$ and $k<\lfloor \varepsilon K \rfloor$, \begin{align}\label{approx+} \mathbb{E}^{(1)}_{(n_A,k)}[\eta^{(1),+}_{l}-\tilde{\eta}^{(1)}_{l} ] &= \mathbb{E}^{(1)}_{(n_A,k)} \Big[\sum_{m=1}^{\zeta_l^K(1)} \theta^{(1)} (m) (\mathbf{1}_{\{ N_a^K(\tau_{m}^{K}) <k\}} + \mathbf{1}_{\{ N_a^K(\tau_{m}^{K}) > l\}})\Big]\nonumber\\ &\leq \mathbb{E}^{(1)}[D^{K}_{k}(1)] \sum_{j=1}^{k-1} \sup_{n_A \in I_\varepsilon^K}\hspace{.1cm}p_{aA}^{(1)}(n_A,j) \sup_{n_A \in I_\varepsilon^K \pm 1}\hspace{.1cm}\mathbb{E}^{(1)}_{(n_A,k-1)}[U_{n_A,k,j}^K(1)]\nonumber\\ &+ \mathbb{E}^{(1)}[U^{K}_{l}(1)] \sum_{j=l+1}^{ \lfloor \varepsilon K \rfloor} \sup_{n_A \in I_\varepsilon^K} \hspace{.1cm} p_{aA}^{(1)}(n_A,j)
\sup_{n_A \in I_\varepsilon^K \pm 1}\hspace{.1cm}\mathbb{E}^{(1)}_{(n_A,l+1)}[U_{n_A,l,j}^K(1)|\sigma_l^K(1)<\infty], \end{align} where we used that in the first phase, under $\mathbb{P}^{(1)}$, the number of excursions below $k$ (resp. above $l$) is equal to $D_k^K(1)$ (resp. $U_l^K(1)-1$). Applying Inequality \eqref{expupa2}, Lemma \ref{excunder}, and Equation \eqref{rqpr}, we get the existence of a finite $c$ such that for $\varepsilon$ small enough: \begin{equation*}
\mathbb{E}^{(1)}_{(n_A,k)}[\eta^{(1),+}_{l}-\tilde{\eta}^{(1)}_{l} ] \leq
cr_1 \sum_{j=1}^{ \lfloor \varepsilon K \rfloor} \frac{\mu_\varepsilon^{|j-l|}}{j+1} \leq \frac{c}{\log K}, \end{equation*} as $\mu_\varepsilon\in(0,1)$ for $\varepsilon$ small enough and by Condition \eqref{assrK}. On the other hand, by using the same results as in \eqref{approx+}, we get \begin{eqnarray*}
\mathbb{E}^{(1)}_{(n_A,k)}[| \eta^{(1),-}_{l}-\tilde{\eta}^{(1)}_{l} | ] &\leq & \mathbb{E}^{(1)}_{(n_A,k)} \Big[\sum_{m=1}^{\sigma_{l}^K(1)} \theta^{(1)} (m) \mathbf{1}_{\{\tilde{N}_a(\tau_{m}^{K})<k\}} + \sum_{m=\sigma_{l}^{K}(1)+1}^{\zeta_l^K(1)} \theta^{(1)} (m)\mathbf{1}_{\{k\leq \tilde{N}_a(\tau_{m}^{K}) \leq l\}} \Big]\\
&\leq & cr_1 \Big(\sum_{j=1}^{k-1}\frac{\mu_\varepsilon^{k-j}}{j+1} +\sum_{j=k}^{l-1} \frac{\mu_\varepsilon^{l-j}}{j+1} \Big)
\leq \frac{c}{\log K}.\end{eqnarray*} This shows that it is sufficient to use $\tilde{\eta}^{(1)}_{l}$ for the Poisson approximation. From \eqref{diffcondexp} we deduce that this approximation holds true up to terms of order $1/\log^{2} K$. Recalling once again \eqref{ineqX1X2}, we see that it only remains to calculate the expected value of $\tilde{\eta}^{(1)}_{l}$ and to bound its variance. The expectation can be approximated in the same way as the expected value of $\tilde{\eta}^{(12)}$ from the previous part in \eqref{approxtildeeta1} and \eqref{approxtildeeta2}: \begin{align} (1-c\varepsilon) \frac{r_1}{s} \log \frac{l-1}{k} - \frac{c}{\log K}\leq
\mathbb{E}^{(1)}_{(n_A,k)}[\tilde{\eta}^{(1)}_{l}] \leq(1+c\varepsilon) \frac{r_1}{s} \log \frac{l-1}{k}. \end{align} A comparison of the definitions of $\tilde{\eta}^{(1)}_{l}$ in \eqref{defetatilde1} and $\tilde{\eta}^{(12)}$ in \eqref{defetatilde12} shows that the variance of $\tilde{\eta}^{(1)}_{l}$ can be bounded by the same expression, that is, a constant times $\varepsilon$. This ends the proof of Equation \eqref{prob:ai1_m'}.\\
\noindent \textit{Proof of Equation \eqref{approxhatPRAl}:} It can be done in a similar way as for Equations \eqref{lemPNRml} and \eqref{prob:ai1_m'}. We have the following lower and upper bounds: \begin{multline} \label{lowupbound_P_AA}
\prod_{m=1}^{\zeta_l^K(1)} \Big[ 1- p_{AA}^{(2)}(\tilde{N}_A,\tilde{N}_a)(\tau_m^K) \Big]
\leq 1-\mathbb{P}^{(1)}(RA(l,i)|\mathcal{F})
\leq \prod_{m=1}^{\sigma_l^K(1)} \Big[ 1- p_{AA}^{(2)}(\tilde{N}_A,\tilde{N}_a)(\tau_m^K) \Big]. \end{multline} Once again we aim at deriving a Poisson approximation. As a birth event in the $A$-population is needed to see a recombination within the $A$-population, bounds on the expected number of jumps will concern the process $\tilde{N}_A$ and we have to use Lemma \ref{lemespvarmathcalU}. \end{proof}
\section{Technical results}
This section is dedicated to technical results needed in the proofs. First we recall a well-known result on the hitting times of birth and death processes, which can be found in Lemma 3.1 of \cite{schweinsberg2005random}:
\begin{pro} Let $Z=(Z_t)_{t \geq 0}$ be a birth and death process with individual birth and death rates $b$ and $d$. For $i \in \mathbb{Z}^+$, let $T_i=\inf\{ t\geq 0 : Z_t=i \}$, and let $\mathbb{P}_i$ denote the law of $Z$ when $Z_0=i$. Then for $(i,j,k) \in \mathbb{Z}_+^3$ such that $i<j<k$, \begin{equation} \label{hitting_times} \mathbb{P}_j(T_k<T_i)=\frac{1-(d/b)^{j-i}}{1-(d/b)^{k-i}} .\end{equation} \end{pro}
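As an illustrative numerical sanity check of \eqref{hitting_times} (not part of the source; the parameter values below are arbitrary), the right-hand side is the function of $j$ that is harmonic for the embedded jump chain, with boundary values $0$ at $i$ and $1$ at $k$:

```python
def hit_prob(b, d, i, j, k):
    """Right-hand side of the hitting-time formula P_j(T_k < T_i)."""
    r = d / b
    return (1 - r ** (j - i)) / (1 - r ** (k - i))

b, d, i, k = 2.0, 1.0, 0, 10
# Boundary conditions: started at i the event fails, started at k it holds.
assert abs(hit_prob(b, d, i, i, k) - 0.0) < 1e-12
assert abs(hit_prob(b, d, i, k, k) - 1.0) < 1e-12
# Harmonicity: (b + d) h(j) = b h(j+1) + d h(j-1) for i < j < k.
for j in range(i + 1, k):
    lhs = (b + d) * hit_prob(b, d, i, j, k)
    rhs = b * hit_prob(b, d, i, j + 1, k) + d * hit_prob(b, d, i, j - 1, k)
    assert abs(lhs - rhs) < 1e-9
```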
We also recall Lemma 3.5 in \cite{schweinsberg2005random} and the first part of Equation (A.16) in \cite{smadi2014eco} which are used several times:
\begin{lem} \ \begin{enumerate}
\item[$\bullet$] If $a>1$, there exists a finite constant $C$ such that for every $N \in \mathbb{N}$, \begin{equation} \label{lemma3.5}\sum_{j=1}^N \frac{a^j}{j}\leq \frac{Ca^N}{N}.\end{equation}
\item[$\bullet$] Recall Definition \eqref{defqks1s2}. Then for $(s_1, s_2) \in (0,1)^2$ and $k< \lfloor \varepsilon K \rfloor$, \begin{equation} \label{minqk} q^{(s_1\wedge s_2,s_1 \vee s_2)}_k\geq s_1 \wedge s_2. \end{equation} \end{enumerate} \end{lem}
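As an illustrative numerical check of \eqref{lemma3.5} (not part of the source; the choice $a=2$ and the constant $C=3$ are arbitrary values for which the bound is easy to confirm, since the ratio tends to $a/(a-1)=2$):

```python
def ratio(a, N):
    """(sum_{j=1}^N a^j / j) divided by a^N / N."""
    s = sum(a ** j / j for j in range(1, N + 1))
    return s / (a ** N / N)

# For a = 2 the constant C = 3 works for every N tested here.
for N in range(1, 200):
    assert ratio(2.0, N) <= 3.0
```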
Finally, we state two technical results. The first one can be proven by using characteristic functions; the proof of the second is given below:
\begin{lem}\label{lemgeom} Let $V$ be a geometric random variable with parameter $p_1$ and $(G^i, i \in \mathbb{N})$ a sequence of independent geometric random variables with parameter $p_2$, independent of $V$. Then the random variable $$ Z:= \sum_{i=1}^{V}G^i $$ is geometrically distributed with parameter $p_1p_2$. \end{lem}
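As an illustrative check of Lemma~\ref{lemgeom} (not part of the source; it assumes the convention that all geometric variables take values in $\{1,2,\dots\}$, and the parameter values are arbitrary), one can compare the exact law of $Z$, obtained by conditioning on $V$, with the claimed geometric law:

```python
from math import comb

def pmf_Z(n, p1, p2):
    """P(Z = n) for Z = G^1 + ... + G^V, by conditioning on V = v:
    a sum of v geometrics has the negative binomial law
    C(n-1, v-1) * p2**v * (1-p2)**(n-v)."""
    return sum(
        p1 * (1 - p1) ** (v - 1)                      # P(V = v)
        * comb(n - 1, v - 1) * p2 ** v * (1 - p2) ** (n - v)
        for v in range(1, n + 1)
    )

p1, p2 = 0.3, 0.6
for n in range(1, 40):
    # Claimed law: Z is geometric with parameter p1 * p2.
    assert abs(pmf_Z(n, p1, p2) - p1 * p2 * (1 - p1 * p2) ** (n - 1)) < 1e-12
```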
\begin{lem}\label{equivalent}
Let $(c_N, N \in \mathbb{N})$ be a bounded sequence of real numbers.
Then there exists a finite constant $c$ such that
$$\limsup_{N \to \infty}\sup_{k \leq N}\Big|
\sum_{l=1}^{k-1}\frac{e^{\frac{c_N }{\log N}\log l}}{l+1}-\frac{\log N}{c_N} (e^{\frac{c_N }{\log N}\log k}-1) \Big| \leq c.$$ \end{lem}
\begin{proof} We prove the Lemma for a sequence $(c_N, N \in \mathbb{N})$ in $\mathbb{R}^*$ and extend the result by using the convention
$$ \Big( \frac{\log N}{c_N} (e^{\frac{c_N }{\log N}\log k}-1)\Big)_{|c_N=0}=\log k.$$
The idea is to compare the sum with the integral $$ \int_1^k x^{\frac{c_N}{\log N}-1}dx= \frac{\log N}{c_N} (e^{\frac{c_N }{\log N}\log k}-1).$$ Let $l$ be in $\{1,...,N-1\}$. Then we have \begin{eqnarray*}
\int_l^{l+1} x^{\frac{c_N}{\log N}-1}dx -\frac{l^{\frac{c_N }{\log N}}}{l+1} &=& \frac{\log N}{c_N} \Big((l+1)^{\frac{c_N }{\log N}}-l^{\frac{c_N }{\log N}} -\frac{c_N}{\log N}\frac{l^{\frac{c_N }{\log N}}}{l+1} \Big)\\ &=& \frac{\log N}{c_N}l^{\frac{c_N }{\log N}} \Big(\Big(1+\frac{1}{l}\Big)^{\frac{c_N }{\log N}}-1 -\frac{c_N}{(l+1)\log N}\Big).
\end{eqnarray*} An application of the Taylor-Lagrange formula yields that $$ \Big(1+\frac{1}{l}\Big)^{\frac{c_N }{\log N}}-1=\frac{c_N}{l\log N}+\frac{c_N}{\log N}\Big(\frac{c_N}{\log N}-1 \Big) \frac{1}{2l^2}\Big(1+x\Big)^{\frac{c_N }{\log N}-2} $$ where $x$ belongs to $[0,1/l]$. As the sequence $(c_N, N \in \mathbb{N})$ is bounded, we deduce that there exists a finite constant $c$ such that
$$ \Big|\int_l^{l+1} x^{\frac{c_N}{\log N}-1}dx -\frac{l^{\frac{c_N }{\log N}}}{l+1}\Big|\leq \frac{c}{l^2}. $$ This completes the proof of Lemma \ref{equivalent}. \end{proof}
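As an illustrative numerical check of Lemma~\ref{equivalent} (not part of the source; it takes the constant sequence $c_N \equiv 1$ and the crude threshold $2$, which comfortably dominates the difference for the sampled values):

```python
from math import log

def diff(N, k, cN=1.0):
    """|sum_{l=1}^{k-1} l^(cN/log N)/(l+1) - (log N/cN)(k^(cN/log N) - 1)|."""
    a = cN / log(N)
    s = sum(l ** a / (l + 1) for l in range(1, k))
    integral = (1.0 / a) * (k ** a - 1.0)
    return abs(s - integral)

# The lemma takes the sup over k <= N; the difference stays of order one.
for N in (10 ** 2, 10 ** 4, 10 ** 6):
    for k in (2, 10, 100):
        assert diff(N, k) < 2.0
```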
{\bf Acknowledgements:} {\sl The authors would like to thank Jean-François Delmas, Sylvie M\'el\'eard and Anja Sturm for their careful reading of this paper. They also want to thank an anonymous reviewer for several suggestions and improvements. This work was partially funded by project MANEGE `Mod\`eles Al\'eatoires en \'Ecologie, G\'en\'etique et \'Evolution' of the French national research agency ANR-09-BLAN-0215, Chair Mod\'elisation Math\'ematique et Biodiversit\'e Veolia Environnement- Ecole Polytechnique-Museum National d'Histoire Naturelle-Fondation X and the French national research agency ANR-11-BSV7-013-03, the DFG through SPP priority programme 1590 and the RTG 1644, `Scaling Problems in Statistics'.}
\end{document}
\begin{document}
\title{Planar Tur\'an Number of the 6-Cycle} \baselineskip=0.30in \begin{abstract}
Let ${\rm ex}_{\mathcal{P}}(n,T,H)$ denote the maximum number of copies of $T$ in an $n$-vertex planar graph which does not contain $H$ as a subgraph. When $T=K_2$, ${\rm ex}_{\mathcal{P}}(n,T,H)$ is the well-studied function known as the planar Tur\'an number of $H$, denoted by ${\rm ex}_{\mathcal{P}}(n,H)$. The topic of extremal planar graphs was initiated by Dowden (2016), who obtained sharp upper bounds for both ${\rm ex}_{\mathcal{P}}(n,C_4)$ and ${\rm ex}_{\mathcal{P}}(n,C_5)$. Later, Y. Lan et al.\ continued this line of work and proved that ${\rm ex}_{\mathcal{P}}(n,C_6)\leq \frac{18(n-2)}{7}$. In this paper, we give the sharp upper bound ${\rm ex}_{\mathcal{P}}(n,C_6) \leq \frac{5}{2}n-7$, for all $n\geq 18$, which improves Lan's result. We also pose a conjecture on ${\rm ex}_{\mathcal{P}}(n,C_k)$, for $k\geq 7$. \end{abstract} {\bf Keywords}\ \ Planar Tur\'an number, Extremal planar graph
\section{Introduction and Main Results} In this paper, all graphs considered are planar, undirected, finite and contain neither loops nor multiple edges. We use $C_k$ to denote the cycle on $k$ vertices and $K_r$ to denote the complete graph on $r$ vertices.
One of the well-known results in extremal graph theory is the Tur\'an Theorem \cite{TP}, which gives the maximum number of edges that a graph on $n$ vertices can have without containing a $K_r$ as a subgraph. The Erd\H{o}s-Stone-Simonovits Theorem \cite{EP1,EP2} then generalized this result, asymptotically determining $\ex(n,H)$ for all non-bipartite graphs $H$: $\ex(n,H)=(1-\frac{1}{\chi(H)-1})\binom{n}{2}+o(n^{2})$, where $\chi(H)$ denotes the chromatic number of $H$. Over the last decade, a considerable amount of research has been carried out on Tur\'an-type problems, i.e., those whose host graphs are $K_n$, $k$-uniform hypergraphs or $k$-partite graphs, see \cite{EP2,zykov}.
In 2016, Dowden \cite{DC} initiated the study of Tur\'an-type problems when host graphs are planar, i.e., how many edges can a planar graph on $n$ vertices have, without containing a given smaller graph? The planar Tur\'an number of a graph $H$, $\ex_{\mathcal{P}}(n,H)$, is the maximum number of edges in a planar graph on $n$ vertices which does not contain $H$ as a subgraph. Dowden \cite{DC} obtained the tight bounds $\ex_{\mathcal{P}}(n,C_4) \leq\frac{15(n-2)}{7}$, for all $n\geq 4$, and $\ex_{\mathcal{P}}(n,C_5) \leq\frac{12n-33}{5}$, for all $n\geq 11$. Later, Y. Lan et al.~\cite{LY} obtained the bounds $\ex_{\mathcal{P}}(n,\Theta_4)\leq \frac{12(n-2)}{5}$, for all $n\geq 4$, $\ex_{\mathcal{P}}(n,\Theta_5)\leq \frac{5(n-2)}{2}$, for all $n\geq 5$, and $\ex_{\mathcal{P}}(n,\Theta_6)\leq \frac{18(n-2)}{7}$, for all $n\geq 7$, where $\Theta_k$ is obtained from a cycle $C_k$ by adding an additional edge joining any two non-consecutive vertices. They also demonstrated that their bounds for $\Theta_4$ and $\Theta_5$ are tight by exhibiting, for infinitely many values of $n$, planar graphs on $n$ vertices attaining the stated bounds. As a consequence of the bound for $\Theta_6$ in the same paper, they presented the following corollary.
\begin{corollary}[Y. Lan, et al.\cite{LY}]
\begin{align*}
\ex_{\mathcal{P}}(n, C_6)\leq \frac{18(n-2)}{7}
\end{align*}
for all $n\geq 6$, with equality when $n=9$. \end{corollary} In this paper we present a tight bound for $\ex_{\mathcal{P}}(n, C_6)$. In particular, we prove the following two theorems to give the tight bound.
We denote the vertex and the edge sets of a graph $G$ by $V(G)$ and $E(G)$ respectively. We also denote the number of vertices and edges of $G$ by $v(G)$ and $e(G)$ respectively. The minimum degree of $G$ is denoted $\delta(G)$. The main ingredient of the result is as follows: \begin{theorem}\label{thm:main}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$. Then $e(G) \leq \frac{5}{2} n - 7$. \end{theorem}
We use Theorem~\ref{thm:main}, which considers only $2$-connected graphs with no degree $2$ (or $1$) vertices and order at least $6$, to establish our main result, which gives the desired bound of $\frac{5}{2}n-7$ for all $C_6$-free plane graphs with at least $18$ vertices. \begin{theorem}\label{thm:main_new}
Let $G$ be a $C_6$-free plane graph on $n$ $(n\geq 18)$ vertices. Then
\begin{align*}
e(G) \leq \frac{5}{2} n - 7 .
\end{align*} \end{theorem}
Indeed, there are $C_6$-free plane graphs on $17$ vertices with $36$ edges, whereas $\frac{5}{2}(17)-7=35.5<36$, so the bound fails for $n=17$. One such graph can be seen in Figure~\ref{fig:smallexample}.
\begin{figure}
\caption{Example of $G$ on $17$ vertices such that $e(G) > (5/2) v(G) - 7$.}
\label{fig:smallexample}
\end{figure}
We show that, for large graphs, Theorem~\ref{thm:main_new} is tight: \begin{theorem}\label{thm:construct}
For every $n\equiv 2\pmod{5}$, there exists a $C_6$-free plane graph $G$ with $v(G) = \frac{18n+14}{5}$ and $e(G) = 9n$, hence $e(G) = \frac{5}{2} v(G) - 7$. \end{theorem}
For a vertex $v$ in $G$, the neighborhood of $v$, denoted $N_G(v)$, is the set of all vertices in $G$ which are adjacent to $v$. We denote the degree of $v$ by $d_G(v) = |N_G(v)|$. We may avoid the subscripts if the underlying graph is clear. The minimum degree of $G$ is denoted by $\delta(G)$, and the number of components of $G$ by $c(G)$. For the sake of simplicity, we may use the term $k$-cycle to mean a cycle of length $k$ and $k$-face to mean a face bounded by a $k$-cycle. A $k$-path is a path with $k$ edges.
\section{Proof of Theorem~\ref{thm:construct}: Extremal Graph Construction}
First we show that for a plane graph $G_0$ with $n$ vertices ($n\equiv 7\pmod{10}$), each face having length $7$ and each vertex in $G_0$ having degree either $2$ or $3$, we can construct $G$, where $G$ is a $C_6$-free plane graph with $v(G) = \frac{18n+14}{5}$ and $e(G) = 9n$. We then give a construction for such a $G_0$ as long as $n\equiv 7\pmod{10}$.
Using Euler's formula, the fact that every face has length $7$ and every degree is $2$ or $3$, we have $e(G_0)=\frac{7(n-2)}{5}$, and the numbers of degree $2$ and degree $3$ vertices in $G_0$ are $\frac{n+28}{5}$ and $\frac{4n-28}{5}$, respectively.
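For completeness, these counts follow from double counting edge-face and edge-vertex incidences together with Euler's formula (writing $a$ and $b$ for the numbers of degree-$2$ and degree-$3$ vertices of $G_0$):
\begin{align*}
2e(G_0) = 7f(G_0),\qquad n - e(G_0) + f(G_0) = 2
  \ &\Longrightarrow\ e(G_0) = \tfrac{7(n-2)}{5},\\
a + b = n,\qquad 2a + 3b = 2e(G_0) = \tfrac{14(n-2)}{5}
  \ &\Longrightarrow\ a = \tfrac{n+28}{5},\quad b = \tfrac{4n-28}{5}.
\end{align*}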
Given $G_0$, we construct first an intermediate graph $G'$ by step \ref{it:construct1}: \begin{enumerate}[label=(\arabic*)]
\item Add a halving vertex on each edge of $G_0$ and join each pair of halving vertices at distance $2$; see an example in Figure~\ref{fig:op1}. Let $G'$ denote this new graph; then $v(G')=v(G_0)+e(G_0)=\frac{12n-14}{5}$, and the numbers of degree $2$ and degree $3$ vertices in $G'$ are equal to those in $G_0$, respectively. \label{it:construct1}
\begin{figure}
\caption{Adding a halving vertex to each edge of $G_0$.}
\label{fig:op1}
\end{figure}
To get $G$, we apply the following steps \ref{it:construct2} and \ref{it:construct3} on the degree $2$ and $3$ vertices in $G'$, respectively.
\item For each degree $2$ vertex $v$ in $G_0$, let $N(v)=\{v_1,v_2\}$, and so $v_1vv_2$ forms an induced triangle in $G'$. Fix $v_1$ and $v_2$, replace $v_1vv_2$ with a $K^{-}_5$ by adding vertices $v^{'}_1$, $v^{'}_2$ to $V(G')$ and edges $v^{'}_1v$, $v^{'}_1v^{'}_2$, $v^{'}_1v_1$, $v^{'}_1v_2$, $v^{'}_2v_1$, $v^{'}_2v_2$ to $E(G')$. See Figure~\ref{fig:op2}. \label{it:construct2}
\begin{figure}
\caption{Replacing a degree-$2$ vertex of $G_0$ with a $K_5^-$.}
\label{fig:op2}
\end{figure}
\item For each degree $3$ vertex $v$ in $G_0$, such that $N(v)=\{v_1,v_2,v_3\}$, the set of vertices $\{v,v_1,v_2,v_3\}$ then forms an induced $K_4$ in $G'$. Fix $v_1$, $v_2$ and $v_3$, replace this $K_4$ with a $K^{-}_5$ by adding a new vertex $v'$ to $V(G')$ and edges $v'v$, $v'v_1$, $v'v_2$ to $E(G')$. See Figure~\ref{fig:op3}.\label{it:construct3}
\begin{figure}
\caption{Replacing a degree-$3$ vertex of $G_0$ with a $K_5^-$.}
\label{fig:op3}
\end{figure} \end{enumerate}
For each integer $k\geq 0$ and $n = 10 k + 7$, we present a construction of such a $G_0$, which we call $G_0^k$: Let $v_{i}^{t}$ and $v_{i}^{b}$ $(1\leq i\leq k+1)$ be the top and bottom vertices of the heptagonal grid with $3$ layers and $k$ columns, respectively (see the red vertices in Figure~\ref{fig:heptgrid}), and let $v$ be the extra vertex in $G_0^k$ but not in the heptagonal grid. We join $v_1^tv$, $vv_1^b$ and $v_{i}^{t}v_{i}^{b}$ $(2\leq i\leq k+1)$. Clearly, $G_0^k$ is a $(10k+7)$-vertex plane graph and each face of $G_0^k$ is a $7$-face. Obviously $e\left(G_0^k\right)=14k+7$, and the numbers of degree $2$ and $3$ vertices are $2 k + 7 = \frac{n + 28}{5}$ and $8 k = \frac{4 n - 28}{5}$ respectively.
\begin{figure}
\caption{The graph $G_0^k$, $k\geq 1$, in which each face has length $7$. The graph $H_0^k$ (see Remark~\ref{rem:H0k}) is obtained by deleting $x_1,\ldots,x_5$ and adding the edge $v_1^ty$.}
\label{fig:heptgrid}
\end{figure}
After applying steps \ref{it:construct1}, \ref{it:construct2}, and \ref{it:construct3} on $G_0^k$, we get $G$. It is easy to verify that $G$ is a $C_6$-free plane graph with \begin{align*}
v(G) &= v(G_0^k) + e(G_0^k) + 2 (2 k + 7) + 8 k = (10 k + 7) + (14 k + 7) + 12 k + 14 &&= 36 k + 28 \\
e(G) &= 9 v(G_0^k)= 90k+63 . \end{align*} Thus, $e(G)=\frac{5}{2}v(G)-7$.
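As an illustrative arithmetic check of these counts (not part of the source; it only re-traces the vertex and edge bookkeeping of steps \ref{it:construct1}-\ref{it:construct3}):

```python
# Re-check the vertex/edge counts of G built from G_0^k with n = 10k + 7.
for k in range(0, 100):
    n = 10 * k + 7
    e0 = 7 * (n - 2) // 5              # edges of G_0^k
    deg2 = (n + 28) // 5               # degree-2 vertices of G_0^k
    deg3 = (4 * n - 28) // 5           # degree-3 vertices of G_0^k
    v = n + e0 + 2 * deg2 + deg3       # steps (1)-(3) add e0 + 2*deg2 + deg3 vertices
    e = 9 * n
    assert v == 36 * k + 28 and e == 90 * k + 63
    assert 2 * e == 5 * v - 14         # i.e. e = (5/2) v - 7
```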
\begin{remark}\label{rem:H0k}
In fact, for $k\geq 1$ and $n=10k+2$, there exists a graph $H^k_0$ which is obtained from $G^k_0$ by deleting the vertices (colored green in Figure~\ref{fig:heptgrid}) $x_1$, $x_2$, $x_3$, $x_4$, $x_5$ and adding the edge $v^t_1y$. Clearly, $H^k_0$ is a $(10k+2)$-vertex plane graph such that all faces have length $7$. Moreover, $e(H^k_0) = 14 k$, and the numbers of degree-$2$ and degree-$3$ vertices are $2 k + 6 = \frac{n + 28}{5}$ and $8 k - 4 = \frac{4 n - 28}{5}$, respectively. After applying steps (1), (2), and (3) to $H^k_0$, we get a graph $H$ that is a $C_6$-free plane graph with $e(H) = (5/2) v(H) - 7$.
Thus, for every $n\equiv 2\pmod{5}$ we have a base graph as above in which each face is a $7$-gon, and we obtain a $C_6$-free plane graph on $v$ vertices with $(5/2)v - 7$ edges for every $v\equiv 10\pmod{18}$ with $v\geq 28$. \end{remark}
\section{Definitions and Preliminaries}
We give some necessary definitions and preliminary results which are needed in the proof of Theorems~\ref{thm:main} and~\ref{thm:main_new}.
\begin{definition}
Let $G$ be a plane graph and $e\in E(G)$. If $e$ is not in a $3$-face of $G$, then we call it a \textbf{trivial triangular-block}. Otherwise, we recursively construct a \textbf{triangular-block} in the following way. Start with $H$ as a subgraph of $G$, such that $E(H)=\{e\}$.
\begin{enumerate}[label=(\arabic*)]
\item Add the other edges of the $3$-face containing $e$ to $E(H)$.
\item Take $e'\in E(H)$ and search for a $3$-face containing $e'$. Add the other edge(s) of this $3$-face to $E(H)$.
\item Repeat step (2) until no $3$-face can be found for any edge in $E(H)$.
\end{enumerate}
We denote the triangular-block obtained from $e$ as the starting edge, by $B(e)$. \end{definition}
Let $G$ be a plane graph. We have the following three observations: \begin{enumerate}[label=(\roman*)]
\item If $H$ is a non-trivial triangular-block and $e_1, e_2\in E(H)$, then $B(e_1)=B(e_2)=H$.
\item Any two triangular-blocks of $G$ are edge disjoint. \label{it:obs2}
\item If $B$ is a triangular-block with the unbounded region being a $3$-face, then $B$ is a triangulation graph. \end{enumerate}
Let $\mathcal{B}$ be the family of triangular-blocks of $G$. From observation \ref{it:obs2} above, we have \begin{align*}
e(G)=\sum\limits_{B\in\mathcal{B}}e(B), \end{align*} where $e(G)$ and $e(B)$ are the number of edges of $G$ and $B$ respectively.
Next, we distinguish the types of triangular-blocks that a $C_6$-free plane graph may contain. The following lemma bounds the number of vertices of a triangular-block. \begin{lemma}\label{lem:5block}
Every triangular-block of $G$ contains at most $5$ vertices. \end{lemma} \begin{proof}
We prove it by contradiction. Let $B$ be a triangular-block of $G$ containing at least $6$ vertices. We perform the following operation: delete vertices on the boundary of the unbounded face of $B$ one at a time until the new triangular-block $B'$ has exactly $6$ vertices. Next, we show that $B'$ cannot be a triangular-block of $G$. Suppose that it is. We consider the following two cases to complete the proof.
\begin{case} $B'$ contains a separating triangle.\end{case}
Let $v_1v_2v_3$ be the separating triangle. Without loss of generality, assume that the inner region of the triangle contains two vertices, say $v_4$ and $v_5$. The outer region of the triangle contains one vertex, say $v_6$. Since the unbounded face is a $3$-face, the inner structure is a triangulation. Without loss of generality, let the inner structure be as shown in Figure \ref{fig:blocksize}(a). Now consider the vertex $v_6$. If $v_1,v_2\in N(v_6)$, then $v_3v_4v_5v_2v_6v_1v_3$ is a $6$-cycle in $G$, a contradiction. Similarly for the cases when $v_1,v_3\in N(v_6)$ and $v_2,v_3\in N(v_6)$.
\begin{case} $B'$ contains no separating triangle.\end{case}
Consider a triangular face $v_1v_2v_3v_1$. Let $v_4$ be a vertex in the triangular-block such that $v_2v_3v_4v_2$ is a $3$-face. Notice that $v_1v_4 \notin E(B')$, otherwise we get a separating triangle in $B'$. Let $v_5$ be a vertex in $B'$ such that $v_2v_4v_5v_2$ is a $3$-face. Notice that $v_6$ cannot be adjacent to both vertices in any of the pairs $\{v_1,v_2\}$, $\{v_1,v_3\}$, $\{v_2,v_5\}$, $\{v_3,v_4\}$, or $\{v_4,v_5\}$. Otherwise, $C_6\subset G$. Also $v_3v_5 \notin E(B')$, otherwise we have a separating triangle. So, let $v_1v_5\in E(B')$ and $v_1,v_5\in N(v_6)$ (see Figure~\ref{fig:blocksize}(b)). In this case $v_1v_6v_5v_2v_4v_3v_1$ results in a $6$-cycle, a contradiction. \end{proof}
\begin{figure}
\caption{The structure of $B'$ when it contains a separating triangle or not, respectively.}
\label{fig:blocksize}
\end{figure}
Now we describe all possible triangular-blocks in $G$ based on the number of vertices the block contains. For $k\in\{2,3,4,5\}$, we denote the triangular-blocks on $k$ vertices as $B_k$.
\subsubsection*{Triangular-blocks on $5$ vertices.} There are four types of triangular-blocks on $5$ vertices (see Figure~\ref{fig:5blocks}). Notice that $B_{5,a}$ is a $K_5^-$. \begin{figure}
\caption{Triangular-blocks on $5$ vertices.}
\label{fig:5blocks}
\end{figure}
\subsubsection*{Triangular-blocks on $4$, $3$, and $2$ vertices.} There are two types of triangular-blocks on $4$ vertices. See Figure~\ref{fig:432blocks}. Observe that $B_{4,a}$ is a $K_4$. The $3$-vertex and $2$-vertex triangular-blocks are simply $K_3$ and $K_2$ (the trivial triangular-block), respectively. \begin{figure}
\caption{Triangular-blocks on $4$, $3$ and $2$ vertices.}
\label{fig:432blocks}
\end{figure}
\begin{definition} Let $G$ be a plane graph.
\begin{enumerate}[label=(\roman*)]
\item A vertex $v$ in $G$ is called a \textbf{junction vertex} if it is in at least two distinct triangular-blocks of $G$.
\item Let $B$ be a triangular-block in $G$. An edge of $B$ is called an \textbf{exterior edge} if it lies on the boundary of a non-triangular face of $G$. Otherwise, we call it an \textbf{interior edge}. An endvertex of an exterior edge is called an \textbf{exterior vertex}. We denote the sets of all exterior and interior edges of $B$ by $Ext(B)$ and $Int(B)$ respectively. For $e\in Ext(B)$, a non-triangular face of $G$ with $e$ on its boundary is called the \textbf{exterior face} of $e$.
\end{enumerate} \end{definition}
Notice that an exterior edge of a non-trivial triangular-block has exactly one exterior face. On the other hand, if $G$ is a $2$-connected plane graph, then every trivial triangular-block has two exterior faces. For a non-trivial triangular-block $B$ of a plane graph $G$, we call a path $P=v_1v_2v_3\dots v_k$ an \textit{exterior path} of $B$ if $v_1$ and $v_k$ are junction vertices, $v_iv_{i+1}$ is an exterior edge of $B$ for each $i\in\{1,2,\dots,k-1\}$, and $v_j$ is not a junction vertex for any $j\in\{2,3,\dots,k-1\}$. The face of $G$ having $P$ on its boundary is called the \textit{exterior face} of $P$.
Next, we define the contribution of a vertex and of an edge to the numbers of vertices and faces of a $C_6$-free plane graph $G$. All graphs discussed from now on are $C_6$-free plane graphs. \begin{definition} Let $G$ be a plane graph, $B$ be a triangular-block in $G$ and $v\in V(B)$. The contribution of $v$ to the vertex number of $B$ is denoted by $n_B(v)$, and is defined as \begin{equation*}
n_B(v) = \dfrac{1}{\#~\mbox{triangular-blocks in $G$ containing $v$}}. \end{equation*} We define the contribution of $B$ to the number of vertices of $G$ as $n(B)=\sum\limits_{v\in V(B)}n_B(v)$. \end{definition} Obviously, $v(G)=\sum\limits_{B\in \mathcal{B}}n(B)$, where $v(G)$ is the number of vertices in $G$ and $\mathcal{B}$ is the family of triangular-blocks of $G$.
Let $B_{K^{-}_5}$ be a triangular-block of $G$ isomorphic to a $B_{5,a}$ with exterior vertices $v_1,v_2,v_3$, where $v_1$ and $v_3$ are junction vertices, see Figure \ref{fig:11to9} for an example. Let $F$ be a face in $G$ such that $V(F)$ contains all exterior vertices $v_{1,1}, \dots, v_{1,m}, v_{2,1}, \dots, v_{2,m}, v_{3,1}, \dots, v_{3,m}$ of $m$ $(m\geq 1)$ copies of $B_{K^-_5}$, such that $v_{1,i}, v_{2,i}, v_{3,i}$ are the exterior vertices of the $i$-th $B_{K^{-}_5}$ and $v_{1,i}$, $v_{3,i}$ $(1\leq i\leq m)$ are junction vertices. Let $C_F$ denote the cycle associated with the face $F$. We alter $E(C_F)$ in the following way: \begin{align*}
E(C_F'):=E(C_F)\setminus\bigcup_{i=1}^{m}\{v_{1,i}v_{2,i},\, v_{2,i}v_{3,i}\}\cup\bigcup_{i=1}^{m}\{v_{1,i}v_{3,i}\}. \end{align*}
Hence the modified length of $F$ is $|E(C_F')|=|E(C_F)|-m$. For example, in Figure \ref{fig:11to9}, $|E(C_F)|=11$ but $|E(C_F')|=9$.
\begin{figure}
\caption{An example of a face containing all the exterior vertices of at least one $B_{K^-_5}$.}
\label{fig:11to9}
\end{figure}
Now we are able to define the \textbf{contribution} of an ``\textit{edge}'' to the number of faces of a $C_6$-free plane graph $G$. \begin{definition}\label{bvvc}
Let $F$ be an exterior face of $G$ and $C_F:=\{e_1,e_2,\dots, e_k\}$ be the cycle associated with $F$. The contribution of an exterior edge $e$ to the face number of the exterior face $F$ is denoted by $f_F(e)$, and is defined as follows.
\begin{enumerate}[label=(\roman*)]
\item If $e_1$ and $e_2$ are adjacent exterior edges of a $B_{K_5^-}$, then $f_F(e_1)+f_F(e_2)=\dfrac{1}{|C_F'|}$ and $f_F(e_i)=\dfrac{1}{|C_F'|}$ for $i\in\{3,4,\dots,k\}$.
\item Otherwise, $f_F(e)=\dfrac{1}{|C_F|}$.
\end{enumerate} \end{definition}
Note that $\sum\limits_{e\in E(F)}f_F(e)=1$. For a triangular-block $B$, the total face contribution of $B$ is denoted by $f(B)$ and defined as $f(B)=(\#~\mbox{interior faces of $B$}) + \sum\limits_{e\in Ext(B)}f_F(e)$, where $F$ is the exterior face of $B$ with respect to $e$. Obviously, $f(G)=\sum\limits_{B\in\mathcal{B}}f(B)$, where $f(G)$ is the number of faces of $G$.
\section{Proof of Theorem~\ref{thm:main}} We begin by outlining our proof. Let $f$, $n$, and $e$ be the number of faces, vertices, and edges of $G$ respectively. Let $\mathcal{B}$ be the family of all triangular-blocks of $G$.
The main target of the proof is to show that \begin{align}
7 f + 2 n - 5 e \leq 0\label{eq:main}. \end{align}
Once we show \eqref{eq:main}, then by using Euler's Formula, $e=f+n-2$, we can finish the proof of Theorem~\ref{thm:main}. To prove \eqref{eq:main}, we show the existence of a partition $\mathcal{P}_1, \mathcal{P}_2,\dots,\mathcal{P}_m$ of $\mathcal{B}$ such that $7\sum\limits_{B\in \mathcal{P}_i}f(B)+2\sum\limits_{B\in\mathcal{P}_i}n(B)-5\sum\limits_{B\in \mathcal{P}_i}e(B)\leq 0$, for all $i\in\{1,2,3\dots,m\}$. Since $f=\sum\limits_{B\in \mathcal{B}}f(B)$, $n=\sum\limits_{B\in\mathcal{B}}n(B)$ and $e=\sum\limits_{B\in \mathcal{B}}e(B)$ we have \begin{align*}
7 f + 2 n - 5 e
&= 7\sum\limits_{i=1}^{m}\sum\limits_{B\in\mathcal{P}_i}f(B) + 2\sum\limits_{i=1}^{m}\sum\limits_{B\in\mathcal{P}_i}n(B) - 5\sum\limits_{i=1}^{m}\sum\limits_{B\in\mathcal{P}_i}e(B)\\
&= \sum\limits_{i=1}^{m}\bigg(7\sum\limits_{B\in\mathcal{P}_i}f(B) + 2\sum\limits_{B\in\mathcal{P}_i}n(B) - 5\sum\limits_{B\in\mathcal{P}_i}e(B)\bigg)\leq 0. \end{align*}
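Spelling out the final step of this outline: combining \eqref{eq:main} with Euler's formula gives the desired edge bound directly (a short derivation, using $f=e-n+2$ as above):

```latex
% Deduction of the edge bound from (eq:main) and Euler's formula e = f + n - 2:
\begin{align*}
0 \;\geq\; 7f + 2n - 5e \;=\; 7(e - n + 2) + 2n - 5e \;=\; 2e - 5n + 14,
\end{align*}
% that is, e <= (5/2)n - 7, equivalently 5n - 2e >= 14.
```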
The following proposition will be useful in many lemmas. \begin{proposition}\label{prop:faces}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$.
\begin{enumerate}[label=(\roman*)]
\item If $B$ is a nontrivial triangular-block (that is, not $B_2$), then none of the exterior faces can have length $5$. \label{it:faces:no5face}
\item If $B$ is in $\{B_{5,a},B_{5,b},B_{5,c},B_{4,a}\}$, then none of the exterior faces can have length $4$. \label{it:faces:no4face}
\item If $B$ is in $\{B_{5,d},B_{4,b}\}$ and an exterior face of $B$ has length $4$, then that $4$-face must share a $2$-path with $B$ (shown in blue in Figures~\ref{fig:B5d} and~\ref{fig:B4b}) and the other edges of the face must be in trivial triangular-blocks. \label{it:faces:yes4face}
\item No two $4$-faces can be adjacent to each other. \label{it:faces:two4faces}
\end{enumerate} \end{proposition}
\begin{proof}
\begin{itemize}
\item[\ref{it:faces:no5face}] Observe that any pair of consecutive exterior vertices of a nontrivial triangular-block has a path of length $2$ (counted by the number of edges) between them and any pair of nonconsecutive exterior vertices has a path of length $3$ between them. So having a face of length $5$ incident to this triangular-block would yield a $C_6$, a contradiction.
\item[\ref{it:faces:no4face}] If $B$ is in $\{B_{5,a},B_{5,b},B_{5,c},B_{4,a}\}$, then any pair of consecutive exterior vertices of the listed triangular-blocks has a path of length $3$ between them. It remains to consider nonconsecutive vertices for $\{B_{5,b},B_{5,c}\}$. For $B_{5,b}$ each pair of nonconsecutive exterior vertices has a path of length $3$ between them. In the case where $B$ is $B_{5,c}$, this is true for all pairs without an edge between them. As for the other pairs, if they are in the same $4$-face, then at least one of the degree-$2$ vertices in $B$ must have degree $2$ in $G$, a contradiction.
\item[\ref{it:faces:yes4face}] In both $B_{5,d}$ and $B_{4,b}$, any pair of consecutive exterior vertices has a path of length $3$ between them. For $B_{5,d}$, in Figure~\ref{fig:B5d}, we see that there is a path of length $4$ between $v_2$ and $v_4$, so the only way a $4$-face can be adjacent to $B$ is via a $2$-path with endvertices $v_1$ and $v_3$. In fact, because there is no vertex of degree $2$, the path must be $v_1v_4v_3$. For $B_{4,b}$, in Figure~\ref{fig:B4b}, we see that because $B$ cannot have a vertex of degree $2$, the $4$-face and $B$ cannot share the path $v_2v_1v_4$ or the path $v_2v_3v_4$. Thus the only paths that can share a boundary with a $4$-face are $v_1v_4v_3$ and $v_1v_2v_3$.
As for the other blocks that form edges of such a $4$-face: in Figure~\ref{fig:4face}, we see that if, say, $v_1u$ is in a nontrivial triangular-block, then there is a vertex $w$ in that block, in which case $wv_1xv_4v_3uw$ forms a $6$-cycle, a contradiction.
\item[\ref{it:faces:two4faces}] If two $4$-faces share an edge, then there is a $6$-cycle formed by deleting that edge. If two $4$-faces share a $2$-path, then the midpoint of that path is a vertex of degree $2$ in $G$. In both cases, a contradiction.
\end{itemize} \end{proof}
\begin{figure}
\caption{Proposition~\ref{prop:faces}\ref{it:faces:yes4face}: The blocks defined by blue edges must be trivial.}
\label{fig:4face}
\end{figure}
To show the existence of such a partition we need the following lemmas. \begin{lemma}\label{lem:easyblock}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$. If $B$ is a triangular-block in $G$ such that $B\notin \{B_{5,d}, B_{4,b}\}$, then $7 f(B) + 2 n(B) - 5 e(B) \leq 0$. \end{lemma}
\begin{proof} We separate the proof into several cases.
\subsubsection*{Case 1: $B$ is $B_{5,a}$.}
Let $v_1$, $v_2$ and $v_3$ be the exterior vertices of $K^{-}_5$. At least two of them must be junction vertices, since otherwise $G$ contains a cut vertex. We consider two possibilities.
\begin{enumerate}[label=(\alph*)]
\item Let $B$ be $B_{5,a}$ with $3$ junction vertices (see Figure \ref{fig:B5a}(a)). By Proposition~\ref{prop:faces}, every exterior edge in $B$ is contained in an exterior face with length at least $7$. Thus, $f(B) = (\#~\mbox{interior faces of $B$}) + \sum\limits_{e\in Ext(B)} f_F(e) \leq 5 + 3/7$. Moreover, every junction vertex is contained in at least $2$ triangular-blocks, so we have $n(B)\leq 2 + 3/2$. With $e(B)=9$, we obtain $7f(B)+2n(B)-5e(B)\leq 0$. \label{case1a}
\item Let $B$ be $B_{5,a}$ with $2$ junction vertices, say $v_2$ and $v_3$ (see Figure \ref{fig:B5a}(b)). Let $F$ and $F_1$ be the exterior faces of the exterior edge $v_2v_3$ and of the exterior path $v_2v_1v_3$ of the triangular-block, respectively. Notice that $v_1v_2$ and $v_1v_3$ are adjacent exterior edges in the same face $F_1$, hence $|C(F_1)|\geq 8$. By Definition \ref{bvvc}, we have $f_{F_1}(v_1v_2)+f_{F_1}(v_1v_3) \leq 1/7$. Because there can be no $C_6$, one can see that regardless of the configuration of the $B_{K_5^-}$, it is the case that $f_F(v_2v_3) \leq 1/7$. Thus, $f(B)\leq 5 + 2/7$. Moreover, since $v_2$ and $v_3$ are contained in at least $2$ triangular-blocks, we have $n(B) \leq 3 + 2/2$. With $e(B)=9$, we obtain $7f(B)+2n(B)-5e(B)\leq 0$. \label{case1b}
\begin{figure}
\caption{A $B_{5,a}$ triangular-block with $3$ and $2$ junction vertices, respectively.}
\label{fig:B5a}
\end{figure}
\end{enumerate}
\subsubsection*{Case $2$: $B$ is in $\{B_{4,a},B_{5,b},B_{5,c}\}$.}
\begin{enumerate}[label=(\alph*)]
\item Let $B$ be a $B_{4,a}$. By Proposition~\ref{prop:faces}, each face incident to this triangular-block has length at least $7$. So, $f(B) \leq 3 + 3/7$. Because there is no cut-vertex, this triangular-block must have at least two junction vertices, hence $n(B) \leq 2 + 2/2$. With $e(B) = 6$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq 0$. \label{case2a}
\item Let $B$ be a $B_{5,b}$. There are $4$ faces inside the triangular-block and each face incident to this triangular-block has length at least $7$. So, $f(B) \leq 4 + 4/7$. Because there is no cut-vertex, this triangular-block must have at least two junction vertices, hence $n(B) \leq 3 + 2/2$. With $e(B) = 8$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq 0$, as seen in Table~\ref{tab:5block}. \label{case2b}
\item Let $B$ be a $B_{5,c}$. Similarly, $f(B) \leq 3 + 5/7$ and because there are at least two junction vertices, $n(B) \leq 3 + 2/2$. With $e(B) = 7$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -1$. \label{case2c}
\end{enumerate}
\subsubsection*{Case $3$: $B$ is $B_{3}$.}
Let $v_1$, $v_2$ and $v_3$ be the exterior vertices of triangular-block $B$. Each of these three must be junction vertices since there is no degree $2$ vertex in $G$, which implies that each is contained in at least $2$ triangular-blocks. We consider two possibilities:
\begin{enumerate}[label=(\alph*)]
\item Let the three exterior vertices be contained in exactly $2$ triangular-blocks. By Proposition~\ref{prop:faces}\ref{it:faces:no5face}, the length of each exterior face is either $4$ or at least $7$. We want to show that at most one exterior face has length $4$.
If not, then let $x_1$ be a vertex that is in two such faces; without loss of generality let $x_1=v_1$. Consider the triangular-block incident to $B$ at $x_1$, call it $B'$. By Proposition~\ref{prop:faces}, $B'$ is not in $\{B_{5,a},B_{5,b},B_{5,c},B_{4,a}\}$.
If $B'$ is in $\{B_{5,d},B_{4,b},B_3\}$, then that triangular-block has vertices $\ell_2,\ell_3$, each adjacent to $x_1$, and the length-$4$ faces consist of $\{v_1,\ell_2,m_2,v_2\}$ and $\{v_1,\ell_3,m_3,v_3\}$. Either $\ell_2\sim\ell_3$ (in which case $\ell_2m_2v_2v_3m_3\ell_3\ell_2$ is a $6$-cycle, see Figure~\ref{fig:B3}(a)) or there is an $\ell'$ distinct from $v_1$ that is adjacent to both $\ell_2$ and $\ell_3$ (in which case $\ell'\ell_2m_2v_2v_1\ell_3\ell'$ is a $6$-cycle, see Figure~\ref{fig:B3}(b)).
If $B'$ is $B_2$, then the trivial triangular-block is $\{v_1,\ell\}$, in which case $\ell m_2v_2v_1v_3m_3\ell$ is a $C_6$, see Figure~\ref{fig:B3}(c). Thus, we may conclude that if each of the three exterior vertices is in exactly $2$ triangular-blocks, then $f(B) \leq 1 + 2/7 + 1/4$ and $n(B) \leq 3/2$. With $e(B) = 3$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -5/4$.\label{case3a}
\begin{figure}
\caption{A $B_3$ triangular-block, $B$ and the various cases of what must occur if $B$ is incident to two $4$-faces.}
\label{fig:B3}
\end{figure}
\item Let at least one exterior vertex be contained in at least $3$ triangular-blocks and the other two be contained in at least $2$ triangular-blocks each. In this case, we have $f(B) \leq 1 + 3/4$ and $n(B) \leq 2/2 + 1/3$. With $e(B) = 3$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -1/12$.\label{case3b}
\end{enumerate}
\subsubsection*{Case $4$: $B$ is $B_{2}$.}
Note that since there is no vertex of degree $2$, if an endvertex of $B$ is in exactly two triangular-blocks, then the other block containing it cannot be a $B_2$. We consider three possibilities:
\begin{enumerate}[label=(\alph*)]
\item Let each endvertex be contained in exactly $2$ triangular-blocks. Since neither of the triangular-blocks incident to $B$ can be trivial, they cannot be incident to a face of length $5$ by Proposition~\ref{prop:faces}\ref{it:faces:no5face}. Thus, $B$ cannot be incident to a face of length $5$. Moreover, the two faces incident to $B$ cannot both be of length $4$, again by Proposition~\ref{prop:faces}\ref{it:faces:two4faces}. Hence, $f(B) \leq 1/4 + 1/7$. Clearly $n(B) \leq 2/2$ and with $e(B) = 1$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -1/4$.\label{case4a}
\item Let one endvertex be contained in exactly $2$ triangular-blocks and the other endvertex be contained in at least $3$ triangular-blocks. This is similar to case \ref{case4a} in that neither face can have length $5$ and they cannot both have length $4$. The only difference is that $n(B) \leq 1/2 + 1/3$ and so $7 f(B) + 2 n(B) - 5 e(B) \leq -7/12$.\label{case4b}
\item Let each endvertex be contained in at least $3$ triangular-blocks. The two faces cannot both be of length $4$ by Proposition~\ref{prop:faces}\ref{it:faces:two4faces}. Hence, $f(B) \leq 1/4 + 1/5$ and $n(B) \leq 2/3$. With $e(B) = 1$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -31/60$.\label{case4c}
\end{enumerate} \end{proof}
\begin{lemma}\label{lem:B5d}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$. If $B$ is $B_{5,d}$, then $7 f(B) + 2 n(B) - 5 e(B) \leq 1/2$. Moreover, $7 f(B) + 2 n(B) - 5 e(B) \leq 0$ unless $B$ shares a $2$-path with a $4$-face. \end{lemma}
\begin{figure}
\caption{A $B_{5,d}$ triangular-block and how a $4$-face must be incident to it.}
\label{fig:B5d}
\end{figure}
\begin{proof} Let $B$ be $B_{5,d}$ with vertices $v_1$, $v_2$, $v_3$, $v_4$, and $v_5$, as shown in Figure~\ref{fig:B5d}(a). By Proposition~\ref{prop:faces}\ref{it:faces:no5face}, no exterior face of $B$ can have length $5$. By Proposition~\ref{prop:faces}\ref{it:faces:yes4face}, if there is an exterior face of $B$ that has length $4$, this $4$-face must contain the path $v_1v_4v_3$.
Moreover, since there is no vertex of degree $2$, $v_2$ is a junction vertex. Because $G$ has no cut-vertex, there is at least one other junction vertex. We may consider the following cases:
\begin{enumerate}[label=(\alph*)]
\item Let $v_4$ be a junction vertex. This prevents an exterior face of length $4$. Thus, each exterior face has length at least $7$. Hence, $f(B) \leq 4 + 4/7$ and $n(B) \leq 3 + 2/2$. With $e(B) = 8$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq 0$. \label{case5a}
\item Let $v_4$ fail to be a junction vertex and exactly one of $v_1,v_3$ be a junction vertex. Without loss of generality let it be $v_3$. In this case, again, each exterior face has length\footnote{In fact, it can be shown that the length of the exterior face containing the path $v_2v_1v_4v_3$ is at least $9$. This yields $f(B) \leq 4 + 1/7 + 3/9$ and $7 f(B) + 2 n(B) - 5 e(B) \leq -2/3$. However, this precision is unnecessary.} at least $7$. Again, $f(B) \leq 4 + 4/7$ and $n(B) \leq 3 + 2/2$. With $e(B) = 8$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq 0$. \label{case5b}
\item Let $v_4$ fail to be a junction vertex and both $v_1$ and $v_3$ be junction vertices. Here either the exterior path $v_1v_4v_3$ is part of an exterior face of length $4$, or each exterior edge must be in a face of length at least $7$.
If the exterior face is of length at least $7$, then $f(B) \leq 4 + 4/7$, otherwise $f(B) \leq 4 + 2/4 + 2/7$. In both cases, $n(B) \leq 2 + 3/2$ and $e(B) = 8$. Hence we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -1$ in the first instance and $7 f(B) + 2 n(B) - 5 e(B) \leq 1/2$ in the case where $B$ is incident to a $4$-face.\label{case5c} \end{enumerate} \end{proof}
\begin{lemma}\label{lem:B4b}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$. If $B$ is $B_{4,b}$, then $7 f(B) + 2 n(B) - 5 e(B) \leq 4/3$. Moreover, $7 f(B) + 2 n(B) - 5 e(B) \leq 1/6$ if $B$ shares a $2$-path with exactly one $4$-face and $7 f(B) + 2 n(B) - 5 e(B) \leq 0$ if $B$ fails to share a $2$-path with any $4$-face. \end{lemma}
\begin{figure}
\caption{A $B_{4,b}$ triangular-block and how a $4$-face must be incident to it.}
\label{fig:B4b}
\end{figure}
\begin{proof} Let $B$ be $B_{4,b}$ with vertices $v_1$, $v_2$, $v_3$, and $v_4$, as shown in Figure~\ref{fig:B4b}(a). By Proposition~\ref{prop:faces}\ref{it:faces:no5face}, no exterior face of $B$ can have length $5$. If there is an exterior face of $B$ that has length $4$, it is easy to verify that being $C_6$-free and having no vertex of degree $2$ means that the junction vertices must be $v_1$ and $v_3$. We may consider the following cases.
\begin{enumerate}[label=(\alph*)]
\item Let either $v_2$ or $v_4$ be a junction vertex and, without loss of generality, let it be $v_2$. All the exterior faces have length at least $7$ except for the possibility that the path $v_1v_4v_3$ may form two sides of a $4$-face. Hence, $f(B) \leq 2 + 2/4 + 2/7$ and $n(B) \leq 1 + 3/2$. With $e(B) = 5$, we obtain $7 f(B) + 2 n(B) - 5 e(B) \leq -1/2$. \label{case6a}
\item Let neither $v_2$ nor $v_4$ be a junction vertex. Because there is no cut-vertex, this requires both $v_1$ and $v_3$ to be junction vertices. Hence, there are two exterior faces: one shares the exterior path $v_1v_4v_3$ and the other shares the exterior path $v_1v_2v_3$. Each exterior face has length either $4$ or at least $7$. We consider several subcases: \label{case6b}
\begin{enumerate}[label=(\roman*)]
\item If both faces are of length at least $7$, then $f(B) \leq 2 + 4/7$, and $n(B) \leq 2 + 2/2$. With $e(B) = 5$, we obtain $7 f(B) + 2 n(B) - 5 e(B)\leq -1$. \label{case6bi}
\item If only one of the exterior faces is of length $4$, then $f(B) \leq 2 + 2/7 + 2/4$. Moreover, at least one of $v_1$, $v_3$ must be a junction vertex for more than two triangular-blocks, otherwise either $v(G) = 5$ or the vertex incident to two blue edges in Figure~\ref{fig:B4b}(b) is a cut-vertex. Hence, $n(B) \leq 2 + 1/3 + 1/2$ and with $e(B) = 5$, we have $7 f(B) + 2 n(B) - 5 e(B) \leq 1/6$. \label{case6bii}
\item Both exterior faces are of length $4$. Thus $f(B) \leq 2 + 4/4$. By Proposition~\ref{prop:faces}\ref{it:faces:yes4face}, the blocks represented by the blue edges in Figure~\ref{fig:B4b}(c) are each trivial. Hence $n(B) \leq 2 + 2/3$. With $e(B)=5$, we get $7 f(B) + 2 n(B) - 5 e(B) \leq 4/3$. \label{case6biii}
\end{enumerate} \end{enumerate} \end{proof}
Tables~\ref{tab:5block} and~\ref{tab:432block} in Appendix~\ref{appendix} give a summary of Lemmas~\ref{lem:easyblock}, \ref{lem:B5d}, and \ref{lem:B4b}.
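As a sanity check on the case analysis in Lemmas~\ref{lem:easyblock}, \ref{lem:B5d}, and \ref{lem:B4b}, the quantity $7f(B)+2n(B)-5e(B)$ can be re-evaluated with exact rational arithmetic. The sketch below (not part of the proof) lists, for each case, the upper bounds on $f(B)$ and $n(B)$ and the value of $e(B)$ used above, and checks the claimed conclusion:

```python
from fractions import Fraction as F

# (case label, bound on f(B), bound on n(B), e(B), claimed bound on 7f + 2n - 5e)
cases = [
    ("1a   B_{5,a}", 5 + F(3, 7),           2 + F(3, 2),           9, F(0)),
    ("1b   B_{5,a}", 5 + F(2, 7),           3 + F(2, 2),           9, F(0)),
    ("2a   B_{4,a}", 3 + F(3, 7),           2 + F(2, 2),           6, F(0)),
    ("2b   B_{5,b}", 4 + F(4, 7),           3 + F(2, 2),           8, F(0)),
    ("2c   B_{5,c}", 3 + F(5, 7),           3 + F(2, 2),           7, F(-1)),
    ("3a   B_3",     1 + F(2, 7) + F(1, 4), F(3, 2),               3, F(-5, 4)),
    ("3b   B_3",     1 + F(3, 4),           F(2, 2) + F(1, 3),     3, F(-1, 12)),
    ("4a   B_2",     F(1, 4) + F(1, 7),     F(2, 2),               1, F(-1, 4)),
    ("4b   B_2",     F(1, 4) + F(1, 7),     F(1, 2) + F(1, 3),     1, F(-7, 12)),
    ("4c   B_2",     F(1, 4) + F(1, 5),     F(2, 3),               1, F(-31, 60)),
    ("5a   B_{5,d}", 4 + F(4, 7),           3 + F(2, 2),           8, F(0)),
    ("5c   B_{5,d}", 4 + F(2, 4) + F(2, 7), 2 + F(3, 2),           8, F(1, 2)),
    ("6a   B_{4,b}", 2 + F(2, 4) + F(2, 7), 1 + F(3, 2),           5, F(-1, 2)),
    ("6bi  B_{4,b}", 2 + F(4, 7),           2 + F(2, 2),           5, F(-1)),
    ("6bii B_{4,b}", 2 + F(2, 7) + F(2, 4), 2 + F(1, 3) + F(1, 2), 5, F(1, 6)),
    ("6biii B_{4,b}", 2 + F(4, 4),          2 + F(2, 3),           5, F(4, 3)),
]
for label, f, n, e, bound in cases:
    value = 7 * f + 2 * n - 5 * e
    assert value <= bound, (label, value)
```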
\begin{lemma}\label{lem:partition}
Let $G$ be a $2$-connected, $C_6$-free plane graph on $n$ $(n\geq 6)$ vertices with $\delta(G)\geq 3$. Then the triangular-blocks of $G$ can be partitioned into sets, $\mathcal{P}_1$, $\mathcal{P}_2$,\dots, $\mathcal{P}_m$ such that $7\sum\limits_{B\in\mathcal{P}_i}f(B)+2\sum\limits_{B\in\mathcal{P}_i}n(B)-5\sum\limits_{B\in\mathcal{P}_i}e(B)\leq 0$ for all $i\in[m]$. \end{lemma}
\begin{proof} As can be seen from Tables~\ref{tab:5block} and~\ref{tab:432block} in Appendix~\ref{appendix}, there are three possible cases in which $7 f(B) + 2 n(B) - 5e(B)$ assumes a positive value. We deal with each of these blocks as follows. \begin{enumerate}[label=(\arabic*)]
\begin{figure}
\caption{Structure of a $B_{5,d}$ if it is incident to a $4$-face, as in Lemma~\ref{lem:partition}. The triangular-blocks $B'$ and $B''$ are trivial.}
\label{fig:B5dPartition}
\end{figure}
\item Let $B$ be a $B_{5,d}$ triangular-block as described in the proof of Lemma \ref{lem:B5d}\ref{case5c}. See Figure~\ref{fig:B5dPartition}.
By Proposition~\ref{prop:faces}\ref{it:faces:yes4face}, the edges $v_1u$ and $v_3u$ are trivial triangular-blocks. Denote these triangular-blocks as $B'$ and $B''$. Consider $B'$. One of the exterior faces of $B'$ has length $4$, whereas by Proposition~\ref{prop:faces}\ref{it:faces:two4faces}, the other has length at least $5$. It must have length at least $7$ because if it had length $5$, then the path $v_1v_3u$ would complete it to a $6$-cycle. Thus, $f(B') \leq 1/4 + 1/7$. Since the vertex $u$ cannot be of degree $2$, it is shared by at least three triangular-blocks. Thus, $n(B') \leq 1/2 + 1/3$. With $e(B') = 1$, we obtain $7 f(B') + 2 n(B') - 5 e(B') \leq -7/12$ and similarly, $7 f(B'') + 2 n(B'') - 5 e(B'') \leq -7/12$. Define $\mathcal{P}'=\{B,B',B''\}$. Thus, $7 \sum\limits_{B^*\in \mathcal{P}'} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}'} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}'} e(B^*) \leq 1/2 + 2(-7/12) = - 2/3$.
Therefore, each triangular-block in $G$ as described in Lemma \ref{lem:B5d}\ref{case5c} belongs to a set $\mathcal{P'}$ of three triangular-blocks such that $7 \sum\limits_{B^*\in \mathcal{P}'} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}'} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}'} e(B^*) \leq 0$.
Denote such sets as $\mathcal{P}_1,\mathcal{P}_2,\dots,\mathcal{P}_{m_1}$ if they exist.
\begin{figure}
\caption{Structure of a $B_{4,b}$ triangular-block if it is incident to a $4$-face, as in Lemma~\ref{lem:partition}. The triangular-blocks $B'$, $B''$, $B'''$, and $B''''$ are all trivial.}
\label{fig:B4bPartition}
\end{figure}
\item Let $B$ be a $B_{4,b}$ triangular-block as described in the proof of Lemma~\ref{lem:B4b}\ref{case6b}\ref{case6bii}. See Figure~\ref{fig:B4bPartition}(a).
By Proposition~\ref{prop:faces}\ref{it:faces:yes4face}, the edges $v_1u_1$ and $v_3u_1$ are trivial triangular-blocks. Denote them as $B'$ and $B''$, respectively. Consider $B'$. One of the exterior faces of $B'$ has length $4$ and by Proposition~\ref{prop:faces}\ref{it:faces:two4faces}, the other has length at least $5$. Thus, $f(B') \leq 1/4 + 1/5$. Since the vertex $u_1$ cannot be of degree $2$, then this vertex is shared in at least three triangular-blocks. Thus, $n(B') \leq 1/2 + 1/3$. With $e(B') = 1$, we obtain $7 f(B') + 2 n(B') - 5 e(B') \leq -11/60$ and similarly, $7 f(B'') + 2 n(B'') - 5 e(B'') \leq -11/60$. Define $\mathcal{P}''=\{B,B',B''\}$. Thus, $7 \sum\limits_{B^*\in \mathcal{P}''} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}''} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}''} e(B^*) \leq 1/6 + 2(-11/60) = -1/5$.
Therefore, each triangular-block in $G$ as described in Lemma \ref{lem:B4b}\ref{case6b}\ref{case6bii} belongs to a set $\mathcal{P''}$ of three triangular-blocks such that $7 \sum\limits_{B^*\in \mathcal{P}''} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}''} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}''} e(B^*) \leq 0$.
Denote such sets as $\mathcal{P}_{m_1+1},\mathcal{P}_{m_1+2},\dots,\mathcal{P}_{m_2}$ if they exist.
\item Let $B$ be a $B_{4,b}$ triangular-block as described in the proof of Lemma~\ref{lem:B4b}\ref{case6b}\ref{case6biii}. See Figure~\ref{fig:B4bPartition}(b).
By Proposition~\ref{prop:faces}\ref{it:faces:yes4face}, the edges $v_1u_1$, $v_3u_1$, $v_1u_2$, and $v_3u_2$ are trivial triangular-blocks. Denote them as $B'$, $B''$, $B'''$ and $B''''$ respectively. Consider $B'$. One of the exterior faces of $B'$ has length $4$ whereas the other has length at least $5$. Thus, $f(B') \leq 1/4 + 1/5$. Since the vertex $u_1$ cannot be of degree $2$, then this vertex is shared in at least three triangular-blocks. Clearly $v_1$ is in at least three triangular-blocks. Thus, $n(B') \leq 2/3$. With $e(B') = 1$, we obtain $7 f(B') + 2 n(B') - 5 e(B') \leq -31/60$ and the same inequality holds for $B''$, $B'''$, and $B''''$.
Define $\mathcal{P}'''=\{B,B',B'',B''',B''''\}$. Thus, $7 \sum\limits_{B^*\in \mathcal{P}'''} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}'''} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}'''} e(B^*) \leq 4/3 + 4(-31/60) = -11/15$.
Therefore, each triangular-block in $G$ as described in Lemma \ref{lem:B4b}\ref{case6b}\ref{case6biii} belongs to a set $\mathcal{P}'''$ of five triangular-blocks such that $7 \sum\limits_{B^*\in \mathcal{P}'''} f(B^*) + 2 \sum\limits_{B^*\in \mathcal{P}'''} n(B^*) - 5 \sum\limits_{B^*\in \mathcal{P}'''} e(B^*) \leq 0$.
Denote such sets as $\mathcal{P}_{m_2+1},\mathcal{P}_{m_2+2},\dots,\mathcal{P}_{m_3}$ if they exist. \end{enumerate}
Now define $\mathcal{P}_{m_3+1}=\mathcal{B}-\bigcup\limits_{i=1}^{m_3}\mathcal{P}_i$, where $\mathcal{B}$ is the set of all blocks of $G$. Clearly, for each block $B\in \mathcal{P}_{m_3+1}$, $7f(B)+2n(B)-5e(B)\leq 0$. Thus, $7\sum\limits_{B\in \mathcal{P}_{m_3+1}}f(B)+2\sum\limits_{B\in \mathcal{P}_{m_3+1}}n(B)-5\sum\limits_{B\in \mathcal{P}_{m_3+1}}e(B)\leq 0$. Putting $m:=m_3+1$, we obtain the partition $\mathcal{P}_1,\mathcal{P}_2,\dots,\mathcal{P}_{m}$ of $\mathcal{B}$ meeting the condition of the lemma. \end{proof} This completes the proof of Theorem~\ref{thm:main}.
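The three groupings used above can likewise be checked with exact arithmetic (a verification sketch using the bounds from the lemmas, not part of the proof):

```python
from fractions import Fraction as F

# (bound for the central block, bound for each attached trivial block, number attached)
groups = [
    (F(1, 2),  F(-7, 12),  2),  # B_{5,d} incident to a 4-face
    (F(1, 6),  F(-11, 60), 2),  # B_{4,b} incident to exactly one 4-face
    (F(4, 3),  F(-31, 60), 4),  # B_{4,b} incident to two 4-faces
]
totals = [center + count * trivial for center, trivial, count in groups]
assert totals == [F(-2, 3), F(-1, 5), F(-11, 15)]  # matches -2/3, -1/5, -11/15
assert all(t <= 0 for t in totals)
```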
\section{Proof of Theorem~\ref{thm:main_new}}\label{strongproof}
Let $G$ be a $C_6$-free plane graph. We will show that either $5 v(G) - 2 e(G) \geq 14$ or $v(G) \leq 17$.
If we delete a vertex $x$ of degree at most $2$ from $G$, then \begin{align*}
5 v(G) - 2 e(G) &= 5 (v(G-x) + 1) - 2 (e(G-x) + \deg(x)) \\
&= 5 v(G-x) - 2 e(G-x) + 5 - 2 \deg(x) \\
&\geq 5 v(G-x) - 2 e(G-x) + 1. \end{align*}
So, by repeatedly deleting vertices of degree at most $2$, $G$ has an induced subgraph $G'$ with $\delta(G')\geq 3$ such that \begin{align}
5 v(G) - 2 e(G) \geq 5 v(G') - 2 e(G') + \left(v(G)-v(G')\right) \label{eq:main_new:GtoGp} \end{align} In line with usual graph theoretic terminology, we call a maximal $2$-connected subgraph a \textbf{block}. Let $\mathcal{B}'$ denote the set of blocks of $G'$ with the $i^{\rm th}$ block having $n_i$ vertices and $e_i$ edges. Let $b$ be the total number of blocks of $G'$. Specifically, let $b_2$, $b_3$, $b_4$, and $b_5$ denote the number of blocks of size $2$, $3$, $4$, and $5$, respectively. Let $b_6$ denote the number of blocks of size at least $6$. Then we have $b = b_6 + b_5 + b_4 + b_3 + b_2$ and, using Table~\ref{tab:5n2e5}: \begin{align}
5 v(G') - 2 e(G') &= 5 \left(\sum_{i=1}^b n_i - (b-1)\right) - 2 \sum_{i=1}^b e_i \nonumber \\
&= \sum_{i=1}^b \left(5n_i - 2e_i - 5\right) + 5 \nonumber \\
&\geq 9 b_6 + 2 b_5 + 3 b_4 + 4 b_3 + 3 b_2 + 5 \label{eq:main_new:blocks} \end{align}
\begin{table}
\centering
\begin{tabular}{|r|rl|l|}\hline
& \multicolumn{3}{l|}{$\min$ of $5 n - 2 e - 5$} \\ \hline\hline
$n\geq 6$ & $14-5\; \geq$ & 9 & Theorem~\ref{thm:main} \\ \hline
$n = 5$ & $5(5)-2(9)-5\; \geq$ & 2 & $B_{5,a}$, Figure~\ref{fig:5blocks} \\ \hline
$n = 4$ & $5(4)-2(6)-5\; \geq$ & 3 & $B_{4,a}$, Figure~\ref{fig:432blocks} \\ \hline
$n = 3$ & $5(3)-2(3)-5\; \geq$ & 4 & $B_3$, Figure~\ref{fig:432blocks} \\ \hline
$n = 2$ & $5(2)-2(1)-5\; \geq$ & 3 & $B_2$, Figure~\ref{fig:432blocks} \\ \hline
\end{tabular}
\caption{Estimates of $5n-2e-5$ for various block sizes.}
\label{tab:5n2e5} \end{table}
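The four lower rows of Table~\ref{tab:5n2e5} can be recovered from the vertex and edge counts of the densest admissible blocks on $2$ to $5$ vertices ($B_2$, $B_3$, $B_{4,a}$, $B_{5,a}$); a quick check (a sketch, not part of the proof):

```python
# edge counts of the densest C_6-free blocks: B_2, B_3, B_{4,a}, B_{5,a}
edges = {2: 1, 3: 3, 4: 6, 5: 9}
minima = {n: 5 * n - 2 * e - 5 for n, e in edges.items()}
assert minima == {2: 3, 3: 4, 4: 3, 5: 2}  # the table's lower bounds
```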
Combining \eqref{eq:main_new:GtoGp} and \eqref{eq:main_new:blocks}, we obtain \begin{align}
5 v(G) - 2 e(G) \geq 9 b_6 + 2 b_5 + 3 b_4 + 4 b_3 + 3 b_2 + 5 + \left(v(G)-v(G')\right) \label{eq:main_new:Gbound} \end{align}
If $b_6\geq 1$, then the right-hand side of \eqref{eq:main_new:Gbound} is at least $14$, as desired.
So, let us assume that $b_6=0$ and $b = b_5 + b_4 + b_3 + b_2$. Furthermore, \begin{align}
v(G') &= 5 b_5 + 4 b_4 + 3 b_3 + 2 b_2 - (b-1) \nonumber \\
&= 4 b_5 + 3 b_4 + 2 b_3 + b_2 + 1 . \label{eq:main_new:vGp} \end{align}
So, substituting $2 b_5$ from \eqref{eq:main_new:vGp} into \eqref{eq:main_new:Gbound}, we have \begin{align*}
5 v(G) - 2 e(G) &\geq 2 b_5 + 3 b_4 + 4 b_3 + 3 b_2 + 5 + \left(v(G)-v(G')\right) \\
&= \left(\frac{1}{2}v(G') - \frac{3}{2} b_4 - b_3 - \frac{1}{2} b_2 - \frac{1}{2}\right) + 3 b_4 + 4 b_3 + 3 b_2 + 5 + \left(v(G)-v(G')\right) \\
&= v(G) - \frac{1}{2}v(G') + \frac{3}{2} b_4 + 3 b_3 + \frac{5}{2} b_2 + \frac{9}{2} \\
&\geq \frac{1}{2}v(G) + \frac{9}{2} , \end{align*} which is strictly larger than $13$ if $v(G)\geq 18$. Since $5 v(G) - 2 e(G)$ is an integer, it is at least $14$ and this completes the proof of Theorem~\ref{thm:main_new}.
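The substitution above is routine but easy to get wrong, so the identity behind the last display can be spot-checked by plugging in arbitrary nonnegative block counts (a verification sketch; `extra` stands for $v(G)-v(G')\geq 0$):

```python
from fractions import Fraction as F
from itertools import product

for b5, b4, b3, b2, extra in product(range(4), repeat=5):
    vGp = 4 * b5 + 3 * b4 + 2 * b3 + b2 + 1                   # equation for v(G')
    vG = vGp + extra                                           # v(G) >= v(G')
    lhs = 2 * b5 + 3 * b4 + 4 * b3 + 3 * b2 + 5 + (vG - vGp)  # first line of the display
    rhs = vG - F(vGp, 2) + F(3, 2) * b4 + 3 * b3 + F(5, 2) * b2 + F(9, 2)
    assert lhs == rhs                                          # the substitution is exact
    assert rhs >= F(vG, 2) + F(9, 2)                           # the final lower bound
```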
\begin{remark}
Observe that for $n\geq 17$, the only graphs on $n$ vertices with $e$ edges such that $e > (5/2)n - 7$ have blocks of order $5$ or less and by \eqref{eq:main_new:Gbound}, there are at most $4$ such triangular blocks. A bit of analysis shows that the maximum number of edges is achieved when the number of blocks of order $5$ is as large as possible. \end{remark}
\section{Conclusions}
We note that the proof of Theorem~\ref{thm:main}, particularly Lemma~\ref{lem:partition}, can be rephrased in terms of a discharging argument.
We believe that our construction in Theorem~\ref{thm:construct} can be generalized to determine ${\rm ex}_{\mathcal{P}}(n,C_{\ell})$ for $\ell$ sufficiently large. That is, for certain values of $n$, we try to construct $G_0$, a plane graph in which every face has length $\ell+1$ and every vertex has degree $3$ or degree $2$.
If such a $G_0$ exists, then the numbers of degree-$2$ and degree-$3$ vertices are $\frac{(\ell-5)n+4(\ell+1)}{\ell-1}$ and $\frac{4(n-\ell-1)}{\ell-1}$, respectively. We could then apply steps similar to (1), (2), and (3) in the proof of Theorem~\ref{thm:construct} in that we add halving vertices and insert a graph $B_{\ell-1}$ (see Figure~\ref{fig:gadgets}) in place of vertices of degree $2$ and $3$. For the resulting graph $G$, \begin{align*}
v(G) &= v(G_0)+e(G_0)+(\ell-4)\frac{(\ell-5)n+4(\ell+1)}{\ell-1}+(\ell-5)\frac{4(n-\ell-1)}{\ell-1} \\
&= n+\frac{\ell+1}{\ell-1}(n-2)+\frac{(\ell^2-5\ell)n+4(\ell+1)}{\ell-1} \\
&= \frac{\ell^2-3\ell}{\ell-1}n+\frac{2(\ell+1)}{\ell-1} \\
e(G) &= (3\ell-9)v(G_0)=(3\ell-9)n \end{align*}
Therefore, $e(G)=\frac{3(\ell-1)}{\ell}v(G)-\frac{6(\ell+1)}{\ell}$. We conjecture that this is the maximum number of edges in a $C_{\ell}$-free planar graph.
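The closed forms above can be verified with exact rational arithmetic: when every face of $G_0$ has length $\ell+1$ and every vertex has degree $2$ or $3$, Euler's formula forces $e(G_0)=\frac{(\ell+1)(n-2)}{\ell-1}$, and the remaining identities follow. A verification sketch:

```python
from fractions import Fraction as F

for ell in range(7, 30):
    for n in range(ell + 2, ell + 40):
        e0 = F((ell + 1) * (n - 2), ell - 1)            # edges of G_0 (all faces have length ell+1)
        d2 = F((ell - 5) * n + 4 * (ell + 1), ell - 1)  # number of degree-2 vertices
        d3 = F(4 * (n - ell - 1), ell - 1)              # number of degree-3 vertices
        assert d2 + d3 == n and 2 * d2 + 3 * d3 == 2 * e0   # degree sum is consistent
        vG = n + e0 + (ell - 4) * d2 + (ell - 5) * d3   # vertex count of the resulting G
        assert vG == F(ell * ell - 3 * ell, ell - 1) * n + F(2 * (ell + 1), ell - 1)
        eG = (3 * ell - 9) * n                          # edge count of the resulting G
        assert eG == F(3 * (ell - 1), ell) * vG - F(6 * (ell + 1), ell)
```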
\begin{figure}
\caption{$B_{\ell-1}$ is used in the construction of a $C_{\ell}$-free graph.}
\label{fig:gadgets}
\end{figure}
\begin{conjecture}
Let $\ell\geq 7$. There exists an integer $N_0>0$ such that every $n$-vertex $C_{\ell}$-free plane graph $G$ with $n\geq N_0$ satisfies $e(G)\leq \frac{3(\ell-1)}{\ell}n-\frac{6(\ell+1)}{\ell}$. \end{conjecture}
\begin{thebibliography}{99} \bibitem{DC} C. Dowden. Extremal $C_4$-free/$C_5$-free planar graphs. \textit{J. Graph Theory} \textbf{83} (2016), 213--230.
\bibitem{EP1} P. Erd\H os. On the structure of linear graphs. \textit{Israel Journal of Mathematics} \textbf{1} (1963), 156--160.
\bibitem{EP2} P. Erd\H os. On the number of complete subgraphs contained in certain graphs. \textit{Publ. Math. Inst. Hung. Acad. Sci.} \textbf{7} (1962), 459--464.
\bibitem{LY} Y. Lan, Y. Shi, Z. Song. Extremal theta-free planar graphs. \textit{Discrete Mathematics} \textbf{342}(12) (2019), Article 111610.
\bibitem{TP} P. Tur\'an. On an extremal problem in Graph Theory. \textit{Mat. Fiz. Lapok} (in Hungarian) \textbf{48} (1941), 436--452.
\bibitem{zykov} A. Zykov. On some properties of linear complexes. \textit{Mat. Sb. (N.S.)} \textbf{24}(66) (1949), 163--188.
\end{thebibliography}
\appendix \section{Tables} \label{appendix}
The following tables give a summary of the results from Lemmas~\ref{lem:easyblock}, \ref{lem:B5d}, and~\ref{lem:B4b}.
A red edge incident to a vertex of a triangular-block indicates that the corresponding vertex is a junction vertex. A vertex with only one red edge is shared by at least two triangular-blocks, whereas a vertex with two red edges is shared by at least three blocks.
A pair of blue edges indicates the boundary of a $4$-face.
\begin{table}[ht]
\renewcommand\arraystretch{2}\par
\centering
\newlength{\casebox}
\setlength{\casebox}{1.8cm}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
\makebox[\casebox][c]{Case} &
\makebox[0.7cm]{$B$} &
\makebox[3.0cm]{Diagram} &
\makebox[1.8cm]{$f(B)\leq$} &
\makebox[1.4cm]{$n(B)\leq$} &
\makebox[1.4cm]{$e(B)=$} &
\makebox[2.9cm]{$7f+2n-5e\leq$} \\ \hline \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:easyblock} \\ 1\ref{case1a}} &
$B_{5,a}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x2) at (0,5);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,-1.5);
\coordinate (l2) at (0,8);
\coordinate (l3) at (6,-1.5);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l2) -- (x2);
\draw[red,very thick] (l3) -- (x3);
\draw[very thick,fill=gray!20] (x1) -- (x2) -- (x3) -- (x1);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\node at (0,1.5) {$K_5^-$};
\node at (-6.5,-2) {};
\node at (6.5,8.5) {};
\end{tikzpicture}}} &
$5 + \dfrac{3}{7}$ & $2 + \dfrac{3}{2}$ & 9 & 0 \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:easyblock} \\ 1\ref{case1b}} &
$B_{5,a}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x2) at (0,5);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,-1.5);
\coordinate (l3) at (6,-1.5);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l3) -- (x3);
\draw[very thick,fill=gray!20] (x1) -- (x2) -- (x3) -- (x1);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\node at (0,1.5) {$K_5^-$};
\node at (-6.5,-2) {};
\node at (6.5,5.5) {};
\end{tikzpicture}}} &
$5 + \dfrac{2}{7}$ & $3 + \dfrac{2}{2}$ & 9 & 0 \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:easyblock} \\ 2\ref{case2b}} &
$B_{5,b}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,6);
\coordinate (x2) at (3,6);
\coordinate (x3) at (3,0);
\coordinate (x4) at (-3,0);
\coordinate (x5) at (0,3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x1);
\draw[very thick] (x1) -- (x5);
\draw[very thick] (x2) -- (x5);
\draw[very thick] (x3) -- (x5);
\draw[very thick] (x4) -- (x5);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (x5) circle(\circ);
\node at (-3.5,-0.5) {};
\node at (3.5,6.5) {};
\end{tikzpicture}}} &
$4 + \dfrac{4}{7}$ & $3 + \dfrac{2}{2}$ & 8 & 0 \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:easyblock} \\ 2\ref{case2c}} &
$B_{5,c}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,6);
\coordinate (x2) at (3,6);
\coordinate (x3) at (5,3);
\coordinate (x4) at (3,0);
\coordinate (x5) at (-3,0);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x5) -- (x1);
\draw[very thick] (x5) -- (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (x5) circle(\circ);
\node at (-3.5,-0.5) {};
\node at (5.5,6.5) {};
\end{tikzpicture}}} &
$3+ \dfrac{5}{7}$ & $3 + \dfrac{2}{2}$ & 7 & $-1$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B5d} \\ \ref{case5a}} &
$B_{5,d}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (2,4);
\coordinate (x4) at (0,0);
\coordinate (x5) at (-4,4);
\coordinate (l2) at (7,4);
\coordinate (l5) at (-7,4);
\draw[red,very thick] (l2) -- (x2);
\draw[red,very thick] (l5) -- (x5);
\draw[very thick] (x1) -- (x2) -- (x4) -- (x5) -- (x1);
\draw[very thick] (x1) -- (x4);
\draw[very thick] (x1) -- (x3) -- (x4);
\draw[very thick] (x2) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (x5) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l5) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$4 + \dfrac{4}{7}$ & $3 + \dfrac{2}{2}$ & 8 & $0$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B5d} \\ \ref{case5b}} &
$B_{5,d}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (2,4);
\coordinate (x4) at (0,0);
\coordinate (x5) at (-4,4);
\coordinate (l4) at (4,0);
\coordinate (l5) at (-7,4);
\draw[red,very thick] (l5) -- (x5);
\draw[red,very thick] (l4) -- (x4);
\draw[very thick] (x1) -- (x2) -- (x4) -- (x5) -- (x1);
\draw[very thick] (x1) -- (x4);
\draw[very thick] (x1) -- (x3) -- (x4);
\draw[very thick] (x2) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (x5) circle(\circ);
\draw[fill=black] (l5) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$4 + \dfrac{4}{7}$ & $3 + \dfrac{2}{2}$ & 8 & $0$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B5d} \\ \ref{case5c}} &
$B_{5,d}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (2,4);
\coordinate (x4) at (0,0);
\coordinate (x5) at (-4,4);
\coordinate (l2) at (7,4);
\coordinate (l5) at (-7,4);
\draw[red,very thick] (l5) -- (x5);
\draw[blue,very thick] (x1) -- (l2) -- (x4);
\draw[very thick] (x1) -- (x2) -- (x4) -- (x5) -- (x1);
\draw[very thick] (x1) -- (x4);
\draw[very thick] (x1) -- (x3) -- (x4);
\draw[very thick] (x2) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (x5) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l5) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$4 + \dfrac{2}{4} + \dfrac{2}{7}$ & $2 + \dfrac{3}{2}$ & 8 & $\dfrac{1}{2}$~\redstar \\ \hline
\end{tabular}
\caption{All possible $B_5$ blocks in $G$ and the estimate of $7f(B)+2n(B)-5e(B)$.}
\label{tab:5block} \end{table}
\begin{table}
\renewcommand\arraystretch{2}\par
\centering
\setlength{\casebox}{1.8cm}
\begin{tabular}{|c|c|c|c|c|c|c|}\hline
\makebox[\casebox][c]{Case} &
\makebox[0.7cm]{$B$} &
\makebox[3.0cm]{Diagram} &
\makebox[1.8cm]{$f(B)\leq$} &
\makebox[1.4cm]{$n(B)\leq$} &
\makebox[1.4cm]{$e(B)=$} &
\makebox[2.9cm]{$7f+2n-5e\leq$} \\ \hline \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:easyblock} \\ 2\ref{case2a}} &
$B_{4,a}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-4,0);
\coordinate (x2) at (0,7);
\coordinate (x3) at (4,0);
\coordinate (x4) at (0,3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x1);
\draw[very thick] (x1) -- (x4) -- (x3);
\draw[very thick] (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\node at (-4.5,-0.5) {};
\node at (4.5,7.5) {};
\end{tikzpicture}}} &
$3 + \dfrac{3}{7}$ & $2 + \dfrac{2}{2}$ & 6 & 0 \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ \ref{case6a}} &
$B_{4,b}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (0,0);
\coordinate (x4) at (-4,4);
\coordinate (l2) at (7,4);
\coordinate (l4) at (-7,4);
\draw[red,very thick] (l4) -- (x4);
\draw[blue,very thick] (x1) -- (l2) -- (x3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x1);
\draw[very thick] (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$2 + \dfrac{2}{4} + \dfrac{2}{7}$ & $1 + \dfrac{3}{2}$ & 5 & $-\dfrac{1}{2}$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ \ref{case6b}\ref{case6bi}} &
$B_{4,b}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (0,0);
\coordinate (x4) at (-4,4);
\coordinate (l3) at (4,0);
\coordinate (l1) at (-4,8);
\draw[red,very thick] (l3) -- (x3);
\draw[red,very thick] (l1) -- (x1);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x1);
\draw[very thick] (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$2 + \dfrac{4}{7}$ & $2 + \dfrac{2}{2}$ & 5 & $-1$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ \ref{case6b}\ref{case6bii}} &
$B_{4,b}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (0,0);
\coordinate (x4) at (-4,4);
\coordinate (l2) at (7,4);
\coordinate (l4) at (-7,4);
\coordinate (l1) at (-4,8);
\draw[red,very thick] (l1) -- (x1);
\draw[blue,very thick] (x1) -- (l2) -- (x3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x1);
\draw[very thick] (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$2 + \dfrac{2}{4} + \dfrac{2}{7}$ & $2 + \dfrac{1}{3} + \dfrac{1}{2}$ & 5 & $\dfrac{1}{6}$~\redstar \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ \ref{case6b}\ref{case6biii}} &
$B_{4,b}$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (0,8);
\coordinate (x2) at (4,4);
\coordinate (x3) at (0,0);
\coordinate (x4) at (-4,4);
\coordinate (l2) at (7,4);
\coordinate (l4) at (-7,4);
\draw[blue,very thick] (x1) -- (l2) -- (x3);
\draw[blue,very thick] (x1) -- (l4) -- (x3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x4) -- (x1);
\draw[very thick] (x2) -- (x4);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (x4) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\node at (-7.5,-0.5) {};
\node at (7.5,8.5) {};
\end{tikzpicture}}} &
$2 + \dfrac{2}{4} + \dfrac{2}{4}$ & $2 + \dfrac{2}{3}$ & 5 & $\dfrac{4}{3}$~\redstar \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ 3\ref{case3a}} &
$B_3$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x2) at (0,5);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,-1.5);
\coordinate (l2) at (0,8);
\coordinate (l3) at (6,-1.5);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l2) -- (x2);
\draw[red,very thick] (l3) -- (x3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x1);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\node at (-6.5,-2) {};
\node at (6.5,8.5) {};
\end{tikzpicture}}} &
$1 + \dfrac{2}{7} + \dfrac{1}{4}$ & $\dfrac{3}{2}$ & 3 & $-\dfrac{5}{4}$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ 3\ref{case3b}} &
$B_3$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x2) at (0,5);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,-1.5);
\coordinate (l2) at (-2,7.5);
\coordinate (l4) at (2,7.5);
\coordinate (l3) at (6,-1.5);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l2) -- (x2);
\draw[red,very thick] (l4) -- (x2);
\draw[red,very thick] (l3) -- (x3);
\draw[very thick] (x1) -- (x2) -- (x3) -- (x1);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x2) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\node at (-6.5,-2) {};
\node at (6.5,8.0) {};
\end{tikzpicture}}} &
$1 + \dfrac{3}{4}$ & $\dfrac{2}{2} + \dfrac{1}{3}$ & 3 & $-\dfrac{1}{12}$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ 4\ref{case4a}} &
$B_2$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,0);
\coordinate (l3) at (6,0);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l3) -- (x3);
\draw[very thick] (x1) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\node at (-6.5,-2.5) {};
\node at (6.5,2.5) {};
\end{tikzpicture}}} &
$\dfrac{1}{4} + \dfrac{1}{7}$ & $\dfrac{2}{2}$ & 1 & $-\dfrac{1}{4}$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ 4\ref{case4b}} &
$B_2$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,0);
\coordinate (l3) at (6,2);
\coordinate (l4) at (6,-2);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l3) -- (x3);
\draw[red,very thick] (l4) -- (x3);
\draw[very thick] (x1) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\node at (-6.5,-2.5) {};
\node at (6.5,2.5) {};
\end{tikzpicture}}} &
$\dfrac{1}{4} + \dfrac{1}{7}$ & $\dfrac{1}{2} + \dfrac{1}{3}$ & 1 & $-\dfrac{7}{12}$ \\ \hline
\parbox[c]{\casebox}{\centering Lemma~\ref{lem:B4b} \\ 4\ref{case4c}} &
$B_2$ &
\parbox[c]{1em}{
\renewcommand\arraystretch{2}\par
\centerline{
\begin{tikzpicture}[scale=0.2]
\coordinate (x1) at (-3,0);
\coordinate (x3) at (3,0);
\coordinate (l1) at (-6,2);
\coordinate (l2) at (-6,-2);
\coordinate (l3) at (6,2);
\coordinate (l4) at (6,-2);
\draw[red,very thick] (l1) -- (x1);
\draw[red,very thick] (l2) -- (x1);
\draw[red,very thick] (l3) -- (x3);
\draw[red,very thick] (l4) -- (x3);
\draw[very thick] (x1) -- (x3);
\draw[fill=black] (x1) circle(\circ);
\draw[fill=black] (x3) circle(\circ);
\draw[fill=black] (l1) circle(\circ);
\draw[fill=black] (l2) circle(\circ);
\draw[fill=black] (l3) circle(\circ);
\draw[fill=black] (l4) circle(\circ);
\node at (-6.5,-2.5) {};
\node at (6.5,2.5) {};
\end{tikzpicture}}} &
$\dfrac{1}{4} + \dfrac{1}{5}$ & $\dfrac{2}{3}$ & 1 & $-\dfrac{31}{60}$ \\ \hline
\end{tabular}
\caption{All possible $B_4$, $B_3$ and $B_2$ blocks in $G$ and the estimate of $7f(B)+2n(B)-5e(B)$.}
\label{tab:432block} \end{table}
\end{document}
\begin{document}
~
\begin{center} {\sc \bf\LARGE Maxwell's equations with hypersingularities\\[6pt] at a conical plasmonic tip} \end{center} \begin{center} \textsc{Anne-Sophie Bonnet-Ben Dhia}$^1$, \textsc{Lucas Chesnel}$^2$, \textsc{Mahran Rihani}$^{1,2}$\\[16pt] \begin{minipage}{0.96\textwidth} {\small $^1$ Laboratoire Poems, CNRS/INRIA/ENSTA Paris, Institut Polytechnique de Paris, 828 Boulevard des Mar\'echaux, 91762 Palaiseau, France;\\ $^2$ INRIA/Centre de math\'ematiques appliqu\'ees, \'Ecole Polytechnique, Institut Polytechnique de Paris, Route de Saclay, 91128 Palaiseau, France.\\[10pt] E-mails: \texttt{[email protected]}, \texttt{[email protected]}, \texttt{[email protected]}\\[-14pt] \begin{center} (\today) \end{center} } \end{minipage} \end{center}
\noindent\textbf{Abstract.} In this work, we are interested in the analysis of time-harmonic Maxwell's equations in presence of a conical tip of a material with negative dielectric constants. When these constants belong to some critical range, the electromagnetic field exhibits strongly oscillating singularities at the tip which have infinite energy. Consequently Maxwell's equations are not well-posed in the classical $\mrm{L}^2$ framework. The goal of the present work is to provide an appropriate functional setting for 3D Maxwell's equations when the dielectric permittivity (but not the magnetic permeability) takes critical values. Following what has been done for the 2D scalar case, the idea is to work in weighted Sobolev spaces, adding to the space the so-called outgoing propagating singularities. The analysis requires new results of scalar and vector potential representations of singular fields. The outgoing behaviour is selected via the limiting absorption principle. \\
\noindent\textbf{Key words.} Time-harmonic Maxwell's equations, negative metamaterials, Kondratiev weighted Sobolev spaces, $T$-coercivity, compact embeddings, scalar and vector potentials, limiting absorption principle.
\section{Introduction}
For the past two decades, the scientific community has been particularly interested in the study of Maxwell's equations in the unusual case where the dielectric permittivity $\varepsilon$ is a real-valued sign-changing function. There are several motivations to this which are all related to spectacular progress in physics. Such sign-changing $\varepsilon$ appear for example in the field of plasmonics \cite{BaDE03,Maier07,BVNMRB08}. The existence of surface plasmonic waves is mainly due to the fact that, at optical frequencies, some metals like silver or gold have an $\varepsilon$ with a small imaginary part and a negative real part. Neglecting the imaginary part, at a given frequency, one is led to consider a real-valued $\varepsilon$ which is negative in the metal and positive in the air around the metal. A second more prospective motivation concerns the so-called metamaterials, whose micro-structure is designed so that their effective electromagnetic constants may have a negative real part and a small imaginary part in some frequency ranges \cite{SmiPenWil04,Sih07,SalEng06}. Let us emphasize that for such metamaterials not only the dielectric permittivity $\varepsilon$ may become negative but the magnetic permeability $\mu$ as well. At the interface between dielectrics and negative-index metamaterials, one can observe a negative refraction phenomenon which opens a lot of exciting prospects. Finally let us mention that negative $\varepsilon$ also appear in plasmas, together with strong anisotropic effects. But we want to underline a main difference between plasmas and the previous applications. In the case of plasmonics and metamaterials, $\varepsilon$ is sign-changing but does not vanish (and similarly for $\mu$), while in plasmas, $\varepsilon$ vanishes on some particular surfaces, leading to the phenomenon of hybrid resonance (see \cite{DeIW14,NiCD19}). 
The theory developed in the present paper does not apply to the case where $\varepsilon$ vanishes.\\ \newline The goal of the present work is to study Maxwell's system in the case where $\varepsilon$, $\mu$ change sign but do not vanish. In the case of invariance with respect to one variable, the analysis of the time-harmonic Maxwell problem leads one to consider the 2D scalar Helmholtz equation \[ \mrm{div}\left(\frac{1}{\varepsilon}\nabla \varphi\right)+\omega^2\mu\varphi=f. \] Here $f$ denotes the source term and the unknown $\varphi$ is a component of the magnetic field. For this scalar equation, only the change of sign of $\varepsilon$ matters because, roughly speaking, the term involving $\mu$ is compact (or locally compact in free space). In the particular case where $\varepsilon$ takes constant values $\varepsilon_+>0$ and $\varepsilon_-<0$ in two subdomains separated by a curve $\Sigma$, the results are quite complete \cite{BoChCi12}. If $\Sigma$ is smooth (of class ${\mathcal C}^1$), the equation has the same properties in the $\mrm{H}^1$ framework as in the case of positive coefficients, except when the contrast $\kappa_{\varepsilon}:=\varepsilon_-/\varepsilon_+$ takes the particular value $-1$. One way to show this consists in finding an appropriate operator $\mathbb T$ such that the coercivity of the variational formulation is restored when testing with functions of the form $\mathbb T\varphi'$ (instead of $\varphi'$). This approach is called the $\mathbb T$-coercivity technique. When $\kappa_{\varepsilon}=-1$, Fredholmness is lost in $\mrm{H}^1$ but some results can be established in weighted Sobolev spaces where the weight is adapted to the shape of $\Sigma$ \cite{Ola95,Nguy16,Pank19}. The picture is quite different when $\Sigma$ has corners. For instance, in the case of a polygonal curve $\Sigma$, Fredholmness in $\mrm{H}^1$ is lost not only for $\kappa_{\varepsilon}=-1$ but for a whole interval of values of $\kappa_{\varepsilon}$ around $-1$. 
We call this interval the critical interval. The smaller the angle of the corners, the larger the critical interval. In fact, we can still find a solution in that case, but this solution has a strongly singular behaviour at the corners in $r^{i\eta}$, where $r$ is the distance to the corner and $\eta$ is a real coefficient. In particular, this hypersingular solution does not belong to $\mrm{H}^1$. It has been shown that Fredholmness can be recovered in an appropriate unusual framework \cite{BonCheCla13} which is obtained by adding a singular function to a Kondratiev weighted Sobolev space of regular functions. The proof requires adapting Mellin techniques in Kondratiev spaces \cite{Kond67} to an equation which is not elliptic due to the change of sign of $\varepsilon$ (see \cite{DaTe97} for the first analysis). From a physical point of view, the singular\footnote{From now on, we simply write ``singular'' instead of ``hypersingular''.} function corresponds to a wave which propagates towards the corner without ever reaching it, because its group velocity tends to zero with the distance to the corner \cite{BCCC16,HeKa18,HeKa20}. In the literature, this wave trapped by the corner is commonly referred to as a black-hole wave. It leads to a strange phenomenon of leakage of energy even though only non-dissipative materials are considered.\\ \newline The objective of this article is to extend this type of result to 3D Maxwell's equations. The case where the contrasts in $\varepsilon$ and $\mu$ do not take critical values has been considered in \cite{BoCC14}. Using the $\mathbb T$-coercivity technique, a Fredholm property has been proved for Maxwell's equations in a classical functional framework as soon as two scalar problems (one for $\varepsilon$ and one for $\mu$) are well-posed in $\mrm{H}^1$. The case where these problems satisfy a Fredholm property in $\mrm{H}^1$ but with a nontrivial kernel has also been treated in \cite{BoCC14}. 
Let us finally mention \cite{NgSi19} where different types of results have been established for a smooth inclusion of class ${\mathscr C}^1$. In the present work, we consider a 3D configuration with an inclusion of material with a negative dielectric permittivity $\varepsilon$. We suppose that this inclusion has a tip at which singularities of the electromagnetic field exist. The objective is to combine Mellin analysis in Kondratiev spaces with the $\mathbb T$-coercivity technique to derive an appropriate functional framework for Maxwell's equations when the contrast $\kappa_{\varepsilon}$ takes critical values (but not the contrast in $\mu$). We emphasize that due to the nonstandard singularities we have to deal with, the results we obtain are quite different from the ones existing for classical Maxwell's equations with positive materials in nonsmooth domains \cite{BiSo87b,Cost90,BiSo94,CoDN99,CoDa00}.\\ \newline The outline is as follows. In the remaining part of the introduction, we present some general notation. In Section \ref{sec-assumptions}, we describe the assumptions made on the dielectric constants $\varepsilon$, $\mu$. Then we propose a new functional framework for the problem for the electric field and show its well-posedness in Section \ref{SectionChampE}. Section \ref{SectionChampH} is dedicated to the analysis of the problem for the magnetic field. We emphasize that due to the assumptions made on $\varepsilon$, $\mu$ (the contrast in $\varepsilon$ is critical but the one in $\mu$ is not), the studies in Sections \ref{SectionChampE} and \ref{SectionChampH} are quite different. We give a few words of conclusion in Section \ref{SectionConclusion} before presenting technical results needed in the analysis in two appendices. 
The main outcomes of this work are Theorem \ref{MainThmE} (well-posedness for the electric problem) and Theorem \ref{MainThmH} (well-posedness for the magnetic problem).\\ \newline The whole study takes place in some domain $\Omega$ of $\mathbb{R}^3$. More precisely, $\Omega$ is an open, connected and bounded subset of $\mathbb{R}^3$ with a Lipschitz-continuous boundary $\partial\Omega$. Once and for all, we make the following assumption:\\ \newline \textbf{Assumption 1.} \textit{The domain $\Omega$ is simply connected and $\partial\Omega$ is connected.}\\ \newline When this assumption is not satisfied, the analysis below must be adapted (see the discussion in the conclusion). For some $\omega\ne0$ ($\omega\in\mathbb{R}$), the time-harmonic Maxwell's equations are given by \begin{equation}\label{Eqs Maxwell} \boldsymbol{\mrm{curl}}\,\boldsymbol{E}-i\omega\,\mu\,\boldsymbol{H} = 0\qquad\mbox{ and }\qquad\boldsymbol{\mrm{curl}}\,\boldsymbol{H}+i\omega\,\varepsilon\,\boldsymbol{E} = \boldsymbol{J}\,\mbox{ in }\Omega. \end{equation} Above, $\boldsymbol{E}$ and $\boldsymbol{H}$ are respectively the electric and magnetic components of the electromagnetic field. The source term $\boldsymbol{J}$ is the current density. We suppose that the medium $\Omega$ is surrounded by a perfect conductor and we impose the boundary conditions \begin{equation}\label{CL Maxwell} \boldsymbol{E}\times\nu=0\qquad\mbox{ and }\qquad\mu\boldsymbol{H}\cdot\nu=0\,\mbox{ on }\partial\Omega, \end{equation} where $\nu$ denotes the unit outward normal vector field to $\partial\Omega$. The dielectric permittivity $\varepsilon$ and the magnetic permeability $\mu$ are real-valued functions which belong to $\mrm{L}^{\infty}(\Omega)$, with $\varepsilon^{-1},\,\mu^{-1}\in \mrm{L}^{\infty}(\Omega)$ (without any sign assumption). Let us introduce some usual spaces in the study of Maxwell's equations: $$ \begin{array}{rcl} \boldsymbol{\mrm{L}}^2(\Omega)&:=&(\mrm{L}^2(\Omega))^3\\
\mrm{H}^1_{0}(\Omega)&:=&\{\varphi\in\mrm{H}^1(\Omega)\,|\,\varphi=0\mbox{ on }\partial\Omega\}\\
\mrm{H}^1_{\#}(\Omega)&:=&\{\varphi\in\mrm{H}^1(\Omega)\,|\,\int_{\Omega}\varphi\,dx=0\}\\
\displaystyle \boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,) &:=& \displaystyle \{ \boldsymbol{H}\in \boldsymbol{\mrm{L}}^2(\Omega) \,|\, \boldsymbol{\mrm{curl}}\, \boldsymbol{H}\in\boldsymbol{\mrm{L}}^2(\Omega)\}\\[2pt]
\displaystyle \boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,) &:=& \displaystyle \{ \boldsymbol{E}\in \boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,) \,|\, \boldsymbol{E}\times\nu=0 \mbox { on } \partial\Omega\} \end{array} $$ and for $\xi\in \mrm{L}^{\infty}(\Omega)$: $$ \begin{array}{rcl}
\boldsymbol{\mrm{X}}_T(\xi) & :=& \left\{\boldsymbol{H}\in \boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)\,|\,\mrm{div}(\xi\boldsymbol{H})=0,\,\xi\boldsymbol{H}\cdot\nu=0 \mbox{ on }\partial\Omega \right\}\\[2pt]
\boldsymbol{\mrm{X}}_N(\xi) & := &\left\{\boldsymbol{E}\in \boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)\,|\,\mrm{div}(\xi\boldsymbol{E})=0\right\}. \end{array} $$
We denote by $(\cdot,\cdot)_{\Omega}$ the classical inner products of both $\mrm{L}^2(\Omega)$ and $\boldsymbol{\mrm{L}}^2(\Omega)$, and $\|\cdot\|_{\Omega}$ stands for the corresponding norms. We endow the spaces $\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)$, $\boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)$, $\boldsymbol{\mrm{X}}_T(\xi)$, $\boldsymbol{\mrm{X}}_N(\xi)$ with the norm \[
\|\cdot\|_{\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)}:=(\|\cdot\|^2_{\Omega}+\|\boldsymbol{\mrm{curl}}\,\cdot\|^2_{\Omega})^{1/2}. \] Let us recall a well-known property of the particular spaces $\boldsymbol{\mrm{X}}_T(1)$ and $\boldsymbol{\mrm{X}}_N(1)$ (cf. \cite{Webe80,AmrBerDau98}). \begin{proposition}\label{PropoEmbeddingCla} Under Assumption 1, the embeddings of $\boldsymbol{\mrm{X}}_T(1)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ and of $\boldsymbol{\mrm{X}}_N(1)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ are compact. Moreover, there exists a constant $C>0$ such that \[
\|\boldsymbol{u}\|_{\Omega}\le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega},\qquad \forall\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(1)\cup\boldsymbol{\mrm{X}}_N(1). \]
Therefore, in $\boldsymbol{\mrm{X}}_T(1)$ and in $\boldsymbol{\mrm{X}}_N(1)$, $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\Omega}$ is a norm which is equivalent to $\|\cdot\|_{\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)}$. \end{proposition}
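Let us note that the norm equivalence in Proposition \ref{PropoEmbeddingCla} follows at once from the estimate above: for all $\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(1)\cup\boldsymbol{\mrm{X}}_N(1)$, there holds
\[
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|^2_{\Omega}\le\|\boldsymbol{u}\|^2_{\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)}=\|\boldsymbol{u}\|^2_{\Omega}+\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|^2_{\Omega}\le(1+C^2)\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|^2_{\Omega}.
\]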
\section{Assumptions for the dielectric constants $\varepsilon$, $\mu$}\label{sec-assumptions}
In this document, for a Banach space $\mrm{X}$, $\mrm{X}^{\ast}$ stands for the topological antidual space of $\mrm{X}$ (the set of continuous anti-linear forms on $\mrm{X}$). \\In the analysis of the Maxwell system (\ref{Eqs Maxwell})-(\ref{CL Maxwell}), the properties of two scalar operators associated respectively with $\varepsilon$ and $\mu$ play a key role. Define $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ such that \begin{equation}\label{DefAeps} \langle A_{\varepsilon}\varphi,\varphi'\rangle=\int_{\Omega}\varepsilon\nabla\varphi\cdot\nabla\overline {\varphi'}\,dx,\qquad \forall \varphi,\varphi'\in\mrm{H}^1_0(\Omega) \end{equation} and $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ such that \[ \langle A_{\mu}\varphi,\varphi'\rangle=\int_{\Omega}\mu\nabla\varphi\cdot\nabla\overline{\varphi'}\,dx,\qquad \forall \varphi,\varphi'\in\mrm{H}^1_{\#}(\Omega). \] \textbf{Assumption 2.} \textit{We assume that $\mu$ is such that $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ is an isomorphism.}\\ \newline Assumption 2 is satisfied in particular if $\mu$ has a constant sign (by the Lax-Milgram theorem). We underline however that we allow $\mu$ to change sign (see in particular \cite{CosSte85,BonCiaZwo08,BoChCi12,BoCC14} for examples of sign-changing $\mu$ such that Assumption 2 is verified). The assumption on $\varepsilon$, which will be responsible for the presence of (hyper)singularities, requires considering a more specific configuration, as explained below. \subsection{Conical tip and scalar (hyper)singularities} \label{subsec-hypersingularities}
We assume that $\Omega$ contains an inclusion of a particular material (metal at optical frequency, metamaterial, ...) located in some domain $\mathcal{M}$ such that $\overline{\mathcal{M}}\subset\Omega$ ($\mathcal{M}$ like metal or metamaterial). We assume that $\partial\mathcal{M}$ is of class $\mathscr{C}^2$ except at the origin $O$ where $\mathcal{M}$ coincides locally with a conical tip. More precisely, there are $\rho>0$ and some smooth domain $\varpi$ of the unit sphere $\mathbb{S}^2:=\{x\in\mathbb{R}^3\,|\,|x|=1\}$ such that $B(O,\rho)\subset \Omega$ and \[
\mathcal{M}\cap B(O,\rho)=\mathcal{K}\cap B(O,\rho)\qquad\mbox{ with }\ \mathcal{K}:=\{r\,\boldsymbol{\theta}\,|\,r>0,\,\boldsymbol{\theta}\in\varpi \}. \] Here $B(O,\rho)$ stands for the open ball centered at $O$ and of radius $\rho$. We assume that $\varepsilon$ takes the constant value $\varepsilon_-<0$ (resp. $\varepsilon_+>0$) in $\mathcal{M}\cap B(O,\rho)$ (resp. $(\Omega\setminus\overline{\mathcal{M}})\cap B(O,\rho)$). Moreover, we assume that the contrast $\kappa_{\varepsilon}:=\varepsilon_-/\varepsilon_+<0$ and $\varpi$ (which characterizes the geometry of the conical tip) are such that there exist singularities of the form \begin{equation}\label{defSing} \mathfrak{s}(x)=r^{-1/2+i\eta}\Phi(\theta,\phi) \end{equation} satisfying $\mrm{div}(\varepsilon\nabla\mathfrak{s})=0$ in $\mathcal{K}$ with $\eta\in\mathbb{R}$, $\eta\neq 0$. Here $(r,\theta,\phi)$ are the spherical coordinates associated with $O$ while $\Phi$ is a function which is smooth in $\varpi$ and in $\mathbb{S}^2\setminus\overline{\varpi}$. We emphasize that since the interface between the metamaterial and the exterior material is not smooth, singularities always exist at the conical tip. However, here we make a particular assumption on the singular exponent which has to be of the form $-1/2+i\eta$ with $\eta\in\mathbb{R}$, $\eta\neq 0$. Such singularities play a particular role for the operator $A_{\varepsilon}$ introduced in (\ref{DefAeps}) because they are ``just'' outside $\mrm{H}^1$. More precisely, we have $\mathfrak{s}\notin\mrm{H}^1(\Omega)$ but $r^\gamma\mathfrak{s}\in\mrm{H}^1(\Omega)$ for all $\gamma>0$ (indeed $|\nabla\mathfrak{s}|^2$ behaves as $r^{-3}$ near $O$, so that $\int_{B(O,\rho)}|\nabla\mathfrak{s}|^2\,dx$ diverges logarithmically, while the extra factor $r^{2\gamma}$ restores integrability). Using these singularities, we can construct a sequence of functions $u_n\in\mrm{H}^1_0(\Omega)$ such that \[
\forall n\in\mathbb{N},\quad\|u_n\|_{\mrm{H}^1(\Omega)}=1\qquad\mbox{ and }\qquad\lim_{n\to+\infty}\|\mrm{div}(\varepsilon\nabla u_n)\|_{(\mrm{H}^1_0(\Omega))^{\ast}}+\|u_n\|_{\Omega}=0. \] Then this allows one to prove that the range of $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is not closed (see \cite{BonDauRam99,BoChCi12,BonCheCla13} in 2D). Of course, for any given geometry, such singularities do not exist when $\kappa_{\varepsilon}>0$ because we know that in this case $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is an isomorphism. On the other hand, when \begin{equation}\label{ConicalTip}
\varpi=\{(\cos\theta\cos\phi,\sin\theta\cos\phi,\sin\phi)\,|\,-\pi\le\theta\le\pi,\,-\pi/2\le\phi <-\pi/2+\alpha\}\mbox{ for some }\alpha\in(0;\pi) \end{equation}
(the circular conical tip, see Figure \ref{fig-geom}), it can be shown that such $\mathfrak{s}$ exists for $\kappa_{\varepsilon}>-1$ (resp. $\kappa_{\varepsilon}<-1$) and $|\kappa_{\varepsilon}+1|$ small enough (see \cite{KCHWS14}) when $\alpha<\pi/2$ (resp. $\alpha>\pi/2$). \begin{figure}
\caption{The domain $\Omega$ with the inclusion $\mathcal{M}$ exhibiting a conical tip. }
\label{fig-geom}
\end{figure} For a general smooth domain $\varpi\subset\mathbb{S}^2$ and a given contrast $\kappa_{\varepsilon}$, to determine whether such a singularity $\mathfrak{s}$ exists, one has to solve the spectral problem \begin{equation}\label{spectralPb}
\begin{array}{|l} \mbox{Find }(\Phi,\lambda)\in\mrm{H}^1(\mathbb{S}^2)\setminus\{0\}\times\mathbb C\mbox{ such that }\\ \displaystyle\int_{\mathbb{S}^2}\varepsilon\nabla_S\Phi\cdot\nabla_S\overline{\Phi'}\,ds = \lambda(\lambda+1)\int_{\mathbb{S}^2}\varepsilon\Phi\,\overline{\Phi'}\,ds,\qquad \forall\Phi'\in\mrm{H}^1(\mathbb{S}^2), \end{array} \end{equation}
and check whether some of the eigenvalues are of the form $\lambda=-1/2+i\eta$ with $\eta\in\mathbb{R}, \eta\neq 0$. Above, $\nabla_S$ stands for the surface gradient. With a slight abuse, when $\varepsilon$ appears in integrals over $\mathbb{S}^2$, we write $\varepsilon$ instead of $\varepsilon(\rho\,\cdot)$. Note that since $\varepsilon$ is real-valued, if $\lambda=-1/2+i\eta$ is an eigenvalue, we have $\lambda(\lambda+1)=-\eta^2-1/4$, so that $\lambda=-1/2-i\eta$ is also an eigenvalue for the same eigenfunction. And since $\lambda(\lambda+1)\in\mathbb{R}$, we can find a corresponding eigenfunction which is real-valued. From now on, we assume that $\Phi$ in (\ref{defSing}) is real-valued. Let us mention that this problem of existence of singularities of the form (\ref{defSing}) is directly related to the problem of existence of essential spectrum for the so-called Neumann-Poincar\'e operator \cite{KaPS07,PePu17,BoZh19,HePe18}. A noteworthy difference from the 2D case of a corner in the interface is that several singularities of the form (\ref{defSing}) with different values of $|\eta|$ can exist in 3D \cite{KCHWS14} (this depends on $\varepsilon$ and on $\varpi$). To simplify the presentation, we assume that for the case of interest, singularities of the form (\ref{defSing}) exist for only one value of $|\eta|$. Moreover, we assume that the quantity $\textstyle\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds$ does not vanish. In this case, replacing $\eta$ by $-\eta$ if necessary, we can choose $\eta$ so that \begin{equation}\label{eq-intPhineq0}
\eta \int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds>0. \end{equation}
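For the reader's convenience, here is a sketch of the separation of variables behind (\ref{spectralPb}) and of the symmetry $\eta\mapsto-\eta$; it uses only the fact that $\varepsilon$ is independent of $r$ in $B(O,\rho)$, and $\mrm{div}_S$ denotes the surface divergence on $\mathbb{S}^2$ (a notation introduced here for this computation only).

```latex
% For \mathfrak{s}=r^{\lambda}\Phi(\theta,\phi) and \varepsilon depending only on the
% angular variables, a computation in spherical coordinates gives
\[
\mrm{div}(\varepsilon\nabla(r^{\lambda}\Phi))
   = r^{\lambda-2}\big(\lambda(\lambda+1)\,\varepsilon\,\Phi
     + \mrm{div}_S(\varepsilon\nabla_S\Phi)\big)
   \qquad\mbox{in }\mathcal{K},
\]
% so that \mrm{div}(\varepsilon\nabla\mathfrak{s})=0 in \mathcal{K} is exactly the strong
% form of the spectral problem above. Moreover, for \lambda=-1/2+i\eta,
\[
\lambda(\lambda+1)=\Big(-\frac{1}{2}+i\eta\Big)\Big(\frac{1}{2}+i\eta\Big)
   =-\frac{1}{4}-\eta^{2}\;\in\;\mathbb{R},
\]
% which is real and invariant under \eta\mapsto-\eta.
```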
For the 2D problem, it can be proved that the quantity corresponding to $\textstyle\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds$ vanishes if and only if the contrast $\kappa_\varepsilon$ coincides with an endpoint of the critical interval. We conjecture that this also holds in 3D. Note that when $\textstyle\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds=0$, the singularities have a different form from (\ref{defSing}). To fix notation, we set \begin{equation}\label{DefSinguEss} s^{\pm}(x)=\chi(r)r^{-1/2\pm i\eta}\Phi(\theta,\phi). \end{equation} In this definition, the smooth cut-off function $\chi$ is equal to one in a neighbourhood of $0$ and is supported in $[-\rho;\rho]$. In particular, we emphasize that $s^{\pm}$ vanish in a neighbourhood of $\partial\Omega$.\\ \newline In order to recover Fredholmness for the scalar problem involving $\varepsilon$, an important idea is to add one (and only one) of the singularities (\ref{DefSinguEss}) to the functional framework. From a mathematical point of view, working with the complex conjugation, it is easy to see that adding $s^+$ or $s^-$ does not change the results. However, physically, one framework is more relevant than the other. More precisely, we will explain in \S\ref{ParagLimiting} with the limiting absorption principle why selecting $s^+$, with $\eta$ such that (\ref{eq-intPhineq0}) holds, together with a certain convention for the time-harmonic dependence, is more natural.
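As a quick check of the claim made earlier that these singularities are ``just'' outside $\mrm{H}^1$, note that $|\nabla\mathfrak{s}|\asymp r^{-3/2}$ near $O$ (up to angular factors depending on $\Phi$ and $\eta$), so that, up to positive multiplicative constants,

```latex
\[
\int_{B(O,\rho)}|\nabla\mathfrak{s}|^{2}\,dx
   \;\asymp\;\int_{0}^{\rho} r^{-3}\,r^{2}\,dr
   \;=\;\int_{0}^{\rho}\frac{dr}{r}\;=\;+\infty,
\]
% while for every \gamma>0,
\[
\int_{B(O,\rho)} r^{2\gamma}|\nabla\mathfrak{s}|^{2}\,dx
   \;\asymp\;\int_{0}^{\rho} r^{2\gamma-1}\,dr
   \;=\;\frac{\rho^{2\gamma}}{2\gamma}\;<\;+\infty.
\]
```

This explains both $\mathfrak{s}\notin\mrm{H}^1(\Omega)$ and $r^{\gamma}\mathfrak{s}\in\mrm{H}^1(\Omega)$ for all $\gamma>0$: the divergence at $r=0$ is only logarithmic.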
\subsection{Kondratiev functional framework} In this paragraph, adapting what is done in \cite{BonCheCla13} for the 2D case, we describe in more detail how to get a Fredholm operator for the scalar operator associated with $\varepsilon$. For $\beta \in \mathbb{R}$ and $m \in \mathbb{N}$, let us introduce the weighted Sobolev (Kondratiev) space $\mrm{V}_\beta^m(\Omega)$ (see \cite{Kond67}) defined as the closure of $\mathscr{C}^{\infty}_0(\overline{\Omega}\setminus \{O\})$ for the norm \[
\|\varphi\|_{\mrm{V}_\beta^m(\Omega)}=\left(\sum_{|\alpha| \leq m} \|r^{|\alpha|-m+\beta} \partial_x^{\alpha}\varphi\|_{\mrm{L}^2(\Omega)}^2\right)^{1/2}. \] Here $\mathscr{C}^{\infty}_0(\overline{\Omega}\setminus \{O\})$ denotes the space of infinitely differentiable functions which are supported in $\overline{\Omega}\setminus \{O\}$. We also denote by $\mathring{\mrm{V}}_\beta^1(\Omega)$ the closure of $\mathscr{C}^{\infty}_0(\Omega\setminus \{O\})$ for the norm
$\|\cdot\|_{\mrm{V}_\beta^1(\Omega)}$. We have the characterisation \[
\mathring{\mrm{V}}_\beta^1(\Omega)=\{\varphi\in\mrm{V}_\beta^1(\Omega)\,|\,\varphi=0\mbox{ on }\partial\Omega\}. \] Note that using Hardy's inequality \[
\int_{0}^1\frac{|u(r)|^2}{r^2}\,r^2dr \le 4\,\int_{0}^1|u'(r)|^2\,r^2dr,\qquad\forall u\in\mathscr{C}^1_0[0;1), \]
one can show the estimate $\|r^{-1}\varphi\|_{\Omega}\le C\,\|\nabla\varphi\|_{\Omega}$ for all $\varphi\in\mathscr{C}^{\infty}_0(\Omega\setminus \{O\})$. This proves that $\mathring{\mrm{V}}_{0}^1(\Omega)=\mrm{H}^1_0(\Omega)$. Now set $\beta>0$. Observe that we have \[ \mathring{\mrm{V}}_{-\beta}^1(\Omega)\subset \mrm{H}^1_0(\Omega)\subset \mathring{\mrm{V}}_\beta^1(\Omega)\qquad\mbox{ so that }\qquad(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}\subset (\mrm{H}^1_0(\Omega))^{\ast}\subset (\mathring{\mrm{V}}^1_{-\beta}(\Omega))^{\ast}. \] Define the operators $A^{\pm\beta}_{\varepsilon}:\mathring{\mrm{V}}^1_{\pm\beta}(\Omega)\to(\mathring{\mrm{V}}^1_{\mp\beta}(\Omega))^{\ast}$ such that \begin{equation}\label{DefOperatorWeight} \langle A^{\pm\beta}_{\varepsilon}\varphi,\varphi'\rangle=\int_{\Omega}\varepsilon\nabla\varphi\cdot\nabla\overline{\varphi'}\,dx,\qquad \forall \varphi\in\mathring{\mrm{V}}^1_{\pm\beta}(\Omega),\,\varphi'\in\mathring{\mrm{V}}^1_{\mp\beta}(\Omega). \end{equation} Working as in \cite{BonCheCla13} for the 2D case of the corner, one can show that there is $\beta_0>0$ (depending only on $\kappa_{\varepsilon}$ and $\varpi$) such that for all $\beta\in(0;\beta_0)$, $A^{\beta}_{\varepsilon}$ is Fredholm of index $+1$ while $A^{-\beta}_{\varepsilon}$ is Fredholm of index $-1$. We remind the reader that for a bounded linear operator between two Banach spaces $T:\mrm{X}\to\mrm{Y}$ whose range is closed, its index is defined as $\mrm{ind}\,T:=\dim\ker\,T-\dim\mrm{coker}\,\,T$, with $\dim\mrm{coker}\,\,T=\dim\,(\mrm{Y}/\mrm{range}(T))$. 
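For completeness, here is a sketch of the proof of the Hardy inequality stated above: for $u\in\mathscr{C}^1_0[0;1)$, an integration by parts and the Cauchy--Schwarz inequality give

```latex
\[
\int_{0}^{1}|u|^{2}\,dr
  \;=\;\big[\,r\,|u(r)|^{2}\,\big]_{0}^{1}
     -2\,\Re e\int_{0}^{1} r\,u'(r)\,\overline{u(r)}\,dr
  \;\le\; 2\left(\int_{0}^{1} r^{2}|u'|^{2}\,dr\right)^{1/2}
          \left(\int_{0}^{1}|u|^{2}\,dr\right)^{1/2},
\]
% the boundary term vanishing because u is supported away from r=1. Dividing by
% (\int_0^1 |u|^2\,dr)^{1/2} and squaring yields the stated inequality, since
% |u(r)|^2 r^{-2}\, r^2 = |u(r)|^2.
```

Writing $\varphi\in\mathscr{C}^{\infty}_0(\Omega\setminus \{O\})$ in spherical coordinates, applying this inequality along rays (after rescaling the radial interval) and integrating over the angular variables then gives the estimate $\|r^{-1}\varphi\|_{\Omega}\le C\,\|\nabla\varphi\|_{\Omega}$.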
On the other hand, application of Kondratiev calculus guarantees that if $\varphi\in\mathring{\mrm{V}}^1_{\beta}(\Omega)$ is such that $A^{+\beta}_{\varepsilon}\varphi\in(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ (the important point here being that $(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}\subset (\mathring{\mrm{V}}^1_{-\beta}(\Omega))^{\ast}$), then there holds the following representation \begin{equation}\label{PropoRepresentation} \varphi=c_-\,s^-+c_+\,s^++\tilde{\varphi}\qquad\mbox{ with }c_{\pm}\in\mathbb{C}\mbox{ and }\tilde{\varphi}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega). \end{equation} Note that $s^\pm$, with $s^\pm$ defined by (\ref{DefSinguEss}), belongs to $\mathring{\mrm{V}}^1_{\beta}(\Omega)$, but not to $\mrm{H}^1_0(\Omega)$, and a fortiori not to $\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. Then introduce the space $\mathring{\mrm{V}}^{\mrm{out}}:=\mrm{span}(s^+)\,\oplus\,\mathring{\mrm{V}}^1_{-\beta}(\Omega)$, endowed with the norm \begin{equation}\label{WPNonStandard}
\|\varphi\|_{\mrm{V}^{\mrm{out}}}=(|c|^2+\|\tilde{\varphi}\|^2_{\mrm{V}^1_{-\beta}(\Omega)})^{1/2},\qquad \forall\varphi=c\,s^++\tilde{\varphi}\in\mathring{\mrm{V}}^{\mrm{out}}, \end{equation} which is a Banach space. Introduce also the operator $A^{\mrm{out}}_{\varepsilon}$ such that for all $\varphi=c\,s^++\tilde{\varphi}\in\mathring{\mrm{V}}^{\mrm{out}}$ and $\varphi'\in\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$, \[ \langle A^{\mrm{out}}_{\varepsilon}\varphi,\varphi'\rangle=\int_{\Omega}\varepsilon\nabla\varphi\cdot\nabla\overline{\varphi'}\,dx=-c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{\varphi'}\,dx+\int_{\Omega}\varepsilon\nabla\tilde{\varphi}\cdot\nabla\overline{\varphi'}\,dx. \]
Note that due to the features of the cut-off function $\chi$, we have $\mrm{div}(\varepsilon\nabla s^+)\in\mrm{L}^2(\Omega)$. And since $\mrm{div}(\varepsilon\nabla s^+)=0$ in a neighbourhood of $O$, we observe that there is a constant $C>0$ such that $|\langle A^{\mrm{out}}_{\varepsilon}\varphi,\varphi'\rangle|\le C\,\|\varphi\|_{\mrm{V}^{\mrm{out}}}\,\|\varphi'\|_{\mrm{V}^1_{\beta}(\Omega)}$. The density of $\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$ in $\mathring{\mrm{V}}^1_{\beta}(\Omega)$ then allows us to extend $A^{\mrm{out}}_{\varepsilon}$ as a continuous operator from $\mathring{\mrm{V}}^{\mrm{out}}$ to $(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$. And we have \[ \langle A^{\mrm{out}}_{\varepsilon}\varphi,\varphi'\rangle=-c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{\varphi'}\,dx+\int_{\Omega}\varepsilon\nabla\tilde{\varphi}\cdot\nabla\overline{\varphi'}\,dx,\quad \forall\varphi=c\,s^++\tilde{\varphi},\,\varphi'\in\mathring{\mrm{V}}^1_{\beta}(\Omega). \] Working as in \cite{BonCheCla13} (see Proposition 4.4.) for the 2D case of the corner, one can prove that $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is Fredholm of index zero and that $\ker\,A^{\mrm{out}}_{\varepsilon}=\ker\,A^{-\beta}_{\varepsilon}$. In order to simplify the analysis below, we shall make the following assumption.\\ \newline \textbf{Assumption 3.} \textit{We assume that $\varepsilon$ is such that for $\beta\in (0;\beta_0)$, $A^{-\beta}_{\varepsilon}$ is injective, which guarantees that $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is an isomorphism.}\\ \newline In what follows, we shall also need to work with the usual Laplace operator in weighted Sobolev spaces. 
For $\gamma\in\mathbb{R}$, define $A^{\gamma}:\mathring{\mrm{V}}^1_{\gamma}(\Omega)\to(\mathring{\mrm{V}}^1_{-\gamma}(\Omega))^{\ast}$ such that \[ \langle A^{\gamma}\varphi,\varphi'\rangle=\int_{\Omega}\nabla\varphi\cdot\nabla\overline{\varphi'}\,dx,\qquad \forall \varphi\in\mathring{\mrm{V}}^1_{\gamma}(\Omega),\,\varphi'\in\mathring{\mrm{V}}^1_{-\gamma}(\Omega) \] (observe that there is no $\varepsilon$ here). Combining the theory presented in \cite{KoMR97} (see also the seminal article \cite{Kond67} as well as the monographs \cite{MaNP00,NaPl94}) together with the result of \cite[Corollary 2.2.1]{KoMR01}, we get the following proposition. \begin{proposition}\label{PropoLaplaceOp} \noindent For all $\gamma\in(-1/2;1/2)$, the operator $A^{\gamma}:\mathring{\mrm{V}}^1_{\gamma}(\Omega)\to(\mathring{\mrm{V}}^1_{-\gamma}(\Omega))^{\ast}$ is an isomorphism. \end{proposition} \noindent Note in particular that for $\gamma=0$, this proposition simply says that $\Delta:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is an isomorphism. In order to have an isomorphism result for both $A^{\mrm{out}}_{\varepsilon}$ and $A^{\beta}$, we shall often make the assumption that the weight $\beta$ is such that \begin{equation}\label{betainfbeta_0et1demi} 0<\beta<\min(1/2,\beta_0) \end{equation} where $\beta_0$ is defined after (\ref{DefOperatorWeight}). \newline To measure electromagnetic fields in weighted Sobolev norms, in the following we shall work in the spaces \[ \begin{array}{rcl} \boldsymbol{\mrm{V}}^0_{\beta}(\Omega)&:=&(\mrm{V}^0_{\beta}(\Omega))^3\\[4pt] \boldsymbol{\mathring{\mrm{V}}}^1_{\beta}(\Omega)&:=&(\mathring{\mrm{V}}^1_{\beta}(\Omega))^3. \end{array} \] Note that we have $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)\subset \boldsymbol{\mrm{L}}^2(\Omega)\subset \boldsymbol{\mrm{V}}^0_{\beta}(\Omega)$.
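The last inclusions (and their scalar analogues used above) follow directly from the boundedness of $\Omega$: writing $d:=\sup_{x\in\Omega}|x|$ (a notation introduced only for this sketch), for $\beta>0$ we have

```latex
\[
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}
   =\|r^{\beta}\boldsymbol{u}\|_{\Omega}\le d^{\,\beta}\,\|\boldsymbol{u}\|_{\Omega}
\qquad\mbox{and}\qquad
\|\boldsymbol{u}\|_{\Omega}
   =\|r^{\beta}\,r^{-\beta}\boldsymbol{u}\|_{\Omega}
   \le d^{\,\beta}\,\|r^{-\beta}\boldsymbol{u}\|_{\Omega}
   = d^{\,\beta}\,\|\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)},
\]
% so both embeddings are continuous with norm at most d^{\beta}.
```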
\section{Analysis of the problem for the electric component}\label{SectionChampE}
In this section, we consider the problem for the electric field associated with (\ref{Eqs Maxwell})-(\ref{CL Maxwell}). Since the scalar problem involving $\varepsilon$ is well-posed in a non-standard framework involving the propagating singularity $s^+$ (see (\ref{WPNonStandard})), we shall add its gradient to the space for the electric field. Then we define a variational problem in this unusual space and prove its well-posedness. Finally, we justify our choice by a limiting absorption principle. \subsection{A well-chosen space for the electric field} Define the space of electric fields with the divergence-free condition \begin{equation} \label{eq-XNout} \begin{array}{l}
\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon):=\{\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}},\ c\in\mathbb{C},\,\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{L}}^2(\Omega)\,|\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\boldsymbol{\mrm{L}}^2(\Omega),\,\mrm{div}(\varepsilon\boldsymbol{u})=0\mbox{ in }\Omega\setminus\{O\},\\ \hspace{11.6cm}\boldsymbol{u}\times\nu=0\mbox{ on }\partial\Omega\}. \end{array} \end{equation} In this definition, for $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}$, the condition $\mrm{div}(\varepsilon\boldsymbol{u})=0$ in $\Omega\setminus\{O\}$ means that there holds \begin{equation}\label{defDivFaible} \int_\Omega \varepsilon\boldsymbol{u}\cdot\nabla\varphi\,dx=0,\qquad\forall\varphi\in\mathscr{C}^{\infty}_0(\Omega\setminus\{O\}), \end{equation} which after integration by parts and by density of $\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$ in $\mrm{H}^1_0(\Omega)$ is equivalent to \begin{equation}\label{defDivFaible-bis} -c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\varphi\,dx+\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\varphi\,dx=0,\qquad\forall\varphi\in\mathscr{C}^{\infty}_0(\Omega). \end{equation} Note that we have $\boldsymbol{\mrm{X}}_N(\varepsilon)\subset\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ and that $\dim\,(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon))=1$ (see Lemma \ref{LemmaCodim} in Appendix). For $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}$ with $c\in\mathbb{C}$ and $\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{L}}^2(\Omega)$, we set \[
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)}=(|c|^2+\|\tilde{\boldsymbol{u}}\|^2_{\Omega}+\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|^2_{\Omega})^{1/2}\,. \] Endowed with this norm, $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is a Banach space. \begin{lemma}\label{LemmaNormeEquiv} Pick some $\beta$ satisfying (\ref{betainfbeta_0et1demi}). Under Assumptions 1 and 3, for any $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, we have $\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and there is a constant $C>0$ independent of $\boldsymbol{u}$ such that \begin{equation}\label{EstimBas}
|c|+\|\tilde{\boldsymbol{u}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega}. \end{equation}
As a consequence, the norm $\|\cdot\|_{\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)}$ is equivalent to the norm $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\Omega}$ in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, and $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ endowed with the inner product $(\boldsymbol{\mrm{curl}}\,\cdot,\boldsymbol{\mrm{curl}}\,\cdot)_{\Omega}$ is a Hilbert space.
\end{lemma} \begin{proof} Let $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}$ be an element of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. The field $\tilde{\boldsymbol{u}}$ is in $\boldsymbol{\mrm{L}}^2(\Omega)$ and therefore decomposes as \begin{equation}\label{DecompoHelm1} \tilde{\boldsymbol{u}}=\nabla\varphi+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi} \end{equation} with $\varphi\in\mrm{H}^1_{0}(\Omega)$ and $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_T(1)$ (item $iv)$ of Proposition \ref{propoPotential}). Moreover, since $\boldsymbol{u}\times\nu=0$ on $\partial\Omega$ and since both $s^+$ and $\varphi$ vanish on $\partial\Omega$, we know that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\times\nu=0$ on $\partial\Omega$. Then noting that $-\boldsymbol{\Delta\psi}=\boldsymbol{\mrm{curl}}\,\tilde{\boldsymbol{u}}=\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$, we deduce from Proposition \ref{propoLaplacienVect} that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi} \in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ with the estimate \begin{equation}\label{DecompoHelm2}
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega}. \end{equation} Using (\ref{defDivFaible}), the condition $\mrm{div}(\varepsilon\boldsymbol{u})=0$ in $\Omega\setminus\{O\}$ implies \[ \int_{\Omega}\varepsilon\nabla(c\,s^++\varphi)\cdot\nabla\varphi'\,dx=-\int_{\Omega} \varepsilon \boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\nabla\varphi'\,dx,\qquad\forall\varphi'\in \mathring{\mrm{V}}^1_{-\beta}(\Omega), \] which means exactly that $A^{\beta}_{\varepsilon}(c\,s^++\varphi)=-\mrm{div}(\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\in(\mathring{\mrm{V}}^1_{-\beta}(\Omega))^{\ast}$. Since additionally $-\mrm{div}(\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\in(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$, from (\ref{PropoRepresentation}) we know that there are some complex constants $c_{\pm}$ and some $\tilde{\varphi}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$ such that \[ c\,s^++\varphi = c_-\,s^-+c_+\,s^++\tilde{\varphi}. \] This implies $c_-=0$, $c_+=c$ (because $\varphi\in\mrm{H}^1_0(\Omega)$) and so $\varphi=\tilde{\varphi}$ is an element of $\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. This shows that $c\,s^++\varphi\in\mathring{\mrm{V}}^{\mrm{out}}$ and that $A^{\mrm{out}}_{\varepsilon}(c\,s^++\varphi)=-\mrm{div}(\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}).$ Since $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is an isomorphism, we have the estimate \begin{equation}\label{DecompoHelm3}
|c|+\|\varphi\|_{\mrm{V}^1_{-\beta}(\Omega)} \le C\,\|\mrm{div}(\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\|_{(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}. \end{equation} Finally gathering (\ref{DecompoHelm1})--(\ref{DecompoHelm3}), we obtain that $\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and that the estimate (\ref{EstimBas}) is valid. Noting that
$\|\tilde{\boldsymbol{u}}\|_{\Omega}\leq C\,\|\tilde{\boldsymbol{u}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}$, this implies that the norms $\|\cdot\|_{\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)}$ and $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\Omega}$ are equivalent in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. \end{proof} \noindent Thanks to the previous lemma and by density of $\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$ in $\mathring{\mrm{V}}^1_{\beta}(\Omega)$, the condition (\ref{defDivFaible-bis}) for $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is equivalent to \begin{equation}\label{defDivFaible-ter} -c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\varphi\,dx+\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\varphi\,dx=0,\qquad\forall\varphi\in\mathring{\mrm{V}}^1_{\beta}(\Omega) \end{equation} where all the terms are well-defined as soon as $\beta$ satisfies (\ref{betainfbeta_0et1demi}).
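Let us sketch why the two terms in (\ref{defDivFaible-ter}) are indeed well-defined when $\beta$ satisfies (\ref{betainfbeta_0et1demi}). Since $\mrm{div}(\varepsilon\nabla s^{+})\in\mrm{L}^2(\Omega)$, since $\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ by Lemma \ref{LemmaNormeEquiv}, and since $r$ is bounded on $\Omega$ (with in particular $0<\beta<1$), we have

```latex
\[
\Big|\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\varphi\,dx\Big|
  \le \|\mrm{div}(\varepsilon\nabla s^{+})\|_{\Omega}\,\|\varphi\|_{\Omega}
  \le C\,\|\mrm{div}(\varepsilon\nabla s^{+})\|_{\Omega}\,\|r^{\beta-1}\varphi\|_{\Omega}
  \le C\,\|\varphi\|_{\mrm{V}^1_{\beta}(\Omega)},
\]
\[
\Big|\int_{\Omega}\varepsilon\,\tilde{\boldsymbol{u}}\cdot\nabla\varphi\,dx\Big|
  \le \|\varepsilon\|_{\mrm{L}^{\infty}(\Omega)}\,
      \|r^{-\beta}\tilde{\boldsymbol{u}}\|_{\Omega}\,\|r^{\beta}\nabla\varphi\|_{\Omega}
  \le C\,\|\tilde{\boldsymbol{u}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}\,
      \|\varphi\|_{\mrm{V}^1_{\beta}(\Omega)},
\]
% with C>0 depending only on \Omega, \beta and \varepsilon.
```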
\subsection{Definition of the problem for the electric field}
Our objective is to define the problem for the electric field as a variational formulation set in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. For some $\gamma>0$, let $\boldsymbol{J}$ be an element of $\boldsymbol{\mrm{V}}^0_{-\gamma}(\Omega)$ such that $\mrm{div}\,\boldsymbol{J}=0$ in $\Omega$. Consider the problem \begin{equation}\label{MainPbE}
\begin{array}{|l}
\mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\mbox{ such that }\\
\displaystyle\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx
= i\omega\int_\Omega \boldsymbol{J}\cdot\overline{\boldsymbol{v}}\,dx,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon),
\end{array} \end{equation} where the term \begin{equation}\label{defInt} \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx \end{equation} has to be carefully defined. The difficulty comes from the fact that $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is not a subspace of $\boldsymbol{\mrm{L}}^2(\Omega)$ so that this quantity cannot be considered as a classical integral. \\ Let $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^{+}+\tilde{\boldsymbol{u}}\in \boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. First, for $\tilde{\boldsymbol{v}}\in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ with $\beta>0$, it is natural to set \begin{equation}
\label{def-newint-vtilde}\fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx:=\int_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx. \end{equation} To complete the definition, we have to give a sense to (\ref{defInt}) when $\boldsymbol{v}=\nabla s^+$. Proceeding as for the derivation of (\ref{defDivFaible-ter}), we start from the identity \[ \int_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\nabla\varphi}\,dx=-c_{\boldsymbol{u}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\overline{\varphi} \,dx+\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\overline{\nabla\varphi}\,dx,\qquad\forall\varphi\in\mathscr{C}^{\infty}_0(\Omega\setminus\{O\}). \] By density of $\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$ in $\mathring{\mrm{V}}^1_{\beta}(\Omega)$, this leads to set \begin{equation}\label{def-newint-V1beta}
\fint_\Omega \varepsilon \boldsymbol{u}\cdot \overline{\nabla\varphi}\,dx:=-c_{\boldsymbol{u}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\overline{\varphi} \,dx+\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\overline{\nabla\varphi}\,dx,\qquad\forall\varphi\in\mathring{\mrm{V}}^1_{\beta}(\Omega). \end{equation} With this definition, condition \eqref{defDivFaible-ter} can be written as $$
\fint_\Omega \varepsilon \boldsymbol{u}\cdot \overline{\nabla \varphi}\,dx=0,\qquad\forall\varphi\in\mathring{\mrm{V}}^1_{\beta}(\Omega). $$ In particular, since $s^+\in \mathring{\mrm{V}}^1_{\beta}(\Omega)$, for all $\boldsymbol{u}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ we have \begin{equation}\label{defDivFaible-4} \fint_\Omega \varepsilon \boldsymbol{u}\cdot \nabla \overline{s^+}\,dx=0\qquad\mbox{and so}\qquad \int_\Omega \varepsilon \tilde{\boldsymbol{u}}\cdot\nabla \overline{s^{+}}\,dx=c_{\boldsymbol{u}} \int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\,\overline{s^{+}}\,dx. \end{equation} Finally, for all $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^{+}+\tilde{\boldsymbol{u}}$ and $\boldsymbol{v}=c_{\boldsymbol{v}}\nabla s^{+}+\tilde{\boldsymbol{v}}$ in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, using \eqref{def-newint-vtilde} and \eqref{defDivFaible-4}, we find \[ \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx=\int_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx=c_{\boldsymbol{u}}\int_\Omega \varepsilon \nabla s^{+}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx+\int_\Omega \varepsilon \tilde{\boldsymbol{u}}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx. \] But since $\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, we deduce from the second identity of \eqref{defDivFaible-4} that \begin{equation}\label{RelationPart} \int_\Omega \varepsilon \nabla s^+\cdot\overline{\tilde{\boldsymbol{v}}}\,dx=\overline{c_{\boldsymbol{v}}} \int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}})\,s^+\,dx. \end{equation} Summing up, we get \begin{equation}\label{newint-epsuvbar}
\fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx=c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}})\,s^+\,dx+\int_\Omega \varepsilon \tilde{\boldsymbol{u}}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx,\qquad \forall\boldsymbol{u}, \boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon). \end{equation} \begin{remark} Even though we use an integral symbol to keep the usual appearance of the formulas and to ease the reading, it is important to regard this new quantity as a sesquilinear form $$(\boldsymbol{u}, \boldsymbol{v}) \mapsto \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx$$ on $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\times\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. In particular, we point out that this sesquilinear form is not hermitian on $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\times\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. Indeed, we have \[ \overline{\fint_\Omega \varepsilon \boldsymbol{v}\cdot\overline{\boldsymbol{u}}\,dx}=\int_\Omega \varepsilon \tilde{\boldsymbol{u}}\cdot\overline{\tilde{\boldsymbol{v}}}\,dx+c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx \] so that \begin{equation}\label{FormulaAdjoint} \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx-\overline{\fint_\Omega \varepsilon \boldsymbol{v}\cdot\overline{\boldsymbol{u}}\,dx}=2ic_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\,\Im m\,\bigg(\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,s^+\,dx\bigg). \end{equation} But Lemma \ref{lemmaNRJ} and assumption \eqref{eq-intPhineq0} show that \[ \Im m\,\bigg(\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,s^+\,dx\bigg) \neq 0. \] \end{remark} \noindent In the sequel, we denote by $a_N(\cdot,\cdot)$ (resp. $\ell_N(\cdot)$) the sesquilinear form (resp. 
the antilinear form) appearing on the left-hand side (resp. right-hand side) of (\ref{MainPbE}).
\subsection{Equivalent formulation} Define the space \[ \boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,):=\mrm{span}(\nabla s^+)\oplus\boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)\supset\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon) \] (without the divergence-free condition) and consider the problem \begin{equation}\label{Formu2}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in \boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)\mbox{ such that }\\[2pt] a_N(\boldsymbol{u},\boldsymbol{v})= \ell_N(\boldsymbol{v}),\quad\forall \boldsymbol{v} \in \boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,), \end{array} \end{equation} where the definition of
$$\fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx$$
has to be extended to the space $\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$. Working exactly as in the beginning of the proof of Lemma \ref{LemmaNormeEquiv}, one can show that any $\boldsymbol{u}\in\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$ admits the decomposition \begin{equation}\label{DecompoGene} \boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^++\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}}, \end{equation} with $c_{\boldsymbol{u}}\in\mathbb{C}$, $\varphi_{\boldsymbol{u}}\in\mrm{H}^1_{0}(\Omega)$ and $\boldsymbol{\psi}_{\boldsymbol{u}}\in \boldsymbol{\mrm{X}}_T(1)$, such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}}\in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$, for $\beta$ satisfying (\ref{betainfbeta_0et1demi}). Then, for all $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^++\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}}$ and $\boldsymbol{v}=c_{\boldsymbol{v}}\nabla s^++\nabla\varphi_{\boldsymbol{v}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}$ in $\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$, a natural extension of the previous definitions leads to set \begin{equation}\label{DefExtensionH} \begin{array}{lcl} \fint_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx&:=&\int_{\Omega}\varepsilon\,(\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}})\cdot(\nabla\overline{\varphi_{\boldsymbol{v}}}+\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}_{\boldsymbol{v}}})\,dx\\[10pt] & & +\int_{\Omega}c_{\boldsymbol{u}}\,\varepsilon\nabla s^+\cdot\overline{\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}}+\overline{c_{\boldsymbol{v}}}\,\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}}\cdot\nabla \overline{s^+}\,dx\\[10pt] & & 
-\int_{\Omega}c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\,\mrm{div}(\varepsilon\nabla s^+)\overline{s^+}+c_{\boldsymbol{u}}\,\mrm{div}(\varepsilon\nabla s^+)\overline{\varphi_{\boldsymbol{v}}}+\overline{c_{\boldsymbol{v}}}\,\varphi_{\boldsymbol{u}}\mrm{div}(\varepsilon\nabla \overline{s^+})\,dx. \end{array} \end{equation} Note that (\ref{DefExtensionH}) is indeed an extension of (\ref{newint-epsuvbar}). To show it, first observe that for $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^++\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}}$, $\boldsymbol{v}=c_{\boldsymbol{v}}\nabla s^++\nabla\varphi_{\boldsymbol{v}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}$ in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, the proof of Lemma \ref{LemmaNormeEquiv} guarantees that $\varphi_{\boldsymbol{u}}$, $\varphi_{\boldsymbol{v}}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$ with $\beta$ satisfying (\ref{betainfbeta_0et1demi}). This allows us to integrate by parts in the last two terms of (\ref{DefExtensionH}) to get \begin{equation}\label{Calcul1} \begin{array}{lcl} \fint_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx&:=&\int_{\Omega}\varepsilon\,(\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}})\cdot(\nabla\overline{\varphi_{\boldsymbol{v}}}+\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}_{\boldsymbol{v}}})\,dx\\[10pt] & & +\int_{\Omega}c_{\boldsymbol{u}}\,\varepsilon\nabla s^+\cdot(\nabla\overline{\varphi_{\boldsymbol{v}}}+\overline{\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}})+\overline{c_{\boldsymbol{v}}}\,\varepsilon\,(\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}})\cdot\nabla \overline{s^+}\,dx\\[10pt] & & -c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{s^+}\,dx. 
\end{array} \end{equation} Using (\ref{defDivFaible-4}), (\ref{RelationPart}), the second line above can be written as \begin{equation}\label{Calcul2} \begin{array}{ll} &\int_{\Omega}c_{\boldsymbol{u}}\,\varepsilon\nabla s^+\cdot(\nabla\overline{\varphi_{\boldsymbol{v}}}+\overline{\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}})+\overline{c_{\boldsymbol{v}}}\,\varepsilon\,(\nabla\varphi_{\boldsymbol{u}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{u}})\cdot\nabla \overline{s^+}\,dx\\[10pt] =&c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla \overline{s^+})s^+\,dx+c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{s^+}\,dx. \end{array} \end{equation} Inserting (\ref{Calcul2}) in (\ref{Calcul1}) yields exactly (\ref{newint-epsuvbar}). \begin{lemma}\label{lemmaEquivE} Under Assumptions 1 and 3, the field $\boldsymbol{u}$ is a solution of (\ref{MainPbE}) if and only if it solves the problem (\ref{Formu2}). \end{lemma} \begin{proof} If $\boldsymbol{u}\in\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$ satisfies (\ref{Formu2}), then taking $\boldsymbol{v}=\nabla\varphi$ with $\varphi\in\mathscr{C}^{\infty}_0(\Omega\setminus\{O\})$ in (\ref{Formu2}), and using that $\mrm{div}\,\boldsymbol{J}=0$ in $\Omega$, we get \eqref{defDivFaible}, which implies that $\boldsymbol{u}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. This shows that $\boldsymbol{u}$ solves (\ref{MainPbE}).\\ \newline Now assume that $\boldsymbol{u}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\subset\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$ is a solution of (\ref{MainPbE}). Let $\boldsymbol{v}$ be an element of $\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$. 
As in (\ref{DecompoGene}), we have the decomposition \begin{equation}\label{DecompoGenev} \boldsymbol{v}=c_{\boldsymbol{v}}\nabla s^++\nabla\varphi_{\boldsymbol{v}}+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}, \end{equation} with $c_{\boldsymbol{v}}\in\mathbb{C}$, $\varphi_{\boldsymbol{v}}\in\mrm{H}^1_{0}(\Omega)$ and $\boldsymbol{\psi}_{\boldsymbol{v}}\in \boldsymbol{\mrm{X}}_T(1)$ such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}\in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for all $\beta$ satisfying (\ref{betainfbeta_0et1demi}). By Assumption 3, there is $\zeta \in \mathring{\mrm{V}}^{\mrm{out}}$ such that \begin{equation}\label{ResolPbAdd} A^{\mrm{out}}_{\varepsilon}\zeta=-\mrm{div}(\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}) \in(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast} . \end{equation} The function $\zeta$ decomposes as $\zeta=\alpha s^++\tilde{\zeta}$ with $\tilde{\zeta}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. Finally, set \[ \hat{\boldsymbol{v}}=\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_{\boldsymbol{v}}-\nabla \zeta=\boldsymbol{v}-\nabla(c_{\boldsymbol{v}}s^++\varphi_{\boldsymbol{v}}+\zeta). \] The function $\hat{\boldsymbol{v}}$ is in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, it satisfies $\boldsymbol{\mrm{curl}}\,\hat{\boldsymbol{v}}=\boldsymbol{\mrm{curl}}\,\boldsymbol{v}$ and from \eqref{defDivFaible-4}, we deduce that \[ \fint_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\hat{\boldsymbol{v}}}\,dx=\fint_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx. 
\] Since moreover $\boldsymbol{J}\in\boldsymbol{\mrm{V}}^0_{-\gamma}(\Omega)$ for some $\gamma>0$ and $\mrm{div}\,\boldsymbol{J}=0$ in $\Omega$, we have \[ \int_\Omega \boldsymbol{J}\cdot\overline{\hat{\boldsymbol{v}}}\,dx=\int_\Omega \boldsymbol{J}\cdot\overline{\boldsymbol{v}}\,dx. \] This shows that $a_N(\boldsymbol{u},\boldsymbol{v})=a_N(\boldsymbol{u},\hat{\boldsymbol{v}})=\ell_N(\hat{\boldsymbol{v}})=\ell_N(\boldsymbol{v})$, which ends the proof. \end{proof} \noindent In the following, we shall work with the formulation (\ref{MainPbE}) set in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. The reason is that, as usual in the analysis of Maxwell's equations, the divergence free condition yields a compactness property allowing us to deal with the term involving the frequency $\omega$. \subsection{Main analysis for the electric field}\label{subsec-mainanalysisE} \noindent Define the continuous operators $\mathbb{A}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ and $\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ such that for all $\boldsymbol{u},\,\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, \[ \langle\mathbb{A}^{\mrm{out}}_N\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\qquad\langle\mathbb{K}^{\mrm{out}}_N\boldsymbol{u},\boldsymbol{v}\rangle = \fint_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx. \] With this notation, we have $\langle(\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N)\boldsymbol{u},\boldsymbol{v}\rangle=a_{N}(\boldsymbol{u},\boldsymbol{v})$.
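\noindent In operator form, denoting by $\ell_N\in(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ the antilinear form $\boldsymbol{v}\mapsto i\omega\textstyle\int_\Omega \boldsymbol{J}\cdot\overline{\boldsymbol{v}}\,dx$ appearing in the proof of Lemma \ref{lemmaEquivE}, the problem (\ref{MainPbE}) thus reads:
\[
\mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\mbox{ such that }\quad(\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N)\boldsymbol{u}=\ell_N\ \mbox{ in }(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}.
\]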
\begin{proposition}\label{propoUniqueness} Under Assumptions 1--3, the operator $\mathbb{A}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is an isomorphism. \end{proposition} \begin{proof} Let us construct a continuous operator $\mathbb{T}:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ such that for all $\boldsymbol{u},\,\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, \[ \int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,(\overline{\mathbb{T}\boldsymbol{v}})\,dx=\int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx. \] To proceed, we adapt the method presented in \cite{BoCC14}. Assume that $\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is given. We construct $\mathbb{T}\boldsymbol{v}$ in three steps.\\ \newline 1) Since $\boldsymbol{\mrm{curl}}\,\boldsymbol{v}\in\boldsymbol{\mrm{L}}^2(\Omega)$ and $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ is an isomorphism, there is a unique $\zeta\in\mrm{H}^1_{\#}(\Omega)$ such that \[ \int_{\Omega}\mu\nabla\zeta\cdot\nabla\zeta'\,dx=\int_{\Omega}\mu\,\boldsymbol{\mrm{curl}}\,\boldsymbol{v}\cdot\nabla\zeta'\,dx,\qquad\forall\zeta'\in\mrm{H}^1_{\#}(\Omega). \] Then the field $\mu(\boldsymbol{\mrm{curl}}\,\boldsymbol{v}-\nabla\zeta)\in\boldsymbol{\mrm{L}}^2(\Omega) $ is divergence free in $\Omega$ and satisfies $\mu(\boldsymbol{\mrm{curl}}\,\boldsymbol{v}-\nabla\zeta)\cdot\nu=0$ on $\partial\Omega$.\\ \newline 2) From item $ii)$ of Proposition \ref{propoPotential}, we infer that there is $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_N(1)$ such that $$ \mu(\boldsymbol{\mrm{curl}}\,\boldsymbol{v}-\nabla\zeta)= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}. 
$$ Thanks to Lemma \ref{LemmaWeightedClaBis}, we deduce that $\boldsymbol{\psi}\in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for all $\beta\in(0;1/2)$ and a fortiori for $\beta$ satisfying (\ref{betainfbeta_0et1demi}).\\ \newline 3) Suppose now that $\beta$ satisfies (\ref{betainfbeta_0et1demi}). Then we know from the previous step that $\mrm{div}(\varepsilon \boldsymbol{\psi})\in(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$. On the other hand, by Assumption 3, $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is an isomorphism. Consequently we can introduce $\varphi\in \mathring{\mrm{V}}^{\mrm{out}}$ such that $A^{\mrm{out}}_{\varepsilon}\varphi=-\mrm{div}(\varepsilon \boldsymbol{\psi})$.\\ \newline Finally, we set $\mathbb{T}\boldsymbol{v}=\boldsymbol{\psi}-\nabla \varphi$. Clearly $\mathbb{T}\boldsymbol{v}$ is an element of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. Moreover, for all $\boldsymbol{u}$, $\boldsymbol{v}$ in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, we have \[ \begin{array}{lcl} \displaystyle\int_\Omega \mu^{-1} \boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\mathbb{T}\boldsymbol{v}}\,dx&=&\displaystyle\int_\Omega \mu^{-1} \boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}}\,dx\\[10pt] &=&\displaystyle\int_\Omega \boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx-\displaystyle\int_\Omega \boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\nabla\overline{\zeta}\,dx\\[10pt] &=&\displaystyle\int_\Omega \boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx. \end{array} \] From Lemma \ref{LemmaNormeEquiv} and the Lax-Milgram theorem, we deduce that $\mathbb{T}^{\ast}\mathbb{A}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is an isomorphism. 
By symmetry, exchanging the roles of $\boldsymbol{u}$ and $\boldsymbol{v}$, one checks that $\mathbb{T}^{\ast}\mathbb{A}^{\mrm{out}}_N=\mathbb{A}^{\mrm{out}}_N\mathbb{T}$, which allows us to conclude that $\mathbb{A}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is an isomorphism. \end{proof}
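\noindent For the reader's convenience, let us sketch why $\mathbb{T}^{\ast}\mathbb{A}^{\mrm{out}}_N=\mathbb{A}^{\mrm{out}}_N\mathbb{T}$ (we use that $\mu$ is real-valued, so that the defining identity of $\mathbb{T}$ can be conjugated). For all $\boldsymbol{u},\,\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, the defining identity of $\mathbb{T}$ applied with the roles of $\boldsymbol{u}$ and $\boldsymbol{v}$ exchanged gives
\[
\langle\mathbb{A}^{\mrm{out}}_N\mathbb{T}\boldsymbol{u},\boldsymbol{v}\rangle=\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,(\mathbb{T}\boldsymbol{u})\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx=\overline{\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{v}\cdot\boldsymbol{\mrm{curl}}\,(\overline{\mathbb{T}\boldsymbol{u}})\,dx}=\overline{\int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{v}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{u}}\,dx}=\int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,
\]
while, by definition of the adjoint, $\langle\mathbb{T}^{\ast}\mathbb{A}^{\mrm{out}}_N\boldsymbol{u},\boldsymbol{v}\rangle=\langle\mathbb{A}^{\mrm{out}}_N\boldsymbol{u},\mathbb{T}\boldsymbol{v}\rangle=\int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx$ as well, so that the two operators coincide.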
\begin{proposition}\label{PropCompactE} Under Assumptions 1 and 3, if $(\boldsymbol{u}_n=c_n\nabla s^++\tilde{\boldsymbol{u}}_n)$ is a sequence which is bounded in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, then we can extract a subsequence such that $(c_n)$ and $(\tilde{\boldsymbol{u}}_n)$ converge respectively in $\mathbb{C}$ and in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for $\beta$ satisfying (\ref{betainfbeta_0et1demi}). As a consequence, the operator $\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is compact. \end{proposition} \begin{proof}
Let $(\boldsymbol{u}_n)$ be a bounded sequence of elements of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. From the proof of Lemma \ref{LemmaNormeEquiv}, we know that for $n\in\mathbb{N}$, we have \begin{equation}\label{DecompoCompa} \boldsymbol{u}_n=c_n\nabla s^++\nabla\varphi_n+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_n \end{equation} where the sequences $(c_n)$, $(\varphi_n)$, $(\boldsymbol{\psi}_n)$ and $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_n)$ are bounded respectively in $\mathbb{C}$, $\mathring{\mrm{V}}^1_{-\beta}(\Omega)$, $\boldsymbol{\mrm{X}}_T(1)$ and $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Observing that $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}_n=\boldsymbol{\mrm{curl}}\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n}=-\boldsymbol{\Delta \psi_n} $ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$, we deduce from Proposition \ref{propoLaplacienVectcompact} that there exists a subsequence such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n}$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Moreover, by \eqref{DecompoHelm3}, we have \[
|c_n-c_m|+\|\varphi_n-\varphi_m\|_{\mrm{V}^1_{-\beta}(\Omega)}\leq C\|\boldsymbol{\mrm{curl}}\,(\boldsymbol{\psi}_n-\boldsymbol{\psi}_m)\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}, \] which implies that $(c_n)$ and $(\varphi_n)$ converge respectively in $\mathbb{C}$ and in $\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. From (\ref{DecompoCompa}), we see that this is enough to prove the first part of the proposition.\\ Finally, observing that \[
\|\mathbb{K}_N^{\mrm{out}}\boldsymbol{u}\|_{(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}} \le C\,(\|\tilde{\boldsymbol{u}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}+|c_{\boldsymbol{u}}|), \] we deduce that $\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is a compact operator. \end{proof} \noindent We can now state the main theorem of the analysis of the problem for the electric field. \begin{theorem}\label{MainThmE} Under Assumptions 1--3, for all $\omega\in\mathbb{R}$ the operator $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is Fredholm of index zero. \end{theorem} \begin{proof} Since $\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is compact (Proposition \ref{PropCompactE}) and $\mathbb{A}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is an isomorphism (Proposition \ref{propoUniqueness}), $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is Fredholm of index zero. \end{proof} \noindent The previous theorem guarantees that the problem (\ref{MainPbE}) is well-posed if and only if uniqueness holds, that is if and only if the only solution for $\boldsymbol{J}=0$ is $\boldsymbol{u}=0$. Since uniqueness holds for $\omega=0$, one can prove with the analytic Fredholm theorem that (\ref{MainPbE}) is well-posed except for at most a countable set of values of $\omega$ with no accumulation points (note that Theorem \ref{MainThmE} remains true for $\omega\in\mathbb{C}$). \\ However this result is not really relevant from a physical point of view. 
Indeed, negative values of $\varepsilon$ can occur only if $\varepsilon$ is itself a function of $\omega$. For instance, if the inclusion $\mathcal{M}$ is metallic, it is commonly accepted that Drude's law gives a good model for $\varepsilon$. But taking into account the dependence of $\varepsilon$ on $\omega$ when studying uniqueness of problem (\ref{MainPbE}) leads to a non-linear eigenvalue problem, where the functional space $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ itself depends on $\omega$. This study is beyond the scope of the present paper (see \cite{HaPa17} for such questions in the case of the 2D scalar problem). \\ Nonetheless, there is a result that we can prove concerning the cases of non-uniqueness for problem (\ref{MainPbE}). \begin{proposition}\label{trappedmodes} If $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is a solution of (\ref{MainPbE}) for $\boldsymbol{J}=0$, then $c=0$ and $\boldsymbol{u}\in\boldsymbol{\mrm{X}}_N(\varepsilon)$. \end{proposition} \begin{proof} When $\omega=0$, the result is a direct consequence of Proposition \ref{propoUniqueness} (because zero is the only solution of (\ref{MainPbE}) for $\boldsymbol{J}=0$). From now on, we assume that $\omega\in\mathbb{R}\setminus\{0\}$. Suppose that $\boldsymbol{u}=c\nabla s^{+}+\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ is such that \[ \int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx
= 0,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon). \] Taking the imaginary part of the previous identity for $\boldsymbol{v}=\boldsymbol{u}$, we get \[ \Im m\,\bigg(\fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{u}}\,dx\bigg) =0. \] On the other hand, by (\ref{newint-epsuvbar}), we have \[
\fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{u}}\,dx=\int_\Omega \varepsilon |\tilde{\boldsymbol{u}}|^2\,dx+|c|^2\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,{s^+}\,dx, \] so that \[
|c|^2 \Im m\,\bigg(\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,s^+\,dx\bigg) =0. \] The result of the proposition is then a consequence of Lemma \ref{lemmaNRJ} in Appendix where it is proved that \[
\Im m\,\bigg(\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,s^+\,dx\bigg) =\eta\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds, \] and of the assumption \eqref{eq-intPhineq0}. \end{proof} \begin{remark}As a consequence, from Lemma \ref{LemmaNormeEquiv}, we infer that elements of the kernel of $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N$ are in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for all $\beta$ satisfying (\ref{betainfbeta_0et1demi}). \end{remark}
\subsection{Problem in the classical framework}
In the previous paragraph, we have shown that the Maxwell problem (\ref{MainPbE}) for the electric field, set in the non-standard space $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, and so in $\boldsymbol{\mrm{H}}_N^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)$ according to Lemma \ref{lemmaEquivE}, is well-posed. Here, we wish to analyse the properties of the problem for the electric field set in the classical space $\boldsymbol{\mrm{X}}_N(\varepsilon)$ (which does not contain $\nabla s^+$). Since this space is a closed subspace of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, it inherits the main properties of the problem in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ proved in the previous section. More precisely, we deduce from Lemma \ref{LemmaNormeEquiv} and Proposition \ref{PropCompactE} the following result. \begin{proposition}\label{PropX_N(eps)}
Under Assumptions 1 and 3, the embedding of $\boldsymbol{\mrm{X}}_N(\varepsilon)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ is compact, and $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\Omega}$ is a norm in $\boldsymbol{\mrm{X}}_N(\varepsilon)$ which is equivalent to the norm $\|\cdot\|_{\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)}$. \end{proposition} \noindent Note that we recover classical properties similar to what is known for positive $\varepsilon$, or more generally, as in \cite{BoCC14}, for $\varepsilon$ such that the operator $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ defined by (\ref{DefAeps}) is an isomorphism (which allows for sign-changing $\varepsilon$). But we want to underline the fact that under Assumption 3, these classical results cannot be proved by classical arguments: they require the introduction of the bigger space $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, which contains the singular function $\nabla s^+$. \newline Let us now consider the problem \begin{equation}\label{MainPbECla}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{X}}_N(\varepsilon)\mbox{ such that }\\ \displaystyle\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \int_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx= i\omega\int_\Omega \boldsymbol{J}\cdot \overline{\boldsymbol{v}}\,dx,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{X}}_N(\varepsilon). \end{array} \end{equation} An important remark is that one cannot prove that problem (\ref{MainPbECla}) is equivalent to a similar problem set in $\boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)$ (the analogue of Lemma \ref{lemmaEquivE}). Again, the difficulty comes from the fact that $A_{\varepsilon}$ is not an isomorphism, and the trouble would appear when solving (\ref{ResolPbAdd}). Therefore, a solution of (\ref{MainPbECla}) is not in general a distributional solution of the equation \[ \boldsymbol{\mrm{curl}}\, \left(\mu^{-1}\boldsymbol{\mrm{curl}}\, \boldsymbol{u}\right)-\omega^2\varepsilon \boldsymbol{u}=i\omega \boldsymbol{J}. \] To go further in the analysis of (\ref{MainPbECla}), we recall that $\boldsymbol{\mrm{X}}_N(\varepsilon)$ is a subspace of codimension one of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ (Lemma \ref{LemmaCodim} in Appendix). Let $\boldsymbol{v}_0$ be an element of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ which does not belong to $\boldsymbol{\mrm{X}}_N(\varepsilon)$. Then we denote by $\ell_0$ the continuous linear form on $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ such that: \begin{equation} \label{eq-decompv_0} \forall \boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\qquad\boldsymbol{v}-\ell_0(\boldsymbol{v})\boldsymbol{v}_0\in \boldsymbol{\mrm{X}}_N(\varepsilon). 
\end{equation} Let us now define the operators $\mathbb{A}_N:{\boldsymbol{\mrm{X}}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}_N(\varepsilon))^{\ast}$ and $\mathbb{K}_N:{\boldsymbol{\mrm{X}}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}_N(\varepsilon))^{\ast}$ by \[ \langle\mathbb{A}_N\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad \langle\mathbb{K}_N\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\varepsilon\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx. \] \begin{proposition}\label{propoFredholmclas}
Under Assumptions 1--3, the operator $\mathbb{A}_N:{\boldsymbol{\mrm{X}}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}_N(\varepsilon))^{\ast}$ is Fredholm of index zero. \end{proposition} \begin{proof} Let $\boldsymbol{u}\in \boldsymbol{\mrm{X}}_N(\varepsilon)$. By Proposition \ref{propoUniqueness}, for the operator ${\mathbb T}$ introduced in the corresponding proof, one has:
$$\|\boldsymbol{u}\|^2_{\boldsymbol{\mrm{X}}_N(\varepsilon)}=\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}\|^2_\Omega=\langle\mathbb{A}^{\mrm{out}}_N\boldsymbol{u},{\mathbb T}\boldsymbol{u}\rangle.$$ Then, using (\ref{eq-decompv_0}), we get:
$$\|\boldsymbol{u}\|^2_{\boldsymbol{\mrm{X}}_N(\varepsilon)}=\langle\mathbb{A}_N\boldsymbol{u},{\mathbb T}\boldsymbol{u}-\ell_0({\mathbb T}\boldsymbol{u})\boldsymbol{v}_0\rangle+\langle\mathbb{A}^{\mrm{out}}_N\boldsymbol{u},\ell_0({\mathbb T}\boldsymbol{u})\boldsymbol{v}_0\rangle,$$ which implies that
$$\|\boldsymbol{u}\|_{\boldsymbol{\mrm{X}}_N(\varepsilon)}\leq C\left(\|\mathbb{A}_N\boldsymbol{u}\|_{(\boldsymbol{\mrm{X}}_N(\varepsilon))^{\ast}}+|\ell_0({\mathbb T}\boldsymbol{u})|\right).$$ The result of the proposition then follows from a classical adaptation of Peetre's lemma (see for example \cite[Theorem 12.12]{Wlok87}) together with the fact that $\mathbb{A}_N$ is bounded and hermitian. \end{proof} \noindent Combining the two previous propositions, we obtain the following result. \begin{theorem} \label{the-Fredholm-cla} Under Assumptions 1--3, for all $\omega\in\mathbb{R}$, the operator $\mathbb{A}_N-\omega^2\mathbb{K}_N:{\boldsymbol{\mrm{X}}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}_N(\varepsilon))^{\ast}$ is Fredholm of index zero. \end{theorem} \noindent But as mentioned above, even if uniqueness holds, so that Problem (\ref{MainPbECla}) is well-posed, its solution is not in general a solution of Maxwell's equations.
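\noindent Let us make the argument of Proposition \ref{propoFredholmclas} more explicit. The version of Peetre's lemma invoked there can be stated as follows (a standard statement, see for example \cite[Theorem 12.12]{Wlok87}): if $X$, $Y$, $Z$ are Banach spaces, $A:X\to Y$ is a continuous linear map, $K:X\to Z$ is a compact linear map and there is a constant $C>0$ such that
\[
\|u\|_{X}\le C\,\big(\|Au\|_{Y}+\|Ku\|_{Z}\big),\qquad\forall u\in X,
\]
then $A$ has a finite-dimensional kernel and a closed range. Here it is applied with $A=\mathbb{A}_N$ and $K:\boldsymbol{u}\mapsto\ell_0(\mathbb{T}\boldsymbol{u})$, which is compact because it has rank one. Since $\mathbb{A}_N$ is hermitian, its cokernel can be identified with its kernel, which yields the index zero property.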
\subsection{Expression of the singular coefficient}\label{paraSingCoef} Under Assumptions 1--3, Theorem \ref{MainThmE} guarantees that for all $\omega\in\mathbb{R}$ the operator $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is Fredholm of index zero. Assuming that it is injective, the problem (\ref{MainPbE}) admits a unique solution $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^++\tilde{\boldsymbol{u}}$. The goal of this paragraph is to derive a formula allowing one to compute $c_{\boldsymbol{u}}$ without knowing $\boldsymbol{u}$. Such results are classical for scalar operators (see e.g. \cite{Gris92}, \cite[Theorem 6.4.4]{KoMR97},
\cite{DNBL90a,DNBL90b,AsCS00,HaLo02,YoAS02,Nkem16}). They are used in particular for numerical purposes. But curiously, they do not seem to exist for Maxwell's equations in 3D, not even in classical situations with positive materials in non-smooth domains. We emphasize that the analysis we develop can be adapted to the latter case.\\ \newline In order to establish the desired expression, for $\omega\in\mathbb{R}$, we first introduce the field $\boldsymbol{w}_N\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ such that \begin{equation}\label{defwSingu}
\displaystyle\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{v}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_N}\,dx -\omega^2 \fint_\Omega \varepsilon \boldsymbol{v}\cdot \overline{\boldsymbol{w}_N}\,dx
= \int_\Omega \varepsilon\tilde{\boldsymbol{v}}\cdot\nabla \overline{s^+}\,dx,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon). \end{equation} Note that Problem (\ref{defwSingu}) is well-posed when $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N$ is an isomorphism. Indeed, using (\ref{FormulaAdjoint}), one can check that it involves the operator $(\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N)^{\ast}$, that is, the adjoint of $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N$. Moreover $\boldsymbol{v}\mapsto \textstyle\int_\Omega \varepsilon\tilde{\boldsymbol{v}}\cdot\nabla\overline{s^+}\,dx$ is a linear form over $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. \begin{theorem}\label{ThmCoefSinguE} Assume that $\omega\in\mathbb{R}$, Assumptions 1--3 are valid and $\mathbb{A}^{\mrm{out}}_N-\omega^2\mathbb{K}^{\mrm{out}}_N:\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)\to(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon))^{\ast}$ is injective. Then the solution $\boldsymbol{u}=c_{\boldsymbol{u}}\nabla s^++\tilde{\boldsymbol{u}}$ of the electric problem (\ref{MainPbE}) is such that \begin{equation}\label{FormulaCoefSinguE} c_{\boldsymbol{u}}=i\omega\int_{\Omega} \boldsymbol{J}\cdot\overline{\boldsymbol{w}_N}\,dx\bigg/\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx. \end{equation} Here $\boldsymbol{w}_N$ is the function which solves (\ref{defwSingu}). \end{theorem} \begin{remark} Note that in practice $\boldsymbol{w}_N$ can be computed once and for all because it does not depend on $\boldsymbol{J}$. Then the value of $c_{\boldsymbol{u}}$ can be determined very simply via Formula (\ref{FormulaCoefSinguE}). 
\end{remark} \begin{proof} By definition of $\boldsymbol{u}$, we have \[ \displaystyle\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_N}\,dx -\omega^2 \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{w}_N}\,dx= i\omega\int_{\Omega} \boldsymbol{J}\cdot\overline{\boldsymbol{w}_N}\,dx. \] On the other hand, from (\ref{defwSingu}), there holds \[ \displaystyle\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_N}\,dx-\omega^2 \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{\boldsymbol{w}_N}\,dx= \int_\Omega \varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\overline{s^+}\,dx. \] From these two relations as well as (\ref{defDivFaible-4}), we get \[ i\omega\int_{\Omega} \boldsymbol{J}\cdot\overline{\boldsymbol{w}_N}\,dx=\int_\Omega \varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\overline{s^+}\,dx=c_{\boldsymbol{u}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx. \] But Lemma \ref{lemmaNRJ} in Appendix guarantees that $\textstyle\Im m\,\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx\ne0$. Therefore we find the desired formula. \end{proof}
\subsection{Limiting absorption principle}\label{ParagLimiting} In \S\ref{subsec-mainanalysisE}, we have proved well-posedness of the problem for the electric field in the space $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. But up to now, we have not explained why we select this framework. In particular, as mentioned in \S\ref{subsec-hypersingularities}, well-posedness also holds in $\boldsymbol{\mrm{X}}^{\mrm{in}}_N(\varepsilon)$ where $\boldsymbol{\mrm{X}}^{\mrm{in}}_N(\varepsilon)$ is defined as $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ with $s^+$ replaced by $s^-$ (see (\ref{DefSinguEss}) for the definitions of $s^{\pm}$). In general, the solution in $\boldsymbol{\mrm{X}}^{\mrm{in}}_N(\varepsilon)$ differs from the one in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. Therefore one can build infinitely many solutions of Maxwell's problem as linear interpolations of these two solutions. Then the question is: which solution is physically relevant? Classically, the answer can be obtained thanks to the limiting absorption principle. The idea is the following. In practice, the dielectric permittivity takes complex values, the imaginary part being related to the dissipative phenomena in the materials. Set \[ \varepsilon^\delta:=\varepsilon+i\delta \] where $\varepsilon$ is defined as previously (see Section \ref{sec-assumptions}) and $\delta>0$ (the sign of $\delta$ depends on the convention chosen for the time-harmonic dependence, here $e^{-i\omega t}$). Since the imaginary part of $\varepsilon^\delta$ is uniformly positive, one recovers some coercivity properties which allow one to prove well-posedness of the corresponding problem for the electric field in the classical framework. The physically relevant solution for the problem with the real-valued $\varepsilon$ should then be the limit of the sequence of solutions for the problems involving $\varepsilon^\delta$ when $\delta$ tends to zero. 
The goal of the present paragraph is to explain how to show that this limit is the solution of the problem set in $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$.
\subsubsection{Limiting absorption principle for the scalar case}
Our proof relies on a similar result for the 3D scalar problem which is the analogue of what has been done in 2D in \cite[Theorem 4.3]{BoCC14}. Consider the problem
\begin{equation}\label{pbclassic}
\text{Find } \varphi^\delta\in\mrm{H}^1_0(\Omega) \text{ such that } -\mrm{div}(\varepsilon^\delta \nabla \varphi^\delta)=f ,
\end{equation}
where $f\in(\mrm{H}^1_0(\Omega))^\ast$.
Since $\delta>0$, by the Lax-Milgram lemma, this problem is well-posed for all $f\in(\mrm{H}^1_0(\Omega))^\ast$ and in particular for all $f\in(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$, $\beta>0$.
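For completeness, let us write the coercivity property behind this application of the Lax-Milgram lemma (a one-line computation, using that $\varepsilon$ is real-valued): for all $\varphi\in\mrm{H}^1_0(\Omega)$,
\[
\Big|\int_{\Omega}\varepsilon^\delta\nabla\varphi\cdot\nabla\overline{\varphi}\,dx\Big|\ \ge\ \Im m\,\Big(\int_{\Omega}\varepsilon^\delta|\nabla\varphi|^2\,dx\Big)=\delta\,\|\nabla\varphi\|^2_{\Omega},
\]
so that for each fixed $\delta>0$ the sesquilinear form associated with \eqref{pbclassic} is coercive on $\mrm{H}^1_0(\Omega)$.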
Our objective is to prove that $ \varphi^\delta$ converges when $\delta$ tends to zero to the unique solution of the problem
\begin{equation} \label{eq-limitpbsca}
\text{Find } \varphi\in \mathring{\mrm{V}}^{\mrm{out}}\text{ such that } A^{\mrm{out}}_{\varepsilon}\varphi=f.
\end{equation} We expect convergence in a space $\mathring{\mrm{V}}^1_\beta(\Omega)$ with $0<\beta<\beta_0$. We first need a decomposition of $\varphi^\delta$ as a sum of a singular part and a regular part. Since problem \eqref{pbclassic} is strongly elliptic, one can directly apply the theory presented in \cite{KoMR97}. From the assumptions of Section \ref{sec-assumptions}, one can verify that for $\delta$ small enough, there exists one and only one singular exponent $\lambda^\delta\in\mathbb C$ such that $\Re e\,\lambda^\delta\in(-1/2;-1/2+\beta_0-\sqrt{\delta})$. We denote by $\mathfrak{s}^\delta$ the corresponding singular function such that \[ \mathfrak{s}^\delta(r,\theta,\phi)=r^{\lambda^\delta}\, \Phi^\delta(\theta,\phi). \] Note that it satisfies $ \mrm{div}(\varepsilon^\delta \nabla \mathfrak{s}^\delta)=0$ in $\mathcal{K}$. As in \eqref{DefSinguEss} for $s^{\pm}$, we set \begin{equation}\label{singexprdelta} s^\delta(x)=\chi(r)\, r^{-1/2+i\eta^\delta}\,\Phi^\delta(\theta,\phi), \end{equation} where $\eta^\delta\in\mathbb C$ is the number such that $\lambda^\delta=-1/2+i\eta^\delta$. By applying \cite[Theorem 5.4.1]{KoMR97}, we get the following result. \begin{lemma} \label{lem-decompphidelta} Let $0<\beta<\beta_0$ and $f\in(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$. The solution $\varphi^\delta$ of \eqref{pbclassic} decomposes as \begin{equation}\label{asymudel} \varphi^\delta =c^\delta s^\delta+\tilde{\varphi}^\delta \end{equation} where $c^\delta\in \mathbb C$ and $\tilde{\varphi}^\delta \in \mathring{\mrm{V}}^1_{-\beta}(\Omega)$. \end{lemma} \noindent Let us first study the limit of the singular function. \begin{lemma} For all $\beta>0$, when $\delta$ tends to zero, the function $s^\delta$ converges in $\mathring{\mrm{V}}^1_\beta(\Omega)$ to $s^+$ and not to $s^-$ (see the definitions in \eqref{DefSinguEss}).
\end{lemma} \begin{proof} The pair $(\Phi^\delta,\lambda^\delta)$ solves the spectral problem \begin{equation}\label{PbSpectralDelta}
\begin{array}{|l} \mbox{Find }(\Phi^\delta,\lambda^\delta)\in\mrm{H}^1(\mathbb{S}^2)\setminus\{0\}\times\mathbb C\mbox{ such that }\\ \displaystyle\int_{\mathbb{S}^2}\varepsilon^\delta\nabla_S\Phi^\delta\cdot\nabla_S\overline{\Psi}\,ds = \lambda^\delta(\lambda^\delta+1)\int_{\mathbb{S}^2}\varepsilon^\delta\Phi^\delta\,\overline{\Psi}\,ds,\qquad \forall\Psi\in\mrm{H}^1(\mathbb{S}^2). \end{array} \end{equation} Postulating the expansions $\Phi^\delta=\Phi^0 +\delta\Phi'+\dots$, $\lambda^\delta=\lambda^0+\delta\lambda'+\dots$ in this problem and identifying the terms in $\delta^0$, we get $\Phi^0=\Phi$ and we find that $\lambda^0=-1/2+i\eta^0$ where $\eta^0$ coincides with $\eta$ or $-\eta$ (see an illustration with Figure \ref{FigureLimitingAbsorptionPrinciple}). At order $\delta$, we get the variational equality \begin{equation}\label{PbLim2} \begin{array}{rcl} \displaystyle\int_{\mathbb{S}^2}\varepsilon\nabla_S\Phi'\cdot\nabla_S\overline{\Psi}\,ds +i\displaystyle\int_{\mathbb{S}^2}\nabla_S\Phi\cdot\nabla_S\overline{\Psi}\,ds &\hspace{-0.25cm}=&\hspace{-0.25cm} \displaystyle\lambda^0(\lambda^0+1)\bigg(\int_{\mathbb{S}^2}\varepsilon\Phi'\,\overline{\Psi}\,ds+i\int_{\mathbb{S}^2}\Phi\,\overline{\Psi}\,ds\bigg)\hspace{-0.5cm}\\[10pt]
& & \hspace{-1cm}\displaystyle+\lambda'(2\lambda^0+1)\int_{\mathbb{S}^2}\varepsilon\Phi\,\overline{\Psi}\,ds,\qquad \forall\Psi\in\mrm{H}^1(\mathbb{S}^2). \end{array} \end{equation} Taking $\Psi=\Phi$ in (\ref{PbLim2}), using (\ref{spectralPb}) and observing that $\lambda^0(\lambda^0+1)=-\eta^2-1/4$, we obtain \[
\displaystyle\int_{\mathbb{S}^2}\big(|\nabla_S\Phi|^2+(\eta^2+1/4)|\Phi|^2\big)\,ds=2\lambda'\eta^0\int_{\mathbb{S}^2}\varepsilon|\Phi|^2\,ds. \] Thus $\lambda'$ is real. Since, by definition of $\lambda^{\delta}$, we have $\Re e\,\lambda^{\delta}>-1/2$ for $\delta>0$, we infer that $\lambda'>0$. As a consequence, we have \[
\eta^0\int_{\mathbb{S}^2}\varepsilon|\Phi|^2\,ds>0 \] which according to the definition of $\eta$ in (\ref{eq-intPhineq0}) ensures that $\eta^0=\eta$. Therefore the pointwise limit of $s^\delta$ when $\delta$ tends to zero is indeed $s^+$ and not $s^-$. This is enough to conclude that $s^\delta$ converges to $s^+$ in $\mathring{\mrm{V}}^1_\beta(\Omega)$ for $\beta>0$. \end{proof}
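\noindent For later reference, let us record explicitly the first-order behaviour obtained in the formal expansion of the proof above (this is only a rephrasing of the identities established there): since $\eta^0=\eta$, we have
\[
\lambda^\delta=-\frac{1}{2}+i\eta+\delta\lambda'+o(\delta)\qquad\mbox{with}\qquad \lambda'=\frac{\displaystyle\int_{\mathbb{S}^2}|\nabla_S\Phi|^2+(\eta^2+1/4)|\Phi|^2\,ds}{\displaystyle 2\eta\int_{\mathbb{S}^2}\varepsilon|\Phi|^2\,ds}>0,
\]
so that, at first order, $\lambda^\delta$ approaches the line $\Re e\,\lambda=-1/2$ from the half-plane $\Re e\,\lambda>-1/2$ at speed $\lambda'$ as $\delta$ tends to zero.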
\begin{figure}
\caption{Behaviour of the eigenvalue $\lambda^\delta$ close to the line $\Re e\,\lambda=-1/2$ as the dissipation $\delta$ tends to zero. Here the values have been obtained by solving problem (\ref{PbSpectralDelta}) with a finite element method. We work in the conical tip defined via (\ref{ConicalTip}) with $\alpha=2\pi/3$ and $\kappa_\varepsilon=-1.9$.}
\label{FigureLimitingAbsorptionPrinciple}
\end{figure}
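\noindent Let us briefly indicate how the eigenpairs of (\ref{PbSpectralDelta}) can be computed in practice for the circular conical tip (\ref{ConicalTip}) (this reduction is classical; the notation $\Theta$, $\Xi$, $m$ is introduced only for this discussion). Since $\varepsilon^\delta$ then depends only on $\theta$, one can look for eigenfunctions of the form $\Phi^\delta(\theta,\phi)=\Theta(\theta)e^{im\phi}$ with $m\in\mathbb{Z}$, which turns (\ref{PbSpectralDelta}) into the family of one-dimensional problems
\[
\int_0^\pi\varepsilon^\delta\Big(\Theta'\,\overline{\Xi}'+\frac{m^2}{\sin^2\theta}\,\Theta\,\overline{\Xi}\Big)\sin\theta\,d\theta=\lambda^\delta(\lambda^\delta+1)\int_0^\pi\varepsilon^\delta\,\Theta\,\overline{\Xi}\,\sin\theta\,d\theta
\]
for all admissible test functions $\Xi$. Each of these problems is linear in the unknown $\Lambda:=\lambda^\delta(\lambda^\delta+1)$; once discretized with one-dimensional finite elements, it yields a generalized matrix eigenvalue problem for $\Lambda$, from which $\lambda^\delta=-1/2\pm\sqrt{1/4+\Lambda}$.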
\noindent Then proceeding exactly as in the proof of \cite[Theorem 4.3]{BonCheCla13}, one can establish the following result. \begin{lemma}\label{Abslimscalaire} Let $0<\beta<\beta_0$ and $f\in(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$. If Assumption 3 holds, then $(\varphi^\delta=c^{\delta}\,s^\delta+\tilde{\varphi}^\delta)$ converges to $\varphi=c\,s^++\tilde{\varphi}$ in $\mathring{\mrm{V}}^1_\beta(\Omega)$ as $\delta$ tends to zero. Moreover, $(c^\delta,\tilde{\varphi}^\delta)$ converges to $(c,\tilde{\varphi})$ in $\mathbb C\times\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. In this statement, $\varphi^\delta$ (resp. $\varphi$) is the solution of \eqref{pbclassic} (resp. \eqref{eq-limitpbsca}).
\end{lemma} \noindent Note that the results of Lemma \ref{Abslimscalaire} still hold if we replace $f$ by a family of source terms $(f^\delta)\in(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$ that converges to $f$ in $(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$ when $\delta$ tends to zero.
\subsubsection{Limiting absorption principle for the electric problem}
The problem \begin{equation}\label{pb-udelta}
\text{Find}~\boldsymbol{u}^\delta \in \boldsymbol{\mrm{X}}_N(\varepsilon^\delta)~\text{such that}~~ \boldsymbol{\mrm{curl}}\,\mu^{-1}\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^\delta-\omega^2\varepsilon^\delta\boldsymbol{u}^\delta=i\omega \boldsymbol{J} , \end{equation}
with $\boldsymbol{\mrm{X}}_N(\varepsilon^\delta)= \{\boldsymbol{E}\in \boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)\,|\,\mrm{div}(\varepsilon^\delta\boldsymbol{E})=0\}$, is well-posed for all $\omega\in\mathbb{R}$ and all $\delta>0$. This result is classical when $\mu$ takes positive values, while it can be established by using \cite{BoCC14} when $\mu$ changes sign. We wish to study the convergence of $\boldsymbol{u}^\delta$ when $\delta$ goes to zero. Let $(\delta_n)$ be a sequence of positive numbers such that $\textstyle\lim_{n\to+\infty}\delta_n=0$. To simplify notation, we index the corresponding quantities by $n$ instead of $\delta_n$ (for example, we write $\varepsilon^n$ instead of $\varepsilon^{\delta_n}$).
\begin{lemma}\label{lem-udeltan} Suppose that $(\boldsymbol{u}^{n})$ is a sequence of elements of $\boldsymbol{\mrm{X}}_N(\varepsilon^n)$ such that $(\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^{n})$ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$. Then, under Assumption 3, for all $\beta$ satisfying \eqref{betainfbeta_0et1demi}, for all $n\in\mathbb{N}$, $\boldsymbol{u}^n$ admits the decomposition $\boldsymbol{u}^n=c^n\nabla s^n+\tilde{\boldsymbol{u}}^n$ with $c^n\in\mathbb{C}$ and $\tilde{\boldsymbol{u}}^n\in \boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Moreover, there exists a subsequence such that $(c^n)$ converges to some $c$ in $\mathbb C$ while $(\tilde{\boldsymbol{u}}^n)$ converges to some $\tilde{\boldsymbol{u}}$ in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Finally, the field $\boldsymbol{u}:=c\nabla s^++\tilde{\boldsymbol{u}}$ belongs to $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$.
\end{lemma} \begin{proof} For all $n\in\mathbb{N}$, we have $\boldsymbol{u}^n\in\boldsymbol{\mrm{X}}_N(\varepsilon^n)\subset\boldsymbol{\mrm{L}}^2(\Omega)$. Therefore, there exist $\varphi^n\in\mrm{H}^1_0(\Omega)$ and $\boldsymbol{\psi}^n\in\boldsymbol{\mrm{X}}_T(1)$, satisfying $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}^n\times \nu=0$ on $\partial\Omega$, such that $\boldsymbol{u}^n=\nabla\varphi^n+\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}^n$. Moreover, we have the estimate \[
\|\Delta \boldsymbol{\psi}^n\|_\Omega=\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^{n}\|_\Omega\leq C. \] As a consequence, Proposition \ref{propoLaplacienVect} guarantees that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}^n)$ is a bounded sequence of $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$, and Proposition \ref{propoLaplacienVectcompact} ensures that there exists a subsequence such that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}^n)$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Now, from the fact that $\mrm{div}(\varepsilon^{n}\boldsymbol{u}^n)=0$, we obtain \[ \mrm{div}(\varepsilon^{n} \nabla\varphi^n)=-\mrm{div}(\varepsilon^{n} \boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}^n)\in(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast. \] By Lemmas \ref{lem-decompphidelta} and \ref{Abslimscalaire}, this implies that the function $\varphi^n$ decomposes as
$\varphi^n=c^ns^{n} +\tilde{\varphi}^n$ with
$c^n\in\mathbb{C}$ and $\tilde{\varphi}^n\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. Moreover, along the subsequence extracted above, $(c^n)$ converges to $c$ in $\mathbb C$ while $(\tilde{\varphi}^n)$ converges to $\tilde{\varphi}$ in $\mathring{\mrm{V}}^1_{-\beta}(\Omega)$.\\ \newline Summing up, we have that $\boldsymbol{u}^n=c^n\nabla s^{n}+\tilde{\boldsymbol{u}}^n$ where $\tilde{\boldsymbol{u}}^n=\nabla\tilde{\varphi}^n+\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}^n$ converges to $\tilde{\boldsymbol{u}}$ in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. In particular, this implies that $\boldsymbol{u}^n$ converges to $\boldsymbol{u}= c\nabla s^++\tilde{\boldsymbol{u}}$ in $\boldsymbol{\mrm{V}}^0_{\gamma}(\Omega)$ for all $\gamma>0$. It remains to prove that $\boldsymbol{u} \in \boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, which amounts to showing that $\boldsymbol{u}$ satisfies \eqref{defDivFaible-4}. To proceed, we take the limit as $n\rightarrow +\infty$ in the identity \[ -c^n\int_{\Omega}\mrm{div}(\varepsilon^{n}\nabla s^{n})\varphi \,dx+\int_{\Omega}\varepsilon^{n}\tilde{\boldsymbol{u}}^n\cdot\nabla\varphi\,dx=0 \] which holds for all $\varphi\in \mathring{\mrm{V}}^1_\beta(\Omega)$ because $\boldsymbol{u}^n\in\boldsymbol{\mrm{X}}_N(\varepsilon^{n})$. \end{proof} \begin{theorem} Let $\omega\in\mathbb{R}$.
Suppose that Assumptions 1, 2 and 3 hold, and that $\boldsymbol{u}=0$ is the only function of $\boldsymbol{\mrm{X}}_N(\varepsilon)$ satisfying
\begin{equation}
\label{PbhomogdansXN} \boldsymbol{\mrm{curl}}\,\mu^{-1}\boldsymbol{\mrm{curl}}\, \boldsymbol{u}-\omega^2\varepsilon\boldsymbol{u}=0.
\end{equation} Then the sequence of solutions $(\boldsymbol{u}^\delta=c^\delta\nabla s^{\delta}+\tilde{\boldsymbol{u}}^\delta)$ of \eqref{pb-udelta} converges, as $\delta$ tends to 0, to the unique solution $\boldsymbol{u}=c \nabla s^++\tilde{\boldsymbol{u}}\in\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon) $ of (\ref{MainPbE}) in the following sense: $(c^\delta)$ converges to $c$ in $\mathbb C$, $(\tilde{\boldsymbol{u}}^\delta)$ converges to $\tilde{\boldsymbol{u}}$ in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and $(\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^\delta)$ converges to $\boldsymbol{\mrm{curl}}\, \boldsymbol{u}$ in $\boldsymbol{\mrm{L}}^2(\Omega)$. \end{theorem} \begin{proof} Let $(\delta_n)$ be a sequence of positive numbers such that $\textstyle\lim_{n\to+\infty}\delta_n=0$. Denote by $\boldsymbol{u}^n$ the unique function of $\boldsymbol{\mrm{X}}_N(\varepsilon^{n})$ such that \begin{equation} \label{eq-suiteu_n} \boldsymbol{\mrm{curl}}\,\mu^{-1}\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n-\omega^2\varepsilon^{n}\boldsymbol{u}^n=i\omega \boldsymbol{J}. \end{equation}
Note that we again write $\varepsilon^n$ instead of $\varepsilon^{\delta_n}$. The proof is in two steps. First, we establish the desired property under the assumption that $(\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n\|_\Omega)$ is bounded. Then we show that this assumption is indeed satisfied.
\\
{\bf First step.} Assume that there is a constant $C>0$ such that for all $n\in\mathbb{N}$
\begin{equation}\label{eq-curlun}
\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n\|_\Omega\leq C.
\end{equation}
By Lemma \ref{lem-udeltan}, we can extract a subsequence from $(\boldsymbol{u}^n=c^n\nabla s^{n}+\tilde{\boldsymbol{u}}^n)$ such that $(c^n)$ converges to $c$ in $\mathbb C$ and $(\tilde{\boldsymbol{u}}^n)$ converges to $\tilde{\boldsymbol{u}}$ in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$, with $\boldsymbol{u}=c\nabla s^++\tilde{\boldsymbol{u}}\in \boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. Besides, since for all $n\in\mathbb{N}$, $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}^n\in\boldsymbol{\mrm{L}}^2(\Omega)$, there exist $h^n\in\mrm{H}^1_\#(\Omega)$ and $\boldsymbol{w}^n\in\boldsymbol{\mrm{X}}_N(1)$ such that \begin{equation}\label{helm-decomp-mu} \mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}^n=\nabla h^n+\boldsymbol{\mrm{curl}}\, \boldsymbol{w}^n. \end{equation} Observing that $(\boldsymbol{w}^n)$ is bounded in $\boldsymbol{\mrm{X}}_N(1)$, from Lemma \ref{LemmaWeightedClaBis} we deduce that it admits a subsequence which converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Multiplying \eqref{eq-suiteu_n}, taken for two indices $n$ and $m$, by $\overline{(\boldsymbol{w}^n-\boldsymbol{w}^m)}$ and integrating by parts, we obtain \[
\int_\Omega|\boldsymbol{\mrm{curl}}\,\boldsymbol{w}^n-\boldsymbol{\mrm{curl}}\,\boldsymbol{w}^m|^2\,dx=\omega^2\int_\Omega\left(\varepsilon^{n}\boldsymbol{u}^n-\varepsilon^{m}\boldsymbol{u}^m\right)\cdot\overline{(\boldsymbol{w}^n-\boldsymbol{w}^m)}\,dx. \] This implies that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{w}^n)$ converges in $\boldsymbol{\mrm{L}}^2(\Omega)$. Then, from \eqref{helm-decomp-mu}, we deduce that \[ \mrm{div} \left(\mu \nabla h^n\right)=-\mrm{div} \left(\mu\, \boldsymbol{\mrm{curl}}\,\boldsymbol{w}^n\right)\mbox{ in }\Omega. \] By Assumption 2, the operator $A_{\mu}: \mrm{H}_\#^1(\Omega)\to (\mrm{H}_\#^1(\Omega))^\ast$ is an isomorphism. Therefore $(\nabla h^n)$ converges in $\boldsymbol{\mrm{L}}^2(\Omega)$. From \eqref{helm-decomp-mu}, this shows that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{u}^n)$ converges to $\boldsymbol{\mrm{curl}}\, \boldsymbol{u}$ in $\boldsymbol{\mrm{L}}^2(\Omega)$. Finally, we know that $\boldsymbol{u}^n$ satisfies \[ \int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}^n\cdot\boldsymbol{\mrm{curl}}\,\overline{{\boldsymbol{v}}}\,dx -\omega^2 \int_\Omega \varepsilon^{n} \boldsymbol{u}^n\cdot\overline{{\boldsymbol{v}}}\,dx= i\omega\int_\Omega \boldsymbol{J}\cdot \overline{{\boldsymbol{v}}}\,dx \] for all ${\boldsymbol{v}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$.
Taking the limit, we get that $\boldsymbol{u}$ satisfies
\begin{equation}
\label{eq-variationnel-u}
\int_{\Omega}\mu^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{{\boldsymbol{v}}}\,dx -\omega^2 \fint_\Omega \varepsilon \boldsymbol{u}\cdot\overline{{\boldsymbol{v}}}\,dx= i\omega\int_\Omega \boldsymbol{J}\cdot \overline{{\boldsymbol{v}}}\,dx
\end{equation}
for all ${\boldsymbol{v}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$.
Since, in addition, $\boldsymbol{u}$ satisfies \eqref{defDivFaible-4}, \eqref{eq-variationnel-u} also holds for $\boldsymbol{v}=\nabla s^+$, and we get that $\boldsymbol{u}$ is the unique solution of (\ref{MainPbE}). \\ \textbf{Second step.} Now we prove that assumption \eqref{eq-curlun} is indeed satisfied. Suppose, by contradiction, that there exists a subsequence of $(\boldsymbol{u}^n)$ such that \[
\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n\|_\Omega\rightarrow +\infty \]
and consider the sequence $(\boldsymbol{v}^n)$ defined, for all $n\in\mathbb{N}$, by $\boldsymbol{v}^n:=\boldsymbol{u}^n/\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n\|_\Omega$. We have
\begin{equation}
\label{eq-suitev_n}
\boldsymbol{v}^n \in \boldsymbol{\mrm{X}}_N(\varepsilon^{n})\quad\mbox{ and }\quad\boldsymbol{\mrm{curl}}\,\mu^{-1}\boldsymbol{\mrm{curl}}\, \boldsymbol{v}^n-\omega^2\varepsilon^{n}\boldsymbol{v}^n=i\omega \boldsymbol{J}/\|\boldsymbol{\mrm{curl}}\, \boldsymbol{u}^n\|_\Omega.
\end{equation}
Following the first step of the proof, we find that we can extract a subsequence from $(\boldsymbol{v}^n)$ which converges, in the sense given in the theorem, to the unique solution of the homogeneous problem (\ref{MainPbE}) with $\boldsymbol{J}=0$. But by Proposition \ref{trappedmodes}, this solution also solves \eqref{PbhomogdansXN}. As a consequence, it is equal to zero. In particular, this implies that $(\boldsymbol{\mrm{curl}}\, \boldsymbol{v}^n)$ converges to zero in $\boldsymbol{\mrm{L}}^2(\Omega)$, which is impossible since, by construction, we have $\|\boldsymbol{\mrm{curl}}\, \boldsymbol{v}^n\|_\Omega=1$ for all $n\in\mathbb{N}$. \end{proof} \section{Analysis of the problem for the magnetic component}\label{SectionChampH}
In this section, we turn our attention to the analysis of the Maxwell problem for the magnetic component. Importantly, throughout the section, we suppose that $\beta$ satisfies (\ref{betainfbeta_0et1demi}), that is $0<\beta<\min(1/2,\beta_0)$. In contrast with the analysis for the electric component, we define functional spaces which depend on $\beta$: \[
\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu):=\{\boldsymbol{u}\in\boldsymbol{\mrm{L}}^2(\Omega)\,|\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\mrm{span}(\varepsilon\nabla s^{+})\oplus\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega) ,\,\mrm{div}(\mu\boldsymbol{u})=0\mbox{ in }\Omega,\,\mu\boldsymbol{u}\cdot\nu=0\mbox{ on }\partial\Omega\} \] and for $\xi\in\mrm{L}^\infty(\Omega)$, \[
\boldsymbol{\mrm{Z}}^{\pm\beta}_T(\xi):=\{ \boldsymbol{u}\in\boldsymbol{\mrm{L}}^2(\Omega)\,|\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in \boldsymbol{\mrm{V}}^0_{\pm\beta}(\Omega),\,\mrm{div}\,(\xi \boldsymbol{u})=0\mbox{ in }\Omega\mbox{ and }\xi\boldsymbol{u}\cdot\nu=0 \mbox{ on }\partial\Omega \}. \] Note that we have $\boldsymbol{\mrm{Z}}^{-\beta}_T(\mu)\subset\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\subset \boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. The conditions $\mrm{div}(\mu\boldsymbol{u})=0$ in $\Omega$ and $\mu\boldsymbol{u}\cdot\nu=0$ on $\partial\Omega$ for the elements of these spaces boil down to imposing \[ \int_{\Omega}\mu\boldsymbol{u}\cdot\nabla\varphi\,dx=0,\qquad\forall \varphi\in\mrm{H}^1_{\#}(\Omega). \] \begin{remark} Observe that the elements of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ are in $\boldsymbol{\mrm{L}}^2(\Omega)$ but have a singular curl. On the other hand, the elements of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$ are singular but have a curl in $\boldsymbol{\mrm{L}}^2(\Omega)$. This is consistent with the fact that, for the situations considered in this work, the electric field is singular while the magnetic field is not. \end{remark} \noindent The analysis of the problem for the magnetic component leads us to consider the formulation \begin{equation}\label{MainPbH}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\mbox{ such that }\\ \displaystyle\fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx= \int_\Omega \varepsilon^{-1}\boldsymbol{J}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu),
\end{array} \end{equation} where $\boldsymbol{J}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Again, the first integral in the left-hand side of (\ref{MainPbH}) is not a classical integral. Similarly to definition (\ref{defDivFaible-4}), we set \[ \fint_{\Omega}\nabla s^+\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx:=0,\qquad\forall \boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu). \] As a consequence, for $\boldsymbol{u}\in \boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{u}}}$ (we shall use this notation throughout the section) and $\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$, there holds \begin{equation}\label{eq-intH-symZero} \fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx=\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx. 
\end{equation} Note that for $\boldsymbol{u}$, $\boldsymbol{v}$ in $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{u}}}$, $\boldsymbol{\mrm{curl}}\,\boldsymbol{v}=c_{\boldsymbol{v}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{v}}}$, we have \begin{equation}\label{eq-intH-sym} \begin{array}{rcl} \fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx&=&\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot(\overline{c_{\boldsymbol{v}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{v}}}})\,dx \\[10pt] &=&\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\overline{\boldsymbol{\zeta_{\boldsymbol{v}}}}\,dx-\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\boldsymbol{\zeta_{\boldsymbol{u}}})\,\overline{s^{+}}\,dx \\[10pt] &=&\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\overline{\boldsymbol{\zeta_{\boldsymbol{v}}}}\,dx+c_{\boldsymbol{u}}\overline{c_{\boldsymbol{v}}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\,\overline{s^{+}}\,dx. \end{array} \end{equation} We denote by $a_T(\cdot,\cdot)$ (resp. $\ell_T(\cdot)$) the sesquilinear form (resp. the antilinear form) appearing in the left-hand side (resp. right-hand side) of (\ref{MainPbH}).
\begin{remark} Note that in (\ref{MainPbH}), the solution and the test functions do not belong to the same space. This is different from the formulation (\ref{MainPbE}) for the electric field but seems necessary in the analysis below to obtain a well-posed problem (in particular to prove Proposition \ref{propoUniquenessH}). Note also that even if the functional framework depends on $\beta$, the solution will not if $\boldsymbol{J}$ is regular enough (see the explanations in Remark \ref{frameworkIndbeta}). \end{remark}
\subsection{Equivalent formulation}
Define the spaces \[ \begin{array}{lcl}
\boldsymbol{\mrm{H}}^{\beta}(\boldsymbol{\mrm{curl}}\,)&\hspace{-0.2cm}:=&\hspace{-0.2cm}\{\boldsymbol{u}\in\boldsymbol{\mrm{L}}^2(\Omega)\,|\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)\}\\
\boldsymbol{\mrm{H}}^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)&\hspace{-0.2cm}:=&\hspace{-0.2cm}\{\boldsymbol{u}\in\boldsymbol{\mrm{L}}^2(\Omega)\,|\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\mrm{span}(\varepsilon\nabla s^{+})\oplus\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)\}. \end{array} \]
\begin{lemma}\label{EquivHcurlH} Under Assumptions 1--2, the field $\boldsymbol{u}$ is a solution of (\ref{MainPbH}) if and only if it solves the problem \begin{equation}\label{Formu2H}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in \boldsymbol{\mrm{H}}^{\mrm{out}}(\boldsymbol{\mrm{curl}}\,)\mbox{ such that }\\[2pt] a_T(\boldsymbol{u},\boldsymbol{v})= \ell_T(\boldsymbol{v}),\quad\forall \boldsymbol{v} \in \boldsymbol{\mrm{H}}^{\beta}(\boldsymbol{\mrm{curl}}\,). \end{array} \end{equation} \end{lemma} \begin{proof} If $\boldsymbol{u}$ satisfies (\ref{Formu2H}), then taking $\boldsymbol{v}=\nabla\varphi$ with $\varphi\in\mrm{H}^1_{\#}(\Omega)$ in (\ref{Formu2H}), we get that $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$. This proves that $\boldsymbol{u}$ solves (\ref{MainPbH}).\\ \newline Assume now that $\boldsymbol{u}$ is a solution of (\ref{MainPbH}). Let $\boldsymbol{v}$ be an element of $\boldsymbol{\mrm{H}}^{\beta}(\boldsymbol{\mrm{curl}}\,)$. Introduce the function $\varphi\in\mrm{H}^1_{\#}(\Omega)$ such that \[ \int_{\Omega}\mu\nabla\varphi\cdot\nabla\varphi'\,dx=\int_{\Omega}\mu\boldsymbol{v} \cdot\nabla\varphi'\,dx,\qquad\forall \varphi'\in\mrm{H}^1_{\#}(\Omega). \] The field $\hat{\boldsymbol{v}}:=\boldsymbol{v}-\nabla\varphi$ belongs to $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. Moreover, there holds $\boldsymbol{\mrm{curl}}\,\hat{\boldsymbol{v}}=\boldsymbol{\mrm{curl}}\,\boldsymbol{v}$, and since every $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ satisfies \[ \int_{\Omega}\mu\boldsymbol{u}\cdot\nabla\varphi\,dx=0,\qquad\forall\varphi\in\mrm{H}^1_{\#}(\Omega), \] we deduce that $a_T(\boldsymbol{u},\boldsymbol{v})=a_T(\boldsymbol{u},\hat{\boldsymbol{v}})=\ell_T(\hat{\boldsymbol{v}})=\ell_T(\boldsymbol{v})$. \end{proof}
\subsection{Norms in $\boldsymbol{\mrm{Z}}^{\pm\beta}_T(\mu)$ and $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$}
We endow the space $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$ with the norm \[
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)}=(\|\boldsymbol{u}\|^2_{\Omega}+\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|^2_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)})^{1/2}, \] so that it is a Banach space. \begin{lemma}\label{LemmaNormeEquivHPlus} Under Assumptions 1--2, there is a constant $C>0$ such that for all $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$, we have \[
\|\boldsymbol{u}\|_{\Omega}\le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}. \]
As a consequence, the norm $\|\cdot\|_{\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)}$ is equivalent to the norm $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}$ in $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. \end{lemma} \begin{remark} The result of Lemma \ref{LemmaNormeEquivHPlus} holds for all $\beta$ such that $0\le \beta < 1/2$ and not only for $0<\beta<\min(1/2,\beta_0)$. \end{remark} \begin{proof} Let $\boldsymbol{u}$ be an element of $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. Since $\boldsymbol{u}$ belongs to $\boldsymbol{\mrm{L}}^2(\Omega)$, according to the item $v)$ of Proposition \ref{propoPotential}, there are $\varphi\in\mrm{H}^1_{\#}(\Omega)$ and $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_N(1)$ such that \begin{equation}\label{DecompoInter} \boldsymbol{u}=\nabla\varphi+\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}. \end{equation} Lemma \ref{LemmaWeightedClaBis} guarantees that $\boldsymbol{\psi}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ with the estimate \begin{equation}\label{EstimaWeightedClaInterm}
\|\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega}. \end{equation} Multiplying the equation $\boldsymbol{\mrm{curl}}\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}=\boldsymbol{\mrm{curl}}\,\boldsymbol{u}$ in $\Omega$ by $\boldsymbol{\psi}$ and integrating by parts, we get \begin{equation}\label{EstimaWeightedClaInterm2}
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|^2_{\Omega}\le \|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}\|\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}. \end{equation} Gathering (\ref{EstimaWeightedClaInterm}) and (\ref{EstimaWeightedClaInterm2}) leads to \begin{equation}\label{EstimPlusInter}
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega}\le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}. \end{equation} On the other hand, using that \[ \int_{\Omega}\mu\boldsymbol{u}\cdot\nabla\varphi'\,dx=0,\qquad \forall\varphi'\in\mrm{H}^1_{\#}(\Omega) \]
and that $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ is an isomorphism, we deduce that $\|\nabla\varphi\|_{\Omega} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega}$. Using this estimate and (\ref{EstimPlusInter}) in the decomposition (\ref{DecompoInter}), we finally obtain the desired result. \end{proof} \noindent If $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$, we have $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{u}}}$ with $c_{\boldsymbol{u}}\in\mathbb{C}$ and $\boldsymbol{\zeta_{\boldsymbol{u}}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. We endow the space $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ with the norm \[
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)}=(\|\boldsymbol{u}\|^2_{\Omega}+|c_{\boldsymbol{u}}|^2+\|\boldsymbol{\zeta_{\boldsymbol{u}}}\|^2_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)})^{1/2}, \] so that it is a Banach space. \begin{lemma}\label{LemmaNormeEquivHOut} Under Assumptions 1--3, there is $C>0$ such that for all $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$, we have \begin{equation}\label{MainEstimaSpaceOut}
\|\boldsymbol{u}\|_{\Omega}+|c_{\boldsymbol{u}}|\le C\,\|\boldsymbol{\zeta_{\boldsymbol{u}}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}. \end{equation}
As a consequence, the norm $\|\boldsymbol{u}\|_{\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)}$ is equivalent to the norm $\|\boldsymbol{\zeta_{\boldsymbol{u}}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}$ for $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$. \end{lemma} \begin{proof} Let $\boldsymbol{u}$ be an element of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$. Since $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\subset\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$, Lemma \ref{LemmaNormeEquivHPlus} provides the estimate \begin{equation}\label{MainEstimaSpaceOutInterm}
\|\boldsymbol{u}\|_{\Omega}\le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}\le C\,(|c_{\boldsymbol{u}}|+\|\boldsymbol{\zeta_{\boldsymbol{u}}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}). \end{equation} On the other hand, taking the divergence of $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{u}}}$, we obtain $c_{\boldsymbol{u}}\,\mrm{div}(\varepsilon\nabla s^+)=-\mrm{div}\,\boldsymbol{\zeta_{\boldsymbol{u}}}$. Using the fact that $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is an isomorphism, we get \[
|c_{\boldsymbol{u}}|\le C\,\|\mrm{div}\,\boldsymbol{\zeta_{\boldsymbol{u}}}\|_{(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}}\le C\,\|\boldsymbol{\zeta_{\boldsymbol{u}}}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}. \] Using this inequality in (\ref{MainEstimaSpaceOutInterm}) leads to (\ref{MainEstimaSpaceOut}). \end{proof}
\subsection{Main analysis for the magnetic field} \noindent Define the continuous operators $\mathbb{A}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ and $\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ such that for all $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$, $\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$, \begin{equation}\label{DefOpChampH} \langle\mathbb{A}^{\mrm{out}}_T\boldsymbol{u},\boldsymbol{v}\rangle = \fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\qquad\langle\mathbb{K}^{\mrm{out}}_T\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\mu\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx. \end{equation} With this notation, we have $\langle(\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T)\boldsymbol{u},\boldsymbol{v}\rangle=a_{T}(\boldsymbol{u},\boldsymbol{v})$.
\begin{proposition}\label{propoUniquenessH} Under Assumptions 1--3, the operator $\mathbb{A}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is an isomorphism. \end{proposition} \begin{proof} We have \[ \langle\mathbb{A}^{\mrm{out}}_T\boldsymbol{u},\boldsymbol{v}\rangle=\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\forall\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu),\,\forall\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu). \] Let us construct a continuous operator $\mathbb{T}:\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)\to\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ such that \begin{equation}\label{RelationIdentity} \langle \mathbb{A}^{\mrm{out}}_T\,\mathbb{T}\boldsymbol{u},\boldsymbol{v}\rangle=\int_{\Omega}r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\forall\boldsymbol{u},\,\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu). \end{equation} Let $\boldsymbol{u}$ be an element of $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. Then the field $r^{2\beta}\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u}$ belongs to $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Since $A^{\mrm{out}}_{\varepsilon}:\mathring{\mrm{V}}^{\mrm{out}}\to(\mathring{\mrm{V}}^1_{\beta}(\Omega))^{\ast}$ is an isomorphism, there is a unique $\varphi=\alpha\,s^++\tilde{\varphi}\in\mathring{\mrm{V}}^{\mrm{out}}$ such that $A^{\mrm{out}}_{\varepsilon}\varphi=-\mrm{div}(r^{2\beta}\varepsilon\,\boldsymbol{\mrm{curl}}\,\boldsymbol{u})$. 
Observing that $\boldsymbol{w}:=\varepsilon\,(r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}-\nabla\varphi)\in\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)$ is such that $\mrm{div}\,\boldsymbol{w}=0$ in $\Omega$, according to the result of Proposition \ref{propoPotentialWeight}, we know that there is a unique $\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{\beta}_T(1)$ such that \[ \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}=\boldsymbol{w}=\varepsilon\,(r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}-\nabla\varphi). \] At this stage, we emphasize that in general $\nabla\varphi\in\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)\setminus\boldsymbol{\mrm{L}}^2(\Omega)$. This is the reason why we are obliged to establish Proposition \ref{propoPotentialWeight}. Since $\boldsymbol{\psi}$ is in $\boldsymbol{\mrm{L}}^2(\Omega)$ and $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ is an isomorphism, there is a unique $\phi\in\mrm{H}^1_{\#}(\Omega)$ such that \[ \int_{\Omega} \mu\nabla\phi\cdot\nabla\overline{\phi'}\,dx=\int_{\Omega} \mu\boldsymbol{\psi}\cdot\nabla\overline{\phi'}\,dx,\qquad\forall\phi'\in\mrm{H}^1_{\#}(\Omega). \] Finally, we set $\mathbb{T}\boldsymbol{u}=\boldsymbol{\psi}-\nabla\phi$. One can easily check that this defines a continuous operator $\mathbb{T}:\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)\to\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$. Moreover we have \[ \boldsymbol{\mrm{curl}}\,\mathbb{T}\boldsymbol{u}=\alpha\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\mathbb{T}\boldsymbol{u}}}\qquad\mbox{ with }\boldsymbol{\zeta_{\mathbb{T}\boldsymbol{u}}}=\varepsilon\,(r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}-\nabla\tilde{\varphi}). \] As a consequence, identity (\ref{RelationIdentity}) indeed holds. From Lemma \ref{LemmaNormeEquivHPlus}, we deduce that $\mathbb{A}^{\mrm{out}}_T\,\mathbb{T}:\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is an isomorphism, and therefore that $\mathbb{A}^{\mrm{out}}_T$ is onto.
It remains to show that $\mathbb{A}^{\mrm{out}}_T$ is injective.\\ \newline If $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ is in the kernel of $\mathbb{A}^{\mrm{out}}_T$, we have $ \langle\mathbb{A}^{\mrm{out}}_T\boldsymbol{u},\boldsymbol{v}\rangle=0 $ for all $\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$. In particular from (\ref{eq-intH-sym}), we can write \[
\langle\mathbb{A}^{\mrm{out}}_T\boldsymbol{u},\boldsymbol{u}\rangle=\int_{\Omega}\varepsilon^{-1}|\boldsymbol{\zeta_{\boldsymbol{u}}}|^2\,dx+|c_{\boldsymbol{u}}|^2\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+}) \overline{s^{+}}\,dx=0. \] Taking the imaginary part of the above identity, we obtain $c_{\boldsymbol{u}}=0$ (see the details in the proof of Proposition \ref{trappedmodes-H}). We deduce that $\boldsymbol{u}$ belongs to $\boldsymbol{\mrm{Z}}^{-\beta}_T(\mu)$ and from (\ref{eq-intH-sym}), we infer that $\langle\mathbb{A}^{\mrm{out}}_T\boldsymbol{u},\mathbb{T}\boldsymbol{u}\rangle=\overline{\langle\mathbb{A}^{\mrm{out}}_T\,\mathbb{T}\boldsymbol{u},\boldsymbol{u}\rangle}$. This gives \[
0=\int_{\Omega}r^{2\beta}|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}|^2\,dx \] and shows that $\boldsymbol{u}=0$. \end{proof} \begin{proposition}\label{PropCompactH} Under Assumptions 1--3, the embedding of the space $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ is compact. As a consequence, the operator $\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ defined in (\ref{DefOpChampH}) is compact. \end{proposition} \begin{proof} Let $(\boldsymbol{u}_n)$ be a bounded sequence of elements of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$. For all $n\in\mathbb{N}$, we have $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}_n=c_{\boldsymbol{u}_n}\varepsilon\nabla s^++\boldsymbol{\zeta}_{\boldsymbol{u}_n}$. By definition of the norm of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$, the sequence $(c_{\boldsymbol{u}_n})$ is bounded in $\mathbb{C}$. Let $\boldsymbol{w}$ be an element of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ such that $c_{\boldsymbol{w}}=1$ (if such $\boldsymbol{w}$ did not exist, then we would have $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)=\boldsymbol{\mrm{Z}}^{-\beta}_T(\mu)\subset\boldsymbol{\mrm{X}}_T(\mu)$ and the proof would be even simpler). The sequence $(\boldsymbol{u}_n-c_{\boldsymbol{u}_n}\boldsymbol{w})$ is bounded in $\boldsymbol{\mrm{X}}_T(\mu)$. Since this space is compactly embedded in $\boldsymbol{\mrm{L}}^2(\Omega)$ when $A_{\mu}:\mrm{H}^1_{\#}(\Omega)\to(\mrm{H}^1_{\#}(\Omega))^{\ast}$ is an isomorphism (see \cite[Theorem 5.3]{BoCC14}), we infer that we can extract from $(\boldsymbol{u}_n-c_{\boldsymbol{u}_n}\boldsymbol{w})$ a subsequence which converges in $\boldsymbol{\mrm{L}}^2(\Omega)$. Since clearly we can also extract a subsequence of $(c_{\boldsymbol{u}_n})$ which converges in $\mathbb{C}$, we conclude that we can extract from $(\boldsymbol{u}_n)$ a subsequence which converges in $\boldsymbol{\mrm{L}}^2(\Omega)$. 
This shows that the embedding of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ is compact.\\ Now observing that for all $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$, we have \[
\|\mathbb{K}^{\mrm{out}}_T\boldsymbol{u}\|_{(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}} \le C\,\|\boldsymbol{u}\|_{\Omega}, \] we deduce that $\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is a compact operator. \end{proof} \noindent We can now state the main theorem of the analysis of the problem for the magnetic field. \begin{theorem}\label{MainThmH} Under Assumptions 1--3, for all $\omega\in\mathbb{R}$ the operator $\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is Fredholm of index zero. \end{theorem} \begin{proof} Since $\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is compact (Proposition \ref{PropCompactH}) and $\mathbb{A}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is an isomorphism (Proposition \ref{propoUniquenessH}), the operator $\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T=\mathbb{A}^{\mrm{out}}_T\,(\mrm{Id}-\omega^2\,(\mathbb{A}^{\mrm{out}}_T)^{-1}\mathbb{K}^{\mrm{out}}_T):\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$, which is the composition of a compact perturbation of the identity of $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ with an isomorphism, is Fredholm of index zero. \end{proof} \noindent Finally we establish a result similar to Proposition \ref{trappedmodes} by using the formulation for the magnetic field. \begin{proposition}\label{trappedmodes-H} Under Assumptions 1 and 3, if $\boldsymbol{u}\in \boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ is a solution of (\ref{MainPbH}) for $\boldsymbol{J}=0$, then $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{-\gamma}_T(\mu)\subset\boldsymbol{\mrm{X}}_T(\mu)$ for all $\gamma$ satisfying (\ref{betainfbeta_0et1demi}). 
\end{proposition} \begin{proof} Assume that $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ satisfies \[ \fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot\overline{\boldsymbol{v}}=0,\qquad \forall \boldsymbol{v}\in \boldsymbol{\mrm{Z}}^{\beta}_T(\mu). \] Taking the imaginary part of this identity for $\boldsymbol{v}=\boldsymbol{u}$, since $\omega$ is real, we get \[ \Im m\,\left(\fint_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{u}}\,dx\right)=0. \] If $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta_{\boldsymbol{u}}}$ with $c_{\boldsymbol{u}}\in \mathbb{C}$ and $\boldsymbol{\zeta_{\boldsymbol{u}}}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$, according to (\ref{eq-intH-sym}), this can be written as \[
|c_{\boldsymbol{u}}|^2 \Im m\,\left(\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+ )\,\overline{s^{+}}\,dx\right) =0. \] Then one concludes as in the proof of Proposition \ref{trappedmodes} that $c_{\boldsymbol{u}}=0$, so that $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. Therefore we have $\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\boldsymbol{\mrm{X}}_N(\varepsilon)\subset\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. From Lemma \ref{LemmaNormeEquiv}, we deduce that $\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\in\boldsymbol{\mrm{V}}^0_{-\gamma}(\Omega)$ for all $\gamma$ satisfying (\ref{betainfbeta_0et1demi}). This shows that $\boldsymbol{u}\in\boldsymbol{\mrm{Z}}^{-\gamma}_T(\mu)$ for all $\gamma$ satisfying (\ref{betainfbeta_0et1demi}). \end{proof} \begin{remark}\label{frameworkIndbeta} Assume that $\boldsymbol{J}\in\boldsymbol{\mrm{V}}^0_{-\gamma}(\Omega)$ for all $\gamma$ satisfying (\ref{betainfbeta_0et1demi}). Assume also that zero is the only solution of (\ref{MainPbH}) with $\boldsymbol{J}=0$ for a certain $\beta_0$ satisfying (\ref{betainfbeta_0et1demi}). Then Theorem \ref{MainThmH} and Proposition \ref{trappedmodes-H} guarantee that (\ref{MainPbH}) is well-posed for all $\gamma$ satisfying (\ref{betainfbeta_0et1demi}). Moreover Proposition \ref{trappedmodes-H} allows one to show that all the solutions of (\ref{MainPbH}) for $\gamma$ satisfying (\ref{betainfbeta_0et1demi}) coincide. \end{remark}
\subsection{Analysis in the classical framework} In the previous paragraph, we proved that the formulation (\ref{MainPbH}) for the magnetic field with a solution in $\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)$ and test functions in $\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$ is well-posed. Here, we study the properties of the problem for the magnetic field set in the classical space $\boldsymbol{\mrm{X}}_T(\mu)$. More precisely, we consider the problem \begin{equation}\label{MainPbHCla}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(\mu)\mbox{ such that }\\ \displaystyle\int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot\overline{\boldsymbol{v}}= \int_\Omega \varepsilon^{-1}\boldsymbol{J}\cdot \boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}},\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{X}}_T(\mu). \end{array} \end{equation} Working as in the proof of Lemma \ref{EquivHcurlH}, one shows that under Assumptions 1, 2, the field $\boldsymbol{u}$ is a solution of (\ref{MainPbHCla}) if and only if it solves the problem \begin{equation}\label{MainPbHClaHcurl}
\begin{array}{|l} \mbox{Find }\boldsymbol{u}\in\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)\mbox{ such that }\\ \displaystyle\int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot\overline{\boldsymbol{v}}= \int_\Omega \varepsilon^{-1}\boldsymbol{J}\cdot \boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}},\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,). \end{array} \end{equation} \noindent Define the continuous operators $\mathbb{A}_T:\boldsymbol{\mrm{X}}_T(\mu)\to(\boldsymbol{\mrm{X}}_T(\mu))^{\ast}$ and $\mathbb{K}_T:\boldsymbol{\mrm{X}}_T(\mu)\to(\boldsymbol{\mrm{X}}_T(\mu))^{\ast}$ such that for all $\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(\mu)$, $\boldsymbol{v}\in\boldsymbol{\mrm{X}}_T(\mu)$, \[ \langle\mathbb{A}_T\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\langle\mathbb{K}_T\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\mu\boldsymbol{u}\cdot\overline{\boldsymbol{v}}\,dx. \] As for $\mathbb{A}_N$ and $\mathbb{K}_N$, we emphasize that these are the classical operators which appear in the analysis of the magnetic field, for example when $\varepsilon$ and $\mu$ are positive in $\Omega$. \begin{proposition}\label{propoUniquenessCla} Under Assumptions 1--3, for all $\omega\in\mathbb{C}$ the operator $\mathbb{A}_T-\omega^2\mathbb{K}_T:\boldsymbol{\mrm{X}}_T(\mu)\to(\boldsymbol{\mrm{X}}_T(\mu))^{\ast}$ is not Fredholm. \end{proposition} \begin{proof} From \cite[Theorem 5.3 and Corollary 5.4]{BoCC14}, we know that under the Assumptions 1, 2, the embedding of $\boldsymbol{\mrm{X}}_T(\mu)$ in $\boldsymbol{\mrm{L}}^2(\Omega)$ is compact. This allows us to prove that $\mathbb{K}_T:\boldsymbol{\mrm{X}}_T(\mu)\to(\boldsymbol{\mrm{X}}_T(\mu))^{\ast}$ is a compact operator. 
Therefore, it suffices to show that $\mathbb{A}_T:\boldsymbol{\mrm{X}}_T(\mu)\to(\boldsymbol{\mrm{X}}_T(\mu))^{\ast}$ is not Fredholm. Let us argue by contradiction, assuming that $\mathbb{A}_T$ is Fredholm. Since this operator is self-adjoint (it is symmetric and bounded), necessarily it is of index zero.\\ \newline $\star$ If $\mathbb{A}_T$ is injective, then it is an isomorphism. Let us show that in this case, $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is an isomorphism (which is not the case by assumption). To proceed, we construct a continuous operator $\texttt{T}:\mrm{H}^1_0(\Omega)\to\mrm{H}^1_0(\Omega)$ such that \begin{equation}\label{IdentityIsom} \langle A_{\varepsilon}\varphi,\texttt{T}\varphi'\rangle=\int_{\Omega}\varepsilon\nabla\varphi\cdot\nabla(\overline {\texttt{T}\varphi'})\,dx=\int_{\Omega}\nabla\varphi\cdot\nabla\overline {\varphi'}\,dx,\qquad \forall \varphi,\varphi'\in\mrm{H}^1_0(\Omega). \end{equation} When $\mathbb{A}_T$ is an isomorphism, for any $\varphi'\in\mrm{H}^1_0(\Omega)$, there is a unique $\boldsymbol{\psi}\in\boldsymbol{\mrm{X}}_T(\mu)$ such that \[ \int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx=\int_{\Omega}\varepsilon^{-1}\nabla\varphi'\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx,\qquad\forall \boldsymbol{\psi}'\in\boldsymbol{\mrm{X}}_T(\mu). \] Using item $iii)$ of Proposition \ref{propoPotential}, one can show that there is a unique $\texttt{T}\varphi'\in\mrm{H}^1_0(\Omega)$ such that \[ \nabla(\texttt{T}\varphi')=\varepsilon^{-1}(\nabla\varphi'-\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}). \] This defines our operator $\texttt{T}:\mrm{H}^1_0(\Omega)\to\mrm{H}^1_0(\Omega)$ and one can verify that it is continuous. 
Moreover, integrating by parts, we indeed get (\ref{IdentityIsom}), which guarantees, according to the Lax-Milgram theorem, that $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is an isomorphism.\\ \newline $\star$ If $\mathbb{A}_T$ is not injective, it has a kernel of finite dimension $N\ge1$ which coincides with $\mrm{span}(\boldsymbol{\lambda}_1,\dots,\boldsymbol{\lambda}_N)$, where $\boldsymbol{\lambda}_1,\dots,\boldsymbol{\lambda}_N\in\boldsymbol{\mrm{X}}_T(\mu)$ are linearly independent functions such that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i,\boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_j)_{\Omega}=\delta_{ij}$ (the Kronecker symbol). Introduce the space \[
\tilde{\boldsymbol{\mrm{X}}}_T(\mu):=\{\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(\mu)\,|\,(\boldsymbol{\mrm{curl}}\,\boldsymbol{u},\boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i)_{\Omega}=0,\,i=1,\dots,N\} \] as well as the operator $\tilde{\mathbb{A}}_T:\tilde{\boldsymbol{\mrm{X}}}_T(\mu)\to(\tilde{\boldsymbol{\mrm{X}}}_T(\mu))^{\ast}$ such that \[ \langle\tilde{\mathbb{A}}_T\boldsymbol{u},\boldsymbol{v}\rangle = \int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{v}}\,dx,\qquad\forall\boldsymbol{u},\boldsymbol{v}\in\tilde{\boldsymbol{\mrm{X}}}_T(\mu). \] Then $\tilde{\mathbb{A}}_T$ is an isomorphism. Let us construct a new operator $\texttt{T}:\mrm{H}^1_0(\Omega)\to\mrm{H}^1_0(\Omega)$ to obtain an identity similar to (\ref{IdentityIsom}). For a given $\varphi'\in\mrm{H}^1_0(\Omega)$, introduce the function $\boldsymbol{\psi}\in\tilde{\boldsymbol{\mrm{X}}}_T(\mu)$ such that \begin{equation}\label{LineSmallPb} \int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx=\int_{\Omega}(\varepsilon^{-1} \nabla\varphi' -\sum_{i=1}^{N} \alpha_i \boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i)\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx,\qquad\forall \boldsymbol{\psi}'\in\tilde{\boldsymbol{\mrm{X}}}_T(\mu), \end{equation} where for $i=1,\dots,N$, we have set $\alpha_i:=\textstyle\int_\Omega \varepsilon^{-1}\nabla \varphi'\cdot\boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i\,dx$. 
Observing that (\ref{LineSmallPb}) is also valid for $\boldsymbol{\psi}'=\boldsymbol{\lambda}_i$, $i=1,\dots,N$, we infer that there holds \[ \int_{\Omega}\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx=\int_{\Omega}(\varepsilon^{-1} \nabla\varphi' -\sum_{i=1}^{N} \alpha_i \boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i)\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx,\qquad\forall \boldsymbol{\psi}'\in\boldsymbol{\mrm{X}}_T(\mu). \] Using again item $iii)$ of Proposition \ref{propoPotential}, we deduce that there is a unique $\texttt{T}\varphi'\in\mrm{H}^1_0(\Omega)$ such that \[ \nabla(\texttt{T}\varphi')=\varepsilon^{-1}(\nabla\varphi'-\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})-\sum_{i=1}^{N} \alpha_i \boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i. \] This defines the new continuous operator $\texttt{T}:\mrm{H}^1_0(\Omega)\to\mrm{H}^1_0(\Omega)$. Then one finds \[ \langle A_{\varepsilon}\varphi,\texttt{T}\varphi'\rangle=\int_{\Omega}\varepsilon\nabla\varphi\cdot\nabla(\overline {\texttt{T}\varphi'})\,dx=\int_{\Omega}\nabla\varphi\cdot\nabla\overline {\varphi'}\,dx-\sum_{i=1}^{N}\overline{\alpha_i}\int_\Omega \varepsilon\nabla\varphi\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\lambda}_i}\,dx ,\quad \forall \varphi,\varphi'\in\mrm{H}^1_0(\Omega). \] This shows that $\texttt{T}$ is a left parametrix for the self-adjoint operator $A_{\varepsilon}$. Therefore, $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is Fredholm of index zero. One can then verify that $\dim\ker\,A_{\varepsilon}=\dim\ker\,\mathbb{A}_T$. More precisely, we have $\ker\,A_{\varepsilon}=\mrm{span}(\gamma_1,\dots,\gamma_N)$ where $\gamma_i\in\mrm{H}^1_0(\Omega)$ is the function such that \[ \nabla \gamma_i=\varepsilon^{-1}\boldsymbol{\mrm{curl}}\,\boldsymbol{\lambda}_i \] (existence and uniqueness of $\gamma_i$ are again a consequence of item $iii)$ of Proposition \ref{propoPotential}). 
But by assumption, $A_{\varepsilon}$ is not a Fredholm operator. This ends the proof by contradiction. \end{proof} \begin{remark} In the article \cite{BoCC14}, it is proved that if $A_{\varepsilon}:\mrm{H}^1_0(\Omega)\to(\mrm{H}^1_0(\Omega))^{\ast}$ is an isomorphism (resp. a Fredholm operator of index zero), then $\mathbb{A}_T:\boldsymbol{\mrm{X}}_T(1)\to(\boldsymbol{\mrm{X}}_T(1))^{\ast}$ is an isomorphism (resp. a Fredholm operator of index zero). Here we have established the converse statement. \end{remark} \begin{remark} We emphasize that the problems (\ref{MainPbECla}) for the electric field and (\ref{MainPbHCla}) for the magnetic field in the usual spaces $\boldsymbol{\mrm{X}}_N(\varepsilon)$ and $\boldsymbol{\mrm{X}}_T(\mu)$ have different properties. Problem (\ref{MainPbECla}) is well-posed but is not equivalent to the corresponding problem in $\boldsymbol{\mrm{H}}_N(\boldsymbol{\mrm{curl}}\,)$, so that its solution in general is not a distributional solution of Maxwell's equations. On the contrary, problem (\ref{MainPbHCla}) is equivalent to problem (\ref{MainPbHClaHcurl}) in $\boldsymbol{\mrm{H}}(\boldsymbol{\mrm{curl}}\,)$ but it is not well-posed. \end{remark}
\subsection{Expression of the singular coefficient} Under Assumptions 1--3, Theorem \ref{MainThmH} guarantees that for all $\omega\in\mathbb{R}$ the operator $\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is Fredholm of index zero. Assuming that it is injective, the problem (\ref{MainPbH}) admits a unique solution $\boldsymbol{u}$ with $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta}_{\boldsymbol{u}}$. As in \S\ref{paraSingCoef}, the goal of this paragraph is to derive a formula for the coefficient $c_{\boldsymbol{u}}$ which does not require knowing $\boldsymbol{u}$.\\ \newline For $\omega\in\mathbb{R}$, introduce the field $\boldsymbol{w}_T\in\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)$ such that \begin{equation}\label{defwSinguBis} \displaystyle\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{v}}}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{v}\cdot \overline{\boldsymbol{w}_T}\,dx
= \int_\Omega \boldsymbol{\zeta_{\boldsymbol{v}}}\cdot\nabla\overline{s^+}\,dx,\qquad \forall\boldsymbol{v}\in\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu). \end{equation} Note that $\boldsymbol{w}_T$ is well-defined because $(\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T)^{\ast}:\boldsymbol{\mrm{Z}}^{\beta}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu))^{\ast}$ is an isomorphism. \begin{theorem} Assume that $\omega\in\mathbb{R}$, Assumptions 1--3 are valid and $\mathbb{A}^{\mrm{out}}_T-\omega^2\mathbb{K}^{\mrm{out}}_T:\boldsymbol{\mrm{Z}}^{\mrm{out}}_T(\mu)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(\mu))^{\ast}$ is injective. Let $\boldsymbol{u}$ denote the solution of the magnetic problem (\ref{MainPbH}). Then the coefficient $c_{\boldsymbol{u}}$ in the decomposition $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=c_{\boldsymbol{u}}\,\varepsilon\nabla s^++\boldsymbol{\zeta}_{\boldsymbol{u}}$ is given by the formula \begin{equation}\label{FormulaCoefSinguH} c_{\boldsymbol{u}}=i\omega\int_{\Omega} \varepsilon^{-1}\boldsymbol{J}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx\bigg/\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx. \end{equation} Here $\boldsymbol{w}_T$ is the function which solves (\ref{defwSinguBis}). \end{theorem} \begin{proof} By definition of $\boldsymbol{u}$, we have \[ \displaystyle\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot \overline{\boldsymbol{w}_T}\,dx= i\omega\int_{\Omega} \varepsilon^{-1}\boldsymbol{J}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx. \] On the other hand, from (\ref{defwSinguBis}), we can write \[ \displaystyle\int_{\Omega}\varepsilon^{-1}\boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx -\omega^2 \int_\Omega \mu \boldsymbol{u}\cdot \overline{\boldsymbol{w}_T}\,dx
= \int_\Omega \boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\nabla\overline{s^+}\,dx. \] From these two relations, using (\ref{eq-intH-sym}), we deduce that \[ i\omega\int_{\Omega} \varepsilon^{-1}\boldsymbol{J}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{w}_T}\,dx=\int_\Omega \boldsymbol{\zeta_{\boldsymbol{u}}}\cdot\nabla\overline{s^+}\,dx=c_{\boldsymbol{u}}\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx. \] This gives (\ref{FormulaCoefSinguH}). \end{proof}
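\noindent Let us complement this result with an observation on formula (\ref{FormulaCoefSinguH}): its denominator is nonzero. Indeed, as used in the proof of Proposition \ref{trappedmodes-H} to obtain $c_{\boldsymbol{u}}=0$, there holds
\[
\Im m\,\left(\int_{\Omega}\mrm{div}(\varepsilon\nabla s^{+})\,\overline{s^+}\,dx\right)\ne0,
\]
so that formula (\ref{FormulaCoefSinguH}) is always meaningful.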
\section{Conclusion}\label{SectionConclusion} In this work, we studied Maxwell's equations in the presence of hypersingularities for the scalar problem involving $\varepsilon$. We considered both the problem for the electric field and for the magnetic field. Quite naturally, in order to obtain a framework where well-posedness holds, it is necessary to modify the spaces in different ways. More precisely, we changed the condition on the field itself for the electric problem and on the curl of the field for the magnetic problem. A noteworthy difference in the analysis of the two problems is that for the electric field, we are led to work in a Hilbertian framework, whereas for the magnetic field we have not been able to do so.\\ \newline Of course, we could have assumed that the scalar problem involving $\varepsilon$ is well-posed in $\mrm{H}^1_0(\Omega)$ and that hypersingularities exist for the problem in $\mu$. This would have been similar mathematically. Physically, however, this situation seems to be a bit less relevant because it is harder to produce negative $\mu$ without dissipation. We assumed that the domain $\Omega$ is simply connected and that $\partial\Omega$ is connected. When these assumptions are not met, it is necessary to adapt the analysis (see \S8.2 of \cite{BoCC14} for the study in the case where the scalar problems are well-posed in the usual $\mrm{H}^1$ framework). This remains to be done. Moreover, for the conical tip, at least numerically, one finds that several singularities can exist (see the calculations in \cite{KCHWS14}). In this case, the analysis should follow the same lines, but the details remain to be written. On the other hand, in this work, we focused our attention on a situation where the interface between the positive and the negative material has a conical tip. It would be interesting to study a setting where there is a wedge instead. In this case, roughly speaking, one should deal with a continuum of singularities. 
Let us mention that the analysis of the scalar problems for a wedge of negative material in the non-standard framework has not been carried out yet. Finally, considering a conical tip where both $\varepsilon$ and $\mu$ are critical is a direction that we are investigating.
\appendix \section{Vector potentials, part 1} \begin{proposition}\label{propoPotential} Under Assumption 1, the following assertions hold.\\ \newline i)\, According to \cite[Theorem 3.12]{AmrBerDau98}, if $\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$ satisfies $\mrm{div}\,\boldsymbol{u}=0$ in $\Omega$, then there exists a unique $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_T(1)$ such that $\boldsymbol{u}= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}$.\\ \newline ii)\, According to \cite[Theorem 3.17]{AmrBerDau98}, if $\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$ satisfies $\mrm{div}\,\boldsymbol{u}=0$ in $\Omega$ and $\boldsymbol{u}\cdot\nu=0$ on $\partial\Omega$, then there exists a unique $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_N(1)$ such that $\boldsymbol{u}= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}$.\\ \newline iii)\, If $\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$ satisfies $\boldsymbol{\mrm{curl}}\,\boldsymbol{u}=0$ in $\Omega$ and $\boldsymbol{u}\times \nu=0$ on $\partial\Omega$, then there exists (see \cite[Theorem 3.41]{Mon03}) a unique $p\in \mrm{H}^1_0(\Omega)$ such that $\boldsymbol{u}= \nabla p$.\\ \newline iv)\, Every $\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$ can be decomposed as follows (\cite[Theorem 3.45]{Mon03}) $$ \boldsymbol{u}= \nabla p +\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}, $$ with $p\in \mrm{H}^1_0(\Omega)$ and $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_T(1)$ which are uniquely defined. \\ \newline v)\, Every $\boldsymbol{u}\in \boldsymbol{\mrm{L}}^2(\Omega)$ can be decomposed as follows (\cite[Remark 3.46]{Mon03}) $$ \boldsymbol{u}= \nabla p +\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}, $$ with $p\in \mrm{H}^1_{\#}(\Omega)$ and $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_N(1)$ which are uniquely defined. \end{proposition}
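\noindent Let us briefly recall how the decomposition of item $iv)$ is classically obtained. One first defines $p$ as the unique solution of the problem
\[
\mbox{Find }p\in\mrm{H}^1_0(\Omega)\mbox{ such that }\int_{\Omega}\nabla p\cdot\nabla\overline{q}\,dx=\int_{\Omega}\boldsymbol{u}\cdot\nabla\overline{q}\,dx,\qquad\forall q\in\mrm{H}^1_0(\Omega),
\]
which is well-posed according to the Lax-Milgram theorem. Then the field $\boldsymbol{u}-\nabla p$ satisfies $\mrm{div}\,(\boldsymbol{u}-\nabla p)=0$ in $\Omega$, so that item $i)$ provides a unique $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_T(1)$ such that $\boldsymbol{u}-\nabla p=\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}$.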
\begin{proposition}\label{propoLaplacienVect} Under Assumption 1, if $\boldsymbol{\psi}$ satisfies one of the following conditions \newline i) $\boldsymbol{\psi}\in\boldsymbol{\mrm{X}}_N(1)$ and $\boldsymbol{\Delta \psi}\in \boldsymbol{\mrm{L}}^2(\Omega)$, \newline ii) $\boldsymbol{\psi}\in\boldsymbol{\mrm{X}}_T(1)$, $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\times \nu=0$ on $\partial\Omega$ and $\boldsymbol{\Delta \psi}\in \boldsymbol{\mrm{L}}^2(\Omega)$, \newline then for all $\beta<1/2$, we have $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and there is a constant $C>0$ independent of $\boldsymbol{\psi}$ such that \begin{equation}\label{EstimateWeightedLap}
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\Delta \psi}\|_{\Omega}. \end{equation} \end{proposition} \begin{proof} It suffices to prove the result for $\beta\in(0;1/2)$. Let $\boldsymbol{\psi}\in\boldsymbol{\mrm{X}}_N(1)\cup \boldsymbol{\mrm{X}}_T(1)$. Since $\boldsymbol{\mrm{curl}}\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}=-\boldsymbol{\Delta \psi} $, integrating by parts we get \[
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|^2_{\Omega}=-\int_{\Omega}\boldsymbol{\Delta \psi}\cdot\overline{\boldsymbol{\psi}}\,dx. \] Note that the boundary term vanishes because either $\boldsymbol{\psi}\times \nu=0$ or $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\times \nu=0$ on $\partial\Omega$. Combining this identity with the Cauchy-Schwarz inequality and the inequality $\|\boldsymbol{\psi}\|_{\Omega}\le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega}$, valid both in $\boldsymbol{\mrm{X}}_N(1)$ and in $\boldsymbol{\mrm{X}}_T(1)$ under Assumption 1, furnishes the estimate \begin{equation}
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega} \le C\,\|\boldsymbol{\Delta \psi}\|_{\Omega}.\label{nomrcurlL2} \end{equation} Now working with cut-off functions, we refine the estimate at the origin to get (\ref{EstimateWeightedLap}). \newline
Let us consider a smooth cut-off function $\chi$, compactly supported in $\Omega$, equal to one in a neighbourhood of $O$.
To prove the proposition, in addition to (\ref{nomrcurlL2}), it suffices to show that $\boldsymbol{\mrm{curl}}\,(\chi\boldsymbol{\psi})\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ together with the estimate $\|\boldsymbol{\mrm{curl}}\, (\chi\boldsymbol{\psi})\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\Delta \psi}\|_{\Omega}$. \\
First of all, since $\boldsymbol{\mrm{curl}}\, (\chi\boldsymbol{\psi})\in \boldsymbol{\mrm{L}}^2(\Omega)$ and $\mrm{div}(\chi\boldsymbol{\psi})=\nabla\chi\cdot\boldsymbol{\psi}\in \mrm{L}^2(\Omega)$, we know that $\chi\boldsymbol{\psi_i}\in\mrm{H}^1_0(\Omega)$ for $i=1,2,3$ and we have \[
\|\boldsymbol{\mrm{curl}}\, (\chi\boldsymbol{\psi})\|_{\Omega}^2+\|\mrm{div} (\chi\boldsymbol{\psi})\|_{\Omega}^2=\sum_{i=1}^3\|\nabla(\chi\boldsymbol{\psi_i})\|_{\Omega}^2. \]
From the previous identity, (\ref{nomrcurlL2}) and Proposition \ref{PropoEmbeddingCla}, we deduce \begin{equation}\label{normH1}
\left(\|\boldsymbol{\psi}\|^2_{\Omega}+\sum_{i=1}^3\|\nabla (\chi\boldsymbol{\psi_i})\|_{\Omega}^2\right)^{1/2}\leq C\,\|\boldsymbol{\Delta \psi}\|_{\Omega}. \end{equation} Note that \eqref{normH1} is also valid if we replace $\chi$ by any other smooth function with compact support in $\Omega$. Now setting $f_i=\Delta(\chi \boldsymbol{\psi_i})$ for $i=1,2,3$, we have \begin{equation} \label{laplaciantranc}
f_i= \chi {\Delta} \boldsymbol{\psi}_i+2\,\nabla \chi\cdot\nabla\boldsymbol{\psi}_i +\boldsymbol{\psi}_i {\Delta} \chi .
\end{equation}
By writing that $\nabla \chi\cdot\nabla\boldsymbol{\psi}_i=\mrm{div}(\boldsymbol{\psi}_i\nabla \chi)-\boldsymbol{\psi}_i\Delta\chi$ and replacing $\chi$ by $\partial_j\chi$ in
(\ref{normH1}) for $j=1,2,3$, we deduce that for $i=1,2,3$, $f_i$ belongs to $\mrm{L}^2(\Omega)$ and satisfies \[
\|f_i\|_{ \Omega}\leq C \|\boldsymbol{\Delta \psi}\|_{\Omega}. \] Note that since $\beta\in(0;1/2)$, we have $\mathring{\mrm{V}}^1_\beta(\Omega)\subset\mrm{V}^0_{\beta-1}(\Omega)\subset\mrm{L}^2(\Omega)$ and so $\mrm{L}^2(\Omega)\subset(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$. Now starting from the fact that $\chi \boldsymbol{\psi_i}\in \mrm{H}^1_0(\Omega)$ in addition to $\Delta(\chi \boldsymbol{\psi_i})=f_i\in\mrm{L}^2(\Omega)\subset(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast$, by applying Proposition \ref{PropoLaplaceOp}, we deduce that $\chi \boldsymbol{\psi_i}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$ with the estimate \[
\|\chi \boldsymbol{\psi_i} \|_{\mathring{\mrm{V}}^1_{-\beta}(\Omega)}\leq C\, \|f_i\|_{(\mathring{\mrm{V}}^1_\beta(\Omega))^\ast}\leq C\, \|f_i\|_\Omega.
\]
As a consequence, $\boldsymbol{\mrm{curl}}\,(\chi\boldsymbol{\psi})\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and
\[\|\boldsymbol{\mrm{curl}}\,(\chi\boldsymbol{\psi})\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)}\leq C \sum_{i=1}^3 \|\chi \boldsymbol{\psi_i} \|_{\mathring{\mrm{V}}^1_{-\beta}(\Omega)}\leq C\,\sum_{i=1}^3 \|f_i\|_\Omega \leq C \|\boldsymbol{\Delta \psi}\|_{\Omega}, \]
which concludes the proof. \end{proof} \begin{proposition}\label{propoLaplacienVectcompact} Under Assumption 1, the following assertions hold:\\ i) if $(\boldsymbol{\psi_n})$ is a bounded sequence of elements of $\boldsymbol{\mrm{X}}_N(1)$ such that $(\boldsymbol{\Delta \psi_n})$ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$, then one can extract a subsequence such that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n})$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for all $\beta\in(0;1/2)$;
\newline ii) if $(\boldsymbol{\psi_n})$ is a bounded sequence of elements of $\boldsymbol{\mrm{X}}_T(1)$ such that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n}\times \nu=0$ on $\partial\Omega$ and such that $(\boldsymbol{\Delta \psi_n})$ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$, then one can extract a subsequence such that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n})$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ for all $\beta\in(0;1/2)$. \end{proposition} \begin{proof} Let us establish the first assertion, the proof of the second one being similar. Let $(\boldsymbol{\psi_n})$ be a bounded sequence of elements of $\boldsymbol{\mrm{X}}_N(1)$ such that $(\boldsymbol{\Delta \psi_n})$ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$. Observing that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n}=-\boldsymbol{\Delta \psi_n} $, we deduce that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n})$ is a bounded sequence of $\boldsymbol{\mrm{X}}_T(1)$. Since the spaces $\boldsymbol{\mrm{X}}_N(1)$ and $\boldsymbol{\mrm{X}}_T(1)$ are compactly embedded in $\boldsymbol{\mrm{L}}^2(\Omega)$ (see Proposition \ref{PropoEmbeddingCla}), one can extract a subsequence such that both $ (\boldsymbol{\psi_n})$ and $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n})$ converge in $\boldsymbol{\mrm{L}}^2(\Omega)$.
\newline
Then, working as in the proof of Proposition \ref{propoLaplacienVect}, we can show that for a smooth cut-off function $\chi$ compactly supported in $\Omega$ and equal to one in a neighbourhood of $O$, the sequence $(\chi\boldsymbol{\psi_n})$ is bounded in $ \boldsymbol{\mrm{V}}^2_{\gamma}(\Omega):=(\mrm{V}^2_{\gamma}(\Omega))^3$ for all $\gamma>1/2$. To obtain this result, we use in particular the fact that if $\mathscr{O}\subset\mathbb{R}^3$ is a smooth bounded domain such that $O\in\mathscr{O}$, then $\Delta:\mrm{V}^2_{\gamma}(\mathscr{O})\cap\mathring{\mrm{V}}^1_{\gamma-1}(\mathscr{O})\to\mrm{V}^0_{\gamma}(\mathscr{O})$ is an isomorphism for all $\gamma\in(1/2;3/2)$ (see \cite[\S1.6.2]{MaNP00}). Finally, to obtain the result of the proposition, we use the fact that $ \boldsymbol{\mrm{V}}^2_{\gamma}(\mathscr{O})$ is compactly embedded in $ \boldsymbol{\mrm{V}}^1_{\gamma'}(\mathscr{O})$ as soon as $\gamma-1<\gamma'$ (\cite[Lemma 6.2.1]{KoMR97}). This allows us to prove that for all $\beta<1/2$, the subsequence $(\chi\boldsymbol{\psi_n})$ converges in $\boldsymbol{\mrm{V}}^1_{-\beta}(\Omega)$, so that $(\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi_n})$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$.
\end{proof}
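\noindent Let us make the choice of exponents in the last step of the proof explicit. Given $\beta\in(0;1/2)$, one can pick
\[
\gamma\in\left(\frac{1}{2};1-\beta\right),
\]
which is possible because $1-\beta>1/2$. Then $\gamma>1/2$ as required for the boundedness of $(\chi\boldsymbol{\psi_n})$ in $\boldsymbol{\mrm{V}}^2_{\gamma}(\Omega)$, while $\gamma-1<-\beta$ guarantees that $\boldsymbol{\mrm{V}}^2_{\gamma}(\Omega)$ is compactly embedded in $\boldsymbol{\mrm{V}}^1_{-\beta}(\Omega)$.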
\noindent The next two lemmas provide additional regularity results for the elements of the classical Maxwell spaces; they are direct consequences of Propositions \ref{propoLaplacienVect} and \ref{propoLaplacienVectcompact}. \begin{lemma}\label{LemmaWeightedCla}
Under Assumption 1, for all $\beta\in(0;1/2)$, $\boldsymbol{\mrm{X}}_T(1)$ is compactly embedded in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. In particular, there is a constant $C>0$ such that
\begin{equation}\label{EstimaWeightedCla}
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega},\qquad\forall\boldsymbol{u}\in\boldsymbol{\mrm{X}}_T(1).
\end{equation} \end{lemma} \begin{proof} Let $\boldsymbol{u}$ be an element of $\boldsymbol{\mrm{X}}_T(1)$. From the item $ii)$ of Proposition \ref{propoPotential}, we know that there exists $\boldsymbol{\psi}\in \boldsymbol{\mrm{X}}_N(1)$ such that $\boldsymbol{u}= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}$. Using that $\boldsymbol{-\Delta\psi}= \boldsymbol{\mrm{curl}}\,\boldsymbol{u} \in \boldsymbol{\mrm{L}}^2(\Omega)$, from Proposition \ref{propoLaplacienVect}, we get that $\boldsymbol{u}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ together with the estimate \[
\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega}. \] This gives (\ref{EstimaWeightedCla}). Now suppose that $(\boldsymbol{u}_n)$ is a bounded sequence of elements of $\boldsymbol{\mrm{X}}_T(1)$. Then there exists a bounded sequence $(\boldsymbol{\psi}_n)$ of elements of $\boldsymbol{\mrm{X}}_N(1)$ such that $\boldsymbol{u}_n= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}_n$. Since $(\boldsymbol{\mrm{curl}}\, \boldsymbol{u}_n=-\Delta \boldsymbol{\psi}_n)$ is bounded in $\boldsymbol{\mrm{L}}^2(\Omega)$, the first item of Proposition \ref{propoLaplacienVectcompact} implies that there is a subsequence such that $(\boldsymbol{u}_n)$ converges in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. \end{proof}
\begin{lemma}\label{LemmaWeightedClaBis} Under Assumption 1, for all $\beta\in(0;1/2)$, $\boldsymbol{\mrm{X}}_N(1)$ is compactly embedded in $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$. In particular, there is a constant $C>0$ such that \[
\|\boldsymbol{u}\|_{\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)} \le C\,\|\boldsymbol{\mrm{curl}}\,\boldsymbol{u}\|_{\Omega},\qquad\forall\boldsymbol{u}\in\boldsymbol{\mrm{X}}_N(1). \] \end{lemma} \begin{proof} The proof is similar to the one of Lemma \ref{LemmaWeightedCla}. \end{proof}
\section{Vector potentials, part 2}
First we establish an intermediate lemma which can be seen as a result of well-posedness for Maxwell's equations in weighted spaces with $\varepsilon=\mu=1$ in $\Omega$. Define the continuous operator $\mathbb{B}_T:\boldsymbol{\mrm{Z}}^{\beta}_T(1)\to(\boldsymbol{\mrm{Z}}^{-\beta}_T(1))^{\ast}$ such that for all $\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{\beta}_T(1)$, $\boldsymbol{\psi}'\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1)$, \[ \langle\mathbb{B}_T\boldsymbol{\psi},\boldsymbol{\psi}'\rangle = \int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx. \] \begin{lemma}\label{LemmaWPPoids} Under Assumption 1, for $0\le\beta<1/2$, the operator $\mathbb{B}_T:\boldsymbol{\mrm{Z}}^{\beta}_T(1)\to(\boldsymbol{\mrm{Z}}^{-\beta}_T(1))^{\ast}$ is an isomorphism. \end{lemma} \begin{proof} Let $\boldsymbol{\psi}$ be an element of $\boldsymbol{\mrm{Z}}^{\beta}_T(1)$. According to Proposition \ref{PropoLaplaceOp}, there is a unique $\varphi\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$ such that \[ \int_{\Omega}\nabla\varphi\cdot\nabla\overline{\varphi'}\,dx=\int_{\Omega}r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\nabla\overline{\varphi'}\,dx,\qquad\forall \varphi'\in\mathring{\mrm{V}}^1_{\beta}(\Omega). \] Then denote $\mathbb{T}\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1)$ the function such that \[ \boldsymbol{\mrm{curl}}\,(\mathbb{T}\boldsymbol{\psi})=r^{2\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}-\nabla\varphi. \] Observe that $\mathbb{T}\boldsymbol{\psi}$ is well-defined according to the item $i)$ of Proposition \ref{propoPotential}. This defines a continuous operator $\mathbb{T}:\boldsymbol{\mrm{Z}}^{\beta}_T(1)\to \boldsymbol{\mrm{Z}}^{-\beta}_T(1)$. We have \[
\langle\mathbb{B}_T\boldsymbol{\psi},\mathbb{T}\boldsymbol{\psi}\rangle=\displaystyle\int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\overline{\boldsymbol{\mrm{curl}}\,(\mathbb{T}\boldsymbol{\psi})}\,dx=\|r^{\beta}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|^2_{\Omega}=\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|^2_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}. \]
Adapting the proof of Lemma \ref{LemmaNormeEquivHPlus}, one can show that $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)}$ is a norm which is equivalent to the natural norm of $\boldsymbol{\mrm{Z}}^{\beta}_T(1)$. Therefore, from the Lax-Milgram theorem, we infer that $\mathbb{T}^{\ast}\mathbb{B}_T$ is an isomorphism, which shows that $\mathbb{B}_T$ is injective and that its image is closed in $(\boldsymbol{\mrm{Z}}^{-\beta}_T(1))^{\ast}$. From this, we deduce that $\mathbb{B}_T$ is onto if and only if its adjoint is injective. The adjoint of $\mathbb{B}_T$ is the operator $\mathbb{B}_T^{\ast}:\boldsymbol{\mrm{Z}}^{-\beta}_T(1)\to(\boldsymbol{\mrm{Z}}^{\beta}_T(1))^{\ast}$ such that for all $\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1)$, $\boldsymbol{\psi}'\in\boldsymbol{\mrm{Z}}^{\beta}_T(1)$, \begin{equation}\label{DefAdjoint} \langle\mathbb{B}_T^{\ast}\boldsymbol{\psi},\boldsymbol{\psi}'\rangle = \int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx. \end{equation}
If $\mathbb{B}_T^{\ast}\boldsymbol{\psi}=0$, then taking $\boldsymbol{\psi}'=\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1)\subset \boldsymbol{\mrm{Z}}^{\beta}_T(1)$ in (\ref{DefAdjoint}), we obtain $\|\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\|_{\Omega}=0$. Since $\boldsymbol{\mrm{Z}}^{-\beta}_T(1)\subset \boldsymbol{\mrm{X}}_T(1)$ and $\|\boldsymbol{\mrm{curl}}\,\cdot\|_{\Omega}$ is a norm in $\boldsymbol{\mrm{X}}_T(1)$ (Proposition \ref{PropoEmbeddingCla}), we deduce that $\boldsymbol{\psi}=0$. This shows that $\mathbb{B}_T^{\ast}$ is injective and that $\mathbb{B}_T$ is an isomorphism. \end{proof} \noindent Now we use the above lemma to prove the following result, which is essential in the analysis of Problem (\ref{MainPbH}) for the magnetic field. In some sense, it extends the result of item $i)$ of Proposition \ref{propoPotential} to singular fields which are not in $\boldsymbol{\mrm{L}}^2(\Omega)$. \begin{proposition}\label{propoPotentialWeight} Under Assumption 1, for all $0\le\beta<1/2$, if $\boldsymbol{u}\in\boldsymbol{\mrm{V}}^0_{\beta}(\Omega)$ satisfies $\mrm{div}\,\boldsymbol{u}=0$ in $\Omega$, then there exists a unique $\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{\beta}_T(1)$ such that $\boldsymbol{u}= \boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}$. \end{proposition} \begin{proof} Let $\boldsymbol{u}\in \boldsymbol{\mrm{V}}^0_{\beta}(\Omega)$ be such that $\mrm{div}\,\boldsymbol{u}=0$ in $\Omega$. According to Lemma \ref{LemmaWPPoids}, we know that there is a unique $\boldsymbol{\psi}\in\boldsymbol{\mrm{Z}}^{\beta}_T(1)$ such that \[ \int_{\Omega}\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx=\int_{\Omega}\boldsymbol{u}\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx,\qquad\forall\boldsymbol{\psi}'\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1). 
\] Then we have \begin{equation}\label{FirstOrtho} \int_{\Omega}(\boldsymbol{u}-\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\cdot\boldsymbol{\mrm{curl}}\,\overline{\boldsymbol{\psi}'}\,dx=0,\qquad\forall\boldsymbol{\psi}'\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1). \end{equation} Since $\boldsymbol{u}$ is divergence free in $\Omega$, we also have \begin{equation}\label{SecondOrtho} \int_{\Omega}(\boldsymbol{u}-\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\cdot\nabla \overline{p'}\,dx=0,\qquad\forall p'\in\mathring{\mrm{V}}^1_{-\beta}(\Omega). \end{equation} Now if $\boldsymbol{v}$ is an element of $\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)\subset\boldsymbol{\mrm{L}}^2(\Omega)$, from item $iv)$ of Proposition \ref{propoPotential}, we know that there holds the decomposition \begin{equation}\label{DecompHInterm} \boldsymbol{v}= \nabla p' +\boldsymbol{\mrm{curl}}\, \boldsymbol{\psi}', \end{equation} for some $p'\in \mrm{H}^1_{0}(\Omega)$ and some $\boldsymbol{\psi}'\in \boldsymbol{\mrm{X}}_T(1)$. Taking the divergence in (\ref{DecompHInterm}), we get \begin{equation}\label{SolLapl} \Delta p'= \mrm{div} \,\boldsymbol{v}\in (\mathring{\mrm{V}}^1_{\beta}(\Omega))^\ast. \end{equation} From Proposition \ref{PropoLaplaceOp}, since $0\le\beta<1/2$, we know that (\ref{SolLapl}) admits a solution in $\mathring{\mrm{V}}^1_{-\beta}(\Omega)\subset\mrm{H}^1_0(\Omega)$. Using uniqueness of the solution of (\ref{SolLapl}) in $\mrm{H}^1_0(\Omega)$, we obtain that $p'\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$. This implies that $\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}'=\boldsymbol{v}-\nabla p'\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega)$ and so $\boldsymbol{\psi}'\in\boldsymbol{\mrm{Z}}^{-\beta}_T(1)$. From (\ref{FirstOrtho}) and (\ref{SecondOrtho}), we infer that \[ \int_{\Omega}(\boldsymbol{u}-\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi})\cdot \overline{\boldsymbol{v}}\,dx=0,\qquad\forall \boldsymbol{v}\in\boldsymbol{\mrm{V}}^0_{-\beta}(\Omega). 
\] This shows that $\boldsymbol{u}=\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}$. Finally, if $\boldsymbol{\psi}_1$, $\boldsymbol{\psi}_2$ are two elements of $\boldsymbol{\mrm{Z}}^{\beta}_T(1)$ such that $\boldsymbol{u}=\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_1=\boldsymbol{\mrm{curl}}\,\boldsymbol{\psi}_2$, then $\boldsymbol{\psi}_1-\boldsymbol{\psi}_2$ belongs to $\boldsymbol{\boldsymbol{\mrm{X}}}_T(1)$ and satisfies $\boldsymbol{\mrm{curl}}\,(\boldsymbol{\psi}_1-\boldsymbol{\psi}_2)=0$ in $\Omega$. From Proposition \ref{PropoEmbeddingCla}, we deduce that $\boldsymbol{\psi}_1=\boldsymbol{\psi}_2$. \end{proof}
\section{Energy flux of the singular function} \label{Appendix energy flux} \begin{lemma}\label{lemmaNRJ} With the notations of (\ref{defSing}), we have \[
\Im m\,\bigg(\int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}})\,{s^+}\,dx\bigg) =\eta\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds. \] \end{lemma} \begin{proof} Set $\Omega_\tau:=\Omega\setminus \overline{B(O,\tau)}$. Noticing that $\mrm{div}(\varepsilon\nabla\overline{s^{+}})$ vanishes in a neighbourhood of the origin, we can write \[ \begin{array}{lcl} \int_{\Omega}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,{s^+}\,dx&=&\lim_{\tau\rightarrow 0}\int_{\Omega_\tau}\mrm{div}(\varepsilon\nabla\overline{s^{+}} )\,{s^+}\,dx\\ &=&
\lim_{\tau\rightarrow 0}\bigg(-\int_{\Omega_\tau}\varepsilon|\nabla s^+|^2dx-\int_{\partial B(O,\tau)}\varepsilon\frac{\overline{\partial s^+}}{\partial r}s^+ds\bigg). \end{array} \] Taking the imaginary part and observing that \[
\int_{\partial B(O,\tau)}\varepsilon\frac{\overline{\partial s^+}}{\partial r}s^+ds=-\left(\frac{1}{2}+i\eta\right)\int_{\mathbb{S}^2}\varepsilon|\Phi|^2ds, \] the result follows. \end{proof}
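\noindent For the reader's convenience, let us indicate where the boundary term comes from. If, in accordance with the notation of (\ref{defSing}), $s^{+}$ coincides near the origin with $r^{-1/2+i\eta}\Phi(\theta,\varphi)$ (up to a smooth cut-off), then on $\partial B(O,\tau)$ we have
\[
\overline{\frac{\partial s^+}{\partial r}}\,s^+=\overline{\left(-\frac{1}{2}+i\eta\right)r^{-3/2+i\eta}\Phi}\;r^{-1/2+i\eta}\Phi=-\left(\frac{1}{2}+i\eta\right)\tau^{-2}|\Phi|^2,
\]
and since $ds=\tau^{2}\,d\sigma$ on $\partial B(O,\tau)$, where $d\sigma$ is the surface measure of $\mathbb{S}^2$, integration yields the identity used above.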
\section{Dimension of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon)$} \begin{lemma}\label{LemmaCodim} Under Assumptions 1--3, we have $\dim\,(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon))=1$. \end{lemma} \begin{proof} If $\boldsymbol{u}_1=c_1\nabla s^{+}+\tilde{\boldsymbol{u}}_1$, $\boldsymbol{u}_2=c_2\nabla s^{+}+\tilde{\boldsymbol{u}}_2$ are two elements of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$, then $c_2\boldsymbol{u}_1-c_1\boldsymbol{u}_2\in\boldsymbol{\mrm{X}}_N(\varepsilon)$, which shows that $\dim\,(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon))\le1$.\\ Now let us prove that $\dim\,(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon))\ge1$. Introduce $\tilde{\mathfrak{s}}\in\mathring{\mrm{V}}^{\mrm{out}}$ the function such that $A^{\mrm{out}}_{\varepsilon}\tilde{\mathfrak{s}}=\mrm{div}(\varepsilon\nabla s^-)$. Note that since $\mrm{div}(\varepsilon\nabla s^-)$ vanishes in a neighbourhood of the origin, it belongs to $(\mathring{\mrm{V}}^1_{\gamma}(\Omega))^{\ast}$ for all $\gamma\in\mathbb{R}$. Then set \begin{equation}\label{defs} \mathfrak{s}=s^-+\tilde{\mathfrak{s}}. \end{equation} Observe that $\mathfrak{s}\in\mathring{\mrm{V}}^1_{\gamma}(\Omega)$ for all $\gamma>0$ and that $\mrm{div}(\varepsilon\nabla \mathfrak{s})=0$ in $\Omega\setminus\{O\}$ ($\mathfrak{s}$ is a non zero element of $\ker\,A^{\gamma}_{\varepsilon}$ for all $\gamma>0$). Let $\tilde{\boldsymbol{u}}\in(\mathscr{C}^{\infty}_0(\Omega\setminus\{O\}))^3$ be a field such that $\textstyle\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\overline{\mathfrak{s}}\,dx\ne0$. 
The existence of such a $\tilde{\boldsymbol{u}}$ can be established thanks to the density of $(\mathscr{C}^{\infty}_0(\Omega\setminus\{O\}))^3$ in $\boldsymbol{\mrm{L}}^2(\Omega)$, considering for example an approximation of $\mathbbm{1}_{\overline{B}}\nabla \mathfrak{s}\in\boldsymbol{\mrm{L}}^2(\Omega)$ where $\mathbbm{1}_{\overline{B}}$ is the indicator function of a ball included in $\mathcal{M}$. Introduce $\zeta=c\,s^++\tilde{\zeta}\in\mathring{\mrm{V}}^{\mrm{out}}$, with $c\in\mathbb{C}$, $\tilde{\zeta}\in\mathring{\mrm{V}}^1_{-\beta}(\Omega)$, the function such that $A^{\mrm{out}}_{\varepsilon}\zeta=-\mrm{div}(\varepsilon\tilde{\boldsymbol{u}})$. This is equivalent to having \[ -c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{\varphi'}\,dx+\int_{\Omega}\varepsilon\nabla\tilde{\zeta}\cdot\nabla\overline{\varphi'}\,dx=\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\overline{\varphi'}\,dx,\qquad \forall\varphi'\in\mathring{\mrm{V}}^1_{\beta}(\Omega). \] Clearly $\nabla\zeta-\tilde{\boldsymbol{u}}=c\nabla s^++(\nabla\tilde{\zeta}-\tilde{\boldsymbol{u}})$ is an element of $\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)$. Moreover, taking $\varphi'=\mathfrak{s}$ above, we get \[ -c\int_{\Omega}\mrm{div}(\varepsilon\nabla s^+)\overline{\mathfrak{s}}\,dx=\int_{\Omega}\varepsilon\tilde{\boldsymbol{u}}\cdot\nabla\overline{\mathfrak{s}}\,dx\ne0. \] This shows that $c\ne0$ and guarantees that $\dim\,(\boldsymbol{\mrm{X}}^{\mrm{out}}_N(\varepsilon)/\boldsymbol{\mrm{X}}_N(\varepsilon))\ge1$. \end{proof}
\end{document}
\begin{document}
\title[Short Title]{Steady-State Entanglement for Distant Atoms by Dissipation in Coupled Cavities}
\author{Li-Tuo Shen$^{1}$} \author{Xin-Yu Chen$^{1}$} \author{Zhen-Biao Yang$^{2}$} \author{Huai-Zhi Wu$^{1}$} \author{Shi-Biao Zheng$^{1}$} \email{[email protected]}
\affiliation{$^{1}$Lab of Quantum Optics, Department of Physics, Fuzhou University, Fuzhou 350002, China\\$^{2}$Key Laboratory of Quantum Information, University of Science and Technology of China, CAS, Hefei 230026, China}
\begin{abstract} We propose a scheme for the generation of entangled states for two atoms trapped in separate cavities coupled to each other. The scheme is based on the competition between the unitary dynamics induced by the classical fields and the collective decays induced by the dissipation of two delocalized field modes. Under certain conditions, the symmetric or asymmetric entangled state is produced in the steady state. The analytical result shows that the distributed steady entanglement can be achieved with high fidelity independently of the initial state, and is robust against parameter fluctuations. We also find that the linear scaling of the entanglement fidelity is a quadratic improvement over distributed entangled-state preparation protocols based on unitary dynamics. \end{abstract}
\pacs{03.67.Bg, 42.50.Pq, 03.67.-a}
\keywords{steady-state entanglement, dissipative channel, coupled cavity} \maketitle
There have been various practical applications for quantum entangled states, ranging from quantum teleportation \cite{PR1935-47-777,JMO1993-40-1195} to universal quantum computation \cite{PRL2000-85-2392,cup2000}. The main obstacle in preserving entanglement is decoherence induced by the environment. Recently, dissipative state preparation has become a focus in quantum computation and entanglement engineering \cite{PRL2011-106-090502,arXiv1110.1024v1,PRA2011-84-022316,arXiv1005.2114v2, PRA2011-83-042329,PRL2011-107-120502,PRA2010-82-054103, PRA2007-76-062311, EPL-85-20007,PRL-89-277901,PRL-91-070402,PRA-76-022312,PRE-77-011112,JPA-39-2689, PRA-77-042305,PRL-100-220401}, which uses decoherence as a powerful resource without destroying the quantum entanglement. These schemes are robust against parameter fluctuations, obtain high-fidelity entanglement from arbitrary initial states, and do not need accurate control of the evolution time. In particular, Kastoryano and Reiter \emph{et al.} \cite{PRL2011-106-090502,arXiv1110.1024v1} proposed a novel scheme for dissipative preparation of entanglement for two atoms in an optical cavity which achieves a qualitative improvement in the scaling of the fidelity with optimal cavity parameters as compared to any state preparation protocol based on coherent unitary dynamics. However, most of the previous theoretical schemes and experiments \cite{PRL2011-107-080503} concentrate on the case in which two atoms are trapped in a single cavity.
For distributed quantum information processing, it is a basic requirement to perform state transfer and quantum gate operations between separate nodes of a quantum network. To overcome the difficulty of individual addressability existing in a single cavity, efforts have been devoted to coupled-cavity models both theoretically \cite{PRA2008-78-063805,LPR2008-2-527,PRA-79-050303, PRA-78-022323,PRA-76-031805R,Nature2006-2-849,Nature2006-2-856} and experimentally \cite{Nature2003-421-925}. Most works on the coupled-cavity system have focused on traditional coherent unitary dynamics, requiring precise timing and special initial states. Clark \emph{et al.} \cite{PRL2003-91-177901} proposed a scheme to entangle the internal states of atoms in separate optical cavities using the technique of quantum reservoir engineering; however, the scheme requires a complex atomic level configuration. Furthermore, the evolution towards the steady state slows down as the entanglement of the desired state increases.
In this paper, we generalize the idea of Refs. \cite{PRL2011-106-090502,arXiv1110.1024v1} and propose a scheme for producing distributed entanglement for two atoms trapped in coupled cavities. Due to the coherent photon hopping between the two cavities, the system is mathematically equivalent to one involving two atoms collectively interacting with two common nondegenerate field modes, symmetrically and asymmetrically, respectively. Each delocalized field mode induces a collective atomic decay channel. The present scheme uses the competition between the transitions induced by the microwave fields and the two collective atomic decay channels to drive the atoms to a symmetric or asymmetric entangled state. Analytical and numerical results show that the distributed steady entanglement can be obtained with high fidelity. The scheme is independent of the initial state and robust against parameter fluctuations. No photon detection or unitary feedback control is required. The linear scaling $1-F$ $\varpropto$ $C^{-1}$ is a quadratic improvement in the cooperativity parameter compared to any known entangled-state preparation protocol for coupled-cavity systems \cite{PRL2003-91-177901,Nature2003-421-925,Nature2006-2-849,Nature2006-2-856,OC2010-283-3052}, whose optimal value is $1-F$ $\varpropto$ $C^{-1/2}$.
The experimental setup, as shown in Fig. 1, consists of two identical $\Lambda$-type atoms each having two ground states
$|0\rangle$ and $|1\rangle$ and an excited state $|2\rangle$, with each atom trapped in a detuned cavity. An off-resonance optical laser with detuning $\Delta$ drives the transition $|0\rangle$
$\leftrightarrow$ $|2\rangle$ and a microwave field resonantly drives the transition $|0\rangle$ $\leftrightarrow$ $|1\rangle$. The cavity mode is coupled to the $|1\rangle$ $\leftrightarrow$
$|2\rangle$ transition with the detuning $\Delta-\delta$, where $\delta$ is the cavity detuning from two-photon resonance. We assume a phase difference $\theta_{M}$ between the microwave fields applied to the two atoms. Under the rotating-wave approximation, the Hamiltonian of the whole system in the interaction picture reads $H_{I}$ = $H_{0}$ + $H_{g}$ + $V_{+}$ + $V_{-}$, where \begin{eqnarray}\label{e1-e3} H_{0}&=&\delta
(a_{1}^{\dag}a_{1}+a_{2}^{\dag}a_{2})+\Delta(|2\rangle_{1}\langle2|+|2\rangle_{2}\langle2|)\cr&&
+[g|2\rangle_{1}\langle1|a_{1}+g|2\rangle_{2}\langle1|a_{2}+H.c.]\cr&& +J(a_{1}^{\dag}a_{2}+a_{1}a_{2}^{\dag}),\\
H_{g}&=&\frac{\Omega_{M}}{2}(e^{i\theta_{M}}|1\rangle_{1}\langle0|+|1\rangle_{2}\langle0|)+H.c.,\\
V_{+}&=&\frac{\Omega}{2}(|2\rangle_{1}\langle0|+|2\rangle_{2}\langle0|), \end{eqnarray}
$V_{-}=(V_{+})^{\dagger}$, $a_{i}$ is the field operator of cavity $i$ ($i=1,2$), $J$ is the photon-hopping strength describing the coupling between the two cavities, $g$ is the atom-cavity coupling constant, and $\Omega$ and $\Omega_{M}$ represent the classical laser driving strength and the microwave driving strength, respectively. $\theta_{M}$ = $\pi$ (or $0$) guarantees a high fidelity for the asymmetric steady-state $|S\rangle$ $=$
$(|01\rangle-|10\rangle)/\sqrt{2}$ $($ or symmetric steady-state
$|T\rangle$ $=$ $(|01\rangle+|10\rangle)/\sqrt{2}$ $)$. Let us introduce two delocalized bosonic modes $c_{1}$ and $c_{2}$, and define asymmetric mode $c_1$ = $(a_1-a_2)/\sqrt{2}$ and symmetric mode $c_2$ = $(a_1+a_2)/\sqrt{2}$, which are linearly related to the field modes of two cavities. In terms of the new operators, the Hamiltonian $H_{0}$ can be rewritten as \begin{eqnarray}\label{e4}
H_{0}&=& \frac{g}{\sqrt{2}}[|2\rangle_{1}\langle1|(c_1+c_2)
+|2\rangle_{2}\langle1|(c_2-c_1)+H.c.]\cr&&+(\delta-J)c_{1}^{\dag}c_{1}+(\delta+J)c_{2}^{\dag}c_{2}
+\Delta\sum_{i=1,2}|2\rangle_{i}\langle2|. \end{eqnarray} \begin{figure}
\caption{(Color online) Experimental setup for dissipative preparation of entangled steady-state between two $\Lambda$-type atoms trapped in two coupled cavities. The atom in each detuned cavity has two ground states
$|1\rangle$ and $|0\rangle$, and one excited state $|2\rangle$, which is driven by the same off-resonance optical laser. The microwave fields applied to the two atoms differ by a relative phase of $\theta_{M}$.}
\end{figure}
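\noindent Note that the map $(a_1,a_2)\mapsto(c_1,c_2)$ is a unitary mixing of the two cavity modes, so that $c_1$ and $c_2$ are indeed two independent bosonic modes:
\[
[c_1,c_1^{\dag}]=\frac{1}{2}[a_1-a_2,a_1^{\dag}-a_2^{\dag}]=1,\qquad
[c_1,c_2^{\dag}]=\frac{1}{2}[a_1-a_2,a_1^{\dag}+a_2^{\dag}]=0,
\]
and similarly $[c_2,c_2^{\dag}]=1$.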
The Hamiltonian $H_{0}$ describes the asymmetric coupling for the two atoms to the delocalized field mode $c_1$ and the symmetric coupling to $c_2$. Due to the photon hopping these two delocalized field modes are nondegenerate and each induces a collective atomic decay channel. The photon decay rate of cavity $i$ ($i$ = $1,2$) is denoted as $\kappa_{i}$ and the spontaneous emission rate of the atoms is denoted as $\gamma_j$ ($j$ = $1,2,3,4$). Under the condition $\kappa_1$ = $\kappa_2$ =$\kappa$, the Lindblad operators associated with the cavity decay and atomic spontaneous emission can be expressed as $L^{\kappa_1}$ = $\sqrt{\kappa}$ $c_1$, $L^{\kappa_2}$ = $\sqrt{\kappa}$ $c_2$, $L^{\gamma_1}$ = $\sqrt{\gamma_1}$
$|0\rangle_{1}\langle2|$, $L^{\gamma_2}$ = $\sqrt{\gamma_2}$
$|0\rangle_{2}\langle2|$, $L^{\gamma_3}$ = $\sqrt{\gamma_3}$
$|1\rangle_{1}\langle2|$, $L^{\gamma_4}$ = $\sqrt{\gamma_4}$
$|1\rangle_{2}\langle2|$. We assume $\gamma_1$ = $\gamma_2$ = $\gamma_3$ = $\gamma_4$ = $\gamma/2$ for simplicity.
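\noindent The form of $L^{\kappa_1}$ and $L^{\kappa_2}$ follows from the fact that, for $\kappa_1=\kappa_2=\kappa$, the cavity dissipator is invariant under the unitary mixing $(a_1,a_2)\mapsto(c_1,c_2)$: for any density operator $\rho$,
\[
\sum_{i=1,2}\kappa\Big(a_i\rho a_i^{\dag}-\frac{1}{2}\{a_i^{\dag}a_i,\rho\}\Big)
=\sum_{i=1,2}\kappa\Big(c_i\rho c_i^{\dag}-\frac{1}{2}\{c_i^{\dag}c_i,\rho\}\Big),
\]
so that the photon losses can equivalently be described in the delocalized basis.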
Under the condition of a weak classical laser field, we can adiabatically eliminate the cavity field modes and the atomic excited states, provided that the excited states are not initially populated. To tailor the effective decay processes to achieve a desired steady-state, we introduce an effective operator formalism based on second-order perturbation theory \cite{PRL2011-106-090502,arXiv1110.1024v1,arXiv:1112.2806v1}. Then the dynamics of our coupled-cavity system is governed by the effective Hamiltonian $H_{eff}$ and effective Lindblad operator $L_{eff}^{x}$ \begin{eqnarray}\label{e5-e6} H_{eff}&=&-\frac{1}{2}V_{-}[H^{-1}_{NH}+(H^{-1}_{NH})^{\dag}]V_{+}+H_{g},\\ L_{eff}^{x}&=&L^{x}H^{-1}_{NH}V_{+}, \end{eqnarray} where $H^{-1}_{NH}$ is the inverse of the non-Hermitian Hamiltonian $H_{NH}$ = $H_{0}$ $-$ $\frac{i}{2}\sum_{x}(L^{x})^{\dag}L^{x}$. The resulting effective master equation in Lindblad form is \begin{eqnarray}\label{e7-e10} \dot{\rho}&=&i[\rho,H_{eff}]+\sum_{x}\{L_{eff}^{x}\rho (L_{eff}^{x})^{\dag}-\frac{1}{2}[(L_{eff}^{x})^{\dag}L_{eff}^{x}\rho\cr&&+\rho(L_{eff}^{x})^{\dag}L_{eff}^{x}]\},\\
H_{eff}&=&-Re[\frac{\Omega^2}{4}\widetilde{R}_{3}]|S\rangle\langle S|-Re[\frac{\Omega^2}{4}\widetilde{R}_{2}]|T\rangle\langle T|\cr&&-Re[\frac{\Omega^2}{2}\widetilde{R}_{1}]|00\rangle\langle00|+H_{g},\\ L_{eff}^{\kappa_{1}}&=&\sqrt{\frac{(\delta+J)^2g_{eff}^2\kappa/4
}{A_{\kappa_{1}}^2+B_{\kappa_{1}}^2}}|S\rangle\langle00|
+\sqrt{\frac{g_{eff}^2\kappa/4}{C_{\kappa_{1}}^2+D_{\kappa_{1}}^2}}|11\rangle\langle S|,\cr&&\\ L_{eff}^{\kappa_{2}}&=&\sqrt{\frac{(\delta-J)^2g_{eff}^2\kappa/4
}{A_{\kappa_{2}}^2+B_{\kappa_{2}}^2}}|T\rangle\langle00|
+\sqrt{\frac{g_{eff}^2\kappa/4}{C_{\kappa_{2}}^2+D_{\kappa_{2}}^2}}|11\rangle\langle T|,\cr&& \end{eqnarray} where $Re[$ $]$ denotes the real part of the argument, \begin{eqnarray}\label{e11} g_{eff}&=&\frac{g\Omega}{\Delta}, \delta^{'}=\delta-\frac{i}{2}\kappa,\Delta^{'}=\Delta-\frac{i}{2}\gamma,\cr\cr \widetilde{R}_{1}&=&\frac{-(\delta^{'}-J)(\delta^{'}+J)}{\delta^{'}g^2-\Delta^{'}(\delta^{'}-J)(\delta^{'}+J)},\cr \widetilde{R}_{2}&=&\frac{-gJ-\delta^{'}g^2+\Delta^{'}(\delta^{'}-J)(\delta^{'}+J)}{[g^2-\Delta^{'}(\delta^{'}-J)][g^2-\Delta^{'}(\delta^{'}+J)]},\cr \widetilde{R}_{3}&=&\frac{gJ-\delta^{'}g^2+\Delta^{'}(\delta^{'}-J)(\delta^{'}+J)}{[g^2-\Delta^{'}(\delta^{'}-J)][g^2-\Delta^{'}(\delta^{'}+J)]},\cr
A_{\kappa_{1}}&=&A_{\kappa_{2}}=\frac{\delta g^2}{\Delta}-(\delta^2-J^2),\cr
B_{\kappa_{1}}&=&B_{\kappa_{2}}=\kappa(\delta-\frac{g^2}{2\Delta})+\frac{\gamma(\delta^2-J^2)}{2\Delta},\cr C_{\kappa_{1}}&=&\frac{g^2}{\Delta}-(\delta-J),D_{\kappa_{1}}=\frac{\kappa}{2}+\frac{\gamma(\delta-J)}{2\Delta},\cr C_{\kappa_{2}}&=&\frac{g^2}{\Delta}-(\delta+J),D_{\kappa_{2}}=\frac{\kappa}{2}+\frac{\gamma(\delta+J)}{2\Delta}. \end{eqnarray}
As shown in Fig. 2 (a) and (b), the loop-like elements $O_{00}(\Omega^2)$, $O_{T}(\Omega^2)$ and $O_{S}(\Omega^2)$
represent the effective-Hamiltonian evolution in three triplet states $|00\rangle$, $|T\rangle$ and $|S\rangle$ without microwave fields, respectively. For weak optical driving $\Omega$, $H_{eff}$ $\simeq$ $H_g$. There exist two effective decay channels characterized by $L_{eff}^{\kappa_{1}}$ and $L_{eff}^{\kappa_{2}}$ through the two delocalized bosonic modes $c_{1}$ and $c_{2}$ as compared with the case of Ref.
\cite{PRL2011-106-090502} in which only one decay channel is mediated. It is the photon hopping that lifts the degeneracy of the two delocalized field modes and leads to the two independent decay channels. $L_{eff}^{\kappa_{1}}$ indicates the effective decay from $|00\rangle$ to $|S\rangle$ at a rate $\kappa_{c_{1,1}}$ and from $|S\rangle$ to $|11\rangle$ at a rate $\kappa_{c_{1,2}}$ caused by asymmetric $c_1$ mode, and
$L_{eff}^{\kappa_{2}}$ denotes the effective decay from $|00\rangle$
to $|T\rangle$ at a rate $\kappa_{c_{2,1}}$ and from $|T\rangle$ to
$|11\rangle$ at a rate $\kappa_{c_{2,2}}$ caused by the symmetric $c_2$ mode. The decay rates $\kappa_{c_{1,1}}$ ($\kappa_{c_{1,2}}$) and $\kappa_{c_{2,1}}$ ($\kappa_{c_{2,2}}$) are equal to the squares of the first (second) coefficients on the right-hand sides of Eq. (9) and Eq. (10), respectively. Setting
$A_{\kappa_{1}}$ $=$ $A_{\kappa_{2}}$ $=$ $0$, the decays from $|S\rangle$ to
$|11\rangle$ and from $|T\rangle$ to $|11\rangle$ can both be largely suppressed. On the other hand, the microwave fields drive the transitions between the three states $|00\rangle$, $|T\rangle$
($|S\rangle$) and $|11\rangle$ for $\theta_{M}=0(\pi)$. The dynamics of the full master equation in Fig. 3 (a) and
(b) illustrates that we can obtain state $|S\rangle$ or $|T\rangle$
of high fidelity, and the time needed for reaching the entangled steady-state $|T\rangle$ is about two times as large as that of
$|S\rangle$. This is because the optimal ratio $\kappa_{c_{1,1}}/\kappa_{c_{1,2}}$ is about two times as large as $\kappa_{c_{2,1}}/\kappa_{c_{2,2}}$. \begin{figure}
\caption{(Color online) The populations of four states $|S\rangle$,
$|T\rangle$, $|00\rangle$ and $|11\rangle$ versus the dimensionless parameter $gt$ for a random initial state. Both curves are plotted for $C=200$, $\kappa=\gamma/2$, $\Omega_{M}=2\Omega/5$, $\Omega=g/20$ with $\Delta$, $\delta$ and $J$ being the optimal values for two entangled steady-states. (a) $\theta_{M}=0$. (b)
$\theta_{M}=\pi$. (c) The fidelity $F_{|S\rangle}$ for steady-state
$|S\rangle$ versus $C$, and the coefficient of the linear scaling in
$F_{|S\rangle}$ as a function of $C$ with different ratios $\kappa/\gamma$ is plotted in the inset.}
\label{Fig.sub.a}
\label{Fig.sub.b}
\label{Fig.sub.c}
\end{figure} The errors imposed by all possible atomic spontaneous emissions should also be taken into account. We apply Eq. (6) again to derive four analytic expressions of effective spontaneous emissions with the other Lindblad operators $L^{\gamma_1}$, $L^{\gamma_2}$, $L^{\gamma_3}$ and $L^{\gamma_4}$ \begin{eqnarray}\label{e12-e13}
L^{\gamma_1}_{eff}&=&\sqrt{\frac{\gamma}{2}}[\frac{\Omega}{2}|\widetilde{R}_{1}||00\rangle\langle00|
+\frac{\Omega}{4}|\widetilde{R}_{2}|(|T\rangle\langle T|+|S\rangle\langle T|)\cr&&+\frac{\Omega}{4}|\widetilde{R}_{3}|(|T\rangle\langle S|+|S\rangle\langle S|)],\\ L^{\gamma_3}_{eff}
&=&\sqrt{\frac{\gamma}{2}}[\frac{\Omega}{2\sqrt{2}}|\widetilde{R}_{1}|(|T\rangle\langle00|+|S\rangle\langle00|)
\cr&&+\frac{\Omega}{2\sqrt{2}}(|\widetilde{R}_{2}||11\rangle\langle T|+|\widetilde{R}_{3}||11\rangle\langle S|)], \end{eqnarray}
where $|\cdot|$ denotes the modulus of its argument, $L^{\gamma_2}_{eff}=L^{\gamma_1}_{eff}$ and $L^{\gamma_4}_{eff}=
L^{\gamma_3}_{eff}$. The operators of effective spontaneous emission for the $|S\rangle$ state are \begin{eqnarray}\label{e14-e15}
L^{\gamma_1}_{eff,S}&=&L^{\gamma_2}_{eff,S}=\sqrt{\gamma_{S,i=1,2}}|T\rangle\langle S|,\\
L^{\gamma_3}_{eff,S}&=&L^{\gamma_4}_{eff,S}=\sqrt{\gamma_{S,i=3,4}}|11\rangle\langle S|, \end{eqnarray}
and those for the $|T\rangle$ state are \begin{eqnarray}\label{e16-e17}
L^{\gamma_1}_{eff,T}&=&L^{\gamma_2}_{eff,T}=\sqrt{\gamma_{T,i=1,2}}|S\rangle\langle T|,\\
L^{\gamma_3}_{eff,T}&=&L^{\gamma_4}_{eff,T}=\sqrt{\gamma_{T,i=3,4}}|11\rangle\langle T|, \end{eqnarray} where \begin{eqnarray}\label{e18} \gamma_{eff} &\simeq& \frac{(\frac{\gamma\Omega^2}{2})\{(gJ)^2+[\kappa(\Delta\delta-\frac{g^2}{2})+\gamma\frac{(\delta^2-J^2)}{2}]^2\}} {(g^2-\Delta\delta)^2(g^4+\kappa^2\Delta^2)},\cr&& \end{eqnarray} and $\gamma_{S,i=1,2}$ $=$ $\gamma_{T,i=1,2}$ $=$ $\gamma_{eff}/16$, $\gamma_{S,i=3,4}$ $=$ $\gamma_{T,i=3,4}$ $=$ $\gamma_{eff}/8$. Then we use the rate equation to evaluate the fidelity for the state $(j=S$ or $T)$ \begin{figure}
\caption{(Color online) $F_{|S\rangle}$ in the effective two-qubit system versus fluctuations of various parameters. (a) $F_{|S\rangle}$ vs $\frac{dJ}{J}$ and
$\frac{d\delta}{\delta}$; (b) $F_{|S\rangle}$ vs $\frac{d\Delta}{\Delta}$ and $\frac{dg}{g}$.}
\end{figure} \begin{eqnarray}\label{e19} \dot{P}_{j}=\kappa_{a}P_{00}-(\kappa_{b}+\sum_{i=1}^{4}\gamma_{j,i})P_{j}, \end{eqnarray} where $P_{j}$ is the probability of being in state $j$. The first term on the right side of Eq. (19) represents population decaying into state $j$ at the rate $\kappa_a$, while the other terms describe population leaking out of state $j$ at the rate $\kappa_{b}+\sum_{i=1}^{4}\gamma_{j,i}$. Suppose $P_{j}$ $\simeq$ $1$ and the probability of each of the other three states is nearly $P_{00}$; then \begin{eqnarray}\label{e20}
1-F_{|S\rangle}\approx(3\frac{g_{eff}^2\kappa}{C_{k_{1}}^{2}+D_{k_{1}}^{2}}+9\gamma_{eff}) /[\frac{(\delta+J)^{2}g_{eff}^2\kappa}{A_{k_{1}}^{2}+B_{k_{1}}^{2}}],\cr \end{eqnarray}
where $F_{|S\rangle}$ $=$ $|\langle S|\rho_{SS}|S\rangle|$ is the fidelity of the state $|S\rangle$. Setting $\delta g^2=\Delta(\delta^2-J^2)$ and $\kappa(\Delta\delta-g^2/2)$ $\simeq$ $\gamma(\delta^2-J^2)/2$, the optimal fidelity of the entanglement can be obtained. The inset of Fig. 3 (c) shows that, in the effective two-qubit system, the fidelity scaling of the state
$|S\rangle$ is independent of the ratio $\kappa/\gamma$; fitting then gives the actual constant that maximizes the fidelity, \begin{eqnarray}\label{e21}
1-F_{|S\rangle}\approx12.8 C^{-1}. \end{eqnarray}
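As a quick numerical check of the rate-equation analysis, the following sketch Euler-integrates Eq. (19) under the approximation used above, namely that each of the other three states holds population $P_{00}\approx(1-P_j)/3$. The rate constants below are illustrative placeholders, not the optimized values of the scheme.

```python
# Sketch: steady-state population from the rate equation (19),
#   dP_j/dt = kappa_a * P_00 - (kappa_b + gamma_tot) * P_j,
# with the approximation P_00 ~ (1 - P_j)/3 from the text.
# All rate constants are hypothetical, for illustration only.

def steady_state_population(kappa_a, kappa_b, gamma_tot,
                            p0=0.0, dt=1e-3, steps=200_000):
    """Euler-integrate the rate equation and return the final P_j."""
    p = p0
    for _ in range(steps):
        p00 = (1.0 - p) / 3.0    # remaining population shared by 3 states
        p += dt * (kappa_a * p00 - (kappa_b + gamma_tot) * p)
    return p

kappa_a, kappa_b, gamma_tot = 1.0, 0.02, 0.01   # illustrative rates
p_num = steady_state_population(kappa_a, kappa_b, gamma_tot)

# Analytic fixed point of the same approximate equation:
p_exact = (kappa_a / 3.0) / (kappa_a / 3.0 + kappa_b + gamma_tot)
print(p_num, p_exact)   # the two values should agree closely
```

Since the approximate rate equation is linear in $P_j$, the Euler iteration converges to the same fixed point as the continuous-time equation.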
The influence of fluctuations in different parameters on the fidelity
$F_{|S\rangle}$ of the entangled state is considered. As shown in Fig. 4 (a) and (b), $F_{|S\rangle}$ remains above $90\%$ even with $5\%$ fluctuations in these parameters. The preparation process of the state
$|T\rangle$ is similar to that of $|S\rangle$.
Photonic band gap cavities coupled to atoms or quantum dots are suitable candidates for realizing the proposal. A cooperativity of $C\sim 100$ has been realized \cite{Nature2007-445-896}. The cavity modes can be coupled via the overlap of their evanescent fields or via an optical fiber, and photon hopping between two cavities has been observed \cite{PRB2000-61-R11855}.
In conclusion, we have proposed a scheme for the dissipative preparation of entanglement between two atoms distributed in two coupled cavities. We find that the linear scaling of the fidelity is a quadratic improvement over distributed entangled-state preparation protocols based on unitary dynamics.
L.T.S., X.Y.C., H.Z.W., and S.B.Z. acknowledge support from the National Fundamental Research Program under Grant No. 2012CB921601, the National Natural Science Foundation of China under Grant No. 10974028, the Doctoral Foundation of the Ministry of Education of China under Grant No. 20093514110009, and the Natural Science Foundation of Fujian Province under Grant No. 2009J06002. Z.B.Y. is supported by the National Basic Research Program of China under Grants No. 2011CB921200 and No. 2011CBA00200, and the China Postdoctoral Science Foundation under Grant No. 20110490828.
\end{document} |
\begin{document}
\title[Bounded Height and Resultant]{Endomorphisms of Bounded Height and Resultant}
\author{Brian Stout} \author{Adam Towsley}
\address{Brian Stout; Department of Mathematics; United States Naval Academy; Annapolis, MD 21401 U.S.A.} \email{[email protected]}
\address{Adam Towsley; Department of Mathematics; The City University of New York Graduate Center; New York, NY 10016 U.S.A.} \email{[email protected]}
\thanks{{\em Date of last revision:} May 8, 2014} \subjclass[2010]{Primary: 37P45; Secondary: 14D22} \keywords{Arithmetic dynamics, bounded height, bounded resultant}
\begin{abstract} Let $K$ be an algebraic number field and $B\geq 1$. For an endomorphism $\varphi:{\mathbb P}^n\rightarrow{\mathbb P}^n$ defined over $K$ of degree $d$ let ${\mathfrak R}_\varphi\subset{\mathcal O}_K$ denote its minimal resultant ideal. For a fixed height function $h_{\mathcal{M}^n_d}$ on the moduli space of dynamical systems this paper shows that all such morphisms $\varphi$ of bounded resultant $\mathrm{N}_{K/{\mathbb Q}}({\mathfrak R}_\varphi)\leq B$ and bounded height $h_{\mathcal{M}^n_d}(\langle\varphi\rangle)\leq B$ are contained in finitely many $\mathrm{PGL}_{n+1}(K)$-equivalence classes. This answers a question of Silverman in the affirmative. \end{abstract} \maketitle
\section{Introduction}\label{Introduction} In \cite{SilvermanADS} Silverman asked the following question: given a rational map $\varphi: {\mathbb P}^1 \rightarrow {\mathbb P}^1$ of degree $d$ defined over $K$, are there only finitely many $K$-isomorphism classes of $\varphi$ of bounded height and bounded resultant? We prove the following $n$-dimensional generalization of Silverman's question: \begin{thm}\label{MainTheorem}
Let $K$ be a number field and let $\varphi: \PP^n \rightarrow \PP^n$ be a morphism of degree $d\geq 2$ defined over $K$. Let ${\mathfrak R}_\varphi$ denote the minimal resultant ideal of $\varphi$. Fix an embedding of
$\mathcal{M}^n_d$ into projective space and denote the associated height function by $h$. If $B \geq 1$ then the set
$$\Gamma_{K,B} = \left\lbrace \varphi \in \mathrm{Hom}_d^n \left( K \right) \ : \ \mathrm{N}_{K/{\mathbb Q}}( {\mathfrak R}_\varphi) \leq B \text{ and } h \left( \left< \varphi \right> \right) \leq B \right\rbrace$$
is contained in only finitely many $\mathrm{PGL}_{n+1} \left( K \right)$-conjugacy classes of morphisms. \end{thm} The minimal resultant ideal is a way of encoding information about the primes of bad reduction for $\varphi$ across its conjugacy class. For every non-archimedean place $v$ for which ${\mathfrak R}_\varphi$ has positive valuation, some conjugate $\varphi^f$ for $f\in\mathrm{PGL}_{n+1}(K)$ has bad reduction.
One can consider the height $h(\langle\varphi\rangle)$ as a measure of the arithmetic complexity of the conjugacy class of $\varphi$ and the norm of the resultant as a measure of the amount of bad reduction. As we will see in Section \ref{Bounded}, bounding the norm of the minimal resultant ideal bounds the primes for which $\varphi$ has bad reduction.
Bounding both the height of the conjugacy class and the norm of the minimal resultant ideal are needed to obtain a finiteness result. If only the norm $\mathrm{N}_{K/{\mathbb Q}}({\mathfrak R}_\varphi)$ is bounded, then one can still find infinitely many distinct $\mathrm{PGL}_{n+1}(K)$-conjugacy classes of endomorphisms defined over $K$. This can be accomplished, for example, by considering monic polynomials defined over ${\mathcal O}_K$. Monic polynomials have everywhere good reduction and hence their minimal resultant ideal is the unit ideal, which has norm $1$. It is easy to show that there are infinitely many distinct $\mathrm{PGL}_{n+1}({\bar K})$-conjugacy classes of such maps, and therefore infinitely many distinct $\mathrm{PGL}_{n+1}(K)$-conjugacy classes.
Conversely, if one bounds only the height $h(\langle\varphi\rangle)$ one gets a finite set of points in $\mathcal{M}^n_d(K)$. Points in $\mathcal{M}^n_d(K)$ are the same as $\mathrm{PGL}_{n+1}({\bar K})$-conjugacy classes of endomorphisms defined over $K$; denote these classes by $[\varphi_1],\ldots, [\varphi_r]$ for endomorphisms $\varphi_i$ of degree $d$ on ${\mathbb P}^n$ defined over $K$. Each $\mathrm{PGL}_{n+1}(K)$-conjugacy class which descends to some $[\varphi_i]$ is called a twist. A priori, it is not obvious how many twists over $K$ descend to each $[\varphi_i]$. Indeed, if the automorphism group of $\varphi$, which consists of the finite subgroup of $\mathrm{PGL}_{n+1} \left( \bar{K} \right)$ that fixes $\varphi$ under conjugation, is trivial, then there is only one twist. This need not be the case in general. For an example where there are infinitely many twists one can consider quadratic rational maps on ${\mathbb P}^1$ of the following form \begin{equation*} \varphi_b(z)=z+\dfrac{b}{z} \end{equation*} where $b\in K^*$. All $\varphi_b$ are ${\bar K}$-isomorphic and therefore descend to the point $[\varphi_1]\in{\mathcal M}_2$. However, $\varphi_b$ and $\varphi_c$ are $K$-isomorphic if and only if $b/c$ is a square in $K$. If we denote the set of twists of $\varphi_1$ by $\mathrm{Twist}(\varphi_1/K)$, then this gives an injective map $K^*/K^{*2}\rightarrow\mathrm{Twist}(\varphi_1/K)$ by $b\mapsto [\varphi_b]_K$. See Example 4.71 in \cite{SilvermanADS} for more details. For a number field $K$, the group $K^*/K^{*2}$ is infinite (for example, over ${\mathbb Q}$ it contains the classes of all the primes), so it follows that bounding the height $h(\langle\varphi\rangle)$ is not enough to guarantee a finiteness result.
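The scaling computation behind this example can be checked numerically: conjugating $\varphi_b$ by $f(z)=\lambda z$ gives $\varphi_{b/\lambda^2}$, which is why $\varphi_b$ and $\varphi_c$ are $K$-isomorphic exactly when $b/c$ is a square in $K$. A minimal sketch with arbitrary rational choices of $b$ and $\lambda$:

```python
from fractions import Fraction as F

# Sketch: verify (f^{-1} o phi_b o f)(z) = phi_{b/lam^2}(z) for
# phi_b(z) = z + b/z and f(z) = lam*z, at a few rational test points.
# The values of b and lam are arbitrary illustrative choices.

def phi(b, z):
    return z + b / z

def conjugate_by_scaling(b, lam, z):
    # f(z) = lam*z, so f^{-1}(w) = w/lam
    return phi(b, lam * z) / lam

b, lam = F(2), F(3)
for z in [F(1), F(5, 2), F(-7, 3)]:
    assert conjugate_by_scaling(b, lam, z) == phi(b / lam**2, z)
print("phi_b conjugated by z -> lam*z equals phi_{b/lam^2}")
```

Exact `Fraction` arithmetic makes the identity an equality rather than a floating-point approximation.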
\emph{Acknowledgements} Both authors would like to thank Tom Tucker for reading this paper and his helpful comments, and the referee for several helpful comments.
\section{Preliminaries}\label{Prelims}
\subsection{The spaces $\mathrm{Hom}^n_d$ and $\mathcal{M}^n_d$}
Let $K$ be a number field and ${\bar K}$ be a fixed algebraic closure. Unless stated otherwise, varieties and morphisms are defined over ${\bar K}$. When we wish to emphasize a specific field of definition, we will use the notation ${\mathbb P}^n(K)$. Let $$\mathrm{Hom}_d^n = \left\lbrace \varphi:\PP^n\rightarrow\PP^n\text{ is a degree } d \text{ morphism} \right\rbrace.$$ By degree we mean algebraic degree. We will see that $\mathrm{Hom}^n_d$ is a variety over ${\bar K}$ and that $K$-rational points correspond to morphisms of degree $d$ on $\PP^n$ defined over $K$. Equivalently, points of $\mathrm{Hom}^n_d(K)$ correspond to endomorphisms $\varphi:{\mathbb P}^n\rightarrow{\mathbb P}^n$ of algebraic degree $d$ defined over $K$.
After fixing a basis $X_0,\ldots ,X_n$ of $\PP^n$, any morphism $\varphi \in \mathrm{Hom}_d^n$ can be written as $\varphi = \left[ \varphi_0, \dots, \varphi_n \right]$, where each $\varphi_i$ is a degree $d$ homogeneous polynomial in the $X_i$, and $\varphi_0, \dots, \varphi_n$ have no non-trivial common zeros over the algebraic closure ${\bar K}$. Each $\varphi_i$ can be written as \begin{equation*}\varphi_i = \displaystyle\sum_I a_I X^I\end{equation*} with multi-index $I$. Here, $I=(i_0,\ldots,i_n)$ and $i_0+\cdots +i_n=d$ and $X^I=X^{i_0}_0\cdots X^{i_n}_n$. We order these monomials using the lexicographic ordering. Each of the $n+1$ forms $\varphi_i$ has ${n+d \choose d}$ coefficients, so setting $N = \left({n+d \choose d} \right) \left( n+1 \right) - 1$ we can identify any $\varphi \in \mathrm{Hom}_d^n$ with a point in ${\mathbb P}^N$ via the association $\varphi \mapsto \left[a_I\right] \in {\mathbb P}^N$. It follows that $\mathrm{Hom}^n_d(K)\subset{\mathbb P}^N(K)$.
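The coefficient count above is easy to make concrete. The following sketch computes $N$ and lists the degree-$d$ monomials in lexicographic order; the function names are our own.

```python
from math import comb
from itertools import combinations_with_replacement

# Sketch: a degree-d morphism on P^n is given by n+1 forms, each with
# C(n+d, d) monomial coefficients, so Hom_d^n sits inside P^N with
# N = C(n+d, d)*(n+1) - 1.

def ambient_dimension(n, d):
    return comb(n + d, d) * (n + 1) - 1

def monomials(n, d):
    """Degree-d monomials in X_0..X_n as exponent tuples, lex-ordered."""
    exps = []
    for combo in combinations_with_replacement(range(n + 1), d):
        e = [0] * (n + 1)
        for i in combo:
            e[i] += 1
        exps.append(tuple(e))
    return sorted(exps, reverse=True)   # lexicographic: X_0^d first

# Degree-2 maps on P^1 have 6 coefficients, hence live in P^5:
print(ambient_dimension(1, 2))   # -> 5
print(monomials(1, 2))           # [(2, 0), (1, 1), (0, 2)]
```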
\begin{thm}[Theorem 1.8 in \cite{SilvermanBarbados}] \label{Silverman1.8} There exists a geometrically irreducible polynomial $\mathrm{Res} \in {\mathbb Z}\left[a_i\right]$ in the coefficients of $\varphi$ such that $$ \varphi \in \mathrm{Hom}_d^n \Leftrightarrow \mathrm{Res} \left( \varphi \right) \neq 0.$$ \end{thm}
The polynomial $\mathrm{Res}$ is called the Macaulay resultant of $\varphi$. We remark that we abuse notation above. Let $\Phi$ be a choice of the coordinates of the projective point defined by $\varphi$. If $\Psi$ is another choice, then $\Phi=\lambda\Psi$ for some $\lambda\in{\bar K}$. It follows from elementary properties of the Macaulay resultant that $\mathrm{Res}(\Phi)=\lambda^{(n+1)d^n}\mathrm{Res}(\Psi)$. Therefore, only the vanishing of $\mathrm{Res}$ on ${\mathbb P}^N$ is well defined. The resultant is multi-homogeneous in the coefficients of the $\varphi_i$. This implies that $\mathrm{Hom}_d^n$ is an affine variety. Specifically, $\mathrm{Hom}^n_d={\mathbb P}^N-V(\mathrm{Res})$. For further discussion and proofs of these facts regarding the Macaulay resultant see \cite{CoxLittleOshea}.
The automorphism group $\mathrm{Aut} \left( \PP^n \right) = \mathrm{PGL}_{n+1}\left( \bar{K} \right)$ acts on $\mathrm{Hom}_d^n$, and on $\mathrm{Rat}_d^n$ (the set of rational functions from $\PP^n$ to itself) via the action of conjugation, indeed it acts on the entire space ${\mathbb P}^N$; for $\varphi \in \mathrm{Hom}_d^n$ and $f \in \mathrm{PGL}_{n+1}\left( \bar{K} \right)$, $\varphi^f = f^{-1}\circ \varphi \circ f$. It follows from the properties of the resultant that $\varphi\in\mathrm{Hom}^n_d\Leftrightarrow\varphi^f\in\mathrm{Hom}^n_d$, so $\mathrm{Hom}^n_d$ is a $\mathrm{PGL}_{n+1} \left( \bar{K} \right)$-stable subset for the action of this group on ${\mathbb P}^N$. We then define the moduli space $\mathcal{M}^n_d$ to be the set of all conjugacy classes of endomorphisms of $\PP^n$, that is $$\mathcal{M}^n_d = \mathrm{Hom}_d^n / \mathrm{PGL}_{n+1} \left( \bar{K} \right).$$ For any $\varphi \in \mathrm{Hom}_d^n$ we denote its image in $\mathcal{M}^n_d$ by $\left< \varphi \right>.$ It follows from Geometric Invariant Theory that $\mathcal{M}^n_d$ exists as an affine variety (see \text{section 2.3 in }\cite{SilvermanBarbados}).
By picking an appropriate $M$ the moduli space $\mathcal{M}^n_d$ can be embedded into $\mathbb{P}^M$. It follows from Theorems 2.24 and 2.26 in \cite{SilvermanBarbados} that there exists a projective variety ${\mathcal{M}^n_d}^{,ss}$, the moduli space of semi-stable dynamical systems on $\PP^n$. The variety ${\mathcal{M}^n_d}^{,ss}$ exists as the quotient of the semi-stable locus for the $\mathrm{PGL}_{n+1} \left(\bar{K}\right)$ action on ${\mathbb P}^N$ and it follows that $\mathcal{M}^n_d\subset{\mathcal{M}^n_d}^{,ss}$ is a dense open subset. As ${\mathcal{M}^n_d}^{,ss}$ is projective, it embeds into some ${\mathbb P}^M$ as does $\mathcal{M}^n_d$ by the inclusion $\mathcal{M}^n_d\hookrightarrow{\mathcal{M}^n_d}^{,ss}\hookrightarrow{\mathbb P}^M$. We can therefore define a height function $h_{\mathcal{M}^n_d}$ on $\mathcal{M}^n_d$ by taking the height of the image of $\left< \varphi \right>$ in $\mathbb{P}^M$. We will denote this height function by $h$.
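For intuition, here is a minimal sketch of the Weil height on ${\mathbb P}^M({\mathbb Q})$, the simplest instance of the height $h$ used above: scale the coordinates to coprime integers and take the logarithm of the largest absolute value. Over a general number field the maximum is replaced by a product over all places.

```python
from fractions import Fraction
from math import gcd, log

# Sketch of the Weil height on P^M(Q): clear denominators, divide out
# the common gcd, and take log of the largest |coordinate|.

def weil_height(coords):
    coords = [Fraction(c) for c in coords]
    lcm_den = 1
    for c in coords:
        lcm_den = lcm_den * c.denominator // gcd(lcm_den, c.denominator)
    ints = [int(c * lcm_den) for c in coords]
    g = 0
    for v in ints:
        g = gcd(g, v)          # math.gcd ignores signs
    ints = [v // g for v in ints]
    return log(max(abs(v) for v in ints))

# [1/2 : 3] = [1 : 6] in coprime integers, so the height is log 6:
print(weil_height([Fraction(1, 2), Fraction(3)]))
```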
\subsection{Minimal Resultants}\label{SubSection_MinResultant} Following \cite{SilvermanBarbados} we now use the Macaulay resultant to define the minimal resultant of an endomorphism.
Let $R$ be a discrete valuation ring with discrete valuation $v$ and field of fractions $F$. If $\varphi= \left[ \varphi_0, \dots, \varphi_n \right] \in \mathrm{Hom}_d^n \left( F \right)$ then we define \begin{equation*} e_v (\varphi)= v \left( \mathrm{Res} \left( \varphi \right) \right) - \left( n+1 \right) d^n \underset{0\leq i\leq n}{\mathrm{min}} v \left( \varphi_i \right), \end{equation*} where $v \left( \varphi_i \right)$ is taken to be the minimal valuation of the coefficients of $\varphi_i$. We remark that $e_v(\varphi)$ is well defined. Let $\Phi,\Psi$ be affine models of $\varphi$ and $\lambda\in F^\times$ such that $\Phi=\lambda\Psi$. Then $v(\mathrm{Res}(\Phi))=v(\mathrm{Res}(\lambda\Psi))=v(\lambda^{(n+1)d^n}\mathrm{Res}(\Psi))=(n+1)d^nv(\lambda)+v(\mathrm{Res}(\Psi))$. Similarly, \begin{equation*}\underset{0\leq i\leq n}{\mathrm{min}}\text{ }v(\Phi_i)=\underset{0\leq i\leq n}{\mathrm{min}}\text{ }v(\lambda\Psi_i)=v(\lambda)+ \underset{0\leq i\leq n}{\mathrm{min}}\text{ }v(\Psi_i).\end{equation*} It follows that $e_v(\varphi)$ is independent of the choice of affine model used to compute it.
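The computation of $e_v$ and its invariance under scaling can be illustrated in the smallest case $n=1$, $d=2$, where the Macaulay resultant is the classical resultant of two binary quadratics (computed from the $4\times 4$ Sylvester determinant) and $(n+1)d^n=4$. The example map and the choice of the $2$-adic valuation below are arbitrary.

```python
from fractions import Fraction as F

# Sketch (n = 1, d = 2): e_v(phi) = v(Res) - 4 * min_i v(phi_i),
# checked to be invariant under rescaling an affine model.

def det(m):
    """Exact determinant by cofactor expansion (fine for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def resultant(a, b):
    # a = (a0, a1, a2) for a0*X^2 + a1*X*Y + a2*Y^2, likewise b
    z = F(0)
    return det([[a[0], a[1], a[2], z],
                [z, a[0], a[1], a[2]],
                [b[0], b[1], b[2], z],
                [z, b[0], b[1], b[2]]])

def val(x, p):
    """p-adic valuation of a nonzero Fraction."""
    n, d, v = x.numerator, x.denominator, 0
    while n % p == 0: n //= p; v += 1
    while d % p == 0: d //= p; v -= 1
    return v

def e_v(a, b, p):
    coeffs = [c for c in a + b if c != 0]
    return val(resultant(a, b), p) - 4 * min(val(c, p) for c in coeffs)

a, b, p = (F(2), F(1), F(0)), (F(0), F(4), F(6)), 2
lam = F(1, 2)
scaled = tuple(lam * c for c in a), tuple(lam * c for c in b)
assert e_v(a, b, p) == e_v(*scaled, p)   # independent of the model
print("e_2(phi) =", e_v(a, b, p))
```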
The quantity $e_v(\varphi)$ is then used to define the \emph{exponent of the minimal resultant of} $\varphi$ by: \begin{dfn}\label{MinResultantExponent} \begin{equation*} \varepsilon_v \left( \varphi \right) = \underset{f \in \mathrm{PGL}_{n+1}{\left( K \right)}}{\mathrm{min}} e_v \left( \varphi^f \right). \end{equation*} \end{dfn}
A fact that will be important in our proof of Theorem \ref{MainTheorem} is the following: \begin{prop}\label{GoodReduction}
$\varphi$ has good reduction if and only if $\varepsilon_v \left( \varphi \right) = 0$. \end{prop} \begin{proof}
See Proposition 3.11 in \cite{SilvermanBarbados}. \end{proof}
Now let $R$ be a Dedekind domain with field of fractions $K$. If $v$ is a discrete valuation on $R$ then we denote the prime ideal associated to $v$ by ${\mathfrak p}_v$.
\begin{dfn}\label{MinResultant}
For an endomorphism $\varphi \in \mathrm{Hom}_d^n \left( K \right)$ the \emph{minimal resultant} of $\varphi$ is
$$\mathfrak{R}_\varphi = \prod_v {\mathfrak p}_v^{\varepsilon_v \left( \varphi \right)},$$
where the product is taken over all of the inequivalent valuations on $R$. \end{dfn}
This is indeed an ideal of $R$, since any morphism $\varphi$ has only finitely many primes of bad reduction: by Proposition \ref{GoodReduction}, $\varepsilon_v(\varphi)=0$ for all but finitely many $v$.
\subsection{Twists}
Two morphisms $\varphi, \psi \in \mathrm{Hom}_d^n \left( K \right)$ are $\bar{K}$-isomorphic if there is an $f \in \mathrm{PGL}_{n+1}\left( \bar{K}\right)$ such that $\varphi = \psi^f$, that is if $\left< \varphi \right> = \left< \psi \right>$. We denote this set of $\bar{K}$-isomorphic morphisms as $$\left[\varphi \right] = \left\lbrace \psi \in \mathrm{Hom}_d^n\left(K \right) : \psi = \varphi^f \text{ for some } f \in \mathrm{PGL}_{n+1} \left(\bar{K}\right) \right\rbrace.$$ If we restrict our automorphisms to those defined over $K$ we get a second set $$\left[\varphi \right]_K = \left\lbrace \psi \in \mathrm{Hom}_d^n\left(K \right) : \psi = \varphi^f \text{ for some } f \in \mathrm{PGL}_{n+1} \left(K\right) \right\rbrace.$$ The twists of $\varphi$ are the $K$-isomorphism classes of $\varphi$ which are $\bar{K}$-isomorphic. $$\text{Twist}_K\left(\varphi\right) = \left\lbrace \left[ \psi \right]_K : \psi \in \mathrm{Hom}_d^n \left( K \right) \text{ and } \left[\psi \right] = \left[ \varphi \right] \right\rbrace.$$ One can interpret twists in terms of moduli spaces. A ${\bar K}$-isomorphism class for $\varphi$ defined over $K$ corresponds to a $K$-rational point of $\mathcal{M}^n_d$. One can also consider the quotient variety $$\mathcal{M}^n_{d,K}=\mathrm{Hom}^n_d(K)/\mathrm{PGL}_{n+1}(K)$$ and a $K$-isomorphism class $\left[\varphi\right]_K$ corresponds to a $K$-rational point of this space. The natural inclusion $\mathrm{PGL}_{n+1}(K)\hookrightarrow\mathrm{PGL}_{n+1}(\bar{K})$ induces a map on the quotients $\mathcal{M}^n_{d,K}\rightarrow\mathcal{M}^n_d$. Fibers of this morphism over the $K$-rational point $\left[\varphi\right]$ are twists.
In \cite{Stout}, the first author proved the following theorem, which is central to our proof of Theorem \ref{MainTheorem}:
\begin{thm}\label{FiniteTwists} Let $\varphi \in \mathrm{Hom}_d^n \left(K \right)$ for $d \geq 2$, and let $S$ be a finite set of places containing the archimedean places. If $${\mathcal V} \left( S \right) = \left\lbrace \left[ \psi \right]_K \in \text{Twist}_K \left( \varphi \right) : \left[ \psi \right]_K \text{ has good reduction outside } S \right\rbrace,$$ then ${\mathcal V} \left( S \right)$ is finite. \end{thm}
\section{Bounded height and resultant}\label{Bounded} Let $h$ denote a fixed height on $\mathcal{M}^n_d$ corresponding to some embedding $\mathcal{M}^n_d\hookrightarrow{\mathbb P}^M$. Let $K$ denote a fixed number field; we assume all $\varphi$ are defined over $K$.
\begin{lem}\label{FiniteKbarPoints} Let $d\geq 1$ be an integer and $B\geq 1$. The set
\begin{equation*}{\mathcal W} =\lbrace \langle\varphi\rangle : \varphi\in\mathrm{Hom}_d^n(K)\text{ and }h(\langle\varphi\rangle)\leq B\rbrace\subset\mathcal{M}^n_d\end{equation*} is finite. \end{lem}
\begin{proof} We claim that ${\mathcal W}$ is a set of bounded height and degree in $\mathcal{M}^n_d$. First, every point of ${\mathcal W}$ is a $K$-rational point of $\mathcal{M}^n_d$: each $\varphi$ is defined over $K$ by assumption, hence corresponds to a $K$-rational point of $\mathrm{Hom}_d^n$, and the quotient map $\mathrm{Hom}_d^n\rightarrow\mathcal{M}^n_d$ carries it to a $K$-rational point of $\mathcal{M}^n_d$. Hence, ${\mathcal W}$ is a set of bounded degree. Since each $\langle\varphi\rangle\in{\mathcal W}$ satisfies $h(\langle\varphi\rangle)\leq B$, the set ${\mathcal W}$ also has bounded height. By Northcott's theorem (Theorem 3.7 of \cite{SilvermanADS}), such sets of bounded degree and height are finite. \end{proof}
We remark that this does not yet prove our main theorem: it only shows that $\Gamma_{K,B}$ is contained in finitely many $\mathrm{PGL}_{n+1}({\bar K})$-conjugacy classes, whereas the theorem concerns $\mathrm{PGL}_{n+1}(K)$-conjugacy classes. We make the following definition based on the requirement that the resultant is bounded.
\begin{dfn} Let $B\geq 1$. We define the set of primes ${\mathcal S}_B$ of $K$ by \begin{equation*}
{\mathcal S}_B=\lbrace{\mathfrak p} : \mathrm{N}_{K/{\mathbb Q}}({\mathfrak p})\leq B\rbrace. \end{equation*} \end{dfn}
\begin{lem}\label{SBFinite} Let $B\geq 1$. Then the set ${\mathcal S}_B$ of primes of $K$ is finite. \end{lem}
\begin{proof} There are only finitely many primes $p$ of ${\mathbb Z}$ such that $p\leq B$. If ${\mathfrak p}$ lies over the rational prime $p$, then \begin{equation*}
\mathrm{N}_{K/{\mathbb Q}}({\mathfrak p})=p^{[{\mathcal O}_K/{\mathfrak p}\,:\,{\mathbb Z}/p{\mathbb Z}]}\geq p, \end{equation*} so every ${\mathfrak p}\in{\mathcal S}_B$ lies over some rational prime $p\leq B$. Since only finitely many primes ${\mathfrak p}$ lie over each $p$, the set ${\mathcal S}_B$ is finite. \end{proof}
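For $K={\mathbb Q}$ every prime ideal is $(p)$ with norm $p$, so ${\mathcal S}_B$ is simply the set of rational primes $p\leq B$; a short sieve makes this concrete.

```python
# Sketch: S_B over Q is the set of rational primes p <= B.
# Standard sieve of Eratosthenes.

def primes_up_to(B):
    sieve = [True] * (B + 1)
    sieve[:2] = [False, False]
    for p in range(2, int(B ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(2, B + 1) if sieve[p]]

print(primes_up_to(8))   # S_8 over Q: [2, 3, 5, 7]
```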
\begin{lem}\label{GoodRed} Let $\Gamma_{K,B}$ be the set from Theorem \ref{MainTheorem}. If $\varphi\in\Gamma_{K,B}$, then $\varphi$ has good reduction at all primes ${\mathfrak p}\not\in {\mathcal S}_B$. \end{lem}
\begin{proof}
Let $\varphi\in\Gamma_{K,B}$ and ${\mathfrak p}\not\in {\mathcal S}_B$. By definition of the set ${\mathcal S}_B$, we have that $\mathrm{N}_{K/{\mathbb Q}}({\mathfrak p})>B$. By Proposition \ref{GoodReduction}, $\varphi$ has bad reduction at ${\mathfrak p}$ if and only if ${\mathfrak p}\mid{\mathfrak R}_\varphi$. If $\varphi$ has bad reduction at ${\mathfrak p}$, then ${\mathfrak R}_\varphi={\mathfrak p}{\mathfrak a}$ for some ideal ${\mathfrak a}$, and hence $\mathrm{N}_{K/{\mathbb Q}}({\mathfrak R}_\varphi)=\mathrm{N}_{K/{\mathbb Q}}({\mathfrak p})\mathrm{N}_{K/{\mathbb Q}}({\mathfrak a})>B\,\mathrm{N}_{K/{\mathbb Q}}({\mathfrak a})\geq B$, since $\mathrm{N}_{K/{\mathbb Q}}({\mathfrak a})\geq 1$. This is contrary to the assumption that $\varphi\in\Gamma_{K,B}$. It follows that $\varphi$ has good reduction at ${\mathfrak p}$. \end{proof}
\begin{rem} ${\mathcal S}_B$ may contain some primes at which $\varphi$ has good reduction. Consider, for example, an endomorphism defined over ${\mathbb Q}$ with minimal resultant ${\mathfrak R}_\varphi=(2)^3$, so that $\mathrm{N}_{{\mathbb Q}/{\mathbb Q}}({\mathfrak R}_\varphi)=8$. Taking $B=8$, we get ${\mathcal S}_B=\lbrace (2),(3),(5),(7)\rbrace$ even though $\varphi$ has bad reduction only at $2$. \end{rem}
We now prove Theorem \ref{MainTheorem}. \begin{proof} It follows from Lemma \ref{FiniteKbarPoints} that $\Gamma_{K,B}$ descends to only finitely many points in $\mathcal{M}^n_d({\bar K})$, i.e.\ that the image ${\mathcal W}=\pi(\Gamma_{K,B})$ under the quotient map $\pi:\mathrm{Hom}_d^n(K)\rightarrow\mathcal{M}^n_d({\bar K})$ is finite.
Let ${\mathcal V}$ be the set of $K$-conjugacy classes of maps in $\Gamma_{K,B}$. It follows that ${\mathcal V}\subset{\mathcal M}^n_{d,K}$ and that under $\theta:{\mathcal M}^n_{d,K}\rightarrow\mathcal{M}^n_d$, $\theta({\mathcal V})={\mathcal W}$. By definition, the $K$-conjugacy classes of ${\mathcal V}$ consist of all twists lying over the finitely many ${\bar K}$-conjugacy classes in ${\mathcal W}$. If ${\mathcal V}$ is infinite, then by the pigeonhole principle there is at least one $\bar{K}$-conjugacy class which contains infinitely many $K$-conjugacy classes. Denote this ${\bar K}$-conjugacy class by $[\psi]$ and the infinitely many distinct $K$-conjugacy classes by $[\psi_i]_K$ for $i=0,1,\ldots$ where $[\psi_i]=[\psi]$ for all $i$. It follows that each $[\psi_i]_K$ is a twist of $[\psi]$.
By Lemma \ref{SBFinite} the set ${\mathcal S}_B$ is finite and by Lemma \ref{GoodRed} we see that each $[\psi_i]_K$ must have good reduction outside of ${\mathcal S}_B$. Thus we have constructed an infinite set of twists $[\psi_i]_K$ of $[\psi]$ with good reduction outside of ${\mathcal S}_B$ which contradicts Theorem \ref{FiniteTwists}. This concludes the proof of the main theorem. \end{proof}
\end{document} |
\begin{document}
\title{Computing one-bit compressive sensing via zero-norm regularized DC loss model and its surrogate}
\begin{abstract}
One-bit compressed sensing is very popular in signal processing and communications
due to its low storage costs and low hardware complexity, but it is a challenging task
to recover the signal by using the one-bit information. In this paper, we propose
a zero-norm regularized smooth difference of convexity (DC) loss model and
derive a family of equivalent nonconvex surrogates covering the MCP and SCAD
surrogates as special cases. Compared to the existing models, the new model and
its SCAD surrogate have better robustness. To compute their $\tau$-stationary points,
we develop a proximal gradient algorithm with extrapolation and establish the convergence
of the whole iterate sequence. Moreover, the convergence rate is shown to be linear
under a mild condition by studying the KL property of exponent $0$ of the models.
Numerical comparisons with several state-of-the-art methods show that,
in terms of solution quality, the proposed model and its SCAD surrogate
are remarkably superior to the $\ell_p$-norm regularized models,
and are comparable and even superior to the sparsity constrained models
that take the true sparsity and the sign flip ratio as inputs.
\end{abstract}
\noindent
{\bf Keywords:}
One-bit compressive sensing, zero-norm, DC loss, equivalent surrogates, global convergence, KL property
\section{Introduction}\label{sec1.0}
Compressive sensing (CS) has seen significant progress in theory and algorithms over
the past few decades since the seminal works \cite{Candes05,Donoho06}. It aims
to recover a sparse signal $x^{\rm true}\in\mathbb{R}^n$ from a small number of
linear measurements. One-bit compressive sensing, as a variant of the CS,
was proposed in \cite{Boufounos08} and has attracted considerable interest in
the past few years (see, e.g., \cite{Dai16,Jacques13,Plan13a,Yan12,Zhang14,HuangS18}).
Unlike the conventional CS which relies on real-valued measurements, one-bit CS aims to
reconstruct the sparse signal $x^{\rm true}$ from the signs of the measurements. Such a setup is
appealing because (i) the hardware implementation of one-bit quantizer is low-cost and efficient;
(ii) one-bit measurement is robust to nonlinear distortions \cite{Boufounos10};
and (iii) in certain situations, for example, when the signal-to-noise ratio
is low, one-bit CS performs even better than the conventional one \cite{Laska12}.
For the applications of one-bit CS, we refer to the recent survey paper \cite{Li18}.
\subsection{Review on the related works}\label{sec1.1}
In the noiseless setup, the one-bit CS acquires the measurements via the linear
model $b={\rm sgn}(\Phi x^{\rm true})$, where $\Phi\in\mathbb{R}^{m\times n}$
is the measurement matrix and the function ${\rm sgn}(\cdot)$ is applied to
$\Phi x^{\rm true}$ in a component-wise way. Here, for any $t\in\mathbb{R}$,
${\rm sgn}(t)=1$ if $t>0$ and $-1$ otherwise, which differs slightly
from the common ${\rm sign}(\cdot)$ (the latter takes the value $0$ at $t=0$). By following the theory
of conventional CS, the ideal optimization model for one-bit CS is as follows:
\begin{equation}\label{znorm-min}
\min_{x\in\mathbb{R}^n}\Big\{\|x\|_0\ \ {\rm s.t.}\ \ b={\rm sgn}(\Phi x),\,\|x\|=1\Big\},
\end{equation}
where $\|x\|_0$ denotes the zero-norm (i.e., the number of nonzero entries) of $x\in\mathbb{R}^n$,
and $\|x\|$ means the Euclidean norm of $x$. The unit sphere constraint is introduced
into \eqref{znorm-min} to address the issue that the scale information of a signal is lost during
the one-bit quantization. Due to the combinatorial properties of the functions
${\rm sgn}(\cdot)$ and $\|\cdot\|_0$, the problem \eqref{znorm-min} is NP-hard.
Some earlier works (see, e.g., \cite{Boufounos08,Plan13a,Wang15}) mainly focus on
its convex relaxation model, obtained by replacing the zero-norm by the $\ell_1$-norm
and relaxing the consistency constraint $b={\rm sgn}(\Phi x)$ into the linear constraint
$b\circ(\Phi x)\ge 0$, where the notation ``$\circ$'' means the Hadamard operation of vectors.
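The measurement model and the relaxed constraint are easy to simulate. The sketch below generates a random $s$-sparse unit-norm signal, takes one-bit measurements $b={\rm sgn}(\Phi x)$, and verifies that the relaxed consistency constraint $b\circ(\Phi x)\ge 0$ holds for the true signal; the dimensions and the Gaussian sensing matrix are illustrative choices.

```python
import random

# Sketch of the noiseless one-bit CS setup: b = sgn(Phi x) with
# sgn(t) = 1 for t > 0 and -1 otherwise.  By construction, the relaxed
# constraint b o (Phi x) >= 0 is satisfied by the true signal.

random.seed(0)
m, n, s = 60, 20, 3

x = [0.0] * n                        # s-sparse signal on the unit sphere
for i in random.sample(range(n), s):
    x[i] = random.gauss(0, 1)
norm = sum(v * v for v in x) ** 0.5
x = [v / norm for v in x]

Phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
Phix = [sum(Phi[i][j] * x[j] for j in range(n)) for i in range(m)]
b = [1 if t > 0 else -1 for t in Phix]        # one-bit measurements

assert all(bi * ti >= 0 for bi, ti in zip(b, Phix))  # b o (Phi x) >= 0
print("consistency constraint holds on all", m, "measurements")
```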
In practice the measurement is often contaminated by noise before the quantization and
some signs will be flipped after quantization due to quantization distortion, i.e.,
\begin{equation}\label{observation}
b=\zeta\circ{\rm sgn}(\Phi x^{\rm true}+\varepsilon)
\end{equation}
where $\zeta\in\{-1,1\}^m$ is a random binary vector and $\varepsilon\in\mathbb{R}^m$ denotes
the noise vector. Let $L\!:\mathbb{R}^m\to\mathbb{R}_{+}$ be a loss function to ensure
data fidelity as well as to tolerate the existence of sign flips.
Then, it is natural to consider the zero-norm regularized loss model:
\begin{equation}\label{znorm-reg}
\min_{x\in\mathbb{R}^n}\Big\{L(Ax)+\lambda\|x\|_0\ \ {\rm s.t.}\ \ \|x\|=1\Big\}\ \ {\rm with}\
A:={\rm Diag}(b)\Phi,
\end{equation}
and achieve a desirable estimation for the true signal $x^{\rm true}$ by tuning
the parameter $\lambda>0$. Since the projection mapping onto the intersection
of the sparsity constraint set and the unit sphere has a closed form, some researchers
prefer the following model or a similar variant to achieve a desirable estimation
for $x^{\rm true}$ (see, e.g., \cite{Boufounos09,Yan12,Dai16,Zhou21}):
\begin{equation}\label{znorm-constr}
\min_{x\in\mathbb{R}^n}\Big\{L(Ax)\ \ {\rm s.t.}\ \ \|x\|_0\le s,\,\|x\|=1\Big\},
\end{equation}
where the positive integer $s$ is an estimation for the sparsity of $x^{\rm true}$.
For this model, if the estimate $s$ differs substantially from the true sparsity $s^*$,
the mean-squared error (MSE) of the associated solutions becomes worse.
Take the model in \cite{Zhou21} for example. If the estimate $s$ differs
from the true sparsity $s^*$ by $2$, the MSE of the associated solutions differs
by at least $20\%$ (see Figure \ref{fig_K}). Moreover, it is currently unclear how to achieve
such a tight estimate of $s^*$. We find that the numerical experiments for the zero-norm constrained
models all use the true sparsity as an input (see \cite{Yan12,Zhou21}).
In this work, we are interested in the regularization models.
\begin{figure}
\caption{MSE of the solution yielded by GPSP with different $s$ (\small the data generated
in the same way as in Section 5.1 with $(m,n,s^*)=(500,1000,5),(\mu,\varpi)=(0.3,0.1)$ and $\Phi$ of type I)}
\label{fig_K}
\end{figure}
The existing loss functions for the one-bit CS are mostly convex, including the one-sided
$\ell_2$ loss \cite{Jacques13,Yan12}, the linear loss \cite{Plan13b,Zhang14},
the one-sided $\ell_1$ loss \cite{Jacques13,Yan12,Peng19b}, the pinball loss \cite{HuangS18}
and the logistic loss \cite{Fang14}. Among others, the one-sided $\ell_1$ loss is closely
related to the hinge loss function in machine learning \cite{Cucker05,ZhangT04},
which was reported to have a superior performance to the one-sided $\ell_2$ loss (see \cite{Jacques13}),
and the pinball loss provides a bridge between the hinge loss and the linear loss.
One can observe that these convex loss functions all impose a large penalty on the flipped samples,
which inevitably has a negative effect on the solution quality of the model \eqref{znorm-reg}.
In fact, for the pinball loss in \cite{HuangS18}, when the involved parameter $\tau$
is closer to $0$, the penalty imposed on the flipped samples becomes smaller.
This partly accounts for the choice of $\tau=-0.2$ instead of $\tau=-1$ in the numerical experiments there.
Recently, Dai et al. \cite{Dai16} derived a one-sided zero-norm loss by maximizing
a posteriori estimation of the true signal. This loss function and its lower semicontinuous
(lsc) majorization proposed there impose a constant penalty for those flipped samples,
but their combinatorial property brings much difficulty to the solution of the associated
optimization models. Inspired by the superiority of the ramp loss in SVM \cite{Brooks11,HuangS14},
in this work we are interested in a more general DC loss:
\begin{equation}\label{DC-loss}
L_{\sigma}(z):=\sum_{i=1}^m\vartheta_{\!\sigma}(z_i)
\ \ {\rm with}\ \vartheta_{\!\sigma}(t):=
\left\{\begin{array}{cl}
\max(0,-t)&{\rm if}\ t\ge-\sigma,\\
\sigma&{\rm if}\ t<-\sigma,
\end{array}\right.
\end{equation}
where $\sigma\in(0,1]$ is a constant representing the penalty imposed on flip outliers.
Clearly, the DC function $\vartheta_{\!\sigma}$ imposes a small fixed penalty on flip outliers.
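For reference, a minimal numerical sketch of $\vartheta_{\sigma}$ and $L_{\sigma}$ (function names are ours, not from any accompanying code). It also checks the DC (difference-of-convex) structure $\vartheta_{\sigma}(t)=\max(0,-t)-\max(0,-t-\sigma)$, which exhibits the loss as a difference of two convex ramps:

```python
import numpy as np

def theta_sigma(t, sigma):
    """Capped ramp loss from (DC-loss): max(0,-t) for t >= -sigma, constant sigma below."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= -sigma, np.maximum(0.0, -t), sigma)

def L_sigma(z, sigma):
    """L_sigma(z) = sum_i theta_sigma(z_i)."""
    return float(np.sum(theta_sigma(z, sigma)))

def theta_sigma_dc(t, sigma):
    """Same loss written as a difference of two convex ramps."""
    t = np.asarray(t, dtype=float)
    return np.maximum(0.0, -t) - np.maximum(0.0, -t - sigma)
```

Both forms agree on all of $\mathbb{R}$; the second makes the DC decomposition explicit.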
Due to the nonconvexity of the zero-norm and the sphere constraint, some researchers
are interested in the convex relaxation of \eqref{znorm-reg} obtained by replacing
the zero-norm with the $\ell_1$-norm and the unit sphere constraint with the unit ball constraint;
see \cite{Zhang14,HuangS18,Laska11,Plan13b}. However, as in conventional CS,
the $\ell_1$-norm convex relaxation not only has a weak sparsity-promoting ability
but also leads to a biased solution; see the discussion in \cite{Fan01}. Motivated by this,
many researchers resort to nonconvex surrogate functions of the zero-norm,
such as the minimax concave penalty (MCP) \cite{Zhu15,HuangY18},
the sorted $\ell_1$ penalty \cite{HuangY18}, the logarithmic
smoothing functions \cite{Shen16}, the $\ell_q\,(0<\!q\!<1)$-norm \cite{Fan21},
and the Schur-concave functions \cite{Peng19a}, and then develop algorithms for
solving the associated nonconvex surrogate problems to achieve a better sparse solution.
To the best of our knowledge, most of these algorithms lack a convergence certificate.
Although many nonconvex surrogates of the zero-norm are used for one-bit CS,
no work has investigated the equivalence between the surrogate problems
and the model \eqref{znorm-reg} in a global sense.
\subsection{Main contributions}\label{sec1.2}
The nonsmooth DC loss $L_{\sigma}$ is desirable for ensuring data fidelity while tolerating
sign flips, but its nonsmoothness is inconvenient for solving
the associated regularization model \eqref{znorm-reg}. With $0<\!\gamma<\!{\sigma}/{2}$
we construct a smooth approximation to it:
\begin{equation}\label{Lsig-gam}
L_{\sigma,\gamma}(z):=\sum_{i=1}^m\vartheta_{\sigma,\gamma}(z_i)
\ \ {\rm with}\ \
\vartheta_{\sigma,\gamma}(t)\!:=\!
\left\{\begin{array}{cl}
0 &{\rm if}\ t>0,\\
t^2/(2\gamma)&{\rm if}\ -\!\gamma<t\le 0,\\
-t-\gamma/2&{\rm if}\ -\!\sigma\!+\!\gamma<t<-\gamma,\\
\!\sigma\!-\!\frac{\gamma}{2}\!-\!\frac{(t+\sigma+\gamma)^2}{4\gamma}&{\rm if}\ -\!(\sigma\!+\!\gamma)\le t\le\!\gamma\!-\!\sigma,\\
\!\sigma\!-\!{\gamma}/2&{\rm if} \ t<-(\sigma\!+\!\gamma).
\end{array}\right.
\end{equation}
Clearly, as the parameter $\gamma$ approaches $0$, $\vartheta_{\!\sigma,\gamma}$ approaches $\vartheta_{\!\sigma}$.
As illustrated in Figure \ref{fig_gam}, the smooth function $\vartheta_{\!\sigma,\gamma}$ approximates
$\vartheta_{\!\sigma}$ very well even with $\gamma=0.05$. Therefore, in this paper
we are interested in the zero-norm regularized smooth DC loss model
\begin{equation}\label{znorm-Moreau}
\min_{x\in\mathbb{R}^n}F_{\sigma,\gamma}(x):=L_{\sigma,\gamma}(Ax)+\delta_{\mathcal{S}}(x)+\lambda\|x\|_0,
\end{equation}
where $\mathcal{S}$ denotes the unit sphere, whose dimension is known from the context,
and $\delta_{\mathcal{S}}$ denotes the indicator function of $\mathcal{S}$, i.e.,
$\delta_{\mathcal{S}}(x)=0$ if $x\in\mathcal{S}$ and $\delta_{\mathcal{S}}(x)=+\infty$ otherwise.
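As a sanity check on the piecewise formula \eqref{Lsig-gam}, the following sketch (function names ours) evaluates $\vartheta_{\sigma,\gamma}$ branch by branch and verifies numerically that it is continuous at the breakpoints and stays within $\gamma$ of $\vartheta_{\sigma}$ uniformly (the largest gap, $3\gamma/4$, occurs at $t=-\sigma$):

```python
import numpy as np

def theta_sigma(t, sigma):
    """The nonsmooth DC loss (DC-loss)."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= -sigma, np.maximum(0.0, -t), sigma)

def theta_sigma_gamma(t, sigma, gamma):
    """Smooth approximation (Lsig-gam); requires 0 < gamma < sigma/2.
       Branch boundaries are assigned by continuity (the formula agrees there)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)                                   # branch t > 0
    m = (t > -gamma) & (t <= 0.0)
    out[m] = t[m]**2 / (2.0*gamma)
    m = (t > gamma - sigma) & (t <= -gamma)
    out[m] = -t[m] - gamma/2.0
    m = (t >= -(sigma + gamma)) & (t <= gamma - sigma)
    out[m] = sigma - gamma/2.0 - (t[m] + sigma + gamma)**2 / (4.0*gamma)
    out[t < -(sigma + gamma)] = sigma - gamma/2.0
    return out
```

This confirms the claim above: already for $\gamma=0.05$ the uniform error is below $0.04$.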
\begin{figure}
\caption{Approximation degree of $\vartheta_{\!\sigma,\gamma}$ with different $\gamma$ to $\vartheta_{\!\sigma}$ for $\sigma=1$}
\label{fig_gam}
\end{figure}
\noindent
Let $\mathscr{L}$ denote the family of proper lsc convex functions
$\phi\!:\mathbb{R}\to(-\infty,+\infty]$ satisfying
\begin{equation}\label{phi-assump}
{\rm int}({\rm dom}\,\phi)\supseteq[0,1],\ 1>t^*\!:=\mathop{\arg\min}_{0\le t\le 1}\phi(t),\ \phi(t^*)=0
\ \ {\rm and}\ \ \phi(1)=1.
\end{equation}
With an arbitrary $\phi\in\!\mathscr{L}$, the model \eqref{znorm-Moreau} is reformulated
in Section \ref{sec3} as a mathematical program with an equilibrium constraint (MPEC),
and by studying the global exact penalty induced by the equilibrium constraint,
we derive a family of equivalent surrogates
\begin{equation}\label{Esurrogate}
\mathop{\min}_{x\in\mathbb{R}^n}G_{\sigma,\gamma,\rho}(x)
:=L_{\sigma,\gamma}(Ax)+\delta_{\mathcal{S}}(x)+\lambda\rho\varphi_{\rho}(x),
\end{equation}
in the sense that the problem \eqref{Esurrogate} associated to every $\rho>\overline{\rho}$
has the same global optimal solution set as the problem \eqref{znorm-Moreau} does.
Here $\varphi_{\rho}(x)\!:=\!\|x\|_1-\!\frac{1}{\rho}\!\sum_{i=1}^n\psi^*(\rho|x_i|)$ with
$\rho>0$ being the penalty parameter and $\psi^*$ being the conjugate function of $\psi$:
\[
\psi^*(\omega):=\sup_{t\in\mathbb{R}}\big\{\omega t-\psi(t)\big\}
\ \ {\rm for}\ \psi(t)\!:=\!\left\{\begin{array}{cl}
\phi(t)&{\rm if}\ t\in[0,1],\\
+\infty &{\rm otherwise}.
\end{array}\right.
\]
This family of equivalent surrogates is shown to include the ones associated with
the MCP function (see \cite{Zhang10,Zhu15,HuangY18}) and the SCAD function \cite{Fan01}.
The SCAD function corresponds to $\phi(t)=\frac{a-1}{a+1}t^2+\frac{2}{a+1}t\ (a>1)$
for $t\in\mathbb{R}$, whose conjugate has the form
\begin{equation}\label{psi-star}
\psi^*(\omega)=\begin{cases}
0& \omega\le \frac{2}{a+1},\\
\frac{((a+1)\omega-2)^2}{4(a^2-1)}& \frac{2}{a+1}<\omega\le\frac{2a}{a+1},\\
\omega-1& \omega>\frac{2a}{a+1},
\end{cases}\quad{\rm for}\ \omega\in\mathbb{R}.
\end{equation}
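The closed form \eqref{psi-star} can be checked against the definition $\psi^*(\omega)=\sup_{t\in[0,1]}\{\omega t-\psi(t)\}$. The sketch below (function names ours, with the illustrative choice $a=3.7$) compares the formula with a brute-force maximization over a fine grid in $[0,1]$:

```python
import numpy as np

def phi(t, a):
    """phi(t) = ((a-1) t^2 + 2 t)/(a+1), the quadratic generating the SCAD surrogate."""
    return ((a - 1.0)*t**2 + 2.0*t) / (a + 1.0)

def psi_star(w, a):
    """Closed form (psi-star) of the conjugate sup_{t in [0,1]} { w t - phi(t) }."""
    w = np.asarray(w, dtype=float)
    lo, hi = 2.0/(a + 1.0), 2.0*a/(a + 1.0)
    mid = ((a + 1.0)*w - 2.0)**2 / (4.0*(a*a - 1.0))
    return np.where(w <= lo, 0.0, np.where(w <= hi, mid, w - 1.0))
```

The three branches arise from where the unconstrained maximizer $t^*(\omega)=\frac{(a+1)\omega-2}{2(a-1)}$ of $\omega t-\phi(t)$ falls relative to $[0,1]$.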
Figure \ref{fig_rho} below shows that $G_{\sigma,\gamma,\rho}$ with $\psi^*$ in \eqref{psi-star}
approximates $F_{\sigma,\gamma}$ very well for $\rho\ge 2$, though the model \eqref{Esurrogate}
has the same global optimal solution set as the model \eqref{znorm-Moreau}
only when $\rho$ exceeds the theoretical threshold $\overline{\rho}$.
Unless otherwise stated, the function $G_{\sigma,\gamma,\rho}$ appearing in the rest of
this paper always represents the one associated to $\psi^*$ in \eqref{psi-star}.
\begin{figure}
\caption{Approximation effect of $|t|-\rho^{-1}\psi^*(\rho|t|)$ with $\psi^*$ in \eqref{psi-star} to ${\rm sign}(|t|)$}
\label{fig_rho}
\end{figure}
For the nonconvex nonsmooth optimization problems \eqref{znorm-Moreau} and \eqref{Esurrogate},
we develop a proximal gradient (PG) method with extrapolation to solve them, establish
the convergence of the whole iterate sequence generated, and analyze its local linear convergence rate
under a mild condition. The main contributions of this paper can be summarized as follows:
\begin{itemize}
\item [{\bf (i)}] We introduce a smooth DC loss function well suited for data with
a high sign flip ratio, and propose the zero-norm regularized smooth DC
loss model \eqref{znorm-Moreau} which, unlike the models in
\cite{Boufounos09,Dai16,Yan12,Zhou21}, does not require any a priori information
on the sparsity of the true signal or the number of sign flips. In particular,
a family of equivalent nonconvex surrogates is derived for the model \eqref{znorm-Moreau}.
We also introduce a class of $\tau$-stationary points for the model \eqref{znorm-Moreau}
and its equivalent surrogate \eqref{Esurrogate} associated to $\psi^*$ in \eqref{psi-star},
which are stronger than the limiting critical points of the corresponding objective functions.
\item[{\bf(ii)}] By characterizing the closed form of the proximal operators of
$\delta_{\mathcal{S}}(\cdot)+\lambda\|\cdot\|_0$ and $\delta_{\mathcal{S}}(\cdot)+\lambda\|\cdot\|_1$,
we develop a proximal gradient (PG) algorithm with extrapolation for
solving the problem \eqref{znorm-Moreau} (PGe-znorm)
and its surrogate \eqref{Esurrogate} associated to $\psi^*$ in \eqref{psi-star}
(PGe-scad) and establish the convergence of the whole iterate sequences.
Also, by analyzing the KL property of exponent $0$ of
$F_{\sigma,\gamma}$ and $G_{\sigma,\gamma,\rho}$,
the convergence is shown to have a linear rate under a mild condition.
It is worth pointing out that verifying whether a nonconvex nonsmooth function
has the KL property of exponent not more than $1/2$
is not an easy task, because no general criterion is available for it.
\item[{\bf(iii)}] Numerical experiments indicate that the proposed models armed with the PGe-znorm and PGe-scad
are robust to a large range of $\lambda$, and numerical comparisons with several state-of-the-art methods
demonstrate that the proposed models are well suited for high noise and/or high sign flip ratio.
The obtained solutions are remarkably superior to those yielded by other regularization models,
and for the data with a high flip ratio they are also superior to those yielded by the models
with the true sparsity as an input, in terms of MSE and Hamming error.
\end{itemize}
\section{Notation and preliminaries}\label{sec2}
Throughout this paper, $\overline{\mathbb{R}}$ denotes the extended real number set
$(-\infty,\infty]$, $I$ and $e$ denote an identity matrix and a vector of all ones,
whose dimensions are known from the context; and $\{e^1,\ldots,e^n\}$ denotes
the orthonormal basis of $\mathbb{R}^n$. For an integer $k>0$, write $[k]:=\{1,2,\ldots,k\}$.
For a vector $z\in\mathbb{R}^n$, $|z|_{\rm nz}$ denotes the smallest nonzero entry of the vector $|z|$,
$z^{\downarrow}$ means the vector of the entries of $z$ arranged in a nonincreasing order,
and $z^{s,\downarrow}$ means the vector $(z_1^{\downarrow},\ldots,z_s^{\downarrow})^{\mathbb{T}}$.
For given index sets $I\subseteq[m]$ and $J\subseteq[n]$, $A_{I\!J}\in\mathbb{R}^{|I|\times |J|}$
denotes the submatrix of $A_{J}$ consisting of those rows $A_i$ with $i\in I$, and
$A_{J}\in\mathbb{R}^{m\times |J|}$ denotes the submatrix of $A$ consisting of those
columns $A_j$ with $j\in\!J$. For a proper $h\!:\mathbb{R}^n\to\overline{\mathbb{R}}$,
${\rm dom}\,h\!:=\!\{z\in\mathbb{R}^n\,|\,h(z)<+\infty\}$ denotes its effective domain,
and for any given $-\infty<\eta_1<\eta_2<\infty$, $[\eta_1<h<\eta_2]$ represents
the set $\{x\in\mathbb{R}^n\,|\,\eta_1<h(x)<\eta_2\}$.
For any $\lambda>0,\rho>0,0<\gamma<\sigma/2$ and any $x\in\mathbb{R}^n$,
write $\Gamma(x)\!:=\!\{i\in[m]\,|\,-\!\gamma\le (Ax)_i\le0\}\cup
\{i\in[m]\,|\,-\!\sigma\!-\!\gamma\le (Ax)_i\le\!\gamma\!-\!\sigma\}$ and define
\begin{align}\label{fvphi}
f_{\sigma,\gamma}(x):=L_{\sigma,\gamma}(Ax),\ \
\Xi_{\sigma,\gamma}(x):=f_{\sigma,\gamma}(x)-\lambda{\textstyle\sum_{i=1}^n}\psi^*(\rho|x_i|),\\
\label{ghlambda}
g_{\lambda}(x):=\delta_{\mathcal{S}}(x)+\lambda\|x\|_0\ \ {\rm and}\ \
h_{\lambda,\rho}(x):=\delta_{\mathcal{S}}(x)+\lambda\rho\|x\|_1.
\end{align}
For a proper lsc $h\!:\mathbb{R}^n\to\overline{\mathbb{R}}$,
the proximal mapping of $h$ associated to $\tau>0$ is defined as
\[
\mathcal{P}_{\!\tau}h(x)\!:=\mathop{\arg\min}_{z\in\mathbb{R}^n}\big\{\frac{1}{2\tau}\|z-x\|^2+h(z)\big\}
\quad\ \forall x\in\mathbb{R}^n.
\]
When $h$ is convex, $\mathcal{P}_{\!\tau}h$ is a Lipschitz continuous mapping
with modulus $1$. When $h$ is an indicator function of a closed set $C\subseteq\mathbb{R}^n$,
$\mathcal{P}_{\!\tau}h$ is the projection mapping $\Pi_{C}$ onto $C$.
\subsection{Proximal mappings of $g_{\lambda}$ and $h_{\lambda,\rho}$}\label{sec2.1}
To characterize the proximal mapping of the nonconvex nonsmooth function $g_{\lambda}$,
we need the following lemma, whose proof is omitted due to its simplicity.
\begin{lemma}\label{lemma-glam}
Fix any $z\in\mathbb{R}^n\backslash\{0\}$ and an integer $s\ge 1$. Consider the following problem
\begin{equation}\label{znorm-prob1}
S^*(z):=\mathop{\arg\min}_{x\in\mathbb{R}^n}\Big\{\frac{1}{2}\|x-z\|^2\ \ {\rm s.t.}\ \ \|x\|=1,\|x\|_0=s\Big\}.
\end{equation}
Then, $S^*(z)=\big\{\frac{P^{\mathbb{T}}(|z|^{s-1,\downarrow};|z|_i;0)}{\|(|z|^{s-1,\downarrow};|z|_i;0)\|}
\,|\ i\in\{s,\ldots,n\}\ {\rm is\ such\ that}\ |z|_i=|z|_{s}^{\downarrow}\big\}$,
where $P$ is an $n\times n$ signed permutation matrix such that $Pz=|z|^{\downarrow}$.
\end{lemma}
\begin{proposition}\label{proxm-glam}
Fix any $\lambda>0$ and $\tau>0$. For any $z\in\mathbb{R}^n$, by letting $P$ be an $n\times n$
signed permutation matrix such that $Pz=|z|^{\downarrow}$, it holds that
$\mathcal{P}_{\!\tau}g_{\lambda}(z)=P^{\mathbb{T}}\mathcal{Q}_{\tau\lambda}(|z|^{\downarrow})$ with
\begin{equation}\label{prox-Eprob}
\mathcal{Q}_{\nu}(y):=\mathop{\arg\min}_{x\in\mathbb{R}^n}\Big\{\frac{1}{2}\|x-y\|^2+\nu\|x\|_0\ \ {\rm s.t.}\ \|x\|=1\Big\}
\quad\forall y\in\mathbb{R}^n.
\end{equation}
For any $y\ne 0$ with $y_1\ge\cdots\ge y_n\ge 0$,
by letting $\chi_{j}(y)\!:=\!\|y^{j,\downarrow}\|-\!\|y^{j-1,\downarrow}\|$ with
$y^{0,\downarrow}=0$, $\mathcal{Q}_{\nu}(y)\!=\!\{\frac{y}{\|y\|}\}$ if $\nu\le\!\chi_n(y)$;
$\mathcal{Q}_{\nu}(y)\!=\!\big\{(\frac{y_i}{|y_i|},0,\ldots,0)^{\mathbb{T}}\,|\,i\in[n]\ {\rm is\ such\ that}\
y_i=y_1\big\}$ if $\nu\ge\!\chi_1(y)$; otherwise
$\mathcal{Q}_{\nu}(y)\!=\!\big\{(\frac{y^{l,\downarrow}}{\|y^{l,\downarrow}\|};0)\ |\ l\in[n]
\ {\rm is\ such\ that}\ \nu\in(\chi_{l+1}(y),\chi_{l}(y)]\big\}$.
\end{proposition}
\begin{proof}
By the definition of $g_{\lambda}$, for any $z\in\mathbb{R}^n$,
$\mathcal{P}_{\!\tau}g_{\lambda}(z)=\mathcal{Q}_{\tau\lambda}(z)$.
Since for any $n\times n$ signed permutation matrix $Q$ and any $z\in\mathbb{R}^n$,
$\|Qz\|=\|z\|$ and $\|Qz\|_0=\|z\|_0$, it is easy to verify that
$\mathcal{Q}_{\nu}(z)=P^{\mathbb{T}}\mathcal{Q}_{\nu}(|z|^{\downarrow})$.
The first part of the conclusions then follows. For the second part,
we first argue that the following inequality relations hold:
\begin{equation}\label{vtheta-equa}
\chi_1(y)\ge\chi_2(y)\ge\cdots\ge \chi_n(y).
\end{equation}
Indeed, for each $j\in\{1,2,\ldots,n\!-\!1\}$, writing $y^{j}$ for $y^{j,\downarrow}$,
it is immediate from the definition that
\begin{align*}
\|y^{j}\|^2\!-\!\|y^{j-1}\|^2
=y_{j}^2\ge y_{j+1}^2 =\|y^{j+1}\|^2\!-\!\|y^{j}\|^2\ \ {\rm and}\ \
\|y^{j}\|\!+\!\|y^{j-1}\|\le\|y^{j+1}\|\!+\!\|y^{j}\|.
\end{align*}
Along with $\chi_{j}(y)=\frac{\|y^{j}\|^2-\|y^{j-1}\|^2}{\|y^{j}\|+\|y^{j-1}\|}$,
we get $\chi_{j}(y)\ge\chi_{j+1}(y)$ and the relations in \eqref{vtheta-equa} hold.
Let $\upsilon^*(y)$ denote the optimal value of \eqref{prox-Eprob}.
Then $\upsilon^*(y)=\min\{\overline{\chi}_{1}(y),\ldots,\overline{\chi}_{n}(y)\}$ with
\begin{equation}\label{prox-opt}
\overline{\chi}_{s}(y):=\min_{x\in \mathbb{R}^n}
\Big\{\frac{1}{2}\|x-y\|^2+\nu\|x\|_0\ \ {\rm s.t.}\ \ \|x\|_0=s,\|x\|=1\Big\}
\ \ {\rm for}\ s=1,\ldots,n.
\end{equation}
From Lemma \ref{lemma-glam}, it follows that
$\overline{\chi}_{s}(y)=\frac{1}{2}(1+\|y\|^2-2\|y^{s,\downarrow}\|)+\nu s$. Then,
\[
\Delta\overline{\chi}_{s}(y):=\overline{\chi}_{s+1}(y)-\overline{\chi}_{s}(y)
=\|y^{s,\downarrow}\|-\|y^{s+1,\downarrow}\|+\nu=-\chi_{s}(y)+\nu.
\]
When $\nu\le\chi_n(y)$, we have $\nu\le\chi_{s}(y)$ for all $s=1,\ldots,n$.
From the last equation, $\Delta\overline{\chi}_{s}(y)\le 0$ for
$s=1,\ldots,n-\!1$, which means that
\(
\overline{\chi}_1(y)\ge\overline{\chi}_2(y)\ge\cdots\ge\overline{\chi}_n(y).
\)
Hence, $\upsilon^*(y)=\overline{\chi}_n(y)$, and $\mathcal{Q}_{\nu}(y)=\{\frac{y}{\|y\|}\}$
follows by Lemma \ref{lemma-glam}. Using similar arguments,
we obtain the rest of the conclusions.
\end{proof}
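Proposition \ref{proxm-glam} yields a simple procedure for computing an element of $\mathcal{P}_{\!\tau}g_{\lambda}(z)$: by Lemma \ref{lemma-glam}, the best $s$-sparse unit vector keeps the $s$ largest magnitudes of $z$, so it suffices to minimize $\frac12(1+\|z\|^2-2\|z^{s,\downarrow}\|)+\tau\lambda s$ over $s$. A sketch (function names ours), verified below against the candidate solutions and random feasible points:

```python
import numpy as np

def prox_g(z, tau, lam):
    """One element of P_tau g_lam(z) for g_lam = delta_S + lam*||.||_0, with z != 0."""
    z = np.asarray(z, dtype=float)
    n = z.size
    idx = np.argsort(-np.abs(z))                 # order entries by decreasing |z_i|
    norms = np.sqrt(np.cumsum(z[idx]**2))        # ||z^{s,down}|| for s = 1..n
    s_grid = np.arange(1, n + 1)
    obj = 0.5*(1.0 + z @ z - 2.0*norms) + tau*lam*s_grid
    s = int(np.argmin(obj)) + 1                  # optimal support size
    x = np.zeros(n)
    x[idx[:s]] = z[idx[:s]] / norms[s - 1]       # keep top-s magnitudes, normalize
    return x
```

The cost is dominated by one sort, i.e., $O(n\log n)$.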
To characterize the proximal mapping of the nonconvex nonsmooth function $h_{\lambda,\rho}$,
we need the following lemma, whose proof is omitted due to its simplicity.
\begin{lemma}\label{MP-lemma}
Let $\mathcal{S}_{+}:=\mathcal{S}\cap\mathbb{R}_{+}^n$. For any $z\in\!\mathbb{R}^n$,
by letting $P$ be an $n\times n$ permutation matrix
such that $Pz=z^{\downarrow}$, it holds that $\Pi_{\mathcal{S}_{+}}(z)
=P^{\mathbb{T}}\Pi_{\mathcal{S}_{+}}(z^{\downarrow})$. Also, for any $y\!\in\mathbb{R}^n$
with $y_1\ge\cdots\ge y_n$, $\Pi_{\mathcal{S}_{+}}(y)=\big\{e^i\,|\,i\in[n]\ {\rm is\ such\ that}\ y_i=y_1\big\}$
if $y_1\le 0$; $\Pi_{\mathcal{S}_{+}}(y)=\big\{\frac{y}{\|y\|}\big\}$ if $y_n\ge 0$,
otherwise $\Pi_{\mathcal{S}_{+}}(y)=\big\{\frac{(y_1,\ldots,y_j,0,\ldots,0)^{\mathbb{T}}}
{\|(y_1,\ldots,y_j,0,\ldots,0)^{\mathbb{T}}\|}\,|\, j\in[n\!-\!1]\ {\rm is\ such\ that}\ y_j>0\ge y_{j+1}\big\}$.
\end{lemma}
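In unsorted form, Lemma \ref{MP-lemma} says that an element of $\Pi_{\mathcal{S}_{+}}(z)$ is obtained by normalizing the positive part $\max(z,0)$ when it is nonzero, and otherwise taking a standard basis vector at a largest entry of $z$ (equivalently, maximize $\langle x,z\rangle$ over $\mathcal{S}_{+}$). A sketch (function names ours):

```python
import numpy as np

def proj_S_plus(z):
    """One element of the projection of z onto S_+ = {x >= 0, ||x|| = 1}."""
    z = np.asarray(z, dtype=float)
    zp = np.maximum(z, 0.0)
    nrm = np.linalg.norm(zp)
    if nrm > 0.0:
        return zp / nrm                 # normalize the positive part
    x = np.zeros(z.size)                # all entries nonpositive: e^i at a largest z_i
    x[int(np.argmax(z))] = 1.0
    return x
```

For example, `proj_S_plus([3, 4])` gives `[0.6, 0.8]`, `proj_S_plus([3, -4])` gives `[1, 0]`, and `proj_S_plus([-1, -2])` gives `[1, 0]`, matching the three cases of the lemma.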
\begin{proposition}\label{proxm-hlam}
Fix any $\lambda>0,\rho>0$ and $\tau>0$. For any $z\in\mathbb{R}^n$, by letting $P$
be an $n\times n$ signed permutation matrix with $Pz=|z|^{\downarrow}$,
$\mathcal{P}_{\!\tau}h_{\lambda,\rho}(z)=P^{\mathbb{T}}\Pi_{\mathcal{S}_{+}}(|z|^{\downarrow}\!-\!\tau\lambda\rho e)$.
\end{proposition}
\begin{proof}
Fix any $\xi\in\mathbb{R}^n$ with $\xi_1\ge \xi_2\ge\cdots\ge \xi_n\ge 0$.
Consider the following problem
\begin{equation}\label{MQ-map2}
\mathcal{P}_{\nu}(\xi):=\mathop{\arg\min}_{x\in\mathbb{R}^n}
\Big\{\frac{1}{2}\big\|x-\xi\big\|^2+\nu\|x\|_1\ \ {\rm s.t.}\ \|x\|=1\Big\}
\end{equation}
where $\nu>0$ is a regularization parameter. By the definition of $h_{\lambda,\rho}$,
$\mathcal{P}_{\!\tau}h_{\lambda,\rho}(z)=\mathcal{P}_{\!\tau\lambda\rho}(z)$,
so it suffices to argue that $\mathcal{P}_{\!\nu}(\xi)=\Pi_{\mathcal{S}_{+}}(\xi-\nu e)$.
Indeed, if $x^*$ is a global optimal solution of \eqref{MQ-map2},
then $x^*\ge 0$ necessarily holds. If not, we would have $J:=\{j\,|\,x_j^*<0\}\ne\emptyset$.
Let $\overline{J}=\{1,\ldots,n\}\backslash J$. Take $\widetilde{x}_i^*=x_i^*$
for each $i\in\overline{J}$ and $\widetilde{x}_i^*=-x_i^*$ for each $i\in J$.
Clearly, $\widetilde{x}^*\ge 0$ and $\|\widetilde{x}^*\|=1$. However, it holds that
$\frac{1}{2}\big\|\widetilde{x}^*-\xi\big\|^2+\nu\|\widetilde{x}^*\|_1
\le\frac{1}{2}\big\|x^*-\xi\big\|^2+\nu\|x^*\|_1$,
which contradicts the fact that $x^*$ is a global optimal solution of \eqref{MQ-map2}.
This implies that
\(
\mathcal{P}_{\nu}(\xi)=\mathop{\arg\min}_{x\in\mathbb{R}^n}
\big\{\frac{1}{2}\big\|x-\xi\big\|^2+\nu\langle e,x\rangle\ \ {\rm s.t.}\ x\ge 0,\|x\|=1\big\}.
\)
Consequently, $\mathcal{P}_{\nu}(\xi)=\Pi_{\mathcal{S}_{+}}(\xi-\nu e)$.
The desired equality then follows.
\end{proof}
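Proposition \ref{proxm-hlam} translates into a three-step recipe: shift the magnitudes of $z$ down by $\tau\lambda\rho$, project onto $\mathcal{S}_{+}$, and restore the signs and positions of $z$. A sketch (function names ours), checked against random points of $\mathcal{S}$:

```python
import numpy as np

def proj_S_plus(z):
    """One element of the projection of z onto S_+ = {x >= 0, ||x|| = 1}."""
    z = np.asarray(z, dtype=float)
    zp = np.maximum(z, 0.0)
    nrm = np.linalg.norm(zp)
    if nrm > 0.0:
        return zp / nrm
    x = np.zeros(z.size)
    x[int(np.argmax(z))] = 1.0
    return x

def prox_h(z, tau, lam, rho):
    """One element of P_tau h_{lam,rho}(z) for h = delta_S + lam*rho*||.||_1:
       soft-shift the magnitudes by tau*lam*rho, project onto S_+, restore signs."""
    z = np.asarray(z, dtype=float)
    sgn = np.where(z >= 0.0, 1.0, -1.0)
    return sgn * proj_S_plus(np.abs(z) - tau*lam*rho)
```

For instance, with $\tau=\lambda=1$ and $\rho=0.5$, `prox_h([2, 0], 1, 1, 0.5)` returns `[1, 0]`: the shifted magnitudes are $(1.5,-0.5)$, whose projection onto $\mathcal{S}_{+}$ is $(1,0)$.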
\subsection{Generalized subdifferentials}\label{sec2.2}
\begin{definition}\label{Gsubdiff-def}(see \cite[Definition 8.3]{RW98})\
Consider a function $h\!:\mathbb{R}^n\to\overline{\mathbb{R}}$ and a point $x\in{\rm dom}h$.
The regular subdifferential of $h$ at $x$, denoted by $\widehat{\partial}h(x)$, is defined as
\[
\widehat{\partial}h(x):=\bigg\{v\in\mathbb{R}^n\ \big|\
\liminf_{x'\to x\atop x'\ne x}\frac{h(x')-h(x)-\langle v,x'-x\rangle}{\|x'-x\|}\ge 0\bigg\};
\]
and the (limiting) subdifferential of $h$ at $x$, denoted by $\partial h(x)$, is defined as
\[
\partial h(x):=\Big\{v\in\mathbb{R}^n\,|\,\exists\,x^k\to x\ {\rm with}\ h(x^k)\to h(x)\ {\rm and}\
v^k\in\widehat{\partial}h(x^k)\to v\ {\rm as}\ k\to\infty\Big\}.
\]
\end{definition}
\begin{remark}\label{remark-Fsubdiff}
{\bf(i)} At each $x\in{\rm dom}h$, $\widehat{\partial}h(x)\subseteq\partial h(x)$,
$\widehat{\partial}h(x)$ is always closed and convex, and $\partial h(x)$ is closed
but generally nonconvex.
When $h$ is convex, $\widehat{\partial}h(x)=\partial h(x)$, which is precisely
the subdifferential of $h$ at $x$ in the sense of convex analysis.
\noindent
{\bf(ii)} Let $\{(x^k,v^k)\}_{k\in\mathbb{N}}$ be a sequence in the graph of
$\partial h$ that converges to $(x,v)$ as $k\to\infty$.
By invoking Definition \ref{Gsubdiff-def}, if $h(x^k)\to h(x)$ as $k\to\infty$,
then $v\in\partial h(x)$.
\noindent
{\bf(iii)} A point $\overline{x}$ at which $0\in\partial h(\overline{x})$
($0\in\widehat{\partial}h(\overline{x})$) is called a limiting (regular)
critical point of $h$. In the sequel, we denote by ${\rm crit}\,h$
the limiting critical point set of $h$.
\end{remark}
When $h$ is an indicator function of a closed set $C$,
the subdifferential of $h$ at $x\in C$ is the normal cone to $C$ at $x$,
denoted by $\mathcal{N}_{C}(x)$. The following lemma characterizes the (regular)
subdifferentials of $F_{\sigma,\gamma}$ and $G_{\sigma,\gamma,\rho}$ at any point
of their domains.
\begin{lemma}\label{lemma-critical}
Fix any $\lambda>0,\rho>0$ and $0<\!\gamma<\!{\sigma}/{2}$. Consider any $x\in\mathcal{S}$. Then,
\begin{itemize}
\item[(i)] $f_{\sigma,\gamma}$ is a smooth function whose gradient $\nabla\!f_{\sigma,\gamma}$
is Lipschitz continuous with the modulus $L_{\!f}\le\frac{1}{\gamma}\|A\|^2$.
\item [(ii)] $\widehat{\partial}F_{\sigma,\gamma}(x)=\partial F_{\sigma,\gamma}(x)=\nabla\!f_{\sigma,\gamma}(x)
+\mathcal{N}_{\mathcal{S}}(x)+\lambda\partial\|x\|_0$.
\item[(iii)] $\widehat{\partial}G_{\sigma,\gamma,\rho}(x)
=\partial G_{\sigma,\gamma,\rho}(x)\!=\!\nabla\Xi_{\sigma,\gamma}(x)
\!+\!\mathcal{N}_{\mathcal{S}}(x)+\lambda\rho\partial\|x\|_1$.
\item[(iv)] When $|x|_{\rm nz}\ge\frac{2a}{\rho(a-1)}$,
it holds that $\partial G_{\sigma,\gamma,\rho}(x)\subseteq\partial F_{\sigma,\gamma}(x)$.
\end{itemize}
\end{lemma}
\begin{proof}
{\bf(i)} The result is immediate by the definition of $f_{\sigma,\gamma}$ and the expression of $L_{\sigma,\gamma}$.
\noindent
{\bf(ii)} From \cite[Lemma 3.1-3.2 \& 3.4]{WuPanBi21},
$\widehat{\partial}g_{\lambda}(x)=\partial g_{\lambda}(x)=\mathcal{N}_{\mathcal{S}}(x)+\lambda\partial\|x\|_0$.
Together with part (i) and \cite[Exercise 8.8]{RW98}, we obtain the desired result.
\noindent
{\bf(iii)} By the convexity and Lipschitz continuity of $\ell_1$-norm and \cite[Exercise 10.10]{RW98},
it follows that $\partial h_{\lambda,\rho}(x)\!=\mathcal{N}_{\mathcal{S}}(x)+\lambda\rho\partial\|x\|_1$.
Let $\theta_{\!\rho}(z)\!:=\!\rho^{-1}\sum_{i=1}^n\psi^*(\rho|z_i|)$ for $z\in\mathbb{R}^n$.
Clearly, $\Xi_{\sigma,\gamma}=f_{\sigma,\gamma}-\lambda\rho\theta_{\!\rho}$.
By the expression of $\psi^*$ in \eqref{psi-star}, it is easy to verify that
$\theta_{\!\rho}$ is smooth and $\nabla\theta_{\!\rho}$ is Lipschitz continuous
with modulus $\rho\max(\frac{a+1}{2},\frac{a+1}{2(a-1)})$. Hence,
$\Xi_{\sigma,\gamma}$ is a smooth function whose gradient is Lipschitz continuous.
Together with \cite[Exercise 8.8]{RW98} and $G_{\sigma,\gamma,\rho}=\Xi_{\sigma,\gamma}+h_{\lambda,\rho}$,
we obtain the desired equalities.
\noindent
{\bf(iv)} Let $\theta_{\!\rho}$ be the function defined as above.
After an elementary calculation, we have
\[
\nabla\theta_{\!\rho}(x)\!=\big((\psi^*)'(\rho|x_1|){\rm sign}(x_1),\ldots,
(\psi^*)'(\rho|x_n|){\rm sign}(x_n)\big)^{\mathbb{T}}.
\]
Along with $|x|_{\rm nz}\ge\frac{2a}{\rho(a-1)}$ and the expression of $\psi^*$ in \eqref{psi-star},
we have $\nabla\theta_{\!\rho}(x)={\rm sign}(x)$ and
$\partial\|x\|_1\!-\!\nabla\theta_{\!\rho}(x)\subseteq\rho\partial\|x\|_0$.
By part (iii), $\nabla\Xi_{\sigma,\gamma}(x)=\nabla\!f_{\sigma,\gamma}(x)-\lambda\rho\nabla\theta_{\!\rho}(x)$.
Comparing $\partial G_{\sigma,\gamma,\rho}(x)$ in part (iii) with $\partial F_{\sigma,\gamma}(x)$
in part (ii) yields that $\partial G_{\sigma,\gamma,\rho}(x)\subseteq\partial F_{\sigma,\gamma}(x)$.
\end{proof}
\subsection{Stationary points}\label{sec2.3}
Lemma \ref{lemma-critical} shows that for the functions $F_{\sigma,\gamma}$ and $G_{\sigma,\gamma,\rho}$
the set of their regular critical points coincides with that of their limiting critical points,
so we call the critical point of $F_{\sigma,\gamma}$ a stationary point of \eqref{znorm-Moreau},
and the critical point of $G_{\sigma,\gamma,\rho}$ a stationary point of \eqref{Esurrogate}.
Motivated by the work \cite{Beck19}, we introduce a class of $\tau$-stationary points for them.
\begin{definition}\label{tau-critical}
Let $\tau>0$. A vector $x\in\mathbb{R}^n$ is called a $\tau$-stationary point of
\eqref{znorm-Moreau} if $x\in\mathcal{P}_{\!\tau}g_{\lambda}(x\!-\!\tau\nabla\!f_{\sigma,\gamma}(x))$,
and is called a $\tau$-stationary point of \eqref{Esurrogate} if
$x\in\mathcal{P}_{\!\tau}h_{\lambda,\rho}(x\!-\!\tau\nabla\Xi_{\sigma,\gamma}(x))$.
\end{definition}
In the sequel, we denote by $S_{\tau,g_{\lambda}}$ and $S_{\tau,h_{\lambda,\rho}}$
the $\tau$-stationary point set of \eqref{znorm-Moreau} and \eqref{Esurrogate}, respectively.
By Propositions \ref{proxm-glam} and \ref{proxm-hlam}, we have the following result for them.
\begin{lemma}\label{relation-critical}
Fix any $\tau>0,\lambda>0,\rho>0$ and $0<\!\gamma<\!{\sigma}/{2}$. Then, $S_{\tau,g_{\lambda}}\subseteq{\rm crit}F_{\sigma,\gamma}$
and $S_{\tau,h_{\lambda,\rho}}\subseteq{\rm crit}G_{\sigma,\gamma,\rho}$.
\end{lemma}
\begin{proof}
Pick any $\overline{x}\in\!S_{\tau,g_{\lambda}}$. Then
$\overline{x}=\mathcal{P}_{\!\tau}g_{\lambda}(\overline{x}\!-\!\tau\nabla\!f_{\sigma,\gamma}(\overline{x}))$.
By Proposition \ref{proxm-glam}, for each $i\in{\rm supp}(\overline{x})$,
$\overline{x}_i=\alpha^{-1}[\overline{x}_i\!-\!\tau(\nabla\!f_{\sigma,\gamma}(\overline{x}))_i]$
for some $\alpha\!>0$ (depending on $\overline{x}$). Then, for each $i\in{\rm supp}(\overline{x})$,
it holds that $(\nabla\!f_{\sigma,\gamma}(\overline{x}))_i+\tau^{-1}(\alpha\!-\!1)\overline{x}_i=0$.
Recall that
\begin{equation}\label{subdiff-sphere}
\mathcal{N}_{\mathcal{S}}(\overline{x})=\big\{\beta\overline{x}\,|\,\beta\in\mathbb{R}\big\}
\ \ {\rm and}\ \
\partial\|\overline{x}\|_0=\big\{v\in\mathbb{R}^n\,|\, v_i=0\ {\rm for}\ i\in{\rm supp}(\overline{x})\big\}.
\end{equation}
We have $0\in \nabla\!f_{\sigma,\gamma}(\overline{x})+\mathcal{N}_{\mathcal{S}}(\overline{x})+\lambda\partial\|\overline{x}\|_0$,
and hence $\overline{x}\in{\rm crit}F_{\sigma,\gamma}$ by Lemma \ref{lemma-critical} (ii).
Pick any $\overline{x}\in\!S_{\tau,h_{\lambda,\rho}}$.
Write $\overline{u}=\overline{x}\!-\!\tau\nabla\Xi_{\sigma,\gamma}(\overline{x})$.
Then, we have $\overline{x}=\mathcal{P}_{\!\tau}h_{\lambda,\rho}(\overline{u})$.
Let $J\!:=\{i\in[n]\,|\,|\overline{u}_i|>\tau\lambda\rho\}$ and $\overline{J}=[n]\backslash J$.
For each $i\in\overline{J}$,
$|(\nabla\Xi_{\sigma,\gamma}(\overline{x}))_i-\tau^{-1}\overline{x}_i|\le\lambda\rho$.
Since the subdifferential of the function $t\mapsto |t|$ at $0$ is $[-1,1]$,
it holds that
\[
0\in[\nabla\Xi_{\sigma,\gamma}(\overline{x})]_{\overline{J}}-\tau^{-1}\overline{x}_{\overline{J}}
+\lambda\rho\partial\|\overline{x}_{\overline{J}}\|_1.
\]
By Proposition \ref{proxm-hlam}, we have
\(
\overline{x}_J=\frac{\overline{u}_J-\tau\lambda\rho{\rm sign}(\overline{u}_J)}
{\|\overline{u}_J-\tau\lambda\rho{\rm sign}(\overline{u}_J)\|}.
\)
Together with ${\rm sign}(\overline{u}_J)={\rm sign}(\overline{x}_J)$,
\[
(\nabla\Xi_{\sigma,\gamma}(\overline{x}))_J
+\tau^{-1}(\|\overline{u}_J\!-\!\tau\lambda\rho{\rm sign}(\overline{u}_J)\|\!-\!1)\overline{x}_J
+\lambda\rho{\rm sign}(\overline{x}_J)=0.
\]
By the expression of $\mathcal{N}_{\mathcal{S}}(\overline{x})$ in \eqref{subdiff-sphere},
from the last two equations it follows that
\[
0\in \nabla\Xi_{\sigma,\gamma}(\overline{x})+\mathcal{N}_{\mathcal{S}}(\overline{x})+\lambda\rho\partial\|\overline{x}\|_1.
\]
By Lemma \ref{lemma-critical} (iii), this shows that $\overline{x}\in{\rm crit}G_{\sigma,\gamma,\rho}$.
The proof is completed.
\end{proof}
Note that if $\overline{x}$ is a stationary point of \eqref{znorm-Moreau},
then for $i\notin {\rm supp}(\overline{x})$, $[\mathcal{P}_{\!\tau}g_{\lambda}(\overline{x}\!-\!\tau\nabla\!f_{\sigma,\gamma}(\overline{x}))]_i$
does not necessarily equal $0$. A similar case also occurs for the stationary point of \eqref{Esurrogate}.
This means that the two inclusions in Lemma \ref{relation-critical} are generally strict.
By combining Lemma \ref{relation-critical} with \cite[Theorem 10.1]{RW98},
it is immediate to obtain the following conclusion.
\begin{corollary}\label{corollary-opt}
Fix any $\tau>0$. For the problems \eqref{znorm-Moreau} and \eqref{Esurrogate},
every local optimal solution is necessarily a stationary point,
and consequently a $\tau$-stationary point.
\end{corollary}
\subsection{Kurdyka-{\L}ojasiewicz property}\label{sec2.4}
\begin{definition}\label{KL-def}
(see \cite{Attouch10}) A proper lsc function $h\!:\mathbb{R}^n\to\overline{\mathbb{R}}$ is said to have
the KL property at $\overline{x}\in{\rm dom}\,\partial h$ if there exist $\eta\in(0,+\infty]$,
a neighborhood $\mathcal{U}$ of $\overline{x}$, and a continuous concave function
$\varphi\!:[0,\eta)\to\mathbb{R}_{+}$ that is continuously differentiable on $(0,\eta)$
with $\varphi'(s)>0$ for all $s\in(0,\eta)$ and $\varphi(0)=0$, such that
for all $x\in\mathcal{U}\cap\big[h(\overline{x})<h<h(\overline{x})+\eta\big]$,
\[
\varphi'(h(x)-h(\overline{x})){\rm dist}(0,\partial h(x))\ge 1.
\]
If $\varphi$ can be chosen as $\varphi(t)=ct^{1-\theta}$ with $\theta\in[0,1)$
for some $c>0$, then $h$ is said to have the KL property of exponent $\theta$
at $\overline{x}$. If $h$ has the KL property (of exponent $\theta$) at each point
of ${\rm dom}\,\partial h$, then it is called a KL function (of exponent $\theta$).
\end{definition}
\begin{remark}\label{KL-remark}
{\bf(a)} As discussed thoroughly in \cite[Section 4]{Attouch10},
a large number of nonconvex nonsmooth functions are KL functions,
including real semi-algebraic functions and functions definable
in an o-minimal structure.
\noindent
{\bf(b)} From \cite[Lemma 2.1]{Attouch10}, a proper lsc function has
the KL property of exponent $\theta=0$ at any noncritical point.
Thus, to prove that a proper lsc $h\!:\mathbb{R}^n\to\overline{\mathbb{R}}$
is a KL function (of exponent $\theta$), it suffices to establish its KL property
(of exponent $\theta$) at its critical points. For the calculation of KL exponents,
we refer to the recent works \cite{LiPong18,YuLiPong21}.
\end{remark}
\section{Equivalent surrogates of the model \eqref{znorm-Moreau}}\label{sec3}
Pick any $\phi\in\!\mathscr{L}$. By invoking equation \eqref{phi-assump}, it is immediate to verify that
for any $x\in\mathbb{R}^n$,
\[
\|x\|_0=\min_{w\in[0,e]}\Big\{\textstyle{\sum_{i=1}^n}\phi(w_i)\ \ {\rm s.t.}\ \langle e-\!w,|x|\rangle=0\Big\}.
\]
This means that the zero-norm regularized problem \eqref{znorm-Moreau} can be reformulated as
\begin{equation}\label{Eznorm-MPEC}
\min_{x\in\mathcal{S},w\in[0,e]}\Big\{f_{\sigma,\gamma}(x)+\lambda\textstyle{\sum_{i=1}^n}\phi(w_i)
\quad\mbox{s.t.}\ \ \langle e-w,|x|\rangle=0\Big\}
\end{equation}
in the following sense: if $x^*$ is globally optimal to the problem \eqref{znorm-Moreau},
then $(x^*\!,{\rm sign}(|x^*|))$ is a global optimal solution of the problem \eqref{Eznorm-MPEC},
and conversely, if $(x^*,w^*)$ is a global optimal solution of \eqref{Eznorm-MPEC},
then $x^*$ is globally optimal to \eqref{znorm-Moreau}. The problem \eqref{Eznorm-MPEC}
is a mathematical program with an equilibrium constraint $e\!-\!w\ge 0,|x|\ge 0,\langle e-w,|x|\rangle=0$.
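The identity behind this reformulation decouples coordinatewise: the constraint $\langle e-w,|x|\rangle=0$ forces $w_i=1$ (cost $\phi(1)=1$) whenever $x_i\ne 0$, while for $x_i=0$ the coordinate $w_i$ is free and contributes $\min_{t\in[0,1]}\phi(t)=\phi(t^*)=0$. A numerical sketch (function names ours), using the quadratic $\phi$ from \eqref{psi-star}, for which $t^*=0$:

```python
import numpy as np

def phi(t, a):
    """phi(t) = ((a-1) t^2 + 2 t)/(a+1); phi(0) = 0 and phi(1) = 1, so phi lies in L."""
    return ((a - 1.0)*t**2 + 2.0*t) / (a + 1.0)

def znorm_via_phi(x, a, grid=20001):
    """Evaluate min_{w in [0,1]^n} sum_i phi(w_i) s.t. <e - w, |x|> = 0.
       The constraint decouples: w_i = 1 when x_i != 0, else w_i minimizes phi on [0,1]."""
    t = np.linspace(0.0, 1.0, grid)
    min_phi = float(np.min(phi(t, a)))           # = phi(t*) = 0 for this phi
    return sum(phi(1.0, a) if xi != 0.0 else min_phi
               for xi in np.asarray(x, dtype=float))
```

On any vector, this reproduces $\|x\|_0$ exactly, since $\phi(1)=1$ and $\phi(t^*)=0$.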
In this section, we shall show that the penalty problem induced by this equilibrium constraint, i.e.,
\begin{equation}\label{Penalty-MPEC}
\min_{x\in\mathcal{S},w\in[0,e]}\Big\{f_{\sigma,\gamma}(x)+\lambda\textstyle{\sum_{i=1}^n}\phi(w_i)
+\rho\lambda\langle e-w,|x|\rangle\Big\}
\end{equation}
is a global exact penalty of \eqref{Eznorm-MPEC} and from this global exact penalty achieve
the equivalent surrogate in \eqref{Esurrogate}, where $\rho>0$ is the penalty parameter.
For each $s\in[n]$, write
\[
\Omega_{s}:=\mathcal{S}\cap\mathcal{R}_{s}\ \ {\rm with}\ \ \mathcal{R}_{s}:=\{x\in\mathbb{R}^n\,|\,\|x\|_0\le s\}.
\]
To get the conclusion of this section, we need the following global error bound result.
\begin{lemma}\label{calmness}
For each $s\in\{1,2,\ldots,n\}$, there exists $\kappa_{s}>0$ such that
for all $x\in\mathcal{S}$,
\[
{\rm dist}(x,\Omega_{s})\le\kappa_{s}\big[\|x\|_1-\|x\|_{(s)}\big],
\]
where $\|x\|_{(s)}$ denotes the sum of the $s$ largest entries of the vector $|x|$.
\end{lemma}
\begin{proof}
Fix any $s\in\{1,2,\ldots,n\}$. We first argue that the following multifunction
\[
\Upsilon_{\!s}(\tau):=\big\{x\in\mathcal{S}\,|\,\|x\|_1-\|x\|_{(s)}=\tau\big\}
\ \ {\rm for}\ \tau\in\mathbb{R}
\]
is calm at $0$ for every $x\in\Upsilon_{\!s}(0)$. Pick any $\widehat{x}\in\Upsilon_{\!s}(0)$.
By \cite[Theorem 3.1]{QianPan21}, the calmness of $\Upsilon_{\!s}$ at $0$ for $\widehat{x}$
is equivalent to the existence of $\delta>0$ and $\kappa>0$ such that
\begin{equation}\label{aim-ineq1}
{\rm dist}(x,\Omega_{s})\le\kappa\big[{\rm dist}(x,\mathcal{S})+{\rm dist}(x,\mathcal{R}_{s})\big]
\ \ {\rm for\ all}\ x\in\mathbb{B}(\widehat{x},\delta).
\end{equation}
Since $\|\widehat{x}\|=1$, there exists $\varepsilon\in(0,1/2)$ such that
for all $x\in\mathbb{B}(\widehat{x},\varepsilon)$, $x\ne 0$.
Fix any $x\in\mathbb{B}(\widehat{x},{\varepsilon}/{2})$.
Clearly, $\|x\|\ge\|\widehat{x}\|-{\varepsilon}/{2}\ge{3}/{4}$.
This means that $\|x\|_{\infty}\!\ge\frac{3}{4\sqrt{n}}$.
Pick any $x^*\in\Pi_{\mathcal{R}_{s}}(x)$. Clearly,
$\|x^*\|_{\infty}=\|x\|_\infty\ge\frac{3}{4\sqrt{n}}$ and $\frac{x^*}{\|x^*\|}\in\Omega_s$.
Then, with $\overline{x}=\frac{x}{\|x\|}$,
\begin{align*}
{\rm dist}(x,\Omega_{s})&\le\|x-x^*\!/{\|x^*\|}\|
\le\|x-\overline{x}\|+\|\overline{x}-x^*\!/{\|x^*\|}\|\\
&\le\|x-\overline{x}\|+\frac{\|(x-x^*)\|x\|+x(\|x^*\|-\|x\|)\|}{\|x\|\|x^*\|}\\
&\le\|x-\overline{x}\|+(2/\|x^*\|)\|x-x^*\|
\le{\rm dist}(x,\mathcal{S})+3\sqrt{n}{\rm dist}(x,\mathcal{R}_{s}).
\end{align*}
This shows that the inequality \eqref{aim-ineq1} holds for $\delta=\varepsilon/2$ and
$\kappa=3\sqrt{n}$. Consequently, the mapping $\Upsilon_{\!s}$ is calm at $0$ for
every $x\in\Upsilon_{\!s}(0)$. Now by invoking \cite[Theorem 3.3]{QianPan21}
and the compactness of $\mathcal{S}$, we obtain the desired result.
The proof is completed.
\end{proof}
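Lemma \ref{calmness} can be probed numerically when $\mathcal{S}$ is the unit sphere (as the identity $\|\widehat{x}\|=1$ in the proof suggests), with the constant $\kappa_s=3\sqrt{n}$ obtained above. The sketch below is only an illustration under that assumption: it projects onto $\mathcal{R}_s$ by keeping the $s$ largest-magnitude entries, normalizes the projection to obtain a point of $\Omega_s$ (hence an upper bound on ${\rm dist}(x,\Omega_s)$), and checks the error bound on random unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 8, 3
kappa = 3.0 * np.sqrt(n)          # constant from the proof of the lemma

def top_s(x, s):
    # Projection onto R_s: keep the s largest-magnitude entries.
    z = np.zeros_like(x)
    idx = np.argsort(-np.abs(x))[:s]
    z[idx] = x[idx]
    return z

ok = True
for _ in range(1000):
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)                           # a point of the unit sphere
    xs = top_s(x, s)
    d = np.linalg.norm(x - xs / np.linalg.norm(xs))  # >= dist(x, Omega_s)
    residual = np.abs(x).sum() - np.sort(np.abs(x))[-s:].sum()  # ||x||_1 - ||x||_(s)
    ok = ok and d <= kappa * residual + 1e-12
print(ok)  # True
```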
Now we are ready to show that the problem \eqref{Penalty-MPEC}
is a global exact penalty of \eqref{Eznorm-MPEC}.
\begin{proposition}\label{partial-calm}
Let $\overline{\rho}:=\frac{\kappa\phi_{-}'(1)(1-t^*)\alpha_{\!f}}{\lambda(1-t_0)}$
where $t_0\in[0,1)$ is such that $\frac{1}{1-t^*}\in\partial\phi(t_0)$,
$\phi_{-}'(1)$ is the left derivative of $\phi$ at $1$,
$\alpha_{\!f}$ is the Lipschitz constant of $f_{\sigma,\gamma}$ on $\mathcal{S}$,
and $\kappa=\max_{1\le s\le n}\kappa_s$ with $\kappa_s$ given by Lemma \ref{calmness}.
Then, for any $(x,w)\in\mathcal{S}\times[0,e]$,
\begin{equation}\label{aim-ineq}
\big[f_{\sigma,\gamma}(x) +\lambda{\textstyle\sum_{i=1}^n}\phi(w_i)\big]
-\big[f_{\sigma,\gamma}(x^*) +\lambda{\textstyle\sum_{i=1}^n}\phi(w_i^*)\big]
+\overline{\rho}\lambda\langle e\!-\!w,|x|\rangle\ge 0,
\end{equation}
where $(x^*,w^*)$ is an arbitrary global optimal solution of \eqref{Eznorm-MPEC},
and consequently the problem \eqref{Penalty-MPEC} associated to each $\rho>\overline{\rho}$
has the same global optimal solution set as \eqref{Eznorm-MPEC} does.
\end{proposition}
\begin{proof}
By Lemma \ref{calmness} and $\kappa=\max_{1\le s\le n}\kappa_s$,
for each $s\in\{1,2,\ldots,n\}$ and any $z\in\mathcal{S}$,
\begin{equation}\label{err-bound}
{\rm dist}(z,\mathcal{S}\cap\mathcal{R}_s)\le\kappa\big[\|z\|_1-\|z\|_{(s)}\big].
\end{equation}
Fix any $(x,w)\in\mathcal{S}\times[0,e]$.
Let $J=\big\{j\in[n]\,|\ \overline{\rho}|x|_j^{\downarrow}>\phi_{-}'(1)\big\}$ and
$r=|J|$. By invoking \eqref{err-bound} for $s=r$ with $z=x$,
there exists $x^{\overline{\rho}}\in\mathcal{S}\cap\mathcal{R}_r$ such that
\begin{equation}\label{err-ineq1}
\|x-x^{\overline{\rho}}\|\le\kappa\big[\|x\|_1-\|x\|_{(r)}\big]
=\kappa{\textstyle\sum_{j=r+1}^n}|x|_j^{\downarrow}.
\end{equation}
Let $J_1=\!\big\{j\in[n]\,|\,\frac{1}{1-t^*}\!\le\!\overline{\rho}|x|_j^{\downarrow}\le\phi_{-}'(1)\big\}$
and $J_2=\!\big\{j\in[n]\,|\,0\!\le\!\overline{\rho}|x|_j^{\downarrow}<\frac{1}{1-t^*}\big\}$.
Note that
\[
{\textstyle\sum_{i=1}^n}\phi(w_i)+\overline{\rho}\big(\|x\|_1\!-\langle w,|x|\rangle\big)
\ge{\textstyle \sum_{i=1}^n}\min_{t\in[0,1]}\big\{\phi(t)+\overline{\rho}|x|_i^{\downarrow}(1-t)\big\}.
\]
By invoking \cite[Lemma 1]{LiuBiPan18} with $\omega=|x|_j^{\downarrow}$
for each $j$, it immediately follows that
\[
{\textstyle\sum_{i=1}^n}\phi(w_i)+\overline{\rho}\big(\|x\|_1\!-\langle w,|x|\rangle\big)
\ge\|x^{\overline{\rho}}\|_0+\frac{\overline{\rho}(1\!-\!t_0)}{\phi_{-}'(1)(1\!-t^*)}
\sum_{j\in J_1}\,|x|_j^{\downarrow}+\overline{\rho}(1\!-\!t_0)
\sum_{j\in J_2}\,|x|_j^{\downarrow}.
\]
Notice that $1=\phi(1)=\phi(1)-\phi(t^*)\le\phi_{-}'(1)(1-t^*)$. From the last inequality, we have
\begin{align}\label{temp-wxineq0}
{\textstyle\sum_{i=1}^n}\phi(w_i)+\overline{\rho}\big(\|x\|_1\!-\langle w,|x|\rangle\big)
\ge\|x^{\overline{\rho}}\|_0+\frac{\overline{\rho}(1-t_0)}{\phi_{-}'(1)(1\!-t^*)}
\sum_{j\in J_1\cup J_2}|x|_j^{\downarrow}\nonumber\\
=\|x^{\overline{\rho}}\|_0+\frac{\overline{\rho}(1-t_0)}{\phi_{-}'(1)(1\!-t^*)}
\sum_{j=r+1}^n|x|_j^{\downarrow}
\ge \|x^{\overline{\rho}}\|_0+\alpha_{\!f}\lambda^{-1}\|x-x^{\overline{\rho}}\|\qquad
\end{align}
where the last inequality is due to \eqref{err-ineq1} and the definition of $\overline{\rho}$.
Since $x\in\mathcal{S}$ and $x^{\overline{\rho}}\in\mathcal{S}$, we have
$f_{\sigma,\gamma}(x^{\overline{\rho}})-f_{\sigma,\gamma}(x)\le \alpha_{\!f}\|x-x^{\overline{\rho}}\|$.
Together with the last inequality,
\begin{equation}\label{temp-wxineq}
{\textstyle\sum_{i=1}^n}\phi(w_i)+\overline{\rho}\big(\|x\|_1\!-\langle w,|x|\rangle\big)
\ge \|x^{\overline{\rho}}\|_0+\lambda^{-1}\big[f_{\sigma,\gamma}(x^{\overline{\rho}})-f_{\sigma,\gamma}(x)\big].
\end{equation}
Now take $w_{i}^{\overline{\rho}}=1$ for $i\in{\rm supp}(x^{\overline{\rho}})$
and $w_{i}^{\overline{\rho}}=0$ for $i\notin{\rm supp}(x^{\overline{\rho}})$.
Clearly, $(x^{\overline{\rho}},w^{\overline{\rho}})$ is a feasible point
of the MPEC \eqref{Eznorm-MPEC} with $\sum_{i=1}^n\phi(w_i^{\overline{\rho}})
=\|x^{\overline{\rho}}\|_0$. Then, it holds that
\[
f_{\sigma,\gamma}(x^{\overline{\rho}})+\lambda\|x^{\overline{\rho}}\|_0
\ge f_{\sigma,\gamma}(x^*) +\lambda{\textstyle\sum_{i=1}^n}\phi(w_i^*).
\]
Together with \eqref{temp-wxineq}, we obtain the inequality \eqref{aim-ineq}.
Notice that $\langle e\!-\!w^*,|x^*|\rangle=0$. The inequality \eqref{aim-ineq}
implies that every global optimal solution of \eqref{Eznorm-MPEC} is globally
optimal to the problem \eqref{Penalty-MPEC} associated to every $\rho>\overline{\rho}$.
Conversely, by fixing any $\rho>\overline{\rho}$ and letting $(\overline{x}^{\rho},\overline{w}^{\rho})$
be a global optimal solution of the problem \eqref{Penalty-MPEC} associated to $\rho$, it holds that
\begin{align*}
&f_{\sigma,\gamma}(\overline{x}^{\rho})+\lambda\textstyle{\sum_{i=1}^n}\phi(\overline{w}_i^{\rho})
+\rho\lambda\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle \\
&\le f_{\sigma,\gamma}(x^*)+\lambda\textstyle{\sum_{i=1}^n}\phi(w_i^*)
\le f_{\sigma,\gamma}(x^*)+\lambda\textstyle{\sum_{i=1}^n}\phi(w_i^*)
+\frac{\rho+\overline{\rho}}{2}\lambda\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle\\
&\le f_{\sigma,\gamma}(\overline{x}^{\rho})+\lambda\textstyle{\sum_{i=1}^n}\phi(\overline{w}_i^{\rho})
+\frac{\rho+\overline{\rho}}{2}\lambda\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle,
\end{align*}
which implies that $\frac{\rho-\overline{\rho}}{2}\lambda\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle\le 0$.
Since $\rho>\overline{\rho}$ and $\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle\ge 0$,
we obtain $\langle e-\overline{w}^{\rho},|\overline{x}^{\rho}|\rangle=0$.
Together with the last inequality, it follows that $(\overline{x}^{\rho},\overline{w}^{\rho})$
is a global optimal solution of \eqref{Eznorm-MPEC}.
The second part then follows.
\end{proof}
By the definition of $\psi$, the penalty problem \eqref{Penalty-MPEC} can be rewritten in a compact form
\begin{equation*}
\min_{x\in\mathcal{S},w\in\mathbb{R}^n}\big\{f_{\sigma,\gamma}(x)+\lambda\textstyle{\sum_{i=1}^n}\psi(w_i)
+\rho\lambda\langle e-w,|x|\rangle\big\},
\end{equation*}
which, by the definition of the conjugate function $\psi^*$, can be simplified to \eqref{Esurrogate}.
Then, Proposition \ref{partial-calm} implies that the problem \eqref{Esurrogate} associated
to every $\phi\in\!\mathscr{L}$ and $\rho>\overline{\rho}$ is an equivalent surrogate
of the problem \eqref{znorm-Moreau}. For a specific $\phi$, the quantities $t^*,t_0$
and $\phi_{-}'(1)$ are known, so the threshold $\overline{\rho}$ is also computable
via Lemma \ref{calmness}, although $\kappa=3\sqrt{n}$ from its proof is a rough estimate.
When $\phi$ is the function in Section \ref{sec1.2}, it is easy to verify that
$\lambda\rho\varphi_{\rho}$ with $\lambda=\frac{(a+1)\nu^2}{2}$ and $\rho=\frac{2}{(a+1)\nu}$
is exactly the SCAD function $x\mapsto\sum_{i=1}^n\rho_{\nu}(x_i)$ proposed in \cite{Fan01}.
Since $t^*=0,t_0=1/2$ and $\phi_{-}'(1)=\frac{2a}{a+1}$ for this $\phi$,
the SCAD function with $\nu<\frac{2}{(a+1)\overline{\rho}}$ is an equivalent
surrogate of \eqref{znorm-Moreau}.
When $\phi(t)=\frac{a^2}{4}t^2-\frac{a^2}{2}t+at+\frac{(a-2)^2}{4}\ (a>2)$ for $t\in\mathbb{R}$,
\[
\psi^*(\omega)=\left\{\begin{array}{cl}
-\frac{(a-2)^2}{4} & \textrm{if}\ \omega\leq a-a^2/2,\\
\frac{1}{a^2}(\frac{a(a-2)}{2}+\omega)^2-\frac{(a-2)^2}{4}& \textrm{if}\ a-a^2/2 <\omega\leq a,\\
\omega-1 & \textrm{if}\ \omega>a.
\end{array}\right.
\]
It is not hard to verify that the function $\lambda\rho\varphi_{\rho}$
with $\lambda={a\nu^2}/{2}$ and $\rho={1}/{\nu}$ is exactly the function
$x\mapsto\sum_{i=1}^ng_{\nu,b}(x_i)$ with $b=a$ used in \cite[Section 3.3]{HuangY18}.
Since $t^*=\frac{a-2}{a},t_0=\frac{a-1}{a}$ and $\phi_{-}'(1)=a$ for this $\phi$,
the MCP function used in \cite{HuangY18} with $\nu<1/{\overline{\rho}}$ and $b=a$
is also an equivalent surrogate of the problem \eqref{znorm-Moreau}.
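The piecewise formula for $\psi^*$ displayed above can be checked for continuity at its two breakpoints $\omega=a-a^2/2$ and $\omega=a$; at the first the middle branch collapses to $-\frac{(a-2)^2}{4}$, and at the second it equals $a-1$. The snippet below (an illustrative check only) evaluates the branches near both breakpoints:

```python
def psi_star(omega, a):
    # Conjugate psi^* from the displayed piecewise formula (a > 2 assumed).
    if omega <= a - a * a / 2:
        return -(a - 2) ** 2 / 4
    if omega <= a:
        return (1 / a**2) * (a * (a - 2) / 2 + omega) ** 2 - (a - 2) ** 2 / 4
    return omega - 1

a = 3.5
for w in (a - a * a / 2, a):                 # the two breakpoints
    left, right = psi_star(w - 1e-9, a), psi_star(w + 1e-9, a)
    assert abs(left - right) < 1e-6, (w, left, right)
print("psi* is continuous at both breakpoints")
```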
\section{PG method with extrapolation}\label{sec4}
\subsection{PG with extrapolation for solving \eqref{znorm-Moreau}}\label{sec4.1}
Recall that $f_{\sigma,\gamma}$ is a smooth function whose gradient $\nabla\!f_{\sigma,\gamma}$
is Lipschitz continuous with modulus $L_{\!f}\le\gamma^{-1}\|A\|^2$, while by
Proposition \ref{proxm-glam} the proximal mapping of $g_{\lambda}$ has a closed form.
This inspires us to apply the PG method with extrapolation to solve \eqref{znorm-Moreau}.
\begin{algorithm}[H]
\caption{\label{PGMe1}{\bf (PGe-znorm for solving the problem \eqref{znorm-Moreau})}}
\textbf{Initialization:} Choose $\varsigma\in(0,1),0<\tau<(1\!-\!\varsigma)L_{\!f}^{-1},0<\beta_{\rm max}\le\frac{\sqrt{\varsigma(\tau^{-1}-L_{\!f})\tau^{-1}}}{2(\tau^{-1}+L_{\!f})}$ and
an initial point $x^0\in\mathcal{S}$. Set $x^{-1}=x^{0}$ and $k:=0$.
\noindent
\textbf{while} the termination condition is not satisfied \textbf{do}
\begin{itemize}
\item[{\bf1.}] Let $\widetilde{x}^k=x^{k}+\beta_k(x^k-x^{k-1})$. Compute
$x^{k+1}\in\mathcal{P}_{\!\tau}g_{\lambda}(\widetilde{x}^k\!-\!\tau\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k))$.
\item[{\bf2.}] Choose $\beta_{k+1}\in[0,\beta_{\rm max}]$. Let $k\leftarrow k+1$ and go to Step 1.
\end{itemize}
\textbf{end (while)}
\end{algorithm}
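The iteration of Algorithm \ref{PGMe1} can be sketched as follows. The closed-form proximal mapping of $g_{\lambda}$ from Proposition \ref{proxm-glam} is not reproduced here, so the prox is passed in as an oracle; the function names and the constant choice $\beta_k\equiv\beta_{\rm max}$ are illustrative assumptions. The toy run uses $f(x)=\frac{1}{2}\|x-b\|^2$ with the trivial prox (the identity), for which the iterates approach $b$.

```python
import numpy as np

def pge(grad_f, prox, x0, L_f, varsigma=0.5, n_iter=200):
    # Sketch of PGe-znorm: extrapolate, then take a proximal gradient step.
    # prox(v, tau) stands for the tau-proximal mapping of g_lambda (oracle).
    tau = 0.5 * (1.0 - varsigma) / L_f            # 0 < tau < (1 - varsigma)/L_f
    beta_max = np.sqrt(varsigma * (1/tau - L_f) / tau) / (2 * (1/tau + L_f))
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        x_tilde = x + beta_max * (x - x_prev)     # Step 1: extrapolation
        x_prev, x = x, prox(x_tilde - tau * grad_f(x_tilde), tau)
    return x

b = np.array([1.0, -2.0, 0.5])
x_star = pge(lambda x: x - b, lambda v, tau: v, np.zeros(3), L_f=1.0)
print(np.linalg.norm(x_star - b))  # nearly 0
```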
\begin{remark}\label{remark1-PGMe1}
The main computational work of Algorithm \ref{PGMe1} in each iteration is to seek
\begin{equation}\label{xk-def}
x^{k+1}\in\mathop{\arg\min}_{x\in\mathbb{R}^n}\Big\{\langle\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k),x-\widetilde{x}^k\rangle
+\frac{1}{2\tau}\|x-\widetilde{x}^k\|^2+g_{\lambda}(x)\Big\}.
\end{equation}
By Proposition \ref{proxm-glam}, computing a global optimal solution of
the nonconvex problem \eqref{xk-def} requires only about $2mn$ flops.
Owing to the good performance of Nesterov's acceleration strategy \cite{Nesterov83,Nesterov04},
one can use it to choose the extrapolation parameter $\beta_k$, i.e.,
\begin{equation}\label{accelerate}
\beta_k=\frac{t_{k-1}-1}{t_k}\ \ {\rm with}\ t_{k+1}=\frac{1}{2}\big(1+\!\sqrt{1+4t_k^2}\big)
\ \ {\rm for}\ t_{-1}=t_{0}=1.
\end{equation}
In Algorithm \ref{PGMe1}, an upper bound $\beta_{\rm max}$ is imposed on $\beta_{k}$ just for
the convergence analysis. It is easy to check that as $\varsigma$ approaches $1$,
say $\varsigma=0.999$, $\beta_{\rm max}$ can be taken as $0.235$.
\end{remark}
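The update \eqref{accelerate} together with the cap $\beta_{\rm max}$ can be sketched in a few lines (an illustrative snippet; the value $0.235$ is the bound mentioned above for $\varsigma=0.999$). Note that $\beta_0=\beta_1=0$ and the uncapped values increase toward $1$, so the cap becomes active after two iterations:

```python
import math

def nesterov_betas(num, beta_max=0.235):
    # Extrapolation parameters beta_k from the scheme above,
    # clipped to [0, beta_max] as Algorithm PGe-znorm requires.
    t_prev, t = 1.0, 1.0                      # t_{-1} = t_0 = 1
    betas = []
    for _ in range(num):
        betas.append(min((t_prev - 1.0) / t, beta_max))
        t_prev, t = t, 0.5 * (1.0 + math.sqrt(1.0 + 4.0 * t * t))
    return betas

print(nesterov_betas(6))  # [0.0, 0.0, 0.235, 0.235, 0.235, 0.235]
```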
The PG method with extrapolation, first proposed in \cite{Nesterov83} and
extended to a general composite setting in \cite{Beck09}, is a popular
first-order method for solving nonconvex nonsmooth composite optimization problems
such as \eqref{znorm-Moreau} and \eqref{Esurrogate}. In the past several years,
the PGe and its variants have received extensive attention (see, e.g.,
\cite{Ghadimi16,Li15,Wen17,Ochs14,Ochs19,Xu17,Yang21}). Due to the nonconvexity of
the sphere constraint and the zero-norm, the results obtained in \cite{Ghadimi16,Wen17,Ochs14}
are not applicable to \eqref{znorm-Moreau}. Although Algorithm \ref{PGMe1} is
a special case of those studied in \cite{Li15,Xu17,Yang21}, the convergence results
of \cite{Li15,Yang21} are obtained only for the objective value sequence, and the convergence
result of \cite{Xu17} on the iterate sequence requires a strong restriction on $\beta_k$,
namely that it keep the objective value sequence nonincreasing.
Next we prove the convergence and the local convergence rate of
the iterate sequence generated by Algorithm \ref{PGMe1}.
For any $\tau>0$ and $\varsigma\in(0,1)$, we define the function
\begin{equation}\label{Htau-vsig}
H_{\tau,\varsigma}(x,u):=F_{\sigma,\gamma}(x)+\frac{\varsigma}{4\tau}\|x-u\|^2
\quad\ \forall (x,u)\in\mathbb{R}^n\times\mathbb{R}^n.
\end{equation}
The following lemma summarizes the properties of $H_{\tau,\varsigma}$ along the sequence
$\{x^k\}_{k\in\mathbb{N}}$.
\begin{lemma}\label{lemma1-PGMe1}
Let $\{x^k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm \ref{PGMe1}. Then,
\begin{itemize}
\item[(i)] for each $k\in\mathbb{N}$,
\(
H_{\tau,\varsigma}(x^{k+1},x^k)\le H_{\tau,\varsigma}(x^{k},x^{k-1})
-\frac{\varsigma(\tau^{-1}-L_{\!f})}{2}\|x^{k+1}\!-\!x^k\|^2.
\)
\item[(ii)] The sequence $\{H_{\tau,\varsigma}(x^{k},x^{k-1})\}_{k\in\mathbb{N}}$ is convergent
and $\sum_{k=1}^{\infty}\|x^{k+1}\!-\!x^k\|^2<\infty$.
\item[(iii)] For each $k\in\mathbb{N}$, there exists $w^{k}\in\partial H_{\tau,\varsigma}(x^{k},x^{k-1})$
with $\|w^{k+1}\|\le b_1\|x^{k+1}\!-\!x^k\|+b_2\|x^k\!-\!x^{k-1}\|$, where
$b_1>0$ and $b_2>0$ are constants independent of $k$.
\end{itemize}
\end{lemma}
\begin{proof}
{\bf(i)} Since $\nabla\!f_{\sigma,\gamma}$ is globally Lipschitz continuous,
from the descent lemma we have
\begin{equation}\label{fdecent}
f_{\sigma,\gamma}(x')\le f_{\sigma,\gamma}(x)+\langle\nabla\!f_{\sigma,\gamma}(x),x'-x\rangle
+({L_{\!f}}/{2})\|x'-x\|^2\quad\forall x',x\in\mathbb{R}^n.
\end{equation}
From the definition of $x^{k+1}$ or the equation \eqref{xk-def},
for each $k\in\mathbb{N}$ it holds that
\[
\langle\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k),x^{k+1}-x^k\rangle
+\frac{1}{2\tau}\|x^{k+1}-\widetilde{x}^k\|^2+g_{\lambda}(x^{k+1})
\le \frac{1}{2\tau}\|x^{k}-\widetilde{x}^k\|^2+g_{\lambda}(x^{k}).
\]
Together with the inequality \eqref{fdecent} for $x'=x^{k+1}$ and $x=x^k$,
it follows that
\begin{align*}
f_{\sigma,\gamma}(x^{k+1})+g_{\lambda}(x^{k+1})
&\le f_{\sigma,\gamma}(x^{k})+g_{\lambda}(x^{k})
-\frac{1}{2\tau}\|x^{k+1}-\widetilde{x}^k\|^2
+\frac{L_{\!f}}{2}\|x^{k+1}-x^k\|^2 \nonumber\\
&\quad+\langle\nabla\!f_{\sigma,\gamma}(x^k)-\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k),x^{k+1}-x^k\rangle
+\frac{1}{2\tau}\|x^{k}-\widetilde{x}^k\|^2 \nonumber\\
&=f_{\sigma,\gamma}(x^{k})+g_{\lambda}(x^{k}) -\frac{1}{2}(\tau^{-1}\!-\!L_{\!f})\|x^{k+1}-x^k\|^2\\
&\quad -\frac{1}{\tau}\langle x^{k+1}\!-\!x^k,x^{k}\!-\!\widetilde{x}^k\rangle +\langle\nabla\!f_{\sigma,\gamma}(x^k)\!-\!\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k),x^{k+1}\!-\!x^k\rangle.\nonumber
\end{align*}
Using $\widetilde{x}^k=x^{k}+\beta_k(x^k-x^{k-1})$ and the Lipschitz continuity
of $\nabla\!f_{\sigma,\gamma}$ yields that
\begin{align*}
F_{\sigma,\gamma}(x^{k+1})
&\le F_{\sigma,\gamma}(x^k) -\frac{\tau^{-1}\!-\!L_{\!f}}{2}\|x^{k+1}\!-\!x^k\|^2
+(\tau^{-1}\!+\!L_{\!f})\beta_k\|x^{k+1}\!-\!x^k\|\|x^{k}\!-\!x^{k-1}\|\\
&\le F_{\sigma,\gamma}(x^k) -\frac{\tau^{-1}\!-\!L_{\!f}}{4}\|x^{k+1}\!-\!x^k\|^2
+\frac{(\tau^{-1}\!+\!L_{\!f})^2}{\tau^{-1}\!-\!L_{\!f}}\beta_k^2\|x^{k}\!-\!x^{k-1}\|^2\\
&\le F_{\sigma,\gamma}(x^k) -\frac{\tau^{-1}\!-\!L_{\!f}}{4}\|x^{k+1}\!-\!x^k\|^2
+\frac{\varsigma}{4\tau}\|x^{k}\!-\!x^{k-1}\|^2\\
&=F_{\sigma,\gamma}(x^k) -\frac{(1\!-\!\varsigma)\tau^{-1}\!-\!L_{\!f}}{4}\|x^{k+1}\!-\!x^k\|^2
-\frac{\varsigma}{4\tau}\|x^{k+1}\!-\!x^k\|^2+\frac{\varsigma}{4\tau}\|x^{k}\!-\!x^{k-1}\|^2
\end{align*}
where the second inequality is due to $ab\le\frac{a^2}{4s}+sb^2$ with $a=\|x^{k+1}\!-\!x^k\|$,
$b=(\tau^{-1}\!+\!L_{\!f})\beta_k\|x^{k}\!-\!x^{k-1}\|$ and $s=\frac{1}{\tau^{-1}-L_{\!f}}>0$,
and the last one is due to $\beta_k\le\beta_{\rm max}\le\frac{\sqrt{\varsigma(\tau^{-1}-L_{\!f})\tau^{-1}}}{2(\tau^{-1}+L_{\!f})}$.
Combining the last inequality with the definition of $H_{\tau,\varsigma}$,
we obtain the result.
\noindent
{\bf(ii)} Note that $H_{\tau,\varsigma}$ is lower bounded since $F_{\sigma,\gamma}$ is.
By part (i), the sequence $\{H_{\tau,\varsigma}(x^{k},x^{k-1})\}_{k\in\mathbb{N}}$ is
nonincreasing and hence convergent, and consequently, $\sum_{k=1}^{\infty}\|x^{k+1}\!-\!x^k\|^2<\infty$
follows by using part (i) again.
\noindent
{\bf(iii)} From the definition of $H_{\tau,\varsigma}$ and \cite[Exercise 8.8]{RW98},
for any $(x,u)\in\mathcal{S}\times\mathbb{R}^n$,
\begin{equation}\label{subdiff1}
\partial H_{\tau,\varsigma}(x,u)
=\left(\begin{matrix}
\partial F_{\sigma,\gamma}(x)+\frac{1}{2}\tau^{-1}\varsigma(x-u)\\
\frac{1}{2}\tau^{-1}\varsigma(u-x)
\end{matrix}\right).
\end{equation}
Fix any $k \in \mathbb{N}$. By the optimality of $x^{k+1}$ to the nonconvex problem \eqref{xk-def},
it follows that
\[
0\in \nabla\!f_{\sigma,\gamma}(\widetilde{x}^k)+\tau^{-1}(x^{k+1}\!-\widetilde{x}^{k})+\partial g_{\lambda}(x^{k+1}),
\]
which is equivalent to
\(
\nabla\!f_{\sigma,\gamma}(x^{k+1})\!-\!\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k)
-\tau^{-1}(x^{k+1}\!-\!\widetilde{x}^{k})\in \partial F_{\sigma,\gamma}(x^{k+1}).
\)
Write
\[
w^{k}:=\left(\begin{matrix}
\nabla\!f_{\sigma,\gamma}(x^{k})-\nabla\!f_{\sigma,\gamma}(\widetilde{x}^{k-1})
-\tau^{-1}(x^{k}-\widetilde{x}^{k-1})+\frac{1}{2}\tau^{-1}\varsigma(x^{k}-x^{k-1})\\
\frac{1}{2}\tau^{-1}\varsigma(x^{k-1}-x^{k})
\end{matrix}\right).
\]
By comparing with \eqref{subdiff1}, we have $w^{k}\in\partial H_{\tau,\varsigma}(x^{k},x^{k-1})$.
From the Lipschitz continuity of $\nabla\!f_{\sigma,\gamma}$ and Step 1,
$\|w^{k+1}\|\le (\tau^{-1}\!+\!L_{\!f}\!+\!\tau^{-1}\varsigma)\|x^{k+1}\!-\!x^k\|
+(\tau^{-1}\!+\!L_{\!f})\beta_{\rm max}\|x^{k}\!-\!x^{k-1}\|$.
Since $\beta_{\rm max}\in(0,1)$, the result holds
with $b_1=\tau^{-1}\!+\!L_{\!f}\!+\!\tau^{-1}\varsigma$ and $b_2=\tau^{-1}\!+\!L_{\!f}$.
\end{proof}
\begin{lemma}\label{lemma2-PGMe1}
Let $\{x^k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm \ref{PGMe1}
and denote by $\varpi(x^0)$ the set of accumulation points of $\{x^k\}_{k\in\mathbb{N}}$.
Then, the following assertions hold:
\begin{itemize}
\item [(i)] $\varpi(x^0)$ is a nonempty compact set and $\varpi(x^0)\subseteq
S_{\tau,g_{\lambda}}\subseteq{\rm crit}F_{\sigma,\gamma}$;
\item [(ii)] $\lim_{k\to\infty}{\rm dist}((x^k,x^{k-1}),\Omega)=0$ with
$\Omega:=\{(x,x)\,|\,x\in\varpi(x^0)\}\subseteq{\rm crit}H_{\tau,\varsigma}$;
\item[(iii)] the function $H_{\tau,\varsigma}$ is finite and constant on the set $\Omega$.
\end{itemize}
\end{lemma}
\begin{proof}
{\bf (i)} Since $\{x^k\}_{k\in\mathbb{N}}\subseteq \mathcal{S}$, we have $\varpi(x^0)\ne\emptyset$.
Since $\varpi(x^0)$ can be viewed as an intersection of compact sets, i.e.,
$\varpi(x^0)=\bigcap_{q\in\mathbb{N}}\overline{\bigcup_{k\ge q}\{x^k\}}$, it is also compact.
Now pick any $x^*\in\varpi(x^0)$. There exists a subsequence $\{x^{k_j}\}_{j\in\mathbb{N}}$
with $x^{k_j}\rightarrow x^*$ as $j\rightarrow \infty$. Note that
$\lim_{j\to\infty}\|x^{k_j}-x^{k_j-1}\|=0$ by Lemma \ref{lemma1-PGMe1} (ii).
Then, $x^{k_j-1}\rightarrow x^*$ and $x^{k_j+1}\rightarrow x^*$ as $j\rightarrow \infty$.
Recall that $\widetilde{x}^{k_j}=x^{k_j}+\beta_{k_j}(x^{k_j}-x^{k_j-1})$
and $\beta_{k_j}\in[0,\beta_{\rm max}]$. When $j\rightarrow \infty$,
we have $\widetilde{x}^{k_j}\rightarrow x^*$ and then
$\widetilde{x}^{k_j}\!-\!\tau\nabla\!f_{\sigma,\gamma}(\widetilde{x}^{k_j})
\rightarrow x^*\!-\!\tau\nabla\!f_{\sigma,\gamma}(x^*)$.
In addition, since $g_{\lambda}$ is proximally bounded with threshold $+\infty$,
i.e., for any $\tau'>0$ and $x\in\mathbb{R}^n$,
$\min_{z\in\mathbb{R}^n}\big\{\frac{1}{2\tau'}\|z-x\|^2+g_{\lambda}(z)\big\}>-\infty$,
from \cite[Example 5.23]{RW98} it follows that $\mathcal{P}_{\!\tau}g_{\lambda}$
is outer semicontinuous. Thus, from $x^{k_j+1}\in\mathcal{P}_{\!\tau}g_{\lambda}(\widetilde{x}^{k_j}\!-\!\tau\nabla\!f_{\sigma,\gamma}(\widetilde{x}^{k_j}))$
for each $j\in\mathbb{N}$, we have $x^*\in\mathcal{P}_{\!\tau}g_{\lambda}(x^*\!-\!\tau\nabla\!f_{\sigma,\gamma}(x^*))$,
and then $x^*\in S_{\tau,g_{\lambda}}$. By the arbitrariness of $x^*\in\varpi(x^0)$,
the first inclusion follows. The second inclusion is given by Lemma \ref{relation-critical}.
\noindent
{\bf (ii)-(iii)} The result of part (ii) is immediate, so it suffices to prove part (iii).
By Lemma \ref{lemma1-PGMe1} (i), the sequence $\{H_{\tau,\varsigma}(x^k,x^{k-1})\}_{k\in\mathbb{N}}$
is convergent and denote its limit by $\omega^*$. Pick any $(x^*,x^*)\in\Omega$.
There exists a subsequence $\{x^{k_j}\}_{j\in\mathbb{N}}$ with $x^{k_j}\rightarrow x^*$
as $j\rightarrow \infty$. If $\lim_{j \rightarrow \infty}H_{\tau,\varsigma}(x^{k_j},x^{k_j-1})
=H_{\tau,\varsigma}(x^*,x^*)$, then the convergence of $\{H_{\tau,\varsigma}(x^k,x^{k-1})\}_{k\in\mathbb{N}}$
implies that $H_{\tau,\varsigma}(x^*,x^*)=\omega^*$, which by the arbitrariness of
$(x^*,x^*)\in\Omega$ shows that the function $H_{\tau,\varsigma}$ is finite and
constant on $\Omega$. Hence, it suffices to argue that
$\lim_{j \rightarrow \infty}H_{\tau,\varsigma}(x^{k_j},x^{k_j-1})=H_{\tau,\varsigma}(x^*,x^*)$.
Recall that $\lim_{j\to\infty}\|x^{k_j}-x^{k_j-1}\|=0$ by Lemma \ref{lemma1-PGMe1} (ii).
We only need to argue that $\lim_{j \rightarrow \infty}F_{\sigma,\gamma}(x^{k_j})=F_{\sigma,\gamma}(x^*)$.
From \eqref{xk-def}, it holds that
\[
\langle\nabla\!f_{\sigma,\gamma}(\widetilde{x}^{k_j-1}),x^{k_j}-x^*\rangle
+\frac{1}{2\tau}\|x^{k_j}-\widetilde{x}^{k_j-1}\|^2+g_{\lambda}(x^{k_j})
\le \frac{1}{2\tau}\|x^*-\widetilde{x}^{k_j-1}\|^2+g_{\lambda}(x^*).
\]
Together with the inequality \eqref{fdecent} with $x'=x^{k_j}$ and $x=x^*$, we obtain that
\begin{align*}
F_{\sigma,\gamma}(x^{k_j})
&\le F_{\sigma,\gamma}(x^*)+\langle\nabla\!f_{\sigma,\gamma}(x^*)-\nabla\!f_{\sigma,\gamma}(\widetilde{x}^{k_j-1}),x^{k_j}-x^*\rangle
-\frac{1}{2\tau}\|x^{k_j}-\widetilde{x}^{k_j-1}\|^2\\
&\quad +\frac{1}{2\tau}\|x^*-\widetilde{x}^{k_j-1}\|^2+\frac{L_{\!f}}{2}\|x^{k_j}-x^*\|^2,
\end{align*}
which by $\lim_{j\rightarrow \infty}x^{k_j}=x^*=\lim_{j\rightarrow \infty}x^{k_j-1}$ implies that
$\limsup_{j\rightarrow\infty}F_{\sigma,\gamma}(x^{k_j})\le F_{\sigma,\gamma}(x^*)$.
In addition, by the lower semicontinuity of $F_{\sigma,\gamma}$,
$\liminf_{j\rightarrow\infty}F_{\sigma,\gamma}(x^{k_j})\ge F_{\sigma,\gamma}(x^*)$.
Combining the two inequalities shows that $\lim_{j \rightarrow \infty}F_{\sigma,\gamma}(x^{k_j})=F_{\sigma,\gamma}(x^*)$.
The proof is then completed.
\end{proof}
Since $f_{\sigma,\gamma}$ is a piecewise linear-quadratic function, it is semi-algebraic.
Recall that the zero-norm is semi-algebraic. Hence, $F_{\sigma,\gamma}$ and $H_{\tau,\varsigma}$
are also semi-algebraic, and therefore KL functions. By using Lemmas \ref{lemma1-PGMe1}-\ref{lemma2-PGMe1}
and following the same arguments as those for \cite[Theorem 1]{Bolte14} and \cite[Theorem 2]{Attouch09},
we can establish the following convergence results.
\begin{theorem}\label{global-conv}
Let $\{x^k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm \ref{PGMe1}.
Then,
\begin{itemize}
\item[(i)] $\sum_{k=1}^{\infty}\|x^{k+1}-x^k\|<\infty$ and consequently $\{x^k\}_{k\in \mathbb{N}}$
converges to some $x^*\in S_{\tau,g_{\lambda}}$.
\item[(ii)] If $F_{\sigma,\gamma}$ is a KL function of exponent $1/2$,
then there exist $c_1>0$ and $\varrho\in(0,1)$ such that
for all sufficiently large $k$, $\|x^k-x^*\|\le c_1\varrho^k$.
\end{itemize}
\end{theorem}
\begin{proof}
{\bf(i)} For each $k\in\mathbb{N}$, write $z^k\!:=(x^k,x^{k-1})$.
Since $\{x^k\}_{k\in\mathbb{N}}$ is bounded, there exists a subsequence $\{x^{k_q}\}_{q\in\mathbb{N}}$
with $x^{k_q}\to\overline{x}$ as $q\to\infty$. By the proof of Lemma \ref{lemma2-PGMe1} (iii),
$\lim_{k\to\infty}H_{\tau,\varsigma}(z^k)=H_{\tau,\varsigma}(\overline{z})$ with
$\overline{z}=(\overline{x},\overline{x})$. If there exists $\overline{k}\in\mathbb{N}$
such that $H_{\tau,\varsigma}(z^{\overline{k}})=H_{\tau,\varsigma}(\overline{z})$,
by Lemma \ref{lemma1-PGMe1} (i) we have $x^{k}=x^{\overline{k}}$ for all $k\ge\overline{k}$
and the result follows. Thus, it suffices to consider that
$H_{\tau,\varsigma}(z^{k})>H_{\tau,\varsigma}(\overline{z})$ for all $k\in\mathbb{N}$.
Since $\lim_{k\to\infty}H_{\tau,\varsigma}(z^k)=H_{\tau,\varsigma}(\overline{z})$,
for any $\eta>0$ there exists $k_0\in\mathbb{N}$ such that for all $k\ge k_0$,
$H_{\tau,\varsigma}(z^k)< H_{\tau,\varsigma}(\overline{z})+\eta$.
In addition, from Lemma \ref{lemma2-PGMe1} (ii),
for any $\varepsilon>0$ there exists $k_1\in\mathbb{N}$ such that for all $k\ge k_1$,
${\rm dist}(z^k,\Omega)<\varepsilon$. Then, for all $k\ge\overline{k}:=\max(k_0,k_1)$,
\[
z^k\in\big\{z\,|\,{\rm dist}(z,\Omega)<\varepsilon\big\}
\cap[H_{\tau,\varsigma}(\overline{z})<H_{\tau,\varsigma}<H_{\tau,\varsigma}(\overline{z})+\eta].
\]
By combining Lemma \ref{lemma2-PGMe1} (iii) and \cite[Lemma 6]{Bolte14},
there exist $\varepsilon>0$, $\eta>0$ and a continuous concave function
$\varphi\!:[0,\eta)\to\mathbb{R}_{+}$ satisfying the conditions in Definition \ref{KL-def}
such that for all $\overline{z}\in\Omega$ and
all $z\in\big\{z\,|\,{\rm dist}(z,\Omega)<\varepsilon\big\}
\cap[H_{\tau,\varsigma}(\overline{z})<H_{\tau,\varsigma}<H_{\tau,\varsigma}(\overline{z})+\eta]$,
\[
\varphi'(H_{\tau,\varsigma}(z)-H_{\tau,\varsigma}(\overline{z}))
{\rm dist}(0,\partial H_{\tau,\varsigma}(z))\ge 1.
\]
Consequently, for all $k>\overline{k}$,
$\varphi'(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z})){\rm dist}(0,\partial H_{\tau,\varsigma}(z^k))\ge 1$.
By Lemma \ref{lemma1-PGMe1} (iii), there exists $w^{k}\in\partial H_{\tau,\varsigma}(z^k)$
with $\|w^k\|\le b_1\|x^{k}-x^{k-1}\|+b_2\|x^{k-1}-x^{k-2}\|$. Then,
\[
\varphi'(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z}))\|w^k\|\ge 1.
\]
Together with the concavity of $\varphi$ and Lemma \ref{lemma1-PGMe1} (i), it follows that
for all $k>\overline{k}$,
\begin{align*}
& [\varphi(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z}))-
\varphi(H_{\tau,\varsigma}(z^{k+1})-H_{\tau,\varsigma}(\overline{z}))]\|w^k\|\\
&\ge \varphi'(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z}))
[H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(z^{k+1})]\|w^k\|\\
&\ge H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(z^{k+1})\ge a\|x^{k+1}-x^k\|^2
\ \ {\rm with}\ a={\varsigma(\tau^{-1}\!-\!L_{\!f})}/{2}.
\end{align*}
For each $k\in\mathbb{N}$, let $\Delta_k:=\varphi(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z}))-
\varphi(H_{\tau,\varsigma}(z^{k+1})-H_{\tau,\varsigma}(\overline{z}))$. For all $k>\overline{k}$,
\begin{align*}
2\|x^{k+1}-x^k\|&\le 2\sqrt{a^{-1}\Delta_k\|w^k\|}
\le 2\sqrt{a^{-1}\Delta_k[b_1\|x^{k}\!-\!x^{k-1}\|+b_2\|x^{k-1}\!-\!x^{k-2}\|]}\\
&\le \frac{1}{2}\big(\|x^{k}-x^{k-1}\|+\|x^{k-1}-x^{k-2}\|\big)+2a^{-1}\max(b_1,b_2)\Delta_k,
\end{align*}
where the second inequality is due to $2\sqrt{st}\le s/2+2t$ for any $s,t\ge 0$.
For any $\nu>k>\overline{k}$, summing the last inequality from $k$ to $\nu$ yields that
\begin{equation}\label{use-ineq2}
\sum_{j=k}^{\nu}\|x^{j+1}\!-\!x^j\|\le \|x^k-x^{k-1}\|+\frac{1}{2}\|x^{k-1}\!-\!x^{k-2}\|
+\frac{2\max(b_1,b_2)}{a}\varphi(H_{\tau,\varsigma}(z^{k})-H_{\tau,\varsigma}(\overline{z})).
\end{equation}
Passing to the limit $\nu\to\infty$ in the last inequality, we obtain the desired result.
\noindent
{\bf(ii)} Since $F_{\sigma,\gamma}$ is a KL function of exponent $1/2$,
by \cite[Theorem 3.6]{LiPong18} and the expression of $H_{\tau,\varsigma}$,
it follows that $H_{\tau,\varsigma}$ is also a KL function of exponent $1/2$.
From the arguments for part (i) with $\varphi(t)=c\sqrt{t}$ for $t\ge 0$
and Lemma \ref{lemma1-PGMe1} (iii), for all $k\ge\overline{k}$ it holds that
\[
\sqrt{H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z})}
\le\frac{c}{2}{\rm dist}(0,\partial H_{\tau,\varsigma}(z^k))
\le\frac{c}{2}[b_1\|x^{k}-x^{k-1}\|+b_2\|x^{k-1}-x^{k-2}\|].
\]
Consequently, $\varphi(H_{\tau,\varsigma}(z^k)-H_{\tau,\varsigma}(\overline{z}))
\le \frac{c^2}{2}[b_1\|x^{k}-x^{k-1}\|+b_2\|x^{k-1}-x^{k-2}\|]$.
Together with the inequality \eqref{use-ineq2}, by letting $c'=c^2a^{-1}[\max(b_1,b_2)]^2$,
for any $\nu>k>\overline{k}$ we have
\[
{\textstyle\sum_{j=k}^{\nu}}\|x^{j+1}-x^j\|\le (1+c')\|x^k-x^{k-1}\|+(1/2+c')\|x^{k-1}-x^{k-2}\|.
\]
For each $k\in\mathbb{N}$, let $\Delta_k:=\sum_{j=k}^{\infty}\|x^{j+1}-x^j\|$.
Passing to the limit $\nu\to +\infty$ in this inequality, we obtain
\(
\Delta_k\le (1+c')[\Delta_{k-1}-\Delta_{k}]+(1/2+c')[\Delta_{k-2}-\Delta_{k-1}]
\le (1+c')[\Delta_{k-2}-\Delta_{k}],
\)
which means that $\Delta_k\le\varrho\Delta_{k-2}$ for $\varrho=\frac{1+c'}{2+c'}$.
The result follows by this recursion.
\end{proof}
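The last step of the proof can be made concrete: the recursion $\Delta_k\le\varrho\,\Delta_{k-2}$ with $\varrho\in(0,1)$ forces $\Delta_k\le c_1(\sqrt{\varrho})^k$ for a suitable $c_1>0$, and $\|x^k-x^*\|\le\Delta_k$ by the triangle inequality. The toy check below (illustrative numbers only, not the paper's data) verifies this for the worst case of equality in the recursion:

```python
# Any nonnegative sequence with Delta_k <= rho * Delta_{k-2} decays
# geometrically with ratio sqrt(rho) per step, the rate in Theorem (ii).
rho = 0.8
delta = [1.0, 0.9]
for k in range(2, 40):
    delta.append(rho * delta[k - 2])          # worst case: equality
ratio = rho ** 0.5
c1 = max(delta[0], delta[1] / ratio)
assert all(delta[k] <= c1 * ratio ** k + 1e-12 for k in range(40))
print("Delta_k <= c1 * rho^(k/2) holds for the sample recursion")
```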
It is worthwhile to point out that by Lemmas \ref{lemma1-PGMe1}-\ref{lemma2-PGMe1}
and the proof of Lemma \ref{lemma2-PGMe1} (iii), applying \cite[Theorem 10]{Ochs19}
directly yields $\sum_{k=1}^{\infty}\|x^{k+1}-x^k\|<\infty$.
Here, we include the proof just for the convergence rate analysis in
Theorem \ref{global-conv} (ii). Notice that Theorem \ref{global-conv} (ii) requires
the KL property of exponent $1/2$ for the function $F_{\sigma,\gamma}$. The following
lemma shows that $F_{\sigma,\gamma}$ indeed possesses this property under a mild condition.
\begin{lemma}\label{KL-exponent}
If any $\overline{x}\in\!{\rm crit}F_{\sigma,\gamma}$ has
$\Gamma(\overline{x})\!=\emptyset$, then $F_{\sigma,\gamma}$ is a KL function of exponent $0$.
\end{lemma}
\begin{proof}
Write $\widetilde{f}_{\sigma,\gamma}(x):=f_{\sigma,\gamma}(x)+\delta_{\mathcal{S}}(x)$
for $x\in\mathbb{R}^n$. For any $x\in\mathcal{S}$, by \cite[Exercise 8.8]{RW98},
\begin{equation}\label{subdiff-Theta}
\partial\!\widetilde{f}_{\sigma,\gamma}(x)=\nabla\!f_{\sigma,\gamma}(x)+\mathcal{N}_{\mathcal{S}}(x)
=A^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)+\mathcal{N}_{\mathcal{S}}(x).
\end{equation}
Fix any $\overline{x}\in{\rm crit}F_{\sigma,\gamma}$ with $\Gamma(\overline{x})=\emptyset$.
Let $J:={\rm supp}(\overline{x}),I_{0}\!:=\{i\in[m]\,|\, (A\overline{x})_i>0\}$,
$I_{1}\!:=\{i\in[m]\,|\, (A\overline{x})_i<-(\sigma\!+\!\gamma)\}$
and $I_{2}\!:=\!\{i\in[m]\,|\, -\sigma\!+\!\gamma<(A\overline{x})_i<-\gamma\}$.
Since $\Gamma(\overline{x})=\emptyset$, we have $I_{0}\cup I_{1}\cup I_{2}=[m]$. Moreover, by continuity,
there exists $\varepsilon'>0$ such that for all $x\in\mathbb{B}(\overline{x},\varepsilon')$,
${\rm supp}(x)\supseteq J$ and the following inequalities hold:
\begin{equation}\label{temp-indx}
(Ax)_i>0\ {\rm for}\ i\in I_{0},\ (Ax)_i<-(\sigma\!+\!\gamma)\ {\rm for}\ i\in I_{1}\ {\rm and}\
\gamma\!-\!\sigma<(Ax)_i<-\gamma\ {\rm for}\ i\in I_{2}.
\end{equation}
By the continuity of $f_{\sigma,\gamma}$,
there exists $\varepsilon''>0$ such that for all $x\in\mathbb{B}(\overline{x},\varepsilon'')$,
$f_{\sigma,\gamma}(x)>f_{\sigma,\gamma}(\overline{x})-\lambda/2$.
Set $\varepsilon=\min(\varepsilon',\varepsilon'')$ and pick any $\eta\in(0,\lambda/4]$.
Next we argue that
\[
\mathbb{B}(\overline{x},\varepsilon)
\cap[F_{\sigma,\gamma}(\overline{x})<F_{\sigma,\gamma}<F_{\sigma,\gamma}(\overline{x})+\eta]=\emptyset,
\]
which by Definition \ref{KL-def} implies that $F_{\sigma,\gamma}$
is a KL function of exponent $0$. Suppose to the contrary that
there exists $x\in\mathbb{B}(\overline{x},\varepsilon)
\cap[F_{\sigma,\gamma}(\overline{x})<F_{\sigma,\gamma}<F_{\sigma,\gamma}(\overline{x})+\eta]$.
From $F_{\sigma,\gamma}(x)<F_{\sigma,\gamma}(\overline{x})+\eta$, we have $x\in\mathcal{S}$.
Together with ${\rm supp}(x)\supseteq J$, we deduce that ${\rm supp}(x)=J$ (otherwise,
$f_{\sigma,\gamma}(x)+\lambda\|\overline{x}\|_0+\lambda\le F_{\sigma,\gamma}(x)
<f_{\sigma,\gamma}(\overline{x})+\lambda\|\overline{x}\|_0+\eta$,
which along with $f_{\sigma,\gamma}(x)>f_{\sigma,\gamma}(\overline{x})-\lambda/2$ implies
$\eta>\lambda/2$, a contradiction to $\eta\le\lambda/4$).
Now from $x\in\mathcal{S}$, equation \eqref{temp-indx},
the expression of $\vartheta_{\sigma,\gamma}$ and $I_{0}\cup I_{1}\cup I_{2}=[m]$,
it follows that
\begin{equation}\label{Phi-gamma}
0<F_{\sigma,\gamma}(x)-F_{\sigma,\gamma}(\overline{x})
=L_{\sigma,\gamma}(Ax)-L_{\sigma,\gamma}(A\overline{x})={\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i].
\end{equation}
Recall that $[\nabla\!L_{\sigma,\gamma}(Ax')]_{I_{2}}=-e$ and $[\nabla\!L_{\sigma,\gamma}(Ax')]_{I_{0}\cup I_{1}}=0$
for both $x'=x$ and $x'=\overline{x}$. Hence,
\begin{align}\label{temp-equa31}
\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2
=\|A_{I_{2}J}^{\mathbb{T}}e\|^2,\
\langle \nabla\!L_{\sigma,\gamma}(Ax),Ax\rangle^2
=[{\textstyle\sum_{i\in I_{2}}}(Ax)_i]^2,\\
\label{temp-equa32}
\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})\|^2
=\|A_{I_{2}J}^{\mathbb{T}}e\|^2,\
\langle \nabla\!L_{\sigma,\gamma}(A\overline{x}),A\overline{x}\rangle^2
=[{\textstyle\sum_{i\in I_{2}}}(A\overline{x})_i]^2.
\end{align}
By comparing \eqref{subdiff-Theta} with Lemma \ref{lemma-critical} (ii), we have
$\partial F_{\sigma,\gamma}(x) =\partial\!\widetilde{f}_{\sigma,\gamma}(x)+\lambda\partial\|x\|_0$.
Since ${\rm supp}(x)=J$, we also have $\partial\|x\|_0=\{v\in\mathbb{R}^n\,|\,v_i=0\ {\rm for}\ i\in J\}$.
Then, it holds that
\begin{align}\label{temp-equa33}
{\rm dist}^2(0, \partial F_{\sigma,\gamma}(x))
&=\min_{u\in\partial\!\widetilde{f}_{\sigma,\gamma}(x),v\in \lambda\partial\|x\|_0}\|u+v\|^2
=\min_{u\in\partial\!\widetilde{f}_{\sigma,\gamma}(x)}\|u_{J}\|^2\nonumber\\
&=\min_{\alpha\in\mathbb{R}}\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)+\alpha x_{J}\|^2\nonumber\\
&=\min_{\alpha\in\mathbb{R}}\alpha^2+2\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle\alpha+\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2\nonumber\\
&=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle^2.
\end{align}
Since $0\in\partial F_{\sigma,\gamma}(\overline{x})=\nabla\!f_{\sigma,\gamma}(\overline{x})
+\mathcal{N}_{\mathcal{S}}(\overline{x})+\lambda\partial\|\overline{x}\|_0$,
from the expression of $\partial\|\overline{x}\|_0$ we have
\[
A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})=\overline{\alpha}\,\overline{x}_{J}
\ \ {\rm with}\ \ \overline{\alpha}=\langle A\overline{x},\nabla\!L_{\sigma,\gamma}(A\overline{x})\rangle.
\]
Together with the equations \eqref{temp-equa31}-\eqref{temp-equa33}, it immediately follows that
\begin{align*}
0&\le {\rm dist}^2(0, \partial F_{\sigma,\gamma}(x))
=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle^2
-\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})-\overline{\alpha}\,\overline{x}_{J}\|^2\\
&=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})\|^2
+\big[\langle\nabla\!L_{\sigma,\gamma}(A\overline{x}),A\overline{x}\rangle^2
\!-\!\langle \nabla\!L_{\sigma,\gamma}(Ax),Ax\rangle^2\big]\nonumber\\
&={\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i]\cdot{\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i+(Ax)_i].
\end{align*}
Since ${\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i+(Ax)_i]<0$,
the above display implies that ${\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i]\le 0$,
which contradicts the inequality \eqref{Phi-gamma}.
The proof is then completed.
\end{proof}
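The key algebraic step behind \eqref{temp-equa33}, namely that $\min_{\alpha\in\mathbb{R}}\|v+\alpha u\|^2=\|v\|^2-\langle v,u\rangle^2$ for a unit vector $u$ (attained at $\alpha=-\langle v,u\rangle$ by completing the square), can be checked numerically. The following sketch uses illustrative names and a brute-force grid search; it is not part of the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_sq(v, u):
    # min over alpha of ||v + alpha*u||^2 for a unit vector u:
    # expanding gives alpha^2 + 2<v,u>alpha + ||v||^2, minimized at
    # alpha = -<v,u>, with minimum value ||v||^2 - <v,u>^2.
    return float(np.dot(v, v) - np.dot(v, u) ** 2)

v = rng.standard_normal(8)
u = rng.standard_normal(8)
u /= np.linalg.norm(u)      # normalize so that ||u|| = 1

# brute-force minimum over a fine grid of alpha values
alphas = np.linspace(-20.0, 20.0, 200001)
V = v[None, :] + alphas[:, None] * u[None, :]
brute = float((V * V).sum(axis=1).min())
```

The closed-form value agrees with the grid minimum up to the grid resolution, and is nonnegative by the Cauchy--Schwarz inequality, consistent with ${\rm dist}^2(0,\partial F_{\sigma,\gamma}(x))\ge 0$.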
\begin{remark}\label{remark2-PGMe1}
By the definition of $\Gamma(\overline{x})$, when $\gamma$ is small enough,
the condition $\Gamma(\overline{x})=\emptyset$ is very likely to hold, in which case
$F_{\sigma,\gamma}$ is a KL function of exponent $0$.
\end{remark}
\subsection{PG with extrapolation for solving \eqref{Esurrogate}}\label{sec4.2}
By the proof of Lemma \ref{lemma-critical}, $\Xi_{\sigma,\gamma}$ is a smooth function
and $\nabla\Xi_{\sigma,\gamma}$ is globally Lipschitz continuous with
Lipschitz constant $L_{\Xi}\le \gamma^{-1}\|A\|^2+\!\lambda\rho^2\max(\frac{a+1}{2},\frac{a+1}{2(a-1)})$,
while by Proposition \ref{proxm-hlam}, the proximal mapping of $h_{\lambda,\rho}$ has a closed form.
This motivates us to apply the PG method with extrapolation to solve the problem \eqref{Esurrogate}.
\begin{algorithm}[H]
\caption{\label{PGMe2}{\bf (PGe-scad for solving the problem \eqref{Esurrogate})}}
\textbf{Initialization:} Choose $\varsigma\in(0,1),0<\tau<(1\!-\!\varsigma)L_{\Xi}^{-1},0<\beta_{\rm max}\le{\frac{\sqrt{\varsigma(\tau^{-1}-L_{\Xi})\tau^{-1}}}{2(\tau^{-1}+L_{\Xi})}}$ and
an initial point $x^0\in\mathcal{S}$. Set $x^{-1}=x^{0}$ and $k:=0$.
\noindent
\textbf{while} the termination condition is not satisfied \textbf{do}
\begin{itemize}
\item[{\bf1.}] Let $\widetilde{x}^k=x^{k}+\beta_k(x^k-x^{k-1})$ and compute $x^{k+1}\in\mathcal{P}_{\!\tau}h_{\lambda,\rho}(\widetilde{x}^k\!-\!\tau\nabla\Xi_{\sigma,\gamma}(\widetilde{x}^k))$.
\item[{\bf2.}] Choose $\beta_{k+1}\in[0,\beta_{\rm max}]$. Let $k\leftarrow k+1$ and go to Step {\bf 1}.
\end{itemize}
\textbf{end (while)}
\end{algorithm}
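The template of Algorithm \ref{PGMe2} (extrapolate, then take a proximal gradient step) can be sketched as follows. This is only an illustration: the SCAD-type proximal mapping $\mathcal{P}_{\!\tau}h_{\lambda,\rho}$ is replaced by the soft-thresholding operator of the $\ell_1$ norm, a fixed extrapolation weight is used instead of the rule \eqref{accelerate}, and the toy problem, `soft` and `pg_extrapolation` are all hypothetical names, not the paper's PGe-scad.

```python
import numpy as np

def soft(z, t):
    # soft-thresholding: prox of t*||.||_1, a simple stand-in for
    # the closed-form prox of h_{lambda,rho}
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def pg_extrapolation(grad, prox, L, x0, beta=0.235, iters=2000, tol=1e-8):
    """PG with a fixed extrapolation weight and step tau < 1/L,
    mimicking Steps 1-2 of the algorithm."""
    tau = 0.9 / L
    x_prev = x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)            # extrapolation step
        x_new = prox(y - tau * grad(y), tau)   # proximal gradient step
        if np.linalg.norm(x_new - y) <= tol:   # termination test
            return x_new
        x_prev, x = x, x_new
    return x

# toy instance: f(x) = 0.5*||Ax - b||^2 with a sparse ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120))
x_true = np.zeros(120); x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
b = A @ x_true
lam = 0.1
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad f
x_hat = pg_extrapolation(lambda x: A.T @ (A @ x - b),
                         lambda z, t: soft(z, t * lam),
                         L, np.zeros(120))
```

On this convex toy problem the iterates drive the objective well below its starting value; the nonconvex setting of the paper requires the potential-function analysis of Lemma \ref{lemma1-PGMe2}.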
As in Algorithm \ref{PGMe1}, the extrapolation parameter
$\beta_k$ in Algorithm \ref{PGMe2} can be chosen according to
the rule in \eqref{accelerate}. For any $\tau>0$ and $\varsigma\in(0,1)$,
we define the potential function
\begin{equation}\label{Upsion}
\Upsilon_{\!\tau,\varsigma}(x,u):=G_{\sigma,\gamma,\rho}(x)+\frac{\varsigma}{4\tau}\|x-u\|^2
\quad\ \forall (x,u)\in\mathbb{R}^n\times\mathbb{R}^n.
\end{equation}
Then, by following the same arguments as those for Lemmas \ref{lemma1-PGMe1} and \ref{lemma2-PGMe1},
we can establish the following properties of $\Upsilon_{\!\tau,\varsigma}$ along
the sequence $\{x^k\}_{k\in\mathbb{N}}$ generated by Algorithm \ref{PGMe2}.
\begin{lemma}\label{lemma1-PGMe2}
Let $\{x^k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm \ref{PGMe2}
and denote by $\pi(x^0)$ the set of accumulation points of $\{x^k\}_{k\in\mathbb{N}}$.
Then, the following assertions hold.
\begin{itemize}
\item[(i)] For each $k\in\mathbb{N}$,
\(
\Upsilon_{\!\tau,\varsigma}(x^{k+1},x^k)\le \Upsilon_{\!\tau,\varsigma}(x^{k},x^{k-1})
-\frac{\varsigma(\tau^{-1}-L_{\Xi})}{2}\|x^{k+1}\!-\!x^k\|^2.
\)
Consequently, $\{\Upsilon_{\!\tau,\varsigma}(x^{k},x^{k-1})\}_{k\in\mathbb{N}}$ is convergent
and $\sum_{k=1}^{\infty}\|x^{k+1}\!-\!x^k\|^2<\infty$.
\item[(ii)] For each $k\in\mathbb{N}$, there exists $w^{k}\in\partial\Upsilon_{\!\tau,\varsigma}(x^{k},x^{k-1})$
with $\|w^{k+1}\|\le b_1'\|x^{k+1}\!-\!x^k\|+b_2'\|x^k\!-\!x^{k-1}\|$, where
$b_1'>0$ and $b_2'>0$ are constants independent of $k$.
\item [(iii)] $\pi(x^0)$ is a nonempty compact set and $\pi(x^0)\subseteq S_{\tau,h_{\lambda,\rho}}$.
\item [(iv)] $\lim_{k\to\infty}{\rm dist}((x^k,x^{k-1}),\pi(x^0)\times \pi(x^0))=0$,
and $\Upsilon_{\!\tau,\varsigma}$ is finite and constant
on the set $\pi(x^0)\times \pi(x^0)$.
\end{itemize}
\end{lemma}
By using Lemma \ref{lemma1-PGMe2} and following the same arguments
as those for Theorem \ref{global-conv}, it is not difficult to obtain
the following convergence results for Algorithm \ref{PGMe2}.
\begin{theorem}\label{global2-conv}
Let $\{x^k\}_{k\in\mathbb{N}}$ be the sequence generated by Algorithm \ref{PGMe2}.
Then,
\begin{itemize}
\item[(i)] $\sum_{k=1}^{\infty}\|x^{k+1}-x^k\|<\infty$ and consequently $\{x^k\}_{k\in \mathbb{N}}$
converges to some $x^*\in S_{\tau,h_{\lambda,\rho}}$.
\item[(ii)] If $G_{\sigma,\gamma,\rho}$ is a KL function of exponent $1/2$,
then there exist $c_2>0$ and $\varrho\in(0,1)$ such that
for all sufficiently large $k$, $\|x^k-x^*\|\le c_2\varrho^k$.
\end{itemize}
\end{theorem}
Theorem \ref{global2-conv} (ii) requires that $G_{\sigma,\gamma,\rho}$ be a KL function
of exponent $1/2$. We next show that this indeed holds under a slightly
stronger condition than the one used in Lemma \ref{KL-exponent}.
\begin{lemma}\label{KL-exponent1}
If $\lambda$ and $\rho$ are chosen with
$\lambda\rho\!>\!{\displaystyle\max_{z\in{\rm crit}G_{\sigma,\gamma,\rho}}}\|\nabla\!f_{\sigma,\gamma}(z)\|_\infty$
and all $\overline{x}\!\in{\rm crit}G_{\sigma,\gamma,\rho}$ satisfy
$\Gamma(\overline{x})=\emptyset$ and $|\overline{x}|_{\rm nz}>\frac{2a}{\rho(a-1)}$,
then $G_{\sigma,\gamma,\rho}$ is a KL function of exponent $0$.
\end{lemma}
\begin{proof}
Fix any $\overline{x}\in{\rm crit}G_{\sigma,\gamma,\rho}$ with $\Gamma(\overline{x})=\emptyset$
and $|\overline{x}|_{\rm nz}>\frac{2a}{\rho(a-1)}$. Let $J={\rm supp}(\overline{x})$
and $\overline{J}=[n]\backslash J$. Let $\theta_{\!\rho}$ be the function in the proof
of Lemma \ref{lemma-critical}. Since $[\nabla\theta_{\!\rho}(\overline{x})]_{\overline{J}}=0$,
the given assumption means that $\|[\nabla\!f_{\sigma,\gamma}(\overline{x})\!-\!\lambda\rho\nabla\theta_{\!\rho}(\overline{x})]_{\overline{J}}\|_\infty<\lambda\rho$.
By continuity, there exists $\delta_0>0$ such that for all $x\in\mathbb{B}(\overline{x},\delta_0)$,
\(
\|[\nabla\!f_{\sigma,\gamma}(x)\!-\!\lambda\rho\nabla\theta_{\!\rho}(x)]_{\overline{J}}\|_\infty<\lambda\rho.
\)
Let $I_0,I_1$ and $I_2$ be same as
in the proof of Lemma \ref{KL-exponent}. Then, there exists
$\delta_1>0$ such that for all $x\in\mathbb{B}(\overline{x},\delta_1)$,
${\rm supp}(x)\supseteq J$ and the relations in \eqref{temp-indx} hold.
By continuity, there exists $\delta_2>0$ such that
for all $x\in\mathbb{B}(\overline{x},\delta_2)$, $|x_i|>\frac{2a}{\rho(a-1)}$
for $i\in{\rm supp}(x)$ and
\begin{equation}\label{temp-equaXi}
\Xi_{\sigma,\gamma}(x)+\lambda\rho{\textstyle\sum_{i\in J}}|x_i|
>\Xi_{\sigma,\gamma}(\overline{x})+\lambda\rho{\textstyle\sum_{i\in J}}|\overline{x}_i|-{a\lambda}/{(a\!-\!1)}.
\end{equation}
Set $\delta=\min(\delta_0,\delta_1,\delta_2)$. Pick any $\eta\in(0,\frac{a\lambda}{2(a-1)})$.
Next we argue that
$\mathbb{B}(\overline{x},\delta)
\cap[G_{\sigma,\gamma,\rho}(\overline{x})<G_{\sigma,\gamma,\rho}<G_{\sigma,\gamma,\rho}(\overline{x})+\eta]=\emptyset$,
which by Definition \ref{KL-def} implies that $G_{\sigma,\gamma,\rho}$
is a KL function of exponent $0$. Suppose to the contrary that
there exists $x\in\mathbb{B}(\overline{x},\delta)
\cap[G_{\sigma,\gamma,\rho}(\overline{x})<G_{\sigma,\gamma,\rho}<G_{\sigma,\gamma,\rho}(\overline{x})+\eta]$.
From $G_{\sigma,\gamma,\rho}(x)<G_{\sigma,\gamma,\rho}(\overline{x})+\eta$, we have $x\in\mathcal{S}$,
which along with ${\rm supp}(x)\supseteq J$ implies that ${\rm supp}(x)=J$ (otherwise,
we would have $\Xi_{\sigma,\gamma}(x)+\lambda\rho\sum_{i\in J}|x_i|
+\lambda\rho\sum_{i\in{\rm supp}(x)\backslash J}|x_i|<G_{\sigma,\gamma,\rho}(x)
<\Xi_{\sigma,\gamma}(\overline{x})+\lambda\rho\sum_{i\in J}|\overline{x}_i|+\eta$,
which along with \eqref{temp-equaXi} and $|x_i|>\frac{2a}{\rho(a-1)}$ for
$i\in{\rm supp}(x)\backslash J$ implies that $\eta>\frac{a\lambda}{a-1}$,
a contradiction to $\eta<\frac{a\lambda}{2(a-1)}$). Now from ${\rm supp}(x)=J$
and $|x_i|>\frac{2a}{\rho(a-1)}$ for $i\in{\rm supp}(x)$, it is not hard to
verify that $\|x\|_1-\theta_{\!\rho}(x)=\|\overline{x}\|_1-\theta_{\!\rho}(\overline{x})$.
Together with $x\in\mathcal{S}$ and the expression of $L_{\sigma,\gamma}$, we have
\begin{equation}\label{G-gamma}
0<G_{\sigma,\gamma,\rho}(x)-G_{\sigma,\gamma,\rho}(\overline{x})
=L_{\sigma,\gamma}(Ax)-L_{\sigma,\gamma}(A\overline{x})
={\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i].
\end{equation}
Moreover, the equalities in \eqref{temp-equa31}-\eqref{temp-equa32} still hold for $x$.
Let $\widetilde{f}_{\sigma,\gamma}$ be same as in the proof of Lemma \ref{KL-exponent}.
Clearly, $\partial G_{\sigma,\gamma,\rho}(x)=\partial\!\widetilde{f}_{\sigma,\gamma}(x)
+\lambda\rho[\partial\|x\|_1-\nabla\theta_{\!\rho}(x)]$. Then, it holds that
\[
{\rm dist}^2(0, \partial G_{\sigma,\gamma,\rho}(x))
=\min_{u\in\partial\!\widetilde{f}_{\sigma,\gamma}(x),v\in\lambda\rho[\partial\|x\|_1-\nabla\theta_{\!\rho}(x)]}\|u+v\|^2.
\]
Notice that $\partial\!\widetilde{f}_{\sigma,\gamma}(x)=\{\nabla\!f_{\sigma,\gamma}(x)+\alpha x\,|\,\alpha\in\mathbb{R}\},
\|[\nabla\!f_{\sigma,\gamma}(x)\!-\!\lambda\rho\nabla\theta_{\!\rho}(x)]_{\overline{J}}\|_\infty<\lambda\rho$
and $v_{\overline{J}}\in[-\lambda\rho,\lambda\rho]-\lambda\rho[\nabla\theta_{\!\rho}(x)]_{\overline{J}}$.
From the last equation, it follows that
\begin{align}\label{temp-equa34}
{\rm dist}^2(0, \partial G_{\sigma,\gamma,\rho}(x))
&=\min_{u\in\partial\!\widetilde{f}_{\sigma,\gamma}(x)}\|u_{J}\|^2
=\min_{\alpha\in\mathbb{R}}\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)+\alpha x_{J}\|^2\nonumber\\
&=\min_{\alpha\in\mathbb{R}}\alpha^2+2\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle\alpha+\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2\nonumber\\
&=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle^2.
\end{align}
Since $0\in\partial G_{\sigma,\gamma,\rho}(\overline{x})=\nabla\!f_{\sigma,\gamma}(\overline{x})
+\mathcal{N}_{\mathcal{S}}(\overline{x})+\lambda\rho[\partial\|\overline{x}\|_1-\nabla\theta_{\rho}(\overline{x})]$
and $|\overline{x}|_{\rm nz}>\frac{2a}{\rho(a-1)}$, by the proof of Lemma \ref{lemma-critical} (iii)
and $\mathcal{N}_{\mathcal{S}}(\overline{x})=\{\alpha\overline{x}\,|\,\alpha\in\mathbb{R}\}$,
there exists $\overline{\alpha}\in\mathbb{R}$ such that
$A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})=\overline{\alpha}\,\overline{x}_{J}$
with $\overline{\alpha}=\langle A\overline{x},\nabla\!L_{\sigma,\gamma}(A\overline{x})\rangle$.
Together with \eqref{temp-equa31}-\eqref{temp-equa32} and \eqref{temp-equa34},
\begin{align*}
0&\le{\rm dist}^2(0, \partial G_{\sigma,\gamma,\rho}(x))
=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\langle Ax,\nabla\!L_{\sigma,\gamma}(Ax)\rangle^2
-\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})-\overline{\alpha}\,\overline{x}_{J}\|^2\\
&=\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(Ax)\|^2-\|A_{J}^{\mathbb{T}}\nabla\!L_{\sigma,\gamma}(A\overline{x})\|^2
+\big[\langle\nabla\!L_{\sigma,\gamma}(A\overline{x}),A\overline{x}\rangle^2
\!-\!\langle \nabla\!L_{\sigma,\gamma}(Ax),Ax\rangle^2\big]\nonumber\\
&={\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i]\cdot{\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i+(Ax)_i].
\end{align*}
Since ${\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i+(Ax)_i]<0$,
the above display implies that ${\textstyle\sum_{i\in I_{2}}}[(A\overline{x})_i-(Ax)_i]\le 0$,
which contradicts the inequality \eqref{G-gamma}.
The proof is then completed.
\end{proof}
\section{Numerical experiments}\label{sec5}
In this section, we demonstrate the performance of the zero-norm regularized DC loss model \eqref{znorm-Moreau} and its surrogate \eqref{Esurrogate}, which are solved with PGe-znorm and PGe-scad, respectively. All numerical experiments are performed in MATLAB on a laptop running 64-bit Windows with an Intel(R) Core(TM) i7-7700HQ CPU (2.80GHz) and 16 GB RAM.
The MATLAB package for reproducing all the numerical results can be found at \url{https://github.com/SCUT-OptGroup/onebit}.
\subsection{Experiment setup}\label{sec5.1}
The setup of our experiments is similar to the one in \cite{Yan12,HuangJ18}.
Specifically, we generate the original $s^*$-sparse signal $x^{\rm true}$ with
the support $T$ chosen uniformly from $\{1,2,\ldots,n\}$ and $(x^{\rm true})_{T}$
taking the form of ${\xi}/{\|\xi\|}$, where the entries of $\xi\in\mathbb{R}^{s^*}$
are drawn from the standard normal distribution. Then, we obtain the observation vector $b$
via \eqref{observation}, where the sampling matrix $\Phi\in\mathbb{R}^{m\times n}$
is generated in one of two ways: (I) the rows of $\Phi$ are i.i.d.\ samples of $N(0,\Sigma)$
with $\Sigma_{ij}=\mu^{|i-j|}$ for $i,j\in[n]$; (II) the entries of $\Phi$ are i.i.d.\
and follow the standard normal distribution. The noise $\varepsilon\in\mathbb{R}^m$
is generated from $N(0,\varpi^2I)$, and the entries of $\zeta$ are set by
$\mathbb{P}(\zeta_i=1)=1-\mathbb{P}(\zeta_i=-1)=1-r$. In the sequel, we denote
the corresponding data by the two triples $(m,n,s^*)$ and $(\mu,\varpi,r)$,
where $\mu$ is the correlation factor,
$\varpi$ is the noise level and $r$ is the sign flip ratio.
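The data-generation procedure described above can be sketched as follows. The function `gen_one_bit_data` and its variable names are illustrative (the paper's experiments are in MATLAB); only the type-I sampling matrix is shown, and a small ridge is added before the Cholesky factorization for numerical safety.

```python
import numpy as np

def gen_one_bit_data(m, n, s_star, mu=0.3, varpi=0.1, r=0.05, seed=0):
    """Synthetic one-bit CS data following the described setup."""
    rng = np.random.default_rng(seed)
    # s*-sparse unit-norm ground truth with uniformly chosen support T
    T = rng.choice(n, size=s_star, replace=False)
    x_true = np.zeros(n)
    xi = rng.standard_normal(s_star)
    x_true[T] = xi / np.linalg.norm(xi)
    # type-I sampling matrix: rows i.i.d. N(0, Sigma), Sigma_ij = mu^|i-j|
    idx = np.arange(n)
    Sigma = mu ** np.abs(idx[:, None] - idx[None, :])
    C = np.linalg.cholesky(Sigma + 1e-12 * np.eye(n))
    Phi = rng.standard_normal((m, n)) @ C.T
    # noisy signs with flip ratio r: zeta_i = -1 with probability r
    eps = varpi * rng.standard_normal(m)
    zeta = np.where(rng.random(m) < r, -1.0, 1.0)
    b = zeta * np.sign(Phi @ x_true + eps)
    return Phi, b, x_true, set(T)
```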
We evaluate the quality of an output $x^{\rm sol}$ of a solver in terms of
the mean squared error (MSE), the Hamming error (Herr), the ratio of missing support (FNR)
and the ratio of misidentified support (FPR), defined as follows:
\begin{align*}
{\rm MSE}:=\|x^{\rm sol}\!-\!x^{\rm true}\|,\
{\rm Herr}:=\dfrac{1}{m}\|{\rm sign}(\Phi x^{\rm sol})-{\rm sign}(\Phi x^{\rm true})\|_0,\\
{\rm FNR}:= \frac{|T\backslash {\rm supp}(x^{\rm sol})|}{|T|}\ \ {\rm and}\ \
{\rm FPR}:= \frac{|{\rm supp}(x^{\rm sol})\backslash T|}{n-|T|},\qquad
\end{align*}
where, in our numerical experiments, a component of a vector $z\in\mathbb{R}^n$
is regarded as nonzero when its absolute value is larger than $10^{-5}\|z\|_\infty$.
Clearly, a solver performs better if its output has smaller MSE,
${\rm Herr}$, {\rm FNR} and {\rm FPR}.
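The four metrics above translate directly into code. The following sketch (illustrative names, not the paper's MATLAB package) implements them, including the $10^{-5}\|z\|_\infty$ threshold for deciding which components count as nonzero.

```python
import numpy as np

def support(z, tol_ratio=1e-5):
    # a component counts as nonzero when |z_i| > 1e-5 * ||z||_inf,
    # as specified in the text; the zero vector has empty support
    m = np.max(np.abs(z))
    return set(np.flatnonzero(np.abs(z) > tol_ratio * m)) if m > 0 else set()

def metrics(x_sol, x_true, Phi):
    """MSE, Herr, FNR and FPR of an output x_sol against x_true."""
    T, S = support(x_true), support(x_sol)
    mse = float(np.linalg.norm(x_sol - x_true))
    herr = float(np.mean(np.sign(Phi @ x_sol) != np.sign(Phi @ x_true)))
    fnr = len(T - S) / len(T)                  # missing support ratio
    fpr = len(S - T) / (Phi.shape[1] - len(T)) # misidentified support ratio
    return mse, herr, fnr, fpr
```

For an exact recovery all four metrics are zero, and a completely misplaced support gives ${\rm FNR}=1$.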
\subsection{Implementation of PGe-znorm and PGe-scad}\label{sec5.2}
From the definition of $x^{k+1}$ in PGe-znorm and PGe-scad,
we have $x^{k+1}
\in\mathcal{P}_{\!\tau}g_{\lambda}(\widetilde{x}^k\!-\!\tau\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k))$
and $x^{k+1}
\in\mathcal{P}_{\!\tau}h_{\lambda,\rho}(\widetilde{x}^k\!-\!\tau\nabla\!f_{\sigma,\gamma}(\widetilde{x}^k))$, respectively.
Together with the expressions of $\mathcal{P}_{\!\tau}g_{\lambda}$ and $\mathcal{P}_{\!\tau}h_{\lambda,\rho}$,
when $\|x^{k+1}\!-\!\widetilde{x}^k\|$ is small enough,
$\widetilde{x}^k$ can be viewed as an approximate $\tau$-stationary point.
Hence, we terminate PGe-znorm and PGe-scad at the iterate $x^k$
once $\|x^{k+1}-\widetilde{x}^k\|\le 10^{-6}$ or $k\ge2000$. In addition, we also terminate
the two algorithms at $x^k$ when $\frac{|F_{\sigma,\gamma}(x^{k-j})-F_{\sigma,\gamma}(x^{k-j-1})|}
{\max(1,F_{\sigma,\gamma}(x^{k-j}))}\le 10^{-10}$ for $k\ge 100$ and $j=0,1,\ldots,9$.
The extrapolation parameters $\beta_k$ in the two algorithms are chosen by \eqref{accelerate}
with $\beta_{\rm max}=0.235$. The starting point $x^0$ of PGe-znorm and PGe-scad
is always chosen to be ${e^{\mathbb{T}}A}/{\|e^{\mathbb{T}}A\|}$.
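The relative-change stopping rule described above can be sketched as a small predicate. The function name and interface are illustrative; it checks that the last ten consecutive objective differences are all below the tolerance, mirroring the $j=0,1,\ldots,9$ condition for $k\ge 100$.

```python
def stagnated(F_vals, k, tol=1e-10, window=10, k_min=100):
    """Relative-change stagnation test over the last `window` consecutive
    pairs of objective values (a sketch of the rule in the text)."""
    if k < k_min or len(F_vals) < window + 1:
        return False
    return all(
        abs(F_vals[-j - 1] - F_vals[-j - 2]) / max(1.0, F_vals[-j - 1]) <= tol
        for j in range(window)
    )
```

A flat objective history triggers the test once $k\ge 100$, while a still-decreasing history does not.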
\subsection{Choice of the model parameters}\label{sec5.3}
The model \eqref{znorm-Moreau} and its surrogate \eqref{Esurrogate} involve
the parameters $\lambda>0,\rho>0$ and $0<\gamma<\sigma/2$. Based on
Figures \ref{fig_gam} and \ref{fig_rho}, we choose $\gamma=0.05$ and $\rho=10$
for the subsequent tests. To choose an appropriate $\sigma>2\gamma$,
we generate the original signal $x^{\rm true}$, the sampling matrix $\Phi$ of type I
and the observation $b$ with $(m,n,s^*,r)=(500,1000,5,1.0)$,
and then solve the model \eqref{znorm-Moreau} associated to $\gamma=0.05,\lambda=10$ for
each $\sigma\in\{0.2,0.4,\ldots,3\}$ with PGe-znorm and the model \eqref{Esurrogate}
associated to $\gamma=0.05,\lambda=5,\rho=10$ for each $\sigma\in\{0.2,0.4,\ldots,3\}$ with PGe-scad.
Figure \ref{fig_sig} plots the average MSE of $50$ trials for each $\sigma$.
We see that $\sigma\in[0.6,1.2]$ is a desirable choice, so we choose $\sigma=0.8$
for the two models in the subsequent experiments.
\begin{figure}
\caption{Influence of $\sigma$ on the performance of the models \eqref{znorm-Moreau} and \eqref{Esurrogate}}
\label{fig_sig}
\end{figure}
Next we take a closer look at the influence of $\lambda$ on the models \eqref{znorm-Moreau}
and \eqref{Esurrogate}. To this end, we generate the signal $x^{\rm true}$,
the sampling matrix $\Phi$ of type I, and the observation $b$ with $(m,n,s^*)=(500,1000,5)$
and $(\mu,\varpi)=(0.3,0.1)$, and then solve the model \eqref{znorm-Moreau} associated to
$\sigma=0.8,\gamma=0.05$ for each $\lambda\in\{1,3,5,\ldots,49\}$ with PGe-znorm and
solve the model \eqref{Esurrogate} associated to $\sigma=0.8,\gamma=0.05,\rho=10$
for each $\lambda\in\{0.5,1.5,2.5,\ldots,24.5\}$ with PGe-scad.
Figure \ref{fig_lam} plots the average MSE of $50$ trials for each $\lambda$.
When $r=0.15$, the MSE from the model \eqref{znorm-Moreau} has a small variation
for $\lambda\in[7,49]$, while the MSE from the model \eqref{Esurrogate} has
a small variation for $\lambda\in[3,24.5]$. When $r=0.05$, the MSE from the model
\eqref{znorm-Moreau} has a small variation for $\lambda\in[5,49]$ and is relatively
low for $\lambda\in[5,16]$, while the MSE from the model \eqref{Esurrogate} has a tiny
change for $\lambda\in[1.5,24.5]$. In view of this, we always choose $\lambda=8$
for the model \eqref{znorm-Moreau}, and choose $\lambda=4$ and $\lambda=8$ for the model \eqref{Esurrogate}
with $n\le 5000$ and $n>5000$, respectively, in the subsequent experiments.
\begin{figure}
\caption{Influence of $\lambda$ on the MSE from the models \eqref{znorm-Moreau} and \eqref{Esurrogate}}
\label{fig_lam}
\end{figure}
\subsection{Numerical comparisons}\label{sec5.4}
We compare PGe-znorm and PGe-scad with six state-of-the-art solvers, which are
BIHT-AOP \cite{Yan12}, PIHT \cite{HuangY18}, PIHT-AOP \cite{HuangS18}, GPSP \cite{Zhou21}
(\url{https://github.com/ShenglongZhou/GPSP}), PDASC \cite{HuangJ18} and WPDASC \cite{Fan21}.
Among others, the codes for BIHT-AOP, PIHT and PIHT-AOP can be found at \url{http://www.esat.kuleuven.be/stadius/ADB/huang/downloads/1bitCSLab.zip} and
the codes for PDASC and WPDASC can be found at \url{https://github.com/cjia80/numericalSimulation}.
It is worth pointing out that BIHT-AOP, GPSP and PIHT-AOP all require
estimates of $s^*$ and $r$ as inputs, and PIHT requires an estimate
of $s^*$, while PDASC, WPDASC, PGe-znorm and PGe-scad do not need
such prior information. For the solvers requiring estimates of $s^*$
and $r$, we directly input the true sparsity $s^*$ and flip ratio $r$, as those papers do.
During the testing, PGe-znorm and PGe-scad use the parameters described above,
and the other solvers use their default settings, except that PIHT is terminated
once its number of iterations exceeds $100$.
We first apply the eight solvers to solving the test problems with the sampling matrix
of type I and low noise. Table \ref{table1} reports their average MSE, Herr, FNR, FPR
and CPU time over $50$ trials. We see that among the four solvers requiring no
information on $x^{\rm true}$, PGe-scad and PGe-znorm yield lower MSE, {\rm Herr}
and FNR than PDASC and WPDASC, and PGe-scad is the best in terms of MSE, Herr and FNR;
among the four solvers requiring some information on $x^{\rm true}$,
BIHT-AOP and PIHT-AOP yield smaller MSE and {\rm Herr} than PIHT and GPSP,
and also yield lower FNR and FPR under the scenario of $r=0.05$.
When comparing PGe-scad with BIHT-AOP and PIHT-AOP, the former yields smaller
MSE, Herr, FNR and FPR under the scenario of $r=0.15$, and yields comparable
MSE, Herr and FNR under the scenario of $r=0.05$.
\begin{table}[!h] \centering \tiny \captionsetup{font={scriptsize}} \caption{Numerical comparisons of eight solvers for test problems with $\Phi$ of type I and low noise} \label{table1} \resizebox{\textwidth}{35mm}{
\begin{tabular}{|llllllllllllllll|} \hline
\multicolumn{16}{|c|}{$m=800,n=2000,s^*=10,\varpi=0.1,{\bf r=0.05}$} \\ \hline
\multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{$\mu=0.1$}&\multicolumn{5}{c|}{$\mu=0.3$}&\multicolumn{5}{c|}{$\mu=0.5$} \\ \hline
\multicolumn{1}{|c|}{solvers}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}& \multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)} \\ \hline
\multicolumn{1}{|c|}{PIHT}&\multicolumn{1}{c|}{2.57e-1}&\multicolumn{1}{c|}{6.80e-2}&\multicolumn{1}{c|}{3.26e-1}&\multicolumn{1}{c|}{1.64e-3}&\multicolumn{1}{c|}{1.55e-1}& \multicolumn{1}{c|}{2.75e-1}&\multicolumn{1}{c|}{7.27e-2}&\multicolumn{1}{c|}{3.52e-1}&\multicolumn{1}{c|}{1.77e-3}&\multicolumn{1}{c|}{1.57e-1}&
\multicolumn{1}{c|}{3.52e-1}&\multicolumn{1}{c|}{9.22e-2}&\multicolumn{1}{c|}{4.24e-1}&\multicolumn{1}{c|}{2.13e-3}&\multicolumn{1}{c|}{1.58e-1}\\ \hline
\multicolumn{1}{|c|}{BIHT-AOP}&\multicolumn{1}{c|}{\color{red}1.46e-1}&\multicolumn{1}{c|}{\color{red}4.36e-2}&\multicolumn{1}{c|}{\color{red}1.94e-1}&\multicolumn{1}{c|}{\color{red}9.75e-4}&\multicolumn{1}{c|}{5.30e-1}&
\multicolumn{1}{c|}{\color{red}1.32e-1}&\multicolumn{1}{c|}{\color{red}3.85e-2}&\multicolumn{1}{c|}{\color{red}1.80e-1}&\multicolumn{1}{c|}{\color{red}9.05e-4}&\multicolumn{1}{c|}{5.47e-1}&
\multicolumn{1}{c|}{1.46e-1}&\multicolumn{1}{c|}{4.18e-2}&\multicolumn{1}{c|}{2.06e-1}&\multicolumn{1}{c|}{1.04e-3}&\multicolumn{1}{c|}{5.45e-1}\\ \hline
\multicolumn{1}{|c|}{PIHT-AOP}&\multicolumn{1}{c|}{1.61e-1}&\multicolumn{1}{c|}{4.67e-2}&\multicolumn{1}{c|}{2.06e-1}&\multicolumn{1}{c|}{1.04e-3}&\multicolumn{1}{c|}{1.81e-1}& \multicolumn{1}{c|}{1.55e-1}&\multicolumn{1}{c|}{4.60e-2}&\multicolumn{1}{c|}{1.90e-1}&\multicolumn{1}{c|}{9.55e-4}&\multicolumn{1}{c|}{1.92e-1}&
\multicolumn{1}{c|}{\color{red}1.40e-1}&\multicolumn{1}{c|}{\color{red}4.17e-2}&\multicolumn{1}{c|}{\color{red}2.02e-1}&\multicolumn{1}{c|}{\color{red}1.02e-3}&\multicolumn{1}{c|}{1.86e-1}\\ \hline
\multicolumn{1}{|c|}{GPSP}&\multicolumn{1}{c|}{1.91e-1}&\multicolumn{1}{c|}{5.02e-2}&\multicolumn{1}{c|}{2.56e-1}&\multicolumn{1}{c|}{1.29e-3}&\multicolumn{1}{c|}{1.60e-2}
&\multicolumn{1}{c|}{1.87e-1}&\multicolumn{1}{c|}{4.83e-2}&\multicolumn{1}{c|}{2.40e-1}&\multicolumn{1}{c|}{1.21e-3}&\multicolumn{1}{c|}{1.83e-2}&
\multicolumn{1}{c|}{1.89e-1}&\multicolumn{1}{c|}{4.78e-2}&\multicolumn{1}{c|}{2.52e-1}&\multicolumn{1}{c|}{1.27e-3}&\multicolumn{1}{c|}{2.31e-2}\\ \hline\hline
\multicolumn{1}{|c|}{\bf PGe-scad}&\multicolumn{1}{c|}{2.15e-1}&\multicolumn{1}{c|}{6.70e-2}&\multicolumn{1}{c|}{\color{red}3.34e-1}&\multicolumn{1}{c|}{\color{red}0}&\multicolumn{1}{c|}{2.29e-1}&
\multicolumn{1}{c|}{\color{red}2.04e-1}&\multicolumn{1}{c|}{\color{red}6.36e-2}&\multicolumn{1}{c|}{\color{red}3.32e-1}&\multicolumn{1}{c|}{\color{red}0}&\multicolumn{1}{c|}{2.79e-1}&
\multicolumn{1}{c|}{\color{red}2.10e-1}&\multicolumn{1}{c|}{\color{red}6.42e-2}&\multicolumn{1}{c|}{\color{red}3.44e-1}&\multicolumn{1}{c|}{\color{red}1.01e-5}&\multicolumn{1}{c|}{2.82e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PGe-znorm}&\multicolumn{1}{c|}{\color{red}2.10e-1}&\multicolumn{1}{c|}{\color{red}6.52e-2}&\multicolumn{1}{c|}{3.58e-1}&\multicolumn{1}{c|}{2.01e-5}&\multicolumn{1}{c|}{1.19e-1}&
\multicolumn{1}{c|}{2.10e-1}&\multicolumn{1}{c|}{6.41e-2}&\multicolumn{1}{c|}{3.62e-1}&\multicolumn{1}{c|}{4.02e-5}&\multicolumn{1}{c|}{1.24e-1}&
\multicolumn{1}{c|}{2.22e-1}&\multicolumn{1}{c|}{6.82e-2}&\multicolumn{1}{c|}{3.72e-1}&\multicolumn{1}{c|}{3.02e-5}&\multicolumn{1}{c|}{1.26e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PDASC}&\multicolumn{1}{c|}{4.29e-1}&\multicolumn{1}{c|}{1.34e-1}&\multicolumn{1}{c|}{5.94e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{6.04e-2}&
\multicolumn{1}{c|}{4.27e-1}&\multicolumn{1}{c|}{1.33e-1}&\multicolumn{1}{c|}{5.92e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{5.98e-2}&
\multicolumn{1}{c|}{4.53e-1}&\multicolumn{1}{c|}{1.37e-1}&\multicolumn{1}{c|}{6.08e-1}&\multicolumn{1}{c|}{1.01e-5}&\multicolumn{1}{c|}{5.93e-2}\\ \hline
\multicolumn{1}{|c|}{\bf WPDASC}&\multicolumn{1}{c|}{4.38e-1}&\multicolumn{1}{c|}{1.37e-1}&\multicolumn{1}{c|}{6.02e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{9.31e-2}&
\multicolumn{1}{c|}{4.20e-1}&\multicolumn{1}{c|}{1.30e-1}&\multicolumn{1}{c|}{5.78e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{9.32e-2}&
\multicolumn{1}{c|}{3.97e-1}&\multicolumn{1}{c|}{1.20e-1}&\multicolumn{1}{c|}{5.56e-1}&\multicolumn{1}{c|}{1.01e-5}&\multicolumn{1}{c|}{9.69e-2}\\ \hline
\multicolumn{16}{|c|}{$m=800,n=2000,s^*=10,\varpi=0.1,{\bf r=0.15}$} \\ \hline
\multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{$\mu=0.1$}&\multicolumn{5}{c|}{$\mu=0.3$}&\multicolumn{5}{c|}{$\mu=0.5$} \\ \hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)} \\ \hline
\multicolumn{1}{|c|}{PIHT}&\multicolumn{1}{c|}{4.10e-1}&\multicolumn{1}{c|}{1.13e-1}&\multicolumn{1}{c|}{4.04e-1}&\multicolumn{1}{c|}{2.03e-3}&\multicolumn{1}{c|}{1.61e-1}& \multicolumn{1}{c|}{4.01e-1}&\multicolumn{1}{c|}{1.08e-1}&\multicolumn{1}{c|}{\color{red}2.90e-1}&\multicolumn{1}{c|}{1.96e-3}&\multicolumn{1}{c|}{1.57e-1}&
\multicolumn{1}{c|}{4.18e-1}&\multicolumn{1}{c|}{1.12e-1}&\multicolumn{1}{c|}{4.02e-1}&\multicolumn{1}{c|}{2.02e-1}&\multicolumn{1}{c|}{1.60e-1}\\ \hline
\multicolumn{1}{|c|}{BIHT-AOP}&\multicolumn{1}{c|}{3.77e-1}&\multicolumn{1}{c|}{1.04e-1}&\multicolumn{1}{c|}{4.10e-1}&\multicolumn{1}{c|}{2.06e-3}&\multicolumn{1}{c|}{5.41e-1}&
\multicolumn{1}{c|}{3.74e-1}&\multicolumn{1}{c|}{\color{red}9.88e-2}&\multicolumn{1}{c|}{4.16e-1}&\multicolumn{1}{c|}{2.09e-3}&\multicolumn{1}{c|}{5.51e-1}&
\multicolumn{1}{c|}{\color{red}3.56e-1}&\multicolumn{1}{c|}{\color{red}9.49e-2}&\multicolumn{1}{c|}{3.94e-1}&\multicolumn{1}{c|}{1.98e-3}&\multicolumn{1}{c|}{5.38e-1}\\ \hline
\multicolumn{1}{|c|}{PIHT-AOP}&\multicolumn{1}{c|}{\color{red}3.48e-1}&\multicolumn{1}{c|}{\color{red}9.80e-2}&\multicolumn{1}{c|}{\color{red}3.82e-1}&\multicolumn{1}{c|}{\color{red}1.92e-3}&\multicolumn{1}{c|}{1.85e-1}&
\multicolumn{1}{c|}{\color{red}3.70e-1}&\multicolumn{1}{c|}{1.01e-1}&\multicolumn{1}{c|}{4.10e-1}&\multicolumn{1}{c|}{2.06e-3}&\multicolumn{1}{c|}{1.91e-1}&
\multicolumn{1}{c|}{3.65e-1}&\multicolumn{1}{c|}{9.68e-2}&\multicolumn{1}{c|}{4.08e-1}&\multicolumn{1}{c|}{2.05e-3}&\multicolumn{1}{c|}{1.87e-1}\\ \hline
\multicolumn{1}{|c|}{GPSP}&\multicolumn{1}{c|}{3.90e-1}&\multicolumn{1}{c|}{1.05e-1}&\multicolumn{1}{c|}{3.86e-1}&\multicolumn{1}{c|}{1.94e-3}&\multicolumn{1}{c|}{1.86e-2}&
\multicolumn{1}{c|}{3.73e-1}&\multicolumn{1}{c|}{1.01e-1}&\multicolumn{1}{c|}{3.76e-1}&\multicolumn{1}{c|}{\color{red}1.89e-3}&\multicolumn{1}{c|}{2.04e-2}&
\multicolumn{1}{c|}{3.63e-1}&\multicolumn{1}{c|}{9.31e-2}&\multicolumn{1}{c|}{\color{red}3.74e-1}&\multicolumn{1}{c|}{\color{red}1.88e-3}&\multicolumn{1}{c|}{2.48e-2}\\ \hline\hline
\multicolumn{1}{|c|}{\bf PGe-scad}&\multicolumn{1}{c|}{\color{red}2.72e-1}&\multicolumn{1}{c|}{\color{red}8.54e-2}&\multicolumn{1}{c|}{\color{red}3.98e-1}&\multicolumn{1}{c|}{1.51e-4}&\multicolumn{1}{c|}{2.31e-1}&
\multicolumn{1}{c|}{\color{red}2.78e-1}&\multicolumn{1}{c|}{\color{red}8.67e-2}&\multicolumn{1}{c|}{\color{red}3.90e-1}&\multicolumn{1}{c|}{1.81e-4}&\multicolumn{1}{c|}{2.83e-1}&
\multicolumn{1}{c|}{\color{red}2.83e-1}&\multicolumn{1}{c|}{\color{red}8.39e-2}&\multicolumn{1}{c|}{\color{red}4.02e-1}&\multicolumn{1}{c|}{1.91e-4}&\multicolumn{1}{c|}{2.63e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PGe-znorm}&\multicolumn{1}{c|}{3.33e-1}&\multicolumn{1}{c|}{9.82e-2}&\multicolumn{1}{c|}{4.20e-1}&\multicolumn{1}{c|}{8.24e-4}&\multicolumn{1}{c|}{1.34e-1}&
\multicolumn{1}{c|}{3.31e-1}&\multicolumn{1}{c|}{9.64e-2}&\multicolumn{1}{c|}{4.16e-1}&\multicolumn{1}{c|}{9.45e-3}&\multicolumn{1}{c|}{1.34e-1}&
\multicolumn{1}{c|}{3.42e-1}&\multicolumn{1}{c|}{9.56e-2}&\multicolumn{1}{c|}{4.14e-1}&\multicolumn{1}{c|}{9.55e-4}&\multicolumn{1}{c|}{1.50e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PDASC}&\multicolumn{1}{c|}{5.63e-1}&\multicolumn{1}{c|}{1.80e-1}&\multicolumn{1}{c|}{6.88e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{5.08e-2}&
\multicolumn{1}{c|}{5.89e-1}&\multicolumn{1}{c|}{1.85e-1}&\multicolumn{1}{c|}{7.12e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{5.18e-2}&
\multicolumn{1}{c|}{5.58e-1}&\multicolumn{1}{c|}{1.73e-1}&\multicolumn{1}{c|}{6.90e-1}&\multicolumn{1}{c|}{4.02e-5}&\multicolumn{1}{c|}{5.18e-2}\\ \hline
\multicolumn{1}{|c|}{\bf WPDASC}&\multicolumn{1}{c|}{5.40e-1}&\multicolumn{1}{c|}{1.71e-1}&\multicolumn{1}{c|}{6.72e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{7.93e-2}&
\multicolumn{1}{c|}{5.87e-1}&\multicolumn{1}{c|}{1.83e-1}&\multicolumn{1}{c|}{7.10e-1}&\multicolumn{1}{c|}{1.01e-5}&\multicolumn{1}{c|}{8.32e-2}&
\multicolumn{1}{c|}{5.63e-1}&\multicolumn{1}{c|}{1.75e-1}&\multicolumn{1}{c|}{6.98e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{8.08e-2}\\ \hline \end{tabular}} \end{table}
Next we use the eight solvers to solve the test problems with the sampling matrix
of type I and high noise. Table \ref{table2} reports the average MSE, Herr, FNR,
FPR and CPU time over $50$ trials. Among the four solvers requiring partial
information on $x^{\rm true}$, GPSP yields the smallest MSE, Herr, FNR
and FPR, while among the four solvers requiring no information on $x^{\rm true}$,
PGe-scad remains the best one. Moreover, for the problems with $r=0.15$,
PGe-scad yields smaller MSE, Herr and FNR than GPSP does.
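For reference, the support-recovery metrics discussed above can be computed as in the following sketch. The precise definitions of MSE, FNR and FPR are assumed from context (FNR/FPR as missed/spurious support fractions, MSE as a normalized $\ell_2$ error) and may differ in normalization from the paper's.

```python
import numpy as np

def recovery_metrics(x_hat, x_true):
    """Support-recovery and estimation metrics (assumed definitions).

    FNR: fraction of true-support indices missed by x_hat.
    FPR: fraction of off-support indices wrongly selected by x_hat.
    MSE: here, the l2 estimation error normalized by ||x_true||.
    """
    S_true = set(np.flatnonzero(x_true))
    S_hat = set(np.flatnonzero(x_hat))
    n = x_true.size
    fnr = len(S_true - S_hat) / max(len(S_true), 1)
    fpr = len(S_hat - S_true) / max(n - len(S_true), 1)
    mse = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    return mse, fnr, fpr
```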
\begin{table}[!h] \centering \tiny \captionsetup{font={scriptsize}} \caption{Numerical comparisons of eight solvers for test problems with $\Phi$ of type I and high noise} \label{table2} \resizebox{\textwidth}{35mm}{
\begin{tabular}{|llllllllllllllll|} \hline
\multicolumn{16}{|c|}{$m=1000,n=5000,s^*=15,\varpi=0.3,{\bf r=0.05}$} \\ \hline
\multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{$\mu=0.1$}&\multicolumn{5}{c|}{$\mu=0.3$}&\multicolumn{5}{c|}{$\mu=0.5$} \\ \hline
\multicolumn{1}{|c|}{solvers}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}& \multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)} \\ \hline
\multicolumn{1}{|c|}{PIHT}&\multicolumn{1}{c|}{3.48e-1}&\multicolumn{1}{c|}{9.53e-2}&\multicolumn{1}{c|}{4.15e-1}&\multicolumn{1}{c|}{1.25e-3}&\multicolumn{1}{c|}{5.42e-1}& \multicolumn{1}{c|}{3.40e-1}&\multicolumn{1}{c|}{9.54e-2}&\multicolumn{1}{c|}{4.19e-1}&\multicolumn{1}{c|}{1.26e-3}&\multicolumn{1}{c|}{5.46e-1}&
\multicolumn{1}{c|}{3.62e-1}&\multicolumn{1}{c|}{9.67e-2}&\multicolumn{1}{c|}{4.47e-1}&\multicolumn{1}{c|}{1.34e-3}&\multicolumn{1}{c|}{5.56e-1}\\ \hline
\multicolumn{1}{|c|}{BIHT-AOP}&\multicolumn{1}{c|}{3.57e-1}&\multicolumn{1}{c|}{1.11e-1}&\multicolumn{1}{c|}{3.80e-1}&\multicolumn{1}{c|}{1.14e-3}&\multicolumn{1}{c|}{1.59e-0}&
\multicolumn{1}{c|}{3.47e-1}&\multicolumn{1}{c|}{1.09e-1}&\multicolumn{1}{c|}{3.67e-1}&\multicolumn{1}{c|}{1.10e-3}&\multicolumn{1}{c|}{1.60e-0}&
\multicolumn{1}{c|}{3.25e-1}&\multicolumn{1}{c|}{1.05e-1}&\multicolumn{1}{c|}{3.61e-1}&\multicolumn{1}{c|}{1.09e-3}&\multicolumn{1}{c|}{1.61e-0}\\ \hline
\multicolumn{1}{|c|}{PIHT-AOP}&\multicolumn{1}{c|}{3.71e-1}&\multicolumn{1}{c|}{1.17e-1}&\multicolumn{1}{c|}{3.83e-1}&\multicolumn{1}{c|}{1.15e-3}&\multicolumn{1}{c|}{5.66e-1}& \multicolumn{1}{c|}{3.47e-1}&\multicolumn{1}{c|}{1.10e-1}&\multicolumn{1}{c|}{3.71e-1}&\multicolumn{1}{c|}{1.11e-3}&\multicolumn{1}{c|}{5.79e-1}&
\multicolumn{1}{c|}{3.30e-1}&\multicolumn{1}{c|}{1.10e-1}&\multicolumn{1}{c|}{\color{red}3.59e-1}&\multicolumn{1}{c|}{\color{red}1.08e-3}&\multicolumn{1}{c|}{5.83e-1}\\ \hline
\multicolumn{1}{|c|}{GPSP}&\multicolumn{1}{c|}{\color{red}2.64e-1}&\multicolumn{1}{c|}{\color{red}7.33e-2}&\multicolumn{1}{c|}{\color{red}3.31e-1}&\multicolumn{1}{c|}{\color{red}9.95e-4}&\multicolumn{1}{c|}{5.77e-2}
&\multicolumn{1}{c|}{\color{red}2.68e-1}&\multicolumn{1}{c|}{\color{red}7.52e-2}&\multicolumn{1}{c|}{\color{red}3.32e-1}&\multicolumn{1}{c|}{\color{red}9.99e-4}&\multicolumn{1}{c|}{5.19e-2}&
\multicolumn{1}{c|}{\color{red}2.95e-1}&\multicolumn{1}{c|}{\color{red}8.08e-2}&\multicolumn{1}{c|}{3.63e-1}&\multicolumn{1}{c|}{1.09e-3}&\multicolumn{1}{c|}{4.89e-2}\\ \hline\hline
\multicolumn{1}{|c|}{\bf PGe-scad}&\multicolumn{1}{c|}{\color{red}2.65e-1}&\multicolumn{1}{c|}{\color{red}8.36e-2}&\multicolumn{1}{c|}{\color{red}3.87e-1}&\multicolumn{1}{c|}{\color{red}2.21e-4}&\multicolumn{1}{c|}{1.05e-0}&
\multicolumn{1}{c|}{\color{red}2.67e-1}&\multicolumn{1}{c|}{\color{red}8.35e-2}&\multicolumn{1}{c|}{\color{red}3.85e-1}&\multicolumn{1}{c|}{\color{red}2.45e-4}&\multicolumn{1}{c|}{1.01e-0}&
\multicolumn{1}{c|}{\color{red}2.65e-1}&\multicolumn{1}{c|}{\color{red}8.13e-2}&\multicolumn{1}{c|}{\color{red}3.93e-1}&\multicolumn{1}{c|}{\color{red}2.29e-4}&\multicolumn{1}{c|}{1.10e-0}\\ \hline
\multicolumn{1}{|c|}{\bf PGe-znorm}&\multicolumn{1}{c|}{2.89e-1}&\multicolumn{1}{c|}{8.85e-2}&\multicolumn{1}{c|}{4.67e-1}&\multicolumn{1}{c|}{8.83e-5}&\multicolumn{1}{c|}{4.05e-1}&
\multicolumn{1}{c|}{2.92e-1}&\multicolumn{1}{c|}{8.89e-2}&\multicolumn{1}{c|}{4.69e-1}&\multicolumn{1}{c|}{7.22e-5}&\multicolumn{1}{c|}{4.17e-1}&
\multicolumn{1}{c|}{3.00e-1}&\multicolumn{1}{c|}{8.95e-2}&\multicolumn{1}{c|}{4.77e-1}&\multicolumn{1}{c|}{9.63e-5}&\multicolumn{1}{c|}{4.20e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PDASC}&\multicolumn{1}{c|}{5.55e-1}&\multicolumn{1}{c|}{1.77e-1}&\multicolumn{1}{c|}{7.24e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{1.63e-1}&
\multicolumn{1}{c|}{5.80e-1}&\multicolumn{1}{c|}{1.84e-1}&\multicolumn{1}{c|}{7.44e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{1.64e-1}&
\multicolumn{1}{c|}{5.96e-1}&\multicolumn{1}{c|}{1.89e-1}&\multicolumn{1}{c|}{7.48e-1}&\multicolumn{1}{c|}{4.01e-6}&\multicolumn{1}{c|}{1.65e-1}\\ \hline
\multicolumn{1}{|c|}{\bf WPDASC}&\multicolumn{1}{c|}{5.73e-1}&\multicolumn{1}{c|}{1.84e-1}&\multicolumn{1}{c|}{7.36e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{2.66e-1}&
\multicolumn{1}{c|}{5.54e-1}&\multicolumn{1}{c|}{1.76e-1}&\multicolumn{1}{c|}{7.19e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{2.68e-1}&
\multicolumn{1}{c|}{5.96e-1}&\multicolumn{1}{c|}{1.88e-1}&\multicolumn{1}{c|}{7.49e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{2.70e-1}\\ \hline
\multicolumn{16}{|c|}{$m=1000,n=5000,s^*=15,\varpi=0.3,{\bf r=0.15}$} \\ \hline
\multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{$\mu=0.1$}&\multicolumn{5}{c|}{$\mu=0.3$}&\multicolumn{5}{c|}{$\mu=0.5$} \\ \hline
\multicolumn{1}{|c|}{}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)} \\ \hline
\multicolumn{1}{|c|}{PIHT}&\multicolumn{1}{c|}{5.28e-1}&\multicolumn{1}{c|}{1.48e-1}&\multicolumn{1}{c|}{4.97e-1}&\multicolumn{1}{c|}{1.50e-3}&\multicolumn{1}{c|}{5.52e-1}& \multicolumn{1}{c|}{5.42e-1}&\multicolumn{1}{c|}{1.51e-1}&\multicolumn{1}{c|}{5.09e-1}&\multicolumn{1}{c|}{1.53e-3}&\multicolumn{1}{c|}{5.50e-1}&
\multicolumn{1}{c|}{5.30e-1}&\multicolumn{1}{c|}{1.47e-1}&\multicolumn{1}{c|}{4.81e-1}&\multicolumn{1}{c|}{1.45e-3}&\multicolumn{1}{c|}{5.46e-1}\\ \hline
\multicolumn{1}{|c|}{BIHT-AOP}&\multicolumn{1}{c|}{5.23e-1}&\multicolumn{1}{c|}{1.51e-1}&\multicolumn{1}{c|}{5.13e-1}&\multicolumn{1}{c|}{1.54e-3}&\multicolumn{1}{c|}{1.61e-0}&
\multicolumn{1}{c|}{4.97e-1}&\multicolumn{1}{c|}{1.45e-1}&\multicolumn{1}{c|}{5.00e-1}&\multicolumn{1}{c|}{1.50e-3}&\multicolumn{1}{c|}{1.60e-0}&
\multicolumn{1}{c|}{5.19e-1}&\multicolumn{1}{c|}{1.46e-1}&\multicolumn{1}{c|}{5.35e-1}&\multicolumn{1}{c|}{1.61e-3}&\multicolumn{1}{c|}{1.61e-0}\\ \hline
\multicolumn{1}{|c|}{PIHT-AOP}&\multicolumn{1}{c|}{5.13e-1}&\multicolumn{1}{c|}{1.46e-1}&\multicolumn{1}{c|}{5.09e-1}&\multicolumn{1}{c|}{1.53e-3}&\multicolumn{1}{c|}{5.78e-1}&
\multicolumn{1}{c|}{5.04e-1}&\multicolumn{1}{c|}{1.47e-1}&\multicolumn{1}{c|}{5.13e-1}&\multicolumn{1}{c|}{1.54e-3}&\multicolumn{1}{c|}{5.79e-1}&
\multicolumn{1}{c|}{5.30e-1}&\multicolumn{1}{c|}{1.52e-1}&\multicolumn{1}{c|}{5.45e-1}&\multicolumn{1}{c|}{1.64e-3}&\multicolumn{1}{c|}{5.70e-1}\\ \hline
\multicolumn{1}{|c|}{GPSP}&\multicolumn{1}{c|}{\color{red}4.60e-1}&\multicolumn{1}{c|}{\color{red}1.29e-1}&\multicolumn{1}{c|}{\color{red}4.59e-1}&\multicolumn{1}{c|}{\color{red}1.38e-3}&\multicolumn{1}{c|}{8.21e-2}&
\multicolumn{1}{c|}{\color{red}4.60e-1}&\multicolumn{1}{c|}{\color{red}1.29e-1}&\multicolumn{1}{c|}{\color{red}4.65e-1}&\multicolumn{1}{c|}{\color{red}1.40e-3}&\multicolumn{1}{c|}{8.13e-2}&
\multicolumn{1}{c|}{\color{red}4.90e-1}&\multicolumn{1}{c|}{\color{red}1.33e-1}&\multicolumn{1}{c|}{\color{red}4.77e-1}&\multicolumn{1}{c|}{\color{red}1.44e-3}&\multicolumn{1}{c|}{6.03e-2}\\ \hline\hline
\multicolumn{1}{|c|}{\bf PGe-scad}&\multicolumn{1}{c|}{\color{red}3.55e-1}&\multicolumn{1}{c|}{\color{red}1.09e-1}&\multicolumn{1}{c|}{\color{red}4.76e-1}&\multicolumn{1}{c|}{1.34e-3}&\multicolumn{1}{c|}{1.05e-0}&
\multicolumn{1}{c|}{\color{red}3.63e-1}&\multicolumn{1}{c|}{\color{red}1.12e-1}&\multicolumn{1}{c|}{\color{red}4.81e-1}&\multicolumn{1}{c|}{1.50e-3}&\multicolumn{1}{c|}{1.11e-0}&
\multicolumn{1}{c|}{\color{red}3.63e-1}&\multicolumn{1}{c|}{\color{red}1.12e-1}&\multicolumn{1}{c|}{\color{red}4.80e-1}&\multicolumn{1}{c|}{1.38e-3}&\multicolumn{1}{c|}{1.27e-0}\\ \hline
\multicolumn{1}{|c|}{\bf PGe-znorm}&\multicolumn{1}{c|}{4.22e-1}&\multicolumn{1}{c|}{1.24e-2}&\multicolumn{1}{c|}{5.17e-1}&\multicolumn{1}{c|}{6.38e-4}&\multicolumn{1}{c|}{4.19e-1}&
\multicolumn{1}{c|}{4.51e-1}&\multicolumn{1}{c|}{1.30e-1}&\multicolumn{1}{c|}{5.19e-1}&\multicolumn{1}{c|}{7.94e-4}&\multicolumn{1}{c|}{4.06e-1}&
\multicolumn{1}{c|}{4.41e-1}&\multicolumn{1}{c|}{1.27e-1}&\multicolumn{1}{c|}{5.21e-1}&\multicolumn{1}{c|}{6.62e-4}&\multicolumn{1}{c|}{4.49e-1}\\ \hline
\multicolumn{1}{|c|}{\bf PDASC}&\multicolumn{1}{c|}{6.90e-1}&\multicolumn{1}{c|}{2.24e-1}&\multicolumn{1}{c|}{8.12e-1}&\multicolumn{1}{c|}{4.01e-6}&\multicolumn{1}{c|}{1.35e-1}&
\multicolumn{1}{c|}{7.07e-1}&\multicolumn{1}{c|}{2.26e-1}&\multicolumn{1}{c|}{8.24e-1}&\multicolumn{1}{c|}{4.01e-6}&\multicolumn{1}{c|}{1.34e-1}&
\multicolumn{1}{c|}{7.11e-1}&\multicolumn{1}{c|}{2.27e-1}&\multicolumn{1}{c|}{8.27e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{1.33e-1}\\ \hline
\multicolumn{1}{|c|}{\bf WPDASC}&\multicolumn{1}{c|}{6.62e-1}&\multicolumn{1}{c|}{2.14e-1}&\multicolumn{1}{c|}{7.92e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{2.35e-1}&
\multicolumn{1}{c|}{6.82e-1}&\multicolumn{1}{c|}{2.18e-1}&\multicolumn{1}{c|}{8.03e-1}&\multicolumn{1}{c|}{4.01e-6}&\multicolumn{1}{c|}{2.34e-1}&
\multicolumn{1}{c|}{7.04e-1}&\multicolumn{1}{c|}{2.25e-1}&\multicolumn{1}{c|}{8.20e-1}&\multicolumn{1}{c|}{4.01e-6}&\multicolumn{1}{c|}{2.30e-1}\\ \hline \end{tabular}} \end{table}
Finally, we use the eight solvers to solve the test problems with the sampling matrix of type II.
Table \ref{table3} reports the average MSE, Herr, FNR, FPR and CPU time over $50$ trials.
From Table \ref{table3}, among the four solvers requiring partial information on $x^{\rm true}$,
PIHT yields better MSE, Herr, FNR and FPR than the others for the examples with
high noise, while among the four solvers requiring no information on $x^{\rm true}$,
PGe-scad remains the best one. Moreover, PGe-scad yields smaller MSE, Herr and FNR
than PIHT does for $\varpi=0.3$ and $0.5$. We also observe that among the eight solvers,
GPSP always requires the least CPU time, and that PGe-scad and PGe-znorm require CPU time
comparable to that of PIHT, BIHT-AOP and PIHT-AOP for all test examples.
\begin{table}[!h] \centering \tiny \captionsetup{font={scriptsize}} \caption{Numerical comparisons of eight solvers for test problems with $\Phi$ of type II with different noise levels} \label{table3} \resizebox{\textwidth}{15mm}{
\begin{tabular}{|llllllllllllllll|} \hline
\multicolumn{16}{|c|}{$m=2500,n=10000,s^*=20,r=0.1$} \\ \hline
\multicolumn{1}{|l|}{}&\multicolumn{5}{c|}{$\varpi=0.1$}&\multicolumn{5}{c|}{$\varpi=0.3$}&\multicolumn{5}{c|}{$\varpi=0.5$} \\ \hline
\multicolumn{1}{|l|}{}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}&
\multicolumn{1}{c|}{MSE}&\multicolumn{1}{c|}{Herr}&\multicolumn{1}{c|}{FNR}&\multicolumn{1}{c|}{FPR}&\multicolumn{1}{c|}{time(s)}\\ \hline
\multicolumn{1}{|c|}{PIHT}&\multicolumn{1}{c|}{2.57e-1}&\multicolumn{1}{c|}{7.23e-1}&\multicolumn{1}{c|}{3.44e-1}&\multicolumn{1}{c|}{6.89e-4}&\multicolumn{1}{c|}{2.55}& \multicolumn{1}{c|}{\color{red}2.74e-1}&\multicolumn{1}{c|}{\color{red}8.15e-2}&\multicolumn{1}{c|}{\color{red}3.49e-1}&\multicolumn{1}{c|}{\color{red}6.99e-4}&\multicolumn{1}{c|}{2.54}&
\multicolumn{1}{c|}{\color{red}3.14e-1}&\multicolumn{1}{c|}{\color{red}9.58e-2}&\multicolumn{1}{c|}{\color{red}3.69e-1}&\multicolumn{1}{c|}{\color{red}7.39e-4}&\multicolumn{1}{c|}{2.54}\\ \hline
\multicolumn{1}{|c|}{BIHT-AOP}&\multicolumn{1}{c|}{\color{red}1.54e-1}&\multicolumn{1}{c|}{\color{red}4.61e-2}&\multicolumn{1}{c|}{\color{red}2.40e-1}&\multicolumn{1}{c|}{\color{red}4.81e-4}&\multicolumn{1}{c|}{7.70}&
\multicolumn{1}{c|}{3.06e-1}&\multicolumn{1}{c|}{9.84e-2}&\multicolumn{1}{c|}{3.36e-1}&\multicolumn{1}{c|}{6.73e-3}&\multicolumn{1}{c|}{7.68}&
\multicolumn{1}{c|}{4.23e-1}&\multicolumn{1}{c|}{1.29e-1}&\multicolumn{1}{c|}{3.95e-1}&\multicolumn{1}{c|}{7.92e-4}&\multicolumn{1}{c|}{7.64}\\ \hline
\multicolumn{1}{|c|}{PIHT-AOP}&\multicolumn{1}{c|}{1.68e-1}&\multicolumn{1}{c|}{5.08e-2}&\multicolumn{1}{c|}{2.44e-1}&\multicolumn{1}{c|}{4.89e-4}&\multicolumn{1}{c|}{2.66}&
\multicolumn{1}{c|}{3.16e-1}&\multicolumn{1}{c|}{1.03e-1}&\multicolumn{1}{c|}{3.38e-1}&\multicolumn{1}{c|}{6.77e-4}&\multicolumn{1}{c|}{2.66}&
\multicolumn{1}{c|}{4.62e-1}&\multicolumn{1}{c|}{1.52e-1}&\multicolumn{1}{c|}{4.14e-1}&\multicolumn{1}{c|}{8.30e-4}&\multicolumn{1}{c|}{2.66}\\ \hline
\multicolumn{1}{|c|}{GPSP}&\multicolumn{1}{c|}{2.45e-1}&\multicolumn{1}{c|}{6.88e-2}&\multicolumn{1}{c|}{3.36e-1}&\multicolumn{1}{c|}{6.73e-4}&\multicolumn{1}{c|}{0.19}&
\multicolumn{1}{c|}{2.77e-1}&\multicolumn{1}{c|}{8.22e-2}&\multicolumn{1}{c|}{3.51e-1}&\multicolumn{1}{c|}{7.03e-4}&\multicolumn{1}{c|}{0.19}&
\multicolumn{1}{c|}{3.23e-1}&\multicolumn{1}{c|}{9.68e-2}&\multicolumn{1}{c|}{3.73e-1}&\multicolumn{1}{c|}{7.47e-4}&\multicolumn{1}{c|}{0.19}\\ \hline\hline
\multicolumn{1}{|c|}{\bf PGe-scad}&\multicolumn{1}{c|}{\color{red}2.10e-1}&\multicolumn{1}{c|}{\color{red}6.65e-2}&\multicolumn{1}{c|}{\color{red}3.08e-1}&\multicolumn{1}{c|}{2.61e-5}&\multicolumn{1}{c|}{2.79}&
\multicolumn{1}{c|}{\color{red}2.44e-1}&\multicolumn{1}{c|}{\color{red}7.82e-2}&\multicolumn{1}{c|}{\color{red}3.61e-1}&\multicolumn{1}{c|}{2.10e-4}&\multicolumn{1}{c|}{2.71}&
\multicolumn{1}{c|}{\color{red}2.92e-1}&\multicolumn{1}{c|}{\color{red}9.34e-2}&\multicolumn{1}{c|}{\color{red}4.14e-1}&\multicolumn{1}{c|}{6.89e-4}&\multicolumn{1}{c|}{2.75}\\ \hline
\multicolumn{1}{|c|}{\bf PGe-znorm}&\multicolumn{1}{c|}{2.34e-1}&\multicolumn{1}{c|}{7.36e-2}&\multicolumn{1}{c|}{4.25e-1}&\multicolumn{1}{c|}{2.00e-6}&\multicolumn{1}{c|}{1.78}&
\multicolumn{1}{c|}{2.44e-1}&\multicolumn{1}{c|}{7.71e-2}&\multicolumn{1}{c|}{4.27e-1}&\multicolumn{1}{c|}{8.02e-6}&\multicolumn{1}{c|}{1.78}&
\multicolumn{1}{c|}{2.74e-1}&\multicolumn{1}{c|}{8.70e-2}&\multicolumn{1}{c|}{4.45e-1}&\multicolumn{1}{c|}{3.21e-5}&\multicolumn{1}{c|}{1.77}\\ \hline
\multicolumn{1}{|c|}{\bf PDASC}&\multicolumn{1}{c|}{5.41e-1}&\multicolumn{1}{c|}{1.73e-1}&\multicolumn{1}{c|}{6.84e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{8.28e-1}&
\multicolumn{1}{c|}{6.26e-1}&\multicolumn{1}{c|}{2.01e-1}&\multicolumn{1}{c|}{7.50e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{8.07e-1}&
\multicolumn{1}{c|}{6.28e-1}&\multicolumn{1}{c|}{2.03e-1}&\multicolumn{1}{c|}{7.56e-1}&\multicolumn{1}{c|}{4.02e-5}&\multicolumn{1}{c|}{7.66e-1}\\ \hline
\multicolumn{1}{|c|}{\bf WPDASC}&\multicolumn{1}{c|}{5.41e-1}&\multicolumn{1}{c|}{1.73e-1}&\multicolumn{1}{c|}{6.83e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{1.01}&
\multicolumn{1}{c|}{5.62e-1}&\multicolumn{1}{c|}{1.81e-1}&\multicolumn{1}{c|}{7.03e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{9.98e-1}&
\multicolumn{1}{c|}{6.17e-1}&\multicolumn{1}{c|}{1.99e-1}&\multicolumn{1}{c|}{7.37e-1}&\multicolumn{1}{c|}{0}&\multicolumn{1}{c|}{9.71e-1}\\ \hline \end{tabular}} \end{table}
\section{Conclusion}\label{sec6}
We proposed a zero-norm regularized smooth DC loss model and derived a family of
equivalent nonconvex surrogates that cover the MCP and SCAD surrogates as special cases.
For the proposed model and its SCAD surrogate, we developed the PG method with extrapolation
to compute their $\tau$-stationary points and provided a convergence certificate by
establishing the convergence of the whole iterate sequence and its local linear convergence rate.
Numerical comparisons with several state-of-the-art methods demonstrate that the two new models
are well suited for high noise and/or high sign-flip ratios.
An interesting future topic is to analyze the statistical error bounds for these models.
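The proximal gradient method with extrapolation referred to above can be illustrated by the following generic sketch. This is a minimal template with a fixed extrapolation weight, not the paper's exact scheme; `prox_l1` is a stand-in proximal map (soft-thresholding) used only for illustration, and the step size and tolerance are illustrative choices.

```python
import numpy as np

def pg_extrapolation(grad_f, prox_g, x0, step, beta=0.9, iters=500, tol=1e-10):
    """Generic proximal gradient method with extrapolation:
        y_k     = x_k + beta * (x_k - x_{k-1})
        x_{k+1} = prox_g(y_k - step * grad_f(y_k), step)
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_new = prox_g(y - step * grad_f(y), step)
        x_prev, x = x, x_new
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x)):
            break
    return x

def prox_l1(v, step, lam=0.1):
    # Soft-thresholding: proximal map of lam * ||.||_1 with step size `step`.
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
```

For instance, applied to $\min_x \frac12\|x-b\|^2+\lambda\|x\|_1$ (gradient $x-b$, step $1$), the iterates reach the soft-thresholded solution immediately.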
\end{document} |
\begin{document}
\title{New examples of compact special Lagrangian submanifolds embedded in hyper-K\"ahler manifolds} \author{Kota Hattori} \date{} \maketitle {\abstract We construct smooth families of compact special Lagrangian submanifolds embedded in some toric hyper-K\"ahler\ manifolds which never become holomorphic\ Lagrangian\ submanifolds via any hyper-K\"ahler\ rotations. These families converge to special\ Lagrangian\ immersions with self-intersection points in the sense of currents. To construct them, we apply the desingularization method developed by Joyce.} \section{Introduction} In $1982$, Harvey and Lawson introduced in \cite{harvey1982calibrated} the notion of calibrated submanifolds of Riemannian manifolds. These were recognized by many researchers as an important class of minimal submanifolds, which had already been well studied for a long time. One important feature of calibrated submanifolds is the volume minimizing property: every compact calibrated submanifold minimizes the volume functional in its homology class.
Several kinds of calibrated submanifolds are defined in Riemannian manifolds with special holonomy. For example, special Lagrangian submanifolds are middle dimensional calibrated submanifolds of Riemannian manifolds with $SU(n)$ holonomy, the so-called Calabi-Yau manifolds. In hyper-K\"ahler\ manifolds, which are Riemannian manifolds with $Sp(n)$ holonomy, there is a notion of holomorphic\ Lagrangian\ submanifolds, which are calibrated by the $n$-th power of a K\"ahler form. Since hyper-K\"ahler\ manifolds are naturally regarded as Calabi-Yau manifolds, special\ Lagrangian\ submanifolds also make sense in these manifolds. Hence there are two kinds of calibrated submanifolds in hyper-K\"ahler\ manifolds, and it is well known that every holomorphic\ Lagrangian\ submanifold becomes special\ Lagrangian\ after a hyper-K\"ahler\ rotation. The converse need not hold, although counterexamples had not been found.
Another important aspect of calibrated geometry is that some calibrated submanifolds have moduli spaces with good structure. For instance, McLean showed that the moduli space of compact special Lagrangian submanifolds is a smooth manifold whose dimension equals the first Betti number of the special Lagrangian submanifold \cite{mclean1996deformations}.
Although the construction of compact special\ Lagrangian\ submanifolds embedded in Calabi-Yau manifolds is not easy in general, Y. I. Lee \cite{lee2003embedded}, Joyce \cite{joyce2003special,joyce2004special} and D. A. Lee \cite{lee2004connected} developed gluing methods for constructing families of compact special\ Lagrangian\ submanifolds converging to special\ Lagrangian\ immersions with self-intersection points in the sense of currents. Moreover, D. A. Lee constructed a non-totally geodesic special\ Lagrangian\ submanifold in the flat torus by applying his gluing method. Following these works, several concrete examples of special\ Lagrangian\ submanifolds have been constructed by gluing methods; see \cite{haskins2007slag,chan2009desingularization1,chan2009desingularization2}, for example.
In this paper we apply the results of \cite{joyce2003special,joyce2004special} to construct new examples of compact special\ Lagrangian\ submanifolds embedded in toric hyper-K\"ahler\ manifolds. Moreover, these examples never become holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by a hyper-K\"ahler\ rotation.
A hyper-K\"ahler\ manifold is a Riemannian manifold $(M^{4n},g)$ equipped with an integrable hypercomplex structure $(I_1,I_2,I_3)$, so that $g$ is hermitian with respect to every $I_\alpha$, and $\omega_\alpha:=g(I_\alpha\cdot,\cdot)$ are closed. For any $\theta\in \mathbb{R}$, note that $e^{\sqrt{-1}\theta}(\omega_2+\sqrt{-1}\omega_3)$ becomes a holomorphic symplectic $2$-form with respect to $I_1$. If the holomorphic symplectic form vanishes on a submanifold $L^{2n}\subset M$, $L$ is called a holomorphic\ Lagrangian\ submanifold. Clearly, this definition is not depend on $\theta$.
Similarly, we can define the notion of holomorphic\ Lagrangian\ submanifold with respect to a complex structure $aI_1+bI_2+cI_3$ for every unit vector $(a,b,c)$ in $\mathbb{R}^3$. The new complex structure $aI_1+bI_2+cI_3$ is called a hyper-K\"ahler\ rotation of $(M,g,I_1,I_2,I_3)$.
The hyper-K\"ahler\ manifold $M$ is naturally regarded as the Calabi-Yau manifold by the complex structure $I_1$, the K\"ahler form $\omega_1$ and the holomorphic volume form $(\omega_2 + \sqrt{-1}\omega_3)^n$. Then we can easy to see that holomorphic\ Lagrangian\ submanifolds with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$ are special\ Lagrangian\ for every $\alpha =1,\cdots, 2n$. Conversely, it has been unknown whether there exist special\ Lagrangian\ submanifolds embedded in hyper-K\"ahler\ manifolds never come from holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by the hyper-K\"ahler\ rotations. The main result of this paper is described as follows.
\begin{thm} Let $n\ge 2$. There exist smooth compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t< \delta}$ and $\{ L_\alpha\}_{\alpha = 1,\cdots,2n}$ embedded in a hyper-K\"ahler\ manifold $M^{4n}$ which satisfy $\lim_{t\to 0}\tilde{L}_t = \bigcup_\alpha L_\alpha$ in the sense of currents, and $\tilde{L}_t$ is diffeomorphic to $2n(\mathbb{P}^1)^n \# (S^1\times S^{2n-1})$. Moreover, each $L_\alpha$ is a holomorphic\ Lagrangian\ submanifold of $M$ with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$, while the $\tilde{L}_t$ never become holomorphic\ Lagrangian\ submanifolds with respect to any complex structure given by a hyper-K\"ahler\ rotation. \label{main1} \end{thm}
This is one of the examples obtained in this article. Furthermore, we obtain a special\ Lagrangian\ $2\mathbb{P}^2 \# 2\overline{\mathbb{P}^2} \# (S^1\times S^3)$ embedded in an $8$-dimensional hyper-K\"ahler\ manifold and a special\ Lagrangian\ $(3N+1)(\mathbb{P}^1)^2 \# N(S^1\times S^3)$ embedded in another $8$-dimensional hyper-K\"ahler\ manifold, neither of which becomes a holomorphic\ Lagrangian\ submanifold with respect to any complex structure given by a hyper-K\"ahler\ rotation.
Theorem \ref{main1} has another significance from the point of view of the compactification of moduli spaces of compact special\ Lagrangian\ submanifolds. In general, the moduli space $\mathcal{M}(L)$ of deformations of a compact special\ Lagrangian\ submanifold $L\subset X$ is not necessarily compact; consequently, the study of its compactification is an important problem. It is known that a compactification of $\mathcal{M}(L)$ is given by geometric measure theory. The special\ Lagrangian\ immersion $\bigcup_\alpha L_\alpha$ appearing in Theorem \ref{main1} is a concrete example of an element of $\overline{\mathcal{M}(\tilde{L}_{t_0})}\backslash\mathcal{M}(\tilde{L}_{t_0})$. D. A. Lee also considered a similar situation; however, in \cite{lee2004connected} the Calabi-Yau structure of the ambient space of $\tilde{L}_t$ is deformed with the parameter $t$.
Here, we describe the outline of the proof. Let $(M,J,\omega,\Omega)$ be a K\"ahler manifold of complex dimension $m \ge 3$ with holomorphic volume form $\Omega\in H^0(K_M)$, and $L_{\alpha}\subset M$ be connected special\ Lagrangian\ submanifolds, where $\alpha = 1,\cdots,A$. Put $\mathcal{V}=\{1,\cdots, A\}$, and suppose we have a quiver $(\mathcal{V},\mathcal{E},s,t)$, namely, $\mathcal{V}$ consists of finite vertices, $\mathcal{E}$ consists of finite directed edges, and $s,t$ are maps $\mathcal{E} \to \mathcal{V}$ so that $s(h)$ is the source of $h\in \mathcal{E}$ and $t(h)$ is the target.
A subset $S\subset \mathcal{E}$ is called a cycle if it is written as $S=\{ h_1,h_2,\cdots,h_l\}$ and $t(h_k) = s(h_{k+1})$, $t(h_l) = s(h_1)$ hold for all $k=1,\cdots,l-1$. Then $\mathcal{E}$ is said to be {\it covered by cycles} if every edge $h\in \mathcal{E}$ is contained in some cycles of $\mathcal{E}$.
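The condition that $\mathcal{E}$ is covered by cycles is straightforward to test algorithmically: a directed edge $h$ lies on a cycle exactly when $t(h)$ can reach $s(h)$ along directed edges. A small illustrative check (not used in the paper, where the quivers are exhibited by hand):

```python
from collections import defaultdict, deque

def covered_by_cycles(edges):
    """Return True if every directed edge (s, t) lies on some cycle,
    i.e. t can reach s back along directed edges."""
    adj = defaultdict(list)
    for s, t in edges:
        adj[s].append(t)

    def reaches(a, b):
        # breadth-first search from a, looking for b
        seen, queue = {a}, deque([a])
        while queue:
            v = queue.popleft()
            if v == b:
                return True
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False

    return all(reaches(t, s) for s, t in edges)
```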
If two special\ Lagrangian\ submanifolds $L_0,L_1\subset X$ intersect transversely at $p\in L_0\cap L_1$, then we can define the type of the intersection point $p$, which is a positive integer less than $m$. We then have the following result, which is deduced from Theorem 9.7 of \cite{joyce2003special} by some additional arguments.
\begin{thm} Let $(\mathcal{V},\mathcal{E},s,t)$ be a quiver, and let $L_\alpha$ be connected compact special\ Lagrangian\ submanifolds embedded in a Calabi-Yau manifold $(M,J,\omega,\Omega)$ for every $\alpha\in \mathcal{V}$. Assume that for each $h \in \mathcal{E}$, $L_{s(h)}$ and $L_{t(h)}$ intersect transversely at exactly one point $p$, which is an intersection point of type $1$, and that $L_\alpha \cap L_\beta$ is empty if $\alpha\neq \beta$ and no edge connects $\alpha$ and $\beta$. Then, if $\mathcal{E}$ is covered by cycles, there exists a family of compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t<\delta}$ embedded in $M$ which satisfies $\lim_{t\to 0}\tilde{L}_t = \bigcup_{\alpha\in \mathcal{V}}L_\alpha$ in the sense of currents. \label{gluing} \end{thm}
To obtain Theorem \ref{main1}, we apply Theorem \ref{gluing} to the case where $M$ is a toric hyper-K\"ahler\ manifold and $L_\alpha$ is a holomorphic\ Lagrangian\ submanifold with respect to $\cos(\alpha\pi/n) I_2 + \sin(\alpha\pi/n) I_3$. Accordingly, the proof reduces to finding toric hyper-K\"ahler\ manifolds $M$ and holomorphic\ Lagrangian\ submanifolds $L_1,\cdots,L_{2n}$ satisfying the assumptions of Theorem \ref{gluing}. In particular, finding $L_\alpha$'s such that $\mathcal{E}$ is covered by cycles is not so easy. The author has not found a systematic way to produce such examples in toric hyper-K\"ahler\ manifolds; however, some concrete examples are given in this article.
In toric hyper-K\"ahler\ manifolds, many holomorphic\ Lagrangian\ submanifolds are obtained as the inverse image of some special polytopes by the hyper-K\"ahler\ moment maps, where the polytopes are naturally given by the hyperplane arrangements which determine the toric hyper-K\"ahler\ manifolds. We can compute the type at the intersection point of two holomorphic\ Lagrangian\ submanifolds, if the intersection point is the fixed point of the torus action. Finally, we can find examples of toric hyper-K\"ahler\ manifolds and such polytopes, which satisfy the assumption Theorem \ref{gluing}.
Next we have to show that these examples of special\ Lagrangian\ submanifolds never become holomorphic\ Lagrangian\ submanifolds. Since $\tilde{L}_t$ lies in the homology class $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$, we obtain the volume of $\tilde{L}_t$ by integrating the real part of the holomorphic volume form over $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$. On the other hand, if $\tilde{L}_t$ were a holomorphic\ Lagrangian\ submanifold with respect to some $aI_1+bI_2+cI_3$, then the volume could also be computed by integrating $(a\omega_1+b\omega_2+c\omega_3)^n$ over $\sum_{\alpha}(-1)^\alpha [L_{\alpha}]$, since $a\omega_1+b\omega_2+c\omega_3$ would be the K\"ahler form on $\tilde{L}_t$. Since these two values of the volume do not coincide, we obtain a contradiction. At the same time, there is another, simpler proof when the first Betti number of $\tilde{L}_t$ is odd, since holomorphic\ Lagrangian\ submanifolds are K\"ahler manifolds, which always have even first Betti number. The example constructed in Theorem \ref{main1} satisfies $b_1 = 1$, hence this argument applies to it. However, we have other examples in Section \ref{sec6} whose first Betti number may be even.
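Schematically, and with the normalization of the calibrating forms left implicit, the two volume computations in this argument read:

```latex
\mathrm{vol}(\tilde{L}_t)
  = \sum_{\alpha}(-1)^{\alpha}\int_{L_{\alpha}}
    \mathrm{Re}\,(\omega_2+\sqrt{-1}\,\omega_3)^{n},
\qquad
\mathrm{vol}(\tilde{L}_t)
  = \sum_{\alpha}(-1)^{\alpha}\int_{L_{\alpha}}
    (a\omega_1+b\omega_2+c\omega_3)^{n},
```

the first because $\tilde{L}_t$ is special Lagrangian, the second under the hypothesis that $\tilde{L}_t$ is holomorphic Lagrangian with respect to $aI_1+bI_2+cI_3$. Both right-hand sides depend only on the homology class $\sum_\alpha(-1)^\alpha[L_\alpha]$, so their disagreement yields the contradiction.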
This article is organized as follows. First we define $\sigma$-holomorphic\ Lagrangian\ submanifolds in Section \ref{sec2} and review their construction in toric hyper-K\"ahler\ manifolds in Section \ref{sec3}. Next we review the definition of the type of an intersection point of two special\ Lagrangian\ submanifolds, and compute it in the case of toric hyper-K\"ahler\ manifolds in Section \ref{sec4}. In Section \ref{sec5}, we prove Theorem \ref{gluing} by using Theorem 9.7 of \cite{joyce2003special}. In Section \ref{sec6}, we find toric hyper-K\"ahler\ manifolds and holomorphic\ Lagrangian\ submanifolds which satisfy the assumptions of Theorem \ref{gluing}, and obtain compact special\ Lagrangian\ submanifolds embedded in some toric hyper-K\"ahler\ manifolds. In Section \ref{sec7}, we show that the examples obtained in Section \ref{sec6} never become $\sigma$-holomorphic\ Lagrangian\ submanifolds for any $\sigma\in S^2$.
{\bf Acknowledgment.} The author would like to express his gratitude to Professor Dominic Joyce for his advice on this article. The author is also grateful to Dr. Yohsuke Imagi for useful discussion and his advice.
\section{Holomorphic Lagrangian submanifolds}\label{sec2} \begin{definition} {\rm A Riemannian manifold $(M,g)$ equipped with integrable complex structures $(I_1,I_2,I_3)$ is a} hyper-K\"ahler\ manifold {\rm if each $I_{\alpha}$ is orthogonal with respect to $g$, they satisfy the quaternionic relation $I_1I_2I_3 = -1$, and the fundamental $2$-forms $\omega_{\alpha}:=g(I_{\alpha}\cdot,\cdot)$ are closed.} \end{definition} We put $\omega=(\omega_1,\omega_2,\omega_3)$ and call it the hyper-K\"ahler\ structure. For each \begin{eqnarray} \sigma = (\sigma_1,\sigma_2,\sigma_3) \in S^2=\{ (a,b,c)\in \mathbb{R}^3;\ a^2+b^2+c^2=1 \},\nonumber \end{eqnarray} we have another K\"ahler structure \begin{eqnarray} (M, I^\sigma, \omega^\sigma) := (M, \sum_{i=1}^3 \sigma_i I_i, \sum_{i=1}^3\sigma_i\omega_i). \nonumber \end{eqnarray} Take $\sigma',\sigma''\in S^2$ so that $(\sigma,\sigma',\sigma'')$ forms an orthonormal basis of $\mathbb{R}^3$. Suppose it has the positive orientation, that is, \begin{eqnarray} \sigma \wedge \sigma' \wedge \sigma'' = (1,0,0) \wedge (0,1,0) \wedge (0,0,1)\nonumber \end{eqnarray} holds. Then we have another hyper-K\"ahler\ structure $(\omega^\sigma, \omega^{\sigma'}, \omega^{\sigma''})$, which is called the hyper-K\"ahler\ rotation of $\omega$. \begin{definition} {\rm Let $(M,g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold of real dimension $4n$, and $L\subset M$ be a $2n$-dimensional submanifold. Fix $\sigma\in S^2$ arbitrarily.
Then $L$ is a $\sigma$}-holomorphic\ Lagrangian\ submanifold {\rm if $\omega^{\sigma'}|_{L} = \omega^{\sigma''}|_{L} = 0$.} \end{definition} It is easy to see that the above definition does not depend on the choice of $\sigma',\sigma''$.
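As a simple flat example (an illustration added here, not needed in the sequel), consider $\mathbb{H}^n \cong \mathbb{C}^{2n}$ with coordinates $(z_1,\cdots,z_n,w_1,\cdots,w_n)$ and the standard hyper-K\"ahler\ structure \begin{eqnarray} \omega_1 = \frac{\sqrt{-1}}{2}\sum_{i=1}^n(dz_i\wedge d\overline{z_i} + dw_i\wedge d\overline{w_i}),\quad \omega_2+\sqrt{-1}\omega_3 = \sum_{i=1}^n dz_i\wedge dw_i.\nonumber \end{eqnarray} The subspace $L=\{w_1=\cdots=w_n=0\}\cong\mathbb{C}^n$ satisfies $\omega_2|_L=\omega_3|_L=0$, hence $L$ is a $(1,0,0)$-holomorphic\ Lagrangian\ submanifold: it is a complex submanifold with respect to $I_1$ which is Lagrangian for $\omega_2$ and $\omega_3$.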
Any hyper-K\"ahler\ manifold can be regarded as a Calabi-Yau manifold by considering the pair of the K\"ahler manifold $(M,I_1,\omega_1)$ and the holomorphic volume form $(\omega_2+\sqrt{-1}\omega_3)^n \in H^0(M,K_M)$, where $K_M$ is the canonical line bundle of the complex manifold $(M,I_1)$. Therefore, we can consider the notion of special\ Lagrangian\ submanifolds in $M$ as follows. \begin{definition} {\rm Let $(M,g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold of real dimension $4n$, and let $L\subset M$ be a $2n$-dimensional submanifold.
Then $L$ is a} special\ Lagrangian\ submanifold {\rm if $\omega_1|_{L} = {\rm Im}(\omega_2+\sqrt{-1}\omega_3)^n|_L = 0$.} \end{definition}
\section{Toric hyper-K\"ahler\ manifolds}\label{torichk}\label{sec3} \subsection{Construction}\label{const} In this subsection we briefly review the construction of toric hyper-K\"ahler\ manifolds. Let $u_{\mathbb{Z}} : \mathbb{Z}^d \to \mathbb{Z}^n$ be a surjective $\mathbb{Z}$-linear map, which induces a homomorphism between tori and a map between their Lie algebras, denoted by $\hat{u}: T^d \to T^n$ and $u : \mathbf{t}^d \to \mathbf{t}^n$, respectively. We put $K:={\rm Ker}\ \hat{u}\subset T^d$ and $\mathbf{k}:= {\rm Ker}\ u\subset \mathbf{t}^d$; here $\mathbf{k}$ is the Lie algebra of the subtorus $K$. The adjoint map of $u$ is denoted by $u^* : (\mathbf{t}^n)^*\to (\mathbf{t}^d)^*$, and it naturally induces a map $V\otimes (\mathbf{t}^n)^*\to V\otimes (\mathbf{t}^d)^*$ for any vector space $V$, which is denoted by the same symbol.
Next we consider the action of $T^d$ on the quaternionic vector space $\mathbb{H}^d$ given by $(x_1,\cdots,x_d)\cdot (g_1,\cdots, g_d) := (x_1g_1,\cdots,x_dg_d)$ for $x_k\in \mathbb{H}$ and $g_k\in S^1$. This action preserves the standard hyper-K\"ahler\ structure on $\mathbb{H}^d$, and the hyper-K\"ahler\ moment map $\mu_d:\mathbb{H}^d\to {\rm Im}\mathbb{H} \otimes (\mathbf{t}^d)^*$ is given by $\mu_d(x_1,\cdots,x_d) = (x_1i\overline{x_1},\cdots,x_di\overline{x_d})$. Here, ${\rm Im}\mathbb{H} \cong \mathbb{R}^3$ is the space of pure imaginary quaternions.
Let $\hat{\iota} : K\to T^d$ and $\iota : \mathbf{k}\to \mathbf{t}^d$ be the inclusion maps, and let $\mu_K:=\iota^*\circ\mu_d : \mathbb{H}^d\to {\rm Im}\mathbb{H}\otimes\mathbf{k}^*$ be the hyper-K\"ahler\ moment map with respect to the $K$-action on $\mathbb{H}^d$. For each $\lambda=(\lambda_1,\cdots,\lambda_d)\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^d)^*$, we obtain the hyper-K\"ahler\ quotient $X(u,\lambda):= \mu_K^{-1}(\iota^*(\lambda))/K$, called a toric hyper-K\"ahler\ variety. The complex structures on $X(u,\lambda)$ are denoted by $I_{\lambda,1}, I_{\lambda,2},I_{\lambda,3}$, and the corresponding K\"ahler forms are denoted by $\omega_{\lambda}=(\omega_{\lambda,1}, \omega_{\lambda,2},\omega_{\lambda,3})$.
Although $X(u,\lambda)$ is not necessarily a smooth manifold, an equivalent condition for smoothness was obtained by Bielawski and Dancer in \cite{bielawski2000geometry}. Let $e_1,\cdots,e_d\in \mathbb{R}^d$ be the standard basis and $u_k:=u(e_k) \in \mathbf{t}^n$. Put \begin{eqnarray} H_k = H_k(\lambda) := \{ y\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^* ;\ \langle y, u_k \rangle + \lambda_k = 0 \},\nonumber \end{eqnarray} where \begin{eqnarray} \langle y, u_k \rangle = (\langle y_1, u_k \rangle,\ \langle y_2, u_k \rangle,\ \langle y_3, u_k \rangle) \in \mathbb{R}^3 = {\rm Im}\mathbb{H} \nonumber \end{eqnarray} for $y = (y_1,y_2,y_3)$. \begin{thm}[\cite{bielawski2000geometry}] The hyper-K\"ahler\ quotient $X(u,\lambda)$ is a smooth manifold if and only if both of the following conditions $(*1)$ and $(*2)$ are satisfied. $(*1)$ For any $\tau \subset \{1,2,\cdots, d \}$ with $\#\tau = n + 1$, the intersection $\bigcap_{k\in \tau} H_k$ is empty. $(*2)$ For every $\tau \subset \{1,2,\cdots, d \}$ with $\#\tau = n$, the intersection $\bigcap_{k\in \tau} H_k$ is nonempty if and only if $\{u_k;\ k\in\tau \}$ is a $\mathbb{Z}$-basis of $\mathbb{Z}^n$. \label{smooth} \end{thm}
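To illustrate Theorem \ref{smooth} in the simplest case (a standard example, not needed later), take $d=2$, $n=1$ and $u_{\mathbb{Z}}(a,b)=a+b$, so that $u_1=u_2=1$ and $H_k(\lambda) = \{-\lambda_k\}\subset {\rm Im}\mathbb{H}$. Condition $(*2)$ is satisfied since each $u_k=1$ is a $\mathbb{Z}$-basis of $\mathbb{Z}$, and condition $(*1)$ requires $H_1\cap H_2 = \emptyset$, that is, $\lambda_1\neq\lambda_2$. Hence $X(u,\lambda)$ is smooth if and only if $\lambda_1\neq\lambda_2$; in this case $X(u,\lambda)$ is the Eguchi--Hanson space $T^*\mathbb{CP}^1$.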
The $T^d$ action on $\mathbb{H}^d$ induces a $T^n=T^d/K$ action on $X(u,\lambda)$ preserving the hyper-K\"ahler\ structure of $X(u,\lambda)$, and the hyper-K\"ahler\ moment map $\mu_{\lambda} = (\mu_{\lambda,1},\ \mu_{\lambda,2},\ \mu_{\lambda,3}): X(u,\lambda)\to {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^*$ is defined by \begin{eqnarray} u^*(\mu_{\lambda}([x])):= \mu_d(x) - \lambda,\nonumber \end{eqnarray} where $[x]\in X(u,\lambda)$ is the equivalence class represented by $x\in \mu_K^{-1}(\iota^*(\lambda))$.
Let $\sigma\in S^2$. A $T^n$-invariant submanifold $L\subset X(u,\lambda)$ becomes a $\sigma$-holomorphic\ Lagrangian\ submanifold if $\mu_\lambda(L)$ is contained in $q+\sigma\otimes (\mathbf{t}^n)^*$ for some $q\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^n)^*$.
\subsection{Local model of the neighborhood of a fixed point}\label{localmodel} Let $X=X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold of real dimension $4n$, and put $\omega=\omega_{\lambda}$ and $\mu=\mu_{\lambda}$. Denote by $X^*$ the maximal subset of $X$ on which $T^n$ acts freely. Let $p\in X$ be a fixed point of the $T^n$-action. Then we can see that \begin{eqnarray} H_{k_1}\cap H_{k_2}\cap \cdots\cap H_{k_n} = \{ \mu(p) \}\nonumber \end{eqnarray} for some $k_1,\cdots, k_n$, and we may suppose $k_i=i$ without loss of generality.
By the result of \cite{pedersen1988hyper}, the hyper-K\"ahler\ metric on $X^*$ can be described in terms of $\mu$, a $T^n$-connection on $X^*$, and some functions defined on $\mu(X^*)$. Using their result, we can see that $\omega$ decomposes into two parts as \begin{eqnarray} \omega = \omega_{\mathbb{H}^n} + \mu^*\eta \nonumber \end{eqnarray} on $U$, where $U$ is a $T^n$-invariant neighborhood of $p$, $\omega_{\mathbb{H}^n}$ has the same form as the standard hyper-K\"ahler\ structure on $\mathbb{H}^n$, and $\eta\in \Omega^2(\mu(U))$. Hence we have the following. \begin{prop} Let $(X,\omega,\mu)$ and $p$ be as above. There are a $T^n$-invariant neighborhood $U\subset X$ of $p$, a $T^n$-equivariant diffeomorphism $F:U \to B_{\mathbb{H}^n}(\varepsilon)$ and $\eta\in \Omega^2(\mu(U))$ which satisfy \begin{eqnarray}
\omega|_U &=& F^*\omega_{\mathbb{H}^n}+ \mu^*\eta, \nonumber\\
\mu_n|_{B_{\mathbb{H}^n}(\varepsilon)} \circ F &=& \mu|_U - \mu(p) \nonumber \end{eqnarray}
where $B_{\mathbb{H}^n}(\varepsilon)=\{ x\in \mathbb{H}^n;\ \| x\| < \varepsilon\}$, and $\omega_{\mathbb{H}^n}$ is the standard hyper-K\"ahler\ structure on $\mathbb{H}^n$. \label{3.2} \end{prop}
\section{Characterizing angles}\label{sec4} \subsection{Calabi-Yau case} For the desingularization of special Lagrangian immersions which intersect transversely at a point, one considers the characterizing angles introduced by Lawlor \cite{lawlor1989angle}.
Let $(M,J,\omega)$ be a K\"ahler manifold, where $J$ is a complex structure and $\omega$ is a K\"ahler form. Suppose that there is a Lagrangian immersion $\iota : L \to M$ such that $\iota$ is an embedding on $L\backslash \{p_+,p_-\}$ and $\iota(L)$ intersects itself transversely at $\iota(p_+) = \iota(p_-) = p \in M$. We do not assume that $L$ is connected, and the orientation of $L$ is fixed. \begin{thm}[\cite{joyce2003special}\cite{joyce2004special}] Let $(J_0,\omega_0)$ be the standard K\"ahler structure on $\mathbb{C}^m$. There exists a linear map $v:T_pM\to \mathbb{C}^m$ satisfying the following conditions: (i) $v$ is a $\mathbb{C}$-linear isomorphism preserving the K\"ahler forms, (ii) there is $\varphi =(\varphi_1, \cdots, \varphi_m)\in \mathbb{R}^m$ which satisfies $0<\varphi_1\le \cdots\le \varphi_m<\pi$ and \begin{eqnarray} v\circ\iota_* (T_{p_+} L) &=& \mathbb{R}^m = \{ (t_1,\cdots, t_m)\in \mathbb{C}^m;\ t_i\in\mathbb{R} \},\nonumber\\ v\circ\iota_* (T_{p_-} L) &=& \mathbb{R}^m_{\varphi}=\{ (t_1 e^{\sqrt{-1}\varphi_1},\cdots, t_m e^{\sqrt{-1}\varphi_m})\in \mathbb{C}^m;\ t_i\in\mathbb{R} \}.\nonumber \end{eqnarray} (iii) $v$ maps the orientation of $\iota_* (T_{p_+} L)$ to the standard orientation of $\mathbb{R}^m$. Moreover, $\varphi_1, \cdots, \varphi_m$ and the orientation of $\mathbb{R}^m_{\varphi}$ induced by $v$ and $\iota_* (T_{p_-} L)$ do not depend on the choice of $v$. \label{joyce} \end{thm} \begin{proof} Choose a $\mathbb{C}$-linear map $v_0:T_pM\to \mathbb{C}^m$ which preserves the K\"ahler metrics and maps the orientation of $\iota_* (T_{p_+} L)$ to that of $\mathbb{R}^m$. Then $V_{\pm}:=v_0\circ\iota_* (T_{p_{\pm}} L)$ are Lagrangian subspaces of $\mathbb{C}^m$. It is well known that any Lagrangian subspace of $\mathbb{C}^m$ can be written as $g\cdot\mathbb{R}^m$ for some $g\in U(m)$, where $\mathbb{R}^m\subset \mathbb{C}^m$ is the standard real form of $\mathbb{C}^m$. Therefore there are $g_{\pm}\in U(m)$ such that $g_+\cdot V_+ = g_-\cdot V_- = \mathbb{R}^m$. 
We may choose $g_+$ so that it preserves the orientations of $V_+$ and $\mathbb{R}^m$. Once we find such $g_\pm$, we may replace them by $h_\pm g_\pm$ for any $h_+\in SO(m)$ and $h_-\in O(m)$, respectively. Now, let $v:= h_+ g_+ v_0$. Then we have $v\circ\iota_* (T_{p_+} L) = \mathbb{R}^m$ and \begin{eqnarray} v\circ\iota_* (T_{p_-} L) &=& h_+ g_+ V_- = (h_+ g_+g_-^{-1}h_-^{-1})h_-g_- V_- \nonumber\\ &=& (h_+ g_+g_-^{-1}h_-^{-1}) \mathbb{R}^m. \nonumber \end{eqnarray} Accordingly, it suffices to show that $h_\pm \in O(m)$ can be chosen so that $h_+ g_+g_-^{-1}h_-^{-1}$ is a diagonal matrix. Put $P= g_+g_-^{-1} \in SU(m)$. Since ${}^tPP$ is a unitary symmetric matrix, it can be diagonalized by some $Q\in O(m)$; that is, ${}^tQ{}^tPP Q = {\rm diag}(e^{\sqrt{-1}\theta_1},\cdots,e^{\sqrt{-1}\theta_m})$ holds for some $0\le \theta_1\le \cdots \le\theta_m <2\pi$. Note that $Q$ can be chosen so that either ${\rm det}(Q) = 1$ or ${\rm det}(Q) = -1$. If we put $R:= P Q\, {\rm diag}(e^{-\sqrt{-1}\theta_1/2},\cdots,e^{-\sqrt{-1}\theta_m/2}) \in U(m)$, then ${}^tR=R^*=R^{-1}$ holds, hence $R$ is contained in $O(m)$. We determine the value of ${\rm det}(Q)$ so that ${\rm det}(R) = 1$. Hence the assertion follows by putting $h_+ = R^{-1}$, $h_- = Q^{-1}$ and $\varphi_i = \theta_i /2$. Here, no $\varphi_i$ equals $0$, since $V_+$ and $V_-$ intersect transversely.
Next we show the uniqueness of $\varphi_1\le \cdots \le\varphi_m$ and of the orientation of $\mathbb{R}^m_{\varphi}$. Assume that we have another $\hat{v}:T_pM\to \mathbb{C}^m$ satisfying $(i)(ii)(iii)$, and suppose \begin{eqnarray} \hat{v}\circ\iota_* (T_{p_-} L) = \{ (t_1 e^{\sqrt{-1}\hat{\varphi}_1},\cdots, t_m e^{\sqrt{-1}\hat{\varphi}_m})\in \mathbb{C}^m;\ t_i\in\mathbb{R} \}\nonumber \end{eqnarray} holds for some $0<\hat{\varphi}_1 \le \cdots \le \hat{\varphi}_m < \pi$. If we put $\hat{g}:=\hat{v}v^{-1}$, then $\hat{g}$ is in $U(m)$ and preserves the subspace $\mathbb{R}^m\subset\mathbb{C}^m$, hence $\hat{g} \in O(m)$. Moreover $\hat{g}\in SO(m)$ holds since $v$ and $\hat{v}$ satisfy $(iii)$. Furthermore $\hat{g}(\mathbb{R}^m_{\varphi}) = \mathbb{R}^m_{\hat{\varphi}}$ also holds; consequently we can see that \begin{eqnarray} G:= {\rm diag}(e^{-\sqrt{-1}\hat{\varphi}_1},\cdots,e^{-\sqrt{-1}\hat{\varphi}_m})\cdot \hat{g}\cdot {\rm diag}(e^{\sqrt{-1}\varphi_1},\cdots,e^{\sqrt{-1}\varphi_m})\nonumber \end{eqnarray} is a real matrix, hence we can deduce that \begin{eqnarray} \hat{g}\cdot {\rm diag}(e^{2\sqrt{-1}\varphi_1},\cdots,e^{2\sqrt{-1}\varphi_m})\cdot \hat{g}^{-1} = {\rm diag}(e^{2\sqrt{-1}\hat{\varphi}_1},\cdots,e^{2\sqrt{-1}\hat{\varphi}_m}) \nonumber \end{eqnarray} from $\overline{G}=G$. Thus we obtain $e^{2\sqrt{-1}\varphi_i}=e^{2\sqrt{-1}\hat{\varphi}_i}$ for all $i=1,\cdots,m$, which implies $\varphi_i=\hat{\varphi}_i$, since $\varphi_i,\hat{\varphi}_i$ are taken from $(0,\pi)$. Now we have two orientations on $\mathbb{R}^m_{\varphi}$, induced from $v$ and $\hat{v}$ respectively. These coincide since ${\rm det}(G) = {\rm det}(\hat{g}) = 1$. \end{proof}
Here, $\varphi=(\varphi_1, \cdots, \varphi_m)$ is called the characterizing angles between $(L,p_+)$ and $(L,p_-)$. In the above situation, assume that there is a holomorphic volume form $\Omega$ on $M$ satisfying $\omega^m/m!= (-1)^{m(m-1)/2}(\sqrt{-1}/2)^m\Omega\wedge\overline{\Omega}$, where $m$ is the complex dimension of $M$. Let $\Omega_0:=dz_1\wedge \cdots\wedge dz_m$ be the standard holomorphic volume form on $\mathbb{C}^m$, and assume that $\iota : L\to M$ is a special\ Lagrangian\ immersion. Then there exists $v:T_pM\to \mathbb{C}^m$ as in Theorem \ref{joyce}. By condition $(ii)$, we can see that $v^*\Omega_0 = \Omega_p$.
Since both $\iota_* (T_{p_\pm} L)$ are special Lagrangian subspaces, there is a positive integer $k \in\{1,2,\cdots, m-1\}$ such that $\varphi_1+ \cdots + \varphi_m =k\pi$ holds. Then the intersection point $p\in M $ is said to be of type $k$. Note that the type depends on the order of $p_+,p_-$: if we take the opposite order, the characterizing angles become $\pi -\varphi_m,\cdots,\pi -\varphi_1$ and the type becomes $m-k$.
\subsection{Hyper-K\"ahler\ case} An irreducible decomposition of the $T^n$-action on $\mathbb{H}^n$ is given by \begin{eqnarray} \mathbb{H}^n = \bigoplus_{i=1}^n Z_i \oplus \bigoplus_{i=1}^n W_i, \nonumber \end{eqnarray} where $Z_i$ and $W_i$ are the complex $1$-dimensional representations of $T^n$ defined by \begin{eqnarray} (g_1,\cdots, g_n)z_i:= g_i z_i,\quad (g_1,\cdots, g_n)w_i:= g_i^{-1} w_i\nonumber \end{eqnarray} for $(g_1,\cdots, g_n)\in T^n$ and $z_i\in Z_i,\ w_i\in W_i$. Note that $Z_i$ and $W_i$ are not isomorphic as $\mathbb{C}$-representations, but complex conjugation restricted to $Z_i$ gives an isomorphism of $\mathbb{R}$-representations $Z_i\to W_i$. For $(\alpha,\beta)\in S^3 \subset \mathbb{C}^2$, put
$h(\alpha,\beta) := (|\alpha|^2 - |\beta|^2, 2{\rm Im}(\alpha \beta), -2{\rm Re}(\alpha \beta)) \in \mathbb{R}^3$. Then $h:S^3\to S^2$ is the Hopf fibration, and the $S^1$-action is given by $e^{\sqrt{-1}t}\cdot(\alpha,\beta) = (e^{\sqrt{-1}t}\alpha,e^{-\sqrt{-1}t}\beta)$.
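One can check directly that $h$ maps $S^3$ to $S^2$ and is invariant under this $S^1$-action: for $(\alpha,\beta)\in S^3$ we have \begin{eqnarray} \| h(\alpha,\beta)\|^2 = (|\alpha|^2-|\beta|^2)^2 + 4|\alpha\beta|^2 = (|\alpha|^2+|\beta|^2)^2 = 1,\nonumber \end{eqnarray} and $h(e^{\sqrt{-1}t}\alpha, e^{-\sqrt{-1}t}\beta) = h(\alpha,\beta)$, since both $|\alpha|^2-|\beta|^2$ and the product $\alpha\beta$ are unchanged by the action.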
Now we put \begin{eqnarray} V_i(y) := \{(\alpha z_i, \beta \overline{z_i})\in Z_i\oplus W_i;\ z_i\in Z_i\}\nonumber \end{eqnarray} for $y\in S^2$, where $(\alpha,\beta)\in S^3$ is taken so that $h(\alpha,\beta) = y$. Then $V_i(y)$ does not depend on the choice of $(\alpha,\beta)$, and $V_i(y)$ is a sub-$\mathbb{R}$-representation of $Z_i\oplus W_i$. Conversely, any nontrivial sub-$\mathbb{R}$-representation of $Z_i\oplus W_i$ is obtained in this way. Note that $V_i(y) = V_i(y')$ holds if and only if $y=y'$.
\begin{prop} Let $V\subset \mathbb{H}^n$ be a $\sigma$-holomorphic\ Lagrangian\ subspace which is closed under the $T^n$ action. Then we have \begin{eqnarray} V = \bigoplus_{i=1}^n V_i(\varepsilon_i \sigma) \nonumber \end{eqnarray} for some $\varepsilon_i = \pm 1$, and its hyper-K\"ahler\ moment image is given by \begin{eqnarray} \mu_n(V) = \{ \sigma\otimes (x_1,\cdots, x_n)\in {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*;\ \varepsilon_i x_i\ge 0 \}. \nonumber \end{eqnarray} \label{model} \end{prop}
\begin{proof} Take $\sigma',\sigma''\in S^2$ so that $(\sigma,\sigma',\sigma'')$ is an orthonormal basis of $\mathbb{R}^3$ with positive orientation. Let $I_1,I_2,I_3$ be the standard basis of the pure imaginary part of $\mathbb{H}$, and let $I^\sigma,I^{\sigma'},I^{\sigma''}$ be the corresponding hyper-K\"ahler\ rotation. Let $V\subset \mathbb{H}^n$ be a $\sigma$-holomorphic Lagrangian subspace which is closed under the $T^n$ action. Since $V$ is Lagrangian with respect to $\omega^{\sigma'}$, we have an orthogonal decomposition $\mathbb{H}^n = V \oplus I^{\sigma'} V$. Then $V$ and $I^{\sigma'} V$ are isomorphic as real representations of $T^n$, therefore $V$ is isomorphic to $\oplus_{i=1}^n Z_i$ as a real representation of $T^n$, and can be written as $V = \oplus_{i=1}^nV_i(y_i)$ for some $y_i\in S^2$ by Schur's Lemma. Here, each $y_i$ is determined uniquely. Next we calculate the restriction of $\omega$ to $V$. Let $(z_1,\cdots, z_n,w_1,\cdots, w_n)$ be the holomorphic coordinates on $\mathbb{H}^n$, where $z_i\in Z_i \cong \mathbb{C}$ and $w_i \in W_i \cong \mathbb{C}$. Then $\omega$ can be written as \begin{eqnarray} \omega_1 &=& \frac{\sqrt{-1}}{2}\sum_{i=1}^n(dz_i\wedge d\overline{z_i} + dw_i\wedge d\overline{w_i}),\nonumber\\ \omega_2+\sqrt{-1}\omega_3 &=& \sum_{i=1}^n dz_i\wedge dw_i.\nonumber \end{eqnarray} Take $(\alpha_i,\beta_i)\in S^3$ such that $h(\alpha_i,\beta_i) = y_i$. Then $P,Q\in V=\oplus_{i=1}^nV_i(y_i)$ can be written as \begin{eqnarray} P &=& (\alpha_1p_1,\cdots, \alpha_n p_n, \beta_1\overline{p_1},\cdots, \beta_n\overline{p_n}),\nonumber\\ Q &=& (\alpha_1q_1,\cdots, \alpha_n q_n, \beta_1\overline{q_1},\cdots, \beta_n\overline{q_n})\nonumber \end{eqnarray} for some $p_i, q_i\in\mathbb{C}$. Then we obtain \begin{eqnarray}
\omega_1(P,Q) &=& \sum_{i=1}^n(|\alpha_i|^2 - |\beta_i|^2 ){\rm Im}(\overline{p_i}q_i),\nonumber\\ (\omega_2+\sqrt{-1}\omega_3)(P,Q) &=& -2\sqrt{-1}\sum_{i=1}^n \alpha_i\beta_i{\rm Im}(\overline{p_i}q_i).\nonumber \end{eqnarray} Hence the $\sigma$-holomorphic Lagrangian condition for $V$ is equivalent to the condition that the vector \[ \left ( \begin{array}{c}
\sum_{i=1}^n(|\alpha_i|^2 - |\beta_i|^2 ){\rm Im}(\overline{p_i}q_i) \\ 2\sum_{i=1}^n {\rm Im}(\alpha_i\beta_i){\rm Im}(\overline{p_i}q_i) \\ -2\sum_{i=1}^n {\rm Re}(\alpha_i\beta_i){\rm Im}(\overline{p_i}q_i) \end{array} \right )\in \mathbb{R}^3 \] is orthogonal to $\sigma',\sigma''\in \mathbb{R}^3$ for any $p_i,q_i\in\mathbb{C}$. Thus every \[ y_i = \left ( \begin{array}{c}
|\alpha_i|^2 - |\beta_i|^2 \\ 2 {\rm Im}(\alpha_i\beta_i) \\ -2 {\rm Re}(\alpha_i\beta_i) \end{array} \right ) \] is equal to $\pm \sigma$ because $\{\sigma,\sigma',\sigma''\}$ is an orthonormal basis. \end{proof}
\begin{prop} Let $(X,\omega,\mu)$ and $p$ be as in Proposition \ref{3.2}. Let $L\subset X$ be a $\sigma$-holomorphic\ Lagrangian\ submanifold containing $p$, and assume that there exist a sufficiently small $r >0$ and $\varepsilon_i,\varepsilon'_i = \pm 1$ such that \begin{eqnarray} (\mu (L)-\mu(p))\cap B(r) &=& \sigma \otimes \{ x \in (\mathbf{t}^n)^*;\
\| x\| < r, \varepsilon'_i x_i \ge 0\},\nonumber\\ \mu_n (V) &=& \sigma \otimes \{ x \in (\mathbf{t}^n)^*;\ \varepsilon_i x_i \ge 0\}\nonumber \end{eqnarray}
hold, where $V= dF_p(T_p L)$ and $B(r) = \{ y\in{\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*; \| y\| <r\}$. Then $\varepsilon_i = \varepsilon'_i$ holds for every $i=1,\cdots, n$. \label{apex} \end{prop} \begin{proof} By the second equation of Proposition \ref{3.2}, we have $\mu (L)-\mu(p) = \mu_n(F(L))$. Since $T_0F(L) = dF_p(T_pL)$, we have $\mu_n (V) = \mu_n(T_0F(L))$. Now we have open neighborhoods $U_0\subset F(L)$ of $0$ and $U_1\subset T_0F(L)$ of $0$, and a diffeomorphism $f:U_0\to U_1$ such that $f(0)= 0$ and $df_0= {\rm id}$. Next we take a smooth map $\gamma:(-1,1) \to F(L)$ which satisfies $\gamma(0) = F(p) = 0$ and $\varepsilon'_i\mu_n^i(\gamma(t)) >0$ for $t\neq 0$.
Here $\mu_n^i$ is the $i$-th component of $\mu_n$. Since $\| f(x)-x\| = \mathcal{O}(\| x\|^2)$ and
$\mu_n(x + \delta x) = \mu_n(x) + \mathcal{O}(\| x\|\, \| \delta x\|)$ hold, we have \begin{eqnarray}
\mu_n(f\circ\gamma(t)) &=& \mu_n(\gamma(t) + \mathcal{O}(\| \gamma(t)\|^2))\nonumber\\
&=& \mu_n(\gamma(t)) + \mathcal{O}(\| \gamma(t)\|^3).\nonumber \end{eqnarray}
If we take $t$ sufficiently close to $0$, then $\| \gamma(t)\|$ is small while $\varepsilon'_i\mu_n^i(\gamma(t)) >0$; hence $\varepsilon'_i\mu_n^i(f\circ\gamma(t))$ is positive for small $t$, since $\mu_n$ is a quadratic polynomial. Since $\mu_n(f\circ\gamma(t)) \in \mu_n (V)$, $\varepsilon_i = \varepsilon'_i$ must hold. As $i$ was taken arbitrarily, $\varepsilon_i = \varepsilon'_i$ holds for every $i=1,\cdots,n$. \end{proof}
Let \begin{eqnarray} \sigma(\theta)=(0,\cos\theta,\sin\theta) \in S^2.\nonumber \end{eqnarray} Then every $\sigma(\theta)$-holomorphic Lagrangian submanifold is special Lagrangian if $n\theta \in \pi \mathbb{Z}$.
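This can be seen as follows. Completing $\sigma(\theta)$ to an orthonormal basis of $\mathbb{R}^3$ by $\sigma'=(1,0,0)$ and $\sigma''=(0,\sin\theta,-\cos\theta)$ (up to signs), a $\sigma(\theta)$-holomorphic Lagrangian submanifold $L$ satisfies \begin{eqnarray} \omega_1|_L = \left(\sin\theta\,\omega_2-\cos\theta\,\omega_3\right)|_L = 0.\nonumber \end{eqnarray} The second condition says ${\rm Im}\left(e^{-\sqrt{-1}\theta}(\omega_2+\sqrt{-1}\omega_3)\right)|_L = 0$, so $(\omega_2+\sqrt{-1}\omega_3)|_L = e^{\sqrt{-1}\theta}\beta$ for a real $2$-form $\beta$ on $L$, and hence ${\rm Im}(\omega_2+\sqrt{-1}\omega_3)^n|_L = \sin(n\theta)\,\beta^n$, which vanishes whenever $n\theta\in\pi\mathbb{Z}$.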
\begin{prop} Let $n\theta_{\pm} \in \pi\mathbb{Z}$ and $V_{\pm}$ be $T^n$-invariant $\sigma(\theta_{\pm})$-holomorphic Lagrangian subspaces of $\mathbb{H}^n$ given by \begin{eqnarray} V_+ := \bigoplus_{i=1}^n V_i( \sigma(\theta_+) ),\quad V_- := \bigoplus_{i=1}^n V_i( \sigma(\theta_-) ).\nonumber \end{eqnarray} Then the characterizing angles between $V_+$ and $V_-$ are given by $(\theta_{-} - \theta_{+})/2$ with multiplicity $2n$. \label{angle} \end{prop}
\begin{proof} Since $h(\sqrt{-1}/\sqrt{2},e^{\sqrt{-1} \theta_{\pm}} / \sqrt{2}) = \sigma(\theta_{\pm})$, we have \begin{eqnarray} V_{\pm}=\{(\frac{\sqrt{-1}}{\sqrt{2}}z_1, \frac{e^{\sqrt{-1} \theta_{\pm}}}{\sqrt{2}}\overline{z_1}, \cdots, \frac{\sqrt{-1}}{\sqrt{2}}z_n, \frac{e^{\sqrt{-1} \theta_{\pm}}}{\sqrt{2}}\overline{z_n}) \in \mathbb{H}^n;\ z_1,\cdots, z_n\in \mathbb{C} \}\nonumber \end{eqnarray} respectively. Put \[ A(\theta) := \frac{1}{\sqrt{2}} \left ( \begin{array}{ccc} -\sqrt{-1} & e^{-\sqrt{-1} \theta} \\ -1 & \sqrt{-1}e^{-\sqrt{-1} \theta} \end{array} \right ), \] and \[ g_+ := \left ( \begin{array}{ccc} A(\theta_{+}) & & \rm{O} \\
& \ddots & \\ \rm{O} & & A(\theta_{+}) \end{array} \right ),\quad g_- := \left ( \begin{array}{ccc} A(\theta_{-}) & & \rm{O} \\
& \ddots & \\ \rm{O} & & A(\theta_{-}) \end{array} \right ). \] Since $g_+V_+ = g_-V_- = \mathbb{R}^{2n}$ holds, the characterizing angles are the arguments of the square roots of the eigenvalues of ${}^tPP$, where $P = g_+g_-^{-1}$, by the proof of Theorem \ref{joyce}. Since \begin{eqnarray} {}^t(A(\theta_{+})A(\theta_{-})^{-1})A(\theta_{+})A(\theta_{-})^{-1} = e^{\sqrt{-1}(\theta_{-} - \theta_{+})}{\rm Id},\nonumber \end{eqnarray} the characterizing angles turn out to be $(\theta_{-} - \theta_{+})/2$ with multiplicity $2n$. \end{proof}
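As a consistency check with the discussion of types above (an observation added here), suppose $0<(\theta_- - \theta_+)/2<\pi$, so that the characterizing angles lie in the range of Theorem \ref{joyce}. Their sum is \begin{eqnarray} 2n\cdot \frac{\theta_- - \theta_+}{2} = n(\theta_- - \theta_+) \in \pi\mathbb{Z}\nonumber \end{eqnarray} by the assumption $n\theta_\pm\in\pi\mathbb{Z}$, in agreement with the fact that the characterizing angles at an intersection point of special Lagrangian submanifolds sum to $k\pi$ for the type $k$; here $m=2n$ and the type is $k=n(\theta_- - \theta_+)/\pi$.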
Now we consider the case where \begin{eqnarray} (M,J,\omega,\Omega)=(X(u,\lambda),I_{\lambda,1},\omega_{\lambda,1},(\omega_{\lambda,2} + \sqrt{-1} \omega_{\lambda,3})^n)\nonumber \end{eqnarray} and $L=L_+\sqcup L_- $, where $L_{\pm}$ are embedded as $\sigma(\theta_{\pm})$-holomorphic Lagrangian submanifolds, respectively, for some $\theta_{\pm}\in\mathbb{R}$. Denote by $\iota: L\to X(u,\lambda)$ the immersion. Assume that the image of $L$ is a $T^n$-invariant subset of $X(u,\lambda)$, and that $p$ is a fixed point of the torus action. In this subsection, we compute the characterizing angles between $(L,p_+)$ and $(L,p_-)$ in this situation.
Take $F:U\to B_{\mathbb{H}^n}(\varepsilon)$ as in Proposition \ref{3.2}. Then $dF_p:T_pX(u,\lambda) \to \mathbb{H}^n$ is $T^n$-equivariant
and satisfies $dF_p^*(\omega_{\mathbb{H}^n}|_0) = \omega|_p$ by the first equation in Proposition \ref{3.2}, since $d\mu_p=0$. Here, the $T^n$-action on $T_pX(u,\lambda)$ is induced from the torus action on $X(u,\lambda)$ since $p$ is fixed by the action. Then $V_{\pm}:=dF_p\circ\iota_*(T_{p_{\pm}}L)$ are $\sigma(\theta_{\pm})$-holomorphic Lagrangian subspaces of $\mathbb{H}^n$, respectively. Moreover, $V_{\pm}$ are closed under the $T^n$-action.
\begin{prop} Under the above setting, assume that there is a sufficiently small $r>0$ such that \begin{eqnarray} (\mu (L_\pm)-\mu(p))\cap B(r) &=& \sigma(\theta_{\pm}) \otimes \{ x \in (\mathbf{t}^n)^*;\
\| x\| < r,\ x_i \ge 0\}\nonumber \end{eqnarray} holds, respectively. Then the characterizing angles between $(L,p_+)$ and $(L,p_-)$ are given by $(\theta_- - \theta_+)/2$ with multiplicity $2n$. \end{prop} \begin{proof} By combining Propositions \ref{model} and \ref{apex}, we can see that \begin{eqnarray} V_\pm = \bigoplus_{i=1}^n V_i( \sigma(\theta_\pm) )\nonumber \end{eqnarray} holds, respectively. Thus we have the assertion by Proposition \ref{angle}. \end{proof}
\section{Proof of Theorem \ref{gluing}}\label{sec5} In this section we prove Theorem \ref{gluing}. Although Theorem \ref{gluing} essentially follows from Theorem 9.7 of \cite{joyce2003special}, we need some additional arguments about quivers. Let $Q = (\mathcal{V},\mathcal{E},s,t)$ be a quiver, that is, $\mathcal{V}$ is a finite set of vertices, $\mathcal{E}$ is a finite set of directed edges, and $s,t:\mathcal{E} \to \mathcal{V}$ are maps. Here, $s(h)$ and $t(h)$ denote the source and the target of $h\in\mathcal{E}$, respectively. The quiver is said to be connected if any two vertices are connected by some edges. Given the quiver, we have operators \begin{eqnarray} \partial &:& \mathbb{R}^\mathcal{E} \to \mathbb{R}^\mathcal{V}\nonumber\\ \partial^* &:& \mathbb{R}^\mathcal{V} \to \mathbb{R}^\mathcal{E}\nonumber \end{eqnarray} defined by \begin{eqnarray} \partial (\sum_{h\in \mathcal{E}}A_h\cdot h) &:=& \sum_{h\in \mathcal{E}}A_h\cdot (s(h) - t(h)),\nonumber\\ \partial^*(\sum_{k\in \mathcal{V}}x_k\cdot k) &:=& \sum_{h\in \mathcal{E}}(x_{s(h)} - x_{t(h)})\cdot h.\nonumber \end{eqnarray} Here, $\mathbb{R}^\mathcal{E}$ and $\mathbb{R}^\mathcal{V}$ are the free $\mathbb{R}$-modules generated by the elements of $\mathcal{E}$ and $\mathcal{V}$, respectively. Since $\partial^*$ is the adjoint of $\partial$, we have \begin{eqnarray} \mathbf{h}_0(Q) - \mathbf{h}_1(Q) = \#\mathcal{V} - \#\mathcal{E},\label{euler} \end{eqnarray} where $\mathbf{h}_0(Q) = {\rm dim\ Ker}\ \partial^*$ and $\mathbf{h}_1(Q) = {\rm dim\ Ker}\ \partial$. Note that $\mathbf{h}_0(Q)$ is equal to the number of connected components of $Q$.
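For a concrete illustration of (\ref{euler}) (a toy example, not used in the proofs), let $\mathcal{V}=\{1,2\}$ and $\mathcal{E}=\{h_1,h_2\}$ with $s(h_1)=s(h_2)=1$ and $t(h_1)=t(h_2)=2$. Then \begin{eqnarray} \partial(A_1\cdot h_1 + A_2\cdot h_2) = (A_1+A_2)\cdot(1-2),\nonumber \end{eqnarray} so ${\rm Ker}\ \partial = \{A_1=-A_2\}$ and $\mathbf{h}_1(Q)=1$, while ${\rm Ker}\ \partial^* = \{x_1=x_2\}$ gives $\mathbf{h}_0(Q)=1$; indeed $\mathbf{h}_0(Q)-\mathbf{h}_1(Q) = 0 = \#\mathcal{V}-\#\mathcal{E}$. Note that $(\mathbb{R}_{>0})^\mathcal{E}\cap{\rm Ker}(\partial)$ is empty for this $Q$ and $\mathcal{E}$ is not covered by cycles, while reversing the orientation of $h_2$ yields the cycle $\{h_1,h_2\}$ and the element $h_1+h_2\in(\mathbb{R}_{>0})^\mathcal{E}\cap{\rm Ker}(\partial)$, consistent with Lemma \ref{quiver1} below.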
We need the following lemmas for the proof of Theorem \ref{gluing}.
\begin{lem} Let $Q$ be as above. The set $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty if and only if $\mathcal{E}$ is covered by cycles. \label{quiver1} \end{lem} \begin{proof} Suppose that $\mathcal{E} = \bigcup_{k=1}^N S_k$ holds for some cycles $S_1,\cdots, S_N$. For a subset $S\subset \mathcal{E}$, define $\chi_S\in \mathbb{R}^\mathcal{E}$ by \[ (\chi_{S})_h := \left\{ \begin{array}{cc} 1 & (h\in S), \\ 0 & (h\notin S). \end{array} \right. \] Then $\sum_{k=1}^N \chi_{S_k}$ is contained in $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$.
Conversely, assume that there exists $A = \sum_{h\in\mathcal{E}}A_h\cdot h\in (\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$, and take $h_0\in \mathcal{E}$ arbitrarily. Since $\partial(A) = 0$, we have \begin{eqnarray} \sum_{h\in s^{-1}(t(h_0))}A_h = \sum_{h\in t^{-1}(t(h_0))}A_h \ge A_{h_0}>0.\nonumber \end{eqnarray} Hence $s^{-1}(t(h_0))$ is nonempty, and we can take $h_1\in s^{-1}(t(h_0))$. By repeating this procedure, we obtain $h_0,h_1,\cdots,h_l$ such that $t(h_k)=s(h_{k+1})$ for $k=0,\cdots,l-1$. We stop this procedure when $t(h_l) = s(h_k)$ holds for some $k=0,\cdots,l$. Since $\mathcal{V}$ is finite, this procedure always stops for some $l<+\infty$. Then we have a nonempty cycle $S_0 = \{ h_k,h_{k+1},\cdots,h_l \}$. If $h_0$ is contained in $S_0$, then we have the assertion; hence suppose $h_0\notin S_0$. Put $A_0:=\min_{h\in S_0}A_h >0$, \begin{eqnarray} P_0 &:=& \{ h\in S_0 ;\ A_h = A_0\},\nonumber\\ \mathcal{E}_1 &:=& \mathcal{E}\backslash P_0.\nonumber \end{eqnarray} Then we have a new quiver $(\mathcal{V},\mathcal{E}_1,s,t)$ and the boundary operator $\partial_1:\mathbb{R}^{\mathcal{E}_1} \to \mathbb{R}^\mathcal{V}$. Now, put $A^{(1)}:=A- A_0\chi_{S_0} \in \mathbb{R}^{\mathcal{E}_1}$. Then each component of $A^{(1)}$ is positive. Moreover we can see that \begin{eqnarray} \partial_1 (A^{(1)}) &=& \sum_{h\in \mathcal{E} \backslash S_0}A_h (s(h) - t(h)) + \sum_{h\in S_0\backslash P_0} (A_h - A_0)(s(h) - t(h)) \nonumber\\ &=& \sum_{h\in \mathcal{E}}A_h (s(h) - t(h)) - \sum_{h\in S_0}A_h (s(h) - t(h))\nonumber\\ &\quad & \quad + \sum_{h\in S_0} (A_h - A_0)(s(h) - t(h)) \nonumber\\ &=& \partial (A) - \sum_{h\in S_0}A_0(s(h) - t(h)) \nonumber\\ &=& - A_0\partial (\chi_{S_0}) = 0,\nonumber \end{eqnarray} thus $A^{(1)}$ is contained in $(\mathbb{R}_{>0})^{\mathcal{E}_1}\cap {\rm Ker}(\partial_1)$. Then we can apply the above procedure to $h_0\in \mathcal{E}_1$ and construct the cycles $S_k$ inductively. 
Since $\mathcal{E}$ is finite and $\#\mathcal{E} > \#\mathcal{E}_1> \cdots $ holds, there is $k_0$ such that $h_0\in S_{k_0}$. \end{proof}
\begin{lem} Let $Q=(\mathcal{V},\mathcal{E},s,t)$ be as above. Then $Q'=(\mathcal{V},\mathcal{E}\backslash \{ h\},s,t)$ satisfies either $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q) + 1,\mathbf{h}_1(Q))$ or $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q),\mathbf{h}_1(Q) - 1)$ for any $h\in \mathcal{E}$. \label{quiver2} \end{lem} \begin{proof} Put \begin{eqnarray} \mathcal{E}_1 &:=& \{ h\in \mathcal{E}; A_h = 0\ {\rm for\ any\ }A\in {\rm Ker}(\partial)\},\nonumber\\ \mathcal{E}_2 &:=& \mathcal{E}\backslash \mathcal{E}_1.\nonumber \end{eqnarray} First of all, we show that there exists $A\in (\mathbb{R}_{\neq 0})^{\mathcal{E}_2}\cap {\rm Ker}(\partial_2)$, where $\partial_2:\mathbb{R}^{\mathcal{E}_2} \to \mathbb{R}^\mathcal{V}$ is the restriction of $\partial$ to $\mathbb{R}^{\mathcal{E}_2} \subset \mathbb{R}^\mathcal{E}$. By the definition of $\mathcal{E}_2$, it is easy to see that ${\rm Ker}(\partial_2) = {\rm Ker}(\partial)$ holds, so we can take $A^h=\sum_{h'\in \mathcal{E}_2}A^h_{h'}\cdot h'\in {\rm Ker}(\partial_2)$ for every $h\in \mathcal{E}_2$ such that $A^h_h \neq 0$. Since $\mathcal{E}_2$ is a finite set, we may write $\mathcal{E}_2 = \{ 1,\cdots, \#\mathcal{E}_2\}$. Let $h_1$ be the minimum number such that $A^1_{h_1} = 0$. Then put $B^2:= A^1 + a_1A^{h_1}$ for some $a_1\neq 0$, choosing $a_1$ sufficiently close to $0$ so that $B^2_h\neq 0$ for all $h\le h_1$. Defining $B^k$ inductively in this way, we finally obtain $A = B^N\in (\mathbb{R}_{\neq 0})^{\mathcal{E}_2}\cap {\rm Ker}(\partial_2)$ for some $N$.
Since $\mathbf{h}_i(Q)$ is independent of the orientation of each edge in $\mathcal{E}$, we can replace $h\in \mathcal{E}_2$ by the edge with the opposite orientation if $A_h<0$. Consequently, we may suppose $A_h >0$ for any $h\in \mathcal{E}_2$ without loss of generality. Hence $\mathcal{E}_2$ is covered by cycles by Lemma \ref{quiver1}.
Next we consider $Q'=(\mathcal{V},\mathcal{E}'=\mathcal{E}\backslash \{ h\},s,t)$. Let $\partial':=\partial|_{\mathbb{R}^{\mathcal{E}'}}$ and let $(\partial')^*:\mathbb{R}^\mathcal{V}\to\mathbb{R}^{\mathcal{E}'}$ be the adjoint operator. If $h\in \mathcal{E}_1$, then we can see ${\rm Ker}(\partial) = {\rm Ker}(\partial')$. In this case $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q) + 1,\mathbf{h}_1(Q))$ holds by equation (\ref{euler}). If $h\in \mathcal{E}_2$, then $h$ is contained in a cycle $S\subset \mathcal{E}_2$. Then $s(h)$ and $t(h)$ are still connected in $\mathcal{E}'$, so the number of connected components of $Q'$ is equal to that of $Q$. Thus we have $(\mathbf{h}_0(Q'),\mathbf{h}_1(Q')) = (\mathbf{h}_0(Q),\mathbf{h}_1(Q) - 1)$. \end{proof}
Let $L_\alpha$ be a compact connected smooth special\ Lagrangian\ submanifold of the Calabi-Yau manifold $(M,J,\omega,\Omega)$ with ${\rm dim}_{\mathbb{C}}M=m$ for every $\alpha\in\mathcal{V}$. For every $h\in\mathcal{E}$, suppose $L_{s(h)}$ and $L_{t(h)}$ intersect transversely at $p_h\in L_{s(h)} \cap L_{t(h)}$, where $p_h$ is an intersection point of type $1$. Assume that $p_h\neq p_{h'}$ if $h\neq h'$, and that $\bigcup_{\alpha\in\mathcal{V}}L_\alpha\backslash\{ p_h;h\in\mathcal{E}\}$ is embedded in $M$. Let $L_{Q}$ be the differentiable manifold obtained by taking the connected sum of $L_{s(h)}$ and $L_{t(h)}$ at $p_h$ for every $h\in \mathcal{E}$. By Theorem 9.7 of \cite{joyce2003special}, if $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty, there exists a family of compact smooth special\ Lagrangian\ submanifolds $\{\tilde{L}_t\}_{0<t<\delta}$ which converges to $\bigcup_{\alpha\in\mathcal{V}}L_\alpha$ as $t\to 0$ in the sense of currents. Here, $\tilde{L}_t$ is diffeomorphic to $L_{Q}$.
By Lemma \ref{quiver1}, the assumption that $(\mathbb{R}_{>0})^\mathcal{E}\cap {\rm Ker}(\partial)$ is nonempty can be replaced by the assumption that $\mathcal{E}$ is covered by cycles. Consequently, the proof of Theorem \ref{gluing} is completed by the next proposition.
\begin{prop} If $Q=(\mathcal{V},\mathcal{E},s,t)$ is a connected quiver, then $L_{Q}$ is diffeomorphic to \begin{eqnarray} L_1\# L_2\# \cdots \# L_A\# N (S^1\times S^{m-1}),\nonumber \end{eqnarray} where $\mathcal{V}=\{ 1,\cdots, A\}$, $N={\rm dim}\ {\rm Ker}(\partial)$, and $\# N (S^1\times S^{m-1})$ denotes the connected sum with $N$ copies of $S^1\times S^{m-1}$,
and the orientation of each $L_\alpha$ is determined by ${\rm Re}\Omega|_{L_\alpha}$. \label{topology} \end{prop} \begin{proof} Let $Q=(\mathcal{V},\mathcal{E},s,t)$ be a connected quiver
and $Q'=(\mathcal{V},\mathcal{E}',s|_{\mathcal{E}'},t|_{\mathcal{E}'})$, where $\mathcal{E}'=\mathcal{E}\backslash \{ h\}$. Let $\mathcal{E}_1,\mathcal{E}_2$ be as in the proof of Lemma \ref{quiver2}.
If $h\in \mathcal{E}_1$, then the quiver $Q'$ consists of two connected components
$Q_1 = (\mathcal{W}_1,\mathcal{F}_1,s|_{\mathcal{F}_1},t|_{\mathcal{F}_1})$
and $Q_2 = (\mathcal{W}_2,\mathcal{F}_2,s|_{\mathcal{F}_2},t|_{\mathcal{F}_2})$, where $\mathcal{V} = \mathcal{W}_1\sqcup \mathcal{W}_2$ and $\mathcal{F}_i = \mathcal{E}'\cap (s^{-1}(\mathcal{W}_i)\cup t^{-1}(\mathcal{W}_i))$. Then we can see that $L_Q = L_{Q_1}\# L_{Q_2}$.
If $h\in \mathcal{E}_2$, then $Q'=(\mathcal{V},\mathcal{E}',s|_{\mathcal{E}'},t|_{\mathcal{E}'})$ is also connected, and $L_Q$ is constructed from $L_{Q'}$ in the following way. Take any distinct points $p_+,p_-\in L_{Q'}$ and neighborhoods $B_{p_\pm}\subset L_{Q'}$ so that $B_{p_+}\cap B_{p_-}$ is empty and $B_{p_\pm}$ are diffeomorphic to the Euclidean unit ball. Now we have polar coordinates $(r_\pm,\Theta_\pm) \in B_{p_\pm}\backslash \{ p_\pm\}$, where $r_\pm \in (0,1)$ is the distance from $p_\pm$, and $\Theta_\pm\in S^{m-1}$. By taking the diffeomorphism $\psi: (r,\Theta) \mapsto (1-r,\varphi (\Theta))$, we can glue $B_{p_+}\backslash \{ p_+\}$ and $B_{p_-}\backslash \{ p_-\}$, and obtain $L_Q$. Here, $\varphi:S^{m-1} \to S^{m-1}$ is a diffeomorphism which reverses the orientation. Note that the differentiable structure of $L_Q$ is independent of the choice of $p_\pm$, $B_{p_\pm}$ and $\varphi$. Therefore we may suppose that $p_+$ and $p_-$ are contained in an open subset $U\subset L_{Q'}$, where $U= B(0,10)$ and
$B_{p_\pm} = B(\pm 5, 1)$, respectively. Here $B(x,r) = \{ x'\in\mathbb{R}^m;\ \| x' - x\| < r\}$. Then $(U\backslash \{ p_+,p_-\} )/\psi$ is diffeomorphic to $S^1\times S^{m-1}\backslash \{ {\rm pt.}\}$, hence $L_Q$ is diffeomorphic to $L_{Q'}\# (S^1\times S^{m-1})$.
By repeating these two types of procedures, we finally obtain the quiver $Q''=(\mathcal{V},\emptyset,s,t)$, for which $(\mathbf{h}_0(Q''),\mathbf{h}_1(Q'')) = (\#\mathcal{V},0)$. By counting $(\mathbf{h}_0,\mathbf{h}_1)$ at each step, it turns out that we must apply the former procedure $\#\mathcal{V} -1$ times and the latter procedure $\mathbf{h}_1(Q)$ times until we reach $Q''$. Therefore we obtain the assertion by applying the procedures inductively. \end{proof}
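The counting in the proof above can be made explicit. Write $a$ for the number of removals of edges in $\mathcal{E}_1$ and $b$ for the number of removals of edges in $\mathcal{E}_2$; these symbols are introduced only for this bookkeeping. Since $Q$ is connected, $\mathbf{h}_0(Q)=1$, and by Lemma \ref{quiver2} each step changes $(\mathbf{h}_0,\mathbf{h}_1)$ by $(+1,0)$ or $(0,-1)$, so \begin{eqnarray} (\mathbf{h}_0(Q),\mathbf{h}_1(Q)) + a(1,0) + b(0,-1) = (\#\mathcal{V},0),\nonumber \end{eqnarray} hence $a = \#\mathcal{V} - 1$ and $b = \mathbf{h}_1(Q)$.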
\section{The construction of compact special Lagrangian submanifolds in $X(u,\lambda)$}\label{sec6} Here we construct examples of compact special Lagrangian submanifolds in $X(u,\lambda)$ using Theorem \ref{gluing}. In Subsection \ref{5.1} we construct a one-parameter family of compact special Lagrangian submanifolds which degenerates to the union $\bigcup_i L_i$ of some $\sigma_i$-holomorphic Lagrangian submanifolds $L_i$.
Let $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold. \begin{definition} {\rm We call $\triangle \subset {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$} a $\sigma$-Delzant polytope {\rm if it is a compact convex set in \begin{eqnarray} V(q,\sigma):=q + \sigma\otimes (\mathbf{t}^n)^* \nonumber \end{eqnarray} for some $q\in {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$, and the boundary of $\triangle$ in $V(q,\sigma)$ satisfies \begin{eqnarray} \partial \triangle = \triangle \cap \bigg( \bigcup_{k=1}^NH_k\bigg).\nonumber \end{eqnarray} } \end{definition} It is easy to see that $L_{\triangle} := \mu_{\lambda}^{-1}(\triangle)$ is $\sigma$-holomorphic\ Lagrangian\ if it is smooth. Since the $T^n$-action preserves $L_{\triangle}$,
we may regard $(L_{\triangle},I_{\lambda,1}^\sigma|_{L_{\triangle}})$ as a toric variety,
equipped with a K\"ahler form $\omega_{\lambda,1}^\sigma|_{L_{\triangle}}$ and a K\"ahler moment map $\mu_{\lambda,1}^\sigma: L_{\triangle} \to (\mathbf{t}^n)^*$. In particular, $L_{\triangle}$ is an oriented manifold whose orientation is induced naturally from $I_{\lambda,1}^\sigma$. We denote by $\overline{L}_{\triangle}$ the oriented manifold diffeomorphic to $L_{\triangle}$ with the opposite orientation. By the assumption $X(u,\lambda)$ is smooth, $u$ and $\lambda$ satisfies $(*1)(*2)$ of Theorem \ref{smooth}, then it is easy to see that $\triangle$ is a Delzant polytope in the ordinary sense, consequently $L_{\triangle}$ turns out to be a smooth toric variety.
\begin{definition} {\rm For $\alpha=0,1$, let $\triangle_\alpha$ be a $\sigma(\theta_\alpha)$-Delzant polytope. Put \begin{eqnarray} Q(r) := \{ (t_1,\cdots, t_n)\in (\mathbf{t}^n)^*; \ t_1\ge 0,\cdots, t_n\ge 0, t_1^2+\cdots + t_n^2 <r^2\}.\nonumber \end{eqnarray} Then $\triangle_0$ and $\triangle_1$ are said to be} intersecting standardly with angle $\theta$ {\rm if $\triangle_0 \cap \triangle_1 = \{ q\}$ and there are $\psi\in GL_n\mathbb{Z}$, $\theta_0\in\mathbb{R}$ and sufficiently small $r >0$ such that \begin{eqnarray} \psi(\triangle_0 -q)\cap B(r) &=& \sigma(\theta_0)\otimes Q(r),\nonumber\\ \psi(\triangle_1 -q)\cap B(r) &=& \sigma(\theta_0 + \theta)\otimes Q(r),\nonumber \end{eqnarray} where $\psi : (\mathbb{Z}^n)^* \to (\mathbb{Z}^n)^*$ extends to ${\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^* \to {\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$ naturally. } \label{def.intersection} \end{definition}
For $m\in\mathbb{Z}_{>0}$, let \begin{eqnarray}
d_m(l_1,l_2) := \min \{ |l_1 - l_2 + mk|;\ k\in\mathbb{Z} \},\nonumber \end{eqnarray} for $l_1,l_2\in \mathbb{Z}$, which induces a distance function on $\mathbb{Z} /m\mathbb{Z}$.
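For instance, $d_m$ is just the graph distance on the cycle $\mathbb{Z}/m\mathbb{Z}$; taking $m=6$ as an illustration, \begin{eqnarray} d_6(1,5) = \min \{ |1-5+6k|;\ k\in\mathbb{Z} \} = \min\{ 4, 2, 8, \cdots\} = 2.\nonumber \end{eqnarray}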
The main result of this article is described as follows. \begin{thm} Let $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold, and let $\triangle_k$ be a $\sigma(k\pi/n)$-Delzant polytope for each $k = 1,\cdots, 2n$. Assume that $\triangle_k \cap \triangle_l = \emptyset$ if $d_{2n}(k,l)>1$, and that $\triangle_k$ and $\triangle_{k+1}$ intersect standardly with angle $\pi/n$. Then there exists a family of compact special Lagrangian submanifolds $\{ \tilde{L}_t \}_{0<t<\delta}$ which converges to $\bigcup_{k=1}^{2n}L_{\triangle_k}$ as $t\to 0$ in the sense of currents. Moreover, $\tilde{L}_t$ is diffeomorphic to $L_{\triangle_1}\# \overline{L}_{\triangle_2}\# \cdots \# L_{\triangle_{2n-1}}\# \overline{L}_{\triangle_{2n}}\# (S^1\times S^{2n-1})$. \label{main} \end{thm} \begin{proof} We apply Theorem \ref{gluing}. By combining Propositions \ref{model} and \ref{angle}, we can see that the characterizing angles between $L_{\triangle_k}$ and $L_{\triangle_{k+1}}$ are $\frac{\pi}{2n}$ with multiplicity $2n$. Hence each intersection point in $L_{\triangle_k}\cap L_{\triangle_{k+1}}$ is of type $1$.
Next we consider the topology of $\tilde{L}_t$. When we take a connected sum, we should determine the orientation of $L_{\triangle_k}$ uniformly by the calibration ${\rm Re}\Omega$,
where $\Omega=(\omega_{\lambda,2} + \sqrt{-1}\omega_{\lambda,3})^n$. Now $\Omega|_{L_{\triangle_k}} = (-1)^k(\omega_1^{\sigma(k\pi/n)})^n|_{L_{\triangle_k}}$ holds, therefore $\tilde{L}_t$ is diffeomorphic to \begin{eqnarray} L_{\triangle_1}\# \overline{L}_{\triangle_2}\# \cdots \# L_{\triangle_{2n-1}}\# \overline{L}_{\triangle_{2n}}\# (S^1\times S^{2n-1}).\nonumber \end{eqnarray} \end{proof}
\subsection{Example $(1)$}\label{5.1} Let \begin{eqnarray} u = (I_n\ I_n\ \cdots\ I_n) \in {\rm Hom}(\mathbb{Z}^{2n^2}, \mathbb{Z}^n) \nonumber \end{eqnarray} and $\lambda = (\lambda_{1,1},\cdots,\lambda_{1,n},\lambda_{2,1},\cdots,\lambda_{2,n},\cdots,\lambda_{2n,1},\cdots,\lambda_{2n,n})$, where $I_n$ is the identity matrix. Then $X(u,\lambda)$ is smooth if $\lambda_{k,\alpha} = \lambda_{l,\alpha}$ implies $k=l$. We assume this condition, and also that $-\lambda_{k,\alpha} = (0, \rho_{k,\alpha}) \in \{ 0\} \oplus \mathbb{C}$ holds for every $k,\alpha$, where ${\rm Im}\mathbb{H}$ is identified with $\mathbb{R} \oplus \mathbb{C}$. Moreover we suppose that \begin{eqnarray} \arg (\rho_{k + 1,\alpha} - \rho_{k,\alpha} ) = \theta_0 + \frac{n+1}{n}k\pi \end{eqnarray} for some $\theta_0 \in \mathbb{R}$. Note that $X(u,\lambda)$ is a direct product of multi-Eguchi-Hanson spaces.
Next we put $q_k:= -(\lambda_{k,1},\cdots,\lambda_{k,n})\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{n})^*$,
and \begin{eqnarray} \square_k &:=& q_k + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}k\pi) })\otimes \square(r_{k,1},\cdots, r_{k,n})\nonumber\\ &\subset& V(q_k, \sigma(\theta_0 + \frac{n+1}{n}k\pi)), \nonumber \end{eqnarray}
where $r_{k,\alpha} = |\rho_{k + 1,\alpha} - \rho_{k,\alpha}|$, and the hyperrectangle $\square(r_{1},\cdots, r_{n}) \subset (\mathbf{t}^{n})^*\cong \mathbb{R}^n$ is defined by \begin{eqnarray} \square(r_{1},\cdots, r_{n}) := \{ (t_1,\cdots, t_n) \in \mathbb{R}^n ;\ 0\le t_1 \le r_1,\ \cdots,\ 0\le t_n \le r_n\}.\nonumber \end{eqnarray}
Let $H_{k,\alpha} = \{ y\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{n})^* ;\ y_\alpha + \lambda_{k,\alpha} = 0 \}$. Then it is easy to see that $\square_k$ is compact, convex and \begin{eqnarray} \partial \square_k \subset \bigcup_{\alpha = 1}^{n}(H_{k,\alpha}\cup H_{k+1,\alpha}).\nonumber \end{eqnarray} Therefore, $\square_k$ is a $\sigma(\theta_0 + (n+1)k\pi/n)$-Delzant polytope if \begin{eqnarray} \square_k \cap \bigcup_{\alpha, l}H_{l,\alpha} \subset \partial \square_k \label{inclusion} \end{eqnarray} holds.
Next we study the intersection of $\square_{k-1}$ and $\square_k$. We can check that $\square_{k-1} \cap \square_k = \{ q_k\}$, and every element of $\square_{k-1}$ satisfies \begin{eqnarray} &\ & q_{k-1} + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (t_1,\cdots, t_n) \nonumber\\ &=& q_{k-1} + (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (r_{k-1,1},\cdots, r_{k-1,n}) \nonumber\\ &\ & - (0, e^{\sqrt{-1}(\theta_0 + \frac{n+1}{n}(k-1)\pi )})\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n)\nonumber\\ &=& q_{k-1} + (\lambda_{k-1,1} - \lambda_{k,1},\cdots, \lambda_{k-1,n} - \lambda_{k,n}) \nonumber\\ &\ & + (0, e^{\sqrt{-1}(\theta_0 + (\frac{n+1}{n}(k-1)+1)\pi) })\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n)\nonumber\\ &=& q_k + (0, e^{\sqrt{-1}(\theta_0 + \frac{(n+1)k -1}{n}\pi) })\otimes (r_{k-1,1} - t_1,\cdots, r_{k-1,n}-t_n).\nonumber \end{eqnarray} Therefore, $\square_{k-1}$ and $\square_k$ intersect standardly with angle $\pi/n$. Of course, the same argument works for $\square_{2n}$ and $\square_1$.
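The exponent in the last equality follows from the elementary identity \begin{eqnarray} \theta_0 + \bigg(\frac{n+1}{n}(k-1)+1\bigg)\pi = \theta_0 + \frac{(n+1)(k-1)+n}{n}\pi = \theta_0 + \frac{(n+1)k-1}{n}\pi,\nonumber \end{eqnarray} and this direction differs from $\arg (\rho_{k + 1,\alpha} - \rho_{k,\alpha} ) = \theta_0 + \frac{n+1}{n}k\pi$ by exactly $\pi/n$, which is the asserted angle.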
To apply Theorem \ref{main}, it suffices to show that $\square_k \cap \square_l$ is empty if $d_{2n}(k,l)>1$. However, this condition does not hold in general, so we need to choose $\rho_{k,\alpha}$ carefully. Unfortunately, the author could not find a good criterion on $\rho_{k,\alpha}$ ensuring the above condition. Here we give one example of $\rho_{k,\alpha}$ which satisfies the assumption of Theorem \ref{main}.
First of all, take $a_1,\cdots,a_n\in \mathbb{R}$ so that every $a_m$ is larger than $1$, and put \begin{eqnarray} \rho_{2m-1} &:=& e^{\sqrt{-1}\frac{2(m-1)}{n}\pi} + a_m(e^{\sqrt{-1}\frac{2m}{n}\pi} - e^{\sqrt{-1}\frac{2(m-1)}{n}\pi}),\nonumber\\ \rho_{2m} &:=& e^{\sqrt{-1}\frac{2(m+1)}{n}\pi} + a_m(e^{\sqrt{-1}\frac{2m}{n}\pi} - e^{\sqrt{-1}\frac{2(m+1)}{n}\pi})\nonumber \end{eqnarray} for each $m=1,\cdots, n$. Denote by $\mathbf{l}_k\subset \mathbb{C}$ the segment connecting $\rho_k$ and $\rho_{k+1}$. Then we can easily see that $\mathbf{l}_{k-1} \cap \mathbf{l}_{k} = \{ \rho_k \}$ and \begin{eqnarray} \arg(\rho_{k+1} -\rho_{k}) = \frac{n-2}{2n}\pi + \frac{n+1}{n}k\pi.\nonumber \end{eqnarray} Note that we can regard $k\in\mathbb{Z} /2n \mathbb{Z}$ and $m\in \mathbb{Z}/ n\mathbb{Z}$.
\begin{prop} Let $\rho_1,\cdots,\rho_{2n}$ be as above. If every $a_k-1$ is sufficiently small, then $\mathbf{l}_{2m-1} \cap \mathbf{l}_{k}$ is empty for all $m=1,\cdots,n$ and $k=1,\cdots,2n$ with $d_{2n}(k,2m-1)>1$. \label{disjoint} \end{prop} \begin{proof} Let ${\rm Re}:\mathbb{C} \to \mathbb{R}$ be the projection given by taking the real part. It suffices to show that ${\rm Re}(\mathbf{l}_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi}) \cap {\rm Re}(\mathbf{l}_{k}e^{-\sqrt{-1}\frac{2m}{n}\pi})$ is empty under the given assumptions. Let $\rho_{2m-1} + t(\rho_{2m}-\rho_{2m-1})\in \mathbf{l}_{2m-1}$. Then we can check that \begin{eqnarray} {\rm Re}(\rho_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi} + t(\rho_{2m}-\rho_{2m-1})e^{-\sqrt{-1}\frac{2m}{n}\pi}) = (1-a_m)\cos \frac{2\pi}{n} + a_m,\nonumber \end{eqnarray} which implies ${\rm Re}(\mathbf{l}_{2m-1}e^{-\sqrt{-1}\frac{2m}{n}\pi}) = \{ -(a_m-1)\cos \frac{2\pi}{n} + a_m\}$. If we can show that \begin{eqnarray} {\rm Re}(\rho_k e^{-\sqrt{-1}\frac{2m}{n}\pi}) < -(a_m-1)\cos \frac{2\pi}{n} + a_m\label{ineq1} \end{eqnarray} for all such $k$, we have the assertion. Since \begin{eqnarray} {\rm Re}(\rho_{2l} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &=& -(a_{l}-1)\cos(\frac{2(1 + l-m)}{n}\pi) \nonumber\\ &\ &\quad + a_{l}\cos(\frac{2(l-m)}{n}\pi),\nonumber\\ {\rm Re}(\rho_{2l'-1} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &=& -(a_{l'} -1)\cos(\frac{2(1 - l'+m)}{n}\pi) \nonumber\\ &\ &\quad + a_{l'}\cos(\frac{2(l'-m)}{n}\pi)\nonumber \end{eqnarray} and $d_{2n}(2l,2m-1)>1$, $d_{2n}(2l'-1,2m-1)>1$ hold, we have $\cos(2(l-m)\pi /n) \le \cos(2\pi / n)$ and $\cos(2(l'-m)\pi /n) \le \cos(2\pi / n)$.
By using the inequalities $\cos(\frac{2(1 + l-m)}{n}\pi) \ge -1$ and $\cos(\frac{2(1 - l'+m)}{n}\pi) \ge -1$, we obtain \begin{eqnarray} {\rm Re}(\rho_{2l} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &\le& (a_{l}-1) + a_{l}\cos\frac{2\pi}{n} \nonumber\\ &=&(a_{l}-1)(1+ \cos\frac{2\pi}{n}) + \cos\frac{2\pi}{n},\nonumber\\ {\rm Re}(\rho_{2l'-1} e^{-\sqrt{-1}\frac{2m}{n}\pi}) &\le& (a_{l'} -1) + a_{l'}\cos\frac{2\pi}{n}\nonumber\\ &=& (a_{l'}-1)(1+ \cos\frac{2\pi}{n}) + \cos\frac{2\pi}{n}.\nonumber \end{eqnarray} Now, if we assume $a_l-1<(1-\cos\frac{2\pi}{n})/(1+ \cos\frac{2\pi}{n})$, then the left-hand side of (\ref{ineq1}) is less than $1$. Since \begin{eqnarray} -(a_m-1)\cos \frac{2\pi}{n} + a_m = (a_m-1)(1- \cos \frac{2\pi}{n}) + 1,\nonumber \end{eqnarray} the right-hand side of (\ref{ineq1}) is always larger than $1$, and we obtain the inequality (\ref{ineq1}). \end{proof}
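To illustrate the smallness condition on the $a_l$, take $n=3$ as an example: then $\cos\frac{2\pi}{3} = -\frac{1}{2}$, so \begin{eqnarray} \frac{1-\cos\frac{2\pi}{3}}{1+\cos\frac{2\pi}{3}} = \frac{3/2}{1/2} = 3,\nonumber \end{eqnarray} and the argument above applies whenever $1 < a_l < 4$. For large $n$ the threshold $(1-\cos\frac{2\pi}{n})/(1+\cos\frac{2\pi}{n})$ tends to $0$, so $a_l - 1$ must indeed be taken small.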
Now, divide $\{ 1,\cdots, n\}$ into two nonempty sets \begin{eqnarray} \{ 1,\cdots, n\} = A_+\sqcup A_-, \nonumber \end{eqnarray} and define $\rho_{k,\alpha}$ by $\rho_{k,\alpha} = \rho_k$ if $\alpha \in A_+$, and $\rho_{k,\alpha} = \rho_ke^{\sqrt{-1}\pi/n}$ if $\alpha \in A_-$. Here, we suppose the $a_k-1$ are sufficiently small so that Proposition \ref{disjoint} holds. Denote by $\mathbf{l}_{k,\alpha}$ the segment in ${\rm Im}\mathbb{H}$ connecting $(0,\rho_{k,\alpha})$ and $(0,\rho_{k+1,\alpha})$. Then we can see that \begin{eqnarray} \square_k = \mathbf{l}_{k,1}\times \cdots \times \mathbf{l}_{k,n},\nonumber \end{eqnarray} and $\square_k$ satisfies (\ref{inclusion}).
\begin{prop} Let $\square_1,\cdots,\square_{2n}$ be as above. Then $\square_k \cap \square_l$ is empty if $d_{2n}(k,l)>1$. \end{prop} \begin{proof} Suppose there is an element $\hat{x}\in \square_k \cap \square_l$. Then $\hat{x}$ can be written as $\hat{x}=(0,x)$ for some $x = (x_1,\cdots,x_n)\in \mathbb{C}^n$, and \begin{eqnarray} x_\alpha = \rho_{k,\alpha} + t_{k,\alpha} (\rho_{k+1,\alpha} - \rho_{k,\alpha}) = \rho_{l,\alpha} + t_{l,\alpha} (\rho_{l+1,\alpha} - \rho_{l,\alpha}) \label{lineareq} \end{eqnarray} holds for some $0\le t_{k,\alpha},t_{l,\alpha}\le 1$. Now, assume that $l$ is odd. Then (\ref{lineareq}) has no solution for $\alpha \in A_+$ by Proposition \ref{disjoint}. Similarly, if $l$ is even, then (\ref{lineareq}) has no solution for $\alpha \in A_-$. Hence $\square_k \cap \square_l$ must be empty. \end{proof} Since $L_{\square_k} = (\mathbb{P}^1)^n$, and there is an orientation-preserving diffeomorphism between $(\mathbb{P}^1)^n$ and $(\overline{\mathbb{P}^1})^n$, we obtain the following example. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to \begin{eqnarray} 2n(\mathbb{P}^1)^n \# (S^1\times S^{2n-1})\nonumber \end{eqnarray} embedded in $X(u,\lambda)$. \end{thm}
\subsection{Example $(2)$}\label{5.2} Here we construct one more example in an $8$-dimensional toric hyper-K\"ahler\ manifold.
Let \[ u := \left ( \begin{array}{ccccc} 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 1 \end{array} \right ) \in {\rm Hom}(\mathbb{Z}^5, \mathbb{Z}^2), \] and $\lambda=(\lambda_0,\cdots,\lambda_4)\in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^5)^*$. Put \begin{eqnarray} q_1:=-(\lambda_1,\lambda_2),\ q_2:=-(\lambda_3,\lambda_2),\ q_3:=-(\lambda_3,\lambda_4),\ q_4:=-(\lambda_1,\lambda_4)\nonumber \end{eqnarray} and $\triangle_k := q_k + \tau_k\otimes \triangle$ for $k=1,\cdots,4$, where \begin{eqnarray} \tau_1 &:=& \lambda_1 + \lambda_2 -\lambda_0, \nonumber\\ \tau_2 &:=& \lambda_3 + \lambda_2 -\lambda_0, \nonumber\\ \tau_3 &:=& \lambda_3 + \lambda_4 -\lambda_0, \nonumber\\ \tau_4 &:=& \lambda_1 + \lambda_4 -\lambda_0, \nonumber \end{eqnarray} and \begin{eqnarray} \triangle := \{ (t_1,t_2) \in (\mathbf{t}^2)^*\cong \mathbb{R}^2; \ t_1\ge 0,\ t_2\ge 0,\ t_1+t_2 \le 1\}.\nonumber \end{eqnarray} If we assume that $\tau_1 = (0,\sqrt{-1}r_1)$, $\tau_2 = (0,-r_2)$, $\tau_3 = (0,-\sqrt{-1}r_1)$, $\tau_4 = (0,r_2)\in \mathbb{R} \oplus \mathbb{C}$, where $r_1,r_2>0$, then $\triangle_k$ is a $\sigma(k\pi/2)$-Delzant polytope. For example, put $\lambda_0 = \lambda_1 = 0$, $\lambda_2 = (0,\sqrt{-1}r_1)$, $\lambda_3 = (0,-r_2 - \sqrt{-1}r_1)$ and $\lambda_4 = (0,r_2)$. \begin{prop} Under the above setting, $\triangle_k$ and $\triangle_{k+1}$ are intersecting standardly with angle $\pi/2$ for every $k=1,\cdots,4$. Here we suppose $\triangle_5 = \triangle_1$. \end{prop}
\begin{proof} We check the case of $k=1$, because other cases can be shown similarly. Let $q_k + \tau_k\otimes (t_1,t_2) \in \triangle_k$. Then we have \begin{eqnarray} q_1 + \tau_1\otimes (t_1,t_2) &=& q_1 + \tau_1\otimes (1,0) + \tau_1\otimes (t_1-1,t_2)\nonumber\\ &=& (\lambda_2-\lambda_0,-\lambda_2) + \sigma(\frac{\pi}{2}) \otimes r_1(t_1-1,t_2),\nonumber\\ q_2 + \tau_2\otimes (t_1,t_2) &=& q_2 + \tau_2\otimes (1,0) + \tau_2\otimes (t_1-1,t_2)\nonumber\\ &=& (\lambda_2-\lambda_0,-\lambda_2) + \sigma(\pi)\otimes r_2(t_1-1,t_2),\nonumber \end{eqnarray} therefore $\triangle_1$ and $\triangle_2$ are intersecting standardly with angle $\pi/2$. Note that we have to take \[ \psi = \left ( \begin{array}{cc} -1 & 0\\ 0 & 1 \end{array} \right ) \] in Definition \ref{def.intersection}, since $t_1 - 1$ is nonpositive in this case. \end{proof} \begin{prop} Under the above setting, $\triangle_1 \cap \triangle_3$ and $\triangle_2 \cap \triangle_4$ are empty sets. \end{prop} \begin{proof} Every $x\in \triangle_1 \cap \triangle_3$ can be written as \begin{eqnarray} x = q_1 + \tau_1\otimes (t_1,t_2) = q_3 + \tau_3\otimes (s_1,s_2)\nonumber \end{eqnarray} for some $0\le t_1,t_2,s_1,s_2\le 1$. The first component of the above equation gives \begin{eqnarray} t_1\tau_1 - s_1\tau_3 = \lambda_1 - \lambda_3 = \tau_1 - \tau_2.\nonumber \end{eqnarray} By substituting $\tau_1 = (0,\sqrt{-1}r_1)$, $\tau_2 = (0,-r_2)$ and $\tau_3 = (0,-\sqrt{-1}r_1)$, we obtain \begin{eqnarray} (t_1 + s_1)\sqrt{-1}r_1 = r_2 + \sqrt{-1}r_1,\nonumber \end{eqnarray} which has no solution $(t_1,s_1) \in \mathbb{R}^2$. $\triangle_2 \cap \triangle_4 = \emptyset$ also follows by the same argument. \end{proof} Thus we obtain the following example. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to $2\mathbb{P}^2 \# 2\overline{\mathbb{P}^2} \# (S^1\times S^3)$ embedded in $X(u,\lambda)$. \end{thm}
\subsection{Example $(3)$}\label{5.3} We can describe a generalization of Theorem \ref{main} in the more complicated situation.
\begin{thm} Let $(\mathcal{V},\mathcal{E},s,t)$ be a quiver, $X(u,\lambda)$ be a smooth toric hyper-K\"ahler\ manifold, and $\{\triangle_k \}_{k\in \mathcal{V}}$ be a family of subsets of ${\rm Im}\mathbb{H} \otimes (\mathbf{t}^n)^*$. Assume that every $\triangle_k$ is a $\sigma(\theta_k)$-Delzant polytope for some $\theta_k\in\mathbb{R}$, that $\triangle_{s(h)}$ and $\triangle_{t(h)}$ intersect standardly with angle $\pi/n$ if $h \in\mathcal{E}$, and that otherwise $\triangle_{k_1} \cap \triangle_{k_2} = \emptyset$ or $k_1=k_2$. Moreover, suppose that $\mathcal{E}$ is covered by cycles. Then there exists a family of compact special\ Lagrangian\ submanifolds $\{ \tilde{L}_t\}_{0<t<\delta}$ which converges to $\bigcup_{k\in \mathcal{V}} L_{\triangle_k}$ as $t\to 0$ in the sense of currents. \label{graph} \end{thm} \begin{proof} The proof is the same as that of Theorem \ref{main}. \end{proof} Fix positive real numbers $a,b,c,a_m$ for $m=1,\cdots,N$ so that $0<a_1<a_2<\cdots <a_N$. Let \[ u = \left ( \begin{array}{cccccccc} 1 & 1 & 1 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & \cdots & 1 \end{array} \right ) \in {\rm Hom}(\mathbb{Z}^{2N+6}, \mathbb{Z}^2) \] and \begin{eqnarray} \lambda = (\lambda_{-3}, \lambda_{-2}, \lambda_{-1}, \lambda_0, \lambda_1,\cdots,\lambda_{2N+2}) \in {\rm Im}\mathbb{H}\otimes (\mathbf{t}^{2N+6})^* ,\nonumber \end{eqnarray} where $-\lambda_0=0$, $-\lambda_{-1}=(0,\sqrt{-1}b)$, $-\lambda_{-2}=(0,a+\sqrt{-1}b)$, $-\lambda_{-3}=(0,a)$, $-\lambda_{2m+1}=(0,a_m + \sqrt{-1}c)$ and $-\lambda_{2m+2}=(0,a_m)$ for $m=0,1,\cdots,N$. Here, we put $a_0=0$. Then $X(u,\lambda)$ is smooth and becomes the direct product $X(u',\lambda')\times X(u'',\lambda'')$, where $u'=(1,1,1,1) \in {\rm Hom}(\mathbb{Z}^4,\mathbb{Z})$, $u''=(1,\cdots,1) \in {\rm Hom}(\mathbb{Z}^{2N+2},\mathbb{Z})$, $\lambda' =(\lambda_{-3}, \lambda_{-2}, \lambda_{-1}, \lambda_0)$ and $\lambda''=(\lambda_1,\cdots,\lambda_{2N+2})$.
Denote by $[p,q]\subset {\rm Im}\mathbb{H}$ the segment connecting $p,q\in {\rm Im}\mathbb{H}$, and put $\mathbf{A}_{-}:=[-\lambda_0,-\lambda_{-1}]$, $\mathbf{A}_{+}:=[-\lambda_{-2},-\lambda_{-3}]$, $\mathbf{S}_{+}:=[-\lambda_{-1},-\lambda_{-2}]$, $\mathbf{S}_{-}:=[-\lambda_{-3},-\lambda_0]$, $\mathbf{A}_{m}:=[-\lambda_{2m+1},-\lambda_{2m+2}]$ for $m=0,1,\cdots,N$, $\mathbf{S}_{+,m}:=[-\lambda_{2m-1},-\lambda_{2m+1}]$ and $\mathbf{S}_{-,m}:=[-\lambda_{2m},-\lambda_{2m+2}]$ for $m=1,\cdots, N$.
Let \begin{eqnarray} \square_{2l,1} &:=& \mathbf{A}_{-}\times \mathbf{A}_{2l},\nonumber\\ \square_{2l,2} &:=& \mathbf{S}_{+}\times \mathbf{S}_{+,2l+1}, \nonumber\\ \square_{2l,3} &:=& \mathbf{A}_{+}\times \mathbf{A}_{2l+1}, \nonumber\\ \square_{2l,4} &:=& \mathbf{S}_{-}\times \mathbf{S}_{-,2l+1}\nonumber \end{eqnarray} for $l=0,1,\cdots, [(N-1)/2]$, and \begin{eqnarray} \square_{2l-1,1} &:=& \mathbf{A}_{-}\times \mathbf{A}_{2l}, \nonumber\\ \square_{2l-1,2} &:=& \mathbf{S}_{+}\times \mathbf{S}_{-,2l}, \nonumber\\ \square_{2l-1,3} &:=& \mathbf{A}_{+}\times \mathbf{A}_{2l-1}, \nonumber\\ \square_{2l-1,4} &:=& \mathbf{S}_{-}\times \mathbf{S}_{+,2l}\nonumber \end{eqnarray} for $l=1,\cdots, [N/2]$. Then $\square_{m,j}$ is a $\sigma(j\pi/2)$-Delzant polytope satisfying $\square_{2l-1,1}=\square_{2l,1}$ and $\square_{2l,3}=\square_{2l+1,3}$; moreover $\square_{m,j}$ and $\square_{m,j+1}$ intersect standardly with angle $\pi/2$ for $j=1,2,3,4$, where we put $\square_{m,5}=\square_{m,1}$. Otherwise, $\square_{m,j} \cap \square_{m',j'}$ is empty.
Now let \begin{eqnarray} \mathcal{V} &:=& (\{ 0,1,\cdots,N\}\times \{ 1,2,3,4\})/\sim,\nonumber \end{eqnarray} where $\sim$ is defined by $(2l-1,1)\sim (2l,1)$ and $(2l,3)\sim (2l+1,3)$. We denote by $[m,j]\in \mathcal{V}$ the equivalence class represented by $(m,j)$. Put \begin{eqnarray} \mathcal{E} &:=& \{ [m,j]\to [m,j+1], [m,4]\to [m,1];\ m=0,\cdots, N,\ j=1,2,3\},\nonumber \end{eqnarray} where $x\to y$ means the directed edge whose source is $x$ and whose target is $y$. Then we obtain a quiver $(\mathcal{V},\mathcal{E},s,t)$, and it is easy to see that $\mathcal{E}$ is covered by cycles. With this setting, we can see that $\{ \square_{m,j}\}_{[m,j]\in\mathcal{V}}$ satisfies the assumption of Theorem \ref{graph}, and we have the following result. \begin{thm} Let $X(u,\lambda)$ be as above. Then there exists a compact smooth special\ Lagrangian\ submanifold diffeomorphic to \begin{eqnarray} (3N+1)(\mathbb{P}^1)^2 \# N(S^1\times S^3)\nonumber \end{eqnarray} embedded in $X(u,\lambda)$. \end{thm}
\section{Obstruction}\label{sec7} Here we introduce obstructions to the existence of holomorphic\ Lagrangian\ and special\ Lagrangian\ submanifolds in hyper-K\"ahler\ manifolds. Throughout this section, let $(M^{4n},g,I_1,I_2,I_3)$ be a hyper-K\"ahler\ manifold. \begin{prop} Let $L\subset M$ be a special\ Lagrangian\ submanifold, and also a $\sigma$-holomorphic\ Lagrangian\ submanifold for some $\sigma\in S^2$. Then $\sigma = \sigma(k\pi/n)$ for some $k = 1,\cdots, 2n$. \label{6.1} \end{prop} \begin{proof} By decomposing $\mathbb{R}^3$ into $\mathbb{R}\sigma$ and its orthogonal complement, we have \begin{eqnarray} (1,0,0) = p\sigma + q\tau\nonumber \end{eqnarray} for some $p,q\in \mathbb{R}$ and $\tau\in S^2$, where $\tau$ is orthogonal to $\sigma$. Then we have $\omega_1= p\omega^\sigma + q\omega^\tau$ and \begin{eqnarray}
0= \omega_1|_L= p\omega^\sigma|_L + q\omega^\tau|_L = p\omega^\sigma|_L,\nonumber \end{eqnarray} since $L$ is a special\ Lagrangian\ and $\sigma$-holomorphic\ Lagrangian\ submanifold. Hence $p$ must be $0$, since $\omega^\sigma$ is non-degenerate on $L$. Thus we have $(1,0,0) = q\tau$, which means that $\sigma$ is orthogonal to $(1,0,0)$.
Then we may write $\sigma = \sigma(\theta)$ for some $\theta\in \mathbb{R}$. By the condition ${\rm Im}(\omega_2 + \sqrt{-1}\omega_3)^n|_L = 0$, we obtain $\theta = k\pi/n$ for some $k=1,\cdots, 2n$. \end{proof}
\begin{prop} Let $L$ be a compact $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold in $M$ for some $\theta$, and let the orientation of $L$ be determined by $\omega^{\sigma(\theta)}$. Then the pairing of the de Rham cohomology class $[\omega_2+\sqrt{-1}\omega_3]^n$ and the homology class $[L]\in H_{2n}(M,\mathbb{Z})$ is given by \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L]\rangle = e^{\sqrt{-1}n \theta} V(L), \nonumber \end{eqnarray} where $V(L) >0$ is the volume of $L$. \label{6.2} \end{prop} \begin{proof} Since $L$ is $\sigma(\theta)$-holomorphic Lagrangian, we have \begin{eqnarray}
\omega_1|_L = \omega^{\hat{\sigma}(\theta)}|_L=0,\nonumber \end{eqnarray} where $\hat{\sigma}(\theta) = (0,-\sin\theta,\cos\theta) \in S^2$. Then we obtain \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L]\rangle &=& \int_{L}(\omega_2+\sqrt{-1}\omega_3)^n\nonumber\\ &=& \int_{L}e^{\sqrt{-1}n \theta}\{ e^{-\sqrt{-1} \theta}(\omega_2+\sqrt{-1}\omega_3) \}^n\nonumber\\ &=& e^{\sqrt{-1}n \theta}\int_{L}(\omega^{\sigma(\theta)}+\sqrt{-1}\omega^{\hat{\sigma}(\theta)})^n \nonumber\\ &=& e^{\sqrt{-1}n \theta}\int_{L}(\omega^{\sigma(\theta)})^n = e^{\sqrt{-1}n \theta} V(L). \nonumber \end{eqnarray} \end{proof} Let $L_1,L_2,\cdots, L_A$ be compact smooth submanifolds of dimension $2n$ embedded in $M$. Assume that each $L_\alpha$ is a $\sigma(\theta_\alpha)$-holomorphic\ Lagrangian\ submanifold for some $\theta_\alpha\in \frac{\pi}{n} \mathbb{Z}$, and the orientation of $L_\alpha$ is determined by $\omega^{\sigma(\theta_\alpha)}$. Put $\varepsilon_\alpha =1$ if $\frac{n}{\pi}\theta_\alpha$ is even, and $\varepsilon_\alpha =-1$ if $\frac{n}{\pi}\theta_\alpha$ is odd. \begin{prop} Under the above setting, assume that there exists a compact smooth $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold $L$ in the homology class $\sum_{\alpha=1}^A \varepsilon_\alpha [L_\alpha]$ for some $\theta\in \mathbb{R}$. Then $\{ \theta_1,\theta_2,\cdots,\theta_A\}$ is contained in $\theta + \pi\mathbb{Z}$. \label{homology} \end{prop} \begin{proof} Since $L_\alpha$ is a $\sigma(\theta_\alpha)$-holomorphic\ Lagrangian\ submanifold, we have \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, \sum_{\alpha=1}^A \varepsilon_\alpha [L_\alpha ] \rangle &=& \sum_{\alpha=1}^A \varepsilon_\alpha e^{\sqrt{-1}n \theta_\alpha} V(L_\alpha)\nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha^2 V(L_\alpha) = \sum_{\alpha=1}^A V(L_\alpha).
\label{eq5} \end{eqnarray} by Proposition \ref{6.2}. Since $L$ is a compact smooth $\sigma(\theta)$-holomorphic\ Lagrangian\ submanifold,
$\omega_1|_L = \omega^{\hat{\sigma}(\theta)}|_L=0$ holds, where we put ${\hat{\sigma}(\theta)}$ as in the proof of Proposition \ref{6.2}. Therefore we obtain \begin{eqnarray} \langle [\omega_2+\sqrt{-1}\omega_3]^n, [L] \rangle = e^{\sqrt{-1}n\theta} \langle [\omega^{\sigma(\theta)}]^n, [L] \rangle \label{eq6} \end{eqnarray} by the same computation as in the proof of Proposition \ref{6.2}. Then by combining (\ref{eq5}) and (\ref{eq6}), $\theta$ is given by $\theta = k\pi/n$ for some integer $k=1,\cdots,2n$. Since \begin{eqnarray} \omega^{\sigma(\theta)} &=& {\rm Re}(e^{-\sqrt{-1}\theta}(\omega_2+\sqrt{-1}\omega_3)) \nonumber\\ &=& {\rm Re}(e^{-\sqrt{-1}(\theta - \theta_\alpha)} e^{-\sqrt{-1}\theta_\alpha}(\omega_2+\sqrt{-1}\omega_3 ))\nonumber\\ &=& {\rm Re}(e^{-\sqrt{-1}(\theta - \theta_\alpha)} (\omega^{\sigma(\theta_\alpha)}+\sqrt{-1}\omega^{\hat{\sigma}(\theta_\alpha)}))\nonumber\\ &=& \cos (\theta - \theta_\alpha)\omega^{\sigma(\theta_\alpha)} + \sin (\theta - \theta_\alpha)\omega^{\hat{\sigma}(\theta_\alpha)}\nonumber \end{eqnarray}
and $\omega^{\hat{\sigma}(\theta_\alpha)}|_{L_\alpha} = 0$, we obtain \begin{eqnarray} \langle [\omega^{\sigma(\theta)}]^n, [L] \rangle &=& \sum_{\alpha=1}^A \varepsilon_\alpha \langle [\omega^{\sigma(\theta)}]^n, [L_\alpha ] \rangle \nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha \cos^n (\theta - \theta_\alpha)\langle [\omega^{\sigma(\theta_\alpha)} ]^n, [L_\alpha ] \rangle \nonumber\\ &=& \sum_{\alpha=1}^A \varepsilon_\alpha \cos^n (\theta - \theta_\alpha) V(L_\alpha ). \label{eq7} \end{eqnarray} By combining (\ref{eq5}), (\ref{eq6}) and (\ref{eq7}), and putting $\theta=k\pi/n$, we obtain \begin{eqnarray} \sum_{\alpha=1}^A V(L_\alpha) = (-1)^k\sum_{\alpha=1}^A \varepsilon_\alpha \cos^n \bigg(\frac{k\pi}{n} - \theta_\alpha\bigg) V(L_\alpha). \nonumber \end{eqnarray} Next we put $\theta_\alpha = k_\alpha\pi/n$ for $k_\alpha=1,\cdots,2n$. Then $\varepsilon_\alpha = (-1)^{k_\alpha}$ and \begin{eqnarray} \sum_{\alpha=1}^A V(L_\alpha) = \sum_{\alpha=1}^A (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) V(L_\alpha) \nonumber \end{eqnarray} holds. Since every $V(L_\alpha)$ is positive and each coefficient $(-1)^{k-k_\alpha} \cos^n (\frac{k-k_\alpha}{n}\pi)$ is at most $1$, we obtain \begin{eqnarray} (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) = 1 \label{eq8} \end{eqnarray} for every $\alpha$. Then $k-k_\alpha$ must be contained in $n\mathbb{Z}$, since (\ref{eq8}) forces $|\cos (\frac{k-k_\alpha}{n}\pi )| = 1$. Conversely, if $k-k_\alpha = nl$ for some $l\in \mathbb{Z}$, then $\cos (\frac{k-k_\alpha}{n}\pi) = \cos l\pi = (-1)^l$ holds, which gives \begin{eqnarray} (-1)^{k-k_\alpha} \cos^n \bigg(\frac{k-k_\alpha}{n}\pi\bigg) = (-1)^{nl} (-1)^{nl} = 1.\nonumber \end{eqnarray} Thus the assertion follows, since (\ref{eq8}) holds if and only if $k-k_\alpha \in n\mathbb{Z}$. \end{proof} \begin{cor} Under the assumption of Theorem \ref{graph}, let $\mathcal{E}$ be nonempty. Then the special\ Lagrangian\ submanifolds $\tilde{L}_t$ obtained in Theorem \ref{graph} are not $\sigma$-holomorphic\ Lagrangian\ submanifolds for any $\sigma\in S^2$.
\end{cor} \begin{proof} First of all, note that the homology class of $\tilde{L}_t$ is $\sum_{k\in\mathcal{V}}\varepsilon_{k}[L_{\triangle_k}]$. Let $k_1 \to k_2\in \mathcal{E}$. Then $\triangle_{k_1}$ and $\triangle_{k_2}$ intersect standardly with angle $\pi/n$, hence we have $\theta_{k_2} = \theta_{k_1} + \pi/n$, which implies that $\{\theta_k;\ k\in\mathcal{V}\}$ contains both $\theta_{k_1}$ and $\theta_{k_1} + \pi/n$. Thus $\{\theta_k;\ k\in\mathcal{V}\}$ can never be contained in $\theta+\pi\mathbb{Z}$ for any $\theta$, since $n>1$. By Propositions \ref{6.1} and \ref{homology}, $\tilde{L}_t$ is never a $\sigma$-holomorphic\ Lagrangian\ submanifold for any $\sigma\in S^2$. \end{proof}
Keio University, 3-14-1 Hiyoshi, Kohoku, Yokohama 223-8522, Japan,\\ [email protected]
\end{document} |
\begin{document}
\title{Localization of the Kobayashi distance for any visibility domain}
\author{Amar Deep Sarkar}
\address{ADS: Indian Institute of Science Education and Research Kolkata, India} \email{[email protected]}
\keywords{Visibility, weak visibility, Kobayashi distance, localization}
\subjclass{Primary: 32F45}
\thanks{The author is supported by the postdoctoral fellowship of Indian Institute of Science Education and Research Kolkata.} \begin{abstract} In this article, we prove localization results for the Kobayashi distance of Kobayashi hyperbolic domains with local visibility property in $\mathbb{C}^d$, $d \geq 1$. This is done by proving a localization result for the Kobayashi-Royden pseudometric, along with some other results for domains satisfying local weak visibility. \end{abstract}
\maketitle \section{Introduction} Let $\Omega \subset \mathbb{C}^d$, $d \geq 1$, be a domain, and let $\mathsf{k}_{\Omega}: \Omega \times \Omega \longrightarrow \mathbb{R}_{\geq 0}$ and $\kappa_{\Omega}: \Omega \times \mathbb{C}^d \longrightarrow \mathbb{R}_{\geq 0}$ denote the Kobayashi pseudodistance and the Kobayashi-Royden pseudometric, respectively. When $\mathsf{k}_{\Omega}$ is indeed a distance, we say that $\Omega$ is a Kobayashi hyperbolic domain. The aim of this paper is to obtain localization results for the Kobayashi distance under the assumption of the weak visibility property (see Definition~\ref{D:Weak_vis}). These localization results can be used to infer the global geometry from the local geometry and vice versa.
This kind of localization result (see Theorem~\ref{T:Main_Thm_1}) is well known for bounded strongly convex and strongly pseudoconvex domains (cf. \cite{Abate, Balogh_Bonk, Jarnicki_Pflug}); from the work of Zimmer \cite{Zimmer_1, Zimmer_2} it follows that such a localization also holds for bounded convexifiable domains of finite type and for $\mathbb{C}$-strictly convex domains with $C^{1, \alpha}$ boundary. Recently, Liu--Wang obtained such a localization result under the assumption that the domains are locally log-type convex, see \cite{Liu_et_al}, and this was substantially generalized by Bracci--Nikolov--Thomas under the assumption that the domains are locally convex and locally have the weak visibility property, see \cite{BNT}. In all the results mentioned above, a local convexity or convexifiability assumption is present along with the weak visibility property. However, there exists a large class of domains in $\mathbb{C}^d$ which are not locally convex or locally convexifiable. For example, not every pseudoconvex domain of finite type near a boundary point is locally convexifiable; however, these domains satisfy the local visibility property, as follows from the work of Bharali--Zimmer \cite{Bharali_Zimmer_1}; see also \cite{BM}, \cite{BNT}, \cite{CMS}.
Now we present our first main result, which shows that no local convexity assumption is required; the local weak visibility assumption alone is enough to obtain such a localization result. This generalizes \cite[Theorem~1.1]{BNT}. The idea of the proof is inspired by the proof of Theorem~1.1 in \cite{BNT}, along with a few important new observations.
\begin{theorem}\label{T:Main_Thm_1} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$ and $U \cap \Omega$ is connected. Suppose $( U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ has the weak visibility property for every pair of distinct points in $ U \cap \partial \Omega$. Then for all open sets $W, W_0$ with $W\cap \Omega \neq \emptyset$, $W \subset \subset W_0 \subset \subset U$ and $\mathsf{k}_{\Omega}(W_0 \cap \Omega, \Omega \setminus U) > 0$, there exists a constant $C> 0$, which depends only on $U, W, W_0$, such that for every $z, w \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega} (z, w) \leq \mathsf{k}_{\Omega} (z, w) + C. \] \end{theorem} \begin{remark} If $\Omega$ is a bounded domain, then for every pair $W_0, U$ as in the above theorem, it follows that $\mathsf{k}_{\Omega}(W_0 \cap \Omega, \Omega \setminus U) > 0$. Moreover, {\it (3)} of Proposition~\ref{P:Visibility_Outside_Point} gives a sufficient condition for $\mathsf{k}_{\Omega}(W_0 \cap \Omega, \Omega \setminus U) > 0$ when $\Omega$ is possibly unbounded. This condition was mentioned without proof in a previous version of this paper; a proof later appeared in \cite{NOT}. \end{remark}
Our next main result is the following. \begin{theorem}\label{T:Main_Thm_2} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$. Suppose $( \Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every pair of distinct points in $ U \cap \partial \Omega$. Then for every open set $W$ with $W \cap \Omega \neq \emptyset$ and $W \subset \subset U$, there exists a constant $C> 0$, which depends only on $U, W$, such that for every $z, w \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega} (z, w) \leq \mathsf{k}_{\Omega} (z, w) + C. \] \end{theorem}
\begin{remark} Note that in Theorem~\ref{T:Main_Thm_2} the weak visibility assumption is taken with respect to $(\Omega, \mathsf{k}_{\Omega} )$, whereas in Theorem~\ref{T:Main_Thm_1} the weak visibility is taken with respect to $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$. The second assumption in Theorem~\ref{T:Main_Thm_1} is that $\mathsf{k}_{\Omega}(W_0 \cap \Omega, \Omega \setminus U) > 0$; there is no such assumption in Theorem~\ref{T:Main_Thm_2}, even though it is needed in the proof, because there it follows from {\it (3)} of Proposition~\ref{P:Visibility_Outside_Point}. \end{remark} As a consequence of the above statement, we obtain the following.
\begin{corollary}\label{C:Pseudo_finte_type} Suppose $\Omega \subset \mathbb{C}^d$ is a bounded domain with $C^{\infty}$-smooth boundary and of finite D'Angelo type. Let $p \in \partial \Omega$. Then for all neighbourhoods $ U$ and $W$ of $p$ with $W \subset \subset U$, there exists a constant $C> 0$ such that for every $z, w \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega} (z, w) \leq \mathsf{k}_{\Omega} (z, w) + C. \] \end{corollary} \begin{remark} To the best of our knowledge, such an additive localization result for the Kobayashi distance has not been obtained before in this generality. The above corollary could be stated for a larger class of domains. We refer the reader to the following papers for such examples of domains: \cite{Bharali_Zimmer_1, Bharali_Zimmer_2, BM, BNT, CMS}. These domains are not necessarily of finite type near every boundary point, and in many cases the type may not even be defined because of the low regularity of the boundary of the domains. We also emphasize that these domains need not be Cauchy-complete with respect to the Kobayashi distance. \end{remark}
\subsection*{An application of additive localization} Next, we show that, for bounded domains, multiplicative localization (given below) follows as a consequence of additive localization. This shows that additive localization is indeed stronger than multiplicative localization, at least in this case. \begin{theorem}\label{T:Main_Thm_3} Suppose $\Omega \subset \mathbb{C}^d$ is a bounded domain. Suppose $U$ and $W$ are open subsets of $\mathbb{C}^d$ with $W \subset \subset U$, $W \cap \Omega \neq \emptyset$ and $U \cap \Omega$ is connected, and there exists a constant $C_0> 0$ such that for all $z, w \in W \cap \Omega$ \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq \mathsf{k}_{\Omega}(z, w) + C_0. \] Then there exists a constant $ C \geq 1$ such that for all $z, w \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq C\mathsf{k}_{\Omega}(z, w). \] \end{theorem} The above theorem follows from a more general result with no boundedness assumption on $\Omega$ but with some other conditions. \begin{theorem}\label{T:Main_Thm_4} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $W_1, W_2$ are open subsets of $\mathbb{C}^d$ with $W_1 \subset \subset W_2$, $W_1 \cap \Omega \neq \emptyset$ and $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$. Suppose $U$ and $W$ are open subsets of $\mathbb{C}^d$ with $W \subset \subset W_1 \subset \subset W_2 \subset \subset U$, $W \cap \Omega \neq \emptyset$, $U \cap \Omega$ is connected, and there exists a constant $C_0> 0$ such that for all $z, w \in W \cap \Omega$ \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq \mathsf{k}_{\Omega}(z, w) + C_0. \] Then there exists a constant $ C \geq 1$ such that for all $z, w \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq C\mathsf{k}_{\Omega}(z, w). \] \end{theorem} \begin{remark} The above result is proved using a weaker assumption than that of Theorem~1.4 in \cite{NOT}, although the proofs are similar. 
It can be applied in situations where no form of visibility is present, but additive localization is present. We do not know any non-trivial examples of such domains. \end{remark}
Very recently, similar localization results have been obtained by Nikolov--\"Okten--Thomas \cite{NOT} under some weaker conditions, along with results on local and global visibility. \section{Preliminaries} \subsection{Notations and conventions}
Let $\Omega \subset \mathbb{C}^d$, $d \geq 1$, be a domain, $U \subset \mathbb{C}^d$ an open set, and $X, Y \subset \mathbb{C}^d $. We denote by $\Omega \setminus X$ the complement of $X \cap \Omega$ in $\Omega$. \begin{itemize}
\item $X \subset \subset U$ means that $X$ is a relatively compact subset of $U$, i.e., $\overline{X} \subset U$ and $\overline{X}$ is a compact set.
\item For $z \in \Omega$, $\delta_{\Omega}(z) = \inf_{w \in \mathbb{C}^d \setminus \Omega}||z - w||$, where $||z - w||$ denotes the Euclidean distance between $z$ and $w$.
\item For $z \in \Omega$, $\mathsf{k}_{\Omega}(z, X) \defeq \inf_{x \in X}\mathsf{k}_{\Omega}(z, x)$.
\item $\mathsf{k}_{\Omega}(X, Y) \defeq \inf_{x \in X, y \in Y}\mathsf{k}_{\Omega}(x, y)$.
\item Throughout this paper, whenever an open set $U \subset \mathbb{C}^d$ satisfies $U \cap \partial \Omega \neq \emptyset$, we assume that the cardinality of $U \cap \partial \Omega$ is greater than one, i.e., $\#(U \cap \partial \Omega) > 1$, because when $\#(U \cap \partial \Omega) = 1$ the main results of this paper follow trivially. Here $\partial \Omega$ denotes the boundary of the domain $\Omega$.
\item For a curve $\gamma: I \longrightarrow \Omega$ defined on an interval $I \subset \mathbb{R}$, by a slight abuse of notation, we denote the range of the curve $\gamma$ by $\gamma$ itself. \end{itemize}
\begin{definition} For $ \lambda \geq 1$ and $\kappa \geq 0$, a curve $\gamma: I \longrightarrow \Omega$ defined on an interval $I \subset \mathbb{R}$ is said to be a $(\lambda, \kappa)${\bf -quasi-geodesic} if for all $s, t \in I$ \[
\frac{1}{\lambda}|t - s| - \kappa\leq \mathsf{k}_{\Omega}(\gamma(s), \gamma(t) ) \leq \lambda |s -t| + \kappa, \] and when $\lambda =1$ and $\kappa = 0$, $\gamma$ is called a geodesic. Furthermore, for $ \lambda \geq 1$ and $\kappa \geq 0$, a $(\lambda, \kappa)$-quasi-geodesic $\gamma: I \longrightarrow \Omega$ is said to be a $(\lambda, \kappa)${\bf-almost-geodesic} of $( \Omega, \mathsf{k}_{\Omega} )$ if \begin{itemize} \item $\gamma$ is an absolutely continuous curve, and \item $\kappa_{\Omega}(\gamma(t); \gamma^{\prime}(t) ) \leq \lambda$ for almost every $t \in I$ ($\gamma^{\prime}$ exists almost everywhere on $I$ since $\gamma$ is absolutely continuous). \end{itemize} \end{definition}
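To fix ideas, we record a standard illustration (stated here only for orientation, and not used in the sequel): in the unit disc $\mathbb{D} \subset \mathbb{C}$, with the normalization $\kappa_{\mathbb{D}}(z; v) = |v|/(1 - |z|^2)$, one has $\mathsf{k}_{\mathbb{D}}(0, z) = \tanh^{-1}|z|$, and radial curves parametrized by hyperbolic arclength are $(1, 0)$-almost-geodesics, i.e., geodesics.

```latex
% A standard example of a (1, 0)-almost-geodesic (a geodesic) in the
% unit disc $\mathbb{D}$: the radial curve
\[
  \gamma(t) = \tanh(t)\, e^{i\theta}, \qquad t \in [0, a],
\]
% satisfies, for all $s, t \in [0, a]$,
\[
  \mathsf{k}_{\mathbb{D}}(\gamma(s), \gamma(t)) = |t - s|
  \qquad \text{and} \qquad
  \kappa_{\mathbb{D}}(\gamma(t); \gamma'(t))
  = \frac{\operatorname{sech}^2(t)}{1 - \tanh^2(t)} = 1,
\]
% so both defining conditions hold with $\lambda = 1$ and $\kappa = 0$.
```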
Given any $ \kappa > 0$ and any Kobayashi hyperbolic domain $\Omega \subset \mathbb{C}^d$ (not necessarily bounded), it is shown in \cite[Proposition~5.3]{Bharali_Zimmer_2} that given any two points $z, w \in \Omega$ there exists a $(1, \kappa)$-almost-geodesic $ \gamma: [0, a] \longrightarrow \Omega$ joining $z$ and $w$, that is, $\gamma(0) = z$ and $\gamma(a) = w$. Now, we state the definition of visibility and weak visibility. \begin{definition}\label{D:Weak_vis} Let $\Omega \subset \mathbb{C}^d$ be a Kobayashi hyperbolic domain. We say that a pair of points $p, q \in \partial \Omega$, $p \neq q$, has the {\bf visibility} property with respect to $( \Omega, \mathsf{k}_{\Omega} )$ if for every $\lambda \geq 1$ and $\kappa \geq 0$ there exist neighbourhoods $U$ of $p$ and $V$ of $q$ in $\mathbb{C}^d$ with $\overline U \cap \overline V = \emptyset$, and a compact set $K \subset \Omega$, such that for any $(\lambda, \kappa)$-almost-geodesic $\gamma: [0, a] \longrightarrow \Omega$ with $\gamma(0) \in U \cap \Omega$ and $\gamma(a) \in V \cap \Omega$, $\gamma \cap K \neq \emptyset$. When the above condition is only required for $\lambda = 1$ and every $\kappa \geq 0$, we say that the distinct pair $p, q$ has the {\bf weak visibility} property with respect to $(\Omega, \mathsf{k}_{\Omega} )$. \end{definition}
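As a hedged illustration of Definition~\ref{D:Weak_vis} (a standard fact, not needed later): every pair of distinct points of $\partial \mathbb{D}$ has the visibility property with respect to $(\mathbb{D}, \mathsf{k}_{\mathbb{D}})$, since geodesics of $\mathbb{D}$ are arcs of circles orthogonal to $\partial \mathbb{D}$, and any such arc dips a definite Euclidean distance into the disc.

```latex
% Sketch: the geodesic of $\mathbb{D}$ with endpoints
% $e^{i\alpha}, e^{i\beta}$, where $\theta = \tfrac12 |\alpha - \beta|
% \in (0, \pi/2]$ (the angular distance measured mod $2\pi$), is the
% circular arc orthogonal to $\partial\mathbb{D}$; a direct computation
% with its centre (at distance $1/\cos\theta$ from the origin) and
% radius ($\tan\theta$) gives
\[
  \min_{z \in \gamma} |z|
  = \frac{1 - \sin\theta}{\cos\theta}
  = \tan\!\Big(\frac{\pi}{4} - \frac{\theta}{2}\Big) < 1,
\]
% so all geodesics joining small enough neighbourhoods of the two
% boundary points meet a fixed compact subset of $\mathbb{D}$.
```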
\section{Proofs of the main results}
The following localization lemma, due to L.~H.~Royden \cite[Lemma~2]{Royden}, whose proof can be found in \cite[Lemma~4]{Ian_Gramham}, is used to prove the localization of the Kobayashi distance and to study the relation between local and global visibility and Gromov hyperbolicity in \cite{BNT} and \cite{BGNT}, respectively.
\noindent {\bf Royden's Localization Lemma.} Suppose $\Omega \subset \mathbb{C}^d $ is a Kobayashi hyperbolic domain and $U $ is an open subset of $\mathbb{C}^d$ such that $U \cap \Omega \neq \emptyset$ and $U \cap \Omega $ is connected. Then for all $z \in U \cap \Omega$ and $v \in \mathbb{C}^d$, \begin{equation} \kappa_{\Omega}(z; v) \leq \kappa_{U \cap \Omega}(z; v) \leq \coth(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) \kappa_{\Omega}(z; v). \end{equation}
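For orientation, and consistent with Lemma~\ref{L:Varied_Loc_Kob_metric} below, we note an elementary identity showing that the multiplicative factor in Royden's lemma tends to $1$ exponentially fast as $\mathsf{k}_{\Omega}(z, \Omega \setminus U) \to \infty$:

```latex
% Elementary identity (valid for all $x > 0$):
\[
  \coth x
  = \frac{e^{x} + e^{-x}}{e^{x} - e^{-x}}
  = 1 + \frac{2}{e^{2x} - 1}
  = 1 + \frac{2 e^{-2x}}{1 - e^{-2x}},
\]
% so $\coth(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) - 1
%    = O\big(e^{-2 \mathsf{k}_{\Omega}(z, \Omega \setminus U)}\big)$.
```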
Next, we prove a lemma that is used to prove the localization of the Kobayashi distance for visibility domains stated above. \begin{lemma}\label{L:Varied_Loc_Kob_metric} Suppose $\Omega \subset \mathbb{C}^d $ is a Kobayashi hyperbolic domain and $U $ is an open subset of $\mathbb{C}^d$ such that $U \cap \Omega $ is nonempty and connected. Then for every $W \subset \subset U$ with $W \cap \Omega \neq \emptyset$ and $\mathsf{k}_{\Omega}(W \cap \Omega, \Omega \setminus U) > 0$, there exists $L > 0$ such that for all $z \in W \cap \Omega$ and $v \in \mathbb{C}^d$, \begin{equation} \kappa_{U \cap \Omega}(z; v) \leq \left( 1 + L e^{-\mathsf{k}_{\Omega} (z, \Omega \setminus U) } \right)\kappa_{\Omega}(z; v). \end{equation} \end{lemma} \begin{proof} We first note the inequality \begin{equation}\label{E:Ineq_tanh}
\tanh(x) \geq 1 - e^{-x}\,\, \forall \,\, x \geq 0. \end{equation} This follows from the following formula, obtained by algebraic manipulation: for all $x \in \mathbb{R}$, \[ \tanh(x) - (1 - e^{-x}) = \frac{(1 -e^{-x})^{2}}{e^{x} + e^{-x}} \geq 0. \] Now note that, for all $z \in W \cap \Omega$, we have $\coth(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) \leq \coth(\mathsf{k}_{\Omega}(W \cap \Omega, \Omega \setminus U)) =: L < +\infty$. By Royden's Localization Lemma, for all $z \in U\cap \Omega$ and $v \in \mathbb{C}^d$, we have \[ \tanh(\mathsf{k}_{\Omega}(z, \Omega \setminus U))\kappa_{U \cap \Omega}(z; v) \leq \kappa_{\Omega}(z; v). \] Using $\kappa_{ U \cap \Omega}(z; v) \leq \coth(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) \kappa_{\Omega}(z; v) \leq L \kappa_{\Omega}(z; v) $ for every $z \in W \cap \Omega$, together with (\ref{E:Ineq_tanh}), we obtain, for all $z \in W \cap \Omega$, \begin{equation*} \begin{split}
\kappa_{U \cap \Omega}(z; v)
&\leq (1 - \tanh(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) ) \kappa_{U \cap \Omega}(z; v) +\kappa_{\Omega}(z; v)\\
&\leq \left( 1 + L\left(1 - \tanh(\mathsf{k}_{\Omega}(z, \Omega \setminus U)) \right) \right) \kappa_{ \Omega}(z; v) \\
&\leq \left(1 + Le^{- \mathsf{k}_{\Omega}(z, \Omega \setminus U) } \right)\kappa_{ \Omega}(z; v) \quad [\text{by applying (\ref{E:Ineq_tanh})}].
\end{split} \end{equation*} This proves the lemma. \end{proof} \begin{proposition}\label{P:Visibility_Outside_Point} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ with $U \cap \partial \Omega \neq \emptyset$. Suppose $(\Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every distinct pair $\{p, q\}$, $ p,q \in U \cap \partial \Omega$. Let $W \subset \subset U$ be an open set with $W \cap \Omega \neq \emptyset$. Then \begin{enumerate}
\item for any $\xi_2 \in \partial \Omega \setminus U$ (whenever $\partial \Omega \setminus U \neq \emptyset$) and $\xi_1 \in U \cap \partial \Omega$, the distinct pair $\{\xi_1, \xi_2\}$ has weak visibility property with respect to $(\Omega, \mathsf{k}_{\Omega} )$. Consequently, every pair $\{\xi_1, \xi_2\}$, with $\xi_1 \in U \cap \partial \Omega$, $\xi_2 \in \partial \Omega$ and $\xi_1 \neq \xi_2$, has weak visibility property with respect to $(\Omega, \mathsf{k}_{\Omega} )$.
\item Let $V$ be a subset of $\mathbb{C}^d$ with $V \cap \Omega \neq \emptyset$ and $\overline W \cap \overline V = \emptyset$. Then for every $\kappa \geq 0$ there is a compact set $K \subset \Omega$ such that for every $z \in W \cap \Omega$, $w \in V \cap \Omega$ and every $(1, \kappa)$-almost-geodesic $\gamma$ joining $z$ and $w$, we have $\gamma \cap K \neq \emptyset$.
\item For every open sets $W_1, W_2$ with $W \subset \subset W_1 \subset \subset W_2 \subset \subset U$, we have $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$. Consequently, $\mathsf{k}_{\Omega}(W \cap \Omega, \Omega \setminus U) > 0$. \end{enumerate}
\end{proposition}
\begin{proof}
If possible, suppose that {\it (1)} is not true. Then, for some $\kappa \geq 0$, we get a sequence of $(1, \kappa)$-almost-geodesics $\gamma_n: [0, a_n] \longrightarrow \Omega$ such that $\gamma_n(0) \to \xi_1$ and $\gamma_n(a_n) \to \xi_2$ and $ \lim_{n \to \infty} \sup_{z \in \gamma_n \cap B(0, R)}\delta_{\Omega}(z) = 0 $ for all $R> R_0$, where we choose $R_0 > 0$ such that $\gamma_n \cap B(0, R_0) \neq \emptyset $ for all $n \in \mathbb{Z}_{+}$. Since $U$ is an open subset of $\mathbb{C}^d$ and $\xi_1 \in U$, there exists $0 < r < ||\xi_1 - \xi_2||/4$ such that the open ball of radius $2r$ centred at $\xi_1$ satisfies $B(\xi_1, 2r) \subset U$. Without loss of generality (by passing to a subsequence if necessary), we may assume that $\gamma_n(0) \in B(\xi_1, r/2)$ and $\gamma_n(a_n) \in B(\xi_2, r/2)$ for all $n \in \mathbb{Z}_{+}$. Now, let \[ t_n \defeq \inf\{t \in [0, a_n] : \gamma_n(t) \in \partial B(\xi_1, r) \cap \Omega \}. \] Again, by passing to a subsequence, we may assume that $ \gamma_n(t_n) \to \xi_0 \in (\partial B(\xi_1, r) \cap \partial \Omega) \subset U \cap \partial \Omega$. Now, note that the sequence of $(1, \kappa)$-almost-geodesics defined as \[
\sigma_n \defeq \gamma_n|_{[0, t_n]} \] shows that the distinct pair $\{ \xi_1, \xi_0 \} \subset U \cap \partial \Omega $ does not have weak visibility property with respect to $(\Omega, \mathsf{k}_{\Omega} )$. This gives a contradiction to our assumption. This proves {\it (1)}.
Next, we shall prove {\it (2)}. If possible, to get a contradiction, suppose that {\it (2)} does not hold. Then there exist $\kappa \geq 0$, sequences of points $z_n \in W \cap \Omega$ and $w_n \in V \cap \Omega$, and $(1, \kappa)$-almost-geodesics $\gamma_n$ of $(\Omega, \mathsf{k}_{\Omega} )$ joining $z_n$ and $w_n$ for all $n \in \mathbb{Z}_{+}$, such that $\lim_{n \to \infty} \sup_{z \in \gamma_n \cap B(0, R)}\delta_{\Omega}(z) = 0 $ for all $R> R_1$, where we choose $R_1 > 0$ such that $ \overline W \subset B(0, R_1) $. By passing to a subsequence, we may assume that $z_n \to \xi'_1 \in \overline W \cap \partial \Omega \subset U \cap \partial \Omega $ as $n \to \infty$.
Since $\overline {W \cap \Omega} \cap \overline{V} = \emptyset$, we may choose $r_1 > 0$ such that $4 r_1 < ||x - y||$ for all $x \in W \cap \Omega $ and $y \in V$. Assume, by passing to a subsequence, that $z_n \in B(\xi'_1, r_1/2) $ for all $n \in \mathbb{Z}_{+}$, and define \[ t'_n \defeq \inf\{t \in [0, a_n] : \gamma_n(t) \in \partial B(\xi'_1, r_1) \cap \Omega \}. \] Without loss of generality, we may assume that $\gamma_n(t'_n) \to \xi'_2 \in \partial B(\xi'_1, r_1) \cap \partial \Omega$ as $n \to \infty$. By construction,
$\xi'_1 \neq \xi'_2$. Now, note that the sequence of $(1, \kappa)$-almost-geodesics defined as \[
\sigma'_n \defeq \gamma_n|_{[0, t'_n]} \] shows that the distinct pair $\{ \xi'_1, \xi'_2 \} \subset \partial \Omega $ with $\xi'_1 \in U \cap \partial \Omega$ does not have weak visibility property with respect to $(\Omega, \mathsf{k}_{\Omega} )$. This gives a contradiction to {\it (1)}. Hence, {\it (2)} holds true.
To prove {\it (3)}, suppose for contradiction that $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) = 0$. Then there exist sequences of points $z_n \in W_1 \cap \Omega$ and $w_n \in \Omega \setminus W_2$ such that $\mathsf{k}_{\Omega}(z_n, w_n) \to 0$ as $n \to \infty$. Let $\gamma_n: [0, a_n] \longrightarrow \Omega$ be a sequence of $(1, 1/n)$-almost-geodesics such that $\gamma_n(0) = z_n \in W_1 \cap \Omega$ and $\gamma_n(a_n) = w_n \in \Omega \setminus W_2$ for all $n \in \mathbb{Z}_{+}$. Since $W_1$ is a relatively compact subset of $U$, and $\overline {W_1} \cap \overline{\Omega \setminus W_2} = \emptyset$, by {\it (2)} applied with $\kappa = 1$, there exists a compact set $K \subset \Omega$ such that for every $z \in W_1 \cap \Omega$ and $w \in \Omega \setminus W_2 $, if $\gamma$ is a $(1, 1)$-almost-geodesic joining $z$ and $w$, then $\gamma \cap K \neq \emptyset$. Since each $\gamma_n$ is, in particular, a $(1, 1)$-almost-geodesic, we have
$\gamma_n \cap K \neq \emptyset$ for all $n \in \mathbb{Z}_{+}$. Let $o_n \in \gamma_n \cap K$ and, by passing to a subsequence if necessary, assume that $o_n \to o \in K$. This gives, using the fact that $\gamma_n$ is a $(1, 1/n)$-almost-geodesic passing through $o_n$,
\[
\mathsf{k}_{\Omega}(z_n, o_n) + \mathsf{k}_{\Omega}(o_n, w_n) - 3/n \leq \mathsf{k}_{\Omega}(z_n, w_n).
\]
This implies, as $n \to \infty$, $\mathsf{k}_{\Omega}(z_n, o_n) \to 0$ and $\mathsf{k}_{\Omega}( o_n, w_n) \to 0$. Since $\Omega$ is Kobayashi hyperbolic, it follows that $z_n \to o$ and $w_n \to o$. This is a contradiction because $z_n \in W_1 \cap \Omega$, $w_n \in \Omega \setminus W_2$ for all $n \in \mathbb{Z}_{+}$, and $\overline{W_1 \cap \Omega} \cap \overline{\Omega \setminus W_2} = \emptyset$. Hence $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$.
Moreover, using the fact that if $X, Y \subset \Omega$ and $A \subset X$ and $B \subset Y$, then by definition $\mathsf{k}_{\Omega}( A, B) \geq \mathsf{k}_{\Omega}(X, Y)$, we have $\mathsf{k}_{\Omega}(W \cap \Omega, \Omega \setminus U) \geq \mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$. This completes the proof of {\it (3)}. \end{proof}
\begin{lemma}\label{L:Comple_dist_Compar} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$. Suppose $( \Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every pair of distinct points $\{p, q\}$, $p, q \in U \cap \partial \Omega$. Then for all open sets $W_1, W_2$ with $W_1 \subset \subset W_2 \subset \subset U$ and $W_1 \cap \Omega \neq \emptyset$, and every $o \in \Omega$, there exists $L > 0$ such that for all $z \in W_1 \cap \Omega$ \[ \mathsf{k}_{\Omega} ( z, o) \leq \mathsf{k}_{\Omega} (z, \Omega \setminus W_2) + L. \] \end{lemma} \begin{proof} Let $z \in W_1 \cap \Omega$ and $w \in \Omega \setminus W_2$. Let $\kappa > 0$ and let $\gamma$ be a $(1, \kappa)$-almost-geodesic joining $z$ and $w$. Then, by {\it (2)} of Proposition~\ref{P:Visibility_Outside_Point}, there exists a compact subset $K \subset \Omega$, independent of $z$ and $w$, such that $\gamma \cap K \neq \emptyset$. Let $o_w \in \gamma \cap K$; then we have (using the fact that $\gamma$ is a $(1, \kappa)$-almost-geodesic) \begin{equation*}
\begin{split}
\mathsf{k}_{\Omega}(z, o)
&\leq \mathsf{k}_{\Omega}(z, o_w) + \mathsf{k}_{\Omega}(o_w, o) \\
& \leq \mathsf{k}_{\Omega}(z, w) + 2\kappa + \sup_{y \in K}\mathsf{k}_{\Omega}(y, o).
\end{split} \end{equation*} Since $w \in \Omega \setminus W_2$ is arbitrary, by taking $L \defeq 2\kappa + \sup_{y \in K}\mathsf{k}_{\Omega}(y, o) $, we have, \[ \mathsf{k}_{\Omega} ( z, o) \leq \mathsf{k}_{\Omega} (z, \Omega \setminus W_2) + L. \] \end{proof} As a corollary of the above lemma, we have a similar local result.
\begin{lemma}\label{L:Comple_dist_Compar_local} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$ and $U \cap \Omega $ is connected. Suppose $( U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ has the weak visibility property for every pair of distinct points $\{p, q\}$, $p, q \in U \cap \partial \Omega$. Then for all open sets $W_1, W_2$ with $W_1 \subset \subset W_2 \subset \subset U$ and $W_1 \cap \Omega \neq \emptyset$, and every $o \in U \cap\Omega$, there exists $L > 0$ such that for all $z \in W_1 \cap \Omega$ \[ \mathsf{k}_{U \cap \Omega} ( z, o) \leq \mathsf{k}_{U \cap \Omega} (z, U \cap \Omega \setminus W_2) + L. \] \end{lemma} \begin{proof} The proof follows by replacing $\Omega$ with $U \cap \Omega$ in Lemma~\ref{L:Comple_dist_Compar}. \end{proof}
\begin{lemma}\label{L:One_kappa_geodesic} Suppose $\Omega \subset \mathbb{C}^d$ is a Kobayashi hyperbolic domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$ and $U \cap \Omega $ is connected. Suppose $( U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ has the weak visibility property for every pair of distinct points $\{p, q\}$, $p, q \in U \cap \partial \Omega$. Let $W, W_1$ be open sets with $W \subset \subset W_1 \subset \subset U $, $W \cap \Omega \neq \emptyset$ and $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus U) > 0$, and suppose that $z, w \in W \cap \Omega $ and $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega})$ joining $z$ and $w$ such that $\gamma \subset W$. Then there exists a $\kappa_0 > 0$ such that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$. \end{lemma} \begin{proof} By Lemma~\ref{L:Varied_Loc_Kob_metric}, there exists $L > 0$, which depends on $U, W_1$, such that for all $z \in W_1 \cap \Omega$ and $v \in \mathbb{C}^d$, \begin{equation*} \kappa_{U \cap \Omega}(z; v) \leq \left( 1 + L e^{-\mathsf{k}_{\Omega} (z, \Omega \setminus U) } \right)\kappa_{\Omega}(z; v). \end{equation*} Since $\mathsf{k}_{\Omega} (z, \Omega \setminus W_1) \leq \mathsf{k}_{\Omega} (z, \Omega \setminus U)$, we have \begin{equation}\label{E:Lemma_use_Roy_loc} \kappa_{U \cap \Omega}(z; v) \leq \left( 1 + L e^{-\mathsf{k}_{\Omega} (z, \Omega \setminus W_1) } \right)\kappa_{\Omega}(z; v). \end{equation}
Suppose that $\gamma: [0, a] \longrightarrow \Omega$, $a \geq 0$. Since $\gamma \subset W$, from above inequality and by \cite[Theorem~3.1]{Venturini}, for every $s_1, s_2 \in [0, a]$, we have \begin{equation}\label{E:One_kappa_geo_ineq} \begin{split} \mathsf{k}_{U \cap \Omega}(\gamma(s_1), \gamma(s_2)) & \leq \int_{s_1}^{s_2} \kappa_{U \cap \Omega}(\gamma(t); \gamma^{\prime}(t)) dt\\ & \leq \int_{s_1}^{s_2} \left( 1 + L e^{-\mathsf{k}_{\Omega} (\gamma(t), \Omega \setminus W_1) } \right)\kappa_{\Omega}(\gamma(t); \gamma^{\prime}(t)) dt\\
&\leq |s_2 - s_1| + L \int_{s_1}^{s_2} e^{-\mathsf{k}_{\Omega} (\gamma(t), \Omega \setminus W_1) } dt \\ & \leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + L \int_{0}^{a} e^{-\mathsf{k}_{\Omega} (\gamma(t), \Omega \setminus W_1) } dt. \end{split} \end{equation} {\bf Claim.} There exist $C_0, C_1 > 0$ such that for all $z \in W \cap \Omega$ \[ \mathsf{k}_{U \cap \Omega}(z, U \cap \Omega \setminus W_1) \leq C_0 \mathsf{k}_{\Omega}(z, \Omega \setminus W_1) + C_1. \] Assuming the claim and deferring the proof, first note that, for $o \in U \cap \Omega$, by Lemma~\ref{L:Comple_dist_Compar_local}, there exists $L_0 > 0$ which depends on $o, W, W_1, U$ such that for all $z \in W \cap \Omega$, \[ \mathsf{k}_{U \cap \Omega} ( z, o) \leq \mathsf{k}_{U \cap \Omega} (z, U \cap \Omega \setminus W_1) + L_0. \] Next, let $t_0 \in [0, a]$ such that for all $t \in [0, a]$, \[ \mathsf{k}_{\Omega}(\gamma(t_0), \Omega \setminus W_1) \leq \mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1). \] Now, applying the above inequalities in the following computation, we obtain, for all $t \in [0, a]$ \begin{equation*}
\begin{split}
|t - t_0| - \kappa
&\leq \mathsf{k}_{\Omega}(\gamma(t), \gamma(t_0))\\
& \leq \mathsf{k}_{\Omega}(\gamma(t), o) + \mathsf{k}_{\Omega}(\gamma(t_0), o)\\
& \leq \mathsf{k}_{U \cap \Omega}(\gamma(t), o) + \mathsf{k}_{U \cap \Omega}(\gamma(t_0), o)\\
&\leq \mathsf{k}_{U \cap \Omega} (\gamma(t), U \cap \Omega \setminus W_1) + \mathsf{k}_{U \cap \Omega} (\gamma(t_0), U \cap \Omega \setminus W_1) + 2L_0.
\end{split} \end{equation*} If we apply the claim to both terms on the right-hand side and then use the minimality of $t_0$, namely $\mathsf{k}_{\Omega}(\gamma(t_0), \Omega \setminus W_1) \leq \mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1)$, we get \begin{equation*}
\begin{split}
|t - t_0| - \kappa
&\leq C_0 \mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1) + C_0 \mathsf{k}_{\Omega}(\gamma(t_0), \Omega \setminus W_1) + 2C_1 + 2L_0\\
& \leq 2C_0 \mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1) + 2C_1 + 2 L_0.
\end{split} \end{equation*} This gives, for all $ t \in [0, a]$ \[
-\mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1) \leq \frac{ -|t - t_0|}{2C_0} + \frac{2C_1 + 2L_0 + \kappa}{2C_0}. \] After using the above inequality in inequality~(\ref{E:One_kappa_geo_ineq}), we obtain for every $s_1, s_2 \in [0, a]$, \begin{equation} \begin{split} \mathsf{k}_{U \cap \Omega}(\gamma(s_1), \gamma(s_2))
& \leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + LC_3 \int_{0}^{a} e^{-\frac{|t - t_0|}{2C_0} } dt\\
&\leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + LC_3 \int_{0}^{+\infty} e^{-\frac{|t - t_0|}{2C_0} } dt\\ &\leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + 4LC_3 C_0 \end{split} \end{equation} where $C_3 = e^{\frac{2C_1 + 2L_0 + \kappa}{2C_0}}$, and the last inequality follows from the following bound: \begin{equation}
\int_{0}^{+\infty} e^{-\frac{|t - t_0|}{2C_0} } dt = \int_{0}^{t_0} e^{-\frac{t_0 - t}{2C_0} } dt + \int_{t_0}^{+\infty} e^{-\frac{t - t_0}{2C_0} } dt \leq 2C_0 + 2C_0 = 4C_0. \end{equation} This shows that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic for $\kappa_0 = \kappa + 4LC_3C_0$, given that the claim is true (note that $\kappa_0$ depends only on $W,W_1,U$ and $\kappa$). Now, we give a proof of the claim. \begin{proof}[Proof of the claim.] By Royden's Localization Lemma and setting $C_0 := \coth{\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus U)} < +\infty$, for all $z \in W_1 \cap \Omega $ and $v \in \mathbb{C}^d$, \[ \kappa_{U \cap \Omega}(z; v) \leq \coth{\mathsf{k}_{\Omega}(z, \Omega \setminus U)} \kappa_{\Omega}(z;v) \leq \coth{\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus U)} \kappa_{\Omega}(z;v) = C_0 \kappa_{\Omega}(z;v). \] Now, suppose that $w_n \in \Omega \setminus W_1$ are such that $\lim_{n \to \infty} \mathsf{k}_{\Omega}(z, w_n) = \mathsf{k}_{\Omega}(z, \Omega \setminus W_1) $ and that $\gamma_n : [0, a_n] \longrightarrow \Omega$ is a sequence of $(1, \kappa)$-almost-geodesics of $(\Omega, \mathsf{k}_{\Omega} )$ joining $z$ and $w_n$, for some $\kappa > 0$. Define \[ t_n = \sup \{t : t \in [0, a_n]\,\, \text{such that}\,\, \gamma_n(s) \in W_1 \cap \Omega \,\forall s \in [0, t]\}. \]
By definition, $\gamma_n([0, t_n)) \subset W_1 \cap \Omega$. Then, \begin{equation*}
\begin{split}
\mathsf{k}_{U \cap \Omega}(z, U \cap \Omega \setminus W_1)
&\leq \mathsf{k}_{U \cap \Omega}(z, \gamma_n(t_n) )\\
& \leq \int_{0}^{t_n} \kappa_{U \cap \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) dt\\
& \leq C_0 \int_{0}^{t_n} \kappa_{ \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) dt\\
&\leq C_0 \int_{0}^{a_n} \kappa_{ \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) dt\\
&\leq C_0 \mathsf{k}_{\Omega}(z, w_n) + C_0 \kappa.\\
\end{split} \end{equation*} The third inequality follows because $\kappa_{U \cap \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) \leq C_0 \kappa_{\Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) $ for a.e. $t \in [0, a_n] $. Taking the limit as $n \to \infty$, we get \[ \mathsf{k}_{U \cap \Omega}(z, U \cap \Omega \setminus W_1) \leq C_0\mathsf{k}_{\Omega}(z, \Omega \setminus W_1) + C_0 \kappa. \] This proves the claim. Hence the proof is complete. \end{proof} \end{proof}
\begin{lemma}\label{L:One_kappa_geodesic_Omega} Suppose $\Omega \subset \mathbb{C}^d$ is a domain and $U$ is an open subset of $\mathbb{C}^d$ such that $U \cap \partial \Omega \neq \emptyset$. Suppose $(\Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every pair of distinct points $\{p, q\}$, $p, q \in U \cap \partial \Omega$. Let $W \subset \subset U $ be an open set with $W \cap \Omega \neq \emptyset$, and suppose that $z, w \in W \cap \Omega $ and $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega})$ joining $z$ and $w$ such that $\gamma \subset W$. Then there exists a $\kappa_0 > 0$ such that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$. \end{lemma} \begin{proof} Let $W_1 \subset \mathbb{C}^d$ be an open set such that $W \subset \subset W_1 \subset \subset U$. By {\it (3)} of Proposition~\ref{P:Visibility_Outside_Point}, $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus U) >0 $.
Therefore, (as observed at the start of the proof of Lemma~\ref{L:One_kappa_geodesic}), by Lemma~\ref{L:Varied_Loc_Kob_metric}, there exists $L > 0$ which depends on $U, W_1$ such that for all $z \in W_1 \cap \Omega$ and $v \in \mathbb{C}^d$, \begin{equation*} \kappa_{U \cap \Omega}(z; v) \leq \left( 1 + L e^{-\mathsf{k}_{\Omega} (z, \Omega \setminus W_1) } \right)\kappa_{\Omega}(z; v). \end{equation*} Suppose that $\gamma: [0, a] \longrightarrow \Omega$. Then by doing similar computations as of (\ref{E:One_kappa_geo_ineq}) of Lemma~ \ref{L:One_kappa_geodesic}, for every $s_1, s_2 \in [0, a]$, we have \begin{equation}\label{E:One_kappa_geo_ineq_2} \begin{split} \mathsf{k}_{U \cap \Omega}(\gamma(s_1), \gamma(s_2)) & \leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + L \int_{0}^{a} e^{-\mathsf{k}_{\Omega} (\gamma(t), \Omega \setminus W_1) } dt. \end{split} \end{equation}
Now, by Lemma~\ref{L:Comple_dist_Compar}, there exists $L_0 > 0$ which depends on $o, W, W_1, U$ such that for all $z \in W \cap \Omega$, \[ \mathsf{k}_{\Omega} ( z, o) \leq \mathsf{k}_{\Omega} (z, \Omega \setminus W_1) + L_0. \] Next, let $t_0 \in [0, a]$ be such that for all $t \in [0, a]$, \[ \mathsf{k}_{\Omega}(\gamma(t_0), \Omega \setminus W_1) \leq \mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1). \] Since $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$, a simple computation (using the above inequalities and the triangle inequality) gives, for all $t \in [0, a]$, \begin{equation*}
\begin{split}
|t - t_0| - \kappa
&\leq 2\mathsf{k}_{\Omega} (\gamma(t), \Omega \setminus W_1) + 2L_0.
\end{split} \end{equation*} This gives, for all $ t \in [0, a]$ \[
-\mathsf{k}_{\Omega}(\gamma(t), \Omega \setminus W_1) \leq \frac{ -|t - t_0|}{2} + \frac{2L_0 + \kappa}{2}. \] After using the above inequality in inequality~(\ref{E:One_kappa_geo_ineq_2}), we obtain for every $s_1, s_2 \in [0, a]$, \begin{equation} \begin{split} \mathsf{k}_{U \cap \Omega}(\gamma(s_1), \gamma(s_2))
& \leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + LC_3 \int_{0}^{a} e^{-\frac{|t - t_0|}{2} } dt\\ &\leq \mathsf{k}_{\Omega}(\gamma(s_1), \gamma(s_2)) + \kappa + 4LC_3 C_0 \end{split} \end{equation} where $C_3 = e^{\frac{2L_0 + \kappa}{2}}$, and the last inequality follows from the following bound: \begin{equation}
\int_{0}^{+\infty} e^{-\frac{|t - t_0|}{2} } dt \leq 4. \end{equation} This shows that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic for $\kappa_0 = \kappa + 4LC_3$ (note that $\kappa_0$ depends only on $W,W_1,U$ and $\kappa$). Hence the proof is complete. \end{proof}
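For completeness, here is the elementary computation behind this last bound (our addition; rescaling the same computation also yields the bound $\int_{0}^{+\infty} e^{-|t - t_0|/(2C_0)}\, dt \leq 4C_0$ used in the previous lemma):

```latex
\begin{align*}
\int_{0}^{+\infty} e^{-\frac{|t - t_0|}{2}}\, dt
  &\leq \int_{-\infty}^{+\infty} e^{-\frac{|t - t_0|}{2}}\, dt
   = 2\int_{0}^{+\infty} e^{-\frac{s}{2}}\, ds
   \qquad (s = |t - t_0|)\\
  &= 2\Bigl[-2\,e^{-\frac{s}{2}}\Bigr]_{0}^{+\infty} = 4.
\end{align*}
```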
\begin{proof}[Proof of Theorem~\ref{T:Main_Thm_1}] Let $W_1, W_2$ be open sets such that $W \subset \subset W_1 \subset \subset W_2 \subset \subset W_0$. Let $z, w \in W \cap \Omega$. Let $ \kappa > 0$ and $\gamma : [0, a] \longrightarrow \Omega$ be a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$ joining $z$ and $w$.
\noindent {\bf Case 1.} $\gamma \subset W_2 \cap \Omega $.
Since $\mathsf{k}_{\Omega}(W_0 \cap \Omega, \Omega \setminus U) > 0$, by replacing $W$ and $W_1$ by $W_2$ and $W_0$ respectively in Lemma~\ref{L:One_kappa_geodesic}, there exists $\kappa_0 > 0$ which depends on $W_2, W_0, U$ and $\kappa$ such that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$. This gives, using the fact that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ and $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$, \[ \mathsf{k}_{U \cap \Omega }(z, w) \leq a + \kappa_0 \leq \mathsf{k}_{ \Omega }(z, w) + \kappa + \kappa_0. \] \noindent {\bf Case 2.} $\gamma \nsubseteq W_2 \cap \Omega $. Now, set \[ s_0 = \sup \{t : t \in [0, a]\,\, \text{such that}\,\, \gamma(s) \in W_1 \,\forall s \in [0, t]\}, \] \[ t_0= \inf \{t : t \in [0, a]\,\, \text{such that}\,\, \gamma(s) \in W_1 \,\forall s \in [t, a]\}. \]
Note that by definition, $\gamma([0, s_0]) \subset W_2$ and $\gamma([t_0, a]) \subset W_2$. Hence $\gamma|_{[0, s_0]}$ and $ \gamma|_{[t_0, a]}$ are $(1, \kappa_0)$-quasi-geodesics of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ (as observed in Case 1).
Now, let $\sigma^1 : [0, b^1] \longrightarrow U \cap \Omega $ and $\sigma^2: [0, b^2] \longrightarrow U \cap \Omega $ be $(1, \kappa)$-almost-geodesics of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ joining $z$ and $\gamma(s_0)$, and $w$ and $\gamma(t_0)$, respectively. Let $U' \subset \subset U$ be an open set such that $\#(U' \cap \partial \Omega) > 1$ and $W_0 \subset \subset U'$. Since $U' \cap \partial (U \cap \Omega) = U'\cap \partial \Omega$, every pair of distinct boundary points in $U'\cap \partial \Omega$ has the weak visibility property with respect to $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$. Then, by {\it (2)} of Proposition~\ref{P:Visibility_Outside_Point}, after replacing the domain $\Omega$ by $ U\cap \Omega$, $U$ by $U'$, $W$ by $W$ and $V$ by $ (U \cap \Omega) \setminus W_1$, there exists a compact set $K \subset U \cap \Omega$ such that $\sigma^1 \cap K$ and $\sigma^2 \cap K$ are non-empty. Let $ o^1 \in \sigma^1 \cap K$ and $o^2 \in \sigma^2 \cap K$.
Now, using the fact that $\sigma^1 $ and $\sigma^2 $ are $(1, \kappa)$-almost-geodesics of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ and the triangle inequality, we have the following inequality \begin{equation}\label{E:C_inequality_1} \begin{split}
\mathsf{k}_{U \cap \Omega}(z, \gamma(s_0))
&\geq \mathsf{k}_{U \cap \Omega}(z, o^1) + \mathsf{k}_{U \cap \Omega}(o^1, \gamma(s_0)) - 3\kappa\\
&\geq \mathsf{k}_{U \cap \Omega}(z, o) - \mathsf{k}_{U \cap \Omega}(o, o^1) + \mathsf{k}_{U \cap \Omega}(o, \gamma(s_0)) - \mathsf{k}_{U \cap \Omega}(o, o^1) - 3\kappa\\
&\geq \mathsf{k}_{U \cap \Omega}(z, o) + \mathsf{k}_{U \cap \Omega}(o, \gamma(s_0)) - 2 \sup_{y \in K }\mathsf{k}_{U \cap \Omega}(o, y) - 3\kappa, \end{split} \end{equation} and similarly \begin{equation}\label{E:C_inequality_2} \begin{split}
\mathsf{k}_{U \cap \Omega}(w, \gamma(t_0))
&\geq \mathsf{k}_{U \cap \Omega}(w, o) + \mathsf{k}_{U \cap \Omega}(o, \gamma(t_0)) - 2 \sup_{y \in K }\mathsf{k}_{U \cap \Omega}(o, y) - 3\kappa. \end{split} \end{equation} Let $C_0 \defeq 2 \sup_{y \in K }\mathsf{k}_{U \cap \Omega}(o, y) + 3\kappa$.
Now, using the fact that $\gamma$ is a $(1, \kappa)$-almost-geodesic of $ (\Omega, \mathsf{k}_{\Omega} )$, and $\gamma|_{[0, s_0]}$ and $ \gamma|_{[t_0, a]}$ are $(1, \kappa_0)$-quasi-geodesics of $ (U \cap\Omega, \mathsf{k}_{U \cap \Omega} )$, we have \begin{equation*} \begin{split}
\mathsf{k}_{\Omega}(z, w)
&\geq a - \kappa\\
&\geq a - t_0 + s_0 -\kappa \quad [\text{since $s_0 \leq t_0$ }]\\
&\geq \mathsf{k}_{U \cap \Omega}(w, \gamma(t_0) ) - \kappa_0 + \mathsf{k}_{U \cap \Omega}(\gamma(s_0), z) - \kappa_0 - \kappa\\
&\geq \mathsf{k}_{U \cap \Omega} (w, o) + \mathsf{k}_{U \cap \Omega}(o, \gamma(t_0) ) - C_0 + \mathsf{k}_{U \cap \Omega} (z, o) + \mathsf{k}_{U \cap \Omega}(o, \gamma(s_0) ) - C_0 - 2 \kappa_0- \kappa\\
&\geq \mathsf{k}_{U \cap \Omega} (z, o) + \mathsf{k}_{U \cap \Omega} (w, o) - 2C_0- 2 \kappa_0- \kappa\\
&\geq \mathsf{k}_{U \cap \Omega} (z, w) - 2C_0- 2 \kappa_0 - \kappa. \end{split} \end{equation*} The fourth inequality follows from (\ref{E:C_inequality_1}) and (\ref{E:C_inequality_2}). This proves the theorem by taking $C \defeq 2C_0 + 2 \kappa_0 + \kappa$. \end{proof}
\begin{proof}[Proof of Theorem~\ref{T:Main_Thm_2}] Let $W_1, W_2$ be open sets such that $W \subset \subset W_1 \subset \subset W_2 \subset \subset U$. Let $z, w \in W \cap \Omega$. Let $ \kappa > 0$ and $\gamma : [0, a] \longrightarrow \Omega$ be a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$ joining $z$ and $w$.
\noindent {\bf Case 1.} $\gamma \subset W_2 \cap \Omega $.
Now, replacing $W$ by $W_2$ in Lemma~\ref{L:One_kappa_geodesic_Omega}, there exists $\kappa_0 > 0$ which depends on $W_2, U$ and $\kappa$ such that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$.
This gives, using the fact that $\gamma$ is a $(1, \kappa_0)$-quasi-geodesic of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ and $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$, \[ \mathsf{k}_{U \cap \Omega }(z, w) \leq a + \kappa_0 \leq \mathsf{k}_{ \Omega }(z, w) + \kappa + \kappa_0. \] \noindent {\bf Case 2.} $\gamma \nsubseteq W_2 \cap \Omega $. Now, set \[ s_0 = \sup \{t : t \in [0, a]\,\, \text{such that}\,\, \gamma(s) \in W_1 \,\forall s \in [0, t]\}, \] \[ t_0= \inf \{t : t \in [0, a]\,\, \text{such that}\,\, \gamma(s) \in W_1 \,\forall s \in [t, a]\}. \]
Note that by definition, $\gamma([0, s_0]) \subset W_2$ and $\gamma([t_0, a]) \subset W_2$. Hence $\sigma^1 \defeq \gamma|_{[0, s_0]}$ and $\sigma^2 \defeq \gamma|_{[t_0, a]}$ are $(1, \kappa_0)$-quasi-geodesics of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$ (as observed in Case 1). Also note that $\sigma^1: [0, s_0] \longrightarrow \Omega$ and $\sigma^2: [t_0, a] \longrightarrow \Omega$ are $(1, \kappa)$-almost-geodesics of $(\Omega, \mathsf{k}_{\Omega} )$ such that $\sigma^1(0), \sigma^2(a) \in W \cap \Omega$ and $\sigma^1(s_0), \sigma^2(t_0) \in \Omega \setminus W_1 $. Furthermore, $\overline W \cap \overline{ \Omega \setminus W_1} = \emptyset$. Now, since $(\Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every pair of distinct points in $U \cap \partial \Omega$, by {\it (2)} of Proposition~\ref{P:Visibility_Outside_Point}, after replacing $W$ and $V$ by $W$ and $\Omega \setminus W_1$ respectively, for $\kappa $ as above, there exists a compact set $K \subset \Omega$ such that $\sigma^1 \cap K \neq \emptyset$ and $\sigma^2 \cap K \neq \emptyset$. Let $o^1 \in \sigma^1 \cap K \subset \overline W_1 \cap \Omega$ and $o^2 \in \sigma^2 \cap K \subset \overline W_1 \cap \Omega$. Now, applying the fact that $\gamma$ is a $(1, \kappa)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega} )$ and that $\sigma^1$ and $\sigma^2$ are $(1, \kappa_0)$-quasi-geodesics of $(U \cap \Omega, \mathsf{k}_{U \cap \Omega} )$, together with the triangle inequality, we obtain the following sequence of inequalities: \begin{equation*} \begin{split}
\mathsf{k}_{\Omega}(z, w)
&\geq a - \kappa\\
&\geq a - t_0 + s_0 -\kappa \quad [\text{since $s_0 \leq t_0$ }]\\
&\geq \mathsf{k}_{U \cap \Omega}(w, \gamma(t_0) ) - \kappa_0 + \mathsf{k}_{U \cap \Omega}(\gamma(s_0), z) - \kappa_0 - \kappa\\
&\geq \mathsf{k}_{U \cap \Omega}(w, o^2) + \mathsf{k}_{U \cap \Omega}(o^2, \gamma(t_0) ) - 3\kappa_0 + \mathsf{k}_{U \cap \Omega}(z, o^1) + \mathsf{k}_{U \cap \Omega}(o^1, \gamma(s_0) ) - 3\kappa_0 - 2 \kappa_0 - \kappa\\
&\geq \mathsf{k}_{U \cap \Omega}(w, o^2) + \mathsf{k}_{U \cap \Omega}(z, o^1) - 8 \kappa_0 - \kappa\\
&= \mathsf{k}_{U \cap \Omega}(w, o^2) + \mathsf{k}_{U \cap \Omega}(o^2, o^1) + \mathsf{k}_{U \cap \Omega}(z, o^1) - \mathsf{k}_{U \cap \Omega}(o^2, o^1) - 8 \kappa_0- \kappa\\
&\geq \mathsf{k}_{U \cap \Omega}(z, w) - \sup_{x, y \in K \cap \overline W_1} \mathsf{k}_{U \cap \Omega}(x, y) - 8\kappa_0 - \kappa. \end{split} \end{equation*} Note that $K \cap \overline W_1$ is a relatively compact subset of $U \cap \Omega$, and hence $\sup_{x, y \in K \cap \overline W_1} \mathsf{k}_{U \cap \Omega}(x, y) < + \infty$. This proves the theorem by taking $C \defeq \sup_{x, y \in K \cap \overline W_1} \mathsf{k}_{U \cap \Omega}(x, y) + 8\kappa_0 + \kappa$.
\end{proof} \begin{proof}[Proof of Corollary~\ref{C:Pseudo_finte_type}] This follows from Theorem~\ref{T:Main_Thm_2} by noting that $\Omega$ is also a Goldilocks domain, by \cite{Bharali_Zimmer_1}. Hence $(\Omega, \mathsf{k}_{\Omega} )$ has the weak visibility property for every pair of distinct boundary points in $\partial \Omega$. \end{proof} Next, we give a proof of Theorem~\ref{T:Main_Thm_4}, which is similar to the proof of \cite[Theorem]{NOT} by Nikolov--\"Okten--Thomas, although our result holds in a more general setting. \begin{proof}[Proof of Theorem~\ref{T:Main_Thm_4}] We break the problem into two cases. Set $c\defeq \mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$. {\bf Case 1.} $z, w \in W \cap \Omega$ are such that $\mathsf{k}_{\Omega} (z, w) \geq \frac{1}{4} \mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) = \frac{c}{4}$. In this case, using the following additive localization: \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq \mathsf{k}_{\Omega}(z, w) + C_0, \] we get \[ \frac{\mathsf{k}_{U \cap \Omega}(z, w)}{\mathsf{k}_{\Omega}(z, w)} \leq 1 + \frac{C_0}{\mathsf{k}_{\Omega}(z, w)} \leq 1 + \frac{4C_0}{c}. \] {\bf Case 2.} $z, w \in W \cap \Omega$ are such that $\mathsf{k}_{\Omega} (z, w) < c/4 $. Suppose that $\gamma_n : [0, a_n] \longrightarrow \Omega$ is a sequence of $(1, c/4n)$-almost-geodesics such that $\gamma_n(0) = z$ and $\gamma_n (a_n) = w$ for all $n \in \mathbb{Z}_{+}$ (existence follows from \cite[Proposition~5.3]{Bharali_Zimmer_2}).
\noindent {\bf Claim.} $\gamma_n \subset W_2$ for all $n \in \mathbb{Z}_{+}$. \begin{proof}[Proof of the claim] If not, then there exists $N \in \mathbb{Z}_{+}$ such that $\gamma_N \cap (\Omega \setminus W_2) \neq \emptyset$. Let $z_0 \in \gamma_N \cap (\Omega \setminus W_2) $. Since $z, w \in W \cap \Omega$ and $z_0 \in \Omega \setminus W_2$, it follows by definition that \[ \mathsf{k}_{\Omega}(z,z_0) \geq \mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2)= c, \quad \mathsf{k}_{\Omega}(w,z_0) \geq \mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2)= c. \] Since $\gamma_N$ is a $(1, c/4N)$-almost-geodesic of $(\Omega, \mathsf{k}_{\Omega})$, using the inequalities above, we have \[ \mathsf{k}_{\Omega}(z,w) \geq \mathsf{k}_{\Omega}(z,z_0) + \mathsf{k}_{\Omega}(z_0,w) - 3c/4N \geq c + c - 3c/4N > c. \] This is a contradiction because, by assumption, $\mathsf{k}_{\Omega}(z,w) < c/4$. Hence the claim is true. \end{proof} Now, using Royden's Localization Lemma, it follows that for all $z \in W_1 \cap \Omega$ and $v \in \mathbb{C}^d$, \[ \kappa_{U \cap \Omega}(z;v) \leq c \kappa_{\Omega}(z;v). \] This gives (by definition, $\gamma_n$ is absolutely continuous, hence differentiable almost everywhere on $[0, a_n]$ for all $n \in \mathbb{Z}_{+}$), by \cite[Theorem~3.1]{Venturini}, for all $n \in \mathbb{Z}_{+}$, \begin{equation} \begin{split} \mathsf{k}_{U \cap \Omega}(z, w) & \leq \int_{0}^{a_n} \kappa_{U \cap \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) dt\\ & \leq c\int_{0}^{a_n} \kappa_{ \Omega}(\gamma_n(t); \gamma_n^{\prime}(t)) dt\\ &\leq c (a_n - c/4n) +c^2/4n\\ & \leq c\mathsf{k}_{\Omega}(z, w) + c^2/4n. \end{split} \end{equation} By letting $n \to \infty$, we get \[ \mathsf{k}_{U \cap \Omega}(z, w) \leq c\mathsf{k}_{\Omega}(z, w). \] By combining both cases, we obtain the required result.
\end{proof} \begin{proof}[Proof of Theorem~\ref{T:Main_Thm_3}] For the bounded domain $\Omega$, we have $\mathsf{k}_{\Omega}(W_1 \cap \Omega, \Omega \setminus W_2) > 0$ for all open sets $W_1, W_2$ with $W_1 \subset \subset W_2$ and $W_1 \cap \Omega \neq \emptyset$; this can be seen from \cite[Proposition 3.5.(1)]{Bharali_Zimmer_1}. Hence the result follows from Theorem~\ref{T:Main_Thm_4}. \end{proof} \noindent \textbf{Acknowledgements.} The author would like to thank Anwoy Maitra, Vikramjeet Singh Chandel and Sushil Gorai for useful discussions, and Gautam Bharali for useful suggestions and corrections. The author would also like to thank the referee for the corrections and many useful suggestions that improved the exposition of the previous version of this paper, and for suggesting a short and transparent proof of Lemma~\ref{L:Comple_dist_Compar}, which in turn simplified the proofs of the main theorems.
\end{document}
\begin{document}
\author{Seonghak Kim}
\address{Institute for Mathematical Sciences\\ Renmin University of China \\ Beijing 100872, PRC}
\email{[email protected]}
\author{Youngwoo Koh}
\address{School of Mathematics\\ Korea Institute for Advanced Study \\ Seoul 130-722, ROK}
\email{[email protected]}
\title[1-D non-convex elastodynamics]{Weak solutions for one-dimensional non-convex elastodynamics}
\subjclass[2010]{74B20,74N15,35M13} \keywords{non-convex elastodynamics, hyperbolic-elliptic equations, phase transition, partial differential inclusions, Baire's category method, microstructures of weak solutions}
\begin{abstract} We explore local existence and properties of classical weak solutions to the initial-boundary value problem of a one-dimensional quasilinear equation of elastodynamics with non-convex stored-energy function, a model of phase transitions in elastic bars proposed by Ericksen \cite{Er}. The instantaneous formation of microstructures of local weak solutions is observed for all smooth initial data whose initial strain has range overlapping with the phase transition zone of the Piola-Kirchhoff stress. As byproducts, it is shown that such a problem admits a local weak solution for all smooth initial data, and that for some smooth initial data it admits local weak solutions that are smooth for a short period of time and exhibit microstructures thereafter. In a parallel exposition, we also include some results concerning one-dimensional quasilinear hyperbolic-elliptic equations. \end{abstract} \maketitle
\section{Introduction} The evolution process of a one-dimensional continuous medium with elastic response can be modeled by quasilinear wave equations of the form \begin{equation}\label{main-P} u_{tt} =\sigma(u_x)_x, \end{equation} where $u=u(x,t)$ denotes the displacement of a reference point $x$ at time $t$ and $\sigma=\sigma(s)$ the Piola-Kirchhoff stress, which is the derivative of a stored-energy function $W=W(s)\ge 0$. With $v=u_x$ and $w=u_t$, one may study equation (\ref{main-P}) as the system of conservation laws: \begin{equation}\label{main-cons} \left\{\begin{split} v_t & = w_x, \\ w_t & = \sigma(v)_x. \end{split}\right. \end{equation}
In the case of a strictly convex stored-energy function, the existence of weak or classical solutions to equation (\ref{main-P}) and its vectorial case has been studied extensively: Global weak solutions to system (\ref{main-cons}) and hence equation (\ref{main-P}) are established in a classical work of {DiPerna} \cite{Di} via the vanishing viscosity method in the framework of compensated compactness of {Tartar} \cite{Ta} for $L^\infty$ data and later by {Lin} \cite{Li} and {Shearer} \cite{Sr} in an $L^p$ setup. This framework is also used to construct global weak solutions to (\ref{main-P}) via relaxation methods by {Serre} \cite{Se} and {Tzavaras} \cite{Tz}. An alternative variational scheme is studied by {Demoulini \emph{et al.}} \cite{DST} via time discretization. However, global existence of weak solutions to the vectorial case of (\ref{main-P}) is still open. In regard to classical solutions to (\ref{main-P}) and its vectorial case, one can refer to {Dafermos and Hrusa} \cite{DH} for local existence of smooth solutions, to {Klainerman and Sideris} \cite{KS} for global existence of smooth solutions for small initial data in dimension 3, and to {Dafermos} \cite{Ds1} for uniqueness of smooth solutions in the class of BV weak solutions whose shock intensity is not too strong.
The convexity assumption on the stored-energy function is often regarded as a severe restriction in view of the actual behavior of elastic materials (see, e.g., \cite[Section 2]{Hi} and \cite[Section 8]{CN}). However, there have not been many analytic works dealing with the lack of convexity of the energy function. For the vectorial case of equation (\ref{main-P}) in dimension 3, measure-valued solutions are constructed for polyconvex energy functions by {Demoulini \emph{et al.}} \cite{DST1}. In the same situation, it is also shown by those authors \cite{DST2} that a dissipative measure-valued solution coincides with a strong one provided the latter exists. Assuming convexity on the energy function at infinity but not allowing polyconvexity, measure-valued solutions are obtained by {Rieger} \cite{Ri} for the vectorial case of (\ref{main-P}) in any dimension. Despite all these existence results, there has been no example of a non-convex energy function with which (\ref{main-P}) admits \emph{classical} weak solutions in general, not to mention its vectorial case. Amid both optimistic and pessimistic opinions, {Rieger} \cite{Ri} expects such solutions to exist, even ones exhibiting microstructures of phase transitions. Moreover, such expected phenomenology is successfully implemented in some numerical simulations \cite{CR, Pr}.
In this paper, we study weak solutions to the one-dimensional initial-boundary value problem of non-convex elastodynamics \begin{equation}\label{ib-P} \begin{cases} u_{tt} =\sigma(u_x)_x& \mbox{in $\Omega_T=\Omega\times (0,T)$,} \\ u(0,t)=u(1,t)=0 & \mbox{for $t\in(0,T)$,}\\ u =g,\,u_t=h & \mbox{on $\Omega\times \{t=0\}$}, \end{cases} \end{equation}
where $\Omega=(0,1)\subset\mathbb R$ is the domain occupied by a reference configuration of an elastic bar, $T>0$ is a fixed number, $g$ is the initial displacement of the bar, $h$ is the initial rate of change of the displacement, and the stress $\sigma:(-1,\infty)\to\mathbb R$ is given as in Figure \ref{fig1}. The zero boundary condition here amounts to the physical situation of fixing the end-points of the bar. In this case, the energy function $W:(-1,\infty)\to[0,\infty)$ may satisfy $W(s)\to\infty$ as $s\to -1^+$; but this is not required to obtain our result. On the other hand, we consider (\ref{ib-P}) as a prototype of the hyperbolic-elliptic problem with a non-monotone constitutive function $\sigma:\mathbb R\to\mathbb R$ as in Figure \ref{fig2}.
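A classical illustration of such non-convexity (our example, for orientation only; it is not the $\sigma$ of Figure \ref{fig1}, which is merely piecewise smooth and blows up at $s=-1$) is the double-well energy

```latex
\[
W(s) = \tfrac{1}{4}\,(s^2-1)^2, \qquad
\sigma(s) = W'(s) = s^3 - s,
\qquad
\sigma'(s) = 3s^2 - 1,
\]
```

whose stress $\sigma$ is decreasing precisely on the middle branch $|s| < 1/\sqrt{3}$, exhibiting the same non-monotone shape that drives the phase-transition phenomena studied here.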
\begin{figure}
\caption{Non-monotone Piola-Kirchhoff stress $\sigma(s).$}
\label{fig1}
\end{figure}
\begin{figure}
\caption{Non-monotone constitutive function $\sigma(s).$}
\label{fig2}
\end{figure}
Problem (\ref{ib-P}) with a non-monotone stress $\sigma(s)$ as in Figure \ref{fig1} is proposed by {Ericksen} \cite{Er} as a model of the phenomena of phase transitions in elastic bars. There have been many studies on such a problem that usually fall into two types. One direction of study is to consider the Riemann problem of the system of conservation laws of mixed type (\ref{main-cons}) initiated by James \cite{Ja} and followed by numerous works (see, e.g., {Shearer} \cite{Sh}, {Pego and Serre} \cite{PS} and {Hattori} \cite{Ha}). Another path is to study the viscoelastic version of equation (\ref{main-P}): To name a few of the pioneering works, {Dafermos} \cite{Ds} considers the equation $u_{tt}=\sigma(u_x,u_{xt})_x+f(x,t)$ under certain parabolicity and growth conditions and establishes global existence and uniqueness of smooth solutions together with some asymptotic behaviors as $t\to\infty$. Following the work of {Andrews} \cite{An}, {Andrews and Ball} \cite{AB} prove global existence of weak solutions to the equation $u_{tt}=u_{xxt}+\sigma(u_x)_x$ for non-smooth initial data and study their long-time behaviors. For the same equation, {Pego} \cite{Pe} characterizes long-time convergence of weak solutions to several different types of stationary states in a strong sense. Nonetheless, to the best of our knowledge, the main theorem below may be the first general existence result on weak solutions to (\ref{ib-P}), neither in the stream of the Riemann problem nor in that of non-convex viscoelastodynamics.
Let $\sigma(s)$ be given as in Figure \ref{fig1} or \ref{fig2} (see section \ref{sec:state}). For an initial datum $(g,h)\in W^{1,\infty}_0(\Omega)\times L^\infty(\Omega)$, we say that a function $u\in W^{1,\infty}(\Omega_T)$ is a \emph{weak solution} to problem (\ref{ib-P}) provided that $u_x>-1$ a.e. in $\Omega_T$ in the case of Figure \ref{fig1}; that for all $\varphi\in C^\infty_c(\Omega\times[0,T))$, we have \begin{equation}\label{def:sol} \int_{\Omega_T}(u_t\varphi_t-\sigma(u_x)\varphi_x)\,dxdt=-\int_0^1 h(x)\varphi(x,0)\,dx; \end{equation} and that \begin{equation}\label{def:sol-1} \left\{\begin{array}{ll}
u(0,t)=u(1,t)=0 & \mbox{for $t\in(0,T)$}, \\
u(x,0)=g(x) & \mbox{for $x\in\Omega$}.
\end{array} \right. \end{equation}
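The identity (\ref{def:sol}) is the standard distributional formulation of (\ref{main-P}): if $u$ is a smooth solution with $u_t(\cdot,0)=h$, then multiplying the equation by a test function $\varphi\in C^\infty_c(\Omega\times[0,T))$ and integrating by parts (in $t$ for the first term and in $x$ for the second, where the spatial boundary terms vanish by the compact support of $\varphi(\cdot,t)$ in $\Omega$ and the terminal term vanishes since $\varphi(\cdot,T)=0$) gives

```latex
\[
0 = \int_{\Omega_T}\bigl(u_{tt}-\sigma(u_x)_x\bigr)\varphi\,dxdt
  = -\int_{\Omega_T}\bigl(u_t\varphi_t-\sigma(u_x)\varphi_x\bigr)\,dxdt
    -\int_0^1 h(x)\varphi(x,0)\,dx,
\]
```

which is exactly (\ref{def:sol}).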
The main result of the paper is the following theorem, which will be split into two detailed statements in section \ref{sec:state} along with some corollaries.
\begin{thm}\label{thm:main-pre} Let $\sigma(s)$ be as in Figure \ref{fig1} or \ref{fig2}, and let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega) $ with $s_1^*<g' (x_0)<s_2^*$ for some $x_0\in\Omega$. In the case of Figure \ref{fig1}, assume also $g'(x)>-1$ for all $x\in\bar\Omega$. Then there exists a finite number $T>0$ for which problem (\ref{ib-P}) admits infinitely many weak solutions. \end{thm}
Existence and non-uniqueness of weak solutions to problem (\ref{ib-P}) have been generally accepted (especially, in the context of the Riemann problem) and actively studied in the community of solid mechanics. Such non-uniqueness is usually understood to be arising from a constitutive deficiency in the theory of elastodynamics, reflecting the need to incorporate some additional relations (see, e.g., {Slemrod} \cite{Sl}, {Abeyaratne and Knowles} \cite{AK} and {Truskinovsky and Zanzotto} \cite{TZ}).
Global existence of Lipschitz continuous weak solutions to problem (\ref{ib-P}) is not directly obtained in the course of proving Theorem \ref{thm:main-pre}, since our method of proof would require a global classical solution to a modified hyperbolic problem, and such a solution might not exist due to possible shock formation in finite time. However, we still expect the existence of global $W^{1,p}$-solutions $(p<\infty)$ to (\ref{ib-P}).
The rest of the paper is organized as follows. Section \ref{sec:state} describes precise structural assumptions on the functions $\sigma(s)$ corresponding to Figures \ref{fig1} and \ref{fig2}, respectively. Then detailed statements of the main result, Theorem \ref{thm:main-pre}, with respect to Figures \ref{fig1} and \ref{fig2} are introduced separately as Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP} with relevant corollaries in each case. Section \ref{sec:exist} begins with a motivational approach to solve problem (\ref{ib-P}) as a homogeneous partial differential inclusion with a linear constraint. Then the main results in precise form, Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP}, are proved at the same time under a pivotal density fact, Theorem \ref{thm:density}. The proofs of the corollaries to the main results are also included in section \ref{sec:exist}. In section \ref{sec:rank-1}, a major tool for proving the density fact is established in a general form. Lastly, section \ref{sec:density-proof} carries out the proof of the density fact.
In closing this section, we introduce some notations. Let $m,n$ be positive integers. We denote by $\mathbb M^{m\times n}$ the space of $m\times n$ real matrices and by $\mathbb M_{sym}^{n\times n}$ that of symmetric $n\times n$ real matrices. We use $O(n)$ to denote the space of $n\times n$ orthogonal matrices. For a given matrix $M\in\mathbb M^{m\times n}$, we write $M_{ij}$ for the component of $M$ in the $i$th row and $j$th column and $M^T$ for the transpose of $M$. For a bounded domain $U\subset\mathbb R^n$ and a function $w^*\in W^{m,p}(U)$ $(1\le p\le\infty)$, we use $W_{w^*}^{m,p}(U)$ to denote the space of functions $w\in W^{m,p}(U)$ with boundary trace $w^*.$
\section{Precise statements of main theorems}\label{sec:state}
In this section, we present structural assumptions on the functions $\sigma(s)$ for \textbf{Case I:} non-convex elastodynamics and \textbf{Case II:} hyperbolic-elliptic problem corresponding to Figures \ref{fig1} and \ref{fig2}, respectively. Then we give the detailed statement of the main result, Theorem \ref{thm:main-pre}, in each case, followed by some relevant byproducts.
\textbf{(Case I):} For the problem of non-convex elastodynamics, we impose the following conditions on the stress $\sigma:(-1,\infty)\to\mathbb R$ (see Figure \ref{fig1}).
\textbf{Hypothesis (NC):} There exist two numbers $s_2>s_1>-1$ with the following properties: \begin{itemize} \item[(a)] $\sigma\in C^3((-1,s_1)\cup(s_2,\infty))\cap C((-1,s_1]\cup[s_2,\infty))$. \item[(b)] $\displaystyle\lim_{s\to -1^+}\sigma(s)=-\infty$. \item[(c)] $\sigma:(s_1,s_2)\to\mathbb R$ is bounded and measurable. \item[(d)] $\sigma(s_1)>\sigma(s_2)$, and $\sigma'(s)>0$ for all $s\in (-1,s_1)\cup(s_2,\infty)$. \item[(e)] There exist two numbers $c>0$ and $s_1+1>\rho>0$ such that $\sigma'(s)\ge c$ for all $s\in (-1,s_1-\rho]\cup[s_2+\rho,\infty)$. \item[(f)] Let $s_1^*\in (-1,s_1)$ and $s^*_2\in(s_2,\infty)$ denote the unique numbers with $\sigma(s_1^*)=\sigma(s_2)$ and $\sigma(s_2^*)=\sigma(s_1)$, respectively.
\end{itemize}
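To make Hypothesis (NC) concrete, here is a minimal numerical sketch of one admissible stress. This is an illustrative choice of ours, not the $\sigma$ used in the paper: with $s_1=0$, $s_2=1$, take $\sigma(s)=\log(1+s)$ on $(-1,0]$, $\sigma(s)=-s$ on $(0,1)$, and $\sigma(s)=s-2$ on $[1,\infty)$, so that conditions (a)--(e) hold with, e.g., $c=1$ and $\rho=1/2$. The snippet locates the numbers $s_1^*$ and $s_2^*$ of condition (f) by bisection.

```python
import math

# Hypothetical stress satisfying Hypothesis (NC) with s_1 = 0, s_2 = 1
# (our illustrative choice, NOT the sigma used in the paper):
def sigma(s):
    if s <= 0.0:
        return math.log(1.0 + s)     # (b): sigma -> -infinity as s -> -1+
    elif s < 1.0:
        return -s                    # (c): bounded, measurable middle branch
    else:
        return s - 2.0               # increasing outer branch, sigma(1) = -1

# Simple bisection to locate roots; assumes f(a) and f(b) have opposite signs.
def bisect(f, a, b, iters=200):
    fa = f(a)
    for _ in range(iters):
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0.0:
            b = m              # root lies in [a, m]
        else:
            a, fa = m, fm      # root lies in [m, b]
    return 0.5 * (a + b)

# (d): sigma(s_1) > sigma(s_2)
assert sigma(0.0) > sigma(1.0)

# s_1^* solves sigma(s) = sigma(s_2) = -1 on (-1, s_1); here s_1^* = e^{-1} - 1
s1_star = bisect(lambda s: sigma(s) + 1.0, -0.99, 0.0)
# s_2^* solves sigma(s) = sigma(s_1) = 0 on (s_2, oo); here s_2^* = 2
s2_star = bisect(lambda s: sigma(s), 1.0, 5.0)
print(s1_star, s2_star)  # approximately -0.632 and 2.0
```

The same bisection also produces the numbers $s_\pm(r)$ defined below for any $r\in(\sigma(s_2),\sigma(s_1))$, by solving $\sigma(s)=r$ on the two outer monotone branches.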
\textbf{(Case II):} For the hyperbolic-elliptic problem, we assume the following for the constitutive function $\sigma:\mathbb R\to\mathbb R$ (see Figure \ref{fig2}).
\textbf{Hypothesis (HE):} There exist two numbers $s_2>s_1$ satisfying the following: \begin{itemize} \item[(a)] $\sigma\in C^3((-\infty,s_1)\cup(s_2,\infty))\cap C((-\infty,s_1]\cup[s_2,\infty))$. \item[(b)] $\sigma:(s_1,s_2)\to\mathbb R$ is bounded and measurable. \item[(c)] $\sigma(s_1)>\sigma(s_2)$, and $\sigma'(s)>0$ for all $s\in (-\infty,s_1)\cup(s_2,\infty)$. \item[(d)] There exists a number $c>0$ such that $\sigma'(s)\ge c$ for all $s\in (-\infty,s_1-1]\cup[s_2+1,\infty)$. \item[(e)] Let $s_1^*\in (-\infty,s_1)$ and $s^*_2\in(s_2,\infty)$ denote the unique numbers with $\sigma(s_1^*)=\sigma(s_2)$ and $\sigma(s_2^*)=\sigma(s_1)$, respectively.
\end{itemize}
In both cases, for each $r\in (\sigma(s_2),\sigma(s_1))$, let $s_-(r)\in(s_1^*,s_1)$ and $s_+(r)\in(s_2,s_2^*)$ denote the unique numbers with $\sigma(s_\pm(r))=r.$ We may call the interval $(s_1^*,s_2^*)$ the \emph{phase transition zone} of problem (\ref{ib-P}), since the formation of microstructures and the breakdown of uniqueness of weak solutions to (\ref{ib-P}) begin to occur whenever the range of the initial strain $g'$ overlaps with the interval $(s_1^*,s_2^*)$, as we can see below from Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP} and their corollaries.
We now state the main result on \textbf{Case I:} weak solutions for non-convex elastodynamics under Hypothesis (NC). \begin{thm}\label{thm:main-NCE} Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ satisfy $g'(x)>-1$ for all $x\in\bar\Omega$ and $s_1^*<g' (x_0)<s_2^*$ for some $x_0\in\Omega$. Let $\sigma(s_2)<r_1<r_2<\sigma(s_1)$ be any two numbers with $s_-(r_1)<g'(x_0)<s_+(r_2)$. Then there exist a finite number $T>0$, a function $\displaystyle{u^*\in \cap_{k=0}^3C^k([0,T];W^{3-k,2}_0(\Omega))}$ with $u^*_x>-1$ on $\bar\Omega_T$, where $W^{0,2}_0(\Omega)=L^2(\Omega)$, and three disjoint open sets $\Omega_T^1,\Omega_T^2,\Omega_T^3\subset \Omega_T$ with $\Omega_T^2\not=\emptyset$, $\partial\Omega_T^1\cap\partial\Omega_T^3=\emptyset$, and \begin{equation}\label{sep-domain-NCE} \left\{\begin{array}{l}
\partial\Omega_T^1\cap \Omega_0=\{(x,0)\,|\, x\in\Omega,\,g'(x)<s_-(r_1)\}, \\
\partial\Omega_T^2\cap \Omega_0 =\{(x,0)\,|\, x\in\Omega,\,s_-(r_1)<g'(x)<s_+(r_2)\}, \\
\partial\Omega_T^3\cap \Omega_0=\{(x,0)\,|\, x\in\Omega,\,g'(x)>s_+(r_2)\},
\end{array}
\right. \end{equation} where $\Omega_0=\Omega\times\{t=0\},$ such that for each $\epsilon>0$, there exist a number $T_\epsilon\in(0,T)$ and infinitely many weak solutions $u\in W^{1,\infty}_{u^*}(\Omega_T)$ to problem (\ref{ib-P}) with the following properties: \begin{itemize}
\item[(a)] Approximate initial rate of change:
\[
\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon, \] where $\Omega_{T_\epsilon}=\Omega\times(0,T_\epsilon)$.
\item[(b)] Classical part of $u$: \begin{itemize} \item[(i)] $u\equiv u^*$ on $\overline{\Omega_T^1\cup\Omega_T^3}$,
\item[(ii)] $u_t(x,0)=h(x)\quad\forall (x,0)\in(\partial\Omega_T^1\cup\partial\Omega_T^3)\cap\Omega_0$,
\item[(iii)] $u_x(x,t)\left\{\begin{array}{ll}
\in(-1,s_-(r_1)) & \forall(x,t)\in\Omega_T^1, \\
>s_+(r_2) & \forall(x,t)\in\Omega_T^3.
\end{array}
\right.$ \end{itemize}
\item[(c)] Microstructure of $u$:
$u_x(x,t)\in[s_-(r_1),s_-(r_2)]\cup[s_+(r_1),s_+(r_2)]$, a.e. $(x,t)\in\Omega_T^2$.
\item[(d)] Interface of $u$:
$u_x(x,t)\in\{s_-(r_1),s_+(r_2)\}$, a.e. $(x,t)\in\Omega_T\setminus (\cup_{i=1}^3\Omega_T^i)$.
\end{itemize} \end{thm}
As a remark, note that the corresponding deformations of the elastic bar, $d(x,t)=u(x,t)+x$, satisfy \[ d_x(x,t)=u_x(x,t)+1 >-1+1=0,\;\;\mbox{a.e. $(x,t)\in\Omega_T$;} \] this guarantees that for a.e. $t\in(0,T)$, such deformations $d:[0,1]\times\{t\}\to[0,1]$ are strictly increasing with $d(0,t)=0$ and $d(1,t)=1$. Moreover, for such a $t\in(0,T)$, the deformations $d(x,t)$ are smooth (with the same regularity as the initial displacement $g$) for the values of $x\in[0,1]$ at which the slope $d_x(x,t)\in(0,s_-(r_1)+1)\cup(s_+(r_2)+1,\infty)$ and are Lipschitz elsewhere with $d_x(x,t)\in[s_-(r_1)+1,s_-(r_2)+1]\cup[s_+(r_1)+1,s_+(r_2)+1]$ a.e. Therefore, these dynamic deformations fulfill a natural physical requirement for the motion of an elastic bar: they are invertible and do not allow interpenetration.
As byproducts of Theorem \ref{thm:main-NCE}, we obtain the following two results for non-convex elastodynamics. The first is on local existence of weak solutions to problem (\ref{ib-P}) for all smooth initial data. The second gives local weak solutions to (\ref{ib-P}) that are all identical and smooth for a short period of time and then evolve microstructures along with the breakdown of uniqueness for some smooth initial data.
\begin{coro}\label{coro:weak-NCE} For any initial datum $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ with $g'>-1$ on $\bar\Omega$, there exists a finite number $T>0$ for which problem (\ref{ib-P}) has a weak solution. \end{coro}
\begin{coro}\label{coro:weak-NCE-micro} Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ satisfy $g'>-1$ on $\bar\Omega$. Assume $\max_{\bar\Omega}g'\in (s_1^*,s_1)$ or $\min_{\bar\Omega}g'\in (s_2,s_2^*)$. Then there exist finite numbers $T>T'>0$ such that problem (\ref{ib-P}) admits infinitely many weak solutions that are all equal to some $\displaystyle{u^*\in \cap_{k=0}^3C^k([0,T'];W^{3-k,2}_0(\Omega))}$ in $\Omega_{T'}$ and evolve microstructures from $t=T'$ as in Theorem \ref{thm:main-NCE}. \end{coro}
The following is the main result on \textbf{Case II:} hyperbolic-elliptic equations under Hypothesis (HE).
\begin{thm}\label{thm:main-HEP} Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ with $s_1^*<g' (x_0)<s_2^*$ for some $x_0\in\Omega$. Let $\sigma(s_2)<r_1<r_2<\sigma(s_1)$ be any two numbers with $s_-(r_1)<g'(x_0)<s_+(r_2)$. Then there exist a finite number $T>0$, a function $\displaystyle{u^*\in \cap_{k=0}^3C^k([0,T];W^{3-k,2}_0(\Omega))}$, and three disjoint open sets $\Omega_T^1,\Omega_T^2,\Omega_T^3\subset \Omega_T$ with $\Omega_T^2\not=\emptyset$, $\partial\Omega_T^1\cap\partial\Omega_T^3=\emptyset$, and \begin{equation}\label{sep-domain-HEP} \left\{\begin{array}{l}
\partial\Omega_T^1\cap \Omega_0=\{(x,0)\,|\, x\in\Omega,\,g'(x)<s_-(r_1)\}, \\
\partial\Omega_T^2\cap \Omega_0 =\{(x,0)\,|\, x\in\Omega,\,s_-(r_1)<g'(x)<s_+(r_2)\}, \\
\partial\Omega_T^3\cap \Omega_0=\{(x,0)\,|\, x\in\Omega,\,g'(x)>s_+(r_2)\}
\end{array}
\right. \end{equation} such that for each $\epsilon>0$, there exist a number $T_\epsilon\in(0,T)$ and infinitely many weak solutions $u\in W^{1,\infty}_{u^*}(\Omega_T)$ to problem (\ref{ib-P}) satisfying the following properties: \begin{itemize}
\item[(a)] Approximate initial rate of change:
\[
\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon. \]
\item[(b)] Classical part of $u$: \begin{itemize} \item[(i)] $u\equiv u^*$ on $\overline{\Omega_T^1\cup\Omega_T^3}$,
\item[(ii)] $u_t(x,0)=h(x)\quad\forall (x,0)\in(\partial\Omega_T^1\cup\partial\Omega_T^3)\cap\Omega_0$,
\item[(iii)] $u_x(x,t)\left\{\begin{array}{cc}
<s_-(r_1) & \forall(x,t)\in\Omega_T^1, \\
>s_+(r_2) & \forall(x,t)\in\Omega_T^3.
\end{array}
\right.$ \end{itemize}
\item[(c)] Microstructure of $u$:
$u_x(x,t)\in[s_-(r_1),s_-(r_2)]\cup[s_+(r_1),s_+(r_2)]$, a.e. $(x,t)\in\Omega_T^2$.
\item[(d)] Interface of $u$:
$u_x(x,t)\in\{s_-(r_1),s_+(r_2)\}$, a.e. $(x,t)\in\Omega_T\setminus (\cup_{i=1}^3\Omega_T^i)$.
\end{itemize} \end{thm}
We also have the following results similar to Corollaries \ref{coro:weak-NCE} and \ref{coro:weak-NCE-micro}.
\begin{coro}\label{coro:weak-HEP} For any initial datum $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$, there exists a finite number $T>0$ for which problem (\ref{ib-P}) has a weak solution. \end{coro}
\begin{coro}\label{coro:weak-HEP-micro} Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ satisfy $\max_{\bar\Omega}g'\in (s_1^*,s_1)$ or $\min_{\bar\Omega}g'\in (s_2,s_2^*)$. Then there exist finite numbers $T>T'>0$ such that problem (\ref{ib-P}) admits infinitely many weak solutions that are all equal to some $u^*\in \cap_{k=0}^3C^k([0,T'];$ $W^{3-k,2}_0(\Omega))$ in $\Omega_{T'}$ and evolve microstructures from $t=T'$ as in Theorem \ref{thm:main-HEP}. \end{coro}
\section{Proof of main theorems}\label{sec:exist} In this section, we prove the main results, Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP}, relying on an essential ingredient, Theorem \ref{thm:density}, to be verified in sections \ref{sec:rank-1} and \ref{sec:density-proof}. The proofs of the related corollaries are also included.
Our exposition hereafter proceeds in parallel for \textbf{Cases I} and \textbf{II}.
\subsection{An approach by differential inclusion} We begin with an approach that motivates our attack on problem (\ref{ib-P}) for both \textbf{Cases I} and \textbf{II}. To solve equation (\ref{main-P}) in the sense of distributions in $\Omega_T$, suppose there exists a vector function $w=(u,v)\in W^{1,\infty}(\Omega_T;\mathbb R^2)$ such that \begin{equation}\label{abs-1} v_x=u_t\quad\mbox{and}\quad v_t=\sigma(u_x)\quad\mbox{a.e. in $\Omega_T$}. \end{equation} We remark that this formulation is motivated by the approach in \cite{Zh} and differs from the usual setup of conservation laws (\ref{main-cons}). For all $\varphi\in C^\infty_c(\Omega_T)$, we now have \[ \int_{\Omega_T} u_t\varphi_t\,dxdt =\int_{\Omega_T}v_x\varphi_t\,dxdt= \int_{\Omega_T}v_t\varphi_x\,dxdt= \int_{\Omega_T}\sigma(u_x)\varphi_x\,dxdt; \] hence having (\ref{abs-1}) is sufficient to solve (\ref{main-P}) in the sense of distributions in $\Omega_T$. Equivalently, we can rewrite (\ref{abs-1}) as \[ \nabla w=\begin{pmatrix}
u_x & u_t \\
v_x & v_t
\end{pmatrix}=
\begin{pmatrix}
u_x & v_x \\
v_x & \sigma(u_x)
\end{pmatrix}\quad\mbox{a.e. in $\Omega_T$}, \] where $\nabla$ denotes the space-time gradient. Set \[ \Sigma_\sigma=\left\{\begin{pmatrix}
s & b \\
b & \sigma(s)
\end{pmatrix}\in\mathbb M^{2\times 2}_{sym}\,\Big|\,s,b\in\mathbb R \right\}. \] We can now recast (\ref{abs-1}) as a homogeneous partial differential inclusion with a linear constraint: \begin{equation*} \nabla w(x,t)\in\Sigma_\sigma,\quad\mbox{a.e. $(x,t)\in\Omega_T$}. \end{equation*} We will solve this inclusion for a suitable subset $K$ of $\Sigma_\sigma$ to incorporate some detailed properties of weak solutions to (\ref{ib-P}).
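To indicate why the inclusion $\nabla w\in\Sigma_\sigma$ admits the flexibility exploited below, we record a simple observation, included only for motivation: two matrices in $\Sigma_\sigma$ with the same off-diagonal entry and equal stress values are rank-one connected. Indeed, if $\sigma(s_-)=\sigma(s_+)=r$ with $s_-\ne s_+$, then \[ \begin{pmatrix} s_- & b \\ b & r \end{pmatrix}-\begin{pmatrix} s_+ & b \\ b & r \end{pmatrix}=(s_--s_+)\,e_1\otimes e_1, \] a matrix of rank one. Such pairs, taken from the two increasing branches of $\sigma$ at a common stress level $r\in(\sigma(s_2),\sigma(s_1))$, are the building blocks behind the choice of the subset $K$ below.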
Homogeneous differential inclusions of the form $\nabla w\in K\subset\mathbb M^{m\times n}$ were first encountered and successfully understood in the study of crystal microstructure by {Ball and James} \cite{BJ} and {Chipot and Kinderlehrer} \cite{CK}, and, with a constraint on a minor of $\nabla w$, by {M\"uller and \v Sver\'ak} \cite{MSv1}. General inhomogeneous differential inclusions were studied by {Dacorogna and Marcellini} \cite{DM1} using Baire's category method and by {M\"uller and Sychev} \cite{MSy} using the method of convex integration; see also \cite{Ki}. Moreover, the methods of differential inclusions have been applied to other important problems concerning elliptic systems \cite{MSv2}, the Euler equations \cite{DS}, the porous medium equation \cite{CFG}, the active scalar equation \cite{Sy}, the Perona-Malik equation and its generalizations \cite{Zh,KY,KY1,KY2}, and ferromagnetism \cite{Ya1}.
\subsection{Proof of Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP}}\label{subsec:mainproof} Due to the similarity between Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP}, we can combine their proofs into a single one.
To start the proof, we assume that the functions $g,h$ and the numbers $r_1,r_2$ are given as in Theorem \ref{thm:main-NCE} \textbf{(Case I)}, as in Theorem \ref{thm:main-HEP} \textbf{(Case II)}. For clarity, we divide the proof into several steps.
\textbf{(Modified hyperbolic problem):} Using elementary calculus, from Hypothesis (NC) \textbf{(Case I)}, Hypothesis (HE) \textbf{(Case II)}, we can find a function $\sigma^*\in C^3(-1,\infty)$ \textbf{(Case I)}, $\sigma^*\in C^3(\mathbb R)$ \textbf{(Case II)} such that \begin{equation}\label{modi} \left\{\begin{array}{l}
\mbox{$\sigma^*(s)=\sigma(s)$ for all $s\in(-1,s_-(r_1)]\cup[s_+(r_2),\infty)$ \textbf{(Case I)},} \\
\mbox{\quad\quad\quad\quad\quad\,\,\, for all $s\in(-\infty,s_-(r_1)]\cup[s_+(r_2),\infty)$ \textbf{(Case II)},} \\
\mbox{$(\sigma^*)'(s)\ge c^*$ for all $s\in(-1,\infty)$, for some constant $c^*>0$ \textbf{(Case I)},} \\
\mbox{\quad\quad\quad\quad\quad\,\,\, for all $s\in\mathbb R$, for some constant $c^*>0$ \textbf{(Case II)},} \\
\mbox{$\sigma^*(s)<\sigma(s)$ for all $s_-(r_1)<s\le s_-(r_2)$, and}\\
\mbox{$\sigma^*(s)>\sigma(s)$ for all $s_+(r_1)\le s< s_+(r_2)$ (see Figure \ref{fig3} for both cases).}
\end{array}
\right. \end{equation}
\begin{figure}
\caption{The original $\sigma(s)$ and modified $\sigma^*(s)$}
\label{fig3}
\end{figure}
Thanks to \cite[Theorem 5.2]{DH} \textbf{(Case I)}, \cite[Theorem 5.1]{DH} \textbf{(Case II)}, there exists a finite number $T>0$ such that the \emph{modified} initial-boundary value problem \begin{equation}\label{ib-P-modi} \begin{cases} u^*_{tt} =\sigma^*(u^*_x)_x& \mbox{in $\Omega_T$,} \\ u^*(0,t)=u^*(1,t)=0 & \mbox{for $t\in(0,T)$,}\\ u^* =g,\;u^*_t=h & \mbox{on $\Omega\times \{t=0\}$} \end{cases} \end{equation} admits a unique solution $u^*\in \cap_{k=0}^3C^k([0,T];W^{3-k,2}_0(\Omega))$, with $u^*_x>-1$ on $\bar\Omega_T$ for \textbf{Case I}. By the Sobolev embedding theorem, we have $u^*\in C^2(\bar\Omega_T)$. Let \[\left\{ \begin{split}
\Omega^1_T & =\{(x,t)\in\Omega_T\,|\,u^*_x(x,t)<s_-(r_1)\},\\
\Omega^2_T & =\{(x,t)\in\Omega_T\,|\,s_-(r_1)<u^*_x(x,t)<s_+(r_2)\},\\
\Omega^3_T & =\{(x,t)\in\Omega_T\,|\,u^*_x(x,t)>s_+(r_2)\},\\ F_T & =\Omega_T\setminus(\cup_{i=1}^3\Omega_T^i); \end{split}\right. \]
then (\ref{sep-domain-NCE}) holds \textbf{(Case I)}, (\ref{sep-domain-HEP}) holds \textbf{(Case II)}, and $\partial\Omega_T^1\cap\partial\Omega_T^3=\emptyset$. As $s_-(r_1)<g'(x_0)=u^*_x(x_0,0)<s_+(r_2)$, we also have $\Omega_T^2\not=\emptyset$; so $|\Omega_T^2|>0$.
We define \[ v^*(x,t)=\int_0^x h(z)\,dz+\int_0^t \sigma^*(u^*_x(x,\tau))\,d\tau\quad\forall(x,t)\in\Omega_T. \] Then $w^*:=(u^*,v^*)$ satisfies \begin{equation}\label{classic} v^*_x=u^*_t\quad\mbox{and}\quad v^*_t=\sigma^*(u^*_x)\quad\mbox{in $\Omega_T$}. \end{equation} Note that this implies $v^*\in C^2(\bar\Omega_T)$; hence $w^*\in C^2(\bar\Omega_T;\mathbb R^2)$.
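For the reader's convenience, we include the short verification of (\ref{classic}): since $u^*$ solves (\ref{ib-P-modi}) and $u^*_t(\cdot,0)=h$ on $\Omega$, differentiating under the integral sign gives \[ v^*_x(x,t)=h(x)+\int_0^t \sigma^*(u^*_x(x,\tau))_x\,d\tau=h(x)+\int_0^t u^*_{tt}(x,\tau)\,d\tau=u^*_t(x,t), \] while $v^*_t(x,t)=\sigma^*(u^*_x(x,t))$ follows directly from differentiating the definition of $v^*$ in $t$.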
\textbf{(Related matrix sets):} Define the sets (see Figure \ref{fig3}) \[ \begin{split}
\tilde K_\pm & =\{(s,\sigma(s))\in\mathbb R^2\,|\,s_\pm(r_1)\le s\le s_\pm(r_2)\},\\
\tilde K & = \tilde K_+\cup\tilde K_-,\\
\tilde U & =\{(s,r)\in\mathbb R^2\,|\, r_1<r<r_2,\,0<\lambda<1,\,s=\lambda s_-(r)+(1-\lambda) s_+(r) \},\\
K & = \left\{\begin{pmatrix} s & b \\ b & r \end{pmatrix}\in\mathbb M^{2\times 2}_{sym}\,\Big|\, (s,r)\in\tilde K,\, |b|\le\gamma\right\},\\
U & = \left\{\begin{pmatrix} s & b \\ b & r \end{pmatrix}\in\mathbb M^{2\times 2}_{sym}\,\Big|\, (s,r)\in\tilde U,\, |b|<\gamma \right\}, \end{split} \]
where $\gamma:=\|u^*_t\|_{L^\infty(\Omega_T)}+1$.
\textbf{(Admissible class):}
Let $\epsilon>0$ be given. Choose a number $T_\epsilon\in(0,T]$ so that $\|u^*_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon/2=:\epsilon'$. We then define the \emph{admissible class} $\mathcal A$ by \[
\mathcal A=\left\{w=(u,v)\in W^{1,\infty}_{w^*}(\Omega_T;\mathbb R^2)\,\Bigg|\, \begin{array}{l}
w\in C^2(\bar\Omega_T;\mathbb R^2),\,w\equiv w^* \\
\mbox{in}\;\Omega_T\setminus\bar\Omega_T^w\;\mbox{for some open set}\\
\mbox{$\Omega_T^w\subset\subset\Omega_T^2$ with $|\partial\Omega_T^w|=0,$}\\
\nabla w(x,t) \in U\;\forall(x,t)\in\Omega_{T}^2,\\
\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon'
\end{array}
\right\}. \] It is easy to see from (\ref{modi}) and (\ref{classic}) that $w^*\in\mathcal A\not =\emptyset$. For each $\delta>0$, we also define the \emph{$\delta$-approximating class} $\mathcal A_\delta$ by \[
\mathcal A_\delta=\left\{w\in\mathcal A\,\Big|\, \int_{\Omega_T^2}\operatorname{dist}(\nabla w(x,t),K)\,dxdt\le\delta|\Omega_T^2|
\right\}. \]
\textbf{(Density result):} One crucial step in the proof of Theorem \ref{thm:main-NCE} \textbf{(Case I)}, Theorem \ref{thm:main-HEP} \textbf{(Case II)} is the following density fact, whose proof, common to both cases, appears in section \ref{sec:density-proof}. \begin{thm}\label{thm:density} For each $\delta>0$, \[\mbox{$\mathcal A_\delta$ is dense in $\mathcal A$ with respect to the $L^\infty(\Omega_T;\mathbb R^2)$-norm.}\] \end{thm}
\textbf{(Baire's category method):} Let $\mathcal X$ denote the closure of $\mathcal A$ in the space $L^\infty(\Omega_T;\mathbb R^2)$, so that $(\mathcal X,L^\infty)$ is a nonempty complete metric space. As $U$ is bounded in $\mathbb M^{2\times 2}$, so is $\mathcal A$ in $W^{1,\infty}(\Omega_T;\mathbb R^2)$; thus it is easily checked that \[ \mathcal X\subset W^{1,\infty}_{w^*}(\Omega_T;\mathbb R^2). \] Note that the space-time gradient operator $\nabla:\mathcal X\to L^1(\Omega_T;\mathbb M^{2\times 2})$ is a Baire-one function (see, e.g., \cite[Proposition 10.17]{Da}). So by the Baire Category Theorem (see, e.g., \cite[Theorem 10.15]{Da}), the set of points of discontinuity of the operator $\nabla$, say $\mathcal D_{\nabla}$, is a set of the first category; thus the set of points at which $\nabla$ is continuous, that is, $\mathcal C_{\nabla}:=\mathcal X\setminus\mathcal D_{\nabla}$, is dense in $\mathcal X$.
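For orientation, we sketch the reason that $\nabla:\mathcal X\to L^1(\Omega_T;\mathbb M^{2\times 2})$ is a Baire-one function in this setting; this is only an outline of the argument behind \cite[Proposition 10.17]{Da}, and the operators $D_j$ below are introduced solely for the sketch. For $j\in\mathbb N$ and $w\in\mathcal X$, let $D_j w$ collect the difference quotients \[ j\big(w(y+e/j)-w(y)\big) \] in each coordinate direction $e$ of $\mathbb R^2$, with $w$ extended beyond $\Omega_T$ via $w^*$ (possible since $w\in W^{1,\infty}_{w^*}$). Each $D_j$ is affine and bounded from $L^\infty(\Omega_T;\mathbb R^2)$ to $L^1(\Omega_T;\mathbb M^{2\times 2})$, hence continuous on $(\mathcal X,L^\infty)$. On the other hand, writing each difference quotient as an average of the corresponding partial derivative, the continuity of translations in $L^1$ yields $D_j w\to\nabla w$ in $L^1(\Omega_T;\mathbb M^{2\times 2})$ for each fixed $w\in\mathcal X$. Thus $\nabla$ is a pointwise limit of continuous maps on $\mathcal X$, i.e., a Baire-one function.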
\textbf{(Completion of proof):} Let us confirm that for any function $w=(u,v)\in\mathcal C_\nabla$, its first component $u$ is a weak solution to (\ref{ib-P}) satisfying (a)--(d). To this end, fix any $w=(u,v)\in\mathcal C_\nabla$.
\textbf{\underline{(\ref{def:sol}) \& (\ref{def:sol-1}):}} To verify (\ref{def:sol}), let $\varphi\in C^\infty_c(\Omega\times[0,T))$. From Theorem \ref{thm:density} and the density of $\mathcal A$ in $\mathcal X$, we can choose a sequence $w_k=(u_k,v_k)\in\mathcal A_{1/k}$ such that $w_k\to w$ in $\mathcal X$ as $k\to\infty$. As $w\in\mathcal C_{\nabla}$, we have $\nabla w_k\to \nabla w$ in $L^1(\Omega_T;\mathbb M^{2\times 2})$ and so pointwise a.e. in $\Omega_T$ after passing to a subsequence if necessary. By (\ref{classic}) and the definition of $\mathcal A$, we have $(v_k)_x=(u_k)_t$ in $\Omega_T$ and $(v_k)_x(x,0)=v^*_x(x,0)=u^*_t(x,0)=h(x)$ ($x\in\Omega$); so \[ \begin{split} \int_{\Omega_T}(u_k)_t\varphi_t\,dxdt & =\int_{\Omega_T}(v_k)_x\varphi_t\,dxdt\\ & = -\int_{\Omega_T}(v_k)_{xt}\varphi \,dxdt -\int_0^1 (v_k)_x(x,0)\varphi(x,0)\,dx\\ & = \int_{\Omega_T}(v_k)_{t}\varphi_x \,dxdt -\int_0^1 h(x)\varphi(x,0)\,dx, \end{split} \] that is, \[ \int_{\Omega_T}((u_k)_t\varphi_t-(v_k)_{t}\varphi_x)\,dxdt = -\int_0^1 h(x)\varphi(x,0)\,dx. \] On the other hand, by the Dominated Convergence Theorem, we have \[ \int_{\Omega_T}((u_k)_t\varphi_t-(v_k)_{t}\varphi_x)\,dxdt \to \int_{\Omega_T}(u_t\varphi_t-v_{t}\varphi_x)\,dxdt; \] thus \begin{equation}\label{pre-weak} \int_{\Omega_T}(u_t\varphi_t-v_{t}\varphi_x)\,dxdt = -\int_0^1 h(x)\varphi(x,0)\,dx. \end{equation} Also, by the Dominated Convergence Theorem, \[ \int_{\Omega_T^2} \operatorname{dist}(\nabla w_k(x,t),K)\,dxdt\to\int_{\Omega_T^2} \operatorname{dist}(\nabla w(x,t),K)\,dxdt. \] From the choice $w_k\in\mathcal A_{1/k}$, we have \[
\int_{\Omega_T^2} \operatorname{dist}(\nabla w_k(x,t),K)\,dxdt\le\frac{|\Omega_T^2|}{k}\to 0; \] so \[ \int_{\Omega_T^2} \operatorname{dist}(\nabla w(x,t),K)\,dxdt=0. \] Since $K$ is closed, we must have \begin{equation}\label{inclusion} \nabla w(x,t)\in K\subset \Sigma_\sigma,\quad\mbox{a.e. $(x,t)\in\Omega_T^2$}. \end{equation}
For each $k$, we have $w_k\equiv w^*$ in $\Omega_T\setminus\bar\Omega_T^{w_k}$ for some open set $\Omega_T^{w_k}\subset\subset\Omega_T^2$ with $|\partial \Omega_T^{w_k}|=0$, and so $\nabla w_k\equiv\nabla w^*$ in $\Omega_T\setminus\bar\Omega_T^{w_k}$; thus $w=w^*$ and $\nabla w=\nabla w^*$ a.e. in $\Omega_T\setminus\Omega_T^2$. By (\ref{modi}) and (\ref{classic}), we have \begin{equation*} v_x=u_t\quad\mbox{and}\quad v_t=\sigma^*(u^*_x)=\sigma(u_x)\quad\mbox{a.e. in $\Omega_T\setminus\Omega_T^2$}. \end{equation*} This together with (\ref{inclusion}) implies that $\nabla w\in\Sigma_\sigma$ a.e. in $\Omega_T$. In particular, $v_t =\sigma(u_x)$ a.e. in $\Omega_T$. Substituting this into (\ref{pre-weak}), we obtain (\ref{def:sol}). As $w=w^*$ on $\partial\Omega_T$, we also have (\ref{def:sol-1}).
\textbf{\underline{(a), (b), (c) \& (d):}}
As $w=w^*$ a.e. in $\Omega_T\setminus\Omega_T^2$, it follows by continuity that $u\equiv u^*$ in $\Omega_T^1\cup\Omega_T^3$; so (b) is guaranteed by the definition of $\Omega_T^1$ and $\Omega_T^3$, with $u^*_x>-1$ on $\bar\Omega_T$ for \textbf{Case I}. Since $\nabla w=\nabla w^*$ a.e. in $\Omega_T\setminus\Omega_T^2$, we have $u_x=u^*_x\in\{s_-(r_1),s_+(r_2)\}$ a.e. in $F_T$; so (d) holds. From $w_k\in\mathcal A_{1/k}\subset\mathcal A$, we have $|(u_k)_t(x,t)-h(x)|<\epsilon'$ for a.e. $(x,t)\in\Omega_{T_\epsilon}$. Taking the limit as $k\to\infty$, we obtain that $|u_t(x,t)-h(x)|\le\epsilon'$ for a.e. $(x,t)\in\Omega_{T_\epsilon}$; hence $\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}\le\epsilon'=\epsilon/2<\epsilon$. Thus (a) is proved. From (\ref{inclusion}) and the definition of $K$, (c) follows.
\textbf{\underline{Infinitely many weak solutions:}} Having shown that the first component $u$ of each pair $w=(u,v)$ in $\mathcal C_\nabla$ is a weak solution to (\ref{ib-P}) satisfying (a)--(d), it remains to verify that $\mathcal C_\nabla$ has infinitely many elements and that no two distinct pairs in $\mathcal C_\nabla$ have equal first components. Suppose on the contrary that $\mathcal C_\nabla$ has finitely many elements. Then $w^*\in\mathcal A\subset\mathcal X=\bar{\mathcal C}_\nabla=\mathcal C_\nabla$, and so $u^*$ itself is a weak solution to (\ref{ib-P}) satisfying (a)--(d); this is a contradiction, since (\ref{inclusion}) applied to $w^*$ would give $\nabla w^*\in K$ a.e. in $\Omega_T^2$, while $\nabla w^*\in U$ everywhere in $\Omega_T^2$, $|\Omega_T^2|>0$, and $K\cap U=\emptyset$. Thus $\mathcal C_\nabla$ has infinitely many elements. Next, we check that for any two $w_1=(u_1,v_1),w_2=(u_2,v_2)\in\mathcal C_\nabla,$ \[ u_1=u_2\quad \Leftrightarrow\quad v_1=v_2. \] Suppose $u_1\equiv u_2$ in $\Omega_T$. As $\nabla w_1,\nabla w_2\in\Sigma_\sigma$ a.e. in $\Omega_T$, we have, in particular, that \[ (v_1)_x=(u_1)_t=(u_2)_t=(v_2)_x\;\;\mbox{a.e. in $\Omega_T$.} \] Since both $v_1$ and $v_2$ share the same trace $v^*$ on $\partial\Omega_T$, it follows that $v_1\equiv v_2$ in $\Omega_T$. The converse can be shown similarly. We can now conclude that there are infinitely many weak solutions to (\ref{ib-P}) satisfying (a)--(d).
The proof of Theorem \ref{thm:main-NCE} \textbf{(Case I)}, Theorem \ref{thm:main-HEP} \textbf{(Case II)} is now complete under the density fact, Theorem \ref{thm:density}, to be justified in sections \ref{sec:rank-1} and \ref{sec:density-proof}.
\subsection{Proofs of Corollaries \ref{coro:weak-NCE}, \ref{coro:weak-NCE-micro}, \ref{coro:weak-HEP} and \ref{coro:weak-HEP-micro}} We give the proofs of the companion pairs, Corollaries \ref{coro:weak-NCE} and \ref{coro:weak-HEP} and Corollaries \ref{coro:weak-NCE-micro} and \ref{coro:weak-HEP-micro}, respectively.
\begin{proof}[Proof of Corollaries \ref{coro:weak-NCE} and \ref{coro:weak-HEP}] Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ be any given initial datum, with $g'>-1$ on $\bar\Omega$ for \textbf{Case I}. If $g'(x_0)\in(s_1^*,s_2^*)$ for some $x_0\in\Omega$, then the result follows immediately from Theorem \ref{thm:main-NCE} \textbf{(Case I)}, Theorem \ref{thm:main-HEP} \textbf{(Case II)}.
Next, let us assume $g'(x)\not\in(s_1^*,s_2^*)$ for all $x\in\bar\Omega.$ We need only consider the case $g'(x)\ge s_2^*$ for all $x\in\bar\Omega$, as the other case can be shown similarly. Fix any two numbers $r_1,r_2$ with $\sigma(s_2)<r_1<r_2<\sigma(s_1)$, and choose a function $\sigma^*\in C^3(-1,\infty)$ \textbf{(Case I)}, $\sigma^*\in C^3(\mathbb R)$ \textbf{(Case II)} in such a way that (\ref{modi}) is fulfilled. By \cite[Theorem 5.2]{DH} \textbf{(Case I)}, \cite[Theorem 5.1]{DH} \textbf{(Case II)}, there exists a finite number $\tilde T>0$ such that the modified initial-boundary value problem (\ref{ib-P-modi}), with $T$ replaced by $\tilde T$, admits a unique solution $u^*\in \cap_{k=0}^3C^k([0,\tilde T];W^{3-k,2}_0(\Omega))$, with $u^*_x>-1$ on $\bar\Omega_{\tilde T}$ for \textbf{Case I}. Now, choose a number $0<T\le\tilde T$ so that $u^*_x\ge s_+(r_2)$ on $\bar\Omega_T$. Then $u^*$ itself is a classical and thus weak solution to problem (\ref{ib-P}). \end{proof}
\begin{proof}[Proof of Corollaries \ref{coro:weak-NCE-micro} and \ref{coro:weak-HEP-micro}] Let $(g,h)\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ satisfy $\max_{\bar\Omega}g'\in (s_1^*,s_1)$ or $\min_{\bar\Omega}g'\in (s_2,s_2^*)$. In \textbf{Case I}, assume also $g'>-1$ on $\bar\Omega.$ We need only consider the case $M:=\max_{\bar\Omega}g'\in (s_1^*,s_1)$, as the other case can be handled in a similar way. Choose two numbers $\sigma(s_2)<r_1<r_2<\sigma(s_1)$ so that $s_-(r_1)>M.$ Then take a $C^3$ function $\sigma^*(s)$ satisfying (\ref{modi}). Using \cite[Theorem 5.2]{DH} \textbf{(Case I)}, \cite[Theorem 5.1]{DH} \textbf{(Case II)}, we can find a finite number $\tilde T>0$ such that the modified problem (\ref{ib-P-modi}), with $T$ replaced by $\tilde T$, has a unique solution $u^*\in \cap_{k=0}^3C^k([0,\tilde T];W^{3-k,2}_0(\Omega))$, with $u^*_x>-1$ on $\bar\Omega_{\tilde T}$ for \textbf{Case I}. Then choose a number $0<T'\le\tilde T$ so small that $u^*_x\le s_-(r_1)$ on $\bar\Omega_{T'}$ and that $s_1^*<u^*_x(x_0,T')$ for some $x_0\in\Omega$. With the initial datum $(u^*(\cdot,T'),u^*_t(\cdot,T'))\in W^{3,2}_0(\Omega)\times W^{2,2}_0(\Omega)$ at $t=T'$, with $u^*_x(\cdot,T')>-1$ on $\bar\Omega$ for \textbf{Case I}, we can apply Theorem \ref{thm:main-NCE} \textbf{(Case I)}, Theorem \ref{thm:main-HEP} \textbf{(Case II)} to obtain, for some finite number $T>T'$, infinitely many weak solutions $\tilde u\in W^{1,\infty}(\Omega\times(T',T))$ to the initial-boundary value problem \begin{equation*} \begin{cases} \tilde u_{tt} =\sigma(\tilde u_x)_x& \mbox{in $\Omega\times (T',T)$,} \\ \tilde u(0,t)=\tilde u(1,t)=0 & \mbox{for $t\in(T',T)$,}\\ \tilde u =u^*,\,\tilde u_t=u^*_t & \mbox{on $\Omega\times \{t=T'\}$} \end{cases} \end{equation*} satisfying the properties stated in the theorem. Then the glued functions $u=u^*\chi_{\Omega\times(0,T')}+\tilde u\chi_{\Omega\times[T',T)}$ are weak solutions to problem (\ref{ib-P}) fulfilling the required properties.
\end{proof}
\section{Rank-one smooth approximation under linear constraint}\label{sec:rank-1} In this section, we prepare the main tool, Theorem \ref{thm:rank-1}, for proving the density result, Theorem \ref{thm:density}. Instead of presenting only the special case that would suffice for our purpose, we exhibit the following result, a generalized and refined form of \cite[Lemma 2.1]{Po} that may be of independent interest (cf. \cite[Lemma 6.2]{MSv1}).
\begin{thm}\label{thm:rank-1} Let $m,n\ge 2$ be integers, and let $A,B\in\mathbb M^{m\times n}$ be such that $\operatorname{rank}(A-B)=1$; hence \[ A-B=a\otimes b=(a_i b_j) \]
for some non-zero vectors $a\in\mathbb R^m$ and $b\in\mathbb R^n$ with $|b|=1.$ Let $L\in\mathbb M^{m\times n}$ satisfy \begin{equation}\label{rank-1-1} Lb\ne 0 \;\;\mbox{in}\;\;\mathbb R^m, \end{equation} and let $\mathcal{L}:\mathbb M^{m\times n}\to \mathbb R$ be the linear map defined by \[ \mathcal{L}(\xi)=\sum_{1\le i\le m,\, 1\le j \le n} L_{ij}\xi_{ij}\quad \forall \xi\in\mathbb M^{m\times n}. \] Assume $\mathcal{L}(A)=\mathcal{L}(B)$ and $0<\lambda<1$ is any fixed number. Then there exists a linear partial differential operator $\Phi:C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$ satisfying the following properties:
(1) For any open set $\Omega\subset\mathbb R^n$, \[ \Phi v\in C^{k-1}(\Omega;\mathbb R^m)\;\;\mbox{whenever}\;\; k\in\mathbb N\;\;\mbox{and}\;\;v\in C^{k}(\Omega;\mathbb R^m) \] and \[ \mathcal{L}(\nabla\Phi v)=0 \;\;\mbox{in}\;\;\Omega\;\;\forall v\in C^2(\Omega;\mathbb R^m). \]
(2) Let $\Omega\subset\mathbb R^n$ be any bounded domain. For each $\tau>0$, there exist a function $g=g_\tau\in C^{\infty}_{c}(\Omega;\mathbb R^m)$ and two disjoint open sets $\Omega_A,\Omega_B\subset\subset\Omega$ such that \begin{itemize} \item[(a)] $\Phi g\in C^\infty_c(\Omega;\mathbb R^m)$,
\item[(b)] $\operatorname{dist}(\nabla\Phi g,[-\lambda(A-B),(1-\lambda)(A-B)])<\tau$ in $\Omega$, \item[(c)] $\nabla \Phi g(x)= \left\{\begin{array}{ll}
(1-\lambda)(A-B) & \mbox{$\forall x\in\Omega_A$}, \\
-\lambda(A-B) & \mbox{$\forall x\in\Omega_B$},
\end{array}\right.$
\item[(d)] $||\Omega_A|-\lambda|\Omega||<\tau$, $||\Omega_B|-(1-\lambda)|\Omega||<\tau$,
\item[(e)] $\|\Phi g\|_{L^\infty(\Omega)}<\tau$, \end{itemize} where $[-\lambda(A-B),(1-\lambda)(A-B)]$ is the closed line segment in $\mathrm{ker}\mathcal{L}\subset\mathbb M^{m\times n}$ joining $-\lambda(A-B)$ and $(1-\lambda)(A-B)$. \end{thm}
\begin{proof} We mainly follow, with modifications, the proof of \cite[Lemma 2.1]{Po}, which is divided into three cases.
Set $r=\operatorname{rank}(L).$ By (\ref{rank-1-1}), we have $1\le r\le m\wedge n:=\min\{m,n\}.$
\textbf{(Case 1):} Assume that the matrix $L$ satisfies \[ \begin{split} L_{ij}=0\;\;& \mbox{for all $1\le i\le m,\, 1\le j\le n$ but possibly the pairs}\\ & \mbox{$(1,1),(1,2),\cdots,(1,n),(2,2),\cdots,(r,r)$ of $(i,j)$}; \end{split} \] hence $L$ is of the form \begin{equation}\label{rank-1-5} L=\begin{pmatrix} L_{11} & L_{12} & \cdots & L_{1r} & \cdots & L_{1n}\\
& L_{22} & & & & & \\
& & \ddots & & & & \\
& & & L_{rr} & & & \\
& & & & & & \end{pmatrix}\in\mathbb M^{m\times n} \end{equation} and assume that \[ A-B=a\otimes e_1\;\;\mbox{for some nonzero vector $a=(a_1,\cdots,a_m)\in\mathbb R^m$}, \] where each blank component in (\ref{rank-1-5}) is zero. From (\ref{rank-1-1}) and $\operatorname{rank}(L)=r$, it follows that the product $L_{11}\cdots L_{rr}\ne 0$. Since $0=\mathcal L(A-B)=\mathcal L(a\otimes e_1)=L_{11}a_1$, we have $a_1=0$.
In this case, the linear map $\mathcal L:\mathbb M^{m\times n}\to\mathbb R$ is given by \[ \mathcal L(\xi)=\sum_{j=1}^n L_{1j}\xi_{1j}+\sum_{i=2}^r L_{ii}\xi_{ii},\quad \xi\in\mathbb M^{m\times n}. \] We will find a linear differential operator $\Phi:C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$ such that \begin{equation}\label{rank-1-6} \mathcal L(\nabla\Phi v)\equiv 0 \quad\forall v\in C^2(\mathbb R^n;\mathbb R^m). \end{equation} So our candidate for such a $\Phi=(\Phi^1,\cdots,\Phi^m)$ is of the form \begin{equation}\label{rank-1-7} \Phi^i v=\sum_{1\le k\le m,\,1\le l\le n}a^i_{kl}v^k_{x_l}, \end{equation} where $1\le i\le m$, $v\in C^1(\mathbb R^n;\mathbb R^m)$, and $a^i_{kl}$'s are real constants to be determined; then for $v\in C^2 (\mathbb R^n;\mathbb R^m)$, $1\le i\le m$, and $1\le j\le n$, \[ \partial_{x_j}\Phi^i v =\sum_{1\le k\le m,\,1\le l\le n}a^i_{kl}v^k_{x_lx_j}. \] Rewriting (\ref{rank-1-6}) with this form of $\nabla\Phi v$ for $v\in C^2 (\mathbb R^n;\mathbb R^m)$, we have \[ \begin{split} 0 & \equiv \sum_{1\le k\le m,\,1\le j,l\le n} L_{1j}a^1_{kl}v^k_{x_lx_j} + \sum_{i=2}^r\sum_{1\le k\le m,\,1\le l\le n} L_{ii}a^i_{kl}v^k_{x_lx_i}\\
& = \sum_{k=1}^m \Big(L_{11}a^1_{k1}v^k_{x_1x_1}+\sum_{j=2}^r (L_{1j}a^1_{kj}+L_{jj}a^j_{kj})v^k_{x_jx_j}+\sum_{j=r+1}^n L_{1j}a^1_{kj}v^k_{x_jx_j} \\ & \quad +\sum_{l=2}^r (L_{11}a^1_{kl}+L_{1l}a^1_{k1}+L_{ll}a^l_{k1})v^k_{x_lx_1} +\sum_{l=r+1}^n (L_{11}a^1_{kl}+L_{1l}a^1_{k1})v^k_{x_lx_1} \\ & \quad +\sum_{2\le j<l\le r} (L_{1j}a^1_{kl}+L_{1l}a^1_{kj}+L_{jj}a^j_{kl}+L_{ll}a^l_{kj})v^k_{x_lx_j}\\ & \quad +\sum_{2\le j\le r,\,r+1\le l\le n} (L_{1j}a^1_{kl}+L_{1l}a^1_{kj}+L_{jj}a^j_{kl})v^k_{x_lx_j} \end{split} \] \[ \begin{split} & \quad+\sum_{r+1\le j<l\le n} (L_{1j}a^1_{kl}+L_{1l}a^1_{kj})v^k_{x_lx_j} \Big). \end{split} \]
Should (\ref{rank-1-6}) hold, it is thus sufficient to solve the following algebraic system for each $k=1,\cdots,m$ (after adjusting the letters for some indices): \begin{eqnarray} \label{rr-1}& L_{11}a^1_{k1}=0, &\\ \label{rr-4}& L_{1j}a^1_{kj}+L_{jj}a^j_{kj}=0 & \;\,\forall j=2,\cdots, r,\\ \label{rr-3}& L_{11}a^1_{kj}+L_{1j}a^1_{k1}+L_{jj}a^j_{k1}=0 & \;\,\forall j=2,\cdots, r, \\ \label{rr-5}& L_{1l}a^1_{kj}+L_{1j}a^1_{kl}+L_{ll}a^l_{kj}+L_{jj}a^j_{kl}=0 & \begin{array}{l}
\forall j=3,\cdots, r, \\
\forall l=2,\cdots, j-1,
\end{array} \\ \label{rr-6}& L_{1j}a^1_{kj}=0 & \;\,\forall j=r+1,\cdots, n,\\ \label{rr-7}& L_{11}a^1_{kj}+L_{1j}a^1_{k1}=0 & \;\,\forall j=r+1,\cdots, n, \\ \label{rr-8}& L_{1l}a^1_{kj}+L_{1j}a^1_{kl}+L_{ll}a^l_{kj}=0 & \begin{array}{l}
\forall j=r+1,\cdots, n, \\
\forall l=2,\cdots, r,
\end{array} \\ \label{rr-2}& L_{1l}a^1_{kj}+L_{1j}a^1_{kl}=0 & \begin{array}{l}
\forall j=r+2,\cdots, n, \\
\forall l=r+1,\cdots,j-1.
\end{array} \end{eqnarray} Although these systems have infinitely many solutions, for later use, we will solve them in such a way that the matrix $(a^j_{k1})_{2\le j, k\le m}\in\mathbb M^{(m-1)\times(m-1)}$ fulfills \begin{equation}\label{rank-1-8} a^j_{21}=a_j \quad\forall j=2,\cdots, m,\quad\mbox{and}\quad a^j_{k1}=0\quad\mbox{otherwise}. \end{equation} Firstly, we let the coefficients $a^i_{kl}\;(1\le i,k\le m,\,1\le l\le n)$ that do not appear in systems (\ref{rr-1})--(\ref{rr-2}) $(k=1,\cdots, m)$ be zero, with the exception that we set $a^j_{21}=a_j$ for $j=r+1,\cdots,m$ to reflect (\ref{rank-1-8}). Secondly, for $1\le k\le m,\,k\ne 2$, let us take the trivial (i.e., zero) solution of system (\ref{rr-1})--(\ref{rr-2}). Finally, we take $k=2$ and solve system (\ref{rr-1})--(\ref{rr-2}) as follows, keeping (\ref{rank-1-8}) satisfied. Since $L_{11}\ne 0$, we set $a^1_{21}=0$; then (\ref{rr-1}) is satisfied. We then set \[ a^j_{21}=-\frac{L_{11}}{L_{jj}}a^1_{2j},\;\;a^1_{2j}=-\frac{L_{jj}}{L_{11}}a_j \quad \forall j=2,\cdots,r; \] then (\ref{rr-3}) and (\ref{rank-1-8}) hold. Next, set \[ a^j_{2j}=-\frac{L_{1j}}{L_{jj}}a^1_{2j}=\frac{L_{1j}}{L_{11}}a_j \quad\forall j=2,\cdots, r; \] then (\ref{rr-4}) is fulfilled. Set \[ a^l_{2j}=-\frac{L_{1l}a^1_{2j}+L_{1j}a^1_{2l}}{L_{ll}}=\frac{L_{1l}L_{jj}a_j+L_{1j}L_{ll}a_l}{L_{ll}L_{11}},\;\;a^j_{2l}=0 \] for $j=3,\cdots,r$ and $l=2,\cdots, j-1$; then (\ref{rr-5}) holds. Set \[ a^1_{2j}=0\quad\forall j=r+1,\cdots,n; \] then (\ref{rr-6}) and (\ref{rr-7}) are satisfied. Lastly, set \[ a^1_{2j}=0,\;\; a^l_{2j}=-\frac{L_{1j}}{L_{ll}}a^1_{2l}=\frac{L_{1j}}{L_{11}}a_l\quad\forall j=r+1,\cdots, n,\,\forall l=2,\cdots, r; \] then (\ref{rr-8}) and (\ref{rr-2}) hold. In summary, we have determined the coefficients $a^i_{kl}\;(1\le i,k\le m,\,1\le l\le n)$ in such a way that system (\ref{rr-1})--(\ref{rr-2}) holds for each $k=1,\cdots, m$ and that (\ref{rank-1-8}) is also satisfied.
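As a quick consistency check, included for the reader's convenience, one may verify the choices above against, e.g., (\ref{rr-3}) and (\ref{rr-4}) for $j=2,\cdots,r$ directly: \[ L_{11}a^1_{2j}+L_{1j}a^1_{21}+L_{jj}a^j_{21}=L_{11}\Big(-\frac{L_{jj}}{L_{11}}a_j\Big)+0+L_{jj}a_j=0, \] \[ L_{1j}a^1_{2j}+L_{jj}a^j_{2j}=L_{1j}\Big(-\frac{L_{jj}}{L_{11}}a_j\Big)+L_{jj}\cdot\frac{L_{1j}}{L_{11}}a_j=0; \] the remaining equations of the system are checked in the same way.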
Therefore, (1) follows from (\ref{rank-1-6}) and (\ref{rank-1-7}).
To prove (2), without loss of generality, we can assume $\Omega=(0,1)^n\subset\mathbb R^n.$ Let $\tau>0$ be given. Let $u=(u^1,\cdots,u^m)\in C^\infty(\Omega;\mathbb R^m)$ be a function to be determined. Suppose $u$ depends only on the first variable $x_1\in(0,1).$ We wish to have \[ \nabla\Phi u(x)\in\{-\lambda a\otimes e_1,(1-\lambda) a\otimes e_1\} \] for all $x\in\Omega$ except in a set of small measure. Since $u(x)=u(x_1)$, it follows from (\ref{rank-1-7}) that for $1\le i\le m$ and $1\le j\le n$, \[ \Phi^i u=\sum_{k=1}^m a^i_{k1} u^k_{x_1};\;\;\mbox{thus}\;\;\partial_{x_j}\Phi^i u=\sum_{k=1}^m a^i_{k1} u^k_{x_1 x_j}. \] As $a^1_{k1}=0$ for $k=1,\cdots, m$, we have $\partial_{x_j}\Phi^1 u =\sum_{k=1}^m a^1_{k1} u^k_{x_1 x_j}=0$ for $j=1,\cdots,n$. We first set $u^1\equiv 0$ in $\Omega$. Then from (\ref{rank-1-8}), it follows that for $i=2,\cdots, m$, \[ \partial_{x_j}\Phi^i u =\sum_{k=2}^m a^i_{k1} u^k_{x_1 x_j}=a^i_{21} u^2_{x_1 x_j}=a_i u^2_{x_1 x_j} = \left\{\begin{array}{ll}
a_i u^2_{x_1 x_1} & \mbox{if $j=1$,} \\
0 & \mbox{if $j=2,\cdots, n$.}
\end{array} \right. \] As $a_1=0$, we thus have that for $x\in\Omega$, \[ \nabla\Phi u(x)=(u^2)''(x_1) a\otimes e_1. \]
For irrelevant components of $u$, we simply take $u^3=\cdots =u^m\equiv 0$ in $\Omega$. Lastly, for a number $\delta>0$ to be chosen later, we choose a function $u^2(x_1)\in C^\infty_c(0,1)$ such that there exist two disjoint open sets $I_1,I_2\subset\subset (0,1)$ satisfying $||I_1|-\lambda|<\tau/2$, $||I_2|-(1-\lambda)|<\tau/2$, $\|u^2\|_{L^\infty(0,1)}<\delta$, $\|(u^2)'\|_{L^\infty(0,1)}<\delta$, $-\lambda\le (u^2)''(x_1)\le 1-\lambda$ for $x_1\in(0,1)$, and \[ (u^2)''(x_1)= \left\{\begin{array}{ll}
1-\lambda & \mbox{if $x_1\in I_1$}, \\
-\lambda & \mbox{if $x_1\in I_2$}.
\end{array}
\right. \] In particular, \begin{equation}\label{rank-1-9} \nabla \Phi u(x)\in[-\lambda a\otimes e_1,(1-\lambda)a\otimes e_1]\;\;\forall x\in\Omega. \end{equation}
We now choose an open set $\Omega'_\tau\subset\subset\Omega':=(0,1)^{n-1}$ with $|\Omega'\setminus\Omega'_\tau|<\tau/2$ and a function $\eta\in C^\infty_c(\Omega')$ so that \[
0\le\eta\le 1\;\;\mbox{in}\;\;\Omega',\;\; \eta\equiv 1\;\;\mbox{in}\;\Omega'_\tau,\;\;\mbox{and}\;\;|\nabla^i_{x'}\eta|<\frac{C}{\tau^i}\;\;(i=1,2)\;\;\mbox{in}\;\Omega', \] where $x'=(x_2,\cdots,x_n)\in\Omega'$ and the constant $C>0$ is independent of $\tau$. Now, we define $g(x)=\eta(x') u(x_1)\in C^\infty_c(\Omega;\mathbb R^m)$. Set $\Omega_A=I_1\times\Omega'_\tau$ and $\Omega_B=I_2\times\Omega'_\tau.$ Clearly, (a) follows from (1). As $g(x)=u(x_1)=u(x)$ for $x\in \Omega_A\cup\Omega_B$, we have \[ \nabla\Phi g(x)=\left\{\begin{array}{ll}
(1-\lambda)a\otimes e_1 & \mbox{if $x\in \Omega_A$}, \\
-\lambda a\otimes e_1 & \mbox{if $x\in \Omega_B$};
\end{array}
\right. \] hence (c) holds. Also, \[
||\Omega_A|-\lambda|\Omega||=||\Omega_A|-\lambda|=||I_1||\Omega'_\tau|-\lambda|=||I_1|-|I_1||\Omega'\setminus\Omega'_\tau|-\lambda|<\tau, \] and likewise \[
||\Omega_B|-(1-\lambda)|\Omega||<\tau; \] so (d) is satisfied. Note that for $i=1,\cdots,m,$ \[ \begin{split} \Phi^i g & = \Phi^i(\eta u) = \sum_{1\le k\le m,\,1\le l\le n}a^i_{kl}(\eta u^k)_{x_l}=\eta\Phi^i u+\sum_{1\le k\le m,\,1\le l\le n}a^i_{kl}\eta_{x_l} u^k\\ & = \eta\Phi^i u+ u^2\sum_{l=1}^n a^i_{2l}\eta_{x_l} =\eta a^i_{21}u^2_{x_1} + u^2\sum_{l=1}^n a^i_{2l}\eta_{x_l}. \end{split} \] So \[
\|\Phi g\|_{L^\infty(\Omega)}\le C\max\{\delta,\delta\tau^{-1}\}<\tau \] if $\delta>0$ is chosen small enough; so (e) holds. Next, for $i=1,\cdots,m$ and $j=1,\cdots,n,$ \[ \partial_{x_j}\Phi^i g=\eta_{x_j}a^i_{21}u^2_{x_1}+\eta\partial_{x_j}\Phi^i u + u^2_{x_j}\sum_{l=1}^n a^i_{2l}\eta_{x_l} + u^2\sum_{l=1}^n a^i_{2l}\eta_{x_l x_j}; \] hence from (\ref{rank-1-9}), \[ \operatorname{dist}(\nabla\Phi g,[-\lambda a\otimes e_1,(1-\lambda) a\otimes e_1])\le C\max\{\delta \tau^{-1},\delta\tau^{-2}\}<\tau\;\;\mbox{in $\Omega$} \] if $\delta$ is sufficiently small. Thus (b) is fulfilled.
\textbf{(Case 2):} Assume that $L_{i1}=0$ for all $i=2,\cdots, m$, that is, \begin{equation}\label{rank-1-3} L=\begin{pmatrix} L_{11} & L_{12} & \cdots & L_{1n}\\
0 & L_{22} & \cdots & L_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & L_{m2} & \cdots & L_{mn} \end{pmatrix}\in\mathbb M^{m\times n} \end{equation} and that \[ A-B=a\otimes e_1\;\;\mbox{for some nonzero vector $a\in\mathbb R^m$}; \] then by (\ref{rank-1-1}), we have $L_{11}\ne 0.$
Set \[ \hat L=\begin{pmatrix} L_{22} & \cdots & L_{2n}\\
\vdots & \ddots & \vdots\\
L_{m2} & \cdots & L_{mn} \end{pmatrix}\in\mathbb M^{(m-1)\times (n-1)}. \] As $L_{11}\ne 0$ and $\operatorname{rank}(L)=r$, we must have $\operatorname{rank}(\hat L)=r-1.$ Using the singular value decomposition theorem, there exist two matrices $\hat U\in O(m-1)$ and $\hat V\in O(n-1)$ such that \begin{equation}\label{rank-1-4} \hat U^T\hat L\hat V=\mathrm{diag}(\sigma_2,\cdots,\sigma_r,0,\cdots,0)\in\mathbb M^{(m-1)\times (n-1)}, \end{equation} where $\sigma_2,\cdots,\sigma_r$ are the positive singular values of $\hat L.$ Define \begin{equation}\label{rank-1-2} U=\begin{pmatrix} 1 & 0\\
0 & \hat U\end{pmatrix}\in O(m),\;\; V=\begin{pmatrix} 1 & 0\\
0 & \hat V\end{pmatrix}\in O(n). \end{equation} Let $L'=U^T LV$, $A'=U^T AV$, and $B'=U^T BV$. Let $\mathcal{L}':\mathbb M^{m\times n}\to \mathbb R$ be the linear map given by \[ \mathcal{L}'(\xi')=\sum_{1\le i\le m,\,1\le j\le n}L'_{ij}\xi'_{ij}\quad \forall \xi'\in\mathbb M^{m\times n}. \] Then, from (\ref{rank-1-3}), (\ref{rank-1-4}) and (\ref{rank-1-2}), it is straightforward to check the following: \[ \left\{ \begin{array}{l}
\mbox{$A'-B'=a'\otimes e_1$ for some nonzero vector $a'\in\mathbb R^m$,} \\
\mbox{$L' e_1\neq 0$, $\mathcal L'(A)=\mathcal L'(B)$, and} \\
\mbox{$L'$ is of the form (\ref{rank-1-5}) in Case 1 with $\mathrm{rank}(L')=r$.} \end{array}\right. \] Thus we can apply the result of Case 1 to find a linear operator $\Phi':C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$ satisfying the following:
(1') For any open set $\Omega'\subset\mathbb R^n$, \[ \Phi' v'\in C^{k-1}(\Omega';\mathbb R^m)\;\;\mbox{whenever}\;\; k\in\mathbb N\;\;\mbox{and}\;\;v'\in C^{k}(\Omega';\mathbb R^m) \] and \[ \mathcal{L'}(\nabla\Phi' v')=0 \;\;\mbox{in}\;\;\Omega'\;\;\mbox{for all}\;\;v'\in C^2(\Omega';\mathbb R^m). \]
(2') Let $\Omega'\subset\mathbb R^n$ be any bounded domain. For each $\tau>0$, there exist a function $g'=g'_\tau\in C^{\infty}_{c}(\Omega';\mathbb R^m)$ and two disjoint open sets $\Omega'_{A'},\Omega'_{B'}\subset\subset\Omega'$ such that \begin{itemize} \item[(a')] $\Phi' g'\in C^\infty_c(\Omega';\mathbb R^m)$,
\item[(b')] $\operatorname{dist}(\nabla\Phi' g',[-\lambda(A'-B'),(1-\lambda)(A'-B')])<\tau$ in $\Omega'$, \item[(c')] $\nabla \Phi' g'(x)= \left\{\begin{array}{ll}
(1-\lambda)(A'-B') & \mbox{$\forall x\in\Omega'_{A'}$}, \\
-\lambda(A'-B') & \mbox{$\forall x\in\Omega'_{B'}$},
\end{array}\right.$
\item[(d')] $||\Omega'_{A'}|-\lambda|\Omega'||<\tau$, $||\Omega'_{B'}|-(1-\lambda)|\Omega'||<\tau$,
\item[(e')] $\|\Phi' g'\|_{L^\infty(\Omega')}<\tau$. \end{itemize}
For $v\in C^1(\mathbb R^n;\mathbb R^m)$, let $v'\in C^1(\mathbb R^n;\mathbb R^m)$ be defined by $v'(y)=U^T v(Vy)$ for $y\in\mathbb R^n$. We define $\Phi v(x)=U\Phi' v'(V^T x)$ for $x\in\mathbb R^n$, so that $\Phi v\in C^0(\mathbb R^n;\mathbb R^m).$ Then it is straightforward to check that properties (1') and (2') of $\Phi'$ imply respective properties (1) and (2) of the linear operator $\Phi:C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$.
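Although not part of the proof, the block construction $U=\mathrm{diag}(1,\hat U)$, $V=\mathrm{diag}(1,\hat V)$ used in this case is easy to check numerically. The following sketch uses an illustrative matrix $L$ of the form (\ref{rank-1-3}) (all specific numbers are assumptions, not from the text):

```python
import numpy as np

# Numerical sketch (illustrative, not part of the proof): build
# U = diag(1, U_hat), V = diag(1, V_hat) from the SVD of the
# lower-right block L_hat, as in Case 2.
L = np.array([[2.0, 1.0, 3.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
L_hat = L[1:, 1:]

# SVD: L_hat = U_hat @ diag(S) @ Vt_hat with U_hat, Vt_hat orthogonal.
U_hat, S, Vt_hat = np.linalg.svd(L_hat)
m1, n1 = L_hat.shape

U = np.block([[np.eye(1), np.zeros((1, m1))],
              [np.zeros((m1, 1)), U_hat]])
V = np.block([[np.eye(1), np.zeros((1, n1))],
              [np.zeros((n1, 1)), Vt_hat.T]])

Lp = U.T @ L @ V    # L' = U^T L V

assert np.allclose(Lp[1:, 0], 0.0)          # zero first column below (1,1)
assert np.allclose(Lp[1:, 1:], np.diag(S))  # diag(sigma_2, ..., sigma_r, 0, ...)
assert np.isclose(Lp[0, 0], L[0, 0])        # L'_{11} = L_{11} != 0
```

The assertions confirm that $L'=U^TLV$ has the Case-1 form with the singular values of $\hat L$ on the lower-right diagonal block.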
\textbf{(Case 3):} Finally, we consider the general case in which $A$, $B$ and $L$ are as in the statement of the theorem. As $|b|=1$, there exists an $R\in O(n)$ such that $R^T b=e_1\in\mathbb R^n$. Also, there exists a symmetric (Householder) matrix $P\in O(m)$ such that the matrix $L':=PLR$ has its first column parallel to $e_1\in\mathbb R^m$. Let \[ A'=PAR\;\;\mbox{and}\;\; B'=PBR. \] Then $A'-B'=a'\otimes e_1$, where $a'=Pa\ne 0$. Note also that $L'e_1=PLRR^Tb=PLb\ne 0$. Define $\mathcal L'(\xi')=\sum_{i,j}L'_{ij}\xi'_{ij}\;(\xi'\in\mathbb M^{m\times n})$; then $\mathcal L'(A')=\mathcal L(A)=\mathcal L(B)=\mathcal L'(B')$. Thus by the result of Case 2, there exists a linear operator $\Phi':C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$ satisfying (1') and (2') above.
For $v\in C^1(\mathbb R^n;\mathbb R^m)$, let $v'\in C^1(\mathbb R^n;\mathbb R^m)$ be defined by $v'(y)=Pv(Ry)$ for $y\in\mathbb R^n$, and define $\Phi v(x)=P\Phi'v'(R^Tx)\in C^0(\mathbb R^n;\mathbb R^m)$. Then, as in Case 2, it is easy to check from (1') and (2') that the linear operator $\Phi:C^1(\mathbb R^n;\mathbb R^m)\to C^0(\mathbb R^n;\mathbb R^m)$ satisfies (1) and (2).
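The reduction in Case 3 rests on two standard linear-algebra facts: a rotation $R$ with $R^Tb=e_1$, and a symmetric Householder reflection $P$ sending the first column of $LR$ to a multiple of $e_1$. A small numerical sketch (the sample $L$, $R$ and $b$ are illustrative assumptions):

```python
import numpy as np

# Illustrative sketch of the Case-3 reduction: P is the Householder
# reflection sending the first column of L R to a multiple of e_1.
def householder_to_e1(c):
    """Symmetric orthogonal P with P @ c a multiple of e_1."""
    e1 = np.zeros_like(c)
    e1[0] = 1.0
    v = c - np.linalg.norm(c) * e1
    if np.allclose(v, 0.0):                # c already parallel to e_1
        return np.eye(len(c))
    v /= np.linalg.norm(v)
    return np.eye(len(c)) - 2.0 * np.outer(v, v)

b = np.array([0.0, 1.0])                   # |b| = 1
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])                 # R in O(2) with R^T b = e_1
L = np.array([[1.0, 2.0],
              [3.0, 4.0]])

P = householder_to_e1((L @ R)[:, 0])
Lp = P @ L @ R                             # L' = P L R

assert np.allclose(P, P.T) and np.allclose(P @ P, np.eye(2))
assert np.allclose(Lp[1:, 0], 0.0)         # first column of L' parallel to e_1
assert np.allclose(Lp[:, 0], P @ L @ b)    # L' e_1 = P L b
```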
\end{proof}
\section{Proof of density result}\label{sec:density-proof}
In this final section, we prove Theorem \ref{thm:density}, which plays a pivotal role in the proof of the main results, Theorems \ref{thm:main-NCE} and \ref{thm:main-HEP}.
To start the proof, fix a $\delta>0$ and choose any $w=(u,v)\in\mathcal A$ so that $w\in W^{1,\infty}_{w^*}(\Omega_T;\mathbb R^2)\cap C^2(\bar\Omega_T;\mathbb R^2)$ satisfies the following: \begin{equation}\label{func-w} \left\{\begin{array}{l}
\mbox{$w\equiv w^*$ in $\Omega_T\setminus\bar\Omega_T^w$ for some open set $\Omega_T^w\subset\subset\Omega_T^2$ with $|\partial\Omega_T^w|=0$,} \\
\mbox{$\nabla w(x,t)\in U$ for all $(x,t)\in\Omega_T^2$, and $\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon'$.}
\end{array}
\right. \end{equation}
Let $\eta>0$. Our goal is to construct a function $w_\eta=(u_\eta,v_\eta)\in \mathcal A_\delta$ with $\|w-w_\eta\|_{L^\infty(\Omega_T)}<\eta;$ that is, a function $w_\eta\in W^{1,\infty}_{w^*}(\Omega_T;\mathbb R^2)\cap C^2(\bar\Omega_T;\mathbb R^2)$ with the following properties: \begin{equation}\label{func-weta} \left\{\begin{array}{l}
\mbox{$w_\eta\equiv w^*$ in $\Omega_T\setminus\bar\Omega_T^{w_\eta}$ for some open set $\Omega_T^{w_\eta}\subset\subset\Omega_T^2$ with $|\partial\Omega_T^{w_\eta}|=0$,} \\
\mbox{$\nabla w_\eta(x,t)\in U$ for all $(x,t)\in\Omega_T^2$, $\|(u_\eta)_t-h\|_{L^\infty(\Omega_{T_\epsilon})}<\epsilon'$,} \\
\mbox{$\int_{\Omega_T^2}\operatorname{dist}(\nabla w_\eta(x,t),K)\,dxdt\le\delta|\Omega_T^2|$, and $\|w-w_\eta\|_{L^\infty(\Omega_T)}<\eta$.}
\end{array}
\right. \end{equation}
For clarity, we divide the proof into several steps.
\textbf{(Step 1):} Choose a nonempty open set $G_1\subset\subset\Omega_T^2\setminus\partial\Omega_T^w$ with $|\partial G_1|=0$ so that \begin{equation}\label{int-1}
\int_{(\Omega_T^2\setminus\partial\Omega_T^w)\setminus G_1}\operatorname{dist}(\nabla w(x,t),K)\,dxdt\le\frac{\delta}{5}|\Omega_T^2|. \end{equation}
Since $\nabla w\in U$ on $\bar G_1$, we have $\|u_t\|_{L^\infty(G_1)}<\gamma$; then fix a number $\theta$ with \begin{equation}\label{theta}
0<\theta<\min\{\epsilon'-\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})},\gamma-\|u_t\|_{L^\infty(G_1)}\}. \end{equation} For each $\mu>0$, let \[ \begin{split}
G_2^\mu & =\{(x,t)\in G_1\,|\,\operatorname{dist}((u_x(x,t),v_t(x,t)),\partial\tilde U)>\mu\},\\
H_2^\mu & =\{(x,t)\in G_1\,|\,\operatorname{dist}((u_x(x,t),v_t(x,t)),\partial\tilde U)<\mu\},\\
F_2^\mu & =\{(x,t)\in G_1\,|\,\operatorname{dist}((u_x(x,t),v_t(x,t)),\partial\tilde U)=\mu\}. \end{split} \]
Since $\lim_{\mu\to 0^+}|H_2^\mu|=0,$ we can find a $\nu\in(0,\min\{\frac{\delta}{5},\theta\})$ such that \begin{equation}\label{int-2}
\int_{H_2^{\nu}}\operatorname{dist}(\nabla w(x,t),K)\,dxdt\le\frac{\delta}{5}|\Omega_T^2|,\;\;G^\nu_2\not=\emptyset,\;\;\mbox{and}\;\; |F_2^{\nu}|=0. \end{equation} Let us write $G_2=G_2^{\nu}$ and $H_2=H_2^{\nu}$. Choose finitely many disjoint open squares $B_1,\cdots, B_N\subset G_2$ parallel to the axes such that \begin{equation}\label{int-3}
\int_{G_2\setminus(\cup_{i=1}^N B_i)}\operatorname{dist}(\nabla w(x,t),K)\,dxdt\le\frac{\delta}{5}|\Omega_T^2|. \end{equation}
\textbf{(Step 2):} Dividing the squares $B_1,\cdots, B_N$ into smaller disjoint sub-squares if necessary, we can assume that \begin{equation}\label{small}
|\nabla w(x,t)-\nabla w(\tilde x,\tilde t)|<\frac{\nu}{8} \end{equation} whenever $(x,t),(\tilde x,\tilde t)\in B_i$ and $i=1,\cdots,N$. Now, fix any $i\in\{1,\cdots,N\}$. Let $(x_i,t_i)$ denote the center of the square $B_i$, and write \[ (s_i,r_i)=(u_x(x_i,t_i),v_t(x_i,t_i))\in \tilde U; \] then $\operatorname{dist}((s_i,r_i),\partial\tilde U)>\nu$. Let $\alpha_i>0,\,\beta_i>0$ be chosen so that \[ (s_i-\alpha_i,r_i),(s_i+\beta_i,r_i)\in\tilde U,\;\;\operatorname{dist}((s_i-\alpha_i,r_i),\tilde K_-)=\frac{\nu}{2},\;\;\mbox{and} \] \[ \operatorname{dist}((s_i+\beta_i,r_i),\tilde K_+)=\frac{\nu}{2}. \] To apply Theorem \ref{thm:rank-1} with $m=n=2$ to the square $B_i$, let \[ A_i=\begin{pmatrix} s_i-\alpha_i & b_i \\ b_i & r_i \end{pmatrix}\;\;\mbox{and}\;\;B_i=\begin{pmatrix} s_i+\beta_i & b_i \\ b_i & r_i\end{pmatrix}, \] where $b_i=u_t(x_i,t_i);$ then \[ A_i-B_i=\begin{pmatrix} -\alpha_i-\beta_i & 0 \\ 0 & 0 \end{pmatrix}=\begin{pmatrix} -\alpha_i-\beta_i \\ 0 \end{pmatrix}\otimes \begin{pmatrix} 1 \\ 0 \end{pmatrix}. \] Let $\mathcal L:\mathbb M^{2\times 2}\to\mathbb R$ be the linear map defined by \[ \mathcal L(\xi)=-\xi_{21}+\xi_{12}\quad\forall \xi\in\mathbb M^{2\times 2}, \] with its associated matrix $L=\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$; then \[ \mathcal L(A_i)=\mathcal L(B_i)(=0)\;\;\mbox{and}\;\;C_i=\lambda_i A_i+(1-\lambda_i) B_i, \]
where $C_i=\nabla w(x_i,t_i)$ and $\lambda_i=\frac{\beta_i}{\alpha_i+\beta_i}\in(0,1).$ By Theorem \ref{thm:rank-1}, there exists a linear operator $\Phi_i:C^1(\mathbb R^2;\mathbb R^2)\to C^0(\mathbb R^2;\mathbb R^2)$ satisfying properties (1) and (2) in the statement of the theorem with $A=A_i$, $B=B_i$ and $\lambda=\lambda_i$. In particular, for the square $B_i\subset\mathbb R^2$ and a number $0<\tau<\min\{\frac{\nu}{8},\theta,\eta,\frac{\delta|\Omega_T^2|}{5SN}\}$ with $\displaystyle S:=\max_{r_1\le r\le r_2} (s_+(r)-s_-(r))>0$, we can find a function $g_i\in C^\infty_c(B_i;\mathbb R^2)$ such that \begin{equation}\label{patch} \left\{\begin{array}{l}
\Phi_i g_i \in C^\infty_c(B_i;\mathbb R^2),\;\mathcal L(\nabla\Phi_i g_i)=0\;\;\mbox{in $B_i$,} \\
\operatorname{dist}(\nabla\Phi_i g_i,[-\lambda_i(A_i-B_i),(1-\lambda_i)(A_i-B_i)])<\tau\;\;\mbox{in $B_i$,} \\
|B_i^1|<\tau, \;\|\Phi_i g_i\|_{L^\infty(B_i)}<\tau ,
\end{array}
\right. \end{equation} where \[
B_i^1=\{(x,t)\in B_i\,|\,\operatorname{dist}(\nabla\Phi_i g_i(x,t),\{-\lambda_i(A_i-B_i),(1-\lambda_i)(A_i-B_i)\})>0\}. \] Let $B_i^2=B_i\setminus B_i^1$. We finally define \[ w_\eta=w+\sum_{i=1}^N \chi_{B_i} \Phi_i g_i\;\;\mbox{in $\Omega_T$.} \]
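The arithmetic behind the choice of $\lambda_i$ in Step 2 can be verified directly: with $\lambda_i=\beta_i/(\alpha_i+\beta_i)$, the center gradient $C_i$ lies on the rank-one segment $[A_i,B_i]$, and $\mathcal L$ vanishes at both endpoints. A sketch with illustrative values (the numbers stand in for $u_x$, $v_t$, $u_t$ at a square's center and the offsets $\alpha_i$, $\beta_i$; they are assumptions):

```python
import numpy as np

# Illustrative check of the Step-2 construction; s, r, b, alpha, beta
# are assumed sample values, not taken from the text.
s, r, b = 0.3, -0.2, 0.7
alpha, beta = 0.4, 0.6

A = np.array([[s - alpha, b], [b, r]])
B = np.array([[s + beta,  b], [b, r]])
C = np.array([[s,         b], [b, r]])
lam = beta / (alpha + beta)

L_map = lambda xi: -xi[1, 0] + xi[0, 1]    # L(xi) = -xi_21 + xi_12

assert np.isclose(L_map(A), 0.0) and np.isclose(L_map(B), 0.0)
assert np.allclose(lam * A + (1 - lam) * B, C)         # C lies on [A, B]
assert np.allclose(A - B, np.outer([-(alpha + beta), 0.0],
                                   [1.0, 0.0]))        # rank-one direction
```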
\textbf{(Step 3):} Let us check that $w_\eta=(u_\eta, v_\eta)$ indeed satisfies (\ref{func-weta}). It is clear from (\ref{func-w}) and the construction above that $w_\eta\in W^{1,\infty}_{w^*}(\Omega_T;\mathbb R^2)\cap C^2(\bar\Omega_T;\mathbb R^2)$. Set $\Omega_T^{w_\eta}=\Omega_T^{w}\cup(\cup_{i=1}^N B_i)$; then $\Omega_T^{w_\eta}\subset\subset \Omega_T^2$, $|\partial\Omega_T^{w_\eta}|=0$, and $w_\eta=w=w^*$ in $\Omega_T\setminus\bar\Omega_T^{w_\eta}$. From (\ref{small}), (\ref{patch}), $\nu<\theta$ and $\tau<\nu/8$, it follows that for $i=1,\cdots, N$, \[ \nabla w_\eta=\nabla w+\nabla\Phi_i g_i\in [A_i,B_i]_{\nu/4}\subset U\;\;\mbox{in $B_i$,} \] where $[A_i,B_i]_{\nu/4}$ is the $\frac{\nu}{4}$-neighborhood of the closed line segment $[A_i,B_i]$ in the space $\mathbb M^{2\times 2}_{sym}$; thus $\nabla w_\eta\in U$ in $\Omega_T^2$. By (\ref{theta}) and (\ref{patch}), together with the fact that the antidiagonal entries of $A_i-B_i$ vanish, we have \[
\|(u_\eta)_t-h\|_{L^\infty(\Omega_{T_\epsilon})}\le
\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}+\tau<\|u_t-h\|_{L^\infty(\Omega_{T_\epsilon})}+\theta<\epsilon', \] \[
\|w-w_\eta\|_{L^\infty(\Omega_{T})}=\|\sum_{i=1}^N \chi_{B_i} \Phi_i g_i\|_{L^\infty(\Omega_{T})}<\tau<\eta. \] Lastly, note that \[ \begin{split} \int_{\Omega_T^2} & \operatorname{dist}(\nabla w_\eta(x,t),K)\,dxdt = \int_{(\Omega_T^2\setminus\partial\Omega_T^w)\setminus G_1}\operatorname{dist}(\nabla w (x,t),K)\,dxdt\\ & + \int_{H_2} \operatorname{dist}(\nabla w (x,t),K)\,dxdt + \int_{G_2\setminus(\cup_{i=1}^N B_i)} \operatorname{dist}(\nabla w (x,t),K)\,dxdt\\ & +\sum_{i=1}^N \int_{B_i} \operatorname{dist}(\nabla w (x,t)+\nabla \Phi_i g_i(x,t),K)\,dxdt=:I_1+I_2+I_3+I_4. \end{split} \] Observe here that for $i=1,\cdots,N$, \[ \begin{split} \int_{B_i} \operatorname{dist}( & \nabla w +\nabla \Phi_i g_i,K)\,dxdt = \int_{B_i^1} \operatorname{dist}(\nabla w +\nabla \Phi_i g_i,K)\,dxdt \\ & + \int_{B_i^2} \operatorname{dist}(\nabla w (x,t)+\nabla \Phi_i g_i,K)\,dxdt \\
& \le S|B_i^1| + \nu|B_i^2| \le S\tau + \frac{\delta}{5}|B_i^2|\le \frac{\delta|\Omega_T^2|}{5N} + \frac{\delta}{5}|B_i^2|. \end{split} \]
Thus $I_4\le \frac{2\delta}{5}|\Omega_T^2|$; whence with (\ref{int-1}), (\ref{int-2}) and (\ref{int-3}), we have $I_1+I_2+I_3+I_4\le (\frac{\delta}{5}+\frac{\delta}{5}+\frac{\delta}{5}+\frac{2\delta}{5})|\Omega_T^2|=\delta|\Omega_T^2|$.
Therefore, (\ref{func-weta}) is proved, and the proof is complete.
\end{document} |
\begin{document}
\makeatletter
\begin{titlepage}
\centering
{\large \textsc{École polytechnique}}\\
\textsc{Centre de mathématiques appliquées}\\
{\large\textbf{ \@date\\
RESEARCH INTERNSHIP REPORT}}\\
\includegraphics[width=0.5\textwidth]{Image/logo1.png}\\
{\LARGE \textbf{\@title}} \\
{\large \@author - X2016} \\
\begin{flushleft}
{\large \textsc{MAP 594 - Modélisation probabiliste et statistique}}\\
\textbf{Academic supervisor}: BANSAYE Vincent\\
\textbf{Tutors}: CAI Desmond, WYNTER Laura\\
1 April 2018 -- 23 August 2018\\
\textsc{IBM Research}\\
\textsc{Singapore Lab - AI @ Nation Scale}\\
\textsc{Marina Bay Financial Centre, Tower 2, Level 42}\\
\textsc{Singapore 018983}\\
\end{flushleft}
\includegraphics[height=0.12\textheight]{Image/X.png}
\end{titlepage} \makeatother
\section*{Declaration of integrity regarding plagiarism} \addcontentsline{toc}{section}{Declaration of integrity regarding plagiarism} I hereby certify: \begin{enumerate} \item That the results described in this report are the product of my own work. \item That I am the author of this report. \item That I have not used third-party sources or results without clearly citing and referencing them according to the recommended bibliographic rules. \end{enumerate}
Signed: Antoine Grosnit\\
\hrule
Date: \today
\hrule
~
\begin{abstract} \hskip7mm
Reinforcement Learning (RL) has made significant progress in recent years and has been applied to increasingly challenging problems in various domains, such as robotics or resource management in computer clusters. One challenge arises from the presence of multiple simultaneously training agents, which often increases the size of the state and action spaces and makes the system dynamics harder to learn. In this work we describe a multi-agent reinforcement learning (MARL) problem where each agent has to learn a policy that maximizes the long-run expected reward averaged over all agents. To tackle this cooperative problem in a decentralized way, we propose a multi-agent actor-critic algorithm with deterministic policies. As in recent works on the decentralized actor-critic with stochastic policies, we provide convergence guarantees for our algorithm when linear function approximations are used. The consideration of deterministic policy algorithms is motivated by the fact that they can sometimes outperform their stochastic counterparts in high-dimensional spaces. Nevertheless, applicability remains uncertain: the decentralized setting, which involves policy privacy among agents, requires on-policy learning, while deterministic policies are classically trained off-policy because of their limited ability to explore the environment. Such an algorithm may nonetheless still learn good policies in naturally noisy environments. We discuss the strengths and shortcomings of our decentralized MARL algorithm in light of recent developments in MARL.
\end{abstract}
~
\tableofcontents
\section{Introduction} \subsection{Reinforcement Learning}
There are three main approaches to learning: supervised, unsupervised, and reward-based learning. A learning method falls into one of these depending on the kind of feedback provided to the learner. In supervised learning, the correct output is provided to the learner, while no feedback is provided in unsupervised learning. In reward-based learning \cite{Sutton:barto1998}, an environment provides a quality assessment (the ``reward'') of the learner's output. Reinforcement Learning (RL) methods are particularly useful in domains where reinforcement information (expressed as penalties or rewards) is provided after a sequence of actions performed in the environment. Many real-world problems in economics, biology, and robotics involve this kind of setting, which has made reinforcement learning gain popularity over the past decades. We will see how Markov Decision Processes (MDPs) can help capture the interaction between the learning agent and its environment. While in some simple cases MDPs can be solved analytically using dynamic or linear programming, in many cases these techniques are insufficient, either because the system has too many states (this is known as the curse of dimensionality, since the computational effort to solve MDPs grows exponentially with the state-space dimension) or because the underlying MDP model is not known, in which case the agent needs to infer the system's dynamics from data generated by interacting with the environment. In reinforcement learning, the agent makes its decisions based on observations from the environment (such as a set of pixels from an Atari-game screen \cite{mnih:2013:atari}, or data from robotic sensors \cite{levine:2015:robot-locomotion}) and receives a reward that determines the agent's success or failure given the actions taken. Thus, the fundamental question in reinforcement learning is how an agent can improve its behaviour based on its observations, its past actions and the rewards it receives. 
Over the past few decades, the tasks assigned to agents trained using reinforcement learning have become more and more intricate, thanks to the development of new algorithms \cite{ Watkins:1989:Q-Learning, Sutton:2009:Fast-Grad-descent, Silver:2014:DeterministicPolicyGradient, Fujimoto:2018:TD3} and to the increase in available computational power \cite{mnih:2013:atari, Silver:2017:alpha-go}.
\subsection{Multi-Agent Reinforcement Learning} Another level of complexity arises when several agents are involved, in which case the RL problem is called Multi-Agent RL (MARL). Depending on the nature of the problem, the agents can be trained to achieve a cooperative task \cite{Gupta:2017:cooperativeMARL_deepRL} or a competitive one \cite{lowe:2017:multi_AC_Mixed}; they can be integrated into a communication network \cite{FDMARL} that allows them to share useful information (regarding the environment, their policies, their rewards...), or they can be assumed totally independent \cite{Littman:94:markovgames}. In what follows we focus only on the cooperative case where the agents have a common goal, which is to jointly maximize the average reward over all agents. It can be tempting to use a central controller to coordinate the agents, but this solution is not practical in many real-world situations, such as intelligent transport systems \cite{Namoun:2013:intelligent-transport}, in which no central agent exists. A central agent would also induce communication overhead, thus potentially hampering scalability, and would make the system more vulnerable to attacks. That is why the decentralized architecture has been advocated \cite{FDMARL} for its scalability with the number of agents and its robustness against attacks. Moreover, such an architecture, while reducing communication overhead, offers convergence guarantees comparable to those of its centralized counterparts under mild network-specific assumptions \cite{FDMARL, Suttle:2019:MA-OFF-Policy-AC}.
\subsection{Contributions of this work} In this work, we describe a multi-agent reinforcement learning problem where the goal is to develop a decentralized actor-critic algorithm with deterministic policies. This study has been motivated by recent papers presenting convergence proofs for actor-critic algorithms with stochastic policies under linear function approximation \cite{Bhatnagar:2009:NaturalActorCritic, degris:2012:offPol_AC, Maei:2018:Convergent_AC_OffPol, Suttle:2019:MA-OFF-Policy-AC}, and by theoretical and empirical works showing that deterministic algorithms can sometimes outperform their stochastic counterparts in high-dimensional continuous action spaces \cite{Silver:2014:DeterministicPolicyGradient, Lillicrap:2015:DDPG, Fujimoto:2018:TD3}.
By decentralized, we mean the following properties: (a) there is no central controller in the system, (b) each agent chooses its action individually based on its observation of the environment state, and (c) each agent receives an individual reward and keeps this information private. The reward value may depend on other agents' actions. The latter does not imply that each agent can observe the actions selected by the other agents, since agents may not know their reward functions and may be receiving a reward signal from some black-box entity. However, in our theoretical study, we build on the stochastic multi-agent algorithm presented in \cite{FDMARL}, which assumes full observation of other agents' actions.
Our theoretical contribution is three-fold: first, we give an expression for the gradient of the objective function that we want to maximize (the long-run average reward), valid in the undiscounted multi-agent setting with deterministic policies. This expression for the policy gradient (PG) is the same as the one given in \cite{lowe2017multi} for a discounted reward dynamic. Then, we show, as in \cite{Silver:2014:DeterministicPolicyGradient}, that the deterministic policy gradient is the limiting case, as the policy variance tends to zero, of the stochastic policy gradient given in \cite{Bhatnagar:2009:NaturalActorCritic}. Finally, we describe a decentralized deterministic multi-agent actor-critic algorithm that we prove to converge when using linear function approximation, under classical assumptions. We expose the limits and the potential extensions of our approach in light of recent advances.
We apply our deterministic actor-critic algorithms to a cooperative Beer Game problem, an inventory optimization problem described in \cite{Oroojlooyjadid:2017:beergame}, in which each agent is part of a supply chain and aims at minimizing the global cost induced by stocks and shortages.
\section{Preliminaries} For clarity, single-agent RL concepts are given first, followed by their extensions to the multi-agent setting. \subsection{Single Agent Reinforcement Learning} \subsubsection{RL formalism} \begin{figure}
\caption{Basic reinforcement learning routine}
\label{fig:RL-scheme}
\end{figure}
The basic elements involved in reinforcement learning appear in Figure \ref{fig:RL-scheme}.
\begin{itemize}
\item An \textbf{agent} is the entity that interacts with the environment by executing certain actions, taking observations, and receiving eventual rewards. In most practical RL scenarios, the agent is the piece of software that is supposed to solve some problem in an efficient way.
\item The \textbf{environment} is external to the agent, and the agent's communication with it is limited to rewards (obtained from the environment), actions (executed by the agent and given to the environment), and observations (some information besides the rewards that the agent receives from the environment and that defines a state).
\begin{itemize}
\item An environment is said to be \textbf{deterministic} when any action has a single guaranteed effect, with no failure or uncertainty. Otherwise it is said to be \textbf{non-deterministic}; in such an environment, the same task performed twice may produce different results or may even fail completely.
\item An environment is said to be \textbf{episodic} when the agent's performance is the result of a series of independent tasks (episodes); there is no link between the agent's performance in different episodes.
\item A \textbf{discrete} environment has fixed locations or time intervals, while a \textbf{continuous} environment could be measured quantitatively to any level of precision.
\end{itemize}
\item \textbf{Actions} are things that an agent can do in the environment. In RL, there are two types of actions: discrete or continuous. Discrete actions form a discrete set of mutually exclusive things an agent could do at one step, such as move left or right. But many real-world problems require the agent to take actions belonging to a continuous set, for instance choosing an angle or an acceleration when controlling a car.
\item A \textbf{reward} is a scalar value that the agent obtains periodically from the environment. It can be positive or negative, large or small. The purpose of the reward is to tell the agent how well it has behaved.
\item \textbf{Observations} of the environment are the second information channel for an agent, the first being the reward. As shown by Figures \ref{fig:Env_state} and \ref{fig:aGNET8STATE} below, \textbf{states} and observations may not be the same. The environment state $S_t^e$ is the environment's private representation, i.e. whatever data the environment uses to pick the next observation/reward. The environment state is not usually visible to the agent and may contain irrelevant information. The agent state $S^a_t$ is the agent's internal representation, i.e. it is the information used by reinforcement learning algorithms. It can be any function of the history $H_t = (O_1, R_1, A_1, \dots, O_t, R_t)$: $S^a_t = f(H_t)$.
\begin{figure}
\caption{Environment state}
\label{fig:Env_state}
\caption{Agent state}
\label{fig:aGNET8STATE}
\end{figure}
We can then speak of \textbf{Fully Observable Environments} when the agent directly observes the environment state: $O_t = S^a_t = S^e_t$. In this case we can use the Markov decision process (MDP) model.
When the environment is only partially observable by the agent (e.g. a poker-playing agent only observes the public cards), we speak of \textbf{Partially Observable Environments} and use the partially observable Markov decision process (POMDP) model.
\end{itemize}
Figure \ref{fig:RL-scheme} shows that in Fully Observable Environments, at each time step, the agent, knowing the current state, picks an action that modifies its environment. The agent receives an instantaneous reward depending on the action it chose, and receives information from the environment about the new state. Problems with these characteristics are best described in the framework of Markov Decision Processes (MDPs).
\subsubsection{Markov decision processes}
A Markov decision process (MDP) formally describes an environment for reinforcement learning when the environment is fully observable. An MDP is a tuple $\pac{\mathcal{S}, \mathcal{A}, P, R, \gamma}$, where: \begin{itemize}
\item $\mathcal{S}$ is the state space.
\item $\mathcal{A}$ is the action space.
\item $P$ is the state transition probability kernel.
\item $R$ is the reward function, $R: \mc{S}\times\mc{A} \to \mathbb{R}$
\item $\gamma \in (0,1)$ is an optional discount factor. \end{itemize}
In this work, we assume that $\mc{S}$ is finite and $\mc{A}$ is continuous. The Markovian dynamics of the system are captured by $P$: $P(s_{t+1} \vert s_t, a_t)$ is a stationary transition kernel whose conditional probabilities satisfy the Markov property $P(s_{t+1} \vert s_0, a_0, \dots, s_t, a_t) = P(s_{t+1} \vert s_t, a_t)$.
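As a concrete illustration of this formalism, the following sketch samples a trajectory from a toy MDP with three states and a continuous scalar action; the transition kernel and reward below are illustrative assumptions, not taken from the cited works:

```python
import numpy as np

# Toy MDP sketch: finite state space S = {0, 1, 2}, continuous scalar
# action, stationary Markov transitions.  P and R are illustrative.
rng = np.random.default_rng(0)
n_states = 3

def P(s, a):
    """Transition distribution P(. | s, a) over the next state."""
    logits = np.array([np.cos(a + s), np.sin(a), 0.5])
    p = np.exp(logits)
    return p / p.sum()

def R(s, a):
    """Expected reward for taking action a in state s."""
    return -0.1 * a ** 2 + float(s)

def rollout(policy, s0=0, T=1000):
    """Average reward along one T-step trajectory under a = policy(s)."""
    s, rewards = s0, []
    for _ in range(T):
        a = policy(s)
        rewards.append(R(s, a))
        # Markov property: the next state depends only on (s_t, a_t).
        s = rng.choice(n_states, p=P(s, a))
    return float(np.mean(rewards))

avg = rollout(lambda s: 0.1 * s)       # a simple deterministic policy
assert 0.0 <= avg <= 2.0               # rewards lie in [0, 2] here
```

The `rollout` loop makes the state-action-reward sequence $(s_t, a_t, r_{t+1})_{t\ge 0}$ discussed below explicit.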
As the agent selects an action $a_t$ at each step $t$ based on the current state $s_t$ and receives a reward $r_{t+1}$ (whose expectation is given by $R(s_t,a_t)$), there is a random state-action-reward sequence $\left(s_t, a_t, r_{t+1}\right)_{t\geq 0}$ corresponding to the agent's behavior. A rule describing the way the actions are selected is called a policy. In the general case, an agent's policy is stochastic and associates to each state of $\mc{S}$ a distribution over $\mc{A}$. A particular case that we will further explore is the use of a deterministic policy, i.e., a policy assigning an action to each state. Dealing with such a policy can be useful when the action space is very large or continuous.
Reinforcement learning methods specify how the agent’s policy should change to maximize an objective function. Let $\pi$ be the agent's policy, then there are two classical ways of formulating the agent’s objective. One is the long-run average reward formulation, in which policies are ranked according to their long-term expected reward per step, $J(\pi)$ as: \begin{equation} \label{eq:long-run-J}
J(\pi) = \limit{T} \frac{1}{T}\esp{\sum_{t=0}^{T-1} r_{t+1}} = \espL{R(s,a)}{s \sim d^\pi, a \sim \pi(\cdot|s)} \end{equation}
where $d^\pi$ is the stationary distribution over states induced by $P^\pi$ giving the probability of transitioning from state $s$ to state $s^\prime$ following $\pi$: $P^\pi_{s,s^\prime} = \int_\mc{A} \pi(a|s) P(s^\prime|s,a) \textnormal{d}a$ (where $\pi$ can be viewed as a Dirac distribution if it is deterministic). We will make assumptions to ensure that $d^\pi$ is properly defined and does not depend on the initial state.
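To make these quantities concrete, the following minimal sketch (with hypothetical transition probabilities and rewards of our choosing) computes the stationary distribution $d^\pi$ and the average reward $J(\pi)$ for the finite-state chain induced by a fixed policy:

```python
import numpy as np

# Hypothetical 3-state chain induced by a fixed deterministic policy mu:
# P_mu[s, s'] = P(s' | s, mu(s)) and r_mu[s] = R(s, mu(s)).
P_mu = np.array([[0.5, 0.4, 0.1],
                 [0.2, 0.5, 0.3],
                 [0.3, 0.3, 0.4]])
r_mu = np.array([1.0, 0.0, 2.0])

def stationary_distribution(P, iters=5_000):
    """Power iteration d <- d P; converges for an irreducible aperiodic chain."""
    d = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        d = d @ P
    return d

d_mu = stationary_distribution(P_mu)
J = d_mu @ r_mu  # long-run average reward J(mu) = E_{s ~ d_mu}[R(s, mu(s))]
```

For a continuous action space one would integrate $\pi(a|s) P(s'|s,a)$ over $\mc{A}$ first to form the policy-induced transition matrix; the chain above starts from that matrix directly.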
The second formulation requires setting a start state $s_0$ (or an initial state distribution $d_0$) and caring only about the discounted long-term reward obtained from it: \begin{equation*}
J(\pi) = \espL{\sum_{t=1}^{+\infty}\gamma^{t-1} r_t|s_0}{a_t \sim \pi(\cdot|s_t)} \end{equation*} where $\gamma$ is a discount rate ($\gamma=1$ is allowed only in episodic tasks) granting less importance to rewards far in the future. This can be interpreted as a depreciation due to uncertainty of future rewards.
It could be tempting to view the average reward problem as a discounted reward problem in which the discount factor tends to $1$ (after renormalizing by $(1-\gamma)$). However, it has been shown in \cite{Puterman:1994:MDP} (p.~165) that for some convergent algorithms the convergence rate can become very slow as the discount factor approaches $1$, so it makes sense to develop the theory of average-reward algorithms separately. Thus, following \cite{Bhatnagar:2009:NaturalActorCritic, Prabuchandran:2016:A-C_Online_Feature_Adapt, FDMARL, Zhang:2018:ContinuousFDMARL}, we will only consider the average-reward setting in our theoretical developments.
Adopting this setting, we can introduce the differential action-value function $Q_\pi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ \cite{Puterman:1994:MDP} that gives the relative advantage of being in a state $s$ and taking an action $a$ when following a policy $\pi$: \begin{equation*}
Q_\pi(s, a) = \sum_{t=0}^\infty \mathbb{E}_{a_t \sim \pi(\cdot|s_t)}\left(R(s_t, a_t) - J(\pi) \vert s_0 = s, a_0 = a\right) \end{equation*} Similarly, the differential value function $V_\pi : \mathcal{S} \to \mathbb{R}$, giving the relative advantage of being in a state $s$ when following a policy $\pi$, is defined as: \begin{equation*}
V_\pi(s) = \sum_{t=0}^\infty \mathbb{E}_{a_t \sim \pi(\cdot|s_t)}\left(R(s_t, a_t) - J(\pi) \vert s_0 = s\right) \end{equation*} For simplicity, we will hereafter refer to $Q_\pi$ and $V_\pi$ as the value function and the state-value function, respectively.
Since in many problems of interest the number of states is very large (for example, in games such as Backgammon, Chess, and $19 \times 19$ computer Go, there are roughly $10^{28}$, $10^{47}$ and $10^{170}$ states, respectively) and since here the action space is continuous, we cannot hope to compute value functions exactly and instead use approximation techniques. We will approximate $Q_\pi$ by some parametrized function $\hat{Q}^\pi_\omega$ with parameter $\omega$. For notational convenience we write $\hat{Q}_\omega$ when there is no ambiguity on the underlying policy. Similarly, a policy $\pi$ can also be parametrized as $\pi_\theta$ (one can think of a standard Gaussian policy with mean $\theta$), in which case the reinforcement learning algorithm's goal is to find the best admissible $\theta$ maximizing $J(\pi_\theta)$. This is a model-free problem given that we do not have direct access to the environment's dynamics characterized by $P$. One of the most important properties of $Q_\pi$ is that it satisfies the Bellman equation (also known as the Poisson equation) \cite{Puterman:1994:MDP}: \begin{equation} \label{eq:poisson}
Q_\pi(s,a) = R(s,a) - J(\pi) + \sum_{s^\prime \in \mc{S}} P(s^\prime|s,a) V_\pi(s^\prime) \end{equation} Therefore, a classical idea to find a good approximation of the value function is to choose $\omega$ such that, at each step $t$, the temporal-difference error \begin{equation} \label{eq:td-error}
\delta_t = \pa{r_{t+1} - J(\pi_\theta) + \hat{Q}_\omega(s_{t+1},a_{t+1})} - \hat{Q}_\omega(s_t,a_t) \end{equation} is small in magnitude.
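The Poisson equation can be verified numerically on a toy two-state chain (with hypothetical values of our choosing): solving for the differential value function $V$, pinned down by the normalization $d^\top V = 0$, makes the expected TD error vanish exactly:

```python
import numpy as np

# Toy chain under a fixed policy: P[s, s'] and expected rewards r[s] (hypothetical).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
r = np.array([1.0, -1.0])

# Stationary distribution d solves d = d P (left Perron eigenvector).
eigvals, eigvecs = np.linalg.eig(P.T)
d = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
d /= d.sum()

J = d @ r  # long-run average reward

# Differential value function: (I - P) V = r - J, with d @ V = 0 enforced
# by adding the rank-one term outer(1, d) before solving.
V = np.linalg.solve(np.eye(2) - P + np.outer(np.ones(2), d), r - J)

# Poisson equation check: the expected TD error r - J + P V - V is zero
# at the true (J, V).
delta = r - J + P @ V - V
```

The rank-one trick works because multiplying the modified system by $d$ on the left shows $d^\top V = d^\top r - J = 0$, so $V$ indeed satisfies the unmodified Poisson equation.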
Now that the main elements of the single-agent RL machinery have been introduced, we can straightforwardly extend them to the decentralized multi-agent setting.
\subsection{Decentralized Multi-Agent Reinforcement Learning} \label{section:MARL}
\begin{figure}
\caption{Decentralized MARL routine}
\label{fig:MARL}
\end{figure}
We now consider a system of $N$ agents denoted by $\mc{N} = \pabb{1,N}$ operating in a common environment. As in \cite{FDMARL, Suttle:2019:MA-OFF-Policy-AC}, we model the multi-agent reinforcement learning problem as a networked multi-agent MDP described by a tuple $\pac{\mc{S}, \mc{A}, P, \paa{R^i}_{i\in\mc{N}}, \paa{\mc{G}_t}_{t\geq 0}}$ where \begin{itemize}
\item $\mathcal{S}$ is a finite state space.
\item $\mathcal{A} = \prod_{i\in \mc{N}} \mathcal{A}^i$ is a joint action space (where $\mathcal{A}^i$ is the continuous action space associated to agent $i$).
\item $P$ is the same as in the single agent setting.
\item $\paa{R^i}_{i\in\mc{N}}$ is the set of local reward functions $R^i:\mc{S} \times \mc{A} \to \mathbb{R}$.
\item $\paa{\mc{G}_t}_t = \paa{\pa{\mc{N}, \mc{E}_t}}_t$ is a sequence of time-varying communication networks. \end{itemize} Throughout this report, it is assumed that states and actions are globally observed but that rewards are only locally observed, i.e., $r^i_t$ is only known by agent $i$. The process is thus decentralized in the sense that no central controller either collects local rewards or takes actions on behalf of the agents. The communication network is modelled as a time-varying graph $\mc{G}_t = \pa{\mc{N}, \mc{E}_t}$, where $\mc{E}_t$ denotes the edge set of $\mc{G}_t$. Agents $i$ and $j$ can share information at time $t$ if and only if $(i,j)$ is in $\mc{E}_t$.
As action selection is performed locally by each agent, we can assume that this selection is conditionally independent given the current state. With this decentralized setting, we can thus express a joint policy $\pi$ as a product of the local policies $\pi^i$: $\pi(a|s) = \prod_{i \in \mc{N}} \pi^i(a^i|s)$, where $a = (a^1, \dots, a^N) \in \mc{A}$ and $\pi^i(a^i|s)$ is the conditional probability density of $a^i$ associated with the policy $\pi^i$ of agent $i$. At step $t$, given the global state $s_t$, agents select $a_t = (a^1_t, \dots, a^N_t)$ according to their own policies, and as a result, each agent receives a specific reward $r^i_{t+1}$ whose expected value is given by $R^i(s_t,a_t)$ (the local reward depends on the state and the global action). The decentralized setting thus makes it possible to handle multi-task RL \cite{Macua:2017:Dec_multi-task_deep-RL}, as local rewards can be totally unrelated to one another.
We now assume that each agent follows a deterministic policy $\mu^i_{\theta^i}: \mc{S} \to \mc{A}$ parametrized by $\theta^i \in \Theta^i$, with $\Theta^i$ a compact subset of $\mathbb{R}^{m_i}$. Then $\mu_\theta : \mc{S} \to \mc{A}$ defined as $\mu_\theta (s) = \left(\mu_{\theta}^1(s), \dots, \mu_{\theta}^N(s) \right) = \left(\mu_{\theta^1}^1(s), \dots, \mu_{\theta^N}^N(s) \right)$, with $\theta = \left[\left(\theta^1\right)^\top, \dots,\left(\theta^N\right)^\top\right]^\top \in \mathbb{R}^m$, is the joint policy of all agents, and we have $a_t = \mu_\theta(s_t)$. We denote by $\bar{r}_{t}$ the reward averaged over all agents, i.e., $\bar{r}_t = \frac{1}{N}\sum_{i \in \mc{N}} r^i_t$, and similarly we write $\bar{R}(s, a) = \frac{1}{N}\sum_{i \in \mathcal{N}} R^i(s,a)$.
The following is a regularity assumption on the networked MDP and policy function that is standard in existing work on actor-critic algorithms using function approximation \cite{Konda:Tsitsiklis:actor-criticalgorithms, Bhatnagar:2009:NaturalActorCritic}.
\begin{asm} \label{asm:MDP_Regularity} For any $i \in \mathcal{N}$, $s \in \mathcal{S}$, $\mu^i_{\theta^i}(s)$ is twice continuously differentiable with respect to the parameter $\theta^i$ over $\Theta^i$. Moreover, for any $\theta \in \Theta$, let $P^{\mu_\theta}$ be the transition matrix of the Markov chain $\{s_t\}_{t \geq 0}$ induced by the policy $\mu_\theta$, that is \begin{equation*} P^{\mu_\theta}(s^\prime \vert s) = P(s^\prime \vert s, \mu_\theta(s)), \quad \forall s, s^\prime \in \mathcal{S} \end{equation*} We assume that the Markov chain $\{s_t\}_{t \geq 0}$ is irreducible and aperiodic under any $\mu_\theta$, $\theta \in \Theta$, and denote by $d^{\mu_\theta}$ its stationary distribution. Furthermore, we assume that $R^i(s, a)$ is uniformly bounded for all $i \in \mathcal{N}, s \in \mathcal{S}, a \in \mathcal{A}$. \end{asm}
In addition, we make another standard assumption regarding the regularity of the expected rewards $R^i$ and the state transition probability kernel $P$. \begin{asm} \label{asm:Reg_P_R} For any $s, s^\prime \in \mathcal{S}, i \in \mathcal{N}$, $P(s^\prime \vert s,a)$ and $R^i(s, a)$ are bounded, twice differentiable, and have bounded first and second derivatives. \end{asm}
This assumption allows us to justify the existence and differentiability of the cooperative objective function that the agents aim to maximize. In the multi-agent setting, this objective function is the long-run average reward $J(\mu_\theta)$: \begin{align} J(\mu_\theta) &= \limit{T} \frac{1}{T}\esp{\sum_{t=0}^{T-1} \bar{r}_{t+1}} = \espL{\bar{R}(s,\mu_\theta(s))}{s \sim d^\theta} \end{align}
Similar to the single-agent case, we define the action-value function $Q_{\mu_\theta} : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ and the value function $V_{\mu_\theta} : \mathcal{S} \to \mathbb{R}$ as follows: \begin{align}
&Q_{\mu_\theta}(s, a) = \sum_{t=0}^\infty \mathbb{E}\left(\bar{r}(s_t, a_t) - J(\mu_\theta) \vert s_0 = s, a_0 = a, \mu_\theta \right) \label{eq:Q-multi-agent}\\
&V_{\mu_\theta}(s) = \sum_{t=0}^\infty \mathbb{E}\left(\bar{r}(s_t, a_t) - J(\mu_\theta) \vert s_0 = s, \mu_\theta \right) \label{eq:V-multi-agent} \end{align} Note that, for a deterministic policy $\mu_\theta$, there is a simple relation between $V_{\mu_\theta}$ and $Q_{\mu_\theta}$: \begin{equation} \label{eq:Q-v}
V_{\mu_\theta}(s) = Q_{\mu_\theta}(s, \mu_\theta(s)) \end{equation} and the Poisson equation (\ref{eq:poisson}) becomes: \begin{equation}\label{eq:poisson-multi}
Q_{\mu_\theta}(s,a) = \bar{R}(s,a) - J(\mu_\theta) + \sum_{s^\prime\in\mc{S}} P(s^\prime|s,a) Q_{\mu_\theta}(s^\prime, \mu_\theta(s^\prime)) \end{equation}
Now that the RL formalism has been introduced, we can try to solve our RL problem by adapting a popular algorithm called Actor-Critic (AC) to our deterministic multi-agent setting.
\section{Actor-Critic algorithm}
We first present the Actor-Critic principle for the single-agent case before naturally extending it to the multi-agent setting.
\subsection{Principle}
Given a parametrized policy $\mu_\theta$, Actor-only methods \cite{Marbach:2001} consist in estimating the gradient of the objective function $J(\mu_\theta)$ with respect to the actor parameter $\theta$ and updating this parameter in a gradient-ascent direction. As this estimation is redone after each iteration on $\theta$, theoretical and empirical studies have emphasized an important drawback of these methods: the high variance of their gradient estimators, which can lead to slow convergence and sample inefficiency. On the other hand, Critic-only methods aim at learning an approximate solution to the Bellman equation associated with $Q_{\mu_\theta}$ or $V_{\mu_\theta}$. There is no update of $\theta$, so the goal of these methods is not to improve the policy but to obtain a good estimate of its performance.
Combining the strengths of both approaches, the Actor-Critic algorithm was proposed in \cite{Konda:Tsitsiklis:actor-criticalgorithms} to find an optimal policy by following a gradient-ascent strategy based on a better estimation of the value functions.
\begin{figure}
\caption{Actor-Critic routine}
\label{fig:AC}
\end{figure}
Figure \ref{fig:AC} shows the algorithm routine with a single agent: \begin{itemize}
\item In the critic step, based on the observation of $s_t$, $a_t$, $r_{t+1}$ and $s_{t+1}$, the $Q$-function parameter $\omega$ is updated with a temporal-difference scheme (\ref{eq:td-error}), which requires an estimate $\hat{J}$ of the long-run average reward, updated as well.
\item In the actor step, the policy parameter $\theta$ is updated following an estimate of the gradient of $J$. \end{itemize} Updating $\theta$, $\hat{J}$ and $\omega$ by directly interacting with the environment following the current policy (i.e., in an on-line fashion), we can hope to eventually obtain a good parameter estimate for $Q_{\mu_\theta}$, with $\theta$ close to a local optimizer of the true objective function $J(\mu_\theta)$.
\subsection{Critic step: Temporal Difference learning (TD)}
Part of the RL literature has focused on the estimation of the objective function, state-value function or action-value function associated with a stationary policy. This problem arises as a subroutine of generalized policy iteration and is generally considered an important step towards algorithms that can learn good control policies in reinforcement learning. For a stationary policy $\mu_\theta$, estimating the objective function $J(\mu_\theta) = \espL{R(s,\mu_\theta(s))}{s\sim d^{\mu_\theta}}$ by some $\hat{J}_t$ relies on a classical stochastic approximation scheme \cite{Robbins&Monro:1951}: \begin{equation}
\hat{J}_{t+1} = \hat{J}_t + \beta_{J,t} (r_{t+1} - \hat{J}_t) \end{equation} where $\beta_{J,t}$ belongs to a family of vanishing step-sizes characterized in Assumption \ref{asm:step-sizes}. Note that, with $\beta_{J,t}=\frac{1}{t+1}$ we have \begin{equation*}
\hat{J}_{t} = \frac{1}{t}\sum_{k=0}^{t-1} r_{k+1} \to J(\mu_\theta) \quad \textnormal{a.s.} \end{equation*}
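A quick numerical illustration (with i.i.d.\ synthetic rewards standing in for $r_{t+1}$, a simplifying assumption on our part) that the recursion with $\beta_{J,t} = \frac{1}{t+1}$ reproduces the running sample mean:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.normal(loc=0.5, scale=1.0, size=5_000)  # i.i.d. stand-ins for r_{t+1}

J_hat = 0.0
for t, r in enumerate(rewards):
    beta = 1.0 / (t + 1)            # beta_{J,t} = 1/(t+1)
    J_hat += beta * (r - J_hat)     # J_{t+1} = J_t + beta * (r_{t+1} - J_t)
```

With this step size the first update overwrites $\hat{J}_0$ entirely, so $\hat{J}_t$ is exactly the empirical mean of the first $t$ rewards, which converges almost surely to the true mean (here $0.5$).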
For the action-value function estimation, we consider the simplest version of TD learning, called TD($0$) (generalized as TD($\lambda$) with $\lambda\in\pab{0,1}$ \cite{Sutton:1988:RL_TD}), which is based on the one-step error signal (\ref{eq:td-error}). This method has somewhat the flavor of supervised learning, as the goal is to minimize an error measure with respect to the set of values that parametrizes the function making the prediction. Here, the predicting function is the parametrized value function $\hat{Q}_{\omega_t}$, and, given the sequence $s_t, a_t, r_{t+1}, s_{t+1}$ and $a_{t+1}$, the predicted value $\hat{Q}_{\omega_t}(s_t,a_t)$ is compared to the target value $r_{t+1} - \hat{J}_t + \hat{Q}_{\omega_t}(s_{t+1},a_{t+1})$ (owing to the Poisson equation (\ref{eq:poisson})). As the predicting function is also used to define the target value, TD($0$) is called a bootstrapping method. Nonetheless, as in supervised learning, the one-step gradient descent is performed considering the target value fixed, which gives the update of $\omega_t$: \begin{equation}
\omega_{t+1} = \omega_t + \beta_{\omega,t} \cdot\pa{r_{t+1} - \hat{J}_t + \hat{Q}_{\omega_t}(s_{t+1},a_{t+1}) - \hat{Q}_{\omega_t}(s_t,a_t)} \cdot \nablaV{\omega}{\hat{Q}_\omega(s_t,a_t)}{\omega_t} \end{equation} It is classical to set $\beta_{J,t} = \beta_{\omega,t}$, so finally we have the following critic recursion \begin{align} \label{algo:critic-step-single-agent}
&\hat{J}_{t+1} = \hat{J}_t + \beta_{\omega,t} (r_{t+1} - \hat{J}_t)\\
&\omega_{t+1} = \omega_t + \beta_{\omega,t} \cdot \delta_t \cdot \nablaV{\omega}{\hat{Q}_\omega(s_t,a_t)}{\omega_t} \end{align} with $\delta_t = r_{t+1} - \hat{J}_t + \hat{Q}_{\omega_t}(s_{t+1},a_{t+1}) - \hat{Q}_{\omega_t}(s_t,a_t)$.
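The critic recursion above can be sketched as follows for a linear approximation $\hat{Q}_\omega(s,a) = \omega^\top \phi(s,a)$, in which case $\nabla_\omega \hat{Q}_\omega(s_t,a_t) = \phi(s_t,a_t)$ (the function and variable names are ours, for illustration):

```python
import numpy as np

def critic_step(omega, J_hat, phi_t, phi_tp1, r_tp1, beta):
    """One average-reward TD(0) critic update for a linear critic
    Q_omega(s, a) = omega @ phi(s, a), so grad_omega Q_omega = phi(s, a)."""
    delta = r_tp1 - J_hat + omega @ phi_tp1 - omega @ phi_t  # TD error delta_t
    J_hat_new = J_hat + beta * (r_tp1 - J_hat)               # average-reward tracker
    omega_new = omega + beta * delta * phi_t                 # semi-gradient step
    return omega_new, J_hat_new, delta
```

Here `phi_t` and `phi_tp1` stand for the feature vectors $\phi(s_t,a_t)$ and $\phi(s_{t+1},a_{t+1})$; the target value is held fixed, so only $\phi(s_t,a_t)$ appears in the parameter update ("semi-gradient").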
We now need to focus on the actor step, which is a gradient ascent method, requiring an estimation of the gradient of $J(\mu_\theta)$. In RL literature, this is referred to as a policy gradient problem.
\subsection{Actor step: Policy Gradient (PG)}
The fundamental result underlying the actor step of an AC algorithm is the \textit{policy gradient theorem} \cite{Sutton:2000:Policy_grad_meth_RL_fct_appr}, giving a simple expression of the gradient of the objective function $J(\pi_\theta)$ for a parametrized stochastic policy $\pi_\theta$ in both the discounted and undiscounted MDP settings (specific assumptions will be made when giving our policy gradient result in Theorem \ref{thm:PG-single-agent}): \begin{equation} \label{eq:PG-Sutton}
\nabla_\theta J(\pi_\theta) = \sum_{s \in \mc{S}} d^{\pi_\theta}(s) \int_{\mc{A}} \nabla_\theta \pi_\theta(a|s) Q_\pi(s,a)\textnormal{d}a = \espL{\nabla_\theta \log \pi_\theta(a|s) Q_\pi(s,a)}{s \sim d^{\pi_\theta}, a \sim \pi_\theta} \end{equation}
An interesting aspect of this formula comes from the fact that even though $J(\pi_\theta)$ depends on the stationary distribution $d^{\pi_\theta}$, its gradient with respect to $\theta$ does not involve the gradient of this distribution, which is a good thing since it might be hard to approximate. In the discounted setting, there is a useful extension of this PG result to the case of a deterministic policy $\mu_\theta$ \cite{Silver:2014:DeterministicPolicyGradient}: \begin{equation}\label{eq:PG-Silver}
\nabla_\theta J(\mu_\theta) = \sum_{s \in \mc{S}} d^{\mu_\theta}(s) \nabla_\theta \mu_\theta(s) \nablaV{a}{Q_{\mu_\theta}(s,a)}{\mu_\theta(s)} = \espL{\nabla_\theta \mu_\theta(s) \nablaV{a}{Q_{\mu_\theta}(s,a)}{\mu_\theta(s)}}{s \sim d^{\mu_\theta}} \end{equation}
As in (\ref{eq:PG-Sutton}), the gradient of $J$ with respect to $\theta$ is again independent of the gradient of the stationary distribution, $\nabla_\theta d^{\mu_\theta}$.
To the best of our knowledge, no such expression for the deterministic policy gradient has been provided in the undiscounted setting. As this is the case we want to handle, we show that (\ref{eq:PG-Silver}) still holds in the long-run average reward setting (\ref{eq:long-run-J}): \begin{thm}[Deterministic Policy Gradient Theorem] \label{thm:PG-single-agent} For any $\theta \in \Theta$, under Assumptions \ref{asm:MDP_Regularity} and \ref{asm:Reg_P_R} (here for $N=1$), $\nabla_{\theta} J(\mu_\theta)$ exists and is given by \begin{equation*}
\nabla_{\theta}J(\mu_\theta) = \espL{\nabla_\theta \mu_\theta(s) \nablaV{a}{Q_{\mu_\theta}(s,a)}{\mu_\theta(s)}}{s \sim d^{\mu_\theta}} \end{equation*} \end{thm}
\begin{proof} The proof follows the same scheme as \cite{Sutton:2000:Policy_grad_meth_RL_fct_appr}, naturally extending their results to a deterministic policy $\mu_\theta$ and a continuous action space $\mc{A}$. Within this proof, $J(\theta)$ (resp. $d^\theta$, $Q_\theta$, $V_\theta$) stands for $J(\mu_\theta)$ (resp. $d^{\mu_\theta}$, $Q_{\mu_\theta}$, $V_{\mu_\theta}$).
Note that Assumptions \ref{asm:MDP_Regularity} and \ref{asm:Reg_P_R} ensure that, for any $s \in \mathcal{S}$, $V_\theta(s)$, $\nabla_\theta V_\theta(s)$, $J(\theta)$, $\nabla_\theta J(\theta)$ and $d^\theta(s)$ are Lipschitz-continuous functions of $\theta$, and that $Q_\theta(s,a)$ and $\nabla_a Q_\theta(s,a)$ are Lipschitz-continuous functions of $a$ \cite{Marbach:2001}. Taking the gradient on both sides of (\ref{eq:Q-v}), \begin{align*}
\nabla_\theta V_\theta(s) &= \nabla_\theta Q_\theta(s, \mu_\theta(s)) \\
&= \nabla_\theta \big[\bar{R}(s, \mu_\theta(s)) - J(\theta) + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) V_\theta(s^\prime) \big] \\
&= \nabla_\theta \mu_\theta (s) \left. \nabla_a \bar{R}(s,a) \right|_{a = \mu_\theta(s)} - \nabla_\theta J(\theta) + \nabla_\theta \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) V_\theta(s^\prime) \\
\begin{split}
&= \nabla_\theta \mu_\theta (s) \left. \nabla_a \bar{R}(s,a) \right|_{a = \mu_\theta(s)} - \nabla_\theta J(\theta)\\
& \qquad + \sum_{s^\prime \in \mathcal{S}} \nabla_\theta \mu_\theta (s) \left. \nabla_a P(s^\prime \vert s, a) \right|_{a = \mu_\theta(s)} V_\theta (s^\prime) + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime)
\end{split}\\
\begin{split}
&= \nabla_\theta \mu_\theta (s) \nabla_a \left.\Big[\bar{R}(s, a) + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s,a) V_\theta (s^\prime) \Big]\right|_{a = \mu_\theta(s)} \\
& \qquad - \nabla_\theta J(\theta) + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime)
\end{split}\\
&= \nabla_\theta \mu_\theta(s) \nabla_a \left. Q_\theta(s,a)\right|_{a = \mu_\theta(s)} + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime) - \nabla_\theta J(\theta) \end{align*} Hence, \begin{align*}
\nabla_\theta J(\theta) &= \nabla_\theta \mu_\theta(s) \nabla_a \left. Q_\theta(s,a)\right|_{a = \mu_\theta(s)} + \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime) - \nabla_\theta V_\theta(s) \\
\begin{split}
\sum_{s \in \mathcal{S}} d^\theta(s) \nabla_\theta J(\theta) &= \sum_{s \in \mathcal{S}} d^\theta(s) \nabla_\theta \mu_\theta(s) \nabla_a \left. Q_\theta(s,a)\right|_{a = \mu_\theta(s)} \\
& \qquad + \sum_{s \in \mathcal{S}} d^\theta(s) \sum_{s^\prime \in \mathcal{S}} P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime) - \sum_{s \in \mathcal{S}} d^\theta(s) \nabla_\theta V_\theta(s)
\end{split} \end{align*} Using the stationarity of $d^\theta$, we get $$\sum_{s \in \mathcal{S}} \sum_{s^\prime \in \mathcal{S}} d^\theta (s) P(s^\prime \vert s, \mu_\theta(s)) \nabla_\theta V_\theta(s^\prime) = \sum_{ s^\prime \in \mathcal{S}} d^\theta(s^\prime) \nabla_\theta V_\theta(s^\prime)$$ Thus \begin{equation*}
\nabla_\theta J(\theta) = \sum_{s \in \mathcal{S}} d^\theta(s) \nabla_\theta \mu_\theta(s) \left. \nabla_a Q_\theta(s,a)\right|_{a = \mu_\theta(s)} = \mathbb{E}_{s \sim d^\theta} \big[\nabla_\theta \mu_\theta(s) \left. \nabla_a Q_\theta(s,a)\right|_{a = \mu_\theta(s)}\big] \end{equation*} \end{proof}
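The theorem can be checked numerically on a small hypothetical MDP of our own design, with two states and a scalar action, where the stationary distribution, differential value function and $\nabla_a Q_\theta$ are all computable in closed form, and the policy-gradient expression can be compared against a finite-difference approximation of $\nabla_\theta J$:

```python
import numpy as np

sig = lambda x: 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-state MDP with a scalar continuous action:
# P(s' = 1 | s, a) = sigmoid(a + c[s]),  R(s, a) = -(a - g[s])**2,
# deterministic policy mu_theta(s) = theta[s].
c = np.array([0.2, -0.4])
g = np.array([1.0, -1.0])

def chain(theta):
    """Transition matrix, rewards and stationary distribution under mu_theta."""
    p1 = sig(theta + c)                  # P(s' = 1 | s, mu_theta(s)), s = 0, 1
    P = np.stack([1.0 - p1, p1], axis=1)
    r = -(theta - g) ** 2
    d = np.array([1.0 - p1[1], p1[0]])   # stationary distribution of a 2-state chain
    d /= d.sum()
    return P, r, d

def J(theta):
    _, r, d = chain(theta)
    return d @ r

def pg(theta):
    """Policy-gradient expression: component i is d(i) * grad_a Q(i, a)|_{a=theta[i]}."""
    P, r, d = chain(theta)
    # Differential value function: (I - P) V = r - J, pinned down by d @ V = 0.
    V = np.linalg.solve(np.eye(2) - P + np.outer(np.ones(2), d), r - d @ r)
    dR_da = -2.0 * (theta - g)
    dP1_da = sig(theta + c) * (1.0 - sig(theta + c))  # d/da P(s' = 1 | s, a)
    dQ_da = dR_da + dP1_da * (V[1] - V[0])            # grad_a Q via the Poisson equation
    return d * dQ_da                                  # grad_theta mu_theta(s) = e_s here

theta = np.array([0.3, -0.2])
eps = 1e-5
fd = np.array([(J(theta + eps * e) - J(theta - eps * e)) / (2 * eps)
               for e in np.eye(2)])
```

Note that $\nabla_a Q_\theta$ is insensitive to the additive-constant ambiguity of the differential value function, since $\sum_{s'} \nabla_a P(s'|s,a) = 0$.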
Even though the stochastic (\ref{eq:PG-Sutton}) and deterministic (\ref{eq:PG-Silver}) policy gradients may not look alike at first glance, it has been established in \cite{Silver:2014:DeterministicPolicyGradient} that, for a wide class of stochastic policies, the deterministic policy gradient is a limiting case of the stochastic policy gradient. The claim is that, in a discounted setting, if we consider parametrized stochastic policies $\pi_{\theta,\sigma}$ associated with deterministic policies $\mu_\theta$, where $\sigma$ is a variance parameter such that $\pi_{\theta,\sigma}$ collapses to $\mu_\theta$ as $\sigma \to 0$, then the stochastic policy gradient converges to the deterministic gradient. We prove that this limit still holds in the long-run average setting:
\begin{thm} \label{thm:limit_stoch_grad}
Let $\pi_{\theta, \sigma}$ be a stochastic policy such that $\pi_{\theta, \sigma}(a|s) = \nu_\sigma(\mu_\theta(s),a)$, where $\sigma$ is a parameter controlling the variance and $\nu_\sigma$ satisfies Conditions \ref{cond:regular_delta_appr}. Suppose further that Assumptions \ref{asm:MDP_Regularity} and \ref{asm:Reg_P_R} on the MDP hold. Then, \begin{equation} \label{eq:PG-Grosnit}
\underset{\sigma \downarrow 0}{\textnormal{lim }} \nabla_\theta J_{\pi_{\theta, \sigma}} (\pi_{\theta, \sigma}) = \nabla_\theta J_{\mu_\theta}(\mu_\theta) \end{equation} where on the l.h.s.\ the gradient is the standard stochastic policy gradient (\ref{eq:PG-Sutton}) and on the r.h.s.\ the gradient is the deterministic policy gradient given in Theorem \ref{thm:PG-single-agent}. \end{thm} The proof does not rely on the expressions of the policy gradient given by (\ref{eq:PG-Sutton}) and (\ref{eq:PG-Grosnit}), but proceeds by directly applying the gradient operator to the definition of $J$, which, in the deterministic case (\ref{eq:long-run-J}), gives by simple differentiation of a product: \begin{equation*}
\nabla_\theta J(\mu_\theta) = \sum_{s \in \mathcal{S}} \nabla_\theta d^{\mu_\theta}(s) \bar{R}(s,\mu_\theta(s)) + \sum_{s \in \mathcal{S}} d^{\mu_\theta}(s) \nabla_\theta \mu_\theta(s)\left. \nabla_a \bar{R}(s,a)\right|_{a = \mu_\theta(s)} \end{equation*} We then use the fact that the gradient of $d^{\pi_{\theta,\sigma}}$ converges to the gradient of $d^{\mu_\theta}$ as $\sigma \downarrow 0$ under some technical conditions, and exploit the properties of $\nu_\sigma$. A rigorous statement of this theorem and the details of the proof are provided in Appendix \ref{app:limit_stoch_grad}.
It has been emphasized in \cite{Silver:2014:DeterministicPolicyGradient} that the fact that the deterministic gradient is a limiting case of the stochastic gradient makes it likely that the standard machinery of policy gradient methods, such as compatible function approximation \cite{Sutton:2000:Policy_grad_meth_RL_fct_appr}, natural gradients \cite{Kakade:2001:NPG}, on-line feature adaptation \cite{Prabuchandran:2016:A-C_Online_Feature_Adapt} and actor-critic \cite{Konda:2002:Actor-Critic}, remains applicable with a deterministic policy. That is why it was important to show that this convergence holds in the long-run average setting.
Having an expression for the deterministic policy gradient (\ref{eq:PG-Grosnit}), we can use it to design the actor step of our actor-critic algorithm as follows: \begin{equation} \label{algo:actor-step-single-agent} \theta_{t+1} = \theta_t + \beta_{\theta,t} \cdot \nabla_{\theta} \mu_{\theta_t}(s_t) \cdot \nablaV{a}{\hat{Q}_{\omega_t}(s_t,a)}{\mu_{\theta_t}(s_t)} \end{equation}
We get an actor-critic algorithm by iterating $(\ref{algo:critic-step-single-agent})$ and (\ref{algo:actor-step-single-agent}) on a sequence of states $s_t$, actions $\mu_{\theta_t}(s_t)$ and rewards $r_{t+1}$ generated by interacting with the environment.
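The resulting single-agent loop can be sketched on a toy two-state MDP with scalar actions (hypothetical dynamics and rewards of our choosing). One caveat: we add a small Gaussian exploration noise to the executed action, an assumption on our part, needed here for a linear critic to identify the slope $\nabla_a \hat{Q}_\omega$ from otherwise single-valued on-policy data, while the actor still follows the deterministic-gradient direction:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))
c, g = np.array([0.2, -0.4]), np.array([1.0, -1.0])  # hypothetical MDP constants

def step(s, a):
    """Sample s' with P(s'=1 | s, a) = sigmoid(a + c[s]); reward R(s,a) = -(a-g[s])^2."""
    s_next = int(rng.random() < sig(a + c[s]))
    return s_next, -(a - g[s]) ** 2

def phi(s, a):
    """Per-state linear features: Q_omega(s, a) = omega[s] + omega[2 + s] * a."""
    f = np.zeros(4)
    f[s], f[2 + s] = 1.0, a
    return f

theta = np.array([0.5, -0.5])          # mu_theta(s) = theta[s]
omega, J_hat = np.zeros(4), 0.0
beta_w, beta_th, expl = 0.05, 0.005, 0.3   # critic learns faster than the actor
s = 0
a = theta[s] + expl * rng.normal()
for t in range(20_000):
    s_next, r = step(s, a)
    a_next = theta[s_next] + expl * rng.normal()
    delta = r - J_hat + omega @ phi(s_next, a_next) - omega @ phi(s, a)  # TD error
    J_hat += beta_w * (r - J_hat)                 # average-reward estimate
    omega += beta_w * delta * phi(s, a)           # critic: TD(0)
    theta[s] += beta_th * omega[2 + s]            # actor: grad_a Q_omega = omega[2+s]
    s, a = s_next, a_next
```

With these features $\nabla_a \hat{Q}_\omega(s,a) = \omega_{2+s}$, so the actor update is exactly the deterministic-gradient step at the visited state.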
\subsection{Actor-Critic algorithm for Decentralized-MARL}
In this section, we adapt the actor-critic algorithm presented above for the multi-agent MDP with networked agents. Each agent follows a parametrized deterministic policy $\mu^i_{\theta^i}$.
\subsubsection{Critic step} As in \cite{FDMARL}, we assume that each agent $i$ maintains its own parameter $\omega^i$ and uses $\hat{Q}_{\omega^i}$ as a local estimate of the \textit{global} value function $Q_{\mu_\theta}$ defined in (\ref{eq:Q-multi-agent}). The parameter $\omega^i$ is a column vector of $\mathbb{R}^K$ that is updated at each time step during the critic step. Since in the multi-agent setting the Bellman equation involves the globally averaged reward $\bar{r}_{t+1}$, estimating $Q_{\mu_\theta}$ requires aggregating local information: if agents share no information, they can learn nothing beyond maximizing their own rewards, which may lead to a poor averaged reward. Thus, the communication network is used by each agent $i$ to share its local parameter $\omega^i$ with its neighbors, so that a consensus can be reached on the parameter $\omega$ that provides the best approximation of $Q_{\mu_\theta}$. Note that this communication does not compromise the agents' confidentiality, as it is not possible to infer either $r^i$ or $\theta^i$ from the shared $\omega^i$.
The critic update comprises two steps: an update similar to the single-agent critic step, based on TD($0$) for the action-value function and classical stochastic approximation for the local long-run expected reward, and a consensus step during which each agent updates its local parameter by taking a linear combination of its neighbors' parameter estimates. The weights of this combination are governed by $C_t = (c_t^{i j})_{i j} \in \mathbb{R}_+^{N\times N}$, where $c_t^{i j}$ denotes the weight on the message transmitted from agent $j$ to agent $i$ at time $t$. This weight matrix depends on $\mc{G}_t$ in a way described in Assumption \ref{asm:random_matrix}. Thus, the critic step is as follows, \begin{alignat}{3} \label{eq:algo1_critic_step} &\hat{J}_{t+1}^i = (1 - \beta_{\omega, t}) \cdot \hat{J}_t^i + \beta_{\omega, t} \cdot r_{t+1}^i \hspace{2em}
&\widetilde{\omega}^i_t = \omega_t^i + \beta_{\omega, t} \cdot \delta_t^i \cdot \left. \nabla_\omega \hat{Q}_{\omega} (s_t,a_t) \right|_{\omega = \omega^i_t} \hspace{1em} &\omega_{t+1}^i = \sum_{j \in \mathcal{N}} c_t^{i j} \cdot \widetilde{\omega}^j_t \end{alignat} with \begin{equation*} \delta_t^i = r_{t+1}^i - \hat{J}_t^i + \hat{Q}_{\omega_t^i} (s_{t+1}, a_{t+1}) - \hat{Q}_{\omega_t^i}(s_t, a_t) \end{equation*} where $\hat{J}^i$ is updated with the same learning rate as $\omega^i$ \cite{Konda:2002:Actor-Critic}; $\widetilde{\omega}^i$ is an auxiliary parameter updated according to TD($0$), and $\omega^i$ is then updated through the consensus step.
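The consensus step alone can be sketched as follows (the ring topology and Metropolis weights are our illustrative choices, one standard way to obtain a doubly stochastic weight matrix): repeated averaging drives all local parameters $\omega^i$ to the network average while preserving the mean at every step:

```python
import numpy as np

# Hypothetical ring of N = 4 agents; Metropolis weights make C doubly stochastic.
N = 4
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

C = np.zeros((N, N))
for i in range(N):
    for j in neighbors[i]:
        C[i, j] = 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))
    C[i, i] = 1.0 - C[i].sum()   # self-weight completes the row to 1

rng = np.random.default_rng(2)
omega = rng.normal(size=(N, 3))  # one local critic parameter omega^i per agent
mean0 = omega.mean(axis=0)       # network average of the initial parameters

for _ in range(200):             # repeated consensus: omega^i <- sum_j c^{ij} omega^j
    omega = C @ omega
```

Double stochasticity of $C$ is what preserves the average: row sums equal to $1$ make the step an averaging operation, and column sums equal to $1$ keep $\frac{1}{N}\sum_i \omega^i$ invariant.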
\subsubsection{Actor step}
As done in \cite{FDMARL}, we derive a multi-agent policy gradient directly from the single-agent expression (\ref{eq:PG-Grosnit}). \begin{thm}[Local Deterministic Policy Gradient Theorem - On Policy] \label{thm:PG-local-multi-agent} For any $\theta \in \Theta$, $i \in \mathcal{N}$, under Assumptions \ref{asm:MDP_Regularity} and \ref{asm:Reg_P_R}, $\nabla_{\theta^i} J(\theta)$ exists and is given by \begin{equation} \label{PG-Grosnit-multi-agent}
\nabla_{\theta^i}J(\theta) = \espL{\nabla_{\theta^i} \mu^i_{\theta^i}(s) \nabla_{a^i} \left. Q_\theta(s, \mu^{-i}_{\theta^{-i}}(s), a^i)\right|_{a^i = \mu^i_{\theta^i}(s)}}{s \sim d^\theta} \end{equation} \end{thm} \begin{proof} Having $\mu_\theta (s) = \left(\mu_{\theta}^1(s), \dots, \mu_{\theta}^N(s) \right) = \left(\mu_{\theta^1}^1(s), \dots, \mu_{\theta^N}^N(s) \right)$, with $\theta = \left[\left(\theta^1\right)^\top, \dots,\left(\theta^N\right)^\top \right]^\top$, we have from the single-agent policy gradient (\ref{eq:PG-Grosnit}): \begin{equation*}
\nabla_\theta J(\mu_\theta) = \espL{\nabla_\theta \mu_\theta(s) \cdot \nablaV{a}{Q_{\mu_\theta}(s,a)}{\mu_\theta(s)}}{s \sim d^{\mu_\theta}} \end{equation*} Given that $\nabla_{\theta^i} \mu_{\theta}^j (s) = 0$ if $i \neq j$, we have $\nabla_\theta \mu_\theta(s) = \textnormal{Diag}(\nabla_{\theta^1} \mu_{\theta^1}^1 (s), \dots, \nabla_{\theta^N} \mu_{\theta^N}^N (s))$, and (\ref{PG-Grosnit-multi-agent}) directly follows. \end{proof} This shows that the gradient of the objective function with respect to the local parameter $\theta^i$ can be obtained locally, as it depends on the local gradient $\nabla_{\theta^i} \mu^i_{\theta^i}$ and on the global action-value function, which can be approximated locally by $\hat{Q}_{\omega^i}$. This property is one of the key aspects of decentralized actor-critic algorithms \cite{FDMARL, Zhang:2018:ContinuousFDMARL, Suttle:2019:MA-OFF-Policy-AC}. Thus, we have the following actor step motivated by (\ref{PG-Grosnit-multi-agent}): \begin{equation} \label{eq:algo1_actor_step}
\theta^i_{t+1} = \theta_t^i + \beta_{\theta, t} \cdot \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \cdot \left. \nabla_{a^i} \hat{Q}_{\omega^i_t} (s_t,a^{-i}_t, a^i)\right|_{a^i = a^i_t} \end{equation} Each agent $i$ updates its policy in the ascent direction of the estimated gradient $\nabla_{\theta^i} J$, with a learning rate $\beta_{\theta,t}$ smaller than $\beta_{\omega,t}$, as discussed below.
\subsubsection{Pseudocode of Decentralized Actor-Critic} Having a critic and an actor step, we give in Algorithm \ref{algo:A-C-On-policy} the pseudocode of the Decentralized Deterministic Multi-Agent Actor-Critic we designed. Parts in red refer to the use of compatible functions, as discussed in Section \ref{section:function-appr}.
\begin{algorithm}[tb]
\caption{Networked deterministic on-policy actor-critic algorithm based on action-value function}
\label{algo:A-C-On-policy} \begin{algorithmic}
\STATE \textbf{Input:} Initial values $\hat{J}_0^i, \omega^i_0, \widetilde{\omega}^i_0, \theta_0^i, \forall i \in \mathcal{N}$; $s_0$ initial state of the MDP; stepsizes $\{\beta_{\omega, t}\}_{t \geq 0}, \{\beta_{\theta, t}\}_{t \geq 0}$
\STATE Draw $a^i_0 = \mu^i_{\theta^i_0}(s_0)$ \textcolor{red}{and compute $\widetilde{a}_0^i = \nabla_{\theta^i} \mu^i_{\theta^i_0}(s_0)$}
\STATE Observe joint action $a_0 = (a^1_0, \dots, a^N_0)$ \textcolor{red}{and $\widetilde{a}_0 = \left(\widetilde{a}^1_0, \dots, \widetilde{a}^N_0\right)$}
\REPEAT
\FOR{$i \in \mathcal{N}$}
\STATE Observe $s_{t+1}$ and reward $r_{t+1}^i = r^i(s_t, a_t)$
\STATE Update $\hat{J}^i_{t+1} \leftarrow (1 - \beta_{\omega, t}) \cdot \hat{J}_t^i + \beta_{\omega, t} \cdot r_{t+1}^i$
\STATE Draw action $a^i_{t+1} = \mu^i_{\theta^i_t} (s_{t+1})$ \textcolor{red}{and compute $\widetilde{a}_{t+1}^i = \nabla_{\theta^i} \mu^i_{\theta^i_t}(s_{t+1})$}
\ENDFOR
\STATE Observe joint action $a_{t+1} = (a^1_{t+1}, \dots, a^N_{t+1})$ \textcolor{red}{and $\widetilde{a}_{t+1} = \left(\widetilde{a}^1_{t+1}, \dots, \widetilde{a}^N_{t+1}\right)$}
\FOR{$i \in \mathcal{N}$}
\STATE Update: $\delta_t^i \leftarrow r_{t+1}^i - \hat{J}_t^i + \hat{Q}_{\omega_t^i} (s_{t+1}, a_{t+1}) - \hat{Q}_{\omega_t^i}(s_t, a_t)$
\STATE \textbf{Critic step: } $\widetilde{\omega}^i_t \leftarrow \omega_t^i + \beta_{\omega, t} \cdot \delta_t^i \cdot \left. \nabla_\omega \hat{Q}_{\omega} (s_t,a_t) \right|_{\omega = \omega^i_t}$
\STATE \textbf{Actor step: } $\theta^i_{t+1} = \theta_t^i + \beta_{\theta, t} \cdot \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \left. \nabla_{a^i} \hat{Q}_{\omega^i_t} (s_t,a^{-i}_t, a^i)\right|_{a^i = a^i_t}$
\STATE Send $\widetilde{\omega}^i_t$ to the neighbors $\{j \in \mathcal{N} : (i,j) \in \mathcal{E}_t \}$ over $\mathcal{G}_t$
\STATE \textbf{Consensus step: } $\omega^i_{t+1} \leftarrow \sum_{j \in \mathcal{N}} c^{i j}_t \cdot \widetilde{\omega}^j_t$
\ENDFOR
\UNTIL{convergence} \end{algorithmic} \end{algorithm}
This algorithm can be applied in an online fashion: at each time step, each agent interacts with the environment based on its knowledge of the system and, using information received from the environment (reward) and from its neighbors (consensus step), tries to improve its estimates $\hat{J}$ and $\hat{Q}$ as well as its policy $\mu^i_{\theta^i}$. Moreover, we show in the next section that this algorithm has convergence guarantees similar to those of its stochastic counterparts \cite{FDMARL,Zhang:2018:ContinuousFDMARL, Suttle:2019:MA-OFF-Policy-AC} when using linear function approximation for $\hat{Q}_\omega$.
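The four per-agent updates of Algorithm \ref{algo:A-C-On-policy} can be sketched as plain functions. This is an illustrative sketch, not library code: we assume a linear critic $\hat{Q}_\omega(s,a) = \phi(s,a)\cdot\omega$, so that $\nabla_\omega \hat{Q}_\omega(s,a) = \phi(s,a)$, and all function names are ours.

```python
import numpy as np

# Sketch of the per-agent updates of Algorithm 1, assuming a linear critic
# Q_hat(s, a) = phi(s, a) . omega, so grad_omega Q_hat(s, a) = phi(s, a).

def td_error(r, J_hat, phi_next, phi_curr, omega):
    # delta^i_t = r^i_{t+1} - J_hat^i_t + Q_hat(s_{t+1}, a_{t+1}) - Q_hat(s_t, a_t)
    return r - J_hat + phi_next @ omega - phi_curr @ omega

def critic_step(omega, beta_w, delta, phi_curr):
    # tilde-omega^i_t = omega^i_t + beta_w * delta^i_t * grad_omega Q_hat(s_t, a_t)
    return omega + beta_w * delta * phi_curr

def actor_step(theta, beta_th, grad_theta_mu, grad_a_q):
    # theta^i_{t+1} = theta^i_t + beta_th * grad_theta mu^i * grad_a Q_hat
    return theta + beta_th * grad_theta_mu * grad_a_q

def consensus_step(tilde_omegas, C):
    # omega^i_{t+1} = sum_j c^{ij}_t * tilde-omega^j_t (agents stacked as rows)
    return C @ tilde_omegas
```

With a doubly stochastic weight matrix $C$, the consensus step preserves the average of the agents' critic parameters, which is the quantity tracked by the convergence analysis.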
\section{Convergence results}
By convergence of an actor-critic algorithm, one means that the critic parameters $\omega$ and $\hat{J}$ on the one hand, and the actor parameter $\theta$ on the other hand, converge. Most existing convergence results on actor-critic algorithms require linear function approximation or a tabular representation of the value function (meaning that the agent updates a table with $|\mc{S}|\times |\mc{A}|$ entries, which becomes intractable as state and action spaces become large or even continuous). Many convergent variants of actor-critic algorithms exist. They can differ in the critic update (based on TD($\lambda$) in \cite{Konda:2003:On-Actor-Critics}, with stronger convergence guarantees for $\lambda = 1$, while we use the TD($0$) signal), in the type of gradient considered (we use the standard policy gradient, while the natural gradient is used for the actor update in \cite{Bhatnagar:2009:NaturalActorCritic}, where a convergence proof is provided), in whether the actor step is based on data generated by the current policy (on-policy) or by following another policy (off-policy actor-critic convergence is studied in \cite{degris:2012:offPol_AC} in the tabular case and under linear function approximation in \cite{Maei:2018:Convergent_AC_OffPol}), or in the existence of a single or several cooperating agents (we study the decentralized multi-agent setting with function approximation, as in \cite{FDMARL, Zhang-Y:2019:Distrib_Off-Pol_A-C} for discrete state and action spaces and \cite{Zhang:2018:ContinuousFDMARL} for continuous ones). A common feature among these algorithms is the use of two different time-scales in the actor and critic steps.
The use of a smaller learning rate for the actor update (inducing slower convergence) stems from the fact that the critic must give an accurate enough estimate of the value function, valid in a small neighborhood of the current actor parameter, so that the direction provided by the gradient estimate is accurate as well. In what follows, we provide some insight into the two-timescale stochastic approximation machinery and state a standard result that can be adapted to study the convergence of our actor-critic algorithm.
\subsection{Two-Timescale Stochastic Approximation}
\subsubsection{Basic stochastic approximation} Stochastic approximation methods are a family of iterative methods typically used for root-finding or optimization problems in which the function of interest cannot be computed directly, but only estimated via noisy observations. They typically deal with a function $f$ of the form $f(\theta) = \espL{F(\theta,X)}{X}$ with $X$ a random variable. The goal can then be to find a zero of $f$ without evaluating it directly, using only random samples of $F(\theta,X)$. The algorithm proposed in \cite{Robbins&Monro:1951} to find the unique root of a function $f$, for which one can only get observations of the form $f(\theta)+\epsilon$ with $\epsilon$ a zero-mean noise, is of the form: \begin{equation*}
\theta_{t+1} = \theta_t - \beta_t (f(\theta_t) + \epsilon_{t+1}) \end{equation*} where $\beta_t$ is a vanishing step size. Under some assumptions on $f$, $\paa{\beta_t}_t$ and $\paa{\epsilon_t}_t$, $\theta_t$ is guaranteed to converge to the $\theta^*$ satisfying $f(\theta^*)=0$. Furthermore, it has been established in \cite{Borkar:2000:ODE-SA} that the asymptotic behaviour of such a sequence $\paa{\theta_t}_t$ is captured by the ODE \begin{equation*}
\dot{\theta} = -f(\theta(t)) \end{equation*}
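As a toy numerical illustration of the Robbins--Monro scheme (our own example, with $f(\theta)=\theta-2$, so $\theta^*=2$, and Gaussian noise):

```python
import numpy as np

# Robbins-Monro sketch: find the root of f(theta) = theta - 2 (theta* = 2)
# using only noisy observations f(theta) + eps with zero-mean noise eps.
rng = np.random.default_rng(0)
theta = 10.0
for t in range(1, 200_001):
    beta_t = 1.0 / t                        # vanishing steps: sum_t beta_t = inf
    noisy_f = (theta - 2.0) + rng.normal(scale=0.5)
    theta -= beta_t * noisy_f               # theta_{t+1} = theta_t - beta_t (f + eps)
```

With $\beta_t = 1/t$ the iterate behaves like a running average of the noisy targets, so the noise is averaged out and $\theta_t \to \theta^*$.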
\subsubsection{Two-timescale stochastic approximation theorem} Two-timescale stochastic approximation algorithms are characterized by coupled stochastic recursions driven by two step-size parameters decreasing to $0$ at different rates. The standard example of such algorithms involves two parameter sequences $\paa{x_t}_{t\geq0}$, $\paa{y_t}_{t\geq0}$ governed by \begin{align}
&x_{t+1} = x_t + \beta_{x, t}(f(x_t, y_t) + N^1_{t+1})\label{eq:x-recursion}\\
&y_{t+1} = y_t + \beta_{y, t}(g(x_t, y_t) + N^2_{t+1})\label{eq:y-recursion} \end{align} where $f$, $g$ are Lipschitz continuous functions and $\paa{N^1_t}_t$, $\paa{N^2_t}_t$ are martingale difference sequences with respect to the $\sigma$-fields $\mc{F}_t=\sigma\pa{x_n, y_n, N^1_n, N^2_n, n \leq t}$ satisfying, for each $t \geq 0$, $i=1,2$ and some constant $K<\infty$: \begin{equation*}
\esp{\palV{N^i_{t+1}}^2|\mc{F}_t} \leq K (1 + \palV{x_t}^2 + \palV{y_t}^2) \end{equation*} This limits the intensity of the noise surrounding estimations of $f$ and $g$. Also, the step-size schedule $\paa{\beta_{x,t}}$ and $\paa{\beta_{y,t}}$ satisfies \begin{equation*}
\sum_t \beta_{x,t} = \sum_t \beta_{y,t} = \infty, \qquad \sum_t \pa{\beta_{x,t}^2 + \beta_{y,t}^2} < \infty, \qquad \beta_{y,t} = o(\beta_{x,t}) \end{equation*} Thus, asymptotically, (\ref{eq:x-recursion}) has uniformly larger increments than (\ref{eq:y-recursion}), inducing `faster' convergence. Hence, as $\paa{y_t}$ is almost `static' from the perspective of $\paa{x_t}$, we consider the ODEs \begin{align}
&\dot{x} = f(x(t), y(t)) \\
&\dot{y} = 0 \label{eq:static-y} \end{align} and, as a consequence of (\ref{eq:static-y}), for a constant $y$: \begin{equation} \label{eq:ODE-fast}
\dot{x} = f(x(t),y) \end{equation} To be able to characterize the asymptotic behaviour of $\paa{x_t}$ and $\paa{y_t}$, we suppose that the following two assumptions hold. \begin{asm}\label{asm:bounded-x-y} $\underset{t}{\textnormal{ sup }} \palV{x_t}, \underset{t}{\textnormal{ sup }} \palV{y_t} < \infty$ \end{asm} \begin{asm}\label{asm:ODE-fast-asympt-glob} The ODE (\ref{eq:ODE-fast}) has a globally asymptotically stable equilibrium $\xi(y)$, where $\xi(\cdot)$ is a Lipschitz-continuous function. \end{asm} Under these assumptions, we can expect the behaviour of $\paa{y_t}$ to be captured by the following ODE, obtained by replacing $x_t$ with $\xi(y_t)$ in (\ref{eq:y-recursion}), since $x_t$ converges on the faster time-scale: \begin{equation}\label{eq:ODE-slow}
\dot{y} = g(\xi(y(t)),y(t)) \end{equation} We make an additional assumption before stating the main result \cite{Borkar:1997:SA-two-timescale}. \begin{asm}\label{asm:ODE-slow-asympt-glob} The ODE (\ref{eq:ODE-slow}) has a globally asymptotically stable equilibrium $y^*$. \end{asm} \begin{thm} Under Assumptions \ref{asm:bounded-x-y}, \ref{asm:ODE-fast-asympt-glob}, \ref{asm:ODE-slow-asympt-glob}, $\limit{t\to\infty} (x_t,y_t) = (\xi(y^*),y^*)$ with probability one. \end{thm}
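A numerical sketch of this theorem, with our own toy choice $f(x,y)=y-x$ (so the fast equilibrium is $\xi(y)=y$) and $g(x,y)=1-x$ (so the slow ODE $\dot y = g(\xi(y),y) = 1-y$ has $y^*=1$):

```python
import numpy as np

# Two-timescale sketch: the fast iterate x tracks xi(y) = y, the root of
# f(x, y) = y - x, while the slow iterate y follows g(xi(y), y) = 1 - y,
# so (x_t, y_t) -> (xi(y*), y*) = (1, 1).
rng = np.random.default_rng(1)
x, y = 5.0, -3.0
for t in range(1, 500_001):
    beta_x = t ** -0.6                  # fast step size
    beta_y = t ** -0.9                  # slow step size: beta_y = o(beta_x)
    x += beta_x * ((y - x) + rng.normal(scale=0.1))
    y += beta_y * ((1.0 - x) + rng.normal(scale=0.1))
```

Both step-size sequences sum to infinity and have summable squares, and $\beta_{y,t} = o(\beta_{x,t})$, matching the conditions above.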
\subsection{Critic convergence}
We can now apply the two-timescale technique to our actor-critic algorithm, for which the actor step, updating the deterministic policy parameter $\theta^i$, occurs at a slower pace than the update of $\omega^i$ and $\hat{J}^i$ during the critic step. Thus, we first focus on the ODE capturing the asymptotic behaviour of the critic parameters by freezing the joint policy $\mu_\theta$ (\ref{eq:ODE-fast}), and then we study the behaviour of $\theta_t$ upon convergence of the critic parameters (\ref{eq:ODE-slow}). But before that, we state some standard assumptions, taken or adapted from \cite{FDMARL}, which are needed for the convergence proof.
We use $\{\mathcal{F}_t\}$ to denote the filtration with $\mathcal{F}_t = \sigma(s_\tau, C_{\tau-1}, a_{\tau-1}, r_{\tau-1}, \tau \leq t)$, and $J(\theta)$ (resp. $d^\theta, Q_\theta$) stands for $J_{\mu_\theta}$ (resp. $d^{\mu_\theta}, Q_{\mu_\theta}$).
As it is necessary to ensure the stability of the policy parameter updates (Assumption \ref{asm:bounded-x-y} in the two-timescale theorem), projection is often used since, as discussed in \cite{Bhatnagar:2009:NaturalActorCritic} (p.~23), it is not clear how the boundedness of $\paa{\theta_t^i}$ can otherwise be ensured. However, this projection step is often not applied in practice \cite{Bhatnagar:2009:NaturalActorCritic, degris:2012:offPol_AC, Prabuchandran:2016:A-C_Online_Feature_Adapt, FDMARL, Suttle:2019:MA-OFF-Policy-AC} without hampering empirical convergence, so the boundedness of the actor parameter is sometimes \cite{Zhang-Y:2019:Distrib_Off-Pol_A-C} directly assumed without resorting to projection.
\begin{asm} \label{Bound_theta} The update of the policy parameter $\theta^i$ includes a local projection by $\Gamma^i:\mathbb{R}^{m_i} \to \Theta^i \subset \mathbb{R}^{m_i}$ that projects any $\theta^i_t$ onto a compact set $\Theta^i$. We also assume that $\Theta = \prod_{i=1}^N \Theta^i$ is large enough to include at least one local minimum of $J(\theta)$. \end{asm}
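A simple instance of such a local projection is the box $\Theta^i = [-B, B]^{m_i}$ (an illustrative choice; any compact set of this product form works), for which $\Gamma^i$ reduces to coordinate-wise clipping:

```python
import numpy as np

B = 5.0  # illustrative bound defining the compact box Theta^i = [-B, B]^{m_i}

def gamma(theta):
    # Gamma^i: Euclidean projection onto the box, i.e. coordinate-wise clipping
    return np.clip(theta, -B, B)

def projected_actor_step(theta, beta, grad):
    # theta^i_{t+1} = Gamma^i(theta^i_t + beta * gradient estimate)
    return gamma(theta + beta * grad)
```

An update that stays inside the box is unchanged; one that would leave it is clipped back onto the boundary, keeping $\paa{\theta^i_t}$ bounded by construction.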
The following assumption on value function approximation is adapted from \cite{FDMARL} for continuous action spaces. \begin{asm} \label{asm:Linear_appr} For each agent $i$, the action-value function is parametrized by the class of linear functions, i.e., $\hat{Q}_{\omega^i}(s,a) = \phi(s,a) \cdot \omega^i$ where $\phi(s,a) = \big[\phi_1(s,a), \dots, {\phi_K(s,a) \big]} \in \mathbb{R}^K$ is the feature associated with the state-action pair $(s, a)\in\mc{S}\times\mc{A}$. The feature vectors $\phi(s,a)$, as well as $\nabla_a \phi_k (s,a)$, are uniformly bounded for any $s \in \mathcal{S}$, $a \in \mathcal{A}, k \in \{1, \dots, K\}$. Furthermore, we assume that for any $\theta \in \Theta$, the feature matrix $\Phi_\theta \in \mathbb{R}^{\vert \mathcal{S} \vert \times K}$ has full column rank, where the $k$-th column of $\Phi_\theta$ is $\big[\phi_k(s, \mu_\theta(s)), s \in \mathcal{S}\big]$ for any $k \in \llbracket1, K\rrbracket$. Also, for any $u \in \mathbb{R}^K$, $\Phi_\theta u \neq \textbf{1}$. \end{asm} Linear approximation is widely used in actor-critic convergence analysis, as TD-learning-based policy evaluation with nonlinear function approximation, such as TD($0$), may fail to converge \cite{Tsitsiklis:1997:TD-learning}.
The following assumption on the matrix $\paa{C_t}$ for the consensus updates is the same as Assumption $4.3.$ in \cite{FDMARL} and is classical in consensus optimization literature \cite{bianchi:2013:CV-multi-non-convex}. \begin{asm} \label{asm:random_matrix} The sequence of non-negative random matrices $\{C_t = (c_t^{i j})_{i j}\}$ satisfies: \begin{enumerate}
\item $C_t$ is row stochastic and $\mathbb{E}(C_t|\mathcal{F}_t)$ is almost surely (a.s.) column stochastic for each $t$, i.e., $C_t \textbf{1} = \textbf{1}$ and $\textbf{1}^\top\mathbb{E}(C_t|\mathcal{F}_t) = \textbf{1}^\top$ a.s. Furthermore, there exists a constant $\eta \in (0, 1)$ such that, for any $c_t^{i j} > 0$, we have $c_t^{i j} \geq \eta$.
\item $C_t$ respects the communication graph $\mathcal{G}_t$, i.e., $c_t^{i j} = 0$ if $(i, j) \notin \mathcal{E}_t$. \label{asm:connected-graph}
\item The spectral norm of $\mathbb{E}\big[C_t^\top \cdot (I - \textbf{1} \textbf{1}^\top / N) \cdot C_t \big]$ is smaller than one.
\item Given the $\sigma$-algebra generated by the random variables before time $t$, $C_t$ is conditionally independent of $s_t, a_t$ and $r^i_{t+1}$ for any $i \in \mathcal{N}$. \end{enumerate} \end{asm} There can be several sources of randomness for $C_t$, such as random link failures in a communication network or the intrinsic randomness of the underlying time-varying graph $\mc{G}_t$ (point \ref{asm:connected-graph} ensures that $C_t$ is consistent with the communication graph $\mc{G}_t$). A standard choice for $C_t$ in a decentralized context is based on Metropolis weights \cite{Xiao:2005:metropolis}: \begin{equation}
c^{i j}_t = \pab{1 + \textnormal{max}(d_t(i), d_t(j))}^{-1}, \forall (i,j) \in \mc{E}_t \qquad c_t^{i i} = 1 - \sum_{j \in \mc{N}_t(i)} c_t^{i j}, \forall i \in \mc{N} \end{equation}
where $\mc{N}_t(i)$ denotes the set of neighbors of agent $i$ at time $t$, and $d_t(i) = |\mc{N}_t(i)|$ is the degree of agent $i$.
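The Metropolis construction can be sketched as follows for an undirected graph given by its edge list (the function name is ours):

```python
import numpy as np

def metropolis_weights(edges, N):
    """Consensus matrix C from the Metropolis rule on an undirected graph.

    c_ij = 1 / (1 + max(d_i, d_j)) for every edge (i, j), and each diagonal
    entry absorbs the remaining mass so that every row sums to one.
    """
    C = np.zeros((N, N))
    deg = np.zeros(N, dtype=int)
    for i, j in edges:                 # first pass: node degrees
        deg[i] += 1
        deg[j] += 1
    for i, j in edges:                 # second pass: off-diagonal weights
        w = 1.0 / (1 + max(deg[i], deg[j]))
        C[i, j] = C[j, i] = w
    for i in range(N):                 # diagonal: remaining mass per row
        C[i, i] = 1.0 - C[i].sum()
    return C
```

The resulting matrix is symmetric, hence row and column stochastic, which matches the requirements of the assumption above.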
The following assumption on the actor and critic step sizes is standard in two-timescale stochastic approximation analysis. \begin{asm} \label{asm:step-sizes} The stepsizes $\beta_{\omega, t}, \beta_{\theta, t}$ satisfy: \begin{align*}
&\sum_t \beta_{\omega, t} = \sum_t \beta_{\theta, t} = \infty \\
&\sum_t (\beta_{\omega, t}^2 + \beta_{\theta, t}^2) < \infty \end{align*} In addition, $\beta_{\theta, t} = o(\beta_{\omega, t})$ and $\textnormal{lim}_{t \to \infty} \beta_{\omega, t+1}/\beta_{\omega, t} = 1$. \end{asm} \bigbreak
To state our convergence result regarding the critic parameters, we define $D^s_\theta = \textnormal{Diag}\big[d^\theta(s), s \in \mathcal{S}\big]$, $\bar{R}_\theta = \big[\bar{R}(s, \mu_\theta(s)), s \in \mathcal{S}\big]^\top \in \mathbb{R}^{\vert \mathcal{S} \vert}$. Also, we define the operator $T^Q_\theta : \mathbb{R}^{\vert \mathcal{S} \vert} \to \mathbb{R}^{\vert \mathcal{S} \vert}$, for any action-value vector $Q^\prime \in \mathbb{R}^{\vert \mathcal{S} \vert}$ (and not $\mathbb{R}^{\vert \mathcal{S} \vert \cdot \vert \mathcal{A} \vert}$ as in \cite{FDMARL}, since the deterministic policy maps each state to a single action), as: \begin{equation} T_\theta^Q(Q^\prime) = \bar{R}_\theta - J(\theta) \cdot \textbf{1} + P^\theta Q^\prime \end{equation} Note that, owing to the Poisson equation (\ref{eq:poisson-multi}), we have $T^Q_\theta(Q_\theta) = Q_\theta$ for any $\theta \in \Theta$. Our critic step (\ref{eq:algo1_critic_step}) is thus a fixed-point-like iterate, in a sense made precise by the following theorem, which establishes the convergence of the critic step for a fixed policy parameter $\theta$ (the parameter converging on the slower time-scale).
\begin{thm}\label{thm:on-policy-AC-critic-cv} Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R} and \ref{Bound_theta}-\ref{asm:step-sizes}, for any given deterministic policy $\mu_\theta$, with $\{\hat{J}_t\}$ and $\{\omega_t\}$ generated from (\ref{eq:algo1_critic_step}), we have $\textnormal{ lim }_{t \to \infty} \frac{1}{N}\sum_{i \in \mathcal{N}} \hat{J}_t^i = J(\theta)$ and $\textnormal{lim}_{t \to \infty} \omega_t^i = \omega_\theta$ a.s. for any $i \in \mathcal{N}$, where $$J(\theta) = \sum_{s \in \mathcal{S}} d^\theta(s) \bar{R}(s, \mu_\theta(s))$$ is the long-run average return under $\mu_\theta$, and $\omega_\theta$ is the unique solution to \begin{equation} \label{omega_expr} {\Phi_\theta}^\top D_\theta^s \big[T_\theta^Q(\Phi_\theta \omega_\theta) - \Phi_\theta \omega_\theta \big] = 0 \end{equation}
$\omega_\theta$ is also the minimizer of the Mean Squared Projected Bellman Error (MSPBE), i.e., the solution to $$\underset{\omega}{\textnormal{minimize }} \lVert \Phi_\theta \omega - \Pi T_\theta^Q(\Phi_\theta \omega) \rVert^2_{D_\theta^s},$$ where $\Pi$ is the operator that projects a vector onto the space spanned by the columns of $\Phi_\theta$, and $\lVert \cdot \rVert^2_{D_\theta^s}$ denotes the Euclidean norm weighted by the matrix $D_\theta^s$. \end{thm}
Owing to the consensus step, each agent, accessing only its local reward signal and some information from random neighbors, asymptotically obtains a copy of the best approximation of the global value function available with the given features $\phi$, in the sense of MSPBE minimization. The policy gradient for each agent involves this approximation of the global value function.
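Equation (\ref{omega_expr}) is a $K \times K$ linear system in $\omega_\theta$. A small numerical check on a two-state chain (all numbers illustrative, $K = 1$ feature):

```python
import numpy as np

# Solving (omega_expr) on a toy two-state chain: for a fixed mu_theta,
# Phi^T D [Rbar - J 1 + (P - I) Phi w] = 0 is linear in w.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # transition matrix P^theta
Rbar = np.array([1.0, 0.0])           # Rbar(s, mu_theta(s))
evals, evecs = np.linalg.eig(P.T)     # stationary distribution d^theta:
d = np.real(evecs[:, np.argmax(np.real(evals))])
d /= d.sum()                          # left Perron eigenvector, normalized
J = d @ Rbar                          # long-run average return J(theta)
Phi = np.array([[1.0], [0.0]])        # feature matrix, K = 1
D = np.diag(d)
A = Phi.T @ D @ (P - np.eye(2)) @ Phi
b = -Phi.T @ D @ (Rbar - J)
omega = np.linalg.solve(A, b)         # the unique omega_theta
residual = Phi.T @ D @ (Rbar - J + P @ Phi @ omega - Phi @ omega)
```

Here $d = (2/3, 1/3)$, $J = 2/3$, and the solved $\omega_\theta = 10/3$ makes the projected Bellman residual vanish, as the theorem states.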
To state our convergence result for the actor step, we define the quantities $\psi_{t, \theta}^i$, $\psi_t^i$, $\xi_{t, \theta}^i$ and $\xi_t^i$ as
\begin{align*} &\psi_{t, \theta}^i = \nabla_{\theta^i} \mu_{\theta^i}^i(s_t) \\ &\psi_{t}^i = \psi_{t, \theta_t}^i = \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \\ & \xi_{t, \theta}^i = \nablaV{a_i}{\hat{Q}_{\omega_\theta} (s_t, a_t^{-i}, a_i)}{a_i = \mu^i_{\theta^i}(s_t)} = \nablaV{a_i}{\phi(s_t, a_t^{-i}, a_i)}{a_i = \mu^i_{\theta^i}(s_t)} \omega_\theta \\
& \xi_t^i = \left. \nabla_{a_i} \hat{Q}_{\omega^i_t} (s_t, a_t^{-i}, a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} = \left. \nabla_{a_i} \phi(s_t, a_t^{-i}, a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} \omega_t^i \end{align*} and we denote by $\psi_{t,\theta}, \psi_t,\xi_{t,\theta}$ and $\xi_t$ their joint counterparts. As we use projection in the actor step, we introduce the operator $\hat{\Gamma}(\cdot)$ as \begin{equation} \label{eq_projection_limit}
\hat{\Gamma}\pab{g(\theta)} = \underset{0<\eta \to 0}{\textnormal{lim}} \frac{\Gamma\pab{\theta + \eta \cdot g(\theta)} - \theta}{\eta} \end{equation} for any $\theta \in \Theta$ and any continuous function $g:\Theta \to \mathbb{R}^{m}$. In case the limit above is not unique, we take $\hat{\Gamma}\pab{g(\theta)}$ to be the set of all possible limit points of (\ref{eq_projection_limit}) (see p.~191 of \cite{Kushner_Clark}). We consider the following ODE associated to the projected actor step (\ref{eq:algo1_actor_step}): \begin{equation} \label{eq:ODE-actor-projection}
\dot{\theta} = \hat{\Gamma}\pab{h(\theta)} \end{equation} where $h(\theta) = \espL{\psi_{t,\theta} \cdot \xi_{t, \theta}}{s_t \sim d^\theta, \mu_\theta}$. Finally, let \begin{equation}
\mc{K} = \paa{\theta \in \Theta| \hat{\Gamma}(h(\theta)) = 0} \end{equation} denote the set of all fixed points of (\ref{eq:ODE-actor-projection}). We can now present our convergence theorem for the joint policy sequence $\paa{\theta_t}$, based on Theorem 5.3.1, pp.~191--196 of Kushner and Clark \cite{Kushner_Clark}, which we restate in Appendix \ref{section:kc}.
\begin{thm} \label{thm:on-policy-AC-actor-cv}
Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R} and \ref{Bound_theta}-\ref{asm:step-sizes}, the joint policy parameter sequence $\paa{\theta_t}$ obtained from (\ref{eq:algo1_actor_step}) is such that,
\begin{equation}
\theta_t \to \mc{K} \textnormal{ as } t \to \infty \textnormal{ almost surely}
\end{equation}
\end{thm}
The analysis of such a stochastic problem is provided in \cite{FDMARL} for a decentralized stochastic multi-agent actor-critic, and for the critic step, our deterministic policy setting can be handled as a special case of the stochastic one.
As for the actor step analysis, difficulties arise from the projection on the one hand (handled using the Kushner-Clark lemma \cite{Kushner_Clark} given in Appendix \ref{section:kc}), and from the presence of a state-dependent noise on the other (which vanishes due to ``natural'' timescale averaging \cite{BorkarStochasticApprDynamicalSysViewpoint} (Theorem 7, Corollary 8 and Theorem 9, pp.~74--75)). A detailed proof is provided in Appendix \ref{section:cv-on-pol-proof}.
\section{Limits and possible extensions} In this section, we discuss the results presented in the previous section in the light of related works.
\subsection{Function approximation features} \label{section:function-appr}
Let us first comment on the convergence of the actor parameters. As in \cite{FDMARL}, it can be noted that with arbitrary linear function approximators for $Q_\theta$, $\psi_{t, \theta} \cdot \xi_{t,\theta} = \nabla_\theta \mu_\theta(s_t) \left. \nabla_{a} \hat{Q}_{\omega_\theta}(s_t,a)\right|_{a = \mu_\theta(s_t)}$ may not be an unbiased estimate of $\nabla_{\theta} J(\theta)$: \begin{equation*}
\mathbb{E}_{s \sim d^\theta}\big[\psi_{t, \theta} \cdot \xi_{t, \theta} \big] = \nabla_\theta J(\theta) + \espL{\nabla_\theta \mu_\theta(s) \cdot \pa{\nablaV{a}{\hat{Q}_{\omega_\theta}(s,a)}{\mu_\theta(s)} - \nablaV{a}{Q_{\theta}(s,a)}{\mu_\theta(s)}}}{s \sim d^\theta} \end{equation*} Hence, the distance between a convergence point of the sequence $\paa{\theta_t}$ (generally a zero of $\espL{\psi_{t,\theta}\cdot \xi_{t,\theta}}{s\sim d^\theta}$) and a zero of $\nabla_\theta J(\mu_\theta)$ depends on the quality of the function approximation. Besides, even when converging directly to a zero of $\nabla_\theta J(\mu_\theta)$, there is in principle no guarantee that this zero is a stable equilibrium of the objective function, i.e. a local minimum of $J(\mu_\theta)$. Nevertheless, under additional assumptions on the noise (some of them presented in Chapter 4 of \cite{BorkarStochasticApprDynamicalSysViewpoint}), one can ensure non-convergence to unstable points; in practice, the intrinsic noise induced by the simulation routine often prevents such convergence to unstable points, without any additional noise conditions.
A traditional way to overcome this approximation issue consists in using ``compatible'' feature vectors $\phi$ for $Q_\omega$. Such features were proposed in \cite{Sutton:2000:Policy_grad_meth_RL_fct_appr} and adopted in \cite{Bhatnagar:2009:NaturalActorCritic} in the context of the stochastic policy gradient. They ensure that replacing the true $Q_\theta$ by a linear approximation $Q_\omega$ does not affect the gradient estimation if $\omega$ is well chosen, i.e. for this parameter $\omega^*$ we have \begin{equation*}
\nabla_\theta J(\pi_\theta) = \espL{\nabla_\theta \log(\pi_\theta(s,a)) Q_\theta(s,a)}{s \sim d^\theta, a \sim \pi_\theta} = \espL{\nabla_\theta \log(\pi_\theta(s,a)) Q_{\omega^*}(s,a)}{s \sim d^\theta, a \sim \pi_\theta} \end{equation*} Moreover, $\omega^*$ is such that \begin{equation*}
\omega^* = \underset{\omega}{\textnormal{argmin}}\ \textnormal{MSE}(\theta, \omega) = \underset{\omega}{\textnormal{argmin}}\ \espL{|Q_\theta(s,a) - Q_\omega(s,a)|^2}{s \sim d^\theta, a \sim \pi_\theta} \end{equation*} So with a good critic step, there is hope that $\omega_t$ converges to an $\tilde{\omega}$ lying in a small neighborhood of $\omega^*$, and thus that $Q_{\tilde{\omega}}$ induces a good estimate of the policy gradient.
In \cite{Silver:2014:DeterministicPolicyGradient}, compatible features are extended to the case of the deterministic policy gradient. They are given by $\phi(s,a) = a \cdot \nabla_\theta \mu_\theta(s)^\top$. Then, for $\omega^*$ such that \begin{equation} \label{eq:compatible-deterministic}
\omega^* = \underset{\omega}{\textnormal{argmin}}\ \textnormal{MSE}(\theta, \omega) = \underset{\omega}{\textnormal{argmin}}\ \espL{\palV{\nablaV{a}{Q_\theta(s,a)}{\mu_\theta(s)} - \nablaV{a}{Q_\omega(s,a)}{\mu_\theta(s)}}^2}{s \sim d^\theta} \end{equation} we have $\nabla_\theta J(\theta) = \espL{\nabla_\theta \mu_\theta(s) \cdot \nablaV{a}{Q_\theta(s,a)}{\mu_\theta(s)}}{s \sim d^\theta} = \espL{\nabla_\theta \mu_\theta(s) \cdot \nablaV{a}{Q_{\omega^*}(s,a)}{\mu_\theta(s)}}{s \sim d^\theta}$. However, contrary to the stochastic case, Theorem \ref{thm:on-policy-AC-critic-cv} on critic convergence controls the distance between $Q_{\omega_\theta}$ and $Q_\theta$, whereas condition (\ref{eq:compatible-deterministic}) involves the distance between the gradients of these quantities. As formulated in \cite{Silver:2014:DeterministicPolicyGradient}, one can only hope that, since the critic finds a solution $Q_\omega(s,a) \approx Q_\theta(s,a)$, this solution will also satisfy (for smooth function approximators) $\nablaV{a}{Q_\theta(s,a)}{\mu_\theta(s)} \approx \nablaV{a}{Q_\omega(s,a)}{\mu_\theta(s)}$. So even in a deterministic policy setting, we could consider using compatible features $\phi(s,a)$, giving, for $\omega \in \mathbb{R}^m$, \begin{align*}
&\hat{Q}_{\omega}(s,a) = a \cdot \nabla_\theta \mu_\theta(s)^\top \omega = (a - \mu_\theta(s)) \cdot \nabla_\theta \mu_\theta(s)^\top \omega + \hat{V}_{\omega}(s), \\
&\qquad\text{with } \hat{V}_{\omega}(s) = \hat{Q}_{\omega}(s,\mu_\theta(s))\\
&\left. \nabla_{a} \hat{Q}_{\omega}(s,a) \right|_{a = \mu_{\theta}(s)} = \nabla_\theta \mu_\theta(s)^\top \omega \end{align*} Using compatible features, it can then be hoped that the convergence point of Theorem \ref{thm:on-policy-AC-actor-cv} lies in a small neighborhood of a local optimum of $J(\theta)$, provided that the error on the gradient of the action-value function, $\nablaV{a}{\hat{Q}_{\omega}(s,a)}{\mu_{\theta}(s)} - \nablaV{a}{Q_\theta(s,a)}{\mu_\theta(s)}$, is small.
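For a toy scalar policy $\mu_\theta(s) = \theta s$ (our illustrative choice, so $\nabla_\theta \mu_\theta(s) = s$), the compatible critic is linear in $a$ and its action gradient is exact and independent of $a$:

```python
# Compatible features in the deterministic case, for a toy scalar policy
# mu_theta(s) = theta * s, whose gradient w.r.t. theta is s.
def q_hat(s, a, omega):
    phi = a * s                        # phi(s, a) = a * grad_theta mu_theta(s)
    return phi * omega                 # Q_hat(s, a) = phi(s, a) * omega

def grad_a_q_hat(s, omega):
    # d/da Q_hat(s, a) = grad_theta mu_theta(s) * omega, independent of a,
    # which is exactly the quantity used in the actor step
    return s * omega
```

A finite-difference check confirms the analytic action gradient, since $\hat{Q}_\omega$ is linear in $a$.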
Nevertheless, using compatible features requires being able to compute, at each step $t$, $\phi(s_t, a_t) = a_t \cdot \nabla_\theta \mu_\theta(s_t)^\top$. To do so, each agent must observe not only the joint action $a_{t+1} = (a_{t+1}^1, \dots, a_{t+1}^N)$ but also the joint policy gradient $(\nabla_{\theta^1} \mu^1_{\theta^1_t}(s_{t+1}), \dots, \nabla_{\theta^N} \mu^N_{\theta^N_t}(s_{t+1}))$; these additional steps appear in red in Algorithm \ref{algo:A-C-On-policy}. Broadcasting this information would be detrimental to the algorithm's complexity (each agent would have to store a copy of the gradient of $\mu$) and would weaken agents' confidentiality, which goes against the principle of a decentralized setting with a communication network. Note that, despite the use of a decentralized deterministic multi-agent setting, this problem does not occur in \cite{Zhang-Y:2019:Distrib_Off-Pol_A-C}, as the consensus step there is performed in the actor step (agents share with their neighbors their local estimates of the global optimal policy) and not in the critic step (each agent has a local approximation of its own local value function associated with its own task). The main drawback is that there is no confidentiality regarding agents' policies, which is not allowed in our setting. Besides, we note that recent works building on \cite{Silver:2014:DeterministicPolicyGradient}, which use the deterministic policy gradient with deep neural networks as value function approximators (DDPG), achieve empirical convergence without compatible features \cite{Lillicrap:2015:DDPG, lowe:2017:multi_AC_Mixed,Fujimoto:2018:TD3}. This motivates us not to use compatible features, as they are, to some extent, incompatible with our requirements.
\subsection{Exploration}
\subsubsection{Deterministic policy}
The advantage of using deterministic policies compared to stochastic ones is that the quantities estimated in the critic step do not involve the estimation of a problematic integral over the action space. Empirically, this can induce a lower variance of the critic estimates and faster convergence. On the other hand, deterministic policy gradient methods are exposed to a major issue: the lack of exploration. Indeed, in reinforcement learning there is a trade-off between exploration and exploitation: an agent may spend too much time exploring the environment looking for better rewards and never converge or get trapped in a local minimum; or it may stop exploring its environment, always selecting at each step the action assumed to induce the best reward, even though there may be an untried action that would yield even better immediate or future rewards.
Q-learning \cite{Sutton:barto1998} is a popular RL technique based solely on the estimation of the $Q$-function. When applying this algorithm, there is a simple exploration strategy called $\epsilon$-greedy: at each step, the agent selects, with probability $1-\epsilon$, the action maximizing the current estimate of the $Q$-function for the current state, and with probability $\epsilon$ a uniformly sampled action. Usually the parameter $\epsilon$ is large at the beginning of training and slowly decreases until it reaches a fixed minimal value. Most exploration techniques rely on the same principle: allowing the agent to explore at the beginning of the process and then exploit its knowledge of the environment by selecting a promising action. This can be done naturally using a stochastic policy with a controllable variance parameter that decreases as learning proceeds. This is not possible with our algorithm, which may thus depend strongly on its parameter initialization or be more prone to converging to local minima. Nonetheless, the agent's policy is not the only source of exploration: the environment's intrinsic stochasticity (depending on its state transition kernel $P$) can ease the exploration of a deterministic agent, as the same action can lead it to different states. Thus, for noisy environments, our algorithm may show better performance than its stochastic counterparts by providing better estimates of the value function.
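The $\epsilon$-greedy rule with a decaying schedule can be sketched as follows (all schedule values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def epsilon_greedy(q_row, epsilon):
    # With probability 1 - epsilon: greedy action argmax_a Q(s, a);
    # with probability epsilon: a uniformly sampled action.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

def decayed_epsilon(t, eps_start=1.0, eps_min=0.05, decay=0.999):
    # Starts large, decreases geometrically, then sticks to a fixed floor.
    return max(eps_min, eps_start * decay ** t)
```

Early in training the agent acts almost uniformly at random; once $\epsilon$ has decayed to its floor, it is greedy most of the time while retaining a small residual exploration rate.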
\subsubsection{Off-policy} \label{section:off-pol}
Another approach to the exploration problem is off-policy learning. It refers to learning about one way of behaving, called the target policy, from data generated by another way of selecting actions, called the behavior policy. The behavior policy can be totally unrelated to the target policy, which is typically the case when an agent trains on data generated by unrelated controllers, including manual human control, or on previously collected data. Alternatively, the behavior policy can be a stochastic version of a (generally deterministic) target policy, in order to allow exploration (in the case of $Q$-learning with $\epsilon$-greedy exploration, the behavior policy is a noisy version of the deterministic target policy). An off-policy version of the deterministic policy gradient is presented in \cite{Silver:2014:DeterministicPolicyGradient} for a stochastic behavior policy $\beta$ and an independent deterministic target policy $\mu_\theta$ parametrized by $\theta$. Following the stochastic off-policy actor-critic proposed in \cite{degris:2012:offPol_AC}, the objective function $J_\beta(\mu_\theta)$ is defined, in the \textit{discounted} version, as
\begin{equation} \label{eq:objective-off-pol}
J_\beta(\mu_\theta) = \espL{Q_\theta(s,\mu_\theta)}{s \sim d^\beta}
\end{equation} which boils down to considering the value associated with policy $\mu_\theta$ when the state distribution is given by $d^\beta$ rather than $d^\theta$ (hence the name excursion setting \cite{Sutton:2016:EmphaticAP}). Our own version of an undiscounted deterministic off-policy multi-agent actor-critic following the excursion setting is presented only in the Appendix, as we believe it has very limited applications.
As it happens, the excursion setting suffers from two major problems that recent works endeavored to tackle: \begin{itemize}
\item Even when an optimal parameter $\theta^*$ for the objective function (\ref{eq:objective-off-pol}) is found, without additional assumptions on the behavior policy $\beta$ there is no guarantee that deploying $\mu_{\theta^*}$ would perform well. Indeed, when following this target policy, the stationary distribution over states $d^{\theta^*}$ may give more weight to states on which $\mu_{\theta^*}$ performs poorly.
\item It may be difficult to learn good estimates of the value functions associated with $\pi_\theta$ when only data generated by $\beta$ are available. Simple temporal-difference learning TD($\lambda$) is known to converge with linear function approximators only in the on-policy setting; off-policy learning can cause the parameters of the function approximator to diverge when trained with TD methods (e.g. the $\theta \to 2\theta$ configuration in \cite{Tsitsiklis:1997:TD-learning}). \end{itemize}
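The divergence mentioned in the second item can be reproduced in a few lines. The sketch below runs the expected off-policy TD(0) updates on the two-state $\theta \to 2\theta$ counterexample of \cite{Tsitsiklis:1997:TD-learning} (features $\phi(s_1)=1$, $\phi(s_2)=2$, zero rewards; only the $s_1 \to s_2$ transition is updated, an off-policy state weighting); the parameter grows geometrically.

```python
# Expected off-policy TD(0) updates on the theta -> 2*theta example:
# two states with features phi(s1) = 1, phi(s2) = 2, transition s1 -> s2,
# zero reward, discount gamma. Updating only the s1 transition corresponds
# to a state distribution that differs from the chain's stationary one.
gamma, alpha = 0.99, 0.1
theta = 1.0
history = [theta]
for _ in range(100):
    td_error = 0.0 + gamma * 2.0 * theta - 1.0 * theta  # r + gamma*V(s2) - V(s1)
    theta += alpha * td_error * 1.0                     # update along phi(s1) = 1
    history.append(theta)
# each step multiplies theta by 1 + alpha*(2*gamma - 1) > 1: divergence
```

Under the on-policy distribution (which concentrates on $s_2$), the same update rule would remain stable.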
The first issue has notably been handled empirically in \cite{Silver:2014:DeterministicPolicyGradient, Lillicrap:2015:DDPG, Fujimoto:2018:TD3} by taking for $\beta$ an old version of $\pi_\theta$ (experience replay in Deep Deterministic Policy Gradient). Doing so, $\beta$ is not strictly unrelated to $\pi_\theta$, but it is hoped (and the successful experiments tend to strengthen this hope) that the deterministic policy gradient given for a stationary and independent behavior policy $\beta$ \cite{Silver:2014:DeterministicPolicyGradient} \begin{equation}
\nabla_\theta J_\beta(\mu_\theta) = \espL{\nabla_\theta \mu_\theta(s)\cdot \nablaV{a}{Q_\theta(s,a)}{\mu_\theta(s)}}{s\sim d^\beta} \end{equation} will still hold when these constraints on $\beta$ are weakened.
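For concreteness, here is a minimal numeric sketch of this chain rule on a one-dimensional toy problem; $\mu_\theta$ and $Q$ are hypothetical choices, not our parametrization.

```python
import numpy as np

# Toy 1-D instance of the chain rule: mu_theta(s) = theta*s and a
# hypothetical critic Q(s, a) = -(a - 2s)^2, maximized at a = 2s.
def dpg_estimate(theta, states):
    grad_mu = states                          # d mu_theta(s) / d theta = s
    actions = theta * states                  # a = mu_theta(s)
    dq_da = -2.0 * (actions - 2.0 * states)   # grad_a Q(s, a) at a = mu_theta(s)
    return np.mean(grad_mu * dq_da)           # average over behavior states

states = np.array([0.5, 1.0, 1.5, 2.0])       # states sampled under some beta
theta = 1.0
for _ in range(200):                          # plain gradient ascent on J_beta
    theta += 0.05 * dpg_estimate(theta, states)
# theta converges to 2, which maximizes Q(s, mu_theta(s)) in every state
```

The estimate never requires sampling actions: only the critic's action-gradient at $a = \mu_\theta(s)$ is needed.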
On the theoretical side, a one-step importance sampling factor $\rho_t = \pi_\theta(a_t|s_t)/\beta(a_t|s_t)$ has been added to improve the stochastic off-policy policy gradient estimate \cite{degris:2012:offPol_AC}; it corrects the bias induced, for a given state $s$, by the difference between the action distributions $\pi(\cdot|s)$ and $\beta(\cdot|s)$. This technique, though, does not cope with the state-distribution discrepancy, sometimes called the curse of horizon. To bridge the gap between the behavior and target limiting state distributions, a technique to estimate the state-distribution ratio $w(s) = d^{\pi_\theta}(s)/d^{\beta}(s)$ has been proposed in \cite{Liu:2018:infinite-horizon-off-pol}, and a converging actor-critic algorithm that uses this state-distribution correction together with the one-step importance sampling, in both the discounted and undiscounted cases, has been presented in \cite{Liu:2019:Off_Pol_Distribution_correction}. This way, the excursion estimation is bypassed by using a counterfactual objective function adapted to the target policy, \begin{equation}
J_\beta(\theta) = \espL{w(s)\frac{\pi_\theta(a|s)}{\beta(a|s)}Q_\theta(s,a)}{s \sim d^\beta, a \sim \beta} = \espL{Q_\theta(s,a)}{s \sim d^{\pi_\theta}, a \sim \pi_\theta} \end{equation} Thus, the Off-policy Policy Optimization with State Distribution correction (OPPOSD) algorithm proposed in \cite{Liu:2019:Off_Pol_Distribution_correction} outperforms the Off-policy Actor-Critic (Off-PAC) of \cite{degris:2012:offPol_AC} when trained off-policy and evaluated on-policy. To the best of our knowledge, no multi-agent version of OPPOSD has been proposed so far.
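A sketch of the corresponding weighted estimator on synthetic logged data. The ratio estimates $w$, the policies and the data below are all hypothetical placeholders (in OPPOSD, $w$ itself is learned):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic logged data generated under a uniform behavior policy beta
n = 10_000
states = rng.integers(0, 3, size=n)
actions = rng.integers(0, 2, size=n)
rewards = rng.normal(size=n)                 # placeholder critic values

w_hat = np.array([0.5, 1.0, 1.5])            # hypothetical d^pi(s)/d^beta(s)
pi = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])   # target policy pi(a|s)
beta = np.full((3, 2), 0.5)                  # uniform behavior policy

# counterfactual objective: reweight each logged sample by
# w(s) * pi(a|s)/beta(a|s) to emulate sampling from (d^pi, pi)
rho = pi[states, actions] / beta[states, actions]
J_hat = np.mean(w_hat[states] * rho * rewards)
```

Both correction factors are needed: $\rho$ alone fixes the action distribution, $w$ fixes the state distribution.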
To deal with the second issue of diverging vanilla TD-learning with linear function approximation, gradient-temporal-difference (GTD) learning has been explored in \cite{Maei:GQ-lambda}. Gradient-TD methods are of linear complexity and guaranteed to converge in the off-policy setting for appropriately chosen step-size parameters, but they are more complex than TD($\lambda$) because they require a second auxiliary set of parameters with a second step size (on top of the one for function approximation) that must be set in a problem-dependent way for good performance, and their analyses require two-timescale stochastic approximation machinery. More recently, a simpler technique called emphatic TD-learning has been introduced \cite{Sutton:2016:EmphaticAP}. While it provides convergence guarantees similar to those of GTD \cite{Yu:2015:cv:emphatic, Yu:2016:WEAK_CV_EMPHATIC_TD}, emphatic-TD only requires one set of parameters (for the linear function approximation) and a single-timescale update. An off-policy Gradient-Actor-Critic and an Emphatic-Actor-Critic using the excursion setting (\ref{eq:objective-off-pol}), linear function approximation and GTD or emphatic-TD in the critic step were presented in \cite{Maei:2018:Convergent_AC_OffPol} with a convergence proof based on a classical two-timescale analysis. The off-policy Emphatic-Actor-Critic has since been adapted to a discounted decentralized multi-agent setting in \cite{Suttle:2019:MA-OFF-Policy-AC}, in which a convergence proof is provided. This is a promising result, as this MARL algorithm benefits from the decentralized setting (inducing low communication overhead and allowing privacy) and learns off-policy (which can speed up learning or allow learning from existing data batches). Nonetheless, at each time step, a broadcast of local importance sampling factor estimators has to be performed until consensus is reached among all the agents, which mitigates the advantage induced by decentralization.
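As a sketch of the gradient-TD idea, the snippet below runs a TDC/GTD2-style update, which maintains a second weight vector on its own faster timescale, on the same two-state counterexample where vanilla off-policy TD(0) diverges; step sizes are illustrative.

```python
# TDC-style gradient-TD update on the two-state counterexample:
# phi(s1) = 1, phi(s2) = 2, zero reward, only s1 -> s2 transitions replayed.
# A second weight w is learned on a faster timescale (beta_w >> alpha).
gamma = 0.99
alpha, beta_w = 0.01, 0.1
theta, w = 1.0, 0.0
for _ in range(5000):
    phi, phi_next, r = 1.0, 2.0, 0.0
    delta = r + gamma * phi_next * theta - phi * theta
    theta += alpha * (delta * phi - gamma * phi_next * (w * phi))  # corrected TD
    w += beta_w * (delta - w * phi) * phi                          # auxiliary weights
# theta is driven to 0, the off-policy fixed point, instead of diverging
```

The correction term $-\gamma \phi' (\phi^\top w)$ is what turns the divergent TD recursion into a stable one on this example.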
\subsection{Potential extensions}
\subsubsection{Continuous states} It would be interesting to analyse the convergence of our actor-critic algorithm when not only the action space but also the state space $\mc{S}$ is continuous, as many continuous action environments also have a continuous state space. We are confident that convergence would still hold with few additional assumptions (e.g. geometric ergodicity of the stationary state distribution induced by the deterministic policies) and adaptations of the finite state space setting we worked on, as two-timescale stochastic approximation techniques can be applied to the continuous case (Chap. 6 of \cite{BorkarStochasticApprDynamicalSysViewpoint}). Such an analysis has been carried out in \cite{Zhang:2018:ContinuousFDMARL} to show convergence of a stochastic multi-agent actor-critic algorithm with continuous state and action spaces when linear function approximation is employed. Interestingly, the actor update relies on the recently proposed expected policy gradient (EPG) \cite{Ciosek:2018:ExpectedPG}, presented as a hybrid between the stochastic and deterministic policy gradients and designed to reduce the variance of the policy gradient estimate. This estimator is supposed to induce a lower variance than the classical stochastic PG without suffering from the lack of exploration induced by deterministic policies, and thus can naturally be trained in both a decentralized and an on-policy way.
\subsubsection{Partial observability}
Another interesting extension, on a more practical side, would be to weaken the assumption that states and actions are both fully observable. Indeed, many real-world problems, such as supply-chain management, do not comply with this setting and involve agents which cannot observe others' actions, at least in real time. Full observability is also detrimental to scalability, as the number of actions registered in the system tends to grow quadratically with the number of agents, whereas it would grow linearly supposing, for instance, that each agent only observes the actions of some ``close'' neighbors. While more realistic, decentralized partially observable problems are also more difficult to solve since, in general, the updates of the agents' policies induce non-stationarity of the part of the environment observed by each agent. This is why such problems belong to the class of NEXP-complete problems.
The Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm designed in \cite{lowe:2017:multi_AC_Mixed} is a model-free actor-critic MARL algorithm for the setting in which agent $i$ at execution time step $t$ only has access to its own local observations, local actions, and local rewards. Nonetheless, full observability of the joint state and action is assumed during training to allow the critic to learn in a stationary setting. This is a more stringent setting than ours, since we assumed that both the local critic and local actor steps are performed observing the global state. On the other hand, no convergence guarantees have been provided for MADDPG, though simulations demonstrate its ability to learn in a predator-prey multi-agent environment.
\section{Conclusion} We propose a new decentralized multi-agent actor-critic algorithm with deterministic policies for finite state space and continuous action space environments. The critic step is based on a classic TD(0) update, while the direction of the actor's parameter update is given by a Deterministic Policy Gradient that can be computed locally. We show that many strategies designed for the stochastic PG should carry over to the deterministic setting, as the deterministic PG is the limit of the stochastic one, even in the undiscounted setting we consider.
We give convergence guarantees for our algorithm which are theoretically as good as those provided in recent papers using stochastic policies. Nonetheless, we have doubts about the ability of a multi-agent actor-critic with deterministic policies to train on-policy, as it is likely to suffer from a lack of exploration. Designing multi-agent actor-critic algorithms that can be trained on-policy with a low-variance policy gradient estimate, or that can learn off-policy while maintaining an effective decentralized setting, is an ongoing research topic.
As we still need to consolidate our experimental results, the presentation of training runs on the OpenAI Gym environment that we designed, which models an easily customizable supply-chain problem known as the \textit{Beer Game} and supports MARL with both continuous state and action spaces, is deferred to the defense.
\textbf{Acknowledgements}: The work produced so far has been done with the support of the École polytechnique and IBM Research. I am grateful to Laura WYNTER who made this internship possible. I am especially thankful to Desmond CAI, my tutor throughout this very exciting research internship. His insights in Reinforcement Learning and his way of approaching research challenges have been very precious to me.
\appendix \section{Proof of Policy Gradient Theorems} \subsection{Proof of Theorem \ref{thm:limit_stoch_grad}} \label{app:limit_stoch_grad} In this section, we use the following notation for the objective function $J$: let $\beta$ be a fixed policy under which $\{s_t\}_{t \geq 0}$ is irreducible and aperiodic, and let $d^\beta$ be its corresponding stationary distribution. For a stochastic policy $\pi: \mathcal{S} \mapsto \mathcal{P}(\mathcal{A})$, we define $J_\beta(\pi)$ as: \begin{equation}
J_\beta(\pi) = \sum_{s \in \mathcal{S}} d^\beta(s) \int_{\mathcal{A}} \pi(a|s) \bar{R}(s, a) \textnormal{d}a \end{equation} and for a deterministic policy $\mu: \mathcal{S} \mapsto \mathcal{A}$, we have: \begin{equation}
J_\beta(\mu) = \sum_{s \in \mathcal{S}} d^\beta(s) \bar{R}(s, \mu(s)) \end{equation}
Note that this corresponds to the excursion setting discussed in Section \ref{section:off-pol} with behavior policy $\beta$ and target policy $\pi$ or $\mu$.
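A quick numeric sanity check of the relation between the two definitions, foreshadowing Theorem \ref{thm:limit_stoch_grad}: smoothing a deterministic policy with a narrow kernel recovers $J_\beta(\mu)$ as the width goes to $0$. A Gaussian kernel is used purely for illustration, even though the formal conditions below require compact support; all numbers are hypothetical.

```python
import numpy as np

d_beta = np.array([0.4, 0.6])                  # hypothetical behavior distribution
mu = np.array([0.3, -0.7])                     # deterministic target actions

def R(s, a):
    return np.cos(a + s)                       # smooth bounded toy reward

a_grid = np.linspace(-5.0, 5.0, 100_001)
da = a_grid[1] - a_grid[0]

def J_stochastic(sigma):
    # J_beta(pi_sigma) with pi_sigma(.|s) a Gaussian centered at mu(s)
    total = 0.0
    for s in range(2):
        pdf = np.exp(-(a_grid - mu[s]) ** 2 / (2.0 * sigma ** 2))
        pdf /= sigma * np.sqrt(2.0 * np.pi)
        total += d_beta[s] * np.sum(pdf * R(s, a_grid)) * da
    return total

J_det = sum(d_beta[s] * R(s, mu[s]) for s in range(2))
gap_wide, gap_narrow = (abs(J_stochastic(sg) - J_det) for sg in (0.5, 0.01))
```

As the kernel narrows, the stochastic objective approaches the deterministic one.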
We reproduce \textbf{Conditions B1} from \cite{Silver:2014:DeterministicPolicyGradient} and add two additional conditions:
\begin{cond} \label{cond:regular_delta_appr} Functions $\nu_\sigma$ parametrized by $\sigma$ are said to be regular delta-approximation on $\mathcal{R} \subset \mathcal{A}$ if they satisfy the following conditions: \begin{enumerate}
\item The distributions $\nu_\sigma$ converge to a delta distribution: $\textnormal{lim}_{\sigma \downarrow 0} \int_\mathcal{A} \nu_\sigma(a^\prime, a) f(a) \textnormal{d}a = f(a^\prime)$ for $a^\prime \in \mathcal{R}$ and suitably smooth $f$. Specifically we require that this convergence is uniform in $a^\prime$ and over any class $\mathcal{F}$ of $L$-Lipschitz and bounded functions, $\lVert \nabla_a f(a)\lVert < L < \infty$, $\textnormal{sup}_a f(a) < b < \infty$, i.e.:
\begin{equation*}
\underset{\sigma \downarrow 0}{\textnormal{lim }} \underset{f \in \mathcal{F}, a^\prime \in \mc{R}}{\textnormal{sup }} \left|\int_\mathcal{A} \nu_\sigma(a^\prime, a) f(a) \textnormal{d}a - f(a^\prime) \right| = 0
\end{equation*}
\item For each $a^\prime \in \mathcal{R}$, $\nu_\sigma(a^\prime, \cdot)$ is supported on some compact $\mathcal{C}_{a^\prime} \subseteq \mathcal{A}$ with Lipschitz boundary $\textnormal{bd} (\mathcal{C}_{a^\prime})$, vanishes on the boundary and is continuously differentiable on $\mathcal{C}_{a^\prime}$.
\item For each $a^\prime \in \mathcal{R}$, for each $a \in \mathcal{A}$, the gradient $\nabla_{a^\prime} \nu_\sigma(a^\prime, a)$ exists.
\item Translation invariance: for all $a \in \mathcal{A}, a^\prime \in \mathcal{R}$, and any $\delta \in \mathbb{R}^n$ such that $a + \delta \in \mathcal{A}$, $a^\prime + \delta \in \mathcal{A}$, $\nu_\sigma(a^\prime, a) = \nu_\sigma(a^\prime + \delta, a + \delta)$. \end{enumerate} \end{cond}
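A standard example of a regular delta-approximation is the rescaled smooth bump function. The sketch below checks numerically, in one dimension, that $\int_\mathcal{A} \nu_\sigma(a^\prime, a) f(a)\,\textnormal{d}a \to f(a^\prime)$ as $\sigma \downarrow 0$ for a smooth bounded test function (the grid and tolerances are illustrative).

```python
import numpy as np

def bump(a_prime, a, sigma):
    # C-infinity bump supported on (a_prime - sigma, a_prime + sigma):
    # compact support, vanishes on the boundary, translation invariant
    u = (a - a_prime) / sigma
    out = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

a = np.linspace(-3.0, 3.0, 200_001)
da = a[1] - a[0]
a_prime = 0.5
f = np.cos                                    # smooth bounded test function

errs = []
for sigma in (0.5, 0.1, 0.02):
    w = bump(a_prime, a, sigma)
    w /= w.sum() * da                         # normalize to integrate to 1
    errs.append(abs((w * f(a)).sum() * da - f(a_prime)))
# errs shrinks with sigma: nu_sigma acts as a delta-approximation
```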
Moreover, we state the following lemma, which is an immediate corollary of \textbf{Lemma 1} from \cite{Silver:2014:DeterministicPolicyGradient} (Supplementary Material). \begin{lem}\label{grad_nu} Let $\nu_\sigma$ be a regular delta-approximation on $\mathcal{R} \subseteq \mathcal{A}$. Then, wherever the gradients exist, $$\nabla_{a^\prime}\nu_\sigma(a^\prime, a) = - \nabla_a \nu_\sigma(a^\prime, a)$$ \end{lem}
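For intuition, the lemma follows in one line from the translation-invariance condition: differentiating the identity $\nu_\sigma(a^\prime + \delta, a + \delta) = \nu_\sigma(a^\prime, a)$ with respect to $\delta$ at $\delta = 0$ gives

```latex
0 \;=\; \left.\nabla_\delta\, \nu_\sigma(a^\prime+\delta,\, a+\delta)\right|_{\delta=0}
  \;=\; \nabla_{a^\prime}\nu_\sigma(a^\prime, a) \;+\; \nabla_a \nu_\sigma(a^\prime, a),
```

which is the claimed identity.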
We restate Theorem \ref{thm:limit_stoch_grad}.
\begin{thm*}
Let $\mu_\theta: \mathcal{S} \to \mathcal{A}$. Denote the range of $\mu_\theta$ by $\mathcal{R}_\theta \subseteq \mathcal{A}$, and $\mathcal{R} = \cup_\theta \mathcal{R}_\theta$. For each $\theta$, consider $\pi_{\theta, \sigma}$ a stochastic policy such that $\pi_{\theta, \sigma}(a|s) = \nu_\sigma(\mu_\theta(s),a)$, where $\nu_\sigma$ satisfy Conditions \ref{cond:regular_delta_appr} on $\mathcal{R}$. Suppose further that the Assumptions \ref{asm:MDP_Regularity} and \ref{asm:Reg_P_R} on the MDP hold. Then, there exists $r > 0$ such that, for each $\theta \in \Theta$, $\sigma \mapsto J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma})$, $\sigma \mapsto J_{\pi_{\theta, \sigma}}(\mu_\theta)$, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma})$, and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta)$ are properly defined on $\big[0, r\big]$ (with $J_{\pi_{\theta, 0}}(\pi_{\theta, 0}) = J_{\pi_{\theta, 0}}(\mu_\theta) = J_{\mu_\theta}(\mu_\theta)$ and $\nabla_\theta J_{\pi_{\theta, 0}}(\pi_{\theta, 0}) = \nabla_\theta J_{\pi_{\theta, 0}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta)$), and we have: \begin{equation*}
\underset{\sigma \downarrow 0}{\textnormal{lim }} \nabla_\theta J_{\pi_{\theta, \sigma}} (\pi_{\theta, \sigma}) = \underset{\sigma \downarrow 0}{\textnormal{lim }} \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta) \end{equation*} \end{thm*} \begin{proof}[Proof of Theorem \ref{thm:limit_stoch_grad}] We first state and prove the following Lemma. \bigbreak
\begin{lem} \label{stationary_distrib} There exists $r > 0$ such that, for all $\theta \in \Theta$ and $\sigma \in \big[0, r\big]$, the stationary distribution $d^{\pi_{\theta, \sigma}}$ exists and is unique. Moreover, for each $\theta \in \Theta$, $\sigma \mapsto d^{\pi_{\theta, \sigma}}$ and $\sigma \mapsto \nabla_\theta d^{\pi_{\theta, \sigma}}$ are properly defined on $\big[0, r\big]$ and both are continuous at $0$. \end{lem}
\begin{proof}[Proof of Lemma \ref{stationary_distrib}] For any policy $\beta$, we let $\left(P^\beta_{s,s^\prime}\right)_{s, s^\prime \in \mathcal{S}}$ be the transition matrix associated to the Markov chain $\{s_t\}_{t \geq 0}$ induced by $\beta$. In particular, for each $\theta \in \Theta$, $\sigma >0$, $s, s^\prime \in \mathcal{S}$, we have: \begin{align*}
&P^{\mu_\theta}_{s, s^\prime} = P(s^\prime |s, \mu_\theta(s))\\
&P^{\pi_{\theta, \sigma}}_{s, s^\prime} = \int_\mathcal{A} \pi_{\theta, \sigma} (a | s) P(s^\prime |s, a) \textnormal{d}a = \int_\mathcal{A} \nu_\sigma(\mu_\theta(s), a) P(s^\prime | s, a) \textnormal{d}a \end{align*} Let $\theta \in \Theta$, $s, s^\prime \in \mathcal{S}$, $\left(\theta_n\right)\in\Theta^\mathbb{N}$ such that $\theta_n \rightarrow \theta$ and $\left(\sigma_n\right)_{n \in \mathbb{N}} \in {\mathbb{R}^+}^\mathbb{N}$, $\sigma_n \downarrow 0$: \begin{equation*}
\left|P^{\pi_{\theta_n, \sigma_n}}_{s, s^\prime} - P^{\mu_\theta}_{s, s^\prime}\right| \leq \left|P^{\pi_{\theta_n, \sigma_n}}_{s, s^\prime} - P^{\mu_{\theta_n}}_{s, s^\prime}\right| + \left|P^{\mu_{\theta_n}}_{s, s^\prime} - P^{\mu_\theta}_{s, s^\prime}\right|\\ \end{equation*}
Applying the first condition of Conditions \ref{cond:regular_delta_appr} with $f: a \mapsto P(s^\prime | s, a)$ belonging to $\mathcal{F}$ (by Assumption \ref{asm:Reg_P_R}): \begin{align*}
\left|P^{\pi_{\theta_n, \sigma_n}}_{s, s^\prime} - P^{\mu_{\theta_n}}_{s, s^\prime}\right| &= \left|\int_\mathcal{A} \nu_{\sigma_n}(\mu_{\theta_n}(s), a) P(s^\prime | s, a) \textnormal{d}a - P(s^\prime | s, \mu_{\theta_n}(s)) \right|\\
&\leq \underset{f \in \mathcal{F}, a^\prime \in \mathcal{A}}{\sup} \left|\int_\mathcal{A} \nu_{\sigma_n}(a^\prime, a) f(a) \textnormal{d}a - f(a^\prime) \right| \\
&\underset{n \rightarrow \infty}{\longrightarrow} 0 \end{align*}
By regularity assumptions on $\theta \mapsto \mu_\theta(s)$ (\ref{asm:MDP_Regularity}) and $P(s^\prime|s, \cdot)$ (\ref{asm:Reg_P_R}), we have \begin{equation*}
\left|P^{\mu_{\theta_n}}_{s, s^\prime} - P^{\mu_\theta}_{s, s^\prime}\right| = \left| P(s^\prime | s, \mu_{\theta_n}(s)) - P(s^\prime | s, \mu_\theta(s))\right| \underset{n \rightarrow \infty}{\longrightarrow} 0 \end{equation*} Hence \begin{equation*}
\left|P^{\pi_{\theta_n, \sigma_n}}_{s, s^\prime} - P^{\mu_\theta}_{s, s^\prime}\right| \underset{n \rightarrow \infty}{\longrightarrow} 0 \end{equation*} Therefore, for each $s, s^\prime \in \mathcal{S}$, $(\theta, \sigma) \mapsto P^{\pi_{\theta, \sigma}}_{s, s^\prime}$, with $P^{\pi_{\theta, 0}}_{s, s^\prime} = P^{\mu_\theta}_{s, s^\prime}$, is continuous on $\Theta \times \{0\}$. Note that, for each $n \in \mathbb{N}$, $P \mapsto \prod_{s, s^\prime} \left(P^n\right)_{s, s^\prime}$ is a polynomial function of the entries of $P$. Thus, for each $n \in \mathbb{N}$, $f_n: (\theta, \sigma) \mapsto \prod_{s, s^\prime} \left({P^{\pi_{\theta, \sigma}}}^n\right)_{s, s^\prime}$, with $f_n(\theta, 0) = \prod_{s, s^\prime} \left({P^{\mu_\theta}}^n\right)_{s, s^\prime}$, is continuous on $\Theta \times \{0\}$. Moreover, for each $\theta \in \Theta, \sigma \geq 0$, from the structure of $P^{\pi_{\theta, \sigma}}$, if there is some $n^* \in \mathbb{N}$ such that $f_{n^*}(\theta, \sigma) > 0$ then, for all $n \geq n^*$, $f_n(\theta, \sigma) > 0$.
Now suppose that there exists $\left(\theta_n\right)\in\Theta^{\mathbb{N}^*}$ such that, for each $n > 0$, there is $\sigma_n \leq n^{-1}$ with $f_n(\theta_n, \sigma_n) = 0$. By compactness of $\Theta$, we may assume that $\left(\theta_n\right)$ converges to some $\theta \in \Theta$. Fix $n^* \in \mathbb{N}$; since $f_n(\theta_n, \sigma_n) = 0$ implies $f_{n^*}(\theta_n, \sigma_n) = 0$ for all $n \geq n^*$, continuity gives $f_{n^*}(\theta, 0) = \underset{n \to \infty}{\lim} f_{n^*}(\theta_n, \sigma_n) = 0$. Besides, by Assumption \ref{asm:MDP_Regularity}, $P^{\mu_\theta}$ is irreducible and aperiodic; thus, there is some $n \in \mathbb{N}$ such that for all $s, s^\prime \in \mathcal{S}$ and for all $n^* \geq n$, $\left({P^{\mu_\theta}}^{n^*}\right)_{s, s^\prime} > 0$, i.e. $f_{n^*}(\theta, 0) > 0$. This leads to a contradiction.
Hence, there exists $n^* > 0$ such that for all $\theta \in \Theta$ and $\sigma \leq {n^*}^{-1}$, $f_{n^*}(\theta, \sigma) > 0$. We let $r = {n^*}^{-1}$. It follows that, for all $\theta \in \Theta$ and $\sigma \in \big[0, r\big]$, $P^{\pi_{\theta, \sigma}}$ is the transition matrix of an irreducible and aperiodic Markov chain, so $d^{\pi_{\theta, \sigma}}$ is well defined as the unique stationary probability distribution associated with $P^{\pi_{\theta, \sigma}}$. We fix $\theta \in \Theta$ for the remainder of the proof.
\bigbreak Let $\beta$ be a policy for which the Markov chain corresponding to $P^\beta$ is irreducible and aperiodic, and let $s_* \in \mathcal{S}$. As asserted in \cite{Marbach:2001}, considering the stationary distribution $d^\beta$ as a vector $\left(d^{\beta}_s\right)_{s \in \mathcal{S}} \in \mathbb{R}^{|\mathcal{S}|}$, $d^\beta$ is the unique solution of the balance equations: \begin{align*}
\sum_{s \in \mathcal{S}} d^{\beta}_s P^{\beta}_{s, s^\prime} &= d^\beta_{s^\prime} \hspace{4em} s^\prime \in \mathcal{S}\backslash\{s_*\}\\
\sum_{s \in \mathcal{S}} d^{\beta}_s &= 1 \end{align*}
Hence, there exist an $|\mathcal{S}| \times |\mathcal{S}|$ matrix $A^\beta$ and a constant vector $a \neq 0$ of $\mathbb{R}^{|\mathcal{S}|}$ such that the balance equations are of the form \begin{equation} \label{balance_equation}
A^\beta d^{\beta} = a \end{equation}
with $A^\beta_{s, s^\prime}$ depending affinely on $P^{\beta}_{s^\prime, s}$, for each $s, s^\prime \in \mathcal{S}$. Moreover, $A^\beta$ is invertible; thus $d^\beta$ is given by \begin{equation*}
d^\beta = \frac{1}{\det(A^\beta)} \textnormal{adj}(A^\beta)^\top a \end{equation*} Entries of $\textnormal{adj}(A^\beta)$ and $\det(A^\beta)$ are polynomial functions of the entries of $P^\beta$. \bigbreak
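In practice, the balance equations can be solved directly rather than through the adjugate formula (the adjugate serves here only to expose the polynomial dependence on the entries of $P^\beta$). A minimal sketch, with a hypothetical two-state chain:

```python
import numpy as np

def stationary_distribution(P):
    # Balance equations as above: drop one redundant equation of d^T P = d^T
    # and replace it by the normalization sum_s d(s) = 1, then solve A d = a.
    n = P.shape[0]
    A = np.eye(n) - P.T
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])                    # hypothetical irreducible chain
d = stationary_distribution(P)                # d = [5/6, 1/6]
```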
Thus, $\sigma \mapsto d^{\pi_{\theta, \sigma}} = \frac{1}{\det(A^{\pi_{\theta, \sigma}})} \textnormal{adj}(A^{\pi_{\theta, \sigma}})^\top a$ is defined on $\big[0, r\big]$ and is continuous at 0. \bigbreak
Conditions \ref{cond:regular_delta_appr}, Lemma \ref{grad_nu} and integration by parts imply that, for $s, s^\prime \in \mathcal{S}$, $\sigma \in \big[0, r\big]$: \begin{align*}
\int_\mathcal{A} \left. \nabla_{a^\prime} \nu_\sigma(a^\prime, a) \right|_{a^\prime = \mu_\theta(s)} P(s^\prime | s, a) \textnormal{d}a &= - \int_\mathcal{A} \nabla_{a} \nu_\sigma(\mu_\theta(s), a) P(s^\prime | s, a) \textnormal{d}a \\
&= \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a) \nabla_a P(s^\prime |s, a) \textnormal{d}a + \textnormal{boundary terms} \\
&= \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a) \nabla_a P(s^\prime |s, a) \textnormal{d}a \end{align*}
where the boundary terms are zero since $\nu_\sigma$ vanishes on the boundary.
Thus, for $s, s^\prime \in \mathcal{S}$, $\sigma \in \big[0, r\big]$: \begin{align}
\nabla_\theta P^{\pi_{\theta, \sigma}}_{s, s^\prime} &= \nabla_\theta \int_\mathcal{A} \pi_{\theta, \sigma} (a | s) P(s^\prime |s, a) \textnormal{d}a \nonumber\\
&= \int_\mathcal{A} \nabla_\theta \pi_{\theta, \sigma}(a |s) P(s^\prime | s, a) \textnormal{d}a \label{exchange_grad_int}\\
&= \int_\mathcal{A} \nabla_\theta \mu_\theta(s) \left. \nabla_{a^\prime} \nu_\sigma(a^\prime, a) \right|_{a^\prime = \mu_\theta(s)} P(s^\prime | s, a) \textnormal{d}a \nonumber \\
&= \nabla_\theta \mu_\theta(s) \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a) \nabla_a P(s^\prime |s, a) \textnormal{d}a \nonumber \end{align}
where exchange of derivation and integral in (\ref{exchange_grad_int}) follows by application of Leibniz rule with: \begin{itemize}
\item $\forall a \in \mathcal{A}$, $\theta \mapsto \pi_{\theta, \sigma} (a |s) P(s^\prime | s, a)$ is differentiable, and $\nabla_\theta \pi_{\theta, \sigma} (a |s) P(s^\prime | s, a) = \nabla_\theta \mu_\theta(s) \left. \nabla_{a^\prime} \nu_\sigma(a^\prime, a)\right|_{a^\prime = \mu_\theta(s)}$.\\
\item Let $a^* \in \mathcal{R}$, $\forall \theta \in \Theta$,
\begin{align}
\left\lVert \nabla_\theta \pi_{\theta, \sigma} (a |s) P(s^\prime | s, a) \right\lVert &= \left\lVert \nabla_\theta \mu_\theta(s) \left. \nabla_{a^\prime} \nu_\sigma(a^\prime, a)\right|_{a^\prime = \mu_\theta(s)} \right\lVert\nonumber\\
&\leq \left\lVert\nabla_\theta\mu_\theta(s) \right\lVert_\textnormal{op} \left\lVert\left. \nabla_{a^\prime} \nu_\sigma(a^\prime, a)\right|_{a^\prime = \mu_\theta(s)}\right\lVert\nonumber \nonumber \\
&\leq \underset{\theta \in \Theta}{\sup} \left\lVert\nabla_\theta\mu_\theta(s) \right\lVert_\textnormal{op} \left\lVert \nabla_a \nu_\sigma(\mu_\theta(s), a)\right\lVert\nonumber\\
&= \underset{\theta \in \Theta}{\sup} \left\lVert\nabla_\theta\mu_\theta(s) \right\lVert_\textnormal{op} \left\lVert \nabla_a \nu_\sigma(a^*, a - \mu_\theta(s) + a^*)\right\lVert \label{align_delta}\\
&\leq \underset{\theta \in \Theta}{\sup} \left\lVert\nabla_\theta\mu_\theta(s) \right\lVert_\textnormal{op} \underset{a \in \mathcal{C}_{a^*}}{\sup} \left\lVert\nabla_a \nu_\sigma(a^*, a)\right\lVert \textbf{1}_{a \in \mathcal{C}_{a^*}} \nonumber
\end{align}
where $\lVert \cdot \lVert_\textnormal{op}$ denotes the operator norm, and (\ref{align_delta}) comes from translation invariance (we take $\nabla_a \nu_\sigma(a^*, a) = 0$ for $a \in \mathbb{R}^n\backslash \mathcal{C}_{a^*}$). $a \mapsto \underset{\theta \in \Theta}{\sup} \left\lVert\nabla_\theta\mu_\theta(s) \right\lVert_\textnormal{op} \underset{a \in \mathcal{C}_{a^*}}{\sup} \left\lVert\nabla_a \nu_\sigma(a^*, a)\right\lVert \textbf{1}_{a \in \mathcal{C}_{a^*}}$ is measurable, bounded and supported on $\mathcal{C}_{a^*}$, so it is integrable on $\mathcal{A}$.
\item Dominated convergence ensures that, for each $k \in \llbracket 1, m\rrbracket$, the partial derivative
\begin{equation*}
g_k(\theta) = \partial_{\theta_k} \int_\mathcal{A} \pi_{\theta, \sigma} (a |s) P(s^\prime | s, a) \textnormal{d}a
\end{equation*}
is continuous: let $\theta_n \to \theta$
\begin{align*}
g_k(\theta_n) &= \partial_{\theta_k} \int_\mathcal{A} \pi_{\theta_n, \sigma} (a |s) P(s^\prime | s, a) \textnormal{d}a\\
&= \partial_{\theta_k} \mu_{\theta_n} (s) \int_{\mathcal{C}_{a^*}} \nu_\sigma(a^*, a - \mu_{\theta_n}(s) + a^*) \nabla_a P(s^\prime | s,a) \textnormal{d}a\\
&\underset{n \to \infty}{\longrightarrow} \partial_{\theta_k} \mu_{\theta} (s) \int_{\mathcal{C}_{a^*}} \nu_\sigma(a^*, a - \mu_{\theta}(s) + a^*) \nabla_a P(s^\prime | s,a) \textnormal{d}a = g_k(\theta)
\end{align*}
with the dominating function $a \mapsto \underset{a \in \mathcal{C}_{a^*}}{\sup} |\nu_\sigma(a^*,a)| \underset{a \in \mathcal{A}}{\sup}\left\lVert\nabla_a P(s^\prime|s,a) \right\lVert \textbf{1}_{a \in \mathcal{C}_{a^*}}$.
\end{itemize} \bigbreak
Thus $\sigma \mapsto \nabla_\theta P^{\pi_{\theta, \sigma}}_{s, s^\prime}$ is defined for $\sigma \in \big[0, r\big]$ and is continuous at 0, with \begin{equation*}
\nabla_\theta P^{\pi_{\theta, 0}}_{s, s^\prime} = \nabla_\theta \mu_\theta(s) \left. \nabla_a P(s^\prime |s, a) \right|_{a = \mu_\theta(s)} \end{equation*}
Indeed, let $\left(\sigma_n\right)_{n \in \mathbb{N}} \in {\big[0, r\big]}^\mathbb{N}$, $\sigma_n \downarrow 0$; then, applying the first condition of Conditions \ref{cond:regular_delta_appr} with $f: a \mapsto \nabla_a P(s^\prime | s, a)$ belonging to $\mathcal{F}$ (Assumption \ref{asm:Reg_P_R}): \begin{equation*}
\left \lVert \nabla_\theta P^{\pi_{\theta, \sigma_n}}_{s, s^\prime} - \nabla_\theta P^{\mu_\theta}_{s, s^\prime}\right\lVert = \left\lVert \nabla_\theta \mu_\theta(s) \right\lVert_\textnormal{op} \left\lVert \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_{\sigma_n}(\mu_\theta(s), a) \nabla_a P(s^\prime |s, a) \textnormal{d}a - \left. \nabla_a P(s^\prime |s, a) \right|_{a = \mu_\theta(s)} \right\lVert \underset{n \rightarrow \infty}{\longrightarrow} 0 \end{equation*}
Since $d^{\pi_{\theta, \sigma}} = \frac{1}{\det\left(A^{\pi_{\theta, \sigma}} \right)}\textnormal{adj}\left(A^{\pi_{\theta, \sigma}}\right)^\top a$ with $|\det\left(A^{\pi_{\theta, \sigma}}\right)| > 0$ for all $\sigma \in \big[0, r\big]$ and since entries of $\textnormal{adj}\left(A^{\pi_{\theta, \sigma}}\right)$ and $\det\left(A^{\pi_{\theta, \sigma}}\right)$ are polynomial functions of the entries of $P^{\pi_{\theta, \sigma}}$, it follows that $\sigma \mapsto \nabla_\theta d^{\pi_{\theta, \sigma}}$ is properly defined on $\big[0, r \big]$ and is continuous at 0, which concludes the proof of Lemma \ref{stationary_distrib}. \end{proof}
Let $\theta \in \Theta$, $\pi_{\theta, \sigma}$ as in Theorem \ref{thm:limit_stoch_grad}, and $r > 0$ such that $\sigma \mapsto d^{\pi_{\theta, \sigma}}$, $\sigma \mapsto \nabla_\theta d^{\pi_{\theta, \sigma}}$ are well defined on $\big[0, r\big]$ and are continuous at 0. Then, \begin{align}
\label{J_pi_pi}
&\sigma \mapsto J_{\pi_{\theta, \sigma}}({\pi_{\theta, \sigma}}) = \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma}}(s) \int_\mathcal{A}{\pi_{\theta, \sigma}}(a|s) \bar{R}(s,a) \textnormal{d}a \hspace{2em} \textnormal{and}\\
\label{J_pi_mu}
&\sigma \mapsto J_{\pi_{\theta, \sigma}}(\mu_\theta) = \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma}}(s) \bar{R}(s, \mu_\theta(s)) \end{align} are properly defined on $\big[0, r\big]$ (with $J_{\pi_{\theta, 0}}(\pi_{\theta, 0}) = J_{\pi_{\theta, 0}}(\mu_\theta) = J_{\mu_\theta}(\mu_\theta)$). Let $s \in \mathcal{S}$; by the same arguments as developed in the proof of Lemma \ref{stationary_distrib}, we have \begin{align*}
\nabla_\theta \int_{\mathcal{A}} \pi_{\theta, \sigma}(a |s) \bar{R}(s, a) \textnormal{d}a &= \int_{\mathcal{A}} \nabla_\theta \pi_{\theta, \sigma}(a|s) \bar{R}(s,a) \textnormal{d}a\\
&= \nabla_\theta \mu_\theta(s) \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a) \nabla_a \bar{R}(s,a) \textnormal{d}a \end{align*}
Thus, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma})$ is properly defined on $\big[0, r\big]$ and \begin{align*}
\nabla_\theta J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma}) &= \sum_{s \in \mathcal{S}} \nabla_\theta d^{\pi_{\theta, \sigma}}(s) \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) \bar{R}(s,a) \textnormal{d}a + \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma}}(s) \nabla_\theta \int_{\mathcal{A}} \pi_{\theta, \sigma}(a|s) \bar{R}(s,a) \textnormal{d}a\\
&= \sum_{s \in \mathcal{S}} \nabla_\theta d^{\pi_{\theta, \sigma}}(s) \int_{\mathcal{A}} \nu_\sigma(\mu_\theta(s), a) \bar{R}(s,a) \textnormal{d}a + \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma}}(s) \nabla_\theta \mu_\theta(s) \int_{\mathcal{C}_{\mu_\theta(s)}} \nu_\sigma(\mu_\theta(s), a) \nabla_a \bar{R}(s,a) \textnormal{d}a \end{align*}
Similarly, $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta)$ is properly defined on $\big[0, r\big]$ and \begin{equation*}
\nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta) = \sum_{s \in \mathcal{S}} \nabla_\theta d^{\pi_{\theta, \sigma}}(s) \bar{R}(s,\mu_\theta(s)) + \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma}}(s) \nabla_\theta \mu_\theta(s)\left. \nabla_a \bar{R}(s,a)\right|_{a = \mu_\theta(s)} \end{equation*}
To prove continuity at $0$ of both $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma})$ and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta)$ (with $\nabla_\theta J_{\pi_{\theta, 0}}(\pi_{\theta, 0}) = \nabla_\theta J_{\pi_{\theta, 0}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta)$), let $\pa{\sigma_n}_{n\geq 0} \downarrow 0$:
\begin{equation} \label{continuity_J}
\left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\pi_{\theta, \sigma_n}) - \nabla_\theta J_{\pi_{\theta, 0}}(\pi_{\theta,0}) \right\lVert \leq \left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\pi_{\theta, \sigma_n}) - \nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) \right\lVert + \left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\right\lVert \end{equation}
For the first term of the r.h.s., we have
\begin{align*}
\left\lVert \nabla_\theta J_{\pi_{\theta, \sigma_n}}(\pi_{\theta, \sigma_n}) - \nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) \right\lVert &\leq \sum_{s \in \mathcal{S}} \lVert\nabla_\theta d^{\pi_{\theta, \sigma_n}}(s)\lVert \left|\int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s),a) \bar{R}(s,a)\textnormal{d}a - \bar{R}(s, \mu_\theta(s))\right|\\
& + \sum_{s \in \mathcal{S}} d^{\pi_{\theta, \sigma_n}}(s) \lVert\nabla_\theta \mu_\theta(s)\lVert_\textnormal{op} \left\lVert\int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s),a) \nabla_a \bar{R}(s,a)\textnormal{d}a - \left. \nabla_a \bar{R}(s, a)\right|_{a=\mu_\theta(s)}\right\lVert \end{align*}
Applying the first condition of Conditions \ref{cond:regular_delta_appr} with $f: a \mapsto \bar{R}(s, a)$ and $f: a \mapsto \nabla_a \bar{R}(s, a)$ belonging to $\mathcal{F}$ (by Assumption \ref{asm:Reg_P_R}), we have, for each $s \in \mathcal{S}$: \begin{align*}
\left|\int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s),a) \bar{R}(s,a)\textnormal{d}a - \bar{R}(s, \mu_\theta(s))\right| \underset{n \to \infty}{\longrightarrow} 0 \quad \textnormal{and}\\
\left\lVert\int_{\mathcal{A}} \nu_{\sigma_n}(\mu_\theta(s),a) \nabla_a \bar{R}(s,a)\textnormal{d}a - \left. \nabla_a \bar{R}(s, a)\right|_{a=\mu_\theta(s)}\right\lVert \underset{n \to \infty}{\longrightarrow} 0 \end{align*} Moreover, for each $s \in \mathcal{S}$, $d^{\pi_{\theta, \sigma_n}}(s) \underset{n \to \infty}{\longrightarrow} d^{\mu_\theta}(s)$ and $\nabla_\theta d^{\pi_{\theta, \sigma_n}}(s) \underset{n \to \infty}{\longrightarrow} \nabla_\theta d^{\mu_\theta}(s)$ (by Lemma \ref{stationary_distrib}), and $\lVert \nabla_\theta \mu_\theta(s) \lVert_\textnormal{op} < \infty$ (by Assumption \ref{asm:Reg_P_R}), so \begin{equation*}
\left\lVert \nabla_\theta J_{\pi_{\theta, \sigma_n}}(\pi_{\theta, \sigma_n}) - \nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) \right\lVert \underset{n \to \infty}{\longrightarrow} 0 \end{equation*}
For the second term on the right-hand side of (\ref{continuity_J}), we have \begin{align*}
\left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\right\lVert &\leq \sum_{s \in \mathcal{S}} \left\lVert\nabla_\theta d^{\pi_{\theta, \sigma_n}}(s) - \nabla_\theta d^{\mu_\theta}(s)\right\lVert \left|\bar{R}(s,\mu_\theta(s))\right|\\
& \qquad + \sum_{s \in \mathcal{S}}\left|d^{\pi_{\theta, \sigma_n}}(s) - d^{\mu_\theta}(s)\right| \left\lVert \nabla_\theta \mu_\theta(s)\right\lVert_\textnormal{op} \left\lVert \left. \nabla_a \bar{R}(s,a)\right|_{a = \mu_\theta(s)} \right\lVert \end{align*} Continuity at $0$ of $\sigma \mapsto d^{\pi_{\theta,\sigma}}(s)$ and $\sigma \mapsto \nabla_\theta d^{\pi_{\theta,\sigma}}(s)$ for each $s \in \mathcal{S}$, together with the boundedness of $\bar{R}(s, \cdot)$, $\nabla_a \bar{R}(s, \cdot)$ and $\nabla_\theta \mu_\theta(s)$, implies that \begin{equation*}
\left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\mu_\theta) - \nabla_\theta J_{\mu_\theta}(\mu_\theta)\right\lVert \underset{n \to \infty}{\longrightarrow} 0 \end{equation*} Hence \begin{equation}
\left\lVert\nabla_\theta J_{\pi_{\theta, \sigma_n}}(\pi_{\theta, \sigma_n}) - \nabla_\theta J_{\pi_{\theta, 0}}(\pi_{\theta,0}) \right\lVert \underset{n \to \infty}{\longrightarrow} 0 \end{equation}
Hence $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\pi_{\theta, \sigma})$ and $\sigma \mapsto \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta)$ are continuous at $0$: \begin{equation}
\underset{\sigma \downarrow 0}{\textnormal{lim }} \nabla_\theta J_{\pi_{\theta, \sigma}} (\pi_{\theta, \sigma}) = \underset{\sigma \downarrow 0}{\textnormal{lim }} \nabla_\theta J_{\pi_{\theta, \sigma}}(\mu_\theta) = \nabla_\theta J_{\mu_\theta}(\mu_\theta) \end{equation} \end{proof}
\section{Kushner-Clark Lemma}\label{section:kc} Our convergence result for the actor step relies on the Kushner-Clark lemma (pp.\ 191--196 of \cite{Kushner_Clark}), which we state here.
Consider the following stochastic recursion in $\mathbb{R}^l$: \begin{equation} \label{eq:KC-recursion}
x_{t+1} = \Gamma \pab{x_t + \beta_t \pa{f(x_t) + M_t + \epsilon_t}} \end{equation} where $\Gamma: \mathbb{R}^l \to C$ is a projection map onto a set $C \subset \mathbb{R}^l$. Consider also the following ODE associated with recursion (\ref{eq:KC-recursion}): \begin{equation}\label{eq:kc-ode}
\dot{x} = \hat{\Gamma}\pab{f(x(t))} \end{equation} where, for any continuous function $w: \mathbb{R}^l \to \mathbb{R}^l$, \begin{equation}
\hat{\Gamma}\pab{w(x)} = \limit{0<\eta \to 0} \pa{\frac{\Gamma(x+\eta \cdot w(x)) - x}{\eta}} \end{equation}
Let $\mc{B} = \paa{x \in \mathbb{R}^l \,|\, \hat{\Gamma}\pab{f(x)}=0}$ denote the set of all fixed points of (\ref{eq:kc-ode}). We now state the following conditions concerning (\ref{eq:KC-recursion}): \begin{enumerate}
\item The function $f:\mathbb{R}^l \to \mathbb{R}^l$ is continuous.
\item The step-sizes $\beta_t, t\geq0$ satisfy
\begin{equation}
\beta_t\geq0, \qquad \sum_t \beta_t = \infty, \qquad \beta_t \to 0 \text{ as } t \to \infty
\end{equation}
\item The sequence $\paa{\epsilon_t}_{t\geq0}$ is a bounded random sequence with $\epsilon_t \to 0$ a.s. as $t \to \infty$.
\item For all $\delta > 0$,
\begin{equation}
\limit{t\to \infty} \mathbb{P}\pa{\underset{n\geq t}{\text{ sup }} \palV{\sum_{i=t}^n \beta_i M_i} \geq \delta} = 0
\end{equation}
\item The set $C$ is compact. \end{enumerate} In this setting, Theorem 5.3.1 of \cite{Kushner_Clark} reads: \begin{thm} Under conditions 1.\ to 5., $x_t \to \mc{B}$ as $t \to \infty$ a.s. \end{thm}
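To illustrate the lemma, the following minimal Python sketch (all values hypothetical, not from the paper) simulates recursion (\ref{eq:KC-recursion}) with $\Gamma$ the projection onto $C=[0,1]$, $f(x)=-(x-0.5)$, bounded martingale-difference noise $M_t$, $\epsilon_t \equiv 0$, and step sizes satisfying condition 2; the iterates approach the fixed-point set $\mc{B}=\{0.5\}$ of the associated ODE.

```python
import random

def project(x, lo=0.0, hi=1.0):
    # Gamma: projection onto the compact set C = [lo, hi]
    return max(lo, min(hi, x))

def f(x):
    # drift with unique equilibrium x* = 0.5 in the interior of C
    return -(x - 0.5)

random.seed(0)
x = 0.9
for t in range(1, 20001):
    beta = 1.0 / t ** 0.75              # beta_t >= 0, sum beta_t = inf, beta_t -> 0
    M = random.uniform(-1.0, 1.0)       # bounded martingale-difference noise M_t
    x = project(x + beta * (f(x) + M))  # epsilon_t = 0 here

print(x)
```

With these step sizes condition 4 holds as well (the weighted noise sums are square-summable), so the iterates settle near $0.5$.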
\section{Proof of Decentralized Deterministic Multi-Agent Actor-Critic convergence} \label{section:cv-on-pol-proof}
We use the two-time-scale stochastic approximation analysis \cite{Borkar:1997:SA-two-timescale}. We hold the policy parameter fixed, $\theta_t \equiv \theta$, when analysing the convergence of the critic step. This yields convergence of $\omega_t$ towards a limit $\omega_\theta$ depending on $\theta$, which is then used to prove convergence on the slow time-scale. \subsection{Proof of Theorem \ref{thm:on-policy-AC-critic-cv}}
\begin{lem} \label{lem:Boundedness_omega} Under Assumptions \ref{asm:MDP_Regularity} and \ref{Bound_theta}--\ref{asm:step-sizes}, the sequence $\{\omega_t^i\}$ generated from (\ref{alg:On_Policy_AC_critic_step}) is bounded a.s., i.e., $\textnormal{sup}_t \lVert \omega^i_t \rVert < \infty$ a.s., for any $i \in \mathcal{N}$. \end{lem}
\begin{proof} The proof is exactly the same as in \cite{FDMARL} (proof of Lemma 5.1 in Appendix A), since a deterministic policy can here be considered as a special case of a stochastic policy. \end{proof}
\begin{lem} \label{Boundedness_J_hat} Under Assumptions \ref{asm:MDP_Regularity} and \ref{asm:step-sizes}, the sequence $\{\hat{J}^i_t\}$ generated from (\ref{alg:On_Policy_AC_critic_step}) is bounded a.s., i.e., $\textnormal{sup}_t \vert \hat{J}^i_t \vert < \infty$ a.s., for any $i \in \mathcal{N}$. \end{lem}
\begin{proof} The proof is exactly the same as in \cite{FDMARL} (proof of Lemma 5.2), since a deterministic policy can here be considered as a special case of a stochastic policy. \end{proof}
\textbf{Step 1} and \textbf{Step 2} of the proof in \cite{FDMARL} (Section 5.1) hold with a deterministic policy.
\subsection{Proof of Theorem \ref{thm:on-policy-AC-actor-cv}}
Let $\mathcal{F}_{t,2} = \sigma (\theta_\tau, s_{\tau-1}, \tau \leq t)$ be the corresponding filtration. In addition, we define \begin{alignat*}{3} &H(\theta, s, \omega) = \nabla_{\theta} \mu_\theta(s) \cdot \nablaV{a}{Q_{\omega}(s, a)}{\mu_{\theta}(s)}\hspace{3em} &H(\theta, s) = H(\theta, s, \omega_\theta)\hspace{3em} &h(\theta) = \espL{H(\theta,s)}{s \sim d^\theta} \end{alignat*}
With projection, actor update (\ref{eq:algo1_actor_step}) becomes \begin{align}
\theta_{t+1} &= \Gamma\pab{\theta_t + \beta_{\theta, t} H(\theta_t, s_t, \omega_t)} \label{algo1_actor_update_proj}\\
&= \Gamma\pab{\theta_t + \beta_{\theta, t} h(\theta_t) + \beta_{\theta, t} \pa{h(\theta_t) - H(\theta_t, s_t)} + \beta_{\theta, t} \pa{H(\theta_t, s_t) - H(\theta_t, s_t, \omega_t)}} \nonumber\\
\begin{split}\nonumber
&= \Gamma\Big[\theta_t + \beta_{\theta, t} h(\theta_t) \\
&\hspace{3em}+ \beta_{\theta, t} \pa{h(\theta_t) - \esp{H(\theta_t,s_t)|\mc{F}_{t,2}}}\\
&\hspace{3em} + \beta_{\theta, t} \pa{\esp{H(\theta_t,s_t)|\mc{F}_{t,2}} - H(\theta_t,s_t)}\\
&\hspace{3em} + \beta_{\theta, t} \pa{H(\theta_t, s_t) - H(\theta_t, s_t, \omega_t)}\Big]
\end{split}\\
&= \Gamma\pab{\theta_t + \beta_{\theta, t} \pa{h(\theta_t) + A^1_t + A^{2}_t + A^3_t}} \label{eq:AAA} \end{align} where \begin{equation*}
A^1_t = h(\theta_t) - \esp{H(\theta_t,s_t)|\mc{F}_{t,2}}, \qquad A^2_t = \esp{H(\theta_t,s_t)|\mc{F}_{t,2}} - H(\theta_t,s_t), \qquad A^3_t = H(\theta_t, s_t) - H(\theta_t, s_t, \omega_t) \end{equation*}
Since $\{\nabla_a \phi_k(s,a)\}_{s, k, a}$ and $\{\nabla_\theta \mu_\theta(s)\}_s$ are uniformly bounded (Assumptions \ref{asm:Linear_appr} and \ref{asm:MDP_Regularity}), there exists $K^1 > 0$ such that, $\forall \theta \in \Theta, s \in \mc{S}$, $\omega, \omega^\prime \in \mathbb{R}^K$ \begin{equation}
\palV{H(\theta,s,\omega) - H(\theta,s,\omega^\prime)} \leq K^1 \palV{\omega-\omega^\prime} \end{equation} Thus, by the faster convergence of the critic, the term $A^3_t = H(\theta_t, s_t) - H(\theta_t, s_t, \omega_t)$ in (\ref{eq:AAA}) is $o(1)$ a.s.
Let $M_{t+1} = \sum_{\tau = 0}^{t} \beta_{\theta,\tau} A^2_{\tau}$. Then $\{M_t\}$ is a martingale sequence with respect to $\mathcal{F}_{t,2}$. Since $\{\omega_t\}_t$, $\{\nabla_a \phi_k(s,a)\}_{s, k}$, and $\{\nabla_\theta \mu_\theta(s)\}_s$ are bounded (Lemma \ref{lem:Boundedness_omega}, Assumptions \ref{asm:Linear_appr} and \ref{asm:MDP_Regularity}), the sequence $\paa{A^2_{t}}$ is bounded. Thus, by Assumption \ref{asm:step-sizes}, $\sum_t \esp{\palV{M_{t+1} - M_t}^2 | \mathcal{F}_{t,2}} = \sum_t \palV{\beta_{\theta,t} A^2_{t}}^2 < \infty$ a.s. The martingale convergence theorem ensures that $\paa{M_t}$ converges a.s. Thus, for any $\epsilon > 0$, \begin{equation*}
\underset{t}{\textnormal{lim }} \mathbb{P}\pa{\underset{n \geq t}{\textnormal{sup }} \palV{\sum_{\tau = t}^n \beta_{\theta,\tau} A_{\tau}^2} \geq \epsilon} = 0 \end{equation*}
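The mechanism behind this tail bound is that bounded martingale differences weighted by square-summable step sizes form an a.s.\ convergent (hence Cauchy) sequence. A small numeric sketch (all values hypothetical) illustrates how the late fluctuations of such weighted sums die out:

```python
import random

random.seed(1)
T = 200000
partial, sums = 0.0, []
for t in range(1, T + 1):
    beta = 1.0 / t ** 0.8               # square-summable step sizes
    A2 = random.choice([-1.0, 1.0])     # bounded martingale differences
    partial += beta * A2
    sums.append(partial)

# Cauchy behaviour: the spread over the last 1000 partial sums is tiny,
# bounded deterministically by the sum of the last 1000 step sizes.
tail_spread = max(sums[-1000:]) - min(sums[-1000:])
print(tail_spread)
```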
Finally, using Theorem 7--Corollary 8 (p.\ 74) and Theorem 9 (p.\ 75) of \cite{BorkarStochasticApprDynamicalSysViewpoint}, we have $\limit{t\to\infty} A^1_t = 0$ a.s., owing to the ``natural timescale'' averaging.
The proof can thus be concluded by applying the Kushner-Clark lemma (pp.\ 191--196 of \cite{Kushner_Clark}) to the actor recursion (\ref{algo1_actor_update_proj}).
\section{Off-policy Decentralized Multi-Agent Actor-Critic Algorithm}
We can propose an alternative off-policy actor-critic algorithm whose goal is to maximize $J_\pi(\mu_\theta) = \sum_{s\in\mc{S}} d^\pi(s)\bar{R}(s,\mu_\theta(s))$, where $\pi$ is the behavior policy and $\mu_\theta$ the target policy. To do so, the globally averaged reward function $\bar{R}(s, a)$ is approximated using a family of functions $\hat{\bar{R}}_\lambda: \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ parametrized by a column vector $\lambda \in \mathbb{R}^K$. Each agent $i$ maintains its own parameter $\lambda^i$ and uses $\hat{\bar{R}}_{\lambda^i}$ as its local estimate of $\bar{R}$. Based on the straightforward expression for the off-policy gradient \begin{equation}\label{eq:Off_Policy_Grad}
\nabla_{\theta^i} J_\pi(\mu_\theta) =\espL{\nabla_{\theta^i} \mu^i_{\theta^i}(s) \nabla_{a^i} \left. \bar{R}(s, \mu^{-i}_{\theta^{-i}}(s), a^i)\right|_{a^i = \mu^i_{\theta^i}(s)}}{s \sim d^\pi} \end{equation} the actor updates via: \begin{equation} \label{algo2_actor_step}
\theta^i_{t+1} = \theta^i_t + \beta_{\theta, t} \cdot \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \cdot \left. \nabla_{a^i} \hat{\bar{R}}_{\lambda^i_t} (s_t,\mu_{\theta_t^{-i}}^{-i}(s_t), a^i)\right|_{a^i = \mu_{\theta^i_t}(s_t)} \end{equation} which requires each agent $i$ to have access to $\mu_{\theta_t^j}^j(s_t)$ for all $j \in \mathcal{N}$, which is neither convenient nor realistic in practice. \bigbreak
The critic updates via: \begin{align}\label{alg:Off_Policy_AC_critic_step}
&\widetilde{\lambda}^i_t = \lambda_t^i + \beta_{\lambda, t} \cdot \delta_t^i \cdot \left. \nabla_\lambda \hat{\bar{R}}_{\lambda^i} (s_t,a_t)\right|_{\lambda = \lambda^i_t} \\
&\lambda_{t+1}^i = \sum_{j \in \mathcal{N}} c_t^{i j} \widetilde{\lambda}^j_t, \end{align} with \begin{equation}
\delta_t^i = r^i(s_t, a_t) - \hat{\bar{R}}_{\lambda^i_t}(s_t, a_t) \end{equation} In this case, $\delta_t^i$ is motivated by distributed optimization results rather than by a local TD-error (there is no ``temporal'' relationship for $R$): it is simply the difference between the sampled reward and the current estimate.
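As a sanity check (not part of the analysis), with linear function approximation the critic recursion without the consensus step reduces to least-mean-squares regression of the sampled reward on the features. A toy single-agent Python sketch, with hypothetical Gaussian features standing in for $w(s_t,a_t)$, shows $\lambda$ recovering the linear reward model:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3
lam_true = np.array([1.0, -2.0, 0.5])        # hypothetical linear reward model

lam = np.zeros(K)
for t in range(1, 50001):
    w = rng.normal(size=K)                   # feature vector w(s_t, a_t)
    r = w @ lam_true + 0.1 * rng.normal()    # sampled reward r^i(s_t, a_t)
    delta = r - w @ lam                      # delta_t^i = r - R_hat_lambda(s_t, a_t)
    lam += (0.5 / t ** 0.7) * delta * w      # critic step, N = 1, no consensus

print(lam)
```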
\begin{algorithm}[tb]
\caption{Networked deterministic off-policy actor-critic algorithm based on average-reward function}
\label{alg:Off_Policy_AC} \begin{algorithmic}
\STATE \textbf{Input:} Initial values $\lambda^i_0, \widetilde{\lambda}^i_0, \theta_0^i, \forall i \in \mathcal{N}$; initial state $s_0$ of the MDP; stepsizes $\{\beta_{\lambda, t}\}_{t \geq 0}, \{\beta_{\theta, t}\}_{t \geq 0}$
\STATE Draw $a^i_0 \sim \pi^i(s_0)$ , compute $\dot{a}^i_0 = \mu^i_{\theta^i_0}(s_0)$ \textcolor{red}{and $\widetilde{a}_0^i = \nabla_{\theta^i} \mu^i_{\theta^i_0}(s_0)$}
\STATE Observe joint action $a_0 = (a^1_0, \dots, a^N_0)$, $\dot{a}_0 = (\dot{a}^1_0, \dots, \dot{a}^N_0)$ \textcolor{red}{and $\widetilde{a}_0 = \pa{\widetilde{a}^1_0, \dots, \widetilde{a}^N_0}$} \REPEAT
\FOR{$i \in \mathcal{N}$}
\STATE Observe $s_{t+1}$ and reward $r^i_{t+1} = r^i(s_t, a_t)$
\ENDFOR
\FOR{$i \in \mathcal{N}$}
\STATE Update: $\delta_t^i \leftarrow r_{t+1}^i - \hat{\bar{R}}_{\lambda^i_t}(s_t, a_t)$
\STATE \textbf{Critic step: } $\widetilde{\lambda}^i_t \leftarrow \lambda_t^i + \beta_{\lambda, t} \cdot \delta_t^i \cdot \left. \nabla_\lambda \hat{\bar{R}}_{\lambda^i} (s_t,a_t)\right|_{\lambda = \lambda^i_t}$
\STATE \textbf{Actor step: } $\theta^i_{t+1} = \theta^i_t + \beta_{\theta, t} \cdot \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \cdot \left. \nabla_{a^i} \hat{\bar{R}}_{\lambda^i_t} (s_t,\mu_{\theta^{-i}_t}^{-i}(s_t), a^i)\right|_{a^i = \mu_{\theta^i_t}(s_t)}$
\STATE Send $\widetilde{\lambda}^i_t$ to the neighbors $\{j \in \mathcal{N} : (i,j) \in \mathcal{E}_t \}$ over $\mathcal{G}_t$
\ENDFOR
\FOR{$i \in \mathcal{N}$}
\STATE \textbf{Consensus step: } $\lambda^i_{t+1} \leftarrow \sum_{j \in \mathcal{N}} c^{i j}_t \cdot \widetilde{\lambda}^j_t$
\STATE Draw action $a_{t+1} \sim \pi(s_{t+1})$, compute $\dot{a}^i_{t+1} = \mu^i_{\theta^i_{t+1}}(s_{t+1})$ \textcolor{red}{and compute $\widetilde{a}_{t+1}^i = \nabla_{\theta^i} \mu^i_{\theta^i_{t+1}}(s_{t+1})$}
\ENDFOR
\STATE Observe joint action $a_{t+1} = (a^1_{t+1}, \dots, a^N_{t+1})$, $\dot{a}_{t+1} = (\dot{a}^1_{t+1}, \dots, \dot{a}^N_{t+1})$ \textcolor{red}{and $\widetilde{a}_{t+1} = \pa{\widetilde{a}^1_{t+1}, \dots, \widetilde{a}^N_{t+1}}$};\\
\UNTIL{end} \end{algorithmic} \end{algorithm}
As for the on-policy algorithm, we make the following assumption: \begin{asm} \label{Linear_appr_R}
For each agent $i$, the average-reward function $\bar{R}$ is parametrized by the class of linear functions, i.e., $\hat{\bar{R}}_{\lambda^i}(s,a) = w(s,a) \cdot \lambda^i$ where $w(s,a) = \big[\psi_1(s,a), \dots, \psi_K(s,a) \big] \in \mathbb{R}^K$ is the feature associated with the state-action pair $(s, a)$. The feature vectors $w(s,a)$, as well as $\nabla_a \psi_k (s,a)$, are uniformly bounded for any $s \in \mathcal{S}$, $a \in \mathcal{A}, k \in \{1, \dots, K\}$. Furthermore, we assume that the feature matrix $W_\pi \in \mathbb{R}^{|\mathcal{S}| \times K}$ has full column rank, where the $k$-th column of $W_\pi$ is $\big[\int_\mathcal{A}\pi(a|s)\psi_k(s, a)\textnormal{d}a, s \in \mathcal{S}\big]$ for any $k \in \llbracket1,K\rrbracket$. \end{asm}
Furthermore, we can define compatible features for average-reward function analogous to those defined for action-value function: $w_\theta(s,a) = (a - \mu_\theta(s)) \cdot \nabla_\theta \mu_\theta(s)^\top$. For $\lambda \in \mathbb{R}^m$ \begin{align*}
&\hat{\bar{R}}_{\lambda, \theta}(s,a) = (a - \mu_\theta(s)) \cdot \nabla_\theta \mu_\theta(s)^\top \cdot \lambda \\
&\nabla_a \hat{\bar{R}}_{\lambda, \theta} (s,a) = \nabla_\theta \mu_\theta(s)^\top \cdot \lambda \end{align*} and we have that, for $\lambda^* = \underset{\lambda}{\textnormal{argmin }} \mathbb{E}_{s \sim d^\pi}\big[\lVert \nabla_a \hat{\bar{R}}_{\lambda, \theta}(s,\mu_\theta(s)) - \nabla_a \bar{R}(s, \mu_\theta(s))\lVert^2 \big]$:
$$\nabla_\theta J_\pi(\theta) = \mathbb{E}_{s \sim d^\pi}\big[\nabla_\theta \mu_\theta (s) \cdot \left. \nabla_a \bar{R}(s, a)\right|_{a = \mu_\theta(s)} \big] = \mathbb{E}_{s \sim d^\pi}\big[\nabla_\theta \mu_\theta (s) \cdot \left. \nabla_a \hat{\bar{R}}_{\lambda^*, \theta}(s, a)\right|_{a = \mu_\theta(s)} \big]$$
\begin{rk} Use of compatible features requires each agent to observe not only the joint action taken $a_{t+1} = (a_{t+1}^1, \dots, a_{t+1}^N)$ and the ``on-policy action'' $\dot{a}_{t+1} = (\dot{a}_{t+1}^1, \dots, \dot{a}_{t+1}^N)$ but also $\widetilde{a}_{t+1} = (\nabla_{\theta^1} \mu^1_{\theta^1_t}(s_{t+1}), \dots, \nabla_{\theta^N} \mu^N_{\theta^N_t}(s_{t+1}))$. \end{rk}
\begin{asm} \label{asm_compatible_features_R}
For each agent $i$, the average-reward function $\bar{R}$ is parametrized by the class of linear functions, i.e., $\hat{\bar{R}}_{\lambda^i, \theta}(s,a) = w_\theta(s,a) \cdot \lambda^i$ where $w_\theta(s,a) = \big[w_{\theta,1}(s,a), \dots, w_{\theta,K}(s,a) \big] \in \mathbb{R}^K$ is the feature associated with the state-action pair $(s, a)$. The feature vectors $w_\theta(s,a)$, as well as $\nabla_a w_{\theta, k} (s,a)$, are uniformly bounded for any $s \in \mathcal{S}$, $a \in \mathcal{A}, k \in \pabb{1, K}$. Furthermore, we assume that the feature matrix $W_{\pi, \theta} \in \mathbb{R}^{|\mathcal{S}| \times K}$ has full column rank, where the $k$-th column of $W_{\pi, \theta}$ is $\big[\int_\mathcal{A}\pi(a|s)w_{\theta, k}(s, a)\textnormal{d}a, s \in \mc{S}\big]$ for any $k \in \llbracket1,K\rrbracket$. \end{asm}
\begin{asm} \label{asm_algo2_two_timesteps} The step-sizes $\beta_{\lambda, t}$, $\beta_{\theta, t}$ satisfy: \begin{alignat*}{2} &\sum_t \beta_{\lambda, t} = \sum_t \beta_{\theta,t} = \infty, \hspace{4em} &\sum_t \beta_{\lambda, t}^2 + \beta_{\theta, t}^2 < \infty\\ &\beta_{\theta, t} = o(\beta_{\lambda, t}), &\underset{t\to \infty}{\textnormal{lim}}\beta_{\lambda, t+1} / \beta_{\lambda, t} = 1 \end{alignat*} \end{asm}
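A standard (hypothetical, not prescribed by the paper) choice satisfying Assumption \ref{asm_algo2_two_timesteps} is $\beta_{\lambda,t} = t^{-0.6}$ and $\beta_{\theta,t} = t^{-0.9}$; the following sketch checks the conditions numerically:

```python
b_lam = lambda t: t ** -0.6   # critic (fast) step size
b_th = lambda t: t ** -0.9    # actor (slow) step size

t = 10 ** 6
ratio_timescales = b_th(t) / b_lam(t)        # -> 0, i.e. beta_theta = o(beta_lambda)
ratio_consecutive = b_lam(t + 1) / b_lam(t)  # -> 1

# partial sum of squares stays bounded (square-summability)
sq_sum = sum(b_lam(k) ** 2 + b_th(k) ** 2 for k in range(1, 100001))
print(ratio_timescales, ratio_consecutive, sq_sum)
```

Both sums $\sum_t \beta_{\lambda,t}$ and $\sum_t \beta_{\theta,t}$ diverge since the exponents are at most $1$, while the squared series converge since $1.2, 1.8 > 1$.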
\begin{thm} \label{thm:off-policy-AC-critic-cv} Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R}, \ref{asm:random_matrix}, \ref{asm_compatible_features_R} and \ref{asm_algo2_two_timesteps}, for any given behavior policy $\pi$ and any $\theta \in \Theta$, with $\{\lambda_t^i\}$ generated from (\ref{alg:Off_Policy_AC_critic_step}) we have $\textnormal{lim}_{t \to \infty} \lambda^i_t = \lambda_\theta$ a.s. for any $i \in \mathcal{N}$, where $\lambda_\theta$ is the unique solution to \begin{equation} \label{eq_algo2_consensus_cv}
B_{\pi, \theta}\cdot\lambda_\theta = A_{\pi,\theta} \cdot d_\pi^s \end{equation}
where we have $d_\pi^s = \big[d^\pi(s), s \in \mathcal{S}\big]^\top$, $A_{\pi,\theta} = \big[\int_\mathcal{A}\pi(a|s) \bar{R}(s,a) w(s,a)^\top \textnormal{d}a, s \in \mathcal{S}\big] \in \mathbb{R}^{K \times \vert \mathcal{S} \vert}$ and $B_{\pi, \theta} = \big[\sum_{s\in\mathcal{S}} d^\pi(s) \int_\mathcal{A}\pi(a|s) w_i(s,a) \cdot w(s,a)^\top \textnormal{d}a, 1 \leq i \leq K\big] \in \mathbb{R}^{K \times K}$. \end{thm}
From here on we let \begin{align*}
& \xi_{t, \theta}^i = \left. \nabla_{a_i} \hat{\Bar{R}}_{\lambda_\theta} (s_t, \mu_{\theta_t^{-i}}^{-i} (s_t), a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} = \left. \nabla_{a_i} w(s_t, \mu_{\theta_t^{-i}}^{-i} (s_t), a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} \lambda_\theta \\
& \xi_t^i = \left. \nabla_{a_i} \hat{\bar{R}}_{\lambda^i_t} (s_t, \mu_{\theta_t^{-i}}^{-i} (s_t), a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} = \left. \nabla_{a_i} w(s_t, \mu_{\theta_t^{-i}}^{-i} (s_t), a_i) \right|_{a_i = \mu^i_{\theta^i_t}(s_t)} \lambda_t^i \end{align*} and we keep \begin{align*}
&\psi_{t, \theta}^i = \nabla_{\theta^i} \mu_{\theta^i}^i(s_t) \\
&\psi_{t}^i = \psi_{t, \theta_t}^i = \nabla_{\theta^i} \mu_{\theta^i_t}^i(s_t) \end{align*}
We consider the following ODE associated with the actor step with projection (\ref{algo2_actor_update_proj}) \begin{equation} \label{eq:ODE-actor-projection-off}
\dot{\theta} = \hat{\Gamma}\pab{h(\theta)} \end{equation} where $h(\theta) = \espL{\psi_{t,\theta} \cdot \xi_{t, \theta}}{s_t \sim d^\pi}$. Finally, let \begin{equation}
\mc{K}_\text{off} = \paa{\theta \in \Theta| \hat{\Gamma}(h(\theta)) = 0} \end{equation} denote the set of all fixed points of (\ref{eq:ODE-actor-projection-off}). We can now present our convergence theorem for the joint policy sequence $\paa{\theta_t}$, similar to the on-policy one.
\begin{thm} \label{thm:off-policy-AC-actor-cv} Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R}, \ref{Bound_theta}, \ref{asm:random_matrix}, \ref{asm_compatible_features_R} and \ref{asm_algo2_two_timesteps}, the joint policy parameter $\theta_t$ obtained from (\ref{algo2_actor_step}) is such that,
\begin{equation}
\theta_t \to \mc{K}_\text{off} \textnormal{ as } t \to \infty \textnormal{ almost surely}
\end{equation}
\end{thm}
\subsection{Convergence proof}
\begin{proof}[Proof of Theorem \ref{thm:off-policy-AC-critic-cv}]
We use the two-time-scale technique: since the critic updates at a faster rate than the actor, we hold the policy parameter $\theta_t$ fixed at $\theta$ when analysing the convergence of the critic update. As this cannot be directly considered as a special case of \cite{FDMARL}, we present a complete proof for the critic step.
\begin{lem} \label{lem_bounded_lambda} Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R}, \ref{asm:random_matrix}, \ref{asm_compatible_features_R} and \ref{asm_algo2_two_timesteps}, for any $i \in \mathcal{N}$, sequence $\{\lambda_t^i\}$ generated from (\ref{alg:Off_Policy_AC_critic_step}) is bounded almost surely. \end{lem}
To prove this lemma we verify the conditions for \textbf{Theorem A.2} of \cite{FDMARL} to hold. We use $\{\mathcal{F}_{t,1}\}$ to denote the filtration with $\mathcal{F}_{t,1} = \sigma(s_\tau, C_{\tau-1}, a_{\tau-1}, r_\tau, \lambda_\tau, \tau \leq t)$. With $\lambda_t = \big[(\lambda^1_t)^\top, \dots, (\lambda^N_t)^\top\big]^\top$, critic step (\ref{alg:Off_Policy_AC_critic_step}) has the form: \begin{equation} \label{eq_algo2_rec_lambda}
\lambda_{t+1} = (C_t \otimes I)\left(\lambda_t + \beta_{\lambda, t}\cdot y_{t+1}\right) \end{equation} with $y_{t+1} = \left(\delta_t^1w(s_t,a_t)^\top, \dots, \delta_t^N w(s_t, a_t)^\top\right)^\top \in \mathbb{R}^{KN}$, where $\otimes$ denotes the Kronecker product and $I$ is the identity matrix. Using the same notation as in \textbf{Assumption A.1} from \cite{FDMARL}, we have: \begin{align*}
&h^i(\lambda_t^i,s_t) = \mathbb{E}_{a \sim \pi}\big[\delta_t^i w(s_t,a)^\top \vert \mathcal{F}_{t,1}\big] = \int_\mathcal{A}\pi(a|s_t)(R^i(s_t,a) - w(s_t,a) \cdot \lambda_t^i) w(s_t,a)^\top \textnormal{d}a\\
&M^i_{t+1} = \delta_t^i w(s_t,a_t)^\top - \mathbb{E}_{a \sim \pi}\big[\delta_t^i w(s_t,a)^\top \vert \mathcal{F}_{t,1}\big]\\
&\bar{h}^i(\lambda_t) = A_{\pi, \theta}^i \cdot d_\pi^s - B_{\pi, \theta} \cdot \lambda_t, \qquad \textnormal{where } A_{\pi,\theta}^i = \pab{\int_\mathcal{A}\pi(a|s) R^i(s,a) w(s,a)^\top \textnormal{d}a, s \in \mathcal{S}} \end{align*}
Since feature vectors are uniformly bounded for any $s \in \mathcal{S}$ and $a \in \mathcal{A}$, $h^i$ is Lipschitz continuous in its first argument. Since, for $i \in \mathcal{N}$, the $r^i$ are also uniformly bounded, $\mathbb{E}\big[\lVert M_{t+1}\lVert^2 | \mathcal{F}_{t,1}\big] \leq K \cdot (1 + \lVert\lambda_t\lVert^2)$ for some $K > 0$. Furthermore, finiteness of $|\mathcal{S}|$ ensures that, a.s., $\lVert \bar{h}(\lambda_t) - h(\lambda_t, s_t) \lVert^2 \leq K^\prime \cdot (1 + \lVert \lambda_t \lVert^2)$. Finally, $h_\infty(y)$ exists and has the form \begin{equation*}
h_\infty(y) = - B_{\pi, \theta} \cdot y \end{equation*} From Assumption \ref{asm_compatible_features_R}, we have that $-B_{\pi, \theta}$ is a Hurwitz matrix, thus the origin is a globally asymptotically stable equilibrium of the ODE $\dot{y} = h_\infty(y)$. Hence \textbf{Theorem A.2} of \cite{FDMARL} applies, which concludes the proof of Lemma \ref{lem_bounded_lambda}. \bigbreak
We introduce the following operators as in \cite{FDMARL}: \begin{itemize}
\item $\langle\cdot\rangle: \mathbb{R}^{KN} \to \mathbb{R}^K$
\begin{equation*}
\langle\lambda\rangle = \frac{1}{N}(\textbf{1}^\top \otimes I)\lambda = \frac{1}{N}\sum_{i \in \mathcal{N}} \lambda^i
\end{equation*}
\item $\mathcal{J} = \left(\frac{1}{N}\textbf{1}\textbf{1}^\top\otimes I\right): \mathbb{R}^{KN} \to \mathbb{R}^{KN}$ such that $\mathcal{J}\lambda = \textbf{1} \otimes \langle\lambda\rangle$
\item $\mathcal{J}_\bot = I - \mathcal{J}: \mathbb{R}^{KN} \to \mathbb{R}^{KN}$ and we note $\lambda_\bot = \mathcal{J}_\bot \lambda = \lambda - \textbf{1} \otimes \langle\lambda\rangle$ \end{itemize}
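These three operators are straightforward to realize with Kronecker products. The following NumPy sketch (hypothetical sizes $N=4$, $K=3$) verifies the identities $\mathcal{J}\lambda = \textbf{1} \otimes \langle\lambda\rangle$, $\mathcal{J} + \mathcal{J}_\bot = I$ and $\langle\lambda_\bot\rangle = 0$:

```python
import numpy as np

N, K = 4, 3
rng = np.random.default_rng(0)
lam = rng.normal(size=N * K)                  # stacked vector (lambda^1, ..., lambda^N)

one = np.ones((N, 1))
I_K = np.eye(K)
avg = (1.0 / N) * np.kron(one.T, I_K) @ lam   # <lambda> in R^K
J = (1.0 / N) * np.kron(one @ one.T, I_K)     # J lambda = 1 (x) <lambda>
J_perp = np.eye(N * K) - J
lam_perp = J_perp @ lam                       # disagreement vector lambda_perp

print(np.allclose(J @ lam, np.kron(np.ones(N), avg)))                # True
print(np.allclose((1.0 / N) * np.kron(one.T, I_K) @ lam_perp, 0.0))  # True
```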
We then proceed in two steps as in \cite{FDMARL}: first, we show a.s. convergence of the disagreement vector sequence $\{\lambda_{\bot, t}\}$ to zero; second, we show that the consensus vector sequence $\{\langle\lambda_{t}\rangle\}$ converges to the equilibrium, i.e., that $\langle\lambda_t\rangle$ solves (\ref{eq_algo2_consensus_cv}).
\begin{lem} Under Assumptions \ref{asm:MDP_Regularity}, \ref{asm:Reg_P_R}, \ref{asm:random_matrix}, \ref{asm_compatible_features_R} and \ref{asm_algo2_two_timesteps}, for any $M > 0$, we have \begin{equation*}
\underset{t}{\textnormal{sup }} \mathbb{E}\Big[\lVert\beta_{\lambda, t}^{-1}\lambda_{\bot,t}\lVert^2 \mathbbm{1}_{\{\sup_t\lVert\lambda_t\lVert\leq M\}}\Big] < \infty \end{equation*} \end{lem} Since the dynamics of $\{\lambda_t\}$ described by (\ref{eq_algo2_rec_lambda}) are similar to (5.2) in \cite{FDMARL}, we have \begin{equation} \label{eq_algo2_rec_lamb}
\mathbb{E}\Big[\lVert\beta_{\lambda, t+1}^{-1}\lambda_{\bot,t+1}\lVert^2|\mathcal{F}_{t,1}\Big] = \frac{\beta_{\lambda,t}^2}{\beta_{\lambda,t+1}^2} \rho \left(\lVert\beta_{\lambda, t}^{-1}\lambda_{\bot,t}\lVert^2 + 2 \cdot \lVert\beta_{\lambda, t}^{-1}\lambda_{\bot,t}\lVert \cdot \mathbb{E}(\lVert y_{t+1}\lVert^2|\mathcal{F}_{t,1})^{\frac{1}{2}} + \mathbb{E}(\lVert y_{t+1}\lVert^2|\mathcal{F}_{t,1})\right) \end{equation} where $\rho$ represents the spectral norm of $\mathbb{E}\big[C_t^\top \cdot (I - \textbf{1} \textbf{1}^\top / N) \cdot C_t \big]$, with $\rho \in \left[0, 1\right)$ by Assumption \ref{asm:random_matrix}. Since $y_{t+1}^i = \delta_t^i \cdot w(s_t,a_t)^\top$ we have \begin{align*}
\mathbb{E}\Big[\lVert y_{t+1} \lVert^2|\mathcal{F}_{t,1}\Big] &= \mathbb{E}\Big[\sum_{i \in \mathcal{N}} \lVert(r^i(s_t,a_t) - w(s_t,a_t)\lambda_t^i)\cdot w(s_t,a_t)^\top\lVert^2 \vert \mathcal{F}_{t,1}\Big]\\
&\leq 2 \cdot \mathbb{E}\Big[\sum_{i \in \mathcal{N}} \lVert r^i(s_t,a_t) w(s_t,a_t)^\top\lVert^2 + \lVert w(s_t,a_t)^\top\lVert^4 \cdot \lVert \lambda_t^i\lVert^2 \vert \mathcal{F}_{t,1}\Big]\\ \end{align*} By uniform boundedness of $r(s,\cdot)$ and $ w(s,\cdot)$ (Assumptions \ref{asm:Reg_P_R} and \ref{asm_compatible_features_R}) and finiteness of $\mathcal{S}$, there exists $K_1 > 0$ such that \begin{equation*}
\mathbb{E}\Big[\lVert y_{t+1} \lVert^2|\mathcal{F}_{t,1}\Big] \leq K_1(1 + \lVert \lambda_t \lVert^2) \end{equation*} Thus, for any $M > 0$ there exists $K_2 >0$ such that, on the set $\{\sup_{\tau\leq t} \lVert\lambda_\tau\lVert < M\}$, \begin{equation} \label{eq_y_bounded}
\mathbb{E}\Big[\lVert y_{t+1} \lVert^2 \mathbbm{1}_{\{\sup_{\tau\leq t} \lVert\lambda_\tau\lVert < M\}}|\mathcal{F}_{t,1}\Big] \leq K_2 \end{equation}
We let $v_t = \lVert\beta_{\lambda, t}^{-1}\lambda_{\bot,t}\lVert^2 \mathbbm{1}_{\{\sup_{\tau\leq t} \lVert\lambda_\tau\lVert < M\}}$. Taking expectation over (\ref{eq_algo2_rec_lamb}), noting that $\mathbbm{1}_{\{\sup_{\tau\leq t+1} \lVert\lambda_\tau\lVert < M\}} \leq \mathbbm{1}_{\{\sup_{\tau\leq t} \lVert\lambda_\tau\lVert < M\}}$ we get \begin{equation*}
\mathbb{E}(v_{t+1}) \leq \frac{\beta_{\lambda,t}^2}{\beta_{\lambda,t+1}^2} \rho \left(\mathbb{E}(v_t) + 2 \sqrt{\mathbb{E}(v_t)} \cdot \sqrt{K_2} + K_2\right) \end{equation*} which is the same expression as (5.10) in \cite{FDMARL}. Hence conclusions similar to those of \textbf{Step 1} of \cite{FDMARL} hold: \begin{align}
&\underset{t}{\textnormal{sup } } \mathbb{E}\Big[\lVert\beta_{\lambda, t}^{-1}\lambda_{\bot,t}\lVert^2 \mathbbm{1}_{\{\sup_t\lVert\lambda_t\lVert\leq M\}}\Big] < \infty \label{eq_sup_lamb}\\
\textnormal{and}\qquad& \underset{t}{\textnormal{lim }} \lambda_{\bot, t} = 0 \textnormal{ a.s.} \end{align}
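The contraction constant driving this step is $\rho$, the spectral norm of $\mathbb{E}\big[C_t^\top (I - \textbf{1}\textbf{1}^\top/N) C_t\big]$, which lies in $[0,1)$ for doubly stochastic consensus weights on a connected graph. A quick numeric check on a hypothetical $3$-agent weight matrix:

```python
import numpy as np

# hypothetical doubly stochastic weight matrix C_t for 3 agents
C = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
N = C.shape[0]
P = np.eye(N) - np.ones((N, N)) / N   # I - 1 1^T / N
rho = np.linalg.norm(C.T @ P @ C, 2)  # spectral norm (largest singular value)
print(rho)                            # 0.0625 for this C
```

Here $C$ has eigenvalue $1$ on $\textbf{1}$ and $0.25$ on its orthogonal complement, so $\rho = 0.25^2 = 0.0625 < 1$.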
We now show convergence of the consensus vector $\textbf{1} \otimes \langle \lambda_t \rangle$. Based on (\ref{eq_algo2_rec_lambda}) we have \begin{align*}
\langle \lambda_{t+1} \rangle &= \langle(C_t \otimes I)(\textbf{1} \otimes \langle \lambda_t \rangle + \lambda_{\bot, t} + \beta_{\lambda,t} y_{t+1})\rangle\\
&= \langle\lambda_t\rangle + \langle\lambda_{\bot,t}\rangle + \beta_{\lambda,t} \langle(C_t \otimes I)(y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t})\rangle\\
&= \langle\lambda_t\rangle + \beta_{\lambda,t} (h(\lambda_t, s_t) + M_{t+1}) \end{align*}
where $h(\lambda_t, s_t) = \mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle|\mathcal{F}_{t,1}\big]$ and $M_{t+1} = \langle(C_t \otimes I)(y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t})\rangle - \mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle|\mathcal{F}_{t,1}\big]$. Since $\langle\delta_t\rangle = \bar{r}(s_t,a_t) - w(s_t,a_t)\langle\lambda_t\rangle$, we have \begin{equation*}
h(\lambda_t,s_t) = \mathbb{E}_{a_t\sim\pi} (\bar{r}(s_t,a_t) w(s_t,a_t)^\top|\mathcal{F}_{t,1}) - \mathbb{E}_{a_t\sim\pi}( w(s_t,a_t)\langle\lambda_t\rangle\cdot w(s_t,a_t)^\top|\mathcal{F}_{t,1}) \end{equation*}
so $h$ is Lipschitz-continuous in its first argument. Moreover, since $\langle \lambda_{\bot,t} \rangle = 0$ and $\textbf{1}^\top \mathbb{E}(C_t|\mathcal{F}_{t,1}) = \textbf{1}^\top$ a.s.: \begin{align*}
\mathbb{E}_{a_t \sim \pi}\big[\langle(C_t \otimes I)(y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t})\rangle|\mathcal{F}_{t,1}\big] &= \mathbb{E}_{a_t \sim \pi}\Big[\frac{1}{N}(\textbf{1}^\top \otimes I)(C_t \otimes I)(y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t})|\mathcal{F}_{t,1}\Big]\\
&= \frac{1}{N}(\textbf{1}^\top \otimes I)(\mathbb{E}(C_t| \mathcal{F}_{t,1}) \otimes I)\mathbb{E}_{a_t \sim \pi}\big[ y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t}|\mathcal{F}_{t,1}\big]\\
&= \frac{1}{N}(\textbf{1}^\top \mathbb{E}(C_t|\mathcal{F}_{t,1})\otimes I)\mathbb{E}_{a_t \sim \pi}\big[ y_{t+1} + \beta_{\lambda,t}^{-1} \lambda_{\bot,t}|\mathcal{F}_{t,1}\big]\\
&=\mathbb{E}_{a_t \sim \pi}\big[\langle y_{t+1}\rangle|\mathcal{F}_{t,1}\big] \textnormal{ a.s.} \end{align*} So $\{M_t\}$ is a martingale difference sequence. Additionally we have \begin{equation*}
\mathbb{E}\big[\lVert M_{t+1}\lVert^2|\mathcal{F}_{t,1}\big] \leq 2 \cdot \mathbb{E}\big[\lVert y_{t+1} + \beta^{-1}_{\lambda,t} \lambda_{\bot,t}\lVert^2_{G_t}|\mathcal{F}_{t,1}\big] + 2 \cdot \lVert \mathbb{E}\big[\langle y_{t+1} \rangle | \mathcal{F}_{t,1} \big] \lVert^2 \end{equation*} with $G_t = N^{-2} \cdot C_t^\top \textbf{1} \textbf{1}^\top C_t \otimes I$ whose spectral norm is bounded for $C_t$ is stochastic. From (\ref{eq_y_bounded}) and (\ref{eq_sup_lamb}) we have that, for any $M>0$, over the set $\{\sup_t \lVert\lambda_t\lVert \leq M\}$, there exists $K_3, K_4 < \infty$ such that \begin{equation*}
\mathbb{E}\big[\lVert y_{t+1} + \beta^{-1}_{\lambda,t} \lambda_{\bot,t}\lVert^2_{G_t}|\mathcal{F}_{t,1}\big]\mathbbm{1}_{\{\sup_t \lVert\lambda_t\lVert \leq M\}} \leq K_3 \cdot \mathbb{E}\big[\lVert y_{t+1}\lVert^2 +
\lVert\beta^{-1}_{\lambda,t} \lambda_{\bot,t}\lVert^2|\mathcal{F}_{t,1}\big]\mathbbm{1}_{\{\sup_t \lVert\lambda_t\lVert \leq M\}} \leq K_4 \end{equation*}
Besides, since $r^i_{t+1}$ and $ w$ are uniformly bounded, there exists $K_5 < \infty$ such that $\lVert \mathbb{E}\big[\langle y_{t+1} \rangle | \mathcal{F}_{t,1} \big] \lVert^2 \leq K_5 \cdot (1 + \lVert\langle\lambda_t\rangle\lVert^2)$. Thus, for any $M > 0$, there exists some $K_6 < \infty$ such that over the set $\{\sup_t \lVert\lambda_t\lVert \leq M\}$ \begin{equation*}
\mathbb{E}\big[\lVert M_{t+1} \lVert^2 |\mathcal{F}_{t,1}\big] \leq K_6 \cdot (1 + \lVert \langle \lambda_t \rangle\lVert^2) \end{equation*} Hence, for any $M>0$, assumptions (a.1) - (a.5) of B.1. from \cite{FDMARL} are verified on the set $\{\sup_t \lVert\lambda_t\lVert \leq M\}$. Finally, we consider the ODE asymptotically followed by $\langle\lambda_t\rangle$: \begin{equation*}
\dot{\langle\lambda_t\rangle} = -B_{\pi,\theta} \cdot \langle\lambda_t\rangle + A_{\pi,\theta}\cdot d_\pi^s \end{equation*} which has a single globally asymptotically stable equilibrium $\lambda^* \in \mathbb{R}^K$, since $B_{\pi,\theta}$ is positive definite: $\lambda^* = B_{\pi,\theta}^{-1} \cdot A_{\pi,\theta} \cdot d_\pi^s$. By Lemma \ref{lem_bounded_lambda}, $\sup_t \lVert \langle \lambda_t \rangle \lVert < \infty$ a.s., so all conditions to apply \textbf{Theorem B.2.} of \cite{FDMARL} hold a.s., which means that $ \langle \lambda_t \rangle \underset{t\to\infty}{\longrightarrow} \lambda^*$ a.s. As $\lambda_t = \textbf{1} \otimes \langle\lambda_t\rangle + \lambda_{\bot,t}$ and $\lambda_{\bot,t}\underset{t\to\infty}{\longrightarrow}0 $ a.s., we have for each $i \in \mathcal{N}$, a.s., \begin{equation*}
\lambda^i_t \underset{t\to\infty}{\longrightarrow} B_{\pi,\theta}^{-1} \cdot A_{\pi,\theta} \cdot d^\pi \end{equation*} \end{proof}
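Outside the formal argument above, the behavior of the limiting ODE can be checked numerically. The sketch below is a minimal illustration, not the paper's algorithm: an arbitrary positive definite matrix `B` and arbitrary `A`, `d` stand in for $B_{\pi,\theta}$, $A_{\pi,\theta}$ and $d^\pi$, and explicit Euler integration from a random initial condition converges to the closed-form equilibrium $B^{-1} A d$.

```python
import numpy as np

# Hypothetical stand-ins for B_{pi,theta}, A_{pi,theta} and d^pi:
# only positive definiteness of B matters for global convergence.
rng = np.random.default_rng(0)
K = 4
M = rng.normal(size=(K, K))
B = M @ M.T + K * np.eye(K)          # symmetric positive definite
A = rng.normal(size=(K, K))
d = rng.normal(size=K)

lam_star = np.linalg.solve(B, A @ d)  # equilibrium lam* = B^{-1} A d

# Explicit Euler integration of d(lam)/dt = -B lam + A d.
lam = rng.normal(size=K)              # arbitrary initial condition
dt = 1e-3
for _ in range(200_000):
    lam = lam + dt * (-B @ lam + A @ d)

err = np.linalg.norm(lam - lam_star)  # should be tiny
```

Since $-B$ has eigenvalues bounded away from zero, the Euler iteration contracts toward `lam_star` for any sufficiently small step size.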
\begin{proof}[Proof of Theorem \ref{thm:off-policy-AC-actor-cv}] The same arguments as those used in our on-policy actor-step convergence proof still apply in the off-policy setting. The proof is even simpler: since the behavior policy is stationary, there is no need to consider a ``natural'' timescale averaging. \end{proof}
\end{document} |
\begin{document}
\newcommand{$\Box$\\}{$\Box$\\} \newcommand{\mathbf{C}}{\mathbf{C}} \newcommand{\mathbf{R}}{\mathbf{R}} \newcommand{\mathbf{Q}}{\mathbf{Q}} \newcommand{\mathbf{N}}{\mathbf{N}} \renewcommand{\mathbf{S}}{\mathbf{S}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{E}}{\mathbf{E}} \newcommand{\hbox{$\cal B$}}{\hbox{$\cal B$}} \setlength{\parindent}{0pt} \newcommand{\displaystyle}{\displaystyle} \newcommand{\saut}[1]{
\\[#1]} \newcommand{\displaystyle \frac}{\displaystyle \frac} \newcommand{\textrm{dist}}{\textrm{dist}} \newcommand{\mathrm{diam}}{\mathrm{diam}} \newcommand{\dim_{\mathcal{H}} }{\dim_{\mathcal{H}} } \newcommand{\textrm{Recov}}{\textrm{Recov}} \newcommand{\mathrm{Var}}{\mathrm{Var}} \newcommand{\mathrm{Gr}}{\mathrm{Gr}} \newcommand{\mathrm{Rg}}{\mathrm{Rg}} \newcommand{\textbf}{\textbf} \newcommand{\mathscr{B}}{\mathscr{B}} \newcommand{\mathbbm{B}}{\mathbbm{B}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\texttt{\large $\boldsymbol{\alpha}$}}{\texttt{\large $\boldsymbol{\alpha}$}}
\newcommand{\widetilde{\mathbb{\alpha}}}{\widetilde{\mathbb{\alpha}}}
\newcommand{\underline{\mathbb{\alpha}}}{\underline{\mathbb{\alpha}}}
\title[Local regularity and Hausdorff dimension of Gaussian fields]{From almost sure local regularity to almost sure Hausdorff dimension for Gaussian fields }
\author{Erick Herbin} \address{Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry, France} \email{[email protected]}
\author{Benjamin Arras} \email{[email protected]} \author{Geoffroy Barruel} \email{[email protected]}
\subjclass[2000]{60\,G\,15, 60\,G\,17, 60\,G\,10, 60\,G\,22, 60\,G\,60} \keywords{Gaussian processes, Hausdorff dimension, (multi)fractional Brownian motion, multiparameter processes, H\"older regularity, stationarity.}
\begin{abstract} Fine regularity of stochastic processes is usually measured in a local way by local H\"older exponents and in a global way by fractal dimensions. Following a previous work of Adler, we connect these two concepts for multiparameter Gaussian random fields. More precisely, we prove that almost surely the Hausdorff dimensions of the range and the graph in any ball $B(t_0,\rho)$ are bounded from above using the local H\"older exponent at $t_0$. We define the deterministic local sub-exponent of Gaussian processes, which allows us to obtain an almost sure lower bound for these dimensions. Moreover, the Hausdorff dimensions of the sample path on an open interval are controlled almost surely by the minimum of the local exponents.
Then, we apply these generic results to the cases of the multiparameter fractional Brownian motion, the multifractional Brownian motion whose regularity function $H$ is irregular and the generalized Weierstrass function, whose Hausdorff dimensions were unknown so far. \end{abstract}
\maketitle
\section{Introduction}
Since the 1970s, the regularity of stochastic processes has been considered in different ways. On the one hand, the local regularity of sample paths is usually measured by local moduli of continuity and H\"older exponents (e.g. \cite{dudley, 2ml, Orey.Pruitt(1973), Yadrenko}). On the other hand, the global regularity can be quantified by the global H\"older exponent (e.g. \cite{Xiao2009, Xiao2010}) or by fractal dimensions (Hausdorff dimension, box-counting dimension, packing dimension, \dots) and the respective measures of the graph of the processes (e.g. \cite{Berman(1972), Pruitt1969, Strassen(1964)}).
As an example, if $B^H=\{B^H_t;\;t\in\mathbf{R}_+\}$ is a real-valued fractional Brownian motion (fBm) with self-similarity index $H\in (0,1)$, the pointwise H\"older exponent at any point $t\in\mathbf{R}_+$ satisfies $\texttt{\large $\boldsymbol{\alpha}$}_{B^H}(t) = H$ almost surely. Moreover, the Hausdorff dimension of the graph of $B^H$ is given by $\dim_{\mathcal{H}} (\mathrm{Gr}_{B^H}) = 2-H$ almost surely. In this specific case, we observe a connection between the global and local points of view of regularity for fBm. Is it possible to obtain a general result of this kind for a larger class of processes?
In \cite{Adler77}, Adler showed that the Hausdorff dimension of the graph of an $\mathbf{R}^d$-valued Gaussian field $X=\{X^{(i)}_t;\;1\leq i\leq d,\; t\in\mathbf{R}^N_+\}$, made of i.i.d. Gaussian coordinate processes $X^{(i)}$ with stationary increments, can be deduced from the local behavior of its incremental variance. More precisely, when the quantities $\sigma^2(t)=\mathbf{E}[|X^{(i)}_{t+t_0}-X^{(i)}_{t_0}|^2]$, which are independent of $1\leq i\leq d$ and $t_0\in\mathbf{R}^N_+$, satisfy \begin{equation}\label{ineqAdler} \forall \epsilon>0,\quad
|t|^{\alpha+\epsilon} \leq \sigma(t) \leq |t|^{\alpha-\epsilon} \quad\textrm{as } t\rightarrow 0, \end{equation} the Hausdorff dimension of the graph $\mathrm{Gr}_X=\{(t,X_t):t\in\mathbf{R}^N_+\}$ of $X$ is proved to be \begin{align*}
\dim_{\mathcal{H}} (\mathrm{Gr}_X) &= \min\left\{ \frac{N}{\alpha}, N+d (1-\alpha) \right\}. \end{align*} This result followed Yoder's previous works in \cite{Yoder}, where the Hausdorff dimensions of the graph and also the range $\mathrm{Rg}_X=\{ X_t: t\in\mathbf{R}^N_+ \}$ were obtained for a multiparameter Brownian motion in $\mathbf{R}^d$. As an application of Adler's result, the Hausdorff dimension of the graph of fractional Brownian motion can be deduced from the local H\"older exponents of its sample paths. As an extension of this result, Xiao has completely determined in \cite{Xiao95} the Hausdorff dimensions of the image $X(K)$ and the graph $\mathrm{Gr}_X(K)$ of a Gaussian field $X$ as above, for a compact set $K\subset\mathbf{R}^N_+$, as a function of $\dim_{\mathcal{H}} K$.
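To make Adler's formula concrete, here is a small arithmetic sketch (ours, not from \cite{Adler77}) evaluating $\min\{N/\alpha,\, N+d(1-\alpha)\}$. For $N=d=1$ and $\alpha=H$ it recovers the classical value $2-H$ for the fBm graph, since $1/H-(2-H)=(1-H)^2/H\geq 0$ on $(0,1)$.

```python
def adler_graph_dim(N: int, d: int, alpha: float) -> float:
    """Adler's Hausdorff dimension of the graph of an (N, d) Gaussian
    field whose incremental variance has index alpha in (0, 1)."""
    return min(N / alpha, N + d * (1.0 - alpha))

# For real-valued fBm (N = d = 1, alpha = H) this gives 2 - H,
# because 1/H - (2 - H) = (1 - H)^2 / H >= 0 for H in (0, 1).
for H in (0.25, 0.5, 0.75):
    assert abs(adler_graph_dim(1, 1, H) - (2 - H)) < 1e-12
```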
In this paper, we aim at extending Adler's result to Gaussian random fields with non-stationary increments. We will see that this goal requires a localization of Adler's index $\alpha$ along the sample paths. There is a large literature about the local regularity of Gaussian processes. We refer to \cite{AT, davar, ledouxtalagrand, marcusrosen} for a contemporary and detailed review of it. This field of research is still very active, especially in the multiparameter context, and a non-exhaustive list of authors and recent works in this area includes Ayache \cite{AyLeVe, Ayache.Shieh.ea(2011)}, Mountford \cite{Baraka.Mountford.ea(2009)}, Dozzi \cite{dozzi07}, Khoshnevisan \cite{KX05}, Lawler \cite{lawler2011}, L\'evy V\'ehel \cite{2ml}, Lind \cite{lind08} and Xiao \cite{mwx, tudorxiao07, Xiao95, Xiao2009, Xiao2010}.
Usually the local regularity of an $\mathbf{R}^d$-valued stochastic process $X$ at $t_0\in\mathbf{R}^N_+$ is measured by the pointwise and local H\"older exponents $\texttt{\large $\boldsymbol{\alpha}$}_X(t_0)$ and $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ defined by \begin{align} \texttt{\large $\boldsymbol{\alpha}$}_X(t_0) &= \sup\left\{ \alpha>0: \limsup_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{\|X_t-X_s\|}{\rho^{\alpha}} < +\infty \right\},\nonumber\\ \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) &= \sup\left\{ \alpha>0: \lim_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{\|X_t-X_s\|}{\| t-s \|^{\alpha}} < +\infty \right\}.\label{eq:localHolder-exp} \end{align}
A general connection between the local structure of a stochastic process and the Hausdorff dimension of its graph has already been studied. In \cite{BCI03}, the specific case of the local self-similarity property was considered. Here, we show how the local H\"older regularity of a Gaussian random field allows us to estimate the Hausdorff dimensions of its range $\mathrm{Rg}_X$ and its graph $\mathrm{Gr}_X$.
Recently in \cite{2ml}, the quantities $\mathbf{E}[|X_t - X_s|^2]$ when $s,t$ are close to $t_0\in\mathbf{R}^N_+$ are proved to capture a great deal of information about the almost sure local regularity. More precisely, the almost sure $2$-microlocal frontier of $X$ at $t_0$ allows us to predict the evolution of the local regularity at $t_0$ under fractional integrations or derivations. Particularly, as special points of the $2$-microlocal frontier, both pointwise and local H\"older exponents can be derived from the study of $\mathbf{E}[|X_t - X_s|^2]$. For all $t_0\in\mathbf{R}_+^N$, we define in Section~\ref{sec:subexp} the exponents $\underline{\mathbb{\alpha}}_X(t_0)$ and $\widetilde{\mathbb{\alpha}}_X(t_0)$ of a real-valued Gaussian process $X$ as the minimum of $\underline{\mathbb{\alpha}}>0$ and maximum of $\widetilde{\mathbb{\alpha}}>0$ such that \begin{equation*} \forall s,t\in B(t_0,\rho_0),\quad
\| t-s \|^{2\, \underline{\mathbb{\alpha}}} \leq \mathbf{E}[|X_t-X_s|^2]
\leq \| t-s \|^{2\, \widetilde{\mathbb{\alpha}}}, \end{equation*} for some $\rho_0>0$. The exponents of the components $X^{(i)}$ of a Gaussian random field $X=(X^{(1)},\dots,X^{(d)})$ allow us to get almost sure lower and upper bounds for the quantities $$\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))\quad\textrm{and}\quad\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))).$$ After the statement of the main result in Section~\ref{sec:main}, the almost sure local Hausdorff dimensions are given uniformly in $t_0\in\mathbf{R}_+^N$ and the global dimensions $\dim_{\mathcal{H}} (\mathrm{Gr}_X(I))$ and $\dim_{\mathcal{H}} (\mathrm{Rg}_X(I))$ are almost surely bounded for any open interval $I\subset\mathbf{R}_+^N$, as functions of $\inf_{t\in I}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ and $\inf_{t\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$. Sections~\ref{sec:up} and \ref{sec:low} are devoted to the proofs of the upper and lower bounds of the Hausdorff dimensions, respectively.
In Section~\ref{sec:app}, the main result is applied to some stochastic processes whose increments are not stationary and whose Hausdorff dimension is still unknown.
The first one is the multiparameter fractional Brownian motion (MpfBm), derived from the set-indexed fractional Brownian motion introduced in \cite{sifBm, MpfBm}. In contrast to the fractional Brownian sheet studied in \cite{AyXiao, WuXiao}, the MpfBm does not satisfy the increment stationarity property. The study of the local regularity of its sample paths then allows us to determine the Hausdorff dimension of its graph in Section~\ref{sec:mpfbm}.
The second application is the multifractional Brownian motion (mBm), introduced in \cite{RPJLV,BJR} as an extension of the classical fractional Brownian motion where the self-similarity index $H\in (0,1)$ is substituted with a function $H:\mathbf{R}_+\rightarrow (0,1)$ in order to allow the local regularity to vary along the sample path. The immediate consequence is the loss of the increment stationarity property. Then, the knowledge of the local H\"older regularity yields the Hausdorff dimensions of the graph and the range of the mBm. In the case of a regular function $H$, the almost sure value of $\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))$ was already known to be $2-H(t_0)$ for any fixed $t_0\in\mathbf{R}_+$. In Section~\ref{sec:mbm}, this almost sure result is proved uniformly in $t_0$. The new case of an irregular function $H$ is also considered.
The last application of this article concerns the generalized Weierstrass function, defined as a stochastic Gaussian version of the well-known Weierstrass function, where the index varies along the trajectory. The local H\"older regularity is determined in Section~\ref{sec:GW} and, consequently, so is the Hausdorff dimension of its sample path.
\section{Hausdorff dimension of the sample paths of Gaussian random fields}
In this paper, we call {\em multiparameter Gaussian random field} in $\mathbf{R}^d$ a stochastic process $X=\{ X_t;\;t\in\mathbf{R}_+^N \}$, where $X_t = (X^{(1)}_t,\dots,X^{(d)}_t)\in\mathbf{R}^d$ for all $t\in\mathbf{R}^N_+$ and the coordinate processes $X^{(i)}=\{ X^{(i)}_t;\;t\in\mathbf{R}^N_+\}$ are independent real-valued Gaussian processes with the same law.
\subsection{A new local exponent}\label{sec:subexp}
According to \cite{2ml}, the local regularity of a Gaussian process $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ can be obtained by the {\em deterministic local H\"older exponent} \begin{equation}\label{DetLocalHolder} \widetilde{\mathbb{\alpha}}_X(t_0) = \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \sup_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}} <+\infty \right\}. \end{equation} More precisely, the local H\"older exponent of $X$ at any $t_0\in\mathbf{R}_+^N$ is proved to satisfy $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)=\widetilde{\mathbb{\alpha}}_X(t_0)$ a.s.
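As a quick numerical illustration (ours, not part of the paper), when the variogram is an exact power law, as for fractional Brownian motion with $\mathbf{E}[|X_t-X_s|^2]=|t-s|^{2H}$, the deterministic local H\"older exponent can be read off as half the log-log slope of $\mathbf{E}[|X_t-X_s|^2]$ against the lag. Here the variogram is evaluated in closed form; for real data one would substitute an empirical estimate.

```python
import numpy as np

# fBm with index H: E|X_t - X_s|^2 = |t - s|^{2H}, so the log-log
# slope of the variogram over small lags equals 2H.
H = 0.7
lags = np.logspace(-4, -1, 20)      # |t - s| inside a small ball
variogram = lags ** (2 * H)         # closed-form E|X_t - X_s|^2

slope, _ = np.polyfit(np.log(lags), np.log(variogram), 1)
H_est = slope / 2                   # recovers H up to float error
```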
In order to get a localized version of (\ref{ineqAdler}), we need to introduce a new exponent $\underline{\mathbb{\alpha}}_X(t_0)$, the {\em deterministic local sub-exponent} at any $t_0\in\mathbf{R}_+^N$, \begin{align}\label{DetUpLocalHolder} \underline{\mathbb{\alpha}}_X(t_0) &= \inf\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}} =+\infty \right\} \\ &= \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\alpha}} =0 \right\}. \nonumber \end{align}
As usual, this double definition relies on the equality \begin{align*}
\frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2 \alpha'}}
= \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2 \alpha}}
\times \| t-s \|^{2(\alpha-\alpha')}. \end{align*}
\
\begin{lemma}\label{lemcovinc} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian process.\\ Consider $\widetilde{\mathbb{\alpha}}_X(t_0)$ and $\underline{\mathbb{\alpha}}_X(t_0)$ the deterministic local H\"older exponent and local sub-exponent of $X$ at $t_0\in\mathbf{R}_+^N$ (as defined in (\ref{DetLocalHolder}) and (\ref{DetUpLocalHolder})).
For any $\epsilon>0$, there exists $\rho_0>0$ such that \begin{equation*} \forall s,t\in B(t_0,\rho_0),\quad
\| t-s \|^{2\, \underline{\mathbb{\alpha}}_X(t_0) +\epsilon} \leq \mathbf{E}[|X_t-X_s|^2]
\leq \| t-s \|^{2\, \widetilde{\mathbb{\alpha}}_X(t_0) -\epsilon}. \end{equation*} \end{lemma}
\
\begin{proof} For any $\epsilon >0$, the definition of $\widetilde{\mathbb{\alpha}}_X(t_0)$ leads to \begin{equation*}
\lim_{\rho\rightarrow 0} \sup_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon}} =0. \end{equation*} Then there exists $\rho_1>0$ such that \begin{equation*}
0<\rho\leq\rho_1 \Rightarrow \forall s,t\in B(t_0,\rho),\ \mathbf{E}[|X_t-X_s|^2] \leq \|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon} \end{equation*} and then \begin{equation*}
\forall s,t\in B(t_0,\rho_1),\quad \mathbf{E}[|X_t-X_s|^2] \leq \|t-s\|^{2\,\widetilde{\mathbb{\alpha}}_X(t_0) - \epsilon}. \end{equation*}
For the lower bound, we use the definition of the new exponent $\underline{\mathbb{\alpha}}_X(t_0)$ \begin{align*}
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{\mathbf{E}[|X_t-X_s|^2]}{\|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon}} =+\infty. \end{align*} Then, there exists $\rho_2>0$ such that \begin{equation*}
0<\rho\leq\rho_2 \Rightarrow \forall s,t\in B(t_0,\rho),\ \mathbf{E}[|X_t-X_s|^2] \geq \|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon} \end{equation*} and then \begin{equation*} \forall s,t\in B(t_0,\rho_2),\quad
\mathbf{E}[|X_t-X_s|^2] \geq \|t-s\|^{2\,\underline{\mathbb{\alpha}}_X(t_0) + \epsilon}. \end{equation*} The result follows setting $\rho_0=\rho_1\wedge\rho_2$. \end{proof}
\
From the previous result, we can derive an ordering relation between the deterministic local sub-exponent and the deterministic local H\"older exponent. We have \begin{equation}\label{ineqexp} \forall t_0\in\mathbf{R}^N_+,\quad \widetilde{\mathbb{\alpha}}_X(t_0) \leq \underline{\mathbb{\alpha}}_X(t_0). \end{equation}
\subsection{Main result: The Hausdorff dimension of Gaussian random fields}\label{sec:main}
For the sake of completeness, we recall the definition of the Hausdorff dimension.
For all $\delta>0$, we call $\delta$-covering of a non-empty subset $E$ of $\mathbf{R}^d$ any collection $A = (A_i)_{i\in\mathbf{N}}$ such that \begin{itemize}
\item $\forall i \in \mathbf{N}, \mathrm{diam}(A_i) < \delta$, where $\mathrm{diam}(A_i)$ denotes $\sup(\|x-y\|;\; x,y\in A_i)$; and \item $E \subseteq \bigcup_{i \in \mathbf{N}}A_i$. \end{itemize} We denote by $\Sigma_\delta(E)$ the set of $\delta$-coverings of $E$ and by $\Sigma(E)$ the set of all coverings of $E$. We define $$\mathcal{H}^s_{\delta}(E)=\inf_{A\in{\Sigma_\delta(E)}}\left\{\sum_{i=1}^{\infty}\mathrm{diam}(A_i)^s\right\},$$ and the Hausdorff measure of $E$ by \begin{align*} \mathcal{H}^s(E)=\lim_{\delta\rightarrow 0}\mathcal{H}^s_{\delta}(E) = \begin{cases} +\infty & \text{if } 0 \leq s < \dim_{\mathcal{H}} (E), \\ 0 & \text{if } s > \dim_{\mathcal{H}} (E). \end{cases} \end{align*} The quantity $\dim_{\mathcal{H}} (E)$ is the Hausdorff dimension of $E$. It is defined by $$\dim_{\mathcal{H}} (E)=\inf \left\{s \in \mathbf{R}_+: \mathcal{H}^s(E)=0\right\}=\sup\left\{s \in \mathbf{R}_+: \mathcal{H}^s(E)=+\infty\right\}.$$
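A minimal computational aside (not part of the paper): the Hausdorff dimension defined above is not directly computable, but for self-similar sets such as the middle-thirds Cantor set it coincides with the box-counting dimension, which can be estimated numerically. The sketch below uses exact integer ternary arithmetic so that box counts are free of floating-point boundary artefacts.

```python
import numpy as np

# Left endpoints of the level-n middle-thirds Cantor construction,
# encoded as integers m with ternary digits in {0, 2}; each point
# is x = m / 3^n.
n = 10
points = [0]
for _ in range(n):
    points = [3 * m for m in points] + [3 * m + 2 for m in points]

counts, scales = [], []
for k in range(1, n + 1):                 # boxes of side 3^{-k}
    boxes = {m // 3 ** (n - k) for m in points}
    counts.append(len(boxes))             # exactly 2^k occupied boxes
    scales.append(3 ** k)                 # 1 / (box side)

# Box-counting dimension = slope of log N(eps) vs log(1/eps);
# here it equals log 2 / log 3 ~ 0.6309, the Hausdorff dimension.
slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
```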
\
For any random field $X=\{X^{(i)}_t;\;1\leq i\leq d,\;t\in\mathbf{R}^N_+\}$ made of i.i.d. Gaussian coordinate processes with possibly non-stationary increments, the Hausdorff dimensions of the range $\mathrm{Rg}_X(B(t_0,\rho)) = \{ X_t;\; t\in B(t_0,\rho)\}$ and the graph $\mathrm{Gr}_X(B(t_0,\rho)) = \{ (t,X_t);\; t\in B(t_0,\rho)\}$ of $X$ in the ball $B(t_0,\rho)$ of center $t_0$ and radius $\rho>0$ can be estimated when $\rho$ goes to $0$, using the deterministic local H\"older exponent and the deterministic local sub-exponent of $X^{(i)}$ at $t_0$.
In the following statements and in the sequel of the paper, the deterministic local H\"older exponent $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ and the deterministic local sub-exponent $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ of $X^{(i)}$ at any $t_0\in\mathbf{R}^N_+$ are independent of $1\leq i\leq d$, since the components $X^{(i)}$ are assumed to be i.i.d.
\begin{theorem}[Pointwise almost sure result]\label{thmain} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multi-parameter Gaussian random field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local sub-exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$ as defined in (\ref{DetLocalHolder}) and (\ref{DetUpLocalHolder}), independent of $1\leq i\leq d$. Assume that $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)>0$.
Then, the Hausdorff dimensions of the graph and the range of $X$ satisfy almost surely, \begin{align*} \left. \begin{array}{l r} \textrm{if } N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0), & N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) \\ \textrm{if } N> d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0), & N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \end{array} \right\} \leq &\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{\frac{N}{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)} ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\} \end{align*} and \begin{align*} \left. \begin{array}{l r} \textrm{if } N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0), & N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) \\ \textrm{if } N> d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0), & d \end{array} \right\} \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \min\left\{\frac{N}{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)} ; d\right\}. \end{align*} \end{theorem}
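As a sanity check on the pointwise statement above, here is a hypothetical helper (the function name and interface are ours, not the paper's) that evaluates the lower and upper bounds on $\lim_{\rho\to 0}\dim_{\mathcal{H}}(\mathrm{Gr}_X(B(t_0,\rho)))$ from the two deterministic exponents; when the exponents coincide the bounds pinch to a single value.

```python
def graph_dim_bounds(N, d, sub_exp, holder_exp):
    """Lower/upper bounds for the local Hausdorff dimension of the
    graph, per the pointwise a.s. theorem: sub_exp is the deterministic
    local sub-exponent, holder_exp <= sub_exp the deterministic local
    Hoelder exponent (both in (0, 1))."""
    if N <= d * sub_exp:
        lower = N / sub_exp
    else:
        lower = N + d * (1 - sub_exp)
    upper = min(N / holder_exp, N + d * (1 - holder_exp))
    return lower, upper

# When both exponents equal H (e.g. fBm with N = d = 1, H = 0.5),
# the bounds coincide at 2 - H = 1.5.
lo, up = graph_dim_bounds(1, 1, 0.5, 0.5)
```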
The proof of Theorem \ref{thmain} relies on Propositions \ref{propmajdimH} and \ref{propmindimH}.
\
\begin{theorem}[Uniform almost sure result]\label{thmainunif} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multi-parameter Gaussian random field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
Set $\mathcal{A} = \{ t\in\mathbf{R}_+^N: \liminf_{u\rightarrow t}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}$.
Then, with probability one, for all $t_0\in\mathcal{A}$, \begin{itemize} \item if $N\leq d\ \liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)$ then \begin{align*} \frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)} \leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\} \end{align*} and \begin{align*} \frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)} \leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; d\right\}. \end{align*}
\item if $N> d\ \liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)$ then \begin{align*} N + d(1-\liminf_{u\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)) \leq \lim_{\rho\rightarrow 0}&\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{\frac{N}{\displaystyle\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)} ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\} \end{align*} and \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = d. \end{align*}
\end{itemize} \end{theorem}
The proof of Theorem \ref{thmainunif} relies on Proposition \ref{propmajdimH} and Corollary \ref{cormindimHunif2}.
\begin{theorem}[Global almost sure result]\label{thmaincompact} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ be the deterministic local H\"older exponent and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
For any open interval $I \subset\mathbf{R}^N_+$, assume that the quantities $\underline{\mathbb{\alpha}} = \inf_{t\in I}\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ and $\widetilde{\mathbb{\alpha}} = \inf_{t\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t)$ satisfy $0 < \widetilde{\mathbb{\alpha}} \leq \underline{\mathbb{\alpha}}$. Then, with probability one, \begin{align*} \left. \begin{array}{l r} \textrm{if } N\leq d\ \underline{\mathbb{\alpha}}, & N/\underline{\mathbb{\alpha}} \\ \textrm{if } N> d\ \underline{\mathbb{\alpha}}, & N + d(1-\underline{\mathbb{\alpha}}) \end{array} \right\} \leq \dim_{\mathcal{H}} (\mathrm{Gr}_X(I)) \leq \min\left\{N/\widetilde{\mathbb{\alpha}} ; N + d(1-\widetilde{\mathbb{\alpha}}) \right\} \end{align*} and \begin{align*} \left. \begin{array}{l r} \textrm{if } N\leq d\ \underline{\mathbb{\alpha}}, &N/\underline{\mathbb{\alpha}} \\ \textrm{if } N> d\ \underline{\mathbb{\alpha}}, & d \end{array} \right\} \leq \dim_{\mathcal{H}} (\mathrm{Rg}_X(I)) \leq \min\left\{N/\widetilde{\mathbb{\alpha}} ; d\right\}. \end{align*}
\end{theorem}
The proof of Theorem \ref{thmaincompact} relies on Corollary \ref{cormajdimHunif1} and Corollary \ref{cormindimHunif1}.
\subsection{Upper bound for the Hausdorff dimension}\label{sec:up}
\begin{lemma}\label{lemdimHmaj} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter random process with values in $\mathbf{R}^d$. Let $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ be the local H\"older exponent of $X$ at $t_0\in\mathbf{R}_+^N$.
For any $\omega$ such that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)>0$, \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0))\right\}. \end{align*} \end{lemma}
\begin{proof} The first inequality follows from the fact that the range $\mathrm{Rg}_X(B(t_0,\rho))$ is a projection of the graph $\mathrm{Gr}_X(B(t_0,\rho))$. For the second inequality, we need to localize the argument of Yoder (\cite{Yoder}), who proved the upper bound for the Hausdorff dimensions of the range and the graph of a H\"olderian function from $\mathbf{R}^N$ (or $[0,1]^N$) to $\mathbf{R}^d$ (see also \cite{falconer}, Corollary~11.2 p. 161).
Assume that $\omega$ is fixed such that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)>0$. By definition of $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$, for all $\epsilon>0$ there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, \begin{align*} \forall s,t\in B(t_0,\rho),\quad
\|X_t(\omega)-X_s(\omega)\| \leq \|t-s\|^{\widetilde{\boldsymbol{\alpha}}_X(t_0,\omega)-\epsilon}. \end{align*} There exists a real $0<\delta_0<1$ such that for all $u\in [0,1]^N$, $t_0 + \delta_0.u\in B(t_0,\rho_0)$ and consequently, \begin{align*} \forall u,v\in [0,1]^N,\quad
\|X_{t_0+\delta_0.u}(\omega)-X_{t_0+\delta_0.v}(\omega)\| \leq (\delta_0\ \|u-v\|)^{\widetilde{\boldsymbol{\alpha}}_X(t_0,\omega)-\epsilon}. \end{align*}
Then, the function $Y_{\bullet}(\omega):u\mapsto Y_u(\omega)=X_{t_0+\delta_0.u}(\omega)$ is H\"older-continuous of order $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon$ on $[0,1]^N$ and therefore, according to \cite{Yoder}, \begin{align*} \dim_{\mathcal{H}} (\mathrm{Rg}_{Y_{\bullet}(\omega)}([0,1]^N)) \leq \dim_{\mathcal{H}} &(\mathrm{Gr}_{Y_{\bullet}(\omega)}([0,1]^N)) \\ &\leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)+\epsilon)\right\}. \end{align*} We can observe that the graph $\mathrm{Gr}_{X_{\bullet}(\omega)}(t_0+\delta_0.[0,1]^N)$ is an affine transformation of the graph $\mathrm{Gr}_{Y_{\bullet}(\omega)}([0,1]^N)$, therefore their Hausdorff dimensions are equal. Moreover, there exists $\rho>0$ such that $B(t_0,\rho) \subset t_0+\delta_0.[0,1]^N$. By monotonicity of the function $\rho\mapsto\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))$, we can write \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))) \leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)-\epsilon} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)+\epsilon)\right\}. \end{align*} Since this inequality holds for all $\epsilon>0$, we get \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))) \leq \min\left\{ \frac{N}{\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega)} ; N + d(1-\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0,\omega))\right\}. \end{align*} \end{proof}
Lemma \ref{lemdimHmaj} gives a random upper bound for the Hausdorff dimensions of the (localized) range and graph of the sample path, as a function of its local H\"older exponent. When $X$ is a multiparameter Gaussian field in $\mathbf{R}^d$, we prove that this upper bound can be expressed almost surely with the deterministic local H\"older exponent of the Gaussian component processes $X^{(i)}$.
\begin{proposition}\label{propmajdimH} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$. Let $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ be the deterministic local H\"older exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$ and assume that $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)>0$.
Then, almost surely \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{N/\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\}. \end{align*}
Moreover, a uniform result can be stated on the set $$\mathcal{A}=\{t_0\in\mathbf{R}^N_+: \liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}.$$ With probability one, for all $t_0\in \mathcal{A}$, \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq &\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \\ &\quad\leq \min\left\{N/\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u) ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\}. \end{align*}
\end{proposition}
\begin{proof} In \cite{2ml}, the local H\"older exponent of any Gaussian process $Y$ at $t_0\in\mathbf{R}^N_+$ such that $\widetilde{\mathbb{\alpha}}_Y(t_0)>0$ is proved to satisfy $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_Y(t_0) = \widetilde{\mathbb{\alpha}}_Y(t_0)$ almost surely. Therefore, by definition of $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_{X^{(i)}}(t_0)$, for all $\epsilon>0$ there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, we have almost surely \begin{align*} \forall s,t\in B(t_0,\rho),\quad
|X^{(i)}_t-X^{(i)}_s| \leq \|t-s\|^{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon}, \end{align*} and consequently, almost surely \begin{align}\label{eqmajGaussField} \forall s,t\in B(t_0,\rho),\quad
\|X_t-X_s\| \leq K\ \|t-s\|^{\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon}, \end{align} for some constant $K>0$.
From (\ref{eqmajGaussField}), we deduce that $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)\geq \widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ almost surely. Then Lemma \ref{lemdimHmaj} implies almost surely \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} &(\mathrm{Gr}_X(B(t_0,\rho))) \\ &\leq \min\left\{N/\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; N + d(1-\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0))\right\}. \end{align*}
For the uniform result on $t_0\in\mathbf{R}^N_+$, we use Theorem 3.14 of \cite{2ml}, which states that if $Y$ is a Gaussian process such that the function $t_0\mapsto\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u)$ is positive, then with probability one, \begin{align*} \forall t_0\in\mathbf{R}^N_+,\quad \liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u) \leq \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_Y(t_0) \leq \limsup_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_Y(u). \end{align*} This inequality yields the existence, for all $1\leq i\leq d$, of $\Omega_i\in\mathcal{F}$ with $\mathbf{P}(\Omega_i)=1$ such that: \\ For all $\omega\in\Omega_i$, all $t_0\in \mathcal{A}$ and all $\epsilon>0$, there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, \begin{align*} \forall s,t\in B(t_0,\rho),\quad
|X^{(i)}_t(\omega)-X^{(i)}_s(\omega)| \leq \|t-s\|^{\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)-\epsilon}. \end{align*} This yields: For all $\omega\in\bigcap_{1\leq i\leq d}\Omega_i$, all $t_0\in \mathcal{A}$ and all $\epsilon>0$, there exists $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, \begin{align*} \forall s,t\in B(t_0,\rho),\quad
\|X_t(\omega)-X_s(\omega)\| \leq K\ \|t-s\|^{\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u)-\epsilon}, \end{align*} for some constant $K>0$.
With the argument of Lemma \ref{lemdimHmaj}, we deduce \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho),\omega)) &\leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho),\omega)) \\ &\quad\leq \min\left\{N/\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u) ; N + d(1-\liminf_{u\rightarrow t_0}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(u))\right\}, \end{align*} which is the result stated. \end{proof}
\begin{corollary}\label{cormajdimHunif1} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local H\"older exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$.\\ Assume that for some bounded interval $I\subset\mathbf{R}^N_+$, we have $\alpha = \inf_{t_0\in I}\widetilde{\mathbb{\alpha}}_{X^{(i)}}(t_0) >0$. Then, with probability one, $$\dim_{\mathcal{H}} (\mathrm{Rg}_X(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_X(I))\leq \min\left\{N/\alpha ; N + d(1-\alpha)\right\}.$$
\end{corollary}
\begin{proof} With the same arguments as in the proof of Proposition \ref{propmajdimH}, we can claim that, with probability one, $\forall t_0\in I,\ \alpha\leq\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0).$ Then, there exists $\Omega_0\in\mathcal{F}$ with $\mathbf{P}(\Omega_0)=1$ and: For all $\omega\in\Omega_0$, all $t_0\in I$ and all $\epsilon>0$, there exist $\rho_0>0$ and $K>0$ such that $\forall \rho\in (0,\rho_0]$, \begin{align*} \forall s,t\in B(t_0,\rho),\quad
\| X_t(\omega)-X_s(\omega) \| \leq K\ \| t-s \|^{\alpha-\epsilon}. \end{align*}
Then the continuity of $t\mapsto X_t(\omega)$ on the bounded interval $I$ allows us to deduce that, for all $\omega\in\Omega_0$ and all $\epsilon>0$, there exists a constant $K'>0$ such that \begin{align}\label{eq:maj-holder-global} \forall s,t\in I,\quad
\| X_t(\omega)-X_s(\omega) \| \leq K'\ \| t-s \|^{\alpha-\epsilon}. \end{align}
If the interval $I$ is compact, we can exhibit an affine one-to-one mapping $I\rightarrow [0,1]^N$ and conclude with the arguments of Lemma \ref{lemdimHmaj} that \cite{Yoder} implies \begin{align*} \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \leq \min\left\{ \frac{N}{\alpha-\epsilon} ; N + d(1-\alpha+\epsilon)\right\} \qquad\textrm{a.s.} \end{align*} Since this inequality holds for any $\epsilon>0$, the result follows in that case.
If $I$ is not closed, we remark that $$\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(\overline{I}))\quad\textrm{and}\quad \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(\overline{I})).$$ Then, extending the inequality (\ref{eq:maj-holder-global}) to $\overline{I}$ by continuity, the result for the compact interval $\overline{I}$ is proved as previously. \end{proof}
\
\subsection{Lower bound for the Hausdorff dimension}\label{sec:low}
Frostman's Theorem constitutes the key argument to prove the lower bound for the Hausdorff dimensions. We recall the basic notions of potential theory used throughout the proofs of this section. For any Borel set $E\subseteq\mathbf{R}^d$, the $\beta$-dimensional energy of a probability measure $\mu$ on $E$ is defined by \begin{equation*}
I_{\beta}(\mu) = \int_{E\times E} \|x-y\|^{-\beta}\ \mu(dx)\ \mu(dy). \end{equation*} Then, the $\beta$-dimensional Bessel-Riesz capacity of $E$ is defined as $$ C_{\beta}(E) = \sup\left( \frac{1}{I_{\beta}(\mu)};\; \mu\textrm{ probability measure on }E \right). $$ According to Frostman's Theorem, the Hausdorff dimension of $E$ is obtained from the capacity of $E$ by the expression \begin{align*} \dim_{\mathcal{H}} E = \sup\left(\beta: C_{\beta}(E)>0\right) =\inf\left(\beta: C_{\beta}(E)=0\right). \end{align*}
Consequently, if $I_{\beta}(\mu) < +\infty$ for some probability measure (or some mass distribution) $\mu$ on $E$, then $\dim_{\mathcal{H}} E \geq \beta$.
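As a rough numerical illustration of this criterion (a sketch of ours, not part of the proof; the helpers \texttt{discrete\_energy} and \texttt{segment} are names introduced here): for the uniform measure on an $n$-point discretization of a unit segment in $\mathbf{R}^2$, which has Hausdorff dimension $1$, the discrete $\beta$-energy stays bounded under refinement when $\beta<1$ and blows up when $\beta>1$, consistently with $\dim_{\mathcal{H}} E = \sup\left(\beta: C_{\beta}(E)>0\right)$.

```python
import itertools

def discrete_energy(points, beta):
    # Discrete analogue of I_beta(mu) for the uniform probability
    # measure on `points`, omitting the diagonal terms.
    n = len(points)
    total = 0.0
    for p, q in itertools.combinations(points, 2):
        dist = sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        total += 2.0 * dist ** (-beta)
    return total / n ** 2

def segment(n):
    # n equally spaced points on the unit segment [0,1] x {0} in R^2.
    return [(k / n, 0.0) for k in range(n)]

for beta in (0.5, 1.5):
    coarse = discrete_energy(segment(100), beta)
    fine = discrete_energy(segment(400), beta)
    print(beta, coarse, fine)
```

For $\beta=0.5<1$ the two values are close (the energy converges to the finite integral $\int_0^1\int_0^1 |x-y|^{-1/2}\,dx\,dy$), while for $\beta=1.5>1$ the energy grows like $n^{1/2}$ under refinement, so no measure on the segment has finite $3/2$-energy.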
\begin{proposition}\label{propmindimH} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ the deterministic local sub-exponent of $X^{(i)}$ at $t_0\in\mathbf{R}_+^N$.
Then, almost surely \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq \left\{ \begin{array}{l l} N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) & \textrm{if }N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; \\ N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) & \textrm{if }N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; \end{array} \right. \end{align*} and \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq \left\{ \begin{array}{l l} N/\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) & \textrm{if }N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) ; \\ d & \textrm{if }N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0). \end{array} \right. \end{align*}
\end{proposition}
\begin{proof} Following Adler's proof of the lower bound in the case of processes with stationary increments, we distinguish two cases: $N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$ and $N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$.
\begin{itemize} \item Assume that $N\leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. In that case, we prove that almost surely, \begin{align}\label{eqmindimH1} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)}. \end{align} For any $\epsilon>0$, we consider any $\beta<N/(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon) \leq d$ and we aim to show that the $\beta$-dimensional capacity $C_\beta(\mathrm{Rg}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$.
\noindent To this end, for $E=\mathrm{Rg}_X(B(t_0,\rho))=X(B(t_0,\rho))$, we consider the $\beta$-dimensional energy $I_{\beta}(\mu)$ of the mass distribution $\mu = \lambda|_{B(t_0,\rho)} \circ X^{-1}$ on $E$, where $\lambda|_{B(t_0,\rho)}$ denotes the restriction of the Lebesgue measure to $B(t_0,\rho)$. As mentioned above (see also Theorem B in \cite{Taylor}), a sufficient condition for the capacity to be positive is that, almost surely \begin{align}\label{eqcapa}
\int_{E\times E}\|x-y\|^{-\beta}\ \mu(dx)\ \mu(dy) =
\int_{B(t_0,\rho)\times B(t_0,\rho)}\|X_t-X_s\|^{-\beta}\ ds\ dt < +\infty. \end{align} Since the $X^{(i)}$ are independent and have the same distribution, we compute for all $s,t\in\mathbf{R}^N_+$, \begin{align*}
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right]=\frac{1}{[2\pi \sigma^2(s,t)]^{d/2}}
\int_{\mathbf{R}^d} \|x\|^{-\beta} \exp\left(-\frac{\|x\|^2}{2\ \sigma^2(s,t)}\right)\ dx, \end{align*}
where $\sigma^2(s,t) = \mathbf{E}[| X^{(i)}_t-X^{(i)}_s |^2]$ is independent of $1\leq i\leq d$.\\ Let us consider the change of variables $(\mathbf{R}_+\setminus\{0\})\times\mathbf{S}^{d-1}\rightarrow\mathbf{R}^d\setminus\{0\}$ defined by $(r,u)\mapsto r.u$, where $\mathbf{S}^{d-1}$ denotes the unit hypersphere of $\mathbf{R}^d$. The previous expression becomes \begin{align*}
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right] &= \frac{K_1}{[2\pi \sigma^2(s,t)]^{d/2}} \int_{\mathbf{R}_+} r^{d-1-\beta} \exp\left(-\frac{r^2}{2\ \sigma^2(s,t)}\right)\ dr \\ &= K_1\ (\sigma(s,t))^{-\beta}\int_{\mathbf{R}_+} z^{d-1-\beta}\exp\left(-\frac{1}{2}z^2\right)\ dz, \end{align*} where $K_1$ denotes a positive constant, possibly changing from line to line, and the second equality follows from the change of variables $r=\sigma(s,t)\ z$.\\ Since the integral is finite when $\beta<d$, we get \begin{equation}\label{eqCovInc_beta} \forall s,t\in\mathbf{R}_+^N,\quad
\mathbf{E}\left[\|X_t-X_s\|^{-\beta}\right]\leq K_2\ (\sigma(s,t))^{-\beta}, \end{equation} for some positive constant $K_2$.\\ By Tonelli's theorem and Lemma \ref{lemcovinc}, this inequality implies the existence of $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, \begin{align*}
\mathbf{E}\left[\int_{B(t_0,\rho)\times B(t_0,\rho)}\|X_t-X_s\|^{-\beta}\ dt\ ds\right]& \\
\leq \int_{B(t_0,\rho)\times B(t_0,\rho)}K_2\ &\|t-s\|^{-\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)}\ dt\ ds < +\infty \end{align*} because $\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)< N$. Thus (\ref{eqcapa}) holds and for all $\rho\in (0,\rho_0]$, \begin{align*} \dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon} \qquad\textrm{a.s.} \end{align*} Taking $\rho,\epsilon\in\mathbf{Q}_+$, this yields \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq \frac{N}{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)} \qquad\textrm{a.s.}, \end{align*} which proves (\ref{eqmindimH1}).
\
\item Assume $N>d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. We use the previous method to prove that almost surely \begin{equation}\label{eqmindimHrg2} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d. \end{equation} For any $\epsilon>0$ such that $d<N/(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon)$, consider any real $\beta$ such that $\beta<d$. As previously, we show that equation (\ref{eqcapa}) holds, which implies that the $\beta$-dimensional capacity $C_\beta(\mathrm{Rg}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$.
\noindent Since $\beta<d$, equation (\ref{eqCovInc_beta}) still holds. As in the previous case, the inequality $\beta(\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) + \epsilon)< N$ implies (\ref{eqcapa}) for $\rho$ small enough and then \begin{align*} \dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq d \qquad\textrm{a.s.} \end{align*} Taking $\rho\in\mathbf{Q}_+$, the inequality (\ref{eqmindimHrg2}) follows.
\
\item Assume $N>d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. To prove the lower bound for the Hausdorff dimension of the graph, \begin{equation}\label{eqmindimH2} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho)))\geq N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \qquad\textrm{a.s.}, \end{equation} we use the same potential-theoretic arguments as for the range.
\noindent For any $\epsilon>0$, consider any real $\beta$ such that $d<\beta<N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)$. In order to prove that the $\beta$-dimensional capacity $C_\beta(\mathrm{Gr}_X(B(t_0,\rho)))$ is positive almost surely for all $\rho>0$, it is sufficient to show that \begin{equation}\label{eqcapa2}
\int_{B(t_0,\rho)\times B(t_0,\rho)} \|(t,X_t)-(s,X_s)\|^{-\beta}\ ds\ dt<+\infty \qquad\textrm{a.s.} \end{equation}
\noindent Since the components $X^{(i)}$ ($1\leq i\leq d$) of $X$ are i.i.d., we compute \begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2+\|t-s\|^2)^{-\beta/2}\right] \\ &\qquad =\frac{1}{[2\pi \sigma^2(s,t)]^{d/2}}
\int_{\mathbf{R}^d}\left(\|x\|^2+\|t-s\|^2\right)^{-\beta/2}
\exp\left(-\frac{\|x\|^2}{2\ \sigma^2(s,t)}\right)\ dx. \end{align*} As in the previous case, by using the hyperspherical change of variables $(r,u)\in\mathbf{R}_+\times\mathbf{S}^{d-1}$ and then $r=\sigma(s,t)\ z$, we get \begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2 + \|t-s\|^2)^{-\beta/2}\right] \\
&\qquad = K_3 \int_{\mathbf{R}_+} \left(z^2\sigma^2(s,t)+\|t-s\|^2\right)^{-\beta/2}\ z^{d-1}\ e^{-\frac{1}{2}z^2}\ dz \\
&\qquad = K_3\ \sigma(s,t)^{-\beta} \int_{\mathbf{R}_+} \left(z^2+\frac{\|t-s\|^2}{\sigma^2(s,t)}\right)^{-\beta/2}\ z^{d-1}\ e^{-\frac{1}{2}z^2}\ dz, \end{align*} where $K_3$ is a positive constant. Then, since $\beta>d$, the following inequality holds \begin{align*}
&\mathbf{E}\left[(\|X_t-X_s\|^2 + \|t-s\|^2)^{-\beta/2}\right] \\
&\qquad \leq \frac{2^{-\beta/2}\ K_3}{\sigma(s,t)^{\beta}} \left[ \int_0^\frac{\|t-s\|}{\sigma(s,t)}
\left( \frac{\|t-s\|}{\sigma(s,t)} \right)^{-\beta} z^{d-1}\ dz
+ \int_\frac{\|t-s\|}{\sigma(s,t)}^\infty z^{d-1-\beta}\ dz \right] \\
&\qquad \leq \frac{K_4}{\sigma(s,t)^{\beta}}\ \left( \frac{\|t-s\|}{\sigma(s,t)} \right)^{d-\beta}
\leq K_4\ \frac{\|t-s\|^{d-\beta}}{\sigma(s,t)^d}. \end{align*}
\noindent By Tonelli's Theorem and Lemma \ref{lemcovinc}, this inequality implies the existence of $\rho_0>0$ such that for all $\rho\in (0,\rho_0]$, \begin{align*}
&\mathbf{E}\left[ \int_{B(t_0,\rho)\times B(t_0,\rho)} \|(t,X_t)-(s,X_s)\|^{-\beta}\ dt\ ds \right]\\ &\qquad\qquad\leq \int_{B(t_0,\rho)\times B(t_0,\rho)} K_4\
\frac{\|t-s\|^{d-\beta}}{\sigma(s,t)^d}\ ds\ dt \\ &\qquad\qquad\leq \int_{B(t_0,\rho)\times B(t_0,\rho)} K_4\
\|t-s\|^{-\beta+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)}\ ds\ dt <+\infty, \end{align*} because $\beta<N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon)$. Thus (\ref{eqcapa2}) holds and for all $\rho\in (0,\rho_0]$, \begin{equation*} \dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)-\epsilon) \qquad\textrm{a.s.} \end{equation*} Taking $\rho,\epsilon\in\mathbf{Q}_+$, this yields \begin{equation*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq N+d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \qquad\textrm{a.s.}, \end{equation*} which proves (\ref{eqmindimH2}). \end{itemize} \end{proof}
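The moment computations in the proof above rest on the exact scaling $\mathbf{E}[\|X_t-X_s\|^{-\beta}]=\sigma(s,t)^{-\beta}\,\mathbf{E}[\|Z\|^{-\beta}]$ for a standard Gaussian vector $Z$ in $\mathbf{R}^d$, the latter expectation being finite exactly when $\beta<d$. A minimal Monte-Carlo sketch of this scaling (the helper \texttt{mean\_inv\_norm} is ours, introduced only for illustration):

```python
import random

def mean_inv_norm(sigma, d, beta, n=50000, seed=7):
    # Monte-Carlo estimate of E[||X||^-beta] for X ~ N(0, sigma^2 I_d);
    # finite only when beta < d, which is why beta < d is assumed in the proof.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        sq_norm = sum(rng.gauss(0.0, sigma) ** 2 for _ in range(d))
        total += sq_norm ** (-beta / 2.0)
    return total / n

# With a fixed seed the sigma = 2 sample is the sigma = 1 sample scaled by 2,
# so the ratio reproduces the factor sigma^beta up to rounding.
a = mean_inv_norm(1.0, 3, 1.5)
b = mean_inv_norm(2.0, 3, 1.5)
print(a / b)  # equals 2**1.5 up to floating-point rounding
```

Replacing $\sigma$ by a lower bound of the form $\|t-s\|^{\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)+\epsilon}$ then gives precisely the integrable majorant used with Tonelli's theorem.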
We now investigate uniform extensions of Proposition \ref{propmindimH}.
\begin{corollary}\label{cormindimHunif1} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.
Assume that for some open subset $I \subset\mathbf{R}^N_+$, we have $\underline{\alpha} = \inf_{t\in I} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) > 0$.
Then, with probability one, \begin{align*} \dim_{\mathcal{H}} (\mathrm{Gr}_X(I)) \geq \left\{ \begin{array}{l l} N/\underline{\alpha} & \textrm{if } N \leq d\ \underline{\alpha} ; \\ N + d(1-\underline{\alpha}) & \textrm{if } N > d\ \underline{\alpha} ; \end{array} \right. \end{align*} and \begin{align*} \dim_{\mathcal{H}} (\mathrm{Rg}_X(I)) \geq \left\{ \begin{array}{l l} N/\underline{\alpha} & \textrm{if } N \leq d\ \underline{\alpha} ; \\ d & \textrm{if } N > d\ \underline{\alpha}. \end{array} \right. \end{align*} \end{corollary}
\begin{proof} For any open subset $I\subset\mathbf{R}^N_+$, we first prove that for all $\omega$, the Hausdorff dimension of the graph of $X_{\bullet}(\omega):t\mapsto X_t(\omega)$ satisfies \begin{align}\label{ineqdimHboules} \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \sup_{t_0\in I} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))). \end{align}
Since $I$ is an open subset of $\mathbf{R}^N_+$, for all $t_0\in I$, there exists $\rho>0$ such that $B(t_0,\rho)\subset I$. This leads to $\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))) \leq \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I))$ and then $$ \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho))), $$ since $\rho\mapsto \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(B(t_0,\rho)))$ is nondecreasing, so that the limit as $\rho\rightarrow 0$ exists. Then (\ref{ineqdimHboules}) follows.
In the same way, we prove that for all $\omega$, \begin{align}\label{ineqdimHboulesRg} \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \geq \sup_{t_0\in I} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(B(t_0,\rho))). \end{align}
Following the proof of Proposition \ref{propmindimH}, we distinguish the two cases: $N \leq d\ \underline{\alpha}$ and $N > d\ \underline{\alpha}$ with $\underline{\alpha} = \inf_{t\in I} \underline{\mathbb{\alpha}}_{X^{(i)}}(t)$.
\begin{itemize} \item Assume that $N \leq d\ \underline{\alpha}$. In that case, for all $t_0\in I$, we have $N \leq d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. Equations (\ref{eqmindimH1}), (\ref{ineqdimHboules}) and (\ref{ineqdimHboulesRg}) imply almost surely \begin{equation*} \dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}(I)) \geq \dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}(I)) \geq \frac{N}{\underline{\alpha}}. \end{equation*}
\item Assume that $N > d\ \underline{\alpha}$. By definition of $\underline{\alpha}$, for all $\epsilon>0$ with $N>d\ (\underline{\alpha} + \epsilon)$, there exists $t_0\in I$ such that $$ \underline{\alpha} < \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0) < \underline{\alpha} + \epsilon. $$ Then, we have $N > d\ \underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)$. In the proof of Proposition \ref{propmindimH}, we proved that this implies almost surely \begin{equation*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d \end{equation*} and \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &\geq N + d(1-\underline{\mathbb{\alpha}}_{X^{(i)}}(t_0)) \\ &\geq N + d(1-\underline{\alpha} - \epsilon) \end{align*} for all $\epsilon \in\mathbf{Q}_+$ with $N>d\ (\underline{\alpha} + \epsilon)$. Then almost surely, \begin{equation*} \sup_{t_0\in I}\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho)))\geq d \end{equation*} and \begin{align*} \sup_{t_0\in I}\lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq N + d(1-\underline{\alpha}). \end{align*} \end{itemize} \end{proof}
\begin{corollary}\label{cormindimHunif2} Let $X=\{X_t;\;t\in\mathbf{R}^N_+\}$ be a multiparameter Gaussian field in $\mathbf{R}^d$ and $\underline{\mathbb{\alpha}}_{X^{(i)}}(t)$ the deterministic local sub-exponent of $X^{(i)}$ at any $t\in\mathbf{R}_+^N$.\\ Set $\underline{\mathcal{A}} = \{ t\in\mathbf{R}_+^N: \liminf_{u\rightarrow t}\underline{\mathbb{\alpha}}_{X^{(i)}}(u)>0\}$.
Then, with probability one, for all $t_0\in\underline{\mathcal{A}}$, \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \geq \left\{ \begin{array}{l l} \displaystyle N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) & \displaystyle\textrm{if }N\leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t);\\ \displaystyle N + d\left(1-\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) \right) & \displaystyle\textrm{if }N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \end{array} \right. \end{align*} and \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) \geq \left\{ \begin{array}{l l} \displaystyle N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) & \displaystyle\textrm{if } N \leq d\ \liminf_{t\rightarrow t_0} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \\ d & \displaystyle\textrm{if } N>d\ \liminf_{t\rightarrow t_0} \underline{\mathbb{\alpha}}_{X^{(i)}}(t). \end{array} \right. \end{align*} \end{corollary}
\begin{proof} Corollary \ref{cormindimHunif1} implies the existence of $\Omega^*\in\mathcal{F}$ with $\mathbf{P}(\Omega^*)=1$ such that: For all $\omega\in\Omega^*$ and all $a,b\in\mathbf{Q}^N_+$ with $a\prec b$ such that $\underline{\alpha} = \inf_{t\in (a,b)} \underline{\mathbb{\alpha}}_{X^{(i)}}(t) > 0$, we have $\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}((a,b))) \geq N/\underline{\alpha}$ if $N\leq d\ \underline{\alpha}$ and $\geq N + d(1-\underline{\alpha})$ otherwise, while $\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}((a,b))) \geq N/\underline{\alpha}$ if $N\leq d\ \underline{\alpha}$ and $\geq d$ otherwise.
Therefore, taking two sequences $(a_n)_{n\in\mathbf{N}}$ and $(b_n)_{n\in\mathbf{N}}$ such that $\forall n\in\mathbf{N}$, $a_n<t_0<b_n$, both converging to $t_0$, we get \begin{align*} \lim_{n\rightarrow\infty}\dim_{\mathcal{H}} (\mathrm{Gr}_{X_{\bullet}(\omega)}((a_n,b_n))) \geq \left\{ \begin{array}{l l} \displaystyle N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) & \displaystyle\textrm{if }N\leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t);\\ \displaystyle N + d(1-\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ) & \displaystyle\textrm{if }N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \end{array} \right. \end{align*} and \begin{align*} \lim_{n\rightarrow\infty}\dim_{\mathcal{H}} (\mathrm{Rg}_{X_{\bullet}(\omega)}((a_n,b_n))) \geq \left\{ \begin{array}{l l} \displaystyle N/\liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) & \displaystyle\textrm{if } N \leq d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t) ; \\ d & \displaystyle\textrm{if } N>d\ \liminf_{t\rightarrow t_0}\underline{\mathbb{\alpha}}_{X^{(i)}}(t). \end{array} \right. \end{align*} By monotonicity of the Hausdorff dimension, the result follows. \end{proof}
\section{Applications}\label{sec:app}
In this section, we apply the main results to Gaussian processes whose fine regularity is not completely known: the multiparameter fractional Brownian motion, the multifractional Brownian motion whose regularity function is allowed to be less regular than the process itself, and the generalized Weierstrass function.
\subsection{Multiparameter fractional Brownian motion}\label{sec:mpfbm}
The multiparameter fractional Brownian motion (MpfBm) $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ of index $H\in (0,1/2]$ is defined as a particular case of set-indexed fractional Brownian motion (see \cite{sifBm, MpfBm}), where the indexing collection is $\mathcal{A}=\{ [0,t];\; t\in\mathbf{R}_+^N\} \cup\{\emptyset\}$. It is characterized as a real-valued mean-zero Gaussian process with covariance function \begin{align*} \forall s,t\in\mathbf{R}_+^N,\quad \mathbf{E}[\mathbf{B}^H_s \mathbf{B}^H_t] = \frac{1}{2} \left[ m([0,s])^{2H} + m([0,t])^{2H} - m([0,s]\bigtriangleup [0,t])^{2H} \right], \end{align*} where $m$ denotes a Radon measure in $\mathbf{R}^N_+$.
In the specific case where $N=2$ and $m$ is the Lebesgue measure of $\mathbf{R}^2_+$, the covariance structure of the MpfBm is \begin{align*} \forall s,t\in\mathbf{R}_+^2,\quad \mathbf{E}[\mathbf{B}^H_s \mathbf{B}^H_t] = \frac{1}{2} \left[ (s_1s_2)^{2H}+(t_1t_2)^{2H}-(s_1s_2+t_1t_2-2(s_1\wedge t_1)(s_2\wedge t_2))^{2H} \right]. \end{align*}
Then, its incremental variance is \begin{align}\label{eqcovincMpfBm} \forall s,t\in \mathbf{R}_+^2,\quad
\mathbf{E}\left[|\mathbf{B}^H_t-\mathbf{B}^H_s|^2\right]=(s_1 s_2 + t_1 t_2 - 2 (s_1\wedge t_1)(s_2\wedge t_2))^{2H}. \end{align}
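For $N=2$ and $m$ the Lebesgue measure, this incremental variance is straightforward to evaluate; the short sketch below (the helper names are ours, introduced for illustration) also exhibits two pairs with the same increment $t-s$ but different variances.

```python
def sym_diff_area(s, t):
    # Lebesgue measure of [0,s] symmetric-difference [0,t] in R^2_+:
    # s1 s2 + t1 t2 - 2 (s1 ^ t1)(s2 ^ t2).
    s1, s2 = s
    t1, t2 = t
    return s1 * s2 + t1 * t2 - 2.0 * min(s1, t1) * min(s2, t2)

def incr_var(s, t, H):
    # E[|B^H_t - B^H_s|^2] = m([0,s] triangle [0,t])^(2H) for the MpfBm.
    return sym_diff_area(s, t) ** (2.0 * H)

# Same increment t - s = (1, 1), different base points: the variances
# differ (3^{2H} versus 5^{2H}), so the increments are not stationary.
v1 = incr_var((1.0, 1.0), (2.0, 2.0), 0.4)
v2 = incr_var((2.0, 2.0), (3.0, 3.0), 0.4)
print(v1, v2)
```

This tiny computation already shows why the classical stationary-increment results cannot be invoked directly for the MpfBm.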
The stationarity of the increments of the multiparameter fractional Brownian motion is studied in \cite{MpfBm}. Among the various definitions of the stationarity property for a multiparameter process, the MpfBm does not satisfy the increment stationarity assumption of \cite{Adler77}. Indeed, (\ref{eqcovincMpfBm}) shows that $\mathbf{E}\left[|\mathbf{B}^H_t-\mathbf{B}^H_s|^2\right]$ does not depend only on $t-s$. Since the Hausdorff dimension of its graph does not follow directly from \cite{Adler77}, we use the generic results of Section \ref{sec:main}.
\begin{lemma}\label{lemdist} If $m$ is the Lebesgue measure of $\mathbf{R}^N$, for any $a\prec b$ in $\mathbf{R}^N_{+}\setminus\left\{0\right\}$, there exist two positive constants $m_{a,b}$ and $M_{a,b}$ such that \begin{equation*} \forall s,t\in [a,b];\quad m_{a,b}\ d_1(s,t)\leq m([0,s]\bigtriangleup [0,t]) \leq M_{a,b}\ d_{\infty}(s,t) \end{equation*} where $d_1$ and $d_{\infty}$ are the usual distances of $\mathbf{R}^N$ defined by \begin{align*}
d_1:(s,t)&\mapsto\|t-s\|_1=\sum_{i=1}^N |t_i-s_i| \\
d_{\infty}:(s,t)&\mapsto\|t-s\|_{\infty}=\max_{1\leq i\leq N} |t_i-s_i|. \end{align*}
\end{lemma}
\begin{proof} For all $s,t\in [a,b]$, we write \begin{align*} [0,s] \bigtriangleup [0,t]=\left([0,s]\setminus [0,t]\right) \cup \left([0,t]\setminus [0,s]\right). \end{align*} Suppose that for all $i\in I\subset\left\{1,\dots,N\right\}$, $s_i>t_i$, and that for all $i\in\left\{1,\dots,N\right\}\setminus I$, $s_i\leq t_i$.
For any subset $J$ of $\{1,\dots,N\}$, we denote by $\prod_{i\in J}[0,s_i]$ the cartesian product of $[0,s_i]$ for $i\in J$.
\noindent We have \begin{align*} [0,s] &= \prod_{i\notin I}[0,s_i] \times \prod_{i\in I}\left([0,t_i]\cup [t_i,s_i]\right) \\ &= \left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in I}[0,t_i] \right) \cup \bigcup_{J\subsetneq I}\left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in J}[0,t_i] \times \prod_{i\in I\setminus J}[t_i,s_i] \right), \end{align*} and then \begin{align*} [0,s]\setminus [0,t] &= \bigcup_{J\subsetneq I}\left( \prod_{i\notin I}[0,s_i] \times \prod_{i\in J}[0,t_i] \times \prod_{i\in I\setminus J}[t_i,s_i] \right) \\ &= \left\{ x\in [0,s]:\;\exists i\in I;\; t_i<x_i\leq s_i \right\}. \end{align*} We deduce \begin{align*}
m([0,s]\setminus [0,t]) = \prod_{i\notin I} |s_i|\ \sum_{J\subsetneq I} \left( \prod_{i\in J} |t_i|
\prod_{i\in I\setminus J} |t_i-s_i| \right). \end{align*} In the same way, we get \begin{align*}
m([0,t]\setminus [0,s]) = \prod_{i\in I} |s_i|\ \sum_{J\subsetneq I^c} \left(
\prod_{i\in J} |t_i| \prod_{i\in I^c\setminus J} |t_i-s_i| \right). \end{align*}
\noindent
For all $1\leq i\leq N$, we have $|a| \leq |s_i| \leq |b|$ and $|a| \leq |t_i| \leq |b|$. Then, \begin{align*} m([0,s] &\bigtriangleup [0,t]) \\
&\leq |b|^{\# I^c} \sum_{J\subsetneq I} |b|^{\# J}
d_{\infty}(s,t)^{\#(I\setminus J)} + |b|^{\# I} \sum_{J\subsetneq I^c} |b|^{\# J} d_{\infty}(s,t)^{\#(I^c\setminus J)} \\
&\leq d_{\infty}(s,t)\ \underbrace{\left[ |b|^{\# I^c} \sum_{J\subsetneq I} |b|^{\# J}
d_{\infty}(s,t)^{\#(I\setminus J)-1} + |b|^{\# I} \sum_{J\subsetneq I^c} |b|^{\# J} d_{\infty}(s,t)^{\#(I^c\setminus J)-1}\right]}_{\textrm{bounded in }[a,b]}\\ &\leq M_{a,b}\ d_{\infty}(s,t). \end{align*} For the lower bound, we write \begin{align*} m([0,s] \bigtriangleup [0,t])
\geq |a|^{\# I^c} \sum_{J\subsetneq I} |a|^{\# J}
\prod_{i\in I\setminus J} |t_i-s_i| + |a|^{\# I} \sum_{J\subsetneq I^c} |a|^{\# J}
\prod_{i\in I^c\setminus J} |t_i-s_i| \end{align*}
Let $m_a$ be the minimum of $|a|^k$ for $1\leq k\leq N$. We get \begin{align}\label{eqminda} m([0,s] \bigtriangleup [0,t])
\geq m_a^2 \sum_{J\subsetneq I} \prod_{i\in I\setminus J} |t_i-s_i|
+ m_a^2 \sum_{J\subsetneq I^c} \prod_{i\in I^c\setminus J} |t_i-s_i|. \end{align}
\noindent Let us remark that \begin{align*}
\sum_{J\subsetneq I} \prod_{i\in I\setminus J} |t_i-s_i|
= \prod_{i\in I} \left(1+|t_i-s_i|\right) - 1. \end{align*} Using the expansion \begin{align*}
\log\prod_{i\in I} \left(1+|t_i-s_i|\right) = \sum_{i\in I}\log\left(1+|t_i-s_i|\right)
=\sum_{i\in I} |t_i-s_i| + O(\|t-s\|_{\infty}^2), \end{align*} which implies \begin{align*}
\prod_{i\in I} \left(1+|t_i-s_i|\right) = 1+\sum_{i\in I} |t_i-s_i| + O(\|t-s\|_{\infty}^2), \end{align*} the inequality (\ref{eqminda}) becomes \begin{align*} m([0,s] \bigtriangleup [0,t])
\geq m_a^2 \sum_{1\leq i\leq N} |t_i-s_i| + o(\|t-s\|_{\infty}). \end{align*} The result follows. \end{proof}
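The two-sided bound of Lemma \ref{lemdist} can be probed numerically. The crude sketch below (helper names ours) samples pairs in $[a,b]=[1,2]^2$; the constants $1$ and $4$ asserted for the ratios are empirical values for this particular interval, not claims from the lemma.

```python
import random

def sym_diff_measure(s, t):
    # Lebesgue measure of [0,s] symmetric-difference [0,t] in R^N_+:
    # m([0,s]) + m([0,t]) - 2 m([0,s] intersect [0,t]).
    def vol(u):
        p = 1.0
        for x in u:
            p *= x
        return p
    inter = [min(a, b) for a, b in zip(s, t)]
    return vol(s) + vol(t) - 2.0 * vol(inter)

rng = random.Random(0)
lo, hi = [], []
for _ in range(2000):
    s = [rng.uniform(1.0, 2.0) for _ in range(2)]
    t = [rng.uniform(1.0, 2.0) for _ in range(2)]
    m = sym_diff_measure(s, t)
    d1 = sum(abs(a - b) for a, b in zip(s, t))
    dinf = max(abs(a - b) for a, b in zip(s, t))
    lo.append(m / d1)
    hi.append(m / dinf)
print(min(lo), max(hi))  # ratios stay within [1, 4] on [1,2]^2
```

In other words, on $[1,2]^2$ the measure of the symmetric difference is comparable to $\|t-s\|$, which is exactly what feeds into the incremental-variance estimates of the next lemma.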
\begin{lemma}\label{lemMpfBmexp} Let $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian motion with index $H\in (0, 1/2]$. The deterministic local H\"older exponent and deterministic local sub-exponent of $\mathbf{B}^H$ at any $t_0\in\mathbf{R}_+^N$ are given by $\widetilde{\mathbb{\alpha}}_X(t_0)=\underline{\mathbb{\alpha}}_X(t_0)=H$. \end{lemma}
\begin{proof} We prove that $\widetilde{\mathbb{\alpha}}_X(t_0)\geq H$ and $\underline{\mathbb{\alpha}}_X(t_0)\leq H$. The result will follow from $\widetilde{\mathbb{\alpha}}_X(t_0)\leq\underline{\mathbb{\alpha}}_X(t_0)$.
Since for all $s,t\in \mathbf{R}^N_+$, \begin{align*}
\frac{\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]}{\|t-s\|^{2H}} = \left( \frac{m([0,s] \bigtriangleup [0,t])}{d_2(s,t)} \right)^{2H}, \end{align*} Lemma \ref{lemdist} implies that for all $s,t$ in any interval $[a,b]$, \begin{align}\label{ineqMpfBm} M_1 \left( \frac{d_1(s,t)}{d_2(s,t)} \right)^{2H} \leq
\frac{\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]}{\|t-s\|^{2H}} \leq M_2 \left( \frac{d_{\infty}(s,t)}{d_2(s,t)} \right)^{2H}, \end{align} for some positive constants $M_1$ and $M_2$.
\noindent Since the distances $d_1, d_2$ and $d_{\infty}$ are equivalent, the inequality (\ref{ineqMpfBm}) implies that the quantity $\mathbf{E}\left[|\mathbf{B}^H_t - \mathbf{B}^H_s|^2\right]/\|t-s\|^{2H}$ is bounded above and below by positive constants on any interval $[a,b]$. Consequently, for all $t_0\in\mathbf{R}^N_+$, $\widetilde{\mathbb{\alpha}}_X(t_0)\geq H$ and $\underline{\mathbb{\alpha}}_X(t_0)\leq H$, by definition of the deterministic local H\"older exponent and the deterministic local sub-exponent. \end{proof}
\
A direct consequence of Lemma \ref{lemMpfBmexp} is the local regularity of the sample paths of the multiparameter fractional Brownian motion. In \cite{2ml}, Corollary 3.15 states that for any Gaussian process $X$ such that the function $t\mapsto\widetilde{\mathbb{\alpha}}_X(t)$ is continuous and positive, the local H\"older exponents satisfy with probability one: $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t)=\widetilde{\mathbb{\alpha}}_X(t)$ for all $t\in\mathbf{R}_+^N$. Since the deterministic local H\"older exponents of the MpfBm are constant and positive, the following result follows directly.
\begin{corollary} The local H\"older exponent of the multiparameter fractional Brownian motion $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ (with $0<H\leq 1/2$) satisfies with probability one, $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_{\mathbf{B}^H}(t_0)=H$ for all $t_0\in\mathbf{R}^N_+$. \end{corollary}
As an application of Theorem \ref{thmaincompact}, the constant local regularity of the multiparameter fractional Brownian motion yields sharp results about the Hausdorff dimensions of its graph and its range.
\begin{proposition}\label{prop:mpfbm} Let $X=\{ X_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian field with index $H\in (0, 1/2]$, i.e. whose coordinate processes $X^{(1)},\dots,X^{(d)}$ are i.i.d. multiparameter fractional Brownian motions with index $H$. \\ With probability one, the Hausdorff dimensions of the graph and the range of the sample paths of $X$ are \begin{align*} \forall I=(a,b)\subset\mathbf{R}_+^N,\quad \dim_{\mathcal{H}} (\mathrm{Gr}_{X}(I)) &= \min\{ N / H; N + d(1 - H) \}, \\ \dim_{\mathcal{H}} (\mathrm{Rg}_{X}(I)) &= \min\{ N / H; d \}. \end{align*} \end{proposition}
\begin{corollary}\label{cor:mpfbm} Let $\mathbf{B}^H=\{ \mathbf{B}^H_t;\; t\in\mathbf{R}_+^N\}$ be a multiparameter fractional Brownian motion with index $H\in (0, 1/2]$. With probability one, the Hausdorff dimensions of the graph and the range of the sample paths of $\mathbf{B}^H$ are \begin{align*} \forall I=(a,b)\subset\mathbf{R}_+^N,\quad \dim_{\mathcal{H}} (\mathrm{Gr}_{\mathbf{B}^H}(I)) &= N + 1 - H, \\ \dim_{\mathcal{H}} (\mathrm{Rg}_{\mathbf{B}^H}(I)) &= 1. \end{align*} \end{corollary}
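The dimension formulas of Proposition \ref{prop:mpfbm} and Corollary \ref{cor:mpfbm} are easy to tabulate; a minimal sketch (function names ours) which also checks that for $d=1$ and $H\in(0,1/2]$ the graph formula collapses to $N+1-H$, since then $N/H \geq 2N \geq N+1 > N+1-H$:

```python
def dim_graph(N, d, H):
    # Proposition: dim_H Gr_X(I) = min{N / H, N + d (1 - H)}.
    return min(N / H, N + d * (1.0 - H))

def dim_range(N, d, H):
    # Proposition: dim_H Rg_X(I) = min{N / H, d}.
    return min(N / H, d)

# d = 1 recovers the corollary: graph dimension N + 1 - H, range dimension 1.
for H in (0.1, 0.25, 0.5):
    assert abs(dim_graph(2, 1, H) - (2 + 1 - H)) < 1e-12
    assert dim_range(2, 1, H) == 1

print(dim_graph(2, 3, 0.5), dim_range(2, 3, 0.5))  # -> 3.5 3
```

For $d>1$ both branches of the minima can be active, which is why the proposition is stated with the full min expressions rather than the scalar-valued simplification.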
Proposition \ref{prop:mpfbm} and Corollary \ref{cor:mpfbm} should be compared to Theorem 1.3 of \cite{AyXiao} which states the Hausdorff dimensions of the range and the graph of the fractional Brownian sheet (result extended by Proposition 1 and Theorem 3 of \cite{WuXiao}). In particular, the Hausdorff dimensions of the sample path (range and graph) of the multiparameter fractional Brownian motion are equal to the respective quantities for the fractional Brownian sheet, when the Hurst index is the same along each axis.
\subsection{Irregular Multifractional Brownian motion}\label{sec:mbm}
The multifractional Brownian motion (mBm) is an extension of the fractional Brownian motion, where the self-similarity index $H\in (0,1)$ is substituted with a function $H:\mathbf{R}_+\rightarrow (0,1)$ (see \cite{RPJLV} and \cite{BJR}). More precisely, it can be defined as a zero mean Gaussian process $\{X_t;\;t\in\mathbf{R}_+\}$ with \begin{equation*} X_t = \int_{-\infty}^t \left[(t-u)^{H(t)-1/2} - (-u)^{H(t)-1/2}\right].\mathbbm{W}(du) +\int_0^t (t-u)^{H(t)-1/2}.\mathbbm{W}(du) \end{equation*} or \begin{equation}\label{eq:mbm-harm}
X_t = \int_{\mathbf{R}} \frac{e^{it\xi}-1}{|\xi|^{H(t)+1/2}}.\widehat{\mathbbm{W}}(d\xi), \end{equation} where $\mathbbm{W}$ is a Gaussian measure on $\mathbf{R}$ and $\widehat{\mathbbm{W}}$ is the Fourier transform of a Gaussian measure in $\mathbf{C}$. The variety of the class of multifractional Brownian motions is described in \cite{StoevTaqqu}.
In the first definitions of the mBm, the various authors worked under the assumption: $H$ is a $\beta$-H\"older function and $H(t)<\beta$ for all $t\in\mathbf{R}_+$. Under this so-called $(H_{\beta})$-assumption, the local regularity of the sample paths was described by \begin{equation*} \texttt{\large $\boldsymbol{\alpha}$}_X(t_0) = \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) = H(t_0) \qquad\textrm{a.s.} \end{equation*} where $\texttt{\large $\boldsymbol{\alpha}$}_X(t_0)$ and $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$ denote the pointwise and local H\"older exponents of $X$ at any $t_0\in\mathbf{R}_+$. A localization of the Hausdorff dimension of the graph was also proved: For any $t_0\in\mathbf{R}_+$, \begin{equation*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} \left[ \mathrm{Gr}_X\left( B(t_0,\rho) \right)\right] = 2-H(t_0) \qquad\textrm{a.s.} \end{equation*} Let us notice that this result could not be a direct consequence of Adler's earlier work \cite{Adler77}, since the multifractional Brownian motion does not have stationary increments, in contrast with the classical fractional Brownian motion.
In \cite{EH06, 2ml}, the fine regularity of the multifractional Brownian motion has been studied in the irregular case, i.e. when the function $H$ is only assumed to be $\beta$-H\"older continuous with $\beta>0$. In this more general case, the pointwise and local H\"older exponents of $X$ at any $t_0\in\mathbf{R}_+$ satisfy respectively \begin{align*} \texttt{\large $\boldsymbol{\alpha}$}_X(t_0) &= H(t_0) \wedge \alpha_H(t_0) \qquad\textrm{a.s.}\\ \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) &= H(t_0) \wedge \widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.}, \end{align*} where \begin{align*} \alpha_H(t_0)&= \sup\left\{ \alpha>0: \limsup_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{\rho^{\alpha}} < +\infty \right\};\\ \widetilde{\alpha}_H(t_0)&= \sup\left\{ \alpha>0: \lim_{\rho\rightarrow 0}
\sup_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}} < +\infty \right\}. \end{align*}
Roughly speaking, when the function $H$ is irregular, it transmits its local regularity to the sample paths of the mBm. In that case, however, nothing was previously known about the Hausdorff dimension of the range or the graph of the process.
In this section, the main results of the paper stated in Section \ref{sec:main} are applied to derive information on these Hausdorff dimensions, without any regularity assumptions on the function $H$. As for Gaussian processes, we define the {\em local sub-exponent} of $H$ at $t_0\in\mathbf{R}_+$ by \begin{align*} \underline{\alpha}_H(t_0)&= \inf\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}} =+\infty \right\} \\ &= \sup\left\{ \alpha>0 :
\lim_{\rho\rightarrow 0} \inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}} =0 \right\}. \end{align*}
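These definitions can be illustrated numerically on a deterministic function. The sketch below discretizes the sup/inf ratios over pairs of points in shrinking balls: for each pair $(s,t)$ it forms the implied exponent $\log|H(t)-H(s)|/\log|t-s|$, whose minimum mimics the local H\"older exponent and whose maximum mimics the sub-exponent. The function name, grid sizes and radii are our own choices, and the estimates are indicative only, not a rigorous estimator.

```python
import numpy as np

def local_exponents(H, t0, rhos=(0.1, 0.05, 0.025, 0.0125), n=400):
    """Heuristic grid estimates of the local Hoelder exponent and
    sub-exponent of a deterministic function H at t0."""
    sup_est, inf_est = [], []
    for rho in rhos:
        s = np.linspace(t0 - rho, t0 + rho, n)
        hs = np.array([H(u) for u in s])
        dh = np.abs(hs[:, None] - hs[None, :])
        dt = np.abs(s[:, None] - s[None, :])
        mask = dt > 0
        # per-pair implied exponent; dh is clamped to avoid log(0)
        gamma = np.log(np.maximum(dh[mask], 1e-300)) / np.log(dt[mask])
        sup_est.append(gamma.min())   # Hoelder exponent ~ smallest gamma
        inf_est.append(gamma.max())   # sub-exponent ~ largest gamma
    return min(sup_est), max(inf_est)
```

For a smooth strictly monotone function such as $H(t)=t$, both estimates are close to $1$; for $H(t)=|t-t_0|^{1/2}$, the H\"older estimate is close to $1/2$ while the sub-exponent estimate blows up, consistent with the discussion of symmetric behaviour below.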
\begin{proposition}\label{prop:mbm} Let $X=\{X_t;\; t\in\mathbf{R}_+\}$ be the multifractional Brownian motion given by the integral representation (\ref{eq:mbm-harm}), with regularity function $H:\mathbf{R}_+\rightarrow (0,1)$ assumed to be $\beta$-H\"older-continuous with $\beta>0$. Let $\widetilde{\alpha}_H(t_0)$ and $\underline{\alpha}_H(t_0)$ denote respectively the local H\"older exponent and sub-exponent of $H$ at $t_0\in\mathbf{R}_+$.
In each of the following three cases, the Hausdorff dimension of the graph of the sample path of $X$ satisfies: \begin{enumerate}[(i)] \item If $H(t_0) < \widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for $t_0\in\mathbf{R}_+$, then \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0) \qquad\textrm{a.s.} \end{align*}
\item If $\widetilde{\alpha}_H(t_0) < H(t_0) \leq \underline{\alpha}_H(t_0)$ for $t_0\in\mathbf{R}_+$, then \begin{align*} 2-H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &\leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.} \end{align*}
\item If $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0) < H(t_0)$ for $t_0\in\mathbf{R}_+$, then \begin{align*} 2-\underline{\alpha}_H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &\leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.} \end{align*}
\end{enumerate}
With probability one, the Hausdorff dimension of the range of the sample path of $X$ satisfies: \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = 1. \end{align*}
Moreover if the $(H_{\beta})$-assumption holds then, with probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0). \end{align*}
\end{proposition}
\begin{proof}
In \cite{EH06}, the asymptotic behaviour of the incremental variance of the multifractional Brownian motion in a neighborhood $B(t_0,\rho)$ of any $t_0\in\mathbf{R}_+$, as $\rho$ goes to $0$, is given by: for all $s,t\in B(t_0,\rho)$, \begin{align}\label{eq:asympMBM}
\mathbf{E}[|X_t-X_s|^2] \sim K(t_0)\ |t-s|^{H(t)+H(s)} + L(t_0)\ [H(t)-H(s)]^2, \end{align} where $K(t_0)$ and $L(t_0)$ are positive constants.\\ From (\ref{eq:asympMBM}), for any $t_0\in\mathbf{R}_+$, for all $\alpha>0$ and for all $s,t\in B(t_0,\rho)$, \begin{align}\label{eq:asympMBM2}
\frac{\mathbf{E}[|X_t-X_s|^2]}{|t-s|^{2\alpha}} \sim K(t_0)\ |t-s|^{H(t)+H(s)-2\alpha} + L(t_0)\ \left[\frac{H(t)-H(s)}{|t-s|^{\alpha}}\right]^2, \end{align} when $\rho\rightarrow 0$. This expression allows us to evaluate the exponents $\widetilde{\bbalpha}_X(t_0)$ (and consequently $\widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0)$) and $\underline{\bbalpha}_X(t_0)$ in terms of the corresponding exponents of the function $H$.
The local behaviour of $H$ around $t_0$ is described by one of the two following situations: \begin{itemize}
\item Either there exists $\rho>0$ such that the restriction $\left.H\right|_{B(t_0,\rho)}$ is increasing or decreasing. In that case, $\underline{\alpha}_H(t_0)\in\mathbf{R}_+\cup\{+\infty\}$.
\item Or for all $\rho>0$, there exist $s,t\in B(t_0,\rho)$ such that $H(t)=H(s)$.\\
In that case, for all $\alpha>0$ and for all $\rho>0$, $\displaystyle\inf_{s,t\in B(t_0,\rho)} \frac{|H(t)-H(s)|}{|t-s|^{\alpha}}=0$ and therefore, $\underline{\alpha}_H(t_0)=+\infty$.
\end{itemize}
Since $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for all $t_0\in\mathbf{R}_+$ as noticed in Section~\ref{sec:subexp}, we distinguish the three following cases:
\begin{enumerate}[(i)] \item If $H(t_0) < \widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0)$ for some $t_0\in\mathbf{R}_+$, then for all $0<\epsilon<\widetilde{\alpha}_H(t_0) - H(t_0)$, there exists $\rho_0>0$ such that $$ \forall t\in B(t_0,\rho_0),\quad H(t_0)-\epsilon < H(t) < H(t_0)+\epsilon,$$ and thus \begin{equation}\label{eq:mbm-H}
\forall s,t\in B(t_0,\rho_0),\quad |t-s|^{2H(t_0)+2\epsilon} \leq |t-s|^{H(s)+H(t)} \leq |t-s|^{2H(t_0)-2\epsilon}. \end{equation} Then, expression (\ref{eq:asympMBM2}) implies $H(t_0)-\epsilon\leq\widetilde{\bbalpha}_X(t_0)$ and $\underline{\bbalpha}_X(t_0) \leq H(t_0)+\epsilon$, by definition of the exponents. Letting $\epsilon$ tend to $0$, and using $\widetilde{\bbalpha}_X(t_0) \leq \underline{\bbalpha}_X(t_0)$, we get $\widetilde{\bbalpha}_X(t_0) = \underline{\bbalpha}_X(t_0) = H(t_0)$.
\noindent Then, Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies: \begin{align*} \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0) \qquad\textrm{a.s.} \end{align*}
\item If $\widetilde{\alpha}_H(t_0) < H(t_0) \leq \underline{\alpha}_H(t_0)$ for some $t_0\in\mathbf{R}_+$, then as previously, we consider any $0<\epsilon<H(t_0)-\widetilde{\alpha}_H(t_0)$ and we show that expression (\ref{eq:asympMBM2}) and inequalities (\ref{eq:mbm-H}) imply $\widetilde{\bbalpha}_X(t_0) = \widetilde{\alpha}_H(t_0)$ and $\underline{\bbalpha}_X(t_0) = H(t_0)$. Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies: \begin{align*} 2-H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.} \end{align*}
\item If $\widetilde{\alpha}_H(t_0) \leq \underline{\alpha}_H(t_0) < H(t_0)$ for some $t_0\in\mathbf{R}_+$, then as previously, we consider any $0<\epsilon<H(t_0)-\underline{\alpha}_H(t_0)$ and we show that expression (\ref{eq:asympMBM2}) and inequalities (\ref{eq:mbm-H}) imply $\widetilde{\bbalpha}_X(t_0) = \widetilde{\alpha}_H(t_0)$ and $\underline{\bbalpha}_X(t_0) = \underline{\alpha}_H(t_0)$. Theorem \ref{thmain} (with $N>d\ \underline{\bbalpha}_X(t_0)$) implies: \begin{align*} 2-\underline{\alpha}_H(t_0) \leq \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) \leq 2-\widetilde{\alpha}_H(t_0) \qquad\textrm{a.s.} \end{align*}
\end{enumerate}
Since $H$ is $\beta$-H\"older-continuous with $\beta>0$, Theorem \ref{thmainunif} can be applied with $\mathcal{A}=\mathbf{R}_+$. In the three previous case, we observe that $\underline{\bbalpha}_X(u) < 1$ for all $u\in\mathbf{R}_+$. Consequently, $N>d\ \liminf_{u\rightarrow t_0}\underline{\bbalpha}_X(u)$ and, with probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) = 1. \end{align*}
When the $(H_{\beta})$-assumption holds, case (i) applies at every point, so $\widetilde{\bbalpha}_X(t_0) = H(t_0) = \underline{\bbalpha}_X(t_0)$ for all $t_0\in\mathbf{R}_+$, and by continuity of $H$, $$ \liminf_{u\rightarrow t_0}\widetilde{\bbalpha}_X(u) = \liminf_{u\rightarrow t_0}\underline{\bbalpha}_X(u) = H(t_0). $$ Then, Theorem \ref{thmainunif} implies: With probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) = 2-H(t_0). \end{align*}
\end{proof}
According to Proposition \ref{prop:mbm}, the general theorems of Section \ref{sec:main} fail to give sharp values for the Hausdorff dimensions of the sample paths of the multifractional Brownian motion when the $(H_{\beta})$-assumption on the function $H$ is not satisfied. This is because the irregularity of $H$ is not completely captured by the exponents $\widetilde{\alpha}_H(t_0)$ and $\underline{\alpha}_H(t_0)$: a finer analysis of the function $H$ would be required to determine the exact Hausdorff dimensions of the mBm.
\subsection{Generalized Weierstrass function}\label{sec:GW}
The local regularity of the Weierstrass function $W_H$, defined by \begin{equation*} t\mapsto W_H(t)=\sum_{j=1}^{\infty}\lambda^{-j H}\ \sin\lambda^{j}t, \end{equation*} where $\lambda \geq 2$ and $H\in (0,1)$, has been studied in depth in the literature (see e.g. \cite{falconer}). When $\lambda$ is large enough, the box-counting dimension of the graph of $W_H$ is known to be $2-H$. Nevertheless, the exact value of its Hausdorff dimension remains unknown at this stage.
Different stochastic versions of the Weierstrass function have been considered in \cite{AyLeVe, falconer, 2ml, hunt, liningPhD} and their geometric properties have been investigated. In this section, we consider the {\em generalized Weierstrass function (GW)}, defined as the Gaussian process $X=\left\{X_t;\;t\in\mathbf{R}_{+}\right\}$, \begin{equation}\label{def:Weierstrass} \forall t\in\mathbf{R}_+,\quad X_t=\sum_{j=1}^{\infty}Z_j\ \lambda^{-j H(t)}\ \sin(\lambda^{j}t + \theta_j) \end{equation} where \begin{itemize} \item $\lambda \geq 2$, \item $t\mapsto H(t)$ takes values in $(0,1)$, \item $\left(Z_j\right)_{j\geq 1}$ is a sequence of i.i.d. $\mathcal{N}(0,1)$ random variables, \item and $\left(\theta_j\right)_{j\geq 1}$ is a sequence of random variables uniformly distributed on $[0,2\pi)$, independent of $\left(Z_j\right)_{j\geq 1}$. \end{itemize}
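A sample path of (\ref{def:Weierstrass}) can be generated by simply truncating the series; the truncation level $J$ and the example function $H$ below are arbitrary illustrative choices (the neglected tail is of order $\lambda^{-J\inf H}$).

```python
import numpy as np

def gw_path(H, t, lam=2.0, J=40, seed=0):
    """Sample the generalized Weierstrass function, truncated at J terms."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(J)
    theta = rng.uniform(0.0, 2.0 * np.pi, J)
    j = np.arange(1, J + 1)
    t = np.asarray(t, dtype=float)
    Ht = np.array([H(u) for u in t])
    # terms[k, i] = Z_j * lam^{-j H(t_i)} * sin(lam^j t_i + theta_j)
    amp = Z[:, None] * lam ** (-j[:, None] * Ht[None, :])
    return (amp * np.sin(lam ** j[:, None] * t[None, :] + theta[:, None])).sum(axis=0)

t = np.linspace(0.0, 1.0, 1000)
x = gw_path(lambda s: 0.3 + 0.4 * s, t)   # rough near 0, smoother near 1
```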
In the specific case of $\theta_j=0$ for all $j\geq 1$, Theorem 4.9 of \cite{2ml} determines the local regularity of the sample path of the GW through its $2$-microlocal frontier, when the function $H$ is $\beta$-H\"older continuous with $\beta>0$ and when the $(H_{\beta})$-assumption holds, i.e. $H(t)<\beta$ for all $t\in\mathbf{R}_+$. In particular, the deterministic local H\"older exponent is proved to be $\widetilde{\bbalpha}_X(t_0) = H(t_0)$ for all $t_0\in\mathbf{R}_+$ and the local H\"older exponent satisfies, with probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \widetilde{\texttt{\large $\boldsymbol{\alpha}$}}_X(t_0) = H(t_0). \end{align*}
Moreover, when $H$ is constant and $\theta_j=0$ for all $j\geq 1$, the Hausdorff dimension of the graph of the sample path of the GW is proved to be equal to $2-H$, as a particular case of Theorem 5.3.1 of \cite{liningPhD}. In the sequel, we use Theorem \ref{thmainunif} to extend this result when $H$ is no longer constant and the $\theta_j$'s are not equal to $0$.
The two following lemmas are the key results to determine the deterministic local H\"older exponent and sub-exponent of the GW in the general case. Their proofs are sketched in \cite{falconer} when the $\left(\theta_j\right)_{j\geq 1}$ are independent and uniformly distributed on $[0,2\pi)$; for the sake of completeness, we detail them in this section without requiring the independence of the $\theta_j$'s, before considering the case of a non-constant function $H$.
\begin{lemma}\label{lem:GW-inc-var} Let $\{X_t;\;t\in\mathbf{R}_+\}$ be the stochastic Weierstrass function defined by (\ref{def:Weierstrass}). Then, the incremental variance between $u,v\in\mathbf{R}_+$ is given by \begin{align}\label{eq:GW-inc-var}
\mathbf{E}[|X_u-X_v|^2] = 2\sum_{j\geq 1} \lambda^{-j [H(u)+H(v)]} \sin^2\left(\lambda^j \frac{u-v}{2}\right) + \frac{1}{2}\sum_{j\geq 1} \left( \lambda^{-j H(v)} - \lambda^{-j H(u)} \right)^2. \end{align}
\end{lemma}
\begin{proof} For all $u,v\in\mathbf{R}_+$, write $a_j = \lambda^{-j H(u)}$ and $b_j = \lambda^{-j H(v)}$, so that \begin{align*} X_u-X_v = \sum_{j\geq 1} Z_j \left[ a_j \sin(\lambda^j u + \theta_j) - b_j \sin(\lambda^j v + \theta_j) \right]. \end{align*}
Conditioning on the sequence $\left(\theta_j\right)_{j\geq 1}$, which is independent of $\left(Z_j\right)_{j\geq 1}$, and using $\mathbf{E}[Z_j Z_k] = \mathbbm{1}_{j=k}$, the cross terms with $j\neq k$ vanish (without requiring any independence among the $\theta_j$'s), and \begin{align*} \mathbf{E}[|X_u-X_v|^2] = \sum_{j\geq 1} \mathbf{E}\left[ \left( a_j \sin(\lambda^j u + \theta_j) - b_j \sin(\lambda^j v + \theta_j) \right)^2 \right]. \end{align*}
Since each $\theta_j$ is uniformly distributed on $[0,2\pi)$, \begin{align*} \mathbf{E}[\sin^2(\lambda^j u + \theta_j)] = \mathbf{E}[\sin^2(\lambda^j v + \theta_j)] &= \frac{1}{2}, \\ \mathbf{E}\left[\sin(\lambda^j u + \theta_j)\sin(\lambda^j v + \theta_j)\right] &= \frac{1}{2}\cos\left(\lambda^j (u-v)\right), \end{align*} so that each summand equals \begin{align*} \frac{a_j^2+b_j^2}{2} - a_j b_j \cos\left(\lambda^j (u-v)\right) = \frac{(a_j-b_j)^2}{2} + 2\,a_j b_j \sin^2\left(\lambda^j \frac{u-v}{2}\right). \end{align*} Summing over $j$ and using $a_j b_j = \lambda^{-j[H(u)+H(v)]}$ gives the result. \end{proof}
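The incremental variance admits a Monte Carlo sanity check on the truncated series. The closed form coded below is the one obtained from a direct expansion of $\mathbf{E}[|X_u-X_v|^2]$ for $\theta_j$ uniform on $[0,2\pi)$, with a first term carrying the exponent $j[H(u)+H(v)]$ and a second term weighted by $1/2$; the truncation level $J$, the test function $H$ and the points $u,v$ are arbitrary illustrative choices.

```python
import numpy as np

def gw_inc_var(H, u, v, lam=2.0, J=30):
    # closed form of E|X_u - X_v|^2 for the J-term truncated series
    j = np.arange(1, J + 1)
    t1 = 2 * np.sum(lam ** (-j * (H(u) + H(v)))
                    * np.sin(lam ** j * (u - v) / 2) ** 2)
    t2 = 0.5 * np.sum((lam ** (-j * H(v)) - lam ** (-j * H(u))) ** 2)
    return t1 + t2

def gw_inc_var_mc(H, u, v, lam=2.0, J=30, n=50000, seed=1):
    # Monte Carlo estimate over fresh draws of (Z_j, theta_j)
    rng = np.random.default_rng(seed)
    j = np.arange(1, J + 1)
    Z = rng.standard_normal((n, J))
    theta = rng.uniform(0.0, 2.0 * np.pi, (n, J))
    X_u = (Z * lam ** (-j * H(u)) * np.sin(lam ** j * u + theta)).sum(axis=1)
    X_v = (Z * lam ** (-j * H(v)) * np.sin(lam ** j * v + theta)).sum(axis=1)
    return np.mean((X_u - X_v) ** 2)

H = lambda s: 0.4 + 0.2 * s   # an arbitrary smooth illustrative H
u, v = 0.3, 0.7
```

With these parameters the Monte Carlo estimate agrees with the closed form to within sampling error.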
\begin{lemma}\label{lem:GW-const} Let $\{X_t;\;t\in\mathbf{R}_+\}$ be the stochastic Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be constant.
Then, for every compact subset $I\subset\mathbf{R}_+$, there exist two constants $C_1>0$ and $C_2>0$ such that for all $u,v\in I$ with $u\neq v$, \begin{align}\label{eq:GW-const}
0 < C_1 \leq \frac{\mathbf{E}[|X_u-X_v|^2]}{|u-v|^{2H}} \leq C_2 < +\infty. \end{align}
\end{lemma}
\begin{proof} According to Lemma \ref{lem:GW-inc-var}, the incremental variance of $X$ is given by \begin{align}\label{eq:GW-inc-var-const}
\mathbf{E}[|X_u-X_v|^2] = 2\sum_{j\geq 1} \lambda^{-2j H} \sin^2\left(\lambda^j \frac{u-v}{2}\right). \end{align}
Let $N$ be the integer such that $\lambda^{-(N+1)} \leq |u-v| < \lambda^{-N}$.
For all $j\leq N$, $\displaystyle\lambda^j \frac{|u-v|}{2} \leq \frac{1}{2}$. Since $\displaystyle x^2 - \frac{x^4}{3} \leq \sin^2 x \leq x^2$ for all $x\in [0,1]$, expression (\ref{eq:GW-inc-var-const}) implies \begin{align}\label{eq:GW-const-up}
\mathbf{E}[|X_u-X_v|^2] &\leq 2\sum_{j=1}^N \lambda^{-2j H} \lambda^{2j} \left(\frac{u-v}{2}\right)^2 + 2\sum_{j\geq N+1} \lambda^{-2j H} \nonumber\\ &\leq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2 + \frac{2\ \lambda^{-2H(N+1)}}{1-\lambda^{-2H}} \nonumber\\ &\leq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
+ \frac{2\ |u-v|^{2H}}{1-\lambda^{-2H}} \end{align}
and \begin{align}\label{eq:GW-const-low}
\mathbf{E}[|X_u-X_v|^2] &\geq 2\sum_{j=1}^N \lambda^{-2j H} \lambda^{2j} \left(\frac{u-v}{2}\right)^2 - \frac{2}{3} \sum_{j=1}^N \lambda^{-2j H} \lambda^{4j} \left(\frac{u-v}{2}\right)^4 \nonumber\\
&\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2 - \frac{1}{24} \lambda^{-4N} \sum_{j=1}^N \lambda^{j(4-2H)} \nonumber\\ &\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2 - \frac{1}{24} \lambda^{-4N} \lambda^{4-2H} \frac{\lambda^{(4-2H)N} - 1}{\lambda^{4-2H} - 1} \nonumber\\ &\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2 - \frac{1}{24} \frac{\lambda^{4-2H}}{\lambda^{4-2H} - 1} \lambda^{-2HN} \nonumber\\ &\geq 2\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2
- \frac{1}{24} \frac{\lambda^{4}}{\lambda^{4-2H} - 1} |u-v|^{2H}. \end{align}
Now, it remains to compare the term $\displaystyle\sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2$ with $|u-v|^{2H}$.
By definition of the integer $N$, we have \begin{align}\label{eq:GW-const-ineq-1er} \frac{\lambda^{-2(N+1)}}{4} \sum_{j=1}^N \lambda^{2j(1-H)} \leq \sum_{j=1}^N \lambda^{2j(1-H)} \left(\frac{u-v}{2}\right)^2 \leq \frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)}. \end{align} But \begin{align*} \frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)} &= \frac{\lambda^{-2N}}{4} \lambda^{2(1-H)} \frac{\lambda^{2N(1-H)}-1}{\lambda^{2(1-H)}-1} \\ &= \frac{\lambda^{2(1-H)}}{4(\lambda^{2(1-H)}-1)} \left(\lambda^{-2NH}-\lambda^{-2N}\right). \end{align*}
Using the definition of $N$, we get \begin{align*}
|u-v|^{2H} - \lambda^2\ |u-v|^2 \leq \lambda^{-2NH}-\lambda^{-2N} \leq
\lambda^{2H}\ |u-v|^{2H} - |u-v|^2. \end{align*} Then there exist two constants $c_1>0$ and $c_2>0$ such that for all $u,v\in I$, \begin{align*}
c_1\ |u-v|^{2H}
\leq \frac{\lambda^{-2N}}{4} \sum_{j=1}^N \lambda^{2j(1-H)} \leq c_2\ |u-v|^{2H}. \end{align*} Then, the result follows from \eqref{eq:GW-const-up}, \eqref{eq:GW-const-low} and \eqref{eq:GW-const-ineq-1er}.
\end{proof}
When the function $H:\mathbf{R}_+\rightarrow (0,1)$ is $\beta$-H\"older continuous (and no longer constant), the two-sided bound (\ref{eq:GW-const}) is replaced by the following local estimate.
\begin{proposition}\label{prop:GW} Let $X=\{X_t;\;t\in\mathbf{R}_+\}$ be a generalized Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be $\beta$-H\"older-continuous with $\beta>0$.
Then, for any $t_0\in\mathbf{R}_+$,
for all $\epsilon > 0$, there exist $\rho_0>0$ and positive constants $c_1, c_2, c_3, c_4$ such that for all $u,v\in B(t_0,\rho_0)$, \begin{align}
& c_1\ |u-v|^{2 H(t_0)+\epsilon} + c_3\ [H(u) - H(v)]^2 \leq \mathbf{E}[|X_u-X_v|^2] \label{eq:GW-min}\\ &\textrm{and} \qquad
\mathbf{E}[|X_u-X_v|^2] \leq c_2\ |u-v|^{2 H(t_0)-\epsilon} + c_4\ [H(u) - H(v)]^2. \label{eq:GW-max} \end{align}
\end{proposition}
\begin{proof}
Since the function $H:\mathbf{R}_+\rightarrow (0,1)$ is continuous, for all $t_0\in\mathbf{R}_+$ and all $\epsilon>0$, there exists $\rho_0>0$ such that \begin{align*} \forall u,v\in B(t_0,\rho_0),\quad H(u), H(v) \in (H(t_0)-\epsilon; H(t_0)+\epsilon). \end{align*}
Then, the first term of the expression (\ref{eq:GW-inc-var}) for $\mathbf{E}[|X_u-X_v|^2]$ satisfies \begin{align*} 2\sum_{j\geq 1} \lambda^{-j [H(u)+H(v)]}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \leq 2\sum_{j\geq 1} \lambda^{-2j (H(t_0)-\epsilon)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \end{align*} and \begin{align*} 2\sum_{j\geq 1} \lambda^{-j [H(u)+H(v)]}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \geq 2\sum_{j\geq 1} \lambda^{-2j (H(t_0)+\epsilon)}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right). \end{align*} Then, according to Lemma \ref{lem:GW-const}, there exist two constants $c_1>0$ and $c_2>0$ such that for all $u,v\in B(t_0,\rho_0)$, \begin{align}\label{eq:GW-first}
c_1\ |u-v|^{2(H(t_0) + \epsilon)}
\leq 2\sum_{j\geq 1} \lambda^{-j [H(u)+H(v)]}\ \sin^2\left(\lambda^j \frac{u-v}{2}\right) \leq c_2\ |u-v|^{2(H(t_0) - \epsilon)}. \end{align}
For the second term of the expression (\ref{eq:GW-inc-var}) for $\mathbf{E}[|X_u-X_v|^2]$, we consider the function $\psi_{\lambda,j}:x\mapsto \lambda^{-jx}=e^{-jx \ln\lambda}$, whose derivative is $\psi'_{\lambda,j}(x) = -j \ln\lambda \ \lambda^{-jx}$.
By the mean value theorem, for all $u, v\in B(t_0,\rho_0)$, there exists $h_{uv}$ between $H(u)$ and $H(v)$ (i.e. in either $(H(u),H(v))$ or $(H(v),H(u))$) such that \begin{align*}
|\lambda^{-j H(u)} - \lambda^{-j H(v)}|
= |H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j h_{uv}}. \end{align*} Since $H(u)$ and $H(v)$ belong to the interval $(H(t_0)-\epsilon,H(t_0)+\epsilon)$, we also have $H(t_0)-\epsilon<h_{uv}<H(t_0)+\epsilon$, and therefore \begin{align*}
&|H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j (H(t_0)+\epsilon)}
\leq |\lambda^{-j H(u)} - \lambda^{-j H(v)}| \\
\textrm{and}\quad &|\lambda^{-j H(u)} - \lambda^{-j H(v)}|
\leq |H(u)-H(v)|\ j \ln\lambda \ \lambda^{-j (H(t_0)-\epsilon)}. \end{align*} Since $\sum_{j\geq 1}j\lambda^{-j(H(t_0)-\epsilon)} < +\infty$ and $\sum_{j\geq 1}j\lambda^{-j(H(t_0)+\epsilon)} < +\infty$, the sum appearing in the second term of (\ref{eq:GW-inc-var}) satisfies \begin{align}\label{eq:GW-2nd} c_3\ [H(u)-H(v)]^2 \leq \sum_{j\geq 1}\left[\lambda^{-j H(u)} - \lambda^{-j H(v)}\right]^2 \leq c_4\ [H(u)-H(v)]^2. \end{align} The result follows from (\ref{eq:GW-inc-var}), (\ref{eq:GW-first}) and (\ref{eq:GW-2nd}). \end{proof}
The following result shows that Theorem \ref{thmainunif} allows us to derive the Hausdorff dimensions of the graph and the range of the generalized Weierstrass function.
\begin{corollary}\label{prop:Weierstrass} Let $X=\{X_t;\;t\in\mathbf{R}_+\}$ be a generalized Weierstrass function defined by (\ref{def:Weierstrass}), where the function $H$ is assumed to be $\beta$-H\"older-continuous with $\beta>0$ and satisfies the $(H_{\beta})$-assumption.
Then, the local H\"older exponents and sub-exponents of $X$ are given by \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \widetilde{\bbalpha}_X(t_0)=\underline{\bbalpha}_X(t_0)=H(t_0). \end{align*}
Consequently, the Hausdorff dimensions of the graph and the range of the sample path of $X$ satisfy: With probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &= 2-H(t_0), \\ \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) &= 1. \end{align*}
\end{corollary}
\begin{proof} According to the $(H_{\beta})$-assumption, $H(t_0) < \beta$ for all $t_0\in\mathbf{R}_+$.
Let us fix $t_0\in\mathbf{R}_+$ and consider any $0<\epsilon< 2(\beta-H(t_0))$. From Proposition \ref{prop:GW} and the fact that $H$ is $\beta$-H\"older continuous with $2H(t_0)-\epsilon<2H(t_0)+\epsilon<2\beta$, there exist $\rho_0>0$ and two constants $C_1>0$ and $C_2>0$ such that for all $u,v \in B(t_0,\rho_0)$, \begin{align*}
C_1\ |u-v|^{2H(t_0) + \epsilon} \leq \mathbf{E}[|X_u-X_v|^2] \leq C_2\ |u-v|^{2H(t_0) - \epsilon}. \end{align*}
From the definitions of the deterministic local H\"older exponent and sub-exponent $\widetilde{\bbalpha}_X(t_0)$ and $\underline{\bbalpha}_X(t_0)$, we get \begin{align*} \forall 0<\epsilon< 2(\beta-H(t_0)), \quad &\widetilde{\bbalpha}_X(t_0) \geq H(t_0) - \epsilon/2, \\ &\underline{\bbalpha}_X(t_0) \leq H(t_0) + \epsilon/2 \end{align*} and therefore, $H(t_0)\leq\widetilde{\bbalpha}_X(t_0)\leq\underline{\bbalpha}_X(t_0)\leq H(t_0)$ leads to $\widetilde{\bbalpha}_X(t_0)=\underline{\bbalpha}_X(t_0)=H(t_0)$.
Consequently, by continuity of the function $H$, Theorem \ref{thmainunif} implies: With probability one, \begin{align*} \forall t_0\in\mathbf{R}_+,\quad \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Gr}_X(B(t_0,\rho))) &= 2-H(t_0), \\ \lim_{\rho\rightarrow 0}\dim_{\mathcal{H}} (\mathrm{Rg}_X(B(t_0,\rho))) &= 1. \end{align*}
\end{proof}
\begin{remark} Corollary \ref{prop:Weierstrass} should be compared to Theorem 1 of \cite{hunt}, where the Hausdorff dimension of the graph of the process $\{Y_t;\;t\in\mathbf{R}_+\}$ defined by $$\forall t\in\mathbf{R}_+,\quad Y_t = \sum_{n=1}^{+\infty} \lambda^{-nH} \sin(\lambda^n t + \theta_n),$$
where $\lambda\geq 2$, $H\in (0,1)$ and $(\theta_n)_{n\geq 1}$ are independent random variables uniformly distributed on $[0, 2\pi)$, is proved to be $D=2-H$.
The generalized Weierstrass function $X$ differs from the process $Y$ in the form of the random series (the $\theta_n$'s in the definition of $Y_t$ cannot all be equal) and in the fact that the exponent $H$ is constant in the definition of $Y$, in contrast to $X$. \end{remark}
\end{document} |
\begin{document}
\title{Sensitivity analysis based dimension reduction of multiscale models}
\begin{abstract} In this paper, the sensitivity analysis of a single scale model is employed to reduce the input dimensionality of the related multiscale model, thereby improving the efficiency of its uncertainty estimation. The approach is illustrated with two examples: a reaction model and the standard Ornstein-Uhlenbeck process. Additionally, a counterexample shows that an uncertain input should not be excluded from uncertainty quantification without estimating the sensitivity of the response to this parameter. In particular, an analysis of the function defining the relation between single scale components is required to understand whether single scale sensitivity analysis can be used to reduce the dimensionality of the overall multiscale model input space. \end{abstract}
\section{Introduction}
Results of computational models should be supported by uncertainty estimates whenever precise values of their inputs are not available \cite{ROY20112131,UUSITALO201524,Soize_2017}. This is usually the case, since measurements of inputs can rarely be made exactly, and inputs may involve aleatory uncertainty \cite{OBERKAMPF2002333,URBINA20111114}. Uncertainty Quantification (UQ) of a complex model usually requires powerful computational resources. Moreover, the cost of some UQ methods increases exponentially with the number of uncertain inputs.
Sensitivity analysis (SA) identifies the effects of uncertainty in a model input or group of inputs to the model response. In 1990, Sobol introduced sensitivity indices to measure the effect of input uncertainty on the model output variance \cite{Sob90, Sob107}. In \cite{sobol2001global,SOBOL2007957}, Sobol employs SA in order to fix uncertain parameters with low total sensitivity indices and reduce the model dimensionality.
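The fixing strategy described above can be sketched with a Jansen-type pick-freeze Monte Carlo estimator of the total Sobol indices (a standard estimator, but the toy model $g$ and all parameter choices below are our own hypothetical illustration, not taken from the cited works): inputs whose total index is near zero are candidates for being fixed in a subsequent UQ study.

```python
import numpy as np

def total_sobol_indices(g, n_inputs, n=50000, seed=0):
    """Jansen's pick-freeze estimator of total Sobol indices:
    S_Ti = E[(g(A) - g(AB_i))^2] / (2 Var g), where AB_i is the
    sample A with column i replaced by the matching column of B."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_inputs))
    B = rng.uniform(size=(n, n_inputs))
    gA = g(A)
    var = gA.var()
    S_T = np.empty(n_inputs)
    for i in range(n_inputs):
        AB = A.copy()
        AB[:, i] = B[:, i]
        S_T[i] = np.mean((gA - g(AB)) ** 2) / (2 * var)
    return S_T

# toy model: x2 barely matters, so it could be fixed in further UQ
g = lambda x: x[:, 0] + 0.05 * x[:, 1]
S_T = total_sobol_indices(g, 2)
```

For this toy model the exact total indices are approximately $0.9975$ and $0.0025$, so the second input would be flagged as unessential.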
Here, such an application of SA to multiscale models is considered. A multiscale model is defined as a collection of single scale models that are coupled by scale bridging methods. The approach proposed here consists of examining the type of function coupling the single scale components, followed by estimating the sensitivity of the response of a single scale model. This paper demonstrates that estimates of the single scale model sensitivity can be used to assess the sensitivity of the overall multiscale model response for some classes of multiscale model functions. However, this is not always possible, as will be shown by a counterexample.
Sobol's variance based approach is the preferred method to measure model output sensitivity \cite{SOBOL20093009,KUCHERENKO2009,saltelli2010avoid,SALTELLI201929}. Even though variance is not always the most representative measure of model response uncertainty \cite{BORGONOVO2016869,KUCHERENKO201935}, it is assumed to be so in this work. The proposed approach is based on exploiting the coupled structure of multiscale models, which allows the single scale models to be analysed independently. Therefore, the second assumption is that SA can be performed on the multiscale model components. Additionally, it is assumed that the multiscale model parameters are uncorrelated.
In Section \ref{sec:MSmodels}, a brief description of multiscale models is given. Section~\ref{sec:sa} is devoted to SA, and its application to dimensionality reduction of a multiscale model is discussed in subsection~\ref{sec:SA}. Together with some examples of the sensitivity analysis for multiscale models (subsections~\ref{sec:case_1} and~\ref{sec:case_2}), a counterexample is considered in subsection~\ref{sec:Counterexamples} in order to illustrate that, even though it is tempting to apply the SA result of single scale models to the response of the overall multiscale model, this is not always allowed. Section~\ref{sec:Conclusions} summarizes the results and includes a note on the application of the proposed approaches to some real-world models. Some other cases of multiscale models to which the proposed dimension reduction method can be applied are given in the Appendix. In particular, in \ref{sec:general_res} an upper bound on the sensitivity of the model output is obtained for a general class of coupling functions.
\section{Multiscale model}\label{sec:MSmodels}
Following the concept introduced in the Multiscale Modelling and Simulation Framework (MMSF) \cite{Borgdorff_2013,borgdorff2014performance,Chopard_Falcone_Kunzli_Veen_Hoekstra_2018}, multiscale models are considered as a set of single scale models coupled using scale bridging methods. The single scale models represent processes that operate on well defined spatio-temporal scales. In MMSF, the single scale models are placed on a scale separation map (SSM), whose axes indicate the spatio-temporal scales. An example of an SSM with a multiscale model that consists of two single scale components is shown in Figure~\ref{fig:MSmodel}. The directed edges between the single scale components indicate their interactions. In general, cyclic and acyclic coupling topologies are recognised: the cyclic one, as in Figure~\ref{fig:MSmodel}, assumes a feedback loop between the components, while in the acyclic one no feedback is present. Here we rely on the assumption of a component-based structure of the multiscale models, as well as on a drastic difference in the computational cost of the single scale components.
\begin{figure}
\caption{Scale separation map. The functions $G(f, \cdot)$ and $h$ are the macro and micro models with inputs $x$ and $\xi$, respectively. The function $G(f(x), h(\xi))$ defines the relation between the response of the micro model and the rest of the macro model parameters denoted by $f$. The final multiscale model output $z = g(x, \xi) = G(f(x), h(\xi))$ is produced by the macro model.}
\label{fig:MSmodel}
\end{figure}
The overall multiscale model is denoted by a function $g(x, \xi) = z$ such that $$g:\mathbb{R}^{n+m} \to \mathbb{R}^q$$
with $n, m, q \in \mathbb{N}$ and $\mathbb{E}[|g|^2] < \infty$, which produces the Quantity of Interest (QoI) $z$. We introduce a function $G: \mathbb{R}^{s+p} \to \mathbb{R}^q$, with $s, p \in \mathbb{N}$, as a representation of $g$, which underlines the relationship between the micro model response and the remaining variables inside the macro model, denoted by the function $f$: \begin{equation*}
g(x, \xi) = G(f(x), h(\xi)). \end{equation*}
Therefore, the function $G(f(x), \cdot)$ represents the macro model for some $f: \mathbb{R}^{n} \to \mathbb{R}^s$ which depends on parameters $x = (x_{1}, \dots, x_{n})$. It is assumed that $f$ can be executed in a relatively short computational time, that it has a finite non-zero variance, i.e. $\mathbb{E}[|f|^2] < \infty$ and $f$ is not constant, and that it is possible to obtain its output sensitivity.
The micro scale component is defined by a function $h: \mathbb{R}^{m} \to \mathbb{R}^p$ which satisfies $\mathbb{E}[|h|^2] < \infty$. The sets of variables on which the function $h$ depends\footnote{Additionally, $h$ may depend on the macro model response. When this is the case, the micro model function is denoted by $h(x,\xi)$ or $h(x)$, meaning that it depends on the same uncertain inputs as the macro model function $f$. This is a relevant feature of the method presented here.} are of the form $\xi = (\xi_{1}, \dots, \xi_{m})$.
Without loss of generality, later in the text it is assumed that the uncertain inputs $x$ and $\xi$ follow uniform distributions $\mathcal{U}([0, 1]^n)$ and $\mathcal{U}([0, 1]^m)$, respectively.
\section{Sensitivity analysis}\label{sec:sa}
Sensitivity analysis identifies the effect of uncertainty in the model input parameters on the model response \cite{Sal101}. The Sobol sensitivity indices \cite{Sob90,SOBOL20093009} (SIs) are widely used to measure the response sensitivity. The total SI of an input $x_i$ for the results of the multiscale model function $g(x, \xi)=z$ is given by \begin{align} \begin{split}
\label{eq:STg}
S^g_{T_{x_i}}
&= \frac{\mathrm{Var}(z) - \mathrm{Var}_{x_{\sim i}, \xi} \left(\mathbb{E}_{x_i}[z|x_{\sim i}, \xi] \right) }{\mathrm{Var}(z)}
\\
&= \frac{\int |g(x, \xi)|^2 dx d\xi - \int |\int g(x, \xi) dx_i|^2 dx_{\sim i} d\xi} {\int |g(x, \xi)|^2 dx d\xi -|g_0|^2}, \end{split} \end{align} where $g_0 = \mathbb{E}[g(x, \xi)]$, and the notation $x_{\sim i} = (x_{1}, \dots, x_{i - 1}, x_{i + 1}, \dots, x_{n})$ is employed \cite{HOMMA19961}. In \cite{Sob90,SOBOL2007957}, the total SIs were employed to identify the effective dimensions of a model function and to fix unessential variables. In particular, it was shown that, when fixing $x_i$ to a value $x^0_i$ in $[0, 1]$, the error defined by \begin{equation*}
\delta(x^0_i) = \frac{\int \left|g(x, \xi) - g(x_{\sim i},x^0_i, \xi)\right|^2 dx d\xi}{\mathrm{Var}(z)} \end{equation*} satisfies \begin{equation}\label{eq:P_error} P \left(\delta(x^0_i) < \left(1 + \frac{1}{\varepsilon} \right)S^g_{T_{x_i}}\right) > 1 - \varepsilon \end{equation} for any $\varepsilon > 0$. This result is applied in this work: it means that fixing an input with a low total sensitivity index is expected, with high confidence, not to produce a large error in the uncertainty estimates. This fact can then be employed to reduce the input dimensionality, so that UQ can be performed more efficiently. However, sensitivity indices are usually not known in advance, and their estimation can itself be a computationally expensive task.
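For concreteness, total SIs of the form \eqref{eq:STg} can be estimated by plain Monte Carlo sampling. The following sketch is our illustration, not part of the method description above; the helper name \texttt{total\_sobol} and the toy model are ours. It uses a Jansen-type estimator for inputs distributed as $\mathcal{U}([0,1]^d)$.

```python
import numpy as np

def total_sobol(model, d, N=2**14, seed=0):
    """Jansen-type Monte Carlo estimator of the total Sobol indices
    S_T_i for a vectorised model with d inputs ~ U([0,1]^d)."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, d))              # first independent sample matrix
    B = rng.random((N, d))              # second independent sample matrix
    fA = model(A)
    var = fA.var()
    S_T = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]              # resample only the i-th input
        S_T[i] = 0.5 * np.mean((fA - model(AB)) ** 2) / var
    return S_T

# Sanity check on f(x) = x1 + 2*x2, whose exact total indices are 0.2 and 0.8
S = total_sobol(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

Any black box estimator of the total SIs, such as the one in \cite{SOBOL2007957}, can be used in its place.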
\subsection{Sensitivity analysis of multiscale models}\label{sec:SA}
In this work, it is proposed to evaluate the response sensitivity of the computationally cheap single scale model $f$ to estimate an upper bound of the sensitivity of the multiscale model output $z$. This approach can be highly computationally efficient; however, the method does not work in general.
In order to fix uncertain inputs according to single scale model SA, it must be shown that a low total sensitivity of an input $x_i$ for the single scale model also implies a low total sensitivity for the output of the model $g(x, \xi)$, i.e. that $S^{g}_{T_{x_{i}}} \ll 1$ whenever $S^{f}_{T_{x_{i}}} \ll 1$. This cannot be assumed in general, and it depends on the form of the model function $G$.
The first step of the proposed approach is to analyse the multiscale model function $G$, as shown in the following sections. In the cases in which our method applies, the next step is to estimate numerically $S^f_{T_{x_i}}$ for $i=1, \dots, n$ by a black box method, for instance the one from \cite{SOBOL2007957}. Then, if it is found that $S^{f}_{T_{x_{i}}} \ll 1$, it follows automatically that $S^{g}_{T_{x_{i}}} \ll 1$. Hence, according to \eqref{eq:P_error}, uncertainty can be estimated with $x_i$ fixed without producing a large error.
While the results stated below also hold for vector-valued functions, using the definition of total SI given in \eqref{eq:STg}, we shall work mainly with scalar functions in order to keep the notation light.
\subsubsection{Case 1}\label{sec:case_1} We start by considering the multiplicative case: $G: \mathbb{R}^2 \to \mathbb{R}$, given by $G(u, v) = uv$.
\begin{theorem} \label{thm:case_1} Let $g:(0, 1)^{m + n} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{m + n})$ such that \begin{equation*}
\label{eq:case1}
g(x, \xi) = f(x)h(\xi), \end{equation*} for some $f : (0, 1)^{n} \to \mathbb{R}$ and $h: (0, 1)^{m} \to \mathbb{R}$ satisfying $f \in L^{2}((0, 1)^{n})$ and $h \in L^{2}((0, 1)^{m})$. Then, we have \begin{equation} \label{eq:STf_case1}
S^g_{T_{x_i}} = \lambda_{f, h} S^{f}_{T_{x_{i}}}, \end{equation} where \begin{equation*} \label{eq:lambda_f_h} \lambda_{f, h} = \frac{\int f(x)^{2} \, dx - f_{0}^{2}}{\int f^2(x) dx- \frac{f^2_0 h^2_0}{\int h^2(\xi) d\xi}}. \end{equation*} In particular, \begin{equation} \label{eq:STf_case1_upper} S^g_{T_{x_i}} \leq S^f_{T_{x_i}}, \end{equation} and \begin{equation} \label{eq:STf_case1_lower} S^{g}_{T_{x_{i}}} \ge \left ( 1 - \frac{f_{0}^{2}}{\int f^2(x) dx} \right ) S^{f}_{T_{x_{i}}}. \end{equation} \end{theorem} \begin{proof} The total SI of the input $x_i$ for the results of the model $g(x, \xi)$ is equal to \begin{align*} \begin{split}
S^g_{T_{x_i}} &= \frac{\int \int f^2(x) h^2(\xi) dx d\xi - \int \int (\int f(x) h(\xi) dx_i )^2 dx_{\sim i} d\xi} {\int \int f^2(x) h^2(\xi) dx d\xi - (f_0 h_{0})^{2}} \\
&= \frac{\int f^2(x) dx - \int (\int f(x) dx_i )^2 dx_{\sim i}} {\int f^2(x) dx- \frac{f^2_0 h^2_0}{\int h^2(\xi) d\xi}}\\
&= \frac{\int f(x)^{2} \, dx - f_{0}^{2}}{\int f^2(x) dx- \frac{f^2_0 h^2_0}{\int h^2(\xi) d\xi}}
\frac{\int f^2(x) dx - \int (\int f(x) dx_i )^2 dx_{\sim i}}{\int f^2(x) dx- f^2_0}\\
&= \frac{\int f(x)^{2} \, dx - f_{0}^{2}}{\int f^2(x) dx- \frac{f^2_0 h^2_0}{\int h^2(\xi) d\xi}} S^{f}_{T_{x_{i}}}, \end{split} \end{align*} from which \eqref{eq:STf_case1} follows.
By the Cauchy-Schwarz inequality, $$h^2_0 \leq \int h^2(\xi) d\xi.$$ Therefore, $\lambda_{f, h} \le 1$, and \eqref{eq:STf_case1_upper} is obtained. In addition, since the term $\frac{f^2_0 h^2_0}{\int h^2(\xi) d\xi}$ subtracted in the denominator is non-negative, we get \begin{equation*} \lambda_{f, h} \ge \frac{\int f(x)^{2} \, dx - f_{0}^{2}}{\int f^2(x) dx} = 1 - \frac{f_{0}^{2}}{\int f^2(x) dx} > 0 \end{equation*} for any $h \in L^{2}((0, 1)^{m})$, where the last inequality holds because $f$ is not constant. Hence, \eqref{eq:STf_case1_lower} is obtained. \end{proof}
Therefore, if a low sensitivity to the parameter $x_i$ is identified by computing $S^f_{T_{x_i}}$, this parameter can be excluded from UQ of the whole multiscale model. On the other hand, inequality \eqref{eq:STf_case1_lower} provides a lower bound for the total SI of the input $x_{i}$ for the model $g(x, \xi) = f(x) h(\xi)$ which is independent of the choice of the function $h(\xi)$. In particular, if $x_{i}$ is an important variable for the model $f(x)$, then \eqref{eq:STf_case1_lower} implies that it cannot dramatically lose its importance in the model given by $g$.
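Equality \eqref{eq:STf_case1} and bound \eqref{eq:STf_case1_upper} are easy to check numerically. A minimal sketch, using a hypothetical product model $g(x,\xi) = (x_1 + 0.1\,x_2)(1+\xi)$ of our own choosing (all inputs $\mathcal{U}(0,1)$, so that $x_2$ has a low total index for $f$) and a Jansen-type Monte Carlo estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2**16

def jansen_total(model, d, i):
    # Jansen-type Monte Carlo estimator of the total index of input i
    A, B = rng.random((N, d)), rng.random((N, d))
    yA = model(A)
    AB = A.copy()
    AB[:, i] = B[:, i]
    return 0.5 * np.mean((yA - model(AB)) ** 2) / yA.var()

# Hypothetical components: f(x) = x1 + 0.1*x2, h(xi) = 1 + xi
f = lambda X: X[:, 0] + 0.1 * X[:, 1]
g = lambda X: (X[:, 0] + 0.1 * X[:, 1]) * (1.0 + X[:, 2])

S_f = jansen_total(f, 2, i=1)   # total index of x2 for f (exactly 0.01/1.01)
S_g = jansen_total(g, 3, i=1)   # total index of x2 for g; Theorem 1: S_g <= S_f
```

Up to Monte Carlo noise, the estimates satisfy $S^g_{T_{x_2}} \le S^f_{T_{x_2}}$, as \eqref{eq:STf_case1_upper} predicts.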
\begin{example}[Reaction equation]
An example of Case 1 is a reaction equation, represented by an acyclic model \cite{chopard2014framework} with initial conditions provided by some function $f(x)$: \begin{align*} \begin{split} \frac{\partial z(t, x, \xi)}{\partial t} &= -\psi(\xi)z(t, x, \xi), \\ z(0, x, \xi) &= f(x), \end{split} \end{align*} where $x$ and $\xi$ are uncertain model inputs. The analytical solution of the equation is $$z(t, x, \xi) = f(x) e^{-t\psi(\xi)}.$$ Therefore, if we define $h_t(\xi) = e^{-t\psi(\xi)}$, we get $$z(t, x, \xi) = f(x) h_t(\xi),$$ and Theorem~\ref{thm:case_1} can be applied.
Since the proposed approach is applicable to multiscale models regardless of the complexity of $f$ and $h$, in this example the model components are represented by the following equations: \begin{align*} \begin{split} \psi(\xi) &= \xi_1^2 - \xi_2,\\ f(x) &= x_1^2 + x_1x_2x_3 + x_3^3 - x_1x_3, \end{split} \end{align*} where the uncertain parameters $x$ are uniformly distributed as $\mathcal{U}(0.9, 1.1)$, $\xi_1$ is uniformly distributed on $[0.07, 0.09]$, and $\xi_2$ on $[0.05, 0.09]$.
Sensitivity analysis of the function $f$ results in: \begin{align*} \begin{split} S^f_{T_{x_1}} &\approx 2.9 \cdot 10^{-1},\\ S^f_{T_{x_2}} &\approx 7.2 \cdot 10^{-2},\\ S^f_{T_{x_3}} &\approx 6.5 \cdot 10^{-1}, \end{split} \end{align*} suggesting that the parameter $x_2$ does not significantly affect the output of the function $f$. Therefore, by Theorem~\ref{thm:case_1}, the value of this parameter can be equated to its mean when estimating uncertainty of the overall model response $z$.
Figure~\ref{fig:comparison_UQ_case1}~(a) illustrates a satisfactory match between the mean values and standard deviations obtained by sampling the results varying all the uncertain inputs and keeping the input $x_2$ equal to its mean value. Figure~\ref{fig:comparison_UQ_case1}~(c) shows that the relative error in the standard deviation does not exceed 3.5\% at any simulation time. Moreover, the resulting $p$-value of Levene's test \cite{LIM1996287} is about 0.84. Therefore, the null hypothesis that the samples are obtained from distributions with equal variances cannot be rejected.
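The comparison above can be reproduced, up to Monte Carlo noise, with a short script; the sample size, seed, and the common-random-numbers design are our choices:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000
t = 100.0  # final simulation time T_end

# Uncertain inputs, distributed as stated in the text
x = rng.uniform(0.9, 1.1, size=(N, 3))
xi1 = rng.uniform(0.07, 0.09, N)
xi2 = rng.uniform(0.05, 0.09, N)

def f(x1, x2, x3):
    return x1**2 + x1*x2*x3 + x3**3 - x1*x3

h_t = np.exp(-t * (xi1**2 - xi2))                # micro-model factor h_t(xi)
z_full = f(x[:, 0], x[:, 1], x[:, 2]) * h_t      # all inputs vary
z_red  = f(x[:, 0], 1.0,     x[:, 2]) * h_t      # x2 fixed to its mean value

err_mean = abs(z_red.mean() - z_full.mean()) / abs(z_full.mean())
err_std  = abs(z_red.std()  - z_full.std())  / z_full.std()
```

With the unimportant input $x_2$ frozen, the mean is preserved and the relative error in the standard deviation stays small, consistent with Figure~\ref{fig:comparison_UQ_case1}~(c).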
\begin{figure}
\caption{(a) Comparison of the estimated mean and standard deviation of the model response $z^t$ using the original sample and the sample with the unimportant parameter $x_2$ equal to its mean value (reduced); (b) and (d) Comparison of the probability density functions and the cumulative distribution functions at the final simulation time $T_{end}=100$; (c) Relative error in the estimated mean and standard deviation using the samples with the reduced number of uncertain input.}
\label{fig:comparison_UQ_case1}
\end{figure}
Figures~\ref{fig:comparison_UQ_case1}~(b) and (d) show the probability density functions (PDFs) and the cumulative distribution functions (CDFs) of the uncertain model output $z$ at the final simulation time obtained using these two samples. There is a good match in the PDFs and CDFs: a Kolmogorov–Smirnov (K-S) two-sample test gives a K-S distance of nearly $3.6\cdot10^{-4}$ and a $p$-value larger than $0.5$; therefore, the hypothesis that the two samples are drawn from the same distribution cannot be rejected\footnote{This conclusion also applies to the other simulation times (data not shown).}. \end{example}
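For reference, the two-sample K-S distance used above is simply the maximum absolute difference between the two empirical CDFs; a minimal implementation (ours; in practice a library routine would be used) is:

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the maximum absolute
    difference between the empirical CDFs of samples a and b."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])                 # evaluate at all jump points
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

The distance is $0$ for identical samples and $1$ for fully separated ones.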
\subsubsection{Case 2}\label{sec:case_2} We consider the additive case, where the coupling function $G:\mathbb{R}^{2} \to \mathbb{R}$ is given by $G(u, v) = u + v$.
\begin{theorem} \label{thm:case_2} Let $g:(0, 1)^{n + m} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{n + m})$ such that \begin{equation*}
\label{eq:case3}
g(x, \xi) = f(x) + h(\xi), \end{equation*} for some $f:(0, 1)^{n} \to \mathbb{R}$ and $h: (0, 1)^{m} \to \mathbb{R}$ satisfying $f \in L^{2}((0, 1)^{n})$ and $h \in L^{2}((0, 1)^{m})$. Then, we have \begin{equation} \label{eq:STf_case2} S^g_{T_{x_i}} = \mu_{f, h} S_{T_{x_{i}}}^{f}, \end{equation} where \begin{equation*} \mu_{f, h} := \frac{1}{1 + \frac{\mathrm{Var}(h)}{\mathrm{Var}(f)}}. \end{equation*} In particular, $S^{g}_{T_{x_{i}}} \le S_{T_{x_{i}}}^{f}$. \end{theorem}
\begin{proof} The total SI of the input $x_i$ for the results of the model $g$ is equal to \begin{align*} \begin{split}
S^g_{T_{x_i}} &= \frac{\int (f(x) + h(\xi))^2 dx d\xi - \int \left(\int (f(x) + h(\xi)) dx_i \right)^2 dx_{\sim i} d\xi}
{\int (f(x) + h(\xi))^2 dx d\xi - (f_{0} + h_{0})^2}
\\
&= \frac{\int f^2(x) dx - \int (\int f(x) dx_i )^2 dx_{\sim i}}
{\int f^2(x) dx - f_0^2 + \int h^2(\xi)d\xi- h_0^2}, \end{split} \end{align*} from which we get \eqref{eq:STf_case2} by dividing numerator and denominator by $\mathrm{Var}(f)$.
Clearly, $\mu_{f, h} \in (0, 1]$, and so we conclude that $S^{g}_{T_{x_{i}}} \le S_{T_{x_{i}}}^{f}$.
\end{proof}
Therefore, if the parameter $x_i$ is unimportant for $f$, it can be equated to its mean value in the uncertainty estimation of the model $g$.
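As with Case 1, equality \eqref{eq:STf_case2} can be checked numerically. A sketch of ours with a hypothetical additive model $g = x_1 + 2x_2 + 3\xi$ (all inputs $\mathcal{U}(0,1)$), for which $\mathrm{Var}(f) = 5/12$, $\mathrm{Var}(h) = 9/12$, hence $\mu_{f,h} = 5/14$ and $S^g_{T_{x_1}} = (5/14)\cdot 0.2 = 1/14$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2**16
A, B = rng.random((N, 3)), rng.random((N, 3))

# Hypothetical additive model: f(x) = x1 + 2*x2, h(xi) = 3*xi (column 2)
g = lambda X: X[:, 0] + 2.0 * X[:, 1] + 3.0 * X[:, 2]

AB = A.copy()
AB[:, 0] = B[:, 0]                       # resample x1 only (Jansen estimator)
gA = g(A)
S_g_x1 = 0.5 * np.mean((gA - g(AB)) ** 2) / gA.var()
```

The Monte Carlo estimate agrees with the exact value $1/14 \approx 0.071$ predicted by Theorem~\ref{thm:case_2}.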
\begin{example}[Standard Ornstein-Uhlenbeck process]
An example of Case 2 can be a multiscale model whose micro scale dynamics does not depend on the macro scale response. Let us consider the system (Figure~\ref{fig:results_case2}~(a)) \cite{Weinan_2011,weinan2005analysis}: \begin{align*} \begin{split}
\frac{\partial z}{\partial t} &= v + f(x),\\
\frac{\partial v}{\partial t} &= -\frac{1}{\epsilon} v + \frac{1}{\sqrt{\epsilon}} \dot{W}_t, \\ f(x) &= - x_1 + (x_2^2 \, x_3 + x_4), \end{split} \end{align*} where $z$ simulates the slow process with $z(t=0) = 1$, $v$ is the fast process with $v(t=0)=1$, $\epsilon=10^{-2}$, and $\dot{W}_t$ is a white noise with unit variance. The fast dynamics is the standard Ornstein-Uhlenbeck process. At any simulation time $t$, $\dot{W}_{t}$ plays the role of $\xi$ in Theorem~\ref{thm:case_2}. The macro model uncertain parameters $x=(x_1, x_2, x_3, x_4)$ follow normal distributions: $x_1\sim \mathcal{N}(0, 10^{-4})$, $x_2\sim \mathcal{N}(0, 2.5 \cdot 10^{-4})$, $x_3\sim \mathcal{N}(0, 2.5 \cdot 10^{-6})$, $x_4\sim \mathcal{N}(0, 2.5 \cdot 10^{-6})$.
The system is simulated using the forward Euler method with the macro time step $\Delta t_M = 1$ and the micro time step $\Delta t_{\mu} = 10^{-2}$.
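A minimal sketch of this macro-micro time stepping follows; it is our reading of the scheme (forward Euler on the slow variable, Euler-Maruyama sub-cycling of the fast OU process within each macro step), with implementation details assumed:

```python
import numpy as np

def simulate_z(x, T=100.0, dt_M=1.0, dt_mu=1e-2, eps=1e-2, seed=0):
    """Forward Euler macro stepping coupled to Euler-Maruyama sub-cycling
    of the fast Ornstein-Uhlenbeck process.  A sketch of the scheme
    described in the text; details are assumptions."""
    rng = np.random.default_rng(seed)
    f = -x[0] + (x[1] ** 2 * x[2] + x[3])       # macro drift term f(x)
    z, v = 1.0, 1.0                              # initial conditions from the text
    n_micro = int(round(dt_M / dt_mu))
    for _ in range(int(round(T / dt_M))):
        for _ in range(n_micro):                 # advance the fast process
            v += -(v / eps) * dt_mu + np.sqrt(dt_mu / eps) * rng.standard_normal()
        z += (v + f) * dt_M                      # advance the slow process
    return z

z_end   = simulate_z(np.zeros(4))
z_shift = simulate_z(np.array([0.0, 0.0, 0.0, 1.0]))  # same noise, f shifted by 1
```

Since the noise path is identical for a fixed seed, shifting $f$ by $1$ shifts the final $z$ by exactly $T = 100$, which exposes the additive structure $g = f + h$ exploited by Theorem~\ref{thm:case_2}.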
\begin{figure}
\caption{(a) Standard Ornstein-Uhlenbeck process; (b) Comparison of UQ result using the original sample and the sample obtained with values of the unimportant parameters $x_2$ and $x_3$ equal to their mean (reduced); (c) and (e) Comparison of the PDF and CDF at the final time step; (d) Relative error in the estimation of the mean and standard deviation.}
\label{fig:results_case2}
\end{figure}
Sensitivity analysis of the function $f(x)$ yields \begin{align*} \begin{split} S^f_{T_{x_1}} &\approx 7.7 \cdot 10^{-1},\\ S^f_{T_{x_2}} &\approx 2.6 \cdot 10^{-4},\\ S^f_{T_{x_3}} &\approx 3.9 \cdot 10^{-4},\\ S^f_{T_{x_4}} &\approx 2.0 \cdot 10^{-1}. \end{split} \end{align*} At any simulation time, the inputs $x_2$ and $x_3$ do not significantly influence the output of the function $f$. Therefore, as a consequence of Theorem~\ref{thm:case_2}, they can be equated to their mean values without a substantial loss of accuracy in the uncertainty estimate.
The uncertainty estimation results for $z$ are presented in Figure~\ref{fig:results_case2}~(b). As proven analytically, the estimates obtained by sampling the model results with the uncertain parameters $x_2$ and $x_3$ equal to their mean values are close to those resulting from samples where all the uncertain inputs vary. At any simulation time, the relative error between these estimates of the standard deviation does not exceed $1.1\%$ (Figure~\ref{fig:results_case2}~(d)). Additionally, Levene's test yields a $p$-value of about $0.66$; therefore, we cannot reject the hypothesis that the two samples are drawn from distributions with the same variance.
The PDFs and CDFs of the model result at the final time point obtained from these two samples are shown in Figure~\ref{fig:results_case2}~(c) and (e). There is a good match between the PDFs and CDFs obtained from the two samples, and the K-S test produces a distance of about $0.01$ and a $p$-value of about $0.47$; therefore, the hypothesis that the two samples are drawn from the same distribution cannot be rejected. \end{example}
Some additional cases of the function $G$ for which the method of eliminating unimportant parameters to reduce the input dimensionality is valid are presented in the Appendix.
\subsubsection{Counterexample}\label{sec:Counterexamples}
In this section, the importance of examining the properties of the function $G$ is demonstrated. The counterexample illustrates that low sensitivity of the response of a function $f$ to a parameter does not necessarily imply low sensitivity of the response of the function $g$ to the same parameter.
\begin{example}[Total sensitivity indices of composite functions]
Let $n = 2, m = 1$ and $i = 2$, \begin{align}
\frac {\partial z}{\partial x_1} = -\frac{1}{4\sqrt[4]{\beta}} |x_1 + x_2 + \xi|^{-\frac{5}{4}}, \label{eq:counterexample_1} \end{align}
for $(x_1, x_2, \xi) \in (0, 1)^3$, with $z(0, x_2, \xi) = \frac{1}{\sqrt[4]{\beta |x_2 + \xi|}}$ and $\beta > 0$ some fixed parameter. The solution to equation~\eqref{eq:counterexample_1} can be represented using the following system \begin{align*} u=f(x) & = x_{1} + \beta x_{2},\\ v=h(x_{1}, \xi) & = (1 - \beta) x_{1} - \beta \xi, \\
G(u, v) &= \frac{1}{\sqrt[4]{|u - v|}}, \end{align*} so that \begin{equation*} z(x, \xi) = g(x, \xi) = \frac{1}{\sqrt[4]{\beta}} \frac{1}{\sqrt[4]{x_{1} + x_{2} + \xi}}. \end{equation*} Let us now directly obtain sensitivity indices of the function $f(x)$ for the parameter $x_2$: \begin{align*} \begin{split} S^{f}_{T_{x_{2}}} = \frac{\frac{1}{3} + \frac{\beta^{2}}{3} + \frac{\beta}{2} - \left( \frac{1}{3} + \frac{\beta^{2}}{4} + \frac{\beta}{2} \right)}{\frac{1}{3} + \frac{\beta^{2}}{3} + \frac{\beta}{2} - \left( \frac{\beta + 1}{2} \right)^2} = \frac{\frac{\beta^{2}}{12}}{\frac{\beta^{2} + 1}{12}} = \frac{\beta^{2}}{1 + \beta^{2}}. \end{split} \end{align*} Note that $S^{f}_{T_{x_{2}}}$ can be made arbitrarily small as $\beta \to 0$: for instance, by choosing $\beta \in \left (0, \frac{1}{10} \right )$, we get \begin{equation*} S^{f}_{T_{x_{2}}} < \frac{1}{100}, \end{equation*} so that $x_{2}$ becomes an unimportant input for $f$.
On the other hand, sensitivity of the function $g(x, \xi)$ does not depend on $\beta$: \begin{align*} \begin{split}
S^{g}_{T_{x_{2}}} &= \frac {\frac{1}{\sqrt{\beta}}\int_{(0, 1)^3} \frac{1}{ \sqrt{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi- \frac{1}{\sqrt{\beta}} \int_{(0, 1)^2} \left(\int_{(0, 1)} \frac{1}{\sqrt[4]{x_{1} + x_{2} + \xi}}dx_2\right)^2 dx_1d\xi} {\frac{1}{\sqrt{\beta}}\int_{(0, 1)^3} \frac{1}{\sqrt{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi- \frac{1}{\sqrt{\beta}}\left( \int_{(0, 1)^3} \frac{1}{\sqrt[4]{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi \right)^2}\\ &=\frac{\int_{(0, 1)^3} \frac{1}{\sqrt{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi-
\int_{(0, 1)^2} \left(\int_{(0, 1)} \frac{1}{\sqrt[4]{x_{1} + x_{2} + \xi}}dx_2\right)^2 dx_1d\xi} {\int_{(0, 1)^3} \frac{1}{\sqrt{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi- \left( \int_{(0, 1)^3} \frac{1}{\sqrt[4]{x_{1} + x_{2} + \xi}} dx_1 dx_2 d\xi \right)^2}. \end{split} \end{align*} In addition, since $g$ is symmetric in $x_1$, $x_2$ and $\xi$, \begin{equation*} S^g_{T_{x_{2}}} = S^g_{T_{x_{1}}} = S^g_{T_{\xi}}. \end{equation*} Hence, this proves that $x_{2}$ is not an unimportant input for the function $g$, since it must be as relevant as $x_{1}$ and $\xi$. Therefore, in general, it is wrong to eliminate an uncertain input from UQ based only on sensitivity analysis of a single scale model, without verifying that a bound of the form $S^g_{T_{x_{i}}} \leq \lambda S^f_{T_{x_{i}}}$ holds for some moderate constant $\lambda > 0$, as in Theorem~\ref{thm:case_1} and Theorem~\ref{thm:case_2}. \end{example}
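The contrast can also be seen numerically. A sketch of ours, taking $\beta = 0.05$ and dropping the constant factor $\beta^{-1/4}$, since total SIs are invariant under rescaling of the output:

```python
import numpy as np

beta = 0.05
S_f_x2 = beta**2 / (1.0 + beta**2)        # exact value from the text (~2.5e-3)

# Jansen-type Monte Carlo estimate of S^g_{T_{x2}} for
# g(x1, x2, xi) = (x1 + x2 + xi)^(-1/4) on (0,1)^3
rng = np.random.default_rng(2)
N = 2**16
g = lambda X: (X[:, 0] + X[:, 1] + X[:, 2]) ** (-0.25)
A, B = rng.random((N, 3)), rng.random((N, 3))
AB = A.copy()
AB[:, 1] = B[:, 1]                         # resample x2 only
gA = g(A)
S_g_x2 = 0.5 * np.mean((gA - g(AB)) ** 2) / gA.var()
```

The exact index for $f$ is below $10^{-2}$, while the estimated total index of $x_2$ for $g$ remains of order one.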
\section{Concluding remarks}\label{sec:Conclusions}
An application of sensitivity analysis to reduce the dimensionality of multiscale models, in order to improve the performance of their uncertainty estimation, is discussed in this paper. It has been shown that for some multiscale models, the estimates of the Sobol sensitivity indices of a single scale output can be used as an estimate of the upper bound for the sensitivity of the output of the whole multiscale model. In other words, knowledge of the importance of inputs from single scale models can be used to find the effective dimensionality of the overall multiscale model. Two classes of coupling function $G$ (multiplicative, additive) were considered, for which the approach was demonstrated to work, based on Theorems~\ref{thm:case_1} and~\ref{thm:case_2}, and two examples. However, a counterexample was also constructed, showing that the success of the method strongly depends on the properties of the coupling function $G$. Obviously, this analysis only covers a very small portion of the possible coupling functions, and a more systematic or case-by-case investigation would be warranted.
The next step is to apply the proposed approach to real-world multiscale applications, for instance, to a multiscale fusion model \cite{luk2019compat} and to a coupled human heart model \cite{Santiago2018}. Uncertainty quantification applied to these models is computationally expensive due to the high dimension of the model parameter space. Therefore, SA of single scale models to reduce the dimensionality of the overall multiscale model input can be one possible way to improve the efficiency of the model uncertainty quantification.
\section*{Funding} This work is a part of the eMUSC (Enhancing Multiscale Computing with Sensitivity Analysis and Uncertainty Quantification) project. A. N. and A. H. gratefully acknowledge financial support from the Netherlands eScience Center. This project has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement \#800925 (VECMA project).
\section*{Declarations of interest} None.
\appendix\label{sec:appendix} \section*{Appendices} \addcontentsline{toc}{section}{Appendices} \renewcommand{\thesubsection}{\Alph{subsection}}
In this Appendix, additional cases of the function $G$ are considered. In particular, relations between the function $f$ and two or more functions representing the micro model are investigated, in this way allowing for vector valued functions $h$. Overall, our goal here is to show that the method presented in this work can be applied to different types of functions of the multiscale model components.
\section{Case 3} \label{sec:case_3}
Consider the affine linear case: $G: \mathbb{R}^{3} \to \mathbb{R}$ given by $G(u, v_{1}, v_{2}) = u v_{1} + v_{2}.$
\setcounter{theorem}{0} \renewcommand{\thetheorem}{\Alph{section}\arabic{theorem}} \begin{theorem} \label{thm:case_3} Let $g: (0, 1)^{m + n + k} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{m + n + k})$ such that \begin{equation*}\label{eq:case4}
g(x, \xi, \eta) = f(x)h_{1}(\xi) + h_{2}(x_{\sim i}, \eta), \end{equation*} for some $f: (0, 1)^{n} \to \mathbb{R}$, $h_{1}: (0, 1)^{m} \to \mathbb{R}$ and $h_{2}: (0, 1)^{k + n - 1} \to \mathbb{R}$ satisfying $f \in L^{2}((0, 1)^{n})$, $h_{1} \in L^{2}((0, 1)^{m})$, and $h_{2} \in L^{2}((0, 1)^{k + n - 1})$. Then, \begin{equation} \label{eq:STf_case4_th} S^{g}_{T_{x_{i}}} = \gamma_{f, h_{1}, h_{2}} S^{f}_{T_{x_{i}}}, \end{equation} where \begin{equation*} \label{gamma_f_h_varphi_def} \gamma_{f, h_{1}, h_{2}} := \frac{\int f^2(x) dx - f_{0}^{2}}
{\int f^2(x) dx - \frac{f^2_0 (h_{1})^2_0}{\int h_{1}^2(\xi) d\xi}+ \frac{\int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta - (h_{2})_0^2}{\int h_{1}^2(\xi) d\xi} + 2 (h_{1})_0 \frac{ (f h_{2})_{0}-f_{0} (h_{2})_{0}}{\int h_{1}^2(\xi) d\xi}}.
\end{equation*} If, additionally, it is assumed that \begin{equation} \label{covariance_f_varphi} (h_{1})_{0}(f h_{2})_{0} \geq f_{0}(h_{1})_{0}(h_{2})_{0}, \end{equation} then $$S^g_{T_{x_i}} \leq S^f_{T_{x_i}}.$$ \end{theorem}
\begin{proof} We compute \begin{align*} \begin{split} \mathrm{Var}(g)_{T_{x_i}} =& \int \int f^2(x) h_{1}^2(\xi) dx d\xi + \int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta + 2 (h_{1})_0 (f h_{2})_{0} + \\ &
- \int \int \left(\int f(x) h_{1}(\xi) dx_i \right)^2 dx_{\sim i} d\xi + \\ &
- \int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta - 2 (h_{1})_0 (f h_{2})_{0}, \\ \mathrm{Var}(g) =& \int \int f^2(x) h_{1}^2(\xi) dx d\xi + \int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta + 2 (h_{1})_0 (f h_{2})_{0} + \\&
- (h_{1})_{0}^{2} f_{0}^{2} - (h_{2})_{0}^{2} - 2 f_{0} (h_{1})_{0} (h_{2})_{0}. \end{split} \end{align*} Thus, the total SI of the input $x_i$ for the results of the model $g(x, \xi, \eta)$ is equal to \begin{align*} \begin{split}
\label{eq:STf_case3}
S^g_{T_{x_i}} = &\frac{\mathrm{Var}(g)_{T_{x_i}}}{\mathrm{Var}(g)}
\\
= & \frac{\int f^2(x) dx - \int (\int f(x) dx_i )^2 dx_{\sim i}}
{\int f^2(x) dx - \frac{f^2_0 (h_{1})^2_0}{\int h_{1}^2(\xi) d\xi}+ \frac{\int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta- (h_{2})_0^2}{\int h_{1}^2(\xi) d\xi} + 2 (h_{1})_0 \frac{ (f h_{2})_{0}-f_{0}(h_{2})_{0}}{\int h_{1}^2(\xi) d\xi}}, \end{split} \end{align*} from which \eqref{eq:STf_case4_th} follows. By Cauchy-Schwarz inequality, we have \begin{align*} (h_{1})^2_0 & \leq \int h_{1}^2(\xi) d\xi, \\ (h_{2})_{0}^{2} & \leq \int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta, \end{align*} which imply \begin{equation*} \gamma_{f, h_{1}, h_{2}} \le \frac{\mathrm{Var}(f)}
{\mathrm{Var}(f) + 2 (h_{1})_0 \frac{ (f h_{2})_{0}-f_{0}(h_{2})_{0}}{\int h_{1}^2(\xi) d\xi}}.
\end{equation*} To estimate the last term at the denominator, \eqref{covariance_f_varphi} is employed, yielding $$\gamma_{f, h_{1}, h_{2}} \le 1,$$ and the result follows. \end{proof}
Note that, in the previous theorem, $h_{2}$ may be independent of more than one input $x_j$; however, it is crucial to assume independence from the unimportant parameters that we want to exclude from uncertainty quantification.
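Theorem \ref{thm:case_3} can likewise be checked numerically. A sketch of ours with hypothetical components $f = x_1 + x_2$, $h_1 = 1 + \xi$, $h_2 = x_2\eta$ (all inputs $\mathcal{U}(0,1)$, with $h_2$ independent of the tested input $x_1$); here $\mathbb{E}(h_1)\mathrm{Cov}(f, h_2) > 0$, so the theorem gives $S^g_{T_{x_1}} \le S^f_{T_{x_1}}$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**16

def jansen_total(model, d, i):
    # Jansen-type Monte Carlo estimator of the total index of input i
    A, B = rng.random((N, d)), rng.random((N, d))
    yA = model(A)
    AB = A.copy()
    AB[:, i] = B[:, i]
    return 0.5 * np.mean((yA - model(AB)) ** 2) / yA.var()

# columns: 0 -> x1, 1 -> x2, 2 -> xi, 3 -> eta
f = lambda X: X[:, 0] + X[:, 1]
g = lambda X: (X[:, 0] + X[:, 1]) * (1.0 + X[:, 2]) + X[:, 1] * X[:, 3]

S_f_x1 = jansen_total(f, 2, i=0)   # exactly 1/2 by symmetry of f
S_g_x1 = jansen_total(g, 4, i=0)   # Theorem 3: gamma * S_f_x1, with gamma < 1
```

Up to Monte Carlo noise, $S^g_{T_{x_1}} \approx 0.30 < 0.50 \approx S^f_{T_{x_1}}$ for this choice of components.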
\begin{remark} Note that condition \eqref{covariance_f_varphi} is equivalent to assuming that $\mathbb{E}(h_{1}) \mathrm{Cov}(f, h_{2}) \ge 0$, since $\mathrm{Cov}(f, h_{2}) = (f h_{2})_{0} - f_{0} (h_{2})_{0}$. Under the same assumption, one can obtain the following lower bound on $\gamma_{f, h_{1}, h_{2}}$: \begin{equation*} \gamma_{f, h_{1}, h_{2}} \ge \frac{\int f^2(x) dx - f_{0}^{2}}
{\int f^2(x) dx + \frac{\int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta - (h_{2})_0^2}{\int h_{1}^2(\xi) d\xi} + 2 (h_{1})_0 \frac{ (f h_{2})_{0}-f_{0}(h_{2})_{0}}{\int h_{1}^2(\xi) d\xi}}. \end{equation*} On the other hand, if $\mathbb{E}(h_{1}) \mathrm{Cov}(f, h_{2}) \le 0$; that is, $(h_{1})_{0}(f h_{2})_{0} \le f_{0}(h_{1})_{0} (h_{2})_{0}$, then \begin{equation*} \gamma_{f, h_{1}, h_{2}} \ge \frac{\int f^2(x) dx - f_{0}^{2}}
{\int f^2(x) dx + \frac{\int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta - (h_{2})_0^2}{\int h_{1}^2(\xi) d\xi}}. \end{equation*} In addition, if it is assumed that $(h_{1})_{0} (f h_{2})_{0} \le f_{0} (h_{1})_{0} (h_{2})_{0}$ and that $$\mathrm{Var}(f) + \frac{\mathrm{Var}(h_{2})}{\int h_{1}^{2}(\xi) d \xi} + \mathrm{Cov}(f, h_{2}) \ge 0,$$ we obtain the following upper bound for $\gamma_{f, h_{1}, h_{2}}$: \begin{equation*} \gamma_{f, h_{1}, h_{2}} \le \frac{1}
{1 + \frac{\int \int h_{2}^2(x_{\sim i}, \eta) dx_{\sim i} d\eta - (h_{2})_0^2}{\mathrm{Var}(f) \int h_{1}^2(\xi) d\xi} + 2 (h_{1})_0 \frac{ (f h_{2})_{0}-f_{0}(h_{2})_{0}}{\mathrm{Var}(f) \int h_{1}^2(\xi) d\xi}}. \end{equation*} \end{remark}
\section{Case 4}
A variant of the additive case $G(u, v) = u + v$ is considered. The difference from Case 2 (Theorem \ref{thm:case_2}) is that now the functions $f$ and $h$ depend on the same set of variables.
\begin{theorem} \label{thm:case_4} Let $g: (0, 1)^{n} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{n})$ such that \begin{equation*} \label{eq:res_macro_t}
g(x) = f(x) + h(x), \end{equation*} for some $f, h: (0, 1)^{n} \to \mathbb{R}$, $f, h \in L^{2}((0, 1)^{n})$. Then, if $$\mathrm{Cov}(f, h) = (f h)_{0} - f_{0} h_{0} \geq 0,$$ we have \begin{equation} \label{eq:case_5_bound} S^{g}_{T_{x_{i}}} \le \frac{ \left ( \sqrt{ S^{f}_{T_{x_{i}}} \mathrm{Var}(f)} + \sqrt{ S^{h}_{T_{x_{i}}} \mathrm{Var}(h)} \right )^{2}}{\mathrm{Var}(f) + \mathrm{Var}(h)}, \end{equation} and so \begin{equation} \label{eq:case_5_bound_easy} S^{g}_{T_{x_{i}}} \le 2 \max \{S^{f}_{T_{x_{i}}}, S^{h}_{T_{x_{i}}} \}, \end{equation} where the factor $2$ is sharp. \end{theorem}
\begin{proof} By a simple computation, it follows that \begin{align*} \begin{split}
S^{g}_{T_{x_{i}}} & = \frac{\int (f+h)^2 dx - \int (\int (f + h) dx_i)^2 dx_{\sim i}} {\int (f+h)^2 dx -(f_{0} + h_{0})^2}
\\
& =\frac{\mathrm{Var}(f)_{T_{x_i}} + \mathrm{Var}(h)_{T_{x_i}} + 2 \int fh dx - 2 \int(\int f dx_i) (\int h dx_i) dx_{\sim i}}
{\mathrm{Var}(f) + \mathrm{Var}(h) + 2\mathrm{Cov}(f, h)}, \end{split} \end{align*} where $\mathrm{Var}(f)_{T_{x_{i}}} = \int f^{2}(x) \, dx - \int (\int f(x) \, d x_{i})^{2} \, d x_{\sim i}$, and $\mathrm{Var}(h)_{T_{x_{i}}}$ is defined analogously. Then, by applying the Cauchy-Schwarz inequality to the functions $f(x) - \int f(x) \, d x_{i}$ and $h(x) - \int h(x) \, d x_{i}$, we get
\begin{equation} \label{Cov_x_i} \left | \int fh dx - \int \left (\int f dx_i \right ) \left ( \int h dx_i \right ) dx_{\sim i} \right | \le \sqrt{\mathrm{Var}(f)_{T_{x_i}} \mathrm{Var}(h)_{T_{x_i}} }. \end{equation} Thus, if $\mathrm{Cov}(f, h) \geq 0$, by \eqref{Cov_x_i} we obtain \begin{equation*} S^{g}_{T_{x_{i}}} \le \frac{ S^{f}_{T_{x_{i}}} \mathrm{Var}(f) + S^{h}_{T_{x_{i}}} \mathrm{Var}(h) + 2 \sqrt{ S^{f}_{T_{x_{i}}} \mathrm{Var}(f) S^{h}_{T_{x_{i}}} \mathrm{Var}(h)} }{\mathrm{Var}(f) + \mathrm{Var}(h)}, \end{equation*} from which \eqref{eq:case_5_bound} immediately follows. Finally, we show that \begin{equation} \label{eq:algebraic_inequality} \frac{(\sqrt{a y} + \sqrt{b z})^{2}}{a + b} \le 2 \max \{y, z\} \end{equation} for any $a, b, y, z > 0$. Indeed, without loss of generality, let $y > z$, and recall that $2 \sqrt{ab} \le a + b$: then, \begin{equation*} \frac{(\sqrt{a y} + \sqrt{b z})^{2}}{a + b} \le y \frac{(\sqrt{a} + \sqrt{b})^{2}}{a + b} = y \frac{a + b + 2 \sqrt{ab}}{a + b} \le 2 y. \end{equation*} Moreover, the factor $2$ is sharp: if $y = z$ and $a = b$, \begin{equation*} \frac{(\sqrt{a y} + \sqrt{b z})^{2}}{a + b} = y \frac{4 a}{2a} = 2 y = 2 \max\{y, z\}. \end{equation*} Therefore, inequality \eqref{eq:algebraic_inequality} shows that \eqref{eq:case_5_bound} implies \eqref{eq:case_5_bound_easy}. \end{proof}
The bound given by \eqref{eq:case_5_bound} means that the total sensitivity index $S^{g}_{T_{x_{i}}}$ for the function $g$ of the input $x_i$ is controlled by $S^{f}_{T_{x_{i}}}$ and $S^{h}_{T_{x_{i}}}$. It is clear that this result can be applied also to a function $g$ of the form \begin{equation*} g(x) = f(x) + h_{1}(x) + h_{2}(x) + \dots + h_{k}(x), \end{equation*} for any $k \ge 1$. Indeed, it is enough to proceed by iteration: at first, we let $$h(x) = h_{1}(x) + h_{2}(x) + \dots + h_{k}(x),$$ then \eqref{eq:case_5_bound} is applied to $S^{h}_{T_{x_{i}}}$, by seeing $h$ as $$h(x) = h_{1}(x) + \tilde{h}_{2}(x),$$ where $$\tilde{h}_{2}(x) = h_{2}(x) + \dots + h_{k}(x).$$ By applying this procedure $k$ times, the desired result is obtained. However, since the factor $2$ in \eqref{eq:case_5_bound_easy} is sharp, in general we cannot hope to obtain a better control than \begin{equation*} \label{eq:case_5_bound_easy_iterated} S^{g}_{T_{x_{i}}} \le 2^{k} \max \{S^{f}_{T_{x_{i}}}, S^{h_{1}}_{T_{x_{i}}}, S^{h_{2}}_{T_{x_{i}}}, \dots, S^{h_{k}}_{T_{x_{i}}} \}, \end{equation*} where the factor $2^{k}$ is again sharp.
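The algebraic inequality \eqref{eq:algebraic_inequality} and the sharpness of the factor $2$ are easy to sanity-check numerically (a brute-force sketch of ours):

```python
import numpy as np

# Check (sqrt(a*y) + sqrt(b*z))^2 / (a + b) <= 2*max(y, z)
# on random positive quadruples.
rng = np.random.default_rng(0)
a, b, y, z = rng.random((4, 100_000)) + 1e-12   # strictly positive samples

lhs = (np.sqrt(a * y) + np.sqrt(b * z)) ** 2 / (a + b)
bound_ok = bool(np.all(lhs <= 2.0 * np.maximum(y, z) + 1e-12))

# Sharpness: with y = z = 1 and a = b = 1 the bound is attained exactly.
sharp = (np.sqrt(1.0) + np.sqrt(1.0)) ** 2 / 2.0   # equals 2 = 2*max(1, 1)
```

No random quadruple violates the bound, and the symmetric choice attains it, confirming that the constant $2$ cannot be improved.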
\section{Case 5}
A variant of Case 3 (Theorem \ref{thm:case_3}), with $G(u, v_{1}, v_{2}) = uv_{1} + v_{2}$, is considered. This time, we allow $h_{2}$ to depend also on the input $x_{i}$.
\begin{theorem} \label{thm:case_5} Let $g : (0, 1)^{n + m + k} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{n + m + k})$ such that \begin{align*} \begin{split}
g(x, \xi, \eta) = f(x) h_{1}(\xi) + h_{2}(x,\eta) \end{split} \end{align*} for some $f \in L^{2}((0, 1)^{n}), h_{1} \in L^{2}((0, 1)^{m})$ and $h_{2} \in L^{2}((0, 1)^{n + k})$. Then, if $\mathbb{E}(h_{1}) \mathrm{Cov}(f, h_{2}) \ge 0$, \begin{equation} \label{eq:STf_case4} S^{g}_{T_{x_{i}}} \le \frac{ S^{f}_{T_{x_{i}}} (\int h_{1}^2(\xi) \, d\xi ) \mathrm{Var}(f) + S^{h_{2}}_{T_{x_{i}}} \mathrm{Var}(h_{2}) + 2 (h_{1})_{0} \sqrt{S^{f}_{T_{x_{i}}} \mathrm{Var}(f) S^{h_{2}}_{T_{x_{i}}} \mathrm{Var}(h_{2})}}
{( \int h_{1}^2(\xi) \, d\xi ) \mathrm{Var}(f) + f_{0}^{2} \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2})}. \end{equation} \end{theorem} \begin{proof} It is enough to evaluate $S^{g}_{T_{x_{i}}}$. We have \begin{align*} & \int g^2(x, \xi, \eta) \, dx \, d\xi \, d \eta - \int \left (\int g(x, \xi, \eta) \, dx_i \right )^2 \, dx_{\sim i} \, d\xi \, d \eta \\ & = \int \int (f(x) h_{1}(\xi) + h_{2}(x, \eta))^2 \, dx \, d\xi \, d \eta + \\ & - \int\int \left (\int (f(x) h_{1}(\xi) + h_{2}(x, \eta)) \, dx_i \right )^2 \, dx_{\sim i} \, d\xi \, d \eta \\
& = \left ( \int h_{1}^2(\xi) \, d\xi \right ) \mathrm{Var}(f)_{T_{x_i}} + \mathrm{Var}(h_{2})_{T_{x_i}} +\\
& + 2 (h_{1})_{0} \left ( \int f(x) h_{2}(x, \eta) \, dx \, d \eta - \int \left (\int f(x) \, dx_i \right) \left (\int h_{2}(x, \eta) \, dx_i \right ) \, dx_{\sim i} \, d \eta \right ) \\
& \le \left ( \int h_{1}^2(\xi) \, d\xi \right ) \mathrm{Var}(f)_{T_{x_i}} + \mathrm{Var}(h_{2})_{T_{x_i}} + 2 (h_{1})_{0} \sqrt{\mathrm{Var}(f)_{T_{x_{i}}} \mathrm{Var}(h_{2})_{T_{x_{i}}}}, \end{align*} by \eqref{Cov_x_i}. On the other hand, we get \begin{align*} \int \int g^{2}(x, \xi, \eta) \, dx \, d \xi \, d \eta - g_{0}^{2} & = \int \int (f(x) h_{1}(\xi) + h_{2}(x, \eta))^2 \, dx \, d\xi \, d \eta -(f h_{1} + h_{2})^2_0 \\ & = \left ( \int h_{1}^2 \, d\xi \right ) \mathrm{Var}(f) + f_{0}^{2} \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2}) + \\ & + 2 (h_{1})_{0} \mathrm{Cov}(f, h_{2}) \\ & \ge \left ( \int h_{1}^2 \, d\xi \right ) \mathrm{Var}(f) + f_{0}^{2} \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2}), \end{align*} since $(h_{1})_{0} \mathrm{Cov}(f, h_{2}) = \mathbb{E}(h_{1}) \mathrm{Cov}(f, h_{2}) \ge 0$. Then, it follows that \begin{align*} \begin{split}
S^{g}_{T_{x_{i}}} & \le \frac{\left ( \int h_{1}^2(\xi) \, d\xi \right ) \mathrm{Var}(f)_{T_{x_i}} + \mathrm{Var}(h_{2})_{T_{x_i}} + 2 (h_{1})_{0} \sqrt{\mathrm{Var}(f)_{T_{x_{i}}} \mathrm{Var}(h_{2})_{T_{x_{i}}}}}{\left ( \int h_{1}^2 \, d\xi \right ) \mathrm{Var}(f) + f_{0}^{2} \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2})} \\
& = \frac{ S^{f}_{T_{x_{i}}} (\int h_{1}^2 \, d\xi ) \mathrm{Var}(f) + S^{h_{2}}_{T_{x_{i}}} \mathrm{Var}(h_{2}) + 2 (h_{1})_{0} \sqrt{S^{f}_{T_{x_{i}}} \mathrm{Var}(f) S^{h_{2}}_{T_{x_{i}}} \mathrm{Var}(h_{2})}}{( \int h_{1}^2 \, d\xi ) \mathrm{Var}(f) + f_{0}^{2} \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2})},
\end{split} \end{align*} which is \eqref{eq:STf_case4}. \end{proof}
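A quick numerical illustration of Theorem \ref{thm:case_5} (ours, not part of the paper): with $f(x) = x_{1}$, $h_{1}(\xi) = \xi$ and $h_{2}(x, \eta) = x_{2} + \eta$, the cross terms vanish, $\mathrm{Cov}(f, h_{2}) = 0$, and a short computation by hand shows that both sides of \eqref{eq:STf_case4} equal $4/31$, so the bound is attained for this choice. The pick-freeze estimator below recovers this numerically; the input ordering $(x_{1}, x_{2}, \xi, \eta)$ is a convention of this sketch.

```python
import numpy as np

# Inputs ordered as (x1, x2, xi, eta); f(x) = x1, h1(xi) = xi,
# h2(x, eta) = x2 + eta.  These choices are ours: here Cov(f, h2) = 0,
# S^{h2}_T = 0, and both sides of the bound equal 4/31 exactly.
rng = np.random.default_rng(1)
N = 200_000

def total_sobol(g, i, n=4):
    a, b = rng.random((N, n)), rng.random((N, n))
    ai = a.copy()
    ai[:, i] = b[:, i]                   # pick-freeze resampling of x_i
    ga = g(a)
    return 0.5 * np.mean((ga - g(ai)) ** 2) / np.var(ga)

f = lambda z: z[:, 0]
h1 = lambda z: z[:, 2]
h2 = lambda z: z[:, 1] + z[:, 3]
g = lambda z: f(z) * h1(z) + h2(z)

sg, sf, sh2 = total_sobol(g, 0), total_sobol(f, 0), total_sobol(h2, 0)

# plug Monte-Carlo moments into the right-hand side of the bound
Z = rng.random((N, 4))
var_f, var_h1, var_h2 = np.var(f(Z)), np.var(h1(Z)), np.var(h2(Z))
f0, h1_0, int_h1_sq = f(Z).mean(), h1(Z).mean(), (h1(Z) ** 2).mean()
num = sf * int_h1_sq * var_f + sh2 * var_h2 \
    + 2 * h1_0 * np.sqrt(sf * var_f * sh2 * var_h2)
rhs = num / (int_h1_sq * var_f + f0 ** 2 * var_h1 + var_h2)
assert sg <= rhs + 0.02                  # equality up to Monte-Carlo noise
```

Both `sg` and `rhs` come out close to $4/31 \approx 0.129$.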
If $h_{1} \equiv 1$ and $k = 0$, there is no dependence on $\xi$ and $\eta$, and Theorem \ref{thm:case_5} implies Theorem \ref{thm:case_4} for the functions $f$ and $h_{2}$.
\section{An estimate on a general class of model functions}\label{sec:general_res}
Let $G : \mathbb{R}^{2} \to \mathbb{R}$ be such that there exist $L \ge c > 0$ satisfying
\begin{align} \label{Lipschitz} |G(u, v) - G(u_{0}, v)| & \le L |u - u_{0}|, \\
\label{coercive} |G(u, v)| & \ge c \sqrt{u^{2} + v^{2}} \end{align} for any $u, u_{0}, v \in \mathbb{R}$; that is, $G$ is Lipschitz in $u$, uniformly in $v$, and coercive.
\begin{theorem} \label{general_case_thm} Let $g: (0, 1)^{n + m} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{n + m})$ such that $g_{0} = 0$ and \begin{equation} \label{g_function_form} g(x, \xi) = G(f(x), h(x_{\sim i}, \xi)) \end{equation} for some functions $f : (0, 1)^{n} \to \mathbb{R}$ and $h : (0, 1)^{n + m - 1} \to \mathbb{R}$ satisfying $f \in L^{2}((0, 1)^{n})$ and $h \in L^{2}((0, 1)^{n + m -1})$. Then, \begin{equation} \label{general_upper_bound} S_{T_{x_{i}}}^{g} \le 2 \frac{L^{2}}{c^{2}} \frac{\mathrm{Var}(f)}{\mathrm{Var}(f) + \mathrm{Var}(h)} S_{T_{x_{i}}}^{f}. \end{equation} \end{theorem} \begin{proof} By \eqref{Lipschitz} and \eqref{g_function_form}, it follows that $g(x, \xi) \in L^{2}((0,1)^{n + m})$. Since $g_{0} = 0$, \eqref{coercive} implies \begin{align*} \mathrm{Var}(g) & = \int_{(0, 1)^{n + m}} g^{2}(x, \xi) \, dx \, d \xi \\ & \ge c^{2} \left ( \int_{(0, 1)^{n}} f^{2}(x) \, dx + \int_{(0, 1)^{n + m - 1}} h^{2}(x_{\sim i}, \xi) \, d x_{\sim i} \, d \xi \right ) \\ & \ge c^{2} \left ( \mathrm{Var}(f) + \mathrm{Var}(h) \right ). \end{align*} We further notice that, by Jensen inequality combined with \eqref{Lipschitz} and \eqref{g_function_form}, we get \begin{align*} & \int_{(0, 1)^{n + m}} \left ( g(x_{i}, x_{\sim i}, \xi) - \int_{0}^{1} g(\tilde{x}_{i}, x_{\sim i}, \xi) \, d \tilde{x}_{i} \right )^{2} \, d x_{i} \, d x_{\sim i} \, d \xi \\
& \le \int_{(0, 1)^{n + m}} \int_{0}^{1} |g(x_{i}, x_{\sim i}, \xi) - g(\tilde{x}_{i}, x_{\sim i}, \xi)|^{2} \, d \tilde{x}_{i} \, d x_{i} \, d x_{\sim i} \, d \xi \\
& = \int_{(0, 1)^{n + m + 1}} |G(f(x_{i}, x_{\sim i}), h(x_{\sim i}, \xi)) - G(f(\tilde{x}_{i}, x_{\sim i}), h(x_{\sim i}, \xi))|^{2} \, d \tilde{x}_{i} \, d x_{i} \, d x_{\sim i} \, d \xi \\
& \le L^{2} \int_{(0, 1)^{n - 1}} \int_{0}^{1} \int_{0}^{1} |f(x_{i}, x_{\sim i}) - f(\tilde{x}_{i}, x_{\sim i})|^{2} \, d \tilde{x}_{i} \, d x_{i} \, d x_{\sim i} \\ & = 2 L^{2} \left ( \int_{(0, 1)^{n}} f^{2}(x) \, dx - \int_{(0, 1)^{n - 1}} \left ( \int_{0}^{1} f(x_{i}, x_{\sim i}) \, d x_{i} \right )^{2} \, d x_{\sim i} \right ). \end{align*} Therefore, combining these two inequalities, \eqref{general_upper_bound} is obtained. \end{proof}
We notice that we can replace the assumption $g_{0} = 0$ in Theorem \ref{general_case_thm} with a weaker one.
\begin{corollary} \label{general_case_cor} Let $g: (0, 1)^{m + n} \to \mathbb{R}$ be a function in $L^{2}((0, 1)^{m + n})$ as in \eqref{g_function_form}, with $f \in L^{2}((0, 1)^{n})$, $h \in L^{2}((0, 1)^{n + m -1})$. Then, if $$c^{2}(\mathrm{Var}(f) + \mathrm{Var}(h) ) - g_{0}^{2} \ge 0,$$ we have \begin{equation*} \label{general_upper_bound_cor} S_{T_{x_{i}}}^{g} \le 2 \frac{L^{2} \mathrm{Var}(f)}{c^{2}(\mathrm{Var}(f) + \mathrm{Var}(h)) - g_{0}^{2}} S_{T_{x_{i}}}^{f}. \end{equation*} \end{corollary} \begin{proof} The proof is the same as that of Theorem \ref{general_case_thm}; one just needs to subtract the term $g_{0}^{2}$ in the denominator. \end{proof}
\begin{remark} It is not difficult to see that we could restate Theorem \ref{general_case_thm} and Corollary \ref{general_case_cor} for a function $G : \mathbb{R} \times \mathbb{R}^{l} \to \mathbb{R}$; that is, allowing $v$ to be a vector $(v_{1}, v_{2}, \dots, v_{l})$ in $\mathbb{R}^{l}$. The Lipschitz condition would not change, while the coercivity condition \eqref{coercive} would become
$$ |G(u, v)| \ge c \sqrt{u^{2} + |v|^{2}} = c \sqrt{u^{2} + v_{1}^{2} + v_{2}^{2} + \dots + v_{l}^{2}}.$$ This would allow us to consider not just one function $h$, but a family of $l$ different functions $h_{1}, h_{2}, \dots , h_{l}$, which can be seen as a vector-valued function $$ h = (h_{1}, h_{2}, \dots, h_{l}) : (0, 1)^{n + m - 1} \to \mathbb{R}^{l},$$ satisfying $$\mathrm{Var}(h) = \mathrm{Var}(h_{1}) + \mathrm{Var}(h_{2}) + \dots + \mathrm{Var}(h_{l}).$$ \end{remark}
The next example illustrates that the functions $G$ admissible in Theorem \ref{general_case_thm} and Corollary \ref{general_case_cor} can be highly nonlinear.
\begin{example} Let
$$G(u, v) = a |u| + b |v| + \arctan{\left ( \frac{|u| + |v|}{1 + v^{2}} \right )},$$ for some $a, b > 0$. Then, $G$ is Lipschitz in $u$ uniformly in $v$, since \begin{equation*}
\frac{\partial G (u, v)}{\partial u} = \left ( a + \frac{(1 + v^{2})}{(1 + v^{2})^{2} + u^{2} + v^{2} + 2 |u||v|} \right ) {\rm sgn}(u), \end{equation*} which is a bounded function. Thus, $G$ satisfies condition \eqref{Lipschitz}, with
$$L = \sup_{u, v} \left | \frac{\partial G (u, v)}{\partial u} \right |.$$ As for the coercivity condition \eqref{coercive}, it is easy to see that $G(u, v) \ge 0$ and \begin{equation*}
G(u, v) \ge \min\{a, b\}(|u| + |v|) \ge \min\{a, b\} \sqrt{u^{2} + v^{2}}, \end{equation*} so that we have $c = \min\{a, b\}$. It is clear that, since $G(u, v) \ge 0$, any $g(x, \xi) = G(f(x), h(x_{\sim i}, \xi))$ cannot satisfy $g_{0} = 0$, unless $f = h = 0$. Hence, in general, we can apply Corollary \ref{general_case_cor} only if we ensure that $$\min\{a, b\}^{2}(\mathrm{Var}(f) + \mathrm{Var}(h) ) \ge g_{0}^{2}.$$ \end{example}
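The constants claimed in this example can be checked numerically on a grid (an illustration of ours; the values $a = 1$, $b = 2$ and the grid are arbitrary). The closed-form derivative above is maximal at the origin, giving $L = a + 1$, and the coercivity bound $G(u, v) \ge \min\{a, b\}\sqrt{u^{2} + v^{2}}$ holds pointwise.

```python
import numpy as np

a, b = 1.0, 2.0                       # arbitrary positive parameters
u = np.linspace(-10.0, 10.0, 401)     # grid includes the origin
U, V = np.meshgrid(u, u)

G = a * np.abs(U) + b * np.abs(V) \
    + np.arctan((np.abs(U) + np.abs(V)) / (1 + V**2))
# magnitude of dG/du from the closed form above (the sgn(u) factor apart)
dGdu = a + (1 + V**2) / ((1 + V**2)**2 + U**2 + V**2 + 2 * np.abs(U) * np.abs(V))

L = dGdu.max()                        # Lipschitz constant: sup = a + 1, at the origin
c = min(a, b)                         # coercivity constant
assert abs(L - (a + 1)) < 1e-12
assert np.all(G >= c * np.hypot(U, V) - 1e-12)
```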
\end{document} |
\begin{document}
\renewcommand\thesubsection{\arabic{subsection}}
\title{Quantum Motional State Tomography with Non-Quadratic Potentials and Neural Networks}
\author{Talitha Weiss} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria}
\author{Oriol Romero-Isart} \affiliation{Institute for Quantum Optics and Quantum Information of the Austrian Academy of Sciences, A-6020 Innsbruck, Austria} \affiliation{Institute for Theoretical Physics, University of Innsbruck, A-6020 Innsbruck, Austria}
\begin{abstract} We propose to use the complex quantum dynamics of a massive particle in a non-quadratic potential to reconstruct an initial unknown motional quantum state. We theoretically show that the reconstruction can be efficiently done by measuring the mean value and the variance of the position quantum operator at different instances of time in a quartic potential. We train a neural network to successfully solve this hard regression problem. We discuss the experimental feasibility of the method by analyzing the impact of decoherence and uncertainties in the potential. \end{abstract}
\maketitle
\section{Introduction}
One of the most fascinating possibilities in quantum physics is to prepare the motional degrees of freedom of a massive particle in a quantum state. The non-classical features of such a state can be demonstrated by reconstructing its quantum density-matrix operator and showing that its associated Wigner function has negative values. Such an endeavor has been successfully achieved with ions, see Ref.~\cite{Leibfried_IonReview_2003} and references therein. Today, the field of quantum nano- and micromechanics aims to do the same with objects much more massive~\cite{Aspelmeyer_ReviewOM_2014}, for instance nano- and micro-particles, which contain billions of atoms~\cite{Romero_Isart_SuperposLiving_2010,Chang_levitatedNano_2010}. Such an exciting goal has many challenges, and a crucial one is the faithful reconstruction (also called quantum tomography) of the quantum motional state.
Standard strategies to perform quantum motional state tomography~\cite{Vanner_OM_QST_2015} are to couple the motion of the particle to a few-level system~\cite{Wallentowitz_QST_ion_1995,Singh_WignerTomo_usingAtom_2010}, to transfer the mechanical state to a cavity electromagnetic mode whose state can be reconstructed with homodyne tomography~\cite{Parkins_QST_stateTransfer_1999}, to apply coherent displacements and phonon number measurements on the motional degree of freedom~\cite{Wallentwoitz_QSTunbalancedHomodyne_1996,Banaszek_QSTphotonCountin_1996,Poyatos_QST_ion_1996}, as done with ions (see, \eg,~Ref.~\cite{Leibfried_expQSTion_1996}), or to measure the full position distribution function (\ie having access to all moments) at different instances of time in a harmonic potential \cite{Vanner_CoolingByMsmt_QST_2013,Romero-Isart_LevTheoryProtocols_2011}. In this article, we propose an alternative approach based on exploiting two distinctive features of levitated particles: (i) their low level of motional decoherence and (ii) the possibility to engineer the potential of the particle, in particular to let the particle coherently evolve in a non-quadratic potential. We show that by solely measuring the mean value and the variance of the position of the particle (\ie the first two moments only) as a function of time during the evolution in a non-quadratic potential, one can efficiently reconstruct the initial unknown quantum motional state (\eg~a given non-Gaussian state). Such a reconstruction is a hard quantum regression problem that, as we show below, is ideally suited for neural networks.
This article is organized as follows: In Sec.~\ref{sec_model} we introduce the physical scenario and argue that the time evolution of the mean value and variance of the particle's position should allow us to reconstruct the initial state. In Sec.~\ref{sec_results} we present our results: First, in Sec.~\ref{subsec_protocol}, we explain the overall protocol and how we use and evaluate the neural network for quantum state tomography. Then, in Sec.~\ref{subsec_noDec}, we show quantum state tomography in the absence of decoherence. In Sec.~\ref{subsec_dec} we discuss the effects of decoherence on the achieved fidelity and in Sec.~\ref{subsec_realistic} we consider realistic scenarios where we take experimental limitations into account. In Sec. \ref{sec_summary} we summarize our results. Additional information can be found in the appendices: First, in Appendix \ref{sec_expansion}, we discuss an approximative analytical approach and how it becomes unfeasible in the regime relevant for quantum state tomography. Finally, in Appendix \ref{sec_NN}, we present technical details about the architecture and training of the neural networks.
\section{Model}\label{sec_model}
Let us consider the one-dimensional motion of a particle of mass $m$ in a quartic potential such that its coherent dynamics is described by the Hamiltonian \begin{equation}\label{eq_Hamiltonian}
\Hop = \frac{\Pop^2}{2m}+\lambda \Xop^4 = \hbar \omega_0\left( \frac{\pop^2}{4}+\frac{\xop^4}{\alpha^4}\right). \end{equation} Here, $\Xop=\xop x_0$ and $\Pop=\pop p_0$, with $\coms{\Xop}{\Pop}=\im \hbar$, are the position and momentum, and $\lambda$ the strength of the quartic potential. We have extracted units using $x_0=\hbar/(2 p_0) =[\hbar/(2m\omega_0)]^{1/2}$ and defined the inverse quarticity parameter $\alpha=[\hbar\omega_0/(\lambda x_0^4)]^{1/4}$, with the motivation that we will consider initial motional quantum states assumed to be prepared in a harmonic potential of frequency $\w_0$. Additionally, we consider a standard source of decoherence for levitated particles \cite{Romero_Isart_QuantumSup_Collapse_2011,Joos_book_ClassicalWorld_QuTheory_2003,Schlosshauer_book_QtoCl_2007} that is of position localization type (\eg~due to recoil heating or a fluctuating white-noise force). In total, the evolution of the density-matrix operator of the motional state is described by the master equation
\begin{equation}\label{meq}
\dot\rhoop = -\frac{i}{\hbar} \coms{\Hop}{\rhoop}- \Gamma \coms{\xop}{\coms{\xop}{\rhoop}} = \hat{L} \rhoop. \end{equation} Here, $\Gamma$ is the decoherence rate.
At $t=0$, the particle is assumed to be in an unknown motional quantum state $\hat \eta \equiv \hat \rho(0)$ that we aim to reconstruct. The system evolves according to \eqnref{meq} and the state at a later time $t \geq 0$ is given by $\rhoop(t)= \exp (\hat{L} t) \etaop$. The position $\Xop$ is measured at different instances of time, over sufficiently many experimental runs, to retrieve its mean value and variance, that is, to obtain the dimensionless trajectories $u_1(t) \equiv \avg{\xop(t)}- \avg{\xop(0)}= \tr \spares{ \xop \pares{\hat \rho(t)-\hat \eta}}$ and $u_2(t) \equiv \expect{\xop^2(t)}-\expect{\xop(t)}^2 = \tr \spares{\xop^2 \rhoop (t)} - \pares{\tr \spares{\xop \rhoop(t)}}^2 $. Note that, as defined, $u_1(0)=0$, and hence one does not need to assume an absolute position measurement. In Appendix \ref{sec_input}, we show examples of the trajectories $u_1(t)$ and $u_2(t)$ numerically calculated by solving \eqnref{meq} on a truncated Hilbert space using the python toolbox QuTiP~\cite{Johansson_qutip_2012,Johansson_qutip_2013}. The question addressed in this article is the following: Can the information provided by $u_1(t)$ and $u_2(t)$ be used to reconstruct the unknown quantum motional state $\hat \eta$?
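As a minimal self-contained illustration (ours, not the authors' QuTiP code), the decoherence-free trajectories $u_1(t)$ and $u_2(t)$ can be computed in a truncated Fock basis, writing $\hat x = a + a^\dagger$, $\hat p = i (a^\dagger - a)$ and $H/(\hbar\omega_0) = \hat p^2/4 + \hat x^4/\alpha^4$. The truncation $N$, the quarticity $\alpha$, the initial coherent amplitude $\beta$ and the time grid are illustrative choices.

```python
import numpy as np

# Truncated Fock-basis sketch of the coherent dynamics (Gamma = 0).
N, alpha, beta = 60, 5.0, 0.5
a = np.diag(np.sqrt(np.arange(1, N)), 1)     # annihilation operator
x = a + a.T                                  # dimensionless position
p = 1j * (a.T - a)                           # dimensionless momentum
H = (p @ p).real / 4 + np.linalg.matrix_power(x, 4) / alpha**4

w, U = np.linalg.eigh(H)                     # spectral form of exp(-i H t)

c = np.zeros(N, dtype=complex)
c[0] = 1.0
for k in range(1, N):                        # coherent-state amplitudes
    c[k] = c[k - 1] * beta / np.sqrt(k)
psi0 = c / np.linalg.norm(c)

def moments(t):
    """Mean and variance of the dimensionless position at time t."""
    psi = U @ (np.exp(-1j * w * t) * (U.conj().T @ psi0))
    ex = np.real(psi.conj() @ x @ psi)
    ex2 = np.real(psi.conj() @ (x @ x) @ psi)
    return ex, ex2 - ex**2

ts = np.linspace(0.0, 5.0, 51)               # times in units of 1/omega0
u1 = np.array([moments(t)[0] for t in ts]) - moments(0.0)[0]
u2 = np.array([moments(t)[1] for t in ts])
```

For a coherent state, $u_1(0) = 0$ by construction and $u_2(0) = 1$; including the position-localization term of \eqnref{meq} requires propagating the full density matrix, as done in the paper with QuTiP.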
The reconstruction of a quantum motional state $\hat \eta$ could be performed should one have full knowledge of the mean values $\avg{\xop}$ and $\avg{\pop}$ and all the moments defined by $G^{a,b} = \langle (\pop-\expect\pop)^a (\xop-\expect{\xop})^b\rangle_\text{Weyl}$, where $\avg{\cdot}_\text{Weyl}$ denotes the mean value of the Weyl-ordered product of operators calculated for the state $\hat \eta$, and $a,b$ are non-negative integers. As we show in detail in Appendix \ref{sec_expansion} based on Refs.~\cite{Ballentine_MomentEquations_1998, Brizuela_PRD_2014,Brizuela_PRD_generalizedUncertainty_2014}, the trajectories $u_1(t)$ and $u_2(t)$ depend, for $t$ larger than a given critical time, on basically all the moments $G^{a,b}$ of the state at $t=0$. This is a manifestation of the non-linear quantum dynamics induced by the quartic potential and has two consequences. First, it shows that, indeed, the trajectories $u_1(t)$ and $u_2(t)$ should provide sufficient information to reconstruct $\hat \eta$. This is in contrast to a harmonic potential where $u_1(t)$ and $u_2(t)$ would depend at most on quadratic moments. Second, it shows that in a quartic potential, consequently, it is not possible to correctly approximate $u_1(t)$ and $u_2(t)$ as a function of a finite set of initial moments and, hence, the regression problem of deriving $\hat \eta$ based on $u_1(t)$ and $u_2(t)$ is a hard problem that, to our knowledge, cannot be solved with analytical tools. Nevertheless, this problem is very well suited to a neural network trained by supervised learning. The neural network will not require us to input how exactly the initial moments affect the trajectories. Instead, the neural network will, based on the training examples, find by itself an internal representation of the underlying regression problem. 
We remark that such a setting, inferring the quantum state from the time evolution of observables, is very different from recent works using neural networks for quantum state tomography of systems of many qubits~\cite{Torlai_NN_QST_2018, Xu_NNforQST_2018,Xin_LocalMsmt_QST_NN_2018,Quek_adaptiveQST_NN_2018}, or for filtering experimental data before performing quantum state tomography~\cite{Palmieri_experimentalQSTenhanced_NN_2019}.
\section{Results}\label{sec_results}
In this section we present our results on quantum state tomography based solely on trajectories in non-quadratic potentials. In Sec.~\ref{subsec_protocol} we explain the protocol and how we obtain our results using neural networks. In Sec.~\ref{subsec_noDec} we investigate the ideal decoherence-free scenario, while in Sec.~\ref{subsec_dec} we include decoherence and estimate the necessary conditions that it imposes. Finally, in Sec.~\ref{subsec_realistic}, we study realistic scenarios including experimental limitations.
\subsection{Protocol}\label{subsec_protocol}
Let us now introduce the overall procedure: We propose to train the neural network on simulated data and then use the trained network to deduce the initial quantum state from experimentally measured trajectories $u_1(t)$ and $u_2(t)$. Experimentally these trajectories could be obtained by repeatedly re-preparing a particle in the same initial state and then evolving it (in the absence of measurements) up to a time $t_1$ when the position is measured, for instance, via optical position detection \cite{Tebbenjohanns_optimalDetection_2019, Tebbenjohanns_sidebandAsym_2019}. Averaging over the many repetitions reveals the expectation value and variance of the position at this time $t_1$. Subsequently, this procedure is repeated, evolving the particle (in the absence of a measurement) to a later time $t_1+\delta t$ in order to measure the next point of the trajectory. This is repeated for all points of the trajectory.
We now explain how the neural network is trained and tested. Initial states $\etaop$ are randomly sampled from a Hilbert Schmidt ensemble of density matrices of dimensions $d\times d$. That is, we assume that $\etaop$ is prepared in a harmonic potential of frequency $\w_0$ with zero probability to contain more than $d-1$ excitations, an assumption motivated by experiments preparing non-Gaussian quantum states after the particle has been cooled near the ground state of a harmonic potential~\cite{Romero-Isart_LevTheoryProtocols_2011}. While such a subspace is considered for the initial state $\etaop$, note that during the evolution the state $\rhoop(t) = \exp(\hat L t) \etaop$ can populate a much larger space that is only limited by numerical restrictions in the integration of the master equation~\eqnref{meq}. With the input of the trajectories $u_1(t)$ and $u_2(t)$, the neural network reconstructs a density-matrix operator $\etaop_\text{est}$ of size $d\times d$ with infidelity \begin{equation}
1-F=1- \tr \spare{\sqrt{\sqrt{\etaop}\etaop_\text{est}\sqrt{\etaop}}}. \end{equation} We remark that, in practice, any prior knowledge about the initially prepared quantum state should be used to accordingly choose the subspace and sampling distribution of training states in order to optimize the performance. Here, we sample the trajectories with timesteps $\delta t=0.05/\omega_0$ and each data point is represented by a neuron in the input layer of the network. Throughout the article we used four hidden layers of $800$, $800$, $400$, and $200$ neurons and an output layer of $2d^2$ neurons, representing the real numbers defining $\etaop_\text{est}$. The output is interpreted as a complex matrix $M$ that, generally, does not strictly fulfill the conditions of a physical density-matrix operator (positive semi-definite with unit trace). Consequently, the reconstructed physical state is obtained via $\etaop_\text{est}={M^\dagger M}/\tr \spares{M^\dagger M}$~~\cite{Xu_NNforQST_2018}. The network is trained via supervised learning using the mean squared error as the loss function and a training set of $10000$ randomly drawn quantum states. All results shown in the figures are obtained from a validation set, \textit{i.e.}, another set of $10000$ random states that were not used during training. More details on the network architecture and training can be found in Appendix \ref{sec_NN}.
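The post-processing step just described (interpreting the raw network output as a complex matrix $M$, forming the physical state $M^\dagger M/\tr[M^\dagger M]$, and scoring with the Uhlmann fidelity) can be sketched as follows. The function names and the random test vector are ours; the eigendecomposition-based matrix square root is one standard way to evaluate the fidelity.

```python
import numpy as np

def to_physical_state(raw, d):
    """Map 2*d*d real outputs to a density matrix via M^dag M / tr(M^dag M)."""
    M = raw[:d * d].reshape(d, d) + 1j * raw[d * d:].reshape(d, d)
    rho = M.conj().T @ M
    return rho / np.trace(rho).real

def fidelity(rho, sigma):
    """Uhlmann fidelity F = tr sqrt(sqrt(rho) sigma sqrt(rho))."""
    w, V = np.linalg.eigh(rho)
    s = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.conj().T   # sqrt(rho)
    ev = np.linalg.eigvalsh(s @ sigma @ s)
    return float(np.sum(np.sqrt(np.clip(ev, 0.0, None))))

d = 4
raw = np.random.default_rng(0).normal(size=2 * d * d)
rho_est = to_physical_state(raw, d)
assert abs(np.trace(rho_est).real - 1.0) < 1e-12      # unit trace
assert np.all(np.linalg.eigvalsh(rho_est) > -1e-12)   # positive semi-definite
```

By construction `rho_est` is a valid density matrix for any raw output, which is exactly why the $M^\dagger M$ parametrization is convenient.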
\begin{figure}
\caption{\textit{Infidelity depending on trajectory length.} (a) Average infidelity $1-F$ of the quantum state predicted by the neural network as a function of the trajectory length that is used as an input. Each point presents a newly trained network using trajectories in a quartic (solid lines) or harmonic potential (dotted lines). The color denotes the dimension $d$ of the quantum state. (b) and (c) show the Wigner functions (red positive, blue negative values) of actual and reconstructed quantum state for two examples taken from the scenario indicated by the arrow in (a). The quantum state from the validation set where the neural network achieves the lowest (highest) infidelity $1-F$ is shown in (b), $1-F=4.8\times 10^{-5}$ ((c), $1-F=2.1\times 10^{-2}$). Parameters: $\alpha=5$, $\Gamma=0$. }
\label{fig_noDec}
\end{figure}
\subsection{Decoherence-free scenario}\label{subsec_noDec} Let us first show the results obtained in the absence of decoherence using \eqnref{meq} with $\Gamma=0$. In Fig.~\ref{fig_noDec}a, we show the average infidelity on the validation set reached by a neural network given input trajectories of a certain length (denoted by the time $t$) for $d=2,3$ and $4$, and with a quartic potential defined by the inverse quarticity $\alpha=5$. For every data point, a new network was trained on the specific trajectory length (defining the input layer size of the network) and initial state dimension $d$ (defining the output layer size). In all cases, the infidelity decreases significantly with trajectory length and eventually saturates (all fluctuations around the saturation level are not of physical origin, as explained in Appendix \ref{sec_training}). The saturation occurs later for larger $d$, since the $d^2-1$ independent moments needed to determine an arbitrary state of dimension $d$ all have to be extracted from the trajectories. The achieved infidelity also saturates at different levels depending on $d$, as we use the same number of training states ($10000$) despite the increasing size of the initial subspace. The achieved low infidelities demonstrate that the neural network can reconstruct the initial state from the trajectories with high accuracy; see Figs.~\ref{fig_noDec}b,c and the caption for examples. To show that the non-quadratic potential is indeed crucial, we also plot the performance of neural networks that were trained on trajectories in a harmonic potential (dotted lines in Fig.~\ref{fig_noDec}a) described by the Hamiltonian $\Hop = \Pop^2/(2m) + m \w_0^2 \Xop^2/2$. As expected, the non-quadratic potential outperforms the quadratic one, with the exception of the $d=2$ case, where the trajectories in the quadratic potential contain information about the only three moments that are required to fully determine the state.
\begin{figure}
\caption{\textit{Impact of decoherence on infidelity.} Infidelity $1-F$ as a function of time (as in Fig.~\ref{fig_noDec}a) for $d=4$ and $\alpha=5$ but including decoherence (black lines). For reference, the solid green line, same as in Fig.~\ref{fig_noDec}a, is the case without decoherence. The colored stars indicate the points for which the respective distribution of infidelities of all quantum states in the validation set is shown in the inset. With longer trajectories the overall performance increases by improving both on the peak infidelity and the infidelity spread. }
\label{fig_wDec}
\end{figure}
\subsection{Limitations due to decoherence}\label{subsec_dec} Let us now show the impact of decoherence, which will limit the length of the trajectories that can be used for quantum state reconstruction since, eventually, all information about the initial state is lost. In Fig.~\ref{fig_wDec}, we plot the achieved infidelities using \eqnref{meq} for different values of $\Gamma$, an inverse quarticity $\alpha=5$, the same set of quantum states as sampled for $d=4$ in Fig.~\ref{fig_noDec}a, and with neural networks trained and validated using the simulated trajectories in the presence of decoherence. The inset shows the distribution of infidelities reached in the validation set (see the caption for details). If the decoherence is sufficiently small (dashed line) the reached infidelity does not differ significantly from the performance achievable in the absence of decoherence (green line, same as in Fig.~\ref{fig_noDec}a), since the trajectories are only significantly altered by decoherence after all information necessary to reconstruct the initial state has already been extracted. In contrast, at larger decoherence rates (dashed-dotted and dotted lines), the trajectories are altered much earlier and both the average performance at intermediate times and the final performance become worse. The reason is that there is neither enough information contained in a trajectory up to the time where decoherence acts, nor sufficient time for the neural network to infer all the moments determining the initial state before decoherence erases the initial state dependence.
The above discussion shows that in an experimental implementation of the proposed method the decoherence rate plays a limiting role. In the following we show that, eventually, the ratio between decoherence and the strength of the non-quadratic potential is decisive. To this end, let us obtain a rough quantitative estimate of a necessary requirement. The initial state $\etaop$ is spatially confined in a length scale given by $ (\tr[\etaop \Xop^2])^{1/2} \sim x_0$ and has a kinetic energy of the order of $\hbar \omega_0$. During the evolution in the quartic potential, the initial state spreads as $ (\tr[\rho(t) \Xop^2])^{1/2} \sim x_0 \w_0 t$ and the effect of the quartic potential is relevant when the potential energy is comparable to the initial kinetic energy, namely for $t^\star$ such that $ \lambda (\tr[\rho(t^\star) \Xop^2])^{2} \sim \lambda (x_0 \w_0 t^\star)^4 \approx \hbar \omega_0$. Using the definition of the dimensionless inverse quarticity $\alpha=[4m^2\omega_0^3/(\hbar\lambda)]^{1/4}$, one obtains $t^\star \approx \alpha/\w_0$. Trajectories longer than $t^\star$ are required for the dynamics to be affected by the quartic potential, and the requirement that they remain coherent demands that $1/\Gamma \gg t^\star$, or alternatively, $ \alpha \ll \omega_0/\Gamma$. Thus, a stronger quartic potential (smaller inverse quarticity $\alpha$) helps to cope with decoherence. Such a rough estimate is a necessary but not a sufficient requirement, as can be seen in Fig.~\ref{fig_wDec}. Nevertheless, it provides a good reference for experimental implementations. For instance, let us assume that the quartic potential is engineered using a potential of the form $V_G(X) = - m \w_0^2 \sigma^2 \exp [-X^2/\sigma^2]/2$ (\eg~generated by optical tweezers), whose expansion around the minimum, $V_G(X) \approx V_G(0) + m \w_0^2 X^2/2$, is the usual way to generate the quadratic potential.
A (to leading order) pure quartic potential can then be obtained around $X=0$ by superimposing two such Gaussians $V(X) = V_G(X-\sigma/\sqrt{2})+ V_G(X+\sigma/\sqrt{2}) \approx 2 V_G(0) + \lambda X^4$, where $\lambda = m \w_0^2/[3 \sqrt{e} \sigma^2]$, and, hence, $\alpha \approx 1.8 \sqrt{\sigma/x_0}$. In this case, the necessary requirement reads $1.8 \sqrt{\sigma/x_0} \ll \omega_0/\Gamma$. For ions, this condition is not challenging since $\w_0/\Gamma \sim 10^3$~\cite{Leibfried_IonReview_2003} and $x_0 \approx 10~\text{nm}$, and, hence, one requires potentials with $\sigma \ll 10^3~\mu\text{m}$. For optically levitated nanoparticles, where $\w_0/\Gamma \sim 10^2$ \cite{Chang_levitatedNano_2010,Windey_Cooling_2019,Gonzalez-Ballestero_CoolingTheoery_2019}, and $x_0 \sim 10^{-12}~\text{m}$, the condition reads $\sigma \ll 10~\text{nm}$, which is not compatible with optical potentials where $\sigma$ is lower bounded by an optical wavelength. Therefore, levitated nanoparticles require either longer coherence times, achievable by evolution in the absence of recoil heating from laser light (quasi-electrostatic traps~\cite{Home_ionsAnharmonicPot_2011, Kuhlicke_NV_quadrupoleTrap_2014,Alda_NanoparticlePaulTrap_2016,Delord_SpinEchosLevi_2018,Bykov_NanoparticlesPaulTrap_2019}, magnetic traps~\cite{Romero-Isart_MagneticLevi_2012,Slezak_MagneticLevi_2018,Prat-Camps_InertialForceSensor_2017}, or in free fall \cite{Romero-Isart_LargeQuSup_2011,Hebestreit_FreeFallingNanoparticles_2018} where the quartic potential is only applied after the state has sufficiently broadened) or the use of electromagnetic forces near surfaces \cite{Diehl_NanoparticleCloseToMembrane_2018,Magrini_NearFieldNanoparticle_2018,Pino_Skatepark_2018} such that $\sigma$ can be potentially smaller than an optical wavelength. 
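The cancellation of the quadratic term and the quoted quartic coefficient $\lambda = m \w_0^2/[3 \sqrt{e} \sigma^2]$ can be checked numerically by finite differences (a sketch of ours, in units $m = \omega_0 = \sigma = 1$; the step size $h$ is an arbitrary numerical choice).

```python
import numpy as np

# Two Gaussian wells displaced by +-sigma/sqrt(2): the quadratic term of
# the expansion around X = 0 cancels and the quartic coefficient is
# lambda = m omega0^2 / (3 sqrt(e) sigma^2)  (here m = omega0 = sigma = 1).
def V(X, sigma=1.0):
    VG = lambda Y: -0.5 * sigma**2 * np.exp(-Y**2 / sigma**2)
    return VG(X - sigma / np.sqrt(2)) + VG(X + sigma / np.sqrt(2))

h = 0.05
stencil = np.array([1.0, -4.0, 6.0, -4.0, 1.0])   # central 4th-derivative weights
Xs = h * np.arange(-2, 3)
d4 = stencil @ V(Xs) / h**4                       # V''''(0)
d2 = (V(-h) - 2 * V(0.0) + V(h)) / h**2           # V''(0), should vanish

lam = d4 / 24.0                                   # quartic Taylor coefficient
assert abs(d2) < 1e-2                             # quadratic term cancels
assert abs(lam - 1 / (3 * np.sqrt(np.e))) < 5e-3
```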
Instead of aiming for stronger non-quadratic potentials or longer decoherence times one could also speed up the broadening of the initially prepared state by introducing an inverted harmonic potential \cite{Pino_Skatepark_2018,ORI_CoherentInflation_2017} at the center of the quartic trap, that is, using a double-well potential.
\begin{figure}
\caption{\textit{Imperfect quartic potentials.} Infidelity $1-F$ as a function of time (as in Fig.~\ref{fig_noDec}a) for $\Gamma=0$, $d=4$, and $\alpha=5$. The solid green line shows the results in a perfect quartic potential (same as in Fig.~\ref{fig_noDec} (a)), while the dashed and dotted lines show the performance for the perturbed potentials, $\epsilon/x_0 =-0.1$ and $\epsilon/x_0=0.1$ respectively, and with $\alpha=5=\sqrt{\sigma/x_0}(6\sqrt{e})^{1/4}$. The black lines show the infidelity reached by neural networks both trained and validated on the perturbed potential.
The red lines show the infidelity reached by neural networks trained on the purely quartic potential but applied to trajectories from the perturbed potential. The inset illustrates the respective potential shapes.}
\label{fig_wDisp}
\end{figure}
\subsection{Realistic scenarios}\label{subsec_realistic} Regarding the experimental implementation of the method, it is also clear that a perfect quartic potential cannot be engineered. Related to the discussion above, let us assume that the two Gaussian potentials are not perfectly symmetrically aligned, namely one has $V_\epsilon (X) = V_G(X-\sigma/\sqrt{2}) + V_G(X+\epsilon+\sigma/\sqrt{2})$, where $\epsilon$ parametrizes the imperfection of the quartic potential. The form of such imperfect potentials is illustrated in the inset of Fig.~\ref{fig_wDisp} (dotted line for $\epsilon>0$, dashed line for $\epsilon<0$). In Fig.~\ref{fig_wDisp} we show the quantum state tomography performance of neural networks trained and tested on trajectories from the perturbed potential (black dotted line: $\epsilon/x_0=0.1$, black dashed line: $\epsilon/x_0=-0.1$). The overall performance is similar to that achieved with the purely quartic potential $\epsilon=0$ (solid green line). Thus, the neural network finds an appropriate model for each scenario and the quantum state tomography does not crucially depend on the details of the non-quadratic potential, even in the presence of small linear and quadratic contributions.
So far, the training and testing scenarios of the neural network were always identical. However, the experimental situation might not perfectly match the scenario used for training. For example, one could be ignorant of the exact form of the potential and hence use a neural network trained on slightly different potentials. The red lines in Fig.~\ref{fig_wDisp} show the average infidelity reached by neural networks that were trained on trajectories from the purely quartic potential ($\epsilon=0$) but that are then used to estimate the quantum state given trajectories from the perturbed potential (dotted and dashed again refer to $\epsilon/x_0=0.1$ and $\epsilon/x_0=-0.1$, respectively). At very short trajectory lengths the internal model of the trained neural network allows it to reconstruct the quantum state with an infidelity similar to the scenarios where the training and validation situations had the same physical origin. If longer trajectories are used, the performance still improves, although the network is not able to retrieve as much information as in the ideal scenario. Given any specific accuracy goal, a numerical study would allow one to estimate beforehand the size of experimental uncertainty that a neural network trained in ignorance of it could tolerate.
\section{Summary}\label{sec_summary} In summary, we have proposed a method to perform quantum motional state tomography for levitated particles (\eg~ions, nanoparticles), based on inferring the initial state from the time evolution of a few moments in a non-quadratic potential. The reconstruction is done efficiently with a neural network. We have analyzed the impact of decoherence and of potential imperfections. As a proof of principle, we have shown results for a quartic potential in which the mean value and variance of the position are measured. We emphasize, however, that the method is very general, since a neural network makes it possible to adapt the quantum state tomography optimally to any given physical scenario in an experiment by using training examples from the particular situation. For the case of levitated nanoparticles, ground-state cooling is closely approached~\cite{Tebbenjohanns_optimalDetection_2019,Windey_Cooling_2019,Delic_Cooling_2019,Gonzalez-Ballestero_CoolingTheoery_2019, Tebbenjohanns_sidebandAsym_2019, Jain_LeviRecoilHeating_2016,Fonesca_NonlinearDynNanopart_2016,Millen_CoolingChargedNanosphere_2015,Asenbaum_CoolingSiliconNanopart_2013,Kiesel_CoolingSubmicronPart_2013,Gieseler_FeedbackCoolingNanopart_2012,Li_CoolingMicrosphere_2011}, hence the development of quantum tomography schemes is not only important but timely. At the same time, implementing non-quadratic potentials is also a powerful tool to prepare non-Gaussian states. We therefore hope that this work will further motivate experimentalists in the field of levitated nanoparticles to engineer non-quadratic potentials in order to bring nanoparticles into the quantum regime and probe them there.
\section{\label{sec_expansion}Expansion in moments}
In this appendix, we discuss an approximate approach to describing the time evolution of the initial quantum state analytically and demonstrate how it becomes infeasible in the regime important for quantum state tomography. The approach is based on truncating the infinite system of coupled equations of motion of all moments.
To this end, we consider the motion of a particle in a quartic trap as described by the Hamiltonian (\ref{eq_Hamiltonian}). Using the results of Refs.~\cite{Ballentine_MomentEquations_1998, Brizuela_PRD_2014,Brizuela_PRD_generalizedUncertainty_2014}, one can write the equations of motion of all moments:
\begin{align*}
\langle\dot{\hat x}\rangle &= \expect{\pop},\\
\langle\dot{\hat p}\rangle &= -\frac{8}{\alpha^4} \left (\expect\xop^3+3 \expect\xop G^{0,2}+G^{0,3}\right),\\
\dot G^{a,b} &= b G^{a+1,b-1}+\frac{8a}{\alpha^4} \left[ 3 \expect\xop G^{0,2}+G^{0,3}\right]G^{a-1,b}\\
&\quad-\frac{8a}{\alpha^4}\left[3 \expect\xop^2G^{a-1,b+1}+3\expect\xop G^{a-1,b+2}+G^{a-1,b+3}\right]\\
&\quad+ \frac{8a}{\alpha^4}(a-1)(a-2)\left[\expect\xop G^{a-3,b}+G^{a-3,b+1} \right].
\end{align*}
Recall that $G^{a,b} = \langle (\pop-\expect\pop)^a (\xop-\expect{\xop})^b\rangle_\text{Weyl}$ denotes a moment of order $a+b$, with $\avg{\cdot}_\text{Weyl}$ the expectation value of the Weyl-ordered operators. This set of differential equations is exact but infinite. An approximation can, however, be obtained by keeping only those moments with $a+b\leq N_\text{t}$, where $N_\text{t}$ is the truncation order. The resulting system of coupled, nonlinear differential equations can then be solved numerically.
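The truncated hierarchy is straightforward to integrate numerically. The following is a minimal Python sketch (function and variable names such as \texttt{simulate\_moments} are ours, not from an existing package) that sets all moments with $a+b>N_\text{t}$ to zero, uses $G^{0,0}=1$ and vanishing first-order central moments, and integrates the remaining equations with SciPy:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_moments(Nt, alpha, x0, p0, G_init, t_span, t_eval=None):
    """Integrate <x>, <p> and the central moments G^{a,b} with 2 <= a+b <= Nt.

    G_init maps (a, b) -> initial value of G^{a,b}; missing keys start at 0.
    Returns the solve_ivp result and the index map for the moments.
    """
    # enumerate the retained second- and higher-order moments
    keys = [(a, n - a) for n in range(2, Nt + 1) for a in range(n + 1)]
    idx = {k: i for i, k in enumerate(keys)}
    y0 = np.zeros(2 + len(keys))
    y0[0], y0[1] = x0, p0
    for k, v in G_init.items():
        y0[2 + idx[k]] = v

    c = 8.0 / alpha**4

    def rhs(t, y):
        x, p = y[0], y[1]

        def G(a, b):
            # G^{0,0} = 1; first-order central moments vanish by definition;
            # out-of-range or truncated moments are set to zero
            if a < 0 or b < 0 or a + b > Nt:
                return 0.0
            if a + b == 0:
                return 1.0
            if a + b == 1:
                return 0.0
            return y[2 + idx[(a, b)]]

        dy = np.zeros_like(y)
        dy[0] = p
        dy[1] = -c * (x**3 + 3.0 * x * G(0, 2) + G(0, 3))
        for (a, b), k in idx.items():
            d = b * G(a + 1, b - 1)
            d += c * a * (3.0 * x * G(0, 2) + G(0, 3)) * G(a - 1, b)
            d -= c * a * (3.0 * x**2 * G(a - 1, b + 1)
                          + 3.0 * x * G(a - 1, b + 2) + G(a - 1, b + 3))
            d += c * a * (a - 1) * (a - 2) * (x * G(a - 3, b) + G(a - 3, b + 1))
            dy[2 + k] = d
        return dy

    return solve_ivp(rhs, t_span, y0, t_eval=t_eval, rtol=1e-10, atol=1e-12), idx
```

For $N_\text{t}=2$ and a centered state ($\expect\xop=\expect\pop=0$) the truncated system reduces to $\dot G^{0,2}=2G^{1,1}$, $\dot G^{1,1}=G^{2,0}$, $\dot G^{2,0}=0$, so the variance grows exactly quadratically; this provides a simple consistency check of the implementation.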
\begin{figure*}
\caption{(a) Trajectory $\expect{\xop^2}$ of the initial Fock state $\ket{1}$ as obtained from a full quantum simulation (blue line) in a sufficiently large Hilbert space, or from an expansion in moments truncated at the order indicated by the correspondingly colored number. The dashed black lines show the expansion in moments up to the next higher odd order, e.g.~up to order $7$ for the black dashed line on top of the yellow line, which corresponds to truncation order $6$. For this particular initial state the odd-order moments have zero or vanishing contribution to the overall motion. (b) shows the absolute error, growing with time, between $\expect{\xop^2}_\text{t}$ from a truncated moment approach (truncation order again indicated by colored numbers) and the full quantum solution $\expect{\xop^2}_\text{q}$. Notably, the gain in accuracy from each increase of the truncation order (by two) becomes smaller and smaller.}
\label{fig_expansion}
\end{figure*}
In Fig.~\ref{fig_expansion} we illustrate the performance of this approximation using the Fock state $\ket{1}$ as the initial state, $\alpha=5$, and no decoherence, $\Gamma=0$. The symmetry of a Fock state leads to vanishing first-order moments, \ie, $\expect\xop=\expect\pop=0$ for all times, and odd-order moments do not significantly contribute to the motion (the results of truncating at an odd order are shown as black dashed lines and coincide with the respective solution of the next lower even truncation order). In Fig.~\ref{fig_expansion}a we also display the exact solution (blue line), which we obtain by numerically integrating the Schr\"odinger equation using QuTiP. We show approximations up to $N_\text{t}=14$. In Fig.~\ref{fig_expansion}b, we show the relative error between the full quantum solution $\expect{\xop^2}_\text{q}$ and the truncated solution $\expect{\xop^2}_\text{t}$. Increasing the truncation order extends only slightly the time over which the error remains below a given threshold. Even more importantly, the gain in accuracy from each increase of the truncation order (by two) diminishes. Truncating the set of moments used to simulate the nonlinear dynamics is therefore not a good approximation. This means that the trajectories depend, in general, on an unbounded number of initial moments, and hence provide sufficient information to reconstruct the initial quantum state. For arbitrary quantum states this task cannot, however, be carried out analytically, but it can be accomplished with a neural network, as we show in this work.
\setcounter{equation}{0} \renewcommand{\theequation}{B\arabic{equation}}
\section{\label{sec_NN}Neural network} In this appendix, we provide additional information on the neural networks that we used throughout this work. In particular, in Appendix \ref{sec_hyper}, we summarize the network architecture and relevant hyperparameters. In Appendix \ref{sec_input} we discuss the input to the neural network and show example trajectories for the various scenarios discussed in the main text. Finally, in Appendix \ref{sec_training}, we describe the technical aspects of training the neural network in detail and present a typical learning curve.
\subsection{\label{sec_hyper}Details on the neural network} The neural networks used throughout this work were the same for all tasks, with only the input and output layers depending on the trajectory length and on the dimensionality of the specific problem, respectively. In Table \ref{tbl_NN} we describe the network architecture and specify the relevant hyperparameters.
\begin{table*}
\begin{center}
\begin{tabular}{|| p{5cm} | p{10cm}||}
\hline
\textbf{network type} & feed-forward, densely connected \\
\hline
\textbf{neurons} & \multirow{3}{10cm}{\textit{input layer:} $2N$ ($N$ the number of points in each trajectory)\\ \textit{hidden layers:} $800$, $800$, $400$, $200$\\
\textit{output layer:} $2d^2$ ($d$ the dimension of the initial quantum state)} \\
&\\
&\\
\hline
\textbf{activation functions} & `sigmoid'; `tanh' for the final layer \\
\hline
\textbf{optimizer} & Adam (with Keras default values) \\
\hline
\textbf{loss function} & custom mean-squared error \\
\hline
\textbf{end of training} & early stopping, with a patience of $500$ epochs (maximal $10000$ epochs)\\
\hline
\textbf{batch size} & 512 \\
\hline
\textbf{number of training states} & 10000 \\
\hline
\textbf{number of validation states} & 10000 \\
\hline
\end{tabular}
\caption{Summary of the neural network architecture and hyperparameters.}\label{tbl_NN}
\end{center} \end{table*}
\subsection{\label{sec_input}Input to the neural network}
\begin{figure*}
\caption{Four hundred example trajectories of shifted position, (a), and variance, (b), taken from the training set of four-dimensional states, $d=4$, in a quartic potential. The neural network receives a single position trajectory and the corresponding single variance trajectory as an input. Parameters: $\alpha=5$, $\Gamma=0$.}
\label{fig_trajs_noDec}
\end{figure*}
Here, we show examples of trajectories that were used for training the neural network, to give an impression of their diversity, Fig.~\ref{fig_trajs_noDec}, of the impact of decoherence, Fig.~\ref{fig_trajs_wDec}, and of the motion in a perturbed non-quadratic potential, Fig.~\ref{fig_trajs_wDisp}.
Position-localization decoherence, Fig.~\ref{fig_trajs_wDec}, leads to a damping of the position oscillations and eventually to an increase in variance (after the oscillations have disappeared). As expected, all trajectories approach each other at long times (even more markedly at times later than those shown and used throughout this work). This is a manifestation of the loss of information about the initial state.
In contrast, the deviation from a purely quartic potential alters the trajectories in a systematic manner. The position and variance show no sign of damping or heating (no decoherence was included in those simulations, only a modified potential) and the trajectories remain distinguishable. Thus, the dependence on the initial state persists, although the mapping from trajectory to state is changed. A neural network trained on either scenario will therefore end up with a different internal representation to reconstruct the quantum state in an optimal way.
For the input of the neural network, we sampled each trajectory of maximal length $t\omega_0=20$ with $400$ data points, resulting in a spacing of $\delta t\omega_0=0.05$. This choice is not crucial for the training of the network as long as the number of sampled data points is high enough to represent the trajectories accurately. The data points of the (shifted) position and variance trajectories are concatenated into one large input vector that is then fed to the input layer of the neural network. For the figures in the main text that show the performance of the quantum state tomography with shorter trajectories we use the \textit{same} trajectories, but instead of using all $400$ data points as input we truncate them appropriately. Thereby, we only had to simulate the trajectories once at full length instead of creating separate data sets for the various lengths.
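The assembly of the input vector described above amounts to a simple concatenation; a minimal sketch (the helper name \texttt{make\_input} is illustrative) is:

```python
import numpy as np

def make_input(pos, var, n=None):
    """Concatenate (shifted) position and variance samples into one input vector.

    With n given, only the first n data points of each trajectory are used,
    mimicking the truncation to shorter trajectories described in the text.
    """
    n = len(pos) if n is None else n
    return np.concatenate([pos[:n], var[:n]])
```

For the full $400$-point trajectories this yields an input vector of length $800$, matching the $2N$ input neurons listed in Table~\ref{tbl_NN}.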
\begin{figure*}
\caption{Three example trajectories of shifted position, (a), and variance, (b), taken from the training set of four-dimensional states, $d=4$, in a quartic potential. The dashed black lines show the `original' decoherence-free trajectories. The colored lines show the respective trajectory of the same initial state in the presence of recoil heating $\Gamma/\omega_0=10^{-2}$. Parameters: $\alpha=5$.}
\label{fig_trajs_wDec}
\end{figure*}
\begin{figure*}
\caption{Three example trajectories of shifted position, (a) and (c), and variance, (b) and (d), taken from the training set of four-dimensional states, $d=4$. The dashed black lines show the `original' trajectories in a purely quartic potential. The colored lines show the respective trajectory of the same initial state in the perturbed non-quadratic potential, i.e., $\epsilon/x_0=0.1$ for (a) and (b) and $\epsilon/x_0=-0.1$ for (c) and (d). Here, $\Gamma/\omega_0=0$ and $\alpha=5$ for all trajectories.}
\label{fig_trajs_wDisp}
\end{figure*}
\subsection{Training}\label{sec_training}
\textit{Loss function:} The neural networks are trained via standard supervised training using Keras. However, as discussed in the main text, the output of the neural network cannot directly be interpreted as the desired density matrix. Instead, we interpret the output vector of size $2d^2$ as the real and imaginary parts of the $d^2$ entries of a complex matrix $M$ from which the density matrix is obtained via $\etaop_\text{est}={M^\dagger M}/\tr [M^\dagger M]$. Therefore, we define a custom loss function in which we first calculate the real and imaginary parts of all entries of the estimated density matrix $\etaop_\text{est}$. These values form a vector $\vec{\eta}_\text{est}$ with $2d^2$ entries (some of which are zero by definition since the density matrix is Hermitian). Similarly, we write the true density matrix $\etaop$ as a vector $\vec{\eta}$ of length $2d^2$. Finally, we calculate the element-wise mean-squared error between $\vec{\eta}$ and $\vec{\eta}_\text{est}$, which we use as the loss to train the neural network.
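This parametrization can be sketched in plain NumPy (the helper names are ours; the actual training uses an equivalent loss implemented in Keras). The output vector of length $2d^2$ is read as the real and imaginary parts of $M$, and $M^\dagger M/\tr[M^\dagger M]$ is by construction Hermitian, positive semidefinite, and of unit trace:

```python
import numpy as np

def density_from_output(out, d):
    """Map a length-2d^2 output vector to a valid density matrix.

    The first d^2 entries are the real parts, the last d^2 the imaginary
    parts, of the complex matrix M; rho = M^dag M / tr(M^dag M).
    """
    M = out[:d * d].reshape(d, d) + 1j * out[d * d:].reshape(d, d)
    rho = M.conj().T @ M
    return rho / np.trace(rho).real

def mse_loss(out, rho_true, d):
    """Element-wise MSE between real/imag parts of estimated and true rho."""
    diff = density_from_output(out, d) - rho_true
    return np.mean(np.concatenate([diff.real.ravel(), diff.imag.ravel()])**2)
```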
\textit{Early stopping:} Instead of training all neural networks for a fixed number of epochs, we made use of an early-stopping routine. Training is thereby stopped when no significant improvement is achieved anymore (or at a chosen maximum of $10000$ epochs). In particular, we have used a `patience' value of $500$, \ie, if the lowest validation loss reached so far is not lowered even further (by the tiniest amount) within the following $500$ epochs, then training is stopped. The neural network state achieving the lowest validation loss reached so far is stored and eventually used for quantum state reconstruction. The use of early stopping prevents us from spending too much computational time when significant progress is no longer made (with the risk of missing very slow improvements) and is a common tool to avoid overfitting. Note that we attribute the fluctuations around (and small increase above) the saturation level shown in Fig.~1 of the main text (most prominently visible for the $d=2$ curves) to our particular choice of patience. Data points at larger $t\omega_0$ correspond to training a neural network with longer trajectories, \ie, more input data are provided to a larger number of input neurons. However, all other parameters concerning the network architecture and training are kept fixed. It can be anticipated that this generically slows down the overall learning, since a larger amount of data has to be processed with the same tools. This, in turn, makes it more likely that the next best value of the validation loss is shifted out of the fixed patience interval. By chance this may or may not happen, leading to an earlier or later stop of the training and thereby to the fluctuations in performance (with a small trend toward performing worse).
This becomes particularly evident if the trajectory is sufficiently long to provide all available information (the regime where the performance saturates) and no large further improvements can be expected anyway. Indeed, in this regime we notice that the stopping epochs of two consecutive data points (two networks that receive trajectories with just a small change in trajectory length) can differ significantly (\eg~one network training for almost twice as many epochs as the other). This effect can be avoided (or at least attenuated) by training all networks for the same, much larger, number of epochs while still storing the network with the overall best performance during training. However, this would significantly increase the overall computational time needed.
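The patience logic described above can be summarized in a few lines of plain Python (Keras' \texttt{EarlyStopping} callback implements the same idea; the function name here is illustrative):

```python
def train_with_patience(val_losses, patience=500):
    """Return (stop_epoch, best_epoch) for a given validation-loss history.

    Training stops once the best validation loss seen so far has not been
    improved for `patience` consecutive epochs; the network state at
    best_epoch is the one kept for quantum state reconstruction.
    """
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch, best_epoch  # stop here; keep the best network
    return len(val_losses) - 1, best_epoch
```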
\textit{Learning curve:} In Fig.~\ref{fig_learning_Curve} we show a typical learning curve. In particular, we show the performance of a neural network that is trained to reconstruct four-dimensional quantum states from trajectories in the absence of decoherence and in a perfect quartic potential. The maximal trajectory length $t\omega_0=20$ was used. While the blue line shows the validation loss at every epoch, we only calculated the validation infidelity every $50$th epoch (green line); the orange line shows the validation loss at the same epochs. At very late epochs the validation loss rises again, indicating overfitting. For our main results, we used early stopping with a patience of $500$ epochs to avoid ending up in this regime. In Fig.~\ref{fig_learning_Curve}, the solid black line marks where the early-stopping criterion would have terminated further training. The best validation loss value up to that point is indicated by the dashed black line; it was not improved upon in the following $500$ epochs up to the solid black line. The actual lowest value of the validation loss in this figure occurred roughly $2000$ epochs later, as marked by the blue dashed line in the zoom-in shown in Fig.~\ref{fig_learning_Curve}b. The minimum of the orange line, showing the validation loss on a coarser grid, occurs at a very similar epoch. Notably, the overall improvement of the validation loss between the best value reached before the early stopping (black dashed line) and the actual best value (blue dashed line) is small. The improvement of the validation infidelity from the early-stopping threshold to the actual lowest infidelity value, indicated by the green dotted line, is more significant, though. The lowest validation loss (of the orange curve) and the lowest validation infidelity (green curve) do not coincide at the same epoch (although they are relatively close).
This shows that, although the mean squared error loss function works very well for training, it does not perfectly match the physical objective of small infidelity. Another interesting observation is that the infidelity does not immediately increase when the validation loss starts to rise again at very late training epochs shown in Fig.~\ref{fig_learning_Curve}.
Finally, we remark that throughout this work we used a total set of $20000$ randomly sampled quantum states, $10000$ each for training and validation. The states were sampled using QuTiP's {\tt rand\_dm\_ginibre()} function, whose default is to produce full-rank matrices that correspond to sampling from a Hilbert-Schmidt ensemble. For this fixed set of random states, trajectories were simulated according to the discussed physical situations and with a maximal length of $t\omega_0=20$. Furthermore, the same fixed random seed was used for training all neural networks. Thereby, the comparison between our results, \eg~comparing the performance of neural networks on the same physical scenario but with different trajectory lengths, is simplified: the neural network trains on the \textit{same trajectories} in the \textit{same order}, only that those trajectories are a bit longer. The long trajectories contain the \textit{same information} as the short trajectories plus some additional data points that may or may not contribute additional information, depending on the scenario. This is also a reason why, \eg~in Fig.~1 of the main text, we can clearly state that the fluctuations around the saturation level (most prominently seen for both $d=2$ results) are not physical. Indeed, there cannot be less information in the longer trajectories than in the shorter ones, and we have to attribute the observed deviations to the neural network learning less efficiently.
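For completeness, the Ginibre construction underlying full-rank sampling can be sketched in NumPy (this is our own minimal illustration, not QuTiP's implementation; the helper name is hypothetical): a complex Gaussian matrix $G$ yields a full-rank density matrix via $\rho = GG^\dagger/\tr[GG^\dagger]$.

```python
import numpy as np

def random_density_matrix(d, rng):
    """Sample a full-rank d x d density matrix from a Ginibre-type ensemble."""
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T          # Hermitian and positive semidefinite
    return rho / np.trace(rho).real  # normalize to unit trace
```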
\begin{figure*}
\caption{(a) Typical learning curve showing the performance of a neural network during training. The blue line shows the validation loss at every epoch. The orange line shows the validation loss only at every $50$th epoch, where also the average infidelity of the validation set was computed (green line). The solid black line shows at which epoch the early stopping that we used for our main results would have stopped the presented training, with the black dashed line indicating the best-performing network up to that point. The actual lowest value of the validation loss can be seen in the zoom-in (b), indicated by the blue dashed line; it occurs at an even later epoch, close to where the lowest value of the infidelity is also reached (green dotted lines). Physical scenario for training: perfect quartic potential, $d=4$, $\alpha=5$, $t\omega_0=20$, $\Gamma=0$.}
\label{fig_learning_Curve}
\end{figure*}
\begin{thebibliography}{50}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(2003)\citenamefont
{Leibfried}, \citenamefont {Blatt}, \citenamefont {Monroe},\ and\
\citenamefont {Wineland}}]{Leibfried_IonReview_2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Leibfried}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Blatt}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Wineland}},\ }\bibfield {title}
{\enquote {\bibinfo {title} {Quantum dynamics of single trapped ions}},\
}\href {\doibase 10.1103/RevModPhys.75.281} {\bibfield {journal} {\bibinfo
{journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo
{pages} {281} (\bibinfo {year} {2003})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Aspelmeyer}\ \emph {et~al.}(2014)\citenamefont
{Aspelmeyer}, \citenamefont {Kippenberg},\ and\ \citenamefont
{Marquardt}}]{Aspelmeyer_ReviewOM_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Aspelmeyer}}, \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont
{Kippenberg}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Marquardt}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Cavity
optomechanics}},\ }\href {\doibase 10.1103/RevModPhys.86.1391} {\bibfield
{journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume}
{86}},\ \bibinfo {pages} {1391} (\bibinfo {year} {2014})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Romero-Isart}\ \emph {et~al.}(2010)\citenamefont
{Romero-Isart}, \citenamefont {Juan}, \citenamefont {Quidant},\ and\
\citenamefont {Cirac}}]{Romero_Isart_SuperposLiving_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}}, \bibinfo {author} {\bibfnamefont {M.~L.}\ \bibnamefont
{Juan}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Quidant}}, \
and\ \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Cirac}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Toward quantum superposition
of living organisms}},\ }\href {\doibase 10.1088/1367-2630/12/3/033015}
{\bibfield {journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo
{volume} {12}},\ \bibinfo {pages} {033015} (\bibinfo {year}
{2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Chang}\ \emph {et~al.}(2010)\citenamefont {Chang},
\citenamefont {Regal}, \citenamefont {Papp}, \citenamefont {Wilson},
\citenamefont {Ye}, \citenamefont {Painter}, \citenamefont {Kimble},\ and\
\citenamefont {Zoller}}]{Chang_levitatedNano_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont
{Chang}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Regal}},
\bibinfo {author} {\bibfnamefont {S.~B.}\ \bibnamefont {Papp}}, \bibinfo
{author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wilson}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont
{O.}~\bibnamefont {Painter}}, \bibinfo {author} {\bibfnamefont {H.~J.}\
\bibnamefont {Kimble}}, \ and\ \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Zoller}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Cavity opto-mechanics using an optically levitated nanosphere},}\ }\href
{\doibase 10.1073/pnas.0912969107} {\bibfield {journal} {\bibinfo {journal}
{Proc. Natl. Acad. Sci.}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo
{pages} {1005--} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vanner}\ \emph {et~al.}(2015)\citenamefont {Vanner},
\citenamefont {Pikovski},\ and\ \citenamefont {Kim}}]{Vanner_OM_QST_2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Vanner}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Pikovski}}, \
and\ \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Kim}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Towards optomechanical
quantum state reconstruction of mechanical motion}},\ }\href {\doibase
10.1002/andp.201400124} {\bibfield {journal} {\bibinfo {journal} {Ann.
Phys.}\ }\textbf {\bibinfo {volume} {527}},\ \bibinfo {pages} {15} (\bibinfo
{year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wallentowitz}\ and\ \citenamefont
{Vogel}(1995)}]{Wallentowitz_QST_ion_1995}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Wallentowitz}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Vogel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Reconstruction of
the quantum mechanical state of a trapped ion}},\ }\href {\doibase
10.1103/PhysRevLett.75.2932} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo {pages}
{2932} (\bibinfo {year} {1995})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Singh}\ and\ \citenamefont
{Meystre}(2010)}]{Singh_WignerTomo_usingAtom_2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Singh}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Meystre}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Atomic probe
wigner tomography of a nanomechanical system}},\ }\href {\doibase
10.1103/PhysRevA.81.041804} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {041804 (R)}
(\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Parkins}\ and\ \citenamefont
{Kimble}(1999)}]{Parkins_QST_stateTransfer_1999}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~S.}\ \bibnamefont
{Parkins}}\ and\ \bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum state
transfer between motion and light}},\ }\href {\doibase
10.1088/1464-4266/1/4/323} {\bibfield {journal} {\bibinfo {journal} {J.
Opt.}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {496} (\bibinfo
{year} {1999})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Wallentowitz}\ and\ \citenamefont
{Vogel}(1996)}]{Wallentwoitz_QSTunbalancedHomodyne_1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Wallentowitz}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont
{Vogel}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Unbalanced
homodyning for quantum state measurements}},\ }\href {\doibase
10.1103/PhysRevA.53.4528} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {4528}
(\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Banaszek}\ and\ \citenamefont
{W\'odkiewicz}(1996)}]{Banaszek_QSTphotonCountin_1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Banaszek}}\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{W\'odkiewicz}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Direct
probing of quantum phase space by photon counting}},\ }\href {\doibase
10.1103/PhysRevLett.76.4344} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages}
{4344} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Poyatos}\ \emph {et~al.}(1996)\citenamefont
{Poyatos}, \citenamefont {Walser}, \citenamefont {Cirac}, \citenamefont
{Zoller},\ and\ \citenamefont {Blatt}}]{Poyatos_QST_ion_1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont
{Poyatos}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Walser}},
\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Zoller}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Blatt}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Motion tomography of a single trapped ion}},\ }\href
{\doibase 10.1103/PhysRevA.53.R1966} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo
{pages} {R1966 (R)} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Leibfried}\ \emph {et~al.}(1996)\citenamefont
{Leibfried}, \citenamefont {Meekhof}, \citenamefont {King}, \citenamefont
{Monroe}, \citenamefont {Itano},\ and\ \citenamefont
{Wineland}}]{Leibfried_expQSTion_1996}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Leibfried}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Meekhof}}, \bibinfo {author} {\bibfnamefont {B.~E.}\ \bibnamefont {King}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Monroe}}, \bibinfo
{author} {\bibfnamefont {W.~M.}\ \bibnamefont {Itano}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Experimental determination of the
motional quantum state of a trapped atom}},\ }\href {\doibase
10.1103/PhysRevLett.77.4281} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages}
{4281} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vanner}\ \emph {et~al.}(2013)\citenamefont {Vanner},
\citenamefont {Hofer}, \citenamefont {Cole},\ and\ \citenamefont
{Aspelmeyer}}]{Vanner_CoolingByMsmt_QST_2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Vanner}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Hofer}},
\bibinfo {author} {\bibfnamefont {G.~D.}\ \bibnamefont {Cole}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Aspelmeyer}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Cooling-by-measurement and mechanical
state tomography via pulsed optomechanics}},\ }\href {\doibase
10.1038/ncomms3295} {\bibfield {journal} {\bibinfo {journal} {Nat.
Commun.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {2295}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Romero-Isart}\ \emph
{et~al.}(2011{\natexlab{a}})\citenamefont {Romero-Isart}, \citenamefont
{Pflanzer}, \citenamefont {Juan}, \citenamefont {Quidant}, \citenamefont
{Kiesel}, \citenamefont {Aspelmeyer},\ and\ \citenamefont
{Cirac}}]{Romero-Isart_LevTheoryProtocols_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Pflanzer}}, \bibinfo {author} {\bibfnamefont {M.~L.}\ \bibnamefont {Juan}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Quidant}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Kiesel}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Aspelmeyer}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Optically levitating dielectrics in the quantum regime:
Theory and protocols}},\ }\href {\doibase 10.1103/PhysRevA.83.013803}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {83}},\ \bibinfo {pages} {013803} (\bibinfo {year}
{2011}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont
{Romero-Isart}(2011)}]{Romero_Isart_QuantumSup_Collapse_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Quantum
superposition of massive objects and collapse models}},\ }\href {\doibase
10.1103/PhysRevA.84.052121} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {052121}
(\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Joos}\ \emph {et~al.}(2003)\citenamefont {Joos},
\citenamefont {Zeh}, \citenamefont {Kiefer}, \citenamefont {Giulini},
\citenamefont {Kupsch},\ and\ \citenamefont
{Stamatescu}}]{Joos_book_ClassicalWorld_QuTheory_2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Joos}}, \bibinfo {author} {\bibfnamefont {H.~D.}\ \bibnamefont {Zeh}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Kiefer}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Giulini}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Kupsch}}, \ and\ \bibinfo {author}
{\bibfnamefont {I.-O.}\ \bibnamefont {Stamatescu}},\ }\href@noop {} {\emph
{\bibinfo {title} {Decoherence and the Appearance of a Classical World in
Quantum Theory}}}\ (\bibinfo {publisher} {New York: Springer},\ \bibinfo
{year} {2003})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Schlosshauer}(2007)}]{Schlosshauer_book_QtoCl_2007}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont
{Schlosshauer}},\ }\href@noop {} {\emph {\bibinfo {title} {Decoherence and
the Quantum-to-Classical Transition}}}\ (\bibinfo {publisher} {Berlin:
Springer},\ \bibinfo {year} {2007})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2012)\citenamefont
{Johansson}, \citenamefont {Nation},\ and\ \citenamefont
{Nori}}]{Johansson_qutip_2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont
{Nation}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {QuTiP: An
open-source Python framework for the dynamics of open quantum systems}},\
}\href {\doibase 10.1016/j.cpc.2012.02.021} {\bibfield {journal} {\bibinfo
{journal} {Comp. Phys. Comm.}\ }\textbf {\bibinfo {volume} {183}},\ \bibinfo
{pages} {1760} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Johansson}\ \emph {et~al.}(2013)\citenamefont
{Johansson}, \citenamefont {Nation},\ and\ \citenamefont
{Nori}}]{Johansson_qutip_2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont
{Johansson}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont
{Nation}}, \ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Nori}},\ }\bibfield {title} {\enquote {\bibinfo {title} {QuTiP 2: A Python
framework for the dynamics of open quantum systems}},\ }\href {\doibase
10.1016/j.cpc.2012.11.019} {\bibfield {journal} {\bibinfo {journal} {Comp.
Phys. Comm.}\ }\textbf {\bibinfo {volume} {184}},\ \bibinfo {pages} {1234}
(\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ballentine}\ and\ \citenamefont
{McRae}(1998)}]{Ballentine_MomentEquations_1998}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~E.}\ \bibnamefont
{Ballentine}}\ and\ \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont
{McRae}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Moment equations
for probability distributions in classical and quantum mechanics}},\ }\href
{\doibase 10.1103/PhysRevA.58.1799} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo
{pages} {1799} (\bibinfo {year} {1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Brizuela}(2014{\natexlab{a}})}]{Brizuela_PRD_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Brizuela}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Classical and
quantum behavior of the harmonic and the quartic oscillators}},\ }\href
{\doibase 10.1103/PhysRevD.90.125018} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo
{pages} {125018} (\bibinfo {year} {2014}{\natexlab{a}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont
{Brizuela}(2014{\natexlab{b}})}]{Brizuela_PRD_generalizedUncertainty_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Brizuela}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Statistical
moments for classical and quantum dynamics: Formalism and generalized
uncertainty relations}},\ }\href {\doibase 10.1103/PhysRevD.90.085027}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo
{volume} {90}},\ \bibinfo {pages} {085027} (\bibinfo {year}
{2014}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Torlai}\ \emph {et~al.}(2018)\citenamefont {Torlai},
\citenamefont {Mazzola}, \citenamefont {Carrasquilla}, \citenamefont
{Troyer}, \citenamefont {Melko},\ and\ \citenamefont
{Carleo}}]{Torlai_NN_QST_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Torlai}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Mazzola}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Carrasquilla}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Troyer}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Melko}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Carleo}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Neural-network quantum state tomography}},\ }\href
{\doibase 10.1038/s41567-018-0048-5} {\bibfield {journal} {\bibinfo
{journal} {Nat. Phys.}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages}
{447} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Xu}\ and\ \citenamefont
{Xu}(2018)}]{Xu_NNforQST_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont
{Xu}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Xu}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Neural network state
estimation for full quantum state tomography}},\ }\href
{https://arxiv.org/abs/1811.06654} {\bibfield {journal} {\bibinfo {journal}
{arXiv:1811.06654}\ } (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Xin}\ \emph {et~al.}(2018)\citenamefont {Xin},
\citenamefont {Lu}, \citenamefont {Cao}, \citenamefont {Anikeeva},
\citenamefont {Lu}, \citenamefont {Li}, \citenamefont {Long},\ and\
\citenamefont {Zeng}}]{Xin_LocalMsmt_QST_NN_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Xin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lu}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Cao}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {Anikeeva}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Long}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Zeng}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Local-measurement-based
quantum state tomography via neural networks}},\ }\href
{https://arxiv.org/abs/1807.07445} {\bibfield {journal} {\bibinfo {journal}
{arXiv:1807.07445}\ } (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Quek}\ \emph {et~al.}(2018)\citenamefont {Quek},
\citenamefont {Fort},\ and\ \citenamefont {Ng}}]{Quek_adaptiveQST_NN_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Quek}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Fort}}, \ and\
\bibinfo {author} {\bibfnamefont {H.~K.}\ \bibnamefont {Ng}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Adaptive quantum state tomography with
neural networks}},\ }\href {https://arxiv.org/abs/1812.06693} {\bibfield
{journal} {\bibinfo {journal} {arXiv:1812.06693}\ } (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Palmieri}\ \emph {et~al.}(2019)\citenamefont
{Palmieri}, \citenamefont {Kovlakov}, \citenamefont {Bianchi}, \citenamefont
{Yudin}, \citenamefont {Straupe}, \citenamefont {Biamonte},\ and\
\citenamefont {Kulik}}]{Palmieri_experimentalQSTenhanced_NN_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont
{Palmieri}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kovlakov}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Bianchi}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Yudin}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Straupe}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Biamonte}}, \ and\ \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Kulik}},\ }\bibfield {title} {\enquote {\bibinfo {title}
{Experimental neural network enhanced quantum tomography}},\ }\href
{https://arxiv.org/abs/1904.05902} {\bibfield {journal} {\bibinfo {journal}
{arXiv:1904.05902}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tebbenjohanns}\ \emph {et~al.}(2019)\citenamefont {Tebbenjohanns},
\citenamefont {Frimmer},\ and\ \citenamefont
{Novotny}}]{Tebbenjohanns_optimalDetection_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Tebbenjohanns}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Frimmer}},\ and\
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Novotny}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Optimal position detection of a dipolar
scatterer in a focused field}},\ }\href {\doibase
10.1103/PhysRevA.100.043821} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. A}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages}
{043821} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Tebbenjohanns}\ \emph {et~al.}(2019)\citenamefont {Tebbenjohanns},
\citenamefont {Frimmer}, \citenamefont {Jain}, \citenamefont
{Windey},\ and\ \citenamefont {Novotny}}]{Tebbenjohanns_sidebandAsym_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Tebbenjohanns}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Frimmer}},
\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Jain}}, \bibinfo
{author} {\bibfnamefont {D.}\ \bibnamefont {Windey}}, \ and\ \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Novotny}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Motional sideband asymmetry of a nanoparticle optically levitated in free space}},\ }\href {https://arxiv.org/abs/1908.05079}
{\bibfield {journal} {\bibinfo {journal} {arXiv:1908.05079}\ } (\bibinfo {year}
{2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Windey}\ \emph {et~al.}(2019)\citenamefont {Windey},
\citenamefont {Gonzalez-Ballestero}, \citenamefont {Maurer}, \citenamefont
{Novotny}, \citenamefont {Romero-Isart},\ and\ \citenamefont
{Reimann}}]{Windey_Cooling_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Windey}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gonzalez-Ballestero}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Maurer}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Novotny}},
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Romero-Isart}}, \ and\
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Reimann}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Cavity-based 3D cooling of a levitated
nanoparticle via coherent scattering}},\ }\href {\doibase
10.1103/PhysRevLett.122.123601} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages}
{123601} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gonzalez-Ballestero}\ \emph
{et~al.}(2019)\citenamefont {Gonzalez-Ballestero}, \citenamefont {Maurer},
\citenamefont {Windey}, \citenamefont {Novotny}, \citenamefont {Reimann},\
and\ \citenamefont {Romero-Isart}}]{Gonzalez-Ballestero_CoolingTheoery_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gonzalez-Ballestero}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Maurer}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Windey}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Novotny}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Reimann}}, \ and\ \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Romero-Isart}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Theory for cavity cooling of levitated
nanoparticles via coherent scattering: Master equation approach}},\ }\href
{\doibase 10.1103/PhysRevA.100.013805} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo
{pages} {013805} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Home}\ \emph {et~al.}(2011)\citenamefont {Home},
\citenamefont {Hanneke}, \citenamefont {Jost}, \citenamefont {Leibfried},\
and\ \citenamefont {Wineland}}]{Home_ionsAnharmonicPot_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont
{Home}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hanneke}},
\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Jost}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Leibfried}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~J.}\ \bibnamefont {Wineland}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Normal modes of trapped ions in the
presence of anharmonic trap potentials}},\ }\href {\doibase
10.1088/1367-2630/13/7/073026} {\bibfield {journal} {\bibinfo {journal}
{New J. Phys.}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages}
{073026} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kuhlicke}\ \emph {et~al.}(2014)\citenamefont
{Kuhlicke}, \citenamefont {Schell}, \citenamefont {Zoll},\ and\ \citenamefont
{Benson}}]{Kuhlicke_NV_quadrupoleTrap_2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Kuhlicke}}, \bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont
{Schell}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zoll}}, \ and\
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Benson}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Nitrogen vacancy center fluorescence
from a submicron diamond cluster levitated in a linear quadrupole ion
trap}},\ }\href {\doibase 10.1063/1.4893575} {\bibfield {journal} {\bibinfo
{journal} {Appl. Phys. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo
{pages} {073101} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Alda}\ \emph {et~al.}(2016)\citenamefont {Alda},
\citenamefont {Berthelot}, \citenamefont {Rica},\ and\ \citenamefont
{Quidant}}]{Alda_NanoparticlePaulTrap_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Alda}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Berthelot}},
\bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont {Rica}}, \ and\
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Quidant}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Trapping and manipulation of individual
nanoparticles in a planar Paul trap}},\ }\href {\doibase 10.1063/1.4965859}
{\bibfield {journal} {\bibinfo {journal} {Appl. Phys. Lett.}\ }\textbf
{\bibinfo {volume} {109}},\ \bibinfo {pages} {163105} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Delord}\ \emph {et~al.}(2018)\citenamefont {Delord},
\citenamefont {Huillery}, \citenamefont {Schwab}, \citenamefont {Nicolas},
\citenamefont {Lecordier},\ and\ \citenamefont
{H\'etet}}]{Delord_SpinEchosLevi_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Delord}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Huillery}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Schwab}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Nicolas}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Lecordier}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {H\'etet}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Ramsey interferences and spin echoes from electron spins
inside a levitating macroscopic particle}},\ }\href {\doibase
10.1103/PhysRevLett.121.053602} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo {pages}
{053602} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bykov}\ \emph {et~al.}(2019)\citenamefont {Bykov},
\citenamefont {Mestres}, \citenamefont {Dania}, \citenamefont {Schm\"oger},\
and\ \citenamefont {Northup}}]{Bykov_NanoparticlesPaulTrap_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont
{Bykov}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Mestres}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Dania}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {Schm\"oger}}, \ and\ \bibinfo
{author} {\bibfnamefont {T.~E.}\ \bibnamefont {Northup}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Direct loading of nanoparticles under
high vacuum into a Paul trap for levitodynamical experiments}},\ }\href
{\doibase 10.1063/1.5109645} {\bibfield {journal} {\bibinfo {journal}
{Appl. Phys. Lett.}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages}
{034101} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Romero-Isart}\ \emph {et~al.}(2012)\citenamefont
{Romero-Isart}, \citenamefont {Clemente}, \citenamefont {Navau},
\citenamefont {Sanchez},\ and\ \citenamefont
{Cirac}}]{Romero-Isart_MagneticLevi_2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Clemente}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Navau}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sanchez}}, \ and\
\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Quantum magnetomechanics with levitating
superconducting microspheres}},\ }\href {\doibase
10.1103/PhysRevLett.109.147205} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages}
{147205} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Slezak}\ \emph {et~al.}(2018)\citenamefont {Slezak},
\citenamefont {Lewandowski}, \citenamefont {Hsu},\ and\ \citenamefont
{D'Urso}}]{Slezak_MagneticLevi_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~R.}\ \bibnamefont
{Slezak}}, \bibinfo {author} {\bibfnamefont {C.~W.}\ \bibnamefont
{Lewandowski}}, \bibinfo {author} {\bibfnamefont {J.-F.}\ \bibnamefont
{Hsu}}, \ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {D'Urso}},\
}\bibfield {title} {\enquote {\bibinfo {title} {Cooling the motion of a
silica microsphere in a magneto-gravitational trap in ultra-high vacuum}},\
}\href {\doibase 10.1088/1367-2630/aacac1} {\bibfield {journal} {\bibinfo
{journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo
{pages} {063028} (\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Prat-Camps}\ \emph {et~al.}(2017)\citenamefont
{Prat-Camps}, \citenamefont {Teo}, \citenamefont {Rusconi}, \citenamefont
{Wieczorek},\ and\ \citenamefont
{Romero-Isart}}]{Prat-Camps_InertialForceSensor_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Prat-Camps}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Teo}},
\bibinfo {author} {\bibfnamefont {C.~C.}\ \bibnamefont {Rusconi}}, \bibinfo
{author} {\bibfnamefont {W.}~\bibnamefont {Wieczorek}}, \ and\ \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Romero-Isart}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Ultrasensitive inertial and force
sensors with diamagnetically levitated magnets}},\ }\href {\doibase
10.1103/PhysRevApplied.8.034002} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Appl.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages}
{034002} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Romero-Isart}\ \emph
{et~al.}(2011{\natexlab{b}})\citenamefont {Romero-Isart}, \citenamefont
{Pflanzer}, \citenamefont {Blaser}, \citenamefont {Kaltenbaek}, \citenamefont
{Kiesel}, \citenamefont {Aspelmeyer},\ and\ \citenamefont
{Cirac}}]{Romero-Isart_LargeQuSup_2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}}, \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont
{Pflanzer}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Blaser}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Kaltenbaek}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Kiesel}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Aspelmeyer}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Large quantum superpositions and interference of massive
nanometer-sized objects}},\ }\href {\doibase 10.1103/PhysRevLett.107.020405}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {107}},\ \bibinfo {pages} {020405} (\bibinfo {year}
{2011}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hebestreit}\ \emph {et~al.}(2018)\citenamefont
{Hebestreit}, \citenamefont {Frimmer}, \citenamefont {Reimann},\ and\
\citenamefont {Novotny}}]{Hebestreit_FreeFallingNanoparticles_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Hebestreit}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Frimmer}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Reimann}}, \ and\
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Novotny}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Sensing static forces with free-falling
nanoparticles}},\ }\href {\doibase 10.1103/PhysRevLett.121.063602} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {121}},\ \bibinfo {pages} {063602} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Diehl}\ \emph {et~al.}(2018)\citenamefont {Diehl},
\citenamefont {Hebestreit}, \citenamefont {Reimann}, \citenamefont
{Tebbenjohanns}, \citenamefont {Frimmer},\ and\ \citenamefont
{Novotny}}]{Diehl_NanoparticleCloseToMembrane_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Diehl}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Hebestreit}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Reimann}}, \bibinfo
{author} {\bibfnamefont {F.}~\bibnamefont {Tebbenjohanns}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Frimmer}}, \ and\ \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Novotny}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Optical levitation and feedback cooling of a nanoparticle
at subwavelength distances from a membrane}},\ }\href {\doibase
10.1103/PhysRevA.98.013851} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {013851}
(\bibinfo {year} {2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Magrini}\ \emph {et~al.}(2018)\citenamefont
{Magrini}, \citenamefont {Norte}, \citenamefont {Riedinger}, \citenamefont
{Marinkovi\'{c}}, \citenamefont {Grass}, \citenamefont {Deli\'{c}},
\citenamefont {Gr\"{o}blacher}, \citenamefont {Hong},\ and\ \citenamefont
{Aspelmeyer}}]{Magrini_NearFieldNanoparticle_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Magrini}}, \bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont {Norte}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Riedinger}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Marinkovi\'{c}}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Grass}}, \bibinfo {author}
{\bibfnamefont {U.}~\bibnamefont {Deli\'{c}}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Gr\"{o}blacher}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Hong}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Aspelmeyer}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Near-field coupling of a levitated nanoparticle to a
photonic crystal cavity}},\ }\href {\doibase 10.1364/OPTICA.5.001597}
{\bibfield {journal} {\bibinfo {journal} {Optica}\ }\textbf {\bibinfo
{volume} {5}},\ \bibinfo {pages} {1597} (\bibinfo {year} {2018})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Pino}\ \emph {et~al.}(2018)\citenamefont {Pino},
\citenamefont {Prat-Camps}, \citenamefont {Sinha}, \citenamefont
{Venkatesh},\ and\ \citenamefont {Romero-Isart}}]{Pino_Skatepark_2018}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Pino}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Prat-Camps}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Sinha}}, \bibinfo
{author} {\bibfnamefont {B.~P.}\ \bibnamefont {Venkatesh}}, \ and\ \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Romero-Isart}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {On-chip quantum interference of a
superconducting microsphere}},\ }\href {\doibase 10.1088/2058-9565/aa9d15}
{\bibfield {journal} {\bibinfo {journal} {Quantum. Sci. Technol.}\ }\textbf
{\bibinfo {volume} {3}},\ \bibinfo {pages} {025001} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Romero-Isart}(2017)}]{ORI_CoherentInflation_2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Romero-Isart}},\ }\bibfield {title} {\enquote {\bibinfo {title} {Coherent
inflation for large quantum superpositions of levitated microspheres}},\
}\href {\doibase 10.1088/1367-2630/aa99bf} {\bibfield {journal} {\bibinfo
{journal} {New J. Phys.}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo
{pages} {123029} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Deli\'{c}}\ \emph {et~al.}(2019)\citenamefont
{Deli\'{c}}, \citenamefont {Reisenbauer}, \citenamefont {Grass},
\citenamefont {Kiesel}, \citenamefont {Vuleti\'{c}},\ and\ \citenamefont
{Aspelmeyer}}]{Delic_Cooling_2019}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont
{Deli\'{c}}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Reisenbauer}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Grass}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Kiesel}}, \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Vuleti\'{c}}}, \ and\ \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Aspelmeyer}},\ }\bibfield
{title} {\enquote {\bibinfo {title} {Cavity cooling of a levitated
nanosphere by coherent scattering}},\ }\href {\doibase
10.1103/PhysRevLett.122.123602} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {122}},\ \bibinfo {pages}
{123602} (\bibinfo {year} {2019})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Jain}\ \emph {et~al.}(2016)\citenamefont {Jain},
\citenamefont {Gieseler}, \citenamefont {Moritz}, \citenamefont {Dellago},
\citenamefont {Quidant},\ and\ \citenamefont
{Novotny}}]{Jain_LeviRecoilHeating_2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Jain}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gieseler}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Moritz}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Dellago}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Quidant}}, \ and\ \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Novotny}},\ }\bibfield {title} {\enquote
{\bibinfo {title} {Direct measurement of photon recoil from a levitated
nanoparticle}},\ }\href {\doibase 10.1103/PhysRevLett.116.243601} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {243601} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fonseca}\ \emph {et~al.}(2016)\citenamefont
{Fonseca}, \citenamefont {Aranas}, \citenamefont {Millen}, \citenamefont
{Monteiro},\ and\ \citenamefont
{Barker}}]{Fonesca_NonlinearDynNanopart_2016}
\BibitemOpen
\end{thebibliography}
\end{document}
\begin{document}
\title{{\bf\large Rational Maps and Maximum Likelihood Decodings}}
\begin{abstract} This paper studies maximum likelihood (ML) decoding of error-correcting codes in terms of rational maps and proposes an approximate ML decoding rule based on a Taylor expansion. The expansion point, denoted by $p$ throughout the paper, is chosen by considering dynamical system properties of the rational map. We present two results about this approximate ML decoding. First, we prove that the order of the first nonlinear terms in the Taylor expansion is determined by the minimum distance of the dual code. Second, we give numerical results on bit error probabilities for the approximate ML decoding. These numerical results show better performance than that of BCH codes with Berlekamp-Massey decoding, and indicate that the proposed method approximates the original ML decoding very well. \end{abstract} {\small {\bf Key words.} Maximum likelihood decoding, rational map, dynamical system
\\ {\bf AMS subject classification.} 37N99, 94B35}
\section{Introduction}\label{sec:intro} This paper proposes a new perspective on maximum likelihood (ML) decoding of error-correcting codes, viewing it through rational maps, and shows some relationships between coding theory and dynamical systems. Sections \ref{sec:cs}, \ref{sec:lc}, and \ref{sec:ml} fix notation and review the minimum prerequisites of coding theory (e.g., see \cite{Gallagerinfo}). The main results are presented in Section \ref{sec:result}.
\subsection{Communication Systems}\label{sec:cs}
A mathematical model of communication systems in information theory was developed by Shannon \cite{shannon}. A general block diagram for visualizing the behavior of such systems is given in Figure \ref{fig:channel}. The source transmits a $k$-bit message $m=(m_1\cdots m_k)$ to the destination via the channel, which is usually affected by noise $e$. In order to recover the transmitted message at the destination under the influence of noise, the message is transformed into a codeword $x=(x_1\cdots x_n),~n\geq k,$ by some injective mapping $\varphi$ at the encoder and input to the channel. The decoder then transforms an $n$-bit received sequence $y=(y_1\cdots y_n)$ by some decoding mapping $\psi$ in order to recover the transmitted codeword at the destination. All arithmetic is carried out in some finite field, which in this paper we fix as $\mathbb{F}_2=\{0,1\}$ except in Section \ref{sec:ag}. As a channel model, we deal with the binary symmetric channel (BSC), characterized by its transition probability $\epsilon$ ($0<\epsilon<1/2$): with probability $1-\epsilon$ the output letter is a faithful replica of the input, and with probability $\epsilon$ it is the opposite of the input letter, independently for each bit (see Figure \ref{fig:bsc}). In particular, the BSC is an example of a memoryless channel.
One of the main purposes of coding theory is thus to develop a good encoder-decoder pair $(\varphi, \psi)$ that is robust to noise perturbations. The problem, in this setting, is how to use the redundancy $n-k$ efficiently.
\begin{figure}
\caption{Communication system}
\label{fig:channel}
\caption{Binary symmetric channel}
\label{fig:bsc}
\end{figure}
\subsection{Linear Codes}\label{sec:lc} A code with a linear encoding map $\varphi$ is called a linear code. A codeword in a linear code can be characterized by its generator matrix \[ G=\left(\begin{array}{ccc} g_{11} & \cdots & g_{1n}\\ \vdots & \ddots & \vdots\\ g_{k1} & \cdots & g_{kn} \end{array} \right)= \left( g_1 \cdots g_n \right),~~~ g_i=\left(\begin{array}{c} g_{1i}\\ \vdots \\ g_{ki} \end{array} \right),~~i=1,\cdots,n, \] where each element $g_{ij}\in\mathbb{F}_2$. Therefore the set of codewords is given by \[ \mathcal{C}=\{
(x_1\cdots x_n)=(m_1\cdots m_k)G~|~m_i\in \mathbb{F}_2\}, ~~~\sharp \mathcal{C}=2^k. \] Here without loss of generality, we assume $g_i\neq 0$ for all $i=1,\cdots, n$ and ${\rm rank}~G=k$. We call $k$ and $n$ the dimension and the length of the code, respectively.
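As a concrete illustration, the codeword set $\mathcal{C}$ can be enumerated directly from a generator matrix. The following Python sketch uses a small $[3,2]$ single parity check code; this toy code is our illustrative choice, not an example from the paper.

```python
from itertools import product

def codewords(G):
    """Enumerate C = { mG : m in F_2^k } for a k x n generator matrix over F_2."""
    k, n = len(G), len(G[0])
    return {tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for m in product([0, 1], repeat=k)}

# [3,2] single parity check code (illustrative choice)
G = [[1, 0, 1],
     [0, 1, 1]]
C = codewords(G)
print(sorted(C))        # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert len(C) == 2 ** 2  # sharp C = 2^k since rank G = k
```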
Because of the linearity, it is also possible to describe $\mathcal{C}$ as a kernel of a matrix $H$ whose $m=n-k$ row vectors are linearly independent and orthogonal to those of $G$, i.e., \[
\mathcal{C}=\{(x_1\cdots x_n)~|~(x_1\cdots x_n)H^t=0\},~~~ H=\left(\begin{array}{ccc} h_{11} & \cdots & h_{1n}\\ \vdots & \ddots & \vdots\\ h_{m1} & \cdots & h_{mn} \end{array} \right)= (h_1\cdots h_n), \] where $H^t$ denotes the transpose of $H$. This matrix $H$ is called a parity check matrix of $\mathcal{C}$. The dual code $\mathcal{C}^*$ of $\mathcal{C}$ is defined in such a way that a parity check matrix of $\mathcal{C}^*$ is given by a generator matrix $G$ of $\mathcal{C}$.
The Hamming distance $d(x,y)$ between two $n$-bit sequences $x,y\in\mathbb{F}^n_2$ is given by the number of positions at which the two sequences differ. The weight of an element $x\in\mathbb{F}^n_2$ is its Hamming distance to $0$, i.e., $d(x):=d(x,0)$. Then the minimum distance $d(\mathcal{C})$ of a code $\mathcal{C}$ can be defined in two equivalent ways as \[ d(\mathcal{C})=\min \{
d(x,y)~|~x,y\in\mathcal{C}~{\rm and}~x\neq y \} =\min \{
d(x)~|~0\neq x\in\mathcal{C} \}. \] Here the second equality results from the linearity. It is easy to observe that $d(\mathcal{C})=d$ if and only if some set of $d$ column vectors of $H$ is linearly dependent while every set of $d-1$ column vectors is linearly independent.
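Both definitions of the minimum distance can be checked by brute force on a small code. The following Python sketch does so for a toy $[3,1]$ repetition code (our illustrative choice), confirming that the minimum nonzero weight coincides with the minimum pairwise distance.

```python
from itertools import product

def codewords(G):
    """Enumerate all codewords mG over F_2."""
    k, n = len(G), len(G[0])
    return {tuple(sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
            for m in product([0, 1], repeat=k)}

def min_distance(C):
    # By linearity, d(C) equals the minimum weight of a nonzero codeword.
    return min(sum(x) for x in C if any(x))

C = codewords([[1, 1, 1]])   # [3,1] repetition code: {000, 111}
assert min_distance(C) == 3
# Cross-check against the pairwise definition of d(C):
assert min_distance(C) == min(sum(a != b for a, b in zip(x, y))
                              for x in C for y in C if x != y)
```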
For a code $\mathcal{C}$ with minimum distance $d=d(\mathcal{C})$, let us set $t:=\lfloor (d-1)/2\rfloor$, where $\lfloor a \rfloor$ is the integer part of $a$. Then $\mathcal{C}$ can correct $t$ errors, which follows from the observation that if $y\in\mathbb{F}^n_2$ and $d(x,y)\leq t$ for some $x\in\mathcal{C}$, then $x$ is the only codeword with $d(x,y)\leq t$. In this sense, the minimum distance is one of the most important parameters for measuring the performance of a code, and it is desirable to make it as large as possible for robustness to noise.
\subsection{Maximum Likelihood Decoding}\label{sec:ml}
Let us recall that, given a transmitted codeword $x$, the conditional probability $P(y|x)$ of a received sequence $y\in\mathbb{F}^n_2$ at the decoder is given by \[
P(y|x)=P(y_1|x_1)\cdots P(y_n|x_n) \] for a memoryless channel. Maximum likelihood (ML) decoding $\psi:\mathbb{F}_2^n\ni y\mapsto \tilde{x}\in\mathbb{F}^n_2$ is given by taking
the marginalization of $P(y|x)$ for each bit. Precisely speaking, for a received sequence $y$, the $i$-th bit element $\tilde{x}_i$ of the decoded word $\tilde{x}$ is determined by the following rule: \begin{equation} \tilde{x}_i:=\left\{ \begin{array}{ll}
1, & \sum_{\substack{x\in\mathcal{C}\\x_i=0}}P(y|x)\leq\sum_{\substack{x\in\mathcal{C}\\x_i=1}}P(y|x)\\ 0, & {\rm otherwise} \end{array},~~i=1,\cdots,n. \right. \label{eq:bitml} \end{equation}
In general, for a given decoder $\psi$, the bit error probability ${P^e_{\rm bit}=\max\{P^e_1,\cdots,P^e_n\}}$, where \[ P^e_i=\sum_{x\in\mathcal{C},y\in\mathbb{F}^n_2}P(x,y)(1-\delta_{x_i,\tilde{x}_i}),~~~ \tilde{x}=(\tilde{x}_1,\cdots,\tilde{x}_n)=\psi(y),~~~ \delta_{a,b}=\left\{ \begin{array}{ll} 1, & a=b\\ 0, & a\neq b \end{array} \right., \] is one of the most important measures of decoding performance. Obviously, it is desirable to design an encoding-decoding pair whose bit error probability is as small as possible. It is known that ML decoding attains the minimum bit error probability for any encoding under the uniform distribution on $P(x)$. In this sense, ML decoding is the best among all decoding rules. However, its computational cost is at least $2^k$ operations, which is prohibitive for practical applications.
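The marginalization rule (\ref{eq:bitml}) can be implemented directly by summing $P(y|x)$ over all codewords, which makes the $2^k$ cost explicit. The following Python sketch decodes over a BSC using a toy repetition code of our choosing (ties are resolved toward $1$, as in (\ref{eq:bitml})).

```python
def ml_decode(C, y, eps):
    """Bitwise ML decoding: marginalize P(y|x) over the code C on a BSC."""
    n = len(y)
    def likelihood(x):
        d = sum(a != b for a, b in zip(x, y))   # Hamming distance d(x, y)
        return eps ** d * (1 - eps) ** (n - d)  # P(y|x) for a memoryless BSC
    decoded = []
    for i in range(n):
        p1 = sum(likelihood(x) for x in C if x[i] == 1)
        p0 = sum(likelihood(x) for x in C if x[i] == 0)
        decoded.append(1 if p1 >= p0 else 0)    # ties decoded as 1
    return tuple(decoded)

C = {(0, 0, 0), (1, 1, 1)}                      # [3,1] repetition code
print(ml_decode(C, (1, 0, 1), 0.1))             # (1, 1, 1): majority vote
```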
From the above property of ML decoding, one of the key motivations for this work is the following simple question: is it possible to accurately approximate the ML decoding rule with low computational complexity? The main results in this paper give answers to this question.
\subsection{Main Results}\label{sec:result}
Let us first define for each codeword $x\in \mathcal{C}$ its codeword polynomial $F_x(u_1,\cdots,u_n)$ as \[ F_x(u_1,\cdots,u_n):=\prod^n_{i=1}\rho_i(u_i), ~~~~~ \rho_i(u_i)=\left\{\begin{array}{ll} u_i,& x_i=1,\\ 1-u_i,&x_i=0. \end{array} \right. \] Then we define a rational map $f:I^n\rightarrow I^n$, $I=[0,1],$ by using codeword polynomials as \begin{eqnarray} &&f:(u_1,\cdots,u_n)\mapsto (u'_1,\cdots,u'_n),\nonumber\\ &&u'_i=f_i(u):=\frac{\sum_{x\in \mathcal{C},x_i=1} F_x(u)}{H(u)},~~~i=1,\cdots,n,\nonumber\\ &&H(u):=\sum_{x\in\mathcal{C}}F_x(u),\label{eq:map} \end{eqnarray} where $u=(u_1,\cdots,u_n)$. This rational map plays the most important role in the paper. It is sometimes denoted by $f_G$, when we need to emphasize the generator matrix $G$ of the code $\mathcal{C}$.
For a sequence $y\in\mathbb{F}^n_2$, let us take a point $u^0\in I^n$ as \begin{equation} u^0_i=\left\{ \begin{array}{ll} \epsilon, & y_i=0\\ 1-\epsilon, & y_i=1 \end{array} ,~~~i=1,\cdots,n, \label{eq:ip} \right. \end{equation} where $\epsilon$ is the transition probability of the channel.
Then it is straightforward to check that $F_x(u^0)=P(y|x)$. Namely, the conditional probability of $y$ under a codeword $x\in \mathcal{C}$ is given by the value of the corresponding codeword polynomial $F_x(u)$ at $u=u^0$. Therefore, from the construction of the rational map, ML decoding (\ref{eq:bitml}) is equivalently given by the following rule \begin{eqnarray} &&\psi:\mathbb{F}_2^n\ni y\mapsto \tilde{x}\in\mathbb{F}_2^n,\nonumber\\ &&\tilde{x}_i=\psi_i(y):=\left\{ \begin{array}{ll} 1, & f_i(u^0)\geq 1/2\\ 0, & f_i(u^0)< 1/2 \end{array} ,~~~i=1,\cdots,n. \right.\label{eq:mlrule} \end{eqnarray} In this sense, the study of ML decoding can be treated by analyzing the image of the initial point (\ref{eq:ip}) by the rational map (\ref{eq:map}). Some of the properties of this map in the sense of dynamical systems will be studied in detail in Section \ref{sec:ds}. We will also discuss in Section \ref{sec:discussions} that performance of a code can be explained by these properties.
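To make the equivalence concrete, the following Python sketch evaluates the codeword polynomials $F_x$ and the rational map $f$ at the initial point $u^0$ for a toy repetition code (our illustrative choice, not an example from the paper). Thresholding $f_i(u^0)$ at $1/2$ reproduces the bitwise ML decision, and $p=(1/2,\cdots,1/2)$ is visibly a fixed point.

```python
def F(x, u):
    """Codeword polynomial F_x(u): product of u_i if x_i = 1, else (1 - u_i)."""
    r = 1.0
    for xi, ui in zip(x, u):
        r *= ui if xi == 1 else 1.0 - ui
    return r

def f(C, u):
    """Rational map (u_1,...,u_n) -> (f_1(u),...,f_n(u))."""
    H = sum(F(x, u) for x in C)
    return tuple(sum(F(x, u) for x in C if x[i] == 1) / H
                 for i in range(len(u)))

C = {(0, 0, 0), (1, 1, 1)}                          # [3,1] repetition code
eps, y = 0.1, (1, 0, 1)
u0 = tuple(1 - eps if yi else eps for yi in y)      # initial point u^0 from y
print([1 if fi >= 0.5 else 0 for fi in f(C, u0)])   # ML decision: [1, 1, 1]
print(f(C, (0.5, 0.5, 0.5)))                        # p is fixed: (0.5, 0.5, 0.5)
```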
For the statement of the main results, we only here mention that this rational map has a fixed point $p:=(1/2,\cdots,1/2)$ for any generator matrix (Proposition \ref{pp:fp}). Let us denote the Taylor expansion at $p$ by \begin{equation} f(u)=p+Jv+f^{(2)}(v)+\cdots+f^{(l)}(v)+O(v^{l+1}), \label{eq:taylor} \end{equation} where $v=(v_1,\cdots,v_n)$ is a vector notation of $v_i=u_i-1/2, i=1,\cdots,n$, $J$ is the Jacobi matrix at $p$, $f^{(i)}(v)$ corresponds to the $i$-th order term, and $O(v^{l+1})$ means the usual order notation. The reason why we choose $p$ as the approximating point is related to the local dynamical property at $p$ and will be explained in Section \ref{sec:p}.
By truncating the higher order terms $O(v^{l+1})$ in (\ref{eq:taylor}) and denoting the result by \[ \tilde{f}(u)=p+Jv+f^{(2)}(v)+\cdots+f^{(l)}(v), \] we can define the $l$-th order approximation of ML decoding by replacing the map $f(u)$ in (\ref{eq:mlrule}) with $\tilde{f}(u)$; we denote this approximate ML decoding by ${\tilde{\psi}:\mathbb{F}_2^n\rightarrow\mathbb{F}_2^n}$, i.e., \begin{eqnarray} &&\tilde\psi:\mathbb{F}_2^n\ni y\mapsto \tilde{x}\in\mathbb{F}_2^n,\nonumber\\ &&\tilde{x}_i=\tilde\psi_i(y):=\left\{ \begin{array}{ll} 1, & \tilde{f}_i(u^0)\geq 1/2\\ 0, & \tilde{f}_i(u^0)< 1/2 \end{array} ,~~~i=1,\cdots,n. \right.\label{eq:apmlrule} \end{eqnarray} Let us remark that the notations $\tilde{f}$ and $\tilde{\psi}$ do not explicitly express the dependence on $l$, to avoid cluttering the notation.
\subsubsection{Duality Theorem}\label{sec:duality} We note that there are two competing requirements on this approximate ML decoding. On the one hand, in the sense of precision, it is preferable to have an expansion with large $l$. On the other hand, from the viewpoint of low computational complexity, it is desirable that the higher order terms contain many zero elements. The next theorem states a sufficient condition satisfying both requirements.
\begin{thm}\label{thm:thm} Let $l\geq 2$. If any distinct $l$ column vectors of a generator matrix $G$ are linearly independent, then the Taylor expansion {\rm (\ref{eq:taylor})} at $p$ of the rational map {\rm (\ref{eq:map})} takes the following form \[ f(u)=u+f^{(l)}(v)+O(v^{l+1}), \] where the $i$-th coordinate $f^{(l)}_i(v)$ of $f^{(l)}(v)$ is given by \begin{eqnarray} &&f^{(l)}_i(v)=\sum_{(i_1,\cdots,i_l)\in\Theta^{(l)}_i}(-2)^{l-1} v_{i_1}\cdots v_{i_l}= -\frac{1}{2}\!\!\!\!\sum_{(i_1,\cdots,i_l)\in\Theta^{(l)}_i}(1-2u_{i_1})\cdots(1-2u_{i_l}),\label{eq:hosum}\\
&&\Theta^{(l)}_i=\left\{(i_1,\cdots,i_l)~|~1\leq i_1<\cdots <i_l\leq n,~ i_k\neq i~(k=1,\cdots,l),~ g_{i}+g_{i_1}+\cdots+g_{i_l}=0 \right\}.\nonumber \end{eqnarray}
\end{thm}
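The index sets $\Theta^{(l)}_i$ in Theorem \ref{thm:thm} can be read off from the columns of $G$ by a direct search. The following Python sketch does this for a hypothetical $[3,2]$ single parity check code (our choice of example): its dual is the $[3,1]$ repetition code with $d(\mathcal{C}^*)=3$, so any two distinct columns of $G$ are linearly independent and $l=2$ applies.

```python
from itertools import combinations

def theta(G, i, l):
    """Theta^{(l)}_i: l-tuples of column indices (excluding i) whose columns
    sum with g_i to the zero vector over F_2."""
    k, n = len(G), len(G[0])
    col = lambda j: [G[r][j] for r in range(k)]
    def is_zero_sum(idx):
        s = col(i)
        for j in idx:
            s = [(a + b) % 2 for a, b in zip(s, col(j))]
        return not any(s)
    return [idx for idx in combinations([j for j in range(n) if j != i], l)
            if is_zero_sum(idx)]

# [3,2] single parity check code: columns (1,0), (0,1), (1,1)
G = [[1, 0, 1],
     [0, 1, 1]]
print([theta(G, i, 2) for i in range(3)])  # [[(1, 2)], [(0, 2)], [(0, 1)]]
```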
First of all, it follows that the larger the minimum distance of the dual code $\mathcal{C}^*$ is, the more precise an approximation of ML decoding with low computational complexity we obtain for the code $\mathcal{C}$ with generator matrix $G$. In particular, we can take $l=d(\mathcal{C^*})-1$.
Secondly, let us consider the meaning of the approximate map $\tilde{f}(u)$ and its approximate ML decoding $\tilde{\psi}$. We note that each value $u^0_i$ $(i=1,\cdots,n)$ in (\ref{eq:ip}) for a received word $y\in\mathbb{F}^n_2$ expresses the likelihood $P(y_i|x_i=1)$. Let us suppose $\Theta^{(l)}_i\neq\emptyset$. Then, from the definition of $u^0$, each term in the sum of (\ref{eq:hosum}) satisfies \[ -\frac{1}{2}(1-2u_{i_1})\cdots(1-2u_{i_l}) \left\{ \begin{array}{ll} <0, & {\rm if}~y_{i_1}+\cdots+y_{i_l}=0\\ >0, & {\rm if}~y_{i_1}+\cdots+y_{i_l}=1 \end{array}, \right. ~~~~(i_1,\cdots,i_l)\in\Theta^{(l)}_i. \] When $y_{i_1}+\cdots+y_{i_l}=0$ ($=1$, resp.), this term decreases (increases, resp.) the value of the initial likelihood $u^0_i$. In view of the decoding rule (\ref{eq:mlrule}), this pushes $\tilde{x}_i$ toward being decoded as $\tilde{x}_i=0$ ($=1$, resp.), and this corresponds precisely to the code structure $g_{i}+g_{i_1}+\cdots+g_{i_l}=0$ appearing in $\Theta^{(l)}_i$. In this sense, the approximate map ${\tilde f}(u)$ can be regarded as renewing the likelihoods (under suitable normalizations) based on the code structure, and the approximate ML decoding $\tilde{\psi}$ thresholds these renewed data. From this argument, it is easy to see that a received word $y\in\mathcal{C}$ is decoded into $y=\tilde{\psi}(y)\in\mathcal{C}$, i.e., a codeword is decoded into itself; of course, any reasonable decoder should have this property.
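For $l=2$ this renew-and-threshold interpretation can be sketched directly from the form of $f^{(l)}$ in Theorem \ref{thm:thm}. The following Python sketch implements the second-order approximate decoding $\tilde{\psi}$ for a hypothetical $[3,2]$ single parity check code with precomputed sets $\Theta^{(2)}_i$ (hardcoded here for this toy code, which is our illustrative choice; it has minimum distance 2, so the example shows the mechanics rather than error correction).

```python
def approx_ml_decode(theta_sets, y, eps):
    """2nd-order approximate ML: f~_i(u) = u_i - (1/2) * sum over Theta^{(2)}_i
    of (1 - 2u_{i1})(1 - 2u_{i2}); decode by thresholding at 1/2."""
    u = [1 - eps if yi else eps for yi in y]       # initial likelihoods u^0
    out = []
    for i, Ti in enumerate(theta_sets):
        fi = u[i] - 0.5 * sum((1 - 2 * u[a]) * (1 - 2 * u[b]) for a, b in Ti)
        out.append(1 if fi >= 0.5 else 0)
    return tuple(out)

# Theta^{(2)}_i for the [3,2] parity check code with columns (1,0),(0,1),(1,1)
theta_sets = [[(1, 2)], [(0, 2)], [(0, 1)]]
print(approx_ml_decode(theta_sets, (1, 0, 0), 0.1))  # (1, 0, 0)
```

For this received word the output agrees with exhaustive bitwise ML decoding, and a codeword such as $(1,0,1)$ is decoded into itself, as the text asserts.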
We also remark that Theorem \ref{thm:thm} can be regarded as a duality theorem in the following sense. Let $\mathcal{C}$ be a code whose generator (resp.\ parity check) matrix is $G$ (resp.\ $H$). As we explained in Section \ref{sec:lc}, the linear independence of the column vectors of $H$ controls the minimum distance $d(\mathcal{C})$, which is an encoding property. On the other hand, Theorem \ref{thm:thm} shows that the linear independence of the column vectors of $G$, which determines the dual minimum distance $d(\mathcal{C}^*)$, controls a decoding property of ML decoding in the sense of accuracy and computational complexity. Hence, we have a correspondence between the $H/G$ duality and the encoding/decoding duality. In Corollary \ref{corollary:ag}, we will revisit this duality viewpoint in the setting of geometric Reed-Solomon/Goppa codes.
\subsubsection{Decoding Performance}\label{sec:dprc} We now show the second result of this paper, concerning the decoding performance of the approximate ML decoding (\ref{eq:apmlrule}). For this purpose, let us first examine numerical simulations of the bit error probability on the BSC with transition probability $\epsilon=0.16$. We also show numerical results on BCH codes with Berlekamp-Massey decoding for comparison. The results are summarized in Figure \ref{fig:berrate}.
\begin{figure}
\caption{BSC with $\epsilon=0.16$. Horizontal axis: code rate ${r=k/n}$. Vertical axis: bit error probability. $\times$: random codes with the 3rd order approximate ML decoding. $+$: random codes with the 2nd order approximate ML decoding. $\ast$: BCH codes with Berlekamp-Massey decoding. }
\label{fig:berrate}
\caption{ Comparison of decoding performances for a BCH code with $k=7$ and $n=63$. Horizontal axis: transition probability $\epsilon$. Vertical axis: bit error probability. $\times$: 3rd order approximate ML decoding. $\ast$: Berlekamp-Massey decoding. $\Box$: ML decoding. }
\label{fig:berprob}
\end{figure}
Here, the horizontal axis is the code rate $r=k/n$, and the vertical axis is the bit error probability. The plots $+$ ($\times$, resp.) correspond to the 2nd (3rd, resp.) order approximate ML decoding (\ref{eq:apmlrule}), and $\ast$ shows the results on several BCH codes ($n=7,~15,~31,~63,~127,~255,~511$) with Berlekamp-Massey decoding. For the proposed method, we randomly construct a systematic generator matrix in such a way that each column except for the systematic part has the same weight (i.e., number of nonzero elements) $w$. To be more specific, the submatrix composed of the first $k$ columns of the generator matrix is set to be an identity matrix in order to make the code systematic, while the rest of the generator matrix is made up of $k\times k$ random matrices generated by random permutations of columns of a circulant matrix, whose first column is given by \begin{align} (\underbrace{1,\cdots 1}_{w},0,\cdots 0)^{t} \notag. \end{align} The reason for using random codings is that we want to investigate average behavior of the decoding performance, and, for this purpose, we do not impose unnecessary additional structure at encoding. The number of matrices added after the systematic part depends on the code rate, and the plot for each code rate corresponds to the best result obtained out of about 100 realizations of the generator matrix. We employed $w=2$ and $w=3$ for the generator matrices of the 3rd and the 2nd order approximate ML decoding, respectively. Moreover, the codeword length $n$ is taken to be at most 512. From Figure \ref{fig:berrate}, we can see that the proposed method with the 3rd order approximate ML decoding ($\times$) achieves better performance than that of BCH codes with Berlekamp-Massey decoding ($\ast$). It should also be noticed that the decoding performance improves substantially from the 2nd order to the 3rd order approximation. This improvement is natural, given the meaning of the Taylor expansion.
Next, let us directly compare the decoding performances of ML, approximate ML (3rd order), and Berlekamp-Massey decoding by applying them to the same BCH code ($k=7$, $n=63$). The bit error probability as a function of the transition probability is shown in Figure \ref{fig:berprob}. This figure clearly shows that the performance of the 3rd order approximate ML decoding is far better than that of Berlekamp-Massey decoding (e.g., an improvement of about two orders of magnitude at $\epsilon=0.1$). Furthermore, it should be noted that the 3rd order approximate ML decoding achieves a bit error performance very close to that of ML decoding. Although we have not rigorously analyzed the computational complexity of the proposed approach, the computation time of the approximate ML decoding (3rd order) is much shorter than that of ML decoding. This low computational complexity of the approximate ML decoding is explained as follows: nonzero higher order terms in (\ref{eq:hosum}) appear as a result of linear dependence relations among column vectors of $G$; however, such linear dependences are high-codimensional events, so most of the higher order terms vanish. As a result, the computational complexity of the approximate ML decoding, which is determined by the number of nonzero terms in the expansion, becomes small.
In conclusion, these numerical simulations suggest that the 3rd order approximate ML decoding approximates ML decoding very well with low computational complexity. We note that the encodings examined here are random codings. Hence, we can expect better bit error performance by introducing structure on the encodings suited to the proposed decoding rule, or by a much more exhaustive search over random codes. One possibility is to combine the construction with Theorem \ref{thm:thm}. On the other hand, it is also possible to consider suitable encoding rules from the viewpoint of dynamical systems via the rational maps (\ref{eq:map}). This issue is discussed in detail in Section \ref{sec:discussions}. In any case, finding suitable encoding structures for the proposed decoding is one of the important future problems.
The paper is organized as follows. In Section \ref{sec:ds}, we study properties of the rational map (\ref{eq:map}) from the viewpoint of discrete dynamical systems and show some relationships to coding theory. We also show that this discrete dynamical system is related to the continuous gradient dynamical system $du/dt={\rm grad}(\log H(u))$. Section \ref{sec:taylor} deals with the relationship between a generator matrix of a code and the Taylor expansion (\ref{eq:taylor}). The proof of Theorem \ref{thm:thm}, which is a direct consequence of Proposition \ref{pp:jacobi} and Proposition \ref{pp:fl0}, is given in this section. In Section \ref{sec:ag}, we apply Theorem \ref{thm:thm} to geometric Reed-Solomon/Goppa codes in Corollary \ref{corollary:ag}, with a simple example using the Hermitian curve. Finally, we discuss future problems on this subject at the intersection of dynamical systems and coding theory.
\section{Rational Maps}\label{sec:ds} In this section, we discuss ML decoding from a dynamical systems viewpoint. Let $G$ be a $k\times n$ generator matrix for a code $\mathcal{C}$. We begin by recording the following easy consequence of linearity, which will be used frequently throughout the paper.
\begin{lemma}\label{lemma:gi} For $i\in\{1,\cdots,n\}$, let us denote subcodes of $\mathcal{C}$ with $x_i=0$ and $x_i=1$ respectively by \[
\mathcal{C}(x_i=0)=\left\{x\in \mathcal{C}~|~x_i=0\right\},~~~~~
\mathcal{C}(x_i=1)=\left\{x\in \mathcal{C}~|~x_i=1\right\}. \] Then $\sharp \mathcal{C}(x_i=0)=\sharp \mathcal{C}(x_i=1)=2^{k-1}$. \end{lemma} \begin{prf}{\rm From the assumption on the generator matrix, we have $g_i\neq 0$; let $l>0$ be the number of $1$'s in $g_i$. Then we have \[ \sharp \mathcal{C}(x_i=0)= \sum^l_{\substack{ j=0 \\ j:{\rm even} }}{_l}C_j \times 2^{k-l},~~~~ \sharp \mathcal{C}(x_i=1)= \sum^l_{\substack{ j=0 \\ j:{\rm odd} }}{_l}C_j \times 2^{k-l}, \] where the symbol ${_l}C_j$ denotes the number of combinations of $j$ elements taken from $l$ elements. These two sums are equal because \[ 0=(1-1)^l=\sum^l_{j=0}{_l}C_j(-1)^j= \sum^l_{\substack{ j=0 \\ j:{\rm even} }}{_l}C_j - \sum^l_{\substack{ j=0 \\ j:{\rm odd} }}{_l}C_j. \] Therefore $\sharp \mathcal{C}(x_i=0)=\sharp \mathcal{C}(x_i=1)=2^{k-1}$. }\qed \end{prf}
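Lemma \ref{lemma:gi} is easy to confirm numerically for any small generator matrix by enumerating all $2^k$ messages. A brief Python check, using toy matrices of our choosing:

```python
from itertools import product

def bit_counts(G, i):
    """Return (#C(x_i=0), #C(x_i=1)) by enumerating all 2^k messages."""
    k = len(G)
    c = [0, 0]
    for m in product([0, 1], repeat=k):
        xi = sum(m[r] * G[r][i] for r in range(k)) % 2
        c[xi] += 1
    return tuple(c)

G = [[1, 0, 1],
     [0, 1, 1]]                                          # k = 2, nonzero columns
assert all(bit_counts(G, i) == (2, 2) for i in range(3))  # 2^{k-1} each
```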
Next, let us characterize the codewords in $\mathcal{C}$ and the non-codewords in $\mathbb{F}^n_2\setminus \mathcal{C}$ by means of the rational map (\ref{eq:map}). It should be remarked that a codeword polynomial $F_x(u)$ for $u=(u_1,\cdots,u_n)$ with $u_i\in\partial I=\{0,1\},i=1,\cdots,n$, takes the value \[ F_x(u)=\left\{\begin{array}{ll} 1, & u=x,\\ 0, & u\neq x. \end{array} \right. \] Here we identify a point $u=(u_1,\cdots,u_n)\in I^n,$ ${u_i\in\partial I, i=1,\cdots,n}$, with a point $u\in\mathbb{F}^n_2$ via the natural inclusion $\mathbb{F}_2\hookrightarrow I$; this convention will be used frequently in the paper. We also call a point $u\in I^n$ a pole of the rational map (\ref{eq:map}) if $H(u)=0$. The boundary and the interior of a set $A$ are denoted by $\partial A$ and ${\rm Int}(A)$, respectively.
\begin{pp}\label{pp:fp} The following statements hold for the rational map {\rm (\ref{eq:map})}: \begin{enumerate} \item $p:=(1/2,\cdots,1/2)\in I^n$ is a fixed point. \item Let $u\in\mathbb{F}^n_2$. Then $u$ is a fixed point if and only if $u\in\mathcal{C}$. \item The set of poles is given by \[ \hspace{-0.5cm}S:=\left\{ u\in \partial (I^n)~
\begin{array}{|l} there~is~no~codeword~x\in\mathcal{C}~with~(x_{i_1},\cdots,x_{i_l})=(u_{i_1},\cdots,u_{i_l}),\\ where~u_{i_1},\cdots,u_{i_l}\in \mathbb{F}_2,~and~0<u_i<1,~i\neq i_1,\cdots,i_l \end{array} \right\}. \] In particular, $\mathbb{F}^n_2\setminus\mathcal{C}\subset S$. \end{enumerate} \end{pp}
\begin{prf}{\rm For the statement 1, let us note that $F_x(p)=(1/2)^n$ for each codeword $x\in\mathcal{C}$. Then Lemma \ref{lemma:gi} leads to $f_i(p)=1/2, i=1,\cdots,n.$
For the statement 2, let us suppose $u\in\mathcal{C}$. Then, from the remark preceding this proposition, $f_i(u)=1$ (resp.\ $0$) if $u_i=1$ (resp.\ $0$) for each $i$. Hence $f(u)=u$. On the other hand, if $u\in\mathbb{F}^n_2\setminus\mathcal{C}$, then $H(u)=0$. This means that $u$ is a pole and cannot be a fixed point.
For the statement 3, let us first note that $u\in S$ if and only if $F_x(u)=0$ for any codeword $x\in\mathcal{C}$, because $F_x(u)\geq 0$ for $u\in I^n$. Therefore $S\subset \partial (I^n)$, since $F_x(u)>0$ for $u\in{\rm Int}(I^n)$.
Let $u\in\partial (I^n)$ be such that $u_{i_1},\cdots,u_{i_l}\in\mathbb{F}_2$ and $0<u_i<1$ for $i\neq i_1,\cdots,i_l$. If there exists a codeword $x\in\mathcal{C}$ with $(x_{i_1},\cdots,x_{i_l})=(u_{i_1},\cdots,u_{i_l})$, then its codeword polynomial satisfies $F_x(u)>0$, so $u\notin S$. On the other hand, if there is no such codeword, then $H(u)=0$, which means $u\in S$. }\qed \end{prf}
From this proposition, the rational map (\ref{eq:map}) carries the information not only of all codewords $\mathcal{C}$, as fixed points, but also of the non-codewords $\mathbb{F}^n_2\setminus\mathcal{C}$, as poles.
We call these fixed points codeword fixed points. The following proposition shows that all codeword fixed points are stable. \begin{pp}\label{pp:stable} Suppose that the parity check matrix has no zero column vectors, i.e., there exists no codeword of weight 1. Let $u$ be a codeword fixed point. Then the Jacobi matrix of the rational map {\rm (\ref{eq:map})} at $u$ is the zero matrix. Hence, $u$ is a stable fixed point. \end{pp} \begin{prf}{\rm Let us denote the $i$-th element of the rational map (\ref{eq:map}) by \[ f_i(u)=\frac{I_i(u)}{H(u)} \] and, for notational simplicity, denote the derivatives of $I_i$ and $H$ with respect to $u_j$ by $I^j_i$ and $H^j$, respectively. In what follows, we will also use these notations for higher order derivatives in a similar way (e.g., $H^{ij}=\frac{\partial^2 H}{\partial u_i \partial u_j}$). Then the derivative $\partial f_i/\partial u_j(u)$ is given by \[ \frac{\partial f_i}{\partial u_j}(u)=\left\{ I^j_i(u)H(u)-I_i(u)H^j(u) \right\}/H^2(u). \] Since $u\in\mathcal{C}$, we have $H(u)=1$. Let us consider the two cases $u_i=0$ and $u_i=1$ separately.
For $u_i=0$, we have $I_i(u)=0$. Similarly, we have $I^j_i(u)=0$ if $i\neq j$. Hence, $\partial f_i/\partial u_j(u)=0$ in this case. On the other hand, if $i=j$, then by the assumption on the parity check matrix there is no codeword $\tilde{u}\in\mathcal{C}$ differing from $u$ only at the $i$-th bit, i.e., with $u_j=\tilde{u}_j$ for $j\neq i$ but $u_i\neq \tilde{u}_i$. Therefore, $I^j_i(u)=0$, and it again leads to $\partial f_i/\partial u_j(u)=0$. The case $u_i=1$ can be proven in a similar way. }\qed \end{prf}
Let us next discuss properties of the fixed point $p=(1/2,\cdots,1/2)$. \begin{pp}\label{pp:jacobi} Let $G=(g_1 \cdots g_n)$ be a generator matrix. Then, the Jacobi matrix $J$ of the rational map {\rm (\ref{eq:map})} at $p$ is determined by \[ \begin{array}{ll} J_{ij}=1, & {\rm if}~g_i=g_j,\\ J_{ij}=0, & {\rm if}~g_i\neq g_j\\ \end{array} \] for all $i,j=1,\cdots,n$. \end{pp}
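Proposition \ref{pp:jacobi} can be checked numerically by central differences at $p$. The following Python sketch does so for two toy codes of our choosing: a $[3,1]$ repetition code, where all columns of $G$ coincide and $J$ should be the all-ones matrix, and a $[3,2]$ single parity check code, where the columns are pairwise distinct and $J$ should be the identity.

```python
def F(x, u):
    """Codeword polynomial F_x(u)."""
    r = 1.0
    for xi, ui in zip(x, u):
        r *= ui if xi == 1 else 1.0 - ui
    return r

def f_i(C, u, i):
    """i-th coordinate of the rational map."""
    H = sum(F(x, u) for x in C)
    return sum(F(x, u) for x in C if x[i] == 1) / H

def jacobian_at_p(C, n, h=1e-6):
    """Central-difference Jacobian of the rational map at p = (1/2,...,1/2)."""
    J = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            up = [0.5] * n; up[j] += h
            um = [0.5] * n; um[j] -= h
            J[i][j] = (f_i(C, up, i) - f_i(C, um, i)) / (2 * h)
    return J

# Repetition code: all columns of G = (1 1 1) coincide, so J_{ij} = 1 for all i, j.
J = jacobian_at_p({(0, 0, 0), (1, 1, 1)}, 3)
assert all(abs(J[i][j] - 1) < 1e-4 for i in range(3) for j in range(3))
# Parity check code: columns (1,0), (0,1), (1,1) are distinct, so J = identity.
J2 = jacobian_at_p({(0, 0, 0), (1, 0, 1), (0, 1, 1), (1, 1, 0)}, 3)
assert all(abs(J2[i][j] - (1 if i == j else 0)) < 1e-4
           for i in range(3) for j in range(3))
```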
For the proof of this proposition, we need the following lemma.
\begin{lemma}\label{lemma:gij} For $i,j\in\{1,\cdots,n\}$ with $i\neq j$, let us consider the following subcodes of $\mathcal{C}$ \[ \mathcal{C}(\substack{x_i=1\\x_j=0})=\left\{
x\in\mathcal{C}~|~x_i=1,x_j=0 \right\},~~~~ \mathcal{C}(\substack{x_i=1\\x_j=1})=\left\{
x\in\mathcal{C}~|~x_i=1,x_j=1 \right\}. \] Then the following hold: \begin{enumerate} \item if $g_i=g_j$, then $\sharp \mathcal{C}(\substack{x_i=1\\x_j=0})=0,~\sharp \mathcal{C}(\substack{x_i=1\\x_j=1})=2^{k-1}.$ \item if $g_i\neq g_j$, then $\sharp \mathcal{C}(\substack{x_i=1\\x_j=0})=\sharp \mathcal{C}(\substack{x_i=1\\x_j=1})=2^{k-2}.$ \end{enumerate} \end{lemma} \begin{prf}{\rm The statement is trivial when $g_i=g_j$, so we suppose $g_i\neq g_j$. In the following we adopt arithmetic modulo 2 for elements in $\mathbb{F}_2$. We can express $g_i$, permuting rows if necessary, as \[ g_i=(\underbrace{1\cdots 1}_{\alpha}~0\cdots 0)^T. \] We only deal with the case $\alpha<k$; the modification to the case $\alpha=k$ is trivial. Now we have the following two cases:
\\
case I: there exists $l>\alpha$ such that $g_{lj}=1$,\\ case I\!I: not case I, i.e., $g_{lj}=0$ for all $l>\alpha$.
\\
In case I, let us fix $(m_1\cdots m_\alpha)$ in a message $m=(m_1\cdots m_k)$ with $m_1+\cdots+m_{\alpha}=1$, which corresponds to $x_i=1$, and consider the numbers of codewords with $x_j=0$ and $x_j=1$ arising from the remaining message bits $(m_{\alpha+1} \cdots m_k)$. Then, from the assumption, there exists at least one nonzero element in $g_{\alpha+1,j},\cdots,g_{k,j}$. Hence, the same argument as in Lemma \ref{lemma:gi} shows that the numbers of codewords with $x_j=0$ and $x_j=1$ under a fixed $(m_1,\cdots,m_{\alpha})$ are the same. By considering all the possibilities of $(m_1\cdots m_\alpha)$ with $m_1+\cdots+m_\alpha=1$, we obtain $\sharp \mathcal{C}(\substack{x_i=1\\x_j=0})=\sharp \mathcal{C}(\substack{x_i=1\\x_j=1})=2^{k-2}.$
Next, let us consider case I\!I. Again, by using a permutation if necessary, we have the following expressions \[ g_i=(\underbrace{1\cdots 1}_{\alpha_1}~\underbrace{1\cdots 1}_{\alpha_2}~0\cdots 0)^T,~~~ g_j=(\underbrace{1\cdots 1}_{\alpha_1}~0\cdots 0)^T, \] where $\alpha=\alpha_1+\alpha_2$ and $\alpha_2>0$ because of $g_i\neq g_j$. Then we have \begin{eqnarray*} \sharp \mathcal{C}(\substack{x_i=1\\x_j=0})&=& \sharp\left\{m_1+\cdots+m_{\alpha_1+\alpha_2}=1~{\rm and}~ m_1+\cdots+m_{\alpha_1}=0 \right\}\times 2^{k-\alpha}\\ &=&\sharp\left\{ m_{\alpha_1+1}+\cdots+m_{\alpha_1+\alpha_2}=1~{\rm and}~ m_1+\cdots+m_{\alpha_1}=0 \right\}\times 2^{k-\alpha},\\ \sharp \mathcal{C}(\substack{x_i=1\\x_j=1})&=& \sharp\left\{m_1+\cdots+m_{\alpha_1+\alpha_2}=1~{\rm and}~ m_1+\cdots+m_{\alpha_1}=1 \right\}\times 2^{k-\alpha}\\ &=&\sharp\left\{ m_{\alpha_1+1}+\cdots+m_{\alpha_1+\alpha_2}=0~{\rm and}~ m_1+\cdots+m_{\alpha_1}=1 \right\}\times 2^{k-\alpha}. \end{eqnarray*} Then the same argument as in Lemma \ref{lemma:gi} implies $\sharp \mathcal{C}(\substack{x_i=1\\x_j=0})=\sharp \mathcal{C}(\substack{x_i=1\\x_j=1})=2^{k-2}.$ }\qed \end{prf}
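The counts in Lemma \ref{lemma:gij} can be confirmed by brute-force enumeration. The following Python sketch uses a small hypothetical $3\times 5$ generator matrix (not taken from the text) whose first two columns coincide:

```python
from itertools import product

# Hypothetical 3x5 generator matrix over F_2 (not from the text);
# its columns are g_1 = (1,0,0)^T = g_2, g_3 = (0,1,0)^T, etc.
G = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1]]
k, n = len(G), len(G[0])

# All 2^k codewords x = mG (mod 2).
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def count(i, j, bj):
    """Number of codewords with x_i = 1 and x_j = bj (0-based indices)."""
    return sum(1 for x in codewords if x[i] == 1 and x[j] == bj)

# g_1 = g_2: counts (0, 2^{k-1}) as in item 1 of the lemma.
print(count(0, 1, 0), count(0, 1, 1))   # 0 4
# g_1 != g_3: counts (2^{k-2}, 2^{k-2}) as in item 2.
print(count(0, 2, 0), count(0, 2, 1))   # 2 2
```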
{\underline {\it Proof of Proposition \ref{pp:jacobi}.}}~~~ The $(i,j)$ element in the Jacobi matrix $J$ is given by \[ \frac{\partial f_i}{\partial u_j}(p)=\left\{ I^j_i(p)H(p)-I_i(p)H^j(p) \right\}/H^2(p). \] From Lemma \ref{lemma:gi}, $H^j(p)=0$ because \[ H^j(p)=\left(\frac{1}{2}\right)^{n-1}\!\!\!\!\!\!\times 2^{k-1}-\left(\frac{1}{2}\right)^{n-1}\!\!\!\!\!\!\times 2^{k-1}=0, \] where the first term comes from the codewords with $x_j=1$ and the second term comes from the codewords with $x_j=0$. It is also easy to observe that $H(p)=(1/2)^n\times 2^k=2^{-n+k}$, and $I^i_i(p)=(1/2)^{n-1}\times 2^{k-1}=2^{-n+k}$. Therefore the diagonal elements are $J_{ii}=1,i=1,\cdots,n$.
Next, let us consider the case $i\neq j$. In this case, if we have $g_i=g_j$, then, from Lemma \ref{lemma:gij}, $I^j_i(p)=(1/2)^{n-1}\times 2^{k-1}=2^{-n+k}$. On the other hand, if we have $g_i\neq g_j$, then $I^j_i(p)=(1/2)^{n-1}\times2^{k-2}-(1/2)^{n-1}\times2^{k-2}=0$. This concludes the proof of Proposition \ref{pp:jacobi}. \qed
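As a sanity check on Proposition \ref{pp:jacobi}, the Jacobian entries can be computed directly from the exact partial derivatives of the multilinear functions $I_i$ and $H$ at $p$, using the same quotient-rule formula as in the proof. A sketch with a small hypothetical $3\times 5$ generator matrix (not from the text):

```python
from itertools import product

# Hypothetical 3x5 generator matrix (not from the text); g_1 = g_2.
G = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1]]
k, n = len(G), len(G[0])
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def term(x, idx):
    """Factor of F_x at p, with rho_j replaced by its derivative (+/-1)
    for j in idx; valid because H and I_i are multilinear in the u_j."""
    t = 1.0
    for j in range(n):
        t *= (1.0 if x[j] == 1 else -1.0) if j in idx else 0.5
    return t

def H(idx=()):
    return sum(term(x, idx) for x in codewords)

def I(i, idx=()):
    return sum(term(x, idx) for x in codewords if x[i] == 1)

# J_{ij} = (I_i^j(p) H(p) - I_i(p) H^j(p)) / H(p)^2.
Hp = H()
J = [[(I(i, (j,)) * Hp - I(i) * H((j,))) / Hp**2 for j in range(n)]
     for i in range(n)]

cols = [tuple(G[a][b] for a in range(k)) for b in range(n)]
ok = all(J[i][j] == (1.0 if cols[i] == cols[j] else 0.0)
         for i in range(n) for j in range(n))
print(ok)   # True: J_{ij} = 1 iff g_i = g_j
```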
Two corollaries follow from Proposition \ref{pp:jacobi}; they characterize the eigenvalues and the eigenvectors of the Jacobi matrix $J$, and thereby determine the stability and the stable/unstable subspaces of $p$. To this end, let us denote by $\mathcal{G}_J$ the graph whose adjacency matrix is $J$. Namely, the nodes of $\mathcal{G}_J$ are $1,\cdots,n$, and an undirected edge $(i,j)$ appears in $\mathcal{G}_J$ if and only if $J_{ij}=1$. \begin{corollary}\label{corollary:eigenvalue} Suppose the graph $\mathcal{G}_J$ is decomposed into $l$ connected components \[ \mathcal{G}_J=A_1\cup \cdots \cup A_l,~~~A_i\cap A_j=\emptyset~{\rm if}~i\neq j. \] Let $n_i$ be the number of nodes in the component $A_i$, $i=1,\cdots,l$. Then all the eigenvalues of $J$ are given by \[ n_1, n_2,\cdots, n_l, 0, \] where the eigenvalues $n_1,\cdots,n_l$ are simple and the $0$ eigenvalue has multiplicity $(n_1+\cdots+n_l-l)$. \end{corollary} \begin{prf}{\rm From Proposition \ref{pp:jacobi}, any two nodes in the same connected component have an edge between them. Hence, it is possible to transform $J$ into the following block diagonal matrix \begin{equation} E^{-1}JE=\left(\begin{array}{cccc} B_1 & 0 & \cdots & 0\\ 0 & B_2 & \cdots &0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & B_l \end{array} \right), \label{eq:elementarymatrix} \end{equation} where $E$ is determined by compositions of column switching elementary matrices, and $B_i$ is an $n_i\times n_i$ matrix all of whose elements are 1. The statement of the corollary follows immediately. }\qed \end{prf}
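The eigenstructure described in Corollary \ref{corollary:eigenvalue} can be checked directly on a block-diagonal matrix of all-ones blocks. A minimal sketch with hypothetical component sizes $(n_1,n_2)=(3,2)$, using only matrix-vector products:

```python
# Block-diagonal J with all-ones blocks, for hypothetical component
# sizes (n_1, n_2) = (3, 2) after the column permutation in the text.
sizes = [3, 2]
n = sum(sizes)
J = [[0] * n for _ in range(n)]
off = 0
for s in sizes:
    for a in range(off, off + s):
        for b in range(off, off + s):
            J[a][b] = 1
    off += s

def mat_vec(M, v):
    return [sum(M[a][b] * v[b] for b in range(len(v))) for a in range(len(v))]

# The all-ones vector on a component is an eigenvector with eigenvalue n_i.
v = [1, 1, 1, 0, 0]
print(mat_vec(J, v))   # [3, 3, 3, 0, 0]
# Successive-difference vectors within a component lie in the kernel.
w = [1, -1, 0, 0, 0]
print(mat_vec(J, w))   # [0, 0, 0, 0, 0]
```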
From now on, we treat a Jacobi matrix of the block diagonal form (\ref{eq:elementarymatrix}). This causes no loss of generality since, if necessary, we can appropriately permute columns of the original generator matrix in advance. Let us list the eigenvalues of $J$ derived in Corollary \ref{corollary:eigenvalue} as \begin{equation} n_1,0,\cdots,0,\cdots,n_i,0,\cdots,0,\cdots,n_l,0,\cdots,0, \label{eq:eigenvalue} \end{equation} where the successive $0,\cdots, 0$ after each $n_i$ has $(n_i-1)$ elements. In case of $n_i=1$, we omit the successive $0,\cdots,0$ for $n_i$ (i.e., the list continues $n_i, n_{i+1}, 0, \cdots$).
\begin{corollary}\label{corollary:eigenvector} The Jacobi matrix $J$ is diagonalizable. Furthermore, the corresponding eigenvectors \[ p^{(1)}_1,\cdots,p^{(1)}_{n_1},\cdots,p^{(i)}_1\cdots,p^{(i)}_{n_i},\cdots,p^{(l)}_1,\cdots,p^{(l)}_{n_l}, \] for {\rm (\ref{eq:eigenvalue})} under this ordering are given by the following \[ p^{(i)}_1=\left(\begin{array}{c} \bf{0}\\ q^{(i)}_1\\ \bf{0} \end{array} \right), \cdots, p^{(i)}_{n_i}=\left(\begin{array}{c} \bf{0}\\ q^{(i)}_{n_i}\\ \bf{0} \end{array} \right), \] where \[ q^{(i)}_1=\left(\begin{array}{r} 1\\ \vdots\\ 1 \end{array} \right),~ q^{(i)}_2=\left(\begin{array}{r} 1\\ -1\\ 0\\ \vdots\\ 0 \end{array} \right),~ q^{(i)}_3=\left(\begin{array}{r} 0\\ 1\\ -1\\ \vdots\\ 0 \end{array} \right), \cdots, q^{(i)}_{n_i}=\left(\begin{array}{r} 0\\ \vdots\\ 0\\ 1\\ -1 \end{array} \right). \] Here the vectors $q^{(i)}_1\cdots q^{(i)}_{n_i}$ have $n_i$ elements and the first element starts at the $(n_1+\cdots+n_{i-1}+1)$-th row. The bold type ${\bf 0}$ expresses that the remaining elements in $p^{(i)}_1,\cdots,p^{(i)}_{n_i}$ are $0$. In case of $n_i=1$, we only have $p^{(i)}_1$ {\rm (}with $q^{(i)}_1=(1)${\rm )}. \end{corollary} \begin{prf}{\rm This is immediate from Corollary \ref{corollary:eigenvalue}. }\qed \end{prf}
We finally mention a relationship to a continuous gradient dynamical system. Let us denote by $F^{(i)}_{x}(u)$ a polynomial obtained by removing $\rho_i(u_i)$ from a codeword polynomial $F_x(u)$ of $x\in\mathcal{C}$. By using this notation, the map (\ref{eq:map}) can also be described as \[ u'_i=f_i(u)=\frac{\sum_{\substack{x\in\mathcal{C}\\x_i=1}}F_x(u)}{\sum_{\substack{x\in\mathcal{C}\\x_i=0}}F_x(u)+\sum_{\substack{x\in\mathcal{C}\\x_i=1}}F_x(u)}= \frac{\sum_{\substack{x\in\mathcal{C}\\x_i=1}}u_iF^{(i)}_x(u)}{\sum_{\substack{x\in\mathcal{C}\\x_i=0}}(1-u_i)F^{(i)}_x(u)+\sum_{\substack{x\in\mathcal{C}\\x_i=1}}u_iF^{(i)}_x(u)}. \] Then it follows that \begin{eqnarray*} u'_i-u_i&=&\frac{1}{H(u)}\left( \sum_{\substack{x\in\mathcal{C}\\x_i=1}}u_iF^{(i)}_x(u) -\sum_{\substack{x\in\mathcal{C}\\x_i=0}}u_i(1-u_i)F^{(i)}_x(u) -\sum_{\substack{x\in\mathcal{C}\\x_i=1}}u^2_iF^{(i)}_x(u) \right)\\ &=&\frac{u_i(1-u_i)}{H(u)}\left( \sum_{\substack{x\in\mathcal{C}\\x_i=1}}F^{(i)}_x(u)-\sum_{\substack{x\in\mathcal{C}\\x_i=0}}F^{(i)}_x(u) \right)\\ &=&u_i(1-u_i)\frac{\partial}{\partial u_i}\left(\log H(u)\right). \end{eqnarray*} This proves the following proposition. \begin{pp}\label{pp:gradient} The rational map {\rm (\ref{eq:map})} maps a point $u\in {\rm Int}(I^n)$ to the direction of ${\rm grad}(\log H(u))$ with a contraction rate $u_i(1-u_i)$ for each element $i=1,\cdots,n$. In particular, $u\in {\rm Int}(I^n)$ is a fixed point of {\rm (\ref{eq:map})} if and only if it is a fixed point of the continuous gradient dynamical system $du/dt={\rm grad}(\log H(u))$. \end{pp}
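The gradient identity $u'_i-u_i=u_i(1-u_i)\,\partial(\log H)/\partial u_i$ can be verified numerically at an interior point. A minimal sketch, assuming a hypothetical $[3,2]$ code (not from the text) and computing the derivative of $\log H$ exactly from the multilinear structure of $H$:

```python
from itertools import product

# Hypothetical [3,2] code (not from the text): G = [[1,0,1],[0,1,1]].
G = [[1, 0, 1], [0, 1, 1]]
k, n = len(G), len(G[0])
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def F(x, u):
    """Codeword polynomial F_x(u) = prod_i rho_i(u_i)."""
    p = 1.0
    for xi, ui in zip(x, u):
        p *= ui if xi == 1 else 1.0 - ui
    return p

def H(u):
    return sum(F(x, u) for x in codewords)

def f(u, i):
    """The rational map u'_i = f_i(u)."""
    return sum(F(x, u) for x in codewords if x[i] == 1) / H(u)

def dlogH(u, i):
    """Exact partial derivative of log H with respect to u_i."""
    s = 0.0
    for x in codewords:
        Fi = F(x, u) / (u[i] if x[i] == 1 else 1.0 - u[i])  # F^{(i)}_x(u)
        s += Fi if x[i] == 1 else -Fi
    return s / H(u)

u = [0.3, 0.6, 0.8]          # any interior point of I^n
ok = all(abs((f(u, i) - u[i]) - u[i] * (1.0 - u[i]) * dlogH(u, i)) < 1e-12
         for i in range(n))
print(ok)   # True
```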
\section{Generator Matrix and Taylor Expansion}\label{sec:taylor}
Next, we study the relationship between the higher order terms in (\ref{eq:taylor}) and the generator matrix $G$, and prove Theorem \ref{thm:thm}. For this purpose, the key proposition is the following. \begin{pp} \label{pp:fl0} Let $l\geq 2$. If any distinct $(l+1)$ column vectors of $G$ are linearly independent, then $f^{(l)}(v)=0$. \end{pp}
Let us denote a higher order derivative of the $i$-th element $f_{i}$ with respect to variables $u_{i_1}, \cdots, u_{i_l}$ by \[ f^{i_1\cdots i_l}_{i}(p):=\frac{\partial^l f_{i}}{\partial u_{i_1}\cdots \partial u_{i_l}}(p). \] For the proof of Proposition \ref{pp:fl0}, we need to study the higher order derivatives $H^{i_1\cdots i_l}(p)$ and $I^{i_1\cdots i_l}_{i}(p)$.
Let us first focus on the higher order derivatives $H^{i_1\cdots i_l}(p)$. We begin with the following observation, which characterizes the numbers of subcodes by means of column vectors of $G$. Note that we adopt arithmetic $mod~2$ for elements in $\mathbb{F}_2$.
\begin{lemma}\label{lemma:C01} Suppose $i_1,\cdots,i_l\in\{1,\cdots,n\},~l\geq 2,$ are distinct indices, and let \begin{eqnarray*}
&&\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0)=\left\{x\in \mathcal{C}~|~x_{i_1}+\cdots+x_{i_l}=0\right\},\\
&&\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1)=\left\{x\in \mathcal{C}~|~x_{i_1}+\cdots+x_{i_l}=1\right\} \end{eqnarray*} be subcodes in $\mathcal{C}$. Then the following holds \begin{enumerate} \item if $g_{i_1}+\cdots+g_{i_l}\neq 0$, then $\sharp\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0)=\sharp\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1)=2^{k-1}$. \item if $g_{i_1}+\cdots+g_{i_l}= 0$, then $\sharp\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0)=2^k$~and~ ${\sharp\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1)=0}$. \end{enumerate} \end{lemma}
\begin{prf}{\rm By a suitable bit permutation, if necessary, the sum of $g_{i_1},\cdots,g_{i_l}$ can be expressed as \[ g_{i_1}+\cdots+g_{i_l}=(\underbrace{1\cdots 1}_{\alpha}~0\cdots 0)^T. \] Then an original message $(m_1 \cdots m_k)$ and its codeword $(x_1 \cdots x_n)$ satisfy the following \[ x_{i_1}+\cdots+x_{i_l}=(m_1~\cdots~m_k)\cdot(g_{i_1}+\cdots+g_{i_l}). \]
In case 1 ($\alpha\neq 0$), this gives $x_{i_1}+\cdots+x_{i_l}=m_1+\cdots+m_{\alpha}$, so the conclusion follows from the same argument as in Lemma \ref{lemma:gi}. Case 2 is trivial from the above expression of $x_{i_1}+\cdots+x_{i_l}$. }\qed \end{prf}
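Lemma \ref{lemma:C01} can again be confirmed by enumeration. A sketch with the same kind of small hypothetical generator matrix as before (not from the text), where $g_1=g_2$ gives $g_1+g_2=0$:

```python
from itertools import product

# Hypothetical 3x5 generator matrix (not from the text); g_1 = g_2,
# so g_1 + g_2 = 0 over F_2, while g_1 + g_3 != 0.
G = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1]]
k, n = len(G), len(G[0])
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def count_parity(idx, b):
    """Number of codewords with x_{i_1} + ... + x_{i_l} = b (mod 2)."""
    return sum(1 for x in codewords if sum(x[i] for i in idx) % 2 == b)

# Case 1 (g_1 + g_3 != 0): the two subcodes each have 2^{k-1} elements.
print(count_parity([0, 2], 0), count_parity([0, 2], 1))   # 4 4
# Case 2 (g_1 + g_2 = 0): all 2^k codewords have parity 0.
print(count_parity([0, 1], 0), count_parity([0, 1], 1))   # 8 0
```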
The following lemma classifies the value $H^{i_1\cdots i_l}(p)$ based on the column vectors of $G$.
\begin{lemma}\label{lemma:H} Let $l\geq 2$. Then $H^{i_1\cdots i_l}(p)=0$ if either of \begin{enumerate} \item some indices in $i_1,\cdots,i_l$ coincide \item $g_{i_1}+\cdots+g_{i_l}\neq 0$ \end{enumerate} is satisfied. Otherwise, that is, if $i_1,\cdots,i_l$ are all distinct and $g_{i_1}+\cdots+g_{i_l}=0$, then \[ H^{i_1\cdots i_l}(p)=(-1)^l 2^{-n+k+l}. \] \end{lemma}
\begin{prf}{\rm The condition 1 immediately implies $H^{i_1\cdots i_l}(p)=0$ since the degree of each variable $u_i$ in $H(u)$ is 1. Hence we assume all the indices are distinct.
Let us define the following subcodes \begin{eqnarray*}
&&Z^{\rm odd}_{i_1\cdots i_l}:=\{x\in \mathcal{C}~|~{\rm the~number~of~0~in}~x_{i_1},\cdots,x_{i_l}~{\rm is~odd}\}\\
&&Z^{\rm even}_{i_1\cdots i_l}:=\{x\in \mathcal{C}~|~{\rm the~number~of~0~in}~x_{i_1},\cdots,x_{i_l}~{\rm is~even}\}. \end{eqnarray*} Then $H^{i_1\cdots i_l}(p)$ can be expressed as \[ H^{i_1\cdots i_l}(p)=(1/2)^{n-l}(\sharp Z^{\rm even}_{i_1\cdots i_l}-\sharp Z^{\rm odd}_{i_1\cdots i_l}). \] Suppose $g_{i_1}+\cdots+g_{i_l}\neq 0$. Then, by Lemma \ref{lemma:C01}, we have $\sharp \mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0)=\sharp \mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1)$. On the other hand, when $l$ is odd (or even, resp.), \begin{eqnarray*} &&Z^{\rm odd}_{i_1\cdots i_l}=\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0) ~({\rm or}~ =\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1), {\rm resp.}),\\ &&Z^{\rm even}_{i_1\cdots i_l}=\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=1) ~({\rm or}~=\mathcal{C}(x_{i_1}+\cdots+x_{i_l}=0), {\rm resp.}). \end{eqnarray*} Therefore we conclude $H^{i_1\cdots i_l}(p)=0$. The statement for $g_{i_1}+\cdots+g_{i_l}=0$ is similarly derived from Lemma \ref{lemma:C01}. }\qed \end{prf}
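The values in Lemma \ref{lemma:H} can be checked numerically: since $H$ is multilinear, the mixed partial with respect to distinct variables is obtained by replacing each differentiated factor $\rho_j$ by its derivative $\pm 1$. A sketch with the hypothetical $3\times 5$ generator matrix used above (not from the text):

```python
from itertools import product

# Hypothetical 3x5 generator matrix (not from the text); g_1 + g_2 = 0.
G = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1]]
k, n = len(G), len(G[0])
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def dH_at_p(idx):
    """Mixed partial of H w.r.t. the distinct u_j, j in idx, at p.
    Since H is multilinear, differentiating replaces rho_j by +/-1."""
    total = 0.0
    for x in codewords:
        term = 1.0
        for j in range(n):
            term *= (1.0 if x[j] == 1 else -1.0) if j in idx else 0.5
        total += term
    return total

# l = 2, g_1 + g_3 != 0: the lemma gives 0.
print(dH_at_p({0, 2}))   # 0.0
# l = 2, g_1 + g_2 = 0: the lemma gives (-1)^l 2^{-n+k+l} = 2^0 = 1.
print(dH_at_p({0, 1}))   # 1.0
```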
Next, we classify the value $I^{i_1\cdots i_l}_{i}(p)$.
\begin{lemma}\label{lemma:iC01} Suppose $i, i_1,\cdots,i_l\in\{1,\cdots,n\}, l\geq 2,$ are distinct indices and let us define two subcodes in $\mathcal{C}$ by \begin{eqnarray*}
&&\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right):=\left\{x\in\mathcal{C}~|~x_{i}=1,~x_{i_1}+\cdots+x_{i_l}=0\right\},\\
&&\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right):=\left\{x\in\mathcal{C}~|~x_{i}=1,~x_{i_1}+\cdots+x_{i_l}=1\right\}. \end{eqnarray*} Then the following classification holds \begin{enumerate} \item if $g_{i_1}+\cdots+g_{i_l}=0$, then $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)=2^{k-1}$, $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)=0$. \item if $0\neq g_{i_1}+\cdots+g_{i_l}\neq g_{i}$, then $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)=\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)=2^{k-2}$. \item if $0\neq g_{i_1}+\cdots+g_{i_l}=g_{i}$, then $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)=0$, $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)=2^{k-1}$. \end{enumerate} \end{lemma}
\begin{prf}{\rm Since we have \[ x_{i_1}+\cdots+x_{i_l}=(m_1~\cdots~m_k)\cdot(g_{i_1}+\cdots+g_{i_l}), \] cases 1 and 3 are trivial. So, let us assume $0\neq g_{i_1}+\cdots+g_{i_l}\neq g_{i}$. The proof is similar to that of Lemma \ref{lemma:gij}. By using a suitable permutation, let us express $g_{i}$ as follows \[ g_{i}=(\underbrace{1\cdots 1}_{\alpha}~0\cdots 0)^T. \] Here we only deal with the case $\alpha<k$ again, since the modification for $\alpha = k$ follows immediately from the following case I\!I. We have two situations
\\ case I: there exists $\beta>\alpha$ such that $g_{\beta,i_1}+\cdots+g_{\beta,i_l}=1$.\\ case I\!I: not case I, i.e., $g_{\beta,i_1}+\cdots+g_{\beta,i_l}=0$ for all $\beta>\alpha$.
\\ In case I, let us fix $(m_1~\cdots~m_\alpha)$ with $m_1+\cdots+m_{\alpha}=1$, which corresponds to $x_{i}=1$, and consider the numbers of codewords with $x_{i_1}+\cdots+x_{i_l}=0$ or $=1$ for the remaining message bits $(m_{\alpha+1}~\cdots~m_k)$. From the assumption, the $\beta$-th element of the vector $g_{i_1}+\cdots+g_{i_l}$ is 1, and, by applying Lemma \ref{lemma:C01} to the subvector from the $(\alpha+1)$-th to the $k$-th elements, the numbers of codewords with $x_{i_1}+\cdots+x_{i_l}=0$ or $=1$ are the same for each $(m_1~\cdots~m_\alpha)$. Hence we have $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)=\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)=2^{k-2}$. The proof for case I\!I is almost parallel to that of Lemma \ref{lemma:gij}, so we omit it. }\qed \end{prf}
Let us introduce the following notations, which are similar to those in the proof of Lemma \ref{lemma:H}, \begin{eqnarray*}
&&\hspace{-0.5cm}Z^{\rm odd}_{i_1\cdots i_l}(x_{i}=1)=\{x\in \mathcal{C}~|~{\rm the~number~of~0~in}~x_{i_1},\cdots,x_{i_l}~{\rm is~odd~and}~x_{i}=1\},\\
&&\hspace{-0.5cm}Z^{\rm even}_{i_1\cdots i_l}(x_{i}=1)=\{x\in \mathcal{C}~|~{\rm the~number~of~0~in}~x_{i_1},\cdots,x_{i_l}~{\rm is~even~and}~x_{i}=1\}. \end{eqnarray*} Then, for odd $l$ (or even $l$, resp.), we have \begin{eqnarray*} &&Z^{\rm odd}_{i_1\cdots i_l}(x_{i}=1)=\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)~ ({\rm or~}=\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right),~{\rm resp.}),\\ &&Z^{\rm even}_{i_1\cdots i_l}(x_{i}=1)=\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)~ ({\rm or~}=\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right),~{\rm resp.}). \end{eqnarray*}
\begin{lemma}\label{lemma:Ip} Let $l\geq 2$. Then $I^{i_1\cdots i_l}_{i}(p)$ is classified as \[ I^{i_1\cdots i_l}_{i}(p)=\left\{ \begin{array}{ll} 0, & {\rm if~C0~or~C1}\\ (-1)^{l+1}2^{-n+k+l-1}, & {\rm if~C2}\\ (-1)^{l}2^{-n+k+l-1}, & {\rm if~C3} \end{array} \right., \] where each condition is given by
\\ {\rm C0:} some indices in $i_1,\cdots,i_l$ coincide
\\ {\rm C1:} $\overline{{\rm C0}}$ and $0\neq g_{i_1}+\cdots+g_{i_l}\neq g_{i}$~~(here $\overline{{\rm C0}}$ means ``NOT {\rm C0}")
\\ {\rm C2:} $\overline{{\rm C0}}$ and $g_{i_1}+\cdots+g_{i_l}=g_{i}$
\\ {\rm C3:} $\overline{{\rm C0}}$ and $g_{i_1}+\cdots+g_{i_l}=0$ \end{lemma}
\begin{prf}{\rm The condition C0 immediately implies $I^{i_1\cdots i_l}_{i}(p)=0$. Hence we assume all the indices are distinct. The remaining proof follows directly from Lemma \ref{lemma:iC01} for each case. First of all, let us study the case $i\notin\{i_1,\cdots,i_l\}$. By using the notations introduced before the lemma, we have \[ I^{i_1\cdots i_l}_{i}(p)=(1/2)^{n-l}(\sharp Z^{\rm even}_{i_1\cdots i_l}(x_{i}=1)-\sharp Z^{\rm odd}_{i_1\cdots i_l}(x_{i}=1)). \] Therefore the condition C1 implies $I^{i_1\cdots i_l}_{i}(p)=0$ by Lemma \ref{lemma:iC01}.
On the other hand, if we assume the condition C2, then $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)=0$ and $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)=2^{k-1}$ from Lemma \ref{lemma:iC01}. If $l$ is even, $I^{i_1\cdots i_l}_{i}(p)=-(1/2)^{n-l}\times 2^{k-1}=-2^{-n+k+l-1}$. Similarly, if $l$ is odd, $I^{i_1\cdots i_l}_{i}(p)=2^{-n+k+l-1}$. Hence, we have $I^{i_1\cdots i_l}_{i}(p)=(-1)^{l+1}2^{-n+k+l-1}$ for the condition C2.
For the condition C3, the roles of $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=0}\right)$ and $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_l}=1}\right)$ are interchanged by Lemma \ref{lemma:iC01}, which simply flips the sign of $I^{i_1\cdots i_l}_{i}(p)$ relative to the condition C2.
Next, let us study the case $i\in\{i_1,\cdots,i_l\}$. Without loss of generality, let us suppose $i=i_l$. Then we have \[ I^{i_1\cdots i_l}_{i}(p)=(1/2)^{n-l}(\sharp Z^{\rm even}_{i_1\cdots i_{l-1}}(x_{i}=1)-\sharp Z^{\rm odd}_{i_1\cdots i_{l-1}}(x_{i}=1)). \] Therefore the condition C1 implies $I^{i_1\cdots i_l}_{i}(p)=0$ by Lemma \ref{lemma:iC01} (or Lemma \ref{lemma:gij} for $l=2$).
For the condition C2 ($l$ must be $>2$), we have $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_{l-1}}=0}\right)=2^{k-1}$ and $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_{l-1}}=1}\right)=0$ from Lemma \ref{lemma:iC01}. Hence, by the same calculation as that for $i\notin\{i_1,\cdots,i_l\}$, we have $I^{i_1\cdots i_l}_{i}(p)=(-1)^{l+1}2^{-n+k+l-1}$.
Finally, let us consider the condition C3. For $l>2$, we have $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_{l-1}}=0}\right)=0$ and $\sharp\mathcal{C}\left(\substack{x_{i}=1 \\ x_{i_1}+\cdots+x_{i_{l-1}}=1}\right)=2^{k-1}$ by Lemma \ref{lemma:iC01} again. So, $I^{i_1\cdots i_l}_{i}(p)=(-1)^{l}2^{-n+k+l-1}$. For $l=2$, we cannot use Lemma \ref{lemma:iC01} because of $l-1=1<2$. However, $x_{i}=x_{i_1}$ holds from the assumption $g_{i}=g_{i_1}$. Hence a direct calculation shows $I^{i_1i_2}_{i}(p)=(1/2)^{n-2}\times 2^{k-1}=2^{-n+k+1}$, which is the formula for $l=2$. }\qed \end{prf}
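The three nonzero/zero cases of Lemma \ref{lemma:Ip} can be checked for $l=2$ with the hypothetical $3\times 5$ generator matrix used in the earlier sketches (not from the text), again exploiting multilinearity of $I_i$:

```python
from itertools import product

# Hypothetical 3x5 generator matrix (not from the text).  Its columns:
# g_1 = g_2 = (1,0,0)^T, g_3 = (0,1,0)^T, g_4 = (1,1,0)^T, g_5 = (1,1,1)^T.
G = [[1, 1, 0, 1, 1],
     [0, 0, 1, 1, 1],
     [0, 0, 0, 0, 1]]
k, n = len(G), len(G[0])
codewords = [[sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n)]
             for m in product([0, 1], repeat=k)]

def dI_at_p(i, idx):
    """Mixed partial of I_i w.r.t. the distinct u_j, j in idx, at p
    (I_i is multilinear, so each differentiated rho_j becomes +/-1)."""
    total = 0.0
    for x in codewords:
        if x[i] != 1:
            continue
        term = 1.0
        for j in range(n):
            term *= (1.0 if x[j] == 1 else -1.0) if j in idx else 0.5
        total += term
    return total

# C2 (g_1 + g_3 = g_4, l = 2): expect (-1)^{l+1} 2^{-n+k+l-1} = -0.5.
print(dI_at_p(3, {0, 2}))   # -0.5
# C3 (g_1 + g_2 = 0, l = 2): expect (-1)^l 2^{-n+k+l-1} = 0.5.
print(dI_at_p(2, {0, 1}))   # 0.5
# C1 (0 != g_1 + g_4 != g_5): expect 0.
print(dI_at_p(4, {0, 3}))   # 0.0
```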
Before proving Proposition \ref{pp:fl0}, let us show the following two lemmas. Their proofs are easy applications of induction.
\begin{lemma}\label{lemma:1/A} The derivative $(1/H)^{i_1\cdots i_l}$ is given by \[ \left(\frac{1}{H}\right)^{i_1\cdots i_l}\!\!\!\!\!\!=(-1)^l\frac{l!}{H^{l+1}}(H^{i_1}\cdots H^{i_l})+\cdots +(-1)^k\frac{k!}{H^{k+1}}\sum_{C(l,k)}(H^{{\bf r}^1}\cdots H^{{\bf r}^k})+\cdots -\frac{1}{H^2}H^{i_1\cdots i_l}, \] where the summation for the $k$-th term {\rm (}$1<k<l${\rm )} is taken over all combinations $C(l,k)$ of dividing $i_1,\cdots,i_l$ into $k$ groups. Here ${\bf r}^1,\cdots,{\bf r}^k$ represent a decomposition of $\{i_1,\cdots,i_l\}${\rm ;} \[ \{i_1,\cdots,i_l\}=\cup_{i=1}^{k}{\bf r}^i,~~~{\bf r}^i\neq \emptyset,~~~{\bf r}^i\cap {\bf r}^j=\emptyset~(i\neq j). \] \end{lemma}
\begin{lemma}\label{lemma:fl} Let $l\geq 2$. Then the derivative $f^{i_1\cdots i_l}_{i}$ is given by \[ f^{i_1\cdots i_l}_{i}=I^{i_1\cdots i_l}_{i}\left(\frac{1}{H}\right) +\cdots+ \sum^{_{l}C_k}_{j=1}I_{i}^{{\bf p}^k_j}\left(\frac{1}{H}\right)^{{\bf q}^k_j} +\cdots+ I_{i}\left(\frac{1}{H}\right)^{i_1\cdots i_l}, \] where ${\bf p}^k_j$ and ${\bf q}^k_j$ are a decomposition of $\{i_1,\cdots,i_l\}${\rm ;} \[ {\bf p}^k_j\cap {\bf q}^k_j=\emptyset,~~~{\bf p}^k_j\cup {\bf q}^k_j=\{i_1,\cdots,i_l\},~~~\sharp {\bf q}^k_j=k, \] and the summations are taken over all the combinations of the decompositions. \end{lemma}
Now we prove Proposition \ref{pp:fl0}.
\\ {\underline {\it Proof of Proposition \ref{pp:fl0}.}}~~~ Let us assume that any distinct $(l+1)$ column vectors of $G$ are linearly independent. Then, from Lemmas \ref{lemma:H}, \ref{lemma:1/A}, and \ref{lemma:fl}, we have \[ f^{i_1\cdots i_l}_{i}(p)=I^{i_1\cdots i_l}_{i}(p)\left(\frac{1}{H(p)}\right). \] On the other hand, from Lemma \ref{lemma:Ip}, $I^{i_1\cdots i_l}_{i}(p)=0$, since the linear independence assumption excludes the conditions C2 and C3, and hence only C0 or C1 can occur. The proof is completed. \qed
\\ Finally, we are in a position to prove Theorem \ref{thm:thm}.
\\ {\underline {\it Proof of Theorem \ref{thm:thm}.}}~~~ The formula for the case $l=2$ is given by Proposition \ref{pp:jacobi}. Let us assume $l\geq 3$. From Proposition \ref{pp:fl0}, all nonlinear terms with orders less than $l$ are zero, so we only study the $l$-th order nonlinear terms. From the assumption and a similar argument to the proof of Proposition \ref{pp:fl0}, the derivative $f^{i_1\cdots i_l}_{i}(p)$ is given by \[ f^{i_1\cdots i_l}_{i}(p)=I^{i_1\cdots i_l}_{i}(p)\left(\frac{1}{H(p)}\right). \]
Then, the classification in Lemma \ref{lemma:Ip} shows that $f^{i_1\cdots i_l}_{i}(p)=0$ in C0 and C1. Moreover, the condition C3 does not occur under the assumption. Hence the only nonzero terms are derived from the condition C2. It should be noted that the set of indices $(i_1,\cdots,i_l)$ satisfying the condition C2 with $i$ is exactly the same as $\Theta^{(l)}_{i}$. Hence, Lemma \ref{lemma:Ip} results in \begin{eqnarray*} f^{i_1\cdots i_l}_{i}(p)&=&I^{i_1\cdots i_l}_{i}(p)\left(\frac{1}{H(p)}\right)\\ &=&(-1)^{l+1}2^{-n+k+l-1}\frac{1}{(1/2)^n\times 2^k}\\ &=&(-2)^{l-1}, \end{eqnarray*} for $(i_1,\cdots,i_l)\in\Theta^{(l)}_{i}$. Finally, we have the following \begin{eqnarray*} f^{(l)}_{i}(v)&=&\sum_{\substack{m_1+\cdots+m_n= l\\m_j\geq 0}} \frac{v_1^{m_1}\cdots v_n^{m_n}}{m_1!\cdots m_n!} \left(\frac{\partial^l f_i}{\partial u_1^{m_1}\cdots\partial u_n^{m_n}}\right)(p)\\ &=&\sum_{(i_1,\cdots,i_l)\in \Theta^{(l)}_i}(-2)^{l-1} v_{i_1}\cdots v_{i_l}. \end{eqnarray*} The identity in (\ref{eq:hosum}) is derived by just substituting $v_i=u_i-1/2$. This completes the proof. \qed
\section{Relationship to Algebraic Geometry Codes}\label{sec:ag} The purpose of this section is to derive Corollary \ref{corollary:ag}. This corollary shows that usual techniques in algebraic geometry codes can be applied to control not only the minimum distance of the code but also the approximate ML decoding. For the definitions of basic tools in algebraic geometry such as genus, divisor, Riemann-Roch space, differential, and residue, we refer to \cite{fulton}. We also assume basic knowledge of algebraic geometry codes in this section (e.g., see \cite{vanlint}, \cite{stich} and \cite{tvn}).
Let $\mathbb{F}_q$ be a finite field with $q>1$ elements. Let us first recall two classes of algebraic geometry codes called geometric Reed-Solomon codes and geometric Goppa codes. Let $\mathcal{X}$ be an absolutely irreducible nonsingular projective curve over $\mathbb{F}_q$. For rational points $P_1,\cdots,P_{\tilde n}$ on $\mathcal{X}$, we define a divisor on $\mathcal{X}$ by $D=P_1+\cdots +P_{\tilde n}$. Moreover, let $D'$ be another divisor whose support is disjoint from $D$. We assume that $D'$ satisfies the following condition \[ 2g-2<{\rm deg}(D')<\tilde{n} \] for the sake of simplicity. Here $g$ is the genus of $\mathcal{X}$. Geometric Reed-Solomon codes are characterized by the Riemann-Roch space associated to $D'$ \[ \mathcal{L}(D'):=\left\{
\phi\in\mathbb{F}^*_q(\mathcal{X})~|~(\phi)+D'\geq 0 \right\}\cup\{0\}, \] where $\mathbb{F}^*_q(\mathcal{X})$ is the set of nonzero elements of the function field $\mathbb{F}_q(\mathcal{X})$, and $(\phi)$ is the principal divisor of the rational function $\phi$. The fundamental fact that the Riemann-Roch space $\mathcal{L}(D')$ is a finite dimensional vector space leads to the following definition.
\begin{df}{\rm {\it The geometric Reed-Solomon code} $\mathcal{C}(D,D')$ of length $\tilde{n}$ over $\mathbb{F}_q$ is defined by the image of the linear map $\alpha:\mathcal{L}(D')\rightarrow\mathbb{F}^{\tilde n}_q$ given by $\alpha(\phi)=(\phi(P_1),\cdots,\phi(P_{\tilde n}))$. } \end{df} On the other hand, geometric Goppa codes are defined via differentials and their residues. Let us denote the set of differentials on $\mathcal{X}$ by $\Omega(\mathcal{X})$, and define for each divisor $D$ \[
\Omega(D):=\{\omega\in\Omega(\mathcal{X})~|~(\omega)-D\geq 0\}, \] where $(\omega)$ is the divisor of the differential $\omega$.
\begin{df}{\rm {\it The geometric Goppa code} $\mathcal{C}^*(D,D')$ of length ${\tilde n}$ over $\mathbb{F}_q$ is defined by the image of the linear map $\alpha^*:\Omega(D'-D)\rightarrow \mathbb{F}^{\tilde n}_q$ given by $\alpha^*(\omega)=({\rm Res}_{P_1}(\omega),\cdots,{\rm Res}_{P_{\tilde n}}(\omega))$, where ${\rm Res}_{P}(\omega)$ expresses the residue of $\omega$ at $P$. } \end{df}
The following proposition is an easy consequence of the Riemann-Roch theorem. \begin{pp}{\rm (e.g., \cite{vanlint}, \cite{stich}, \cite{tvn})}\label{pp:grs} \begin{enumerate} \item The dimension of the geometric Reed-Solomon code $\mathcal{C}(D,D')$ is $k={\rm deg}(D')-g+1$ and the minimum distance satisfies $d\geq \tilde{n}-{\rm deg}(D')$. \item The dimension of the geometric Goppa code $\mathcal{C}^*(D,D')$ is $k=\tilde{n}-{\rm deg}(D')+g-1$ and the minimum distance satisfies $d\geq {\rm deg}(D')-2g+2$. \item The codes $\mathcal{C}(D,D')$ and $\mathcal{C}^*(D,D')$ are dual codes. \end{enumerate} \end{pp} It should be noted that the minimum distances of these two codes are controlled by the genus of $\mathcal{X}$ and the choice of the divisor $D'$, and, as a result, they induce appropriate linear independence on their parity check matrices.
For an application of Theorem \ref{thm:thm}, we need to derive expanded codes over $\mathbb{F}_2$ from geometric Reed-Solomon and geometric Goppa codes over $\mathbb{F}_{q}, q=2^s$. Let $\mathcal{C}_{q}$ be a code over $\mathbb{F}_q$ with length $\tilde{n}$, and let $e_1,\cdots,e_s\in\mathbb{F}_q$ be a basis of the $\mathbb{F}_2$-vector space $\mathbb{F}_q$. This basis naturally induces the map $\mathbb{F}^{\tilde n}_q\ni x\mapsto \hat{x}\in \mathbb{F}^{{\tilde n}s}_2$ obtained by expressing each element $x_i$ in $x=(x_1,\cdots,x_{\tilde n})$ as its coefficient vector with respect to this basis. Then the expanded code of $\mathcal{C}_{q}$ over $\mathbb{F}_2$ is defined by $
\mathcal{C}_2:=\{\hat{x}~|~x\in \mathcal{C}_q\}. $ A relationship between $\mathcal{C}_q$ and $\mathcal{C}_2$ is given by the following proposition. \begin{pp} If a code $\mathcal{C}_q$ has parameters $[\tilde{n},k,d]$, where $\tilde{n}$ is the code length, $k$ is the dimension, and $d$ is the minimum distance, then its expanded code $\mathcal{C}_2$ has the parameters $[\tilde{n}s, ks, d'\geq d]$. \end{pp}
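A minimal sketch of the expansion, using a hypothetical length-2 repetition code over $\mathbb{F}_4$ (with $s=2$ and the assumed basis $e_1=1$, $e_2=\omega$) rather than an algebraic geometry code:

```python
# F_4 = {0, 1, w, w+1}, represented by coefficient pairs (a, b) = a + b*w
# with respect to the hypothetical basis e_1 = 1, e_2 = w.
F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]

# Length-2 repetition code over F_4: parameters [n~, k, d] = [2, 1, 2].
C4 = [(c, c) for c in F4]

# Expanded binary code: replace each F_4 symbol by its coefficient pair.
C2 = [x[0] + x[1] for x in C4]
print(sorted(C2))
# [(0,0,0,0), (0,1,0,1), (1,0,1,0), (1,1,1,1)]: a binary [4, 2, 2] code,
# i.e. parameters [n~*s, k*s, d' >= d] with s = 2.
```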
Now we apply Theorem \ref{thm:thm} to geometric Reed-Solomon/Goppa codes.
Let $n=\tilde{n}s$, and let $\mathcal{C}_2(D,D')$ and $\mathcal{C}^*_2(D,D')$ be the expanded codes over $\mathbb{F}_2$ of a geometric Reed-Solomon code $\mathcal{C}(D,D')$ and a geometric Goppa code $\mathcal{C}^*(D,D')$ over $\mathbb{F}_q$. Then, we have the following corollary.
\begin{corollary}\label{corollary:ag} The expanded geometric Reed-Solomon code $\mathcal{C}_2(D,D')$ has the minimum distance $d\geq {\tilde n}-{\rm deg}(D')$. Furthermore, there exists an $l$-th order approximate ML decoding with $l\geq{\rm deg}(D')-2g+1$ such that $\tilde{f}_{i}(u)=u_i+f^{(l)}_i(v)$. \end{corollary} \begin{prf}{\rm The first statement is the property of a geometric Reed-Solomon code and its expansion. The second statement follows from Theorem \ref{thm:thm} and the duality of $\mathcal{C}(D,D')$ and $\mathcal{C}^*(D,D')$. }\qed \end{prf}
{\bf Example: Hermitian Code} (e.g., \cite{vanlint}, \cite{stich}, \cite{tvn})\\
Let $q=r^2$, where $r$ is a power of 2. The Hermitian curve $\mathcal{H}$ is given by the homogeneous equation $X^{r+1}+Y^{r+1}+Z^{r+1}=0$, and, since the curve has no singular points, its genus is $g=r(r-1)/2$. It is known that the number of rational points over $\mathbb{F}_q$ is $r^3+1$.
Let us fix $r=2$ as an example. Then, the following is the list of the rational points on $\mathcal{H}$: \begin{eqnarray*} &&P_1=(1,0,\bar{\omega}),~~P_2=(1,0,\omega),~~P_3=(1,0,1),~~P_4=(1,\bar{\omega},0),~~P_5=(1,\omega,0),\\ &&P_6=(1,1,0),~~P_7=(0,\bar{\omega},1),~~P_8=(0,\omega,1),~~Q=(0,1,1), \end{eqnarray*} where $\omega$ is a primitive element of $\mathbb{F}_4$ and $\bar{\omega}=1+\omega$ (i.e., $\mathbb{F}_4=\{0,1,\omega,\bar{\omega}\}$). Let us suppose $D=P_1+\cdots+P_8$ (hence the code length is ${\tilde n}=8$), and $D'=mQ$, $2g-2<m<{\tilde n}$. A basis of the Riemann-Roch space $\mathcal{L}(D')$ with $m=4$ is given by \[ \mathcal{L}(4Q)={\rm Span}\left\{ 1, \frac{X}{Y+Z},\frac{Y}{Y+Z},\frac{X^2}{(Y+Z)^2} \right\}. \] Then, we can explicitly write down a generator matrix of the expanded geometric Reed-Solomon code $\mathcal{C}_2(D,D')$ over $\mathbb{F}_2$ as follows \begin{equation} G=\left( \begin{array}{cccccccccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 1\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 1 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \end{array} \right). \label{eq:Gexample} \end{equation} It should be noted that this case leads to a self-dual code ${\mathcal{C}(D,D')=\mathcal{C}^*(D,D')}$. Hence, its parity check matrix $H$ is the same as $G$, and a direct calculation proves that the minimum distance of this code is 4. This means that we have the $3$rd-order approximate ML decoding in Corollary \ref{corollary:ag}.
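The self-duality and the minimum distance claimed above can be verified by brute force from the generator matrix (\ref{eq:Gexample}):

```python
from itertools import product

# Generator matrix (eq:Gexample) of the expanded code, rows as bit strings.
rows = [
    "1000000010100111",
    "0100000001011110",
    "0010000010001010",
    "0001000001000101",
    "0000100000101010",
    "0000010000010101",
    "0000001010101101",
    "0000000101011011",
]
G = [[int(c) for c in r] for r in rows]
k, n = len(G), len(G[0])

# Self-duality: every pair of rows (each row with itself included) is
# orthogonal over F_2, so H = G serves as a parity check matrix.
selfdual = all(sum(G[a][b] * G[c][b] for b in range(n)) % 2 == 0
               for a in range(k) for c in range(k))
print(selfdual)        # True

# Minimum distance: smallest Hamming weight among the nonzero codewords.
weights = [sum(sum(m[a] * G[a][b] for a in range(k)) % 2 for b in range(n))
           for m in product([0, 1], repeat=k) if any(m)]
print(min(weights))    # 4
```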
The explicit forms of $\tilde{f}_{i}(u), i=1,\cdots,16,$ are given as \begin{eqnarray*} &&\hspace{-0.6cm} \tilde{f}_{1}(u)=u_1 + 4v_{7}(v_{3}v_{9} + v_{5}v_{11} + v_{13}v_{15}),~~~ \tilde{f}_{2}(u)=u_2 + 4v_{8}(v_{4}v_{10} + v_{6}v_{12} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{3}(u)=u_3 + 4v_{9}(v_{1}v_{7} + v_{5}v_{11} + v_{13}v_{15}),~~~ \tilde{f}_{4}(u)=u_4 + 4v_{10}(v_{2}v_{8} + v_{6}v_{12} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{5}(u)=u_5 + 4v_{11}(v_{1}v_{7} + v_{3}v_{9} + v_{13}v_{15}),~~~ \tilde{f}_{6}(u)=u_6 + 4v_{12}(v_{2}v_{8} + v_{4}v_{10} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{7}(u)=u_7 + 4v_{1}(v_{3}v_{9} + v_{5}v_{11} + v_{13}v_{15}),~~~ \tilde{f}_{8}(u)=u_8 + 4v_{2}(v_{4}v_{10} + v_{6}v_{12} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{9}(u)=u_9 + 4v_{3}(v_{1}v_{7} + v_{5}v_{11} + v_{13}v_{15}),~~~ \tilde{f}_{10}(u)=u_{10} + 4v_{4}(v_{2}v_{8} + v_{6}v_{12} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{11}(u)=u_{11} + 4v_{5}(v_{1}v_{7} + v_{3}v_{9} + v_{13}v_{15}),~~ \tilde{f}_{12}(u)=u_{12} + 4v_{6}(v_{2}v_{8} + v_{4}v_{10} + v_{14}v_{16}),\\ &&\hspace{-0.6cm} \tilde{f}_{13}(u)=u_{13} + 4v_{15}(v_{1}v_{7} + v_{3}v_{9} + v_{5}v_{11}),~~ \tilde{f}_{14}(u)=u_{14} + 4v_{16}(v_{2}v_{8} + v_{4}v_{10} + v_{6}v_{12}),\\ &&\hspace{-0.6cm} \tilde{f}_{15}(u)=u_{15} + 4v_{13}(v_{1}v_{7} + v_{3}v_{9} + v_{5}v_{11}),~~ \tilde{f}_{16}(u)=u_{16} + 4v_{14}(v_{2}v_{8} + v_{4}v_{10} + v_{6}v_{12}), \end{eqnarray*} where $v_i=u_i-1/2,i=1,\cdots,16$.
\section{Discussions}\label{sec:discussions} To conclude this paper, we offer the following comments and discussions, some of which will be important for the design of good practical error-correcting codes.
\subsection{Stability} Codeword fixed points in $\mathcal{C}$ are stable by Proposition \ref{pp:stable}, and non-codeword poles in $\mathbb{F}^n_2\setminus\mathcal{C}$ are unstable by Proposition \ref{pp:gradient}, in the sense that points near a pole in $\mathbb{F}^n_2\setminus\mathcal{C}$ move away from it. Recall that each $n$-bit received sequence $y\in\mathbb{F}^n_2$ and its initial point $u^0\in I^n$ are related by (\ref{eq:ip}), and that $y$ is characterized as the closest point to $u^0$ in $(\partial I)^n$. Hence, if the received sequence $y$ is a codeword, then $u^0$ may approach the codeword fixed point $y$. This obviously depends on whether $u^0$ lies in the attractor region of the codeword fixed point, although this is indeed the case when $\epsilon$ is small enough, by stability. So far, the general structure of the attractor region of each codeword fixed point is not known. This is an important subject, however, since it is indispensable for estimating the error probabilities of ML decoding and its approximation. Similar arguments hold for a non-codeword received sequence and its repelling property.
\subsection{Local dynamics around $p$}\label{sec:p} Recall that the ML decoding rule (\ref{eq:mlrule}) checks the location of the image of an initial point $u^0$ relative to the point $p$. Hence the local dynamics around the point $p$ is important for the decoding process. In the following, we describe the local dynamics around $p$ in two cases: the Jacobian matrix $J$ at $p$ is (i) the identity or (ii) not the identity.
In case (i), the local dynamics around $p$ is precisely determined by Theorem \ref{thm:thm}. As explained after Theorem \ref{thm:thm}, the nonlinear dynamics around $p$ is closely related to the encoding structure of the code and to the decoding process.
In case (ii), suppose that $B_i$ in Corollary \ref{corollary:eigenvalue} induces an unstable eigenvalue $n_i>1$, and focus on its stable/unstable eigenspaces $S_i$/$U_i$, respectively. From its eigenvector, $U_i$ is the one-dimensional subspace spanned by \[ (0,\cdots,0,\underbrace{1,\cdots,1}_{n_i},0,\cdots,0). \] This acts to equalize $u_{i_1},\cdots,u_{i_{n_i}}$, reflecting the fact that $g_{i_1}=\cdots=g_{i_{n_i}}$. In other words, the unstable subspace $U_i$ points in codeword directions, i.e., toward $\mathcal{C}$. On the other hand, $S_i$ is spanned by the stable eigenvectors $p^{(i)}_2,\cdots,p^{(i)}_{n_i}$ in Corollary \ref{corollary:eigenvector}. In contrast to the unstable eigenvector, these eigenvectors generate differences among $u_{i_1},\cdots,u_{i_{n_i}}$ and, under time reversal, point in non-codeword directions, i.e., toward $\mathbb{F}^n_2\setminus\mathcal{C}$. Therefore, the fixed point $p$ can be regarded as an indicator of codewords, in the sense that non-codeword components shrink and codeword components expand around $p$.
In both cases, further nonlinear analysis of the center/stable/unstable manifolds of $p$ will be useful for finding suitable encoding rules and for estimating the decoding performance of the approximate ML decoding.
\subsection{Hyperbolicity of $p$ and rate restriction} From the above argument on the fixed point $p$, it seems appropriate to design a generator matrix that is hyperbolic at $p$, because $p$ then separates expanding and shrinking directions properly, and this separation affects the decoding performance. However, if $p$ is hyperbolic, then the coding rate must satisfy $r=k/n\leq 1/2$ by Corollary \ref{corollary:eigenvalue}. Namely, hyperbolicity prevents a code from having a coding rate greater than one half, although this is not a severe restriction in particular applications such as wireless communication channels. Therefore a center eigenspace at $p$ is necessary for a code with rate $r>1/2$.
\subsection{Normal form theory in dynamical systems} The normal form theory of dynamical systems (e.g., see \cite{wiggins}) enables us to transform a map into a simpler form by a near-identity transformation around a fixed point. One of the essential points is that nonresonant higher-order terms can be removed from the original map by this transformation. Theorem \ref{thm:thm} can be interpreted from the viewpoint of normal forms in the sense that an algebraic geometry code gives only zero nonresonant terms in the expansion of $f(u)$ at $p$. This leads to the natural question of whether a code whose rational map has no resonant terms, but whose nonresonant terms in the expansion are not necessarily zero, is a good error-correcting code when viewed through a near-identity transformation. At least, this class of codes contains the algebraic geometry codes as a subclass, and a statement similar to Corollary \ref{corollary:ag} holds through near-identity transformations.
\subsection{Relation to LDPC codes} It seems valuable to mention the relationship to LDPC codes \cite{GallagerLDPC}, \cite{MacKay}, a relatively new class of error-correcting codes based on iterative decoding schemes (for a survey of this research area, see \cite{RU}). The iterative decoding schemes mainly use the so-called sum-product algorithm for ML decoding and treat a marginalized conditional probability in (\ref{eq:bitml}) as a convergent point. Although this coding scheme performs well in some numerical simulations, a deeper mathematical understanding of the sum-product algorithm and of ML decoding is desired in order to design better coding schemes. From the viewpoint of dynamical systems, it is natural to formulate the sum-product algorithm, or ML decoding itself, as a certain map and then to analyze its mechanism. The strategy of this paper is based on this consideration.
\section*{Acknowledgment} The authors express their sincere gratitude to the members of the TIN working group for valuable comments and discussions on this paper. This work was supported by the JST PRESTO program.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Error estimates for splitting methods based on AMF-Runge-Kutta formulas for the time integration of advection diffusion reaction PDEs}
\author{ S. Gonzalez-Pinto, D. Hernandez-Abreu and S. Perez-Rodriguez}
\thanks[thanks1]{This work has been supported by projects MTM2010-21630-C02-02 and MTM2013-47318-C2-2}
\corauth[cor1]{Corresponding author: S. Gonzalez-Pinto ([email protected])}
\baselineskip=0.9\normalbaselineskip
\maketitle
\address{
{ \footnotesize Departamento de An\'{a}lisis Matem\'{a}tico. Universidad de La Laguna.\\ 38071. La Laguna, Spain. \\
email: spinto\symbol{'100}ull.es, dhabreu\symbol{'100}ull.es}}
\begin{abstract}
The convergence of a family of AMF-Runge-Kutta methods (in short, AMF-RK) for the time integration of evolutionary Partial Differential Equations (PDEs) of Advection Diffusion Reaction type, semi-discretized in space, is considered. The methods are based on a very few inexact Newton iterations of Approximate Matrix Factorization (AMF) splitting type applied to Implicit Runge-Kutta formulas, which allows very cheap, inexact implementations of the underlying Runge-Kutta formula. Particular AMF-RK methods based on Radau IIA formulas are considered. These methods have given very competitive results when compared with important formulas in the literature for multidimensional systems of non-linear parabolic PDE problems. Uniform bounds for the global time-space errors on semilinear PDEs are derived when the time step-size and the spatial grid resolution simultaneously tend to zero. Numerical illustrations supporting the theory are presented. \end{abstract}
\begin{keyword} Evolutionary Advection-Diffusion-Reaction Partial Differential equations, Approximate Matrix Factorization, Runge-Kutta Radau IIA methods, Finite Differences, Stability and Convergence.\\ {\sl AMS subject classifications: 65M12, 65M15, 65M20.} \end{keyword}
\end{frontmatter}
\section{Introduction}
We consider numerical methods for the time integration of a family of Initial Value Problems in ODEs \begin{equation}\label{ode} y_h'(t) = f_h(t, y_h(t)),\;\;\;y_h(0) = {u}^*_{0,h}, \;\;\; 0 \le t \le t^*, \quad y_h, f_h \in \mathbb{R}^{m(h)}, \quad h\rightarrow 0^+, \end{equation} coming from the spatial semi-discretization of an $l-$dimensional
Advection Diffusion Reaction problem in time-dependent Partial Differential Equations (PDEs), with prescribed Boundary Conditions and an Initial Condition. Here $h$ denotes a small positive parameter associated with the spatial resolution, and usually $l=2,3,\ldots$ .
The typical PDE problem with Dirichlet boundary conditions is given by ($\Omega$ is a bounded open connected region in $\mathbb{R}^l$, $\partial \Omega$ its boundary and $\nabla$ is the gradient operator) \begin{equation}\label{pde} \begin{array}{c} u_t(x,t) = - \nabla \cdot (a(x,t) u(x,t)) + \nabla \cdot (\bar{d}(x,t)\cdot \nabla u(x,t)) + r(u,x,t),\\[0.5pc] x \in \Omega , \; t\in [0,t^*]; \; a(x,t)=(a_j(x,t))_{j=1}^l \in \mathbb{R}^l, \;\bar{d}(x,t)=(\bar{d}_j(x,t))_{j=1}^l \in \mathbb{R}^l,\\[0.3pc] u(x,t) =g_1(x,t), \:(x,t)\in \partial \Omega\times[0,t^*]; \qquad u(x,0)=g_2(x), \; x \in \Omega,\end{array} \end{equation} which is assumed to have some diffusion ($\bar{d}_j(x,t)\ge d_0>0,\;j=1,\ldots,l$), namely not to be of pure hyperbolic type; it is also assumed that some adequate spatial discretization based on Finite Differences or Finite Volumes is applied to obtain the system (\ref{ode}). Some stiffness in the reaction part $r(u,x,t)$ is also allowed. The treatment of systems of PDEs does not involve additional difficulty for our analysis, but for simplicity of presentation we confine ourselves to the case of a single PDE.
We denote by $u_h(t)$ the solution of the PDE problem restricted to the spatial grid (that is, the $h$-dependent grid values). It will be tacitly assumed that the PDE problem admits a smooth solution $u(x,t)$, in the sense that partial derivatives in all variables up to some order $p$ exist, are continuous and are uniformly bounded on $\Omega\times[0,t^*]$, and that $u(x,t)$ is continuous on $\bar{\Omega}\times[0,t^*]$ ($\bar{\Omega}=\Omega \cup \partial \Omega$). It is also assumed that the spatial discretization errors \begin{equation}\label{spatialerrors} \sigma_h(t):= u_h'(t)-f_h(t,u_h(t)), \end{equation}
satisfy, in the norm considered, \begin{equation}\label{norm} \Vert \sigma_h(t) \Vert \le C \:h^r , \quad (C\ge 0,\;r>0), \quad 0 \le t \le t^*, \quad h\rightarrow 0. \end{equation} In general, $C$, $C'$ or $C^*$ will refer to constants that may be different at each occurrence but are all independent of $h\rightarrow 0$ and of the time step-size $\tau\rightarrow 0$. The vector norm used is arbitrary as long as it is defined for vectors of any dimension. For square matrices the norm used is the induced operator norm, $\Vert A\Vert = \sup_{v\ne 0} \Vert A v \Vert / \Vert v \Vert. $
Although most of our results apply in general, we will provide specific results for weighted Euclidean norms of the type $$\Vert (v_j)_{j=1}^N \Vert=N^{-1/2} \Vert (v_j)_{j=1}^N \Vert_2. $$ It should be noted that in this case we have, for any square matrix $A$, $$ \Vert A \Vert= \Vert A \Vert_2, \quad \forall \:A \in \mathbb{R}^{N,N},\;N=1,2,3,\ldots .$$ We assume some natural splitting of $f_h$ (directional or otherwise), \begin{equation}\label{splitting} f_h(t,y)=\sum_{j=1}^d f_{j,h}(t,y), \end{equation} which induces a corresponding splitting of the Jacobian matrix at the current point $(t_n,y_n)$, \begin{equation}\label{split} J_h=\sum_{j=1}^d J_{j,h}, \quad J_h := \displaystyle{\frac{\partial f_h(t_n,y_n)}{\partial y}}, \quad J_{j,h} :=\displaystyle{\frac{\partial f_{j,h}(t_n,y_n)}{\partial y}}. \end{equation}
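A concrete instance of the directional splitting (\ref{splitting})-(\ref{split}) is the standard second-order discretization of a 2D constant-coefficient advection diffusion operator. The following NumPy sketch (with illustrative, assumed coefficients) builds $J_h=J_{1,h}+J_{2,h}$ via Kronecker products and checks that the directional blocks commute:

```python
import numpy as np

# Illustrative (assumed) data: N interior points per direction, constant
# diffusion d_coef and advection a_coef in each direction.
N = 10
h = 1.0 / (N + 1)
d_coef, a_coef = 1.0, 0.5

# Second-order central differences in one direction: Tridiag(alpha, delta, beta).
alpha = (d_coef - 0.5 * h * a_coef) / h**2
beta  = (d_coef + 0.5 * h * a_coef) / h**2
delta = -2.0 * d_coef / h**2
T = (np.diag([delta] * N)
     + np.diag([alpha] * (N - 1), -1)
     + np.diag([beta] * (N - 1), 1))

I = np.eye(N)
J1 = np.kron(I, T)   # x-derivatives: J_{1,h}
J2 = np.kron(T, I)   # y-derivatives: J_{2,h}
J = J1 + J2          # full Jacobian, split directionally

# The directional blocks commute, a property exploited by the convergence analysis.
print(np.allclose(J1 @ J2, J2 @ J1))  # -> True
```

Commutativity here is immediate from the Kronecker structure, since $(I\otimes T)(T\otimes I)=T\otimes T=(T\otimes I)(I\otimes T)$.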
The goal of this paper is to analyze the convergence order of the Method of Lines (MoL) approach for time-dependent Advection Diffusion Reaction PDEs, with the main focus on the time integration of the large ODE systems resulting from the spatial PDE semidiscretization, where some stiffness is assumed (parabolic dominant problems with stiff reaction terms) and the time integrators are based on very few iterations of splitting type (Approximate Matrix Factorization and Newton-type schemes) applied to highly stable Implicit Runge-Kutta methods. It should be remarked that the underlying Implicit Runge-Kutta method is never solved up to convergence; hence the convergence study does not follow from the results collected in classical references on finite difference methods such as \cite{Rich-Morton67,Bur-Hun-Ver86,Marchuk90,Thomee90,Trefethen92,HV}. The kind of approach considered here is of interest since it is easily applicable to general systems of PDEs, as we will see later on, and it is
reasonably cheap for non-linear problems in general
(although we give convergence results for semilinear problems only) when some splitting of the function $f_h$ and
its Jacobian is available and the split terms can be handled efficiently. In particular a method based on
three AMF-iterations of the two-stage Radau IIA method
\cite{Axelsson69} has been shown to be competitive \cite{apnum-sevsole10} when compared with some
standard PDE-solvers such as VODPK \cite{Brown-Byrne-Hindmarsh-SISC89,Brown}
in some interesting non-linear diffusion reaction problems widely considered in the literature. We also present two new methods based on the 2-stage Radau IIA formula, obtained by performing just one or two iterations of splitting type, respectively. The method based on two iterations is one of the very few one-step methods of splitting type we have seen in the literature that have order three in the PDE sense for the time integration.
The rest of the paper is organized as follows. In Section 2 we introduce the {\sf AMF$_q$-RK} methods, and special attention is paid to some methods based on Radau IIA formulas. In Section 3, the convergence for semilinear PDEs is studied in detail; the local and global errors are analyzed for the {\sf AMF$_q$-RK} splitting methods based on general Runge-Kutta methods. Section 4 is devoted to some applications of the convergence results to 2D and 3D parabolic PDEs.
{\rm Henceforth, for simplicity of notation, we omit in many cases the $h$-dependence of some vectors, such as $f_h,\;f_{j,h}$, and of some matrices, such as $J_h$ and $J_{j,h}$ ($j=1,\ldots,d$). It should be clear from the context which quantities are $h$-dependent. Besides, we will denote the identity matrix by $I$ when its dimension is clear from the context. }
\section{AMF-IRK methods}
For the integration of the ODEs (\ref{ode}), we consider as a first step an implicit $s$-stage Runge-Kutta method with a nonsingular coefficient matrix $A=(a_{ij})_{i,j=1}^s$ and a weight vector $b=(b_j)_{j=1}^s$. The method is given by the compact formulation (below $\otimes$ denotes the Kronecker product of matrices $A\otimes B=(a_{ij}B), \;A=(a_{ij}), \;B=(b_{ij})$) \begin{equation}\label{IRK} \begin{array}{c} Y_n=e\otimes y_n +\tau (A\otimes I_m)F(Y_n), \\ y_{n+1}= \varpi y_n + (\ss^T\otimes I_m)Y_n, \\ c\equiv (c_j)_{j=1}^s:=A e, \quad e=(1,\ldots,1)^T \in \mathbb{R}^{s}, \quad \ss^T:=b^TA^{-1},\quad \varpi=1-\ss^Te, \\ Y_n=(Y_{n,j})_{j=1}^s \in \mathbb{R}^{ms}, \qquad F(Y_n)=(f(t_n+\tau c_j,Y_{n,j}))_{j=1}^s \in \mathbb{R}^{ms}. \end{array} \end{equation} It should be noted that we have replaced the usual formulation at the stepping point $y_{n+1}= y_n + \tau (b^T\otimes I_m) F(Y_n)$ by the equivalent one in (\ref{IRK}), which has some computational advantages for stiff problems when the algebraic system for the stages is not exactly solved.
A typical Quasi-Newton iteration to solve the stage equations above is given by (below, $J=\partial{f}/\partial y\:(t_n,y_n)$ is the exact Jacobian at the step-point $(t_n,y_n)$), \begin{equation}\label{newt} [I_{ms}- A\otimes \tau J]\Delta^\nu=D_n^{\nu-1},\quad Y_n^\nu=Y_n^{\nu-1}+\Delta^\nu, \quad \nu=1,2,\dots, \end{equation} where \begin{equation}\label{residual} D_n^{\nu-1}\equiv D(t_n,\tau,y_n,Y_n^{\nu-1}):= e\otimes y_n - Y_n^{\nu-1} + \tau ( A\otimes I_m) F(Y_n^{\nu-1}). \end{equation} A cheaper iteration of Newton-type when the matrix $A$ has a multipoint spectrum has been considered in \cite{APNUM95-seve,PGS09} (denoted as Single-Newton iteration) \begin{equation}\label{s-newt} [I_{ms}- T_\nu\otimes \tau J]\Delta^\nu=D_n^{\nu-1},\quad Y_n^\nu=Y_n^{\nu-1}+\Delta^\nu, \quad \nu=1,2,\dots,q \end{equation} where \begin{equation}\label{T} \begin{array}{c} T_\nu=\gamma S_\nu (I-L_\nu)^{-1} S_\nu^{-1}, \quad \gamma>0,\\ \quad S_\nu \in \mathbb{R}^{s,s} \;\mbox{\rm are regular matrices and } \\ L_\nu \in \mathbb{R}^{s,s} \; \mbox{\rm are strictly lower triangular matrices.} \end{array} \end{equation} After some simple manipulations, by using standard properties of the Kronecker product, this iteration can be rewritten in the equivalent form, \begin{equation}\label{s-newt1} \begin{array}{rl} [I_s\otimes(I_{m}-\gamma \tau J)] E^\nu&=((I_s-L_\nu)S_\nu^{-1} \otimes I_m)D_n^{\nu-1}+ (L_\nu\otimes I_m)E^\nu, \\ Y_n^\nu &=Y_n^{\nu-1}+(S_\nu\otimes I_m)E^\nu, \qquad \nu=1,2,\dots,q. 
\end{array} \end{equation} To reduce the algebra cost, we use the Approximate Matrix Factorization \cite{How-Som-JCAM2001} in short AMF, with $J\equiv J_h$ and $J_j\equiv J_{j,h}$ given in (\ref{split}), \begin{equation}\label{product} \Pi_d := \prod_{j=1}^d (I_m-\gamma \tau J_j)= (I_{m}-\gamma \tau J) + \mathcal{O}(\tau^2), \end{equation} and replace in (\ref{s-newt1}) $(I_{m}-\gamma \tau J)$ by $\Pi_d$, which yields the {\sf AMF$_q$-RK} method based on the underlying Runge-Kutta method \begin{equation}\label{AMF-RK} \begin{array}{rl} [I_s\otimes\Pi_d] E^\nu&=((I_s-L_\nu)S_\nu^{-1} \otimes I_m)D_n^{\nu-1}+ (L_\nu\otimes I_m)E^\nu, \\ Y_n^\nu &=Y_n^{\nu-1}+(S_\nu\otimes I_m)E^\nu, \qquad \nu=1,2,\dots,q \\ Y_n^0&=e\otimes y_n \qquad \mbox{\rm (Predictor)} \\ y_{n+1}&= \varpi y_n + (\ss^T\otimes I_m)Y^q_n \qquad \mbox{\rm (Corrector).} \end{array} \end{equation} Our starting point for the convergence analysis in the next section takes into account that the {\sf AMF$_q$-RK} method can be rewritten in the equivalent form \cite{sevedom-AMFestab}
\begin{equation}\label{AMF-RK1} \begin{array}{c}
[I\otimes I- T_\nu\otimes \tau P](Y_n^\nu-Y_n^{\nu-1})= D(t_n,\tau,y_n,Y_n^{\nu-1}), \;\; 1\leq \nu\leq q \\ Y_n^0=e\otimes y_n, \qquad y_{n+1}= \varpi y_n + (\ss^T\otimes I_m)Y^q_n, \end{array} \end{equation} where the matrix $P$ plays a primary role \begin{equation}\label{matrixP} \begin{array}{lll} P&:=&(\gamma \tau)^{-1}(I-\Pi_d)\\&=& J+(-\gamma \tau)\displaystyle{\sum_{j<k}} J_jJ_k +(-\gamma \tau)^2 \displaystyle{\sum_{j<k<l}} J_jJ_kJ_l +\ldots+(-\gamma \tau)^{d-1} J_1J_2\cdots J_d.\end{array} \end{equation} \subsection{AMF$_q$-RK methods based on the 2 stage Radau IIA formula}\label{sec-2.1}
We now devote special attention to AMF$_q$-RK methods based on the 2-stage Radau IIA formula \cite{Axelsson69}. This formula has the Butcher tableau
$$\begin{array}{c|c} c&A\\ \hline \\[-2pc] & b^T\end{array} \quad \equiv
\quad \begin{array}{c|cc} 1/3& 5/12 & \;-1/12\\ 1 & 3/4&\; 1/4 \\[0.3pc] \hline \\[-2pc]
& 3/4&\; 1/4\end{array}$$
This is a collocation method (its stage order is two) possessing good stability properties, such as $L$-stability (i.e., $A$-stability plus $R(\infty)=0$, with $R(z)$ being the linear stability function of the method), and it has order of convergence three (in the ODE sense), not only on non-stiff problems but also on many kinds of stiff problems \cite{Bur-Hun-Ver86}. These properties of the underlying Runge-Kutta method are convenient, since the family of ODEs (\ref{ode}) involves stiffness in most cases, due to the diffusion terms and possibly to the reaction part, and it is expected that the methods built on it inherit part of the good properties of the original Runge-Kutta method.
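The tableau identities used throughout ($c=Ae$, $\ss^T=b^TA^{-1}=(0,1)$, so the step value coincides with the last stage, $\varpi=0$, and $\det(A)=1/6$) are easy to confirm numerically; the following NumPy sketch is only a sanity check of these quantities:

```python
import numpy as np

# Butcher tableau of the 2-stage Radau IIA method.
A = np.array([[5/12, -1/12], [3/4, 1/4]])
b = np.array([3/4, 1/4])
e = np.ones(2)

c = A @ e                        # abscissae: should be (1/3, 1)
ss = b @ np.linalg.inv(A)        # ss^T = b^T A^{-1}
varpi = 1.0 - ss @ e             # varpi = 1 - ss^T e

print(np.allclose(c, [1/3, 1.0]))    # -> True
print(np.allclose(ss, [0.0, 1.0]))   # -> True: y_{n+1} = Y_{n,2} (stiffly accurate)
print(np.isclose(varpi, 0.0))        # -> True
print(np.isclose(np.sqrt(np.linalg.det(A)), 1/np.sqrt(6)))  # gamma = 1/sqrt(6) -> True
```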
The next three {\sf AMF$_q$-Rad} methods have coefficient matrices ($L_\nu$, $S_\nu$ and $T_\nu$) and parameter $\gamma$ of the form \begin{equation}\label{T-Rad} T_\nu=\gamma S_\nu (I_2-L_\nu)^{-1} S_\nu^{-1}, \; S_\nu=\left(\begin{array}{cc} 1 & s_\nu \\ 0&1\end{array}\right),\; L_\nu=\left(\begin{array}{cc} 0 & 0 \\ l_\nu &0\end{array}\right),\; \gamma=\sqrt{\det(A)}=1/\sqrt{6}. \end{equation} {\sf AMF$_1$-Rad} was derived in \cite{sevedom-AMFestab} by looking for good stability properties and order two (in the ODE sense). In particular the method is A($\pi/2$)-stable for a $2$-splitting (see in Definition \ref{def-estability} below the concept of stability for a $d$-splitting), A($0$)-stable for any $d$-splitting, and has stability wedges close to $\theta_d=\pi/(2(d-1))$ for $d=3,4$. The method is based on one iteration ($q=1$) and was required to fulfil $(A-T_1)c=0$; its coefficients are given by \begin{equation}\label{met1} s_1= -\frac{3+2 \sqrt{6}}{9}, \quad l_1=\frac{3}{4}(-12+5\sqrt{6}). \end{equation}
{\sf AMF$_2$-Rad} was derived in \cite{sevedom-AMFestab} by looking for good stability properties and order three (in the ODE sense). The method is A($\pi/2$)-stable for a $2$-splitting, A($0$)-stable for any $d$-splitting and A($\pi/6$)-stable for $d=3,4$. The method is based on two iterations ($q=2$), and its matrices $T_1$ and $T_2$ were required to satisfy $(A-T_1)c=0$ and $e_2^T T_2^{-1}(A-T_2)=0^T, \;e_2^T=(0,1)$, respectively. Its coefficients are uniquely given by \begin{equation}\label{met2} \begin{array}{c} \displaystyle{s_1= -\frac{3+2 \sqrt{6}}{9}, \quad l_1=\frac{3}{4}(-12+5\sqrt{6})}\\[0.3pc] \displaystyle{s_2= \frac{5-2\sqrt{6}}{9}, \quad l_2=\frac{3\sqrt{6}}{4}}.\end{array} \end{equation}
{\sf AMF$_3$-Rad} was derived in \cite{PGS09,apnum-sevsole10} by looking for good stability properties and order three (in the ODE sense). The method is A($\pi/2$)-stable for a $2$-splitting, A($0$)-stable for any $d$-splitting and close to A($\theta_d$)-stable for $d=3,4$ with $\theta_d=\pi/(2(d-1))$. The method is based on three iterations ($q=3$), and its matrices $T=T_1=T_2=T_3$ were required to satisfy $e_2^T T^{-1}(A-T)=0^T$. Its coefficients are uniquely given by \begin{equation}\label{met3} \begin{array}{c} \displaystyle{s_1=s_2=s_3= \frac{5-2\sqrt{6}}{9}, \quad l_1=l_2=l_3=\frac{3\sqrt{6}}{4}}.\end{array} \end{equation}
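The defining conditions of these methods can be verified numerically. The sketch below (NumPy; $\gamma$, $s_\nu$, $l_\nu$ as given above) builds $T_\nu$ from its triangular factors and checks $(A-T_1)c=0$ and $e_2^T T^{-1}(A-T)=0^T$:

```python
import numpy as np

A = np.array([[5/12, -1/12], [3/4, 1/4]])
c = np.array([1/3, 1.0])
e2 = np.array([0.0, 1.0])
g = 1 / np.sqrt(6)          # gamma = sqrt(det(A))
r6 = np.sqrt(6)

def T_matrix(s, l):
    """T = gamma * S (I - L)^{-1} S^{-1} with the triangular structure above."""
    S = np.array([[1.0, s], [0.0, 1.0]])
    L = np.array([[0.0, 0.0], [l, 0.0]])
    return g * S @ np.linalg.inv(np.eye(2) - L) @ np.linalg.inv(S)

# AMF_1-Rad (also the first matrix of AMF_2-Rad): (A - T_1) c = 0.
T1 = T_matrix(-(3 + 2 * r6) / 9, 0.75 * (-12 + 5 * r6))
print(np.allclose((A - T1) @ c, 0))  # -> True

# AMF_3-Rad (also the second matrix of AMF_2-Rad): e_2^T T^{-1} (A - T) = 0^T.
T3 = T_matrix((5 - 2 * r6) / 9, 3 * r6 / 4)
print(np.allclose(e2 @ np.linalg.inv(T3) @ (A - T3), 0))  # -> True
```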
In \cite{apnum-sevsole10}, a variable-stepsize integrator based on the {\sf AMF$_3$-Rad} method was successfully tested on several interesting $2D$ and $3D$ advection diffusion reaction PDEs, exhibiting good performance in comparison with state-of-the-art codes like {\sf VODPK} \cite{Brown-Byrne-Hindmarsh-SISC89,Brown}, {\sf RKC} \cite{RKC,VSH-JcompPhys2004} and its implicit-explicit counterpart {\sf IRKC} \cite{IRKC,IMEXRKC}. The other two methods, {\sf AMF$_q$-Rad} ($q=1,2$), were introduced later in \cite{sevedom-AMFestab}, after a careful analysis of the PDE errors on semilinear problems, with the purpose of reducing the number of iterations with respect to {\sf AMF$_3$-Rad}.
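The $\mathcal{O}(\tau^2)$ accuracy of the approximate factorization (\ref{product}) can be observed directly: halving $\tau$ reduces the defect $\Vert \Pi_d-(I-\gamma\tau J)\Vert$ by a factor of about four. A NumPy sketch with a random, purely illustrative $3$-splitting:

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, gamma = 6, 3, 1 / np.sqrt(6)

# A generic d-splitting J = J_1 + ... + J_d (random blocks, for illustration only).
Js = [rng.standard_normal((m, m)) for _ in range(d)]
J = sum(Js)
I = np.eye(m)

def amf_defect(tau):
    """Norm of Pi_d - (I - gamma*tau*J), the error of the AMF factorization."""
    Pi = I.copy()
    for Jj in Js:
        Pi = Pi @ (I - gamma * tau * Jj)
    return np.linalg.norm(Pi - (I - gamma * tau * J))

# Halving tau divides the defect by ~4: Pi_d = I - gamma*tau*J + O(tau^2).
ratio = amf_defect(1e-3) / amf_defect(5e-4)
print(round(ratio))  # -> 4
```

The leading defect term is $(\gamma\tau)^2\sum_{j<k}J_jJ_k$, which matches the first correction term in the expansion of the matrix $P$ in (\ref{matrixP}).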
\section{Convergence for semilinear problems} For our convergence analysis we consider {\sf AMF$_q$-RK} methods
applied to the ODE problems coming from the spatial discretization of semilinear PDE problems of type (\ref{pde}), where the advection and diffusion vectors $a(x,t)$ and $\bar{d}(x,t)$ are both constant and the reaction part has the form \begin{equation}\label{reaction} r(u,x,t)=\kappa\: u + g(x,t), \quad \kappa \; \mbox{\rm being a constant},\quad x\in\Omega \subseteq \mathbb{R}^l. \end{equation} In this way, the ODE systems have the form \begin{equation}\label{lin-system} \begin{array}{c} y_h'(t)=f_h(t,y_h):=J_h y_h(t)+g_h(t),\quad y_h(0)=u^*_{0,h},\quad h\rightarrow 0^+,\\ J_h=\sum_{j=1}^d J_{j,h}, \qquad t\in [0,t^*]. \end{array} \end{equation} Here, the exact solution of the PDE restricted to the spatial grid, $u_h(t)=u(x,t)$, is assumed to satisfy (\ref{spatialerrors}) and (\ref{norm}). Thus, we focus on the global errors of the MoL approach, where the spatial discretization is carried out first by using finite differences (or finite volumes) and the time discretization is then performed by using {\sf AMF$_q$-RK} methods. It is important to remark that we will not pursue the details of the spatial semidiscretizations; rather, it is assumed that the spatial semidiscretizations are stable and provide spatial discretization errors satisfying (\ref{norm}). We shall provide uniform bounds for the global errors of the MoL approach ($y_h(t)$ henceforth denotes the numerical solution of the MoL approach) in the sense \begin{equation}\label{global-errors} \epsilon_{n,h}:=u_h(t_n)-{y}_h(t_n)= \mathcal{O}(\tau^{p_1}) +\mathcal{O}(h^{\alpha}\tau^{p_2}) ,\quad h\rightarrow 0^+,\tau \rightarrow 0^+, \end{equation} meaning that there exist constants $C_1,\:C_2,\:p_1,\:p_2,\:\alpha$ (all of them independent of $h$ and $\tau$) such that, in the norm considered, $$\Vert \epsilon_{n,h}\Vert \le C_1 \tau^{p_1} + C_2 h^{\alpha} \tau^{p_2},\quad h \rightarrow 0^+,\tau \rightarrow 0^+ \quad \mbox{\rm holds.
}$$ In our convergence analysis we need that all the matrices $J_{j,h}$ pairwise commute and that they can be brought to the following decomposition (it has some resemblance to the Jordan decomposition, but is a little more general) \begin{equation}\label{jordan} \begin{array}{c} J_{j,h}= \Theta_{h} \Lambda_{j,h} \Theta_{h}^{-1}, \quad \mbox{\rm Cond} (\Theta_{h}):=\Vert \Theta_{h}\Vert\cdot \Vert \Theta_{h}^{-1}\Vert \le C, \;h\rightarrow 0^+, \;1\leq j\leq d,\\ \Lambda_{j,h}=\mbox{\rm BlockDiag}(\Lambda^{(1)}_{j,h},\Lambda^{(2)}_{j,h},\ldots,\Lambda^{(\vartheta_h)}_{j,h}), \quad \Lambda^{(l)}_{j,h}=\lambda^{(l)}_{j,h} I + E^{(l)}_{h} , \quad \mbox{\rm Re } \lambda^{(l)}_{j,h} \le 0,\\
\mbox{\rm dim}(E^{(l)}_{h})\le N, \quad \Vert E^{(l)}_{h} \Vert_\infty \le C',
\; l=1,2,\ldots,\vartheta_h \quad (h\rightarrow 0^+).\\ E^{(l)}_{h} \quad \mbox{\rm are all of them strictly lower triangular matrices. } \end{array} \end{equation}
Another important approach to the convergence analysis of the MoL method (mainly concerned with the time integration) is based on the pseudospectra of the matrix $J_{h}$ \cite{Trefethen92} and of the related matrices $J_{j,h}$. That analysis is of more general scope, but it is much more difficult to carry out; as we will see below, our analysis suffices for some interesting kinds of semilinear problems, and it is expected that the results extend to most non-linear problems of parabolic dominant type.
Next, we consider a standard 3D semilinear PDE problem for which the assumptions in (\ref{jordan}) are fulfilled.
\subsection{An example} Consider the semilinear PDE problem (\ref{pde}) with $x\in \Omega=(0,1)^3$, with constant vectors $a(x,t)=(a_j)_{j=1}^3, \; \bar{d}(x,t)=(\bar{d}_j)_{j=1}^3, \;\bar{d}_j>0 \;(j=1,2,3)$, and with $r(x,u,t)$ as in (\ref{reaction}). Consider the spatial semidiscretization by second order central differences with spatial resolution $h=1/(N+1)$. This yields a semilinear ODE system of dimension $m=N^3$ of the form (\ref{lin-system}) for $d=3$. The matrices $J_{j,h}$ are given by \begin{equation}\label{J-example} \begin{array}{c} J_{1,h}=I_N\otimes I_N\otimes \mathcal{T}_1,\quad J_{2,h}=I_N\otimes \mathcal{T}_2\otimes I_N,\quad J_{3,h}= \mathcal{T}_3\otimes I_N \otimes I_N\\ \mathcal{T}_l =\mbox{\rm Tridiag}(\alpha_l,\delta_l,\beta_l)\in \mathbb{R}^{N,N},\quad l=1,2,3,\\ \alpha_l=h^{-2}( \bar{d}_l- 2^{-1}h \: a_l ), \quad \beta_l=h^{-2}( \bar{d}_l+ 2^{-1}h \:a_l ), \quad \delta_l=h^{-2}( -2\bar{d}_l+h^2 \kappa), \end{array} \end{equation} and the vector $g_h(t)$ includes the reaction part $g(x,t)$ plus the boundary conditions. It is straightforward to see that the $J_{l,h}$ pairwise commute. Moreover, assuming that the Cell-P\'{e}clet numbers satisfy \cite[p. 67, formula (3.42)]{HV}
$$ h|a_l|/\bar{d}_l< 2 ,\quad l=1,2,3,$$ it follows from \cite[Section 2]{Pasquini13} that their spectral decomposition has the form \begin{equation}\label{Tj} \begin{array}{c}
\mathcal{T}_l =\mbox{\rm Tridiag}(\alpha_l,\delta_l,\beta_l) =V_l \Lambda_lV_l^{-1},\quad V_l= D_l U, \quad l=1,2,3,\\[0.2pc] \Lambda_l=\mbox{\rm Diag}\displaystyle{\left( \lambda_{l,k}\right)_{k=1}^N,\quad \lambda_{l,k}= \delta_l + 2\sqrt{\alpha_l\beta_l} \cos{\frac{k\pi}{N+1}},} \\[0.2pc] U=(\frac{2}{N+1})^{1/2}\displaystyle{\left(\sin{\frac{kj\pi}{N+1}}\right)_{k=1,N \atop j=1,N}} \; \mbox{\rm is an orthogonal matrix and } \\[0.2pc] D_l= (\frac{N+1}{2})^{1/2}\mbox{\rm Diag}\displaystyle{\left((\alpha_l/\beta_l)^{k/2}\right)_{k=1}^N.} \end{array} \end{equation} From here we conclude that all the matrices can be brought to the spectral decomposition in (\ref{jordan}) having negative eigenvalues and with matrix $\Theta_{h}=V_3\otimes V_2\otimes V_1. $ Observe that $$\begin{array}{rcl} \Vert \Theta_{h}\Vert_2\Vert \Theta_{h}^{-1}\Vert_2&=& \prod_{l=1}^3 \Vert V_l \Vert_2 \Vert V_l^{-1} \Vert_2= \prod_{l=1}^3 \Vert D_l \Vert_2 \Vert D_l^{-1} \Vert_2\\ &=& \displaystyle{ \prod_{l=1}^3 \left(\frac{2\bar{d}_l
+h|a_l|}{2\bar{d}_l -h|a_l|}\right)^{N/2}\le \prod_{l=1}^3
\left(\frac{2\bar{d}_l +h|a_l|}{2\bar{d}_l -h|a_l|}\right)^{1/(2h)}}
\\ & \simeq & \displaystyle{\exp\left(\sum_{l=1}^3 \frac{|a_l|}{2\bar{d}_l}\right)} \quad \mbox{\rm as } h\rightarrow 0. \end{array} $$
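The closed-form spectrum in (\ref{Tj}) is easy to confirm numerically. The following NumPy sketch, with illustrative coefficients satisfying the Cell-P\'{e}clet restriction, compares the eigenvalues of $\mathcal{T}_l$ with the formula and checks that they are all negative:

```python
import numpy as np

# Illustrative (assumed) coefficients with h|a|/d < 2.
N = 20
h = 1.0 / (N + 1)
d_l, a_l, kappa = 1.0, 0.5, -1.0

alpha = (d_l - 0.5 * h * a_l) / h**2
beta  = (d_l + 0.5 * h * a_l) / h**2
delta = (-2.0 * d_l + h**2 * kappa) / h**2

T = (np.diag([delta] * N)
     + np.diag([alpha] * (N - 1), -1)
     + np.diag([beta] * (N - 1), 1))

# Closed-form spectrum of Tridiag(alpha, delta, beta):
k = np.arange(1, N + 1)
lam = delta + 2.0 * np.sqrt(alpha * beta) * np.cos(k * np.pi / (N + 1))

print(np.allclose(np.sort(np.linalg.eigvals(T).real), np.sort(lam)))  # -> True
print(bool(np.all(lam < 0)))  # -> True: the spectrum lies in the left half-plane
```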
\subsection{Analysis of the Truncation Errors}
The {\sf AMF$_q$-RK} method applied to problem (\ref{ode}) can be expressed in the simple one-step format $y_{n+1}=\phi_f(t_n,y_n,\tau),\;n\geq 0$. Thus, the time-space global errors $\epsilon_{n}=u_h(t_n)-y_n$ satisfy $$ \begin{array}{lll}\epsilon_{n+1}&:=& u_h(t_{n+1})-\phi_f(t_n,y_n,\tau)\\ &=& (u_h(t_{n+1})-\phi_f(t_n,u_h(t_n),\tau)) + (\phi_f(t_n,u_h(t_n),\tau)-\phi_f(t_n,y_n,\tau))\\ &=&l(t_n,\tau,h)+[\partial \phi_f/\partial y]_n (u_h(t_n)-y_n), \end{array} $$ where
$$ [\partial \phi_f/\partial y]_n=\int_0^1 \frac{\partial \phi_f}{\partial y}(t_n,u_h(t_n)+(\theta-1)\epsilon_{n},\tau)d\theta, $$
and the {\it time-space local errors} are defined by \begin{equation}\label{local-errors} l_n\equiv l(t_n,\tau,h) :=u_h(t_{n+1})-\phi_f(t_n,u_h(t_n),\tau). \end{equation} Then, we have for the {\it time-space global errors} $\epsilon_{n}$ the recurrence \begin{equation}\label{global-error} \epsilon_{n+1}= [\partial \phi_f/\partial y]_n \cdot \epsilon_{n}+ l_n,\quad n=0,1,2,\ldots,t^*/\tau-1. \end{equation} In order to get a better understanding of the latter recurrence, we next introduce the following matrix operators ($P$ is defined in (\ref{matrixP})) \begin{equation}\label{Mnu-Qnu} Q_\nu=(I\otimes I- T_\nu\otimes \tau P)^{-1}, \quad M_\nu=Q_\nu (A\otimes \tau J - T_\nu \otimes \tau P),\quad \nu\geq 1, \;Q_0=I. \end{equation}
\begin{lemma}\label{GE-recursion} The time-space global errors provided by the {\sf AMF$_q$-RK} method
when applied to the problem (\ref{lin-system}) satisfy the recurrence \begin{equation}\label{global-errors1} \epsilon_{n+1}= R_q(\tau J,\tau P)\cdot \epsilon_{n}+ l_n,\quad n=0,1,2,\ldots,t^*/\tau-1, \end{equation} where $l_n$ stands for the time-space local error defined in (\ref{local-errors}) and \begin{equation}\label{estab}\begin{array}{l} R_q(\tau J,\tau P)= \varpi I + \displaystyle{(\ss^T\otimes I)\left( Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1}\right) (e\otimes I)}, \end{array} \end{equation} with $Q_\nu, M_\nu$ ($\nu\geq 1$) given by (\ref{Mnu-Qnu}). Moreover, the function $R_q(\tau J,\tau P)$ fulfils \begin{equation}\label{estab-1} R_q(\tau J,\tau P)-I= (\ss^T\otimes I)\left(Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1}-\prod_{i=q}^{1} M_i\right) (c\otimes \tau J). \end{equation} \end{lemma}
\begin{remark} {\rm It must be observed that commutativity does not hold in general, thus $\prod_{j=q}^1 M_j\equiv M_q M_{q-1}\cdots M_1.$
On the other hand, $R_q(\cdot)$ can be seen as the linear stability function of the method. The identity (\ref{estab-1}) for the function $R_q(\cdot)-I$ will play a major role in the favourable propagation of the local errors, in a way similar to Lemma 2.3 of \cite[p.162]{HV}.} \end{remark}
\noindent {\bf Proof of Lemma \ref{GE-recursion}.} Our first step is to analyze the operator $[\partial \phi_f/\partial y]_n$ for the semilinear problem (\ref{lin-system}). Since the method is defined by (\ref{AMF-RK1}), we are led to compute $\displaystyle{\frac{\partial y_{n+1}}{\partial y_n}}$ with $ y_{n+1}= \varpi y_n + (\ss^T\otimes I) Y_n^q$. To this end, by taking derivatives with respect to $y_n$ in the iteration (\ref{AMF-RK1}), it holds that $$ \begin{array}{lll} (I\otimes I- T_\nu\otimes \tau P)\left(\dfrac{\partial Y_n^\nu}{\partial y_n}-\dfrac{\partial Y_n^{\nu-1}}{\partial y_n}\right)&=& \dfrac{\partial D(t_n,\tau,y_n,Y_n^{\nu-1})}{\partial y_n}\\ &=& e\otimes I +(-I\otimes I + A\otimes \tau J) \dfrac{\partial Y_n^{\nu-1}}{\partial y_n}. \end{array} $$ From here, after some simple manipulations, it follows that \begin{equation}\label{deriv}\begin{array}{l} \displaystyle{\frac{\partial Y_n^\nu}{\partial y_n}}= \displaystyle{Q_\nu (e\otimes I) + M_\nu \frac{\partial Y_n^{\nu-1}}{\partial y_n}}, \quad (\nu=1,2,\ldots,q), \quad \displaystyle{\frac{\partial Y_n^0}{\partial y_n}=e\otimes I.} \end{array} \end{equation} By an inductive argument, it is not difficult to see that \begin{equation}\label{sol1}\begin{array}{l} \displaystyle{\frac{\partial Y_n^q}{\partial y_n}}= \displaystyle{\left( Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1}\right) (e\otimes I).} \end{array} \end{equation} Then, by denoting $R_q(\tau J,\tau P):=\displaystyle{\frac{\partial y_{n+1}}{\partial y_n}}$ it follows that $$R_q(\tau J,\tau P)= \varpi I + (\ss^T\otimes I)\displaystyle{\frac{\partial Y_n^q}{\partial y_n}},$$ and we deduce both (\ref{estab}) and (\ref{global-errors1}) from (\ref{global-error}) and (\ref{sol1}).
In order to prove (\ref{estab-1}), we first take into account that $R_q(\cdot)-I= (\ss^T\otimes I)Z_n^q$, where
$Z_n^\nu= \partial Y_n^\nu/\partial y_n-e\otimes I$. Then, from the recurrence (\ref{deriv}), it follows after some simple calculations that $Z_n^\nu = M_\nu Z_n^{\nu-1} + Q_\nu (c\otimes \tau J)$, $(\nu=1,2,\ldots,q)$, with $Z_n^0=0$. From here, we deduce $Z_n^q=\displaystyle{\left( Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1}-\prod_{i=q}^{1} M_i\right) (c\otimes \tau J)},$ and this directly gives (\ref{estab-1}).
$\Box$
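As a numerical sanity check (outside the analysis itself), the following Python sketch verifies the recursion (\ref{deriv}) against its closed form (\ref{sol1}). All matrices here ($A$, $T_\nu$, $J$, $P$, and the step $\tau$) are small random stand-ins for the method and problem data, not actual AMF-RK coefficients.

```python
import numpy as np

# Hypothetical small data standing in for the method/problem matrices:
# s stages, m spatial points; A, T_nu, J, P, tau are placeholders.
rng = np.random.default_rng(0)
s, m, q, tau = 2, 3, 3, 0.1
A = rng.standard_normal((s, s))
Ts = [rng.standard_normal((s, s)) for _ in range(q)]   # T_1, ..., T_q
J = rng.standard_normal((m, m))
P = J + 0.05 * rng.standard_normal((m, m))
I_s, I_m = np.eye(s), np.eye(m)
e = np.ones((s, 1))

def Qnu(T):  # Q_nu = (I (x) I - T_nu (x) tau P)^{-1}
    return np.linalg.inv(np.kron(I_s, I_m) - np.kron(T, tau * P))

def Mnu(T):  # M_nu = Q_nu (A (x) tau J - T_nu (x) tau P)
    return Qnu(T) @ (np.kron(A, tau * J) - np.kron(T, tau * P))

# Recursion (deriv): S_nu = Q_nu (e (x) I) + M_nu S_{nu-1}, S_0 = e (x) I.
S = np.kron(e, I_m)
for T in Ts:                         # nu = 1, ..., q
    S = Qnu(T) @ np.kron(e, I_m) + Mnu(T) @ S

# Closed form (sol1): (Q_q + sum_{j=q}^{1} (prod_{i=q}^{j} M_i) Q_{j-1})(e (x) I),
# with Q_0 = I and ordered products M_q M_{q-1} ... M_j.
closed = Qnu(Ts[-1]).copy()
prod = np.eye(s * m)
for j in range(q, 0, -1):            # j = q, ..., 1
    prod = prod @ Mnu(Ts[j - 1])     # accumulates M_q ... M_j
    Qjm1 = np.eye(s * m) if j == 1 else Qnu(Ts[j - 2])
    closed += prod @ Qjm1
closed = closed @ np.kron(e, I_m)

print(np.allclose(S, closed))
```

Since (\ref{sol1}) is an exact algebraic consequence of (\ref{deriv}), the two computations agree up to rounding for any choice of data for which the inverses exist.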
\begin{remark}\label{sev-remark-0} {\rm For a given rational function of two complex variables \begin{equation}\label{sev-res0} \displaystyle{\zeta(z,w)=\frac{\sum_{i,j=0}^{m_1} \alpha_{ij} z^iw^j}{\sum_{i,j=0}^{m_2} \beta_{ij}z^iw^j}\equiv \left( {\sum_{i,j=0}^{m_1} \alpha_{ij} z^iw^j}\right) \left({\sum_{i,j=0}^{m_2} \beta_{ij}z^iw^j}\right)^{-1}},\end{equation} we define the associated mapping $\zeta(Z,W)$ for two arbitrary commuting matrices $Z$ and $W$ just by replacing $z$ by $Z$ and $w$ by $W$ whenever the denominator yields a regular matrix. Sometimes we are given the rational mapping $\zeta(Z,W)$ first and then we define the rational complex function just by replacing the matrices $Z$ and $W$ by the complex variables $z$ and $w$, respectively. The above definitions are straightforwardly extended to functions and mappings of more than two complex variables.
We will be mainly concerned with the case in which $z=\tau J$ and $w= \tau P$, where $J$ and $P$ are defined in (\ref{lin-system}) and (\ref{matrixP}), respectively. It should be noticed that for instance the $(i,j)$-element of the matrix $M_\nu$, see (\ref{Mnu-Qnu}), would be given by (observe that it is a matrix itself) $$ M_{ij}(\tau J, \tau P)= (e_i^T\otimes I) (I_s\otimes I_m- T_\nu\otimes \tau P)^{-1} (A\otimes \tau J - T_\nu \otimes \tau P)(e_j\otimes I),$$ where $e_j$ denotes the $j$-vector of the canonical basis in $\mathbb{R}^s$ and the corresponding
complex function is $$M_{ij}(z,w)=e_i^T(I_s-wT_\nu)^{-1}(zA-wT_\nu)e_j.$$
Another important point is that, although we consider cases with a $d$-splitting for $J$ as indicated in (\ref{lin-system}), replacing every $\tau J_j$ by the complex variable $z_j$ and defining \begin{equation}\label{z-w} z:=\sum_{k=1}^d z_k, \qquad w:= \gamma^{-1}\left(1-\prod_{k=1}^d (1-\gamma z_k)\right), \end{equation} reduces the study to the case of two complex variables $z$ and $w$, or equivalently to the case of mappings acting on the two matrices $\tau J$ and $\tau P$.
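The reduction behind (\ref{z-w}) can be illustrated numerically. The sketch below (with hypothetical values for $\gamma$, $d$ and the $z_k$) takes a trivially commuting $d$-splitting $\tau J_k = z_k I$, forms $\tau P = \gamma^{-1}\bigl(I-\prod_k(I-\gamma\,\tau J_k)\bigr)$, and checks that its eigenvalue is exactly the $w$ of (\ref{z-w}); it also checks the bound $|1/(1-\gamma w)|\le 1$ used below.

```python
import numpy as np
from functools import reduce

# Hypothetical data: gamma > 0 and z_k with Re z_k <= 0.
gamma, d = 0.25, 3
rng = np.random.default_rng(1)
zs = -rng.random(d) + 1j * rng.standard_normal(d)   # Re z_k <= 0
z = zs.sum()
w = (1 - np.prod(1 - gamma * zs)) / gamma           # scalar w of (z-w)

I = np.eye(2)
tauJ = sum(zk * I for zk in zs)                     # tau*J = sum_k tau*J_k
tauP = (I - reduce(lambda M, zk: M @ (I - gamma * zk * I), zs, I)) / gamma

print(np.allclose(np.diag(tauJ), z), np.allclose(np.diag(tauP), w))
print(abs(1 / (1 - gamma * w)) <= 1)                # holds since |1-gamma*z_k| >= 1
```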
It is worth mentioning that our rational mappings and related complex functions are all well defined whenever Re $z_k \le 0$ for $k=1,2,\ldots,d$ and $d$ arbitrary, because the matrix inverse $(I-wT_\nu)^{-1}$ exists if and only if $\displaystyle{(1-\gamma w)^{-1}=\prod_{k=1}^d (1-\gamma z_k)^{-1}}$ exists. The existence of the latter expression is easily seen by virtue of $\gamma>0$ and the fact that all the eigenvalues of the matrices $J_j,\;(j=1,\ldots,d)$ have a non-positive real part. Moreover, for any $\nu=1,\ldots,q$ and any $d\ge 1$, we next prove that \begin{equation}\label{sev-res0a}\begin{array}{c} \displaystyle{\sup_{\mbox{\scriptsize Re}\:z_k\:\le 0,
\;k=1,\ldots,d} |Q_\nu(z,w)|< + \infty, \quad \sup_{\mbox{\scriptsize Re}\:z_k\:\le 0, \;k=1,\ldots,d}
|M_\nu(z,w)|< + \infty,}\\ z \;\mbox{\rm and } w\; \mbox{\rm defined in (\ref{z-w}).} \end{array} \end{equation} To see this, observe that $(T_\nu-\gamma I)$ is a nilpotent matrix fulfilling $(T_\nu-\gamma I)^s=0$ and that $$ \begin{array}{rcl} Q_\nu(z,w)&=& (I-w T_\nu)^{-1}=\left((1-w\gamma) I-w (T_\nu-\gamma I)\right)^{-1}\\ &=& (1-w\gamma)^{-1}\left( I-\frac{w}{1-w \gamma} (T_\nu-\gamma I)\right)^{-1}=(1-w\gamma)^{-1}\sum_{j=0}^{s-1}\left(\frac{w}{1-w\gamma}\right)^j (T_\nu-\gamma I)^{j} \end{array} $$ and $$ \begin{array}{rcl} M_\nu(z,w)&=& Q_\nu(z,w)(z A-w T_\nu)= \frac{z}{1-w\gamma}\left(\sum_{j=0}^{s-1} (\frac{w}{1-w\gamma})^j (T_\nu-\gamma I)^{j}\right)A \\[0.2pc] &-& \frac{w}{1-w\gamma}\left(\sum_{j=0}^{s-1}(\frac{w}{1-w\gamma})^j (T_\nu-\gamma I)^{j}\right)T_\nu. \end{array} $$ Hence the boundedness of $Q_\nu(z,w)$ and $M_\nu(z,w)$ follows from the boundedness of
$$ \begin{array}{rcl} |\frac{1}{1-w\gamma}|&=&|\prod_{k=1}^d (1-\gamma z_k)^{-1}| \le 1, \\
|\frac{w}{1-w\gamma}|&=&\gamma^{-1}|1-\frac{1}{1-w\gamma}| \le \gamma^{-1}(1+1)=2\gamma^{-1}, \end{array} $$ and from the next lemma.
$\Box$ } \end{remark} \begin{lemma}\label{sev-lema-0} For any $d=2,3,\ldots$, and $z$ and $w$ defined in (\ref{z-w}), we have that $$\begin{array}{c}
\displaystyle{\sup_{\mbox{\scriptsize Re}\:z_k\:\le 0 \atop k=1,\ldots,d} \left|\frac{z}{1-\gamma w}\right|=\gamma^{-1}\left( \frac{(d-1)^{d-1}}{d^{d-2}}\right)^{1/2}.} \end{array} $$ \end{lemma}
\noindent {\bf Proof.} The third equality below follows from the Maximum Modulus principle, which states that, for complex analytic functions, the maximum modulus is attained on the boundary of the open region, $$\begin{array}{rcl}
\displaystyle{\sup_{\mbox{\scriptsize Re}\:z_k\:\le 0 \atop k=1,\ldots,d} \left|\frac{z}{1-\gamma w}\right|}&=& \gamma^{-1}\displaystyle{\sup_{\mbox{\scriptsize Re}\:z_k\:\le 0
\atop k=1,\ldots,d} \left|\frac{\gamma z}{1-\gamma w}\right|}= \gamma^{-1}\displaystyle{\sup_{\mbox{\scriptsize Re}\:u_k\:\le 0 \atop k=1,\ldots,d}
\left|\frac{u_1+u_2+\ldots+u_d}{\prod_{k=1}^d(1-u_k)}\right|}\\[0.5pc] &=&
\gamma^{-1}\displaystyle{\sup_{y_k\:\in\:\mathbb{R} \atop k=1,\ldots,d}\left|\frac{(y_1+y_2+\ldots+y_d)i}{\prod_{k=1}^d\sqrt{1+(y_k)^2}}\right|}= \gamma^{-1}\displaystyle{\max_{x_k\:\ge 0 \atop k=1,\ldots,d}\left(\frac{(x_1+x_2+\ldots+x_d)^2}{\prod_{k=1}^d(1+(x_k)^2)}\right)^{1/2}.} \end{array} $$ Computing the extrema by setting to zero the gradient of the real function of the variables $(x_1,\ldots,x_d)$ gives the maximum at $x_1=x_2=\ldots=x_d=(d-1)^{-1/2}.$ The proof follows after substituting this value above.
$\Box$
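The value of the supremum in the lemma can be checked numerically. The sketch below evaluates the reduced real maximization from the proof at the claimed maximizer $x_k=(d-1)^{-1/2}$ and compares it with the closed form, and also confirms by crude random search that the closed form is never exceeded; the sample sizes and seeds are arbitrary.

```python
import numpy as np

# Check: sup reduces to maximizing ((sum x_k)^2 / prod(1 + x_k^2))^{1/2}
# over x_k >= 0, with maximum ((d-1)^{d-1} / d^{d-2})^{1/2}.
for d in (2, 3, 4):
    closed = ((d - 1) ** (d - 1) / d ** (d - 2)) ** 0.5
    x = (d - 1) ** -0.5                       # claimed maximizer
    val = ((d * x) ** 2 / (1 + x ** 2) ** d) ** 0.5
    assert abs(val - closed) < 1e-9           # closed form attained there
    xs = np.random.default_rng(d).random((20000, d)) * 10.0
    vals = (xs.sum(axis=1) ** 2 / np.prod(1 + xs ** 2, axis=1)) ** 0.5
    assert vals.max() <= closed + 1e-9        # never exceeded elsewhere
    print(d, round(closed, 6))
```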
\begin{definition}\label{def-estability} {\rm A method of the form (\ref{AMF-RK1}) is said to be $A(\theta)$-stable for a $d$-splitting, if and only if $$
|R_q(z,w)| \le 1,\quad \forall z,w \; \mbox{\rm given by (\ref{z-w}) whenever }
z_k \in \mathcal{W}(\theta),\;k=1,2,\ldots,d, $$ where (we consider that the argument of a nonzero complex number ranges in $[-\pi,\pi)$) \begin{equation}\label{ss1-1}
\mathcal{W}(\theta):= \{ u \in \mathbb{C}: u=0 \; \mbox{or}\;
|\mbox{arg}(-u)| \le \theta \}.\end{equation}} \end{definition}
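For concreteness, the wedge membership test of (\ref{ss1-1}) can be sketched as follows; the helper `in_wedge` is a hypothetical illustration, not part of any method implementation.

```python
import cmath
from math import pi

def in_wedge(u, theta):
    """Membership in W(theta): u = 0 or |arg(-u)| <= theta."""
    return u == 0 or abs(cmath.phase(-u)) <= theta

print(in_wedge(0, 0.0))             # the origin always belongs
print(in_wedge(-1.0, 0.0))          # negative real axis: arg(-u) = 0
print(in_wedge(-1 + 1j, pi / 4))    # arg(1 - 1j) = -pi/4, on the boundary
print(in_wedge(1.0, pi / 3))        # right half-plane excluded for theta < pi/2
```

(Note that `cmath.phase` returns values in $(-\pi,\pi]$, which differs from the convention $[-\pi,\pi)$ only on the negative real axis of $-u$, i.e. for $u>0$, which lies outside the wedge for $\theta<\pi$ anyway.)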
\subsection{Analysis of the Local Errors}
Next, we study the {\it time-space local errors} $l_n$ given by (\ref{local-errors}). We will see that the time-space local error $l_n$ is composed of two terms, $l_n^{[2]}$ related to the predictor used in the {\sf AMF$_q$-RK} method and $l_n^{[1]}$ related to the quadrature associated with the underlying Runge-Kutta method.
\begin{lemma}\label{lem-loc-err} Assume that the exact solution of the linear system has continuous derivatives $u_h^{(k)}(t)$ up to order $p+1$ in $[0,t^*]$ and that the underlying RK method has stage order $\ell\ge 1$ ($\ell \le p$), i.e. $$Ac^{j-1}=j^{-1}c^j,\qquad b^Tc^{j-1}=j^{-1},\quad j=1,2,\ldots,\ell.$$ Then the local error $l_n$ in (\ref{local-errors}) of the {\sf AMF$_q$-RK} method is given by \begin{equation}\label{local-errors1} \begin{array}{lll} l_n&=&l_n^{[1]}+ l_n^{[2]},\\ l_n^{[1]}&:=&(\ss^T\otimes I)\left( Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1}-\prod_{i=q}^{1} M_i \right) \hat{D}_n + \delta_n,\\ l_n^{[2]}&:=&(\ss^T\otimes I) (\prod_{i=q}^{1} M_i)\: \triangle u_h(t_n), \end{array} \end{equation} with \begin{equation}\label{triangle} \begin{array}{rcl}\triangle u_h(t_n)&:=&(u_h(t_n+c_i\tau)-u_h(t_n))_{i=1}^s=\sum_{j=1}^p\frac{\tau^j}{j!}(c^j\otimes I) u^{(j)}_h(t_n)\\[0.3pc] & + & \displaystyle{\frac{\tau^{p+1}}{p!}\left(\int_0^1 (c_i-\theta)_+^p
u^{(p+1)}_h(t_n+\theta \tau) d\theta\right)_{i=1}^s,} \end{array} \end{equation} and (we use $(x)_+:=x$ if $x\ge 0$ and $(x)_+:=0$ otherwise) \begin{equation}\label{resid-2} \begin{array}{rcl} \hat{D}_n &=& \displaystyle{\sum_{j=\ell+1}^p \frac{\tau^j}{j!} \left((c^j-jAc^{j-1})\otimes u^{(j)}_h(t_n) \right)+ \tau^{p+1} \int_0^1 \left(\varphi(\theta) \otimes u^{(p+1)}_h(t_n+ \theta \tau)\right) d\theta + }\\[0.5pc] & & \tau (A\otimes I) \left(\sigma_h(t_n+c_i\tau)\right)_{i=1}^s;\quad \varphi(\theta)=\displaystyle{\frac{1}{p!}\left((c_i-\theta)_+^p-p\sum_{j=1}^s a_{ij}(c_j-\theta)_+^{p-1}\right)_{i=1}^s} \\[0.8pc] \delta_n &=& \displaystyle{\sum_{j=\ell+1}^p \frac{\tau^j}{j!} (1-\ss^T c^j) u^{(j)}_h(t_n)+ \tau^{p+1} \int_0^1\phi(\theta)\: u^{(p+1)}_h(t_n+ \theta \tau) d\theta},\\[0.3pc] \phi(\theta)&=&\displaystyle{\frac{1}{p!}\left((1-\theta)^p-\sum_{j=1}^s \ss_{j}(c_j-\theta)_+^{p}\right)}. \end{array} \end{equation} \end{lemma}
\noindent {\bf Proof.} Let us define \begin{equation}\label{sev-res1} \hat{D}_n:= (u_h(t_n+c_i\tau))_{i=1}^s-e\otimes u_h(t_n) - \tau (A\otimes I) \left(f_h(t_n+c_i \tau,u_h(t_n+c_i\tau))\right)_{i=1}^s.\end{equation} From (\ref{spatialerrors}), it follows that \begin{equation}\label{sev-res1a}\hat{D}_n= (u_h(t_n+c_i\tau))_{i=1}^s-(u_h(t_n))_{i=1}^s - \tau (A\otimes I) (u'_h(t_n+c_i\tau)-\sigma_h(t_n+c_i\tau))_{i=1}^s.\end{equation} Now, by using the Taylor expansion with integral remainder (below $\zeta(x)$ denotes a generic function having $r+1$ continuous derivatives in an adequate interval) \begin{equation}\label{sev-res2} \displaystyle{\zeta(t_n+x)=\sum_{l=0}^r \frac{x^l}{l!} \zeta^{(l)}(t_n) + \frac{x^{r+1}}{r!}\int_0^1(1-\theta)^r \zeta^{(r+1)}(t_n+\theta x) d\theta}, \end{equation} and applying it conveniently to $u_h(t_n+c_i\tau)$ and $u'_h(t_n+c_i\tau)$ in (\ref{sev-res1a}) with $r=p$ and $r=p-1$ respectively, we deduce, after some computations, the expression for $\hat{D}_n$ in (\ref{resid-2}). Observe that stage order $\ell$ for the Runge-Kutta method implies that $c^j-jAc^{j-1}=0, \;\ss^Tc^j-1=0,\;j=1,\ldots,\ell$. The expression for $\delta_n$ is obtained in a similar way, but this time taking into account that we define \begin{equation}\label{sev-res3} \delta_n:= u_h(t_n+\tau)- \varpi u_h(t_n) - \sum_{j=1}^s \ss_j u_h(t_n+c_j\tau).\end{equation} Let us now take $\hat{U}_n:=(u_h(t_n+c_i\tau))_{i=1}^s$ and $\Delta_n^\nu:=\hat{U}_n- U_n^\nu,$ where $ U_n^\nu$ are the iterates obtained by the scheme (\ref{AMF-RK1}) when the predictor $U_n^0=e\otimes u_h(t_n)$ is taken on the exact solution of the PDE at $t_n$, i.e. $y_n=u_h(t_n)$. This gives as solution (see (\ref{AMF-RK1})) \begin{equation}\label{sev-res4} y_{n+1}= \varpi u_h(t_n) + (\ss^T\otimes I)U_n^q. \end{equation} From (\ref{sev-res3}) and (\ref{sev-res4}) it follows \begin{equation}\label{sev-res5} l_n=u_h(t_{n+1})-y_{n+1}= (\ss^T\otimes I)\Delta_n^q + \delta_n. 
\end{equation} In order to compute $\Delta_n^q$ we insert the expression for $U_n^\nu$ in (\ref{AMF-RK1}). It follows for the semi-linear problem (\ref{lin-system}) that $$ \begin{array}{lll} (I\otimes I- T_\nu\otimes \tau P)(\Delta_n^\nu-\Delta_n^{\nu-1})&=& -D(t_n,\tau,u_h(t_n),U_n^{\nu-1})\\&=&-(I\otimes I- A\otimes \tau J) \Delta_n^{\nu-1}+ \hat{D}_n,\end{array} \quad (\nu=1,2,\dots,q). $$ This implies that $ \Delta_n^\nu = M_\nu \Delta_n^{\nu-1}+ Q_\nu \hat{D}_n$, $1\leq \nu\leq q$, and from this recurrence $$ \Delta_n^q=\displaystyle{\left( Q_q + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i)Q_{j-1} - \prod_{i=q}^{1} M_i\right) \hat{D}_n+ (\prod_{i=q}^{1} M_i)\: \Delta_n^0},$$ with $\Delta_n^0=\triangle u_h(t_n)$ in (\ref{triangle}). Now, from this expression and from (\ref{sev-res5}) the formula (\ref{local-errors1}) follows.
$\Box$
\begin{theorem}\label{th3-0} Consider a family of matrices $\{J_{k,h}\}_{k=1}^d$ and $P_h$, $h\rightarrow 0^+$, as given in (\ref{lin-system}) and (\ref{matrixP}), respectively. Assume that (\ref{jordan}) holds and that \begin{equation}\label{spect-1} \bigcup_{k=1}^d \mbox{\rm Spect}(J_{k,h}) \subseteq \mathcal{W}(\theta),\qquad (h\rightarrow 0^+)\end{equation} is fulfilled for some $\theta\in[0,\pi/2]$. Let $L(z,w)$ be a complex rational function satisfying
$$\sup_{z_k \in \mathcal{W}(\theta), \;k=1,2,\ldots,d} |L(z,w)| \le 1, \quad \mbox{\rm $z$ and $w$ given by (\ref{z-w}).} $$
Then, we have that $$\begin{array}{c}
\Vert L(\tau J, \tau P)^n \Vert \le C^*, \quad 0\leq n\tau\leq t^*, \qquad (\tau,h\rightarrow 0^+).\end{array}$$ \end{theorem} {\bf Proof.} For simplicity of notation, we omit the subindex $h$ in the matrices. By virtue of (\ref{lin-system}), (\ref{jordan}) and (\ref{sev-res0}) it follows that
$$ \Vert \left(L(\tau J, \tau P)\right)^n \Vert= \Vert \Theta \cdot \left(L(\tau \Lambda, \tau \Upsilon)\right)^n \cdot \Theta^{-1} \Vert \le C \Vert \left(L (\tau \Lambda, \tau \Upsilon )\right)^n\Vert, \quad n\geq 1, $$
where
$$ \begin{array}{l} \tau \Lambda:=\sum_{k=1}^d \tau \Lambda_k, \quad \Lambda_k=\mbox{\rm Block-Diag}(\Lambda^{(1)}_k,\Lambda^{(2)}_k,\ldots, \Lambda^{(\vartheta)}_k), \\ \tau \Upsilon := \gamma^{-1}\left(I-\prod_{k=1}^d (I-\gamma \tau \Lambda_k)\right). \end{array} $$
By defining $\tau \Lambda^{(l)}:= \sum_{k=1}^d \tau \Lambda^{(l)}_k$ and $\tau \Upsilon^{(l)}:= \gamma^{-1}\left(I-\prod_{k=1}^d (I-\gamma \tau \Lambda^{(l)}_k)\right)$, for the norm considered it follows that
$$ \Vert \left(L(\tau \Lambda, \tau \Upsilon)\right)^n \Vert = \max_{ l=1,\ldots,\vartheta} \Vert \left(L(\tau \Lambda^{(l)} , \tau \Upsilon^{(l)})\right)^n \Vert, \quad n\geq 1. $$
Consider any diagonal block $\Lambda^{(l)}_k=\lambda_k^{(l)}I + E$ ($E\equiv E^{(l)}$ for simplicity of notation. Observe that all the matrices $E$ are strictly lower triangular and they have uniform bounded entries and uniform bounded dimensions, hence all of them are nilpotent with nilpotency index $\le N$) and define $$z_k= \tau \lambda^{(l)}_k,\;1\leq k\leq d,\; z=\sum_{k=1}^d z_k, \; w= \gamma^{-1}\left(1-\prod_{k=1}^d (1-\gamma z_k)\right),$$ it follows that, $$ L(\tau \Lambda^{(l)} , \tau \Upsilon^{(l)}) = L\left(\sum_{k=1}^d (z_k I+\tau E),\gamma^{-1}(I-\prod_{k=1}^d (I-\gamma (z_k I+\tau E))\right). $$ By defining the function of $d$ complex variables,
$$ \psi(w_1,\ldots,w_d):= L\left(\sum_{k=1}^d w_k ,\gamma^{-1}(1-\prod_{k=1}^d (1-\gamma w_k))\right),
$$ we get that $ L(\tau \Lambda^{(l)} , \tau \Upsilon^{(l)})=\psi(z_1 I+\tau E,\ldots,z_d I+\tau E).$ Then, by using the Taylor expansion for $\psi$ around $\tau=0$ and taking into account the nilpotency of the matrix $E$,
we deduce that,
$$ \begin{array}{l} \psi(z_1 I+\tau E,\ldots,z_d I+\tau E)= \psi(z_1,\ldots,z_d) I + \\ \qquad \qquad \displaystyle{ \sum_{l=1}^{N-1} \frac{\tau^l}{l!} E^l \sum_{i_1+i_2+\ldots+i_d=l} \frac{\partial^{l}\psi }{\partial^{i_1} z_1\ldots\partial^{i_d} z_d} (z_1,z_2,\ldots,z_d)}.\end{array} $$
Now, since $L(z,w)\equiv L(z_1,\ldots,z_d)$ and all its partial derivatives up to order $N$ are uniformly bounded on the wedge $\mathcal{W}(\theta)$, we can write that $$ \psi(z_1 I+\tau E,\ldots,z_d I+\tau E) = \psi(z_1,\ldots,z_d ) I + \tau L^*_{\tau,h},\quad \Vert L^*_{\tau,h}\Vert \le C^*, \;(\tau,\:h\rightarrow 0^+).$$ From here we get for $0\le \tau n \le t^*$ that $$ \Vert \left(\psi(z_1 I+\tau E,\ldots,z_d I+\tau E)\right)^n \Vert = \Vert \left (L(z,w) I + \tau L^*_{\tau,h}\right)^n\Vert \le (1+\tau C^*)^n \le \exp(t^*C^*).$$
$\Box$
\subsection{Some mappings and definitions} For a given mapping $\zeta(X,Y) \in \mathbb{C}^{m,m}$ where $X$ and $Y$ are two arbitrary square complex matrices of order $m$ we define some associated mappings in the following way, \begin{equation}\label{sev-res7}\begin{array}{c} \zeta^{[1]}(X,Y):=\left(\zeta(X,Y)-\zeta(X,X)\right)(Y-X)^{-1}, \quad \mbox{\rm whenever } \det(Y-X)\ne 0,\\ \zeta^{[1]}(X,X):=\lim_{\epsilon\rightarrow 0} \zeta^{[1]}(X,X+\epsilon I), \quad \mbox{\rm whenever the limit exists. } \end{array} \end{equation} In a recursive form, when $\det(Y-X)\ne 0$ and $\zeta^{[l]}(X,X)$ exists, we continue by defining \begin{equation}\label{sev-res8}\begin{array}{c} \zeta^{[l+1]}(X,Y):=\left(\zeta^{[l]}(X,Y)-\zeta^{[l]}(X,X)\right)(Y-X)^{-1}, \\ \zeta^{[l+1]}(X,X):=\lim_{\epsilon\rightarrow 0} \zeta^{[l+1]}(X,X+\epsilon I), \quad l=1,2,\ldots,l^*. \end{array} \end{equation} By assuming $\det(Y-X)\ne 0$ and the existence of $\zeta^{[l]}(X,X),\;l=1,2,\ldots,l^*$, it is straightforward to show by induction that \begin{equation}\label{sev-res9} \zeta(X,Y)=\sum_{l=0}^{l^*}\zeta^{[l]}(X,X)(Y-X)^{l} + \zeta^{[l^*+1]}(X,Y)(Y-X)^{l^*+1}. \end{equation} We have considered for convenience that $\zeta^{[0]}(X,Y):=\zeta(X,Y).$ It should be noted that the commutativity of the matrices $X$ and $Y$ is neither necessary in the definitions above nor in the formula (\ref{sev-res9}).
To give a practical meaning to the mapping $\zeta^{[l]}(X,Y)$, we show next that, assuming $\zeta(x,y)$ has $l^*$ continuous partial derivatives with respect to the second variable, it holds that \begin{equation}\label{sev-res10} \zeta^{[l]}(X,X)=\frac{1}{l!} \frac{\partial^l \zeta(x,y)}{\partial y^l} (X,X),\quad l=1,2,\ldots,l^*. \end{equation} To see (\ref{sev-res10}), we use induction. For $l=0$ it is true by convention. For $l=1$ it is true since $$ \zeta^{[1]}(X,X)=\lim_{\epsilon \rightarrow 0} \zeta^{[1]}(X,X+\epsilon I)=\lim_{\epsilon \rightarrow 0} \epsilon^{-1} \left(\zeta(X,X+\epsilon I)-\zeta(X,X)\right)= \frac{\partial \zeta}{\partial y} (X,X).$$ Assuming it is true up to $l$, we show it
for $l+1$ by using (\ref{sev-res9}) in the second equality and the induction in the third equality below. The L'Hospital formula for limits (for the indetermination $0/0$) is used $l+1$ times in the fourth equality, $$\begin{array}{rcl} \zeta^{[l+1]}(X,X)&=&\displaystyle{\lim_{\epsilon\rightarrow 0} \zeta^{[l+1]}(X,X+\epsilon I)=\lim_{\epsilon\rightarrow 0}\frac{\zeta(X,X+\epsilon I)-\sum_{j=0}^{l}\zeta^{[j]}(X,X)(\epsilon I)^{j}}{(\epsilon I)^{l+1}}}\\[0.5pc] &= & \displaystyle{\lim_{\epsilon\rightarrow 0} \frac{\zeta(X,X+\epsilon I)-\sum_{j=0}^{l}\frac{\epsilon^j}{j!} \frac{\partial^j \zeta(x,y)}{\partial y^j}(X,X)}{\epsilon^{l+1}}}\\[0.5pc] & = &\displaystyle{\lim_{\epsilon\rightarrow 0} \frac{1}{(l+1)!} \frac{\partial^{l+1} \zeta(x,y)}{\partial y^{l+1}}(X,X+\epsilon I) = \frac{1}{(l+1)!} \frac{\partial^{l+1} \zeta}{\partial y^{l+1}}(X,X).} \end{array}$$
These results can be trivially extended to vectors (and matrices), namely $(\zeta_{ij}(X,Y))\in \mathbb{C}^{q_1m,q_2m}$, by applying them to each component $\zeta_{ij}(X,Y)\in \mathbb{C}^{m,m}$. Sometimes we will make use of this kind of vectors as we will see in the next section.
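The divided-difference construction (\ref{sev-res7})-(\ref{sev-res9}) and the Taylor-coefficient identity (\ref{sev-res10}) can be illustrated in the scalar case; the rational function `zeta` below is an arbitrary example, not one of the mappings of the paper.

```python
# Scalar illustration: zeta^{[1]} recovers the partial derivative (sev-res10),
# and the expansion (sev-res9) holds with l* = 0.
zeta = lambda x, y: 1.0 / (1.0 - 0.5 * y) + x * y        # sample rational function
dzeta_dy = lambda x, y: 0.5 / (1.0 - 0.5 * y) ** 2 + x   # exact d(zeta)/dy

X, Y = 0.3, 0.9
z1 = lambda x, y: (zeta(x, y) - zeta(x, x)) / (y - x)    # zeta^{[1]}(X, Y)

# limit defining zeta^{[1]}(X, X), approximated with a small epsilon:
z1_XX = z1(X, X + 1e-6)
assert abs(z1_XX - dzeta_dy(X, X)) < 1e-4

# expansion (sev-res9) with l* = 0 is exact:
assert abs(zeta(X, Y) - (zeta(X, X) + z1(X, Y) * (Y - X))) < 1e-12
print("ok")
```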
\subsection{Bounds for the local errors}
The forthcoming convergence results for {\sf AMF$_q$-RK} methods
are based on Lemma II.2.3 in \cite[p. 162]{HV}, which can be stated as follows. \begin{lemma}\label{sev-lema-glob-err} Assume that the global errors $\epsilon_n\equiv\epsilon_n(\tau;h)$ of a one-step method satisfy the recursion (\ref{global-errors1}), where the local errors $l_n$ can be split (uniformly in $h$ and $\tau$) as \begin{equation}\label{ln-domi} l_n=\left(R_q(\tau J,\tau P)-I\right) \phi(t_n)\tau^\mu h^\alpha + \tau \mathcal{O}(\tau^\nu h^\beta),\quad n=0,1,\ldots, t^*/\tau-1, \end{equation} where the function $\phi(t)$ and its first derivative with respect to $t$ are uniformly bounded. Then the stability condition \begin{equation}\label{sev-estab} \sup_{1\le n\le t^*/\tau \atop \tau\rightarrow 0^+, \;h\rightarrow 0^+} \Vert R_q(\tau J,\tau P)^n \Vert\le C, \end{equation}
implies that the global errors uniformly fulfil \begin{equation}\label{sev-global-err} \epsilon_{n} = \mathcal{O}(\tau^\mu h^\alpha) + \mathcal{O}(\tau^\nu h^\beta),\quad n=1,\ldots, t^*/\tau, \quad \tau\rightarrow 0^+, h\rightarrow 0^+.\end{equation} \end{lemma}
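The mechanism of the lemma, namely that the factor $R_q-I$ in front of the $\tau^\mu$-part of the local error prevents the naive $\mathcal{O}(\tau^{\mu-1})$ accumulation, can be illustrated in a scalar toy model. All quantities below ($\mu$, $\nu$, $\phi$, the sample amplification factor $R$) are hypothetical choices for illustration only.

```python
import math

# Scalar toy model of (global-errors1)/(ln-domi): eps_{n+1} = R*eps_n + l_n with
# |R| <= 1 and l_n = (R-1)*phi(t_n)*tau^mu + tau^(nu+1); the global error should
# stay O(tau^mu) + O(tau^nu), as in (sev-global-err).
mu, nu, tstar = 2, 3, 1.0
phi = math.sin                               # smooth phi with bounded derivative
ratios = []
for tau in (1e-2, 1e-3):
    R = (1 - 0.5 * tau) / (1 + 0.5 * tau)    # sample stable factor, |R| <= 1
    eps = 0.0
    for n in range(int(tstar / tau)):
        ln = (R - 1) * phi(n * tau) * tau ** mu + tau ** (nu + 1)
        eps = R * eps + ln
    ratios.append(abs(eps) / (tau ** mu + tau ** nu))
print(ratios)   # ratios stay bounded as tau decreases
```

A summation-by-parts argument (as in the cited lemma) shows the first term contributes $\mathcal{O}(\tau^\mu)$ despite being summed over $t^*/\tau$ steps.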
{\sf General Assumptions on the semilinear problem.}
{\it To bound the local errors and consequently the global errors we henceforth assume that the exact PDE solution $u_h(t)$ confined to the spatial grid and the semilinear problem (\ref{lin-system})
fulfil (\ref{spatialerrors})-(\ref{norm}), (\ref{jordan}) and (\ref{spect-1}) for some $\theta\in [0,\pi/2]$, and that the following hypotheses (related to the matrices $J$ and $P$) hold for some constants (not necessarily positive) $\alpha_l,\; \beta_l $ and $\eta$ and some nonnegative integer $l^*$, whenever $h\rightarrow 0^+$ and $\tau\rightarrow 0^+$, \begin{equation}\label{P-hypo} \begin{array}{rcl} {\bf (P1)} & & \left\{ \begin{array}{l}
(P-J)^lu_h^{(k)}(t)=\tau^l h^{\alpha_l} \: \mathcal{O}(1), \\
(P-J)^{l+1} u_h^{(k)}(t)=\tau^{l+1} h^{\beta_{l+1}} J \: \mathcal{O}(1) \end{array} \right\} \quad {l=0,1,\ldots,l^* \atop
k=1,2,\ldots,p+1.} \\[0.5pc] {\bf (P2)} & & J^\eta u_h^{(k)}(t)=\mathcal{O}(1), \quad k=1,2,\ldots,p+1, \;\mbox{\rm for some } \eta. \end{array} \end{equation} It should be noticed that $\alpha_0=0$ always, because the derivatives (up to some order) of the exact solution are uniformly bounded, i.e. $u_h^{(k)}(t)=\mathcal{O}(1), \;t\in [0,t^*],\;k=0,1,\ldots,p+1$.}
\begin{theorem}\label{sev-th-1} Assume that the Runge-Kutta method has stage order $\ell$ and that
\begin{equation}\label{sev-equ-1}\sup_{z_k\in \mathcal{W}(\theta),\atop k=1,2,\ldots,d} |z/(R_q(z,w)-1)|\le C, \quad z\;\mbox{\rm and } w \; \mbox{\rm given by } (\ref{z-w}). \end{equation} Then for the {\sf AMF$_q$-RK} method we have that, $$l_n^{[1]}= \mathcal{O}(\tau h^r) + \mathcal{O}(\tau^{\ell+1}), \qquad (\tau\rightarrow 0, \; h\rightarrow 0),$$ and $$l_n^{[1]}= \tau h^r\mathcal{O}(1) + \tau^{\ell+1}(R_q(\tau J, \tau P)-I)\left(\mathcal{O}(1) + \tau h^{\beta_1} \mathcal{O}(1)\right), \; \tau\rightarrow 0, \; h\rightarrow 0. $$ \end{theorem}
{\bf Proof.} According to Lemma \ref{lem-loc-err} the term $l_n^{[1]}$ of the local error is given by, \begin{equation}\label{sev-res11} l_n^{[1]}= \xi(\tau J, \tau P) \hat{D}_n + \delta_n, \end{equation} where \begin{equation}\label{sev-res12} \xi(\tau J, \tau P):=(\ss^T\otimes I)\left( Q_q(\tau J, \tau P) + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i(\tau J, \tau P))Q_{j-1}(\tau J, \tau P)-\prod_{i=q}^{1} M_i(\tau J, \tau P) \right) \end{equation} From Remark \ref{sev-remark-0} we have that ($e_j$ denotes the $j$-vector of the canonical basis)
$$ \sup_{\mbox {\tiny Re } z_k \le 0\atop k=1,\ldots,d}|\xi(z,w)e_j|\le C, \quad (j=1,\ldots,s),\quad z,\; w \;\mbox{\rm given by (\ref{z-w})}. $$ From Theorem \ref{th3-0} this implies that $$ \max_{j=1,\ldots,s} \Vert \xi(\tau J, \tau P)(e_j\otimes I) \Vert \le C', \quad \tau \rightarrow 0^+, \quad h \rightarrow 0^+.$$ Then, from (\ref{resid-2}) in Lemma \ref{lem-loc-err} the first bound for $l_n^{[1]}$ follows.
For the second bound, we separate in (\ref{sev-res11}) the $\tau^{\ell+1}$-term from the others and, taking into account (\ref{sev-res12}) and Lemma \ref{lem-loc-err}, we get \begin{equation}\label{sev-res13}\begin{array}{rcl} l_n^{[1]}&=& \frac{\tau^{\ell+1}}{(\ell+1)!} \left(\xi(\tau J, \tau P)\left((c^{\ell+1}-(\ell+1)Ac^\ell)\otimes I \right) + (1-\ss^T c^{\ell+1}) I
\right) u_h^{(\ell+1)}(t_n) + \circledR,\\[0.3pc] \mbox{\rm where }& &\circledR= \mathcal{O}(\tau^{\ell+2}) + \mathcal{O}(\tau h^r). \end{array} \end{equation} Next, we define the mapping (assume that $J$ is regular only to simplify the proof) \begin{equation}\label{sev-res14} \upsilon(\tau J, \tau P):=(R_q(\tau J, \tau P) - I)^{-1}\left( \xi(\tau J, \tau P)\left((c^{\ell+1}-(\ell+1)Ac^\ell)\otimes I\right) + (1-\ss^T c^{\ell+1}) I\right). \end{equation} By using the assumption (\ref{sev-equ-1}), the bounds in Remark \ref{sev-remark-0} and Lemma \ref{sev-lema-0}, it is not very difficult to see that \begin{equation}\label{sev-res14a} \begin{array}{c}
\displaystyle{\sup_ {z \in \mathcal{W}(\theta)}|\upsilon(z, z)|<
+\infty. \quad \sup_{z_k \in \mathcal{W}(\theta) \atop k=1,2,\ldots,d }|z\upsilon^{[1]}(z, w)|< +\infty, \; \mbox{\rm $z$ and $w$ given by (\ref{z-w}).}}\end{array} \end{equation} Then, from (\ref{sev-res13}) it follows that, \begin{equation}\label{sev-res15}\begin{array}{rcl} l_n^{[1]}&=& \frac{\tau^{\ell+1}}{(\ell+1)!} \left(R(\tau J, \tau P)-I\right)\upsilon(\tau J, \tau P) u_h^{(\ell+1)}(t_n) + \circledR \\[0.3pc] &=& \frac{\tau^{\ell+1}}{(\ell+1)!} \left(R(\tau J, \tau P)-I\right)\left(\upsilon(\tau J, \tau J)+ \upsilon^{[1]}(\tau J, \tau P)(\tau P-\tau J)\right)u_h^{(\ell+1)}(t_n) + \circledR \\[0.3pc] &=& \circledR + \frac{\tau^{\ell+1}}{(\ell+1)!} \left(R(\tau J, \tau P)-I\right)\upsilon(\tau J, \tau J)u_h^{(\ell+1)}(t_n) \\[0.3pc] &+& \frac{\tau^{\ell+1}} {(\ell+1)!} \left(R(\tau J, \tau P)-I\right)\upsilon^{[1]}(\tau J, \tau P)(\tau J)(J^{-1}(P-J)) u_h^{(\ell+1)}(t_n)\\[0.3pc] &=& \circledR + \frac{\tau^{\ell+1}}{(\ell+1)!} \left(R(\tau J, \tau P)-I\right) \mathcal{O}(1) + \frac{\tau^{\ell+1}}{(\ell+1)!}\left(R(\tau J, \tau P)-I\right) \mathcal{O}(\tau h^{\beta_1})\quad \mbox{\rm
$\Box$} \end{array} \end{equation}
For the analysis of the local error term $l_n^{[2]}$ in (\ref{local-errors1}), we define the mappings \begin{equation} \begin{array}{lll}\label{H-eq} \psi_{q}(\tau J,\tau X)&:=& (\ss^T\otimes I)\prod_{j=q}^1 M_j(\tau J,\tau X) \in \mathbb{C}^{m,sm}, \\[0.3pc] \zeta_{q}(\tau J,\tau X)&:=& \left(R_q(\tau J,\tau X)-I\right)^{-1} \psi_{q}(\tau J,\tau X) \in \mathbb{C}^{m,sm}, \end{array} \end{equation} and their associated vector complex functions \begin{equation} \begin{array}{lll}\label{H-eq1} \psi_{q}(z,w)&:=& \ss^T\prod_{j=q}^1 (I-wT_j)^{-1}(zA-wT_j) \in \mathbb{C}^{1,s}, \\[0.3pc] \zeta_{q}(z,w)&:=& \left(R_q(z,w)-1\right)^{-1} \psi_{q}(z,w) \in \mathbb{C}^{1,s}. \end{array} \end{equation} These mappings will play a major role in the proof of the convergence results. It must be remarked that whereas $\Vert \psi_{q}(z,w)\Vert_2$ is uniformly bounded when $z$ and $w$ are given by (\ref{z-w}), the vector $\zeta_{q}(z,w)=\mathcal{O}(z^{-1})$ as $z\rightarrow 0$ due to the fact that (see (\ref{estab-1})) \begin{equation}\label{sev-eqq2} R_q(z,w)-1= \ss^T\left(Q_q(z,w) + \sum_{j=q}^1 (\prod_{i=q}^{j} M_i(z,w))Q_{j-1}(z,w)-\prod_{i=q}^{1} M_i(z,w)\right) c z. \end{equation} Hence $\zeta_{q}(z,w)$ is not bounded in general for $z$ and $w$
given by (\ref{z-w}). However, $\zeta_{q}(z,z)$ is uniformly
bounded as long as $R_q(z,z)-1\ne 0$ for $z\in \mathcal{W}(\theta)
\backslash \{0\}$.
From (\ref{local-errors1}), by using (\ref{sev-res9}), we deduce that, \begin{equation}\label{sev-res16} \begin{array}{rcl}l_n^{[2]}&=& (R_q(\tau J, \tau P)-I) \sum_{j=0}^{l^*} \zeta_q^{[j]}(\tau J, \tau J) (I_s\otimes (\tau(P-J))^j)\Delta_h(t_n)\\[0.3pc] & + & (R_q(\tau J, \tau P)-I) \zeta_q^{[l^*+1]}(\tau J, \tau P)(I_s\otimes (\tau(P-J))^{l^*+1})\Delta_h(t_n).\end{array} \end{equation}
Next, we provide some convergence results for different kinds of {\sf AMF$_q$-RK} methods, depending on the underlying Runge-Kutta method on which the {\sf AMF$_q$-RK} method is based. We start with Theorem \ref{sev-th-2}, which finds application for DIRK methods (Diagonally Implicit Runge-Kutta) and SIRK methods (Singly Implicit Runge-Kutta), and continue with Theorems \ref{sev-th-3}, \ref{sev-th-4} and \ref{sev-th-5}, which find application in the {\sf AMF$_q$-Rad} methods presented in section two. Of course, the assumptions {\bf (P1)-(P2)} will always be assumed for some integers $l^*\ge 0$, $\ell\ge 1, \:p\ge 1$.
\begin{theorem}\label{sev-th-2} If $T_\nu=A, \;\nu=1,\ldots,q$, with the Runge-Kutta coefficient matrix $A$ having unique eigenvalue $\gamma>0$ (with multiplicity $s$), then the local errors ($l_n=l_n^{[1]}+ l_n^{[2]}$) fulfil $$\left. \begin{array}{rcl} l_n^{[1]}&=& \mathcal{O}(\tau h^r) + \tau^{\ell+1}(R(\tau J,\tau P)-I) (\mathcal{O}(1)+ \mathcal{O}(\tau h^{\beta_1})), \\ l_n^{[2]}&=& \tau^{2l+2}h^{\beta_{l+1}}(R(\tau J,\tau P)-I) \mathcal{O}(1),\; l=0,1,\ldots,\tilde{l}, \\ \tilde{l}&=&\max\{0,\min\{q-2,l^*\}\}. \end{array} \right\} \; (\tau\rightarrow 0^+, \; h\rightarrow 0^+). $$ If the method is A$(\theta)$-stable for a $d$-splitting and (\ref{spect-1}) holds, then for any $l=0,1,\ldots,\tilde{l}$,
the global errors fulfil (whenever $\tau\rightarrow 0^+$ and $h\rightarrow 0^+$) that, $$\epsilon_{n,h} = \mathcal{O}( h^r)+ \tau^{\ell}\min\{1,\max\{\tau,\tau^2 h^{\beta_1}\}\} \mathcal{O}(1) +\mathcal{O}(\tau^{2l+2} h^{\beta_{l+1}}); \; n=1,2,\ldots,t^*/\tau.$$ \end{theorem}
{\bf Proof.} The expression of $l_n^{[1]}$ was seen in Theorem \ref{sev-th-1}. In order to show the expression for $l_n^{[2]}$, we start by deducing from (\ref{H-eq}) and (\ref{estab-1}) that \begin{equation}\label{sev-eqq-1} \begin{array}{rcl} \zeta_q(z,w)&=&(R_q(z,w)-1)^{-1} \ss^T\left((I-wA)^{-1}A\right)^q (z-w)^q,\\[0.3pc] R_q(z,w)-1&=&\ss^T\left(\left((I-wA)^{-1}A\right)^q (z-w)^q-I\right)(zA-I)c z. \end{array} \end{equation} From (\ref{sev-res10}) we have that $\displaystyle{\zeta_q^{[l]}(z,z)=\frac{1}{l!}\frac{\partial^l \zeta_q}{\partial w^l}}(z,z)$. From here and from (\ref{sev-eqq-1}) it follows that $$\zeta_q^{[l]}(z,z)=0,\quad l=0,1,\ldots,\tilde{l}.$$ From (\ref{sev-res16}), by taking $\tilde{l}$ as the upper index, for any $l=0,1,\ldots,\tilde{l}$, we have that $$ \begin{array}{rcl} \begin{array}{rcl}l_n^{[2]}&=& (R_q(\tau J, \tau P)-I) \zeta_q^{[l+1]}(\tau J, \tau P)(I_s\otimes (\tau(P-J))^{l+1})\Delta_h(t_n) \\[0.3pc] &=& \tau^{l}(R_q(\tau J, \tau P)-I) \left( \zeta_q^{[l+1]}(\tau J, \tau P)(I_s\otimes \tau J)\right)(I_s\otimes J^{-1}(P-J)^{l+1})
\Delta_h(t_n) \\[0.3pc] &=& \tau^{l}(R(\tau J,\tau P)-I) \mathcal{O}(1) (I_s\otimes J^{-1}(P-J)^{l+1})(\tau c\otimes u_h'(t_n) + \tau^2\mathcal{O}(1))\\[0.3pc] &=&\tau^{2l+2}h^{\beta_{l+1}}(R(\tau J,\tau P)-I) \mathcal{O}(1). \end{array} \end{array} $$ To see the bound for the global errors we apply Lemma \ref{sev-lema-glob-err}. The bounds for the local errors $l_n$ have been obtained above (see also Theorem \ref{sev-th-1} for $l_n^{[1]}$). The boundedness of the powers of $R_q(\tau J, \tau P)$ as indicated in (\ref{sev-estab}) follows from Theorem \ref{th3-0} by taking into account the A$(\theta)$-stability of the method for the $d$-splitting and that (\ref{spect-1}) holds. Now from Lemma \ref{sev-lema-glob-err} the proof is accomplished.
$\Box$
\begin{theorem}\label{sev-th-3} For AMF$_q$-RK methods with $\gamma>0$ and satisfying $(A-T_1)c=0$, we have that $$ l_n^{[2]}= \tau^{2}(R(\tau J,\tau P)-I)\left( \mathcal{O}(1) + h^{\beta_1}\mathcal{O}(1)\right), \; (\tau\rightarrow 0^+, \; h\rightarrow 0^+). $$ Additionally if the method is A$(\theta)$-stable for a $d$-splitting and (\ref{spect-1}) holds, then for $\tau\rightarrow 0^+$ and $h\rightarrow 0^+$, the global errors fulfil $$\epsilon_{n,h} = \mathcal{O}( h^r)+ \tau^{\ell}\min\{1,\max\{\tau,\tau^2 h^{\beta_1}\}\} \mathcal{O}(1) +\tau^{2}\left(\mathcal{O}(1)+ h^{\beta_1}\mathcal{O}(1) \right); \; n=1,2,\ldots,t^*/\tau.$$ \end{theorem}
{\bf Proof.} The expression of $l_n^{[1]}$ was given in Theorem \ref{sev-th-1}. In order to show the expression for $l_n^{[2]}$, from (\ref{sev-res16}) with $l^*=0$ we get (observe that $\zeta_q(z,z)c=0$ because $(A-T_1)c=0$; this is used in the third equality below) $$ \begin{array}{rcl}l_n^{[2]}&=& (R_q(\tau J, \tau P)-I) \zeta_q(\tau J, \tau J)\Delta_h(t_n) \\[0.3pc] &+&(R_q(\tau J, \tau P)-I) \zeta_q^{[1]}(\tau J, \tau P)(I_s\otimes (\tau(P-J)))\Delta_h(t_n) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I) \zeta_q(\tau J, \tau J)\left(\tau c \otimes I + \tau^2 \mathcal{O}(1)\right) \\[0.3pc] &+& (R_q(\tau J, \tau P)-I) \left(\zeta_q^{[1]}(\tau J, \tau P)(I\otimes \tau J)\right)(I_s\otimes (J^{-1}(P-J)))\left(\tau \mathcal{O}(1)\right) \\[0.3pc] &=& (R(\tau J,\tau P)-I) \left(\tau^{2} \mathcal{O}(1)\right) + (R(\tau J,\tau P)-I) \mathcal{O}(1)\:\left(\tau^{2}h^{\beta_1}\mathcal{O}(1)\right). \end{array} $$ This provides the bound for the local errors $l_n^{[2]}$. The boundedness of the powers of $R_q(\tau J, \tau P)$ as indicated in (\ref{sev-estab}) follows from Theorem \ref{th3-0}, taking into account the A$(\theta)$-stability of the method for the $d$-splitting and the fact that (\ref{spect-1}) holds. Now, from the bounds for the local errors and from Lemma \ref{sev-lema-glob-err}, the proof follows.
$\Box$
\begin{theorem}\label{sev-th-4} For AMF$_q$-RK methods with $\gamma>0$ and satisfying $$\sup_{\mbox{\tiny Re$\:z$} \:\le\: 0, \:z\ne 0}\Vert z^{-\eta} \zeta_q(z,z)\Vert_2 < +\infty,$$ with $\eta$ given in {\bf (P2)} we have that $$ l_n^{[2]}= (R(\tau J,\tau P)-I)\left( \mathcal{O}( \tau^{1+\eta}) + \mathcal{O}( \tau^2 h^{\beta_1})\right), \; (\tau\rightarrow 0^+, \; h\rightarrow 0^+). $$ Additionally if the method is A$(\theta)$-stable for a $d$-splitting and (\ref{spect-1}) holds, then for $\tau\rightarrow 0^+$ and $h\rightarrow 0^+$, the global errors fulfil $$\epsilon_{n,h} = \mathcal{O}( h^r)+ \min\{1,\max\{\tau,\tau^2 h^{\beta_1}\}\} \mathcal{O}( \tau^{\ell}) + \mathcal{O}(\tau^{1+\eta})+ \mathcal{O}(\tau^2 h^{\beta_1}); \; n=1,2,\ldots,t^*/\tau.$$ \end{theorem}
{\bf Proof.} The expression of $l_n^{[1]}$ was given in Theorem \ref{sev-th-1}. In order to show the expression for $l_n^{[2]}$, from (\ref{sev-res16}) with $l^*=0$ we get $$ \begin{array}{rcl}l_n^{[2]}&=& (R_q(\tau J, \tau P)-I) \zeta_q(\tau J, \tau J)\Delta_h(t_n) \\[0.3pc] &+&(R_q(\tau J, \tau P)-I) \zeta_q^{[1]}(\tau J, \tau P)(I_s\otimes (\tau(P-J)))\Delta_h(t_n) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I) \zeta_q(\tau J, \tau J)\left(\tau \mathcal{O}(1)\right) \\[0.3pc] &+& (R_q(\tau J, \tau P)-I) \left(\zeta_q^{[1]}(\tau J, \tau P)(I\otimes \tau J)\right)(I_s\otimes (J^{-1}(P-J)))\left(\tau \mathcal{O}(1)\right) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I) \left(\zeta_q(\tau J, \tau J)(I\otimes (\tau J)^{-\eta})\right) \left(I\otimes (\tau J)^{\eta}\right)\left(\tau \mathcal{O}(1)\right) \\[0.3pc] &+& (R(\tau J,\tau P)-I) \mathcal{O}(1)\:\left(\tau^{2}h^{\beta_1}\mathcal{O}(1)\right)\\[0.3pc] &=& (R_q(\tau J, \tau P)-I) \left(\mathcal{O}(1) \right) \left(\tau^{\eta+1} I\otimes J^{\eta} \mathcal{O}(1)\right) \\[0.3pc] &+& (R(\tau J,\tau P)-I) \left(\tau^{2}h^{\beta_1}\mathcal{O}(1)\right)\\[0.3pc] &=& (R(\tau J,\tau P)-I) \left( \mathcal{O}(\tau^{1+\eta}) + \mathcal{O}(\tau^{2}h^{\beta_1})\right). \end{array} $$ This provides the bound for the local errors $l_n^{[2]}$. The rest of the proof follows as in the previous theorems.
$\Box$
\begin{theorem}\label{sev-th-5} For AMF$_q$-RK methods with $\gamma>0$ and
$$\begin{array}{c} (A-T_1)c=0,\quad \sup_{\mbox{\tiny Re$\:z$} \:\le\: 0, \:z\ne 0}\Vert z^{-\eta} \zeta_q(z,z)\Vert_2 < +\infty,\end{array}$$ with $\eta$ given in {\bf (P2)} and assuming {\bf (P1)} for $l^*=1$, we have that $$ l_n^{[2]}= (R(\tau J,\tau P)-I)\left( \mathcal{O}( \tau^{2+\eta}) + \mathcal{O}( \tau^3 h^{\alpha_1}) + \mathcal{O}( \tau^4 h^{\beta_2})\right), \; (\tau\rightarrow 0^+, \; h\rightarrow 0^+). $$ Additionally if the method is A$(\theta)$-stable for a $d$-splitting and (\ref{spect-1}) holds, then the global errors fulfil $$\begin{array}{c} \epsilon_{n,h} = \mathcal{O}( h^r)+ \min\{1,\max\{\tau,\tau^2 h^{\beta_1}\}\} \mathcal{O}( \tau^{\ell}) + \mathcal{O}(\tau^{2+\eta})+ \mathcal{O}(\tau^3 h^{\alpha_1})+ \mathcal{O}(\tau^4 h^{\beta_2}),\\ n=1,2,\ldots,t^*/\tau,\qquad (\tau\rightarrow 0^+,\; h\rightarrow 0^+).\end{array}$$ \end{theorem}
{\bf Proof.} In order to show the expression for $l_n^{[2]}$, from (\ref{sev-res16}) with $l^*=1$ we get $$ \begin{array}{rcl}l_n^{[2]}&=& (R_q(\tau J, \tau P)-I)\left( \zeta_q(\tau J, \tau J)+ \zeta^{[1]}_q(\tau J, \tau J)(I\otimes \tau(P-J))\right) \Delta_h(t_n) \\[0.3pc] &+&(R_q(\tau J, \tau P)-I) \zeta_q^{[2]}(\tau J, \tau P)(I_s\otimes \tau^2(P-J)^2)\Delta_h(t_n) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I)\left( \zeta_q(\tau J, \tau J)+ \zeta^{[1]}_q(\tau J, \tau J)(I\otimes \tau(P-J))\right) \left((\tau c\otimes I)u_h'(t_n)+ \tau^2\mathcal{O}(1)\right)\\[0.3pc] &+&(R_q(\tau J, \tau P)-I) \zeta_q^{[2]}(\tau J, \tau P)(I_s\otimes \tau^2(P-J)^2)(\tau \mathcal{O}(1)) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I)\left( \tau^2 \zeta_q(\tau J, \tau J)\mathcal{O}(1)+ \tau \zeta^{[1]}_q(\tau J, \tau J)(I\otimes \tau(P-J)\mathcal{O}(1))\right) \\[0.3pc] &+&(R_q(\tau J, \tau P)-I) \left(\zeta_q^{[2]}(\tau J, \tau P)(I_s\otimes \tau J)\right)(I_s\otimes \tau J^{-1}(P-J)^2)(\tau \mathcal{O}(1)) \\[0.3pc] &=& (R_q(\tau J, \tau P)-I)\left(\tau^2\left(\zeta_q(\tau J, \tau J)(\tau J)^{-\eta}\right)(\tau^\eta J^\eta \mathcal{O}(1))+ \mathcal{O}(\tau^3 h^{\alpha_1})\right) \\[0.3pc] &+& (R_q(\tau J, \tau P)-I) \left(\mathcal{O}(1)\: \tau^2 J^{-1}(P-J)^2 \mathcal{O}(1)\right) \\[0.3pc] &=& (R(\tau J,\tau P)-I)\left( \mathcal{O}( \tau^{2+\eta}) + \mathcal{O}( \tau^3 h^{\alpha_1}) + \mathcal{O}( \tau^4 h^{\beta_2})\right). \end{array} $$ This provides the bound for the local errors $l_n^{[2]}$. The rest of the proof follows as in the previous theorems.
$\Box$
\section{Application of the convergence results for Dirichlet Boundary Conditions in parabolic problems}
Let us next consider the $2D$ semi-linear diffusion-reaction model ($\varepsilon$ is a positive constant) \begin{equation}\label{2D-dif-reac} u_t=\varepsilon(u_{xx}+u_{yy}) + g(x,y,t),\; (x,y)\in (0,1)^2,\,t\in[0,1],\;\varepsilon>0, \end{equation} with prescribed Dirichlet boundary conditions and an initial condition. The PDE is discretized on uniform spatial meshes $(x_i,y_j)=(ih,jh)$, $h= N^{-1}$, $1\le i,j\le N-1$, where $N-1$ is the number of interior grid-points for each spatial variable. We shall assume that the exact solution of the PDE (\ref{2D-dif-reac}) is regular enough when $(x,y,t)\in [0,1]^2\times [0,t^*]$.
Let us denote $u_h(t):=(u_{i,j}(t))_{i,j=1}^{N-1}$ with a row-wise ordering, where $u_{i,j}(t):=u(x_i,y_j,t)$ for $0\leq i,j\leq N$. Then, by using second-order central differences, we obtain for the exact solution of (\ref{2D-dif-reac}) on the grid a semi-discrete system (\ref{pde}) with dimension $m=(N-1)^2$ \begin{equation}\label{semilin-exact-2D} u_h'(t)=\varepsilon J u_h(t)+g_h(t)+\sigma_h(t)+\varepsilon
h^{-2}u_{\Gamma_h}(t), \end{equation} where \begin{equation}\label{J-2D}\begin{array}{c} J:=J_1+J_2, \;\;J_1=I_{N-1}\otimes B_{N-1},\;\; J_2=B_{N-1}\otimes I_{N-1},\\ B_{N-1}=h^{-2}TriDiag(1,-2,1)\in \mathbb{R}^{(N-1)\times(N-1)},\quad h=1/N. \end{array} \end{equation} Moreover, $g_h(t)=(g(x_i,y_j,t))_{i,j=1}^{N-1}$, $\Vert \sigma_h(t) \Vert_{2,h} =\mathcal{O}(h^2)$ ($0 \le t \le t^*$), whereas $u_{\Gamma_h}(t)$ contains the values of the exact solution on the boundary, i.e., \begin{equation}\label{uh-boundary} u_{\Gamma_h}(t)=u_h^{(0,y)}(t)\otimes e_1+u_h^{(1,y)}(t)\otimes e_{N-1}+e_1\otimes u_h^{(x,0)}(t)+e_{N-1}\otimes u_h^{(x,1)}(t), \end{equation} with $u_h^{(0,y)}(t)=(u_{0,j}(t))_{j=1}^{N-1}$, $u_h^{(1,y)}(t)=(u_{N,j}(t))_{j=1}^{N-1}$, $u_h^{(x,0)}(t)=(u_{i,0}(t))_{i=1}^{N-1}$ and $u_h^{(x,1)}(t)=(u_{i,N}(t))_{i=1}^{N-1}$. Above, $\{e_1,\ldots,e_{N-1}\}$ denotes the canonical basis in $\mathbb{R}^{N-1}$.
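For illustration (not part of the paper's analysis), the Kronecker structure $J=I_{N-1}\otimes B_{N-1}+B_{N-1}\otimes I_{N-1}$ of (\ref{J-2D}) can be assembled and checked directly; the sketch below uses NumPy and a helper name of our own choosing:

```python
import numpy as np

def laplacian_2d(N, eps=1.0):
    """Assemble eps*J with J = I (x) B + B (x) I, where
    B = h^{-2} tridiag(1, -2, 1) and h = 1/N (row-wise grid ordering)."""
    h = 1.0 / N
    n = N - 1  # number of interior grid points per direction
    B = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    I = np.eye(n)
    return eps * (np.kron(I, B) + np.kron(B, I))

# J is symmetric negative definite: its eigenvalues are
# -4 h^{-2} (sin^2(i*pi*h/2) + sin^2(j*pi*h/2)), 1 <= i, j <= N-1.
J = laplacian_2d(8)
assert np.allclose(J, J.T) and np.all(np.linalg.eigvalsh(J) < 0)
```

In practice $J$ is of course never formed densely; the AMF approach only requires solves with the directional factors $I-\tau\gamma J_1$ and $I-\tau\gamma J_2$, which are tridiagonal up to a reordering.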
For the proof of the convergence results we need Lemmas \ref{lema-valP2} and \ref{lema-Pgeneral} given below. These lemmas can be derived from the material in \cite[pp. 96-300]{HV} (see from Lemma 6.1 to Lemma 6.5). Lemmas \ref{lema-valP2} and \ref{lema-Pgeneral} supply sharp values for the constants $\alpha_l$, $\beta_l$ and $\eta$ appearing in the {\bf P}-assumptions of section 3. Together with the convergence theorems, these constants provide specific orders of convergence of the MoL approach for several AMF$_q$-RK methods, in particular for the {\sf AMF$_q$-Rad} methods presented in section 2.
The norm considered here for vectors is the weighted Euclidean norm $$ \displaystyle{\Vert (v_{ij})_{i,j=1}^{N-1}\Vert_{2,h}
:=\sqrt{\frac{1}{N^2}\sum_{i,j=1}^{N-1}|v_{ij}|^2}=h\Vert (v_{ij})_{i,j=1}^{N-1}\Vert_2, }$$ and for matrices the corresponding operator norm.
\begin{lemma}\label{lema-valP2} Assume that the exact solution $u(x,y,t)$ of the 2D-PDE problem (\ref{2D-dif-reac}) has as many continuous partial derivatives as needed in the analysis for $(x,y,t)\in [0,1]^2\times [0,t^*]$. Then for $k=1,2,\ldots$ and $\omega<\frac{1}{4}$ we have that, $$\begin{array}{rcl}\norm{J^{\omega}u_h^{(k)}(t)}_{2,h}&=&\mathcal{O}(1), \quad \mbox{\rm and moreover } \\ \norm{J^{1+\omega}u_h^{(k)}(t)}_{2,h}&=&\mathcal{O}(1), \; \mbox{\rm whenever } u^{(1)}_{\Gamma_h}(t)\equiv 0. \end{array}$$ \end{lemma}
\begin{lemma}\label{lema-Pgeneral} Assume that the exact solution $u(x,y,t)$ of the 2D-PDE problem (\ref{2D-dif-reac}) has as many continuous partial derivatives as needed in the analysis for $(x,y,t)\in [0,1]^2\times [0,t^*]$. Then, for $l=0,1,\ldots$ we have that, \begin{equation}\label{P-hypo-gen} \norm{(P-J)^l u_h^{(k)}(t)}_{2,h}=\mathcal{O}(\tau^l h^{\alpha_l}),\qquad \norm{J^{-1}(P-J)^l u_h^{(k)}(t)}_{2,h}=\mathcal{O}(\tau^l h^{\beta_l}), \end{equation} where \begin{equation}\label{P-hypo-alpha} \alpha_l= \left\{\begin{array}{ll}-\max\{0,3+4(l-2)\}, &\quad {\rm if} \; u^{(1)}_{\Gamma_h}(t)\equiv 0,\\-\max\{0,3+4(l-1)\}, & \quad {\rm otherwise},\end{array}\right. \end{equation} and \begin{equation}\label{P-hypo-beta} \beta_l= \left\{\begin{array}{ll}-\max\{0,1+4(l-2)\}, &\quad {\rm if} \; u^{(1)}_{\Gamma_h}(t)\equiv 0,\\-\max\{0,1+4(l-1)\}, & \quad {\rm otherwise}.\end{array}\right. \end{equation} \end{lemma}
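Since the piecewise formulas for $\alpha_l$ and $\beta_l$ are used repeatedly below, the following small sketch (the helper names are ours, not from the paper) tabulates them and reproduces the values invoked in the proof of Theorem \ref{sev-th-7}:

```python
def alpha(l, time_independent_bc):
    """alpha_l in (P-hypo-alpha): ||(P-J)^l u_h^(k)|| = O(tau^l h^alpha_l)."""
    shift = 2 if time_independent_bc else 1  # u'_Gamma == 0 shifts the formula by one
    return -max(0, 3 + 4 * (l - shift))

def beta(l, time_independent_bc):
    """beta_l in (P-hypo-beta): ||J^{-1}(P-J)^l u_h^(k)|| = O(tau^l h^beta_l)."""
    shift = 2 if time_independent_bc else 1
    return -max(0, 1 + 4 * (l - shift))

# Values used in the proof of Theorem sev-th-7:
assert (alpha(1, True), beta(1, True), beta(2, True)) == (0, 0, -1)  # time-independent BCs
assert (alpha(1, False), beta(1, False)) == (-3, -1)                 # time-dependent BCs
```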
We next give a convergence theorem for 2D-parabolic PDEs when the MoL approach with the {\sf AMF$_q$-Rad} methods of section 2 is applied for the time discretization. The results still hold for 3D-parabolic problems (in fact for $d$D-parabolic problems with $d\ge 3$) and Time-Independent Dirichlet boundary conditions, but the proof requires some extra length to be included here.
\begin{theorem}\label{sev-th-7} The global errors (GE) in the weighted Euclidean norm of the MoL approach for the 2D-PDE (\ref{2D-dif-reac}), when the spatial semi-discretization is carried out with second-order central differences and the time integration is performed with AMF$_q$-RK methods, are given in Table \ref{estimates1D}. There, $\varrho=\min\{1,\tau^2h^{-1}\}$ and $\mathcal{O}(\tau^{2.25^*})$ stands for $\mathcal{O}(\tau^{\mu})$, where $\mu<2.25$ is an arbitrary constant. \begin{table}[h!] \centering
\begin{tabular}{|c|c|c|} \hline $(\tau\rightarrow 0^+,\:h\rightarrow 0^+)$ & GE (Time-Indep.) & GE (Time-Dep.) \\[0.2pc]
\hline {\sf AMF$_1$-Rad} & $\mathcal{O}(h^2)+\mathcal{O}(\tau^2)$ & $\mathcal{O}(h^2) + \mathcal{O}(\varrho)$ \\[0.2pc]\hline {\sf AMF$_2$-Rad} & $\mathcal{O}(h^2)+\mathcal{O}(\tau^3)+\tau^2\mathcal{O}(\varrho)$ & $\mathcal{O}(h^2) + \mathcal{O}(\varrho)$ \\[0.2pc]\hline {\sf AMF$_3$-Rad} & $\mathcal{O}(h^2)+\mathcal{O}(\tau^{2.25^*})$ & $\mathcal{O}(h^2)+\mathcal{O}(\varrho)$ \\[0.2pc] \hline \end{tabular}\caption{\scriptsize Global error estimates in the weighted Euclidean norm for Time-Dependent Dirichlet boundary conditions (in short Time-Dep.) and Time-Independent Dirichlet boundary conditions (in short Time-Indep.).} \label{estimates1D} \end{table}
\end{theorem}
{\bf Proof.} In all cases the stage order of the underlying Radau IIA Runge-Kutta method is $\ell=2$ and the order of the spatial semi-discretization is $r=2$. Moreover, the three methods {\sf AMF$_q$-Rad} ($q=1,\:2,\:3$) are A($\pi/2$)-stable for a 2-splitting, as shown in \cite{sevedom-AMFestab} for the cases $q=1$ and $q=2$ and in \cite{apnum-sevsole10} for the case $q=3$. It should also be noticed that (\ref{spect-1}) holds.
We start with the {\sf AMF$_1$-Rad} method. In the case of Time-Independent Dirichlet boundary conditions, the time derivative of the solution vanishes at the boundary points $(x,y)\in \Gamma_h$, i.e.\ $u^{(1)}_{\Gamma_h}(t)\equiv 0$. From Lemma \ref{lema-Pgeneral} we get $\alpha_1=0$ and $\beta_1=0$, and the bound for the global errors follows from Theorem \ref{sev-th-3}. In the case of Time-Dependent Dirichlet boundary conditions, Lemma \ref{lema-Pgeneral} gives $\alpha_1=-3$ and $\beta_1=-1$, and the bound for the global errors again follows from Theorem \ref{sev-th-3}. The bound also applies to the {\sf AMF$_2$-Rad} method for Time-Dependent Dirichlet BCs, because this method fulfils the assumptions of Theorem \ref{sev-th-3}.
For the case of the {\sf AMF$_2$-Rad} method and Time-Independent Dirichlet BCs we apply Theorem \ref{sev-th-3} for the case $\varrho=1$ and Theorem \ref{sev-th-5} with $l^*=1$ for the case $\varrho=\tau^2h^{-1}$. Observe that from Lemma \ref{lema-Pgeneral} we have $\alpha_1=0$, $\beta_1=0$ and $\beta_2=-1$. Moreover, the {\sf AMF$_2$-Rad} method fulfils all the assumptions of Theorem \ref{sev-th-5} with $\eta=1$; see also Lemma \ref{lema-valP2}.
For the case of the {\sf AMF$_3$-Rad} method and Time-Independent Dirichlet BCs we apply Theorem \ref{sev-th-4} with any $\eta<1.25$ (see Lemma \ref{lema-valP2}). Observe that in this case $\alpha_1=0$ and $\beta_1=0$. Then, from Theorem \ref{sev-th-4}, the global errors are of size $\mathcal{O}(h^2)+\mathcal{O}(\tau^{2})$. The proof that the order can be increased up to $\mathcal{O}(h^2)+\mathcal{O}(\tau^{2.25^*})$ requires some extra technical details that we have omitted for simplicity. The case of Time-Dependent Dirichlet BCs also follows from Theorem \ref{sev-th-4}, but in this case $\beta_1=-1$.
$\Box$
\subsection{Numerical Experiments} We have performed numerical experiments on two parabolic problems, a 2D-PDE and a 3D-PDE, in order to illustrate the convergence results presented in the previous sections for the {\sf AMF$_q$-Rad} methods.
\begin{enumerate} \item {\sf Problem 1} is the 2D-PDE problem (\ref{2D-dif-reac}) with diffusion parameter $\varepsilon=0.1$ and Dirichlet Boundary Conditions and an Initial Condition such that \begin{equation}\label{2D-solution} u(x,y,t)= 10x(1-x)y(1-y)e^t + \beta e^{2x-y-t}, \end{equation} is the exact solution. The case $\beta=0$ provides Time-Independent Boundary conditions and no spatial error ($\sigma_h(t)\equiv 0$, due to the polynomial nature of the exact solution); the case $\beta=1$ provides Time-Dependent Boundary conditions and spatial discretization errors of order two. \item {\sf Problem 2} is the 3D-PDE problem \begin{equation}\label{3D-dif-reac}\begin{array}{c} u_t(\overrightarrow{x},t)= \varepsilon\: \Delta u(\overrightarrow{x},t)+g(\overrightarrow{x},t),\\ t\in[0,1],\;\;\overrightarrow{x}=(x,y,z)\in (0,1)^3 \subset \mathbb{R}^3, \end{array}
\end{equation} with diffusion parameter $\varepsilon=0.1$, Dirichlet Boundary Conditions and an Initial Condition such that \begin{equation}\label{3D-solution} u(x,y,z,t)= 64x(1-x)y(1-y)z(1-z)e^t + \beta e^{2x-y-z-t}, \end{equation} is the exact solution. Again, the case $\beta=0$ provides Time-Independent Boundary conditions and no spatial error, whereas the case $\beta\ne 0$ provides Time-Dependent Boundary conditions and spatial discretization errors of order two. \end{enumerate}
At the end-point $t^*=1$ of the time interval we have computed, in the weighted Euclidean norm and as specified in (\ref{sev-equa-1}), the global errors $\epsilon_2(h,\tau)$ (where $y_{\rm met}(t^*)$ denotes the numerical solution at $t^*$ given by the method considered), the number of significant figures of the global errors $\delta_2(h,\tau)$, and the estimated order of the global errors $p(h,\tau)$ as powers of $h$ when $r=\tau/h$ is kept constant and both $\tau$ and $h$ tend to zero. \begin{equation}\label{sev-equa-1}\begin{array}{c} \epsilon_2(h,\tau):=\norm{u_h(t^*)-y_{\rm met}(t^*)}_{2,h}, \quad \delta_2(h,\tau)=-\log_{10} \epsilon_2(h,\tau)\\ p(h,\tau)= (\delta_2(h/2,\tau/2)- \delta_2(h,\tau))/\log_{10} 2. \end{array} \end{equation}
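The quantities in (\ref{sev-equa-1}) can be evaluated mechanically from two runs at $(h,\tau)$ and $(h/2,\tau/2)$. A minimal sketch (the helper names are ours), including a sanity check that an error of size $Ch^{\mu}$ yields $p\approx\mu$:

```python
import numpy as np

def delta2(err):
    # delta_2(h, tau) = -log10(eps_2(h, tau)): significant correct digits
    return -np.log10(err)

def estimated_order(err_h, err_half):
    # p(h, tau) = (delta_2(h/2, tau/2) - delta_2(h, tau)) / log10(2)
    return (delta2(err_half) - delta2(err_h)) / np.log10(2.0)

# Sanity check: an error behaving like C * h^mu (with tau/h fixed) gives p == mu.
C, mu, h = 3.0, 2.0, 1.0 / 48
assert abs(estimated_order(C * h**mu, C * (h / 2)**mu) - mu) < 1e-12
```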
In Tables \ref{table-linear2D-1}, \ref{table-linear2D-2} and \ref{table-linear3D-1} we have considered, for each $h$, the time-stepsize $\tau=q h$ for the corresponding {\sf AMF$_q$-Rad} method ($q=1,2,3$), so that all the methods make use of the same number of $f$-evaluations and similar CPU times in the computations. In those tables we display the number of significant figures of the global errors, $\delta_2(h,\tau)$, and in brackets the estimated orders $p(h,\tau)$ of each method.
From Theorem \ref{sev-th-7}, the global errors are expected to be of size $h^\mu$ (observe that $\tau/h$ is kept constant) where: \begin{enumerate} \item For the {\sf AMF$_1$-Rad} method, $\mu=2$ if Time-Independent BCs are considered and $\mu=1$ if Time-Dependent BCs are imposed. This fits nicely with the results displayed in Table \ref{table-linear2D-1} (Time-Independent BCs) and in Table \ref{table-linear2D-2} (Time-Dependent BCs) for the 2D-PDE problem. Moreover, the convergence order is still $\mu=2$ in the 3D-PDE problem for Time-Independent BCs, as can be seen in Table \ref{table-linear3D-1}. \item For the {\sf AMF$_2$-Rad} method, $\mu=3$ if Time-Independent BCs are considered and $\mu=1$ if Time-Dependent BCs are imposed. This fits well with the results displayed in Table \ref{table-linear2D-1} (Time-Independent BCs) and in Table \ref{table-linear2D-2} (Time-Dependent BCs) for the 2D-PDE problem. Moreover, the convergence order is also $\mu=3$ in the 3D-PDE problem for Time-Independent BCs, as can be observed in Table \ref{table-linear3D-1}. \item For the {\sf AMF$_3$-Rad} method, $\mu=2.25^*$ if Time-Independent BCs are considered and $\mu=1$ if Time-Dependent BCs are imposed. This can be observed in Table \ref{table-linear2D-1} (Time-Independent BCs) and in Table \ref{table-linear2D-2} (Time-Dependent BCs) for the 2D-PDE problem. Moreover, the convergence order also approaches $\mu=2.3$ in the 3D-PDE problem for Time-Independent BCs, as shown in Table \ref{table-linear3D-1}. \end{enumerate}
\begin{table}[h!] \centering
\begin{tabular}{|l|l|l|l|} \hline
$h$ & $\begin{array}{c}\mbox{\sf AMF$_1$-Rad} \;({p})\\ \tau/h= 1\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_2$-Rad} \;({p})\\ \tau/h= 2\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_3$-Rad} \;({p})\\ \tau/h= 3\end{array}$ \\ \hline $\frac{1}{24}$ & $ \delta_2=3.74 \; (2.03)$ & $ \delta_2=4.94 \; (2.82)$ & $ \delta_2=4.90 \; (3.56)$ \\\hline $\frac{1}{48}$ & $ \delta_2=4.35 \; (2.03)$ & $ \delta_2=5.79 \; (2.89)$ & $ \delta_2=5.67 \; (2.42)$ \\\hline$\frac{1}{96}$ & $ \delta_2=4.96 \; (1.99)$ & $ \delta_2=6.66 \; (2.92)$ & $ \delta_2=6.40 \; (2.36)$ \\\hline$\frac{1}{192}$ & $ \delta_2=5.56 \; (1.99)$ & $ \delta_2=7.54 \; (2.93)$ & $ \delta_2=7.11 \; (2.29)$ \\\hline$\frac{1}{384}$ & $ \delta_2=6.16 \; (2.03)$ & $ \delta_2=8.42 \; (2.96)$ & $ \delta_2=7.80 \; (2.29)$ \\\hline$\frac{1}{768}$ & $ \delta_2=6.77 \; (--)$ & $ \delta_2=9.31 \; (--)$ & $ \delta_2=8.49 \; (--)$ \\\hline \end{tabular} \caption{\scriptsize Significant correct digits ($l_{2,h}$-norm) for the 2D-PDE problem with Time-Independent Dirichlet BCs ($\beta=0$). In brackets the estimated orders of convergence (by halving both the spatial resolution $h$ and the time-stepsize $\tau$, keeping the ratio $r=\tau/h$ constant).}\label{table-linear2D-1} \end{table}
\begin{table}[h!] \centering
\begin{tabular}{|l|l|l|l|} \hline
$h$ & $\begin{array}{c}\mbox{\sf AMF$_1$-Rad} \;({p})\\ \tau/h= 1\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_2$-Rad} \;({p})\\ \tau/h= 2\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_3$-Rad} \;({p})\\ \tau/h= 3\end{array}$ \\ \hline $\frac{1}{24}$ & $ \delta_2=3.02 \; (1.00)$ & $ \delta_2=2.79 \; (0.76)$ & $ \delta_2=2.52 \; (0.66)$ \\\hline $\frac{1}{48}$ & $ \delta_2=3.32 \; (0.97)$ & $ \delta_2=3.02 \; (0.83)$ & $ \delta_2=2.72 \; (0.76)$ \\\hline$\frac{1}{96}$ & $ \delta_2=3.61 \; (1.00)$ & $ \delta_2=3.27 \; (0.90)$ & $ \delta_2=2.95 \; (0.86)$ \\\hline$\frac{1}{192}$ & $ \delta_2=3.91 \; (1.00)$ & $ \delta_2=3.54 \; (0.93)$ & $ \delta_2=3.21 \; (0.91)$ \\\hline$\frac{1}{384}$ & $ \delta_2=4.21 \; (1.03)$ & $ \delta_2=3.82 \; (0.97)$ & $ \delta_2=3.48 \; (0.97)$ \\\hline$\frac{1}{768}$ & $ \delta_2=4.52 \; (--)$ & $ \delta_2=4.11 \; (--)$ & $ \delta_2=3.77 \; (--)$ \\\hline \end{tabular} \caption{\scriptsize Significant correct digits ($l_{2,h}$-norm) for the 2D-PDE problem with Time-Dependent Dirichlet BCs ($\beta=1$). In brackets the estimated orders of convergence (by halving both the spatial resolution $h$ and the time-stepsize $\tau$, keeping the ratio $r=\tau/h$ constant).}\label{table-linear2D-2} \end{table}
\begin{table}[h!] \centering
\begin{tabular}{|l|l|l|l|} \hline
$h$ & $\begin{array}{c}\mbox{\sf AMF$_1$-Rad} \;({p})\\ \tau/h= 1\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_2$-Rad} \;({p})\\ \tau/h= 2\end{array}$ &
$\begin{array}{c}\mbox{\sf AMF$_3$-Rad} \;({p})\\ \tau/h= 3\end{array}$ \\ \hline $\frac{1}{24}$ & $ \delta_2=3.40 \; (2.03)$ & $ \delta_2=4.31 \; (2.96)$ & $ \delta_2=4.53 \; (2.69)$ \\\hline $\frac{1}{48}$ & $ \delta_2=4.01 \; (2.03)$ & $ \delta_2=5.20 \; (2.96)$ & $ \delta_2=5.34 \; (2.59)$ \\\hline$\frac{1}{96}$ & $ \delta_2=4.62 \; (--)$ & $ \delta_2=6.09 \; (--)$ & $ \delta_2=6.12 \; (--)$ \\\hline \end{tabular} \caption{\scriptsize Significant correct digits ($l_{2,h}$-norm) for the 3D-PDE problem with Time-Independent Dirichlet BCs ($\beta=0$). In brackets the estimated orders of convergence (by halving both the spatial resolution $h$ and the time-stepsize $\tau$, keeping the ratio $r=\tau/h$ constant).}\label{table-linear3D-1} \end{table}
In conclusion, the convergence results presented in Theorem \ref{sev-th-7} seem to be sharp for 2D-parabolic problems, and they still hold for $d$D-parabolic problems ($d > 2$) when Time-Independent Boundary conditions are considered; the proof of this fact requires some additional work and is not presented here. On the other hand, the convergence orders deteriorate considerably when Time-Dependent Boundary conditions are considered. For this situation, however, we have developed a very simple technique (the Boundary Correction Technique) that recovers the convergence orders obtained with Time-Independent Boundary conditions. The explanation of the Boundary Correction Technique and the proof of the corresponding convergence orders require some extra length and will be the subject of another paper.
It is also important to remark that, although in Theorem \ref{sev-th-7} we have considered second-order central differences for the spatial discretization, the convergence results also hold for most of the usual spatial discretizations, as long as they are stable and consistent of order $r\ge 1$. Numerical experiments carried out by the authors seem to indicate that the convergence results also hold for many classes of non-linear problems.
\end{document} |
\begin{document}
\title{Revisiting Maximum Satisfiability and Related Problems in Data Streams \footnote{An extended abstract of this paper will appear in the 28th International Computing and Combinatorics Conference (COCOON 2022). }}
\author{Hoa T. Vu \footnote{San Diego State University, San Diego, CA, USA. Email: [email protected].}} \date{}
\maketitle
\begin{abstract}
We revisit the maximum satisfiability problem (\textup{Max-SAT}\xspace) in the data stream model. In this problem, the stream consists of $m$ clauses that are disjunctions of literals drawn from $n$ Boolean variables. The objective is to find an assignment to the variables that maximizes the number of satisfied clauses. Chou et al. (FOCS 2020) showed that $\Omega(\sqrt{n})$ space is necessary to yield a $\sqrt{2}/2+\varepsilon$ approximation of the optimum value; they also presented an algorithm that yields a $\sqrt{2}/2-\varepsilon$ approximation of the optimum value using $O(\varepsilon^{-2}\log n)$ space.
In this paper, we focus not only on approximating the optimum value, but also on obtaining the corresponding Boolean assignment using sublinear $o(mn)$ space. We present randomized single-pass algorithms that w.h.p. \footnote{W.h.p. denotes ``with high probability''. Here, we consider $1-1/\poly(n)$ or $1-1/\poly(m)$ as high probability.} yield: \begin{itemize} \item A $1-\varepsilon$ approximation using $\tilde{O}(n/\varepsilon^3)$ space and exponential post-processing time. \item A $3/4-\varepsilon$ approximation using $\tilde{O}(n/\varepsilon)$ space and polynomial post-processing time. \end{itemize} Our ideas also extend to dynamic streams. On the other hand, we show that the streaming $\textup{$k$-SAT}\xspace$ problem, which asks to decide whether all size-$k$ input clauses can be satisfied, must use $\Omega(n^k)$ space.
We also consider the related minimum satisfiability problem ($\textup{Min-SAT}\xspace$), introduced by Kohli et al. (SIAM J. Discrete Math. 1994), which asks to find an assignment that minimizes the number of satisfied clauses. For this problem, we give an $\tilde{O}(n^2/\varepsilon^2)$-space algorithm, which is sublinear when $m = \omega(n)$, that yields an $\alpha+\varepsilon$ approximation, where $\alpha$ is the approximation guarantee of the offline algorithm. If each variable appears in at most $f$ clauses, we show that a $2\sqrt{fn}$ approximation using $\tilde{O}(n)$ space is possible.
Finally, for the \textup{Max-AND-SAT}\xspace problem where clauses are conjunctions of literals, we show that any single-pass algorithm that approximates the optimal value up to a factor better than 1/2 with success probability at least $2/3$ must use $\Omega(mn)$ space.
\end{abstract}
\section{Introduction}
\paragraph{Problems overview.} The Boolean satisfiability problem (\textsf{SAT}\xspace) is one of the most famous problems in computer science. A satisfiability instance is a conjunction of $m$ clauses $C_1 \land C_2 \land \ldots \land C_m$ where each clause $C_j$ is a disjunction of literals drawn from a set of $n$ Boolean variables $x_1,\ldots,x_n$ (a literal is either a variable or its negation). Deciding whether such an expression is satisfiable is \textup{NP-Complete} \cite{Cook71,Trakhtenbrot84}. When each clause has size exactly $k$, this is known as the $\textup{$k$-SAT}\xspace$ problem.
In the optimization version, one aims to find an assignment to the variables that maximizes the number of satisfied clauses. This is known as the maximum satisfiability problem (\textup{Max-SAT}\xspace). This problem is still \textup{NP-Hard} even when each clause has at most two literals \cite{GareyJS76}. However, \textup{Max-SAT}\xspace can be approximated up to a factor $3/4$ using linear programming (LP) \cite{GW94}, network flow \cite{Y94}, or a careful greedy approach \cite{PSWZ17}. In polynomial time, one can also obtain an approximation slightly better than $3/4$ using semidefinite programming (SDP) \cite{GW95,ABZ05}. H{\aa}stad showed the inapproximability result that, unless $\textup{P}=\textup{NP}$, there is no polynomial-time algorithm that yields an approximation better than $21/22$ for \textup{Max-2-SAT}\xspace \cite{Hastad01}.
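As background for these approximation factors (a classical fact, not a contribution of the paper under discussion): a uniformly random assignment satisfies a clause whose $k$ literals involve $k$ distinct variables with probability $1-2^{-k}\ge 1/2$, so in expectation at least half of the clauses are satisfied. A short sketch (the helper name is ours) computing this expectation exactly:

```python
from fractions import Fraction

def expected_satisfied(clauses):
    """Expected number of clauses satisfied by a uniform random assignment.
    Assumes the literals of each clause involve distinct variables, so a
    clause with k literals fails with probability exactly 2^{-k}."""
    return sum(1 - Fraction(1, 2) ** len(c) for c in clauses)

clauses = [[1, -2], [2, 3, -4], [-1]]                   # sizes 2, 3, 1
assert expected_satisfied(clauses) == Fraction(17, 8)   # 3/4 + 7/8 + 1/2
assert 2 * expected_satisfied(clauses) >= len(clauses)  # at least m/2 in expectation
```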
A related problem is the minimum satisfiability problem ($\textup{Min-SAT}\xspace$) which was introduced by Kohli et al. \cite{KKM94}. In this problem, the goal is to minimize the number of satisfied clauses. They showed that this problem is \textup{NP-Hard} and gave a simple randomized 2-approximation. Marathe and Ravi \cite{MaratheR96} showed that \textup{Min-SAT}\xspace is equivalent to the minimum vertex cover problem and therefore an approximation factor better than 2 in polynomial time is unlikely. Better approximations for $\textup{Min-$k$-SAT}\xspace$ for small values of $k$ have also been developed by Avidor and Zwick \cite{AvidorZ02}, Bertsimas et al. \cite{BertsimasTV99}, and Arif et al. \cite{ArifBGK20} using linear and semidefinite programming.
In this paper, we also consider another related optimization problem \textup{Max-AND-SAT}\xspace. This problem is similar to \textup{Max-SAT}\xspace except that each clause is a conjunction of literals (as opposed to being a disjunction of literals in \textup{Max-SAT}\xspace). Trevisan studied this problem in the guise of parallel algorithms \cite{T98}. We aim to understand the space complexity of \textup{Max-SAT}\xspace, \textup{Min-SAT}\xspace, \textup{$k$-SAT}\xspace, and \textup{Max-AND-SAT}\xspace in the streaming model.
\paragraph{The data stream model.} In this setting, clauses are presented one by one in the stream in an online fashion and the objective is to use sublinear space $o(mn)$ while obtaining a guaranteed non-trivial approximation.
\paragraph{Motivation and past work.} Constraint satisfaction problems and their optimization counterparts have recently received notable attention in the data stream model. Some examples include vertex coloring \cite{BeraCG20,AssadiCK19}, \textup{Max-2-AND} \cite{GVV17}, \textup{Max-Boolean-CSPs} and \textup{Max-$k$-SAT} \cite{ChouGV20, chou2021approximability}, \textup{Min-Ones $d$-SAT} \cite{AgrawalBBBCMM0019}, and \textup{Max-2-XOR} \cite{KK19}.
In terms of applications, \textsf{SAT}\xspace, \textup{Max-SAT}\xspace, and \textup{Min-SAT}\xspace have been used in model-checking, software package management, design debugging, AI planning, bioinformatics, combinatorial auctions, etc. \cite{JSS00,ABLSR10,CSMV10,AM07,HPJED17,C08,LM06,JMJI15, LiZMS11, M08, SZGN17}. Many of these applications have inputs that are too large to fit in a single machine's main memory. Furthermore, in many applications, we need to run multiple instances of \textup{Max-SAT}\xspace and hence saving memory could be valuable.
Examples of large \textup{Max-SAT}\xspace benchmarks that arise from real-world applications can be found at \cite{MaxSATbenchmark}. This motivates us to study this problem in the streaming setting that aims to use sublinear memory.
\textup{Max-SAT}\xspace and \textup{Max-AND-SAT}\xspace were also studied by Trevisan \cite{T98} in the guise of parallel algorithms. Trevisan showed that there is a parallel algorithm that finds a $3/4-\varepsilon$ approximation to \textup{Max-SAT}\xspace in $O(\poly(1/\varepsilon,\log m))$ time using $O(n+m)$ processors \cite{T98}. Our results here show that it suffices to use $O(n)$ processors.
The most relevant result is by Chou et al. \cite{ChouGV20}. They showed that $\Omega(\sqrt{n})$ space is required to yield a $\sqrt{2}/2+\varepsilon$ approximation of the optimum value of \textup{Max-$k$-SAT}\xspace for $k\geq 2$; they also presented an algorithm that yields a $\sqrt{2}/2-\varepsilon$ approximation of the optimum value of \textup{Max-SAT}\xspace using $O(\varepsilon^{-2}\log n)$ space.
In many cases, we want to not only approximate the optimum value, but also output the corresponding Boolean assignment which is an objective of this work. It is worth noting that storing the assignment itself requires $\Omega(n)$ space. While our algorithms use more space, this allows us to actually output the assignment and to obtain a better approximation.
To the best of our knowledge, unlike \textup{Max-SAT}\xspace, there is no prior work on $\textsf{SAT}\xspace,\textup{Min-SAT}\xspace$, and $\textup{Max-AND-SAT}\xspace$ in the streaming model.
\paragraph{Main results.} Hereinafter, the memory use is measured in terms of bits. All randomized algorithms succeed w.h.p. For \textup{Max-SAT}\xspace, we show that it is possible to obtain a non-trivial approximation using only $\tilde{O}(n)$ space \footnote{$\tilde{O}$ hides $\polylog$ factors.} which is roughly the space needed to store the output assignment. Throughout this paper, algorithms actually output an assignment to the variables along with an estimate for the number of satisfied clauses. In this paper, we solely focus on algorithms that use a single pass over the stream. Furthermore, unless stated otherwise, we assume insertion-only streams.
The algorithms for \textup{Max-SAT}\xspace rely on two simple observations. If $m =\omega(n/\varepsilon^2)$ and we sample $\Theta(n/\varepsilon^2)$ clauses uniformly at random, then w.h.p. an $\alpha$ approximation on the sampled clauses corresponds to an $\alpha-\varepsilon$ approximation on the original input. Moreover, if a clause is large, it can be satisfied w.h.p. as long as each literal is set to $\true$ independently with a not-too-small probability and therefore we may ignore such clauses. This second observation also allows us to extend our result to insertion-deletion (dynamic) streams.
Based on the above observations, we proceed by simply ignoring large clauses. Then, among the remaining (small) clauses, we sample $\Theta(n/\varepsilon^2)$ clauses uniformly at random, denoted by $W$. Finally, we run some randomized $\alpha$ approximation algorithm on $W$ in post-processing in which every literal is set to $\true$ with some small probability. This will lead to an $\alpha-\varepsilon$ approximation on the original set of clauses w.h.p.
\paragraph{No-duplicate assumption.} There is a subtlety regarding duplicate clauses, especially for dynamic streams. If two (or more) identical clauses (i.e., duplicates) appear in the stream, should we treat them as one clause or as two separate clauses (equivalently, one clause with weight 2)? This boils down to the choice between an $L_0$ sampler and an $L_1$ sampler: that is, whether one samples a clause uniformly at random as long as it appears in the stream, or with probability proportional to its frequency. To facilitate our discussion, we hereinafter assume that there are no duplicates in the stream.
Our first main results are algorithms for $\textup{Max-SAT}\xspace$ that use space linear in terms of $n$. Note that the space to store the output assignment itself is $\Omega(n)$.
\begin{theorem}\label{thm:max-sat} We have the following randomized streaming algorithms for $\textup{Max-SAT}\xspace$. \begin{itemize} \item A $3/4-\varepsilon$ approximation and a $1-\varepsilon$ approximation for insertion-only streams, using $\tilde{O}(n/\varepsilon)$ and $\tilde{O}(n/\varepsilon^3)$ space respectively. These algorithms have $O(1)$ update time. \item A $3/4-\varepsilon$ approximation and a $1-\varepsilon$ approximation for dynamic streams, using $\tilde{O}(n/\varepsilon)$ and $\tilde{O}(n/\varepsilon^4)$ space respectively. The update time can be made $\tilde{O}(1)$ with an additional $\varepsilon^{-1}\log n$ factor in the space use. \end{itemize} \end{theorem}
The decision problem \textsf{SAT}\xspace is, however, much harder in terms of streaming space complexity. Specifically, we show that $o(m)$ space is generally not achievable. Our lower bound holds even for the decision \textup{$k$-SAT}\xspace problem where each clause has exactly $k$ literals and $m= \Theta((n/k)^k)$.
\begin{theorem}\label{thm:lb-ksat} Suppose $k \leq n/e$. Any single-pass streaming algorithm that solves $\textup{$k$-SAT}\xspace$ with success probability at least 3/4 requires $\Omega(m)$ space where $m = \Theta((n/k)^k)$ w.h.p. \end{theorem} This lower bound for \textup{$k$-SAT}\xspace is tight up to polylogarithmic factors in the following sense. If $k > \lceil \log_2 m \rceil$, we know that it is possible to satisfy all the input clauses via a probabilistic argument. In particular, we can independently assign each variable to $\true$ or $\false$ equiprobably then the probability that a clause is not satisfied is smaller than $1/m$. Hence, a union bound over $m$ clauses implies that the probability that we satisfy all the clauses is positive. On the other hand, if $k < \log m$, storing the entire stream requires $\tilde{O}(m)$ space. Hence, we have an $\tilde{O}(m)$-space algorithm.
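The union-bound calculation behind this dichotomy amounts to checking that $m \cdot 2^{-k} < 1$; the following minimal numeric sketch (the function name is ours) verifies it:

```python
import math

def random_assignment_suffices(m: int, k: int) -> bool:
    """A uniformly random assignment leaves a clause with k literals
    unsatisfied with probability 2**(-k), so the expected number of
    unsatisfied clauses is m * 2**(-k); if this is below 1, some
    assignment satisfies all m clauses (probabilistic method)."""
    return m * 2.0 ** (-k) < 1.0

# As soon as k exceeds log2(m), a satisfying assignment is guaranteed.
for m in (10, 1000, 10 ** 6):
    assert random_assignment_suffices(m, math.floor(math.log2(m)) + 1)
```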
For \textup{Min-SAT}\xspace, we observe that one can obtain a $1+\varepsilon$ approximation in $\tilde{O}(n^2/\varepsilon^2)$ space using a combination of an $F_0$ sketch and brute-force search. However, it is not clear how to run polynomial-time algorithms based on the sketch (see Section \ref{sec:min-sat} for further discussion). We provide another approach that sidesteps this issue entirely.
\begin{theorem}\label{thm:min-sat} Suppose there is an $\alpha$ offline approximation algorithm for $\textup{Min-SAT}\xspace$ that runs in $T$ time. Then, we have a randomized single-pass, $\tilde{O}(n^2/\varepsilon^2)$-space streaming algorithm for \textup{Min-SAT}\xspace that yields an $\alpha + \varepsilon$ approximation and uses $T$ post-processing time. \end{theorem}
\paragraph{Other results.} For \textup{Min-SAT}\xspace, if each variable appears in at most $f$ clauses, then it is possible to obtain a $2\sqrt{fn}$ approximation in $\tilde{O}(n)$ space. On the lower bound side, we show that any streaming algorithm for \textup{Min-SAT}\xspace that decides whether $\opt = 0$ must use $\Omega(n)$ space.
Finally, we present a space lower bound on streaming algorithms that approximate the optimum value of \textup{Max-AND-SAT}\xspace up to a factor $1/2+\varepsilon$. In particular, we show that such algorithms must use $\Omega(mn)$ space.
\paragraph{Notation and preliminaries.} We occasionally use set notation for clauses. For example, we write $x_i \in C_j$ (or $\compl{x_i} \in C_j$) if the {\em literal} $x_i$ (or $\compl{x_i}$ respectively) is in clause $C_j$. Furthermore, $C_j \setminus x_i$ denotes the clause $C_j$ with the literal $x_i$ removed. We often write $\opt(P)$ to denote the optimal value of the problem $P$; when the context is clear, we drop $P$ and just write $\opt$. We write $\poly(x)$ to denote $x^c$ for an arbitrarily large constant $c$; in this paper, the constant $c$ is absorbed in the big-$O$. Throughout this paper, we use $K$ to denote a universal, sufficiently large constant. We assume that our algorithms are provided with $m$, or an upper bound on $m$, in advance; this is necessary since the space often depends on $\log m$.
\paragraph{Chernoff bound.} We use the following version of the Chernoff bound. If $X = \sum_{i=1}^n X_i$ where the $X_i$ are negatively correlated binary random variables and $\prob{X_i=1} = p$, then for any $\eta > 0$, $\prob{\left| X - p n \right| \geq \eta p n } \leq 2 \cdot \exp{-\frac{\eta^2 }{2+\eta} \cdot pn }$.
\paragraph{Organization.} We provide our main algorithms for \textup{Max-SAT}\xspace and \textup{Min-SAT}\xspace in Section \ref{sec:maxsat-algorithms} and Section \ref{sec:min-sat} respectively. The space lower bound results are presented in Section \ref{sec:space-lb}; some discussions and omitted proofs are deferred to the Appendix.
\section{Streaming Algorithms for \textup{Max-SAT}\xspace}\label{sec:maxsat-algorithms} Without loss of generality, we may assume that there are no {\em trivially true} clauses, i.e., clauses that contain both a literal and its negation, and that no clause contains duplicate literals. We show that if we sample $\Theta(n/\varepsilon^2)$ clauses uniformly at random and run a constant-factor approximation algorithm for \textup{Max-SAT}\xspace on the sampled clauses, we obtain roughly the same approximation. Note that if $m \leq K n/\varepsilon^2$, we may skip the sampling step.
\begin{lemma}\label{lem:sampling1} For \textup{Max-SAT}\xspace, an $\alpha$ approximation on $K n/\varepsilon^2 $ clauses sampled uniformly at random corresponds to an $\alpha-\varepsilon$ approximation on the original input clauses with probability at least $1-e^{-n}$. \end{lemma} \begin{proof} We recall the folklore fact that $\opt \geq m/2$. Consider an arbitrary assignment. If it satisfies fewer than $m/2$ clauses, we can invert the assignment to satisfy at least $m/2$ clauses.
Suppose that an assignment $A$ satisfies $m_A$ clauses. Let the number of sampled clauses that $A$ satisfies be $m_A'$ and let $p = Kn/(\varepsilon^2 m)$. For convenience, let $y = m_A$ and $y' = m_A'$. We observe that $\expec{y'} =p y$.
Suppose the assignment $A$ satisfies clauses $C_{\sigma_1},\ldots,C_{\sigma_y}$. We define the indicator variable $X_i = [\text{$C_{\sigma_i}$ is sampled}]$, so that $y' = \sum_{i=1}^y X_i$. Let $\eta = \varepsilon \opt/y$. Since we sample without replacement, the $\{ X_i\}$ are negatively correlated. Appealing to the Chernoff bound, we have \begin{align*}
\prob{|y' - p y| \geq \varepsilon p \opt} & \leq 2 \cdot \exp{-\frac{\eta^2}{2+\eta} py} \\ & \leq 2 \cdot \exp{-\frac{\varepsilon^2 \opt^2/y^2}{2+\varepsilon \opt/y}py} \\ & = 2 \cdot \exp{-\frac{\varepsilon^2 \opt^2}{2y+\varepsilon \opt}p} \leq 2 \cdot \exp{-\varepsilon^2\opt p/3}. \end{align*} The last inequality follows because $3\opt \geq 2 y + \varepsilon \opt$. Therefore, \begin{align*} \prob{m_A' = p m_A \pm \varepsilon p \opt } & \geq 1- 2 \cdot\exp{-\frac{\varepsilon^2 p \opt}{3}} = 1- 2 \cdot \exp{-\frac{\varepsilon^2 \frac{K n}{m \varepsilon^2} \opt}{3}} \nonumber \\ & \geq 1-2 \cdot \exp{\frac{-Kn}{6}} \geq 1-\exp{-100n}. \end{align*} The second inequality follows from the fact that $\opt \geq m/2$. A union bound over $2^n$ distinct assignments implies that with probability at least $1-e^{-n}$, we have $m_A' = p m_A \pm \varepsilon p \opt$ for all assignments $A$.
Suppose an assignment $\tilde{A}$ is an $\alpha$ approximation to \textup{Max-SAT}\xspace on the sampled clauses. Let $A^\star$ be an optimal assignment on the original input clauses. From the above, with probability at least $1-e^{-n}$, we have $pm_{\tilde{A}} + \varepsilon p \opt \geq m_{\tilde{A}}' \geq \alpha m_{A^\star}' \geq \alpha \opt p(1-\varepsilon) $. Hence, $m_{\tilde{A}} \geq \alpha \opt (1-\varepsilon) - \varepsilon \opt \geq (\alpha - 2 \varepsilon) \opt$. Reparameterizing $\varepsilon \leftarrow \varepsilon/2$ completes the proof. \end{proof}
Note that storing a clause may require $\Omega(n)$ space and hence the space use can still be $\Omega(n^2/\varepsilon^2)$ after sampling. We then observe that large clauses are probabilistically easy to satisfy. We define $\beta$-large clauses as clauses that have at least $\beta$ literals.
\begin{lemma} \label{lem:large-clauses} If each literal is set to $\true$ independently with probability at least $\gamma$, then the assignment satisfies all $(K\log m)/\gamma$-large clauses with probability at least $1-1/\poly(m)$. \end{lemma} \begin{proof} We can safely discard clauses that contain both $x$ and $\overline{x}$ since they are trivially true. Thus, the literals in each clause are set to $\true$ independently with probability at least $\gamma$. The probability that a $(K \log m)/\gamma$-large clause is not satisfied is at most \[ (1-\gamma)^{(K \log m)/\gamma} \leq e^{-K \log m} \leq \frac{1}{\poly(m)}, \] which implies, by a union bound over at most $m$ large clauses, that all such clauses are satisfied with probability at least $1-1/\poly(m)$. \end{proof}
From the above observations, we state a simple meta algorithm (Algorithm \ref{alg:meta1}), that can easily be implemented in several sublinear settings. We will then present two possible post-processing algorithms and the corresponding $\gamma$ values.
~
\begin{algorithm} \setstretch{1.35} \DontPrintSemicolon \caption{A meta algorithm for sublinear \textup{Max-SAT}\xspace}\label{alg:meta1} Ignore all $\beta$-large clauses where $\beta = \frac{K \log m}{\gamma}$. Among the remaining clauses, sample and store $ K n/\varepsilon^2$ clauses uniformly at random. Call this set $W$. \\ {\bf Post-processing:} Run an $\alpha$ approximation to \textup{Max-SAT}\xspace on the collected clauses $W$ where each literal is set to $\true$ independently with probability at least $\gamma$.\\ \end{algorithm}
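As an illustration, the streaming phase of Algorithm \ref{alg:meta1} for insertion-only streams can be sketched in a few lines of Python (the function name and the representation of clauses as tuples of literals are ours; post-processing is run separately on the returned reservoir $W$):

```python
import random

def stream_phase(clauses, beta, sample_size, rng=random):
    """Streaming phase of the meta algorithm for insertion-only
    streams: drop beta-large clauses and keep a uniform sample of
    sample_size small clauses via reservoir sampling."""
    W = []      # reservoir of sampled small clauses
    seen = 0    # number of small clauses seen so far
    for clause in clauses:           # clause: tuple of literals
        if len(clause) >= beta:      # beta-large clause: ignore it
            continue
        seen += 1
        if len(W) < sample_size:
            W.append(clause)
        else:
            j = rng.randrange(seen)  # classic reservoir-sampling step
            if j < sample_size:
                W[j] = clause
    return W
```

Under the no-duplicate assumption, replacing the reservoir by $L_0$ samplers gives the dynamic-stream variant.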
Let $L$ and $S$ be the set of $\beta$-large and small clauses respectively. Furthermore, let $\opt_L$ and $\opt_S$ be the number of satisfied clauses in $L$ and $S$ respectively in the optimal assignment.
\paragraph{Post-processing algorithm 1: An exponential-time $1-\varepsilon$ approximation.}
Here, we set $\gamma = \varepsilon$. Suppose in post-processing, we run the exact algorithm to find an optimal assignment $A^\star$ on the set of collected clauses $W$. Ideally, we would like to apply Lemma \ref{lem:sampling1} to argue that we obtain a $1-\varepsilon$ approximation on the small clauses in $S$ w.h.p., and then apply Lemma \ref{lem:large-clauses} to argue that we satisfy all the large clauses in $L$ w.h.p. However, the second claim requires that each literal is set to $\true$ with probability at least $\varepsilon$, whereas the exact algorithm is deterministic. Our trick is to randomly perturb the assignment given by the exact algorithm.
If the exact algorithm sets $x_i = q \in \{\true,\false\}$, we set $x_i = q$ with probability $1-\varepsilon$ (and $x_i = \overline{q}$ with probability $\varepsilon$). We will show that this yields a $1-\varepsilon$ approximation in expectation, which can then be repeated $O(\frac{\log m}{\varepsilon})$ times to obtain the ``w.h.p.'' guarantee. ~
\begin{algorithm} \setstretch{1.35}
\DontPrintSemicolon \caption{A $1-2\varepsilon$ approximation post-processing}\label{alg:processing1} Obtain an optimal assignment $A^\star$ on $W$. Let $Q = \lceil K \varepsilon^{-1}\log m \rceil$. \\ For each trial $t=1,2,\ldots,Q$, if {$x_i = q$ in $A^\star$}, then {set $x_i = q$ with probability $1-\varepsilon$ and $x_i = \compl{q}$ with probability $\varepsilon$ in assignment $A_t$.} \\ Return the assignment $A_t$ that satisfies the largest number of clauses in $W$. \end{algorithm}
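A Python sketch of Algorithm \ref{alg:processing1} may clarify the perturbation step (the helper names and the encoding of a literal as a signed integer are ours; the exact assignment on $W$ is assumed to be given):

```python
import math
import random

def count_satisfied(clauses, assignment):
    """Clause: list of ints, +i for x_i and -i for its negation."""
    return sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def perturb_and_pick(W, exact, eps, K=10, rng=random):
    """Flip each value of the exact assignment with probability eps,
    repeat Q = ceil(K * log(m) / eps) times, and keep the perturbed
    assignment that satisfies the most clauses of W."""
    Q = math.ceil(K * math.log(max(len(W), 2)) / eps)
    best, best_val = None, -1
    for _ in range(Q):
        A = {i: (q if rng.random() < 1 - eps else not q)
             for i, q in exact.items()}
        v = count_satisfied(W, A)
        if v > best_val:
            best, best_val = A, v
    return best
```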
Before analyzing the above post-processing algorithm, we observe that if we obtain an $\alpha \geq 1/2$ approximation in expectation, we can repeat the corresponding algorithm $O(\varepsilon^{-1}\log m)$ times and choose the best solution to obtain an $\alpha-\varepsilon$ approximation w.h.p.
\begin{lemma}\label{lem:whp} An $\alpha \geq 1/2$ approximation to \textup{Max-SAT}\xspace in expectation can be repeated $O(\varepsilon^{-1}\log m)$ times to yield an $\alpha-\varepsilon$ approximation w.h.p. \end{lemma}
We now show that post-processing Algorithm \ref{alg:processing1} yields a $1-3\varepsilon$ approximation (hence a $1-\varepsilon$ approximation after reparameterizing $\varepsilon$). \begin{lemma}\label{lem:post-processing-1}
Algorithm \ref{alg:processing1} yields a $1-3\varepsilon$ approximation on the original input w.h.p. \end{lemma} \begin{proof} Clearly, in each trial $t=1,2,\ldots,Q$, each literal is set to $\true$ with probability at least $\varepsilon$. According to Lemma \ref{lem:large-clauses}, each assignment $A_t$ satisfies all the large clauses in $L$ with probability at least $1-1/\poly(m)$. Taking a union bound over $Q < m$ trials, we conclude that all assignments $A_t$ satisfy all the clauses in $L$ with probability at least $1-1/\poly(m)$.
Next, let $B$ be the set of clauses in $W$ satisfied by the exact assignment $A^\star$. Consider any trial $t$. The expected number of satisfied clauses in $B$ after we randomly perturb the exact assignment is \[
\sum_{C \in B} \prob{\text{$C$ is satisfied}} \geq \sum_{C \in B} (1-\varepsilon) = (1-\varepsilon)|B|. \] The first inequality follows from the observation that at least one literal in $C$ must be $\true$ in the exact assignment and it remains $\true$ with probability at least $1-\varepsilon$. The assignment returned by post-processing Algorithm \ref{alg:processing1} yields a $1-2\varepsilon$ approximation on $W$ with probability at least $1-1/\poly(m)$ by Lemma \ref{lem:whp}. This, in turn, implies that we obtain a $1-3\varepsilon$ approximation on $S$ with probability at least $1-1/\poly(m)-e^{-n} \geq 1-1/\poly(m)-1/\poly(n)$ by Lemma \ref{lem:sampling1}.
Therefore, we satisfy at least $(1-3\varepsilon)\opt_S + \opt_L \geq (1-3\varepsilon)\opt$ clauses with probability at least $ 1-1/\poly(m)-1/\poly(n)$. \end{proof} \paragraph{Post-processing algorithm 2: A polynomial-time $3/4-\varepsilon$ approximation.} We can set $\beta = K \log m$ if we settle for a $3/4-\varepsilon$ approximation. This saves a factor $1/\varepsilon$ in the memory use. Consider the standard linear programming (LP) formulation for \textup{Max-SAT}\xspace given by Goemans and Williamson \cite{GW94}. \begin{align*} (LP) ~~ \text{maximize} & ~~ \sum_{j=1}^m z_j \\
\text{subject to} & ~~ \sum_{i \in P_j} y_i + \sum_{i \in N_j} (1-y_i) \geq z_j && \text{ for all $1 \leq j \leq m $}\\
& 0 \leq y_i,z_j \leq 1 && \text{ for all $1\leq i \leq n, 1\leq j \leq m$~.} \end{align*}
The integer linear program where $y_i,z_j \in \{0,1\}$ corresponds exactly to the \textup{Max-SAT}\xspace problem. In particular, if $x_i \in C_j$ then $i \in P_j$ and if $\overline{x_i} \in C_j$ then $i \in N_j$. We associate $y_i=1$ with $x_i$ being set to $\true$ and $y_i=0$ with it being set to $\false$. Similarly, we associate $z_j = 1$ with clause $C_j$ being satisfied and 0 otherwise. Let $\opt(LP)$ denote the optimum of the above LP.
\begin{lemma} [\cite{GW94}, Theorem 5.3]\label{lem:rounding1} The optimal fractional solution of the LP for \textup{Max-SAT}\xspace can be rounded to yield a 3/4 approximation in expectation by independently setting each variable to $\true$ with probability $1/4+y_i^\star/2$ where $y_i^\star$ is the value of $y_i$ in $\opt(LP)$. \end{lemma}
\begin{algorithm} \setstretch{1.35}
\DontPrintSemicolon \caption{A $3/4-2\varepsilon$ approximation post-processing }\label{alg:processing2} Obtain an optimal solution $z^\star,y^\star$ for the linear program of \textup{Max-SAT}\xspace on $W$. \\ Let $Q = \lceil K \varepsilon^{-1}\log m \rceil$. For each trial $t=1,2,\ldots,Q$, set $x_i = \true$ with probability $1/4+y_i^\star/2$ in assignment $A_t$. \\ Return the assignment $A_t$ that satisfies the largest number of clauses in $W$. \end{algorithm}
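Since the variables are rounded independently, the expectation in Lemma \ref{lem:rounding1} can be computed exactly; the following sketch (the clause encoding as signed integers is ours) illustrates the $3/4$ guarantee on a toy instance:

```python
def expected_satisfied(clauses, y_star):
    """Exact expected number of satisfied clauses when each x_i is set
    to True independently with probability 1/4 + y_star[i]/2.
    Clause: list of ints, +i for x_i and -i for its negation."""
    total = 0.0
    for clause in clauses:
        p_unsat = 1.0
        for l in clause:
            p_true = 0.25 + y_star[abs(l)] / 2
            p_unsat *= (1 - p_true) if l > 0 else p_true
        total += 1 - p_unsat
    return total

# Instance (x_1), (not x_1): the LP optimum is 1 (take y_1 = 1/2), and
# the rounded expectation is 0.5 + 0.5 = 1 >= (3/4) * opt(LP).
assert expected_satisfied([[1], [-1]], {1: 0.5}) >= 0.75
```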
The following lemma and its proof are analogous to Lemma \ref{lem:post-processing-1}.
\begin{lemma}\label{lem:post-processing-2} Algorithm \ref{alg:processing2} yields a $3/4-3\varepsilon$ approximation on the original input w.h.p. \end{lemma}
\paragraph{Handling deletions.} To support deletions, we use the $L_0$ sampler in \cite{JayaramW18, JST11}. Suppose each stream token is an insertion or deletion of an entry of a vector $v$ of size $N$; the $L_0$ sampler then returns a non-zero coordinate $j$ uniformly at random. The sampler succeeds with probability at least $1-\delta$ and uses $O(\log^2 N \cdot \log (1/\delta))$ space. We can use a vector of size $N = 2^{2n}$ to encode the characteristic vector of the set of clauses we have at the end of the stream. However, this results in an additional factor of $n^2$ in the space use. Fortunately, since we only consider small clauses of size at most $\beta$, we can use a vector of size \[
N = {2n \choose 1} + {2n \choose 2} + \ldots + {2n \choose \beta} \leq \sum_{i=1}^{\beta} (2n)^i \leq O((2n)^{\beta}). \] Thus, to sample a small clause with probability at least $1-1/\poly(n)$, we need $O(\beta^2 \log^3 n )$ space. To sample $K n/\varepsilon^2$ small clauses, the space is $\tilde{O}(n \beta^2/ \varepsilon^2 )$. Finally, to sample clauses without replacement, one may use the approach described by McGregor et al. \cite{MTVV15} that avoids an increase in space.
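For concreteness, one simple (hypothetical) way to map a small clause to a coordinate of such a vector is to pad its sorted literal ids to length $\beta$ and read them as digits in base $2n+1$, which stays within the $O((2n)^\beta)$ bound:

```python
def clause_to_index(clause, n, beta):
    """Encode a clause of at most beta literals (ids in 1..2n) as a
    unique coordinate in a vector of size (2n+1)**beta.  Sorting makes
    the encoding canonical; digit 0 marks an unused slot."""
    lits = sorted(clause)
    assert len(lits) <= beta and all(1 <= l <= 2 * n for l in lits)
    idx = 0
    for pos in range(beta):
        digit = lits[pos] if pos < len(lits) else 0
        idx = idx * (2 * n + 1) + digit
    return idx
```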
A naive implementation to sample $s$ clauses is to run $s$ different $L_0$ samplers in parallel which results in an update time of $\tilde{O}(s)$. However, \cite{MTVV15} showed that it is possible to improve the update time to $\tilde{O}(1)$ with an additional factor $\log N = O(\beta \log n)$ in the space use.
\paragraph{Putting it all together.} We finalize the proof of the first main result by outlining the implementation of the algorithms above in the streaming model. We further present an improvement to the $3/4-\varepsilon$ approximation that saves another factor of $1/\varepsilon$ under the no-duplicate assumption.
\begin{proof}[Proof of Theorem \ref{thm:max-sat} (1)] We ignore $\beta$-large clauses during the stream. Among the remaining clauses, we can sample and store $\Theta(n/\varepsilon^2 )$ small clauses in the stream uniformly at random as described.
For insertion-only streams, we may use reservoir sampling \cite{V85} to sample clauses. Since storing each small clause requires $\tilde{O}(\beta)$ space, the total space is $\tilde{O}(n \beta /\varepsilon^2)$. We can run post-processing Algorithm \ref{alg:processing1} with $\beta = K \varepsilon^{-1} \log m$ to obtain a $1-\varepsilon$ approximation; alternatively, we set $\beta = K \log m$ and run the polynomial-time post-processing Algorithm \ref{alg:processing2} to obtain a $3/4-\varepsilon$ approximation.
We can further improve the dependence on $\varepsilon$ for the $3/4-\varepsilon$ approximation as follows. If the number of 1-literal clauses is at most $\varepsilon m$, we can safely ignore these clauses; since $\opt \geq m/2$, their number is then at most $2\varepsilon \opt$. If we randomly set each variable to $\true$ with probability $1/2$, we obtain a $3/4$ approximation in expectation on the remaining clauses, since each of these clauses has at least two literals and is therefore satisfied with probability at least $1-1/2^2 = 3/4$. By Lemma \ref{lem:whp}, we can run $O(\varepsilon^{-1}\log m)$ copies in parallel and return the best assignment to obtain a $3/4-\varepsilon$ approximation w.h.p. Therefore, the overall approximation is $3/4-3\varepsilon$ and the space is $O(n/\varepsilon \cdot \log m)$.
If the number of 1-literal clauses is more than $\varepsilon m$, then we know that $m \leq 2n/\varepsilon$ since the number of 1-literal clauses is at most $2n$ if there are no duplicate clauses. Since $m \leq 2n/\varepsilon$, the sampling step can be ignored. It then suffices to store only clauses of size at most $K \log m$ and run post-processing Algorithm \ref{alg:processing2}. The space use is $O(m \log m)=\tilde{O}(n/\varepsilon)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:max-sat} (2)] For insertion-deletion streams without duplicates, we sample clauses using the $L_0$ sampler as described earlier. We can run post-processing Algorithm \ref{alg:processing1} with $\beta = K \varepsilon^{-1}\log m$ to obtain a $1-\varepsilon$ approximation while using $\tilde{O}(n/\varepsilon^4)$ space. Alternatively, we set $\beta = K \log m$ and run the polynomial-time post-processing Algorithm \ref{alg:processing2} to obtain a $3/4-\varepsilon$ approximation. This leads to an $\tilde{O}(n/\varepsilon)$-space algorithm that yields a $3/4-\varepsilon$ approximation. \end{proof}
We remark that it is possible to have an approximation slightly better than $3/4-\varepsilon$ in polynomial post-processing time using semidefinite programming (SDP) instead of linear programming \cite{GW95}. In the SDP-based algorithm that obtains a $0.7584-\varepsilon$ approximation, we also have the property that each literal is true with probability at least some constant. The analysis is analogous to what we outlined above.
\section{Streaming Algorithms for \textup{Min-SAT}\xspace}\label{sec:min-sat} In this section, we provide several algorithms for \textup{Min-SAT}\xspace in the streaming setting. One can show that an $F_0$ sketch immediately gives a $1+\varepsilon$ approximation. However, the drawback of this approach is its inability to accommodate polynomial-time algorithms that yield approximations worse than $1+\varepsilon$. See Appendix \ref{app:F0-minsat} for a detailed discussion.
\paragraph{The subsampling framework.} We first present a framework that allows us to assume that the optimum value is at most $O(n/\varepsilon^2)$. \begin{lemma}\label{lem:sampling-minsat} Suppose $\frac{K n}{\varepsilon^2} \leq \opt \leq z \leq 2\opt $ and $0 \leq \varepsilon < 1/4$. An $\alpha$ approximation to \textup{Min-SAT}\xspace on clauses sampled independently with probability $p = \frac{K n}{\varepsilon^2 z}$ corresponds to an $\alpha+\varepsilon$ approximation on the original input w.h.p. Furthermore, the optimum of the sampled clauses $\opt' = O(n/\varepsilon^2)$. \end{lemma}
Based on the above lemma, we can run any $\alpha$ approximation algorithm on clauses sampled with probability $p = \frac{K n}{\varepsilon^2 z}$ and obtain an almost equally good approximation. Furthermore, the optimum value on these sampled clauses is at most $O(n/\varepsilon^2)$.
Since we do not know $z$, we can run different instances of our algorithm corresponding to the guesses $z=1,2,4,\ldots, 2m$. At least one of these guesses satisfies $\opt \leq z \leq 2\opt$. The instances that correspond to wrong guesses may use too much space; in that case, we simply terminate them. We then return the best solution among the instances that were not terminated. Hereinafter, we may safely assume that $\opt = O(n/\varepsilon^2)$.
\begin{proof}[Proof of Theorem \ref{thm:min-sat}] We first show that an $\tilde{O}(n^2/\varepsilon^2)$-space algorithm exists. Let ${\mathcal{C}}(\ell) = \{ C_j : \ell \in C_j \}$ be the set of clauses that contain the literal $\ell$. It must be the case that either all clauses in ${\mathcal{C}}(x_i)$ are satisfied or all clauses in ${\mathcal{C}}({ \compl{x_i}})$ are satisfied. For the time being, assume we have an upper bound $u$ on $\opt$.
Originally, all variables are marked as unsettled. If at any point during the stream, $|{\mathcal{C}}(x_i)| > u$, we can safely set $x_i \leftarrow \false$; otherwise, $\opt > u$ which is a contradiction. We then say the variable $x_i$ is settled. The case in which $|{\mathcal{C}}(\compl{x_i})| > u$ is similar.
When a clause $C_j$ arrives, if $C_j$ contains a settled literal $\ell$ that was set to $\true$, we may simply ignore $C_j$ since it must be satisfied by any optimal solution. On the other hand, if the fate of $C_j$ has not been decided yet, we store $C_j \setminus Z$ where $Z$ is the set of settled literals in $C_j$ that were set to $\false$ at this point. For example, if $C_j = (x_1 \lor x_2 \lor \compl{x_3})$ and $x_1 \leftarrow \false$ at this point while $x_2$ and $x_3$ are unsettled, then we simply store $C_j = (x_2 \lor \compl{x_3})$.
Since we store the input induced by unsettled variables and each unsettled variable appears in at most $2u$ clauses (positively or negatively), the space use is $\tilde{O}(nu)$. By using the aforementioned subsampling framework, we may assume that $u = O(n/\varepsilon^2)$. Thus, the total space is $\tilde{O}(n^2/\varepsilon^2)$. \end{proof}
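The storage scheme in the proof above can be sketched as follows (the representation of clauses as sets of signed integers and the pruning pass at the end are ours):

```python
from collections import defaultdict

def min_sat_stream(clauses, u):
    """One-pass storage scheme: a literal occurring in more than u
    clauses is settled to False (otherwise opt > u), clauses satisfied
    by a settled literal are dropped, and stored clauses keep only
    unsettled literals.  Clause: set of ints, +i for x_i, -i for the
    negation."""
    count = defaultdict(int)
    false_lits = set()              # literals settled to False
    stored = []
    for C in clauses:
        for l in C:
            count[l] += 1
            if count[l] > u:
                false_lits.add(l)   # l = True would satisfy > u clauses
        if any(-l in false_lits for l in C):
            continue                # C is satisfied by a settled literal
        stored.append(frozenset(C) - false_lits)
    # prune clauses stored before their literals were settled
    stored = [C - false_lits for C in stored
              if not any(-l in false_lits for l in C)]
    return false_lits, stored
```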
If each variable appears in at most $f$ clauses, we have a slightly non-trivial approximation that uses less space. \begin{theorem}\label{thm:min-sat2} Suppose each variable appears in at most $f$ clauses. There exists a single-pass, $\tilde{O}(n)$-space algorithm that yields a $2\sqrt{fn}$ approximation to \textup{Min-SAT}\xspace. \end{theorem} \begin{proof} It is easy to check if $\opt = 0$ in $\tilde{O}(n)$ space. For each variable $x_i$, we keep track of whether it always appears positively (or negatively); if so, it is safe to set $x_i \leftarrow \false$ (or $x_i \leftarrow \true$ respectively). If that is the case for all variables, then $\opt = 0$. We run this in parallel with our algorithm that handles the case $\opt > 0$.
Now, assume $\opt \geq 1$. Since each variable appears in at most $f$ clauses, there are fewer than $\sqrt{fn}$ clauses of size larger than $\sqrt{fn}$. We ignore these large clauses in the input stream. Let $S_{x_i} = \{ C_j : |C_j| \leq \sqrt{fn} \text{ and } x_i \in C_j \}$. For each variable $x_i$, if $|S_{x_i}| \leq |S_{\compl{x_i}}|$, then we set $x_i \leftarrow \true$; otherwise, we set $x_i \leftarrow \false$. Let $\ell_i = x_i $ if $x_i$ was set to $\true$ and $\ell_i = \compl{x_i} $ if $x_i$ was set to $\false$ by our algorithm. Furthermore, let $\ell_i^\star = x_i $ if $x_i$ was set to $\true$ and $\ell_i^\star = \compl{x_i} $ if $x_i$ was set to $\false$ in the optimal assignment. Let $\opt_S$ be the number of small clauses that are satisfied by the optimal assignment and let $\opt$ be the number of clauses that are satisfied by the optimal assignment.
Note that $\sum_{i=1}^n |S_{\ell_i^\star}| \leq \sqrt{fn} \opt_S$ since each small clause belongs to at most $\sqrt{fn}$ different sets $S_{\ell_i^\star}$. The number of clauses that our algorithm satisfies is at most \[
\sqrt{fn} + \sum_{i=1}^n |S_{\ell_i}| \leq \sqrt{fn} + \sum_{i=1}^n |S_{\ell_i^\star}| \leq \sqrt{fn}\opt + \sqrt{fn} \opt_S
\leq 2\sqrt{fn} \opt. \qedhere
\] \end{proof}
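A sketch of the algorithm from the proof (representation as signed integers is ours): count, for each literal, the small clauses it appears in, then pick for each variable the value whose literal hits no more small clauses.

```python
import math
from collections import defaultdict

def min_sat_bounded_frequency(clauses, n, f):
    """Drop clauses larger than sqrt(f*n), then set each variable to
    the value whose literal appears in fewer small clauses.
    Clause: set of ints, +i for x_i and -i for its negation."""
    thr = math.sqrt(f * n)
    occ = defaultdict(int)          # |S_ell| for each literal ell
    for C in clauses:
        if len(C) <= thr:
            for l in C:
                occ[l] += 1
    # x_i <- True iff the positive literal hits no more small clauses
    return {i: occ[i] <= occ[-i] for i in range(1, n + 1)}
```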
\section{Space Lower Bounds} \label{sec:space-lb}
\paragraph{Lower bound for $\textup{$k$-SAT}\xspace$.} Let us first prove that any streaming algorithm that decides whether a $\textup{$k$-SAT}\xspace$ formula, where $k < \log m$, is satisfiable requires $\Omega(\min\{m, n^k \})$ space. We consider the one-way communication problem $\textup{$k$-SAT}\xspace$ defined as follows. Alice and Bob each have a set of $\textup{$k$-SAT}\xspace$ clauses, and Bob wants to decide whether there is a Boolean assignment to the variables that satisfies all the clauses from both sets. The protocol can be randomized, but the success probability must be at least some large enough constant. Note that a streaming algorithm that solves $\textup{$k$-SAT}\xspace$ yields a protocol for the $\textup{$k$-SAT}\xspace$ communication problem since Alice can send the memory contents of the algorithm to Bob.
We proceed with a simple claim. If $k=2$, then this claim says that there is no assignment that satisfies all of the following clauses $(x \lor y), (\compl{x} \lor y),(x \lor \compl{y}),(\compl{x} \lor \compl{y})$. The claim generalizes this fact for all $k \in \mathbb{N}$.
\begin{claim} \label{clm:1} Consider $k$ Boolean variables $x_1,\ldots,x_k$. For every $k \in \mathbb{N}$, no Boolean assignment simultaneously satisfies all clauses in the set $S_k = \{ (\bigvee_{\ell \in S} \compl{x_\ell} )\bigvee(\bigvee_{\ell' \notin S} {x_{\ell'}} ) : S \subseteq [k] \}$.
\end{claim}
\begin{proof}
The proof is a simple induction on $k$. The base case $k=1$ is trivial since $S_1$ consists of the two clauses $x_1$ and $\compl{x_1}$. Now, consider $k > 1$ and suppose that there is an assignment that satisfies all clauses in $S_k$. Without loss of generality, suppose $x_k = \true$ in that assignment (the case $x_k =\false$ is analogous). Consider the set of clauses $A = \{ \compl{x_k} \vee \phi : \phi \in S_{k-1} \} \subset S_k$. Since $\compl{x_k} =\false$, all clauses in $S_{k-1}$ must be satisfied, which contradicts the induction hypothesis.
\end{proof}
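Claim \ref{clm:1} is also easy to verify exhaustively for small $k$; the sketch below (0-based variable indices are ours) enumerates all $2^k$ assignments against the $2^k$ clauses of $S_k$:

```python
from itertools import combinations, product

def clause_satisfied(assignment, S, k):
    """Clause for subset S: (OR_{i in S} not x_i) OR (OR_{i not in S} x_i)."""
    return any((not assignment[i]) if i in S else assignment[i]
               for i in range(k))

def S_k_unsatisfiable(k):
    """True iff no assignment satisfies all 2^k clauses of S_k."""
    subsets = [frozenset(c) for r in range(k + 1)
               for c in combinations(range(k), r)]
    return not any(
        all(clause_satisfied(a, S, k) for S in subsets)
        for a in product([False, True], repeat=k)
    )
```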
Without loss of generality, we assume $n/k$ is an integer. Now, we consider the one-way communication problem $\textup{Index}\xspace$. In this problem, Alice has a bit-string $A$ of length $t$ where each bit is independently set to 0 or 1 uniformly at random. Bob has a random index $i$ that is unknown to Alice. For Bob to output $A_i$ correctly with probability 2/3, Alice needs to send a message of size $\Omega(t)$ bits \cite{KremerNR99}. We will show that a protocol for $\textup{$k$-SAT}\xspace$ yields a protocol for $\textup{Index}\xspace$.
\begin{proof}[Proof of Theorem \ref{thm:lb-ksat}] Without loss of generality, assume $n/k$ is an integer. Consider the case where Alice has a bit-string $A$ of length ${(n/k)^k}$. For convenience, we index $A$ as $[n/k]\times \ldots \times [n/k] = [n/k]^k$. We consider $n$ Boolean variables $\{x_{a,b} \}_{a \in [k], b \in [n/k]}$.
For each $j \in [n/k]^k$ where $A_j = 1$, Alice generates the clause $(x_{1,j_1} \lor x_{2,j_2} \lor \ldots \lor x_{k,j_k})$. Let $S = \{ (1,i_1),(2,i_2),\ldots,(k,i_k) \}$. Bob, with the index $i \in [n/k]^k$, generates the clauses in \[\left\{ \left(\bigvee_{\ell \in Z} \compl{x_\ell} \right) \lor \left(\bigvee_{\ell' \in S \setminus Z} {x_{\ell'}} \right) : Z \subseteq S \right\} \setminus \{(x_{1,i_1} \lor x_{2,i_2} \lor \ldots \lor x_{k,i_k}) \}.\]
If $A_i = 0$, then all the generated clauses can be satisfied by setting the variables $x_{\ell} = \false$ for all $\ell \in S$ and all remaining variables to $\true$. All the clauses that Bob generated are satisfied since each contains at least one literal among $\compl{x_{1,i_1}},\compl{x_{2,i_2}}, \ldots, \compl{x_{k,i_k}}$. Note that the clause $(x_{1,i_1} \lor x_{2,i_2} \lor \ldots \lor x_{k,i_k})$ does not appear in the stream since $A_i = 0$. For any $j \in [n/k]^k$ with $A_j = 1$ and $(j_1,\ldots,j_k) \neq (i_1,\ldots,i_k)$, there must be some coordinate $z$ with $j_z \neq i_z$, so $(z,j_z) \notin S$. Hence, $x_{z, j_z}=\true$ and therefore the clause $(x_{1,j_1} \lor \ldots \lor x_{k,j_k})$ is satisfied.
If $A_i = 1$, then by Claim \ref{clm:1}, we cannot simultaneously satisfy the clauses \[ \left\{ \left(\bigvee_{l \in Z} \compl{x_l} \right) \lor \left(\bigvee_{l' \in S \setminus Z} {x_{l'}} \right) : Z \subseteq S \right\} \] that were generated by Alice and Bob. Hence, any protocol for $\textup{$k$-SAT}\xspace$ yields a protocol for $\textup{Index}\xspace$ on a string of length $(n/k)^k$, and such a protocol requires $\Omega((n/k)^k)$ bits of communication.
Finally, we check the parameters' range where the lower bound operates. We first show that in the above construction, the number of clauses $m$ concentrates around $\Theta((n/k)^k)$. First, note that for fixed $n$, the function $(n/k)^k$ is increasing in the interval $k \in [1,n/e]$. Since we assume $k \leq n/e$, we have $(n/k)^k \geq n$. Let $m_A$ be the number of clauses Alice generates. Because independently each $A_j = 1$ with probability 1/2, by Chernoff bound, we have \[
\prob{| m_A - 0.5(n/k)^k| > 0.01 (n/k)^k} \leq \exp{-\Omega(({n}/{k})^k)} \leq \exp{-\Omega(n)}. \] Bob generates $m_B = 2^k -1 $ clauses. Thus, w.h.p., $ m = m_A + m_B = \left( 0.5(n/k)^k + 2^k - 1 \right) \pm 0.01 (n/k)^k.$ Note that $2^k \leq (n/k)^k$ since $k \leq n/e < n/2$. As a result, w.h.p., $0.49(n/k)^k -1 \leq m \leq 1.51(n/k)^k +1$ which implies $m=\Theta((n/k)^k)$. \end{proof}
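The reduction in the proof above can be simulated by brute force on tiny instances (the helper functions and parameter choices below are ours, not part of the protocol): the resulting CNF formula should be satisfiable exactly when $A_i = 0$.

```python
from itertools import product

def reduction_clauses(A, i, k, blocks):
    """Clauses generated by Alice and Bob; a literal is (sign, a, b) with
    sign=+1 for x_{a,b} and sign=-1 for its negation."""
    clauses = []
    # Alice: the clause (x_{1,j_1} v ... v x_{k,j_k}) for every j with A_j = 1.
    for j in product(range(blocks), repeat=k):
        if A[j]:
            clauses.append(frozenset((+1, a, j[a]) for a in range(k)))
    # Bob: every sign pattern over S = {(a, i_a)} except the all-positive one.
    for signs in product((-1, +1), repeat=k):
        if all(s == +1 for s in signs):
            continue
        clauses.append(frozenset((signs[a], a, i[a]) for a in range(k)))
    return clauses

def satisfiable(clauses, k, blocks):
    """Brute force over all assignments to the k*blocks variables x_{a,b}."""
    variables = list(product(range(k), range(blocks)))
    for bits in product((False, True), repeat=len(variables)):
        value = dict(zip(variables, bits))
        if all(any(value[(a, b)] == (s > 0) for (s, a, b) in c) for c in clauses):
            return True
    return False
```

On a toy instance with $k = 2$ and $n/k = 2$, the formula is unsatisfiable precisely at the indices $i$ with $A_i = 1$, matching the case analysis in the proof.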
\paragraph{Lower bound for $\textup{Max-AND-SAT}\xspace$.} We show that a $(1/2+\varepsilon)$-approximation for $\textup{Max-AND-SAT}\xspace$ requires $\Omega(mn)$ space. In particular, our lower bound uses the algorithm to distinguish the two cases $\opt =1 $ and $\opt = 2$. This lower bound is also a reduction from the $\textup{Index}\xspace$ problem. Recall that in \textup{Max-AND-SAT}\xspace, each clause is a conjunction of literals. We first need a simple observation.
\begin{claim} \label{clm:clause-systems} Let $T=K \log m$ for a sufficiently large constant $K$. There exists a set of $m$ (conjunctive) clauses $C_1,\ldots,C_m$ over the set of Boolean variables $\{z_1,\ldots,z_T\}$ such that any assignment satisfies at most one clause. \end{claim} \begin{proof} We show this via a probabilistic argument. For each $i = 1,2,\ldots,m$, let $C_i = \bigwedge_{j=1}^{T} l_{i,j}$, where each $l_{i,j}$ is independently set to $z_j$ or $\compl{z_j}$ with equal probability.
Two different clauses $C_i$ and $C_{i'}$ can be satisfied simultaneously if and only if for all $j=1,2,\ldots, T$, the variable $z_j$ appears with the same polarity (i.e., as $z_j$ or as $\compl{z_j}$) in both clauses. This happens with probability $(1/2)^{K \log m} = 1/\poly(m)$. Hence, by a union bound over the ${m \choose 2}$ pairs of clauses, with probability at least $1-1/\poly(m)$ no two clauses can be satisfied simultaneously. \end{proof}
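Since each clause fixes the polarity of every variable, an assignment satisfies $C_i$ if and only if it coincides with the polarity pattern of $C_i$; thus no two clauses are simultaneously satisfiable exactly when all $m$ patterns are distinct. A minimal sketch of this observation (the helper names are ours):

```python
import random

def random_clause_system(m, K=3, seed=1):
    """Sample m conjunctive clauses over T = K * ceil(log2 m) variables;
    pattern[j] is the polarity required of z_j (True for z_j, False for
    its negation)."""
    rng = random.Random(seed)
    T = K * max(1, (m - 1).bit_length())
    return [tuple(rng.random() < 0.5 for _ in range(T)) for _ in range(m)]

def max_simultaneously_satisfiable(patterns):
    """An assignment satisfies clause i iff it equals patterns[i], so the
    answer is the largest multiplicity of a repeated pattern."""
    return max(patterns.count(p) for p in patterns)
```

For instance, a family with a repeated pattern like `[(True, False), (False, True), (True, False)]` yields 2, while any family of pairwise-distinct patterns yields 1.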
Alice and Bob agree on such a set of clauses $C_1,\ldots,C_m$ over the set of Boolean variables $\{z_1,\ldots,z_T\}$ (which is hard-coded in the protocol). Suppose Alice has a bit-string of length $mn$, indexed as $[m] \times [n]$. For each $k \in [m]$ and $j \in [n]$: if $A_{k,j}=1$, she adds the literal $x_j$ to clause $D_k$, and if $A_{k,j}=0$, she adds the literal $\compl{x_j}$ to clause $D_k$. Finally, she concatenates $C_k$ to $D_k$. More formally, for each $k \in [m]$, \[ D_k = \left( \bigwedge_{j\in [n]: A_{k,j} = 1} x_j \right) \bigwedge \left( \bigwedge_{j \in [n]: A_{k,j} = 0} \compl{x_j} \right) \bigwedge \left(C_k \right). \]
Bob, with the index $(i_1,i_2) \in [m] \times [n]$, generates the clause $D_B = x_{i_2} \wedge C_{i_1}$. If $A_{i_1,i_2} = 1$, then both clauses $D_B$ and $D_{i_1}$ can be satisfied since the set of literals in $D_B$ is a subset of the set of literals in $D_{i_1}$. If $A_{i_1,i_2} = 0$, at most one of $D_B$ and $D_{i_1}$ can be satisfied since $x_{i_2} \in D_B$ and $\compl{x_{i_2}} \in D_{i_1}$. Furthermore, by Claim \ref{clm:clause-systems}, we cannot simultaneously satisfy $D_B$ and any other clause $D_{k}$ where $k \neq i_1$ (since $C_{i_1}$ is part of $D_B$ and $C_k$ is part of $D_k$). Therefore, if $\opt = 2$, then $A_{i_1,i_2} = 1$, and if $\opt = 1$, then $A_{i_1,i_2} = 0$. Thus, a streaming algorithm that distinguishes the two cases yields a protocol for the communication problem. The number of variables in our construction is $n + K \log m \leq 2 n$, if we assume $n \geq K \log m$. Reparameterizing $n \leftarrow n/2$ gives us the theorem below.
\begin{theorem} Suppose $n = \Omega(\log m)$. Any single-pass streaming algorithm that decides if $\opt = 1$ or $\opt = 2$ for $\textup{Max-AND-SAT}\xspace$ with probability 2/3 must use $\Omega(mn)$ space. \end{theorem}
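The reduction above can again be checked by brute force on a toy instance. For the pairwise-incompatible clauses $C_1,\ldots,C_m$ we substitute a deterministic stand-in, letting $C_k$ force the binary encoding of $k$; this has the same "at most one satisfiable" property as the randomized construction of Claim~\ref{clm:clause-systems}. All helper names are ours.

```python
from itertools import product

def max_and_sat_stream(A, i1, i2, m, n):
    """Conjunctive clauses as dicts mapping a variable to its forced value;
    variables are ('x', j) and ('z', j)."""
    T = max(1, (m - 1).bit_length())
    def C(k):  # C_k forces the binary encoding of k on the z-variables
        return {('z', j): bool((k >> j) & 1) for j in range(T)}
    stream = []
    for k in range(m):  # Alice's clauses D_1, ..., D_m
        D = {('x', j): bool(A[k][j]) for j in range(n)}
        D.update(C(k))
        stream.append(D)
    stream.append({('x', i2): True, **C(i1)})  # Bob's clause D_B
    return stream

def opt_and(stream):
    """Max number of conjunctions satisfied by one assignment (brute force)."""
    variables = sorted({v for clause in stream for v in clause})
    best = 0
    for bits in product((False, True), repeat=len(variables)):
        value = dict(zip(variables, bits))
        best = max(best, sum(all(value[v] == b for v, b in clause.items())
                             for clause in stream))
    return best
```

With $m = n = 2$ and $A = [[1,0],[0,1]]$ (0-indexed), the query $(i_1, i_2) = (0, 0)$, where $A_{i_1,i_2} = 1$, gives $\opt = 2$, while $(0, 1)$, where $A_{i_1,i_2} = 0$, gives $\opt = 1$.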
\paragraph{Lower bound for \textup{Min-SAT}\xspace.} For \textup{Min-SAT}\xspace, it is easy to show that deciding if $\opt > 0$ requires $\Omega(n)$ space. The algorithm in the proof of Theorem \ref{thm:min-sat2} shows that this lower bound is tight.
\begin{theorem}\label{thm:lb-minsat} Any single-pass streaming algorithm that decides if $\opt >0$ for $\textup{Min-SAT}\xspace$ with probability 2/3 must use $\Omega(n)$ space. \end{theorem} \begin{proof} Consider an $\textup{Index}\xspace$ instance of length $n$. If $A_j=1$, Alice adds the literal $x_j$ to the clause $C_1$; otherwise, she adds the literal $\compl{x_j}$ to $C_1$. Alice then puts $C_1$ in the stream. Bob, with the index $i$, puts the clause $C_2 = (x_i)$ in the stream. If $A_i = 0$, then $\compl{x_i} \in C_1$ and ${x_i} \in C_2$, so every assignment satisfies at least one of the two clauses and therefore $\opt >0$. Otherwise, it is easy to see that there is an assignment that satisfies neither $C_1$ nor $C_2$. \end{proof}
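The two-clause construction is easy to verify by brute force (a sketch with our own conventions: a literal is a pair of a variable index and a polarity):

```python
from itertools import product

def min_sat_value(A, i):
    """Minimum number of satisfied clauses over all assignments, for the
    stream consisting of Alice's clause C_1 and Bob's clause C_2 = (x_i)."""
    n = len(A)
    C1 = [(j, bool(A[j])) for j in range(n)]  # x_j if A_j = 1, else its negation
    C2 = [(i, True)]
    return min(sum(any(bits[j] == pol for j, pol in clause)
                   for clause in (C1, C2))
               for bits in product((False, True), repeat=n))
```

For $A = (1, 0, 1)$: index $0$ (where $A_0 = 1$) gives value $0$, while index $1$ (where $A_1 = 0$) gives value $1 > 0$, matching the proof.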
\appendix
\section{An $F_0$-sketch based approach for \textup{Min-SAT}\xspace}\label{app:F0-minsat}
We show that using an $F_0$ sketch, we can obtain a $(1+\varepsilon)$-approximation in $\tilde{O}(n^2/\varepsilon^2)$ space. Given a vector $x\in {\mathbb R}^n$, $F_0(x)$ is defined as the number of entries of $x$ that are nonzero. For a subset $S\subseteq \{1, \ldots, n\}$, let $x_S\in \{0,1\}^n$ be the characteristic vector of $S$ (i.e., $(x_S)_i=1$ iff $i\in S$). Note that $F_0(x_{S_1} + x_{S_2}+\ldots )$ is exactly the coverage $|S_1\cup S_2\cup \ldots |$. We will use the following result for estimating $F_0$.
\begin{theorem}[$F_0$ Sketch, \cite{BJKST02}]\label{thm:F0-approximation} Given a set $S\subseteq [n]$, there exists an $\tilde{O}(\varepsilon^{-2}\log \delta^{-1})$-space algorithm that constructs a data structure $\mathcal{M}(S)$ (called an \emph{$F_0$ sketch} of $S$). The sketch has the property that the number of distinct elements in a collection of sets $S_1, S_2, \ldots, S_t$ can be approximated up to a $1 + \varepsilon$ factor with probability at least $1-\delta$, given the collection of $F_0$ sketches $\mathcal{M}(S_1), \mathcal{M}(S_2), \ldots, \mathcal{M}(S_t)$. \end{theorem}
The above immediately gives us a $(1+\varepsilon)$-approximation in $\tilde{O}(n^2/\varepsilon^2)$ space. Each literal $\ell$ corresponds to the set $S_\ell$ of clauses that contain it, i.e., $S_\ell = \{ C_j : \ell \in C_j \}$. Hence, the goal is to find a combination of $\ell_1, \ell_2, \ldots, \ell_n$ where $\ell_i \in \{ x_i, \compl{x_i} \}$ such that the coverage $|S_{\ell_1} \cup \ldots \cup S_{\ell_n}|$ is minimized. We can construct $\mathcal{M}(S_{x_i})$ and $\mathcal{M}(S_{\compl{x_i}})$ for each $i=1,2,\ldots,n$ with failure probability $\delta = 1/(n 2^n)$. Then, we return the smallest estimated coverage among the $2^n$ such combinations based on these sketches. This is a $(1+\varepsilon)$-approximation w.h.p.\ since for all $\ell_1, \ell_2, \ldots, \ell_n$ where $\ell_i \in \{ x_i, \compl{x_i} \}$, we have an estimate $ (1\pm \varepsilon)| S_{\ell_1} \cup \ldots \cup S_{\ell_n}| $ with probability $1-1/n$ by a simple union bound.
This approach's drawback is its exponential post-processing time. To see why this is hard to extend to other offline algorithms, let us review the algorithm by Kohli et al. \cite{KKM94} that yields a 2-approximation in expectation. The algorithm goes through the variables $x_1,x_2,\ldots,x_n$ one by one. At step $i$, it processes the variable $x_i$. Let $a_i$ and $b_i$ be the number of newly satisfied clauses if we assign $x_i$ to $\true$ and $\false$ respectively. We randomly set $x_i \leftarrow \true$ with probability $\frac{b_i}{a_i+b_i}$ and set $x_i \leftarrow \false$ with probability $\frac{a_i}{a_i + b_i}$. The algorithm updates the set of satisfied clauses and moves on to the next variable. A simple induction shows that this is a 2-approximation in expectation. Note that we can run this algorithm $O(\varepsilon^{-1}\log n)$ times and return the best solution to obtain a $(2+\varepsilon)$-approximation w.h.p.
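A sketch of this offline algorithm follows (our own encoding: a clause is a set of nonzero integers, with $+j$ for $x_j$ and $-j$ for $\compl{x_j}$):

```python
import random

def kohli_min_sat(clauses, n, rng=None):
    """One pass of the randomized 2-approximation (in expectation) of Kohli
    et al. for Min-SAT; variables are processed in order x_1, ..., x_n."""
    rng = rng or random.Random(0)
    alive = [set(c) for c in clauses]  # clauses not yet satisfied
    assignment = {}
    for i in range(1, n + 1):
        a = sum(1 for c in alive if +i in c)  # newly satisfied by x_i = True
        b = sum(1 for c in alive if -i in c)  # newly satisfied by x_i = False
        val = True if a + b == 0 else rng.random() < b / (a + b)
        assignment[i] = val
        lit = +i if val else -i
        alive = [c for c in alive if lit not in c]
    return assignment

def num_satisfied(clauses, assignment):
    return sum(1 for c in clauses
               if any((l > 0) == assignment[abs(l)] for l in c))
```

Repeating the pass $O(\varepsilon^{-1}\log n)$ times and keeping the best assignment boosts the guarantee to $2+\varepsilon$ w.h.p., as noted above.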
Unfortunately, $F_0$ sketches do not support set subtraction (i.e., if we have the sketches $\mathcal{M}(A)$ and $\mathcal{M}(B)$, we are not guaranteed to get a $1\pm \varepsilon$ multiplicative approximation of $|A \setminus B|$), and thus it is unclear how to compute or estimate $a_i$ and $b_i$ at each step $i$.
Better approximations to $\textup{Min-$k$-SAT}\xspace$, for small values of $k$, have also been developed \cite{AvidorZ02,BertsimasTV99,ArifBGK20} using linear and semidefinite programming. It is not clear how to combine $F_0$ sketches with these approaches either. We now show how to sidestep the need for $F_0$ sketches entirely and run any offline algorithm of our choice.
\section{Omitted Proofs}
\begin{proof}[Proof of Lemma \ref{lem:whp}] Let $Z$ be the number of clauses satisfied by the algorithm. First, note that since $m/4 \leq \alpha \opt $ (since $\alpha \geq 1/2$), we have $\alpha \opt/(m-\alpha \opt) \geq 1/3$. We have $\expec{m-Z} \leq m-\alpha \opt$. Appealing to Markov's inequality, \begin{align*} \prob{Z \leq (1-\varepsilon) \alpha \opt} & = \prob{m-Z \geq m- (1-\varepsilon) \alpha \opt} \\ & \leq \frac{m-\alpha \opt}{m- (1-\varepsilon) \alpha \opt} \\ & = \frac{1}{1 +\varepsilon \alpha \frac{\opt}{(m-\alpha \opt)}} \leq \frac{1}{1+\varepsilon/3}. \end{align*} Hence, we can repeat the algorithm $O(\log_{1+\varepsilon/3} m) = O(1/\varepsilon \cdot \log m)$ times and choose the best solution to obtain an $(\alpha-\varepsilon)$-approximation with probability at least $1-1/\poly(m)$. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:post-processing-2}] In each trial $t=1,2,\ldots,Q$, each literal is set to $\true$ with probability at least $1/4$. This is because $1/4 \leq 1/4+y_i^\star/2 \leq 3/4$. Appealing to Lemma \ref{lem:large-clauses} with $\gamma = 1/4$, all assignments $A_t$ satisfy all the large clauses in $L$ with probability at least $1-1/\poly(m)$. The rest of the argument is then similar to that of Lemma \ref{lem:post-processing-1}. \end{proof}
\begin{proof}[Proof of Lemma \ref{lem:sampling-minsat}] Suppose that an assignment $A$ satisfies $m_A$ clauses. Let the number of sampled clauses that $A$ satisfies be $m_A'$. For convenience, let $y = m_A$ and $y' = m_A'$. We observe that $\expec{y'} =p y$. Note that $y \geq \opt \geq z/2$. Suppose the assignment $A$ satisfies clauses $C_{\sigma_1},\ldots,C_{\sigma_y}$. We define the indicator variable $X_i = [\text{$C_{\sigma_i}$ is sampled}]$ and so $y' = \sum_{i=1}^y X_i$. Appealing to Chernoff bound, \begin{align*}
\prob{|y' - p y| \geq \varepsilon p y} & \leq 2 \exp{-\frac{\varepsilon^2}{3} py} = 2 \exp{-\frac{K n y}{3 z}} \leq \exp{-100n}. \end{align*} Taking a union bound over $2^n$ distinct assignments, we have $\prob{m_A' = (1 \pm \varepsilon)p m_A } \geq 1- \exp{-50n}$ for all assignments $A$ simultaneously. We immediately have that $\opt' \leq (1+\varepsilon)p \opt = O(n/\varepsilon^2)$.
Suppose an assignment $\tilde{A}$ is an $\alpha$ approximation to \textup{Min-SAT}\xspace on the sampled clauses. Let $A^\star$ be an optimal assignment on the original input clauses. From the above, with probability at least $1-e^{-50n}$, we have $(1-\varepsilon)pm_{\tilde{A}} \leq m_{\tilde{A}}' \leq \alpha m_{A^\star}' \leq \alpha \opt p(1+\varepsilon) $. Hence, $m_{\tilde{A}} \leq (1+3\varepsilon) \alpha \opt$. Reparameterizing $\varepsilon \leftarrow \varepsilon/3$ completes the proof. \end{proof}
\end{document} |
\begin{document}
\title{Fourier restriction to smooth enough curves}
\author{Michael Jesurum}
\address{Department of Mathematics, University of Wisconsin, Madison, WI 53706}
\email{[email protected]}
\begin{abstract}
We prove Fourier restriction estimates to arbitrary compact \(C^{N}\) curves
for any \(N > d\) in the (sharp) Drury range, using a power of the
affine arclength measure as a mitigating factor. In particular, we make no
nondegeneracy assumption on the curve.
\end{abstract}
\maketitle
\section{Introduction} \label{introduction section}
The boundedness of restriction operators \(\mathcal{R} \colon L^{p}(\mathbb{R}^{d}) \rightarrow L^{q}(\gamma; \mathrm{d} \sigma)\) associated with curves \(\gamma \colon \mathbb{R} \rightarrow \mathbb{R}^{d}\) has been studied for decades. The first natural choice of measure \(\mathrm{d} \sigma\) is the Euclidean arclength measure. In this case the maximal range of \(p\) and \(q\) for which a restriction operator exists depends on the order of vanishing of the torsion. \begin{definition}
For a compact interval \(I \subset \mathbb{R}\) and a curve
\(\gamma \in C^{d}(I; \mathbb{R}^{d})\), define the \textbf{torsion}
\(\tau(t) = |\det[\gamma'(t), \dots, \gamma^{(d)}(t)]|\). Furthermore, for
\(\epsilon \geq 0\) define the weight
\begin{equation} \label{affine arclength weight}
w_{\epsilon}(t) = \tau(t)^{\frac{2}{d(d + 1)} + \epsilon}.
\end{equation} \end{definition} In particular, \(w_{0}(t)\mathrm{d}t\) is the well-studied affine arclength measure, which is also interesting due to the affine invariance of the problem. See~\cite{Guggenheimer} for background on affine geometry. Using the affine arclength measure, Drury~\cite{Drury} established the optimal range of \(p\) and \(q\) in the least-degenerate case: when \(\gamma\) is the moment curve \(\gamma(t) = (t, t^{2}, \dots, t^{d})\). For various classes of curves, many authors (see below for the history) have shown that the affine arclength measure compensates for vanishing of the torsion, so restriction operators exist for nearly the same range of \(p\) and \(q\) as the moment curve. Several authors have also used the overdamped affine arclength measure, \(\epsilon > 0\) in~\eqref{affine arclength weight}, to attain the exact range of \(p\) and \(q\) for the moment curve. The case of a general curve \(\gamma \in C^{\infty}(I)\) has long been expected to behave similarly. Building on techniques in~\cite{ChenFanWang, DendrinosWright, Drury, Stovall}, our main result establishes the boundedness of restriction operators for arbitrary compact curves that are smooth enough. In the theorem, \(C^{N} \coloneqq C^{\lfloor N\rfloor, N-\lfloor N\rfloor}\). \begin{theorem} \label{restriction theorem}
Let \(d \geq 2\), \(I \subset \mathbb{R}\) be a compact interval,
\(N \in \mathbb{R}\) such that \(N > d\), and
\(\gamma \in C^{N}(I; \mathbb{R}^{d})\). For \(\epsilon \geq 0\) let
\(w_{\epsilon}\) be the weight defined in~\eqref{affine arclength weight}.
Let
\begin{equation} \label{restriction theorem p and q}
1 \leq p < \frac{d^{2} + d + 2}{d^{2} + d}
\quad \text{and} \quad
\begin{cases}
1
\leq q
\leq \frac{2}{d^{2}+d}p'
\quad & \text{if } \epsilon > \sum_{j=1}^{d} \frac{1}{N-j},
\\
1
\leq q
< \frac{\frac{2}{d^{2}+d}+\epsilon}
{1 + \frac{d^{2}+d}{2}\sum_{j=1}^{d} \frac{1}{N-j}}p'
\quad & \text{if }
0 \leq \epsilon \leq \sum_{j=1}^{d} \frac{1}{N-j}.
\end{cases}
\end{equation}
Then there is \(C = C(I, d, \gamma, N, p, q, \epsilon) > 0\) such
that for any \(f \in L^{p}(\mathbb{R}^{d})\),
\begin{equation*}
\bigg(\int_{I} |\hat{f}(\gamma(t))|^{q} w_{\epsilon}(t)
\mathrm{d} t\bigg)^{\frac{1}{q}}
\leq C \|f\|_{L^{p}(\mathbb{R}^{d})}.
\end{equation*} \end{theorem} \begin{remark}
When \(\gamma \in C^{\infty}(I)\), Theorem~\ref{restriction theorem}
gives a restriction bound for \(q\) on the scaling line
\(q = \frac{2}{d^{2}+d}p'\) in the overdamped case with any \(\epsilon > 0\)
and for all \(q < \frac{2}{d^{2}+d}p'\) with the affine arclength measure. \end{remark} \begin{remark}
In the case \(\epsilon \leq \sum_{j = 1}^{d} \frac{1}{N - j}\), the range of
\(q\) in~\eqref{restriction theorem p and q} is empty whenever
\begin{equation*}
\frac{\frac{2}{d^{2} + d} + \epsilon}
{1 + \frac{d^{2} + d}{2}\sum_{j = 1}^{d} \frac{1}{N - j}}p'
\leq 1.
\end{equation*}
Thus, to obtain restriction estimates for all \(p\) in the range
\(1 \leq p < \frac{d^{2} + d + 2}{d^{2} + d}\), we need
\begin{equation} \label{N epsilon interplay for Drury range}
\sum_{j = 1}^{d} \frac{1}{N - j}
\leq \frac{4}{(d^{2} + d)^{2}} + \frac{\epsilon(d^{2} + d + 2)}{d^{2} + d}.
\end{equation}
If \(N\) is much larger than \(d\),
then~\eqref{N epsilon interplay for Drury range} is true even in the undamped
case \(\epsilon = 0\). When \(N\) is closer to \(d\), there is always some
\(0 < \epsilon_{0} < \sum_{j = 1}^{d} \frac{1}{N - j}\) such
that~\eqref{N epsilon interplay for Drury range} holds for all
\(\epsilon \geq \epsilon_{0}\). See Figure~\ref{fig: range of p and q} below,
which uses the extension operator instead of the restriction operator for
visual clarity. \end{remark} \begin{figure}\label{fig: range of p and q}
\end{figure} In the case of the moment curve, where the torsion is constant, the offspring curve method is available to prove optimal restriction estimates. That method includes an analysis of the function \begin{equation} \label{sum of gammas}
\Phi_{\gamma}(t_{1}, \dots, t_{d}) = \gamma(t_{1})+\dots+\gamma(t_{d}). \end{equation} When \(\gamma\) is the moment curve, the Jacobian \(J_{\Phi_{\gamma}}\) is (up to a multiplicative constant) equal to the Vandermonde determinant \begin{equation} \label{vandermonde}
v(t_{1}, \dots, t_{d}) = \prod_{1 \leq i < j \leq d} (t_{j}-t_{i}). \end{equation} To prove Theorem~\ref{restriction theorem}, we apply the Drury method to intervals on which the torsion of \(\gamma\) is comparable to some dyadic value. We also need to ensure that \(\Phi_{\gamma}\) is well-behaved on each interval. The main difficulty is controlling the number of intervals we study. This is the content of the following theorem: \begin{theorem} \label{geometric theorem}
Let \(I \subset \mathbb{R}\) be a compact interval. If
\(N \in \mathbb{R}\) with \(N > d\) and \(\gamma \in C^{N}(I; \mathbb{R}^{d})\),
then there is a family of intervals \(I_{k, l}\) such that
\begin{equation} \label{interval decomposition}
I
= \{t \in I : \tau(t) = 0\}
\cup \bigcup_{k \geq C_{\gamma}} \bigcup_{l = 1}^{N_{k}} I_{k, l},
\end{equation}
\begin{equation*}
2^{-k-2} \leq \tau(t) \leq 2^{-k+1} \text{ for } t \in I_{k, l},
\end{equation*}
\begin{equation} \label{injective}
(t_{1}, \dots, t_{d}) \mapsto \Phi_{\gamma}(t_{1}, \dots, t_{d})
\text{ is 1-to-1 for } t_{1} < \dots < t_{d} \in I_{k, l},
\end{equation}
\begin{equation} \label{geometric inequality}
|J_{\Phi_{\gamma}}(t_{1}, \dots, t_{d})|
\geq C_{d}2^{-k}|v(t_{1}, \dots, t_{d})|
\text{ for } (t_{1}, \dots, t_{d}) \in (I_{k, l})^{d},
\end{equation}
\begin{equation*}
N_{k} \leq C_{d, N, \gamma}2^{k\sum_{j=1}^{d}\frac{1}{N-j}}, \text{ and}
\end{equation*}
\begin{equation*}
\sum_{l=1}^{N_{k}}|I_{k, l}|
\leq C_{d, N, \gamma}(k+C_{d, \gamma})^{d}|I|.
\end{equation*} \end{theorem} \subsection*{History}
In~\cite{Stein1}, Stein traces the roots of Fourier restriction theory
to observations about the continuity of Fourier transforms of radial
functions. Laurent Schwartz and many others independently observed that if
\(1 \leq p < \frac{2d}{d+1}\) and \(f \in L^{p}(\mathbb{R}^{d})\) is a radial
function, then \(\hat{f}(r)\) is continuous for all \(0 < r < \infty\).
Thus, \(\hat{f}\) can be thought of as a function on \(S^{d-1}\), even
though that set has measure 0. Stein wondered if one could give a similar
statement about Fourier restrictions of nonradial functions. He successfully
proved such a result in 1967, but did not publish because it was unclear
what purpose such a lemma would have.
In 1970, Fefferman~\cite{Fefferman}, in collaboration with Stein, improved
on Stein's lemma in 2 dimensions and showed that the \(n\)-dimensional
lemma could be used to make progress on the multiplier problem for the ball.
Interest in the restriction problem picked up, and by 1974 the case of
curves in two dimensions had largely been solved by
Zygmund~\cite{Zygmund}, H{\"ormander}~\cite{Hormander}, and
Sj{\"o}lin~\cite{Sjolin}. While Zygmund and H{\"ormander} dealt with
nondegenerate curves (\(\tau \neq 0\)), Sj{\"o}lin brought in the measures
\(w_{\epsilon}(t)\mathrm{d}t\) to understand curves with vanishing curvature.
The case of \(d = 2\) and \(\gamma \in C^{\infty}\) in
Theorem~\ref{restriction theorem} is due to Sj{\"o}lin~\cite{Sjolin}.
There have been several more
papers~\cite{Barcelo2, Barcelo1, Fraccaroli, Oberlin2, Oberlin3, Ruiz, Sogge}
that have answered some remaining questions in 2 dimensions. In the most
recent of these (in 2021), Fraccaroli~\cite{Fraccaroli} proved a restriction
theorem in the optimal range for all continuous convex curves, which is
the first result that did not require \(C^{2}\).
Unfortunately, the techniques that work well in two dimensions do not
carry over to higher dimensions. Thus, it was a few more years before any
results were known. Prestini broke into the high-dimensional setting by
proving a restriction theorem for curves with nonvanishing torsion in
1978~\cite{Prestini1} for \(d = 3\) and 1979~\cite{Prestini2} for
\(d \geq 4\). However, her range of \(p\) was not sharp: it was
\(1 \leq p < \frac{d^{2}+2d}{d^{2}+2d-2}\) (compare
with~\eqref{restriction theorem p and q}). She also did not attain bounds
on the scaling line \(q = \frac{2}{d^{2}+d}p'\), which is seen to be the largest
possible value of \(q\) by inspecting Knapp examples. In 1982, Christ~\cite{Christ2}
extended Prestini's theorem to include the scaling line, and then in
1985~\cite{Christ} he provided restriction estimates for curves of finite type
with the same range of \(p\) and for \(q\) up to the scaling line. Furthermore,
those bounds included \(q\) on the scaling line in a restricted range of \(p\).
The range \(1 \leq p < \frac{d^{2} + 2d}{d^{2} + 2d - 2}\) is called the
Christ-Prestini range. See also~\cite{Ruiz} for the first result for a
curve with vanishing torsion.
At a similar time that Christ was beginning the study of curves with finite
type, Drury~\cite{Drury} was concluding the study of curves with
nonvanishing torsion. He proved a restriction theorem for nondegenerate
\(C^{d}\) curves in the optimal range
\(1 \leq p < \frac{d^{2} + d + 2}{d^{2} + d}\) and
\(1 \leq q \leq \frac{2}{d^{2} + d}p'\). Optimality of the range of \(p\) is due to
Arkhipov, Karacuba, and {\v C}ubarikov~\cite{ArkhipovKaratsubaChubarikov} (see
also~\cite{ArkhipovKaratsubaChubarikov2}). Further results for nondegenerate
curves appear in~\cite{BakLee, BakOberlin}.
Shortly thereafter, Drury and Marshall~\cite{DruryMarshall1, DruryMarshall2}
improved the known estimates for curves of finite type, and then in 1990
Drury~\cite{Drury2} further improved the results for curves
\(\gamma(t) = (t, t^{2}, t^{k})\), \(k \geq 4\).
Little further progress was made for curves with vanishing torsion in higher
dimensions until 2008, when Bak, Oberlin, and
Seeger~\cite{BakOberlinSeeger1} solved the monomial curve case. In that and
a subsequent paper~\cite{BakOberlinSeeger2}, they also obtained endpoint
results. Shortly thereafter, Dendrinos and M{\"u}ller~\cite{DendrinosMuller}
obtained results for perturbed monomial curves. General polynomial curves
were covered for a restricted range of \(p\) by Dendrinos and
Wright~\cite{DendrinosWright}, and then for the full range of \(p\) by
Stovall~\cite{Stovall}. See also~\cite{HamLee} for results with general measures.
In addition to monomial curves, Bak, Oberlin, and
Seeger~\cite{BakOberlinSeeger3} also proved restriction
theorems for simple curves
\begin{equation*}
\gamma(t) = (t, t^{2}, \dots, t^{d-1}, \phi(t))
\end{equation*}
such that \(\phi \in C^{d}\) with \(\phi^{(d)}\) satisfying a
certain inequality. Chen, Fan, and Wang~\cite{ChenFanWang} were able to
dispense with the inequality, but at the cost of enforcing
\(\phi \in C^{N}\) for some \(N > d\). By a change of variables, the
case of \(d = 2\) and \(\gamma \in C^{N}\) in
Theorem~\ref{restriction theorem} is due to Chen, Fan, and
Wang~\cite{ChenFanWang}. Another result
of this nature appears in~\cite{Wan}.
With the polynomial case solved, it is likely that an
argument based on \(\epsilon\)-removal and polynomial approximation could be
used to solve the general \(C^{\infty}\) case off the scaling line. Thus, the
most interesting new consequences of Theorem~\ref{restriction theorem} are
(in dimension \(d \geq 3\)) the scaling line estimates for
\(C^{\infty}(\mathbb{R}^{d})\) curves with \(\epsilon > 0\) and the nontrivial
range of \(p\) and \(q\) for \(C^{N}(\mathbb{R}^{d})\) curves with
\(\epsilon = 0\). \subsection*{Outline of proof}
Section~\ref{drury section} uses Theorem~\ref{strong geometric theorem}, which is
a stronger version of Theorem~\ref{geometric theorem}, as a black box to prove
Theorem~\ref{restriction theorem}. It begins with a restriction
result on each interval in the decomposition given by
Theorem~\ref{strong geometric theorem}, and then combines these estimates into a
restriction inequality on the whole interval.
Sections~\ref{decomposition section} and~\ref{geometric inequality section}
are devoted to a proof of Theorem~\ref{strong geometric theorem}.
Section~\ref{decomposition section} constructs a decomposition
for Theorem~\ref{strong geometric theorem} in two steps. The first
step decomposes \(I\) into intervals on which \(\gamma\) is well-behaved, and the secondary decomposition
creates intervals where certain auxiliary curves are similarly well-behaved.
Finally, Section~\ref{geometric inequality section} finishes the proof
of the geometric inequality~\eqref{geometric inequality} and the
condition~\eqref{injective} on each interval in the decomposition. \subsection*{Notation}
Let \(\tilde{I}\) be a compact interval, \(d \in \mathbb{N}\),
\(N \in \mathbb{R}\) with \(N > d\), and
\(\gamma \in C^{d}(\tilde{I}; \mathbb{R}^{d})\). These will remain fixed
throughout this paper, and we will prove Theorem~\ref{restriction theorem}
and Theorem~\ref{geometric theorem} with these fixed values. \(C\) denotes an
arbitrary constant that may change line by line and is always allowed to depend
on the dimension \(d\) and the interval \(\tilde{I}\). Any subscripts indicate
additional dependence: for instance, \(C_{\gamma}\) is a constant that depends
only on \(\gamma\), the dimension, and the original interval. For two numbers
\(A\) and \(B\), write \(A \approx B\) if there exist constants \(C\) and \(C'\)
such that
\begin{equation*}
CB \leq A \leq C'B.
\end{equation*}
Once again, subscripts indicate additional dependence. Logarithms are taken in base
2 purely for the convenience of calculations in Section~\ref{decomposition section}.
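As a quick sanity check on the moment-curve computation recalled in Section~\ref{introduction section}: for \(\gamma(t) = (t, t^{2}, \dots, t^{d})\) the Jacobian of~\eqref{sum of gammas} is \(J_{\Phi_{\gamma}}(t_{1}, \dots, t_{d}) = d!\,\prod_{1 \leq i < j \leq d}(t_{j} - t_{i})\), since \(\gamma'(t) = (1, 2t, \dots, dt^{d-1})\) and the factor \(i\) can be pulled out of each row of the Vandermonde matrix. The identity can be verified exactly at integer points for small \(d\) (the script and its helper names are ours):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by Leibniz expansion (fine for small matrices)."""
    d = len(M)
    total = Fraction(0)
    for perm in permutations(range(d)):
        sign = 1
        for a in range(d):
            for b in range(a + 1, d):
                if perm[a] > perm[b]:
                    sign = -sign
        term = Fraction(1)
        for row in range(d):
            term *= M[row][perm[row]]
        total += sign * term
    return total

def moment_curve_jacobian(ts):
    """det[gamma'(t_1), ..., gamma'(t_d)] for gamma(t) = (t, t^2, ..., t^d)."""
    d = len(ts)
    return det([[(i + 1) * ts[j] ** i for j in range(d)] for i in range(d)])

def vandermonde(ts):
    d = len(ts)
    v = Fraction(1)
    for i in range(d):
        for j in range(i + 1, d):
            v *= ts[j] - ts[i]
    return v
```

For example, at \((t_{1}, t_{2}, t_{3}) = (0, 1, 3)\) both sides equal \(6 \cdot 1 \cdot 3 \cdot 2 = 36\).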
\section{Proof of Theorem~\ref{restriction theorem}} \label{drury section}
In this section, we use a strengthening of Theorem~\ref{geometric theorem} to prove Theorem~\ref{restriction theorem}. The first step is to prove a restriction estimate on each interval of the decomposition~\eqref{interval decomposition}. Define the family of offspring curves \begin{equation*}
\Upsilon
= \bigg\{\gamma_{h}(t) = \frac{1}{m} \sum_{j=1}^{m} \gamma(t+h_{j})
: m \in \mathbb{N}, \ h \in \mathbb{R}^{m},
\ 0 \leq h_{1} \leq \dots \leq h_{m}\bigg\}. \end{equation*} For an interval \(I = [a, b]\), set \(I_{h} = [a-h_{1}, b-h_{m}]\). We will use induction to show that a restriction bound holds uniformly for \(\gamma_{h} \in \Upsilon\). \begin{proposition} \label{Drury argument}
Let \(I \subseteq \tilde{I}\) be a compact interval and \(k \in \mathbb{Z}\).
Suppose that for every \(\gamma_{h} \in \Upsilon\),
\begin{equation} \label{injective Drury}
(t_{1}, \dots, t_{d}) \mapsto \Phi_{\gamma_{h}}(t_{1}, \dots, t_{d})
\text{ is 1-to-1 for } t_{1} < \dots < t_{d} \in I_{h} \text{ and}
\end{equation}
\begin{equation} \label{geometric inequality Drury}
|J_{\Phi_{\gamma_{h}}}(t_{1}, \dots, t_{d})|
\geq C2^{-k}|v(t_{1}, \dots, t_{d})|
\text{ for } (t_{1}, \dots, t_{d}) \in I_{h}^{d}.
\end{equation}
Then for
\begin{equation*}
1 \leq p < \frac{d^{2}+d+2}{d^{2}+d}
\quad \text{and}
\quad q = \frac{2p'}{d^{2}+d},
\end{equation*}
we have the restriction inequality
\begin{equation} \label{restriction bound Drury}
\bigg(\int_{I_{h}} |\hat{f}(\gamma_{h}(t))|^{q}
\mathrm{d} t\bigg)^{\frac{1}{q}}
\leq 2^{\frac{k}{p'}}C_{p}\|f\|_{L^{p}(\mathbb{R}^{d})}
\end{equation}
for all \(f \in L^{p}(\mathbb{R}^{d})\) and all \(\gamma_{h} \in \Upsilon\). \end{proposition} \begin{proof}
We adapt Drury's argument from~\cite{Drury}. By duality, it suffices to study
the extension operator
\begin{equation*}
\mathcal{E}_{h}g(x) = \int_{I_{h}} e^{i\gamma_{h}(t) \cdot x}g(t) \mathrm{d}t.
\end{equation*}
We will show that
\begin{equation*}
\|\mathcal{E}_{h}g\|_{L^{p'}(\mathbb{R}^{d})}
\leq 2^{\frac{k}{p'}}C_{p}\|g\|_{L^{q'}(I)},
\end{equation*}
for
\begin{equation*}
1 \leq q' < \frac{d^{2} + d + 2}{2},
\quad \frac{d^{2} + d}{2p'} + \frac{1}{q'} = 1.
\end{equation*}
The proof is by induction on \(q'\). Hausdorff-Young shows that the base case
\(q' = 1\) and \(p' = \infty\) is true. The induction hypothesis is that for
some \(1 \leq q_{0}' < \frac{d^{2}+d+2}{2}\) and \(p_{0}'\) defined by
\begin{equation*}
\frac{d^{2}+d}{2p_{0}'}+\frac{1}{q_{0}'} = 1,
\end{equation*}
the following inequality holds uniformly for \(\gamma_{h} \in \Upsilon\):
\begin{equation} \label{induction hypothesis}
\|\mathcal{E}_{h}g\|_{L^{p_{0}'}(\mathbb{R}^{d})}
\leq 2^{\frac{k}{p_{0}'}}C_{p_{0}}\|g\|_{L^{q_{0}'}(I)}.
\end{equation}
Fix \(\gamma_{\tilde{h}} \in \Upsilon\). For ease of notation, set
\(\zeta = \gamma_{\tilde{h}}\), \(I = I_{\tilde{h}}\), and
\(\mathcal{E} = \mathcal{E}_{\tilde{h}}\). To
improve the bound~\eqref{induction hypothesis}, we first write
\begin{equation*}
\Big(\mathcal{E}g\Big(\frac{x}{d}\Big)\Big)^{d}
= \bigg(\int_{I} e^{i\zeta(t) \cdot \frac{x}{d}}g(t) \mathrm{d}t\bigg)^{d}
= \int_{I^{d}} e^{ix\cdot\frac{1}{d}\sum_{j=1}^{d} \zeta(t_{j})}
\prod_{j=1}^{d} g(t_{j}) \mathrm{d}t_{1} \dots \mathrm{d}t_{d}.
\end{equation*}
Set
\begin{equation*}
A = \{(t_{1}, \dots, t_{d}) \in I^{d} : t_{1} < \dots < t_{d}\}.
\end{equation*}
By symmetry in \(t_{1}, \dots, t_{d}\),
\begin{equation*}
\Big(\mathcal{E}g\Big(\frac{x}{d}\Big)\Big)^{d}
= d!\int_{A} e^{ix\cdot\frac{1}{d}\sum_{j=1}^{d} \zeta(t_{j})}
\prod_{j=1}^{d} g(t_{j}) \mathrm{d}t_{1} \dots \mathrm{d}t_{d}.
\end{equation*}
With the change of variables
\begin{equation*}
t = t_{1},
\quad h_{j} = t_{j} - t_{1} \text{ for } 1 \leq j \leq d,
\end{equation*}
and with \(B\) the image of \(A\) under this change of variables, we observe that
\begin{equation*}
\Big(\mathcal{E}g\Big(\frac{x}{d}\Big)\Big)^{d}
= d!\int_{B} e^{ix\cdot\frac{1}{d}\sum_{j=1}^{d} \zeta(t+h_{j})}
\prod_{j=1}^{d} g(t+h_{j})
\mathrm{d}t \mathrm{d}h_{2} \dots \mathrm{d}h_{d}.
\end{equation*}
For fixed \(h_{2}, \dots, h_{d}\), each curve
\begin{equation*}
t \mapsto \frac{1}{d}\sum_{j = 1}^{d} \zeta(t + h_{j})
\end{equation*}
is an offspring curve in the family \(\Upsilon\). Let \(v\) be the Vandermonde
determinant~\eqref{vandermonde} and define
\begin{equation*}
TG(x)
= \int_{B} e^{ix\cdot\frac{1}{d}\sum_{j=1}^{d} \zeta(t+h_{j})}
G(t, h) v(h) \mathrm{d}t \mathrm{d}h.
\end{equation*}
\begin{lemma}
We have the bound
\begin{equation} \label{induction bound}
\|TG\|_{L^{p_{0}'}}
\leq 2^{\frac{k}{p_{0}'}}C_{p_{0}}
\|G\|_{L^{1}_{h'}(L^{q_{0}'}_{t}; |v(h)|)}.
\end{equation}
\end{lemma}
\begin{proof}
An application of Minkowski's inequality for integrals shows that
\begin{equation*}
\|TG\|_{L^{p_{0}'}}
\leq \int \bigg\|\int
e^{ix\cdot\frac{1}{d}\sum_{j=1}^{d} \zeta(t+h_{j})}
G(t, h) \mathrm{d} t\bigg\|_{L^{p_{0}'}(\mathrm{d} x)} |v(h)|
\mathrm{d} h.
\end{equation*}
Employing the induction hypothesis~\eqref{induction hypothesis}, we obtain
\begin{equation*}
\|TG\|_{L^{p_{0}'}}
\leq 2^{\frac{k}{p_{0}'}}C_{p_{0}}
\int \|G(\cdot, h)\|_{L^{q_{0}'}(I_{h})} |v(h)| \mathrm{d} h.
\end{equation*}
The lemma now follows since
\begin{equation*}
\int \|G(\cdot, h)\|_{L^{q_{0}'}(I_{h})} |v(h)| \mathrm{d} h
\leq \|G\|_{L^{1}_{h'}(L^{q_{0}'}_{t}; |v(h)|)}.
\qedhere
\end{equation*}
\end{proof}
\begin{lemma}
We have the bound
\begin{equation} \label{L^2 bound}
\|TG\|_{L^{2}(\mathrm{d} x)}
\leq 2^{\frac{k}{2}}C\|G\|_{L^{2}_{h'}(L^{2}_{t}; |v(h)|)}.
\end{equation}
\end{lemma}
\begin{proof}
Set
\begin{equation*}
y = \frac{1}{d}\sum_{j=1}^{d} \zeta(t+h_{j}).
\end{equation*}
This change of variables is injective because of~\eqref{injective Drury}.
The Jacobian is
\begin{equation*}
J(t, h)
= \frac{1}{d^{d}} J_{\Phi_{\zeta}}(t, t+h_{2}, \dots, t+h_{d}).
\end{equation*}
The geometric inequality~\eqref{geometric inequality Drury} guarantees that
\begin{equation} \label{lower bound of jacobian}
J(t, h) \geq C2^{-k}v(h).
\end{equation}
With these variables, set
\begin{equation*}
F(y) = \mathbb{1}_{B}(t, h)G(t, h)\frac{v(h)}{J(t, h)}.
\end{equation*}
Applying the change of variables to \(T\), we see that
\begin{equation*}
TG(x)
= \int e^{iy \cdot x}F(y) \mathrm{d}y
= \check{F}(x).
\end{equation*}
Plancherel gives \(\|\check{F}\|_{2} = \|F\|_{2}\), so
\(\|TG\|_{L^{2}(\mathrm{d} x)} = \|F\|_{L^{2}(\mathrm{d}y)}.\)
Changing variables back and unwinding the definition of \(F\) yields
\begin{equation} \label{L^2 bound post change of variables}
\|TG\|_{L^{2}(\mathrm{d} x)}
= \bigg(\int_{B} |G(t, h)|^{2} \bigg[\frac{v(h)}{J(t, h)}\bigg] v(h)
\mathrm{d} t \mathrm{d} h\bigg)^{\frac{1}{2}}.
\end{equation}
Lines~\eqref{lower bound of jacobian}
and~\eqref{L^2 bound post change of variables} combine to demonstrate that
\begin{equation*}
\|TG\|_{L^{2}(\mathrm{d} x)}
\leq 2^{\frac{k}{2}}C\bigg(\int_{B} |G(t, h)|^{2}v(h)
\mathrm{d}t \mathrm{d}h\bigg)^{\frac{1}{2}}.
\end{equation*}
Since
\begin{equation*}
\bigg(\int_{B} |G(t, h)|^{2} v(h)
\mathrm{d} t \mathrm{d} h\bigg)^{\frac{1}{2}}
\leq \|G\|_{L^{2}_{h'}(L^{2}_{t}; |v(h)|)},
\end{equation*}
the inequality~\eqref{L^2 bound} is true.
\end{proof}
Interpolation of~\eqref{induction bound},~\eqref{L^2 bound}, and the
trivial \(L^{1}(L^{1}) \rightarrow L^{\infty}\) estimate establishes
\begin{equation} \label{interpolation bound}
\|TG\|_{L^{c}}
\leq 2^{\frac{k}{c'}}C_{a, b}\|G\|_{L^{a}_{h'}(L^{b}_{t}; |v(h)|)}
\end{equation}
for all \((a^{-1}, b^{-1})\) in the triangle with vertices \((1, 1)\),
\(\big(1, \frac{1}{q_{0}'}\big)\), and \(\big(\frac{1}{2}, \frac{1}{2}\big)\),
with \(c\) satisfying
\begin{equation*}
\frac{(d+2)(d-1)}{2}a^{-1}+b^{-1}+\frac{d^{2}+d}{2}c^{-1}
=\frac{d^{2}+d}{2}.
\end{equation*}
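As a consistency check, the vertices of the interpolation triangle recover the
two endpoint estimates: at \((a^{-1}, b^{-1}) = (1, 1)\) the relation gives
\(c^{-1} = 0\), i.e.\ the trivial \(L^{1}(L^{1}) \rightarrow L^{\infty}\)
bound, while at \(\big(\frac{1}{2}, \frac{1}{2}\big)\),
\begin{equation*}
\frac{(d+2)(d-1)}{4}+\frac{1}{2}+\frac{d^{2}+d}{2}c^{-1}
= \frac{d^{2}+d}{2}
\implies c^{-1} = \frac{1}{2},
\end{equation*}
matching the \(L^{2}\) bound~\eqref{L^2 bound}.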
In particular, the choice of
\begin{equation} \label{G definition}
G(t, h) = |v(h)|^{-1}\prod_{j=1}^{d} g(t+h_{j})
\end{equation}
has
\begin{equation*}
\|G\|_{L^{a}_{h'}(L^{b}_{t}; |v(h)|)}
= \bigg(\int_{\mathbb{R}^{d-1}} |v(h)|^{-(a - 1)} \bigg(\int_{\mathbb{R}}
|g(t+h_{1}) \cdots g(t+h_{d})|^{b} \mathrm{d}t\bigg)^{\frac{a}{b}}
\mathrm{d}h'\bigg)^{\frac{1}{a}}.
\end{equation*}
As noted in \cite{Drury}, \(v(0, h')^{-1} \in L^{\frac{d}{2}, \infty}_{h'}\),
so we can apply H\"older's inequality to obtain
\begin{equation} \label{G L^aL^b Holder}
\|G\|_{L^{a}_{h'}(L^{b}_{t}; |v(h)|)} \leq \|g\|_{L^{q', 1}_{t}}^{d},
\end{equation}
for
\begin{equation*}
\begin{cases}
1 < a < \frac{d+2}{2},
\\
a \leq b < \frac{2a}{d+2-da}, \text{ and}
\\
\frac{d}{q'}
= \frac{(d+2)(d-1)}{2}a^{-1}+b^{-1}-\frac{d(d-1)}{2}.
\end{cases}
\end{equation*}
On the other hand, by the definition of \(G\)~\eqref{G definition},
\begin{equation} \label{TG = Eg^d}
TG(x) = \frac{1}{d!}\Big(\mathcal{E}g\Big(\frac{x}{d}\Big)\Big)^{d}.
\end{equation}
Combining~\eqref{interpolation bound},~\eqref{G L^aL^b Holder}
and~\eqref{TG = Eg^d}, we see that
\begin{equation} \label{extension strong q' to p'}
\|\mathcal{E}g\|_{L^{p'}} \leq C_{p}2^{\frac{k}{p'}}\|g\|_{L^{q', 1}},
\end{equation}
for
\begin{equation} \label{p of a and b}
\frac{d}{q'}
= \frac{(d+2)(d-1)}{2}a^{-1}+b^{-1}-\frac{d(d-1)}{2},
\end{equation}
where \(p' = \frac{d^{2}+d}{2} q\), and \(a\) and \(b\)
satisfy the following conditions (see Figure~\ref{fig: interpolation picture}):
\begin{equation*}
\begin{cases}
\frac{d}{d+2} < a^{-1} < 1,
\\
b^{-1} \leq a^{-1},
\\
(d+ 2)a^{-1} - 2b^{-1} < d, \text{ and}
\\
(q_{0}'-2)a^{-1}+q_{0}'b^{-1} \geq q_{0}'-1.
\end{cases}
\end{equation*}
\begin{figure}
\centering
\caption{The region of admissible \((a^{-1}, b^{-1})\) in the interpolation
argument.}
\label{fig: interpolation picture}
\end{figure}
The point
\((a^{-1}, b^{-1})
= (\frac{d}{d + 2}, \frac{2}{d + 2} + \frac{d - 2}{(d + 2)q_{0}'})\)
lies on the boundary of this region and satisfies
\begin{equation*}
\frac{(d + 2) (d - 1)}{2}a^{-1} + b^{-1} - \frac{d(d - 1)}{2}
< \frac{d}{q_{0}'}.
\end{equation*}
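Indeed, substituting \(a^{-1} = \frac{d}{d+2}\) cancels the
\(\frac{d(d-1)}{2}\) terms, leaving
\begin{equation*}
\frac{(d+2)(d-1)}{2}\cdot\frac{d}{d+2}+b^{-1}-\frac{d(d-1)}{2}
= b^{-1}
= \frac{2}{d+2}+\frac{d-2}{(d+2)q_{0}'},
\end{equation*}
and this is strictly less than \(\frac{d}{q_{0}'}\) precisely when
\(q_{0}' < \frac{d^{2}+d+2}{2}\), which one may check holds at every stage of
the induction.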
Taking \((a^{-1}, b^{-1})\) slightly inside of the region and using real
interpolation, we obtain
\begin{equation*}
\|\mathcal{E}g\|_{L^{p'}}
\leq C_{p}2^{\frac{k}{p'}}\|g\|_{L^{q'}},
\quad p' = \frac{d^{2} + d}{2}q,
\quad \frac{d}{q'}
> \frac{2}{(d + 2)} + \frac{d - 2}{(d + 2)q_{0}'}.
\end{equation*}
This closes the induction and proves Proposition~\ref{Drury argument}. \end{proof} Now we turn to the deduction of Theorem~\ref{restriction theorem} from the following stronger version of Theorem~\ref{geometric theorem}. Recall that the symbol ``\(\approx\)'' depends only on the dimension \(d\) and the original interval \(\tilde{I}\), unless otherwise specified by subscripts. In particular, there is no dependence on \(m\) or \(h\) in what follows. \begin{theorem} \label{strong geometric theorem}
Let \(\tilde{I} \subset \mathbb{R}\) be a compact interval. If
\(N \in \mathbb{R}\) with \(N > d\) and
\(\gamma \in C^{N}(\tilde{I}; \mathbb{R}^{d})\), then there is a
family of intervals \(I_{k, l}\) such that for every \(m \in \mathbb{N}\)
and \(h \in \mathbb{R}^{m}\),
\begin{equation} \label{strong interval decomposition}
\tilde{I}
= \{t \in \tilde{I} : \tau(t) = 0\}
\cup \bigcup_{k \geq C_{\gamma}} \bigcup_{l = 1}^{N_{k}} I_{k, l},
\end{equation}
\begin{equation} \label{strong torsion size}
\tau_{h}(t) \approx 2^{-k} \text{ for } t \in (I_{k, l})_{h},
\end{equation}
\begin{equation*}
(t_{1}, \dots, t_{d}) \mapsto \Phi_{\gamma_{h}}(t_{1}, \dots, t_{d})
\text{ is 1-to-1 for } t_{1} < \dots < t_{d} \in (I_{k, l})_{h},
\end{equation*}
\begin{equation*}
|J_{\Phi_{\gamma_{h}}}(t_{1}, \dots, t_{d})|
\geq C2^{-k}|v(t_{1}, \dots, t_{d})|
\text{ for } (t_{1}, \dots, t_{d}) \in (I_{k, l})_{h}^{d},
\end{equation*}
\begin{equation} \label{strong number of intervals}
N_{k} \leq C_{N, \gamma}(k+C_{\gamma})^{d}2^{k\sum_{j=1}^{d}\frac{1}{N-j}},
\text{ and}
\end{equation}
\begin{equation} \label{strong total length of intervals}
\sum_{l=1}^{N_{k}}|I_{k, l}|
\leq C_{N, \gamma}(k+C_{\gamma})^{d}.
\end{equation} \end{theorem} \begin{proof} [Deduction of Theorem~\ref{restriction theorem}]
Let \(\{I_{k, l}\}\) be the intervals given in
Theorem~\ref{strong geometric theorem}. By~\eqref{strong interval decomposition},
\begin{equation*}
\int_{\tilde{I}} |\hat{f}(\gamma(t))|^{q} w_{\epsilon}(t) \mathrm{d} t
\leq \sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{q} w_{\epsilon}(t) \mathrm{d} t.
\end{equation*}
Utilizing the bound~\eqref{strong torsion size} on the size of the torsion on
each interval,
\begin{equation*}
\sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{q} w_{\epsilon}(t) \mathrm{d} t
\leq C\sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
2^{\frac{-2k}{d^{2} + d} - k\epsilon}
\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{q} \mathrm{d} t.
\end{equation*}
An application of H\"older's inequality shows that for any
\(q \leq \frac{2}{d^{2} + d}p'\),
\begin{align*}
&\sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
2^{\frac{-2k}{d^{2} + d} - k\epsilon}
\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{q} \mathrm{d} t
\\
&\leq \sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
2^{\frac{-2k}{d^{2} + d} - k\epsilon}
|I_{k, l}|^{1 - \frac{(d^{2} + d)q}{2p'}}
\bigg(\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{\frac{2p'}{d^{2} + d}}
\mathrm{d} t\bigg)^{\frac{(d^{2} + d)q}{2p'}},
\end{align*}
where \(|I_{k, l}|\) is the length of the interval \(I_{k, l}\). Each
interval \(I_{k, l}\) satisfies the hypotheses of
Proposition~\ref{Drury argument}, so
by~\eqref{restriction bound Drury},
\begin{align*}
&\sum_{k \geq C_{\gamma}}\sum_{l=1}^{N_{k}}
2^{\frac{-2k}{d^{2}+d}-k\epsilon}|I_{k, l}|^{1-\frac{(d^{2}+d)q}{2p'}}
\bigg(\int_{I_{k, l}} |\hat{f}(\gamma(t))|^{\frac{2p'}{d^{2}+d}}
\mathrm{d}t\bigg)^{\frac{(d^{2}+d)q}{2p'}}
\\
&\leq \sum_{k \geq C_{\gamma}}\sum_{l=1}^{N_{k}}
2^{\frac{-2k}{d^{2}+d}-k\epsilon}|I_{k, l}|^{1-\frac{(d^{2}+d)q}{2p'}}
2^{\frac{kq}{p'}}C_{p}^{q}\|f\|_{L^{p}(\mathbb{R}^{d})}^{q}.
\end{align*}
Another application of H\"older's inequality yields
\begin{align*}
&\sum_{k \geq C_{\gamma}} \sum_{l = 1}^{N_{k}}
2^{\frac{-2k}{d^{2} + d} - k\epsilon}
|I_{k, l}|^{1 - \frac{(d^{2} + d)q}{2p'}}
2^{\frac{kq}{p'}}C_{p}^{q}\|f\|_{L^{p}(\mathbb{R}^{d})}^{q}
\\
&\leq C_{p}^{q}\|f\|_{L^{p}(\mathbb{R}^{d})}^{q}
\sum_{k \geq C_{\gamma}}
2^{\frac{-2k}{d^{2} + d} - k\epsilon + \frac{kq}{p'}}
N_{k}^{\frac{(d^{2} + d)q}{2p'}}
\sum_{l = 1}^{N_{k}} |I_{k, l}|.
\end{align*}
With the bounds~\eqref{strong number of intervals} on \(N_{k}\)
and~\eqref{strong total length of intervals} on the total lengths of the
intervals \(I_{k, l}\),
\begin{equation*}
C_{p}^{q}\|f\|_{L^{p}}^{q}
\sum_{k \geq C_{\gamma}}
2^{\frac{-2k}{d^{2}+d}-k\epsilon+\frac{kq}{p'}}
N_{k}^{\frac{(d^{2}+d)q}{2p'}}
\leq C_{p, q, N, \gamma, \epsilon}\|f\|_{L^{p}}^{q}
\sum_{k \geq 1}k^{d}
2^{\frac{-2k}{d^{2}+d}-k\epsilon+\frac{kq}{p'}
+k\frac{(d^{2}+d)q}{2p'}\sum_{j=1}^{d}\frac{1}{N-j}}.
\end{equation*}
The sum converges whenever
\begin{equation*}
\frac{(d^{2}+d)q}{2p'}\sum_{j=1}^{d}\frac{1}{N-j}-\frac{2}{d^{2}+d}-\epsilon
+\frac{q}{p'}
< 0,
\end{equation*}
which occurs in either of the cases:
\begin{equation*}
\begin{cases}
1
\leq q
\leq \frac{2}{d^{2} + d}p'
\quad & \text{if } \epsilon > \sum_{j = 1}^{d} \frac{1}{N - j}
\\
1
\leq q
< \frac{\frac{2}{d^{2} + d} + \epsilon}
{1 + \frac{d^{2} + d}{2}\sum_{j = 1}^{d} \frac{1}{N - j}}p'
\quad & \text{if }
0 \leq \epsilon \leq \sum_{j = 1}^{d} \frac{1}{N - j}.
\end{cases}
\qedhere
\end{equation*} \end{proof}
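\begin{remark}
To see how the two cases arise, rearrange the convergence condition as
\begin{equation*}
\frac{q}{p'}\bigg(1+\frac{d^{2}+d}{2}\sum_{j=1}^{d}\frac{1}{N-j}\bigg)
< \frac{2}{d^{2}+d}+\epsilon.
\end{equation*}
Substituting the endpoint \(q = \frac{2}{d^{2}+d}p'\) reduces this to
\(\sum_{j=1}^{d}\frac{1}{N-j} < \epsilon\), which is the first case;
otherwise, solving for \(q\) yields the range in the second case.
\end{remark}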
\section{The Decomposition}
This section contains the decomposition for Theorem~\ref{strong geometric theorem}, of which Theorem~\ref{geometric theorem} is essentially a special case. First, we will create an initial decomposition using Lemma 8 from~\cite{ChenFanWang} to find intervals on which we can prove Theorem~\ref{strong geometric theorem} for the original curve \(\gamma\). Then, we will use polynomial approximation and Lemma 2.3 from~\cite{Stovall} to decompose further into intervals on which offspring curves are well-behaved.
More concretely, the methods in~\cite{DendrinosWright} that we need to prove Theorem~\ref{strong geometric theorem} on each interval in our final decomposition require an examination of minors of the torsion matrix for all offspring curves. With that in mind, for a curve \(\zeta\), a permutation \(\sigma \in S_{d}\) (the symmetric group on \(d\) elements), and \(1 \leq j \leq d\), define \begin{equation*}
L_{\sigma, j}^{\zeta}(t)
= \det\begin{pmatrix}
\zeta_{\sigma(1)}'(t) & \cdots & \zeta_{\sigma(1)}^{(j)}(t) \\
\vdots & & \vdots \\
\zeta_{\sigma(j)}'(t) & \cdots & \zeta_{\sigma(j)}^{(j)}(t)
\end{pmatrix}. \end{equation*}
Whenever \(j = d\), we will omit \(\sigma\) since \(|L_{\sigma, d}^{\zeta}|\) does not depend on \(\sigma\). We also omit \(\sigma\) when \(\sigma\) is the identity. Recall that \(\gamma \in C^{N}\) for some \(d < N \in \mathbb{R}\). The main result of this section is the following proposition. \begin{proposition} \label{full decomposition proposition}
For every \(k_{d} \in \mathbb{Z}\), there is a family of intervals
\(\{I_{l}\}\) and permutations \(\sigma_{l}\) such that for every
\(m \in \mathbb{N}\) and \(h \in \mathbb{R}^{m}\),
\begin{equation*}
\{t \in \tilde{I} : 2^{-k_{d}-1} \leq |L_{d}^{\gamma}(t)| \leq 2^{-k_{d}}\}
\subseteq \bigcup_{l} I_{l},
\end{equation*}
\begin{equation*}
|L_{\sigma_{l}, j}^{\gamma_{h}}(t)| \approx 2^{-k_{j}},
\quad \forall t \in (I_{l})_{h},\ 1 \leq j \leq d, \text{ and}
\end{equation*}
\begin{equation} \label{full decomposition number of intervals}
\#\{I_{l}\}
\leq C_{N, \gamma}(k_{d}+C_{\gamma})^{d}2^{k_{d}\sum_{j=1}^{d}\frac{1}{N-j}},
\text{ and}
\end{equation}
\begin{equation*}
\sum_{l} |I_{l}| \leq C(k_{d}+C_{\gamma})^{d}.
\end{equation*} \end{proposition} \subsection*{The initial decomposition}
We first prove Proposition~\ref{full decomposition proposition} in the
special case \(h = 0\).
\begin{proposition} \label{k decomposition lemma}
For every \(k_{d} \in \mathbb{Z}\), there is a family of intervals
\(\mathcal{I}_{k_{d}}\) with
\begin{equation} \label{initial decomposition number and length of intervals}
\sum_{I \in \mathcal{I}_{k_{d}}} |I| \leq C(k_{d}+C_{\gamma})^{d}
\quad \text{and} \quad
\#\mathcal{I}_{k_{d}} \leq C_{N, \gamma}2^{k_{d}\sum_{j=1}^{d}\frac{1}{N-j}}
\end{equation}
such that
\begin{equation*}
\{t \in \tilde{I} : 2^{-k_{d} - 1} \leq |L_{d}^{\gamma}(t)| \leq 2^{-k_{d}}\}
\subseteq \bigcup_{I \in \mathcal{I}_{k_{d}}} I.
\end{equation*}
Furthermore, there are constants \(A_{j}\) depending only on \(\gamma\), \(j\),
and \(d\) such that on each interval \(I \in \mathcal{I}_{k_{d}}\), there is a
permutation \(\sigma \in S_{d}\) and \(k_{j} \in \mathbb{Z}\) with
\(A_{j} \leq k_{j} \leq k_{d} + A_{j} + \log(d!\|\gamma\|_{C^{d}}^{d})\)
such that
\begin{equation} \label{size of L_j}
2^{-k_{j} - 2} \leq |L_{\sigma, j}^{\gamma}(t)| \leq 2^{-k_{j} + 1},
\quad t \in I,\ 1 \leq j \leq d.
\end{equation}
\end{proposition}
The first step in proving Proposition~\ref{k decomposition lemma} is to show
that for each \(t \in \tilde{I}\), there is a permutation \(\sigma\) such that the
\(L_{\sigma, j}^{\gamma}(t)\)'s are generally decreasing in \(j\).
\begin{lemma} \label{number of k_j}
There are constants \(A_{j}\) such that for every \(t \in \tilde{I}\), there is a
permutation \(\sigma\) such that if
\begin{equation*}
2^{-k_{d}-1} \leq |L_{d}^{\gamma}(t)| \leq 2^{-k_{d}},
\end{equation*}
then there is \(k_{j} \in \mathbb{Z}\) with
\(A_{j} \leq k_{j} \leq k_{d} + A_{j} + \log(d!\|\gamma\|_{C^{d}}^{d})\)
such that
\begin{equation*}
2^{-k_{j}-1} \leq |L_{\sigma, j}^{\gamma}(t)| \leq 2^{-k_{j}}.
\end{equation*}
\end{lemma}
\begin{proof}
Fix \(t \in \tilde{I}\). For any permutation \(\sigma\) and any
\(1 \leq j \leq d\),
\begin{equation*}
|L_{\sigma, j}^{\gamma}(t)| \leq j!\|\gamma\|_{C^{d}}^{j},
\end{equation*}
so if \(k_{j}\) is the unique integer satisfying
\begin{equation*}
2^{-k_{j}-1} \leq |L_{\sigma, j}^{\gamma}(t)| < 2^{-k_{j}},
\end{equation*}
then
\begin{equation} \label{L_j's are close lower bound}
k_{j} \geq -\log(j!\|\gamma\|_{C^{d}}^{j}).
\end{equation}
To get an upper bound on \(k_{j}\), we'll show by induction that there is a
permutation \(\sigma\) such that
\begin{equation} \label{L_j's are close upper bound}
|L_{\sigma, j}^{\gamma}(t)|
\geq \frac{j!}{d!\|\gamma\|_{C^{d}}^{d - j}}|L_{d}^{\gamma}(t)|
\quad \text{ for all } 1 \leq j \leq d.
\end{equation}
The base case of \(j = d\) is true for every permutation. In the induction
step, suppose that \(\sigma(d), \dots, \sigma(j+2)\) have been specified
(with nothing specified if \(j = d-1\)). Let \(M\) be the torsion matrix
with the last \(d-j-1\) columns deleted and rows
\(\sigma(d), \dots, \sigma(j+2)\) deleted. Then
\(|\det M| = |L_{j+1}^{\sigma}|\). For \(1 \leq i, l \leq j+1\), let
\(M_{i, l}\) be the minor of \(M\) obtained by deleting the \(i\)'th row
and \(l\)'th column. Expanding the determinant along the last column, we find
that
\begin{equation*}
|L_{j+1}^{\sigma}(t)|
= \bigg|\sum_{i = 1}^{j+1}
(-1)^{i+j+1}\gamma_{i}^{(j+1)}(t)\det M_{i, j+1}\bigg|
\leq \|\gamma\|_{C^{d}} \sum_{i = 1}^{j+1} |\det M_{i, j+1}|.
\end{equation*}
Hence there is some \(i\) such that
\begin{equation*}
|\det M_{i, j+1}|
\geq \frac{1}{(j+1)\|\gamma\|_{C^{d}}}|L_{j+1}^{\sigma}(t)|.
\end{equation*}
Thus, for every permutation \(\sigma\) that sends \(i\) to \(j+1\),
\begin{equation*}
|L_{\sigma, j}^{\gamma}(t)|
\geq \frac{1}{(j+1)\|\gamma\|_{C^{d}}}|L_{j+1}^{\sigma}(t)|.
\end{equation*}
By the induction hypothesis,
\begin{equation*}
|L_{\sigma, j}^{\gamma}(t)|
\geq \frac{1}{(j+1)\|\gamma\|_{C^{d}}}
\frac{(j+1)!}{d!\|\gamma\|_{C^{d}}^{d-j-1}}|L_{d}^{\gamma}(t)|
= \frac{j!}{d!\|\gamma\|_{C^{d}}^{d-j}}|L_{d}^{\gamma}(t)|.
\end{equation*}
This completes the induction, so~\eqref{L_j's are close upper bound} holds.
Putting together~\eqref{L_j's are close lower bound}
and~\eqref{L_j's are close upper bound},
\begin{equation*}
-\log(j!\|\gamma\|_{C^{d}}^{j})
\leq k_{j}
\leq -\log\Big(\frac{j!}{d!\|\gamma\|_{C^{d}}^{d - j}}\Big) + k_{d}.
\end{equation*}
The lemma follows by setting \(A_{j} = -\log(j!\|\gamma\|_{C^{d}}^{j})\).
\end{proof}
Combining Lemma~\ref{number of k_j} with \(d\) applications of the following
lemma from~\cite{ChenFanWang} proves
Proposition~\ref{k decomposition lemma}.
\begin{lemma} [\cite{ChenFanWang} Lemma 8] \label{ChenFanWang lemma 8}
Let \(\varphi \in C^{1 / \alpha}(I)\) with \(\alpha > 0\). For every
\(k \in \mathbb{Z}\), there exist disjoint intervals
\(\{I_{k, j} \subseteq I\}_{j = 1}^{N_{k}}\) such that
\(2^{-k - 2} \leq |\varphi(t)| \leq 2^{-k + 1}\) for all \(t \in I_{k, j}\) and
\begin{equation*}
\{t \in I : 2^{-k - 1} \leq |\varphi(t)| \leq 2^{-k}\}
\subseteq \bigcup_{j = 1}^{N_{k}} I_{k, j};
\end{equation*}
moreover, there is a constant \(B_{\alpha}\) such that
\(N_{k} \leq B_{\alpha} 2^{\alpha k}\) for every \(k\).
\end{lemma}
\begin{remark}
The proof of the above lemma in~\cite{ChenFanWang} shows that we can take
\begin{equation*}
B_{\alpha}
= \|\varphi\|_{C^{\frac{1}{\alpha}}}^{\alpha}4^{\frac{1}{\alpha}+\alpha+4}.
\end{equation*}
\end{remark}
\begin{proof}[Proof of Proposition~\ref{k decomposition lemma}]
For an interval \(J\), a permutation \(\sigma\), \(1 \leq j \leq d\), and
\(k \in \mathbb{Z}\), let \(\mathcal{I}_{j}^{\sigma}(J, k)\) be the set of
intervals from Lemma~\ref{ChenFanWang lemma 8} with
\(\varphi = L_{\sigma, j}^{\gamma}\)
and \(\frac{1}{\alpha} = N - j\). To simplify the notation, set
\begin{equation*}
B_{j}
= \|\gamma\|_{C^{N}}^{\frac{1}{N-j}}4^{\frac{1}{N-j}+N-j+4}.
\end{equation*}
Then for any interval \(J\), the number of intervals of
\(\mathcal{I}_{j}^{\sigma}(J, k)\) is at most \(B_{j}2^{\frac{k}{N - j}}\).
Combining Lemma~\ref{number of k_j} with \(d\) successive applications of
Lemma~\ref{ChenFanWang lemma 8}, we see that
\begin{align*}
&\{t \in \tilde{I} : 2^{-k_{d} - 1} \leq |L_{d}^{\gamma}(t)| \leq 2^{-k_{d}}\}
\\
&\subseteq \bigcup_{\sigma \in S_{d}}
\bigcup_{I_{d} \in \mathcal{I}_{d}^{\sigma}(\tilde{I}, k_{d})} \hspace{-1em}
\bigcup_{k_{d-1}=A_{d-1}}
^{A_{d-1}+k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d})} \hspace{-1.2em}
\cdots
\bigcup_{I_{2} \in \mathcal{I}_{2}^{\sigma}(I_{3}, k_{2})} \hspace{-1em}
\bigcup_{k_{1}=A_{1}}^{A_{1}+k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d})}
\hspace{-1em}
\bigcup_{I_{1} \in \mathcal{I}_{1}^{\sigma}(I_{2}, k_{1})} \hspace{-1em}
I_{1}.
\end{align*}
Whenever \(j < d\), the total number of intervals \(I_{j}\) in each pair of
unions
\begin{equation} \label{pair of unions}
\bigcup_{k_{j} = A_{j}}^{A_{j}+k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d})}
\hspace{-1em}
\bigcup_{I_{j} \in \mathcal{I}_{j}^{\sigma}(I_{j+1}, k_{j})} \hspace{-1.25em}
I_{j},
\end{equation}
is bounded by
\begin{equation*}
\sum_{k_{j} = A_{j}}^{k_{d}+A_{j}+\log(d!\|\gamma\|_{C^{d}}^{d})}
B_{j}2^{\frac{k_{j}}{N-j}}
\leq B_{j}2^{\frac{A_{j}}{N-j}}
\frac{2^\frac{k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d})+1}{N-j}}
{2^{\frac{1}{N-j}}-1}.
\end{equation*}
Recalling that all logarithms are taken in base 2 and using the fact that
\(N-j > 1\),
\begin{equation*}
B_{j}2^{\frac{A_{j}}{N-j}}
\frac{2^\frac{k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d})+1}{N-j}}
{2^{\frac{1}{N-j}}-1}
\leq C\|\gamma\|_{C^{N}}^{\frac{1}{N-j}}4^{N}
\bigg(\frac{d!\|\gamma\|_{C^{d}}^{d}}
{j!\|\gamma\|_{C^{d}}^{j}}\bigg)^{\frac{1}{N-j}}
2^{\frac{k_{d}}{N-j}}
\leq C\|\gamma\|_{C^{N}}^{\frac{1}{N-j}}4^{N}
\|\gamma\|_{C^{d}}^{\frac{d-j}{N-j}}
2^{\frac{k_{d}}{N-j}}.
\end{equation*}
Hence, the set
\(\{t \in I : 2^{-k_{d} - 1} \leq |L_{d}^{\gamma}(t)| \leq 2^{-k_{d}}\}\) is
covered by at most
\begin{equation*}
C4^{Nd+\frac{1}{N-d}}\|\gamma\|_{C^{N}}^{\sum_{j=1}^{d}\frac{1}{N-j}}
\|\gamma\|_{C^{d}}^{\sum_{j=1}^{d-1}\frac{d-j}{N-j}}
2^{k_{d}\sum_{j=1}^{d}\frac{1}{N-j}}
= C_{N, \gamma}2^{k_{d}\sum_{j=1}^{d}\frac{1}{N-j}}
\end{equation*}
many intervals that satisfy~\eqref{size of L_j}. Moreover, the sum of the lengths
of the intervals in each pair of unions~\eqref{pair of unions} is bounded by
\begin{equation*}
(k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d}))|I_{j+1}|,
\end{equation*}
since the intervals in \(\mathcal{I}_{j}^{\sigma}(I_{j+1}, k_{j})\) are disjoint
for every \(k_{j}\). Therefore, the total length of all the intervals in the
initial decomposition at scale \(k_{d}\) is at most
\begin{equation*}
d!(k_{d}+\log(d!\|\gamma\|_{C^{d}}^{d}))^{d}|\tilde{I}|
\leq C(k_{d}+C_{\gamma})^{d}.
\qedhere
\end{equation*}
\end{proof} \subsection*{The secondary decomposition}
We now proceed to the general \(h \in \mathbb{R}^{m}\) in
Proposition~\ref{full decomposition proposition}. Our initial decomposition gave
a family of intervals where \(|L_{\sigma, j}^{\gamma}| \approx 2^{-k_{j}}\). We
finish the proof of Proposition~\ref{full decomposition proposition} by applying
the following proposition to each \(\zeta_{j} = (\gamma_{1}, \dots, \gamma_{j})\)
in turn. We must first ensure that the intervals in the initial decomposition
are small enough to apply this proposition. By the upper
bounds~\eqref{initial decomposition number and length of intervals} on the total
length and the number of intervals, we can freely shrink the intervals to be of
size at most \(C_{N, \gamma}2^{-k_{d}\sum_{j=1}^{d}\frac{1}{N-j}}\) while
retaining the necessary upper
bound~\eqref{full decomposition number of intervals} on the total number of
intervals.
\begin{proposition}
Let \(I = [a, b]\) with \(b-a \leq 1\) and let
\(\zeta \in C^{N}(I; \mathbb{R}^{j})\). There is a
constant \(A\) depending only on \(N\) and \(j\) such that if
\(|L_{j}^{\zeta}(t)| \approx 2^{-k}\) on \(I\) and
\begin{equation} \label{size of intervals}
b-a
\leq A^{\frac{1}{N-j}}\|\zeta\|_{C^{N}}^{\frac{-j}{N-j}}
2^{\frac{-k}{N-j}},
\end{equation}
there is a decomposition \(I = \cup_{i=1}^{C_{N}} I_{i}\) into disjoint
intervals such that for every \(m \in \mathbb{N}\) and \(h \in \mathbb{R}^{m}\),
\begin{equation*}
|L_{j}^{\zeta_{h}}(t)| \approx 2^{-k}
\quad \forall t \in (I_{i})_{h}.
\end{equation*}
\end{proposition}
\begin{proof}
By Taylor's theorem, for any \(t \in I\),
\begin{align*}
\zeta(t)
&= \sum_{i=0}^{\lfloor N \rfloor-1} \frac{\zeta^{(i)}(a)}{i!}(t-a)^{i}
+\frac{\zeta^{(\lfloor N \rfloor)}(z_{t})}{\lfloor N \rfloor!}
(t-a)^{\lfloor N \rfloor}
\\
&= \sum_{i=0}^{\lfloor N \rfloor} \frac{\zeta^{(i)}(a)}{i!}(t-a)^{i}
+\frac{\zeta^{(\lfloor N \rfloor)}(z_{t})
-\zeta^{(\lfloor N \rfloor)}(a)}{\lfloor N \rfloor!}(t-a)^{\lfloor N \rfloor}.
\end{align*}
Set
\begin{equation*}
P(t) = \sum_{i=0}^{\lfloor N \rfloor} \frac{\zeta^{(i)}(a)}{i!}(t-a)^{i}
\quad \text{and} \quad
R(t) = \frac{\zeta^{(\lfloor N \rfloor)}(z_{t})
-\zeta^{(\lfloor N \rfloor)}(a)}{\lfloor N \rfloor!}(t-a)^{\lfloor N \rfloor}.
\end{equation*}
Then \(R \in C^{N}(I)\), \(R^{(i)}(a) = 0\) for
\(0 \leq i \leq \lfloor N \rfloor\), and
\begin{equation} \label{remainder bound}
|R^{(i)}(t)| \leq |t-a|^{N-i}\|\zeta\|_{C^{N}}
\quad \forall t \in I.
\end{equation}
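The bound~\eqref{remainder bound} follows from Taylor's theorem: since
\(R = \zeta - P\) and \(R^{(l)}(a) = 0\) for
\(i \leq l \leq \lfloor N \rfloor\), there is a point \(z\) between \(a\)
and \(t\) such that
\begin{equation*}
|R^{(i)}(t)|
= \frac{|R^{(\lfloor N \rfloor)}(z)|}{(\lfloor N \rfloor - i)!}
|t-a|^{\lfloor N \rfloor - i}
\leq |\zeta^{(\lfloor N \rfloor)}(z)-\zeta^{(\lfloor N \rfloor)}(a)|\,
|t-a|^{\lfloor N \rfloor - i}
\leq \|\zeta\|_{C^{N}}|t-a|^{N-i},
\end{equation*}
where the last step uses the \((N-\lfloor N \rfloor)\)-H\"older continuity
of \(\zeta^{(\lfloor N \rfloor)}\) and \(|z-a| \leq |t-a|\).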
For any \(h \in \mathbb{R}^{m}\) and \(t \in I_{h}\),
\begin{equation} \label{L_h,j}
\begin{aligned}
L_{j}^{\zeta_{h}}(t)
&= \det[P_{h}'(t)+R_{h}'(t), \dots, P_{h}^{(j)}(t)+R_{h}^{(j)}(t)]
\\
&= \det[P_{h}'(t), \dots, P_{h}^{(j)}(t)]
\\
&\quad +\det[R_{h}'(t),\ P_{h}''(t), \dots, P_{h}^{(j)}(t)]
\\
&\quad
+\det[P_{h}'(t)+R_{h}'(t),\ R_{h}''(t),\ P_{h}'''(t), \dots, P_{h}^{(j)}(t)]
\\
&\quad \dots
\\
&\quad +\det[P_{h}'(t)+R_{h}'(t), \dots, P_{h}^{(j-1)}(t)+R_{h}^{(j-1)}(t),\
R_{h}^{(j)}(t)]
\\
&\eqqcolon L^{P_{h}}_{j}(t)+L_{P, R, j, h}(t).
\end{aligned}
\end{equation}
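The telescoping expansion above is simply multilinearity of the determinant in
its columns; for instance, when \(j = 2\) (suppressing the subscript \(h\)),
\begin{equation*}
\det[P'+R',\ P''+R'']
= \det[P',\ P'']+\det[R',\ P'']+\det[P'+R',\ R''].
\end{equation*}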
We will show that \(|L_{P, R, j, h}| \lesssim 2^{-k}\) with an implicit constant
depending on \(A\) in~\eqref{size of intervals} (which we can make as small
as we need), and then we will divide \(I\) into intervals where
\(|L_{j}^{P_{h}}| \approx 2^{-k}\). Combined, these imply that
\(|L_{j}^{\zeta_{h}}| \approx 2^{-k}\). We start with \(|L_{P, R, j, h}|\).
Applying~\eqref{remainder bound}, we see that
\begin{equation*}
|L_{P, R, j, h}(t)| \leq C|I|^{N-j}\|\zeta\|_{C^{N}}^{j}
\quad \forall t \in I_{h}.
\end{equation*}
Let \(A\) be a small constant to be chosen shortly. Assuming the
bound~\eqref{size of intervals} on the size of each interval holds, we
conclude that
\begin{equation} \label{L_P,R,j,h}
|L_{P, R, j, h}(t)| \leq AC2^{-k}
\quad \forall t \in I_{h}.
\end{equation}
The next lemma will give a decomposition to deal with \(|L_{j}^{P_{h}}|\).
\begin{lemma} [\cite{Stovall}, Lemma 2.3] \label{Stovall lemma 2.3}
Fix \(j \geq 2\) and \(l \in \mathbb{N}\). There exists a decomposition of
\([-1, 1]\) into disjoint intervals \(I_{i}\), \(1 \leq i \leq C_{l}\),
such that for every \(I_{i}\) and every degree \(l\) polynomial
\(Q \colon \mathbb{R} \rightarrow \mathbb{R}^{j}\) satisfying
\begin{equation*}
|L^{Q}_{j}(t)| \approx 1, \quad t \in [-1, 1],
\end{equation*}
every offspring curve \(Q_{h}(t)\) satisfies
\begin{equation*}
|L^{Q_{h}}_{j}(t)| \approx_{l, j} 1
\quad \forall t \in (I_{i})_{h}.
\end{equation*}
\end{lemma}
For \(t \in [-1, 1]\), set
\begin{equation*}
Q(t)
= 2^{\frac{k}{j}}\Big(\frac{2}{b-a}\Big)^{\frac{j+1}{2}}
\bigg(P_{1}\Big(\frac{b-a}{2}(t+1)+a\Big), \dots,
P_{j}\Big(\frac{b-a}{2}(t+1)+a\Big)\bigg).
\end{equation*}
Before we can apply Lemma~\ref{Stovall lemma 2.3}, we calculate
\begin{equation*}
|L^{Q}_{j}(t)|
= \Big|2^{k}\Big(\frac{2}{b-a}\Big)^{\frac{j(j+1)}{2}}
\Big(\frac{b-a}{2}\Big)^{\frac{j(j+1)}{2}}
L^{P}_{j}\Big(\frac{b-a}{2}(t+1)+a\Big)\Big|
\quad \forall t \in [-1, 1].
\end{equation*}
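The powers of \(\frac{b-a}{2}\) in this calculation cancel exactly: each of
the \(j\) rows of \(Q\) carries the prefactor
\(2^{\frac{k}{j}}\big(\frac{2}{b-a}\big)^{\frac{j+1}{2}}\), while the column
of \(i\)'th derivatives contributes \(\big(\frac{b-a}{2}\big)^{i}\) by the
chain rule, and
\begin{equation*}
2^{k}\Big(\frac{2}{b-a}\Big)^{\frac{j(j+1)}{2}}
\prod_{i=1}^{j}\Big(\frac{b-a}{2}\Big)^{i}
= 2^{k},
\quad \text{since } \sum_{i=1}^{j} i = \frac{j(j+1)}{2}.
\end{equation*}
Hence \(|L^{Q}_{j}(t)| = 2^{k}|L^{P}_{j}(\frac{b-a}{2}(t+1)+a)|\).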
By the calculation in~\eqref{L_h,j} with \(h = 0\), we have
\begin{equation*}
L_{j}^{\zeta}(t) = L_{j}^{P}(t)+L_{P, R, j, 0}(t).
\end{equation*}
Since \(|L_{j}^{\zeta}| \approx 2^{-k}\) and \(|L_{P, R, j, 0}(t)| \leq AC2^{-k}\)
by~\eqref{L_P,R,j,h}, we can choose \(A\) small enough that
\(|L^{P}_{j}| \approx 2^{-k}\). Therefore,
\begin{equation*}
|L^{Q}_{j}(t)| \approx 1
\quad \forall t \in [-1, 1].
\end{equation*}
Applying Lemma~\ref{Stovall lemma 2.3}, we obtain a decomposition
\([-1, 1] = \cup_{i=1}^{C_{N}} J_{i}\) into disjoint
intervals such that
\begin{equation*}
|L^{Q_{h}}_{j}(t)| \approx 1
\quad \forall t \in (J_{i})_{h}.
\end{equation*}
Setting
\begin{equation*}
I_{i} = \Big\{t \in I : \frac{2(t-a)}{b-a}-1 \in J_{i}\Big\},
\end{equation*}
we see that
\begin{equation} \label{L^P_h_j}
|L^{P_{h}}_{j}(t)| \approx 2^{-k}
\quad \forall t \in (I_{i})_{h}.
\end{equation}
As before, we can choose \(A\) small enough that we can
combine~\eqref{L_h,j},~\eqref{L_P,R,j,h}, and~\eqref{L^P_h_j} to conclude
\(|L_{j}^{\zeta_{h}}(t)| \approx 2^{-k}\).
\end{proof}
\section{The Geometric Inequality}
To finish the proof of Theorem~\ref{strong geometric theorem}, we need to show that \(\Phi_{\gamma_{h}}\) given in~\eqref{sum of gammas} is 1-to-1 for \(t_{1} < \dots < t_{d}\) and that the geometric inequality~\eqref{geometric inequality} holds on each interval given in Proposition~\ref{k decomposition lemma}. The following proposition shows that the geometric inequality~\eqref{geometric inequality} holds on each interval in our decomposition for \(\gamma\), all offspring curves \(\gamma_{h}\), and all truncations of these curves. \begin{proposition} \label{geometric inequality truncation proposition}
Let \(I \subset \mathbb{R}\), \(n \in \mathbb{N}\),
\(\zeta \colon I \rightarrow \mathbb{R}^{n}\), and
\begin{equation*}
\Phi_{\zeta}(t_{1}, \dots, t_{n})
= \zeta(t_{1}) + \cdots + \zeta(t_{n}).
\end{equation*}
Assume that there are \(k_{j} \in \mathbb{Z}\), \(1 \leq j \leq n\), such that
\begin{equation} \label{size of L_j^zeta}
|L_{j}^{\zeta}| \approx 2^{-k_{j}} \quad \text{on } I.
\end{equation}
Then the Jacobian
\begin{equation*}
J_{\Phi_{\zeta}}(t_{1}, \dots, t_{n}) = \det[\zeta'(t_{1}) \dots \zeta'(t_{n})]
\end{equation*}
satisfies
\begin{equation*}
|J_{\Phi_{\zeta}}(t_{1}, \dots, t_{n})|
\approx_{n} 2^{-k_{n}}|v(t_{1}, \dots, t_{n})|
\end{equation*}
for all \((t_{1}, \dots, t_{n}) \in I^{n}\). \end{proposition} The above proposition shows that if \(I\) is an interval in the decomposition, \(1 \leq n \leq d\), and \(\zeta = ((\gamma_{h})_{1}, \dots, (\gamma_{h})_{n})\), then the Jacobian \(J_{\Phi_{\zeta}}\) is single-signed and nonzero in the region \(A = \{(t_{1}, \dots, t_{n}) \in I^{n} : t_{1} < \dots < t_{n}\}\). With that, an argument of Steinig~\cite{Steinig} (see also~\cite{CarberyVanceWaingerWatsonWright, DendrinosWright}) shows that \(\Phi_{\gamma_{h}}\) is 1-to-1 on \(A\). \begin{proposition} [Steinig]
\(\Phi_{\gamma_{h}}\) is 1-to-1 on
\(A = \{(t_{1}, \dots, t_{d}) \in I^{d} : t_{1} < \dots < t_{d}\}\). \end{proposition} For the convenience of the reader, we recall Steinig's argument. \begin{proof}
Assume for contradiction that there are \(\vec{s} \neq \vec{t} \in A\) such
that
\begin{equation} \label{Steinig contradiction}
\gamma_{h}(s_{1}) + \dots + \gamma_{h}(s_{d})
= \gamma_{h}(t_{1}) + \dots + \gamma_{h}(t_{d}).
\end{equation}
We can rewrite~\eqref{Steinig contradiction} as
\begin{equation*}
\sum_{j = 1}^{m} \epsilon_{j}\gamma_{h}(u_{j}) = 0
\end{equation*}
for some even integer \(m \in [2, 2d]\), \(u_{1} < \dots < u_{m} \in I\),
\(\epsilon_{j} \in \{-1, 1\}\), and \(\sum_{j = 1}^{m} \epsilon_{j} = 0\).
Let
\begin{equation*}
\alpha_{l} = \sum_{j=1}^{l} \epsilon_{j}, \quad 1 \leq l \leq m.
\end{equation*}
Then the sequence of \(\alpha_{l}\)'s has at most \(d-1\) changes of sign.
Define the step function \(\phi(u)\) to be \(\alpha_{j}\) when
\(u \in (u_{j}, u_{j+1})\). We have
\begin{equation} \label{linear dependence}
0
= \sum_{j = 1}^{m} \epsilon_{j}\gamma_{h}(u_{j})
= \sum_{j = 1}^{m-1} \alpha_{j}[\gamma_{h}(u_{j}) - \gamma_{h}(u_{j+1})]
= -\int_{u_{1}}^{u_{m}} \phi(u)\gamma_{h}'(u) \mathrm{d} u.
\end{equation}
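The middle equality is summation by parts: with \(\alpha_{0} = 0\) and
\(\epsilon_{j} = \alpha_{j}-\alpha_{j-1}\),
\begin{equation*}
\sum_{j=1}^{m} \epsilon_{j}\gamma_{h}(u_{j})
= \sum_{j=1}^{m-1} \alpha_{j}\big[\gamma_{h}(u_{j})-\gamma_{h}(u_{j+1})\big]
+\alpha_{m}\gamma_{h}(u_{m}),
\end{equation*}
and \(\alpha_{m} = \sum_{j=1}^{m} \epsilon_{j} = 0\).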
Let \(I_{i}\), \(1 \leq i \leq n\), be the ordered, maximal intervals where
\(\phi\) is constant and nonzero. Since the sequence of \(\alpha_{l}\)'s has
at most \(d-1\) changes of sign, \(n \leq d\). Let \(M\) be
the \(n \times n\) matrix whose \((i, j)\)'th entry is given by
\begin{equation*}
\int_{I_{i}} |\phi(u)|(\gamma_{h})_{j}'(u) \mathrm{d} u.
\end{equation*}
Setting \(\zeta = ((\gamma_{h})_{1}, \dots, (\gamma_{h})_{n})\),
\begin{equation*}
\det M
= \itint{I_{1}}{\ }{I_{n}}{} |\phi(u_{1})| \dots |\phi(u_{n})|
\det[\zeta'(u_{1}) \cdots \zeta'(u_{n})]
\mathrm{d} u_{1} \dots \mathrm{d} u_{n}.
\end{equation*}
By~\eqref{linear dependence}, the rows of \(M\) are linearly dependent, so
\(\det M = 0\). On the other hand, \(J_{\Phi_{\zeta}}\) is single-signed and
nonzero on \(I_{1} \times \dots \times I_{n}\), so
\begin{equation*}
0
= |\det M|
= \itint{I_{1}}{\ }{I_{n}}{} |\phi(u_{1})| \dots |\phi(u_{n})|
|\det[\zeta'(u_{1}) \cdots \zeta'(u_{n})]|
\mathrm{d} u_{1} \dots \mathrm{d} u_{n}
> 0,
\end{equation*}
a contradiction. \end{proof} \subsection*{Proof of Proposition~\ref{geometric inequality truncation proposition}}
The proof comes in two steps. Both parts of this proof are adaptations of methods
in~\cite{DendrinosWright}. Some minor differences arise because we are not
dealing with polynomials.
First, we will define a sequence of iterated integrals
\(\mathfrak{I}_{1}, \dots, \mathfrak{I}_{n}\) such that
\begin{equation} \label{jacobian equals iterated integral}
J_{\Phi_{\zeta}}(t_{1}, \dots, t_{n})
= \mathfrak{I}_{n}(t_{1}, \dots, t_{n}).
\end{equation}
The equality~\eqref{jacobian equals iterated integral} will be shown in
Lemma~\ref{jacobian equals iterated integral lemma}. Then, using the inductive
definition of the iterated integrals, we will show in
Lemma~\ref{size of iterated integrals lemma} that
\begin{equation*}
|\mathfrak{I}_{n}(t_{1}, \dots, t_{n})|
\approx_{n} 2^{-k_{n}}v(t_{1}, \dots, t_{n}).
\end{equation*}
To that end, let
\begin{equation*}
\mathfrak{I}_{1}(t_{1})
= \frac{L_{n-2}(t_{1})L_{n}(t_{1})}{[L_{n-1}(t_{1})]^{2}}.
\end{equation*}
For \(2 \leq m \leq n\) and \((t_{1}, \dots, t_{m}) \in I^{m}\), define
\begin{equation} \label{I_m}
\mathfrak{I}_{m}(t_{1}, \dots, t_{m})
= \bigg(\prod_{j=1}^{m}
\frac{L_{n-m-1}(t_{j}) L_{n-m+1}(t_{j})}{[L_{n-m}(t_{j})]^{2}}\bigg)
\itint{t_{1}}{t_{2}}{t_{m-1}}{t_{m}}
\mathfrak{I}_{m-1}(\vec{s})\mathrm{d}s_{1} \dots \mathrm{d}s_{m-1},
\end{equation}
with the convention that \(L_{0} = L_{-1} \equiv 1\). As mentioned, the proof of
Proposition~\ref{geometric inequality truncation proposition} will be complete
following the proofs of Lemmata~\ref{jacobian equals iterated integral lemma}
and~\ref{size of iterated integrals lemma}.
\begin{lemma} \label{jacobian equals iterated integral lemma}
\(\mathfrak{I}_{n}\) defined in~\eqref{I_m}
satisfies~\eqref{jacobian equals iterated integral}.
\end{lemma}
\begin{proof}
Define \(f_{i, 0} = \zeta_{i}\) for \(1 \leq i \leq n\), and for
\(1 \leq j \leq i-1\) define
\begin{equation*}
f_{i, j} = \frac{f_{i, j-1}'}{f_{j, j-1}'}.
\end{equation*}
Assume for now that the denominator is always nonzero; this follows
from~\eqref{geometric inequality lemma 4.1 application} and the
condition~\eqref{size of L_j^zeta}. We will show that
\begin{equation} \label{iterated integrals equal f's}
\mathfrak{I}_{n-j+1}(t_{1}, \dots, t_{n-j+1})
= \det\begin{pmatrix}
f_{j, j-1}'(t_{1}) & \dots & f_{j, j-1}'(t_{n-j+1}) \\
\vdots & & \vdots \\
f_{n, j-1}'(t_{1}) & \dots & f_{n, j-1}'(t_{n-j+1})
\end{pmatrix}
\end{equation}
for all \(1 \leq j \leq n\). In particular, when \(j = 1\) we see that
\begin{equation*}
\mathfrak{I}_{n}(t_{1}, \dots, t_{n})
= \det\begin{pmatrix}
f_{1, 0}'(t_{1}) & \dots & f_{1, 0}'(t_{n}) \\
\vdots & & \vdots \\
f_{n, 0}'(t_{1}) & \dots & f_{n, 0}'(t_{n})
\end{pmatrix}
= \det\begin{pmatrix}
\zeta_{1}'(t_{1}) & \dots & \zeta_{1}'(t_{n}) \\
\vdots & & \vdots \\
\zeta_{n}'(t_{1}) & \dots & \zeta_{n}'(t_{n})
\end{pmatrix}
= J_{\Phi_{\zeta}}(t_{1}, \dots, t_{n}).
\end{equation*}
The proof of~\eqref{iterated integrals equal f's} requires two ingredients.
First, we need to write down the exact relationship between each \(f_{i, j}'\)
and various derivatives of \(\zeta\). Second, we need an iterative way of writing
the left-hand side of~\eqref{iterated integrals equal f's}.
For the first ingredient, we will need the auxiliary determinants
\begin{equation*}
L_{\zeta_{i_{1}}, \dots, \zeta_{i_{l}}}(t)
= \det\begin{pmatrix}
\zeta_{i_{1}}'(t) & \cdots & \zeta_{i_{l}}'(t) \\
\vdots & & \vdots \\
\zeta_{i_{1}}^{(l)}(t) & \cdots & \zeta_{i_{l}}^{(l)}(t)
\end{pmatrix}.
\end{equation*}
If \(A\) is the \((j+1) \times (j+1)\) matrix defining
\(L_{\zeta_{1} \dots \zeta_{j} \zeta_{i}}\), and if
\([r_{1}, \dots, r_{j}; c_{1}, \dots, c_{j}]\) denotes the determinant of
the matrix obtained from \(A\) by deleting rows \(r_{1}, \dots, r_{j}\) and
columns \(c_{1}, \dots, c_{j}\), then an application of Sylvester's Determinant
Identity (see~\cite{Bareiss}) gives
\begin{equation*}
[j, j+1; j, j+1] \cdot \det A
= [j+1; j+1] \cdot [j; j]-[j+1; j] \cdot [j; j+1].
\end{equation*}
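The displayed identity is an instance of the Desnanot--Jacobi identity on complementary minors. As a quick sanity check (our own illustration, not part of the argument), the following Python sketch verifies it symbolically for a generic \(4 \times 4\) matrix, i.e.\ \(j = 3\), using sympy.

```python
import sympy as sp

def minor(M, rows, cols):
    """Determinant of M with the listed rows and columns deleted (0-indexed)."""
    keep_r = [i for i in range(M.rows) if i not in rows]
    keep_c = [j for j in range(M.cols) if j not in cols]
    return M.extract(keep_r, keep_c).det()

# Generic 4x4 symbolic matrix; in the notation of the text j = 3, so the
# deleted rows/columns j, j+1 are the last two (0-indexed: 2 and 3).
A = sp.Matrix(4, 4, lambda i, j: sp.Symbol(f'a{i}{j}'))

lhs = minor(A, [2, 3], [2, 3]) * A.det()
rhs = minor(A, [3], [3]) * minor(A, [2], [2]) \
    - minor(A, [3], [2]) * minor(A, [2], [3])
assert sp.expand(lhs - rhs) == 0
```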
Unwinding all the definitions, we see that
\begin{align*}
[j, j + 1; j, j + 1] &= L_{\zeta_{1} \dots \zeta_{j - 1}},
\\
[j + 1; j + 1] &= L_{\zeta_{1} \dots \zeta_{j}},
\\
[j; j] &= (L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}})',
\\
[j + 1; j] &= (L_{\zeta_{1} \dots \zeta_{j}})', \text{ and}
\\
[j; j + 1] &= L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}}.
\end{align*}
Thus, we have
\begin{equation*}
L_{\zeta_{1} \dots \zeta_{j - 1}}
\cdot L_{\zeta_{1} \dots \zeta_{j} \zeta_{i}}
= L_{\zeta_{1} \dots \zeta_{j}}
\cdot (L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}})'
- (L_{\zeta_{1} \dots \zeta_{j}})'
\cdot L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}}.
\end{equation*}
Since \(L_{\zeta_{1} \dots \zeta_{j}} = L_{j}^{\zeta}\) is bounded away from 0
by~\eqref{size of L_j^zeta}, the above shows that
\begin{equation*}
\bigg(\frac{L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}}}
{L_{\zeta_{1} \dots \zeta_{j}}}\bigg)'
= \frac{L_{\zeta_{1} \dots \zeta_{j - 1}}
L_{\zeta_{1} \dots \zeta_{j} \zeta_{i}}}
{(L_{\zeta_{1} \dots \zeta_{j}})^{2}}.
\end{equation*}
Induction in \(i\) and \(j\) then gives
\begin{equation} \label{geometric inequality lemma 4.1 application}
f_{i, j}'
= \bigg(\frac{f_{i, j-1}'}{f_{j, j-1}'}\bigg)'
= \bigg(\frac{L_{\zeta_{1} \dots \zeta_{j - 1} \zeta_{i}}}
{L_{\zeta_{1} \dots \zeta_{j}}}\bigg)'
= \frac{L_{\zeta_{1} \dots \zeta_{j - 1}}
L_{\zeta_{1} \dots \zeta_{j} \zeta_{i}}}
{(L_{\zeta_{1} \dots \zeta_{j}})^{2}}.
\end{equation}
The second ingredient is covered by the following calculus lemma
in~\cite{DendrinosWright}:
\begin{lemma}[{\cite[Lemma 5.1]{DendrinosWright}}]
Let \(\{g_{i}\}_{i = 1}^{l}\) be smooth functions on an open interval
\(J \subset \mathbb{R}\) such that \(g_{1}\) never vanishes on \(J\).
If \(f_{i} = \frac{g_{i}}{g_{1}}\), \(2 \leq i \leq l\), then for
\((t_{1}, \dots, t_{l}) \in J^{l}\),
\begin{equation*}
\setlength\arraycolsep{2pt}
\det\begin{pmatrix}
g_{1}(t_{1}) & \dots & g_{1}(t_{l}) \\
\vdots & & \vdots \\
g_{l}(t_{1}) & \dots & g_{l}(t_{l})
\end{pmatrix}
= \prod_{i = 1}^{l} g_{1}(t_{i})
\int_{t_{1}}^{t_{2}} \dots \int_{t_{l-1}}^{t_{l}}
\det\begin{pmatrix}
f_{2}'(s_{1}) & \dots & f_{2}'(s_{l-1}) \\
\vdots & & \vdots \\
f_{l}'(s_{1}) & \dots & f_{l}'(s_{l-1})
\end{pmatrix}
\mathrm{d} s_{1} \dots \mathrm{d} s_{l-1}.
\end{equation*}
\end{lemma}
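As a sanity check (ours, not from~\cite{DendrinosWright}), the lemma can be verified symbolically for \(l = 3\) with the concrete, arbitrarily chosen functions \(g_{1} = 1 + s^{2}\) (nonvanishing on \(\mathbb{R}\)), \(g_{2} = s\), \(g_{3} = s^{3}\); the sketch below uses sympy.

```python
import sympy as sp

t1, t2, t3, s1, s2, s = sp.symbols('t1 t2 t3 s1 s2 s')

# Concrete choice (ours): g1 never vanishes on R, as the lemma requires.
g = [1 + s**2, s, s**3]
f = [sp.cancel(gi / g[0]) for gi in g]      # f_i = g_i / g_1

pts = [t1, t2, t3]
lhs = sp.Matrix(3, 3, lambda i, j: g[i].subs(s, pts[j])).det()

# Integrand: 2x2 determinant of the derivatives f_2', f_3' at s_1, s_2.
sv = [s1, s2]
integrand = sp.Matrix(2, 2,
                      lambda i, j: sp.diff(f[i + 1], s).subs(s, sv[j])).det()
inner = sp.integrate(sp.integrate(integrand, (s1, t1, t2)), (s2, t2, t3))

rhs = g[0].subs(s, t1) * g[0].subs(s, t2) * g[0].subs(s, t3) * inner
assert sp.simplify(lhs - rhs) == 0
```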
Using this lemma (noting that \(f_{j, j-1}' \neq 0\)), we have
\begin{equation} \label{geometric inequality lemma 5.1 application}
\begin{aligned}
&\det\begin{pmatrix}
f_{j, j-1}'(t_{1}) & \dots & f_{j, j-1}'(t_{n-j+1}) \\
\vdots & & \vdots \\
f_{n, j-1}'(t_{1}) & \dots & f_{n, j-1}'(t_{n-j+1})
\end{pmatrix}
\\
&= \prod_{i=1}^{n-j+1} f_{j, j-1}'(t_{i})
\int_{t_{1}}^{t_{2}} \dots \int_{t_{n-j}}^{t_{n-j+1}}
\det\begin{pmatrix}
f_{j+1, j}'(s_{1}) & \dots & f_{j+1, j}'(s_{n-j}) \\
\vdots & & \vdots \\
f_{n, j}'(s_{1}) & \dots & f_{n, j}'(s_{n-j})
\end{pmatrix}
\mathrm{d}s_{1} \dots \mathrm{d}s_{n-j}.
\end{aligned}
\end{equation}
As in~\cite{DendrinosWright},
combining~\eqref{geometric inequality lemma 4.1 application}
and~\eqref{geometric inequality lemma 5.1 application} iteratively gives us the
equality \eqref{iterated integrals equal f's}, thus proving the lemma.
\end{proof}
\begin{lemma} \label{size of iterated integrals lemma}
Under the assumption~\eqref{size of L_j^zeta}, for \(1 \leq m \leq n\) we have
\begin{equation} \label{size of iterated integrals}
\mathfrak{I}_{m}(t_{1}, \dots, t_{m})
\approx_{m} \pm2^{(m+1)k_{n-m}-mk_{n-m-1}-k_{n}}v(t_{1}, \dots, t_{m}),
\end{equation}
where \(k_{0} = k_{-1} = 0\). In particular,
\begin{equation*}
|\mathfrak{I}_{n}(t_{1}, \dots, t_{n})|
\approx_{n} |2^{-k_{n}}v(t_{1}, \dots, t_{n})|.
\end{equation*}
\end{lemma}
\begin{proof}
We proceed by induction. In the base case \(m = 1\), the Vandermonde determinant
\(v(t)\) is simply the constant function 1. Furthermore,
\begin{equation*}
\mathfrak{I}_{1}(t_{1})
= \frac{L_{n-2}^{\zeta}(t_{1})L_{n}^{\zeta}(t_{1})}
{[L_{n-1}^{\zeta}(t_{1})]^{2}}.
\end{equation*}
For every \(t_{1}\), the assumption~\eqref{size of L_j^zeta} shows
\begin{equation*}
|\mathfrak{I}_{1}(t_{1})|
\approx 2^{-k_{n-2}-k_{n}+2k_{n-1}},
\end{equation*}
so the base case is complete. In the inductive step,
assume that~\eqref{size of iterated integrals} holds for
\(m-1\), where \(2 \leq m \leq n\). From the definition of \(\mathfrak{I}_{m}\)
in~\eqref{I_m} and the condition~\eqref{size of L_j^zeta},
\begin{align*}
\mathfrak{I}_{m}(t_{1}, \dots, t_{m})
&\approx_{m} \pm\bigg(\prod_{j=1}^{m} 2^{2k_{n-m}-k_{n-m-1}-k_{n-m+1}}\bigg)
\itint{t_{1}}{t_{2}}{t_{m-1}}{t_{m}}
\mathfrak{I}_{m-1}(s_{1}, \dots, s_{m-1})
\mathrm{d}s_{1} \dots \mathrm{d}s_{m-1}
\\
&= \pm2^{2mk_{n-m}-mk_{n-m-1}-mk_{n-m+1}}\itint{t_{1}}{t_{2}}{t_{m-1}}{t_{m}}
\mathfrak{I}_{m-1}(s_{1}, \dots, s_{m-1})
\mathrm{d}s_{1} \dots \mathrm{d}s_{m-1}.
\end{align*}
By the induction hypothesis,
\begin{equation*}
\mathfrak{I}_{m}(t_{1}, \dots, t_{m})
\approx_{m} \pm2^{(m+1)k_{n-m}-mk_{n-m-1}-k_{n}}
\itint{t_{1}}{t_{2}}{t_{m-1}}{t_{m}}
v(s_{1}, \dots, s_{m-1}) \mathrm{d}s_{1} \dots \mathrm{d}s_{m-1}.
\end{equation*}
The integrand is a homogeneous polynomial of degree
\(\frac{(m-1)(m-2)}{2}\). Thus, the integral is a homogeneous polynomial
of degree \(\frac{(m-1)(m-2)}{2}+m-1 = \frac{m(m - 1)}{2}\). Hence, there is
some polynomial \(P\) such that
\begin{align*}
\itint{t_{1}}{t_{2}}{t_{m-1}}{t_{m}}
v(s_{1}, \dots, s_{m-1}) \mathrm{d}s_{1} \dots \mathrm{d}s_{m-1}
&= P(t_{1}, \dots, t_{m}) \prod_{1 \leq i < j \leq m} (t_{j} - t_{i})
\\
&= P(t_{1}, \dots, t_{m}) v(t_{1}, \dots, t_{m}).
\end{align*}
Indeed, for any \(1 \leq i < j \leq m\), the integral vanishes whenever
\(t_{i} = t_{j}\), so it is divisible by each factor \(t_{j} - t_{i}\), hence by
\(v(t_{1}, \dots, t_{m})\). Since \(v(t_{1}, \dots, t_{m})\) already has degree
\(\frac{m(m-1)}{2}\), \(P\) must be a constant, so
\begin{equation*}
\mathfrak{I}_{m}(t_{1}, \dots, t_{m})
\approx_{m} \pm2^{(m+1)k_{n-m}-mk_{n-m-1}-k_{n}}v(t_{1}, \dots, t_{m}).
\end{equation*}
This closes the induction and finishes the proof of the lemma.
\end{proof}
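The step in the proof where the iterated integral of a Vandermonde determinant is shown to be a constant multiple of the next Vandermonde determinant can be checked directly for small \(m\). The following Python sketch (our illustration, using sympy) does this for \(m = 3\); the constant \(P\) comes out to \(1/2\).

```python
import sympy as sp

t1, t2, t3, s1, s2 = sp.symbols('t1 t2 t3 s1 s2')

def vandermonde(*ts):
    """v(t_1, ..., t_m) = product over i < j of (t_j - t_i)."""
    out = sp.Integer(1)
    for i in range(len(ts)):
        for j in range(i + 1, len(ts)):
            out *= ts[j] - ts[i]
    return out

# Iterated integral of v(s1, s2) over s1 in (t1, t2) and s2 in (t2, t3),
# as in the definition of I_m.
integral = sp.integrate(sp.integrate(vandermonde(s1, s2), (s1, t1, t2)),
                        (s2, t2, t3))

# The ratio against v(t1, t2, t3) is the constant P from the proof.
P = sp.cancel(integral / vandermonde(t1, t2, t3))
assert P == sp.Rational(1, 2)
```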
\printbibliography
\end{document}
\begin{document}
\setlength{\baselineskip}{4.9mm} \setlength{\abovedisplayskip}{4.5mm} \setlength{\belowdisplayskip}{4.5mm}
\renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{(\theenumi)} \renewcommand{\thefootnote}{\fnsymbol{footnote}}
\allowdisplaybreaks[2]
\parindent=20pt
\pagestyle{myheadings}
\begin{center}
{\bf Kostka functions associated to complex reflection groups} \end{center}
\par
\begin{center} Toshiaki Shoji \\
\end{center}
\title{}
\begin{abstract} Kostka functions $K^{\pm}_{\Bla, \Bmu}(t)$ associated to complex reflection groups are a generalization of Kostka polynomials, which are indexed by a pair $\Bla, \Bmu$ of $r$-partitions of $n$ (and by the sign $+, -$). It is expected that there exists a close relationship between those Kostka functions and the intersection cohomology associated to the enhanced variety $\SX$ of level $r$. In this paper, we study combinatorial properties of $K^{\pm}_{\Bla,\Bmu}(t)$ based on the geometry of $\SX$. In particular, we show that in the case where $\Bmu = (-,\dots, -,\mu^{(r)})$ (and for arbitrary $\Bla$), $K^-_{\Bla, \Bmu}(t)$ has a Lascoux-Sch\"utzenberger type combinatorial description. \end{abstract}
\maketitle
\pagestyle{myheadings}
\begin{center} {\sc Introduction} \end{center} \par
In 1981, Lusztig gave a geometric interpretation of Kostka polynomials in the following sense: let $V$ be an $n$-dimensional vector space over an algebraically closed field, and put $G = GL(V)$. Let $\SP_n$ be the set of partitions of $n$. Let $\SO_{\la}$ be the unipotent class in $G$ labelled by $\la \in \SP_n$, and $K = \IC(\ol \SO_{\la}, \Ql)$ the intersection cohomology associated to the closure $\ol\SO_{\la}$ of $\SO_{\la}$. Let $K_{\la,\mu}(t)$ be the Kostka polynomial indexed by $\la, \mu \in \SP_n$, and $\wt K_{\la,\mu}(t) = t^{n(\mu)}K_{\la,\mu}(t\iv)$ the modified Kostka polynomial (see 1.1 for the definition of $n(\mu)$). Lusztig proved that
\begin{equation*} \tag{0.1} \wt K_{\la,\mu}(t) = t^{n(\la)}\sum_{i \ge 0}\dim (\SH^{2i}_xK)t^i \end{equation*} for $x \in \SO_{\mu} \subset \ol\SO_{\la}$, where $\SH^{2i}_xK$ is the stalk at $x$ of the $2i$-th cohomology sheaf $\SH^{2i}K$ of $K$. (0.1) implies that $K_{\la,\mu}(t) \in \BZ_{\ge 0}[t]$. \par Let $\SP_{n,r}$ be the set of $r$-tuples of partitions
$\Bla = (\la^{(1)}, \dots, \la^{(r)})$ such that $\sum_{i=1}^r |\la^{(i)}| = n$
(we write $|\la^{(i)}| = m$ if $\la^{(i)} \in \SP_m$). In [S1], [S2], Kostka functions $K^{\pm}_{\Bla,\Bmu}(t)$ associated to complex reflection groups (depending on the signs $+, -$) are introduced, which are a priori rational functions in $t$ indexed by $\Bla, \Bmu \in \SP_{n,r}$. In the case where $r = 2$ (in this case $K^-_{\Bla,\Bmu}(t) = K^+_{\Bla, \Bmu}(t)$), it is proved in [S2] that $K^{\pm}_{\Bla, \Bmu}(t) \in \BZ[t]$. In this case, Achar-Henderson [AH] proved that those (generalized) Kostka polynomials have a geometric interpretation in the following sense: under the previous notation, consider the variety $\SX = G \times V$ on which $G$ acts naturally. Put $\SX\uni = G\uni \times V$, where $G\uni$ is the set of unipotent elements in $G$. $\SX\uni$ is a $G$-stable subset of $\SX$, and is isomorphic to the enhanced nilpotent cone introduced by [AH]. It is known by [AH], [T] that $\SX\uni$ has finitely many $G$-orbits, which are naturally parametrized by $\SP_{n,2}$. They proved in [AH] that the modified Kostka polynomial $\wt K^{\pm}_{\Bla, \Bmu}(t)$ $(\Bla, \Bmu \in \SP_{n,2})$, defined in a similar way as in the original case, can be written as in (0.1) in terms of the intersection cohomology associated to the closure $\ol\SO_{\Bla}$ of the $G$-orbit $\SO_{\Bla} \subset \SX\uni$. \par In the case where $r = 2$, the interaction of geometric properties and combinatorial properties of Kostka polynomials was studied in [LS]. In particular, it was proved that in the special case where $\Bmu = (-,\mu^{(2)})$ (and for arbitrary $\Bla \in \SP_{n,2}$), $K_{\Bla,\Bmu}(t)$ has a combinatorial description analogous to the Lascoux-Sch\"utzenberger theorem for the original Kostka polynomials ([M, III, (6.5)]). \par We now consider the variety $\SX = G \times V^{r-1}$ for an integer $r \ge 1$, on which $G$ acts diagonally, and let $\SX\uni = G\uni \times V^{r-1}$ be the $G$-stable subset of $\SX$. The variety $\SX$ is called the enhanced variety of level $r$. 
In [S4], the relationship between Kostka functions $K^{\pm}_{\Bla, \Bmu}(t)$ indexed by $\Bla, \Bmu \in \SP_{n,r}$ and the geometry of $\SX\uni$ was studied. In contrast to the case where $r = 1,2$, $\SX\uni$ has infinitely many $G$-orbits if $r \ge 3$. A partition $\SX\uni = \coprod_{\Bla \in \SP_{n,r}}X_{\Bla}$ into $G$-stable pieces $X_{\Bla}$ was constructed in [S3], and some formulas expressing the Kostka functions in terms of the intersection cohomology associated to the closure of $X_{\Bla}$ were obtained in [S4], though this is only a partial generalization of the result of Achar-Henderson for the case $r = 2$. \par In this paper, we prove a formula (Theorem 2.6) which is a generalization of the formula in [AH, Theorem 4.5] (and also in [FGT, (11)]) to arbitrary $r$. Combining this formula with the results in [S4], we extend some results in [LS] to arbitrary $r$. In particular, we show in the special case where $\Bmu = (-,\dots,-,\mu^{(r)}) \in \SP_{n,r}$ (and for arbitrary $\Bla \in \SP_{n,r}$) that $K^{-}_{\Bla,\Bmu}(t)$ has a Lascoux-Sch\"utzenberger type combinatorial description.
\par
\par
\section{Review on Kostka functions}
\para{1.1.} First we recall basic properties of Hall-Littlewood functions and Kostka polynomials in the original setting, following [M]. Let $\vL = \vL(y) = \bigoplus_{n \ge 0}\vL^n$ be the ring of symmetric functions over $\BZ$ with respect to the variables $y = (y_1, y_2, \dots)$, where $\vL^n$ denotes the free $\BZ$-module of symmetric functions of degree $n$. We put $\vL_{\BQ} = \BQ\otimes_{\BZ}\vL$, $\vL^n_{\BQ} = \BQ\otimes_{\BZ}\vL^n$. Let $s_{\la}$ be the Schur function associated to $\la \in \SP_n$. Then $\{ s_{\la} \mid \la \in \SP_n \}$ gives a $\BZ$-basis of $\vL^n$. Let $p_{\la} \in \vL^n$ be the power sum symmetric function associated to $\la \in \SP_n$, \begin{equation*} p_{\la} = \prod_{i = 1}^kp_{\la_i}, \end{equation*} where $p_m$ denotes the $m$-th power sum symmetric function for each integer $m > 0$. Then $\{ p_{\la} \mid \la \in \SP_n \}$ gives a $\BQ$-basis of $\vL^n_{\BQ}$. For $\la = (1^{m_1}, 2^{m_2}, \dots) \in \SP_n$, define an integer $z_{\la}$ by \begin{equation*} \tag{1.1.1} z_{\la} = \prod_{i \ge 1}i^{m_i}m_i!. \end{equation*} Following [M, I], we introduce a scalar product on $\vL_{\BQ}$ by $\lp p_{\la}, p_{\mu} \rp = \d_{\la\mu}z_{\la}$. It is known that $\{s_{\la}\}$ form an orthonormal basis of $\vL$.
\par Let $P_{\la}(y;t)$ be the Hall-Littlewood function associated to a partition $\la$. Then $\{ P_{\la} \mid \la \in \SP_n \}$ gives a $\BZ[t]$-basis of $\vL^n[t] = \BZ[t]\otimes_{\BZ}\vL^n$, where $t$ is an indeterminate. Kostka polynomials $K_{\la, \mu}(t) \in \BZ[t]$ ($\la, \mu \in \SP_n$) are defined by the formula \begin{equation*} \tag{1.1.2} s_{\la}(y) = \sum_{\mu \in \SP_n}K_{\la,\mu}(t)P_{\mu}(y;t). \end{equation*} \par Recall the dominance order $\la \ge \mu$ in $\SP_n$, which is defined by the condition $\sum_{j= 1}^i\la_j \ge \sum_{j = 1}^i\mu_j$ for each $i \ge 1$. For each partition $\la = (\la_1, \dots, \la_k)$, we define an integer $n(\la)$ by $n(\la) = \sum_{i=1}^k(i-1)\la_i$. It is known that $K_{\la,\mu}(t) = 0$ unless $\la \ge \mu$, and that $K_{\la,\mu}(t)$ is monic of degree $n(\mu) - n(\la)$ if $\la \ge \mu$ ([M, III, (6.5)]). Put $\wt K_{\la,\mu}(t) = t^{n(\mu)}K_{\la, \mu}(t\iv)$. Then $\wt K_{\la, \mu}(t) \in \BZ[t]$, which we call the modified Kostka polynomial. \par For $\la = (\la_1, \dots, \la_k) \in \SP_n$ with $\la_k > 0$, we define $z_{\la}(t) \in \BQ(t)$ by \begin{equation*} \tag{1.1.3} z_{\la}(t) = z_{\la}\prod_{i \ge 1}(1 - t^{\la_i})\iv, \end{equation*} where $z_{\la}$ is as in (1.1.1). Following [M, III], we introduce a scalar product on $\vL_{\BQ}(t) = \BQ(t)\otimes_{\BZ}\vL$ by $\lp p_{\la}, p_{\mu} \rp = z_{\la}(t)\d_{\la,\mu}$. Then $P_{\la}(y;t)$ form an orthogonal basis of $\vL[t] = \BZ[t]\otimes_{\BZ}\vL$. In fact, they are characterized by the following two properties ([M, III, (2.6) and (4.9)]); \begin{equation*} \tag{1.1.4} P_{\la}(y;t) = s_{\la}(y) + \sum_{\mu < \la}w_{\la\mu}(t)s_{\mu}(y) \end{equation*} with $w_{\la\mu}(t) \in \BZ[t]$, and \begin{equation*} \tag{1.1.5} \lp P_{\la}, P_{\mu} \rp = 0 \text{ unless $\la = \mu$. } \end{equation*}
\para{1.2.} We fix a positive integer $r$. Let $\Xi = \Xi(x) \simeq \vL(x^{(1)})\otimes\cdots\otimes\vL(x^{(r)})$ be the ring of symmetric functions over $\BZ$ with respect to variables $x = (x^{(1)}, \dots, x^{(r)})$, where $x^{(i)} = (x^{(i)}_1, x^{(i)}_2, \dots)$. We denote it as $\Xi = \bigoplus_{n \ge 0}\Xi^n$, similarly to the case of $\vL$.
Let $\SP_{n,r}$ be as in the Introduction. For $\Bla \in \SP_{n,r}$, we define a Schur function $s_{\Bla}(x) \in \Xi^n$ by \begin{equation*} \tag{1.2.1} s_{\Bla}(x) = s_{\la^{(1)}}(x^{(1)})\cdots s_{\la^{(r)}}(x^{(r)}). \end{equation*} Then $\{ s_{\Bla} \mid \Bla \in \SP_{n,r} \}$ gives a $\BZ$-basis of $\Xi^n$. Let $\z$ be a primitive $r$-th root of unity in $\BC$. For an integer $m \ge 1$ and $k$ such that $1 \le k \le r$, put \begin{equation*} p_m^{(k)}(x) = \sum_{j = 1}^{r}\z^{(k-1)(j-1)}p_m(x^{(j)}), \end{equation*} where $p_m(x^{(j)})$ denotes the $m$-th power sum symmetric function with respect to the variables $x^{(j)}$. For $\Bla \in \SP_{n,r}$, we define $p_{\Bla}(x) \in \Xi^n_{\BC} = \Xi^n \otimes_{\BZ}\BC$ by \begin{equation*} \tag{1.2.2} p_{\Bla}(x) = \prod_{k = 1}^r\prod_{j= 1}^{m_k}p^{(k)}_{\la^{(k)}_j}(x), \end{equation*} where $\la^{(k)} = (\la^{(k)}_1, \dots, \la^{(k)}_{m_k})$ with $\la^{(k)}_{m_k} > 0$. Then $\{ p_{\Bla} \mid \Bla \in \SP_{n,r} \}$ gives a $\BC$-basis of $\Xi^n_{\BC}$. For a partition $\la^{(k)}$ as above, we define a function $z_{\la^{(k)}}(t) \in \BC(t)$ by \begin{equation*} z_{\la^{(k)}}(t) = \prod_{j = 1}^{m_k}(1 - \z^{k-1}t^{\la_j^{(k)}})\iv. \end{equation*} For $\Bla \in \SP_{n,r}$, we define an integer $z_{\Bla}$ by $z_{\Bla} = \prod_{k=1}^rr^{m_k}z_{\la^{(k)}}$, where $z_{\la^{(k)}}$ is as in (1.1.1). We now define a function $z_{\Bla}(t) \in \BC(t)$ by \begin{equation*} \tag{1.2.3} z_{\Bla}(t) = z_{\Bla}\prod_{k=1}^rz_{{\la}^{(k)}}(t). \end{equation*} Let $\Xi[t] = \BZ[t]\otimes_{\BZ}\Xi$ be the free $\BZ[t]$-module, and $\Xi_{\BC}(t) = \BC(t)\otimes_{\BZ}\Xi$ be the $\BC(t)$-space. Then $\{ p_{\Bla}(x) \mid \Bla \in \SP_{n,r} \}$ gives a basis of $\Xi^n_{\BC}(t)$. We define a sesquilinear form on $\Xi_{\BC}(t)$ by \begin{equation*} \tag{1.2.4} \lp p_{\Bla}, p_{\Bmu} \rp = \d_{\Bla,\Bmu}z_{\Bla}(t). 
\end{equation*} \par We express an $r$-partition $\Bla = (\la^{(1)}, \dots, \la^{(r)})$ as $\la^{(k)} = (\la^{(k)}_1, \dots, \la^{(k)}_m)$ with a common $m$, by allowing parts $\la^{(i)}_j$ equal to zero, and define a composition $c(\Bla)$ of $n$ by \begin{equation*} c(\Bla) = (\la^{(1)}_1, \dots, \la^{(r)}_1, \la^{(1)}_2, \dots, \la^{(r)}_2,
\dots, \la^{(1)}_m, \dots, \la^{(r)}_m). \end{equation*} We define a partial order $\Bla \ge \Bmu$ on $\SP_{n,r}$ by the condition $c(\Bla) \ge c(\Bmu)$, where $\ge $ is the dominance order on the set of compositions of $n$ defined in a similar way as in the case of partitions. We fix a total order $\Bla \gv \Bmu$ on $\SP_{n,r}$ compatible with the partial order $\Bla > \Bmu$. \par The following result was proved in Theorem 4.4 and Proposition 4.8 in [S1], combined with [S2, \S 3].
\begin{prop} For each $\Bla \in \SP_{n,r}$, there exist unique functions $P^{\pm}_{\Bla}(x;t) \in \Xi^n_{\BQ}(t)$ $($depending on the signs $+$, $-$ $)$ satisfying the following properties. \begin{enumerate} \item $P^{\pm}_{\Bla}(x;t)$ can be written as \begin{equation*} P^{\pm}_{\Bla}(x;t) = s_{\Bla}(x) + \sum_{\Bmu \lv \Bla}u^{\pm}_{\Bla,\Bmu}(t)s_{\Bmu}(x) \end{equation*} with $u^{\pm}_{\Bla,\Bmu}(t) \in \BQ(t)$. \item $\lp P_{\Bla}^-, P^+_{\Bmu} \rp = 0$ unless $\Bla = \Bmu$. \end{enumerate} \end{prop}
\para{1.4.} $P^{\pm}_{\Bla}(x;t)$ are called Hall-Littlewood functions associated to $\Bla \in \SP_{n,r}$. By Proposition 1.3, for $\ve \in \{ +,-\}$, $\{ P^{\ve}_{\Bla} \mid \Bla \in \SP_{n,r}\}$ gives a $\BQ(t)$-basis for $\Xi_{\BQ}(t)$. For $\Bla, \Bmu \in \SP_{n,r}$, we define functions $K^{\pm}_{\Bla, \Bmu}(t) \in \BQ(t)$ by
\begin{equation*} \tag{1.4.1} s_{\Bla}(x) = \sum_{\Bmu \in \SP_{n,r}}K^{\pm}_{\Bla,\Bmu}(t)P^{\pm}_{\Bmu}(x;t). \end{equation*}
\par $K^{\pm}_{\Bla, \Bmu}(t)$ are called Kostka functions associated to complex reflection groups since they are closely related to the complex reflection group $S_n\ltimes (\BZ/r\BZ)^n$ (see [S1, Theorem 5.4]). For each $\Bla \in \SP_{n,r}$, by putting $n(\Bla) = n(\la^{(1)}) + \cdots + n(\la^{(r)})$, we define an $a$-function $a(\Bla)$ on $\SP_{n,r}$ by \begin{equation*} \tag{1.4.2}
a(\Bla) = r\cdot n(\Bla) + |\la^{(2)}| + 2|\la^{(3)}| + \cdots + (r-1)|\la^{(r)}|. \end{equation*} We define modified Kostka functions $\wt K^{\pm}_{\Bla, \Bmu}(t)$ by \begin{equation*} \tag{1.4.3} \wt K^{\pm}_{\Bla, \Bmu}(t) = t^{a(\Bmu)}K^{\pm}_{\Bla, \Bmu}(t\iv). \end{equation*}
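The statistics $n(\Bla)$ and the $a$-function (1.4.2) are elementary to compute; the following Python sketch (an illustration of ours, with hypothetical helper names) implements them directly.

```python
def n_stat(lam):
    """n(lambda) = sum_i (i-1) * lambda_i; enumerate's 0-based index is i-1."""
    return sum(i * part for i, part in enumerate(lam))

def a_stat(blam):
    """a(Bla) = r * n(Bla) + |la^(2)| + 2|la^(3)| + ... + (r-1)|la^(r)|,
    for an r-partition given as a list of tuples of parts."""
    r = len(blam)
    return (r * sum(n_stat(lam) for lam in blam)
            + sum(k * sum(lam) for k, lam in enumerate(blam)))

# For r = 1 the a-function reduces to n(lambda).
assert a_stat([(2, 1)]) == n_stat((2, 1)) == 1
# r = 2 example: Bla = ((1,1), (2,)) gives a = 2*n(Bla) + |la^(2)| = 2*1 + 2.
assert a_stat([(1, 1), (2,)]) == 4
```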
\remark{1.5.} In the case where $r = 1$, $P^{\pm}_{\Bla}(x;t)$ coincides with the original Hall-Littlewood function given in 1.1. In the case where $r = 2$, it is proved in [S2, Prop. 3.3] that $P^-_{\Bla}(x;t) = P^+_{\Bla}(x;t) \in \Xi[t]$, hence $K^-_{\Bla,\Bmu}(t) = K^+_{\Bla, \Bmu}(t) \in \BZ[t]$. Moreover it is shown that $K^{\pm}_{\Bla, \Bmu}(t) \in \BZ[t]$, which is monic of degree $a(\Bmu) - a(\Bla)$. Thus $\wt K^{\pm}_{\Bla, \Bmu}(t) \in \BZ[t]$. As mentioned in the Introduction, $\wt K^{\pm}_{\Bla, \Bmu}(t)$ has a geometric interpretation, which implies that $K^{\pm}_{\Bla, \Bmu}(t)$, and so $P^{\pm}_{\Bla}(x;t)$, are independent of the choice of the total order $\lv$ on $\SP_{n,r}$. In the case where $r \ge 3$, it is not known whether the Hall-Littlewood functions are independent of the choice of the total order $\lv$, nor whether $K^{\pm}_{\Bla, \Bmu}(t)$ are polynomials in $t$.
\par
\section{Enhanced variety of level $r$ }
\para{2.1.} Let $V$ be an $n$-dimensional vector space over an algebraic closure $\Bk$ of a finite field $\Fq$, and $G = GL(V) \simeq GL_n$. Let $B = TU$ be a Borel subgroup of $G$, $T$ a maximal torus and $U$ the unipotent radical of $B$. Let $W = N_G(T)/T$ be the Weyl group of $G$, which is isomorphic to the symmetric group $S_n$. By fixing an integer $r \ge 1$, put $\SX = G \times V^{r-1}$ and $\SX\uni = G\uni \times V^{r-1}$, where $G\uni$ is the set of unipotent elements in $G$. The variety $\SX$ is called the enhanced variety of level $r$. We consider the diagonal action of $G$ on $\SX$. Put $\SQ_{n,r} = \{ \Bm = (m_1, \dots, m_r) \in \BZ^r_{\ge 0} \mid \sum m_i = n\}$. For each $\Bm \in \SQ_{n,r}$, we define integers $p_i = p_i(\Bm)$ by $p_i = m_1 + \cdots + m_i$ for $i = 1, \dots, r$.
Let $(M_i)_{1 \le i \le n}$ be the total flag in $V$ whose stabilizer in $G$ coincides with $B$. We define varieties
\begin{align*} \wt\SX_{\Bm} &= \{ (x, \Bv, gB) \in G \times V^{r-1} \times G/B \mid g\iv xg \in B,
g\iv \Bv \in \prod_{i=1}^{r-1}M_{p_i} \}, \\ \SX_{\Bm} &= \bigcup_{g \in G}g(B \times \prod_{i=1}^{r-1}M_{p_i}), \end{align*} and the map $\pi_{\Bm} : \wt \SX_{\Bm} \to \SX_{\Bm}$ by $(x,\Bv, gB) \mapsto (x,\Bv)$. We also define the varieties
\begin{align*} \wt\SX_{\Bm, \unip} &= \{ (x, \Bv, gB) \in G\uni \times V^{r-1} \times G/B \mid g\iv xg \in U,
g\iv \Bv \in \prod_{i=1}^{r-1}M_{p_i} \}, \\ \SX_{\Bm,\unip} &= \bigcup_{g \in G}g(U \times \prod_{i=1}^{r-1}M_{p_i}), \end{align*} and the map $\pi_{\Bm,1}: \wt\SX_{\Bm,\unip} \to \SX_{\Bm,\unip}$, similarly. Note that in the case where $\Bm = (n,0,\dots, 0)$, $\SX_{\Bm}$ (resp. $\SX_{\Bm,\unip}$) coincides with $\SX$ (resp. $\SX\uni$). In that case, we denote $\wt\SX_{\Bm}, \pi_{\Bm}$, etc. by $\wt\SX, \pi$, etc. by omitting the symbol $\Bm$. (Note: here we follow the notation in [S4], but, in part, it differs from [S3]. In [S3], our $\pi_{\Bm}, \pi_{\Bm,1}$ are denoted by $\pi^{(\Bm)}, \pi^{(\Bm)}_1$ for consistency with the exotic case.)
\para{2.2.} In [S3, 5.3], a partition of $\SX\uni$ into pieces $X_{\Bla}$ is defined
\begin{equation*} \SX\uni = \coprod_{\Bla \in \SP_{n,r}}X_{\Bla}, \end{equation*} where $X_{\Bla}$ is a locally closed, smooth irreducible, $G$-stable subvariety of $\SX\uni$. If $r = 1$ or 2, $X_{\Bla}$ is a single $G$-orbit. However, if $r \ge 3$, $X_{\Bla}$ is in general a union of infinitely many $G$-orbits. \par For $\Bm \in \SQ_{n,r}$, let $W_{\Bm} = S_{m_1} \times \cdots \times S_{m_r}$ be the Young subgroup of $W = S_n$. For $\Bm \in \SQ_{n,r}$, we denote by $\SP(\Bm)$ the set of $\Bla \in \SP_{n,r}$
such that $|\la^{(i)}| = m_i$. The (isomorphism classes of) irreducible representations (over $\Ql$) of $W_{\Bm}$ are parametrized by $\SP(\Bm)$. We denote by $V_{\Bla}$ an irreducible representation of $W_{\Bm}$ corresponding to $\Bla$, namely $V_{\Bla} = V_{\la^{(1)}}\otimes\cdots\otimes V_{\la^{(r)}}$, where $V_{\mu}$ denotes the irreducible representation of $S_n$ corresponding to the partition $\mu$ of $n$. (Here we use the parametrization such that $V_{(n)}$ is the trivial representation of $S_n$). The following results were proved in [S3].
\begin{thm}[{[S3, Thm. 4.5]}] Put $d_{\Bm} = \dim \SX_{\Bm}$. Then $(\pi_{\Bm})_*\Ql[d_{\Bm}]$ is a semisimple perverse sheaf equipped with the action of $W_{\Bm}$, and is decomposed as
\begin{equation*} (\pi_{\Bm})_*\Ql[d_{\Bm}] \simeq \bigoplus_{\Bla \in \SP(\Bm)}
V_{\Bla} \otimes \IC(\SX_{\Bm}, \SL_{\Bla})[d_{\Bm}], \end{equation*} where $\SL_{\Bla}$ is a simple local system on a certain open dense subvariety of $\SX_{\Bm}$. \end{thm}
\begin{thm}[{[S3, Thm. 8.13, Thm. 7.12]}] Put $d'_{\Bm} = \dim \SX_{\Bm,\unip}$. \begin{enumerate} \item $(\pi_{\Bm,1})_*\Ql[d'_{\Bm}]$ is a semisimple perverse sheaf equipped with the action of $W_{\Bm}$, and is decomposed as
\begin{equation*} (\pi_{\Bm,1})_*\Ql[d'_{\Bm}] \simeq \bigoplus_{\Bla \in \SP(\Bm)}
V_{\Bla} \otimes \IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla}]. \end{equation*} \item We have
$\IC(\SX_{\Bm}, \SL_{\la})|_{\SX_{\Bm, \unip}} \simeq
\IC(\ol X_{\Bla}, \Ql)[\dim X_{\Bla} - d'_{\Bm}]$. \end{enumerate} \end{thm}
\para{2.5.} For a partition $\la$, we denote by $\la^t$ the dual partition of $\la$. For $\Bla = (\la^{(1)}, \dots, \la^{(r)}) \in \SP(\Bm)$, we define $\Bla^t \in \SP(\Bm)$ by $\Bla^t = ((\la^{(1)})^t, \dots, (\la^{(r)})^t)$. Assume that $\Bla \in \SP(\Bm)$. We write $(\la^{(i)})^t$ as $(\mu^{(i)}_1 \le \mu^{(i)}_2 \le \cdots \le \mu^{(i)}_{\ell_i})$ in increasing order, where $\ell_i = \la^{(i)}_1$. For each $1 \le i \le r, 1\le j < \ell_i$, we define an integer $n(i,j)$ by \begin{equation*}
n(i,j) = (|\la^{(1)}| + \cdots + |\la^{(i-1)}|) + \mu_1^{(i)} + \cdots + \mu_j^{(i)}. \end{equation*} Let $Q = Q_{\Bla}$ be the stabilizer of the partial flag $(M_{n(i,j)})$ in $G$, and $U_Q$ the unipotent radical of $Q$. In particular, $Q$ stabilizes the subspaces $M_{p_i}$. Let us define a variety $\wt X_{\Bla}$ by \begin{equation*} \begin{split} \wt X_{\Bla} = \{ (x, \Bv, gQ) \in G\uni \times V^{r-1} \times G/Q
\mid g\iv xg \in U_Q, g\iv \Bv \in \prod_{i=1}^{r-1}M_{p_i} \}. \end{split} \end{equation*} We define a map $\pi_{\Bla} : \wt X_{\Bla} \to \SX\uni$ by $(x,\Bv, gQ) \mapsto (x,\Bv)$. Then $\pi_{\Bla}$ is a proper map. Since $\wt X_{\Bla} \simeq G\times^{Q}(U_{Q}
\times \prod_i M_{p_i})$, $\wt X_{\Bla}$ is smooth and irreducible. It is known by [S3, Lemma 5.6] that $\dim \wt X_{\Bla} = \dim X_{\Bla}$ and that $\Im \pi_{\Bla}$ coincides with $\ol X_{\Bla}$, the closure of $X_{\Bla}$ in $\SX\uni$. \par For $\la, \mu \in \SP_n$, let $K_{\la, \mu} = K_{\la,\mu}(1)$ be the Kostka number. We have $K_{\la,\mu} = 0$ unless $\la \ge \mu$. For $\Bla = (\la^{(1)}, \dots, \la^{(r)})$, $\Bmu = (\mu^{(1)}, \dots, \mu^{(r)}) \in \SP(\Bm)$, we define an integer $K_{\Bla, \Bmu}$ by
\begin{equation*} K_{\Bla,\Bmu} = K_{\la^{(1)}, \mu^{(1)}}K_{\la^{(2)},\mu^{(2)}}
\cdots K_{\la^{(r)}, \mu^{(r)}}. \end{equation*} We define a partial order $\Bla \trreq \Bmu$ on $\SP_{n,r}$ by the condition $\la^{(i)} \ge \mu^{(i)}$ for $i = 1, \dots, r$. Hence $\Bla \trreq \Bmu$ implies that $\Bla, \Bmu \in \SP(\Bm)$ for a common $\Bm$. We have $K_{\Bla, \Bmu} = 0$ unless $\Bla \trreq \Bmu$. Note that $\Bla \trreq \Bmu$ implies that $\Bmu^t \trreq \Bla^t$. We show the following theorem. In the case where $r = 2$, this result was proved by [AH, Thm. 4.5].
\begin{thm} Assume that $\Bla \in \SP_{n,r}$. Then $(\pi_{\Bla})_*\Ql[\dim X_{\Bla}]$ is a semisimple perverse sheaf on $\ol X_{\Bla}$, and is decomposed as
\begin{equation*} \tag{2.6.1} (\pi_{\Bla})_*\Ql[\dim X_{\Bla}] \simeq \bigoplus_{\Bmu \trleq \Bla}
\Ql^{K_{\Bmu^t, \Bla^t}}\otimes \IC(\ol X_{\Bmu}, \Ql)[\dim X_{\Bmu}]. \end{equation*} \end{thm}
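The multiplicities $K_{\Bmu^t, \Bla^t}$ in (2.6.1) are products of ordinary Kostka numbers. As a small illustration (a brute-force sketch of ours, feasible only for small $n$), they can be computed by counting semistandard tableaux:

```python
from itertools import permutations
from math import prod

def kostka(shape, content):
    """K_{lambda, mu}: number of semistandard Young tableaux of the given
    shape whose entry i occurs content[i-1] times (rows weakly increase,
    columns strictly increase).  Brute force over multiset permutations."""
    cells = [(r, c) for r, row in enumerate(shape) for c in range(row)]
    entries = [i + 1 for i, m in enumerate(content) for _ in range(m)]
    count = 0
    for filling in set(permutations(entries)):
        T = dict(zip(cells, filling))
        if all(T[r, c] <= T[r, c + 1] for r, c in cells if (r, c + 1) in T) \
           and all(T[r, c] < T[r + 1, c] for r, c in cells if (r + 1, c) in T):
            count += 1
    return count

def kostka_multi(bla, bmu):
    """K_{Bla, Bmu} = product over i of K_{la^(i), mu^(i)} for r-partitions."""
    return prod(kostka(l, m) for l, m in zip(bla, bmu))

assert kostka((2, 1), (1, 1, 1)) == 2   # the two standard tableaux of shape (2,1)
assert kostka((1, 1, 1), (2, 1)) == 0   # vanishes: (1,1,1) is not >= (2,1)
assert kostka_multi(((2,), (1,)), ((1, 1), (1,))) == 1
```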
\para{2.7.} The rest of this section is devoted to the proof of Theorem 2.6. First we consider the case where $r = 1$. Actually, the result in this case is contained in [AH]. Their proof (for $r = 2$) depends on the result of Spaltenstein [Sp] concerning the ``Springer fibre'' $(\pi_{\Bla})\iv(z)$ for $z \in \ol X_{\Bla}$ in the case $r = 1$. In the following, we give an alternative proof independent of [Sp] for later use. Let $Q$ be a parabolic subgroup of $G$ containing $B$, $M$ the Levi subgroup of $Q$ containing $T$, and $U_Q$ the unipotent radical of $Q$. (At this stage, this $Q$ is unrelated to the $Q$ in 2.5.) Let $W_Q$ be the Weyl subgroup of $W$ corresponding to $Q$. Let $G\reg$ be the set of regular semisimple elements in $G$, and put $T\reg = G\reg \cap T$. Consider the map $\psi: \wt G\reg \to G\reg$, where
\begin{equation*} \wt G\reg = \{ (x, gT) \in G\reg \times G/T \mid g\iv xg \in T\reg \} \end{equation*} and $\psi : (x, gT) \mapsto x$. Then $\psi$ is a finite Galois covering with group $W$. We also consider a variety
\begin{align*} \wt G\reg^M &= \{ (x, gM) \in G\reg \times G/M \mid g\iv xg \in M\reg \}, \end{align*} where $M\reg = G\reg \cap M$. The map $\psi$ is decomposed as
\begin{equation*} \begin{CD} \psi : \wt G\reg @>\psi' >> \wt G\reg^M @> \psi''>> G\reg, \end{CD} \end{equation*}
\par\noindent where $\psi': (x, gT) \mapsto (x, gM)$, $\psi'': (x, gM) \mapsto x$. Here $\psi'$ is a finite Galois covering with group $W_Q$. Now $\psi_*\Ql$ is a semisimple local system on $G\reg$ such that $\End (\psi_*\Ql) \simeq \Ql[W]$, and is decomposed as
\begin{equation*} \tag{2.7.1} \psi_*\Ql \simeq \bigoplus_{\r \in W\wg} \r \otimes \SL_{\r}, \end{equation*} where $\SL_{\r} = \Hom_W(\r, \psi_*\Ql)$ is a simple local system on $G\reg$. We also have \begin{equation*} \tag{2.7.2} \psi'_*\Ql \simeq \bigoplus_{\r' \in W_Q\wg}\r' \otimes \SL'_{\r'}, \end{equation*} where $\SL'_{\r'}$ is a simple local system on $\wt G\reg^M$. Hence
\begin{equation*} \tag{2.7.3} \psi_*\Ql \simeq \psi_*''\psi_*'\Ql \simeq \bigoplus_{\r' \in W_Q\wg}
\r'\otimes \psi''_*\SL'_{\r'}. \end{equation*} (2.7.3) gives a decomposition of $\psi_*\Ql$ with respect to the action of $W_Q$. Comparing (2.7.1) and (2.7.3), we have
\begin{equation*} \tag{2.7.4} \psi''_*\SL'_{\r'} \simeq \bigoplus_{\r \in W\wg}\Ql^{(\r: \r')}\otimes \SL_{\r}, \end{equation*} where $(\r: \r')$ is the multiplicity of $\r'$ in the restricted $W_Q$-module $\r$. \par We consider the map $\pi : \wt G \to G$, where
\begin{equation*} \wt G = \{ (x, gB) \in G \times G/B \mid g\iv xg \in B \} \simeq G \times^BB , \end{equation*} and $\pi: (x, gB) \mapsto x$. We also consider
\begin{align*} \wt G^Q = \{ (x, gQ) \in G \times G/Q \mid g\iv xg \in Q \} \simeq G \times^QQ. \end{align*} The map $\pi$ is decomposed as
\begin{equation*} \begin{CD} \pi: \wt G @>\pi'>> \wt G^Q @>\pi''>> G, \end{CD} \end{equation*} where $\pi': (x, gB) \mapsto (x, gQ)$, $\pi'': (x, gQ) \mapsto x$. It is well-known ([L1]) that
\begin{equation*} \tag{2.7.5} \pi_*\Ql \simeq \bigoplus_{\r \in W\wg} \r \otimes \IC(G, \SL_{\r}). \end{equation*} Let $B_M = B \cap M$ be the Borel subgroup of $M$ containing $T$. We consider the following commutative diagram
\begin{equation*} \tag{2.7.6} \begin{CD} G \times ^BB @<\wt p<< G \times (Q\times^BB) @>\wt q>> M \times^{B_M}B_M \\
@V\pi'VV @VVr V @VV\pi^M V \\ G \times^QQ @<p<< G \times Q @>q>> M , \end{CD} \end{equation*} where under the identification $G \times^BB \simeq G \times^Q(Q \times^BB)$, the maps $p,\wt p$ are defined by the quotient by $Q$. The map $q$ is a projection to the $M$-factor of $Q$, and $\wt q$ is the map induced from the projection $Q \times B \to M \times B_M$. $\pi^M$ is defined similarly to $\pi$ replacing $G$ by $M$. The map $r$ is defined by $(g, h*x) \mapsto (g, hxh\iv)$. (We use the notation $h*x \in Q\times^BB$ to denote the $B$-orbit in $Q \times B$ containing $(h,x)$.) Here all the squares are cartesian squares. Moreover, \par
(a) $p$ is a principal $Q$-bundle. \par (b) $q$ is a locally trivial fibration with fibre isomorphic to $G \times U_Q$. \par
\noindent Thus as in [S4, (1.5.2)], for any $M$-equivariant simple perverse sheaf $A_1$ on $M$, there exists a unique (up to isomorphism) simple perverse sheaf $A_2$ on $\wt G^Q$ such that $p^*A_2[a] \simeq q^*A_1[b]$, where $a = \dim Q$ and $b = \dim G + \dim U_Q$. \par By using the cartesian squares in (2.7.6), and by (2.7.2), we see that $\pi'_*\Ql \simeq \IC(\wt G^Q, \psi'_*\Ql)$, and $\pi'_*\Ql$ is decomposed as \begin{equation*} \tag{2.7.7} \pi'_*\Ql \simeq \bigoplus_{\r' \in W_Q\wg}\r' \otimes \IC(\wt G^Q, \SL'_{\r'}). \end{equation*}
By comparing (2.7.4) and (2.7.7), we have
\begin{equation*} \tag{2.7.8} \pi''_*\IC(\wt G^Q, \SL'_{\r'}) \simeq \bigoplus_{\r \in W\wg}\Ql^{(\r:\r')}\otimes
\IC(G,\SL_{\r}). \end{equation*} Note that if $\r = V_{\la}$ for $\la \in \SP_n$, we have \begin{equation*} \tag{2.7.9}
\IC(G, \SL_{\r})|_{G\uni} \simeq \IC(\ol\SO_{\la}, \Ql)[\dim \SO_{\la} - 2\nu_G] \end{equation*} by [BM], where $\nu_G = \dim U$. Hence by restricting on $G\uni$, we have
\begin{equation*} \tag{2.7.10}
\pi''_*\IC(\wt G^Q, \SL'_{\r'})[2\nu_G]|_{G\uni}
\simeq \bigoplus_{\la \in \SP_n}\Ql^{(V_{\la} : \r')}\otimes
\IC(\ol\SO_{\la}, \Ql)[\dim \SO_{\la}]. \end{equation*}
\para{2.8.} Now assume that $W_Q \simeq S_{\mu}$ for a partition $\mu$, where we put $S_{\mu} = S_{\mu_1} \times \cdots \times S_{\mu_k}$ if $\mu = (\mu_1, \dots, \mu_k) \in \SP_n$. Take $\r' = \ve$, the sign representation of $W_Q$. We have \begin{equation*} \tag{2.8.1} (V_{\la} : \ve) = (V_{\la^t} : 1_{W_Q}) = K_{\la^t,\mu}, \end{equation*} where $1_{W_Q}$ is the trivial representation of $W_Q$. \par The restriction of the diagram (2.7.6) to the ``unipotent parts'' makes sense, and we have the commutative diagram
\begin{equation*} \tag{2.8.2} \begin{CD} G \times^BU @<<< G \times^Q(Q \times^BU) @>>> M \times^{B_M}U_M \\
@VVV @VVV @VVV \\ G \times^QQ\uni @<p_1<< G \times Q\uni @>q_1>> M\uni, \end{CD} \end{equation*} where $U_M$ is the unipotent radical of $B_M$, and $Q\uni, M\uni$ are the sets of unipotent elements in $Q, M$, respectively. $p_1, q_1$ have properties similar to (a), (b) in 2.7. We consider $\IC(M, \SL^M_{\ve})$ on $M$, where $\SL^M_{\ve}$ is the simple local system on $M\reg$ corresponding to $\ve \in W_Q\wg$. Then by (2.7.6), we see that
\begin{equation*} p^*\IC(\wt G^Q, \SL'_{\ve}) \simeq q^*\IC(M, \SL^M_{\ve}). \end{equation*} By applying (2.7.9) to $M$,
$\IC(M, \SL^M_{\ve})|_{M\uni} \simeq \IC(\ol\SO'_{\ve}, \Ql)[\dim \SO'_{\ve} - 2\nu_M]$,
where $\SO'_{\ve}$ is the orbit in $M\uni$ corresponding to $\ve$ under the Springer correspondence, and $\nu_M$ is defined similarly to $\nu_G$. It is known that $\SO'_{\ve}$ is the orbit $\{ e \} \subset M\uni$, where $e$ is the identity element
in $M$. Hence $\IC(M, \SL^M_{\ve})|_{M\uni}$ coincides with $\Ql[-2\nu_M]$ supported on $\{e\}$. It then follows from (2.8.2) that
\par
\noindent (2.8.3) \ The restriction of $\IC(\wt G^Q, \SL'_{\ve})$ on $G\times^QQ\uni$ coincides with $i_*\Ql[-2\nu_M]$, where $i: G \times^QU_Q \hra G \times^QQ\uni$ is the closed embedding. \par
We define a map $\pi_Q : G\times^QU_Q \to G\uni$ by $g*x \mapsto gxg\iv$. Put $\wt G^Q_1 = G\times^QU_Q$.
\begin{prop} With the notation as above, \begin{enumerate} \item
$\pi''_*\IC(\wt G^Q, \SL'_{\ve})[2\nu_G]|_{G\uni} \simeq (\pi_Q)_*\Ql[\dim \wt G^Q_1]$.
\item We have \begin{equation*} (\pi_Q)_*\Ql[\dim \wt G^Q_1] \simeq \bigoplus_{\substack{\mu \in \SP_n \\ \mu \le {}^t\la}}
\Ql^{K_{{}^t\la, \mu}}\otimes \IC(\ol\SO_{\la}, \Ql)
[\dim \SO_{\la}]. \end{equation*} \end{enumerate} \end{prop}
\begin{proof} Note that $2\nu_G - 2\nu_M = 2\dim U_Q = \dim \wt G^Q_1$. Thus by (2.8.3), \begin{equation*} \tag{2.9.1}
\IC(\wt G^Q, \SL'_{\ve})[2\nu_G]|_{G \times^QQ\uni} \simeq i_*\Ql[\dim \wt G^Q_1]. \end{equation*} By applying the base change theorem to the cartesian square
\begin{equation*} \begin{CD} G \times^QQ\uni @>>> G \times^QQ \\ @V\pi_1''VV @VV\pi''V \\ G\uni @>>> G, \end{CD} \end{equation*} we obtain (i) from (2.9.1) since $\pi_Q = \pi''_1\circ i$. Then (ii) follows from (i) by using (2.7.10) and (2.8.1). \end{proof}
\para{2.10.} Returning to the setting in 2.5, we consider the case where $r$ is arbitrary. We fix $\Bm \in \SQ_{n,r}$, and let $P = P_{\Bm}$ be the parabolic subgroup of $G$ containing $B$ which is the stabilizer of the partial flag $(M_{p_i})_{1 \le i \le r}$. Let $L$ be the Levi subgroup of $P$ containing $T$, and $B_L = B \cap L$ the Borel subgroup of $L$ containing $T$. Let $U_L$ be the unipotent radical of $B_L$. Put $\ol M_{p_i} = M_{p_i}/M_{p_{i-1}}$ for each $i$, under the convention $M_{p_0} = 0$. Then $L$ acts naturally on $\ol M_{p_i}$, and by applying the definition of $\pi_{\Bm,1} : \wt\SX_{\Bm,\unip} \to \SX_{\Bm, \unip}$ to $L$, we can define
\begin{align*} \wt\SX^L_{\Bm, \unip} &\simeq L \times^{B_L}(U_L \times \prod_{i=1}^{r-1}\ol M_{p_i}), \\ \SX^L_{\Bm,\unip} &= \bigcup_{g \in L}g(U_L \times \prod_{i = 1}^{r-1}\ol M_{p_i}) = L\uni \times \prod_{i=1}^{r-1} \ol M_{p_i} \end{align*} and the map $\pi^L_{\Bm,1} : \wt\SX^L_{\Bm, \unip} \to \SX^L_{\Bm, \unip}$ similarly.
Let $Q = Q_{\Bla}$ be as in 2.5 for $\Bla \in \SP(\Bm)$. Thus we have $B \subset Q \subset P$, and $Q_L = Q \cap L$ is a parabolic subgroup of $L$ containing $B_L$. We consider the following commutative diagram \begin{equation*} \tag{2.10.1} \begin{CD} \wt\SX_{\Bm,\unip} @<\wt p_1<< G \times \wt\SX^P_{\Bm,\unip} @>\wt q_1>> \wt\SX^L_{\Bm,\unip} \\
@V\a'_1VV @VV r'_1 V @VV\b'_1 V \\ \wh \SX^Q_{\Bm,\unip} @<\wh p_1 << G \times \wt\SX_{\Bm,\unip}^{P,Q} @>\wh q_1>>
\wt\SX^{L, Q_L}_{\Bm,\unip} \\
@V\a''_1VV @VVr_1''V @VV\b''_1V \\ \wh \SX^P_{\Bm, \unip} @<p_1<< G \times \SX_{\Bm, \unip}^P @>q_1>> \SX^L_{\Bm,\unip} \\
@V\pi''_1 VV \\
\SX_{\Bm,\unip}, \end{CD} \end{equation*} where, by putting $P\uni = L\uni U_P$ (the set of unipotent elements in $P$), \begin{align*} \SX_{\Bm,\unip}^P &= \bigcup_{g \in P}g(U \times \prod_i M_{p_i}) = P\uni \times \prod_i M_{p_i}, \\ \wh \SX^P_{\Bm,\unip} &= G \times^P\SX_{\Bm,\unip}^P = G \times^P(P\uni \times \prod_iM_{p_i}), \\ \wt \SX^P_{\Bm,\unip} &= P \times^{B}(U \times \prod_iM_{p_i}), \\ \wh \SX^Q_{\Bm,\unip} &= G \times^Q(Q\uni \times \prod_i M_{p_i}), \\ \wt\SX^{P,Q}_{\Bm, \unip} &= P \times^Q(Q\uni \times \prod_i M_{p_i}). \end{align*} $\wt\SX^{L,Q_L}_{\Bm, \unip}$ is a variety similar to $\wh\SX^P_{\Bm, \unip}$, defined with respect to $(L, Q_L)$, namely,
\begin{equation*} \wt\SX^{L, Q_L}_{\Bm, \unip} = L \times^{Q_L}((Q_L)\uni \times \prod_i\ol M_{p_i}). \end{equation*} The maps are defined as follows: under the identification $\wt\SX_{\Bm, \unip} \simeq G \times^B(U \times \prod_iM_{p_i})$, $\a'_1, \a''_1$ are the natural maps induced from the inclusions $G \times (U \times \prod M_{p_i}) \to G \times (Q\uni \times \prod M_{p_i})
\to G \times (P\uni \times \prod M_{p_i})$. $\pi_1'': g*(x,\Bv) \mapsto (gxg\iv, g\Bv)$. $q_1$ is defined by $(g,x,\Bv) \mapsto (\ol x, \ol \Bv)$, where $x \to \ol x$, $\Bv \mapsto \ol \Bv$ are natural maps $P \to L, \prod_iM_{p_i} \to \prod_i\ol M_{p_i}$. $\wt q_1$ is the composite of the projection $G \times \wt\SX^P_{\Bm,\unip} \to \wt\SX^P_{\Bm, \unip}$ and the map $\wt\SX^P_{\Bm, \unip} \to \wt\SX^L_{\Bm,\unip}$ induced from the projection $P \times (U \times \prod M_{p_i}) \to L \times (U_L \times \prod \ol M_{p_i})$. $\wh q_1$ is defined similarly by using the map $\wt\SX^{P,Q}_{\Bm,\unip} \to \wh\SX^{L,Q_L}_{\Bm,\unip}$ induced from the projection $P \times (Q\uni \times \prod M_{p_i}) \to L \times ((Q_L)\uni \times \prod \ol M_{p_i})$. $p_1$ is the quotient by $P$. $\wt p_1$ and $\wh p_1$ are also quotients by $P$ under the identifications $\wt\SX_{\Bm,\unip} \simeq G \times^P \wt\SX^P_{\Bm,\unip}$, $\wh\SX^Q_{\Bm,\unip} \simeq G \times^P\wt\SX^{P,Q}_{\Bm, \unip}$. $\b_1'$ is defined similarly to $\a_1'$ and $\b_1''$ is defined similarly to $\pi_1''$. $r'_1$ is the natural map induced from the injection $P \times (U \times \prod M_{p_i}) \to P \times (Q\uni \times \prod M_{p_i})$, and $r_1''$ is the natural map induced from the map $P \times^Q(Q\uni \times \prod M_{p_i}) \to P\uni \times \prod M_{p_i}$, $g*(x,\Bv) \mapsto (gxg\iv, g\Bv)$. \par Put $\pi'_1 = \a_1''\circ \a_1': \wt\SX_{\Bm, \unip} \to \wh\SX^P_{\Bm, \unip}$. We have $\b_1''\circ \b_1' = \pi^L_{\Bm,1}$, and the diagram (2.10.1) is a refinement of the diagram (6.3.2) in [S4] (see also the diagram (1.5.1) in [S4]). In particular, the map $p_1$ is a principal $P$-bundle, and the map $q_1$ is a locally trivial fibration with fibre isomorphic to $G \times U_P \times \prod_{i=1}^{r-2}M_{p_i}$. Moreover, all the squares appearing in (2.10.1) are cartesian squares. Hence the diagram (2.10.1) satisfies properties similar to those of the diagram (2.8.2).
\par Note that $L \simeq G_1 \times \cdots \times G_r$ with $G_i = GL(\ol M_{p_i})$. Then $Q_L$ can be written as $Q_L \simeq Q_1 \times \cdots \times Q_r$, where $Q_i$ is a parabolic subgroup of $G_i$. We have \begin{align*} \wt\SX^L_{\Bm, \unip} &\simeq \prod_{i=1}^r(\wt G_i)\uni \times V, \\ \wh \SX^{L,Q_L}_{\Bm, \unip} &\simeq \prod_{i=1}^r (\wt G_i^{Q_i})\uni \times V, \\ \SX^L_{\Bm, \unip} &\simeq \prod_{i=1}^r (G_i)\uni \times V, \end{align*} where $(\wt G_i)\uni, (\wt G_i^{Q_i})\uni$, etc. denote the unipotent parts of $\wt G_i, \wt G_i^{Q_i}$, etc. as in (2.8.2). The maps $\b_1', \b_1''$ are induced from the maps $(\wt G_i)\uni \to (\wt G_i^{Q_i})\uni$, $(\wt G_i^{Q_i})\uni \to (G_i)\uni$, and those maps coincide with the maps $\pi', \pi''$ in 2.7 defined with respect to $G_i$. Note that $W_{Q_i} \simeq S_{(\la^{(i)})^t}$ for each $i$ by the construction of $Q = Q_{\Bla}$ in 2.5. Put \begin{equation*} \wh \SX^Q_1 = G \times^Q(U_Q \times \prod M_{p_i}), \quad \wt\SX^{L,Q_L}_1 = L \times^{Q_L}(U_{Q_L} \times \prod \ol M_{p_i}), \end{equation*} and let $i_Q : \wh\SX_1^Q \hra \wh \SX^Q_{\Bm, \unip},
i_{Q_L} : \wt \SX^{L, Q_L}_1 \hra \wt \SX^{L,Q_L}_{\Bm, \unip}$ be the closed embeddings. Let $\pi^L_{Q_L} : \wt \SX^{L,Q_L}_1 \to \SX^L_{\Bm, \unip}$ be the restriction of $\b_1''$. Let $\SO^L_{\Bmu} \simeq \SO'_{\mu^{(1)}} \times \cdots \times \SO'_{\mu^{(r)}}$ be the $L$-orbit in $\SX^L_{\Bm, \unip}$, where $\SO'_{\mu^{(i)}}$ is the $G_i$-orbit in $(G_i)\uni \times \ol M_{p_i}$ of type $(\mu^{(i)}, \emptyset)$. Note that if we denote by $\SO_{\mu^{(i)}}$ the $G_i$-orbit in $(G_i)\uni$ of type $\mu^{(i)}$, we have $\IC(\ol\SO'_{\mu^{(i)}}, \Ql) \simeq \IC(\ol\SO_{\mu^{(i)}}, \Ql) \boxtimes \Ql$ (the latter term $\Ql$ denotes the constant sheaf on $\ol M_{p_i}$). Hence the decomposition of $\pi^L_{Q_L}$ into simple components is described by considering the factors $\IC(\ol \SO_{\mu^{(i)}}, \Ql)$. In particular, by Proposition 2.9, we have
\begin{equation*} \tag{2.10.2} (\pi^L_{Q_L})_*\Ql[\dim \wt\SX_1^{L,Q_L}] \simeq
\bigoplus_{\Bmu \trleq \Bla}
\Ql^{K_{\Bmu^t, \Bla^t}}\otimes \IC(\ol \SO^L_{\Bmu}, \Ql)[\dim \SO^L_{\Bmu}]. \end{equation*} \par By using the diagram (2.10.1), we see that \begin{equation*} \wh q_1^*(i_{Q_L})_*\Ql[\dim \wt\SX^{L, Q_L}_1] \simeq \wh p_1^*(i_Q)_*\Ql[\dim \wt X_{\Bla}]. \end{equation*} It follows, again by using the diagram (2.10.1), that
\begin{equation*} \tag{2.10.3} (\a_1'')_*(i_Q)_*\Ql[\dim \wt X_{\Bla}] \simeq \bigoplus_{\Bmu \trleq \Bla}
\Ql^{K_{\Bmu^t, \Bla^t}}\otimes B_{\Bmu}, \end{equation*} where $B_{\Bmu}$ is the simple perverse sheaf on $\wh\SX^P_{\Bm, \unip}$ characterized by the property that \begin{equation*}
p_1^*B_{\Bmu}[a'] \simeq q_1^*\IC(\ol\SO^L_{\Bmu}, \Ql)[b' + \dim \SO^L_{\Bmu}] \end{equation*} with $a' = \dim P$, $b' = \dim G + \dim U_P + \dim \prod_{i=1}^{r-2}M_{p_i}$. \par On the other hand, by Proposition 1.6 in [S4], we have \begin{equation*} \pi''_*A_{\Bmu} \simeq \IC(\SX_{\Bm}, \SL_{\Bmu})[d_{\Bm}], \end{equation*} where $\pi'': \wh\SX^P_{\Bm} = G \times^P(P \times \prod_iM_{p_i})
\to \SX_{\Bm}$ is a map analogous to $\pi''_1$, and $A_{\Bmu}$ is a simple perverse sheaf on $\wh \SX^P_{\Bm}$ such that the restriction of $A_{\Bmu}$ on $\wh\SX^P_{\Bm, \unip}$ coincides with $B_{\Bmu}$, up to shift. Thus by Theorem 2.4 (ii), we have
\begin{equation*} \tag{2.10.4} (\pi''_1)_*B_{\Bmu} \simeq \IC(\ol X_{\Bmu}, \Ql)[\dim X_{\Bmu}]. \end{equation*} Since $\pi_{\Bla} = \pi_1''\circ \a_1''\circ i_Q$, by applying $(\pi''_1)_*$ on both sides of (2.10.3), we obtain the formula (2.6.1). This completes the proof of Theorem 2.6.
\par
\section{$G^F$-invariant functions on the enhanced variety \\ and Kostka functions}
\para{3.1.} We now assume that $G$ and $V$ are defined over $\Fq$, and let $F: G \to G, F: V \to V$ be the corresponding Frobenius maps. Assume that $B$ and $T$ are $F$-stable. Then $X_{\Bla}$ and $\wt X_{\Bla}$ have natural $\Fq$-structures, and the map $\pi_{\Bla}: \wt X_{\Bla} \to \ol X_{\Bla}$ is $F$-equivariant. Thus one can define a canonical isomorphism $\vf : F^*K_{\Bla} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, K_{\Bla}$ for $K_{\Bla} = (\pi_{\Bla})_*\Ql$. By using the decomposition in Theorem 2.6, $\vf$ can be written as $\vf = \sum_{\Bmu}\s_{\Bmu} \otimes \vf_{\Bmu}$, where $\s_{\Bmu}$ is the identity map on $\Ql^{K_{\Bmu^t, \Bla^t}}$ and $\vf_{\Bmu} : F^*L_{\Bmu} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, L_{\Bmu}$ is the isomorphism induced from $\vf$ for $L_{\Bmu} = \IC(\ol X_{\Bmu}, \Ql)$. (Note that $\dim X_{\Bla} - \dim X_{\Bmu}$ is even if $\Bmu \trleq \Bla$ by [S4, Prop. 4.3], so the degree shift is negligible). We also consider the natural isomorphism $\f_{\Bmu} : F^*L_{\Bmu} \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, L_{\Bmu}$ induced from the $\Fq$-structure of $X_{\Bmu}$. By an argument similar to that in [S4, (6.1.1)], we see that \begin{equation*} \tag{3.1.1} \vf_{\Bmu} = q^{d_{\Bmu}}\f_{\Bmu}, \end{equation*} where $d_{\Bmu} = n(\Bmu)$. We consider the characteristic function $\x_{L_{\Bmu}}$ of $L_{\Bmu}$ with respect to $\f_{\Bmu}$, which is a $G^F$-invariant function on $\ol X_{\Bmu}^F$.
\para{3.2.} Take $\Bmu, \Bnu \in \SP_{n,r}$, and assume that $\Bnu \in \SP(\Bm)$. For each $z = (x, \Bv) \in X_{\Bmu}$ with $\Bv = (v_1, \dots, v_{r-1})$, we define a variety $\SG_{\Bnu,z}$ by
\begin{equation*} \tag{3.2.1} \begin{split} \SG_{\Bnu,z} = \{ (W _{p_i}) &\text{ : $x$-stable flag } \mid v_i \in W_{p_i}
\ (1 \le i \le r-1), \\
&x|_{W_{p_i}/W_{p_{i-1}}}
\text{: type $\nu^{(i)}$ } \ (1 \le i \le r) \}. \end{split} \end{equation*} If $z \in X_{\Bmu}^F$, the variety $\SG_{\Bnu,z}$ is defined over $\Fq$.
Put $g_{\Bnu,z}(q) = |\SG_{\Bnu,z}^F|$. Let $\wt K_{\la,\mu}(t)$ be the modified Kostka polynomial indexed by partitions $\la, \mu$. The following result is a generalization of Proposition 5.8 in [AH].
\begin{prop} Assume that $\Bla, \Bmu \in \SP_{n,r}$. For each $z \in X_{\Bmu}^F$, we have
\begin{equation*} \x_{L_{\Bla}}(z) = q^{-n(\Bla)}\sum_{\Bnu \trleq \Bla}g_{\Bnu, z}(q)
\wt K_{\la^{(1)},\nu^{(1)}}(q)\cdots \wt K_{\la^{(r)},\nu^{(r)}}(q). \end{equation*} \end{prop}
\begin{proof} Let $\x_{K_{\Bla}, \vf}$ be the characteristic function of $K_{\Bla}$ with respect to $\vf$. By Theorem 2.6 together with (3.1.1), we have
\begin{equation*} \tag{3.3.1} \x_{K_{\Bla}, \vf} = \sum_{\Bxi \trleq \Bla}K_{\Bxi^t, \Bla^t}q^{n(\Bxi)}\x_{L_{\Bxi}}. \end{equation*} On the other hand, by Grothendieck's fixed point formula, we have
$\x_{K_{\Bla}, \vf}(z) = |\pi_{\Bla}\iv(z)^F|$ for $z \in \ol X_{\Bla}^F$. Then if $z = (x, \Bv) \in X_{\Bmu}^F$,
\begin{equation*} \tag{3.3.2}
|\pi_{\Bla}\iv(z)^F| = \sum_{\Bnu \in \SP_{n,r}}|\SG_{\Bnu,z}^F|\prod_i|\pi_{\la^{(i)}}\iv(x_i)^F|, \end{equation*} where $\pi_{\la^{(i)}} : \wt\SO_{\la^{(i)}} \to \ol\SO_{\la^{(i)}}$ is a similar map as $\pi_{\Bla}$ applied to the case $r = 1$, by replacing $G$ by $G_i = GL(\ol M_{p_i})$,
and $x_i = x|_{\ol M_{p_i}}$ has Jordan type $\nu^{(i)}$. It is known by [L1] that $q^{n(\xi^{(i)})}\x_{L_{\xi^{(i)}}}(x_i) = \wt K_{\xi^{(i)}, \nu^{(i)}}(q)$ for a partition $\xi^{(i)}$ of $m_i$. It follows, by applying (3.3.1) to the case where $r = 1$ and by Grothendieck's fixed point formula, that \begin{equation*}
|\pi_{\la^{(i)}}\iv(x_i)^F| = \sum_{\xi^{(i)} \le \la^{(i)}}
K_{\xi^{(i)t}, \la^{(i)t}}\wt K_{\xi^{(i)}, \nu^{(i)}}(q). \end{equation*} Then (3.3.2) implies that
\begin{equation*} \tag{3.3.3} \x_{K_{\Bla}, \vf}(z) =
|\pi_{\Bla}\iv(z)^F| = \sum_{\Bnu \in \SP_{n,r}}g_{\Bnu,z}(q)\sum_{\Bxi \trleq \Bla}
K_{\Bxi^t, \Bla^t}\wt K_{\xi^{(1)}, \nu^{(1)}}(q)\cdots
\wt K_{\xi^{(r)}, \nu^{(r)}}(q). \end{equation*} Since $(K_{\Bxi^t, \Bla^t})_{\Bla, \Bxi}$ is a unitriangular matrix with respect to the partial order $\Bxi \trleq \Bla$, by comparing (3.3.1) and (3.3.3), we obtain the required formula. \end{proof}
\remark{3.4.} In general, $X_{\Bmu}$ consists of infinitely many $G$-orbits. Hence the value $g_{\Bnu, z}(q)$ may depend on the choice of $z \in X_{\Bmu}^F$. However, if $X_{\Bmu}$ is a single $G$-orbit, then $X_{\Bmu}^F$ is also a single $G^F$-orbit, and $g_{\Bnu,z}(q)$ is constant for $z \in X_{\Bmu}^F$, in which case, we denote $g_{\Bnu,z}(q)$ by $g_{\Bnu}^{\Bmu}(q)$. In what follows, we show in some special cases that there exists a polynomial $g_{\Bnu}^{\Bmu}(t) \in \BZ[t]$ such that $g_{\Bnu}^{\Bmu}(q)$ coincides with the value at $t = q$ of $g_{\Bnu}^{\Bmu}(t)$.
\para{3.5.} We consider the special case where $\Bmu \in \SP(\Bm')$ is such that $m_i' = 0$ for $i = 1, \dots, r-2$. In this case, $X_{\Bmu}$ consists of a single $G$-orbit. In particular, for $\Bla \in \SP_{n,r}$, $\dim \SH^i_z\IC(\ol X_{\Bla}, \Ql)$ does not depend on the choice of $z \in X_{\Bmu}$. We define a polynomial $\IC^-_{\Bla, \Bmu}(t) \in \BZ[t]$ by
\begin{equation*} \IC^-_{\Bla,\Bmu}(t) = \sum_{i \ge 0}\dim \SH^{2i}_z\IC(\ol X_{\Bla}, \Ql)t^i. \end{equation*}
The following result was proved in [S4].
\begin{prop}[{[S4, Prop. 6.8]}] Let $\Bla, \Bmu \in \SP_{n,r}$, and assume that $\Bmu$ is as in 3.5. \begin{enumerate} \item Assume that $z \in X^F_{\Bmu}$. Then $\SH^i_z\IC(\ol X_{\Bla}, \Ql) = 0$ if $i$ is odd, and the eigenvalues of $\f_{\Bla}$ on $\SH^{2i}_z\IC(\ol X_{\Bla}, \Ql)$ are $q^i$. In particular, $\x_{L_{\Bla}}(z) = \IC^-_{\Bla, \Bmu}(q)$. \item $\wt K^-_{\Bla, \Bmu}(t) = t^{a(\Bla)}\IC^-_{\Bla, \Bmu}(t^r)$. \end{enumerate} \end{prop}
As a corollary, we have the following result, which is a generalization of [AH, Prop. 5.8] (see also [LS, Prop. 3.2]).
\begin{cor} Assume that $\Bmu$ is as in 3.5. \begin{enumerate} \item There exists a polynomial $g_{\Bnu}^{\Bmu}(t) \in \BZ[t]$ such that $g_{\Bnu}^{\Bmu}(q)$ coincides with the value at $t = q$ of $g^{\Bmu}_{\Bnu}(t)$. \item We have \begin{equation*} \tag{3.7.1} \wt K^-_{\Bla, \Bmu}(t) = t^{a(\Bla)-rn(\Bla)}
\sum_{\Bnu \trleq \Bla}g^{\Bmu}_{\Bnu}(t^r)
\wt K_{\la^{(1)}, \nu^{(1)}}(t^r)\cdots \wt K_{\la^{(r)}, \nu^{(r)}}(t^r). \end{equation*} \end{enumerate} \end{cor}
\begin{proof} By Proposition 3.6 (i) and Proposition 3.3, we have
\begin{equation*} \tag{3.7.2} \IC^-_{\Bla, \Bmu}(q) = q^{-n(\Bla)}\sum_{\Bnu \trleq \Bla}g_{\Bnu}^{\Bmu}(q)
\wt K_{\la^{(1)},\nu^{(1)}}(q)\cdots \wt K_{\la^{(r)}, \nu^{(r)}}(q). \end{equation*} By fixing $\Bmu$, we consider two sets of functions $\{ \IC^-_{\Bla, \Bmu}(q) \mid \Bla \in \SP_{n,r} \}$ and $\{ g^{\Bmu}_{\Bnu}(q) \mid \Bnu \in \SP_{n,r} \}$. If we notice that $\wt K_{\la^{(1)}, \nu^{(1)}}(q)\cdots \wt K_{\la^{(r)}, \nu^{(r)}}(q)
= q^{n(\Bla)}$ for $\Bnu = \Bla$, (3.7.2) shows that the transition matrix between those two sets is unitriangular. Hence $g^{\Bmu}_{\Bnu}(q)$ is determined from $\IC^-_{\Bla, \Bmu}(q)$, and a similar formula makes sense if we replace $q$ by $t$. This implies (i). (ii) now follows from (3.7.2) by replacing $q$ by $t$. \end{proof}
\para{3.8.} In what follows, we assume that $\Bmu$ is of the form $\Bmu = (-, \dots, -, \xi)$ with $\xi \in \SP_n$. In this case, $g^{\Bmu}_{\Bnu}(t)$ coincides with the polynomial $g^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t)$ obtained from $G^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(\Fo)$ discussed in [M, II, 2]. On the other hand, we define a polynomial $f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t)$ by \begin{equation*} \tag{3.8.1} P_{\nu^{(1)}}(y;t)\cdots P_{\nu^{(r)}}(y;t) =
\sum_{\xi \in \SP_n}f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t)P_{\xi}(y;t). \end{equation*} In the case where $r = 2$, $g^{\xi}_{\nu^{(1)}, \nu^{(2)}}(t)$ coincides with the Hall polynomial, and a simple formula relating it with $f^{\xi}_{\nu^{(1)}, \nu^{(2)}}(t)$ is known ([M, III (3.6)]). In the general case, we also have a formula
\begin{equation*} \tag{3.8.2} g^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t) = t^{n(\xi)- n(\Bnu)}
f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t\iv). \end{equation*} The proof is easily reduced to [M, III (3.6)]. \par For partitions $\la, \nu^{(1)}, \dots, \nu^{(r)}$, we define an integer $c^{\la}_{\nu^{(1)}, \dots, \nu^{(r)}}$ by
\begin{equation*} s_{\nu^{(1)}}\cdots s_{\nu^{(r)}} = \sum_{\la}c^{\la}_{\nu^{(1)},\dots, \nu^{(r)}}s_{\la}. \end{equation*} In the case where $r = 2$, $c^{\la}_{\nu^{(1)},\nu^{(2)}}$ coincides with the Littlewood-Richardson coefficient. \par For $\Bla \in \SP_{n,r}$, put \begin{equation*} \tag{3.8.3}
b(\Bla) = a(\Bla) - r\cdot n(\Bla) = |\la^{(2)}| + 2|\la^{(3)}| + \cdots + (r-1)|\la^{(r)}|. \end{equation*} The following lemma is a generalization of [LS, Lemma 3.4].
\begin{lem} Let $\Bla, \Bmu \in \SP_{n,r}$, and assume that $\Bmu = (-, \dots, -, \xi)$. Then we have
\begin{align*} \tag{3.9.1} K^-_{\Bla, \Bmu}(t) &= t^{b(\Bmu) - b(\Bla)}
\sum_{\Bnu \trleq \Bla}
f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t^{r})
K_{\la^{(1)}, \nu^{(1)}}(t^r)\cdots K_{\la^{(r)}, \nu^{(r)}}(t^r), \\ \tag{3.9.2} K^-_{\Bla, \Bmu}(t) &=
t^{b(\Bmu) - b(\Bla)}
\sum_{\eta \in \SP_n}c^{\eta}_{\la^{(1)}, \dots, \la^{(r)}}K_{\eta, \xi}(t^r). \end{align*} \end{lem}
\begin{proof} The formula (3.7.1) can be rewritten as
\begin{equation*} \tag{3.9.3} K^-_{\Bla, \Bmu}(t) = t^{a(\Bmu) - a(\Bla) + rn(\Bla)}\sum_{\Bnu \trleq \Bla}t^{-rn(\Bnu)}
g^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t^{-r})
K_{\la^{(1)}, \nu^{(1)}}(t^r)\cdots K_{\la^{(r)}, \nu^{(r)}}(t^r). \end{equation*} Substituting (3.8.2) into (3.9.3), we obtain (3.9.1). Next we show (3.9.2). One can write
\begin{equation*} s_{\la^{(i)}}(y) = \sum_{\nu^{(i)}}K_{\la^{(i)}, \nu^{(i)}}(t)P_{\nu^{(i)}}(y;t). \end{equation*} Hence
\begin{align*} \tag{3.9.4} s_{\la^{(1)}}(y)\cdots s_{\la^{(r)}}(y) &= \sum_{\Bnu \in \SP_{n,r}}
K_{\la^{(1)}, \nu^{(1)}}(t)\cdots K_{\la^{(r)}, \nu^{(r)}}(t)
P_{\nu^{(1)}}(y;t)\cdots P_{\nu^{(r)}}(y;t) \\
&= \sum_{\Bnu \in \SP_{n,r}}\sum_{\xi \in \SP_n}f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t)
K_{\la^{(1)},\nu^{(1)}}(t)\cdots K_{\la^{(r)},\nu^{(r)}}(t)P_{\xi}(y;t). \end{align*} On the other hand,
\begin{align*} \tag{3.9.5} s_{\la^{(1)}}(y)\cdots s_{\la^{(r)}}(y) &= \sum_{\eta \in \SP_n}
c^{\eta}_{\la^{(1)}, \dots, \la^{(r)}}s_{\eta}(y) \\
&= \sum_{\eta \in \SP_n}c^{\eta}_{\la^{(1)}, \dots, \la^{(r)}}
\sum_{\xi \in \SP_n} K_{\eta, \xi}(t)P_{\xi}(y;t). \end{align*} By comparing (3.9.4) and (3.9.5), we have an equality for each $\xi \in \SP_n$,
\begin{equation*} \sum_{\eta \in \SP_n}c^{\eta}_{\la^{(1)}, \dots, \la^{(r)}}K_{\eta, \xi}(t)
= \sum_{\Bnu \in \SP_{n,r}}f^{\xi}_{\nu^{(1)}, \dots, \nu^{(r)}}(t)
K_{\la^{(1)}, \nu^{(1)}}(t)\cdots K_{\la^{(r)}, \nu^{(r)}}(t). \end{equation*} Combining this with (3.9.1), we obtain (3.9.2). The lemma is proved. \end{proof}
\para{3.10.} Let $\e' = \la' - \th', \e'' = \la'' - \th''$ be skew diagrams, where $\th' \subset \la', \th'' \subset \la''$ are partitions. We define a new skew diagram $\e'*\e'' = \la - \th$ as follows: write the partitions $\la', \la''$ as $\la' = (\la'_1, \dots, \la'_{k'}), \la'' = (\la''_1, \dots, \la''_{k''})$ with $\la'_{k'} > 0, \la''_{k''} > 0$. Put $a = \la_1''$. We define a partition $\la = (\la_1, \dots, \la_{k' + k''})$ by
\begin{equation*} \la_i = \begin{cases}
\la'_i + a &\quad\text{ for } 1 \le i \le k', \\
\la''_{i-k'} &\quad\text{ for } k' + 1 \le i \le k' + k''.
\end{cases} \end{equation*} Write partitions $\th', \th''$ as $\th' = (\th'_1, \dots, \th'_{k'}), \th'' = (\th''_1, \dots, \th''_{k''})$ with $\th'_{k'} \ge 0$, $\th''_{k''} \ge 0$. We define a partition $\th = (\th_1, \dots, \th_{k' + k''})$, in a similar way as above, by
\begin{equation*} \th_i = \begin{cases}
\th'_i + a &\quad\text{ for } 1 \le i \le k', \\
\th''_{i-k'} &\quad\text{ for } k' + 1 \le i \le k' + k''.
\end{cases} \end{equation*} We have $\th \subset \la$, and the skew diagram $\e'*\e'' = \la - \th$ can be defined. \par For $\la, \mu \in \SP_n$, let $SST(\la, \mu)$ be the set of semistandard tableaux of shape $\la$ and weight $\mu$. Let $\Bla \in \SP_{n,r}$. An $r$-tuple $T = (T^{(1)}, \dots, T^{(r)})$ is called a semistandard tableau of shape $\Bla$ if $T^{(i)}$ is a semistandard tableau of shape $\la^{(i)}$ with respect to the letters $\{ 1, \dots, n\}$. We denote by $SST(\Bla)$ the set of semistandard tableaux of shape $\Bla$. For $\Bla \in \SP_{n,r}$, let $\wt\Bla$ be the skew diagram $\la^{(1)}*\la^{(2)}*\cdots *\la^{(r)}$. Then $T \in SST(\Bla)$ is regarded as a usual semistandard tableau $\wt T$ associated to the skew diagram $\wt\Bla$. Assume $\pi \in \SP_n$. We say that $T \in SST(\Bla)$ has weight $\pi$ if the corresponding tableau $\wt T$ has shape $\wt\Bla$ and weight $\pi$. We denote by $SST(\Bla, \pi)$ the set of semistandard tableaux of shape $\Bla$ and weight $\pi$.
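The construction of $\e' * \e''$ above is mechanical enough to sketch in code. The following helper (our own illustration, not part of the paper; partitions are encoded as lists of row lengths) computes $(\la, \th)$ from $(\la', \th')$ and $(\la'', \th'')$:

```python
def star(lam1, th1, lam2, th2):
    """Concatenation e' * e'' = lam - th of the skew diagrams
    e' = lam1 - th1 and e'' = lam2 - th2, as in 3.10: shift lam1, th1
    to the right by a = lam2[0] columns and place lam2, th2 below."""
    a = lam2[0]
    # pad the theta's with zeros up to the lengths k', k'' of lam1, lam2
    th1 = list(th1) + [0] * (len(lam1) - len(th1))
    th2 = list(th2) + [0] * (len(lam2) - len(th2))
    lam = [row + a for row in lam1] + list(lam2)
    th = [row + a for row in th1] + list(th2)
    return lam, th

# example: lam' = (2,1), th' = (1), lam'' = (2,2), th'' = ()
# gives lam = (4,3,2,2), th = (3,2,0,0)
```

Iterating this operation over $\la^{(1)}, \dots, \la^{(r)}$ (each with empty $\th^{(i)}$) produces the skew diagram $\wt\Bla$ used in the definition of $SST(\Bla, \pi)$.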
\para{3.11.} In [M, I, (9.4)], a bijective map $\varTheta$
\begin{equation*} \tag{3.11.1} \varTheta : SST(\wt\Bla, \pi) \,\raise2pt\hbox{$\underrightarrow{\sim}$}\, \coprod_{\nu \in \SP_n}(SST^0(\wt\Bla, \nu) \times SST(\nu, \pi)) \end{equation*} was constructed, where $SST^0(\wt\Bla, \nu)$ is the set of tableaux $T$ such that the associated word $w(T)$ is a lattice permutation (see [M, I, 9] for the definition). Under the identification $SST(\wt\Bla, \pi) \simeq SST(\Bla, \pi)$, the subset $SST^0(\Bla,\nu)$ of $SST(\Bla, \nu)$ is also defined. Then we can regard $\varTheta$ as a bijection with respect to the set $SST(\Bla, \pi)$ (and $SST^0(\Bla, \nu)$).
\par In the case where $r = 2$, it is shown in [LS, Cor. 3.9] that $|SST^0(\Bla, \nu)|$ coincides with the Littlewood-Richardson coefficient $c^{\nu}_{\la^{(1)}, \la^{(2)}}$. A similar argument can be applied also to the general case, and we have
\begin{cor} Assume that $\Bla \in \SP_{n,r}, \nu \in \SP_n$. Then we have
\begin{equation*}
|SST^0(\Bla, \nu)| = c^{\nu}_{\la^{(1)}, \dots, \la^{(r)}}. \end{equation*} \end{cor}
\para{3.13.} For a semistandard tableau $S$, the charge $c(S)$ is defined as in [M, III, 6]. The Lascoux-Sch\"utzenberger theorem ([M, III, (6.5)]) gives a combinatorial description of Kostka polynomials $K_{\la,\mu}(t)$ in terms of semistandard tableaux:
\begin{equation*} \tag{3.13.1} K_{\la,\mu}(t) = \sum_{S \in SST(\la, \mu)}t^{c(S)}. \end{equation*}
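For small partitions, (3.13.1) can be checked by direct enumeration. The sketch below (our own helper code, not from the paper) enumerates $SST(\la, \mu)$ by brute force and computes the charge of the reading word by the standard-subword procedure of [M, III, 6]; for instance, it recovers $K_{(2,1),(1,1,1)}(t) = t + t^2$.

```python
from itertools import permutations

def ssyt(shape, weight):
    """Brute-force enumeration of semistandard tableaux of given shape and weight."""
    content = [i + 1 for i, m in enumerate(weight) for _ in range(m)]
    for perm in set(permutations(content)):
        rows, k = [], 0
        for r in shape:
            rows.append(perm[k:k + r])
            k += r
        rows_ok = all(row[j] <= row[j + 1] for row in rows for j in range(len(row) - 1))
        cols_ok = all(rows[i][j] < rows[i + 1][j]
                      for i in range(len(rows) - 1) for j in range(len(rows[i + 1])))
        if rows_ok and cols_ok:
            yield rows

def charge(word):
    """Charge of a word of partition weight, via standard subwords ([M, III, 6])."""
    word, total = list(word), 0
    while word:
        m, pos, chosen = max(word), len(word), []
        for letter in range(1, m + 1):
            # first occurrence of `letter` to the left of pos, scanning cyclically
            cand = ([p for p in range(pos - 1, -1, -1) if word[p] == letter] or
                    [p for p in range(len(word) - 1, pos - 1, -1) if word[p] == letter])
            pos = cand[0]
            chosen.append(pos)
        sub = [word[p] for p in sorted(chosen)]   # standard subword in original order
        place = {v: j for j, v in enumerate(sub)}
        idx = 0
        for r in range(1, m):                     # index(1) = 0
            if place[r + 1] > place[r]:           # r + 1 lies to the right of r
                idx += 1
            total += idx
        word = [word[p] for p in range(len(word)) if p not in set(chosen)]
    return total

def kostka_charges(shape, weight):
    """Sorted charges of SST(shape, weight); K_{la,mu}(t) = sum of t^c over them."""
    # reading word of T: rows left to right, bottom row first
    return sorted(charge([x for row in reversed(T) for x in row])
                  for T in ssyt(shape, weight))
```

For example, `kostka_charges((2, 1), (1, 1, 1))` returns `[1, 2]`, i.e. $t + t^2$, and `kostka_charges((2, 2), (1, 1, 1, 1))` returns `[2, 4]`, i.e. $t^2 + t^4$, in agreement with (3.13.1).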
In the case where $r = 2$, a similar formula was proved for $K_{\Bla, \Bmu}(t)$ in [LS, Thm. 3.12], in the special case where $\Bmu = (-,\mu'')$. Here we consider $K_{\Bla, \Bmu}(t)$ for general $r$. Assume that $\Bla \in \SP_{n,r}$ and $\xi \in \SP_n$. For $T \in SST(\Bla, \xi)$, we write $\varTheta(T) = (D, S)$ with $S \in SST(\nu, \xi)$ for some $\nu$. We define the charge $c(T)$ of $T$ by $c(T) = c(S)$. We have the following theorem. Note that the proof is quite similar to [LS].
\begin{thm} Let $\Bla, \Bmu \in \SP_{n,r}$, and assume that $\Bmu = (-, \dots, -, \xi)$. Then
\begin{equation*} K^-_{\Bla,\Bmu}(t) = t^{b(\Bmu) - b(\Bla)}\sum_{T \in SST(\Bla, \xi)}t^{r\cdot c(T)}. \end{equation*} \end{thm}
\begin{proof} We define a map $\Psi : SST(\Bla, \xi) \to \coprod_{\nu \in \SP_n}SST(\nu,\xi)$ by $T \mapsto S$, where $\varTheta(T) = (D, S)$. Then by Corollary 3.12, for each $S \in SST(\nu, \xi)$, the set $\Psi\iv(S)$ has cardinality $c^{\nu}_{\la^{(1)}, \dots, \la^{(r)}}$, and by definition, any $T \in \Psi\iv(S)$ has charge $c(T) = c(S)$. Hence
\begin{align*} \sum_{T \in SST(\Bla, \xi)}t^{c(T)} &= \sum_{\nu \in \SP_n}
\sum_{S \in SST(\nu, \xi)}
c^{\nu}_{\la^{(1)}, \dots, \la^{(r)}}t^{c(S)} \\
&= \sum_{\nu \in \SP_n}c^{\nu}_{\la^{(1)}, \dots, \la^{(r)}}K_{\nu, \xi}(t). \end{align*} The last equality follows from (3.13.1). The theorem now follows from (3.9.2). \end{proof}
\begin{cor} Under the assumption of Theorem 3.14, we have
\begin{equation*}
K^-_{\Bla, \Bmu}(1) = |SST(\Bla, \xi)|. \end{equation*} \end{cor}
\para{3.16.} In the rest of this section, we shall give an alternative description of the polynomial $g^{\Bmu}_{\Bnu}(t)$ in the case where $\Bmu = (-, \dots, -, \xi)$. For $\Bnu \in \SP_{n,r}$, put $R_{\Bnu}(x;t) = P_{\nu^{(1)}}(x^{(1)};t^r)\cdots P_{\nu^{(r)}}(x^{(r)};t^r)$. Then $\{ R_{\Bnu} \mid \Bnu \in \SP_{n,r} \}$ gives a basis of $\Xi^n[t]$. We define functions $h^{\Bmu}_{\Bnu}(t) \in \BQ(t)$ by the condition that
\begin{equation*} \tag{3.16.1} R_{\Bnu}(x;t) = \sum_{\Bmu \in \SP_{n,r}}h^{\Bmu}_{\Bnu}(t)P^-_{\Bmu}(x;t). \end{equation*}
The following formula is a generalization of Proposition 4.2 in [LS].
\begin{prop} Assume that $\Bmu = (-,\dots,-,\xi)$. Then
\begin{equation*} h^{\Bmu}_{\Bnu}(t) = t^{a(\Bmu) - a(\Bnu)}g^{\Bmu}_{\Bnu}(t^{-r}). \end{equation*} \end{prop}
\begin{proof} The proof is quite similar to that of [LS, Prop. 4.2]. For $\Bla \in \SP_{n,r}$, we have
\begin{align*} s_{\Bla}(x) &= s_{\la^{(1)}}(x^{(1)})\cdots s_{\la^{(r)}}(x^{(r)}) \\
&= \prod_{i=1}^r\sum_{\nu^{(i)}}K_{\la^{(i)},\nu^{(i)}}(t^r)P_{\nu^{(i)}}(x^{(i)};t^r) \\
&= \sum_{\Bnu}K_{\la^{(1)},\nu^{(1)}}(t^r)\cdots K_{\la^{(r)},\nu^{(r)}}(t^r)
\sum_{\Bmu \in \SP_{n,r}}h^{\Bmu}_{\Bnu}(t)P^-_{\Bmu}(x;t) \\
&= \sum_{\Bmu \in \SP_{n,r}}\biggl(
\sum_{\Bnu}K_{\la^{(1)},\nu^{(1)}}(t^r)\cdots K_{\la^{(r)}, \nu^{(r)}}(t^r)
h^{\Bmu}_{\Bnu}(t)\biggr)P^-_{\Bmu}(x;t). \end{align*} Since $s_{\Bla}(x) = \sum_{\Bmu \in \SP_{n,r}}K^-_{\Bla,\Bmu}(t)P_{\Bmu}^-(x;t)$, by comparing the coefficients of $P^-_{\Bmu}(x;t)$, we have
\begin{equation*} \tag{3.17.1} K^-_{\Bla,\Bmu}(t) = \sum_{\Bnu \in \SP_{n,r}}h^{\Bmu}_{\Bnu}(t)
K_{\la^{(1)}, \nu^{(1)}}(t^r)\cdots K_{\la^{(r)}, \nu^{(r)}}(t^r). \end{equation*}
Now assume that $\Bmu = (-,\dots,-,\xi)$. If we notice that
$K_{\la^{(i)},\nu^{(i)}}(t^r) \ne 0$ only when $|\la^{(i)}| = |\nu^{(i)}|$, (3.9.3) implies that
\begin{equation*} \tag{3.17.2} K^-_{\Bla,\Bmu}(t) = \sum_{\Bnu \in \SP_{n,r}}t^{a(\Bmu) - a(\Bnu)}g^{\Bmu}_{\Bnu}(t^{-r})
K_{\la^{(1)}, \nu^{(1)}}(t^r)\cdots K_{\la^{(r)},\nu^{(r)}}(t^r). \end{equation*} Since $(K_{\la^{(1)},\nu^{(1)}}(t^r)\cdots K_{\la^{(r)},\nu^{(r)}}(t^r))_{\Bla, \Bnu \in \SP_{n,r}}$ is a unitriangular matrix, the proposition follows by comparing (3.17.1) and (3.17.2). \end{proof}
\par
\noindent T. Shoji \\ Department of Mathematics, Tongji University \\ 1239 Siping Road, Shanghai 200092, P. R. China \\
E-mail: \verb|[email protected]|
\end{document}
\begin{document}
\title{Parameterized Algorithms for String Matching to DAGs: Funnels and Beyond\thanks{This work was partially funded by the Academy of Finland
(grants No.\ 352821, 328877). I am very grateful to Alexandru~I.~Tomescu for initial discussions on funnel algorithms, to Veli Mäkinen for discussions on applying KMP in DAGs, to Massimo Equi and Nicola Rizzo for the useful discussions, and to the anonymous reviewers for their useful comments.}}
\maketitle
\begin{abstract}
The problem of String Matching to Labeled Graphs (SMLG) asks to find all the paths in a labeled graph $G = (V, E)$ whose spellings match that of an input string $S \in \Sigma^m$. SMLG can be solved in quadratic $O(m|E|)$ time~[Amir et~al., JALG], which was proven to be optimal by a recent lower bound conditioned on SETH~[Equi et~al., ICALP 2019]. The lower bound states that no strongly subquadratic time algorithm exists, even if restricted to directed acyclic graphs (DAGs).
In this work we present the first parameterized algorithms for SMLG in DAGs. Our parameters capture the topological structure of $G$. All our results are derived from a generalization of the Knuth-Morris-Pratt algorithm~[Park and Kim, CPM 1995] optimized to work in time proportional to the number of prefix-incomparable matches.
To obtain the parameterization in the topological structure of $G$, we first study a special class of DAGs called funnels~[Millani et~al., JCO] and generalize them to $k$-funnels and the class $\mathcal{ST}_k$. We present several novel characterizations and algorithmic contributions on both funnels and their generalizations. \end{abstract}
\section{Introduction}
Given a labeled graph $G = (V, E)$ (vertices labeled with characters) and a string $S$ of length $m$ over an alphabet $\Sigma$ of size $\sigma$, the problem of \emph{String Matching to Labeled Graph (SMLG)} asks to find all paths of $G$ spelling $S$ in their characters; such paths are known as \emph{occurrences} or \emph{matches} of $S$ in $G$. This problem is a generalization of the classical \emph{string matching (SM)} to a text $T$ of length $n$, which can be encoded as an SMLG instance with a path labeled with $T$. Labeled graphs are present in many areas such as information retrieval~\cite{conklin1987hypertext,nielsen1990hypertext,baeza1999modern}, graph databases~\cite{angles2008survey,angles2017foundations,perez2009semantics,barcelo2013querying} and bioinformatics~\cite{computational2018computational}, SMLG being a primitive operation to locate information on them.
It is a textbook result~\cite{aho1974design,cormen2022introduction,sedgewick2011algorithms} that the classical SM can be solved in linear $O(n+m)$ time. For example, the well-known \emph{Knuth-Morris-Pratt} algorithm (KMP)~\cite{knuth1977fast} preprocesses $S$ and then scans $T$ while maintaining the longest matching prefix of $S$. However, for SMLG a recent result~\cite{backurs2016regular,equi2019complexity} shows that there is no strongly subquadratic $O(m^{1-\epsilon}|E|)$ or $O(m|E|^{1-\epsilon})$ time algorithm unless the \emph{strong exponential time hypothesis (SETH)} fails, and the most efficient current solutions~\cite{amir2000pattern,navarro2000improved,rautiainen2017aligning,jain2020complexity} match this bound, thus being optimal in this sense. Moreover, these algorithms solve the approximate version of SMLG (errors in $S$ only), showing that both problems are equally hard under SETH, which is not the case for SM~\cite{DBLP:journals/siamcomp/BackursI18}.\\
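For concreteness, the KMP scan described above can be sketched as follows (a minimal illustration, not the paper's implementation; the function names are ours):

```python
def failure(S):
    """Failure function: f[i] = length of the longest border of S[:i]."""
    m = len(S)
    f, k = [0] * (m + 1), 0
    for i in range(2, m + 1):
        while k > 0 and S[i - 1] != S[k]:
            k = f[k]          # fall back to the next shorter border
        if S[i - 1] == S[k]:
            k += 1
        f[i] = k
    return f

def kmp_occurrences(S, T):
    """Start positions of occurrences of S in T, in O(|S| + |T|) time."""
    f, k, out = failure(S), 0, []
    for j, c in enumerate(T):
        while k > 0 and c != S[k]:
            k = f[k]
        if c == S[k]:
            k += 1            # extend the longest matching prefix
        if k == len(S):
            out.append(j - k + 1)
            k = f[k]          # continue with the longest border
    return out
```

The invariant is exactly the one quoted above: after reading $T[1..j]$, the variable `k` is the length of the longest prefix of $S$ that is a suffix of $T[1..j]$.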
\par{\textbf{The history of (exact) SMLG.}} SMLG can be traced back to the publications of Manber and Wu~\cite{manber1992approximate} and Dubiner et~al.~\cite{dubiner1994faster} where the problem is defined for the first time, and solved in linear time on directed trees by using an extension of KMP. Later Akutsu~\cite{akutsu1993linear} used a sampling on $V$ and a suffix tree of $S$ to solve the problem on (undirected) trees in linear time and Park and Kim~\cite{park1995string} obtained a $O(N + m|E|)$\footnote{$N$ is the total length of the labels in $G$ in a more general version of the problem where vertices are labeled with strings.} time algorithm for directed acyclic graphs (DAGs) by extending KMP on a topological ordering of $G$ (we call this the \emph{DAG algorithm}). Finally, Amir et al.~\cite{amir2000pattern} showed an algorithm with the same running time for general graphs with a simple and elegant idea that was later used to solve the approximate version~\cite{rautiainen2017aligning,jain2020complexity}, and that has been recently generalized as the labeled product~\cite{rizzo2021labeled}. The stricter lower bound of Equi et~al.~\cite{equi2019complexity} shows that the problem remains quadratic (under SETH) even if the problem is restricted to deterministic DAGs with vertices of two kinds: indegree at most $1$ and outdegree $2$, and indegree $2$ and outdegree at most $1$~\cite[Theorem 1]{equi2019complexity}, or if restricted to undirected graphs with degree at most $2$~\cite[Theorem 2]{equi2019complexity}. Furthermore, they show how to solve the remaining cases (in/out-trees whose roots can be connected by a cycle) in linear time by an extension of KMP. Later they showed~\cite{equi2021graphs} that the quadratic lower bound holds even when allowing polynomial indexing time.\\
\par{\textbf{An (important) special case.}} Gagie et~al.~\cite{gagie2017wheeler} introduced \emph{Wheeler graphs} as a generalization of prefix-sortable techniques~\cite{ferragina2000opportunistic,grossi2000compressed,ferragina2005structuring,bowe2012succinct,siren2014indexing} applied to labeled graphs. On Wheeler graphs, SMLG can be solved in time $O(m\log{|E|})$~\cite{gagie2017wheeler} after indexing; however, it was shown that the languages recognized by Wheeler graphs (intuitively, the sets of strings they encode) are very restrictive~\cite{alanko2020regular,alanko2021wheeler}. Later, Cotumaccio and Prezza~\cite{cotumaccio2021indexing} generalized Wheeler graphs to $p$\emph{-sortable graphs}, capturing every labeled graph by using the parameter $p$: the minimum width of a colex relation over the vertices of the graph. On $p$-sortable graphs, SMLG can be solved in time $O(mp^2\log{(p\sigma)})$ after indexing; however, the problems of deciding whether a labeled graph is Wheeler or $p$-sortable are NP-hard~\cite{DBLP:conf/esa/GibneyT19}. In a recent work, Cotumaccio~\cite{cotumaccio2022graphs} defined \emph{$q$-sortable graphs} as a relaxation of $p$-sortable graphs ($q < p$), which can be indexed in $O(|E|^2+|V|^{5/2})$ time but still solve SMLG in time $O(mq^2\log{(q\sigma)})$.
\subsection{Our results} We present parameterized algorithms for SMLG in DAGs. Our parameters capture the topological structure of $G$. These results are related to the line of research ``FPT inside P''~\cite{giannopoulou2017polynomial,caceres2021safety,fomin2018fully,koana2021data,abboud2016approximation,caceres2021a,caceres2022sparsifying,caceres2022minimum,makinen2019sparse,ma2022graphchainer} of finding parameterizations for polynomially-solvable problems.
All our results are derived from a new version of the DAG algorithm~\cite{park1995string}, which we present in \Cref{sec:dag-incomparable}. Our algorithm is optimized to only carry \emph{prefix-incomparable} matches (\Cref{def:prefix-incomparable}) and process them in time proportional to their size (\Cref{lemma:parameterized-vertex} further optimized in \Cref{cor:sorting-parameterized-vertex,cor:linear-parameterized-vertex}). Prefix-incomparable sets suffice to capture all prefix matches of $S$ ending in a vertex $v$ (\Cref{def:bv-piv}). By noting that the size of prefix-incomparable sets is upper-bounded by the structure of $S$ (\Cref{lemma:bounded-size-prefix-incomparable} in \Cref{sec:the-pattern}), we obtain a parameterized algorithm (\Cref{thm:dag-algorithm-pattern} in \Cref{sec:the-pattern}) that beats the DAG algorithm on periodic strings.
To obtain the parameterization on the topological structure of the graph we first study and generalize a special class of DAGs called \emph{funnels} in \Cref{sec:funnels-and-beyond}. \\
\par{\textbf{Funnels.}} Funnels are DAGs whose source-to-sink paths contain a \emph{private} edge that is not used by any other source-to-sink path. Although more complex than in/out-forests, their simplicity has made it possible to efficiently solve problems that remain hard even when the input is restricted to DAGs, including: DAG partitioning~\cite{millani2020efficient}, $k$-linkage~\cite{millani2020efficient}, minimum flow decomposition~\cite{khan2022safety,khan2022improving,khan2022optimizing}, a variation of network inhibition~\cite{lehmann2017thecomp} and SMLG (this work). Millani et~al.~\cite{millani2020efficient} showed that funnels can also be characterized by a partition into an in-forest plus an out-forest (the \emph{vertex partition} characterization), or by the absence of certain forbidden paths (the \emph{forbidden path} characterization), and proposed how to find a minimal forbidden path in quadratic $O(|V|(|V|+|E|))$ time and a recognition algorithm running in $O(|V|+|E|)$ time. They used the latter to develop branching algorithms for the NP-hard problems of vertex and edge deletion distance to a funnel, obtaining a fixed-parameter quadratic solution. Analogous to the minimum feedback set problem~\cite{karp1972reducibility}, the vertex (edge) deletion distance to a funnel problem asks to find the minimum number of vertices (edges) that need to be removed from a graph so that the resulting graph is a funnel.\\
We propose three (new) linear time recognition algorithms of funnels (\Cref{sec:three-recognition}), each based on a different characterization, improving the running time of the branching algorithm to parameterized linear time (see \Cref{sec:linear-distance}). We generalize funnels to $k$-funnels by allowing private edges to be shared by at most $k$ source-to-sink paths (\Cref{def:kprivate,def:kfunnel}). We show how to recognize them in linear time (\Cref{thm:k-funnel-linear-recognition}) and find the minimum $k$ for which a DAG is a $k$-funnel (\Cref{cor:exponential-min-k,lemma:linear-min-k}). We then further generalize $k$-funnels to the class of DAGs $\mathcal{ST}_k$ (\Cref{def:stk,lemma:subset-stk}), which (unlike $k$-funnels for $k>1$, see \Cref{fig:strict-containment}) can be characterized (and efficiently recognized, see \Cref{thm:stk-partitioning}) by a partition into a graph of the class $\mathcal{S}_k$ (generalization of out-forest, see \Cref{def:sk-tk}) and a graph of the class $\mathcal{T}_k$ (generalization of in-forest, see \Cref{def:sk-tk}).
We obtain our parameterized results in \Cref{sec:the-dag} by noting that, analogous to the fact that in KMP we only need the longest prefix match, in the DAG algorithm we can bound the size of the prefix-incomparable sets by the number of paths from a source or the number of paths to a sink, $\mu_s(v)$ and $\mu_t(v)$, respectively (\Cref{lemma:prefix-incomparable-topo-bound}).
\begin{restatable}{theorem}{paramAlgTopoOne}\label{thm:param-alg-top-1}
Let $G = (V, E)$ be a DAG, $\Sigma$ a finite ($\sigma = |\Sigma|$) alphabet, $\ell: V \rightarrow \Sigma$ a labeling function and $S \in \Sigma^m$ a string. We can decide whether $S$ has a match in $G,\ell$ in time $O((|V|+|E|)k + \sigma m)$, where $k = \min(\max_{v\in V} \mu_s(v), \max_{v\in V} \mu_t(v))$.
\end{restatable}
In particular, this implies linear time algorithms for out-forests and in-forests, and for every DAG in $\mathcal{S}_k$ or $\mathcal{T}_k$ for constant $k$. Finally, we solve the problem on DAGs in $\mathcal{ST}_k$ (thus also in $k$-funnels) by using the vertex partition characterization of $\mathcal{ST}_k$ (\Cref{thm:stk-partitioning}), solving the matches in each part separately with \Cref{thm:param-alg-top-1}, and resolving the matches crossing from one part to the other with a precomputed data structure (\Cref{lemma:prefix-suffix}).
\begin{restatable}{theorem}{paramAlgTopoTwo}\label{thm:param-alg-top-2}
Let $G = (V, E)$ be a DAG, $\ell: V \rightarrow \Sigma$ a labeling function and $S \in \Sigma^m$ a string. We can decide whether $S$ has a match in $G,\ell$ in time $O((|V|+|E|)k^2 + m^2)$, where $k = \max_{v\in V}(\min(\mu_s(v), \mu_t(v)))$.
\end{restatable}
\section{Preliminaries}
We work with a (directed) graph $G = (V, E)$, a function $\ell: V \rightarrow \Sigma$ labeling the vertices of $G$ with characters from a finite alphabet $\Sigma$ of size $\sigma$, and a sequence $S[1..m] \in \Sigma^m$. \\
\par{\textbf{Graphs.}} A graph $S = (V_S, E_S)$ is said to be a \emph{subgraph} of $G$ if $V_S \subseteq V$ and $E_S \subseteq E$. If $V' \subseteq V$, then $G[V']$ is the subgraph \emph{induced by} $V'$, defined as $G[V'] = (V', \{(u, v) \in E ~:~ u,v \in V'\})$. We denote $G^r = (V, E^r)$ to be the \emph{reverse} of $G$ ($E^r = \{(v, u) \mid (u, v) \in E\}$). For a vertex $v\in V$ we denote by $N^{-}_{v}$ ($N^{+}_{v}$) the set of \emph{in(out)-neighbors} of $v$, and by $d^{-}_{v} = |N^{-}_{v}|$ ($d^{+}_{v} = |N^{+}_{v}|$) its \emph{in(out)degree}. A \emph{source (sink)} is a vertex with zero in(out)degree. \emph{Edge contraction} of $(u, v) \in E$ is the graph operation that removes $(u, v)$ and merges $u$ and $v$. A \emph{path} $P$ is a sequence $v_{1},\ldots,v_{|P|}$ of different vertices of $V$ such that $(v_{i}, v_{i+1}) \in E$ for every $i \in \{1,\ldots, |P|-1\}$. We say that $P$ is \emph{proper} if $|P| \ge 2$, a \emph{cycle} if $(v_{|P|}, v_{1}) \in E$, and \emph{source-to-sink} if $v_1$ is source and $v_{|P|}$ is a sink. We say that $u\in V$ ($e \in E$) \emph{reaches} $v\in V$ ($f \in E$) if there is a path from $u$ (the head of $e$) to $v$ (the tail of $f$). If $G$ does not have cycles it is called \emph{directed acyclic graph} (DAG). A \emph{topological ordering} of a DAG is a total order $v_{1}, \ldots, v_{|V|}$ of $V$ such that for every $(v_{i}, v_{j}) \in E$, $i < j$. It is known~\cite{kahn1962topological,tarjan1976edge} how to compute a topological ordering in $O(|V|+|E|)$ time, and we assume one ($v_{1}, \ldots , v_{|V|}$) is already computed if $G$ is a DAG\footnote{Our algorithms run in $\Omega(|V|+|E|)$ time.}. An \emph{out(in)-forest} is a DAG such that every vertex has in(out)degree at most one, if it has a unique source (sink) it is called an \emph{out(in)-tree}. The \emph{label} of a path $P = v_{1},\ldots,v_{|P|}$ is the sequence of the labels of its vertices, $\ell(P) = \ell(v_{1})\ldots\ell(v_{|P|})$. \\
\par{\textbf{Strings.}} We say that $S$ has a \emph{match} in $G,\ell$ if there is a path whose label is equal to $S$, every such path is an \emph{occurrence} of $S$ in $G,\ell$. We denote $S[i..j]$ (also $S[i]$ if $i=j$, and the empty string if $j<i$) to be the segment of $S$ between position $i$ and $j$ (both inclusive), we say that it is \emph{proper} if $i > 1$ or $j<m$, a \emph{prefix} if $i = 1$ and a
\emph{suffix} if $j = m$. We denote $S^r$ to be the reverse of $S$ ($S^r[i] = S[m-i+1]$ for $i \in \{1,\ldots, m\}$). A segment of $S$ is called a \emph{border} if it is a proper prefix and a proper suffix at the same time. The \emph{failure function} of $S$, $f_{S}: \{1,\ldots, m\} \rightarrow \{0,\ldots, m\}$ (just $f$ if $S$ is clear from the context), is such that $f_{S}(i)$ is the length of the longest border of $S[1..i]$. We also use $f_{S}$ to denote the in-tree $(\{0,\ldots, m\}, \{(i, f_{S}(i)) \mid i \in \{1,\ldots, m\}\})$, also known as the \emph{failure tree}~\cite{10.5555/314464.314675} of $S$. By definition, the lengths of all borders of $S[1..i]$ in decreasing order are $f_{S}(i), f^2_{S}(i), \ldots, 0$. The \emph{matching automaton} of $S$, $A_S: \{0, \ldots, m\} \times \Sigma \rightarrow \{0, \ldots, m\}$, is such that $A_S(i, a)$ is the length of the longest prefix of $S$ that is a suffix of $S[1..i]\cdot a$.
It is known how to compute $f_{S}$ in time $O(m)$~\cite{knuth1977fast} and $A_{S}$ in time $O(\sigma m)$~\cite{aho1974design,cormen2022introduction,sedgewick2011algorithms}, and we assume they are already computed\footnote{Our algorithms run in $\Omega(\sigma m)$ time.}.
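As a concrete (non-optimized) reference, the failure function and the matching automaton can be built with the standard recurrences; the sketch below is illustrative and not taken from the paper:

```python
def failure(S):
    """f[i] = length of the longest border of S[:i] (KMP preprocessing, O(m))."""
    f, k = [0] * (len(S) + 1), 0
    for i in range(2, len(S) + 1):
        while k > 0 and S[i - 1] != S[k]:
            k = f[k]
        if S[i - 1] == S[k]:
            k += 1
        f[i] = k
    return f

def matching_automaton(S, alphabet):
    """A[i][a] = length of the longest prefix of S that is a suffix of
    S[:i] + a, computed in O(sigma * m) time via the failure function."""
    m, f = len(S), failure(S)
    A = [dict() for _ in range(m + 1)]
    for a in alphabet:
        A[0][a] = 1 if m > 0 and S[0] == a else 0
    for i in range(1, m + 1):
        for a in alphabet:
            # either extend the current prefix, or inherit from its border
            A[i][a] = i + 1 if i < m and S[i] == a else A[f[i]][a]
    return A
```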
\section{The DAG algorithm on prefix-incomparable matches}\label{sec:dag-incomparable}
A key idea in our linear time parameterized algorithm is that of prefix-incomparable sets of the string $S$. We will show that one prefix-incomparable set per vertex suffices to capture all the matching information. See \Cref{fig:prefix-incomparable} for an example of these concepts.
\begin{definition}[Prefix-incomparable]\label{def:prefix-incomparable}
Let $S \in \Sigma^m$ be a string. We say that $i < j \in \{0, \ldots, m\}$ are prefix-incomparable (for $S$) if $S[1..i]$ is not a border of $S[1..j]$. We say that $B \subseteq \{0,\ldots, m\}$ is prefix-incomparable (for $S$) if for every $i < j\in B$, $i$ and $j$ are prefix-incomparable (for $S$).
\end{definition}
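Unwinding the definition, prefix-incomparability is a simple border test on the two prefixes; a brute-force check (illustrative only, the paper never computes it this way) is:

```python
def is_border(S, i, j):
    """True iff S[:i] is a border of S[:j], i.e. i < j and S[:i] is both a
    prefix and a suffix of S[:j]."""
    return i < j and S[:i] == S[j - i:j]

def prefix_incomparable(S, i, j):
    """i < j are prefix-incomparable for S iff S[:i] is not a border of S[:j]."""
    return not is_border(S, i, j)
```

For $S = \texttt{abab}$, the pair $(2, 4)$ is prefix-comparable (\texttt{ab} is a border of \texttt{abab}) while $(1, 2)$ is prefix-incomparable; note that $0$ is comparable with every $j > 0$, since the empty prefix is a border of everything.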
\begin{figure}\label{fig:prefix-incomparable}
\end{figure}
In our algorithm we will compute for each vertex $v$ a prefix-incomparable set representing all the prefixes of $S$ that match with a path ending in $v$. More precisely, if $B_v$ is the set of all the prefixes of $S$ that match with a path ending in $v$, then the algorithm will compute $PI_v \subseteq B_v$ such that $PI_v$ is prefix-incomparable and for every $i \in B_v$ there is a $j \in PI_v$ such that $i$ is an ancestor of $j$ in $f_S$. Such a set always exists and is unique: it corresponds to the leaves of $f_S[B_v]$. To obtain a linear time parameterized algorithm we show how to compute $PI_v$ from the sets $PI_u$, $u \in N^{-}_{v}$, in time parameterized by the size of these sets.
\begin{definition}[$B_v, PI_v$]\label{def:bv-piv}
Let $G = (V, E)$ be a DAG, $\ell: V \rightarrow \Sigma$ a labeling function, and $S \in \Sigma^m$ a string. For every $v \in V$ we define the sets:
\begin{itemize}
\item $B_v = \{i \in \{0,\ldots,m\} \mid \exists P \text{ path of } G\text{ ending in $v$ and } \ell(P) = S[1..i]\}$\footnote{Here we consider that the empty path always exists and its label is the empty string, thus $0\in B_v$.}
\item $PI_v \subseteq B_v$ as the unique prefix-incomparable set such that for every $i \in B_v$ there is a $j \in PI_v$ such that $i = j$ or $S[1..i]$ is a border of $S[1..j]$
\end{itemize}
\end{definition}
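On small instances the definition can be evaluated directly; the following brute-force sketch (our own helper, far from the parameterized algorithm of the next lemma) computes $B_v$ by extending matches along a topological order and then keeps the leaves of $f_S[B_v]$ as $PI_v$:

```python
def prefix_sets(G_in, label, S, topo):
    """Brute-force B_v and PI_v for a small DAG.

    G_in: dict mapping each vertex to the list of its in-neighbors;
    topo: the vertices in topological order; label: vertex labels."""
    B = {}
    for v in topo:
        ext = {0}                       # the empty path always exists
        for u in G_in[v]:
            ext |= B[u]
        # a prefix of length i extends to i+1 iff S[i+1] equals label(v)
        B[v] = {0} | {i + 1 for i in ext if i < len(S) and S[i] == label[v]}
    def border(i, j):                   # S[:i] is a border of S[:j]
        return i < j and S[:i] == S[j - i:j]
    # PI_v: elements of B_v that are not borders of a longer element of B_v
    PI = {v: {j for j in B[v] if not any(border(j, k) for k in B[v])}
          for v in topo}
    return B, PI
```

For the path $v_1 \to v_2 \to v_3$ labeled \texttt{a}, \texttt{b}, \texttt{a} and $S = \texttt{aba}$, we get $B_{v_3} = \{0, 1, 3\}$ but $PI_{v_3} = \{3\}$, since $0$ and $1$ are borders of the full match.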
\begin{lemma}\label{lemma:parameterized-vertex}
Let $G = (V, E)$ be a DAG, $v \in V$, $S \in \Sigma^m$ a string, $f_S$ its failure tree and $A_S$ its matching automaton. We can compute $PI_v$ from $PI_u$ for every $u \in N^{-}_{v}$ in time $O\left(w^2\cdot d^{-}_{v}\right)$ or in time $O\left(\left(k_v := \sum_{u \in N^{-}_{v}} |PI_u|\right)^2\right)$, after $O(m)$ preprocessing time.
\end{lemma}
\begin{proof}
We precompute constant time lowest common ancestor ($LCA$) queries~\cite{aho1976finding} of $f_S$ in $O(m)$ time~\cite{gabow1985linear,schieber1988finding,berkman1993recursive,bender2000lca,bender2005lowest,alstrup2004nearest,fischer2006theoretical}. Note that with this structure we can check whether $i < j$ are prefix-incomparable in constant time ($LCA(i, j) < i$).
If $v$ is a source we have that either $B_v = PI_v = \{0\}$ if $\ell(v) \neq S[1]$, or $PI_v = \{1\}$ if $\ell(v) = S[1]$; otherwise we proceed as follows. To obtain the $O\left(k_v^2\right)$ time, we first append all the elements of every $PI_u$ for $u\in N^{-}_{v}$ into a list $\mathcal{L}$ (of size $k_v$), then we replace every $i \in \mathcal{L}$ by $A_S(i, \ell(v))$, and finally we check (at most) every pair $i < j$ of elements of $\mathcal{L}$ and test (in constant time) whether they are prefix-incomparable; if they are not, we remove $i$ from the list. After these $O(|\mathcal{L}|^2) = O(k_v^2)$ tests, $\mathcal{L} = PI_v$.
To obtain the $O\left(w^2\cdot d^{-}_{v}\right)$ time, we process the in-neighbors of $v$ one by one and maintain a prefix-incomparable set representing the prefix matches incoming from the already processed in-neighbors. That is, we maintain a prefix-incomparable set $PI'$, and when we process the next in-neighbor $u \in N^{-}_{v}$ we append all elements of $PI'$ and $\{A_S(i,\ell(v)) \mid i \in PI_u\}$ into a list $\mathcal{L}'$ of size $O(w)$ (by \Cref{lemma:bounded-size-prefix-incomparable}), then we use the same quadratic procedure applied on $\mathcal{L}$ in time $O(w^2)$ to obtain the new $PI'$. After processing all in-neighbors in time $O(d^{-}_{v}\cdot w^2)$, we have $PI' = PI_v$. Next, we show the correctness of both procedures.
Let $PI'_v$ be the result after applying some of the procedures explained before (that is, the final state of $PI'$ or $\mathcal{L}$); by construction $PI'_v$ is prefix-incomparable. Now, consider $i \in B_v$ and a path $P$ ending in $v$ with $\ell(P) = S[1..i]$. If $P$ is of length zero (only one vertex), then $i = 1$ and $\ell(P) = \ell(v) = S[1]$. Consider any $u\in N^{-}_{v}$, and any $j \in PI_u$ (there is at least one since $0 \in B_u$); this value is mapped to $A_S(j, \ell(v)) = A_S(j, S[1]) = j'$. By definition of the matching automaton, $S[1..j']$ is the longest prefix of $S$ that is a suffix of $S[1..j]\cdot S[1]$, thus $S[1]$ is a border of $S[1..j']$ or $j' = 1$; in both procedures $j'$ can only be removed from $PI'_v$ if a longer prefix contains $S[1..j']$ as a border, and thus also $S[1]$. If $P$ is a proper path and $i > 1$, consider the second to last vertex $u$ of $P$. Note that $i-1 \in B_u$, and thus there is $j\in PI_u$ such that $S[1..i-1]$ is a border of $S[1..j]$; this value is mapped to $j' = A_S(j, \ell(v)) = A_S(j, S[i])$, the length of the longest prefix of $S$ that is a suffix of $S[1..j]\cdot S[i]$, but since $S[1..i]$ is also a prefix of $S$ and a suffix of $S[1..j]\cdot S[i]$, then $S[1..i]$ is a border of $S[1..j']$ or $j' = i$. Again, in both procedures $j'$ can only be removed from $PI'_v$ if a longer prefix contains $S[1..j']$ as a border, and thus also $S[1..i]$. Finally, we note that $PI'_v \subseteq B_v$, since every $i \in PI'_v$ corresponds to a match of $S[1..i]$ by construction.
\end{proof}
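The $O(k_v^2)$ variant of the lemma can be sketched in a few lines; here the automaton is evaluated by brute force and the incomparability test by a direct border check, both stand-ins for the constant-time structures used in the proof:

```python
def longest_border_ext(S, i, c):
    """Brute-force matching automaton A_S(i, c): the largest k such that
    S[:k] is a suffix of S[:i] + c (stand-in for the O(1) automaton lookup)."""
    t = S[:i] + c
    for k in range(min(len(S), len(t)), -1, -1):
        if t.endswith(S[:k]):
            return k

def merge_pi(pi_sets, c, S):
    """One O(k_v^2) step of the lemma: map every prefix length of every
    in-neighbor through the automaton, then keep only the elements that are
    not borders of a longer surviving element."""
    def border(i, j):
        return i < j and S[:i] == S[j - i:j]
    L = {longest_border_ext(S, i, c) for pi in pi_sets for i in pi}
    return {j for j in L if not any(border(j, k) for k in L)}
```

For example, with $S = \texttt{aba}$, a vertex labeled \texttt{a} whose in-neighbors carry $PI$ sets $\{0\}$ and $\{2\}$ yields $\{3\}$: the lengths map to $1$ and $3$, and $1$ is discarded because \texttt{a} is a border of \texttt{aba}.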
We improve the dependency on $w$ and $k_v$ by replacing the quadratic comparison by sorting plus a linear time algorithm on the balanced parenthesis representation~\cite{jacobson1989space,munro2001succinct} of $f_S$.
\begin{restatable}{lemma}{sortingParameterizedVertex}\label{cor:sorting-parameterized-vertex}
We can obtain \Cref{lemma:parameterized-vertex} in time $O(sort(w, m)\cdot d^{-}_{v})$ or in time $O(sort(k_v, m))$, where $sort(n, p)$ is the time spent by an algorithm sorting $n$ integers in the range $\{0, \ldots, p\}$.
\end{restatable}
\begin{proof}
We compute the balanced parenthesis (BP)~\cite{jacobson1989space,munro2001succinct} representation of the topology of $f_S$, that is, we traverse $f_S$ from the root in preorder, appending an open parenthesis when we first arrive at a vertex, and a closing one when we leave its subtree. As a result we obtain a balanced parenthesis sequence of length $2(m+1)$, where every vertex $i\in f_S$ is mapped to its open parenthesis position $open[i]$ and to its close parenthesis position $close[i]$, which can be computed and stored at preprocessing time. Note that in this representation, $i$ is an ancestor of $j$ (and thus prefix-comparable with $j$) if and only if $open[i] \le open[j] \le close[i]$. As such, if we have a list of $O(k_v)$ (or $O(w)$) values ($\mathcal{L}$ and $\mathcal{L}'$ from \Cref{lemma:parameterized-vertex}), we can compute the corresponding prefix-incomparable sets as follows.
First, we sort the list by increasing $open$ value; this can be done in $O(sort(k_v, m))$ (or $O(sort(w, m))$) time, since this sorting is equivalent to sorting by increasing $open/2 \in \{0, \ldots, m\}$ value. Then, we process the list in the sorted order: if two consecutive values $i$ and $j$ in the order are prefix-comparable (that is, if $open[j] \le close[i]$), then we remove $i$ and continue to the next value $j$. At the end of this $O(k_v)$ (or $O(w)$) time processing we obtain the desired prefix-incomparable set.
\end{proof}
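A minimal sketch of the BP filtering (our own helper names; `parent` encodes the failure tree, with `parent[root] = None`):

```python
def bp_intervals(parent):
    """open/close positions of each node in a preorder balanced-parenthesis
    walk of the tree given by parent[], computed with an explicit stack."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for i, p in enumerate(parent):
        if p is None:
            root = i
        else:
            children[p].append(i)
    open_, close, t = [0] * n, [0] * n, 0
    stack = [(root, False)]
    while stack:
        v, done = stack.pop()
        if done:
            close[v] = t          # leaving the subtree of v
        else:
            open_[v] = t          # first arrival at v
            stack.append((v, True))
            for c in reversed(children[v]):
                stack.append((c, False))
        t += 1
    return open_, close

def filter_incomparable(vals, open_, close):
    """Given vals sorted by open position, drop every element that is an
    ancestor of a later one, leaving a prefix-incomparable set."""
    out = []
    for j in vals:
        while out and open_[j] <= close[out[-1]]:
            out.pop()             # out[-1] is an ancestor of j
        out.append(j)
    return out
```

For $S = \texttt{aba}$ the failure tree has edges $1 \to 0$, $2 \to 0$, $3 \to 1$; filtering the list $\{0, 1, 3\}$ leaves exactly $\{3\}$, matching the $PI_v$ computed earlier.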
If we use techniques for integer sorting~\cite{van1975preserving,willard1983log,kirkpatrick1983upper} we can get $O(k_v\log\log{m})$ (or $O(w\log\log{m})$) time for sorting, however introducing $m$ into the running time. We can solve this issue by using more advanced techniques~\cite{han1995conservative,andersson1998sorting,han2002deterministic} obtaining a $O(k_v\log\log{k_v})$ (or $O(w\log\log{w})$) time for sorting. However, we show that by using the suffix-tree~\cite{weiner1973linear,mccreight1976space,ukkonen1995line} of $S^r$ we can obtain a linear dependency on $w$ and $k_v$.
\begin{restatable}{theorem}{linearParameterizedVertex}\label{cor:linear-parameterized-vertex}
We can obtain the result of \Cref{lemma:parameterized-vertex} in time $O(w\cdot d^{-}_{v})$ or in time $O(k_v)$.
\end{restatable}
\begin{proof}
We reuse the procedure of \Cref{cor:sorting-parameterized-vertex}, but this time on the BP representation of the topology of the suffix-tree $T_r$ of $S^r$, which has $O(m)$ vertices and can be built in $O(m)$ time~\cite{weiner1973linear,mccreight1976space,ukkonen1995line}. Note that every suffix represented in $T_r$ corresponds to a prefix of $S$ (spelled in the reverse direction). Moreover, $i \le j$ are prefix-comparable if and only if the vertex representing $i$ in $T_r$ ($T_r[i]$) is an ancestor of $T_r[j]$, the same property as in $f_S$. Furthermore, if $B$ is prefix-incomparable and $A(j, a) = j+1$ for every $j\in B$, then the positions of the vertices in $A(B, a)$ in $T_r$ follow the same order as the ones in $B$, since the suffix-tree is lexicographically sorted.
Now, we show how to obtain the prefix-incomparable set representing $A(PI_u, \ell(v))$ in $|PI_u|$ time assuming that $PI_u$ is sorted by increasing ($open$) position in $T_r$.
We first separate $PI_u$ into the elements $i \in M$ with $S[i+1] = \ell(v)$ and $i \in E$ with $S[i+1] \ne \ell(v)$ (in the same relative order as in $PI_u$, which is assumed to be sorted in increasing order). Since $M$ is prefix-incomparable, the positions of the vertices in $A(M, \ell(v))$ in $T_r$ follow the same order as the ones in $M$. We then obtain the list $E_u$ by applying $T_r[A(i, \ell(v)) - 1]$ for every $i \in E$ (if $A(i, \ell(v)) = 0$ we do not process $i$), and then for any pair of consecutive elements $x$ before $y$ in $E_u$ such that $y\le x$ we remove $y$ from $E_u$, and repeat this until no further such inversion remains, thus obtaining an increasing list $E_u$ of vertices of $T_r$. Next, since $E_u$ is sorted, we can obtain the list $PI_E$ of prefix-incomparable elements representing $E_u$, and finally apply $A(PI_E, \ell(v))$ (which also follows an increasing order in $T_r$), merge it with $A(M, \ell(v))$, and compute the prefix-incomparable elements of this merge.
The correctness of the previous procedure follows from the fact that if there is an inversion $y < x$ in $E_u$, then the prefix $A(j,\ell(v))-1$ represented by $y$ in $T_r$ is a border of the prefix $A(i,\ell(v))-1$ represented by $x$ (and thus it is safe to remove $y$). For this, first note that $i$ appears before $j$ in $E$, hence $S[i..1] <_{lex} S[j..1]$, and since $i$ is prefix-incomparable with $j$ there is a $k\ge 1$ such that $S[i..i+k-1] = S[j..j+k-1]$ and $S[i+k] <_{lex} S[j+k]$. Then, since $y$ appears before $x$ in $E_u$, we have $S[A(j,\ell(v))-1..1] <_{lex} S[A(i,\ell(v))-1..1]$, but since $A(i,\ell(v))-1$ is a border of $i$ and $A(j,\ell(v))-1$ is a border of $j$, $S[A(j,\ell(v))-1..1]$ must be a prefix of $S[A(i,\ell(v))-1..1]$, and thus $S[1..A(j,\ell(v))-1]$ is a border of $S[1..A(i,\ell(v))-1]$.
The theorem is obtained by maintaining the $PI_v$ sets sorted by position in $T_r$, and noting that the previous procedure runs in linear $O(|PI_u|)$ time.
\end{proof}
In \Cref{sec:the-pattern} we show how to use \Cref{cor:linear-parameterized-vertex} to derive a parameterized algorithm using parameter $w = |\{i \in \{0,\ldots, m\} \mid \not \exists j, f_S(j) = i\}|$, improving on the classical DAG algorithm when $S$ is a periodic string. Next, we will present our results on recognizing funnels and their generalization (\Cref{sec:funnels-and-beyond}), and how to use these classes of graphs and \Cref{cor:linear-parameterized-vertex} to obtain parameterized algorithms using parameters related to the topology of the DAG (\Cref{sec:the-dag}).
\section{Funnels and beyond}\label{sec:funnels-and-beyond}
Recall that funnels are DAGs whose source-to-sink paths have at least one \emph{private} edge\footnote{For the sake of simplicity, we assume that there are no isolated vertices, thus any source-to-sink path has at least one edge.}, that is, an edge used by only one source-to-sink path. More formally,
\begin{definition}[Private edge]
Let $G = (V, E)$ be a DAG and $\mathcal{P}$ the set of source-to-sink paths of $G$. We say that $e\in E$ is \emph{private} if $\mu(e) := |\{P \in \mathcal{P} \mid e \in P\}| = 1$. If $\mu(e) > 1$, we say that $e$ is \emph{shared}.
\end{definition}
\begin{definition}[Funnel]\label{def:funnel}
Let $G = (V, E)$ be a DAG and $\mathcal{P}$ the set of source-to-sink paths of $G$. We say that $G$ is a funnel if for every $P \in \mathcal{P}$ there exists $e \in P$ such that $e$ is private.
\end{definition}
Millani et al.~\cite{millani2020efficient} showed two other characterizations of funnels.
\begin{theorem}[\cite{millani2020efficient}]\label{thm:funnels-characterizations}
Let $G = (V, E)$ be a DAG. The following are equivalent:
\begin{enumerate}
\item $G$ is a funnel
\item There exists a partition $V = V_1 \dot\cup V_2$ such that $G[V_1]$ is an out-forest, $G[V_2]$ is an in-forest and there are no edges from $V_2$ to $V_1$
\item There is no path $P$ such that its first vertex has more than one in-neighbor (a merging vertex) and its last vertex more than one out-neighbor (a forking vertex). Such a path is called forbidden
\end{enumerate}
\end{theorem}
They also gave an $O(|V|+|E|)$ time algorithm to recognize whether a DAG $G$ is a funnel, and an $O(|V|(|V|+|E|))$ time algorithm to find a minimal forbidden path in a general graph, that is, a forbidden path that is not contained in another forbidden path.
\subsection{Three (new) linear time recognition algorithms}\label{sec:three-recognition}
We first show how to find a minimal forbidden path in time $O(|V|+|E|)$ in general graphs, improving on the quadratic algorithm of Millani et al.~\cite{millani2020efficient}.
\begin{restatable}{lemma}{unitig}\label{lemma:unitig}
Let $G = (V, E)$ be a graph. In $O(|V|+|E|)$ time, we can decide if $G$ contains a forbidden path, and if one exists we report a minimal forbidden path.
\end{restatable}
\begin{proof}
In the bioinformatics community minimal forbidden paths are a subset of \emph{unitigs}, and it is well known how to compute them in $O(|V|+|E|)$ time (see e.g.~\cite{kececioglu1995combinatorial,jackson2009parallel,kingsford2010assembly,medvedev2007computability}); here we include a simple algorithm for completeness. We first compute the indegree and outdegree of each vertex and check whether there exists a forbidden path of length zero or one, all in $O(|V|+|E|)$ time; in the process we also mark all vertices except the ones with unit indegree and outdegree. If no path is found we iterate over the vertices one last time. If the current vertex is not marked, we extend it back and forth as long as there is a unique extension and mark the vertices in this extension; finally, we check whether the first vertex is merging and the last forking. This last iteration takes $O(|V|)$ time.
\end{proof}
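A minimal Python sketch of the procedure just described (the edge-list representation over vertices $0,\dots,n-1$ and the function name are our own; this is an illustration, not the paper's implementation):

```python
from collections import defaultdict

def minimal_forbidden_path(n, edges):
    """Return a minimal forbidden path as a vertex list, or None.

    A forbidden path starts at a merging vertex (indegree > 1) and ends
    at a forking vertex (outdegree > 1); interior vertices of a minimal
    one have unit indegree and outdegree.
    """
    out_n, in_n = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_n[u].append(v)
        in_n[v].append(u)
    indeg = {v: len(in_n[v]) for v in range(n)}
    outdeg = {v: len(out_n[v]) for v in range(n)}

    # Forbidden paths of length zero or one.
    for v in range(n):
        if indeg[v] > 1 and outdeg[v] > 1:
            return [v]
    for u, v in edges:
        if indeg[u] > 1 and outdeg[v] > 1:
            return [u, v]

    # Only unit-degree vertices can be interior; extend each maximal run.
    unmarked = {v for v in range(n) if indeg[v] == 1 and outdeg[v] == 1}
    while unmarked:
        v = unmarked.pop()
        chain = [v]
        w = out_n[v][0]                  # extend forward
        while w in unmarked:
            unmarked.discard(w)
            chain.append(w)
            w = out_n[w][0]
        chain.append(w)                  # candidate forking endpoint
        u = in_n[v][0]                   # extend backward
        while u in unmarked:
            unmarked.discard(u)
            chain.insert(0, u)
            u = in_n[u][0]
        chain.insert(0, u)               # candidate merging endpoint
        if indeg[chain[0]] > 1 and outdeg[chain[-1]] > 1:
            return chain
    return None
```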
\Cref{lemma:unitig} provides our first linear time recognition algorithm and, as opposed to the algorithm of Millani et al.~\cite{millani2020efficient}, it also reports a minimal forbidden path when the input is a general graph. Moreover, in \Cref{sec:linear-distance}, we show that \Cref{lemma:unitig} yields a linear time parameterized algorithm for the NP-hard (and inapproximable) problem of computing the deletion distance of a general graph to a funnel~\cite{lund1993approximation,millani2020efficient}. Millani et al.~\cite{millani2020efficient} solved this problem in (parameterized) quadratic time, and in (parameterized) linear time only if the input graph is a DAG.
Next, we show another linear time recognition algorithm, which additionally finds the partition $V = V_{1} \dot\cup V_{2}$ from \Cref{thm:funnels-characterizations}. Finding such a partition will be essential for our solution to SMLG. From now on, we will assume that the input graph is a DAG, since this condition can be checked in linear time~\cite{kahn1962topological,tarjan1976edge}.
\begin{lemma}
Let $G = (V, E)$ be a DAG. We can decide in $O(|V|+|E|)$ time whether $G$ is a funnel. Additionally, if $G$ is a funnel, the algorithm reports a partition $V = V_{1} \dot\cup V_{2}$ such that $G[V_1]$ is an out-forest, $G[V_2]$ is an in-forest and there are no edges from $V_2$ to $V_1$.
\end{lemma}
\begin{proof}
We start a special BFS traversal from all the source vertices of $G$. The traversal only adds vertices to the BFS queue if they have not been previously visited (as in a typical BFS traversal) and if their indegree is at most one. After the search, we define $V_1$ as the set of vertices visited during the traversal and $V_2 = V\setminus V_1$. Finally, we report this partition if there are no edges from $V_2$ to $V_1$ and every vertex of $V_2$ has outdegree at most one. All these steps run in $O(|V|+|E|)$ time.
Note that if the algorithm reports a partition, then this partition satisfies the required conditions for a funnel ($G[V_{1}]$ is an out-forest since every vertex visited in the traversal has indegree at most one). Conversely, if $G$ is a funnel, we prove that $G[V_{2}]$ is an in-forest and that there are no edges from $V_{2}$ to $V_{1}$. For the first claim, suppose by contradiction that there is a vertex $v \in V_{2}$ with $d^{+}_{v} > 1$. Since in a DAG every vertex is reachable from some source, there is a $u \in V_{2}$ with $d^{-}_{u} > 1$ (a vertex that was not added to the BFS queue) that reaches $v$, implying the existence of a forbidden path in $G$, a contradiction. Finally, there cannot be edges from $V_{2}$ to $V_{1}$, since the indegree (in $G$) of every vertex of $V_1$ is at most one and its unique in-neighbor (if any) is also in $V_{1}$ by construction.
\end{proof}
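A sketch of this special BFS in Python (edge-list representation and names are ours); it returns the partition $(V_1, V_2)$ when the DAG is a funnel, and None otherwise:

```python
from collections import defaultdict, deque

def funnel_partition(n, edges):
    """BFS from the sources, entering only vertices of indegree <= 1;
    V1 = visited vertices, V2 = rest. Report the partition iff G[V2]
    is an in-forest and there is no V2 -> V1 edge."""
    out_n = defaultdict(list)
    indeg = [0] * n
    for u, v in edges:
        out_n[u].append(v)
        indeg[v] += 1

    queue = deque(v for v in range(n) if indeg[v] == 0)
    visited = set(queue)
    while queue:
        u = queue.popleft()
        for v in out_n[u]:
            if v not in visited and indeg[v] <= 1:
                visited.add(v)
                queue.append(v)

    v1, v2 = visited, set(range(n)) - visited
    for u, v in edges:
        if u in v2 and v in v1:          # edge from V2 to V1
            return None
    for u in v2:
        if len(out_n[u]) > 1:            # G[V2] not an in-forest
            return None
    return v1, v2
```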
Next, we present another characterization of funnels based on the structure of private/shared edges of the graph, which can be easily obtained by manipulating the original~\Cref{def:funnel}.
\begin{definition}[Funnel]
Let $G = (V, E)$ be a DAG. We say that $G$ is a funnel if there is no source-to-sink path using only shared edges.
\end{definition}
As such, another approach to decide whether a DAG $G$ is a funnel is to compute $\mu(e)$ for every $e\in E$ and then perform a traversal that only uses shared edges. Computing the number of source-to-sink paths containing $e$, that is, $\mu(e)$, can be done by multiplying the number of source-to-$e$ paths, $\mu_s(e)$, by the number of $e$-to-sink paths, $\mu_t(e)$, each of which can be computed in $O(|V|+|E|)$ time for all edges. The solution consists of dynamic programming over a topological order (and a reverse topological order) of $G$ with the following recurrences.
\begin{equation}\label{eq:s2s-counting}
\begin{aligned}
\mu_s(e = (u, v)) &= \mu_s(u) = \mathbbm{1}_{d^{-}_{u} = 0} + \sum_{u' \in N^{-}_{u}} \mu_s((u', u))\\
\mu_t(e = (u, v)) &= \mu_t(v) = \mathbbm{1}_{d^{+}_{v}=0} +\sum_{v' \in N^{+}_{v}} \mu_t((v, v'))\\
\mu(e) &= \mu_s(e)\cdot\mu_t(e)
\end{aligned}
\end{equation}
Here, $\mathbbm{1}_A$ is the indicator function, evaluating to $1$ if $A$ is true and to $0$ otherwise.
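In per-vertex form (using $\mu_s(e) = \mu_s(u)$ and $\mu_t(e) = \mu_t(v)$ for $e=(u,v)$), the recurrences can be sketched in Python as follows (edge-list input over vertices $0,\dots,n-1$, exact big-integer arithmetic; names are ours):

```python
from collections import defaultdict

def path_counts(n, edges):
    """mu_s(v): source-to-v paths; mu_t(v): v-to-sink paths;
    mu[(u, v)] = mu_s(u) * mu_t(v): source-to-sink paths through (u, v).
    Dynamic programming over a topological order of the DAG."""
    out_n, in_n = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_n[u].append(v)
        in_n[v].append(u)

    # Kahn's algorithm for a topological order.
    deg = [len(in_n[v]) for v in range(n)]
    topo, stack = [], [v for v in range(n) if deg[v] == 0]
    while stack:
        u = stack.pop()
        topo.append(u)
        for v in out_n[u]:
            deg[v] -= 1
            if deg[v] == 0:
                stack.append(v)

    mu_s, mu_t = [0] * n, [0] * n
    for v in topo:                       # forward pass
        mu_s[v] = 1 if not in_n[v] else sum(mu_s[u] for u in in_n[v])
    for v in reversed(topo):             # backward pass
        mu_t[v] = 1 if not out_n[v] else sum(mu_t[w] for w in out_n[v])
    mu = {(u, v): mu_s[u] * mu_t[v] for u, v in edges}
    return mu_s, mu_t, mu
```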
We note that solving the dynamic programs of \Cref{eq:s2s-counting} as written would take $\Omega(|V||E|)$ time, since the equations are indexed by edges. It is simple to see that $\mu_s(e) = \mu_s(u)$ and $\mu_t(e) = \mu_t(v)$ for every $e = (u, v) \in E$; thus, one can compute the dynamic programs in terms of vertices in $O(|V|+|E|)$ time. For simplicity, we will use this observation implicitly in \Cref{thm:k-funnel-linear-recognition,lemma:linear-min-k}.
The previous algorithm assumes constant time arithmetic operations on numbers up to $\max_{e\in E} \mu(e)$, which can be $O(2^{|V|})$. To avoid this issue, we note that it is not necessary to compute $\mu(e)$ exactly, but only to verify whether $\mu(e) > 1$. As such, we can recognize shared edges as soon as we identify that $\mu(e) > 1$, that is, whenever $\mu_s(e)$ or $\mu_t(e)$ exceeds one in its respective computation. A formal description of this algorithm can be found in \Cref{thm:k-funnel-linear-recognition}.
\subsection{Generalizations of funnels}
To generalize funnels, we allow source-to-sink paths that use only shared edges, but require every source-to-sink path to contain at least one edge shared by at most $k$ different source-to-sink paths.
\begin{definition}[$k$-private edge]\label{def:kprivate}
Let $G = (V, E)$ be a DAG. We say that $e \in E$ is $k$-private if $\mu(e) \le k$. If $\mu(e) > k$ we say that $e$ is $k$-shared.
\end{definition}
\begin{definition}[$k$-funnel]\label{def:kfunnel}
Let $G = (V, E)$ be a DAG. We say that $G$ is a $k$-funnel if there is no source-to-sink path using only $k$-shared edges.
\end{definition}
The next algorithm is a generalization of the last algorithm in \Cref{sec:three-recognition} to decide if a DAG is a $k$-funnel. It assumes constant time arithmetic operations on numbers up to $k$.
\begin{lemma}\label{thm:k-funnel-linear-recognition}
We can decide if a DAG $G = (V, E)$ is a $k$-funnel in $O(|V|+|E|)$ time.
\end{lemma}
\begin{proof}
We process the vertices in a topological ordering and use \Cref{eq:s2s-counting} to compute $\mu_s(e)$ in one pass, $\mu_t(e)$ in another pass, and $\mu(e)$ in a final pass. To avoid arithmetic operations with numbers greater than $k$, we mark the edges whose $\mu_s$ or $\mu_t$ exceeds $k$ as $k$-shared during the computation of $\mu_s$ and $\mu_t$; note that if $\mu_s(e) > k$ or $\mu_t(e)> k$, then $\mu(e) > k$. As such, before computing $\mu_s(e)$ (resp.\ $\mu_t(e)$), we check whether some edge from an in-neighbor (resp.\ to an out-neighbor) is marked as $k$-shared. If that is the case, we do not compute $\mu_s(e)$ (resp.\ $\mu_t(e)$) and instead mark $e$ as $k$-shared; otherwise, we compute the respective sum of \Cref{eq:s2s-counting}, and if at some point the cumulative sum exceeds $k$, we stop the computation and mark $e$ as $k$-shared. Finally, we identify the $k$-shared edges as the marked edges plus the unmarked edges with $\mu(e) = \mu_s(e)\cdot\mu_t(e) > k$, perform a traversal using only $k$-shared edges, and report that $G$ is not a $k$-funnel if there is a source-to-sink path using only $k$-shared edges. All of this takes $O(|V|+|E|)$ time.
\end{proof}
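A Python sketch of this recognition algorithm (our own formulation: instead of explicit marking, counts are capped at $k+1$, which has the same effect, since any value above $k$ behaves identically):

```python
from collections import defaultdict

def is_k_funnel(n, edges, k):
    """Decide whether a DAG is a k-funnel: no source-to-sink path may use
    only k-shared edges (mu(e) > k). Counts are capped at k + 1, so all
    arithmetic involves numbers of size O(k), as in the lemma."""
    out_n, in_n = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_n[u].append(v)
        in_n[v].append(u)
    deg = [len(in_n[v]) for v in range(n)]
    topo, stack = [], [v for v in range(n) if deg[v] == 0]
    while stack:
        u = stack.pop()
        topo.append(u)
        for v in out_n[u]:
            deg[v] -= 1
            if deg[v] == 0:
                stack.append(v)

    cap = k + 1                          # every value > k behaves the same
    mu_s, mu_t = [0] * n, [0] * n
    for v in topo:
        mu_s[v] = min(cap, 1 if not in_n[v] else sum(mu_s[u] for u in in_n[v]))
    for v in reversed(topo):
        mu_t[v] = min(cap, 1 if not out_n[v] else sum(mu_t[w] for w in out_n[v]))
    shared = {(u, v) for u, v in edges if mu_s[u] * mu_t[v] > k}

    # DFS from the sources along k-shared edges only; reaching a sink
    # witnesses a source-to-sink path made entirely of k-shared edges.
    stack = [v for v in range(n) if not in_n[v]]
    seen = set(stack)
    while stack:
        u = stack.pop()
        for v in out_n[u]:
            if (u, v) in shared and v not in seen:
                if not out_n[v]:
                    return False
                seen.add(v)
                stack.append(v)
    return True
```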
We can use the previous result and exponential search~\cite{bentley1976almost,baeza2010fast} to find the minimum $k$ such that a DAG is a $k$-funnel. Since the exponential search can overshoot $k$ by at most a factor of $2$, this result assumes constant time arithmetic operations on numbers up to $2k$.
\begin{corollary}\label{cor:exponential-min-k}
Let $G = (V, E)$ be a DAG. We can find the minimum $k$ such that $G$ is a $k$-funnel in $O((|V|+|E|)\log{k})$ time.
\end{corollary}
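The exponential search itself is independent of the graph; a minimal sketch follows, where the monotone predicate would be instantiated with the $k$-funnel test of \Cref{thm:k-funnel-linear-recognition} (the binding in the comment is hypothetical):

```python
def min_k_satisfying(pred):
    """Minimum k >= 1 with pred(k) True, for a monotone predicate that
    is eventually True. The probes overshoot the answer by at most a
    factor of 2, matching the corollary's arithmetic assumption."""
    hi = 1
    while not pred(hi):
        hi *= 2
    lo = hi // 2 + 1                     # pred(hi // 2) is False when hi > 1
    while lo < hi:                       # binary search in [lo, hi]
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return hi

# Hypothetical usage: min_k_satisfying(lambda k: is_k_funnel(n, edges, k))
```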
Assuming constant time arithmetic operations on numbers up to $\max_{e\in E} \mu(e)$ the problem is solvable in linear time by noting that the answer is equal to the weight of a widest path.
\begin{restatable}{lemma}{linearMinK}\label{lemma:linear-min-k}
Let $G = (V, E)$ be a DAG. We can find the minimum $k$ such that $G$ is a $k$-funnel in $O(|V|+|E|)$ time.
\end{restatable}
\begin{proof}
We compute $\mu(e)$ for every $e \in E$ by using the dynamic programming algorithm specified by \Cref{eq:s2s-counting} on a topological ordering of $G$. Since constant time arithmetic operations are assumed for numbers up to $\max_{e\in E} \mu(e)$, the previous computation takes linear time. Then, we compute the weight of a source-to-sink path $P$ maximizing $\min_{e\in P} \mu(e)$, and report this value. This problem is known as the widest path problem~\cite{pollack1960maximum,shacham1992multicast,magnanti1993network,ullah2009algorithm,schulze2011new}, and it can be solved in linear time on DAGs~\cite{vatinlen2008simple,hartman2012split} by dynamic programming on a topological order of the graph. For completeness, we show a dynamic programming recurrence to compute $W[e]$, the weight of a source-to-$e$ path $P$ maximizing $\min_{e'\in P} \mu(e')$.
\begin{align*}
W[e = (u, v)] &= \begin{cases} \mu(e) & \text{if } d^{-}_{u} = 0\\ \max_{u' \in N^{-}_{u}} \min\left(W[(u', u)], \mu(e)\right) & \text{otherwise}\end{cases}
\end{align*}
Finally, note that if we denote by $w$ the weight of a widest path, then there is a source-to-sink path using only $(w-1)$-shared edges (the widest path itself), so $G$ is not a $(w-1)$-funnel. Moreover, there cannot be a source-to-sink path using only $w$-shared edges, since such a path would contradict $w$ being the weight of a widest path. As such, $w$ is the minimum $k$ such that $G$ is a $k$-funnel.
\end{proof}
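Combining exact path counts with a max-min (widest path) recurrence yields a direct sketch in Python (it assumes the DAG has at least one edge and, as in the lemma, constant time arithmetic on the possibly large counts; names are ours):

```python
from collections import defaultdict

def min_funnel_k(n, edges):
    """Minimum k such that the DAG is a k-funnel, computed as the weight
    of a widest source-to-sink path under edge weights mu(e)."""
    out_n, in_n = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_n[u].append(v)
        in_n[v].append(u)
    deg = [len(in_n[v]) for v in range(n)]
    topo, stack = [], [v for v in range(n) if deg[v] == 0]
    while stack:
        u = stack.pop()
        topo.append(u)
        for v in out_n[u]:
            deg[v] -= 1
            if deg[v] == 0:
                stack.append(v)

    mu_s, mu_t = [0] * n, [0] * n
    for v in topo:
        mu_s[v] = 1 if not in_n[v] else sum(mu_s[u] for u in in_n[v])
    for v in reversed(topo):
        mu_t[v] = 1 if not out_n[v] else sum(mu_t[w] for w in out_n[v])

    # W[(u, v)]: best max-min width over source-to-(u, v) paths.
    W = {}
    for u in topo:
        for v in out_n[u]:
            mu_e = mu_s[u] * mu_t[v]
            if not in_n[u]:
                W[(u, v)] = mu_e
            else:
                W[(u, v)] = max(min(W[(w, u)], mu_e) for w in in_n[u])
    # The answer is the width of a widest path ending at a sink.
    return max(W[(u, v)] for (u, v) in W if not out_n[v])
```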
We now define three classes of DAGs closely related to $k$-funnels.
\begin{definition}\label{def:sk-tk}
We say that a DAG $G = (V, E)$ belongs to the class $\mathcal{S}_k$ ($\mathcal{T}_k$) if for every $v \in V$, $\mu_s(v)$ ($\mu_t(v)$) $\le k$.
\end{definition}
\begin{definition}\label{def:stk}
We say that a DAG $G = (V, E)$ belongs to the class $\mathcal{ST}_k$ if for every $v \in V$, $\mu_s(v) \le k$ or $\mu_t(v)\le k$.
\end{definition}
\begin{lemma}\label{lemma:subset-stk}
$\mathcal{S}_k, \mathcal{T}_k \subseteq k$-funnels $\subseteq \mathcal{ST}_{k}$.
\end{lemma}
\begin{proof}
We first prove that $\mathcal{S}_k, \mathcal{T}_k \subseteq k$-funnels. Consider $G \in \mathcal{S}_k$ ($G \in \mathcal{T}_k$), and take any source-to-sink path $P$ of $G$. Let $(u,v)$ be the last (first) edge of $P$; then, by \Cref{eq:s2s-counting}, $\mu((u,v)) = \mu_s(u)\cdot\mu_t(v)$. Since $\mu_t(v) = 1$ ($v$ is a sink) and $\mu_s(u) \le k$ ($G \in \mathcal{S}_k$) (analogously, $\mu_s(u) = 1$ and $\mu_t(v) \le k$), we get $\mu((u,v)) \le k$, and thus $(u, v)$ is a $k$-private edge.
To prove that $k$-funnels $\subseteq \mathcal{ST}_{k}$, suppose that $G$ is a $k$-funnel, and by contradiction that there exists $v\in V$ with $\mu_s(v), \mu_t(v) > k$. Consider any source-to-sink path $P$ using $v$. Now, let $(u,w)$ be any edge in $P$ before (after) $v$, then $\mu_t(w) \ge \mu_t(v) > k$ ($\mu_s(u) \ge \mu_s(v) > k$), and thus $\mu((u,w)) = \mu_s(u)\cdot\mu_t(w) > k$. As such, $P$ does not have a $k$-private edge, a contradiction.
\end{proof}
For $k = 1$, $\mathcal{S}_1$ describes out-forests and $\mathcal{T}_1$ in-forests, thus being more restrictive than funnels. Moreover, we note that the in(out)-star with $k+1$ leaves, that is, $k+1$ vertices pointing to a sink (resp.\ pointed from a source), is not in $\mathcal{S}_k$ ($\mathcal{T}_k$), but this graph is a funnel. On the other hand, by the vertex partition characterization of funnels (\Cref{thm:funnels-characterizations}~\cite{millani2020efficient}) we have that $\mathcal{ST}_1 =$ ($1$-)funnels. However, for $k>1$, the containment $k$-funnels $\subseteq \mathcal{ST}_{k}$ is strict (\Cref{fig:strict-containment}).
\begin{figure}
\caption{A DAG in $\mathcal{ST}_k$ that is not a $k$-funnel, for every $k > 1$. The central edge is a forbidden path whose first vertex has indegree $k$ and outdegree $2$, and whose last vertex has indegree $2$ and outdegree $k$; the remaining edges have either a source tail or a sink head. The blue label next to each vertex $v$ corresponds to $\min(\mu_s(v),\mu_t(v))$; since the maximum of these labels is $k$, the graph belongs to $\mathcal{ST}_k$. The red label next to each edge $e$ corresponds to $\mu(e)$; since there is a source-to-sink path with no $k$-private edge, the graph is not a $k$-funnel.}
\label{fig:strict-containment}
\end{figure}
By noting that the minimum $k$ such that a DAG is in $\mathcal{S}_k$, $\mathcal{T}_k$ and $\mathcal{ST}_k$ is $\max_{v\in V} \mu_s(v)$, $\max_{v\in V} \mu_t(v)$ and $\max_{v\in V} \min(\mu_{s}(v), \mu_{t}(v))$, respectively, we obtain the same results as in \Cref{thm:k-funnel-linear-recognition,cor:exponential-min-k,lemma:linear-min-k} (with analogous assumptions on the cost of arithmetic operations) for recognition of $\mathcal{S}_k$, $\mathcal{T}_k$ and $\mathcal{ST}_k$.
Next, we prove that although the vertex partition characterization of funnels does not generalize to $k$-funnels, it does generalize to the class $\mathcal{ST}_k$, and the corresponding partition can be found efficiently.
\begin{lemma}\label{thm:stk-partitioning}
Let $G = (V, E) \in \mathcal{ST}_k$ and $k$ be given as input. We can find, in $O(|V|+|E|)$ time, a partition $V = V_1 \dot\cup V_2$ such that $G[V_1]\in \mathcal{S}_k$, $G[V_2]\in \mathcal{T}_k$ and there are no edges from $V_2$ to $V_1$. Moreover, if such a partition of a DAG $G$ exists, then $G \in \mathcal{ST}_k$.
\end{lemma}
\begin{proof}
We set $V_1 = \{v \in V \mid \mu_s(v) \le k\}$ and $V_2 = V \setminus V_1$. Note that finding $V_1$ takes linear time, since we can apply the algorithm described in \Cref{thm:k-funnel-linear-recognition} to compute the $\mu_s$ values (or decide that they exceed $k$). By construction, every $v \in V_1$ has $\mu_s(v) \le k$, and since $G \in \mathcal{ST}_k$, every $v \in V_2$ has $\mu_t(v) \le k$; thus $G[V_1]\in \mathcal{S}_k$ and $G[V_2]\in \mathcal{T}_k$. Suppose by contradiction that there exists $e = (u, v)\in E \cap (V_2\times V_1)$. Then $\mu_s(u) > k$, but since $\mu_s(u) \le \mu_s(v)$, also $\mu_s(v) > k$, a contradiction. Finally, if such a partition exists, then $\mu_s(v) \le k$ for every $v \in V_1$ and $\mu_t(v) \le k$ for every $v \in V_2$, and thus $G\in\mathcal{ST}_k$.
\end{proof}
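A sketch of this partition in Python ($\mu_s$ values are capped at $k+1$, as in the recognition algorithm, so arithmetic stays on numbers of size $O(k)$; names are ours):

```python
from collections import defaultdict

def stk_partition(n, edges, k):
    """Partition a DAG in ST_k as V1 = {v : mu_s(v) <= k}, V2 = V \\ V1,
    so that G[V1] is in S_k, G[V2] is in T_k (when G is in ST_k), and
    there is no edge from V2 to V1."""
    out_n, in_n = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_n[u].append(v)
        in_n[v].append(u)
    deg = [len(in_n[v]) for v in range(n)]
    topo, stack = [], [v for v in range(n) if deg[v] == 0]
    while stack:
        u = stack.pop()
        topo.append(u)
        for v in out_n[u]:
            deg[v] -= 1
            if deg[v] == 0:
                stack.append(v)

    mu_s = [0] * n
    for v in topo:
        s = 1 if not in_n[v] else sum(mu_s[u] for u in in_n[v])
        mu_s[v] = min(s, k + 1)          # cap: "more than k" is enough
    v1 = {v for v in range(n) if mu_s[v] <= k}
    return v1, set(range(n)) - v1
```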
\section{Parameterized algorithms: The DAG}\label{sec:the-dag}
The main idea behind the parameterized algorithms in this section is to bound the size of the $PI_v$ sets by a topological graph parameter and use \Cref{lemma:parameterized-vertex,cor:linear-parameterized-vertex} to obtain a parameterized solution. Just as in the KMP algorithm~\cite{knuth1977fast} a single prefix-incomparable value suffices (the longest prefix match up to that point), we show that $\mu_s(v)$ prefix-incomparable values suffice to capture the prefix matches up to $v$.
\begin{lemma}\label{lemma:prefix-incomparable-topo-bound}
Let $G = (V, E)$ be a DAG, $v \in V$, $\mathcal{P}_{sv}$ the set of source-to-$v$ paths, $\ell: V \rightarrow \Sigma$ a labeling function, $S \in \Sigma^m$ a string, and $PI_v$ as in \Cref{def:bv-piv}. Then, $|PI_v| \le \mu_s(v)$.
\end{lemma}
\begin{proof}
Since any path ending in $v$ is a suffix of a source-to-$v$ path, we can write $B_v$ as:
\begin{align*}
B_v &= \bigcup_{P_{sv} \in \mathcal{P}_{sv}} B_{P_{sv}}:= \{i \in \{0,\ldots,m\} \mid \exists P \text{ suffix of }P_{sv}, \ell(P) = S[1..i]\}
\end{align*}
Moreover, for every pair of values $i < j \in B_{P_{sv}}$, $S[1..i]$ is a border of $S[1..j]$ (it is a suffix of $S[1..j]$ since both are suffixes of $\ell(P_{sv})$). As such, at most one value of $B_{P_{sv}}$ appears in $PI_v$, and therefore $|PI_v| \le |\mathcal{P}_{sv}| = \mu_s(v)$.
\end{proof}
This result directly implies a parameterized string matching algorithm for DAGs in $\mathcal{S}_k$.
\begin{lemma}\label{thm:dag-algorithm-dag-1}
Let $G = (V, E) \in \mathcal{S}_k$, $\ell: V \rightarrow \Sigma$ a labeling function and $S \in \Sigma^m$ a string. We can decide whether $S$ has a match in $G,\ell$ in time $O(|V|k+|E| + \sigma m)$.
\end{lemma}
\begin{proof}
We proceed as in \Cref{thm:dag-algorithm-pattern}, but instead we compute each $PI_v$ with the $O(k_v)$ version of \Cref{cor:linear-parameterized-vertex}. The claimed running time follows since $k_v = \sum_{u \in N^{-}_{v}} |PI_u| \le \sum_{u \in N^{-}_{v}} \mu_s(u) \le 1 + \mu_s(v) \le k+1$, by~\Cref{lemma:prefix-incomparable-topo-bound},~\Cref{eq:s2s-counting} and since $G \in \mathcal{S}_k$.
\end{proof}
A simple but interesting property of string matching to graphs is that reversing the input (both the graph and the string) yields an equivalent problem: $S$ has a match in $G, \ell$ if and only if $S^r$ has a match in $G^r, \ell$. This fact, plus noting that $G\in \mathcal{S}_k$ if and only if $G^r \in \mathcal{T}_k$, gives the following corollary of \Cref{thm:dag-algorithm-dag-1}.
\begin{corollary}\label{cor:dag-algorithm-dag-1}
Let $G = (V, E) \in \mathcal{T}_k$, $\ell: V \rightarrow \Sigma$ a labeling function and $S \in \Sigma^m$ a string. We can decide whether $S$ has a match in $G,\ell$ in time $O(|V|k+|E| + \sigma m)$.
\end{corollary}
With these two results and the fact that we can compute the minimum $k$ such that a DAG is in $\mathcal{S}_k, \mathcal{T}_k$ in time $O((|V|+|E|)\log{k})$ (see \Cref{cor:exponential-min-k}) we obtain our first algorithm parameterized by the topology of the DAG.
\paramAlgTopoOne*
Our final result is a parameterized algorithm for DAGs in $\mathcal{ST}_k$ (in particular, for $k$-funnels). We note that the algorithm of \Cref{cor:dag-algorithm-dag-1} computes $PI_v$ for $S^r$ for every vertex in $G^r$. Recall that $PI_v$ represents all the prefix matches of $S^r$ with paths ending in $v$ in $G^r$; in other words, it represents all suffix matches of $S$ with paths starting in $v$ in $G$. For clarity, let us call this set $SI_v$. The main idea of the algorithm for $\mathcal{ST}_k$ is to use \Cref{thm:stk-partitioning} to find a partition $V = V_1 \dot\cup V_2$ into an $\mathcal{S}_k$ part and a $\mathcal{T}_k$ part, use \Cref{thm:dag-algorithm-dag-1,cor:dag-algorithm-dag-1} to search for matches within each part and also to compute $PI_v$ for every $v \in V_1$ and $SI_v$ for every $v \in V_2$, and finally, to find matches using the edges from $V_1$ to $V_2$. The last ingredient of our algorithm consists of preprocessing the answers to this last type of matches.
\begin{lemma}\label{lemma:prefix-suffix}
Let $G = (V, E)$ be a DAG, $(u, v) \in E$, $\ell: V \rightarrow \Sigma$ a labeling function, $S \in \Sigma^m$ a string, and $PI_u$ and $SI_v$ as in \Cref{def:bv-piv}. We can decide if there is a match of $S$ in $G,\ell$ using $(u,v)$ in $O(|PI_u|\cdot|SI_v|)$ time, after $O(m^2)$ preprocessing time.
\end{lemma}
\begin{proof}
We precompute a boolean table $PS$ of $m\times m$ entries, such that $PS[i,j]$ is \texttt{true} if there is a length $i'$ of a (non-empty) border of $S[1..i]$ (or $i' = i$) and a length $j'$ of a (non-empty) border of $S[m-j+1..m]$ (or $j' = j$) such that $i'+j'=m$, and \texttt{false} otherwise. This table can be computed by dynamic programming in $O(m^2)$ time as follows.
\begin{align*}
PS[i,j] &= \begin{cases} \texttt{false} & \text{if }i+j < m \lor i = 0 \lor j = 0\\ i+j = m \lor PS[i, f_{S^r}(j)] \lor PS[f_S(i), j] & \text{otherwise}\end{cases}
\end{align*}
We then use this table and test $PS[i,j]$ for every $i \in PI_u, j\in SI_v$, reporting a match if any of these table entries is \texttt{true}, in total $O(|PI_u|\cdot|SI_v|)$ time.
Since every match of $S$ using $(u, v)$ must match a prefix $S[1..i]$ with a path ending in $u$ and a suffix $S[i+1..m]$ with a path starting in $v$, the previous procedure finds it (if any).
\end{proof}
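A Python sketch of the $PS$ table and the per-edge query, using the standard KMP failure function for $f_S$ and $f_{S^r}$ (a 0-based implementation of the 1-based recurrence above; names are ours):

```python
def failure(S):
    """KMP failure function: f[i] = length of the longest proper border
    of S[:i] (with f[0] = f[1] = 0)."""
    m = len(S)
    f = [0] * (m + 1)
    for i in range(2, m + 1):
        j = f[i - 1]
        while j and S[i - 1] != S[j]:
            j = f[j]
        f[i] = j + 1 if S[i - 1] == S[j] else 0
    return f

def ps_table(S):
    """PS[i][j] is True iff some border length i' of S[:i] (or i' = i)
    and some border length j' of S[-j:] (or j' = j) satisfy i' + j' = m."""
    m = len(S)
    f, fr = failure(S), failure(S[::-1])
    PS = [[False] * (m + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if i + j >= m:               # entries with i + j < m stay False
                PS[i][j] = (i + j == m) or PS[i][fr[j]] or PS[f[i]][j]
    return PS

def match_through_edge(PS, PI_u, SI_v):
    """Is there a full match of S using edge (u, v)? O(|PI_u| * |SI_v|)."""
    return any(PS[i][j] for i in PI_u for j in SI_v)
```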
\paramAlgTopoTwo*
\begin{proof}
We first compute the minimum $k$ such that the input DAG is in $\mathcal{ST}_k$ in time $O((|V|+|E|)\log{k})$ (see \Cref{cor:exponential-min-k}). Then, we obtain the partition of $G$ into $G[V_1] \in \mathcal{S}_k$ and $G[V_2] \in \mathcal{T}_k$ with no edges from $V_2$ to $V_1$. We then search for matches within $G[V_1]$ and $G[V_2]$ in time $O(|V|k+|E| + \sigma m)$ (\Cref{thm:dag-algorithm-dag-1,cor:dag-algorithm-dag-1}), and we also keep $PI_u$ for every $u\in V_1$ and $SI_v$ for every $v\in V_2$. Finally, we process the matches using the edges $(u, v)$ with $u\in V_1, v\in V_2$ in total $O(|E|k^2 + m^2)$ time (\Cref{lemma:prefix-suffix}), since $O(|PI_u|\cdot|SI_v|) = O(k^2)$.
\end{proof}
\section{Conclusions}
In this paper we introduced the first parameterized algorithms for matching a string to a labeled DAG, a problem known to be quadratic (under SETH) even for a very restricted class of DAGs. Our parameters depend on the structure of the input DAG.
We derived our results from a generalization of KMP to DAGs using prefix-incomparable matches, which allowed us to bound the running time to parameterized linear. Further improvements on the running time of our algorithms remain open: is it possible to get rid of the automaton, or to combine prefix-incomparable and suffix-incomparable matches in better than quadratic time (either in the size of the sets or of the string), e.g., with a different tradeoff between the query and construction times of the data structure answering these queries? Is there a (conditional) lower bound for combining these incomparable sets (see e.g.~\cite{bernardini2019even})? Another interesting question of practical importance is whether our parameterized approach can be extended to string-labeled graphs in (unparameterized) linear time in the total length of the strings, or extended to counting and reporting algorithms running in linear time in the number of occurrences.
We also presented novel algorithmic results on funnels as well as on generalizations of them. These include linear time recognition algorithms for their different characterizations, which we showed to be useful for the string matching problem and hope can also help in other graph problems. We also showed how to find the minimum $k$ for which a DAG is a $k$-funnel or belongs to $\mathcal{ST}_k$ (assuming constant time arithmetic operations on numbers up to $O(k)$) using exponential search, but it remains open whether there exists a linear time solution.
\appendix
\section{A parameterized algorithm: The String}\label{sec:the-pattern}
A simple property about prefix-incomparable sets is that their sizes are bounded by the number of prefixes that are not a border of other prefixes of the string, equivalently, the number of leaves in the failure function of the string.
\begin{lemma}\label{lemma:bounded-size-prefix-incomparable}
Let $S \in \Sigma^m$ be a string, $f_S$ its failure function/tree, and $B \subseteq \{0,\ldots,m\}$ prefix-incomparable for $S$. Then $|B| \le w$, where $w$ is the number of leaves of $f_S$; equivalently, $w := |\{i \in \{0,\ldots, m\} \mid \not \exists j, f_S(j) = i\}|$.
\end{lemma}
\begin{proof}
First note that $i < j$ are prefix-incomparable if and only if $i$ is not an ancestor of $j$ in $f_S$. Suppose by contradiction that $|B| > w$, and consider the $w$ leaf-to-root paths of $f_S$. Note that these $w$ leaf-to-root paths cover all the vertices of $f_S$. By the pigeonhole principle, there must be $i < j \in B$ on the same leaf-to-root path, that is, $i$ is an ancestor of $j$, a contradiction.
\end{proof}
\begin{restatable}{theorem}{dagAlgorithmPattern}\label{thm:dag-algorithm-pattern}
Let $G = (V, E)$ be a DAG, $\Sigma$ a finite ($\sigma = |\Sigma|$) alphabet, $\ell: V \rightarrow \Sigma$ a labeling function, $S \in \Sigma^m$ a string and $f_S$ its failure function. We can decide whether $S$ has a match in $G,\ell$ in time $O((|V|+|E|)w + \sigma m)$, where $w = |\{i \in \{0,\ldots, m\} \mid \not \exists j, f_S(j) = i\}|$. \end{restatable}
\begin{proof}
We compute the matching automaton $A_S$ in $O(\sigma m)$ time. Then, we process the vertices in topological order, and for each vertex $v$ we compute $PI_v$, the unique prefix-incomparable set representing $B_v$ (all prefix matches of $S$ with paths ending in $v$). We proceed according to \Cref{lemma:parameterized-vertex,cor:linear-parameterized-vertex} in $O(m)$ preprocessing time plus $O(w\cdot d^{-}_{v})$ time per vertex, adding up to $O(w(|V|+|E|))$ time in total. There is a match of $S$ in $G, \ell$ if and only if any $PI_v$ contains $m$.
\end{proof}
We note that $w \le m$; thus our algorithm is asymptotically at least as fast as the DAG algorithm, which runs in time $\Omega((|V|+|E|)m)$. However, we note that for $w$ to be $o(m)$, $S$ must be a periodic string. To see this, consider the longest prefix $S[1..i]$ of $S$ such that there exists $j > i$ with $i = f_{S}(j)$. By definition, $S[1..i]$ is a border of $S[1..j]$; thus $S[k] = S[k+j-i]$ for $k \in \{1, \ldots, i\}$, that is, $S[1..j]$ is a periodic string with period $j-i$. Finally, note that if $w = o(m)$, then $m-i \in o(m)$, and thus the period $j-i \in o(m)$.
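The parameter $w$ can be read off the failure function directly; a short Python sketch (ours, for illustration):

```python
def failure_leaves(S):
    """w = number of i in {0,...,m} that are not values f_S(j) for any
    j in {1,...,m}, i.e. the leaves of the failure tree of S."""
    m = len(S)
    f = [0] * (m + 1)
    for i in range(2, m + 1):            # standard KMP failure function
        j = f[i - 1]
        while j and S[i - 1] != S[j]:
            j = f[j]
        f[i] = j + 1 if S[i - 1] == S[j] else 0
    image = {f[j] for j in range(1, m + 1)}
    return sum(1 for i in range(m + 1) if i not in image)
```

For a highly periodic string such as "aaaa" the parameter collapses to $1$, while for an aperiodic string such as "abcd" it equals $m$, matching the discussion above.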
\section{A linear time parameterized algorithm for the distance problems}\label{sec:linear-distance}
Millani et al.~\cite{millani2020efficient} gave an $O(|V|(|V|+|E|))$ time algorithm to find a minimal forbidden path in a general graph. They used this algorithm to design branching algorithms (see e.g.~\cite{cygan2015parameterized}) for the problems of finding minimum sized sets $V' \subseteq V, d_{v} := |V'|$ and $E' \subseteq E, d_{e} := |E'|$, such that $G[V\setminus V']$ and $(V, E\setminus E')$ are funnels, known as the vertex and edge deletion distance to a funnel. It is known that (unless P = NP) there is an $\epsilon > 0$ such that there is no polynomial time $|V|^{\epsilon}$ approximation~\cite{lund1993approximation} for the vertex version nor a $(1+\epsilon)$ approximation~\cite{millani2020efficient} for the edge version. The authors~\cite{millani2020efficient} noted that if we consider a minimal forbidden path $P$ of $G$ of length $|P| > 1$, then the edges of $P$ can be contracted until $|P| = 1$ without affecting the size of the solution. Moreover, they noted that if we consider such a $P$, two in-neighbors of its first vertex and two out-neighbors of its last\footnote{This structure is known as a \emph{butterfly}.}, then $V'$ must contain at least one of those $6$ vertices and $E'$ one of those $5$ edges, deriving $O(6^{d_{v}}|V|(|V|+|E|))$ and $O(5^{d_{e}}|V|(|V|+|E|))$ time branching algorithms for each problem\footnote{After removing forbidden paths, all cycles are vertex-disjoint; thus the rest of the problem can be solved by removing one vertex (edge) per cycle in one $O(|V|+|E|)$ time traversal.}~\cite[Corollary 1]{millani2020efficient}. The authors also developed a more involved branching algorithm, only for the edge distance problem on DAG inputs, running in time $O(3^{d_{e}}(|V|+|E|))$~\cite[Theorem 4]{millani2020efficient}.
By noting that minimal forbidden paths can be further contracted to length zero (one vertex) in the vertex distance problem, and that a minimal forbidden path can be found in time $O(|V|+|E|)$ (\Cref{lemma:unitig}), we obtain the following result.
\begin{restatable}{theorem}{linearDistance}\label{thm:linear-distance}
Let $G = (V, E)$ be a graph. We can compute the vertex (edge) deletion distance to a funnel in time $O(5^d(|V|+|E|))$, where $d$ is the deletion distance.
\end{restatable}
\begin{proof}
We follow the branching approach of~\cite[Corollary 1]{millani2020efficient}, but in the case of vertex distance we further contract each forbidden path to length $0$. The correctness of this step follows by noting that any solution containing two different vertices of a forbidden path is not minimum, since we still get a funnel by removing one of them (from the solution). As such, the number of recursive calls is at most $5$ for both problems. Moreover, by \Cref{lemma:unitig}, we can find a minimal forbidden path in time $O(|V|+|E|)$.
\end{proof}
\end{document} |
\begin{document}
\maketitle
\setcounter{section}{-1}
\begin{abstract} Let $X$ be a regular geometrically integral variety over an imperfect field $K$. Unlike the case of characteristic $0$, $X':=X\times_{\mathrm{Spec}\,K}\mathrm{Spec}\,K'$ may have singular points for a (necessarily inseparable) field extension $K'/K$. In this paper, we define new invariants of the local rings of codimension $1$ points of $X'$, and use these invariants to calculate the $\delta$-invariants (which relate to genus changes) and the conductors of such points. As a corollary, we give refinements of Tate's genus change theorem and of \cite[Theorem 1.2]{PW}. Moreover, when $X$ is a curve, we show that the Jacobian number of $X$ is $2p/(p-1)$ times the genus change, by using the above calculation. In this case, we also relate the structure of the Picard scheme of $X$ to invariants of the singular points of $X$. To prove such a relation, we give a characterization of the geometrical normality of algebras over fields of positive characteristic. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction} Let $K$ be a field, $p\,(\geq0)$ the characteristic of $K$, and $C$ a regular proper geometrically integral curve over $K$. If $K$ is perfect (in particular, if $p=0$), $C$ is smooth over $K$. On the other hand, if $K$ is imperfect, $C':=C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, K'$ may have non-regular points for some purely inseparable field extension $K'/K$. The $\delta$-invariant (cf.\,Notation-Definition \ref{deltaconddef}.3), which relates to genus changes, and the conductor (cf.\,Notation-Definition \ref{deltaconddef}) of such a singular point $c'$, both of which measure how strong the singularity is, have been well-studied. In \cite{Ta}, Tate proved $$g_{C}-g_{\widetilde{C'}}\equiv 0\mod \frac{p-1}{2}$$ for any purely inseparable field extension $K'/K$, where $\widetilde{C'}$ is the normalization of $C'$ in its function field, and $g_{-}$ denotes the arithmetic genus of the curve. See \cite{Sc} for a modern and simple proof of this theorem. As a variant of Tate's genus change theorem, it is shown that the $\delta$-invariant is greater than or equal to $(p-1)/2$ in \cite[Theorem 5.7 and Remark 5.9]{IIL} in the case where $K'$ is an algebraic closure of $K$. Note that $g_{C}-g_{\widetilde{C'}}$ coincides with the sum, over all the singular points of $C'$, of the products of their $\delta$-invariants and residue extension degrees. Patakfalvi and Waldron proved that the conductor divisors of the singular points are divisible by $(p-1)$ (\cite[Theorem 1.2]{PW}). (Note that, in \cite{PW}, more general varieties are treated. See also \cite{Tanaka2019}.) In this paper, we study these two invariants via two distinct approaches.
First, we consider the Picard schemes of algebraic curves. In \cite{A}, Achet studied the structure of the Picard scheme of a form of $\mathbb{A}^{1}_{K}$, which reflects the properties of the singularity of the complement of $\mathbb{A}^{1}_{K}$ in its regular compactification. In this paper, we treat general regular singular points of curves. Suppose that $p>0$ and fix an algebraic closure $\overline{K}$ of $K$. For any $i\in\mathbb{N}$, write $C(i)$ for the normalization of $C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, K^{1/p^{i}}$ in its function field, $C_{\overline{K}}$ (resp.\,$C(i)_{\overline{K}}$) for the algebraic curve $C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\,\overline{K}$ (resp.\,$C(i)\times_{\mathrm{Spec}\, K^{1/p^{i}}}\mathrm{Spec}\,\overline{K}$), $g_{C}$ (resp.\,$g_{C(i)}$) for the arithmetic genus of $C$ (resp.\,$C(i)$), and $\widetilde{C}$ for the normalization of $C_{\overline{K}}$ in its function field.
\begin{thm}[See Theorem \ref{overclosed} for the precise statement] The algebraic group $$\mathrm{Ker}\,(\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}\to \mathrm{Pic}_{C(i)_{\overline{K}}/\overline{K}})$$ coincides with the maximal reduced closed subscheme of the algebraic group $$\mathrm{Ker}\,(\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}\to \mathrm{Pic}_{\widetilde{C}/\overline{K}})[p^{i}].$$ Here, for a commutative algebraic group $P$, $P[p^{i}]$ denotes the subalgebraic group $\mathrm{Ker}\,(p^{i}\times\colon P\to P)$ of $P$. Moreover, (both of) these algebraic groups also coincide with the largest subgroup of $\mathrm{Ker}\,(\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}\to \mathrm{Pic}_{\widetilde{C}/\overline{K}})$ that is $p^{i}$-torsion, connected, smooth, and unipotent. \label{intromain1} \end{thm}
Since the dimension of the Picard scheme of a curve coincides with its genus, this theorem says that the genus change $g_{C}-g_{C(i)}$ can be written simply in terms of algebraic groups, i.e., as the dimension of the $p^{i}$-torsion part of the affine part of the Picard scheme of $C_{\overline{K}}$. To prove Theorem \ref{intromain1}, we study the Weil restriction of $\mathbb{G}_{m,K}$ in Section \ref{affinesection} and the Picard schemes of proper curves with cusps in Section \ref{Picpresection}. Moreover, in Section \ref{geomnormalsection}, we prove a geometrical normality criterion for rings over imperfect fields which asserts the following: for any algebra $R$ over $K$, $R$ is geometrically normal over $K$ if and only if $R\otimes_{K}K^{1/p}$ is normal (see Corollary \ref{geomnor}). We also study conditions under which certain algebraic subgroups of the affine part of $\mathrm{Pic}_{C'/K'}$ are split over $K'$ in the case of $[K':K]=p$ (see Theorem \ref{x01cor}).
As a corollary of Theorem \ref{intromain1}, we obtain the inequality $$g_{C(i+1)}-g_{C(i+2)}\leq g_{C(i)}-g_{C(i+1)}\quad (i\in\mathbb{N}),$$ by an argument similar to the following one:\\ For any $p$-power torsion abelian group $A$, the natural homomorphism $$p\times\colon A[p^{i+2}]/A[p^{i+1}]\to A[p^{i+1}]/A[p^{i}]$$ is injective, where $A[p^{j}]$ is the subgroup of $p^{j}$-torsion elements of $A$ (cf.\,Corollary \ref{main1cor} and Remark \ref{gpgenuschange}). In fact, the stronger inequality $$p(g_{C(i+1)}-g_{C(i+2)})\leq g_{C(i)}-g_{C(i+1)}$$ holds. As in Corollary \ref{geqp}, we can show this inequality for a geometrically integral discrete valuation ring over $K$ under some mild conditions, using the elementary theory of discrete valuation rings. Moreover, this inequality can be improved by using an invariant $q(x)$, which we introduce later (see Theorem \ref{mainineq} and Proposition \ref{Bgeqp}).
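For the reader's convenience, we record the one-line verification of the injectivity just mentioned. The map $p\times$ is well-defined since $p\cdot A[p^{j+1}]\subset A[p^{j}]$ for every $j$. If $a\in A[p^{i+2}]$ satisfies $pa\in A[p^{i}]$, then $$p^{i+1}a=p^{i}(pa)=0,$$ so $a\in A[p^{i+1}]$, i.e., the class of $a$ in $A[p^{i+2}]/A[p^{i+1}]$ is trivial.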
As we have just seen, the genus changes decrease step-by-step. On the other hand, the genus change $g_{C(i)}-g_{C(i+1)}$ can be written as the sum of the local genus changes, i.e., the products of the $\delta$-invariants and the residue field extension degrees, over all the singular points of the curve $C(i)\times_{\mathrm{Spec}\, K^{1/p^{i}}}\mathrm{Spec}\, K^{1/p^{i+1}}$. In the second part of this paper, we study the step-by-step behavior of local genus changes in more detail. We define invariants $q(x)$ for the local rings of singular points of such curves, which enable us to calculate the $\delta$-invariants and the conductors of the local rings. In fact, the invariants $q(x)$ will be defined for very general discrete valuation rings $R$ over $K$ (see the settings before Notation-Definition \ref{deltaconddef}). In the Introduction, we limit ourselves to the case where $R$ is essentially of finite type over $K$.
\begin{thm}[See Theorem \ref{q} for the precise statement] Let $R$ be a discrete valuation ring geometrically integral and essentially of finite type over $K$, $v$ the normalized valuation of $R$, $x$ an element of $K$ satisfying $x^{1/p}\notin K$, $R(1_{x})$ the normalization of $R\otimes_{K}K(x^{1/p})$ in its field of fractions, $\mathfrak{m}(1_{x})$ the maximal ideal of $R(1_{x})$, $e$ the ramification index of $R(1_{x})$ over $R$, and $q(x)$ the natural number $$\sup_{r\in R}v(r^{p}-x).$$ Suppose that the residue field of $R$ contains $x^{1/p}$. Write $\mathcal{C}$ for the conductor of $R\otimes_{K}K(x^{1/p})$ and $\delta$ for the $\delta$-invariant of $R\otimes_{K}K(x^{1/p})$, i.e., the length of $R(1_{x})/R\otimes_{K}K(x^{1/p})$ as an $R(1_{x})$-module. Then the inequalities $1\leq q(x)<\infty$ hold, and $e=p$ if and only if $q(x)$ is not divisible by $p$. Moreover, the following hold: \begin{description} \item[The case of $e=p$] We have \begin{equation*} \mathcal{C}=\mathfrak{m}(1_{x})^{(p-1)(q(x)-1)},\quad \delta=\frac{(p-1)(q(x)-1)}{2}. \end{equation*} \item[The case of $e=1$] We have \begin{equation*} \mathcal{C}=\mathfrak{m}(1_{x})^{\frac{(p-1)q(x)}{p}},\quad \delta=\frac{(p-1)q(x)}{2}. \end{equation*} \end{description} \label{intromain2} \end{thm}
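To illustrate Theorem \ref{intromain2}, consider the following simple example (supplied here for illustration). Let $K=\mathbb{F}_{p}(t)$, let $R=K[u]_{(u^{p}-t)}$ be the localization of $K[u]$ at the prime ideal $(u^{p}-t)$, and let $x=t$. Then $R$ is a discrete valuation ring with uniformizer $\pi=u^{p}-t$, and its residue field $K[u]/(u^{p}-t)\simeq K(t^{1/p})$ contains $t^{1/p}$. If $r\in R$ reduces to an element of the residue field other than $t^{1/p}$, then $v(r^{p}-t)=0$; if $r=u+h$ with $h\in\mathfrak{m}$, then $$r^{p}-t=(u^{p}-t)+h^{p}=\pi+h^{p},\quad v(h^{p})\geq p,$$ so $v(r^{p}-t)=1$. Hence $q(t)=1$, which is not divisible by $p$, so $e=p$, and the formulas of the theorem give $\mathcal{C}=R(1_{t})$ and $\delta=0$. This is consistent with the fact that $R\otimes_{K}K(t^{1/p})\simeq K(t^{1/p})[u]_{(u-t^{1/p})}$ is already a discrete valuation ring, with uniformizer $u-t^{1/p}$ satisfying $(u-t^{1/p})^{p}=\pi$.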
This theorem gives another proof of Tate's genus change formula. It also gives a generalization of the main theorem of \cite{PW}. To prove Theorem \ref{intromain2}, we study the structure of the completion of $R(1_{x})$ by using combinatorial techniques. For example, in the case of $e=p$, we use a classical combinatorial problem, the so-called Frobenius coin problem. By applying Theorem \ref{intromain2}, we can describe the step-by-step behavior of genus changes of curves more precisely (see Theorem \ref{mainineq} and Proposition \ref{Bgeqp}). We note that Theorem \ref{intromain2} yields the relation $$\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/\mathcal{C})=2\delta$$ between the conductor $\mathcal{C}$ and the $\delta$-invariant (cf.\,Remark \ref{conductorvsgenuschange}). As in Remark \ref{conductorvsgenuschange}, this equality also follows from the theory of dualizing complexes.
Theorem \ref{intromain2} also has an application to the study of the Jacobian number of a curve, which is another invariant of the singularities of the curve. Jacobian numbers have been studied by many researchers, and they play a significant role in the study of singular curves (see \cite{Buchweitz1980}, \cite{Esteves2003}, \cite{Greuel2007}, \cite{IIL}, and \cite{Tjurina1969}). Partitioning Jacobian numbers by considering partial field extensions of $K^{1/p}/K$, we can link Jacobian numbers to the invariants $q(x)$ introduced in Theorem \ref{intromain2}. Then we have the following nontrivial relation between Jacobian numbers and genus changes:
\begin{thm}[See Theorem \ref{genjacob} for the precise statement] Suppose that $[K:K^{p}] < \infty.$ Let $R$ be a discrete valuation ring geometrically integral and essentially of finite type over $K$. Then we have \[ \frac{g_{10}}{(p-1)/2} = \frac{\mathop{\mathrm{jac}} (R)}{p}. \] Here, $\mathop{\mathrm{jac}} (R)$ is the Jacobian number of $R$, and $g_{10}$ is the local genus change $\dim_{K^{1/p}}(R(1)/R \otimes_{K} K^{1/p})$, where $R(1)$ is the normalization of the base change $R \otimes_{K}K^{1/p}$ in its field of fractions. \end{thm} We note that this theorem clarifies the relation between the smoothness criterion in terms of genus changes (which follows from Tate's genus change theorem; see \cite[Remark 1.9]{IIL}) and that in terms of Jacobian numbers (see \cite[Proposition 4.4]{IIL}).
The content of each section of this paper is as follows: In Section \ref{affinesection}, we review elementary properties of unipotent algebraic groups over imperfect fields. In Section \ref{Picpresection}, we study the structure of the Picard schemes of regular geometrically integral curves over imperfect fields. In Section \ref{geomnormalsection}, we give some elementary lemmas on integral domains over fields of positive characteristic. In Section \ref{main1section}, we explain the structure of the $p$-power torsion subgroups of the Picard schemes of regular geometrically integral curves and give the proof of Theorem \ref{intromain1}. In Section \ref{simplecase}, we give calculations of the conductors and of certain local variants of ``genus changes'' of curves, the so-called $\delta$-invariants. In Section \ref{eg}, we give examples of discrete valuation rings to show that every behavior of ramification indices appearing in Theorem \ref{mainineq} actually occurs. In Section \ref{jac}, we define Jacobian numbers and relate them to differential modules. Moreover, we study the behavior of differential modules via step-by-step field extensions to show Theorem \ref{genjacob}. \\
\noindent\textbf{Acknowledgements.} The authors are deeply grateful to Naoki Imai, the supervisor of the second author, for deep encouragement and helpful advice. They would like to thank Teruhisa Koshikawa for informing them of the arguments used in Lemma \ref{splitvector}. They are also grateful to Hiromu Tanaka, Tetsushi Ito, Kazuhiro Ito, and Makoto Enokizono for their helpful comments and suggestions. Moreover, they would like to thank Yuya Matsumoto for helpful comments (especially on the proof of Lemma \ref{coin}).
The first author is supported by JSPS KAKENHI Grant Number 21J00489. He is also supported by the Iwanami-Fujukai Foundation. He is also supported by Foundation of Research Fellows, The Mathematical Society of Japan. The second author is supported by JSPS KAKENHI Grant Number 19J22795.
\subsection*{Notation} In this paper, all rings are commutative. Let $p$ be a prime number. For an algebra $A$ over $\mathbb{F}_{p}$, we write $\mathrm{Frob}_{A}$ for the Frobenius map $A\to A: a\mapsto a^{p}$. For a ring $R$, we write $\mathrm{Frac}(R)$ for the total ring of fractions of $R$. For a field $K$ and a scheme $X$ over $K$, we shall say that $X$ is a curve (over $K$) if $X$ is an integral scheme of dimension $1$. We shall write $\mathrm{Pic}_{X/K}$ (resp.\,$\mathrm{Pic}_{X/K}^{0}$) for the relative Picard scheme of $X$ over $K$ (resp.\,the identity component of $\mathrm{Pic}_{X/K}$).
\section{Notes on algebraic groups in positive characteristic} In this section, we review elementary properties of unipotent algebraic groups over imperfect fields.
Let $M$ be a field and $H$ an algebraic group over $M$. Recall that $H$ is called a vector group if $H$ is isomorphic to the product of finitely many copies of $\mathbb{G}_{a,M}$ over $M$. We shall say that a smooth connected solvable algebraic group $H$ over $M$ is $M$-split if $H$ admits a composition series \begin{equation} \begin{split} H=H_{0}\supset H_{1}\supset\ldots\supset H_{n}=1 \label{splitdef} \end{split} \end{equation} consisting of smooth closed algebraic subgroups of $H$ such that $H_{i+1}$ is normal in $H_{i}$ and the quotient $H_{i}/H_{i+1}$ is $M$-isomorphic to $\mathbb{G}_{a}$ or $\mathbb{G}_{m}$ for all $0\leq i<n$ (cf.\,{\cite[Appendix B]{CGP}} and Remark \ref{whyassumption}). (Note that $H_{i}$ is not necessarily normal in $H$. On the other hand, in the definition of $M$-splitness in {\cite[Examples 12.3.5. (3)]{Sp}}, $H_{i}$ is supposed to be normal in $H$. However, we can use the results of {\cite{Sp}} because we mainly treat commutative algebraic groups.) If $M$ is perfect, every connected (commutative) smooth unipotent algebraic group is $M$-split by {\cite[Corollary 14.3.10]{Sp}}.
\begin{rem} In this paper, we follow the definition of $M$-splitness of {\cite[Appendix B]{CGP}}. Since an extension of connected (resp.\,smooth) algebraic groups is also connected (resp.\,smooth), an algebraic group admitting a sequence $(\ref{splitdef})$ over $M$ is connected (resp.\,smooth). Hence, we do not need to assume that $H$ is smooth, connected, and even solvable. \label{whyassumption} \end{rem}
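To illustrate the notion, we recall a standard example (included here for the reader's convenience) of a group that fails to be $M$-split. Let $M=\mathbb{F}_{p}(t)$ and let $G\subset\mathbb{G}_{a,M}^{2}$ be the subgroup defined by $$y^{p}=x-tx^{p}.$$ Then $G$ is a smooth connected commutative $p$-torsion unipotent group of dimension $1$, and it becomes isomorphic to $\mathbb{G}_{a}$ over $M(t^{1/p})$ (substitute $z=y+t^{1/p}x$, so that $z^{p}=x$), but it is a classical example of a nontrivial form of $\mathbb{G}_{a}$, i.e., it is not isomorphic to $\mathbb{G}_{a,M}$ over $M$; in particular, $G$ is not $M$-split (cf.\,{\cite[Appendix B]{CGP}}).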
\begin{lem} Suppose that $H$ is connected, smooth, commutative, and $p$-torsion. Then $H$ is an $M$-split algebraic group if and only if $H$ is a vector group. \label{splitvector} \end{lem}
\begin{proof} Any vector group is an $M$-split algebraic group. Suppose that $H$ is an $M$-split algebraic group. By {\cite[Appendix B, Lemma 2.5]{CGP}} and the assumption that $H$ is $p$-torsion, there exists a canonical decomposition $H\simeq V\times W$, where $V$ is a vector subgroup of $H$ and $W$ is a wound subgroup of $H$, i.e., every homomorphism from $\mathbb{G}_{a,M}$ to $W$ is trivial. Take a composition series $H=H_{0}\supset H_{1}\supset \ldots\supset H_{n}=1$ of $H$ whose successive quotients are isomorphic to $\mathbb{G}_{a,M}$. By descending induction on $i$, it follows that $H_{i}$ is contained in $V$ for each $0\leq i \leq n$. In particular, we have $H=V$. \end{proof}
\begin{lem} Let $i$ be a natural number or $\infty$. Suppose that $H$ is connected and commutative. Then the largest subgroup $H_{s}(i)$ of $H$ that is $p^{i}$-torsion, $M$-split, and unipotent exists. \label{maximalvec} \end{lem}
\begin{proof} Let $V_{1}$ and $V_{2}$ be $M$-split $p^{i}$-torsion unipotent algebraic subgroups of $H$. Then the image of the natural homomorphism $V_{1}\times V_{2}\to H: (v_{1},v_{2})\mapsto v_{1}+v_{2}$ is connected, smooth, and $p^{i}$-torsion. Since $V_{1}\times V_{2}$ is $M$-split, this image is also $M$-split by {\cite[Exercises 14.3.12. (2)]{Sp}}. Hence, taking such a subgroup of maximal dimension, we obtain the desired algebraic subgroup $H_{s}(i)$. \end{proof}
Next, we discuss the Weil restriction of $\mathbb{G}_{m}$. Let $K$ be a field, $L/K$ a finite purely inseparable field extension of $K$, and $\overline{L}$ an algebraic closure of $L$. For a scheme $X$ over $L$, write $(X)_{L/K}$ for the Weil restriction of $X$ to $K$ (if it exists). Recall that, for any scheme $T$ over $K$, we have a natural bijection $$\mathrm{Mor}_{\mathrm{Spec}\, L}(T\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L, X)\simeq \mathrm{Mor}_{\mathrm{Spec}\, K}(T, (X)_{L/K})$$ which is functorial in $T$ and $X$. Write $(\mathbb{G}_{m})_{L/K}$ for the Weil restriction of $\mathbb{G}_{m,L}$ to $K$ (see {\cite[Section 7.6]{BLR}}). Then we have a natural homomorphism $$\eta_{L/K}\colon\mathbb{G}_{m,K}\to (\mathbb{G}_{m})_{L/K}$$ over $K$ which is called the unit of $\mathbb{G}_{m,K}$ and a natural homomorphism $$\varepsilon_{L/K}\colon(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L\to\mathbb{G}_{m,L}$$ over $L$ which is called the counit of $\mathbb{G}_{m,L}$. Note that the composite homomorphism $\varepsilon_{L/K}\circ(\eta_{L/K},\mathrm{id}_{\mathrm{Spec}\, L})$ coincides with $\mathrm{id}_{\mathbb{G}_{m,L}}$. Let $L'$ be a field satisfying $K\subset L'\subset L$. We have a canonical isomorphism $((\mathbb{G}_{m})_{L/L'})_{L'/K}\simeq (\mathbb{G}_{m})_{L/K}$. Under this identification, it holds that $$\eta_{L/K}=(\eta_{L/L'})_{L'/K}\circ\eta_{L'/K}.$$
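For concreteness, we record an elementary description in the simplest case (this computation is included for illustration). Suppose $L=K(x^{1/p})$ with $x\in K\setminus K^{p}$, so that $[L:K]=p$. For any $K$-algebra $A$, we have $(\mathbb{G}_{m})_{L/K}(A)=(A\otimes_{K}L)^{\ast}$, and $\eta_{L/K}$ is induced by the inclusions $A^{\ast}\hookrightarrow(A\otimes_{K}L)^{\ast}$; in particular, $(\mathbb{G}_{m})_{L/K}$ has dimension $p$. Moreover, for any $u=\sum_{i=0}^{p-1}a_{i}\otimes x^{i/p}\in(A\otimes_{K}L)^{\ast}$ we have $$u^{p}=\Bigl(\sum_{i=0}^{p-1}a_{i}^{p}x^{i}\Bigr)\otimes1,$$ which lies in the image of $\eta_{L/K}$ (the element $\sum_{i}a_{i}^{p}x^{i}$ is a unit of $A$ by faithfully flat descent along $A\to A\otimes_{K}L$), so $\mathrm{Coker}\,\eta_{L/K}$ is a $p$-torsion algebraic group of dimension $p-1$ (cf.\,Lemma \ref{forsplit}).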
\begin{lem} The following hold: \begin{enumerate} \item $(\mathbb{G}_{m})_{L/K}$ is connected and smooth. \item The sequence $$0\to\mathbb{G}_{m,K}\overset{\eta_{L/K}}{\to}(\mathbb{G}_{m})_{L/K}\to\mathrm{Coker}\,\eta_{L/K}\to0$$ gives the extension explained in {\cite[Expos\'e XVII, Th\'eor\`eme 7.2.1]{SGA3}} (or {\cite[Theorem 9.2.2]{BLR}}). In particular, $\mathrm{Coker}\,\eta_{L/K}$ is a connected smooth unipotent algebraic group over $K$. \item $(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L$ is an $L$-split algebraic group. \end{enumerate} \label{forsplit} \end{lem}
\begin{proof} Assertion 1 follows from {\cite[Expos\'e XVII Proposition C.5.1]{SGA3}}. By {\cite[Expos\'e XVII Proposition C.5.1]{SGA3}}, $\varepsilon_{L/K}$ is an epimorphism and $\mathrm{Ker}\, \varepsilon_{L/K}$ is unipotent. Since we have $\varepsilon_{L/K}\circ(\eta_{L/K},\mathrm{id}_{\mathrm{Spec}\, L})=\mathrm{id}_{\mathbb{G}_{m,L}}$, the composite homomorphism $$\mathrm{Ker}\,\varepsilon_{L/K} \hookrightarrow (\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L\to (\mathrm{Coker}\,\eta_{L/K})\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L$$ is an isomorphism. Thus, $(\mathrm{Coker}\,\eta_{L/K})$ is a connected smooth unipotent algebraic group over $K$, and assertion 2 holds.
Next, we show assertion 3. Note that $L\otimes_{K}L$ is an Artin local ring with the $L$-algebra structure given by the base change of $K \rightarrow L$. Write $\mathfrak{m}$ for the unique maximal ideal of $L\otimes_{K}L$ and, for any natural number $m$, $Q_{m}$ for the functor $$(-\otimes_{L}(L\otimes_{K}L/\mathfrak{m}^{m}))^{\ast}\colon\mathrm{Alg}_{L}\to \mathrm{Ab},$$ where $\mathrm{Alg}_{L}$ is the category of $L$-algebras, and $\mathrm{Ab}$ is the category of abelian groups. Then, $Q_{m}$ is a quotient functor of $Q_{m+1}$ and $\mathrm{Ker}\, (Q_{m+1} \rightarrow Q_{m})$ is isomorphic to $(-)^{\ast}$ (resp.\,$-\otimes_{L}(\mathfrak{m}^{m}/\mathfrak{m}^{m+1})$) if $m=0$ (resp.\,$m\geq 1$). Since $\mathfrak{m}$ is nilpotent, $(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L$ represents $Q_{m}$ for sufficiently large $m$, and is therefore a successive extension of $\mathbb{G}_{m,L}$ by vector groups. Hence, assertion 3 holds. \end{proof}
\begin{lem}[cf.\,{\cite[Expos\'e XVII Proposition C.5.1]{SGA3}}] For any field extension $M/K$, the following are equivalent: \begin{enumerate} \item There exists a (necessarily unique, injective) $K$-algebra homomorphism $L\hookrightarrow M$. \item The base change of $(\mathbb{G}_{m})_{L/K}$ to $M$ becomes a product of a subtorus and a unipotent algebraic subgroup. (cf.\,Remark \ref{radical}) \item The base change of $\mathrm{Coker}\, \eta_{L/K}$ (and hence $(\mathbb{G}_{m})_{L/K}$) to $M$ becomes an $M$-split algebraic group. \end{enumerate} \label{Weilstr} \end{lem}
\begin{proof} The implication 3$\Rightarrow$2 follows from \cite[Expos\'e XVII Th\'eor\`eme 6.1.1.]{SGA3}, and the implication 1$\Rightarrow$3 follows from Lemma \ref{forsplit}.3. Therefore, it suffices to show that 1$\Leftrightarrow$2. As explained in the discussion before Lemma \ref{Weilstr}, $\varepsilon_{L/K}\circ(\eta_{L/K},\mathrm{id}_{\mathrm{Spec}\, L})$ coincides with $\mathrm{id}_{\mathbb{G}_{m,L}}$. To show the equivalence, it suffices to show that $L$ is the minimal field over which the unipotent radical of $(\mathbb{G}_{m})_{L/K}$ is defined (cf.\,{\cite[Corollaire (4.8.11)]{EGA42}}). Let $L'$ be a field satisfying $K\subset L'\subsetneq L$. By the implication 1$\Rightarrow$2, it suffices to show that the unipotent radical of $(\mathbb{G}_{m})_{L/K}$ is not defined over $L'$. Since we have $((\mathbb{G}_{m})_{L/L'})_{L'/K}\simeq (\mathbb{G}_{m})_{L/K}$, the natural morphism $$(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L'\to (\mathbb{G}_{m})_{L/L'}$$ is an epimorphism by {\cite[Expos\'e XVII Proposition C.5.1]{SGA3}}. Since the unipotent radical of $(\mathbb{G}_{m})_{L/L'}\times_{\mathrm{Spec}\, L'}\mathrm{Spec}\, \overline{L}$ is not defined over $L'$ by {\cite[Expos\'e XVII Corollaire (a) to Proposition C.5.1]{SGA3}}, $(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L'$ cannot be a product of a subtorus and a unipotent algebraic subgroup. Hence, we obtain the desired equivalence. \end{proof}
\begin{rem} Note that condition 2 in Lemma \ref{Weilstr} is equivalent to the condition that the unipotent radical of $(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, \overline{M}$ is defined over $M$, where $\overline{M}$ is an algebraic closure of $M$. In the definition of the unipotent radical of an algebraic group $G$ given in the discussion after {\cite[Expos\'e XV D\'efinition 6.1.\,ter.]{SGA3}}, the base field of $G$ is assumed to be an algebraically closed field $k$. If one defines the unipotent radical of $G$ to be the maximal connected unipotent normal algebraic subgroup of $G$ (in the case where $k$ is not necessarily algebraically closed), condition 2 in Lemma \ref{Weilstr} is equivalent to the condition that $(\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, M$ is canonically isomorphic to the direct product of its unipotent radical and its maximal subtorus. \label{radical} \end{rem}
\label{affinesection}
\section{The Picard schemes of proper regular geometrically integral curves} In this section, we study the structure of the Picard schemes of regular proper geometrically integral curves over imperfect fields.
Let $K$ (resp.\,$K^{\mathrm{sep}}$; $\overline{K}$) be a field (resp.\,a separable closure of $K$; an algebraic closure of $K$).
\begin{lem} Let $C_{1}$ and $C_{2}$ be proper geometrically integral curves over $K$ and $\alpha\colon C_{1}\to C_{2}$ a universally homeomorphic birational morphism. Then the natural homomorphism $\mathrm{Pic}_{C_{2}/K}\to\mathrm{Pic}_{C_{1}/K}$ is an epimorphism. \label{surj} \end{lem}
\begin{proof} We may assume that $K=\overline{K}$. Since $\mathrm{Pic}_{C_{1}/K}$ is smooth over $K$ by {\cite[Proposition 8.4.2]{BLR}}, it suffices to show that the natural homomorphism $\mathrm{Pic}_{C_{2}/K}(K)\to\mathrm{Pic}_{C_{1}/K}(K)$ is surjective (cf.\,\cite[1.71]{Milne2017}). This homomorphism can be identified with the first homomorphism of the following exact sequence: $$H^{1}(C_{2},\mathcal{O}_{C_{2}}^{\ast})\to H^{1}(C_{1},\mathcal{O}_{C_{1}}^{\ast})\to H^{1}(C_{2},(\alpha_{\ast}\mathcal{O}_{C_{1}}^{\ast})/\mathcal{O}_{C_{2}}^{\ast}).$$ Since the support of the sheaf $(\alpha_{\ast}\mathcal{O}_{C_{1}}^{\ast})/\mathcal{O}_{C_{2}}^{\ast}$ is $0$-dimensional, we have $$H^{1}(C_{2},(\alpha_{\ast}\mathcal{O}_{C_{1}}^{\ast})/\mathcal{O}_{C_{2}}^{\ast})=0.$$ Hence, we finish the proof of Lemma \ref{surj}. \end{proof}
Let $C'$ be a proper geometrically integral curve over $K$ and $C$ the normalization of $C'$ in the function field of $C'$. Suppose that the natural morphism $\pi\colon C\to C'$ is universally homeomorphic.
\begin{lem} Suppose that $K=K^{\mathrm{sep}}$. Let $S\subset C$ be a finite subset of closed points and $n\colon S\to \mathbb{N}$ a map. For any subset $S'\subset S$, write $C_{S',n}$ for the curve whose underlying topological space is the same as $C$ and whose local ring at $c\in C\setminus S'$ (resp.\,$s\in S'$) is $\mathcal{O}_{C,c}$ (resp.\,$K+\mathfrak{m}_{s}^{n(s)}$). Here, $\mathfrak{m}_{s}$ is the maximal ideal of $\mathcal{O}_{C,s}$.
\begin{enumerate} \item $C_{S,n}$ is a proper geometrically integral curve over $K$. \item We have a canonical isomorphism $$\mathrm{Ker}\,(\mathrm{Pic}_{C_{S,n}/K}\to\mathrm{Pic}_{C/K} )\simeq\bigoplus_{s\in S} \mathrm{Ker}\,(\mathrm{Pic}_{C_{\{s\},n}/K}\to\mathrm{Pic}_{C/K}).$$ \end{enumerate} In the following, suppose that $S=\{s_{0}\}$. Write $C_{n(s_{0})}$ for $C_{S,n}$, $\mathfrak{m}$ for $\mathfrak{m}_{s_{0}}$, and $L$ for the residue field of $s_{0}$. Note that we have the following sequence of curves over $K$: $$C=C_{0}\overset{\pi_{0}}{\to} C_{1}\to\ldots\to C_{i}\overset{\pi_{i}}{\to}\ldots.$$ \begin{enumerate} \setcounter{enumi}{2} \item For any positive integer $i$, we have a canonical isomorphism between the fpqc sheaf defined by $\mathfrak{m}^{i}/\mathfrak{m}^{i+1}$ and the one defined by $\mathrm{Ker}\,(\mathrm{Pic}_{C_{i+1}/K}\to\mathrm{Pic}_{C_{i}/K}).$ In particular, $\mathrm{Ker}\,(\mathrm{Pic}_{C_{i+1}/K}\to\mathrm{Pic}_{C_{i}/K})$ is isomorphic to $\mathbb{G}_{a}^{[L:K]}$ over $K$. \item We have a canonical isomorphism $$\mathrm{Coker}\,(\eta\colon \mathbb{G}_{m,K}\to (\mathbb{G}_{m})_{L/K}) \simeq \mathrm{Ker}\,(\mathrm{Pic}_{C_{1}/K}\to\mathrm{Pic}_{C/K}).$$ \end{enumerate} \label{connsmunip} \end{lem}
\begin{proof} For assertion 1, we only show that $C_{S,n}$ is a scheme of finite type over $K$. Take an affine open subscheme $U=\mathrm{Spec}\, B$ of $C$ such that $U\cap S=\{s\}$. It suffices to show that $(K+\mathfrak{m}_{s}^{n(s)})\cap B$ is a $K$-algebra of finite type. Since $B/(B\cap\mathfrak{m}_{s}^{n(s)})$ is a finite dimensional $K$-linear space, $B$ is finite over $(K+\mathfrak{m}_{s}^{n(s)})\cap B$. Then, since $B$ is of finite type over $K$, $(K+\mathfrak{m}_{s}^{n(s)})\cap B$ is of finite type over $K$.
Next, we show assertion 3. Write $f_{i+1}$ for the structure morphism $C_{i+1}\to \mathrm{Spec}\, K$. From the exact sequence $$0\to\mathcal{O}_{C_{i+1}}^{\ast} \to\pi_{i \ast}\mathcal{O}_{C_{i}}^{\ast}\to(\pi_{i\ast}\mathcal{O}_{C_{i}}^{\ast})/\mathcal{O}_{C_{i+1}}^{\ast} \to 0,$$ we obtain an exact sequence of fpqc sheaves over $\mathrm{Spec}\, K$ $$0\to f_{i+1\ast}((\pi_{i\ast}\mathcal{O}_{C_{i}}^{\ast})/\mathcal{O}_{C_{i+1}}^{\ast}) \to R^{1}(f_{i+1})_{\ast}\mathcal{O}_{C_{i+1}}^{\ast} \to R^{1}f_{i\ast}\mathcal{O}_{C_{i}}^{\ast} \to 0.$$ This exact sequence of fpqc sheaves can be identified with the sequence of fpqc sheaves defined by the following exact sequence of algebraic groups over $K$: $$0\to \mathrm{Ker}\,(\mathrm{Pic}_{C_{i+1}/K}\to\mathrm{Pic}_{C_{i}/K})\to\mathrm{Pic}_{C_{i+1}/K}\to\mathrm{Pic}_{C_{i}/K}\to 0.$$ Hence it suffices to show that the fpqc sheaf $f_{i+1, \ast}((\pi_{i\ast}\mathcal{O}_{C_{i}}^{\ast})/\mathcal{O}_{C_{i+1}}^{\ast})$ on $\mathrm{Spec}\, K$ is canonically isomorphic to that defined by $\mathfrak{m}^{i}/\mathfrak{m}^{i+1}$. For any $K$-algebra $A$, we have a canonical group isomorphism $$(\mathfrak{m}^{i}/\mathfrak{m}^{i+1})\otimes_{K}A\simeq ((K+\mathfrak{m}^{i})\otimes_{K}A)^{\ast}/((K+\mathfrak{m}^{i+1})\otimes_{K}A)^{\ast}$$ which sends $m$ to $1+m$. Here, the right-hand group coincides with the group of sections of the quotient presheaf of $\pi_{i\ast}\mathcal{O}_{C_{i}}^{\ast}$ by $\mathcal{O}_{C_{i+1}}^{\ast}$. Therefore, assertion 3 holds.
Assertion 2 follows from an argument similar to that for assertion 3 and the fact that the quotient sheaf $\mathcal{O}_{C}^{\ast}/\mathcal{O}_{C_{S,n}}^{\ast}$ is isomorphic to the direct sum of the skyscraper sheaves $\mathcal{O}_{C,s}^{\ast}/\mathcal{O}_{C_{S,n},s}^{\ast}$ at $s$ for all $s\in S$.
Finally, we show assertion 4. Write $R$ for the local ring $\mathcal{O}_{C,s_{0}}$. For any $K$-algebra $A$, we have a commutative diagram with exact horizontal lines $$ \xymatrix{ 0\ar[r] &((K+\mathfrak{m})\otimes_{K} A)^{\ast}\ar[r]\ar[d] &(R\otimes_{K} A)^{\ast}\ar[r]\ar[d] &(R\otimes_{K} A)^{\ast}/((K+\mathfrak{m})\otimes_{K} A)^{\ast}\ar[r]\ar[d] &0\\ 0\ar[r] &A^{\ast}\ar[r] &(L\otimes_{K}A)^{\ast}\ar[r] &(L\otimes_{K}A)^{\ast}/A^{\ast}\ar[r] &0. } $$ Here, both the first and the second vertical arrows are surjective, and the kernels of these homomorphisms are isomorphic to $1+\mathfrak{m}\otimes_{K}A$. Therefore, the third vertical arrow is an isomorphism. Then assertion 4 follows from an argument similar to that of the proof of assertion 3. \end{proof}
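As a concrete illustration of Lemma \ref{connsmunip} (a classical example, phrased here in the notation above): take $C=\mathbb{P}^{1}_{K}$, let $s_{0}$ be a $K$-rational point, and let $n(s_{0})=2$. Since the residue field of $s_{0}$ is $K$, we have $C_{1}=C$, and $C_{2}$ is isomorphic to the cuspidal cubic $$\{y^{2}z=x^{3}\}\subset\mathbb{P}^{2}_{K},$$ whose local ring at the cusp is $K+\mathfrak{m}_{s_{0}}^{2}$. As the identity component of $\mathrm{Pic}_{\mathbb{P}^{1}_{K}/K}\simeq\mathbb{Z}$ is trivial, Lemma \ref{connsmunip}.3 (with $i=1$) recovers the classical isomorphism $\mathrm{Pic}_{C_{2}/K}^{0}\simeq\mathbb{G}_{a,K}$.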
\begin{prop}[cf.\,{\cite[Proposition 9.2.9]{BLR}}] $\mathrm{Ker}\,(\mathrm{Pic}_{C'/K}\to\mathrm{Pic}_{C/K})$ is a connected smooth unipotent algebraic group over $K$. \label{connunip} \end{prop}
\begin{rem} {\cite[Proposition 9.2.9]{BLR}} treats the case where $K$ is perfect and states that $\mathrm{Ker}\,(\mathrm{Pic}_{C'/K}\to\mathrm{Pic}_{C/K})$ is unipotent. In the proof of {\cite[Proposition 9.2.9]{BLR}}, results of {\cite{Ser1}} are used, and the same argument shows that $\mathrm{Ker}\,(\mathrm{Pic}_{C'/K}\to\mathrm{Pic}_{C/K})$ is also connected and smooth. \end{rem}
\begin{proof}[Proof of Proposition \ref{connunip}] We may and do assume that $K=K^{\mathrm{sep}}$. Write $S$ for the closed subset of $C$ at which $\pi$ is not a local isomorphism. For every $s\in S$, fix a natural number $n(s)$ such that $\mathfrak{m}_{s}^{n(s)}\subset \mathcal{O}_{C',\pi(s)}$. Then we have a natural morphism $C'\to C_{S,n}$ over $K$. The induced homomorphism $$\mathrm{Ker}\,(\mathrm{Pic}_{C_{S,n}/K}\to\mathrm{Pic}_{C/K})\to\mathrm{Ker}\,(\mathrm{Pic}_{C'/K}\to\mathrm{Pic}_{C/K})$$ is an epimorphism by Lemma \ref{surj}. Hence, we may and do assume that $C'=C_{S,n}$. In this case, $\mathrm{Ker}\,(\mathrm{Pic}_{C_{S,n}/K}\to\mathrm{Pic}_{C/K} )$ is a successive extension of connected smooth unipotent algebraic groups by Lemmas \ref{connsmunip}.2, \ref{connsmunip}.3, and \ref{connsmunip}.4 and Lemma \ref{Weilstr}. Therefore, $\mathrm{Ker}\,(\mathrm{Pic}_{C_{S,n}/K}\to\mathrm{Pic}_{C/K})$ is a connected smooth unipotent algebraic group over $K$. \end{proof}
\label{Picpresection}
\section{A geometrical normality criterion} In this section, we introduce some notions on integral domains over fields of positive characteristic, which we use in Section \ref{simplecase}. We also give an elementary lemma (Lemma \ref{element}) which we use in the proof of Theorem \ref{overclosed}. To prove Theorem \ref{overclosed}, we only need Lemma \ref{element} from this section. However, we discuss general ring theory and prove a geometrical normality criterion (cf.\,Corollary \ref{geomnor}) because the authors could not find it in the literature.
We start by reviewing some elementary facts about rings. Let $k$ be a field and $A$ a $k$-algebra. Recall that we say that $A$ is normal if, for any prime ideal $\mathfrak{p}$ of $A$, $A_{\mathfrak{p}}$ is an integral domain which is integrally closed in $\mathrm{Frac}(A_{\mathfrak{p}})$. If $A$ is normal, then $A$ is reduced and integrally closed in $\mathrm{Frac}(A)$. Note that the converse is true if $A$ has finitely many minimal primes. Recall that we say that $A$ is geometrically reduced (resp.\,geometrically normal) over $k$ if, for any field extension $k'\supset k$, $A\otimes_{k}k'$ is reduced (resp.\,normal). In the case where the characteristic of $k$ is $p>0$, $A$ is geometrically reduced (resp.\,geometrically normal) over $k$ if and only if $A\otimes_{k}k^{1/p^{\infty}}$ is reduced (resp.\,normal) by {\cite[Tag 030V and 037Z]{Stacks}} (see also {\cite[\S4.6]{EGA42}}).
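For example (a standard example recorded here for convenience), if $k=\mathbb{F}_{p}(t)$ and $A=k(t^{1/p})$, then $A$ is reduced (being a field) but not geometrically reduced over $k$: under the isomorphism $A\otimes_{k}k^{1/p}\simeq k^{1/p}[T]/(T^{p}-t)$, the nonzero element $\nu=t^{1/p}\otimes 1-1\otimes t^{1/p}$ corresponds to $T-t^{1/p}$ and satisfies $$\nu^{p}=t\otimes 1-1\otimes t=0.$$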
Suppose that the characteristic of $k$ is $p$. Note that $A$ is reduced if and only if $\mathrm{Frob}_{A}$ is injective. In this case, when we wish to identify $A$ with the image of $\mathrm{Frob}_{A}$, we write $A^{1/p}$ for the target of $\mathrm{Frob}_{A}$. Suppose that $A$ is geometrically reduced. \begin{lem} For any natural number $m$, we have natural inclusions $$A\hookrightarrow A\otimes_{k}k^{1/p^{m}}\hookrightarrow A^{1/p^{m}}.$$ \label{injections} \end{lem}
\begin{proof} Since $A\otimes_{k}k^{1/p^{m}}$ is reduced, the composite homomorphism $$ A\otimes_{k}k^{1/p^{m}}\to A^{1/p^{m}} \to A^{1/p^{m}}\otimes_{k^{1/p^{m}}}k^{1/p^{2m}} =(A\otimes_{k}k^{1/p^{m}})^{1/p^{m}} $$ is injective. Hence, the desired injectivity for the second homomorphism holds. \end{proof} Note that the inclusions in Lemma \ref{injections} induce inclusions $$\mathrm{Frac}(A)\hookrightarrow \mathrm{Frac}(A\otimes_{k}k^{1/p^{m}})\hookrightarrow \mathrm{Frac}(A^{1/p^{m}}).$$ This follows from the fact that an element $a\in A^{1/p^{m}}$ is regular (in $A^{1/p^{m}}$) if and only if $a^{p^{m}}\in A$ is regular (in $A$). Moreover, it holds that $$\mathrm{Frac}(A)\otimes_{k}k^{1/p^{m}}\simeq \mathrm{Frac}(A\otimes_{k}k^{1/p^{m}}).$$ In particular, we have \begin{equation} \begin{split} (\mathrm{Frac}(A))\otimes_{k}k^{1/p^{\infty}}\simeq \mathrm{Frac}(A\otimes_{k}k^{1/p^{\infty}}). \end{split} \label{geomnormalfld} \end{equation}
\begin{lem} Let $K$ be a field of characteristic $p$, $R$ a normal $K$-algebra, and $m$ a natural number. Suppose that $R$ is geometrically reduced over $K$. Write $R(m)$ for the normalization of $R\otimes_{K}K^{1/p^{m}}$ in $\mathrm{Frac}(R\otimes_{K}K^{1/p^{m}})$. Let $L$ be a field containing $K^{1/p^{m}}$. Consider the following diagram: $$ \xymatrix{ L\ar@{^{(}->}[r] &R\otimes_{K}L\ar@{^{(}->}[r] &R(m)\otimes_{K^{1/p^{m}}}L\ar@{^{(}->}[r] &R^{1/p^{m}}\otimes_{K^{1/p^{m}}}L\\ K^{1/p^{m}}\ar@{^{(}->}[r]\ar@{^{(}->}[u] &R\otimes_{K}K^{1/p^{m}}\ar@{^{(}->}[r]\ar@{^{(}->}[u] &R(m)\ar@{^{(}->}[u]\ar@{^{(}->}[r] &R^{1/p^{m}}\ar@{^{(}->}[u]\\ K\ar@{^{(}->}[r]\ar@{^{(}->}[u] &R.\ar@{^{(}->}[u] & & } $$ Then we have \begin{equation*} \begin{split} R(m)\otimes_{K^{1/p^{m}}}L&=\{x\in\mathrm{Frac}(R\otimes_{K}L)\mid x^{p^{m}}\in R\otimes_{K}L\}\\ &=(\mathrm{Frac}(R\otimes_{K}L))\cap(R\otimes_{K}L)^{1/p^{m}}. \end{split} \end{equation*} \label{element} \end{lem}
\begin{proof} First, we show the assertion in the case where $L=K^{1/p^{m}}$. Since $R^{1/p^{m}}$ coincides with the integral closure of $R$ in $\mathrm{Frac}(R^{1/p^{m}})$, it holds that \begin{equation} R(m)= \mathrm{Frac}(R\otimes_{K}K^{1/p^{m}})\cap R^{1/p^{m}} (\subset\mathrm{Frac}(R^{1/p^{m}})). \label{pmcase} \end{equation}
Next, we consider the case where $L$ is perfect. We show that we have \begin{equation*} \begin{split} R(m)\otimes_{K^{1/p^{m}}}L&=(\mathrm{Frac}(R\otimes_{K}K^{1/p^{m}})\cap R^{1/p^{m}})\otimes_{K^{1/p^{m}}}L\\ &=((\mathrm{Frac}(R\otimes_{K}K^{1/p^{m}}))\otimes_{K^{1/p^{m}}}L)\cap(R^{1/p^{m}}\otimes_{K^{1/p^{m}}}L)\\ &=(\mathrm{Frac}(R\otimes_{K}L))\cap(R^{1/p^{m}}\otimes_{K^{1/p^{m}}}L)\\ &=(\mathrm{Frac}(R\otimes_{K}L))\cap(R\otimes_{K}L)^{1/p^{m}}. \end{split} \end{equation*} The first equality follows from the formula (\ref{pmcase}). The second equality holds since $L$ is flat over $K^{1/p^{m}}$. The third equality follows from the isomorphism (\ref{geomnormalfld}). Since $L$ is perfect, we have $(R\otimes_{K}L)^{1/p^{m}}=R^{1/p^{m}}\otimes_{K^{1/p^{m}}}L$. Thus, the fourth equality holds.
Finally, we consider a general $L$. Let $x$ be an element of $\mathrm{Frac}(R\otimes_{K}L)$. Since $L^{1/p^{\infty}}$ is faithfully flat over $L$, $x^{p^{m}}\in R\otimes_{K}L$ (resp.\,$x\in R(m)\otimes_{K^{1/p^{m}}}L$) if and only if $x^{p^{m}}\in R\otimes_{K}L^{1/p^{\infty}}$ (resp.\,$x\in R(m)\otimes_{K^{1/p^{m}}}L^{1/p^{\infty}}$). Hence, we finish the proof of Lemma \ref{element}. \end{proof}
\begin{cor} Let $K$ be a field of characteristic $p$ and $R$ an algebra over $K$. Then $R$ is geometrically normal if and only if $R\otimes_{K}K^{1/p}$ is normal. \label{geomnor} \end{cor}
\begin{proof} If $R$ is geometrically normal, $R\otimes_{K}K^{1/p}$ is normal by the definition of geometric normality. Suppose that $R\otimes_{K}K^{1/p}$ is normal. It suffices to show that $R\otimes_{K}K^{1/p^{\infty}}$ is normal. Thus, we may and do assume that $R$ is a local ring and $R\otimes_{K}K^{1/p}$ is a normal local domain. To show that the integral domain $R\otimes_{K}K^{1/p^{\infty}}$ is normal, it suffices to show that $$\{r\in\mathrm{Frac}(R\otimes_{K}K^{1/p^{\infty}})\mid \exists i\in\mathbb{Z}, r^{p^{i}}\in R\otimes_{K}K^{1/p^{\infty}}\}=R\otimes_{K}K^{1/p^{\infty}}.$$ Hence, Corollary \ref{geomnor} follows from Lemma \ref{element} and the assumption that $R\otimes_{K}K^{1/p}$ is normal. \end{proof}
\begin{not-dfn}[cf.\,Remark \ref{B?}] Let $K$ be a field of characteristic $p$, $B$ a subset of a $p$-basis of $K$ over $K^{p}$, and $R$ an algebra over $K$. We shall write $K(B^{1/p^{n}})$ (resp.\,$K(B^{1/p^{\infty}})$) for the field $K(x^{1/p^{n}}\mid x\in B)$ (resp.\,$\bigcup_{n\geq 1}K(B^{1/p^{n}})$). We shall say that $R$ is {\it $B$-reduced} (resp.\,{\it $B$-integral}; {\it $B$-normal}) if $R\otimes_{K}K(B^{1/p^{\infty}})$ is reduced (resp.\,an integral domain; normal). For a $B$-integral algebra $R$, we shall say that $R$ is {\it $B$-Japanese} if, for any natural number $n$, the normalization of $R\otimes_{K}K(B^{1/p^{n}})$ in its field of fractions is finite over $R\otimes_{K}K(B^{1/p^{n}})$. \label{x-geom} \end{not-dfn}
\begin{rem} We use the same notation as in Notation-Definition \ref{x-geom}. Let $\mathscr{P}$ be one of the properties written in italics in Notation-Definition \ref{x-geom}. Note that, for any $p$-basis $B'$ of $K^p(B)$ over $K^p$, $R$ is $B$-$\mathscr{P}$ if and only if $R$ is $B'$-$\mathscr{P}$. However, we do not adopt the terminology ``$K(B^{1/p})$-$\mathscr{P}$'' because it could be misunderstood as a property of algebras over $K(B^{1/p})$. \label{B?} \end{rem}
\begin{prop} Let $K$, $B$, and $R$ be as in Notation-Definition \ref{x-geom}. \begin{enumerate} \item $R$ is $B$-integral if and only if $\mathrm{Spec}\, R$ is irreducible and $R$ is $B$-reduced. \item $R$ is $B$-reduced (resp.\,$B$-normal) if and only if $R\otimes_{K}K(B^{1/p})$ is reduced (resp.\,normal). \item $R$ is $B$-reduced (resp.\,$B$-normal) if and only if $R$ is $B'$-reduced (resp.\,$B'$-normal) for any finite subset $B'\subset B$. \end{enumerate} Suppose that $R$ is a $B$-integral discrete valuation ring and write $R(n_{B})$ for the normalization of $R\otimes_{K}K(B^{1/p^{n}})$ in its field of fractions. \begin{enumerate} \setcounter{enumi}{3} \item $R(n_{B})$ is a discrete valuation ring. Moreover, if $R$ is $B$-Japanese, then $R\otimes_{K}K(B^{1/p^{n}})$ is Noetherian. \item If $R$ is $B$-Japanese, then $R$ is $B'$-Japanese for any subset $B'\subset B$. \item $R$ is $B$-Japanese if and only if $R(1_{B})$ is finite over $R\otimes_{K}K(B^{1/p})$. \item If $R$ is $B$-Japanese, then the completion $\widehat{R}$ is $B$-integral. \end{enumerate} \label{B-prop} \end{prop}
\begin{proof} Since the morphism $\mathrm{Spec}\, R\otimes_{K}K(B^{1/p^{\infty}})\to \mathrm{Spec}\, R$ is a homeomorphism, assertion 1 holds.
Next, we show assertion 2. If $R$ is $B$-reduced (resp.\,$B$-normal), $R\otimes_{K}K(B^{1/p})$ is also reduced (resp.\,normal) since $K(B^{1/p^{\infty}})$ is faithfully flat over $K(B^{1/p})$. Suppose that $R\otimes_{K}K(B^{1/p})$ is reduced, or equivalently, the absolute Frobenius endomorphism $F$ of $R\otimes_{K}K(B^{1/p})$ is injective. Note that the image of $F$ is contained in $R$. Then the absolute Frobenius endomorphism $F'$ of $R\otimes_{K}K(B^{1/p^{2}})$ can be identified with $\bigoplus_{\varphi}F$ via the commutative diagram $$ \xymatrix{ R\otimes_{K}K(B^{1/p^{2}})\ar[r]_-{F'}\ar@{=}[d] &R\otimes_{K}K(B^{1/p^{2}})\\ \underset{\varphi}\bigoplus ((R\otimes_{K}K(B^{1/p}))\underset{b\in B}{\prod}b^{\varphi(b)/p^{2}})\ar[r]_-{\bigoplus_{\varphi}F} &\underset{\varphi}\bigoplus (R\underset{b\in B}{\prod}b^{\varphi(b)/p}),\ar@{_{(}->}[u] } $$ where $\varphi$ ranges over the maps from $B$ to $\{0,\ldots,p-1\}$ such that $\varphi^{-1}(\{1,\ldots,p-1\})$ is a finite set. Therefore, $F'$ is injective, and hence $R$ is $B$-reduced by induction.
Next, suppose that $R\otimes_{K}K(B^{1/p})$ is normal. Note that, by the above argument, $R$ is $B$-reduced. To show that $R$ is $B$-normal, it suffices to show that the first and second parts of the proof of Lemma \ref{element} work even if we replace $L, R(1), K^{1/p^{m}}$, and the assumption that $R$ is geometrically reduced over $K$ with $K(B^{1/p^{\infty}}), R(1_{B}), K(B^{1/p^{m}})$, and the assumption that $R$ is $B$-reduced, respectively. The only thing we should confirm is the validity of the fourth equality which we checked in the second part of the proof of Lemma \ref{element}. Hence, we should show \begin{align*} R^{1/p}\otimes_{K(B^{1/p})}K(B^{1/p^{\infty}}) =(R\otimes_{K}K(B^{1/p^{\infty}}))^{1/p}. \end{align*} This follows from \begin{align*} R^{1/p}\otimes_{K(B^{1/p})}K(B^{1/p^{\infty}}) &=(R\otimes_{K^{p}(B)}K^{p}(B^{1/p^{\infty}}))^{1/p}\\ &=(R\otimes_{K}K\otimes_{K^{p}(B)}K^{p}(B^{1/p^{\infty}}))^{1/p}\\ &=(R\otimes_{K}K(B^{1/p^{\infty}}))^{1/p}. \end{align*}
Assertion 3 follows from the fact that $$R\otimes_{K}K(B^{1/p})=\varinjlim_{B'}R\otimes_{K}K(B'^{1/p}),$$ where $B'$ ranges over the finite subsets of $B$.
Next, we prove assertion 4. Write $v_{R}$ for the normalized valuation of $R$ and $v_{R(n_{B})}$ for the map $$R(n_{B})\to \mathbb{N}\cup\{\infty\}: r\mapsto v_{R}(r^{p^{n}}).$$ Then $v_{R(n_{B})}$ is a (possibly not normalized) nontrivial discrete valuation of $R(n_{B})$, which shows that $R(n_{B})$ is a discrete valuation ring. If $R$ is $B$-Japanese, $R\otimes_{K}K(B^{1/p^{n}})$ is Noetherian by the Eakin-Nagata theorem.
Next, we suppose that $R$ is $B$-Japanese and prove assertion 5. Since $R$ is $B$-integral, we have natural injections $$(R\otimes_{K}K(B'^{1/p^{n}}))\otimes_{K(B'^{1/p^{n}})}K(B^{1/p^{n}}) \hookrightarrow R(n_{B'})\otimes_{K(B'^{1/p^{n}})}K(B^{1/p^{n}}) \hookrightarrow R(n_{B}).$$ Since $K(B^{1/p^{n}})$ is faithfully flat over $K(B'^{1/p^{n}})$ and $R\otimes_{K}K(B^{1/p^{n}})$ is Noetherian by 4, $R(n_{B'})$ is finite over $R\otimes_{K}K(B'^{1/p^{n}})$. Hence, assertion 5 holds.
Next, we prove assertion 6. By the proof of assertion 4, $R\otimes_{K}K(B^{1/p})$ is Noetherian. It suffices to show that $R(2_{B})$ is finite over $R(1_{B})\otimes_{K(B^{1/p})}K(B^{1/p^{2}})$, which is equivalent to the condition that $R(2_{B})^{p}$ is finite over $R(1_{B})^{p}\otimes_{K^{p}(B)}K^{p}(B^{1/p})$. Since $R$ is faithfully flat over $R(1_{B})^{p}$, it suffices to show that $R(2_{B})^{p}\otimes_{R(1_{B})^{p}}R$ is finite over $$(R(1_{B})^{p}\otimes_{K^{p}(B)}K^{p}(B^{1/p}))\otimes_{R(1_{B})^{p}}R =R\otimes_{K}K(B^{1/p}).$$ Since $R(2_{B})^{p}\otimes_{R(1_{B})^{p}}R$ is integral over $R\otimes_{K}K(B^{1/p})$ and $R$ is flat over $R(1_{B})^{p}$, $R(2_{B})^{p}\otimes_{R(1_{B})^{p}}R$ is a subring of $R(1_{B})$. Since $R(1_{B})$ is finite over the Noetherian ring $R\otimes_{K}K(B^{1/p})$, assertion 6 holds.
Finally, we assume that $R$ is a $B$-Japanese discrete valuation ring and prove assertion 7. We may assume that $B$ is a finite set by assertions 3 and 5. By assertions 1 and 2, it suffices to show that $\widehat{R}\otimes_{K}K(B^{1/p})=\widehat{R}\otimes_{R}(R\otimes_{K}K(B^{1/p}))$ is reduced. Since $R\otimes_{K}K(B^{1/p})$, and hence $R(1_{B})$, is finite over $R$, $\widehat{R}\otimes_{R}(R\otimes_{K}K(B^{1/p}))=\widehat{R\otimes_{K}K(B^{1/p})}$ is a subring of $\widehat{R}\otimes_{R}(R(1_{B}))=\widehat{R(1_{B})}$. Since $\widehat{R(1_{B})}$ is a discrete valuation ring, assertion 7 holds. \end{proof}
The next corollary will be improved in Proposition \ref{Bgeqp}. We note that $g_{10}$ in Corollary \ref{geqp} is a local analogue of a genus change of a curve.
\begin{cor} Let $K$ be a field of characteristic $p$, $B$ a $p$-basis of $K$ over $K^{p}$, and $R$ a discrete valuation ring geometrically integral over $K$. Moreover, we use the notation of Proposition \ref{B-prop}. Suppose that $R$ is $B$-Japanese and the residue field $L$ of $R$ is finite over $K$. Write $g_{10}$ (resp.\,$g_{21}$) for \begin{equation*} \begin{split} &\dim_{K^{1/p}}R(1)/R\otimes_{K}K^{1/p}\\ (\text{resp.\,}&\dim_{K^{1/p^{2}}}R(2)/R(1)\otimes_{K^{1/p}}K^{1/p^{2}}). \end{split} \end{equation*} Then we have $g_{10}\geq pg_{21}$. \label{geqp} \end{cor}
\begin{proof} By the proof of Proposition \ref{B-prop}.6, it suffices to show that $$\dim_{K^{1/p}}R\otimes_{R(1)^{p}}R(2)^{p}/R\otimes_{K}K^{1/p} \geq pg_{21}.$$ Since we have \begin{equation*} \begin{split} g_{21}=&\dim_{K^{1/p^{2}}}R(2)/R(1)\otimes_{K^{1/p}}K^{1/p^{2}}\\ =&\dim_{K^{1/p}}R(2)^{p}/R(1)^{p}\otimes_{K}K^{1/p} \end{split} \end{equation*} and \begin{equation*} \begin{split} R\otimes_{R(1)^{p}}R(2)^{p}/R\otimes_{K}K^{1/p}=&(R(2)^{p}/R(1)^{p}\otimes_{K}K^{1/p})\otimes_{R(1)^{p}}R, \end{split} \end{equation*} Corollary \ref{geqp} follows from the next lemma. \end{proof}
\begin{lem} The residue degree or ramification index of the extension $R(1)^{p}\subset R$ is greater than $1$. \end{lem}
\begin{proof} Write $L(1)$ for the residue field of $R(1)$. We may and do assume that $L(1)=L^{1/p}$. It suffices to show that the ramification index of $R(1)$ over $R$ is $1$. Let $B'$ be a subset of $K^{1/p}$ which is a $p$-basis of $L K^{1/p} / L$. Then $R\otimes_{K}K(B')$ is a discrete valuation ring whose residue field is $LK^{1/p}$, and the ramification index of the extension $R \otimes_K K(B') \supset R$ is $1$ by {\cite[I \S6 Proposition 15]{Ser}}. Let $B''$ be a subset of $K^{1/p}$ such that $B'\cup B''$ is a $p$-basis of $K^{1/p}/K$ and $B'\cap B''=\emptyset$. Note that the cardinality of $B''$ is $\log_{p}[K^{1/p}:K(B')]$. Since we have $$[K^{1/p}:K(B')]\leq[L(B'):K(B')]\leq[L:K],$$ $B''$ is a finite set. Moreover, it holds that \begin{equation*} \begin{split} &[L^{1/p}:L(B')]=[L^{1/p}(B'^{1/p^{\infty}}):L(B'^{1/p^{\infty}})]\\ =&[K^{1/p}(B'^{1/p^{\infty}}):K(B'^{1/p^{\infty}})]=[K^{1/p}:K(B')], \end{split} \end{equation*} where the second equality holds since $L^{1/p}(B'^{1/p^{\infty}})=(L(B'^{1/p^{\infty}}))^{1/p}$ is finite over $K^{1/p}(B'^{1/p^{\infty}})=(K(B'^{1/p^{\infty}}))^{1/p}$. Since we have $L(1)=L^{1/p}$, the ramification index of $R(1)$ over $R\otimes_{K}K(B')$ is $1$. \end{proof}
\label{geomnormalsection}
\section{The Picard schemes of regular curves} In this section, we describe the structure of the $p$-power torsion subgroups of the Picard schemes of regular geometrically integral curves.
Let $K$ be a field of characteristic $p$, $\overline{K}$ an algebraic closure of $K$, and $C$ a proper regular curve over $K$. For any natural number $m$, write $C(m)$ for the normalization of $C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, K^{1/p^{m}}$ in its field of fractions and $g_{m}$ for the arithmetic genus of $C(m)$. Moreover, for any field extension $L$ over $K^{1/p^{m}}$, we write $C(m)_{L}$ for the scheme $C(m)\times_{\mathrm{Spec}\, K^{1/p^{m}}}\mathrm{Spec}\, L$. Note that we have the following sequence of curves over $\overline{K}$: $$C(m)_{\overline{K}}\to C(m-1)_{\overline{K}}\to \ldots \to C(1)_{\overline{K}}\to C(0)_{\overline{K}}(=C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, \overline{K}).$$ There exists a natural number $n$ such that $C(n)_{\overline{K}}$ is normal. Write $n_{0}$ for the minimal such $n$.
First, we recall fundamental properties of the Picard schemes of curves. \begin{lem} Let $m$ be a natural number. We have the following: \begin{enumerate} \item $\mathrm{Pic}_{C(m)/K^{1/p^{m}}}$ is a smooth group scheme of dimension $g_{m}$ over $K^{1/p^{m}}$. \item The N\'eron-Severi group $\mathrm{Pic}_{C(m)/K^{1/p^{m}}}(\overline{K})/\mathrm{Pic}^{0}_{C(m)/K^{1/p^{m}}}(\overline{K})$ of $C(m)$ is isomorphic to $\mathbb{Z}$. \item There exists a short exact sequence of algebraic groups over $\overline{K}$ $$0\to G(m) \to \mathrm{Pic}^{0}_{C(m)_{\overline{K}}/\overline{K}} \to A(m) \to 0$$ such that $A(m)$ is an abelian variety and $G(m)$ is a connected smooth unipotent algebraic group. Moreover, these properties characterize the algebraic subgroup $G(m)$ of $\mathrm{Pic}^{0}_{C(m)_{\overline{K}}/\overline{K}}$ and the quotient algebraic group $A(m)$ of $\mathrm{Pic}^{0}_{C(m)_{\overline{K}}/\overline{K}}$. \item The natural homomorphism $\mathrm{Pic}_{C(m)_{\overline{K}}/\overline{K}}\to \mathrm{Pic}_{C(m+1)_{\overline{K}}/\overline{K}}$ is surjective. This homomorphism induces an isomorphism $A(m)\to A(m+1)$ and an isomorphism between the N\'eron-Severi groups of $C(m)$ and $C(m+1)$. \item $\mathrm{Pic}^{0}_{C(n_{0})/K^{1/p^{n_{0}}}}$ is an abelian variety over $K^{1/p^{n_{0}}}$. In other words, $G(n_{0})$ is a trivial group scheme and we have $\mathrm{Pic}^{0}_{C(n_{0})_{\overline{K}}/\overline{K}} \simeq A(n_{0})$. \end{enumerate} \label{fundamentalstructure} \end{lem}
\begin{rem} By assertion 4, $A(m)\simeq A(0)$ for any natural number $m$. \end{rem}
\begin{proof}[Proof of Lemma \ref{fundamentalstructure}] Assertion 1 follows from {\cite[$\mathrm{n}^{\circ}$ 232, Theorem 3.1]{FGA}} (or {\cite[Theorem 8.2.1]{BLR}}) and {\cite[Proposition 8.4.2]{BLR}}. Assertion 2 follows from {\cite[Corollary 9.2.14]{BLR}}. Assertion 5 follows from {\cite[Proposition 9.2.3]{BLR}}. Assertions 3 and 4 follow from assertion 1, {\cite[Theorem 9.2.1]{BLR}}, Proposition \ref{connunip}, and assertion 5. \end{proof}
For any natural number $m$ with $0 \leq m\leq n_{0}$, let $G(m)$ be as in Lemma \ref{fundamentalstructure}. Write $A$ (resp.\,$G$) for $A(0)$ (resp.\,$G(0)$) in Lemma \ref{fundamentalstructure}. By Lemma \ref{fundamentalstructure}, we have a sequence of connected smooth unipotent commutative algebraic groups $$G=G(0)\to G(1)\to \ldots\to G(n_{0})=0.$$ Write $G_{i}$ for the kernel of the homomorphism $G\to G(i)$. Then $G$ has an increasing filtration $$0=G_{0}\hookrightarrow G_{1}\hookrightarrow\ldots\hookrightarrow G_{n_{0}}=G.$$
\begin{thm}[cf.\,Theorem \ref{intromain1}] Let $0\leq i \leq n_{0}$ be a natural number. The following four algebraic subgroups $G_{i},G'_{i},G''_{i}, G'''_{i}$ of $G$ coincide: \begin{itemize} \item $G_{i}(=\mathrm{Ker}\,(G\to G(i)))$. \item $G'_{i}:=\mathrm{Ker}\,(\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}\to\mathrm{Pic}_{C(i)_{\overline{K}}/\overline{K}})$. \item $G''_{i}:=G_{s}(i)$, i.e., the largest subgroup of $G$ that is $p^{i}$-torsion, $\overline{K}$-split, and unipotent (cf.\,Lemma \ref{maximalvec} and Remark \ref{whyred}.1). \item $G'''_{i}:=(\mathrm{Ker}\,(p^{i}\times\colon G\to G))_{\mathrm{red}}$, i.e., the maximal reduced closed subscheme of $\mathrm{Ker}\,(p^{i}\times\colon G\to G)$ (cf.\,Remark \ref{whyred}.2). \end{itemize} \label{overclosed} \end{thm}
\begin{proof} First, by assertions 3 and 4 of Lemma \ref{fundamentalstructure}, we have $G_{i}=G'_{i}$.
Next, we show that $G'_{i}=G'''_{i}$ holds. Write $S$ for the closed subset of $C_{\overline{K}}$ consisting of singular points. For any $s\in S$, write $s_{n_{0}}$ (resp.\,$s_{i}$) for the closed point of $C(n_{0})_{\overline{K}}$ (resp.\,$C(i)_{\overline{K}}$) whose image in $C_{\overline{K}}$ is $s$. Write $\pi_{i}$ (resp.\,$\pi$) for the natural morphism $C(i)_{\overline{K}}\to C_{\overline{K}}$ (resp.\,$C(n_{0})_{\overline{K}}\to C_{\overline{K}}$). We have a commutative diagram with exact horizontal lines \begin{equation} \xymatrix{ 0\ar[r] &\mathcal{O}^{\ast}_{C_{\overline{K}}}\ar[r]\ar@{=}[d] &\pi_{i\ast}\mathcal{O}^{\ast}_{C(i)_{\overline{K}}}\ar[r]\ar[d] &\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(i)_{\overline{K}},s_{i}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast})\ar[r]\ar[d] &0\\ 0\ar[r] &\mathcal{O}^{\ast}_{C_{\overline{K}}}\ar[r] &\pi_{\ast}\mathcal{O}^{\ast}_{C(n_{0})_{\overline{K}}}\ar[r] &\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast})\ar[r] &0. } \label{forPic} \end{equation} Here, the sheaves $\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast}$ and $\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C(i)_{\overline{K}},s_{i}}^{\ast}$ are skyscraper sheaves whose supports are concentrated on $\{s\}$, from which it follows that the first cohomology groups of these sheaves are trivial. 
By taking the long exact sequences of the cohomology groups for (\ref{forPic}), we have the following commutative diagram with exact horizontal lines: \begin{equation} \xymatrix{ 0\ar[r] &\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(i)_{\overline{K}},s_{i}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast})\ar[r]\ar[d] &H^{1}(C_{\overline{K}},\mathcal{O}^{\ast}_{C_{\overline{K}}})\ar[r]\ar@{=}[d] &H^{1}(C(i)_{\overline{K}},\mathcal{O}^{\ast}_{C(i)_{\overline{K}}})\ar[r]\ar[d] &0\\ 0\ar[r] &\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast})\ar[r] &H^{1}(C_{\overline{K}},\mathcal{O}^{\ast}_{C_{\overline{K}}})\ar[r] &H^{1}(C(n_{0})_{\overline{K}},\mathcal{O}^{\ast}_{C(n_{0})_{\overline{K}}})\ar[r] &0. } \label{cohPic} \end{equation} On the other hand, we have the following commutative diagram with exact horizontal lines: \begin{equation} \xymatrix{ 0\ar[r] &G'_{i}(\overline{K})\ar[r]\ar[d] &\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}(\overline{K})\ar[r]\ar@{=}[d] &\mathrm{Pic}_{C(i)_{\overline{K}}/\overline{K}}(\overline{K})\ar[r]\ar[d] &0\\ 0\ar[r] &G(\overline{K})\ar[r] &\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}(\overline{K})\ar[r] &\mathrm{Pic}_{C(n_{0})_{\overline{K}}/\overline{K}}(\overline{K})\ar[r] &0. 
} \label{Picexact} \end{equation} Hence, we have natural isomorphisms $$G(\overline{K})=\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C_{\overline{K}},s}^{\ast})$$ and $$G'_{i}(\overline{K})=\underset{s\in S}{\bigoplus}(\mathcal{O}_{C(n_{0})_{\overline{K}},s_{n_{0}}}^{\ast}/\mathcal{O}_{C(i)_{\overline{K}},s_{i}}^{\ast}).$$ From these isomorphisms and Lemma \ref{element}, we obtain $$G'_{i}(\overline{K})=\mathrm{Ker}\,(p^{i}\times\colon G(\overline{K})\to G(\overline{K}))\simeq\mathrm{Ker}\,(p^{i}\times\colon G\to G)(\overline{K}).$$ Since $G'_{i}$ is a smooth (and hence reduced) algebraic subgroup of $\mathrm{Pic}_{C_{\overline{K}}/\overline{K}}$ by Proposition \ref{connunip}, we have a natural isomorphism $G'_{i}\simeq G'''_{i}$.
Finally, we see that there are natural inclusions $G'_{i}\hookrightarrow G''_{i}\hookrightarrow G'''_{i}$. By definition, we have a natural inclusion $G''_{i}\hookrightarrow G'''_{i}$. The fact that $G'_{i}$ is a connected smooth unipotent algebraic group over $\overline{K}$ follows from Proposition \ref{connunip}. Then the $\overline{K}$-splitness of $G'_{i}$ follows from {\cite[Expos\'e XVII Proposition 4.1.1]{SGA3}}. Since $G'_{i}=G'''_{i}$, $G'_{i}$ is $p^{i}$-torsion. Hence, we have a natural inclusion $G'_{i}\hookrightarrow G''_{i}$. This completes the proof of Theorem \ref{overclosed}. \end{proof}
\begin{rem} \begin{enumerate} \item Note that, for an algebraic group $H$ over an algebraically closed field, $H$ is $p^{i}$-torsion, split, and unipotent if and only if $H$ is $p^{i}$-torsion, connected, smooth, and unipotent by {\cite[Expos\'e XVII Proposition 4.1.1]{SGA3}}. \item Since the maximal reduced closed subscheme of an algebraic group over a perfect field is again an algebraic group, $G'''_{i}$ is an algebraic group over $\overline{K}$. Note that $\mathrm{Ker}\,(p\times\colon G\to G)$ is reduced if and only if $G_{1}=G$. Indeed, if $G=G_{1}$, $\mathrm{Ker}\,(p\times\colon G\to G)=G$. On the other hand, if $G\neq G_{1}$, $pG$ is a nontrivial algebraic subgroup of $G$. Since $G$ and $pG$ are smooth and the natural homomorphism $G\to pG$ induces a trivial homomorphism between their Lie algebras, the algebraic group $\mathrm{Ker}\,(G\to pG)$ is not smooth. \end{enumerate} \label{whyred} \end{rem}
\begin{cor} For every natural number $i$, we have a natural homomorphism $p\times\colon G_{i+2}/G_{i+1}\to G_{i+1}/G_{i}$ inducing an injection $(G_{i+2}/G_{i+1})(\overline{K})\hookrightarrow(G_{i+1}/G_{i})(\overline{K})$. \label{main1cor} \end{cor}
\begin{proof} Since $G_{i+1}$ is connected, smooth, unipotent, $\overline{K}$-split, and $p^{i+1}$-torsion by Theorem \ref{overclosed}, $pG_{i+1}$ is connected, smooth, unipotent, $\overline{K}$-split, and $p^{i}$-torsion. Hence, by Theorem \ref{overclosed}, we have $pG_{i+1}\subset G_{i}$ and a homomorphism $$p\times\colon G_{i+2}/G_{i+1}\to G_{i+1}/G_{i}.$$ Again by Theorem \ref{overclosed}, we have a natural isomorphism $$(G_{i+1}/G_{i})(\overline{K})\simeq \mathrm{Ker}\,(p^{i+1}\times\colon G(\overline{K})\to G(\overline{K}))/\mathrm{Ker}\,(p^{i}\times\colon G(\overline{K})\to G(\overline{K})).$$ Therefore, the induced homomorphism $$(G_{i+2}/G_{i+1})(\overline{K})\to(G_{i+1}/G_{i})(\overline{K})$$ is injective. \end{proof}
\begin{rem} By Corollary \ref{main1cor}, Lemma \ref{fundamentalstructure}, and Theorem \ref{overclosed}, it holds that $\dim G_{i+2}/G_{i+1}\leq\dim G_{i+1}/G_{i}$, and hence we have $g_{i+1}-g_{i+2}\leq g_{i}-g_{i+1}$. Note that a stronger inequality $p(g_{i+1}-g_{i+2})\leq g_{i}-g_{i+1}$ follows from Corollary \ref{geqp}. \label{gpgenuschange} \end{rem}
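The effect of the stronger inequality can be seen with hypothetical numerical values (an illustration added here; the numbers are not taken from the text):

```latex
For instance, if $p=3$ and the first genus change is $g_{0}-g_{1}=9$, then the
inequality $p(g_{i+1}-g_{i+2})\leq g_{i}-g_{i+1}$ forces
\[
  g_{1}-g_{2}\leq 3,\qquad g_{2}-g_{3}\leq 1,\qquad g_{3}-g_{4}=0,
\]
where the last equality holds because $g_{3}-g_{4}$ is a nonnegative integer
bounded by $1/3$. Thus the genus changes decrease at least geometrically with
ratio $1/p$.
```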
\label{main1section}
\section{The case of simple extensions} In this section, we give calculations of conductors and $\delta$-invariants. Here, $\delta$-invariants can be regarded as certain local variants of ``genus changes'' of curves.
Let $K$ be a field of characteristic $p$, $R$ a discrete valuation ring over $K$, $\mathfrak{m}$ the maximal ideal of $R$, $\varpi$ a uniformizer of $R$, $v$ the normalized valuation of $R$, and $L$ the residue field of $R$. Let $x$ be an element of $K\setminus K^{p}$. Suppose that $R$ is $\{x\}$-integral. Write $R(1_{x})$ for the normalization of $R\otimes_{K}K(x^{1/p})$ in its field of fractions, which is a discrete valuation ring. (Note that, in Section \ref{geomnormalsection}, we write $R(1_{\{x\}})$ for $R(1_{x})$.) Write $v_{1}$ (resp.\,$L(1_{x})$; $\mathfrak{m}(1_{x})$; $\varpi_{1}$; $e_{1}$; $f_{1}$) for the valuation of $R(1_{x})$ (resp.\,the residue field of $R(1_{x})$; the maximal ideal of $R(1_{x})$; a uniformizer of $R(1_{x})$; the ramification index of $R(1_{x})$ over $R$; the residue degree of $R(1_{x})$ over $R$). Suppose that $R(1_{x})$ is finite over $R$ (or equivalently, that $R$ is $\{x\}$-Japanese, cf.\,Proposition \ref{B-prop}.5). Then we have $e_{1}f_{1}=p$ by {\cite[I \S4 Proposition 10]{Ser}}. Moreover, note that $R, R\otimes_{K}K(x^{1/p})$, and $R(1_{x})$ are local Noetherian domains of dimension $1$. For such a local domain $A$, write $\widehat{A}$ for the completion of $A$. Since $R(1_{x})$ is finite over $R$, we have $\widehat{A}\simeq A\otimes_{R}\widehat{R}$. Since $\widehat{R}$ is flat over $R$, all such $\widehat{A}$ are local domains.
\begin{not-dfn} \begin{enumerate} \item We write $\delta_{10}$ for the natural number $$\mathop{\mathrm{length}}\nolimits_{R\otimes_{K}K(x^{1/p})}(R(1_{x})/(R\otimes_{K}K(x^{1/p}))),$$ and refer to this as the {\it $\delta$-invariant} (of $R\otimes_{K}K(x^{1/p})$) (cf.\,\cite[0C3Q]{Stacks}). \item If $L$ is finite over $K$, we write $g_{10}$ for the natural number $$\dim_{K(x^{1/p})}(R(1_{x})/(R\otimes_{K}K(x^{1/p}))).$$ \item We write $\mathcal{C}_{10}$ for the largest ideal of $R(1_{x})$ contained in $R\otimes_{K}K(x^{1/p})$ and refer to this as {\it the conductor} (cf.\,{\cite[III \S6]{Ser}} and {\cite[Definition 2.13]{PW}}). \end{enumerate} \label{deltaconddef} \end{not-dfn}
\begin{rem} \label{remdelta} In \cite[Section 5]{IIL}, the definition of the $\delta$-invariant is a little different. In their definition, they replace $K(x^{1/p})$ in our definition with an algebraic closure of $K$. Since we want to consider the step-by-step behavior of invariants, we follow the convention in \cite[0C3Q]{Stacks}. \end{rem}
\begin{lem} \begin{enumerate} \item If $x^{1/p}\notin L$, then $R(1_{x})=R\otimes_{K}K(x^{1/p})$. Moreover, $R$ is $\{x\}$-normal. \item $\delta_{10}=\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/(R\otimes_{K}K(x^{1/p})))$. \item If $L$ is finite over $K$ and $x^{1/p}\in L$, we have $$g_{10}=\delta_{10}[L:K(x^{1/p})].$$ \end{enumerate} \label{calcul} \end{lem}
\begin{proof} Assertion 1 follows from {\cite[I \S6 Proposition 15]{Ser}} and Proposition \ref{B-prop}.2. To show assertion 2, we may assume $x^{1/p}\in L$ by assertion 1. Since the residue field of $R\otimes_{K}K(x^{1/p})$ is naturally isomorphic to $L$, we have assertion 2. Moreover, if $L$ is finite over $K$, then we have \begin{align*} &[L:K(x^{1/p})]\mathop{\mathrm{length}}\nolimits_{R\otimes_{K}K(x^{1/p})}(R(1_{x})/(R\otimes_{K}K(x^{1/p})))\\ =&\dim_{K(x^{1/p})}(R(1_{x})/(R\otimes_{K}K(x^{1/p}))). \end{align*} Hence, assertion 3 holds. \end{proof}
\begin{nota} For any lift $r\in R$ of $x^{1/p}$, we shall write $q_{r}$ for $v(r^{p}-x)$. Moreover, we shall write $$q(x)=\sup_{r\in R}v(r^{p}-x).$$ Note that if $x^{1/p}\in L$, $q(x)$ coincides with $\underset{r}{\sup}\,q_{r}$, where $r$ ranges over all the lifts of $x^{1/p}$. Moreover, note that if $x^{1/p}\notin L$, $q(x)=0$. \label{notationq} \end{nota} In the following, suppose that $x^{1/p}\in L$. Note that $pv_{1}(r-x^{1/p})=v_{1}(r^{p}-x)=e_{1}v(r^{p}-x)=e_{1}q_{r}$. The following is the first main theorem of this section. \begin{thm} \begin{enumerate} \item The inequalities $1\leq q(x)<\infty$ hold (cf.\,Remark \ref{metricinterprete}). \item $e_{1}=p$ if and only if $q(x)$ is not divisible by $p$. \item If $e_{1}=p$, we have \begin{align*} \mathcal{C}_{10}&=\mathfrak{m}(1_{x})^{(p-1)(q(x)-1)},\\ \delta_{10}&=\frac{(p-1)(q(x)-1)}{2}. \end{align*} In particular, $R$ is $\{x\}$-normal if and only if $q(x)=1$. \item If $f_{1}=p$, we have \begin{align*} \mathcal{C}_{10}&=\mathfrak{m}(1_{x})^{\frac{(p-1)q(x)}{p}},\\ \delta_{10}&=\frac{(p-1)q(x)}{2}. \end{align*} In particular, if $p\neq 2$, then $\delta_{10}$ is divisible by $p$. \item We have $$q(x)=\max_{r'\in R, x'\in K^{p}(x)\setminus K^{p}} v(r'^{p}-x')=\max_{r''\in R^{p}, x'\in K^{p}(x)\setminus K^{p}}v(r''-x').$$ In particular, for any element $x'\in K^{p}(x)\setminus K^{p}$, we have $q(x')=q(x)$ (cf.\,Remark \ref{metricinterprete}). \end{enumerate} \label{q} \end{thm}
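The invariants above can be checked in a simple concrete case (a worked example added here, not taken from the text):

```latex
Let $K=\mathbb{F}_{p}(u)$, $x=u\in K\setminus K^{p}$, and
$R=K[s]_{(s^{p}-u)}$, a discrete valuation ring with uniformizer
$\varpi=s^{p}-u$ and residue field $L=K(u^{1/p})$ (identifying
$\overline{s}=u^{1/p}$). Here
$R\otimes_{K}K(u^{1/p})=K(u^{1/p})[s]_{(s-u^{1/p})}$ is already a discrete
valuation ring, so $R(1_{x})=R\otimes_{K}K(u^{1/p})$. For the lift $r=s$ of
$x^{1/p}$ we have $r^{p}-x=\varpi$, so $q_{r}=1$ and hence $q(x)=1$
(cf.\,Lemma \ref{r-x}). Since $q(x)$ is not divisible by $p$, Theorem
\ref{q}.2 gives $e_{1}=p$, and Theorem \ref{q}.3 gives
\[
  \mathcal{C}_{10}=\mathfrak{m}(1_{x})^{0}=R(1_{x}),\qquad \delta_{10}=0,
\]
consistent with the fact that $R$ is $\{x\}$-normal.
```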
\begin{rem} There exists an element $r$ of $R$ such that $r^{p}$ is a nearest element of $R^{p}$ to $x$ with respect to the metric of $R(1_{x})$; such an $r^{p}$ attains the distance between $R^{p}$ and $K^{p}(x)\setminus K^{p}(=K^{p}(x)\setminus R^{p})$. \label{metricinterprete} \end{rem}
\begin{rem} \label{conductorvsgenuschange} From Theorems \ref{q}.3 and 4, we obtain a relation \begin{align*} 2\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/R\otimes_{K}K(x^{1/p})) &=2\delta_{10}\\ &=[L(1_{x}):L]\mathop{\mathrm{length}}\nolimits_{R(1_{x})}(R(1_{x})/\mathcal{C}_{10})\\ &=\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/\mathcal{C}_{10}) \end{align*} of the conductor and the $\delta$-invariant. As we will see below, this equation follows from the theory of duality of modules. Let $A\subset B$ be an extension of $1$-dimensional Gorenstein local domains such that $B/A$ is an $A$-module of finite length. We will show that \begin{align} \label{2times} 2 \mathop{\mathrm{length}}\nolimits_{A}(B/A) = \mathop{\mathrm{length}}\nolimits_{A} (B/\mathcal{C}), \end{align} where $\mathcal{C}$ is the image of \[ \omega_{B/A} = \mathop{\mathrm{Hom}}\nolimits_{A}(B, A) \hookrightarrow A \] via the trace map. Note that, in the setting of Notation-Definition \ref{deltaconddef}, the extension $R\otimes_{K}K(x^{1/p})\subset R(1_{x})$ satisfies the assumptions on $A\subset B$. We also note that, in this case, $\mathcal{C}$ is none other than $\mathcal{C}_{10}$, and $\mathop{\mathrm{length}}\nolimits_{R}$ and $\mathop{\mathrm{length}}\nolimits_{A}$ have the same value for $A$-modules. To show that (\ref{2times}) holds, it suffices to show that \[ \mathop{\mathrm{length}}\nolimits_{A}(A/\mathcal{C}) = \mathop{\mathrm{length}}\nolimits_{A}(B/A), \] since we have $(B/\mathcal{C})/(A/\mathcal{C})=B/A$. By applying $R\mathop{\mathrm{Hom}}\nolimits_{A}(-,A)$ to the exact sequence \[ 0 \rightarrow A \rightarrow B \rightarrow B/A \rightarrow 0, \] we obtain an exact sequence \[ 0 \rightarrow \mathop{\mathrm{Hom}}\nolimits_{A}(B,A) \rightarrow \mathop{\mathrm{Hom}}\nolimits_{A}(A,A) \rightarrow \mathop{\mathrm{Ext}}\nolimits^{1}_{A}(B/A,A) \rightarrow 0, \] i.e.,\,we have $A/\mathcal{C}\simeq\mathop{\mathrm{Ext}}\nolimits^{1}_{A}(B/A,A)$. 
By local duality, \[ \mathop{\mathrm{Ext}}\nolimits^{1}_{A}(B/A,A) \simeq \mathop{\mathrm{Hom}}\nolimits_{A}(B/A,E), \] where $E$ is an injective hull of the residue field of $A$. Since Matlis duality preserves the length of modules, we obtain the desired equality. \end{rem}
\begin{proof}[Proof of Theorem \ref{q}.1] Since $x^{1/p}\in L$, we have $1 \leq q(x)$. Next we show that $q(x)<\infty$. Note that, since $R$ is geometrically reduced over $K$, we have $x^{1/p}\notin R$. Since $\widehat{R}$ is faithfully flat over $R$, we also have $x^{1/p}\notin\widehat{R}$. Since $\widehat{R}$ is a closed subset of $\widehat{R(1_{x})}$, the distance from $x^{1/p}$ to $\widehat{R}$ is positive; hence $q(x)<\infty$, and there exists an element $r$ of $R$ which satisfies $q_{r}=q(x)$. \end{proof}
To show the rest of Theorem \ref{q}, we need some lemmas.
\begin{lem} For any lift $r\in R$ of $x^{1/p}$, $q_{r}<q(x)$ if and only if $p$ divides $q_{r}$ and the image of $r-x^{1/p}$ in $\mathfrak{m}(1_{x})^{e_{1}q_{r}/p}/\mathfrak{m}(1_{x})^{(e_{1}q_{r}/p)+1}$ is contained in $\mathfrak{m}^{q_{r}/p}/\mathfrak{m}^{(q_{r}/p)+1}$. \label{q<q} \end{lem}
\begin{proof} By the definition of $q(x)$, $q_{r}<q(x)$ if and only if there exists an element $r'\in R$ such that \begin{align} v(r'^{p}+r^{p}-x)>v(r^{p}-x)(\Leftrightarrow v_{1}(r'+r-x^{1/p})>v_{1}(r-x^{1/p})). \label{forq<q} \end{align} Since the inequality (\ref{forq<q}) holds only if $v(r'^{p})=v(r^{p}-x)$, we may assume that $p$ divides $q_{r}$. Then we have a natural injection $$\mathfrak{m}^{q_{r}/p}/\mathfrak{m}^{(q_{r}/p)+1}\hookrightarrow \mathfrak{m}(1_{x})^{e_{1}q_{r}/p}/\mathfrak{m}(1_{x})^{(e_{1}q_{r}/p)+1}.$$ Then there exists $r'$ satisfying the inequality (\ref{forq<q}) if and only if the image of $r-x^{1/p}$ in $\mathfrak{m}(1_{x})^{e_{1}q_{r}/p}/\mathfrak{m}(1_{x})^{(e_{1}q_{r}/p)+1}$ is contained in $\mathfrak{m}^{q_{r}/p}/\mathfrak{m}^{(q_{r}/p)+1}$. Indeed, if such an element $r'$ exists, then $r-x^{1/p}= -r' + (r'+r-x^{1/p})$ is contained in $\mathfrak{m}^{q_{r}/p}/\mathfrak{m}^{(q_{r}/p)+1}$, and the converse also holds in a similar way.
Hence, Lemma \ref{q<q} holds. \end{proof}
\begin{lem} $e_{1}=p$ if and only if there exists a lift $t\in R$ of $x^{1/p}$ such that $q_{t}$ is not divisible by $p$. In this case, such $t$ satisfies $q(x)=q_{t}$. In particular, $e_{1}=p$ if and only if $q(x)$ is not divisible by $p$. \label{r-x} \end{lem}
\begin{proof} If $e_{1}=1$, for any lift $r\in R$ of $x^{1/p}$, $q_{r}=pv_{1}(r-x^{1/p})$ is divisible by $p$. In particular, in this case, $q(x)$ is divisible by $p$. If $e_{1}=p$, we have a natural isomorphism $\mathfrak{m}^{pi}/\mathfrak{m}^{pi+1}\simeq \mathfrak{m}(1_{x})^{i}/\mathfrak{m}(1_{x})^{i+1}$ for every $i\in \mathbb{N}$. In this case, for any lift $r\in R$ of $x^{1/p}$, $q_{r}$ is not divisible by $p$ if and only if $q_{r}=q(x)$ by Lemma \ref{q<q}.
(Note that such $r$ exists by Theorem \ref{q}.1.) \end{proof}
\begin{proof}[Proof of Theorems \ref{q}.2 and \ref{q}.3] Theorem \ref{q}.2 follows from Lemma \ref{r-x}.
Suppose that $e_{1}=p$ and take an element $s\in R$ satisfying the condition on $t$ in Lemma \ref{r-x}. Then $\varpi_{1}^{p}$ is a uniformizer of $R$. Note that $R\otimes_{K}K(x^{1/p})=R[x^{1/p}]=R[s-x^{1/p}]$. Since the characteristic of the field of fractions of $\widehat{R}$ is $p$, there exists a subfield $L'$ of $\widehat{R}$ such that the composite homomorphism $L'\hookrightarrow \widehat{R}\twoheadrightarrow L$ is an isomorphism by the Cohen structure theorem. Then $\widehat{R}\hookrightarrow (R\otimes_{K}K(x^{1/p}))^{\wedge}\hookrightarrow \widehat{R(1_{x})}$ can be identified with $L'[[\varpi_{1}^{p}]]\hookrightarrow L'[[\varpi_{1}^{p}, s-x^{1/p}]]\hookrightarrow L'[[\varpi_{1}]]$ by {\cite[I \S6 Proposition 18]{Ser}}. (Here, $(R\otimes_{K}K(x^{1/p}))^{\wedge}$ denotes the completion of $R\otimes_{K}K(x^{1/p})$.) Then we have \begin{equation*} \begin{split} \delta_{10}&=\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/(R\otimes_{K}K(x^{1/p})))\\ &=\mathop{\mathrm{length}}\nolimits_{\widehat{R}}(\widehat{R(1_{x})}/(R\otimes_{K}K(x^{1/p}))^{\wedge})\\ &=\dim_{L'}(L'[[\varpi_{1}]]/L'[[\varpi_{1}^{p}, s-x^{1/p}]])\\ &=\frac{(p-1)(q(x)-1)}{2} \end{split} \end{equation*} and $$\mathcal{C}_{10}=\mathfrak{m}(1_{x})^{(p-1)(q(x)-1)}$$ by Lemma \ref{r-x} and Lemma \ref{coin}. \end{proof}
\begin{lem} Let $M$ be a field, $T$ a variable, and $m, n$ positive integers. Suppose that $m$ and $n$ are coprime. Let $M[[T]]$ be the ring of formal power series and $\gamma(T), \delta(T)\in M[[T]]$ units. Then we have $$\dim_{M}(M[[T]]/M[[T^{m}\gamma(T), T^{n}\delta(T)]])=\frac{(m-1)(n-1)}{2}$$ and the conductor of $M[[T^{m}\gamma(T), T^{n}\delta(T)]]$ in $M[[T]]$ is $(T^{(m-1)(n-1)})$. \label{coin} \end{lem}
\begin{proof} In the case where $\gamma(T)=\delta(T)=1$, this lemma is the so-called ``Frobenius coin problem''. In this case, there exist natural numbers $\alpha_{1},\ldots,\alpha_{\frac{(m-1)(n-1)}{2}}$ such that $$M[[T]]=M[[T^{m},T^{n}]]\oplus\bigoplus_{1\leq i\leq \frac{(m-1)(n-1)}{2}}MT^{\alpha_{i}},$$ and, for any $\beta\geq (m-1)(n-1)$, there also exists a polynomial $F_{\beta}(X,Y)\in M[X,Y]$ such that $$T^{\beta}=F_{\beta}(T^{m},T^{n}).$$
Next, we prove Lemma \ref{coin} for general $\gamma(T)$ and $\delta(T)$. We may assume that the constant terms of $\gamma(T)$ and $\delta(T)$ are $1$. By an induction argument and the fact that, for any natural number $l<nm$, there is at most one pair $(i,j)$ of natural numbers satisfying $ni+mj=l$, it follows that $$M[[T]]=M[[T^{m}\gamma(T),T^{n}\delta(T)]]\oplus\bigoplus_{1\leq i\leq \frac{(m-1)(n-1)}{2}}MT^{\alpha_{i}}.$$ Moreover, for any natural number $\beta\geq(m-1)(n-1)$, we have $$T^{\beta}\equiv F_{\beta}(T^{m}\gamma(T),T^{n}\delta(T)) \pmod{(T^{\beta+1})}.$$ Again by an induction argument, we can show that the conductor of $M[[T^{m}\gamma(T), T^{n}\delta(T)]]$ in $M[[T]]$ is $(T^{(m-1)(n-1)})$. \end{proof}
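The count in Lemma \ref{coin} rests on the classical Sylvester--Frobenius facts for the numerical semigroup generated by coprime $m$ and $n$: it has exactly $\frac{(m-1)(n-1)}{2}$ gaps, and every integer $\geq (m-1)(n-1)$ is representable. The following short script is an independent numerical sanity check of these two facts (not part of the proof), for a few small pairs of our choosing:

```python
from math import gcd

def representable(l, m, n):
    # is l = i*m + j*n for some natural numbers i, j?
    return any((l - i * m) % n == 0 for i in range(l // m + 1))

def gaps(m, n):
    # gaps of the numerical semigroup <m, n>; all gaps lie below the
    # conductor (m-1)(n-1)
    bound = (m - 1) * (n - 1)
    return [l for l in range(bound) if not representable(l, m, n)]

for m, n in [(2, 3), (3, 5), (4, 7), (5, 9)]:
    assert gcd(m, n) == 1
    # dimension count of the lemma: (m-1)(n-1)/2 gaps
    assert len(gaps(m, n)) == (m - 1) * (n - 1) // 2
    bound = (m - 1) * (n - 1)
    # conductor statement: every l >= (m-1)(n-1) is representable
    assert all(representable(l, m, n) for l in range(bound, bound + m * n))
```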
\begin{lem} Suppose that $f_{1}=p$. Let $r\in R$ be a lift of $x^{1/p}$. For such $r$, we write $q'_{r}$ for $q_{r}/p$, $u_{r}$ for the element $(r-x^{1/p})/(\varpi^{q'_{r}})$ in $R(1_x)$, and $\overline{u_{r}}$ for the image of $u_{r}$ in $L(1_{x})$. The following are equivalent: \begin{enumerate} \item $R[u_{r}]=R(1_{x})$. \item $L[\overline{u_{r}}]=L(1_{x})$. \item $q_{r}=q(x)$. \end{enumerate} \label{choiceofr} \end{lem}
\begin{proof} By {\cite[I \S6 Proposition 15]{Ser}}, $R(1_{x})=R[u_{r}]$ if and only if $L(1_{x})=L(\overline{u_{r}})$. Since we have $L\varpi^{q'_{r}}=\mathfrak{m}^{q'_{r}}/\mathfrak{m}^{q'_{r}+1}\subset \mathfrak{m}(1_{x})^{q'_{r}}/\mathfrak{m}(1_{x})^{q'_{r}+1}$, $\overline{u_{r}}\in L$ if and only if the image of $r-x^{1/p}$ in $\mathfrak{m}(1_{x})^{q'_{r}}/\mathfrak{m}(1_{x})^{q'_{r}+1}$ is contained in $\mathfrak{m}^{q'_{r}}/\mathfrak{m}^{q'_{r}+1}$, which is equivalent to $q_{r}<q(x)$ by Lemma \ref{q<q}. \end{proof}
\begin{proof}[Proof of Theorems \ref{q}.4 and \ref{q}.5] We keep the notation and the assumption of Lemma \ref{choiceofr}. Let $s\in R$ be a lift of $x^{1/p}$ satisfying $q(x)=q_{s}$ (cf.\,Theorem \ref{q}.1). Write $q'(x)$ for $q(x)/p$. Since $s-x^{1/p}=\varpi^{q'(x)}u_{s}$ and $R\otimes_{K}K(x^{1/p})=R[x^{1/p}]$, we have $R\otimes_{K}K(x^{1/p})=R[u_{s}\varpi^{q'(x)}]$. Note that we have $R(1_{x})=R[u_{s}]=\sum_{0\leq i\leq p-1}Ru_{s}^{i}$. Then we have $$R(1_{x})\varpi^{pq'(x)}=\sum_{0\leq i\leq p-1}Ru_{s}^{i}\varpi^{pq'(x)}=\sum_{0\leq i\leq p-1}R(s-x^{1/p})^{i}\varpi^{(p-i)q'(x)}\subset R\otimes_{K}K(x^{1/p}).$$ Therefore, we have \begin{equation*} \begin{split} \delta_{10}&=\mathop{\mathrm{length}}\nolimits_{R}(R(1_{x})/R\otimes_{K}K(x^{1/p}))\\ &=\mathop{\mathrm{length}}\nolimits_{R}((R(1_{x})/R[u_{s}]\varpi^{pq'(x)})/(R[u_{s}\varpi^{q'(x)}]/R[u_{s}]\varpi^{pq'(x)})). \end{split} \end{equation*} Write $\overline{\mathfrak{m}(1_{x})}$ for the image of $\mathfrak{m}(1_{x})$ in $R(1_{x})/R[u_{s}]\varpi^{pq'(x)}$. Then, for every natural number $j$, we have $$\overline{\mathfrak{m}(1_{x})}^{j}/\overline{\mathfrak{m}(1_{x})}^{j+1}=\underset{\substack{1\leq i\leq p-1\\j<pq'(x)i}}{\bigoplus}L u_{s}^{i}\varpi^{j}.$$ Therefore, we have $$\delta_{10}=\frac{p(p-1)q'(x)}{2}=\frac{(p-1)q(x)}{2},\quad \mathcal{C}_{10}=\mathfrak{m}(1_{x})^{pq'(x)-q'(x)}=\mathfrak{m}(1_{x})^{\frac{(p-1)q(x)}{p}}.$$
Theorem \ref{q}.5 follows from Theorems \ref{q}.1, \ref{q}.3, and \ref{q}.4. \end{proof}
Suppose that $x^{1/p^{2}}\notin L(1_{x})$. By replacing $R$ and $K$ with $R(1_{x})$ and $K(x^{1/p})$, respectively, we define $R(2_{x}), e_{2}, f_{2}, g_{21}, \delta_{21}$, and $q(x^{1/p})$ in a similar way. Note that $R(2_{x})$ is finite over $R(1_{x})$ since $R$ is $\{x\}$-Japanese.
\begin{lem} Let $r_{1}\in R(1_{x})$ be a lift of $x^{1/p^{2}}\in L(1_{x})$ satisfying $v_{1}(r_{1}^{p}-x^{1/p})=q(x^{1/p})$. Note that $r_{1}^{p}\in R$ is a lift of $x^{1/p}$. \begin{enumerate} \item Suppose that $e_{1}=e_{2}=p$. We have $q(x^{1/p})=q_{r_{1}^{p}}=q(x)$ and $\delta_{21}=\delta_{10}$. \item Suppose that $e_{1}=p$ and $e_{2}=1$. We have $q(x^{1/p})=q_{r_{1}^{p}}< q(x)$ and $\delta_{21}\leq \delta_{10}$. \item Suppose that $e_{1}=e_{2}=1$. We have $pq(x^{1/p})=q_{r_{1}^{p}}\leq q(x)$ and $p\delta_{21}\leq \delta_{10}$. Moreover, the equality $pq(x^{1/p})=q(x)$ holds if and only if the equality $p\delta_{21}=\delta_{10}$ holds. Furthermore, $L(2_{x})$ is a simple extension field over $L$ if and only if $pq(x^{1/p})=q(x)$. \item Suppose that $e_{1}=1$ and $e_{2}=p$. We have $pq(x^{1/p})=q_{r_{1}^{p}}\leq q(x)$ and $p\delta_{21}+\frac{p(p-1)}{2}\leq \delta_{10}$. Moreover, the equality $p q(x^{1/p})= q(x)$ holds if and only if the equality $p\delta_{21}+\frac{p(p-1)}{2} = \delta_{10}$ holds. \end{enumerate} \label{qlem} \end{lem}
Before we prove Lemma \ref{qlem}, we state one of the main theorems of this paper (cf.\,Theorem \ref{intromain2}), which follows immediately from this lemma.
\begin{thm} We keep the notation of Lemma \ref{qlem}. For each $1\leq i \leq 4$, assertion $i$ of this theorem is stated under the same assumption as assertion $i$ of Lemma \ref{qlem}. Suppose that $L$ is finite over $K$. Then we have the following: \begin{enumerate} \item $pg_{21}=g_{10}$. \item $pg_{21}\leq g_{10}$. \item $pg_{21}\leq g_{10}$. Moreover, the equality holds if $L(2_{x})$ is a simple field extension over $L$. \item $pg_{21}+\frac{p(p-1)}{2}\leq g_{10}$. \end{enumerate} \label{mainineq} \end{thm}
\begin{proof}[Proof of Lemma \ref{qlem}] First, we have an inequality $q_{r_{1}^{p}}\leq q(x)$ by the definition of $q(x)$.
We show assertions 1 and 2. Suppose that $e_{1}=p$. Then we have $$q(x^{1/p})=v_{1}(r_{1}^{p}-x^{1/p})=v((r_{1}^{p})^{p}-x)=q_{r_{1}^{p}}.$$ If $e_{2}=p$, since $q_{r_{1}^{p}}=q(x^{1/p})$ is not divisible by $p$ by Lemma \ref{r-x}, $q_{r_{1}^{p}}=q(x)$ again by Lemma \ref{r-x}. Hence, assertion 1 holds. If $e_{2}=1$, since $q(x^{1/p})$ is divisible by $p$ and $q(x)$ is not divisible by $p$ by Lemma \ref{r-x}, we have $q(x^{1/p})<q(x)$.
Next, we show assertions 3 and 4. Suppose that $e_{1}=1$. Then we have $$pq(x^{1/p})=pv_{1}(r_{1}^{p}-x^{1/p})=v((r_{1}^{p})^{p}-x)=q_{r_{1}^{p}}.$$ If $e_{2}=p$, assertion 4 follows from Theorems \ref{q}.2, \ref{q}.3, and \ref{q}.4. Suppose that $e_{2}=1$. Write $u'$ (resp.\,$\overline{u'}$) for the element \[ \frac{r_{1}^{p}-x^{1/p}}{\varpi^{q'(x^{1/p})}}\in R(2_{x}) \] (resp.\,the image of $u'$ in $L(2_{x})$). Since $e_{1}=1$, $\varpi$ is a uniformizer of $R(1_{x})$ and $L(2_{x})=L(1_{x})[\overline{u'}]$ by Lemma \ref{choiceofr}. Moreover, we have $$u'^{p}=(\frac{r_{1}^{p}-x^{1/p}}{\varpi^{q'(x^{1/p})}})^{p}=\frac{(r_{1}^{p})^{p}-x}{\varpi^{pq'(x^{1/p})}}.$$ Then, by Lemma \ref{choiceofr}, the following are equivalent: \begin{itemize} \item $L(2_{x})$ is a simple field extension over $L$. \item $L(1_{x})=L[\overline{u'}^{p}]$. \item $q_{r_{1}^{p}}=q(x)$. \end{itemize} Therefore, assertion 3 follows from Theorems \ref{q}.2 and \ref{q}.4. \end{proof}
\begin{cor} Suppose that $e_{1}=e_{2}=p$ and $R(1_{x})$ is $\{x^{1/p}\}$-normal. Then $R$ is $\{x\}$-normal. \end{cor}
\begin{proof} This follows from Theorem \ref{q}.3 and Lemma \ref{qlem}.1. \end{proof}
By using the above results, we can treat the case of general extensions as follows.
\begin{not-set} Let $K, R, \mathfrak{m}, \varpi, v$, and $L$ be as above. We suppose that $L$ is a finite extension of $K$. Let $B$ be a subset of a $p$-basis over $K$. Suppose that $R$ is $B$-integral and $B$-Japanese. We write $R(1_{B})$ for the normalization of $R\otimes_{K}K(B^{1/p})$ in its field of fractions as before. We put $g_{10}':= \dim_{K(B^{1/p})}(R(1_{B})/R\otimes_{K}K(B^{1/p}))$. By replacing $K, R,$ and $B$ with $K(B^{1/p}), R(1_{B}),$ and $B^{1/p}$, respectively, we define $R(2_{B})$ and $g'_{21}$ in a similar way. We note that $g_{10}',g'_{21} < \infty$ since $R$ is $B$-Japanese and $L$ is finite over $K$. \label{notageneral} \end{not-set}
\begin{prop} In the setting of Notation-Setting \ref{notageneral}, we have $p g_{21}' \leq g_{10}'$. \label{Bgeqp} \end{prop} \begin{proof} For each $i \in \{ 1,2\}$, there exist elements $r_{i1}, \ldots, r_{is_{i}} \in R(i_{B})$ such that \[ R(i_{B})= \sum_{j}(R\otimes_{K}K(B^{1/p^{i}}))r_{ij}. \] Then we can take a finite subset $B' \subset B$ such that $r_{ij} \in \mathrm{Frac} (R\otimes_{K}K(B'^{1/p^{i}}))$ for any $i,j$. Then the problem can be reduced to the case for $B'$, so we may suppose that $B$ is a finite set. We write $B = \{x_{1}, \ldots, x_{m}\}.$ We consider extensions $K \subset K(x_{1}^{1/p}) \subset K(B^{1/p}) \subset K(B^{1/p})(x_{1}^{1/p^{2}})$. We put \[ g_{10}^{x_{1}} := \dim_{K(x_{1}^{1/p})} (R(1_{x_{1}})/ R \otimes_{K} K (x_{1}^{1/p})) \] and \[ g_{21}^{x_{1}} := \dim_{K(B^{1/p})(x_{1}^{1/p^{2}})} (R(1_{B})(1_{x_{1}^{1/p}})/ R(1_{B})\otimes_{K(B^{1/p})} K (B^{1/p})(x_{1}^{1/p^{2}}) ). \] It suffices to show that $p g_{21}^{x_{1}} \leq g_{10}^{x_{1}}$. Indeed, by using the same inequality with respect to $x_{2}$ for the normalization of $R\otimes_{K} K(x_{1}^{1/p})$, and iterating these calculations, we obtain the desired inequality.
Let $v'$ be the normalized valuation of $R(1_{B})$ and $L'$ the residue field of $R(1_{B})$. We put $v'|_{R}= p^{i}v$ and $[L':L] = p^{j}$. We note that $i$ is $1$ or $0$, and $i+j=m$. We may assume that $x_{1}^{1/p^{2}} \in L'$. Otherwise, we have $g_{21}^{x_{1}} = 0$ by \cite[I \S6 Proposition 15]{Ser}. Therefore, we can take $r\in R$ such that $q(x_{1}) = q_{r}$ as in Notation \ref{notationq}. Moreover, we can take $r' \in R(1_{B})$ such that $q(x_{1}^{1/p}) = q_{r'}$. We have \begin{align*} q_{r'} &= v' (r'^{p}- x_{1}^{1/p})\\
&= p^{i-1} v (r'^{p^2} - x_{1})\\
&\leq p^{i-1} v(r^{p}- x_{1}) = p^{i-1} q_{r}. \end{align*} First, we consider the case where $p \nmid q_{r'}$. In this case, we have \begin{align*} g_{21}^{x_{1}} &= \frac{p-1}{2} (q_{r'}-1) [L': K(B^{1/p})(x_{1}^{1/p^2})] \\ &\leq \frac{p-1}{2} (p^{i-1}q_{r}-1) [L:K(x_{1}^{1/p})] p^{j-m} \\ &\leq \frac{p-1}{2} p^{i-1} (q_{r}-1) [L:K(x_{1}^{1/p})] p^{j-m} \leq \frac{1}{p} g_{10}^{x_{1}}. \end{align*} Next, we consider the case where $p \mid q_{r'}$. If $p \mid q_{r}$, then we have \begin{align*} g_{21}^{x_{1}} &= \frac{p-1}{2}q_{r'}[L':K(B^{1/p})(x_{1}^{1/p^{2}})]\\ & \leq \frac{p-1}{2} p^{i-1} q_{r} [L:K(x_{1}^{1/p})]p^{j-m} =\frac{1}{p} g_{10}^{x_{1}}. \end{align*} On the other hand, if $p \nmid q_{r}$, then we have $i=1$, and the inequality \begin{align*} g_{21}^{x_{1}} &= \frac{p-1}{2}q_{r'}[L':K(B^{1/p})(x_{1}^{1/p^{2}})]\\ & \leq \frac{p-1}{2} (q_{r}-1) [L:K(x_{1}^{1/p})]\frac{p^{m-1}}{p^{m}} =\frac{1}{p} g_{10}^{x_{1}} \end{align*} follows from $q_{r'} \leq q_{r}-1$. Therefore, we have $p g_{21}^{x_{1}} \leq g_{10}^{x_{1}}$ in any case, which finishes the proof of Proposition \ref{Bgeqp}. \end{proof}
\begin{not-set} Let $\overline{K}, C$, and $G$ be as in Section \ref{main1section}. Let $c$ be a closed point of $C$. Write $C_{K(x^{1/p})}$ (resp.\,$C(1_{x})$) for the scheme $C\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, K(x^{1/p})$ (resp.\,the normalization of $C_{K(x^{1/p})}$ in its function field). We use the notation of this section assuming $R=O_{C,c}$, and we suppose $x^{1/p}\in L$. Let $G_{c}$ be the algebraic group over $K(x^{1/p})$ which represents the functor defined by $R(1_{x})^{\ast}/(R\otimes_{K}K(x^{1/p}))^{\ast}$ (cf.\,the proofs of Lemmas \ref{connsmunip}.3 and \ref{connsmunip}.4). By the proofs of Lemma \ref{connsmunip}.2 and Proposition \ref{connunip}, $G_{c}$ is a connected unipotent algebraic group. Moreover, $G_{c}$ is $p$-torsion by Theorem \ref{overclosed}. \label{forKnot} \end{not-set}
\begin{prop} We work under the setting of Notation-Setting \ref{forKnot}. Let $G_{c,v}$ (resp.\,$G_{c,s}$) be the largest vector subgroup (resp.\,the largest $K(x^{1/p})$-split algebraic subgroup) of $G_{c}$ (cf.\,the proof of Lemma \ref{maximalvec}). (Note that $G_{c,v}$ coincides with the largest $p$-torsion $K(x^{1/p})$-split unipotent subgroup of $G_{c}$ by Lemma \ref{splitvector}.) Then we have $G_{c,v}=G_{c,s}$. Moreover, the following are equivalent: \begin{enumerate} \item $G_{c,v}(=G_{c,s})=G_{c}$. \item $e_{1}=p$. \end{enumerate} \label{pointsubgroup} \end{prop}
\begin{proof} Since $G_{c}$ is $p$-torsion and unipotent, the desired equality $G_{c,v}=G_{c,s}$ holds. Next, we show the equivalence of assertions 1 and 2. By Lemmas \ref{connsmunip}.2, \ref{connsmunip}.3, and \ref{connsmunip}.4 and the proof of Proposition \ref{connunip}, it suffices to show that $e_{1}=p$ if and only if the algebraic group $\mathrm{Coker}\,((\mathbb{G}_{m})_{L/K}\to(\mathbb{G}_{m})_{L(1_{x})/K})$ is $K(x^{1/p})$-split. Since we have \begin{equation*} \begin{split} &\mathrm{Coker}\,((\mathbb{G}_{m})_{L/K}\to(\mathbb{G}_{m})_{L(1_{x})/K})\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L\\ \simeq\,&\mathrm{Coker}\,((\mathbb{G}_{m})_{L/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L\to(\mathbb{G}_{m})_{L(1_{x})/K}\times_{\mathrm{Spec}\, K}\mathrm{Spec}\, L), \end{split} \end{equation*} these algebraic groups are $L$-split if and only if $L=L(1_{x})$ by Lemma \ref{forsplit}, \cite[Exercises 14.3.12. (2)]{Sp}, and Lemma \ref{Weilstr}. Therefore, the desired equivalence holds. \end{proof}
\begin{thm} We work under the setting of Notation-Setting \ref{forKnot}. We define the following four algebraic subgroups of the algebraic group $\mathrm{Pic}^{0}_{C_{K(x^{1/p})}/K(x^{1/p})}$: \begin{itemize} \item The smallest connected linear algebraic subgroup $G_{x}$ such that the quotient algebraic group $\mathrm{Pic}^{0}_{C_{K(x^{1/p})}/K(x^{1/p})}/G_{x}$ is an abelian variety over $K(x^{1/p})$ (cf.\,\cite[Theorem 9.2.1]{BLR}). \item $G_{x}(1):=\mathrm{Ker}\,(\mathrm{Pic}^{0}_{C_{K(x^{1/p})}/K(x^{1/p})}\to\mathrm{Pic}^{0}_{C(1_{x})/K(x^{1/p})})$. \item The largest vector subgroup $G_{x,v}$ (cf.\,Lemma \ref{splitvector} and Lemma \ref{maximalvec}). \item The largest $K(x^{1/p})$-split algebraic subgroup $G_{x,s}$ (cf.\,the proof of Lemma \ref{maximalvec}). \end{itemize} Then we have $$G_{x,v}=G_{x,s}\subset G_{x}(1)\subset G_{x}.$$ Moreover, the following are equivalent: \begin{enumerate} \item $G_{x,v}=G_{x,s}=G_{x}(1).$ \item For any singular point $c$ of $C$, the ramification index ``$e_{1}$'' for $R=O_{C,c}$ is equal to $p$. \end{enumerate} \label{x01cor} \end{thm}
\begin{proof} First, we have $G_{x,v}\subset G_{x,s}$. By Lemma \ref{splitvector} and Theorem \ref{overclosed}, we have a natural injective homomorphism $$(G_{x,s}/G_{x,v})\times_{\mathrm{Spec}\, K(x^{1/p})}\mathrm{Spec}\, K^{1/p}\hookrightarrow \mathrm{Pic}^{0}_{C(1)/K^{1/p}}.$$ Since $G_{x,s}/G_{x,v}$ is $K(x^{1/p})$-split by \cite[Exercises 14.3.12. (2)]{Sp} and $\mathrm{Pic}^{0}_{C(1)/K^{1/p}}$ does not contain algebraic groups isomorphic to $\mathbb{G}_{a,K^{1/p}}$ by {\cite[Proposition 9.2.4]{BLR}}, $G_{x,v}=G_{x,s}$ holds. Since $\mathrm{Pic}^{0}_{C(1_{x})/K(x^{1/p})}$ does not contain algebraic groups isomorphic to $\mathbb{G}_{a,K(x^{1/p})}$ again by \cite[Proposition 9.2.4]{BLR}, we have $G_{x,v}\subset G_{x}(1)$. Next, we show that $G_{x}(1)\subset G_{x}$.
By Lemma \ref{fundamentalstructure}.4 and 5, the algebraic subgroup $G_{x}(1)\times_{\mathrm{Spec}\, K(x^{1/p})}\mathrm{Spec}\,\overline{K}$ is contained in $G$. Moreover, by the definition of $G$ and $G_{x}$, we have $G\subset G_{x}\times_{\mathrm{Spec}\, K(x^{1/p})}\mathrm{Spec}\,\overline{K}$.
The desired equivalence follows from Lemma \ref{connsmunip} and Proposition \ref{pointsubgroup}. \end{proof}
\label{simplecase}
\section{Examples} \label{eg} In this section, we give several examples of calculations of $\delta$-invariants. In particular, we have the following. \begin{prop} The four cases appearing in Lemma \ref{qlem} actually occur. Moreover, the first inequality of Lemma \ref{qlem}.3 (resp.\,Lemma \ref{qlem}.4) can be strict, and it can also be an equality. \end{prop} Throughout this section, we fix an algebraically closed field $k$ of characteristic $p$, and we put $K:= k(t)$. Moreover, let $n \geq 2$ be an integer which is coprime to $p$.
\subsection*{(1): the case where $e_{1}=e_{2}=p$} \label{ramram} We put \[ R := (K[X,Y]/ (X^{n}-(Y^{p^{2}}-t)))_{(X, Y^{p^{2}}-t)}. \] To show that this $R$ satisfies the assumptions on $R$ in Section \ref{simplecase}, we use the following argument:\\ By the Jacobian criterion for varieties over $k$ (resp.\,for varieties over $K$), one can show that $R$ is regular and integral (resp.\,geometrically reduced over $K$). Moreover, since the residue field of $R$ is purely inseparable over $K$, the field $\mathrm{Frac}(R)$ is geometrically connected over $K$. Therefore, $R$ is geometrically integral over $K$.$\cdots(\ast)$\\ Then the normalization of $R\otimes_{K}K^{1/p}$ is \[ R(1) = (K^{1/p}[Z,Y]/ (Z^{n} - (Y^{p}-t^{1/p})))_{(Z, Y^{p}-t^{1/p})}, \] where we put \[ Z := \frac{(Y^{p}-t^{1/p})^{a}}{X^{b}} \] so that $Z^p =X$. Here, $a,b$ are integers such that $na-pb =1$. Therefore, we have $t^{1/p^{2}} \in L(1)$ and $e_{1}=e_{2}=p$.
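In this example (and similarly in the later ones), the identity $Z^{p}=X$ combines the freshman's dream $(u-v)^{p}=u^{p}-v^{p}$ in characteristic $p$ with the B\'ezout relation $na-pb=1$: indeed, $Z^{p}=(Y^{p^{2}}-t)^{a}/X^{pb}=X^{na-pb}=X$. The following is a minimal independent check of both ingredients, with small test pairs $(p,n)$ of our own choosing (not taken from the text):

```python
from math import comb, gcd

def bezout_exponents(n, p):
    """Return natural numbers (a, b) with n*a - p*b = 1, so that, granting
    the freshman's dream, Z = (Y^p - t^{1/p})^a / X^b satisfies
    Z^p = (Y^{p^2} - t)^a / X^{pb} = X^{na - pb} = X."""
    a = pow(n, -1, p)        # n*a ≡ 1 (mod p), with 0 < a < p
    b = (n * a - 1) // p     # then b is a natural number by construction
    return a, b

for p, n in [(2, 3), (3, 2), (5, 4), (7, 3)]:
    assert gcd(p, n) == 1
    # freshman's dream: the middle binomial coefficients vanish mod p
    assert all(comb(p, k) % p == 0 for k in range(1, p))
    a, b = bezout_exponents(n, p)
    assert a >= 0 and b >= 0 and n * a - p * b == 1
```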
\subsection*{(2): the case where $e_{1}=p$ and $e_{2} =1$} We put \[ R := (K[X,Y]/ (X^{p+n}-X^{p}-Y^{p}(Y^{p^{2}}-t)))_{(X,Y^{p^{2}}-t)}, \] which is regular and geometrically integral over $K$ by the argument $(\ast)$. The normalization of $R\otimes_{K}K^{1/p}$ is \[ R (1) = (K^{1/p}[Z,Y]/(Z^{p+n}-Z^{p}-Y(Y^{p}-t^{1/p})))_{(Z, Y^{p}-t^{1/p})}, \] where we put \[ Z:= \frac{(X+ Y(Y^{p}-t^{1/p}))^{a}}{X^{b}} \] so that $Z^p =X$. Here, $a,b$ are integers such that $a(p+n) - bp =1$. Therefore, we have $e_{1}=p$ and $e_{2} =1$.
\subsection*{(3): the case where $e_{1}= e_{2} =1$} We put \[ R := (K[X,Y]/(X^{p^2}-Y(Y^{p}-t)))_{(X, Y^{p}-t)}, \] which is regular and geometrically integral over $K$ by the argument $(\ast)$. The normalization of $R\otimes_{K}K^{1/p}$ is \[ R(1) = (K^{1/p}[X,Z]/(X^{p}-Z(Z^{p}-t^{1/p})))_{(X,Z^{p}-t^{1/p})}, \] where we put \[ Z:= \frac{X^{p}}{Y-t^{1/p}} \] so that $Z^p=Y$. Therefore, we have $t^{1/p^{2}} \in L(1)$ and $e_{1}= e_{2} =1$. Moreover, we have $pq(t^{1/p})= q(t)$ (cf.\,Lemma \ref{qlem}.3).
We also give another example where the first inequality of Lemma \ref{qlem}.3 is strict (i.e.\,$L(2_{x})$ is not a simple extension field of $L$). We put $K':= k(s,t)$. We put \[ R' := (K'[X,Y]/ (YX^{p^{3}}-s^{p}X^{p^{2}}-(Y^{p}-t)))_{(X, Y^{p}-t)}, \] which is regular and geometrically integral over $K'$ by the argument $(\ast)$. The normalization of $R'\otimes_{K'}K'(t^{1/p})$ is \[ R'(1_{t}) = (K'(t^{1/p})[X,Z]/ (ZX^{p^{2}}- sX^{p}-(Z^{p}-t^{1/p})))_{(X, Z^{p}-t^{1/p})}, \] where we put \[ Z := \frac{sX^{p}+ (Y-t^{1/p})}{X^{p^{2}}} \] so that $Z^p=Y$. Therefore, we have $t^{1/p^{2}} \in L(1_{t})$, $q(t)= p^{3}$, and $q(t^{1/p})= p$. Thus, we have $pq(t^{1/p})<q(t)$.
\subsection*{(4): the case where $e_{1}=1$ and $e_{2} =p$}
We put \[ R:= (K[X,Y]/(X^{np}-Y(Y^{p}-t)))_{(X,Y^{p}-t)}, \] which is regular and geometrically integral over $K$ by the argument $(\ast)$. The normalization of $R\otimes_{K}K^{1/p}$ is \[ R(1)=(K^{1/p}[X,Z]/(X^{n}-Z(Z^{p}-t^{1/p})))_{(X,Z^{p}-t^{1/p})}, \] where we put \[ Z:=\frac{X^{n}}{Y-t^{1/p}} \] so that $Z^p=Y$. In this case, we have $t^{1/p^2}\in L(1)$, $e_{1}=1$, and $e_{2}=p$. Moreover, we have $q(t)=np$ and $q(t^{1/p})=n$. Thus, we have $pq(t^{1/p})=q(t)$ (cf.\,Lemma \ref{qlem}.4).
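Granting the $\delta$-formulas established earlier ($\delta_{10}=\frac{(p-1)(q(x)-1)}{2}$ in the ramified case $e_{1}=p$, by Theorem \ref{q}.3, and $\delta_{10}=\frac{(p-1)q(x)}{2}$ in the case $f_{1}=p$, by Theorem \ref{q}.4), the equality case of Lemma \ref{qlem}.4 can be confirmed arithmetically for this example, where $q(t)=np$ and $q(t^{1/p})=n$. A short check over a few pairs $(p,n)$ of our choosing:

```python
def delta_ramified(p, q):
    # delta-invariant in the case e = p (Theorem q.3): (p-1)(q-1)/2
    return (p - 1) * (q - 1) // 2

def delta_inertial(p, q):
    # delta-invariant in the case f = p (Theorem q.4): (p-1)q/2, with p | q
    assert q % p == 0
    return (p - 1) * q // 2

for p, n in [(3, 2), (5, 4), (7, 3)]:   # n coprime to p, as in the example
    q_t, q_t_1p = n * p, n              # q(t) = np and q(t^{1/p}) = n
    d10 = delta_inertial(p, q_t)        # first step: e_1 = 1, f_1 = p
    d21 = delta_ramified(p, q_t_1p)     # second step: e_2 = p
    # equality case of Lemma qlem.4, since p*q(t^{1/p}) = q(t)
    assert p * d21 + p * (p - 1) // 2 == d10
```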
We also give another example where the first inequality of Lemma \ref{qlem}.4 is strict. We take a positive integer $m$ such that $m>n$. We put \[ R' := (K[X,Y]/ (X^{mp} - YX^{np} - Y (Y^{p}-t)))_{(X,Y^{p}-t)}, \] which is regular and geometrically integral over $K$ by the argument $(\ast)$. The normalization of $R'\otimes_{K}K^{1/p}$ is \[ R' (1) = (K^{1/p}[X,Z]/ (X^{m} - Z X^{n} - Z (Z^{p}-t^{1/p})))_{(X, Z^{p}-t^{1/p})}, \] where we put \[ Z:= \frac{X^{m}}{X^{n} + (Y-t^{1/p})} \] so that $Z^p=Y$. Therefore, in this case we have $t^{1/p^{2}} \in L(1)$, $e_{1}=1$, and $e_{2} =p$. Moreover, we have $q (t) = mp$ and $q(t^{1/p}) = n$.
Thus, we have $pq(t^{1/p})<q(t)$.
\section{Relation between genus changes and Jacobian numbers} \label{jac} Jacobian numbers, which have been studied by many researchers (for example, \cite{Buchweitz1980}, \cite{Esteves2003}, \cite{Greuel2007}, \cite{IIL}, and \cite{Tjurina1969}), are useful invariants of singularities of curves. In this section, we give a comparison between Jacobian numbers and genus changes (or equivalently, $\delta$-invariants).
First, we recall the definition of continuous derivations to define the Jacobian number for a discrete valuation ring (belonging to a sufficiently general class) over a field. See \cite[\S20.3 and \S20.7]{EGA41} for a foundational treatment of the notion of continuous derivations. Let $K$ be a field, $R$ a Noetherian local ring over $K$, $\mathfrak{m}$ the maximal ideal of $R$, and $L$ the residue field of $R$. In this section, in the case where $R$ is complete, for any finitely generated $R$-module $M$, we always consider $M$ as an $\mathfrak{m}$-adic topological $R$-module.
\begin{dfn}[cf.\,Remarks \ref{compareEGA}.1 and 2] \label{defomega} Suppose that $R$ is complete. For any finitely generated $R$-module $M$, we write $\mathrm{Der}_{K}^{c}(R,M)$ for the set of continuous $K$-derivations from $R$ to $M$. We define an $R$-module $\Omega^{1,c}_{R/K}$ to be a finitely generated $R$-module such that there exists an isomorphism of functors \begin{equation*} \begin{split} \mathop{\mathrm{Hom}}\nolimits_{R}(\Omega^{1,c}_{R/K},-)\simeq \mathrm{Der}_{K}^{c}(R,-) \end{split} \end{equation*} from the category of finitely generated $R$-modules to the category of $R$-modules. (Note that $\Omega^{1,c}_{R/K}$ does not always exist.) \end{dfn}
\begin{rem} \begin{enumerate} \item Regard $K$ as a discrete topological ring and $R$ as an $\mathfrak{m}$-adic topological ring. In \cite[$0_{\text{IV}}$.20.7.14]{EGA41}, an $R$-module $\widehat{\Omega}^{1}_{R/K}$ is defined. By the construction of $\widehat{\Omega}^{1}_{R/K}$, we have $$\widehat{\Omega}^{1}_{R/K}\simeq \varprojlim_{n}\Omega^{1}_{R/K}\otimes_{R}(R/\mathfrak{m}^{n}).$$ By \cite[($0_{\text{IV}}$.20.7.14.4)]{EGA41}, if $\widehat{\Omega}^{1}_{R/K}$ is a finitely generated $\widehat{R}$-module, we have a canonical isomorphism $$\widehat{\Omega}^{1}_{R/K}\simeq \Omega^{1,c}_{\widehat{R}/K}.$$ (Note that ``$\Omega^{1}_{R/K}$'' in the sense of \cite[D\'efinition $0_{\text{IV}}$.20.4.3]{EGA41} does not coincide with $\Omega^{1,c}_{R/K}$ in general.) \item Suppose that $R$ is complete and $L$ is finitely generated over $K$. The existence of the module $\Omega^{1,c}_{R/K}$ follows from \cite[Proposition $0_{\text{IV}}$.20.7.15]{EGA41} and Remark \ref{compareEGA}.1. Here, we give an explicit construction of $\Omega^{1,c}_{R/K}$. Suppose that we can take a surjective $K$-homomorphism \begin{align} \varphi\colon K[[T_{1},\ldots,T_{m}]][X_{1},\ldots,X_{n}]\to R \label{presentation} \end{align} such that the image of $T_{i}$ is contained in $\mathfrak{m}$ for each $i$. We choose a system of generators $f_{1},\ldots,f_{l}$ of the kernel of $\varphi$. Then the module \begin{equation} \label{equationOmega} M:= \frac{\bigoplus_{1\leq i\leq m}RdT_{i} \oplus\bigoplus_{1\leq j\leq n}RdX_{j}}{\langle \sum_{1\leq i\leq m}\frac{\partial f_{t}}{\partial T_{i}}dT_{i} +\sum_{1\leq j\leq n}\frac{\partial f_{t}}{\partial X_{j}}dX_{j}\mid 1\leq t\leq l\rangle} \end{equation} and a group homomorphism $d\colon R\to M$ sending $f$ to \begin{equation} \label{equationd} \sum_{1\leq i\leq m}\frac{\partial f}{\partial T_{i}}dT_{i} +\sum_{1\leq j\leq n}\frac{\partial f}{\partial X_{j}}dX_{j} \end{equation} satisfy the condition of the definition of $\Omega^{1,c}_{R/K}$. 
\item If $R$ is essentially of finite type over $K$, we have a canonical isomorphism $$\Omega^{1}_{R/K}\otimes_{R}\widehat{R}\simeq \Omega^{1,c}_{\widehat{R}/K}$$ by Remark \ref{compareEGA}.1. \item As in Remark \ref{compareEGA}.2, suppose that $R$ is complete and has the presentation (\ref{presentation}). Moreover, suppose that $R$ is a domain. Then $\mathrm{Frac}(R)$ (resp.\,any finite dimensional $\mathrm{Frac}(R)$-linear space $V$) has a canonical topological field structure (resp.\,a canonical topological $\mathrm{Frac}(R)$-linear space structure) such that the topology of $R$ (resp.\,any finitely generated $R$-submodule of $V$) coincides with the relative topology from $\mathrm{Frac}(R)$ (resp.\,$V$). Let $D\colon \mathrm{Frac}(R)\to V$ be a continuous derivation over $K$ to a finite dimensional $\mathrm{Frac}(R)$-linear space. Then $D(R)$ is contained in the $R$-submodule generated by all $D(T_{i})$ and $D(X_{j})$. Hence, we have a natural isomorphism of functors $$\mathop{\mathrm{Hom}}\nolimits_{\mathrm{Frac}(R)}(\Omega^{1,c}_{R/K}\otimes_{R}\mathrm{Frac}(R),-)\simeq \mathrm{Der}_{K}^{c}(\mathrm{Frac}(R),-)$$ from the category of finite dimensional $\mathrm{Frac}(R)$-linear spaces to the category of $\mathrm{Frac}(R)$-linear spaces. Here, $\mathrm{Der}_{K}^{c}(\mathrm{Frac}(R),-)$ is the set of continuous $K$-derivations.
Let $R'$ be an integral extension ring $R'\supset R$ contained in $\mathrm{Frac}(R)$. Note that $R'$ is complete local and finite over $R$ since $R$ is complete. Then we have a natural isomorphism $\Omega^{1,c}_{R/K}\otimes_{R}\mathrm{Frac}(R)\simeq \Omega^{1,c}_{R'/K}\otimes_{R'}\mathrm{Frac}(R)$. \item For later use, we use a similar convention for the product of complete local rings in the following way: Let $R'$ be a Noetherian ring over $K$. Suppose that $R'$ is the product of finitely many complete local rings $R_{k}$. For any finitely generated $R'$-module $M$, we have a canonical decomposition $M = \prod_{k} M_{k}$, where $M_{k}$ is a finitely generated $R'$-module supported in $\mathrm{Spec}\, R_{k}$. We put $\mathrm{Der}_{K}^{c} (R', M) := \prod_{k} \mathrm{Der}_{K}^{c} (R_{k}, M_{k})$. As in Definition \ref{defomega}, if $\mathrm{Der}_{K}^{c} (R', -)$ is represented by a finitely generated $R'$-module, we denote it by $\Omega_{R'/K}^{1,c}$. Then $\Omega_{R'/K}^{1,c}$ exists if and only if, for all $k$, the modules $\Omega_{R_{k}/K}^{1,c}$ exist. Moreover, we have \[ \Omega_{R'/K}^{1,c} \simeq \prod_{k} \Omega_{R_{k}/K}^{1,c} \] if $\Omega_{R'/K}^{1,c}$ exists. As in Remark \ref{compareEGA}.2, if we have a surjective $K$-homomorphism \begin{align} \varphi\colon K[[T_{1},\ldots,T_{m}]][X_{1},\ldots,X_{n}]\to R' \end{align} such that the $k$-component of the image of $T_{i}$ is contained in the maximal ideal of $R_{k}$ for any $i,k$, then the $R'$-module $M$ and $d\colon R' \rightarrow M$ that are defined by the formulas (\ref{equationOmega}) and (\ref{equationd}) satisfy the condition in the definition of $\Omega_{R'/K}^{1,c}$. In this case, for any finite field extension $K'/K$, we have $R_{k}\otimes_{K} K' \simeq \prod_{k'}S_{k,k'}$, where $S_{k,k'}$ are complete local rings over $K'$. Therefore, $R' \otimes_{K} K'$ is also the product of finitely many complete local rings. 
By the above construction, we have \[ \Omega^{1,c}_{R'/K}\otimes_{K}K' \simeq \Omega^{1,c}_{R' \otimes_{K} K'/K'}. \] \end{enumerate} \label{compareEGA} \end{rem}
\begin{dfn} Suppose that $R$ is $1$-dimensional and $L$ is finite over $K$. We define {\it the Jacobian number} of $R$ over $K$ to be $$\mathop{\mathrm{jac}} (R):=\dim_{K}(\widehat{R}/\mathrm{Fitt}_{1}\Omega^{1,c}_{\widehat{R}/K}) \in \mathbb{Z}_{\geq 0} \cup \{\infty\}.$$ Here, ``$\mathrm{Fitt}_{1}-$'' denotes the first Fitting ideal of a module. \end{dfn}
\begin{rem} Suppose that $R$ is essentially of finite type over $K$. Then $\mathop{\mathrm{jac}}(R)$ agrees with the Jacobian number defined in \cite[Definition 4.1]{IIL} by Remark \ref{compareEGA}.3. \end{rem}
In the rest of this section, we consider the following situation: Let $K$ be a field of characteristic $p$ satisfying $[K^{1/p} : K] < \infty$, $R$ a discrete valuation ring over $K$ which is geometrically integral over $K$, $\varpi$ a uniformizer of $R$, and $L$ the residue field of $R$. We suppose that $L/K$ is a finite extension. Let $R(1)$ be the normalization of $R\otimes_{K}K^{1/p}$. We further suppose that $R(1)$ is finite over $R\otimes_{K}K^{1/p}$. We write $g_{10}$ for the natural number \[ \dim_{K^{1/p}}(R(1)/(R\otimes_{K}K^{1/p})). \] We note that we use the same notation as in Section \ref{geomnormalsection}. The main theorem in this section is the following:
\begin{thm} \label{genjacob} In the above situation, we have \[ \frac{g_{10}}{(p-1)/2} = \frac{\mathop{\mathrm{jac}} (R)}{p}. \] \end{thm}
The rest of this section is devoted to proving Theorem \ref{genjacob}.
First, we may assume that $R$ is complete by the assumption and Proposition \ref{B-prop}.7. Since \[ (\mathrm{Fitt}_{1} \Omega_{R/K}^{1,c})\otimes_{K}K' \simeq \mathrm{Fitt}_{1} (\Omega_{R/K}^{1,c} \otimes_{K} K') \] for any finite field extension $K'$ over $K$ by Remark \ref{compareEGA}.5, we may assume that $L$ is purely inseparable over $K$. Moreover, we have \[ \Omega^{1,c}_{(R\otimes_{K}K^{\mathrm{sep}})^{\wedge}/K^{\mathrm{sep}}} \simeq \Omega^{1,c}_{R/K}\otimes_{K} K^{\mathrm{sep}} \] by the construction in Remark \ref{compareEGA}.2. (Here, $(R\otimes_{K}K^{\mathrm{sep}})^{\wedge}$ denotes the completion of $R\otimes_{K}K^{\mathrm{sep}}$.) Therefore, by taking the base change and the completion again, we may assume that $K$ is separably closed and $R$ is complete.
We write $\Omega^{1,c}_{R/K,\mathrm{tor}}$ for the torsion part of $\Omega^{1,c}_{R/K}$.
\begin{lem} \label{lemtorsion} Write $\phi$ for the natural homomorphism \[ \Omega^{1,c}_{R/K} \otimes_{R} R(1) \rightarrow \Omega^{1,c}_{R(1)/K^{1/p}}. \] Then we have $$\langle dr\mid r\in R(1)^{p}\rangle_{R}=\Omega^{1,c}_{R/K,\mathrm{tor}} \quad\text{and}\quad \Omega^{1,c}_{R/K,\mathrm{tor}} \otimes_{R} R(1) = \mathrm{Ker}\, \phi,$$ and the induced homomorphism \begin{align} \Omega^{1,c}_{R/K}\otimes_{R}\mathrm{Frac}(R(1))\to\Omega^{1,c}_{R(1)/K^{1/p}}\otimes_{R(1)}\mathrm{Frac}(R(1)) \label{aftertensor} \end{align} is an isomorphism of $1$-dimensional $\mathrm{Frac}(R(1))$-linear spaces. \end{lem}
\begin{proof} The homomorphism (\ref{aftertensor}) is an isomorphism by Remark \ref{compareEGA}.2 and Remark \ref{compareEGA}.4. From this and the calculation $dr^{p}=pr^{p-1}dr=0$, we have $$\langle dr\mid r\in R(1)^{p}\rangle_{R} \otimes_{R}R(1) \subset \mathrm{Ker}\, \phi\subset\Omega^{1,c}_{R/K, \mathrm{tor}} \otimes_{R}R(1).$$ Since $R$ is complete, we have $[R(1):R(1)^{p}] = [K^{1/p}:K]p$. Therefore, there exists an element $t \in R(1)^{p}$ such that \[ R = R(1)^{p}[t^{1/p}] \simeq R(1)^{p}[X]/(X^{p}-t). \] By Remark \ref{compareEGA}.2, we have \begin{align} \Omega^{1,c}_{R/K} &\simeq((\Omega^{1,c}_{R(1)^{p}/K}\otimes_{R(1)^{p}}R)\oplus RdX)/Rd(X^{p}-t)\\ &\simeq(\Omega^{1,c}_{R(1)^{p}/K}\otimes_{R(1)^{p}}R/Rdt)\oplus Rdt^{1/p}. \label{omegacomputation} \end{align} In particular, the submodule $Rdt^{1/p}$ of $\Omega^{1,c}_{R/K}$ is torsion-free. This shows that the linear spaces in (\ref{aftertensor}) are $1$-dimensional and that \[ \Omega^{1,c}_{R/K, \mathrm{tor}}\subset (\Omega^{1,c}_{R(1)^{p}/K} \otimes_{R(1)^{p}}R)/Rdt=\langle dr\mid r\in R(1)^{p}\rangle_{R} \] holds. This finishes the proof of Lemma \ref{lemtorsion}. \end{proof}
\begin{lem}[cf.\,\cite{Rim} and {\cite[Proposition 4.7]{IIL}}] \label{lemjactor} The equality \[ \mathop{\mathrm{jac}} (R) = \dim_{K} (\Omega^{1,c}_{R/K,\mathrm{tor}}) \] holds. \end{lem}
\begin{proof} Since we have $\dim_{\mathrm{Frac}(R)}(\Omega^{1,c}_{R/K}\otimes_{R}\mathrm{Frac}(R))=1$ by Lemma \ref{lemtorsion}, this lemma follows from the structure theorem for finitely generated modules over a principal ideal domain. \end{proof}
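To spell out the structure-theorem step (a routine expansion of the argument, using that $R$ is a complete discrete valuation ring and hence a principal ideal domain): since $\Omega^{1,c}_{R/K}\otimes_{R}\mathrm{Frac}(R)$ is $1$-dimensional, the structure theorem gives a decomposition \[ \Omega^{1,c}_{R/K} \simeq R \oplus \bigoplus_{i=1}^{k} R/(\varpi^{a_{i}}), \qquad a_{i} \geq 1, \] whose free part has rank $1$. Hence $\Omega^{1,c}_{R/K,\mathrm{tor}} \simeq \bigoplus_{i=1}^{k} R/(\varpi^{a_{i}})$, and \[ \dim_{K} \bigl(\Omega^{1,c}_{R/K,\mathrm{tor}}\bigr) = \sum_{i=1}^{k} a_{i} \dim_{K} L = [L:K]\sum_{i=1}^{k} a_{i}, \] since each $R/(\varpi^{a_{i}})$ has a filtration with $a_{i}$ graded pieces isomorphic to $L$ as $K$-linear spaces.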
We take an absolute $p$-basis $x_{1}, \ldots, x_{c}$ of $K$. For $0 \leq i \leq c$, let $K_{i}$ be the field $K(x_{1}^{1/p}, \ldots, x_{i}^{1/p})$ and $R_{i}$ the normalization of $R\otimes_{K} K_{i}$. Let $L_{i}$ be the residue field of $R_{i}$. We note that $K_{0}=K, R_{0} =R,$ and $L_{0} = L$.
\begin{lem} \label{relemkernel} \begin{enumerate} \item There exists an element $a\in R_{1}$ satisfying $R_{1}=R[a]$. \item Let $a$ be such an element of $R_{1}$ and $f$ the natural homomorphism \[ \Omega^{1,c}_{R/K} \otimes_{R} R_{1} \rightarrow \Omega^{1,c}_{R_{1}/K_{1}}. \] Then we have \[ \mathrm{Ker}\, f = R_{1}da^{p}. \] \item Suppose $x_{1}^{1/p} \in L$. Then we can define $q(x_{1})$ in Notation \ref{notationq}. If $p \mid q(x_{1})$, then there exists an isomorphism \[ \mathrm{Ker}\, f \simeq R_{1}/ (\varpi^{q(x_{1})}). \] If $p \nmid q(x_{1})$, then there exists an isomorphism \[ \mathrm{Ker}\, f \simeq R_{1}/ (\varpi^{q(x_{1})-1}). \] \end{enumerate} \end{lem}
\begin{proof} We can choose $a$ to be a lift of a generator of the residue field of $R_{1}$ over $L$ (resp.\,a uniformizer of $R_{1}$) by Lemma \ref{choiceofr} (resp.\,\cite[Proposition 17]{Ser}) in the case where $p \mid q(x_{1})$ (resp.\,$p \nmid q(x_{1})$). Then we have \[ \mathrm{Ker}\, (\Omega_{R/K}^{1,c}\otimes_{R}R_{1} \rightarrow \Omega_{R_{1}/K}^{1,c}) = R_{1}da^{p} \] by Remark \ref{compareEGA}.2 (cf.\, the calculation (\ref{omegacomputation})). Moreover, by Remark \ref{compareEGA}.2, we have an exact sequence \[ \Omega_{K_{1}/K}^{1,c} \otimes_{K_{1}} R_{1} \rightarrow \Omega_{R_{1}/K}^{1,c} \rightarrow \Omega_{R_{1}/K_{1}}^{1,c} \] and $\Omega_{K_{1}/K}^{1,c}$ is free of rank $1$ over $K_{1}$. Therefore, the homomorphism \[ \mathrm{Im}\, (\Omega_{R/K}^{1,c}\otimes_{R}R_{1} \rightarrow \Omega_{R_{1}/K}^{1,c}) \rightarrow \Omega_{R_{1}/K_{1}}^{1,c} \] is injective since $\Omega_{R/K}^{1,c} \otimes_{R} \mathrm{Frac} (R_{1}) \rightarrow \Omega^{1,c}_{R_{1}/K_{1}} \otimes_{R_{1}} \mathrm{Frac} (R_{1})$ is an isomorphism by Remarks \ref{compareEGA}.4 and \ref{compareEGA}.5. Now we have \[ \mathrm{Ker}\, f = R_{1}da^{p}. \]
Assertion 3 follows from Lemma \ref{lemkernel}, which we will prove at the end of this section. \end{proof}
\begin{lem} \label{lemgenmain} We fix an integer $i$ with $1 \leq i \leq c$. Let \[ \xymatrix { \Omega^{1,c}_{R/K} \otimes_{R} R_{i} \ar[r]^-{f} &\Omega^{1,c}_{R_{i-1}/K_{i-1}} \otimes_{R_{i-1}} R_{i} \ar[r]^-{g} &\Omega^{1,c}_{R_{i}/K_{i}} } \] be the natural homomorphisms. Then the sequence \[ \xymatrix { 0 \rightarrow \mathrm{Ker}\, f \ar[r] &\mathrm{Ker}\, g\circ f \ar[r]^-{f} &\mathrm{Ker}\, g \rightarrow 0 } \] is exact. \end{lem}
\begin{proof} We only need to show the exactness at $\mathrm{Ker}\, g$. Take an element $a\in R_{i}$ satisfying $R_{i}=R_{i-1}[a]$ by Lemma \ref{relemkernel}.1. By Lemma \ref{relemkernel}.2, $\mathrm{Ker}\, g$ is generated by $da^{p}$. Since we have $a^{p}\in R$, Lemma \ref{lemgenmain} holds. \end{proof}
\begin{proof}[Proof of Theorem \ref{genjacob} (assuming Lemma \ref{relemkernel}.3)] We have \begin{eqnarray*} \mathop{\mathrm{jac}} (R) &=& \dim_{K}(\Omega_{R/K,\mathrm{tor}}^{1,c})\\ &=& \dim_{K^{1/p}} \mathrm{Ker}\, \phi\\ &=& \sum_{i=1}^{c} \dim_{K_{i}} \mathrm{Ker}\, (\phi_{i}), \end{eqnarray*} where $\phi_{i}$ is the natural homomorphism \[ \Omega^{1,c}_{R_{i-1}/K_{i-1}} \otimes_{R_{i-1}} R_{i} \rightarrow \Omega^{1,c}_{R_{i}/K_{i}}. \] Here, the first equality follows from Lemma \ref{lemjactor}, the second equality follows from Lemma \ref{lemtorsion}, and the third equality follows from Lemma \ref{lemgenmain}. By Lemma \ref{calcul}.1 and Lemma \ref{relemkernel}.3, we have \[ \dim_{K_{i}} \mathrm{Ker}\, (\phi_{i}) = \begin{cases} 0 &(x_{i}^{1/p} \notin L_{i-1}),\\ \dim_{K_{i}} R_{i}/ \varpi_{R_{i-1}}^{q^{(i)}_{1}} = [L_{i-1}:K_{i}] p q^{(i)}_{1} &(p\mid q^{(i)}_{1}),\\ \dim_{K_{i}} R_{i}/ \varpi_{R_{i-1}}^{q^{(i)}_{1}-1} = [L_{i-1}:K_{i}] p (q^{(i)}_{1}-1) &(p \nmid q^{(i)}_{1}), \end{cases} \] where we write $q^{(i)}_{1}$ for the invariant $q(x_{i})$ for $R_{i-1}$ (cf.\,Notation \ref{notationq}) and $\varpi_{R_{i-1}}$ for a uniformizer in $R_{i-1}$. On the other hand, by Theorem \ref{q}, the genus changes \[ g_{i} :=\dim_{K_{i}} R_{i}/(R_{i-1}\otimes_{K_{i-1}} K_{i})\quad (1\leq i\leq c) \] satisfy \[ g_{i} = \begin{cases} 0 &(x_{i}^{1/p} \notin L_{i-1}),\\ [L_{i-1}:K_{i}] \frac{p-1}{2}q_{1}^{(i)} &(p\mid q^{(i)}_{1}),\\ [L_{i-1}:K_{i}] \frac{p-1}{2}(q_{1}^{(i)}-1) &(p \nmid q^{(i)}_{1}). \end{cases} \] Since $g_{10} = \sum_{i=1}^{c} g_{i}$, we have the desired equality. This finishes the proof of Theorem \ref{genjacob} up to the proof of Lemma \ref{relemkernel}.3. \end{proof}
To complete the proof of Theorem \ref{genjacob}, we prove Lemma \ref{relemkernel}.3. To understand the structure of $\Omega_{R/K}^{1,c}$, we need to describe the structure of $R$ in terms of invariants that are similar to $q(x)$.
\begin{nota} \label{rbqb} \begin{enumerate} \item Let $y_{1}, \ldots, y_{m}$ be a $p$-basis of $L$ over $K$. We put \[
n_{i} := \min{\{n \in \mathbb{Z}_{> 0} \,|\,y_{i}^{p^{n}}\in K(y_{1}, \ldots, y_{i-1})\}}. \] We put $z_{1} := y_{1}^{p^{n_{1}}}$. For $1 \leq i \leq m$, we fix $f_{i} \in K[T_{1}, \ldots, T_{i-1}]$ such that \[ f_{i}(y_{1}, \ldots, y_{i-1}) =y_{i}^{p^{n_{i}}} \] and $f_{1}= z_{1}$. \item For $1 \leq i \leq m$, we define elements $r'_{i}\in R$ and natural numbers $q_{i}$ and $q'_{i}$ inductively. First, we put $q_{1} := \underset{r}{\max}\,v(r^{p}-z_{1})$, where $r\in R$ ranges over all the lifts of $y_{1}^{p^{n_{1}-1}}$. Moreover, we put $q'_{1} := \underset{r}{\max}\,v(r^{p^{n_{1}}}-z_{1})$, where $r\in R$ ranges over all the lifts of $y_{1}$. We fix $r'_{1} \in R$ such that $v(r_{1}^{\prime p^{n_{1}}}-z_{1}) = q'_{1}$.
We suppose that $q_{j}, q'_{j},$ and $r'_{j}$ are defined for $j= 1, \ldots, i-1$. We put \[ q_{i} := \underset{r}{\max}\,v(r^{p}- f_{i} (r'_{1}, \ldots, r'_{i-1})), \] where $r\in R$ ranges over all the lifts of $y_{i}^{p^{n_{i}-1}}$. We also put \[ q'_{i} := \underset{r}{\max}\,v(r^{p^{n_{i}}}- f_{i} (r'_{1}, \ldots, r'_{i-1})), \] where $r\in R$ ranges over all the lifts of $y_{i}$. We also fix $r'_{i} \in R$ such that \[v(r_{i}^{\prime p^{n_{i}}}-f_{i} (r'_{1}, \ldots, r'_{i-1})) = q'_{i}. \] Now $q_{i}, q_{i}'$, and $r_{i}'$ are defined. \item For $1 \leq i \leq m$, we take $u_{i}' \in R^{\times}$ such that \[ r_{i}^{\prime p^{n_{i}}}- f_{i} (r'_{1}, \ldots, r'_{i-1}) = u_{i}' \varpi^{q'_{i}}. \] We also take $r_{i}\in R$ and a unit $u_{i} \in R^{\times}$ such that \begin{equation} \label{r_{i}} r_{i}^{p}-f_{i} (r'_{1}, \ldots, r'_{i-1}) = u_{i} \varpi^{q_{i}}. \end{equation} We note that we can take $r_{i} = r_{i}'^{p^{n_{i}-1}}$ if $q_{i} = q_{i}'$ holds. Therefore, we always assume this condition in the following. \end{enumerate} \end{nota}
\begin{thm} \label{lemgen} There exists a $K$-algebra isomorphism \[ K[[S]][T_{1}, \ldots, T_{m}]/(T_{i}^{p^{n_{i}}} - f_{i} (T_{1}, \ldots, T_{i-1}) - \widetilde{u}_{i} S^{q_{i}} + \widetilde{w}_{i}^{p} S^{q'_{i}})_{1 \leq i \leq m} \simeq R. \] Here, $\widetilde{u}_{i}$ and $\widetilde{w}_{i}$ are elements of $K[[S]][T_{1}, \ldots, T_{m}]$, and $\widetilde{u}_{i}$ (resp.\,$S$) goes to the unit $u_{i} \in R^{\times}$ which we took in (\ref{r_{i}}) (resp.\,the element $\varpi\in R$). Moreover, the following hold. \begin{itemize} \item For each $1 \leq i \leq m$, if $q_{i} = q'_{i}$, then we can take $\widetilde{w}_{i}$ to be $0$. \item If $p\nmid q_{1}$, by replacing $\varpi$ by another uniformizer, we can take $\widetilde{u}_{1}$ to be $1$. \end{itemize} In the following, we denote the above polynomial \[ T_{i}^{p^{n_{i}}} - f_{i} (T_{1}, \ldots, T_{i-1}) - \widetilde{u}_{i} S^{q_{i}} + \widetilde{w}_{i}^{p} S^{q'_{i}} \] by $P_{i}$. \end{thm} \begin{proof} Let \[ \varphi\colon K[[S]][T_{1}, \ldots, T_{m}] \rightarrow R \] be the $K$-algebra homomorphism which sends $S$ to $\varpi$ and $T_{i}$ to $r'_{i}$. Since the image of $\varphi$ contains a uniformizer and maps onto the residue field $L$ (note that $L = \bigoplus_{0 \leq i_{j} < p^{n_{j}}} K y_{1}^{i_{1}} \cdots y_{m}^{i_{m}}$), the homomorphism $\varphi$ is surjective. By the definition of $u_{i}'$ and $u_{i}$, we have \begin{equation} \frac{r_{i}^{p}-r_{i}^{\prime p^{n_{i}}}}{\varpi^{q'_{i}}} = u_{i} \varpi^{q_{i}-q'_{i}} - u_{i}'. \label{compareqq'} \end{equation} If $p \nmid q'_{i}$, then we have $q_{i} = q'_{i}$ (by the same argument as that in Lemma \ref{q<q}), and the left-hand side of (\ref{compareqq'}) is $0$ by the choice of $r_{i}$ (cf.\,Notation \ref{rbqb}.3). Therefore, we can always take the $p$-th root $w_{i}$ of the left-hand side of (\ref{compareqq'}). 
Then we have \[ r_{i}^{\prime p^{n_{i}}} - f_{i} (r'_{1}, \ldots, r'_{i-1}) = u_{i}' \varpi^{q'_{i}} = u_{i} \varpi^{q_{i}} - w_{i}^{p} \varpi^{q'_{i}}. \] Take elements $\widetilde{u}_{i} \in \varphi^{-1} (u_{i})$ and $\widetilde{w}_{i} \in \varphi^{-1} (w_{i})$. Then $\varphi$ induces a surjective homomorphism \[ K[[S]][T_{1}, \ldots, T_{m}]/(T_{i}^{p^{n_{i}}} - f_{i} (T_{1}, \ldots, T_{i-1}) - \widetilde{u}_{i} S^{q_{i}} + \widetilde{w}_{i}^{p} S^{q'_{i}})_{1 \leq i \leq m} \rightarrow R, \] which is an isomorphism since both sides are free modules over $K[[S]]$ of the same rank. We note that if $p \nmid q_{1}$, we have $u_{1}=1$ after replacing $\varpi$ by a suitable uniformizer, since $L$ is separably closed. In this case, we can take $\widetilde{u}_{1}$ to be $1$. \end{proof}
We note that, for any $x_{1} \in (K\cap L^{p}) \setminus K^{p}$, there exists a $p$-basis $y_{1}, \ldots, y_{m}$ of $L$ over $K$ such that $z_{1} =x_{1}$. Therefore, the following lemma gives the proof of Lemma \ref{relemkernel}.3.
\begin{lem}[cf.\,Lemma \ref{relemkernel}.3] \label{lemkernel} We denote $K(z_{1}^{1/p})$ by $K'$ and the normalization of $R\otimes_{K}K'$ by $R'$. Let $f$ be the natural homomorphism \[ \Omega^{1,c}_{R/K} \otimes_{R} R' \rightarrow \Omega^{1,c}_{R'/K'}. \] If $p \mid q_{1}$, then we have \[ \mathrm{Ker}\, f \simeq R' d u_{1} \simeq R'/(\varpi^{q_{1}}). \] On the other hand, if $p \nmid q_{1}$ and $\widetilde{u}_{1} = 1$, then we have \[ \mathrm{Ker}\, f \simeq R' d \varpi \simeq R'/ (\varpi^{q_{1}-1}). \] \end{lem}
\begin{proof} In the proof, we use the notation in Theorem \ref{lemgen}. By the proof of Lemma \ref{relemkernel}.1, we have $ R' = R[u_{1}^{1/p}] $ (resp.\,$R' = R[\varpi^{1/p}]$) in the case where $p \mid q_{1}$ (resp.\,$p\nmid q_{1}$). Moreover, by Lemma \ref{relemkernel}.2, we have \[ \mathrm{Ker}\, f \simeq R' du_{1}\quad(\text{resp.}\,\mathrm{Ker}\, f \simeq R'd\varpi). \] Therefore, it suffices to show that $ Rdu_{1} \simeq R/ (\varpi^{q_{1}})$ (resp.\,$Rd\varpi \simeq R/ \varpi^{q_{1}-1}$) in $\Omega^{1,c}_{R/K}$. Since $p \mid q_{1}$ (resp.\,$p\nmid q_{1}$) and the image of \[ dP_{1}= - S^{q_{1}}d \widetilde{u}_{1}\quad (\text{resp.}\,dP_{1}=-q_{1}S^{q_{1}-1}dS) \] is $0$ in $\Omega^{1,c}_{R/K}$, we have \[ \varpi^{q_{1}}d u_{1}=0\quad(\text{resp.}\,\varpi^{q_{1}-1}d\varpi=0). \] Here, we note that if $p\nmid q'_{1}$ holds, then we have $q'_{1}=q_{1}$ and $\widetilde{w}_{1}=0$. By Remark \ref{compareEGA}.2 and Theorem \ref{lemgen}, we have \[ \Omega_{R/K}^{1,c} \simeq \frac{R d S \oplus \bigoplus_{1 \leq i \leq m} R d T_{i}}{\langle dP_{i} \rangle_{1 \leq i \leq m}}. \] By Lemma \ref{lemtorsion}, $\Omega_{R/K}^{1,c} \otimes_{R}\mathrm{Frac} (R)$ is of rank $1$. Therefore, $dP_{1}$ is not contained in the submodule generated by $d P_{j} \in R d S \oplus \bigoplus_{1 \leq i \leq m} R d T_{i}$ $(2 \leq j \leq m)$, which finishes the proof. \end{proof}
\newcommand{\etalchar}[1]{$^{#1}$} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\title[The Price of Anarchy in Cooperative Network Creation Games]
{The Price of Anarchy in \\ Cooperative Network Creation Games}
\author[MIT]{E. D. Demaine}{Erik D. Demaine} \address[MIT]{MIT Computer Science and Artificial Intelligence Laboratory,
32 Vassar St., Cambridge, MA 02139, USA} \email{[email protected]}
\author[MIT,ATT]{M. Hajiaghayi}{MohammadTaghi Hajiaghayi} \address[ATT]{AT\&T Labs --- Research, 180 Park Ave., Florham Park, NJ 07932,
USA} \email{[email protected]}
\author[ITPM,Sharif]{H. Mahini}{Hamid Mahini} \address[ITPM]{School of Computer Science, Institute for Theoretical Physics
and Mathematics, Tehran} \email{[email protected]}
\author[Sharif]{M. Zadimoghaddam}{Morteza Zadimoghaddam} \address[Sharif]{Department of Computer Engineering,
Sharif University of Technology} \email{[email protected]}
\begin{abstract} We analyze the structure of equilibria and the price of anarchy in the family of network creation games considered extensively in the past few years, which attempt to unify the network design and network routing problems by modeling both creation and usage costs. In general, the games are played on a host graph, where each node is a selfish independent agent (player) and each edge has a fixed link creation cost~$\alpha$. Together the agents create a network (a subgraph of the host graph) while selfishly minimizing the link creation costs plus the sum of the distances to all other players (usage cost). In this paper, we pursue two important facets of the network creation~game.
First, we study extensively a natural version of the game, called the cooperative model, where nodes can collaborate and share the cost of creating any edge in the host graph. We prove the first nontrivial bounds in this model, establishing that the price of anarchy is polylogarithmic in $n$ for all values of~$\alpha$ in complete host graphs. This bound is the first result of this type for any version of the network creation game; most previous general upper bounds are polynomial in~$n$. Interestingly, we also show that equilibrium graphs have polylogarithmic diameter for the most natural range of~$\alpha$ (at most $n \mathop{\rm polylg}\nolimits n$).
Second, we study the impact of the natural assumption that the host graph is a general graph, not necessarily complete. This model is a simple example of nonuniform creation costs among the edges (effectively allowing weights of $\alpha$ and~$\infty$). We prove the first assemblage of upper and lower bounds for this context, establishing nontrivial tight bounds for many ranges of~$\alpha$, for both the unilateral and cooperative versions of network creation. In particular, we establish polynomial lower bounds for both versions and many ranges of~$\alpha$, even for this simple nonuniform cost model, which sharply contrasts the conjectured constant bounds for these games in complete (uniform) graphs. \end{abstract}
\maketitle
\section{Introduction}
A fundamental family of problems at the intersection between computer science and operations research is \emph{network design}. This area of research has become increasingly important given the continued growth of computer networks such as the Internet. Traditionally, we want to find a minimum-cost (sub)network that satisfies some specified property such as $k$-connectivity or connectivity on terminals (as in the classic Steiner tree problem). This goal captures the (possibly incremental) creation cost of the network, but does not incorporate the cost of actually using the network. In contrast, \emph{network routing} has the goal of optimizing the usage cost of the network, but assumes that the network has already been created.
\emph{Network creation games} attempt to unify the network design and network routing problems by modeling both creation and usage costs. In general, the game is played on a \emph{host graph}, where each node is an independent agent (player), and the goal is to create a network from a subgraph of the host graph. Collectively, the nodes decide which edges of the host graph are worth creating as links in the network. Every link has the same creation cost~$\alpha$. (Equivalently, links have creation costs of $\alpha$ and~$\infty$, depending on whether they are edges of the host graph.) In addition to these creation costs, each node incurs a usage cost equal to the sum of distances to all other nodes in the network. Equivalently, if we divide the cost (and thus~$\alpha$) by the number $n$ of nodes, the usage cost for each node is its average distance to all other nodes. (This natural cost model has been used in, e.g., contribution games and network-formation games.)
There are several versions of the network creation game that vary how links are purchased. In the \emph{unilateral} model---introduced by Fabrikant, Luthra, Maneva, Papadimitriou, and Shenker \cite{FLMPS03}---every node (player) can locally decide to purchase any edge incident to the node in the host graph, at a cost of~$\alpha$. In the \emph{bilateral} model---introduced by Corbo and Parkes \cite{CP05}---both endpoints of an edge must agree before they can create a link between them, and the two nodes share the $\alpha$ creation cost equally. In the \emph{cooperative} model---introduced by Albers, Eilts, Even-Dar, Mansour, and Roditty~\cite{AEEMR06}---any node can purchase any amount of any edge in the host graph, and a link gets created when the total purchased amount is at least~$\alpha$.
To model the dominant behavior of large-scale networking scenarios such as the Internet, we consider the case where every node (player) selfishly tries to minimize its own creation and usage cost \cite{Jackson03,FLMPS03,AEEMR06,CP05}. This game-theoretic setting naturally leads to the various kinds of \emph{equilibria} and the study of their structure. Two frequently considered notions are \emph{Nash equilibrium} \cite{Nash50,Nash51}, where no player can change its strategy (which edges to buy) to locally improve its cost, and \emph{strong Nash equilibrium} \cite{Aumann59,AFM07,Albers08}, where no coalition of players can change their collective strategy to locally improve the cost of each player in the coalition. Nash equilibria capture the combined effect of both selfishness and lack of coordination, while strong Nash equilibria separate these issues, enabling coordination and capturing the specific effect of selfishness. However, the notion of strong Nash equilibrium is extremely restrictive in our context, because all players can simultaneously change their entire strategies, abusing the local optimality intended by original Nash equilibria, and effectively forcing globally near-optimal solutions \cite{AFM07}.
We consider weaker notions of equilibria, which broadens the scope of equilibria and therefore strengthens our upper bounds, where players can change their strategy on only a single edge at a time. In a \emph{collaborative equilibrium}, even coalitions of players do not wish to change their collective strategy on any single edge; this concept is particularly important for the cooperative network creation game, where multiple players must negotiate their relative valuations of an edge. (This notion is the natural generalization of pairwise stability from \cite{CP05} to arbitrary cost sharing.) Collaborative equilibria are essentially a compromise between Nash and strong Nash equilibria: they still enable coordination among players and thus capture the specific effect of selfishness, like strong Nash, yet they consider more local moves, in the spirit of Nash. In particular, any results about all collaborative equilibria also apply to all strong Nash equilibria. Collaborative equilibria also make more sense computationally: players can efficiently detect equilibrium using a simple bidding procedure (whereas this problem is NP-hard for strong Nash), and the resulting dynamics converge to such equilibria (see Section~\ref{dynamics}).
The structure of equilibria in network creation games is not very well understood. For example, Fabrikant et al.~\cite{FLMPS03} conjectured that equilibrium graphs in the unilateral model were all trees, but this conjecture was disproved by Albers et al.~\cite{AEEMR06}. One particularly interesting structural feature is whether all equilibrium graphs have small \emph{diameter} (say, polylogarithmic), analogous to the small-world phenomenon \cite{Kleinberg00,EK06}. In the original unilateral version of the problem, the best general lower bound is just a constant and the best general upper bound is polynomial. A closely related issue is the \emph{price of anarchy} \cite{KP99,papa01,roughgarden-phd}, that is, the worst possible ratio of the total cost of an equilibrium (found by independent selfish behavior) and the optimal total cost possible by a centralized solution (maximizing social welfare). The price of anarchy is a well-studied concept in algorithmic game theory for problems such as load balancing, routing, and network design; see, e.g., \cite{papa01,CV02,Roughgarden02,FLMPS03,ADTW03,ADKTW04,CFSK04,CP05,AEEMR06,PODC07}. Upper bounds on the diameter of equilibrium graphs translate to approximately equal upper bounds on the price of anarchy, but not necessarily vice versa. In the unilateral version, for example, there is a general $2^{O(\sqrt{\lg n})}$ upper bound on the price of anarchy.
\paragraph{\bf Previous work.} Network creation games have been studied extensively in the literature since their introduction in 2003.
For the unilateral version and a complete host graph, Fabrikant et al.~\cite{FLMPS03} prove an upper bound of $O(\sqrt{\alpha})$ on the price of anarchy for all~$\alpha$. Lin \cite{Lin03} proves that the price of anarchy is constant for two ranges of $\alpha$: $\alpha = O(\sqrt n)$ and $\alpha \geq c \, n^{3/2}$ for some $c > 0$. Independently, Albers et al.~\cite{AEEMR06} prove that the price of anarchy is constant for $\alpha = O(\sqrt n)$, as well as for the larger range $\alpha \geq 12 \, n \lceil \lg n \rceil$. In addition, Albers et al.\ prove a general upper bound of $15 \left(1+(\min\{\frac{\alpha^2}{n},\frac{n^2}{\alpha}\})^{1/3}\right)$. The latter bound shows the first sublinear worst-case bound, $O(n^{1/3})$, for all~$\alpha$. Demaine et al.~\cite{PODC07} prove the first $o(n^\epsilon)$ upper bound for general~$\alpha$, namely, $2^{O(\sqrt{\lg n})}$. They also prove a constant upper bound for $\alpha = O(n^{1-\epsilon})$ for any fixed $\epsilon > 0$, and improve the constant upper bound by Albers et al.\ (with the lead constant of~$15$) to $6$ for $\alpha < (n/2)^{1/2}$ and to $4$ for $\alpha < (n/2)^{1/3}$. Andelman et al.~\cite{AFM07} show that, among strong Nash equilibria, the price of anarchy is at most~$2$.
For the bilateral version and a complete host graph, Corbo and Parkes \cite{CP05} prove that the price of anarchy is between $\Omega(\lg \alpha)$ and $O(\min\{\sqrt \alpha, n/\sqrt \alpha\})$. Demaine et al.~\cite{PODC07} prove that the upper bound is tight, establishing the price of anarchy to be $\Theta(\min\{\sqrt{\alpha}, n/\sqrt{\alpha}\})$ in this case.
For the cooperative version and a complete host graph, the only known result is an upper bound of $15 \left(1+(\min\{\frac{\alpha^2}{n},\frac{n^2}{\alpha}\})^{1/3}\right)$, proved by Albers et al.~\cite{AEEMR06}.
Other variations of network creation games allow nonuniform interests in connectivity between nodes \cite{HM07} and nodes with limited budgets for buying edges \cite{LPRST08}.
\paragraph{\bf Our results.}
Our research pursues two important facets of the network creation game.
First, we make an extensive study of a natural version of the game---the cooperative model---where the only previous results were simple extensions from unilateral analysis. We substantially improve the bounds in this case, showing that the price of anarchy is polylogarithmic in $n$ for \emph{all values of~$\alpha$} in complete graphs. This is the first result of this type for any version of the network creation game. As mentioned above, this result applies to both collaborative equilibria and strong Nash equilibria. Interestingly, we also show that equilibrium graphs have polylogarithmic diameter for the most natural range of~$\alpha$ (at most $n \mathop{\rm polylg}\nolimits n$). Note that, because of the locally greedy nature of Nash equilibria, we cannot use the classic probabilistic spanning (sub)tree embedding machinery of \cite{Bartal98,FRT04,EEST05} to obtain polylogarithmic bounds (although this machinery can be applied to approximate the global social optimum).
Second, we study the impact of the natural assumption that the host graph is a general graph, not necessarily complete, inspired by practical limitations in constructing network links. This model is a simple example of nonuniform creation costs among the edges (effectively allowing weights of $\alpha$ and~$\infty$). Surprisingly, no bounds on the diameter or the price of anarchy have been proved before in this context. We prove several upper and lower bounds, establishing nontrivial tight bounds for many ranges of~$\alpha$, for both the unilateral and cooperative versions. In particular, we establish polynomial lower bounds for both versions and many ranges of~$\alpha$, even for this simple nonuniform cost model. These results are particularly interesting because, by contrast, no superconstant lower bound has been shown for either game in complete (uniform) graphs. Thus, while we believe that the price of anarchy is polylogarithmic (or even constant) for complete graphs, we show a significant departure from this behavior in general graphs.
Our proof techniques are most closely related in spirit to ``region growing'' from approximation algorithms; see, e.g., \cite{LR99}. Our general goal is to prove an upper bound on diameter by way of an upper bound on the expansion of the graph. However, we have not been able to get such an argument to work directly in general. The main difficulty is that, if we imagine building a breadth-first-search tree from a node, then connecting that root node to another node does not necessarily benefit the node much: it may only get closer to a small fraction of nodes in the BFS subtree. Thus, no node is motivated selfishly to improve the network, so several nodes must coordinate their changes to make improvements. The cooperative version of the game gives us some leverage to address this difficulty. We hope that this approach, particularly the structure we prove of equilibria, will shed some light on the still-open unilateral version of the game, where the best bounds on the price of anarchy are $\Omega(1)$ and $2^{O(\sqrt{\lg n})}$.
Table~\ref{summary} summarizes our results. Section~\ref{Cooperative Version in Complete Graphs} proves our polylogarithmic upper bounds on the price of anarchy for all ranges of $\alpha$ in the cooperative network creation game in complete graphs. Section~\ref{Cooperative Version in General Graphs} considers how the cooperative network creation game differs in general graphs, and proves our upper bounds for this model. Section~\ref{Unilateral Version in General Graphs} extends these results to apply to the unilateral network creation game in general graphs. Section~\ref{Lower Bounds in General Graphs} proves lower bounds for both the unilateral and cooperative network creation games in general graphs, which match our upper bounds for some ranges of~$\alpha$.
\begin{table*}
\centering
\footnotesize
\tabcolsep=0pt
\def\LABEL#1{\hbox to 0pt{\hss#1\hss}}
\def\BOUND#1#2{\multicolumn{#1}{|c|}{\hbox{\,}#2\hbox{\,}}}
\hbox to \hsize{\hss
\begin{tabular}{lclclclclclclclclclc}
\multicolumn{1}{r}{$\alpha = $ \hbox{~~}}
&\LABEL{$0$}& \hbox{\qquad}
&\LABEL{$n$}& \hbox{\qquad\qquad}
&\LABEL{$n \lg^{0.52} n$}& \hbox{\qquad\qquad}
&\LABEL{$n \lg^{7.16} n$\hspace{1em}}& \hbox{\qquad\quad}
&\LABEL{$n^{3/2}$}& \hbox{\qquad}
&\LABEL{\hspace*{1em}$n^{5/3}$}& \hbox{\qquad\qquad}
&\LABEL{$n^2$} & \hbox{\qquad\qquad}
&\LABEL{$n^2 \lg n$} & \hbox{\qquad}
&\LABEL{$\infty$}
\\ \cline{2-17}
Cooperative, complete graph\hbox{~} && \BOUND{1}{$\Theta(1)$} && \BOUND{1}{$\lg^{3.32} n$} && \BOUND{1}{$O\big(\lg n {+} \sqrt{n \over \alpha} \lg^{3.58}n\big)$} && \BOUND{9}{$\Theta(1)$}
\\ \cline{2-17}
Cooperative, general graph\hbox{~} && \BOUND{1}{$O(\alpha^{1/3})$} && \BOUND{7}{$O(n^{1/3})$, $\Omega(\sqrt{{\alpha \over n}})$} && \BOUND{1}{$\Theta({n^2 \over \alpha})$}&& \BOUND{1}{$O\big({n^2 \over \alpha}\lg n\big)$} && \BOUND{1}{$\Theta(1)$}
\\ \cline{2-17}
Unilateral, general graph\hbox{~} && \BOUND{1}{$O(\alpha^{1/2})$} && \BOUND{5}{$O(n^{1/2})$, $\Omega({\alpha \over n})$} && \BOUND{3}{$\Theta({n^2 \over \alpha})$} && \BOUND{3}{$\Theta(1)$}
\\ \cline{2-17}
\end{tabular}
\hss
}
\caption{Summary of our bounds on equilibrium diameter and price of anarchy
for cooperative network creation in complete graphs,
and unilateral and cooperative network creation in general graphs.
For all three of these models, our bounds are strict
improvements over the best previous bounds.}
\label{summary} \end{table*}
\section{Models}
In this section, we formally define the different models of the network creation game.
\subsection{Unilateral Model}
We start with the unilateral model, introduced in~\cite{FLMPS03}. The game is played on a \emph{host graph} $G = (V,E)$. Assume $V = \{1, 2, \dots, n\}$. We have $n$ players, one per vertex. The strategy of player $i$ is specified by a subset $s_i$ of $\{j : \{i,j\} \in E\}$, defining the set of neighbors to which player $i$ creates a link. Thus each player $i$ can create links only along edges incident to node $i$ in the host graph~$G$. Together, let $s = \langle s_1, s_2, \dots, s_n \rangle$ denote the joint strategy of all players.
To define the cost of strategies, we introduce a spanning subgraph $G_s$ of the host graph~$G$. Namely, $G_s$ has an edge $\{i,j\} \in E(G)$ if either $i \in s_j$ or $j \in s_i$. Define $d_{G_s}(i,j)$ to be the distance between vertices $i$ and $j$ in graph~$G_s$. Then the cost incurred by player $i$ is $
c_i(s) = \alpha \, |s_i| + \sum_{j=1}^n d_{G_s}(i,j). $ The total cost incurred by joint strategy $s$ is $c(s) = \sum_{i=1}^n c_i(s)$.
A (pure) \emph{Nash equilibrium} is a joint strategy $s$ such that $c_i(s) \leq c_i(s')$ for all joint strategies $s'$ that differ from $s$ in only one player~$i$. The \emph{price of anarchy} is then the maximum cost of a Nash equilibrium divided by the minimum cost of any joint strategy (called the \emph{social optimum}).
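For concreteness, the cost $c_i(s)$ can be evaluated by a breadth-first search on the created graph $G_s$. The following sketch (hypothetical helper name \texttt{unilateral\_cost}, an illustration rather than code from the paper) assumes $G_s$ is connected, as it must be in any finite-cost outcome:

```python
from collections import deque

def unilateral_cost(i, s, alpha, n):
    """Cost of player i under joint strategy s (a list of neighbor sets).

    An edge {u, v} exists in G_s if u in s[v] or v in s[u].  Usage cost
    is the sum of unweighted shortest-path distances from i (BFS);
    G_s is assumed connected.
    """
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in s[u]:
            adj[u].add(v)
            adj[v].add(u)
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return alpha * len(s[i]) + sum(dist[j] for j in range(n))

# A 4-cycle where each player buys the edge to its successor:
s = [{1}, {2}, {3}, {0}]
print(unilateral_cost(0, s, alpha=2.0, n=4))  # 2.0*1 + (0+1+2+1) = 6.0
```

By symmetry, every player in this 4-cycle incurs the same cost.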
\subsection{Cooperative Model}
Next we turn to the cooperative model, introduced in \cite{FLMPS03,AEEMR06}. Again, the game is played on a host graph $G = (V,E)$, with one player per vertex. Assume $V = \{1, 2, \dots, n\}$ and $E = \{e_1, e_2, \dots, e_{|E|}\}$. Now the strategy of player $i$ is specified by a vector
$s_i = \langle s(i, e_1), s(i,e_2), \dots, s(i, e_{|E|}) \rangle$, where $s(i, e_j)$ corresponds to the value that player $i$ is willing to pay for link~$e_j$. Together, $s = \langle s_1, s_2, \dots, s_n \rangle$ denotes the strategies of all players.
We define a spanning subgraph $G_s = (V,E_s)$ of the host graph~$G$: $e_j$ is an edge of $G_s$ if $\sum_{i \in V(G)} s(i,e_j) \geq \alpha$. To make the total cost for an edge $e_j$ exactly $0$ or $\alpha$ in all cases, if $\sum_{i \in V(G)} s(i,e_j) > \alpha$, we uniformly scale the costs to sum to~$\alpha$: $s'(i,e_j) = \alpha s(i,e_j)/\sum_{k \in V(G)} s(k,e_j)$. (Equilibria will always have $s = s'$.) Then the cost incurred by player $i$ is
$ c_i(s) = \sum_{e_j \in E_s} s'(i,e_j) + \sum_{j=1}^n d_{G_s}(i,j). $
The total cost incurred by joint strategy $s$ is
$
c(s) = \alpha \, |E_s| + \sum_{i=1}^n \sum_{j=1}^n d_{G_s}(i,j). $
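Analogously, the cooperative cost accounting, including the uniform rescaling $s'$ of over-subscribed edges, can be sketched as follows (hypothetical names; offers are stored per edge, and $G_s$ is assumed connected):

```python
from collections import deque

def cooperative_cost(i, payments, alpha, n):
    """Cost of player i in the cooperative model.

    payments: dict mapping an edge (u, v) with u < v to a list of n
    offers, payments[e][k] = s(k, e).  Edge e is built when the offers
    sum to at least alpha; over-payments are rescaled so each built
    edge costs exactly alpha in total.
    """
    built, paid = [], 0.0
    for e, offers in payments.items():
        total = sum(offers)
        if total >= alpha:
            built.append(e)
            paid += alpha * offers[i] / total  # rescaled payment s'(i, e)
    adj = [set() for _ in range(n)]
    for u, v in built:
        adj[u].add(v)
        adj[v].add(u)
    dist = {i: 0}
    q = deque([i])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return paid + sum(dist[j] for j in range(n))

# Three players on a path 0-1-2; each endpoint splits an edge with node 1.
pay = {(0, 1): [1.0, 1.0, 0.0], (1, 2): [0.0, 1.0, 1.0]}
print(cooperative_cost(0, pay, alpha=2.0, n=3))  # 1.0 + (0+1+2) = 4.0
```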
In this cooperative model, the notion of Nash equilibrium is less natural because it allows only one player to change strategy, whereas a cooperative purchase in general requires many players to change their strategies. Therefore we use a stronger notion of equilibrium that allows coalitions among players, inspired by the strong Nash equilibrium of Aumann \cite{Aumann59}, and modeled after the pairwise stability property introduced for the bilateral game \cite{CP05}. Namely, a joint strategy $s$ is a \emph{collaborative equilibrium} if, for any edge $e$ of the host graph~$G$, for any coalition $C \subseteq V$, and for any joint strategy $s'$ differing from $s$ in only $s'(i,e)$ for $i \in C$, some player $i \in C$ has $c_i(s') > c_i(s)$.
Note that any such joint strategy must have every sum $\sum_{i \in V(G)} s(i,e_j)$ equal to either $0$ or~$\alpha$, so we can measure the cost $c_i(s)$ in terms of $s(i,e_j)$ instead of $s'(i,e_j)$. The \emph{price of anarchy} is the maximum cost of a collaborative equilibrium divided by the minimum cost of any joint strategy (the \emph{social optimum}).
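The edge-purchase rule above is easy to state in code. The following sketch (our own function name and data layout) takes the raw offers $s(i,e)$, decides which edges are bought, and returns the scaled payments $s'(i,e)$ so that each bought edge costs exactly $\alpha$ in total.

```python
def scaled_payments(offers, alpha):
    """offers[e][i] = s(i, e), the amount player i offers toward edge e.

    Returns (bought, paid): the bought edge set E_s, and for each bought
    edge the scaled payments s'(i, e), which sum to exactly alpha.
    """
    bought, paid = set(), {}
    for e, bids in offers.items():
        total = sum(bids.values())
        if total >= alpha:          # edge is created
            bought.add(e)
            # uniformly scale the offers so payments sum to exactly alpha
            paid[e] = {i: alpha * b / total for i, b in bids.items()}
    return bought, paid
```

For instance, with $\alpha=4$, two players offering $3$ each on an edge buy it and each pays $2$ after scaling, while a lone offer of $1$ on another edge buys nothing.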
\label{dynamics} We can define a simple dynamics for the cooperative network creation game in which we repeatedly pick a pair of vertices, have all players determine their valuation of an edge between those vertices (change in $c_i(s)$ from addition or removal), and players thereby bid on the edge and change their strategies. These dynamics always converge to a collaborative equilibrium because each change decreases the total cost $c(s)$, which is a discrete quantity in the lattice $\mathbb Z + \alpha \mathbb Z$. Indeed, the system therefore converges after a number of steps polynomial in $n$ and the smallest integer multiple of $\alpha$ (if one exists). More generally, we can show an exponential upper bound in terms of just $n$ by observing that the graph uniquely determines $c(s)$, so we can never repeat a graph by decreasing $c(s)$.
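The convergence argument can be checked on small instances. The sketch below is a simplified, hypothetical variant of these dynamics (not the bidding protocol itself): it toggles a single host edge whenever doing so strictly decreases the total cost $c(s) = \alpha\,|E_s| + \sum_{i,j} d_{G_s}(i,j)$; a total-cost decrease means some division of payments makes every contributing player better off. It terminates because $c(s)$ strictly decreases at each step.

```python
from collections import deque

def total_usage(adj, n):
    """Sum of all ordered-pair distances; infinite if disconnected."""
    tot = 0
    for i in range(n):
        dist = {i: 0}
        q = deque([i])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        if len(dist) < n:
            return float('inf')
        tot += sum(dist.values())
    return tot

def collaborative_dynamics(n, host_edges, alpha):
    """Toy improvement dynamics: starting from the full host graph,
    repeatedly add/remove one host edge while that lowers c(s)."""
    es = set(host_edges)
    adj = {i: set() for i in range(n)}
    for e in es:
        i, j = tuple(e)
        adj[i].add(j); adj[j].add(i)
    improved = True
    while improved:
        improved = False
        for e in sorted(host_edges, key=sorted):
            i, j = tuple(e)
            cost = alpha * len(es) + total_usage(adj, n)
            # toggle edge e, keeping the change only if c(s) decreases
            if e in es:
                es.remove(e); adj[i].discard(j); adj[j].discard(i)
            else:
                es.add(e); adj[i].add(j); adj[j].add(i)
            if alpha * len(es) + total_usage(adj, n) < cost:
                improved = True
            else:  # revert the toggle
                if e in es:
                    es.remove(e); adj[i].discard(j); adj[j].discard(i)
                else:
                    es.add(e); adj[i].add(j); adj[j].add(i)
    return es
```

On a triangle host graph this keeps all three edges for small $\alpha$ and settles on a two-edge tree once $\alpha$ exceeds the usage saving of the third edge.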
\section{Preliminaries} \label{preliminaries}
In this section, we define some helpful notation and prove some basic results. Call a graph $G_s$ corresponding to an equilibrium joint strategy $s$ an \emph{equilibrium graph}. In such a graph, let $d_{G_s}(u,v)$ be the length of the shortest path from $u$ to~$v$ and
$\mathop{\mathrm{Dist}}\nolimits_{G_s}(u)$ be $\sum_{v \in V(G_s)}d_{G_s}(u,v)$. Let $N_k(u)$ denote the set of vertices with distance at most $k$ from vertex~$u$, and let $N_k = \min_{v \in V(G)} |N_k(v)|$. In both the unilateral and cooperative network creation games, the total cost of a strategy consists of two parts. We refer to the cost of buying edges as the \emph{creation cost} and the cost $\sum_{v \in V(G_s)}d_{G_s}(u,v)$ as the \emph{usage cost} of player~$u$.
First we prove the existence of collaborative equilibria for complete host graphs. Similar results are known in the unilateral case \cite{FLMPS03,AFM07}.
\begin{lemma}
In the cooperative network creation game,
any complete graph is a collaborative equilibrium for $\alpha \leq 2$,
and any star graph is a collaborative equilibrium for $\alpha \geq 2$. \end{lemma}
Next we show that, in the unilateral version, a bound on the usage cost suffices to bound the total cost of an equilibrium graph~$G_s$, similar to \cite[Lemma~1]{PODC07}.
\begin{lemma} \label{tree}
The total cost of any equilibrium graph in the unilateral game is at most
$\alpha \, n + 2 \sum_{u,v \in V(G_s)}d_{G_s}(u,v)$. \end{lemma}
Next we prove a more specific bound for the cooperative version, using the following bound on the number of edges in a graph of large girth:
\begin{lemma} {\rm \cite{DB91}} \label{sparse}
The number of edges in an $n$-vertex graph of odd girth $g$
is $O(n^{1+2/(g-1)})$. \end{lemma}
\begin{lemma} \label{general_bound} For any integer $g$, the total cost of any equilibrium graph $G_s$ is at most $\alpha \, O(n^{1+2/g}) + g \sum_{u,v \in V(G_s)}d_{G_s}(u,v)$. \end{lemma}
\section{Cooperative Version in Complete Graphs} \label{Cooperative Version in Complete Graphs}
In this section, we study the price of anarchy when any number of players can cooperate to create any link, and the host graph is the complete graph.
We start with two lemmata that hold for both the unilateral and cooperative versions of the problem. The first lemma bounds a kind of ``doubling radius'' of large neighborhoods around any vertex, which the second lemma uses to bound the usage cost.
\begin{lemma} \label{2k+2alpha/n}
{\rm \cite[Lemma~4]{PODC07}}
For any vertex $u$ in an equilibrium graph $G_s$,
if $|N_k(u)| > n/2$, then $|N_{2k+2 \alpha / n}(u)| \geq n$. \end{lemma}
\begin{lemma} \label{N_k>n/2} If $|N_k(u)| > n/2$ for some vertex $u$ in an equilibrium graph $G_s$, then the usage cost is at most $O(n^2k+\alpha n)$. \end{lemma}
Next we show how to improve the bound on ``doubling radius'' for large neighborhoods in the cooperative game:
\begin{lemma} \label{2k+4sqrt(alpha/n)}
For any vertex $u$ in an equilibrium graph $G_s$,
if $|N_k(u)| > n/2$, then $|N_{2k+4 \sqrt{\alpha / n}}(u)| \geq n$. \end{lemma}
Next we consider what happens with arbitrary neighborhoods, using techniques similar to \cite[Lemma~5]{PODC07}.
\begin{lemma} \label{4k+3}
If $|N_k(u)| \geq Y$ for every vertex $u$ in an equilibrium graph~$G_s$,
then either $|N_{4k+2}(u)| > n/2$ for some vertex $u$
or $|N_{5k+3}(u)| \geq Y^2 n/\alpha$ for every vertex~$u$. \end{lemma}
\begin{proof}
If there is a vertex $u$ with $|N_{4k+2}(u)| > n/2$, then the
claim is obvious. Otherwise, for every vertex~$u$, $|N_{4k+2}(u)|
\leq n/2$. Let $u$ be an arbitrary vertex. Let $S$ be the set of
vertices whose distance from $u$ is $4k+3$.
We select a subset of $S$, called \emph{center points},
by the following greedy algorithm.
We repeatedly select an unmarked vertex $z \in S$ as a center point,
mark all unmarked vertices in $S$ whose distance from $z$ is at
most $2 k$, and assign these vertices to~$z$.
\begin{wrapfigure}{r}{3in} \centering \scalebox{0.6}{\figtex{lemma8_1}} \caption{Center points.} \label{fig:02}
\end{wrapfigure}
Suppose that we select $l$ vertices $x_1, x_2, \dots, x_l$ as center points.
We prove that $l \geq |N_k(u)| n/\alpha$.
Let $C_i$ be the vertices in $S$ assigned to~$x_i$; see
Figure~\ref{fig:02}.
By construction, $S = \bigcup_{i=1}^l C_i$.
We also assign each vertex $v$ at distance at least $4 k + 4$ from $u$
to one of these center points, as follows.
Pick any one shortest path from $v$ to~$u$
that contains some vertex $w \in S$,
and assign $v$ to the same center point as~$w$. This vertex $w$ is
unique in this path because this path is a shortest path from $v$ to~$u$.
Let $T_i$ be the set of vertices assigned to $x_i$
and whose distance from $u$ is more than $4k+2$.
By construction, $\bigcup_{i=1}^l T_i$ is the set of vertices
at distance more than $4k+2$ from~$u$.
The shortest path from $v \in T_i$ to $u$ uses some vertex $w \in C_i$.
For any vertex $x$ whose distance is at most $k$ from $u$ and
for any $y \in T_i$, adding the edge $\{u,x_i\}$ decreases the distance
between $x$ and $y$ by at least~$2$, because
the shortest path from $y \in T_i$ to $u$ uses some vertex $w \in C_i$,
as shown in Figure~\ref{fig:02}.
By adding edge $\{u,x_i\}$, the distance between $u$ and $w$
would become at most $2k+1$ and the distance between $x$ and $w$
would become at most $3k+1$, where $x$ is any vertex whose distance
from $u$ is at most $k$. Because the current distance between $x$
and $w$ is at least $4k+3-k=3k+3$, adding the edge $\{u,x_i\}$ decreases
this distance by at least~$2$. Consequently the distance between $x$
and any $y \in T_i$ decreases by at least~$2$. Note that the distance
between $x$ and $y$ is at least $d_{G_s}(u,y)-k$, and after adding
edge $\{u,x_i\}$, this distance becomes at most
$3k+1 + d_{G_s}(w,y) = 3k+1 + d_{G_s}(u,y) - d_{G_s}(u,w)
= 3k+1 + d_{G_s}(u,y) - (4k+3)
= d_{G_s}(u,y) - k - 2$.
Thus any vertex $y \in T_i$ has
incentive to pay at least $2 \, |N_k(u)|$ for edge $\{u,x_i\}$.
Because the edge $\{u,x_i\}$ is not in equilibrium, we conclude that
$\alpha \geq 2 |T_i| |N_k(u)|$. On the other hand, $|N_{4k+2}(u)| \leq n/2$,
so $\sum_{i=1}^l |T_i| \geq n/2$. Therefore, $l \, \alpha \geq
2 |N_k(u)| \sum_{i=1}^l |T_i| \geq n |N_k(u)|$ and hence $l \geq n |N_k(u)|/\alpha$.
According to the greedy algorithm, the distance between any pair of
center points is more than~$2 k$;
hence, $N_k(x_i) \cap N_k(x_j) = \emptyset$ for $i \neq j$.
By the hypothesis of the lemma, $|N_k(x_i)| \geq Y$ for every vertex~$x_i$;
hence $|\bigcup_{i=1}^l N_k(x_i)| = \sum_{i=1}^l |N_k(x_i)| \geq l \, Y$.
For every $i \leq l$, we have $d_{G_s}(u,x_i)=4k+3$, so vertex $u$
has a path of length at most $5k+3$ to every vertex whose distance to $x_i$
is at most~$k$.
Therefore, $|N_{5k+3}(u)| \geq |\bigcup_{i=1}^l N_k(x_i)| \geq
l \, Y \geq Y n |N_k(u)|/\alpha \geq Y^2 n/\alpha$. \end{proof}
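The greedy selection of center points used in this proof can be written out explicitly. The sketch below (our own notation; \texttt{dist} stands for any graph-distance oracle) returns centers whose pairwise distance exceeds $2k$, which is exactly why the balls $N_k(x_i)$ are disjoint.

```python
def greedy_centers(S, dist, k):
    """Greedy center-point selection from the proof: repeatedly pick an
    unmarked vertex z in S as a center, then mark and assign to z every
    unmarked vertex of S within distance 2k of z.

    Returns (centers, clusters) where clusters[z] is the set C assigned to z.
    """
    unmarked = set(S)
    centers, clusters = [], {}
    for z in S:                      # any scan order works for the argument
        if z in unmarked:
            centers.append(z)
            cluster = {w for w in unmarked if dist(z, w) <= 2 * k}
            clusters[z] = cluster
            unmarked -= cluster
    return centers, clusters
```

For example, on the vertices $0,\dots,9$ of a path with $k=1$, the centers come out $2k+1$ apart, so their radius-$k$ neighborhoods are pairwise disjoint.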
Now we are ready to prove bounds on the price of anarchy. We start with the case when $\alpha$ is a bit smaller than~$n$:
\begin{theorem} \label{1/epsilon^3}
For $1 \leq \alpha < n^{1-\epsilon}$,
the price of anarchy is at most $O(1/\epsilon^{1+\lg 5})$. \end{theorem}
Next we prove a polylogarithmic bound on the price of anarchy when $\alpha$ is close to~$n$.
\begin{theorem} \label{log(n)^3}
For $\alpha = O(n)$, the price of anarchy is $O(\lg^{1+\lg 5} n)$
and the diameter of any equilibrium graph is $O(\lg^{\lg 5} n)$. \end{theorem}
\begin{proof}
Consider an equilibrium graph $G_s$.
The proof is similar to the proof of Theorem~\ref{1/epsilon^3}.
Define $a_1=\max\{2 , 2 \alpha /n\}+1$ and $a_i=5 a_{i-1}+3$,
or equivalently $a_i=\frac{4a_1+3}{20} \cdot 5^{i}- {3 \over 4} < a_1 5^i$,
for all $i>1$.
By Lemma~\ref{4k+3}, for each $i \geq 1$, either $|N_{4 a_i+2}(v)| > n/2$ for
some vertex $v$ or $N_{a_{i+1}} \geq (n/\alpha) \, N_{a_i}^2$.
Let $j$ be the least number for which $|N_{4 a_j+2}(v)| > n/2$ for some
vertex~$v$.
By this definition, for each $i < j$,
$N_{a_{i+1}} \geq (n/\alpha) \, N_{a_i}^2$.
Because $N_{a_1} > 2 \max\{1,\alpha/n\}$, we obtain that $N_{a_i} > 2^{2^{i-1}}\max\{1,\alpha/n\}$
for every $i \leq j$.
On the other hand, $2^{2^{j-1}} \leq 2^{2^{j-1}}\max\{1,\alpha/n\} < N_{a_j} \leq n$,
so $j < \lg\lg n+1$ and $a_j < a_1 \, 5^{\lg\lg n+1} <
(2+2\alpha/n+1+1)5\lg^{\lg 5} n = 10(2+\alpha/n)\lg^{\lg 5} n$. Therefore
$N_{4 \cdot[10(2+\alpha/n)\lg^{\lg 5} n]+2}(v) > n/2$
for some vertex $v$ and using Lemma
\ref{2k+2alpha/n}, we conclude that the distance of $v$ to all other vertices
is at most $2[40(2+\alpha/n)\lg^{\lg 5} n + 2] + 2\alpha/n$. Thus the diameter of $G_s$
is at most $O((1+\alpha/n)\lg^{\lg 5} n)$.
Setting $g= \lg n$ in Lemma~\ref{general_bound}, the cost of $G_s$
is at most $\alpha \, O(n) + (\lg n) O(n^2(1+\alpha/n)\lg^{\lg 5} n)=O((\alpha n +
n^2)\lg^{1+\lg 5} n)$. Therefore the price of anarchy is at most $O(\lg^{1+\lg 5} n)$. \end{proof}
When $\alpha$ is a bit larger than $n$, we can obtain a constant bound on the price of anarchy. First we need a somewhat stronger result on the behavior of neighborhoods:
\begin{lemma} \label{5k+1}
If $|N_k(u)| \geq Y$ for every vertex $u$ in an equilibrium graph~$G_s$,
then either $|N_{5k}(u)| > n/2$ for some vertex $u$
or $|N_{6k+1}(u)| \geq Y^2 k n/2\alpha$ for every vertex~$u$. \end{lemma}
\begin{theorem} \label{sqrt(alpha/n)}
For any $\alpha > n$,
the price of anarchy is $O(\sqrt{n/\alpha}\lg^{1+\lg 6} n)$
and the diameter of any equilibrium graph is
$O(\lg^{\lg 6} n \cdot \sqrt{\alpha/n})$. \end{theorem}
By Theorem~\ref{sqrt(alpha/n)}, we conclude the following:
\begin{corollary} For $\alpha=\Omega(n \lg^{2+2\lg 6} n) \approx \Omega(n \lg^{7.16} n)$, the price of anarchy is $O(1)$. \end{corollary}
\section{Cooperative Version in General Graphs} \label{Cooperative Version in General Graphs} \label{cooperativeupper}
In this section, we study the price of anarchy when only some links can be created, e.g., because of physical limitations. In this case, the social optimum is no longer simply a clique or a star.
We start by bounding the growth of distances from the host graph $G$ to an arbitrary equilibrium graph~$G_s$:
\begin{lemma} \label{k^2/3}
For any two vertices $u$ and $v$ in any equilibrium graph $G_s$,
$d_{G_s}(u,v) = O(d_G(u,v) + \alpha^{1/3} d_G(u,v)^{2/3})$. \end{lemma}
\begin{proof} Let $u=v_0, v_1, \dots, v_k=v$ be a shortest path in $G$ between $u$ and~$v$, so $k=d_G(u,v)$. Suppose that the distance between $v_0$ and $v_i$ in $G_s$ is $d_i$, for $0 \leq i \leq k$. We first prove that $d_{i+1} \leq d_i + 1 + \sqrt{9 \alpha/d_i}$ for $0 \leq i < k$. If edge $\{v_i,v_{i+1}\}$ already exists in $G_s$, the inequality clearly holds. Otherwise, adding this edge decreases the distance between $x$ and $y$ by at least $\frac{d_{i+1}-d_i}{3}$, where $x$ is a vertex whose distance is at most $\frac{d_{i+1}-d_i}{3}-1$ from $v_{i+1}$ and $y$ is a vertex in a shortest path from $v_i$ to~$v_0$. Therefore any vertex $x$ whose distance is at most $\frac{d_{i+1}-d_i}{3}-1$ from $v_{i+1}$ can pay $\frac{d_{i+1}-d_i}{3}d_i$ for this edge. Because this edge does not exist in $G_s$ and because there are at least $\frac{d_{i+1}-d_i}{3}$ vertices of distance at most $\frac{d_{i+1}-d_i}{3}-1$ from~$v_{i+1}$, we conclude that ${\left(\frac{d_{i+1}-d_i}{3}\right)}^2d_i \leq \alpha$. Thus we have $d_{i+1} \leq d_i + 1 + \sqrt{9 \alpha/d_i}$ for $0 \leq i < k$. Next we prove that $d_{i+1} \leq d_i + 1 + 5 \alpha^{1/3}$. If edge $\{v_i,v_{i+1}\}$ already exists in $G_s$, the inequality clearly holds. Otherwise, adding this edge decreases the distance between $z$ and $w$ by at least $\frac{d_{i+1}-d_i}{5}$, where $z$ and $w$ are two vertices whose distances from $v_{i+1}$ and $v_i$, respectively, are less than $\frac{d_{i+1}-d_i}{5}$. There are at least $\left(\frac{d_{i+1}-d_i}{5}\right)^2$ pairs of vertices like $(z,w)$. Because the edge $\{v_i,v_{i+1}\}$ does not exist in $G_s$, we conclude that $\left(\frac{d_{i+1}-d_i}{5}\right)^3 \leq \alpha$. Therefore $d_{i+1} \leq d_i + 1 +5 \alpha^{1/3}$. Combining these two inequalities, we obtain $d_{i+1} \leq d_i + 1 + \min\{ \sqrt{9 \alpha/d_i} , 5 \alpha^{1/3} \}$.
Inductively we prove that $d_j \leq j + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3}$. For $j \leq 2$, the inequality is clear. Now suppose by induction that $d_j \leq j +7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3}$. If $d_j \leq 2 \alpha^{1/3}$, we reach the desired inequality using the inequality $d_{j+1} \leq d_j + 1 + 5 \alpha^{1/3}$. Otherwise, we know that $d_{j+1} \leq d_j + 1 + \sqrt{9 \alpha/d_j} = f(d_j)$, and to find the maximum of the function $f(d_j)$ over the domain $d_j \in [2\alpha^{1/3},j + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3}]$, we should check $f$'s critical points: the endpoints of the domain interval and the point where $f$'s derivative is zero. We obtain three candidate values for~$d_j$: $2 \alpha^{1/3}$, $j + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3}$, and $\left(\frac{9 \alpha}{4}\right)^{1/3}$. The third value is not in the domain, and the first was handled above, so only the second value remains. For the second value, we have
$d_{j+1} \leq \textstyle d_j + 1 + \sqrt{9 \alpha/d_j} \leq \textstyle j + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3} + 1 + \sqrt{\frac{9 \alpha}{j + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3}}} \leq \textstyle j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3} + \sqrt{\frac{10\alpha}{5\alpha^{1/3}j^{2/3}}} \leq \textstyle j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3} + \frac{\alpha^{1/3}\sqrt{2}}{j^{1/3}}$.
Because $(j+1)^{2/3} - j^{2/3} = \frac{(j+1)^2- j^2}{(j+1)^{4/3}+(j+1)^{2/3}j^{2/3}+j^{4/3}} \allowbreak \geq \frac{2j}{3(j+1)^{4/3}}$, we have
$j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}j^{2/3} + \frac{\alpha^{1/3}\sqrt{2}}{j^{1/3}} \leq \textstyle j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}(j+1)^{2/3} - \frac{10 \alpha^{1/3} j}{3(j+1)^{4/3}} + \frac{\alpha^{1/3}\sqrt{2}}{j^{1/3}} \leq \textstyle j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}(j+1)^{2/3} - \frac{5 \alpha^{1/3}}{3j^{1/3}} + \frac{\alpha^{1/3}\sqrt{2}}{j^{1/3}} \leq \textstyle j + 1 + 7 \alpha^{1/3} + 5 \alpha^{1/3}(j+1)^{2/3}$, where the second inequality uses $(j+1)^{4/3} \leq 2 j^{4/3}$ for $j \geq 2$, and the last uses $\frac 53 > \sqrt{2}$.
Note that $j+1 > 2$ and $d_k = d_{G_s}(u,v)$. Therefore $d_{G_s}(u,v)$ is at most $O(d_G(u,v) + \alpha^{1/3}d_G(u,v)^{2/3})$ and the desired inequality is proved. \end{proof}
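As a numerical sanity check (our own, not part of the proof), one can iterate the worst case of the recurrence $d_{i+1} \leq d_i + 1 + \min\{\sqrt{9\alpha/d_i},\, 5\alpha^{1/3}\}$ and compare it against the closed-form bound $j + 7\alpha^{1/3} + 5\alpha^{1/3}j^{2/3}$; the parameter values below are arbitrary.

```python
import math

def worst_case_distances(alpha, kmax):
    """Iterate d_{i+1} = d_i + 1 + min(sqrt(9a/d_i), 5a^(1/3)), the largest
    growth allowed by the two inequalities in the proof, starting at d_0 = 0."""
    d = [0.0]
    for _ in range(kmax):
        di = d[-1]
        step = 5 * alpha ** (1 / 3) if di == 0 else \
            min(math.sqrt(9 * alpha / di), 5 * alpha ** (1 / 3))
        d.append(di + 1 + step)
    return d
```

The iterates stay well below the claimed bound, consistent with $d_{G_s}(u,v) = O(d_G(u,v) + \alpha^{1/3} d_G(u,v)^{2/3})$.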
Using Lemma~\ref{k^2/3}, we prove two different bounds relating the sum of all pairwise distances in the two graphs:
\begin{corollary} \label{alpha^1/3} For any equilibrium graph~$G_s$, $\sum_{u,v \in V(G)} d_{G_s}(u,v) = O(\alpha^{1/3}) \cdot \sum_{u,v \in V(G)} d_G(u,v)$. \end{corollary}
\begin{theorem} \label{n^1/3} For any equilibrium graph $G_s$, $\sum_{u,v \in V(G)} d_{G_s}(u,v) \leq \min\{O(n^{1/3}) (\alpha n + \sum_{u,v \in V(G)} d_G(u,v)) , n^3\}$. \end{theorem}
Now we can bound the price of anarchy for the various ranges of~$\alpha$, combining Corollary~\ref{alpha^1/3}, Theorem~\ref{n^1/3}, and Lemma~\ref{general_bound}, with different choices of~$g$.
\begin{theorem} In the cooperative network creation game in general graphs, the price of anarchy is at most
\begin{enumerate} \item[\rm (a)]
$O(\alpha^{1/3})$ for $\alpha < n$~~~[$g=6$ in Lemma~\ref{general_bound} and Corollary~\ref{alpha^1/3}], \item[\rm (b)]
$O(n^{1/3})$ for $n \leq \alpha \leq n^{5/3}$~~~[$g=6$ in Lemma~\ref{general_bound} and Theorem~\ref{n^1/3}], \item[\rm (c)]
$O(\frac{n^2}{\alpha})$ for $n^{5/3} \leq \alpha < n^{2-\epsilon}$~~~[$g=2/\epsilon$ in Lemma~\ref{general_bound} and Theorem~\ref{n^1/3}], and \item[\rm (d)]
$O(\frac{n^2}{\alpha} \lg n)$
for $n^2 \leq \alpha$~~~[$g=\lg n$ in Lemma~\ref{general_bound} and Theorem~\ref{n^1/3}]. \end{enumerate} \end{theorem}
\section{Unilateral Version in General Graphs} \label{Unilateral Version in General Graphs}
Next we consider how a general host graph affects the unilateral version of the problem. Some proofs are similar to proofs for the cooperative version in Section~\ref{cooperativeupper} and hence omitted.
\begin{lemma} \label{k^1/2uni}
For any two vertices $u$ and $v$ in any equilibrium graph $G_s$,
$d_{G_s}(u,v) = O(d_G(u,v) + \alpha^{1/2} d_G(u,v)^{1/2})$. \end{lemma}
Again we relate the sum of all pairwise distances in the two graphs:
\begin{corollary} \label{alpha^1/2uni} For any equilibrium graph~$G_s$, $\sum_{u,v \in V(G)} d_{G_s}(u,v) = O(\alpha^{1/2}) \cdot \sum_{u,v \in V(G)} d_G(u,v)$. \end{corollary}
\begin{theorem} \label{n^1/2uni} For any equilibrium graph $G_s$, $\sum_{u,v \in V(G_s)} d_{G_s}(u,v) \leq \min\{O(n^{1/2}) (\alpha n + \sum_{u,v \in V(G)} d_G(u,v)) , n^3\}$. \end{theorem}
To conclude bounds on the price of anarchy, we now use Lemma \ref{tree} in place of Lemma~\ref{general_bound}, combined with Corollary \ref{alpha^1/2uni} and Theorem \ref{n^1/2uni}.
\begin{theorem} For $\alpha \geq n$, the price of anarchy is at most $\min\{O(n^{1/2}) , \frac{n^2}{\alpha}\}$. \end{theorem}
\begin{theorem} For $\alpha < n$, the price of anarchy is at most $O(\alpha^{1/2})$. \end{theorem}
\section{Lower Bounds in General Graphs} \label{Lower Bounds in General Graphs}
In this section, we prove polynomial lower bounds on the price of anarchy for general host graphs, first for the cooperative version and second for the unilateral version.
\begin{theorem} \label{lowerboundcooperative} The price of anarchy in the cooperative game is $\Omega(\min\{\sqrt{\frac{\alpha}{n}} , \frac{n^2}{\alpha}\})$. \end{theorem}
\begin{wrapfigure}{r}{2in} \centering
\scalebox{0.65}{\figtex{lower}} \caption{Lower bound graph.}\label{fig:03}
\end{wrapfigure}
\begin{proof} For $\alpha = O(n)$ or $\alpha = \Omega(n^2)$, the claim is clear. Otherwise, let $k=\sqrt{\frac{\alpha}{12 n}} \geq 2$. Thus $k = O(\sqrt n)$. We construct graph $G_{k,l}$ as follows; see Figure~\ref{fig:03}. Start with $2l$ vertices $v_1, v_2, \dots, v_{2l}$ connected in a cycle. For any $1 \leq i \leq 2l$, insert a path $P_i$ of $k$ edges between $v_i$ and $v_{i+1}$ (where we define $v_{2l+1}=v_1$). For any $1 \leq i \leq l$, insert a path $Q_i$ of $k$ edges between $v_{2i}$ and $v_{2i+2}$ (where we define $v_{2l+2}=v_2$). Therefore there are $n=(3k-1)l$ vertices and $(3k+2)l$ edges in $G_{k,l}$, so $l = n/(3 k - 1)$.
For simplicity, let $G$ denote $G_{k,l}$ in the rest of the proof. Let $G_1$ be a spanning connected subgraph of $G$ that contains exactly one cycle, namely, $(v_1, v_2, \dots, v_{2l}, v_1)$; in other words, we remove from $G$ exactly one edge from each path $P_i$ and~$Q_i$. Let $G_2$ be a spanning connected subgraph of $G$ that contains exactly one cycle, formed by the concatenation of $Q_1, Q_2, \dots, Q_l$, and contains none of the edges $\{v_i,v_{i+1}\}$, for $1 \leq i \leq 2l$; for example, we remove from $G$ exactly one edge from every $P_{2i}$ and every edge $\{v_i,v_{i+1}\}$.
Next we prove that $G_2$ is an equilibrium. For any $1 \leq i \leq l$, removing any edge of path $Q_i$ increases the distance between its endpoints and at least $n/6$ vertices by at least $\frac{l k}{3} \geq n/9$. Because $\alpha = o(n^2)$, we have $\alpha < \frac{n}{9} \cdot \frac{n}{6}$, so if we assign this edge to be bought solely by one of its endpoints, then this owner will not delete the edge. Removing other edges makes $G_2$ disconnected. For any $1 \leq i \leq l$, adding an edge of path $P_{2i}$ or path $P_{2i+1}$ or edge $\{v_{2i},v_{2i+1}\}$ or edge $\{v_{2i+1}, v_{2i+2}\}$ to $G_2$ decreases only the distances from some vertices of paths $P_{2i}$ or $P_{2i+1}$ to the other vertices. There are at most $n(2k-1)$ such pairs. Adding such an edge can decrease each of these distances by at most $3k-1$. But we know that $\alpha \geq 12 n k^2 > 2 n (2k-1) (3k-1)$, so the price of the edge is more than its total benefit among all nodes, and thus the edge will not be created by any coalition.
The cost of $G_1$ is equal to $O(\alpha n + n^2 (k+l)) = O(\alpha n + n^2 (k + \frac{n}{k}))$ and the cost of $G_2$ is $\Omega(\alpha n + n^2(k+l k)) = \Omega(\alpha n + n^3)$. The cost of the social optimum is at most the cost of $G_1$, so the price of anarchy is at least $\Omega(\frac{n^3}{\alpha n + n^3/k +kn^2}) = \Omega(\min\{\frac{n^2}{\alpha} , k, \frac{n}{k}\})$. Because $k=O(\sqrt{n})$, the price of anarchy is at least $ \Omega(\min\{\frac{n^2}{\alpha} , k\}) =\Omega(\min\{\frac{n^2}{\alpha} , \sqrt{\frac{\alpha}{n}}\})$. \end{proof}
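To make the construction of $G_{k,l}$ concrete, here is a sketch (vertex labels are our own) that builds its edge set; for $k \geq 2$ it reproduces the counts of $(3k-1)l$ vertices and $(3k+2)l$ edges stated in the proof.

```python
def build_G_kl(k, l):
    """Build the lower-bound graph G_{k,l}: a cycle v_1..v_{2l}, a path P_i
    of k edges between v_i and v_{i+1}, and a path Q_i of k edges between
    v_{2i} and v_{2i+2} (indices wrapping around). Returns the edge set."""
    edges = set()

    def path(a, b, tag, k):
        # a path of k edges between existing vertices a and b,
        # through k-1 fresh internal vertices labelled (tag, t)
        prev = a
        for t in range(1, k):
            nxt = (tag, t)
            edges.add(frozenset((prev, nxt)))
            prev = nxt
        edges.add(frozenset((prev, b)))

    V = [('v', i) for i in range(1, 2 * l + 1)]      # V[idx] is v_{idx+1}
    for i in range(2 * l):                           # cycle edges {v_i, v_{i+1}}
        edges.add(frozenset((V[i], V[(i + 1) % (2 * l)])))
    for i in range(2 * l):                           # paths P_1, ..., P_{2l}
        path(V[i], V[(i + 1) % (2 * l)], ('P', i + 1), k)
    for i in range(l):                               # paths Q_1, ..., Q_l
        path(V[2 * i + 1], V[(2 * i + 3) % (2 * l)], ('Q', i + 1), k)
    return edges
```

For instance, $G_{2,3}$ has $(3\cdot 2+2)\cdot 3 = 24$ edges on $(3\cdot 2-1)\cdot 3 = 15$ vertices.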
\begin{theorem} \label{lower bound unilateral} The price of anarchy in the unilateral game is $\Omega(\min\{\frac{\alpha}{n} , \frac{n^2}{\alpha}\})$. \end{theorem}
The proof uses a construction similar to that of Theorem~\ref{lowerboundcooperative}.
\end{document} |
\begin{document}
\title{Combinatorial Dehn surgery on cubed\\and Haken $3$--manifolds} \asciititle{Combinatorial Dehn surgery on cubed and Haken 3-manifolds} \shorttitle{Combinatorial Dehn surgery on cubed and Haken 3--manifolds}
\authors{I\thinspace R Aitchison\\J\thinspace H Rubinstein}
\asciiauthors{IR Aitchison and JH Rubinstein}
\address{Department of Mathematics and Statistics, University of Melbourne\\ Parkville, Vic 3052, Australia} \email{[email protected], [email protected]}
\begin{abstract} A combinatorial condition is obtained for when immersed or embedded incompressible surfaces in compact 3--manifolds with tori boundary components remain incompressible after Dehn surgery. A combinatorial characterisation of hierarchies is described. A new proof is given of the topological rigidity theorem of Hass and Scott for 3--manifolds containing immersed incompressible surfaces, as found in cubings of non-positive curvature. \end{abstract} \asciiabstract{A combinatorial condition is obtained for when immersed or embedded incompressible surfaces in compact 3--manifolds with tori boundary components remain incompressible after Dehn surgery. A combinatorial characterisation of hierarchies is described. A new proof is given of the topological rigidity theorem of Hass and Scott for 3--manifolds containing immersed incompressible surfaces, as found in cubings of non-positive curvature.}
\primaryclass{57M50} \secondaryclass{57N10} \keywords{3--manifold, Dehn surgery, cubed manifold, Haken manifold} \asciikeywords{3-manifold, Dehn surgery, cubed manifold, Haken manifold} \maketitle
\section{Introduction}\label{S:Intro}
An important property of the class of $3$--dimensional manifolds with cubings of non-positive curvature is that they contain `canonical' immersed incompressible surfaces (cf \cite{AR1}). In particular these surfaces satisfy the $4$--plane and $1$--line properties of Hass and Scott (cf \cite{HS}) and so any $P^2$--irreducible $3$--manifold which is homotopy equivalent to such a $3$--manifold is homeomorphic to it (topological rigidity). In this paper we study the behaviour of these canonical surfaces under Dehn surgery on
knots and links. A major objective here is to show that in the case of a cubed manifold with tori as boundary components, there is a simple criterion to tell if a canonical surface
remains incompressible after a particular Dehn surgery. This result is very much
in the spirit of the negatively curved Dehn surgery of Gromov and Thurston (cf \cite{BH})
and was announced in \cite{AR2}. Many examples are given in \cite{AR3}.
The key lemma required is a combinatorial version of Dehn's lemma and the loop theorem for immersed surfaces of the type considered by Hass and Scott
with an extra condition --- the triple point property. We are able to give a
simplified proof of the rigidity theorem of Hass and Scott for $3$--manifolds
containing immersed incompressible surfaces with this additional condition.
By analogy with the combinatorial Dehn's lemma and the loop theorem, we are
also able to find a combinatorial characterisation of the crucial idea of
hierarchies in $3$--manifolds, as used by Waldhausen in his solution to the word problem \cite{Wa1}. In particular, if a list of embedded surfaces is given
in a $3$--manifold, with boundaries on the previous surfaces in the list, then a simple condition determines whether all the surfaces are incompressible and mutually boundary incompressible. This idea can be used to determine if
the hierarchy persists after Dehn surgery --- for example, if there are
several cusps in a $3$--manifold then we can tell if Dehn surgery on all
but one cusp makes the final cusp remain incompressible (cf \cite{AR2}).
One should also note the result of Mosher \cite{Mo} that the fundamental
group of an atoroidal cubed $3$--manifold is word hyperbolic. Also in \cite{AMR} it is
shown that any cubed $3$--manifold which has all edges of even degree is geometric
in the sense of Thurston.
This research was supported by the Australian Research Council.
\section{Preliminaries}\label{S:Prelim}
In this section we introduce the basic concepts and definitions
needed for the paper. All $3$--manifolds will be compact and connected.
All surfaces will be compact but not necessarily connected nor orientable. All maps
will be assumed smooth.
\begin{defn}
A properly immersed compact surface not equal to a $2$--sphere, projective plane
or disk is {\it incompressible} if the induced map of fundamental groups of the surface
to the $3$--manifold is injective. We say that the surface is {\it boundary incompressible} if no essential arc on the surface with ends in its boundary is homotopic keeping its ends
fixed into the boundary of the $3$--manifold. \end{defn}
By the work of Schoen and Yau \cite{SY}, any incompressible surface can be homotoped to be least area in its homotopy class,
assuming that some Riemannian metric is chosen on the $3$--manifold so that the boundary
is {\it convex} (inward pointing mean curvature). Then by Freedman, Hass, Scott \cite{FHS},
after possibly a small perturbation, the least area map becomes a self-transverse
immersion which lifts to a collection of embedded planes in the universal cover of the $3$--manifold. Moreover any two of these planes which cross, meet in proper
lines only, so that there are no simple closed curves of intersection.
We will always assume that an incompressible surface is homotoped
to satisfy these conditions.
Alternatively, the method of PL minimal surfaces can be used instead of
smooth minimal surfaces (cf \cite{JR}).
However it is interesting to arrive at
this conclusion from other considerations, which we will do in Section \ref{S:Topo}, for the special surfaces associated with cubings of non-positive curvature.
We will denote by {$\cal P$} the collection of planes covering a given
incompressible surface $F$ in $M$.
\begin{defn} $F$ satisfies the {\it $k$--plane property} for some positive integer $k$ if any subfamily of $k$ planes in {$\cal P$} has a disjoint pair. $F$ satisfies the {\it $1$--line property}
if any two intersecting planes in {$\cal P$} meet in a single line. Finally, the {\it triple point property} is defined for surfaces $F$ which already obey the $1$--line property. This condition states that for any three planes $P$, $P'$
and $P''$ of {$\cal P$} which mutually intersect in lines, the three lines cross in an odd
(and hence finite) number of triple points. \end{defn}
\begin{rem}
The triple point property rules out the possibility that the common stabiliser of the three intersecting planes is non-trivial. Indeed, the cases of either three disjoint lines of intersection or three lines meeting in infinitely many triple points are not allowed. A non-trivial common stabiliser would require one of these two possibilities.
\end{rem}
\begin{defn}
A $3$--manifold is said to be {\it irreducible} if every embedded $2$--sphere bounds a $3$--cell. It is {\it $P^2$--irreducible} if it is irreducible and there are no embedded $2$--sided projective planes.
\end{defn}
From now on we will be dealing with $3$--manifolds which are $P^2$--irreducible.
\begin{defn}
A $3$--manifold will be called {\it Haken} if it is $P^2$--irreducible and either has non-empty incompressible boundary or is closed and admits a closed embedded $2$--sided incompressible surface.
\end{defn}
\begin{defn}
A compact $3$--manifold admits a {\it cubing of non-positive curvature} (or just a {\it cubing} ) if it can be formed by gluing together a finite collection of standard Euclidean cubes with the following conditions: \begin{itemize} \item[-] Each edge of the resulting cell structure has degree at least four.
\item[-] Each link of a vertex of the cell structure is a triangulated $2$--sphere so that any loop of edges of length three
bounds a triangle, representing the intersection with a single cube,
and there are no cycles of length less than three, other than a single edge traversed twice with opposite orientations. \end{itemize} \end{defn}
\begin{rem}
Note the conditions on the cubing are just a special case of Gromov's CAT$(0)$ conditions.
\end{rem}
\begin{defn}
The {\it canonical surface} $S$ in a cubed $3$--manifold is formed by gluing together the finite collection of squares, each obtained as the intersection of a plane with a cube, where the plane is midway between a pair of parallel faces of the cube.
\end{defn}
\begin{rems}\begin{enumerate}\item We consider cubed $3$--manifolds which are closed or with boundaries consisting of incompressible tori and Klein bottles. In the latter case, the boundary surfaces are covered by square faces of the cubes. Such cubings are very useful for studying knot and link complements.
\item In \cite{AR1} it is sketched why the canonical surface is incompressible and satisfies the $4$--plane, $1$--line and triple point properties.
We prove this in detail in the next section for completeness.
\item Regions complementary to $S$ are cones on unions of squares obtained by canonically decomposing each triangle in the link of a vertex into $3$ squares.
\item The complementary regions of $S$ in (3) are polyhedral cells. Each such polyhedron $\Pi$ has boundary determined by a graph $\Gamma_\Pi$ whose edges correspond to arcs of double points of $S$ and whose vertices are triple points of $S$. By construction, there is a unique vertex $v$ in the original cubing which lies in the centre of $\Pi$: the graph $\Gamma_\Pi$ is merely the graph on $S^2$ dual to the triangulation determined by the link of $v$ in the cubing. The conditions on the cubing translate directly into the following statements concerning the graph $\Gamma_\Pi$.
\begin{itemize}\item[(a)] Every face has degree at least $4$.
\item[(b)] Every embedded loop on $S^2$ meeting $\Gamma_\Pi$ transversely in exactly two points bounds a disk cutting off a single arc of $\Gamma_\Pi$.
\item[(c)] Every embedded loop on $S^2$ meeting $\Gamma_\Pi$ transversely in exactly three points bounds a disk which contains a single vertex of degree $3$ of $\Gamma_\Pi$. So the part of $\Gamma_\Pi$ inside this disk can be described as a `Y'. \end{itemize}\end{enumerate} \end{rems}
\begin{nota}
We will use $S$ to denote a subset of $M$, ie, the image of the canonical surface and ${\bar S}$ to denote the domain of a map $f\co {\bar S} \rightarrow M$ which has image $f({\bar S}) = S$.
A similar notational convention applies for other surfaces in $M$.
\end{nota}
\section{The canonical surface}\label{S:Canon}
In this section we verify the properties listed in the final remark in the previous section.
\begin{thm}\label{T:1}
The canonical surface $S$ in a cubed $3$--manifold $M$ is incompressible and satisfies the $4$--plane, $1$--line and triple point properties.
Moreover $S$ is covered by a collection of embedded planes in the universal covering of $M$
and two such planes meet at most in a single line. Also two such lines meet in at most a single point.
\end{thm}
\begin{proof} We show first that $S$ is incompressible. Of course this follows by standard techniques, by thinking of $M$ as having a polyhedral metric of non-positive curvature and using the Cartan--Hadamard Theorem to identify the universal
covering with $\bf R^3$ (cf \cite{BGS}). Since $S$ is totally geodesic and geodesics diverge
in the universal covering space, we see that $S$ is covered by a collection
of embedded planes {$\cal P$}.
However we want to use a direct combinatorial argument which generalises to
situations in the next section where no such metric is obvious on $M$. Suppose
that there is an immersed disk $D$ with boundary $C$ on $S$. Assume that $D$ is in general position relative to $S$, so that the inverse image of $S$ is a collection $G$
of arcs and loops in the domain of the map ${\bar D}\rightarrow M$ with image $D$. $G$ can be viewed as a graph with vertices of degree four in the interior of ${\bar D}$.
Let $v$ be the number of vertices, $e$ the number of edges and $f$ the number of faces of the graph $G$, where the faces are the complementary regions of $G$ in {$\bar D$}. We assume
initially that these regions are all disks.
An Euler characteristic argument gives $v - e + f = 1$, and since each interior vertex has degree four we have $2v \le e$; it follows that some faces have degree less than four. We define some basic homotopies on
the disk $D$ which change $G$ to eventually decrease the number of vertices or edges. First of all assume there is a region in the complement of $G$ adjacent to $C$ with two or three
vertices. In the former case we have a $2$--gon $D'$ of {$\bar D$} with one boundary arc on $C$
and the other on $G$. So $D'$ has interior disjoint from $S$
and its boundary lies on $S$. But by definition of $S$, any such $2$--gon can be homotoped into a double arc of $S$. For the $2$--gon is contained in a cell in the closure of the complement of $S$. The cell has a polyhedral structure which can be described as the cone on the dual cell
decomposition of a link of a vertex of the cubing.
The two arcs of the $2$--gon can be deformed into the $1$--skeleton of the link and then define a cycle of length two. By definition such a cycle is an edge taken twice in opposite directions. We now homotop $D$ until $D'$ is pushed into the double arc of $S$
and then push $D$ slightly off this double arc. The effect is to eliminate the $2$--gon $D'$,
ie one arc or edge of $G$ is removed.
Next assume there is a region $D''$ of the complement of $G$ bounded by three arcs, two of which are edges of $G$ and one of which lies in $C$. The argument is very similar to that in the previous paragraph. Note that when the boundary of $D''$ is pushed into the $1$--skeleton of the link of some vertex of the cubing, it gives a $3$--cycle which is the boundary of a triangle representing the intersection of the link with a single cube. Therefore we can slide $D$ so that $D''$ is pushed into the triple point of $S$ lying at the centre of this cube. Again by perturbing $D$ slightly off $S$, $D''$ is removed and $G$
has two fewer edges and one fewer vertex.
Finally to complete the argument we need to discuss how to deal with internal
regions which are $1$--gons, $2$--gons or $3$--gons. Now $1$--gons cannot occur, as there are no $1$--cycles in the link of a vertex. $2$--gons can be eliminated as above. The same move as described above on $3$--gons has the effect of inverting them, ie, moving one of the edges of the $3$--gon past the opposite vertex. This is enough to finish the argument by the following observations.
First of all consider an arc of $G$ which is a path with both ends on $C$ and passes through vertices by using opposite edges at each vertex (degree $4$), ie, corresponds to a double curve of $D \cap S$. If such an arc has a self-intersection, it is easy to see there are embedded $2$--gons or $1$--gons
consisting of subarcs. Choosing an innermost such $2$--gon or $1$--gon, there must be `boundary' $3$--gons (relative to the subdisk defined by the $2$--gon or $1$--gon) if there are intersections with arcs. Now push all $3$--gons off such a $1$--gon or $2$--gon, starting with the boundary ones. Then we arrive at an innermost $1$--gon or $2$--gon with no arcs crossing it and can use the previous method to obtain a contradiction or to decrease the complexity of $G$. Similarly if two such arcs meet at more than one point, we can remove $3$--gons from the interior of an innermost resulting $2$--gon and again simplify $G$. Finally if such arcs meet at one point, we get $3$--gons which can be made innermost. So in all cases $G$ can be reduced down to the empty
graph, with $C$ then lying in a face of $S$. So $C$ is contractible on $S$.
It remains to discuss the situation when some regions in the complement of $G$ are not
disks. In this case, there are components of $G$ in the interior of {$\bar D$}. We simplify such
an innermost component $G'$ first by the same method as above, working with the subdisk {$\bar D^*$} consisting of all the disks in the complement of $G'$ and separated from $C$ by $G'$. So we can get rid of $G'$
and continue until finally all of $G$ is removed by a homotopy of $D$. Note that once $D$ has no interior
intersections
with $S$ then $D$ can be homotoped into $S$ as it lies in a single cell, which has the polyhedral structure of the cone on the dual of a link of a vertex of the cubing. This completes the argument showing that $S$ is incompressible.
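The Euler characteristic count underlying the existence of small faces can be made explicit as follows (a sketch, ignoring the bookkeeping for faces meeting the boundary circle $C$, which the argument above treats separately):

```latex
% Suppose, for contradiction, that every face of $G$ in $\bar D$ had
% degree at least four. Each interior vertex of $G$ has degree four, so
\[
4v \;\le\; \sum_{\text{vertices}} \deg \;\le\; 2e
\qquad\Longrightarrow\qquad e \;\ge\; 2v ,
\]
% while counting edge--face incidences gives
\[
4f \;\le\; \sum_{\text{faces}} \deg \;\le\; 2e
\qquad\Longrightarrow\qquad f \;\le\; \tfrac{1}{2}e .
\]
% Combining the two estimates,
\[
1 \;=\; \chi(\bar D) \;=\; v - e + f \;\le\; \tfrac{1}{2}e - e + \tfrac{1}{2}e \;=\; 0 ,
\]
% a contradiction; hence some face is a $1$--, $2$-- or $3$--gon.
```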
Next we wish to show why $S$ has the $4$--plane, $1$--line and triple point properties.
Before doing so, it is helpful to explain why the lifts of $S$ to the universal covering are planes, without using the polyhedral metric. Suppose some lift of $S$ to the universal covering {$\cal M$} were not embedded. We know such a lift
$P$ is an immersed plane by the previous argument that $S$ is incompressible. It is easy to see we can find an immersed disk $D$ with boundary $C$ on $P$ which represents a $1$--gon. There
is one vertex where $C$ crosses a double curve of $P$. But the same argument as in the previous
paragraph applies to simplify the intersections of $D$ with all the lifts $P$ of $S$. We get a contradiction, as there cannot be a $1$--gon with interior mapped into the complement of $S$. This establishes that all the lifts $P$ are embedded as claimed.
It is straightforward now to show that any pair of such planes $P, P^\prime$ which intersect,
meet in a single line. For if there is a simple closed curve of intersection, again the disk $D$ bounded by this curve on say $P$ can be homotoped relative to the other planes to get a contradiction. Similarly if there are at least two lines of intersection of $P$ and $P^\prime$ then there is a $2$--gon $D$ with boundary arcs on $P$
and $P^\prime$. Again we can deform $D$ to push its interior off {$\cal P$} giving a contradiction. This establishes the $1$--line property.
The $4$--plane and triple point properties follow once we can show that any three planes of ${\cal P}$ which mutually intersect, meet in a single triple point. For if four planes all met in pairs, then on one of the planes we would see three lines all meeting in pairs. But this implies there is a $3$--gon between the lines, and the same disk simplification argument as above shows that this is impossible.
There are two situations which could occur for three mutually crossing planes
$P$, $P'$ and $P''$. First of all, there could be no triple points at all between the
three planes. In this case the $3$--gon $D$ with three boundary arcs joining the three lines of intersection on each of the planes can be used to give a contradiction. This follows by the same simplification argument, since the $3$--gon can be homotoped to have interior disjoint from ${\cal P}$. Secondly there could be more than one triple point between the planes. But in this
case, in $P$ say, we would see two lines meeting more than once. Hence there would be $2$--gons $D$ in $P$ between these lines. The interiors of such $2$--gons can be homotoped off ${\cal P}$ and the resulting contradiction completes the argument for all the properties claimed in the theorem.\end{proof}
\begin{rems}\begin{enumerate} \item In the next section we will define a class of $3$--manifolds which are almost cubed. These do not have nice polyhedral metrics arising from a simple construction like cubings but the same methods will work as in the above theorem. This is the basis of what we call a combinatorial version of Dehn's lemma and the loop theorem.
\item In \cite{ARS}, other generalisations of cubings are given, where the manifold behaves as if `on average' it has non-positive curvature. Again the technique of the above theorem applies in this situation, to deduce incompressibility of particular surfaces.
\item A key factor in making the above method work is that there must always be faces of the graph $G$ which are $2$--gons or $3$--gons. In particular the Euler characteristic argument to show existence of such regions breaks down once $D$ is a $4$--gon! \end{enumerate}\end{rems}
\begin{defn}
An immersed surface $S$ is called {\it filling} if the closures of the complementary regions of $S$ in $M$ are all cells, for all least area maps in the homotopy class of $S$, for any metric on $M$.
\end{defn}
It is trivial to see that for the canonical surface $S$ in a cubed $3$--manifold $M$, all the closures of the complementary regions are cells.
A little more work checks that $S$ is actually filling. In fact, one way to do this is to observe that any essential loop in $M$ can be homotoped to a geodesic $C$ in the polyhedral metric defined by the cubing. Then this geodesic lifts to a line in the universal covering {$\cal M$}. A geodesic line will meet
the planes over $S$ in single points or lie in such a plane. Note that the lines of intersection between the planes are also geodesics. So if the given geodesic line lies in some plane, then by the filling property, it meets some other line of intersection in a single point. Hence it meets the corresponding plane in one point.
A homotopy of $S$ to a least area surface $S^\prime$ relative to some metric, will lift to a proper homotopy between collections of embedded planes ${\cal P}$ and ${\cal P^\prime}$
in the universal covering. This latter homotopy cannot remove such essential intersections between the given geodesic line and some plane (all points only move a bounded distance, whereas the ends of the line are an unbounded distance from the plane and on either side of it). So we conclude that any essential loop must intersect $S$.
Therefore a complementary domain to $S^\prime$ must have fundamental group with trivial image in ${\pi_1}(M)$. The argument of \cite{HRS} shows that for a least area map of an incompressible surface, all the complementary regions are $\pi_1$--injective. So such regions must be cells, as a cubed manifold is irreducible.
\begin{rem}
Note that $P^2$--irreducibility for a cubed manifold can also be shown directly, since we can apply the same argument as above to simplify the intersections of an immersed sphere or projective plane
with the canonical surface and eventually by a homotopy, achieve that the sphere or projective plane lies in a complementary cell.
\end{rem}
In \cite{AR1} it is observed that a $3$--manifold has a cubing of non-positive curvature if and only if it has a filling incompressible surface satisfying the $4$--plane, $1$--line and
triple point properties. This follows immediately from the work of Hass and Scott \cite{HS}.
\section{Combinatorial Dehn's lemma and the loop theorem}\label{S:Comb}
Our aim here is to define a class of $3$--manifolds which are almost
cubed and for which one can still verify that the canonical surface is incompressible and satisfies similar properties to those for cubings.
In fact the canonical surface here satisfies the $4$--plane, $1$--line and triple point properties, but not necessarily the filling property. So as in \cite{HRS},
the closures of the complementary regions of $S$ are $\pi_1$--injective handlebodies. These surfaces naturally arise in \cite{AR3}, where we investigate surgeries on certain
classes of simple alternating links containing such closed surfaces in their complements.
\begin{defn}
Suppose that $S$ is an immersed closed surface in a compact 3--manifold $M$. We say that the closure of a connected component of the complement in $S$ of the double curves of $S$, is a {\it face} of $S$.
\end{defn}
\begin{defn}
Suppose that $D$ is an immersed disk in $M$ with boundary $C$ on an immersed closed surface $S$ and the interior of $D$ disjoint from $S$. Assume also that there are no such disks $D$ in $M$ for which $C$ crosses the double curves of $S$ exactly once. We say that {\it $D$ is homotopically trivial relative to $S$}, if one of the following three situations holds: \begin{enumerate} \item If $C$ has no intersections with the double curves of $S$, then $D$ can be homotoped into a face of $S$, keeping its boundary fixed.
\item Any $2$--gon $D$ (ie, $C$ meets the singular set of $S$ in two points) is homotopic into a double curve of $S$, without changing the number of intersections of $C$ and the double curves of $S$ and keeping $C$ on $S$.
\item Any $3$--gon $D$ is homotopic into a triple point of $S$, without changing the number of intersections of $C$ and the double curves of $S$ and keeping $C$ on $S$. \end{enumerate} \end{defn}
\begin{rem}
Note that these conditions occur in Johannson's definition of boundary patterns in \cite{Jo1}, \cite{Jo2}.
\end{rem}
\begin{thm}\label{T:2}
Assume that $S$ is an immersed closed surface in a compact 3--manifold $M$. Suppose that all the faces of $S$ are polygons with at least four sides. Also assume that any embedded disk $D$ with boundary $C$ on $S$ and interior disjoint from $S$, with $C$ meeting the double curves two or three times, is homotopically trivial relative to $S$ and there is no such disk with $C$ crossing the double curves once.
Then $S$ is incompressible with the $4$--plane, $1$--line and triple point properties. Moreover $S$ lifts to a collection of embedded planes in the universal cover of $M$ and each pair of these planes meets in at most a single line. If three planes mutually intersect in pairs, then they share a single triple point. Also the closures of components of the complement of $S$ are $\pi_1$--injective. Finally if $S$ is incompressible with the $4$--plane, $1$--line and triple point properties then $S$ can be homotoped to satisfy the above set of conditions.
\end{thm}
\begin{proof}
The proof is extremely similar to that for Theorem \ref{T:1} so we only remark on the ideas. First of all the conditions in the statement of Theorem \ref{T:2} play the same role as the link conditions in the definition of a cubing
of non-positive curvature. So we can homotop disks which have boundary on $S$ to
reduce the graph of intersection of the interior of the disk with $S$. In this way,
$2$--gons and $3$--gons can be eliminated, as well as compressing disks for $S$. This
is the key idea and the rest of the argument is entirely parallel to Theorem \ref{T:1}. Note that the closures of the complementary regions of $S$ are $\pi_1$--injective, by essentially the same proof as in \cite{HRS}.
The only thing that needs to be checked carefully is why it suffices to examine only embedded disks in the complementary regions, to see that any possibly singular $n$--gons, for $n = 2, 3$, are homotopically trivial and there are no singular $1$--gons.
Suppose that we have a properly immersed disk $D$ in a complementary region, with boundary meeting the set of double curves of $S$ at most three times and $D$ is not homotopic into a
double arc, a triple point or a face of $S$. If this disk is not homotopic into the boundary of the complementary region, we can apply Dehn's lemma and the loop theorem to
replace the singular disk by an embedded one. Moreover since the boundary of the
new disk is obtained by cutting-and-pasting of the old boundary curve, we see the
new curve also meets the set of double curves of $S$ at most three times. So this case is easy to handle: it does not happen.
Next assume that the singular disk is homotopic into the boundary surface $T$ of the complementary region. (Note we include the possibility here that the complementary region is a ball
and $T$ is a $2$--sphere). Let $C$ be the boundary curve of the singular disk and let $N$
be a small regular neighbourhood of $C$ in $T$. Thus $C$ is null homotopic in $T$.
Notice that there are at most three double
arcs of $S$ crossing $N$. Now fill in the disks $D'$ bounded by any contractible boundary
component $C'$ of $N$ in $T$, to enlarge $N$ to $N'$. Since $C$ shrinks in $T$, it is easy to see
by van Kampen's theorem that $C$ also contracts in $N'$. Also if $C'$ meets the double arcs of $S$, the picture in $D'$ must be either a single arc, or three arcs meeting at a single triple point, or else we have found an embedded disk contradicting our assumption. For we only need to check that $C'$ cannot meet the double arcs in four or more points. If $C'$ did have four or more intersection points with the singular set of $S$, then one of the double arcs crossing $N$ would have both
ends on $C'$. But this is impossible, as there would be a cycle in the graph of the double arcs on $T$, which met the contractible curve $C$ once.
Finally we notice that there must be some disks $D'$ which meet the double arcs; in fact at least one point on the end of each double arc in the boundary of $N$ must be in such a disk.
For otherwise it is impossible for $C$ to shrink in $N'$, as there is an essential intersection at one point with such an arc. (This immediately rules out the possibility that $C$ crosses the double curves exactly once.)
So there are either one or two disks $D'$ with a single arc and
at most one such disk with three arcs meeting in a triple point. But the latter case means that $C$ can be shrunk into the triple point and the former means $C$ can be homotoped into the double
arc of $S$ in $N'$ by an easy examination of the possibilities.
Hence this shows that it suffices to consider only embedded disks
when requiring the properties in Theorem \ref{T:2}. This is very useful in applications in \cite{AR3}.
To show the converse, assume we have an incompressible surface which has the $4$--plane, $1$--line and triple point properties. Notice that in the paper of Hass and Scott \cite{HS}, the triple point
property is enough to show that once the number of triple points has been minimised for a least area representative of $S$, then the combinatorics of the surface are rigid. So we get that $S$ has exactly the properties as in Theorem \ref{T:2}. \end{proof}
\begin{rem}
Theorem \ref{T:2} can be viewed as a singular version of Dehn's lemma and the loop theorem.
For we have started with an assumption that there are no embedded disks of a special type with boundary on the singular surface $S$ and have concluded that $S$ is incompressible, ie $\pi_1$--injective. In \cite{ARS} other variants on this theme are given. \end{rem}
\begin{defn}
We say that $M$ is {\it almost cubed} if it is $P^2$--irreducible and contains a surface $S$ as in Theorem \ref{T:2}.
\end{defn}
It is interesting to speculate how large the class of almost cubed $3$--manifolds is. We do not know of any specific examples of compact $P^2$--irreducible $3$--manifolds with infinite $\pi_1$ which are not almost cubed.
\begin{cor}
Suppose that $M$ is a compact $P^2$--irreducible $3$--manifold with boundary, which is almost cubed, ie, there is a canonical surface $S$ in the interior of the manifold. Assume also that the complementary regions of $S$ include collars of all components of the boundary. Suppose that a
handlebody is glued onto each boundary component of $M$ to give a new manifold $M'$. If the boundary of every meridian disk, when projected onto $S$, meets the double curves at least four times, then $M'$ is almost cubed.
\end{cor}
\begin{proof} This follows immediately from Theorem \ref{T:2}, by observing that since the boundary of every meridian disk meets the double curves at
least four times, there are no non-trivial $n$--gons in the complement of
$S$ in $M'$ for $n = 2, 3$ and no $1$--gons. Hence $M'$ is almost cubed, as $S$ in $M'$ has similar
properties to $S$ in $M$.\end{proof}
\begin{rem}
Examples satisfying the conditions of the corollary are given in \cite{AR3}. In particular
such examples occur for many classes of simple alternating link complements. In \cite{ALR}, the class of well-balanced alternating links are shown to be almost cubed and so the corollary applies.
\end{rem}
\section{Hierarchies}\label{S:Hier}
Our aim in this section is to give a similar treatment of hierarchies, to that of cubings and almost cubings. The definition below is motivated by the special hierarchies used by Waldhausen in his solution of the word problem in the class of Haken $3$--manifolds \cite{Wa1}. Such hierarchies were extensively studied by Johannson in his work on the characteristic variety theorem \cite{Jo1} and also in \cite{Jo2}.
\begin{defn}
A {\it hierarchy} is a collection ${\cal S}$ of embedded compact $2$--sided surfaces
$S_1, S_2, \ldots, S_k$, which are not $2$--spheres, in a compact
$P^2$--irreducible $3$--manifold, with the following properties:
\begin{enumerate}
\item Each $S_i$ has boundary on the union of the previous $S_j$ for $j \le i-1$.
\item If an embedded polygonal disk $D$ intersects ${\cal S}$ in its boundary loop $C$ only and $C$ meets the boundary curves of ${\cal S}$ in at most $3$ points, then $D$ is homotopic into either an arc of a boundary curve, a vertex or a surface of ${\cal S}$, where $\partial D$ is mapped into ${\cal S}$ throughout the homotopy.
\item Assume an embedded polygonal disk $D$ intersects ${\cal S}$ in its boundary loop $C$ only and $C$ has only one boundary arc $\lambda$ on the surface $S_j$, where $j$ is the largest value for surfaces of ${\cal S}$ met by $C$. Then $\lambda$ is homotopic into the boundary of $S_j$ keeping the boundary points of $\lambda$ fixed. \end{enumerate} \end{defn}
\begin{rems}\begin{enumerate}\item Note that Waldhausen shows that for a Haken $3$--manifold,
one can always change a given hierarchy into one satisfying these conditions
by the following simple procedures:
\begin{itemize}\item[-] Assuming that all $S_j$ have been picked for $j<i$, then first arrange that after $M$ is cut open along all the $S_j$ to give $M_{i-1}$, the boundary of $S_i$ is chosen so that there are no triangular regions cut off between the `boundary pattern' of $M_{i-1}$ (ie, all the boundary curves of surfaces $S_j$ with $j<i$) and the boundary curves of $S_i$. This is done by minimising the intersection between the boundary pattern of $M_{i-1}$ and $\partial S_i$.
\item[-] It is simple to arrange that $S_i$ is boundary incompressible in $M_{i-1}$, by performing
boundary compressions if necessary. So there are no $2$--gons between $S_i$ and $S_j$ for any $j<i$.
\item[-] Finally one can see that there are no essential triangular disks in $M_{i-1}$, with one
boundary arc on $S_i$ and the other two arcs on surfaces $S_j$ for values $j \le i-1$,
by the boundary incompressibility of $S_i$ as in the previous step. \end{itemize}
\item Notice we are not assuming the hierarchy is complete, in the sense that the complementary
regions are cells (in the case that $M$ is closed) or cells and collars of the boundary (if $M$ is compact with incompressible boundary).
\item The next result is a converse statement, showing that the conditions above imply that the surfaces do form a hierarchy. \end{enumerate} \end{rems}
\begin{thm}\label{T:3}
Assume that ${S_1, S_2,\ldots , S_k}$ is a sequence ${\cal S}$ of embedded compact 2--sided surfaces, none of which are $2$--spheres, in a compact $P^2$--irreducible $3$--manifold $M$ with the following properties: \begin{enumerate}
\item Each $S_i$ has boundary on the union of the previous $S_j$ for $j \le i-1$.
\item If an embedded polygonal disk $D$ intersects ${\cal S}$ in its boundary loop $C$ only and $C$ meets the boundary curves of ${\cal S}$ in at most $3$ points, then $D$ is homotopic into either an arc of a boundary curve, a vertex or a surface of ${\cal S}$.
\item Assume an embedded polygonal disk $D$ intersects ${\cal S}$ in its boundary loop $C$ only and $C$ has only one boundary arc $\lambda$ on the surface $S_j$, where $j$ is the largest value for surfaces of ${\cal S}$ met by $C$. Then $\lambda$ is homotopic into the boundary of $S_j$ keeping the boundary points of $\lambda$ fixed. \end{enumerate}
Then each of the surfaces $S_i$ is incompressible and boundary incompressible in the cut open manifold $M_{i-1}$ and ${\cal S}$ forms a hierarchy for $M$ as above.
\end{thm}
\begin{proof}
The argument is very similar to those for Theorems \ref{T:1} and \ref{T:2} above, so we outline the modifications needed.
Suppose there is a compressing or boundary compressing disk $D$ for one of the surfaces $S_i$.
We may assume that all the previous $S_j$ are incompressible and boundary incompressible by
induction. Consider $G$, the graph of intersection of $D$ with the previous $S_j$, pulled back to the domain of $D$. Then $G$ is a degree three graph; however at each vertex there is a `$T$' pattern
as one of the incident edges lies in some $S_j$ and the other two in the same surface
$S_k$ for some $k<j$. We argue
that the graph $G$ can be simplified by moves similar to the ones in Theorems \ref{T:1} and \ref{T:2}.
First of all, note that by an innermost region argument, there must be either an innermost $0$--gon, $2$--gon or a triangular component of the closure of the complement of $G$. For we can cut up $D$ first using the arcs of intersection with $S_1$, then $S_2$ etc. Using the first collection of arcs, there is clearly an outermost $2$--gon region in $D$. Next the second collection of arcs is either disjoint
from this $2$--gon or there is an outermost $3$--gon. At each stage, there must always be an outermost $2$--gon or $3$--gon. (Of course any simple closed curves of intersection just
start smaller disks which can be followed by the same method. If such a loop is isolated, one gets
an innermost $0$--gon which is readily eliminated, by assumption).
By supposition, such a $2$--gon or $3$--gon can be homotoped into either a boundary curve or
into a boundary vertex of some $S_j$. We follow this by slightly deforming the map to regain general position. The complexity of the graph is defined
by listing lexicographically the numbers of vertices with a particular label. The label of each vertex is given by the subscript of the first surface of $\cal S$ containing the vertex. (cf \cite{Jo2} for a good discussion of this lexicographic complexity). The homotopy above can be readily seen to reduce the complexity of the graph.
Note that the hypotheses only refer to embedded
$n$--gons but as in the proof of Theorem \ref{T:2} it is easy to show that if there are only trivial embedded $n$--gons for $n<4$, then the same is true for immersed $n$--gons, using Dehn's lemma and the loop theorem. Similarly, the hypothesis (3) of the theorem can be converted to a statement about embedded disks, using Dehn's lemma and the loop theorem, since cutting open the manifold using the previous surfaces, converts the polygon into a $2$--gon. This completes the proof of Theorem \ref{T:3}.\end{proof}
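The lexicographic complexity used in the proof can be written out as follows (our notation; a sketch of the measure discussed in \cite{Jo2}):

```latex
% Label each vertex of $G$ by the smallest index $j$ such that the
% vertex lies on $S_j$, and set
\[
c(G) \;=\; \bigl(\, n_1(G),\; n_2(G),\; \ldots,\; n_k(G) \,\bigr),
\]
% where $n_j(G)$ is the number of vertices of $G$ with label $j$.
% Complexities are compared lexicographically; each of the homotopies
% above can be checked to strictly decrease $c(G)$, so the
% simplification process terminates.
```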
\begin{exam}
Consider the Borromean rings complement $M$ in the $3$--sphere. We can find such a hierarchy easily as follows:
Start with a peripheral torus as $S_1$, ie, one of the boundary tori of the complement $M$. Next choose $S_2$ as an essential embedded disk with boundary on $S_1$, with a tube attached to
avoid the intersections with one of the other components $C_1$ of $B$, the Borromean rings. Now cut $M$ open along $S_1$ and $S_2$ to form a collar of $S_1$ and another component $M_2$.
It is easy to see that $M_2$
is a genus $2$
handlebody and $B$ has two components $C_1$ and $C_2$ inside this, forming a sublink $B'$. Moreover
these two loops are generators of the fundamental group of $M_2$, since they are dual (intersect in single points) to two disjoint meridian disks for $M_2$. Finally the loops are readily seen to be linked once in $M_2$.
Now we can cut $M_2$ open using a separating meridian disk with a handle attached, so
as to avoid $B'$. This surface $S_3$ can also be viewed as another spanning disk with boundary on $S_1$ having a tube attached to miss the other component $C_2$ of $B'$. So
there is a nice symmetry between $S_2$ and $S_3$. Finally we observe that when $M$ is cut
open along $S_1$, $S_2$ and $S_3$ to form $M_3$, this is a pair of genus two handlebodies, each of
which contains one of $C_1$ and $C_2$ as a core curve for one of the handles, plus the collar of $S_1$. So we can
choose $S_4$ and $S_5$ to be non-separating meridian disks for these handlebodies, disjoint
from $B$, so that $M_5$ consists of collars of the three boundary tori.
Notice that the `boundary patterns' on each of these tori, ie, the boundary curves of
the surfaces in the hierarchies, consist of two contractible simple closed curves and four arcs, two of which have boundary on each of these loops. The pairs of arcs are `parallel',
in that the whole boundary pattern divides each torus into six regions, two $0$--gon disks, two $4$--gon disks and two annuli (cf Figure 4 in \cite{AR2}).
As a corollary then to Theorem \ref{T:3}, we observe that any non-trivial surgery on each of the two components $C_1$ and $C_2$ of $B$ gives meridian disks $S_6$ and $S_7$ which
meet the boundary pattern
at least four times. Hence these surfaces form a hierarchy for the surgered manifold and so all
such surgeries on $C_1$ and $C_2$ have the result
that the peripheral torus $S_1$ remains incompressible.
\end{exam}
\begin{rems}\begin{enumerate}\item This can be proved by other methods but the above argument is particularly revealing,
not using any hyperbolic geometry.
\item A similar example of the Whitehead link is discussed in \cite{AR2} and it is interesting to note the boundary pattern found there (Figure 4) is exactly the same as here.
\item A very significant problem is to try to use the above characterisation of hierarchies
to find some type of polyhedral metric of non-positive curvature, similar to cubings. This would give a polyhedral approach to Thurston's uniformisation theorem for Haken manifolds \cite{Th}. \end{enumerate} \end{rems}
\section{Topological rigidity of cubed manifolds}\label{S:Topo}
In this section we give a different approach to the result of Hass and Scott \cite{HS} that if a closed $P^2$--irreducible $3$--manifold is homotopy equivalent to a cubed $3$--manifold
then the manifolds are homeomorphic. This can be viewed as a polyhedral version of Mostow
rigidity, which says that complete hyperbolic manifolds of finite volume which are
homotopy equivalent are isometric, in dimensions greater than $2$. We refer to this
as topological rigidity of cubed $3$--manifolds. Our aim here is to show that this rigidity theorem can be proved without resorting to the least area methods of Freedman, Hass and Scott \cite{FHS}; it can instead be obtained by a direct argument closer to Waldhausen's original proof of rigidity for Haken $3$--manifolds \cite{Wa2}. Note that various generalisations of Hass and Scott's theorem have been obtained recently by Paterson \cite{P1}, \cite{P2} using different methods.
\begin{thm}\label{T:4} Suppose that $M$ is a compact $3$--manifold with a cubing of non-positive curvature and $M'$ is a closed $P^2$--irreducible $3$--manifold which is homotopy equivalent to $M$. Then $M$ and $M'$ are homeomorphic. \end{thm}
\begin{proof} Note that the cases where $M$ has incompressible boundary or is non-orientable are not so interesting, as then $M$ and $M'$ are Haken and the result follows by Waldhausen's theorem \cite{Wa2}. So we restrict attention to the case where $M$ and $M'$ are closed and orientable.
Our method is a mixture of those of \cite{HS} and \cite{Wa2} and we indicate the steps, which are all quite standard techniques.
\noindent{\bf Step 1}\qua
By Theorem \ref{T:1}, if $S$ is the canonical surface for the cubed manifold $M$ and $M_S$
is the cover corresponding to the fundamental group of $S$, then $S$ lifts to an embedding denoted by $S$ again in $M_S$. For by Theorem \ref{T:1}, all the lifts of $S$ to the universal covering ${\cal M}$ are embedded planes and $\pi_1(S)$ stabilises one of these planes
with quotient the required lift of $S$. Let $f\co M' \rightarrow M$ be the homotopy equivalence and assume that $f$ has been perturbed
to be transverse to $S$. Denote the immersed surface $f^{-1}(S)$ by $S'$. Notice that
$f$ lifts to a map ${\tilde f}\co {\cal M'} \rightarrow {\cal M}$ between universal covers and so all the
lifts of $S'$ to ${\cal M'}$ are properly embedded non-compact surfaces. In fact, if $M'_S$ is the induced cover of $M'$ corresponding
to $M_S$, ie, with fundamental group projecting to $f^{-1}(\pi_1(S))$, then there is an
embedded lift, denoted
$S'$ again, of $S'$ to $M'_S$, which is the inverse image of the embedded lift of $S$.
The first step is to surger $S'$ in $M'_S$ to get a copy of $S$ as the result. We will be able to keep some of the nice properties of $S$ by this procedure, especially the $4$--plane property.
This will enable us to carry out the remainder of the argument of Hass and Scott quite
easily. For convenience, we will suppose that $S$ is orientable. The non-orientable case is not
difficult to derive from this; we leave the details to the reader. (All that is necessary is to
pass to a $2$--fold cover of $M_S$ and $M'_S$, where $S$ lifts to its orientable double covering surface.)
Since $f$ is a homotopy equivalence, so is the lifted map $f_S\co M'_S \rightarrow M_S$. Hence
if $S'$ is not
homeomorphic to $S$, then the induced map on fundamental groups of the inclusion
of $S'$ into $M'_S$ has non-trivial kernel. So we can compress $S'$ by Dehn's lemma and the loop theorem. On the other hand, the ends of $M_S$ pull back to ends of $M'_S$ and a properly
embedded line going between the ends of $M_S$ meeting $S$ once, pulls back to a similar line in $M'_S$ for $S'$. Hence $S'$
represents a non-trivial
homology class in $M'_S$ and so cannot be completely compressed. We conclude that $S'$
compresses to
an incompressible surface $S''$ separating the ends of $M'_S$.
Now we claim that any component $S^*$ of $S''$ which is homologically non-trivial,
must be homeomorphic
to $S$. Also the inclusion of $S^*$ induces an isomorphism on fundamental groups to $M'_S$.
The argument
is in \cite{FHS}, for example, but we repeat it for the benefit of the reader. The homotopy equivalence between $M'_S$ and $S$ induces a map $f\co S^* \rightarrow S$ which is non-zero on second homology. So $f$ is homotopic to a finite sheeted covering. Lifting $S^*$ to the corresponding finite sheeted cover of $M'_S$, we get a number of copies of $S^*$, if the map
$S^* \rightarrow M'_S$ is not a homotopy equivalence. Now the different lifts of $S^*$
must all separate the two ends of the covering of $M'_S$ and so are all homologous. (Note that, as the second homology is cyclic, there are exactly two ends.) But then any compact region between these lifts projects onto $M'_S$ and so $M'_S$ is actually compact, unless $S$ is non-orientable, which has been ruled out.
Finally to complete this step, we claim that if $S^*$ is projected to $M'$ and then lifted to
{$\cal M'$}, then the result is a collection of embedded planes ${\cal P^*}$ satisfying the $4$--plane property.
Notice first of all that all the lifts ${\cal P'}$ of $S'$ to ${\cal M'}$ satisfy the `$4$--surface' property. In other words,
if any subcollection of four components of ${\cal P'}$ are chosen,
then there must be a disjoint pair. This is evident
as $S'$ is the pull-back of $S$ and so ${\cal P'}$ is the pull-back of ${\cal P}$.
Then the $4$--plane property clearly pulls back to the `$4$--surface' property as required.
Now we claim that as $S'$ is surgered and then a component $S^*$ is chosen, this can be done so that the $4$--surface property remains valid. For consider some disk $D$ used to surger the embedded lift $S'$ in $M'_S$. By projecting to $M'$ and lifting to ${\cal M'}$,
we have a family of embedded disks surgering the embedded surfaces ${\cal P'}$.
It is sufficient to show that one such $D$ can be selected so as to miss all the surfaces $P''$ in
${\cal P'}$ which are disjoint from a given surface $P'$ containing the boundary of $D$, as the picture in the universal covering is invariant under the action of the covering translation group. This is similar to the argument in \cite{HRS}. First of all, if $D$ meets any such surface $P''$ in a loop which is non-contractible on $P''$, we can replace $D$ by the subdisk
bounded by this loop. This
subdisk has fewer curves of intersection with ${\cal P'}$ than the original. Of course the subdisk may not be disjoint from its translates under the stabiliser of $P''$. However we can fix this up at the end of the argument. We relabel $P''$ by $P'$ if this step is necessary.
Suppose now that $D$ meets any surface $P''$ disjoint from $P'$ in loops $C$
which are contractible on $P''$. Choose an innermost such loop. We would like to do a simple disk swap and reduce the number of surfaces $P''$ disjoint from $P'$ met by $D$. Note that we do not care if the number of loops of intersection goes up during this procedure.
However we must be careful that no new planes are intersected by $D$. So suppose that $C$ bounds
a disk $D''$ on $P''$ met by some plane $P_1$ which is disjoint from $P'$, but $D$
does not already meet $P_1$. Then clearly $P_1$ must meet $D''$ in a simple closed
curve in the interior of $D''$. Now we can use the technique of Hass and Scott to eliminate
all such intersections in $P'$. For by the $4$--surface property, either such simple closed curves
are isolated (ie not met by other surfaces) or there are disjoint embedded arcs where
the curves of intersection of the surfaces cross an innermost such loop. But then we can start with an innermost such $2$--gon between such arcs and by simple $2$--gon moves, push all the arcs equivariantly outside the loop. At each stage we decrease the number of triple points and eventually can eliminate the contractible double curves.
The conclusion is that eventually we can pull $D$ off all the surfaces disjoint from $P'$.
Finally to fix up the disk $D$ relative to the action of the stabiliser of $P'$, project $P'$ to the compact surface in $M'_S$. Now we see that $D$ may project to an immersed disk, but all the lifts of this immersed disk with boundary on $P'$ are embedded and disjoint from the surfaces in ${\cal P'}$ which miss $P'$. We can now apply Dehn's lemma and the loop theorem to replace the immersed disk by an embedded one in $M'_S$. This is obtained by cutting and pasting, so it follows immediately that any lift of the new disk with boundary on $P'$ misses all the surfaces which are disjoint from $P'$ as desired. This completes the first step of the argument.
\noindent{\bf Step 2}\qua The remainder of the argument follows that of Hass and Scott closely. By Step 1 we have a component $S^*$ of the surgered surface which gives a subgroup of $\pi_1(M')$ mapped by $f$ isomorphically to the subgroup of $\pi_1(M)$ corresponding to $\pi_1(S)$. Also $S^*$ is embedded in the cover $M'_S$
and all the lifts
have the $4$--plane property in {$\cal M'$}.
All that remains is to use Hass and Scott's triple point cancellation technique to get rid of
redundant triple points and simple closed curves of intersection between the planes over $S^*$. Eventually we get a new surface, again denoted by $S^*$, which is changed by an isotopy in $M'_S$
and has
the $1$--line and triple point properties. It is easy then to conclude that an equivariant homeomorphism between {$\cal M'$} and {$\cal M$} can be constructed.
Note that this case is the easy one in Hass and Scott's paper, as the triple point property means that triangular prism regions cannot occur and so no crushing of such regions is necessary.\end{proof}
\begin{cor}
If $f\co M' \rightarrow M$ is a homotopy equivalence and $M$ has an immersed incompressible surface $S$ satisfying the $4$--plane property, then the method of Theorem \ref{T:4} shows that $M'$ also has an immersed incompressible surface $S'$ satisfying the $4$--plane property and $f$ induces an isomorphism from
$\pi_1(S')$ to $\pi_1(S)$.\endproof \end{cor}
\begin{rem} This can be shown to be true by least area methods also, but it is interesting to have alternative combinatorial arguments. Least area techniques give the result also for the $k$--plane property, for all $k$. However there is no direct information about how the surface is pulled back, as in the above argument.
\end{rem}
\Addresses\recd
\end{document} |
\begin{document}
\title{\Large
\textbf{Dealing with Interaction Between Bipolar Multiple\\
Criteria Preferences in PROMETHEE Methods}
}
\author{ \hspace{0,1cm} Salvatore Corrente \thanks{Department of Economics and Business, University of Catania, Corso Italia 55, 95129 Catania, Italy, e-mails: \texttt{salvatore.corrente\[email protected]}, \texttt{salgreco\[email protected]}}, Jos\'e Rui Figueira \footnote{CEG-IST, Instituto Superior T\'{e}cnico, Technical University of Lisbon, Av. Rovisco Pais, 1049-001 Lisboa, Portugal, e-mail: \texttt{[email protected]}}, \hspace{0,1cm} Salvatore Greco $^*$}
\date{} \maketitle
\addcontentsline{toc}{section}{Abstract}
\begin{resumeT}
\textbf{Abstract:} \noindent In this paper we extend the PROMETHEE methods to the case of interacting criteria on a bipolar scale, introducing the bipolar PROMETHEE method based on the bipolar Choquet integral. In order to elicit parameters compatible with preference information provided by the Decision Maker (DM), we propose to apply Robust Ordinal Regression (ROR). ROR takes into account simultaneously all the sets of parameters compatible with the preference information provided by the DM, by considering a necessary and a possible preference relation.
{\bf Keywords}: {PROMETHEE methods, Interaction between criteria, Bipolar Choquet integral.} \end{resumeT}
\pagenumbering{arabic}
\section{Introduction}
In many decision making problems (for a survey on Multiple Criteria Decision Analysis (MCDA) see \cite{FigGreEhr}), alternatives are evaluated with respect to a set of criteria that are not mutually preferentially independent (see \cite{wakker1989additive}). In fact, in most cases, the criteria present some form of positive (synergy) or negative (redundancy) interaction. For example, if one likes sports cars, maximum speed and acceleration are very important criteria. However, since in general fast cars also have good acceleration, giving a high weight to both criteria can overvalue some cars. Thus, it seems reasonable to give maximum speed and acceleration considered together a weight smaller than the sum of the two weights assigned to these criteria when considered separately. In this case there is redundancy between the criteria of maximum speed and acceleration. On the contrary, there is a synergy effect between maximum speed and price because, in general, fast cars are also expensive and, therefore, a car which is good on both criteria is particularly appreciated. In this case, it seems reasonable to give maximum speed and price considered together a weight greater than the sum of the two weights assigned to these criteria when considered separately. In these cases, the aggregation of the evaluations is done by using non-additive integrals, the best known of which are the Choquet integral \cite{Choquet53} and the Sugeno integral \cite{sugeno1974theory} (for a comprehensive survey on the use of non-additive integrals in MCDA see \cite{Grabisch1996,Grabisch_book_greco,Grabisch2008}).\\ In many cases, we also have to take into account that the importance of criteria may depend on the criteria opposed to them. For example, a bad evaluation on aesthetics reduces the importance of maximum speed, so the weight of maximum speed should be reduced when there is a negative evaluation on aesthetics.
In this case, we have an antagonism effect between maximum speed and aesthetics. \\ Those types of interactions between criteria have already been taken into consideration in the ELECTRE methods \cite{FGR}. In this paper, we deal with the same problem using the bipolar Choquet integral \cite{GL1,GL2} applied to the PROMETHEE I and II methods \cite{BransMa04,BransVi85}.
This article extends the short paper published by the authors in \cite{Corrente2012a}: we add the description of the bipolar PROMETHEE I method, the proofs of all theorems presented in \cite{Corrente2012a}, and a didactic example in which we apply the bipolar PROMETHEE methods together with Robust Ordinal Regression (ROR) \cite{greco2010robust}, a family of MCDA methods that takes into account simultaneously all the sets of preference parameters compatible with the preference information provided by the Decision Maker (DM), using a necessary and a possible preference relation. \\ The paper is organized as follows. In the next section we recall the basic concepts of the classical PROMETHEE methods; in section 3 we introduce the bipolar PROMETHEE methods; the elicitation of preference information permitting to fix the values of the preference parameters of the model (essentially the bicapacities of the bipolar Choquet integral) is presented in section 4; in section 5 we apply ROR to the bipolar PROMETHEE methods; a didactic example is presented in section 6, while the last section provides some conclusions and lines for future research.
\section{The classical PROMETHEE methods}\label{PROM}
Let us consider a set of actions or alternatives $A=\left\{a,b,c,\ldots\right\}$ evaluated with respect to a set of criteria $G=\left\{g_{1},\ldots,g_{n}\right\}$, where $g_{j}:A\rightarrow\mathbb{R}$, $j\in{\cal J}=\left\{1,\ldots,n\right\}$ and $|A|=m$. PROMETHEE \cite{BransMa04,BransVi85} is a well-known family of MCDA methods, the best known of which are PROMETHEE I and II, that aggregate the preference information of a DM through an outranking relation. Considering for each criterion $g_{j}$ a weight $w_{j}$ (representing the importance of criterion $g_{j}$ within the family of criteria $G$), an indifference threshold $q_{j}$ (the largest difference $d_{j}(a,b)=g_{j}(a)-g_{j}(b)$ compatible with indifference between alternatives $a$ and $b$), and a preference threshold $p_{j}$ (the minimum difference $d_{j}(a,b)$ compatible with the preference of $a$ over $b$), PROMETHEE methods (from now on, when we speak of PROMETHEE methods, we refer to PROMETHEE I and II) build a non-decreasing function $P_{j}(a,b)$ of $d_{j}(a,b)$, whose formulation (see \cite{BransMa04} for other formulations) can be stated as follows $$ P_{j}(a,b)= \left\{ \begin{array}{lll} 0 & \mbox{if} & d_{j}(a,b)\leq q_{j} \\ \frac{d_{j}(a,b)-q_j}{p_{j}-q_{j}} & \mbox{if} & q_{j}<d_{j}(a,b)<p_{j} \\ 1 & \mbox{if} & d_{j}(a,b)\geq p_{j} \end{array} \right. $$ \noindent The greater the value of $P_{j}(a,b)$, the greater the preference of $a$ over $b$ on criterion $g_j$. For each ordered pair of alternatives $(a,b)\in A\times A,$ PROMETHEE methods compute the value $\pi(a,b)=\sum_{j\in{\cal J}}w_{j}P_{j}(a,b)$, representing how much alternative $a$ is preferred to alternative $b$ taking into account the whole set of criteria. It can assume values between $0$ and $1$ and, obviously, the greater the value of $\pi(a,b)$, the greater the preference of $a$ over $b$.
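The preference function $P_j$ and the aggregated index $\pi(a,b)$ above are straightforward to compute. The following Python sketch mirrors the formulas (the function names are ours, purely illustrative; weights are assumed normalized so that $\pi(a,b)\in[0,1]$):

```python
def preference_degree(d, q, p):
    """Piecewise-linear preference function P_j: 0 below the indifference
    threshold q, 1 above the preference threshold p, linear in between."""
    if d <= q:
        return 0.0
    if d >= p:
        return 1.0
    return (d - q) / (p - q)

def aggregated_preference(a, b, weights, thresholds):
    """pi(a, b) = sum_j w_j * P_j(a, b) for evaluation vectors a, b,
    normalized weights w_j and per-criterion thresholds (q_j, p_j)."""
    return sum(w * preference_degree(ga - gb, q, p)
               for w, ga, gb, (q, p) in zip(weights, a, b, thresholds))
```

For instance, with two criteria, weights $(0.6,0.4)$ and thresholds $(q_j,p_j)=(1,2)$ on both, an advantage of $2$ on the first criterion alone yields $\pi(a,b)=0.6$.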
\\ In order to compare an alternative $a$ with all the other alternatives of the set $A$, PROMETHEE methods compute the negative and the positive outranking flows of $a$ in the following way: $$ \phi^{-}(a)=\frac{1}{m-1}\sum_{b\in A\setminus\left\{a\right\}}\pi(b,a) \;\;\;\;\;\;\mbox{and}\;\;\;\;\;\; \phi^{+}(a)=\frac{1}{m-1}\sum_{b\in A\setminus\left\{a\right\}}\pi(a,b). $$ \noindent These flows represent, respectively, how much the alternatives of $A\setminus\left\{a\right\}$ are preferred to $a$ and how much $a$ is preferred to the alternatives of $A\setminus\left\{a\right\}$. For each alternative $a\in A$, PROMETHEE II also computes the net flow $\phi(a)=\phi^{+}(a)-\phi^{-}(a)$. On the basis of the positive and the negative flows, PROMETHEE I provides a partial ranking on the set of alternatives $A$, building a preference (${\cal P}^{I}$), an indifference (${\cal I}^{I}$) and an incomparability (${\cal R}^{I}$) relation. In particular:
$$ \left\{ \begin{array}{lll} a{\cal P}^I b & \mbox{iff} & \left\{\begin{array}{l} \Phi^{+}(a)\geq\Phi^{+}(b),\\ \Phi^{-}(a)\leq\Phi^{-}(b),\\ \Phi^{+}(a)-\Phi^{-}(a)>\Phi^{+}(b)-\Phi^{-}(b)\end{array} \right.\\[3mm] a{\cal I}^I b & \mbox{iff} & \left\{\begin{array}{l} \Phi^{+}(a)=\Phi^{+}(b),\\ \Phi^{-}(a)=\Phi^{-}(b)\end{array} \right.\\[3mm] a{\cal R}^{I}b & \mbox{otherwise} & \\ \end{array} \right. $$
On the basis of the net flows, instead, the PROMETHEE II method provides a complete ranking on the set of alternatives $A$ defining, in a natural way, a preference (${\cal P}^{II}$) and an indifference (${\cal I}^{II}$) relation, for which $a{\cal P}^{II}b$ iff $\Phi(a)>\Phi(b)$, while $a{\cal I}^{II}b$ iff $\Phi(a)=\Phi(b)$.
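The flows and the PROMETHEE I pairwise comparison can be sketched as follows (a minimal Python illustration; representing $\pi$ as a dict over ordered pairs of alternatives is our own assumption):

```python
def outranking_flows(pi):
    """Positive, negative and net flows from a table pi[(a, b)] of
    aggregated preference degrees over all ordered pairs."""
    alts = sorted({a for a, _ in pi})
    m = len(alts)
    fp = {a: sum(pi[(a, b)] for b in alts if b != a) / (m - 1) for a in alts}
    fm = {a: sum(pi[(b, a)] for b in alts if b != a) / (m - 1) for a in alts}
    net = {a: fp[a] - fm[a] for a in alts}
    return fp, fm, net

def promethee1(a, b, fp, fm):
    """PROMETHEE I pairwise relation: 'P' (a preferred to b), 'P-'
    (b preferred to a), 'I' (indifferent) or 'R' (incomparable)."""
    def pref(x, y):
        return (fp[x] >= fp[y] and fm[x] <= fm[y]
                and fp[x] - fm[x] > fp[y] - fm[y])
    if pref(a, b):
        return 'P'
    if pref(b, a):
        return 'P-'
    if fp[a] == fp[b] and fm[a] == fm[b]:
        return 'I'
    return 'R'
```

Since every $\pi(a,b)$ enters once positively and once negatively, the net flows always sum to zero.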
\section{The bipolar PROMETHEE methods}
In order to extend the classical PROMETHEE methods to the bipolar framework, we define, for each criterion $g_j$, $j\in{\cal J}$, the bipolar preference function $P_j^B: A \times A \rightarrow [-1,1]$ in the following way:
\begin{equation}\label{equat}
P_{j}^{B}(a, b) =P_j(a,b) - P_j(b,a)=
\left\{
\begin{array}{lll}
P_j(a,b) & \mbox{if} & P_j(a,b) > 0 \\[2mm]
- P_j(b,a) & \mbox{if} & P_j(a,b) = 0
\end{array}
\right.
\end{equation}
\noindent It is straightforward to prove that $P_{j}^{B}(a,b)=-P_{j}^{B}(b,a)$ for all $j\in{\cal J}$ and for all pairs $(a,b)\in A\times A.$
In this section we propose to aggregate the bipolar vector $P^{B}(a,b)=\left[P_{1}^{B}(a,b),\ldots,P_{n}^{B}(a,b)\right]$ through the bipolar Choquet integral. \\ The bipolar Choquet integral is based on a bicapacity \cite{GL1,GL2}, that is, a function $\hat\mu: P({\cal J})\rightarrow[-1,1]$, where $P({\cal J})=\left\{(C,D): C,D\subseteq {\cal J} \mbox{ and } C \cap D=\emptyset \right\}$, such that \begin{itemize}
\item $\hat\mu(\emptyset, {\cal J})=-1, \hat\mu({\cal J},\emptyset)=1,$ $\hat\mu(\emptyset, \emptyset)=0$ (boundary conditions),
\item for all $(C,D),(E,F)\in P({\cal J})$, if $C\subseteq E$ and $D\supseteq F$, then $\hat\mu(C,D)\leq\hat\mu(E,F)$ (monotonicity condition). \end{itemize}
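For a small criteria set, the boundary and monotonicity conditions can be checked by brute force, since $P({\cal J})$ is finite: it has $3^{|{\cal J}|}$ elements, as each criterion belongs to $C$, to $D$, or to neither. The Python sketch below uses hypothetical names of our own; an additive bicapacity built from normalized weights serves as a test case:

```python
from itertools import product

def disjoint_pairs(J):
    """Enumerate P(J) = {(C, D) : C, D subsets of J, C and D disjoint}."""
    J = sorted(J)
    for assign in product((0, 1, 2), repeat=len(J)):
        yield (frozenset(j for j, s in zip(J, assign) if s == 1),
               frozenset(j for j, s in zip(J, assign) if s == 2))

def is_bicapacity(mu_hat, J):
    """Check the boundary conditions and the monotonicity condition of a
    candidate bicapacity mu_hat, given as a callable on frozenset pairs."""
    J = frozenset(J)
    if (mu_hat(frozenset(), J) != -1 or mu_hat(J, frozenset()) != 1
            or mu_hat(frozenset(), frozenset()) != 0):
        return False
    pairs = list(disjoint_pairs(J))
    # monotonicity: C subset of E and D superset of F implies mu(C,D) <= mu(E,F)
    return all(mu_hat(C, D) <= mu_hat(E, F)
               for C, D in pairs for E, F in pairs
               if C <= E and F <= D)
```

For instance, with weights summing to $1$, the additive bicapacity $\hat\mu(C,D)=\sum_{j\in C}w_j-\sum_{j\in D}w_j$ passes the check.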
\noindent According to \cite{GF2003,GrecoMatarazzoSlowinski02}, we consider the following expression for a bicapacity $\hat\mu$:
\begin{equation}\label{bicapacityFG} \hat\mu(C,D)=\mu^{+}(C,D)-\mu^{-}(C,D), \mbox{\;\;\;for all } (C,D)\in P({\cal J}) \end{equation}
\noindent where $\mu^{+},\mu^{-}:P({\cal J})\rightarrow\left[0,1\right]$ such that:
\begin{equation}\label{bound_plus} \mu^{+}({\cal J},\emptyset)=1, \;\;\;\;\;\; \mu^{+}(\emptyset,B)=0, \;\forall B\subseteq{\cal J}, \end{equation}
\begin{equation}\label{bound_minus} \mu^{-}(\emptyset,{\cal J})=1, \;\;\;\;\;\; \mu^{-}(B,\emptyset)=0, \;\forall B\subseteq{\cal J}, \end{equation}
\begin{equation}\label{M_miu_p} \left. \begin{array}{l} \mu^+(C,D) \leq \mu^+(C\cup\left\{j\right\},D),\;\;\;\; \forall (C\cup\left\{j\right\},D)\in P({\cal J}), \;\forall j\in{\cal J},\\ \mu^+(C,D) \geq \mu^+(C,D\cup\left\{j\right\}),\;\;\;\; \forall (C,D\cup\left\{j\right\})\in P({\cal J}), \;\forall j\in{\cal J}\\ \end{array} \right\} \end{equation}
\begin{equation}\label{M_miu_m} \left. \begin{array}{l} \mu^-(C,D) \leq \mu^-(C,D\cup\left\{j\right\}),\;\;\;\; \forall (C,D\cup\left\{j\right\})\in P({\cal J}), \;\forall j\in{\cal J},\\ \mu^-(C,D) \geq \mu^-(C\cup\left\{j\right\},D),\;\;\;\; \forall (C\cup\left\{j\right\},D)\in P({\cal J}), \;\forall j\in{\cal J}\\ \end{array} \right\} \end{equation}
\noindent Let us observe that (\ref{M_miu_p}) are equivalent to the constraint
$$ \mu^+(C,D)\leq\mu^+(E,F), \;\;\;\mbox{for all} \;\;\;(C,D), (E,F)\in P({\cal J})\;\;\; \mbox{such that} \;\;\;C\subseteq E \;\;\mbox{and}\;\;D\supseteq F,$$
\noindent while (\ref{M_miu_m}) are equivalent to the constraint
$$ \mu^-(C,D)\leq\mu^-(E,F), \;\;\;\mbox{for all} \;\;\;(C,D), (E,F)\in P({\cal J})\;\;\; \mbox{such that} \;\;\;C\supseteq E \;\;\mbox{and}\;\;D\subseteq F.$$
The interpretation of the functions $\mu^{+}$ and $\mu^{-}$ is the following. Given the pair $(a,b)\in A\times A$, let us consider $(C,D)\in P({\cal J})$ where $C$ is the set of criteria expressing a preference of $a$ over $b$ and $D$ the set of criteria expressing a preference of $b$ over $a$. In this situation, $\mu^{+}(C,D)$ represents the importance of criteria from $C$ when criteria from $D$ are opposing them, and $\mu^{-}(C,D)$ represents the importance of criteria from $D$ opposing $C$. Consequently, $\hat\mu(C,D)$ represents the balance of the importance of $C$ supporting $a$ and $D$ supporting $b$.
Given $(a,b)\in A\times A$, the bipolar Choquet integral of $P^B(a,b)$ with respect to the bicapacity $\hat\mu$ can be written as follows
$$Ch^{B}(P^{B}(a, b), \hat{\mu})=\int_0^1\hat\mu(\{j\in{\cal J}:P_{j}^{B}(a,b)>t\},\{j\in{\cal J}:P_{j}^{B}(a,b)<-t\})dt,$$
\noindent while the bipolar comprehensive positive preference of $a$ over $b$ and the comprehensive negative preference of $a$ over $b$ with respect to the bicapacity $\hat\mu$ are respectively:
$$Ch^{B+}(P^{B}(a, b), \hat{\mu})=\int_0^1\mu^{+}(\{j\in{\cal J}:P_{j}^{B}(a,b)>t\},\{j\in{\cal J}:P_{j}^{B}(a,b)<-t\})dt,$$
$$Ch^{B-}(P^{B}(a, b), \hat{\mu})=\int_0^1\mu^{-}(\{j\in{\cal J}:P_{j}^{B}(a,b)>t\},\{j\in{\cal J}:P_{j}^{B}(a,b)<-t\})dt,$$
\noindent where $\mu^{+}$ and $\mu^{-}$ have been defined before.
From an operational point of view, the bipolar aggregation of $P^{B}(a,b)$ can be computed as follows: the absolute values of the preferences $P_{j}^{B}(a,b)$, $j \in {\cal J}$, are re-ordered in a non-decreasing way: $\vert P_{(1)}^{B}(a,b) \vert \leq \vert P_{(2)}^{B}(a,b) \vert \leq \ldots \leq \vert P_{(j)}^{B}(a,b) \vert \leq \ldots \leq \vert P_{(n)}^{B}(a,b) \vert$.
\noindent The bipolar Choquet integral of $P^B(a,b)$ with respect to the bicapacity $\hat\mu$ can now be determined:
\begin{equation}\label{bipolarcomprehensive}
Ch^{B}(P^{B}(a, b), \hat{\mu}) = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(a, b) \vert
\Big[ \hat{\mu}\left(C_{(j)}(a,b), D_{(j)}(a,b)\right) - \hat{\mu}\left(C_{(j+1)}(a,b), D_{(j+1)}(a,b)\right) \Big]
\end{equation}
\noindent where $P^B(a, b) = \Big[P_{j}^{B}(a,b), \; j \in{\cal{J}}\Big]$, ${\cal{J}}^{>} = \{ j \in {\cal{J}} \; : \; \vert P_{(j)}^{B}(a,b) \vert > 0\}$, $C_{(j)}(a,b) = \{ i \in {\cal{J}}^{>} \; : \; P_{i}^{B}(a,b) \geq \vert P_{(j)}^{B}(a, b) \vert \}$, $D_{(j)}(a,b) = \{ i \in {\cal{J}}^{>} \; : \; - P_{i}^{B}(a,b) \geq \vert P_{(j)}^{B}(a, b) \vert\}$ and $C_{(n+1)}(a,b)=D_{(n+1)}(a,b)=\emptyset$.\\ We give also the following definitions:
\begin{equation}\label{bipolarpositive}
Ch^{B+}(P^{B}(a, b), {\mu}^{+}) = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(a, b) \vert
\Big[ {\mu}^{+}\left(C_{(j)}(a,b), D_{(j)}(a,b)\right) - {\mu}^{+}\left(C_{(j+1)}(a,b), D_{(j+1)}(a,b)\right) \Big],
\end{equation}
\begin{equation}\label{bipolarnegative}
Ch^{B-}(P^{B}(a, b), {\mu}^{-}) = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(a, b) \vert
\Big[ {\mu}^{-}\left(C_{(j)}(a,b), D_{(j)}(a,b)\right) - {\mu}^{-}\left(C_{(j+1)}(a,b), D_{(j+1)}(a,b)\right) \Big].
\end{equation}
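Formula (\ref{bipolarcomprehensive}) can be sketched in Python as follows (our own naming; \texttt{bicap} is any bicapacity supplied as a callable on pairs of disjoint frozensets). For an additive bicapacity built from weights, the integral reduces to the weighted sum $\sum_{j}w_jP^{B}_{j}(a,b)$, which provides a convenient sanity check:

```python
def bipolar_choquet(PB, bicap):
    """Discrete bipolar Choquet integral of PB = [P_j^B(a,b)] with respect
    to the bicapacity bicap(C, D): sort |P_j^B| non-decreasingly and
    telescope over the level sets C_(j), D_(j)."""
    Jpos = [j for j, v in enumerate(PB) if abs(v) > 0]
    order = sorted(Jpos, key=lambda j: abs(PB[j]))  # non-decreasing |P^B|

    def level_sets(t):
        # C_(j): criteria supporting a at level t; D_(j): criteria opposing
        return (frozenset(i for i in Jpos if PB[i] >= t),
                frozenset(i for i in Jpos if -PB[i] >= t))

    total = 0.0
    for r, j in enumerate(order):
        t = abs(PB[j])
        C, D = level_sets(t)
        if r + 1 < len(order):
            Cn, Dn = level_sets(abs(PB[order[r + 1]]))
        else:
            Cn, Dn = frozenset(), frozenset()
        total += t * (bicap(C, D) - bicap(Cn, Dn))
    return total
```

Replacing \texttt{bicap} by $\mu^{+}$ or $\mu^{-}$ gives the corresponding sketches of (\ref{bipolarpositive}) and (\ref{bipolarnegative}).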
\noindent $Ch^B(P^B(a,b), \hat{\mu})$ gives the comprehensive preference of $a$ over $b$ and is equivalent to $\pi(a,b) - \pi(b,a) = P^C(a,b)$ in the classical PROMETHEE method, while $Ch^{B+}(P^B(a,b), \mu^+)$ and $Ch^{B-}(P^B(a,b), \mu^-)$ give, respectively, how much $a$ outranks $b$ (considering the reasons in favor of $a$) and how much $a$ is outranked by $b$ (considering the reasons against $a$).\\ From the definitions above, it is straightforward to prove that, for all $a,b\in A,$
\begin{equation}\label{bipolarpref} Ch^{B}(P^{B}(a, b), \hat{\mu})=Ch^{B+}(P^{B}(a, b), {\mu}^{+})-Ch^{B-}(P^{B}(a, b), {\mu}^{-}) \end{equation}
\noindent Using equations (\ref{bipolarcomprehensive}), (\ref{bipolarpositive}) and (\ref{bipolarnegative}), we can define for each alternative $a\in A$ the bipolar positive flow, the bipolar negative flow and the bipolar net flow as follows:
\begin{equation}\label{pos_flow}
{\phi}^{B+}(a) = \frac{1}{m-1} \sum_{b \in A\setminus\left\{a\right\}}Ch^{B+}(P^{B}(a,b), {\mu}^{+}) \end{equation}
\begin{equation}\label{neg_flow}
{\phi}^{B-}(a) = \frac{1}{m-1} \sum_{b \in A\setminus\left\{a\right\}}Ch^{B-}(P^{B}(a,b), {\mu}^{-}) \end{equation}
\begin{equation}\label{net_flow}
{\phi}^{B}(a) = \frac{1}{m-1} \sum_{b \in A\setminus\left\{a\right\}}Ch^{B}(P^{B}(a,b), \hat{\mu}) \end{equation}
\noindent By equation (\ref{bipolarpref}), it follows that ${\phi}^{B}(a)={\phi}^{B+}(a)-{\phi}^{B-}(a)$ for each $a\in A$.
Analogously to the classical PROMETHEE I and II methods, using the positive, the negative and the net bipolar flows, we propose the bipolar PROMETHEE I and the bipolar PROMETHEE II methods. The bipolar PROMETHEE I method defines a partial order on the set of alternatives $A$ by considering a preference (${\cal P}_{B}^I$), an indifference (${\cal I}_{B}^I$) and an incomparability (${\cal R}_{B}^I$) relation, defined as follows:
$$ \left\{ \begin{array}{lll} a{\cal P}_{B}^I b & \mbox{iff} & \left\{\begin{array}{l} \Phi^{B+}(a)\geq\Phi^{B+}(b),\\ \Phi^{B-}(a)\leq\Phi^{B-}(b),\\ \Phi^{B+}(a)-\Phi^{B-}(a)>\Phi^{B+}(b)-\Phi^{B-}(b)\end{array} \right.\\[3mm] a{\cal I}_B^I b & \mbox{iff} & \left\{\begin{array}{l} \Phi^{B+}(a)=\Phi^{B+}(b),\\ \Phi^{B-}(a)=\Phi^{B-}(b)\end{array} \right.\\[3mm] a{\cal R}_B^{I}b & \mbox{otherwise} & \\ \end{array} \right. $$
\noindent Given a pair of alternatives $(a,b)\in A\times A$, the bipolar PROMETHEE II method provides, instead, a complete order on the set of alternatives $A$, defining a preference (${\cal P}_{B}^{II}$) and an indifference (${\cal I}_{B}^{II}$) relation as follows: $a{\cal P}_{B}^{II}b$ iff $\Phi^{B}(a)>\Phi^{B}(b)$, while $a{\cal I}_{B}^{II}b$ iff $\Phi^{B}(a)=\Phi^{B}(b)$.
\subsection{Symmetry conditions}
Because $Ch^B(P^B(a,b), \hat{\mu})$ is equivalent to $\pi(a,b) - \pi(b,a) = P^C(a,b)$ in the classical PROMETHEE method, it is reasonable to expect that, for all $a,b\in A$, $Ch^B(P^B(a,b), \hat{\mu})=-Ch^B(P^B(b,a),\hat{\mu})$. The following Proposition gives necessary and sufficient conditions for this requirement:
\begin{prop}\label{Sym_1}\textit{$Ch^{B}(P^{B}(a, b),\hat{\mu}) = -Ch^{B}(P^{B}(b, a),\hat{\mu})$ ~~ for all possible ~~ $a, b$, ~~ iff ~~ $\hat{\mu}(C, D) = - \hat{\mu}(D, C)$ ~~ for each ~~ $(C, D) \in P({\cal{J}})$}. \end{prop}
\begin{proof} Let us prove that if $\hat{\mu}(C,D)= -\hat{\mu}(D,C)$ for each $(C, D) \in P({\cal{J}})$, then $Ch^B(P^B(a,b), \hat{\mu})= - Ch^B(P^B(b,a), \hat{\mu})$. As noticed, $P_{j}^{B}(a,b)= - P_{j}^{B}(b,a)$ for all $j \in \cal{J}$, and consequently $\vert P_{(j)}^{B}(a,b) \vert = \vert -P_{(j)}^{B}(b,a) \vert = \vert
P_{(j)}^{B}(b,a) \vert$ for all $j\in{\cal J}.$
\noindent From this, it follows that:
\begin{itemize} \item[$(\alpha)$] ${\displaystyle C_{(j)}(a,b) = \{ i \in {\cal{J}}^{>} \; : \; P_{i}^{B}(a,b) \geq \vert P_{(j)}^{B}(a,b) \vert \} = \{ i \in {\cal{J}}^{>} \; : \; -P_{i}^{B}(b,a) \geq \vert P_{(j)}^{B}(b,a) \vert \} = }$ \\ ${\displaystyle = D_{(j)}(b,a)}$; \item[$(\beta)$] ${\displaystyle D_{(j)}(a,b) = \{ i \in {\cal{J}}^{>} \; : \; - P_{i}^{B}(a,b) \geq \vert P_{(j)}^{B}(a,b) \vert \} = \{ i \in {\cal{J}}^{>} \; : \; P_{i}^{B}(b,a) \geq \vert P_{(j)}^{B}(b,a) \vert \}=}$ \\ ${\displaystyle = C_{(j)}(b,a)}$. \end{itemize}
From $(\alpha)$ and $(\beta)$ we have that
\begin{itemize} \item[$(\gamma)$] ${\displaystyle Ch^B(P^B(a,b), \hat{\mu}) =}$ \\ =${\displaystyle \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(a,b) \vert \Big[\hat{\mu}(C_{(j)}(a,b), D_{(j)}(a,b)) - \hat{\mu}(C_{(j+1)}(a,b), D_{(j+1)}(a,b))\Big] = }$ \\ ${\displaystyle = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(b,a) \vert \Big[\hat{\mu}(D_{(j)}(b,a), C_{(j)}(b,a)) - \hat{\mu}(D_{(j+1)}(b,a), C_{(j+1)}(b,a)) \Big]}$. \end{itemize}
\noindent Since $\hat{\mu}(C,D) =- \hat{\mu}(D,C),\; \forall (C,D)\in P({\cal J})$, from $(\gamma)$ we have that,
\begin{itemize} \item[$(\delta)$] ${\displaystyle Ch^B(P^B(b,a), \hat{\mu}) =}$ \\ ${\displaystyle = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(b,a) \vert \Big[\hat{\mu}(C_{(j)}(b,a), D_{(j)}(b,a)) - \hat{\mu}(C_{(j+1)}(b,a), D_{(j+1)}(b,a))\Big] = }$ \\ ${\displaystyle = \sum_{j \in {\cal{J}}^{>}} \vert P_{(j)}^{B}(b,a) \vert \Big[ - \hat{\mu}(D_{(j)}(b,a), C_{(j)}(b,a)) + \hat{\mu}(D_{(j+1)}(b,a), C_{(j+1)}(b,a)) \Big]}$ \\ ${\displaystyle = -Ch^B(P^B(a, b), \hat{\mu})}.$ \end{itemize}
Let us now prove that if $Ch^B(P^B(a,b), \hat{\mu})= - Ch^B(P^B(b,a), \hat{\mu})$, then $\hat{\mu}(C,D) = - \hat{\mu}(D,C)$. Let us consider the pair $(a,b)$ such that,
\begin{equation} P_{j}^{B}(a,b)= \left\{ \begin{array}{lll} 1 & \mbox{if} & j\in C\\ -1 & \mbox{if} & j\in D\\ 0 & & \mbox{otherwise} \\ \end{array} \right. \end{equation}
In this case we have that $Ch^B(P^B(a,b), \hat{\mu})= \hat{\mu}(C,D)$ and $Ch^B(P^B(b,a), \hat{\mu})= \hat{\mu}(D,C)$. Thus, if $Ch^B(P^B(a,b), \hat{\mu})=-Ch^B(P^B(b,a), \hat{\mu})$, we obtain that $\hat{\mu}(C,D) = - \hat{\mu}(D,C)$ and the proof is concluded. \end{proof}
\noindent Analogously, because $Ch^{B+}(P^B(a,b),\mu^+)$ represents how much $a$ outranks $b$ and $Ch^{B-}(P^B(b,a),\mu^-)$ represents how much $b$ is outranked by $a$, it is reasonable to expect that $Ch^{B+}(P^B(a,b),\mu^+)=Ch^{B-}(P^B(b,a),\mu^-)$. Necessary and sufficient conditions for this equality are given by the following Proposition.
\begin{prop}\label{Salvo} \noindent \textit{$Ch^{B+}(P^{B}(a, b),{\mu}^{+}) = Ch^{B-}(P^{B}(b, a),{\mu}^{-})$ ~~ for all possible ~~ $a, b$, ~~ iff ~~ ${\mu}^{+}(C, D) = {\mu}^{-}(D, C)$ ~~ for each ~~ $(C, D) \in P({\cal{J}})$}. \end{prop} \begin{proof} The proof is analogous to that of Proposition \ref{Sym_1}. \end{proof}
\noindent Recalling equation (\ref{bipolarpref}), the following Corollary holds.
\begin{corollary} \textit{$Ch^{B}(P^{B}(a, b),\hat{\mu}) = - Ch^{B}(P^{B}(b, a),\hat{\mu})$ ~~ for all possible ~~ $a, b$, ~~ if ~~ ${\mu}^{+}(C, D) = {\mu}^{-}(D, C)$ ~~ for each ~~ $(C, D) \in P({\cal{J}})$}. \end{corollary} \begin{proof} This can be seen as a Corollary both of Proposition \ref{Sym_1} and of Proposition \ref{Salvo}. In fact, \begin{itemize} \item ${\mu}^{+}(C, D) = {\mu}^{-}(D, C)$ for each $(C, D) \in P({\cal{J}})$ implies that $\hat\mu(C,D)=-\hat\mu(D,C)$ for each $(C, D) \in P({\cal{J}})$, and by Proposition \ref{Sym_1} the thesis follows. \item ${\mu}^{+}(C, D) = {\mu}^{-}(D, C)$ for each $(C, D) \in P({\cal{J}})$ implies that $Ch^{B+}(P^{B}(a, b),{\mu}^{+}) = Ch^{B-}(P^{B}(b, a),{\mu}^{-})$ (by Proposition \ref{Salvo}), and from this the thesis obviously follows by equation (\ref{bipolarpref}). \end{itemize} \end{proof}
\subsection{The 2-additive decomposable bipolar PROMETHEE methods}
\noindent As seen in the previous section, the use of the bipolar Choquet integral is based on a bicapacity which assigns a numerical value to each element of $P({\cal{J}})$. Let us remark that $P({\cal{J}})$ has $3^{n}$ elements, so the definition of a bicapacity requires a rather large and impractical number of parameters. Moreover, the interpretation of these parameters is not always simple for the DM. Therefore, the use of the bipolar Choquet integral in real-world decision-making problems requires some methodology to assist the DM in assessing the preference parameters (bicapacities). Several studies dealing with the determination of the relative importance of criteria have been proposed in MCDA (see e.g. \cite{RoyMousseau}). The question of the interaction between criteria has also been studied in the context of MAUT methods \cite{MarichalRo00}. \\ In the following we consider only $2$-additive bicapacities \cite{GL1,fujimoto2004new}, a particular class of bicapacities.
\subsection{Defining a manageable and meaningful bicapacity measure}
\noindent According to \cite{GF2003}, we give the following decomposition of the functions $\mu^{+}$ and $\mu^{-}$ previously defined:
\begin{defn}\label{mupiumeno} \hspace{0,1cm} \begin{itemize} \item ${\displaystyle \mu^{+}(C, D) = \sum_{j \in C}a^{+}(\{ j\}, \emptyset) + \sum_{\{j,k \} \subseteq C}a^{+}(\{j, k\}, \emptyset) +
\sum_{j \in C, \; k \in D}a^{+}(\{ j\}, \{ k \}) }$ \item ${\displaystyle \mu^{-}(C, D) = \sum_{j \in D}a^{-}(\emptyset, \{ j\}) + \sum_{\{j,k \} \subseteq D}a^{-}(\emptyset, \{j, k\}) +
\sum_{j \in C, \; k \in D}a^{-}(\{ j\}, \{ k \}) }$ \end{itemize} \end{defn}
\noindent The interpretation of each $a^{\pm}(\cdot)$ is the following:
\begin{itemize} \item $a^+(\{ j\}, \emptyset)$ represents the power of criterion $g_j$ by itself; this value is always non-negative; \item $a^+(\{j, k\}, \emptyset)$ represents the interaction between $g_j$ and $g_k$ when they are both in favor of the preference of $a$ over $b$; when its value is zero there is no interaction; a positive value means a synergy effect when putting together $g_j$ and $g_k$; a negative value means that the two criteria are redundant; \item $a^+(\{ j\}, \{ k \})$ represents the power of criterion $g_k$ against criterion $g_j$, when $g_j$ is in favor of and $g_k$ is against the preference of $a$ over $b$; since this value is always non-positive, it can only reduce (or leave unchanged) the value of $\mu^+$. \end{itemize}
An analogous interpretation can be applied to the values $a^-(\emptyset, \{ j\})$, $a^-(\emptyset, \{j, k\})$, and $a^-(\{ j \}, \{ k \})$.
In what follows, for the sake of simplicity, we will write $a_{j}^+$, $a_{jk}^+$, $a_{j \vert k}^+$ instead of $a^+(\{ j\}, \emptyset)$, $a^+(\{j, k\}, \emptyset)$ and $a^+(\{ j\}, \{ k \})$, respectively, and $a_{j}^-$, $a_{jk}^-$, $a_{j \vert k}^-$ instead of $a^-(\emptyset, \{ j\})$, $a^-(\emptyset, \{j, k\})$ and $a^-(\{ j \}, \{ k \})$, respectively.\\ In this way, the bicapacity $\hat\mu$, decomposed using $\mu^+$ and $\mu^-$ of Definition \ref{mupiumeno}, has the following expression:
\begin{eqnarray*} \hat{\mu}(C,D) & = & {\mu}^{+}(C,D)-{\mu}^{-}(C,D)=\\
& = & \sum_{j \in C}a^{+}_{j} - \sum_{j \in D}a^{-}_{j} + \sum_{\{j, k \} \subseteq C}a^{+}_{jk} - \sum_{\{j, k \} \subseteq D}a^{-}_{jk} + \sum_{j \in C, \; k \in D}a^{+}_{j \vert k} - \sum_{j \in C, \; k \in D}a^{-}_{j \vert k} \end{eqnarray*}
\noindent We call such a bicapacity $\hat\mu$ a \textit{2-additive decomposable bicapacity}. (An analogous decomposition has been proposed directly for $\hat\mu$, without considering $\mu^{+}$ and $\mu^{-}$, in \cite{Muro}.) \\ Considering these decompositions of the functions $\mu^+$ and $\mu^-$, the monotonicity conditions (\ref{M_miu_p}), (\ref{M_miu_m}) and the boundary conditions (\ref{bound_plus}), (\ref{bound_minus}) can be expressed in terms of the parameters $a_j^{+}$, $a_{jk}^{+}$, $a_{j\vert k}^{+}$, $a_j^{-}$, $a_{jk}^{-}$, $a_{j\vert k}^{-}$ as follows:
\noindent {\bf{Monotonicity conditions}} \begin{enumerate} \item[1)] $\mu^{+}(C, D) \leq \mu^{+}(C \cup \{ j \}, D)$, $\; \; \forall \; j \in {\cal{J}}, \; \forall (C \cup \{ j \}, D) \in P({\cal{J}})$
\[
{\Leftrightarrow\displaystyle a^{+}_{j} + \sum_{k \in C}a^{+}_{jk} + \sum_{k \in D}a^{+}_{j \vert
k} \geq 0, \; \; \forall \; j \in {\cal{J}}, \; \forall (C \cup \{ j \}, D) \in
P({\cal{J}})}
\] \item[2)] $\mu^{+}(C, D) \geq \mu^{+}(C, D \cup \{ j \}), \; \; \forall \; j \in {\cal{J}}, \; \forall (C, D \cup \{ j \}) \in P({\cal{J}})$
\[
{\Leftrightarrow\displaystyle \sum_{k \in C}a^{+}_{k \vert
j} \leq 0, \; \; \forall \; j \in {\cal{J}}, \; \forall (C, D \cup \{ j \}) \in
P({\cal{J}})}
\] which is already satisfied because $a^{+}_{k \vert j}\leq 0$, $\forall k,j\in {\cal J}, k\neq j.$ \\ \end{enumerate}
\begin{enumerate} \item[3)] $\mu^{-}(C, D) \leq \mu^{-}(C, D \cup \{ j \})$, $\; \; \forall \; j \in {\cal{J}}, \; \forall (C , D\cup \{ j \}) \in P({\cal{J}})$
\[
{\Leftrightarrow\displaystyle a^{-}_{j} + \sum_{k \in D}a^{-}_{jk} + \sum_{k \in C}a^{-}_{k \vert
j} \geq 0, \; \; \forall \; j \in {\cal{J}}, \; \forall (C , D\cup \{ j \}) \in
P({\cal{J}})}
\] \item[4)] $\mu^{-}(C, D) \geq \mu^{-}(C\cup \{ j \}, D ), \; \; \forall \; j \in {\cal{J}}, \; \forall (C\cup \{ j \}, D ) \in P({\cal{J}})$
\[
{\Leftrightarrow\displaystyle \sum_{k \in D}a^{-}_{j \vert
k} \leq 0, \; \; \forall \; j \in {\cal{J}}, \; \forall (C\cup \{ j \}, D ) \in
P({\cal{J}})}
\] which is already satisfied because $a^{-}_{j \vert k}\leq 0$, $\forall j,k\in {\cal J}, j\neq k.$ \\ \end{enumerate}
\noindent Conditions $1)$, $2)$, $3)$ and $4)$ ensure the monotonicity of the bicapacity $\hat{\mu}$ on $\cal{J}$, obtained as the difference of $\mu^+$ and $\mu^-$, that is,
\[ \forall \; \; (C,D), \; (E, F) \; \in \; P({\cal{J}}) \; \; \; \mbox{such that} \; \; \; C \supseteq E, \; D \subseteq F, \; \; \hat{\mu}(C,D) \geq \hat{\mu}(E,F). \]
\noindent {\bf{Boundary conditions}} \begin{enumerate} \item $\mu^{+}({\cal{J}}, \emptyset) = 1$, i.e., ${\displaystyle \sum_{j \in {\cal{J}}}a_{j}^{+} + \sum_{\{j, k \} \subseteq {\cal{J}}}a_{jk}^{+} = 1}$ \item $\mu^{-}(\emptyset, {\cal{J}}) = 1$, i.e., ${\displaystyle \sum_{j \in {\cal{J}}}a_{j}^{-} + \sum_{ \{j, k \} \subseteq {\cal{J}}}a_{jk}^{-} = 1}$ \end{enumerate}
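To make the constraints above concrete, the following Python sketch builds $\mu^{+}$ from hypothetical 2-additive parameters (the values $1/3$ and $-0.1$ are illustrative only, not taken from the text) and verifies the boundary condition and the monotonicity conditions by enumerating the $3^n$ elements of $P({\cal J})$:

```python
from itertools import product, combinations
from math import isclose

J = (0, 1, 2)  # n = 3 criteria

# Hypothetical 2-additive parameters (illustrative values only):
a_j   = {j: 1/3 for j in J}                              # a_j^+     >= 0
a_jk  = {frozenset(p): 0.0 for p in combinations(J, 2)}  # a_{jk}^+
a_jok = {(j, k): -0.1 for j in J for k in J if j != k}   # a_{j|k}^+ <= 0

def mu_plus(C, D):
    """mu^+(C, D): singletons in C, pairs in C, opposing pairs (C, D)."""
    s = sum(a_j[j] for j in C)
    s += sum(a_jk[frozenset(p)] for p in combinations(sorted(C), 2))
    s += sum(a_jok[(j, k)] for j in C for k in D)
    return s

def P_J():
    """Enumerate P(J): all pairs (C, D) of disjoint subsets of J (3^n of them)."""
    for assign in product("CDN", repeat=len(J)):
        yield (frozenset(j for j, a in zip(J, assign) if a == "C"),
               frozenset(j for j, a in zip(J, assign) if a == "D"))

# Boundary condition: mu^+(J, emptyset) = 1
assert isclose(mu_plus(set(J), set()), 1.0)

# Monotonicity conditions 1) and 2): enlarging C cannot decrease mu^+,
# enlarging D cannot increase it.
for C, D in P_J():
    for j in set(J) - C - D:
        assert mu_plus(C | {j}, D) >= mu_plus(C, D) - 1e-12
        assert mu_plus(C, D | {j}) <= mu_plus(C, D) + 1e-12

print("boundary and monotonicity conditions hold for this mu^+")
```

The same enumeration applied to $\mu^{-}$ (with the roles of $C$ and $D$ exchanged) checks conditions 3) and 4).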
\subsection{The $2$-additive bipolar Choquet integral}
\noindent The following theorem gives an expression of $Ch^{B+}(x, {\mu}^{+})$ and $Ch^{B-}(x, {\mu}^{-})$ for a 2-additive decomposable bicapacity $\hat\mu$.
\begin{theorem}\label{Bi-polar_Choq} {\it Given a 2-additive decomposable bicapacity $\hat{\mu}$, then for all $x \in {\mathbb{R} }^n$ \begin{enumerate}
\item ${\displaystyle Ch^{B+}(x, {\mu}^{+}) = \sum_{j \in {\cal{J}}, x_j> 0}a^{+}_{j}x_j + \sum_{j, k \in {\cal{J}}, j \neq k, x_j, x_k > 0}a^{+}_{jk}\min\{x_j,
x_k\}+ \sum_{j, k \in {\cal{J}}, j\neq k, x_j > 0, x_k < 0}a^{+}_{j \vert k}\min\{x_j, - x_k\} } $
\item ${\displaystyle Ch^{B-}(x, {\mu}^{-}) = -\sum_{j \in {\cal{J}}, x_j< 0}a^{-}_{j}x_j - \sum_{j, k \in {\cal{J}}, j \neq k, x_j, x_k < 0}a^{-}_{jk}\max\{x_j,
x_k\}- \sum_{j, k \in {\cal{J}}, j\neq k, x_j > 0, x_k < 0}a^{-}_{j \vert k}\max\{-x_j, x_k\} }$ \end{enumerate} } \end{theorem} \begin{proof} We shall prove only part 1; the proof of part 2 can be obtained analogously.\\ If the bicapacity $\hat\mu$ is $2$-additive decomposable, then
\[ \begin{array}{ll} {\displaystyle Ch^{B+}(x, {\mu}^{+})} & {\displaystyle = \sum_{j \in {\cal{J}}^>}\vert x_{(j)}\vert \Big[{\mu}^{+}(C_{(j)}, D_{(j)}) - {\mu}^{+}(C_{(j+1)}, D_{(j+1)}) \Big] =} \\ [6mm]
& \\
& {\displaystyle = \sum_{j \in {\cal{J}}^>}\vert x_{(j)} \vert \Big[\Big( \sum_{k \in {\cal{J}}^>, x_k
\geq \vert x_{(j)} \vert} a^{+}_{k} - \sum_{k \in {\cal{J}}^>, x_k \geq
\vert x_{(j+1)} \vert} a^{+}_{k}\Big) + } \\ [6mm]
& {\displaystyle + \Big(\sum_{h, k \in {\cal{J}}^>, h \neq k, x_h, x_k \geq \vert x_{(j)} \vert} a^{+}_{hk} -
\sum_{h, k \in {\cal{J}}^>, h \neq k, x_h, x_k \geq \vert x_{(j+1)} \vert} a^{+}_{hk}\Big) + } \\ [6mm]
& {\displaystyle +\Big(\sum_{h, k \in {\cal{J}}^>, h\neq k, x_h, -x_k \geq \vert x_{(j)} \vert} a^{+}_{h \vert k} -
\sum_{h, k \in {\cal{J}}^>, h\neq k, x_h, -x_k \geq \vert x_{(j+1)} \vert} a^{+}_{h \vert k} \Big)\Big] } \\ [6mm]
\end{array} \]
Let us remark that,
\[ a) \; \; \; \; \; \Big( \sum_{k \in {\cal{J}}^>, x_k
\geq \vert x_{(j)} \vert} a^{+}_{k} - \sum_{k \in {\cal{J}}^>, x_k
\geq \vert x_{(j+1)} \vert} a^{+}_{k}\Big) =
\left\{
\begin{array}{lll}
\displaystyle\sum_{k\in{\cal J}^{>}, x_{k}=|x_{(j)}|} a^{+}_{k} & \mbox{if} & |x_{(j)}| < |x_{(j+1)}| \\[10mm]
0 & & \mbox{otherwise}
\end{array}
\right. \]
\[ b) \; \; \; \; \; \Big( \sum_{\substack{h, k \in {\cal{J}}^>, h\neq k, \\ x_h, -x_k \geq \vert x_{(j)} \vert}} a^{+}_{h \vert k} - \sum_{\substack{h, k \in {\cal{J}}^>, h\neq k, \\ x_h, -x_k \geq \vert x_{(j+1)} \vert}} a^{+}_{h \vert k}\Big) =
\left\{
\begin{array}{lll}
{\displaystyle \sum_{\substack{h,k\in{\cal J}^{>}, h\neq k, \\ \min\{x_h,-x_{k}\}=|x_{(j)}|}}} a^{+}_{h \vert k} & \mbox{if} & |x_{(j)}| < |x_{(j+1)}| \\[10mm]
0 & & \mbox{otherwise}
\end{array}
\right. \]
\[ c) \; \; \; \; \; \Big( \sum_{\substack{h,k \in {\cal{J}}^>, h\neq k, \\x_h,x_k \geq \vert x_{(j)} \vert }} a^{+}_{hk} - \sum_{\substack{h,k \in {\cal{J}}^>, h\neq k, \\x_h,x_k \geq \vert x_{(j+1)} \vert }} a^{+}_{hk} \Big) =
\left\{
\begin{array}{lll}
{\displaystyle \sum_{\substack{h,k \in {\cal{J}}^>, h\neq k, \\ \min\{x_h,x_k\} = \vert x_{(j)} \vert }}} a^{+}_{hk} & \mbox{if} & |x_{(j)}| < |x_{(j+1)}| \\[10mm]
0 & & \mbox{otherwise}
\end{array}
\right. \]
\noindent Considering $a)-c)$ we get that:
\[ \begin{array}{ll}
Ch^{B+}(x, {\mu}^{+}) & {\displaystyle = \sum_{\substack{j \in {\cal{J}}^>,\\|x_{(j)}|<|x_{(j+1)}|}}\vert x_{(j)}
\vert \Big[ \displaystyle\sum_{k\in{\cal J}^{>}, x_{k}=|x_{(j)}|} a^{+}_{k} + \sum_{\substack{h,k \in {\cal{J}}^>, h\neq k, \\ \min\{x_h,x_k\} = \vert x_{(j)} \vert }} a^{+}_{hk} + \sum_{\substack{h,k \in {\cal{J}}^>, h\neq k, \\\min\{x_h,-x_k\} = \vert x_{(j)} \vert }} a^{+}_{h \vert k} \Big]}\\ [6mm] \end{array} \] \noindent and the thesis follows. \end{proof}
In the following, we express the symmetry conditions of Propositions \ref{Sym_1} and \ref{Salvo} in terms of the parameters $a_j^{+}$, $a_j^{-}$, $a_{jk}^{+}$, $a_{jk}^{-}$, $a_{j\vert k}^{+}$ and $a_{j\vert k}^{-}$.
\begin{prop}\label{Prop_cond} {\it Given a 2-additive decomposable bicapacity $\hat\mu$, then $\hat{\mu}(C,D)= - \hat{\mu}(D,C)$ for each $(C,D) \in P({\cal{J}})$ iff
\begin{enumerate}
\item for each $j \in {\cal{J}}$, $a^{+}_{j} = a^{-}_{j}$,
\item for each $\{j,k\} \subseteq {\cal{J}}$, $a^{+}_{jk} = a^{-}_{jk}$,
\item for each $ j, k \in {\cal{J}}$, $j \neq k$, $a^{+}_{j \vert k} - a^{-}_{j \vert k} = a^{-}_{k \vert j} - a^{+}_{k \vert j}$.
\end{enumerate}
} \end{prop} \begin{proof} First, let us prove that
\begin{itemize} \item[$(a)$] $\hat{\mu}(C,D) = - \hat{\mu}(D,C)$ \end{itemize}
\noindent implies $1.$, $2.$ and $3.$ For each $j \in \cal{J}$,
\begin{itemize} \item[$(b)$] $\hat{\mu}(\{j\}, \emptyset)= a^{+}_{j}$ and $\hat{\mu}(\emptyset,\{j\})= -a^{-}_{j}$ \end{itemize}
\noindent From $(a)$ and $(b)$ we have,
\[ a^{+}_{j}= \hat{\mu}(\{j\}, \emptyset )= - \hat{\mu}(\emptyset ,\{j\})=a^{-}_{j} \]
\noindent which is $1$.
For each $\{ j,k \} \subseteq \cal{J}$ we have that, \begin{itemize} \item[$(c)$] $\hat{\mu}(\{j,k\}, \emptyset) = a^{+}_{j} + a^{+}_{k} + a^{+}_{jk}$ and $\hat{\mu}(\emptyset, \{j,k\}) = -a^{-}_{j} - a^{-}_{k} - a^{-}_{jk}$ \end{itemize}
Since $\hat{\mu}(\{j,k\}, \emptyset)=-\hat{\mu}(\emptyset,\{j,k\})$, and since $a^{+}_{j} = a^{-}_{j}$ and $a^{+}_{k} = a^{-}_{k}$ by $1.$, we have that for each $\{j,k\} \subseteq \cal{J}$, $a^{+}_{jk}= a^{-}_{jk}$, i.e. condition $2$.
For all $j,k\in {\cal J}$ with $j\neq k,$ we have:\\ $$\hat{\mu}(\{j\},\{k\})=a_{j}^{+}-a_{k}^{-}+a^{+}_{j \vert k}-a^{-}_{j \vert k}$$ $$\hat{\mu}(\{k\},\{j\})=a_{k}^{+}-a_{j}^{-}+a^{+}_{k \vert j}-a^{-}_{k \vert j}$$ \noindent Since $\hat{\mu}(\{j\},\{k\})=-\hat{\mu}(\{k\},\{j\})$, and since we have proved that $a^{+}_{j}=a^{-}_{j}$ for all $j$, we obtain $a^{+}_{j \vert k}-a^{-}_{j \vert k}=-a^{+}_{k \vert j}+a^{-}_{k \vert j}$, i.e. condition $3$.
It is straightforward to prove that $1.$, $2.$, and $3.$ imply $\hat\mu(C,D)=-\hat\mu(D,C)$.
\end{proof}
\begin{corollary}\label{lem_SYM} Given a 2-additive decomposable bicapacity $\hat\mu$, $Ch^{B}(P^{B}(a,b),\hat\mu)=-Ch^{B}(P^{B}(b,a),\hat\mu)$ for all $a,b\in A$ iff \begin{enumerate} \item for each $j \in {\cal{J}}$, $a^{+}_{j} = a^{-}_{j}$, \item for each $\{j,k\} \subseteq {\cal{J}}$, $a^{+}_{jk} = a^{-}_{jk}$, \item for each $ j, k \in {\cal{J}}$, $j \neq k$, $a^{+}_{j \vert k} - a^{-}_{j \vert k} = a^{-}_{k \vert j} - a^{+}_{k \vert j}$. \end{enumerate} \end{corollary} \begin{proof} It follows by Propositions \ref{Prop_cond} and \ref{Sym_1}. \end{proof}
\begin{prop}\label{Prop_cond_2} {\it Given a 2-additive decomposable bicapacity $\hat\mu$, then ${\mu}^{+}(C,D)= {\mu}^{-}(D,C)$ for each $(C,D) \in P({\cal{J}})$ iff
\begin{enumerate}
\item for each $j \in {\cal{J}}$, $a^{+}_{j} = a^{-}_{j}$,
\item for each $\{j,k\} \subseteq {\cal{J}}$, $a^{+}_{jk} = a^{-}_{jk}$,
\item for each $ j, k \in {\cal{J}}$, $j \neq k$, $a^{+}_{j \vert k} = a^{-}_{k \vert j}$.
\end{enumerate}
} \end{prop} \begin{proof} Analogous to Proposition \ref{Prop_cond}. \end{proof}
\begin{corollary}\label{lem_SYM_2} Given a 2-additive decomposable bicapacity $\hat\mu$, $Ch^{B+}(P^{B}(a,b),\mu^{+})=Ch^{B-}(P^{B}(b,a),\mu^{-})$ for all $a,b\in A$ iff \begin{enumerate} \item for each $j \in {\cal{J}}$, $a^{+}_{j} = a^{-}_{j}$, \item for each $\{j,k\} \subseteq {\cal{J}}$, $a^{+}_{jk} = a^{-}_{jk}$, \item for each $ j, k \in {\cal{J}}$, $j \neq k$, $a^{+}_{j \vert k}= a^{-}_{k \vert j}$. \end{enumerate} \end{corollary} \begin{proof} It follows by Propositions \ref{Prop_cond_2} and \ref{Salvo}. \end{proof}
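As a sanity check, the conditions of Proposition \ref{Prop_cond_2} can be verified numerically: with hypothetical 2-additive parameters chosen so that $a_j^{+}=a_j^{-}$, $a_{jk}^{+}=a_{jk}^{-}$ and $a^{+}_{j\vert k}=a^{-}_{k\vert j}$, the equality $\mu^{+}(C,D)=\mu^{-}(D,C)$ holds on every element of $P({\cal J})$ (a sketch on illustrative values, not a proof):

```python
from itertools import product, combinations
from math import isclose

J = (0, 1, 2)

# Hypothetical parameters satisfying conditions 1.-3. of the proposition:
a   = {0: 0.5, 1: 0.3, 2: 0.2}                         # a_j^+  = a_j^-
ajk = {frozenset(p): 0.0 for p in combinations(J, 2)}  # a_{jk}^+ = a_{jk}^-
aop = {(j, k): -0.05 * (j + 1)
       for j in J for k in J if j != k}                # a_{j|k}^+
aom = {(k, j): aop[(j, k)] for (j, k) in aop}          # a_{k|j}^- = a_{j|k}^+

def mu_plus(C, D):
    return (sum(a[j] for j in C)
            + sum(ajk[frozenset(p)] for p in combinations(sorted(C), 2))
            + sum(aop[(j, k)] for j in C for k in D))

def mu_minus(C, D):
    return (sum(a[j] for j in D)
            + sum(ajk[frozenset(p)] for p in combinations(sorted(D), 2))
            + sum(aom[(j, k)] for j in C for k in D))

# Enumerate P(J) and check mu^+(C, D) = mu^-(D, C) everywhere.
for assign in product("CDN", repeat=len(J)):
    C = frozenset(j for j, t in zip(J, assign) if t == "C")
    D = frozenset(j for j, t in zip(J, assign) if t == "D")
    assert isclose(mu_plus(C, D), mu_minus(D, C), abs_tol=1e-12)

print("mu^+(C, D) = mu^-(D, C) on all of P(J)")
```

Note that the opposing terms $a^{+}_{j\vert k}$ are deliberately asymmetric in $j$ and $k$, so the check is not trivially satisfied.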
\noindent The first two conditions of Corollary \ref{lem_SYM} coincide with the first two conditions of Corollary \ref{lem_SYM_2}, while the third condition of Corollary \ref{lem_SYM_2} implies the third one of Corollary \ref{lem_SYM}. Therefore, in order to get both $Ch^{B}(P^{B}(a,b),\hat\mu)=-Ch^{B}(P^{B}(b,a),\hat\mu)$ and $Ch^{B+}(P^{B}(a,b),\mu^{+})=Ch^{B-}(P^{B}(b,a),\mu^{-})$ for all $a,b\in A$, we impose the conditions of Corollary \ref{lem_SYM_2}.
\section{Assessing the preference information}\label{assessing}
On the basis of the considered $2$-additive decomposable bicapacity $\hat\mu$, and assuming the symmetry conditions of Corollary \ref{lem_SYM_2}, we propose the following methodology, which simplifies the assessment of the preference information. \\ We consider the following types of information provided by the DM and their representation in terms of linear constraints:
\begin{enumerate} \item {\it Comparing pairs of actions locally or globally}. The constraints represent some pairwise comparisons on a set of training actions. Given two actions $a$ and $b$, the DM may prefer $a$ to $b$, prefer $b$ to $a$, or be indifferent between them:
\begin{enumerate}
\item the linear constraint associated with $a{\cal P}b$ ($a$ is locally preferred to $b$) is:
$$Ch^{B}(P^{B}(a, b), \hat{\mu}) > 0;$$
\item the linear constraints associated with $a{\cal P}^{I}b$ ($a$ is preferred to $b$ with respect to the bipolar PROMETHEE I method) are:
$$
\left.
\begin{array}{l}
\Phi^{B+}(a)\geq\Phi^{B+}(b),\\
\Phi^{B-}(a)\leq\Phi^{B-}(b),\\
\Phi^{B+}(a)-\Phi^{B-}(a)>\Phi^{B+}(b)-\Phi^{B-}(b),\\
\end{array}
\right\}
$$
\item the linear constraint associated with $a{\cal P}^{II}b$ ($a$ is preferred to $b$ with respect to the bipolar PROMETHEE II method) is:
$$
\Phi^{B}(a)>\Phi^{B}(b)
$$
\item the linear constraint associated with $a{\cal I}b$ ($a$ is locally indifferent to $b$) is:
$$
Ch^{B}(P^{B}(a, b), \hat{\mu}) = 0
$$
\item the linear constraints associated with $a{\cal I}^{I}b$ ($a$ is indifferent to $b$ with respect to the bipolar PROMETHEE I method) are:
$$
\left.
\begin{array}{l}
\Phi^{B+}(a)=\Phi^{B+}(b),\\
\Phi^{B-}(a)=\Phi^{B-}(b),\\
\end{array}
\right\}
$$
\item the linear constraint associated with $a{\cal I}^{II}b$ ($a$ is indifferent to $b$ with respect to the bipolar PROMETHEE II method) is:
$$
\Phi^{B}(a)=\Phi^{B}(b)
$$
\end{enumerate} \item {\it Comparison of the intensity of preferences between pairs of actions}. The constraints represent some pairwise comparisons between pairs of alternatives on a set of training actions. Given four actions $a$, $b$, $c$ and $d$: \begin{enumerate} \item the linear constraint associated with $(a,b) {\cal{P}} (c, d)$ (the local preference of $a$ over $b$ is larger than the local preference of $c$ over $d$) is: \[ Ch^{B}(P^{B}(a, b), \hat{\mu}) > Ch^{B}(P^{B}(c, d), \hat{\mu}) \] \item the linear constraint associated with $(a,b) {\cal{I}} (c, d)$ (the local preference of $a$ over $b$ is the same as the local preference of $c$ over $d$) is: \[ Ch^{B}(P^{B}(a, b), \hat{\mu}) = Ch^{B}(P^{B}(c, d), \hat{\mu}) \] \end{enumerate} \item {\it Importance of criteria}. A partial ranking over the set of criteria $\cal{J}$ may be provided by the DM:
\begin{enumerate}
\item criterion $g_j$ is more important than criterion $g_k$, which leads
to the constraint $a_{j} > a_{k}$;
\item criterion $g_j$ is equally important to criterion $g_k$, which leads
to the constraint $a_{j} = a_{k}$.
\end{enumerate} \item {\it The sign of interactions}. The DM may be able, in certain cases, to provide the sign of some interactions. For example, if there is a synergy effect when criterion $g_j$ interacts with criterion $g_k$, the following constraint should be added to the model: $a_{jk} > 0$. \item {\it Interaction between pairs of criteria}. The DM can provide some information about the interaction between criteria:
\begin{enumerate}
\item[a)] if the DM feels that interaction between $g_j$ and $g_k$ is greater
than the interaction between $g_p$ and $g_q$, the constraint should be
defined as follows: $|a_{jk}| > |a_{pq}|$ where in particular:
\begin{itemize}
\item if both couples of criteria are synergic then: $a_{jk} > a_{pq}$,
\item if both couples of criteria are redundant then: $a_{jk} < a_{pq}$,
\item if $(j,k)$ is a couple of synergic criteria and $(p,q)$ is a couple of redundant criteria, then: $a_{jk} > -a_{pq}$,
\item if $(j,k)$ is a couple of redundant criteria and $(p,q)$ is a couple of synergic criteria, then: $-a_{jk} > a_{pq}$.
\end{itemize}
\item[b)] if the DM feels that the strength of the interaction between $g_j$ and $g_k$ is the same as the strength of the interaction between
$g_p$ and $g_q$, the constraint will be the following: $|a_{jk}| = |a_{pq}|$ and in particular:
\begin{itemize}
\item if both couples of criteria are synergic or redundant then: $a_{jk} = a_{pq}$,
\item if one couple of criteria is synergic and the other is redundant then: $a_{jk} = -a_{pq}$,
\end{itemize}
\end{enumerate} \item {\it The power of the opposing criteria}{\label{interact}}. Concerning the power of the opposing criteria several situations may occur. For example:
\begin{enumerate}
\item[a)] when the opposing power of $g_k$ is larger than the opposing power of $g_h$ with respect to $g_j$, which expresses a positive preference, we can define the constraint $a^{+}_{j \vert k} < a^{+}_{j \vert h}$ (recall that $a^{+}_{j \vert k}\leq 0$ and $a^{-}_{j \vert k}\leq 0$ for all $j,k$ with $j\neq k$);
\item[b)] if the opposing power of $g_k$, expressing negative preferences, is larger against $g_j$ than against $g_h$, the constraint will be $a^{+}_{j \vert k} < a^{+}_{h \vert k}$.
\end{enumerate} \end{enumerate}
\subsection{A linear programming model} All the constraints presented in the previous section, along with the symmetry, boundary and monotonicity conditions, can now be put together to form a system of linear constraints. Strict inequalities can be converted into weak inequalities by adding a variable $\varepsilon$. It is well-known that the original system has a feasible solution if and only if, when maximizing $\varepsilon$, the optimal value is strictly positive \cite{MarichalRo00}. Considering the constraints given by Corollary \ref{lem_SYM_2} for the symmetry condition, the linear programming model can be stated as follows (where $j{\cal{P}}k$ means that criterion $g_j$ is more important than criterion $g_k$; the remaining relations have a similar interpretation):
\begin{scriptsize}
\begin{displaymath}
\begin{array}{l}
\mbox{Max}\; \varepsilon \\[5mm]
\left.
\begin{array}{ll}
Ch^{B}(P^{B}(a, b),\hat\mu) \geq \varepsilon \;\; \mbox{if} \;\; a{\cal{P}}b, & Ch^{B}(P^{B}(a, b),\hat\mu) = 0 \;\; \mbox{if} \;\; a{\cal{I}}b, \\[1mm]
\left.
\begin{array}{l}
\Phi^{B+}(a)\geq\Phi^{B+}(b),\\
\Phi^{B-}(a)\leq\Phi^{B-}(b),\\
\Phi^{B+}(a)-\Phi^{B-}(a)\geq \Phi^{B+}(b)-\Phi^{B-}(b)+\varepsilon\\
\end{array}
\right\} \;\; \mbox{if} \;\; a{\cal P}_B^{I}b &
\left.
\begin{array}{l}
\Phi^{B+}(a)=\Phi^{B+}(b),\\
\Phi^{B-}(a)=\Phi^{B-}(b)\\
\end{array}
\right\} \;\; \mbox{if} \;\; a{\cal I}_B^{I}b \\[1mm]
\Phi^{B}(a)\geq\Phi^{B}(b)+\varepsilon \;\; \mbox{if} \;\; a{\cal P}_B^{II}b & \Phi^{B}(a)=\Phi^{B}(b) \;\; \mbox{if} \;\; a{\cal I}_B^{II}b \\[1mm]
{Ch^{B}(P^{B}(a, b),\hat\mu) \geq Ch^{B}(P^{B}(c, d),\hat\mu) + \varepsilon } \;\; \mbox{if} \;\; (a,b) {\cal{P}} (c,d), & {Ch^{B}(P^{B}(a, b),\hat\mu) = Ch^{B}(P^{B}(c, d),\hat\mu)} \;\; \mbox{if} \;\;(a,b){\cal{I}}(c,d), \\[1mm]
a_{j} - a_{k} \geq \varepsilon \;\; \mbox{if} \;\; j{\cal{P}}k, & a_{j} = a_{k} \;\; \mbox{if} \;\; j{\cal{I}}k, \\[1mm]
|a_{jk}| - |a_{pq}| \geq \varepsilon \;\; \mbox{if} \;\; \{ j, k\} {\cal{P}}\{ p, q\}, \; \mbox{(see point 5.a) of the previous subsection )}& \\[1mm]
|a_{jk}| = |a_{pq}| \;\;\mbox{if} \;\; \{ j, k\} {\cal{I}}\{ p, q\}, \; \mbox{(see point 5.b) of the previous subsection )} & \\[1mm]
a_{jk} \geq \varepsilon \;\; \mbox{if there is synergy between criteria $j$ and $k$}, \\[1mm]
a_{jk} \leq - \varepsilon \;\; \mbox{if there is redundancy between criteria $j$ and $k$}, \\[1mm]
a_{jk} = 0 \;\;\mbox{if criteria $j$ and $k$ are not interacting}, \\[1mm]
\mbox{Power of the opposing criteria of the type \ref{interact}:}\\[1mm]
a^{+}_{j \vert k} - a^{+}_{j \vert p} \geq \varepsilon, & a^{-}_{j \vert k} - a^{-}_{j \vert p} \geq \varepsilon, \\[1mm]
a^{+}_{j \vert k} - a^{+}_{p \vert k} \geq \varepsilon, & a^{-}_{j \vert k} - a^{-}_{p \vert k} \geq \varepsilon, \\[1mm]
\mbox{Symmetry conditions (Corollary \ref{lem_SYM_2}):}\\[1mm]
{\displaystyle a_{j \vert k}^{+} = a_{k \vert j}^{-}, \; \; \forall \; j, k\in {\cal{J}}, j\neq k \;\; \; }\\[1mm]
\mbox{Boundary and monotonicity conditions:}\\[1mm]
{\displaystyle \sum_{j \in {\cal{J}}}a_{j} + \sum_{\{j, k \} \subseteq {\cal{J}}}a_{jk} = 1},\\[1mm]
{\displaystyle a_{j}\geq 0 \; \; \forall \; j \in {\cal{J}}}, & {\displaystyle a_{j \vert k}^{+}, \; a_{j \vert k}^{-} \; \leq 0 \; \; \forall \; j, k \in {\cal{J}}},\\[1mm]
{\displaystyle a_{j} + \sum_{k \in C}a_{jk} + \sum_{k \in D}a^{+}_{j \vert k} \geq 0, \; \; \forall \;
j \in {\cal{J}}, \; \forall (C \cup \{ j \}, D) \in P({\cal{J}}) }, \\[1mm]
{\displaystyle a_{j} + \sum_{k \in D}a_{jk} + \sum_{h \in C}a^{-}_{h \vert j} \geq 0, \; \; \forall \;
j \in {\cal{J}}, \; \forall (C, D \cup \{ j \}) \in P({\cal{J}}) }. \\
\end{array}
\right\}E^{A^R}
\end{array}
\end{displaymath} \end{scriptsize}
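As an illustration of how such a model can be solved in practice, the following sketch sets up a deliberately tiny instance (two criteria, the single DM statement $1{\cal P}2$, and the boundary and monotonicity constraints) and maximizes $\varepsilon$ with \texttt{scipy.optimize.linprog}. The variable layout and the toy constraint set are our own; this is a sketch of the approach, not the full model $E^{A^R}$:

```python
from scipy.optimize import linprog

# Variables: [a1, a2, a12, ao12, ao21, eps], where ao_jk stands for a_{j|k}^+.
c = [0, 0, 0, 0, 0, -1.0]  # maximize eps  <=>  minimize -eps

A_ub = [
    [-1,  1,  0,  0,  0, 1],   # a1 - a2 >= eps   (DM statement: 1 P 2)
    [-1,  0, -1,  0,  0, 0],   # a1 + a12       >= 0  (monotonicity, C = {2})
    [-1,  0,  0, -1,  0, 0],   # a1 + a_{1|2}^+ >= 0  (monotonicity, D = {2})
    [ 0, -1, -1,  0,  0, 0],   # a2 + a12       >= 0
    [ 0, -1,  0,  0, -1, 0],   # a2 + a_{2|1}^+ >= 0
]
b_ub = [0, 0, 0, 0, 0]

A_eq = [[1, 1, 1, 0, 0, 0]]    # boundary: a1 + a2 + a12 = 1
b_eq = [1]

# a_j >= 0, a_{j|k}^+ <= 0, eps in [0, 1].
bounds = [(0, None), (0, None), (None, None),
          (None, 0), (None, 0), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
eps_star = res.x[-1]
print("feasible:", res.success, " eps* =", eps_star)
```

In this toy instance the DM's information is representable exactly when $\varepsilon^{*}>0$, mirroring the feasibility test stated above.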
\subsection{Restoring PROMETHEE} The condition which allows one to restore the classical PROMETHEE methods is the following: \begin{enumerate} \item $\forall j, k \in {\cal{J}}, \; \; a_{jk} = a^{+}_{j \vert k} = a^{-}_{j \vert k} = 0$. \end{enumerate}
\noindent If Condition 1. is not satisfied and the following condition holds \begin{enumerate}
\item[2.] $\forall j, k \in {\cal{J}}, a^{+}_{j \vert k} = a^{-}_{j \vert k} = 0$, \end{enumerate} \noindent then the comprehensive preference of $a$ over $b$ is calculated as the difference between the Choquet integral of the positive preferences and the Choquet integral of the negative preferences, with a common capacity $\mu$ on ${\cal J}$ for both, i.e. there exists $\mu: 2^{\cal J}\rightarrow[0,1]$, with $\mu(\emptyset)=0,$ $\mu({\cal J})=1,$ and $\mu(A)\leq\mu(B)$ for all $A\subseteq B\subseteq{\cal J}$, such that $$Ch^{B}(P^{B}(a, b), \hat{\mu})=\int_0^1\mu(\{j\in{\cal J}:P_{j}^{B}(a,b)>t\})dt-\int_0^1 \mu(\{j\in{\cal J}:P_{j}^{B}(a,b)<-t\})dt.$$ We shall call this type of aggregation of preferences the symmetric Choquet integral PROMETHEE method.\\ If neither 1. nor 2. is satisfied, but the following condition holds \begin{enumerate}
\item[3.] $\forall j, k \in {\cal{J}}, a^{+}_{j \vert k} = a^{-}_{k \vert j}$, \end{enumerate} \noindent then we have the Bipolar PROMETHEE methods.\\
\subsection{A constructive learning preference information elicitation process}\label{constructive process}
The previous Conditions $1.$-$3.$ suggest a proper way to use the linear programming model in order to assess the interactive bipolar criteria coefficients. Indeed, it is advisable to first try to elicit weights concordant with the classical PROMETHEE method. If this is not possible, one can consider the symmetric Choquet integral PROMETHEE method, which aggregates positive and negative preferences with a common capacity. If in this way we are still not able to represent the DM's preferences, then we can resort to the more sophisticated aggregation procedure of the bipolar PROMETHEE method. This progression from the simplest to the most sophisticated model can be outlined in a four-step procedure as follows:
\begin{enumerate} \item Solve the linear programming problem
\begin{equation}\label{First_PROB} \begin{array}{l} \;\;\mbox{Max}\; \varepsilon=\varepsilon_{1} \\[2mm] \left. \begin{array}{ll}
E^{A^{R}}\\
a_{jk} = a^{+}_{j \vert k} = a^{-}_{j \vert k} = 0,\;\;\forall j, k \in {\cal{J}} \end{array} \right\}E_{1} \end{array} \end{equation}
adding to $E^{A^{R}}$ the constraints corresponding to the previous Condition $1$. If $E_{1}$ is feasible and $\varepsilon_{1} > 0$, then the obtained preferential parameters are concordant with the classical PROMETHEE method. Otherwise, \item Solve the linear programming problem
\begin{equation}\label{Second_PROB} \begin{array}{l} \;\;\mbox{Max}\; \varepsilon=\varepsilon_{2} \\[2mm] \left. \begin{array}{ll}
E^{A^{R}}\\
a^{+}_{j \vert k} = a^{-}_{j \vert k} = 0, \;\;\forall j, k \in {\cal{J}} \end{array} \right\}E_{2} \end{array} \end{equation}
adding to $E^{A^{R}}$ the constraints corresponding to the previous Condition $2$. If $E_{2}$ is feasible and $\varepsilon_2 > 0$, then the information is concordant with the symmetric Choquet integral PROMETHEE method, having a unique capacity for the positive and the negative part. Otherwise,
\item Solve the linear programming problem
\begin{equation}\label{third_PROB} \begin{array}{l} \;\;\mbox{Max}\; \varepsilon=\varepsilon_{3} \\[2mm] \left. \begin{array}{ll}
E^{A^{R}} \end{array} \right\}E_{3} \end{array} \end{equation}
\noindent If $E_{3}$ is feasible and $\varepsilon_3 > 0$, then the information is concordant with the bipolar PROMETHEE method. Otherwise, \item We can try to help the DM by providing some information about inconsistent judgments, where appropriate, by using a constructive learning procedure similar to that proposed in \cite{MousseauFiDiGoCl03}. In fact, some of the constraints of the linear programming model cannot be relaxed, namely the basic properties of the model (symmetry, boundary and monotonicity conditions). The remaining constraints can lead to an infeasible linear system, which means that the DM provided inconsistent information about her/his preferences. The methods proposed in \cite{MousseauFiDiGoCl03} can then be used in this context, providing the DM with some useful information about inconsistent judgments. \end{enumerate}
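Steps 1--3 above can be mirrored in code: solve the same linear program repeatedly, each time adding the extra equality constraints of Conditions 1. and 2., and stop at the first stage with $\varepsilon^{*}>0$. The sketch below uses a tiny two-criterion instance of our own (hypothetical data; the synergy statement $a_{12}\geq\varepsilon$ deliberately rules out the classical PROMETHEE stage):

```python
from scipy.optimize import linprog

# Variables: [a1, a2, a12, ao12, ao21, eps]  (toy two-criterion instance)
c = [0, 0, 0, 0, 0, -1.0]                           # maximize eps
A_ub = [
    [-1,  1,  0,  0,  0, 1],                        # a1 - a2 >= eps  (1 P 2)
    [ 0,  0, -1,  0,  0, 1],                        # a12     >= eps  (synergy)
    [-1,  0, -1,  0,  0, 0], [-1, 0, 0, -1, 0, 0],  # monotonicity for j = 1
    [ 0, -1, -1,  0,  0, 0], [ 0, -1, 0, 0, -1, 0], # monotonicity for j = 2
]
b_ub = [0] * len(A_ub)
A_eq0 = [[1, 1, 1, 0, 0, 0]]                        # boundary a1 + a2 + a12 = 1
b_eq0 = [1]
bounds = [(0, None), (0, None), (None, None), (None, 0), (None, 0), (0, 1)]

def solve(extra_eq):
    """Max eps under the base system plus equalities fixing parameters to 0."""
    A_eq = A_eq0 + [row for row, _ in extra_eq]
    b_eq = b_eq0 + [rhs for _, rhs in extra_eq]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[-1] if res.success else float("-inf")

def zero(i):
    """Equality constraint: variable i equals 0."""
    return ([1 if j == i else 0 for j in range(6)], 0.0)

stages = [
    ("classical PROMETHEE",         [zero(2), zero(3), zero(4)]),  # Condition 1
    ("symmetric Choquet PROMETHEE", [zero(3), zero(4)]),           # Condition 2
    ("bipolar PROMETHEE",           []),                           # full model
]
for name, extra in stages:
    eps = solve(extra)
    if eps > 1e-9:
        print("representable with:", name, " eps* =", eps)
        break
```

Here the first stage yields $\varepsilon^{*}=0$ (the synergy statement contradicts $a_{12}=0$), so the procedure falls back to the symmetric Choquet integral PROMETHEE stage.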
\section{ROR and Bipolar PROMETHEE methods} In the above sections we dealt with the problem of finding a bicapacity restoring the preference information provided by the DM in the case where multiple criteria evaluations are aggregated by the bipolar PROMETHEE method. In general, there could exist more than one model (here the model is a bicapacity, but in other contexts it could be a utility function or an outranking relation) compatible with the preference information provided by the DM on the training set of alternatives. Each compatible model restores the preference information provided by the DM, but two different compatible models could compare the alternatives not provided as examples by the DM in different ways. For this reason, the choice of one of these compatible models could be considered arbitrary. In order to take into account not only one but the whole set of models compatible with the preference information provided by the DM, we consider the ROR \cite{greco2010robust}. This approach considers the whole set of compatible models and builds two preference relations: the weak \textit{necessary} preference relation, for which alternative $a$ is necessarily weakly preferred to alternative $b$ (written $a\succsim^{N}b$) if $a$ is at least as good as $b$ for all compatible models, and the weak \textit{possible} preference relation, for which alternative $a$ is possibly weakly preferred to alternative $b$ (written $a\succsim^{P}b$) if $a$ is at least as good as $b$ for at least one compatible model.
\\ Considering the bipolar flows (\ref{pos_flow})-(\ref{net_flow}) and the comprehensive Choquet integral in equation (\ref{bipolarpref}), given the alternatives $a,b\in A$, we say that $a$ outranks $b$ (or $a$ is at least as good as $b$): \begin{itemize} \item locally, if $Ch^{B}(P^{B}(a, b), \hat{\mu})\geq 0$; \item globally and considering the bipolar PROMETHEE I method, if $\Phi^{B+}(a)\geq\Phi^{B+}(b)$, $\Phi^{B-}(a)\leq\Phi^{B-}(b)$; \item globally and considering the bipolar PROMETHEE II method, if $\Phi^{B}(a)\geq\Phi^{B}(b).$ \end{itemize}
\noindent To check if $a$ is necessarily preferred to $b$, we check whether it is possible that $a$ does not outrank $b$. Locally, this means that there exists a compatible bicapacity $\hat\mu$ such that $Ch^{B}(P^{B}(a,b),\hat\mu)<0$; globally, considering the bipolar PROMETHEE I method this means that $\Phi^{B+}(a)<\Phi^{B+}(b)$ or $\Phi^{B-}(a)>\Phi^{B-}(b)$, while considering the bipolar PROMETHEE II method this means that $\Phi^B(a)<\Phi^B(b)$. Given the following set of constraints,
$$ \label{SN} \left. \begin{array}{l}
E^{A^R}\\[2mm]
\mbox{if one verifies the truth of global outranking: }\\[2mm]
\mbox{\quad if exploited in the way of the bipolar PROMETHEE II method, then: }\\[2mm] \quad \quad \Phi^B(a) + \varepsilon \leq \Phi^B(b)\\[2mm]
\mbox{\quad if exploited in the way of the bipolar PROMETHEE I method, then: }\\[2mm]
\mbox {\quad \quad } \Phi^{B+}(a) + \varepsilon \leq \Phi^{B+}(b) + 2M_1 \mbox{ \ and \ } \Phi^{B-}(a) + 2M_2 \geq \Phi^{B-}(b) + \varepsilon \\[2mm]
\mbox { \quad \quad } \mbox{ where } M_i \in \{0,1\}, i=1,2, \mbox{ and } \sum_{i=1}^2 M_i \leq 1 \\[2mm]
\mbox{if one verifies the truth of local outranking: }\\[2mm]
\mbox { \quad \quad } Ch^{B}(P^{B}(a,b),\hat\mu)+\varepsilon \leq 0 \end{array} \right\} E^{N}(a,b) $$ \noindent we say that $a$ is weakly necessarily preferred to $b$ if $E^{N}(a,b)$ is infeasible or $\varepsilon^{*}\leq 0$ where $\varepsilon^{*}=\max\varepsilon$ s.t. $E^{N}(a,b)$.
\noindent To check if $a$ is possibly preferred to $b$, we check whether $a$ outranks $b$ for at least one compatible bicapacity $\hat\mu.$ Locally, this means that there exists a bicapacity $\hat\mu$ such that $Ch^{B}(P^{B}(a,b),\hat\mu)\geq 0$; globally, considering the bipolar PROMETHEE I method this means that $\Phi^{B+}(a)\geq\Phi^{B+}(b)$ and $\Phi^{B-}(a)\leq\Phi^{B-}(b)$, while considering the bipolar PROMETHEE II method this means that $\Phi^{B}(a)\geq\Phi^{B}(b)$. Given the following set of constraints,
$$ \label{SP} \left. \begin{array}{l}
E^{A^R}\\[2mm]
\mbox{if one verifies the truth of global outranking: }\\[2mm]
\mbox{\quad if exploited in the way of the bipolar PROMETHEE II method, then: }\\[2mm] \quad \quad \Phi^B(a) \geq \Phi^B(b)\\[2mm]
\mbox{\quad if exploited in the way of the bipolar PROMETHEE I method, then: }\\[2mm]
\mbox {\quad \quad } \Phi^{B+}(a) \geq \Phi^{B+}(b) \mbox{ \ and \ } \Phi^{B-}(a) \leq \Phi^{B-}(b) \\[2mm]
\mbox{if one verifies the truth of local outranking: }\\[2mm]
\mbox { \quad \quad } Ch^{B}(P^{B}(a,b),\hat\mu) \geq 0 \end{array} \right\} E^{P}(a,b) $$ \noindent we say that $a$ is weakly possibly preferred to $b$ if $E^{P}(a,b)$ is feasible and $\varepsilon^{*}> 0$ where $\varepsilon^{*}=\max\varepsilon$ s.t. $E^{P}(a,b)$.
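The necessary/possible logic above can be illustrated in a deliberately simplified setting: a small finite sample of weight vectors stands in for the whole family of compatible bicapacities, and plain weighted sums stand in for the bipolar Choquet integral. (This toy model is ours; the paper's actual procedure solves the max-$\varepsilon$ programs $E^{N}(a,b)$ and $E^{P}(a,b)$ over all compatible bicapacities.)

```python
# Toy illustration (ours): "necessary" = outranks for EVERY compatible
# parameter set; "possible" = outranks for AT LEAST ONE.

def net_flow(scores, weights):
    return sum(w * s for w, s in zip(weights, scores))

def necessarily_preferred(a, b, compatible_weights):
    return all(net_flow(a, w) >= net_flow(b, w) for w in compatible_weights)

def possibly_preferred(a, b, compatible_weights):
    return any(net_flow(a, w) >= net_flow(b, w) for w in compatible_weights)

# Three hypothetical compatible weight vectors over (M, Ph, L):
weights = [(0.5, 0.3, 0.2), (0.2, 0.3, 0.5), (1/3, 1/3, 1/3)]
a, b = (16, 16, 16), (15, 13, 18)   # grades of s_1 and s_2 below

print(necessarily_preferred(a, b, weights))  # True
print(possibly_preferred(b, a, weights))     # False
```

Necessary preference implies possible preference, but not conversely: enlarging the sample of compatible parameter sets can only destroy necessary preferences and create possible ones.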
\section{Didactic Example}
Inspired by an example from the literature \cite{Grabisch1996}, let us consider the problem of evaluating High School students according to their grades in Mathematics, Physics and Literature. In the following we suppose that the Dean is the DM, while we play the role of the analyst, helping and supporting the DM in his/her evaluations. \\ The Dean thinks that the scientific subjects (Mathematics and Physics) are more important than Literature. However, when students $a$ and $b$ are compared, if $a$ is better than $b$ both at Mathematics and Physics but $a$ is much worse than $b$ at Literature, then the Dean has some doubts about the comprehensive preference of $a$ over $b$.\\ Mathematics and Physics are in some sense \textit{redundant} with respect to the comparison of students, since usually students who are good at Mathematics are also good at Physics. As a consequence, if $a$ is better than $b$ at Mathematics, the comprehensive preference of the student $a$ over the student $b$ is stronger if $a$ is better than $b$ at Literature than if $a$ is better than $b$ at Physics. \\ Let us consider the students whose grades (belonging to the range $\left[0,20\right]$) are given in Table \ref{Evaluations}, and the following formulation of the preference of $a$ over $b$ with respect to each criterion $g_j$, for $j = M$ (Mathematics), $Ph$ (Physics), $L$ (Literature).
\begin{table}[htbp] \begin{center} \begin{tabular}{cccc} \hline \mbox{Students} & \mbox{Mathematics} & \mbox{Physics} & \mbox{Literature}\\ \hline $s_1$ & 16 & 16 & 16 \\ $s_2$ & 15 & 13 & 18 \\ $s_3$ & 19 & 18 & 14 \\ $s_4$ & 18 & 16 & 15 \\ $s_5$ & 15 & 16 & 17 \\ $s_6$ & 13 & 13 & 19 \\ $s_7$ & 17 & 19 & 15 \\ $s_8$ & 15 & 17 & 16 \\ \hline \end{tabular} \end{center} \caption{Evaluations of the students}\label{Evaluations} \end{table}
$$ P_j(a,b)= \left\{ \begin{array}{ccc} 0 & \mbox{if} & g_{j}(b)\geq g_j(a) \\ (g_j(a)-g_j(b))/4 & \mbox{if} & 0<g_j(a)-g_j(b)\leq 4 \\ 1 & & \mbox{otherwise} \\ \end{array} \right. $$
From the values of the partial preferences $P_j(a,b)$, we obtain the positive and the negative partial preferences $P_{j}^{B}(a,b)$ with respect to each criterion $g_j$, for $j=M,Ph,L$ using the definition (\ref{equat}). \noindent Thus, to each pair of students $(s_{i},s_{j})$ is associated a vector of three elements: \\ $P^{B}(s_{i},s_{j})=\left[P^{B}_{M}(s_{i},s_{j}),P^{B}_{Ph}(s_{i},s_{j}),P^{B}_{L}(s_{i},s_{j})\right]$; for example, to the pair of students $(s_{1},s_{2})$ is associated the vector $P^{B}(s_{1},s_{2})=\left[0.25,0.75,-0.5\right]$.
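The partial preferences and the bipolar preference vector can be reproduced with a few lines of code. Here we assume the bipolar partial preference is $P^{B}_j(a,b)=P_j(a,b)-P_j(b,a)$, which is consistent with the example value $P^{B}(s_1,s_2)=[0.25,0.75,-0.5]$:

```python
# Partial preference P_j(a,b) as defined above, and the bipolar partial
# preference, assumed here to be P^B_j(a,b) = P_j(a,b) - P_j(b,a).

def P(ga, gb):
    d = ga - gb
    if d <= 0:        # g_j(b) >= g_j(a)
        return 0.0
    elif d <= 4:      # 0 < g_j(a) - g_j(b) <= 4
        return d / 4
    else:
        return 1.0

def P_B(a, b):
    return [P(ga, gb) - P(gb, ga) for ga, gb in zip(a, b)]

s1 = (16, 16, 16)   # Mathematics, Physics, Literature
s2 = (15, 13, 18)
print(P_B(s1, s2))  # [0.25, 0.75, -0.5]
```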
Let us suppose that the Dean provides the following information regarding some pairs of students: \begin{itemize} \item student $s_1$ is preferred to student $s_2$ more than student $s_3$ is preferred to student $s_4$, \item student $s_7$ is preferred to student $s_8$ more than student $s_5$ is preferred to student $s_6$. \end{itemize} As explained in section \ref{assessing}, these two pieces of information are translated into the constraints: $$Ch^{B}(P^{B}(s_1,s_2),\hat{\mu})>Ch^{B}(P^{B}(s_3,s_4),\hat{\mu}),\;\; \mbox{and} \;\; Ch^{B}(P^{B}(s_7,s_8),\hat{\mu})>Ch^{B}(P^{B}(s_5,s_6),\hat{\mu})$$
\noindent Following the procedure described in section \ref{constructive process}, we first check whether the classical PROMETHEE method and the symmetric Choquet integral PROMETHEE method are able to restore the preference information provided by the Dean; solving the optimization problems \ref{First_PROB} and \ref{Second_PROB}, we get $\varepsilon_{1}<0$ and $\varepsilon_{2}<0$, and therefore neither the classical PROMETHEE method nor the symmetric Choquet integral PROMETHEE method is able to explain the preference information provided by the Dean. Solving the optimization problem \ref{third_PROB}, we get $\varepsilon_3>0$; this means that the information provided by the Dean can be explained by the bipolar PROMETHEE method.\\ In order to better understand the problem at hand, we suggested that the Dean use the ROR applied to the bipolar PROMETHEE method, as discussed in the previous section. Using the first piece of preference information, we get the necessary and possible preference relations shown in Tables \ref{first_necessary} and \ref{poss_first}, at the local level and considering the bipolar PROMETHEE II and PROMETHEE I methods. In Table \ref{nec_local}, the value 1 in position $(i,j)$ means that $s_{i}$ is necessarily locally preferred to $s_{j}$, while the value 0 means that (s)he is not. The values 1 and 0 in Tables \ref{nec_PROM_II} and \ref{nec_PROM_I} have analogous meanings.
\begin{table}[!h] \begin{center} \caption{Necessary preference relations after the first piece of preference information\label{first_necessary}} \subtable[Local\label{nec_local}]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\
$\mathbf{s_4}$ & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE II\label{nec_PROM_II}]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_3}$ & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_4}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE I\label{nec_PROM_I}]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_3}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_4}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_8}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} } } \end{center} \end{table}
\begin{table}[!h] \begin{center} \caption{Possible preference relations after the first piece of preference information\label{poss_first}} \subtable[Local]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\
$\mathbf{s_5}$ & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE II]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_2}$ & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\
$\mathbf{s_5}$ & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_6}$ & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE I]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 \\
$\mathbf{s_5}$ & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_6}$ & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ \hline \end{tabular} } } \end{center} \end{table}
\noindent Looking at Table \ref{first_necessary}, we underline that $s_{7}$, $s_3$ and $s_{5}$ are surely the best among the eight students considered. In fact, $s_{7}$ is necessarily preferred to five of the other seven students both locally and considering the bipolar PROMETHEE II method and, at the same time, (s)he is the only student necessarily preferred to some other student using the bipolar PROMETHEE I method. $s_{3}$ is necessarily preferred to four of the other seven students locally, and (s)he is necessarily preferred to $s_4$ considering the bipolar PROMETHEE II method; at the same time, (s)he is locally possibly preferred to $s_7$ (see Table \ref{poss_first}). $s_{5}$ is necessarily preferred to $s_2$ and $s_{6}$ considering the bipolar PROMETHEE II method. In order to gain more insight into the problem at hand, we suggest that the Dean provide other information (s)he is sure about. For this reason, the Dean states that, locally, $s_{2}$ is preferred to $s_6$ and $s_{8}$ is preferred to $s_{1}$.
\begin{table}[!h] \begin{center} \caption{Necessary preference relations after the second piece of preference information\label{nec_second}} \subtable[Local]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & \cellcolor{yellow}{1} & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & \cellcolor{yellow}{1} & 1 & 0 & \cellcolor{yellow}{1} \\
$\mathbf{s_4}$ & \cellcolor{yellow}{1} & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & \cellcolor{yellow}{1} & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & \cellcolor{yellow}{1} & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE II]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_3}$ & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_4}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & \cellcolor{yellow}{1} & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE I]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_3}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_4}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_5}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 0 & \cellcolor{yellow}{1} & 0 & 0 & 0 & \cellcolor{yellow}{1} \\
$\mathbf{s_8}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} } } \end{center} \end{table}
\begin{table}[!h] \begin{center} \caption{Possible preference relations after the second piece of preference information\label{pos_second}} \subtable[Local]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 0 & \cellcolor{yellow}{0} & 1 & 1 & 0 & \cellcolor{yellow}{0} \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & \cellcolor{yellow}{0} & 1 \\
$\mathbf{s_5}$ & 1 & 1 & \cellcolor{yellow}{0} & 1 & 0 & 1 & 0 & 0 \\
$\mathbf{s_6}$ & 0 & \cellcolor{yellow}{0} & 0 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & \cellcolor{yellow}{0} & 1 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE II]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_2}$ & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & \cellcolor{yellow}{0} & 1 \\
$\mathbf{s_5}$ & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_6}$ & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 1 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\
\hline \end{tabular} } } \subtable[PROMETHEE I]{ \resizebox{0.25\textwidth}{!}{ \begin{tabular}{ccccccccc} \hline
& $\mathbf{s_1}$ & $\mathbf{s_2}$ & $\mathbf{s_3}$ & $\mathbf{s_4}$ & $\mathbf{s_5}$ & $\mathbf{s_6}$ & $\mathbf{s_7}$ & $\mathbf{s_8}$ \\ \hline
$\mathbf{s_1}$ & 0 & 1 & \cellcolor{yellow}{0} & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_2}$ & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \\
$\mathbf{s_3}$ & 1 & 1 & 0 & 1 & 1 & 1 & 1 & 1 \\
$\mathbf{s_4}$ & 1 & 1 & 0 & 0 & 1 & 1 & \cellcolor{yellow}{0} & 1 \\
$\mathbf{s_5}$ & 1 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \\
$\mathbf{s_6}$ & \cellcolor{yellow}{0} & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
$\mathbf{s_7}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 1 \\
$\mathbf{s_8}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ \hline \end{tabular} } } \end{center} \end{table}
\noindent Translating this preference information into the constraints $Ch^{B}(P^{B}(s_2,s_6),\hat\mu)>0$ and $Ch^{B}(P^{B}(s_8,s_1),\hat\mu)>0$, and computing again the necessary and possible preference relations locally and considering both bipolar PROMETHEE methods, we get the results shown in Tables \ref{nec_second} and \ref{pos_second}. In these tables, yellow cells correspond to new information obtained from the second piece of information provided by the Dean. In particular, in Table \ref{nec_second} the cell corresponding to the pair of students $(s_i,s_j)$ is colored yellow if $s_i$ was not necessarily preferred to $s_j$ after the first iteration, but $s_{i}$ is necessarily preferred to $s_{j}$ after the second iteration; in Table \ref{pos_second}, the cell corresponding to the pair of students $(s_i,s_j)$ is colored yellow if $s_i$ was possibly preferred to $s_{j}$ after the first iteration but $s_{i}$ is no longer possibly preferred to $s_j$ after the second iteration. Looking at Tables \ref{nec_second} and \ref{pos_second}, the Dean is led to consider $s_7$ as the best student. In fact, even though $s_7$ and $s_3$ are each locally necessarily preferred to six of the other students, $s_{7}$ is still the only one necessarily preferred to someone else considering the bipolar PROMETHEE I method. Besides, looking at Table \ref{pos_second}, we see that $s_{3}$ is the only student possibly preferred to $s_{7}$ locally and with respect to PROMETHEE I and PROMETHEE II but, at the same time, everyone except $s_4$ is possibly preferred to $s_{3}$ considering the bipolar PROMETHEE II method, while four students ($s_5$, $s_6$, $s_7$ and $s_8$) are possibly preferred to $s_{3}$ with respect to the bipolar PROMETHEE I method.
\section{Conclusions} In this paper we proposed a generalization of the classical PROMETHEE methods. A basic assumption of the PROMETHEE methods is the independence of the criteria, which implies that no interaction between criteria is taken into account. In this paper we developed a methodology that permits taking into account the interaction between criteria (synergy, redundancy and antagonism effects) within the PROMETHEE method by using the bipolar Choquet integral. In this way we obtained a new method called the bipolar PROMETHEE method.\\ The Decision Maker (DM) can directly provide the preferential parameters of the method; however, due to their great number, it is advisable to use an indirect procedure to elicit the preferential parameters from preference information provided by the DM. \\ Since, in general, there is more than one set of parameters compatible with this preference information, we proposed to use Robust Ordinal Regression (ROR) to consider the whole family of compatible sets of preferential parameters. We believe that the proposed methodology can be successfully applied in many real world problems where interacting criteria have to be considered; besides, in a companion paper, we propose to apply the SMAA methodology to the classical and to the bipolar PROMETHEE methods (for a survey on SMAA methods see \cite{tervonen_figueira}).
\end{document}
\begin{document}
\baselineskip.5cm \title {Exotic group actions on simply connected smooth 4-manifolds} \author[Ronald Fintushel]{Ronald Fintushel} \address{Department of Mathematics, Michigan State University \newline \hspace*{.375in}East Lansing, Michigan 48824} \email{\rm{[email protected]}} \thanks{R.F. was partially supported by NSF Grant DMS-0704091, R.J.S. by NSF Grant DMS-0505080, and N.S. by NSF RTG Grant DMS-0353717 and by DMS-0704091.} \author[Ronald J. Stern]{Ronald J. Stern} \address{Department of Mathematics, University of California \newline \hspace*{.375in}Irvine, California 92697} \email{\rm{[email protected]}} \author[Nathan Sunukjian]{Nathan Sunukjian} \address{Department of Mathematics, Michigan State University \newline \hspace*{.375in}East Lansing, Michigan 48824} \email{\rm{[email protected]}}
\dedicatory{Dedicated to Jos\'{e} Maria Montesinos on the occasion of his 65th birthday.}
\begin{abstract} We produce infinite families of exotic actions of finite cyclic groups on simply connected smooth $4$-manifolds with nontrivial Seiberg-Witten invariants.\end{abstract} \maketitle
\section{Introduction\label{Intro}}
The goal of this paper is to exhibit infinite families of exotic actions of cyclic groups on many simply connected smooth $4$-manifolds. By {\it exotic actions} we mean smooth actions on a $4$-manifold $X$ that are equivariantly homeomorphic but not equivariantly diffeomorphic. Exotic orientation-reversing free involutions on $S^4$ were first constructed around 1980 by the first two authors \cite{invol} and can also be constructed using examples of Cappell-Shaneson \cite{CS} and later work showing that the covers of many of their manifolds are standard \cite{G1,A,G2}. In \cite{U} Ue shows that for any nontrivial finite group $G$ there is a $4$-manifold that has infinitely many free $G$-actions such that their orbit spaces are homeomorphic but mutually nondiffeomorphic. The manifolds which support Ue's exotic actions are of the form $\SS\# Z$ with $b^+(Z)>0$, and hence their Seiberg-Witten invariants vanish.
In contrast, we shall produce infinite families of finite cyclic group actions on simply connected manifolds with nontrivial Seiberg-Witten invariants. Our theorem is:
\begin{thm}\label{main} Let $Y$ be a simply connected $4$-manifold with $b^+\ge1$ containing an embedded surface $\Sig$ of genus $g\ge 1$ of nonnegative self-intersection. Suppose that $\pi_1(Y\- \Sig)=\Z_d$ and that the pair $(Y,\Sig)$ has a nontrivial relative Seiberg-Witten invariant. Suppose also that $\Sig$ contains a nonseparating loop which bounds an embedded $2$-disk whose interior lies in $Y\- \Sig$. Let $X$ be the (simply connected) $d$-fold cover of $Y$ branched over $\Sig$. Then $X$ admits an infinite family of smoothly distinct but topologically equivalent actions of $\Z_d$. \end{thm}
As far as we know, these are the first examples of exotic orientation-preserving actions of finite cyclic groups on $4$-manifolds with nontrivial Seiberg-Witten invariants. Most of the manifolds which arise in practice are irreducible, and, in fact, if $X$ is spin with a nontrivial Seiberg-Witten invariant, then $X$ must be irreducible. The construction of the group actions that we describe is obtained by altering branch set data, and it has its origins in papers of Giffen and Gordon \cite{Gi,Go}.
As a simple example of our theorem, let $\Sig$ be an embedded degree $d$ curve in $\CP$. Its complement has $\pi_1=\Z_d$ and the corresponding $d$-fold cyclic branched cover is the degree $d$ hypersurface $V_d$ in $\mathbf{CP}{}^{3}$. We can choose $\Sig$ so that it lives in a pencil and, for $d>2$, has a vanishing cycle which gives us a loop on $\Sig$ which bounds an embedded disk in its complement. That $(\CP,\Sig)$ has a nontrivial relative Seiberg-Witten invariant follows from gluing theory \cite{MST,KM}: After blowing up $d^2$ times so that the proper transform of our curve has self-intersection $0$, one can take a fiber sum with an algebraic surface containing an embedded curve of self-intersection $0$ and of the same genus as $\Sig$ to get a symplectic manifold. (See \cite{surfaces}.) Theorem~\ref{main} implies that $V_d$ admits an infinite family of topologically equivalent but smoothly distinct actions of $\Z_d$. For example, we get such a family of $\Z_4$-actions on the quartic, which is diffeomorphic to the $K3$ surface. These examples are also discussed in the paper of H.-J. Kim \cite{K}, where, although it is not proved that the branched covers are unchanged by these operations, it is observed that the Seiberg-Witten invariants remain the same, even without the hypothesis of the theorem that there be a nonseparating loop which bounds an embedded $2$-disk.
Similarly, one can obtain an infinite family of exotic involutions on the $K3$ surface by realizing it as the double branched cover of the sextic. For this application one needs to restate our theorem to apply to simply connected $d'$-fold branched covers where $d'$ divides $d$. This extension is, more or less, automatic, and we will not comment on it further. One can obtain an infinite family of $\Z_3$-actions on the $K3$ surface as follows. Consider a smooth embedded curve $\Sig$ in $\SS$ representing $3([S^2\x \{\text{pt}\}]+[ \{\text{pt}\}\x S^2])$; for example, view $\SS$ as the ruled surface $F_0$ and take $\Sig$ to be a smooth representative of the homology class of $3$ times the sum of a section and a fiber. Then $\Sig$ has genus $4$ and $\pi_1(\SS\- N_\Sig)$ is abelian by the generalized Zariski Conjecture \cite{N}, hence $\pi_1(\SS\- N_\Sig)=\Z_3$. The gluing argument above implies that the relative Seiberg-Witten invariant of $\SS\- N_\Sig$ is nonzero, so Theorem~\ref{main} applies. In fact, using the formulas \[ e(X)=d\,e(Y)-(d-1)\,e(\Sig),\ \ \ \ \sign(X)= d\,\sign(Y)-\frac{(d-1)(d+1)}{3d}\Sig\cdot\Sig\] for the Euler characteristic and signature of a cyclic branched cover, one can show, via a simple case-by-case analysis, that the only finite cyclic groups which can act on $K3$ with a smooth connected $2$-dimensional fixed point set are $\Z_2$, $\Z_3$, and $\Z_4$, and we have seen that there are infinite families of topologically equivalent but smoothly distinct examples in all these cases.
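As a sanity check (ours, not from the paper), the branched-cover formulas above do return the $K3$ values $e=24$ and $\sign=-16$ in all three realizations mentioned in this section: the double cover of $\CP$ branched over a sextic ($g=10$, $\Sig\cdot\Sig=36$), the triple cover of $S^2\x S^2$ branched over the genus-$4$ curve ($\Sig\cdot\Sig=18$), and the $4$-fold cover of $\CP$ branched over a quartic ($g=3$, $\Sig\cdot\Sig=16$):

```python
# Euler characteristic and signature of a d-fold cyclic branched cover:
#   e(X)    = d e(Y) - (d-1) e(Sigma)
#   sign(X) = d sign(Y) - ((d-1)(d+1)/(3d)) Sigma.Sigma
from fractions import Fraction

def branched_cover(d, e_Y, sign_Y, genus, self_int):
    e_S = 2 - 2 * genus
    e_X = d * e_Y - (d - 1) * e_S
    sign_X = d * sign_Y - Fraction((d - 1) * (d + 1), 3 * d) * self_int
    return e_X, sign_X

# (d, e(Y), sign(Y), g(Sigma), Sigma.Sigma)
cases = [
    (2, 3, 1, 10, 36),  # sextic in CP^2
    (3, 4, 0, 4, 18),   # 3(S + F) in S^2 x S^2
    (4, 3, 1, 3, 16),   # quartic in CP^2
]
for c in cases:
    print(branched_cover(*c))  # each case gives e = 24, sign = -16
```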
\section{Rim surgery}
We first remind the reader of the definition of knot surgery. If $Y$ is an oriented smooth $4$-manifold containing an embedded torus $T$ of self-intersection $0$ and $K$ is a knot in $S^3$, then {\em knot surgery} on $T$ is the result of replacing a tubular neighborhood $T\x D^2$ of $T$ with $S^1$ times the exterior $S^3\- N_K$ of the knot \cite{KL4M}: \[ Y_K = \left( Y\- (T\x D^2)\right) \cup \left(S^1\x(S^3\- N_K)\right) \] where $\bd D^2$ is identified with a longitude $\ell_K$ of $K$. This description doesn't necessarily determine $Y_K$ up to diffeomorphism; however, when $T$ represents a non-trivial homology class in $Y$ and under reasonable hypotheses, all manifolds obtained from the same $(Y,T)$ and $K\C S^3$ will have the same Seiberg-Witten invariant: $\sw_{Y_K}=\sw_Y\cdot\DD_K(t^2)$ where $t$ corresponds to $T$ and $\DD_K$ is the symmetrized Alexander polynomial of $K$.
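At the level of bookkeeping, the knot surgery formula is just multiplication of Laurent polynomials. The following sketch makes the substitution $t\mapsto t^2$ and the product explicit, representing Laurent polynomials as exponent-to-coefficient dictionaries; the trefoil's symmetrized Alexander polynomial $t-1+t^{-1}$ is a standard fact, while the sample $\sw_Y=1$ is a made-up coefficient for illustration:

```python
# Laurent polynomials as {exponent: coefficient} dicts.

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def substitute_t2(p):
    """Delta_K(t) -> Delta_K(t^2): double every exponent."""
    return {2 * e: c for e, c in p.items()}

trefoil = {1: 1, 0: -1, -1: 1}   # Delta(t) = t - 1 + t^{-1}
sw_Y = {0: 1}                    # hypothetical SW_Y = 1

sw_YK = mul(sw_Y, substitute_t2(trefoil))
print(sw_YK)   # t^2 - 1 + t^{-2}
```

Since distinct Alexander polynomials give distinct products, knots with different $\DD_K$ produce manifolds $Y_K$ distinguished by their Seiberg-Witten invariants.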
When $\Sig$ is a smoothly embedded genus $g>0$ surface in $Y$, a relative version of knot surgery called {\em rim surgery} \cite{surfaces} can be applied to alter the embedding type of $\Sig$. If $C$ is a homologically nontrivial loop in $\Sig$, then the preimage of $C$ under the projection of the normal circle bundle of $\Sig$ is called a {\em rim torus}. ``Rim surgery'' is the result of knot surgery on a rim torus. Note that a rim torus represents a trivial homology class in $Y$ and a nontrivial homology class in $Y\setminus \Sigma$. Rim surgery replaces $C\x \bd D^2_\nu \x D^2_\delta$ with $C\x (S^3\- N_K)$ where $\bd D^2_\nu$ is the boundary circle of a normal disk to $\Sig$ and $N_K$ is a tubular neighborhood of $K$ in $S^3$. If we denote by $\delta$ the homology class of the boundary circle of a normal disk $D^2_\delta$ to the rim torus $C\x \bd D^2_\nu$, by $\nu$ the homology class of $\bd D^2_\nu$, and by $m_K$ and $\ell_K$ the homology classes of the meridian and longitude of $K$, then the rim surgery gluing is: \[ \psi: C\x \bd D^2_\nu \x D^2_\delta\to S^1\x \bd(S^3\- N_K)\] where $\psi_*([C])=[S^1]$, $\psi_*(\nu) =m_K$, and $\psi_*(\delta) =\ell_K$. In \cite{surfaces} it is explained that this is equivalent to: \[ (Y,\Sig_{K,C}) = (Y,\Sig) \- (C\x (I\x D^2_\nu, I\x \{ 0\}) )\cup (C\x (D^3, K')) \] where $(D^3, K') = (S^3, K)\- (D^3, D^1)$, the knot minus a standard ball pair. This construction depends on a framing of the restriction of the normal bundle of $\Sig$ to $C\x I$ in the sense that different choices of pushoffs of $C$ to the boundary of the normal bundle may give rise to different surfaces.
In \cite{surfaces,addendum} it is shown that if both $Y$ and $Y\- \Sig$ are simply connected then $(Y,\Sig)$ and $(Y,\Sig_{K,C})$ are homeomorphic pairs, but if the self-intersection of $\Sig$ is nonnegative and if $\DD_K(t)\not\equiv 1$ then there is no self-diffeomorphism of $Y$ which throws $\Sig_{K,C}$ onto $\Sig$ provided $(Y,\Sig)$ has a nontrivial relative Seiberg-Witten invariant. In fact, under the same hypotheses, the same is true for $(Y,\Sig_{K_1,C})$ and $(Y,\Sig_{K_2,C})$ provided that the Alexander polynomials $\DD_{K_1}$ and $\DD_{K_2}$ have different sets of nonzero coefficients. Perhaps the best way to understand this (at least in the case where the self-intersection of $\Sig$ is zero) is that rim surgery multiplies the relative Seiberg-Witten invariant in the monopole Floer homology group $H\!M(\Sig\x S^1)$ for an appropriate spin$^c$-structure by $\DD_K(t^2)$. Recently, Tom Mark \cite{M} has shown that the above result is true if the self-intersection number of $\Sig$ is greater than $2 -2g(\Sig)$. In \S4 we shall discuss this topic further.
Since a surface with a simply connected complement has no branched covers, the hypothesis that $Y\- \Sig$ is simply connected is not useful for the purpose of this paper. Kim and Ruberman \cite{K,KR} have generalized rim surgery in such a way that the condition $\pi_1(Y\- \Sig) = \Z_d$ ($d>1$) is preserved. For these purposes they used a {\em twist-rim surgery} \cite{KR} that we now describe. In rim surgery $C\x (I\x D^2_\nu, I\x \{ 0\}) \C (Y,\Sig)$ is replaced with $S^1\x (D^3, K')$. A key observation is that this last term occurs naturally in the process of spinning a knot. Given a knot $K$ in $S^3$, after removing a standard ball pair one obtains a knotted arc $K'\C D^3$. The corresponding spun 2-knot in $S^4 = (S^1\x D^3)/\{ (t,x)\sim (t',x) \}$ (for all\ $t,t'\in S^1$ and $x\in \bd D^3$) is $S_K=(S^1\x K')/\sim$. This spun $2$-sphere $S_{K}$ naturally determines another $2$-sphere $T_K = (S^1\x \bd D^3)/\sim$ in $S^{4}$. $T_K$ is an unknotted $2$-sphere in $S^{4}$ because it bounds the $3$-ball $\{\text{pt}\}\x D^3$. Since the 2-spheres $S_K$ and $T_K$ intersect transversely in two points, $S_K$ and $T_K$ are {\it Montesinos twins} \cite{Mo}. These twin $2$-spheres have a neighborhood $P$ in $S^{4}$ which is obtained by plumbing together two copies of $S^2\x D^2$ at two points. We call $P$ a {\it twin neighborhood}. Note that $P$ has a natural embedding in $S^4$ as the complement of the neighborhood of a standardly embedded torus in $S^4$: $S^4=P\cup (T^2\x D^2)$.
Returning to rim surgery, we identify $S^1\x (D^3, K')$ with $(S^4\- N_{T_K}, S'_K)$ where $N_{T_K}$ is a tubular neighborhood of the $2$-sphere $T_K$, and $S'_K = S_K\cap (S^4\- N_{T_K})\cong S^1\x I$. So rim surgery is given by the formula \[ (Y,\Sig_{K,C}) = (Y,\Sig) \- (C\x (I\x D^2, I\x \{ 0\}) )\cup (S^4\- N_{T_K}, S'_K) \]
The process of $k$-twist-spinning a knot \cite{Z} also produces a pair of twins in $S^4$, the twist-spun knot $S_{K,k}$ and the twin $T_{K,k}$, which again arises from $\bd D^3$. (We shall give an explicit description of twist-spinning below.) The twin $T_{K,k}$ is again unknotted in $S^4$. Let $N_{T_{K,k}}$ denote a tubular neighborhood of $T_{K,k}$, then $S^4\- N_{T_{K,k}}$ is diffeomorphic to $S^1\x D^3$. One defines $k$-twist-rim surgery on $\Sig\C Y$ by \[ (Y,\Sig_{K,C,k}) = (Y,\Sig) \- (C\x (I\x D^2, I\x \{ 0\}) )\cup (S^4\- N_{T_{K,k}}, S'_{K,k}) \] where $S'_{K,k}=S_{K,k}\cap (S^4\- N_{T_{K,k}})$. Once again, this depends on a choice of framing for $(C\x I)\x D^2$. As we explain below, different framings may affect the value of $k$. Nonetheless, we will not further complicate matters by notationally keeping track of the framing.
The theorem of Kim and Ruberman is:
\begin{prop}[\cite{KR}] \label{KR} Let $Y$ be a simply connected smooth $4$-manifold with an embedded surface $\Sig$ of positive genus. Suppose that $\pi_1(Y\- \Sig)$ is a finite cyclic group $\Z_d$ and let $k$ be any integer relatively prime to $d$. Then for any knot $K\C S^3$ and homologically essential loop $C\C \Sig$ and for an appropriate choice of framing described below, $\pi_1(Y\- \Sig_ {K,C,k}) = \Z_d$ and, in fact, $(Y,\Sig)$ and $(Y,\Sig_ {K,C,k})$ are homeomorphic as pairs. \end{prop}
As in the case of ordinary rim surgery, there is also a knot surgery description of $k$-twist-rim surgery. Consider the rim torus $C\x \bd D^2_\nu$ as above. Twist-rim surgery is accomplished by removing a neighborhood $C\x \bd D^2_\nu\x D^2_\delta$ of the rim torus and gluing in $S^1\x (S^3\- N_K)$ by the diffeomorphism \[ \psi_k: C\x \bd D^2_\nu \x D^2_\delta \to S^1\x \bd(S^3\- N_K)\] where $\psi_{k,*}([C])=k\, m_K+[S^1]$, $\psi_{k,*}(\nu) =m_K$, and $\psi_{k,*}(\delta) =\ell_K$. The image of $\Sigma$ is the $k$-twist-rim surgered surface. Since the longitude of $K$ is identified with $\bd D^2_\delta$, we obtain the same (relative) Seiberg-Witten invariant as for ordinary rim surgery. This is discussed further below.
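As a small consistency check (ours, not from the paper), the gluing $\psi_k$ must induce an isomorphism on $H_1$ of the boundary $3$-torus, i.e.\ a unimodular integer matrix. In the ordered bases $([C],\nu,\delta)$ and $(m_K,[S^1],\ell_K)$, the assignments $[C]\mapsto k\,m_K+[S^1]$, $\nu\mapsto m_K$, $\delta\mapsto\ell_K$ give a matrix of determinant $-1$ for every twist $k$ (ordinary rim surgery being the case $k=0$):

```python
# Check that the induced map on H_1(T^3) is unimodular for every k.

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def psi_k_matrix(k):
    # Columns are the images of [C], nu, delta written in the
    # ordered basis (m_K, [S^1], ell_K).
    return [[k, 1, 0],
            [1, 0, 0],
            [0, 0, 1]]

for k in range(-5, 6):
    assert det3(psi_k_matrix(k)) == -1   # unimodular, independent of k
print("ok")
```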
\section{Twist-spinning and circle actions}\label{S^1}
There is a relation between twist-spinning a knot and smooth circle actions on $4$-manifolds which we shall describe in this section. Smooth circle actions on $S^4$ are completely determined by their orbit space data \cite{S^4,S^1-1,Pa}\footnote{Although these papers are set in the category of locally smooth actions, their results apply {\em{verbatim}}, with the same proofs, in the smooth category.}. The orbit space is $S^3$ or $B^3$; in the latter case, the boundary is the image of the fixed point set and the rest of the action is free. In case the orbit space is $S^3$, the fixed point set is a pair of points, and the image of the exceptional orbits is either empty, a single arc connecting the two fixed point images, or a pair of arcs which meet only at their endpoints, the fixed point images. In case there is just one arc, its interior points all correspond to orbits with the same finite cyclic isotropy group $\Z_k$ and its endpoints to fixed points. (Thus its preimage in $S^4$ is a $2$-sphere.) We denote $S^4$ with this action by $S^4(k)$. If there are two arcs, their union is a circle containing the two fixed point images, which split it into two arcs corresponding to finite cyclic isotropy groups of relatively prime orders. This circle is a knot in the orbit space $S^3$, and its knot type $K$ is an invariant of the $S^1$-action. If the exceptional orbit types are $\Z_k$ and $\Z_d$, we denote $S^4$ with this action by $S^4(K;k,d)$. By $\bE_k$ we denote the $2$-sphere in $S^4$ consisting of the closure of the set of orbits of isotropy type $\Z_k$; this is the preimage of a closed arc in $S^3$ contained in $K$. The orbit data described here completely determines smooth $S^1$-actions on $S^4$ up to equivariant diffeomorphism.
In $S^4(K;k,d)$, the $2$-spheres $\bE_k$ and $\bE_d$ form a pair of twins. The corresponding twin neighborhood is denoted $P(K;k,d)$. We will often use the notation $S^4(K;k,1)$ and $P(K;k,1)$. This gives us the $S^1$-action $S^4(k)$, but picks out a preferred set of twins in $S^4$, $\bE_k\cup \bE_1$, where $\bE_1$ is the preimage of the closed arc in $K$ labelled `$1$'. For the actions $S^4(K;k,d)$, $d\ge1$, we have \[ S^4(K;k,d) = P\cup (S^1\x(S^3\- N_K))\] where $P= P(K;k,d)$ and $S^1$ acts freely in the obvious fashion on the other summand. In order to describe how these pieces are glued together, we choose bases for $H_1$ of $\bd P\cong T^3$ and $\bd(S^1\x (S^3\- N_K))$. To get such a basis for $H_1(\bd P)$ we consider the standard embedding of $P$ in $S^4$ with complement $T^2\x D^2$. (This corresponds to $K =$ unknot.) We let $\mu_1$ be the homology class of the meridian of one of the twin $2$-spheres, $\mu_2$ the homology class of the meridian of the other, and $\lam$ the homology class of a loop on $\bd P$ which generates $H_1(P)\cong\Z$ and which is homologically trivial in $S^4\- P$. We use the ordered basis $\{\mu_1,\mu_2,\lam\}$. For an ordered basis of $H_1(S^1\x (S^3\- N_K))$ we choose $\{ m_K, [S^1], \ell_K\}$. The gluing for $S^4(K;k,d)$, $\psi:\bd P\to \bd(S^1\x (S^3\- N_K))$ has $\psi_*$ given by the matrix \[ A(k,d)= \left( \begin{array}{ccc} k &d &0 \\
-\b &\g &0 \\
0 &0 &1 \end{array} \right)\ \ \ \b\,d+\g\,k=1 \] There is an easy description of twist-spinning in this language. If $K$ is a knot in $S^3$ then its $k$-twist-spin is $\bE_1$ in $S^4(K;k,1)$. (See e.g. \cite{Pa}.)
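The entries of $A(k,d)$ come from a B\'ezout relation: integers $\b,\g$ with $\b\,d+\g\,k=1$ exist precisely when $\gcd(k,d)=1$, and then $\det A(k,d)=k\g+d\b=1$, so $A(k,d)\in SL(3,\Z)$, as it must be to arise from a diffeomorphism of $T^3$. A minimal computational sketch (the helper names are mine, not notation from the text), using the extended Euclidean algorithm:

```python
def bezout(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = bezout(b, a % b)
    return (g, y, x - (a // b) * y)

def A(k, d):
    """Gluing matrix A(k,d); the Bezout row exists iff gcd(k, d) == 1."""
    g, gamma, beta = bezout(k, d)          # gamma*k + beta*d == g
    assert g == 1, "k and d must be relatively prime"
    return [[k, d, 0], [-beta, gamma, 0], [0, 0, 1]]

def det3(M):
    """Determinant of a 3x3 integer matrix."""
    (a, b, c), (p, q, r), (u, v, w) = M
    return a*(q*w - r*v) - b*(p*w - r*u) + c*(p*v - q*u)

# det A(k,d) = k*gamma + d*beta = 1 for every admissible pair (k, d).
```

The choice of $(\b,\g)$ is not unique; any two solutions differ by a multiple of $(k,-d)$, which is the source of the framing ambiguity discussed below.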
We describe the above surgery operations one last time in terms of this notation. If $K$ is a knot in $S^3$, let $(B^3, K')$ be $(S^3,K)$ with a trivial ball pair removed. Consider the semifree $S^1$-action on $S^4$ whose orbit space is $B^3$ with orbit map $\pi$. Then $S_K$, the spun knot obtained from $K$, is $S_K=\pi^{-1}(K')$ and its twin is $T_K=\pi^{-1}(\bd B^3)$. Rim surgery can now be described as \[ (Y,\Sig_{K,C}) = (Y,\Sig) \- (C\x (I\x D^2, I\x \{ 0\}) )\cup (\pi^{-1}(B^3_0), \pi^{-1}(K'\cap B^3_0))\] where $B^3_0$ is $B^3$ with an open collar of its boundary removed. Notice that from $Y$ we have removed $C\x I\x D^2 \cong S^1\x B^3$ and replaced it with $\pi^{-1}(B^3_0)\cong S^1\x B^3$, leaving the ambient space $Y$ unchanged.
Similarly, using the $S^1$-action $S^4(K;k,1)$, the formula for $k$-twist-rim surgery becomes \[ (Y,\Sig_{K,C,k}) = (Y,\Sig) \- (C\x (I\x D^2, I\x \{ 0\}) )\cup (S^4\- N(\bE_k), \bE_1') \] where $N(\bE_k)$ is an $S^1$-equivariant tubular neighborhood and $\bE_1' =\bE_1\cap (S^4\- N(\bE_k))$. Because $S^4(K;k,1)=P\cup (S^1\x (S^3\- N_K))$, we can express $k$-twist-rim surgery in terms of surgery on the torus $R=C\x \bd D^2_\nu$: \[ (Y,\Sig_{K,C,k}) = (Y\- ( R\x D^2_\delta) ,\Sig) \cup_\phi (S^1\x (S^3\- N_K),\emptyset) \] where $\phi_*$ is the composition $\psi_*\circ \iota$ with $\iota_*(C)=\mu_1$, $\iota_*(\nu)=\mu_2$ and $\iota_*(\delta)=\lam$. We are identifying $\mu_1$ with a normal circle to $\bE_k$ and $\mu_2$ with a normal circle to $\bE_1$. (See also \cite{Pl} where this is described in slightly different notation.) The matrix giving our gluing is the matrix $A(k,1)$ defined above.
As we have pointed out, our construction depends on a choice of framing for the restriction of the normal bundle of $\Sig$ to $C\x I$ or equivalently of the rim torus $C\x \bd D^2_\nu$. If one pushoff $C'$ of $C$ gives rise to the gluing above, then any other framing comes from replacing $C'$ by $C'+r\nu$. Thus it corresponds to the gluing matrix $A( k+r,1)$. Hence $k$-twist-rim surgery with respect to the first framing is $(k+r)$-twist-rim surgery with respect to the second framing. This brings us to the choice of framing in the Kim-Ruberman Theorem. We need to choose a framing so that $C\x \{\text{pt}\}$ is nullhomologous in $Y\- \Sig$. Because $H_1(Y\- \Sig)=\Z_d$ is generated by $\nu$, different choices of acceptable framings differ by integer multiples of $d\,\nu$. Thus $k$-twist-rim surgery gets turned into $(k+rd)$-twist-rim surgery, preserving the hypothesis that $k$ and $d$ should be relatively prime.
\section{Branch sets and relative Seiberg-Witten invariants}
Fix an integer $d>1$. The $\Z_d$-actions which we construct will be $d$-fold cyclic branched covers of smoothly embedded surfaces in smooth $4$-manifolds. Let $Y$ be a simply connected oriented smooth $4$-manifold with $b^+(Y)\ge 1$ containing a smoothly embedded surface $\Sig$ of genus $g\ge 1$ and self-intersection $\Sig\cdot\Sig=n \ge 0$ such that $\pi_1(Y\- \Sig)=\Z_d$.
Choose a homologically essential loop $C$ on $\Sig$ and for $k\ge 1$ relatively prime to $d$ perform $k$-twist-rim surgery on $\Sig$ using the rim torus corresponding to $C$ and a knot $K$ in $S^3$ to obtain a surface $\Sig_{K,C,k}$. We now fix $d$ and $k$ and use the shorthand $\Sig_K =\Sig_{K,C,k}$. It follows from the result of Kim and Ruberman, Proposition~\ref{KR}, that the pairs $(Y,\Sig)$ and $(Y,\Sig_K)$ are homeomorphic. We say that surfaces $\Sig$, $\Sig'$ in $Y$ are {\em smoothly (resp. topologically) equivalent} if there is a diffeomorphism (resp. homeomorphism) of pairs $(Y,\Sig)\cong (Y,\Sig')$.
We employ a simple trick to reduce to the situation where the self-intersection of the surface is $0$. Blow up $\Sig\cdot\Sig=n$ times to get $\widehat{Y}=Y\# n\,\CPb$, and let $\widehat{\Sig}$ be the blown up surface, which has self-intersection $0$. If the surfaces $\Sig_1$, $\Sig_2$ are smoothly equivalent in $Y$, then $\widehat{\Sig}_1$ and $\widehat{\Sig}_2$ will be smoothly equivalent in $\widehat{Y}$. Furthermore, $\widehat{\Sig}_{K,C,k}=\widehat{\Sig_{K,C,k}}$. Thus we may assume that $\Sig\cdot\Sig=0$. There is a complete proof in \cite{surfaces} that when the genus of $\Sigma$ is $1$, rim surgery using knots with distinct Alexander polynomials yields smoothly distinct embedded surfaces; the same holds for twist-rim surgery. Hence we may assume that $g\ge 2$.
We need to describe the Seiberg-Witten invariant $SW_{(Y|\Sig)}$ as defined in \cite{KM,KM2}. This is the Seiberg-Witten invariant of $Y\- N(\Sig)$ obtained from spin$^{\text{c}}$-structures $\mathfrak{s}$ on $Y$ which satisfy $\la c_1(\mathfrak{s}), \Sig\ra=2g-2$. On $\Sig\x S^1$ there is a unique spin$^{\text{c}}$-structure ${\mathfrak{s}}_{g-1}$ which is pulled back from a spin$^{\text{c}}$-structure on $\Sig$ and satisfies $\la c_1(\mathfrak{s}_{g-1}), \Sig\ra=2g-2$. As explained in \cite{KM}, systems of local coefficients for monopole Floer homology correspond to $1$-cycles on $\Sig\x S^1$, and up to isomorphism the groups depend on their homology classes $\eta\in H_1(\Sig\x S^1;\R)$. The related monopole Floer homology groups with local coefficients are
$H\!M_{\bullet}(\Sig\x S^1| \Sig;\Gamma_{\eta})=\R$. In fact, as is pointed out in \cite{KM2}, if we take a product metric on $\Sig\x S^1$ where the metric on $\Sig$ has constant negative curvature, then there is a unique nondegenerate solution of the Seiberg-Witten equations on $\Sig\x S^1$. This gives rise to a distinguished generator of each of the groups $H\!M_{\bullet}(\Sig\x S^1| \Sig;\Gamma_{\eta})$ and thus they can all be identified.
In order to get an invariant of the pair $(Y,\Sig)$ we need to consider the relative Seiberg-Witten invariant of $Y\- N(\Sig)$. Relative Seiberg-Witten invariants of a $4$-manifold with boundary take their values in various Floer homology groups of the boundary. In our case, all the groups can be identified as pointed out above to obtain the relative invariant described below.
Let $W= Y\- N(\Sig)$, and assume that $W$ inherits an orientation and homology orientation from $(Y,\Sig)$. The space $\mathcal{B}(W;[a_0])$ of pairs $(A,\Phi)$ consisting of a spin$^{\text{c}}$-connection and spinor which limit to the unique equivalence class $[a_0]$ of solutions of the Seiberg-Witten equations for the spin$^{\text{c}}$-structure ${\mathfrak{s}}_{g-1}$ on $\Sig\x S^1$ splits into path components, and each path component $z$ determines a spin$^{\text{c}}$-structure $\mathfrak{s}_{W,z}$. The moduli space of solutions to the Seiberg-Witten equations on $W^*$, i.e. $W$ with a cylindrical end, splits along these path components, $M(W^*;[a_0])=\coprod M_z(W^*;[a_0])$, and similarly for the whole configuration space, $\mathcal{B}(W^*;[a_0])=\coprod \mathcal{B}_z(W^*;[a_0])$. The set of all path components, $\pi_0(\mathcal{B}(W^*;[a_0]))$, is a principal homogeneous space for $H^2(W,\Sig\x S^1;\Z)$.
For each such path component $z$, any pair $(A,\Phi)$ representing $z$, and any class $\nu\in H_2(W, \Sig\x S^1;\R)$, the integral over $\nu$ of the curvature $F_{A^t}$ of the connection $A^t$, induced on the determinant line of the spinor bundle, depends only on $z$ and $\nu$. We thus have a relative Seiberg-Witten invariant defined by
\[SW_{(Y|\Sig)}: H_2(W, \Sig\x S^1;\R)\to \R\]
\[ SW_{(Y|\Sig)}(\nu)= \sum_z m_W(z) \exp(\frac{i}{2\pi}\int_{\nu} F_{A_z^t})\]
where the sum is taken over $z\in \pi_0(\mathcal{B}(W^*;[a_0]))$. (It is shown in \cite{KM} that for an appropriate perturbation of the Seiberg-Witten equations, only finitely many such $z$ admit solutions of the Seiberg-Witten equations.) The coefficient $m_W(z)$ denotes the count with signs of solutions in $M_z(W^*;[a_0])$ in case this moduli space is $0$-dimensional; $m_W(z)$ is $0$ otherwise. No assumption on $b^+(W)$ is necessary for the definition of the invariant $SW_{(Y|\Sig)}$. (See \cite[\S 3.9]{KM}.)
The proof of the knot surgery theorem \cite{KL4M} tells us that $SW_{(Y|\Sig_K)}$ is obtained from
$SW_{(Y|\Sig)}$ by multiplying by $\DD_K(t)$, the symmetrized Alexander polynomial of $K$. We now explain this. Write $\DD_K(t)= \sum_{j} c_jt^j$ (a finite sum; the letter $d$ remains reserved for the order of the cyclic group), and let $\rho\in H^2(W,\Sig\x S^1;\Z)$ be the Poincar\'e dual of the rim torus $R$ corresponding to the loop $C$ on $\Sig$. Then, recalling that $\pi_0(\mathcal{B}(W^*;[a_0]))$ is a principal homogeneous space for $H^2(W,\Sig\x S^1;\Z)$, for each $z$ such that $m_W(z)\ne 0$, we have $z+j\rho\in \pi_0(\mathcal{B}(W^*;[a_0]))$, $j\in\Z$. Because the calculation for twist-rim surgery is the same as for rim surgery, the knot surgery theorem gives
\begin{multline*} SW_{(Y|\Sig_{K,C,k})}(\nu) = SW_{(Y|\Sig)}(\nu)\cdot \DD_K(\rho^2) =\\
=\sum_{z,j} m_W(z) c_j \exp\big{(}2j\la\rho,\nu\ra+\frac{i}{2\pi}\int_{\nu} F_{A_z^t}\big{)}= \\
=\sum_{z,j} m_{W_K}(z+j\rho) \exp\big{(}\frac{i}{2\pi}\int_{\nu} F_{A_{z+j\rho}^t}\big{)} \end{multline*} where $W_K = Y\- (\Sig_K\x D^2)$, and we are identifying $H^2(W_K,\Sig_K\x S^1;\Z)$ with the group $H^2(W,\Sig\x S^1;\Z)$ using the canonical isomorphism described in \cite{surfaces}. Note that this formula asserts that $m_{W_K}(z+j\rho)= c_j\,m_W(z)$ and that $\frac{i}{2\pi}\int_{\nu} F_{A_{z+j\rho}^t} = 2j\la\rho,\nu\ra+\frac{i}{2\pi}\int_{\nu} F_{A_z^t}$.
We would like to be able to conclude that if $\DD_{K_1}(t)\ne\DD_{K_2}(t)$ then rim (or twist-rim) surgery using these two knots results in smoothly inequivalent surfaces in $Y$. However, all that we are currently able to say is the following. (Compare \cite{addendum}.) Let ${\mathcal{S}}_{W_K}=\{z\in \pi_0(\mathcal{B}(W_K^*;[a_0]))\mid m_{W_K}(z)\ne 0\}$.
\begin{prop} \label{SS} If $\Sig_{K_1}$ and $\Sig_{K_2}$ are smoothly equivalent, there is an automorphism of $H^2(W,\Sig\x S^1;\Z)$ sending ${\mathcal{S}}_{W_{K_1}}$ to ${\mathcal{S}}_{W_{K_2}}$ and preserving the coefficients $m_{W_{K_i}}(z)$. \end{prop} \begin{proof} If $\Sig_{K_1}$ and $\Sig_{K_2}$ are smoothly equivalent in $Y$, then $\widehat{\Sig}_{K_1}$ and $\widehat{\Sig}_{K_2}$ are smoothly equivalent in $\widehat{Y}$. The proposition now follows because relative Seiberg-Witten invariants are invariants of smooth equivalence of surfaces. (Cf. \cite{addendum}.) \end{proof}
\section{Cyclic group actions: Equivariant rim surgery}
Let $Y$ be a simply connected smooth $4$-manifold, with an embedded surface $\Sig$ of genus $g\ge 1$ whose self-intersection number is nonnegative and such that $\pi_1(Y\- \Sig) = \Z_d$. Let $C$ be a nonseparating loop on $\Sig$ which bounds a disk in $Y\- \Sig$; for example, $C$ could be a vanishing cycle.
Let $K$ be a knot in $S^3$, and for an integer $k$ relatively prime to $d$ perform $k$-twist-rim surgery on $\Sig$ using the loop $C$, and let $\Sig_K=\Sig_{K,C,k}\C Y$. Proposition~\ref{KR} implies that the surfaces $\Sig$ and $\Sig_K$ are topologically equivalent in $Y$. Let $W$ and $W_K$ be the complements of tubular neighborhoods of $\Sig$ and $\Sig_K$ in $Y$. As in the previous section, blow up $\Sig\cdot\Sig$ times to obtain $\widehat{Y}$, $\widehat{\Sig}$, and $\widehat{\Sig}_K$. Note that the blowup of $\Sig_K$ is the same as the result of twist-rim surgery ${\widehat{\Sig}}_K$ on ${\widehat{\Sig}}$.
\begin{prop} \label{3} Let $K$ and $K'$ be two knots in $S^3$ and suppose that their Alexander polynomials have different (unordered) sets of nontrivial coefficients. Also suppose that $SW_{(Y|\Sig)}\ne 0$. Then the surfaces $\Sig_K$ and $\Sig_{K'}$ are smoothly inequivalent in $Y$. In particular, the $\Z_d$-actions on the $d$-fold cyclic covers of $Y$ branched over $\Sig_K$ and $\Sig_{K'}$ are equivariantly homeomorphic but not equivariantly diffeomorphic. \end{prop}
\begin{proof} The hypothesis implies via the knot surgery formula that ${\mathcal{S}}_{W_{K}}\ne {\mathcal{S}}_{W_{K'}}$. The first part of the proposition now follows from Proposition~\ref{SS}. The second part of the proposition follows from Proposition~\ref{KR} and the fact that an equivariant diffeomorphism induces a diffeomorphism of orbit spaces preserving the fixed point image. \end{proof}
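Proposition~\ref{3} requires knots whose Alexander polynomials have pairwise different unordered sets of nontrivial coefficients. One way to produce such an infinite family, offered here purely as an illustration and not claimed to be the choice made in the text, is to take connected sums of trefoils: $\DD$ is multiplicative under connected sum and $\DD_T(t)=t-1+t^{-1}$ for the trefoil $T$. A sketch computing the coefficient sets:

```python
def mul(p, q):
    """Multiply Laurent polynomials stored as {exponent: coefficient} dicts."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {e: c for e, c in r.items() if c != 0}

trefoil = {1: 1, 0: -1, -1: 1}   # Delta_T(t) = t - 1 + t^{-1}

def coeff_set(i):
    """Unordered set of nonzero coefficients of Delta for #^i trefoils."""
    p = {0: 1}
    for _ in range(i):
        p = mul(p, trefoil)
    return frozenset(p.values())

# Pairwise distinct coefficient sets, so Prop. 3 applies to each pair:
sets = [coeff_set(i) for i in range(1, 7)]
```

For instance, one, two, and three trefoil summands give the coefficient sets $\{1,-1\}$, $\{1,-2,3\}$, and $\{1,-3,6,-7\}$ respectively.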
We now need to show that the cyclic branched covers in question are diffeomorphic. Let $X_K$ be the $d$-fold cyclic cover of $Y$ branched over $\Sig_K$. We have seen that \[ (Y,\Sig_K) = (Y,\Sig) \- (C\x (I\x D^2, I\x \{ 0\}))\cup (S^4(K;k,1)\- N(\bE_k), \bE_1') \]
Alternatively, we have the rim surgery description: \[ (Y,\Sig_K)= (Y,\Sig) \- ( R\x D^2_\delta) \cup_\phi (S^1\x (S^3\- N_K)) \] where $S^4(K;k,1)\- P(K;k,1)\cong S^1\x (S^3\- N_K)$.
Let $\vt: (X,\SIG)\to (Y,\Sig)$ and $\vt_K:(X_K,\SIG_K)\to (Y,\Sig_K)$ be the branched covers. The loop $C$ lifts to a loop $\CC$ on $\SIG$ and the rim torus $R$ similarly lifts to the rim torus $\RR$ associated to $\SIG$ and $\CC$. The manifold $X_K$ is obtained from $X$ by replacing $\RR\x D^2_\delta$ with the $d$-fold cover of $S^4(K;k,1)\- P(K;k,1)$. According to \cite{Pa} (see also \cite{Pl}), the $d$-fold cover of $S^4(K;k,1)$ branched over $\bE_1$ is $S^4(K;k,d)$, and the branching locus is the $2$-sphere $\bE_d$. The deck transformations of this branched cover are generated by the action of $e^{2\pi\,i/d}\in S^1$ contained in the circle action; so the branched covering map $S^4(K;k,d) \to S^4(K;k,1)$ sends $\bE_k$ to $\bE_k$. Thus we see that to obtain $X_K$, we replace $\RR\x D^2_\delta$ with $S^4(K;k,d)\- P(K;k,d)$, which is in turn diffeomorphic to $S^1\x (S^3\- N_K)$: \[ X_K= X\- (\RR\x D^2_\delta)\cup_{\tilde{\phi}} (S^1\x(S^3\- N_K)) \] where $\tilde{\phi}$ is given by the matrix $A(k,d)$ when bases are chosen as in \S~\ref{S^1}. And again as in that section, we may redescribe $X_K$ as \[ X_K= (X\- (\CC\x I\x D^2_\nu)) \cup (S^4(K;k,d)\- N(\bE_k))\]
\begin{prop} If $C$ bounds an embedded disk in $Y\- \Sig$ then $X_K$ is diffeomorphic to $X$. \end{prop} \begin{proof} If $C$ bounds an embedded disk in $Y\- \Sig$, then in the cover $\CC$ bounds a disk in $X\- \SIG$. (In fact $\CC$ bounds $d$ such disks with disjoint interiors.) Hence a pushoff $\CC\x \{\text{pt}\}$ bounds an embedded disk in $X\- (\CC\x I\x D^2_\nu)$. The union of a regular neighborhood $U$ of this disk with $\CC\x I\x D^2_\nu$ is the result of attaching a $2$-handle to $S^1\x B^3$ along $S^1\x \{\text{pt}\}$. This is the $4$-ball, $B^4$.
The $S^1\x B^3$ in question is $\CC\x I\x D^2_\nu$ and the rim torus is $\RR = \CC\x \{\text{pt}\}\x \bd D^2_\nu$ in the boundary $S^1\x S^2 = \CC\x D^2\cup_{\RR}\CC\x D^2$. Attaching the $2$-handle corresponds to surgery on $\CC\x\{\text{pt}\}$; so $\bd B^4 = \CC\x D^2\cup_{\RR} S^1\x D^2$ where the gluing takes some pushoff of $\CC$ to $\bd D^2$. If $\CC'$ is a preferred pushoff of $\CC$, i.e. it is nullhomologous in $X\- \SIG$, then our gluing takes $[\CC'] + r[\bd D^2_\nu]$ to $[\bd D^2]=0$.
Thus the rim torus $\RR$ is a standard unknotted torus in $S^3=\bd B^4$, and, if we take the union of $B^4=U\cup (\CC\x I\x D^2_\nu)$ with another copy of $B^4$, we get $S^4=P\cup_{\RR\x\bd D^2_\delta} \RR\x D^2_\delta$. After the handle addition, the standard basis $\{\mu_1,\mu_2,\lam\}$ of $H_1(\bd P)$ is identified with $\{\CC'+r\nu, \nu, \delta\}$ in $H_1(\RR\x\bd D^2_\delta)$.
Let $V=\CC\x I\x D^2_\nu\- (\RR\x D^2_\delta)$, which is diffeomorphic to $S^1$ times the standard cobordism from a torus to a $2$-sphere obtained by attaching a $2$-handle. In $X_K$, the $4$-ball $ U\cup (\CC\x I\x D^2_\nu) = U\cup V \cup (\RR\x D^2_\delta)$
is replaced by
\[ U\cup V \cup (S^4(K;k,d)\- P(K;k,d)) = U\cup V \cup_{\tilde{\phi}} (S^1\x (S^3\- N_K))\]
However using obvious notation, $S^4(K;k,d) =P\cup_{A(k,d)} (S^1\x (S^3\- N_K))$; so
\begin{multline*} (B^4\cup U\cup V) \cup_{\tilde{\phi}} (S^1\x (S^3\- N_K)) = \\
(S^4\- {\text{Nbd}}(T^2_{\text{std}}))\cup_{\tilde{\phi}} (S^1\x (S^3\- N_K)) = P\cup_A (S^1\x (S^3\- N_K))\cong S^4
\end{multline*}
because
\[ A= A(k,d)\circ \left( \begin{array}{ccc} 1 &0 &0 \\ r &1 &0 \\
0 &0 &1 \end{array} \right) = \left( \begin{array}{ccc} k+rd &d &0 \\ -\b+r\g &\g &0 \\
0 &0 &1 \end{array} \right) = A(k+rd,d)
\] Hence $U\cup V \cup_{\tilde{\phi}} (S^1\x (S^3\- N_K)) \cong B^4$. It follows that $X_K$ is diffeomorphic to $X$.
\end{proof}
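The matrix identity at the end of the proof, encoding the change of framing $C'\mapsto C'+r\nu$, is straightforward to check by direct multiplication. A small sketch (helper names mine, with the sample values $k=3$, $d=5$, $r=2$ chosen only for illustration):

```python
def matmul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def A_mat(k, d, beta, gamma):
    """Gluing matrix A(k,d), assuming beta*d + gamma*k == 1."""
    return [[k, d, 0], [-beta, gamma, 0], [0, 0, 1]]

def shear(r):
    """Change of framing C' -> C' + r*nu."""
    return [[1, 0, 0], [r, 1, 0], [0, 0, 1]]

k, d, r = 3, 5, 2
beta, gamma = -1, 2                    # (-1)*5 + 2*3 == 1
lhs = matmul(A_mat(k, d, beta, gamma), shear(r))
rhs = A_mat(k + r * d, d, beta - r * gamma, gamma)
# lhs == rhs, i.e. A(k,d) composed with the shear equals A(k+rd, d),
# and the new Bezout pair still satisfies (beta-r*gamma)*d + gamma*(k+r*d) == 1.
```

This is exactly the computation $A(k,d)\circ\text{(shear)}=A(k+rd,d)$ displayed above.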
We may now complete the proof of Theorem~\ref{main}.
\begin{proof}[of Theorem~\ref{main}] Fix a positive integer $k$ relatively prime to $d$ and a nonseparating simple closed curve $C$ on $\Sig$ such that $C$ bounds an embedded $2$-disk $D$ in $Y\- \Sig$. Let $\{K_i\}_{i=1}^{\infty}$ be a family of knots in $S^3$ whose Alexander polynomials have pairwise different sets of nonzero coefficients. It then follows from Propositions~\ref{KR} and \ref{3} that the surfaces $\Sig_{K_i,C,k}$ obtained from $\Sig$ by $k$-twist-rim surgery are topologically equivalent but smoothly distinct. Of course this means that their corresponding branched covers give $\Z_d$-actions which are equivariantly homeomorphic but not equivariantly diffeomorphic. Furthermore, because each of these branched covers $X_{K_i}$ is obtained from $X$ by removing a $4$-ball and then replacing it with another $4$-ball, each $X_{K_i}$ is, in fact, diffeomorphic to $X$. \end{proof}
Note that the construction used in the proof can be viewed as an {\it equivariant rim surgery}. In fact we could have presented the construction in this manner. However, it has been convenient to phrase our arguments in the language of circle actions in order to more easily identify the gluing diffeomorphisms and to more clearly see that the construction will not change $X$ as long as $C$ bounds an embedded disk in the complement of $\Sig$.
\section{Final comments}
As we have shown above, Theorem~\ref{main} applies widely. Many smooth $4$-manifolds are constructed as branched $\Z_{d}$-covers and, with mild conditions on the branch set, they thus have infinite families of exotic actions of $\Z_{d}$. In most cases these manifolds are irreducible. All these actions are nontrivial on homology. (Because otherwise $e(X)=e(Y)$, which implies $e(Y)=e(\Sig)$. But $X$ is simply connected, so this implies $e(X)=2$, which is ruled out if $X$ has a nontrivial Seiberg-Witten invariant.) It remains an interesting question to determine if there are simply connected $4$-manifolds with exotic actions of cyclic groups $\Z_{d}$ ($d >2$) that induce the identity on homology. This is of particular interest for the $K3$ surface. Also, it is still an interesting problem to determine exotic free group actions on a fixed smooth $4$-manifold with a nontrivial Seiberg-Witten invariant. All these questions are in the realm of seeking general rigidity or uniqueness properties in dimension $4$.
\end{document} |
\begin{document}
\title{An atom interferometer enabled by spontaneous decay} \author{R.A. Cornelussen, R.J.C. Spreeuw, and H.B. van Linden van den Heuvell} \affiliation{Van der Waals - Zeeman Institute, University of Amsterdam, \\ Valckenierstraat 65, 1018 XE Amsterdam, The Netherlands \\ e-mail: [email protected]}
\date{\today}
\begin{abstract} We investigate whether Michelson-type interferometry is possible if the role of the beam splitter is played by a spontaneous process. This question arises from an inspection of trajectories of atoms bouncing inelastically from an evanescent-wave (EW) mirror. Each final velocity can be reached via two possible paths, with a {\it spontaneous} Raman transition occurring either during the ingoing or the outgoing part of the trajectory. At first sight, one might expect that the spontaneous character of the Raman transfer would destroy the coherence and thus the interference. We investigate this problem by numerically solving the Schr\"odinger equation and by applying a Monte-Carlo wave-function approach. We find interference fringes in velocity space, even when random photon recoils are taken into account.\end{abstract}
\pacs{03.65.Yz, 03.75.Dg, 39.20.+q}
\maketitle
\section{Introduction}
Spontaneous emission is generally considered a detrimental effect in atom interferometers. The associated random recoil reduces or even completely destroys the visibility of the interference fringes. In this paper we describe an atom interferometer where the beam splitter works by means of a spontaneous Raman transition. Our central question will be whether one can observe interference in such an interferometer which is enabled by a spontaneous process and where decoherence is built into the beam splitting process from the start.
The role of spontaneous emission in (atom) interferometers has long been connected to the concept of which-way information, ultimately tracing back to Heisenberg's uncertainty principle \cite{Heisenberg27}. Feynman \cite{FeynmanLectures} discussed a {\it gedankenexperiment} using a Heisenberg microscope to determine which slit was taken by a particle in a Young's two-slit interferometer. The scattered photon, needed to determine the position, spoils the interference pattern due to its associated recoil. Similarly, spontaneous emission in an (atom) interferometer may provide position information on the atom, while at the same time randomizing the momentum and spoiling interference \cite{ChaHamPri95,HacHorArn04}. On the other hand, resonance fluorescence, including that from spontaneous Raman transitions, is as coherent as the incident light in the limit of low saturation \cite{Loudon83, CCT}. Experimental demonstrations have been given by Eichmann {\it et al.} \cite{EicBerRai93} and Cline {\it et al.} \cite{CliMilHei94}. An experiment by D\"urr {\it et al.} \cite{DurNonRem98} has made it clear that availability of which-way information should be conceptually separated from the presence of random recoils. Which-way information can be obtained without random recoils, nevertheless leading to loss of interference. Here we show that random recoils do not lead to loss of interference as long as the spontaneously emitted photons do not yield which-way information.
\begin{figure}
\caption{Atoms approaching an evanescent-wave mirror in state $|1\rangle$ can undergo a spontaneous transition to state
$|2\rangle$, either on the ingoing or on the outgoing part of the trajectory. These two paths possibly interfere, with a phase difference depending on the transition point $z$.}
\label{fig1}
\end{figure}
Our proposed interferometer is based on cold atoms that reflect from an evanescent-wave (EW) mirror. Our model atom is a typical alkali atom with two hyperfine ground states. During the reflection from the EW mirror the atoms can make a spontaneous Raman transition to the other hyperfine state. When the repulsive potential experienced by the final state is lower, the atoms lose kinetic energy and hence bounce inelastically from the potential \cite{HelRolPhi,OvcSodGri95}. This Sisyphus process has been investigated previously \cite{DesArnDal96} and the resulting final velocity distribution has been shown to be a caustic \cite{WolVoiHeu01}, reminiscent of the rainbow. Furthermore, it is used in several experiments as the loading process of (low-dimensional) traps for atoms \cite{OvcManGri97,GauHarMly98,CorAmeHeu02}. Cognet {\it et al.} \cite{CogSavAsp98} observed the analog of St\"uckelberg oscillations in the transverse velocity distribution of atoms that reflect elastically from a corrugated EW potential. However, no stochastic or incoherent processes were involved.
In our case the final velocity of an atom depends on the position where it made the Raman transfer \cite{WolVoiHeu01}. Looking at the trajectories in detail we see that each final velocity can be reached by two trajectories, as is shown in Fig. \ref{fig1}. An atom can be transferred to the second state on the ingoing or the outgoing part of its trajectory. Interference will manifest itself in the velocity distribution of the reflected atoms, since the outgoing velocity depends on the transition point $z$. Note that the beam splitter, its role being played by a spontaneous Raman transition, is highly non-unitary: atoms are only transferred from state $|1\rangle$ to state $|2\rangle$ and not vice versa.
This paper is structured as follows. We first present a semi-classical picture, and use it to make qualitative predictions about the behavior of the interference effects, if present. The question whether interference is possible will be answered by solving the Schr\"odinger equation in two different ways. The first approach will employ stationary analytical solutions of the time-independent Schr\"odinger equation, but is limited to monochromatic wave functions. The second approach will propagate a wave packet by numerically integrating the time-dependent Schr\"odinger equation with random quantum jumps describing the Raman transitions. The last section deals with experimental considerations.
\section{Semi-classical description}
The analysis will be for a two-level atom, and only the motion of the atom in the direction along the EW-potential gradient will be considered. The calculations throughout this paper will assume a low saturation parameter, so that depletion of the initial state can be neglected. An atom in state $|1\rangle$ that reflects from an EW mirror experiences a potential $V_1\exp(-2\kappa z)$, with $\kappa ^{-1}$ the decay length of the EW field. The atom's trajectory through phase-space is given by $z_1(v)=(-1/2\kappa)\ln\left[(m/2V_1)(v_{\rm i}^2-v^2)\right]$, with $v_{\rm i}$ the velocity with which the atom enters the potential, see Fig. \ref{fig2}. After a transition to state $|2\rangle$ in point $A$ or $A'$ the atom experiences a potential $\beta V_1\exp(-2\kappa z)$, with $\beta<1$ the factor by which the potential energy is reduced after the Raman transition. The atom continues its way through phase space on a new trajectory $z_2(v)$, given by $z_2(v)=(-1/2\kappa)\ln\left[(m/2\beta V_1)(v_{\rm f}^2-v^2)\right]$, with $v_{\rm f}$
the asymptotic velocity with which the atom leaves the potential. The final velocity can have any value between $v_{\rm f}=\sqrt{\beta}v_{\rm i}$, which corresponds to a transfer in the turning point, and $v_{\rm f}=v_{\rm i}$, which corresponds to a transfer outside the EW potential. The final velocity depends on the atom's velocity $v_{\rm t}$ at the moment of the transfer to state $|2\rangle$. This dependence is given by $v_{\rm f}^2=v_{\rm t}^2+\beta(v_{\rm i}^2 -v_{\rm t}^2)$, from which it is clear that two values $\pm v_{\rm t}$ lead to the same final velocity $v_{\rm f}$. The two transitions lead to two possible trajectories through phase space, as shown in Fig. \ref{fig2}(a). The phase difference $\Delta\varphi$ between the two trajectories is given by \begin{equation} \Delta\varphi=\frac{m}{\hbar}\int_{-v_{\rm t}}^{v_{\rm t}}\left(z_1(v)-z_2(v)\right)\mathrm{d} v. \label{phasedifference} \end{equation} This phase difference is proportional to the area between the two curves, indicated in gray in Fig.~\ref{fig2}(b). From the evaluation of the integral for various parameters we learn that the fringe period decreases for increasing initial velocities $v_{\rm i}$, for increasing final velocities $v_{\rm f}$, for smaller $\beta$, and for larger decay length (smaller $\kappa$).
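Equation~(\ref{phasedifference}) is easy to evaluate numerically: $V_1$ cancels in $z_1(v)-z_2(v)$, which reduces to $-(1/2\kappa)\ln\left[\beta(v_{\rm i}^2-v^2)/(v_{\rm f}^2-v^2)\right]$, a bounded integrand (it vanishes at $v=\pm v_{\rm t}$). The sketch below uses illustrative numbers of my own choosing (a rubidium-like mass, a 200 nm decay length), not values taken from the text:

```python
import numpy as np

hbar = 1.0546e-34      # J s
m = 1.443e-25          # kg: roughly one 87Rb atom (illustrative value)
kappa = 1.0 / 200e-9   # 1/m: inverse EW decay length of 200 nm (illustrative)
beta = 0.5             # reduction factor of the potential after the transfer
v_i = 0.10             # m/s: incoming velocity (illustrative value)

def delta_phi(v_f):
    """Evaluate Eq. (1); valid for sqrt(beta)*v_i < v_f < v_i."""
    # transfer speed from v_f^2 = v_t^2 + beta*(v_i^2 - v_t^2)
    v_t = np.sqrt((v_f**2 - beta * v_i**2) / (1.0 - beta))
    v = np.linspace(-v_t, v_t, 20001)
    dz = -np.log(beta * (v_i**2 - v**2) / (v_f**2 - v**2)) / (2.0 * kappa)
    dv = v[1] - v[0]
    area = np.sum(0.5 * (dz[1:] + dz[:-1])) * dv   # trapezoid rule
    return (m / hbar) * area
```

Scanning $v_{\rm f}$ shows $\Delta\varphi$ growing from $0$ at $v_{\rm f}=\sqrt{\beta}v_{\rm i}$ and steepening toward $v_{\rm f}=v_{\rm i}$, consistent with the fringe-period trends stated above.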
\begin{figure}
\caption{Phase-space trajectories of atoms being repelled by an evanescent potential. Atoms initially in state
$|1\rangle$ can be transferred to state $|2\rangle$ and continue on a different path through phase space. (a) Depending on the initial shape of the wave packet (e.g. the light and dark grey areas) which-way information can be obtained or not. (b) The accumulated phase difference between these two paths, indicated by the enclosed gray area, may give rise to interference effects. (c) A momentum kick due to the spontaneous recoil gives rise to an extra phase contribution.}
\label{fig2}
\end{figure}
In a semi-classical picture the atom can be treated as a wave packet which is subject to Heisenberg's uncertainty relation $\Delta z \Delta v_z\geq\hbar/2m$. The distribution of uncertainty between position $z$ and momentum $mv_z$ is determined by the experimental preparation procedure of the atoms. For a wave packet that initially has a large spread in momentum, it is not possible to unambiguously determine the phase difference between the two possible paths, since the wave packet is spread out over several classical trajectories through phase space. This is indicated by the light grey area in Fig. \ref{fig2}(a). It is, however, possible to determine whether the transfer to state $|2\rangle$ is on the ingoing or outgoing path, by observing the timing of the spontaneously emitted photon. Therefore, it is not expected that wave packets with this shape show interference. On the other hand, a minimum uncertainty wave packet with a narrow initial momentum spread will more closely follow a classical trajectory through phase space. This is indicated by the dark grey area in Fig. \ref{fig2}(a). The phase difference between the two paths is well defined. The two points in phase space, A and A', where a transition to the final trajectory is possible are covered simultaneously by the wave packet. Thus no which-way information can be obtained by observing the time of emission of the spontaneously emitted photon. The initial trade-off between position and momentum uncertainty in a bandwidth limited wave packet determines whether interference can be a priori excluded or not.
The random direction of the spontaneously emitted photon can be taken into account in the motion of the atom by a random momentum jump. This makes the atom propagate on a different trajectory through phase space than it would have without the random recoil. The momentum changes are indicated by horizontal arrows in the phase-space diagram of Fig. \ref{fig2}(c). For a single atom, or a collection of distinguishable atoms, the spontaneous recoil could be measured by detecting the direction in which the photon was emitted. Due to this possibility there will be a set of interference patterns, one for each recoil direction. By disregarding the information present in the scattered photons, we probe the incoherent sum of all these interference patterns. The phase difference between the two paths is different with respect to the recoil free case, and it depends on the direction of the recoil. It is indicated by the gray areas in Fig. \ref{fig2}(c). In order for the interference to be experimentally observable the difference between the interference patterns with a certain recoil direction should not be too large. This means that the phase difference between recoil components in the $\pm z$ directions should be less than $\pi$. For larger final velocities these phase corrections get larger as is apparent from comparing the areas around the point B' with the areas around point A' in Fig. \ref{fig2}(c). We thus expect the visibility of the interference to decrease for larger final velocities.
Note that which-way information cannot be retrieved from a measurement of the frequency $\omega$ of the emitted photon either. This frequency is determined by energy conservation: $\hbar\omega=\frac{1}{2}m(v_{\rm i}^2-v_{\rm f}^2)+\hbar\omega_{\rm EW}-\hbar\Delta_{\rm 12}$, where $\omega_{\rm EW}$ is the frequency of the evanescent photon and
$\hbar\Delta_{\rm 12}$ is the energy difference between states $|1\rangle$ and $|2\rangle$. Because the initial and final kinetic energies are equal for both trajectories, the frequency of the spontaneously emitted photon is equal for both interfering paths.
\section{Time-independent approach}
The question whether interference is visible will be answered in this section by considering analytical solutions of the time-independent Schr\"odinger equation \begin{equation} -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2}\psi_1(z)+V_1e^{-2\kappa z}\psi_1(z)=\frac{p_0^2}{2m}\psi_1(z), \end{equation} describing stationary states with a total energy $p_0^2/2m$ on the potential $V_1\exp(-2\kappa z)$. It thus describes particles with momenta $\pm p_0$ in the asymptotic limit of large $z$. This is one of the few examples where the eigenfunctions of the Schr\"odinger equation are known analytically. The solutions are given by \begin{equation} \psi_1(z)=\sqrt{\frac{4p_0}{\pi\hbar\kappa}\sinh\left(\frac{\pi p_0}{\hbar\kappa}\right)}K_{ip_0/\hbar\kappa}\left(\frac{\sqrt{2mV_1}}{\hbar\kappa} e^{-\kappa z}\right), \label{BesselKeigenfunction} \end{equation} where $K_{\alpha}(\cdot)$ is the Bessel-K function of order $\alpha$ \cite{AbrSte72}. These functions are normalized such that the asymptotic density is independent of $p_0$. They are also given by \cite{HenCouAsp94} where a different normalization is used.
This wave function describes the atoms that are incident in state $|1\rangle$. After the spontaneous Raman transition to state $|2\rangle$, the atoms are described by one of the eigenfunctions $\psi_{2,p}(z)$ with final momentum $p$ in the potential $V_2\exp(-2\kappa z)$. The final wave function in momentum space is given by the overlap integral \begin{equation}
\phi_{k}(p)\propto\int_{-\infty}^{\infty}\psi_1(z)e^{-\kappa z-i k z}\psi_{2,p}^*(z)\mathrm{d} z, \label{timedependentk} \end{equation}
where the recoil due to the absorbed evanescent photon is taken into account by the factor $\exp(-\kappa z)$ and the recoil due to the spontaneously emitted photon by the factor $\exp(-i k z)$, with $\hbar k$ the momentum component of the recoil in the $z$ direction. As already discussed there will be interference patterns $|\phi_{k}(p)|^2$ for every value $\hbar k$ of the recoil. A measurement that disregards the emitted photon yields the sum of all these possible interference patterns. In this derivation we will assume an isotropic distribution of the recoil momentum $\hbar k$. This is an approximation, since the distribution depends on the polarization of the spontaneously emitted photon. We will come back to this point in section \ref{sec5}. This leads to \begin{equation}
|\phi(p)|^2=\int_{-k_0}^{+k_0}|\phi_{k}(p)|^2\mathrm{d} k, \label{tindependent} \end{equation} with $\hbar k_0$ the total recoil momentum.
Fig. \ref{fig3} shows the behavior of the momentum distribution for various values of the $z$ component of the photon recoil $\hbar k$. It is calculated using Eq. \eqref{timedependentk}, with an initial momentum $p_0=2\hbar k_0$, a potential steepness $\kappa=k_0/8$, and potential reduction $\beta=0.2$. The main features of this figure can be understood from semiclassical arguments.
\begin{figure}
\caption{The behavior of the interference pattern $|\phi_k(p)|^2$ versus the final momentum $p$, expressed in units of the photon recoil $\hbar k_0$, for various values of the $z$ component of the photon recoil $\hbar k$ for parameters $p_0=2\hbar k_0$, $\kappa=k_0/8$, and $\beta=0.2$. The dashed curve indicates the classically lowest reachable momentum. Interference is visible in the area that can be reached by two trajectories, enclosed by the dashed curve and straight lines. The triangular regions can only be reached by one trajectory and hence no interference is visible. }
\label{fig3}
\end{figure}
The dashed curve and the straight lines demarcate three separate classically allowed regions, taking into account the recoil. The dashed curve on the left indicates the lowest classically reachable momentum, given by $p=\sqrt{\beta}p_0\sqrt{1-(\hbar k/p_0)^2/(1-\beta)}$. To the right of this curve, we clearly see a region with interference. In addition we see two triangular regions with no interference, demarcated by two straight lines described by $p=p_0\pm\hbar k$. The triangular regions can be reached by a single phase space trajectory only. The upper (lower) triangle corresponds to a trajectory where the Raman transfer took place on the outgoing (incoming) branch. Only the region to the left of the triangles is reachable by two trajectories and thus shows interference.
The left dashed curve is reached by atoms that scatter a photon near the turning point. Note that in this case the photon recoil has only a small influence on the final momentum $p$. The final momentum is mainly determined by the potential energy near the turning point that is converted to kinetic energy. The amount of kinetic energy that can be added or removed by the photon recoil near the turning point is small because the atomic velocity is small. As a result we see only a slight curvature as a function of the recoil $k$. For larger values of the initial momentum $p_0$ or the ratio $\beta=V_2/V_1$ the left curve will become more and more straight.
As expected, the main part of the momentum distribution is in the classically allowed regions. The distributions peak near the lower classical limit (the dashed curve), resembling the caustic distribution \cite{WolVoiHeu01}. For every recoil direction interference is visible. Although the region with interference is smaller for larger values of the recoil, the remaining interference fringes are present at more or less the same final momenta. This indicates that the spontaneous recoil does not completely wash out the interference. The behavior of the interference does not depend on the sign of the recoil, because a photon that is emitted on the ingoing part of the trajectory has the same effect on the momentum distribution as a photon that is emitted in the opposite direction on the outgoing part of the trajectory.
\begin{figure}
\caption{Final momentum distributions $|\phi(p)|^2$ calculated for different parameters using the time-independent method. Solid lines including the effect of a spontaneous recoil, and dashed lines without the effect of recoils, versus the final momentum $p$ in units of the photon recoil $\hbar k_0$. The dotted lines indicate the classically allowed region without recoil. Between (a) and (b) the initial momentum $p_0$ is changed, (a) and (c) differ in the reduction factor $\beta$, (c)-(f) are a sequence for decreasing decay length $\kappa^{-1}$.}
\label{fig4}
\end{figure}
Fig. \ref{fig4} shows the final momentum distribution, calculated by Eq. \eqref{tindependent} for various experimental parameters. The results are compared with an evaluation without a stochastic contribution. Indeed, the averaging over the spontaneous recoil does not destroy the interference pattern. The small part of the distribution that extends into the classically forbidden region is an evanescent matter wave. Several of our predictions that were made for the general case of a coherent interferometer are noticeable in these graphs. Indeed, the fringe spacing decreases for larger initial and final momenta, for longer decay lengths $\kappa^{-1}$ of the evanescent field, and for smaller values of $\beta$. Furthermore, as predicted for the case of an incoherent interferometer, the visibility of the graphs in which the effect of the recoil has been taken into account decreases for larger final momenta.
\section{Time-dependent approach}
In the previous section we have shown that the incoherent nature of the spontaneous Raman transfer does not prevent us from observing interference. In this section we show that the interference phenomena will also be visible for a wave packet with a finite momentum spread. In the analysis we closely follow the Monte-Carlo wave-function approach \cite{DalCasMol92,MolCasDal93}.
We consider the evolution of a diffraction limited wave packet in state $|1\rangle$ \begin{equation} \psi_1(z,t=0)=\sqrt{\frac{1}{(2\pi)^{1/2}\sigma_{\rm z}}}e^{i k_z z}e^{-\frac{(z-z_0)^2}{4\sigma_{z}^2}} \end{equation} with initial height $z_0$, initial width $\sigma_z$, and initial momentum $p_0=\hbar k_z$. It is normalized such that
$\int|\psi(z,0)|^2\mathrm{d} z=1$ and $\int(z-z_0)^2|\psi(z,0)|^2\mathrm{d} z=\sigma_z^2$. The evolution of the wave packet when it reflects from the evanescent-wave potential with a potential height $V_1$ at $z=0$ is calculated by numerically solving the time-dependent Schr\"odinger equation \begin{equation} i\hbar\frac{\partial}{\partial t}\psi_1(z,t)=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial z^2}\psi_1(z,t)+V_1e^{-2\kappa z}\psi_1(z,t) \end{equation}
using the {\it Quantum kernel} \cite{ThaLie00} package in {\it Mathematica} \cite{mathematica}. This results in a wave packet $\psi_1(z,t)$ at time $t$. At a time $\tau$ a spontaneous Raman transition to state $|2\rangle$ occurs, and the evolution abruptly continues on a potential that is a factor $\beta$ lower. Immediately after the transfer the wave function in state $|2\rangle$ is described by \begin{equation}
\psi_{\tau,k}(z,t=\tau)=\EuScript{N}\psi_1(z,\tau)e^{-\kappa z}e^{-i k z}, \end{equation} where $\EuScript{N}$ denotes a normalization factor. The two exponents are equal to the exponents in Eq. \eqref{timedependentk}. The evolution of this wave function can now be continued up to a time $t_{\rm end}$, leading to a wave function $\psi_{\tau,k}(z,t_{\rm end})$. When $t_{\rm end}$ is large enough the entire wave packet effectively propagates in free space, so that the momentum distribution remains constant. The Fourier transform \begin{equation}
\phi_{\tau,k}(p)=\EuScript{F}\left(\psi_{\tau,k}(z,t_{\rm end})\right) \end{equation} of the wave packet at this time is the wave function in momentum space that endured a momentum kick $\hbar k$ at its transfer time $\tau$. The transfer rate $\Gamma(\tau)$ at a certain time $\tau$ is given by \begin{equation}
\Gamma(\tau)\propto\int_0^{\infty}\psi_1^*(z,\tau){V_1(z)}\psi_1(z,\tau)\mathrm{d} z. \end{equation} We again assume an isotropic distribution of the recoil momentum $\hbar k$. Contributions with different $\tau$ and $k$ have to be summed in an incoherent way. For the momentum distribution as a function of the wave vector of the spontaneously emitted photon we get \begin{equation}
|\phi_k(p)|^2\propto\int_0^{t_{\rm end}}\Gamma(\tau)|\phi_{\tau,k}(p)|^2{\rm d}\tau. \label{phik} \end{equation} A subsequent integration over this wave vector yields \begin{equation}
|\phi(p)|^2=\int_{-k_0}^{k_0}|\phi_k(p)|^2{\rm d}k \end{equation} for the momentum distribution of a sample of atoms.
Fig. \ref{fig5} shows graphs of $|\phi(p)|^2$ for some parameters. All calculations are performed with $t_{\rm end}=70 m/\hbar k_0^2$ for which the criterion that the entire wave packet has left the potential is fulfilled. Fig. \ref{fig5}(a) should be compared with Fig. \ref{fig4}(a). The parameters for Figs. \ref{fig5}(b)-(d) are equal to the parameters used for Fig. \ref{fig4}(c) except for the initial width $\sigma_z$ of the wave packet and thus the momentum spread $\sigma_p=\hbar/\sigma_z$. The interference effects are thus also present for a wave packet with a finite momentum spread. As expected, the interference fringes are more apparent for a wave packet with a smaller momentum spread.
\begin{figure}
\caption{Interference patterns calculated for different parameters calculated using the time-dependent method. Solid lines: $|\phi(p)|^2$, with the effect of a spontaneous recoil, and dashed lines: $|\phi_0(p)|^2$, without the effect of a spontaneous recoil, versus the final momentum $p$ in units of the photon recoil $\hbar k_0$. The dotted lines indicate the classically allowed region without recoil. Between (a) and (c) the reduction factor $\beta$ is changed, (b)-(d) are a sequence where the initial momentum spread $\sigma_p$ is decreased.}
\label{fig5}
\end{figure}
\section{Experimental considerations} \label{sec5}
So far we have considered levels $|1\rangle$ and $|2\rangle$ without discussing which physical level they correspond to. In reality we usually deal with multi-level atoms that moreover include sub-structure. Each of these (sub-)levels has a different interaction with the evanescent field. If more (sub-)levels contribute to the signal the predicted interference can be washed out. For $^{87}$Rb atoms a convenient choice would be the $|Fm\rangle=|1,0\rangle$ ground state for state $|1\rangle$ and the $|Fm\rangle=|2,\pm1\rangle$ ground states for state $|2\rangle$. The evanescent field needs to be linearly ($\pi$) polarized and blue detuned with respect to the $F=1\rightarrow F'=2$ transition of either the $D_1$ or $D_2$ line. Due to selection rules, only a transition over the $|F'm'\rangle=|2,0\rangle$ excited state contributes to the transition from state $|1\rangle$ to state $|2\rangle$. This excited state can decay to either of the $|2,\pm 1\rangle$ ground states by emitting a $\sigma^{\pm}$ polarized photon. Since these states interact identically with the evanescent field, their interference patterns will overlap.
The necessary linear polarization for the EW can only be easily obtained using a TE ($\perp z$) polarized incident beam. For a circularly polarized photon the distribution of wave vectors of the spontaneously emitted photon is non-isotropic. The circularly polarized photon has its quantization axis along the polarization axis of the EW field. The recoil distribution in the $z$-direction due to such a photon is given by $\frac{3}{16}(3-(k/k_0)^2)$. Intensity distributions of dipole radiation are given, e.g., in \cite{Jackson99}. Since this distribution has a maximum for $k=0$, for which the recoil-dependent interference patterns are most pronounced, as is visible in Fig. \ref{fig3}, the visibility of the interference signals will be slightly better than presented in this paper.
\section{Discussion and conclusions}
The calculations presented in this paper have been performed for low initial velocities $v_{\rm i}$. This is because both calculation procedures turned out to be limited by computational resources. For the time-independent approach the evaluation in {\it Mathematica} \cite{mathematica} of the Bessel-K functions of high imaginary order becomes very slow. For the time-dependent approach the number of sampling points, necessary for the numerical evaluation, becomes too large due to the highly oscillatory character of the incident wave packet. However, we expect even better signals for realistic values of the initial velocity $v_{\rm i}$, so the calculations represent a worst case.
By numerically solving the Schr\"odinger equation we can now reply with an unambiguous yes to the question {\it can the beamsplitter in an atom interferometer work on the basis of spontaneous emission?} The intuitive objections to whether this is possible have been refuted. The semi-classical arguments have been confirmed by the full quantum-mechanical calculations. Which-way information due to the possibility of detecting the time of emission of the spontaneously emitted photon is avoided by choosing a sufficiently narrow velocity uncertainty. A wave packet covers both transfer points in phase space simultaneously if its velocity is defined accurately enough. The incoherent nature of a spontaneous emission process due to the random recoil direction of the atom is visible in all the calculated interference curves, but does not lead to a complete scrambling of the interference. For larger final velocities, for which the transfer points are separated more and the acquired random phase is consequently larger, the visibility of the interference fringes indeed decreases. Furthermore the fringe period indeed qualitatively shows the behavior that was predicted on the basis of the semi-classical calculations.
Our analysis also shows that the absence or presence of which-way information is not the same as the perturbing effect of the recoils due to spontaneous transitions. This is most clearly seen in Fig. \ref{fig2} and is in agreement with the viewpoint of D\"urr {\it et al.} \cite{DurNonRem98}.
\section*{Acknowledgments}
This work is part of the research program of the ``Stichting voor Fundamenteel Onderzoek van de Materie'' (FOM) which is financially supported by the ``Nederlandse Organisatie voor Wetenschappelijk Onderzoek'' (NWO).
\end{document}
\begin{document}
\title{On cubic Kummer towers of Garcia, Stichtenoth and Thomas type}
\begin{abstract} In this paper we initiate the study of the class of cubic Kummer type towers considered by Garcia, Stichtenoth and Thomas in 1997 by classifying the asymptotically good ones in this class. \end{abstract}
\maketitle
\section{Introduction} \label{intro} The importance of asymptotically good recursive towers in coding theory and other branches of information theory is well known (see, for instance, \cite{NX01}). Among the class of recursive towers there is an important one, namely the class of Kummer type towers, which are recursively defined by equations of the form $y^m=f(x)$ for some suitable exponent $m$ and rational function $f(x)\in K(x)$. A particular case was studied by Garcia, Stichtenoth and Thomas in \cite{GST97}, where a Kummer tower over a finite field $\mathbb{F}_q$ with $q\equiv 1\mod m$ is recursively defined by an equation of the form \begin{equation}\label{kummergst97} y^m=x^df(x)\,, \end{equation} where $f(x)$ is a polynomial of degree $m-d$ such that $f(0)\neq 0$ and $\gcd(d,m)=1$. The authors showed that such towers have positive splitting rate and that, assuming the existence of a subset $S_0$ of $\mathbb{F}_q$ with certain properties, their good asymptotic behavior can be deduced together with a concrete nontrivial lower bound for their limit. Later Lenstra showed in \cite{Le02} that in the case of an equation of the form \eqref{kummergst97} over a prime field there is no such set $S_0$ satisfying the above conditions of Garcia, Stichtenoth and Thomas. Because of Lenstra's result it seems reasonable to expect that many Kummer towers defined by equations of the form \eqref{kummergst97} have infinite genus. However, to the best of our knowledge there are no examples of such towers in the literature. The aim of this paper is to classify the asymptotically good Kummer type towers considered by Garcia, Stichtenoth and Thomas in \cite{GST97} which are recursively defined by an equation of the form \begin{equation}\label{cubicgst} y^3=xf(x)\,, \end{equation} over a finite field $\mathbb{F}_q$, where $q\equiv 1\mod 3$ and $f(t)\in\mathbb{F}_q[t]$ is a monic quadratic polynomial.
It was shown in \cite{GST97} that there are choices of the polynomial $f$ giving good asymptotic behavior and even optimal behavior. For instance, if $f(x)=x^2+x+1$ then equation \eqref{cubicgst} defines an optimal tower over $\mathbb{F}_4$, the finite field with four elements (see \cite[Example 2.3]{GST97}). It is worth pointing out that the quadratic case (i.e. an equation of the form $y^2=x(x+a)$ with $0\neq a\in \mathbb{F}_q$) is already included in the extensive computational search of good quadratic tame towers performed in \cite{MaWu05}.
The organization of the paper is as follows. In Section \ref{notanddef} we give the basic definitions and we establish the notation to be used throughout the paper. In Section \ref{genus} we give an overview of the main ideas, in the general setting of towers of function fields over a perfect field $K$, used to prove the infiniteness of the genus of a tower. In Section \ref{pyramid} we prove some criteria involving the basic function field associated to a tower to check the infiniteness of its genus. Finally, in Section \ref{examples} we prove our main result (Theorem \ref{teoexe2}), where we show that asymptotically good towers defined by an equation of the form \eqref{kummergst97} \[y^3=x(x^2+bx+c)\,,\] with $b,c \in \mathbb{F}_q$ and $q\equiv 1 \mod 3$ fall into three mutually disjoint classes according to the way the quadratic polynomial $x^2+bx+c$ splits into linear factors over $\mathbb{F}_q$. From this result many examples of non skew recursive Kummer towers with positive splitting rate and infinite genus can be given. We would like to point out that there are very few known examples exhibiting this phenomenon. An example of a non skew Kummer tower (but not of the form \eqref{kummergst97}) with infinite genus over a prime field $\mathbb{F}_p$ was given in \cite{MaWu05} but, as we will show at the end of Section \ref{genus}, there is a mistake in the argument used by the authors. There are also examples of non skew Kummer towers with bad asymptotic behavior over some non-prime finite fields given by Hasegawa in \cite{Ha05}, but those Kummer towers have zero splitting rate.
\section{Notation and Definitions}\label{notanddef} In this work we shall be concerned with \emph{towers} of function fields and this means a sequence $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a field $K$ where for each index $i\geq 0$ the field $F_i$ is a proper subfield of $F_{i+1}$, the field extension $F_{i+1}/F_i$ is finite and separable and $K$ is the full field of constants of each field $F_i$ (i.e. $K$ is algebraically closed in each $F_i$). If the genus $g(F_i)\rightarrow \infty$ as $i\rightarrow \infty$ we shall say that $\mathcal{F}$ is a {\em tower in the sense of Garcia and Stichtenoth}.
Following \cite{Stichbook09} (see also \cite{GS07}), one way of constructing towers of function fields over $K$ is by giving a bivariate polynomial \[H\in K[X,Y]\,,\]
and a transcendental element $x_0$ over $K$. In this situation a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $K$ is defined as
\begin{enumerate}[(i)]
\item $F_0=K(x_0)$, and
\item $F_{i+1}=F_i(x_{i+1})$ where $H(x_i,x_{i+1})=0$ for $i\geq 0$.
\end{enumerate}
A suitable choice of the bivariate polynomial $H$ must be made in order to have towers. When the choice of $H$ satisfies all the required conditions we shall say that the tower $\mathcal{F}$ constructed in this way is a {\em recursive tower} of function fields over $K$. Note that for a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $K$ we have that
$$F_i=K(x_0,\ldots,x_i)\qquad \text{for }i\geq 0,$$ where $\{x_i\}_{i=0}^{\infty}$ is a sequence of transcendental elements over $K$.
Associated to a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields $F_i$ over $K$ we have the so called {\em basic function field} $K(x,y)$ where $x$ is transcendental over $K$ and $H(x,y)=0$.
For the sake of simplicity we shall say from now on that $H$ defines the tower $\mathcal{F}$ or, equivalently, that tower $\mathcal{F}$ is recursively defined by the equation $H(x,y)=0$.
A tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a perfect field $K$ of positive characteristic is called \emph{tame} if the ramification index $e(Q|P)$ of any place $Q$ of $F_{i+1}$ lying above a place $P$ of $F_i$ is relatively prime to the characteristic of $K$ for all $i\geq 0$. Otherwise the tower $\mathcal{F}$ is called \emph{wild}.
The set of places of a function field $F$ over $K$ will be denoted by $\mathbb{P}(F)$.
The following definitions are important when dealing with the asymptotic behavior of a tower. Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields over a finite field $\mathbb{F}_q$ with $q$ elements. The {\em splitting rate} $\nu(\mathcal{F})$ and the {\em genus} $\gamma(\mathcal{F})$ of $\mathcal{F}$ over $F_0$ are defined, respectively, as $$\nu(\mathcal{F})\colon=\lim_{i\rightarrow \infty}\frac{N(F_i)}{[F_i:F_0]}\,, \qquad\gamma(\mathcal{F})\colon=\lim_{i\rightarrow \infty}\frac{g(F_i)}{[F_i:F_0]}\,.$$ If $g(F_i)\geq 2$ for $i\geq i_0\geq 0,$ the {\em limit} $\lambda(\mathcal{F})$ of $\mathcal{F}$ is defined as $$\lambda(\mathcal{F})\colon=\lim_{i\rightarrow \infty}\frac{N(F_i)}{g(F_i)}\,.$$ It can be seen that all the above limits exist and that $\lambda(\mathcal{F})\geq 0$ (see \cite[Chapter 7]{Stichbook09}).
Note that the definition of the genus of $\mathcal{F}$ makes sense also in the case of a tower $\mathcal{F}$ of function fields over a perfect field $K$.
We shall say that a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over $\mathbb{F}_q$ is {\em asymptotically good} if $\nu(\mathcal{F})>0$ and $\gamma(\mathcal{F})<\infty$. If either $\nu(\mathcal{F})=0$ or $\gamma(\mathcal{F})=\infty$ we shall say that $\mathcal{F}$ is {\em asymptotically bad}.
From the well-known Hurwitz genus formula (see \cite[Theorem 3.4.13]{Stichbook09}) we see that the condition $g(F_i)\geq 2$ for $i\geq i_0$ in the definition of $\lambda(\mathcal{F})$ implies that $g(F_i)\rightarrow \infty$ as $i\rightarrow \infty$. Hence, when we speak of the limit of a tower of function fields we are actually speaking of the limit of a tower in the sense of Garcia and Stichtenoth (see \cite[Section 7.2]{Stichbook09}).
It is easy to check that in the case of a tower $\mathcal{F}$ we have that $\mathcal{F}$ is asymptotically good if and only if $\lambda(\mathcal{F})>0$. Therefore a tower $\mathcal{F}$ is asymptotically bad if and only if $\lambda(\mathcal{F})=0$.
\section{The genus of a tower}\label{genus}
As we mentioned in the introduction, a simple and useful condition implying that $H \in \mathbb{F}_q[x,y]$ does not give rise to an asymptotically good recursive tower $\mathcal{F}$ of function fields over $\mathbb{F}_q$ is that $\deg_xH\neq\deg_yH$. With this situation in mind we shall say that a recursive tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ of function fields over a perfect field $K$ defined by a polynomial $H \in K[x,y]$ is {\em non skew} if $\deg_xH=\deg_yH$. In the skew case (i.e. $\deg_xH\neq \deg_yH$) we might have that $[F_{i+1}:F_i]\geq 2$ for all $i\geq 0$ and even that $g(F_i)\rightarrow \infty$ as $i\rightarrow\infty$ but, nevertheless, $\mathcal{F}$ will be asymptotically bad. What happens is that if $\deg_yH>\deg_xH$ then the splitting rate $\nu(\mathcal{F})$ is zero (this situation makes sense in the case $K=\mathbb{F}_q$) and if $\deg_xH>\deg_yH$ the genus $\gamma(\mathcal{F})$ is infinite (see \cite{GS07} for details). Therefore the study of good asymptotic behavior in the case of recursive towers must be focused on non skew towers. Since the splitting rate of recursive towers defined by an equation of the form \eqref{kummergst97} is positive, their good asymptotic behavior is determined by their genus.
From now on $K$ will denote a perfect field and we recall that $K$ is assumed to be the full field of constants of each function field $F_i$ of any given tower $\mathcal{F}$ over $K$. We recall a well-known formula for the genus of a tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ in terms of a subtower $\mathcal{F}'=\{F_{s_i}\}_{i=1}^\infty$, namely \begin{equation}\label{ecu1paper3} \gamma(\mathcal{F})= \lim_{i \rightarrow \infty}\frac {g(F_{s_i})}{[F_{s_i}:F_0]}=g(F_0)-1+\frac 1 2 \sum_{i=1}^\infty \frac{\deg{\operatorname{Diff}(F_{s_{i+1}}/F_{s_i})}}{[F_{s_{i+1}}:F_0]}.\end{equation} \begin{rem}\label{remarkdivisor} Suppose now that there exist positive functions $c_1(t)$ and $c_2(t)$, defined for $t\geq 0$, and a divisor $B_i\in \mathcal{D}(F_i)$ such that for each $i\geq 1$
\begin{enumerate}[\text{Condition} (a):] \item $\deg B_i\geq c_1(i)[F_i:F_0]$ and\label{thm3.2-a}
\item $\sum\limits_{P\in supp(B_i)}\sum\limits_{Q|P}d(Q|P)\deg Q\geq c_2(i)[F_{{i+1}}\colon F_i]\deg{B_i}\,,$ \label{thm3.2-b}
\end{enumerate} where the inner sum runs over all places $Q$ of $F_{i+1}$ lying above $P$, then it is easy to see from \eqref{ecu1paper3} that if the series \begin{equation}\label{thm3.2-c}
\sum_{i=1}^{\infty}c_1(i)c_2(i) \end{equation} is divergent then $\gamma(\mathcal{F})=\infty$. \end{rem}
With the same hypotheses as in Remark~\ref{remarkdivisor}, if in addition $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is non skew and recursively defined by the equation $H(x,y)=0$ such that $H(x,y)$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$ then condition \eqref{thm3.2-a}
can be replaced by the following \begin{enumerate}[(a')] \item $\deg{B_{j}}\geq c_1(j)\cdot \deg(b(x_{j}))^{j}$ where $b\in K(T)$ is a rational function and $(b(x_{j}))^{j}$ denotes either the pole divisor or the zero divisor of $b(x_{j})$ in $F_{j}$,\label{thm3.2-a'} \end{enumerate} and the same result holds, i.e., $\gamma(\mathcal{F})=\infty$. These are the usual ways of proving the infiniteness of the genus of a recursive tower $\mathcal{F}$.
In particular the existence of a divisor as in Remark \ref{remarkdivisor} can be proved by showing that sufficiently many places of $F_i$ are ramified in $F_{i+1}$, in the sense that the number $r_i=\#(R_i)$, where \[R_i=\{P\in\mathbb{P}(F_i)\,:\,\text{$P$ is ramified in $F_{i+1}$}\}\,,\] satisfies the estimate \[r_i\geq c_i[F_{i+1}:F_0]\,,\] where $c_i>0$ for $i\geq 1$ and the series $\sum_{i=1}^{\infty}c_i$ is divergent. It is easily seen that the divisor \[B_i=\sum_{P\in R_i}P\] of $F_i$ satisfies the conditions \eqref{thm3.2-a} and \eqref{thm3.2-b} of Remark \ref{remarkdivisor} with $c_1(i)= c_i[F_{i+1}:F_i]$ and $c_2(i)=[F_{i+1}:F_i]^{-1}$.
We recall now a standard result from the theory of constant field extensions (see \cite[Theorem 3.6.3]{Stichbook09}): let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields over $K$. By considering the constant field extensions $\bar{F_i}=F_i\cdot K'$ where $K'$ is an algebraic closure of $K$, we have the so called constant field extension tower $\bar{\mathcal{F}}=\{\bar{F_i}\}_{i=0}^{\infty}$ of function fields over $K'$ and \[\gamma(\mathcal{F})=\gamma(\bar{\mathcal{F}})\,.\] Now we can prove the following result which will be useful later. \begin{pro}\label{propmajwulf} Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a tower of function fields over $K$. Suppose that either each extension $F_{i+1}/F_i$ is Galois or that there exists a constant $M$ such that $[F_{i+1}:F_i]\leq M$ for $i\geq 0$. In order to have infinite genus it suffices to find, for infinitely many indices $i\geq 1$, a place $P_i$ of $F_0$ unramified in $F_i$ and such that each place of $F_i$ lying above $P_i$ is ramified in $F_{i+1}$.
In particular, suppose that the tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is a non skew recursive tower defined by a suitable polynomial $H\in K[x,y]$. Let $\{x_i\}_{i=0}^{\infty}$ be a sequence of transcendental elements over $K$ such that $F_{i+1}=F_i(x_{i+1})$ where $H(x_{i+1},x_i)=0$. Then $\gamma(\mathcal{F})=\infty$ if \begin{enumerate}[(i)] \item $H$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$. \item There exists an index $k\geq 0$ such that for infinitely many indices $i\geq 0$ there is a place $P_i$ of $K(x_{i-k},\ldots,x_i)$ which is unramified in $F_i$ and each place of $F_i$ lying above $P_i$ is ramified in $F_{i+1}$. \end{enumerate} \end{pro} \begin{proof} We may assume that $K$ is algebraically closed since, by passing to the constant field tower $\bar{\mathcal{F}}=\{\bar{F_i}\}_{i=0}^{\infty}$ with $\bar{F_i}=F_i\cdot K'$ where $K'$ is an algebraic closure of $K$, we have $\gamma(\mathcal{F})=\gamma(\bar{\mathcal{F}})$. In this situation, since $K$ is algebraically closed, for each $i\geq 0$ the place $P_i$ of $F_0$ splits completely in $F_i$ and each place $Q$ of $F_i$ lying above $P_i$ ramifies in $F_{i+1}$. Now consider the following sets \[R_i=\{P\in\mathbb{P}(F_i)\,:\,\text{$P$ is ramified in $F_{i+1}$}\}\,,\]
and \[A_i=\{Q\in\mathbb{P}(F_{i+1})\,:\,\text{$Q$ lies over some $P\in R_i$}\}\,,\] and set $r_i=\#(R_i)$. Let $B_i$ be the divisor of $F_i$ defined as \[B_i=\sum_{P\in R_i}P\,.\] Then $\deg B_i \geq r_i\geq [F_i:F_0]$, because every place $Q$ of $F_i$ lying above $P_i$ is in $R_i$ and $P_i$ splits completely in $F_i$, so that condition \eqref{thm3.2-a} of Remark \ref{remarkdivisor} holds with $c_1(i)=1$.
Now suppose that each extension $F_{i+1}/F_i$ is Galois. Then $A_i$ is the set of all places of $F_{i+1}$ lying above a place of $R_i$. Therefore \begin{align*}
\sum_{P\in \operatorname{supp}(B_i)}\sum_{\substack{Q\in\mathbb{P}(F_{i+1})\\ Q|P}}d(Q|P)\deg Q & \geq \sum_{P\in R_i}\sum_{\substack{Q\in A_i\\ Q|P}}d(Q|P)\deg Q\\ & \geq \frac{1}{2}\sum_{P\in R_i}\sum_{\substack{Q\in A_i\\ Q|P}}e(Q|P)f(Q|P)\deg P\\
&=\frac{1}{2}[F_{i+1}:F_i]\, \sum_{P\in R_i}\deg P\\
&\geq \frac{1}{2}[F_{i+1}:F_i]\,\deg B_i \,. \end{align*} Then condition \eqref{thm3.2-b} of Remark \ref{remarkdivisor} holds with $c_2(i)=1/2$ and the series $\sum_{i=1}^{\infty}c_1(i)c_2(i)$ is divergent. Hence $\gamma(\mathcal{F})=\infty$. In the case that $[F_{i+1}:F_i]\leq M$ for $i\geq 0$, taking $c_2(i)=M^{-1}$ we arrive at the same conclusion.
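The factor $1/2$ in the estimate above can be justified as follows (a standard argument, spelled out here for clarity): by Dedekind's different theorem $d(Q|P)\geq e(Q|P)-1$, and $e(Q|P)\geq 2$ for every $Q\in A_i$ lying over $P\in R_i$, since in the Galois case all ramification indices over a ramified place coincide. Hence
\[
d(Q|P)\;\geq\; e(Q|P)-1\;\geq\;\tfrac{1}{2}\,e(Q|P)\,,
\qquad\text{while}\qquad
\sum_{Q|P}e(Q|P)f(Q|P)=[F_{i+1}:F_i]
\]
by the fundamental identity, which also gives the middle equality in the estimate.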
Finally suppose that the tower $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ is non skew and recursive. Since $\mathcal{F}$ is non skew and $(i)$ holds, we have that $[F_i:F_0]=m^i=[F_i:K(x_i)]$ where $m=\deg_yH=\deg_xH$. Now we proceed with the same divisor $B_i$ as defined above using $(ii)$. We have that \[\deg B_i\geq [F_i:K(x_{i-k},\ldots,x_i)]=m^{-k}[F_i:K(x_i)]=m^{-k}[F_i:F_0]\,,\] so that by taking $c_1(i)=m^{-k}$ and $c_2(i)=m^{-k-1}$ we have the desired conclusion. \end{proof}
An example of the situation described in the second part of Proposition \ref{propmajwulf} for $k=0$ was given in Lemma 3.2 in \cite{MaWu05} and applied to the non skew Kummer tower \[y^3=1-\left(\frac{x-1}{x+1}\right)^3\,,\] over $\mathbb{F}_p$ with $p\equiv 1,7\mod 12$. Unfortunately there is a mistake in the proof, as we now show. The basic function field associated to that tower is $\mathbb{F}_p(x,y)$ and both extensions $\mathbb{F}_p(x,y)/\mathbb{F}_p(x)$ and $\mathbb{F}_p(x,y)/\mathbb{F}_p(y)$ are Galois. The key part of the argument is that $-3^{-1}$ is not a square in $\mathbb{F}_p$ with $p\equiv 1,7\mod 12$. With this we would have that the polynomial $x^2+3^{-1}$ is irreducible in $\mathbb{F}_p[x]$ and then it would define the place $P_{x^2+3^{-1}}$ of $\mathbb{F}_p(x)$ which is not only totally ramified in $\mathbb{F}_p(x,y)$ (by the theory of Kummer extensions) but also of degree $2$, which is crucial for their argument. From these facts the authors deduce that the above equation defines a tower in the sense of Garcia and Stichtenoth with infinite genus. However, any such prime is congruent to $1$ modulo $3$, and $-3^{-1}$ is a square in $\mathbb{F}_p$ for $p\equiv 1\mod 3$, as can easily be seen using the quadratic reciprocity law. Thus the polynomial $x^2+3^{-1}$ is not irreducible in $\mathbb{F}_p[x]$, so it does not define a place of $\mathbb{F}_p(x)$.
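The failure of the key claim is also easy to verify computationally. The following small script (our illustration, not part of the original argument) checks by brute force that for every prime $p\equiv 1,7 \bmod 12$ in a sample range, $-3^{-1}$ is indeed a square modulo $p$, so that $x^2+3^{-1}$ is reducible over $\mathbb{F}_p$:

```python
# Sanity check (our illustration, not part of the original paper):
# every prime p = 1, 7 (mod 12) satisfies p = 1 (mod 3), and for such p
# the element -3^{-1} IS a square in F_p, so x^2 + 3^{-1} is reducible
# over F_p and does not define a degree-2 place of F_p(x).

def is_prime(n):
    """Trial-division primality test (sufficient for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def neg_inv3_is_square(p):
    """Return True iff -3^{-1} is a quadratic residue modulo the prime p."""
    a = (-pow(3, -1, p)) % p          # the element -3^{-1} of F_p
    return any((t * t) % p == a for t in range(p))

primes = [p for p in range(5, 500) if is_prime(p) and p % 12 in (1, 7)]
assert all(p % 3 == 1 for p in primes)            # p = 1 (mod 3) always
assert all(neg_inv3_is_square(p) for p in primes)
print("checked", len(primes), "primes: -1/3 is always a square")
```

(The modular inverse `pow(3, -1, p)` requires Python 3.8 or later.)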
\section{Climbing the pyramid}\label{pyramid}
In this section and the next one we shall use the following convention: a place defined by a monic and irreducible polynomial $f\in K[x]$ in a rational function field $K(x)$ will be denoted by $P_{f(x)}$. A slight modification of the arguments given in Lemma 3.2 of \cite{MaWu05} allowed us to prove the following useful criterion for infinite genus in the case of recursive towers; we include the proof for the sake of completeness. The main difficulty in applying Lemma 3.2 of \cite{MaWu05} is that it requires both extensions $K(x,y)/K(x)$ and $K(x,y)/K(y)$ to be Galois, a condition which is unusual or simply hard to verify. Getting rid of the requirement that $K(x,y)/K(y)$ be Galois was the key ingredient in proving the main result of the next section. \begin{pro}\label{examplemawu} Let $\mathcal{F}=\{F_i\}_{i=0}^{\infty}$ be a non skew recursive tower of function fields over $K$ defined by a polynomial $H\in K[x,y]$ with the same degree $m$ in both variables. Let $K(x,y)$ be the basic function field associated to $\mathcal{F}$ and consider the set $$N=\{\deg{R} : R\in \mathbb{P}(K(y)) \text{ and $R$ is ramified in $K(x,y)$}\} \,. $$ Let $d\in \mathbb{N}$ be such that $\gcd(d,m)=1$ and $n\not\equiv 0\mod d$ for all $n\in N$. Suppose that there is a place $P$ of $K(x)$ with the following properties: \newcounter{saveenum} \begin{enumerate}[(a)] \item $\deg{P}=d$ and\label{item-a-mawu} \item $P$ is ramified in $K(x,y)$.\label{item-b-mawu} \setcounter{saveenum}{\value{enumi}} \end{enumerate} Then $\gamma(\mathcal{F})=\infty$ if $K(x,y)/K(x)$ is a Galois extension and $H$, as a polynomial with coefficients in $K(y)$, is irreducible in $K(y)[x]$. \end{pro} \begin{proof} Consider a sequence $\{x_i\}_{i=0}^{\infty}$ of transcendental elements over $K$ such that \[F_0=K(x_0)\quad\text{and}\quad F_{i+1}=F_i(x_{i+1})\,,\] where $H(x_i,x_{i+1})=0$ for $i\geq 0$. Let $i\geq 1$.
By the above assumptions there is a place $P_i$ of $K(x_i)$ ramified in the extension $K(x_i, x_{i+1})/K(x_i)$ with $\deg{P_i}=d$. Let $Q$ be a place of $F_i$ lying above $P_i$. Let $P_0, P_1, \ldots, P_i$ be the restrictions of $Q$ to $K(x_0), K(x_1),\ldots, K(x_i)$ respectively, and let $P'_j$ be the restriction of $Q$ to $K(x_{j-1},x_j)$ for $j=1,\ldots, i$ (see Figure \ref{figu5.9} below).
\begin{figure}
\caption{Ramification of $P_0, P_1,\ldots P_i$ in the pyramid.}
\label{figu5.9}
\end{figure}
By hypothesis we have that $e(P'_i|P_i)=1$. On the other hand \begin{equation}\label{inertiadegree}
f(P_j'|P_j)\deg{P_j}=\deg{P_j'}=f(P_j'|P_{j-1})\deg{P_{j-1}}\,, \end{equation}
for $1\leq j\leq i$, where $f(P_j'|P_j)$ and $f(P_j'|P_{j-1})$ are the respective inertia degrees. Since $d=\deg P_i$ and $\gcd(d,m)=1$, from \eqref{inertiadegree} with $j=i$ we must have that $d$ divides $\deg P_{i-1}$: otherwise some prime factor of $d$ would divide $f(P_i'|P_{i-1})$, which is a divisor of $m$ because $K(x_{i-1},x_i)/K(x_{i-1})$ is Galois, contradicting $\gcd(d,m)=1$. Continuing in this way, using \eqref{inertiadegree} we see that $d$ divides $\deg P_j$ for $j=1,\ldots, i$ and this implies, by hypothesis, that each place $P_j$ is unramified in the extension $K(x_{j-1},x_j)/K(x_j)$ for $j=1,\ldots, i$.
We now have a ramification situation as in Figure \ref{figu5.9}. By Abhyankar's Lemma (see \cite[Theorem 3.9.1]{Stichbook09}) it follows that $e(Q|P_i)=1$. Now let $Q'$ be a place of $F_{i+1}$ lying above $Q$ and let $P_{i+1}'$ be the restriction of $Q'$ to $K(x_i,x_{i+1})$. Then $P_{i+1}'$ lies above $P_i$ and $e(P_{i+1}'|P_i)=e>1$ because $P_i$ is ramified in $K(x_i,x_{i+1})$ and the extension $K(x_i,x_{i+1})/K(x_i)$ is Galois. Once again, by Abhyankar's Lemma, we have that $e(Q'|Q)=e(P'_{i+1}|P_i)>1$. Then conditions $(i)$ and $(ii)$ of Proposition \ref{propmajwulf} hold with $k=0$ and thus $\gamma(\mathcal{F})=\infty$.
\end{proof}
\begin{rem}\label{remark-thm4-2} Note that if we have a ramification situation as in Figure \ref{figu5.9} above and $P_i$ is totally ramified in $K(x_i,x_{i+1})$ for all $i\geq 0$, then $Q$ is totally ramified in $F_{i+1}$ for all $i\geq 0$, because $e=[K(x_i,x_{i+1}):K(x_i)]=[F_{i+1}:F_i]$. Therefore, if a recursive sequence $\mathcal{F}$ of function fields is defined by a polynomial $H(x,y)$ which is separable in the second variable, and for each $i\geq 0$ we have a ramification situation as in Figure \ref{figu5.9} with $P_i$ totally ramified in $K(x_i,x_{i+1})$, then $K$ is the full field of constants of each $F_i$, so that $\mathcal{F}$ is, in fact, a tower. \end{rem}
\section{Classification of asymptotically good cubic towers of Garcia, Stichtenoth and Thomas type}\label{examples}
We now prove our main result. As we said in the introduction, Garcia, Stichtenoth and Thomas introduced in \cite{GST97} an interesting class of Kummer type towers over a finite field $\mathbb{F}_q$ with $q\equiv 1\mod m$ defined by an equation of the form \begin{equation} y^m=x^df(x)\,, \end{equation} where $f(x)$ is a polynomial of degree $m-d$ such that $f(0)\neq 0$ and $\gcd(d,m)=1$. These Kummer type towers have positive splitting rate, but over prime fields Lenstra \cite{Le02} showed that they fail to satisfy a well-known criterion for finite ramification locus given in \cite{GST97}, which is the main tool in proving the finiteness of their genus. In this context the next result is important in the study of the cubic case of these Kummer type towers.
\begin{thm}\label{teoexe2}
Let $p$ be a prime number and let $q=p^r$ with $r\in \mathbb{N}$ such that $q\equiv 1 \mod 3$. Let $f(t)=t^2+bt+c \in \mathbb{F}_q[t]$ be a polynomial such that $t=0$ is not a double root of $f$. Let $\mathcal{F}$ be a Kummer type tower over $\mathbb{F}_q$ recursively defined by the equation
\begin{equation}\label{badkummerexample}
y^3=xf(x)\,.
\end{equation}
If $\mathcal{F}$ is asymptotically good then the polynomial $f$ splits into linear factors over $\mathbb{F}_q$. This implies that any asymptotically good tower recursively defined by \eqref{badkummerexample} is of one and only one of the following three types: \begin{enumerate}[Type 1.] \item Recursively defined by $y^3=x(x+\alpha)(x+\beta)$ with nonzero $\alpha,\beta\in\mathbb{F}_q$, $\alpha\neq\beta$. \label{a-teoexe2} \item Recursively defined by $y^3=x^2(x+\alpha)$ with nonzero $\alpha\in\mathbb{F}_q$. \label{b-teoexe2} \item Recursively defined by $y^3=x(x+\alpha)^2$ with nonzero $\alpha\in\mathbb{F}_q$. \label{c-teoexe2} \end{enumerate} \end{thm}
\begin{proof}
Suppose, on the contrary, that the polynomial $f$ is irreducible over $\mathbb{F}_q$. Let us consider the basic function field $F=\mathbb{F}_q(x,y)$. Since the polynomial $f(x)$ is irreducible in $\mathbb{F}_q[x]$ we have that the place $P_{f(x)}$ of $\mathbb{F}_q(x)$ associated to $f(x)$ is of degree $2$ and, by the general theory of Kummer extensions (see \cite[Proposition 6.3.1]{Stichbook09}), $P_{f(x)}$ is totally ramified in $F$. In fact it is easy to see that the genus of $F$ is one and \[\operatorname{Diff} (F/\mathbb{F}_q(x))=2Q_1+2Q_2\,,\] where $Q_1$ is the only place of $F$ lying above $P_x$ (the zero of $x$ in $\mathbb{F}_q(x)$) and $Q_2$ is the only place of $F$ lying above $P_{f(x)}$. Also $Q_1$ is of degree $1$ and $Q_2$ is of degree $2$.
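Indeed, the genus claim can be checked with the Riemann--Hurwitz formula (a quick verification): since $[F:\mathbb{F}_q(x)]=3$, the rational function field has genus $0$, $\deg Q_1=1$ and $\deg Q_2=2$, we get
\[
2g_F-2 \;=\; 3\,(2\cdot 0-2)+\deg\operatorname{Diff}(F/\mathbb{F}_q(x))
\;=\;-6+(2\cdot 1+2\cdot 2)\;=\;0\,,
\]
so $g_F=1$.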
The extension $F/\mathbb{F}_q(y)$ is of degree $3$ because the polynomial \[\phi(t)=tf(t)-y^3\in \mathbb{F}_q(y)[t]\,,\] is the minimal polynomial of $x$ over $\mathbb{F}_q(y)$: otherwise $\phi(t)$ would have a root $z$ in $ \mathbb{F}_q(y)$ and this would imply that $y$ is algebraic over $ \mathbb{F}_q$, a contradiction. Clearly the extension $F/\mathbb{F}_q(y)$ is tame.
By choosing the place $P_{f(x)}$ of $\mathbb{F}_q(x)$ we have that items \eqref{item-a-mawu} and \eqref{item-b-mawu} of Proposition~\ref{examplemawu} hold with $d=2$, so it remains to prove that the integers in the set $$N=\{\deg{R} : R\in \mathbb{P}(\mathbb{F}_q(y)) \text{ and $R$ is ramified in $F$}\} \,, $$ are odd. We shall use the following notation: for $z\in F$ the symbols $(z)_F$, $(z)_0^F$ and $(z)_{\infty}^F$ denote the principal divisor, the zero divisor and the pole divisor of $z$ in $F$ respectively. Using the well known expression of the different divisor in terms of differentials (see Chapter $4$ of \cite{Stichbook09}) we have that \begin{align}\label{diffequality} \begin{split} \operatorname{Diff} (F/\mathbb{F}_q(y)) &= 2(y)_{\infty}^F+(dy)_F\\ &= 2(y)_{\infty}^F+\left(\frac{f(x) + x f'(x)}{3y^2}\right)_F+(dx)_F\\ &=2(y)_{\infty}^F+\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F-2(x)_{\infty}^F+\operatorname{Diff} (F/\mathbb{F}_q(x))\\ & =2(y)_{\infty}^F+\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F-2(x)_{\infty}^F+2Q_1+2Q_2\,, \end{split} \end{align} where $\beta_1$ and $\beta_2$ denote the roots of $f(x)+xf'(x)=3x^2+2bx+c$ and we used that the constant $3$ has trivial divisor. We show now that $(y)_{\infty}^F=(x)_{\infty}^F$. Let $Q\in \operatorname{supp} (y)_{\infty}^F$ and let $S=Q\cap \mathbb{F}_q(x)$. Then
\[3\nu_Q(y)=e(Q|S)\left(\nu_S(x)+\nu_S(f(x))\right)\,.\]
Since $\nu_Q(y)<0$ we must have that $S=P_{\infty}^x$, the pole of $x$ in $\mathbb{F}_q(x)$. Hence $\nu_Q(y)=-e(Q|P_{\infty}^x)=-1$ because by Kummer theory (see \cite[Proposition 6.3.1]{Stichbook09}) $P_{\infty}^x$ is unramified in $F$. Then \[-3=\nu_Q(y^3)=\nu_Q(x)+\nu_Q(f(x))\,,\] and this implies that $\nu_Q(x)<0$. Therefore $-3=3\nu_Q(x)$ and we have $\nu_Q(x)=-1$, which says that $Q\in \operatorname{supp} (x)_{\infty}^F$ and that $Q$ appears with the same coefficient, namely $1$, in $(y)_{\infty}^F$ and in $(x)_{\infty}^F$.
Conversely, let $Q\in \operatorname{supp} (x)_{\infty}^F$. Since $\nu_Q(x)<0$ we have \[3\nu_Q(y)=\nu_Q(x)+\nu_Q(f(x))=3\nu_Q(x)\,,\] so that $\nu_Q(y)=\nu_Q(x)<0$. If $S=Q\cap \mathbb{F}_q(x)$ then
\[3\nu_Q(y)=e(Q|S)\left(\nu_S(x)+\nu_S(f(x))\right)\,,\]
and we must have again that $S=P_{\infty}^x$. This implies that $\nu_Q(y)=-e(Q|P_{\infty}^x)=-1$. Therefore $Q\in \operatorname{supp} (y)_{\infty}^F$ and $Q$ appears with the same coefficient in $(x)_{\infty}^F$ and in $(y)_{\infty}^F$. Hence $(y)_{\infty}^F=(x)_{\infty}^F$ as claimed.
From \eqref{diffequality} we have now that \begin{align}\label{diffequalityreduced} \begin{split} \operatorname{Diff} (F/\mathbb{F}_q(y))&=\left(\frac{(x-\beta_1)(x-\beta_2)}{y^2}\right)_F+2Q_1+2Q_2\\&=(z)_0^F-(z)_\infty^F+2Q_1+2Q_2\,, \end{split} \end{align} where $z=(x-\beta_1)(x-\beta_2)y^{-2}$.
Let $Q$ be a place of $F$ in the support of $(z)_0^F$. Then $\nu_Q(z)>0$ and thus one of the following two cases can occur: \begin{enumerate}[(i)]
\item $\nu_Q(x-\beta_i)>0$ for $i=1$ or $i=2$. In either case $Q$ lies above the rational place $P_{x-\beta_i}$ of $\mathbb{F}_q(x)$. Since $F/\mathbb{F}_q(x)$ is a Galois extension of degree $3$ and $\deg Q=f(Q|P_{x-\beta_i})\deg P_{x-\beta_i}$, we have that either $\deg Q =1$ or $\deg Q =3$.
\item $\nu_Q(y)<0$. Let $S=Q\cap \mathbb{F}_q(x)$. We have $$3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,.$$ Since $\nu_S(x)\geq 0$ leads to a contradiction we must have $\nu_S(x)<0$ and thus $S=P_\infty^x$. The same argument used in (i) above shows that either $\deg Q =1$ or $\deg Q =3$. \end{enumerate}
Now let $Q$ be a place of $F$ in the support of $(z)_\infty^F$. Then $\nu_Q(z)<0$ and thus one of the following two cases can occur: \begin{enumerate}[(a)]
\item $\nu_Q(x-\beta_i)<0$ for $i=1$ or $i=2$. In either case $\nu_Q(x)<0$ so that $Q$ lies above the place $P_\infty^x$ of $\mathbb{F}_q(x)$ and the same argument given in (i) above shows that either $\deg Q =1$ or $\deg Q =3$.
\item $\nu_Q(y)>0$. Let $S=Q\cap \mathbb{F}_q(x)$. We have
\begin{equation}\label{case2}
3\nu_Q(y)=e(Q|S)(\nu_S(x)+\nu_S(f(x)))\,.
\end{equation}
Since $\nu_S(x)< 0$ leads to a contradiction we must have that $\nu_S(x)\geq 0$. If $\nu_S(x)> 0$ then $S=P_x$ and so $Q=Q_1$. If $\nu_S(x)=0$ then we must have that $\nu_S(f(x))>0$ because the left hand side of \eqref{case2} is positive. Therefore if $\nu_S(x)=0$ then $S=P_{f(x)}$ and thus $Q=Q_2$. \end{enumerate}
On the other hand $\nu_{Q_i}(y)=1$ for $i=1,2$, as is easy to see from the definition of each $Q_i$. Then $\nu_{Q_i}(z)=-2\nu_{Q_i}(y)=-2$ so that, in fact, the divisor $-2Q_1-2Q_2$ is part of the divisor $(z)_F$. This implies that neither $Q_1$ nor $Q_2$ is in the support of $\operatorname{Diff} (F/\mathbb{F}_q(y))$. From the cases (i), (ii) and (a) above we conclude that every place in the support of $\operatorname{Diff} (F/\mathbb{F}_q(y))$ is of degree $1$ or $3$. Therefore no place of even degree of $\mathbb{F}_q(y)$ can ramify in $F$, as we claimed. In this way we see that all the conditions of Proposition \ref{examplemawu} hold, so that the equation \[y^3=xf(x)\,,\] with $f(x)$ irreducible over $\mathbb{F}_q$, defines a Kummer tower $\mathcal{F}$ over $\mathbb{F}_q$ with infinite genus, contradicting the assumption that $\mathcal{F}$ is asymptotically good. This proves the theorem. \end{proof}
\end{document}
\begin{document}
\title{Global existence and exponential growth for a viscoelastic wave equation with dynamic boundary conditions} \author{St\'{e}phane Gerbi\thanks{ Laboratoire de Math\'ematiques, Universit\'e de Savoie et CNRS, UMR-5128, 73376 Le Bourget du Lac, France, E-mail : \url{[email protected]}, } ~and Belkacem Said-Houari\thanks{ Division of Mathematical and Computer Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal, KSA, E-mail: \url{[email protected]}}} \date{}
\maketitle
\begin{abstract} The goal of this work is to study a model of the wave equation with dynamic boundary conditions and a viscoelastic term. First, applying the Faedo-Galerkin method combined with the fixed point theorem, we show the existence and uniqueness of a local in time solution. Second, we show that under some restrictions on the initial data, the solution continues to exist globally in time. On the other hand, if the interior source dominates the boundary damping, then the solution is unbounded and grows as an exponential function. In addition, in the absence of the strong damping, the solution ceases to exist and blows up in finite time. \end{abstract} {\bf Keywords:} Damped viscoelastic wave equations, global solutions, exponential growth, blow up in finite time, dynamic boundary conditions.
\section{Introduction}
We consider the following problem \begin{equation} \left\{ \begin{array}{ll}
u_{tt}-\Delta u-\alpha \Delta u_{t}+\displaystyle\int_{0}^{t}g(t-s)\Delta u(s)ds=|u|^{p-2}u, & x\in \Omega ,\ t>0 \, ,
\\ u(x,t)=0, & x\in \Gamma _{0},\ t>0 \, ,
\\ u_{tt}(x,t)=-\left[ \displaystyle\frac{\partial u}{\partial \nu }(x,t)- \displaystyle\int_{0}^{t}g(t-s)\frac{\partial u}{\partial \nu }(x,s)ds+\alpha \frac{\partial u_{t}}{\partial \nu }(x,t)+h\left( u_{t}\right) \right], & x\in \Gamma _{1},\ t>0\, ,
\\ u(x,0)=u_{0}(x),\qquad u_{t}(x,0)=u_{1}(x) & x\in \Omega \, , \end{array} \right. \label{ondes} \end{equation}
where $u=u(x,t)\,,\,t\geq 0\,,\,x\in \Omega $, $\Delta $ denotes the Laplacian operator with respect to the $x$ variable, $\Omega $ is a regular and bounded domain of $\mathbb{R}^{N}\,,\,(N\geq 1)$, $\partial \Omega ~=~\Gamma _{0}~\cup ~\Gamma _{1}$, $\operatorname{mes}(\Gamma _{0})>0$, $\Gamma_{0}\cap \Gamma _{1}=\varnothing $ and $\partial/\partial \nu $ denotes the outer normal derivative, $\alpha$ is a positive constant, $p>2$, $h$ and $g$ are functions whose properties will be discussed in the next section, and $u_{0}\,,\,u_{1}$ are given functions.
Nowadays wave equations with dynamic boundary conditions are used in a wide field of applications; see \cite{MuerKu_2011} for some of them. Problems similar to (\ref{ondes}) arise (for example) in the modeling of longitudinal vibrations in a homogeneous bar in which there are viscous effects. The term $\Delta u_t$ indicates that the stress is proportional not only to the strain, but also to the strain rate; see \cite{CSh_76} for more details.
From the mathematical point of view, these problems do not neglect acceleration terms on the boundary. Such boundary conditions are usually called \textit{dynamic boundary conditions}. They are not only important from the theoretical point of view but also arise in several physical applications. For instance, in one space dimension and for $g=0$, problem (\ref{ondes}) can model the dynamic evolution of a viscoelastic rod that is fixed at one end and has a tip mass attached to its free end. The dynamic boundary conditions represent Newton's law for the attached mass (see \cite{BST64,AKS96, CM98} for more details). In two space dimensions, as shown in \cite{G06} and in the references therein, these boundary conditions arise when we consider the transverse motion of a flexible membrane $\Omega $ whose boundary may be affected by the vibrations only in a region. Also, some dynamic boundary conditions as in problem (\ref{ondes}) appear when we assume that $\Omega $ is an exterior domain of $\mathbb{R}^{3} $ in which a homogeneous fluid is at rest except for sound waves. Each point of the boundary is subjected to small normal displacements into the obstacle: this type of dynamic boundary conditions is known as acoustic boundary conditions; see \cite{B76} for more details.
Littman and Markus \cite{LM88} considered a system which describes an elastic beam, linked at its free end to a rigid body. The whole system is governed by the Euler-Bernoulli partial differential equation with dynamic boundary conditions. They used the classical semigroup methods to establish existence and uniqueness results, while the asymptotic stabilization of the structure is achieved by the use of feedback boundary damping.
In \cite{GV94} the author introduced the model \begin{equation} u_{tt}-u_{xx}-u_{txx}=0,\qquad x\in (0,L),\,t>0, \label{Vand_1} \end{equation} which describes the damped longitudinal vibrations of a homogeneous flexible horizontal rod of length $L$ when the end $x=0$ is rigidly fixed while the other end $x=L$ is free to move with an attached load. Thus she considered a Dirichlet boundary condition at $x=0$ and dynamic boundary conditions at $ x=L\,$, namely \begin{equation} u_{tt}(L,t)=-\left[ u_{x}+u_{tx}\right] (L,t),\qquad t>0 \ . \label{Vand_2} \end{equation} By rewriting the whole system within the framework of the abstract theories of the so-called $B$-evolution theory, the existence of a unique solution in the strong sense has been shown. An exponential decay result was also proved in \cite{GV96} for a problem related to (\ref{Vand_1})-(\ref{Vand_2}), which describes the weakly damped vibrations of an extensible beam. See \cite{GV96} for more details.
Subsequently, Zang and Hu \cite{ZH07}, considered the problem \begin{equation*} u_{tt}-p\left( u_{x}\right) _{xt}-q\left( u_{x}\right) _{x}=0,\qquad x\in \left(0,1\right) ,\,t>0 \end{equation*} with \begin{equation*} u\left( 0,t\right) =0,\qquad p\left( u_{x}\right) _{t}+q\left( u_{x}\right) \left( 1,t\right) +ku_{tt}\left( 1,t\right) =0,\, t\geq 0. \end{equation*} By using the Nakao inequality, and under appropriate conditions on $p$ and $ q $, they established both exponential and polynomial decay rates for the energy depending on the form of the terms $p$ and $q$.
Recently, the present authors have considered, in \cite{GS08} and \cite{GS082}, problem (\ref{ondes}) with $g=0$ and a nonlinear boundary damping of the form $h\left(u_{t}\right)=\left\vert u_{t}\right\vert^{m-2}u_{t}$. A local existence result was obtained by combining the Faedo-Galerkin method with the contraction mapping theorem. Concerning the asymptotic behavior, the authors showed that the solution of such a problem is unbounded and grows exponentially when time goes to infinity provided that the initial data are large enough and the damping term is nonlinear. The blow up result was shown when the damping is linear (i.e. $m=2$). Also, we proved in \cite{GS082} that under some restrictions on the exponents $m$ and $p$, we can always find initial data for which the solution is global in time and decays exponentially to zero. These results have recently been generalized to a wide range of nonlinearities in the equation and in the boundary term: the authors proved the local existence and uniqueness by a sophisticated application of nonlinear semigroup theory, see \cite{GRS2012}.
In the absence of the strong damping $\alpha \Delta u_t$ and for Dirichlet boundary conditions on the whole boundary $\partial\Omega$, the question of blow up in finite time of problem (\ref{ondes}) has been investigated by many authors. Messaoudi \cite{Mess03} showed that if the initial energy is negative and if the relaxation function $g$ satisfies the following assumption \begin{equation} \label{Messaoudi_condition} \int_{0}^{\infty }g(s)ds<\frac{(p/2)-1}{(p/2)-1+(1/2p)} \ , \end{equation} then the solutions blow up in finite time. In fact this last condition has been assumed by other researchers. See for instance \cite{Mess_Kafi_2007,Mess_Kafi_2008,MS2010,Mes01,SaZh2010,YWL2009}.
The main goal of this paper is to prove the local existence and to study the asymptotic behavior of the solution of problem (\ref{ondes}).
One of the main questions is to show a blow-up result of the solution. This question is a difficult open problem, since in the presence of the strong damping term, i.e. when $\alpha\neq 0$, the problem has a parabolic structure, which means that the solution gains more regularity. However, in this paper, we give a partial answer to this question and show that for $\alpha\neq 0$ and for large initial data, the solution is unbounded and grows exponentially as $t$ goes to infinity. For the case $\alpha=0$, we show that the solution blows up in finite time.
The main contribution of this paper to this blow-up result is the following: the exponential growth and blow-up results hold without making the assumption (\ref{Messaoudi_condition}). In fact the only requirement is that the exponent $p$ has to be large enough, which is a condition much weaker than condition (\ref{Messaoudi_condition}). Moreover, unlike in the works of Messaoudi and coworkers, we do not assume any polynomial structure on the damping term $h(u_t)$ to obtain an exponential growth of the solution or a blow up in finite time.
This paper is organized as follows: firstly, applying the Faedo-Galerkin method combined with the fixed point theorem, we show, in Section \ref{local_existence_section}, the existence and uniqueness of a local in time solution. Secondly, under a smallness assumption on the initial data, we show, in Section \ref{Global_existence_section}, that the solution continues to exist globally in time. On the other hand, in Section \ref{Exponential_growth_section}, we prove that under some restrictions on the initial data, if the interior source dominates the boundary damping, then the $L^p$-norm of the solution grows as an exponential function. Lastly, in Section \ref{blow_up_section}, we investigate the case $\alpha=0$ and we prove that the solution ceases to exist and blows up in finite time.
\section{Preliminary and local existence}
\label{local_existence_section}
In this section, we introduce some notations used throughout this paper. We also prove a local existence result of the solution of problem (\ref {ondes}).
We denote \begin{equation*} H_{\Gamma_{0}}^{1}(\Omega) =\left\{u \in H^1(\Omega) /\ u_{|\Gamma_{0}} = 0\right\} . \end{equation*} By $( .,.) $ we denote the scalar product in $L^{2}( \Omega)$, i.e. $(u,v)(t) = \int_{\Omega} u(x,t) v(x,t) dx$. Also we mean by $\Vert .\Vert_{q}$ the $L^{q}(\Omega) $ norm for $1 \leq q \leq \infty$, and by $ \Vert .\Vert_{q,\Gamma_{1}}$ the $L^{q}(\Gamma_{1}) $ norm.
Let $T>0$ be a real number and $X$ a Banach space endowed with norm $\Vert.\Vert _{X}$. $L^{p}(0,T;X),\ 1~\leq p~<\infty $ denotes the space of functions $f$ which are $L^{p}$ over $\left( 0,T\right) $ with values in $X$, which are measurable and $\Vert f\Vert _{X}\in L^{p}\left( 0,T\right) $. This space is a Banach space endowed with the norm \begin{equation*} \Vert f\Vert _{L^{p}\left( 0,T;X\right) }=\left( \int_{0}^{T}\Vert f\Vert _{X}^{p}dt\right) ^{1/p}\quad . \end{equation*} $L^{\infty }\left( 0,T;X\right) $ denotes the space of functions $f:\left] 0,T\right[ \rightarrow X$ which are measurable and $\Vert f\Vert _{X}\in L^{\infty }\left( 0,T\right) $. This space is a Banach space endowed with the norm: \begin{equation*} \Vert f\Vert _{L^{\infty }(0,T;X)}=\mbox{ess}\sup_{0<t<T}\Vert f\Vert _{X}\quad . \end{equation*} We recall that if $X$ and $Y$ are two Banach spaces such that $ X\hookrightarrow Y$ (continuous embedding), then \begin{equation*} L^{p}\left( 0,T;X\right) \hookrightarrow L^{p}\left( 0,T;Y\right) ,\ 1\leq p\leq \infty . \end{equation*} We will also use the embedding (see \cite[Theorem 5.8]{A75}): \begin{equation*} H_{\Gamma _{0}}^{1}(\Omega )\hookrightarrow L^{p}(\Omega),\;2\leq p\leq \bar{p}\quad \mbox{where }\quad \bar{p}=\left\{ \begin{array}{ll} \dfrac{2N}{N-2} & \mbox{ if }N\geq 3, \\ +\infty & \mbox{ if }N=1,2 \ , \end{array} \right. \end{equation*} and also \begin{equation*} H_{\Gamma _{0}}^{1}(\Omega )\hookrightarrow L^{q}(\Gamma _{1}),\;2\leq q\leq \bar{q}\quad \mbox{where }\quad \bar{q}=\left\{ \begin{array}{ll} \dfrac{2(N-1)}{N-2} & \mbox{ if }N\geq 3, \\ +\infty & \mbox{ if }N=1,2. \end{array} \right. \end{equation*} For $2 \leq m \leq \bar{q}$, let us denote $V=H_{\Gamma _{0}}^{1}(\Omega )\cap L^{m}(\Gamma _{1})$.
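For instance (an illustration not in the original text), in the physically relevant case $N=3$ these critical exponents are
\[
\bar{p}=\frac{2N}{N-2}=6\,,\qquad \bar{q}=\frac{2(N-1)}{N-2}=4\,,
\]
so that $H_{\Gamma _{0}}^{1}(\Omega )\hookrightarrow L^{p}(\Omega)$ for $2\leq p\leq 6$ and $H_{\Gamma _{0}}^{1}(\Omega )\hookrightarrow L^{q}(\Gamma _{1})$ for $2\leq q\leq 4$.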
We assume that the relaxation function $g$ is of class $C^{1}$ on $\mathbb{R}$ and satisfies: \begin{equation} \forall \, s \, \in \mathbb{R} \,,\, g\left( s\right) \geq 0,\mbox{ and } \left(1-\displaystyle\int_{0}^{\infty }g\left(s\right) ds \right)=l>0 \ .\label{hypothesis_g} \end{equation} Moreover, we suppose that: \begin{equation} \forall \, s\geq 0 \,,\, g^{\prime}(s) \leq 0. \label{hypothesis_g_2} \end{equation} The hypotheses on the function $h$ are the following: \begin{description} \item[(H1)] $h$ is continuous and strongly monotone, i.e. for $2 \leq m \leq \bar{q}$, there exists a constant $m_{0}>0$ such that \begin{equation}
\left( h(s)-h(v)\right) (s-v)\geq m_{0}|s-v|^{m} \, , \label{Assumption_h_1} \end{equation}
\item[(H2)] there exist two positive constants $c_{m}$ and $C_{m}$ such that \begin{equation}
c_{m}|s|^{m}\leq h(s)s\leq C_{m}|s|^{m},\qquad \forall s\in \mathbb{R} \ . \label{Assumption_h} \end{equation} \end{description}
For a function $u \in C\Bigl(\lbrack \ 0,T],H_{\Gamma _{0}}^{1}(\Omega )\Bigl)$, let us introduce the following notation: \begin{equation*} \left( g\diamond u\right) \left( t\right) =\int_{0}^{t}g\left( t-s\right) \left\Vert \nabla u\left( s\right) -\nabla u\left( t\right) \right\Vert _{2}^{2}ds. \end{equation*} Thus, when $u \in C\Bigl(\lbrack \ 0,T],H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\cap C^{1}\Bigl(\lbrack \ 0,T],L^{2}(\Omega )\Bigl)$ such that $u_{t} \in L^{2}\Bigl(0,T;H_{\Gamma _{0}}^{1}(\Omega )\Bigl)$, we have: \begin{eqnarray} \frac{d}{dt}\left( g\diamond u\right) \left( t\right) &=&\int_{0}^{t}g^{\prime }\left( t-s\right) \left\Vert \nabla u\left( s\right) -\nabla u\left( t\right) \right\Vert _{2}^{2}ds \notag \\ &&+\frac{d}{dt}\left( \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}\right) \int_{0}^{t}g\left( s\right) ds-2\int_{\Omega }\int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) \nabla u_{t}\left( t\right) dsdx \notag \\ &=&\left( g^{\prime }\diamond u\right) \left( t\right) -2\int_{\Omega }\int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) \nabla u_{t}\left( t\right) dsdx \label{Integral_relation} \\ &&+\frac{d}{dt}\left\{ \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}\int_{0}^{t}g\left( s\right) ds\right\} -g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}. \notag \end{eqnarray} This last identity implies: \begin{eqnarray} \int_{\Omega }\int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) \nabla u_{t}\left( t\right) dsdx &=&\frac{1}{2}\left( g^{\prime }\diamond u\right) \left( t\right) +\frac{1}{2}\frac{d}{dt}\left\{ \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}\int_{0}^{t}g\left( s\right) ds\right\} \notag \\ &&-\frac{1}{2}g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}-\frac{1}{2}\frac{d}{dt}\left( g\diamond u\right) \left( t\right) . 
\label{Nonlinear_term_viscoelastic} \end{eqnarray} For $u \in C\Bigl(\lbrack \ 0,T],H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\cap C^{1}\Bigl(\lbrack \ 0,T],L^{2}(\Omega )\Bigl)$ such that $u_{t} \in L^{2}\Bigl(0,T;H_{\Gamma _{0}}^{1}(\Omega )\Bigl)$, let us define the modified energy functional $E$ by: \begin{eqnarray} E\left( t,u,u_{t}\right) =E\left( t\right) &=&\dfrac{1}{2}\Vert u_{t}\left(t\right) \Vert _{2}^{2}+\dfrac{1}{2}\Vert u_{t}\left( t\right) \Vert _{2,\Gamma _{1}}^{2}+\dfrac{1}{2}\left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}
\notag\\ & ~&+\dfrac{1}{2}\left( g\diamond u\right) \left( t\right) -\dfrac{1}{p}\left\Vert u\left( t\right) \right\Vert _{p}^{p}.\label{Energy_visco_elastic} \end{eqnarray}
The following local existence result for problem (\ref{ondes}) is closely related to the one we proved for a slightly different problem in \cite[Theorem 2.1]{GS08}, where no memory term was present. Let us state it: \begin{theorem} \label{existence} Assume that (\ref{hypothesis_g}), (\ref{hypothesis_g_2}) and (\ref{Assumption_h_1}) hold. Let $2\leq p\leq \bar{q}$ and $\max\left( 2,\frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$.
Then given $ u_{0}\in H_{\Gamma _{0}}^{1}(\Omega )$ and $u_{1}\in L^{2}(\Omega )$, there exists $T>0$ and a unique solution $u$ of the problem (\ref{ondes}) on $ (0,T) $ such that \begin{eqnarray*} u &\in &C\Bigl(\lbrack \ 0,T],H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\cap C^{1} \Bigl(\lbrack \ 0,T],L^{2}(\Omega )\Bigl), \\ u_{t} &\in &L^{2}\Bigl(0,T;H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\cap L^{m}\left( \left( 0,T\right) \times \Gamma _{1}\right) . \end{eqnarray*} \end{theorem}
Let us mention that Theorem \ref{existence} also holds for $\alpha=0$. The proof of Theorem \ref{existence} can be done along the same lines as in \cite[Theorem 2.1]{GS08}: the main idea is to combine the Faedo-Galerkin approximation with the contraction mapping theorem. For the convenience of the reader, we give an outline of the proof here.
For $u\in C\bigl(\lbrack 0,T],H_{\Gamma _{0}}^{1}(\Omega )\bigl)\,\cap \,C^{1}\bigl(\lbrack 0,T],L^{2}(\Omega )\bigl)$ given, let us consider the following problem: \begin{equation} \left\{ \begin{array}{ll}
v_{tt}-\Delta v-\alpha \Delta v_{t}+\displaystyle\int_{0}^{t}g(t-s)\Delta v(s)ds=|u|^{p-2}u, & x\in \Omega ,\ t>0 \,,
\\ v(x,t)=0, & x\in \Gamma _{0},\ t>0\,,
\\ v_{tt}(x,t)=-\left[ \displaystyle\frac{\partial v}{\partial \nu }(x,t)- \displaystyle\int_{0}^{t}g(t-s)\frac{\partial v}{\partial \nu }(x,s)ds+\alpha \frac{\partial v_{t}}{\partial \nu }(x,t)+h\left( v_{t}\right) \right] , & x\in \Gamma _{1},\ t>0 \,,
\\ v(x,0)=u_{0}(x),\;v_{t}(x,0)=u_{1}(x) & x\in \Omega . \end{array} \right. \label{ondes_u} \end{equation}
\begin{definition} \label{generalised} A function $v(x,t)$ such that \begin{eqnarray*} v &\in &L^{\infty }\left( 0,T;H_{\Gamma _{0}}^{1}(\Omega )\right) \ , \\ v_{t} &\in &L^{2}\left( 0,T;H_{\Gamma _{0}}^{1}(\Omega )\right) \cap L^{m}\left( (0,T)\times \Gamma _{1}\right) \ , \\ v_{t} &\in &L^{\infty }\left( 0,T;H_{\Gamma _{0}}^{1}(\Omega )\right) \cap L^{\infty }\left( 0,T;L^{2}(\Gamma _{1})\right) \ , \\ v_{tt} &\in &L^{\infty }\left( 0,T;L^{2}(\Omega )\right) \cap L^{\infty }\left( 0,T;L^{2}(\Gamma _{1})\right) \ , \\ v(x,0) &=&u_{0}(x)\,, \\ v_{t}(x,0) &=&u_{1}(x)\,, \end{eqnarray*} is a generalized solution to the problem (\ref{ondes_u}) if for any function $w \in H_{\Gamma _{0}}^{1}(\Omega )\cap L^{m}(\Gamma _{1})$ and $\varphi \in C^{1}(0,T)$ with $\varphi (T)=0$, we have the following identity: \begin{eqnarray*} \int_{0}^{T}(|u|^{p-2}u,w)(t)\,\varphi (t)\,dt &=&\int_{0}^{T}\Bigg[(v_{tt},w)(t)+(\nabla v,\nabla w)(t)-\int_{0}^{t}g(t-s)(\nabla v\left( s\right) ,\nabla w\left( t\right) )ds \\ &&+\alpha (\nabla v_{t},\nabla w)(t)\Bigg]\varphi (t)\,dt+\int_{0}^{T}\varphi (t)\int_{\Gamma _{1}}\Bigl( v_{tt}(t)+h\left( v_{t}(t)\right) \Bigr) w\,d\Gamma \,dt. \end{eqnarray*} \end{definition}
\begin{lemma} \label{existence_f} Let $2\leq p\leq \bar{q}$ and $2 \leq m \leq \bar{q}$. Let $u_{0}\in H^{2}(\Omega )\cap V,\,u_{1}\in H^{2}(\Omega )$, then for any $T>0,$ there exists a unique generalized solution (in the sense of Definition \ref{generalised}), $ v(t,x)$ of problem (\ref{ondes_u}). \end{lemma}
The proof of Lemma \ref{existence_f} is essentially based on the Faedo-Galerkin approximations combined with the compactness method, and can be done along the same lines as in \cite[Lemma 2.2]{GS08}; we omit the details.
In the following lemma, we state a local existence result for problem (\ref{ondes_u}).
\begin{lemma} \label{existence_u} Let $2\leq p\leq \bar{q}$ and $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$. Then given $u_{0}~\in ~H_{\Gamma _{0}}^{1}(\Omega )\,,\,u_{1}\in L^{2}(\Omega )$, there exists $T>0$ and a unique solution $v$ of the problem (\ref{ondes_u}) on $(0,T)$ such that \begin{eqnarray*} v &\in &C\Bigl(\lbrack 0,T],H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\,\cap \,C^{1} \Bigl(\lbrack 0,T],L^{2}(\Omega )\Bigl), \\ v_{t} &\in &L^{2}\Bigl(0,T;H_{\Gamma _{0}}^{1}(\Omega )\Bigl)\,\cap L^{m}\left( \left( 0,T\right) \times \Gamma _{1}\right) \end{eqnarray*} and satisfies the energy inequality: \begin{eqnarray*} &&\frac{1}{2}\left[ \Vert v_{t}\left( \tau \right) \Vert _{2}^{2}+\Vert v_{t}\left( \tau \right) \Vert _{2,\Gamma _{1}}^{2}+\left( 1-\displaystyle\int_{0}^{\tau }g\left( r\right) dr\right) \left\Vert \nabla v\left( \tau \right) \right\Vert _{2}^{2}+\left( g\diamond v\right) \left( \tau \right) \right] _{\tau =s}^{\tau =t} \\ &&+\alpha \displaystyle\int_{s}^{t}\Vert \nabla v_{t}(\tau )\Vert _{2}^{2}d\tau +\displaystyle\int_{s}^{t}\int_{\Gamma _{1}}h(v_{t}(\sigma ,\tau ))v_{t}(\sigma ,\tau )d\sigma d\tau \\ &\leq &\displaystyle\int_{s}^{t}\displaystyle\int_{\Omega }|u(\tau )|^{p-2}u(\tau )v_{t}(\tau )\,dx\,d\tau \end{eqnarray*} for $0\leq s\leq t\leq T$. \end{lemma}
\begin{proof} We first approximate $u\in C([0,T],H_{\Gamma _{0}}^{1}(\Omega ))\cap C^{1}\left( [0,T],L^{2}(\Omega )\right) $, endowed with the standard norm $\Vert u\Vert =\displaystyle\max_{t\in \lbrack 0,T]}\Vert u_{t}(t)\Vert _{2}+\Vert u(t)\Vert _{H^{1}(\Omega )}$, by a sequence $(u^{k})_{k\in \mathbb{N}}\subset C^{\infty }([0,T]\times \overline{ \Omega })$ using standard convolution arguments (see \cite{B83}). Next, we approximate the initial data $u_{1}\in L^{2}(\Omega )$ by a sequence $ (u_{1}^{k})_{k\in \mathbb{N}}\subset C_{0}^{\infty }(\Omega )$. Finally, since the space $H^{2}(\Omega )\cap V\cap H_{\Gamma _{0}}^{1}(\Omega )$ is dense in $H_{\Gamma _{0}}^{1}(\Omega )$ for the $H^{1}$ norm, we approximate $u_{0}\in H_{\Gamma _{0}}^{1}(\Omega )$ by a sequence $(u_{0}^{k})_{k\in \mathbb{N}}\subset H^{2}(\Omega )\cap V\cap H_{\Gamma _{0}}^{1}(\Omega )$.
We now consider the following family of approximate problems: \begin{equation} \left\{ \begin{array}{ll} v_{tt}^{k}-\Delta v^{k}-\alpha \Delta v_{t}^{k}+\displaystyle
\int_{0}^{t}g(t-s)\Delta v^{k}(s)ds=|u^{k}|^{p-2}u^{k}, & x\in \Omega ,\ t>0
, \\ v^{k}(x,t)=0, & x\in \Gamma _{0},\ t>0
, \\ v_{tt}^{k}(x,t)=-\left[ \displaystyle\frac{\partial v^{k}}{\partial \nu } (x,t)-\displaystyle\int_{0}^{t}g(t-s)\frac{\partial v^{k}}{\partial \nu } (x,s)ds+\alpha \frac{\partial v_{t}^{k}}{\partial \nu }(x,t)+h\left( v_{t}^{k}\right) \right] , & x\in \Gamma _{1},\ t>0
, \\ v^{k}(x,0)=u_{0}^{k},\;v_{t}^{k}(x,0)=u_{1}^{k}, & x\in \Omega . \end{array} \right. \label{approx_k} \end{equation} Since all the hypotheses of Lemma \ref{existence_f} are satisfied, we can find a sequence $\left( v^{k}\right) _{k\in \mathbb{N}}$ of unique solutions of the problem (\ref{approx_k}). Our goal now is to show that $ (v^{k},v_{t}^{k})_{k\in \mathbb{N}}$ is a Cauchy sequence in the space \begin{equation*} \begin{array}{lll} Y_{T} & =\Bigl\{ & (v,v_{t})/v\in C\left( \left[ 0,T\right] ,H_{\Gamma _{0}}^{1}(\Omega )\right) \cap C^{1}\left( \left[ 0,T\right] ,L^{2}(\Omega )\right) , \\ & & v_{t}\in L^{2}\left( 0,T;H_{\Gamma _{0}}^{1}(\Omega )\right) \cap L^{m}\left( \left( 0,T\right) \times \Gamma _{1}\right) \Bigl\} \end{array} \end{equation*} endowed with the norm \begin{equation} \label{norm} \Vert (v,v_{t})\Vert _{Y_{T}}^{2}=\max_{0\leq t\leq T}\Bigl[\Vert v_{t}\Vert _{2}^{2}+l\Vert \nabla v\Vert _{2}^{2}\Bigl]+\Vert v_{t}\Vert _{L^{m}\left( \left( 0,T\right) \times \Gamma _{1}\right) }^{2}+\int_{0}^{T}\Vert \nabla v_{t}(s)\Vert _{2}^{2}\;ds\;. \end{equation} For this purpose, we set $U=u^{k}-u^{k^{\prime }},\ V=v^{k}-v^{k^{\prime }}$. It is straightforward to see that $V$ satisfies: \begin{equation*} \left\{ \begin{array}{ll}
V_{tt}-\Delta V-\alpha \Delta V_{t}+\displaystyle\int_{0}^{t}g(t-s)\Delta V(s)ds=|u^{k}|^{p-2}u^{k}-|u^{k^{\prime }}|^{p-2}u^{k^{\prime }} & x\in \Omega ,\ t>0 \,,
\\ V(x,t)=0 & x\in \Gamma _{0},\ t>0 \,,
\\ V_{tt}(x,t)=-\left[\displaystyle\frac{\partial V}{\partial \nu }(x,t)- \displaystyle\int_{0}^{t}g(t-s)\frac{\partial V}{\partial \nu }(x,s)ds+\alpha \frac{\partial V_{t}}{\partial \nu }(x,t)+h(v_{t}^{k})-h(v_{t}^{k^{\prime }})\right] & x\in \Gamma _{1},\ t>0 \,,
\\ V(x,0)=u_{0}^{k}-u_{0}^{k^{\prime }},\;V_{t}(x,0)=u_{1}^{k}-u_{1}^{k^{\prime }} & x\in \Omega . \end{array} \right. \end{equation*} Multiplying the first equation above by $V_{t}$, integrating over $(0,t)\times \Omega $, integrating by parts and using the identity (\ref{Integral_relation}), we obtain: \begin{equation} \left. \begin{array}{ll} & \begin{array}{l} \dfrac{1}{2}\left( \Vert V_{t}\left( t\right) \Vert _{2}^{2}+\Vert V_{t}\left( t\right) \Vert _{2,\Gamma _{1}}^{2}+\left( 1-\displaystyle \int_{0}^{t}g\left( r\right) dr\right) \left\Vert \nabla V\left( t\right) \right\Vert _{2}^{2}\right) \\ +\alpha \displaystyle\int_{0}^{t}\Vert \nabla V_{t}\Vert _{2}^{2}ds
+\int_{0}^{t}\int_{\Gamma _{1}}\left( h(v_{t}^{k}(x,\tau ))-h(v_{t}^{k^{\prime }}(x,\tau ))\right) \left( v_{t}^{k}(x,\tau )-v_{t}^{k^{\prime }}(x,\tau )\right) d\Gamma d\tau \end{array} \\ & -\dfrac{1}{2}\displaystyle\int_{0}^{t}\left( g^{\prime }\diamond V\right) \left( s\right) ds+\dfrac{1}{2}\displaystyle\int_{0}^{t}g\left( s\right) \left\Vert \nabla V\left( s\right) \right\Vert _{2}^{2}ds
\\ = & \displaystyle\frac{1}{2}\left( \Vert V_{t}(0)\Vert _{2}^{2}+\Vert \nabla V(0)\Vert _{2}^{2}+\Vert V_{t}(0)\Vert _{2,\Gamma _{1}}^{2}\right)
\\ & +\displaystyle\int_{0}^{t}\displaystyle\int\limits_{\Omega }\left(
|u^{k}|^{p-2}u^{k}-|u^{k^{\prime }}|^{p-2}u^{k^{\prime }}\right) \left( v_{t}^{k}-v_{t}^{k^{\prime }}\right) \,dxds,\quad \forall t\in \left( 0,T\right) . \end{array} \right. \label{Main_estimate_existence} \end{equation} Consequently, the above identity together with (\ref{hypothesis_g}), (\ref{hypothesis_g_2}) and (\ref{Assumption_h_1}) gives \begin{equation} \left. \begin{array}{ll} & \dfrac{1}{2}\left( \Vert V_{t}\left( t\right) \Vert _{2}^{2}+\Vert V_{t}\left( t\right) \Vert _{2,\Gamma _{1}}^{2}+l\left\Vert \nabla V\left( t\right) \right\Vert _{2}^{2}\right) +\alpha \displaystyle\int_{0}^{t}\Vert \nabla V_{t}\Vert _{2}^{2}ds
+m_{0}\int_{0}^{t}\Vert V_{t}\Vert _{m,\Gamma _{1}}^{m}ds \\ \leq & \displaystyle\frac{1}{2}\left( \Vert V_{t}(0)\Vert _{2}^{2}+\Vert \nabla V(0)\Vert _{2}^{2}+\Vert V_{t}(0)\Vert _{2,\Gamma _{1}}^{2}\right) \\ &
+\displaystyle\int_{0}^{t}\displaystyle\int\limits_{\Omega
}\left( |u^{k}|^{p-2}u^{k}-|u^{k^{\prime }}|^{p-2}u^{k^{\prime }}\right) \left( v_{t}^{k}-v_{t}^{k^{\prime }}\right) \,dxds,\quad \forall t\in \left( 0,T\right) . \end{array} \right. \label{Main_inequality_Cauchy} \end{equation} Following the same method as in \cite{GS08}, we deduce that there exists $C$ depending only on $\Omega $ and $p$ such that: \begin{equation*} \Vert V\Vert _{Y_{T}}\leq C\left( \Vert V_{t}(0)\Vert _{2}^{2}+\Vert \nabla V(0)\Vert _{2}^{2}+\Vert V_{t}(0)\Vert _{2,\Gamma _{1}}^{2}\right) +CT\Vert U\Vert _{Y_{T}}. \end{equation*} Since $(u_{0}^{k})_{k\in \mathbb{N}}$ is a converging sequence in $H_{\Gamma _{0}}^{1}\left( \Omega \right) $, $(u_{1}^{k})_{k\in \mathbb{N}}$ is a converging sequence in $L^{2}\left( \Omega \right) $ and $\left( u^{k}\right) _{k\in \mathbb{N}}$ is a converging sequence in $C\left( \left[ 0,T\right] ,H_{\Gamma _{0}}^{1}(\Omega )\right) \cap C^{1}\left( \left[ 0,T \right] ,L^{2}(\Omega )\right) $ (so in $Y_{T}$ also), we conclude that $(v^{k},v_{t}^{k})_{k\in \mathbb{N}}$ is a Cauchy sequence in $Y_{T}$. Thus $(v^{k},v_{t}^{k})$ converges to a limit $(v,v_{t})\in Y_{T}$. Now, by the same procedure used by Georgiev and Todorova in \cite{GT94}, we prove that this limit is a weak solution of the problem (\ref{ondes_u}). This completes the proof of Lemma \ref{existence_u}. \end{proof} \begin{proof}[Proof of Theorem \ref{existence}] In order to prove Theorem \ref{existence}, we use the contraction mapping theorem.\newline Indeed, for $T>0,$ let us define the closed convex subset of $Y_{T}$: \begin{equation*} X_{T}=\left\{ (v,v_{t})\in Y_{T}\mbox{ such that }v(0)=u_{0},v_{t}(0)=u_{1} \right\} . \end{equation*} Let us denote by \begin{equation*} B_{R}\left( X_{T}\right) =\left\{ v\in X_{T};\Vert v\Vert _{Y_{T}}\leq R\right\} \end{equation*} the ball of radius $R$ in $X_{T}$.
Then, Lemma \ref{existence_u} implies that for any $u\in X_{T}$, we may define $v=\Phi \left( u\right) $ the unique solution of (\ref{ondes_u}) corresponding to $u$. Our goal now is to show that for a suitable $T>0$, $\Phi $ is a contractive map satisfying $ \Phi \left( B_{R}(X_{T})\right) \subset B_{R}(X_{T})$. \newline Let $u\in B_{R}(X_{T})$ and $v=\Phi \left( u\right) $. Then for all $t\in \lbrack 0,T]$ we have as in (\ref{Main_estimate_existence}): \begin{equation} \begin{array}{l} \Vert v_{t}\Vert _{2}^{2}+l\Vert \nabla v\Vert _{2}^{2}+\Vert v_{t}\Vert _{2,\Gamma _{1}}^{2}+2\alpha \displaystyle\int\limits_{0}^{t}\Vert \nabla v_{t}\Vert _{2}^{2}\,ds+c\int_{0}^{t}\Vert v_{t}\Vert _{m,\Gamma _{1}}^{m}ds \\ \leq \Vert u_{1}\Vert _{2}^{2}+\Vert \nabla u_{0}\Vert _{2}^{2}+\Vert u_{1}\Vert _{2,\Gamma _{1}}^{2}+2\displaystyle\int\limits_{0}^{t}
\displaystyle\int\limits_{\Omega }|u\left( \tau \right) |^{p-2}u\left( \tau \right) v_{t}\left( \tau \right) \,dx\,d\tau . \end{array} \quad \label{schauder} \end{equation} Using H\"{o}lder's inequality, we can control the last term in the right hand side of the inequality (\ref{schauder}) as follows: \begin{equation*}
\displaystyle\int\limits_{0}^{t}\displaystyle\int\limits_{\Omega }|u\left(
\tau \right) |^{p-2}u\left( \tau \right) v_{t}\left( \tau \right) dxd\tau \leq \displaystyle\int\limits_{0}^{t}\Vert u\left( \tau \right) \Vert _{2N/\left( N-2\right) }^{p-1}\Vert v_{t}\left( \tau \right) \Vert _{{2N}/{ \bigl(3N-Np+2(p-1)\bigl)}}d\tau \end{equation*} Since $\displaystyle p\leq \frac{2N}{N-2}$, we have: $$\displaystyle\frac{2N}{\bigl(3N-Np+2(p-1)\bigl)}\leq \frac{2N}{N-2}\quad .$$ Thus, by Young's and Sobolev's inequalities, we get for all $\delta >0$ there exists $C(\delta )>0$, such that for all $t\in \left( 0,T\right) $ \begin{equation*}
\displaystyle\int\limits_{0}^{t}\displaystyle\int\limits_{\Omega }|u\left(
\tau \right) |^{p-2}u\left( \tau \right) v_{t}\left( \tau \right) dxd\tau \leq C(\delta )tR^{2(p-1)}+\delta \displaystyle\int\limits_{0}^{t}\Vert \nabla v_{t}\left( \tau \right) \Vert _{2}^{2}d\tau . \end{equation*} Inserting the last estimate into the inequality (\ref{schauder}) and choosing $\delta $ small enough, we obtain: \begin{equation*} \Vert v\Vert _{Y_{T}}^{2}\leq \frac{1}{2}R^{2}+CTR^{2(p-1)}. \end{equation*} Thus, for $T$ sufficiently small, we have $\Vert v\Vert _{Y_{T}}\leq R$. This shows that $v\in B_{R}\left( X_{T}\right) $.
To prove that $\Phi $ is a contraction, we follow the same steps (up to minor changes) as in \cite{GS08}; we omit the details. Thus the proof of Theorem \ref{existence} is finished. \end{proof} \begin{remark} Let us mention that the hypothesis on $m$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$, is needed to pass to the limit in the nonlinear term, in the same way as in \cite[Equation (2.28)]{GS08}. \end{remark}
\section{Global existence}\label{Global_existence_section}
In this section, we show that, under some restrictions on the initial data, the local solution of problem (\ref{ondes}) can be continued in time and the lifespan of the solution will be $[0,\infty )$.
\begin{definition} \label{Tmax} Let $2\leq p\leq \bar{q}$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$, $u_{0}\in H_{\Gamma_{0}}^{1}(\Omega) $ and $u_{1}\in L^{2}(\Omega) $. We denote by $u$ the solution of (\ref{ondes}). We define: $$ T_{max} = \sup\Bigl\{ T > 0 \,,\, u = u(t) \ \mbox{ exists on } \ [0,T]\Bigr\} \ . $$ Since the solution $u \in Y_T$ is ``regular enough'', from the definition of the norm (\ref{norm}) we recall that if $T_{max} < \infty$, then $$
\lim_{\underset {t < T_{max}} {t \rightarrow T_{max}}} \Vert \nabla u(t) \Vert_2 + \Vert u_t(t) \Vert_2 = + \infty. $$ If $T_{max} < \infty$, we say that the solution of (\ref{ondes}) blows up and that $T_{max}$ is the blow-up time.\\ If $T_{max} = \infty$, we say that the solution of (\ref{ondes}) is global. \end{definition}
In order to study the blow up phenomenon or the global existence of the solution of (\ref{ondes}), for all $0\leq t < T_{max}$, we define: \begin{eqnarray} I(t) &=&I(u(t))=\left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\left( g\diamond u\right) \left( t\right) -\Vert u\Vert _{p}^{p}, \label{Energy_I} \\ J(t) &=&J(u(t))=\frac{1}{2}\left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+ \dfrac{1}{2}\left( g\diamond u\right) \left( t\right) -\frac{1}{p}\Vert u\Vert _{p}^{p}. \label{Energy_J} \end{eqnarray}
Thus the energy functional defined in (\ref{Energy_visco_elastic}) can be rewritten as \begin{equation} E(u(t))=E(t)=J(t)+\frac{1}{2}\Vert u_{t}\Vert _{2}^{2}+\frac{1}{2}\Vert u_{t}\Vert _{2,\Gamma _{1}}^{2}. \label{Energy_E} \end{equation} As in \cite{GS06,V99}, we denote by $B$ the best constant in the Poincar\'{e}-Sobolev embedding $H_{\Gamma_{0}}^{1}(\Omega) \hookrightarrow L^{p}(\Omega)$ defined by: \begin{equation}\label{sobolev} B^{-1} = \inf\left\{\Vert \nabla u \Vert_2 : u \in H_{\Gamma_{0}}^{1}(\Omega), \Vert u\Vert_p = 1 \right\}. \end{equation}
For $u_{0}~\in~H_{\Gamma _{0}}^{1}(\Omega )\,,\,u_{1}\in L^{2}(\Omega )$, we define: \begin{equation*} E(0) =\dfrac{1}{2}\Vert u_{1}\Vert_{2}^{2}+\dfrac{1}{2}\Vert u_{1}\Vert_{2,\Gamma _{1}}^{2}+ \dfrac{1}{2} \left\Vert \nabla u_{0} \right\Vert _{2}^{2} -\dfrac{1}{p}\left\Vert u_{0}\right\Vert _{p}^{p}. \end{equation*}
The first goal is to prove that the above energy $E\left( t\right) $ defined in (\ref{Energy_visco_elastic}) is a non-increasing function along the trajectories. More precisely, we have the following result:
\begin{lemma} \label{Lemma_dissp_energy_visco} Let $2\leq p\leq \bar{q}$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$, and $u$ be the solution of (\ref{ondes}). Then, for all $t\in \lbrack 0,T_{max})$, we have \begin{eqnarray} \frac{dE\left( t\right) }{dt} &=&\frac{1}{2}\left( g^{\prime }\diamond u\right) \left( t\right) -\frac{1}{2}g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}-\alpha \left\Vert \nabla u_{t}\right\Vert _{2}^{2}-\int_{\Gamma _{1}}h\left( u_{t}\right) u_{t}d\Gamma \notag \\ &\leq &\frac{1}{2}\left( g^{\prime }\diamond u\right) \left( t\right) -\alpha \left\Vert \nabla u_{t}\right\Vert _{2}^{2}-\int_{\Gamma _{1}}h\left( u_{t}\right) u_{t}d\Gamma . \label{dissp_enrgy_visco} \end{eqnarray} \end{lemma}
\begin{proof}Multiplying the first equation in (\ref{ondes}) by $u_{t}$, integrating over $\Omega $ and integrating by parts, we get: \begin{equation} \left. \begin{array}{l} \dfrac{d}{dt}\left\{ \dfrac{1}{2}\Vert u_{t}\Vert _{2}^{2}+\dfrac{1}{2}\Vert u_{t}\Vert _{2,\Gamma _{1}}^{2}+\dfrac{1}{2}\Vert \nabla u\Vert _{2}^{2}- \dfrac{1}{p}\left\Vert u\right\Vert _{p}^{p}\right\}
\\ -\displaystyle\int_{\Omega }\displaystyle\int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) \nabla u_{t}\left( t\right) dsdx
\\ =-\alpha \left\Vert \nabla u_{t}\right\Vert _{2}^{2}-\displaystyle \int_{\Gamma _{1}}h\left( u_{t}\right) u_{t}d\Gamma . \end{array} \right. \label{d_dt_Energy_2} \end{equation} A simple use of the identity (\ref{Nonlinear_term_viscoelastic}) gives (\ref {dissp_enrgy_visco}). This completes the proof of Lemma \ref {Lemma_dissp_energy_visco}. \end{proof}
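For the reader's convenience, let us make this last step explicit. By (\ref{Nonlinear_term_viscoelastic}), \begin{equation*} -\int_{\Omega }\int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) \nabla u_{t}\left( t\right) dsdx=\frac{d}{dt}\left\{ -\frac{1}{2}\left( \int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\frac{1}{2}\left( g\diamond u\right) \left( t\right) \right\} -\frac{1}{2}\left( g^{\prime }\diamond u\right) \left( t\right) +\frac{1}{2}g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}, \end{equation*} so that the left-hand side of (\ref{d_dt_Energy_2}) equals $\dfrac{dE\left( t\right) }{dt}-\dfrac{1}{2}\left( g^{\prime }\diamond u\right) \left( t\right) +\dfrac{1}{2}g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}$, which yields the first line of (\ref{dissp_enrgy_visco}); the inequality then follows by dropping the term $-\frac{1}{2}g\left( t\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}\leq 0$, since $g\geq 0$.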
\begin{lemma} \label{Stable_set_lemma}Let $2\leq p\leq \bar{q}$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$. Assume that (\ref{hypothesis_g}) and (\ref{hypothesis_g_2}) hold. Then given $u_{0}~\in ~H_{\Gamma _{0}}^{1}(\Omega )\,,\,u_{1}\in L^{2}(\Omega )$ satisfying \begin{equation} \left\{ \begin{array}{l} \beta =\dfrac{B^{p}}{l}\left( \dfrac{2p}{l\left( p-2\right) }E(0)\right) ^{\left( p-2\right) /2}<1,
\\ I\left( u_{0}\right) >0, \end{array} \right. \label{beta_condition} \end{equation} we have: \begin{equation*} I\left( u\left( t\right) \right) >0,\qquad \forall t\in \lbrack 0,T_{\max }). \end{equation*} \end{lemma} \begin{proof} Since $I\left( u_{0}\right) >0$, then by continuity, there exists $T^{\ast }<T_{\max }$, such that \begin{equation*} I\left( t\right) >0,\qquad \forall t\in \lbrack 0,T^{\ast }] \end{equation*} which implies that for all $t\in \lbrack 0,T^{\ast }],$ \begin{eqnarray} J\left( t\right) &=&\frac{1}{p}I\left( t\right) +\frac{p-2}{2p}\left\{ \left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\left( g\diamond u\right) \left( t\right) \right\} \notag \\ &\geq &\frac{p-2}{2p}\left\{ \left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\left( g\diamond u\right) \left( t\right) \right\} . \label{J_inequality} \end{eqnarray} By using (\ref{hypothesis_g}), (\ref{hypothesis_g_2}), (\ref{Energy_E}) and ( \ref{dissp_enrgy_visco}), we easily get, for all $t\in \lbrack 0,T^{\ast }]$ \begin{eqnarray} l\left\Vert \nabla u(t)\right\Vert _{2}^{2} &\leq &\frac{2p}{p-2}J\left( t\right) ,\ \notag \\ &\leq &\frac{2p}{p-2}E\left( t\right) \leq \frac{2p}{p-2}E\left( 0\right) . \label{E_0_J_inequality} \end{eqnarray} From the definition of the constant $B$ in (\ref{sobolev}), we first get: $$ \forall t\in \lbrack 0,T^{\ast }] \ , \ \left\Vert u(t)\right\Vert _{p}^{p} \leq B^{p}\Vert \nabla u(t)\Vert_{2}^{p} \ . 
$$ Since we have: $$ \forall t\in \lbrack 0,T^{\ast }] \ , \ B^{p}\Vert \nabla u(t)\Vert_{2}^{p}= \frac{B^{p}}{l}\Vert \nabla u(t)\Vert _{2}^{p-2}\left( l\Vert\nabla u(t)\Vert _{2}^{2}\right) , $$ by exploiting (\ref{E_0_J_inequality}) and (\ref{beta_condition}), we obtain, for all $t\in \lbrack 0,T^{\ast }]$: \begin{eqnarray*} \left\Vert u(t)\right\Vert _{p}^{p} &\leq &\beta l\left( \Vert \nabla u(t)\Vert _{2}^{2}\right) \\ &\leq &\beta \left( 1-\int_{0}^{t}g\left( s\right) ds\right) \Vert \nabla u(t)\Vert _{2}^{2} \\ &<&\left( 1-\int_{0}^{t}g\left( s\right) ds\right) \Vert \nabla u(t)\Vert _{2}^{2}. \end{eqnarray*} Therefore, by using (\ref{Energy_I}), we conclude that \begin{equation*} I\left( t\right) >0,\qquad \forall t\in \lbrack 0,T^{\ast }]. \end{equation*} Using the fact that $E$ is decreasing along the trajectory, we get: \begin{equation*} \forall \, 0\leq t < T_{max} \,,\, \dfrac{B^{p}}{l}\left( \dfrac{2p}{l\left(p-2\right) }E\left( t\right) \right) ^{\left( p-2\right) /2}\leq \beta <1 \ . \end{equation*} By repeating this procedure, $T^{\ast }$ is extended to $T_{max}.$ \end{proof} Now, we are able to state the global existence theorem. \begin{theorem} \label{Global_existence}Let $2\leq p\leq \bar{q}$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$. Assume that (\ref{hypothesis_g}) and (\ref{hypothesis_g_2}) hold. Let $u_{0}\in H_{\Gamma _{0}}^{1}(\Omega )$ and $u_{1}\in L^{2}(\Omega )$ satisfy (\ref{beta_condition}). Then the solution of (\ref{ondes}) is global and bounded. \end{theorem}
\begin{proof} To prove Theorem \ref{Global_existence}, using the definition of $T_{max}$, we only have to check that \begin{equation*} \left\Vert \nabla u(t)\right\Vert_{2}^{2}+\left\Vert u_{t}(t)\right\Vert _{2}^{2} \end{equation*} is uniformly bounded in time. To achieve this, we use (\ref{Energy_J}), (\ref{Energy_E}), (\ref{dissp_enrgy_visco}) and (\ref{E_0_J_inequality}) to get \begin{eqnarray} E\left( 0\right) &\geq &E\left( t\right) =J\left( t\right) +\frac{1}{2}\Vert u_{t}\Vert _{2}^{2}+\frac{1}{2}\Vert u_{t}\Vert _{2,\Gamma _{1}}^{2} \notag \\ &\geq &\frac{l\left( p-2\right) }{2p}\left\Vert \nabla u(t)\right\Vert _{2}^{2}+\frac{1}{2} \left\Vert u_{t}(t)\right\Vert _{2}^{2}. \label{Inq_E_t_E_0} \end{eqnarray} Therefore, \begin{equation*} \left\Vert \nabla u(t)\right\Vert_{2}^{2}+\left\Vert u_{t}(t)\right\Vert _{2}^{2}\leq C E(0) \end{equation*} where $C$ is a positive constant depending only on $p$ and $l$. \end{proof}
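Let us note that, for $p>2$, the constant $C$ can be made explicit. Indeed, by (\ref{E_0_J_inequality}), and since $E\left( t\right) \geq \frac{1}{2}\left\Vert u_{t}\left( t\right) \right\Vert _{2}^{2}$ (because $J\left( t\right) \geq 0$ by Lemma \ref{Stable_set_lemma} and (\ref{J_inequality})), we have \begin{equation*} \left\Vert \nabla u(t)\right\Vert _{2}^{2}+\left\Vert u_{t}(t)\right\Vert _{2}^{2}\leq \frac{2p}{l\left( p-2\right) }E\left( 0\right) +2E\left( 0\right) =\left( \frac{2p}{l\left( p-2\right) }+2\right) E\left( 0\right) , \end{equation*} so one may take, for instance, $C=\dfrac{2p}{l\left( p-2\right) }+2$.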
\section{Exponential growth for $\protect\alpha>0$}
\label{Exponential_growth_section}
In this section, we will prove that when the initial data are large enough, the energy of the solution of problem (\ref{ondes}) defined by (\ref{Energy_visco_elastic}) grows exponentially, and thus so does the $L^{p}$-norm.
In order to state and prove the exponential growth result, we introduce the following constants: \begin{equation} B_{1}=\frac{B}{l},\quad \alpha _{1}=B_{1}^{-p/(p-2)},\quad E_{1}=\left( \frac{1}{2}-\frac{1}{p}\right) \alpha_{1}^{2},\quad E_{2}=\left( \frac{l}{2}-\frac{1}{p}\right) \alpha _{1}^{2}\ . \label{constant} \end{equation} Let us first mention that $E_{2}<E_{1}$, since $l<1$.
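Assuming, as in (\ref{hypothesis_g_2}), that $l=1-\int_{0}^{\infty }g\left( s\right) ds$, this gap is explicit: \begin{equation*} E_{1}-E_{2}=\frac{1-l}{2}\,\alpha _{1}^{2}=\frac{\alpha _{1}^{2}}{2}\int_{0}^{\infty }g\left( s\right) ds\geq 0, \end{equation*} with strict inequality unless $g\equiv 0$ (cf.\ Remark \ref{Remark_Gerbi_Said} below).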
The following lemma will play an essential role in the proof of the exponential growth result; it is inspired by \cite{CCLa_2007}, where the authors proved a similar lemma for the wave equation.
First, we define the function \begin{equation} \gamma \left( t\right) :=\left( 1-\displaystyle\int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\left( g\diamond u\right) \left( t\right) . \label{Gama} \end{equation} Let us rewrite the energy functional $E$ defined by (\ref{Energy_visco_elastic}) as: \begin{equation}\label{Energy_visco_elastic2} E(t) = \dfrac{1}{2}\Vert u_{t}\left(t\right) \Vert _{2}^{2}+\dfrac{1}{2}\Vert u_{t}\left( t\right) \Vert_{2,\Gamma _{1}}^{2}+
\dfrac{1}{2}\gamma \left( t\right) - \dfrac{1}{p}\left\Vert u(t)\right\Vert_{p}^{p} \ . \end{equation}
\begin{lemma} \label{Vitilaro_Lemma} Let $2\leq p\leq \bar{q}$, $\max\left( 2, \frac{\bar{q}}{\bar{q}+1-p} \right) \leq m \leq \bar{q}$. Let $u$ be the solution of (\ref{ondes}). Assume that \begin{equation} E\left( 0\right) <E_{1}\quad \text{ and }\quad \left\Vert \nabla u_{0}\right\Vert _{2}\geq \alpha _{1}. \label{Initial_data_assumptions} \end{equation} Then there exists a constant $\alpha _{2}>\alpha _{1}$ such that \begin{equation} \left( \gamma \left( t\right) \right) ^{1/2}\geq \alpha _{2},\qquad \forall t\in \lbrack 0,T_{\max }) \label{result_Vitillaro_1} \end{equation} and \begin{equation} \left\Vert u\left( t\right) \right\Vert _{p}\geq B_{1}\alpha _{2},\;\qquad \forall t\in \lbrack 0,T_{\max }). \label{result_Vitillaro_2} \end{equation} \end{lemma}
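Before giving the proof, let us record the elementary properties of the auxiliary function $F\left( \alpha \right) =\dfrac{1}{2}\alpha ^{2}-\dfrac{B_{1}^{p}}{p}\alpha ^{p}$ used below; they follow from a direct computation: \begin{equation*} F^{\prime }\left( \alpha \right) =\alpha \left( 1-B_{1}^{p}\alpha ^{p-2}\right) =0\ \Longleftrightarrow \ \alpha =0\ \text{ or }\ \alpha =B_{1}^{-p/\left( p-2\right) }=\alpha _{1}, \end{equation*} and, since $B_{1}^{p}\alpha _{1}^{p-2}=1$, \begin{equation*} F\left( \alpha _{1}\right) =\frac{1}{2}\alpha _{1}^{2}-\frac{1}{p}\left( B_{1}^{p}\alpha _{1}^{p-2}\right) \alpha _{1}^{2}=\left( \frac{1}{2}-\frac{1}{p}\right) \alpha _{1}^{2}=E_{1}. \end{equation*}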
\begin{proof} We first note that, by (\ref{Energy_visco_elastic2}), we have: \begin{eqnarray} E(t) &\geq &\dfrac{1}{2}\gamma \left( t\right) -\dfrac{1}{p}\left\Vert u\left( t\right) \right\Vert _{p}^{p} \notag \\ &\geq &\dfrac{1}{2}\gamma \left( t\right) -\frac{B_{1}^{p}}{p}\left( l\left\Vert \nabla u\left( t\right) \right\Vert _{2}\right) ^{p} \notag \\ &\geq &\dfrac{1}{2}\gamma \left( t\right) -\frac{B_{1}^{p}}{p}\left( \gamma \left( t\right) \right) ^{p/2} \label{F_gamma_1} \\ &=&\dfrac{1}{2}\alpha ^{2}-\frac{B_{1}^{p}}{p}\alpha ^{p}=F\left( \alpha \right) , \notag \end{eqnarray} where $\alpha =\left( \gamma \left( t\right) \right) ^{1/2}.$ It is easy to verify that $F$ is increasing for $0<\alpha <\alpha _{1},$ decreasing for $\alpha >\alpha _{1},$ $F\left( \alpha \right) \rightarrow -\infty $ as $ \alpha \rightarrow +\infty ,$ and \begin{equation*} F\left( \alpha _{1}\right) =E_{1}, \end{equation*} where $\alpha _{1}$ is given in (\ref{constant}). Therefore, since $ E(0)<E_{1},$\ there exists $\alpha _{2}>\alpha _{1}$ such that $\,\,F\left( \alpha _{2}\right) =$ $E\left( 0\right) .$\newline If we set $\alpha _{0}=\left( \gamma \left( 0\right) \right) ^{1/2},$ then by (\ref{F_gamma_1}) we have: \begin{equation*} F\left( \alpha _{0}\right) \leq E\left( 0\right) =F\left( \alpha _{2}\right) , \end{equation*} which implies that $\alpha _{0}\geq \alpha _{2}.$\newline Now to establish (\ref{result_Vitillaro_1}), we suppose by contradiction that: \begin{equation*} \left( \gamma \left( t_{0}\right) \right) ^{1/2}<\alpha _{2}, \end{equation*} for some $t_{0}>0$ and by the continuity of $\gamma \left( .\right) ,$ we may choose\ $t_{0}$ such that \begin{equation*} \left( \gamma \left( t_{0}\right) \right) ^{1/2}>\alpha _{1}. \end{equation*} Using again (\ref{F_gamma_1}) leads to: \begin{equation*} E\left( t_{0}\right) \geq F\left( \gamma \left( t_{0}\right) ^{1/2}\right)>F(\alpha _{2})=E\left( 0\right) . \end{equation*}
But this is impossible since for all $t>0$, $E(t)\leq E\left( 0\right) $. Hence (\ref{result_Vitillaro_1}) is established.
To prove (\ref{result_Vitillaro_2}), we use (\ref{Energy_visco_elastic2}) to get: \begin{equation*} \frac{1}{2}\gamma \left( t\right) \leq E\left( 0\right) +\dfrac{1}{p} \left\Vert u\left( t\right) \right\Vert _{p}^{p} \ . \end{equation*} Consequently, using (\ref{result_Vitillaro_1}) leads to: \begin{eqnarray*} \dfrac{1}{p}\left\Vert u\left( t\right) \right\Vert _{p}^{p} &\geq &\frac{1}{ 2}\gamma \left( t\right) -E\left( 0\right) \\ &\geq &\frac{1}{2}\alpha _{2}^{2}-E\left( 0\right). \end{eqnarray*} But we have: $$\frac{1}{2}\alpha _{2}^{2}-E\left( 0\right) = \frac{1}{2}\alpha _{2}^{2}-F\left( \alpha _{2}\right) =\frac{B_{1}^{p}}{p}\alpha_{2}^{p} \ .$$ Therefore (\ref{result_Vitillaro_2}) holds. This finishes the proof of Lemma \ref{Vitilaro_Lemma}. \end{proof} The exponential growth result reads as follows:
\begin{theorem} \label{blow_up_viscoel} Suppose that (\ref{hypothesis_g}), (\ref{hypothesis_g_2}) and (\ref{Assumption_h}) hold. Assume that \begin{equation*} 2\leq m\quad \text{and}\quad \max \left( m,2/l\right) <p\leq \overline{p}. \end{equation*} Then the solution of (\ref{ondes}) satisfying \begin{equation} E\left( 0\right) <E_{2}\ \mbox{ and } \ \left\Vert \nabla u_{0}\right\Vert _{2}\geq \alpha _{1}, \label{Initial_data_visco} \end{equation} grows exponentially in the $L^{p}$ norm. \end{theorem}
\begin{remark} \label{Remark_Gerbi_Said} It is obvious that for $g=0$, we have $E_1=E_2$, and Theorem \ref{blow_up_viscoel} reduces to Theorem 3.1 in \cite{GS08}. \end{remark}
\begin{remark} \label{Remark_integral_condition} In Theorem \ref{blow_up_viscoel}, the condition \begin{equation*} \int_0^\infty g(s)ds<\frac{(p/2)-1}{(p/2)-1+(1/2p)} \end{equation*} used in \cite{Mess_Kafi_2007,Mess_Kafi_2008,MS2010,Mes01,Mess03,SaZh2010,YWL2009} is unnecessary and our result holds without it. \end{remark} \begin{remark} \label{remarkc1} Let us denote $c_{1}=\left(l-\frac{2}{p}\right)-2E_{2}\left( B_{1}\alpha _{2}\right) ^{-p}.$ Since $\alpha_{2} > \alpha_{1}$ and $B_{1}^{p}\alpha _{1}^{p-2}=1$, the definition of $E_{2}$ gives $$2E_{2}\left( B_{1}\alpha _{2}\right) ^{-p}<2E_{2}\left( B_{1}\alpha _{1}\right) ^{-p}=\left( l-\frac{2}{p}\right) \alpha _{1}^{2}\,B_{1}^{-p}\alpha _{1}^{-p}=l-\frac{2}{p}\ ,$$ and hence $c_{1}>0$. This constant will play an important role in the proof of Theorem \ref{blow_up_viscoel}. \end{remark} \begin{proof}[Proof of Theorem \protect\ref{blow_up_viscoel}] \label{subsection_proof_blow_up}
We implement the so-called Georgiev-Todorova method (see \cite{GT94,Mes01} and also \cite{MS041}). So, we suppose that the solution exists for all time and we prove exponential growth. For this purpose, we set: \begin{equation} \mathscr{H}\left( t\right) =E_{2}-E\left( t\right) . \label{function_H} \end{equation} Of course, by (\ref{Initial_data_visco}) and (\ref{dissp_enrgy_visco}), we deduce that $\mathscr{H}(0)>0$ and that $\mathscr{H}$ is a non-decreasing function. So, by using (\ref{Energy_visco_elastic2}), (\ref{function_H}) and $E_{2}<E_{1}$, we get successively: \begin{equation*} 0 < \mathscr{H}\left( 0\right) \leq \mathscr{H}\left( t\right) =E_{2}-E\left( t\right) \leq E_{1}-\dfrac{1}{2}\gamma(t)+\dfrac{1}{p}\left\Vert u\left( t\right) \right\Vert _{p}^{p}. \end{equation*} On the one hand, as $F(\alpha_{1}) = E_{1}$ and $ \forall \ t > 0 \,,\ \gamma(t) \geq \alpha_{2}^{2} > \alpha_{1}^{2}$, we obtain: $$ E_{1}-\dfrac{1}{2}\gamma(t)< F\left( \alpha _{1}\right) -\frac{1}{2}\alpha _{1}^{2} \ . $$ On the other hand, since $$F\left( \alpha _{1}\right) -\frac{1}{2}\alpha _{1}^{2}=-\frac{B_{1}^{p}}{p}\alpha _{1}^{p} \ , $$ we obtain the following inequality: \begin{equation} 0<\mathscr{H}\left( 0\right) \leq \mathscr{H}\left( t\right) \leq \dfrac{1}{p }\left\Vert u\left( t\right) \right\Vert _{p}^{p},\;\qquad \forall t\geq 0. \label{H_inequality} \end{equation} For $\varepsilon $ small, to be chosen later, and inspired by the ideas of \cite{GS08}, we define the auxiliary function: \begin{equation} \mathscr{L}\left( t\right) =\mathscr{H}\left( t\right) +\varepsilon \int_{\Omega }u_{t}udx+\varepsilon \int_{\Gamma _{1}}u_{t}ud\Gamma +\frac{ \varepsilon \alpha }{2}\left\Vert \nabla u\right\Vert _{2}^{2}. \label{defL} \end{equation} Let us remark that $\mathscr{L}$ is a small perturbation of the energy.
Taking the time derivative of (\ref{defL}) and using problem (\ref{ondes}), we obtain: \begin{eqnarray} \frac{d\mathscr{L}(t)}{dt} &=&\alpha \left\Vert \nabla u_{t}\right\Vert _{2}^{2}+\int_{\Gamma _{1}}h\left( u_{t}\right) u_{t}d\Gamma +\varepsilon \left\Vert u_{t}\right\Vert _{2}^{2}-\varepsilon \left\Vert \nabla u\right\Vert _{2}^{2} \notag \\ &&+\varepsilon \left\Vert u\right\Vert _{p}^{p}+\varepsilon \left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-\varepsilon \int_{\Gamma _{1}}h\left( u_{t}\right) u(x,t)d\Gamma \notag \\ &&+\int_{\Omega }\nabla u\left( t\right) \int_{0}^{t}g\left( t-s\right) \nabla u\left( s\right) dsdx. \label{derivL2} \end{eqnarray} By making use of (\ref{Assumption_h}) and the following weighted Young inequality \begin{equation} XY\leq \frac{\lambda ^{\mu }X^{\mu }}{\mu }+\frac{\lambda ^{-\nu }Y^{\nu }}{\nu }, \label{Young_inequality} \end{equation} valid for $X,\,Y\geq 0,\;\lambda >0,\;\mu ,\,\nu \in \mathbb{R}^{+}$ such that $1/\mu +1/\nu =1$, we get \begin{eqnarray} \int_{\Gamma _{1}}h\left( u_{t}\right) ud\Gamma &\leq &C_{m}\int_{\Gamma _{1}}\left\vert u_{t}\right\vert ^{m-2}u_{t}ud\Gamma \notag \\ &\leq &C_{m}\frac{\lambda ^{m}}{m}\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}+C_{m}\frac{m-1}{m}\lambda ^{-m/\left( m-1\right) }\left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m}. \label{Young_1} \end{eqnarray} Now, the term involving $g$ on the right-hand side of (\ref{derivL2}) can be written as \begin{eqnarray} \displaystyle\int_{\Omega }\nabla u\left( t,x\right) \displaystyle &&\hspace*{-1cm}\int_{0}^{t}g\left( t-s\right) \nabla u\left( s,x\right) dsdx
=\Vert \nabla u\left( t\right) \Vert_{2}^{2}\left( \displaystyle\int_{0}^{t}g\left( s\right) ds\right) \label{Las_term_estimate} \\ &+&\displaystyle\int_{\Omega }\nabla u\left( t,x\right) \displaystyle\int_{0}^{t}g\left( t-s\right) \left( \nabla u\left( s,x\right) -\nabla u\left( t,x\right) \right) dsdx \notag . \end{eqnarray}
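As a purely illustrative sanity check of the weighted Young inequality (\ref{Young_inequality}) used above (this numerical snippet is ours, with arbitrary test values, and plays no role in the proof):

```python
import random

def young_rhs(X, Y, lam, mu, nu):
    """Right-hand side of XY <= lam^mu X^mu/mu + lam^(-nu) Y^nu/nu."""
    return lam**mu * X**mu / mu + lam**(-nu) * Y**nu / nu

random.seed(0)
for _ in range(1000):
    X, Y = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    lam = random.uniform(0.1, 10.0)
    mu = random.uniform(1.1, 5.0)
    nu = mu / (mu - 1.0)  # conjugate exponent: 1/mu + 1/nu = 1
    # the weighted Young inequality (small tolerance for rounding)
    assert X * Y <= young_rhs(X, Y, lam, mu, nu) + 1e-9
print("weighted Young inequality verified on random samples")
```

The weight $\lambda$ lets one make either term on the right as small as desired at the expense of the other, which is exactly how it is exploited in the estimates above.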
On the other hand, by using H\"{o}lder's and Young's inequalities, we infer that, for all $\mu >0$, \begin{equation} \left. \begin{array}{l} \displaystyle\int_{\Omega }\nabla u\left( t,x\right) \displaystyle \int_{0}^{t}g\left( t-s\right) \left( \nabla u\left( s,x\right) -\nabla u\left( t,x\right) \right) dsdx
\\ \leq \displaystyle\int_{0}^{t}g\left( t-s\right) \Vert \nabla u\left( t\right) \Vert _{2}\Vert \nabla u\left( s\right) -\nabla u\left( t\right) \Vert _{2}
ds \\ \leq \mu \left( g\diamond u\right) \left( t\right) +\dfrac{1}{4\mu }\left( \displaystyle\int_{0}^{t}g\left( s\right) ds\right) \Vert \nabla u\left( t\right) \Vert _{2}^{2}. \end{array} \right. \label{mu_inequality} \end{equation} Inserting the estimates (\ref{Young_1}) and (\ref{Las_term_estimate}) into (\ref{derivL2}), taking into account the inequality (\ref{mu_inequality}) and making use of (\ref{Assumption_h}), we obtain, by choosing $\mu =1/2$ and multiplying by $l$, \begin{eqnarray} l\mathscr{L}^{\prime }\left( t\right) &\geq &\alpha l\left\Vert \nabla u_{t}\right\Vert _{2}^{2}+l\left( c_{m}-C_{m}\varepsilon \frac{m-1}{m} \lambda ^{-m/\left( m-1\right) }\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m}+\varepsilon l\left\Vert u_{t}\right\Vert _{2}^{2} \notag \\ &&+\varepsilon l\left\Vert u\right\Vert _{p}^{p}+\varepsilon l\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-C_{m}\varepsilon l\frac{\lambda ^{m}}{m}\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m} \label{dL_dt_2} \\ &&-\frac{\varepsilon l}{2}\left( g\diamond u\right) \left( t\right) -\varepsilon l\Vert \nabla u\left( t\right) \Vert _{2}^{2}. \notag \end{eqnarray} We now want to estimate the term involving $\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}$ in (\ref{dL_dt_2}). Proceeding as in \cite{GS08}, we have \begin{equation*} \left\Vert u\right\Vert _{m,\Gamma _{1}}\leq C\left\Vert u\right\Vert _{H^{s}(\Omega )}, \end{equation*} which holds for \begin{equation*} m\geq 1\quad \mbox{ and }\quad 0<s<1,\quad s\geq \frac{N}{2}-\frac{N-1}{m}>0, \end{equation*} where $C$ here and in the sequel denotes a generic positive constant which may change from line to line.
Recalling the interpolation and Poincar\'{e}'s inequalities (see \cite{LM68}) \begin{eqnarray*} \left\Vert u\right\Vert _{H^{s}(\Omega )} &\leq &C\left\Vert u\right\Vert _{2}^{1-s}\left\Vert \nabla u\right\Vert _{2}^{s} ,\\ &\leq &C\left\Vert u\right\Vert _{p}^{1-s}\left\Vert \nabla u\right\Vert_{2}^{s}, \end{eqnarray*} we finally have the following inequality: \begin{equation} \left\Vert u\right\Vert _{m,\Gamma _{1}}\leq C\left\Vert u\right\Vert _{p}^{1-s}\left\Vert \nabla u\right\Vert _{2}^{s}. \label{u_m_estimate} \end{equation} If $s<2/m$, using again Young's inequality, we get: \begin{equation} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\frac{m\left( 1-s\right) \mu }{p}}+\left( \left\Vert \nabla u\right\Vert _{2}^{2}\right) ^{\frac{ms\theta }{2}}\right] \label{estiGamma1} \end{equation} for $1/\mu +1/\theta =1.$ Here we choose $\theta =2/ms,$ to get $\mu =2/\left( 2-ms\right) $. Therefore the previous inequality becomes: \begin{equation} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\frac{m\left( 1-s\right) 2}{\left( 2-ms\right) p}}+\left\Vert \nabla u\right\Vert _{2}^{2}\right] . \label{normGamma1} \end{equation} Now, choosing $s$ such that: \begin{equation*} 0<s\leq \frac{2\left( p-m\right) }{m\left( p-2\right) }, \end{equation*} we get: \begin{equation} \frac{2m\left( 1-s\right) }{\left( 2-ms\right) p}\leq 1. 
\label{choicesm} \end{equation} Once the inequality (\ref{choicesm}) is satisfied, we use the classical algebraic inequality: \begin{equation} z^{\nu }\leq \left( z+1\right) \leq \left( 1+\frac{1}{\omega }\right) \left( z+\omega \right) \;,\quad \forall z\geq 0\;,\quad 0<\nu \leq 1\;,\quad \omega > 0, \label{Algebraic_inequality} \end{equation} to obtain the following estimate: \begin{eqnarray} \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\frac{m\left( 1-s\right) 2}{ \left( 2-ms\right) p}} &\leq &d\left( \left\Vert u\right\Vert _{p}^{p}+ \mathscr{H}\left( 0\right) \right) \notag \label{normeLp} \\ &\leq &d\left( \left\Vert u\right\Vert _{p}^{p}+\mathscr{H}\left( t\right) \right) \;,\quad \forall t\geq 0, \end{eqnarray} where we have set $d=1+1/\mathscr{H}(0)$. Inserting the estimate (\ref{normeLp}) into (\ref{estiGamma1}), we obtain the following important inequality: \begin{equation} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ \left\Vert u\right\Vert _{p}^{p}+l\left\Vert \nabla u\right\Vert _{2}^{2}+\mathscr{H}\left( t\right) \right] . \label{L_gam_m-norm} \end{equation} Keeping in mind that $l=1-\int_{0}^{\infty }g\left( s\right) ds$, in order to control the term $\left\Vert \nabla u\right\Vert _{2}^{2}$ in (\ref{dL_dt_2}), we prefer to use (since $\mathscr{H}(t)>0$) the following estimate: \begin{equation*} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ \left\Vert u\right\Vert _{p}^{p}+l\left\Vert \nabla u\right\Vert _{2}^{2}+2\mathscr{H}\left( t\right) \right] . \end{equation*} This finally gives: \begin{eqnarray} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m} &\leq &C\left[ 2E_{2}+\left( 1+ \frac{2}{p}\right) \left\Vert u\right\Vert _{p}^{p}-\left\Vert u_{t}\right\Vert _{2}^{2}-\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}\right. \notag \\ &&\left. +\left( l-\Big(1-\int_{0}^{t}g\left( s\right) ds\Big)\right) \left\Vert \nabla u\right\Vert _{2}^{2}-\left( g\diamond u\right) \left( t\right) \right] .
\label{umgama1} \end{eqnarray} Since $1-\int_{0}^{t}g\left( s\right) ds\geq l$, we obtain from the above \begin{equation} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ 2E_{2}+\left( 1+ \frac{2}{p}\right) \left\Vert u\right\Vert _{p}^{p}-\left\Vert u_{t}\right\Vert _{2}^{2}-\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-\left( g\diamond u\right) \left( t\right) \right] . \label{boundary_important_estimate} \end{equation} Now, inserting (\ref{boundary_important_estimate}) into (\ref{dL_dt_2}), we infer that: \begin{eqnarray} l\mathscr{L}^{\prime }\left( t\right) &\geq &\alpha l\left\Vert \nabla u_{t}\right\Vert _{2}^{2}+l\left( c_{m}-C_{m}\varepsilon \frac{m-1}{m} \lambda ^{-m/\left( m-1\right) }\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \notag \\ &&+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m}C\right) \left\Vert u_{t}\right\Vert _{2}^{2}+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m} C\right) \left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2} \notag \\ &&+\varepsilon \left\{ l-C_{m}l\frac{\lambda ^{m}}{m}C\left( 1+\frac{2}{p} \right) \right\} \left\Vert u\right\Vert _{p}^{p} \label{dL_dt_4} \\ &&+\varepsilon \left( C_{m}l\frac{\lambda ^{m}}{m}C-\frac{l}{2}\right) \left( g\diamond u\right) \left( t\right) -\varepsilon l\Vert \nabla u\left( t\right) \Vert _{2}^{2}-2C_{m}\varepsilon l\frac{\lambda ^{m}}{m}CE_{2} \ . \notag \end{eqnarray} From (\ref{function_H}), we get \begin{eqnarray*} \mathscr{H}\left( t\right) &\leq &E_{2}-\dfrac{1}{2}\left( 1-\displaystyle \int_{0}^{t}g\left( s\right) ds\right) \left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}
\\ &&-\dfrac{1}{2}\left( g\diamond u\right) \left( t\right) +\dfrac{1}{p} \left\Vert u\left( t\right) \right\Vert _{p}^{p} \\ &\leq &E_{2}-\frac{l}{2}\left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}-\dfrac{1}{2}\left( g\diamond u\right) \left( t\right) +\dfrac{1}{p} \left\Vert u\left( t\right) \right\Vert _{p}^{p}. \end{eqnarray*} This last inequality gives \begin{equation} -l\left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}\geq 2\left( \mathscr{H}\left( t\right) -E_{2}+\dfrac{1}{2}\left( g\diamond u\right) \left( t\right) -\dfrac{1}{p}\left\Vert u\left( t\right) \right\Vert _{p}^{p}\right) . \label{Gradient_inequality} \end{equation} Consequently, (\ref{dL_dt_4}) takes the form \begin{eqnarray} l\mathscr{L}^{\prime }\left( t\right) &\geq &\alpha l\left\Vert \nabla u_{t}\right\Vert _{2}^{2}+l\left( c_{m}-C_{m}\varepsilon \frac{m-1}{m} \lambda ^{-m/\left( m-1\right) }\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \notag \\ &&+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m}C\right) \left\Vert u_{t}\right\Vert _{2}^{2}+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m} C\right) \left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2} \notag \\ &&+\varepsilon \left\{ l-\frac{2}{p}-C_{m}l\frac{\lambda ^{m}}{m}C\left( 1+ \frac{2}{p}\right) \right\} \left\Vert u\right\Vert _{p}^{p}-2\varepsilon E_{2} \label{dL_dt_5} \\ &&+\varepsilon \left( C_{m}l\frac{\lambda ^{m}}{m}C-\frac{l}{2}+1\right) \left( g\diamond u\right) \left( t\right) +2\varepsilon \mathscr{H}\left( t\right) -2C_{m}\varepsilon l\frac{\lambda ^{m}}{m}CE_{2}. 
\notag \end{eqnarray} Now to estimate the terms involving $\left\Vert u\right\Vert _{p}^{p}$ and $E_{2}$ in (\ref{dL_dt_5}), we simply write: $$ \left( l-\frac{2}{p}\right) \left\Vert u\right\Vert_{p}^{p}-2E_{2} = \left( l-\frac{2}{p}\right) \left\Vert u\right\Vert _{p}^{p}-2E_{2} \frac{\Vert u\Vert_{p}^p} {\Vert u\Vert_{p}^p} \ .$$ Then by using (\ref{result_Vitillaro_2}), we get: \begin{equation*} \left( l-\frac{2}{p}\right) \left\Vert u\right\Vert _{p}^{p}-2E_{2}\geq c_{1}\left\Vert u\right\Vert _{p}^{p}, \end{equation*} where $c_{1} > 0$ is defined in Remark \ref{remarkc1}. Thus, (\ref{dL_dt_5}) becomes: \begin{eqnarray} l\mathscr{L}^{\prime }\left( t\right) &\geq &\alpha l\left\Vert \nabla u_{t}\right\Vert _{2}^{2}+l\left( c_{m}-C_{m}\varepsilon \frac{m-1}{m} \lambda ^{-m/\left( m-1\right) }\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \notag \\ &&+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m}C\right) \left\Vert u_{t}\right\Vert _{2}^{2}+\varepsilon \left( l+lC_{m}\frac{\lambda ^{m}}{m} C\right) \left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2} \notag \\ &&+\varepsilon \left\{ c_{1}-C_{m}l\frac{\lambda ^{m}}{m}C\left( 1+\frac{2}{p }\right) -4C_{m}\varepsilon l\frac{\lambda ^{m}}{m}CE_{2}\left( B_{1}\alpha _{2}\right) ^{-p}\right\} \left\Vert u\right\Vert _{p}^{p} \label{dL_dt_6} \\ &&+\varepsilon \left( C_{m}l\frac{\lambda ^{m}}{m}C-\frac{l}{2}+1\right) \left( g\diamond u\right) \left( t\right) +2\varepsilon \left( \mathscr{H} \left( t\right) +C_{m}l\frac{\lambda ^{m}}{m}CE_{2}\right) . \notag \end{eqnarray} Notice that since $l < 1$, we first have : $$ \forall \, \lambda > 0\,,\, C_{m}l\dfrac{\lambda^{m}}{m}C-\dfrac{l}{2}+1>0 \ .$$ At this point, we pick $\lambda $ small enough such that: \begin{equation*} c_{1}-C_{m}l\dfrac{\lambda ^{m}}{m}C\left( 1+\dfrac{2}{p}\right) -4C_{m}\varepsilon l\dfrac{\lambda ^{m}}{m}CE_{2}\left( B_{1}\alpha _{2}\right) ^{-p}>0 \ . 
\end{equation*} Once $\lambda $ is fixed, we may choose $\varepsilon $ small enough such that \begin{equation*} \left\{ \begin{array}{l} c_{m}-C_{m}\varepsilon \dfrac{m-1}{m}\lambda ^{-m/\left( m-1\right) }>0,
\\ \mathscr{L}\left( 0\right) >0. \end{array} \right. \end{equation*} Consequently, we end up with the estimate: \begin{equation} \mathscr{L}^{\prime }\left( t\right) \geq \eta _{1}\left( \left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}+\left\Vert u\right\Vert _{p}^{p}+\mathscr{H}\left( t\right) +E_{2}\right) ,\quad \forall t\geq 0, \label{First_main_inequality} \end{equation} for some $\eta _{1}>0$. Next, it is clear that, by Young's and Poincar\'{e}'s inequalities, we have: \begin{equation} \mathscr{L}\left( t\right) \leq \gamma \left[ \mathscr{H}\left( t\right) +\left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}+\left\Vert \nabla u\right\Vert _{2}^{2}\right] \,\mbox{ for some } \gamma >0. \label{estiL1} \end{equation} Since $\mathscr{H}(t)>0$, for all $t\geq 0$ we have: \begin{equation} \frac{l}{2}\left\Vert \nabla u\right\Vert _{2}^{2}\leq \frac{1}{p}\left\Vert u\right\Vert _{p}^{p}+E_{2}. \label{Dradient_main_estimate} \end{equation} Thus, the inequality (\ref{estiL1}) becomes: \begin{equation} \mathscr{L}\left( t\right) \leq \zeta \left[ \mathscr{H}(t)+\left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}+\left\Vert u\right\Vert _{p}^{p}+E_{2}\right] \,\, \mbox{ for some }\zeta >0. \label{estiL1bis} \end{equation} From the two inequalities (\ref{First_main_inequality}) and (\ref{estiL1bis}), we finally obtain the differential inequality: \begin{equation} \frac{d\mathscr{L}\left( t\right) }{dt}\geq \mu \mathscr{L}\left( t\right) \,\,\mbox{ for some }\mu >0. \label{diffineq} \end{equation} An integration of (\ref{diffineq}) between $0$ and $t$ gives the following estimate for the function $\mathscr{L}$: \begin{equation} \mathscr{L}\left( t\right) \geq \mathscr{L}\left( 0\right) e^{\mu t}.
\label{estiL2} \end{equation} On the other hand, from the definition of the function $\mathscr{L}$, from inequality (\ref{H_inequality}) and for small values of the parameter $\varepsilon$, it follows that: \begin{equation} \mathscr{L}\left( t\right) \leq \frac{1}{p}\left\Vert u\right\Vert _{p}^{p}. \label{estiLp2} \end{equation} From the two inequalities (\ref{estiL2}) and (\ref{estiLp2}), we conclude that $\left\Vert u\left( t\right) \right\Vert _{p}^{p}\geq p\,\mathscr{L}\left( 0\right) e^{\mu t}$; that is, the solution grows exponentially in the $L^{p}$-norm. \end{proof} \section{Blow up in finite time for $\protect\alpha=0$}
\label{blow_up_section}
In this section, we prove that in the absence of the strong damping $-\Delta u_{t}$ (i.e. $\alpha =0$), the solution of problem (\ref{ondes}) blows up in finite time; that is, there exists $0<T^{\ast}<\infty $ such that $\left\Vert u(t)\right\Vert_{p}\rightarrow \infty $ as $t\rightarrow T^{\ast }$.
The blow up result reads as follows: \begin{theorem} \label{blow_up_} Suppose that (\ref{hypothesis_g}), (\ref{hypothesis_g_2}) and (\ref{Assumption_h}) hold. Assume that \begin{equation*} 2<m\quad \text{and}\quad \max \left( m,2/l\right) <p\leq \overline{p}. \end{equation*} Then, the solution of (\ref{ondes}) satisfying \begin{equation} E\left( 0\right) <E_{2},\qquad \left\Vert \nabla u_{0}\right\Vert _{2}\geq \alpha _{1}, \end{equation} blows up in finite time. That is $\left\Vert u\left( t\right) \right\Vert _{p}\rightarrow \infty $ as $t\rightarrow T^{\ast }$ for some $0<T^{\ast }<\infty $. \end{theorem}
\begin{remark} The requirement $m>2$ in Theorem \ref{blow_up_} is technical but it seems necessary in our proof. The case $m=2$ cannot be handled with the method we use here. But the same result can be shown for $m=2$ by using the concavity method. See \cite{GS082} for more details. \end{remark}
\begin{proof}[Proof of Theorem \ref{blow_up_}] To prove Theorem \ref{blow_up_}, we suppose that the solution exists for all time and reach a contradiction. Following the idea introduced in \cite{GT94} and developed in \cite{MS041} and \cite{V99}, we will define a function $\hat{L}$ which is a perturbation of the total energy of the system and which satisfies the differential inequality \begin{equation} \frac{d\hat{L}\left( t\right) }{dt}\geq \xi \hat{L}^{1+\nu }\left( t\right) \ , \label{Georgiev_inequality} \end{equation} where $\nu >0.$ Inequality (\ref{Georgiev_inequality}) leads to a blow up of the solution in finite time $T^{\ast }\leq \hat{L}\left( 0\right) ^{-\nu }\xi ^{-1}\nu ^{-1}$, provided that $\hat{L}\left( 0\right) >0.$
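To illustrate this mechanism numerically (this sketch is ours, with arbitrary illustrative constants, and is not part of the proof): any positive solution of the model equality case $y^{\prime }=\xi y^{1+\nu }$ is explicit and blows up no later than $y(0)^{-\nu }/(\xi \nu )$:

```python
def exact_solution(t, y0, xi, nu):
    """Exact solution of y' = xi * y**(1+nu) with y(0) = y0 > 0."""
    return (y0**(-nu) - nu * xi * t) ** (-1.0 / nu)

def blow_up_bound(y0, xi, nu):
    """Blow-up time T* = y0**(-nu) / (nu * xi) of the model equation."""
    return y0**(-nu) / (nu * xi)

# illustrative constants only
y0, xi, nu = 1.0, 0.5, 0.25
T = blow_up_bound(y0, xi, nu)   # T* = 8.0 for these values
print(T)
print(exact_solution(0.999 * T, y0, xi, nu))  # already huge just before T*
```

A solution of the differential *inequality* dominates this model solution, so it must blow up no later than the same time.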
To do so, we define the functional $\hat{L}$ as follows: \begin{equation} \hat{L}\left( t\right) =\mathscr{H}^{1-\sigma }(t)+\epsilon \int_{\Omega }u_{t}udx+\epsilon \int_{\Gamma _{1}}u_{t}ud\Gamma , \label{L_hat} \end{equation} where the functional $\mathscr{H}$ is defined in (\ref{function_H}), $\sigma $ satisfies \begin{equation} 0<\sigma \leq \min \left( \frac{p-m}{p\left( m-1\right) },\frac{p-2}{2p}, \frac{m-2}{2m},\hat{\sigma}\right) , \label{Segma} \end{equation} where $\hat{\sigma}$ is defined later in (\ref{sigma_hat}), and $\epsilon $ is a small positive constant to be chosen later. Taking the time derivative of $\hat{L}(t)$ and following the same steps as in the proof of Theorem \ref{blow_up_viscoel}, we get (instead of inequality (\ref{dL_dt_2})), for all $\lambda > 0$, \begin{eqnarray} l\hat{L}^{\prime }\left( t\right) &\geq &lc_{m}\left( 1-\sigma \right) \mathscr{H}^{-\sigma }\left( t\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m}-C_{m}\epsilon \frac{m-1}{m}\lambda ^{-m/\left( m-1\right) }\left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m}+\epsilon l\left\Vert u_{t}\right\Vert _{2}^{2} \notag \\ &&+\epsilon l\left\Vert u\right\Vert _{p}^{p}+\epsilon l\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-C_{m}\epsilon l\frac{\lambda ^{m}}{m} \left\Vert u\right\Vert _{m,\Gamma _{1}}^{m} \label{dL_hat_dt_1} \\ &&-\frac{\epsilon l}{2}\left( g\diamond u\right) \left( t\right) -\epsilon l\Vert \nabla u\left( t\right) \Vert _{2}^{2}. \notag \end{eqnarray} Next, for large positive $M$, we select $\lambda ^{-m/\left( m-1\right) }=M \mathscr{H}^{-\sigma }\left( t\right) $.
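For clarity, the exponent bookkeeping behind this choice of $\lambda$ is the following elementary computation:

```latex
% Raising both sides of \lambda^{-m/(m-1)} = M \mathscr{H}^{-\sigma}(t)
% to the power -(m-1) gives
\[
\lambda ^{m}=\left( M\,\mathscr{H}^{-\sigma }\left( t\right) \right)
^{-\left( m-1\right) }=M^{-\left( m-1\right) }\,\mathscr{H}^{\sigma \left(
m-1\right) }\left( t\right) ,
\]
% which produces the factor M^{-(m-1)} \mathscr{H}^{\sigma(m-1)} multiplying
% \|u\|_{m,\Gamma_1}^m in the next estimate.
```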
Then the estimate (\ref{dL_hat_dt_1}) takes the form: \begin{eqnarray} l\hat{L}^{\prime }\left( t\right) &\geq &\left( lc_{m}\left( 1-\sigma \right) -MC_{m}\epsilon \frac{m-1}{m}\right) \mathscr{H}^{-\sigma }\left( t\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m}+\epsilon l\left\Vert u_{t}\right\Vert _{2}^{2} \notag \\ &&+\epsilon l\left\Vert u\right\Vert _{p}^{p}+\epsilon l\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-C_{m}\epsilon l\frac{M^{-\left( m-1\right) }}{m}\mathscr{H}^{\sigma \left( m-1\right) }\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m} \label{dL_hat_dt_2} \\ &&-\frac{\epsilon l}{2}\left( g\diamond u\right) \left( t\right) -\epsilon l\Vert \nabla u\left( t\right) \Vert _{2}^{2}. \notag \end{eqnarray} Exploiting (\ref{H_inequality}) and (\ref{u_m_estimate}), we get: \begin{equation*} \mathscr{H}^{\sigma \left( m-1\right) }\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left\Vert u\right\Vert _{p}^{\left( 1-s\right) m+\sigma p\left( m-1\right) }\left\Vert \nabla u\right\Vert _{2}^{sm} \ . \end{equation*} Thus, as in Section \ref{Exponential_growth_section}, we have \begin{equation*} \left\Vert u\right\Vert _{p}^{\left( 1-s\right) m+\sigma p\left( m-1\right) }\left\Vert \nabla u\right\Vert _{2}^{sm}\leq C\left[ \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\left( \frac{m\left( 1-s\right) }{p}+\sigma \left( m-1\right) \right) \mu }+\left( \left\Vert \nabla u\right\Vert _{2}^{2}\right) ^{\frac{ms\theta }{2}}\right] . \end{equation*} Choosing $\mu ,\,\theta ,$ and $s$ exactly as in Section \ref{Exponential_growth_section} (with strict inequalities), we take $\sigma$ satisfying: \begin{equation} \sigma \leq \frac{2-ms}{2\left( m-1\right) }\left( 1-\frac{2m\left( 1-s\right) }{\left( 2-ms\right) p}\right) =\hat{\sigma}. \label{sigma_hat} \end{equation} The hypotheses on $m$ and $p$ ensure that $0 < \sigma < 1$.
Consequently, we get from above: \begin{equation} \mathscr{H}^{\sigma \left( m-1\right) }\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m}\leq C\left[ \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\left( \frac{m\left( 1-s\right) }{p}+\sigma \left( m-1\right) \right) \mu }+\left\Vert \nabla u\right\Vert _{2}^{2}\right] . \label{H_sigma_inequality} \end{equation} Since, \begin{equation*} \left( \frac{m\left( 1-s\right) }{p}+\sigma \left( m-1\right) \right) \frac{2 }{2-ms}\leq 1, \end{equation*} applying the algebraic inequality (\ref{Algebraic_inequality}), we get: \begin{eqnarray} \left( \left\Vert u\right\Vert _{p}^{p}\right) ^{\left( \frac{m\left( 1-s\right) }{p}+\sigma \left( m-1\right) \right) \frac{2}{2-ms}} &\leq &d\left( \left\Vert u\right\Vert _{p}^{p}+\mathscr{H}\left( 0\right) \right) \notag \\ &\leq &d\left( \left\Vert u\right\Vert _{p}^{p}+\mathscr{H}\left( t\right) \right) \;,\quad \forall t\geq 0 \ .\label{u_p_inequality_2} \end{eqnarray} Thus, (\ref{u_p_inequality_2}) together with (\ref{H_sigma_inequality}) leads to (see (\ref{boundary_important_estimate})): \begin{eqnarray} \mathscr{H}^{\sigma \left( m-1\right) }\left\Vert u\right\Vert _{m,\Gamma _{1}}^{m} &\leq &Cd\left[ \left\Vert u\right\Vert _{p}^{p}+l\left\Vert \nabla u\right\Vert _{2}^{2}+\mathscr{H}\left( t\right) \right] \notag \\ &\leq &Cd\left[ 2E_{2}+\left( 1+\frac{2}{p}\right) \left\Vert u\right\Vert _{p}^{p}-\left\Vert u_{t}\right\Vert _{2}^{2}-\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}-\left( g\diamond u\right) \left( t\right) \right] . 
\label{H_sigma_m_inequality} \end{eqnarray} Inserting (\ref{H_sigma_m_inequality}) into (\ref{dL_hat_dt_2}) and using ( \ref{Gradient_inequality}), we obtain: \begin{eqnarray} l\hat{L}^{\prime}(t)&\geq &\left( lc_{m}\left( 1-\sigma \right) -MC_{m}\epsilon \frac{m-1}{m}\right) \mathscr{H}^{-\sigma }\left( t\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \notag \\ &+~\epsilon& l\left( 1+C_{m}\epsilon \frac{M^{-\left( m-1\right) }}{m} Cd\right) \left\{ \left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}\right\} +2\epsilon \mathscr{H}\left( t\right) -2\epsilon E_{2} \label{dL_hat_dt_3} \\ &+~\epsilon&\left\{ l-\frac{2}{p}-C_{m}l\frac{M^{-\left( m-1\right) }}{m} Cd\left( 1+\frac{2}{p}\right) \right\} \left\Vert u\right\Vert _{p}^{p}+C_{m}\epsilon l\frac{M^{-\left( m-1\right) }}{m}Cd\left( g\diamond u\right) \left( t\right) \notag\\ &-2~\epsilon& C_{m} l\frac{M^{-\left( m-1\right) }}{m}CdE_{2}+\epsilon \left( 1-\frac{l}{2}\right) \left( g\diamond u\right) \left( t\right) . 
\notag \end{eqnarray} Writing again $E_{2} = E_{2} {\Vert u\Vert_{p}^p}/{\Vert u\Vert_{p}^p}$ and using again (\ref{result_Vitillaro_2}), we deduce that: \begin{eqnarray*} l\hat{L}^{\prime }\left( t\right)& \geq& \left( lc_{m}\left( 1-\sigma \right) -MC_{m}\epsilon \frac{m-1}{m}\right) \mathscr{H}^{-\sigma }\left( t\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \\ &+~\epsilon& l\left( 1+C_{m}\epsilon \frac{M^{-\left( m-1\right) }}{m} Cd\right) \left\{ \left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}\right\} +2\epsilon \mathscr{H}\left( t\right) +2C_{m}\epsilon l\frac{M^{-\left( m-1\right) }}{m}CdE_{2} \\ &+~\epsilon& \left\{ l-\frac{2}{p}-2E_{2}\left( B_{1}\alpha _{2}\right) ^{-p}-C_{m}l\frac{M^{-\left( m-1\right) }}{m}Cd\left( 1+\frac{2}{p}\right) -4C_{m}l\frac{M^{-\left( m-1\right) }}{m}CdE_{2}\left( B_{1}\alpha _{2}\right) ^{-p}\right\} \left\Vert u\right\Vert _{p}^{p} \\ &+~\epsilon&C_{m} l\frac{M^{-\left( m-1\right) }}{m}Cd\left( g\diamond u\right) \left( t\right) . \end{eqnarray*} Thus, using the definition of $c_{1}$ in Remark \ref{remarkc1}, we get: \begin{eqnarray*} l\hat{L}^{\prime }\left( t\right) &\geq &\left( lc_{m}\left( 1-\sigma \right) -MC_{m}\epsilon \frac{m-1}{m}\right) \mathscr{H}^{-\sigma }\left( t\right) \left\Vert u_{t}\right\Vert _{m,\Gamma _{1}}^{m} \\ &+~\epsilon& l\left( 1+C_{m}\epsilon \frac{M^{-\left( m-1\right) }}{m} Cd\right) \left\{ \left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}\right\} +2\epsilon \mathscr{H}\left( t\right) +2C_{m}\epsilon l\frac{M^{-\left( m-1\right) }}{m}CdE_{2} \\ &+~\epsilon& \left\{c_{1}-C_{m}l\frac{M^{-\left( m-1\right) }}{m}Cd\left( 1+\frac{2}{p}\right) -4C_{m}l\frac{M^{-\left( m-1\right) }}{m}CdE_{2}\left( B_{1}\alpha _{2}\right) ^{-p}\right\} \left\Vert u\right\Vert _{p}^{p} \\ &+~\epsilon&C_{m}l\frac{M^{-\left( m-1\right) }}{m}Cd\left( g\diamond u\right) \left( t\right) . 
\end{eqnarray*} Since $c_{1} > 0$, we choose $M$ large enough such that: \begin{equation*} c_{1}-C_{m}l\frac{M^{-\left( m-1\right) }}{m}Cd\left( 1+\frac{2}{p}\right) -4C_{m}l\frac{M^{-\left( m-1\right) }}{m}CdE_{2}\left( B_{1}\alpha _{2}\right) ^{-p}>0. \end{equation*} Once $M$ is fixed, we pick $\epsilon $ small enough such that \begin{equation*} lc_{m}\left( 1-\sigma \right) -MC_{m}\epsilon \frac{m-1}{m}>0 \end{equation*} and $\hat{L}\left( 0\right) >0$. This leads to \begin{equation} \hat{L}^{\prime }\left( t\right) \geq \hat{\eta}\left( \left\Vert u_{t}\right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}+ \mathscr{H}\left( t\right) +\left\Vert u\right\Vert _{p}^{p}+E_{2}\right) \label{L_hat_prime} \end{equation} for some $\hat{\eta}>0.$
On the other hand, it follows from the definition (\ref{L_hat}) that: \begin{equation} \hat{L}^{\frac{1}{1-\sigma }}\left( t\right) \leq C\left( \epsilon ,\sigma \right) \left[ \mathscr{H}\left( t\right) +\left( \int_{\Omega }u_{t}\,udx\right) ^{\frac{1}{1-\sigma }}+\left( \int_{\Gamma _{1}}u_{t}ud\Gamma \right) ^{\frac{1}{1-\sigma }}\right] \ . \label{L_second_estimate} \end{equation} By the Cauchy-Schwarz inequality and H\"{o}lder's inequality, we have: \begin{eqnarray*} \int_{\Omega }u_{t}udx &\leq &\left( \int_{\Omega }u_{t}^{2}dx\right) ^{ \frac{1}{2}}\left( \int_{\Omega }u^{2}dx\right) ^{\frac{1}{2}} \\ &\leq &C\left( \int_{\Omega }u_{t}^{2}dx\right) ^{\frac{1}{2}}\left( \int_{\Omega }\left\vert u\right\vert ^{p}dx\right) ^{\frac{1}{p}}, \end{eqnarray*} where $C$ is the positive constant which comes from the embedding $L^{p}\left( \Omega \right) \hookrightarrow L^{2}\left( \Omega \right) $. This inequality implies that there exists a positive constant $C_{1}>0$ such that: \begin{equation*} \left( \int_{\Omega }u_{t}udx\right) ^{\frac{1}{1-\sigma }}\leq C_{1}\left[ \left( \int_{\Omega }\left\vert u\right\vert ^{p}dx\right) ^{\frac{1}{ (1-\sigma )p}}\left( \int_{\Omega }u_{t}^{2}dx\right) ^{\frac{1}{2(1-\sigma ) }}\right] . \end{equation*} Applying Young's inequality to the right-hand side of the preceding inequality, there exists a positive constant, again denoted $C>0$, such that: \begin{equation} \left( \int_{\Omega }u_{t}udx\right) ^{\frac{1}{1-\sigma }}\leq C\left[ \left( \int_{\Omega }\left\vert u\right\vert ^{p}dx\right) ^{\frac{\tau }{ (1-\sigma )p}}+\left( \int_{\Omega }u_{t}^{2}dx\right) ^{\frac{\theta }{ 2(1-\sigma )}}\right] , \label{Young_main} \end{equation} for $1/\tau +1/\theta =1$.
We take $\theta =2(1-\sigma )$, hence $\tau =2\left( 1-\sigma \right) /\left( 1-2\sigma \right) $, to get: \begin{equation*} \left( \int_{\Omega }u_{t}udx\right) ^{\frac{1}{1-\sigma }}\leq C\left[ \left( \int_{\Omega }\left\vert u\right\vert ^{p}dx\right) ^{\frac{2}{ (1-2\sigma )p}}+\int_{\Omega }u_{t}^{2}dx\right] \ . \end{equation*} Using the algebraic inequality (\ref {Algebraic_inequality}) with $z=\left\Vert u\right\Vert _{p}^{p}$, $ \displaystyle d=1+1/\mathscr{H}(0)$, $\omega =\mathscr{H}(0)$ and $\nu = \displaystyle\frac{2}{p\left( 1-2\sigma \right) }$ (the condition (\ref {Segma}) on $\sigma $ ensuring that $0<\nu \leq 1$) we get: \begin{equation*} z^{\nu }\leq d\left( z+\mathscr{H}(0)\right) \leq d\left( z+\mathscr{H} (t)\right) . \end{equation*} Therefore, there exists a positive constant denoted $C_{2}$ such that \ for all $t\geq 0$, \begin{equation} \left( \int_{\Omega }u_{t}udx\right) ^{\frac{1}{1-\sigma }}
\leq C_{2}\left[ \mathscr{H}\left( t\right) +\left\Vert u\left( t\right) \right\Vert _{p}^{p}+\left\Vert u_{t}\left( t\right) \right\Vert _{2}^{2} \right] . \label{estimate_first_term} \end{equation} Following the same method as above, we can show that there exists $C_{3}>0$ such that: \begin{equation*} \left( \int_{\Gamma _{1}}u_{t}ud\Gamma \right) ^{\frac{1}{1-\sigma }}\leq C_{3}\left[ \mathscr{H}\left( t\right) +\left\Vert u\left( t\right) \right\Vert _{m,\Gamma _{1}}^{m}+\left\Vert u_{t}\left( t\right) \right\Vert _{2,\Gamma _{1}}^{2}\right] . \end{equation*} Applying the inequality (\ref{L_gam_m-norm}), we get: \begin{equation*} \left( \int_{\Gamma _{1}}u_{t}ud\Gamma \right) ^{\frac{1}{1-\sigma }}\leq C_{4}\left[ \mathscr{H}\left( t\right) +\left\Vert u\left( t\right) \right\Vert _{p}^{p}+l\left\Vert \nabla u\left( t\right) \right\Vert _{2}^{2}+\left\Vert u_{t}\left( t\right) \right\Vert _{2,\Gamma _{1}}^{2} \right] . \end{equation*} Furthermore, inequality (\ref{Dradient_main_estimate}) leads to: \begin{equation} \left( \int_{\Gamma _{1}}u_{t}ud\Gamma \right) ^{\frac{1}{1-\sigma }}\leq C_{5}\left[ \mathscr{H}\left( t\right) +\left\Vert u\left( t\right) \right\Vert _{p}^{p}+\left\Vert u_{t}\left( t\right) \right\Vert _{2,\Gamma _{1}}^{2}+E_{2}\right] . \label{estimate_second_term} \end{equation} Collecting (\ref{L_second_estimate}), (\ref{estimate_first_term}) and (\ref {estimate_second_term}), we obtain: \begin{equation} \hat{L}^{\frac{1}{1-\sigma }}\left( t\right) \leq \hat{\eta}_{1}\left\{ \left\Vert u_{t}\left( t\right) \right\Vert _{2}^{2}+\left\Vert u_{t}\right\Vert _{2,\Gamma _{1}}^{2}+\mathscr{H}(t)+\left\Vert u\left( t\right) \right\Vert _{p}^{p}+E_{2}\right\} ,\qquad \forall t\geq 0, \label{L_hat_last} \end{equation} for some $\hat{\eta}_{1}>0.$
Combining (\ref{L_hat_prime}) and (\ref{L_hat_last}), there exists a positive constant $\xi >0$ (small along with $\epsilon $) such that, for all $t\geq 0$, \begin{equation} \hat{L}^{\prime }(t)\geq \xi \hat{L}^{\frac{1}{1-\sigma }}(t). \label{L_1-sigma_2} \end{equation} Thus, inequality (\ref{Georgiev_inequality}) holds with $\nu =\sigma /\left( 1-\sigma \right) $. Therefore, $\hat{L}(t)$ blows up in a finite time $T^{\ast }$.
On the other hand, from the definition of the function $\hat{L}(t)$ and using inequality (\ref{H_inequality}), for small values of the parameter $\epsilon$, it follows that: \begin{equation} \hat{L}(t)\leq \kappa \left(\left\Vert u\left( t\right) \right\Vert_{p}^{p}\right)^{1-\sigma} \ , \label{inq_L_norm} \end{equation} where $\kappa $ is a positive constant. Consequently, from (\ref{inq_L_norm}) we conclude that the norm $\left\Vert u\left( t\right) \right\Vert _{p}$ of the solution $u$ blows up in the finite time $T^{\ast }$, which implies the desired result. This completes the proof of Theorem \ref{blow_up_}. \end{proof}
\end{document}
\begin{document}
\title {Ordinary and symbolic powers are Golod}
\author {J\"urgen Herzog and Craig Huneke}
\address{J\"urgen Herzog, Fachbereich Mathematik, Universit\"at Duisburg-Essen, Campus Essen, 45117 Essen, Germany} \email{[email protected]}
\address{Craig Huneke, Department of Mathematics, University of Virginia, 1404 University Ave, Charlottesville, VA 22903-2600, United States} \email{[email protected]}
\subjclass[2000]{13A02, 13D40} \keywords{Powers of ideals, Golod rings, Koszul cycles}
\begin{abstract}
Let $S$ be a positively graded polynomial ring over a field of characteristic $0$, and $I\subset S$ a proper graded ideal. In this note it is shown that $S/I$ is Golod if $\partial(I)^2\subset I$. Here $\partial(I)$ denotes the ideal generated by all the partial derivatives of elements of $I$. We apply this result to find large classes of Golod ideals, including powers, symbolic powers, and saturations of ideals.
\end{abstract}
\thanks{Part of the paper was written while the authors were visiting MSRI at Berkeley. They wish to acknowledge the support, the hospitality and the inspiring atmosphere of this institution. The second author was partially supported by NSF grant 1259142.}
\maketitle
\section*{Introduction}
Let $(R,{\frk m})$ be a Noetherian local ring with residue class field $K$, or a standard graded $K$-algebra with graded maximal ideal ${\frk m}$. The formal power series $P_R(t)= \sum_{i \geq 0} \dim_K \Tor_i^{R} (R/{\frk m},R/{\frk m}) t^i$ is called the {\em Poincar\'{e} series} of $R$. Though the ring is Noetherian, in general the Poincar\'{e} series of $R$ is not a rational function. The first example showing that $P_R(t)$ is not necessarily rational was given by Anick \cite{An}. In the meantime more such examples are known, see \cite{Ro} and its references. On the other hand, Serre showed that $P_R(t)$ is coefficientwise bounded above by the rational series
\[
\frac{(1+t)^n}{1-t\sum_{i\geq 1}\dim_K H_i({\bold x};R)t^i},
\] where ${\bold x}=x_1,\ldots,x_n$ is a minimal system of generators of ${\frk m}$ and where $H_i({\bold x};R)$ denotes the $i$th Koszul homology of the sequence ${\bold x}$.
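For a quick illustration of this bound (an aside added for orientation; the computation is classical), consider the hypersurface $R=K[x]/(x^2)$, for which both sides can be written down explicitly:

```latex
% Koszul complex of R = K[x]/(x^2) on the single generator x:
%   0 --> R --> R --> 0,  d(e) = x,
% so H_1(x;R) = ann_R(x) = (x) \cong K and \dim_K H_1(x;R) = 1.
% The minimal free resolution of K over R is
%   \cdots --x--> R --x--> R --> K --> 0,
% hence \Tor_i^R(K,K) = K for every i, and
P_R(t)=\sum_{i\geq 0}t^i=\frac{1}{1-t},\qquad
\frac{(1+t)^1}{1-t\cdot t}=\frac{1+t}{1-t^2}=\frac{1}{1-t}.
```

Here Serre's bound is attained; in the terminology introduced below, hypersurfaces are Golod.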
The ring $R$ is called {\em Golod} if $P_R(t)$ coincides with this upper bound given by Serre. There is also a relative version of Golodness which is defined for local homomorphisms as an obvious extension of the above concept of Golod rings. We refer the reader for details regarding Golod rings and Golod homomorphisms to the survey article \cite{Av} by Avramov. Here we just want to quote the following result of Levin \cite{Le} which says that for any Noetherian local ring $(R,{\frk m})$, the canonical map $R\to R/{\frk m}^k$ is a Golod homomorphism for all $k\gg 0$. It is natural to ask whether in this statement ${\frk m}$ could be replaced by any other proper ideal of $R$. Some very recent results indicate that this question may have a positive answer. In fact, in \cite{HWY} it is shown that if $R$ is regular, then for any proper ideal $I\subset R$ the residue class ring $R/I^k$ is Golod for $k\gg 0$, which, since $R$ is regular, is equivalent to saying that the residue class map $R\to R/I^k$ is a Golod homomorphism for $k\gg 0$. But how large does $k$ have to be chosen to make sure that $R/I^k$ is Golod? In the case that $R$ is the polynomial ring and $I$ is a proper monomial ideal, the surprising answer is that $R/I^k$ is Golod for all $k\geq 2$, as has been shown by Fakhari and Welker in \cite{FW}. The authors show even more: if $I$ and $J$ are proper monomial ideals, then $R/IJ$ is Golod. Computational evidence suggests that $R/IJ$ is Golod for any two proper ideals $I,J$ in a local ring (or graded ideals in a graded ring).
This is supported by a result of Huneke \cite{Hu} which says that for an unramified regular local ring $R$, the residue class ring $R/IJ$ is never Gorenstein, unless $I$ and $J$ are principal ideals: Indeed, being Golod implies in particular that the Koszul homology $H({\bold x};R)$ admits trivial multiplication, while for a Gorenstein ring, by a result of Avramov and Golod \cite{LAv}, the multiplication map induces for all $i$ a non-degenerate pairing $H_i({\bold x};R)\times H_{p-i}({\bold x};R) \to H_p({\bold x};R)$, where $p$ is the top degree in which the Koszul homology does not vanish. In the case that $I$ and $J$ are not necessarily monomial ideals, it is only known that $R/IJ$ is Golod if $IJ=I\sect J$, see \cite{HSt}.
In the present note we consider graded ideals in the graded polynomial ring $S=K[x_1,\ldots,x_n]$ over a field $K$ of characteristic $0$ with $\deg x_i=a_i>0$ for $i=1,\ldots,n$. The main result of Section~\ref{diff} is given in Theorem~\ref{muchbetter} which says that $S/I$ is Golod if $\partial(I)^2\subset I$. Here $\partial(I)$ denotes the ideal which is generated by all partial derivatives of the generators of $I$. This is easily seen to be independent of the generators of $I$ chosen. We call an ideal {\em strongly Golod} if $\partial(I)^2\subset I$. In Section~\ref{classes} it is shown that the class of strongly Golod ideals is closed under several important ideal operations like products, intersections and certain colon ideals. In particular it is shown that for any $k\geq 2$, the $k$th power of a graded ideal, as well as its $k$th symbolic power and its $k$th saturated power, is strongly Golod. We also prove the surprising fact that all the primary components of a graded ideal $I$ which belong to the minimal prime ideals of $I$ are strongly Golod, if $I$ is so. Even more, we prove that every strongly Golod ideal has a primary decomposition in which each primary component is strongly Golod. We are also able to prove that the integral closure of a strongly Golod monomial ideal is again strongly Golod; in particular, the integral closure of $I^k$ is always strongly Golod if $I$ is monomial and $k\geq 2$. We do not know if this last assertion is true for general ideals.
It should be noted that our results, though quite general, do not imply the result of Fakhari and Welker concerning products of monomial ideals. One can easily find products of monomial ideals which are not strongly Golod. Moreover we have to require that the base field is of characteristic $0$. Actually as the proof will show, it is enough to require in Theorem~\ref{muchbetter} that the characteristic of $K$ is big enough compared with the shifts in the graded free resolution of the ideal.
A preliminary version of this paper by the first author was posted on arXiv, proving that powers are Golod. After some discussions with the second author, this final version emerged.
\section{A differential condition for Golodness} \label{diff} Let $K$ be a field of characteristic $0$, and $S=K[x_1,\ldots,x_n]$ the graded polynomial ring over $K$ with $\deg x_i=a_i>0$ for $i=1,\ldots,n$, and let $I\subset S$ be a graded ideal different from $S$. We denote by $\partial(I)$ the ideal which is generated by the partial derivatives $\partial f/\partial x_i$ with $f\in I$ and $i=1,\ldots,n$.
\begin{Theorem} \label{muchbetter} Suppose that $\partial(I)^2\subset I$. Then $S/I$ is Golod. \end{Theorem}
\begin{proof} We set $R=S/I$, and denote by $K(R)$ the Koszul complex of $R$ with respect to the sequence ${\bold x}=x_1,\ldots,x_n$. Furthermore we denote by $Z(R)$, $B(R)$ and $H(R)$ the module of cycles, boundaries and the homology of $K(R)$.
Golod \cite{Go} showed that Serre's upper bound for the Poincar\'{e} series
is reached if and only if all Massey operations of $R$ vanish. By definition, this is the case (see \cite[Def. 5.5 and 5.6]{AKM}), if for each subset $\mathcal{S}$ of
homogeneous elements of $\Dirsum_{i=1}^nH_i(R)$ there exists a function $\gamma$, which is defined on the set of finite
sequences of elements from ${\mathcal S}$ with values in ${\frk m}\dirsum\Dirsum_{i=1}^nK_i(R)$, subject to the following conditions:
\begin{enumerate}
\item[(G1)] if $h\in {\mathcal S}$, then $\gamma(h)\in Z(R)$ and $h=[\gamma(h)]$;
\item[(G2)] if $h_1,\ldots,h_m$ is a sequence in $\mathcal{S}$ with $m>1$, then
\[
\partial\gamma(h_1,\ldots,h_m)=\sum_{\ell=1}^{m-1}\overline{\gamma(h_1,\ldots,h_\ell)}\gamma(h_{\ell+1},\ldots,h_m),
\]
where $\bar{a} = (-1)^{i+1}a$ for $a\in K_i(R)$.
\end{enumerate}
Note that (G2) implies that $\gamma(h_1)\gamma(h_2)$ is a boundary for all $h_1,h_2\in {\mathcal S}$ (which in particular implies that the Koszul homology of a Golod ring has trivial multiplication). Suppose now that for each ${\mathcal S}$ we can choose a function $\gamma$ such that $\gamma(h_1)\gamma(h_2)$ is not only a boundary but that $\gamma(h_1)\gamma(h_2)=0$ for all $h_1,h_2\in {\mathcal S}$. Then obviously we may set $\gamma(h_1,\ldots,h_r)=0$ for all $r\geq 2$, so that in this case (G2) is satisfied and $R$ is Golod.
The proof of Theorem~\ref{muchbetter} follows, once we have shown that $\gamma$ can be chosen such that $\gamma(h_1)\gamma(h_2)=0$ for all $h_1,h_2\in {\mathcal S}$. For the proof of this fact we use the following result from \cite{H}: Let \[ 0\to F_p\to F_{p-1}\to \cdots \to F_1\to F_0\to S/I\to 0 \] be the graded minimal free $S$-resolution of $S/I$, and for each $i$ let $f_{i1},\ldots, f_{ib_i}$ be a homogeneous basis of $F_i$. Let $\varphi_i\: F_i\to F_{i-1}$ denote the chain maps in the resolution, and let \[ \varphi_i(f_{ij})=\sum_{k=1}^{b_{i-1}}\alpha_{jk}^{(i)}f_{i-1,k}, \] where the $\alpha_{jk}^{(i)}$ are homogeneous polynomials.
In \cite[Corollary 2]{H} it is shown that for all $l=1,\ldots,p$ the elements \[ \sum_{1\leq i_1<i_2<\cdots <i_l\leq n}a_{i_1}a_{i_2}\cdots a_{i_l}\sum_{j_2=1}^{b_{l-1}}\cdots \sum_{j_l=1}^{b_1}c_{j_1,\ldots,j_l}\frac{\partial(\alpha_{j_1,j_2}^{(l)},\alpha_{j_2,j_3}^{(l-1)},\ldots, \alpha_{j_l,1}^{(1)})}{\partial(x_{i_1},\ldots,x_{i_l})}e_{i_1}\wedge \cdots \wedge e_{i_l}, \] for $j_1=1,\ldots,b_l$, are cycles of $K(R)$ whose homology classes form a $K$-basis of $H_l(R)$.
Thus we see that a $K$-basis of $H_l(R)$ is given by cycles which are linear combinations of Jacobians determined by the entries $\alpha_{jk}^{(i)}$ of the matrices describing the resolution of $S/I$. The coefficients $c_{j_1,\ldots,j_l}$ which appear in these formulas are rational numbers determined by the degrees of the $\alpha_{jk}^{(i)}$, and the elements $e_{i_1}\wedge \cdots \wedge e_{i_l}$ form the natural $R$-basis of the free module $K_l(R)=\bigwedge^l(\Dirsum_{i=1}^nRe_i)$.
From this result it follows that any homology class of $H_l(R)$ can be represented by a cycle which is a linear combination of Jacobians of the form
\begin{eqnarray} \label{jacobian} \frac{\partial(\alpha_{j_1,j_2}^{(l)},\alpha_{j_2,j_3}^{(l-1)},\ldots, \alpha_{j_l,1}^{(1)})}{\partial(x_{i_1},\ldots,x_{i_l})}. \end{eqnarray} We choose such representatives for the elements of the set ${\mathcal S}$. Thus we may choose the map $\gamma$ in such a way that it assigns to each element of ${\mathcal S}$ a cycle which is a linear combination of Jacobians as in (\ref{jacobian}).
The elements $\alpha_{j_l,1}^{(1)}$ generate $I$. Thus it follows that $\gamma(h)\in Z(R)\sect \partial(I) K(R)$ for all $h\in {\mathcal S}$, and hence our assumption, $\partial(I)^2\subset I$, implies that $\gamma(h_1)\gamma(h_2)=0$ for any two elements $h_1,h_2\in {\mathcal S}$. \end{proof}
\section{Classes of Golod ideals} \label{classes}
We keep the notation and the assumptions of Section~\ref{diff} and apply Theorem~\ref{muchbetter} to exhibit new classes of Golod rings.
It is customary to call a graded ideal $I\subset S$ a {\em Golod ideal}, if $S/I$ is Golod. For convenience, we call a graded ideal $I\subset S$ {\em strongly Golod}, if $\partial(I)^2\subset I$. As we have shown in Theorem~\ref{muchbetter}, any strongly Golod ideal is Golod.
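For a concrete feeling for the condition $\partial(I)^2\subset I$ (a small example added here; it is not used later), consider ideals in $S=K[x,y]$:

```latex
% For I = (x,y)^2 = (x^2, xy, y^2) one has
\partial(I)=(x,y),\qquad \partial(I)^2=(x,y)^2=I,
% so I is strongly Golod. By contrast, for I = (x^2, y^2),
\partial(I)=(x,y),\qquad xy\in\partial(I)^2\setminus I,
% so (x^2,y^2) is not strongly Golod (the condition of
% Theorem~\ref{muchbetter} is only sufficient for Golodness).
```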
As usual we denote by $I^{(k)}$ the symbolic powers and by $\widetilde{I^k}$ the saturated powers of $I$. Recall that $I^{(k)}=\Union_{t\geq 1} I^k: L^t$, where $L$ is the intersection of all associated, non-minimal prime ideals of $I^k$, while $\widetilde{I^k}=\Union_{t\geq 1} I^k: {\frk m}^t$ where ${\frk m}$ is the graded maximal ideal of $S$.
Since $S$ is Noetherian, there exists an integer $t_0$ such that $I^{(k)}= I^k: L^t$ for all $t\geq t_0$. In particular, if we let $J=L^t$ for some $t\geq t_0$, then $I^{(k)}=I^k:J=I^k:J^2$.
The next result shows that strongly Golod ideals behave well with respect to several important ideal operations; in particular combining the various parts of the theorem yields a quite large class of Golod ideals.
\begin{Theorem} \label{twodaysbeforemy71thbirthday} Let $I,J\subset S$ be graded ideals. Then the following hold: \begin{enumerate} \item[(a)] if $I$ and $J$ are strongly Golod, then $I\sect J$ and $IJ$ are strongly Golod; \item[(b)] if $I$ and $J$ are strongly Golod and $\partial(I)\partial(J)\subset I+J$, then $I+J$ is strongly Golod; \item[(c)] if $I$ is strongly Golod, $J$ is arbitrary, and $I: J=I:J^2$, then $I:J$ is strongly Golod; \item[(d)] $I^k$, $I^{(k)}$ and $\widetilde{I^k}$ are strongly Golod for all $k\geq 2$. \end{enumerate} \end{Theorem}
\begin{proof} To simplify notation we write $\partial f$ to mean any one of the partials $\partial f/\partial x_i$.
(a) Let $f,g\in I\sect J$. Since $I$ and $J$ are strongly Golod, it follows that $(\partial f)(\partial g)\in I$ and $(\partial f)(\partial g)\in J$, and hence $(\partial f)(\partial g)\in I\sect J$. This shows that $I\sect J$ is strongly Golod.
Due to the product rule for partial derivatives it follows that $\partial(IJ)\subset \partial(I)J+I\partial(J)$. This implies that \[ \partial(IJ)^2\subset \partial(I)^2J^2 + \partial(I)\partial(J)IJ+\partial(J)^2I^2. \] Obviously, the middle term is contained in $IJ$, while $\partial(I)^2J^2\subset IJ^2\subset IJ$, since $I$ is strongly Golod. Similarly, $\partial(J)^2I^2\subset IJ$. This shows that $\partial(IJ)^2\subset IJ$, and proves that $IJ$ is strongly Golod.
(b) is proved in the same manner as (a).
(c) Let $f,g\in I:J$. Then for all $h_1,h_2\in J$ one has $fh_1\in I$ and $gh_2\in I$. This implies that $(\partial f)h_1+f\partial h_1\in \partial(I)$ and $(\partial g)h_2+g\partial h_2\in \partial(I)$. Thus \begin{eqnarray*} &&((\partial f)h_1+f\partial h_1)((\partial g)h_2+g\partial h_2)\\ &=&(\partial f)(\partial g)h_1h_2+(\partial f)(\partial h_2)gh_1+(\partial g)(\partial h_1)fh_2 +fg (\partial h_1)(\partial h_2)\in (\partial I)^2\subset I. \end{eqnarray*} Since $gh_1,fh_2\in I$, it follows that $(\partial f)(\partial h_2)gh_1+(\partial g)(\partial h_1)fh_2\in I$. Moreover, $Jfg (\partial h_1)(\partial h_2)\subset I$. Hence $J(\partial f)(\partial g)h_1h_2\subset I$. Since $h_1,h_2$ were arbitrary in $J$, it then follows that $(\partial(I:J))^2\subset I:J^3=I:J$, as desired.
(d) Let $k\geq 2$. Then $\partial(I^k)\subset I^{k-1}\partial(I)$. It follows that $\partial(I^k)^2\subset I^{2k-2}\partial(I)^2\subset I^k$. Thus $I^k$ is strongly Golod.
As explained above, for appropriately chosen $J$ we have $I^{(k)}=I^k:J=I^k:J^2$. Now it follows from (c) that $I^{(k)}$ is strongly Golod.
The same arguments show that $\widetilde{I^k}$ is strongly Golod. \end{proof}
\begin{Corollary} \label{mm} Let $P$ be a homogeneous prime ideal of $S$ containing $I$, a strongly Golod ideal. Then $I+P^k$ is strongly Golod for all $k\geq 2$. \end{Corollary}
\begin{proof} First note that $\partial(I)\subset P$: otherwise, since $P$ is prime, $\partial(I)^2$ would not be contained in $P$, and hence not in $I\subset P$, a contradiction. We also observe that $\partial(P^k)\subset P^{k-1}$. It follows that $(\partial I)(\partial P^k)\subset P^k\subset I+P^k$. Thus the assertion follows from Theorem~\ref{twodaysbeforemy71thbirthday}(b) and (d) (we apply (d) to ensure that $P^k$ is strongly Golod). \end{proof}
Let $R=S/I$ and ${\frk n}$ the graded maximal ideal of $R$, and suppose that $I$ is strongly Golod. Then Corollary~\ref{mm} implies that $R/{\frk n}^k$ is Golod for all $k\geq 2$. Also note that by the theorem of Zariski-Nagata (see, e.g., \cite[p.143]{N}), the fact that $\partial(I)$ is contained in every homogeneous prime containing $I$ implies that $I$ is contained in the second symbolic power of every such prime. In particular, radical ideals are never strongly Golod, though they may be Golod.
\begin{Corollary} \label{components} The (uniquely determined) primary components belonging to the minimal prime ideals of a strongly Golod ideal are strongly Golod. \end{Corollary}
\begin{proof} Let $I$ be strongly Golod and $P_1,\ldots,P_s$ its minimal prime ideals. Let $Q_i$ be the primary component of $I$ with $\Ass(S/Q_i)=\{P_i\}$, and set $L_i=\Sect_{j\neq i}P_j$. Then there exists an integer $r>1$ such that $Q_i=I:L_i^r=I:L_i^{2r}$. It follows from Theorem~\ref{twodaysbeforemy71thbirthday}(c) that $Q_i$ is strongly Golod. \end{proof}
\begin{Corollary} \label{components1} Every strongly Golod ideal has a primary decomposition with strongly Golod primary ideals. \end{Corollary}
\begin{proof} Let $P$ be an associated prime of $I$. By Corollary \ref{mm}, $I+P^k$ is strongly Golod for all $k\geq 2$, and then by Corollary \ref{components} the unique $P$-primary minimal component of $I+P^k$ is also strongly Golod. We denote this component by $P_k$. The Corollary now follows from the general fact that $I = \cap P_k$ for large $k$, where the intersection is taken over all associated primes of $I$. To prove this, fix a primary decomposition of $I$, say $I = \cap Q_P$, where $Q_P$ is $P$-primary, and the intersection runs over all associated primes of $I$. It suffices to prove that $P_k\subset Q_P$ for large $k$, since then $I\subset \cap P_k\subset \cap Q_P = I$. To check that $P_k\subset Q_P$, we may localize at $P$, since both of these ideals are $P$-primary. Then the claim is clear. \end{proof}
Statement (d) of Theorem~\ref{twodaysbeforemy71thbirthday} can be substantially generalized as follows.
\begin{Theorem} \label{inbetween} Let $I\subset S$ be a homogeneous ideal and suppose that $(I^{(k-1)})^2\subset I^k$ for some $k\geq 2$. Then all homogeneous ideals $J$ with $I^k\subset J\subset I^{(k)}$ are strongly Golod. In particular, any homogeneous ideal $J$ with $I^2\subset J\subset I^{(2)}$ is strongly Golod. \end{Theorem}
\begin{proof} We use a theorem of Zariski-Nagata \cite[p.143]{N} according to which $f\in S$ belongs to $I^{(k)}$ if and only if all partials of $f$ of order $<k$ belong to $I$. It follows from this characterization of the $k$th symbolic power of $I$ that $\partial(I^{(k)})\subset I^{(k-1)}$.
Now assume that $I^k\subset J\subset I^{(k)}$. Then \[ \partial(J)^2\subset \partial(I^{(k)})^2\subset (I^{(k-1)})^2\subset I^k\subset J. \] This proves that $J$ is strongly Golod. \end{proof}
\begin{Example} {\rm Let $X$ be a generic set of points in $\mathbb P^2$, and let $I$ be the ideal of polynomials vanishing at $X$. Bocci and Harbourne \cite{BH} proved that $I^{(3)}\subset I^2$. In this case, $(I^{(3)})^2\subset I^4$, so Theorem \ref{inbetween} applies to conclude that every homogeneous ideal $J$ between $I^3$ and $I^{(3)}$ is strongly Golod.} \end{Example}
\begin{Example} {\rm The hypothesis of Theorem \ref{inbetween} is certainly not always satisfied. For example, let $X$ be a generic $4$ by $4$ matrix, and let $I$ be the ideal generated by the $3$ by $3$ minors of $X$. It is well-known that $I$ is a height 4 prime ideal. Moreover, if $\Delta$ is the determinant of $X$, then $\Delta\in I^{(2)}$. However, $\Delta^2$ cannot be in $I^3$ since this element has degree $8$, and the generators of $I^3$ have degree $9$. Thus $(I^{(2)})^2$ is not contained in $I^3$.} \end{Example}
Another case in which the hypotheses of Theorem~\ref{inbetween} can be satisfied is the following: Let $G$ be a finite simple graph on the vertex set $[n]$. A vertex cover of $G$ is a subset $C\subset [n]$ such that $C\sect \{i,j\}\neq \emptyset$ for all edges $\{i,j\}$ of $G$. The vertex cover ideal $I$ of $G$ is the ideal generated by all monomials $\prod_{i\in C}x_i$ in $S=K[x_1,\ldots,x_n]$ where $C$ is a vertex cover of $G$. It is obvious that \[ I= \Sect_{\{i,j\}\in E(G)}(x_i,x_j), \] where $E(G)$ denotes the set of edges of $G$. It follows that $I^{(k)}=\Sect_{\{i,j\}\in E(G)}(x_i,x_j)^k$ for all $k$.
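The objects in this paragraph are easy to experiment with. The following sketch (not part of the paper; the function name is ours) enumerates the inclusion-minimal vertex covers of the $5$-cycle by brute force, recovering the five generators of its vertex cover ideal:

```python
from itertools import combinations

def minimal_vertex_covers(n_vertices, edges):
    """Enumerate the inclusion-minimal vertex covers of a graph by brute force."""
    verts = range(1, n_vertices + 1)
    covers = []
    # iterate by increasing size, so any superset of a found cover is discarded
    for r in range(n_vertices + 1):
        for cand in combinations(verts, r):
            cset = set(cand)
            if all(cset & set(e) for e in edges):
                if not any(set(c) <= cset for c in covers):
                    covers.append(cand)
    return covers

# the 5-cycle on the vertex set {1,...,5}
edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
covers = minimal_vertex_covers(5, edges)
print(covers)  # five covers of size 3; (1, 3, 5) corresponds to u_1 = x_1 x_3 x_5
```

The five covers are exactly the complements of the five maximal independent sets of the cycle, matching the generators $u_i$ described below for odd cycles.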
\begin{Proposition} \label{onecase} Let $I$ be the vertex cover ideal of a finite simple graph $G$, and suppose that $(I^{(2)})^2\subset I^3$. Then $(I^{(k-1)})^2\subset I^k$ for all $k$. In particular, every homogeneous ideal $J$ between $I^k$ and $I^{(k)}$ is strongly Golod. \end{Proposition}
In \cite[Theorem~5.1]{HHT} it is shown that the graded $S$-algebra $\Dirsum_{k\geq 0}I^{(k)}t^k\subset S[t]$ is generated in degrees $1$ and $2$. Thus the proposition follows from the following more general statement:
\begin{Proposition} \label{onecase2} Let $I$ be an unmixed ideal of height two, and suppose that $\Dirsum_{k\geq 0}I^{(k)}t^k$ is generated in degrees $1$ and $2$, and $(I^{(2)})^2\subset I^3$. Then $(I^{(k-1)})^2\subset I^k$ for all $k$. \end{Proposition}
\begin{proof} The assumption that $\Dirsum_{k\geq 0}I^{(k)}t^k\subset S[t]$ is generated in degree $1$ and $2$ is equivalent to the statement that for all $p\geq 0$ one has \[ I^{(2p+1)}=I(I^{(2)})^{p}\quad \text{and}\quad I^{(2p)} = (I^{(2)})^{p}. \] Since $I$ is an ideal of height $2$ it follows by a theorem of Ein, Lazarsfeld, and Smith \cite{ELS} that $I^{(2p)}\subset I^p$ for all $p\geq 0$. This implies that $(I^{(2)})^p\subset I^p$ for all $p$, since $(I^{(2)})^p\subset I^{(2p)}$.
We want to prove that $(I^{(k-1)})^2\subset I^k$. We consider the two cases where $k$ is even or odd.
First assume that $k$ is even, say $k=2p+2$ with $p\geq 0$. Then \[ (I^{(2p+1)})^2=I^{2}((I^{(2)})^p)^2\subset I^{2}(I^{p})^2=I^{2p+2}. \]
Next assume that $k$ is odd, say $k=2p+1$. Then $(I^{(2p)})^2=(I^{(2)})^{2p}$. Thus \[ (I^{(2p)})^2=((I^{(2)})^{2})^p\subset (I^3)^p\subset I^{2p+1}. \]
Here we used that $(I^{(2)})^{2}\subset I^3$. \end{proof}
Proposition~\ref{onecase} is trivial if $G$ does not contain an odd cycle, in other words, if $G$ is bipartite, because in this case it is known \cite[Theorem~5.1(b)]{HHT} that the symbolic and ordinary powers of $I$ coincide. The first non-trivial case is that when $G$ is an odd cycle, say on the vertex set $[n]$. In that case it is known, and easy to see, that $I$ is generated by the monomials $u_i=\prod_{j=0}^{(n-1)/2}x_{i+2j}$, where for simplicity of notation $x_i=x_{i-n}$ if $i>n$. It is also known that $I^{(2)}=I^2+(u)$, where $u=\prod_{i=1}^nx_i$, see for example \cite[Proposition~5.3]{HHT}. It follows that $(I^{(2)})^2=I^4+I^2(u)+(u^2)$. Since $u\in I$ we see that $I^2(u)\subset I^3$, and since $u_1u_2u_3$ divides $u^2$ we also have $(u^2)\subset I^3$. So altogether, $(I^{(2)})^2\subset I^3$ for the vertex cover ideal of any odd cycle. We do not know whether the same inclusion holds for any graph containing an odd cycle.
\begin{Example} {\em There exist squarefree monomial ideals $I$ for which $(I^{(2)})^2$ is not contained in $I^3$. Indeed, let $I_{n,d}$ be the ideal generated by all squarefree monomials of degree $d$ in $n$ variables, and let $2<d<n$. Then $u=\prod_{i=1}^{d+1}x_i\in I_{n,d}^{(2)}$, but $u^2\not \in I_{n,d}^3$, because $I_{n,d}^3$ is generated in degree $3d$, while $\deg u^2=2(d+1)<3d$. } \end{Example}
It is immediate that a monomial ideal $I$ is strongly Golod if for all minimal monomial generators $u,v \in I$ and all integers $i$ and $j$ such that $x_i|u$ and $x_j|v$ it follows that $uv/x_ix_j\in I$. Fakhari and Welker showed \cite{FW} that $IJ$ is Golod for any two proper monomial ideals. However a product of proper monomial ideals need not be strongly Golod, as the following example shows: Let $I=(x,y)$ and $J=(z)$. Then $IJ=(xz,yz)$ and $(xz)(yz)/xy=z^2$ does not belong to $IJ$.
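The generator test just stated is mechanical enough to check by machine. The following sketch (an aside; the helper names are ours) encodes monomials as exponent vectors and verifies both the failure for $IJ=(xz,yz)$ and the positive case $I=(x,y)^2$:

```python
from itertools import product

def divides(g, m):
    """Does the monomial g (an exponent vector) divide the monomial m?"""
    return all(gi <= mi for gi, mi in zip(g, m))

def in_ideal(m, gens):
    """Is the monomial m in the monomial ideal generated by gens?"""
    return any(divides(g, m) for g in gens)

def strongly_golod(gens, nvars):
    """Check u*v/(x_i*x_j) in I for all generators u, v with x_i | u and x_j | v."""
    for u, v in product(gens, repeat=2):
        for i in range(nvars):
            if u[i] == 0:
                continue
            for j in range(nvars):
                if v[j] == 0:
                    continue
                w = [ui + vi for ui, vi in zip(u, v)]
                w[i] -= 1
                w[j] -= 1
                if not in_ideal(tuple(w), gens):
                    return False
    return True

# IJ = (xz, yz) in K[x,y,z]: (xz)(yz)/(xy) = z^2 is not in IJ
print(strongly_golod([(1, 0, 1), (0, 1, 1)], 3))  # False
# I = (x,y)^2 = (x^2, xy, y^2) in K[x,y] is strongly Golod
print(strongly_golod([(2, 0), (1, 1), (0, 2)], 2))  # True
```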
We denote by $\bar{I}$ the integral closure of an ideal, and use the above characterization of strongly Golod monomial ideals to show:
\begin{Proposition} \label{integral} Let $I$ be a strongly Golod monomial ideal. Then $\bar{I}$ is strongly Golod. In particular, for any monomial ideal $I$, the ideals $\overline{I^k}$ are strongly Golod for all $k\geq 2$. \end{Proposition}
\begin{proof}
The integral closure of the monomial ideal $I$ is again a monomial ideal, and a monomial $u\in S$ belongs to $\bar{I}$ if and only if there exists an integer $r>0$ such that $u^r\in I^r$, see \cite[Proposition 1.4.2]{SwaHu} and the remarks preceding this proposition.
Let $u,v\in \bar{I}$ be monomials and suppose that $x_i|u$ and $x_j|v$. There exist integers $s,t>0$ such that $u^s\in I^s$ and $v^t\in I^t$. Replacing both $s$ and $t$ by $2st$, we may assume that $s=t$ and that both are even.
We observe that for a monomial $w$ with $x_k|w$ and $w^r\in I^r$ it follows that $(w/x_k)^r\in I^{r/2}$ if $r$ is even. Indeed, $w^r=m_1\cdots m_r$ with $m_i\in I$. Let $d_i$ be the exponent of $x_k$ in $m_i$. We may assume that $d_i=0$ for $i=1,\ldots, a$, $d_i=1$ for $i=a+1,\ldots,b$ and $d_i>1$ for $i>b$. Then \[ (w/x_k)^r= (m_1\cdots m_a)((m_{a+1}/x_k)\cdots (m_b/x_k))((m_{b+1}\cdots m_r)/x_k^d), \] where $d=\sum_{i=b+1}^rd_i$.
Since $I$ is strongly Golod, the product of any two of the factors $m_{a+1}/x_k,\ldots,m_b/x_k$ belongs to $I$, and hence it follows from this presentation of $(w/x_k)^r$ that \[ (w/x_k)^r\in I^aI^{\lfloor (b-a)/2\rfloor}=I^{\lfloor(a+b)/2\rfloor}, \] where $\lfloor c\rfloor$ denotes the lower integer part of the real number $c$.
Comparing the exponent of $x_k$ on both sides of this presentation gives $r=(b-a)+d$, while $d\geq 2(r-b)$ because $d_i\geq 2$ for $i>b$. Substituting $r-b=d-a$ into the latter inequality yields $2a\geq d$, and hence $a+b\geq (b-a)+d=r$, which implies that $(w/x_k)^r\in I^{r/2}$, as desired.
Now it follows that \[ (uv/x_ix_j)^{s+t}=(u/x_i)^{s+t}(v/x_j)^{s+t}\in I^{s+t}. \] This shows that $uv/x_ix_j\in\bar{I}$, and proves that $\bar{I}$ is strongly Golod. \end{proof}
\end{document} |
\begin{document}
\title[$A$-hypergeometric series associated to a lattice polytope]{$A$-hypergeometric series
associated to a lattice polytope with a unique interior lattice point} \author{Alan Adolphson} \address{Department of Mathematics\\ Oklahoma State University\\ Stillwater, Oklahoma 74078} \email{[email protected]} \author{Steven Sperber} \address{School of Mathematics\\ University of Minnesota\\ Minneapolis, Minnesota 55455} \email{[email protected]} \date{\today} \keywords{} \subjclass{} \begin{abstract} We associate to lattice points ${\bf a}_0,{\bf a}_1,\dots,{\bf a}_N$ in ${\mathbb Z}^n$ an
$A$-hyper\-geometric series $\Phi(\lambda_0,\dots,\lambda_N)$ with integer coefficients. If
${\bf a}_0$ is the unique interior lattice point of the convex hull of ${\bf a}_1,\dots,{\bf a}_N$,
then for every prime $p\neq 2$ the ratio $\Phi(\lambda)/\Phi(\lambda^p)$ has a $p$-adic analytic
continuation to a closed unit polydisk minus a neighborhood of a hypersurface. \end{abstract} \maketitle
\section{Introduction}
Let ${\bf a}_0,{\bf a}_1,\dots,{\bf a}_N\in{\mathbb Z}^n$ and put ${\bf a}_j = (a_{j1},\dots,a_{jn})$. For each $j=0,\dots,N$, let $\hat{\bf a}_j = (1,{\bf a}_j)\in{\mathbb Z}^{n+1}$ and put $A=\{\hat{\bf a}_j\}_{j=0}^N$. We let $x_0,\dots,x_n$ be the coordinates on~${\mathbb R}^{n+1}$, so that $\hat{\bf a}_{j0} = 1$ while $\hat{\bf a}_{jk} = {\bf a}_{jk}$ for $k=1,\dots,n$. Let $L$ be the module of relations among the $\hat{\bf a}_j$: \[ L=\bigg\{l=(l_0,\dots,l_N)\in{\mathbb Z}^{N+1}\mid \sum_{j=0}^N l_j\hat{\bf a}_j = {\bf 0}\bigg\}. \] We consider the $A$-hypergeometric system with parameter $-\hat{\bf a}_0$. This is the system of partial differential equations in variables $\lambda_0,\dots,\lambda_N$ consisting of the operators \[ \Box_l = \prod_{l_i>0} \bigg(\frac{\partial}{\partial \lambda_i}\bigg)^{l_i} - \prod_{l_i<0} \bigg(\frac{\partial}{\partial \lambda_i}\bigg)^{-l_i} \] for $l\in L$ and the operators \[ Z_i = \begin{cases} \sum_{j=0}^N a_{ji}\lambda_j\frac{\partial}{\partial \lambda_j} + a_{0i} & \text{for $i=1,\dots,n$,} \\ \sum_{j=0}^N \lambda_j\frac{\partial}{\partial \lambda_j} + 1 & \text{for $i=0$.} \end{cases} \]
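For a small configuration the lattice $L$ can be found by brute force. The following sketch (an aside, not part of the paper; it previews the Dwork-family example of the Introduction with $n=2$) searches a box of integer vectors for relations among the $\hat{\bf a}_j$:

```python
from itertools import product

# Toy configuration (n = 2 Dwork-type example):
# a_0 = (1,1), a_1 = (2,0), a_2 = (0,2), so that
# a_hat_0 = (1,1,1), a_hat_1 = (1,2,0), a_hat_2 = (1,0,2).
a_hat = [(1, 1, 1), (1, 2, 0), (1, 0, 2)]

def is_relation(l):
    """Check that sum_j l_j * a_hat_j = 0 in Z^3."""
    return all(sum(lj * aj[k] for lj, aj in zip(l, a_hat)) == 0
               for k in range(3))

# brute-force search over a small box of integer vectors
relations = [l for l in product(range(-4, 5), repeat=3) if is_relation(l)]
print(relations)  # multiples of (-2, 1, 1) within the box
```

Every relation found is an integer multiple of $(-2,1,1)$, in agreement with $L_+=\{(-nl,l,\dots,l)\}$ for the Dwork family with $n=2$.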
Let $L_+ = \{ l\in L\mid l_i\geq 0\;\text{for $i=1,\dots,N$}\}.$ Taking $v=(-1,0,\dots,0)$ in \cite[Eq.~(3.36)]{SST} and applying \cite[Proposition 3.4.13]{SST} shows that this system has a formal solution $\lambda_0^{-1}\Phi(\lambda)$, where \begin{equation} \Phi(\lambda) = \sum_{l\in L_+} \frac{(-1)^{-l_0}(-l_0)!}{l_1!\cdots l_N!}\prod_{i=0}^N \lambda_i^{l_i}. \end{equation} Since $\sum_{i=0}^N l_i=0$ for $l\in L$, we can rewrite this as \begin{equation} \Phi(\lambda) = \sum_{l\in L_+} \frac{(-1)^{\sum_{i=1}^N l_i} (l_1+\cdots+l_N)!}{l_1!\cdots l_N!} \prod_{i=1}^N \bigg(\frac{\lambda_i}{\lambda_0}\bigg)^{l_i}, \end{equation} which shows that the coefficients of this series lie in ${\mathbb Z}$. This implies that for each prime number $p$, the series $\Phi(\lambda)$ converges $p$-adically on the set
\[ {\mathcal D} = \{(\lambda_0,\dots,\lambda_N)\in\Omega^{N+1}\mid |\lambda_i/\lambda_0|<1 \text{ for }i=1,\dots,N\}, \]
where $\Omega$ denotes the completion of an algebraic closure of ${\mathbb Q}_p$, and $\Phi(\lambda)$ takes unit values there. In particular, the ratio $\Phi(\lambda)/\Phi(\lambda^p)$ is an analytic function on ${\mathcal D}$ and takes unit values there.
{\bf Remark:} Rewrite the series $\Phi(\lambda)$ according to powers of $-\lambda_0$: \begin{equation} \Phi(\lambda) = \sum_{k=0}^\infty \bigg(\sum_{\substack{l_1,\dots,l_N\in{\mathbb Z}_{\geq 0}\\ l_1\hat{\bf a}_1+\cdots+l_N\hat{\bf a}_N = k\hat{\bf a}_0}} \binom{k}{l_1,\dots,l_N}\lambda_1^{l_1}\cdots \lambda_N^{l_N}\bigg)(-\lambda_0)^{-k}. \end{equation} It is easy to see that the coefficient of $(-\lambda_0)^{-k}$ in this expression is the coefficient of $x^{k{\bf a}_0}$ in the Laurent polynomial $\big(\sum_{i=1}^N \lambda_ix^{{\bf a}_i}\big)^k$. In particular, if ${\bf a}_0 = 0$, then it is the constant term of this Laurent polynomial. The series $\Phi(\lambda)$ may thus be specialized to the hypergeometric series considered in Samol and van Straten\cite{SvS}.
Define a truncation of $\Phi(\lambda)$ by \[ \Phi_1(\lambda) = \sum_{\substack{l\in L_+\\ l_1+\cdots+l_N\leq p-1}} \frac{(-1)^{\sum_{i=1}^N l_i} (l_1+\cdots+l_N)!}{l_1!\cdots l_N!} \prod_{i=1}^N \bigg(\frac{\lambda_i}{\lambda_0}\bigg)^{l_i} \] and let
\[ {\mathcal D}_+ = \{(\lambda_0,\dots,\lambda_N)\mid |\lambda_i/\lambda_0|\leq 1\text{ for } i=1, \dots,N \text{ and }
|\Phi_1(\lambda)|= 1\}. \] Note that ${\mathcal D}_+$ properly contains ${\mathcal D}$. Let~$\Delta$ be the convex hull of the set $\{{\bf a}_1,\dots,{\bf a}_N\}$. Our main result is the following theorem.
\begin{theorem} Suppose that ${\bf a}_0$ is the unique interior lattice point of $\Delta$. Then for every prime number $p\neq 2$ the ratio $\Phi(\lambda)/\Phi(\lambda^p)$ extends to an analytic function on~${\mathcal D}_+$. \end{theorem}
{\bf Remark:} If we specialize $\lambda_1,\dots,\lambda_N$ to elements of $\Omega$ of absolute value
$\leq 1$, Equation~(1.3) allows us to regard $\Phi$ as a function of $t=-1/\lambda_0$ for $|t|<1$.
By Theorem~1.4, this function of $t$ continues analytically to the region where $|t|\leq 1$ and
$|\Phi_1(t)| =1$ (for $p\neq 2$). When $\lambda_1,\dots,\lambda_N\in{\mathbb Z}_p$, this result on analytic continuation was proved recently by Mellit and Vlasenko\cite{MV} for all primes $p$. We believe that the restriction $p\neq 2$ in Theorem 1.4 is an artifact of our method and that that result is in fact true for all $p$.
{\bf Example} (the Dwork family): Let $N=n$ and let ${\bf a}_i = (0,\dots,n,\dots,0)$, where the $n$ occurs in the $i$-th position, for $i=1,\dots,n$. Let ${\bf a}_0 = (1,\dots,1)$. Then \[ L_+=\{(-nl,l,\dots,l)\mid l\in{\mathbb Z}_{\geq 0}\} \] and \[ \Phi(\lambda) = \sum_{l=0}^{\infty} \frac{(-1)^{nl}(nl)!}{(l!)^n} \bigg(\frac{\lambda_1\cdots \lambda_n}{\lambda_0^n}\bigg)^l. \] Thus Theorem~1.4 implies that for every prime $p\neq 2$ the ratio $\Phi(\lambda)/\Phi(\lambda^p)$ extends to an analytic function on ${\mathcal D}_+$. We note that J.-D. Yu \cite{Y} has given a treatment of analytic continuation for the Dwork family using a more cohomological approach.
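The integrality of the coefficients of $\Phi(\lambda)$ is visible directly in this example. The following sketch (an aside; the function name is ours) computes the first coefficients $(-1)^{nl}(nl)!/(l!)^n$ for small $n$:

```python
from math import factorial

def dwork_coeff(n, l):
    """Coefficient of ((lambda_1 ... lambda_n)/lambda_0^n)^l in Phi, Dwork family."""
    sign = (-1) ** (n * l)
    # (nl)! / (l!)^n is a multinomial coefficient, so the division is exact
    return sign * (factorial(n * l) // factorial(l) ** n)

print([dwork_coeff(2, l) for l in range(4)])  # [1, 2, 6, 20]
print([dwork_coeff(3, l) for l in range(4)])  # [1, -6, 90, -1680]
```

For $n=2$ these are the central binomial coefficients $\binom{2l}{l}$, consistent with the Remark above identifying the coefficients with constant terms of powers of a Laurent polynomial.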
In \cite{D2}, Dwork gave a contraction mapping argument to prove $p$-adic analytic continuation of a ratio $G(\lambda)/G(\lambda^p)$ of normalized Bessel functions. In \cite{AS}, we modified Dwork's approach to avoid computations of $p$-adic cohomology for exponential sums, which allowed us to greatly extend his analytic continuation result. The proof we give here follows the method of \cite{AS}.
Special values of the ratio $\Phi(\lambda)/\Phi(\lambda^p)$ are related to a unit root of the zeta function of the hypersurface $\sum_{i=0}^N \lambda_ix^{{\bf a}_i} = 0$. We hope to return to this connection in a future article. We believe that the methods of this paper will extend to complete intersections as well.
For the remainder of this paper we assume that $p\neq 2$. This hypothesis is needed in Section~2 to define the endomorphism $\alpha^*$ of the space $S$.
\section{Contraction mapping}
We begin by constructing a mapping $\beta$ on a certain space of formal series whose coefficients are $p$-adic analytic functions. The hypothesis that ${\bf a}_0$ is the unique interior point of $\Delta$ will imply that $\beta$ is a contraction mapping.
Let $\Omega$ be the completion of an algebraic closure of ${\mathbb Q}_p$ and put \[ R = \bigg\{ \xi(\lambda) = \sum_{\nu\in({\mathbb Z}_{\geq 0})^N} c_\nu\bigg(\frac{\lambda_1}{\lambda_0} \bigg)^{\nu_1}\cdots \bigg(\frac{\lambda_N}{\lambda_0}\bigg)^{\nu_N}\mid \text{$c_\nu\in\Omega$ and
$\{|c_\nu|\}_\nu$ is bounded}\bigg\}. \] Let $R'$ be the set of functions on ${\mathcal D}_+$ that are uniform limits of sequences of rational functions in the $\lambda_i/\lambda_0$ that are defined on ${\mathcal D}_+$. The series in the ring $R$ converge and are bounded on ${\mathcal D}$, and $R'$ is a subring of $R$. We define a norm on $R$ by setting
\[ |\xi| = \sup_{\lambda\in{\mathcal D}} |\xi(\lambda)|. \]
Note that for $\xi\in R'$ one has $\sup_{\lambda\in{\mathcal D}}|\xi(\lambda)| =
\sup_{\lambda\in{\mathcal D}_+} |\xi(\lambda)|$. Both $R$ and $R'$ are complete in this norm.
Let $C$ be the real cone generated by the elements of $A$, let $M=C\cap{\mathbb Z}A$, and let $M^\circ\subset M$ be the subset consisting of those points that do not lie on any face of~$C$. Let $\pi\in\Omega$ satisfy $\pi^{p-1}=-p$ and let $S$ be the $\Omega$-vector space of formal series \[ S = \bigg\{\xi(\lambda,x) = \sum_{\mu\in M^{\circ}} \xi_\mu(\lambda) ({\pi}\lambda_0)^{-\mu_0}x^{-\mu} \mid
\text{$\xi_\mu(\lambda)\in R$ and $\{|\xi_\mu|\}_\mu$ is bounded}\bigg\}. \] Let $S'$ be defined analogously with the condition ``$\xi_\mu(\lambda)\in R$'' being replaced by ``$\xi_\mu(\lambda)\in R'$''. Define a norm on $S$ by setting
\[ |\xi(\lambda,x)| = \sup_\mu\{|\xi_\mu|\}. \] Both $S$ and $S'$ are complete under this norm.
Define $\theta(t) = \exp(\pi(t-t^p)) = \sum_{i=0}^\infty b_it^i$. One has (Dwork\cite[Sec\-tion~4a)]{D1}) \begin{equation} {\rm ord}\: b_i\geq \frac{i(p-1)}{p^2}. \end{equation} Let \[ F(\lambda,x) = \prod_{i=0}^N\theta(\lambda_ix^{\hat{\bf a}_i}) = \sum_{\mu\in M} B_\mu(\lambda)x^\mu, \] where \[ B_\mu(\lambda) = \sum_{\nu\in({\bf Z}_{\geq 0})^{N+1}} B^{(\mu)}_\nu\lambda^\nu \] with \begin{equation} B^{(\mu)}_\nu = \begin{cases} \prod_{i=0}^N b_{\nu_i} & \text{if $\sum_{i=0}^N \nu_i\hat{\bf a}_i = \mu$,} \\ 0 & \text{if $\sum_{i=0}^N \nu_i\hat{\bf a}_i\neq\mu$.} \end{cases} \end{equation} The equation $\sum_{i=0}^N \nu_i\hat{\bf a}_i = \mu$ has only finitely many solutions $\nu\in({\mathbb Z}_{\geq 0})^{N+1}$, so $B_\mu$ is a polynomial in the $\lambda_i$. Furthermore, all solutions of this equation satisfy $\sum_{i=0}^N \nu_i = \mu_0$, so $B_\mu$ is homogeneous of degree $\mu_0$. We thus have \begin{equation} B_\mu(\lambda_0,\dots,\lambda_N) = \lambda_0^{\mu_0}B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/ \lambda_0). \end{equation} Let $\tilde{\pi}\in\Omega$ satisfy ${\rm ord}\;\tilde{\pi} = (p-1)/p^2$.
\begin{lemma} One has $B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\in R'$ and
\[ |B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq |\tilde{\pi}^{\mu_0}|. \] \end{lemma}
\begin{proof} The first assertion is clear since $B_\mu$ is a polynomial. We have \[ B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)=\sum_{\nu\in({\bf Z}_{\geq 0})^{N+1}} B^{(\mu)}_\nu(\lambda_1/\lambda_0)^{\nu_1}\cdots(\lambda_N/\lambda_0)^{\nu_N}. \] Using (2.1) and (2.2) gives \[ {\rm ord}\: B^{(\mu)}_\nu\geq\sum_{i=0}^N {\rm ord}\: b_{\nu_i}\geq \sum_{i=0}^N \frac{\nu_i(p-1)}{p^2}= \mu_0\frac{p-1}{p^2}, \] which implies the second assertion of the lemma. \end{proof}
Using (2.3) we write \begin{equation} F(\lambda,x) = \sum_{\mu\in M} {B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\lambda_0^{\mu_0} x^\mu. \end{equation} Let \[ \xi(\lambda,x) = \sum_{\nu\in M^\circ} \xi_\nu(\lambda)({\pi}\lambda_0)^{-\nu_0}x^{-\nu}\in S. \] We claim that the product $F(\lambda,x)\xi(\lambda^p,x^p)$ is well-defined as a formal series in $x$. Formally we have \[ F(\lambda,x)\xi(\lambda^p,x^p) = \sum_{\rho\in{\mathbb Z}^{n+1}} \zeta_\rho(\lambda)\lambda_0^{-\rho_0} x^{-\rho}, \] where \begin{equation} \zeta_\rho(\lambda) = \sum_{\substack{\mu\in M,\nu\in M^\circ \\ \mu-p\nu = -\rho}} {\pi}^{-\nu_0} {B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_\nu(\lambda^p). \end{equation} By Lemma 2.4, we have \begin{equation}
|{\pi}^{-\nu_0}B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_\nu(\lambda^p)|\leq
|\tilde{\pi}^{\mu_0}\pi^{-\nu_0}|\cdot|\xi(\lambda,x)|. \end{equation} Since $\mu=p\nu-\rho$, we have \begin{align} {\rm ord}\;\tilde{\pi}^{\mu_0}\pi^{-\nu_0} &= (p\nu_0-\rho_0)\frac{p-1}{p^2} - \frac{\nu_0}{p-1} \nonumber \\ &= \nu_0\bigg(\frac{p-1}{p} - \frac{1}{p-1}\bigg) - \rho_0\frac{p-1}{p^2}. \end{align} Since $(p-1)/p-1/(p-1)>0$ (we are using here our hypothesis that $p\neq 2$), this shows that $\tilde{\pi}^{\mu_0}\pi^{-\nu_0}\to 0$ as $\nu\to\infty$, so the series (2.6) converges to an element of~$R$. The same argument shows that if $\xi(\lambda,x)\in S'$, then the series (2.6) converges to an element of $R'$.
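The positivity invoked here is elementary but easy to misread; the following sketch (ours, using exact rational arithmetic) confirms that the coefficient of $\nu_0$ in (2.8) is positive precisely when $p\neq 2$:

```python
from fractions import Fraction

def nu0_coefficient(p):
    # Coefficient of nu_0 in (2.8): (p-1)/p - 1/(p-1), computed exactly.
    return Fraction(p - 1, p) - Fraction(1, p - 1)

assert nu0_coefficient(2) < 0                         # the estimate fails for p = 2
assert all(nu0_coefficient(p) > 0 for p in (3, 5, 7, 11, 13))
```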
Let $\gamma^\circ$ be the truncation map \[ \gamma^\circ\bigg(\sum_{\rho\in{\mathbb Z}^{n+1}} \zeta_\rho(\lambda)\lambda_0^{-\rho_0}x^{-\rho}\bigg) = \sum_{\rho\in M^\circ} \zeta_\rho(\lambda)\lambda_0^{-\rho_0}x^{-\rho} \] and define for $\xi(\lambda,x)\in S$ \begin{align*} \alpha^*\big(\xi(\lambda,x)\big) &= \gamma^\circ\big(F(\lambda,x)\xi(\lambda^p,x^p)\big) \\
&= \sum_{\rho\in M^\circ}\zeta_\rho(\lambda)\lambda_0^{-\rho_0}x^{-\rho}. \end{align*} For $\rho\in M^\circ$ put $\eta_\rho(\lambda) = {\pi}^{\rho_0}\zeta_\rho(\lambda)$, so that \begin{equation} \alpha^*(\xi(\lambda,x)) = \sum_{\rho\in M^\circ} \eta_\rho(\lambda)({\pi}\lambda_0)^{-\rho_0}x^{-\rho} \end{equation} with (by (2.6)) \begin{equation} \eta_\rho(\lambda) = \sum_{\substack{\mu\in M,\nu\in M^\circ\\ \mu-p\nu = -\rho}} {\pi}^{\rho_0-\nu_0} {B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_\nu(\lambda^p). \end{equation}
\begin{proposition} The map $\alpha^*$ is an endomorphism of $S$ and of $S'$, and for $\xi(\lambda,x)\in S$ we have \begin{equation}
|\alpha^*(\xi(\lambda,x))|\leq |p|\cdot|\xi(\lambda,x)|. \end{equation} \end{proposition}
\begin{proof} By (2.9), the proposition follows from the estimate
\[ |\eta_\rho(\lambda)|\leq |p|\cdot|\xi(\lambda,x)| \quad\text{for all $\rho\in M^\circ$.} \] By (2.10), this estimate will follow from the estimate \begin{equation}
|{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq |p| \end{equation} for all $\mu\in M$, $\nu\in M^\circ$, $\mu-p\nu=-\rho$.
Consider first the case $\mu_0\leq 2p-1$.
For $0\leq i\leq p-1$ we have $b_i=\pi^i/i!$, hence $|b_i| = |\pi^i|$. One checks easily that for
$p\leq i\leq 2p-1$ one has $|b_i|\leq |\pi^i|$. This implies by (2.2) that
$|B_\mu(\lambda)|\leq |\pi^{\mu_0}|$, thus
\[ |{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq
|\pi^{\rho_0-\nu_0+\mu_0}|. \] Since $\mu=p\nu-\rho$ we have $\rho_0-\nu_0+\mu_0 = (p-1)\nu_0$, so \begin{equation}
|{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq |p|^{\nu_0}. \end{equation} Since $\nu_0\geq 1$ for $\nu\in M^\circ$, Eq.~(2.14) implies (2.13) for $\mu_0\leq 2p-1$.
Now consider the case $\mu_0\geq 2p$. Lemma 2.4 implies that \begin{equation}
|{\pi}^{\rho_0-\nu_0}B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq
|\tilde{\pi}^{\mu_0}\pi^{\rho_0-\nu_0}|. \end{equation} We have (using $\mu=p\nu-\rho$) \begin{align*} {\rm ord}\;\tilde{\pi}^{\mu_0}\pi^{\rho_0-\nu_0} &= (p\nu_0-\rho_0)\frac{p-1}{p^2} + (\rho_0-\nu_0) \frac{1}{p-1} \\ &= \nu_0\bigg(\frac{p-1}{p}-\frac{1}{p-1}\bigg) + \rho_0\bigg(\frac{1}{p-1}-\frac{p-1}{p^2}\bigg). \end{align*} Since $\rho\in M^\circ$ we have $\rho_0\geq 1$, and since $\mu_0\geq 2p$ and $\mu=p\nu-\rho$ we must have $\nu_0\geq 3$. It follows that \begin{equation} \nu_0\bigg(\frac{p-1}{p}-\frac{1}{p-1}\bigg) + \rho_0\bigg(\frac{1}{p-1}-\frac{p-1}{p^2}\bigg)\geq 3\bigg(\frac{p-1}{p}-\frac{1}{p-1}\bigg) + \bigg(\frac{1}{p-1}-\frac{p-1}{p^2}\bigg). \end{equation} This latter expression is $>1$ for $p\geq 5$, hence (2.15) implies (2.13) when $\mu_0\geq 2p$ and $p\geq 5$.
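The threshold $p\geq 5$, and the value of the left-hand side of (2.16) used later for $p=3$, can be checked directly. A small sketch (ours) with exact rationals:

```python
from fractions import Fraction

def lhs_2_16(p, nu0=3, rho0=1):
    # Left-hand side of (2.16); the stated lower bound is the case nu_0 = 3, rho_0 = 1.
    return nu0 * (Fraction(p - 1, p) - Fraction(1, p - 1)) \
        + rho0 * (Fraction(1, p - 1) - Fraction(p - 1, p * p))

assert lhs_2_16(3) < 1                     # p = 3 needs the separate treatment below
assert lhs_2_16(3, nu0=5) > 1              # used for mu_0 >= 12 when p = 3
assert all(lhs_2_16(p) > 1 for p in (5, 7, 11, 13, 17, 19))
```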
Finally, suppose that $p=3$ and $\mu_0\geq 2p = 6$. By explicitly computing $b_i$, one checks that
$|b_i| = |\pi^i|$ for $i\leq 8$. This implies by (2.2) that $|B_\mu(\lambda)|\leq |\pi^{\mu_0}|$ for $\mu_0\leq 8$, so~(2.14) holds in this case and we conclude as before that~(2.13) holds also. For
$i=9,10$, one has $|b_i| = |\pi^{i-4}|$, so $|B_\mu(\lambda)|\leq |\pi^{\mu_0-4}|$ for $\mu_0=9,10$. We thus have
\[ |{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq
|\pi^{\rho_0-\nu_0+\mu_0-4}| \] for $\mu_0=9,10$. Since $\mu=p\nu-\rho$, this gives
\[ |{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq |3^{\nu_0-2}|. \]
For $\mu_0=9,10$, $\mu=p\nu-\rho$ implies that $\nu_0\geq 4$, hence $|3^{\nu_0-2}|\leq |3^2|$
and~(2.13) holds. One computes that $|b_{11}| = |\pi^9|$, so $|B_\mu(\lambda)|\leq |\pi^9|$ for $\mu_0=11$. This gives (using $\mu=p\nu-\rho$)
\[ |{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq |3^{\nu_0-1}| = |3^3| \] since $\nu_0\geq 4$ for $\mu_0=11$, so (2.13) holds in this case. Finally, if $\mu_0\geq 12$, then $\mu=p\nu-\rho$ implies $\nu_0\geq 5$. When $p=3$, the left-hand side of (2.16) is $>1$ when $\nu_0\geq 5$, so~(2.15) implies~(2.13) in this case. \end{proof}
{\bf Remark:} It follows from the proof of Proposition 2.11 that equality can hold in (2.13) only if $\nu_0=1$. And since \[ {\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\in{\mathbb Q}_p(\pi, \tilde{\pi})[\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0] \] and ${\mathbb Q}_p(\pi,\tilde{\pi})$ is a discretely valued field, we conclude that there exists a rational number $C$, $0<C<1$, such that \begin{equation}
|{\pi}^{\rho_0-\nu_0}{B}_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq C|p| \end{equation} for all $\mu\in M$, $\nu\in M^\circ$, $\mu-p\nu=-\rho$, with $\nu_0>1$.
\begin{lemma} Suppose ${\bf a}_0$ is the unique interior lattice point of $\Delta$. If
$\xi_{\hat{\bf a}_0}(\lambda)=0$, then $|\alpha^*(\xi(\lambda,x))|\leq C |p|\cdot|\xi(\lambda,x)|$. \end{lemma}
\begin{proof} Since ${\bf a}_0$ is the unique interior lattice point of $\Delta$, the point $\nu=\hat{\bf a}_0$ is the unique element of~$M^\circ$ with $\nu_0=1$. So for $\nu\in M^\circ$, $\nu\neq\hat{\bf a}_0$, we have $\nu_0\geq 2$. The assertion of the lemma then follows from (2.10) and (2.17). \end{proof}
We examine the polynomial $B_{(p-1)\hat{\bf a}_0}(\lambda)$ to determine its relation to $\Phi_1(\lambda)$. Let \[ V = \bigg\{v=(v_0,\dots,v_N)\in ({\mathbb Z}_{\geq 0})^{N+1} \mid \sum_{i=0}^N v_i\hat{\bf a}_i = (p-1)\hat{\bf a}_0\bigg\}. \] From (2.2) we have \[ B_{(p-1)\hat{\bf a}_0}(\lambda) = \sum_{v\in V} \bigg(\prod_{i=0}^N b_{v_i}\bigg)\lambda_0^{v_0}\cdots \lambda_N^{v_N}. \] For $v\in V$ we have $\sum_{i=0}^N v_i = p-1$ so $v_0=(p-1)-v_1-\cdots-v_N$. Furthermore, $v_i\leq p-1$ for all $i$ so $b_{v_i} = \pi^{v_i}/v_i!$. And since $\pi^{p-1} = -p$, this implies \[ B_{(p-1)\hat{\bf a}_0}(\lambda) = -p\lambda_0^{p-1}\sum_{v\in V}\frac{(\lambda_1/\lambda_0)^{v_1}\cdots (\lambda_N/\lambda_0)^{v_N}}{(p-1-v_1-\ldots-v_N)!v_1!\cdots v_N!}. \] It follows that \[ p^{-1}B_{(p-1){\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0) = -\sum_{v\in V} \frac{(\lambda_1/\lambda_0)^{v_1}\cdots (\lambda_N/\lambda_0)^{v_N}} {(p-1-v_1-\ldots-v_N)!v_1!\cdots v_N!}, \]
a polynomial in the $\lambda_i/\lambda_0$ with $p$-integral coefficients.
\begin{lemma} $\Phi_1(\lambda)\equiv p^{-1}B_{(p-1)\hat{\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0) \pmod{p}$. \end{lemma}
\begin{proof} The map $(v_0,\dots,v_N)\mapsto (-v_1-\dots-v_N,v_1,\dots,v_N)$ is a one-to-one correspondence from $V$ to the elements $l\in L_+$ satisfying $l_1+\cdots+l_N\leq p-1$. The lemma then follows immediately from the congruence \[ -\frac{1}{(p-1-m)!}\equiv (-1)^m m!\pmod{p} \quad\text{for $0\leq m\leq p-1$}, \] which is implied by the congruence $(p-1)!\equiv -1\pmod{p}$. \end{proof}
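The congruence in the proof is easy to verify numerically; a sketch (ours), inverting the factorial modulo $p$ via Fermat's little theorem:

```python
from math import factorial

def congruence_holds(p):
    # Check -1/(p-1-m)! == (-1)^m m! (mod p) for 0 <= m <= p-1,
    # with the factorial inverted by pow(a, p-2, p) (Fermat, p prime).
    for m in range(p):
        lhs = (-pow(factorial(p - 1 - m) % p, p - 2, p)) % p
        rhs = ((-1) ** m * factorial(m)) % p
        if lhs != rhs:
            return False
    return True

assert all(congruence_holds(p) for p in (3, 5, 7, 11, 13))
```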
\begin{corollary} The polynomial $B_{(p-1)\hat{\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)$ is
an invertible element of $R'$ with $|B_{(p-1)\hat{\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|
= |p|$. \end{corollary}
\begin{proof} The first assertion is an immediate consequence of Lemma 2.19. The assertion about the norm follows from the fact that all the coefficients are divisible by~$p$ and the constant term equals $-p/(p-1)!$. \end{proof}
Suppose that ${\bf a}_0$ is the unique interior lattice point of $\Delta$, so that $\nu=\hat{\bf a}_0$ is the unique element of $M^\circ$ with $\nu_0=1$. From Equation (2.10) we have \begin{equation} \begin{split} \eta_{\hat{\bf a}_0}(\lambda) &= \sum_{\substack{\mu\in M,\nu\in M^\circ\\ \mu=p\nu -\hat{\bf a}_0}} {\pi}^{1-\nu_0} B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_\nu(\lambda^p) \\
& = B_{(p-1)\hat{\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_{\hat{\bf a}_0}(\lambda^p) \\ & \quad +\sum_{\substack{\mu\in M,\nu\in M^\circ\\ \mu=p\nu -\hat{\bf a}_0\\ \nu_0\geq 2}} {\pi}^{1-\nu_0} B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)\xi_\nu(\lambda^p). \end{split} \end{equation}
\begin{lemma} Suppose that ${\bf a}_0$ is the unique interior lattice point of $\Delta$. If $\xi_{\hat{\bf a}_0}(\lambda)$ is an invertible element of $R$ (resp.~$R'$) and
$|\xi_{\hat{\bf a}_0}(\lambda)|= |\xi(\lambda,x)|$, then $\eta_{\hat{\bf a}_0}(\lambda)$ is also an
invertible element of $R$ (resp.~$R'$) and $|\alpha^*(\xi(\lambda,x))| = |\eta_{\hat{\bf a}_0}(\lambda)| =
|p|\cdot |\xi_{\hat{\bf a}_0}(\lambda)|$. \end{lemma}
\begin{proof} By Corollary 2.20 we have that $B_{(p-1)\hat{\bf a}_0}(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)
\xi_{\hat{\bf a}_0}(\lambda^p)$ is an invertible element of norm $|p|\cdot|\xi_{\hat{\bf a}_0}(\lambda)|$. By hypothesis, we have
\[ |\xi_\nu(\lambda^p)|\leq |\xi_{\hat{\bf a}_0}(\lambda)|\quad\text{for all $\nu\in M^\circ$.} \] Equation (2.17) with $\rho = \hat{\bf a}_0$ gives \begin{equation}
|{\pi}^{1-\nu_0} B_\mu(1,\lambda_1/\lambda_0,\dots,\lambda_N/\lambda_0)|\leq C|p| \end{equation} for all $\mu\in M$, $\nu\in M^\circ$, $\mu-p\nu=-\hat{\bf a}_0$, $\nu_0\geq 2$ and some constant $C$, $0<C<1$.
Equation (2.21) then implies that $\eta_{\hat{\bf a}_0}(\lambda)$ is invertible and that
\[ |\eta_{\hat{\bf a}_0}(\lambda)|=|p|\cdot |\xi_{\hat{\bf a}_0}(\lambda)|. \]
Equation (2.12) then implies that $|\alpha^*(\xi(\lambda,x))| = |p|\cdot|\xi_{\hat{\bf a}_0}(\lambda)|$. \end{proof}
Suppose that ${\bf a}_0$ is the unique interior lattice point of $\Delta$. Put
\[ T = \{\xi(\lambda,x)\in S\mid \text{$\xi_{\hat{\bf a}_0}(\lambda) = 1$ and $|\xi(\lambda,x)| = 1$}\} \] and put $T' = T\cap S'$. It follows from Lemma~2.22 that if $\xi(\lambda,x)\in T$, then $\eta_{\hat{\bf a}_0}(\lambda)$ is invertible. We may thus define for $\xi(\lambda,x)\in T$ \[ \beta(\xi(\lambda,x)) = \frac{\alpha^*(\xi(\lambda,x))}{\eta_{\hat{\bf a}_0}(\lambda)}. \] Lemma 2.22 also implies that
\[ \bigg|\frac{\alpha^*(\xi(\lambda,x))}{\eta_{\hat{\bf a}_0}(\lambda)}\bigg|= 1, \] so $\beta(T)\subseteq T$. It is then clear that $\beta(T')\subseteq T'$.
\begin{proposition} Suppose that ${\bf a}_0$ is the unique interior lattice point of $\Delta$. Then the operator $\beta$ is a contraction mapping on the complete metric space $T$. More precisely, for $C$ as in $(2.17)$, if $\xi^{(1)}(\lambda,x),\xi^{(2)}(\lambda,x)\in T$, then
\[ |\beta\big(\xi^{(1)}(\lambda,x)\big)-\beta\big(\xi^{(2)}(\lambda,x)\big)|\leq C|\xi^{(1)}(\lambda,x)-
\xi^{(2)}(\lambda,x)|. \] \end{proposition}
\begin{proof} We have (in the obvious notation) \begin{equation*} \begin{split} \beta\big(\xi^{(1)}(\lambda,x)\big)-\beta\big(\xi^{(2)}(\lambda,x)\big) &= \frac{\alpha^*\big( \xi^{(1)}(\lambda,x)\big)}{\eta^{(1)}_{\hat{\bf a}_0}(\lambda)} - \frac{\alpha^*\big(\xi^{(2)}(\lambda,x) \big)}{\eta^{(2)}_{\hat{\bf a}_0}(\lambda)} \\
&= \frac{\alpha^*\big(\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x)\big)}{\eta^{(1)}_{\hat{\bf a}_0}(\lambda)}
\\
& \qquad - \alpha^*\big(\xi^{(2)}(\lambda,x)\big)\frac{\eta^{(1)}_{\hat{\bf a}_0}(\lambda) - \eta^{(2)}_{\hat{\bf a}_0}(\lambda)}{\eta^{(1)}_{\hat{\bf a}_0}(\lambda)\eta^{(2)}_{\hat{\bf a}_0}(\lambda)}. \end{split} \end{equation*} By Lemmas 2.18 and 2.22 we have
\[ \bigg|\frac{\alpha^*(\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x))}{\eta^{(1)}_{\hat{\bf a}_0}(\lambda)}
\bigg| \leq C|\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x)|. \] Since $\eta^{(1)}_{\hat{\bf a}_0}(\lambda)-\eta^{(2)}_{\hat{\bf a}_0}(\lambda)$ is the coefficient of $x^{-\hat{\bf a}_0}$ in $\alpha^*\big(\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x)\big)$, we have
\[ |\eta^{(1)}_{\hat{\bf a}_0}(\lambda)-\eta^{(2)}_{\hat{\bf a}_0}(\lambda)|\leq
|\alpha^*\big(\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x)\big)|\leq C |p|\cdot |\xi^{(1)}(\lambda,x)-
\xi^{(2)}(\lambda,x)| \]
by Lemma 2.18. We have $|\eta^{(1)}_{\hat{\bf a}_0}(\lambda)\eta^{(2)}_{\hat{\bf a}_0}(\lambda)|=|p^2|$ by Lemma~2.22, so by (2.12)
\[ \bigg| \alpha^*\big(\xi^{(2)}(\lambda,x)\big)\frac{\eta^{(1)}_{\hat{\bf a}_0}(\lambda) -
\eta^{(2)}_{\hat{\bf a}_0}(\lambda)}{\eta^{(1)}_{\hat{\bf a}_0}(\lambda)\eta^{(2)}_{\hat{\bf a}_0}(\lambda)}\bigg|
\leq C|\xi^{(1)}(\lambda,x)-\xi^{(2)}(\lambda,x)|. \] This establishes the proposition. \end{proof}
By the contraction mapping theorem, Proposition 2.24 implies that $\beta$ has a unique fixed point in $T$. And since $\beta$ is stable on $T'$, that fixed point must lie in $T'$. This fixed point of $\beta$ is related to a certain eigenvector of $\alpha^*$. Suppose that $\xi(\lambda,x)\in S$ is an eigenvector of $\alpha^*$, say, \[ \alpha^*\big(\xi(\lambda,x)\big) = \kappa\xi(\lambda,x), \]
with $\xi_{\hat{\bf a}_0}(\lambda)$ invertible and $|\xi(\lambda,x)| = |\xi_{\hat{\bf a}_0}(\lambda)|$. Then $\xi(\lambda,x)/\xi_{\hat{\bf a}_0}(\lambda)\in T$. \begin{lemma} With the above notation, $\xi(\lambda,x)/\xi_{\hat{\bf a}_0}(\lambda)$ is the unique fixed point of~$\beta$, hence $\xi(\lambda,x)/\xi_{\hat{\bf a}_0}(\lambda)\in T'$. In particular, $\xi_\rho(\lambda)/\xi_{\hat{\bf a}_0}(\lambda)\in R'$ for all $\rho\in M^\circ$. \end{lemma}
\begin{proof} We have \begin{equation} \alpha^*\bigg(\frac{\xi(\lambda,x)}{\xi_{\hat{\bf a}_0}(\lambda)}\biggr) = \frac{\alpha^*\big( \xi(\lambda,x)\big)}{\xi_{\hat{\bf a}_0}(\lambda^p)} = \bigg(\frac{\kappa\xi_{\hat{\bf a}_0}(\lambda)} {\xi_{\hat{\bf a}_0}(\lambda^p)}\bigg) \frac{\xi(\lambda,x)}{\xi_{\hat{\bf a}_0}(\lambda)}. \end{equation} By the definition of $\beta$, this implies the result. \end{proof}
\begin{corollary} With the above notation, $\xi_{\hat{\bf a}_0}(\lambda)/\xi_{\hat{\bf a}_0}(\lambda^p) \in R'$. \end{corollary}
\begin{proof} Since $\alpha^*$ is stable on $S'$, Lemma 2.25 implies that the right-hand side of~(2.26) lies in $S'$. Since the coefficient of $(\pi\lambda_0)^{-1}x^{-\hat{\bf a}_0}$ on the right-hand side of (2.26) is $\kappa\xi_{\hat{\bf a}_0}(\lambda)/\xi_{\hat{\bf a}_0}(\lambda^p)$, the result follows. \end{proof}
In the next section we find the fixed point of $\beta$ by finding the corresponding eigenvector of $\alpha^*$. This eigenvector will be constructed from solutions of the $A$-hypergeometric system; in particular, we shall have $\xi_{\hat{\bf a}_0}(\lambda) = \Phi(\lambda)$, so Corollary~2.27 will imply Theorem~1.4.
\section{Fixed point}
We begin with a preliminary calculation. Consider the series \[ G(\lambda_0,x) = \sum_{l=0}^{\infty} (-1)^l l!(\pi\lambda_0x^{\hat{\bf a}_0})^{-1-l}. \] It satisfies the equations \begin{equation} \gamma_-\bigg(x_i\frac{\partial}{\partial x_i}-\pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0}\bigg) G(\lambda_0,x) = 0 \end{equation} for $i=0,1,\dots,n$, where $\gamma_-$ is the operator on series defined by \[\gamma_-\bigg(\sum_{l=-\infty}^{\infty} c_l(\pi\lambda_0x^{\hat{\bf a}_0})^{-1-l}\bigg) = \sum_{l=0}^{\infty} c_l(\pi\lambda_0x^{\hat{\bf a}_0})^{-1-l}. \] It is straightforward to check that the series $\sum_{l=0}^{\infty} c_l(\pi\lambda_0 x^{\hat{\bf a}_0})^{-1-l}$ that are annihilated by the operators $\gamma_-\circ(x_i\partial/\partial x_i- \pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0})$ form a one-dimensional space.
Consider the series \[ H(\lambda_0,x)=\gamma_-\big(\exp\big(\pi(\lambda_0x^{\hat{\bf a}_0}-\lambda_0^px^{p\hat{\bf a}_0})\big) G(\lambda_0^p,x^p)\big). \] Formally one has \[ x_i\frac{\partial}{\partial x_i}-\pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0} = \exp(\pi\lambda_0 x^{\hat{\bf a}_0})\circ x_i\frac{\partial}{\partial x_i}\circ \frac{1}{\exp(\pi\lambda_0x^{\hat{\bf a}_0})} \] and \[ \gamma_-\circ\bigg(x_i\frac{\partial}{\partial x_i} - \pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0} \bigg)\circ \gamma_- = \gamma_-\circ\bigg(x_i\frac{\partial}{\partial x_i} - \pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0} \bigg). \] It follows that \begin{multline*} \gamma_-\bigg(x_i\frac{\partial}{\partial x_i} - \pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0}\bigg) H(\lambda_0,x) = \\ \gamma_-\bigg(\exp(\pi\lambda_0x^{\hat{\bf a}_0})x_i\frac{\partial}{\partial x_i}\bigg(G(\lambda_0^p,x^p)/ \exp(\pi\lambda_0^px^{p\hat{\bf a}_0})\bigg)\bigg). \end{multline*} Eq.~(3.1) implies that this latter expression equals 0, i.e., $H(\lambda_0,x)$ is also annihilated by the operators $\gamma_-\circ(x_i\partial/\partial x_i-\pi\lambda_0\hat{\bf a}_{0i}x^{\hat{\bf a}_0})$. Since these solutions form a one-dimensional space, we have $H(\lambda_0,x) = cG(\lambda_0,x)$ for some constant $c$.
We determine the constant $c$ by comparing the coefficients of $(\pi\lambda_0x^{\hat{\bf a}_0})^{-p}$ in $G(\lambda_0,x)$ and $H(\lambda_0,x)$. The coefficient of $(\pi\lambda_0x^{\hat{\bf a}_0})^{-p}$ in $G(\lambda_0,x)$ is $(-1)^{p-1}(p-1)!$. A calculation shows that the coefficient of $(\pi\lambda_0x^{\hat{\bf a}_0})^{-p}$ in $H(\lambda_0,x)$ is \[ -p\sum_{l=0}^\infty b_{pl}(-1)^ll!\pi^{-l}. \] By Boyarsky\cite[Eq.~(5.6)]{B} (or see \cite[Lemma~3]{A}) this equals $-p\Gamma_p(p)=(-1)^{p+1} p!$, where $\Gamma_p$ denotes the $p$-adic gamma function. It follows that the constant $c$ satisfies the equation \[ (-1)^{p+1}p!=c(-1)^{p-1}(p-1)!, \] so $c=p$ and we conclude that \begin{equation} H(\lambda_0,x) = p G(\lambda_0,x). \end{equation}
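The evaluation $-p\Gamma_p(p)=(-1)^{p+1}p!$ can be checked from the elementary formula for Morita's $p$-adic gamma function at positive integers, $\Gamma_p(n)=(-1)^n\prod_{0<j<n,\,p\nmid j}j$; a sketch (ours):

```python
from math import factorial

def gamma_p_at_p(p):
    # Gamma_p(p) = (-1)^p * product of j in (0, p); no such j is divisible by p,
    # so this is (-1)^p (p-1)!.
    prod = 1
    for j in range(1, p):
        prod *= j
    return (-1) ** p * prod

# The identity -p * Gamma_p(p) = (-1)^{p+1} p! holds exactly, not just mod p.
assert all(-p * gamma_p_at_p(p) == (-1) ** (p + 1) * factorial(p)
           for p in (3, 5, 7, 11, 13))
```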
We now consider the series \[ \xi(\lambda,x) = \gamma^\circ\bigg(G(\lambda_0,x)\prod_{i=1}^N \exp(\pi\lambda_i x^{\hat{\bf a}_i})\bigg), \] where $\gamma^\circ$ is as defined in Section~2. A calculation shows that \begin{multline*} G(\lambda_0,x)\prod_{i=1}^N \exp(\pi\lambda_ix^{\hat{\bf a}_i}) = \\ \sum_{\rho\in{\mathbb Z}^{n+1}} \bigg( \sum_{\substack{l_0,\dots,l_N\in{\mathbb Z}_{\geq 0}\\ l_1\hat{\bf a}_1+\cdots+l_N\hat{\bf a}_N-(l_0+1)\hat{\bf a}_0 = -\rho}} \frac{(-1)^{l_0}l_0! \pi^{-1-l_0+\sum_{i=1}^N l_i} \lambda_1^{l_1}\cdots\lambda_N^{l_N}\lambda_0^{-l_0-1}}{l_1!\cdots l_N!}\bigg)x^{-\rho}. \end{multline*} It follows that we can write $\xi(\lambda,x)$ as \begin{equation} \xi(\lambda,x) = \sum_{\rho\in M^\circ} \xi_\rho(\lambda)(\pi\lambda_0)^{-\rho_0}x^{-\rho}, \end{equation} where \begin{multline} \xi_\rho(\lambda) = \\ \sum_{\substack{l_0,\dots,l_N\in{\mathbb Z}_{\geq 0}\\ l_1\hat{\bf a}_1+\cdots+l_N\hat{\bf a}_N-(l_0+1)\hat{\bf a}_0 = -\rho}} \frac{(-1)^{\rho_0-1+\sum_{i=1}^N l_i}(\rho_0-1+\sum_{i=1}^N l_i)!}{l_1!\cdots l_N!} \prod_{i=1}^N \bigg(\frac{\lambda_i}{\lambda_0}\bigg)^{l_i}. \end{multline}
{\bf Remark:} One can check that the series $\lambda_0^{-\rho_0}\xi_\rho(\lambda)$ is a solution of the $A$-hyper\-geometric system with parameter $-\rho$ (although we shall not make use of that fact here). Thus the coefficients of powers of $x$ in the series $\xi(\lambda,x)$ form a ($p$-adically normalized) family of ``contiguous'' $A$-hypergeometric functions.
The series $\xi_\rho(\lambda)$ have coefficients in ${\mathbb Z}$ for all $\rho\in M^\circ$, hence
$\xi(\lambda,x)\in S$ and $|\xi(\lambda,x)|\leq 1$. Note that $\xi_{\hat{\bf a}_0}(\lambda) = \Phi(\lambda)$, where $\Phi(\lambda)$ is defined by (1.1). In particular, $\xi_{\hat{\bf a}_0}(\lambda)$ takes unit values on ${\mathcal D}$, hence
$|\xi_{\hat{\bf a}_0}(\lambda)|=1$ and $\xi_{\hat{\bf a}_0}(\lambda)$ is an invertible element of $R$. To show that $\xi(\lambda,x)$ satisfies the hypothesis of Lemma~2.25, it remains only to establish the following result.
\begin{lemma} We have $\alpha^*\big(\xi(\lambda,x)\big) = p\xi(\lambda,x)$. \end{lemma}
\begin{proof} From the definitions we have \[ \alpha^*(\xi(\lambda,x)) = \gamma^\circ\bigg(F(\lambda,x)\cdot\gamma^\circ\bigg(G(\lambda_0^p,x^p) \prod_{i=1}^N \exp(\pi\lambda_i^px^{p\hat{\bf a}_i})\bigg)\bigg). \] One checks that $\big(\text{mult.\ by $F(\lambda,x)$}\big)\circ\gamma^\circ = \gamma^\circ\circ \big(\text{mult.\ by $F(\lambda,x)$}\big)$. Furthermore, by definition we have $F(\lambda,x) = \prod_{i=0}^N \exp(\pi\lambda_ix^{\hat{\bf a}_i})/\exp(\pi\lambda_i^px^{p\hat{\bf a}_i})$. It follows that \[ \alpha^*(\xi(\lambda,x)) = \gamma^\circ\bigg(\gamma^\circ\bigg(\frac{\exp(\pi \lambda_0x^{\hat{\bf a}_0})}{\exp(\pi\lambda_0^px^{p\hat{\bf a}_0})}G(\lambda_0^p,x^p)\bigg)\prod_{i=1}^N \exp(\pi\lambda_ix^{\hat{\bf a}_i})\bigg). \] By (3.2) we finally have \[ \alpha^*(\xi(\lambda,x)) = \gamma^\circ\bigg(pG(\lambda_0,x)\prod_{i=1}^N \exp(\pi\lambda_i x^{\hat{\bf a}_i})\bigg) = p\xi(\lambda,x). \] \end{proof}
\begin{proof}[Proof of Theorem $1.4$] We have shown that the series $\xi(\lambda,x)$ given by (3.3) and~(3.4) satisfies the hypotheses of Lemma 2.25. Since $\xi_{\hat{\bf a}_0}(\lambda)=\Phi(\lambda)$, Theorem~1.4 follows from Corollary~2.27. \end{proof}
\end{document} |
\begin{document}
\title{Hamiltonian Simulation Algorithms for Near-Term Quantum Hardware}
\begin{abstract} The quantum circuit model is the de facto way of designing quantum algorithms. Yet any level of abstraction away from the underlying hardware incurs overhead. In this work, we develop quantum algorithms for Hamiltonian simulation ``one level below'' the circuit model, exploiting the underlying control over qubit interactions available in most quantum hardware and deriving analytic circuit identities for synthesising multi-qubit evolutions from two-qubit interactions. We then analyse the impact of these techniques under the standard error model where errors occur per gate, and an error model with a constant error rate per unit time.
To quantify the benefits of this approach, we apply it to time-dynamics simulation of the 2D spin Fermi-Hubbard model. Combined with new error bounds for Trotter product formulas tailored to the non-asymptotic regime and an analysis of error propagation, we find that e.g.\ for a $5\times5$ Fermi-Hubbard lattice we reduce the circuit depth from $1,243,586$ using the best previous fermion encoding and error bounds in the literature, to $3,209$ in the per-gate error model, or the circuit-depth-equivalent to $259$ in the per-time error model. This brings Hamiltonian simulation, previously beyond reach of current hardware for non-trivial examples, significantly closer to being feasible in the NISQ era. \end{abstract}
\thispagestyle{empty} \enlargethispage{5em}
\section*{Introduction}\label{sec:intro} Quantum computing is on the cusp of entering the era in which quantum hardware can no longer be simulated effectively classically, even on the world's biggest supercomputers~\cite{Villalonga2020,Bremner2011,Aaronson2010,Bremner2017,Harrow2017}. Google recently achieved the first so-called ``quantum supremacy'' milestone demonstrating this~\cite{GoogleAI}. Whilst reaching this milestone is an impressive experimental physics achievement, the very definition of this goal allows it to be a demonstration that has no useful practical applications~\cite{Preskill2012}. The recent Google results are of exactly this nature. By far the most important question for quantum computing now is to determine whether there are useful applications of this class of noisy, intermediate-scale quantum (NISQ) hardware~\cite{Preskill2018}.
However, current quantum hardware is still extremely limited, with $\approx 50$ qubits capable of implementing quantum circuits up to a gate depth of $\approx 20$~\cite{GoogleAI}. This is far too limited to run useful instances of even the simplest textbook quantum algorithms, let alone implement the error correction and fault-tolerance required for large-scale quantum computations. Estimates of the number of qubits and gates required to run Shor's algorithm on integers that cannot readily be factored on classical computers place it -- and related number-theoretic algorithms -- well into the regime of requiring a fully scalable, fault-tolerant quantum computer~\cite{Haner2016,Roetteler2017}. Studies of practically relevant combinatorial problems tell a similar story for capitalising on the quadratic speedup of Grover's algorithm~\cite{Montanaro2015}. Quantum computers are naturally well-suited for simulation of quantum many-body systems~\cite{Feynman1982,Lloyd1996a} -- a task that is notoriously difficult on classical computers. Quantum simulation is likely to be one of the first practical applications of quantum computing. But, whilst the number of qubits required to run interesting quantum simulations may be lower than for other applications, careful studies of the gate counts required for a quantum chemistry simulation of molecules that are not easily tractable classically~\cite{google-chemisty}, or for simple condensed matter models~\cite{Kivlichan2019}, remain far beyond current hardware.
With severely resource-constrained hardware such as this, squeezing every ounce of performance out of it is crucial. The quantum circuit model is the standard way to design quantum algorithms, and quantum gates and circuits provide a highly convenient abstraction of quantum hardware. Circuits sit at a significantly lower level of abstraction than even assembly code in classical computing. But any layer of abstraction sacrifices some overhead for the sake of convenience. The quantum circuit model is no exception.
In the underlying hardware, quantum gates are typically implemented by controlling interactions between qubits. E.g.\ by changing voltages to bring superconducting qubits in and out of resonance; or by laser pulses to manipulate the internal states of trapped ions. By restricting to a fixed set of standard gates, the circuit model abstracts away the full capabilities of the underlying hardware. In the NISQ era, it is not clear this sacrifice is justified. The Solovay-Kitaev theorem tells us that the overhead of any particular choice of universal gate set is at most poly-logarithmic~\cite{Kitaev1997,Dawson2005}. But when the available circuit depth is limited to $\approx 20$, even a constant factor improvement could make the difference between being able to run an algorithm on current hardware, and being beyond the reach of foreseeable hardware.
The advantages of designing quantum algorithms ``one level below'' the circuit model become particularly acute in the case of Hamiltonian time-dynamics simulation. To simulate evolution under a many-body Hamiltonian $\op H=\sum_{\langle i,j\rangle} \op h_{ij}$, the basic Trotterization algorithm~\cite{Lloyd1996a,Nielsen_and_Chuang} repeatedly time-evolves the system under each individual interaction $\op h_{ij}$ for a small time-step $\delta$, \begin{equation}\label{eq:trotter-intro} \ee^{-\ii H T} \simeq \prod_{n=0}^{T/\delta} \left(\prod_{\langle i,j\rangle} \ee^{-\ii h_{ij}\delta}\right). \end{equation} To achieve good precision, $\delta$ must be small. In the circuit model, each $\ee^{-\ii h_{ij}\delta}$ Trotter step necessarily requires at least one quantum gate to implement. Thus the required circuit depth -- and hence the total run-time -- is at least $T/\delta$. Contrast this with the run-time if we were able to implement $\ee^{-\ii h_{ij}\delta}$ directly in time $\delta$. The total run-time would then be $T$, which improves on the circuit model algorithm by a factor of $1/\delta$. This is ``only'' a constant factor improvement, in line with the Solovay-Kitaev theorem. But this ``constant'' can be very large; indeed, it diverges to $\infty$ as the precision of the algorithm increases.
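The $1/\delta$ trade-off in \eqref{eq:trotter-intro} is easy to see numerically. The following sketch (ours, not tied to any particular hardware) compares a first-order Trotter product against the exact evolution for two random Hermitian terms; shrinking $\delta$ reduces the error at the price of proportionally more steps, i.e.\ greater circuit depth:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    # Random d x d Hermitian matrix (a stand-in for a Hamiltonian term)
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (a + a.conj().T) / 2

def evolve(h, t):
    # exp(-i h t) via the eigendecomposition of the Hermitian matrix h
    w, v = np.linalg.eigh(h)
    return (v * np.exp(-1j * w * t)) @ v.conj().T

h1, h2, T = rand_herm(4), rand_herm(4), 1.0
exact = evolve(h1 + h2, T)

def trotter_error(steps):
    delta = T / steps
    step = evolve(h1, delta) @ evolve(h2, delta)   # one first-order Trotter step
    return np.linalg.norm(np.linalg.matrix_power(step, steps) - exact, 2)

# Error shrinks as delta = T/steps shrinks, while the number of steps (depth) grows.
assert trotter_error(160) < trotter_error(40) < trotter_error(10)
```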
It is unrealistic to assume the hardware can implement $\ee^{-\ii h_{ij}\delta}$ for any desired interaction $h_{ij}$ and any time $\delta$. In practice, the available interactions are typically limited to at most a handful of specific types, determined by the underlying physics of the device's qubit and quantum gate implementations. Moreover, these interactions cannot be switched on and off arbitrarily fast, placing a limit on the smallest achievable value of $\delta$. There are also experimental challenges in implementing gates with small $\delta$ at the same fidelities as those with $\delta \approx \BigO(1)$.
A major criticism of analogue computation (classical and quantum) is that it cannot cope with errors and noise. The ``N'' in NISQ stands for ``noisy''; errors and noise will be a significant factor in all foreseeable quantum hardware. But near-term hardware has few resources to spare even on basic error correction, let alone fault-tolerance. Indeed, near-term hardware may not even have the necessary capabilities: e.g.\ the intermediate measurements required for active error-correction are not possible in all superconducting circuit hardware~\cite[Sec.~II]{GoogleAI+Supplementary}.
Algorithms that cope well with errors and noise, and still give reasonable results without active error correction or fault-tolerance, are thus critical for NISQ applications.
Designing algorithms ``one level below'' the circuit model can also in some cases reduce the impact of errors and noise during the algorithm. Again, this benefit is particularly acute in Hamiltonian simulation algorithms. If an error occurs on a qubit in a quantum circuit, a two-qubit gate acting on the faulty qubit can spread the error to a second qubit. In the absence of any error-correction or fault-tolerance, errors can spread to an additional qubit with each two-qubit gate applied, so that after circuit depth $n$ the error can spread to all $n$ qubits.
In the circuit model, each $\ee^{-\ii h_{ij}\delta}$ Trotter step requires at least one two-qubit gate. So a single error can be spread throughout the quantum computer after simulating time-evolution for time as short as $\delta n$. However, if a two-qubit interaction $\ee^{-\ii h_{ij}\delta}$ is implemented directly, one would intuitively expect it to only ``spread the error'' by a small amount $\delta$ for each such time-step. Thus we might expect it to take time $O(n)$ before the error can propagate to all $n$ qubits -- a factor of $1/\delta$ improvement. Another way of viewing this is that, in the circuit model, the Lieb-Robinson velocity~\cite{Lieb1972} at which effects propagate in the system is always $O(1)$, regardless of what unitary dynamics is being implemented by the overall circuit.
In contrast, the Trotterized Hamiltonian evolution has the same Lieb-Robinson velocity as the dynamics being simulated: $O(1/\delta)$ in the same units.
The Fermi-Hubbard model is believed to capture, in a simplified toy model, key aspects of high-temperature superconductors, which are still less well understood theoretically than their low-temperature brethren. Its Hamiltonian is given by a combination of on-site and hopping terms: \begin{align}\label{eq:FH-H-intro}
H_{\text{FH}} &\coloneqq \sum_{i=1}^{N} h_{\text{on-site}}^{(i)} \ + \sum_{i<j,\sigma} h_{\text{hopping}}^{(i,j,\sigma)} \\ &\coloneqq \os \sum_{i=1}^{N} a^{\dagger}_{i \uparrow} a_{i \uparrow} a^{\dagger}_{i \downarrow} a_{i \downarrow} + \hop \sum_{i<j,\sigma} \left(a^{\dagger}_{i \sigma} a_{j \sigma} + a^{\dagger}_{j\sigma} a_{i \sigma}\right), \nonumber \end{align} describing electrons with spin $\sigma = \uparrow$ or $\downarrow$ hopping between neighbouring sites on a lattice, with an on-site interaction between opposite-spin electrons at the same site. The Fermi-Hubbard model serves as a particularly good test-bed for NISQ Hamiltonian simulation algorithms for a number of reasons~\cite[Sec.~IV]{Bauer2020}, beyond the fact that it is a scientifically interesting model in its own right: \begin{enumerate} \item
The Fermi-Hubbard model was a famous, well-studied condensed matter model long before quantum computing was proposed. It is therefore less open to the criticism of being an artificial problem tailored to fit the algorithm. \item
It is a fermionic model, which poses particular challenges for simulation on (qubit-based) quantum computers. Most of the proposed practical applications of quantum simulation involve fermionic systems, either in quantum chemistry or materials science. So achieving quantum simulation of fermionic models is an important step on the path to practical quantum computing applications. \item
There have been over three decades of research developing ever-more-sophisticated classical simulations of Fermi-Hubbard-model physics~\cite{LeBlanc2015}.
This gives clear benchmarks against which to compare quantum algorithms.
And it reduces the likelihood that efficient classical algorithms exist but have simply not been discovered for lack of interest or effort devoted to the model. \end{enumerate}
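To make the Hamiltonian in \cref{eq:FH-H-intro} concrete, here is a minimal sketch of a $2$-site Fermi-Hubbard Hamiltonian built as a dense matrix via the textbook Jordan-Wigner transformation (not the encodings used later in this paper), with illustrative values standing in for $\os$ and $\hop$:

```python
import numpy as np

def jw_annihilation(j, n):
    """Jordan-Wigner annihilation operator for mode j out of n fermionic modes."""
    Z = np.diag([1.0, -1.0])
    s = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation
    op = np.array([[1.0]])
    for k in range(n):
        op = np.kron(op, Z if k < j else (s if k == j else np.eye(2)))
    return op

# 2 sites x 2 spins = 4 modes, ordered (0,up), (0,dn), (1,up), (1,dn)
n_modes = 4
a = [jw_annihilation(j, n_modes) for j in range(n_modes)]
num = [ai.conj().T @ ai for ai in a]          # number operators a^dag a

U, t = 4.0, 1.0                                # illustrative on-site / hopping strengths
H = U * (num[0] @ num[1] + num[2] @ num[3])    # on-site terms
for (i, j) in [(0, 2), (1, 3)]:                # hopping within each spin sector
    H += t * (a[i].conj().T @ a[j] + a[j].conj().T @ a[i])

N_tot = sum(num)   # total fermion number operator; [H, N_tot] = 0
```

The resulting $16\times 16$ matrix is Hermitian and commutes with the total fermion number operator, the property used later to bound Trotter errors within a fixed fermion-number sector.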
The state-of-the-art quantum circuit-model algorithm for simulating the time dynamics of the 2D Fermi-Hubbard model on an $8\times 8$ lattice requires $\approx 10^{7}$ Toffoli gates~\cite[Sec.~C:~Tb.~2]{Kivlichan2019}. This includes the overhead for fault-tolerance, which is necessary for the algorithm to achieve reasonable precision with the gate fidelities available in current and near-term hardware. But it also incorporates phase estimation, which contributes significantly to the gate count. Thus, although this result is indicative of the scale required for standard circuit-model Hamiltonian simulation, a direct comparison with pure time-dynamics simulation would be unfair.
To establish a fair benchmark, using available Trotter error bounds from the literature~\cite{Childs2017} with the best previous choice of fermion encoding in the literature~\cite{Verstraete2005}, we calculate that one could achieve a Fermi-Hubbard time-dynamics simulation on a $5\times 5$ square lattice, up to time $T=7$ and to within $10\%$ accuracy, using $50$ qubits and $1,243,586$ standard two-qubit gates. This estimate assumes the effects of decoherence and errors in the circuit can be neglected, which is certainly over-optimistic.
Our results rely on developing more sophisticated techniques for synthesising many-body interactions out of the underlying one- and two-qubit interactions available in the quantum hardware (see Results). This gives us access to $\ee^{-\ii h_{ij}\delta}$ for more general interactions $h_{ij}$. We then quantify the type of gains discussed here under two precisely defined error models, which correspond to different assumptions about the hardware. By using the aforementioned techniques to synthesise local Trotter steps, exploiting a recent fermion encoding specifically designed for this type of algorithm~\cite{DK}, deriving tighter error bounds on the higher-order Trotter expansions that account for all constant factors, and carefully analysing analytically and numerically the impact and rate of spread of errors in the resulting algorithm, we improve on this by multiple orders of magnitude even in the presence of decoherence. For example, we show that a $5\times 5$ Fermi-Hubbard time-dynamics simulation up to time $T=7$ can be performed to $10\%$ accuracy in what we refer to as a per-gate error model with $\approx 50$ qubits and the equivalent of circuit depth $72,308$. This is a conservative estimate, based on analytic Trotter error bounds that we derive in this paper. Using numerical extrapolation of Trotter errors, a circuit depth of $3,209$ can be reached. In the second error model, which we refer to as a per-time error model, we prove rigorously that the same simulation is achievable in a circuit-depth-equivalent run-time of $1,686$; numerical error computations bring this down to $259$. In the per-time model, for some parameter regimes we are also able to exploit the inherent partial error-detection properties of local fermionic encodings to enable error mitigation strategies to reduce the resource cost.
This brings Hamiltonian simulation, previously beyond reach of current hardware for non-trivial examples, significantly closer to being feasible in the NISQ era. \section*{Results and Discussion} \subsection*{Circuit Error Models} We consider two error models for quantum computation in this work. The first assumes that noise occurs at a constant rate per gate, independent of the time it takes to implement that gate. This is the standard error model in quantum computation theory, in which the cost of a computation is proportional to its circuit depth. We refer to this as the per-gate error model. The second assumes that noise occurs at a constant rate per unit time. This is the traditional model of errors in physics, where dissipative noise is more commonly modelled by continuous-time master equations. In this model, errors accumulate in proportion to the time the interactions in the system are switched on, i.e.\ with the total pulse lengths. We refer to this as the per-time error model. We emphasise that these error models are not fundamentally about execution time, but about the error budget required to execute a particular circuit. While it is clear that deeper circuits experience more decoherence, how much each gate contributes to it can be analysed from two different perspectives. The two error models we study correspond to two different models of how noise scales in quantum hardware.
Which of these more accurately models errors in practice is hardware-dependent. For example, in NMR experiments, the per-time model is common~\cite{Khaneja_2001,Khaneja_2002,Khaneja2005}. The per-time model is not without basis in more recent quantum hardware, too. Recent work has developed and experimentally tested duration-scaled two-qubit gates using Qiskit Pulse and IBM devices~\cite{ibm-earnest2021,ibm-stenger2021}. In~\cite{ibm-stenger2021} the authors experimentally observe braiding of Majorana zero modes using an IBM device and parameterised two-qubit gates. They also find a relationship between relative gate errors and the duration of these parameterised gates, which is further validated in~\cite{ibm-earnest2021}. The authors of~\cite{ibm-earnest2021} explicitly attribute the reduction in error -- seen using these duration-scaled gates in place of CNOT gates -- to the shorter schedules of the scaled gates relative to the coherence time.
Nonetheless, the standard per-gate error model is also very relevant to current quantum hardware. Therefore, throughout this paper we carry out full error analyses of all our algorithms in both of these error models.
Both of these error models are idealisations. Both are reasonable from a theoretical perspective and supported by certain experiments. Analysing both error models allows different algorithm implementations to be compared fairly under different error regimes. In particular, analysing both of these error models gives a more stringent test of new techniques than considering only the ``standard error model'' of quantum computation, which corresponds to the per gate model.
We show that in both error models, significant gains can be achieved using our new techniques.
In our analysis, for simplicity we treat single-qubit gates as a free resource in both error models. There are three reasons for making this simplification. Firstly, single-qubit gates can typically be implemented with an order of magnitude higher fidelity in hardware, so they contribute significantly less to the error budget than two-qubit gates. Secondly, they do not propagate errors to the same extent as two-qubit gates (cf.\ the convention of counting only T gates in studies of fault-tolerant quantum computation). Thirdly, any quantum circuit can be decomposed with at most a single layer of single-qubit gates between each layer of two-qubit gates. Thus including single-qubit gates in the per-gate error model changes the absolute numbers by a constant factor $\leq 2$ in the worst case. Nor does it significantly affect comparisons between different algorithm designs. This is particularly true of product-formula simulation algorithms, which are composed of the same layers of gates repeated over and over.
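The two cost measures can be stated in a few lines. The sketch below (with a hypothetical pulse sequence, not one from the paper) computes the per-gate cost as circuit depth and the per-time cost as the total pulse length, with single-qubit gates excluded as free:

```python
from math import pi

# Hypothetical two-qubit pulse sequence for one synthesised Trotter step:
# two fixed conjugation pulses of length pi/4 plus two short pulses of
# length delta. Single-qubit gates are treated as free, as in the text.
delta = 0.01
sequence = [("ZZ", pi / 4), ("ZZ", delta), ("ZZ", pi / 4), ("ZZ", delta)]

cost_per_gate = len(sequence)                  # circuit depth: 4 two-qubit gates
cost_per_time = sum(t for _, t in sequence)    # total pulse length: pi/2 + 2*delta
```

The contrast is immediate: under the per-gate measure the short $\delta$ pulses cost as much as the fixed $\pi/4$ pulses, while under the per-time measure they are nearly free.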
Additionally, there is a benefit to utilising our synthesis techniques
regardless of error model. Decomposing the simulation into gates of the form $\ee^{-\ii \op h_{ij}\delta}$ using these methods allows us to exploit the underlying error detection properties of fermionic encodings, as explained in Supplementary Methods and demonstrated in \Cref{fig:cost-delta-5x5} (see below).
\Cref{tab:NumericCost-intro-per-gate,tab:NumericCost-intro-per-time} compare these results, showing how the combination of sub-circuit algorithms, recent advances in fermion encodings (VC~$=$~Verstraete-Cirac encoding~\cite{Verstraete2005}, compact~$=$~encoding reported in~\cite{DK}), and tighter Trotter bounds (both analytic and numeric) successively reduce the run-time of the simulation algorithm. ($\cost=$ circuit-depth for per-gate error model, or sum of pulse lengths for per-time error model.)
\subsection*{Synthesis of Encoded Fermi-Hubbard Hamiltonian Trotter Layers}\label{sec:fermion-encoding-main}
To simulate fermionic systems on a quantum computer, one must encode the fermionic Fock space into qubits. There are many encodings in the literature~\cite{Jordan1928} but we confine our analysis to two: the Verstraete-Cirac (VC) encoding~\cite{Verstraete2005}, and the compact encoding recently introduced in~\cite{DK}. We have selected these two encodings as they minimise the maximum Pauli weight of the encoded interactions, which is a key factor in the efficiency of Trotter-based algorithms and of our sub-circuit techniques: weight-$4$ (VC) and weight-$3$ (compact), respectively. By comparison, the classic Jordan-Wigner transformation~\cite{Jordan1928} results in a maximum Pauli weight that scales as $\BigO\left(L\right)$ with the lattice size $L$; the Bravyi-Kitaev encoding~\cite{Bravyi2002} has interaction terms of weight $\BigO(\log L)$; and the Bravyi-Kitaev superfast encoding~\cite{Bravyi2002} results in weight-8 interactions.
Under the compact encoding, the fermionic operators in \cref{eq:FH-H-intro} are mapped to operators on qubits arranged on two stacked square grids of qubits (one corresponding to the spin up, and one to the spin down sector, as shown in \cref{fig:intro-onsite}), augmented by a face-centred ancilla in a checkerboard pattern, with an enumeration explained in \cref{fig:intro-ordering}.
The on-site, horizontal and vertical local terms in the Fermi-Hubbard Hamiltonian \cref{eq:FH-H-intro} are mapped under this encoding to qubit operators as follows: \begin{align}
\op h_{\text{on-site}}^{(i)} &\rightarrow \frac{\os}{4} \left(\id - Z_{i \uparrow}\right)\left(\id - Z_{i \downarrow}\right) \\
\op h_{\text{hopping,hor}}^{(i,j,\sigma)} &\rightarrow \frac{\hop}{2}\left(
X_{i,\sigma}X_{j,\sigma}Y_{f_{ij}',\sigma}+Y_{i,\sigma}Y_{j,\sigma}Y_{f_{ij}',\sigma}\right) \\
\op h_{\text{hopping,vert}}^{(i,j,\sigma)} &\rightarrow \frac{\hop}{2}
(-1)^{g(i,j)}\left( X_{i,\sigma}X_{j,\sigma}X_{f_{ij}',\sigma}+Y_{i,\sigma}Y_{j,\sigma}X_{f_{ij}',\sigma}\right), \end{align} where qubit $f'_{ij}$ is the face-centered ancilla closest to vertex $(i,j)$, and $g(i,j)$ indicates an associated sign choice in the encoding, as explained in~\cite{DK}.
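The encoded on-site term has a simple interpretation: since the qubit number operator is $(\id - Z)/2$, the operator $\frac{\os}{4}(\id - Z_{i\uparrow})(\id - Z_{i\downarrow})$ is exactly $\os\, n_{i\uparrow} n_{i\downarrow}$. A quick numpy check of this identity, with an illustrative value standing in for $\os$:

```python
import numpy as np

Z = np.diag([1.0, -1.0])
I = np.eye(2)
n = (I - Z) / 2          # qubit occupation-number operator |1><1|

os = 4.0                 # illustrative on-site strength
encoded = (os / 4) * np.kron(I - Z, I - Z)   # (os/4)(1 - Z_up)(1 - Z_dn)
target = os * np.kron(n, n)                  # os * n_up * n_dn
# the encoded on-site term equals the product of number operators exactly
```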
If the VC encoding is used, the fermionic operators in \cref{eq:FH-H-intro} are mapped to qubits arranged on two stacked square grids of qubits (again with one corresponding to spin up, the other to spin down, as shown in \cref{fig:intro-VC}), augmented by an ancilla qubit for each data qubit and with an enumeration explained in \cref{fig:intro-ordering-VC}. In this case the on-site, horizontal and vertical local terms are mapped to \begin{align}
\op h_{\text{on-site}}^{(i)} &\rightarrow \frac{\os}{4} \left(\id - Z_{i \uparrow}\right)\left(\id - Z_{i \downarrow}\right) \\
\op h_{\text{hopping,hor}}^{(i,j,\sigma)} &\rightarrow \frac{\hop}{2}\left(
X_{i,\sigma}Z_{i',\sigma}X_{j,\sigma}+Y_{i,\sigma}Z_{i',\sigma}Y_{j,\sigma} \right)\\
\op h_{\text{hopping,vert}}^{(i,j,\sigma)} &\rightarrow \frac{\hop}{2}\left(
X_{i,\sigma}Y_{i',\sigma}Y_{j,\sigma}X_{j',\sigma}-Y_{i,\sigma}Y_{i',\sigma}X_{j,\sigma}X_{j',\sigma}\right), \end{align} where $i'$ indicates the ancilla qubit associated with qubit $i$.
In both encodings, we partition the resulting Hamiltonian $H$ -- a sum of on-site, horizontal and vertical qubit interaction terms on the augmented square lattice -- into $M=5$ layers $\op H=H_1 + H_2 + H_3 + H_4 + H_5$, as shown in \cref{fig:intro-compact,fig:intro-VC}. The Hamiltonians for each layer do not commute with one another. Each layer is a sum of mutually-commuting local terms acting on disjoint subsets of the qubits. For instance, $H_5=\sum_i h_\mathrm{on-site}^{(i)}$ is a sum of all the two-local, non-overlapping, on-site terms.
The Trotter product formula $\mathcal P_p(T,\delta)$ comprises local unitaries, corresponding to the local interaction terms that make up the five Hamiltonian layers into which we decomposed the Fermi-Hubbard Hamiltonian.
In order to implement each step of the product formula as a sequence of gates, we would ideally simply execute all two-, three- (for the compact encoding), or four-local (for the VC encoding) interactions necessary for the time evolution directly within the quantum computer. Yet this is an unrealistic assumption, as the quantum device is more likely to feature a very restricted set of one- and two-qubit interactions.
As outlined in the introduction, we assume in our model that arbitrary single qubit unitaries are available, and that we have access to the continuous family of gates $\{\exp(\ii t Z\otimes Z)\}$ for arbitrary values of $t$. In contrast, the gates we wish to implement all have the form $\exp(\ii \delta Z^{\otimes k})$ for $k=3$ or~$4$. (Or different products of $k$ Pauli operators, but these are all equivalent up to local unitaries, which we are assuming are available.)
It is well known that a unitary describing the evolution under any $k$-local Pauli interaction can be straightforwardly decomposed into CNOT gates and single qubit rotations~\cite[Sec.~4.7.3]{Nielsen_and_Chuang}. For instance, we can decompose evolution under a $3$-local Pauli as \begin{align}\label{eq:conj-method}
\ee^{\ii \delta Z_1Z_2Z_3} &= \ee^{-\ii \pi/4 Z_1 X_2} \ee^{\ii \delta Y_2 Z_3} \ee^{\ii \pi/4 Z_1 X_2}, \end{align} where we then further decompose the remaining $2$-local evolutions in \cref{eq:conj-method} using the exact same method as \begin{align}
\ee^{\ii \delta Y_2 Z_3} &= \ee^{-\ii \pi/4 Y_2 X_3} \ee^{ \ii \delta Y_3 } \ee^{\ii \pi/4 Y_2 X_3}. \end{align} This effectively corresponds to decomposing $\ee^{\ii \delta Z_1Z_2Z_3}$ into CNOT gates and single qubit rotations, as $\ee^{\pm \ii \pi/4 Z_i Z_j}$ is equivalent to a CNOT gate up to single qubit rotations. To generate evolution under any $k$-local Pauli interaction we can simply iterate this procedure, which yields a constant overhead $\propto 2 (k-1) \times \pi/4$.
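The conjugation identity in \cref{eq:conj-method} is exact for every $\delta$, which is easy to verify with dense matrices. A minimal sketch (arbitrary $\delta$ chosen for illustration):

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

Z1Z2Z3 = kron3(Z, Z, Z)
Z1X2 = kron3(Z, X, I2)
Y2Z3 = kron3(I2, Y, Z)

delta = 0.37   # arbitrary pulse time; the identity holds for all delta
lhs = expm(1j * delta * Z1Z2Z3)
rhs = (expm(-1j * (np.pi / 4) * Z1X2)
       @ expm(1j * delta * Y2Z3)
       @ expm(1j * (np.pi / 4) * Z1X2))
# lhs and rhs agree to machine precision
```

The identity works because $Z_1X_2$ anticommutes with $Y_2Z_3$, so the $\pi/4$ conjugation rotates $Y_2Z_3$ exactly onto $Z_1Z_2Z_3$.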
Can we do better? Even optimized variants of Solovay-Kitaev to decompose multi-qubit gates~-- beyond introducing an additional error -- generally yield gate sequences multiple orders of magnitude larger, as e.g.\ demonstrated in~\cite{Pham2013}. While more recent results conjecture that an arbitrary three-qubit gate can be implemented with at most eight $\BigO(1)$ two-local entangling gates~\cite{Martinez2016}, this is still worse than the conjugation method for the particular case of a rank one Pauli interaction that we are concerned with.
For small pulse times $\delta$, the existing decompositions are thus inadequate, as they all introduce a gate cost $\Omega(1) + \BigO(\delta)$. In this paper, we develop a series of analytic pulse sequence identities (see \cref{lem:Weight-Increase-Depth4-ap,lem:Weight-Increase-Depth5-ap} in Supplementary Methods), which allow us to decompose the three-qubit and four-qubit gates approximately as shown in \cref{eq:intro-gatedec-3,eq:intro-gatedec-4} below. The approximations in \cref{eq:intro-gatedec-3,eq:intro-gatedec-4} are shown to first order in $\delta$; exact analytic expressions, which also hold for $\delta \geq 1$, are derived in Supplementary Methods. The constants in \cref{eq:intro-gatedec-4} have been rounded to the third significant figure. \begin{align}
\ee^{\ii \delta Z_1Z_2Z_3} &\approx \ee^{-\ii \sqrt{\delta/2} Z_1X_2} \ee^{\ii \sqrt{\delta/2} Y_2 Z_3} \nonumber \\
&\hspace{11mm} \times \ee^{\ii \sqrt{\delta/2} Z_1X_2} \ee^{-\ii \sqrt{\delta/2} Y_2 Z_3},
\label{eq:intro-gatedec-3} \\
\ee^{\ii \delta Z_1Z_2Z_3Z_4} &\approx \ee^{- \ii 0.22 \delta^{2/3} Y_2 Z_3 Z_4} \ee^{- \ii 1.13 \delta^{1/3} Z_1 X_2} \ee^{ \ii 0.44 \delta^{2/3} Y_2 Z_3 Z_4} \nonumber \\
&\hspace{11mm} \times \ee^{\ii 1.13 \delta^{1/3} Z_1 X_2} \ee^{- \ii 0.22 \delta^{2/3} Y_2 Z_3 Z_4}.
\label{eq:intro-gatedec-4} \end{align} In practice we use the exact versions of these decompositions, which remain valid for $\delta \geq 1$. The depth-5 decomposition in \cref{eq:intro-gatedec-4} yields the shortest overall run-time when breaking down higher-weight interactions recursively, assuming that the remaining 3-local gates are decomposed using an expression similar to \cref{eq:intro-gatedec-3}. We also carry out numerical studies indicating that these decompositions are likely to be optimal (see Supplementary Methods).
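The first-order approximation in \cref{eq:intro-gatedec-3} can also be checked numerically. The sketch below builds the four-pulse sequence with pulse areas $\sqrt{\delta/2}$ and compares it to the target $\ee^{\ii\delta Z_1Z_2Z_3}$; consistent with a group-commutator construction, the error shrinks like $\delta^{3/2}$, i.e.\ faster than $\delta$ itself:

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kron3 = lambda a, b, c: np.kron(np.kron(a, b), c)

Z1Z2Z3 = kron3(Z, Z, Z)
Z1X2 = kron3(Z, X, I2)
Y2Z3 = kron3(I2, Y, Z)

def approx(delta):
    """Four-pulse sequence of eq. (intro-gatedec-3), pulse areas sqrt(delta/2)."""
    s = np.sqrt(delta / 2)
    return (expm(-1j * s * Z1X2) @ expm(1j * s * Y2Z3)
            @ expm(1j * s * Z1X2) @ expm(-1j * s * Y2Z3))

err = lambda d: np.linalg.norm(approx(d) - expm(1j * d * Z1Z2Z3), 2)
# err(delta) shrinks like delta^{3/2}: reducing delta 10x cuts the error ~30x
```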
These circuit decompositions allow us to establish that, for a weight-$k$ interaction term, there exists a pulse sequence which implements the evolution operator for time $\delta$ with an overhead $\propto \delta^{1/(k-1)}$, achieved by recursively applying these decompositions. While we have only made reference to interactions of the form $Z^{\otimes k}$, this is sufficient as we can obtain any other interaction term of the same weight, for example $ZXZ$, by conjugating $Z^{\otimes k}$ with single-qubit rotations ($H$ and $SHS^\dagger$ in this example, where $H$ is a Hadamard and $S$ a phase gate).
For the interactions required for our Fermi-Hubbard simulation, the overhead of decomposing short-pulse gates with this analytic decomposition is $\propto \sqrt\delta$ for any weight-3 interaction term, and $\propto \delta^{1/3}$ for weight-4. The asymptotic run-time is thus $\BigO(T \delta_0^{w})$ for $w=-1/2$ (compact encoding) or $w=-2/3$ (VC encoding). We show the exact scaling for $k=3$ and $k=4$ in \cref{fig:intro-gatedec-overhead}, as compared to the standard conjugation method.
\begin{figure*}
\caption{Gate decomposition cost $\cost$ for decomposing $\exp(\ii \delta Z^{\otimes 3})$ (left) and $\exp(\ii \delta Z^{\otimes 4})$ (right), for $\delta\in[10^{-5},1]$. The lower dashed line is the cost obtained by the conjugation decomposition, $\pi/2+\delta$. The upper dashed line is the cost for a once-nested conjugation, $\pi+\delta$.
Decomposing the four-local gate with an outer depth-5 and an inner depth-4 formula according to \cref{eq:intro-gatedec-3,eq:intro-gatedec-4} only saturates the lower conjugation cost bound.
}
\label{fig:intro-gatedec-overhead}
\end{figure*}
\subsection*{Tighter Error Bounds for Trotter Product Formulas}\label{sec:Trotter-bounds-main}
There are by now a number of sophisticated quantum algorithms for Hamiltonian simulation, achieving optimal asymptotic scaling in some or all parameters~\cite{Berry2015,Low2016,Berry2014}. Recently, the authors of~\cite{Childs2019} showed that previous error bounds on Trotter product formulae were overly pessimistic. They derived new bounds showing that the older, simpler product-formula algorithms achieve almost the same asymptotic scaling as the more sophisticated algorithms.
For near-term hardware, achieving good asymptotic scaling is almost irrelevant; what matters is minimising the actual circuit depth for the particular target system being simulated. Similarly, in the NISQ regime we do not have the necessary resources to implement full active error-correction and fault-tolerance. But we can still consider ways of minimising the output error probability for the specific computation being carried out. Simple product-formula algorithms allow good control of error propagation in the absence of active error-correction and fault-tolerance. Furthermore, combining product-formula algorithms with our circuit decompositions allows us to exploit the error detection properties of fermionic encodings. We can use this to relax the effective noise rates required for accurate simulations, especially if we are willing to allow the simulation to include some degree of simulated natural noise. This is explained further in the Supplementary Methods, and the results of this technique are shown in \cref{fig:cost-delta-5x5}.
For these reasons, we choose to implement the time evolution operator $U(T) \coloneqq \exp(-\ii T H)$ by employing Trotter product formulae $U(T) =: \P_p(\delta)^{T/\delta} + \R_p(T,\delta)$. Here, $\R_p\left(T,\delta\right)$ denotes the error term remaining from the approximate decomposition into a product of individual terms, defined directly as $\R_p\left(T,\delta\right) \coloneqq U(T) - \P_p\left(\delta\right)^{T/\delta}$. This includes the simple first-order formula~\cite{Lloyd1996a} \begin{align}
\P_1\left(\delta\right)^{T/\delta} &\coloneqq \prod_{n=1}^{T/\delta} \prod_{i=1}^M \ee^{-\ii \op H_i \delta}, \\ \intertext{as well as higher-order variants~\cite{Suzuki1992,Suzuki1991,Childs2019}}
\P_2\left(\delta\right) &\coloneqq \prod_{j=1}^M \ee^{-\ii \op H_j \delta/2} \prod_{j=M}^1 \ee^{-\ii \op H_j \delta/2}, \\
\P_{2k}\left(\delta\right) &\coloneqq \P_{2k-2}\left(a_k \delta\right)^2\P_{2k-2}\left((1-4a_k) \delta\right) \P_{2k-2}\left(a_k \delta\right)^2 \end{align} for $k\in\field N$, where the coefficients are given by $a_k \coloneqq1/\left(4-4^{1/\left(2k-1\right)}\right)$. It is easy to see that, while for higher-order formulas not all pulse times equal $\delta$, they still asymptotically scale as $\Theta(\delta)$. The product formula $\P_p\left(\delta\right)^{T/\delta}$ then approximates a time-evolution under $U(\delta)^{T/\delta} \approx U(T)$, and it describes the sequence of local unitaries to be implemented as a quantum circuit.
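The recursion above translates directly into code. The following is a minimal dense-matrix sketch (toy layers and step size, not the paper's Hamiltonian) of $\P_1$, $\P_2$, and the recursive $\P_{2k}$ construction with $a_k = 1/(4-4^{1/(2k-1)})$, showing that higher-order formulas give a smaller single-run error:

```python
import numpy as np
from scipy.linalg import expm

def product_formula(layers, delta, p):
    """Suzuki product formula P_p(delta) for Hamiltonian layers H_1..H_M."""
    d = layers[0].shape[0]
    if p == 1:
        out = np.eye(d, dtype=complex)
        for Hj in layers:
            out = out @ expm(-1j * Hj * delta)
        return out
    if p == 2:
        half = [expm(-1j * Hj * delta / 2) for Hj in layers]
        out = np.eye(d, dtype=complex)
        for u in half + half[::-1]:   # forward sweep, then reverse sweep
            out = out @ u
        return out
    # recursion: P_2k = P_{2k-2}(a d)^2 P_{2k-2}((1-4a) d) P_{2k-2}(a d)^2
    k = p // 2
    a = 1.0 / (4.0 - 4.0 ** (1.0 / (2 * k - 1)))
    A = product_formula(layers, a * delta, p - 2)
    B = product_formula(layers, (1 - 4 * a) * delta, p - 2)
    return A @ A @ B @ A @ A

# Two non-commuting toy layers on 2 qubits
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)
h1 = np.kron(Z, I2) + np.kron(I2, Z)
h2 = np.kron(X, X)

U = expm(-1j * (h1 + h2) * 0.05)
errs = {p: np.linalg.norm(product_formula([h1, h2], 0.05, p) - U, 2)
        for p in (1, 2, 4)}
# errs[4] < errs[2] < errs[1]: higher order, smaller single-step error
```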
Choosing the Trotter step $\delta$ small means that corrections for every factor in this formula come in at $\BigO \left(\delta^{p+1} \right)$ for $p\in\{1, 2k : k\in\field N\}$. Since we have to perform $T/\delta$ many rounds, the overall error scales roughly as $\BigO \left(T\delta^p \right)$. Yet this rough estimate is insufficient if we need to calculate the largest-possible $\delta$ for our Hamiltonian simulation.
The Hamiltonian dynamics remain entirely within one fermion number sector, as $H_\mathrm{FH}$ commutes with the total fermion number operator. Let $\Lambda$ denote the number of fermions present in the simulation, such that $\| H_i|_{\Lambda\ \text{fermions}} \| \le \Lambda$ as shown in
\cref{thm:norm-bound}. Let $M=5$ denote the number of non-commuting Trotter layers, and set $\epsilon_p(T,\delta) \coloneqq \| \mathcal R_p(T,\delta) \|$, and as shorthand $\epsilon_p(\delta) \coloneqq \epsilon_p(\delta,\delta)$, so that $\epsilon_p(T,\delta) = T/\delta \times \epsilon_p(\delta)$.
To obtain a bound on $\P_{p}\left(\delta\right)$, we apply the variation of constants formula~\cite[Th.~4.9]{RealAnalysis} to $\mathcal R_p(\delta)$, with the condition that $\P_{p}\left(0\right)=\id$, which always holds. As in~\cite[Sec.~3.2]{Childs2019}, for $\delta\ge0$, we obtain \begin{equation}\label{eq:integral-representation-main}
\P_{p}\left(\delta\right) = U\left(\delta\right) + \R_{p}\left(\delta\right) = \ee^{- \ii \delta \op H} + \int_{0}^{\delta} \ee^{- \ii \left(\delta-\tau\right)\op H} \op R_{p}\left(\tau\right) d\tau \end{equation} where the integrand $\op R_p\left(\tau\right)$ is defined as \begin{align}
\op R_{p}\left(\tau\right)\coloneqq\frac{d}{d\tau} \P_{p}\left(\tau\right) -\left(- \ii \op H\right) \P_{p}\left(\tau\right). \end{align} Now, if $\P_{p}\left(\delta\right)$ is accurate up to $p$\textsuperscript{th} order -- meaning that $\R_{p}\left(\delta\right) = \BigO\left(\delta^{p+1}\right)$ -- it holds that the integrand $\op R_{p}\left(\delta\right) = \BigO\left(\delta^p\right)$. It follows that its partial derivatives vanish at the origin: $\partial_\tau^j \op R_{p}\left(0\right) = 0$ for all $0\le j\le p-1$. For full details see \cref{rem:order-conditions-ap} and~\cite[Order Conditions]{Childs2019}.
Then, following~\cite{Childs2019}, we perform a Taylor expansion of $\op R_{p}\left(\tau\right)$ around $\tau=0$, simplifying the error bound $\epsilon_{p}(\delta) \equiv \| \mathcal R_{p}(\delta) \|$ to \begin{align}
\epsilon_{p}(\delta)
&=\left\| \int_{0}^{\delta} \ee^{- \ii \left(\delta-\tau\right)\op H} \op R_{p}\left(\tau\right) \dd\tau \right\|
\leq \int_{0}^{\delta} \| \op R_{p}\left(\tau\right) \| \dd\tau\\
&\le\int_0^\delta \bigg( \| \op R_{p}\left(0\right) \| + \| \op R'_{p}\left(0\right) \| \tau + \ldots + \\
&\hspace{15mm} \| \op R_{p}^{\left(p-1\right)}\left(0\right)\| \frac{\tau^{p-1}}{\left(p-1\right)!}
+ \| \op S_{p}\left(\tau, 0\right) \|
\bigg) \dd\tau . \end{align} Here we use the aforementioned order condition that the partial derivatives satisfy $\partial_\tau^j \op R_{p}\left(0\right) = 0$ for all $0\le j\le p-1$, so that every term vanishes except the Taylor remainder $\op S_{p}\left(\tau, 0\right)$, which collects the $p$\textsuperscript{th} and higher orders. Thus \begin{align}
\epsilon_{p}(\delta) &\le \int_0^\delta \| \op S_{p}\left(\tau, 0\right) \| \dd \tau \nonumber \\
&= p \int_0^\delta \int_0^1 \left(1-x\right)^{p-1} \| \op R_{p}^{\left(p\right)}\left(x\tau\right)\| \frac{\tau^{p}}{p!} \dd x \dd\tau,
\label{eq:higher-trotter-error-1-intro} \end{align} where we used the integral representation for the Taylor remainder $\op S_{p}\left(\tau,0\right)$.
Motivated by this, we look for simple bounds on the $p$\textsuperscript{th} derivative of the integrand $\|\op R_{p}\left(\tau\right)\|$. At this point our work diverges from~\cite{Childs2019}: we focus on obtaining bounds on $\|\op R_{p}\left(\tau\right)\|$ with the tightest constants for NISQ-era system sizes, at the expense of optimality in the asymptotic system-size scaling. (See \cref{fig:bounds-comparison} and \cref{lem:trotter-tech1-ap,lem:R-bound-Comms-ap} in Supplementary Methods for details.) We derive the following explicit error bounds (see \cref{th:trotter-error-ap,cor:trotter-error-ap}): \begin{align}
\epsilon_p(\delta)
&\le \delta^{p+1} M^{p+1} \Lambda^{p+1} G_p \\ \intertext{where}
G_p &\coloneqq \begin{cases}
1 & p=1 \\
\displaystyle \frac{2}{\left(p+1\right)!} \left( \frac{10}3 \right)^{\left(p+1\right)\left(p/2-1\right)} & \text{$p=2k$, $k\ge 1$},
\end{cases}
\\ \intertext{and}
\epsilon_p\left(\delta\right)
&\le \frac{2 \delta^{p+1} M^{p+1} \Lambda^{p+1} }{\left(p+1\right)!} H_p^{p+1}\\ \intertext{where}
H_p &\coloneqq \prod_{i=1}^{p/2-1} \frac{ 4+4^{1/\left(2i+1\right)}}{\left| 4 - 4^{1/\left(2i+1\right)} \right| }. \end{align} The above expressions hold for generic Trotter formulae. Using \cref{lem:R-bound-Comms-ap} we can exploit commutation relations for the specific Hamiltonian at hand (whose structure determines $N$ and $n$, see Supplementary Methods). This yields the bound (see \cref{th:Trotter-Er-Commutator-ap}): \begin{align}\label{eq:Trotter-Er-Commutator-intro}
\epsilon_p \left(\delta \right)
&\le C_1 \frac{T \delta^{p}}{\left(p+1\right)!} + \nonumber\\
&\hspace{7mm} C_2 \frac{T}{\delta} \int_0^\delta p \int_0^1 \left(1-x\right)^{p-1} \frac{x \tau^{p+1}}{p!} \ee^{x \tau N B_p} \dd x \dd\tau \\ \intertext{where}
C_1 &\coloneqq \n p B_p^{2} \Lambda^{p-1} N \left(M H_p - B_p + B_p \left(\frac{N}{\Lambda}\right)\right)^{p-1} \times \nonumber \\
&\hspace{10mm}\left(\left(S_p M\right)^{2}-\left(S_p M\right)\right),\\[2mm]
C_2 &\coloneqq \n B_p^{2}\left( M H_p \Lambda\right)^{p} N \left(\left(S_p M\right)^{2}-\left(S_p M\right)\right),\\ \intertext{and}
B_p &\coloneqq \begin{cases}
1 & p=1 \\
\frac12 & p=2 \\
\frac12 \prod_{i=2}^k (1-4a_i) & \text{$p=2k$, $k\ge2$.}
\end{cases} \end{align} These analytic error bounds are then combined with a Taylor-of-Taylor method, by which we expand the Taylor coefficient $R_p^{(p)}$ in \cref{eq:higher-trotter-error-1-intro} itself in terms of a power series to some higher order $q > p$, with corresponding series coefficients $R_p^{(q)}$, and a corresponding remainder-of-remainder error term $\epsilon_{p,q+1}$. The tightest error expression we obtain is (see \cref{cor:taylor-error-bound-ap} in Supplementary Methods) \begin{equation}
\epsilon_p(\delta) \le \sum_{l=p}^q \frac{\delta^{l+1}\Lambda^{l+1}}{(l+1)!} f(p,M,l)
+ \epsilon_{p,q+1}(\delta), \end{equation} where the $f(p,M,l)$ are exactly-calculated coefficients (using a computer algebra package) that exploit cancellations between the $M$ non-commuting Trotter layers, for a product formula of order $p$ and series expansion order $l$ (given in \cref{tab:coefficients-ap}). The series' remainder $\epsilon_{p,q+1}$ therein is then derived from the analytic bounds in \cref{eq:Trotter-Er-Commutator-intro} (see Supplementary Methods for technical details).
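To get a concrete sense of the size of the prefactors appearing in these bounds, the constants $G_p$ and $H_p$ defined above can be evaluated directly. The following sketch (with illustrative helper names, not from the paper) computes them for the product-formula orders used later; $B_p$ is omitted since the Suzuki coefficients $a_i$ are defined only in the Supplementary Methods.

```python
import math

def G(p):
    # Prefactor of the generic bound eps_p <= delta^{p+1} M^{p+1} Lambda^{p+1} G_p:
    # G_1 = 1, and for even p = 2k, G_p = 2/(p+1)! * (10/3)^{(p+1)(p/2 - 1)}
    if p == 1:
        return 1.0
    assert p % 2 == 0 and p >= 2
    return 2.0 / math.factorial(p + 1) * (10.0 / 3.0) ** ((p + 1) * (p // 2 - 1))

def H(p):
    # H_p = prod_{i=1}^{p/2 - 1} (4 + 4^{1/(2i+1)}) / |4 - 4^{1/(2i+1)}|
    # (defined for even p; the empty product for p = 2 gives 1)
    return math.prod(
        (4 + 4 ** (1 / (2 * i + 1))) / abs(4 - 4 ** (1 / (2 * i + 1)))
        for i in range(1, p // 2)
    )
```

For example, $G_2 = 1/3$ while $G_4 = \tfrac{2}{120}(10/3)^5 \approx 6.86$; $H_2 = 1$ and $H_4 \approx 2.32$, so the $H_p$-based bound degrades much more slowly with the order $p$.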
Henceforth, we assume the tightest choice of $\epsilon_p(\delta)$ amongst all the derived error expressions and choices of $p\in\{1,2,4\}$.
In order to guarantee a target error bound $\epsilon_p(T,\delta)\le \epsilon_\mathrm{target}$, we invert these explicitly derived error bounds and obtain a maximum possible Trotter step $\delta_0 = \delta_0(\epsilon_\mathrm{target})$.
\subsection*{Benchmarking the Sub-Circuit-Model}\label{sec:benchmarking-sub-circuit-model}
How significant is the improvement from the measures set out in the previous sections, benchmarked against state-of-the-art results from the literature? A first comparison is via exact asymptotic bounds (which we derive in \cref{th:FH-optimum-dig,th:FH-optimum-analogue}), expressed in terms of the number of non-commuting Trotter layers $M$, fermion number $\Lambda$, simulation time $T$ and target error $\epsilon_\mathrm{target}$:
\noindent Standard Circuit Synthesis: \begin{align}
\cost\left(\mathcal P_p(\delta)^{T/\delta}\right) &= \BigO\left(M^{2 + \frac1p} \Lambda^{1 + \frac1p} T^{1+\frac1p} \epsilon^{-\frac1p}_\mathrm{target}\right), \\ \intertext{Sub-circuit Synthesis:}
\cost\left(\mathcal P_p(\delta)^{T/\delta}\right) &=\BigO\left(
M^{\frac32 + \frac1{2p}} \Lambda^{\frac12 +\frac1{2p}} T^{1 + \frac1{2p}} \epsilon^{-\frac1{2p}}_\mathrm{target}
\right). \end{align} Here we write $\cost$ for the ``run-time'' of the quantum circuits -- i.e.\ the sum of pulse times of all gates within the circuit. (See \cref{def:time-cost} for a detailed discussion of the cost model we employ.)
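The gap between these two scalings can be made concrete by evaluating both expressions with the hidden constants dropped (so only relative comparisons are meaningful); the function names below are illustrative only, taking $M=5$ non-commuting layers, $\Lambda=5$, $T=7$ and $\epsilon_\mathrm{target}=0.1$ as in the benchmark that follows.

```python
def standard_cost(M, lam, T, eps, p):
    # O(M^{2 + 1/p} Lambda^{1 + 1/p} T^{1 + 1/p} eps^{-1/p}), constants dropped
    return M ** (2 + 1 / p) * lam ** (1 + 1 / p) * T ** (1 + 1 / p) * eps ** (-1 / p)

def subcircuit_cost(M, lam, T, eps, p):
    # O(M^{3/2 + 1/(2p)} Lambda^{1/2 + 1/(2p)} T^{1 + 1/(2p)} eps^{-1/(2p)})
    return (M ** (1.5 + 1 / (2 * p)) * lam ** (0.5 + 1 / (2 * p))
            * T ** (1 + 1 / (2 * p)) * eps ** (-1 / (2 * p)))

# For M=5, Lambda=5, T=7, eps=0.1 and p=2, the sub-circuit scaling is
# smaller by more than an order of magnitude (up to the dropped constants).
```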
\begin{table}[]
\centering
\begin{tabular}{c c c c}
\toprule
\parbox[c][][c]{8em}{\centering Fermion encoding}
& \parbox[c][][c]{8em}{\centering Trotter bounds}
& \parbox[c][][c]{8em}{\centering Standard decomposition}
& \parbox[c][][c]{8em}{\centering Subcircuit decomposition} \\
\midrule
\multirow{3}{*}{VC}
&\cite[Prop.~F.4.]{Childs2017} & 1,243,586 & 977,103 \\
& analytic & 121,478 & 95,447 \\
& numeric & 5,391 & 4,236 \\
\midrule
\multirow{2}{*}{compact}
& analytic & 98,339 & 72,308 \\
& numeric & 4,364 & 3,209 \\
\bottomrule
\end{tabular}
\caption{Per-gate run-times. A comparison of the run-time $\cost$ for lattice size $L\times L$ with $L=5$, overall simulation time $T=7$ and target Trotter error $\epsilon_\mathrm{target} = 0.1$, with $\Lambda=5$ fermions and coupling strengths $|\os|, |\hop|\le r=1$.
Obtained by minimising over product formulas up to $4$\textsuperscript{th} order.
$\cost=$~circuit-depth for per-gate error model.
In either gate decomposition case---standard and sub-circuit---we count single-qubit rotations as a free resource, as explained in the Introduction; the value of $\cost$ depends only on the two-qubit gates/interactions. Two-qubit unitaries are counted at one unit of time per gate in the per-gate error model. Here compact and VC denote the choice of fermionic encoding.}
\label{tab:NumericCost-intro-per-gate} \end{table} \begin{table}[]
\centering
\begin{tabular}{c c c c}
\toprule
\parbox[c][][c]{8em}{\centering Fermion encoding}
& \parbox[c][][c]{8em}{\centering Trotter bounds}
& \parbox[c][][c]{8em}{\centering Standard decomposition}
& \parbox[c][][c]{8em}{\centering Subcircuit decomposition} \\
\midrule
\multirow{3}{*}{VC}
&\cite[Prop.~F.4.]{Childs2017} & 976,710 & 59,830\\
& analytic & 95,409 & 17,100 \\
& numeric & 4,234 & 1,669 \\
\midrule
\multirow{2}{*}{compact}
& analytic & 77,236 & 1,686 \\
& numeric & 3,428 & 259 \\
\bottomrule
\end{tabular}
\caption{Per-time run-times. A comparison of the run-time $\cost$ for lattice size $L\times L$ with $L=5$, overall simulation time $T=7$ and target Trotter error $\epsilon_\mathrm{target} = 0.1$, with $\Lambda=5$ fermions and coupling strengths $|\os|, |\hop|\le r=1$.
Obtained by minimising over product formulas up to $4$\textsuperscript{th} order.
$\cost=\cost(\P_p(\delta_0)^{T/\delta_0})$ for per-time error model.
In either gate decomposition case---standard and sub-circuit---we count single-qubit rotations as a free resource; the value of $\cost$ depends only on the two-qubit gates/interactions. Two-qubit unitaries are counted at their respective pulse lengths. Here compact and VC denote the choice of fermionic encoding.}
\label{tab:NumericCost-intro-per-time} \end{table}
Beyond asymptotic scaling, and in order to establish a more comprehensive benchmark that takes into account potentially large but hidden constant factors, we employ our tighter Trotter error bounds that account for all constant factors, and
concretely target a $5\times 5$ Fermi-Hubbard Hamiltonian for overall simulation time $T=7$ (which is roughly the Lieb-Robinson time required for the ``causality-cone'' to spread across the whole lattice, and for correlations to potentially build up between any pair of sites), in the sector of $\Lambda=5$ fermions, and coupling strengths $|\os|, |\hop| \le r=1$ as given in \cref{eq:FH-H-intro}. For this system, we choose the optimal Trotter product formula order $p$ that yields the lowest overall run-time, while still achieving a target error of $\epsilon_\mathrm{target} = 0.1$.
The results are given in \cref{tab:NumericCost-intro-per-time,tab:NumericCost-intro-per-gate}, where we emphasise that, in order to maintain a fair comparison, we always count single-qubit gates as a free resource, for the reasons discussed in the Introduction; two-qubit gates are either counted at one unit of time per gate in the per-gate error model (making the run-time equal the circuit depth), or at their pulse length in the per-time error model.
Our Trotter error bounds yield an order-of-magnitude improvement over~\cite[Prop.~F.4]{Childs2017}. Even for existing gate decompositions by conjugation, the recently published lower-weight compact encoding yields a small but significant improvement. The most striking advantage comes from utilising the sub-circuit sequence decompositions developed in this paper, in particular in conjunction with the lower-weight compact fermionic encoding.
Overall, the combination of Trotter error bounds, numerics, compact fermion encoding and sub-circuit-model algorithm design allows us to improve the run-time of the simulation algorithm from $976,\!710$ to $259$ -- an improvement of more than three orders of magnitude over that obtainable using the previous state-of-the-art methods, and a further improvement over results in the pre-existing literature~\cite{Kivlichan2019}.
\begin{figure}\label{fig:cost-delta-5x5}
\caption{Simulation cost vs.\ target simulation time for the $5\times5$ Fermi-Hubbard model, colored according to the depolarizing noise rate required to achieve the target error bound.}
\end{figure}
\subsection*{Sub-Circuit Algorithms on Noisy Hardware}\label{sec:noise-bounds-main}
As ours is a study of quantum simulation on near-term hardware, we cannot neglect decoherence errors that inevitably occur throughout the simulation. To address this concern, we assume an iid noise model described by the qubit depolarizing channel \begin{equation}\label{eq:intro-depol}
\mathcal N_q(\rho) = (1-q)\rho + \frac q3 \big( X \rho X + Y \rho Y + Z \rho Z \big) \end{equation} applied to each individual qubit in the circuit, and after each gate layer in the Trotter product formula, such that the bit, phase, and combined bit-phase-flip probability $q$ is proportional to the elapsed time of the preceding layer. Whilst this standard error model is simplistic, it is a surprisingly good match to the errors seen in some hardware~\cite{GoogleAI}.
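The channel in \cref{eq:intro-depol} is straightforward to apply to density matrices directly; the following sketch implements it for a single qubit and makes explicit that each Pauli error occurs with probability $q/3$.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, q):
    # N_q(rho) = (1-q) rho + (q/3)(X rho X + Y rho Y + Z rho Z)
    return (1 - q) * rho + (q / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
rho1 = depolarize(rho0, 0.3)
# X and Y errors flip |0> to |1>, Z leaves it invariant,
# so rho1 = diag(0.7 + 0.1, 0.2) and the trace is preserved.
```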
Within this setting, a simple analytic decoherence error bound can readily be derived (see Supplementary Methods) by calculating the probability that zero errors appear throughout the circuit. If $V$ denotes the volume of the circuit (defined as $\cost \times L^2$), then the depolarising noise parameter must satisfy $q < 1-(1-\epsilon_\mathrm{target})^{1/V}$ -- i.e.\ it needs to shrink exponentially quickly with the circuit's volume. We emphasise that this is likely a crude overestimate. As briefly discussed at the start, one of the major advantages of sub-circuit decompositions is that, under a short-pulse gate, an error propagates only weakly due to the reduced Lieb-Robinson velocity (discussed further in~\cite{Error-Mapping}).
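Inverting the zero-error-probability condition gives the largest tolerable depolarizing rate for a given circuit volume. A minimal sketch, where the sample volume $V = \cost \times L^2$ uses the illustrative values $\cost\approx 140$ and $L=5$ discussed later:

```python
def max_depolarizing_rate(eps_target, volume):
    # q must satisfy q < 1 - (1 - eps_target)^(1/V): the probability of zero
    # errors across the whole circuit volume must stay above 1 - eps_target
    return 1 - (1 - eps_target) ** (1 / volume)

# eps_target = 0.1 and V = 140 * 25 = 3500 gives q_max of roughly 3e-5,
# the same order as the q = 1e-5 regime considered below.
q_max = max_depolarizing_rate(0.1, 140 * 25)
```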
Yet irrespective of this overestimate, can we derive a tighter error bound by other means?
In~\cite{Error-Mapping}, the authors analyse how noise on the physical qubits translates to errors in the fermionic code space. To first order and in the compact encoding, all $\{X, Y, Z\}$ errors on the face qubits, and $\{X, Y\}$ errors on the vertex qubits, can be detected. $Z$ errors on the vertex qubits result in an undetectable error, as evident from the form of $\op h_\mathrm{on-site}$ in \cref{eq:h-onsite}. It is shown in~\cite[Sec.~3.2]{Error-Mapping} that this $Z$ error corresponds to fermionic phase noise in the simulated Fermi-Hubbard model.
It is therefore a natural extension to the notion of simulation to allow for some errors to occur, if they correspond to physical noise in the fermionic space. And indeed, as discussed more extensively in~\cite[Sec.~2.4]{Error-Mapping}, phase noise is a natural setting for many fermionic condensed matter systems coupled to a phonon bath~\cite{Ng2015,Kauch2020,Zhao2017,Melnikov2016,Openov2005,Fedichkin2004,Scully1993} and~\cite[Ch.~6.1\&eq.~6.17]{Wellington2014}.
How can we exploit the encoding's error mapping properties? Under the assumption that $X$, $Y$ and $Z$ errors occur uniformly across all qubits, as assumed in \cref{eq:intro-depol}, each Pauli error occurs with probability $q/3$. We further assume that we can measure all stabilizers (including a global parity operator) once at the end of the entire circuit, which can be done by dovetailing a negligible depth-$4$ circuit to the end of our simulation (see Supplementary Methods for more details). We then numerically simulate a stochastic noise model for the circuit derived from the aforementioned Trotter formula for a specific target error $\epsilon_\mathrm{target}$, for a Fermi-Hubbard Hamiltonian on an $L\times L$ lattice for $L\in\{3,5,10\}$.
Whenever an error occurs, we keep track of the syndrome violations it induces (including potential cancellations with previous syndromes), using results from~\cite{Error-Mapping} on how Pauli errors translate to error syndromes with respect to the fermion encoding's stabilizers (summarized in \cref{tab:error-mapping}). We then bin the resulting circuit runs into the following categories: \begin{enumerate}
\item detectable error: at least one syndrome remains triggered, even though some may have canceled throughout the simulation,
\item undetectable phase noise: no syndrome was ever violated, and the only errors are $Z$ errors on the vertex qubits, which map to fermionic phase noise,
\item undetectable non-phase noise: syndromes were at some point violated, but they all canceled, and
\item errors not happening in between Trotter layers: naturally, not all errors happen in between Trotter layers, so this category encompasses all those cases where errors happen in between gates in the gate decomposition. \end{enumerate}
This categorization allows us to calculate the maximum depolarizing noise parameter $q$ for which a simulation can be run for time $T=\lfloor \sqrt 2 L \rfloor$ with target Trotter error $\epsilon_\mathrm{t} \le \epsilon_\mathrm{target} \in \{ 1\%, 5\%, 10\% \}$, where we allow the resulting undetectable non-phase noise and the errors occurring in between Trotter layers to also saturate this error bound, i.e.\ $\epsilon_\mathrm{s}\le\epsilon_\mathrm{target}$. The overall error is thus guaranteed to stay below a propagated error probability of $(\epsilon_\mathrm{t}^2 + \epsilon_\mathrm{s}^2)^{1/2} \in \{ 1.5\%, 7.1\%, 15\% \}$, respectively.
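The quoted overall error levels follow from combining the Trotter and stochastic contributions in quadrature (with the resulting percentages rounded); a quick check:

```python
def propagated_error(eps_t, eps_s):
    # combined error bound (eps_t^2 + eps_s^2)^{1/2}
    return (eps_t ** 2 + eps_s ** 2) ** 0.5

# When both contributions saturate the same target, the combined bound
# is sqrt(2) * eps_target, e.g. ~1.4%, ~7.1%, ~14.1% for 1%, 5%, 10%.
levels = [propagated_error(e, e) for e in (0.01, 0.05, 0.10)]
```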
In order to achieve these decoherence error bounds, one needs to postselect ``good'' runs and discard ones where errors have occurred, as determined from the single final measurement of all stabilizers of the compact encoding. The required overhead due to the postselected runs is mild, and shown in \cref{fig:postsel-probs}.
We plot the resulting simulation cost vs.\ target simulation time in \cref{fig:cost-delta-5x5} and \cref{fig:cost-numeric-delta-ignore-pulse-lengths,fig:cost-analytic-delta-ignore-pulse-lengths,fig:cost-numeric-delta,fig:cost-analytic-delta},
where we color the graphs according to the depolarizing noise rate required to achieve the target error bound. For instance, in the tightest per-time error model (bottom right plot in \cref{fig:cost-delta-5x5}), a depolarizing noise parameter $q=10^{-5}$ allows simulating a $5\times5$ FH Hamiltonian for time $T\approx 5$ while satisfying a $15\%$ error bound, at a circuit-depth-equivalent of $\cost\approx 140$---or for time $T\approx 2.5$ with a $7.1\%$ error bound, at $\cost\approx 70$.
In this work, we have derived a method for designing quantum algorithms ``one level below'' the circuit model, by designing analytic sub-circuit identities to decompose the algorithm into. As a concrete example, we applied these techniques to the task of simulating time-dynamics of the spin Fermi-Hubbard Hamiltonian on a square lattice. Together with Trotter product formulae error bounds applied to the recent compact fermionic encoding, we estimate these techniques provide a three orders of magnitude reduction in circuit-depth-equivalent. The authors of \cite{Childs2019} have recently extended their work on error bounds in \cite{Childsnew2019}. We have not yet incorporated their new bounds into our analysis, and this may give further improvements over our analytic error bounds.
Naturally, any real world implementation on actual quantum hardware will allow and require further optimizations; for instance, all errors displayed within this paper are in terms of operator norm, which indicates the worst-case error deviation for any simulation. However, when simulating time dynamics starting from a specific initial configuration and a distinct final measurement setup, a lower error rate results. We have accounted for this in a crude way, by analysing simulation of the Fermi-Hubbard model dynamics with initial states of bounded fermion number. But the error bounds -- even the numerical ones -- are certainly pessimistic for any specific computation. Furthermore, while we already utilize numerical simulations of Trotter errors, more sophisticated techniques such as Richardson extrapolation for varying Trotter step sizes might show promise in improving our findings further.
It is conceivable that other algorithms that require small unitary rotations will similarly benefit from designing the algorithms ``one level below'' the circuit model. Standard circuit decompositions of many interesting quantum algorithms will remain unfeasible on real hardware for some time to come. Whereas our sub-circuit-model algorithms, with their shorter overall run-time requirements and lower error-propagation even in the absence of error correction, potentially bring these algorithms and applications within reach of near-term NISQ hardware.
\subsection*{Author Contributions} The authors L. C., J. B. and T. C. contributed equally to this work.
\subsection*{Competing Interests} The authors declare no competing interests.
\subsection*{Code Availability} The code to support the findings in this work is available upon request from the authors.
\appendix
\part*{Supplementary Figures}
\begin{figure}
\caption{Qubit numbering.}
\caption{Hopping terms in $\op H_3$.}
\caption{Hopping terms in $\op H_4$.}
\caption{On-site terms in $\op H_5$.}
\caption{Hopping terms in $\op H_1$.}
\caption{Hopping terms in $\op H_2$.}
\caption{Compact encoding: qubit enumeration, and five mutually non-commuting interaction layers.}
\label{fig:intro-ordering}
\label{fig:intro-onsite}
\label{fig:intro-compact}
\end{figure}
\begin{figure}
\caption{Qubit numbering.}
\caption{Hopping terms in $\op H_3$.}
\caption{Hopping terms in $\op H_4$.}
\caption{On-site terms in $\op H_5$.}
\caption{Hopping terms in $\op H_1$.}
\caption{Hopping terms in $\op H_2$.}
\caption{VC encoding: qubit enumeration, and five mutually non-commuting interaction layers.}
\label{fig:intro-ordering-VC}
\label{fig:intro-onsite-vc}
\label{fig:intro-VC}
\end{figure}
\part*{Supplementary Methods} \section*{The Sub-Circuit Model and Error Models}\label{sec:cost} In this section we introduce the sub-circuit model, which we employ throughout this paper. We analyse it under two different error models; these respective models are applicable to NISQ devices with differing capabilities. Before defining these we introduce the mathematical definition of the sub-circuit model. \begin{definition}[Sub-circuit Model]\label{def:short-pulse-circuit}
Given a set of qubits $Q$, a set $I \subseteq Q \times Q$ specifying which pairs of qubits may interact, a fixed two qubit interaction Hamiltonian $\op h$, and a minimum switching time $t_{\text{min}}$, a sub-circuit pulse-sequence $\op C$ is a quantum circuit of $L$ pairs of alternating layer types $\op C = \prod_l^L \op U_l \op V_l $ with $\op U_l = \prod_{i \in Q} \op u_i^l$ being a layer of arbitrary single qubit unitary gates, and $\op V_l = \prod_{ ij \in \Gamma_l} \op v_{ij} \left(t_{ij}^l \right)$ being a layer of non-overlapping, variable time, two-qubit unitary gates:
\begin{align}
\op v_{ij}(t)=\ee^{\ii t \op h_{ij}}
\end{align} with the set $\Gamma_l \subseteq I$ containing no overlapping pairs of qubits, and $t \geq t_{min}$. Throughout this paper we assume $\op h_{ij} = Z_iZ_j$. As all $\sigma_i \sigma_j$ are equivalent to $Z_i Z_j$ up to single qubit rotations this can be left implicit and so we take $\op h_{ij} =\sigma_i \sigma_j$. \end{definition}
The traditional quantum circuit model measures its run-time in layer count. This also applies in the sub-circuit-model. \begin{definition}[Circuit Depth] \label{def:cost-circuit depth} Under a per-gate error model the cost of a sub-circuit pulse-sequence $\op C$ is defined as \begin{align} \cost(\op C) := L, \end{align} or simply the circuit depth. \end{definition}
However, unlike the traditional quantum circuit model, the sub-circuit-model also allows for a different run-time metric for any given circuit $\op C$. Depending on the details of the underlying hardware, it can be appropriate to measure run-time as the total physical duration of the two-qubit interaction layers. This is justified for many implementations: for example, superconducting qubits have interaction time scales of $\sim 50\text{--}500\,\text{ns}$ \cite{kjaergaard2019}, while the single-qubit energy spacing is on the order of $\sim 5\,\text{GHz}$, which gives a time scale for single-qubit gates of $\sim 0.2\,\text{ns}$. \begin{definition}[Run-time] \label{def:time-cost} The physical run-time of a sub-circuit pulse-sequence $\op C$ is defined as \begin{align} \cost(\op C) := \sum_l^L \max_{ij \in \Gamma_l}\left(t_{ij}^l\right). \end{align} The run-time is normalised to the physical interaction strength, so that $\vert h\vert = 1$. \end{definition}
For both run-time and circuit depth, we assume single-qubit layers contribute a negligible amount to the total time duration of the circuit. Standard gates can then be costed according to either metric, as long as they are written in terms of a sub-circuit pulse-sequence. For example, according to \Cref{def:time-cost} a CNOT gate has $\cost = \pi/4$, as it is equivalent to $\ee^{-\ii \frac{\pi}{4} ZZ}$ up to single-qubit rotations.
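The run-time metric of \cref{def:time-cost} is easy to mechanise; the sketch below (the data structure is illustrative, not from the paper) costs each two-qubit layer by its longest pulse, reproducing the $\cost=\pi/4$ value for a CNOT.

```python
import math

def runtime(layers):
    # layers: list of {(i, j): pulse_time} dicts, one per two-qubit layer;
    # single-qubit layers are free, so each layer costs its longest pulse
    return sum(max(times.values()) for times in layers)

# A CNOT equals e^{-i (pi/4) Z⊗Z} up to single-qubit rotations:
cnot_cost = runtime([{(0, 1): math.pi / 4}])

# Two gates running in parallel within one layer cost only the longer pulse:
parallel_cost = runtime([{(0, 1): 0.1, (2, 3): 0.3}])
```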
How does this second cost model affect the time complexity of algorithms? I.e., given a circuit $\op C$, does $\cost(\op C)$ ever deviate so significantly from $\op C$'s gate depth count that the circuit would have to be placed in a lower complexity class? Under reasonable assumptions on the shortest pulse time, we prove in the following that this is not the case. \begin{remark}\label{rem:cost-model-overhead} Let $\{ \op C_x \}_{x\in\field N}$ be a family of quantum circuits taking inputs of size $x$. Denote by $m(x)$ the circuit depth of $\op C_x$, and let $\delta_0 = \delta_0(x):=\min_{l\in[L]}\max_{ij\in\Gamma_l}(t_{ij}^l)$ be the shortest layer pulse time present in the circuit $\op C_x$, according to \cref{def:time-cost}. Then if \begin{align}
\delta_0(x) = \begin{cases}
\BigO(1) \\
1/\poly x \\
1/\exp(\poly x) \\
\end{cases}
\quad\Longrightarrow\quad
m(x) = \cost(\op C) \times \begin{cases}
\BigO(1) \\
\BigO(\poly x) \\
\BigO(\exp(\poly x))
\end{cases} \end{align} Furthermore $\cost(\op C) = \BigO(m(x))$. \end{remark} \begin{proof} Clear since $m(x) = \BigO(\cost(\op C)/\delta_0(x))$. The second claim is trivial. \end{proof} An immediate consequence of using the cost model metric and the overhead of counting gates from \cref{rem:cost-model-overhead} can be summarised as follows. \begin{corollary} Let $\epsilon>0$. Any family of short-pulse circuits $\{ \op C_x \}$ with $\delta_0(x) = \BigO(1)$ can be approximated by a family of circuits $\{ \tilde{\op C}_x \}$ made up of gates from a fixed universal gate set, such that $\tilde{\op C}_x$ approximates $\op C_x$ in operator norm to precision $\epsilon$ in time $\BigO(\log^4(\cost(\op C_x)/\epsilon))$. \end{corollary} \begin{proof} By \cref{rem:cost-model-overhead}, there are $m(x) = \cost(\op C)\times\BigO(1)$ layers of gates in $\op C$; now apply Solovay-Kitaev to compile it to a universal target gate set. \end{proof} Indeed, we can take this further and show that complexity classes like BQP are invariant under an exchange of the two metrics ``circuit depth'' and ``$\cost$''; if e.g.\ $\delta_0(x) = 1/\poly x$, then again invoking Solovay-Kitaev lets one upper-bound and approximate any circuit while only introducing an at most poly-logarithmic overhead in circuit depth. However, a stronger result than this is already known, independent of any lower bound on pulse times, which we cite here for completeness. \begin{remark}[Poulin et al.\ \cite{Poulin2011}]\label{rem:bqp-invariant-under-cost} A computational model based on simulating a local Hamiltonian with arbitrarily-quickly varying local terms is equivalent to the standard circuit model. \end{remark}
\section*{Sub-Circuit Synthesis of Multi-Qubit Interactions}\label{ap:decomp-deets-ap} \subsection*{Analytic Pulse Sequence Identities} In this section we introduce the analytic pulse sequence identities we use to decompose local Trotter steps $\ee^{- \ii \delta \op h}$. Their recursive application allows us to establish that, for a $k$-qubit Pauli interaction $\op h$, there exists a sub-circuit pulse-sequence $C:=\prod_l^{L} U_l V_l$ which implements the evolution operator $\ee^{-\ii \delta \op h}$. Most importantly, for any target time $\delta \geq 0$ the run-time of that circuit is bounded as \begin{align}\label{eq:general-k-cost} \cost \left(C \right) \leq \mathcal{O}\left( \delta^{\frac{1}{k-1}}\right), \end{align} according to the notion of run-time established in \Cref{def:time-cost}.
As noted by \cite{Dur2007}, for $k=2^n +1$ with $n \in \mathbb{Z}$ this can be done approximately using a well-known identity from Lie theory. For Hermitian operators $\op A$ and $\op B$ we have \begin{align} \ee^{-\ii t \op B}\ee^{-\ii t \op A}\ee^{\ii t \op B}\ee^{\ii t \op A} = \ee^{ t^2 \left[\op A, \op B \right]} + \mathcal{O}\left( t^3 \right). \end{align} We make this exact for all $t \in [0,2\pi]$ for anti-commuting Pauli interactions in \Cref{lem:Weight-Increase-Depth4-ap}, and use \Cref{lem:Weight-Increase-Depth5-ap} to extend it to all $k \in \mathbb{Z}$. \begin{lemma}[Depth 4 Decomposition]\label{lem:Weight-Increase-Depth4-ap}
Let $\op U(t) = \ee^{\ii t \op H}$ be the time-evolution operator for time $t$ under a Hamiltonian $\op H = \tfrac{1}{2\ii}[ \op h_1, \op h_2]$, where
$\op h_1$ and $\op h_2$ anti-commute and both square to identity.
For $0 \leq t \leq \pi/2$ or $\pi \leq t \leq 3\pi/2$, $\op U(t)$ can be decomposed as
\begin{align}
\op U(t) =\ee^{\ii t_1 \op h_1} \ee^{\ii t_2 \op h_2} \ee^{\ii t_2 \op h_1} \ee^{\ii t_1 \op h_2}
\end{align}
with pulse times $t_1,t_2$ given by
\begin{align}
t_1 &=\frac{1}{2} \tan^{-1}\left(\frac{1}{\sin (t)+\cos (t)}, \; \pm\frac{\sqrt{\sin (2 t)}}{\sin (t)+\cos (t)}\right)+\pi c\\
t_2 &=\frac{1}{2} \tan^{-1}\left(\cos (t)-\sin (t), \; \mp\sqrt{\sin (2 t)}\right)+\pi c,
\end{align}
where $c \in \mathbb{Z}$, and corresponding signs are taken in the two expressions.
For $\pi/2 \leq t \leq \pi$ or $3\pi/2 \leq t \leq 2 \pi$, $\op U(t)$ can be decomposed as
\begin{align}
\op U(t) & =\ee^{\ii t_1 \op h_1} \ee^{\ii t_2 \op h_2} \ee^{-\ii t_2 \op h_1} \ee^{-\ii t_1 \op h_2}
\end{align}
with pulse times $t_1,t_2$ given by
\begin{align}
t_1 &=\frac{1}{2} \tan^{-1}\left(\frac{1}{\cos (t)-\sin (t)}, \; \pm\frac{\sqrt{-\sin (2 t)}}{\cos (t)-\sin (t)}\right)+\pi c\\
t_2 &=\frac{1}{2} \tan^{-1}\left(\sin (t)+\cos (t), \; \pm\sqrt{-\sin (2 t)}\right)+\pi c,
\end{align}
where $c \in \mathbb{Z}$, and corresponding signs are taken in the two expressions. \end{lemma} \begin{proof}
Follows similarly to \cref{lem:Rank-Increase-Depth3-ap}. \end{proof}
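The approximate group-commutator identity quoted above can be sanity-checked numerically for a pair of anti-commuting Paulis: with $\op A = X$ and $\op B = Y$ we have $[\op A,\op B]=2\ii Z$, and the deviation should shrink as $\mathcal O(t^3)$. A sketch for verification only, using the closed form $\ee^{\ii\theta P}=\cos(\theta)\,I+\ii\sin(\theta)\,P$ valid whenever $P^2=I$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def exp_pauli(theta, P):
    # e^{i theta P} = cos(theta) I + i sin(theta) P, since P^2 = I
    return np.cos(theta) * I2 + 1j * np.sin(theta) * P

def group_commutator(t, A, B):
    # e^{-itB} e^{-itA} e^{itB} e^{itA}
    return (exp_pauli(-t, B) @ exp_pauli(-t, A)
            @ exp_pauli(t, B) @ exp_pauli(t, A))

def deviation(t):
    # distance to the target e^{t^2 [X,Y]} = e^{2i t^2 Z}
    return np.linalg.norm(group_commutator(t, X, Y) - exp_pauli(2 * t ** 2, Z))

# deviation(t) scales like t^3: halving t cuts the error roughly 8-fold.
```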
\begin{lemma}[Depth 5 Decomposition]\label{lem:Weight-Increase-Depth5-ap}
Let $\op U(t)$ be the time-evolution operator for time $t$ under a Hamiltonian $\op H = \tfrac{1}{2\ii} [ \op h_1,\op h_2]$.
If $\op h_1$ and $\op h_2$ anti-commute and both square to identity, then $\op U(t)$ can be decomposed as
\begin{align}
\op U(t) & =\ee^{\ii t_1 \op h_2} \ee^{- \ii \phi \op h_1} \ee^{\ii t_2 \op h_2} \ee^{ \ii \phi \op h_1} \ee^{\ii t_1 \op h_2}
\end{align}
with pulse times $t_1,t_2,\phi$ given by
\begin{align}
t_1 &= \frac{1}{2} \tan^{-1}\left(\pm\sqrt{2} \sec (t) \csc (2 \phi ) \sqrt{\cos (2 t)-\cos (4 \phi )}, \; -2 \tan (t) \cot (2 \phi )\right)+\pi c \\
t_2 &= \tan^{-1}\left(\pm\frac{\csc (2 \phi ) \sqrt{\cos (2 t)-\cos (4 \phi )}}{\sqrt{2}}, \; \sin (t) \csc (2 \phi )\right)+2 \pi c
\end{align}
where $c \in \mathbb{Z}$, and corresponding signs are taken in the two expressions. \end{lemma} \begin{proof}
Follows similarly to \cref{lem:Rank-Increase-Depth3-ap}. \end{proof}
\subsection*{Pulse-Time Bounds on Analytic Decompositions} In our later analysis we apply these methods to the interactions in the Fermi-Hubbard Hamiltonian. Depending on the fermionic encoding used, these interaction terms are at most $3$-local or $4$-local. \Cref{fig:circuits} depicts exactly how \cref{lem:Weight-Increase-Depth4-ap,lem:Weight-Increase-Depth5-ap} are used to decompose 3-local and 4-local interactions of the form $Z^{\otimes k}$.
We establish bounds on the run-time (\cref{def:time-cost}) of these circuits. The exact run-time of the circuit $C_a$ -- defined in \cref{fig:circuits-a} -- follows directly from \cref{def:time-cost} as \begin{align}
\cost(C_a(t)) = 2 |t^a_1(t)| + 2 |t^a_2(t)|. \end{align} We have labelled the functions $t_i(t)$ from \cref{lem:Weight-Increase-Depth4-ap} as $t^a_i(t)$ in order to distinguish them from those given in \cref{lem:Weight-Increase-Depth5-ap}, which are now labelled $t^b_i(t)$. This is to avoid confusion when using both identities in the same circuit, such as in circuit $C_b$, where we use \cref{lem:Weight-Increase-Depth4-ap} to decompose the remaining 3-local gates.
The exact run-time of the circuit $C_b$ -- defined in \cref{fig:circuits-b} -- is left in terms of $\cost(C_a)$ and again follows directly from \cref{def:time-cost} as \begin{align}
\cost(C_b(t,\phi)) = 2 \cost(C_a(t^b_1(t,\phi))) + \cost(C_a(t^b_2(t,\phi))) +2 |\phi| . \end{align} \Cref{lem:Weight-Increase-Depth4-param-bounds-ap,lem:Weight-Increase-Depth5-param-bounds-ap} bound these two functions and determine the optimal choice of the free pulse-time $\phi$. Inserting these bounds into the above $\cost$ expressions gives \begin{equation}\label{eq:decomp-bounds} \cost(C(t)) \leq \begin{cases}
2 \sqrt{2 t} & C=C_a \\
7 \sqrt[3]{t} & C=C_b
\end{cases}. \end{equation} As $Z^{\otimes k}$ is equivalent to any $k$-local Pauli term up to single-qubit rotations, these bounds hold for any three- or four-local Pauli interaction.
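These closed-form bounds make the sub-linear run-time scaling of \cref{eq:general-k-cost} explicit. A sketch with illustrative function names, showing that for a small Trotter step the bounds sit far below the $\mathcal O(1)$ cost of a conjugation-based decomposition into full-angle CNOTs:

```python
def cost_bound_3local(t):
    # run-time bound for circuit C_a implementing e^{i t Z⊗Z⊗Z},
    # valid for 0 <= t <= pi/2
    return 2 * (2 * t) ** 0.5

def cost_bound_4local(t):
    # run-time bound for circuit C_b implementing e^{i t Z⊗Z⊗Z⊗Z},
    # valid for 0 <= t <= t_c ~ 0.33
    return 7 * t ** (1 / 3)

# At t = 1e-3, cost_bound_3local ~ 0.09 and cost_bound_4local = 0.7,
# compared with the fixed cost 2 * (pi/4) ~ 1.57 of the two CNOTs in a
# conjugation-based circuit for the 3-local term.
```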
\begin{figure}
\caption{Circuit $C_a$. The pulse times $t^a_i(t)$ are defined in \cref{lem:Weight-Increase-Depth4-ap} and the run-time is bounded as $\cost(C_a(t)) \leq 2 \sqrt{2 t}$. }
\label{fig:circuits-a}
\caption{Circuit $C_b$. The pulse times $t^b_i(t)$ are defined in \cref{lem:Weight-Increase-Depth5-ap}. The three local gates are further decomposed using \cref{lem:Weight-Increase-Depth4-ap}, though this is not shown here. If $\phi = \left(\frac{1}{4}(3+2\sqrt{2}) t\right)^{1/3}$ then the run-time is bounded as $\cost(C_b(t)) \leq 7 \sqrt[3]{t}$.}
\label{fig:circuits-b}
\caption{The definitions of circuits $C_a(t)$ and $C_b(t)$ -- which respectively generate evolution under a three and four local Pauli interaction for target time $t \geq 0$.}
\label{fig:circuits}
\end{figure}
\begin{lemma}\label{lem:Weight-Increase-Depth4-param-bounds-ap}
Let $\op H$ be as in \cref{lem:Weight-Increase-Depth4-ap}.
For $0 \leq t \leq \pi/2$, the pulse times $t_i$ in \cref{lem:Weight-Increase-Depth4-ap} can be bounded by
\begin{align}
|t_1| & \leq \sqrt{\frac{t}{2}}\\
|t_2| + |t_1| & \leq \sqrt{2t}.
\end{align} \end{lemma} \begin{proof} Choosing the negative $t_1(t)$ and corresponding $t_2(t)$ solution from \cref{lem:Weight-Increase-Depth4-ap}
and Taylor expanding about $t=0$ gives
\begin{align}
t_1(t) &=
-\sqrt{\frac{t}{2}} + R_1(t)\\
t_2(t) &=
\sqrt{\frac{t}{2}} + R_2(t).
\end{align} Basic calculus shows that $t_1$ is always negative and $t_2$ is always positive for $0 \leq t \leq \pi/2$, thus \begin{align}
|t_2| + |t_1| = t_2 - t_1
= \sqrt{2t} + R_{12}(t). \end{align} Then it can be shown that the Taylor remainders $R_1$ and $R_{12}$ are positive and negative, respectively, giving the stated bounds. \end{proof}
\begin{lemma}\label{lem:Weight-Increase-Depth5-param-bounds-ap} Let $\op H$ be as in \cref{lem:Weight-Increase-Depth5-ap}. For $0 \leq t \leq t_c$ and $\phi = (c t)^{1/3}$, the pulse times $t_i$ in \Cref{lem:Weight-Increase-Depth5-ap} can be bounded by \begin{align}
2\sqrt{2|t_2|} + 4 \sqrt{2|t_1|} + 2 |\phi| & \leq 3 (6 + 4 \sqrt{2})^{1/3} t^{1/3} \\
&\leq 7 t^{1/3}, \end{align} where $c=\frac{1}{4}(3+2\sqrt{2})$ and $t_c \approx 0.33$. \end{lemma} \begin{proof} This follows similarly to \Cref{lem:Weight-Increase-Depth4-param-bounds-ap}. We choose the positive branch of the $\pm$ solutions for pulse times with $t_1(t)$ and $t_2(t)$ given in \Cref{lem:Weight-Increase-Depth5-ap},
and freely set $\phi=(ct)^{1/3}$ for some positive constant $c \in \mathbb{R}$. Within the range $0 \leq t \leq t_c$ we have real pulse times $t_1 \leq 0$ and $t_2 \geq 0$. We can then Taylor expand the following about $t=0$ to find \begin{align}
2\sqrt{2|t_2|} + 4 \sqrt{2|t_1|} + 2 |\phi| &= 2\sqrt{2t_2} + 4 \sqrt{-2t_1} + 2(ct)^{1/3} \\
&=\frac{2 \left(\sqrt{c}+\sqrt{2}+1\right)}{\sqrt[6]{c}} t^{1/3}+ R(t). \end{align} Choosing $c$ to minimise the first term in this expansion, and again showing that $R \leq 0$, leads to the stated result \begin{align}
2\sqrt{2|t_2|} + 4 \sqrt{2|t_1|} + 2 |\phi| & \leq 3 (6 + 4 \sqrt{2})^{1/3} t^{1/3}\\
&\leq 7 t^{1/3} \end{align} where $\phi = (c t)^{1/3}$ and $c=\frac{1}{4}(3+2\sqrt{2})$. This is valid only within the region $0 \leq t \leq t_c$, where $t_c \approx 0.33$. \end{proof} \begin{theorem}
For a set of qubits $Q$, a set $I \subseteq Q \times Q$ specifying which pairs of qubits may interact, and a fixed two qubit interaction Hamiltonian $h_{ij}$, if $H$ is a $k$-body Pauli Hamiltonian then the following holds:
For all $t$ there exists a quantum circuit of $L$ pairs of alternating layer types $\op C = \prod_l^L \op U_l \op V_l$, with $\op U_l = \prod_{i \in Q} \op u_i^l$ a layer of arbitrary single-qubit unitary gates, and $\op V_l = \prod_{ ij \in \Gamma_l} \op v_{ij} \left(t_{ij}^l \right)$ a layer of variable-time two-qubit unitary gates $\op v_{ij}(t)=\ee^{\ii t \op h_{ij}}$, where the set $\Gamma_l \subseteq I$ contains no overlapping pairs of qubits, such that $\op C = \ee^{\ii t H}$ and
\begin{align}
\cost(C) \leq \BigO \left(|t|^{\frac{1}{k-1}} \right),
\end{align}
where
\begin{align}
\cost(\op C) := \sum_l^L \max_{ij \in \Gamma_l}\left(t_{ij}^l\right).
\end{align} \end{theorem} \begin{proof} The proof of this claim follows from first noting that for any $t<0$ one can conjugate $\ee^{- \ii t H}$ by a single Pauli operator which anti-commutes with $H$ in order to obtain $\ee^{ \ii t H}$. Therefore, w.l.o.g.\ we can take $t>0$, as we have done up until now.
The sub-circuit $C$ which implements $\ee^{ \ii t H}$ is constructed recursively using the Depth $5$ decomposition. We note that the Depth $5$ decomposition has an important feature: the free choice of $\phi$ allows us to avoid incurring a fixed root overhead with every iterative application of this decomposition. That is, when using it to decompose any $\ee^{\ii t Z^{\otimes k}}$, we can always choose $h_1$ as a 2-local interaction and $h_2$ as a $(k-1)$-local interaction. We can choose $\phi \propto t^{\frac{1}{k-1}}$, and a similar analysis as in \Cref{lem:Weight-Increase-Depth5-param-bounds-ap} shows that this leaves the remaining pulse times as $t_i \propto t^{1-\frac{1}{k-1}}$. This can be iterated to decompose the remaining gates, all of the form of evolution under $(k-1)$-local interactions for times $\propto t^{1-\frac{1}{k-1}}$. At each iteration we choose $h_1$ as a 2-local interaction and $\phi \propto t^{\frac{1}{k-1}}$. Hence after $k-2$ iterations we will have established the claim that $ \cost(C) \leq \BigO \left(|t|^{\frac{1}{k-1}} \right)$. \end{proof}
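As a quick numerical sanity check of the constants appearing in \Cref{lem:Weight-Increase-Depth5-param-bounds-ap} (an illustrative sketch, not part of the proof), the following Python snippet confirms that $c=\frac{1}{4}(3+2\sqrt{2})$ minimises the leading-order prefactor $f(c) = 2(\sqrt{c}+\sqrt{2}+1)/c^{1/6}$, and that the minimum equals $3(6+4\sqrt{2})^{1/3} \approx 6.80 \leq 7$:

```python
import numpy as np

# Leading-order prefactor of the t^(1/3) term in the depth-5 lemma,
# as a function of the free constant c > 0.
def f(c):
    return 2.0 * (np.sqrt(c) + np.sqrt(2.0) + 1.0) / c ** (1.0 / 6.0)

c_star = 0.25 * (3.0 + 2.0 * np.sqrt(2.0))  # claimed minimiser c = (3 + 2*sqrt(2))/4

# A dense scan over a wide range confirms c_star is the minimiser.
grid = np.linspace(0.05, 20.0, 200001)
assert f(c_star) <= f(grid).min() + 1e-9
assert abs(grid[np.argmin(f(grid))] - c_star) < 1e-2

# The minimum value matches the closed form 3*(6 + 4*sqrt(2))^(1/3) <= 7.
val = 3.0 * (6.0 + 4.0 * np.sqrt(2.0)) ** (1.0 / 3.0)
assert abs(f(c_star) - val) < 1e-12
assert val < 7.0
```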
\subsection*{Optimality}\label{ap:pulse-sequence-optimality} An obvious question to ask at this point is whether the proposed decompositions are optimal, in the sense that they minimise the total run-time $\cost$ while reproducing the target gate $\op h$ exactly. A closely related question is then whether relaxing the condition that we want to simulate the target gate without any error allows us to reduce the scaling of $\cost$ with regards to the target time $\delta$.
In this section we perform a series of numerical studies which indicate that the exact decompositions described in this section are indeed optimal within some parameter bounds, and that relaxing the goal to approximate implementations gives no benefit.
The setup is precisely as outlined before: for $\op U_\mathrm{target}=\exp(\ii T Z^{\otimes k})$, for some locality $k>1$ and time $T>0$, we iterate over all possible gate sequences of width $k$ and length $n$, the set of which we call $U_{n,k}$. For each sequence $\op U \in U_{n,k}$, we perform a grid search over all parameter tuples $(t_1,\ldots,t_n)\in [-\pi/2,\pi/2]^n$ and $\delta\in[0,\pi/10]$, and calculate the pair $(\epsilon(\op U), \cost)$, where $\cost$ is given in \cref{def:time-cost}, and \begin{align}
\epsilon(\op U) := \left\| \op U - \op U_\mathrm{target} \right\|_2. \end{align} The results are binned into brackets over $(\delta,\cost) \in [\pi/10, n\pi / 2]$, and the minimum within each bracket is taken.
This procedure yields two outcomes: \begin{enumerate}
\item For each target time $\delta$ and each target error $\epsilon>0$, it yields the smallest $\cost$, depth $n$ circuit with error less than $\epsilon$, and
\item for each target time $\delta$ and each $\cost$, the smallest error possible with any depth $n$ gate decomposition and total pulse time less than $\cost$. \end{enumerate}
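To make the error metric concrete, the following sketch (assuming the spectral norm for $\|\cdot\|_2$ and the standard CNOT-conjugation identity $\ee^{\ii T Z^{\otimes 3}} = \mathrm{CNOT}_{12}\,\mathrm{CNOT}_{23}\,\ee^{\ii T Z_3}\,\mathrm{CNOT}_{23}\,\mathrm{CNOT}_{12}$) evaluates $\epsilon(\op U)$ for an exact depth-3 decomposition, which should vanish up to floating-point error:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

T = 0.37  # arbitrary target time

# Target gate exp(i T Z x Z x Z); the exponent is diagonal.
ZZZ = np.kron(np.kron(Z, Z), Z)
U_target = np.diag(np.exp(1j * T * np.diag(ZZZ)))

# Exact CNOT-conjugation decomposition: under conjugation,
# CNOT_23 maps Z_3 -> Z_2 Z_3 and CNOT_12 maps Z_2 -> Z_1 Z_2.
C12 = np.kron(CNOT, I2)
C23 = np.kron(I2, CNOT)
Ez3 = np.kron(np.kron(I2, I2), np.diag(np.exp(1j * T * np.array([1.0, -1.0]))))
U = C12 @ C23 @ Ez3 @ C23 @ C12

eps = np.linalg.norm(U - U_target, 2)  # spectral norm of the difference
assert eps < 1e-12
```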
This algorithm scales exponentially in both $k$ and $n$, and polynomially in the number of grid search subdivisions. The following optimisations were performed. \begin{enumerate}
\item We remove duplicate gate sequences under permutations of the qubits (since $\op U_\mathrm{target}$ is permutation symmetric).
\item We restrict ourselves to two-local Pauli gates, since any one-local gate can always be absorbed by conjugations.
\item We remove mirror-symmetric sequences (since Paulis are self-adjoint).
\item For $n>4$ we switch to performing a random sampling algorithm instead of grid search, since the number of grid points becomes too large. \end{enumerate}
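As a toy illustration of the symmetry reductions in items 1 and 3 (a hypothetical encoding of gate sequences, not the code used for the actual study), one can canonicalise each sequence under qubit permutations and mirror reversal and keep one representative per orbit:

```python
import itertools

PAULIS = "XYZ"
QUBIT_PAIRS = [(0, 1), (0, 2), (1, 2)]
PERMS = list(itertools.permutations(range(3)))  # S3 acting on qubit labels

def apply_perm(gate, perm):
    (p1, p2), (q1, q2) = gate
    a, b = perm[q1], perm[q2]
    if a > b:                        # keep qubit indices sorted,
        a, b, p1, p2 = b, a, p2, p1  # carrying the Paulis along
    return ((p1, p2), (a, b))

def canonical(seq):
    """Smallest representative of the orbit under qubit permutations and reversal."""
    variants = []
    for perm in PERMS:
        mapped = tuple(apply_perm(g, perm) for g in seq)
        variants.append(mapped)
        variants.append(mapped[::-1])  # mirror symmetry (Paulis are self-adjoint)
    return min(variants)

gates = [((p1, p2), qs) for p1 in PAULIS for p2 in PAULIS for qs in QUBIT_PAIRS]
full = list(itertools.product(gates, repeat=2))  # all depth-2 sequences on 3 qubits
reduced = {canonical(seq) for seq in full}
assert len(full) == 729
assert len(reduced) < len(full)  # the symmetry reduction shrinks the search space
```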
Results for $k=3$ and $n=3,4,5$ are plotted in \cref{fig:depth-3-4-numerics,fig:depth-5-numerics}. \begin{figure}
\caption{Numerical calculation of gate decomposition errors of the $\op U_\mathrm{target}=\exp(\ii T Z^{\otimes 3})$ gate, with a pulse sequence of depth 3 (left) and depth 4 (right).
Plotted in red are the optimal analytical decompositions given by CNOT conjugation and \cref{lem:Weight-Increase-Depth4-ap}, respectively.}
\label{fig:depth-3-4-numerics}
\end{figure} \begin{figure}
\caption{Numerical calculation of gate decomposition errors of the $\op U_\mathrm{target}=\exp(\ii T Z^{\otimes 3})$ gate, with a pulse sequence of depth 5.
Plotted in red is the optimal analytical decomposition given for a depth 4 sequence in \cref{lem:Weight-Increase-Depth4-ap}; the blue lines are an overlay of the optimal depth 4 sequences from \cref{fig:depth-3-4-numerics}.}
\label{fig:depth-5-numerics}
\end{figure} As can be seen (plotted as a red line), for $n=3$ the optimal zero-error decomposition has $\cost = \pi + \delta$ from CNOT conjugation. For $n=4$, the optimal decomposition is given by the implicitly-defined solution in \cref{lem:Weight-Increase-Depth4-ap}, with a $\cost \propto \sqrt{\delta}$ dependence. For the depth 5 sequences, the same optimality as for depth 4 appears to hold. In contrast to $n=3$ and $n=4$, there is now a zero-error solution for all $\cost$ greater than the optimum threshold.
\section*{Suzuki-Trotter Formulae Error Bounds}\label{ap:trotter-deets-ap} \subsection*{Existing Trotter Bounds} Trotter error bounds have seen a series of dramatic improvements in the past few years \cite{Childs2017, Childs2019, Childsnew2019}. However, among these recent improvements we could not find a bound exactly suited to our purpose.
We wanted bounds which took into account the commutation relations between interactions in the Hamiltonian, as we know this leads to tighter error bounds \cite{Childs2019,Childs2017}. However, we needed exact constants in the error bound when applied to $2$D lattice Hamiltonians, such as the $2$D Fermi-Hubbard model. For this reason we could not directly apply the results of \cite{Childs2019}, which explicitly obtains constants only for $1$D lattice Hamiltonians.
Additionally, we needed to be able to straightforwardly compute the bound for any higher order Trotter formula. This ruled out using the commutator bounds of \cite{Childs2017}, as they become difficult to compute at higher orders. Furthermore, these bounds require each Trotter layer to consist of a single interaction, meaning we would not be able to exploit the result of \cref{thm:norm-bound}.
We followed the notation and adapted the methods of \cite{Childs2019} to derive bounds that meet the above criteria. Additionally, we incorporated our own novel methods to tighten the bounds in \cref{cor:trotter-error-ap,cor:taylor-error-bound-ap} and \cref{thm:norm-bound}.
The authors of \cite{Childs2019} have recently extended their work further in \cite{Childsnew2019}. We have not yet examined whether these newer results can further tighten our analysis, though we are keen to do so in future work.
\subsection*{Hamiltonian Simulation by Trotterisation} In this section we derive our bounds for Trotter error. The standard approach to implementing time-evolution under a local Hamiltonian $\op H = \sum_i \op h_i$ on a quantum computer is to ``Trotterise'' the time evolution operator $\op U(T)=\ee^{-\ii \op H T}$. Assuming that the Hamiltonian breaks up into $M$ mutually non-commuting layers $\op H = \sum_{i=1}^M \op H_i$ -- i.e.\ such that $\forall i\neq j\, [ \op H_i, \op H_j ] \neq 0$ -- Trotterising in its basic form means expanding \begin{equation}\label{eq:trotter-basic-ap}
U(T) := \ee^{-\ii \op H T}
= \prod_{n=1}^{T/\delta} \prod_{i=1}^M \ee^{-\ii \op H_i \delta} + \R_1\left(T,\delta\right)
= \P_1\left(\delta\right)^{T/\delta} + \R_1\left(T,\delta\right) \end{equation} and then implementing the approximation $\P_1\left(\delta\right)^{T/\delta}$ as a quantum circuit. Here $\R_1\left(T,\delta\right) := U(T) - \P_1\left(\delta\right)^{T/\delta}$ denotes the error term remaining from the approximate decomposition into a product of individual terms, i.e.\ the difference to the exact evolution $U(T)$. For $M$ mutually non-commuting layers of interactions $\op H_i$, we must perform $M$ sequential layers per Trotter step.
\Cref{eq:trotter-basic-ap} is an example of a first-order product formula, and is derived from the Baker-Campbell-Hausdorff identity \begin{align}
\ee^{\op A + \op B} &= \ee^{\op A}\ee^{\op B}\ee^{-[ \op A, \op B]/2}\cdots
\quad\text{and}\quad
\ee^{\op A + \op B} = \ee^{\left(\delta \op A + \delta\op B\right)/\delta}
= \left[ \ee^{\delta \op A + \delta \op B} \right]^{1/\delta}. \end{align} Choosing $\delta$ small in \cref{eq:trotter-basic-ap} means that corrections for every factor in this formula come in at $\BigO \left(\delta^2 \right)$, i.e.\ in the form of a commutator, and since we have to perform $1/\delta$ many rounds of the sequence $\ee^{\delta\op A}\ee^{\delta\op B}$ the overall error scales roughly as $\BigO \left(\delta \right)$.
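This first-order scaling is easy to observe numerically. The sketch below (assuming two non-commuting single-qubit layers $\op H_1 = X$, $\op H_2 = Z$, with the exact exponential computed by eigendecomposition) checks that halving $\delta$ roughly halves the overall error:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def U(H, t):
    """exp(-i H t) via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def trotter1_error(T, delta):
    n = int(round(T / delta))
    step = U(Z, delta) @ U(X, delta)  # one first-order Trotter step
    return np.linalg.norm(U(X + Z, T) - np.linalg.matrix_power(step, n), 2)

T = 1.0
e1 = trotter1_error(T, 0.02)
e2 = trotter1_error(T, 0.01)
ratio = e1 / e2
assert 1.8 < ratio < 2.2  # halving delta roughly halves the first-order error
```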
Since its introduction in \cite{Lloyd1996a}, there have been a series of improvements, yielding higher-order expansions with more favourable error scaling. For a historical overview of the use of Suzuki-Trotter formulas in the context of Hamiltonian simulation, we direct the reader to the extensive overview given in \cite[sec.~2.2.1]{Yung2014}. In the following, we discuss the most recent developments for higher order product formulas, and analyse whether they yield an improved overall time and error scaling with respect to our introduced cost model.
To obtain higher-order expansions, Suzuki derived an iterative expression for product formulas in \cite{Suzuki1992,Suzuki1991}. For the $\left(2k\right)$\textsuperscript{th} order, it reads \cite{Childs2019} \begin{align}
\P_2\left(\delta\right) &:= \prod_{j=1}^M \ee^{-\ii \op H_j \delta/2} \prod_{j=M}^1 \ee^{-\ii \op H_j \delta/2}, \label{eq:P-2-ap} \\
\P_{2k}\left(\delta\right) &:= \P_{2k-2}\left(a_k \delta\right)^2\P_{2k-2}\left((1-4a_k) \delta\right) \P_{2k-2}\left(a_k \delta\right)^2, \label{eq:P-2k-ap} \end{align} where the coefficients are given by $a_k :=1/\left(4-4^{1/\left(2k-1\right)}\right)$. The product limits indicate in which order the product is to be taken. The terms in the product run from right to left, as gates in a circuit would be applied, so that $\prod_{j=1}^L \op A_j = \op A_L\cdots\op A_1$.
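The recursion in \cref{eq:P-2-ap,eq:P-2k-ap} is straightforward to implement and test. The following sketch (with two random Hermitian $4\times 4$ layers as a stand-in Hamiltonian, an assumption for illustration only) verifies the expected single-step error scaling $\BigO\left(\delta^{p+1}\right)$: halving $\delta$ should shrink the error by roughly $2^{p+1}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2
    return H / np.linalg.norm(H, 2)  # normalise to spectral norm 1

H1, H2 = rand_herm(4), rand_herm(4)

def U(H, t):
    """exp(-i H t) via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

def P(p, d):
    """Recursive Suzuki product formula for the two layers H1, H2."""
    if p == 2:
        # symmetric second-order formula
        return U(H2, d / 2) @ U(H1, d) @ U(H2, d / 2)
    a = 1.0 / (4.0 - 4.0 ** (1.0 / (p - 1)))  # a_k with 2k - 1 = p - 1
    A, B = P(p - 2, a * d), P(p - 2, (1 - 4 * a) * d)
    return A @ A @ B @ A @ A

def err(p, d):
    return np.linalg.norm(U(H1 + H2, d) - P(p, d), 2)

# Single-step error is O(d^{p+1}): halving d shrinks it by ~2^{p+1}.
r2 = err(2, 0.1) / err(2, 0.05)
r4 = err(4, 0.2) / err(4, 0.1)
assert 6 < r2 < 10   # ~ 2^3 = 8
assert 24 < r4 < 42  # ~ 2^5 = 32
```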
\subsection*{Error Analysis of Higher-Order Formulae}\label{subsec:trotter-error-ap} We need an expression for the error $\R_p\left(T,\delta\right)$ arising from approximating the exact evolution $U(T)$ by a $p$\textsuperscript{th} order product formula $\P_p\left(\delta\right)$ repeated $T/\delta$ times. As a first step, we bring the latter into the form: \begin{align}
\P_p\left(\delta\right) &:= \prod_{j=1}^{S} \P_{p,j} \left(\delta\right)=\P_{p,S}\left(\delta\right) \ldots \P_{p,2}\left(\delta\right) \P_{p,1}\left(\delta\right), \label{eq:higher-trotter-ap}\\
\P_{p,j}\left(\delta\right) &:=\prod_{i=1}^{M} \op U_{ij}\left(\delta\right)
\quad\text{where}\quad
\op U_{ij}\left(\delta\right) :=\ee^{-\ii \delta \tcoeff_{ji} \op H_i}.
\label{eq:higher-trotter-2-ap} \end{align} As before, $M$ denotes the number of non-commuting \emph{layers} of interactions in the local Hamiltonian. $S=S_p$ is the number of \emph{stages}; the number of $\P_{p,j}(\delta)$ in a $p$\textsuperscript{th} order decomposition from \cref{eq:P-2-ap} or \cref{eq:P-2k-ap}. Here we note that we count a single stage as either $\prod_{i=1}^{M} \op U_{ij}\left(\delta\right)$ or $\prod_{i=M}^{1} \op U_{ij}\left(\delta\right)$, so that a second order formula is composed of $2$ stages.
\begin{lemma}\label{rem:bounds-on-trotter-coefficients-ap} For a $p$\textsuperscript{th}-order decomposition with $p=1$ or $p=2k$, $k\ge 1$, we have $\sum_{j=1}^S \tcoeff_{ji}\left(p\right) = 1$ for all $i=1,\ldots,M$. Furthermore, the Trotter coefficients $\tcoeff_{ji}$ satisfy \begin{align}
\max_{ij} \{ |\tcoeff_{ji}| \} \le B_p \le \begin{cases}
1 & p=1 \\
\displaystyle \frac12 \left( \frac23 \right)^{k-1} & \text{$p=2k$, $k\ge1$}
\end{cases} \end{align} where \begin{align}
B_p := \begin{cases}
1 & p=1 \\
\frac12 & p=2 \\
\frac12 \prod_{i=2}^k (1-4a_i) & \text{$p=2k$, $k\ge2$.}
\end{cases} \end{align} \end{lemma}
\begin{proof} The first claim is obviously true for the first order formula in \cref{eq:trotter-basic-ap}. For higher orders, by \cite[Th.~3]{Childs2019} and \cref{eq:trotter-basic-ap}, the first derivative satisfies \begin{align}
\frac\partial{\partial x} \P_p\left(x\right)\bigg|_{x=0} = -\ii\sum_{i=1}^M \op H_i. \end{align} Similarly, from \cref{eq:P-2-ap,eq:P-2k-ap}, we have that \begin{align}
\frac\partial{\partial x}\P_p\left(x\right)\bigg|_{x=0} =
\frac\partial{\partial x}\prod_{j=1}^S \prod_{i=1}^M \op U_{ij}\left(x\right) \bigg|_{x=0} =-\ii \sum_{j=1}^S\sum_{i=1}^M \tcoeff_{ji}\op H_i. \end{align} Equating both expressions for the first derivative of $\P_p\left(x\right)$ at $x=0$ and realising that they have to hold for any $\op H_i$ yields the claim.
The second claim is again obviously true for a first order expansion, and follows immediately from \cref{eq:P-2-ap} for $p=2$.
Expanding \cref{eq:P-2k-ap} for $\P_{2k}\left(\delta\right)$ all the way down to a product of $\P_2$ terms, the argument of each of the resulting factors will be a product of $k-1$ terms of $a_{k'}$ or $1-4a_{k'}$ for $k'\le k$. We further note that for $k\ge2$, $|a_k|\le|1-4a_{k}|$, as well as $|a_k|\le 1/2$ and $|1-4a_{k}|\le 2/3$, which can be shown easily. The $\tcoeff_{ji}$ can thus be upper-bounded by $B_p$, which in turn is upper-bounded by $\left(1/2\right)\left(2/3\right)^{k-1}$ -- where the final factor of $\left(1/2\right)$ is obtained from the definition of $\P_2$. \end{proof}
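The coefficient properties in \cref{rem:bounds-on-trotter-coefficients-ap} can be checked directly by unrolling the recursion \cref{eq:P-2k-ap} into per-layer stage coefficients (an illustrative sketch; the ordering of stages is irrelevant for these checks):

```python
def stage_coeffs(p):
    """Per-layer stage coefficients tau_{ji}, one entry per stage, by unrolling eq. (P-2k)."""
    coeffs = [0.5, 0.5]  # P_2: two stages with coefficient 1/2 per layer
    for kk in range(2, p // 2 + 1):
        a = 1.0 / (4.0 - 4.0 ** (1.0 / (2 * kk - 1)))
        coeffs = ([a * c for c in coeffs] * 2
                  + [(1 - 4 * a) * c for c in coeffs]
                  + [a * c for c in coeffs] * 2)
    return coeffs

for p in (2, 4, 6, 8):
    c = stage_coeffs(p)
    k = p // 2
    assert abs(sum(c) - 1.0) < 1e-12          # coefficients per layer sum to 1
    bound = 0.5 * (2.0 / 3.0) ** (k - 1)      # claimed bound (1/2)(2/3)^(k-1)
    assert max(abs(x) for x in c) <= bound + 1e-12
```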
Since we are working with a fixed product formula order $p$ for the remainder of this section, we will drop the order subscript in the following and write $\P_p=\P$, $\R_p=\R$ for simplicity. Assuming $\| \op H_i \| \le \Lambda$ for all $i=1,\ldots,M$, and setting the error \begin{align}\label{eq:trotter-epsilon-ap}
\epsilon_p\left(T,\delta\right) := \| \R\left(T,\delta\right) \| = \| U(T) - \P\left(\delta\right)^{T/\delta} \|, \end{align} we can derive an expression for the $p$\textsuperscript{th} order error term. First, note that approximation errors in circuits accumulate at most linearly in \cref{eq:trotter-epsilon-ap}. Thus it suffices to analyse a single $\delta$ step of the approximation, i.e.\ $U\left(\delta\right) = \P\left(\delta\right) + \R\left(\delta,\delta\right)$. Then \begin{equation}\label{eq:epsilon-single-step-ap}
\epsilon_p\left(\delta\right) := \epsilon_p\left(\delta,\delta\right) = \| U\left(\delta\right) - \P\left(\delta\right) \| \end{equation} so that \begin{equation}
\epsilon_p\left(T,\delta\right) \le \frac{T}{\delta} \epsilon_p\left(\delta\right). \end{equation} We will denote $\epsilon_p\left(\delta\right)$ simply by $\epsilon$ in the following.
To obtain a bound on $\P\left(\delta\right)$, we apply the variation of constants formula with the condition that $\P\left(0\right)=I$, which always holds. As in \cite[sec.~3.2]{Childs2019}, for $\delta\ge0$, we obtain \begin{equation}\label{eq:integral-representation-ap}
\P\left(\delta\right) = U\left(\delta\right) + \R\left(\delta\right) = \ee^{- \ii \delta \op H} + \int_{0}^{\delta} \ee^{- \ii \left(\delta-\tau\right)\op H} \op R\left(\tau\right) d\tau \end{equation} where the integrand $\op R\left(\tau\right)$ is defined as \begin{align}
\op R\left(\tau\right):=\frac{d}{d\tau} \P\left(\tau\right) -\left(- \ii \op H\right) \P\left(\tau\right). \end{align}
Now, if $\P\left(\delta\right)$ is accurate up to $p$\textsuperscript{th} order -- meaning that $\R\left(\delta\right) = \BigO\left(\delta^{p+1}\right)$ -- it holds that the integrand $\op R\left(\delta\right) = \BigO\left(\delta^p\right)$. This allows us to restrict its partial derivatives, as the following shows.
\begin{lemma}\label{rem:order-conditions-ap}
For a product formula accurate up to $p$\textsuperscript{th} order -- i.e.\ for which $\op R\left(\delta\right)=\BigO\left(\delta^p\right)$ -- the partial derivatives $\partial_\tau^j \op R\left(0\right) = 0$ for all $0\le j\le p-1$. \end{lemma}
\begin{proof}
We note that $\op R\left(\delta\right)$ is analytic, which means that we can expand it as a Taylor series $\op R\left(\delta\right)=\sum_{j=0}^\infty \op a_j \delta^j$.
We proceed by induction.
If $\op a_0 \neq 0$, then clearly $\op R\left(0\right)\neq 0$, which contradicts the assumption that $\op R\left(\delta\right) = \BigO\left(\delta^p\right)$.
Now assume for induction that $\op a_j = 0$ for all $j < j'$, and suppose $\op a_{j'}\neq 0$ for some $j' \le p-1$.
Then
\begin{align}
\frac{\op R\left(\delta\right)}{\delta^{j'}} = \op a_{j'} + \sum_{i=1}^\infty \op a_{i+j'} \delta^i
\xrightarrow{\delta\rightarrow 0} \op a_{j'} \neq 0,
\end{align}
which contradicts that $\op R\left(\delta\right) = \BigO\left(\delta^p\right)$, since $j' < p$.
The claim follows. \end{proof}
Performing a Taylor expansion of $\op R\left(\tau\right)$ around $\tau=0$, the error bound $\epsilon$ given in \cref{eq:epsilon-single-step-ap} simplifies to \begin{align}
\epsilon &= \left\| \int_{0}^{\delta} \ee^{- \ii \left(\delta-\tau\right)\op H} \op R\left(\tau\right) \dd\tau \right\|
\leq \int_{0}^{\delta} \| \op R\left(\tau\right) \| \dd\tau\\
&=\int_0^\delta \left( \| \op R\left(0\right) \| + \| \op R'\left(0\right) \| \tau + \ldots + \| \op R^{\left(p-1\right)}\left(0\right)\| \frac{\tau^{p-1}}{\left(p-1\right)!}
+ \| \op S\left(\tau, 0\right) \|
\right) \dd\tau. \end{align} Further, by \cref{rem:order-conditions-ap}, all terms except the $p$\textsuperscript{th}-and-higher-order Taylor remainder $\op S\left(\tau, 0\right)$ vanish, so \begin{align}
\epsilon &\le \int_0^\delta \| \op S\left(\tau, 0\right) \| \dd \tau = p \int_0^\delta \int_0^1 \left(1-x\right)^{p-1} \| \op R^{\left(p\right)}\left(x\tau\right)\| \frac{\tau^{p}}{p!} \dd x \dd\tau,
\label{eq:higher-trotter-error-1-ap} \end{align} where we used the integral representation for the Taylor remainder $\op S\left(\tau,0\right)$.
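For constant $\|\op R^{(p)}\|$, the double integral on the right-hand side evaluates to $\delta^{p+1}/(p+1)!$, which can be confirmed by simple numerical quadrature (a sketch using a midpoint rule):

```python
from math import factorial

def midpoint(f, a, b, n=20000):
    """Simple midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def double_integral(p, delta):
    # p * int_0^delta int_0^1 (1-x)^(p-1) * tau^p / p! dx dtau
    inner = midpoint(lambda x: (1.0 - x) ** (p - 1), 0.0, 1.0)    # = 1/p
    outer = midpoint(lambda t: t ** p / factorial(p), 0.0, delta)  # = delta^(p+1)/((p+1) p!)
    return p * inner * outer

for p, delta in [(1, 0.3), (2, 0.5), (4, 0.7)]:
    exact = delta ** (p + 1) / factorial(p + 1)
    assert abs(double_integral(p, delta) - exact) < 1e-6
```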
Motivated by this, we look for a simple expression for the $p$\textsuperscript{th} derivative of the integrand $\op R\left(\tau\right)$, which we capture in the following technical lemma. \begin{lemma}\label{lem:trotter-tech1-ap}
For a product formula accurate to $p$\textsuperscript{th} order, having $S=S_p$ stages for $M$ non-commuting Hamiltonian layers with the upper-bound $\|\op H_i\|\le\Lambda$, the error term $\op R\left(\tau\right)$ satisfies
\begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| \le \left(S M\right)^{p+1} \Lambda^{p+1} \begin{cases}
2 & p=1 \\
\displaystyle \frac{1}{2^p} \left(\frac23\right)^{\left(p+1\right)\left(p/2-1\right)} & \text{$p=2k$ for $k\ge1$}.
\end{cases}
\end{align} \end{lemma}
\begin{proof}
We first express $\P\left(\tau\right)$ from \cref{eq:higher-trotter-ap,eq:higher-trotter-2-ap} with a joint index set $\Sigma=[S]\times[M]$ as
\begin{align}
\P\left( \tau\right) = \prod_{j=1}^{S}\prod_{i=1}^{M}\op U_{ij}\left(\tau\right)
=\prod_{I \in \Sigma}\op U_{I}\left(\tau\right).
\end{align}
Then the $\left(p+1\right)$\textsuperscript{th} derivative of this with respect to $\tau$ is
\begin{align}
\P^{\left(p+1\right)}\left(\tau\right) = \sum_{\alpha:\,|\alpha| = p+1} \binom{p+1}{\alpha} \prod_{I}\op U_{I}^{\left(\alpha_I\right)}\left( \tau\right)
\label{eq:P-multiindex-ap}
\end{align}
where $\alpha$ is a multiindex on $\Sigma$, and $|\alpha|=\sum_{I\in\Sigma} \alpha_{I}$.
Following standard convention, the multinomial coefficient for a multiindex is defined as
\begin{align}
\binom{p+1}{\alpha} = \frac{\left(p+1\right)!}{\alpha!} =\frac{\left(p+1\right)!}{\prod_{I\in\Sigma}\alpha_I!}.
\end{align}
We can similarly express $\op H$ with the same index set $\Sigma$, and as a derivative of $\op U$, via
\begin{equation}
\op H = \sum_{i=1}^{M}\op H_i
= \sum_{j=1}^{S} \sum_{i=1}^{M} \tcoeff_{ji}\op H_i
= \ii \sum_{j=1}^{S} \sum_{i=1}^{M} \op U_{ij}^{\left(1\right)}\left(0\right)
= \ii \sum_{I\in\Sigma} \op U_{I}^{\left(1\right)}\left(0\right)
\label{eq:H-multiindex-ap}
\end{equation}
where we used the fact that $\sum_{j=1}^S \tcoeff_{ji} = 1$ by \cref{rem:bounds-on-trotter-coefficients-ap}, and the exponential expression of $\op U_I$ from \cref{eq:higher-trotter-2-ap}.
Now we can combine \cref{eq:P-multiindex-ap,eq:H-multiindex-ap} as in \cref{eq:higher-trotter-error-1-ap} to obtain the $p$\textsuperscript{th} derivative of the integrand $\op R\left(\tau\right)$:
\begin{equation}\label{eq:R-bound-ap}
\op R^{\left(p\right)}\left(\tau\right) = \sum_{\alpha:\,|\alpha| = p+1} \binom{p+1}{\alpha} \prod_{I} \op U_{I}^{\left(\alpha_I\right)}\left( \tau\right) - \sum_{I} \op U_{I}^{\left(1\right)}\left(0\right)\sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \prod_{I} \op U_{I}^{\left(\beta_I\right)}\left( \tau\right).
\end{equation}
Noting that $\|\op U_{I}^{\left(\beta_I\right)}\left(\tau\right)\|=\| \op U_{I}^{\left(\beta_I\right)}\left(0\right)\|$, and further $\op U_I^{\left(x\right)}\left(0\right)\op U_I^{\left(y\right)}\left(0\right) = \op U_I^{\left(x+y\right)}\left(0\right)$, we have
\begin{align}
\sum_{J} \left\| \op U_{J}^{\left(1\right)}\left(0\right) \right\| \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \prod_{I} \left\| \op U_{I}^{\left(\beta_I\right)}\left(0\right) \right\|
&= \sum_{\beta:\,|\beta|=p+1}\binom{p+1}{\beta} \prod_I \left\| \op U_I^{\left(\beta_I\right)}\left(0\right) \right\|.
\end{align}
We can therefore bound the norm of $\op R^{\left(p\right)}$ as follows:
\begin{align}
\left\|\op R^{\left(p\right)}\left(\tau\right)\right\| &\leq \sum_{\alpha:\,|\alpha| = p+1}\binom{p+1}{\alpha} \prod_{I} \left\|\op U_{I}^{\left(\alpha_I\right)}\left(0\right)\right\| \\
&\phantom{=}\hspace{.8cm} + \sum_{I} \left\|\op U_{I}^{\left(1\right)}\left(0\right)\right\|\sum_{\beta:\,|\beta| = p} \binom{p}{\beta}\prod_{I} \left\|\op U_{I}^{\left(\beta_I\right)}\left(0\right)\right\| \\
& = 2 \sum_{\alpha:\,|\alpha| = p+1}\binom{p+1}{\alpha} \prod_{I} \left\|\op U_{I}^{\left(\alpha_I\right)}\left(0\right)\right\| \\
&= 2 \sum_{\alpha:\,|\alpha| = p+1}\binom{p+1}{\alpha} \prod_{j=1}^S\prod_{i=1}^M \left|\tcoeff_{ji}\right|^{\alpha_{ij}} \left\| \op H_{i}\right\|^{\alpha_{ij}}.
\end{align}
By \cref{rem:bounds-on-trotter-coefficients-ap}, we know that $|\tcoeff_{ji}|=1$ when $p=1$ and $|\tcoeff_{ji}|\le \left(2/3\right)^{p/2-1}/2$ for all $j,i$ when $p=2k$ for $k\ge1$.
Hence for $p=1$
\begin{align}
\left\| \op R^{\left(1\right)}\left(\tau\right)\right\| \le 2 \left(S M\right)^{2} \Lambda^{2},
\end{align}
and for $p=2k$ for $k\ge1$
\begin{align}
\left\| \op R^{\left(p\right)}\left(\tau\right)\right\| \le 2 \sum_{\alpha:\,|\alpha|=p+1} \binom{p+1}{\alpha} \left[ \left(\frac23 \right)^{p/2-1} \frac{\Lambda}{2} \right]^{|\alpha|}
=: C_p\left(S,M\right) \left( \frac23 \right)^{\left(p+1\right)\left(p/2-1\right)} \frac{\Lambda^{p+1}}{2^p},
\end{align}
where $C_p\left(S,M\right)$ is the sum of the multinomial coefficients over all multiindices $\alpha$ with $|\alpha|=p+1$; a simple expression can be obtained by reversing the multinomial theorem, since
\begin{align}
\sum_{\alpha:\,|\alpha|=p+1} \binom{p+1}{\alpha} = \left(\underbrace{1+1+\ldots+1}_{|\Sigma|\ \text{terms}}\right)^{p+1} = |\Sigma|^{p+1} = \left(SM\right)^{p+1}.
\end{align}
\qedhere \end{proof}
To obtain the final error bounds, we combine \cref{lem:trotter-tech1-ap} with the integral representation in \cref{eq:higher-trotter-error-1-ap}.
\begin{theorem}[Trotter Error]\label{th:trotter-error-ap}
For a $p$\textsuperscript{th} order product formula $\P_p$ for $p=1$ or $p=2k$, $k\ge1$, with the same setup as in \cref{lem:trotter-tech1-ap}, a bound on the approximation error for the exact evolution $U(T)$ with $T/\delta$ rounds of the product formula $\P_p\left(\delta\right)$ is given by
\begin{align}
\epsilon_p\left(T,\delta\right) \le \frac {T}{\delta}\, \delta^{p+1} M^{p+1} \Lambda^{p+1} \times \begin{cases}
1 & p=1 \\
\displaystyle \frac{2}{\left(p+1\right)!} \left( \frac{10}3 \right)^{\left(p+1\right)\left(p/2-1\right)} & \text{$p=2k$, $k\ge 1$}.
\end{cases}
\end{align} \end{theorem}
\begin{proof}
We can use the bound on $\op R^{\left(p\right)}$ derived in \cref{lem:trotter-tech1-ap} and perform the integration over $\tau$ and $x$ in \cref{eq:higher-trotter-error-1-ap}, to obtain
\begin{align}
\epsilon \le \| \op R^{\left(p\right)} \| \int_0^\delta p \int_0^1 \left(1-x\right)^{p-1} \frac{\tau^p}{p!} \dd x \dd \tau
= \frac{\delta^{p+1}}{\left(p+1\right)!}\| \op R^{\left(p\right)} \|.
\end{align}
By \cref{eq:trotter-basic-ap}, for Trotter formulae of order $p=1$ we have precisely one stage, i.e.\ $S=1$, and $\tcoeff_{ji}=1$ for all $i,j$.
This, together with \cref{lem:trotter-tech1-ap,eq:epsilon-single-step-ap}, yields the first bound.
The number of stages in higher order formulae can be upper-bounded by \cref{eq:P-2-ap,eq:P-2k-ap}, giving $S_p \le 2\times 5^{p/2-1}$.
Together with \cref{lem:trotter-tech1-ap,eq:epsilon-single-step-ap}, this yields the second bound. \end{proof}
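As an illustrative numerical check of \cref{th:trotter-error-ap} at $p=2$ (a sketch with $M=2$ random Hermitian layers normalised to $\Lambda=1$, not a proof), the measured error should sit below the stated bound $\frac{T}{\delta}\,\delta^{3} M^{3} \Lambda^{3}\cdot\frac{2}{3!}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(d, lam):
    """Random Hermitian matrix rescaled to spectral norm lam."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    H = (A + A.conj().T) / 2
    return lam * H / np.linalg.norm(H, 2)

def U(H, t):
    """exp(-i H t) via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

Lam, M = 1.0, 2
H1, H2 = rand_herm(4, Lam), rand_herm(4, Lam)

def P2(d):
    # symmetric second-order step for the two layers
    return U(H2, d / 2) @ U(H1, d) @ U(H2, d / 2)

T, delta = 1.0, 0.05
n = int(round(T / delta))
approx = np.linalg.matrix_power(P2(delta), n)
eps = np.linalg.norm(U(H1 + H2, T) - approx, 2)

# Theorem bound for p = 2 (k = 1): (T/delta) * delta^3 * M^3 * Lambda^3 * 2/3!
bound = (T / delta) * delta ** 3 * M ** 3 * Lam ** 3 * (2.0 / 6.0)
assert eps <= bound
```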
We remark that tighter bounds than the ones in \cref{th:trotter-error-ap} are achievable for any given product formula where the form of its coefficients $\tcoeff_{ji}$ is explicitly available, and not merely bounded as in \cref{rem:bounds-on-trotter-coefficients-ap}. Summing these stage coefficients exactly is therefore an immediate way to obtain an improved error bound.
Furthermore, the triangle inequality on $\| \op R^{\left(p\right)}\left(\tau\right) \|$ in the proof of \cref{lem:trotter-tech1-ap} is a crude overestimate: it loses information about (i)~terms that could cancel between the two multi-index sums, and (ii)~any commutation relations between the individual Trotter stages.
In the following subsection, we provide a sharper error analysis, featuring tighter but less clean analytical expressions which we can nonetheless evaluate efficiently numerically.
\subsection*{Explicit Summation of Trotter Stage Coefficients} For the recursive Suzuki-Trotter formula in \cref{eq:P-2k-ap} we can immediately improve the error bound by summing the stage coefficients $\tcoeff_{ij}$ up exactly, instead of bounding them as in \cref{rem:bounds-on-trotter-coefficients-ap}.
\begin{corollary}[Trotter Error]\label{cor:trotter-error-ap} For the recursive product formula in \cref{eq:P-2k-ap} and $p=2k$ for $k\ge1$, \begin{align}
\epsilon_p\left(T,\delta\right) \le \frac{2 T \delta^p M^{p+1} \Lambda^{p+1} }{\left(p+1\right)!} H_p^{p+1}
\quad\text{where}\quad
H_p := \prod_{i=1}^{p/2-1} \frac{ 4+4^{1/\left(2i+1\right)}}{\left| 4 - 4^{1/\left(2i+1\right)} \right| }. \end{align} \end{corollary} \begin{proof}
This follows from explicitly summing up the magnitudes of all the $\tcoeff_{ji}$'s obtained by solving the recursive definition of the product formula, which can easily be verified to satisfy $\sum_{ij} |\tcoeff_{ij}\left(p\right)| = M H_p$. Then from \cref{lem:trotter-tech1-ap}, \begin{align}
\left\| \op R^{\left(p\right)}\left(\tau\right) \right\| \le 2 \Lambda^{p+1}\!\!\!\! \sum_{\alpha:\,|\alpha|=p+1} \binom{p+1}{\alpha} \prod_{j=1}^S \prod_{i=1}^M \left| \tcoeff_{ji}^{\alpha_{ij}} \right|
= 2\Lambda^{p+1} \left( \sum_{j=1}^S \sum_{i=1}^M |\tcoeff_{ji}| \right)^{p+1}, \end{align} and the claim follows as before. \end{proof}
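The per-layer identity $\sum_{j}|\tcoeff_{ji}| = H_p$ (so that $\sum_{ij}|\tcoeff_{ij}| = M H_p$) can be verified by unrolling the recursion \cref{eq:P-2k-ap} (an illustrative sketch):

```python
import numpy as np

def stage_coeffs(p):
    """Per-layer stage coefficients tau_{ji} of the recursive (2k)-th order formula."""
    coeffs = [0.5, 0.5]  # second order: two stages, 1/2 each
    for kk in range(2, p // 2 + 1):
        a = 1.0 / (4.0 - 4.0 ** (1.0 / (2 * kk - 1)))
        coeffs = ([a * c for c in coeffs] * 2
                  + [(1 - 4 * a) * c for c in coeffs]
                  + [a * c for c in coeffs] * 2)
    return coeffs

def H_p(p):
    """Closed-form product from the corollary."""
    return float(np.prod([(4 + 4 ** (1 / (2 * i + 1))) / abs(4 - 4 ** (1 / (2 * i + 1)))
                          for i in range(1, p // 2)]))

for p in (2, 4, 6, 8):
    total = sum(abs(c) for c in stage_coeffs(p))
    assert abs(total - H_p(p)) < 1e-10
```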
For later reference, we note that it is straightforward to generalise the error bound in \cref{cor:trotter-error-ap} for the case of a \emph{higher} derivative $\op R^{(q)}$, $q\ge p$, but still for a $p$\textsuperscript{th} order formula: the bound simply reads \begin{equation}\label{eq:p-q-error-1-ap} \epsilon_{p,q}(T,\delta) \le \frac{2 T \delta^q M^{q+1} \Lambda^{q+1} }{\left(q+1\right)!} H_p^{q+1}. \end{equation}
\subsection*{Commutator Bounds}\label{ap:commutator_bounds} Our analysis thus far has completely neglected the underlying structure of the Hamiltonian. In this subsection we establish commutator bounds which are easily applicable to $D$-dimensional lattice Hamiltonians.
We begin with the following technical lemmas. \begin{lemma}\label{lem:trotter-tech-comm-ap}
For a product formula accurate to $p$\textsuperscript{th} order, having $S=S_p$ stages for $M$ non-commuting Hamiltonian layers with the upper bound $\|\op H_i\|\le\Lambda$, the error term $\op R\left(\tau\right)$ satisfies \begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| &\leq \sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \left(B_p \Lambda\right)^{p-\beta_{I}} \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( \tau\right)\right]\right\|. \end{align} \end{lemma} \begin{proof} As shown in \cref{lem:trotter-tech1-ap}, \begin{align}
\op R^{\left(p\right)}\left(\tau\right) = \sum_{\alpha:\,|\alpha| = p+1} \binom{p+1}{\alpha} \prod_{I} \op U_{I}^{\left(\alpha_I\right)}\left( \tau\right) - \sum_{J} \op U_{J}^{\left(1\right)}\left(0\right) \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \prod_{I} \op U_{I}^{\left(\beta_I\right)}\left( \tau\right). \end{align} We begin by commuting every $\op U_{J}^{\left(1\right)}\left(0\right)$ past $\prod^{SM}_{I=J+1} \op U_I^{\left(\beta_I\right)}\left( \tau\right)$. Consider this for some fixed $J$ in the sum over $J$; that is, consider rewriting a particular summand from the second term above to obtain \begin{align}
\op U_{J}^{\left(1\right)}&\left(0\right) \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \prod_{I} \op U_{I}^{\left(\beta_I\right)}\left( \tau\right) \\
&= \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \op U_{J}^{\left(1\right)}\left(0\right) \left(\op U_{SM}^{\left(\beta_{SM}\right)}\left( \tau\right) \ldots \op U_{J+1}^{\left(\beta_{J+1}\right)}\left( \tau\right) \right)\left(\op U_{J}^{\left(\beta_{J}\right)}\left( \tau\right) \ldots \op U_{1}^{\left(\beta_{1}\right)}\left( \tau\right) \right)\\
&=\sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \left(\op U_{SM}^{\left(\beta_{SM}\right)}\left( \tau\right) \ldots \op U_{J+1}^{\left(\beta_{J+1}\right)}\left( \tau\right)\right) \left( \op U_{J}^{\left(\beta_{J}+1\right)}\left( \tau\right) \ldots \op U_{1}^{\left(\beta_{1}\right)}\left( \tau\right) \right) \\
&\mspace{30mu} + \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \bigg[\op U_{J}^{\left(1\right)}\left(0\right)\ ,\ \prod^{SM}_{I=J+1} \op U_I^{\left(\beta_I\right)}\left( \tau\right)\bigg] \prod^{J}_{I=1} \op U_I^{\left(\beta_I\right)}\left( \tau\right). \end{align} Now, by inserting this into the full expression for $ \op R^{\left(p\right)}\left(\tau\right)$, we obtain \begin{align}
\op R^{\left(p\right)}\left(\tau\right) &= \sum_{\alpha:\,|\alpha| = p+1} \binom{p+1}{\alpha} \prod_{I} \op U_{I}^{\left(\alpha_I\right)}\left( \tau \right) - \sum_{\beta:\,|\beta| = p+1} \binom{p+1}{\beta} \prod_{I} \op U_{I}^{\left(\beta_I\right)}\left( \tau\right) \\
&\mspace{30mu} -\sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \bigg[\op U_{J}^{\left(1\right)}\left(0\right)\ ,\ \prod^{SM}_{I=J+1} \op U_I^{\left(\beta_I\right)}\left( \tau\right)\bigg] \prod^{J}_{I=1} \op U_I^{\left(\beta_I\right)}\left( \tau\right) \\
&= -\sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \bigg[\op U_{J}^{\left(1\right)}\left(0\right)\ ,\ \prod^{SM}_{I=J+1} \op U_I^{\left(\beta_I\right)}\left( \tau\right)\bigg] \prod^{J}_{I=1} \op U_I^{\left(\beta_I\right)}\left( \tau\right)\\
&= -\sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \prod^{SM}_{K=I+1} \op U_K^{\left(\beta_K\right)}\left( \tau\right) \left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( \tau\right)\right] \prod^{I-1}_{K=1} \op U_K^{\left(\beta_K\right)}\left( \tau\right). \end{align} Taking the norm of this expression gives \begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| &\leq \sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \left( B_p \Lambda\right)^{\sum_{K=I+1}^{SM} \beta_K} \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( \tau\right)\right]\right\| \left(B_p \Lambda\right)^{\sum_{K=1}^{I-1}\beta_{K}} \\
& =\sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \left(B_p \Lambda\right)^{p-\beta_{I}} \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( \tau\right)\right]\right\|. \end{align} This completes the proof. \end{proof}
\begin{lemma}\label{lem:zero-time-commutator-bounds-ap}
If every pair of Hamiltonians $\op H_I$, $\op H_J$ can be written as $\op H_I = \sum_{i=1}^{N} \op h^{I}_{i}$ and $\op H_J = \sum_{i=1}^{N} \op h^{J}_{i}$, where for any $i$ we have $\| \op h^{I}_{i} \|=\| \op h^{J}_{i} \|=1$ and for any fixed term $\op h^{J}$ there are at most $\n$ terms in $\op H_I$ which do not commute with that specific term, then \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( 0\right)\right]\right\| & \leq 2 \n \beta_I N^{\beta_I} B_{p}^{\beta_I + 1}. \end{align} \end{lemma} \begin{proof} First note that \begin{align} \op U_{J}^{\left(1\right)}\left(0\right) = -\ii \tcoeff_{J} \op H_J = -\ii \tcoeff_{J} \sum_{i=1}^{N} \op h^{J}_{i} \end{align} and \begin{align} \op U_{I}^{\left(\beta_I\right)}\left(0\right) = \left(-\ii \tcoeff_{I} \op H_I\right)^{\beta_I} = \left(-\ii \tcoeff_{I}\right)^{\beta_I}\left( \sum_{i=1}^{N} \op h^{I}_{i}\right)^{\beta_I}. \end{align}
Consider a fixed term in $\op U_{J}^{\left(1\right)}\left(0\right)$ such as $-\ii \tcoeff_J \op h^{J}$, where we have dropped the subscript $i$. As there are $N$ of these, we can bound the norm of the commutator as follows \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_I^{\left(\beta_I\right)}\left( 0\right)\right]\right\| & \leq N B_p \left\|\left[ \op h^{J} , \op U_I^{\left(\beta_I\right)}\left( 0\right)\right]\right\|, \end{align} where we have also used the triangle inequality and the fact that $\tcoeff_J \leq B_p$.
Now consider fully expanding $\op U_I^{\left(\beta_I\right)}$ so that it is a sum of $N^{\beta_I}$ products of norm-$1$ Hamiltonians with coefficients upper-bounded by $\left(B_p\right)^{\beta_I}$. As only $\n$ of the $N$ normalised Hamiltonians do not commute with $\op h^{J}$, the number of products in the expanded $\op U_{I}^{\left(\beta_I\right)}$ which do not commute with $\op h^{J}$ can be upper-bounded by $\n \beta_I N^{\beta_I -1}$: a product may fail to commute with $\op h^{J}$ whenever one of the $\n$ non-commuting terms appears in it (the factor $\n$), regardless of whatever other terms appear in the remaining positions (the factor $N^{\beta_I -1}$), and we over-count by repeating this for each of the $\beta_I$ positions in the product (the factor $\beta_{I}$). This gives \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right), \op U_I^{\left(\beta_I\right)}\left( 0\right)\right]\right\| & \leq 2 \n \beta_I N^{\beta_I} B_{p}^{\beta_I + 1}. \end{align} The extra factor of $2$ comes from bounding the commutators of the norm~1 Hamiltonians via triangle inequality. \end{proof}
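The counting argument above can be sanity-checked by brute force: the exact number of length-$\beta_I$ products containing at least one of the $\n$ non-commuting terms is $N^{\beta_I}-(N-\n)^{\beta_I}$, which never exceeds the over-count $\n \beta_I N^{\beta_I-1}$. A minimal sketch, with illustrative parameter ranges:

```python
from itertools import product

# Brute-force check (illustrative ranges) that the exact count of
# length-beta products containing at least one of ntil non-commuting
# terms, N**beta - (N - ntil)**beta, never exceeds the over-count
# ntil * beta * N**(beta - 1) used in the lemma.
for N in range(1, 7):
    for ntil in range(0, N + 1):
        for beta in range(1, 5):
            bad = set(range(ntil))   # indices of non-commuting terms
            exact = sum(
                1 for word in product(range(N), repeat=beta)
                if bad.intersection(word)
            )
            assert exact == N**beta - (N - ntil)**beta
            assert exact <= ntil * beta * N**(beta - 1)
print("over-count bound verified")
```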
\begin{lemma}\label{lem:R-bound-Comms-ap}
If every pair of Hamiltonians can be written as $\op H_I = \sum_{i=1}^{N} \op h^{I}_{i}$ and $\op H_J = \sum_{i=1}^{N} \op h^{J}_{i}$, where all $\| \op h^{I}_{i} \|=\| \op h^{J}_{i} \|=1$, and if additionally for any fixed term $\op h^{J}$ there are at most $\n$ terms $\op h^{I}$ which do not commute with $\op h^{J}$, then \begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| &\leq \n p B_p^{p+1} \Lambda^{p-1} N \left(\left(SM-1\right)+\frac{N}{\Lambda}\right)^{p-1} \left(\left(SM\right)^{2}-\left(SM\right)\right) \\
&+ \n \tau B_p^{p+2} \Lambda^{p} N \ee^{\tau N B_p} \left(\left(SM\right)^{p+2}-\left(SM\right)^{p+1}\right). \end{align} \end{lemma} \begin{proof} We must obtain a simplified form for the bounded commutator appearing in \cref{lem:trotter-tech-comm-ap}. We can sequentially expand this commutator and use the triangle inequality to write it as \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_I^{\left(\beta_I\right)}\left(\tau\right)\right]\right\| &= \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_I^{\left(\beta_I\right)}\left(0\right) \ee^{\ii \tau \tcoeff_I \op H_{I}} \right]\right\|
\\ &\leq \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_I^{\left(\beta_I\right)}\left(0\right)\right]\right\| + B_p^{\beta_{I}} \Lambda^{\beta_I} \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \ee^{\ii \tau \tcoeff_I \op H_{I}} \right]\right\|. \end{align} We can use \cref{lem:zero-time-commutator-bounds-ap} to bound the first term. The commutator in the second term can be bounded as follows: \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \ee^{\ii \tau \tcoeff_I \op H_{I}} \right]\right\| &= \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , I+ \ii \tau \tcoeff_I \op H_{I}+\frac{1}{2!} \left(\ii \tau \tcoeff_I \op H_{I}\right)^2 + \ldots \right]\right\| \\
&\leq \sum_{k=1}^{\infty} \frac{\tau^k}{k!}\left\| \left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_{I}^{\left(k\right)}\left(0\right) \right] \right\| \\
&\leq \sum_{k=1}^{\infty} \frac{2 \n \tau^k}{\left(k-1\right)!} N^{k} B_{p}^{k + 1} \\
&= 2 \n \tau N B^{2}_p \ee^{ N B_p \tau}, \end{align} where we have used \cref{lem:zero-time-commutator-bounds-ap} to bound the norm of the commutator of $\op U_{J}^{\left(1\right)}\left(0\right)$ and $\op U_{I}^{\left(k\right)}\left(0\right)$ by $2 \n k N^{k} B^{k+1}_p$, and then resummed the series. Bounding the first term directly with \cref{lem:zero-time-commutator-bounds-ap} as well, we obtain \begin{align}
\left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) , \op U_I^{\left(\beta_I\right)}\left(\tau\right)\right]\right\| & \leq 2 \n \beta_I N^{\beta_I} B^{\beta_I + 1}_p + 2 \n \tau N \Lambda^{\beta_I} B^{2+\beta_I}_p \ee^{N B_p \tau}. \end{align} Now by using this to bound the result of \cref{lem:trotter-tech-comm-ap} we obtain \begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| &\leq \sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \left(B_p \Lambda\right)^{p-\beta_{I}} \left\|\left[\op U_{J}^{\left(1\right)}\left(0\right) \ , \op U_I^{\left(\beta_I\right)}\left( \tau\right)\right]\right\| \\
&\leq \sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM} \left(B_p \Lambda\right)^{p-\beta_{I}} \left( 2 \n \beta_I N^{\beta_I} B^{\beta_I + 1}_p + 2 \n \tau N \Lambda^{\beta_I} B^{2+\beta_I}_p \ee^{N B_p \tau} \right)\\
&= \sum_{J} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \sum_{I=J+1}^{SM}\left( 2 \n \beta_I \left(\frac{N}{\Lambda}\right)^{\beta_I} \Lambda^{p}B^{p+1}_p + 2 \n \tau N \Lambda^{p} B^{p+2}_p \ee^{N B_p \tau} \right). \end{align} To simplify this expression, we must simplify an expression of the form \begin{align}
\sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \beta_I x^{\beta_I} \end{align} where in our case $x= N/\Lambda$. This can be done by rewriting this expression in terms of a derivative with respect to $x$ and reversing the multinomial theorem, which gives \begin{align}
\sum_{\beta:\,|\beta| = p} \binom{p}{\beta} \beta_I x^{\beta_I} &= x\frac{d}{dx} \sum_{\beta:\,|\beta| = p} \binom{p}{\beta} x^{\beta_I} \\
&= x\frac{d}{dx} \left(\underbrace{1+\ldots+1+x}_{SM \ \text{terms}}\right)^{p}\\
&=p x \left( SM - 1 + x\right)^{p-1}. \end{align}
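The multinomial identity above is easy to verify by brute force over all compositions of $p$ into $SM$ parts; the values of $p$, $SM$ and $x$ below are illustrative:

```python
from itertools import product
from math import factorial

# Brute-force check of the identity (illustrative values of p, SM, x):
#   sum_{|beta|=p} multinomial(p; beta) * beta_I * x**beta_I
#     = p * x * (SM - 1 + x)**(p - 1)
def lhs(p, slots, x, I=0):
    total = 0.0
    for beta in product(range(p + 1), repeat=slots):
        if sum(beta) != p:
            continue
        multi = factorial(p)
        for b in beta:
            multi //= factorial(b)
        total += multi * beta[I] * x**beta[I]
    return total

for p in (1, 2, 3, 4):
    for slots in (2, 3, 4):
        for x in (0.5, 1.0, 2.5):
            rhs = p * x * (slots - 1 + x)**(p - 1)
            assert abs(lhs(p, slots, x) - rhs) < 1e-9 * max(1.0, rhs)
print("multinomial identity verified")
```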
Using this and performing the summation over $J$ and $I$ simplifies the expression for $\left\| \op R^{\left(p\right)}\left(\tau \right) \right\|$ to \begin{align}
\left\| \frac{\partial^p}{\partial\tau^p}\op R\left(\tau\right)\right\| &\leq p \n B_p^{p+1} \Lambda^{p-1} N \left(SM-1+\frac{N}{\Lambda}\right)^{p-1} \left(\left(SM\right)^{2}-\left(SM\right)\right) \\
&+ \tau \n B_p^{p+2} \Lambda^{p} N \left(\left(SM\right)^{p+2}-\left(SM\right)^{p+1}\right) \ee^{\tau N B_p}. \end{align} \end{proof}
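The exponential resummation used in the proof, $\sum_{k\ge1} \frac{2\n\tau^k}{(k-1)!} N^k B_p^{k+1} = 2\n\tau N B_p^2 \ee^{N B_p\tau}$, can be checked numerically; the parameter values below are illustrative:

```python
import math

# Numerical check of the resummation (illustrative parameter values):
#   sum_{k>=1} 2*ntil*tau**k/(k-1)! * N**k * Bp**(k+1)
#     = 2*ntil*tau*N*Bp**2 * exp(N*Bp*tau)
ntil, tau, N, Bp = 3, 0.1, 4, 0.7
partial = sum(
    2 * ntil * tau**k / math.factorial(k - 1) * N**k * Bp**(k + 1)
    for k in range(1, 60)   # truncated series; converges rapidly here
)
closed = 2 * ntil * tau * N * Bp**2 * math.exp(N * Bp * tau)
assert abs(partial - closed) < 1e-12 * closed
print("series matches closed form")
```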
Now we can use the preceding lemmas to establish a commutator bound for higher-order Trotter formulae. Although cumbersome-looking, it is easy to evaluate. \begin{theorem}[Commutator Error Bound]\label{th:Trotter-Er-Commutator-ap}
Let $\op H = \sum_{i=1}^{M} \op H_i$ with $\| \op H_i \| \leq \Lambda$ be a Hamiltonian with $M$ internally commuting layers $\op H_I = \sum_{i=1}^{N} \op h^{I}_{i}$, i.e.\ the terms within each layer mutually commute.
Assume that for any $i$, $\| \op h^{I}_{i} \|=\| \op h^{J}_{i} \| \leq 1$.
Additionally, assume that for any fixed term $\op h^{J}$ there exist at most $\n$ terms $\op h^{I}$ which do not commute with $\op h^{J}$.
Then, for a $p$\textsuperscript{th}-order product formula $\P_p$ with $p=1$ or $p=2k$, $k\ge1$, used to approximate the evolution operator under $\op H$, the approximation error relative to the exact evolution $\op U(T)$ with $T/\delta$ rounds of the product formula $\P_p\left(\delta\right)$ is bounded by
\begin{align}
\epsilon_p \left(T,\delta \right) \leq C_1 \frac{T \delta^{p}}{\left(p+1\right)!} + C_2 \frac{T}{\delta} \int_0^\delta p \int_0^1 \left(1-x\right)^{p-1} \frac{x \tau^{p+1}}{p!} \ee^{x \tau N B_p} \dd x \dd\tau
\end{align}
with
\begin{align}
C_1 &= \n p B_p^{p+1} \Lambda^{p-1} N \left(\left(SM-1\right)+\frac{N}{\Lambda}\right)^{p-1} \left(\left(SM\right)^{2}-\left(SM\right)\right)\\
C_2 &=\n B_p^{p+2} \Lambda^{p} N \left(\left(SM\right)^{p+2}-\left(SM\right)^{p+1}\right).
\end{align} \end{theorem} \begin{proof}
The error formula for a single Trotter step is given by \cref{eq:higher-trotter-error-1-ap} as
\begin{align}
\epsilon_p \left(\delta \right) &\le p \int_0^\delta \int_0^1 \left(1-x\right)^{p-1} \| \op R^{\left(p\right)}\left(x\tau\right)\| \frac{\tau^{p}}{p!} \dd x \dd\tau.
\end{align}
Evaluating this using \cref{lem:R-bound-Comms-ap} and then substituting the resultant expression in $\epsilon_p \left(T,\delta \right)\leq (T/\delta) \epsilon_p \left(\delta \right)$ gives the stated expression. \end{proof}
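As noted above, the bound is easy to evaluate in practice. A minimal sketch follows, in which every parameter value ($p$, $S$, $M$, $N$, $\n$, $\Lambda$, $B_p$, $T$, $\delta$) is a hypothetical placeholder; the double integral is computed by term-wise integration of the exponential series:

```python
from math import factorial

# Illustrative evaluation of the commutator error bound; every parameter
# value below (p, S, M, N, ntil, Lam, Bp, T, delta) is a hypothetical
# placeholder, not a value fixed by the text.
p, S, M, N, ntil = 2, 2, 5, 10, 4
Lam, Bp, T, delta = 2.0, 1.0, 1.0, 0.01

SM = S * M
C1 = (ntil * p * Bp**(p + 1) * Lam**(p - 1) * N
      * (SM - 1 + N / Lam)**(p - 1) * (SM**2 - SM))
C2 = ntil * Bp**(p + 2) * Lam**p * N * (SM**(p + 2) - SM**(p + 1))

# Double integral evaluated by expanding exp(x*tau*N*Bp) term by term:
#   int_0^1 (1-x)**(p-1) * x**(k+1) dx = (k+1)!(p-1)!/(k+p+1)!
#   int_0^delta tau**(p+1+k) dtau     = delta**(p+2+k)/(p+2+k)
integral = p / factorial(p) * sum(
    (N * Bp)**k / factorial(k)
    * factorial(k + 1) * factorial(p - 1) / factorial(k + p + 1)
    * delta**(p + 2 + k) / (p + 2 + k)
    for k in range(40)
)

eps = C1 * T * delta**p / factorial(p + 1) + C2 * (T / delta) * integral
print(f"error bound: {eps:.4f}")
```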
For later reference, we note that it is straightforward to generalise the error bound in \cref{th:Trotter-Er-Commutator-ap} by incorporating techniques similar to those of \cref{cor:trotter-error-ap}, in order to sum up the $|\tcoeff_{ij}|$ exactly instead of simply bounding them by $B_p$. Additionally, we can also generalise to the case of a \emph{higher} derivative $\op R^{(q)}$, $q\ge p$, but still for a $p$\textsuperscript{th}-order formula: with these two generalisations the bound simply reads \begin{align}
\epsilon_{p,q} \left(T,\delta \right) \leq C_1 \frac{T \delta^{q}}{\left(q+1\right)!} + C_2 \frac{T}{\delta} \int_0^\delta q \int_0^1 \left(1-x\right)^{q-1} \frac{x \tau^{q+1}}{q!} \ee^{x \tau N B_p} \dd x \dd\tau \end{align} with \begin{align}
C_1 &= \n q B_p^{2} \Lambda^{q-1} N \left(M H_p - B_p + B_p \left(\frac{N}{\Lambda}\right)\right)^{q-1} \left(\left(S_p M\right)^{2}-\left(S_p M\right)\right)\\
C_2 &= \n B_p^{2}\left( M H_p \Lambda\right)^{q} N \left(\left(S_p M\right)^{2}-\left(S_p M\right)\right). \end{align}
\subsection*{A Taylor Bound on the Taylor Bound}\label{ap:taylor-of-taylor} Another way to tighten the Taylor-expansion bound used on $\op R(\tau)$ in \cref{eq:integral-representation-ap}, which can be combined with the more sophisticated commutator-based error bound from \cref{th:Trotter-Er-Commutator-ap} derived in the last section, is to perform a Taylor expansion of the remainder term itself, and in turn bound \emph{its} Taylor remainder by some other method \cite[Rem.~4]{Bausch2013}.
We first establish the following technical lemma: \begin{lemma}[Taylor Error Bound]\label{lem:taylor-error-bound-ap} Let the setup be as in \cref{lem:trotter-tech1-ap}, and let $q>p$. The error term $\epsilon$ from \cref{eq:trotter-epsilon-ap} satisfies \begin{align}
\epsilon_p(\delta) &\le \sum_{l=p}^q \frac{\delta^{l+1}}{(l+1)!} \| \op R^{(l)}(0) \| + \epsilon_{p,q+1}(\delta)\\ \intertext{where}
\op R^{(l)}(0) &= \sum_{\alpha:\,|\alpha|=l+1} \binom{l+1}{\alpha} \op F(\alpha) + \ii \op H \sum_{\beta:\,|\beta|=l} \binom{l}{\beta} \op F(\beta),\\ \intertext{with $\op H=\sum_{i=1}^M \op H_i$, and}
\op F(\alpha) :&\!\!= \prod_{j=1}^S\prod_{i=1}^M (-\ii \tcoeff_{ji} \op H_i)^{\alpha_{(i,j)}}. \end{align} \end{lemma} \begin{proof}
The expression for $\epsilon$ stems from Taylor-expanding \cref{eq:higher-trotter-error-1-ap} to order $q$ instead of $p$, and integrating over $\tau$.
The last term $\epsilon_{p,q+1}$ is then simply the overall remainder $\op S$, as before; and we can use \cref{eq:p-q-error-1-ap} to obtain a bound on it.
The bound on $\| \op R^{(l)}(0) \|$ is an immediate consequence of \cref{eq:R-bound-ap}, where we set $\tau=0$. \end{proof}
This allows us to calculate a numerical bound on $\| \op R^{(l)}(0) \|$, by bounding $\| \op H_i \| \le \Lambda$ and allowing terms within the two sums over $\alpha$ and $\beta$ to cancel. The benefit of this approach is that it is generically applicable to any given Trotter formula, and only depends on the non-commuting layers of $\op H$.
We can therefore derive the following bounds: \begin{corollary}[Taylor Error Bound]\label{cor:taylor-error-bound-ap}
Let $\op H=\sum_{i=1}^M \op H_i$ with $\| \op H_i \| \le \Lambda$ for all $i$. Then for $\epsilon$ from \cref{eq:higher-trotter-error-1-ap}, and for a $p$\textsuperscript{th}-order Trotter formula, we have \begin{align}
\epsilon_p(\delta) &\le \sum_{l=p}^q \frac{\delta^{l+1}\Lambda^{l+1}}{(l+1)!} f(p,M,l)
+ \epsilon_{p,q+1}(\delta),\\ \intertext{where}
f(p,M,l) &= \left\| \sum_{\alpha:\, |\alpha|=l+1} \binom{l+1}{\alpha} \op v(\alpha) + \ii \sum_{j=1}^M \ket j \otimes \sum_{\beta:\,|\beta|=l} \binom{l}{\beta} \op v(\beta) \right\|_1,\\ \intertext{and}
\op v(\alpha) :&\!\!= \bigotimes_{j=1}^S \bigotimes_{i=1}^M(-\ii \tcoeff_{ji} \ket i)^{\otimes \alpha_{(i,j)}}. \end{align} for a basis $\{ \ket1, \ldots, \ket M \}$ of $\field C^M$. \end{corollary} \begin{proof} Follows immediately from \cref{lem:taylor-error-bound-ap}. \end{proof} A selection of the series coefficients $f(p,M,l)$ can be found in \cref{tab:coefficients-ap}. \begin{table}[t]
\centering \begin{tabular}{@{}llllllll@{}} \toprule
& & \multicolumn{6}{c}{$f(p,M,l)$ for $l=\cdot$} \\
& M & l=p & l=p+1 & l=p+2 & l=p+3 & l=p+4 & l=p+5 \\ \midrule \multirow{4}{*}{p=1} & 2 & 2 & 6 & 14 & 30 & 62 & 126 \\
& 3 & 6 & 26 & 90 & 290 & 906 & 2786 \\
& 4 & 12 & 68 & 312 & 1340 & 5592 & 22988 \\
& 5 & 20 & 140 & 800 & 4292 & 22400 & 115220 \\ \midrule \multirow{4}{*}{p=2} & 2 & 3 & 9 & 22.75 & 50 & 108.344 & 225.531 \\
& 3 & 13 & 57 & 213.25 & 711.25 & 2309.47 & 7283.06 \\
& 4 & 34 & 198 & 980.5 & 4377.5 & 18926.6 & 79758 \\
& 5 & 70 & 510 & 3141.5 & 17555 & 94765.3 & 499391 \\ \midrule \multirow{4}{*}{p=4} & 2 & 4.89745 & 19.5277 & 79.5305 & 442.266 & 2312.73 & 11208.3 \\
& 3 & 43.6604 & 277.994 & 1880.62 & 16924.7 & & \\
& 4 & 194.476 & 1719.69 & 16226.8 & & & \\
& 5 & 610.187 & 6926.95 & 83775.9 & & & \\ \bottomrule \end{tabular}
\caption{Trotter error coefficients $f(p,M,l)$ from \cref{cor:taylor-error-bound-ap}; values rounded to the precision shown.}
\label{tab:coefficients-ap} \end{table} \Cref{cor:taylor-error-bound-ap} can then be applied in conjunction with e.g.\ the commutator error bound given in \cref{th:Trotter-Er-Commutator-ap} for the remaining term $\epsilon_{p,q+1}(\delta)$.
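A sketch reproducing the tabulated first-order coefficients $f(1,M,l)$, under the assumption that the two multi-index sums are taken at weights $l+1$ and $l$ for the $l$\textsuperscript{th} derivative, and that all first-order Trotter coefficients equal $1$ (so $S=1$):

```python
from itertools import product
from math import factorial

# Sketch reproducing the tabulated first-order coefficients f(1, M, l),
# assuming the two multi-index sums run at weights l+1 and l for the
# l-th derivative, with all first-order Trotter coefficients equal to 1.
def f_first_order(M, l):
    words = {}   # basis word -> accumulated complex coefficient

    def add(weight, extra_phase, prefix):
        for alpha in product(range(weight + 1), repeat=M):
            if sum(alpha) != weight:
                continue
            multi = factorial(weight)
            for count in alpha:
                multi //= factorial(count)
            word = prefix + tuple(
                i for i in range(M) for _ in range(alpha[i]))
            coeff = extra_phase * (-1j)**weight * multi
            words[word] = words.get(word, 0) + coeff

    add(l + 1, 1, ())           # first sum: |alpha| = l + 1
    for j in range(M):          # second sum: 1j * |j> (x) (|beta| = l)
        add(l, 1j, (j,))
    return sum(abs(c) for c in words.values())   # the 1-norm

# compare against a few tabulated values
for (M, l), value in {(2, 1): 2, (2, 2): 6, (3, 1): 6,
                      (3, 2): 26, (4, 1): 12}.items():
    assert round(f_first_order(M, l)) == value
print("matches tabulated f(1, M, l) values")
```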
\section*{Spectral Norm of Fermionic Hopping Terms}\label{ap:normbound} Let $a^{\dagger}$ and $a$ be the standard fermionic creation and annihilation operators.
\begin{theorem}\label{thm:norm-bound}
Let $\Omega = \{ij \}$ be a set of pairs of indices such that no two pairs share an index. Define:
\begin{align}
H_{\Omega} = \sum_{ij \in \Omega} h_{ij} \;,\;\; h_{ij}=a^{\dagger}_i a_j + a^{\dagger}_j a_i
\end{align}
Given a normalized fermionic state $\ket{\psi}$ such that $N \ket{\psi} = n \ket{\psi}$:
\begin{align}
\vert \bra{\psi} H_{\Omega} \ket{\psi} \vert \leq \min(n, M-n, \vert \Omega \vert)
\end{align}
where $M$ is the number of fermionic modes and $N = \sum_{k=1}^{M} a^{\dagger}_k a_k$ is the total number operator.
This bound is tight. \end{theorem}
\begin{proof} Consider that $h_{ij}$ has eigenvalues in $\{-1,0,1\}$, since $h_{ij}^2 = (N_i-N_j)^2$, which has eigenvalues $\{0,1\}$. Suppose there existed a normalised state $\ket{\psi}$ such that $N \ket{\psi} = n \ket{\psi}$ and $H_{\Omega} \ket{\psi} = \lambda \ket{\psi}$ where $\vert \lambda \vert > \min(n, M-n, \vert \Omega \vert)$. Since $H_{\Omega}$, $N$ and all $h_{ij}$ are mutually commuting, we may without loss of generality choose $\ket{\psi}$ to be an eigenstate of all $h_{ij}$ (by convexity). Then it must be the case that $h_{ij}^2 \ket{\psi} = \ket{\psi}$ for at least $\vert \lambda \vert$ pairs $ij$, which implies that in the Fock basis $\ket{\psi} = a \ket{0_i, 1_j, \ldots} + b \ket{1_i, 0_j, \ldots}$. Therefore for at least $\vert \lambda \vert$ pairs $ij \in \Omega$ we have $\bra{\psi} (N_i+N_j) \ket{\psi} = 1$, so $\bra{\psi}N \ket{\psi} \geq \vert \lambda \vert$ and $M-\bra{\psi}N \ket{\psi} \geq \vert \lambda \vert$. If $\min(n, M-n, \vert \Omega \vert) = n$ then $\bra{\psi}N \ket{\psi} > n$, which is a contradiction. If $\min(n, M-n, \vert \Omega \vert) = M-n$ then $M-\bra{\psi}N \ket{\psi} > M-n$, which is a contradiction. If $\min(n, M-n, \vert \Omega \vert) = \vert \Omega \vert$ then $\vert \lambda \vert > \vert \Omega \vert$, which is a contradiction. This proves the bound.
Now we need only show the bound is tight. Consider the following state: \begin{align}
\ket{\phi_{ij}^{\pm}}= (a^{\dagger}_i \pm a^{\dagger}_j) \Gamma_{s}\ket{0}, \end{align} with $\Gamma_{s}$ composed of creation and annihilation operators which do not include $i$ or $j$ (we omit normalisation factors, which do not affect the eigenvalue equations). This state is an eigenstate of $h_{ij}=a^{\dagger}_i a_j + a^{\dagger}_j a_i$:
\begin{align}
h_{ij} \ket{\phi_{ij}^{\pm}} = \pm \ket{\phi_{ij}^{\pm}} \end{align}
Observe: \begin{align} h_{ij} \ket{\phi_{ij}^{\pm}} &= h_{ij}(a^{\dagger}_i \pm a^{\dagger}_j) \Gamma_{s} \ket{0}\\
&= (a^{\dagger}_i a_ja^{\dagger}_i + a^{\dagger}_j a_i a^{\dagger}_i \pm a^{\dagger}_i a_j a^{\dagger}_j \pm a^{\dagger}_j a_ia^{\dagger}_j) \Gamma_{s} \ket{0}\\
&= ( a^{\dagger}_j a_i a^{\dagger}_i \pm a^{\dagger}_i a_j a^{\dagger}_j ) \Gamma_{s} \ket{0}\\ &= ( (-a^{\dagger}_j a^{\dagger}_i a_i +a^{\dagger}_j)\pm (-a^{\dagger}_i a^{\dagger}_j a_j + a^{\dagger}_i) ) \Gamma_{s} \ket{0}\\
&= (a^{\dagger}_j\pm a^{\dagger}_i ) \Gamma_{s} \ket{0}\\ h_{ij} \ket{\phi_{ij}^{\pm}} &= \pm (a^{\dagger}_i\pm a^{\dagger}_j ) \Gamma_{s} \ket{0} \end{align}
Consider a set of pairs of indices $\omega \subseteq \Omega$. Choose an ordering on $\omega$ and define \begin{align}
\ket{\phi_{\omega}^b} = \prod_{ij \in \omega } (a^{\dagger}_i +(-1)^{b_{ij}}a^{\dagger}_j) \ket{0} \end{align}
with $b$ a bit-string indexed by $ij$. Note that $N \ket{\phi_{\omega}^b} = \vert \omega \vert \ket{\phi_{\omega}^b}$. We now argue that $b$ can always be chosen such that:
\begin{align}
H_{\Omega} \ket{\phi_{\omega}^b} = \vert \omega \vert \ket{\phi_{\omega}^b}.
\end{align} Choose a pair $ij \in \omega$; the state $\ket{\phi^b_{\omega}}$ can then be expressed as \begin{align}
\ket{\phi^b_{\omega}} = (\delta_i a^{\dagger}_i + (-1)^{b_{ij}} \delta_j a^{\dagger}_j) \Gamma_s \ket{0},\; \delta_i,\delta_j \in \{-1,1\} \end{align} with $\Gamma_{s}$ composed of creation and annihilation operators which do not include $i$ or $j$. So \begin{align}
\ket{\phi^b_{\omega}} = \delta_i \ket{\phi_{ij}^{\Delta_{ij}}}\;\;,\Delta_{ij} = \delta_i \delta_j (-1)^{b_{ij}}.
\end{align} Let us choose $b_{ij}$ such that $\Delta_{ij} = 1$. Noting that $\Delta_{ij}$ is independent of $\Delta_{pq}$ when $pq \neq ij$ we can do the same for all other $b_{pq}$. This gives: \begin{align}
H_{\Omega} \ket{\phi_{\omega}^b} &=\sum_{ij \in \omega} h_{ij}\delta_i \ket{\phi_{ij}^{+}} \\ H_{\Omega} \ket{\phi_{\omega}^b}&=\sum_{ij \in \omega} \delta_i \ket{\phi_{ij}^{+}}\\ H_{\Omega} \ket{\phi_{\omega}^b}&=\sum_{ij \in \omega} \ket{\phi_{\omega}^b} \\ H_{\Omega} \ket{\phi_{\omega}^b} &= \vert \omega \vert \ket{\phi_{\omega}^b}, \end{align} where the pairs $ij \in \Omega \setminus \omega$ annihilate the state, since their modes are unoccupied. Note that $n=\vert \omega \vert \leq \vert \Omega \vert$ and $\vert \Omega \vert \leq M/2$, and so the bound is shown to be tight in the case where $\min(n, M-n, \vert \Omega \vert)=n$.
If we consider the case where $\min(n, M-n, \vert \Omega \vert)=\vert \Omega \vert$, then we may always choose $\omega$ to be a set of pairs of indices, no two of which share an index, such that $\Omega \subseteq \omega$. In this case, by a similar argument \begin{align}
H_{\Omega} \ket{\phi_{\omega}^b} = \vert \Omega \vert \ket{\phi_{\omega}^b}. \end{align}
Finally, in the case where $\min(n, M-n, \vert \Omega \vert)=M-n$ one may choose the particle-hole symmetric state \begin{align}
\ket{\tilde{\phi}^b_{\omega}} = \prod_{ij \in \omega } (a_i +(-1)^{b_{ij}}a_j) \prod_{k=1}^M a^{\dagger}_k \ket{0} \end{align} and a similar argument follows by particle-hole symmetry. \end{proof}
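The bound of \Cref{thm:norm-bound} can also be checked numerically on a small instance; the sketch below builds $M=4$ Jordan-Wigner modes with two disjoint hopping pairs and verifies that the largest magnitude of $\bra{\psi} H_{\Omega} \ket{\psi}$ in each fixed-number sector equals $\min(n, M-n, \vert\Omega\vert)$:

```python
import numpy as np

# Numerical check of the min(n, M-n, |Omega|) bound on a small instance:
# M = 4 Jordan-Wigner modes, disjoint hopping pairs Omega = {(0,1), (2,3)}.
M = 4
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
lower = np.array([[0.0, 1.0], [0.0, 0.0]])   # |0><1|, annihilates |1>

def annihilate(i):
    """Jordan-Wigner annihilation operator a_i on M modes."""
    ops = [Z] * i + [lower] + [I2] * (M - i - 1)
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

a = [annihilate(i) for i in range(M)]
num = sum(ai.conj().T @ ai for ai in a)       # total number operator
omega = [(0, 1), (2, 3)]
H = sum(a[i].conj().T @ a[j] + a[j].conj().T @ a[i] for i, j in omega)

occ = np.rint(np.diag(num)).astype(int)       # N is diagonal in Fock basis
for n in range(M + 1):
    sector = np.where(occ == n)[0]
    block = H[np.ix_(sector, sector)]         # H conserves particle number
    largest = np.max(np.abs(np.linalg.eigvalsh(block)))
    assert np.isclose(largest, min(n, M - n, len(omega)))
print("bound tight in every particle sector")
```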
\section*{Simulating Fermi-Hubbard via Sub-Circuit Algorithms}\label{sec:FHanalysis} \subsection*{Overview and Benchmarking of Analysis} In the following sections we primarily adopt the per-time error model and associated metric for costing circuits, \Cref{def:time-cost}. We first establish asymptotic bounds on the run-time $\cost$ of performing a time-dynamics simulation of a 2D spinful Fermi-Hubbard Hamiltonian using a $p$\textsuperscript{th}-order Trotter formula with $M=5$ Trotter layers, for a target time $T$ and target error $\epsilon_t$. We perform this analysis for both the compact and VC encodings, and the results are summarised in \cref{th:FH-optimum-analogue}.
We first want to compare sub-circuit against standard circuit decompositions in a per-time error model, in conjunction with our Trotter bounds. To this end we establish analytic bounds for the same simulation task, first using the standard conjugation method to generate evolution under higher-weight interactions using only standard CNOT gates and single-qubit rotations, as opposed to a sub-circuit pulse sequence. We choose this method as it does not introduce any unfair and needless analytic error into the comparison. We decompose the Trotter steps into a standard gate set of CNOTs and single-qubit rotations, where CNOTs are gates of the form $\ee^{\pm \ii \pi/4 ZZ}$ up to single-qubit rotations. We cost this with the same metric using a per-time error model, but do not allow the comparison to contain any gates of the form $\ee^{\pm \ii \delta ZZ}$, as this would constitute a sub-circuit gate.
In this comparative analytic expression we still use our Trotter error bounds, so \cref{th:FH-optimum-dig} only serves to evaluate the impact of differing Trotter step synthesis methods on the asymptotic scaling of the run-time $\cost$ in a per-time error model.
Later in this section we perform a tighter numerical analysis of both our proposal and our standard circuit model comparison. In these numerics we compare our Trotter bounds to readily applicable bounds from the literature \cite[Prop.~F.4.]{Childs2017}. We point out that these bounds do not exploit the underlying structure of the Hamiltonian or make use of the recent advances of \cite{Childs2019,Childsnew2019}. However, these bounds contain all constants, are applicable to 2D lattices, can easily be evaluated for arbitrary $p$, and allow us to make use of \cref{thm:norm-bound}. We were able to compare our bounds to \cite{Childs2019} for the case of a simple 1D lattice and establish that our bounds are preferable for medium system sizes, though not in asymptotic limits of system size, in line with our intention of reformulating bounds for NISQ applications.
\begin{figure}\label{fig:bounds-comparison}
\end{figure}
After this comparison in the framework of a per-time error model, we numerically analyse the impact of sub-circuit techniques in the per-gate error model, calculating the circuit depth of sub-circuit Trotter simulations. This is shown in \Cref{fig:cost-numeric-delta-ignore-pulse-lengths,fig:cost-analytic-delta-ignore-pulse-lengths}.
\subsection*{The Fermi-Hubbard Hamiltonian and Fermionic Encodings} We consider a Fermi-Hubbard model on a 2D lattice of $N = L \times L$ fermionic sites. There is hopping between nearest neighbours only and on-site interactions between fermions of opposite spin. In terms of fermionic creation and annihilation operators the Hamiltonian for this system is \begin{equation}\label{def:unencoded-H} \op H_{\text{FH}} \coloneqq \sum_{i=1}^{N} \op h_{\text{on-site}}^{(i)} \ + \sum_{i<j,\sigma} \op h_{\text{hopping}}^{(i,j,\sigma)} \coloneqq \os \sum_{i=1}^{N} a^{\dagger}_{i \uparrow} a_{i \uparrow} a^{\dagger}_{i \downarrow} a_{i \downarrow} + \hop \sum_{i<j,\sigma} \left(a^{\dagger}_{i \sigma} a_{j \sigma} + a^{\dagger}_{j\sigma} a_{i \sigma}\right), \end{equation}
where $ \sigma \in \{ \uparrow, \downarrow \}$ and the sum over hopping terms runs over all nearest neighbour fermionic lattice sites $i$ and $j$. The interaction strengths are $\os$ and $\hop$; we assume that $\hop = 1$ and that both are bounded as $|\hop|,|\os| \leq r$. Before we proceed we have to choose how to encode this Hamiltonian in terms of spin operators. The choice of encoding has a significant impact on the run-time of the simulation. There are many encodings in the literature \cite{Jordan1928} but we will only analyse two: the Verstraete-Cirac (VC) encoding \cite{Verstraete2005} and the recent compact encoding from \cite{DK}.
We choose our encoding in order to minimise the maximum Pauli weight of the encoded interaction terms. Using the VC and compact encodings this is constant at weight-$4$ and weight-$3$ respectively. In comparison, the Jordan-Wigner encoding results in a maximum Pauli weight of the encoded interaction terms that scales with the lattice size as $\BigO\left(L\right)$, the Bravyi-Kitaev encoding \cite{Bravyi2002} has interaction terms of weight $\BigO\left(\log L\right)$, and the Bravyi-Kitaev superfast encoding \cite{Bravyi2002} results in weight-$8$.
The encodings require the addition of ancillary qubits, as well as two separate lattices encoding spin-up and spin-down fermions. For the VC encoding, $4L^2$ qubits are needed to encode $L^2$ fermionic sites. In contrast, the compact encoding requires $(L-1)^2$ ancillary qubits and $2L^2$ data qubits. The layout of these ancillary qubits is indicated in \cref{fig:ordering}. Note that we must also choose an ordering of the lattice sites; this is also indicated in \cref{fig:ordering}.
The two encodings map the Fermi-Hubbard Hamiltonian terms to interactions between qubits. In both encodings, on-site interaction terms become \begin{align}\label{eq:h-onsite}
\op h_{\text{on-site}}^{(i)} \rightarrow \frac{\os}{4} \left(I - Z_{i \uparrow}\right)\left(I - Z_{i \downarrow}\right). \end{align} Only the encoded hopping terms differ. The exact expressions for hopping interactions depend on whether two nearest neighbour fermionic sites are horizontally or vertically connected on the lattice. The horizontally connected hopping terms are encoded as \begin{align}
\op h_{\text{hopping,hor}}^{(i,j,\sigma)} \rightarrow \frac{1}{2}
\begin{cases}
X_{i,\sigma}Z_{i',\sigma}X_{j,\sigma}+Y_{i,\sigma}Z_{i',\sigma}Y_{j,\sigma} & \text{VC}\\
X_{i,\sigma}X_{j,\sigma}Y_{f_{ij}',\sigma}+Y_{i,\sigma}Y_{j,\sigma}Y_{f_{ij}',\sigma} & \text{compact}
\end{cases}, \end{align} while the vertically connected hopping terms are encoded as \begin{align}
\op h_{\text{hopping,vert}}^{(i,j,\sigma)} \rightarrow \frac{1}{2}
\begin{cases}
X_{i,\sigma}Y_{i',\sigma}Y_{j,\sigma}X_{j',\sigma}-Y_{i,\sigma}Y_{i',\sigma}X_{j,\sigma}X_{j',\sigma} & \text{VC} \\
(-1)^{g(i,j)}\left( X_{i,\sigma}X_{j,\sigma}X_{f_{ij}',\sigma}+Y_{i,\sigma}Y_{j,\sigma}X_{f_{ij}',\sigma}\right) & \text{compact}
\end{cases}. \end{align} In this notation $i$ labels the data qubit for lattice site $i$ and $\sigma$ its spin lattice. Primed indices such as $i'$ refer to ancillary qubits; these are illustrated in grey in \Cref{fig:ordering}. In the VC encoding there is an ancillary qubit for every site on each spin lattice. In the compact encoding the ancillary qubits are laid out in a checkerboard pattern on the faces of each spin lattice, and $f_{ij}'$ labels the ancillary qubit adjacent to $i$ and $j$. There is also a sign determined by $g(i,j) \in \{0,1\}$; the details can be found in \cite{DK}.
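As a quick consistency check, the two Pauli strings inside a single encoded hopping term commute with each other (they differ by anticommuting Paulis on exactly two qubits), so each hopping term can be exponentiated as a product of the two strings' rotations. A numerical sketch for one horizontal term of each encoding:

```python
import numpy as np

# Quick check that the two Pauli strings inside one encoded hopping term
# commute, on the qubits (i, j, ancilla) that the term touches.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def pauli_string(*ps):
    out = np.array([[1.0 + 0j]])
    for p in ps:
        out = np.kron(out, p)
    return out

# compact encoding, horizontal term: X_i X_j Y_f' and Y_i Y_j Y_f'
A, B = pauli_string(X, X, Y), pauli_string(Y, Y, Y)
assert np.allclose(A @ B, B @ A)

# VC encoding, horizontal term: X_i Z_i' X_j and Y_i Z_i' Y_j
C, D = pauli_string(X, Z, X), pauli_string(Y, Z, Y)
assert np.allclose(C @ D, D @ C)
print("Pauli strings within each hopping term commute")
```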
\begin{figure}
\caption{VC Encoding.}
\caption{compact Encoding.}
\caption{The ordering of qubits for $L=2$ and the layout of ancillary qubits (grey) for each encoding. This figure only depicts a single spin lattice. Additionally, the first compact ancillary qubit $f'_{12}$ is also labelled by $f'_{25}$, $f'_{56}$ and $f'_{16}$. Similarly for the other compact ancillary qubit.}
\label{fig:ordering}
\end{figure}
\subsection*{Choice of Trotter Layers} We group the interactions into $5$ Trotter layers. Every pair of interactions within a layer must be disjoint. Under the assumptions of \Cref{def:short-pulse-circuit} all interactions within a single layer can then be implemented in parallel. For both encodings the five layers consist of all on-site interactions $\op H_5$, two alternating layers of horizontal hopping interactions $\op H_1$ and $\op H_2$, and two alternating layers of vertical hopping interactions $\op H_3$ and $\op H_4$. Both cases are illustrated in \cref{fig:VC-layers-hor,fig:VC-layers-vert,fig:compact-layers-hor,fig:compact-layers-vert} for the case of $L=5$.
\begin{figure}\label{fig:os-layer}
\end{figure}
The on-site interaction terms are the same in both cases and do not involve any ancillary qubits. They are shown in \Cref{fig:os-layer}, where the ancillary qubits are consequently not depicted. The hopping terms all act within a single spin lattice. They are shown for the VC encoding in \Cref{fig:VC-layers-hor} and \Cref{fig:VC-layers-vert} for a single spin lattice, and for the compact encoding in \Cref{fig:compact-layers-hor} and \Cref{fig:compact-layers-vert}.
\begin{figure}
\caption{The interactions in $\op H_1$.}
\caption{The interactions in $\op H_2$}
\label{fig:VC-layers-hor}
\end{figure} \begin{figure}
\caption{The interactions in $\op H_3$.}
\caption{The interactions in $\op H_4$.}
\label{fig:VC-layers-vert}
\end{figure}
The alternating horizontal layers and alternating vertical layers are chosen to ensure that all pairs of interactions are disjoint and not just commuting. Note that we could have chosen to lay out the alternating horizontal and vertical layers in the VC encoding in the same fashion as the compact encoding, as depicted in \Cref{fig:compact-layers-hor} or \Cref{fig:compact-layers-vert}.
These are not the only possible choices of Trotter layers. In a later Supplementary Method called ``Regrouping Interaction Terms'' we show that we can implement a $p$\textsuperscript{th}-order product formula with only $3$ Trotter layers. We do this for the compact encoding only, as it is particularly neat there. This holds despite grouping the interactions in a way where not all interactions within a layer commute with one another; a combination of this and the previous results still enables us to directly implement each layer without incurring any further analytic error.
\begin{figure}
\caption{The interactions in $\op H_1$.}
\caption{The interactions in $\op H_2$.}
\label{fig:compact-layers-hor}
\end{figure} \begin{figure}
\caption{The interactions in $\op H_3$.}
\caption{The interactions in $\op H_4$.}
\label{fig:compact-layers-vert}
\end{figure}
The norm of these layers appears in the Trotter bounds we derive. We bound these as $ \| \op H_i \| \leq \Lambda$ for all $i$. In \Cref{thm:norm-bound} it is shown that $\Lambda$ can be related to fermion number, and this fact is used to obtain tighter bounds on the Trotter error in the numerics we perform. We confine ourselves to a sector of $5$ fermions in all our numerical calculations. We do this both for pragmatic reasons, since the Hilbert space dimension is just large enough to be classically hard, and because for a $5 \times 5$ lattice roughly quarter filling already gives rise to interesting crossover phenomena \cite{Keller2001,Kaneko2014}. We also leave $\Lambda$ explicit in our analytic bounds so that we can explore different parameter regions in later work.
\subsection*{Analytic Run-Time Bounds for Simulating Fermi Hubbard} Now we can proceed with obtaining analytic bounds on the run-time of this simulation for each encoding. Throughout this section we assume a per-time error model. For the recursive product formula in \cref{eq:P-2k-ap} with either $p=1$ or $p=2k$ for $k\ge1$ and $M$ non-commuting Hamiltonian layers $\op H=\sum_{i=1}^M \op H_i$, the cost of the simulation in terms of the single most expensive Trotter layer is \begin{align}\label{eq:trotter-time} \cost\left( \P_p\left(\delta\right)^{T/\delta} \right) \le \frac{MT}{\delta} \times \cost\left(\op U_\mathrm{max}\left(\op H,\delta B_p\right)\right) \times \begin{cases}
1& p=1 \\
2\times 5^{p/2-1} & \text{$p=2k$ for $k\ge1$}, \end{cases} \end{align} where $\op U_\mathrm{max}\left(\op H, \tau \right) \coloneqq \argmax_{ \op U_i } \{ \cost\left( \op U_i \right) \}$ for $\op U_i \coloneqq \exp\left(\ii \tau \op H_i\right)$ and $B_p$ given in \cref{rem:bounds-on-trotter-coefficients-ap}. This follows from the definitions of the product formula in \cref{eq:P-2k-ap}.
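As an illustrative helper (not part of the paper's own numerics), the right-hand side of \cref{eq:trotter-time} can be evaluated directly; the recursion factor is $1$ for $p=1$ and $2\times 5^{p/2-1}$ for even $p$:

```python
def trotter_cost_bound(M, T, delta, cost_umax, p):
    """Right-hand side of the displayed cost bound: (M T / delta) times the
    cost of the most expensive layer, times the recursive-formula factor."""
    if p == 1:
        layer_factor = 1
    elif p >= 2 and p % 2 == 0:
        layer_factor = 2 * 5 ** (p // 2 - 1)
    else:
        raise ValueError("p must be 1 or an even integer 2k")
    return (M * T / delta) * cost_umax * layer_factor
```

For instance, with $M=5$ layers, $T=7$, $\delta=0.5$ and unit layer cost, a second-order formula gives a bound of $140$, and a fourth-order formula a bound of $700$.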
We proceed by obtaining bounds on the run-time of the most costly Trotter layer in each encoding. This expression depends on whether we use our methods (summarised in the main text in \cref{eq:intro-gatedec-3,eq:intro-gatedec-4}) or the conjugation method (see \cref{eq:conj-method} in the main text) to implement each Trotter step.
For the VC encoding the most costly layers will be the vertical hopping layers $\op H_{3}$ and $\op H_{4}$. As they are both a sum of disjoint terms which we assume can be performed in parallel, this is simply given by the cost of implementing a single vertical hopping interaction. Remembering that the interaction strengths satisfy $| \hop |$, $| \os| \leq r$, we bound this as \begin{align}
\cost \left(\op U_\mathrm{max}\left(\op H_{\text{VC}}, \delta B_p \right) \right)&\leq \cost\left( \ee^{\ii B_p \delta \frac{r}{2} \left( XYYX-YYXX\right) } \right) \\
& = 2 \cost\left( \ee^{\ii B_p \delta \frac{r}{2} Z^{\otimes 4} } \right) \\
&\leq 2 \times \begin{cases}
7 ( \frac{r}{2} B_p \delta)^{1/3} & \text{sub-circuit}\\
6 \left(\frac{\pi}{4}\right) & \text{standard}
\end{cases} \end{align} The second step follows because the two Pauli terms in the interaction commute, allowing them to be performed sequentially. The same is true for the compact encoding, and so we have \begin{align}
\cost \left(\op U_\mathrm{max}\left(\op H_{\text{compact}}, \delta B_p \right) \right)&\leq\cost\left( \ee^{\ii B_p \delta \frac{r}{2} \left( XXY+YYY\right) } \right) \\
& = 2 \cost\left( \ee^{\ii B_p \delta \frac{r}{2} Z^{\otimes 3} } \right)\\
& \leq 2 \times\begin{cases}
2 ( 2 \frac{r}{2} B_p \delta)^{1/2} & \text{sub-circuit}\\
4 \left(\frac{\pi}{4}\right) & \text{standard}
\end{cases} \end{align} The final expressions now depend only on how we decompose local Trotter steps, either in terms of CNOT gates and single qubit rotations or using circuits such as those in \cref{fig:circuits}. The concrete bounds on $\cost \left(\op U_\mathrm{max}\left(\op H, \delta B_p \right) \right)$ are summarised and simplified in \cref{tab:maxInteractionCost}. \begin{figure}
\caption{Cost of implementing the highest weight interaction term in the encoded Fermi-Hubbard Hamiltonian in a per-time error model. Decomposing a $k$-local evolution in terms of the standard CNOT conjugation method has overhead $2 (k-1) \times \pi/4$. The overhead associated with sub-circuit synthesis follows from \cref{eq:decomp-bounds}. }
\label{tab:maxInteractionCost}
\end{figure}
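As noted above, the two Pauli strings making up each hopping interaction commute (they anti-commute on an even number of sites); a quick numerical check with numpy (illustrative, not part of the derivation) confirms this for both encodings:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def pauli_string(*ops):
    """Tensor product of single-qubit Paulis."""
    out = np.array([[1]], dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# VC vertical hopping: XYYX and YYXX anti-commute on exactly two sites,
# so the full strings commute
A, B = pauli_string(X, Y, Y, X), pauli_string(Y, Y, X, X)
assert np.allclose(A @ B, B @ A)
# compact hopping: XXY and YYY likewise commute
C, D = pauli_string(X, X, Y), pauli_string(Y, Y, Y)
assert np.allclose(C @ D, D @ C)
```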
Substituting the bounds in \cref{tab:maxInteractionCost} into \cref{eq:trotter-time} results in run-times scaling as $\BigO(\delta^{-1})$ and $\BigO(\delta^{\frac{2-k}{k-1}} )$, assuming decomposition via \cref{eq:conj-method} or \cref{eq:intro-gatedec-3,eq:intro-gatedec-4} in the main text, respectively. As both of these expressions diverge as $\delta \rightarrow 0$, it is optimal to maximise $\delta$ with respect to an allowable analytic Trotter error $\epsilon_t$. This is captured in the following lemma, which uses the simplest bounds on Trotter error established previously.
\begin{lemma}[Optimal FH $\delta$]\label{lem:FH-max-delta} For a target error rate $\epsilon_t$, the maximum Trotter step for a $p$\textsuperscript{th} formula saturating the error bound in \Cref{th:trotter-error-ap} is \begin{align}
\delta_0 = \left(\frac{\epsilon_t}{T M^{p+1} \Lambda^{p+1}}\right)^{1/p} \times \begin{cases}
\displaystyle 1 & p=1 \\[10pt]
\displaystyle \left(\frac{(p+1)!}{2}\right)^{1/p} \left( \frac{3}{10} \right)^{p/2-1/2-1/p} & p=2k\ \text{for}\ k\ge 1.
\end{cases} \end{align} \end{lemma} \begin{proof} Follows from \cref{th:trotter-error-ap} by solving for $\delta$. \end{proof}
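A minimal sketch of \Cref{lem:FH-max-delta} as code (assuming the closed form above; for illustration only):

```python
import math

def delta_0(eps_t, T, M, Lam, p):
    """Maximum Trotter step saturating the analytic error bound
    (sketch of the closed form in the lemma above)."""
    base = (eps_t / (T * M ** (p + 1) * Lam ** (p + 1))) ** (1.0 / p)
    if p == 1:
        return base
    # p = 2k for k >= 1
    corr = (math.factorial(p + 1) / 2) ** (1.0 / p) \
        * (3.0 / 10.0) ** (p / 2.0 - 0.5 - 1.0 / p)
    return base * corr
```

For example, with $\epsilon_t=0.1$, $T=7$, $M=5$, $\Lambda=5$ and $p=1$ this gives $\delta_0 = 0.1/4375 \approx 2.3\times10^{-5}$.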
Now we can obtain the final analytic bounds on the total run-time of simulating the Fermi-Hubbard Hamiltonian for each of these four cases.
\begin{corollary}[Standard-circuit Minimised Run-time]\label{th:FH-optimum-dig} If standard synthesis techniques are used to implement local Trotter steps in terms of CNOT gates and single-qubit rotations with an optimal Trotter step size $\delta_0$ saturating \Cref{lem:FH-max-delta}, the simulation cost for the Fermi-Hubbard Hamiltonian with a $p$\textsuperscript{th} order Trotter formula with maximum error $\epsilon_t$ is as follows \begin{align} \cost(\P_p(\delta_0)^{T/\delta_0}) &\leq
\begin{cases}
f_p \ M^{2 + \frac{1}{p}} \Lambda^{1 + \frac{1}{p}} T^{1+\frac{1}{p}} \epsilon^{-1/p}_t & \text{VC} \\
g_p \ M^{2 + \frac{1}{p}} \Lambda^{1 + \frac{1}{p}} T^{1+\frac{1}{p}} \epsilon^{-1/p}_t & \text{compact}
\end{cases} \end{align} with \begin{align}
f_p
&=3 \pi \times \begin{cases}
\displaystyle 1 & p=1 \\[10pt]
\displaystyle 2^{\frac{p+1}{2}} 3^{-\frac{p}{2}+\frac{1}{p}+\frac{1}{2}} 5^{p-\frac{1}{p}-\frac{3}{2}} (p+1)!^{-\frac{1}{p}} & p=2k\ \text{for}\ k\ge 1
\end{cases} \end{align} and \begin{align}
g_p
&=2 \pi \times \begin{cases}
1 & p=1 \\[10pt]
\displaystyle 2^{\frac{p+1}{2}} 3^{-\frac{p}{2}+\frac{1}{p}+\frac{1}{2}} 5^{p-\frac{1}{p}-\frac{3}{2}} (p+1)!^{-\frac{1}{p}} & p=2k\ \text{for}\ k\ge 1.
\end{cases} \end{align} \end{corollary} \begin{proof} The proof follows by choosing $\delta$ such that the bound obtained in \cref{lem:FH-max-delta} is saturated and substituting this and the respective expressions in \cref{tab:maxInteractionCost} into \Cref{eq:trotter-time}. \end{proof}
\begin{corollary}[Sub-Circuit Minimised Run-time]\label{th:FH-optimum-analogue} If sub-circuit synthesis techniques are used to implement local Trotter steps with an optimal Trotter step size $\delta_0$ saturating \Cref{lem:FH-max-delta}, the simulation cost for the Fermi-Hubbard Hamiltonian with a $p$\textsuperscript{th} order Trotter formula with maximum error $\epsilon_t$ is as follows \begin{align} \cost(\P_p(\delta_0)^{T/\delta_0}) &\leq
\begin{cases}
f_p \ r^{1/3} M^{5/3 + 2/(3p)} \Lambda^{2/3 + 2/(3p)} T^{1+2/(3p)} \epsilon^{-2/(3p)}_t & \text{VC} \\
g_p \ r^{1/2} M^{3/2 + 1/(2p)} \Lambda^{1/2 +1/(2p)} T^{1 + 1/(2p)} \epsilon^{-1/(2p)}_t & \text{compact}
\end{cases} \end{align} with \begin{align}
f_p
&=12 \times \begin{cases}
\displaystyle 1 & p=1 \\[10pt]
\displaystyle 2^{\frac{p}{2}} 3^{\frac{1}{6} \left(-3 p+\frac{4}{p}+4\right)} 5^{\frac{1}{6} \left(5 p-\frac{4}{p}-8\right)} (p+1)!^{-\frac{2}{3 p}} & p=2k\ \text{for}\ k\ge 1
\end{cases} \end{align} and \begin{align}
g_p
&=4 \times \begin{cases}
1 & p=1 \\[10pt]
\displaystyle 2^{\frac{p}{2}-\frac{1}{4}} 3^{\frac{1}{4} \left(-2 p+\frac{2}{p}+3\right)} 5^{\frac{1}{4} \left(3 p-\frac{2}{p}-5\right)} (p+1)!^{-\frac{1}{2 p}} & p=2k\ \text{for}\ k\ge 1.
\end{cases} \end{align} \end{corollary} \begin{proof} The proof follows by choosing $\delta$ such that the bound obtained in \Cref{lem:FH-max-delta} is saturated and substituting this and the respective expressions in \cref{tab:maxInteractionCost} into \Cref{eq:trotter-time}. \end{proof}
\subsection*{Trivial Stochastic Error Bound}\label{sec:stochasticerror}
\begin{figure}
\caption{Saturated circuit model with intermediate errors $\mathcal E$, e.g.\ depolarizing noise $\mathcal E = \mathcal N_q$ for some noise parameter $q$ given in \cref{eq:dep-channel}. At the end of the circuit, an observable $\mathcal O$ is measured. Drawn is a one-dimensional circuit; naturally, a similar setup can be derived for a circuit on a $2$-dimensional qudit lattice, for interactions shown in \cref{fig:compact-layers-hor,fig:compact-layers-vert,fig:VC-layers-hor,fig:VC-layers-vert,fig:os-layer}.}
\label{fig:circuit-dense-2}
\end{figure}
So far we have only considered the unitary error introduced by approximating the real Hamiltonian evolution with a Trotterized approximation. However, in a near-term quantum device without error correction in place, we expect the simulated evolution to be noisy. We model the noise by interspersing each circuit gate in the product formula with an iid channel $\mathcal E$; for simplicity we will assume that $\mathcal E$ is a single qubit depolarising channel $\mathcal E = \mathcal N_q$, defined as \begin{equation}\label{eq:dep-channel}
\mathcal N_q=(1-q)I+q \mathcal T,
\quad
\rho \longmapsto (1-q)\rho + \frac{q}{d}I \end{equation} for a noise parameter $q\in[0,1]$.\footnote{Strictly speaking $\mathcal N_q$ defines a completely positive trace preserving map for all $q\le1+1/(d^2-1)$. We emphasise that the error analysis which follows also works for a more general channel than the depolarising one.} Here $I$ denotes the identity channel and $\mathcal T$ takes any state to the maximally mixed state $\tau=I/d$.
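For concreteness, a minimal implementation of the channel in \cref{eq:dep-channel} acting on a density matrix (illustrative only):

```python
import numpy as np

def depolarize(rho, q):
    """Depolarising channel: rho -> (1-q) rho + (q/d) I."""
    d = rho.shape[0]
    return (1 - q) * rho + q * np.eye(d) / d

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
out = depolarize(rho, 0.2)
assert np.isclose(np.trace(out).real, 1.0)        # trace preserving
assert np.allclose(out, np.diag([0.9, 0.1]))      # mixes toward I/2
```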
A trivial error bound for a circuit as in \cref{fig:circuit-dense-2} can then be found by just calculating the probability of no error occurring at all; disregarding the beneficial effects of a causal lightcone behind the observable $\mathcal O$, and denoting with $\mathcal U := \op U_\mathrm{circ}^\dagger \cdot \op U_\mathrm{circ}$ the clean circuit, and with $\mathcal U'$ the circuit saturated with intermediate errors, we get the expression \begin{align}\label{eq:trivial-error}
\epsilon = \left|\Tr\left[ (\mathcal U(\rho) - \mathcal U'(\rho))\mathcal O \right]\right|
\le 1 - (1-q)^V, \end{align} where $V$ is the circuit's volume (i.e.\ the number of $\mathcal E$ interspersed in $\mathcal U'$). It is easy to see that this error bound asymptotically approaches $1$, and does so exponentially quickly. Thus, to stay below a target error rate $\epsilon_\text{tar}$, a sufficient condition is that \begin{align}
\label{eq:err-V-bound}
1-(1-q)^V < \epsilon_\text{tar}
\quad&\Longleftrightarrow\quad
V < \frac{\log\left(1-\epsilon_\text{tar}\right)}{\log\left(1-q\right)},
\\ \intertext{or alternatively}
\label{eq:err-q-bound}
&\Longleftrightarrow\quad
q < 1 - \sqrt[V]{1-\epsilon_\text{tar}}. \end{align}
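Both conditions can be solved directly from $1-(1-q)^V < \epsilon_\text{tar}$; the following snippet (illustrative) computes the largest admissible circuit volume $V$ and noise parameter $q$:

```python
import math

def max_volume(q, eps_tar):
    """Largest circuit volume V for which 1 - (1-q)^V <= eps_tar."""
    return math.log(1 - eps_tar) / math.log(1 - q)

def max_noise(V, eps_tar):
    """Largest noise parameter q for which 1 - (1-q)^V <= eps_tar."""
    return 1 - (1 - eps_tar) ** (1.0 / V)

V = max_volume(q=1e-4, eps_tar=0.1)
assert 1 - (1 - 1e-4) ** V <= 0.1 + 1e-12    # bound saturated at V
q = max_noise(V=1000, eps_tar=0.1)
assert 1 - (1 - q) ** 1000 <= 0.1 + 1e-12
```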
Instead of assuming that each error channel $\mathcal E$ in \cref{fig:circuit-dense-2} has the same error probability $q$, we can analyse the case where $q$ is proportional to the pulse length of the preceding or following gate; corresponding relations as given in \cref{eq:err-V-bound,eq:err-q-bound} can readily be derived numerically.
\subsection*{Error Mapping under Fermionic Encodings}\label{sec:2ndorder}
In \cite{Error-Mapping}, the authors analyse how noise on the physical qubits translates to errors in the fermionic code space. To first order and in the W3 encoding, all of $\{X, Y, Z\}$ errors on the face, and $\{X, Y\}$ on the vertex qubits can be detected. $Z$ errors on the vertex qubits -- as evident from the form of $\op h_\mathrm{on-site}$ from \cref{eq:h-onsite} -- result in an undetectable error; as shown in \cite[Sec.~3.2]{Error-Mapping}, this $Z$ error induces fermionic phase noise.
It is therefore a natural extension to the notion of simulation to allow for some errors to occur -- if they correspond to physical noise in the fermionic space. And indeed, as discussed more extensively in \cite[Sec.~2.4]{Error-Mapping}, phase noise is a natural setting for many fermionic condensed matter systems coupled to a phonon bath \cite{Ng2015,Kauch2020,Zhao2017,Melnikov2016,Openov2005,Fedichkin2004,Scully1993} and \cite[Ch.~6.1\&eq.~6.17]{Wellington2014}.
We further assume that we can measure all stabilisers (including a global parity operator) once at the end of the entire circuit. We could imagine measuring these stabilisers after each individual gate of the form $\ee^{\ii \delta h_i}$ -- where $h_i$ is any term in the Hamiltonian. However, as every stabiliser commutes with every term in the Hamiltonian, the outcome of the stabiliser measurement is unaffected, and so we need only measure all stabilisers once at the end of the entire circuit. It is evident that measuring the stabilisers can be done by dovetailing an at most depth $4$ circuit to the end of our simulation -- much like measuring the stabilisers of the Toric code. It is thus a negligible overhead to the cost of simulation $\cost$.
However, errors may occur within the decomposition of gates of the form $\ee^{\ii \delta h_i}$ into single qubit rotations and two-qubit gates of the form $\ee^{\ii t ZZ}$. The stabilisers do not generally commute with these one- and two-local gates. In spite of this we can commute a Pauli error which occurs within a decomposition past the respective gates that make up that decomposition. For example, consider $h_i = X_1 Z_2 X_3$. Then we would decompose $\ee^{\ii \delta h_i}$ first via $\ee^{\ii \delta X_1 Z_2 X_3} \approx \ee^{\ii \delta^{1/2} X_1 X_2}\ee^{-\ii \delta^{1/2} Y_2 X_3}\ee^{-\ii \delta^{1/2} X_1 X_2}\ee^{\ii \delta^{1/2} Y_2 X_3}$ and then furthermore using identities such as $\ee^{\ii \delta^{1/2} X_1 X_2} = H_1 H_2 \ee^{\ii \delta^{1/2} Z_1 Z_2} H_1 H_2$. Suppose a Pauli X error were to occur on the second qubit during one of these steps, for example leading us instead to perform the circuit $H_1 H_2 \ee^{\ii \delta^{1/2} Z_1 Z_2} X_2 H_1 H_2 \ee^{-\ii \delta^{1/2} Y_2 X_3}\ee^{-\ii \delta^{1/2} X_1 X_2}\ee^{\ii \delta^{1/2} Y_2 X_3}$. By commuting the error $X_2$ to the end of the circuit we can see that this is still $\mathcal{O}(\delta^{1/2})$ close to performing $Z_2 \ee^{\ii \delta X_1 Z_2 X_3}$, as $H Z = X H$, which is an error we can detect as previously discussed. As a Pauli Z error is equally likely to occur, this does not introduce any noise bias. Therefore in cases where $\delta$ is sufficiently small, we can use error detection to reduce the fidelity requirements needed in the device by accounting for this additional error term. This is done numerically and shown in the dashed lines of \Cref{fig:cost-analytic-delta-ignore-pulse-lengths,fig:cost-analytic-delta}.
This means that while the fermionic encodings do not provide error correction, they do allow error detection to some extent; we summarise all first order error mappings in \cref{tab:error-mapping}. We can therefore numerically simulate the occurrence of depolarising noise throughout the circuit, map the errors to their respective syndromes, and classify the resulting detectable errors, as well as non-detectable phase and non-phase noise.
We can then analyse $\cost$ with a demonstrably suppressed error, by allowing the non-detectable non-phase noise to saturate a target error bound. The resulting simulation corresponds to a faithful simulation of the fermionic system in which fermionic phase errors are allowed to occur; we emphasise that since detectable non-phase errors occur with roughly the same probability as non-detectable phase errors, in expectation only $O(1)$ phase errors occur throughout the simulation. In brief, it is not a very noisy simulation after all.
We summarise the required depolarising noise parameters for various FH setups in \cref{fig:cost-analytic-delta,fig:cost-numeric-delta,fig:cost-analytic-delta-ignore-pulse-lengths,fig:cost-numeric-delta-ignore-pulse-lengths}, and the resulting post-selection probabilities in \cref{fig:postsel-probs}.
\begin{table}[t] \centering \begin{tabular}{rrl}
\toprule
Location & Syndrome & Effect \\
\midrule
\multirow{3}{*}{Vertex} & $X$ & detectable \\
 & $Y$ & detectable \\
 & $Z$ & phase noise \\
\midrule
\multirow{3}{*}{\shortstack[r]{Face\\(Ancilla)}} & $X$ & detectable \\
 & $Y$ & detectable \\
 & $Z$ & detectable \\
\bottomrule \end{tabular} \caption{Error mapping from first order physical noise to the encoded fermionic code space, under the W3 encoding, by \cite{Error-Mapping}. All but $Z$ errors on the vertices are detectable; the latter result in fermionic phase noise.}\label{tab:error-mapping} \end{table}
\subsection*{Numeric Results}\label{ap:numeric-results} We can tighten the preceding analysis in several ways. First, instead of crudely upper bounding the cost of individual gates, we can sum the pulse times exactly. To this end, we use both the explicitly defined Trotter formulae coefficients $h_{ij}$, and also the exact formulae for the pulse times derived previously.
Secondly, we can use tighter bounds for Trotter error, which take into account the commutation relations between pairs of interactions across Trotter layers $\op H_i$ and the coefficients defined in \Cref{rem:bounds-on-trotter-coefficients-ap}. Additionally, we obtain a bound which rewrites the Trotter error as a Taylor series and then bounds the Taylor remainder using methods which usually bound the total Trotter error; which bound is tighter depends on the order $p$ of the formula and the target simulation time. As explained in \cref{lem:FH-max-delta}, a tighter Trotter error bound allows us to choose a larger $\delta$ while achieving the same error $\epsilon_t$, reducing the total cost of the simulation.
Finally, as explained in the Supplementary Method entitled ``Trivial Stochastic Error Bound'', a certain simulation circuit size will determine the amount of stochastic error present within the circuit. We assume that the depolarising noise precedes each Trotter layer; in the error-per-time model we assume that the noise parameter scales with the pulse length; in the more traditional error-per-gate model we assume that the noise is independent of the pulse length.
Furthermore, instead of taking the analytic Trotter bounds, we can also numerically calculate the optimal Trotter pulse length by calculating $\epsilon_p(T,\delta)$ from \cref{eq:trotter-epsilon-ap} explicitly, and maximizing $\delta$ until $\epsilon_p$ saturates an upper target error rate. Naturally, this is computationally costly; so we perform this calculation for FH lattices up to a size $3\times 3$, and extrapolate these numbers by qubit count to the desired lattice lengths up to $10\times 10$; the dependence of $\delta_0$ on the target time and number of qubits was extracted from the asymptotic Trotter bound in \cref{lem:FH-max-delta}, i.e.\ \begin{equation}\label{eq:fitted-delta-dependency}
\delta_0 = \left(a_0 + \frac{b_0}{T^{1/p}}\right)\left(a_1 + \frac{b_1}{\Lambda^{(p+1)/p} } \right) \end{equation} for four fit parameters $a_0, a_1, b_0, b_1$; all other dependencies are assumed constant. We show such numerically fitted data in \cref{fig:numeric-extrapolation}; a similar analysis was made for Trotter orders $p=1, 2, 4, 6$, and target error rates $\epsilon_t = 0.1, 0.05, 0.01$. These numerical bounds are much tighter than the analytical bounds, but can still be assumed to overestimate the actual error: the operator norm distance between Trotter evolution and true evolution likely overestimates the error that would occur if starting from a particular initialized state. We found that randomizing the Trotter layer order does not yield any advantage over keeping it fixed, and similarly we did not obtain an advantage by permuting the fixed order further.
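As an illustration of the fitting step, the following sketch fits the functional form of \cref{eq:fitted-delta-dependency} to synthetic stand-in data (the parameter values and noise level here are hypothetical, not the paper's small-lattice numerics):

```python
import numpy as np
from scipy.optimize import curve_fit

def delta_model(X, a0, b0, a1, b1, p=2):
    """delta_0 = (a0 + b0 / T^{1/p}) * (a1 + b1 / Lambda^{(p+1)/p})."""
    T, Lam = X
    return (a0 + b0 / T ** (1.0 / p)) * (a1 + b1 / Lam ** ((p + 1.0) / p))

# synthetic stand-in data in place of the extracted delta_0 values
rng = np.random.default_rng(0)
T = np.repeat(np.linspace(1.0, 15.0, 10), 3)
Lam = np.tile([3.0, 5.0, 8.0], 10)
true = delta_model((T, Lam), 0.01, 0.05, 1.0, 0.5)
noisy = true * (1 + 0.01 * rng.standard_normal(T.size))
popt, _ = curve_fit(delta_model, (T, Lam), noisy, p0=[0.01, 0.05, 1.0, 0.5])
pred = delta_model((T, Lam), *popt)
```

Note the model is only identifiable up to an overall rescaling between the two factors, so it is the fitted surface, not the individual parameters, that is meaningful.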
\begin{figure}
\caption{Extrapolated optimal Trotter step size $\delta_0$.
The data points stem from the simulation of FH lattices up to size $3\times 3$, for a target error rate $\epsilon_t=0.1$, Trotter order $p=2$, and for times $T=0.5,1.0,\ldots,15.0$.
The fitted surface follows the formula in \cref{eq:fitted-delta-dependency}.}
\label{fig:numeric-extrapolation}
\end{figure}
\Cref{tab:NumericCost} compares these numerics for the case of a lattice with $L=5$, for a sufficiently long time in units set by the lattice spacing (we assume $\hop =1$): for any $L$ we choose $T= \lfloor \sqrt{2}L \rfloor$. We choose an analytic error of $\epsilon_t = 0.1$, as there is no point making the analytic error smaller than the experimental error present in NISQ-era gates.
\begin{table}[]
\centering
\begin{tabular}{c c c c}
\toprule
& Trotter Bounds & Standard & Subcircuit \\
\midrule
\multirow{2}{*}{VC}
& analytic & 95,409 & 17,100 \\
& numeric & 4,234 & 1,669 \\
\midrule
\multirow{2}{*}{compact}
& analytic & 77,236 & 1,686 \\
& numeric & 3,428 & 259 \\
\bottomrule
\end{tabular}
\caption{Per-time. A comparison of the run-time $\cost$ for lattice size $L\times L$ with $L=5$, overall simulation time $T=7$ and target Trotter error $\epsilon_\mathrm{target} = 0.1$, with $\Lambda=5$ fermions and coupling strengths $|\os|, |\hop|\le r=1$.
Obtained by minimising over product formulas up to $4$\textsuperscript{th} order.
$\cost=\cost(\P_p(\delta_0)^{T/\delta_0})$ for per-time error model.
In either gate decomposition case---standard and sub-circuit---we account single-qubit rotations as a free resource; the value of $\cost$ depends only on the two-qubit gates/interactions. Two-qubit unitaries are counted by their respective pulse lengths. Here compact and VC denote the choice of fermionic encoding.}
\label{tab:NumericCost} \end{table} \begin{table}[]
\centering
\begin{tabular}{c c c c}
\toprule
& Trotter Bounds & Standard & Subcircuit \\
\midrule
\multirow{2}{*}{VC}
& analytic & 121,478 & 95,447 \\
& numeric & 5391 & 4236 \\
\midrule
\multirow{2}{*}{compact}
& analytic & 98,339 & 72,308 \\
& numeric & 4,364 & 3,209 \\
\bottomrule
\end{tabular}
\caption{Per-gate. A comparison of the run-time $\cost$ for lattice size $L\times L$ with $L=5$, overall simulation time $T=7$ and target Trotter error $\epsilon_\mathrm{target} = 0.1$, with $\Lambda=5$ fermions and coupling strengths $|\os|, |\hop|\le r=1$.
Obtained by minimising over product formulas up to $4$\textsuperscript{th} order.
$\cost=$~circuit-depth for per-gate error model.
In either gate decomposition case---standard and sub-circuit---we account single-qubit rotations as a free resource; the value of $\cost$ depends only on the two-qubit gates/interactions. Two-qubit unitaries are counted by unit time per gate in the per gate error model. Here compact and VC denote the choice of fermionic encoding.} \end{table} As the compact encoding results in the smallest run-time, we investigate how $\cost$ varies with $\epsilon_t$ for $L=3$, $5$ and $10$ below. In these numerics we choose the order $p$ which minimises $\cost$ at each value of $T$.
\begin{figure}\label{fig:cost-numeric-delta-ignore-pulse-lengths}
\end{figure}
\begin{figure}\label{fig:cost-numeric-delta}
\end{figure}
\begin{figure}
\caption{Setup as in \cref{fig:cost-numeric-delta-ignore-pulse-lengths} with the per-gate error model, but we use the tightest analytic (instead of numeric) error expression from \cref{cor:trotter-error-ap,th:Trotter-Er-Commutator-ap,cor:taylor-error-bound-ap}.
We use a depth-3 gate decomposition when error mitigation is not used (left) and a depth-4 decomposition if error mitigation is used (right).}
\label{fig:cost-analytic-delta-ignore-pulse-lengths}
\end{figure}
\begin{figure}
\caption{Setup as in \cref{fig:cost-numeric-delta} with the per-time error model, but we use the tightest error expression from \cref{cor:trotter-error-ap,th:Trotter-Er-Commutator-ap,cor:taylor-error-bound-ap}. All 3-local gates in the compact encoding are decomposed with the depth-4 method from \cref{lem:Weight-Increase-Depth4-ap}.}
\label{fig:cost-analytic-delta}
\end{figure}
\begin{figure}
\caption{
Post-selection probabilities (left column) and probability of zero undetectable non-phase error after post-selection (right column) for lattice sizes $3\times3$, $5\times5$, and $10\times10$, for the Fermi-Hubbard Hamiltonian $H_\mathrm{FH}$ from \cref{eq:FH-H-intro} (main text) in the compact encoding, to go alongside \cref{fig:cost-analytic-delta}.
The choice between $1\%$, $5\%$ and $10\%$ Trotter error is made according to the colouring shown in \cref{fig:cost-analytic-delta}.
}
\label{fig:postsel-probs}
\end{figure} \section*{Simulating Fermi-Hubbard with Three Trotter Layers}\label{ap:regroup} \subsection*{Further Circuit Decompositions} In this section we show that we can actually simulate a 2D spin Fermi-Hubbard model with $M=3$ Trotter layers as opposed to the previous $M=5$. First we need to introduce another circuit decomposition in the same spirit as before. \begin{lemma}[Depth 3 Decomposition]\label{lem:Rank-Increase-Depth3-ap}
Let $\op U(t)=\ee^{\ii t \left(\cos(\theta) \op h_1 + \sin(\theta) \op h_2\right)}$ be the time-evolution operator for time $t$ under a Hamiltonian $\op H_{\theta}=\cos(\theta) \op h_1 + \sin(\theta) \op h_2$.
If $\op h_1$ and $\op h_2$ anti-commute and both square to identity, $\op U(t)$ can be decomposed as
\begin{align}
\op U(t) &=\ee^{\ii t_1 \op h_1} \ee^{\ii t_2 \op h_2} \ee^{\ii t_1 \op h_1}
\end{align}
where the pulse times $t_1,t_2$ as a function of the target time $t$ are given by
\begin{align}
t_1 &=\frac{1}{2} \tan^{-1}\left(\pm\frac{\cos (t)}{\sqrt{1-\sin^2(\theta ) \sin^2(t)}}, \; \pm\frac{\cos (\theta ) \sin (t)}{\sqrt{1-\sin^2(\theta ) \sin^2(t)}}\right)+\pi c\\
t_2 &=\tan^{-1}\left(\pm\sqrt{1-\sin^2(\theta ) \sin^2(t)}, \;\ \pm\sin (\theta ) \sin (t)\right)+2 \pi c
\end{align}
where $c \in \mathbb{Z}$ and signs are taken consistently throughout. \end{lemma} \begin{proof}
Since $\op h_1,\op h_2$ anti-commute and square to the identity by assumption, we have
\begin{align}
\ee^{\ii t_1 h_1} \ee^{\ii t_2 h_2} \ee^{\ii t_1 h_1} &= I \cos \left(2 t_1\right) \cos \left(t_2\right) + \ii \op h_1 \sin \left(2 t_1\right) \cos \left(t_2\right)+ \ii \op h_2 \sin \left(t_2\right),
\end{align}
and
\begin{align}
\ee^{\ii t \left(\cos(\theta) \op h_1 + \sin(\theta) \op h_2\right)} = I \cos \left(t \right) + \ii \sin \left(t \right) \left( \cos(\theta) \op h_1 + \sin(\theta) \op h_2 \right).
\end{align}
Equating these and solving for $t_1$ and $t_2$ gives the expressions in the \namecref{lem:Rank-Increase-Depth3-ap}. \end{proof} We then need to establish the overhead associated with implementing this decomposition. We will see that for a target time $t$, the pulse times in \cref{lem:Rank-Increase-Depth3-ap} scale as $t_i(t) \propto t$. \begin{lemma}\label{lem:Rank-Increase-Depth3-param-bounds-ap}
Let $\op H = \cos(\theta) \op h_1 + \sin(\theta) \op h_2$ be as in \cref{lem:Rank-Increase-Depth3-ap}.
For $0 \leq t \leq \pi/2$ and $0<\theta<\pi/2$, the pulse times $t_1,t_2$ in \cref{lem:Rank-Increase-Depth3-ap} can be bounded by
\begin{align}
|t_1| &\leq \frac{t}{2}, \\
|t_2| &\leq t \theta.
\end{align} \end{lemma} \begin{proof}
\Cref{lem:Rank-Increase-Depth3-ap} gives two valid sets of solutions for $t_1,t_2$.
Choose the following solution:
\begin{align}
t_1 &=\frac{1}{2} \tan^{-1}\left(\frac{\cos (t)}{\sqrt{1-\sin^2(\theta ) \sin^2(t)}},\frac{\cos (\theta ) \sin (t)}{\sqrt{1-\sin^2(\theta ) \sin^2(t)}}\right)\\
t_2 &=\tan^{-1}\left(\sqrt{1-\sin^2(\theta ) \sin^2(t)},\sin (\theta ) \sin (t)\right).
\end{align}
Taylor expanding these functions about $t=0$ and $\theta=0$, we have
\begin{align}
t_1 &=
\frac{t}{2} + R_1\left(t, \theta\right), \\
t_2 &=
t \theta + R_2\left(t, \theta\right).
\end{align}
Basic calculus shows that the Taylor remainders $R_1,R_2$ are always negative for the stated range of $t$, giving the stated bounds. \end{proof}
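The pulse-time formulas of \cref{lem:Rank-Increase-Depth3-ap} and the bounds of \cref{lem:Rank-Increase-Depth3-param-bounds-ap} can be checked numerically; the snippet below (illustrative, reading the two-argument $\tan^{-1}(x,y)$ as $\mathrm{atan2}(y,x)$, and taking the principal branch) verifies both for a single-qubit example with $\op h_1 = X$, $\op h_2 = Y$:

```python
import math
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

def pulse_times(t, theta):
    """Pulse times t1, t2 of the depth-3 decomposition (principal branch)."""
    r = math.sqrt(1 - math.sin(theta) ** 2 * math.sin(t) ** 2)
    t1 = 0.5 * math.atan2(math.cos(theta) * math.sin(t) / r, math.cos(t) / r)
    t2 = math.atan2(math.sin(theta) * math.sin(t), r)
    return t1, t2

# h1 = X, h2 = Y anti-commute and square to the identity
t, theta = 0.7, 0.4
t1, t2 = pulse_times(t, theta)
lhs = expm(1j * t1 * X) @ expm(1j * t2 * Y) @ expm(1j * t1 * X)
rhs = expm(1j * t * (math.cos(theta) * X + math.sin(theta) * Y))
assert np.allclose(lhs, rhs)
# pulse-time bounds from the second lemma
assert abs(t1) <= t / 2 + 1e-12 and abs(t2) <= t * theta + 1e-12
```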
\subsection*{Regrouping Interaction Terms} Now we apply this and the previous decompositions to simulate $\op H_{\text{FH}}$ as encoded using the compact encoding, using only three Trotter layers: $\{\op H_0, \op H_1, \op H_2\}$. The first of these layers consists of all the on-site interactions: \begin{align}
\op H_0 := \frac{\os}{4} \sum_{i=1}^{N}\left(I - Z_{i \uparrow}\right)\left(I - Z_{i \downarrow}\right), \end{align} and the other two layers are a \emph{mix} of horizontal and vertical hopping terms. Each has the same form, but consists of different sets of interactions as shown in \cref{fig:M=3-Split-ap}. \begin{align}
\op H_{1/2} &:= \frac{\hop}{2}\sum_{i<j}\sum_{\sigma \in \{\uparrow,\downarrow\}} \left( X_{i,\sigma}X_{j,\sigma}Y_{f(i,j),\sigma}+Y_{i,\sigma}Y_{j,\sigma}Y_{f(i,j),\sigma}\right) \\
&+\frac{\hop}{2}\sum_{i<j}\sum_{\sigma \in \{\uparrow,\downarrow\}} g(i,j)\left( X_{i,\sigma}X_{j,\sigma}X_{f(i,j),\sigma}+Y_{i,\sigma}Y_{j,\sigma}X_{f(i,j),\sigma}\right). \end{align} \begin{figure}
\caption{The interactions in $\op H_1$ (left) and those in $\op H_2$ (right). Gray nodes represent ancillary qubits, all non-gray qubits encode fermionic sites of a particular spin.}
\label{fig:M=3-Split-ap}
\end{figure}
Now we show that $\ee^{\ii \delta \op H_{1}}$ can be implemented directly; the case of $\ee^{\ii \delta \op H_{2}}$ follows similarly. As $\op H_{1}$ consists of interactions on disjoint sets of qubits, each forming a square on four qubits in \cref{fig:M=3-Split-ap}, we only need to show that we can implement evolution under the interactions making up one of the squares. We will denote these by $\op h_i$. We will label an example $\op h_1$ as shown in \cref{fig:interaction-square-ap} and demonstrate that this can be done. Using this labelling, the square of interactions is given by \begin{align}
\op h_1 &= \frac{1}{2} \left( X_1 X_2 + Y_1 Y_2 \right) Y_a + \frac{1}{2} \left( X_3 X_4 + Y_3 Y_4 \right) Y_a \\
&+\frac{1}{2} \left( X_1 X_4 + Y_1 Y_4 \right) X_a - \frac{1}{2} \left( X_2 X_3 + Y_2 Y_3 \right) X_a. \end{align} \begin{figure}
\caption{The Hamiltonian $\op h_1$: a \emph{sum} of each Pauli interaction represented by a red line connecting a pair of qubits. Upward pointing arrows indicate $g(i,j)=-1$ and downward, left and right pointing arrows indicate $g(i,j)=1$ (See \cite{DK}). $\op H_1$ is a sum of disjoint Hamiltonians of this form, shown in \cref{fig:M=3-Split-ap}.}
\label{fig:interaction-square-ap}
\end{figure} Now we will regroup these interactions in such a way that we can use the methods of \cref{lem:Rank-Increase-Depth3-ap} to decompose $\ee^{\ii \delta \op h_1}$. To do this we group the terms as $\op h_1= \op a_1 + \op a_2 + \op b_1 +\op b_2$, where \begin{align}
\op a_1 &= \frac{1}{\sqrt{2}}\left(\frac{X_1 X_2 Y_a - X_a X_2 X_3}{\sqrt{2}}\right) \\
\op a_2 &= \frac{1}{\sqrt{2}}\left(\frac{Y_1 Y_4 X_a + Y_a Y_4 Y_3}{\sqrt{2}}\right) \\
\op b_1 &= \frac{1}{\sqrt{2}}\left(\frac{Y_1 Y_2 Y_a - X_a Y_2 Y_3}{\sqrt{2}}\right) \\
\op b_2 &= \frac{1}{\sqrt{2}}\left(\frac{X_1 X_4 X_a + Y_a X_4 X_3}{\sqrt{2}}\right). \end{align}
We have reordered terms within each group so that the following commutation and anti-commutation relations are easy to verify: (i)~every $\op a_i$ and $\op b_i$ consists of two anti-commuting Pauli terms and hence squares to a multiple of the identity; (ii)~the only pairs of an $\op a_i$ and a $\op b_j$ which do not commute instead anti-commute: \begin{align}
\{\op a_1, \op b_2 \} &=0 \\
\{\op a_2, \op b_1 \} &=0. \end{align} It is easy to verify that all other pairs of $\op a_i$ and $\op b_j$ commute. This allows us to implement the target evolution as follows: \begin{align}
\ee^{\ii \delta \op h_1} &= \ee^{\ii \delta \left(\op a_1 + \op b_2 \right)} \ee^{\ii \delta \left(\op a_2 + \op b_1 \right)}. \end{align}
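The stated algebra, and the fact that the splitting above is exact (the two groups commute, so there is no Trotter error at this step), can be checked numerically. The following sketch is our own sanity check, not part of the paper's derivation; it builds the Pauli strings on five qubits in the tensor order $(1,2,3,4,a)$.

```python
# Sanity check (ours, not from the derivation): verify the stated
# (anti)commutation relations for a_1, a_2, b_1, b_2 and the exact
# splitting e^{i d h_1} = e^{i d (a_1+b_2)} e^{i d (a_2+b_1)}.
# Qubit ordering in the tensor products: (1, 2, 3, 4, a).
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def string(*ops):
    """Tensor product of single-qubit operators on qubits (1,2,3,4,a)."""
    return reduce(np.kron, ops)

# The regrouped terms; each group carries an overall factor 1/2.
a1 = 0.5 * (string(X, X, I2, I2, Y) - string(I2, X, X, I2, X))  # X1X2Ya - XaX2X3
a2 = 0.5 * (string(Y, I2, I2, Y, X) + string(I2, I2, Y, Y, Y))  # Y1Y4Xa + YaY4Y3
b1 = 0.5 * (string(Y, Y, I2, I2, Y) - string(I2, Y, Y, I2, X))  # Y1Y2Ya - XaY2Y3
b2 = 0.5 * (string(X, I2, I2, X, X) + string(I2, I2, X, X, Y))  # X1X4Xa + YaX4X3
h1 = a1 + a2 + b1 + b2

# (i): each group squares to a multiple of the identity.
for A in (a1, a2, b1, b2):
    assert np.allclose(A @ A, 0.5 * np.eye(32))
# (ii): {a1, b2} = {a2, b1} = 0, and all remaining pairs commute.
assert np.allclose(a1 @ b2 + b2 @ a1, 0)
assert np.allclose(a2 @ b1 + b1 @ a2, 0)
for A, B in [(a1, a2), (a1, b1), (a2, b2), (b1, b2)]:
    assert np.allclose(A @ B - B @ A, 0)

def U(H, t):
    """e^{i t H} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * t * w)) @ V.conj().T

# Since [a1 + b2, a2 + b1] = 0, the splitting is exact for any delta.
delta = 0.37
assert np.allclose(U(h1, delta), U(a1 + b2, delta) @ U(a2 + b1, delta))
```

The per-pair checks confirm that the splitting incurs no error: every term of $\op a_1+\op b_2$ commutes with every term of $\op a_2+\op b_1$.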
Now we only need to show that we can implement $\ee^{\ii \delta \left(\op a_1 + \op b_2 \right)}$; the implementation of $\ee^{\ii \delta \left(\op a_2 + \op b_1 \right)}$ follows similarly. Consider the Hamiltonian \begin{align}
\op a_1 + \op b_2&= \frac{1}{\sqrt{2}}\left(\frac{X_1 X_2 Y_a - X_a X_2 X_3}{\sqrt{2}}\right) + \frac{1}{\sqrt{2}}\left(\frac{X_1 X_4 X_a + Y_a X_4 X_3}{\sqrt{2}}\right). \end{align} This meets the criteria to decompose $\ee^{\ii \delta \left(\op a_1 + \op b_2 \right)}$ with two applications of \cref{lem:Rank-Increase-Depth3-ap}. The first application gives the decomposition \begin{align}\label{eq:three_decomp}
\ee^{\ii \delta \left(\op a_1 + \op b_2 \right)} &= \ee^{\ii t_1 \frac{X_1 X_2 Y_a - X_a X_2 X_3}{\sqrt{2}}} \ee^{\ii t_2\frac{X_1 X_4 X_a + Y_a X_4 X_3}{\sqrt{2}} }\ee^{\ii t_1\frac{X_1 X_2 Y_a - X_a X_2 X_3}{\sqrt{2}}} \end{align} where the pulse times $t_i=t_i(\delta)$ are functions of the target time, as defined in \cref{lem:Rank-Increase-Depth3-ap}. Then we apply \cref{lem:Rank-Increase-Depth3-ap} again, to each of the individual gates in \cref{eq:three_decomp}. The first gate in \cref{eq:three_decomp} decomposes as \begin{align}
\ee^{\ii t_1 \frac{X_1 X_2 Y_a - X_a X_2 X_3}{\sqrt{2}}} &= \ee^{\ii t_1 ( t_1) X_1 X_2 Y_a} \ee^{\ii t_2 ( t_1) X_a X_2 X_3} \ee^{\ii t_1 ( t_1) X_1 X_2 Y_a}, \end{align} and the others decompose similarly. Now we need only apply \cref{lem:Weight-Increase-Depth4-ap} to decompose these three-qubit unitaries into evolution under two-local Pauli interactions. As \cref{lem:Rank-Increase-Depth3-param-bounds-ap} shows that the pulse times have so far remained $\propto \delta$, it is only this final step which introduces a square-root overhead. Hence the run-time remains asymptotically as in \cref{th:FH-optimum-analogue}.
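The mechanism behind these three-pulse decompositions can be illustrated in the simplest setting. For anticommuting $A$, $B$ with $A^2=B^2=I$, one can solve for pulse times in closed form such that $\ee^{\ii\theta(A+B)/\sqrt{2}} = \ee^{\ii t_1 A}\ee^{\ii t_2 B}\ee^{\ii t_1 A}$; the closed forms below are our own derivation for this generic case, assumed analogous to (but not copied from) the parametrization of \cref{lem:Rank-Increase-Depth3-ap}.

```python
# Illustration (our own derivation, for the generic case of two
# anticommuting involutions A, B with A^2 = B^2 = I):
#   e^{i th (A+B)/sqrt(2)} = e^{i t1 A} e^{i t2 B} e^{i t1 A}
# with t2 = arcsin(sin th / sqrt 2) and 2 t1 = atan2(sin th / sqrt 2, cos th).
# For small th, t1 and t2 are both proportional to th, consistent with the
# pulse-time scaling quoted in the text.  For (A - B)/sqrt(2), flip t2's sign.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def U(H, t):
    """e^{i t H} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * t * w)) @ V.conj().T

th = 0.3
t2 = np.arcsin(np.sin(th) / np.sqrt(2))
t1 = 0.5 * np.arctan2(np.sin(th) / np.sqrt(2), np.cos(th))

lhs = U((X + Y) / np.sqrt(2), th)        # target evolution
rhs = U(X, t1) @ U(Y, t2) @ U(X, t1)     # three-pulse decomposition
assert np.allclose(lhs, rhs)
```

The derivation uses only $BA=-AB$: conjugating $\ee^{\ii t_2 B}$ by $\ee^{\ii t_1 A}$ doubles the outer rotation, and matching coefficients of $I$, $A$, $B$ fixes $t_1$, $t_2$.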
\iffalse The maximum run-time of a single Trotter layer in this full decomposition is $\text{max}_{i,j} 2 \sqrt{2 t_i ( t_j \left(\delta\right))}$. There are $2 \times 3 \times 3 \times 4$ two-qubit unitaries in the full decomposition of $\ee^{\ii \delta \op h_1}$ and $2 \times 3 \times 3$ three-qubit unitaries costed at $\text{max}_{i,j} 2 \sqrt{2 t_i (t_j \left(\delta\right))}$ in the full decomposition of $\ee^{\ii \delta \op h_1}$. All of the $\op h_i$ are disjoint and implemented in parallel giving \begin{align}
\cost(\ee^{\ii \delta \op H_1}) &\leq 36 \ \text{max}_{i,j} \sqrt{2 t_i ( t_j \left(\delta \hop \right))}
\leq 18 \pi \sqrt{ \frac{\delta |\hop|}{2} } \end{align} where we have used the bound established in \cref{lem:Rank-Increase-Depth3-param-bounds-ap} with $\theta = \pi/4$ to obtain $t_i (t_j \left(\delta \right)) \leq \delta \pi^2 / 16$ and reinserted $\hop$. Similarly, the other Trotter layer $\ee^{\ii \delta \op H_2}$ has \begin{align}
\cost(\ee^{\ii \delta \op H_2}) &\leq 18 \pi \sqrt{ \frac{\delta |\hop|}{2} } \end{align} This completes the run-time analysis of the individual Trotter layers. \fi \section*{Feasible simulation time as a function of hardware error rate} In this Appendix we analyse the simulation performance from the perspective of hardware error rate, rather than circuit depth. Specifically, we analyse the following question: given a maximum allowable simulation error $\epsilon_{tar}$ and a given hardware noise rate $q$, for how long can we simulate a given Fermi Hubbard Hamiltonian? That is, what is the maximum simulation time $T_{tar}$ such that the total simulation error remains below the target threshold, $\epsilon \leq \epsilon_{tar}$? In \cref{tab:extra} we consider this question for an $L=5$ Fermi Hubbard model with target error $\epsilon_{tar} = 10\%$.
When error mitigation is not used, the only contributions to the total error $\epsilon$ are the Trotter error $\epsilon_t$ and the error due to stochastic noise $\epsilon_s$; that is, $\epsilon = \sqrt{\epsilon^2_{t} + \epsilon^2_{s}} \leq \epsilon_{tar}$. When mitigation is used there is an additional contribution $\epsilon_c$ from commuting errors past short-time gates, so that $\epsilon = \sqrt{\epsilon^2_{t} + \epsilon^2_{s} + \epsilon^2_{c}} \leq \epsilon_{tar}$. All Trotter bounds used to produce \cref{tab:extra} are numeric, and we consider a fermion occupation number of $\Lambda = 5$.
In \Cref{tab:extra} we consider both per-time and per-gate error models, as well as standard circuit decompositions and subcircuit decompositions. Recall that the former only allows a standard gate set, whereas the latter allows access to a continuous family of two-qubit gates.
In the per-gate error model, the error mitigation techniques described in the Supplementary Method entitled ``Trivial Stochastic Error Bound'' do not apply, and we choose the decomposition which yields the lowest-depth circuit. In both the standard and subcircuit decompositions, this implies using the conjugation technique \cref{eq:conj-method} rather than the decomposition techniques described earlier. For example, when decomposing 3-local terms with a standard gate set, this conjugation decomposition must be applied twice to leave only single-qubit rotations and gates that are equivalent to CNOT gates (up to single-qubit rotations), whereas with a subcircuit gate set a single conjugation suffices.
Where error mitigation can be applied in the per-gate model, we decompose all gates via our subcircuit decompositions, as otherwise error mitigation is not possible. Similarly, in the per-time error model, we always use these decompositions, as they are always more efficient than conjugation decompositions in that model.
We see that in the per-time error model with the compact encoding, our subcircuit synthesis combined with the error mitigation techniques it enables yields the best performance across the hardware error rates considered. For $q=10^{-6}$, admitting subcircuit gates and allowing error mitigation takes us from a maximum simulation time of $T_{tar}=3.17$ to $T_{tar}=896$. For the VC encoding, error mitigation does not yield an improvement; however, subcircuit gates and our subcircuit synthesis techniques do, taking us from $T_{tar}=2.25$ to $T_{tar}=3.81$ for $q=10^{-6}$.
For both encodings, smaller improvements are seen for $q=10^{-5}$ and $q=10^{-4}$. In the per-gate error model for the compact encoding, we find that our subcircuit decomposition techniques and error mitigation strategy yield an improvement, but only at error rates of $q=10^{-6}$ and $q=10^{-5}$. For the VC encoding we see that error mitigation does not help; however, for $q=10^{-6}$ subcircuit gates still yield an improvement, taking us from $T_{tar}=1.63$ to $T_{tar}=2.2$.
In cases where error mitigation is used we include the classical post-selection overhead. We have deliberately capped this at a maximum of $\approx 10^{4}$, to disallow excessive post-selection overhead. In some cases this cap is reached, indicating that our error mitigation techniques could yield further improvement still, but at a potentially unreasonably high post-selection overhead.
\begin{table}[h] \thisfloatpagestyle{empty} \hspace*{-1.0cm} \begin{tabular}{@{}lllllllll@{}} \toprule error model & encoding & decomp & mitigation & \begin{tabular}[c]{@{}l@{}}post\\selection\\ overhead\end{tabular} & \begin{tabular}[c]{@{}l@{}} $T_{tar}$\end{tabular} & \begin{tabular}[c]{@{}l@{}} $\delta$\end{tabular} & \begin{tabular}[c]{@{}l@{}}$\lceil T_{tar}/\delta\rceil$\end{tabular} & $\cost$ \\ \midrule \multirow[t]{6}{*}{per time} & \multirow[t]{3}{*}{compact} & standard & False & n/a & \begin{tabular}[c]{@{}l@{}}3.17\\ 0.74\\ 0.23 \\[3mm] \end{tabular} & \begin{tabular}[c]{@{}l@{}}0.133\\ 0.258\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}24\\ 3\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}1273\\ 155\\ 27\\[3mm]\end{tabular} \\
& & subcircuit & False & n/a & \begin{tabular}[c]{@{}l@{}}28.2\\ 3.77\\ 0.55\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.054\\ 0.123\\ 0.299\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}520\\ 31\\ 2\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}1377\\ 123\\ 12\\[3mm]\end{tabular} \\
& & subcircuit & True & \begin{tabular}[c]{@{}l@{}}9.1e3\\ 9.8e3\\ 9.1e3\end{tabular} & \begin{tabular}[c]{@{}l@{}}896.\\ 89.8\\ 8.84\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.005\\ 0.005\\ 0.005\end{tabular} & \begin{tabular}[c]{@{}l@{}}179e3\\ 18.0e3\\ 1770\end{tabular} & \begin{tabular}[c]{@{}l@{}}144e3\\ 14e3\\ 1416\end{tabular} \\
\cmidrule{2-9}
& \multirow[t]{3}{*}{VC} & standard & False & n/a & \begin{tabular}[c]{@{}l@{}}2.25\\ 0.48\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.155\\ 0.318\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}15\\ 2\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}956\\ 100\\ 33\\[3mm]\end{tabular} \\
& & subcircuit & False & n/a & \begin{tabular}[c]{@{}l@{}}3.81\\ 1.00\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.123\\ 0.226\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}31\\ 5\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}803\\ 141\\ 21\\[3mm]\end{tabular} \\
& & subcircuit & True & \begin{tabular}[c]{@{}l@{}}1.8e4\\ 8.5e3\\ 9.8e3\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.27\\ 0.04\\ 0.00416\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.65e-6\\ 1.65e-6\\ 1.65e-6\end{tabular} & \begin{tabular}[c]{@{}l@{}}165e3\\ 25.0e3\\ 2530\end{tabular} & \begin{tabular}[c]{@{}l@{}}95e3\\ 14e3\\ 1451\end{tabular} \\ \midrule \multirow[t]{6}{*}{per gate} & \multirow[t]{3}{*}{compact} & standard & False & n/a & \begin{tabular}[c]{@{}l@{}}2.87\\ 0.51\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.139\\ 0.308\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}21\\ 2\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}1407\\ 113\\ 34\\[3mm]\end{tabular} \\
& & subcircuit & False & n/a & \begin{tabular}[c]{@{}l@{}}3.53\\ 0.83\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.127\\ 0.245\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}28\\ 4\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}1393\\ 170\\ 25\\[3mm]\end{tabular} \\
& & subcircuit & True & \begin{tabular}[c]{@{}l@{}}1.2e4\\ 2.1e4\\ 1.6e4\end{tabular} & \begin{tabular}[c]{@{}l@{}}11.1\\ 1.08\\ 0.11\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.005\\ 0.005\\ 0.005\end{tabular} & \begin{tabular}[c]{@{}l@{}}2211\\ 216\\ 22\end{tabular} & \begin{tabular}[c]{@{}l@{}}146e3\\ 14e3\\ 1465\end{tabular} \\
\cmidrule{2-9}
& \multirow[t]{3}{*}{VC} & standard & False & n/a & \begin{tabular}[c]{@{}l@{}}1.63\\ 0.48\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.180\\ 0.318\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}9\\ 2\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}761\\ 126\\ 42\\[3mm]\end{tabular} \\
& & subcircuit & False & n/a & \begin{tabular}[c]{@{}l@{}}2.20\\ 0.50\\ 0.23\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.157\\ 0.312\\ 0.456\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}14\\ 2\\ 1\\[3mm]\end{tabular} & \begin{tabular}[c]{@{}l@{}}929\\ 106\\ 33\\[3mm]\end{tabular} \\
& & subcircuit & True & \begin{tabular}[c]{@{}l@{}}2.6e4\\ 5.1e3\\ 8.0e3\end{tabular} & \begin{tabular}[c]{@{}l@{}}0.000932\\ 0.000135\\ 0.0000140\end{tabular} & \begin{tabular}[c]{@{}l@{}}1.65e-6\\ 1.65e-6\\ 1.65e-6\end{tabular} & \begin{tabular}[c]{@{}l@{}}567\\ 82\\ 9\end{tabular} & \begin{tabular}[c]{@{}l@{}}96e3\\ 14e3\\ 1440
\end{tabular} \\
\bottomrule \end{tabular} \caption{Maximum feasible simulation times for $L=5$ with $\epsilon_{tar} = 10\%$ and $q = 10^{-6},\ 10^{-5}$ and $10^{-4}$ from top to bottom.}\label{tab:extra} \end{table}
\thispagestyle{empty} \FloatBarrier
\end{document} |
\begin{document}
\title{Kernel Bayes' Rule}
\begin{abstract} A nonparametric kernel-based method for realizing Bayes' rule is proposed, based on representations of probabilities in reproducing kernel Hilbert spaces. Probabilities are uniquely characterized by the mean of the canonical map to the RKHS. The prior and conditional probabilities are expressed in terms of RKHS functions of an empirical sample: no explicit parametric model is needed for these quantities. The posterior is likewise an RKHS mean of a weighted sample. The estimator for the expectation of a function of the posterior is derived, and rates of consistency are shown. Some representative applications of the kernel Bayes' rule are presented, including Bayesian computation without likelihood and filtering with a nonparametric state-space model.
\end{abstract}
\section{Introduction}
Kernel methods have long provided powerful tools for generalizing linear statistical approaches to nonlinear settings, through an embedding of the sample into a high-dimensional feature space, namely a reproducing kernel Hilbert space (RKHS) \cite{SchoelkopfSmola_book,Hofmann_etal_2008_kernel_AS}. Examples include support vector machines, kernel PCA, and kernel CCA, among others. In these cases, data are mapped via a canonical feature map to a reproducing kernel Hilbert space (of high or even infinite dimension), in which the linear operations that define the algorithms are implemented. The inner product between feature mappings need never be computed explicitly, but is given by a positive definite kernel function unique to the RKHS: this permits efficient computation without the need to deal explicitly with the feature representation.
The mappings of individual points to a feature space may be generalized to mappings of probability measures \citep[e.g.][Chapter 4]{Berlinet_RKHS}. We call such mappings the {\em kernel means} of the underlying random variables. With an appropriate choice of positive definite kernel, the kernel mean on the RKHS uniquely determines the distribution of the variable
\cite{Fukumizu04_jmlr,Fukumizu_etal09_KDR,Sriperumbudur_etal2010JMLR}, and
statistical inference problems on distributions can be solved via operations on the kernel means. Applications of this approach include homogeneity testing \cite{Gretton_etal07,Gretton_etal_nips2009}, where the empirical means on the RKHS are compared directly, and independence testing \cite{Gretton_etal_nips07,Gretton_etal_AOAS2009}, where the mean of the joint distribution on the feature space is compared with that of the product of the marginals. Representations of conditional dependence may also be defined in RKHS, and have been used in conditional independence tests \cite{Fukumizu_etal_nips07}.
In this paper, we propose a novel, nonparametric approach to Bayesian inference, making use of kernel means of probabilities. In applying Bayes' rule, we compute the posterior probability of $x$ in $\mathcal{X}$ given observation $y$ in $\mathcal{Y}$: \begin{equation}\label{eq:BayesRule}
q(x|y) = \frac{p(y|x)\pi(x)}{q_\mathcal{Y}(y)}, \end{equation}
where $\pi(x)$ and $p(y|x)$ are the density functions of the prior and the likelihood of $y$ given $x$, respectively, with respective base measures $\nu_\mathcal{X}$ and $\nu_\mathcal{Y}$, and the normalization factor $q_\mathcal{Y}(y)$ is given by \begin{equation}\label{eq:BayesForward}
q_\mathcal{Y}(y) = \int p(y|x)\pi(x)d\nu_\mathcal{X}(x). \end{equation} Our main result is a nonparametric estimate of the kernel mean posterior, given kernel mean representations of the prior and likelihood.
A valuable property of the kernel Bayes' rule is that the kernel posterior mean is estimated nonparametrically from data; specifically, the prior and the likelihood are represented in the form of samples from the prior and the joint probability that gives the likelihood, respectively. This confers an important benefit: we can still perform Bayesian inference by making sufficient observations on the system, even in the absence of a specific parametric model of the relation between variables. More generally, if we can sample from the model, we do not require explicit density functions for inference.
Such situations are typically seen when the prior or likelihood is given by a random process: Approximate Bayesian Computation \citep{Tavare_etal_1997ABC,Marjoram_etal_2003PNAS,Sisson_etal2007} is widely applied in population genetics, where the likelihood is given by a branching process, and nonparametric Bayesian inference \citep{MullerQuintana2004} often uses a process prior with sampling methods. Alternatively, a parametric model may be known; however, it might be sufficiently complex that Markov chain Monte Carlo or sequential Monte Carlo methods are required for inference. The present kernel approach provides an alternative strategy for Bayesian inference in these settings. We demonstrate rates of consistency for our posterior kernel mean estimate, and for the expectation of functions computed using this estimate.
An alternative to the kernel mean representation would be to use nonparametric density estimates for the posterior. Classical approaches include kernel density estimation (KDE) or distribution estimation on a finite partition of the domain. These methods are known to perform poorly on high dimensional data, however.
By contrast, the proposed kernel mean representation is defined as an integral or moment of the distribution, taking the form of a function in an RKHS. Thus, it is more akin to the characteristic function approach (see e.g. \cite{KankainenUshakov1998}) to representing probabilities. A well conditioned empirical estimate of the characteristic function can be difficult to obtain, especially for conditional probabilities. By contrast, the kernel mean has a straightforward empirical estimate, and conditioning and marginalization can be implemented easily, at a reasonable computational cost.
The proposed method of realizing Bayes' rule is an extension of the approach used in \cite{Song_etal_ICML2009} for state-space models. In this earlier work, a heuristic approximation was used, where the kernel mean of the new hidden state was estimated by adding kernel mean estimates from the previous hidden state and the observation. Another relevant work is the belief propagation approach in \cite{Song_etal_AISTATS2010,SonGreBicLowGue11}, which covers the simpler case of a uniform prior.
This paper is organized as follows. We begin in Section \ref{sec:kernel} with a review of RKHS terminology and of kernel mean embeddings. In Section \ref{sec:KBR}, we derive an expression for Bayes' rule in terms of kernel means, and provide consistency guarantees. We apply the kernel Bayes' rule in Section \ref{sec:KBRmethods} to various inference problems, with numerical results and comparisons with existing methods in Section \ref{sec:experiments}. Our proofs are contained in Section \ref{sec:proof} (including proofs of the consistency results of Section \ref{sec:KBR}).
\section{Preliminaries: positive definite kernel and probabilities} \label{sec:kernel}
Throughout this paper, all Hilbert spaces are assumed to be separable. For an operator $A$ on a Hilbert space, the range is denoted by $\mathcal{R}(A)$. The linear hull of a subset $S$ in a vector space is denoted by ${\rm Span}S$.
We begin with a review of positive definite kernels, and of statistics on the associated reproducing kernel Hilbert spaces \citep{Aronszajn50,Berlinet_RKHS,Fukumizu04_jmlr,Fukumizu_etal09_KDR}. Given a set $\Omega$, a (${\mathbb{R}}$-valued) positive definite kernel $k$ on $\Omega$ is a symmetric function $k:\Omega\times\Omega\to{\mathbb{R}}$ such that $\sum_{i,j=1}^n c_i c_j k(x_i,x_j)\geq 0$ for any number of points $x_1,\dots,x_n$ in $\Omega$ and real numbers $c_1,\ldots,c_n$. The matrix $(k(x_i,x_j))_{i,j=1}^n$ is called a Gram matrix. It is known by the Moore-Aronszajn theorem \citep{Aronszajn50} that a positive definite kernel on $\Omega$ uniquely defines a Hilbert space $\mathcal{H}$ consisting of functions on $\Omega$ such that (i) $k(\cdot,x)\in\mathcal{H}$ for any $x\in\Omega$, (ii) ${\rm Span}\{ k(\cdot,x)\mid x\in \Omega\}$ is dense in $\mathcal{H}$, and (iii) $\langle f,k(\cdot,x)\rangle = f(x)$ for any $x\in \Omega$ and $f\in\mathcal{H}$ (the reproducing property), where $\langle\cdot,\cdot\rangle$ is the inner product of $\mathcal{H}$. The Hilbert space $\mathcal{H}$ is called the {\it reproducing kernel Hilbert space} (RKHS) associated with $k$, since the function $k_x = k(\cdot,x)$ serves as the reproducing kernel $\langle f, k_x\rangle=f(x)$ for $f\in\mathcal{H}$.
A positive definite kernel on $\Omega$ is said to be {\em bounded} if there is $M>0$ such that $k(x,x)\leq M$ for any $x\in \Omega$.
Let $(\mathcal{X},\mathcal{B}_\mathcal{X})$ be a measurable space, $X$ be a random variable taking values in $\mathcal{X}$ with distribution $P_X$, and $k$ be a measurable positive definite kernel on $\mathcal{X}$ such that $E[\sqrt{k(X,X)}]<\infty$. The associated RKHS is denoted by $\mathcal{H}$. The {\em kernel mean} $m_X^k$ (also written $m_{P_X}^k$)
of $X$ on the RKHS $\mathcal{H}$ is defined by the mean of the $\mathcal{H}$-valued random variable $k(\cdot,X)$. The existence of the kernel mean is guaranteed by $E[\|k(\cdot,X)\|] =E[\sqrt{k(X,X)}]<\infty$. We usually write $m_X$ for $m_X^k$ for simplicity, where there is no ambiguity.
By the reproducing property, the kernel mean satisfies the relation \begin{equation}\label{eq:reproducing_mean}
\langle f,m_X\rangle = E[f(X)] \end{equation} for any $f\in\mathcal{H}$. Plugging $f=k(\cdot,u)$ into this relation gives \begin{equation}\label{eq:mean_integ}
m_X(u) = E[k(u, X)] = \int k(u,\tilde{x})dP_X(\tilde{x}), \end{equation} which shows the explicit functional form. The kernel mean $m_X$ is also denoted by $m_{P_X}$, as it depends only on the distribution $P_X$ with $k$ fixed.
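The functional form \eq{eq:mean_integ} can be made concrete with a numerical sketch of our own: for a Gaussian RBF kernel and a Gaussian random variable, the integral has a closed form (a standard Gaussian convolution), which the empirical kernel mean approaches as $n$ grows.

```python
# Numerical illustration (ours): for k(u, x) = exp(-(u-x)^2 / (2 sig^2)) and
# X ~ N(mu, s^2), Gaussian convolution gives the closed form
#   m_X(u) = sig / sqrt(sig^2 + s^2) * exp(-(u-mu)^2 / (2 (sig^2 + s^2))),
# which the empirical mean (1/n) sum_i k(u, X_i) should approach.
import numpy as np

rng = np.random.default_rng(0)
mu, s, sig = 0.5, 1.2, 0.8
n = 100_000
Xs = rng.normal(mu, s, size=n)

k = lambda u, x: np.exp(-(u - x) ** 2 / (2 * sig**2))
m_hat = lambda u: np.mean(k(u, Xs))                  # empirical kernel mean
m_true = lambda u: sig / np.sqrt(sig**2 + s**2) * np.exp(
    -(u - mu) ** 2 / (2 * (sig**2 + s**2)))          # closed-form m_X(u)

for u in (-1.0, 0.0, 0.5, 2.0):
    assert abs(m_hat(u) - m_true(u)) < 0.01
```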
Let $(\mathcal{X},\mathcal{B}_\mathcal{X})$ and $(\mathcal{Y},\mathcal{B}_\mathcal{Y})$ be measurable spaces, $(X,Y)$ be a random variable on $\mathcal{X}\times\mathcal{Y}$ with distribution $P$, and $k_\mathcal{X}$ and $k_\mathcal{Y}$ be measurable positive definite kernels with respective RKHS ${\cH_\cX}$ and ${\cH_\cY}$ such that $E[k_\mathcal{X}(X,X)]<\infty$ and $E[k_\mathcal{Y}(Y,Y)]<\infty$. The (uncentered) {\em covariance operator} $C_{YX}: {\cH_\cX}\to{\cH_\cY}$ is defined as the linear operator that satisfies \[
\langle g, C_{YX} f\rangle_{\cH_\cY} = E[f(X)g(Y)] \] for all $f\in{\cH_\cX},g\in{\cH_\cY}$. This operator $C_{YX}$ can be identified with $m_{(YX)}$ in the product space ${\cH_\cY}\otimes{\cH_\cX}$, which is given by the product kernel $k_\mathcal{Y} k_\mathcal{X}$ on $\mathcal{Y}\times\mathcal{X}$ \citep{Aronszajn50}, by the standard identification between the linear maps and the tensor product. We also define $C_{XX}$ for the operator on ${\cH_\cX}$ that satisfies $\langle f_2, C_{XX}f_1\rangle = E[f_2(X)f_1(X)]$ for any $f_1,f_2\in{\cH_\cX}$. Similarly to \eq{eq:mean_integ}, the explicit integral expressions for $C_{YX}$ and $C_{XX}$ are given by \begin{equation}\label{eq:cov_op_integ} (C_{YX}f )(y) = \int k_\mathcal{Y}(y,\tilde{y})f(\tilde{x})dP(\tilde{x},\tilde{y}), \quad (C_{XX}f )(x) = \int k_\mathcal{X}(x,\tilde{x})f(\tilde{x})dP_X(\tilde{x}), \end{equation} respectively.
An important notion in statistical inference with positive definite kernels is the characteristic property. A bounded measurable positive definite kernel $k$ on a measurable space $(\Omega, \mathcal{B})$ is called {\em characteristic} if the mapping from a probability $Q$ on $(\Omega, \mathcal{B})$ to the kernel mean $m_Q^k\in \mathcal{H}$ is injective \citep{Fukumizu_etal09_KDR,Sriperumbudur_etal2010JMLR}. This is equivalent to assuming that $E_{X\sim P}[k(\cdot,X)] = E_{X'\sim Q}[k(\cdot,X')]$ implies $P=Q$: probabilities are uniquely determined by their kernel means on the associated RKHS. With this property, problems of statistical inference can be cast as inference on the kernel means. A popular example of a characteristic kernel defined on Euclidean space is the Gaussian RBF kernel $k(x,y) = \exp(-\|x-y\|^2/(2\sigma^2))$. It is known that a bounded measurable positive definite kernel on a measurable space $(\Omega,\mathcal{B})$ with corresponding RKHS $\mathcal{H}$ is characteristic if and only if $\mathcal{H} + {\mathbb{R}}$ is dense in $L^2(P)$ for arbitrary probability $P$ on $(\Omega,\mathcal{B})$, where $\mathcal{H}+{\mathbb{R}}$ is the direct sum of two RKHSs $\mathcal{H}$ and ${\mathbb{R}}$ \cite{Aronszajn50}. This implies that the RKHS defined by a characteristic kernel is rich enough to be dense in $L^2$ space up to the constant functions. Other useful conditions for a kernel to be characteristic can be found in \cite{Sriperumbudur_etal2010JMLR,FukSriGreSch09,Sriperumbudur_etal2011JMLR}.
Throughout this paper, when positive definite kernels on a measurable space are discussed, the following assumption is made: \begin{description} \item[(K)] Positive definite kernels are bounded and measurable. \end{description} Under this assumption, the mean and covariance always exist with arbitrary probabilities.
Given i.i.d.~sample $(X_1,Y_1),\ldots,(X_n,Y_n)$ with law $P$, the empirical estimator of the kernel mean and covariance operator are given straightforwardly by \[
\widehat{m}^{(n)}_X = \frac{1}{n}\sum_{i=1}^n k_\mathcal{X}(\cdot,X_i),\qquad
\widehat{C}^{(n)}_{YX} = \frac{1}{n} \sum_{i=1}^n k_\mathcal{Y}(\cdot,Y_i)\otimes k_\mathcal{X}(\cdot,X_i), \] where $\widehat{C}^{(n)}_{YX}$ is written in tensor form. It is known that these estimators are $\sqrt{n}$-consistent in appropriate norms, and $\sqrt{n}(\widehat{m}^{(n)}_X - m_X)$ converges to a Gaussian process on ${\cH_\cX}$ \cite[][Sec. 9.1]{Berlinet_RKHS}. While we may use non-i.i.d.~samples for numerical examples in Section \ref{sec:experiments}, in our theoretical analysis we always assume i.i.d.~samples for simplicity.
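The tensor form of $\widehat{C}^{(n)}_{YX}$ becomes an ordinary matrix when the kernel has an explicit finite-dimensional feature map. The following sketch of ours uses the linear-plus-constant kernel $k(x,x')=1+xx'$ with feature map $\phi(x)=(1,x)$ and checks the defining pairing $\langle g, \widehat{C}^{(n)}_{YX} f\rangle = \frac{1}{n}\sum_i f(X_i)g(Y_i)$, which holds exactly for $f$, $g$ in the span of the features.

```python
# Sketch (ours): with the explicit feature map phi(x) = (1, x) of the kernel
# k(x, x') = 1 + x x', the empirical covariance operator
#   C_YX = (1/n) sum_i phi(Y_i) phi(X_i)^T
# is a 2x2 matrix, and <g, C_YX f> = (1/n) sum_i f(X_i) g(Y_i) exactly.
import numpy as np

rng = np.random.default_rng(1)
n = 50
Xs = rng.normal(size=n)
Ys = Xs**2 + 0.1 * rng.normal(size=n)

phi = lambda x: np.stack([np.ones_like(x), x], axis=-1)  # feature map
C_YX = phi(Ys).T @ phi(Xs) / n                           # empirical covariance

alpha = np.array([0.3, -1.2])    # coefficients of f = <alpha, phi(.)>
beta = np.array([0.7, 0.4])      # coefficients of g = <beta,  phi(.)>
f = lambda x: phi(x) @ alpha
g = lambda y: phi(y) @ beta

assert np.isclose(beta @ C_YX @ alpha, np.mean(f(Xs) * g(Ys)))
```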
\section{Kernel expression of Bayes' rule} \label{sec:KBR}
\subsection{Kernel Bayes' rule}
Let $(\mathcal{X},\mathcal{B}_\mathcal{X})$ and
$(\mathcal{Y},\mathcal{B}_\mathcal{Y})$ be measurable spaces, $(X,Y)$ be a random variable on $\mathcal{X}\times\mathcal{Y}$ with distribution $P$, and $k_\mathcal{X}$ and $k_\mathcal{Y}$ be positive definite kernels on $\mathcal{X}$ and $\mathcal{Y}$, respectively, with respective RKHS ${\cH_\cX}$ and ${\cH_\cY}$. Let $\Pi$ be a probability on $(\mathcal{X},\mathcal{B}_\mathcal{X})$, which serves as a {\em prior} distribution. For each $x\in\mathcal{X}$, define a probability $P_{Y|x}$ on $(\mathcal{Y},\mathcal{B}_\mathcal{Y})$ by $P_{Y|x}(B)=E[I_B(Y)|X=x]$, where $I_B$ is the indicator function of a measurable set $B\in\mathcal{B}_\mathcal{Y}$. The prior $\Pi$ and the family $\{P_{Y|x}\mid x\in\mathcal{X}\}$ define the joint distribution $Q$ on $\mathcal{X}\times\mathcal{Y}$ by \[
Q(A\times B)=\int_A P_{Y|x}(B)d\Pi(x) \]
for any $A\in\mathcal{B}_\mathcal{X}$ and $B\in\mathcal{B}_\mathcal{Y}$, and its marginal distribution $Q_\mathcal{Y}$ by $Q_\mathcal{Y}(B)=Q(\mathcal{X}\times B)$. Throughout this paper, it is assumed that $P_{Y|x}$ and $Q$ are well-defined under some regularity conditions. Let $(Z,W)$ be a random variable on $\mathcal{X}\times\mathcal{Y}$ with distribution $Q$. It is also assumed that the sigma algebra generated by $W$ contains every singleton $\{y\}$ ($y\in\mathcal{Y}$). For $y\in\mathcal{Y}$, the {\em posterior} probability given $y$ is defined by the conditional probability \begin{equation}\label{eq:BayesRule_general}
Q_{\mathcal{X}|y}(A) = E[I_A(Z)|W=y] \qquad (A\in\mathcal{B}_\mathcal{X}). \end{equation} If the probability distributions have density functions with respect to measures $\nu_\mathcal{X}$ on $\mathcal{X}$ and $\nu_\mathcal{Y}$ on $\mathcal{Y}$, namely, if the p.d.f.s of $P$ and $\Pi$ are given by $p(x,y)$ and $\pi(x)$, respectively, \eq{eq:BayesRule_general} reduces to the well-known form \eq{eq:BayesRule}.
The goal of this subsection is to derive an estimator of the kernel mean of the posterior, $m_{Q_\mathcal{X}|y}$. The following theorem is fundamental to the discussion of conditional probabilities with positive definite kernels. \begin{thm}[\cite{Fukumizu04_jmlr}]\label{thm:cond_mean}
If $E[g(Y)|X=\cdot]\in{\cH_\cX}$ holds for $g\in{\cH_\cY}$, then \[
C_{XX} E[g(Y)|X=\cdot]=C_{XY} g. \] \end{thm} If $C_{XX}$ is injective, i.e., if the function $f\in{\cH_\cX}$ with $C_{XX} f = C_{XY} g$ is unique, the above relation can be expressed as \begin{equation}\label{eq:cond_basic}
E[g(Y)|X=\cdot]={C_{XX}}^{-1}C_{XY} g. \end{equation} Noting $\langle C_{XX}f, f\rangle=E[f(X)^2]$, it is easy to see that $C_{XX}$ is injective, if $\mathcal{X}$ is a topological space, $k_\mathcal{X}$ is a continuous kernel, and ${\rm Supp}(P_X) = \mathcal{X}$, where ${\rm Supp}(P_X)$ is the support of $P_X$.
From Theorem \ref{thm:cond_mean}, we have the following result, which expresses the kernel mean of $Q_\mathcal{Y}$. \begin{thm}[\cite{Song_etal_ICML2009}, Eq. 6] \label{thm:cond_prob_op} Let $m_{\Pi}$ and
$m_{Q_\mathcal{Y}}$ be the kernel means of $\Pi$ in ${\cH_\cX}$ and $Q_\mathcal{Y}$ in ${\cH_\cY}$, respectively. If $C_{XX}$ is injective, $m_\Pi\in \mathcal{R}(C_{XX})$, and $E[g(Y)|X=\cdot]\in{\cH_\cX}$ for any $g\in {\cH_\cY}$, then \begin{equation}\label{eq:cond_prob_op}
m_{Q_\mathcal{Y}}= C_{YX} {C_{XX}}^{-1} m_{\Pi}. \end{equation} \end{thm} \begin{proof}
Take $f\in{\cH_\cX}$ such that $f=C_{XX}^{-1}m_{\Pi}$. For any $g\in {\cH_\cY}$, $\langle C_{YX}f, g\rangle = \langle f, C_{XY}g\rangle = \langle f, C_{XX}E[g(Y)|X=\cdot]\rangle = \langle C_{XX}f, E[g(Y)|X=\cdot]\rangle = \langle m_\Pi, E[g(Y)|X=\cdot]\rangle = \langle m_{Q_{\mathcal{Y}}}, g\rangle$, which implies
$C_{YX}f=m_{Q_{\mathcal{Y}}}$. \end{proof} As discussed in \cite{Song_etal_ICML2009}, the operator
$C_{YX}C_{XX}^{-1}$ can be regarded as the kernel expression of the conditional probability $P_{Y|x}$ or $p(y|x)$.
Note, however, that the assumption $E[g(Y)|X=\cdot]\in {\cH_\cX}$ may not hold in general; we can easily give counterexamples in the case of Gaussian kernels\footnote{Suppose that ${\cH_\cX}$ and ${\cH_\cY}$ are given by Gaussian kernels, and that $X$ and $Y$ are independent. Then, $E[g(Y)|X=x]$ is a constant function of $x$, which is known not to be included in an RKHS given by a Gaussian kernel \cite[Corollary 4.44]{SteChr08}.}. In the following, we nonetheless derive a population expression of Bayes' rule under this strong assumption, use it as a prototype for defining an empirical estimator, and prove its consistency.
\eq{eq:cond_prob_op} has a simple interpretation if the probabilities have density functions and $\pi(x)/p_X(x)$ is in ${\cH_\cX}$, where $p_X$ is the density function of the marginal $P_X$. From \eq{eq:mean_integ} we have $m_\Pi(x) = \int k_\mathcal{X}(x,\tilde{x})\pi(\tilde{x})d\nu_\mathcal{X}(\tilde{x})= \int k_\mathcal{X}(x,\tilde{x})(\pi(\tilde{x})/p_X(\tilde{x}))dP_X(\tilde{x})$, which implies $C_{XX}^{-1}m_\Pi = \pi/p_X$ from \eq{eq:cov_op_integ}. Thus \eq{eq:cond_prob_op} is an operator expression of the obvious relation \[
\int\int k_\mathcal{Y}(y,\tilde{y})p(\tilde{y}|\tilde{x})\pi(\tilde{x})d\nu_\mathcal{X}(\tilde{x})d\nu_\mathcal{Y}(\tilde{y}) = \int k_\mathcal{Y}(y,\tilde{y})(\pi(\tilde{x})/p_X(\tilde{x}))dP(\tilde{x},\tilde{y}) . \]
In deriving the kernel realization of Bayes' rule, we will use the following tensor representation of the joint probability $Q$, based on Theorem \ref{thm:cond_prob_op}: \begin{equation}\label{eq:m_Q}
m_Q = C_{(YX)X} C_{XX}^{-1} m_{\Pi} \in {\cH_\cY}\otimes{\cH_\cX}. \end{equation} In the above equation, the covariance operator $C_{(YX)X}:{\cH_\cX}\to{\cH_\cY}\otimes{\cH_\cX}$ is defined by the random variable $((Y,X),X)$ taking values on $(\mathcal{Y}\times\mathcal{X})\times\mathcal{X}$.
In many applications of Bayesian inference, the probability conditioned on a particular value should be computed. By plugging the point measure at $x$ into $\Pi$ in \eq{eq:cond_prob_op}, we have a population expression \begin{equation}\label{eq:cond_mean_x}
E[k_\mathcal{Y}(\cdot,Y)|X=x]=C_{YX}{C_{XX}}^{-1}k_\mathcal{X}(\cdot,x), \end{equation} which has been considered in \cite{Song_etal_ICML2009, Song_etal_AISTATS2010} as the kernel mean of the conditional probability. It must be noted that for this case the assumption $m_\Pi=k(\cdot,x)\in \mathcal{R}(C_{XX})$ in Theorem \ref{thm:cond_prob_op} may not hold in general\footnote{Suppose
$C_{XX}h_x=k_\mathcal{X}(\cdot,x)$ were to hold for some $h_x\in{\cH_\cX}$. Taking the inner product with $k_\mathcal{X}(\cdot,\tilde{x})$ would then imply $k_\mathcal{X}(x,\tilde{x})=\int h_x(x')k_\mathcal{X}(\tilde{x},x')dP_X(x')$, which is not possible for many popular kernels, including the Gaussian kernel.}. We will show in Theorem \ref{thm:transition_consistency1}, however, that under some conditions a regularized empirical estimator based on \eq{eq:cond_mean_x} is a consistent estimator of $E[k_\mathcal{Y}(\cdot,Y)|X=x]$.
If we replace $P$ by $Q$ and $x$ by $y$ in \eq{eq:cond_mean_x}, we obtain \begin{equation}\label{eq:KBR_population}
m_{Q_{\mathcal{X}|y}}=E[k_\mathcal{X}(\cdot,Z)|W=y]=C_{ZW}C_{WW}^{-1}k_\mathcal{Y}(\cdot,y). \end{equation} This is exactly the kernel mean expression of the posterior, and the next step is to provide a way of deriving the covariance operators $C_{ZW}$ and $C_{WW}$. Recall that the kernel mean $m_Q=m_{(ZW)}\in{\cH_\cX}\otimes{\cH_\cY}$ can be identified with the covariance operator $C_{ZW}:{\cH_\cY}\to{\cH_\cX}$, and $m_{(WW)}$, which is the kernel mean on the product space ${\cH_\cY}\otimes{\cH_\cY}$, with $C_{WW}$. Then from \eq{eq:m_Q} and the similar expression $m_{(WW)}=C_{(YY)X}C_{XX}^{-1}m_\Pi$, we are able to obtain the operators in \eq{eq:KBR_population}, and thus the kernel mean of the posterior.
The above argument can be implemented rigorously when empirical estimators are considered. Let $(X_1,Y_1),\ldots,(X_n,Y_n)$ be an i.i.d.~sample with law $P$. Since kernel methods must express the information of variables in terms of Gram matrices given by data points, we assume that the prior is also expressed in the form of an empirical estimate, and that we have a consistent estimator of $m_\Pi$ in the form \[ \widehat{m}^{(\ell)}_\Pi = \sum_{j=1}^\ell \gamma_j k_\mathcal{X}(\cdot,U_j), \] where $U_1,\ldots,U_\ell$ are points in $\mathcal{X}$ and $\gamma_j$ are the weights. The points $U_j$ may or may not be a sample from the prior $\Pi$, and negative values are allowed for $\gamma_j$. Negative values arise in successive applications of the kernel Bayes' rule, as in the state-space example of Section \ref{sec:filtering}. Based on Theorem \ref{thm:cond_prob_op}, the empirical estimators for $m_{(ZW)}$ and $m_{(WW)}$ are defined respectively by \begin{equation*}
\widehat{m}_{(ZW)}=\widehat{C}^{(n)}_{(YX)X}
\bigl(\widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1} \widehat{m}^{(\ell)}_{\Pi}, \quad
\widehat{m}_{(WW)}=\widehat{C}^{(n)}_{(YY)X}
\bigl(\widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1} \widehat{m}^{(\ell)}_{\Pi}, \end{equation*} where $\varepsilon_n$ is the coefficient of the Tikhonov-type regularization for operator inversion, and $I$ is the identity operator. The empirical estimators $\widehat{C}_{ZW}$ and $\widehat{C}_{WW}$ for $C_{ZW}$ and $C_{WW}$ are identified with $\widehat{m}_{(ZW)}$ and $\widehat{m}_{(WW)}$, respectively. In the following, $G_X$ and $G_Y$ denote the Gram matrices $(k_\mathcal{X} (X_i,X_j))$ and $(k_\mathcal{Y}(Y_i, Y_j ))$, respectively, and $I_n$ is the identity matrix of size $n$.
\begin{prop}\label{prop:Gram_expr_Q} The Gram matrix expressions of $\widehat{C}_{ZW}$ and $\widehat{C}_{WW}$ are given by \[ \widehat{C}_{ZW} = \sum_{i=1}^n \widehat{\mu}_i k_\mathcal{X}(\cdot,X_i)\otimes k_\mathcal{Y}(\cdot,Y_i)\;\;\;\text{and}\;\;\; \widehat{C}_{WW}= \sum_{i=1}^n \widehat{\mu}_i k_\mathcal{Y}(\cdot,Y_i)\otimes k_\mathcal{Y}(\cdot,Y_i), \] respectively, where the common coefficient $\widehat{\mu}\in{\mathbb{R}}^n$ is \begin{equation}\label{eq:lambda} \widehat{\mu} = \Bigl(\frac{1}{n}G_X+\varepsilon_n I_n\Bigr)^{-1} \widehat{{\bf m}}_\Pi,\quad \widehat{{\bf m}}_{\Pi,i} = \widehat{m}_\Pi(X_i) = \sum_{j =1}^\ell \gamma_j k_\mathcal{X}(X_i, U_j). \end{equation} \end{prop} The proof is similar to that of Proposition \ref{prop:KBR_Gram} below, and is omitted. The expressions in Proposition \ref{prop:Gram_expr_Q} imply that the probabilities $Q$ and $Q_\mathcal{Y}$ are estimated by the weighted samples $\{((X_i,Y_i), \widehat{\mu}_i)\}_{i=1}^n$ and $\{(Y_i,\widehat{\mu}_i)\}_{i=1}^n$, respectively, with common weights. Since the weights $\widehat{\mu}_i$ may be negative, in applying \eq{eq:KBR_population} the operator inversion in the form $( \widehat{C}_{WW}+\delta_n I)^{-1}$ may be impossible or unstable. We therefore use another type of Tikhonov regularization, obtaining the estimator \begin{equation}\label{eq:KBR}
\widehat{m}_{Q_\mathcal{X}|y} := \widehat{C}_{ZW}\bigl( \widehat{C}_{WW}^2+\delta_n I\bigr)^{-1} \widehat{C}_{WW} k_\mathcal{Y}(\cdot,y). \end{equation}
\begin{prop}\label{prop:KBR_Gram}
For any $y\in \mathcal{Y}$, the Gram matrix expression of $\widehat{m}_{Q_\mathcal{X}|y}$ is given by \begin{equation}\label{eq:KBRemp}
\widehat{m}_{Q_\mathcal{X}|y}
= {\bf k}_X^T R_{X|Y} {\bf k}_Y(y),\qquad R_{X|Y} :=\Lambda G_Y ((\Lambda G_Y)^2 + \delta_n I_n)^{-1}\Lambda, \end{equation} where $\Lambda = {\rm diag}(\widehat{\mu})$ is a diagonal matrix with elements $\widehat{\mu}_i$ in \eq{eq:lambda}, ${\bf k}_X=(k_\mathcal{X}(\cdot,X_1),\ldots,k_\mathcal{X}(\cdot,X_n))^T\in {\cH_\cX}^n$, and ${\bf k}_Y=(k_\mathcal{Y}(\cdot,Y_1),\ldots,k_\mathcal{Y}(\cdot,Y_n))^T\in {\cH_\cY}^n$. \end{prop}
\begin{proof} Let $h=(\widehat{C}_{WW}^2 + \delta_n I)^{-1}\widehat{C}_{WW} k_\mathcal{Y}(\cdot,y)$, and decompose it as $h=\sum_{i=1}^n \alpha_i k_\mathcal{Y}(\cdot,Y_i)+ h_\perp =\alpha^T {\bf k}_Y + h_\perp$, where $h_\perp$ is orthogonal to ${\rm Span}\{k_\mathcal{Y}(\cdot,Y_i)\}_{i=1}^n$. Expansion of $(\widehat{C}_{WW}^2 + \delta_n I)h = \widehat{C}_{WW} k_\mathcal{Y}(\cdot,y)$ gives ${\bf k}_Y^T (\Lambda G_Y)^2 \alpha + \delta_n {\bf k}_Y^T \alpha + \delta_n h_\perp = {\bf k}_Y^T \Lambda {\bf k}_Y(y)$. Taking the inner product with $k_\mathcal{Y}(\cdot,Y_j)$, we have \[
\bigl((G_Y\Lambda)^2 + \delta_n I_n\bigr) G_Y\alpha = G_Y\Lambda {\bf k}_Y(y). \]
The coefficient $\rho$ in $\widehat{m}_{Q_\mathcal{X}|y}=\widehat{C}_{ZW} h = \sum_{i=1}^n \rho_i k_\mathcal{X}(\cdot,X_i)$ is given by $\rho=\Lambda G_Y\alpha$, and thus \[
\rho = \Lambda \bigl((G_Y\Lambda)^2 + \delta_n I_n\bigr)^{-1}G_Y\Lambda {\bf k}_Y(y) = \Lambda G_Y \bigl((\Lambda G_Y)^2 + \delta_n I_n\bigr)^{-1}\Lambda {\bf k}_Y(y). \] \end{proof}
We call Eqs.(\ref{eq:KBR}) and (\ref{eq:KBRemp}) the {\em kernel Bayes' rule} (KBR). The required computations are summarized in Figure \ref{alg:KBR}. The KBR uses a weighted sample to represent the posterior; it is similar in this respect to sampling methods such as importance sampling and sequential Monte Carlo (\cite{Docuet_etal_SMC}). The KBR method, however, does not generate samples of the posterior, but updates the weights of a sample by matrix computation. We will give some experimental comparisons between KBR and sampling methods in Section \ref{sec:exp_posterior}.
If our aim is to estimate the expectation of a function $f\in{\cH_\cX}$ with respect to the posterior, the reproducing property \eq{eq:reproducing_mean} gives an estimator \begin{equation}\label{eq:KBRemp_f}
\langle f, \widehat{m}_{Q_\mathcal{X}|y}\rangle_{{\cH_\cX}} = \mathbf{f}_X^T R_{X|Y} \mathbf{k}_\mathcal{Y}(y), \end{equation} where $\mathbf{f}_X = (f(X_1),\ldots,f(X_n))^T\in{\mathbb{R}}^n$.
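As an illustration, the computations in Proposition \ref{prop:Gram_expr_Q} and \eq{eq:KBRemp_f} can be sketched in a few lines of NumPy. The function and variable names below (\texttt{gauss\_gram}, \texttt{kbr\_weights}) are ours, and Gaussian kernels with a common bandwidth are assumed for simplicity; this is a minimal sketch, not the reference implementation:

```python
import numpy as np

def gauss_gram(A, B, sigma):
    """Gaussian kernel Gram matrix: k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kbr_weights(X, Y, U, gamma, y, sigma, eps, delta):
    """Posterior weights rho with m_{Q_X|y} = sum_i rho_i k_X(., X_i).

    Implements mu = (G_X/n + eps I)^{-1} m_Pi(X_i)  and
    rho = Lambda G_Y ((Lambda G_Y)^2 + delta I)^{-1} Lambda k_Y(y)."""
    n = X.shape[0]
    GX = gauss_gram(X, X, sigma)
    GY = gauss_gram(Y, Y, sigma)
    m_Pi = gauss_gram(X, U, sigma) @ gamma             # prior mean at the X_i
    mu = np.linalg.solve(GX / n + eps * np.eye(n), m_Pi)
    ky = gauss_gram(Y, y[None, :], sigma).ravel()      # k_Y(Y_i, y)
    LG = mu[:, None] * GY                              # Lambda G_Y with Lambda = diag(mu)
    rho = LG @ np.linalg.solve(LG @ LG + delta * np.eye(n), mu * ky)
    return rho
```

The posterior expectation estimate \eq{eq:KBRemp_f} is then the inner product \texttt{fX @ rho} with \texttt{fX[i]} $=f(X_i)$.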
\begin{figure}
\caption{Algorithm of Kernel Bayes' Rule}
\label{alg:KBR}
\end{figure}
\subsection{Consistency of the KBR estimator}
\label{sec:theory}
We now demonstrate the consistency of the KBR estimator in \eq{eq:KBRemp_f}. For simplicity, the theoretical analysis assumes that the distributions have density functions. In the following two theorems, we state only the best rates that can be derived under the assumptions, and defer more detailed discussion and proofs to Section \ref{sec:proof}. We assume here that the sample size $\ell=\ell_n$ for the prior goes to infinity as the sample size $n$ for the likelihood goes to infinity, and that $\widehat{m}_\Pi^{(\ell_n)}$ is $n^{\alpha}$-consistent in the RKHS norm.
\begin{thm}\label{thm:consitency_KBR_1a}
Let $f$ be a function in ${\cH_\cX}$, $(Z,W)$ be a random variable on $\mathcal{X}\times\mathcal{Y}$ whose distribution is $Q$ with p.d.f.~$p(y|x)\pi(x)$, and $\widehat{m}_\Pi^{(\ell_n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(\ell_n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ as $n\to\infty$ for some $0< \alpha \leq 1/2$. Assume that $\pi/p_X \in \mathcal{R}(C_{XX}^{1/2})$, where $p_X$ is the p.d.f.~of $P_X$, and
$E[f(Z)|W=\cdot]\in \mathcal{R}(C_{WW}^2)$. For the regularization constants $\varepsilon_n=n^{-\frac{2}{3}\alpha}$ and $\delta_n = n^{- \frac{8}{27}\alpha}$, we have for any $y\in\mathcal{Y}$ \[
\mathbf{f}^T_X R_{X|Y}\mathbf{k}_Y(y) - E[f(Z)|W=y]
= O_p(n^{-\frac{8}{27}\alpha}), \quad (n\to\infty), \]
where $\mathbf{f}_X^T R_{X|Y}\mathbf{k}_Y(y)$ is given by \eq{eq:KBRemp_f}. \end{thm}
It is possible to extend the covariance operator $C_{WW}$ to one defined on $L^2(Q_\mathcal{Y})$ by \begin{equation}\label{eq:cov_op_L2}
\tilde{C}_{WW}\phi = \int k_\mathcal{Y}(y,w)\phi(w) dQ_\mathcal{Y}(w), \qquad (\phi\in L^2(Q_\mathcal{Y})). \end{equation} If we consider the convergence on average over $y$, we have a slightly better rate on the consistency of the KBR estimator in $L^2(Q_\mathcal{Y})$.
\begin{thm}\label{thm:consitency_KBR_2a}
Let $f$ be a function in ${\cH_\cX}$, $(Z,W)$ be a random vector on $\mathcal{X}\times\mathcal{Y}$ whose distribution is $Q$ with p.d.f.~$p(y|x)\pi(x)$, and $\widehat{m}_\Pi^{(\ell_n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(\ell_n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ as $n\to\infty$ for some $0<\alpha \leq 1/2$. Assume that $\pi/p_X \in \mathcal{R}(C_{XX}^{1/2})$, where $p_X$ is the p.d.f.~of $P_X$, and
$E[f(Z)|W=\cdot]\in \mathcal{R}(\tilde{C}_{WW}^2)$. For the regularization constants $\varepsilon_n=n^{-\frac{2}{3}\alpha}$ and $\delta_n = n^{- \frac{1}{3}\alpha}$, we have \[
\bigl\| \mathbf{f}_X^T R_{X|Y}\mathbf{k}_Y(W) - E[f(Z)|W]\bigr\|_{L^2(Q_\mathcal{Y})}
= O_p(n^{-\frac{1}{3}\alpha}), \quad (n\to\infty). \] \end{thm}
The condition $\pi/p_X \in \mathcal{R}(C_{XX}^{1/2})$ requires the prior to be sufficiently smooth. If $\widehat{m}^{(\ell_n)}_\Pi$ is a direct empirical mean with an i.i.d.~sample of size $n$ from $\Pi$, typically $\alpha=1/2$, with which the theorems imply $n^{4/27}$-consistency for every $y$, and $n^{1/6}$-consistency in the $L^2(Q_\mathcal{Y})$ sense. While these might seem to be slow rates, the rate of convergence can in practice be much faster than the above theoretical guarantees.
\section{Bayesian inference with Kernel Bayes' Rule} \label{sec:KBRmethods}
\subsection{Applications of Kernel Bayes' Rule} \label{sec:KBR_appl}
In Bayesian inference, we are usually interested in finding a point estimate such as the MAP solution, the expectation of a function under the posterior, or other properties of the distribution. Given that KBR provides a posterior estimate in the form of a kernel mean (which uniquely determines the distribution when a characteristic kernel is used), we now describe how our kernel approach applies to problems in Bayesian inference.
First, we have already seen that a consistent estimator for the expectation of $f\in{\cH_\cX}$ can be defined with respect to the posterior. On the other hand, unless $f\in {\cH_\cX}$ holds, there is no theoretical guarantee that it gives a good estimate. In Section \ref{sec:exp_posterior}, we discuss some experimental results in such situations.
To obtain a point estimate of the posterior on $x$, it is proposed in \cite{Song_etal_ICML2009} to use the preimage $\widehat{x}=\arg\min_x\| k_\mathcal{X}(\cdot,x) - {\bf k}_X^T R_{X|Y} {\bf k}_Y(y)\|^2_{{\cH_\cX}}$, which represents the posterior mean most effectively by one point. We use this approach in the present paper when point estimates are considered. In the case of the Gaussian kernel $\exp(-\|x-y\|^2/(2\sigma^2))$, the fixed point method \[
x^{(t+1)} = \frac{\sum_{i=1}^n X_i \rho_i \exp(-\|X_i-x^{(t)}\|^2/(2\sigma^2))}
{\sum_{i=1}^n \rho_i \exp(-\|X_i-x^{(t)}\|^2/(2\sigma^2))}, \]
where $\rho=R_{X|Y} {\bf k}_Y(y)$, can be used to optimize $x$ sequentially \cite{Mika99kernelpca}. This method usually converges very fast, although no theoretical guarantee exists for the convergence to the globally optimal point, as is usual in non-convex optimization.
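The fixed-point update above can be sketched as follows (a minimal implementation; the function name \texttt{preimage} and the choice of initial point are ours, and, as noted, convergence to the global optimum is not guaranteed):

```python
import numpy as np

def preimage(X, rho, sigma, x0, n_iter=50):
    """Fixed-point iteration for the Gaussian-kernel preimage problem:
    x <- sum_i X_i rho_i w_i / sum_i rho_i w_i,
    with w_i = exp(-||X_i - x||^2 / (2 sigma^2)).
    x0 is an initial guess, e.g. the X_i with the largest weight rho_i."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        w = rho * np.exp(-((X - x) ** 2).sum(axis=1) / (2.0 * sigma ** 2))
        s = w.sum()
        if abs(s) < 1e-12:        # degenerate weights (can happen with signed rho)
            break
        x = (w[:, None] * X).sum(axis=0) / s
    return x
```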
A notable property of KBR is that the prior and likelihood are represented in terms of samples. Thus, unlike many approaches to Bayesian inference, precise knowledge of the prior and likelihood distributions is not needed, once samples are obtained. The following are typical situations where the KBR approach is advantageous: \begin{itemize} \item The probabilistic relation among variables is difficult to realize with a simple parametric model, while we can obtain samples of the variables easily. We will see such an example in Section \ref{sec:filtering}. \item The probability density function of the prior and/or likelihood is hard to obtain explicitly, but sampling is possible:
\begin{itemize}
\item In the field of population genetics, Bayesian inference is used with a likelihood expressed by branching processes to model the split of species, for which the explicit density is hard to obtain. Approximate Bayesian Computation (ABC) is a popular method for approximately sampling from a posterior without knowing the functional form \citep{Tavare_etal_1997ABC,Marjoram_etal_2003PNAS,Sisson_etal2007}.
\item
Another interesting application along these lines is nonparametric Bayesian inference (\cite{MullerQuintana2004} and references therein), in which the prior is typically given in the form of a process without a density form. In this case, sampling methods are often applied (\cite{MacEachern1994,West_etal1994,MacEachern_etal1999} among others). Alternatively, the posterior may be approximated using variational methods \cite{BleiJordan2006}.
\end{itemize}
We will present an experimental comparison of KBR and ABC in Section \ref{sec:experiment_ABC}.
\item Even if explicit forms for the likelihood and prior are available, and standard sampling methods such as MCMC or sequential MC are applicable, the computation of a posterior estimate given $y$ might still be computationally costly, making real-time applications infeasible. Using KBR, however, the expectation of a function under the posterior given different $y$ is obtained simply by taking the inner product as in \eq{eq:KBRemp_f}, once $\mathbf{f}_X^T R_{X|Y}$ has been computed. \end{itemize}
\subsection{Discussions concerning implementation}
When implementing KBR, a number of factors should be borne in mind to ensure good performance. First, in common with many nonparametric approaches, KBR requires training data in the region of the new ``test'' points for results to be meaningful. In other words, if the point on which we condition appears in a region far from the sample used for the estimation, the posterior estimator will be unreliable.
Second, in computing the posterior in KBR, Gram matrix inversion is necessary, which would cost $O(n^3)$ for sample size $n$ if attempted directly. Substantial cost reductions can be achieved if the Gram matrices are replaced by low-rank approximations. A popular choice is the incomplete Cholesky decomposition \cite{FineScheinberg2001}, which approximates a Gram matrix in the form $\Gamma\Gamma^T$ with an $n\times r$ matrix $\Gamma$ ($r\ll n$) at cost $O(nr^2)$. Using this and the Woodbury identity, the KBR can be approximately computed at cost $O(nr^2)$.
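A pivoted incomplete Cholesky factorization can be sketched as below. This is a simplified version for illustration: it takes the full Gram matrix as input, whereas a production implementation would evaluate only the kernel columns selected by the pivots:

```python
import numpy as np

def pivoted_ichol(G, tol=1e-12, max_rank=None):
    """Pivoted incomplete Cholesky: returns an (n x r) factor Gamma with
    G ~ Gamma Gamma^T, stopping when the largest residual diagonal <= tol."""
    n = G.shape[0]
    rmax = n if max_rank is None else max_rank
    d = np.array(np.diag(G), dtype=float)      # residual diagonal of G - Gamma Gamma^T
    Gamma = np.zeros((n, rmax))
    for j in range(rmax):
        i = int(np.argmax(d))                  # greedy pivot: largest residual
        if d[i] <= tol:
            return Gamma[:, :j]
        Gamma[:, j] = (G[:, i] - Gamma[:, :j] @ Gamma[i, :j]) / np.sqrt(d[i])
        d = d - Gamma[:, j] ** 2
    return Gamma
```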
Third, kernel choice or model selection is key to the effectiveness of any kernel method. In the case of KBR, we have three model parameters: the kernel (or its parameter, e.g. the bandwidth), the regularization parameter $\varepsilon_n$, and $\delta_n$. The strategy for parameter selection depends on how the posterior is to be used in the inference problem. If it is to be applied in regression, we can use standard cross-validation. In the filtering experiments in Section \ref{sec:experiments}, we use a validation method where we divide the training sample in two.
A more general model selection approach can also be formulated, by creating a new regression problem for the purpose. Suppose the prior $\Pi$ is given by the marginal $P_X$ of $P$. The posterior ${Q}_{\mathcal{X}|y}$ averaged with respect to $P_Y$ is then equal to the marginal $P_X$ itself. We are thus able to compare the discrepancy of the empirical kernel mean of $P_X$ and the average of the estimators $\widehat{m}_{Q_{\mathcal{X}|y=Y_i}}$ over $Y_i$. This leads to a $K$-fold cross validation approach: for a partition of $\{1,\ldots,n\}$ into $K$ disjoint subsets $\{T_a\}_{a=1}^K$, let $\widehat{m}_{Q_{\mathcal{X}|y}}^{[-a]}$ be the kernel mean of posterior computed using Gram matrices on data $\{(X_i,Y_i)\}_{i\notin T_a}$, and based on the prior mean $\widehat{m}_X^{[-a]}$ with data $\{X_i\}_{i\notin T_a}$. We can then cross validate by minimizing $\sum_{a=1}^K \bigl\| \frac{1}{|T_a|} \sum_{j\in T_a} \widehat{m}_{Q_{\mathcal{X}|y=Y_j}}^{[-a]} - \widehat{m}_X^{[a]}\bigr\|^2_{{\cH_\cX}}$, where $\widehat{m}_X^{[a]}=\frac{1}{|T_a|} \sum_{j\in T_a}k_\mathcal{X}(\cdot,X_j)$.
\subsection{Application to nonparametric state-space model} \label{sec:filtering}
We next describe how KBR may be used in a particular application: namely, inference in a general time invariant state-space model, \[
p(X,Y) = \pi(X_1)\prod_{t=1}^T p(Y_t|X_t) \prod_{t=1}^{T-1} q(X_{t+1}|X_t), \] where $Y_t$ is an observable variable, and $X_t$ is a hidden state variable. We begin with a brief review of alternative strategies for inference in state-space models with complex dynamics, for which linear models are not suitable.
The extended Kalman filter (EKF) and unscented Kalman filter (UKF, \cite{JulierUhlmann1997}) are nonlinear extensions of the standard linear Kalman filter, and are well established in this setting. Alternatively, nonparametric estimates of conditional density functions can be employed, including kernel density estimation or distribution estimates on a partitioning of the space \cite{Monbet_etal2008,Thurn_etal_ICML1999}. The latter nonparametric approaches are effective only for low-dimensional cases, however. Most relevant to this paper are \cite{Song_etal_ICML2009} and \cite{Song_etal_ICML2010}, in which the kernel means and covariance operators are used to implement the nonparametric HMM.
In this paper, we apply the KBR for inference in the nonparametric state-space model. We do not assume the conditional probabilities $p(Y_t|X_t)$ and $q(X_{t+1}|X_t)$ to be known explicitly, nor do we estimate them with simple parametric models. Rather, we assume a sample
$(X_1,Y_1), \ldots, (X_{T+1},Y_{T+1})$ is given for both the observable and hidden variables in the training phase. The conditional probability for observation process $p(y|x)$ and the transition $q(x_{t+1}|x_t)$ are represented by the empirical covariance operators as computed on the training sample, \begin{align}\label{eq:cor_op_seq}
& \widehat{C}_{XY} = \frac{1}{T}\sum_{i=1}^T k_\mathcal{X}(\cdot,X_i)\otimes k_\mathcal{Y}(\cdot,Y_i), \quad
\widehat{C}_{X_{+1}X} = \frac{1}{T}\sum_{i=1}^T k_\mathcal{X}(\cdot,X_{i+1})\otimes k_\mathcal{X}(\cdot,X_i), \\
& \widehat{C}_{YY} = \frac{1}{T}\sum_{i=1}^T k_\mathcal{Y}(\cdot,Y_i)\otimes k_\mathcal{Y}(\cdot,Y_i), \quad
\widehat{C}_{XX} = \frac{1}{T}\sum_{i=1}^T k_\mathcal{X}(\cdot,X_i)\otimes k_\mathcal{X}(\cdot,X_i). \nonumber \end{align}
While the sample is not i.i.d., we can still use the empirical covariances, which are consistent under mixing conditions on the Markov process.
Typical applications of the state-space model are filtering, prediction, and smoothing, which are defined by the estimation of $p(x_s|y_1,\ldots,y_t)$ for $s=t$, $s>t$, and $s<t$, respectively. Using the KBR, any of these can be computed. For simplicity we explain the filtering problem in this paper, but the remaining cases are similar. In filtering, given new observations $\tilde{y}_1,\ldots,\tilde{y}_t$, we wish to estimate the current hidden state $x_t$. The sequential estimate for the kernel mean of
$p(x_t|\tilde{y}_1,\ldots,\tilde{y}_t)$ can be derived via KBR. Suppose we already have an estimator of the kernel mean of $p(x_t|\tilde{y}_1,\ldots,\tilde{y}_t)$ in the form \[
\widehat{m}_{x_t|\tilde{y}_1,\ldots,\tilde{y}_t} = \sum_{i=1}^T \alpha_i^{(t)} k_\mathcal{X}(\cdot,X_i), \] where $\alpha_i^{(t)}=\alpha_i^{(t)}(\tilde{y}_1,\ldots,\tilde{y}_t)$ are the coefficients at time $t$.
From $p(x_{t+1}| \tilde{y}_1,\ldots,\tilde{y}_t) = \int p(x_{t+1}|x_t)p(x_t|\tilde{y}_1,\ldots,\tilde{y}_t)dx_t$, Theorem \ref{thm:cond_prob_op} tells us that the kernel mean of $x_{t+1}$ given $\tilde{y}_1,\ldots,\tilde{y}_t$ is estimated by $\widehat{m}_{x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_t}=\widehat{C}_{X_{+1}X}(\widehat{C}_{XX}+\varepsilon_T I)^{-1} \widehat{m}_{x_t|\tilde{y}_1,\ldots,\tilde{y}_t} = {\bf k}_{X_{+1}}^T (G_X + T\varepsilon_T I_T)^{-1} G_X \alpha^{(t)}$, where ${\bf k}_{X_{+1}}^T=(k_\mathcal{X}(\cdot,X_2),\ldots,k_\mathcal{X}(\cdot,X_{T+1}))$. Applying Theorem \ref{thm:cond_prob_op} again with $p(y_{t+1}| \tilde{y}_1,\ldots,\tilde{y}_t) = \int p(y_{t+1}|x_{t+1})p(x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_t)dx_{t+1}$, we have an estimate for the kernel mean of the prediction $p(y_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_t)$, \begin{equation*}
\widehat{m}_{y_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_t} = \widehat{C}_{YX} (\widehat{C}_{XX}+\varepsilon_T I)^{-1} \widehat{m}_{x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_t} \\
= \sum_{i=1}^T \widehat{\mu}^{(t+1)}_i k_\mathcal{Y}(\cdot,Y_i), \end{equation*} where the coefficients $\widehat{\mu}^{(t+1)}=(\widehat{\mu}^{(t+1)}_i)_{i=1}^T$ are given by \begin{equation}\label{eq:update_filter1} \widehat{\mu}^{(t+1)}=\bigl(G_X+T\varepsilon_T I_T\bigr)^{-1}G_{X X_{+1}} \bigl(G_X + T\varepsilon_T I_T\bigr)^{-1} G_X \alpha^{(t)}. \end{equation} Here $G_{X X_{+1}}$ is the ``transfer'' matrix defined by
$\bigl( G_{XX_{+1}}\bigr)_{ij} = k_\mathcal{X}(X_{i},X_{j+1})$. From $p(x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_{t+1})=\frac{p(y_{t+1}|x_{t+1})p(x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_{t})}
{ \int p(y_{t+1}|x_{t+1})p(x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_{t})dx_{t+1}}$, kernel Bayes' rule with the prior $p(x_{t+1}|\tilde{y}_1,\ldots,\tilde{y}_{t})$ and the likelihood $p(y_{t+1}|x_{t+1})$ yields \begin{equation}\label{eq:update_filter2}
\alpha^{(t+1)}
= \Lambda^{(t+1)}G_Y\bigl((\Lambda^{(t+1)}G_Y)^2 +\delta_T I_T\bigr)^{-1}\Lambda^{(t+1)}
{\bf k}_Y(\tilde{y}_{t+1}), \end{equation} where $\Lambda^{(t+1)} = {\rm diag}(\widehat{\mu}^{(t+1)}_1,\ldots,\widehat{\mu}^{(t+1)}_T)$. Eqs. (\ref{eq:update_filter1}) and (\ref{eq:update_filter2}) describe the update rule of $\alpha^{(t)}(\tilde{y}_1,\ldots,\tilde{y}_t)$.
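One step of this update rule can be sketched in NumPy as follows. The function name \texttt{kbr\_filter\_step} is ours; the Gram matrices and the transfer matrix are assumed precomputed, and Eqs. (\ref{eq:update_filter1}) and (\ref{eq:update_filter2}) are implemented directly:

```python
import numpy as np

def kbr_filter_step(alpha, GX, GY, GXX1, ky_new, eps, delta):
    """One KBR filtering step.
    alpha  : current weights of the posterior mean over k_X(., X_i), i = 1..T
    GXX1   : transfer matrix with entries k_X(X_i, X_{j+1})
    ky_new : vector of k_Y(Y_i, y_new) for the new observation y_new."""
    T = alpha.shape[0]
    A = GX + T * eps * np.eye(T)
    # prediction: mu = (G_X + T eps I)^{-1} G_{X X+1} (G_X + T eps I)^{-1} G_X alpha
    mu = np.linalg.solve(A, GXX1 @ np.linalg.solve(A, GX @ alpha))
    # correction by kernel Bayes' rule with Lambda = diag(mu)
    LG = mu[:, None] * GY
    return LG @ np.linalg.solve(LG @ LG + delta * np.eye(T), mu * ky_new)
```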
If the prior $\pi(x_1)$ is available, the posterior estimate of $x_1$ given $\tilde{y}_1$ is obtained by the kernel Bayes' rule. If not, we may use \eq{eq:cond_mean_x} to get an initial estimate $\widehat{C}_{XY}(\widehat{C}_{YY} + \varepsilon_T I)^{-1} k_\mathcal{Y}(\cdot,\tilde{y}_1)$, yielding $\alpha^{(1)}(\tilde{y}_1) = T(G_Y+T\varepsilon_T I_T)^{-1}{\bf k}_Y(\tilde{y}_1)$.
In sequential filtering, a substantial reduction in computational cost can be achieved by low rank matrix approximations, as discussed above. Given an approximation of rank $r$ for the Gram matrices and transfer matrix, and employing the Woodbury identity, the computation costs just $O(Tr^2)$ for each time step.
\subsection{Bayesian computation without likelihood} \label{sec:ABC}
We next address the setting where the likelihood is not known in analytic form, but sampling is possible. In this case, Approximate Bayesian Computation (ABC) is a popular method for Bayesian inference. The simplest form of ABC, which is called the rejection method, generates a sample from $q(Z|W=y)$ as follows: (i) generate a sample $X_t$ from the prior $\Pi$,
(ii) generate a sample $Y_t$ from $P(Y|X_t)$, (iii) if $D(y,Y_t) < \tau$, accept $X_t$; otherwise reject, (iv) go to (i). In step (iii), $D$ is a distance measure on the space $\mathcal{Y}$, and $\tau$ is the acceptance tolerance.
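Steps (i)--(iv) can be sketched as follows. The callables \texttt{prior\_sample} and \texttt{simulate} are hypothetical user-supplied interfaces, and $D$ is taken to be the Euclidean distance:

```python
import numpy as np

def abc_rejection(prior_sample, simulate, y_obs, tau, n_accept, rng):
    """Rejection-ABC sketch: keep x whenever the simulated y is within tau of y_obs."""
    accepted = []
    while len(accepted) < n_accept:
        x = prior_sample(rng)                    # (i)   draw from the prior
        y = simulate(x, rng)                     # (ii)  draw from the likelihood
        if np.linalg.norm(y - y_obs) < tau:      # (iii) accept if close to the data
            accepted.append(x)
    return np.array(accepted)                    # (iv)  repeat until enough accepted
```

As noted below, the acceptance rate, and hence the cost, degrades rapidly as \texttt{tau} shrinks.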
In the same setting as ABC,
KBR gives the following sampling-based method for computing the kernel posterior mean: \begin{enumerate} \item Generate a sample $X_1,\ldots,X_n$ from the prior $\Pi$.
\item Generate a sample $Y_t$ from $P(Y|X_t)$ ($t=1,\ldots,n$).
\item Compute Gram matrices $G_X$ and $G_Y$ with $(X_1,Y_1),\ldots,(X_n,Y_n)$, and $R_{X|Y}{\bf k}_Y(y)$. \end{enumerate} Alternatively, since $(X_t,Y_t)$ is a sample from $Q$, it is possible to use \eq{eq:cond_mean_x}
for the kernel mean of the conditional probability $q(x|y)$. As in \cite{Song_etal_ICML2009}, the estimator is given by \[
\sum_{t=1}^n \nu_t k_\mathcal{X}(\cdot,X_t),\quad \nu = (G_Y + n\varepsilon_n I_n)^{-1} {\bf k}_Y(y). \]
The distribution of a sample generated by ABC approaches the true posterior as $\tau$ goes to zero, while empirical estimates via the kernel approaches converge to the true posterior mean in the limit of infinite sample size. The efficiency of ABC, however, can be arbitrarily poor for small $\tau$, since a sample $X_t$ is then rarely accepted in step (iii).
The ABC method generates a sample, hence any statistics based on the posterior can be approximated. Given a posterior mean obtained by one of the kernel methods, however, we may only obtain expectations of functions in the RKHS, meaning that certain statistics (such as confidence intervals) are not straightforward to obtain. In Section \ref{sec:experiment_ABC}, we present an experimental evaluation of the trade-off between computation time and accuracy for ABC and KBR.
\section{Numerical Examples} \label{sec:experiments}
\subsection{Nonparametric inference of posterior} \label{sec:exp_posterior}
The first numerical example is a comparison between KBR and a kernel density estimation (KDE) approach to obtaining conditional densities. Let $(X_1,Y_1),\ldots,(X_n,Y_n)$ be an i.i.d.~sample from $P$ on ${\mathbb{R}}^d\times{\mathbb{R}}^r$. With probability density functions $K^\mathcal{X}(x)$ on ${\mathbb{R}}^d$ and $K^\mathcal{Y}(y)$ on ${\mathbb{R}}^r$, the conditional probability density function $p(y|x)$ is estimated by \[
\widehat{p}(y|x) = \frac{\sum_{j=1}^n K^\mathcal{X}_{h_X}(x-X_j) K^\mathcal{Y}_{h_Y}(y-Y_j)}{\sum_{j=1}^n K^\mathcal{X}_{h_X}(x-X_j)}, \]
where $K^\mathcal{X}_{h_X}(x)=h_X^{-d}K^\mathcal{X}(x/h_X)$ and $K^\mathcal{Y}_{h_Y}(y)=h_Y^{-r}K^\mathcal{Y}(y/h_Y)$ ($h_X,h_Y>0$). Given an i.i.d.~sample $U_1,\ldots,U_\ell$ from the prior $\Pi$, the particle representation of the posterior can be obtained by importance weighting (IW). Using this scheme, the posterior $q(x|y)$ given $y\in{\mathbb{R}}^r$ is represented by the weighted sample $(U_i,\zeta_i)$ with $\zeta_i=\widehat{p}(y|U_i)/\sum_{j=1}^\ell \widehat{p}(y|U_j)$.
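The KDE + IW baseline can be sketched as follows, with Gaussian smoothing kernels $K^\mathcal{X}$ and $K^\mathcal{Y}$; the normalizing constants of the kernels cancel in the weights $\zeta_i$, so they are omitted. The function name is ours:

```python
import numpy as np

def kde_iw_posterior_mean(X, Y, U, y, hX, hY):
    """KDE + importance-weighting estimate of the posterior mean E[x | y].
    X, Y : i.i.d. sample from P;  U : sample from the prior Pi."""
    # Gaussian smoothing kernels; normalizing constants cancel in the weights
    KX = np.exp(-((U[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * hX ** 2))
    Ky = np.exp(-((Y - y) ** 2).sum(-1) / (2 * hY ** 2))
    # hat p(y | U_i): Nadaraya-Watson-type estimate of the conditional density
    phat = (KX * Ky).sum(axis=1) / np.maximum(KX.sum(axis=1), 1e-300)
    zeta = phat / phat.sum()                  # importance weights
    return (zeta[:, None] * U).sum(axis=0)    # weighted posterior mean
```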
We compare the estimates of $\int x q(x|y)dx$ obtained by KBR and KDE + IW, using Gaussian kernels for both methods. Note that the function $f(x)=x$ does not belong to the Gaussian kernel RKHS,
and the consistency of KBR is not rigorously guaranteed for this function (cf.
Theorem \ref{thm:consitency_KBR_1a}). That said, Gaussian kernels are known to be able to approximate any continuous function on a compact subset of the Euclidean space with arbitrary accuracy \cite{Steinwart01}. With such kernels, we can expect the posterior mean to be approximated with high accuracy on any compact set, and thus on average. In our experiments, the dimensionality was given by $r=d$ ranging from 2 to 64. The distribution $P$ of $(X,Y)$ was $N((0, {\bf 1}_d^T)^T, V)$ with $V=A^T A+2 I_d$, where ${\bf 1}_d=(1,\ldots,1)^T\in{\mathbb{R}}^d$ and each component of $A$ was randomly generated as $N(0,1)$ for each run. The prior
$\Pi$ was $P_X=N(0,V_{XX}/2)$, where $V_{XX}$ is the $X$-component of $V$. The sample sizes were $n = \ell = 200$. The bandwidth parameters $h_X,h_Y$ in KDE were set equal ($h_X=h_Y$) and chosen from the set $\{2i\mid i=1,\ldots,10\}$ in two ways: least-squares cross-validation \cite{Rudemo1982,Bowman1984} and the best mean performance. For the KBR, we chose $\sigma$ in $e^{-\|x-x'\|^2/(2\sigma^2)}$ in two ways: the median over the pairwise distances in the data \cite{Gretton_etal_nips07}, and the 10-fold cross-validation approach described in Section \ref{sec:KBR_appl}. Figure \ref{BayesRule} shows the mean square errors (MSE) of the estimates over 1000 random points
$y\sim N(0,V_{YY})$.
KBR significantly outperforms the KDE+IW approach. Unsurprisingly, the MSE of both methods increases with dimensionality.
\begin{figure}
\caption{Comparison between KBR and KDE+IW.}
\label{BayesRule}
\end{figure}
\subsection{Bayesian computation without likelihood} \label{sec:experiment_ABC}
We compare ABC and the kernel methods, KBR and conditional mean, in terms of estimation accuracy and computational time, since they have an obvious tradeoff. To compute the estimation accuracy rigorously, the ground truth is needed: thus we use Gaussian distributions for the true prior and likelihood, which makes the posterior easy to compute in closed form. The samples are taken from the same model used in Section \ref{sec:exp_posterior}, and $\int x q(x|y)dx$ is evaluated at 10 different points of $y$. We performed 10 random runs with different random generation of the true distributions.
For ABC, we used only the rejection method; while there are more advanced sampling schemes \cite{Marjoram_etal_2003PNAS,Sisson_etal2007}, their implementation is dependent on the problem being solved.
Various values of the acceptance tolerance $\tau$ are used, and the accuracy and computational time are shown in Fig.~\ref{fig:ABC}, together with the total sizes of the generated samples. For the kernel methods, the sample size $n$ is varied.
The regularization parameters are given by $\varepsilon_n = 0.01/n$ and $\delta_n = 2\varepsilon_n$ for KBR, and $\varepsilon_n=0.01/\sqrt{n}$ for the conditional kernel mean. The kernels in the kernel methods are Gaussian kernels, for which the bandwidth parameters are chosen as the median of the pairwise distances in the data (\cite{Gretton_etal_nips07}). The incomplete Cholesky decomposition is employed for the low-rank approximation. The results indicate that the kernel methods achieve more accurate results than ABC at a given computational cost, with the conditional kernel mean performing best.
\begin{figure}
\caption{Comparison of estimation accuracy and computational time with KBR and ABC for Bayesian computation without likelihood. The numbers at the marks are the sample sizes generated for computation.}
\label{fig:ABC}
\end{figure}
\subsection{Filtering problems} We next compare the KBR filtering method (proposed in Section \ref{sec:filtering}) with EKF and UKF on synthetic data.
KBR has the regularization parameters $\varepsilon_T, \delta_T$, and kernel parameters for $k_\mathcal{X}$ and $k_\mathcal{Y}$ (e.g., the bandwidth parameter for an RBF kernel). Under the assumption that a training sample is available, cross-validation can be performed on the training sample to select the parameters. By dividing the training sample into two, one half is used to estimate the covariance operators \eq{eq:cor_op_seq} with a candidate parameter set, and the other half to evaluate the estimation errors. To reduce the search space and attendant computational cost, we used a simpler procedure, setting $\delta_T = 2\varepsilon_T$, and using the Gaussian kernel bandwidths $\beta \sigma_\mathcal{X}$ and $\beta \sigma_\mathcal{Y}$, where $\sigma_\mathcal{X}$ and $\sigma_\mathcal{Y}$ are the median of pairwise distances in the training samples
(\cite{Gretton_etal_nips07}). This leaves only two parameters $\beta$ and $\varepsilon_T$ to be tuned.
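The median-heuristic bandwidth and the resulting Gaussian Gram matrix can be sketched as follows; the function names are illustrative, not from the paper's code:

```python
import numpy as np

def median_heuristic_bandwidth(X):
    """Median of the pairwise Euclidean distances of the sample,
    a common default bandwidth for Gaussian kernels."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(n, k=1)   # distances for i < j only
    return float(np.median(d[iu]))

def gauss_gram(X, sigma):
    """Gram matrix of the Gaussian kernel
    k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    X = np.asarray(X, dtype=float)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))
```

Scaling the heuristic value by a single factor $\beta$, as described above, keeps the grid search two-dimensional ($\beta$ and $\varepsilon_T$).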
We applied the KBR filtering algorithm from Section \ref{sec:filtering} to two synthetic data sets: a simple nonlinear dynamical system, in which the degree of nonlinearity can be controlled, and the problem of camera orientation recovery from an image sequence. In the first case, the hidden state is $X_t=(u_t,v_t)^T\in{\mathbb{R}}^2$, and the dynamics are given by \[
\begin{pmatrix}u_{t+1}\\ v_{t+1}\end{pmatrix} = (1+b\sin(M\theta_{t+1}))\begin{pmatrix}\cos\theta_{t+1} \\ \sin\theta_{t+1}\end{pmatrix} + \zeta_t,
\quad \theta_{t+1}=\theta_t + \eta\;\;(\text{mod }2\pi), \] where $\eta>0$ is the angular increment and $\zeta_t\sim N(0, \sigma_h^2 I_2)$ is independent process noise. Note that the dynamics of $(u_t,v_t)$ are nonlinear even for $b=0$. The observation $Y_t$ follows \[ Y_t = (u_t, v_t)^T + \xi_t,\qquad \xi_t\sim N(0,\sigma_o^2 I), \] where $\xi_t$ is independent observation noise. The two datasets are generated as follows. (a) (rotation with noisy observation) $\eta=0.3$, $b=0$, $\sigma_h = \sigma_o=0.2$.
(b) (oscillatory rotation with noisy observation) $\eta=0.4$, $b=0.4$, $M=8$, $\sigma_h = \sigma_o=0.2$ (see Fig.~\ref{fig:data}).
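A minimal sketch of a generator for these two synthetic datasets, under the dynamics written above (the function name and seed are assumptions):

```python
import numpy as np

def generate_rotation_data(T, eta, b, M=8, sigma_h=0.2, sigma_o=0.2, seed=0):
    """Simulate hidden states X_t = (u_t, v_t) and observations Y_t of the
    (possibly oscillatory) rotation dynamics described in the text."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    X = np.zeros((T, 2))
    Y = np.zeros((T, 2))
    for t in range(T):
        theta = (theta + eta) % (2.0 * np.pi)
        r = 1.0 + b * np.sin(M * theta)          # radius modulation (b=0: pure rotation)
        X[t] = r * np.array([np.cos(theta), np.sin(theta)]) \
               + rng.normal(0.0, sigma_h, 2)     # process noise
        Y[t] = X[t] + rng.normal(0.0, sigma_o, 2)  # observation noise
    return X, Y

# Dataset (a): eta=0.3, b=0.  Dataset (b): eta=0.4, b=0.4, M=8.
Xa, Ya = generate_rotation_data(300, eta=0.3, b=0.0)
Xb, Yb = generate_rotation_data(300, eta=0.4, b=0.4, M=8)
```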
We assume the correct dynamics are known to the EKF and UKF.
The results are shown in Fig.~\ref{fig:filtering}. In all cases, the difference between EKF and UKF is negligibly small. The dynamics in (a) are only weakly nonlinear, and KBR has slightly worse MSE than EKF and UKF. For dataset (b), which has strong nonlinearity, KBR outperforms the nonlinear Kalman filters for $T\geq 200$.
\begin{figure}
\caption{Comparison of the KBR filter with EKF and UKF. (Average MSEs and standard errors over 30 runs.)
}
\label{fig:filtering}
\end{figure}
\begin{figure}
\caption{Example of data (b) ($X_t$, $N = 300$)}
\label{fig:data}
\end{figure}
In our second synthetic example, we applied the KBR filter to the camera rotation problem used in Song et al. \cite{Song_etal_ICML2009}. The angle of a camera, which is located at a fixed position, is a hidden variable, and movie frames recorded by the camera are observed. The data are generated virtually using a computer graphics environment. As in \cite{Song_etal_ICML2009}, we are given 3600 downsampled frames of $20\times 20$ RGB pixels ($Y_t\in[0,1]^{1200}$), where the first 1800 frames are used for training, and the second half are used to test the filter. We make the data noisy by adding Gaussian noise $N(0,\sigma^2)$ to $Y_t$.
Our experiments cover two settings. In the first, we assume we do not know that the hidden state $S_t$ lies in $SO(3)$, but only that it is a general $3\times 3$ matrix. In this case, we use the Kalman filter by estimating the relations under a linear assumption, and the KBR filter with Gaussian kernels, treating $S_t$ and $Y_t$ as Euclidean vectors. In the second setting, we exploit the fact that $S_t\in SO(3)$: for the Kalman filter, $S_t$ is represented by a quaternion, which is a standard vector representation of rotations; for the KBR filter the kernel $k(A,B)={\rm Tr}[AB^T]$ is used for $S_t$, and $S_t$ is estimated within $SO(3)$. Table \ref{tbl:camera} shows the Frobenius norms between the estimated matrices and the true ones. The KBR filter significantly outperforms the Kalman filter, since KBR is able to extract the complex nonlinear dependence between the observation and the hidden state.
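The trace kernel above is a plain linear kernel on matrices. One standard way (an assumption here, not necessarily the paper's exact step) to keep a matrix estimate inside $SO(3)$ is projection via the SVD:

```python
import numpy as np

def trace_kernel(A, B):
    """Linear kernel on matrices used in the text: k(A, B) = Tr[A B^T]."""
    return float(np.trace(A @ B.T))

def project_to_SO3(M):
    """Nearest rotation to M in Frobenius norm via the SVD; one standard
    way to constrain an estimate to SO(3) (illustrative assumption)."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:   # correct an improper rotation (det = -1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```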
\begin{table}
\centering
\begin{tabular}{c|cc|cc}
\hline
& KBR (Gauss) & KBR (Tr) & Kalman (9 dim.) & Kalman (Quat.) \\
\hline
$\sigma^2=10^{-4}$ & $0.210\pm 0.015$ & $0.146\pm 0.003$ & $1.980\pm0.083$ & $0.557\pm 0.023$ \\
$\sigma^2=10^{-3}$ & $0.222\pm 0.009$ & $0.210\pm 0.008$ & $1.935\pm0.064$ & $0.541\pm 0.022$ \\
\hline
\end{tabular}
\caption{Average Frobenius-norm errors and standard errors of estimating camera angles (10 runs).}\label{tbl:camera} \end{table}
\section{Proofs} \label{sec:proof}
The proof idea for the consistency rates of the KBR estimators is similar to \cite{CaponnettoDeVito2007,SmaleZhou2005}, in which the basic techniques are taken from the general theory of regularization \cite{EnglHankeNeubauer}.
The first preliminary result is a rate of convergence for the mean transition in Theorem \ref{thm:cond_prob_op}. In the following $\mathcal{R}(C_{XX}^0)$ means ${\cH_\cX}$.
\begin{thm}\label{thm:transition_consistency1} Assume that $\pi/p_X \in
\mathcal{R}(C_{XX}^\beta)$ for some $\beta\geq 0$, where $\pi$ and $p_X$ are the p.d.f.~of $\Pi$ and $P_X$, respectively. Let $\widehat{m}_\Pi^{(n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ as $n\to\infty$ for some $0<\alpha \leq 1/2$. Then, with $\varepsilon_n=n^{-\max\{\frac{2}{3}\alpha, \frac{\alpha}{1+\beta}\}}$, we have \[
\bigl\| \widehat{C}^{(n)}_{YX}\bigl( \widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1} \widehat{m}_\Pi^{(n)}
- m_{Q_\mathcal{Y}} \bigr\|_{\cH_\cY} = O_p(n^{-\min\{\frac{2}{3}\alpha, \frac{2\beta+1}{2\beta+2}\alpha\}}),\quad (n\to\infty). \] \end{thm}
\begin{proof} Take $\eta\in{\cH_\cX}$ such that $\pi/p_X =C_{XX}^\beta\eta$. Then, we have \begin{equation}\label{eq:m_pi_integ} m_\Pi = \int k_\mathcal{X}(\cdot,x)\frac{\pi(x)}{p_X(x)}p_X(x)d\nu_\mathcal{X}(x) = C_{XX}^{\beta+1}\eta. \end{equation}
First we show the rate of the estimation error: \begin{equation}\label{eq:Qy_estimation}
\bigl\| \widehat{C}^{(n)}_{YX}\bigl( \widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1} \widehat{m}^{(n)}_\Pi
- C_{YX}\bigl( C_{XX}+\varepsilon_n I\bigr)^{-1} m_\Pi \bigr\|_{\cH_\cY}
=O_p\bigl(n^{-\alpha}\varepsilon_n^{-1/2}\bigr), \end{equation} as $n\to\infty$. By using $B^{-1}-A^{-1} = B^{-1}(A-B)A^{-1}$ for any invertible operators $A$ and $B$, the left hand side of \eq{eq:Qy_estimation} is upper bounded by \begin{multline*}
\bigl\|\widehat{C}^{(n)}_{YX}\bigl( \widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1}
\bigl(\widehat{m}^{(n)}_\Pi - m_\Pi\bigr)\bigr\|_{{\cH_\cY}} +\bigl\| \bigl(\widehat{C}^{(n)}_{YX}-C_{YX}\bigr)
\bigl(C_{XX}+\varepsilon_n I\bigr)^{-1}m_\Pi \bigr\|_{{\cH_\cY}} \\
+ \bigl\| \widehat{C}^{(n)}_{YX}\bigl( \widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1}
\bigl(C_{XX}-\widehat{C}^{(n)}_{XX}\bigr) \bigl(C_{XX}+\varepsilon_n I\bigr)^{-1}m_\Pi \bigr\|_{{\cH_\cY}}. \end{multline*}
By the decomposition $\widehat{C}^{(n)}_{YX}=\widehat{C}_{YY}^{(n)1/2} \widehat{W}_{YX}^{(n)}\widehat{C}_{XX}^{(n)1/2}$ with $\|\widehat{W}_{YX}^{(n)}\|\leq 1$ \cite{Baker73}, we have $\|\widehat{C}^{(n)}_{YX}\bigl( \widehat{C}^{(n)}_{XX}+\varepsilon_n I\bigr)^{-1}\|=O_p(\varepsilon_n^{-1/2})$, which implies that the first term is of $O_p(n^{-\alpha}\varepsilon_n^{-1/2})$. From the $\sqrt{n}$-consistency of the covariance operators and $m_\Pi=C_{XX}^{\beta+1}\eta$, an argument similar to that for the first term shows that the second and third terms are of order $O_p(n^{-1/2})$ and $O_p(n^{-1/2}\varepsilon_n^{-1/2})$, respectively, which proves \eq{eq:Qy_estimation}.
Next, we show the rate for the approximation error \begin{equation}\label{eq:Qy_approx}
\bigl\|C_{YX}\bigl( C_{XX}+\varepsilon_n I\bigr)^{-1} m_\Pi - m_{Q_\mathcal{Y}}\bigr\|_{\cH_\cY} = O(\varepsilon_n^{\min\{(1+2\beta)/2, 1\}})\qquad (n\to\infty). \end{equation}
Let $C_{YX}=C_{YY}^{1/2}W_{YX}C_{XX}^{1/2}$ be the decomposition with $\|W_{YX}\|\leq 1$. It follows from \eq{eq:m_pi_integ} and the relation \[
m_{Q_\mathcal{Y}} = \int\int k(\cdot,y)\frac{\pi(x)}{p_X(x)}p(x,y)
d\nu_\mathcal{X}(x)d\nu_\mathcal{Y}(y) = C_{YX}C_{XX}^\beta \eta \] that the left hand side of \eq{eq:Qy_approx} is upper bounded by \begin{equation*}
\| C_{YY}^{1/2}W_{YX} \|\,\|\bigl( C_{XX} + \varepsilon_n I\bigr)^{-1} C_{XX}^{(2\beta+3)/2}\eta - C_{XX}^{(2\beta+1)/2}\eta\|_{\cH_\cX}. \end{equation*} By the eigendecomposition $C_{XX}=\sum_i \lambda_i \phi_i \langle \phi_i,\cdot\rangle$, where $\{\lambda_i\}$ are the positive eigenvalues and $\{\phi_i\}$ are the corresponding unit eigenvectors, the expansion \begin{align*}
\bigl\|\bigl( C_{XX} + \varepsilon_n I\bigr)^{-1} C_{XX}^{(2\beta+3)/2}\eta - C_{XX}^{(2\beta+1)/2}\eta\bigr\|_{\cH_\cX}^2 = \sum_i \biggl( \frac{\varepsilon_n \lambda_i^{(2\beta+1)/2}}{\lambda_i + \varepsilon_n} \biggr)^2 \langle \eta,\phi_i\rangle^2 \end{align*}
holds. If $0\leq \beta < 1/2$, we have $\frac{ \varepsilon_n\lambda_i^{(2\beta+1)/2} }{ \lambda_i+\varepsilon_n }= \frac{ \lambda_i^{(2\beta+1)/2} }{ (\lambda_i+\varepsilon_n)^{(2\beta+1)/2} } \frac{ \varepsilon_n^{(1 - 2\beta)/2} }{ (\lambda_i+\varepsilon_n)^{(1-2\beta)/2} }\varepsilon_n^{(2\beta+1)/2}\leq \varepsilon_n^{(2\beta+1)/2}$. If $\beta \geq 1/2$, then $\frac{ \varepsilon_n\lambda_i^{(2\beta+1)/2} }{ \lambda_i+\varepsilon_n } \leq \lambda_i^{(2\beta-1)/2}\varepsilon_n \leq \|C_{XX}\|^{(2\beta-1)/2}\varepsilon_n$. The dominated convergence theorem then shows that the above sum converges to zero at the rate $O(\varepsilon_n^{\min\{2\beta+1, 2\}})$ as $\varepsilon_n\to0$.
From Eqs. (\ref{eq:Qy_estimation}) and (\ref{eq:Qy_approx}), the optimal order of $\varepsilon_n$ and the optimal rate of consistency are given as claimed. \end{proof}
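For intuition, the choice of $\varepsilon_n$ is obtained by balancing the estimation error \eq{eq:Qy_estimation} against the approximation error \eq{eq:Qy_approx}; a sketch of the calculation for $0\leq\beta<1/2$ is:

```latex
% Balance estimation error  n^{-\alpha}\varepsilon_n^{-1/2}
% against approximation error  \varepsilon_n^{(2\beta+1)/2}:
n^{-\alpha}\,\varepsilon_n^{-1/2} \asymp \varepsilon_n^{(2\beta+1)/2}
\;\Longleftrightarrow\;
\varepsilon_n^{\beta+1} \asymp n^{-\alpha}
\;\Longleftrightarrow\;
\varepsilon_n \asymp n^{-\frac{\alpha}{1+\beta}},
\qquad\text{giving the rate }
\varepsilon_n^{\frac{2\beta+1}{2}} = n^{-\frac{2\beta+1}{2\beta+2}\alpha}.
% For \beta \ge 1/2 the approximation error is O(\varepsilon_n), so the
% balance gives \varepsilon_n \asymp n^{-2\alpha/3} and the rate n^{-2\alpha/3}.
```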
The following theorem shows the consistency rate of the estimator used in the conditioning step \eq{eq:KBR_population}. \begin{thm}\label{thm:consitency_conditioning1}
Let $f$ be a function in ${\cH_\cX}$, and $(Z,W)$ be a random variable taking values in $\mathcal{X}\times\mathcal{Y}$. Assume that $E[f(Z)|W=\cdot]\in \mathcal{R}(C_{WW}^\nu)$ for some $\nu \geq 0$, and $\widehat{C}^{(n)}_{WZ}:{\cH_\cX}\to{\cH_\cY}$ and $\widehat{C}^{(n)}_{WW}:{\cH_\cY}\to{\cH_\cY}$ be compact operators, which may not be positive definite, such that $\| \widehat{C}^{(n)}_{WZ}-C_{WZ}\|=O_p(n^{-\gamma})$ and $\| \widehat{C}^{(n)}_{WW}-C_{WW}\|=O_p(n^{-\gamma})$ for some $\gamma > 0$. Then, for a positive sequence $\delta_n = n^{-\max\{ \frac{4}{9}\gamma,\frac{4}{2\nu+5}\gamma\}}$, we have as $n\to\infty$ \[
\bigl\| \widehat{C}^{(n)}_{WW}\bigl( (\widehat{C}^{(n)}_{WW})^2 + \delta_n I \bigr)^{-1} \widehat{C}^{(n)}_{WZ} f -E[f(Z)|W=\cdot]\bigr\|_{{\cH_\cY}} = O_p( n^{-\min\{\frac{4}{9}\gamma,\frac{2\nu}{2\nu+5}\gamma\}}). \] \end{thm} \begin{proof}
Let $\eta\in{\cH_\cY}$ be such that $E[f(Z)|W=\cdot]=C_{WW}^\nu \eta$. First we show \begin{multline}\label{eq:estim_order}
\bigl\| \widehat{C}^{(n)}_{WW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} \widehat{C}^{(n)}_{WZ}f
- C_{WW} (C_{WW}^2+\delta_n I)^{-1}C_{WZ}f \bigr\|_{\cH_\cY} \\ =O_p(n^{-\gamma}\delta_n^{-5/4}). \end{multline} The left hand side of \eq{eq:estim_order} is upper bounded by \begin{align*}
& \bigl\| \widehat{C}^{(n)}_{WW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} (\widehat{C}^{(n)}_{WZ}-C_{WZ})f
\bigr\|_{\cH_\cY} \\ & +
\bigl\| (\widehat{C}^{(n)}_{WW}-C_{WW})(C_{WW}^2+\delta_n I)^{-1}C_{WZ}f
\bigr\|_{\cH_\cY} \\
& + \bigl\| \widehat{C}^{(n)}_{WW}( (\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} \bigl( (\widehat{C}^{(n)}_{WW})^2 - C_{WW}^2\bigr) \bigl(C_{WW}^2+\delta_n I\bigr)^{-1} C_{WZ}f
\bigr\|_{\cH_\cY}. \end{align*}
Let $\widehat{C}^{(n)}_{WW}= \sum_i \lambda_i \phi_i\langle \phi_i,\cdot\rangle$ be the eigendecomposition, where $\{\phi_i\}$ are the unit eigenvectors and $\{\lambda_i\}$ are the corresponding eigenvalues. From $\bigl| \lambda_i/(\lambda_i^2 + \delta_n)\bigr| = 1/|\lambda_i + \delta_n/\lambda_i| \leq 1/(2\sqrt{|\lambda_i|}\sqrt{\delta_n/|\lambda_i|}) = 1/(2\sqrt{\delta_n})$, we have $\| \widehat{C}^{(n)}_{WW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1}\| \leq 1/(2\sqrt{\delta_n})$, and thus the first term of the above bound is of $O_p(n^{-\gamma}\delta_n^{-1/2})$. A similar argument using the eigendecomposition of $C_{WW}$, combined with the decomposition $C_{WZ}=C_{WW}^{1/2}U_{WZ}C_{ZZ}^{1/2}$ with $\| U_{WZ} \|\leq 1$, shows that the second term is of $O_p(n^{-\gamma}\delta_n^{-3/4})$. From the fact that $\|(\widehat{C}^{(n)}_{WW})^2-C_{WW}^2\|\leq \| \widehat{C}^{(n)}_{WW}(\widehat{C}^{(n)}_{WW}-C_{WW})\| + \| (\widehat{C}^{(n)}_{WW}-C_{WW})C_{WW}\| = O_p(n^{-\gamma})$, the third term is of $O_p(n^{-\gamma}\delta_n^{-5/4})$. This implies \eq{eq:estim_order}.
From $E[f(Z)|W=\cdot] = C_{WW}^\nu\eta$ and $C_{WZ}f = C_{WW}E[f(Z)|W=\cdot]=C_{WW}^{\nu+1}\eta$, the convergence rate \begin{equation}\label{eq:app_order}
\bigl\| C_{WW} (C_{WW}^2+\delta_n I)^{-1}C_{WZ}f - E[f(Z)|W=\cdot]\bigr\|_{\cH_\cY} = O(\delta_n^{\min\{1, \frac{\nu}{2} \}}). \end{equation} can be proved in the same way as \eq{eq:Qy_approx}.
Combining Eqs.~(\ref{eq:estim_order}) and (\ref{eq:app_order}) proves the assertion. \end{proof}
Recall that $\tilde{C}_{WW}$ is the integral operator on $L^2(Q_\mathcal{Y})$ defined by \eq{eq:cov_op_L2}. The following theorem shows the consistency rate on average. Here $\mathcal{R}(\tilde{C}_{WW}^0)$ means $L^2(Q_\mathcal{Y})$.
\begin{thm}\label{thm:consitency_conditioning2} Let $f$ be a function in ${\cH_\cX}$, and $(Z,W)$ be a random variable taking values in $\mathcal{X}\times\mathcal{Y}$ with distribution
$Q$. Assume that $E[f(Z)|W=\cdot]\in \mathcal{R}(\tilde{C}_{WW}^\nu)\cap{\cH_\cY}$ for some $\nu > 0$, and $\widehat{C}^{(n)}_{WZ}:{\cH_\cX}\to{\cH_\cY}$ and $\widehat{C}^{(n)}_{WW}:{\cH_\cY}\to{\cH_\cY}$ be compact operators, which may not be positive definite, such that $\| \widehat{C}^{(n)}_{WZ}-C_{WZ}\|=O_p(n^{-\gamma})$ and $\| \widehat{C}^{(n)}_{WW}-C_{WW}\|=O_p(n^{-\gamma})$ for some $\gamma > 0$. Then, for a positive sequence $\delta_n = n^{-\max\{ \frac{1}{2}\gamma,\frac{2}{\nu+2}\gamma\}}$, we have as $n\to\infty$ \[
\bigl\| \widehat{C}^{(n)}_{WW}\bigl( (\widehat{C}^{(n)}_{WW})^2 + \delta_n I \bigr)^{-1} \widehat{C}^{(n)}_{WZ} f -E[f(Z)|W=\cdot]\bigr\|_{L^2(Q_\mathcal{Y})} = O_p( n^{-\min\{\frac{1}{2}\gamma,\frac{\nu}{\nu+2}\gamma\}}). \] \end{thm} \begin{proof} Note that for $f,g\in{\cH_\cY}$ we have $(f,g)_{L^2(Q_\mathcal{Y})} = E[f(W)g(W)] = \langle f,C_{WW}g\rangle_{\cH_\cY}$. It follows that the left hand side of the assertion is equal to \[
\bigl\| C_{WW}^{1/2}\widehat{C}^{(n)}_{WW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} \widehat{C}^{(n)}_{WZ}f - C_{WW}^{1/2}E[f(Z)|W=\cdot]\bigr\|_{{\cH_\cY}}. \]
First, by an argument similar to the proof of \eq{eq:estim_order}, one can show that the rate of the estimation error is given by \begin{multline*}
\bigl\| C_{WW}^{1/2}\bigl\{ \widehat{C}^{(n)}_{WW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} \widehat{C}^{(n)}_{WZ}f - C_{WW}(C_{WW}^2+\delta_n I)^{-1}C_{WZ}f \bigr\} \bigr\|_{{\cH_\cY}} \\ =O_p(n^{-\gamma }\delta_n^{-1}). \end{multline*}
It suffices then to prove \begin{equation*}
\bigl\| C_{WW} (C_{WW}^2+\delta_n I)^{-1}C_{WZ}f - E[f(Z)|W=\cdot]\bigr\|_{L^2(Q_\mathcal{Y})} = O(\delta_n^{\min\{1, \frac{\nu}{2} \}}). \end{equation*}
Let $\xi\in L^2(Q_\mathcal{Y})$ be such that $E[f(Z)|W=\cdot]=\tilde{C}_{WW}^\nu \xi$. In a similar way to Theorem \ref{thm:cond_mean}, $\tilde{C}_{WW}E[f(Z)|W] = \tilde{C}_{WZ}f$ holds, where $\tilde{C}_{WZ}$ is the extension of $C_{WZ}$, and thus $C_{WZ}f = \tilde{C}_{WW}^{\nu+1}\xi$. The left hand side of the above equation is equal to \[
\bigl\| \tilde{C}_{WW} (\tilde{C}_{WW}^2+\delta_n I)^{-1}\tilde{C}_{WW}^{\nu+1}\xi - \tilde{C}_{WW}^{\nu}\xi\bigr\|_{L^2(Q_\mathcal{Y})}. \] By the eigendecomposition of $\tilde{C}_{WW}$ in $L^2(Q_\mathcal{Y})$, a similar argument to the proof of \eq{eq:app_order} shows the assertion. \end{proof}
The consistency of KBR follows by combining the above theorems. \begin{thm}\label{thm:consitency_KBR1}
Let $f$ be a function in ${\cH_\cX}$, $(Z,W)$ be a random variable that has the distribution $Q$ with p.d.f.~$p(y|x)\pi(x)$, and $\widehat{m}_\Pi^{(n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ ($n\to\infty$) for some $0<\alpha\leq 1/2$. Assume that $\pi/p_X \in
\mathcal{R}(C_{XX}^{\beta})$ with $\beta\geq 0$, and $E[f(Z)|W=\cdot]\in \mathcal{R}(C_{WW}^\nu)$ for some $\nu\geq0$. For the regularization constants $\varepsilon_n=n^{-\max\{\frac{2}{3}\alpha,\frac{1}{1+\beta}\alpha\}}$ and $\delta_n = n^{-\max\{ \frac{4}{9}\gamma,\frac{4}{2\nu+5}\gamma\}}$, where $\gamma=\min\{ \frac{2}{3}\alpha,\frac{2\beta+1}{2\beta+2}\alpha\}$, we have for any $y\in\mathcal{Y}$ \[
\mathbf{f}^T_X R_{X|Y}\mathbf{k}_Y(y) - E[f(Z)|W=y]
= O_p(n^{-\min\{\frac{4}{9}\gamma,\frac{2\nu}{2\nu+5}\gamma\}}), \quad (n\to\infty), \]
where $\mathbf{f}_X^T R_{X|Y}\mathbf{k}_Y(y)$ is given by \eq{eq:KBRemp}. \end{thm} \begin{proof} By applying Theorem \ref{thm:transition_consistency1} with the pairs $(Y,X)$ and $(Y,Y)$ in place of the output variable $Y$, we see that both $\|\widehat{C}_{WZ}-C_{WZ}\|$ and $\|\widehat{C}_{WW}-C_{WW}\|$ are of $O_p(n^{-\gamma})$. Since \begin{multline*}
\mathbf{f}^T_X R_{X|Y}\mathbf{k}_Y(y) - E[f(Z)|W=y] \\
= \langle k_\mathcal{Y}(\cdot,y), \widehat{C}_{WW}\bigl( (\widehat{C}_{WW})^2 + \delta_n I \bigr)^{-1} \widehat{C}_{WZ} f - E[f(Z)|W=\cdot]\rangle_{\cH_\cY}, \end{multline*} combining Theorems \ref{thm:transition_consistency1} and \ref{thm:consitency_conditioning1} proves the theorem. \end{proof}
The next theorem shows the rate on average w.r.t.~$Q_\mathcal{Y}$. The proof is similar to that of the above theorem and is omitted. \begin{thm}\label{thm:consitency_KBR2}
Let $f$ be a function in ${\cH_\cX}$, $(Z,W)$ be a random variable that has the distribution $Q$ with p.d.f.~$p(y|x)\pi(x)$, and $\widehat{m}_\Pi^{(n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ ($n\to\infty$) for some $0<\alpha \leq 1/2$. Assume that $\pi/p_X \in
\mathcal{R}(C_{XX}^{\beta})$ with $\beta\geq 0$, and $E[f(Z)|W=\cdot]\in \mathcal{R}(\tilde{C}_{WW}^\nu)\cap{\cH_\cY}$ for some $\nu >0$. For the regularization constants $\varepsilon_n=n^{-\max\{\frac{2}{3}\alpha,\frac{1}{1+\beta}\alpha\}}$ and $\delta_n = n^{-\max\{ \frac{1}{2}\gamma,\frac{2}{\nu+2}\gamma\}}$, where $\gamma=\min\{ \frac{2}{3}\alpha,\frac{2\beta+1}{2\beta+2}\alpha\}$, we have \[
\bigl\| \mathbf{f}_X^T R_{X|Y}\mathbf{k}_Y(W) - E[f(Z)|W]\bigr\|_{L^2(Q_\mathcal{Y})}
= O_p(n^{-\min\{\frac{1}{2}\gamma,\frac{\nu}{\nu+2}\gamma\}}), \quad (n\to\infty). \] \end{thm}
We also have consistency of the estimator for the kernel mean of the posterior $m_{Q_{\mathcal{X}|y}}$ under stronger assumptions. First, we formulate the expectation with respect to the posterior in terms of operators. Let $(Z,W)$ be a random variable with distribution $Q$. Assume that for any $f\in{\cH_\cX}$ the conditional expectation $E[f(Z)|W=\cdot]$ is included in ${\cH_\cY}$. We then have a linear operator $S$ defined by \[
S:{\cH_\cX}\to{\cH_\cY}, \qquad f\mapsto E[f(Z)|W=\cdot]. \] If we further assume that $S$ is bounded, the adjoint operator $S^*:{\cH_\cY}\to{\cH_\cX}$ satisfies \[
\langle S^*k_\mathcal{Y}(\cdot,y), f\rangle_{\cH_\cX} = \langle k_\mathcal{Y}(\cdot,y), Sf\rangle_{\cH_\cY} = E[f(Z)|W=y] \] for any $y\in\mathcal{Y}$, and thus $S^*k_\mathcal{Y}(\cdot,y)$ is equal to the kernel mean of the conditional probability of $Z$ given $W=y$.
We make the following further assumptions:\\ {\bf Assumption (S)} \begin{enumerate} \item The covariance operator $C_{WW}$ is injective. \item There exists $\nu> 0$ such that for any $f\in {\cH_\cX}$ there is $\eta_f\in{\cH_\cY}$ with $Sf = C_{WW}^\nu\eta_f$, and the linear map \[
C_{WW}^{-\nu}S: {\cH_\cX}\to{\cH_\cY},\qquad f\mapsto \eta_f \] is bounded. \end{enumerate}
\begin{thm}\label{thm:consitency_KBR3}
Let $(Z,W)$ be a random variable that has the distribution $Q$ with p.d.f.~$p(y|x)\pi(x)$, and $\widehat{m}_\Pi^{(n)}$ be an estimator of $m_\Pi$ such that $\|\widehat{m}_\Pi^{(n)} -
m_\Pi\|_{\cH_\cX} = O_p(n^{-\alpha})$ ($n\to\infty$) for some $0<\alpha \leq 1/2$. Assume (S) above, and $\pi/p_X \in \mathcal{R}(C_{XX}^{\beta})$ with some $\beta\geq 0$. For the regularization constants $\varepsilon_n=n^{-\max\{\frac{2}{3}\alpha,\frac{1}{1+\beta}\alpha\}}$ and $\delta_n = n^{-\max\{ \frac{4}{9}\gamma,\frac{4}{2\nu+5}\gamma\}}$, where $\gamma=\min\{ \frac{2}{3}\alpha,\frac{2\beta+1}{2\beta+2}\alpha\}$, we have for any $y\in\mathcal{Y}$ \[
\bigl\| \mathbf{k}_X^T R_{X|Y}\mathbf{k}_Y(y) - m_{Q_\mathcal{X}|y} \bigr\|_{{\cH_\cX}}
= O_p(n^{-\min\{\frac{4}{9}\gamma,\frac{2\nu}{2\nu+5}\gamma\}}), \]
as $n\to\infty$, where $m_{Q_\mathcal{X}|y}$ is the kernel mean of the posterior given $y$. \end{thm} \begin{proof} First, in a similar manner to the proof of \eq{eq:estim_order}, we have \begin{multline*}
\bigl\| \widehat{C}^{(n)}_{ZW}\bigl((\widehat{C}^{(n)}_{WW})^2+\delta_n I\bigr)^{-1} \widehat{C}^{(n)}_{WW}k_\mathcal{Y}(\cdot,y)
- C_{ZW} (C_{WW}^2+\delta_n I)^{-1}C_{WW}k_\mathcal{Y}(\cdot,y) \bigr\|_{\cH_\cX} \\ =O_p(n^{-\gamma}\delta_n^{-5/4}). \end{multline*} The assertion is thus obtained if \begin{equation}\label{eq:app_order3}
\bigl\| C_{ZW} (C_{WW}^2+\delta_n I)^{-1}C_{WW}k_\mathcal{Y}(\cdot,y) - S^*k_\mathcal{Y}(\cdot,y)\bigr\|_{\cH_\cX} = O(\delta_n^{\min\{1, \frac{\nu}{2} \}}) \end{equation} is proved. The left hand side of \eq{eq:app_order3} is upper-bounded by \begin{multline*}
\bigl\| C_{ZW} (C_{WW}^2+\delta_n I)^{-1}C_{WW}- S^*\|\,\|k_\mathcal{Y}(\cdot,y)\|_{\cH_\cY} \\
= \bigl\| C_{WW}(C_{WW}^2+\delta_n I)^{-1}C_{WZ} -S\bigr\|\,\|k_\mathcal{Y}(\cdot,y)\|_{\cH_\cY}. \end{multline*}
It follows from Theorem \ref{thm:cond_mean} that $C_{WZ}=C_{WW}S$, and thus $\| C_{WW}(C_{WW}^2+\delta_n I)^{-1}C_{WZ} -S\| = \| C_{WW}(C_{WW}^2+\delta_n I)^{-1}C_{WW}S -S\|\leq
\delta_n\|(C_{WW}^2+\delta_n I)^{-1}C_{WW}^{\nu}\|\,\|C_{WW}^{-\nu} S\|$. The eigendecomposition of $C_{WW}$ together with the inequality $\frac{\delta_n \lambda^\nu}{\lambda^2 + \delta_n}\leq \delta_n^{\min\{1,\nu/2\}}$ ($\lambda\geq 0$) completes the proof. \end{proof}
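The final inequality can be checked directly; for $0<\nu<2$ each of the first two factors below is at most one:

```latex
\frac{\delta_n\,\lambda^{\nu}}{\lambda^{2}+\delta_n}
= \Bigl(\frac{\lambda^{2}}{\lambda^{2}+\delta_n}\Bigr)^{\nu/2}
  \Bigl(\frac{\delta_n}{\lambda^{2}+\delta_n}\Bigr)^{1-\nu/2}
  \delta_n^{\nu/2}
\;\le\; \delta_n^{\nu/2},
\qquad 0<\nu<2,\ \lambda\ge 0,
% while for \nu \ge 2,
%   \delta_n \lambda^{\nu} / (\lambda^{2}+\delta_n) \le \lambda^{\nu-2}\delta_n,
% which is O(\delta_n) on the bounded spectrum of C_{WW}.
```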
\if 0 \begin{lma}\label{lma:inv_coef} Let ${\mathcal W}$ be a weighted empirical probability on $\mathcal{X}\times \mathcal{Y}$ given by \[
{\mathcal W} = \sum_{i=1}^n w_i \delta_{(X_i,Y_i)}. \] Then, \[ \widehat{C}_{\mathcal W}(\widehat{C}_{{\mathcal W}_\mathcal{X}} + \varepsilon I)^{-1} k_\mathcal{X}(\cdot,x) = {\bf k}_Y^T \Lambda(G_X\Lambda+\varepsilon I_n)^{-1} {\bf k}_X(x), \] where \[ {\bf k}_X = \begin{pmatrix}k_\mathcal{X}(\cdot,X_1) \\ \vdots \\
k_\mathcal{X}(\cdot,X_n)\end{pmatrix}, \qquad {\bf k}_Y = \begin{pmatrix}k_\mathcal{Y}(\cdot,Y_1) \\ \vdots \\
k_\mathcal{Y}(\cdot,Y_n)\end{pmatrix}, \] and $\Lambda = {\rm diag}(w_1,\ldots,w_n)$. \end{lma} \begin{proof} Let $h =(\widehat{C}_{{\mathcal W}_\mathcal{X}} + \varepsilon I)^{-1} k_\mathcal{X}(\cdot,x)$, and decompose $h$ as $h=\alpha^T {\bf k}_X + h_\perp$, where $h_\perp$ is orthogonal to the subspace spanned by $\{k_\mathcal{X}(\cdot,X_i)\}_{i=1}^n$. Then, \begin{equation}\label{eq:target}
\widehat{C}^{(n)}_{\mathcal W}(\widehat{C}_{{\mathcal W}_\mathcal{X}} + \varepsilon I)^{-1} k(\cdot,x)
= \widehat{C}^{(n)}_{\mathcal W} h = {\bf k}_Y^T \Lambda G_X\alpha. \end{equation} It follows from the definition of $h$ that \begin{align*} k_\mathcal{X}(\cdot,x) & =(\widehat{C}_{{\mathcal W}_\mathcal{X}} + \varepsilon I)\Bigl(\sum_{i=1}^n \alpha_i k_\mathcal{X}(\cdot,X_i) + h_\perp\Bigr) \\ & = {\bf k}_X^T (\Lambda G_X+\varepsilon I_n)\alpha + \varepsilon h_\perp. \end{align*} By taking the inner products with $k_\mathcal{X}(\cdot,X_i)$ ($i=1,\ldots,n$), we have \[ (G_X\Lambda +\varepsilon I_n)G_X\alpha = {\bf k}_X(x), \] which gives \begin{equation}\label{eq:Ga}
G_X\alpha = (G_X\Lambda +\varepsilon I_n)^{-1} {\bf k}_X(x). \end{equation} The assertion follows from Eqs.(\ref{eq:target}) and (\ref{eq:Ga}). \end{proof}
\fi
\end{document} |
\begin{document}
\title{\bf The myth of the down converted photon}
\maketitle

\begin{abstract}
Parametric down conversion (PDC) is widely interpreted in terms of photons, but, even among supporters of this interpretation, many properties of the photon pairs have been described as ``mind-boggling" and even ``absurd". In this article we argue that a classical description of the light field, taking account of its vacuum fluctuations, leads us to a consistent and rational description of all PDC phenomena. ``Nonlocality" in quantum optics is simply an artifact of the Photon Concept. We also predict a new phenomenon, namely the appearance of a second, or satellite, PDC rainbow. (This article will appear in the Proceedings of the Second Vigier Conference held in York University, Canada in August 1997. A somewhat more formal version has been submitted to Phys. Rev. Letters, and may be found at http:\slash\slash xxx.lanl.gov\slash abs\slash quant-ph\slash 9711029.)
\end{abstract}

\section{Introduction}
In an article for the last conference in this series\cite{vig} we gave a description of the Parametric Down Conversion (PDC) process based on the real vacuum electromagnetic-field fluctuations. We indicated that there was a serious unsolved problem, in that detectors must somehow subtract away these fluctuations; such a mechanism must come into play in order to explain the very low dark rates actually observed. We have since published a series of articles\cite{pdc1,pdc2,pdc3,pdc4} in which a great variety of PDC phenomena have been analyzed using this description. Since we have been able to establish a formal parallel, through the Wigner representation, between the new (or rather the old!)
field description and the presently dominant Photon Theory, it is clear that, {\it once the reality of the zeropoint field has been accepted, there are no PDC phenomena which require photons.} Furthermore we have made considerable progress on the subtraction problem\cite{pdc4}; all that is needed to explain the low dark rates of detectors is the recognition of their extremely large time windows (5ns is a very large number of light oscillations).
The approach of the above series of articles was a kind of compromise between the standard nonlocal theory of Quantum Optics, where the interaction of the various field modes is represented by a hamiltonian, and a fully maxwellian theory, which would be both local and causal. In this latter case the nonlinear crystal would be represented as a spatially localized current distribution, modified of course by the incoming electromagnetic field; the outgoing field would then be expressed as the retarded field radiated by this distribution. A preliminary attempt at such a theory was made\cite{magic}, using first-order perturbation theory. However, we showed, in the above series of articles, that a calculation of the relevant counting rates, to lowest order, requires us to find the {\it second}-order perturbation corrections to the Wigner density, and the close formal parallel between these two theories means that the same considerations will apply to the maxwellian theory.
\section{What is PDC?} It is necessary to pose this question, because, depending on the answer given, PDC may be described as either a local or a nonlocal phenomenon.
An example of the modern, nonlocal description is provided by Greenberger, Horne and Zeilinger\cite{ghz}. A nonlinear crystal, pumped by a laser at frequency $\omega_0$, produces conjugate pairs of signals, of frequency $\omega$ and $\omega_0-\omega$ (see Fig.1). Since light is supposed to consist of photons, this means that an incoming laser photon ``down converts" into a pair of lower-energy photons. Naturally, since we know that $E=\hbar\omega$, that means energy is conserved in the PDC process, which must be very comforting. However, the above authors themselves refer to the PDC photon-counting statistics as ``mind-boggling", and a more recent commentary\cite{zeil} even uses the term ``absurd". \begin{figure}
\caption{PDC - the modern version. A laser photon down converts into a conjugate pair of PDC photons with conservation of energy.}
\end{figure}
There is an older description, which I suggest is more correct than the modern one. It had only a short life. Nonlinear optics was born in the late 1950s, with the invention of the laser. Up to about 1965, when Quantum Optics was born, the PDC process would have been depicted\cite{saleh,yariv} by Fig.2; an incoming wave of frequency $\omega$ is down converted, by the pumped crystal, into an \begin{figure}
\caption{PDC - the ancient version. When a wave of frequency $\omega$ is incident, at a certain angle $\theta(\omega)$, on a nonlinear crystal pumped at frequency $\omega_0$, a signal of frequency $\omega_0-\omega$ is emitted in a certain conjugate direction. The modified input wave is called the idler.}
\end{figure} outgoing signal of frequency $\omega_0-\omega$. The explanation of the frequency relationships lies in the multiplication, by the nonlinear crystal, of the two input amplitudes; we have no need of $\hbar$!
This process persists when the intensity of the input is reduced to zero, because all modes of the light field are still present in the vacuum, and the nonlinear crystal modifies vacuum modes in exactly the same way as it modifies input modes supplied by an experimenter. What we see emerging from the crystal is the familiar PDC rainbow. This is because the angle of incidence $\theta$, at which PDC occurs, is different for different frequencies on account of the variation of refractive index with frequency.
We depict the process of PDC from the vacuum in Fig.3, but note that this figure shows only two conjugate modes of the light field; a complete picture would show all frequencies participating in conjugate pairs, with varying angles of incidence. In contrast with Fig.2, where we showed only the one relevant input, we must now take account also of the conjugate input mode of the zeropoint, since the first mode itself has only the zeropoint amplitude.
The zeropoint inputs, denoted by interrupted lines in Fig.3, do not activate photodetectors, because the threshold of these devices is set precisely at the level of the zeropoint intensity, as discussed in Ref.\cite{pdc4}. \begin{figure}
\caption{PDC from the vacuum. Both of the outgoing signals are above zeropoint intensity, and hence give photomultiplier counts.}
\end{figure} However, the two idlers have intensities above those of their corresponding inputs. Also there is no coherence between a signal and an idler of the same frequency, so their intensities are additive in both channels. Hence there are photoelectron counts in both of the outgoing channels of Fig.3.
The question we have posed in this section could be rephrased as ``What is it that is down converted?". According to the thinking behind Fig.1, the laser photons are down converted, whereas according to Fig.3 it is the zeropoint modes; they undergo both down conversion, to give signals, and amplification, to give idlers.
\section{Photon production rates in PDC} There is a small, but important difference between the maxwellian theory and the theory outlined in our Wigner series\cite{pdc1,pdc2,pdc3,pdc4}, though both of them could be said to be based on Fig.3. The Wigner series gave us the undulatory version of quantum optics, but its starting point is a hamiltonian which takes the creation of photon pairs as axiomatic. The maxwellian theory, whose details are given elsewhere\cite{puc1}, starts from a nonlinear expression for the induced current and deduces a coupling between the field modes. This coupling is very similar, but not identical, to that deduced from the Wigner-based theory. As we have emphasized, there are no photons in the maxwellian theory, but if we translate the intensities of the outgoing signals in Fig.3 into photon terms, we obtain the result \begin{equation} \frac{n_i(\omega)+n_s(\omega)}{n_i(\omega_0-\omega)+n_s(\omega_0-\omega)}= \frac{\cos[\theta(\omega_0-\omega)]} {\cos[\theta(\omega)]}\;. \label{pdcint} \end{equation}
So we conclude that {\it the photon rate in a given channel is inversely proportional to the cosine of the rainbow angle}. In the Photon Theory, the above ratio is one.
There seems little chance of finding out directly which of these theories is correct; the difference between the two ratios is small, since the rainbow angles are typically around 10 degrees, and it is not possible to measure at all accurately the efficiency of light detectors as a function of frequency. It is true that some of the experiments we have analysed, using the standard theory, in Refs.\cite{pdc1,pdc2,pdc3,pdc4}, have slightly different results in the present theory, for example the fringe visibility in the experiment of Zou, Wang and Mandel\cite{zwm}. Some details will be published shortly, but we can say that an experimental discrimination will be very difficult.
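The size of the predicted deviation from the Photon Theory ratio of one can be illustrated numerically; the specific angles below are assumed values of the stated order, not measured ones:

```python
import numpy as np

# Hypothetical rainbow angles of the order of 10 degrees (assumptions).
theta_small = np.deg2rad(9.0)    # theta(omega)
theta_large = np.deg2rad(11.0)   # theta(omega_0 - omega)

# Channel intensity ratio predicted by the field theory;
# the Photon Theory gives exactly 1.
ratio = np.cos(theta_large) / np.cos(theta_small)
deviation_percent = 100.0 * abs(ratio - 1.0)
```

For angles near 10 degrees the deviation from unity is well under one percent, which is why an experimental discrimination between the two theories is so difficult.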
\section{Parametric up conversion from the vacuum}
There is, however, at least one prediction of the new theory which differs dramatically from the standard theory. An incident wave of frequency $\omega$, as well as being down converted by the pump to give a PDC signal of frequency $\omega_0-\omega$, may also be {\it up converted to give a PUC signal} of frequency $\omega_0+\omega$. We depict this phenomenon, which is well known\cite{saleh,yariv} in classical nonlinear optics, in Fig.4. \begin{figure}
\caption{PUC. In contrast with PDC the output signal has its transverse component in the same direction as that of the idler.}
\end{figure} Note that the angle of incidence, $\theta_u(\omega)$, at which PUC occurs is quite different from the PDC angle, which in Fig.2 was denoted simply $\theta(\omega)$, but which we should now call $\theta_d(\omega)$.
Now, following the same argument which led us from Fig.2 to Fig.3, we predict the phenomenon of PUC from the Vacuum, which we depict in Fig.5.
When we come to calculate the intensity of the PUC rainbow, there is an important difference from the PDC situation, because we find that the idler intensities are now less than the input zeropoint intensities. The signal intensities in both channels almost, but not quite, cancel this shortfall, so that the PUC intensities are only about 3 percent of the PDC intensities, which may explain why nobody has yet observed them. Also, note that there is a detectable signal only in the lower-frequency channel, because the relation corresponding to eq.(\ref{pdcint}) is \begin{equation} \frac{n_i(\omega)+n_s(\omega)}{n_i(\omega_0+\omega)+n_s(\omega_0+\omega)}= -\frac{\cos[\theta_u(\omega_0+\omega)]} {\cos[\theta_u(\omega)]}\;, \end{equation} which means that in one of the channels (actually the upper-frequency one), the total output intensity is less than the zeropoint, so nothing will be detected in this channel. My prediction therefore is that, as well as the main PDC rainbow $\theta_d(\omega)$, {\it there is also a satellite rainbow}, whose intensity is about 3 percent of the main one, at $\theta_u(\omega)$. An approximate calculation\cite{puc1} shows that $\theta_u(\omega)$ is about 2.5 times $\theta_d(\omega)$. \begin{figure}
\caption{PUC from the vacuum. Only one of the outgoing signals is above the zeropoint intensity. The other one, depicted by an interrupted line, is below zeropoint intensity.}
\end{figure}
\section{Conclusion} Our contribution to the previous conference in this series\cite{vig} was entitled ``The myth of the photon''. The present article repeats this theme, but covers a narrower range of phenomena. This is because the local theory of nonlinear crystals is now very much more complete than the corresponding theory for atoms. In retrospect, the word ``obsolete'', which we used in the previous article for {\it all} photon theories, was excessively triumphalist. Of course one could argue that they became obsolete once their nonlocal nature was revealed, that is, a quarter of a century ago, but there was nothing local on offer at that time. The claim we made, maybe prematurely, was based on having demonstrated, by the use of certain model theories with very limited fields of application, that local theories were, in all cases, {\it possible}. Now we have passed into a new phase of the programme; we now have, for a very wide and growing area of investigation, {\it a well defined alternative theory which makes certain new predictions}. If and when such predictions are verified, I think that down-converted photons, for example those depicted in our Fig.1, will be very definitely obsolete.
\noindent {\bf Acknowledgement}
\noindent I have had a lot of help with the ideas behind this article, and also in developing the argument, from Emilio Santos.
\end{document}
\begin{document}
\begin{center} {\bf\Large Statistical Analysis of Fixed Mini-Batch Gradient \\ Descent Estimator}
Haobo Qi$^{1}$, Feifei Wang$^{2,3}$\footnote{The corresponding author. Email: [email protected]}, and Hansheng Wang$^1$
{\it\small
$^1$ Guanghua School of Management, Peking University, Beijing, China;\\ $^2$ Center for Applied Statistics, Renmin University of China, Beijing, China;\\ $^3$ School of Statistics, Renmin University of China, Beijing, China.
}
\end{center}
\begin{singlespace} \begin{abstract} We study here a fixed mini-batch gradient descent (FMGD) algorithm to solve optimization problems with massive datasets. In FMGD, the whole sample is split into multiple non-overlapping partitions. Once the partitions are formed, they are fixed throughout the rest of the algorithm. For convenience, we refer to the fixed partitions as fixed mini-batches. Then in each iteration, the gradients are sequentially calculated on the fixed mini-batches. Because the size of each fixed mini-batch is typically much smaller than the whole sample size, the gradient on a mini-batch can be computed cheaply. This greatly reduces the computation cost of each iteration, which makes FMGD computationally efficient and practically feasible. To demonstrate the theoretical properties of FMGD, we start with a linear regression model with a constant learning rate. We study its numerical convergence and statistical efficiency properties. We find that a sufficiently small learning rate is required for both numerical convergence and statistical efficiency. Nevertheless, an extremely small learning rate might lead to painfully slow numerical convergence. To solve this problem, a diminishing learning rate scheduling strategy \citep{2019Understanding} can be used. This leads to an FMGD estimator with faster numerical convergence and better statistical efficiency. Finally, FMGD algorithms with random shuffling and a general loss function are also studied. \end{abstract}
\noindent {\bf KEYWORDS}: Fixed Mini-Batch; Gradient Descent; Learning Rate Scheduling; Random Shuffling; Stochastic Gradient Descent
\end{singlespace}
\csection{INTRODUCTION}
Modern statistical analyses often encounter challenging optimization problems with massive datasets, ultrahigh dimensional features, and extremely complicated objective functions \citep{2015ImageNet,He2016Deep,2017Building,NEURIPS2021_019f8b94}. In these cases, the classical Newton-Raphson method can hardly be applied, because the second-order derivatives of the objective function (i.e., Hessian matrices) are extremely complicated. To solve the problem, various first-order optimization algorithms have been proposed and widely adopted in practice. Among them, the gradient descent (GD) type of algorithms are arguably the simplest and most widely used methods \citep{Cauchy1847}.
To illustrate the idea of the GD algorithm, let $(X_i, Y_i)$ be the observation collected from the $i$-th subject, where $Y_i\in\mathbb{R}^1$ is the response of interest and $X_i\in\mathbb{R}^p$ is the associated $p$-dimensional predictor for $1\leq i \leq N$. To model the relationship between $X_i$ and $Y_i$, a parametric model with the parameter $\theta$ is defined. To estimate $\theta$, define a global loss function $\mathcal L_{N}(\theta) =N^{-1}\sum^N_{i=1}\ell(X_i,Y_i; \theta)$, where $\ell(X_i,Y_i;\theta)$ is a loss function evaluated at the $i$-th sample. The GD algorithm would start with an initial estimate $\widehat{\theta}^{(0)}$ and then iteratively update according to the following formula, \begin{equation} \wh\theta_{\operatorname{gd}}^{(t+1)}=\wh\theta_{\operatorname{gd}}^{(t)}-\alpha\nabla\mathcal L_N\big(\wh\theta_{\operatorname{gd}}^{(t)}\big),\nonumber \end{equation} where $\wh\theta_{\operatorname{gd}}^{(t)}$ is the estimator obtained in the $t$-th iteration, $\alpha$ is the learning rate, and $\nabla\mathcal L_N(\theta)$ stands for the first-order derivative of $\mathcal L_{N}(\theta)$ with respect to $\theta$.
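The iteration above can be sketched in a few lines for a squared-error loss, where the gradient has the closed form $N^{-1}\sum_i X_i(X_i^\top\theta - Y_i)$. The simulation settings below are ours, chosen purely for illustration:

```python
import numpy as np

def gradient_descent(grad, theta0, alpha, n_iter):
    """Plain GD: theta <- theta - alpha * grad(theta)."""
    theta = theta0.copy()
    for _ in range(n_iter):
        theta = theta - alpha * grad(theta)
    return theta

# Toy least-squares data (hypothetical sizes and coefficients).
rng = np.random.default_rng(0)
N, p = 1000, 3
X = rng.normal(size=(N, p))
theta_true = np.array([1.0, -2.0, 0.5])
Y = X @ theta_true + rng.normal(size=N)

# Gradient of L_N(theta) with l(X,Y;theta) = (Y - X'theta)^2 / 2.
grad = lambda th: X.T @ (X @ th - Y) / N

theta_gd = gradient_descent(grad, np.zeros(p), alpha=0.1, n_iter=500)
theta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.max(np.abs(theta_gd - theta_ols)))  # numerically negligible
```

With a sufficiently small learning rate, the iterates converge to the minimizer of the empirical loss, here the OLS solution.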
For an optimization problem with $N$ samples and $p$ features, the computational complexity of GD algorithms for one single iteration is about $O(Np)$. This remains very expensive if both $N$ and $p$ are very large. Consider the classical deep learning model \emph{ResNet} \citep{He2016Deep} for example. A standard \emph{ResNet50} model contains more than 25 million parameters and has an extremely complicated nonlinear model structure. It achieves an excellent Top-1 accuracy (i.e., $83\%$) on the ImageNet 2012 classification dataset, which contains a total of 1,431,167 photos belonging to 1000 classes \citep{2015ImageNet}. Obviously, to estimate the \emph{ResNet} model with the ImageNet dataset, the GD algorithm remains computationally very expensive. Then, how to further reduce the computation cost for massive datasets with ultrahigh dimensional features is a problem worthy of careful consideration.
One way to address this issue is to further reduce the sample size used in each iteration from $N$ to a much smaller number $n$. Then, the computation cost per iteration can be reduced from $O(Np)$ to $O(np)$. This leads to the idea of the fixed mini-batch gradient descent (FMGD) algorithm to be studied in this work. Specifically, assume the whole sample of size $N$ can be split into multiple non-overlapping partitions, each containing $n$ samples. Once those partitions are formed, they are fixed throughout the rest of the algorithm. For convenience, we refer to each fixed partition as a fixed mini-batch and denote its size by $n$. Subsequently, in each iteration, the current estimate is updated using gradients calculated on the $n$ samples in one fixed mini-batch. In theory, we might still need $n$ to diverge to infinity as $N$ goes to infinity. However, its diverging rate can be much slower than that of $N$. Consequently, the batch size $n$ can be much smaller than $N$, and the computation cost of one single iteration can be largely reduced.
It is remarkable that the FMGD algorithm studied in this work is closely related to various stochastic mini-batch gradient descent (SMGD) algorithms, which have been extensively studied in the literature. Due to their outstanding computational performance, the SMGD algorithms (e.g., momentum, Adagrad, RMSprop, Adam) are also popularly applied in practice for sophisticated model learning \citep{2011Adagrad,2012RMSprop,2014Adam}. The FMGD algorithm proposed in this work is similar to the SMGD algorithms in the sense that they both compute gradients on mini-batches. However, FMGD and SMGD are critically different in how the mini-batches are generated. Specifically, the mini-batches used by FMGD are fixed once they are formed, whereas those of SMGD are randomly generated. In this regard, the SMGD methods can be further classified into two categories. The first category assumes that independent mini-batches can be directly sampled from the population distribution without limitation. This setting is very suitable for streaming data analysis \citep{Mou2020OnLS,An2020Yu,Chen2022StationaryBO}. The second category of SMGD algorithms assumes that the mini-batches are randomly sampled from the whole dataset by (for example) the method of simple random sampling with replacement \citep{OptinML}. By doing so, the whole sample is treated as if it were the population. Accordingly, the mini-batches become independent and identically distributed conditional on the whole sample. This makes the resulting theoretical analysis more convenient.
Different from various SMGD methods, the key feature of the FMGD algorithm is that its mini-batches are fixed and then repeatedly used once they are formed. As a consequence, the sequentially updated estimators and the associated mini-batch statistics are not independent of each other, even conditional on the whole sample. This makes the theoretical treatment of FMGD very different from that of SMGD in the literature. Accordingly, new techniques have to be developed. In this regard, we develop here a linear dynamic system approach, whose dynamic properties determine the numerical convergence rate of the FMGD algorithm. Meanwhile, the stable solution of the linear dynamic system determines the FMGD estimator and thus its asymptotic properties. It seems to us that techniques of this type have not previously appeared in the literature \citep{Bottou2018Optimization,Bridging2020,First-order2020,Mou2020OnLS,An2020Yu,Chen2022StationaryBO}. With the help of these novel techniques, the numerical convergence properties of the FMGD algorithm can be well separated from the statistical convergence properties of the FMGD estimator. The former is driven by the number of numerical iterations, while the latter is mainly determined by the whole sample size.
Specifically, we start with a linear regression model and formulate the FMGD algorithm as a linear dynamic system. Then we investigate the conditions under which the iteratively updated FMGD estimators converge to a stable solution. We refer to this stable solution as the FMGD estimator. We then study the asymptotic properties of the resulting FMGD estimator under a constant learning rate. We find that a standard FMGD estimator with a fixed learning rate might be statistically inefficient. To achieve both statistical efficiency and numerical convergence, a classical scheduling strategy with diminishing learning rate \citep{Numerical,2019Understanding} can be used. This leads to excellent finite sample performance for FMGD. Finally, we make two important extensions of FMGD. First, an FMGD algorithm with random shuffling is studied. Second, a more general loss function is investigated. Extensive simulation studies have been conducted to demonstrate the finite sample performance of the FMGD estimator. A number of deep learning related numerical examples are also presented.
To summarize, we aim to provide the following contributions to the existing literature. First, we develop here the FMGD algorithm with outstanding numerical convergence rate and statistical efficiency. Second, we develop a novel linear dynamic system framework to study the theoretical properties of the FMGD method. Numerically, we show that the FMGD algorithm converges much faster than its SMGD counterparts. Statistically, the resulting FMGD estimator enjoys the same asymptotic efficiency as the global estimator. The rest of this article is organized as follows. Section 2 introduces the FMGD algorithm under the linear regression setup. Section 3 discusses the theoretical properties of the FMGD estimator. Section 4 presents extensive numerical experiments to demonstrate the finite sample performance of the FMGD estimator. Section 5 concludes the paper with a brief discussion.
\csection{FIXED MINI-BATCH GRADIENT DESCENT}
\csubsection{A Linear Regression Setup}
Let $\mathbb S=\{1,2,\cdots,N\}$ be the index set of the whole sample. For each sample $i$, we collect a response variable of interest $Y_i\in\mathbb{R}^1$ and an associated $p$-dimensional predictor $X_i=(X_{i1},X_{i2},\cdots,X_{ip})^\top\in\mathbb{R}^p$. We assume $N$ goes to infinity and $p$ is fixed. Different samples are assumed to be independently and identically distributed. Define the loss function evaluated at sample $i$ as $\ell(X_i,Y_i;\theta)$, where $\theta\in\mathbb{R}^{q}$ denotes the parameter. Then the global loss function can be constructed as $\mathcal L_N(\theta)=N^{-1}\sum^N_{i=1} \ell(X_i,Y_i;\theta)$. The empirical risk minimizer $\wh{\theta} = \operatorname{\mbox{argmin}}\mathcal L_N(\theta)$ is a natural estimator of $\theta$.
We start with a linear regression model. Subsequently, the theoretical results obtained here can be extended to more general loss functions. Specifically, we start by assuming $Y_i=X_i^\top\theta+\varepsilon_i$, where $\theta = (\theta_1,\theta_2,\cdots,\theta_p)^\top\in\mathbb{R}^p$ becomes the regression coefficient vector and $\varepsilon_i$s ($1\leq i \leq N$) are mutually independent noises with mean 0 and variance $\sigma_\varepsilon^2$. Furthermore, we assume that all entries of $X_i$ are sub-gaussian in the sense that there exists a constant $K$ such that $P(|X_{ij}|>u)\leq \exp(1-u^2/K^2)$ for $1\leq i\leq N$ and $1\leq j\leq p$. Define the response vector as $\mathbb Y=(Y_1,Y_2,\cdots,Y_N)^\top\in\mathbb{R}^N$, the design matrix as $\mathbb X=(X_1,X_2,\cdots,X_N)^\top\in\mathbb{R}^{N\times p}$, and the noise vector as $\boldsymbol{\mathcal E}=(\varepsilon_1,\varepsilon_2,\cdots,\varepsilon_N)^\top\in\mathbb{R}^N$. Then, the model can be re-written into a matrix form as \begin{equation} \label{eq:ols} \mathbb Y=\mathbb X\theta+\boldsymbol{\mathcal E}. \end{equation} Assume the loss function for the $i$-th sample is $\ell(X_i,Y_i;\theta) = (Y_i-X_i^\top\theta)^2/2$. Then, the global loss function becomes the least squares loss function and the corresponding ordinary least squares (OLS) estimator can be obtained as $\wh\theta_{\operatorname{ols}}=\mbox{argmin}_\theta \mathcal L_N(\theta)$ $= \widehat\Sigma_{xx}^{-1}\widehat\Sigma_{xy}$, where $\widehat\Sigma_{xx}=N^{-1}\sum X_iX_i^\top$ and $\widehat\Sigma_{xy}=N^{-1}\sum X_iY_i$. Standard asymptotic theory reveals that $\wh\theta_{\operatorname{ols}}$ is $\sqrt{N}$-consistent and asymptotically normal. More specifically, we should have $\sqrt{N}(\wh\theta_{\operatorname{ols}}-\theta)\rightarrow_d N(0,\sigma_\varepsilon^2\Sigma_{xx}^{-1})$, where $\Sigma_{xx}=\mbox{cov}(X_i)$.
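A quick simulated check (our own toy setting, not from the paper) confirms that the estimator $\widehat\Sigma_{xx}^{-1}\widehat\Sigma_{xy}$ coincides with a generic least-squares solver and lands close to the true coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 5000, 4                       # hypothetical sample size and dimension
theta = rng.normal(size=p)           # true regression coefficients
X = rng.normal(size=(N, p))
eps = rng.normal(scale=0.5, size=N)
Y = X @ theta + eps                  # model (eq:ols): Y = X theta + E

Sigma_xx_hat = X.T @ X / N           # N^{-1} sum X_i X_i'
Sigma_xy_hat = X.T @ Y / N           # N^{-1} sum X_i Y_i
theta_ols = np.linalg.solve(Sigma_xx_hat, Sigma_xy_hat)

# Agrees with a generic least-squares solver up to rounding error.
theta_lstsq = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.max(np.abs(theta_ols - theta_lstsq)))
```

The estimation error of $\wh\theta_{\operatorname{ols}}$ is of order $N^{-1/2}$, in line with the asymptotic normality stated above.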
\csubsection{A Standard Gradient Descent Algorithm} \label{GD}
Next, we consider how to estimate $\wh\theta_{\operatorname{ols}}$ by the method of gradient descent (GD). A standard GD algorithm is an iterative algorithm. It starts with an initial value $\widehat{\theta}^{(0)}$, which is often randomly generated or simply set to be zero. Then, the GD algorithm would iteratively update the current estimator to the next one, until the algorithm converges numerically. Let $\wh\theta_{\operatorname{gd}}^{(t)}$ be the estimator obtained in the $t$-th iteration. Then, a GD algorithm should start with an initial estimator $\widehat{\theta}^{(0)}$ and then update $\wh\theta_{\operatorname{gd}}^{(t)}$ in the $(t+1)$-th iteration as \begin{equation} \label{eq:gd} \wh\theta_{\operatorname{gd}}^{(t+1)}=\wh\theta_{\operatorname{gd}}^{(t)}-\alpha\nabla\mathcal L_N\big(\wh\theta_{\operatorname{gd}}^{(t)}\big)=\wh{\Delta}_\alpha\wh\theta_{\operatorname{gd}}^{(t)}+\alpha\widehat\Sigma_{xy},
\end{equation} where $\nabla\mathcal L_N(\theta)=N^{-1}\sum^N_{i=1} \nabla \ell(X_i,Y_i; \theta)$ and $\nabla\ell(x,y; \theta)$ stands for the first-order derivative of $\ell(x,y; \theta)$ with respect to parameter $\theta$. Here $\alpha>0$ is the learning rate and $\wh{\Delta}_\alpha=I-\alpha\widehat\Sigma_{xx} \in\mathbb{R}^{p\times p}$ is the contraction matrix of the GD algorithm. We refer to $\wh{\Delta}_\alpha$ as a contraction operator. According to standard linear system theory, the algorithm (\ref{eq:gd}) converges numerically if and only if the spectral radius of $\wh{\Delta}_\alpha$ is smaller than 1. Specifically, let $\lambda_j(A)$ be the $j$-th largest eigenvalue of an arbitrary matrix $A\in\mathbb{R}^{p\times p}$. For convenience, we also write $\lambda_1(A)=\lambda_{\max}$ and $\lambda_p(A)=\lambda_{\min}$. The spectral radius of $A$ can be defined as $\rho(A) = \max_{1\leq j\leq p}|\lambda_j(A)|$. To study the property of $\rho(\wh{\Delta}_\alpha)$, define the population counterpart of $\wh{\Delta}_\alpha$ as $\Delta_\alpha = I - \alpha\Sigma_{xx}$. One can verify that $\rho(\Delta_\alpha) = \max \big\{|1-\lambda_1(\Sigma_{xx})\alpha|,|1-\lambda_p(\Sigma_{xx})\alpha|\big\}$.
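The role of the spectral radius can be made concrete with a toy diagonal covariance matrix (hypothetical eigenvalues chosen by us): the eigenvalues of $\Sigma_{xx}$ sit on the diagonal, so $\rho(\Delta_\alpha)=\max_j|1-\alpha\lambda_j|$ is immediate, and the boundary $\alpha = 2/\lambda_1(\Sigma_{xx})$ separates the convergent and non-convergent regimes:

```python
import numpy as np

# Hypothetical eigenvalues of Sigma_xx (a toy diagonal covariance).
eigvals = np.array([2.0, 1.0, 0.25])
lam_1 = eigvals.max()

def spectral_radius(alpha):
    # rho(Delta_alpha) for Delta_alpha = I - alpha * Sigma_xx:
    # the maximum of |1 - alpha * lambda_j| over the eigenvalues.
    return np.max(np.abs(1.0 - alpha * eigvals))

for alpha in [0.1, 0.5, 2.0 / lam_1, 1.2]:
    rho = spectral_radius(alpha)
    status = "converges" if rho < 1 else "fails to converge"
    print(f"alpha = {alpha:.3f}: rho = {rho:.3f} -> {status}")
```

Note that, since $|1-\alpha\lambda|$ is convex in $\lambda$, the maximum is always attained at $\lambda_1$ or $\lambda_p$, which recovers the formula for $\rho(\Delta_\alpha)$ given above.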
Ample empirical experience suggests that, if $\alpha$ is set to be too large, the GD estimator might diverge and thus cannot converge numerically to any finite limit. In contrast, if $\alpha$ is set to be too small, then the GD algorithm might converge at a painfully slow speed. This interesting phenomenon can be theoretically explained by the following proposition.
\begin{prop}
\label{prop1}
Let $(\wh{\Delta}_\alpha)^t$ be the $t$-th power of $\wh{\Delta}_\alpha$. Let $\|\cdot\|$ denote the $L_2$-norm for vectors and the induced $L_2$-norm for matrices. We then have: (1) $\wh\theta_{\operatorname{gd}}^{(t)}=\big\{I-(\wh{\Delta}_\alpha)^t\big\}\wh\theta_{\operatorname{ols}}
+(\wh{\Delta}_\alpha)^t\widehat{\theta}^{(0)}$; and (2) for any $0< \alpha< 2/\lambda_1(\Sigma_{xx})$ and an arbitrary but sufficiently small $\eta>0$, there exist a positive constant $C$ and a constant $\rho_{\alpha}$ with $\rho(\Delta_\alpha)-\eta<\rho_{\alpha}<1$ such that $P\Big(\|\wh\theta_{\operatorname{gd}}^{(t)}-\wh\theta_{\operatorname{ols}}\|\leq \rho_{\alpha}^t\|\widehat{\theta}^{(0)}-\wh\theta_{\operatorname{ols}}\|\Big)\geq 1- 2\exp\Big(-CN\eta^2/\alpha^2\Big)$.
\end{prop}
\noindent More general conclusions in this regard have been obtained in the literature. See for examples, \cite{2009Robust}, \cite{2012sgd}, \cite{2016SGD}, \cite{2017AoS}, and \cite{2019Understanding}. For the sake of theoretical completeness, we provide the detailed proof of Proposition \ref{prop1} in Appendix A.1. By Proposition \ref{prop1}, we find that the GD estimator $\wh\theta_{\operatorname{gd}}^{(t)}$ enjoys an analytically explicit solution. It is a convex combination of the OLS estimator and the initial value. When the learning rate satisfies $0< \alpha< 2/\lambda_1(\Sigma_{xx})$, we have $\rho(\Delta_\alpha)<1$. Consequently, for an arbitrary but sufficiently small $\eta>0$, there exists a constant $\rho_{\alpha}$ with $\rho(\Delta_\alpha)-\eta<\rho_{\alpha}<1$ such that $\|\wh\theta_{\operatorname{gd}}^{(t)}-\wh\theta_{\operatorname{ols}}\|\leq \rho_{\alpha}^t\|\widehat{\theta}^{(0)}-\wh\theta_{\operatorname{ols}}\|$ holds with probability no smaller than $1-2\exp(-CN\eta^2/\alpha^2)$. Note that this convergence rate is mainly due to the fact $\|\wh\theta_{\operatorname{gd}}^{(t)}-\wh\theta_{\operatorname{ols}}\|\leq \rho_{\alpha}^t\|\widehat{\theta}^{(0)}-\wh\theta_{\operatorname{ols}}\|$. In other words, the numerical error of the $t$-th step estimator $\wh\theta_{\operatorname{gd}}^{(t)}$ is linearly bounded by that of the previous step with large probability. Following \cite{1963Polyak}, \cite{Introductory2004}, \cite{2016Linear}, and \cite{2020Linear}, we refer to this interesting property as a linear convergence property.
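Part (1) of Proposition \ref{prop1} is an exact algebraic identity, so it can be verified to machine precision on simulated data (the data-generating choices below are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 500, 3
X = rng.normal(size=(N, p))
Y = X @ np.array([1.0, 0.0, -1.0]) + rng.normal(size=N)

Sigma_xx = X.T @ X / N
Sigma_xy = X.T @ Y / N
theta_ols = np.linalg.solve(Sigma_xx, Sigma_xy)

alpha, t = 0.2, 50
Delta = np.eye(p) - alpha * Sigma_xx        # contraction operator

# Run the GD recursion theta <- Delta theta + alpha Sigma_xy for t steps...
theta = np.zeros(p)                         # initial value theta^(0) = 0
for _ in range(t):
    theta = Delta @ theta + alpha * Sigma_xy

# ...and compare with the closed form {I - Delta^t} theta_ols + Delta^t theta^(0).
Delta_t = np.linalg.matrix_power(Delta, t)
closed_form = (np.eye(p) - Delta_t) @ theta_ols + Delta_t @ np.zeros(p)
print(np.max(np.abs(theta - closed_form)))  # agreement up to rounding
```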
\csubsection{The Fixed Mini-Batch Gradient Descent Algorithm} \label{minibatchGD}
To implement the FMGD algorithm, the whole sample should be randomly divided into a total of $M$ non-overlapping mini-batches in each epoch, where $M$ might be pre-specified by the user. Denote $\{\mathbb S^{(t,m)}\}^M_{m=1}$ as the mini-batch index sets in the $t$-th epoch. We should have $\mathbb S=\bigcup_m\mathbb S^{(t,m)}$ and $\mathbb S^{(t,m_1)}\bigcap \mathbb S^{(t,m_2)} = \emptyset$ for any $t\geq 1$ and $m_1\neq m_2$. For convenience, assume $N$ and $M$ are chosen so that $n=N/M$ is an integer. We then assume all mini-batches have the same sample size, i.e., $|\mathbb S^{(t,m)}|=n$ for $t\geq 1$ and $1 \leq m \leq M$. For the FMGD algorithm, once the mini-batch $\mathbb S^{(t,m)}$ is formed, it should be fixed throughout the rest of the algorithm. Consequently, we should have $\mathbb S^{(t,m)}=\mathbb S^{(m)}$ for some $\mathbb S^{(m)} \subset \mathbb S$ and for any $t\geq 1$. Obviously, we should have $\bigcup_m\mathbb S^{(m)}=\mathbb S$ and $\mathbb S^{(m_1)}\bigcap \mathbb S^{(m_2)} = \emptyset$ for any $m_1\neq m_2$.
Recall $\wh{\theta}^{(0)}$ is the initial estimator. Practically, $\wh{\theta}^{(0)}$ is often randomly generated or simply set to be zero. Let $\wh{\theta}^{(t,m)}$ be the FMGD estimator obtained in the $t$-th epoch on the $m$-th mini-batch. We then have the following updating formula for the FMGD algorithm as \begin{equation} \begin{split} \label{eq:3} \wh{\theta}^{(t,1)}&=\wh{\theta}^{(t-1,M)}-\alpha\nabla\mathcal L_n^{(1)}\Big(\wh{\theta}^{(t-1,M)}\Big),\\ \wh{\theta}^{(t,m)}&=\wh{\theta}^{(t,m-1)}-\alpha\nabla\mathcal L_n^{(m)}\Big(\wh{\theta}^{(t,m-1)}\Big) \mbox{ for } 2\leq m \leq M, \end{split} \end{equation} where $\nabla\mathcal L_n^{(m)}(\theta)=n^{-1}\sum_{i\in\mathbb S^{(m)}}\nabla\ell(X_i,Y_i;\theta)$ is the gradient computed in the $t$-th epoch on the $m$-th mini-batch, and $\wh{\theta}^{(0,M)}=\wh{\theta}^{(0)}$. Under the linear regression setting, one can easily verify that $\nabla\mathcal L_n^{(m)}(\theta)=\widehat\Sigma_{xx}^{(m)}\theta-\widehat\Sigma_{xy}^{(m)}$ with $\widehat\Sigma_{xx}^{(m)}=n^{-1}\sum_{i\in\mathbb S^{(m)}} X_iX_i^\top$ and $\widehat\Sigma_{xy}^{(m)}=n^{-1}\sum_{i\in\mathbb S^{(m)}}X_i Y_i$. Then, \eqref{eq:3} can be re-written as \begin{equation} \begin{split} \label{eq:2} \wh{\theta}^{(t,1)}&=\wh{\Delta}_\alpha^{(1)}\wh{\theta}^{(t-1,M)}+\alpha\widehat\Sigma_{xy}^{(1)},\\ \wh{\theta}^{(t,m)}&=\wh{\Delta}_\alpha^{(m)}\wh{\theta}^{(t,m-1)}+\alpha\widehat\Sigma_{xy}^{(m)} \mbox{ for } 2\leq m \leq M, \end{split} \end{equation} where $\wh{\Delta}_\alpha^{(m)}=I-\alpha\widehat\Sigma_{xx}^{(m)}$ is the contraction operator generated by the $m$-th mini-batch in the $t$-th epoch. For convenience, we refer to \eqref{eq:2} as an FMGD algorithm for the linear regression model.
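A minimal numerical sketch of the recursion \eqref{eq:2} is given below, with the fixed mini-batches taken as a contiguous split of the sample; the simulation settings are ours, and the paper does not prescribe these values:

```python
import numpy as np

def fmgd(X, Y, M, alpha, n_epochs, theta0=None):
    """Fixed mini-batch GD for least squares: batches are formed once
    (a contiguous split here) and reused, in the same order, every epoch."""
    N, p = X.shape
    n = N // M                                  # batch size, assuming M divides N
    batches = [np.arange(m * n, (m + 1) * n) for m in range(M)]  # fixed S^(m)
    theta = np.zeros(p) if theta0 is None else theta0.copy()
    for _ in range(n_epochs):
        for idx in batches:                     # same fixed batches in each epoch
            grad = X[idx].T @ (X[idx] @ theta - Y[idx]) / n
            theta = theta - alpha * grad
    return theta

rng = np.random.default_rng(3)
N, p = 2000, 3
X = rng.normal(size=(N, p))
Y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(size=N)

theta_fmgd = fmgd(X, Y, M=10, alpha=0.05, n_epochs=300)
theta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]
print(np.max(np.abs(theta_fmgd - theta_ols)))   # small, but nonzero for fixed alpha
```

With a fixed learning rate, the limit of this recursion is close to, but not exactly equal to, $\wh\theta_{\operatorname{ols}}$, a point made precise in the theoretical analysis below.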
\csubsection{Comparing FMGD with SMGD}
As we mentioned before, the FMGD algorithm is similar to the SMGD algorithms extensively studied in the literature. To compare the differences between FMGD and SMGD, we rewrite the FMGD algorithm \eqref{eq:3} and \eqref{eq:2} in a more general theoretical framework \citep{Mou2020OnLS, An2020Yu} as \begin{eqnarray} \label{sgd} \wh{\theta}^{(t,1)}=\wh{\theta}^{(t-1,M)}-\alpha \left\{\nabla\mathcal L\Big(\widehat\theta^{(t-1,M)}\Big) + \xi_{t,1}\Big(\widehat\theta^{(t-1,M)}\Big)\right\},\ \ \ \ \ \ \ \ \ \notag \\ \wh{\theta}^{(t,m)}=\wh{\theta}^{(t,m-1)}-\alpha\left\{\nabla\mathcal L\Big(\widehat\theta^{(t,m-1)}\Big) + \xi_{t,m}\Big(\widehat\theta^{(t,m-1)}\Big)\right\} \mbox{ for } 2\leq m \leq M. \end{eqnarray} Here $\nabla\mathcal L(\theta) = E\{\nabla\ell(X_i,Y_i; \theta)\}=\Sigma_{xx}\theta-\Sigma_{xy}$ with $\Sigma_{xy}=E(X_iY_i)$ is the population gradient. Moreover, $\xi_{t,m}(\theta)=\nabla\mathcal L_n^{(m)}(\theta)-\nabla\mathcal L(\theta)=(\widehat\Sigma_{xx}^{(m)}-\Sigma_{xx})\theta-(\widehat\Sigma_{xy}^{(m)}-\Sigma_{xy})$ can be viewed as a zero-mean noise added to the population gradient. This makes the computed gradient \emph{stochastic}. Note that the theoretical framework \eqref{sgd} has been popularly used in the literature to study the theoretical properties of various SMGD algorithms \citep{Bridging2020, Mou2020OnLS, An2020Yu,Chen2022StationaryBO}. It seems that our FMGD method can also be nicely covered by this elegant theoretical framework. By this framework, the difference between FMGD and SMGD can be better illustrated as follows.
By \eqref{sgd} we find that FMGD and SMGD are indeed very similar to each other. However, there exists one critical difference. That is, how the random noise term $\xi_{t,m}(\theta)$ is generated. For most SMGD algorithms studied in the literature, $\xi_{t,m}(\theta)$ is always assumed to be independently generated for different $t$ and $m$. The independence could be marginal independence due to (for example) unlimited streaming data, or conditional independence due to (for example) subsampling from the whole dataset. In this case, the subscript $m$ for identifying different mini-batches within the same epoch $t$ becomes unnecessary. This explains why in the existing SMGD literature, the subscript $m$ was seldom used. For most cases, one single subscript $t$ for identifying the number of numerical iterations is already sufficient. In contrast, for the FMGD algorithm, we have fixed partitions. The fixed partitions are repeatedly used throughout the whole algorithm. In other words, we always have $\mathbb S^{(t_1,m)}=\mathbb S^{(t_2,m)}=\mathbb S^{(m)}$ for any $t_1\neq t_2$ but a fixed $m$. Obviously, we should have $\bigcup_m\mathbb S^{(m)}=\mathbb S$ and $\mathbb S^{(m_1)}\bigcap \mathbb S^{(m_2)} = \emptyset$ for any $m_1\neq m_2$. Accordingly, the random noise terms used in \eqref{sgd} can never be independent of each other, neither marginally nor conditionally. This is because $\xi_{t_1,m}(\theta)=\xi_{t_2,m}(\theta)$ for any $t_1\neq t_2$, since they are both computed on the same mini-batch $\mathbb S^{(m)}$.
To further illustrate the idea, we compare the two algorithms (i.e., FMGD and SMGD) under a linear regression model setup with the least squares loss function. Let $\wh{\theta}_{\text{fmgd}}^{(t,M)}$ be the FMGD estimator obtained at the end of the $t$-th epoch (i.e., on the $M$-th mini-batch) and $\wh{\theta}_{\text{smgd}}^{(t,M)}$ be the SMGD estimator obtained after $tM$ mini-batch updates. Then by equation (\ref{sgd}), we know that both the FMGD and SMGD methods share a similar updating formula as follows \begin{eqnarray}\label{update} \wh{\theta}^{(t,m)}-\wh\theta_{\operatorname{ols}} = \wh{\Delta}_\alpha^{(t,m)}\left(\wh{\theta}^{(t,m-1)}-\wh\theta_{\operatorname{ols}}\right) + \alpha\left(\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t, m)}\wh\theta_{\operatorname{ols}}\right), \end{eqnarray} where $\wh{\theta}^{(t,m)}$ can be either the FMGD or the SMGD estimator. By iteratively applying equation (\ref{update}) for a total of $M$ times (i.e., a single epoch), one can obtain that $\wh{\theta}^{(t,M)}-\wh\theta_{\operatorname{ols}} = Q_1 + Q_2$, where \begin{eqnarray*} &&Q_1 = \left(\prod^M_{m=1}\wh{\Delta}^{(t,m)}_\alpha\right)\Big(\wh{\theta}^{(t-1,M)}-\wh\theta_{\operatorname{ols}}\Big),\\ &&Q_2 = \alpha \sum^M_{m=1}\left(\prod^M_{s=m+1}\wh{\Delta}^{(t,s)}_\alpha\right)\bigg(\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\bigg). \end{eqnarray*} For both FMGD and SMGD, $Q_1$ represents the contracted distance between the historical estimator and the OLS estimator, while $Q_2$ represents the accumulated computational error due to the mini-batch gradients. It is remarkable that for the FMGD algorithm, $\widehat\Sigma_{xx}^{(t,m)}$s and $\widehat\Sigma_{xy}^{(t,m)}$s are statistics calculated on non-overlapping subsets of the whole sample.
Therefore, they satisfy an interesting analytical relationship as $\widehat\Sigma_{xx} = M^{-1}\sum^M_{m=1}\widehat\Sigma_{xx}^{(t,m)}$ and $\widehat\Sigma_{xy} = M^{-1}\sum^M_{m=1}\widehat\Sigma_{xy}^{(t,m)}$. Consequently, we have $\sum^M_{m=1}(\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t, m)}\wh\theta_{\operatorname{ols}}) = 0$ for every $t>0$, since $\wh\theta_{\operatorname{ols}} = \widehat\Sigma_{xx}^{-1}\widehat\Sigma_{xy}$. Then the $Q_2$ term for the FMGD algorithm reduces to \begin{eqnarray*} Q_2 = \alpha \sum^M_{m=1}\left(\prod^M_{s=m+1}\wh{\Delta}^{(t,s)}_\alpha-I\right)\bigg(\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\bigg)
\end{eqnarray*} with $\|\prod^M_{s=m+1}\wh{\Delta}^{(t,s)}_\alpha-I\| = O_p(\alpha)$. Meanwhile, under appropriate regularity conditions, we can verify that $\|\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\| = O_p(n^{-1/2})$ is bounded in probability for all $1\leq m\leq M$. As a result, the $Q_2$ term in FMGD becomes $O_p(\alpha^2n^{-1/2})$, which is very small for a small learning rate $\alpha$. In contrast, for the SMGD algorithm, both $\widehat\Sigma_{xx}^{(t,m)}$s and $\widehat\Sigma_{xy}^{(t,m)}$s are statistics calculated on subsets randomly sampled from the whole sample with replacement. Consequently, we have $\sum^M_{m=1}(\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t, m)}\wh\theta_{\operatorname{ols}}) =O_p(n^{-1/2})$ for every $t$. Then the $Q_2$ term remains $O_p(\alpha n^{-1/2})$, instead of the $O_p(\alpha^2 n^{-1/2})$ achieved by the FMGD algorithm. This suggests that the accumulated computational error of FMGD in every epoch is smaller than that of the SMGD algorithm. This is mainly because in each epoch of the FMGD method, every sample point in the whole sample is guaranteed to be fully utilized. However, this nice property cannot be achieved by SMGD due to the stochastic nature of its mini-batch sampling. This makes the numerical convergence performance of SMGD less efficient.
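The exact cancellation underlying this comparison can be checked numerically: on a fixed non-overlapping partition, the terms $\widehat\Sigma_{xy}^{(m)} - \widehat\Sigma_{xx}^{(m)}\wh\theta_{\operatorname{ols}}$ sum to zero by the normal equations, while with-replacement resampled batches leave an $O_p(n^{-1/2})$ remainder. The simulation settings below are ours:

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, M = 1200, 3, 6
n = N // M
X = rng.normal(size=(N, p))
Y = X @ rng.normal(size=p) + rng.normal(size=N)
theta_ols = np.linalg.lstsq(X, Y, rcond=None)[0]

def batch_term(idx):
    # Sigma_xy^(m) - Sigma_xx^(m) theta_ols computed on the batch idx.
    return X[idx].T @ Y[idx] / n - (X[idx].T @ X[idx] / n) @ theta_ols

# FMGD: fixed, non-overlapping partition of S -> the terms sum to (machine) zero.
fixed = [np.arange(m * n, (m + 1) * n) for m in range(M)]
fmgd_sum = sum(batch_term(idx) for idx in fixed)

# SMGD: batches resampled with replacement -> the sum is O_p(n^{-1/2}), not zero.
smgd_sum = sum(batch_term(rng.integers(0, N, size=n)) for _ in range(M))

print(np.max(np.abs(fmgd_sum)), np.max(np.abs(smgd_sum)))
```

This is exactly the mechanism by which every sample point is fully utilized in each FMGD epoch, while SMGD forfeits the cancellation through its random sampling.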
\csection{THEORETICAL PROPERTIES}
\csubsection{The Stable Solution}
To study the theoretical properties of FMGD, we start with a linear regression model with a fixed learning rate. Comparing \eqref{eq:gd} with \eqref{eq:2}, we find that only one contraction operator $\wh{\Delta}_\alpha$ is involved in the GD estimator, while a total of $M$ contraction operators (i.e., $\wh{\Delta}_\alpha^{(m)}$) are involved in the FMGD algorithm. As a result, the FMGD algorithm is considerably more sophisticated. Moreover, it is natural to ask whether a numerical limit of the FMGD sequence exists as $t\rightarrow\infty$. If such a limit exists, we refer to it as the FMGD estimator. Before formally addressing this important problem, we temporarily assume that there exists a limit $\wh{\theta}^{(m)}$ such that $\wh{\theta}^{(t,m)}\rightarrow \wh{\theta}^{(m)}$ as $t\rightarrow\infty$ for $1\leq m\leq M$. It then follows that $\wh{\theta}^{(m)}$ should be the stable solution of the following linear dynamic system: \begin{equation} \begin{split}
\label{eq:4}
\wh{\theta}^{(1)}&=\wh{\Delta}_\alpha^{(1)}\wh{\theta}^{(M)}+\alpha\widehat\Sigma_{xy}^{(1)},\\
\wh{\theta}^{(m)}&=\wh{\Delta}_\alpha^{(m)}\wh{\theta}^{(m-1)}+\alpha\widehat\Sigma_{xy}^{(m)} \mbox{ for } 2\leq m \leq M. \end{split} \end{equation} It is remarkable that the contraction operator $\wh{\Delta}_\alpha^{(m)}$ does not depend on $t$. The linear dynamic system \eqref{eq:4} can be written in matrix form as $\widehat\Omega \widehat\theta^*=\alpha\widehat\Sigma_{xy}^*$, where $\widehat\theta^* = (\wh{\theta}^{(1)\top},\wh{\theta}^{(2)\top},\cdots,\wh{\theta}^{(M)\top})^\top\in\mathbb{R}^{(Mp)}$ is the vectorized FMGD estimator, $\widehat\Sigma_{xy}^*=(\widehat\Sigma_{xy}^{(1)\top},\widehat\Sigma_{xy}^{(2)\top},$ $\cdots,\widehat\Sigma_{xy}^{(M)\top})^\top\in\mathbb{R}^{(Mp)}$, and $\widehat\Omega$ is an $(Mp)\times (Mp)$ matrix given by \[\widehat\Omega=\left[\begin{array}{cccccc} I&0&0&\cdots&0&-\wh{\Delta}_\alpha^{(1)}\\ -\wh{\Delta}_\alpha^{(2)}&I&0&\cdots&0&0\\ 0&-\wh{\Delta}_\alpha^{(3)}&I&\cdots&0&0\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots\\ 0&0&0&\cdots&-\wh{\Delta}_\alpha^{(M)}&I \end{array} \right].\] If the matrix $\widehat\Omega$ is invertible, we immediately have $\widehat\theta^*=\alpha{\widehat\Omega}^{-1}\widehat\Sigma_{xy}^*$. It is then of great interest to study two important problems. First, under what conditions is the matrix $\widehat\Omega$ invertible? Once $\widehat\Omega$ is invertible, the stable solution to the dynamic system \eqref{eq:4} can be uniquely determined. Second, what is the analytical form of ${\widehat\Omega}^{-1}$? Regarding the first problem, we give the following proposition. \begin{prop} \label{prop3}
Assume $0<\alpha <2/\lambda_1(\Sigma_{\text{xx}})$. Then there exists a positive constant $C$ such that for any $0<\eta< \min\{\alpha \lambda_p(\Sigma_{\text{xx}}), 2-\alpha\lambda_1(\Sigma_{\text{xx}})\}$, $\widehat\Omega$ is invertible with probability no less than $1 - 2M\exp\left\{-CN\eta^2/(\alpha^2M^2)\right\}$. \end{prop} \noindent The detailed proof is given in Appendix A.2. By Proposition \ref{prop3}, we find that the dynamic system \eqref{eq:4} should have a unique stable solution with high probability. However, whether this solution can be approached by the FMGD algorithm is another issue. An intuitive requirement is that the learning rate $\alpha$ cannot be too large; otherwise, the FMGD algorithm might not converge. If the stable solution can indeed be attained by the FMGD algorithm (in the sense that $\wh{\theta}^{(t,m)}\rightarrow \wh{\theta}^{(m)}$ as $t\rightarrow\infty$), we then refer to this stable solution as the FMGD estimator. \csubsection{Numerical Convergence and Asymptotic Distribution}
In the previous subsection, we studied the conditions for the existence of the stable solution. However, this does not necessarily imply the existence of the FMGD estimator. We refer to a stable solution $\wh{\theta}^{*}$ as an FMGD estimator only if it can be computed by the FMGD algorithm. Otherwise, it is just a stable solution to the dynamic system \eqref{eq:4}, but not a practically computable estimator. To find the answer, define the iteratively updated FMGD estimators at the $t$-th epoch as $\widehat\theta^{*(t)}=(\wh{\theta}^{(t,1)\top},\cdots,\wh{\theta}^{(t,M)\top})^\top$. Next, we subtract both sides of \eqref{eq:4} from those of \eqref{eq:2}. We then obtain $\wh{\theta}^{(t+1,m)} - \wh{\theta}^{(m)} = \widehat{C}^{(m)} \Big(\wh{\theta}^{(t,m)} - \wh{\theta}^{(m)}\Big)$ for $1\leq m\leq M$, where $\widehat{C}^{(1)} =\wh{\Delta}_\alpha^{(1)}\wh{\Delta}_\alpha^{(M)}\cdots\wh{\Delta}_\alpha^{(2)}$, $\widehat{C}^{(m)} =\wh{\Delta}_\alpha^{(m)}\cdots\wh{\Delta}_\alpha^{(1)}\wh{\Delta}_\alpha^{(M)}\cdots\wh{\Delta}_\alpha^{(m+1)}$ for $2 \leq m \leq M-1$, and $\widehat{C}^{(M)} =\wh{\Delta}_\alpha^{(M)}\cdots\wh{\Delta}_\alpha^{(1)}$. Recall that $\rho(A)$ denotes the spectral radius of an arbitrary $p\times p$ real-valued matrix $A$. Then, the iteratively updated FMGD estimator $\wh{\theta}^{(t,m)}$ converges linearly towards the FMGD estimator $\wh{\theta}^{(m)}$, as long as the spectral radius satisfies $\rho(\widehat{C}^{(m)})<1$ for $1\leq m \leq M$. We summarize this finding in the following theorem.
\begin{theorem} \label{Th1} {\sc (Numerical Convergence)} Assume $0<\alpha<2/\lambda_{1}(\Sigma_{xx})$. Let $\wh{\theta}^{(0,m)}$ be the initial point in the $m$-th mini-batch with $\wh{\theta}^{(0,1)}=\wh{\theta}^{(0)}$ and $\wh{\theta}^{(0,m)}=\wh{\theta}^{(1,m-1)}$ for $2\leq m \leq M$. Then for an arbitrary but sufficiently small $\eta>0$, there exist a positive constant $C$ and a constant $\rho_{\alpha,M}$ satisfying $\{\rho(\Delta_\alpha)\}^M-\eta<\rho_{\alpha,M}<1$, such that \begin{equation}
P\left(\Big\|\wh{\theta}^{(t,m)}-\wh{\theta}^{(m)}\Big\|\leq \rho_{\alpha,M}^{t-1}\Big\|\wh{\theta}^{(0,m)}-\wh{\theta}^{(m)}\Big\|\right) \geq 1- 4M\exp\left(-\frac{CN\eta^2}{\alpha^2M^2}\right). \nonumber \end{equation}
\end{theorem}\noindent The detailed proof of Theorem \ref{Th1} is given in Appendix A.3. Theorem \ref{Th1} suggests that, for any learning rate satisfying $0< \alpha< 2/\lambda_1(\Sigma_{xx})$ and an arbitrary but sufficiently small $\eta>0$, there exists a constant $\rho_{\alpha,M}<1$ such that $\|\wh{\theta}^{(t,m)}-\wh{\theta}^{(m)}\|\leq \rho_{\alpha,M}^{t-1}\|\wh{\theta}^{(0,m)}-\wh{\theta}^{(m)}\|$ holds with probability no less than $1- 4M\exp\{-CN\eta^2/(\alpha^2M^2)\}$. Therefore, the FMGD algorithm converges linearly to its stable solution. Here by ``converge linearly", we mean that the numerical error of the $t$-th step FMGD estimator is linearly bounded by that of the previous step \citep{1963Polyak,2009Robust}. In addition, we find that the sufficient conditions for the numerical convergence of the FMGD algorithm are more restrictive than the conditions for the existence of the stable solution. Lastly, we focus on the influence of the learning rate $\alpha$ on the numerical convergence rate of FMGD. Specifically, the convergence factor is given by $\rho_{\alpha,M}=\rho(\widehat{C}^{(M)})$, where $\widehat{C}^{(M)} =\wh{\Delta}_\alpha^{(M)}\cdots\wh{\Delta}_\alpha^{(1)}$. Its population counterpart is given by $\{\rho(\Delta_\alpha)\}^M = \max\{|1-\alpha\lambda_1(\Sigma_{xx})|^M, |1-\alpha\lambda_p(\Sigma_{xx})|^M\}$. Therefore, as $\alpha \to 0$, we have $\{\rho(\Delta_\alpha)\}^M \to 1$. This forces the convergence factor $\rho_{\alpha,M}\rightarrow 1$ in probability and thus leads to a very slow numerical convergence rate for $\wh{\theta}^{(t,m)}$. In other words, a small $\alpha$ value leads to slow numerical convergence. We next study the statistical properties of the FMGD estimator, which are summarized in the following theorem.
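The constructions above can be illustrated on a small simulated example (dimensions, seed, and learning rate are hypothetical choices, not the paper's settings): we assemble $\widehat\Omega$, solve for the stable solution, check the dynamic system \eqref{eq:4}, compute the convergence factor $\rho(\widehat{C}^{(M)})$, and verify that the FMGD iterates contract toward the stable solution.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, M, alpha = 600, 3, 3, 0.1
n = N // M
X = rng.normal(size=(N, p))
Y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=N)

batches = np.split(np.arange(N), M)
Sxx = [X[b].T @ X[b] / n for b in batches]
Sxy = [X[b].T @ Y[b] / n for b in batches]
Delta = [np.eye(p) - alpha * S for S in Sxx]      # contraction operators

# Stable solution: assemble the (Mp) x (Mp) block matrix Omega and solve.
Omega = np.eye(M * p)
Omega[:p, (M - 1) * p:] -= Delta[0]               # block (1, M): -Delta^(1)
for m in range(1, M):                             # sub-diagonal blocks: -Delta^(m+1)
    Omega[m * p:(m + 1) * p, (m - 1) * p:m * p] -= Delta[m]
theta_stable = np.linalg.solve(Omega, alpha * np.concatenate(Sxy)).reshape(M, p)

# The solution satisfies the dynamic system: check the m = 1 equation.
resid = theta_stable[0] - (Delta[0] @ theta_stable[M - 1] + alpha * Sxy[0])

# Convergence factor rho_{alpha,M} = rho(C^(M)) with C^(M) = Delta^(M)...Delta^(1).
C = np.linalg.multi_dot(Delta[::-1])
rho = max(abs(np.linalg.eigvals(C)))

# Run FMGD epochs: the error on the last mini-batch contracts roughly like rho^t.
theta, err = np.zeros(p), []
for t in range(50):
    for S, s in zip(Sxx, Sxy):
        theta = (np.eye(p) - alpha * S) @ theta + alpha * s
    err.append(np.linalg.norm(theta - theta_stable[M - 1]))
print(rho, err[0], err[-1])
```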
\begin{theorem} \label{Th2} {\sc (Asymptotic Normality)} Suppose the assumptions in Theorem \ref{Th1} hold. Then conditional on the whole sample, we have $\sqrt{N}\{v_m(\alpha)\}^{-1/2}\{\widehat\theta^{(m)} - \widehat{\mu}_m(\alpha)\}\rightarrow_d N(0,\sigma_\varepsilon^2I_p)$ as $N \rightarrow \infty$, where \begin{equation} \widehat{\mu}_m(\alpha)=\theta + O_p\left(\alpha^2 n^{-3/2}\right) \mbox{~~and~~~} v_m(\alpha)= \Sigma^{-1}_{xx} + \frac{\alpha^2(M^2-1)}{12}\Sigma_{xx} + o(\alpha^2).\nonumber \end{equation} \end{theorem} \noindent The detailed proof is given in Appendix A.4. Theorem \ref{Th2} suggests that, with an appropriately specified constant learning rate $\alpha$, the FMGD estimator $\widehat\theta^{(m)}$ still converges at the $\sqrt{N}$ rate. However, it is biased, and its variance is inflated relative to that of the whole sample OLS estimator by approximately $\alpha^2\sigma_\varepsilon^2(M^2-1)\Sigma_{xx} /12$. Furthermore, a smaller learning rate leads to both smaller bias and smaller variance, and the variance inflation effect disappears as $\alpha \rightarrow 0$.
\csubsection{Learning Rate Scheduling}
Previous analyses suggest that the FMGD estimator with a constant learning rate can hardly achieve the same asymptotic efficiency as the whole sample OLS estimator, unless an extremely small learning rate is used. This is because statistical efficiency requires a sufficiently small learning rate $\alpha$. However, an extremely small learning rate leads to painfully slow numerical convergence. A more common practice is to use relatively large learning rates in the early stage of the FMGD algorithm, so that the numerical convergence speed is much improved, and much smaller learning rates in the late stage, so that better statistical efficiency is obtained. How to appropriately schedule the learning rates for the FMGD algorithm then becomes the key issue.
To address this problem, we follow the classical literature and consider a scheduling strategy with a diminishing learning rate \citep{Numerical,2019Understanding}. Specifically, we allow different epochs to have different learning rates. Let $\alpha_t$ be the learning rate used for the $t$-th epoch iteration. By diminishing the learning rate, we mean that $\alpha_t$ should monotonically converge to 0 as $t \to \infty$ at an appropriate rate. Intuitively, the diminishing rate cannot be too fast; otherwise, the algorithm might not converge numerically. Meanwhile, it cannot be too slow; otherwise, the resulting estimator would be statistically inefficient. What type of diminishing rate satisfies both requirements is then the key problem. The following theorem addresses this important problem. \begin{theorem} \label{Th3}{\sc (Asymptotic Convergence)}
Assume that the whole data $\{(X_i,Y_i):1\leq i \leq N\}$ are given and fixed. Assume that there exist two positive constants $0<\lambda_{\min}\leq \lambda_{\max}<+\infty$ such that $\lambda_{\min}\leq \lambda_{p}(\widehat\Sigma_{xx}^{(m)})\leq \lambda_{1}(\widehat\Sigma_{xx}^{(m)}) \leq \lambda_{\max}$ for all $1\leq m \leq M$. Let $\wh{\theta}^{(t,M)}$ be the FMGD estimator obtained in the $t$-th epoch on the $M$-th mini-batch. Define $\nabla{\mathcal L}_{\max} = \max_{m}\|\widehat\Sigma_{xy}^{(m)} - \widehat\Sigma_{xx}^{(m)}\wh\theta_{\operatorname{ols}}\|$. Let $\alpha_t$ be the learning rate used in the $t$-th epoch. Then we have \begin{eqnarray}\label{eq:32}
\Big\|\wh{\theta}^{(t,M)} - \wh\theta_{\operatorname{ols}}\Big\|\leq \frac{\big\|\wh{\theta}^{(0)} - \wh\theta_{\operatorname{ols}}\big\|}{\exp\left\{M\lambda_{\min}\left(\sum^t_{k=1}\alpha_k\right)\right\}} + M\nabla{\mathcal L}_{\max} \left(\frac{\lambda_{\max}}{\lambda_{\min}}\right)\sum^t_{k=1} \frac{\alpha^{2}_k}{\left(\sum^{t}_{s=k+1}\alpha_s\right)}.
\end{eqnarray} Furthermore, if the learning rate sequence satisfies the following three conditions: (1) $\sum^\infty_{t=1}\alpha_t = \infty$, (2) $\sum^\infty_{t=1}\alpha^2_t < \infty$, and (3) $0<\alpha_1<\lambda^{-1}_{\max}$, then we have $\|\wh{\theta}^{(t,M)} - \wh\theta_{\operatorname{ols}}\| \to 0$ as $t\to \infty$. \end{theorem} The detailed proof of Theorem \ref{Th3} is given in Appendix A.6. By Theorem \ref{Th3}, we can draw the following conclusions. The first term on the right-hand side of \eqref{eq:32} explains how the initial value $\widehat{\theta}^{(0)}$ might affect the numerical convergence speed. In this regard, the learning rate sequence $\alpha_t$ should not diminish too fast, in the sense that we should have $\sum^\infty_{t=1}\alpha_t = \infty$. Otherwise, the numerical error due to the initial value might not disappear. For this term, a larger number of mini-batches (i.e., $M$) leads to a larger number of numerical updates within one single epoch and thus a smaller numerical error. The second term on the right-hand side of \eqref{eq:32} contains many factors. First, it reflects the fact that a too large number of mini-batches (i.e., $M$) is not necessarily good for the overall convergence. This is mainly because a too large mini-batch number makes the sample size per mini-batch (i.e., $n=N/M$) too small. Second, it also reflects the effect of the learning rate scheduling strategy. Under the assumption $\sum^\infty_{t=1}\alpha_t = \infty$, the quantity $1/\left(\sum^{t}_{s=k+1}\alpha_s\right)$ shrinks to 0 as $t\rightarrow \infty$ for any fixed $k$. Nevertheless, there are infinitely many such terms, whose collective effect might not diminish unless appropriate control is provided. Fortunately, such control is provided by $\alpha_t^2$ under the condition $\sum^{\infty}_{t=1} \alpha_t^2 < \infty$. Consequently, we can apply the dominated convergence theorem to \eqref{eq:32}.
This makes the second term on the right-hand side of \eqref{eq:32} shrink to 0. Lastly, we find that the spectra of the $\widehat\Sigma_{xx}^{(m)}$s also play an important role here. Specifically, the smaller the condition number (i.e., $\lambda_{\max}/\lambda_{\min}$) is, the better the numerical convergence rate should be.
Next, we discuss several commonly used learning rate decaying strategies. We start with the polynomial decay strategy $\alpha_t=c_{\alpha}t^{-\gamma}$ for some positive constants $c_{\alpha}>0$ and $\gamma>0$. Then $0.5 < \gamma \leq 1$ satisfies the theorem conditions, whereas the theorem conditions are violated if $0<\gamma\leq 0.5$ or $\gamma>1$. Another strategy is the exponential decaying strategy $\alpha_t=c_\alpha\gamma^{t/b}$, where $c_\alpha>0$ is a pre-defined initial learning rate, $0<\gamma<1$ is the decay rate, and $b$ is the decay step. The exponential decaying strategy does not satisfy the condition $\sum^\infty_{t=1}\alpha_t = \infty$ and thus could lead to premature stopping. Finally, we focus on the stage-wise step decaying strategy. In this strategy, the whole training procedure is cut into several stages according to the decay step $b$, and a stage-wise constant learning rate is used within each stage. Depending on how the stage-wise constant learning rates are specified, the technical conditions in Theorem \ref{Th3} might be satisfied or violated. For example, if the learning rate at the $k$-th stage is set to $c_\alpha/k$ for some initial value $c_\alpha$, then the theorem conditions are satisfied. However, if it is set to $c_{\alpha}\gamma^{k}$ with $c_{\alpha}>0$ and some $0<\gamma<1$, then the theorem conditions are violated.
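The three conditions in Theorem \ref{Th3} can be checked numerically for these decaying strategies. A minimal sketch with illustrative constants (not taken from the paper's experiments) compares the partial sums of $\alpha_t$ and $\alpha_t^2$:

```python
import numpy as np

def partial_sums(alpha_fn, T):
    # Partial sums of alpha_t and alpha_t^2 up to epoch T.
    a = np.array([alpha_fn(t) for t in range(1, T + 1)])
    return a.sum(), (a ** 2).sum()

# Polynomial decay 0.2 * t^{-0.6}: sum(alpha_t) diverges, sum(alpha_t^2) converges.
s_small, _ = partial_sums(lambda t: 0.2 * t ** -0.6, 10_000)
s_large, s_sq = partial_sums(lambda t: 0.2 * t ** -0.6, 100_000)

# Exponential decay 0.2 * 0.9^{t/10}: sum(alpha_t) already converges (too fast).
e_small, _ = partial_sums(lambda t: 0.2 * 0.9 ** (t / 10), 10_000)
e_large, _ = partial_sums(lambda t: 0.2 * 0.9 ** (t / 10), 100_000)

print(s_small, s_large)   # still growing with T: condition (1) holds
print(e_small, e_large)   # essentially unchanged: condition (1) fails
print(s_sq)               # bounded: condition (2) holds
```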
\csubsection{Two Useful Extensions}
In this subsection, we make two useful extensions of FMGD. First, we develop a more flexible FMGD algorithm, which allows random shuffling of the whole sample between two consecutive epochs. Second, a more general loss function (e.g., the negative log-likelihood function) is studied. We start with the shuffled FMGD method (or sFMGD for short). In the original FMGD method discussed above, once the mini-batches are partitioned, they are fixed throughout the whole algorithm. Meanwhile, many practical software implementations (e.g., TensorFlow, PyTorch) allow the whole sample to be randomly shuffled between two consecutive epochs. Accordingly, we should have $\bigcup_m\mathbb S^{(t,m)}=\mathbb S$ and $\mathbb S^{(t,m_1)}\bigcap \mathbb S^{(t,m_2)}= \emptyset$ for any $m_1\neq m_2$ and a common $t$. However, we should have $\mathbb S^{(t_1,m)}\neq \mathbb S^{(t_2,m)}$ for $t_1 \neq t_2$ because the partitions are randomly shuffled for every $t$. Denote the shuffled FMGD estimator by $\widehat\theta^{(t,M)}_{\text{sf}}$. Then the updating formula for a linear regression model becomes \begin{equation} \begin{split} \label{eq:sfmgd} \wh{\theta}^{(t,1)}_{\text{sf}}&=\wh{\Delta}_\alpha^{(t,1)}\wh{\theta}^{(t-1,M)}_{\text{sf}}+\alpha\widehat\Sigma_{xy}^{(t,1)},\\ \wh{\theta}^{(t,m)}_{\text{sf}}&=\wh{\Delta}_\alpha^{(t,m)}\wh{\theta}^{(t,m-1)}_{\text{sf}}+\alpha\widehat\Sigma_{xy}^{(t,m)} \mbox{ for } 2\leq m \leq M, \end{split} \end{equation} where $\widehat\Sigma_{xx}^{(t,m)}=n^{-1}\sum_{i\in\mathbb S^{(t,m)}} X_iX_i^\top$, $\widehat\Sigma_{xy}^{(t,m)}=n^{-1}\sum_{i\in\mathbb S^{(t,m)}}X_i Y_i$, and $\wh{\Delta}_\alpha^{(t,m)}=I-\alpha\widehat\Sigma_{xx}^{(t,m)}$. Comparing \eqref{eq:sfmgd} and \eqref{eq:2}, we find that the only difference between FMGD and sFMGD lies in the way that $\mathbb S^{(t,m)}$ is generated.
For sFMGD, the mini-batches $\mathbb S^{(t,m)}$ with $1\leq m \leq M$ used in the $t$-th epoch are still generated under the constraints: (1) $\bigcup_m\mathbb S^{(t,m)}=\mathbb S$ and (2) $\mathbb S^{(t,m_1)}\bigcap \mathbb S^{(t,m_2)}= \emptyset$ for any $m_1\neq m_2$. Therefore, different $\mathbb S^{(t,m)}$s within the same epoch are not independent of each other. Consequently, sFMGD remains different from the SMGD algorithms studied in the literature in terms of how the mini-batches are generated.
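The three sampling schemes differ only in how the index sets $\mathbb S^{(t,m)}$ are generated within an epoch, which can be sketched as follows (the toy sample size and batch count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 12, 3                  # toy sample size and number of mini-batches
n = N // M

# FMGD: one fixed partition, reused in every epoch.
fmgd_batches = np.split(np.arange(N), M)

# sFMGD: reshuffle and repartition at every epoch; still disjoint and exhaustive.
sfmgd_batches = np.split(rng.permutation(N), M)

# SMGD: each mini-batch drawn with replacement; batches may overlap or miss points.
smgd_batches = [rng.choice(N, n, replace=True) for _ in range(M)]

covered = np.sort(np.concatenate(sfmgd_batches))
print(np.array_equal(covered, np.arange(N)))   # sFMGD still covers the whole sample
```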
Next, we investigate the theoretical properties of the sFMGD algorithm. As the sFMGD algorithm randomly shuffles the mini-batches for every epoch, the stable solution no longer exists. We thus have to take a slightly different approach (i.e., error bound analysis) to understand its theoretical properties. We focus on the last mini-batch estimator in each epoch (i.e., $\wh{\theta}^{(t,M)}_{\text{sf}}$), because this is the estimator utilizing the whole sample information in a single epoch. Then we have the following theorem for the numerical error bound.
\begin{theorem} \label{Th4}{\sc (Numerical Error Bound)}
Assume that the whole data $\{(X_i,Y_i):1\leq i \leq N\}$ are given and fixed. Further assume that there exist two positive constants $0<\lambda_{\min}\leq \lambda_{\max}<+\infty$ such that $\lambda_{\min}\leq \lambda_{p}(\widehat\Sigma_{xx}^{(t,m)})\leq \lambda_{1}(\widehat\Sigma_{xx}^{(t,m)}) \leq \lambda_{\max}$ for all $1\leq m \leq M$ and $t\geq 1$. Define $\nabla{\mathcal L}_{\max} = \max_{t,m}\|\widehat\Sigma_{xy}^{(t,m)} - \widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\|$. If $0<\alpha< (M\lambda_{\max})^{-1}$, then we have \begin{equation} \label{eq:t4}
\Big\|\wh{\theta}^{(t,M)}_{\operatorname{sf}}-\wh\theta_{\operatorname{ols}}\Big\| \leq \Big(1-\lambda_{\min}\alpha\Big)^{tM} \Big\|\wh{\theta}^{(0)}-\wh\theta_{\operatorname{ols}}\Big\| + 2\alpha M(\lambda_{\max}/\lambda_{\min})\nabla{\mathcal L}_{\max}.\nonumber \end{equation} \end{theorem}
\noindent The detailed proof of Theorem \ref{Th4} is given in Appendix A.7. Theorem \ref{Th4} derives a numerical error bound for the difference between the sFMGD estimator and the global estimator $\wh\theta_{\operatorname{ols}}$, under the condition that the whole dataset is given and fixed. The upper bound can be decomposed into two terms. The first term is due to the contracted distance between the initial value $\widehat\theta^{(0)}$ and $\wh\theta_{\operatorname{ols}}$, while the second term is due to the accumulated computational error. The first term suggests that the effect of the initial value diminishes at an exponential rate with contraction factor $\big(1-\lambda_{\min}\alpha\big)^M$. As $\alpha \to 0$, we have $\big(1-\lambda_{\min}\alpha\big)^M\to 1$. Consequently, a smaller learning rate $\alpha$ leads to a slower numerical convergence rate. Besides, a larger number of mini-batches $M$ leads to a smaller contraction factor $\big(1-\lambda_{\min}\alpha\big)^M$. Furthermore, for a fixed learning rate $\alpha$ satisfying $\alpha< \lambda^{-1}_{\max}$, we can verify that $1-\lambda_{\min}\alpha \geq 1- \lambda_{\min}/\lambda_{\max}$. Thus a smaller condition number $\big(\lambda_{\max}/\lambda_{\min}\big)$ leads to a faster convergence rate. The second term is the accumulated computational error due to the mini-batch gradient updates. We find that this term is linearly bounded by the learning rate $\alpha$, the number of mini-batches $M$, the condition number $\lambda_{\max}/\lambda_{\min}$, and the local gradient evaluated at the global minimizer $\wh\theta_{\operatorname{ols}}$ (i.e., $\|\widehat\Sigma_{xy}^{(t,m)}-\widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\|$). It is notable that, once the whole dataset is given and fixed, both $\lambda_{\max}/\lambda_{\min}$ and $\max_{t,m}\big\|\widehat\Sigma_{xy}^{(t,m)}-\widehat\Sigma_{xx}^{(t,m)}\wh\theta_{\operatorname{ols}}\big\|$ are fixed and thus bounded.
Then the second term can be made arbitrarily small by specifying a sufficiently small learning rate $\alpha$. Note that a smaller learning rate would enlarge the first term through the contraction factor $\big(1-\lambda_{\min}\alpha\big)^{M}$ but shrink the second term $2\alpha M(\lambda_{\max}/\lambda_{\min})\nabla{\mathcal L}_{\max}$. That is, the learning rate $\alpha$ plays opposite roles in the two terms. Therefore, a constant learning rate can hardly achieve the best trade-off between them.
Lastly, we extend the theoretical results obtained in the previous subsections to a general loss function. Define $\ell(X_i,Y_i;\theta)$ to be an arbitrary loss function evaluated at sample $i$. Let $\mathcal L_N(\theta)=N^{-1}\sum^N_{i=1} \ell(X_i,Y_i;\theta)$ denote the global loss function. Then the global optimal estimator can be defined as $\widehat{\theta}=\operatorname{\mbox{argmin}}_{\theta}\mathcal L_N(\theta)$. For example, $\mathcal L_N(\theta)$ could be twice the negative log-likelihood function of a generalized linear regression model. Then $\widehat{\theta}$ is the corresponding maximum likelihood estimator. We assume that $\widehat{\theta}$ is $\sqrt{N}$-consistent. It is then of great interest to study the theoretical properties of the FMGD-type (i.e., FMGD and sFMGD) estimators under the general loss function. Let $\wh{\theta}^{(t,m)}_{\operatorname{gf}}$ be the FMGD-type estimator obtained in the $t$-th epoch on the $m$-th mini-batch under the general loss function. Then the updating formula for the FMGD-type algorithm is given as \begin{equation} \begin{split} \label{eq:gl} \wh{\theta}^{(t,1)}_{\operatorname{gf}}&=\wh{\theta}^{(t-1,M)}_{\operatorname{gf}}-\alpha\nabla\mathcal L_n^{(t,1)}\Big(\wh{\theta}^{(t-1,M)}_{\operatorname{gf}}\Big),\\ \wh{\theta}^{(t,m)}_{\operatorname{gf}}&=\wh{\theta}^{(t,m-1)}_{\operatorname{gf}}-\alpha\nabla\mathcal L_n^{(t,m)}\Big(\wh{\theta}^{(t,m-1)}_{\operatorname{gf}}\Big) \mbox{ for } 2\leq m \leq M,\nonumber \end{split} \end{equation} where $\mathcal L_n^{(t,m)}(\theta)=n^{-1}\sum_{i\in\mathbb S^{(t,m)}}\ell(X_i,Y_i;\theta)$ is the loss function for the $m$-th mini-batch in the $t$-th epoch, and $\nabla\mathcal L_n^{(t,m)}(\theta)$ is the first-order derivative of $\mathcal L_n^{(t,m)}(\theta)$ with respect to $\theta$. Accordingly, define $\nabla^2\mathcal L_n^{(t,m)}(\theta)$ to be the second-order derivative of $\mathcal L_n^{(t,m)}(\theta)$ with respect to $\theta$.
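For a concrete instance of the updating formula above, the sketch below runs the FMGD-type updates with a fixed partition for a logistic regression loss on simulated data (dimensions, seed, and learning rate are hypothetical choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, M, alpha = 900, 4, 3, 0.5
n = N // M
X = rng.normal(size=(N, p))
theta_true = np.full(p, 0.1)
Y = rng.binomial(1, 1 / (1 + np.exp(-X @ theta_true)))

def grad(idx, theta):
    # Gradient of the average negative log-likelihood on one mini-batch.
    Xm, Ym = X[idx], Y[idx]
    mu = 1 / (1 + np.exp(-Xm @ theta))
    return Xm.T @ (mu - Ym) / len(idx)

batches = np.split(np.arange(N), M)   # fixed partition (FMGD); reshuffle per epoch for sFMGD
theta = np.zeros(p)
for t in range(200):                  # epochs
    for idx in batches:               # sequential mini-batch updates within an epoch
        theta = theta - alpha * grad(idx, theta)
print(theta)
```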
Recall that for the FMGD estimator, we should have $\mathbb S^{(t_1,m)}=\mathbb S^{(t_2,m)}$ for any $t_1 \neq t_2$; while for the sFMGD estimator, we usually have $\mathbb S^{(t_1,m)}\neq \mathbb S^{(t_2,m)}$ for any $t_1 \neq t_2$. Then the theoretical properties of the FMGD-type estimator under the general loss function are given by the following theorem. \begin{theorem} \label{Th5}{\sc (General Loss Function)}
Assume that the whole data $\{(X_i,Y_i):1\leq i \leq N\}$ are given and fixed. Assume that there exist two positive constants $0<\lambda_{\min}\leq \lambda_{\max}<+\infty$ such that $\lambda_{\min}\leq \lambda_{p}\big\{\nabla^2\mathcal L_n^{(t,m)}(\theta)\big\}\leq \lambda_{1}\big\{\nabla^2\mathcal L_n^{(t,m)}(\theta)\big\} \leq \lambda_{\max}$ for all $1\leq m \leq M$, $t\geq 1$ and $\theta \in \mathbb{R}^p$. Define $\nabla{\mathcal L}_{\max} = \max_{t,m}\| \nabla\mathcal L_n^{(t,m)}(\widehat{\theta})\|$ as the maximum norm of the local gradients evaluated at the global minimizer $\wh{\theta}$. Let $\alpha_t$ be the learning rate used in the $t$-th epoch. Then we have \begin{equation}
\begin{split}
\Big\|\wh{\theta}_{\operatorname{gf}}^{(t,M)} - \wh{\theta}\Big\|\leq \frac{\big\|\wh{\theta}^{(0)} - \wh{\theta}\big\|}{\exp\left\{M\lambda_{\min}\left(\sum^t_{k=1}\alpha_k\right)\right\}} + M\nabla{\mathcal L}_{\max} \left(\frac{\lambda_{\max}}{\lambda_{\min}}\right)\sum^t_{k=1} \frac{\alpha^{2}_k}{\left(\sum^{t}_{s=k+1}\alpha_s\right)}.\nonumber
\end{split} \end{equation}
Furthermore, if the learning rate sequence satisfies the following three conditions: (1) $\sum^\infty_{t=1}\alpha_t = \infty$, (2) $\sum^\infty_{t=1}\alpha^2_t < \infty$, and (3) $0<\alpha_1 < \lambda^{-1}_{\max}$, then we have $\|\wh{\theta}_{\operatorname{gf}}^{(t,M)} - \wh{\theta}\| \to 0$ as $t\to \infty$.
\end{theorem} \noindent The detailed proof of Theorem \ref{Th5} is given in Appendix A.8. It derives a numerical error bound for the difference between the FMGD-type estimator and the global estimator under a general loss function. We find that the results in Theorem \ref{Th5} are similar to those in Theorem \ref{Th3}. By Theorem \ref{Th5}, the initial point, mini-batch size, learning rate, and eigenvalues of $\nabla^2\mathcal L(\theta)$ can all affect the numerical error between the FMGD-type estimator and the global estimator. However, under the assumptions $\sum^\infty_{t=1}\alpha_t = \infty$ and $\sum^\infty_{t=1}\alpha^2_t < \infty$, the numerical error shrinks to 0 as $t\rightarrow \infty$. This implies that, as long as $t$ is sufficiently large, the resulting FMGD-type estimator should be very close to the global estimator.
\csection{NUMERICAL STUDIES}
Extensive numerical studies are presented in this section to demonstrate the finite sample performance of the FMGD-type methods on various types of models. These models are linear regression, logistic regression, Poisson regression, and a deep learning model (i.e., the convolutional neural network).
\csubsection{Linear Regression}
We start with the finite sample performance of the FMGD-type methods on linear regression models from two perspectives. First, we aim to understand how the learning rate $\alpha$ affects the statistical efficiency of the FMGD-type estimators. Second, we try to demonstrate the finite sample performance of the proposed learning rate scheduling strategy. Specifically, consider a standard linear regression model with $N=5,000$ and $p=50$. For $1 \leq i \leq N$, the predictor $X_i$ is generated from the multivariate normal distribution with mean zero and covariance matrix $\Sigma_{xx}=(\sigma_{j_1j_2})$, where $\sigma_{j_1j_2}=0.5^{|j_1-j_2|}$ for $1 \leq j_1, j_2 \leq p$. Fix $\theta=(1,1,\ldots,1)^{\top} \in \mathbb{R}^{p}$ and $\sigma^2_{\varepsilon}=1$. Then, the response vector can be generated according to \eqref{eq:ols}. Once the data are generated, the FMGD and sFMGD estimators can be computed. We also compute the SMGD estimator for comparison, in which each mini-batch is randomly generated from the whole dataset. Finally, the OLS estimator is computed as the optimal global estimator.
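The data-generating process just described can be sketched as follows (the random seed is arbitrary), together with the whole-sample OLS benchmark:

```python
import numpy as np

rng = np.random.default_rng(5)
N, p = 5000, 50
# AR(1)-type covariance: sigma_{j1 j2} = 0.5^{|j1 - j2|}.
j = np.arange(p)
Sigma = 0.5 ** np.abs(j[:, None] - j[None, :])
X = rng.normal(size=(N, p)) @ np.linalg.cholesky(Sigma).T   # rows ~ N(0, Sigma)
theta = np.ones(p)
Y = X @ theta + rng.normal(size=N)                          # sigma_eps^2 = 1

# Whole-sample OLS benchmark and its mean squared error.
theta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
mse = np.mean((theta_ols - theta) ** 2)
print(mse)
```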
{\sc Case 1. (Constant Learning Rates)} For all the mini-batch gradient descent (MGD) estimators (i.e., FMGD, sFMGD, and SMGD), we set the mini-batch size as $n=100$ and consider four different fixed learning rates ($\alpha=0.2,0.1,0.05,0.01$). The total number of epoch iterations is fixed as $T=100$. The statistical efficiencies of the resulting estimators are then evaluated by the mean squared error (MSE). For a reliable evaluation, the experiment is randomly replicated a total of $B=200$ times for each fixed learning rate. This leads to a total of $B=200$ MSE values for each estimator, which are then log-transformed and boxplotted in Figure \ref{simu_fixed}.
\begin{figure}
\caption{The boxplots of log(MSE) values of FMGD, sFMGD, SMGD, and OLS estimators under four different fixed learning rates.}
\label{simu_fixed}
\end{figure}
By Figure \ref{simu_fixed}, we can draw the following conclusions. First, when the learning rate is not small (e.g., $\alpha=0.1$ or 0.2), the FMGD estimator demonstrates much larger MSE values than the OLS estimator. This finding is consistent with the theoretical claims in Theorem \ref{Th2}. That is, with a larger fixed learning rate, the FMGD estimator cannot be statistically as efficient as the whole sample estimator. Second, as the learning rate decreases, the statistical efficiency of the FMGD estimator improves, with its MSE values becoming smaller and thus closer to those of the OLS estimator. These results verify our theoretical findings in Theorem \ref{Th2}. That is, the statistical efficiency of the FMGD estimator can be improved by letting $\alpha \rightarrow 0$. Next, comparing the FMGD estimator with the sFMGD estimator, we find that the sFMGD estimator achieves estimation performance similar to that of the FMGD estimator under the same fixed learning rate. Finally, we find that both the FMGD and sFMGD estimators perform considerably better than the SMGD estimator. When the learning rate is small enough (i.e., $\alpha=0.01$), the performances of both the FMGD and sFMGD estimators can be very close to that of the OLS estimator.
{\sc Case 2. (Learning Rate Scheduling)} We next study the proposed learning rate scheduling strategy. We fix the mini-batch size as $n=100$. Define the scheduled learning rate sequence as $\alpha_t=0.2t^{-\gamma}$ with $\gamma$ varying from 0.1 to 1. Different $\gamma$ specifications lead to different scheduling strategies. According to Theorem \ref{Th3}, we know that $0.5<\gamma\leq 1$ should result in statistically efficient FMGD estimators. For each fixed $\gamma$, we evaluate this scheduling strategy for a total of $T=100$ epoch iterations. Then the statistical efficiencies of the FMGD, sFMGD, and SMGD estimators are evaluated in terms of MSE. The global OLS estimator is also evaluated for comparison purposes. The experiment is randomly replicated $B = 200$ times for each fixed $\gamma$. This leads to a total of $B=200$ MSE values for each $\gamma$ specification. They are then log-transformed, averaged, and displayed in Figure \ref{simu_shuffle}.
\begin{figure}
\caption{The averaged log(MSE) values of FMGD, sFMGD, SMGD, and OLS estimators using the learning rate scheduling strategy $\alpha_t=0.2t^{-\gamma}$ with $\gamma$ ranging from 0.1 to 1.}
\label{simu_shuffle}
\end{figure}
By Figure \ref{simu_shuffle}, we can draw the following conclusions. First, for the given learning rate scheduling strategy, the FMGD and sFMGD estimators perform similarly. They both demonstrate better performance (i.e., lower MSE values) than the SMGD estimator. Second, when the diminishing rate $\gamma$ is appropriately set (i.e., $0.5<\gamma\leq 1$), both the FMGD and sFMGD estimators can achieve MSE values similar to those of the OLS estimator. However, if $\gamma$ is inappropriately specified (i.e., $\gamma\leq0.5$), then all MGD estimators perform much worse than the OLS estimator. These empirical findings corroborate our theoretical claims in Theorem \ref{Th3} very well.
Lastly, we focus on the numerical convergence rates of the different methods. For illustration purposes, we consider two examples: (1) the fixed learning rate with $\alpha=0.1$ and (2) the diminishing learning rate scheduling strategy $\alpha_t=0.2t^{-\gamma}$ with $\gamma=0.6$. In each example, define $\widehat{\theta}^{(t)}$ to be the estimate obtained by the MGD-type method (i.e., FMGD, sFMGD, and SMGD) in the $t$-th iteration. Define $\widehat{\theta}_{\text{ols}}$ to be the OLS estimate. We then investigate the convergence performance of the different methods from two perspectives. The first perspective is the \emph{numerical error}, in which we compare the iteration-wise estimate with the global OLS estimate by $\|\widehat{\theta}^{(t)}-\widehat{\theta}_{\text{ols}}\|$. The second perspective is the \emph{estimation error}, in which we compare the iteration-wise estimate with the true parameter by $\|\widehat{\theta}^{(t)}-\theta\|$. The experiment is randomly repeated a total of $B=200$ times, and the numerical and estimation errors are averaged over the 200 replications. Detailed results are given in Figure \ref{simu_conver} in log-scale. We find that both FMGD and sFMGD perform much better than SMGD in terms of both the numerical error and the estimation error.
\begin{figure}
\caption{The numerical and estimation errors obtained by FMGD, sFMGD, and SMGD, respectively. In the fixed learning rate example, we set $\alpha=0.1$. In the diminishing learning rate example, we set $\alpha_t=0.2t^{-\gamma}$ with $\gamma=0.6$.}
\label{simu_conver}
\end{figure}
\csubsection{General Loss Functions}
We next demonstrate the finite sample performances of the FMGD-type algorithms on various general loss functions. The experiments are designed similarly to those in Section 4.1. The only difference is that the loss function is replaced by more general ones. Specifically, we use twice the negative log-likelihood as the general loss function in the following two examples.
\begin{enumerate}[{[1]}]
\item {\sc (Logistic Regression)} Logistic regression has been widely used to model binary responses. In this example, we consider the whole sample size $N=5,000$ and the predictor dimension $p=50$. We first follow Section 4.1 to generate the predictors $X_i$ with $1\leq i \leq N$. Then fix $\theta=(0.1,0.1,...,0.1)^{\top}\in \mathbb{R}^{p}$. The response $Y_i$ is then generated from a Bernoulli distribution with probability $P(Y_i=1|X_i,\theta)=\exp(X_i^{\top}\theta)/\{1+\exp(X_i^{\top}\theta)\}$.
\item {\sc (Poisson Regression)} Poisson regression is often used to model count responses. In this example, we also assume $N=5,000$ and $p=50$. The predictor $X_i$ is generated as in Section 4.1. Let $\theta=(0.02,0.02,...,0.02)^{\top}\in \mathbb{R}^{p}$. Then, the response $Y_i$ is generated from a Poisson distribution given by $P(Y_i|X_i,\theta)=\lambda_i^{Y_i}\exp(-\lambda_i)/Y_i!$, where $\lambda_i=\exp(X_i^{\top}\theta)$. \end{enumerate}
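The two data-generating mechanisms above can be sketched as follows. This is an illustrative reconstruction: the predictor distribution of Section 4.1 is not reproduced here, and standard normal predictors are used as a stand-in assumption.

```python
import numpy as np

# Standard normal predictors are a stand-in assumption for the Section 4.1 design.
rng = np.random.default_rng(42)
N, p = 5000, 50
X = rng.standard_normal((N, p))

# [1] logistic regression: P(Y=1|X) = exp(X'theta) / {1 + exp(X'theta)}
theta_logit = np.full(p, 0.1)
prob = 1.0 / (1.0 + np.exp(-X @ theta_logit))
Y_binary = rng.binomial(1, prob)

# [2] Poisson regression: Y | X ~ Poisson(lambda) with lambda = exp(X'theta)
theta_pois = np.full(p, 0.02)
lam = np.exp(X @ theta_pois)
Y_count = rng.poisson(lam)
```

With these coefficient values the Bernoulli probabilities are centered at one half and the Poisson rates are close to one, so both responses are well balanced.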
Once the data are generated, the various MGD estimators (i.e., FMGD, sFMGD, and SMGD) are computed. The whole sample maximum likelihood estimator (MLE) is also computed for comparison purposes. For the MGD methods, various learning rate scheduling strategies are applied, and the experiments are conducted in a similar way as in Section 4.1. The detailed results are given in Figure \ref{simu_general}. We find the conclusions are qualitatively similar to those in Section 4.1. Specifically, for both the logistic regression and Poisson regression models, the FMGD and sFMGD estimators show clear advantages over the SMGD estimator. Moreover, when the diminishing rate $\gamma$ is appropriately chosen, the FMGD and sFMGD estimators can be statistically as efficient as the whole sample optimal estimator (i.e., the MLE). The outstanding numerical performance of the sFMGD estimator is expected, because it is extremely similar to the FMGD estimator; the only difference is that the shuffling operation is conducted in every epoch for sFMGD. The outstanding performances of the FMGD and sFMGD estimators are also consistent with the theoretical claims in Theorem \ref{Th5}.
\begin{figure}
\caption{The averaged log(MSE) values of the FMGD, sFMGD, SMGD, and MLE estimators under the logistic regression model and the Poisson regression model. For the MGD methods, the diminishing learning rate scheduling strategy $\alpha_t=0.2t^{-\gamma}$ is used with $\gamma$ ranging from 0.1 to 1.}
\label{simu_general}
\end{figure}
\csubsection{Sampling Cost}
In this subsection, we focus on the time costs of the FMGD and sFMGD methods. As demonstrated in the previous subsections, both methods are very competitive in terms of statistical efficiency. In particular, with an appropriately designed learning rate scheduling strategy, both the FMGD and sFMGD estimators can be statistically as efficient as the global estimator. It is then natural to ask how the two estimators differ. We demonstrate in this subsection that there indeed exists a critical difference, namely the time cost.
Note that the FMGD-type methods (both FMGD and sFMGD) are designed for datasets of massive sizes. In this case, the data have to be placed on the hard drive instead of in memory. If the sFMGD method is applied, then the sample indices in each mini-batch have to be randomly shuffled in every epoch. As a consequence, the samples in each mini-batch have to be read from random positions on the hard drive from epoch to epoch. This leads to a significant time cost for hard drive addressing. In contrast, if the FMGD method is applied, then the whole sample only needs to be shuffled once. According to the pre-specified mini-batch size, the whole sample can then be split sequentially into $M$ non-overlapping mini-batches, and each mini-batch is formed into one packaged data file. By doing so, we pay two prices. The first price is the time cost of data packaging. Our numerical experiments suggest that this cost amounts to no more than two epochs of standard sFMGD updating. Therefore, the time cost due to data packaging is negligible. The second price is the storage cost on the hard drive. By data packaging, we effectively double the data size on the hard drive. However, given the outstanding storage capacity of modern hard drives, we find this cost practically very acceptable. Once the mini-batches are packaged, they are placed sequentially on the hard drive. Subsequently, they can be read into memory at a much faster speed than in the sFMGD method. To demonstrate the computational advantage of the FMGD method over the sFMGD method, we conduct the following experiments.
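The two data-access patterns described above can be illustrated schematically. The sketch below is an abstraction over index sets; a real implementation would read packaged binary files from disk rather than slice index arrays.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 100
M = N // n

# FMGD: shuffle once, split sequentially into M fixed mini-batches ("packaged"
# files), then read them in the same order every epoch -> sequential disk access.
perm = rng.permutation(N)
fixed_batches = [perm[m * n:(m + 1) * n] for m in range(M)]

def fmgd_epoch():
    return fixed_batches                      # identical from epoch to epoch

# sFMGD: re-shuffle the indices in every epoch -> each mini-batch gathers samples
# from random positions, i.e. random disk access from epoch to epoch.
def sfmgd_epoch():
    p = rng.permutation(N)
    return [p[m * n:(m + 1) * n] for m in range(M)]

same = all(np.array_equal(a, b) for a, b in zip(fmgd_epoch(), fmgd_epoch()))
diff = any(not np.array_equal(a, b) for a, b in zip(sfmgd_epoch(), sfmgd_epoch()))
print(same, diff)
```

Both schemes cover every sample exactly once per epoch; they differ only in whether the mini-batch composition is fixed (sequential reads) or re-drawn (random reads), which is the source of the timing gap reported in Figure \ref{simu_time}.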
We generate the dataset similarly as in Section 4.1, but with a larger sample size $N$. Specifically, we consider $N=\kappa\times10^4$ with $\kappa$ varying from 1 to 10. The generated data are then placed on the hard drive in CSV format. Fix the mini-batch size as $n=100$. We then compute the FMGD and sFMGD estimators for a linear regression model in a similar way as in Section 4.1. The total time costs consumed by the FMGD and sFMGD methods under different sample sizes are averaged over $B=20$ random replications. The averaged time costs per epoch are plotted in Figure \ref{simu_time} in log-scale. As shown in Figure \ref{simu_time}, the time costs of the FMGD method are much smaller than those of the sFMGD method. In addition, as the whole sample size increases, the time costs of both methods increase. However, the time cost of the sFMGD method increases at a much faster speed than that of the FMGD method (recall that the difference in time cost reported in Figure \ref{simu_time} is in log-scale). These empirical findings confirm that the FMGD method is computationally more efficient than the sFMGD method.
\begin{figure}
\caption{The time costs (in log-scale) per epoch consumed by FMGD and sFMGD methods for different sample sizes $N$.}
\label{simu_time}
\end{figure}
\csubsection{Deep Neural Networks}\label{DL}
We next demonstrate the performance of the FMGD-type methods on a deep neural network model. Compared with the models studied in the previous subsections, deep neural networks are considerably more sophisticated. Specifically, we consider a convolutional neural network (CNN) for image data analysis. We study in this example the famous CatDog dataset, which was first released by Kaggle in 2013. The dataset contains 15,000 training images and 10,000 validation images. The images belong to two classes (i.e., cats and dogs). The objective here is automatic image classification (cats vs. dogs). To solve this problem, a simple convolutional neural network is designed in Figure \ref{cnn}. \begin{figure}
\caption{The structure of designed CNN model.}
\label{cnn}
\end{figure}
This is a standard CNN model. Specifically, the input is a fixed-size $128\times128$ RGB image. The image (i.e., a $128\times128\times 3$ tensor) is passed through three batch-normalization--convolution--maxpooling blocks. In the first block, we utilize $11\times 11$ convolution filters with ``same'' padding for preliminary feature extraction. We then downsample the features with maxpooling of stride 3. In the second block, we change the size of the convolution filters to $5\times 5$ and keep the stride equal to 3 in the maxpooling layer. In the third block, we use two consecutive convolutional layers with $3\times 3$ convolution filters, of which there are $64$ and $128$, respectively. They are then followed by a maxpooling layer with stride 2. Finally, these feature extraction blocks are followed by three fully-connected layers with 256, 64, and 2 channels, respectively. The last fully-connected layer is combined with a softmax activation function, which links the feature map to the output probability. This leads to a CNN model with 16 layers and a total of 965,800 parameters.
Various MGD algorithms (i.e., FMGD, sFMGD, and SMGD) are used to train this model. For each MGD algorithm, we set the mini-batch size as $n = 100$ and run a total of $T = 100$ epochs. The resulting training losses and validation accuracies are monitored and averaged over $B=20$ randomly replicated experiments. The averaged training loss curves and validation accuracy curves are given in Figure \ref{simu_catdog(time)}. From the left panel of Figure \ref{simu_catdog(time)}, we find that the FMGD method demonstrates the fastest loss reduction. As a consequence, it achieves the fastest accuracy improvement on the validation dataset; see the right panel of Figure \ref{simu_catdog(time)}. Comparatively, the loss reduction of sFMGD is slower than that of FMGD, and consequently its prediction accuracy does not improve as fast. Nevertheless, the final prediction accuracy of sFMGD is slightly better than that of FMGD (i.e., 91.41\% for sFMGD vs. 91.13\% for FMGD). The performance of SMGD is the worst from all perspectives.
\begin{figure}
\caption{The left panel shows the averaged training loss curves of FMGD, sFMGD and SMGD algorithms. The right panel shows the averaged validation accuracy curves of these three MGD algorithms. The horizontal black dashed line denotes the best validation accuracy that can be achieved by the FMGD algorithm.}
\label{simu_catdog(time)}
\end{figure}
\csection{CONCLUDING REMARKS}
We study in this work a fixed mini-batch gradient descent (FMGD) algorithm. The key difference between FMGD and SMGD is how the mini-batches are generated. For FMGD, the whole sample is split into multiple non-overlapping partitions (i.e., mini-batches). Once the mini-batches are formed, they are fixed and repeatedly used throughout the rest of the algorithm. Consequently, the mini-batches used by FMGD are dependent on each other. For SMGD, in contrast, the mini-batches are independent of each other, either marginally or conditionally. This difference allows FMGD to enjoy faster numerical convergence and better statistical efficiency.
To conclude this article, we discuss a number of interesting directions for future research. First, we studied the theoretical properties of the FMGD-type estimators under a fixed-$p$ setup. Modern statistical analyses, however, often encounter problems with ultrahigh dimensional features. It is then of great interest to investigate the theoretical properties of the FMGD-type estimators with diverging $p$. Second, many MGD variants (e.g., momentum, Adagrad, RMSprop, Adam) are widely used in practice. However, their theoretical properties under the fixed mini-batch setup remain, to the best of our knowledge, unknown. This is another interesting topic for future study.
\section*{Conflict of Interest} The authors report there are no competing interests to declare.
\end{document} |
\begin{document}
\title{A new segmentation method for the homogenisation of GNSS-derived IWV time-series}
\begin{abstract} Homogenization is an important and crucial step to improve the usage of observational data for climate analysis. This work is motivated by the analysis of long series of GNSS Integrated Water Vapour (IWV) data, which have not yet been used in this context. This paper proposes a novel segmentation method that integrates a periodic bias and a heterogeneous, monthly varying, variance. The method consists in first estimating the variance using a robust estimator and then estimating the segmentation and the periodic bias iteratively. This strategy allows the use of the dynamic programming algorithm, which remains the most efficient exact algorithm for estimating the change-point positions. The statistical performance of the method is assessed through numerical experiments. An application to a real data set of 120 global GNSS stations is presented. The method is implemented in the R package GNSSseg, which will be available on the CRAN. \end{abstract}
keywords: Change-point detection; Dynamic programming; Homogenization climate series; GNSS IWV series
\section {Introduction} \label{sec:intro}
Long records of observational data are essential for monitoring climate change and understanding the underlying climate processes. However, long time series are often affected by inhomogeneities due to changes in instrumentation, in station location, in observation and processing methods, and/or in the measurement conditions around the station \citep{Jones1986}. Inhomogeneities most often take the form of abrupt changes, which are detrimental to estimating trends and multi-scale climate variability \citep{Easterling1995}. Various homogenization methods have been developed for the detection and correction of such change-points in the context of climate data analysis, e.g. \citet{Peterson1998, CaussinusMestre2004, Menne2005, Szentimrey2008, Reeves2007, Costa2009, Venema2012}. In this paper, we are interested in ground-based Global Navigation Satellite System (GNSS) integrated water vapour (IWV) measurements. GNSS provides among the most accurate and continuous IWV measurements, in all weather conditions, but these data have not yet been used much for climate analysis \citep{Bevis1992, Bock2013, VanMalderen2014, NingUncertainty2016}.
In order to remove the climate signal and reveal the inhomogeneities in the GNSS measurements, a common approach is to compare the candidate series with a well correlated reference series. The reference series can be taken from nearby stations (i.e. observing a similar climate signal) as proposed by, e.g., \citet{CaussinusMestre2004}, \citet{Menne2005}, or \citet{Szentimrey2008}, or from climate model data \citep{ning2016,Bock2018}. Since the number of stations from the International GNSS Service (IGS) is limited to a hundred or so, the construction of reference series from neighboring stations is difficult. The second approach is considered here, using the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis ERA-Interim \citep{Dee2011} as a reference. Figure \ref{fig:ccjm}(a) shows an example of daily IWV data from GNSS measurements and from the ERA-Interim (ERAI) reanalysis at station CCJM. The daily IWV data exhibit a marked seasonal variation, with values varying from 10 $\mathrm{kg\,m^{-2}}$ to 60 $\mathrm{kg\,m^{-2}}$ between winter and summer, as well as a strong day-to-day variability appearing as superposed noise. When the ERA-Interim data are subtracted from the GNSS data, one clear jump can be seen on 24 Feb 2001 (Figure \ref{fig:ccjm}(b)). This jump coincides with a change of receiver and antenna at this station.
\begin{center} \begin{figure}
\caption{Station CCJM: (a) GNSS (in black) and ERA-Interim (in red) IWV time series; (b) IWV difference (GPS - ERA-Interim) series; (c) estimated monthly variance; (d) obtained change-points with the method proposed by \cite{Bock2018}. }
\label{fig:ccjm}
\end{figure} \end{center}
In a previous work, \citet{Bock2018} proposed a first segmentation method to detect abrupt changes in the mean of such data (GNSS - ERAI IWV differences). A specific feature of their method is that it accounts for a heterogeneous variance that is assumed to vary on a monthly basis. Indeed, as can be seen in Figure \ref{fig:ccjm}(c), the GNSS - ERAI IWV differences show a seasonal variation with an increased variability in summer. Thus classical segmentation models with homogeneous or segment-specific variance are not adapted. The result of their model is given in Figure \ref{fig:ccjm}(d). The previously mentioned jump is well detected. However, as already mentioned in \cite{Bock2018}, even though the ERAI data are subtracted, it can happen that not all the climate signal is removed, due to representativeness differences between the reanalysis and the GNSS observations \citep{Bock2019}. This residual signal exhibits a strong seasonal variation which can lead to wrong, misplaced, or missing change-points.
This paper describes an improved method which accounts for the seasonal variation in the signal by adding a functional term to the earlier model used by \citet{Bock2018}. To infer the parameters of this enhanced model, a penalized maximum-likelihood procedure is again used. In this framework, it is well known that segmentation methods have to deal with two problems: (i) the inherent algorithmic complexity of estimating the change-point locations and (ii) an appropriate choice of the penalty term which controls the number of change-points. Indeed, for problem (i), the inference of the discrete change-points requires a search over the whole segmentation space, which is huge. Such a search is prohibitive in terms of computational time when performed in a naive way. The Dynamic Programming (DP) algorithm \citep{Auger1989} and its recent pruned versions \citep{Killick2012, Rigaill2015, Maidstone2017} are the only algorithms that retrieve the exact solution in a fast way. However, a necessary condition for using DP is that the quantity to be optimized is additive with respect to the segments \citep{BaiPerron2003, CaussinusMestre2004, Picard2005}. Here, due to the presence of the monthly variance and the functional part, this condition is not satisfied. To circumvent this, \citet{LiLund2012} and \citet{Lu2010} proposed to use a genetic algorithm. However, this algorithm leads to a suboptimal solution.
Our objective here is to retain the benefit of the exact DP algorithm as much as possible. To enable its use in the inference procedure, we propose to (1) first estimate the variance using an estimator that is robust to the change-points, as in \citet{Bock2018}, and (2) treat sequentially the estimation of the segmentation parameters and of the functional, as in \citet{Bertin2017}. For the choice of the number of segments, different penalties have been proposed in the literature (see \citet{Lebarbier2005, Lavielle2005, Zhang2007, Lu2010, CaussinusMestre2004}). Here we consider several of them.
The article is organized as follows. Section \ref{sec:method} presents the model and the inference procedure. A simulation study is performed in Section \ref{sec:sim} to evaluate the performance of the method. In Section \ref{sec:real} the method is applied to real data from a set of 120 global GNSS stations. Section \ref{sec:conclusion} discusses the results and concludes.
\section{Model and inference}\label{sec:method}
\subsection{Model} We consider the model proposed by \citet{Bock2018} in which we add a functional part in order to take into account the periodic bias. Let $\textbf{y}=\{y_t\}_{1,\ldots,n}$ be the observed series with length $n$ that is modeled by a Gaussian independent random process $\textbf{Y}=\{Y_t\}_{t=1,\ldots,n}$ such that \begin{itemize} \item the mean of $\textbf{Y}$ is composed of two terms: \begin{itemize}
\item[$\star$] a piecewise constant function equal to $\mu_k$ on the interval $I_k^{\text{mean}}= \llbracket t_{k-1}+1, t_{k} \rrbracket$ with length $n_k=t_{k}-t_{k-1}$ where $0=t_0<t_1<\ldots<t_{K-1}<t_K=n$. The $\{t_{k}\}_{k=1,\ldots,K-1}$ are the times of the change-points and $K$ is the number of intervals or segments,
\item[$\star$] and a function $f$; \end{itemize} \item the variance of $\textbf{Y}$ is month-dependent, i.e. it is constant within the interval $I_{\text{month}}^{\text{var}}=\{t; \text{date}(t) \in \text{month}\}$ with length $n_{\text{month}}$ where $\text{date}(t)$ stands for the date at the time $t$. \end{itemize} The resulting model is thus the following \begin{equation} \label{sec:method:eq:m} Y_t =\mu_k+f_t+ E_t, \ \ \text{where $E_t\sim \mathcal{N}(0,\sigma_{\text{month}}^2)$ if $t \in I_{k}^{\text{mean}} \cap I^{\text{var}}_{\text{month}}$}, \end{equation}
for $k=1,\ldots,K$. The intervals $\{I_k^{\text{mean}}\}_k$ are unknown contrary to the intervals $\{I_{\text{month}}^{\text{var}}\}_{\text{month}}$. The parameters to be estimated are the number of segments $K$ (or the number of change-points $K-1$), the change-points $\tbf=\{t_k\}_k$ and the distribution parameters, the means $\mubf=\{\mu_k\}_k$, the variances ${\bf{\sigma}^2}=\{\sigma^2_{\text{month}}\}_{\text{month}}$ and the function $f$.
\subsection{Inference} \label{sec:method:subsec:inference}
As usual in segmentation, the inference is performed in two steps (e.g. \citet{Truong2019}): \begin{description} \item[Step 1] Estimate $\tbf$, $\mubf$, ${\bf{\sigma}^2}$ and $f$, $K$ being fixed. \item[Step 2] Choose the number of segments $K$. \end{description} We consider here a penalized maximum likelihood approach. The $\log$-likelihood of the model defined by \eqref{sec:method:eq:m} is \begin{equation} \label{eq:loglik_m} \footnotesize \log \ p(\ybf; K, \tbf, \mubf, {\bf{\sigma}^2},f)= - \frac{n}{2} \log{(2 \pi)} - \sum_{\text{month}} \frac{n_{\text{month}} }{2} \log{(\sigma^2_{\text{month}})}-\frac{1}{2} \sum_{k=1}^K \sum_{\text{month}} \sum_{t \in I_k^{\text{mean}} \cap I^{\text{var}}_\text{month}} \frac {(y_t-\mu_k-f_t)^2}{\sigma^2_{\text{month}}} \end{equation}
\subsubsection{Step 1: Inference of $\tbf$, $\mubf$, ${\bf{\sigma}^2}$ and $f$, $K$ being fixed} \label{sec:method:subsec:inference:subsub:inference_K}
The use of the DP algorithm is now classical for estimating the change-points. However, DP can be applied if and only if the quantity to be optimized is additive with respect to the segments. Here the presence of the ``global'' parameters $\sigma^2_{\text{month}}$ and $f$ links the segments, so the required condition is not satisfied. To circumvent this problem, a two-step procedure is proposed: (1) we estimate the variances using a robust estimator as in \cite{Chakar2017} and \cite{Bock2018}, and (2) we estimate iteratively $f$ and the segmentation parameters (i.e. the change-points and the means) using DP, as in \cite{gazeaux2015joint} and \cite{Bertin2017}.
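For concreteness, the DP recursion used in step (2) can be sketched as follows. This is a minimal $O(Kn^2)$ implementation of the weighted least-squares criterion (no pruning), written under the assumption that the weights $1/\widehat{\sigma}^2_{\text{month}}$ and the function $f$ have already been fixed, so that the criterion is additive over segments.

```python
import numpy as np

def dp_segmentation(z, w, K):
    """Exact DP: split z[0..n-1] into K segments minimizing the weighted SSE.

    z: corrected series (here y_t - f_t); w: weights 1/sigma^2.  Returns the
    change-points (1-based last index of segments 1..K-1) and the optimal cost."""
    n = len(z)
    cw = np.concatenate([[0.0], np.cumsum(w)])
    cwz = np.concatenate([[0.0], np.cumsum(w * z)])
    cwz2 = np.concatenate([[0.0], np.cumsum(w * z * z)])

    def cost(i, j):  # weighted SSE on z[i..j]; the weighted mean is optimal
        sw, swz, swz2 = cw[j + 1] - cw[i], cwz[j + 1] - cwz[i], cwz2[j + 1] - cwz2[i]
        return swz2 - swz * swz / sw

    D = np.full((K + 1, n), np.inf)   # D[k][j]: best cost for z[0..j] in k segments
    back = np.zeros((K + 1, n), dtype=int)
    for j in range(n):
        D[1][j] = cost(0, j)
    for k in range(2, K + 1):
        for j in range(k - 1, n):     # need at least k points for k segments
            cands = [D[k - 1][t] + cost(t + 1, j) for t in range(k - 2, j)]
            best = int(np.argmin(cands))
            D[k][j] = cands[best]
            back[k][j] = best + (k - 2)
    cps, j = [], n - 1                # backtrack the change-point positions
    for k in range(K, 1, -1):
        j = back[k][j]
        cps.append(j + 1)
    return sorted(cps), float(D[K][n - 1])
```

On a noiseless two-level signal, `dp_segmentation(z, np.ones(n), 2)` recovers the single change-point exactly; in the full procedure the minimized quantity is the double sum of Section \ref{sec:method:subsec:inference:subsub:inference_K} with $z_t = y_t - f_t^{[h+1]}$.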
The resulting algorithm is the following: \begin{description}
\item[Estimation of $\sigma^2_{\text{month}}$] \citet{Bock2018} proposed a consistent estimator for the variance parameter based on the robust one proposed by \citet{Rousseeuw1993}. The key idea is to apply this robust estimator (up to a constant) to the differentiated series $y_t-y_{t-1}$. This series is centered except at the change-point positions, which are treated as outliers. We use this estimator again, even in the presence of the function $f$, because the latter does not have much impact on the resulting estimation (in the application, the seasonal signal is slowly varying and is almost completely cancelled in the differentiated series). The estimated variance is denoted $\widehat{\sigma}_{\text{month}}^{2}$.
\item[Estimation of $f$ and both $\tbf$ and $\mubf$ iteratively] by minimizing the negative $\log$-likelihood given in \eqref{eq:loglik_m}. At iteration $[h+1]$: \begin{itemize} \item[$(a)$] The estimator of $f$ results in a weighted least-squares estimator with weights $1/\widehat{\sigma}_{\text{month}}^{2}$ on $\{y_t -{\mu}_k^{[h]}\}_t$. For our application and following \cite{Weatherhead1998}, we represent $f$ as a Fourier series of order $4$ accounting for annual, semi-annual, terannual, and quarterly periodicities in the signal: \begin{equation*} f_t = \sum_{i=1}^{4} a_{i} \cos( w_i t) + b_{i} \sin( w_i t), \end{equation*} where $w_i= 2\pi \frac{i}{L}$ is the angular frequency of period $L/i$ and $L$ is the mean length of the year ($L=365.25$ days when time $t$ is expressed in days). The estimated function is denoted $f^{[h+1]}$.
\item[$(b)$] The segmentation parameters are estimated based on $\{y_t -{f}^{[h+1]}_t\}_t$. We get \begin{equation*} {\mu}^{[h+1]}_k = \frac{\sum_{\text{month}} \sum_{t \in I_k^{\text{mean}} \cap I^{\text{var}}_{\text{month}}} \frac{({y}_t-{f}^{[h+1]}_t)}{\widehat{\sigma}_{\text{month}}^{2}}} { \sum_{\text{month}} \sum_{t \in I_k^{\text{mean}} \cap I^{\text{var}}_{\text{month}}} \frac{1}{\widehat{\sigma}_{\text{month}}^{2}}}, \end{equation*} and \begin{equation*} {\tbf}^{[h+1]} = \operatornamewithlimits{argmin}_{\tbf \in \mathcal{M}_{K,n}} \sum_{k=1}^K \sum_{\text{month}} \sum_{t \in I_k^{\text{mean}} \cap I^{\text{var}}_{\text{month}}} \frac{({y}_t-{f}^{[h+1]}_t-{\mu}^{[h+1]}_k)^2}{\widehat{\sigma}_{\text{month}}^{2}}, \end{equation*} where $\mathcal{M}_{K,n}=\{(t_1,\ldots,t_{K-1}) \in \mathbb{N}^{K-1}, 0=t_0<t_1<\ldots,t_{K-1}<t_K=n\}$ is the set of all the possible partitions of the grid $\llbracket 1, n \rrbracket$ in $K$ segments. This minimization is obtained using DP. \end{itemize} The final estimators are denoted $\widehat{f}$, $\widehat{\tbf}$ and $\widehat{\mubf}$.
\end{description}
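Step $(a)$ above can be sketched as a weighted least-squares fit on the Fourier design matrix. Inputs are assumed available from the previous iteration: `t` are the observation times in days, `r` is the mean-corrected series $\{y_t-\mu_k^{[h]}\}_t$, and `w` holds the weights $1/\widehat{\sigma}_{\text{month}}^{2}$.

```python
import numpy as np

def fit_fourier(t, r, w, L=365.25, order=4):
    """Weighted LS fit of an order-`order` Fourier series to r with weights w."""
    cols = []
    for i in range(1, order + 1):
        wi = 2.0 * np.pi * i / L          # angular frequency of period L/i
        cols += [np.cos(wi * t), np.sin(wi * t)]
    F = np.column_stack(cols)             # n x (2 * order) design matrix
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(F * sw[:, None], r * sw, rcond=None)
    return F @ coef                       # fitted f_t on the observation times
```

Reweighting both the design matrix and the response by $\sqrt{w}$ turns the weighted problem into an ordinary least-squares problem, which is the standard reduction.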
\subsubsection{Choice of $K$} \label{sec:method:subsec:inference:subsub:ModelSelection} Various criteria have been theoretically developed for the choice of $K$ in segmentation with a homogeneous (known or unknown) variance. However, no criterion exists for the case of a heterogeneous variance on fixed intervals. Since in our estimation procedure the variances are estimated first, our segmentation problem can be seen as one in which the variance is known. We thus propose to use the least-squares-based criterion: \begin{equation} \text{SSR}_K (\widehat{\tbf}, \widehat{\mubf}, {\widehat{\bf{\sigma}}}^2,\widehat{f})=\sum_{k=1}^K \sum_{\text{month}} \sum_{t \in \widehat{I}^{\text{mean}}_k \cap I^{\text{var}}_{\text{month}}} \frac{({y}_t-\widehat{f}_t-\widehat{\mu}_k)^2}{\widehat{\sigma}_{\text{month}}^{2}}. \end{equation} Different penalties are considered and tested in this paper: \begin{description} \item[Lav] proposed by \cite{Lavielle2005}: \begin{equation*} \widehat{K}=\operatornamewithlimits{argmin}_{K} \text{SSR}_K (\widehat{\tbf}, \widehat{\mubf}, {\widehat{\bf{\sigma}}}^2,\widehat{f})+\beta K, \end{equation*} where $\beta$ is the penalty constant, chosen using an adaptive method. The method involves a threshold $S$, which is fixed to $S=0.75$ both in the simulation study and in the applications, as suggested by \citet{Lavielle2005}. \item[BM] proposed by \cite{BM2001} and applied in a segmentation context by \cite{Lebarbier2005}: \begin{equation*} \widehat{K}=\operatornamewithlimits{argmin}_{K} \text{SSR}_K (\widehat{\tbf}, \widehat{\mubf}, {\widehat{\bf{\sigma}}}^2,\widehat{f})+\alpha K \left[ 5 +2 \log \left (\frac{n}{K} \right) \right], \end{equation*} where the penalty constant $\alpha$ can be calibrated using the slope heuristic proposed by \cite{Arlot2009}. Two calibration methods are actually proposed: the ``dimension jump'' and the ``data-driven slope estimation'', referred to as BM1 and BM2, respectively, hereafter.
\item[mBIC] the modified version of the classical BIC criterion derived in the segmentation framework by \cite{Zhang2007}: \begin{equation*} \widehat{K}=\operatornamewithlimits{argmax}_{K} - \frac{1}{2} \text{SSR}_K (\widehat{\tbf}, \widehat{\mubf}, {\widehat{\bf{\sigma}}}^2,\widehat{f}) -\frac{1}{2} \sum_{k=1}^K \log {(\widehat{t}_k - \widehat{t}_{k-1})} +\left (\frac{1}{2}-K \right ) \log{(n)}. \end{equation*} In the specific climate context, some authors, such as \citet{LiLund2012} and \citet{Lu2010}, use an MDL-based criterion \citep{Rissanen78}. \citet{ardia2019frequentist} show that the MDL criterion can be seen as a Bayesian criterion with appropriate prior distributions for change-point models. As a consequence, the resulting MDL-based penalties (see \citet{LiLund2012,Lu2010}) resemble the mBIC (both penalties integrate a term depending on the segment lengths of the segmentation). \end{description}
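As an illustration of the model selection step, the mBIC rule above can be sketched as follows, under the assumption that the per-$K$ fits ($\text{SSR}_K$ and the estimated change-points) have already been computed by Step 1.

```python
import numpy as np

def mbic_choose_K(ssr, cps, n):
    """mBIC of Zhang & Siegmund (2007), as written in the text.

    ssr: dict K -> SSR_K; cps: dict K -> [t_1, ..., t_{K-1}]; n: series length.
    Returns the K maximizing the criterion."""
    best_K, best_val = None, -np.inf
    for K in ssr:
        t = [0] + list(cps[K]) + [n]                     # segment boundaries
        seg_len_term = 0.5 * sum(np.log(t[k] - t[k - 1]) for k in range(1, K + 1))
        val = -0.5 * ssr[K] - seg_len_term + (0.5 - K) * np.log(n)
        if val > best_val:
            best_K, best_val = K, val
    return best_K
```

The segment-length term penalizes very short segments, which is the feature shared with the MDL-based penalties mentioned above.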
\subsubsection{Different choices for our procedure} \label{sec:implementation}
The proposed inference procedure is summarized in Figure \ref{fig:algorithm} given in the Supplemental Material. The method is implemented in an R package named \texttt{GNSSseg}, which is available on the CRAN. \\
In practice, Step 1 of the inference (Section \ref{sec:method:subsec:inference:subsub:inference_K}) is performed for $K=1,\ldots, K_{\text {max}}$ where $K_{\text{max}}$ should be 2 or 3 times larger than the expected number of change-points. For both the simulations and the applications, we used $K_{\text {max}}=30$.
The iterative procedure needs a proper initialization and a stopping rule. For the initialization, the function $f$ is estimated first, using an unweighted least-squares criterion. For the stopping rule, the changes of $f_t$ and $\mu_k$ between two successive iterations are checked against a fixed threshold. The convergence of the iterative procedure is accelerated using the stopping test proposed by \cite{varadhan2008simple}.
The final parameterization was derived after testing several different options which are described in the Supplemental Material.
\section{Simulation Study} \label{sec:sim}
\subsection{Simulation design and quality criteria.}\label{sec:sim:subsec:qc}
\subsubsection*{Simulation design.} The simulated time series have length $n=400$, with $4$ ``years'' of $2$ ``months'' of $50$ ``days'' each, and a monthly variance. A total of $6$ change-points were introduced at positions $t=55, 77, 177, 222, 300, 366$, with the signal mean alternating between $0$ and $1$. The periodic function was modelled by $f_t=0.7 \cos(2 \pi t/L)$, where $L=100$ is the length of one year. Since we consider only two months here, the variance alternates between two values, $\sigma_1^2$ and $\sigma_2^2$. Several batches of $100$ time series were generated with different values of $\sigma_1$ ($0.1$, $0.5$, or $0.9$) and $\sigma_2$ ($0.1$ to $1.5$ in steps of $0.2$). Figure \ref{fig:simulation} shows an example.
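Under one reading of this design (means alternating $0,1,0,\ldots$ starting at $0$, and the noise level starting with $\sigma_1$ in the first 50-day month; both starting values are assumptions here), one simulated series can be generated as:

```python
import numpy as np

def simulate_series(sigma1, sigma2, rng):
    """One simulated series; returns (y, signal, sd) with signal = mu + f."""
    n, L = 400, 100
    t = np.arange(1, n + 1)
    cps = [0, 55, 77, 177, 222, 300, 366, n]       # t_0 < t_1 < ... < t_K = n
    mu = np.zeros(n)
    for k in range(len(cps) - 1):
        mu[cps[k]:cps[k + 1]] = k % 2              # means alternate 0, 1, 0, ...
    f = 0.7 * np.cos(2.0 * np.pi * t / L)
    sd = np.where(((t - 1) // 50) % 2 == 0, sigma1, sigma2)  # "months" of 50 days
    return mu + f + rng.normal(0.0, sd), mu + f, sd
```

Each batch of the study then amounts to 100 independent calls to `simulate_series` for one $(\sigma_1, \sigma_2)$ pair.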
\begin{figure}
\caption{Example of a simulated time series (black solid line in lower panel) of length $n=400$ with $K=7$ segments (red solid line), function $f_t=0.7 \cos(2 \pi t/L)$ (blue solid line), noise (cyan solid line) with standard deviation $\sigma_1^\star = 0.1$ and $\sigma_2^\star = 0.5$ (changing every $L/2=50$ points, starting with $\sigma_1^\star$).}
\label{fig:simulation}
\end{figure}
\subsubsection*{Quality criteria.} The accuracy of the results is quantified by the differences between the estimates (denoted with a hat ${\widehat{x}}$) and the true values (denoted as $x^{\star{}}$).\\
For the function $f$, the root mean square error (RMSE) of the estimated function is computed: $\mbox{RMSE}(f) =\left[\frac{1}{n}\sum_{t=1}^{n} \{\hat{f}_t-f^{*}_t \}^2\right]^{1/2}$.
For the segmentation parameters, the following criteria are considered: \begin{itemize}
\item the difference between the estimated number of segments and the true one $\hat{K}-K^{*}$;
\item the RMSE of the estimated mean parameter $\hat{\mubf}$:
$\mbox{RMSE}(\mubf)=\left[\frac{1}{n}\sum_{t=1}^{n}\left\{\hat{\mu}_t-\mu^{*}_t\right\}^2\right]^{1/2}$;
\item the distance between the estimated positions of the change-points $\widehat{\tbf}$ and the true ones $\tbf^{\star}$; this distance is measured with the help of the two components of the Hausdorff distance, $d_1$ and $d_2$, defined as: $$
d_1(a,b)=\max_b \min_a |a-b| \ \ \text{and} \ \ d_2(a,b)=d_1(b,a). $$
A perfect segmentation results in both $d_1(\tbf^{\star}, \widehat{\tbf})$ and $d_2(\tbf^{\star}, \widehat{\tbf})$ being zero. A small $d_1$ means that the detected change-points are well positioned, and a small $d_2$ means that a large part of the true change-points are correctly detected. A common situation in practice is that the number of change-points is under-estimated, with a small $d_1$ and a large $d_2$. In that case, some change-points are undetected but the detected ones are correctly located. This situation is acceptable here since in our application it is preferable to miss a few change-points (usually of small amplitude) rather than to over-segment the data with badly-positioned change-points.
\item the histogram of the change-point locations, which provides an empirical measure of the probability of detecting a change-point at each position. \end{itemize}
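As an illustration, the RMSE and the two Hausdorff-distance components defined above can be computed as in the following Python sketch (the helper names are hypothetical and are not part of the estimation code):

```python
import numpy as np

def rmse(est, true):
    # RMSE(x) = [ (1/n) * sum_t (x_hat_t - x_star_t)^2 ]^(1/2)
    return np.sqrt(np.mean((np.asarray(est) - np.asarray(true)) ** 2))

def hausdorff_components(true_cp, est_cp):
    # d1: worst distance from a detected change-point to the nearest true one
    #     (small d1 -> detected change-points are well positioned)
    d1 = max(min(abs(a - b) for a in true_cp) for b in est_cp)
    # d2: worst distance from a true change-point to the nearest detected one
    #     (small d2 -> most true change-points are detected)
    d2 = max(min(abs(a - b) for b in est_cp) for a in true_cp)
    return d1, d2
```

With true change-points at $t=50,100,150$ and a single detection at $t=52$, this gives $d_1=2$ (the detection is well positioned) and $d_2=98$ (two change-points are missed), the under-estimation situation described above.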
\subsection{Results.}\label{sec:sim:subsec:res}
Only the results for $\sigma_1^\star = 0.5$ are illustrated hereafter. The results for the other values of $\sigma_1^\star$ are briefly discussed at the end of the section.
\subsubsection*{Accuracy of the variance estimates.} Figure \ref{fig:estimated_variance} presents the estimation errors of $\hat{\sigma}_1$ and $\hat{\sigma}_2$ for different values of $\sigma_2^\star$. The variance estimator works well: the estimated standard deviations are retrieved with the same accuracy as in \citet{Bock2018}, despite the presence of the periodic bias. The dispersion increases when $\sigma_2^\star$ increases, as one would expect.
\begin{figure}
\caption{Boxplots of standard deviation estimation errors: $\hat{\sigma}_{1}-\sigma_{1}^\star$ in red and $\hat{\sigma}_{2}-\sigma_{2}^\star$ in blue, with $\sigma_1^\star$=0.5 and $\sigma_2^\star=0.1,\dots,1.5$. Each case includes 100 simulations. }
\label{fig:estimated_variance}
\end{figure}
\subsubsection*{Accuracy of segmentation parameter estimates.} Figure \ref{fig:quality_criteria} shows the results for the four model selection criteria and for the special case where the number of segments $K$ is fixed to the true value ($K=7$). For small values of $\sigma_2^\star$, the detection problem is easy and all the model selection criteria retrieve the correct number of segments (Figure \ref{fig:quality_criteria}(a)). For large values of $\sigma_2^\star$, however, the detection becomes difficult and the errors increase. The criteria behave slightly differently: Lav tends to give the true number of segments in the median, but with a large dispersion, while BM1, BM2, and mBIC tend to underestimate the number of segments (mBIC most strongly). However, finding the correct number of segments does not mean that the change-points are properly positioned. Indeed, for Lav and the case when $K=7$, the median $d_1$ is still quite large (Figure \ref{fig:quality_criteria}(c)). On the other hand, the median $d_2$ is smaller for the case when $K=7$ compared to the tested criteria (Figure \ref{fig:quality_criteria}(d)). Finally, RMSE($\mubf$) is very similar for all the criteria (Figure \ref{fig:quality_criteria}(b)), though Lav shows a larger median and dispersion when $\sigma_2^\star$ is large. When $\sigma_2^\star$ takes intermediate values, the case when $K=7$ yields slightly improved results.
\begin{figure}
\caption{Results with the four selection criteria (BM1, BM2, Lav, and mBIC) and with the true number of segments (True), for $\sigma_1^\star=0.5$ and different values of $\sigma_2^\star$. \emph{(a)} $\hat{K}-K^\star$; \emph{(b)} $\mbox{RMSE}(\mubf)$; \emph{(c)} first Hausdorff distance $d_1$ and \emph{(d)} second Hausdorff distance $d_2$.}
\label{fig:quality_criteria}
\end{figure}
\subsubsection*{Probability of detection.} Figure \ref{fig:frequenze} shows the percentage of change-point detections for three values of $\sigma_2^\star$ ($0.1$, $0.5$, and $1.5$) and $\sigma_1^\star=0.5$. In general, the change-points located in the ``months'' with smaller variance are more often recovered, with all three criteria and also when the true $K$ is used. Hence, in case (a), where $\sigma_1^\star=0.5$ and $\sigma_2^\star=0.1$, the probability of detection is slightly smaller for position $222$, which is contained in a segment with $\sigma_1^\star = 0.5$, and for position $300$, where both the mean and the variance change. In case (b), where $\sigma_1^\star=\sigma_2^\star=0.5$, the probability of detection is more or less the same for all the change-points and all the criteria. When $\sigma_2^\star = 1.5$, the problem is more complicated. Again the change-points located in the ``months'' with smaller noise are better detected (positions $222$ and $300$), but for the other four change-points the results are mixed, although they are all located in months with $\sigma_2^\star=1.5$. The change-points at $55$ and $77$ are almost never detected. For mBIC this is consistent with the fact that the median $\hat{K}=5$, i.e.\ two change-points are missing on average (Figure \ref{fig:quality_criteria}(a)), but the other four change-points are not so badly located ($d_1$ is not that large, Figure \ref{fig:quality_criteria}(c), but $d_2$ is very large, Figure \ref{fig:quality_criteria}(d)). The situation is similar for BM1. On the other hand, for Lav and the true $K$, the number of detections is correct (on average for Lav) but, due to the large noise, they are sometimes very badly positioned (large $d_1$ and $d_2$).
\begin{figure}
\caption{Histogram of change-point detections with, from left to right, the BM1, Lav, and mBIC selection criteria, and the case when the true number of segments is used (TRUE), for $\sigma_1^\star=0.5$ and three different values for $\sigma_2^\star$: (a) $\sigma_2^\star=0.1$, (b) $\sigma_2^\star=0.5$ and (c) $\sigma_2^\star=1.5$. The red dotted lines indicate the positions of the true change-points. The results for BM2 are very similar to BM1 and are not shown.}
\label{fig:frequenze}
\end{figure}
\subsubsection*{Accuracy of the function estimate.} Figure \ref{fig:RMSE_f} shows RMSE($f$) as a function of $\sigma^\star_2$. As expected, the errors increase when $\sigma_2^\star$ increases. The results do not depend much on the selection criterion, but they are slightly better when the true number of segments is known and when $\sigma_2^\star$ takes intermediate values. The results for Lav show a slightly larger median and dispersion. \\
\begin{figure}
\caption{RMSE of the estimated function $f$ for $\sigma_1^\star=0.5$ and different values for $\sigma_2^\star$.}
\label{fig:RMSE_f}
\end{figure}
The results for other values of $\sigma_1^\star$ (not shown) are very similar for BM1, BM2, mBIC, and the case when the true $K$ is used. The results are slightly improved for $\sigma_1^\star=0.1$ and slightly degraded for $\sigma_1^\star=0.9$, as expected. The results for Lav are more erratic, with a large under-estimation of $K$ for the smaller $\sigma_1^\star$ and an over-estimation of $K$ for the larger $\sigma_1^\star$, and a subsequent large degradation of the other quality criteria. In general, under-estimating $K$ leads to an increase of $\mbox{RMSE}(\mubf)$, while over-estimating $K$ leads to an increase of $d_1$. \\
The main conclusions from the simulation study are the following: \begin{itemize}
\item The proposed method works well, but the results are sensitive to the chosen functional form because it can be confounded with the change-points. Selecting only the statistically significant parameters of the function appears to be a good way to reduce this problem, and slightly improves the change-point detection with our simulated data (see Supplemental Material).
\item Concerning the model selection criteria, BM1, BM2, and mBIC provide very similar results. They behave well and correctly detect the number and position of change-points when the noise is not too large. When the noise is large, some change-points are missed, but this is the counterpart of a limited number of false detections. The Lav criterion shows a much larger dispersion in the number of change-points and, though the estimated number is close to the truth in the median, some change-points are not properly located (larger $d_1$ and $d_2$), with an impact on the estimated $\mubf$ and $f$. \end{itemize}
\section{Application to real GNSS data}\label{sec:real}
\subsection{Dataset, metadata, and validation procedure} The method is applied to the daily IWV differences between 120 global GNSS stations and the ERA-Interim reanalysis for the period from 1 January 1995 to 31 December 2010. The dataset used here is the same as in \cite{Bock2019}. The metadata for the GNSS stations are available from the IGS site-logs (ftp://igs.org/pub/station/log/). They contain, for each station, the dates of changes of receiver (R), antenna (A), and radome (D). We also included the dates of processing changes (P), which occurred at a few stations in 2008 and 2009 (this issue is discussed in \citet{parracho2018global}). Experience shows that equipment changes do not systematically produce a break in the GNSS IWV time series. The most important changes are those affecting the antenna and its electromagnetic environment, the satellite visibility, and the number of observations \citep{Vey2009}. For instance, \citet{ning2016} considered only antenna and radome changes, as well as the addition/removal of microwave absorbing material, which was known to the authors for one specific station. However, there is some evidence that changes in the receiver settings also induce inhomogeneities, e.g.\ when the elevation cutoff angle is changed. Changes in the environment, due e.g.\ to the cutting of vegetation or the construction of buildings near the antenna, as well as seasonal changes in multipath due to growing/declining vegetation, may also impact the measurements and produce either abrupt or gradual changes. As a consequence, though the metadata represent a valuable source of validation, a full match between detected change-points and metadata is not to be expected.
Because of noise in the signal, the detected changes may also not coincide perfectly with the known changes, and we must allow some flexibility in the validation procedure. A window of 30 days before or after a documented change was used for the automatic validation of the detected change-points. A visual inspection was also performed to check whether the non-validated change-points make sense. In some cases, double detections just a few days apart are found on noise spikes, often with two large offsets of opposite signs. Such noise detections are classified as outliers.
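A minimal sketch of this $\pm 30$-day validation rule, assuming both the detected change-points and the documented changes are given as day indices (the function name is hypothetical):

```python
def validate_changepoints(detected, documented, window=30):
    """Return the detected change-points lying within +/- `window` days
    of at least one documented equipment or processing change."""
    return [d for d in detected
            if any(abs(d - m) <= window for m in documented)]
```

For example, a detection at day 100 is validated against a documented change at day 110, while a detection 31 or more days away from every documented change is not.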
\subsection{General Results}\label{sec:real:subsec:general_results}
In this section, we present results for the final method described in the preceding sections as well as for three alternative methods. The final method is referred to as variant (a). Variant (b) is a similar method where only the statistically significant terms of the Fourier series are selected. It is intended to check whether reducing the number of degrees of freedom in the function leads to better results, as was found with the simulations. Variant (c) is the earlier method proposed by \citet{Bock2018}, in which only the segmentation is performed (i.e.\ the functional part is removed). Variant (d) is another simplified method, where the functional part is modelled but a homogeneous variance is considered instead of a monthly variance. Statistics on the number of detected change-points are included in Figure \ref{fig:stat_real}. More statistics, including the numbers of validations and outliers, are given in Table \ref{table:criteres}.
\begin{figure}
\caption{Histograms of the number of change-points detected for the four variants of the method with the four model selection criteria (mBIC, Lav, BM1, and BM2). The numbers given in the plots are the mean, min, and max number of change-points detected per station; N is the total number of change-points per method.}
\label{fig:stat_real}
\end{figure}
\begin{table}[t]
\caption{Comparison of segmentation results for the four variants and the four model selection criteria. From left to right: number of stations with change-points (Nsta), min/mean/max number of detected change-points per station, total number of change-points (detections), total number of outliers, total number of validations, percentage of validations including outliers, and percentage of validations excluding outliers.}
\label{table:criteres}
\begin{tabular}{llllllllll}
\hline
 & Nsta & min & mean & max & detections & outliers & validations & \% valid.\ (incl.) & \% valid.\ (excl.) \\ \hline
\multicolumn{10}{l}{\textbf{Variant (a)} (segfonc)} \\ \hline
\textit{mBIC} & 120 & 9 & 27.1 & 29 & 3251 & 2096 & 267 & 8.2\% & 20.9\% \\
\textit{Lav} & 114 & 0 & 4.0 & 28 & 474 & 129 & 75 & 15.8\% & 21.3\% \\
\textit{BM1} & 98 & 0 & 2.8 & 14 & 335 & 36 & 70 & 20.9\% & 23.3\% \\
\textit{BM2} & 107 & 0 & 3.6 & 18 & 435 & 64 & 77 & 17.7\% & 20.6\% \\ \hline
\multicolumn{10}{l}{\textbf{Variant (b)} (segfonc/select)} \\ \hline
\textit{mBIC} & 120 & 8 & 27.2 & 29 & 3268 & 2090 & 270 & 8.3\% & 20.7\% \\
\textit{Lav} & 115 & 0 & 7.8 & 28 & 940 & 411 & 116 & 12.3\% & 20.8\% \\
\textit{BM1} & 100 & 0 & 2.8 & 13 & 334 & 46 & 68 & 20.4\% & 23.4\% \\
\textit{BM2} & 107 & 0 & 3.7 & 24 & 439 & 76 & 81 & 18.5\% & 22.1\% \\ \hline
\multicolumn{10}{l}{\textbf{Variant (c)} (segonly)} \\ \hline
\textit{mBIC} & 120 & 9 & 28.1 & 29 & 3367 & 1255 & 361 & 10.7\% & 16.4\% \\
\textit{Lav} & 113 & 0 & 2.9 & 16 & 350 & 28 & 64 & 18.3\% & 19.6\% \\
\textit{BM1} & 90 & 0 & 2.2 & 12 & 269 & 8 & 53 & 19.7\% & 20.2\% \\
\textit{BM2} & 102 & 0 & 3.5 & 17 & 414 & 24 & 68 & 16.4\% & 17.4\% \\ \hline
\multicolumn{10}{l}{\textbf{Variant (d)} (seghomofonc)} \\ \hline
\textit{mBIC} & 116 & 0 & 19.0 & 29 & 2283 & 1637 & 178 & 7.8\% & 24.1\% \\
\textit{Lav} & 114 & 0 & 3.5 & 26 & 415 & 148 & 56 & 13.5\% & 20.4\% \\
\textit{BM1} & 92 & 0 & 2.4 & 19 & 287 & 40 & 61 & 21.3\% & 24.1\% \\
\textit{BM2} & 101 & 0 & 3.2 & 19 & 387 & 82 & 68 & 17.6\% & 21.7\% \\ \hline
\end{tabular}
\end{table}
Figure \ref{fig:stat_real}(a) shows that, with variant (a), mBIC, Lav, BM1, and BM2 detect a total of $3251$, $474$, $335$, and $435$ change-points, respectively. The distribution of the number of change-points per station differs strongly depending on the selection criterion. Most notably, mBIC detects between 9 and 29 change-points per station, with a mean value of $27.1$, i.e.\ in most cases the maximum number of segments is selected (here $K_{\text{max}}=30$). This behaviour was not observed with the simulations. From Table \ref{table:criteres} we see that mBIC detects many outliers. Comparison of contrast values reveals that mBIC selects solutions with smaller SSR values than the other criteria, i.e.\ the model selected by mBIC generally better explains the observed signal. However, this comes at the expense of strong over-segmentation, which is not wanted. mBIC is thus not well adapted to the nature of the data analyzed here. One of the reasons might be that the hypothesis of Gaussian errors is not valid (e.g.\ due to serial correlation in the data and noise spikes). The three other selection criteria provide much more consistent results, with a mean number of change-points of $2.8$, $3.6$, and $4.0$ for BM1, BM2, and Lav, respectively. Among the three criteria, we see from Table \ref{table:criteres} that BM1 has the smallest number of outliers (36) and the highest rate of validations ($20.9\%$). These two features, together with its reasonable number of change-points (a mean of 2.8 per station), make BM1 the preferred selection criterion.
Compared to variant (a), variant (b) has a marginal impact on the number of detections and the number of validations for three criteria (mBIC, BM1, and BM2). Only for Lav do the mean and total numbers of detections increase (by nearly a factor of 2). This behaviour is not explained, but it reveals some instability in the model selection with this criterion. Instability could also be inferred from the maximal number of detections of 28 already seen in variant (a): in some cases, Lav selects a number of segments very close to the maximum ($K_{\text{max}}=30$). BM1 and BM2 also have more outliers with this variant, though the total number of detections is almost unchanged. So, contrary to the simulation results, with the real data there is no benefit in applying a selection of significant terms of the functional model.
In variant (c), the result for mBIC is slightly worse (more detections) but with fewer outliers. For the three other criteria, the number of detections decreases significantly. The latter behaviour was not expected. Our interpretation is that when the periodic bias is not modelled, the segmentation algorithm has two options: either (i) insert additional change-points to better fit the periodic variations in the signal, which would lead to many more detections (4 per year, i.e.\ a total of 64 per station for a 16-year time series), or (ii) select only those change-points with a large amplitude that are not confounded with the periodic bias. The observed results (Figure \ref{fig:stat_real}(c) and Table \ref{table:criteres}) suggest that BM1, BM2, and Lav select the second, more conservative, option. Our final method is actually capable of detecting smaller offsets, which makes it more efficient for the homogenization purpose. Note that with variant (c), the situation described by option (i) nevertheless occurs in some cases, as will be illustrated in the next sub-section, and though the numbers of outliers and validations both decrease for BM1, BM2, and Lav, the percentage of validations remains nearly the same (Table \ref{table:criteres}). So, variant (a) clearly works better than variant (c) in the sense that it detects more change-points; it nevertheless has the drawback of detecting more outliers. This point is further discussed in the last section.
In variant (d), the variance is assumed to be constant. This has two consequences: (i) the function is fitted with uniform weights, which in general leads to an estimated function $\hat{f}$ and an estimated mean $\hat{\mu}$ of different shapes; (ii) the estimated variance is larger than the mean variance of variant (a) (the average mean standard deviations amount to $1.19$ vs.\ $0.84\ \mathrm{kg\,m^{-2}}$, respectively) and fewer change-points are detected. Table \ref{table:criteres} confirms that with this method fewer change-points are detected than with variant (a); however, the number of outliers increases (except for mBIC, which is again a special case). The number of validations also decreases, but the percentage of validations is almost unchanged. \\
The comparison of the four variants thus shows that the final method, including a heterogeneous variance and a full functional model for the periodic bias, has the best properties: a reasonable number of detections and outliers, and a high rate of validations. Among the four model selection criteria, BM1 and BM2 behave better than Lav and mBIC, with a small advantage for BM1. Figure \ref{fig:offsets}(a) shows that the yearly-mean standard deviation of the noise ranges between 0 and $2\ \mathrm{kg\,m^{-2}}$, with a mean value over the 120 stations of $0.84\ \mathrm{kg\,m^{-2}}$. The seasonal excursion is $0.63\ \mathrm{kg\,m^{-2}}$ on average, which reflects the importance of modelling the heterogeneous variance. Figure \ref{fig:offsets}(b) presents a measure of the magnitude of the periodic bias for BM1. With an average value of $0.33\ \mathrm{kg\,m^{-2}}$, it is clear that the periodic bias is not negligible, and modelling it improves the segmentation results, as shown by comparing the results of variants (d) and (a). Figure \ref{fig:offsets}(c) shows that the distribution of offsets (changes in mean) is nearly symmetrical. The mean absolute value of $1.27\ \mathrm{kg\,m^{-2}}$ is relatively large. The dip centred on zero reflects the fact that the smaller offsets are more difficult to detect because of their small signal-to-noise ratio (SNR). The most frequently detected offsets are found around $\pm 0.5\ \mathrm{kg\,m^{-2}}$. The larger offsets (up to $\pm 10\ \mathrm{kg\,m^{-2}}$) are outliers. The SNR is computed as the absolute value of the offset divided by the standard deviation of the noise. Its distribution peaks at 0.6, and the larger values (up to 10) again correspond to outliers (Figure \ref{fig:offsets}(d)). The mean SNR of 1.55 indicates that our method has a good detection efficiency.
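The offset and SNR computations described above can be sketched as follows, assuming a piecewise-constant estimated mean series and per-date estimated noise standard deviations (the names are hypothetical, not the production code):

```python
import numpy as np

def offsets_and_snr(mu_hat, changepoints, sigma_hat):
    """Offset = jump of the estimated mean at each change-point index;
    SNR = |offset| / estimated noise standard deviation at that date."""
    offsets = [float(mu_hat[t] - mu_hat[t - 1]) for t in changepoints]
    snr = [abs(o) / float(sigma_hat[t]) for o, t in zip(offsets, changepoints)]
    return offsets, snr
```

Here each change-point is given as the index of the first date of the new segment, so the offset is simply the difference between consecutive segment means.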
\begin{figure}
\caption{Histograms of segmentation results for the final method with selection criterion BM1: (a) Number of stations with respect to the estimated standard deviation of the noise (mean and max-min of the 12 monthly values); (b) Number of stations with respect to the standard deviation of the estimated function; (c) Distribution of offsets of detected change-points; (d) Distribution of SNR of detected change-points.}
\label{fig:offsets}
\end{figure}
\subsection{Examples of special cases}
In addition to the global results, we exhibit the results for four stations illustrating special cases of the variants. Only the BM1 criterion is considered here. With variant (c), 66 stations have the same number of detections as with variant (a). Though in general the change-points are located at the same positions in the time series, this is not always the case. For 18 stations, variant (c) detects more change-points, and for 36 stations it detects fewer. Station POL2 is an example of the former category and station STJO of the latter. DUBO is an example where the same number is detected but the change-points are not located at the same positions. With variant (d), the numbers of stations with equal, more, and fewer detections are 57, 24, and 39, respectively. Examples are EBRE, MCM4, and POL2, respectively. \\
The results for a selection of four stations are given in Figure \ref{fig:special_cases}: \begin{itemize} \item In the case of POL2, variants (a), (c), and (d) detect 3, 12, and 1 change-point, respectively. The signal shows a strong periodic variation which is well fitted by the models of variants (a) and (d) but is erroneously captured by the segmentation in variant (c). Variant (a) has one validated change-point (detected date: 2008-02-23, known change: 2008-03-06, type of change: P). Variant (c) has no validation, although it detects 12 change-points. Variant (d) detects only one change-point, which is located 72 days from the nearest known change and is thus not validated, but it coincides with one of the three detections found by variant (a). The detection of this change-point is difficult because it is located in a month with heavy noise. \item In the case of STJO, variants (a) and (d) detect 5 and 4 change-points, respectively, with one outlier each, but not at the same position. Among the detected change-points, one is exactly the same (detected: 2003-04-18, known: 2003-06-08, type: R) but is not validated, and one is close (detected by variant (a): 1999-07-20, by variant (d): 1999-07-19, known: 1999-07-29, type: R) and is validated. Variant (c) gives no detection: the conservative option is selected by BM1 (option (ii) discussed above). \item In the case of DUBO, variants (a) and (c) detect two change-points at almost, but not exactly, the same positions. Both are located close to known changes, but only one is validated, for variant (a) (detected: 1999-05-07, known: 1999-05-26, type: R). The second one is located 34 days from a known change for variant (a) and 148 days for variant (c). Though variant (c) does not perform badly, it is not as accurate as variant (a) because the periodic bias is neglected. Variant (d) has 4 detections which actually consist of 2 change-points, each being associated with an outlier.
Although the periodic bias is modelled here, both change-points are quite badly located and thus not validated. \item Finally, for MCM4, the signal has very marked inhomogeneities in the form of several abrupt changes but also non-stationary oscillations. The abrupt changes are well captured by variant (a), which detects 5 change-points, among which 4 are validated (types, in chronological order: R, R, P, P). The non-stationary oscillations are only partly modelled by the periodic function. This is a special case where even the model used in variant (a) is not well adapted to such oscillations; this result advocates for an improvement of the functional basis. In that case, variant (c) works quite well too and leads to almost the same detections as variant (a), but only the two P changes are validated. Variant (d), on the other hand, over-estimates the number of change-points to better fit the non-stationary oscillations, at the cost of detecting outliers. The same four change-points are validated as with variant (a), but the fitted means are quite different. \end{itemize}
\begin{figure}
\caption{Examples of results obtained with variants (a), (c), and (d) from left to right, for four different stations: POL2, STJO, DUBO, and MCM4 (from top to bottom). The content of the plots is similar to Fig. \ref{fig:ccjm}(b). The text inserted at the top left of the plots reports the mean standard deviation of the noise, the variation (max-min) of the standard deviation of the noise, the standard deviation of the periodic bias function, and the variation (max-min) of the periodic bias function. The text in blue reports the total number of detections and of known changes, the minimum and maximum distance between detected change-points and the nearest known changes, the number of validated detections, and the number of noise detections. }
\label{fig:special_cases}
\end{figure}
Among the 70 validated change-points found by BM1 with variant (a), there are 53 of type R, 16 of type A, 7 of type D, and 13 of type P (these numbers do not sum to 70 because in many cases a change involves several types). We find here that receiver changes are the most frequent explanation for inhomogeneities. This is not surprising since they are the most frequent change type occurring at GNSS stations. However, this is in contrast with \citet{ning2016}'s results, who did not consider receiver changes at all. About 70\% of the receiver changes documented in the IGS sitelogs actually refer to firmware updates, which do not have much impact on the observations as long as they do not involve a change in the minimum elevation cutoff angle. Hardware changes, on the other hand, are more prone to have an impact. We performed a quality control based on the observation files with the TEQC software \citep{esteymertens1999} and found that in many cases hardware changes lead to changes in the multipath diagnostic parameters and, on some occasions, in the percentage of observations. Receiver changes that have an impact are found e.g.\ at station STJO on 1999-08-06 (from ROGUE\_SNR\_8000 to AOA\_SNR\_12\_ACT) and at station MCM4 on 2002-01-03 (from ROGUE\_SNR\_8000 to AOA\_SNR\_12\_ACT) and on 2006-05-19 (from AOA\_SNR\_12\_ACT to ASHTECH\_ZXII3). At MCM4, strong oscillations are found in the multipath diagnostics (mp1 and mp2) during the AOA\_SNR\_12\_ACT period, similar to those seen in the IWV differences (Figure \ref{fig:special_cases}). This reveals a malfunctioning of the GNSS equipment, also associated with a jump in the mean signal at the beginning and at the end of that period.
\subsection{Comparison with \citet{ning2016}}
Similar to this study, \citet{ning2016} analyzed the homogeneity of GNSS-ERAI IWV differences for a global network of 101 GNSS sites with at least 15 years of observations. Their series were used with monthly sampling, whereas here we used daily sampling. They used the PMTred test \citep{Wang2008} to detect abrupt changes in the mean IWV difference, but this model includes neither a periodic bias nor a monthly varying variance. They detected a total of 62 change-points affecting 47 stations, among which 45 detections were attributed to the GNSS series, 16 to ERAI, and 1 was undetermined. Their attribution method was based on the comparison of the GNSS candidate series to two or three reference series (ERAI, another nearby GNSS series, and/or a nearby VLBI series). Consistency between the two or three detected offsets was used to attribute the change-points to GNSS, and disagreement to ERAI (by default). They also validated 13 detections with the GNSS metadata, but they included only antenna, radome, and known microwave absorbing material changes. Their validation window was $\pm 6$ months wide, i.e.\ much larger than our $\pm 30$-day window. We reviewed their validations for 42 of their sites for which we had metadata information from the IGS sitelogs, including in our case receiver changes. Using the same 6-month window, we found that 10 out of their 12 undocumented GNSS detections can actually be explained by receiver changes and 2 by receiver+antenna changes (the latter were surprisingly missing in their analysis). Six of these changes agreed with the metadata within 2 months or less. We also found that 5 out of 15 of their change-points attributed to ERAI actually coincide with 2 GNSS receiver changes and 3 antenna changes. Finally, inspection of the GNSS-ERAI IWV difference time series suggests that many of their undocumented detections may be due to outliers and gaps in the time series.
This suggests that the implementation of the PMTred test is quite sensitive to fluctuations in the noise, a property similar to that of variant (d) discussed in the previous sub-section.
The comparison of our results for variant (a) with \citet{ning2016}'s results for 31 common stations which have change-points leads to the following conclusions: (i) our method detects nearly twice as many change-points as PMTred (107 vs.\ 43); (ii) among the 32 PMTred detections attributed to GNSS, about one third coincide with ours within $\pm 2$ months, one third within 2--6 months, and one third within more than 6 months; (iii) among the 11 PMTred detections attributed to ERAI, 4 change-points coincide with ours within $\pm 1$ month (the others being about 6 months or more apart) and none of them can actually be explained by GNSS changes (even when receiver changes are included). Inspection of the IWV differences and the TEQC diagnostics confirms that the 4 change-points attributed to ERAI cannot be explained by changes in the GNSS time series, i.e.\ they may truly be due to ERAI; these are: GODE (1998-08-06), HOB2 (2006-06-10), and WUHN (1999-02-14 and 2006-09-27). The latter change-point was already mentioned by \citet{parracho2018global} as being due to a change in the radiosonde data from the station at the city of Wuhan, China, assimilated in ERAI.
\section{Discussion and conclusions} \label{sec:conclusion} In this paper we presented a new segmentation method for the detection of abrupt changes in the mean of geophysical time series including a periodic bias and heterogeneous variance. The results on simulated data showed that the segmentation results (position and amplitude of change-points) are sensitive to the choice of the function basis used to model the periodic bias and to the initialisation of the iterative procedure in which the function and segmentation parameters are estimated. Several model selection criteria were tested. The criterion proposed by \citet{BM2001} and the modified BIC proposed by \citet{Zhang2007} appeared to have good properties. The criterion of \citet{Lavielle2005} appears rather unstable with large dispersion in the number of detected change-points.
When applied to real data (GNSS minus ERAI IWV series), the results of the modified BIC were very disappointing (strong over-estimation of the number of change-points), likely because it is derived under the assumptions of a normal distribution and homoscedastic variance. In fact, all the considered model selection criteria are based on these assumptions, but in our experience mBIC is much more sensitive to deviations from the normal distribution.
We tested several variants of the method on the real data and found that accounting for a monthly variance and a periodic bias clearly improved the detection, although the method has some tendency to detect outliers due to noise spikes (about 20\% of the detections). A proper outlier detection method has to be developed, e.g.\ based on the SNR, in order to reject these detections.
In addition, future improvements of the proposed method would be: (i) to consider other models for the function $f$, since it was found that in some cases, e.g.\ at station MCM4, a simple periodic function is not adequate; (ii) to take the serial correlation in the data into account. The first point can be handled by estimating the function $f$ with a non-parametric approach. The second point can be addressed by following the approach of \citet{Chakar2017}, who proposed to model the temporal correlation by an autoregressive process of order 1. These authors also proposed a two-stage whitening inference strategy that allows the use of the DP algorithm and finds the exact maximum likelihood solution.
\section*{Data statement} The GNSS IWV data are available from \url{https://doi.org/10.14768/06337394-73a9-407c-9997-0e380dac5591} (last access: April 2020; \citealp{Bock2016}).
ERA-Interim data are available from \url{https://www.ecmwf.int/en/forecasts/datasets/archive-datasets/reanalysis-datasets/era-interim} (last access: April 2020; \citealp{Dee2011}).
\section*{Supplemental Material} \label{sec:supplement}
\subsection*{Summary of the proposed procedure} Figure \ref{fig:algorithm} summarizes the proposed procedure.
\begin{figure}
\caption{Schematic of the algorithm.}
\label{fig:algorithm}
\end{figure}
\subsection*{Tested alternatives to the proposed procedure}
Recall that in our procedure (see Section \ref{sec:method:subsec:inference}), (1) the variances are estimated first; (2) the iterative procedure is initialized by estimating $f$ with an unweighted least-squares criterion; (3) the function is estimated with a Fourier decomposition of order $4$. We tested different variants for these three points:
\begin{description}
\item[(1) Updating the variances] we tested a version of the procedure in which ${\bf\sigma}$ was updated at each iteration of the iterative procedure. The estimated variances are plotted in Figure \ref{fig:S1}. Compared to our procedure, this option provided slightly more accurate estimates for all the variances (see Figure \ref{fig:estimated_variance}) and for the function parameters, with very little impact on the segmentation parameters (not shown). However, the small changes in the variances at each iteration severely slowed down the convergence of the algorithm. \\
\item[(2) Variants of the initialization] three variants were tested: (a) the segmentation is performed first; (b) $f$ is estimated first using a weighted regression (as in the iterative procedure); (c) $f$ is estimated first using a weighted regression, but on the centered signal $y_t - \bar{y}$.
Figure \ref{fig:tot_out_init_seg} shows the results for option (a). Compared to the results of our procedure (see Figs. \ref{fig:quality_criteria} and \ref{fig:RMSE_f}), the results are significantly degraded. In particular, the larger $d_1$ indicates that change-points are badly located. At the initialization, the unmodelled periodic variations present in the signal are captured by the segmentation. The iterative procedure does not correct this effect, which naturally leads to an over-segmentation in addition to a poor estimation of $f$. This is particularly marked for small values of the noise $\sigma_2$, and for the Lav criterion for all values of $\sigma_2$.
Figure \ref{fig:tot_out_init_weighted} shows the results for option (b). The results are degraded as well, but less than previously and mainly for larger $\sigma_2$. This can be explained by the fact that the unmodelled change-points belonging to small-variance periods are absorbed by $f$, thus degrading its estimation at the initialization step. As previously, the iterative procedure does not correct this effect.
The results for option (c) (not shown here) are very similar to those obtained with our initialization procedure. This alternative is equivalent to including a constant term in the linear regression used to estimate $f$. Its estimation is less degraded compared to option (b), and it is corrected within the iterative loop.
Our choice of first estimating the function $f$ with an unweighted regression is more flexible in the sense that it does not capture the whole segmentation effect at the initialization step, thus allowing the iterative procedure to correctly separate the function and the segmentation terms.\\
\item[(3) Function model] The sensitivity of the procedure to the initialization step discussed above highlights the possible confusion between the function and the segmentation. This sensitivity can be further explored by testing different models for $f$. The underlying idea is that simpler models might be less easily confused with the segmentation, making the procedure more accurate in terms of change-point locations. We tested two alternatives: (a) the shape of $f$ is known up to a scaling factor, i.e. $f_t=a_1\cos(2\pi t/L)$; (b) only the statistically significant terms of the Fourier series (p-value $< 0.001$) are retained. Figures \ref{fig:cos_out_init} and \ref{fig:selb_out_init} show that the results for these two cases are both consistent and, as expected, improve the segmentation results compared to our method (see Figures \ref{fig:quality_criteria} and \ref{fig:RMSE_f}). In particular, the overall RMSE of the fitted function is strongly reduced. The impact on the positions and amplitudes of the change-points is rather small, however, and the impact in the case of real data is negligible (see Section \ref{sec:real}). This test points to the importance of the function model in our method. However, when it comes to real data, the true form of the function is not well known, i.e. the Fourier series of order 4 or even higher may be inadequate. It might thus be useful in a future version of the method to use a more complex basis of functions. \end{description}
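For concreteness, a minimal sketch (ours; function names are illustrative, not the authors' code) of the order-4 Fourier design matrix and the (weighted) least-squares fit of $f$ described in points (2) and (3):

```python
import numpy as np

def fourier_design(n, period, order=4):
    """Design matrix with a constant column and cos/sin pairs up to `order`."""
    t = np.arange(n)
    cols = [np.ones(n)]
    for k in range(1, order + 1):
        cols.append(np.cos(2 * np.pi * k * t / period))
        cols.append(np.sin(2 * np.pi * k * t / period))
    return np.column_stack(cols)

def fit_f(y, period, weights=None, order=4):
    """(Weighted) least-squares estimate of the periodic function f.
    With weights=None this is the unweighted fit used at initialization."""
    y = np.asarray(y, dtype=float)
    X = fourier_design(len(y), period, order)
    w = np.ones(len(y)) if weights is None else np.sqrt(weights)
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return X @ beta, beta
```

On a noise-free periodic signal the fit recovers the constant and the first-harmonic amplitude exactly, since the basis columns are orthogonal over a whole number of periods.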
\begin{figure}
\caption{Boxplots of standard deviation estimation errors for variant (1): $\hat{\sigma}_{1}-\sigma_{1}^\star$ in red and $\hat{\sigma}_{2}-\sigma_{2}^\star$ in blue, with $\sigma_1^\star$=0.5 and $\sigma_2^\star=0.1,\dots,1.5$. }
\label{fig:S1}
\end{figure}
\begin{figure}
\caption{Results for variant 2-(a). \emph{(a)} $\hat{K}-K^\star$; \emph{(b)} first Hausdorff distance $d_1$; \emph{(c)} $\mbox{RMSE}(\mubf)$; \emph{(d)} $\mbox{RMSE}(f)$.}
\label{fig:tot_out_init_seg}
\end{figure}
\begin{figure}
\caption{Results for variant 2-(b). \emph{(a)} $\hat{K}-K^\star$; \emph{(b)} first Hausdorff distance $d_1$; \emph{(c)} $\mbox{RMSE}(\mubf)$; \emph{(d)} $\mbox{RMSE}(f)$.}
\label{fig:tot_out_init_weighted}
\end{figure}
\begin{figure}
\caption{Results for variant 3-(a). \emph{(a)} $\hat{K}-K^\star$; \emph{(b)} first Hausdorff distance $d_1$; \emph{(c)} $\mbox{RMSE}(\mubf)$; \emph{(d)} $\mbox{RMSE}(f)$.}
\label{fig:cos_out_init}
\end{figure}
\begin{figure}
\caption{Results for variant 3-(b). \emph{(a)} $\hat{K}-K^\star$; \emph{(b)} first Hausdorff distance $d_1$; \emph{(c)} $\mbox{RMSE}(\mubf)$; \emph{(d)} $\mbox{RMSE}(f)$.}
\label{fig:selb_out_init}
\end{figure}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Greenberger-Horne-Zeilinger correlation and Bell-type inequality seen from moving frame}
\author[label1]{Hao You\thanksref{email}} \thanks[email]{E-mail:[email protected]} \author[label1,label2]{An Min Wang} \author[label1]{Xiaodong Yang} \author[label1]{Wanqing Niu} \author[label1]{Xiaosan Ma} \author[label1]{Feng Xu} \address[label1]{Department of Modern Physics, University of Science and Technology of China, Hefei 230026, People's Republic of China} \address[label2]{Laboratory of Quantum Communication and Quantum Computing and Institute for
Theoretical Physics, University of Science and Technology of China, Hefei 230026, People's Republic of China}
\begin{abstract} The relativistic version of the Greenberger-Horne-Zeilinger experiment with massive particles is proposed. We point out that, in the moving frame, GHZ correlations of spins in original directions transfer to different directions due to the Wigner rotation. Its effect on the degree of violation of Bell-type inequality is also discussed.
\end{abstract}
\begin{keyword}
GHZ correlation \sep Bell-type inequality \sep relativity
\PACS 03.65.Bz \sep 03.30.+p \end{keyword} \end{frontmatter}
Contemporary applications of the Einstein, Podolsky, and Rosen (EPR) correlations and the Bell inequality range from purely theoretical problems \cite{EPR,Bohm,Bell} to quantum communication, such as quantum teleportation \cite{Bennett1} and quantum cryptography \cite{Ekert,Bennett2}. Recently much interest has been devoted to the study of the EPR correlation function under Lorentz transformations \cite{Czachor,Terashima,Terashima1,Ahn1,Ahn2,Terashima2}. These works showed that the relativistic effects on the EPR correlations are nontrivial and that the degree of violation of the Bell inequality depends on the relative motion of the particles and the observers.
Greenberger, Horne and Zeilinger (GHZ) proposed another kind of quantum correlations, known as GHZ correlations \cite{Greenberger1,Bouwmeester1}. This kind of refutation of local realism is strikingly more powerful than the one Bell's theorem provides for Bohm's version of EPR --- it is no longer statistical in principle. Furthermore, GHZ correlations are essential for most practical quantum communication schemes \cite{Bouwmeester1,pan,Bouwmeester2,Nelson}. It is thus an interesting question whether GHZ correlations still hold in a moving frame.
In this letter, we formulate a relativistic GHZ gedanken-experiment with massive particles, considering a situation in which measurements are performed by moving observers. We point out that GHZ correlations of spins in the original directions no longer hold in the moving frame, but transfer to different directions. This is a consequence of the Wigner rotation \cite{Wigner} and does not imply a breakdown of non-local correlations. To recover and utilize the perfect properties of GHZ correlations, the observers should choose the spin variables to be measured appropriately for the desired task. Our intention is to explore the effects of the relative motion between the sender and the receiver, which may play a role in future relativistic experiments testing the strong conflict between local realism and quantum mechanics, or which may be useful in future quantum information processing using GHZ correlations at high velocities.
The non-relativistic version of the GHZ experiment is as follows \cite{Bouwmeester1,Mermin1}. Consider three spin-$\frac{1}{2}$ particles prepared in the state \begin{equation}
|\psi\rangle=\frac{1}{\sqrt{2}}[|\uparrow;\uparrow;\uparrow\rangle+
|\downarrow;\downarrow;\downarrow\rangle] \end{equation}
Here $|\uparrow\rangle$ represents ``up'' along the $z$ axis and
$|\downarrow\rangle$ signifies ``down'' along the $z$ axis. Now consider the results of the following products of spin measurements, each made on the state $|\psi\rangle$.\\ (i) Particle $1$ along $y$, particle $2$ along $y$, particle $3$ along $x$. Since
$\sigma_{y}\sigma_{y}\sigma_{x}|\psi\rangle=-|\psi\rangle$, the product of the $yyx$ measurements should be $-1$, i.e., the expectation value of the three-particle spin correlation in the direction $yyx$ (denoted by $E(yyx)$) should be $-1$.\\ (ii) Particle $1$ along $y$, particle $2$ along $x$, particle $3$ along $y$. The product of the $yxy$ measurements $E(yxy)$ should be $-1$.\\ (iii) Particle $1$ along $x$, particle $2$ along $y$, particle $3$ along $y$. The product of the $xyy$ measurements $E(xyy)$ should be $-1$.\\ (iv) Particle $1$ along $x$, particle $2$ along $x$, particle $3$ along $x$. In this case, the product of the $xxx$ measurements $E(xxx)$ should be $+1$.
Thus the spin correlations in the directions $yyx$, $yxy$, $xyy$, and $xxx$ are maximally correlated; these are known as GHZ correlations. The positive sign in the final case is crucial for differentiating between quantum-mechanical and hidden-variable descriptions of reality, because local realistic theories predict the product to be $-1$. Whenever local realism predicts that a specific result definitely occurs for a measurement on one of the particles' spins given the results for the other two, quantum physics definitely predicts the opposite result. Thus, using GHZ correlations, the predictions of quantum mechanics conflict with local realism definitely, whereas in the case of EPR experiments they conflict with local realism only statistically. Experimentally, GHZ's prediction was confirmed in Ref.~\cite{Nelson} using nuclear magnetic resonance.
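These four products are easy to verify numerically; the following small sketch (ours, not part of the original letter) evaluates the correlations on the GHZ state:

```python
import numpy as np

# Pauli matrices and spin basis vectors
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# GHZ state (|up,up,up> + |dn,dn,dn>)/sqrt(2)
psi = (kron3(up, up, up) + kron3(dn, dn, dn)) / np.sqrt(2)

def E(o1, o2, o3):
    """Three-particle spin correlation <psi| o1 x o2 x o3 |psi>."""
    return float(np.real(psi.conj() @ (kron3(o1, o2, o3) @ psi)))
```

`E(sy, sy, sx)`, `E(sy, sx, sy)` and `E(sx, sy, sy)` all evaluate to $-1$, while `E(sx, sx, sx)` gives $+1$, reproducing cases (i)--(iv).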
In the relativistic case, a Lorentz transformation induces a unitary transformation on vectors in the Hilbert space \cite{Wigner}. Suppose that a massive spin-$\frac{1}{2}$ particle moves with the laboratory-frame $4$-momentum $p=(m\cosh\xi,m\sinh\xi\sin\theta\cos\phi, m\sinh\xi\sin\theta\sin\phi,m\sinh\xi\cos\theta)$, where the rapidity is $\vec{\xi}=\xi\hat{\textbf{p}}$ with the unit vector $\hat{\bf{p}}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$. An observer moves along the $z$-axis with velocity $\vec{V}$ in the laboratory frame. The rest frame of the observer is obtained by performing a Lorentz transformation
$\Lambda=L^{-1}(\vec{\chi})=L(-\vec{\chi})$ on the laboratory frame with the rapidity $-\vec{\chi}=\chi(-\hat{\textbf{z}})$. Here $V=\tanh\chi$ and $-\hat{\textbf{z}}=(0,0,-1)$ is the normal vector in the boost direction. In this frame, the observer describes the $4$-momentum eigenstate $|p\lambda\rangle$ as
$U(\Lambda)|p\lambda\rangle$ (for a review of momentum eigenstates, one may refer to Ref.\cite{Weinberg,Peres1,Gingrich}). A straightforward calculation shows that \cite{Terashima,Terashima1,Ahn1,Ahn2}: \begin{eqnarray}
U(\Lambda)|p\uparrow\rangle&=&
\cos\frac{\delta}{2}|\Lambda p\uparrow\rangle+e^{i\phi}\sin\frac{\delta}{2}|\Lambda p\downarrow\rangle\\
U(\Lambda)|p\downarrow\rangle&=&
-e^{-i\phi}\sin\frac{\delta}{2}|\Lambda p\uparrow\rangle+\cos\frac{\delta}{2}|\Lambda p\downarrow\rangle \end{eqnarray} where $\uparrow$ and $\downarrow$ represent ``up'' and ``down'' along the $z$-axis, respectively. The Wigner rotation is indeed a rotation about the direction $\hat{\delta}=
(-\bf{\hat{\textbf{z}}}\times\hat{\textbf{p}})/|\bf{\hat{\textbf{z}}}\times\hat{\textbf{p}}|$ through the angle $\delta$: \begin{eqnarray} \cos\delta&=&\frac{A-B(\hat{\textbf{z}}\cdot\hat{\textbf{p}})+C(\hat{\textbf{z}}\cdot\hat{\textbf{p}})^{2}} {D-B(\hat{\textbf{z}}\cdot\hat{\textbf{p}})}\\ \sin\delta\hat{\delta}&=&-\frac{B-C(\hat{\textbf{z}}\cdot\hat{\textbf{p}})}{D-B(\hat{\textbf{z}}\cdot\hat{\textbf{p}})}\hat{\textbf{z}}\times\hat{\textbf{p}} \end{eqnarray} with \begin{eqnarray} A&=&\cosh\xi+\cosh\chi\\ B&=&\sinh\xi\sinh\chi\\ C&=&(\cosh\xi-1)(\cosh\chi-1)\\ D&=&\cosh\xi\cosh\chi+1 \end{eqnarray}
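Equations (4)--(9) can be collected into a small numerical sketch (ours); as expected, $\delta$ vanishes when the observer is at rest ($\chi=0$) or when the boost is parallel to the particle's momentum ($\theta=0$):

```python
import numpy as np

def wigner_cos_delta(xi, chi, theta):
    """cos(delta) of the Wigner rotation (Eqs. (4)-(9)) for a particle with
    rapidity xi at polar angle theta, seen by an observer boosted along z
    with rapidity chi."""
    zp = np.cos(theta)  # z-hat . p-hat
    A = np.cosh(xi) + np.cosh(chi)
    B = np.sinh(xi) * np.sinh(chi)
    C = (np.cosh(xi) - 1.0) * (np.cosh(chi) - 1.0)
    D = np.cosh(xi) * np.cosh(chi) + 1.0
    return (A - B * zp + C * zp**2) / (D - B * zp)
```

In the ultra-relativistic perpendicular configuration ($\xi,\chi$ large, $\theta=\pi/2$) the value of $\cos\delta$ tends to $0$, i.e. $\delta\to\pi/2$.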
A GHZ state for three massive particles in the laboratory frame reads: \begin{equation}
|\psi\rangle=\frac{1}{\sqrt{2}}[|p_{1}\uparrow;p_{2}\uparrow;p_{3}\uparrow\rangle+
|p_{1}\downarrow;p_{2}\downarrow;p_{3}\downarrow\rangle] \end{equation} where $p_{i}=(m\cosh\xi_{i},m\sinh\xi_{i}\sin\theta_{i}\cos\phi_{i}, m\sinh\xi_{i}\sin\theta_{i}\sin\phi_{i},m\sinh\xi_{i}\cos\theta_{i})$ represents the $4$-momentum of the $i$-th particle in the laboratory frame, $i=1,2,3$. Without loss of generality, $\phi_{3}$ is set to $0$, which means the third particle moves in the $xOz$-plane. It is necessary to specify the motion of the particles explicitly because the Wigner rotation depends on the momentum. In this GHZ experiment, suppose the three spin-$\frac{1}{2}$ particles, prepared in the state (10), move apart from the GHZ source and are detected by three observers. Each observer measures a spin component along a chosen direction. Note that whichever frame is chosen for defining simultaneity, the experimentally observable result is the same \cite{Peres1,Peres2}, so we need not discuss the chronology of the spin measurements. Here we assume that the three observers move in the $z$ direction at the same velocity $\vec{V}$ in the laboratory frame. We are interested in the GHZ correlations in the common inertial frame in which the observers are all at rest. In this moving frame, the observers see the GHZ state (10) as:
\begin{multline}
U(\Lambda)|\psi\rangle= \frac{1}{\sqrt{2}}\bigg[\big(c_{1}c_{2}c_{3}-e^{-i(\phi_{1}+\phi_{2})}s_{1}s_{2}s_{3}\big)
|\Lambda p_{1}\uparrow;\Lambda p_{2}\uparrow;\Lambda p_{3}\uparrow\rangle\\ +\big(e^{-i(\phi_{1}+\phi_{2})}s_{1}s_{2}c_{3}+c_{1}c_{2}s_{3}\big)
|\Lambda p_{1}\uparrow;\Lambda p_{2}\uparrow;\Lambda p_{3}\downarrow\rangle\\ +\big(e^{-i\phi_{1}}s_{1}c_{2}s_{3}+e^{i\phi_{2}}c_{1}s_{2}c_{3}\big)
|\Lambda p_{1}\uparrow;\Lambda p_{2}\downarrow;\Lambda p_{3}\uparrow\rangle\\ +\big(-e^{-i\phi_{1}}s_{1}c_{2}c_{3}+e^{i\phi_{2}}c_{1}s_{2}s_{3}\big)
|\Lambda p_{1}\uparrow;\Lambda p_{2}\downarrow;\Lambda p_{3}\downarrow\rangle\\ +\big(e^{-i\phi_{2}}c_{1}s_{2}s_{3}+e^{i\phi_{1}}s_{1}c_{2}c_{3}\big)
|\Lambda p_{1}\downarrow;\Lambda p_{2}\uparrow;\Lambda p_{3}\uparrow\rangle\\ +\big(-e^{-i\phi_{2}}c_{1}s_{2}c_{3}+e^{i\phi_{1}}s_{1}c_{2}s_{3}\big)
|\Lambda p_{1}\downarrow;\Lambda p_{2}\uparrow;\Lambda p_{3}\downarrow\rangle\\ +\big(e^{i(\phi_{1}+\phi_{2})}s_{1}s_{2}c_{3}-c_{1}c_{2}s_{3}\big)
|\Lambda p_{1}\downarrow;\Lambda p_{2}\downarrow;\Lambda p_{3}\uparrow\rangle\\ +\big(c_{1}c_{2}c_{3}+e^{i(\phi_{1}+\phi_{2})}s_{1}s_{2}s_{3}\big)
|\Lambda p_{1}\downarrow;\Lambda p_{2}\downarrow;\Lambda p_{3}\downarrow\rangle\bigg] \end{multline}
where $c_{i}\equiv\cos{\frac{\delta_{i}}{2}},s_{i}\equiv\sin{\frac{\delta_{i}}{2}}.$
Here $\delta_{i}$ represents the Wigner angle of the $i$-th particle, defined in (4) and (5). The spin operators in the relativistic case are defined as \cite{Terashima,Terashima1,Terashima2}: \begin{eqnarray}
\sigma_{x}(p)&=&|p\uparrow\rangle\langle p\downarrow|+|p\downarrow\rangle\langle p\uparrow|\\
\sigma_{y}(p)&=&-i|p\uparrow\rangle\langle p\downarrow|+i|p\downarrow\rangle\langle p \uparrow|\\
\sigma_{z}(p)&=&|p\uparrow\rangle\langle p\uparrow|-|p\downarrow\rangle\langle p\downarrow| \end{eqnarray} Since the observers move in the $z$ direction, directions that are parallel in the laboratory frame remain parallel in the moving frame in which the observers are all at rest. However, it is not obvious whether the results of spin measurements along the same directions are still maximally correlated in this moving frame. We now investigate this question. Let the observer who receives particle $1$ measure $\sigma_{y}$, the observer who receives particle $2$ measure $\sigma_{y}$, and the observer who receives particle $3$ measure $\sigma_{x}$. In the moving frame, the expectation value of the three-particle spin correlation in the direction $yyx$ is then: \begin{equation} E(yyx)=-\cos\delta_{3} \end{equation} Similarly we obtain: \begin{eqnarray} E(yxy)&=&-\cos\delta_{2}\\ E(xyy)&=&-\cos\delta_{1} \end{eqnarray} We can see that the GHZ correlations that are maximally correlated in the laboratory frame no longer appear so in the moving frame. That is, in the moving frame, given the results of measurements on two of the particles, one cannot predict with certainty the result of the corresponding measurement on the third. In practice, this means that the relative motion between the source of entangled particles and the observers can alter the properties of the spin correlations by the time the observers receive the particles. Quantum information processing relying on these perfect correlations of the GHZ state then fails; for example, the GHZ experiment cannot be performed for lack of knowledge of the GHZ correlations in the moving frame.
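The transfer of the correlations can be checked numerically. The following sketch (ours) applies the single-particle Wigner rotations (2)--(3) to the GHZ state; for simplicity the check restricts to particles moving in the $xOz$-plane ($\phi_{1}=\phi_{2}=\phi_{3}=0$), where (16)--(18) follow directly:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)
dn = np.array([0, 1], dtype=complex)

def wigner_u(delta, phi):
    """Single-particle Wigner rotation, Eqs. (2)-(3):
    U|up> = c|up> + e^{i phi} s|dn>,  U|dn> = -e^{-i phi} s|up> + c|dn>."""
    c, s = np.cos(delta / 2), np.sin(delta / 2)
    return np.array([[c, -s * np.exp(-1j * phi)],
                     [s * np.exp(1j * phi), c]])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

def E_moving(ops, deltas, phis):
    """Correlation <U psi| o1 x o2 x o3 |U psi> for the boosted GHZ state."""
    psi = (kron3(up, up, up) + kron3(dn, dn, dn)) / np.sqrt(2)
    Utot = kron3(*(wigner_u(d, p) for d, p in zip(deltas, phis)))
    chi = Utot @ psi
    return float(np.real(chi.conj() @ (kron3(*ops) @ chi)))
```

With Wigner angles $(\delta_1,\delta_2,\delta_3)$ and all azimuths zero, the $yyx$, $yxy$ and $xyy$ correlations come out as $-\cos\delta_{3}$, $-\cos\delta_{2}$ and $-\cos\delta_{1}$ respectively.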
This effect occurs because the Lorentz transformation rotates the direction of the spin of each particle, as can be seen from (2) and (3). Since the Wigner rotation is in fact a local transformation, it preserves the entanglement of the state \cite{Alsing}. It is therefore reasonable that the GHZ correlations should be preserved along appropriately chosen directions. We point out that, to utilize GHZ correlations in the moving frame, the observers should choose the spin variables to be measured according to the Wigner rotation: \begin{eqnarray} \sigma_{x}(\Lambda p_{i})\rightarrow \sigma_{x^{\prime}}(\Lambda p_{i})&=&U(\Lambda)\sigma_{x}(\Lambda p_{i})U(\Lambda)^{+}\nonumber\\&=&(c_{i}^{2}-s_{i}^{2}\cos{2\phi_{i}})\sigma_{x}(\Lambda p_{i})-s_{i}^{2}\sin{2\phi_{i}}\sigma_{y}(\Lambda p_{i})\nonumber\\& &-2s_{i}c_{i}\cos{\phi_{i}}\sigma_{z}(\Lambda p_{i})\\ \sigma_{y}(\Lambda p_{i})\rightarrow \sigma_{y^{\prime}}(\Lambda p_{i})&=&U(\Lambda)\sigma_{y}(\Lambda p_{i})U(\Lambda)^{+}\nonumber\\&=&-s_{i}^{2}\sin{2\phi_{i}}\sigma_{x}(\Lambda p_{i})+(c_{i}^{2}+s_{i}^{2}\cos{2\phi_{i}})\sigma_{y}(\Lambda p_{i})\nonumber\\& &-2s_{i}c_{i}\sin{\phi_{i}}\sigma_{z}(\Lambda p_{i}) \end{eqnarray} where the index $i$ labels the $i$-th particle. GHZ correlations are thus recovered along new directions in the moving frame. For example, if the observer who receives particle $1$ measures spin along the direction $y_{1}^{\prime}=(-s_{1}^{2}\sin{2\phi_{1}},c_{1}^{2}+s_{1}^{2}\cos{2\phi_{1}},-2s_{1}c_{1}\sin{\phi_{1}})$, the observer who receives particle $2$ measures spin along the direction $y_{2}^{\prime}=(-s_{2}^{2}\sin{2\phi_{2}},c_{2}^{2}+s_{2}^{2}\cos{2\phi_{2}},-2s_{2}c_{2}\sin{\phi_{2}})$ and the observer who receives particle $3$ measures spin along the direction $x_{3}^{\prime}=(c_{3}^{2}-s_{3}^{2},0,-2s_{3}c_{3})$, the maximal correlation $E(y^{\prime}y^{\prime}x^{\prime})=-1$ is obtained again.
That is, the GHZ correlation in the direction $yyx$ in the laboratory frame transfers to the new direction $y^{\prime}y^{\prime}x^{\prime}$ seen from the moving frame. Similar conclusions hold for the $y^{\prime}x^{\prime}y^{\prime}$ and $x^{\prime}y^{\prime}y^{\prime}$ cases with a careful choice of spin variables according to (18) and (19).
Now we can perform the GHZ experiment in the moving frame. After sets of spin measurements along the directions $y^{\prime}y^{\prime}x^{\prime}$, $y^{\prime}x^{\prime}y^{\prime}$, and $x^{\prime}y^{\prime}y^{\prime}$, local realism predicts that the possible outcomes of an $x^{\prime}x^{\prime}x^{\prime}$ spin measurement must be those yielding an expectation value $E(x^{\prime}x^{\prime}x^{\prime})=-1$, while quantum theory predicts outcomes yielding $E(x^{\prime}x^{\prime}x^{\prime})=1$. The strong conflict between quantum theory and local realism can thus, in principle, be seen in the moving frame.
One can similarly test the Bell-type inequality for a three-qubit state in the relativistic case. In the non-relativistic case, any local realistic theory predicts
$|\varepsilon|=|E(xyy)+E(yxy)+E(yyx)-E(xxx)|\leq2$, while the maximal possible value
$|\varepsilon|=4$ is reached for the GHZ state \cite{Mermin,Klyshko}. If the directions of the spin measurements are fixed as $xyy$, $yxy$, $yyx$, and $xxx$, the degree of violation for the GHZ state in the moving frame where the observers are at rest equals: \begin{equation}
|\varepsilon|=4\sqrt{(c_{1}c_{2}c_{3})^4+(s_{1}s_{2}s_{3})^4-2(c_{1}c_{2}c_{3}s_{1}s_{2}s_{3})^2\cos[2(\phi_{1}+\phi_{2})]} \end{equation} The result depends on the velocities of both the particles and the observers with respect to the laboratory frame through the parameters $\theta$, $\phi$ and the Wigner angles $\delta$. If the particles
(or the observers) are all at rest in the laboratory frame, or if the moving direction of an observer is parallel to that of the particle to be measured, the degree of violation reaches the maximal value $|\varepsilon|=4$, the same outcome as in the non-relativistic case. Interestingly, in some cases the observers will find the degree of violation to be zero. For example, in the limit $\xi\rightarrow\infty$,
$\chi\rightarrow\infty$ with $\theta=\pi/2$, where the particles and the observers move perpendicularly at high velocities, the observers find $|\varepsilon|=0$. In fact, if the three particles are rotated by angles lying on the surface $\tan\frac{\delta_{1}}{2}\tan\frac{\delta_{2}}{2}\tan\frac{\delta_{3}}{2}=1$, the degree of violation for the GHZ state vanishes whenever $\phi_{1}+\phi_{2}=n\pi$.
The change in the degree of violation of the Bell-type inequality also results from the fact that the Wigner rotation rotates the directions of the spins, so that the perfect correlations transfer to different directions, as pointed out above. As can be seen from
(18) and (19), if the observers rotate the directions of their measurements in accordance with the Wigner rotation, the Bell-type inequality turns out to be maximally violated, with $|\varepsilon|=4$.
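Equation (20) and the limiting cases just discussed can be verified with a short sketch (ours):

```python
import numpy as np

def bell_violation(deltas, phi1, phi2):
    """|epsilon| of Eq. (20) for measurement directions fixed at
    xyy, yxy, yyx, xxx in the moving frame."""
    c = np.cos(np.asarray(deltas) / 2)
    s = np.sin(np.asarray(deltas) / 2)
    C2 = np.prod(c) ** 2          # (c1 c2 c3)^2
    S2 = np.prod(s) ** 2          # (s1 s2 s3)^2
    return 4 * np.sqrt(C2**2 + S2**2 - 2 * C2 * S2 * np.cos(2 * (phi1 + phi2)))
```

With all $\delta_i=0$ the violation is maximal, $|\varepsilon|=4$; with $\delta_1=\delta_2=\delta_3=\pi/2$ (so that $\tan\frac{\delta_{1}}{2}\tan\frac{\delta_{2}}{2}\tan\frac{\delta_{3}}{2}=1$) and $\phi_{1}+\phi_{2}=0$ it vanishes.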
In summary, we applied a specific Lorentz boost to the GHZ state and computed the expectation values of three-particle spin correlations in the transformed state. As a result, spin averages that are maximally correlated in the laboratory frame no longer appear so along the same directions seen from the moving frame. The entanglement of the GHZ state is not lost, however, and the perfect correlations of the GHZ state can always be found along different directions in the moving frame. As applications, we formulated the GHZ experiment in the relativistic case and discussed the Bell-type inequality for three qubits in the relativistic case. Whenever the relative motion between the source of entangled particles and the observers must be taken into account, this transfer of GHZ correlations should be considered in practice.
We thank Xiaoqiang Su, Ningbo Zhao, and Rengui Zhu for useful discussions. We also thank the anonymous referees for pointing out to us some obscure points and for their constructive comments. This project was supported by the National Basic Research Programme of China under Grant No 2001CB309310, the National Natural Science Foundation of China under Grant No 60173047, the Natural Science Foundation of Anhui Province, and China Post-doctoral Science Foundation.
\end{document} |
\begin{document}
\title{The spectral function of a first order elliptic system}
\maketitle \begin{abstract} We consider an elliptic self-adjoint first order pseudodifferential operator acting on columns of complex-valued half-densities over a connected compact manifold without boundary. The eigenvalues of the principal symbol are assumed to be simple but no assumptions are made on their sign, so the operator is not necessarily semi-bounded. We study the following objects: the propagator (time-dependent operator which solves the Cauchy problem for the dynamic equation), the spectral function (sum of squares of Euclidean norms of eigenfunctions evaluated at a given point of the manifold, with summation carried out over all eigenvalues between zero and a positive~$\lambda$) and the counting function (number of eigenvalues between zero and a positive~$\lambda$). We derive explicit two-term asymptotic formulae for all three. For the propagator ``asymptotic'' is understood as asymptotic in terms of smoothness, whereas for the spectral and counting functions ``asymptotic'' is understood as asymptotic with respect to $\lambda\to+\infty$. \end{abstract}
\textbf{Mathematics Subject Classification (2010).} Primary 35P20; Secondary 35J46, 35R01.
\
\textbf{Keywords.} Spectral theory, asymptotic distribution of eigenvalues.
\section{Main results} \label{Main results}
The aim of the paper is to extend the classical results of \cite{DuiGui} to systems. We are motivated by the observation that, to our knowledge, all previous publications on systems give formulae for the second asymptotic coefficient that are either incorrect or incomplete (i.e.~an algorithm for the calculation of the second asymptotic coefficient rather than an actual formula). The appropriate bibliographic review is presented in Section~\ref{Bibliographic review}.
Consider a first order classical pseudodifferential operator $A$ acting on columns $v=\begin{pmatrix}v_1&\ldots&v_m\end{pmatrix}^T$ of complex-valued half-densities over a connected compact $n$-dimensional manifold $M$ without boundary. Throughout this paper we assume that $m,n\ge2\,$.
We assume the coefficients of the operator $A$ to be infinitely smooth. We also assume that the operator $A$ is formally self-adjoint (symmetric): $\int_Mw^*Av\,dx=\int_M(Aw)^*v\,dx$ for all infinitely smooth $v,w:M\to\mathbb{C}^m$. Here and further on the superscript $\,{}^*\,$ in matrices, rows and columns indicates Hermitian conjugation in $\mathbb{C}^m$ and $dx:=dx^1\ldots dx^n$, where $x=(x^1,\ldots,x^n)$ are local coordinates on $M$.
Let $A_1(x,\xi)$ be the principal symbol of the operator $A$. Here $\xi=(\xi_1,\ldots,\xi_n)$ is the variable dual to the position variable $x$; in physics literature the $\xi$ would be referred to as \emph{momentum}. Our principal symbol $A_1$ is an $m\times m$ Hermitian matrix-function on $T'M:=T^*M\setminus\{\xi=0\}$, i.e.~on the cotangent bundle with the zero section removed.
Let $h^{(j)}(x,\xi)$ be the eigenvalues of the principal symbol. We assume these eigenvalues to be nonzero (this is a version of the ellipticity condition) but do not make any assumptions on their sign. We also assume that the eigenvalues $h^{(j)}(x,\xi)$ are simple for all $(x,\xi)\in T'M$. The techniques developed in our paper do not work in the case when eigenvalues of the principal symbol have variable multiplicity, though they could probably be adapted to the case of constant multiplicity different from multiplicity 1. The use of the letter ``$h$'' for an eigenvalue of the principal symbol is motivated by the fact that later it will take on the role of a Hamiltonian, see formula (\ref{Hamiltonian system of equations}).
We enumerate the eigenvalues of the principal symbol $h^{(j)}(x,\xi)$ in increasing order, using a positive index $j=1,\ldots,m^+$ for positive $h^{(j)}(x,\xi)$ and a negative index $j=-1,\ldots,-m^-$ for negative $h^{(j)}(x,\xi)$. Here $m^+$ is the number of positive eigenvalues of the principal symbol and $m^-$ is the number of negative ones. Of course, $m^++m^-=m$.
Under the above assumptions $A$ is a self-adjoint operator, in the full functional analytic sense, in the Hilbert space $L^2(M;\mathbb{C}^m)$ (Hilbert space of square integrable complex-valued column ``functions'') with domain $H^1(M;\mathbb{C}^m)$ (Sobolev space of complex-valued column ``functions'' which are square integrable together with their first partial derivatives) and the spectrum of $A$ is discrete. These facts are easily established by constructing the parametrix (approximate inverse) of the operator $A+iI$.
Let $\lambda_k$ and $v_k=\begin{pmatrix}v_{k1}(x)&\ldots&v_{km}(x)\end{pmatrix}^T$ be the eigenvalues and eigenfunctions of the operator $A$. The eigenvalues $\lambda_k$ are enumerated in increasing order with account of multiplicity, using a positive index $k=1,2,\ldots$ for positive $\lambda_k$ and a nonpositive index $k=0,-1,-2,\ldots$ for nonpositive $\lambda_k$. If the operator $A$ is bounded from below (i.e.~if $m^-=0$) then the index $k$ runs from some integer value to $+\infty$; if the operator $A$ is bounded from above (i.e.~if $m^+=0$) then the index $k$ runs from $-\infty$ to some integer value; and if the operator $A$ is unbounded from above and from below (i.e.~if $m^+\ne0$ and $m^-\ne0$) then the index $k$ runs from $-\infty$ to $+\infty$.
\
We will be studying the following three objects.
\
\textbf{Object 1.} Our first object of study is the \emph{propagator}, which is the one-parameter family of operators defined as \begin{equation} \label{definition of wave group} U(t):=e^{-itA} =\sum_k e^{-it\lambda_k}v_k(x)\int_M[v_k(y)]^*(\,\cdot\,)\,dy\,, \end{equation} $t\in\mathbb{R}$. The propagator provides a solution to the Cauchy problem \begin{equation} \label{initial condition most basic}
\left.w\right|_{t=0}=v \end{equation} for the dynamic equation \begin{equation} \label{dynamic equation most basic} D_tw+Aw=0\,, \end{equation} where $D_t:=-i\partial/\partial t$. Namely, it is easy to see that if the column of half-densities $v=v(x)$ is infinitely smooth, then, setting $\,w:=U(t)\,v$, we get a time-dependent column of half-densities $w(t,x)$ which is also infinitely smooth and which satisfies the equation (\ref{dynamic equation most basic}) and the initial condition (\ref{initial condition most basic}). The use of the letter ``$U$'' for the propagator is motivated by the fact that for each $t$ the operator $U(t)$ is unitary.
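As a finite-dimensional illustration (ours, not part of the paper's pseudodifferential construction): for a Hermitian matrix $A$, the propagator $U(t)=e^{-itA}$ built from the spectral decomposition is unitary, and $w(t)=U(t)v$ satisfies the dynamic equation $D_tw+Aw=0$, i.e. $\mathrm{d}w/\mathrm{d}t=-iAw$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2          # Hermitian, a toy analogue of the operator A
v = rng.standard_normal(4) + 1j * rng.standard_normal(4)

lam, V = np.linalg.eigh(A)        # spectral decomposition A = V diag(lam) V*

def U(t):
    """Propagator U(t) = exp(-itA), built from the eigendecomposition."""
    return V @ np.diag(np.exp(-1j * t * lam)) @ V.conj().T

def w(t):
    return U(t) @ v

# Residual of the dynamic equation dw/dt = -iAw,
# estimated with a central finite difference.
t, h = 0.7, 1e-6
residual = (w(t + h) - w(t - h)) / (2 * h) + 1j * (A @ w(t))
```

The residual is of the order of the finite-difference error, and $U(t)U(t)^*$ is the identity to machine precision.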
\
\textbf{Object 2.} Our second object of study is the \emph{spectral function}, which is the real density defined as \begin{equation} \label{definition of spectral function}
e(\lambda,x,x):=\sum_{0<\lambda_k<\lambda}\|v_k(x)\|^2, \end{equation}
where $\|v_k(x)\|^2:=[v_k(x)]^*v_k(x)$ is the square of the Euclidean norm of the eigenfunction $v_k$ evaluated at the point $x\in M$ and $\lambda$ is a positive parameter (spectral parameter).
\
\textbf{Object 3.} Our third and final object of study is the \emph{counting function} \begin{equation} \label{definition of counting function} N(\lambda):=\,\sum_{0<\lambda_k<\lambda}1\ =\int_Me(\lambda,x,x)\,dx\,. \end{equation} In other words, $N(\lambda)$ is the number of eigenvalues $\lambda_k$ between zero and $\lambda$.
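The relation $N(\lambda)=\int_Me(\lambda,x,x)\,dx$ can be illustrated in a toy discrete model: replace $M$ by finitely many points, the half-densities $v_k$ by unit vectors in $\mathbb{C}^m$, and the integral over $M$ by a sum over components; the identity then holds simply because each eigenvector has unit norm.

```python
import numpy as np

# Toy discrete model: e(lam, x, x) = sum_{0 < lam_k < lam} |v_k(x)|^2, and
# summing over the "points" x recovers the counting function N(lam).
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2
lam, V = np.linalg.eigh(A)          # columns of V: orthonormal eigenvectors

def spectral_function(lmbda):
    sel = (lam > 0) & (lam < lmbda)
    return (np.abs(V[:, sel]) ** 2).sum(axis=1)     # e(lmbda, x, x)

def counting_function(lmbda):
    return int(((lam > 0) & (lam < lmbda)).sum())   # N(lmbda)

assert np.isclose(spectral_function(2.0).sum(), counting_function(2.0))
```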
\
It is natural to ask why, in defining the spectral function (\ref{definition of spectral function}) and the counting function (\ref{definition of counting function}), we chose to sum over all \emph{positive} eigenvalues up to a given positive $\lambda$ rather than over all \emph{negative} eigenvalues down to a given negative $\lambda$. There is no particular reason: one case reduces to the other by the change of operator $A\mapsto-A$. This issue will be revisited in Section~\ref{Spectral asymmetry}.
Further on we assume that $m^+>0$, i.e.~that the operator $A$ is unbounded from above.
\
Our objectives are as follows.
\
\textbf{Objective 1.} We aim to construct the propagator (\ref{definition of wave group}) explicitly in terms of oscillatory integrals, modulo an integral operator with an integral kernel infinitely smooth in the variables $t$, $x$ and $y$.
\
\textbf{Objectives 2 and 3.} We aim to derive, under appropriate assumptions on Hamiltonian trajectories, two-term asymptotics for the spectral function (\ref{definition of spectral function}) and the counting function (\ref{definition of counting function}), i.e.~formulae of the type \begin{equation} \label{two-term asymptotic formula for spectral function} e(\lambda,x,x)=a(x)\,\lambda^n+b(x)\,\lambda^{n-1}+o(\lambda^{n-1}), \end{equation} \begin{equation} \label{two-term asymptotic formula for counting function} N(\lambda)=a\lambda^n+b\lambda^{n-1}+o(\lambda^{n-1}) \end{equation} as $\lambda\to+\infty$. Obviously, here we expect the real constants $a$, $b$ and real densities $a(x)$, $b(x)$ to be related in accordance with \begin{equation} \label{a via a(x)} a=\int_Ma(x)\,dx, \end{equation} \begin{equation} \label{b via b(x)} b=\int_Mb(x)\,dx. \end{equation}
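A simple scalar example ($m=1$, $n=1$), chosen for illustration and not covered by the trajectory assumptions below: $A=\sqrt{-d^2/dx^2}$ on the unit circle, with eigenvalues $|k|$, $k\in\mathbb{Z}$, and Hamiltonian $h(x,\xi)=|\xi|$. Formula (\ref{formula for a(x)}) gives $a(x)=(2\pi)^{-1}\,|\{\xi:|\xi|<1\}|=1/\pi$, hence $a=2$, and indeed $N(\lambda)$ tracks $2\lambda$ to within $O(1)$.

```python
import math

# N(lam) for A = sqrt(-d^2/dx^2) on the circle: eigenvalues |k|, k in Z,
# each nonzero |k| occurring twice (k and -k); k = 0 is excluded since
# N counts eigenvalues strictly between 0 and lam.
def N(lam):
    return 2 * len([k for k in range(1, math.ceil(lam)) if k < lam])

for lam in (10.5, 100.25, 1000.75):
    assert abs(N(lam) - 2 * lam) <= 2   # one-term Weyl law N ~ a*lam with a = 2
```

Note that on the circle every Hamiltonian trajectory is a loop, and $N(\lambda)-2\lambda$ oscillates rather than converging: for $n=1$ the remainder in (\ref{two-term asymptotic formula for counting function}) would be $o(1)$, so this example shows why assumptions on Hamiltonian trajectories are unavoidable for two-term asymptotics.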
\
It is well known that the above three objectives are closely related: if one achieves Objective 1, then Objectives 2 and 3 follow via Fourier Tauberian theorems \cite{DuiGui,mybook,ivrii_book,Safarov_Tauberian_Theorems}.
\
We are now in a position to state our main results.
\
\textbf{Result 1.} We construct the propagator as a sum of $m$ oscillatory integrals \begin{equation} \label{wave group as a sum of oscillatory integrals} U(t)\overset{\operatorname{mod}C^\infty}=\sum_j U^{(j)}(t)\,, \end{equation} where the phase function of each oscillatory integral $U^{(j)}(t)$ is associated with the corresponding Hamiltonian $h^{(j)}(x,\xi)$. The symbol of the oscillatory integral $U^{(j)}(t)$ is a complex-valued $m\times m$ matrix-function $u^{(j)}(t;y,\eta)$, where $y=(y^1,\ldots,y^n)$ is the position of the source of the wave (i.e.~this is the same $y$ that appears in formula (\ref{definition of wave group})) and $\eta=(\eta_1,\ldots,\eta_n)$ is the corresponding dual variable
(covector at the point $y$). When $|\eta|\to+\infty$, the symbol admits an asymptotic expansion \begin{equation} \label{decomposition of symbol of OI into homogeneous components} u^{(j)}(t;y,\eta)=u^{(j)}_0(t;y,\eta)+u^{(j)}_{-1}(t;y,\eta)+\ldots \end{equation} into components positively homogeneous in $\eta$, with the subscript indicating degree of homogeneity.
The formula for the principal symbol of the oscillatory integral $U^{(j)}(t)$ is known \cite{SafarovDSc,NicollPhD} and reads as follows: \begin{multline} \label{formula for principal symbol of oscillatory integral} u^{(j)}_0(t;y,\eta)= [v^{(j)}(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))] \,[v^{(j)}(y,\eta)]^* \\ \times\exp \left( -i\int_0^tq^{(j)}(x^{(j)}(\tau;y,\eta),\xi^{(j)}(\tau;y,\eta))\,d\tau \right), \end{multline} where $v^{(j)}(z,\zeta)$ is the normalised eigenvector of the principal symbol $A_1(z,\zeta)$ corresponding to the eigenvalue (Hamiltonian) $h^{(j)}(z,\zeta)$, \ $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ is the Hamiltonian trajectory originating from the point $(y,\eta)$, i.e.~solution of the system of ordinary differential equations (the dot denotes differentiation in $t$) \begin{equation} \label{Hamiltonian system of equations} \dot x^{(j)}=h^{(j)}_\xi(x^{(j)},\xi^{(j)}), \qquad \dot\xi^{(j)}=-h^{(j)}_x(x^{(j)},\xi^{(j)}) \end{equation}
subject to the initial condition $\left.(x^{(j)},\xi^{(j)})\right|_{t=0}=(y,\eta)$, \ $q^{(j)}:T'M\to\mathbb{R}$ is the function \begin{equation} \label{phase appearing in principal symbol} q^{(j)}:=[v^{(j)}]^*A_\mathrm{sub}v^{(j)} -\frac i2 \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} -i[v^{(j)}]^*\{v^{(j)},h^{(j)}\} \end{equation} and \begin{equation} \label{definition of subprincipal symbol} A_\mathrm{sub}(z,\zeta):= A_0(z,\zeta)+\frac i2 (A_1)_{z^\alpha\zeta_\alpha}(z,\zeta) \end{equation} is the subprincipal symbol of the operator $A$, with the subscripts $z^\alpha$ and $\zeta_\alpha$ indicating partial derivatives and the repeated index $\alpha$ indicating summation over $\alpha=1,\ldots,n$. Curly brackets in formula (\ref{phase appearing in principal symbol}) denote the Poisson bracket on matrix-functions \begin{equation} \label{Poisson bracket on matrix-functions} \{P,R\}:=P_{z^\alpha}R_{\zeta_\alpha}-P_{\zeta_\alpha}R_{z^\alpha} \end{equation} and its further generalisation \begin{equation} \label{generalised Poisson bracket on matrix-functions} \{P,Q,R\}:=P_{z^\alpha}QR_{\zeta_\alpha}-P_{\zeta_\alpha}QR_{z^\alpha}\,. \end{equation}
As the derivation of formula (\ref{formula for principal symbol of oscillatory integral}) was previously performed only in theses \cite{SafarovDSc,NicollPhD}, we repeat it in Sections \ref{Algorithm for the construction of the wave group} and \ref{Leading transport equations} of our paper. Our derivation differs slightly from that in \cite{SafarovDSc} and \cite{NicollPhD}.
Formula (\ref{formula for principal symbol of oscillatory integral}) is invariant under changes of local coordinates on the manifold $M$, i.e.~elements of the $m\times m$ matrix-function $u^{(j)}_0(t;y,\eta)$ are scalars on $\mathbb{R}\times T'M$. Moreover, formula (\ref{formula for principal symbol of oscillatory integral}) is invariant under the transformation of the eigenvector of the principal symbol \begin{equation} \label{gauge transformation of the eigenvector} v^{(j)}\mapsto e^{i\phi^{(j)}}v^{(j)}, \end{equation} where \begin{equation} \label{phase appearing in gauge transformation} \phi^{(j)}:T'M\to\mathbb{R} \end{equation} is an arbitrary smooth function. When some quantity is defined up to the action of a certain transformation, theoretical physicists refer to such a transformation as a \emph{gauge transformation}. We follow this tradition. Note that our particular gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}) is quite common in quantum mechanics: when $\phi^{(j)}$ is a function of the position variable $x$ only (i.e.~when $\phi^{(j)}:M\to\mathbb{R}$) this gauge transformation is associated with electromagnetism.
Both Y.~Safarov \cite{SafarovDSc} and W.J.~Nicoll \cite{NicollPhD} assumed that the operator $A$ is semi-bounded from below, but this assumption is not essential and their formula (\ref{formula for principal symbol of oscillatory integral}) remains true in the more general case that we are dealing with.
However, knowing the principal symbol (\ref{formula for principal symbol of oscillatory integral}) of the oscillatory integral $U^{(j)}(t)$ is not enough if one wants to derive the two-term asymptotics (\ref{two-term asymptotic formula for spectral function}) and (\ref{two-term asymptotic formula for counting function}). One also needs information about $u^{(j)}_{-1}(t;y,\eta)$, the component of the symbol of the oscillatory integral $U^{(j)}(t)$ which is positively homogeneous in $\eta$ of degree~$-1$, see formula (\ref{decomposition of symbol of OI into homogeneous components}). The problem here is that $u^{(j)}_{-1}(t;y,\eta)$ is not a true invariant, in the sense that it depends on the choice of phase function in the oscillatory integral. We overcome this difficulty by observing that $U^{(j)}(0)$ is a pseudodifferential operator and, hence, has a well-defined subprincipal symbol $[U^{(j)}(0)]_\mathrm{sub}$. We prove that \begin{equation} \label{subprincipal symbol of OI at time zero} \operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} =-i\{[v^{(j)}]^*,v^{(j)}\} \end{equation} and subsequently show that the information contained in formulae (\ref{formula for principal symbol of oscillatory integral}) and (\ref{subprincipal symbol of OI at time zero}) is sufficient for the derivation of the two-term asymptotics (\ref{two-term asymptotic formula for spectral function}) and (\ref{two-term asymptotic formula for counting function}).
Note that the RHS of formula (\ref{subprincipal symbol of OI at time zero}) is invariant under the gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}).
Formula~(\ref{subprincipal symbol of OI at time zero}) plays a central role in our paper. Sections~\ref{Algorithm for the construction of the wave group} and~\ref{Leading transport equations} provide auxiliary material needed for the proof of formula~(\ref{subprincipal symbol of OI at time zero}), whereas the actual proof of formula~(\ref{subprincipal symbol of OI at time zero}) is given in Section~\ref{Proof of formula}.
Let us elaborate briefly on the geometric meaning of the RHS of (\ref{subprincipal symbol of OI at time zero}) (a more detailed exposition is presented in Section~\ref{U(1) connection}). The eigenvector of the principal symbol is defined up to a gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}) so it is natural to introduce a $\mathrm{U}(1)$ connection on $T'M$ as follows: when parallel transporting an eigenvector of the principal symbol along a curve in $T'M$ we require that the derivative of the eigenvector along the curve be orthogonal to the eigenvector itself. This is equivalent to the introduction of an (intrinsic) electromagnetic field on $T'M$, with the $2n$-component real quantity \begin{equation} \label{electromagnetic covector potential} i\,(\,[v^{(j)}]^*v^{(j)}_{x^\alpha}\,,\,[v^{(j)}]^*v^{(j)}_{\xi_\gamma}\,) \end{equation} playing the role of the electromagnetic covector potential. Our quantity (\ref{electromagnetic covector potential}) is a 1-form on $T'M$, rather than on $M$ itself as is the case in ``traditional'' electromagnetism. The above $\mathrm{U}(1)$ connection generates curvature which is a 2-form on $T'M$, an analogue of the electromagnetic tensor. Out of this curvature 2-form one can construct, by contraction of indices, a real scalar. This scalar curvature is the expression appearing in the RHS of formula (\ref{subprincipal symbol of OI at time zero}).
Observe now that $\sum_jU^{(j)}(0)$ is the identity operator on half-densities. The subprincipal symbol of the identity operator is zero, so formula (\ref{subprincipal symbol of OI at time zero}) implies \begin{equation} \label{sum of curvatures is zero} \sum_j\{[v^{(j)}]^*,v^{(j)}\}=0. \end{equation} One can check the identity (\ref{sum of curvatures is zero}) directly, without constructing the oscillatory integrals $U^{(j)}(t)$: it follows from the fact that the $v^{(j)}(x,\xi)$ form an orthonormal basis, see end of Section \ref{U(1) connection} for details. We mentioned the identity (\ref{sum of curvatures is zero}) in order to highlight, once again, the fact that the curvature effects we have identified are specific to systems and do not have an analogue in the scalar case.
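The identity (\ref{sum of curvatures is zero}) can be tested numerically for an explicit orthonormal frame. The sketch below takes $m=2$ with one $x$ and one $\xi$ variable, so that $\{P,R\}=P_xR_\xi-P_\xi R_x$; the angle functions $\theta$, $\phi$ are arbitrary smooth functions of $(x,\xi)$ chosen purely for illustration.

```python
import cmath

# Explicit orthonormal frame v1, v2 in C^2 depending on (x, xi).
theta = lambda x, xi: x + xi ** 2
phi   = lambda x, xi: x * xi

def frame(x, xi):
    t, p = theta(x, xi), phi(x, xi)
    v1 = (cmath.cos(t), cmath.exp(1j * p) * cmath.sin(t))
    v2 = (-cmath.exp(-1j * p) * cmath.sin(t), cmath.cos(t))
    return v1, v2

def bracket_vstar_v(j, x, xi, h=1e-5):
    # {[v]^*, v} = (d_x v)^* (d_xi v) - (d_xi v)^* (d_x v), central differences
    f = lambda a, b: frame(a, b)[j]
    dx  = [(f(x + h, xi)[c] - f(x - h, xi)[c]) / (2 * h) for c in range(2)]
    dxi = [(f(x, xi + h)[c] - f(x, xi - h)[c]) / (2 * h) for c in range(2)]
    return sum(a.conjugate() * b for a, b in zip(dx, dxi)) \
         - sum(a.conjugate() * b for a, b in zip(dxi, dx))

x0, xi0 = 0.3, 0.7
c1 = bracket_vstar_v(0, x0, xi0)
c2 = bracket_vstar_v(1, x0, xi0)
assert abs(c1) > 1e-3          # each curvature term is nonzero ...
assert abs(c1 + c2) < 1e-6     # ... but the sum vanishes, as in the identity
```

Gauge invariance of each term under (\ref{gauge transformation of the eigenvector}) can be checked in the same way, since $[v^{(j)}]^*v^{(j)}\equiv1$ kills the extra terms produced by the phase factor.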
\
\textbf{Results 2 and 3.} We prove, under appropriate assumptions on Hamiltonian trajectories (see Theorems~\ref{theorem spectral function unmollified two term} and \ref{theorem counting function unmollified two term}), asymptotic formulae (\ref{two-term asymptotic formula for spectral function}) and (\ref{two-term asymptotic formula for counting function}) with \begin{equation} \label{formula for a(x)} a(x)=\sum_{j=1}^{m^+} \ \int\limits_{h^{(j)}(x,\xi)<1}{d{\hskip-1pt\bar{}}\hskip1pt}\xi\,, \end{equation} \begin{multline} \label{formula for b(x)} b(x)=-n\sum_{j=1}^{m^+} \ \int\limits_{h^{(j)}(x,\xi)<1} \Bigl( [v^{(j)}]^*A_\mathrm{sub}v^{(j)} \\ -\frac i2 \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} +\frac i{n-1}h^{(j)}\{[v^{(j)}]^*,v^{(j)}\} \Bigr)(x,\xi)\, {d{\hskip-1pt\bar{}}\hskip1pt}\xi\,, \end{multline} and $a$ and $b$ expressed via the above densities (\ref{formula for a(x)}) and (\ref{formula for b(x)}) as (\ref{a via a(x)}) and (\ref{b via b(x)}). In (\ref{formula for a(x)}) and (\ref{formula for b(x)}) \,${d{\hskip-1pt\bar{}}\hskip1pt}\xi$ is shorthand for ${d{\hskip-1pt\bar{}}\hskip1pt}\xi:=(2\pi)^{-n}\,d\xi =(2\pi)^{-n}\,d\xi_1\ldots d\xi_n$, and the Poisson bracket on matrix-functions $\{\,\cdot\,,\,\cdot\,\}$ and its further generalisation $\{\,\cdot\,,\,\cdot\,,\,\cdot\,\}$ are defined by formulae (\ref{Poisson bracket on matrix-functions}) and (\ref{generalised Poisson bracket on matrix-functions}) respectively.
To our knowledge, formula (\ref{formula for b(x)}) is a new result. Note that in \cite{SafarovDSc} this formula (more precisely, its version (\ref{b via b(x)}) integrated over $M$) was written incorrectly, without the curvature terms $\,-\frac{ni}{n-1}\int h^{(j)}\{[v^{(j)}]^*,v^{(j)}\}$. See also Section~\ref{Bibliographic review} where we give a more detailed bibliographic review.
It is easy to see that the right-hand sides of (\ref{formula for a(x)}) and (\ref{formula for b(x)}) behave as densities under changes of local coordinates on the manifold $M$ and that these expressions are invariant under gauge transformations (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}) of the eigenvectors of the principal symbol. Moreover, the right-hand sides of (\ref{formula for a(x)}) and (\ref{formula for b(x)}) are unitarily invariant, i.e.~invariant under the transformation of the operator \begin{equation} \label{unitary transformation of operator A} A\mapsto RAR^*, \end{equation} where \begin{equation} \label{matrix appearing in unitary transformation of operator} R:M\to\mathrm{U}(m) \end{equation} is an arbitrary smooth unitary matrix-function. The fact that the RHS of (\ref{formula for b(x)}) is unitarily invariant is non-trivial: the appropriate calculations are presented in Section~\ref{U(m) invariance}. The observation that without the curvature terms $\,-\frac{ni}{n-1}\int h^{(j)}\{[v^{(j)}]^*,v^{(j)}\}$ (as in \cite{SafarovDSc}) the RHS of (\ref{formula for b(x)}) is not unitarily invariant was a major motivating factor in the writing of this paper.
\
Formula (\ref{formula for b(x)}) is the main result of our paper. Note that even though the two-term asymptotic expansion (\ref{two-term asymptotic formula for spectral function}) holds only under certain assumptions on Hamiltonian trajectories (loops), the second asymptotic coefficient (\ref{formula for b(x)}) is, in itself, well-defined irrespective of how many loops we have. If one wishes to reformulate the asymptotic expansion (\ref{two-term asymptotic formula for spectral function}) in such a way that it remains valid without assumptions on the number of loops, this can easily be achieved, say, by taking a convolution with a function from Schwartz space $\mathcal{S}(\mathbb{R})$: see Theorem~\ref{theorem spectral function mollified}.
\section{Algorithm for the construction of the propagator} \label{Algorithm for the construction of the wave group}
We construct the propagator as a sum of $m$ oscillatory integrals (\ref{wave group as a sum of oscillatory integrals}) where each integral is of the form \begin{equation} \label{algorithm equation 1} U^{(j)}(t) = \int e^{i\varphi^{(j)}(t,x;y,\eta)} \,u^{(j)}(t;y,\eta) \,\varsigma^{(j)}(t,x;y,\eta)\,d_{\varphi^{(j)}}(t,x;y,\eta)\, (\ \cdot\ )\,dy\,{d{\hskip-1pt\bar{}}\hskip1pt}\eta\,. \end{equation} Here we use notation from the book \cite{mybook}, only adapted to systems. Namely, the expressions appearing in formula (\ref{algorithm equation 1}) have the following meaning.
\begin{itemize} \item The function $\varphi^{(j)}$ is a phase function, i.e.~a function $\mathbb{R}\times M\times T'M\to\mathbb{C}$ positively homogeneous in $\eta$ of degree 1 and satisfying the conditions \begin{equation} \label{algorithm equation 2} \varphi^{(j)}(t,x;y,\eta) =(x-x^{(j)}(t;y,\eta))^\alpha\,\xi^{(j)}_\alpha(t;y,\eta)
+O(|x-x^{(j)}(t;y,\eta)|^2), \end{equation} \begin{equation} \label{algorithm equation 3} \operatorname{Im}\varphi^{(j)}(t,x;y,\eta)\ge0, \end{equation} \begin{equation} \label{algorithm equation 4} \det\varphi^{(j)}_{x^\alpha\eta_\beta}(t,x^{(j)}(t;y,\eta);y,\eta)\ne0. \end{equation} Recall that according to Corollary 2.4.5 from \cite{mybook} we are guaranteed to have (\ref{algorithm equation 4}) if we choose a phase function \begin{multline} \label{algorithm equation 5} \varphi^{(j)}(t,x;y,\eta) =(x-x^{(j)}(t;y,\eta))^\alpha\,\xi^{(j)}_\alpha(t;y,\eta) \\ +\frac12C^{(j)}_{\alpha\beta}(t;y,\eta) \,(x-x^{(j)}(t;y,\eta))^\alpha\,(x-x^{(j)}(t;y,\eta))^\beta \\
+O(|x-x^{(j)}(t;y,\eta)|^3) \end{multline} with complex-valued symmetric matrix-function $C^{(j)}_{\alpha\beta}$ satisfying the strict inequality $\operatorname{Im}C^{(j)}>0$ (our original requirement (\ref{algorithm equation 3}) implies only the non-strict inequality $\operatorname{Im}C^{(j)}\ge0$). Note that even though the matrix-function $C^{(j)}_{\alpha\beta}$ is not a tensor, the inequalities $\operatorname{Im}C^{(j)}\ge0$ and $\operatorname{Im}C^{(j)}>0$ are invariant under transformations of local coordinates $x$; see Remark 2.4.9 in \cite{mybook} for details.
\item The quantity $u^{(j)}$ is the symbol of our oscillatory integral, i.e.~a complex-valued $m\times m$ matrix-function $\mathbb{R}\times T'M\to\mathbb{C}^{m^2}$ which admits the asymptotic expansion (\ref{decomposition of symbol of OI into homogeneous components}). The symbol is the unknown quantity in our construction.
\item The quantity $d_{\varphi^{(j)}}$ is defined in accordance with formula (2.2.4) from \cite{mybook} as \begin{equation} \label{algorithm equation 6} d_{\varphi^{(j)}}(t,x;y,\eta) :=({\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta})^{1/4}
=|\det\varphi^{(j)}_{x^\alpha\eta_\beta}|^{1/2} \,e^{\,i\arg({\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta})/4}. \end{equation} Note that in view of (\ref{algorithm equation 4}) our $d_{\varphi^{(j)}}$ is well-defined and smooth for $x$ close to $x^{(j)}(t;y,\eta)$. It is known \cite{mybook} that under coordinate transformations $d_{\varphi^{(j)}}$ behaves as a half-density in $x$ and as a half-density to the power $-1$ in $y$.
In formula (\ref{algorithm equation 6}) we wrote $({\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta})^{1/4}$ rather than $(\det\varphi^{(j)}_{x^\alpha\eta_\beta})^{1/2}$ in order to make this expression truly invariant under coordinate transformations. Recall that local coordinates $x$ and $y$ are chosen independently and that $\eta$ is a covector based at the point $y$. Consequently, $\det\varphi^{(j)}_{x^\alpha\eta_\beta}$ changes sign under inversion of one of the local coordinates $\,x^\alpha$, $\alpha=1,\ldots,n$, or $\,y^\beta$, $\beta=1,\ldots,n$, whereas ${\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta}$ retains its sign under inversion.
The choice of (smooth) branch of $\arg({\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta})$ is assumed to be fixed. Thus, for a given phase function $\varphi^{(j)}$ formula (\ref{algorithm equation 6}) defines the quantity $d_{\varphi^{(j)}}$ uniquely up to a factor $e^{ik\pi/2}$, $k=0,1,2,3$. Observe now that if we set $t=0$ and choose the same local coordinates for $x$ and $y$, we get $\varphi^{(j)}_{x^\alpha\eta_\beta}(0,y;y,\eta)=I$. This implies that we can fully specify the choice of branch of $\arg({\det}^2\varphi^{(j)}_{x^\alpha\eta_\beta})$ by requiring that
$d_{\varphi^{(j)}}(0,y;y,\eta)=1$.
The purpose of the introduction of the factor $d_{\varphi^{(j)}}$ in (\ref{algorithm equation 1}) is twofold. \begin{itemize} \item[(a)] It ensures that the symbol $u^{(j)}$ is a function on $\mathbb{R}\times T'M$ in the full differential geometric sense of the word, i.e.~that it is invariant under transformations of local coordinates $x$ and $y$. \item[(b)] It ensures that the principal symbol $u^{(j)}_0$ does not depend on the choice of phase function $\varphi^{(j)}$. See Remark 2.2.8 in \cite{mybook} for more details. \end{itemize}
\item The quantity $\varsigma^{(j)}$ is a smooth cut-off function $\mathbb{R}\times M\times T'M\to\mathbb{R}$ satisfying the following conditions. \begin{itemize} \item[(a)] $\varsigma^{(j)}(t,x;y,\eta)=0$ on the set
$\{(t,x;y,\eta):\ |h^{(j)}(y,\eta)|\le1/2\}$. \item[(b)] $\varsigma^{(j)}(t,x;y,\eta)=1$ on the intersection of a small conic neighbourhood of the set \begin{equation} \label{algorithm equation 7.1} \{(t,x;y,\eta):\ x=x^{(j)}(t;y,\eta)\} \end{equation}
with the set $\{(t,x;y,\eta):\ |h^{(j)}(y,\eta)|\ge1\}$. \item[(c)] $\varsigma^{(j)}(t,x;y,\lambda\eta)=\varsigma^{(j)}(t,x;y,\eta)$
for $\,|h^{(j)}(y,\eta)|\ge1$, $\,\lambda\ge1$. \end{itemize}
\item It is known (see Section 2.3 in \cite{mybook} for details) that Hamiltonian trajectories generated by a Hamiltonian $h^{(j)}(x,\xi)$ positively homogeneous in $\xi$ of degree~1 satisfy the identity \begin{equation} \label{algorithm equation 7.1.5} (x^{(j)}_\eta)^{\alpha\beta}\xi^{(j)}_\alpha=0, \end{equation} where $(x^{(j)}_\eta)^{\alpha\beta}:=\partial(x^{(j)})^\alpha/\partial\eta_\beta$. Formulae (\ref{algorithm equation 2}) and (\ref{algorithm equation 7.1.5}) imply \begin{equation} \label{algorithm equation 7.2} \varphi^{(j)}_\eta(t,x^{(j)}(t;y,\eta);y,\eta)=0. \end{equation} This allows us to apply the stationary phase method in the neighbourhood of the set (\ref{algorithm equation 7.1}) and disregard what happens away from it. \end{itemize}
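The identity (\ref{algorithm equation 7.1.5}) is easy to test numerically. The sketch below takes the illustrative degree-1 homogeneous Hamiltonian $h(x,\xi)=c(x)|\xi|$ with $c(x)=1+0.2\sin x^1$ (a choice made for this example only, $n=2$), integrates the Hamiltonian system (\ref{Hamiltonian system of equations}) by classical RK4, and checks both conservation of $h$ along the flow and the contraction $(x^{(j)}_\eta)^{\alpha\beta}\xi^{(j)}_\alpha\approx0$ via central differences in $\eta$.

```python
import math

def rhs(z):
    # z = (x1, x2, xi1, xi2);  h(x, xi) = (1 + 0.2 sin x1) |xi|
    x1, x2, p1, p2 = z
    c = 1.0 + 0.2 * math.sin(x1)
    r = math.hypot(p1, p2)
    # xdot = h_xi = c * xi/|xi|,  xidot = -h_x = -|xi| * grad c
    return (c * p1 / r, c * p2 / r, -r * 0.2 * math.cos(x1), 0.0)

def flow(y, eta, t=1.0, steps=2000):
    # classical 4th-order Runge-Kutta integration of the Hamiltonian system
    z = (y[0], y[1], eta[0], eta[1])
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(z)
        k2 = rhs(tuple(zi + dt / 2 * ki for zi, ki in zip(z, k1)))
        k3 = rhs(tuple(zi + dt / 2 * ki for zi, ki in zip(z, k2)))
        k4 = rhs(tuple(zi + dt * ki for zi, ki in zip(z, k3)))
        z = tuple(zi + dt / 6 * (a + 2 * b + 2 * c_ + e)
                  for zi, a, b, c_, e in zip(z, k1, k2, k3, k4))
    return z

y, eta = (0.1, -0.3), (0.8, 0.6)
x1, x2, p1, p2 = flow(y, eta)

# h is conserved along Hamiltonian trajectories
h = lambda z: (1.0 + 0.2 * math.sin(z[0])) * math.hypot(z[2], z[3])
assert abs(h((x1, x2, p1, p2)) - h((*y, *eta))) < 1e-8

# identity (7.1.5): the eta-Jacobian of x, contracted with xi, vanishes
delta = 1e-5
for b in range(2):
    ep = list(eta); ep[b] += delta
    em = list(eta); em[b] -= delta
    zp, zm = flow(y, tuple(ep)), flow(y, tuple(em))
    dx1 = (zp[0] - zm[0]) / (2 * delta)
    dx2 = (zp[1] - zm[1]) / (2 * delta)
    assert abs(dx1 * p1 + dx2 * p2) < 1e-5
```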
\
Our task now is to construct the symbols $u^{(j)}(t;y,\eta)$, $j=1,\ldots,m$, so that our oscillatory integrals $U^{(j)}(t)$, $j=1,\ldots,m$, satisfy the dynamic equations \begin{equation} \label{algorithm equation 8} (D_t+A(x,D_x))\,U^{(j)}(t)\overset{\operatorname{mod}C^\infty}=0 \end{equation} and the initial condition \begin{equation} \label{algorithm equation 9} \sum_jU^{(j)}(0)\overset{\operatorname{mod}C^\infty}=I\,, \end{equation} where $I$ is the identity operator on half-densities; compare with formulae (\ref{dynamic equation most basic}), (\ref{initial condition most basic}) and (\ref{wave group as a sum of oscillatory integrals}). Note that the pseudodifferential operator $A$ in formula (\ref{algorithm equation 8}) acts on the oscillatory integral $U^{(j)}(t)$ in the variable $x$; say, if $A$ is a differential operator, then in order to evaluate $A\,U^{(j)}(t)$ one has to perform the appropriate differentiations of the oscillatory integral (\ref{algorithm equation 1}) in the variable $x$. Following the conventions of Section 3.3 of \cite{mybook}, we emphasise this fact by writing the pseudodifferential operator as $A(x,D_x)$, where $D_{x^\alpha}:=-i\partial/\partial x^\alpha$.
We examine first the dynamic equation (\ref{algorithm equation 8}). We have
\[ (D_t+A(x,D_x))\,U^{(j)}(t)=F^{(j)}(t)\,, \]
where $F^{(j)}(t)$ is the oscillatory integral
\[ F^{(j)}(t) =
\int e^{i\varphi^{(j)}(t,x;y,\eta)} \,f^{(j)}(t,x;y,\eta) \,\varsigma^{(j)}(t,x;y,\eta)\,d_{\varphi^{(j)}}(t,x;y,\eta)\, (\ \cdot\ )\,dy\,{d{\hskip-1pt\bar{}}\hskip1pt}\eta \]
whose matrix-valued amplitude $f^{(j)}$ is given by the formula \begin{equation} \label{algorithm equation 12} f^{(j)}=D_tu^{(j)}+ \bigl( \varphi^{(j)}_t+(d_{\varphi^{(j)}})^{-1}(D_t d_{\varphi^{(j)}})+s^{(j)} \bigr) \,u^{(j)}, \end{equation} where the matrix-function $s^{(j)}(t,x;y,\eta)$ is defined as \begin{equation} \label{algorithm equation 13} s^{(j)}=e^{-i\varphi^{(j)}}(d_{\varphi^{(j)}})^{-1}\,A(x,D_x)\,(e^{i\varphi^{(j)}}d_{\varphi^{(j)}})\,. \end{equation}
Theorem 18.1 from \cite{shubin} gives us the following explicit asymptotic (in inverse powers of $\eta$) formula for the matrix-function (\ref{algorithm equation 13}): \begin{equation} \label{algorithm equation 14} s^{(j)}=(d_{\varphi^{(j)}})^{-1}\sum_{\bm\alpha} \frac1{{\bm\alpha}!}
\,A^{({\bm\alpha})}(x,\varphi^{(j)}_x)\,(D_z^{\bm\alpha}\chi^{(j)})\bigr|_{z=x}\ , \end{equation} where \begin{equation} \label{algorithm equation 15} \chi^{(j)}(t,z,x;y,\eta) =e^{i\psi^{(j)}(t,z,x;y,\eta)}d_{\varphi^{(j)}}(t,z;y,\eta), \end{equation} \begin{equation} \label{algorithm equation 16} \psi^{(j)}(t,z,x;y,\eta) =\varphi^{(j)}(t,z;y,\eta) -\varphi^{(j)}(t,x;y,\eta) -\varphi^{(j)}_{x^\beta}(t,x;y,\eta)\,(z-x)^\beta. \end{equation} In formula (\ref{algorithm equation 14}) \begin{itemize} \item ${\bm\alpha}:=(\alpha_1,\ldots,\alpha_n)$ is a multi-index (note the bold font which we use to distinguish multi-indices and individual indices), ${\bm\alpha}!:=\alpha_1!\cdots\alpha_n!\,$, $D_z^{\bm\alpha}:=D_{z^1}^{\alpha_1}\cdots D_{z^n}^{\alpha_n}$, $D_{z^\beta}:=-i\partial/\partial z^\beta$, \item $A(x,\xi)$ is the full symbol of the pseudodifferential operator $A$ written in local coordinates~$x$,
\item $A^{({\bm\alpha})}(x,\xi):=\partial_\xi^{\bm\alpha}A(x,\xi)$, $\partial_\xi^{\bm\alpha}:=\partial_{\xi_1}^{\alpha_1}\cdots\partial_{\xi_n}^{\alpha_n}$ and $\partial_{\xi_\beta}:=\partial/\partial\xi_\beta\,$. \end{itemize}
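Formulae (\ref{algorithm equation 13}) and (\ref{algorithm equation 14}) can be checked against each other in the simplest possible case: a scalar ($m=1$, $n=1$) first-order differential operator $A=-i\,a(x)\,d/dx+b(x)$ with full symbol $A(x,\xi)=a(x)\xi+b(x)$. Here $\psi^{(j)}_z$ vanishes on the diagonal $z=x$ and only the terms with $|{\bm\alpha}|\le1$ survive in (\ref{algorithm equation 14}), so both formulae reduce to the closed form $s=a\varphi_x+b-i\,a\,d_x/d$. In the sketch below, $a$, $b$, the phase and the factor $d$ are arbitrary smooth functions of $x$ alone (all other arguments frozen), chosen purely for illustration.

```python
import cmath, math

# Illustrative smooth choices (stand-ins, not from the paper):
a  = lambda x: 2.0 + math.sin(x)              # coefficient of -i d/dx
b  = lambda x: math.cos(3 * x)                # zeroth-order coefficient
ph = lambda x: x + 0.1 * x ** 2               # phase at fixed (t; y, eta)
dd = lambda x: math.exp(0.3 * math.cos(x))    # stand-in for d_phi

def s_by_definition(x, h=1e-5):
    # (13): s = e^{-i phi} d^{-1} A(e^{i phi} d), derivative by central differences
    F = lambda t: cmath.exp(1j * ph(t)) * dd(t)
    dF = (F(x + h) - F(x - h)) / (2 * h)
    return cmath.exp(-1j * ph(x)) / dd(x) * (-1j * a(x) * dF + b(x) * F(x))

def s_closed_form(x):
    # what (14) gives for this operator: a phi' + b - i a d'/d
    phx = 1 + 0.2 * x                         # phi'
    ddx = -0.3 * math.sin(x) * dd(x)          # d'
    return a(x) * phx + b(x) - 1j * a(x) * ddx / dd(x)

assert abs(s_by_definition(0.4) - s_closed_form(0.4)) < 1e-6
```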
When $|\eta|\to+\infty$ the matrix-valued amplitude $f^{(j)}(t,x;y,\eta)$ defined by formula (\ref{algorithm equation 12}) admits an asymptotic expansion \begin{equation} \label{algorithm equation 17} f^{(j)}(t,x;y,\eta)=f^{(j)}_1(t,x;y,\eta)+f^{(j)}_0(t,x;y,\eta)+f^{(j)}_{-1}(t,x;y,\eta)+\ldots \end{equation} into components positively homogeneous in $\eta$, with the subscript indicating degree of homogeneity. Note the following differences between formulae (\ref{decomposition of symbol of OI into homogeneous components}) and (\ref{algorithm equation 17}). \begin{itemize} \item The leading term in (\ref{algorithm equation 17}) has degree of homogeneity 1, rather than 0 as in (\ref{decomposition of symbol of OI into homogeneous components}). In fact, the leading term in (\ref{algorithm equation 17}) can be easily written out explicitly \begin{equation} \label{algorithm equation 18} f^{(j)}_1(t,x;y,\eta)= (\varphi^{(j)}_t(t,x;y,\eta)+A_1(x,\varphi^{(j)}_x(t,x;y,\eta)))\,u^{(j)}_0(t;y,\eta)\,, \end{equation} where $A_1(x,\xi)$ is the (matrix-valued) principal symbol of the pseudodifferential operator $A$. \item Unlike the symbol $u^{(j)}(t;y,\eta)$, the amplitude $f^{(j)}(t,x;y,\eta)$ depends on $x$. \end{itemize}
We now need to exclude the dependence on $x$ from the amplitude $f^{(j)}(t,x;y,\eta)$. This can be done by means of the algorithm described in subsection 2.7.3 of \cite{mybook}. We outline this algorithm below.
Working in local coordinates, define the matrix-function $\varphi^{(j)}_{x\eta}$ in accordance with $(\varphi^{(j)}_{x\eta})_\alpha{}^\beta:=\varphi^{(j)}_{x^\alpha\eta_\beta}$ and then define its inverse $(\varphi^{(j)}_{x\eta})^{-1}$ from the identity $(\varphi^{(j)}_{x\eta})_\alpha{}^\beta[(\varphi^{(j)}_{x\eta})^{-1}]_\beta{}^\gamma:=\delta_\alpha{}^\gamma$. Define the ``scalar'' first order linear differential operators \begin{equation} \label{algorithm equation 19} L^{(j)}_\alpha:=[(\varphi^{(j)}_{x\eta})^{-1}]_\alpha{}^\beta\,(\partial/\partial x^\beta), \qquad\alpha=1,\ldots,n. \end{equation} Note that the coefficients of these differential operators are functions of the position variable $x$ and the dual variable $\xi$. It is known, see part 2 of Appendix E in \cite{mybook}, that the operators (\ref{algorithm equation 19}) commute:
$\ L^{(j)}_\alpha L^{(j)}_\beta=L^{(j)}_\beta L^{(j)}_\alpha$,
$\ \alpha,\beta=1,\ldots,n$.
Denote
$\ L^{(j)}_{\bm\alpha}:=(L^{(j)}_1)^{\alpha_1}\cdots(L^{(j)}_n)^{\alpha_n}$,
$\ (-\varphi^{(j)}_\eta)^{\bm\alpha}:=(-\varphi^{(j)}_{\eta_1})^{\alpha_1}\cdots(-\varphi^{(j)}_{\eta_n})^{\alpha_n}$,
and, given an $r\in\mathbb{N}$, define the ``scalar'' linear differential operator \begin{equation} \label{algorithm equation 21} \mathfrak{P}^{(j)}_{-1,r}:= i(d_{\varphi^{(j)}})^{-1} \, \frac\partial{\partial\eta_\beta} \,d_{\varphi^{(j)}} \left(1+
\sum_{1\le|{\bm\alpha}|\le2r-1}
\frac{(-\varphi^{(j)}_\eta)^{\bm\alpha}}{{\bm\alpha}!\,(|{\bm\alpha}|+1)} \,L^{(j)}_{\bm\alpha} \right) L^{(j)}_\beta\,, \end{equation}
where $|{\bm\alpha}|:=\alpha_1+\ldots+\alpha_n$ and the repeated index $\beta$ indicates summation over $\beta=1,\ldots,n$.
Recall Definition 2.7.8 from \cite{mybook}: the linear operator $L$ is said to be positively homogeneous in $\eta$ of degree $p\in\mathbb{R}$ if for any $q\in\mathbb{R}$ and any function $f$ positively homogeneous in $\eta$ of degree $q$ the function $Lf$ is positively homogeneous in $\eta$ of degree $p+q$. It is easy to see that the operator (\ref{algorithm equation 21}) is positively homogeneous in $\eta$ of degree $-1$ and the first subscript in $\mathfrak{P}^{(j)}_{-1,r}$ emphasises this fact.
Let $\mathfrak{S}^{(j)}_0$ be the (linear) operator of restriction to $x=x^{(j)}(t;y,\eta)$, \begin{equation} \label{algorithm equation 22}
\mathfrak{S}^{(j)}_0:=\left.(\,\cdot\,)\right|_{x=x^{(j)}(t;y,\eta)}\,, \end{equation} and let \begin{equation} \label{algorithm equation 23} \mathfrak{S}^{(j)}_{-r}:=\mathfrak{S}^{(j)}_0(\mathfrak{P}^{(j)}_{-1,r})^r \end{equation} for $r=1,2,\ldots$. Observe that our linear operators $\mathfrak{S}^{(j)}_{-r}$, $r=0,1,2,\ldots$, are positively homogeneous in $\eta$ of degree $-r$. This observation allows us to define the linear operator \begin{equation} \label{algorithm equation 24} \mathfrak{S}^{(j)}:=\sum_{r=0}^{+\infty}\mathfrak{S}^{(j)}_{-r}\ , \end{equation} where the series is understood as an asymptotic series in inverse powers of $\eta$.
According to subsection 2.7.3 of \cite{mybook}, the dynamic equation (\ref{algorithm equation 8}) can now be rewritten in the equivalent form \begin{equation} \label{algorithm equation 25} \mathfrak{S}^{(j)}f^{(j)}=0\,, \end{equation} where the equality is understood in the asymptotic sense, as an asymptotic expansion in inverse powers of $\eta$. Recall that the matrix-valued amplitude $f^{(j)}(t,x;y,\eta)$ appearing in (\ref{algorithm equation 25}) is defined by formulae (\ref{algorithm equation 12})--(\ref{algorithm equation 16}).
Substituting (\ref{algorithm equation 24}) and (\ref{algorithm equation 17}) into (\ref{algorithm equation 25}) we obtain a hierarchy of equations \begin{equation} \label{algorithm equation 26} \mathfrak{S}^{(j)}_0f^{(j)}_1=0, \end{equation} \begin{equation} \label{algorithm equation 27} \mathfrak{S}^{(j)}_{-1}f^{(j)}_1+\mathfrak{S}^{(j)}_0f^{(j)}_0=0, \end{equation}
\[ \mathfrak{S}^{(j)}_{-2}f^{(j)}_1+\mathfrak{S}^{(j)}_{-1}f^{(j)}_0+\mathfrak{S}^{(j)}_0f^{(j)}_{-1}=0, \]
\[ \ldots \] positively homogeneous in $\eta$ of degree 1, 0, $-1$, $\ldots$. These are the \emph{transport} equations for the determination of the unknown homogeneous components $u^{(j)}_0(t;y,\eta)$, $u^{(j)}_{-1}(t;y,\eta)$, $u^{(j)}_{-2}(t;y,\eta)$, $\ldots$, of the symbol of the oscillatory integral (\ref{algorithm equation 1}).
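For orientation, let us indicate what the first of these transport equations amounts to; the detailed analysis is carried out in Section~\ref{Leading transport equations}. Differentiating (\ref{algorithm equation 2}) in $t$ and restricting to $x=x^{(j)}(t;y,\eta)$ gives $\varphi^{(j)}_t=-\dot x^{(j)\alpha}\xi^{(j)}_\alpha=-h^{(j)}(x^{(j)},\xi^{(j)})$ by (\ref{Hamiltonian system of equations}) and Euler's theorem on homogeneous functions, while $\varphi^{(j)}_x=\xi^{(j)}$. Since $h^{(j)}$ is conserved along Hamiltonian trajectories, formulae (\ref{algorithm equation 18}) and (\ref{algorithm equation 22}) turn (\ref{algorithm equation 26}) into
\[
\bigl(A_1(x^{(j)},\xi^{(j)})-h^{(j)}(y,\eta)\bigr)\,u^{(j)}_0(t;y,\eta)=0\,,
\]
i.e.~the columns of $u^{(j)}_0$ must lie in the $h^{(j)}$-eigenspace of the principal symbol, in agreement with (\ref{formula for principal symbol of oscillatory integral}).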
Let us now examine the initial condition (\ref{algorithm equation 9}). Each operator $U^{(j)}(0)$ is a pseudodifferential operator, only written in a slightly nonstandard form. The issues here are as follows.
\begin{itemize} \item We use the invariantly defined phase function
$ \varphi^{(j)}(0,x;y,\eta) =(x-y)^\alpha\,\eta_\alpha
+O(|x-y|^2) $
rather than the linear phase function $(x-y)^\alpha\,\eta_\alpha$ written in local coordinates. \item When defining the (full) symbol of the operator $U^{(j)}(t)$ we excluded the variable $x$ from the amplitude rather than the variable $y$. Note that when dealing with pseudodifferential operators it is customary to exclude the variable $y$ from the amplitude; exclusion of the variable $x$ gives the dual symbol of a pseudodifferential operator, see subsection 2.1.3 in \cite{mybook}. Thus, at $t=0$, our symbol $u^{(j)}(0;y,\eta)$ resembles the dual symbol of a pseudodifferential operator rather than the ``normal'' symbol. \item We have the extra factor $d_{\varphi^{(j)}}(0,x;y,\eta)$ in our representation of the operator $U^{(j)}(0)$ as an oscillatory integral. \end{itemize}
The (full) dual symbol of the pseudodifferential operator $U^{(j)}(0)$ can be calculated in local coordinates in accordance with the following formula which addresses the issues highlighted above: \begin{equation} \label{algorithm equation 30} \sum_{\bm\alpha}
\frac{(-1)^{|{\bm\alpha}|}}{{\bm\alpha}!}\, \bigl( D_x^{\bm\alpha}\,\partial_\eta^{\bm\alpha}\, u^{(j)}(0;y,\eta)\, e^{i\omega^{(j)}(x;y,\eta)}\,d_{\varphi^{(j)}}(0,x;y,\eta) \bigr)
\bigr|_{x=y}\ , \end{equation} where
$\omega^{(j)}(x;y,\eta)=\varphi^{(j)}(0,x;y,\eta)-(x-y)^\beta\,\eta_\beta\,$.
Formula (\ref{algorithm equation 30})
is a version of the formula from subsection 2.1.3 of \cite{mybook}, only with the extra factor $(-1)^{|{\bm\alpha}|}$. The latter is needed because we are writing down the dual symbol of the pseudodifferential operator $U^{(j)}(0)$ (no dependence on $x$) rather than its ``normal'' symbol (no dependence on $y$).
The initial condition (\ref{algorithm equation 9}) can now be rewritten in explicit form as \begin{equation} \label{algorithm equation 32} \sum_j \sum_{\bm\alpha}
\frac{(-1)^{|{\bm\alpha}|}}{{\bm\alpha}!}\, \bigl( D_x^{\bm\alpha}\,\partial_\eta^{\bm\alpha}\, u^{(j)}(0;y,\eta)\, e^{i\omega^{(j)}(x;y,\eta)}\,d_{\varphi^{(j)}}(0,x;y,\eta) \bigr)
\bigr|_{x=y}=I\,, \end{equation} where $I$ is the $m\times m$ identity matrix. Condition (\ref{algorithm equation 32}) can be decomposed into components positively homogeneous in $\eta$ of degree $0,-1,-2,\ldots$, giving us a hierarchy of initial conditions. The leading (of degree of homogeneity 0) initial condition reads \begin{equation} \label{algorithm equation 33} \sum_j u^{(j)}_0(0;y,\eta)=I\,, \end{equation} whereas lower order initial conditions are more complicated and depend on the choice of our phase functions $\varphi^{(j)}$.
\section{Leading transport equations} \label{Leading transport equations}
Formulae (\ref{algorithm equation 22}), (\ref{algorithm equation 18}), (\ref{algorithm equation 2}), (\ref{Hamiltonian system of equations}) and the identity $\xi_\alpha h^{(j)}_{\xi_\alpha}(x,\xi)=h^{(j)}(x,\xi)$ (consequence of the fact that $h^{(j)}(x,\xi)$ is positively homogeneous in $\xi$ of degree~1) give us the following explicit representation for the leading transport equation (\ref{algorithm equation 26}): \begin{equation} \label{Leading transport equations equation 1} \!\! \bigl[ A_1\bigl(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta)\bigr) - h^{(j)}\bigl(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta)\bigr) \bigr] \,u^{(j)}_0(t;y,\eta)=0. \end{equation} Here, of course, $h^{(j)}\bigl(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta)\bigr)=h^{(j)}(y,\eta)$.
Equation (\ref{Leading transport equations equation 1}) implies that \begin{equation} \label{Leading transport equations equation 2} u^{(j)}_0(t;y,\eta)=v^{(j)}(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta)) \,[w^{(j)}(t;y,\eta)]^T, \end{equation} where $v^{(j)}(z,\zeta)$ is the normalised eigenvector of the principal symbol $A_1(z,\zeta)$ corresponding to the eigenvalue $h^{(j)}(z,\zeta)$ and $w^{(j)}:\mathbb{R}\times T'M\to\mathbb{C}^m$ is a column-function, positively homogeneous in $\eta$ of degree 0, that remains to be found. Formulae (\ref{algorithm equation 33}) and (\ref{Leading transport equations equation 2}) imply the following initial condition for the unknown column-function $w^{(j)}$: \begin{equation} \label{Leading transport equations equation 3} w^{(j)}(0;y,\eta)=\overline{v^{(j)}(y,\eta)}. \end{equation}
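Let us verify that (\ref{Leading transport equations equation 2}) does give the general solution of (\ref{Leading transport equations equation 1}). Substituting (\ref{Leading transport equations equation 2}) into (\ref{Leading transport equations equation 1}) we get
\[
\bigl[A_1-h^{(j)}\bigr]\,v^{(j)}\,[w^{(j)}]^T
=
\bigl(\bigl[A_1-h^{(j)}\bigr]v^{(j)}\bigr)\,[w^{(j)}]^T
=0\,,
\]
where all the matrix-functions are evaluated at $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$, because $v^{(j)}$ is an eigenvector of $A_1$ corresponding to the eigenvalue $h^{(j)}$. Conversely, as the eigenvalues $h^{(j)}$ are assumed to be simple, every column of a solution $u^{(j)}_0$ of (\ref{Leading transport equations equation 1}) is proportional to $v^{(j)}$, which gives the representation (\ref{Leading transport equations equation 2}).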
We now consider the next transport equation in our hierarchy, equation (\ref{algorithm equation 27}). We will write down the two terms appearing in (\ref{algorithm equation 27}) separately.
In view of formulae (\ref{algorithm equation 18}) and (\ref{algorithm equation 21})--(\ref{algorithm equation 23}), the first term in (\ref{algorithm equation 27}) reads \begin{multline} \label{Leading transport equations equation 4} \mathfrak{S}^{(j)}_{-1}f^{(j)}_1= \\ i \left. \left[ (d_{\varphi^{(j)}})^{-1} \frac\partial{\partial\eta_\beta} d_{\varphi^{(j)}} \left(1- \frac12 \varphi^{(j)}_{\eta_\alpha} L^{(j)}_\alpha \right) \left( L^{(j)}_\beta \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr) \right) u^{(j)}_0 \right]
\right|_{x=x^{(j)}}\,, \end{multline} where we dropped, for the sake of brevity, the arguments $(t;y,\eta)$ in $u^{(j)}_0$ and $x^{(j)}$, and the arguments $(t,x;y,\eta)$ in $\varphi^{(j)}_t$, $\varphi^{(j)}_x$, $\varphi^{(j)}_\eta$ and $d_{\varphi^{(j)}}\,$. Recall that the differential operators $L^{(j)}_\alpha$ are defined in accordance with formula (\ref{algorithm equation 19}) and the coefficients of these operators depend on $(t,x;y,\eta)$.
In view of formulae (\ref{algorithm equation 12})--(\ref{algorithm equation 17}) and (\ref{algorithm equation 22}), the second term in (\ref{algorithm equation 27}) reads \begin{multline} \label{Leading transport equations equation 5} \mathfrak{S}^{(j)}_0f^{(j)}_0= D_tu^{(j)}_0 \\ +\left.\left[ (d_{\varphi^{(j)}})^{-1} \left(D_t+(A_1)_{\xi_\alpha}D_{x^\alpha}\right) d_{\varphi^{(j)}} +A_0 -\frac i2(A_1)_{\xi_\alpha\xi_\beta}C^{(j)}_{\alpha\beta} \right]
\right|_{x=x^{(j)}}u^{(j)}_0 \\ +\bigl[A_1-h^{(j)}\bigr]u^{(j)}_{-1}\,, \end{multline} where \begin{equation} \label{Leading transport equations equation 6}
C^{(j)}_{\alpha\beta}:=\left.\varphi^{(j)}_{x^\alpha x^\beta}\right|_{x=x^{(j)}} \end{equation} is the matrix-function from (\ref{algorithm equation 5}). In formulae (\ref{Leading transport equations equation 5}) and (\ref{Leading transport equations equation 6}) we dropped, for the sake of brevity, the arguments $(t;y,\eta)$ in $u^{(j)}_0$, $u^{(j)}_{-1}$, $C^{(j)}_{\alpha\beta}$ and $x^{(j)}$, the arguments $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ in $A_0$, $A_1$, $(A_1)_{\xi_\alpha}$, $(A_1)_{\xi_\alpha\xi_\beta}$ and $h^{(j)}$, and the arguments $(t,x;y,\eta)$ in $d_{\varphi^{(j)}}$ and $\varphi^{(j)}_{x^\alpha x^\beta}\,$.
Looking at (\ref{Leading transport equations equation 4}) and (\ref{Leading transport equations equation 5}) we see that the transport equation (\ref{algorithm equation 27}) has a complicated structure. Hence, in this section we choose not to perform the analysis of the full equation (\ref{algorithm equation 27}) and analyse only one particular subequation of this equation. Namely, observe that equation (\ref{algorithm equation 27}) is equivalent to $m$ subequations \begin{equation} \label{Leading transport equations equation 7} \bigl[v^{(j)}\bigr]^* \, \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1+\mathfrak{S}^{(j)}_0f^{(j)}_0\bigr] =0, \end{equation} \begin{equation} \label{Leading transport equations equation 8} \bigl[v^{(l)}\bigr]^* \, \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1+\mathfrak{S}^{(j)}_0f^{(j)}_0\bigr] =0, \qquad l\ne j, \end{equation} where we dropped, for the sake of brevity, the arguments $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ in $\bigl[v^{(j)}\bigr]^*$ and $\bigl[v^{(l)}\bigr]^*$. In the remainder of this section we analyse (sub)equation (\ref{Leading transport equations equation 7}) only.
Equation (\ref{Leading transport equations equation 7}) is simpler than each of the $m-1$ equations (\ref{Leading transport equations equation 8}) for the following two reasons.
\begin{itemize}
\item Firstly, the term $\bigl[A_1-h^{(j)}\bigr]u^{(j)}_{-1}$ from (\ref{Leading transport equations equation 5}) vanishes after multiplication by $\bigl[v^{(j)}\bigr]^*$ from the left. Hence, equation (\ref{Leading transport equations equation 7}) does not contain $u^{(j)}_{-1}$.
\item Secondly, if we substitute (\ref{Leading transport equations equation 2}) into (\ref{Leading transport equations equation 7}), then the term with \[ \partial[d_{\varphi^{(j)}}w^{(j)}(t;y,\eta)]^T/\partial\eta_\beta \] vanishes. This follows from the fact that the scalar function \[ \bigl[v^{(j)}\bigr]^* \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr) v^{(j)} \] has a second order zero, in the variable $x$, at $x=x^{(j)}(t;y,\eta)$. Indeed, we have \begin{multline*} \left. \left[ \frac\partial{\partial x^\alpha} \bigl[v^{(j)}\bigr]^* \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr) v^{(j)} \right]
\right|_{x=x^{(j)}} \\ = \bigl[v^{(j)}\bigr]^* \left. \left[ \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr)_{x^\alpha} \right]
\right|_{x=x^{(j)}} v^{(j)} \\ = \bigl[v^{(j)}\bigr]^* \bigl( -h^{(j)}_{x^\alpha}-C^{(j)}_{\alpha\beta}h^{(j)}_{\xi_\beta} +(A_1)_{x^\alpha}+C^{(j)}_{\alpha\beta}(A_1)_{\xi_\beta} \bigr) v^{(j)} \\ = \bigl[v^{(j)}\bigr]^*(A_1)_{x^\alpha}v^{(j)}-h^{(j)}_{x^\alpha} + C^{(j)}_{\alpha\beta} \bigl( \bigl[v^{(j)}\bigr]^*(A_1)_{\xi_\beta}v^{(j)}-h^{(j)}_{\xi_\beta} \bigr) =0\,, \end{multline*} where in the last two lines we dropped, for the sake of brevity, the arguments $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ in $(A_1)_{x^\alpha}$, $(A_1)_{\xi_\beta}$, $h^{(j)}_{x^\alpha}$, $h^{(j)}_{\xi_\beta}$, and the argument $(t;y,\eta)$ in $C^{(j)}_{\alpha\beta}$ (the latter is the
matrix-function from formulae (\ref{algorithm equation 5}) and (\ref{Leading transport equations equation 6})). Throughout the above argument we used the fact that our $\bigl[v^{(j)}\bigr]^*$ and $v^{(j)}$ do not depend on $x$: their argument is $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$.
\end{itemize}
Substituting (\ref{Leading transport equations equation 4}), (\ref{Leading transport equations equation 5}) and (\ref{Leading transport equations equation 2}) into (\ref{Leading transport equations equation 7}) we get \begin{equation} \label{Leading transport equations equation 9} (D_t+p^{(j)}(t;y,\eta))\,[w^{(j)}(t;y,\eta)]^T=0\,, \end{equation} where \begin{multline} \label{Leading transport equations equation 10} p^{(j)}= i \left. [v^{(j)}]^* \left[ \frac\partial{\partial\eta_\beta} \left(1- \frac12 \varphi^{(j)}_{\eta_\alpha} L^{(j)}_\alpha \right) \left( L^{(j)}_\beta \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr) \right) v^{(j)} \right]
\right|_{x=x^{(j)}} \\ -i[v^{(j)}]^*\{v^{(j)},h^{(j)}\} +\left.\left[ (d_{\varphi^{(j)}})^{-1} \left(D_t+h^{(j)}_{\xi_\alpha}D_{x^\alpha}\right) d_{\varphi^{(j)}} \right]
\right|_{x=x^{(j)}} \\ +[v^{(j)}]^* \left( A_0 -\frac i2(A_1)_{\xi_\alpha\xi_\beta}C^{(j)}_{\alpha\beta} \right) v^{(j)}. \end{multline}
Note that the ordinary differential operator in the LHS of formula (\ref{Leading transport equations equation 9}) is a scalar one, i.e. it does not mix up the different components of the column-function $w^{(j)}(t;y,\eta)$. The solution of the ordinary differential equation (\ref{Leading transport equations equation 9}) subject to the initial condition (\ref{Leading transport equations equation 3}) is \begin{equation} \label{Leading transport equations equation 11} w^{(j)}(t;y,\eta)=\overline{v^{(j)}(y,\eta)} \exp\left(-i\int_0^tp^{(j)}(\tau;y,\eta)\,d\tau\right). \end{equation} Comparing formulae (\ref{Leading transport equations equation 2}), (\ref{Leading transport equations equation 11}) with formula (\ref{formula for principal symbol of oscillatory integral}) we see that in order to prove the latter we need only to establish the scalar identity \begin{equation} \label{Leading transport equations equation 12} p^{(j)}(t;y,\eta)=q^{(j)}(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))\,, \end{equation} where $q^{(j)}$ is the function (\ref{phase appearing in principal symbol}). In view of the definitions of the quantities $p^{(j)}$ and $q^{(j)}$, see formulae (\ref{Leading transport equations equation 10}) and (\ref{phase appearing in principal symbol}), and the definition of the subprincipal symbol (\ref{definition of subprincipal symbol}), proving the identity (\ref{Leading transport equations equation 12}) reduces to proving the identity \begin{multline} \label{Leading transport equations equation 13} \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} (x^{(j)},\xi^{(j)}) = \\ -2 \left. [v^{(j)}(x^{(j)},\xi^{(j)})]^* \left[ \frac\partial{\partial\eta_\beta} \left(1- \frac12 \varphi^{(j)}_{\eta_\alpha} L^{(j)}_\alpha \right) \left( L^{(j)}_\beta \bigl(\varphi^{(j)}_t+A_1(x,\varphi^{(j)}_x)\bigr) \right) v^{(j)}(x^{(j)},\xi^{(j)}) \right]
\right|_{x=x^{(j)}} \\ +2\left.\left[ (d_{\varphi^{(j)}})^{-1} \left(\partial_t+h^{(j)}_{\xi_\alpha}\partial_{x^\alpha}\right) d_{\varphi^{(j)}} \right]
\right|_{x=x^{(j)}} \\ +[v^{(j)}(x^{(j)},\xi^{(j)})]^* \left( (A_1)_{x^\alpha\xi_\alpha}+(A_1)_{\xi_\alpha\xi_\beta}C^{(j)}_{\alpha\beta} \right) v^{(j)}(x^{(j)},\xi^{(j)}). \end{multline} Note that the expressions in the LHS and RHS of (\ref{Leading transport equations equation 13}) have different structure. The LHS of (\ref{Leading transport equations equation 13}) is the generalised Poisson bracket $\{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \}$, see (\ref{generalised Poisson bracket on matrix-functions}), evaluated at $z=x^{(j)}(t;y,\eta)$, $\zeta=\xi^{(j)}(t;y,\eta)$, whereas the RHS of (\ref{Leading transport equations equation 13}) involves partial derivatives (in $\eta$) of $v^{(j)}(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ (Chain Rule). In writing (\ref{Leading transport equations equation 13}) we also dropped, for the sake of brevity, the arguments $(t,x;y,\eta)$ in $\varphi^{(j)}_t$, $\varphi^{(j)}_x$, $\varphi^{(j)}_\eta$, $d_{\varphi^{(j)}}\,$ and the coefficients of the differential operators $L^{(j)}_\alpha$ and $L^{(j)}_\beta$, the arguments $(x^{(j)},\xi^{(j)})$ in $h^{(j)}_{\xi_\alpha}$, $(A_1)_{x^\alpha\xi_\alpha}$ and $(A_1)_{\xi_\alpha\xi_\beta}$, and the arguments $(t;y,\eta)$ in $x^{(j)}$, $\xi^{(j)}$ and $C^{(j)}_{\alpha\beta}$.
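As an elementary consistency check, observe that with the convention $D_t=-i\,\partial/\partial t$ formula (\ref{Leading transport equations equation 11}) does solve (\ref{Leading transport equations equation 9}): differentiation gives
\[
D_t\,[w^{(j)}(t;y,\eta)]^T
=-i\,\bigl(-i\,p^{(j)}(t;y,\eta)\bigr)\,[w^{(j)}(t;y,\eta)]^T
=-p^{(j)}(t;y,\eta)\,[w^{(j)}(t;y,\eta)]^T,
\]
and at $t=0$ the exponential factor equals 1, so the initial condition (\ref{Leading transport equations equation 3}) is satisfied as well.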
Before performing the calculations that will establish the identity (\ref{Leading transport equations equation 13}) we make several observations that will allow us to simplify these calculations considerably.
Firstly, our function $p^{(j)}(t;y,\eta)$ does not depend on the choice of the phase function $\varphi^{(j)}(t,x;y,\eta)$. Indeed, if $p^{(j)}(t;y,\eta)$ did depend on the choice of phase function, then, in view of formulae (\ref{Leading transport equations equation 2}) and (\ref{Leading transport equations equation 11}) the principal symbol of our oscillatory integral $U^{(j)}(t)$ would depend on the choice of phase function, which would contradict Theorem 2.7.11 from \cite{mybook}. Here we use the fact that operators $U^{(j)}(t)$ with different $j$ cannot compensate each other to give an integral operator whose integral kernel is infinitely smooth in $t$, $x$ and $y$ because all our $U^{(j)}(t)$ oscillate in $t$ in a different way: $\varphi^{(j)}_t(t,x^{(j)}(t;y,\eta);y,\eta)=-h^{(j)}(y,\eta)$ and we assumed the eigenvalues $h^{(j)}(y,\eta)$ of our principal symbol $A_1(y,\eta)$ to be simple.
Secondly, the arguments (free variables) in (\ref{Leading transport equations equation 13}) are $(t;y,\eta)$. We fix an arbitrary point $(\tilde t;\tilde y,\tilde\eta)\in\mathbb{R}\times T'M$ and prove formula (\ref{Leading transport equations equation 13}) at this point. Put $(\xi^{(j)}_\eta)_\alpha{}^\beta:=\partial(\xi^{(j)})_\alpha/\partial\eta_\beta$. According to Lemma 2.3.2 from \cite{mybook} there exists a local coordinate system $x$ such that $\det(\xi^{(j)}_\eta)_\alpha{}^\beta\ne0$. This opens the way to the use of the linear phase function \begin{equation} \label{Leading transport equations equation 14} \varphi^{(j)}(t,x;y,\eta) =(x-x^{(j)}(t;y,\eta))^\alpha\,\xi^{(j)}_\alpha(t;y,\eta) \end{equation} which will simplify calculations to a great extent. Moreover, we can choose a local coordinate system $y$ such that \begin{equation} \label{Leading transport equations equation 15} (\xi^{(j)}_\eta)_\alpha{}^\beta(\tilde t;\tilde y,\tilde\eta)=\delta_\alpha{}^\beta \end{equation} which will simplify calculations even further.
The calculations we are about to perform will make use of the symmetry \begin{equation} \label{Leading transport equations equation 16} (x^{(j)}_\eta)^{\gamma\alpha}(\xi^{(j)}_\eta)_\gamma{}^\beta = (x^{(j)}_\eta)^{\gamma\beta}(\xi^{(j)}_\eta)_\gamma{}^\alpha \end{equation} which is an immediate consequence of formula (\ref{algorithm equation 7.1.5}). Formula (\ref{Leading transport equations equation 16}) appears as formula (2.3.3) in \cite{mybook} and the accompanying text explains its geometric meaning. Note that at the point $(\tilde t;\tilde y,\tilde\eta)$ formula (\ref{Leading transport equations equation 16}) takes the especially simple form \begin{equation} \label{Leading transport equations equation 17} (x^{(j)}_\eta)^{\alpha\beta}(\tilde t;\tilde y,\tilde\eta) = (x^{(j)}_\eta)^{\beta\alpha}(\tilde t;\tilde y,\tilde\eta). \end{equation}
Our calculations will also involve the quantity $\varphi^{(j)}_{\eta_\alpha\eta_\beta}(\tilde t,\tilde x;\tilde y,\tilde\eta)$ where $\tilde x:=x^{(j)}(\tilde t;\tilde y,\tilde\eta)$. Formulae (\ref{Leading transport equations equation 14}), (\ref{algorithm equation 7.1.5}), (\ref{Leading transport equations equation 15}) and (\ref{Leading transport equations equation 17}) imply \begin{equation} \label{Leading transport equations equation 18} \varphi^{(j)}_{\eta_\alpha\eta_\beta}(\tilde t,\tilde x;\tilde y,\tilde\eta) = -(x^{(j)}_\eta)^{\alpha\beta}(\tilde t;\tilde y,\tilde\eta). \end{equation}
Further on we denote $\tilde\xi:=\xi^{(j)}(\tilde t;\tilde y,\tilde\eta)$.
With account of all the simplifications listed above, we can rewrite formula (\ref{Leading transport equations equation 13}), which is the identity that we are proving, as \begin{multline} \label{Leading transport equations equation 19} \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} (\tilde x,\tilde\xi) = \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! -2 [\tilde v^{(j)}]^* \Bigl[ \frac{\partial^2}{\partial x^\alpha\partial\eta_\alpha} \bigl(A_1(x,\xi^{(j)})-h^{(j)}(\tilde y,\eta) \\ \qquad\qquad\qquad\qquad\qquad\qquad -(x-x^{(j)})^\gamma h^{(j)}_{x^\gamma}(x^{(j)},\xi^{(j)})\bigr) \,v^{(j)}(x^{(j)},\xi^{(j)}) \Bigr]
\Bigr|_{(x,\eta)=(\tilde x,\tilde\eta)} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! -(\tilde x^{(j)}_\eta)^{\alpha\beta}\, [\tilde v^{(j)}]^* \Bigl[ \frac{\partial^2}{\partial x^\alpha\partial x^\beta} \bigl(A_1(x,\xi^{(j)})-h^{(j)}(\tilde y,\eta) \\ \qquad\qquad\qquad\qquad\qquad\qquad -(x-x^{(j)})^\gamma h^{(j)}_{x^\gamma}(x^{(j)},\xi^{(j)})\bigr) \,v^{(j)}(x^{(j)},\xi^{(j)}) \Bigr]
\Bigr|_{(x,\eta)=(\tilde x,\tilde\eta)} \\ +[\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha\xi_\alpha} \tilde v^{(j)} -\tilde h^{(j)}_{x^\alpha\xi_\alpha} -\tilde h^{(j)}_{x^\alpha x^\beta}(\tilde x^{(j)}_\eta)^{\alpha\beta}\,, \qquad \end{multline}
\noindent where $\tilde v^{(j)}=v^{(j)}(\tilde x,\tilde\xi)$, $\tilde x^{(j)}_\eta=x^{(j)}_\eta(\tilde t;\tilde y,\tilde\eta)$, $(\tilde A_1)_{x^\alpha\xi_\alpha}=(A_1)_{x^\alpha\xi_\alpha}(\tilde x,\tilde\xi)$, $\tilde h^{(j)}_{x^\alpha\xi_\alpha}=h^{(j)}_{x^\alpha\xi_\alpha}(\tilde x,\tilde\xi)$, $\tilde h^{(j)}_{x^\alpha x^\beta}=h^{(j)}_{x^\alpha x^\beta}(\tilde x,\tilde\xi)$, $x^{(j)}=x^{(j)}(\tilde t;\tilde y,\eta)$ and $\xi^{(j)}=\xi^{(j)}(\tilde t;\tilde y,\eta)$.
Note that the last two terms in the RHS of (\ref{Leading transport equations equation 19}) originate from the term with $d_{\varphi^{(j)}}$ in (\ref{Leading transport equations equation 13}): we used the fact that $d_{\varphi^{(j)}}$ does not depend on $x$ and that \begin{equation} \label{Leading transport equations equation 20} \left. \left[ (d_{\varphi^{(j)}})^{-1} \partial_t d_{\varphi^{(j)}} \right]
\right|_{(t,x;y,\eta)=(\tilde t,\tilde x;\tilde y,\tilde\eta)} =-\frac12 \bigl( \tilde h^{(j)}_{x^\alpha\xi_\alpha} +\tilde h^{(j)}_{x^\alpha x^\beta}(\tilde x^{(j)}_\eta)^{\alpha\beta} \bigr). \end{equation} Formula (\ref{Leading transport equations equation 20}) is a special case of formula (3.3.21) from \cite{mybook}.
Note also that the term $-h^{(j)}(\tilde y,\eta)$ appearing (twice) in the RHS of (\ref{Leading transport equations equation 19}) will vanish after being acted upon with the differential operators $\frac{\partial^2}{\partial x^\alpha\partial\eta_\alpha}$ and $\frac{\partial^2}{\partial x^\alpha\partial x^\beta}$ because it does not depend on $x$.
We have \begin{multline} \label{Leading transport equations equation 21} [\tilde v^{(j)}]^* \left. \left[ \frac{\partial^2}{\partial x^\alpha\partial\eta_\alpha} \bigl(A_1(x,\xi^{(j)})-(x-x^{(j)})^\gamma h^{(j)}_{x^\gamma}(x^{(j)},\xi^{(j)})\bigr) \,v^{(j)}(x^{(j)},\xi^{(j)}) \right]
\right|_{(x,\eta)=(\tilde x,\tilde\eta)} \\ = [\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha\xi_\alpha} \tilde v^{(j)} -\tilde h^{(j)}_{x^\alpha\xi_\alpha} -\tilde h^{(j)}_{x^\alpha x^\beta}(\tilde x^{(j)}_\eta)^{\alpha\beta} \\ + [\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\alpha} -\tilde h^{(j)}_{x^\alpha} \bigr) \bigl( \tilde v^{(j)}_{\xi_\alpha} +\tilde v^{(j)}_{x^\beta}(\tilde x^{(j)}_\eta)^{\alpha\beta} \bigr), \end{multline} \begin{multline} \label{Leading transport equations equation 22} [\tilde v^{(j)}]^* \left. \left[ \frac{\partial^2}{\partial x^\alpha\partial x^\beta} \bigl(A_1(x,\xi^{(j)})-(x-x^{(j)})^\gamma h^{(j)}_{x^\gamma}(x^{(j)},\xi^{(j)})\bigr) \,v^{(j)}(x^{(j)},\xi^{(j)}) \right]
\right|_{(x,\eta)=(\tilde x,\tilde\eta)} \\ = [\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha x^\beta} \tilde v^{(j)}\,, \end{multline} where $(\tilde A_1)_{x^\alpha}=(A_1)_{x^\alpha}(\tilde x,\tilde\xi)$, $\tilde h^{(j)}_{x^\alpha}=h^{(j)}_{x^\alpha}(\tilde x,\tilde\xi)$, $\tilde v^{(j)}_{\xi_\alpha}=v^{(j)}_{\xi_\alpha}(\tilde x,\tilde\xi)$ and $\tilde v^{(j)}_{x^\beta}=v^{(j)}_{x^\beta}(\tilde x,\tilde\xi)$. We also have \begin{multline} \label{Leading transport equations equation 23} [\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\alpha} -\tilde h^{(j)}_{x^\alpha} \bigr) \tilde v^{(j)}_{x^\beta} + [\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\beta} -\tilde h^{(j)}_{x^\beta} \bigr) \tilde v^{(j)}_{x^\alpha} \\ = \tilde h^{(j)}_{x^\alpha x^\beta} - [\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha x^\beta} \tilde v^{(j)}. \end{multline} Using formulae (\ref{Leading transport equations equation 23}) and (\ref{Leading transport equations equation 17}) we can rewrite formula (\ref{Leading transport equations equation 21}) as \begin{multline} \label{Leading transport equations equation 24} [\tilde v^{(j)}]^* \left. \left[ \frac{\partial^2}{\partial x^\alpha\partial\eta_\alpha} \bigl(A_1(x,\xi^{(j)})-(x-x^{(j)})^\gamma h^{(j)}_{x^\gamma}(x^{(j)},\xi^{(j)})\bigr) \,v^{(j)}(x^{(j)},\xi^{(j)}) \right]
\right|_{(x,\eta)=(\tilde x,\tilde\eta)} \\ = [\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha\xi_\alpha} \tilde v^{(j)} -\tilde h^{(j)}_{x^\alpha\xi_\alpha} + [\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\alpha} -\tilde h^{(j)}_{x^\alpha} \bigr) \tilde v^{(j)}_{\xi_\alpha} \\ -\frac12 \bigl( [\tilde v^{(j)}]^* (\tilde A_1)_{x^\alpha x^\beta} \tilde v^{(j)} + \tilde h^{(j)}_{x^\alpha x^\beta} \bigr) (\tilde x^{(j)}_\eta)^{\alpha\beta}. \end{multline} Substituting (\ref{Leading transport equations equation 24}) and (\ref{Leading transport equations equation 22}) into (\ref{Leading transport equations equation 19}) we see that all the terms with $(\tilde x^{(j)}_\eta)^{\alpha\beta}$ cancel out and we get \begin{multline} \label{Leading transport equations equation 25} \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} (\tilde x,\tilde\xi) = \\ -[\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\alpha\xi_\alpha} - \tilde h^{(j)}_{x^\alpha\xi_\alpha} \bigr) \tilde v^{(j)} -2 [\tilde v^{(j)}]^* \bigl( (\tilde A_1)_{x^\alpha} -\tilde h^{(j)}_{x^\alpha} \bigr) \tilde v^{(j)}_{\xi_\alpha}. \end{multline} Thus, the proof of the identity (\ref{Leading transport equations equation 13}) has been reduced to the proof of the identity~(\ref{Leading transport equations equation 25}).
Observe now that formula (\ref{Leading transport equations equation 25}) no longer involves the Hamiltonian trajectories. This means that we can drop all the tildes and rewrite (\ref{Leading transport equations equation 25}) as \begin{multline} \label{Leading transport equations equation 26} \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} = \\ -[v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{x^\alpha\xi_\alpha} v^{(j)} -2 [v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{x^\alpha} v^{(j)}_{\xi_\alpha}\,, \end{multline} where the arguments are $(x,\xi)$. We no longer need to restrict our consideration to the particular point $(x,\xi)=(\tilde x,\tilde\xi)$: if we prove (\ref{Leading transport equations equation 26}) for an arbitrary $(x,\xi)\in T'M$, then, in particular, it holds at the point $(\tilde x,\tilde\xi)$.
The proof of the identity (\ref{Leading transport equations equation 26}) is straightforward. We note that \begin{multline} \label{Leading transport equations equation 27} [v^{(j)}]^* (A_1-h^{(j)})_{x^\alpha\xi_\alpha} v^{(j)}= \\ - [v^{(j)}]^* (A_1-h^{(j)})_{x^\alpha} v^{(j)}_{\xi_\alpha} - [v^{(j)}]^* (A_1-h^{(j)})_{\xi_\alpha} v^{(j)}_{x^\alpha} \end{multline} and substituting (\ref{Leading transport equations equation 27}) into (\ref{Leading transport equations equation 26}) reduce the latter to the form \begin{multline} \label{Leading transport equations equation 28} \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} = \\ [v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{\xi_\alpha} v^{(j)}_{x^\alpha} - [v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{x^\alpha} v^{(j)}_{\xi_\alpha}. \end{multline} But \begin{equation} \label{Leading transport equations equation 29} [v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{x^\alpha} = - [v^{(j)}_{x^\alpha}]^* \bigl( A_1 -h^{(j)} \bigr), \end{equation} \begin{equation} \label{Leading transport equations equation 30} [v^{(j)}]^* \bigl( A_1 -h^{(j)} \bigr)_{\xi_\alpha} = - [v^{(j)}_{\xi_\alpha}]^* \bigl( A_1 -h^{(j)} \bigr). \end{equation} Substituting (\ref{Leading transport equations equation 29}) and (\ref{Leading transport equations equation 30}) into (\ref{Leading transport equations equation 28}) we get
\[ \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} = [v^{(j)}_{x^\alpha}]^* \bigl( A_1 -h^{(j)} \bigr) v^{(j)}_{\xi_\alpha} - [v^{(j)}_{\xi_\alpha}]^* \bigl( A_1 -h^{(j)} \bigr) v^{(j)}_{x^\alpha} \]
which agrees with the definition of the generalised Poisson bracket (\ref{generalised Poisson bracket on matrix-functions}).
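For completeness, let us indicate the origin of the identities (\ref{Leading transport equations equation 29}) and (\ref{Leading transport equations equation 30}). The principal symbol $A_1$ is Hermitian, so the eigenvalue $h^{(j)}$ is real and $[v^{(j)}]^*\bigl(A_1-h^{(j)}\bigr)=0$, i.e.~$[v^{(j)}]^*$ is a left eigenvector. Differentiating the latter identity in $x^\alpha$ we get
\[
[v^{(j)}_{x^\alpha}]^*\bigl(A_1-h^{(j)}\bigr)
+
[v^{(j)}]^*\bigl(A_1-h^{(j)}\bigr)_{x^\alpha}
=0\,,
\]
which is (\ref{Leading transport equations equation 29}); differentiating in $\xi_\alpha$ instead gives (\ref{Leading transport equations equation 30}).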
\section{Proof of formula (\ref{subprincipal symbol of OI at time zero})} \label{Proof of formula}
In this section we prove formula (\ref{subprincipal symbol of OI at time zero}). Our approach is as follows.
We write down explicitly the transport equations (\ref{Leading transport equations equation 8}) at $t=0$, i.e. \begin{equation} \label{Proof of formula equation 1} \bigl[v^{(l)}\bigr]^* \, \left. \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1+\mathfrak{S}^{(j)}_0f^{(j)}_0\bigr]
\right|_{t=0} =0, \qquad l\ne j. \end{equation} We use the same local coordinates for $x$ and $y$ and we assume all our phase functions to be linear, i.e.~we assume that for each $j$ we have (\ref{Leading transport equations equation 14}). Using linear phase functions is justified for small $t$ because we have $(\xi^{(j)}_\eta)_\alpha{}^\beta(0;y,\eta)=\delta_\alpha{}^\beta$ and, hence, $\det\varphi^{(j)}_{x^\alpha\eta_\beta}(t,x;y,\eta)\ne0$ for small $t$. Writing down equations (\ref{Proof of formula equation 1}) for linear phase functions is much easier than for general phase functions (\ref{algorithm equation 2}).
Using linear phase functions has the additional advantage that the initial condition (\ref{algorithm equation 32}) simplifies and reads now $\sum_ju^{(j)}(0;y,\eta)=I$. In view of (\ref{decomposition of symbol of OI into homogeneous components}), this implies, in particular, that \begin{equation} \label{Proof of formula equation 2} \sum_j u^{(j)}_{-1}(0)=0. \end{equation} Here and further on in this section we drop, for the sake of brevity, the arguments $(y,\eta)$ in $u^{(j)}_{-1}$.
Of course, the formula we are proving, formula (\ref{subprincipal symbol of OI at time zero}), does not depend on our choice of phase functions. It is just easier to carry out calculations for linear phase functions.
We will show that (\ref{Proof of formula equation 1}) is a system of complex linear algebraic equations for the unknowns $u^{(j)}_{-1}(0)$. The total number of equations (\ref{Proof of formula equation 1}) is $m^2-m$. However, for each $j$ and $l$ the LHS of (\ref{Proof of formula equation 1}) is a row of $m$ elements, so (\ref{Proof of formula equation 1}) is, effectively, a system of $m(m^2-m)$ scalar equations.
Equation (\ref{Proof of formula equation 2}) is a single matrix equation, so it is, effectively, a system of $m^2$ scalar equations.
Consequently, the system (\ref{Proof of formula equation 1}), (\ref{Proof of formula equation 2}) is, effectively, a system of $m^3$ scalar equations. This is exactly the number of unknown scalar elements in the $m$ matrices $u^{(j)}_{-1}(0)$.
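Schematically, the above count reads
\[
\underbrace{m(m-1)}_{\substack{\text{pairs }(j,l),\\ l\ne j}}\times\,m
\;+\;
\underbrace{m^2}_{\text{from (\ref{Proof of formula equation 2})}}
\;=\;
m^3\,.
\]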
In the remainder of this section we write down explicitly the LHS of (\ref{Proof of formula equation 1}) and solve the linear algebraic system (\ref{Proof of formula equation 1}), (\ref{Proof of formula equation 2}) for the unknowns $u^{(j)}_{-1}(0)$. This will allow us to prove formula (\ref{subprincipal symbol of OI at time zero}).
Before starting explicit calculations we observe that equations (\ref{Proof of formula equation 1}) can be equivalently rewritten as \begin{equation} \label{Proof of formula equation 3} P^{(l)} \, \left. \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1+\mathfrak{S}^{(j)}_0f^{(j)}_0\bigr]
\right|_{t=0} =0, \qquad l\ne j, \end{equation} where $P^{(l)}:=[v^{(l)}(y,\eta)]\,[v^{(l)}(y,\eta)]^*$ is the orthogonal projection onto the eigenspace corresponding to the (normalised) eigenvector $v^{(l)}(y,\eta)$ of the principal symbol. We will deal with (\ref{Proof of formula equation 3}) rather than with (\ref{Proof of formula equation 1}). This is simply a matter of convenience.
\subsection{Part 1 of the proof of formula (\ref{subprincipal symbol of OI at time zero})} \label{Part 1}
Our task in this subsection is to calculate the LHS of (\ref{Proof of formula equation 3}). In our calculations we use the explicit formula (\ref{formula for principal symbol of oscillatory integral}) for the principal symbol $u^{(j)}_0(t;y,\eta)$ which was proved in Section~\ref{Leading transport equations}.
At $t=0$ formula (\ref{Leading transport equations equation 4}) reads \[ \left. \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1\bigr]
\right|_{t=0} = i \left. \left[ \frac{\partial^2}{\partial x^\alpha\partial\eta_\alpha} \bigl( A_1(x,\eta) -h^{(j)}(y,\eta) -(x-y)^\gamma h^{(j)}_{y^\gamma}(y,\eta) \bigr) P^{(j)}(y,\eta) \right]
\right|_{x=y} \] which gives us \begin{equation} \label{Proof of formula equation 4} \left. \bigl[\mathfrak{S}^{(j)}_{-1}f^{(j)}_1\bigr]
\right|_{t=0} = i \left[ (A_1-h^{(j)})_{y^\alpha\eta_\alpha}P^{(j)} + (A_1-h^{(j)})_{y^\alpha}P^{(j)}_{\eta_\alpha} \right]. \end{equation} In the latter formula we dropped, for the sake of brevity, the arguments $(y,\eta)$.
At $t=0$ formula (\ref{Leading transport equations equation 5}) reads \begin{multline} \label{Proof of formula equation 5} \left. \bigl[\mathfrak{S}^{(j)}_0f^{(j)}_0\bigr]
\right|_{t=0} = -i\{v^{(j)},h^{(j)}\}[v^{(j)}]^* + \left( A_0 - q^{(j)} + \frac i2h^{(j)}_{y^\alpha\eta_\alpha} \right) P^{(j)} \\ +[A_1-h^{(j)}]u^{(j)}_{-1}(0)\,, \end{multline} where $q^{(j)}$ is the function (\ref{phase appearing in principal symbol}) and we dropped, for the sake of brevity, the arguments $(y,\eta)$. Note that in writing down (\ref{Proof of formula equation 5}) we used the fact that \[ \left. \left[ (d_{\varphi^{(j)}})^{-1} \partial_t d_{\varphi^{(j)}} \right]
\right|_{(t,x;y,\eta)=(0,y;y,\eta)} =-\frac12 h^{(j)}_{y^\alpha\eta_\alpha}(y,\eta)\,, \] compare with formula (\ref{Leading transport equations equation 20}).
Substituting formulae (\ref{Proof of formula equation 4}) and (\ref{Proof of formula equation 5}) into (\ref{Proof of formula equation 3}) we get \begin{equation} \label{Proof of formula equation 6} (h^{(l)}-h^{(j)})P^{(l)}u^{(j)}_{-1}(0)+P^{(l)}B^{(j)}_0=0, \qquad l\ne j, \end{equation} where \begin{equation} \label{Part 1 result} B^{(j)}_0= \left( A_0-q^{(j)}-\frac i2h^{(j)}_{y^\alpha\eta_\alpha}+i(A_1)_{y^\alpha\eta_\alpha} \right) P^{(j)}
-i h^{(j)}_{\eta_\alpha}P^{(j)}_{y^\alpha} +i(A_1)_{y^\alpha}P^{(j)}_{\eta_\alpha}. \end{equation} The subscript in $B^{(j)}_0$ indicates the degree of homogeneity in $\eta$.
\subsection{Part 2 of the proof of formula (\ref{subprincipal symbol of OI at time zero})} \label{Part 2}
Our task in this subsection is to solve the linear algebraic system (\ref{Proof of formula equation 6}), (\ref{Proof of formula equation 2}) for the unknowns $u^{(j)}_{-1}(0)$.
It is easy to see that the unique solution to the system (\ref{Proof of formula equation 6}), (\ref{Proof of formula equation 2}) is \begin{equation} \label{Part 2 result} u^{(j)}_{-1}(0) =\sum_{l\ne j} \frac {P^{(l)}B^{(j)}_0+P^{(j)}B^{(l)}_0} {h^{(j)}-h^{(l)}}\,. \end{equation} Summation in (\ref{Part 2 result}) is carried out over all $l$ different from $j$.
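As a sanity check, one can verify directly that (\ref{Part 2 result}) satisfies (\ref{Proof of formula equation 6}): applying $P^{(l)}$, $l\ne j$, on the left and using $P^{(l)}P^{(k)}=\delta_{lk}P^{(l)}$ and $P^{(l)}P^{(j)}=0$, we get
\[
P^{(l)}u^{(j)}_{-1}(0)
=\sum_{k\ne j}
\frac{P^{(l)}P^{(k)}B^{(j)}_0+P^{(l)}P^{(j)}B^{(k)}_0}{h^{(j)}-h^{(k)}}
=\frac{P^{(l)}B^{(j)}_0}{h^{(j)}-h^{(l)}}\,,
\]
so that $(h^{(l)}-h^{(j)})P^{(l)}u^{(j)}_{-1}(0)=-P^{(l)}B^{(j)}_0$, which is (\ref{Proof of formula equation 6}).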
\subsection{Part 3 of the proof of formula (\ref{subprincipal symbol of OI at time zero})} \label{Part 3}
Our task in this subsection is to calculate $[U^{(j)}(0)]_\mathrm{sub}$.
We have \begin{equation} \label{subprincipal symbol of Uj0 equation 1} [U^{(j)}(0)]_\mathrm{sub} =u^{(j)}_{-1}(0)-\frac i2P^{(j)}_{y^\alpha\eta_\alpha}. \end{equation} Here the sign in front of $\frac i2$ is opposite to that in (\ref{definition of subprincipal symbol}) because we write $U^{(j)}(0)$ in terms of its dual symbol.
Substituting (\ref{Part 2 result}) and (\ref{Part 1 result}) into (\ref{subprincipal symbol of Uj0 equation 1}) we get \begin{multline} \label{subprincipal symbol of Uj0 equation 2} [U^{(j)}(0)]_\mathrm{sub} = -\frac i2P^{(j)}_{y^\alpha\eta_\alpha} +\sum_{l\ne j}\frac1{h^{(j)}-h^{(l)}} \\ \times \bigl( P^{(l)} [ (A_0+i(A_1)_{y^\alpha\eta_\alpha})P^{(j)} -ih^{(j)}_{\eta_\alpha}P^{(j)}_{y^\alpha} +i(A_1)_{y^\alpha}P^{(j)}_{\eta_\alpha} ] \\ \qquad\qquad+ P^{(j)} [ (A_0+i(A_1)_{y^\alpha\eta_\alpha})P^{(l)} -ih^{(l)}_{\eta_\alpha}P^{(l)}_{y^\alpha} +i(A_1)_{y^\alpha}P^{(l)}_{\eta_\alpha} ] \bigr) \\ = \sum_{l\ne j} \frac { P^{(l)}A_\mathrm{sub}P^{(j)} + P^{(j)}A_\mathrm{sub}P^{(l)} } { h^{(j)}-h^{(l)} } +\frac i2 \Bigl( -P^{(j)}_{y^\alpha\eta_\alpha} + \sum_{l\ne j} \frac { G_{jl} } { h^{(j)}-h^{(l)} } \Bigr)\,, \end{multline} where \begin{multline*} G_{jl}:= P^{(l)} [ (A_1)_{y^\alpha\eta_\alpha}P^{(j)} -2h^{(j)}_{\eta_\alpha}P^{(j)}_{y^\alpha} +2(A_1)_{y^\alpha}P^{(j)}_{\eta_\alpha} ] \\ + P^{(j)} [ (A_1)_{y^\alpha\eta_\alpha}P^{(l)} -2h^{(l)}_{\eta_\alpha}P^{(l)}_{y^\alpha} +2(A_1)_{y^\alpha}P^{(l)}_{\eta_\alpha} ] \,. \end{multline*}
We have \begin{multline*} G_{jl} = 2P^{(l)}\{A_1,P^{(j)}\} + 2P^{(j)}\{A_1,P^{(l)}\} \\ + P^{(l)} [ (A_1-h^{(j)})_{y^\alpha\eta_\alpha}P^{(j)} +2(A_1-h^{(j)})_{\eta_\alpha}P^{(j)}_{y^\alpha} ] \\ + P^{(j)} [ (A_1-h^{(l)})_{y^\alpha\eta_\alpha}P^{(l)} +2(A_1-h^{(l)})_{\eta_\alpha}P^{(l)}_{y^\alpha} ] \\ = 2P^{(l)}\{A_1,P^{(j)}\} + 2P^{(j)}\{A_1,P^{(l)}\} - P^{(l)}\{A_1-h^{(j)},P^{(j)}\} - P^{(j)}\{A_1-h^{(l)},P^{(l)}\} \\ + P^{(l)} [ (A_1-h^{(j)})_{y^\alpha\eta_\alpha}P^{(j)} +(A_1-h^{(j)})_{\eta_\alpha}P^{(j)}_{y^\alpha} +(A_1-h^{(j)})_{y^\alpha}P^{(j)}_{\eta_\alpha} ] \\ + P^{(j)} [ (A_1-h^{(l)})_{y^\alpha\eta_\alpha}P^{(l)} +(A_1-h^{(l)})_{\eta_\alpha}P^{(l)}_{y^\alpha} +(A_1-h^{(l)})_{y^\alpha}P^{(l)}_{\eta_\alpha} ] \\ = P^{(l)}\{A_1+h^{(j)},P^{(j)}\} + P^{(j)}\{A_1+h^{(l)},P^{(l)}\} \\ - P^{(l)} (A_1-h^{(j)}) P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} (A_1-h^{(l)}) P^{(l)}_{y^\alpha\eta_\alpha} \\ = P^{(l)}\{A_1+h^{(j)},P^{(j)}\} + P^{(j)}\{A_1+h^{(l)},P^{(l)}\} \\ - P^{(l)} (h^{(l)}-h^{(j)}) P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} (h^{(j)}-h^{(l)}) P^{(l)}_{y^\alpha\eta_\alpha} \\ = P^{(l)}\{A_1+h^{(j)},P^{(j)}\} + P^{(j)}\{A_1+h^{(l)},P^{(l)}\} +(h^{(j)}-h^{(l)}) ( P^{(l)} P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} P^{(l)}_{y^\alpha\eta_\alpha} )\,, \end{multline*} so formula (\ref{subprincipal symbol of Uj0 equation 2}) can be rewritten as \begin{multline} \label{subprincipal symbol of Uj0 equation 3} [U^{(j)}(0)]_\mathrm{sub} = \frac i2 \Bigl( -P^{(j)}_{y^\alpha\eta_\alpha} + \sum_{l\ne j} ( P^{(l)} P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} P^{(l)}_{y^\alpha\eta_\alpha} ) \Bigr) \\ + \frac12 \sum_{l\ne j} \frac { P^{(l)}(2A_\mathrm{sub}P^{(j)}+i\{A_1+h^{(j)},P^{(j)}\}) + P^{(j)}(2A_\mathrm{sub}P^{(l)}+i\{A_1+h^{(l)},P^{(l)}\}) } { h^{(j)}-h^{(l)} }\,. \end{multline}
But \begin{multline*} \sum_{l\ne j} ( P^{(l)} P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} P^{(l)}_{y^\alpha\eta_\alpha} ) = \Bigl(\, \sum_{l\ne j} P^{(l)} \Bigr) P^{(j)}_{y^\alpha\eta_\alpha} - P^{(j)} \Bigl(\, \sum_{l\ne j} P^{(l)} \Bigr)_{y^\alpha\eta_\alpha} \\ =(I-P^{(j)})P^{(j)}_{y^\alpha\eta_\alpha} -P^{(j)}(I-P^{(j)})_{y^\alpha\eta_\alpha} =P^{(j)}_{y^\alpha\eta_\alpha}, \end{multline*} so formula (\ref{subprincipal symbol of Uj0 equation 3}) can be simplified to read \begin{multline} \label{Part 3 result} [U^{(j)}(0)]_\mathrm{sub} \\ = \frac12 \sum_{l\ne j} \frac { P^{(l)}(2A_\mathrm{sub}P^{(j)}+i\{A_1+h^{(j)},P^{(j)}\}) + P^{(j)}(2A_\mathrm{sub}P^{(l)}+i\{A_1+h^{(l)},P^{(l)}\}) } { h^{(j)}-h^{(l)} } \,. \end{multline}
\subsection{Part 4 of the proof of formula (\ref{subprincipal symbol of OI at time zero})} \label{Part 4}
Our task in this subsection is to calculate $\operatorname{tr}[U^{(j)}(0)]_\mathrm{sub}$.
Formula (\ref{Part 3 result}) implies \begin{equation} \label{trace of subprincipal symbol of Uj0 equation 1} \operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} = \frac i2\operatorname{tr}\sum_{l\ne j} \frac { P^{(l)}\{A_1,P^{(j)}\} + P^{(j)}\{A_1,P^{(l)}\} } { h^{(j)}-h^{(l)} } \,. \end{equation} Put $A_1=\sum_kh^{(k)}P^{(k)}$ and observe that \begin{itemize} \item terms with the derivatives of $h$ vanish and \item the only $k$ which may give nonzero contributions are $k=j$ and $k=l$. \end{itemize} Thus, formula (\ref{trace of subprincipal symbol of Uj0 equation 1}) becomes \begin{multline} \label{trace of subprincipal symbol of Uj0 equation 2} \operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} = \frac i2\operatorname{tr}\sum_{l\ne j} \frac1 { h^{(j)}-h^{(l)} } \\ \times\bigl( h^{(j)} [ P^{(l)}\{P^{(j)},P^{(j)}\} + P^{(j)}\{P^{(j)},P^{(l)}\} ] + h^{(l)} [ P^{(l)}\{P^{(l)},P^{(j)}\} + P^{(j)}\{P^{(l)},P^{(l)}\} ] \bigr). \end{multline}
We claim that \begin{multline} \label{Part 4 auxiliary equation 1} \operatorname{tr}(P^{(l)}\{P^{(j)},P^{(j)}\}) = \operatorname{tr}(P^{(j)}\{P^{(j)},P^{(l)}\}) \\ = -\operatorname{tr}(P^{(l)}\{P^{(l)},P^{(j)}\}) = -\operatorname{tr}(P^{(j)}\{P^{(l)},P^{(l)}\})
=[v^{(l)}]^*\{v^{(j)},[v^{(j)}]^*\}v^{(l)} \\ =([v^{(l)}]^*v^{(j)}_{y^\alpha})([v^{(j)}_{\eta_\alpha}]^*v^{(l)}) -([v^{(l)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}_{y^\alpha}]^*v^{(l)}). \end{multline} These facts are established by writing the orthogonal projections in terms of the eigenvectors and using, if required, the identities \[ [v^{(l)}_{y^\alpha}]^*v^{(j)}+[v^{(l)}]^*v^{(j)}_{y^\alpha}=0, \qquad [v^{(l)}_{\eta_\alpha}]^*v^{(j)}+[v^{(l)}]^*v^{(j)}_{\eta_\alpha}=0, \] \[ [v^{(j)}_{y^\alpha}]^*v^{(l)}+[v^{(j)}]^*v^{(l)}_{y^\alpha}=0, \qquad [v^{(j)}_{\eta_\alpha}]^*v^{(l)}+[v^{(j)}]^*v^{(l)}_{\eta_\alpha}=0. \] In view of the identities (\ref{Part 4 auxiliary equation 1}) formula (\ref{trace of subprincipal symbol of Uj0 equation 2}) can be rewritten as \begin{multline} \label{Part 4 auxiliary equation 2} \operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} = i\operatorname{tr} \sum_{l\ne j} P^{(l)}\{P^{(j)},P^{(j)}\} \\ = i\operatorname{tr} (\{P^{(j)},P^{(j)}\}-P^{(j)}\{P^{(j)},P^{(j)}\}) = -i\operatorname{tr} (P^{(j)}\{P^{(j)},P^{(j)}\}). \end{multline}
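To illustrate how the identities (\ref{Part 4 auxiliary equation 1}) are established, consider the first quantity. Using $\operatorname{tr}(P^{(l)}M)=[v^{(l)}]^*Mv^{(l)}$ we have
\[
\operatorname{tr}(P^{(l)}\{P^{(j)},P^{(j)}\})
=[v^{(l)}]^*
\bigl(
P^{(j)}_{y^\alpha}P^{(j)}_{\eta_\alpha}
-P^{(j)}_{\eta_\alpha}P^{(j)}_{y^\alpha}
\bigr)
v^{(l)}
=([v^{(l)}]^*v^{(j)}_{y^\alpha})([v^{(j)}_{\eta_\alpha}]^*v^{(l)})
-([v^{(l)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}_{y^\alpha}]^*v^{(l)})\,,
\]
where in the last step we expanded
$P^{(j)}_{y^\alpha}
=v^{(j)}_{y^\alpha}[v^{(j)}]^*+v^{(j)}[v^{(j)}_{y^\alpha}]^*$
(and similarly for $P^{(j)}_{\eta_\alpha}$)
and dropped all terms containing the vanishing products $[v^{(j)}]^*v^{(l)}$ and $[v^{(l)}]^*v^{(j)}$.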
It remains only to simplify the expression in the RHS of (\ref{Part 4 auxiliary equation 2}). We have \begin{multline} \label{Part 4 auxiliary equation 3} \operatorname{tr} (P^{(j)}\{P^{(j)},P^{(j)}\}) = \{[v^{(j)}]^*,v^{(j)}\} \\ +[([v^{(j)}]^*v^{(j)}_{y^\alpha})([v^{(j)}]^*v^{(j)}_{\eta_\alpha})-([v^{(j)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}]^*v^{(j)}_{y^\alpha})] \\ +[([v^{(j)}_{y^\alpha}]^*v^{(j)})([v^{(j)}_{\eta_\alpha}]^*v^{(j)})-([v^{(j)}_{\eta_\alpha}]^*v^{(j)})([v^{(j)}_{y^\alpha}]^*v^{(j)})] \\ +[([v^{(j)}]^*v^{(j)}_{y^\alpha})([v^{(j)}_{\eta_\alpha}]^*v^{(j)})-([v^{(j)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}_{y^\alpha}]^*v^{(j)})] \\ = \{[v^{(j)}]^*,v^{(j)}\} +[([v^{(j)}]^*v^{(j)}_{y^\alpha})([v^{(j)}_{\eta_\alpha}]^*v^{(j)})-([v^{(j)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}_{y^\alpha}]^*v^{(j)})] \\ = \{[v^{(j)}]^*,v^{(j)}\} -[([v^{(j)}]^*v^{(j)}_{y^\alpha})([v^{(j)}]^*v^{(j)}_{\eta_\alpha})-([v^{(j)}]^*v^{(j)}_{\eta_\alpha})([v^{(j)}]^*v^{(j)}_{y^\alpha})] \\ = \{[v^{(j)}]^*,v^{(j)}\}. \end{multline} Formulae (\ref{Part 4 auxiliary equation 2}) and (\ref{Part 4 auxiliary equation 3}) imply formula (\ref{subprincipal symbol of OI at time zero}).
\section{$\mathrm{U}(1)$ connection} \label{U(1) connection}
In the preceding Sections \ref{Algorithm for the construction of the wave group}--\ref{Proof of formula} we presented technical details of the construction of the propagator. We saw that the eigenvectors of the principal symbol, $v^{(j)}(x,\xi)$, play a major role in this construction. As pointed out in Section~\ref{Main results}, each of these eigenvectors is defined up to a $\mathrm{U}(1)$ gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}). In the end, the full symbols (\ref{decomposition of symbol of OI into homogeneous components}) of our oscillatory integrals $U^{(j)}(t)$ do not depend on the choice of gauge for the eigenvectors $v^{(j)}(x,\xi)$. However, the effect of the gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}) is not as trivial as it may appear at first sight. We will demonstrate in this section that the gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}) shows up, in the form of invariantly defined curvature, in the lower order terms $u^{(j)}_{-1}(t;y,\eta)$ of the symbols of our oscillatory integrals $U^{(j)}(t)$. More precisely, we will show that the RHS of formula~(\ref{subprincipal symbol of OI at time zero}) is the scalar curvature of a connection associated with the gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}). Further on in this section, until the very last paragraph, the index $j$ enumerating eigenvalues and eigenvectors of the principal symbol is assumed to be fixed.
Consider a smooth curve $\Gamma\subset T'M$ connecting points $(y,\eta)$ and $(x,\xi)$. We write this curve in parametric form as $(z(t),\zeta(t))$, $t\in[0,1]$, so that $(z(0),\zeta(0))=(y,\eta)$ and $(z(1),\zeta(1))=(x,\xi)$. Put \begin{equation} \label{derivative of eigenvector is orthogonal to eigenvector auxiliary} w(t):=e^{i\phi(t)}v^{(j)}(z(t),\zeta(t))\,, \end{equation} where $\phi:[0,1]\to\mathbb{R}$ is an unknown function which is to be determined from the condition \begin{equation} \label{derivative of eigenvector is orthogonal to eigenvector} iw^*\dot w=0 \end{equation} with the dot indicating the derivative with respect to the parameter $t$. Substituting (\ref{derivative of eigenvector is orthogonal to eigenvector auxiliary}) into (\ref{derivative of eigenvector is orthogonal to eigenvector}) we get an ordinary differential equation for $\phi$ which is easily solved, giving \begin{multline} \label{formula for phi(1)} \phi(1) =\phi(0)+\int_0^1(\dot z^\alpha(t)\,P_\alpha(z(t),\zeta(t))+\dot\zeta_\gamma(t)\,Q^\gamma(z(t),\zeta(t)))\,dt \\ =\phi(0)+\int_\Gamma(P_\alpha dz^\alpha+Q^\gamma d\zeta_\gamma)\,, \end{multline} where \begin{equation} \label{formula for P and Q} P_\alpha:=i[v^{(j)}]^*v^{(j)}_{z^\alpha}, \qquad Q^\gamma:=i[v^{(j)}]^*v^{(j)}_{\zeta_\gamma}. \end{equation} Note that the $2n$-component real quantity $(P_\alpha,Q^\gamma)$ is a covector field (1-form) on $T'M$. This quantity already appeared in Section~\ref{Main results} as formula (\ref{electromagnetic covector potential}).
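The calculation behind (\ref{formula for phi(1)}) is elementary: since $[v^{(j)}]^*v^{(j)}=1$, substituting (\ref{derivative of eigenvector is orthogonal to eigenvector auxiliary}) into (\ref{derivative of eigenvector is orthogonal to eigenvector}) gives
\[
0=iw^*\dot w=i\bigl(i\dot\phi+[v^{(j)}]^*\dot v^{(j)}\bigr),
\qquad\text{i.e.}\qquad
\dot\phi
=i[v^{(j)}]^*
\bigl(
v^{(j)}_{z^\alpha}\dot z^\alpha
+v^{(j)}_{\zeta_\gamma}\dot\zeta_\gamma
\bigr)
=P_\alpha\dot z^\alpha+Q^\gamma\dot\zeta_\gamma\,,
\]
and integration over $t\in[0,1]$ yields (\ref{formula for phi(1)}).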
Put $f(y,\eta):=e^{i\phi(0)}$, $f(x,\xi):=e^{i\phi(1)}$ and rewrite formula (\ref{formula for phi(1)}) as \begin{equation} \label{formula for a(1)} f(x,\xi) =f(y,\eta)\,e^{i\int_\Gamma(P_\alpha dz^\alpha+Q^\gamma d\zeta_\gamma)}. \end{equation}
Let us identify the group $\mathrm{U}(1)$ with the unit circle in the complex plane, i.e. with $f\in\mathbb{C}$, $|f|=1$. We see that formulae (\ref{formula for a(1)}) and (\ref{formula for P and Q}) give us a rule for the parallel transport of elements of the group $\mathrm{U}(1)$ along curves in $T'M$. This is the natural $\mathrm{U}(1)$ connection generated by the normalised field of columns of complex-valued scalars \begin{equation} \label{jth eigenvector of the principal symbol} v^{(j)}(z,\zeta)= \bigl( \begin{matrix}v^{(j)}_1(z,\zeta)&\ldots&v^{(j)}_m(z,\zeta)\end{matrix} \bigr)^T. \end{equation} Recall that the $\Gamma$ appearing in formula (\ref{formula for a(1)}) is a curve connecting points $(y,\eta)$ and $(x,\xi)$, whereas the $v^{(j)}(z,\zeta)$ appearing in formulae (\ref{formula for P and Q}) and (\ref{jth eigenvector of the principal symbol}) enters our construction as an eigenvector of the principal symbol of our $m\times m$ matrix pseudo\-differential operator $A$.
In practice, dealing with a connection is not as convenient as dealing with the covariant derivative $\nabla$. The covariant derivative corresponding to the connection (\ref{formula for a(1)}) is determined as follows. Let us view the $(x,\xi)$ appearing in formula (\ref{formula for a(1)}) as a variable which takes values close to $(y,\eta)$, and suppose that the curve $\Gamma$ is a short straight (in local coordinates) line segment connecting the point $(y,\eta)$ with the point $(x,\xi)$. We want the covariant derivative of our function $f(x,\xi)$, evaluated at $(y,\eta)$, to be zero. Examination of formula (\ref{formula for a(1)}) shows that the unique covariant derivative satisfying this condition is \begin{equation} \label{formula for U(1) covariant derivative} \nabla_\alpha:=\partial/\partial x^\alpha-iP_\alpha(x,\xi), \qquad \nabla^\gamma:=\partial/\partial\xi_\gamma-iQ^\gamma(x,\xi). \end{equation}
We define the curvature of our $\mathrm{U}(1)$ connection as \begin{equation} \label{definition of U(1) curvature} R:= -i \begin{pmatrix} \nabla_\alpha\nabla_\beta-\nabla_\beta\nabla_\alpha& \nabla_\alpha\nabla^\delta-\nabla^\delta\nabla_\alpha \\ \nabla^\gamma\nabla_\beta-\nabla_\beta\nabla^\gamma& \nabla^\gamma\nabla^\delta-\nabla^\delta\nabla^\gamma \end{pmatrix}. \end{equation} It may seem that the entries of the $(2n)\times(2n)$ matrix (\ref{definition of U(1) curvature}) are differential operators. They are, in fact, operators of multiplication by ``scalar functions''. Namely, the more explicit form of (\ref{definition of U(1) curvature}) is \begin{equation} \label{explicit formula for U(1) curvature} R= \begin{pmatrix} \frac{\partial P_\alpha}{\partial x^\beta}-\frac{\partial P_\beta}{\partial x^\alpha}& \frac{\partial P_\alpha}{\partial\xi_\delta}-\frac{\partial Q^\delta}{\partial x^\alpha} \\ \frac{\partial Q^\gamma}{\partial x^\beta}-\frac{\partial P_\beta}{\partial\xi_\gamma}& \frac{\partial Q^\gamma}{\partial\xi_\delta}-\frac{\partial Q^\delta}{\partial\xi_\gamma} \end{pmatrix}. \end{equation} The $(2n)\times(2n)$\,-\,component real quantity (\ref{explicit formula for U(1) curvature}) is a rank 2 covariant antisymmetric tensor (2-form) on $T'M$. It is an analogue of the electromagnetic tensor.
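To see this, consider, for example, the top left block: for any test function $f$ we have
\[
(\nabla_\alpha\nabla_\beta-\nabla_\beta\nabla_\alpha)f
=\bigl[\partial/\partial x^\alpha-iP_\alpha,\,\partial/\partial x^\beta-iP_\beta\bigr]f
=-i
\left(
\frac{\partial P_\beta}{\partial x^\alpha}
-\frac{\partial P_\alpha}{\partial x^\beta}
\right)f
\]
because the terms involving derivatives of $f$ cancel in the commutator. Multiplication by the prefactor $-i$ from (\ref{definition of U(1) curvature}) then produces the corresponding entry of (\ref{explicit formula for U(1) curvature}); the remaining blocks are handled in the same way.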
Substituting (\ref{formula for P and Q}) into (\ref{explicit formula for U(1) curvature}) we get an expression for curvature in terms of the eigenvector of the principal symbol \begin{equation} \label{more explicit formula for U(1) curvature} R=i \begin{pmatrix} [v^{(j)}_{x^\beta}]^*v^{(j)}_{x^\alpha}-[v^{(j)}_{x^\alpha}]^*v^{(j)}_{x^\beta}& [v^{(j)}_{\xi_\delta}]^*v^{(j)}_{x^\alpha}-[v^{(j)}_{x^\alpha}]^*v^{(j)}_{\xi_\delta} \\ [v^{(j)}_{x^\beta}]^*v^{(j)}_{\xi_\gamma}-[v^{(j)}_{\xi_\gamma}]^*v^{(j)}_{x^\beta}& [v^{(j)}_{\xi_\delta}]^*v^{(j)}_{\xi_\gamma}-[v^{(j)}_{\xi_\gamma}]^*v^{(j)}_{\xi_\delta} \end{pmatrix}. \end{equation} Examination of formula (\ref{more explicit formula for U(1) curvature}) shows that, as expected, curvature is invariant under the gauge transformation (\ref{gauge transformation of the eigenvector}), (\ref{phase appearing in gauge transformation}).
It is natural to take the trace of the upper right block in (\ref{definition of U(1) curvature}) which, in the notation (\ref{Poisson bracket on matrix-functions}), gives us \begin{equation} \label{scalar curvature of U(1) connection} -i(\nabla_\alpha\nabla^\alpha-\nabla^\alpha\nabla_\alpha) =-i\{[v^{(j)}]^*,v^{(j)}\}. \end{equation} Thus, we have shown that the RHS of formula~(\ref{subprincipal symbol of OI at time zero}) is the scalar curvature of our $\mathrm{U}(1)$ connection.
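For completeness, here is the calculation behind (\ref{scalar curvature of U(1) connection}). Substituting (\ref{formula for P and Q}) into the trace of the upper right block of (\ref{explicit formula for U(1) curvature}) we get
\[
\frac{\partial P_\alpha}{\partial\xi_\alpha}
-\frac{\partial Q^\alpha}{\partial x^\alpha}
=i
\bigl(
[v^{(j)}_{\xi_\alpha}]^*v^{(j)}_{x^\alpha}
-[v^{(j)}_{x^\alpha}]^*v^{(j)}_{\xi_\alpha}
\bigr)
=-i\{[v^{(j)}]^*,v^{(j)}\}\,,
\]
the mixed second order derivatives of $v^{(j)}$ having cancelled.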
\
We end this section by proving, as promised in Section~\ref{Main results}, formula (\ref{sum of curvatures is zero}) without referring to microlocal analysis. In the following arguments we use our standard notation for the orthogonal projections onto the eigenspaces of the principal symbol, i.e.~we write $P^{(k)}:=v^{(k)}[v^{(k)}]^*$. We have $\operatorname{tr}\{P^{(j)},P^{(j)}\}=0$ and $\sum_lP^{(l)}=I$ which implies \begin{multline} \label{sum of curvatures is zero proof equation 1} 0=\sum_{l,j}\operatorname{tr}(P^{(l)}\{P^{(j)},P^{(j)}\}) \\ =\sum_j\operatorname{tr}(P^{(j)}\{P^{(j)},P^{(j)}\}) +\sum_{l,j:\ l\ne j}\operatorname{tr}(P^{(l)}\{P^{(j)},P^{(j)}\}). \end{multline} But, according to formula (\ref{Part 4 auxiliary equation 1}), for $l\ne j$ we have \[ \operatorname{tr}(P^{(l)}\{P^{(j)},P^{(j)}\}) =-\operatorname{tr}(P^{(j)}\{P^{(l)},P^{(l)}\}), \] so the expression in the last sum in the RHS of (\ref{sum of curvatures is zero proof equation 1}) is antisymmetric in the indices $l,j$, which implies that this sum is zero. Hence, formula (\ref{sum of curvatures is zero proof equation 1}) can be rewritten as $\sum\limits_j\operatorname{tr}(P^{(j)}\{P^{(j)},P^{(j)}\})=0$. It remains only to note that, according to formula (\ref{Part 4 auxiliary equation 3}), $\operatorname{tr}(P^{(j)}\{P^{(j)},P^{(j)}\})=\{[v^{(j)}]^*,v^{(j)}\}$.
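The two families of trace identities used above, (\ref{Part 4 auxiliary equation 1}) and $\sum_j\operatorname{tr}(P^{(j)}\{P^{(j)},P^{(j)}\})=0$, lend themselves to a quick numerical sanity check. The sketch below is purely illustrative and is not part of the argument: it replaces the principal symbol by a randomly chosen linear Hermitian pencil in one position variable $x$ and one dual variable $\xi$ (our own choice, generically with simple eigenvalues), computes the gauge-invariant eigenprojections with \texttt{numpy.linalg.eigh}, and approximates the Poisson brackets by central differences.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3  # number of eigenvalues (generically simple)

def herm():
    # random m x m Hermitian matrix
    X = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
    return (X + X.conj().T) / 2

# toy "principal symbol": a linear Hermitian pencil on a 2D phase space
H0, H1, H2 = herm(), herm(), herm()

def projections(x, xi):
    # orthogonal eigenprojections P^(j) = v^(j) [v^(j)]^*, ordered by
    # ascending eigenvalue; these are gauge-invariant
    _, V = np.linalg.eigh(H0 + x * H1 + xi * H2)
    return [np.outer(V[:, j], V[:, j].conj()) for j in range(m)]

x0, xi0, h = 0.3, -0.7, 1e-5
P = projections(x0, xi0)
Px = [(a - b) / (2 * h) for a, b in
      zip(projections(x0 + h, xi0), projections(x0 - h, xi0))]
Pxi = [(a - b) / (2 * h) for a, b in
       zip(projections(x0, xi0 + h), projections(x0, xi0 - h))]

def pb(j, k):
    # matrix Poisson bracket {P^(j), P^(k)}; a single index alpha here
    return Px[j] @ Pxi[k] - Pxi[j] @ Px[k]

# the four equal quantities of (Part 4 auxiliary equation 1), with l=0, j=1
t1 = np.trace(P[0] @ pb(1, 1))
assert abs(t1 - np.trace(P[1] @ pb(1, 0))) < 1e-6
assert abs(t1 + np.trace(P[0] @ pb(0, 1))) < 1e-6
assert abs(t1 + np.trace(P[1] @ pb(0, 0))) < 1e-6

# sum of curvatures is zero: sum_j tr( P^(j) {P^(j), P^(j)} ) = 0
assert abs(sum(np.trace(P[j] @ pb(j, j)) for j in range(m))) < 1e-6
```

The asserts check both identities to within the $O(h^2)$ accuracy of the finite differences; only the projections enter the computation, so the arbitrary eigenvector phases returned by \texttt{eigh} are harmless.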
\section{Singularity of the propagator at $t=0$} \label{Singularity of the wave group at time zero}
Following the notation of \cite{mybook}, we denote by
\[ \mathcal{F}_{\lambda\to t}[f(\lambda)]=\hat f(t)=\int e^{-it\lambda}f(\lambda)\,d\lambda \]
the one-dimensional Fourier transform and by
\[ \mathcal{F}^{-1}_{t\to\lambda}[\hat f(t)]=f(\lambda)=(2\pi)^{-1}\int e^{it\lambda}\hat f(t)\,dt \]
its inverse.
Suppose that we have a Hamiltonian trajectory $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ and a real number $T>0$ such that $x^{(j)}(T;y,\eta)=y$. We will say in this case that we have a loop of length $T$ originating from the point $y\in M$.
\begin{remark} \label{remark on reversibility} There is no need to consider loops of negative length $T$ because, given a $T>0$, we have $x^{(j)}(T;y,\eta^+)=y$ for some $\eta^+\in T'_yM$ if and only if we have $x^{(j)}(-T;y,\eta^-)=y$ for some $\eta^-\in T'_yM$. Indeed, it suffices to relate the $\eta^\pm$ in accordance with $\eta^\mp=\xi^{(j)}(\pm T;y,\eta^\pm)$. \end{remark}
Denote by $\mathcal{T}^{(j)}\subset\mathbb{R}$ the set of lengths $T>0$ of all possible loops generated by the Hamiltonian $h^{(j)}$. Here ``all possible'' refers to all possible starting points $(y,\eta)\in T'M$ of Hamiltonian trajectories. It is easy to see that $0\not\in\overline{\mathcal{T}^{(j)}}$. We put \[
\mathbf{T}^{(j)}:= \begin{cases} \inf\mathcal{T}^{(j)}\quad&\text{if}\quad\mathcal{T}^{(j)}\ne\emptyset, \\ +\infty\quad&\text{if}\quad\mathcal{T}^{(j)}=\emptyset. \end{cases}
\]
In the Riemannian case (i.e.~the case when the Hamiltonian is a square root of a quadratic polynomial in $\xi$) it is known \cite{sabourau,rotman} that there is a loop originating from every point of the manifold $M$ and, moreover, there is an explicit estimate from above for the number $\mathbf{T}^{(j)}$. We are not aware of similar results for general Hamiltonians.
We also define
$\mathbf{T}:=\min\limits_{j=1,\ldots,m^+}\mathbf{T}^{(j)}$.
\begin{remark} \label{remark on negative Hamiltonians} Note that negative eigenvalues of the principal symbol, i.e.~Hamiltonians $h^{(j)}(x,\xi)$ with negative index $j=-1,\ldots,-m^-$, do not affect the asymptotic formulae we are about to derive. This is because we are dealing with the case $\lambda\to+\infty$ rather than $\lambda\to-\infty$. \end{remark}
Denote by \begin{equation} \label{definition of integral kernel of wave group} u(t,x,y):= \sum_k e^{-it\lambda_k}v_k(x)[v_k(y)]^* \end{equation} the integral kernel of the propagator (\ref{definition of wave group}). The quantity (\ref{definition of integral kernel of wave group}) can be understood as a distribution in the variable $t\in\mathbb{R}$ depending on the parameters $x,y\in M$.
The main result of this section is the following \begin{lemma} \label{Singularity of the wave group at time zero lemma} Let $\hat\rho:\mathbb{R}\to\mathbb{C}$ be an infinitely smooth function such that \begin{equation} \label{condition on hat rho 1} \operatorname{supp}\hat\rho\subset(-\mathbf{T},\mathbf{T}), \end{equation} \begin{equation} \label{condition on hat rho 2} \hat\rho(0)=1, \end{equation} \begin{equation} \label{condition on hat rho 3} \hat\rho'(0)=0. \end{equation} Then, uniformly over $y\in M$, we have \begin{equation} \label{Singularity of the wave group at time zero lemma formula} \mathcal{F}^{-1}_{t\to\lambda}[\hat\rho(t)\operatorname{tr}u(t,y,y)]= n\,a(y)\,\lambda^{n-1}+(n-1)\,b(y)\,\lambda^{n-2}+O(\lambda^{n-3}) \end{equation} as $\lambda\to+\infty$. The densities $a(y)$ and $b(y)$ appearing in the RHS of formula (\ref{Singularity of the wave group at time zero lemma formula}) are defined in accordance with formulae (\ref{formula for a(x)}) and (\ref{formula for b(x)}). \end{lemma}
\emph{Proof\ } Denote by $(S^*_yM)^{(j)}$ the $(n-1)$-dimensional unit cosphere in the cotangent fibre defined by the equation $h^{(j)}(y,\eta)=1$ and denote by $d(S^*_yM)^{(j)}$ the surface area element on $(S^*_yM)^{(j)}$ defined by the condition $d\eta=d(S^*_yM)^{(j)}\,dh^{(j)}$. The latter means that we use spherical coordinates in the cotangent fibre with the Hamiltonian $h^{(j)}$ playing the role of the radial coordinate, see subsection 1.1.10 of \cite{mybook} for details. In particular, as explained in subsection 1.1.10 of \cite{mybook}, our surface area element $d(S^*_yM)^{(j)}$ is expressed via the Euclidean surface area element as \[ d(S^*_yM)^{(j)}= \biggl(\,\sum_{\alpha=1}^n\bigl(h^{(j)}_{\eta_\alpha}(y,\eta)\bigr)^2\biggr)^{-1/2} \times\, \text{Euclidean surface area element} \,. \] Denote also $\,{d{\hskip-1pt\bar{}}\hskip1pt}(S^*_yM)^{(j)}:=(2\pi)^{-n}\,d(S^*_yM)^{(j)}\,$.
According to Corollary 4.1.5 from \cite{mybook} we have uniformly over $y\in M$ \begin{multline} \label{Singularity of the wave group at time zero lemma equation 1} \mathcal{F}^{-1}_{t\to\lambda}[\hat\rho(t)\operatorname{tr}u(t,y,y)]= \\ \sum_{j=1}^{m^+} \left(c^{(j)}(y)\,\lambda^{n-1}+d^{(j)}(y)\,\lambda^{n-2}+e^{(j)}(y)\,\lambda^{n-2}\right) +O(\lambda^{n-3})\,, \end{multline} where \begin{equation} \label{Singularity of the wave group at time zero lemma equation 2} c^{(j)}(y)=\int\limits_{(S^*_yM)^{(j)}} \operatorname{tr}u^{(j)}_0(0;y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}(S^*_yM)^{(j)}\,, \end{equation} \begin{multline} \label{Singularity of the wave group at time zero lemma equation 3} d^{(j)}(y)= \\ (n-1)\int\limits_{(S^*_yM)^{(j)}} \operatorname{tr} \left( -\,i\,\dot u^{(j)}_0(0;y,\eta)
+\frac i2\bigl\{u^{(j)}_0\bigr|_{t=0}\,,h^{(j)}\bigr\}(y,\eta) \right) {d{\hskip-1pt\bar{}}\hskip1pt}(S^*_yM)^{(j)}\,, \end{multline} \begin{equation} \label{Singularity of the wave group at time zero lemma equation 4} e^{(j)}(y)=\int\limits_{(S^*_yM)^{(j)}} \operatorname{tr}[U^{(j)}(0)]_\mathrm{sub}(y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}(S^*_yM)^{(j)}\,. \end{equation} Here $u^{(j)}_0(t;y,\eta)$ is the principal symbol of the oscillatory integral (\ref{algorithm equation 1}) and $\dot u^{(j)}_0(t;y,\eta)$ is its time derivative. Note that in writing the term with the Poisson bracket in (\ref{Singularity of the wave group at time zero lemma equation 3}) we took account of the fact that Poisson brackets in \cite{mybook} and in the current paper have opposite signs.
Observe that the integrands in formulae (\ref{Singularity of the wave group at time zero lemma equation 2}) and (\ref{Singularity of the wave group at time zero lemma equation 3}) are positively homogeneous in $\eta$ of degree 0, whereas the integrand in formula (\ref{Singularity of the wave group at time zero lemma equation 4}) is positively homogeneous in $\eta$ of degree $-1$. In order to have the same degree of homogeneity, we rewrite formula (\ref{Singularity of the wave group at time zero lemma equation 4}) in equivalent form \begin{equation} \label{Singularity of the wave group at time zero lemma equation 5} e^{(j)}(y)=\int\limits_{(S^*_yM)^{(j)}} \bigl( h^{(j)}\operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} \bigr) (y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}(S^*_yM)^{(j)}\,. \end{equation}
Switching from surface integrals to volume integrals with the help of formula (1.1.15) from \cite{mybook}, we rewrite formulae (\ref{Singularity of the wave group at time zero lemma equation 2}), (\ref{Singularity of the wave group at time zero lemma equation 3}) and (\ref{Singularity of the wave group at time zero lemma equation 5}) as \begin{equation} \label{Singularity of the wave group at time zero lemma equation 6} c^{(j)}(y)=n\int\limits_{h^{(j)}(y,\eta)<1} \operatorname{tr}u^{(j)}_0(0;y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}\eta\,, \end{equation} \begin{multline} \label{Singularity of the wave group at time zero lemma equation 7} d^{(j)}(y)=n(n-1)\times \\ \int\limits_{h^{(j)}(y,\eta)<1} \operatorname{tr} \left( -\,i\,\dot u^{(j)}_0(0;y,\eta)
+\frac i2\bigl\{u^{(j)}_0\bigr|_{t=0}\,,h^{(j)}\bigr\}(y,\eta) \right) {d{\hskip-1pt\bar{}}\hskip1pt}\eta\,, \end{multline} \begin{equation} \label{Singularity of the wave group at time zero lemma equation 8} e^{(j)}(y)=n\int\limits_{h^{(j)}(y,\eta)<1} \bigl( h^{(j)}\operatorname{tr}[U^{(j)}(0)]_\mathrm{sub} \bigr) (y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}\eta\,. \end{equation}
Substituting formulae (\ref{formula for principal symbol of oscillatory integral}) and (\ref{phase appearing in principal symbol}) into formulae (\ref{Singularity of the wave group at time zero lemma equation 6}) and (\ref{Singularity of the wave group at time zero lemma equation 7}) we get \begin{equation} \label{Singularity of the wave group at time zero lemma equation 9} c^{(j)}(y)=n\int\limits_{h^{(j)}(y,\eta)<1} {d{\hskip-1pt\bar{}}\hskip1pt}\eta\,, \end{equation} \begin{multline} \label{Singularity of the wave group at time zero lemma equation 10} d^{(j)}(y)=-n(n-1)\times \\ \int\limits_{h^{(j)}(y,\eta)<1} \left( [v^{(j)}]^*A_\mathrm{sub}v^{(j)} -\frac i2 \{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \} \right)(y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}\eta\,. \end{multline} Substituting formula (\ref{subprincipal symbol of OI at time zero}) into formula (\ref{Singularity of the wave group at time zero lemma equation 8}) we get \begin{equation} \label{Singularity of the wave group at time zero lemma equation 11} e^{(j)}(y)=-n\,i\int\limits_{h^{(j)}(y,\eta)<1} \bigl( h^{(j)}\{[v^{(j)}]^*,v^{(j)}\} \bigr) (y,\eta) \,{d{\hskip-1pt\bar{}}\hskip1pt}\eta\,. \end{equation}
Substituting formulae (\ref{Singularity of the wave group at time zero lemma equation 9})--(\ref{Singularity of the wave group at time zero lemma equation 11}) into formula (\ref{Singularity of the wave group at time zero lemma equation 1}) we arrive at (\ref{Singularity of the wave group at time zero lemma formula}).~$\square$
\begin{remark} The proof of Lemma~\ref{Singularity of the wave group at time zero lemma} given above was based on the use of Corollary 4.1.5 from \cite{mybook}. In the actual statement of Corollary 4.1.5 in \cite{mybook} uniformity in $y\in M$ was not mentioned because the authors were dealing with a manifold with a boundary. Uniformity reappeared in the subsequent Theorem 4.2.1 which involved pseudodifferential cut-offs separating the point $\,y\,$ from the boundary. \end{remark}
\section{Mollified spectral asymptotics} \label{Mollified spectral asymptotics}
\begin{theorem} \label{theorem spectral function mollified} Let $\rho:\mathbb{R}\to\mathbb{C}$ be a function from Schwartz space $\mathcal{S}(\mathbb{R})$ whose Fourier transform $\hat\rho$ satisfies conditions (\ref{condition on hat rho 1})--(\ref{condition on hat rho 3}). Then, uniformly over $x\in M$, we have \begin{equation} \label{theorem spectral function mollified formula} \int e(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu= a(x)\,\lambda^n+b(x)\,\lambda^{n-1}+ \begin{cases} O(\lambda^{n-2})\quad&\text{if}\quad{n\ge3}, \\ O(\ln\lambda)\quad&\text{if}\quad{n=2}, \end{cases} \end{equation} as $\lambda\to+\infty$. The densities $a(x)$ and $b(x)$ appearing in the RHS of formula (\ref{theorem spectral function mollified formula}) are defined in accordance with formulae (\ref{formula for a(x)}) and (\ref{formula for b(x)}). \end{theorem}
\emph{Proof\ } Our spectral function $e(\lambda,x,x)$ was initially defined only for $\lambda>0$, see formula (\ref{definition of spectral function}). We extend the definition to the whole real line by setting
\[ e(\lambda,x,x):=0\quad\text{for}\quad\lambda\le0. \]
Denote by $e'(\lambda,x,x)$ the derivative, with respect to the spectral parameter, of the spectral function. Here ``derivative'' is understood in the sense of distributions. The explicit formula for $e'(\lambda,x,x)$ is \begin{equation} \label{theorem spectral function mollified equation 2}
e'(\lambda,x,x):=\sum_{k=1}^{+\infty}\|v_k(x)\|^2\,\delta(\lambda-\lambda_k). \end{equation}
Formula (\ref{theorem spectral function mollified equation 2}) gives us \begin{equation} \label{theorem spectral function mollified equation 3} \int e'(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu=
\sum_{k=1}^{+\infty}\|v_k(x)\|^2\,\rho(\lambda-\lambda_k). \end{equation} Formula (\ref{theorem spectral function mollified equation 3}) implies, in particular, that, uniformly over $x\in M$, we have \begin{equation} \label{theorem spectral function mollified equation 4}
\int e'(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu=O(|\lambda|^{-\infty}) \quad\text{as}\quad\lambda\to-\infty\,, \end{equation}
where $O(|\lambda|^{-\infty})$ is shorthand for ``tends to zero faster than any given inverse power of $|\lambda|$''.
Formula (\ref{theorem spectral function mollified equation 3}) can also be rewritten as \begin{equation} \label{theorem spectral function mollified equation 5} \int e'(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu= \mathcal{F}^{-1}_{t\to\lambda}[\hat\rho(t)\operatorname{tr}u(t,x,x)]
-\sum_{k\le0}\|v_k(x)\|^2\,\rho(\lambda-\lambda_k)\,, \end{equation} where the distribution $u(t,x,y)$ is defined in accordance with formula (\ref{definition of integral kernel of wave group}). Clearly, we have \begin{equation} \label{theorem spectral function mollified equation 6}
\sum_{k\le0}\|v_k(x)\|^2\,\rho(\lambda-\lambda_k)=O(\lambda^{-\infty}) \quad\text{as}\quad\lambda\to+\infty\,. \end{equation} Formulae (\ref{theorem spectral function mollified equation 5}), (\ref{theorem spectral function mollified equation 6}) and Lemma~\ref{Singularity of the wave group at time zero lemma} imply that, uniformly over $x\in M$, we have \begin{multline} \label{theorem spectral function mollified equation 7} \int e'(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu= \\ n\,a(x)\,\lambda^{n-1}+(n-1)\,b(x)\,\lambda^{n-2}+O(\lambda^{n-3}) \quad\text{as}\quad\lambda\to+\infty\,. \end{multline}
It remains to note that \begin{equation} \label{theorem spectral function mollified equation 8} \frac d{d\lambda}\int e(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu = \int e'(\lambda-\mu,x,x)\,\rho(\mu)\,d\mu\,. \end{equation} Formulae (\ref{theorem spectral function mollified equation 8}), (\ref{theorem spectral function mollified equation 4}) and (\ref{theorem spectral function mollified equation 7}) imply (\ref{theorem spectral function mollified formula}).~$\square$
\begin{theorem} \label{theorem counting function mollified} Let $\rho:\mathbb{R}\to\mathbb{C}$ be a function from Schwartz space $\mathcal{S}(\mathbb{R})$ whose Fourier transform $\hat\rho$ satisfies conditions (\ref{condition on hat rho 1})--(\ref{condition on hat rho 3}). Then we have \begin{equation} \label{theorem counting function mollified formula} \int N(\lambda-\mu)\,\rho(\mu)\,d\mu= a\,\lambda^n+b\,\lambda^{n-1}+ \begin{cases} O(\lambda^{n-2})\quad&\text{if}\quad{n\ge3}, \\ O(\ln\lambda)\quad&\text{if}\quad{n=2}, \end{cases} \end{equation} as $\lambda\to+\infty$. The constants $a$ and $b$ appearing in the RHS of formula (\ref{theorem counting function mollified formula}) are defined in accordance with formulae (\ref{a via a(x)}), (\ref{formula for a(x)}), (\ref{b via b(x)}) and (\ref{formula for b(x)}). \end{theorem}
\emph{Proof\ } Formula (\ref{theorem counting function mollified formula}) follows from formula (\ref{theorem spectral function mollified formula}) by integration over $M$; see also formula (\ref{definition of counting function}).~$\square$
\
In stating Theorems \ref{theorem spectral function mollified} and \ref{theorem counting function mollified} we assumed the mollifier $\rho$ to be complex-valued. This was done for the sake of generality but may seem unnatural when mollifying the real-valued functions $e(\lambda,x,x)$ and $N(\lambda)$. One can make our construction look more natural by dealing only with real-valued mollifiers $\rho$. Note that if the function $\rho$ is real-valued and even, then its Fourier transform $\hat\rho$ is also real-valued and even and, moreover, condition (\ref{condition on hat rho 3}) is automatically satisfied.
\section{Unmollified spectral asymptotics} \label{Unmollified spectral asymptotics}
In this section we derive asymptotic formulae for the spectral function $e(\lambda,x,x)$ and the counting function $N(\lambda)$ without mollification. The section is split into two subsections: in the first we derive one-term asymptotic formulae, and in the second two-term asymptotic formulae.
\subsection{One-term spectral asymptotics} \label{One-term spectral asymptotics}
\begin{theorem} \label{theorem spectral function unmollified one term} We have, uniformly over $x\in M$, \begin{equation} \label{theorem spectral function unmollified one term formula} e(\lambda,x,x)=a(x)\,\lambda^n+O(\lambda^{n-1}) \end{equation} as $\lambda\to+\infty$. \end{theorem}
\emph{Proof\ } The result in question is an immediate consequence of formulae (\ref{theorem spectral function mollified equation 8}), (\ref{theorem spectral function mollified equation 7}) and Theorem~\ref{theorem spectral function mollified} from the current paper and Corollary~B.2.2 from \cite{mybook}.~$\square$
\begin{theorem} \label{theorem counting function unmollified one term} We have \begin{equation} \label{theorem counting function unmollified one term formula} N(\lambda)=a\lambda^n+O(\lambda^{n-1}) \end{equation} as $\lambda\to+\infty$. \end{theorem}
\emph{Proof\ } Formula (\ref{theorem counting function unmollified one term formula}) follows from formula (\ref{theorem spectral function unmollified one term formula}) by integration over $M$; see also formula (\ref{definition of counting function}).~$\square$
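As a sanity check (our own illustration, not part of the proof), the one-term asymptotics of Theorem~\ref{theorem counting function unmollified one term} can be observed numerically in the simplest scalar example: for the first order operator $\sqrt{-\Delta}$ on the flat torus $\mathbb{R}^2/(2\pi\mathbb{Z})^2$ the eigenvalues are $\sqrt{j^2+k^2}$ with $(j,k)\in\mathbb{Z}^2$, so $N(\lambda)$ counts the nonzero integer lattice points in the closed disc of radius $\lambda$, and Weyl's law reads $N(\lambda)=\pi\lambda^2+O(\lambda)$, i.e.\ $n=2$ and $a=\pi$ in the notation above.

```python
import math

def N(lam):
    """Counting function of sqrt(-Delta) on the flat torus
    R^2 / (2*pi*Z)^2: number of eigenvalues sqrt(j^2 + k^2)
    lying in (0, lam], i.e. nonzero integer lattice points in
    the closed disc of radius lam."""
    r = int(lam)
    return sum(1 for j in range(-r, r + 1) for k in range(-r, r + 1)
               if (j, k) != (0, 0) and j * j + k * k <= lam * lam)

# Weyl's one-term law: N(lam) = pi*lam^2 + O(lam), so the relative
# deviation from pi*lam^2 should decay as lam grows.
for lam in (25, 50, 100):
    weyl = math.pi * lam ** 2
    print(lam, N(lam), round(abs(N(lam) - weyl) / weyl, 5))
```

For this particular example the remainder is in fact much smaller than $O(\lambda)$ (this is the Gauss circle problem), but the general bound $O(\lambda^{n-1})$ is all that the theorem asserts.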
\subsection{Two-term spectral asymptotics} \label{Two-term spectral asymptotics}
Up to now, in Section~\ref{Mollified spectral asymptotics} and subsection~\ref{One-term spectral asymptotics}, our strategy was to derive asymptotic formulae for the spectral function $e(\lambda,x,x)$ first and then obtain the corresponding asymptotic formulae for the counting function $N(\lambda)$ by integration over $M$. Such an approach will not work for two-term asymptotics because the geometric conditions required for the existence of two-term asymptotics of $e(\lambda,x,x)$ and $N(\lambda)$ are different: for $e(\lambda,x,x)$ the appropriate geometric conditions will be formulated in terms of \emph{loops}, whereas for $N(\lambda)$ they will be formulated in terms of \emph{periodic trajectories}.
Hence, in this subsection we deal with the spectral function $e(\lambda,x,x)$ and the counting function $N(\lambda)$ separately.
In what follows the point $y\in M$ is assumed to be fixed.
Denote by $\Pi_y^{(j)}$ the set of normalised ($h^{(j)}(y,\eta)=1$) covectors $\eta$ which serve as starting points for loops generated by the Hamiltonian $h^{(j)}$. Here ``starting point'' refers to the starting point of a Hamiltonian trajectory $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ moving forward in time ($t>0$), see also Remark~\ref{remark on reversibility}.
The reason we are not interested in large negative $t$ is that the refined Fourier Tauberian theorem we will be applying, Theorem~B.5.1 from \cite{mybook}, does not require information regarding large negative $t$. The underlying reason for the latter is that the function we are studying, $e(\lambda,x,x)$ (and, later, $N(\lambda)$), is real-valued. The real-valuedness of the function $e(\lambda,x,x)$ implies that its Fourier transform, $\hat e(t,x,x)$, possesses the symmetry $\hat e(-t,x,x)=\overline{\hat e(t,x,x)}$.
The set $\Pi_y^{(j)}$ is a subset of the $(n-1)$-dimensional unit cosphere $(S^*_yM)^{(j)}$ and the latter is equipped with a natural Lebesgue measure, see proof of Lemma~\ref{Singularity of the wave group at time zero lemma}. It is known, see Lemma 1.8.2 in \cite{mybook}, that the set $\Pi_y^{(j)}$ is measurable.
\begin{definition} \label{definition of nonfocal point 1} A point $y\in M$ is said to be \emph{nonfocal} if for each $j=1,\ldots,m^+$ the set $\Pi_y^{(j)}$ has measure zero. \end{definition}
With regard to the range of the index $j$ in Definition~\ref{definition of nonfocal point 1}, as well as in sub\-sequent Definitions~\ref{definition of nonfocal point 2}--\ref{definition of nonperiodicity condition 2}, see Remark~\ref{remark on negative Hamiltonians}.
We call a loop of length $T>0$ \emph{absolutely focused} if the function \[
|x^{(j)}(T;y,\eta)-y|^2 \] has an infinite order zero in the variable $\eta$, and we denote by $(\Pi_y^a)^{(j)}$ the set of normalised ($h^{(j)}(y,\eta)=1$) covectors $\eta$ which serve as starting points for absolutely focused loops generated by the Hamiltonian $h^{(j)}$. It is known, see Lemma 1.8.3 in \cite{mybook}, that the set $(\Pi_y^a)^{(j)}$ is measurable and, moreover, the set $\Pi_y^{(j)}\setminus(\Pi_y^a)^{(j)}$ has measure zero. This allows us to reformulate Definition~\ref{definition of nonfocal point 1} as follows.
\begin{definition} \label{definition of nonfocal point 2} A point $y\in M$ is said to be \emph{nonfocal} if for each $j=1,\ldots,m^+$ the set $(\Pi_y^a)^{(j)}$ has measure zero. \end{definition}
In practical applications it is easier to work with Definition~\ref{definition of nonfocal point 2} because the set $(\Pi_y^a)^{(j)}$ is usually much thinner than the set $\Pi_y^{(j)}$.
In order to derive a two-term asymptotic formula for the spectral function $e(\lambda,x,x)$ we need the following lemma (compare with Lemma~\ref{Singularity of the wave group at time zero lemma}).
\begin{lemma} \label{Singularity of the wave group at time nonzero lemma pointwise} Suppose that the point $y\in M$ is nonfocal. Then for any complex-valued function $\hat\gamma\in C_0^\infty(\mathbb{R})$ with $\operatorname{supp}\hat\gamma\subset(0,+\infty)$ we have \begin{equation} \label{Singularity of the wave group at time nonzero lemma pointwise formula} \mathcal{F}^{-1}_{t\to\lambda}[\hat\gamma(t)\operatorname{tr}u(t,y,y)]= o(\lambda^{n-1}) \end{equation} as $\lambda\to+\infty$. \end{lemma}
\emph{Proof\ } The result in question is a special case of Theorem~4.4.9 from \cite{mybook}.~$\square$
\
The following theorem is our main result regarding the spectral function $e(\lambda,x,x)$.
\begin{theorem} \label{theorem spectral function unmollified two term} If the point $x\in M$ is nonfocal then the spectral function $e(\lambda,x,x)$ admits the two-term asymptotic expansion (\ref{two-term asymptotic formula for spectral function}) as $\lambda\to+\infty$. \end{theorem}
\emph{Proof\ } The result in question is an immediate consequence of formulae (\ref{theorem spectral function mollified equation 7}), Theorem~\ref{theorem spectral function mollified} and Lemma~\ref{Singularity of the wave group at time nonzero lemma pointwise} from the current paper and Theorem~B.5.1 from \cite{mybook}.~$\square$
\
We now deal with the counting function $N(\lambda)$.
Suppose that we have a Hamiltonian trajectory $(x^{(j)}(t;y,\eta),\xi^{(j)}(t;y,\eta))$ and a real number $T>0$ such that $(x^{(j)}(T;y,\eta),\xi^{(j)}(T;y,\eta))=(y,\eta)$. We will say in this case that we have a $T$-periodic trajectory originating from the point $(y,\eta)\in T'M$.
Denote by $(S^*M)^{(j)}$ the unit cosphere bundle, i.e.~the $(2n-1)$-dimensional surface in the cotangent bundle defined by the equation $h^{(j)}(y,\eta)=1$. The unit cosphere bundle is equipped with a natural Lebesgue measure: the $(2n-1)$-dimensional surface area element on $(S^*M)^{(j)}$ is $dy\,d(S^*_yM)^{(j)}$ where $d(S^*_yM)^{(j)}$ is the $(n-1)$-dimensional surface area element on the unit cosphere $(S^*_yM)^{(j)}$, see proof of Lemma~\ref{Singularity of the wave group at time zero lemma}.
Denote by $\Pi^{(j)}$ the set of points in $(S^*M)^{(j)}$ which serve as starting points for periodic trajectories generated by the Hamiltonian $h^{(j)}$. It is known, see Lemma 1.3.4 in \cite{mybook}, that the set $\Pi^{(j)}$ is measurable.
\begin{definition} \label{definition of nonperiodicity condition 1} We say that the nonperiodicity condition is fulfilled if for each $j=1,\ldots,m^+$ the set $\Pi^{(j)}$ has measure zero. \end{definition}
We call a $T$-periodic trajectory \emph{absolutely periodic} if the function \[
|x^{(j)}(T;y,\eta)-y|^2+|\xi^{(j)}(T;y,\eta)-\eta|^2 \] has an infinite order zero in the variables $(y,\eta)$, and we denote by $(\Pi^a)^{(j)}$ the set of points in $(S^*M)^{(j)}$ which serve as starting points for absolutely periodic trajectories generated by the Hamiltonian $h^{(j)}$. It is known, see Corollary 1.3.6 in \cite{mybook}, that the set $(\Pi^a)^{(j)}$ is measurable and, moreover, the set $\Pi^{(j)}\setminus(\Pi^a)^{(j)}$ has measure zero. This allows us to reformulate Definition~\ref{definition of nonperiodicity condition 1} as follows.
\begin{definition} \label{definition of nonperiodicity condition 2} We say that the nonperiodicity condition is fulfilled if for each $j=1,\ldots,m^+$ the set $(\Pi^a)^{(j)}$ has measure zero. \end{definition}
In practical applications it is easier to work with Definition~\ref{definition of nonperiodicity condition 2} because the set $(\Pi^a)^{(j)}$ is usually much thinner than the set $\Pi^{(j)}$.
In order to derive a two-term asymptotic formula for the counting function $N(\lambda)$ we need the following lemma.
\begin{lemma} \label{Singularity of the wave group at time nonzero lemma integrated} Suppose that the nonperiodicity condition is fulfilled. Then for any complex-valued function $\hat\gamma\in C_0^\infty(\mathbb{R})$ with $\operatorname{supp}\hat\gamma\subset(0,+\infty)$ we have \begin{equation} \label{Singularity of the wave group at time nonzero lemma integrated formula} \int_M \mathcal{F}^{-1}_{t\to\lambda}[\hat\gamma(t)\operatorname{tr}u(t,y,y)]\,dy= o(\lambda^{n-1}) \end{equation} as $\lambda\to+\infty$. \end{lemma}
\emph{Proof\ } The result in question is a special case of Theorem~4.4.1 from \cite{mybook}.~$\square$
\
The following theorem is our main result regarding the counting function $N(\lambda)$.
\begin{theorem} \label{theorem counting function unmollified two term} If the nonperiodicity condition is fulfilled then the counting function $N(\lambda)$ admits the two-term asymptotic expansion (\ref{two-term asymptotic formula for counting function}) as $\lambda\to+\infty$. \end{theorem}
\emph{Proof\ } The result in question is an immediate consequence of formulae (\ref{definition of counting function}), (\ref{theorem spectral function mollified equation 7}), Theorem~\ref{theorem spectral function mollified} and Lemma~\ref{Singularity of the wave group at time nonzero lemma integrated} from the current paper and Theorem~B.5.1 from \cite{mybook}.~$\square$
\section{$\mathrm{U}(m)$ invariance}
\label{U(m) invariance}
We prove in this section that the RHS of formula (\ref{formula for b(x)}) is invariant under unitary transformations (\ref{unitary transformation of operator A}), (\ref{matrix appearing in unitary transformation of operator}) of our operator $A$. The arguments presented in this section bear some similarity to those from Section~\ref{U(1) connection}, the main difference being that the unitary matrix-function in question is now a function on the base manifold~$M$ rather than on $T'M$.
Fix a point $x\in M$ and an index $j$ (index enumerating the eigenvalues and eigenvectors of the principal symbol) and consider the expression \begin{multline} \label{formula for bj(x)} \int\limits_{h^{(j)}(x,\xi)<1} \biggl( [v^{(j)}]^*A_\mathrm{sub}v^{(j)} \\ -\frac i2 \bigl\{ [v^{(j)}]^*,A_1-h^{(j)},v^{(j)} \bigr\} +\frac i{n-1}h^{(j)}\bigl\{[v^{(j)}]^*,v^{(j)}\bigr\} \biggr)(x,\xi)\, d\xi\,, \end{multline} compare with (\ref{formula for b(x)}). We will show that this expression is invariant under the transformation (\ref{unitary transformation of operator A}), (\ref{matrix appearing in unitary transformation of operator}).
The transformation (\ref{unitary transformation of operator A}), (\ref{matrix appearing in unitary transformation of operator}) induces the following transformation of the principal and subprincipal symbols of the operator $A$: \begin{equation} \label{transformation of the principal symbol} A_1\mapsto RA_1R^*, \end{equation} \begin{equation} \label{transformation of the subprincipal symbol} A_\mathrm{sub}\mapsto RA_\mathrm{sub}R^* +\frac i2 \left( R_{x^\alpha}(A_1)_{\xi_\alpha}R^* - R(A_1)_{\xi_\alpha}R^*_{x^\alpha} \right). \end{equation} The eigenvalues of the principal symbol remain unchanged, whereas the eigen\-vectors transform as \begin{equation} \label{transformation of the eigenvectors of the principal symbol} v^{(j)}\mapsto Rv^{(j)}. \end{equation} Substituting formulae (\ref{transformation of the principal symbol})--(\ref{transformation of the eigenvectors of the principal symbol}) into the RHS of (\ref{formula for bj(x)}) we conclude that the increment of the expression (\ref{formula for bj(x)}) is \begin{multline*} \int\limits_{h^{(j)}(x,\xi)<1} \biggl(\, \frac i2[v^{(j)}]^* \left( R^*R_{x^\alpha}(A_1)_{\xi_\alpha}-(A_1)_{\xi_\alpha}R^*_{x^\alpha}R \right) v^{(j)} \\ - \frac i2 \left( [v^{(j)}]^*R^*_{x^\alpha}R(A_1-h^{(j)})v^{(j)}_{\xi_\alpha} - [v^{(j)}_{\xi_\alpha}]^*(A_1-h^{(j)})R^*R_{x^\alpha}v^{(j)} \right) \\ + \frac i{n-1}h^{(j)} \left( [v^{(j)}]^*R^*_{x^\alpha}Rv^{(j)}_{\xi_\alpha} - [v^{(j)}_{\xi_\alpha}]^*R^*R_{x^\alpha}v^{(j)} \right) \biggr)(x,\xi)\, d\xi\,, \end{multline*} which can be rewritten as \begin{multline*}
-\frac i2\int\limits_{h^{(j)}(x,\xi)<1} \biggl( h^{(j)}_{\xi_\alpha} \left( [v^{(j)}]^*R^*_{x^\alpha}Rv^{(j)} - [v^{(j)}]^*R^*R_{x^\alpha}v^{(j)} \right) \\ -\frac 2{n-1}h^{(j)} \left( [v^{(j)}]^*R^*_{x^\alpha}Rv^{(j)}_{\xi_\alpha} - [v^{(j)}_{\xi_\alpha}]^*R^*R_{x^\alpha}v^{(j)} \right) \biggr)(x,\xi)\, d\xi\,. \end{multline*} In view of the identity $R^*R=I$ the above expression can be further simplified, so that it reads now \begin{multline} \label{increment of bj(x) first iteration} i\int\limits_{h^{(j)}(x,\xi)<1} \biggl( h^{(j)}_{\xi_\alpha}[v^{(j)}]^*R^*R_{x^\alpha}v^{(j)} \\ -\frac1{n-1}h^{(j)} \left( [v^{(j)}]^*R^*R_{x^\alpha}v^{(j)}_{\xi_\alpha} + [v^{(j)}_{\xi_\alpha}]^*R^*R_{x^\alpha}v^{(j)} \right) \biggr)(x,\xi)\, d\xi\,. \end{multline}
Denote $B_\alpha(x):=-iR^*R_{x^\alpha}$ and observe that these matrices, enumerated by the tensor index $\alpha$ running through the values $1,\ldots,n$, are Hermitian. Indeed, differentiating the identity $R^*R=I$ gives $R^*_{x^\alpha}R+R^*R_{x^\alpha}=0$, which implies $B_\alpha^*=B_\alpha$.
Denote also $b_\alpha(x,\xi):=[v^{(j)}]^*B_\alpha v^{(j)}$ and observe that these $b_\alpha$ are positively homogeneous in $\xi$ of degree 0. Then the expression (\ref{increment of bj(x) first iteration}) can be rewritten as \begin{equation*} \label{increment of bj(x) second iteration} - \int\limits_{h^{(j)}(x,\xi)<1} \left( h^{(j)}_{\xi_\alpha}\,b_\alpha -\frac 1{n-1}\,h^{(j)}\,\frac{\partial b_\alpha}{\partial\xi_\alpha} \right)\!(x,\xi)\, d\xi\,. \end{equation*} Lemma 4.1.4 and formula (1.1.15) from \cite{mybook} tell us that this expression is zero.
\section{Spectral asymmetry} \label{Spectral asymmetry}
In this section we deal with the special case when the operator $A$ is differential (as opposed to pseudodifferential). Our aim is to examine what happens when we change the sign of the operator. In other words, we compare the original operator $A$ with the operator $\tilde A:=-A$. In theoretical physics the transformation $A\mapsto-A$ would be interpreted as time reversal, see equation (\ref{dynamic equation most basic}).
It is easy to see that for a differential operator the number $m$ (number of equations in our system) has to be even and that the principal symbol has to have the same number of positive and negative eigenvalues. In the notation of Section~\ref{Main results} this fact can be expressed as $m=2m^+=2m^-$.
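The evenness of $m$ admits a short justification by a symbol computation (a sketch, not spelled out in the original text):

```latex
% For a first order differential operator the principal symbol
% A_1(x,\xi) = A^\alpha(x)\,\xi_\alpha is linear in \xi, hence odd:
\[
A_1(x,-\xi) = -A_1(x,\xi).
\]
% Consequently the eigenvalues of A_1(x,-\xi) are the negatives of
% those of A_1(x,\xi). By ellipticity none of the eigenvalues vanish
% for \xi \ne 0, and the numbers m^+ and m^- of positive and negative
% eigenvalues are constant, so replacing \xi by -\xi swaps m^+ and
% m^-, giving
\[
m^+ = m^-, \qquad m = m^+ + m^- = 2m^+ = 2m^- .
\]
```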
It is also easy to see that the principal symbols of the two operators, $A$ and $\tilde A$, and the eigenvalues and eigenvectors of the principal symbols are related as \begin{equation} \label{Spectral asymmetry equation 1} A_1(x,\xi)=\tilde A_1(x,-\xi), \end{equation} \begin{equation} \label{Spectral asymmetry equation 2} h^{(j)}(x,\xi)=\tilde h^{(j)}(x,-\xi), \end{equation} \begin{equation} \label{Spectral asymmetry equation 3} v^{(j)}(x,\xi)=\tilde v^{(j)}(x,-\xi), \end{equation} whereas the subprincipal symbols are related as \begin{equation} \label{Spectral asymmetry equation 4} A_\mathrm{sub}(x)=-\tilde A_\mathrm{sub}(x). \end{equation}
Formulae (\ref{formula for a(x)}), (\ref{formula for b(x)}), (\ref{generalised Poisson bracket on matrix-functions}), (\ref{Poisson bracket on matrix-functions}) and (\ref{Spectral asymmetry equation 1})--(\ref{Spectral asymmetry equation 4}) imply \begin{equation} \label{Spectral asymmetry equation 5} a(x)=\tilde a(x), \qquad b(x)=-\tilde b(x). \end{equation} Substituting (\ref{Spectral asymmetry equation 5}) into (\ref{a via a(x)}) and (\ref{b via b(x)}) we get \begin{equation} \label{Spectral asymmetry equation 6} a=\tilde a, \qquad b=-\tilde b. \end{equation}
Formulae (\ref{two-term asymptotic formula for counting function}) and (\ref{Spectral asymmetry equation 6}) imply that the spectrum of a generic first order differential operator is asymmetric about $\lambda=0$. This phenomenon is known as \emph{spectral asymmetry} \cite{atiyah_short_paper,atiyah_part_1,atiyah_part_2,atiyah_part_3}.
If we square our operator $A$ and consider the spectral problem $A^2v=\lambda^2v$, then the terms $\pm b\lambda^{n-1}$ cancel out and the second asymptotic coefficient of the counting function (as well as the spectral function) of the operator $A^2$ turns to zero. This is in agreement with the known fact that for an even order semi-bounded matrix differential operator acting on a manifold without boundary the second asymptotic coefficient of the counting function is zero, see Section 6 of \cite{VassilievFuncAn1984} and \cite{SafarovIzv1989}.
\section{Bibliographic review} \label{Bibliographic review}
To our knowledge, the first publication on the subject of two-term spectral asymptotics for systems was Ivrii's 1980 paper \cite{IvriiDoklady1980} in Section 2 of which the author stated, without proof, a formula for the second asymptotic coefficient of the counting function. In a subsequent 1982 paper \cite{IvriiFuncAn1982} Ivrii acknowledged that the formula from \cite{IvriiDoklady1980} was incorrect and gave a new formula, labelled (0.6), followed by a ``proof''. In his 1984 Springer Lecture Notes \cite{ivrii_springer_lecture_notes} Ivrii acknowledged on page 226 that both his previous formulae for the second asymptotic coefficient were incorrect and stated, without proof, yet another formula.
Roughly at the same time Rozenblyum \cite{grisha} also stated a formula for the second asymptotic coefficient of the counting function of a first order system.
The formulae from \cite{IvriiDoklady1980}, \cite{IvriiFuncAn1982} and \cite{grisha} are fundamentally flawed because they are proportional to the subprincipal symbol. As our formulae (\ref{b via b(x)}) and (\ref{formula for b(x)}) show, the second asymptotic coefficient of the counting function may be nonzero even when the subprincipal symbol is zero. This illustrates, yet again, the difference between scalar operators and systems.
The formula on page 226 of \cite{ivrii_springer_lecture_notes} gives an algorithm for the calculation of the correction term designed to take account of the effect described in the previous paragraph. This algorithm requires the evaluation of a limit of a complicated expression involving the integral, over the cotangent bundle, of the trace of the symbol of the resolvent of the operator $A$ constructed by means of pseudodifferential calculus. This algorithm was revisited in Ivrii's 1998 book, see formulae (4.3.39) and (4.2.25) in \cite{ivrii_book}.
The next contributor to the subject was Safarov who, in his 1989 DSc Thesis~\cite{SafarovDSc}, wrote down a formula for the second asymptotic coefficient of the counting function which was ``almost'' correct. This formula appears in \cite{SafarovDSc} as formula (2.4). As explained in Section~\ref{Main results}, Safarov lost only the curvature terms $\,-\frac{ni}{n-1}\int h^{(j)}\{[v^{(j)}]^*,v^{(j)}\}$. Safarov's DSc Thesis \cite{SafarovDSc} provides sufficiently detailed arguments, and we were able to identify the precise point (page 163) at which the mistake occurred.
In 1998 Nicoll rederived \cite{NicollPhD} Safarov's formula (\ref{formula for principal symbol of oscillatory integral}) for the principal symbols of the propagator, using a method slightly different from \cite{SafarovDSc}, but stopped short of calculating the second asymptotic coefficient of the counting function.
In 2007 Kamotski and Ruzhansky \cite{kamotski} performed an analysis of the propagator of a first order elliptic system based on the approach of Rozenblyum \cite{grisha}, but stopped short of calculating the second asymptotic coefficient of the counting function.
One of the authors of this paper, Vassiliev, considered systems in Section 6 of his 1984 paper \cite{VassilievFuncAn1984}. However, that paper dealt with systems of a very special type: differential (as opposed to pseudodifferential) and of even (as opposed to odd) order.
In this case the second asymptotic coefficients of the counting function and the spectral function vanish, provided the manifold does not have a boundary.
\end{document}
\begin{document}
\begin{frontmatter}
\title{An identity for the odd double factorial\tnoteref{label1}} \tnotetext[label1]{This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.\\ 2010 Mathematics Subject Classification 05A10}
\author{Saud Hussein} \address{Institute of Mathematics, Academia Sinica, 6F, Astronomy-Mathematics Building, No.1, Sec.4, Roosevelt Road, Taipei 10617, Taiwan}
\ead{[email protected]}
\begin{abstract} The ordinary factorial may be written in terms of the Stirling numbers of the second kind, as shown by Quaintance and Gould, and the odd double factorial in terms of the Stirling numbers of the first kind, as shown by Callan. During the preparation of an expository paper on Wolstenholme's Theorem, we discovered an expression for the odd double factorial using the Stirling numbers of the second kind. This appears to be the first such identity involving both positive and negative integers, and the result is outlined here. \end{abstract}
\begin{keyword} Odd double factorial \end{keyword}
\end{frontmatter}
\section{Odd double factorial}
The \textit{Stirling numbers of the first kind and second kind}, denoted $s(n,k)$ and $S(n,k)$ respectively, are characterized by \begin{align} \sum_{k=0}^n s(n,k)x^k = n!\binom{x}{n}, \quad x \in \mathbb{R}, \label{first} \end{align} \begin{align}\sum_{k=0}^n k!\binom{x}{k}S(n,k) = x^n, \quad x \in \mathbb{R}, \label{second} \end{align} and we have an explicit formula \cite[equation 13.32]{Gould} relating the two sets of numbers, \[s(n,n-k) = \sum_{j=0}^k(-1)^j\binom{n+j-1}{k+j}\binom{n+k}{k-j}S(j+k,j).\] Also, \begin{align}(-1)^ks(n,n-k) &= \sum_{j=0}^k(-1)^{j+k}\binom{n+j-1}{k+j}\binom{n+k}{k-j}S(j+k,j) \label{third}\\ &= \frac{(n+k)!}{(2k)!(n-k-1)!}\sum_{j=0}^k(-1)^{j+k}\binom{2k}{k-j}\frac{S(j+k,j)}{n+j} \nonumber\\ &= \frac{1}{(2k)!}\sum_{j=0}^k(-1)^{j+k}\binom{2k}{k-j}\frac{C(n,k)}{n+j}S(j+k,j) \nonumber \end{align} with $C(n,k) = (n-k)(n-k+1)\cdots (n+k)$. For fixed integer $k\geq 0$, since $(-1)^ks(n,n-k)$ is a polynomial in $n$ over the rationals with leading coefficient $\frac{(2k-1)!!}{(2k)!}$ by \cite[proposition 1.1]{Gessel} and $C(n,k)$ is a monic polynomial in $n$, it follows that \begin{align} (2k-1)!! = \sum_{j=0}^k(-1)^{j+k}\binom{2k}{k-j}S(j+k,j). \label{fourth}\end{align}
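Identity \eqref{fourth} is easy to test by machine. The following Python sketch (ours, for illustration only; the function names are our own) computes $S(n,k)$ from the standard recurrence $S(n,k)=k\,S(n-1,k)+S(n-1,k-1)$ and checks \eqref{fourth} for small $k$.

```python
from math import comb

def S(n, k):
    """Stirling numbers of the second kind via the recurrence
    S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def odd_double_factorial(k):
    """(2k-1)!! = 1*3*5*...*(2k-1), with (-1)!! = 1."""
    out = 1
    for m in range(1, 2 * k, 2):
        out *= m
    return out

def rhs(k):
    """Right-hand side of identity (4)."""
    return sum((-1) ** (j + k) * comb(2 * k, k - j) * S(j + k, j)
               for j in range(k + 1))

for k in range(8):
    assert rhs(k) == odd_double_factorial(k)
```

For instance, $k=2$ gives $(2\cdot2-1)!!=3$ as $-\binom{4}{1}S(3,1)+\binom{4}{0}S(4,2)=-4+7$.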
Compare this result with \[k! = \sum_{j=0}^k(-1)^{j+k}\binom{2k+1}{k-j}S(j+k,j),\] found by setting $n=k+1$ in \eqref{third}. See \cite[equation 14.34]{Gould}. From a survey of identities for the double factorial, we also have \[(2k-1)!! = \sum_{j=0}^k(-2)^{k-j}s(k,j).\] See \cite[identity 5.3]{Callan}. This follows easily by setting $x = -\frac{1}{2}$ in \eqref{first}. Notice the summation is over non-negative integers, whereas identity \eqref{fourth} involves both negative and positive integers. Setting $x=-\frac{1}{2}$ in \eqref{second} gives us \[1=\sum_{k=0}^n(-2)^{n-k}S(n,k)(2k-1)!!,\] an interesting expression but not one that allows us to solve for the odd double factorial as we do in \eqref{fourth}. Identity \eqref{fourth} was initially discovered during the preparation of an expository paper \cite{Hussein} on Wolstenholme's Theorem.
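The other identities quoted in this section can be verified the same way. The sketch below (again ours, not from the paper) implements the signed Stirling numbers of the first kind via the recurrence $s(n,k)=s(n-1,k-1)-(n-1)\,s(n-1,k)$ and checks Callan's identity, the formula for $k!$, and the $x=-\frac{1}{2}$ specialization of \eqref{second}.

```python
from math import comb, factorial

def s1(n, k):
    """Signed Stirling numbers of the first kind:
    s(n,k) = s(n-1,k-1) - (n-1)*s(n-1,k)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

def S2(n, k):
    """Stirling numbers of the second kind:
    S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return k * S2(n - 1, k) + S2(n - 1, k - 1)

def odf(k):
    """(2k-1)!! = 1*3*...*(2k-1), with (-1)!! = 1."""
    out = 1
    for m in range(1, 2 * k, 2):
        out *= m
    return out

for k in range(8):
    # Callan's identity 5.3: (2k-1)!! = sum_j (-2)^(k-j) s(k,j)
    assert odf(k) == sum((-2) ** (k - j) * s1(k, j) for j in range(k + 1))
    # k! = sum_j (-1)^(j+k) binom(2k+1, k-j) S(j+k, j)
    assert factorial(k) == sum((-1) ** (j + k) * comb(2 * k + 1, k - j)
                               * S2(j + k, j) for j in range(k + 1))

for n in range(8):
    # x = -1/2 in (2): 1 = sum_k (-2)^(n-k) S(n,k) (2k-1)!!
    assert 1 == sum((-2) ** (n - k) * S2(n, k) * odf(k) for k in range(n + 1))
```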
\end{document}
\begin{document}
\title{Inverting log blow-ups in log geometry} \author{Doosung Park} \address{Department of Mathematics and Informatics, University of Wuppertal, Germany} \email{[email protected]} \subjclass[2020]{14A21} \keywords{log schemes, log blow-ups, divided log spaces} \date{\today} \begin{abstract} In the category of log schemes, it is unclear how to define the blow-ups for non-strict closed immersions. In this article, we introduce the notion of divided log spaces. We obtain the category of divided log spaces by locally inverting log blow-ups in the category of log schemes. We show that blow-ups exist for closed immersions of log smooth divided log spaces. This is an ingredient of the motivic six-functor formalism for log schemes. \end{abstract} \maketitle
\section{Introduction}
For a regular embedding $Z\to X$ of schemes, the deformation to the normal cone construction, denoted $\Deform_Z X$, plays a central role in intersection theory. For example, if $X$ is a smooth scheme over a field $k$, Fulton-MacPherson \cite{zbMATH01027930} used this construction for the diagonal morphism $X\to X\times_k X$ to define the intersection product.
The notion of log schemes, introduced by Fontaine-Illusie and further developed by Kato \cite{zbMATH00125808}, can be thought of as the notion of ``schemes with boundaries''. The extra structure of boundaries is helpful for compactification and degeneration problems in algebraic geometry. For example, log geometry has been applied to compactifying moduli spaces and to the proof of the $C_{st}$-conjecture in $p$-adic Hodge theory.
To develop intersection theory or motivic homotopy theory of log schemes, the deformation to the normal cone construction for log schemes is desirable. However, there is a technical difficulty: If $X$ is a log smooth fs log scheme over a field $k$ whose log structure is nontrivial, then the diagonal morphism $X\to X\times_k X$ is a closed immersion that is not strict. It is unclear how to define the blow-ups for non-strict closed immersions in the category of fs log schemes because a non-strict closed immersion is not defined by a sheaf of ideals. Hence the construction of $\Deform_X(X\times_k X)$ in this category is unclear.
The purpose of this article is to introduce the notion of \emph{divided log spaces}. The category of divided log spaces $\lSpc$ is obtained by locally inverting log blow-ups in the category of fs log schemes $\lSch$. More precisely, we consider a full subcategory $\lFan$ of $\lSch$ in Definition \ref{lFan.1} such that globally inverting log blow-ups in $\lFan$ is reasonable. Then we define a divided space in Definition \ref{equiv.5} as a presheaf on $\lFan$ satisfying certain conditions whose formulation is similar to that of algebraic spaces. Any morphism of divided log spaces behaves like an exact morphism. In particular, a non-strict closed immersion of fs log schemes becomes a strict closed immersion of divided log spaces.
For applications, we construct the open complements (resp.\ blow-ups) for non-strict closed immersions $Z\to X$ of fs log schemes (resp.\ log smooth fs log schemes) in the category of divided log spaces. This allows us to construct $\Deform_Z X$, which is a crucial ingredient in the author's forthcoming articles \cite{logGysin} and \cite{logsix}.
\subsection*{Organization of the article.}
In Section \ref{dZar}, we explain the three types of covers in log geometry: dividing covers, Zariski covers, and dividing Zariski covers. The associated topologies are called the dividing, Zariski, and dividing Zariski topologies. We study several properties of dividing Zariski sheaves.
In Section \ref{rep}, we consider properties of representable morphisms of sheaves. For example, we define representable strict closed immersions, representable log smooth morphisms, and representable Zariski covers of sheaves. We also provide various basic lemmas about representable morphisms, which are used in later sections.
The definition of divided log spaces appears in Section \ref{divspace}. A sheaf $\cX$ is called a divided log space if the diagonal morphism is a representable strict closed immersion, and if there exists a representable Zariski cover $\cY\to \cX$ such that $\cY$ is representable. Properties of representable morphisms of sheaves can be restricted to divided log spaces.
The purpose of Section \ref{equiv} is to glue divided log spaces. Our method for this is to introduce the notion of Zariski equivalence relations, which is an analog of \'etale equivalence relations in the theory of algebraic spaces.
In Section \ref{property}, we explain properties of morphisms of divided log spaces, which do not need to be representable. For example, we define closed immersions, log smooth morphisms, and Zariski covers of divided log spaces. We show that closed immersions of divided log spaces are representable strict closed immersions.
In Section \ref{topology}, we introduce several topologies on the category of divided log spaces. We also compare sheaves on the category of divided log spaces and sheaves on the category of fs log schemes.
In Sections \ref{complement} and \ref{blow-up}, we define the open complements and blow-ups of closed immersions of divided log spaces using universal properties. The open complements always exist, but we only show the existence of the blow-ups in the case of closed immersions of log smooth divided log spaces. We combine these two notions for the deformation to the normal cone construction. We also show that the open complements are closed under pullbacks and the blow-ups are closed under pullbacks along log smooth morphisms.
In the appendices, we collect several results in log geometry.
\subsection*{Related work} Kato introduced \emph{algebraic valuative log spaces} in \cite{Katoval}, and this notion likewise serves to locally invert log blow-ups. One technical advantage of our divided log spaces is that the definition resembles that of algebraic spaces. Hence we can imitate many proofs in the literature on algebraic spaces to develop our theory.
Kato gave in \cite[Proposition 1.4.2]{Katoval} a description of the Hom group in the category of algebraic valuative log spaces, whose proof was left to the reader. Assuming this, we expect that there is a fully faithful functor from the category of divided log spaces into the category of algebraic valuative log spaces, since we have a similar result in Proposition \ref{div.5}. We leave the investigation of this comparison to the interested reader.
\subsection*{Notation}
Throughout this article, we fix a noetherian fs log scheme $B$ of finite Krull dimension. Our standard reference for the notation and terminology in log geometry is Ogus's book \cite{Ogu}.
The coproduct $M\oplus_P N$ of saturated monoids is taken in the category of saturated monoids, and the fiber product $X\times_S Y$ of saturated log schemes is taken in the category of saturated log schemes.
\section{Dividing Zariski topology} \label{dZar} We want a topology that is finer than the Zariski topology and does the job of locally inverting log blow-ups in the category of sheaves. The dividing Zariski topology defined below is suited for this.
\begin{definition} \label{mono.1} Let $f\colon Y\to X$ be a quasi-compact morphism of fs log schemes. \begin{enumerate} \item[\textup{(1)}] We say that $f$ is a \emph{dividing cover} if $f$ is a universally surjective proper log \'etale monomorphism. \item[\textup{(2)}] We say that $f$ is a \emph{Zariski cover} if $f$ is surjective and of the form $\amalg_{i\in I}Y_i\to X$ with finite $I$ such that each $Y_i\to X$ is an open immersion. \item[\textup{(3)}] We say that $f$ is a \emph{dividing Zariski cover} if $f$ is universally surjective and of the form $\amalg_{i\in I}Y_i\to X$ with finite $I$ such that each $Y_i\to X$ is a log \'etale monomorphism. \end{enumerate} \end{definition}
Recall that a morphism $f\colon Y\to X$ in a category with fiber products is a monomorphism if and only if the diagonal morphism $Y\to Y\times_X Y$ is an isomorphism.
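Indeed, if $f$ is a monomorphism, then the two projections $p_1,p_2\colon Y\times_X Y\to Y$ satisfy $fp_1=fp_2$, hence $p_1=p_2$, and the diagonal is a two-sided inverse of $p_1$. Conversely, if the diagonal $\Delta$ is an isomorphism, then $p_1=p_2$ since $p_1\Delta=p_2\Delta=\id$, so any $a,b\colon Z\to Y$ with $fa=fb$ satisfy
\[
a=p_1\circ (a,b)=p_2\circ (a,b)=b.
\]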
\begin{definition} \label{mono.2} Let $\{Y_i\to X\}_{i\in I}$ be a family of quasi-compact morphisms of fs log schemes with finite $I$. \begin{enumerate} \item[\textup{(1)}] The family is called a \emph{Zariski covering family} if $\amalg_{i\in I}Y_i\to X$ is a Zariski cover. \item[\textup{(2)}] The family is called a \emph{dividing Zariski covering family} if $\amalg_{i\in I}Y_i\to X$ is a dividing Zariski cover. \end{enumerate} \end{definition}
Every dividing cover is a dividing Zariski cover. Every pullback of a dividing (resp.\ dividing Zariski) cover is again a dividing (resp.\ dividing Zariski) cover. Every composition of dividing (resp.\ dividing Zariski) covers is again a dividing (resp.\ dividing Zariski) cover.
\begin{definition} The topology on the category of quasi-compact fs log schemes generated by dividing (resp.\ dividing Zariski) covers is called the \emph{dividing} (resp.\ \emph{dividing Zariski}) \emph{topology}. Let $div$ (resp.\ $dZar$) be the shorthand for the dividing (resp.\ dividing Zariski) topology. \end{definition}
\begin{definition} We refer to \cite[Definition II.1.9.2]{Ogu} for the definition of the category of fans. For a fan $\Sigma$, let $\T_{\Sigma}$ be the fs log scheme whose underlying scheme is the toric variety over $\Spec(\Z)$ associated with $\Sigma$ and whose log structure is the compactifying log structure \cite[Definition III.1.6.1]{Ogu} associated with the open immersion from the torus $\G_m^n$, where $n$ is the rank of $\Sigma$.
Every morphism of fans $\theta\colon \Delta\to \Sigma$ induces a morphism of fs log schemes $\T_{\theta}\colon \T_\Delta\to \T_\Sigma$. We say that $\theta$ is a \emph{subdivision} if the associated homomorphism of lattices and the associated map of supports $\vert\Delta\vert\to \vert\Sigma\vert$ are isomorphisms. In this case, $\T_\theta$ is a proper birational morphism. \end{definition}
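For instance, let $\Sigma$ be the fan in $\Z^2$ consisting of the faces of $\mathrm{Cone}(e_1,e_2)$, so that $\T_\Sigma\simeq \A_{\N^2}$. The star subdivision $\Delta$ obtained by inserting the ray through $e_1+e_2$ has the maximal cones
\[
\mathrm{Cone}(e_1,e_1+e_2)
\quad\text{and}\quad
\mathrm{Cone}(e_1+e_2,e_2),
\]
and the induced morphism $\T_\Delta\to \T_\Sigma$ is the blow-up of the underlying affine plane at the origin; this is a standard example of a proper birational morphism induced by a subdivision.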
\begin{definition} \label{lFan.1} For an fs log scheme $X$, a \emph{fan chart of $X$} is a fan $\Sigma$ together with a strict morphism $X\to \T_\Sigma$.
Let $\lSch/B$ be the category of noetherian fs log schemes of finite Krull dimension over $B$. Let $\lFan/B$ be the full subcategory of the category $\lSch/B$ consisting of disjoint unions $\amalg_{i\in I} X_i$ with finite $I$ such that each $X_i$ admits a fan chart. The dividing topology and dividing Zariski topology can be restricted to $\lFan/B$.
Any fs log scheme admits a fan chart Zariski locally by \cite[Theorem III.1.2.7(1)]{Ogu}. Hence any $X\in \lSch/B$ admits a Zariski cover $Y\to X$ with $Y\in \lFan/B$. \end{definition}
\begin{definition} Let $f\colon Y\to X$ be a morphism of fs log schemes. A \emph{fan chart of $f$} is a triple $(\Sigma,\Delta,\theta\colon \Delta\to \Sigma)$ such that the induced diagram \[ \begin{tikzcd} Y\ar[d]\ar[r,"f"]& X\ar[d] \\ \T_\Delta\ar[r,"\T_\theta"]& \T_\Sigma \end{tikzcd} \] commutes, the vertical morphisms are strict, and $\theta$ is a morphism of fans. If $X$ has a fan chart, then $f$ has a fan chart Zariski locally on $Y$ by \cite[Theorem III.1.2.7(1)]{Ogu}. \end{definition}
\begin{example} Every log blow-up \cite[Definition III.2.6.2]{Ogu} is a dividing cover, see \cite[Example A.11.1]{logDM}.
Let $\theta \colon \Sigma'\to \Sigma$ be a subdivision of fans. We have isomorphisms \[ \T_{\Sigma'}\simeq \T_{\Sigma'\times_{\Sigma} \Sigma'}\simeq \T_{\Sigma'}\times_{\T_\Sigma}\T_{\Sigma'}, \] so the induced morphism $\T_\theta\colon \T_{\Sigma'}\to \T_\Sigma$ is a monomorphism. Zariski locally, $\T_\theta$ is of the form $\A_\eta\colon \A_Q\to \A_P$ for some injective homomorphism $\eta\colon P\to Q$ of fs monoids such that $\eta\colon P^\gp\to Q^\gp$ is an isomorphism. Hence \cite[Corollary IV.3.1.10]{Ogu} shows that $\T_\theta$ is log \'etale. Since $\theta$ is a subdivision, $\T_\theta$ is proper. As a consequence of \cite[Lemma A.11.4]{logDM}, $\T_\theta$ is universally surjective. Hence we have shown that $\T_\theta$ is a dividing cover.
If $X\to \T_\Sigma$ is any morphism of quasi-compact log schemes, then the projection $X\times_{\T_\Sigma}\T_{\Sigma'}\to X$ is a dividing cover too. \end{example}
\begin{proposition} \label{equiv.11} We have the following list of synonyms of morphisms of fs log schemes: \begin{enumerate} \item[\textup{(1)}] exact proper monomorphism $=$ strict closed immersion, \item[\textup{(2)}] exact log \'etale monomorphism $=$ open immersion, \item[\textup{(3)}] exact dividing cover $=$ isomorphism, \item[\textup{(4)}] exact dividing Zariski cover $=$ Zariski cover. \end{enumerate} \end{proposition} \begin{proof} (1) Consequence of Proposition \ref{equiv.32} and \cite[Corollaire IV.18.12.6]{EGA}.
(2) Consequence of Proposition \ref{equiv.32} and \cite[Th\'eor\`eme IV.17.9.1]{EGA}.
(3) Consequence of (1) and (2).
(4) Consequence of (2). \end{proof}
The case of log \'etale monomorphisms in the result below is \cite[Proposition A.11.5]{logDM}.
\begin{proposition} \label{equiv.23} Let $f\colon Y\to X$ be a quasi-compact morphism of fs log schemes. Assume that $X$ admits a fan chart $\Sigma$. If $f$ is a monomorphism (resp.\ proper monomorphism, resp.\ log \'etale monomorphism, resp.\ dividing cover, resp.\ dividing Zariski cover), then there exists a subdivision $\Sigma'$ of $\Sigma$ such that the pullback \[ f'\colon Y\times_{\T_\Sigma}\T_{\Sigma'}\to X\times_{\T_\Sigma}\T_{\Sigma'} \] is a strict monomorphism (resp.\ closed immersion, resp.\ open immersion, resp.\ isomorphism, resp.\ Zariski cover). \end{proposition} \begin{proof} Combine \cite[Proposition 4.2.3]{logA1} (a variant of \cite[Theorem III.2.6.7]{Ogu}) and Propositions \ref{equiv.32} and \ref{equiv.11}. \end{proof}
This immediately implies the following.
\begin{corollary} \label{equiv.34} Let $f\colon Y\to X$ be a dividing cover of quasi-compact fs log schemes. If $X$ admits a fan chart $\Sigma$, then $f$ admits a refinement of the form $X\times_{\T_\Sigma}\T_{\Sigma'}\to X$ for some subdivision $\Sigma'$ of $\Sigma$. \end{corollary}
\begin{remark} \label{equiv.35} Let $\theta\colon \Sigma'\to \Sigma$ be a subdivision of fans. Combine toric resolution of singularities \cite[Theorem 11.1.9]{CLStoric} and De Concini-Procesi's Theorem \cite[pp.\ 39--40]{TOda} to see that there exists a refinement $\Sigma''\to \Sigma$ of $\theta$ that is a composition of star subdivisions. The induced morphism $\T_{\Sigma''}\to \T_{\Sigma}$ is a log blow-up. Hence Proposition \ref{equiv.23} is valid if we further require that $\T_{\Sigma'}\to \T_{\Sigma}$ is a log blow-up.
Using this, one can check that the dividing Zariski topology on the category of quasi-compact fs log schemes is equivalent to the \emph{Zariski valuative topology} \cite[Definition 2.23]{zbMATH05342987}, which is the smallest Grothendieck topology containing morphisms of the form $f\colon \amalg_{i\in I} Y_i\to X$ with finite $I$ such that $f$ is universally surjective and each $Y_i\to X$ is a composition of open immersions and log blow-ups.
Due to \cite[Remark 2.24]{zbMATH05342987}, the dividing Zariski topology has enough points. \end{remark}
The next result justifies our choice of the terminology ``dividing Zariski.''
\begin{proposition} \label{div.8} Let $\cF$ be a presheaf on $\lFan/B$. Then $\cF$ is a dividing Zariski sheaf if and only if $\cF$ is both a dividing sheaf and a Zariski sheaf. \end{proposition} \begin{proof} The only if direction is trivial. For the if direction, assume that $\cF$ is both a dividing sheaf and a Zariski sheaf. Let $Y\to X$ be a dividing Zariski cover in $\lFan/B$. By Proposition \ref{equiv.23}, there exists a dividing cover $X'\to X$ such that the projection $Y':=Y\times_X X'\to X'$ is a Zariski cover. Since $\cF$ is a Zariski sheaf, we obtain \[ \cF(X') \xrightarrow{\simeq} \Eq(\cF(Y')\rightrightarrows \cF(Y'\times_{X'}Y')). \] Use the assumption that $\cF$ is a dividing sheaf to obtain \[ \cF(X) \xrightarrow{\simeq} \Eq(\cF(Y)\rightrightarrows \cF(Y\times_{X}Y)). \] This shows that $\cF$ is a dividing Zariski sheaf. \end{proof}
The dividing sheafification and dividing Zariski sheafification admit explicit descriptions as follows.
\begin{proposition} \label{div.10} Let $\cF$ be a presheaf on $\lFan/B$. For every $X\in \lFan/B$ admitting a fan chart $\Sigma$, there exists an isomorphism \begin{equation} a_{div}\cF(X) \simeq \colimit_{\Sigma'\to \Sigma}\cF(X\times_{\T_\Sigma}\T_{\Sigma'}), \end{equation} where the colimit runs over the category of the subdivisions of $\Sigma$. \end{proposition} \begin{proof} Let $L_{div}\cF$ be the separated presheaf associated with $\cF$ for the dividing topology, which is defined using \cite[Section II.3.0.5]{SGA4}. For a subdivision $\Sigma'$ of $\Sigma$, we set $X_{\Sigma'}:=X\times_{\T_\Sigma}\T_{\Sigma'}$. By Corollary \ref{equiv.34}, we have isomorphisms \begin{equation} \label{div.10.1} L_{div}\cF(X) \simeq \colimit_{\Sigma'\to \Sigma}\Eq(\cF(X_{\Sigma'})\rightrightarrows \cF(X_{\Sigma'}\times_X X_{\Sigma'})) \simeq \colimit_{\Sigma'\to \Sigma}\cF(X_{\Sigma'}). \end{equation}
Suppose that $Y\to X$ is a dividing cover in $\lFan/B$. Use Proposition \ref{equiv.23} and the above description \eqref{div.10.1} to obtain an isomorphism \[ L_{div}\cF(X) \simeq L_{div}\cF(Y). \] This means that $L_{div}\cF$ is already a dividing sheaf, i.e., $L_{div}\cF\simeq a_{div}\cF$. \end{proof}
For a fan $\Sigma$, the category of subdivisions of $\Sigma$ is a filtered category. Hence the colimit in \eqref{div.10.1} is filtered.
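Concretely, between two subdivisions of $\Sigma$ there is at most one morphism over $\Sigma$ since the underlying homomorphism of lattices is the identity, and any two subdivisions $\Sigma_1\to \Sigma$ and $\Sigma_2\to \Sigma$ admit the common refinement
\[
\Sigma_1\times_\Sigma \Sigma_2
=
\{\sigma_1\cap \sigma_2 : \sigma_1\in \Sigma_1,\ \sigma_2\in \Sigma_2\}.
\]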
\begin{proposition} \label{div.3} Let $\cF$ be a Zariski sheaf on $\lFan/B$. For every $X\in \lFan/B$ admitting a fan chart $\Sigma$, there exists an isomorphism \begin{equation} \label{div.3.1} a_{dZar}\cF(X) \simeq \colimit_{\Sigma'\to \Sigma}\cF(X\times_{\T_\Sigma}\T_{\Sigma'}), \end{equation} where the colimit runs over the category of subdivisions of $\Sigma$. \end{proposition} \begin{proof} Suppose that $X'\to X$ is a Zariski cover. For any subdivision $\Sigma'$ of $\Sigma$, we set $X_{\Sigma'}:=X\times_{\T_{\Sigma}}\T_{\Sigma'}$ and $X'_{\Sigma'}:=X'\times_X X_{\Sigma'}$. We obtain an isomorphism \[ \cF(X_{\Sigma'}) \xrightarrow{\simeq} \Eq(\cF(X_{\Sigma'}')\rightrightarrows \cF(X_{\Sigma'}'\times_{X_{\Sigma'}}X_{\Sigma'}')). \] Since filtered colimits commute with finite limits, we obtain \[ \colimit_{\Sigma'\to \Sigma}\cF(X_{\Sigma'}) \xrightarrow{\simeq} \Eq(\colimit_{\Sigma'\to \Sigma}\cF(X_{\Sigma'}')\rightrightarrows \colimit_{\Sigma'\to \Sigma}\cF(X_{\Sigma'}'\times_{X_{\Sigma'}}X_{\Sigma'}')). \] This shows that the right-hand side of \eqref{div.3.1} is a Zariski sheaf. Combining this with Proposition \ref{div.8} finishes the proof. \end{proof}
The following result enables us to work with $\lFan/B$ instead of $\lSch/B$ in many situations.
\begin{proposition} \label{div.9} Let $t$ be a topology on $\lSch/B$ that is finer than the Zariski topology. Then the inclusion functor $\lFan/B \to \lSch/B$ induces an equivalence \[ \Shv_t(\lFan/B) \simeq \Shv_t(\lSch/B). \] \end{proposition} \begin{proof} For every $X\in \lSch/B$, there exists a Zariski cover $Y\to X$ such that $Y\in \lFan/B$. The implication (i)$\Rightarrow$(ii) in \cite[Th\'eor\`eme 4.1]{SGA4} finishes the proof. \end{proof}
\begin{definition} \label{div.4} For $X\in \lSch/B$, let $h_X$ be the dividing Zariski sheaf on $\lFan/B$ represented by $X$. \end{definition}
If $f\colon Y\to X$ is a dividing cover in $\lFan/B$, then the induced morphism $h_f\colon h_Y\to h_X$ is an isomorphism. Hence the Yoneda functor \begin{equation} \label{div.0.1} h\colon \lSch/B \to \Shv_{dZar}(\lSch/B) \simeq \Shv_{dZar}(\lFan/B) \end{equation} is \emph{not} conservative if $B$ is nonempty. In particular, $h$ is not fully faithful, and the dividing Zariski topology is not subcanonical.
\begin{proposition} \label{div.5} Suppose $X\in \lSch/B$ and $Y\in \lFan/B$. If $Y$ admits a fan chart $\Sigma$, then there exists a canonical isomorphism \[ h_X(Y) \simeq \colimit_{\Sigma'\to \Sigma}\Hom_{\lSch/B}(Y\times_{\T_\Sigma}\T_{\Sigma'},X), \] where the colimit runs over the category of subdivisions of $\Sigma$. \end{proposition} \begin{proof} Since $h_X$ is a Zariski sheaf, we can use Proposition \ref{div.3} for $h_X$. \end{proof}
\begin{proposition} \label{div.13} The Yoneda functor $h\colon \lSch/B \to \Shv_{dZar}(\lFan/B)$ preserves finite limits. \end{proposition} \begin{proof} Follows from Proposition \ref{div.5} since filtered colimits commute with finite limits. \end{proof}
\section{Representable morphisms of sheaves}\label{rep}
The purpose of this section is to deal with properties of morphisms of sheaves, which are needed later to define divided log spaces.
\begin{definition} We say that $\cF\in \Shv_{dZar}(\lFan/B)$ is \emph{represented by $X\in \lFan/B$} if $\cF\simeq h_X$. We say that a morphism $\cG\to \cF$ in $\Shv_{dZar}(\lFan/B)$ is \emph{represented by a morphism $f\colon Y\to X$ in $\lFan/B$} if there exists a commutative square with vertical isomorphisms \[ \begin{tikzcd} h_Y\ar[d,"\simeq"']\ar[r,"h_f"]& h_X\ar[d,"\simeq"] \\ \cG\ar[r]& \cF. \end{tikzcd} \] \end{definition}
\begin{definition} \label{equiv.6} Let $u\colon \cG\to \cF$ be a morphism in $\Shv_{dZar}(\lFan/B)$. We say that $u$ is \emph{$\lFan/B$-representable} (or simply \emph{representable}) if for every morphism $h_X\to \cF$ with $X\in \lFan/B$, there exists $Y\in \lFan/B$ such that $h_X\times_{\cF}\cG\simeq h_Y$.
Suppose that $\cP$ is a class of morphisms in $\lFan/B$. We say that $u$ is a \emph{representable $\cP$-morphism} if for every morphism $h_X\to \cF$ with $X\in \lFan/B$, there exists a commutative square \begin{equation} \label{equiv.6.1} \begin{tikzcd} h_{V}\ar[d,"\simeq"']\ar[r,"h_{g}"]& h_{U}\ar[d,"\simeq"] \\ h_X\times_{\cF}\cG\ar[r,"p"]& h_X \end{tikzcd} \end{equation} with vertical isomorphisms such that $p$ is the projection and $g$ is a $\cP$-morphism. \end{definition}
Observe that every representable $\cP$-morphism is representable. If $\cP$ is closed under pullbacks, then the class of representable $\cP$-morphisms in $\Shv_{dZar}(\lFan/B)$ is closed under pullbacks too.
\begin{proposition} \label{equiv.30} Suppose that $\cP$ is the class of isomorphisms in $\lFan/B$. If $f\colon \cG\to \cF$ is a representable $\cP$-morphism in $\Shv_{dZar}(\lFan/B)$, then $f$ is an isomorphism. \end{proposition} \begin{proof} For every morphism $h_X\to \cF$ with $X\in \lFan/B$, we have a cartesian square \[ \begin{tikzcd} h_X\ar[r,"\id"]\ar[d]& h_X\ar[d] \\ \cG\ar[r,"f"]& \cF. \end{tikzcd} \] This means $\cG(X)\simeq \cF(X)$. \end{proof}
The next five lemmas deal with the structure of representable morphisms.
\begin{lemma} \label{equiv.24} Let $f\colon h_Y\to h_X$ be a morphism in $\Shv_{dZar}(\lFan/B)$, where $X,Y\in \lFan/B$. Then there exists a dividing cover $v\colon V\to Y$ and a morphism $g\colon V\to X$ in $\lFan/B$ such that $fh_v=h_g$. Furthermore, $f$ is representable. \end{lemma} \begin{proof} The first claim is a consequence of Proposition \ref{div.5}. To show that $f$ is representable, using the first claim, we only need to show that $h_{V}\times_{h_X} h_{X'}$ is representable for every morphism $X'\to X$ in $\lFan/B$. This holds since $h_{V}\times_{h_X}h_{X'}\simeq h_{V\times_X X'}$ by Proposition \ref{div.13}. \end{proof}
\begin{lemma} \label{equiv.26} Suppose that $\cP$ is a class of morphisms in $\lFan/B$ closed under pullbacks. Let $f\colon h_Y\to h_X$ be a representable $\cP$-morphism in $\Shv_{dZar}(\lFan/B)$, where $X,Y\in \lFan/B$. Then there exists a commutative square \begin{equation} \begin{tikzcd} h_{V}\ar[d,"\simeq"']\ar[r,"h_{g}"]& h_{U}\ar[d,"h_{u}"',"\simeq"] \\ h_{Y}\ar[r,"f"]& h_X \end{tikzcd} \end{equation} with vertical isomorphisms such that $g$ is a $\cP$-morphism and $u$ is a dividing cover in $\lFan/B$. \end{lemma} \begin{proof} From \eqref{equiv.6.1}, we have a commutative square \[ \begin{tikzcd} h_{Y'}\ar[r,"h_{f'}"]\ar[d,"\simeq"']& h_{X'}\ar[d,"\simeq"] \\ h_Y\ar[r,"f"]& h_X \end{tikzcd} \] with vertical isomorphisms such that $f'$ is a $\cP$-morphism in $\lFan/B$. Apply Lemma \ref{equiv.24} to $h_X\xrightarrow{\simeq} h_{X'}$ to obtain a commutative triangle \[ \begin{tikzcd} h_{X'}\ar[d,"\simeq"']\ar[r,leftarrow,"h_{u'}"]& h_U\ar[ld,"h_u","\simeq"'] \\ h_X \end{tikzcd} \] such that $u$ is a dividing cover and $u'$ is a morphism in $\lFan/B$. Take $V:=Y'\times_{X'} U$ to conclude. \end{proof}
\begin{lemma} \label{equiv.25} Let $f\colon Y\to X$ be a morphism in $\lFan/B$. If $h_f$ is an isomorphism, then there exist dividing covers $u\colon V\to X$ and $v\colon V\to Y$ in $\lFan/B$ such that $fv=u$. \end{lemma} \begin{proof} Apply Proposition \ref{div.5} to $h_f^{-1}\in h_Y(X)$ to obtain a morphism $p\colon X'\to Y$ such that $fp$ is a dividing cover. Since $h_p$ is an isomorphism, we can also apply Proposition \ref{div.5} to $h_p^{-1}\in h_{X'}(Y)$ to obtain a morphism $q\colon Y'\to X'$ such that $pq$ is a dividing cover. We set $V:=X'\times_Y Y'$, which is a dividing cover over $X$. Use $Y'\times_Y Y'\simeq Y'$ and $V\times_X V\simeq V$ to obtain an induced commutative diagram \[ \begin{tikzcd}[column sep=small, row sep=small] Y'\times_X V\ar[r]\ar[d]& V\ar[r]\ar[d]& Y'\times_X V\ar[r]\ar[d]& V\ar[d] \\ Y'\ar[r]\ar[d]& V\ar[r]\ar[d]& Y'\ar[r,"fpq"]\ar[d,"pq"]& X \\ Y'\ar[r,"q"]& X'\ar[r,"p"]& Y \end{tikzcd} \] whose small squares are cartesian and vertical morphisms are dividing covers. Let $a\colon V\to Y'\times_X V$ be the graph morphism, and let $b\colon Y'\times_X V\to V$ be the projection. From the upper row of the diagram, we see that $a$ is an inverse of $b$. Hence $b$ is an isomorphism, so we obtain the desired dividing covers $V\to X$ and $V\to Y$. \end{proof}
\begin{lemma} \label{equiv.28} Let $f\colon h_Y\to h_X$ be an isomorphism in $\Shv_{dZar}(\lFan/B)$, where $X,Y\in \lFan/B$. Then there exist dividing covers $u\colon V\to X$ and $v\colon V\to Y$ in $\lFan/B$ such that $fh_v=h_u$. \end{lemma} \begin{proof} Lemma \ref{equiv.24} yields a dividing cover $q\colon Y'\to Y$ and a morphism $p\colon Y'\to X$ such that $fh_q=h_p$. Since $h_p$ is an isomorphism, Lemma \ref{equiv.25} yields a dividing cover $V\to Y'$ such that the composition $V\to Y'\xrightarrow{p} X$ is a dividing cover. The composition $V\to Y'\xrightarrow{q} Y$ is a dividing cover too. \end{proof}
\begin{lemma} \label{equiv.29} Let $f\colon Y\to X$ be a $\cP$-morphism in $\lFan/B$, where $\cP$ is a class of morphisms in $\lFan/B$ closed under pullbacks. Then $h_f$ is a representable $\cP$-morphism. \end{lemma} \begin{proof} Let $h_{V}\to h_X$ be a morphism in $\Shv_{dZar}(\lFan/B)$ with $V\in \lFan/B$. By Lemma \ref{equiv.24}, we can replace $V$ with its suitable dividing cover to assume that $h_V\to h_X$ is equal to $h_g$ for some morphism $g\colon V\to X$ in $\lFan/B$. To conclude, observe that we have an isomorphism $h_Y\times_{h_X} h_V \simeq h_{Y\times_X V}$. \end{proof}
For a class of morphisms $\cP$ in $\lFan/B$, we will frequently assume the following condition: \begin{enumerate} \item[(Div)] If $Y\to X$ is a $\cP$-morphism and $Y'\to Y$ is a dividing cover in $\lFan/B$, then there exists a dividing cover $X'\to X$ in $\lFan/B$ such that the projection $Y'\times_X X'\to X'$ is a $\cP$-morphism. \end{enumerate}
\begin{example} \label{equiv.16} Suppose that $\cP$ is a class of morphisms in $\lFan/B$ closed under pullbacks. Let $Y\to X$ be a $\cP$-morphism, and let $Y'\to Y$ be a dividing cover in $\lFan/B$. \\[5pt] (1) Assume that every morphism in $\cP$ is strict. If $X$ has a fan chart $\Sigma$, then $Y$ has a fan chart $\Sigma$. Proposition \ref{equiv.23} yields a subdivision $\Sigma'$ of $\Sigma$ such that the pullback $Y'\times_{\T_\Sigma}\T_{\Sigma'}\to Y\times_{\T_\Sigma}\T_{\Sigma'}$ is an isomorphism. This means that the pullback $Y'\times_X X_\Sigma\to Y\times_X X_\Sigma$ is an isomorphism, where $X_\Sigma:=X\times_{\T_\Sigma}\T_{\Sigma'}$. Hence $\cP$ satisfies (Div). \\[5pt] (2) Let exact $\cP$ be the subclass of morphisms in $\cP$ that are exact. Suppose that exact $\cP$ satisfies (Div). By \cite[Proposition 4.2.3]{logA1}, there exists a dividing cover $X''\to X$ such that the projection $Y\times_X X''\to X''$ is exact $\cP$. Our assumption on exact $\cP$ yields a dividing cover $X'\to X''$ such that the projection $Y'\times_{X} X'\to X'$ is exact $\cP$. Hence $\cP$ satisfies (Div). \\[5pt] (3) Suppose that $\cP$ contains all dividing covers and is closed under compositions. Observe that $\cP$ satisfies (Div). If $V\to U$ is an exact $\cP$-morphism and $V'\to V$ is a dividing cover, then there exists a dividing cover $U'\to U$ such that the projection $p\colon V'\times_U U'\to U'$ is exact by \cite[Proposition 4.2.3]{logA1}. It follows that $p$ is an exact $\cP$-morphism, so exact $\cP$ satisfies (Div). \end{example}
According to \cite[Proposition 4.2.3]{logA1}, a morphism $f$ of sheaves is a representable $\cP$-morphism if and only if $f$ is a representable exact $\cP$-morphism.
In the following cases of $\cP$, all exact $\cP$-morphisms are strict by Proposition \ref{equiv.11} so that we can use (1) and (2):
\begin{center}
\begin{tabular}{|c|c|} \hline $\cP$ & exact $\cP$ \\ \hline dividing cover & isomorphism \\ dividing Zariski cover & Zariski cover \\ proper monomorphism & strict closed immersion \\ log \'etale monomorphism & open immersion \\ strict immersion & strict immersion \\ \hline \end{tabular} \end{center}
If $\cP$ is the class of log smooth (resp.\ log \'etale) morphisms, then we can use (3).
\begin{example} A morphism of fs log schemes $i\colon Z\to X$ is called a \emph{closed immersion} if the underlying morphism of schemes $\ul{i}$ is a closed immersion and the induced morphism of structure sheaves of monoids $i_{\log}^*\cM_X\to \cM_Z$ is surjective, see \cite[Definition III.2.3.1]{Ogu}. In this case, the construction of the fiber products in the proof of \cite[Proposition III.2.1.2]{Ogu} shows that the diagonal morphism $Z\to Z\times_X Z$ is an isomorphism, i.e., $i$ is a monomorphism. Hence we have the following relations: \[ \text{(strict closed immersion) } \subset \text{ (closed immersion) } \subset \text{ (proper monomorphism).} \] We deduce that a morphism $f$ in $\Shv_{dZar}(\lFan/B)$ is a representable closed immersion if and only if $f$ is a representable strict closed immersion. \end{example}
With the condition \textup{(Div)}, we have a more structured version of Lemma \ref{equiv.26} as follows.
\begin{lemma} \label{equiv.27} Suppose that $\cP$ is a class of morphisms in $\lFan/B$ closed under pullbacks and satisfying \textup{(Div)}. Let $f\colon h_Y\to h_X$ be a representable $\cP$-morphism in $\Shv_{dZar}(\lFan/B)$, where $X,Y\in \lFan/B$. Then there exists a commutative square \begin{equation} \begin{tikzcd} h_{V}\ar[d,"h_{v}","\simeq"']\ar[r,"h_{g}"]& h_{U}\ar[d,"h_{u}"',"\simeq"] \\ h_{Y}\ar[r,"f"]& h_X \end{tikzcd} \end{equation} with vertical isomorphisms such that $g$ is a $\cP$-morphism and $u$ and $v$ are dividing covers in $\lFan/B$. \end{lemma} \begin{proof} By Lemmas \ref{equiv.26} and \ref{equiv.28}, there exists a commutative diagram \[ \begin{tikzcd} h_{Y''}\ar[d,"\simeq"',"h_{q'}"]\ar[dd,bend right=70,"h_{q}"'] \\ h_{Y'}\ar[d,"\simeq"']\ar[r,"h_{f'}"]& h_{X'}\ar[d,"h_{p}"',"\simeq"] \\ h_{Y}\ar[r,"f"]& h_X \end{tikzcd} \] with vertical isomorphisms such that $f'$ is a $\cP$-morphism and $p$, $q$, and $q'$ are dividing covers in $\lFan/B$. Use (Div) to obtain a cartesian square \[ \begin{tikzcd} Y''\times_{X'}X''\ar[r,"f''"]\ar[d]& X''\ar[d,"p'"] \\ Y''\ar[r,"f'q'"]& X' \end{tikzcd} \] such that $p'$ is a dividing cover and $f''$ is a $\cP$-morphism. Take $U:=X''$ and $V:=Y''\times_{X'}X''$ to obtain the desired commutative diagram. \end{proof}
\begin{lemma} \label{equiv.18} Suppose that $\cP$ and $\cQ$ are classes of morphisms in $\lFan/B$ closed under pullbacks. Let $h_Y\to h_X$ be a representable $\cP$-morphism, and let $h_Z\to h_Y$ be a representable $\cQ$-morphism with $X,Y,Z\in \lFan/B$. If $\cP$ satisfies \textup{(Div)}, then there exists a commutative diagram \[ \begin{tikzcd} h_{W}\ar[r,"h_g"]\ar[d,"\simeq"']& h_V\ar[d,"\simeq"]\ar[r,"h_f"]& h_U\ar[d,"h_u"',"\simeq"] \\ h_Z\ar[r]& h_Y\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $f$ is a $\cP$-morphism, $g$ is a $\cQ$-morphism, and $u$ is a dividing cover in $\lFan/B$. \end{lemma} \begin{proof} Use Lemma \ref{equiv.26} twice to obtain a commutative diagram \[ \begin{tikzcd} h_{Z''}\ar[dd,"\simeq"']\ar[r,"h_b"]& h_{Y''}\ar[d,"\simeq","h_q"'] \\ & h_{Y'}\ar[r,"h_a"]\ar[d,"\simeq"]& h_{X'}\ar[d,"\simeq","h_p"'] \\ h_Z\ar[r]& h_Y\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $a$ is a $\cP$-morphism, $b$ is a $\cQ$-morphism, and $p$ and $q$ are dividing covers in $\lFan/B$. By \textup{(Div)}, there exists a dividing cover $X''\to X'$ such that the projection $Y''\times_{X'} X''\to X''$ is a $\cP$-morphism. Take $U:=X''$, $V:=Y''\times_{X'} X''$, and $W:=Z''\times_{X'}X''$ to conclude. \end{proof}
\begin{proposition} \label{equiv.15} Suppose that $\cP$ is a class of morphisms in $\lFan/B$ closed under compositions and pullbacks. If $\cP$ satisfies \textup{(Div)}, then the class of representable $\cP$-morphisms in $\Shv_{dZar}(\lFan/B)$ is closed under compositions. \end{proposition} \begin{proof} An immediate consequence of Lemma \ref{equiv.18} when $\cQ:=\cP$. \end{proof}
\begin{proposition} \label{equiv.37} Suppose that $\cP$ is the class of all morphisms in $\lFan/B$. Then a morphism $f\colon \cG\to \cF$ in $\Shv_{dZar}(\lFan/B)$ is representable if and only if $f$ is a representable $\cP$-morphism. \end{proposition} \begin{proof} The if direction is trivial. For the only if direction, assume that $f$ is representable. Let $h_X\to \cF$ be a morphism with $X\in \lFan/B$. Then there exists a cartesian square \[ \begin{tikzcd} h_Y\ar[r]\ar[d]& h_X\ar[d] \\ \cG\ar[r,"f"]& \cF \end{tikzcd} \] with $Y\in \lFan/B$. To obtain \eqref{equiv.6.1}, apply Lemma \ref{equiv.24} to $h_Y\to h_X$. \end{proof}
\begin{proposition} \label{equiv.36} Let $f\colon \cG\to \cF$ be a morphism in $\Shv_{dZar}(\lFan/B)$. If $f$ is a representable Zariski cover, then $f$ is an epimorphism of sheaves. \end{proposition} \begin{proof} Suppose $a\in \cF(X)$ with $X\in \lFan/B$, which can be expressed as a morphism of sheaves $a\colon h_X\to \cF$. Lemma \ref{equiv.26} yields a commutative square \[ \begin{tikzcd} h_V\ar[d,"\simeq"']\ar[r,"h_g"]& h_U\ar[d,"h_u"',"\simeq"]\\ \cG\times_{\cF} h_X\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $g$ is a Zariski cover and $u$ is a dividing cover in $\lFan/B$. Hence $(ug)^*a\in \cF(V)$ is in the image of $f(V)\colon \cG(V)\to \cF(V)$. To conclude, observe that $V\to X$ is a dividing Zariski cover. \end{proof}
\begin{lemma} \label{equiv.17} Suppose that $\cP$ is one of the following classes: \begin{enumerate} \item[\textup{(i)}] isomorphisms, \item[\textup{(ii)}] strict immersions, \item[\textup{(iii)}] open immersions, \item[\textup{(iv)}] strict closed immersions, \item[\textup{(v)}] Zariski covers. \end{enumerate} Let \begin{equation} \label{equiv.17.1} \begin{tikzcd} \cG'\ar[d,"u'"']\ar[r,"v'"]& \cG\ar[d,"u"] \\ \cF'\ar[r,"v"]& \cF \end{tikzcd} \end{equation} be a cartesian square in $\Shv_{dZar}(\lFan/B)$ such that $u$ is a representable Zariski cover and $v'$ is a representable $\cP$-morphism. Then $v$ is a representable $\cP$-morphism. \end{lemma} \begin{proof} We only need to consider the case when $\cF=h_X$ with $X\in \lFan/B$. By Lemma \ref{equiv.18}, we can replace $X$ with its suitable dividing cover to assume that \eqref{equiv.17.1} is isomorphic to a cartesian square \begin{equation} \label{equiv.17.3} \begin{tikzcd} h_{Y'}\ar[d,"u'"']\ar[r,"h_{g'}"]& h_{Y}\ar[d,"h_{f}"] \\ \cF'\ar[r,"v"]& h_{X} \end{tikzcd} \end{equation} for some Zariski cover $f$ and $\cP$-morphism $g'$ in $\lFan/B$.
We have isomorphisms $ h_{Y'\times_X Y} \simeq h_{Y'}\times_{\cF'}h_{Y'} \simeq h_{Y\times_X Y'} $. Let $c\colon h_{Y'\times_X Y}\to h_{Y\times_X Y'}$ be the composition. Lemma \ref{equiv.28} yields dividing covers $a\colon V\to Y'\times_X Y$ and $b\colon V \to Y\times_X Y'$ such that $ch_a=h_b$. The induced square \[ \begin{tikzcd} h_V\ar[r,"h_a"]\ar[d,"h_b"']& h_{Y'\times_X Y}\ar[d] \\ h_{Y\times_X Y'}\ar[r]& h_{Y\times_X Y} \end{tikzcd} \] commutes. By Proposition \ref{div.5}, after replacing $V$ by its suitable dividing cover, we may assume that the induced square \[ \begin{tikzcd} V\ar[r,"a"]\ar[d,"b"']& Y'\times_X Y\ar[d] \\ Y\times_X Y'\ar[r]& Y\times_X Y \end{tikzcd} \] commutes. Let $g''\colon V\to Y\times_X Y$ be the composition obtained from this. We also have the induced diagram \[ \begin{tikzcd} h_{V}\ar[d,shift left=0.5ex,"h_{q_2b}"]\ar[d,shift right=0.5ex,"h_{q_1a}"']\ar[r,"h_{g''}"]& h_{Y\times_X Y}\ar[d,shift left=0.5ex,"h_{p_2}"]\ar[d,shift right=0.5ex,"h_{p_1}"'] \\ h_{Y'}\ar[r,"g'"]& h_{Y}, \end{tikzcd} \] where $p_1$ and $q_1$ are the first projections and $p_2$ and $q_2$ are the second projections. The two squares formed with the left vertical or right vertical morphisms commute. By Proposition \ref{div.5} again, after replacing $V$ by its suitable dividing cover, we may assume that the two squares in the diagram \begin{equation} \label{equiv.17.4} \begin{tikzcd} V\ar[d,shift left=0.5ex,"q_2b"]\ar[d,shift right=0.5ex,"q_1a"']\ar[r,"g''"]& Y\times_X Y\ar[d,shift left=0.5ex,"p_2"]\ar[d,shift right=0.5ex,"p_1"'] \\ Y'\ar[r,"g'"]& Y \end{tikzcd} \end{equation} commute. Using \cite[Proposition 4.2.3]{logA1}, we can replace $X$ with its suitable dividing cover and $Y$, $Y'$, and $V$ by their corresponding pullbacks to assume that the composition $V\to X$ is exact. In this case, $a$ and $b$ are exact since $Y'\times_X Y$ and $Y\times_X Y'$ are strict over $X$. Proposition \ref{equiv.11}(3) shows that $a$ and $b$ are isomorphisms. 
It follows that the two squares in \eqref{equiv.17.4} are cartesian. All the morphisms in \eqref{equiv.17.4} are strict. By gluing, we obtain $X'\in \lSch/B$ with a cartesian square \[ \begin{tikzcd} Y'\ar[d,"f'"']\ar[r,"g'"]& Y\ar[d,"f"] \\ X'\ar[r,"g"]& X \end{tikzcd} \] such that $g$ is a $\cP$-morphism. Since $g$ is strict, we have $X'\in \lFan/B$.
Proposition \ref{equiv.36} shows that $h_f$ is an epimorphism of sheaves. Hence $h_f$ is a universal effective epimorphism of sheaves. Its pullbacks $h_{f'}$ and $u'$ are universal effective epimorphisms of sheaves too, so we have isomorphisms \[ h_{X'}\simeq \Coeq(h_{V}\rightrightarrows h_{Y'}) \simeq \cF'. \] By Lemma \ref{equiv.29} for $g$, we deduce that $v$ is a representable $\cP$-morphism. \end{proof}
\section{Divided log spaces}\label{divspace}
We adapt the definition of algebraic spaces \cite[Definition II.1.1]{zbMATH03350017} to the dividing Zariski topology as follows.
\begin{definition} \label{equiv.5} We say that $\cX\in \Shv_{dZar}(\lFan/B)$ is a \emph{(noetherian) divided log space over $B$} if the following two conditions are satisfied. \begin{enumerate} \item[(i)] The diagonal morphism $\Delta\colon\cX\to \cX\times \cX$ is a representable strict immersion. \item[(ii)] There exists a representable Zariski cover $h_X\to \cX$ with $X\in \lFan/B$. \end{enumerate} A \emph{morphism of divided log spaces over $B$} is a morphism of sheaves. The category of divided log spaces over $B$ is denoted by $\lSpc/B$. \end{definition}
\begin{remark} \label{equiv.7} We chose the terminology ``divided'' because a dividing Zariski sheaf admits no further nontrivial dividing covers, i.e., the process of dividing is finished. \end{remark}
\begin{proposition} \label{equiv.20} Let $h_Y\to \cX$ and $h_Z\to \cX$ be morphisms in $\lSpc/B$ with $Y,Z\in \lFan/B$. Then the fiber product $h_Y\times_{\cX}h_Z$ is representable. \end{proposition} \begin{proof} Consider the induced cartesian square \[ \begin{tikzcd} h_Y\times_{\cX}h_Z\ar[r]\ar[d]& h_Y\times h_Z\ar[d] \\ \cX\ar[r,"\Delta"]& \cX\times \cX. \end{tikzcd} \] The condition that $\Delta$ is a representable strict immersion implies the claim. \end{proof}
\begin{proposition} \label{equiv.38} Let $\cZ\xrightarrow{g}\cY\xrightarrow{f}\cX$ be morphisms in $\lSpc/B$. If $f$ is a representable strict (resp.\ strict separated) morphism and $fg$ is a representable strict (resp.\ strict closed) immersion, then $g$ is a representable strict (resp.\ strict closed) immersion. \end{proposition} \begin{proof} Choose a representable Zariski cover $h_U\to \cX$ with $U\in \lFan/B$. Since $f$ and $fg$ are representable, there exists a commutative diagram \[ \begin{tikzcd} h_W\ar[d]\ar[r]& h_V\ar[d]\ar[r]& h_U\ar[d] \\ \cZ\ar[r,"g"]& \cY\ar[r,"f"]& \cX \end{tikzcd} \] with cartesian squares. By Lemma \ref{equiv.18} and Proposition \ref{equiv.37}, we may assume that $h_V\to h_U$ is equal to $h_u$ for some strict (resp.\ strict separated) morphism $u\colon V\to U$ in $\lFan/B$ and $h_W\to h_V$ is equal to $h_v$ for some morphism $v\colon W\to V$ in $\lFan/B$. Lemma \ref{equiv.27} yields a commutative square \[ \begin{tikzcd} h_{W'}\ar[d,"h_q","\simeq"']\ar[r,"h_a"]& h_{U'}\ar[d,"h_p"',"\simeq"] \\ h_W\ar[r,"h_{uv}"]& h_U \end{tikzcd} \] with vertical isomorphisms such that $a$ is a strict (resp.\ strict closed) immersion and $p$ and $q$ are dividing covers in $\lFan/B$. By Proposition \ref{div.5}, there exists a dividing cover $r\colon W''\to W'$ such that the square \[ \begin{tikzcd} W''\ar[r,"ar"]\ar[d,"qr"']& U'\ar[d,"p"] \\ W\ar[r,"uv"]& U \end{tikzcd} \] commutes. Use \cite[Proposition 4.2.3]{logA1} to find a dividing cover $U''\to U'$ such that the projection $W''\times_{U'}U''\to U''$ is exact. The projection $W'\times_{U'}U''\to U''$ is a strict (resp.\ strict closed) immersion, so the pullback $W''\times_{U'}U''\to W'\times_{U'} U''$ is an exact dividing cover, i.e., an isomorphism by Proposition \ref{equiv.11}(3). It follows that the projection $W''\times_{U'}U''\to U''$ is a strict (resp.\ strict closed) immersion.
We have the commutative diagram \[ \begin{tikzcd} W''\times_{U'} U''\ar[d]\ar[r]& V\times_U U''\ar[r]\ar[d]& U''\ar[d] \\ W''\ar[r]& V\times_U U'\ar[r]& U'. \end{tikzcd} \] with cartesian squares. Since $u$ is a strict (resp.\ strict separated) morphism, the projection $V\times_U U''\to U''$ is a strict (resp.\ strict separated) morphism. It follows that the induced morphism $W''\times_{U'}U''\to V\times_U U''$ is a strict (resp.\ strict closed) immersion. Since the composition $h_{V\times_U U''}\to \cY$ is a representable Zariski cover, Lemma \ref{equiv.17} shows that $g$ is a representable strict (resp.\ strict closed) immersion. \end{proof}
\begin{proposition} \label{equiv.39} Let $\cY\to \cX$ be a morphism in $\lSpc/B$. Then the diagonal morphism $\cY\to \cY\times_{\cX}\cY$ is a representable strict immersion. \end{proposition} \begin{proof} Consider the induced commutative diagram \[ \begin{tikzcd} \cY\ar[r]& \cY\times_{\cX}\cY\ar[r]\ar[d]& \cY\times \cY\ar[d] \\ & \cX\ar[r]& \cX\times \cX, \end{tikzcd} \] where the horizontal morphisms are the diagonal morphisms, and the square is cartesian. The diagonal morphisms $\cX\to \cX\times \cX$ and $\cY\to \cY\times \cY$ are representable strict immersions. To finish the proof, apply Proposition \ref{equiv.38} to the upper row. \end{proof}
\begin{proposition} \label{equiv.21} Let $\cY\to \cX$ and $\cZ\to \cX$ be morphisms in $\lSpc/B$. Then the fiber product $\cY\times_{\cX}\cZ$ in $\Shv_{dZar}(\lFan/B)$ is a divided log space over $B$. \end{proposition} \begin{proof} We have the induced cartesian square \[ \begin{tikzcd} \cY\times_{\cX}\cZ\times_{\cX}\cZ\ar[r,"d"]\ar[d]& \cY\times_{\cX}\cZ\times \cY\times_{\cX}\cZ\ar[d] \\ \cY\ar[r]& \cY\times \cY. \end{tikzcd} \] Since the diagonal morphism $\cY\to \cY\times \cY$ is a representable strict immersion, $d$ is a representable strict immersion. Compose $d$ with the pullback $\cY\times_{\cX} \cZ\to \cY\times_{\cX} \cZ\times_{\cX} \cZ$ of the diagonal morphism $\cZ\to \cZ\times_{\cX}\cZ$, which is a representable strict immersion by Proposition \ref{equiv.39}, to deduce the condition (i) in Definition \ref{equiv.5} for $\cY\times_{\cX}\cZ$.
There exist representable Zariski covers $h_Y\to \cY$ and $h_Z\to \cZ$ with $Y,Z\in \lFan/B$. The pullbacks $h_Y\times_{\cX} h_Z\to \cY\times_{\cX} h_Z$ and $\cY\times_{\cX} h_Z\to \cY\times_{\cX} \cZ$ are representable Zariski covers. Hence the composition $h_Y\times_{\cX} h_Z\to \cY\times_{\cX} \cZ$ is a representable Zariski cover by Proposition \ref{equiv.15}. This shows the condition (ii) in Definition \ref{equiv.5} for $\cY\times_{\cX}\cZ$. \end{proof}
\begin{proposition} \label{equiv.31} Let $f\colon \cY\to \cX$ be a representable Zariski cover in $\lSpc/B$. If $f$ is a monomorphism in $\lSpc/B$, then $f$ is an isomorphism. \end{proposition} \begin{proof} The diagonal morphism $\cY\to \cY\times_{\cX} \cY$ is an isomorphism, where $\cY\times_{\cX} \cY$ is the fiber product in $\lSpc/B$, which is the fiber product in $\Shv_{dZar}(\lFan/B)$ by Proposition \ref{equiv.21}. Hence $f$ is a monomorphism of sheaves. By Proposition \ref{equiv.36}, $f$ is an epimorphism of sheaves. It follows that $f$ is a stalkwise isomorphism of sheaves since the dividing Zariski topology has enough points by Remark \ref{equiv.35}, i.e., $f$ is an isomorphism. \end{proof}
\begin{proposition} \label{equiv.40} Suppose that $\cP$ is the class of monomorphisms in $\lFan/B$. If $f\colon \cY\to \cX$ is a representable $\cP$-morphism in $\lSpc/B$, then $f$ is a monomorphism in $\lSpc/B$. \end{proposition} \begin{proof} Choose a representable Zariski cover $h_U\to \cX$ with $U\in \lFan/B$. We may assume that there exists a cartesian square \[ \begin{tikzcd} h_V\ar[r,"h_g"]\ar[d]& h_U\ar[d] \\ \cY\ar[r,"f"]& \cX \end{tikzcd} \] such that $g$ is a monomorphism in $\lFan/B$. Then we obtain a cartesian square \[ \begin{tikzcd} h_V\ar[r,"h_d"]\ar[d]& h_{V\times_U V}\ar[d] \\ \cY\ar[r,"\Delta"]& \cY\times_{\cX} \cY, \end{tikzcd} \] where $d$ and $\Delta$ are the diagonal morphisms. Since $g$ is a monomorphism, $d$ is an isomorphism. Use Lemma \ref{equiv.17} to show that $\Delta$ is an isomorphism, i.e., $f$ is a monomorphism. \end{proof}
Hence any representable strict morphism in $\lSpc/B$ is a monomorphism.
\begin{proposition} \label{equiv.9} If $X\in \lSch/B$, then $h_X$ is a divided log space over $B$. \end{proposition} \begin{proof} Let $h_V\to h_X\times h_X$ be a morphism in $\lSpc/B$ with $V\in \lFan/B$. By Proposition \ref{div.5}, after replacing $V$ by its suitable dividing cover, we may assume that this is isomorphic to $h_f$ for some morphism $f\colon V\to X\times_B X$ in $\lSch/B$. The diagonal morphism $X\to X\times_B X$ is a proper monomorphism, so the projection $W:=V\times_{X\times_B X}X\to V$ is a proper monomorphism too. Proposition \ref{equiv.23} yields a dividing cover $V'\to V$ such that the projection $W':=V'\times_V W\to V'$ is a strict closed immersion. This shows the condition (i) in Definition \ref{equiv.5} for $h_X$.
Choose a Zariski cover $Y\to X$ with $Y\in \lFan/B$. Let $h_V\to h_X$ be a morphism with $V\in \lFan/B$. By Proposition \ref{div.5}, after replacing $V$ by its suitable dividing cover, we may assume that this is isomorphic to $h_f$ for some morphism $f\colon V\to X$ in $\lSch/B$. The projection $h_V\times_{h_X}h_Y\to h_V$ is isomorphic to $h_p$, where $p$ is the projection $V\times_X Y\to V$. Since $p$ is a Zariski cover, $h_X$ satisfies the condition (ii) in Definition \ref{equiv.5}. \end{proof}
Hence the essential image of the Yoneda functor \eqref{div.0.1} lies in $\lSpc/B$.
\begin{proposition} \label{equiv.19} Let $h_X\to \cX$ be a representable Zariski cover in $\lSpc/B$ with $X\in \lFan/B$. Then there exists a dividing Zariski covering family $\{U_i\to X\}_{i\in I}$ with finite $I$ such that each $h_{U_i}\to \cX$ is a representable open immersion. \end{proposition} \begin{proof} Use Lemmas \ref{equiv.26} and \ref{equiv.27} to obtain a commutative diagram \[ \begin{tikzcd} h_{Y'}\ar[d,"h_{f''}"',"\simeq"]\ar[rr,"h_{g'}"]& & h_{X''}\ar[d,"h_{f'}","\simeq"'] \\ h_{Y}\ar[d,"h_g"']\ar[r,"\simeq"]& h_X\times_{\cX}h_X\ar[d,"p_1"]\ar[r,"p_2"]& h_X\ar[d] \\ h_{X'}\ar[r,"\simeq","h_f"']& h_{X}\ar[r]& \cX, \end{tikzcd} \] where $p_1$ (resp.\ $p_2$) is the first (resp.\ second) projection, $f$, $f'$, and $f''$ are dividing covers, $g$ and $g'$ are Zariski covers, and $h_{Y'}\simeq h_{Y}\times_{h_X}h_{X''}$. We can decompose $Y$ as $\amalg_{i\in I}Y_i$ such that each $Y_i\to X'$ is an open immersion. We set $U_i:=g'(Y_i\times_Y Y')$ and $V_i:=g'^{-1}(U_i)$, and we regard them as open subschemes of $X''$ and $Y'$ respectively. There is a cartesian square \[ \begin{tikzcd} h_{V_i}\ar[d]\ar[r]& h_{U_i}\ar[d] \\ h_{X'}\ar[r]& \cX. \end{tikzcd} \] Since $V_i\to U_i$ is a Zariski cover and $V_i\to X'$ is a log \'etale monomorphism, Lemma \ref{equiv.17} shows that $h_{U_i}\to \cX$ is a representable open immersion. To conclude, observe that $\amalg_{i\in I} U_i\to X''$ is a Zariski cover. \end{proof}
\section{Zariski equivalence relations} \label{equiv}
\'Etale equivalence relations in the theory of algebraic spaces are helpful for constructing examples. The purpose of this section is to develop an analogous notion in the category of divided log spaces. As an application, we explain how to glue divided log spaces.
\begin{definition} \label{equiv.2} Suppose $\cX\in \lSpc/B$. A \emph{Zariski equivalence relation on $\cX$} is a morphism \[ i\colon \cR\to \cX\times \cX \] in $\lSpc/B$ satisfying the following conditions. \begin{enumerate} \item[(i)] $i$ is a representable strict immersion. \item[(ii)] If $p_1,p_2\colon \cX\times \cX\rightrightarrows \cX$ denote the two projections, then $p_1i$ and $p_2i$ are representable Zariski covers. \item[(iii)] For all $T\in \lFan/B$, $\cR(T)\to \cX(T)\times \cX(T)$ is an equivalence relation. \end{enumerate} \end{definition}
By Proposition \ref{equiv.40}, the condition (i) implies that $i$ is a monomorphism, i.e., $\cR(T)\to \cX(T)\times \cX(T)$ is injective for all $T\in \lFan/B$.
Let $\cX/\cR$ denote the dividing Zariski sheaf associated with the presheaf \[ (T\in \lFan/B)\mapsto \cX(T)/\cR(T). \] There is an induced cartesian square \begin{equation} \label{equiv.2.1} \begin{tikzcd} \cR\ar[d,"p_1i"']\ar[r,"p_2i"]& \cX\ar[d] \\ \cX\ar[r]& \cX/\cR. \end{tikzcd} \end{equation}
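Informally, since $\cX\to \cX/\cR$ is an epimorphism of sheaves by construction, the cartesian square \eqref{equiv.2.1} may be thought of as exhibiting $\cX/\cR$ as a coequalizer, in the notation used in the proof of Lemma \ref{equiv.17}:
\[
\cX/\cR \simeq \Coeq(\cR\rightrightarrows \cX),
\]
where the two parallel morphisms are $p_1i$ and $p_2i$.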
Suppose that $\cT$ is a Zariski equivalence relation on $\cY\in \lSpc/B$. If there is a morphism $f\colon \cY\to \cX$ and a commutative square \[ \begin{tikzcd} \cT\ar[r]\ar[d]& \cR\ar[d] \\ \cY\times \cY\ar[r,"f\times f"]& \cX\times \cX, \end{tikzcd} \] then there is an induced morphism $\cY/\cT\to \cX/\cR$.
\begin{proposition} \label{equiv.3} Let $\cR$ be a Zariski equivalence relation on $\cX\in \lSpc/B$. Then $\cX/\cR\in \lSpc/B$. \end{proposition} \begin{proof} Let $h_V\to \cX/\cR\times \cX/\cR$ be a morphism with $V\in \lFan/B$. The morphism $\cX\to \cX/\cR$ is an epimorphism, so there exists a dividing Zariski cover $V'\to V$ in $\lFan/B$ such that the composition $h_{V'}\to h_{V}\to \cX/\cR\times \cX/\cR$ factors through $\cX\times \cX$. From the cartesian square \eqref{equiv.2.1}, we have an isomorphism \[ h_{V'}\times_{\cX/\cR\times \cX/\cR}\cX/\cR \simeq h_{V'}\times_{\cX\times \cX}\cR. \] Since $\cR$ is a Zariski equivalence relation on $\cX$, the projection $h_{V'}\times_{\cX\times \cX}\cR\to h_{V'}$ is a representable strict immersion.
We set $\cF:=h_V\times_{\cX/\cR\times \cX/\cR}\cX/\cR$ so that there is a cartesian square \[ \begin{tikzcd} h_{V'}\times_{\cX\times \cX}\cR\ar[r]\ar[d]& h_{V'}\ar[d] \\ \cF\ar[r]& h_V. \end{tikzcd} \] Lemma \ref{equiv.17} shows that $\cF\to h_V$ is a representable strict immersion. This shows that the diagonal morphism $\cX/\cR\to \cX/\cR\times \cX/\cR$ is a representable strict immersion, which verifies the axiom (i) of divided log spaces for $\cX/\cR$.
Let $h_V\to \cX/\cR$ be a morphism with $V\in \lFan/B$. There exists a dividing Zariski cover $V'\to V$ in $\lFan/B$ such that the composition $h_{V'}\to h_{V}\to \cX/\cR$ factors through $\cX$. Since \eqref{equiv.2.1} is cartesian, we have an isomorphism \[ h_{V'}\times_{\cX/\cR}\cX \simeq h_{V'}\times_{\cX}\cR, \] where the morphism $\cR\to \cX$ in this formulation is $p_1i$. Hence we have a cartesian square \[ \begin{tikzcd} h_{V'}\times_{\cX}\cR\ar[r]\ar[d]& h_{V'}\ar[d] \\ h_V\times_{\cX/\cR}\cX\ar[r]& h_V. \end{tikzcd} \] Since $p_1i$ is a representable Zariski cover, Lemma \ref{equiv.17} shows that the projection $h_V\times_{\cX/\cR}\cX\to h_V$ is a representable Zariski cover. It follows that $\cX\to \cX/\cR$ is a representable Zariski cover. Hence $\cX/\cR$ satisfies the axiom (ii) of divided log spaces. \end{proof}
\begin{proposition} Let $f\colon \cY\to \cX$ be a representable Zariski cover in $\lSpc/B$. Then the induced morphism $i\colon \cR:=\cY\times_{\cX}\cY\to \cY\times \cY$ is a Zariski equivalence relation on $\cY$. Furthermore, $\cY/\cR\simeq \cX$. \end{proposition} \begin{proof} For $T\in \lFan/B$, a section $(a,b)\in \cY(T)\times \cY(T)$ is in $\cR(T)$ if and only if $f(a)=f(b)$ in $\cX(T)$. This explicit description shows that $\cR(T)\to \cY(T)\times \cY(T)$ is an equivalence relation and $\cY/\cR\simeq \cX$. Since the diagonal morphism $\cX\to \cX\times \cX$ is a representable strict immersion, so is $i$. The assumption that $f$ is a representable Zariski cover implies that the two projections $\cR\rightrightarrows \cY$ are representable Zariski covers. Hence $\cR$ is a Zariski equivalence relation on $\cY$. \end{proof}
\begin{construction} \label{equiv.13} Let $I$ be a finite set. Assume that we are given the gluing data \begin{enumerate} \item[(1)] $\cX_i\in \lSpc/B$ for all $i\in I$, \item[(2)] representable open immersions $\cU_{ij}\to \cX_i$ for all $i,j\in I$, \item[(3)] isomorphisms $\varphi_{ij} \colon \cU_{ij}\to \cU_{ji}$ for all $i,j\in I$, \end{enumerate} satisfying the following conditions for $i,j,k\in I$: \begin{enumerate} \item[(i)] $\cU_{ii}=\cX_i$ and $\varphi_{ii}=\id$, \item[(ii)] there exists an isomorphism $\psi_{ijk}\colon \cU_{ij}\times_{\cX_i}\cU_{ik}\to \cU_{ji}\times_{\cX_j}\cU_{jk}$ such that the square \[ \begin{tikzcd} \cU_{ij}\times_{\cX_i}\cU_{ik}\ar[d,hookrightarrow]\ar[r,"\psi_{ijk}"]& \cU_{ji}\times_{\cX_j}\cU_{jk}\ar[d,hookrightarrow] \\ \cU_{ij}\ar[r,"\varphi_{ij}"]& \cU_{ji} \end{tikzcd} \] commutes, \item[(iii)] the square \[ \begin{tikzcd} \cU_{ij}\times_{\cX_i}\cU_{ik}\ar[d,"\simeq"']\ar[r,"\psi_{ijk}"]& \cU_{ji}\times_{\cX_j}\cU_{jk}\ar[d,"\psi_{jik}"] \\ \cU_{ik}\times_{\cX_i}\cU_{ij}\ar[r,"\psi_{ikj}"]& \cU_{ki}\times_{\cX_k}\cU_{kj} \end{tikzcd} \] commutes. \end{enumerate}
In this setting, let us explain the gluing construction. We set $\cX:=\amalg_{i\in I} \cX_i$ and $\cR:=\amalg_{i,j\in I}\cU_{ij}$. The composition \[ \cU_{ij}\xrightarrow{\Gamma_{\varphi_{ij}}} \cU_{ij}\times \cU_{ji} \hookrightarrow \cX_i\times \cX_j \] induces a morphism $\cR\to \cX\times \cX$, where $\Gamma_{\varphi_{ij}}$ is the graph morphism. Since the diagonal morphism $\cU_{ij}\to \cU_{ij}\times \cU_{ij}$ is a representable strict immersion, $\Gamma_{\varphi_{ij}}$ is a representable strict immersion. Hence $\cR\to \cX\times \cX$ is a representable strict immersion. The two compositions $\cR\to \cX\times \cX\rightrightarrows \cX$ are representable Zariski covers since the induced morphism $\amalg_{j\in I} \cU_{ij}\to \cX_i$ is a representable Zariski cover for all $i\in I$. Together with the above conditions (i)--(iii), we see that $\cR$ is a Zariski equivalence relation on $\cX$. The \emph{gluing of $\{\cX_{i}\}_{i\in I}$ along $\{\cU_{ij}\}_{i,j\in I}$} is defined to be $\cX/\cR$.
Apply Lemma \ref{equiv.17} to the induced cartesian squares \[ \begin{tikzcd} \amalg_{j\in I}\cU_{ij}\ar[r]\ar[d]& \amalg_{j\in I}\cX_j\ar[d] \\ \cX_i\ar[r]& \cX/\cR \end{tikzcd} \quad \begin{tikzcd} \amalg_{i,j\in I}\cU_{ij}\ar[r]\ar[d]& \amalg_{j\in I}\cX_j\ar[d] \\ \amalg_{i\in I}\cX_i\ar[r]& \cX/\cR \end{tikzcd} \] to see that the induced morphism $\cX_i\to \cX/\cR$ is a representable open immersion for all $i\in I$ and the induced morphism $\amalg_{i\in I}\cX_i\to \cX/\cR$ is a representable Zariski cover.
For the functoriality of the gluing construction, assume that another set of gluing data \[ \text{ $\cY_{i}$, $\cV_{ij}\to \cY_{i}$, $\varphi_{ij}$, and $\psi_{ijk}$ } \] for $i,j,k\in J$ is given, where $J$ is a finite set. Furthermore, assume that a map $\eta\colon I\to J$, morphisms $\cX_i\to \cY_{\eta(i)}$ for all $i\in I$, and morphisms $\cU_{ij}\to \cV_{\eta(i)\eta(j)}$ are given too such that the squares \[ \begin{tikzcd} \cU_{ij}\ar[r]\ar[d]& \cX_i\ar[d] \\ \cV_{\eta(i)\eta(j)}\ar[r]& \cY_{\eta(i)} \end{tikzcd} \quad \begin{tikzcd} \cU_{ij}\ar[d]\ar[r,"\varphi_{ij}"]& \cU_{ji}\ar[d] \\ \cV_{\eta(i)\eta(j)}\ar[r,"\varphi_{\eta(i)\eta(j)}"]& \cV_{\eta(j)\eta(i)} \end{tikzcd} \] commute. We set $\cY:=\amalg_{i\in J}\cY_{i}$ and $\cT:=\amalg_{i,j\in J}\cV_{ij}$. There is an induced commutative square \[ \begin{tikzcd} \cR\ar[d]\ar[r]& \cT\ar[d] \\ \cX\times \cX\ar[r]& \cY\times \cY. \end{tikzcd} \] This induces a functorial morphism $\cX/\cR\to \cY/\cT$. \end{construction}
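To connect Construction \ref{equiv.13} with the classical gluing of schemes, here is a standard illustration (with log structures pulled back from $B$; it is only a sanity check and is not used elsewhere). Take $I=\{1,2\}$, $\cX_1=\cX_2=h_{\mathbb{A}^1_B}$, and $\cU_{12}=\cU_{21}=h_{\mathbb{G}_{m,B}}$ with their open immersions into $\cX_1$ and $\cX_2$, and let $\varphi_{12}$ be induced by the inversion $t\mapsto t^{-1}$ on $\mathbb{G}_{m,B}$. With two indices, the isomorphisms $\psi_{ijk}$ are determined by the $\varphi_{ij}$, and the conditions (i)--(iii) hold. The resulting Zariski equivalence relation on $\cX:=h_{\mathbb{A}^1_B}\amalg h_{\mathbb{A}^1_B}$ identifies the two copies of $\mathbb{G}_{m,B}$ along the inversion, and the gluing should recover
\[
\cX/\cR \simeq h_{\mathbb{P}^1_B},
\]
as predicted by the proposition above applied to the representable Zariski cover $h_{\mathbb{A}^1_B}\amalg h_{\mathbb{A}^1_B}\to h_{\mathbb{P}^1_B}$.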
\begin{definition} Let $\{\cU_i\to \cX\}_{i\in I}$ be a family of representable open immersions in $\lSpc/B$ with finite $I$. The \emph{union $\cup_{i\in I} \cU_i$ of $\{\cU_i\}_{i\in I}$} is defined to be the gluing of $\{\cU_i\}_{i\in I}$ along $\{\cU_i\times_{\cX} \cU_j\}_{i,j\in I}$.
Observe that the induced morphism $\cU_a\to \cup_{i\in I} \cU_i$ is a representable open immersion for all $a\in I$ and the induced morphism $\amalg_{i\in I} \cU_i\to \cup_{i\in I} \cU_i$ is a representable Zariski cover. \end{definition}
\begin{proposition} Let $\{\cU_i\to \cX\}_{i\in I}$ be a family of representable open immersions in $\lSpc/B$ with finite $I$. Then the induced morphism $\cup_{i\in I} \cU_i\to \cX$ is a representable open immersion. \end{proposition} \begin{proof} There exists a representable Zariski cover $h_X\to \cX$ with $X\in \lFan/B$. Apply Lemma \ref{equiv.17} to the induced cartesian square \[ \begin{tikzcd} \cup_{i\in I} (\cU_i \times_{\cX} h_X)\ar[d]\ar[r]& h_X\ar[d] \\ \cup_{i\in I} \cU_i\ar[r]& \cX \end{tikzcd} \] to reduce to the case when $\cX=h_X$ with $X\in \lFan/B$.
Then by Lemma \ref{equiv.26}, there exists a commutative square \[ \begin{tikzcd} h_{U_i}\ar[r,"h_{u_i}"]\ar[d,"\simeq"']& h_{X_i}\ar[d,"h_{f_i}"',"\simeq"] \\ \cU_i\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $f_i$ is a dividing cover and $u_i$ is an open immersion in $\lFan/B$. Let $Y$ be the fiber product of all $X_i$ over $X$, and we set $V_i:=U_i\times_{X_i} Y$. The induced morphism $V_i\to Y$ is an open immersion for all $i$. Since $\cup_{i\in I} \cU_i\simeq h_{\cup_{i\in I} V_i}$ and $h_X\simeq h_Y$, we deduce that $\cup_{i\in I} \cU_i\to \cX$ is a representable open immersion. \end{proof}
\section{Properties of morphisms of divided log spaces} \label{property}
Given a morphism $f\colon \cY\to \cX$ and a representable Zariski cover $g\colon \cZ\to \cY$ such that $fg$ is a representable smooth morphism, it is natural to regard $f$ itself as a smooth morphism. However, it is unclear whether $f$ is a representable smooth morphism.
This is why we introduce the following class of morphisms, which may include non-representable ones.
\begin{definition} Let $\cP$ be a class of morphisms in $\lFan/B$ closed under pullbacks and compositions and satisfying (Div) and the following condition: \begin{itemize} \item[(Zarloc)] If $f\colon Y\to X$ is a morphism and $u\colon U\to X$ is a Zariski cover in $\lFan/B$, then $fu\in \cP$ implies $f\in \cP$. \end{itemize}
We say that a morphism $f\colon \cY\to \cX$ in $\lSpc/B$ is a \emph{$\cP$-morphism} if there exists a representable Zariski cover $u\colon \cU\to \cY$ such that $fu$ is a representable $\cP$-morphism. \end{definition}
\begin{example} By \cite[Theorem 0.2]{zbMATH06164842}, the classes of log smooth and log \'etale morphisms in $\lFan/B$ satisfy (Zarloc). This implies that the classes of exact log smooth and Kummer \'etale morphisms in $\lFan/B$ satisfy (Zarloc).
The classes of Zariski covers, strict Nisnevich covers, and strict \'etale covers also satisfy (Zarloc).
If $f\colon Y\to X$ is a morphism and $u\colon U\to X$ is a Zariski cover in $\lFan/B$ such that $fu$ is a monomorphism, then $u$ is a monomorphism. This implies that $u$ is an isomorphism, so $f$ belongs to the same class of morphisms as $fu$. Hence the classes of open immersions and strict closed immersions in $\lFan/B$ satisfy (Zarloc). \end{example}
\begin{proposition} \label{property.4} Let $\cP$ be a class of morphisms in $\lFan/B$ closed under pullbacks and compositions and satisfying \textup{(Div)} and \textup{(Zarloc)}. Then a morphism in $\lSpc/B$ is a representable $\cP$-morphism if and only if it is representable and a $\cP$-morphism. \end{proposition} \begin{proof} Any representable $\cP$-morphism in $\lSpc/B$ is obviously representable and a $\cP$-morphism.
For the converse, assume that $f\colon Y\to X$ is a morphism in $\lFan/B$ such that $h_f$ is a $\cP$-morphism. We need to show that $h_f$ is a representable $\cP$-morphism. There exists a representable Zariski cover $h_U\to h_Y$ with $U\in \lFan/B$ such that the composition $h_U\to h_X$ is a representable $\cP$-morphism. By Lemma \ref{equiv.27}, after replacing $U$ by its suitable dividing cover, we may assume that $h_U\to h_Y$ is equal to $h_g$ for some dividing Zariski cover $g\colon U\to Y$. Then apply Lemma \ref{equiv.27} to $h_{fg}$ to obtain a commutative square \[ \begin{tikzcd} h_{V}\ar[d,"h_{w'}","\simeq"']\ar[r,"h_{g'}"]& h_{X'}\ar[d,"h_w"',"\simeq"] \\ h_{U}\ar[r,"h_{fg}"]& h_X \end{tikzcd} \] with vertical isomorphisms such that $g'$ is a $\cP$-morphism and $w$ and $w'$ are dividing covers. By Proposition \ref{div.5}, there exists a dividing cover $v\colon V'\to V$ such that the square \[ \begin{tikzcd} V'\ar[r,"g'v"]\ar[d,"w'v"']& X'\ar[d,"w"] \\ U\ar[r,"fg"]& X \end{tikzcd} \] commutes. Proposition \ref{equiv.23} yields a dividing cover $Y'\to Y$ such that the projection $V'':=V'\times_Y Y'\to Y'$ is a Zariski cover. Since $\cP$ satisfies (Div), there exists a dividing cover $X''\to X'$ such that the projection $V''\times_X X''\simeq V''\times_{X'}X''\to X''$ is a $\cP$-morphism. The first arrow in \[ V''\times_{X} X'' \to Y'\times_X X'' \to X'' \] is a Zariski cover. Use (Zarloc) to see that the projection $Y'\times_X X''\to X''$ is a $\cP$-morphism. This implies that $h_f\colon h_{Y}\to h_X$ is a representable $\cP$-morphism. \end{proof}
\begin{proposition} \label{property.1} Let $\cP$ be a class of morphisms in $\lFan/B$ closed under compositions and pullbacks and satisfying \textup{(Div)} and \textup{(Zarloc)}. For every $\cP$-morphism $\cY\to \cX$ in $\lSpc/B$, there exists a commutative square \[ \begin{tikzcd} h_V\ar[r,"h_g"]\ar[d]& h_U\ar[d] \\ \cY\ar[r]& \cX \end{tikzcd} \] such that $h_U\to \cX$ and $h_V\to \cY\times_{\cX} h_U$ are representable Zariski covers and $g$ is a $\cP$-morphism in $\lFan/B$. \end{proposition} \begin{proof} Choose a representable Zariski cover $h_X\to \cX$ with $X\in \lFan/B$. There exists a representable Zariski cover $\cU\to \cY$ such that the composition $\cU\to \cX$ is a representable $\cP$-morphism. By Lemma \ref{equiv.26}, there exists a commutative square \[ \begin{tikzcd} h_V\ar[r,"h_{g}"]\ar[d,"\simeq"']& h_{U}\ar[d,"h_u"',"\simeq"] \\ \cU\times_{\cX} h_X\ar[r]& h_X \end{tikzcd} \] such that $g$ is a $\cP$-morphism and $u$ is a dividing cover. Since $\cU\to \cY$ is a representable Zariski cover, $\cU\times_{\cX} h_X\to \cY\times_{\cX} h_X$ is a representable Zariski cover. This means that $h_V\to \cY\times_{\cX} h_U$ is a representable Zariski cover. \end{proof}
\begin{proposition} \label{equiv.42} Let $\cP$ be a class of morphisms in $\lFan/B$ closed under compositions and pullbacks and satisfying \textup{(Div)} and \textup{(Zarloc)}. Then the class of $\cP$-morphisms in $\lSpc/B$ is closed under pullbacks and compositions. \end{proposition} \begin{proof} Let $\cY\to \cX$ be a $\cP$-morphism, and let $\cX'\to \cX$ be a morphism in $\lSpc/B$. There exists a representable Zariski cover $\cU\to \cY$ such that the composition $\cU\to \cX$ is a representable $\cP$-morphism. The pullback $\cU\times_{\cX} \cX'\to \cY\times_{\cX}\cX'$ is a representable Zariski cover, and the projection $\cU\times_{\cX} \cX'\to \cX'$ is a representable $\cP$-morphism. Hence the projection $\cY\times_{\cX}\cX'\to \cX'$ is a $\cP$-morphism.
Let $\cZ\to \cY\to \cX$ be $\cP$-morphisms in $\lSpc/B$. There exists a representable Zariski cover $\cU\to \cY$ such that the composition $\cU\to \cX$ is a representable $\cP$-morphism. By the above paragraph, the projection $\cZ\times_{\cY}\cU\to \cU$ is a $\cP$-morphism. There exists a representable Zariski cover $\cV\to \cZ\times_{\cY}\cU$ such that the composition $\cV\to \cU$ is a representable $\cP$-morphism. By Proposition \ref{equiv.15}, the composition $\cV\to \cX$ is a representable $\cP$-morphism, and the composition $\cV\to \cZ$ is a representable Zariski cover. Hence $\cZ\to \cX$ is a $\cP$-morphism. \end{proof}
\begin{proposition} Let $\cZ\xrightarrow{g}\cY\xrightarrow{f}\cX$ be morphisms in $\lSpc/B$. If $f$ is log \'etale and $fg$ is log \'etale (resp.\ log smooth), then $g$ is log \'etale (resp.\ log smooth). \end{proposition} \begin{proof} There exists a representable Zariski cover $\cU\to \cY$ such that the composition $\cU\to \cX$ is a representable log \'etale morphism in $\lSpc/B$. The composition $\cZ\times_{\cY}\cU\to \cX$ is log \'etale (resp.\ log smooth), so there exists a representable Zariski cover $\cV\to \cZ\times_{\cY}\cU$ such that the composition $\cV\to \cX$ is a representable log \'etale (resp.\ log smooth) morphism.
By Lemma \ref{equiv.18}, there exists a commutative diagram \[ \begin{tikzcd} h_V\ar[d]\ar[r,"h_v"]& h_U\ar[d]\ar[r,"h_u"]& h_X\ar[d] \\ \cV\ar[r]& \cU\ar[r]& \cX \end{tikzcd} \] with cartesian squares such that the vertical morphisms are representable Zariski covers, $u$ is a log \'etale morphism, and $v$ is a morphism in $\lFan/B$. The composition $h_V\to h_X$ is a representable log \'etale (resp.\ log smooth) morphism, so Lemma \ref{equiv.27} yields a dividing cover $v'\colon V'\to V$ and a log \'etale (resp.\ log smooth) morphism $p\colon V'\to X$ such that $h_{uvv'}=h_p$. By Proposition \ref{div.5}, we can replace $V'$ with its suitable dividing cover to assume that the diagram \[ \begin{tikzcd} V'\ar[d,"v'"']\ar[rrd,"p",bend left=20] \\ V\ar[r,"v"]& U\ar[r,"u"]& X \end{tikzcd} \] commutes. Owing to \cite[Remark IV.3.1.2]{Ogu}, the composition $V'\to U$ is log \'etale (resp.\ log smooth). Hence the composition $h_{V'}\to \cY$ is a representable log \'etale (resp.\ log smooth) morphism too. To conclude, observe that the composition $h_{V'}\to \cZ$ is a representable Zariski cover. \end{proof}
\begin{proposition} \label{property.3} Let $f\colon \cY\to \cX$ be a morphism in $\lSpc/B$. If $f$ is an open immersion (resp.\ strict closed immersion), then $f$ is a representable open immersion (resp.\ representable strict closed immersion). \end{proposition} \begin{proof} There exists a representable Zariski cover $g\colon \cU\to \cY$ such that $fg$ is a representable open immersion (resp.\ representable strict closed immersion). Proposition \ref{equiv.40} shows that $fg$ is a monomorphism in $\lSpc/B$, so $g$ is a monomorphism in $\lSpc/B$ too. By Proposition \ref{equiv.31}, $g$ is an isomorphism. Hence $f$ is a representable open immersion (resp.\ representable strict closed immersion). \end{proof}
\section{Topologies on divided log spaces} \label{topology}
In this section, we begin with introducing several topologies on $\lSpc/B$. Then we compare the categories of sheaves on $\lFan/B$ and $\lSpc/B$.
\begin{definition} \label{divtop.1} Consider the following classes of morphisms in $\lFan/B$:
\begin{center} \small
\begin{tabular}{|c|c|} \hline {\normalsize $\cP$} & {\normalsize exact $\cP$} \\ \hline dividing Zariski covers & Zariski covers \\ dividing Nisnevich covers & strict Nisnevich covers \\ dividing \'etale covers & strict \'etale covers \\ log \'etale covers & Kummer \'etale covers \\ \hline \end{tabular} \end{center} For each of the above cases, the smallest topology $t_{\cP}$ for which every exact $\cP$-morphism is a covering is called the \emph{Zariski}, \emph{strict Nisnevich}, \emph{strict \'etale}, or \emph{Kummer \'etale topology}, respectively. We also call these the \emph{dividing Zariski}, \emph{dividing Nisnevich}, \emph{dividing \'etale}, and \emph{log \'etale topologies}.
Let $\lSmSpc/B$ be the full subcategory of $\lSpc/B$ consisting of $\cX$ that is log smooth over $B$. \end{definition}
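As a guide to how these topologies compare: every Zariski cover is a strict Nisnevich cover, every strict Nisnevich cover is a strict \'etale cover, and every strict \'etale cover is a Kummer \'etale cover, so the generating families in Definition \ref{divtop.1} refine one another. Writing $t_{dZar}$, $t_{dNis}$, $t_{d\text{\'et}}$, and $t_{l\text{\'et}}$ as shorthand for the four topologies, one therefore expects the chain
\[
t_{dZar}
\subseteq
t_{dNis}
\subseteq
t_{d\text{\'et}}
\subseteq
t_{l\text{\'et}}
\]
of topologies on $\lSpc/B$; in particular, every log \'etale sheaf is a dividing \'etale sheaf, and so on down the chain.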
\begin{proposition} \label{divtop.2} For the above four cases of $\cP$, the topology $t_\cP$ on $\lSpc/B$ is the smallest topology such that all representable $\cP$-morphisms are covers. \end{proposition} \begin{proof} Immediate from the fact that every $\cP$-morphism admits a refinement that is a representable $\cP$-morphism. \end{proof}
Let $\varphi\colon \cC\to \cC'$ be a functor of sites. There is a functor \[ \varphi^*\colon \Shv(\cC') \to \Shv(\cC) \] such that $\varphi^*\cF(X):=\cF(\varphi(X))$ for $X\in \cC$ and $\cF\in \Shv(\cC')$. If $\varphi$ is a continuous functor of sites, then according to \cite[Proposition III.1.2]{SGA4}, $\varphi^*$ admits a left adjoint \[ \varphi_!\colon \Shv(\cC)\to \Shv(\cC'). \]
Consider the induced commutative diagram \begin{equation} \begin{tikzcd} \lSmFan/B\ar[r,"\beta"]\ar[d,hookrightarrow,"\alpha"']& \lSm/B\ar[d,hookrightarrow,"\alpha'"]\ar[r,"\gamma"]& \lSmSpc/B\ar[d,hookrightarrow,"\alpha''"] \\ \lFan/B\ar[r,"\beta'"]& \lSch/B\ar[r,"\gamma'"]& \lSpc/B. \end{tikzcd} \end{equation} Let $\cP$ be one of the four classes of morphisms in Definition \ref{divtop.1}. These functors are continuous functors of sites for the $t_\cP$-topology, and hence we have a commutative diagram \begin{equation} \begin{tikzcd} \Shv_{t_{\cP}}(\lSmFan/B)\ar[r,"\beta_!"]\ar[d,"\alpha_!"']& \Shv_{t_{\cP}}(\lSm/B)\ar[d,"\alpha_!'"]\ar[r,"\gamma_!"]& \Shv_{t_{\cP}}(\lSmSpc/B)\ar[d,"\alpha_!''"] \\ \Shv_{t_{\cP}}(\lFan/B)\ar[r,"\beta_!'"]& \Shv_{t_{\cP}}(\lSch/B)\ar[r,"\gamma_!'"]& \Shv_{t_{\cP}}(\lSpc/B). \end{tikzcd} \end{equation} Due to the implication (i)$\Rightarrow$(ii) in \cite[Th\'eor\`eme III.4.1]{SGA4}, $\beta_!$ and $\beta_!'$ are equivalences. Since $\alpha$, $\alpha'$, and $\alpha''$ are cocontinuous and fully faithful, \cite[Proposition III.2.6]{SGA4} shows that $\alpha_!$, $\alpha_!'$, and $\alpha_!''$ are fully faithful.
If $X\in \lFan/B$ and $h_Y\to h_X$ is a representable $t_\cP$-cover with $Y\in \lFan/B$, then Lemma \ref{equiv.26} yields a commutative square \[ \begin{tikzcd} h_{Y'}\ar[d,"\simeq"']\ar[r,"h_{f'}"]& h_{X'}\ar[d,"h_g"',"\simeq"] \\ h_Y\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $g$ is a dividing cover and $f'$ is a $t_\cP$-cover. The composition $Y'\to X$ is a $t_{\cP}$-cover and $h_{Y'}\to h_X$ refines $h_Y\to h_X$. This shows that $\gamma'\beta'$ is cocontinuous. We can similarly show that $\gamma\beta$ is cocontinuous.
\begin{proposition} \label{sheaves.1} Let $\cP$ be as above. The functors \begin{gather*} \gamma_!\colon \Shv_{t_{\cP}}(\lSm/B)\to \Shv_{t_{\cP}}(\lSmSpc/B), \\ \gamma_!'\colon \Shv_{t_{\cP}}(\lSch/B)\to \Shv_{t_{\cP}}(\lSpc/B) \end{gather*} are equivalences. \end{proposition} \begin{proof} By the above observation, we only need to show that $\gamma_!\beta_!$ and $\gamma_!'\beta_!'$ are equivalences. We focus on $\gamma_!'\beta_!'$ since the proofs are similar.
Let us check the conditions (1)--(5) in \cite[Tag 03A0]{Stacks} for $\gamma'\beta'$. We have checked the conditions (1) and (2) above. The conditions (3) and (4) are consequences of Proposition \ref{div.5}. For the condition (5), observe that every $\cX\in \lSpc/B$ admits a representable Zariski cover $h_X\to \cX$ with $X\in \lFan/B$, which is a covering by an object in the image of $\gamma'\beta'$.
Hence we have checked the conditions (1)--(5), and we deduce that $\gamma_!'\beta_!'$ is an equivalence. \end{proof}
\begin{definition} \label{divtop.3} A \emph{Zariski distinguished square in $\lSpc/B$} is a cartesian square in $\lSpc/B$ of the form \begin{equation} \label{divtop.3.1} \begin{tikzcd} \cW\ar[d,"g'"']\ar[r,"f'"]& \cV\ar[d,"g"] \\ \cU\ar[r,"f"]& \cX \end{tikzcd} \end{equation} such that $f$ and $g$ are representable open immersions and the induced morphism $\cU\amalg \cV\to \cX$ is a representable Zariski cover. The \emph{Zariski cd-structure on $\lSpc/B$} is the collection of Zariski distinguished squares.
By Proposition \ref{equiv.40}, $f$ and $g$ are monomorphisms. Observe that the square \[ \begin{tikzcd} \cW\ar[d,"\Delta"']\ar[r,"g"]& \cV\ar[d,"\Delta"] \\ \cW\times_{\cU} \cW\ar[r,"g'\times_g g'"]& \cV\times_{\cX} \cV \end{tikzcd} \] whose vertical morphisms are the diagonal morphisms is a Zariski distinguished square since the vertical morphisms are isomorphisms. The Zariski cd-structure on $\lSpc/B$ is complete and regular in the sense of \cite[Definitions 2.3, 2.10]{Voe10a}.
The topology associated with the Zariski cd-structure is defined to be the smallest Grothendieck topology containing $\cU\amalg \cV\to \cX$ as a covering for every distinguished square of the form \eqref{divtop.3.1}. \end{definition}
\begin{proposition} The Zariski topology on $\lSpc/B$ is the topology associated with the Zariski cd-structure on $\lSpc/B$. \end{proposition} \begin{proof} Let $\cY\to \cX$ be a Zariski cover in $\lSpc/B$. There exists a representable Zariski cover $\cU\to \cY$ such that the composition $\cU\to \cX$ is a representable Zariski cover. By Proposition \ref{equiv.19}, we may assume $\cU\simeq \amalg_{i\in I} h_{U_i}$ with finite $I$ such that each morphism $h_{U_i}\to \cX$ is a representable open immersion, where $U_i\in \lFan/B$ for all $i\in I$.
Let $t$ be the topology associated with the Zariski cd-structure. The sieve generated by $\{h_{U_a}\to \cX,\,\cup_{i\in I-\{a\}} h_{U_i}\to \cX\}$ is a $t$-covering sieve for all $a\in I$. By induction on the number of elements of $I$, we see that the sieve generated by $\{h_{U_i}\to \cX\}_{i\in I}$ is a $t$-covering sieve. It follows that the sieve generated by $\cY\to \cX$ is a $t$-covering sieve. \end{proof}
\section{Open complements} \label{complement}
In this section, we define the open complement of a closed immersion of divided log spaces via a universal property. We also show that open complements always exist and are compatible with pullbacks.
\begin{definition} \label{comp.1} For a closed immersion $\cZ\to \cX$ in $\lSpc/B$, the \emph{open complement of $\cZ$ in $\cX$}, denoted $\cX-\cZ$, is defined to be a final object (if it exists) of the full subcategory of $\lSpc/\cX$ consisting of morphisms $\cY\to \cX$ such that $\cZ\times_{\cX}\cY=\emptyset$. \end{definition}
\begin{lemma} \label{comp.2} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$, and let $\cX'\to \cX$ be a morphism in $\lSpc/B$. We set $\cZ':=\cZ\times_{\cX} \cX'$. If $\cX-\cZ$ exists, then $\cX'-\cZ'$ exists, and there is an isomorphism \[ \cX'-\cZ' \simeq (\cX-\cZ)\times_{\cX}\cX'. \] \end{lemma} \begin{proof} Suppose that $\cY\to \cX'$ is a morphism in $\lSpc/B$ such that $\cZ'\times_{\cX'}\cY=\emptyset$. Then $\cZ\times_{\cX}\cY=\emptyset$, so there exists a unique morphism $\cY\to \cX-\cZ$ over $\cX$. This means that there exists a unique morphism $\cY\to (\cX-\cZ)\times_{\cX}\cX'$ over $\cX'$, which completes the proof. \end{proof}
\begin{lemma} \label{comp.3} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$, and let $\amalg_{i\in I} \cX_i\to \cX$ be a representable Zariski cover with finite $I$ such that each $\cX_i\to \cX$ is a representable open immersion. We set $\cX_{ij}:=\cX_i\times_{\cX}\cX_j$, $\cZ_i:=\cZ\times_{\cX} \cX_i$, and $\cZ_{ij}:=\cZ\times_{\cX} \cX_{ij}$ for $i,j\in I$. If $\cX_i-\cZ_i$ and $\cX_{ij}-\cZ_{ij}$ exist for all $i,j\in I$, then $\cX-\cZ$ exists. \end{lemma} \begin{proof} By Lemma \ref{comp.2}, we can glue $\{\cX_i-\cZ_i\}_{i\in I}$ using Construction \ref{equiv.13}, and let $\cV$ be the resulting divided log space over $\cX$. For every $\cY\in \lSpc/\cX$, $\cZ\times_{\cX} \cY=\emptyset$ if and only if $\cZ_i\times_{\cX_i}\cY=\emptyset$ for all $i\in I$. Together with the isomorphism \[ \Hom_{\cX}(\cY,\cV) \simeq \mathrm{Eq}\Big( \prod_{i\in I}\Hom_{\cX_i}(\cY\times_{\cX} \cX_i,\cV\times_{\cX}\cX_i)\rightrightarrows \prod_{i,j\in I}\Hom_{\cX_{ij}}(\cY\times_{\cX} \cX_{ij},\cV\times_{\cX}\cX_{ij})\Big), \] we deduce $\Hom_{\cX}(\cY,\cV)\simeq *$ whenever $\cZ\times_{\cX}\cY=\emptyset$. \end{proof}
\begin{lemma} \label{comp.4} Let $Z\to X$ be a strict closed immersion in $\lFan/B$. Then $h_X-h_Z$ exists, and there is an isomorphism \[ h_X-h_Z \simeq h_{X-Z}. \] \end{lemma} \begin{proof} Suppose $\cY\in \lSpc/h_X$ and $h_Z\times_{h_X}\cY=\emptyset$. Then there is a dividing Zariski cover $h_Y\to \cY$ such that the composition $h_Y\to h_X$ is equal to $h_f$ for some morphism $f$ in $\lFan/B$. We have $Z\times_X Y=\emptyset$. Hence there exists a unique morphism $u\colon Y\to X-Z$ over $X$.
Suppose that $v\colon h_Y\to h_{X-Z}$ is a morphism over $h_X$. Then there exists a dividing cover $p\colon Y'\to Y$ such that the composite morphism $h_{Y'}\xrightarrow{vh_p} h_{X-Z}$ is equal to $h_w$ for some morphism $w\colon Y'\to X-Z$ over $X$. By the universal property of open complements, we have $w=up$. Hence we have $v=h_u$, i.e., $\Hom_{h_X}(h_Y,h_{X-Z})\simeq *$.
Proposition \ref{equiv.20} shows that $h_Y\times_{\cY} h_Y$ is representable. Using this, we can similarly show $\Hom_{h_X}(h_Y\times_{\cY} h_Y,h_{X-Z})\simeq *$. Together with the isomorphism \[ \Hom_{h_X}(\cY,h_{X-Z}) \simeq \mathrm{Eq}(\Hom_{h_X}(h_Y,h_{X-Z})\rightrightarrows \Hom_{h_X}(h_Y\times_{\cY} h_Y,h_{X-Z})), \] we obtain $\Hom_{h_X}(\cY,h_{X-Z})\simeq *$. To conclude, observe $h_Z\times_{h_X}h_{X-Z}=\emptyset$. \end{proof}
\begin{lemma} \label{comp.6} Let $h_Z\to h_X$ be a closed immersion in $\lSpc/B$, where $X,Z\in \lFan/B$. Then $h_X-h_Z$ exists. \end{lemma} \begin{proof} There exists a commutative square \[ \begin{tikzcd} h_{Z'}\ar[r,"f'"]\ar[d,"\simeq"']& h_{X'}\ar[d,"\simeq"] \\ h_Z\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $f'$ is a strict closed immersion in $\lFan/B$. Apply Lemma \ref{comp.4} to $f'$ to conclude. \end{proof}
\begin{theorem} \label{comp.5} Let $i\colon \cZ\to \cX$ be a closed immersion in $\lSpc/B$. Then $\cX-\cZ$ exists. Furthermore, the induced morphism $\cX-\cZ\to \cX$ is an open immersion. \end{theorem} \begin{proof} By Proposition \ref{equiv.19}, there exists a representable Zariski cover $\amalg_{i\in I}h_{X_i}\to \cX$ with finite $I$ and $X_i\in \lFan/B$ such that each $h_{X_i}\to \cX$ is a representable open immersion. We set $\cX_i:=h_{X_i}$. Proposition \ref{equiv.20} shows that $\cX_{ij}:=\cX_i\times_{\cX} \cX_j$ is representable for all $i,j\in I$. By Lemma \ref{comp.6}, $\cX_i-\cZ\times_{\cX}\cX_i$ and $\cX_{ij}-\cZ\times_{\cX}\cX_{ij}$ exist for all $i,j\in I$. Together with Lemma \ref{comp.3}, we deduce that $\cX-\cZ$ exists.
Apply Lemma \ref{equiv.17} to the induced cartesian square \[ \begin{tikzcd} \amalg_{i\in I}(\cX-\cZ)\times_{\cX}\cX_i\ar[d]\ar[r]& \amalg_{i\in I} \cX_i\ar[d] \\ \cX-\cZ\ar[r]& \cX \end{tikzcd} \] to show that $\cX-\cZ\to \cX$ is a representable open immersion, i.e., an open immersion. \end{proof}
\section{Blow-ups along closed immersions} \label{blow-up}
As in the previous section, we define blow-ups by a universal property. We also show that blow-ups exist in the log smooth case and are compatible with log smooth pullbacks.
\begin{definition} For $X\in \lFan/B$, a strict closed subscheme $Z$ of $X$ is called an \emph{effective log Cartier divisor on $X$} if $\ul{Z\times_X X'}$ is an effective Cartier divisor on $\ul{X'}$ for every log smooth morphism $X'\to X$. \end{definition}
We refer to Definition \ref{blow.10} for the notion of the blow-up $\Blow_Z X$ for a strict closed immersion $Z\to X$ in $\lSch/B$.
\begin{lemma} \label{blow.20} Let $i\colon Z\to X$ be a strict closed immersion in $\lSm/S$, where $S$ is an fs log scheme. If $\ul{Z}$ is an effective Cartier divisor on $\ul{X}$, then $Z$ is an effective log Cartier divisor on $X$. \end{lemma} \begin{proof} The projection $\Blow_Z X\to X$ is an isomorphism. Hence by Lemma \ref{blow.1}, the projection \[ \Blow_{Z'} X'\to X' \] is an isomorphism for every log smooth morphism $X'\to X$ of fs log schemes, where $Z':=Z\times_X X'$. It follows that $\ul{Z'}$ is an effective Cartier divisor on $\ul{X'}$. \end{proof}
\begin{definition} \label{blow.5} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$. We say that $\cZ$ is an \emph{effective log Cartier divisor on $\cX$} if there exists a cartesian square \begin{equation} \label{blow.5.1} \begin{tikzcd} h_{Z}\ar[r,"h_i"]\ar[d]& h_{X}\ar[d] \\ \cZ \ar[r]& \cX \end{tikzcd} \end{equation} such that the vertical morphisms are representable Zariski covers and the morphism $i\colon Z\to X$ in $\lFan/B$ exhibits $Z$ as an effective log Cartier divisor on $X$. \end{definition}
\begin{lemma} \label{blow.19} Let $\cZ$ be an effective log Cartier divisor on $\cX$. Then for every log smooth morphism $\cX'\to \cX$ in $\lSpc/B$, $\cZ\times_{\cX} \cX'$ is an effective log Cartier divisor on $\cX'$. \end{lemma} \begin{proof} We have a cartesian square of the form \eqref{blow.5.1}. Proposition \ref{property.1} yields a commutative square \[ \begin{tikzcd} h_{V}\ar[r,"h_g"]\ar[d]& h_{U}\ar[d] \\ \cX'\times_{\cX}h_X\ar[r]& h_X \end{tikzcd} \] such that $g$ is a log smooth morphism in $\lFan/B$ and the vertical morphisms are representable Zariski covers. By Lemma \ref{equiv.27}, we can replace $U$ with a suitable dividing cover and $V$ by the corresponding pullback to assume that $h_U\to h_X$ is equal to $h_u$ for some dividing Zariski cover $u$. We have a commutative square \[ \begin{tikzcd} h_{Z\times_X V}\ar[d]\ar[r,"h_{i'}"]& h_{V}\ar[d] \\ \cZ\times_{\cX} \cX'\ar[r]& \cX' \end{tikzcd} \] such that the vertical morphisms are representable Zariski covers, where $i'$ is the projection. To conclude, observe that $Z\times_X V$ is an effective log Cartier divisor on $V$ since the composition $V\to X$ is log smooth. \end{proof}
\begin{lemma} \label{blow.26} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$. If there exists a representable Zariski cover $\cX'\to \cX$ such that $\cZ\times_{\cX}\cX'$ is an effective log Cartier divisor on $\cX'$, then $\cZ$ is an effective log Cartier divisor on $\cX$. \end{lemma} \begin{proof} There exists a cartesian square \[ \begin{tikzcd} h_{Z'}\ar[r,"h_{i'}"]\ar[d]& h_{X'}\ar[d] \\ \cZ\times_{\cX} \cX'\ar[r]& \cX' \end{tikzcd} \] such that the vertical morphisms are representable Zariski covers and $i'$ exhibits $Z'$ as an effective log Cartier divisor on $X'$. Hence we obtain a cartesian square \[ \begin{tikzcd} h_{Z'}\ar[r,"h_{i'}"]\ar[d]& h_{X'}\ar[d] \\ \cZ\ar[r]& \cX \end{tikzcd} \] such that the vertical morphisms are representable Zariski covers. \end{proof}
\begin{definition} For a closed immersion $\cZ\to \cX$ in $\lSpc/B$, the \emph{blow-up of $\cX$ along $\cZ$}, denoted $\Blow_{\cZ}\cX$, is defined to be a final object (if it exists) of the full subcategory of $\lSpc/\cX$ consisting of $\cY\to \cX$ such that $\cZ\times_{\cX} \cY$ is an effective log Cartier divisor on $\cY$.
Suppose that $\cX'\to \cX$ is a morphism in $\lSpc/B$ with $\cZ':=\cX'\times_{\cX} \cZ$. Then $\Blow_{\cZ'}\cX'\times_{\cX'}\cZ'\simeq \Blow_{\cZ'}\cX'\times_{\cX} \cZ$ is an effective log Cartier divisor on $\Blow_{\cZ'}\cX'$, so there is a canonical morphism \begin{equation} \Blow_{\cZ'}\cX'\to \Blow_{\cZ}\cX \end{equation} whenever the two blow-ups exist. \end{definition}
\begin{lemma} \label{blow.17} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$, and let $\cX'\to \cX$ be a log smooth morphism in $\lSpc/B$. We set $\cZ':=\cZ\times_{\cX} \cX'$. If $\Blow_{\cZ}\cX$ exists, then $\Blow_{\cZ'}\cX'$ exists, and there is an isomorphism \[ \Blow_{\cZ'}\cX' \simeq \Blow_{\cZ}\cX\times_{\cX}\cX'. \] \end{lemma} \begin{proof} Apply Lemma \ref{blow.19} to $\Blow_{\cZ}\cX\times_{\cX} \cX'\to \Blow_{\cZ} \cX$ to show that $\Blow_{\cZ}\cX\times_{\cX} \cZ'$ is an effective log Cartier divisor on $\Blow_{\cZ}\cX\times_{\cX}\cX'$. For every $\cY\in \lSpc/\cX'$, there is an isomorphism $\cZ\times_{\cX} \cY\simeq \cZ'\times_{\cX'}\cY$. Use these to show that $\Blow_{\cZ}\cX\times_{\cX}\cX'$ is a final object of the full subcategory of $\lSpc/\cX'$ consisting of $\cY\to \cX'$ such that $\cZ'\times_{\cX'}\cY$ is an effective log Cartier divisor on $\cY$. \end{proof}
\begin{lemma} \label{blow.16} Let $\cZ\to \cX$ be a closed immersion in $\lSpc/B$, and let $\amalg_{i\in I} \cX_i \to \cX$ be a representable Zariski cover with finite $I$ such that each $\cX_i\to \cX$ is a representable open immersion. We set $\cX_{ij}:=\cX_i\times_{\cX} \cX_{j}$, $\cZ_i:=\cZ\times_{\cX} \cX_i$, and $\cZ_{ij}:=\cZ\times_{\cX} \cX_{ij}$ for all $i,j\in I$. If $\Blow_{\cZ_i}\cX_i$ and $\Blow_{\cZ_{ij}}\cX_{ij}$ exist for all $i,j\in I$, then $\Blow_{\cZ}\cX$ exists. \end{lemma} \begin{proof} By Lemma \ref{blow.17}, we can glue $\{\Blow_{\cZ_i}\cX_i\}$ using Construction \ref{equiv.13}, and let $\cV$ be the resulting divided log space over $\cX$. For every $\cY\in \lSpc/\cX$, $\cZ\times_{\cX} \cY$ is an effective log Cartier divisor on $\cY$ if and only if $\cZ_i\times_{\cX}\cY$ is an effective log Cartier divisor on $\cX_i\times_{\cX}\cY$ for all $i\in I$ by Lemma \ref{blow.26}. Together with the isomorphism \[ \Hom_{\cX}(\cY,\cV) \simeq \mathrm{Eq}\Big( \prod_{i\in I}\Hom_{\cX_i}(\cY\times_{\cX} \cX_i,\cV\times_{\cX}\cX_i)\rightrightarrows \prod_{i,j\in I}\Hom_{\cX_{ij}}(\cY\times_{\cX} \cX_{ij},\cV\times_{\cX}\cX_{ij})\Big), \] we deduce $\Hom_{\cX}(\cY,\cV)\simeq *$ whenever $\cZ\times_{\cX}\cY$ is an effective log Cartier divisor on $\cY$. To conclude, observe that $\cZ\times_{\cX} \cV$ is an effective log Cartier divisor on $\cV$ by Lemma \ref{blow.26}. \end{proof}
\begin{lemma} \label{blow.22} Let $i\colon Z\to X$ be a strict closed immersion in $\lFan/B$. If $h_Z$ is an effective log Cartier divisor on $h_X$, then there exists a dividing Zariski cover $Y\to X$ such that $Z\times_X Y$ is an effective log Cartier divisor on $Y$. \end{lemma} \begin{proof} There exists a cartesian square \[ \begin{tikzcd} h_{Z'}\ar[r,"h_{i'}"]\ar[d]& h_{X'}\ar[d] \\ h_Z\ar[r,"h_i"]& h_X \end{tikzcd} \] such that the vertical morphisms are representable Zariski covers and $i'$ exhibits $Z'$ as an effective log Cartier divisor on $X'$. By Lemma \ref{equiv.24}, we can replace $X'$ with a suitable dividing cover and $Z'$ by the corresponding pullback to assume that $h_{X'}\to h_X$ is equal to $h_u$ for some morphism $u\colon X'\to X$ in $\lFan/B$.
There is an isomorphism $q\colon h_{Z'} \xrightarrow{\simeq} h_{Z\times_X X'}$. Apply Lemma \ref{equiv.28} to $q$ to obtain a dividing cover $v\colon V\to Z'$ and a morphism $r\colon V\to Z\times_X X'$ in $\lFan/B$ such that $qh_v=h_{r}$. By Lemma \ref{equiv.24}, we obtain a dividing cover $v'\colon V'\to V$ and a morphism $s\colon V'\to Z$ such that the composition \[ h_{V'}\xrightarrow{h_{v'}}h_V\xrightarrow{h_v}h_{Z'}\to h_Z \] is equal to $h_s$. The square \[ \begin{tikzcd} h_{V'}\ar[r,"h_{i'vv'}"]\ar[d,"h_s"']& h_{X'}\ar[d,"h_u"] \\ h_Z\ar[r,"h_i"]& h_X \end{tikzcd} \] commutes. Use Proposition \ref{div.5} to see that the square \[ \begin{tikzcd} V'\ar[r,"i'vv'"]\ar[d,"s"']& X'\ar[d,"u"] \\ Z\ar[r,"i"]& X \end{tikzcd} \] commutes after replacing $V'$ by a suitable dividing cover.
By Proposition \ref{equiv.23}, there exists a dividing cover $X_1'\to X'$ (resp.\ $X_2'\to X'$) such that the pullback $V'\times_{X'} X_1'\to Z'\times_{X'} X_1'$ (resp.\ $V'\times_{X'}X_2'\to (Z\times_{X} X')\times_{X'}X_2'$) is an isomorphism. Take $Y:=X_1'\times_X X_2'$. Then $Z\times_X Y$ is an effective log Cartier divisor on $Y$ since the closed immersion $Z\times_X Y\to Y$ is a pullback of $i'\colon Z'\to X'$ along the log smooth morphism $Y\to X'$. \end{proof}
\begin{lemma} \label{blow.6} Let $i\colon Z\to X$ be a strict closed immersion in $\lSmFan/S$, where $S\in \lFan/B$. Then $\Blow_{h_Z}h_X$ exists, and there is an isomorphism \[ \Blow_{h_Z}h_X \simeq h_{\Blow_Z X}. \] \end{lemma} \begin{proof} Suppose that $\cY\in \lSpc/h_X$ and $h_Z\times_{h_X}\cY$ is an effective log Cartier divisor on $\cY$. Choose a dividing Zariski cover $h_Y\to \cY$ with $Y\in \lFan/B$. By Lemma \ref{equiv.24}, after replacing $Y$ by a suitable dividing cover, the composition $h_Y\to \cY\to h_X$ is equal to $h_f$ for some morphism $f\colon Y\to X$ in $\lFan/B$. Since $h_Y\to \cY$ is log smooth, Lemma \ref{blow.19} shows that $h_{Z\times_X Y}$ is an effective log Cartier divisor on $h_Y$. Hence by Lemma \ref{blow.22}, after replacing $Y$ by a suitable dividing Zariski cover, $Z\times_X Y$ is an effective log Cartier divisor on $Y$.
Since $\ul{Z\times_X Y}$ is an effective Cartier divisor on $\ul{Y}$, there exists a unique morphism $\ul{Y}\to \ul{\Blow_Z X}$ over $\ul{X}$ by the universal property of blow-ups. Together with $\Blow_Z X\simeq \ul{\Blow_Z X}\times_{\ul{X}} X$, we deduce that there exists a unique morphism $u\colon Y\to \Blow_Z X$ over $X$.
Suppose that $v\colon h_Y\to h_{\Blow_Z X}$ is a morphism over $h_X$. By Lemma \ref{equiv.24}, there exists a dividing cover $p\colon Y'\to Y$ such that the composite morphism $h_{Y'}\xrightarrow{vh_p} h_{\Blow_Z X}$ is equal to $h_w$ for some morphism $w\colon Y'\to \Blow_Z X$ in $\lFan/B$. The universal property of blow-ups shows $w=up$. Hence we have $v=h_u$, i.e., $\Hom_{h_X}(h_Y,h_{\Blow_Z X})\simeq *$.
By Proposition \ref{equiv.20}, $h_Y\times_{\cY} h_Y$ is representable. Using this, we can similarly show $\Hom_{h_X}(h_Y\times_{\cY} h_Y,h_{\Blow_Z X})\simeq *$. Together with the isomorphism \[ \Hom_{h_X}(\cY,h_{\Blow_Z X}) \simeq \mathrm{Eq}(\Hom_{h_X}(h_Y,h_{\Blow_Z X})\rightrightarrows \Hom_{h_X}(h_Y\times_{\cY} h_Y,h_{\Blow_Z X})), \] we obtain $\Hom_{h_X}(\cY,h_{\Blow_Z X})\simeq *$. To conclude, observe that $\Blow_Z X\times_X Z$ is an effective log Cartier divisor on $\Blow_Z X$ by Lemmas \ref{blow.2} and \ref{blow.20}. \end{proof}
\begin{lemma} \label{blow.21} Let $f\colon X\to S$ be a log smooth morphism in $\lFan/B$. If $\cZ\to h_X$ is a closed immersion such that the composition $\cZ\to h_S$ is a log smooth morphism in $\lSpc/B$, then there exists a cartesian square \[ \begin{tikzcd} h_W\ar[r,"h_a"]\ar[d]& h_V\ar[d,"h_v"] \\ \cZ\ar[r]& h_X \end{tikzcd} \] such that $a$ is a strict closed immersion, $fva$ is log smooth, and $v$ is a dividing cover. \end{lemma} \begin{proof} By Lemma \ref{equiv.26}, there exists a commutative square \[ \begin{tikzcd} h_{Z}\ar[r,"h_i"]\ar[d,"\simeq"']& h_{X'}\ar[d,"h_u"',"\simeq"] \\ \cZ\ar[r]& h_X \end{tikzcd} \] with vertical isomorphisms such that $i$ is a strict closed immersion and $u$ is a dividing cover. Apply Lemma \ref{equiv.27} and Proposition \ref{property.4} to the log smooth morphism $h_Z\to h_S$ to obtain a dividing cover $z\colon Z'\to Z$ such that the composition $h_{Z'}\to h_S$ is equal to $h_g$ for some log smooth morphism $g\colon Z'\to S$ in $\lFan/B$. Use Proposition \ref{div.5} to obtain a dividing cover $z'\colon Z''\to Z'$ such that the two compositions \[ h_{Z''}\xrightarrow{h_{z'}} h_{Z'}\xrightarrow{h_z}h_Z\xrightarrow{h_{iuf}}h_S \text{ and } h_{Z''}\xrightarrow{h_{z'}}h_{Z'}\xrightarrow{h_g}h_S \] are equal. The composition $Z''\to X'$ is a monomorphism, so Proposition \ref{equiv.23} yields a dividing cover $X''\to X'$ such that the projection $Z''\times_{X'} X''\to X''$ is strict. Since the projection $Z\times_{X'}X''\to X''$ is strict, the induced morphism $Z''\times_{X'}X''\to Z\times_{X'}X''$ is a strict dividing cover, i.e., an isomorphism by Proposition \ref{equiv.11}(3). The composition $Z''\times_{X'}X''\to S$ is log smooth, so the composition $Z\times_{X'}X''\to S$ is log smooth too. Take $V:=X''$ and $W:=Z\times_{X'}X''$ to conclude. \end{proof}
\begin{theorem} \label{blow.7} Let $\cZ\to \cX$ be a closed immersion in $\lSmSpc/\cS$, where $\cS\in \lSpc/B$. Then $\Blow_{\cZ}\cX$ exists, and $\Blow_{\cZ}\cX\in \lSmSpc/\cS$. \end{theorem} \begin{proof} By Proposition \ref{property.1}, there exists a commutative square \[ \begin{tikzcd} h_X\ar[d]\ar[r,"h_f"]& h_S\ar[d] \\ \cX\ar[r]& \cS \end{tikzcd} \] such that $f$ is a log smooth morphism in $\lFan/B$ and the vertical morphisms are representable Zariski covers. After replacing $X$ by its suitable dividing cover, by Lemma \ref{blow.21}, we may assume that there exists a cartesian square \[ \begin{tikzcd} h_Z\ar[r,"h_i"]\ar[d]& h_X\ar[d] \\ \cZ\ar[r]& \cX \end{tikzcd} \] such that $i$ is a strict closed immersion in $\lFan/B$ and $fi$ is log smooth.
By Proposition \ref{equiv.19}, we may further assume that $X\simeq \amalg_{i\in I} X_i$ with finite $I$ such that each $h_{X_i}\to \cX$ is a representable open immersion. For all $i,j\in I$, there exists a commutative square \[ \begin{tikzcd} h_{U_{ij}}\ar[r,"u_{ij}"]\ar[d,"\simeq"']& h_{X_i}\ar[d,"\simeq"] \\ \cX_i\times_{\cX}\cX_j\ar[r]& \cX_i \end{tikzcd} \] with vertical isomorphisms such that $u_{ij}$ is a representable open immersion.
Since $X_i,Z\times_X X_i,U_{ij},Z\times_X U_{ij}\in \lSm/S$, Lemma \ref{blow.6} shows that $\Blow_{h_{Z\times_X X_i}}h_{X_i}$ and $\Blow_{h_{Z\times_X U_{ij}}}h_{U_{ij}}$ exist. Lemma \ref{blow.16} finishes the proof. \end{proof}
Let $\boxx$ be the fs log scheme whose underlying scheme is $\P^1$ and whose log structure is the compactifying log structure associated with the open immersion $\A^1\simeq \P^1-\{\infty\}\to \P^1$.
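The following explicit description of $\boxx$ is a routine unwinding of the definition, recorded here for orientation: on the two standard affine charts of $\P^1$,
\[ \boxx|_{\P^1-\{\infty\}} \simeq \A^1 \text{ with the trivial log structure}, \qquad \boxx|_{\P^1-\{0\}} \simeq \A^1 \text{ with the divisorial log structure at } \infty. \]
In particular, the point $\{0\}$ used in the deformation space below lies in the locus where the log structure of $\boxx$ is trivial.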
\begin{definition} \label{blow.15} Let $i\colon \cZ\to \cX$ be a closed immersion in $\lSmSpc/\cS$, where $\cS\in \lSpc/B$. The \emph{deformation space associated with $i$} is defined to be \[ \Deform_{\cZ}\cX := \Blow_{\cZ\times \{0\}}(\cX\times \boxx)-\Blow_{\cZ\times \{0\}}(\cX\times \{0\}). \] The \emph{normal bundle of $\cZ$ in $\cX$} is defined to be \[ \Normal_{\cZ}\cX := \Deform_{\cZ}\cX \times_{\boxx}\{0\}. \] These exist by Theorems \ref{comp.5} and \ref{blow.7}. \end{definition}
Suppose that $Z\to X$ is a strict closed immersion in $\sSm/S$ with $S\in \lSch/B$, where $\sSm$ denotes the class of strict smooth morphisms. Let $\Normal_Z X:=\Normal_{\ul{Z}}\ul{X}\times_{\ul{X}}X$ denote the normal bundle of $Z$ in $X$. As explained in \cite[Definition 7.5.1]{logDM}, there is an isomorphism \begin{equation} \Deform_Z X\times_{\boxx}\{0\} \simeq \Normal_Z X, \end{equation} where $\Deform_Z X:=\Blow_{Z\times \{0\}}(X\times \boxx)-\Blow_{Z\times \{0\}}(X\times \{0\})$. Hence we have an isomorphism \begin{equation} \label{blow.15.1} \Normal_{h_Z}h_X \simeq h_{\Normal_Z X} \end{equation} by Lemmas \ref{comp.4} and \ref{blow.6} since showing \eqref{blow.15.1} is Zariski local on $X$ and $Z$.
\begin{remark} For a closed immersion of schemes $Z\to X$, Verdier \cite{zbMATH03522129} used $\A^1$ to define a deformation space, while Fulton \cite{zbMATH01027930} used $\P^1$. We use $\boxx$ since this choice is suitable for log motivic homotopy theory, see e.g.\ \cite[Theorem 7.5.4]{logDM}. \end{remark}
\begin{proposition} \label{blow.28} Let $i\colon Z\to X$ be a closed immersion in $\lSm/S$ with $S\in \lSch/B$. Then the induced morphism \begin{equation} \label{blow.28.1} i^*\Omega_{X/S}^1\to \Omega_{Z/S}^1 \end{equation} is surjective, and its kernel is a locally free $\cO_Z$-module. Furthermore, if $V$ is the vector bundle over $Z$ associated with the dual of the kernel of \eqref{blow.28.1}, then there is an isomorphism \begin{equation} \label{blow.28.2} \Normal_{h_Z}h_X \simeq h_V. \end{equation} \end{proposition} \begin{proof} The question is strict \'etale local on $X$ and $Z$. Hence by \cite[Proposition III.2.3.5]{Ogu}, we may assume that $i$ admits a factorization $Z\xrightarrow{i'}X'\xrightarrow{u} X$ such that $i'$ is a strict closed immersion and $u$ is a log \'etale monomorphism. Then \eqref{blow.28.1} is isomorphic to the induced morphism \[ i'^*\Omega_{X'/S}^1 \to \Omega_{Z/S}^1. \] Together with \cite[Theorem IV.3.2.2]{Ogu}, we see that \eqref{blow.28.1} is surjective and its kernel is a locally free $\cO_Z$-module.
To show \eqref{blow.28.2}, it suffices to show \[ \Normal_{h_Z}h_{X'} \simeq h_V \] since $h_u\colon h_{X'}\to h_X$ is an open immersion. Hence we can replace $Z\xrightarrow{i} X$ by $Z\xrightarrow{i'} X'$, so we may assume that $i$ is a strict closed immersion.
Recall that the question is strict \'etale local on $X$. By Proposition \ref{blow.3}, we may assume that there exists a strict smooth morphism $X\to Y$ in $\lSm/S$ such that the composition $Z\to Y$ is strict smooth too. We finish the proof by applying \eqref{blow.15.1} to the strict closed immersion $Z\to X$ in $\sSm/S$. \end{proof}
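As a minimal sanity check of \eqref{blow.28.1} and \eqref{blow.28.2} (with trivial log structures, so that $\Omega^1$ is the usual sheaf of K\"ahler differentials), take $S=\Spec(k)$, $X=\A^2=\Spec(k[x,y])$, and $Z=\{y=0\}\simeq \A^1$. Then \eqref{blow.28.1} is the map
\[ i^*\Omega_{X/S}^1 \simeq \cO_Z\,dx\oplus \cO_Z\,dy \to \cO_Z\,dx \simeq \Omega_{Z/S}^1, \qquad dx\mapsto dx,\quad dy\mapsto 0, \]
whose kernel is the free rank-one module $\cO_Z\,dy$. The associated vector bundle is $V\simeq Z\times \A^1$, recovering the usual normal bundle of $Z$ in $X$.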
\begin{lemma} \label{blow.24} Let $W\to Z\to X$ be closed immersions of schemes. Then there is a cartesian square \begin{equation} \label{blow.24.1} \begin{tikzcd} \Deform_W Z\ar[d]\ar[r]& \Deform_W X\ar[d] \\ Z\times \boxx\ar[r]& \Deform_Z X. \end{tikzcd} \end{equation} \end{lemma} \begin{proof} The question is Zariski local on $X$, so we reduce to the case when $X=\Spec(A)$, $Z=\Spec(A/I)$, and $W=\Spec(A/J)$, where $I\subset J$ are ideals of $A$. According to \cite[Section 5.1]{zbMATH01027930}, we have explicit descriptions \begin{gather*} \Blow_{Z\times \{0\}}(X\times \A^1)-\Blow_{Z\times \{0\}} (X\times \{0\}) \simeq \Spec\big(\bigoplus_{n\in \Z} I^{-n} t^n\big), \\ \Blow_{W\times \{0\}}(X\times \A^1)-\Blow_{W\times \{0\}}(X\times \{0\}) \simeq \Spec\big(\bigoplus_{n\in \Z} J^{-n} t^n\big), \end{gather*} where $t$ is an indeterminate and $I^{n},J^{n}:=A$ for all integers $n\leq 0$. The closed subscheme $Z\times \A^1$ of $\Blow_{Z\times \{0\}}(X\times \A^1)-\Blow_{Z\times \{0\}} (X\times \{0\})$ is given by the ideal generated by $It^{-1}$. Hence we see that \[ (Z\times \A^1) \times_{\Blow_{Z\times \{0\}}(X\times \A^1)-\Blow_{Z\times \{0\}} (X\times \{0\})}(\Blow_{W\times \{0\}}(X\times \A^1)-\Blow_{W\times \{0\}} (X\times \{0\})) \] is isomorphic to \[ \Blow_{W\times \{0\}}(Z\times \A^1)-\Blow_{W\times \{0\}} (Z\times \{0\})\simeq \Spec\big(\bigoplus_{n\in \Z} (J/I)^{-n} t^n\big), \] where $(J/I)^n:=A/I$ for all integers $n\leq 0$. It follows that \eqref{blow.24.1} is cartesian. \end{proof}
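To illustrate the explicit descriptions in the proof above in the simplest case, take $A=k[x]$ and $I=(x)$, so that $X=\A^1$ and $Z=\{0\}$ (keeping track only of underlying schemes). Then
\[ \bigoplus_{n\in \Z} I^{-n} t^n = A[t,xt^{-1}] \simeq k[t,u], \qquad u:=xt^{-1}, \]
so the deformation space is $\A^2$ with coordinates $(t,u)$: the fiber over $t=0$ is $\Spec(k[u])\simeq \Normal_Z X$, while inverting $t$ gives $A[t,t^{-1}]$, so the fibers over $t\neq 0$ are copies of $X$.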
\begin{proposition} \label{blow.23} Let $\cW\to \cZ\to \cX$ be closed immersions in $\lSmSpc/\cS$, where $\cS\in \lSpc/B$. Then there is a cartesian square \begin{equation} \label{blow.23.2} \begin{tikzcd} \Deform_{\cW}\cZ\ar[d]\ar[r]& \Deform_{\cW}\cX\ar[d] \\ \cZ\times \boxx\ar[r]& \Deform_{\cZ}\cX. \end{tikzcd} \end{equation} \end{proposition} \begin{proof} As in the proof of Theorem \ref{blow.7}, there exists a commutative diagram \[ \begin{tikzcd} h_Z\ar[r,"h_i"]\ar[d]& h_X\ar[r,"h_f"]\ar[d]& h_S\ar[d] \\ \cZ\ar[r]& \cX\ar[r]& \cS \end{tikzcd} \] such that $i$ is a strict closed immersion, $f$ and $fi$ are log smooth, the vertical morphisms are representable Zariski covers, and the left square is cartesian. By Lemma \ref{blow.21}, there exists a cartesian square \[ \begin{tikzcd} h_{W'}\ar[d,"\simeq"']\ar[r,"h_{a}"]& h_{Z'}\ar[d,"h_u"',"\simeq"] \\ \cW\times_{\cZ}h_Z\ar[r]& h_Z \end{tikzcd} \] with vertical isomorphisms such that $u$ is a dividing cover, $a$ is a strict closed immersion, and the composition $W'\to S$ is log smooth. The composition $Z'\to X$ is a proper monomorphism, so Proposition \ref{equiv.23} yields a dividing cover $X''\to X$ such that the projection $Z'':=Z'\times_X X''\to X''$ is a strict closed immersion. Hence we obtain a commutative diagram \[ \begin{tikzcd} h_{W''}\ar[d]\ar[r,"h_{a''}"]& h_{Z''}\ar[d]\ar[r,"h_{i''}"]& h_{X''}\ar[d] \\ \cW\ar[r]& \cZ\ar[r]& \cX \end{tikzcd} \] with cartesian squares such that $W'':=W'\times_X X''$, the vertical morphisms are representable Zariski covers, $i''$ and $a''$ are strict closed immersions, and the compositions $X'',Z'',W''\to S$ are log smooth.
The proof is complete once we carry out the following steps: \begin{enumerate} \item[(1)] Show that $(\cZ\times \{0\})\times_{\cX\times \boxx}\Deform_{\cW}\cX$ is an effective log Cartier divisor on $\Deform_{\cW}\cX$. \item[(2)] Show $\Blow_{\cZ\times \{0\}}(\cX\times \{0\})\times_{\Blow_{\cZ\times \{0\}}(\cX\times \boxx),p} \Deform_{\cW}\cX=\emptyset$, where the morphism $p$ is obtained in (1). \item[(3)] Show that \eqref{blow.23.2} is cartesian, where its right vertical morphism is obtained in (2). \end{enumerate} The steps (1)--(3) are Zariski local on $\cX$ by Lemmas \ref{comp.2}, \ref{blow.19}, \ref{blow.26}, and \ref{blow.17}. Hence we reduce to showing the analogous steps for $h_{W''}\to h_{Z''}\to h_{X''}\to h_S$. Lemma \ref{blow.24} proves the steps (1)--(3) at once. \end{proof}
\begin{corollary} \label{blow.25} Let $\cW\to \cZ\to \cX$ be closed immersions in $\lSmSpc/\cS$, where $\cS\in \lSpc/B$. Then there is a cartesian square \begin{equation} \label{blow.25.1} \begin{tikzcd} \Normal_{\cW}\cZ\ar[d]\ar[r]& \Normal_{\cW}\cX\ar[d] \\ \cZ\ar[r]& \Normal_{\cZ}\cX. \end{tikzcd} \end{equation} \end{corollary} \begin{proof} The square \eqref{blow.25.1} is obtained by a pullback of \eqref{blow.23.2}. \end{proof}
Normal bundles of divided log spaces can be regarded as affine bundles in the following sense.
\begin{proposition} \label{blow.27} Let $\cZ\to \cX$ be a closed immersion in $\lSmSpc/\cS$, where $\cS\in \lSpc/B$. Then there exists a cartesian square \[ \begin{tikzcd} h_{Z\times \A^n}\ar[d]\ar[r,"h_p"]& h_Z\ar[d] \\ \Normal_{\cZ} \cX\ar[r]& \cZ \end{tikzcd} \] with $Z\in \lFan/B$ and $n\in \N$ such that $h_Z\to \cZ$ is a representable Zariski cover and $p$ is the projection. \end{proposition} \begin{proof} As in the proof of Theorem \ref{blow.7}, there exists a commutative diagram \[ \begin{tikzcd} h_Z\ar[d]\ar[r,"h_i"]& h_X\ar[d]\ar[r,"h_f"]& h_S\ar[d] \\ \cZ\ar[r]& \cX\ar[r]& \cS \end{tikzcd} \] with $S,X,Z\in \lFan/B$ such that the left square is cartesian, the vertical morphisms are representable Zariski covers, $f$ and $fi$ are log smooth, and $i$ is a strict closed immersion. Lemmas \ref{blow.17} and \ref{blow.6} yield isomorphisms \[ \Normal_{\cZ} \cX\times_{\cX}h_X \simeq \Normal_{h_Z}h_X \simeq h_{\Normal_Z X}. \] Together with $h_Z\simeq \cZ\times_{\cX} h_X$, we obtain an isomorphism \[ \Normal_{\cZ} \cX\times_{\cZ} h_Z \simeq h_{\Normal_Z X}. \] Since $\ul{\Normal_Z X}$ is a vector bundle over $\ul{Z}$, we obtain the desired cartesian square after further Zariski localization on $Z$. \end{proof}
\appendix
\section{Charts for log smooth morphisms}
The chart theorem for log smooth morphisms \cite[Theorem IV.3.3.1]{Ogu} is crucial for the development of the theory of log motives since it allows us to understand the structure of log smooth morphisms more concretely. However, the theorem is only \'etale local on the source. In this section, we explain how the theorem can be made Zariski local on the source under a stronger assumption.
\begin{definition} Let $X$ be an fs log scheme, and let $x$ be a point. A chart $P$ of $X$ is called \emph{neat at $x$} if $P$ is sharp and $P\to \ol{\cM}_{X,x}$ is an isomorphism. See \cite[Definition II.2.3.1]{Ogu} for other equivalent conditions. \end{definition}
\begin{definition} Let $f\colon X\to S$ be a morphism of fs log schemes. We set \[ \cM_{X/S}:=\coker(\cM_{\ul{X}\times_{\ul{S}}S} \to \cM_X), \] where the cokernel is taken in the category of sheaves of monoids on $X$. By \cite[Proposition I.1.3.3]{Ogu}, there is an isomorphism \[ \cM_{X/S}^\gp \simeq \coker(\cM_{\ul{X}\times_{\ul{S}}S}^\gp \to \cM_X^\gp). \] Let $x$ be a point of $X$. A chart $\theta\colon P\to Q$ for $f$ is called \emph{neat at $x$} if the induced sequence \[ 0\to P^\gp \to Q^\gp \to \cM_{X/S,x}^\gp \to 0 \] is exact. This rephrases the conditions in \cite[Theorem II.2.4.4]{Ogu}. \end{definition}
\begin{proposition} Let $f\colon X\to S$ be an exact morphism of fs log schemes. If $\theta\colon P\to Q$ is a neat chart for $f$ at $x\in X$ such that $P$ is a neat chart at $f(x)$, then $Q$ is a neat chart at $x$. \end{proposition} \begin{proof} This is proved in \cite[Remark II.2.4.5]{Ogu} with the assumption that the induced homomorphism $\ol{\cM}_{S,f(x)}\to \ol{\cM}_{X,x}$ is injective. The exactness of $f$ implies this assumption by \cite[Proposition I.4.2.1(5)]{Ogu}. \end{proof}
\begin{proposition} \label{logsmooth.1} Let $f\colon X\to S$ be a log smooth (resp.\ log \'etale) morphism of fs log schemes, and let $x$ be a point of $X$. Assume that $\cM_{X/S,x}^\gp$ is torsion free and $S$ has a chart $P$. Then in a Zariski neighborhood of $x$, $f$ admits a neat chart $\theta\colon P\to Q$, and the induced morphism $X\to S\times_{\A_P} \A_Q$ is strict smooth (resp.\ strict \'etale). \end{proposition} \begin{proof} By \cite[Theorem III.1.2.7(4)]{Ogu}, $f$ admits a neat chart in a Zariski neighborhood of $x$. After further Zariski localization, we may assume that $P$ is neat at $f(x)$ using \cite[Proposition II.2.3.7]{Ogu}. Then argue as in the proof of \cite[Theorem IV.3.3.1(3)]{Ogu} to conclude. \end{proof}
\section{Exact monomorphisms}
\begin{definition} For an fs monoid $P$ and a ring $R$, we set $\A_{P,R}:=\Spec(P\to R[P])$, see \cite[Definition III.1.2.3]{Ogu}. If $I$ is an ideal of $P$, we set \[ \A_{(P,I),R} := \A_{P,R}\times_{\Spec(R[P])}\Spec(R[P]/I). \] If $P$ is sharp, we set \[ \pt_{P,R} := \A_{(P,P^+),R}, \] where $P^+$ denotes the ideal of non-units of $P$. We often omit $R$ in this notation when it is clear from the context. \end{definition}
\begin{lemma} \label{equiv.10} Let $\theta\colon P\to Q$ be a homomorphism of sharp fs monoids, and let $k$ be a field. If the induced morphism $f\colon \pt_{Q,k}\to \pt_{P,k}$ is an exact monomorphism of fs log schemes, then $\theta$ is an isomorphism. \end{lemma} \begin{proof} Let us omit $k$ for simplicity of notation. Consider the cartesian square \[ \begin{tikzcd} Z\ar[d]\ar[r]& \pt_{Q}\times \pt_{Q}\ar[d] \\ \A_{Q\oplus_P Q}\ar[r]& \A_{Q}\times \A_{Q}, \end{tikzcd} \] where $Z:=\pt_Q\times_{\pt_P}\pt_Q$. If $(q_1,q_2)\in Q\oplus_P Q$ is equal to $0$, then there exists $p\in P^\gp$ such that $q_1=p$ and $q_2=-p$ in $Q^\gp$. This implies $q_1,q_2\in Q^*=0$. Hence the two inclusions $Q\rightrightarrows Q\oplus_P Q$ send $Q^+$ into $(Q\oplus_P Q)^+$, so $Z$ contains $\A_{(Q\oplus_P Q,(Q\oplus_P Q)^+)}$ as a strict closed subscheme. It follows that $Z$ contains a point $z$ such that $\ol{\cM}_{Z,z}\simeq \ol{Q\oplus_P Q}$. Since $f$ is a monomorphism, the diagonal morphism $\pt_Q\to Z$ is an isomorphism. This gives an isomorphism \[ \overline{Q\oplus_P Q} \simeq Q. \]
Due to \cite[Proposition I.1.4.7(2), Corollaries I.2.3.8, I.4.2.16]{Ogu}, we have an equality \[ 2\rank(Q^{\gp})-\rank(P^{\gp}) = \rank((\overline{Q\oplus_P Q})^{\gp}). \] We deduce that $P^{\gp}$ and $Q^{\gp}$ have the same rank. By \cite[Proposition I.4.2.1(5)]{Ogu}, $\theta$ is injective. Together with \cite[Proposition I.4.3.5]{Ogu}, we see that $\theta$ is Kummer.
Assume that $q\in Q$ is not in $P$. Since $\theta$ is Kummer, $nq\in P$ for some integer $n>1$. The element $(q,-q)\in Q^{\gp}\oplus_{P^\gp}Q^\gp$ satisfies $n(q,-q)=0$, which implies $(q,-q)\in (Q\oplus_P Q)^*$ since $Q\oplus_P Q$ is saturated. It follows that $(Q\oplus_P Q)^*$ is nontrivial. The underlying scheme of $\A_{(Q\oplus_P Q,(Q\oplus_P Q)^+)}$ is $\A_{(Q\oplus_P Q)^*}$, which is not the spectrum of $k$. This contradicts the fact that $\pt_Q\to Z$ is an isomorphism. Hence $\theta$ is an isomorphism. \end{proof}
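To make the torsion phenomenon in the last step concrete, here is a standard example (not taken from the text, and only a sketch): take $P=\N$ and let $\theta\colon P\to Q=\N$ be multiplication by $2$, which is Kummer but not an isomorphism. On groupifications,
\[
Q^\gp \oplus_{P^\gp} Q^\gp
\simeq (\mathbb{Z}\oplus\mathbb{Z})/\langle (2,-2)\rangle
\simeq \mathbb{Z}\oplus \mathbb{Z}/2,
\]
and the class of $(1,-1)$ satisfies $2(1,-1)=(2,-2)=0$. Hence $(1,-1)$ lies in the saturation $Q\oplus_P Q$ and is a unit there, since $(1,-1)+(-1,1)=0$; this is exactly the situation of the argument above with $q=1$ and $n=2$.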
\begin{proposition} \label{equiv.32} Let $f\colon Y\to X$ be an exact monomorphism of fs log schemes. Then $f$ is strict. \end{proposition} \begin{proof} Suppose that $f$ is not strict at a point $y\in Y$. We set $x:=f(y)$, $P:=\ol{\cM}_{X,x}$, and $Q:=\ol{\cM}_{Y,y}$. Let $\theta\colon P\to Q$ be the induced homomorphism. The restriction of $f$ at $x$ and $y$ is a morphism $g\colon \pt_{Q,k'}\to \pt_{P,k}$ for some fields $k$ and $k'$. The morphism $g$ is an exact monomorphism too.
Consider the canonical factorization \[ \pt_{Q,k'} \xrightarrow{g'} \pt_{P,k'} \to \pt_{P,k}. \] Since $g$ is an exact monomorphism, $g'$ is an exact monomorphism. By Lemma \ref{equiv.10}, $\theta$ is an isomorphism. It follows that $g'$ is an isomorphism. Hence $g$ is strict, which is a contradiction. \end{proof}
\section{Strict closed immersions of log smooth schemes}
\begin{proposition} \label{blow.3} Let $i\colon Z\to X$ be a strict closed immersion in $\lSm/S$, where $S$ is an fs log scheme. Then strict \'etale locally on $X$, there exists a cartesian square \begin{equation} \label{blow.3.1} \begin{tikzcd} Z\ar[d,"i"']\ar[r]& Y\ar[d,"i_0"] \\ X\ar[r,"u"]& Y\times \A^s \end{tikzcd} \end{equation} with $Y\in \lSm/S$ such that $i_0$ is the zero section and $u$ is strict \'etale. If $\cM_{X/S}^\gp$ is torsion free, then \eqref{blow.3.1} exists Zariski locally on $X$. \end{proposition} \begin{proof} Let $x$ be a point of $Z$, and let $\cI$ be the sheaf of ideals on $X$ defining $Z$. By \cite[Lemma IV.1.2.10, Theorem IV.3.2.2]{Ogu}, we can choose local sections $m_1,\ldots,m_r$ of $\cM_X$ and $m_{r+1},\ldots,m_{r+s}$ of $\cI$ such that $\{dm_1,\ldots,dm_{r+s}\}$ (resp.\ $\{dm_{1},\ldots,dm_{r}\}$) gives rise to a basis of $\Omega_{X/S,x}^1$ (resp.\ $\Omega_{Z/S,x}^1$). Zariski locally on $X$, the local sections $m_1,\ldots,m_{r+s}$ are global sections. Hence Zariski locally on $X$, we obtain a cartesian square \[ \begin{tikzcd} Z\ar[d,"i"']\ar[r]& S\times \A_{\N^r}\ar[d] \\ X\ar[r,"v"]& S\times \A_{\N^r}\times \A^s \end{tikzcd} \] such that the right vertical morphism is the zero section. According to the proof of \cite[Theorem IV.3.2.6]{Ogu}, $v$ is log \'etale.
We may assume that $S\times \A_{\N^r}$ admits a chart $P$. By \cite[Theorem IV.3.3.1]{Ogu}, strict \'etale locally on $X$, $v$ admits a chart $\theta\colon P\to Q$ such that the induced morphism $X\to (S\times \A_{\N^r})\times_{\A_P}\A_Q \times \A^s$ is strict \'etale. By setting $Y:=(S\times \A_{\N^r})\times_{\A_P}\A_Q$, we obtain \eqref{blow.3.1}.
If $\cM_{X/S}^\gp$ is torsion free, use Proposition \ref{logsmooth.1} instead. \end{proof}
\section{Blow-ups along strict closed subschemes}
\begin{definition} \label{blow.10} Suppose that $i\colon Z\to X$ is a strict closed immersion in $\lSch/S$, where $S$ is an fs log scheme. The \emph{blow-up of $X$ along $Z$} is defined to be \[ \Blow_Z X := \Blow_{\ul{Z}}\ul{X}\times_{\ul{X}}X, \] where $\Blow_{\ul{Z}}\ul{X}$ denotes the usual blow-up. \end{definition}
\begin{lemma} \label{blow.2} Let $i\colon Z\to X$ be a strict closed immersion in $\lSm/S$, where $S$ is an fs log scheme. Then $\ul{\Blow_Z X\times_X Z}$ is an effective Cartier divisor on $\ul{\Blow_Z X}$ and we have $\Blow_Z X,\Blow_Z X\times_X Z\in \lSm/S$. \end{lemma} \begin{proof} The question is strict \'etale local on $X$ by \cite[Theorem 0.2]{zbMATH06164842}, so we may assume the existence of the diagram \eqref{blow.3.1}. Since the morphism $u$ in this diagram is strict flat, there is a canonical isomorphism \begin{equation} \label{blow.2.1} \Blow_Z X \simeq \Blow_Y(Y\times \A^s) \times_{Y\times \A^s}X. \end{equation} To conclude, observe that the claim for the strict closed immersion $Y\to Y\times \A^s$ is clear. \end{proof}
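For the model strict closed immersion $Y\to Y\times \A^s$ invoked at the end of the proof, Definition \ref{blow.10} unwinds by flat base change (a routine check, spelled out here for convenience):
\[
\ul{\Blow_{Y}(Y\times \A^s)}
\simeq \Blow_{\ul{Y}}(\ul{Y}\times \A^s)
\simeq \ul{Y}\times \Blow_{\{0\}}\A^s,
\]
with exceptional divisor $\ul{Y}\times E$, where $E\simeq \mathbb{P}^{s-1}$ is the exceptional divisor of $\Blow_{\{0\}}\A^s$. Since $\Blow_{\{0\}}\A^s$ is smooth and $E$ is an effective Cartier divisor, both assertions of the lemma are immediate in this model case.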
\begin{lemma} \label{blow.1} Let $i\colon Z\to X$ be a strict closed immersion in $\lSm/S$, where $S$ is an fs log scheme. Then for every log smooth morphism $X'\to X$, there is a canonical isomorphism \[ \Blow_{Z'} X'\simeq\Blow_Z X\times_X X', \] where $Z':=Z\times_X X'$. \end{lemma} \begin{proof} The question is strict \'etale local on $X$ and $X'$, so we may assume that \eqref{blow.3.1} exists and $X$ admits a chart $P$. By \cite[Theorem IV.3.3.1]{Ogu}, we may also assume that there exists a chart $P\to Q$ of $X'\to X$ such that the induced morphism \[ X'\to X\times_{\A_P}\A_Q \] is strict \'etale. We set $Y':=Y\times_{\A_P}\A_Q$. There are canonical isomorphisms \[ \Blow_{Z'} {X'} \simeq \Blow_{Y'}(Y'\times \A^s) \times_{Y'\times \A^s}X' \simeq \Blow_{\{0\}}(\A^s)\times_{\A^s}X'. \] Together with \eqref{blow.2.1}, we obtain the desired isomorphism. \end{proof}
\begin{example} \label{blow.4} The conclusion of Lemma \ref{blow.1} fails if we do not assume $Z\in \lSm/S$. For example, suppose \[ X:=(\A^2,H_1+H_2), \; X':=(\Blow_{\{0\}}\A^2,\widetilde{H}_1+\widetilde{H}_2+E), \text{ and } Z:=X\times_{\A^2}\{0\}, \] where $H_1$ and $H_2$ are the axes, $\widetilde{H}_1$ and $\widetilde{H}_2$ are their strict transforms, and $E$ is the exceptional divisor. While $\Blow_Z X\times_X X'$ is not irreducible, $\Blow_{Z'}X'\simeq X'$ is irreducible. \end{example}
\end{document} |
\begin{document}
\newtheorem{Definition}{Definition}[section] \newtheorem{Theorem}[Definition]{Theorem} \newtheorem{Proposition}[Definition]{Proposition} \newtheorem{Remark}[Definition]{Remark} \newtheorem{Lemma}[Definition]{Lemma} \newtheorem{Corollary}[Definition]{Corollary}
\numberwithin{equation}{section}
\title{Prescribed Webster scalar curvatures on compact pseudo-Hermitian manifolds} \author{Yuxin Dong\footnote{Supported by NSFC grant No. 11771087, and LMNS, Fudan},\ Yibin Ren\footnote{Supported by NSFC grant No. 11801517},\ Weike Yu\footnote{Corresponding author}} \date{} \maketitle \begin{abstract} In this paper, we investigate the problem of prescribing Webster scalar curvatures on compact pseudo-Hermitian manifolds. By means of the method of upper and lower solutions and the perturbation theory of self-adjoint operators, we describe some sets of Webster scalar curvature functions which can be realized through pointwise CR conformal deformations and through CR conformally equivalent deformations of a given pseudo-Hermitian structure, respectively. \par\textbf{Keywords: } Compact pseudo-Hermitian manifolds; Webster scalar curvature; CR conformal deformations. \end{abstract}
\section{Introduction} In Riemannian geometry, the problem of finding a conformal metric on a compact Riemannian manifold with a prescribed scalar curvature has been investigated extensively (cf. \cite{[KW1], [KW2], [KW3], [Ou1], [Ou2], [Ta], [Ra], [Ho1], [CX]} and the references therein). Its special case where the candidate scalar curvature function is constant is the well-known Yamabe problem, which was settled by a series of works due to Yamabe, Trudinger, Aubin, and Schoen (cf. \cite{[Yam], [Tr], [Au], [Sc]}).
The following problem is a CR analogue of the prescribed scalar curvature problem: given any smooth function $\hat{\rho}$ on a compact strictly pseudoconvex CR manifold $M$ of real dimension $2n+1$ with contact form $\theta$, does there exist a contact form $\hat{\theta}$ CR conformal to $\theta$, that is, $\hat{\theta}=u^{\frac{2}{n}}\theta$ for some positive function $u$, such that its Webster scalar curvature $\text{Scal}_{\hat{\theta}}=\hat{\rho}$? It is equivalent to solving the following partial differential equation \begin{align} -(2+\frac{2}{n})\Delta_\theta u+\text{Scal}_\theta u=\hat{\rho}u^{1+\frac{2}{n}}\ \ \ \text{on}\ M\label{1.1.} \end{align} for $u>0$, where $\text{Scal}_\theta$ is the Webster scalar curvature of $(M,\theta)$. When $\hat{\rho}$ is constant, the above problem is referred to as the CR Yamabe problem, which was solved by Jerison and Lee (cf. \cite{[JL1], [JL2]}), Gamara and Yacoub (cf. \cite{[Ga1], [GY]}). Another interesting special case of the prescribed Webster scalar curvature problem is that of the CR sphere $S^{2n+1}$. As in the Riemannian case, this problem is not always solvable. Indeed, Cheng \cite{[Ch]} gave a Kazdan-Warner type necessary condition relating the solution $u$ and the prescribed function $\hat{\rho}$. Besides, in \cite{[FU], [MU], [HK], [SG], [RG], [CPY], [Ho3], [Ho4], [Ho5]}, if $\hat{\rho}$ satisfies suitable conditions, some existence results were established for the prescribed Webster scalar curvature problem on $S^{2n+1}$ by means of variational, topological, and perturbation methods, the Webster scalar curvature flow, or critical point theory. In \cite{[Ga2], [CEG], [CAY1], [CAY2], [Yac], [GAG]}, the authors investigated the problem on strictly pseudoconvex spherical CR manifolds.
In \cite{[Ho2]}, using geometric flow, Ho proved that any negative smooth function $\hat{\rho}$ can be prescribed as the Webster scalar curvature in the CR conformal class, provided that $\dim M=3$ and the CR Yamabe invariant of M is negative. In \cite{[NZ]}, the authors studied the prescribed Webster scalar curvature problem on a pseudo-Hermitian manifold in arbitrary CR dimension with negative CR Yamabe invariant. Using variational techniques, they established several non-existence, existence, and multiplicity results when the function $\hat{\rho}$ is sign-changing.
In this paper, we investigate the prescribed Webster scalar curvature problem on a compact strictly pseudoconvex CR manifold $M$, following the original approaches in \cite{[KW1],[KW2]}, but adapting their arguments to the subelliptic setting. In this way, we are able to establish several existence results for the problem on a compact strictly pseudoconvex CR manifold in arbitrary CR dimension. To state our main results, let us introduce some notations. Given a compact strictly pseudoconvex CR manifold $(M^{2n+1},H,J,\theta)$ (also called a pseudo-Hermitian manifold; see Section \ref{section2}), where $(H,J)$ is a CR structure of type $(n,1)$ and $\theta$ is a pseudo-Hermitian structure with positive Levi form, let $PC(\theta)$ denote the set of smooth functions on $M$ that are the Webster scalar curvatures of pseudo-Hermitian structures $\hat{\theta}$ in the CR conformal class $[\theta]=\{u\theta: 0<u\in C^\infty(M)\}$. In other words, $PC(\theta)$ is the set of smooth functions $\hat{\rho}$ for which one can find a positive solution of \eqref{1.1.}. Let $Y_M(\theta)$ be the CR Yamabe constant (see \eqref{2.14} or \eqref{2.15}) and $\lambda_1$ be the first eigenvalue of the operator $L=-(2+\frac{2}{n})\Delta_\theta+\text{Scal}_\theta$ (also see \eqref{2.12}). Using the method of upper and lower solutions on CR manifolds, we obtain the following conclusions. \begin{Theorem}\label{theorem3.10} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold.
Then \begin{enumerate}[(A)] \item The following statements are equivalent:\\ $(a1)$ $\lambda_1<0$.\\ $(a2)$ $\{\hat{\rho}\in C^\infty(M):\hat{\rho}<0\} \subset PC(\theta).$\\ $(a3)$ $\{\hat{\rho}\in C^\infty(M):\hat{\rho}<0\} \cap PC(\theta)\neq \emptyset.$\\ $(a4)$ $Y_M(\theta)<0.$ \item The following statements are equivalent:\\ $(b1)$ $\lambda_1=0.$\\ $(b2)$ $0\in PC(\theta).$\\ $(b3)$ $ Y_M(\theta)=0.$ \item The following statements are equivalent:\\ $(c1)$ $\lambda_1>0.$\\ $(c2)$ $\{\hat{\rho}\in C^\infty(M):\hat{\rho}>0\} \cap PC(\theta)\neq \emptyset.$\\ $(c3)$ $Y_M(\theta)>0.$ \end{enumerate} \end{Theorem} Note that the results in (B) of Theorem \ref{theorem3.10} essentially belong to \cite{[JL1]} as a special case of the CR Yamabe problem, and we state them here for relative completeness, while the prescribed Webster scalar curvature problem for a more general nonvanishing function $\hat{\rho}$ in the case $\lambda_1=0$ remains open.
Since in general not all smooth functions $\hat{\rho}$ can be realized as the Webster scalar curvature of some pseudo-Hermitian structure $\hat{\theta}$ pointwise CR conformal to $\theta$, i.e., $\hat{\theta}\in [\theta]$ (cf. \cite{[Ch]}, \cite{[NZ]}), we try to enlarge the class of realizable curvature functions by only requiring the desired pseudo-Hermitian structure $(\hat{H},\hat{J}, \hat{\theta})$ to be CR conformally equivalent to $(H,J,\theta)$, i.e., there is a map $\Phi\in \text{Diff}(M)$ such that $\Phi^*\hat{\theta}\in [\theta],\hat{H}=d\Phi(H),\hat{J}=d\Phi\circ J\circ (d\Phi)^{-1}$. For simplicity, let $CE(\theta)$ denote the set of smooth functions on $M$ which are the Webster scalar curvatures of $(\hat{H},\hat{J}, \hat{\theta})$ CR conformally equivalent to $(H,J,\theta)$. In other words, $CE(\theta)$ is the set of smooth functions $\hat{\rho}$ for which one can find a map $\Phi\in\text{Diff}(M)$ such that \begin{align} -(2+\frac{2}{n})\Delta_\theta u+\text{Scal}_\theta u=(\hat{\rho}\circ \Phi) u^{1+\frac{2}{n}}\ \ \ \text{on}\ M \end{align} admits a positive solution. By the inverse function theorem and perturbation methods adapted to our setting, we obtain \begin{Theorem}\label{theorem1.2} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. \begin{enumerate}[(1)] \item If $\lambda_1<0$, then $CE(\theta)=\{\hat{\rho}\in C^\infty(M):\ \hat{\rho}<0\ \text{somewhere}\}$. \item If $\lambda_1=0$, then $CE(\theta)=\{\hat{\rho}\in C^\infty(M):\ \hat{\rho}\ \text{changes\ sign\ on\ M}\}\cup \{0\}$. \item If $\lambda_1>0$, then $CE(\theta)=\{\hat{\rho}\in C^\infty(M):\ \hat{\rho}>0\ \text{somewhere}\}$. \end{enumerate} \end{Theorem} In particular, if $\hat{\rho}$ is a smooth function on $M$ that changes sign, then it belongs to $CE(\theta)$ regardless of the sign of $\lambda_1$. In other words, any sign-changing smooth function can be realized as some Webster scalar curvature.
\begin{Corollary} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. If $\hat{\rho}\in C^\infty(M)$ and it changes sign on $M$, then there exists a structure $(\hat{H},\hat{J},\hat{\theta})$ on $M$ such that $(M,\hat{H},\hat{J},\hat{\theta})$ is a pseudo-Hermitian manifold with the Webster scalar curvature $\hat{\rho}$ and is CR conformally equivalent to $(M,H,J,\theta)$.
\end{Corollary}
\section{Preliminaries}\label{section2} In this section, we will introduce the notions and notations of pseudo-Hermitian geometry (cf. \cite{[DT]}).
Let $M$ be an orientable real smooth manifold with $\dim_{\mathbb{R}} M=2n+1$. A CR structure on $M$ is a complex subbundle $T_{1,0}M$ of complex rank $n$ of the complexified tangent bundle $TM\otimes\mathbb{C}$ satisfying \begin{align} T_{1,0}M\cap T_{0,1}M=\{0\}, \ \ [\Gamma(T_{1,0}M),\Gamma(T_{1,0}M)]\subseteq\Gamma(T_{1,0}M)\label{2.1} \end{align} where $T_{0,1}M=\overline{T_{1,0}M}$. The complex subbundle $T_{1,0}M$ corresponds to a real rank $2n$ subbundle of $TM$: \begin{align} H=Re\{T_{1,0}M\oplus T_{0,1}M\}, \end{align} which is called the Levi distribution. Clearly, it carries a natural complex structure $J$ defined as \begin{align} J(X+\bar{X})=\sqrt{-1}(X-\bar{X}) \end{align} for any $X\in T_{1,0}M$. Equivalently, the CR structure may be described by the pair $(H,J)$. Let $(M,H,J)$ and $(\tilde{M},\tilde{H},\tilde{J})$ be two CR manifolds. A smooth map $f: (M,H,J) \rightarrow (\tilde{M},\tilde{H},\tilde{J})$ is called a CR map if it satisfies \begin{align} df(H)\subset \tilde{H},\ \ \ \ df\circ J=\tilde{J}\circ df\ \text{on}\ H. \end{align} Furthermore, $f$ is said to be a CR isomorphism if it is a $C^\infty$ diffeomorphism and a CR map.
Since both $M$ and $H$ are orientable, there is a global nowhere vanishing 1-form $\theta$ with $H=\ker \theta$, which is called a pseudo-Hermitian structure on $M$. The corresponding Levi form is defined as \begin{align} L_\theta(X,Y)=d\theta(X,JY) \end{align} for any $X,Y\in H$. The integrability assumption of $T_{1,0}M$ implies $L_{\theta}$ is $J$-invariant and symmetric. If the CR manifold $M$ admits a pseudo-Hermitian structure $\theta$ such that $L_\theta$ is positive definite, then $(M,H,J)$ is said to be strictly pseudoconvex. Henceforth we will assume that $(M,H,J)$ is a strictly pseudoconvex CR manifold and $\theta$ is a pseudo-Hermitian structure with positive Levi form. The quadruple $(M^{2n+1}, H, J, \theta)$ is referred to as a pseudo-Hermitian manifold.
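The basic example is the unit sphere $S^{2n+1}\subset \mathbb{C}^{n+1}$ (stated here with one common normalization; signs and constants vary across references):
\[
T_{1,0}S^{2n+1}=T^{1,0}\mathbb{C}^{n+1}\cap\big(TS^{2n+1}\otimes\mathbb{C}\big),
\qquad
\theta=\frac{\sqrt{-1}}{2}\sum_{j=1}^{n+1}\big(z_j\,d\bar{z}_j-\bar{z}_j\,dz_j\big)\Big|_{S^{2n+1}},
\]
for which the Levi form is definite, so that (after adjusting the sign of $\theta$ if necessary for the chosen convention) $S^{2n+1}$ becomes a compact strictly pseudoconvex CR manifold.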
Let $\theta, \hat{\theta}$ be two pseudo-Hermitian structures on the CR manifold $(M,H,J)$, whose Levi forms are positive definite. Since $\dim_{\mathbb{R}}TM/H=1$, $\theta, \hat{\theta}$ are related by
\begin{align}
\hat{\theta}=u\theta \label{2.5.}
\end{align}
for some nowhere vanishing function $u\in C^\infty(M)$. Applying the exterior differentiation operator $d$ to \eqref{2.5.}, we get
\begin{align}
L_{\hat{\theta}}=uL_\theta.
\end{align}
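In more detail, since $\theta$ vanishes on $H$, for $X,Y\in H$ one computes
\begin{align*}
L_{\hat{\theta}}(X,Y)&=d\hat{\theta}(X,JY)=\big(du\wedge\theta+u\,d\theta\big)(X,JY)\\
&=du(X)\,\theta(JY)-du(JY)\,\theta(X)+u\,d\theta(X,JY)=uL_\theta(X,Y),
\end{align*}
because $\theta(X)=\theta(JY)=0$.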
Since both $L_\theta$ and $L_{\hat{\theta}}$ are positive definite, we see that $u$ is positive everywhere. Given a CR structure $(H,J)$, then the set of all its pseudo-Hermitian structures with positive Levi form is exactly
\begin{align}
[\theta]=\{u\theta:\ 0<u\in C^\infty(M)\},
\end{align}
where $\theta$ is one pseudo-Hermitian structure of $(H,J)$ with positive Levi form.
A property on a CR manifold $(M,\theta)$ is said to be CR invariant if it is invariant for all pseudo-Hermitian structures in $[\theta]$.
On a pseudo-Hermitian manifold $(M^{2n+1}, H, J, \theta)$, there is a unique globally defined nowhere vanishing tangent vector field $\xi$ on $M$ such that \begin{align} \theta(\xi)=1,\ \ \ \ d\theta(\xi,\cdot)=0, \end{align} which is usually called the Reeb vector field. Hence, we have a splitting of the tangent bundle \begin{align} TM=H\oplus \mathbb{R}\xi, \end{align} which leads to a natural projection $\pi_H:TM\rightarrow H$ and a Riemannian metric on $M$ (the Webster metric) \begin{align} g_\theta=\pi^*_HL_\theta+\theta\otimes\theta, \end{align} where $(\pi^*_HL_\theta)(X,Y)=L_{\theta}(\pi_HX, \pi_HY)$ for $X,Y\in TM$. On a pseudo-Hermitian manifold, there is a unique linear connection $\nabla$, called the Tanaka-Webster connection, preserving the CR structure and the Webster metric (cf. Theorem 1.3 of \cite{[DT]}). For a smooth function $u$ on $M$, one can define the sub-Laplacian of $u$ as the divergence of the horizontal gradient: \begin{align} \Delta_\theta u=\text{div} (\nabla^H u), \end{align} where $\nabla^H u=\pi_H \nabla u$. Then integration by parts yields \begin{align} \int_M (\Delta_\theta u)v \Psi^{\theta}=-\int_M L_\theta(\nabla^H u, \nabla^H v)\Psi^{\theta}, \end{align} for $u,v \in C^2(M)$ with compact support, where $\Psi^{\theta}=\theta\wedge (d\theta)^n$ is a volume form of $(M^{2n+1},H,J,\theta)$.
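For instance, on the Heisenberg group $\mathbb{R}^{2n+1}$ with coordinates $(x_1,\dots,x_n,y_1,\dots,y_n,t)$ and one common normalization of the contact form (conventions for signs and factors vary in the literature), take
\[
\theta=dt+\frac{1}{2}\sum_{j=1}^n\big(x_j\,dy_j-y_j\,dx_j\big),
\qquad
d\theta=\sum_{j=1}^n dx_j\wedge dy_j .
\]
Then $\xi=\partial_t$ is the Reeb vector field, $H$ is spanned by
\[
X_j=\partial_{x_j}+\frac{y_j}{2}\,\partial_t,
\qquad
Y_j=\partial_{y_j}-\frac{x_j}{2}\,\partial_t,
\]
with $JX_j=Y_j$; one checks directly that $\theta(X_j)=\theta(Y_j)=0$ and that $\{X_j,Y_j\}$ is $L_\theta$-orthonormal, so the sub-Laplacian takes the familiar form $\Delta_\theta u=\sum_{j=1}^n\big(X_j^2+Y_j^2\big)u$.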
The curvature theory of the Tanaka-Webster connection was developed in \cite{[We]} (cf. also \cite{[DT]}). In particular, Webster defined a scalar curvature associated with a pseudo-Hermitian structure $\theta$, which is referred to as the Webster scalar curvature in the literature. For a pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$, we say that a pseudo-Hermitian structure $\hat{\theta}$ on $M$ is pointwise CR conformal to $\theta$ if $\hat{\theta}=u^{\frac{2}{n}}\theta$ for some positive function $u\in C^\infty(M)$. The pseudo-Hermitian manifold $(M^{2n+1},H,J,\hat{\theta})$ is said to be a pointwise CR conformal deformation of $(M^{2n+1},H,J,\theta)$. Furthermore, from \cite{[Le]} and \cite{[JL1]}, the Webster scalar curvatures of $\theta$ and $\hat{\theta}$ have the following relationship: \begin{align} -b_n\Delta_\theta u+\rho u=\hat{\rho}u^a\label{2.10.} \end{align} where $\rho$ and $\hat{\rho}$ are the Webster scalar curvatures of $(M,H,J,\theta)$ and $(M,H,J,\hat{\theta})$ respectively, and $a=1+\frac{2}{n}$, $b_n=2+\frac{2}{n}$. For convenience, let $PC(\theta)$ denote the set of $C^\infty(M)$ functions which are Webster scalar curvatures of all $\hat{\theta}\in [\theta]$. In other words, $PC(\theta)$ is the set of $C^\infty(M)$ functions for which one can find a positive solution of \eqref{2.10.}.
Now we assume that $(M^{2n+1},H,J,\theta)$ is a compact pseudo-Hermitian manifold. Set \begin{align} L=-b_n\Delta_\theta+\rho\label{2.15...} \end{align} and let $\lambda_1$ be the first eigenvalue of the operator $L$, that is \begin{align}
\lambda_1=\inf_{u\in S^2_1(M)-\{0\}}\frac{\int_M (b_n|\nabla^Hu|_\theta^2+\rho u^2)\Psi^\theta}{\int_M u^2\Psi^\theta}, \label{2.12} \end{align}
where $ S^2_1(M)$ is the Folland-Stein space (cf. \cite{[FS]}, \cite{[RS]}), the norm $|\cdot|_\theta$ is induced by $g_\theta$. If $\psi$ is an eigenfunction corresponding to $\lambda_1$, then $L\psi=\lambda_1\psi$. Note that $\psi$ is $C^\infty$ and nowhere vanishing (cf. \cite{[Wa]}), so we may assume that $\psi>0$ and thus \begin{align}
\lambda_1=\inf_{0<u\in C^\infty(M)}\frac{\int_M (b_n|\nabla^Hu|_\theta^2+\rho u^2)\Psi^\theta}{\int_M u^2\Psi^\theta}. \end{align} Recall that CR Yamabe constant is given by \begin{align}
Y_M(\theta)&=\inf_{0<u\in C^\infty(M)}\frac{\int_M (b_n|\nabla^Hu|_\theta^2+\rho u^2)\Psi^\theta}{(\int_M u^{2+\frac{2}{n}}\Psi^\theta)^{\frac{n}{n+1}}}\label{2.14}\\
&=\inf_{\hat{\theta}\in [\theta]} \frac{\int_M \hat{\rho} \Psi^{\hat{\theta}} }{(\int_M \Psi^{\hat{\theta}} )^{\frac{n}{n+1}}}\label{2.15} \end{align} which is a CR invariant.
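The equality of \eqref{2.14} and \eqref{2.15} can be checked directly: for $\hat{\theta}=u^{\frac{2}{n}}\theta$ one has $\Psi^{\hat{\theta}}=u^{2+\frac{2}{n}}\Psi^{\theta}$, since every term of $(d\hat{\theta})^n$ containing $du\wedge\theta$ dies against the leading factor $\hat{\theta}$. Then, by \eqref{2.10.} and integration by parts,
\[
\int_M \hat{\rho}\,\Psi^{\hat{\theta}}
=\int_M \hat{\rho}\,u^{1+\frac{2}{n}}\cdot u\,\Psi^{\theta}
=\int_M \big(-b_n\Delta_\theta u+\rho u\big)u\,\Psi^{\theta}
=\int_M \big(b_n|\nabla^H u|_\theta^2+\rho u^2\big)\Psi^{\theta},
\]
while $\big(\int_M \Psi^{\hat{\theta}}\big)^{\frac{n}{n+1}}=\big(\int_M u^{2+\frac{2}{n}}\Psi^{\theta}\big)^{\frac{n}{n+1}}$, so the two infima agree.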
Given a pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$, we say that the structure $(\hat{H},\hat{J},\hat{\theta})$ is CR conformally equivalent to $(H,J,\theta)$ if there is a map $\Phi\in \text{Diff}(M)$ and $0<u\in C^\infty(M)$ such that \begin{align} \Phi^*\hat{\theta}=u^{\frac{2}{n}}\theta,\ \ \ \hat{H}=d\Phi(H),\ \ \ \hat{J}=d\Phi\circ J\circ (d\Phi)^{-1}.\label{2.19..} \end{align} Clearly, $\hat{J}$ is a complex structure on $\hat{H}$ and $\Phi: (M,H,J,\theta)\rightarrow (M, \hat{H}, \hat{J}, \hat{\theta})$ is a CR isomorphism, where the pseudo-Hermitian manifold $(M, \hat{H}, \hat{J}, \hat{\theta})$ is called a CR conformally equivalent deformation of $(M,H,J,\theta)$. Furthermore, the Webster scalar curvatures have the following relationship: \begin{align} -b_n\Delta_\theta u+\rho u=(\hat{\rho}\circ \Phi) u^a,\label{2.21.} \end{align} where $\hat{\rho}$ is the Webster scalar curvature of $(M, \hat{H}, \hat{J}, \hat{\theta})$, and $a=1+\frac{2}{n}$, $b_n=2+\frac{2}{n}$. Similarly, let $CE(\theta)$ denote the set of $C^\infty(M)$ functions which are the Webster scalar curvatures of $(M, \hat{H}, \hat{J}, \hat{\theta})$. In other words, $CE(\theta)$ is the set of $C^\infty(M)$ functions $\hat{\rho}$ for which one can find a map $\Phi\in\text{Diff}(M)$ such that \eqref{2.21.} admits a positive solution on $M$. Clearly, $PC(\theta)$ is a subset of $CE(\theta)$.
At the end of this section, we briefly recall the Folland-Stein spaces on the pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$ (cf. \cite{[FS]}, \cite{[RS]}), which are generalized Sobolev spaces compatible with the CR structure $(H,J)$. Let $\{X_\alpha\}_{\alpha=1}^{2n}$ be a local $g_\theta$-orthonormal real frame of $H$ defined on an open subset $U\subset M$. For any $k\in\mathbb{N}_+$ and $1<p<+\infty$, the Folland-Stein space on $U$ is defined by \begin{align} S^p_k(U)=\{f\in L^p(U) : X_{i_1}X_{i_2}\dots X_{i_s}f\in L^p(U), s\leq k, X_{i_j}\in \{X_{\alpha}\}\} \end{align} with the norms \begin{align}
\|f\|_{S^p_k(U)}=\|f\|_{L^p(U)}+\sum_{1\leq s\leq k} \|X_{i_1}X_{i_2}\dots X_{i_s}f\|_{L^p(U)} \end{align}
where the $L^p$-norm of $f$ is defined by $\|f\|_{L^p(U)}=\left(\int_U |f|^p \Psi^\theta\right)^{\frac{1}{p}}$. By the partition of unity, we can also define $S^p_k(\Omega)$ and $S^p_k(M)$, where $\Omega$ is any open subset of $M$.
\section{Pointwise CR conformal deformations with prescribed Webster scalar curvature} \label{section3} In this section, we will investigate the set $PC(\theta)$ on a compact pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$ in the cases $\lambda_1<0$, $\lambda_1=0$ and $\lambda_1>0$ respectively. First, we consider the case $\lambda_1<0$, for which we will use the method of upper and lower solutions on pseudo-Hermitian manifolds. For this purpose, we need the following existence and comparison results.
\begin{Lemma}\label{lemma4.1} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Let $L_1=-\Delta_\theta+f$, where $f\in C^\infty(M)$. Then \begin{enumerate}[(1)] \item If $f>0$, then $L_1: S^2_2(M)\rightarrow L^2(M)$ is invertible. \item The equation $L_1v=g$ has a weak solution if and only if $\langle g,w\rangle_{L^2}=0$ for any solution $w$ of $L_1w=0$. \item If $f>0$ and $v$ is a $S^2_1(M)$ function with \begin{align} L_1v\geq 0, \end{align} then $v\geq 0$.
\end{enumerate} \end{Lemma} \proof (1) Let us show that for any $g\in L^2(M)$, there is a unique solution $u\in S^2_2(M)$ such that $L_1u=g$. Here we can relax the requirement $u\in S^2_2(M)$ to $u\in S^2_1(M)$, because of the $L^2$ interior regularity result for $\Delta_\theta$. Set \begin{align} (u,v)=\langle L_1u,v\rangle_{L^2}=\int_M \left(\nabla^Hu\cdot\nabla^H v+ fuv\right) \Psi^\theta, \end{align} where $\cdot$ is the inner product induced by the Webster metric $g_\theta$ and $u,v\in S^2_1(M)$. By a simple computation, we obtain \begin{align}
(u,u)\geq c_1\|u\|_{S^2_1(M)}^2\label{3.3...} \end{align} and \begin{align}
(u,v)\leq c_2\|u\|_{S^2_1(M)}\|v\|_{S^2_1(M)} \end{align} where $c_1,c_2$ are two positive constants. Therefore, the space $S^2_1(M)$ with the inner product $(\cdot,\cdot)$ is a Hilbert space. By Cauchy-Schwarz inequality and \eqref{3.3...}, we have \begin{align}
|\langle g,v\rangle_{L^2}|\leq \|g\|_{L^2}\|v\|_{L^2}\leq c_1^{-\frac{1}{2}}\|g\|_{L^2}(v,v)^{\frac{1}{2}} \end{align} for any $v\in S^2_1(M)$, which implies $\langle g,v\rangle_{L^2}$ is a bounded linear functional of $v\in S^2_1(M)$. Applying the Riesz representation theorem, there is a unique $u\in S^2_1(M)$ such that \begin{align} \langle g,v\rangle_{L^2}=(u,v)=\langle L_1u,v\rangle_{L^2} \end{align} for any $v\in S^2_1(M)$.
(2) Since $M$ is compact and $f\in C^\infty(M)$, there is a positive constant $\lambda>0$ such that $f+\lambda>0$ on $M$. In view of part (1) of this lemma, the inverse operator $L_2=(L_1+\lambda)^{-1}:\ L^2(M)\rightarrow S^2_2(M)$ exists. Using $S^2_1(M)\subset \subset L^2(M)$ (cf. Theorem 3.15 of \cite{[DT]}, \cite{[Da]}) yields that $L_2: L^2(M)\rightarrow L^2(M)$ is completely continuous. The equation $L_1v=g$ is equivalent to $v-\lambda L_2v=L_2g$. Applying the Fredholm-Riesz-Schauder theory (cf. \cite{[BJS]}) and the facts $L_1^*=L_1, L_2^*=L_2$, we get that $v-\lambda L_2v=L_2g$ has a weak solution if and only if $\langle L_2g,w\rangle_{L^2}=0$, where $w$ satisfies $w-\lambda L_2w=0$, which is equivalent to $L_1w=0$. From \begin{align} \langle g,w\rangle_{L^2}=\langle L_2^{-1}L_2g,w\rangle_{L^2}=\langle L_2g,L_2^{-1}w\rangle_{L^2}=\lambda \langle L_2g, w\rangle_{L^2}, \end{align} it follows that $L_1v=g$ has a weak solution if and only if $\langle g, w\rangle_{L^2}=0$ for any solution $w$ of $L_1w=0$.
(3) Since $v\in S^2_1(M)$, $v_-=\min\{v,0\}\in S^2_1(M)$. Taking $-v_-$ as a test function of $L_1v\geq0$ yields \begin{align}
\int_M |\nabla^H v_-|_\theta^2 \Psi^\theta\leq -\int_M f(v_-)^2\Psi^\theta,\label{3.7...} \end{align} which implies $v_{-}=0$ since $f>0$. Hence, $v\geq 0$. \qed
Using the above lemma, we obtain the following result, which is a pseudo-Hermitian version of Lemma 2.6 of \cite{[KW1]}.
\begin{Lemma}\label{lemma4.2} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Assume that $f(x,u)\in C^\infty(M\times \mathbb{R})$. If there are two functions $u_+,u_-\in C^0(M)\cap S^2_1(M)$ satisfying \begin{equation} -\Delta_\theta u_++f(x, u_+)\geq 0\ \ \ \ \text{in}\ M,\label{4.1} \end{equation} \begin{equation} -\Delta_\theta u_-+f(x, u_-)\leq 0\ \ \ \ \text{in}\ M\label{4.2}\\ \end{equation} \begin{equation} u_+\geq u_- \ \ \ \ \text{in}\ M,\label{3.11...} \end{equation} then there exists a function $u\in C^\infty(M)$ such that \begin{equation} -\Delta_\theta u+f(x, u)=0\ \ \ \ \text{in}\ M,\label{4.4} \end{equation} \begin{equation} u_-\leq u\leq u_+ \ \ \ \ \text{in}\ M.\label{4.5} \end{equation} Here $u_+$ and $u_-$ are called the upper and lower solutions of \eqref{4.4} respectively. \end{Lemma} \proof Set $A_1=\min_M u_-$, $A_2=\max_M u_+$ and $I=[A_1, A_2]$. Since $f(x,u)\in C^\infty(M\times \mathbb{R})$, there exists a constant $\lambda>0$ such that $\tilde{f}(x,u)=-f(x,u)+\lambda u$ is increasing with respect to $u\in I$ for any fixed $x\in M$. In order to find a solution of \eqref{4.4} and \eqref{4.5}, we consider the sequence $\{u_k\}$ defined for $k\geq 1$ by \begin{equation} \left\{ \begin{aligned} &(-\Delta_\theta+\lambda) u_k=\tilde{f}(x, u_{k-1}) \\ &u_0=u_-. \end{aligned} \right. \label{4.6} \end{equation} By \eqref{4.1}, \eqref{4.2}, \eqref{3.11...} and \eqref{4.6}, we have \begin{equation} (-\Delta_\theta+\lambda)(u_+-u_1)\geq 0, \end{equation} \begin{equation} (-\Delta_\theta+\lambda) (u_1-u_-)\geq 0.\label{4.8} \end{equation} According to Lemma \ref{lemma4.1}, \begin{align} u_-\leq u_1\leq u_+. \end{align} Iterating the above procedure yields \begin{align} u_-\leq u_1\leq u_2\leq \ldots\leq u_+. \end{align} Set $u=\lim_{k\rightarrow\infty}u_k$, then $u$ is a weak solution of \eqref{4.4} and \eqref{4.5}. 
Since $\text{Im}\ u\subset I$ and $f(\cdot,\cdot)\in C^\infty(M\times \mathbb{R})$, we conclude that $f(x,u)\in L^p(M)$ with $p>2n+1$. By the regularity result for $\Delta_\theta$ (cf. Theorem 18 of \cite{[RS]}), we have $u\in S^p_2(M)$, and thus $f(x,u)\in S_2^p(M)$. Repeating this argument, we obtain $u\in S^p_{2k}(M)$ for any $k\in\mathbb{N}_+$. Therefore, $u\in C^\infty(M)$, since $S^p_{2k}(M)\subset W^{k,p}(M) \subset C^{k-1}(M)$ for any $k\in\mathbb{N}_+$ (cf. Theorem 19.1 of \cite{[FS]}), where $W^{k,p}(M)$ is the classical Sobolev space. \qed \begin{Remark} The authors of \cite{[NZ]} proved that when $f(x,u)=-b_n^{-1}(\rho u-\hat{\rho}u^a)$, equation \eqref{4.4} admits a weak solution satisfying \eqref{4.5} whenever it has a weak lower solution $u_-$ and a weak upper solution $u_+$. \end{Remark}
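The monotone scheme \eqref{4.6} is constructive, and its mechanism can be illustrated numerically. The sketch below is a toy analogue only: it replaces $\Delta_\theta$ by the flat periodic Laplacian on the circle and uses the hypothetical nonlinearity $f(x,u)=u^3-\cos x$, for which $u_+\equiv 1$ and $u_-\equiv -1$ are upper and lower solutions and $\lambda=3$ makes $\tilde f(x,u)=-f(x,u)+\lambda u$ nondecreasing on $[-1,1]$.

```python
import numpy as np

# Toy analogue of the monotone iteration (4.6): solve -u'' + f(x,u) = 0 on the
# periodic interval [0, 2*pi) with f(x,u) = u^3 - cos(x).  Here u_+ = 1 and
# u_- = -1 are upper/lower solutions, and lambda = 3 makes
# f~(x,u) = -f(x,u) + lambda*u nondecreasing in u on [-1, 1].
N = 64
x = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N

# Periodic second-difference Laplacian as a dense matrix.
Lap = (np.roll(np.eye(N), 1, axis=0) - 2 * np.eye(N)
       + np.roll(np.eye(N), -1, axis=0)) / h**2

lam = 3.0
A = lam * np.eye(N) - Lap          # the operator (-Delta + lambda)

u = -np.ones(N)                    # start from the lower solution u_0 = u_-
iterates = [u]
for _ in range(2000):
    rhs = -u**3 + np.cos(x) + lam * u   # f~(x, u_{k-1})
    u = np.linalg.solve(A, rhs)
    iterates.append(u)

residual = -Lap @ u + u**3 - np.cos(x)  # how well u solves the equation
```

The iterates increase monotonically from $u_-$ and stay below $u_+$: the discrete operator $\lambda-\Delta$ here is an inverse-positive M-matrix, which plays the role of Lemma \ref{lemma4.1} in the continuum argument.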
Using Lemma \ref{lemma4.2} and an argument similar to that of \cite{[KW1]}, we obtain
\begin{Theorem} \label{theorem3.4.} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold and $\hat{\rho}$ be a smooth negative function on $M$. Then $\hat{\rho}\in PC(\theta)$ if and only if $\lambda_1<0$. \end{Theorem} \proof If $\hat{\rho}\in PC(\theta)$, then there is a positive function $u\in C^\infty(M)$ satisfying the prescribed Webster scalar curvature equation \eqref{2.10.}, that is, $Lu=\hat{\rho}u^a$, where $L$ is given by \eqref{2.15...} and $a=1+\frac{2}{n}$. Let $\psi$ be the positive eigenfunction associated with $\lambda_1$ of $L$. Then \begin{align} \lambda_1\langle \psi, u\rangle_{L^2}=\langle L\psi, u\rangle_{L^2}=\langle \psi, Lu\rangle_{L^2}=\langle \psi, \hat{\rho} u^a\rangle_{L^2}<0, \end{align} from which it follows that $\lambda_1<0$. Conversely, if $\lambda_1<0$, then $Lu=\hat{\rho}u^a$ has a positive solution $u\in C^\infty(M)$ by the method of upper and lower solutions, and hence $\hat{\rho}\in PC(\theta)$. Indeed, let $u_+\equiv \alpha>0$, where $\alpha$ is a constant large enough that \begin{align} Lu_+-\hat{\rho}u_+^a=\alpha(\rho-\hat{\rho}\alpha^{a-1})\geq 0. \end{align} On the other hand, let $u_-=\beta\psi$, where $\beta>0$ is so small that $u_-\leq \alpha\equiv u_+$ and $u_-\leq\left( \frac{\lambda_1}{\inf_M \hat{\rho}}\right)^{\frac{1}{a-1}}$. Then \begin{align} Lu_-=\beta L\psi=\lambda_1\beta\psi=\lambda_1u_-\leq \hat{\rho} u_-^a.\label{3.16.} \end{align} Therefore, by Lemma \ref{lemma4.2}, there exists a smooth solution $u$ satisfying $Lu=\hat{\rho} u^a$ and $0<u_-\leq u\leq u_+$, i.e., $\hat{\rho}\in PC(\theta)$. \qed \begin{Remark}\label{remark3.5} The above proof shows that if $\lambda_1<0$, then for any given positive function $u\in C^\infty(M)$ there exists a lower solution $u_-\in C^\infty(M)$ of \eqref{2.10.} with $0<u_-<u$.
\end{Remark} \begin{Remark} By a flow method, Ho \cite{[Ho2]} proved that every negative function $\hat{\rho}$ belongs to $PC(\theta)$ if the CR Yamabe constant $Y_M(\theta)<0$ and $\dim_{\mathbb{R}} M=3$. \end{Remark} \begin{Remark} In \cite{[NZ]}, the authors obtained the following results. \begin{enumerate}[(1)] \item When $\hat{\rho}$ is a smooth nonpositive function on $(M^{2n+1},H,J,\theta)$ with $Y_M(\theta)<0$ such that the set $\{x\in M: \hat{\rho}(x)=0\}$ has positive measure, they gave a necessary and sufficient condition for $\hat{\rho}\in PC(\theta)$. \item If $\hat{\rho}$ is a smooth nonpositive function on $(M^{2n+1},H,J,\theta)$, then $\hat{\rho}$ is the Webster scalar curvature of at most one pseudo-Hermitian structure $\hat{\theta}\in [\theta]$. \end{enumerate} \end{Remark}
Making use of Lemma \ref{lemma4.2} again, we can establish the following property of the set $PC(\theta)$.
\begin{Proposition}\label{theorem3.7.} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold with $\lambda_1<0$. If $\hat{\rho}\in PC(\theta)$ and $\hat{\rho}_1\leq \hat{\rho}$, then $\hat{\rho}_1\in PC(\theta)$. \end{Proposition} \proof To prove $\hat{\rho}_1\in PC(\theta)$, we need only find a positive solution of \begin{align} -b_n\Delta_\theta u+\rho u=\hat{\rho}_1 u^a,\label{3.21.} \end{align} where $\rho$ is the Webster scalar curvature of $(M^{2n+1},H,J,\theta)$. We use the method of upper and lower solutions again. From Remark \ref{remark3.5}, we know that there exists a lower solution of the above equation. Hence, it suffices to find an upper solution of \eqref{3.21.}. Since $\hat{\rho}\in PC(\theta)$, there is a positive solution $u\in C^\infty(M)$ of $-b_n\Delta_\theta u+\rho u=\hat{\rho} u^a$. Since $\hat{\rho}_1\leq \hat{\rho}$, this $u$ is an upper solution of \eqref{3.21.}. Indeed, \begin{align} -b_n\Delta_\theta u+\rho u-\hat{\rho}_1u^a=(-b_n\Delta_\theta u+\rho u-\hat{\rho}u^a)+(\hat{\rho}-\hat{\rho}_1)u^a\geq 0. \end{align} Hence we obtain a positive solution of \eqref{3.21.}. \qed \begin{Remark} If $\hat{\rho}\in PC(\theta)$ and $\hat{\rho}_1=\alpha\hat{\rho}$ for some constant $\alpha>0$, then $\hat{\rho}_1\in PC(\theta)$ regardless of the sign of $\lambda_1$. Indeed, since $\hat{\rho}\in PC(\theta)$, there is a positive solution $u\in C^\infty(M)$ of \eqref{2.10.}, and then $\alpha^{-\frac{1}{a-1}}u$ is a solution of \eqref{3.21.}, so $\hat{\rho}_1\in PC(\theta)$. \end{Remark}
Now we turn to the case $\lambda_1=0$.
\begin{Proposition} \label{theorem3.8.} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Then $0\in PC(\theta)$ if and only if $\lambda_1=0$.\end{Proposition} \proof Assume that $0\in PC(\theta)$, that is, there is a positive solution $u\in C^\infty(M)$ of $Lu=-b_n\Delta_\theta u+\rho u=0$. From \begin{align} \lambda_1\langle \psi, u\rangle_{L^2}=\langle L\psi, u\rangle_{L^2}=\langle \psi, Lu\rangle_{L^2}=0 \end{align}
where $\psi$ is the positive eigenfunction associated with $\lambda_1$ of $L$, we deduce that $\lambda_1=0$. Conversely, if $\lambda_1=0$, then the positive eigenfunction $\psi$ satisfies $L\psi=0$, i.e., $\psi$ realizes zero Webster scalar curvature, so $0\in PC(\theta)$. \qed
By Proposition \ref{theorem3.8.}, if $\lambda_1=0$ then one can always find a pseudo-Hermitian structure $\hat{\theta}\in [\theta]$ with zero Webster scalar curvature. Hence, without loss of generality, we may restrict our attention to the case where $(M^{2n+1},H,J,\theta)$ already has zero Webster scalar curvature $\rho\equiv0$.
\begin{Proposition} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold with Webster scalar curvature $\rho\equiv 0$. If $0\not\equiv\hat{\rho}\in PC(\theta)$, then $\hat{\rho}$ must change sign on $M$ and $\int_M\hat{\rho}\ \Psi^\theta<0$. \end{Proposition} \proof Since $\hat{\rho}\in PC(\theta)$, there is a positive solution $u\in C^\infty(M)$ such that \begin{align} -b_n\Delta_\theta u=\hat{\rho}u^{a},\label{3.27...} \end{align} where $b_n=2+\frac{2}{n}$, $a=1+\frac{2}{n}$. Hence, \begin{align} \int_M \hat{\rho}u^{a}\Psi^\theta=-b_n\int_M \Delta_\theta u\ \Psi^\theta=0. \end{align} Therefore, $\hat{\rho}$ must change sign on $M$ since $u>0$ and $\hat{\rho}\not\equiv 0$. Furthermore, multiplying \eqref{3.27...} by $u^{-a}$ and integrating by parts yield \begin{align}
\int_M\hat{\rho}\Psi^\theta=-b_n\int_Mu^{-a}\Delta_\theta u\ \Psi^\theta=-ab_n\int_Mu^{-a-1}|\nabla^Hu|^2\Psi^\theta<0. \end{align} \qed
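Both sign constraints can be observed in a toy discrete model (the flat periodic Laplacian standing in for $\Delta_\theta$; the case $n=1$, so $b_n=4$ and $a=3$, and the particular $u$ below are illustrative assumptions): prescribing any positive non-constant $u$ and reading off $\hat{\rho}$ from \eqref{3.27...} produces a function that changes sign and has negative mean.

```python
import numpy as np

# Discrete illustration of the sign constraints: on a periodic grid we pick a
# positive "conformal factor" u, define rho_hat := -b_n * (Lap u) / u^a as in
# (3.27), and check that rho_hat changes sign and has negative mean.
# (Flat 1-d Laplacian as a stand-in for Delta_theta; n = 1, b_n = 4, a = 3.)
N = 128
x = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N
Lap = (np.roll(np.eye(N), 1, axis=0) - 2 * np.eye(N)
       + np.roll(np.eye(N), -1, axis=0)) / h**2

b_n, a = 4.0, 3.0
u = np.exp(0.3 * np.cos(x))        # any positive non-constant function
rho_hat = -b_n * (Lap @ u) / u**a  # the curvature this u prescribes

mean_rho = rho_hat.mean()          # discrete analogue of the integral
```

The discrete analogue of the integration by parts survives verbatim: summation by parts shows $\sum_i (\mathrm{Lap}\,u)_i\, u_i^{-a} > 0$ for non-constant $u$ because $u\mapsto u^{-a}$ is decreasing, which forces the mean of $\hat{\rho}$ to be negative.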
In the case $\lambda_1>0$, the following analogue of Theorem \ref{theorem3.4.} holds.
\begin{Proposition} \label{theorem3.9.} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Then $\lambda_1>0$ if and only if there is a positive function $\hat{\rho}\in C^\infty(M)$ such that $\hat{\rho}\in PC(\theta)$. \end{Proposition} \proof Let $\psi$ be the positive eigenfunction associated with $\lambda_1$ of $L$. If there is a positive function $\hat{\rho}\in C^\infty(M)$ such that $\hat{\rho}\in PC(\theta)$, then $Lu=\hat{\rho}u^a$ has a positive solution $u$, and \begin{align} \lambda_1\langle \psi, u\rangle_{L^2}=\langle L\psi, u\rangle_{L^2}=\langle \psi, Lu\rangle_{L^2}=\langle \psi, \hat{\rho}u^a\rangle_{L^2}>0, \end{align} which implies that $\lambda_1>0$. Conversely, if $\lambda_1>0$, then \begin{align} L\psi=\lambda_1\psi=(\lambda_1\psi^{1-a})\psi^a. \end{align} Picking $\hat{\rho}=\lambda_1\psi^{1-a}>0$ gives $L\psi=\hat{\rho}\psi^a$, so $\hat{\rho}\in PC(\theta)$. \qed
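A finite-dimensional Schr\"odinger analogue illustrates this construction (the flat periodic Laplacian and the particular $\rho$ below are illustrative stand-ins, not the operator of this paper): for $L=-\Delta+\rho$ with $\rho>0$ on a discrete circle, $\lambda_1>0$, the ground state $\psi$ is positive, and $\hat{\rho}=\lambda_1\psi^{1-a}$ satisfies $L\psi=\hat{\rho}\psi^a$.

```python
import numpy as np

# Finite-dimensional analogue of the construction above: for L = -Lap + rho
# with rho > 0 (so lambda_1 > 0), the ground state psi is positive, and
# rho_hat := lambda_1 * psi**(1-a) is a positive function with
# L psi = rho_hat * psi**a.  (Flat periodic Laplacian stands in for the
# CR-invariant sublaplacian; n = 1 so a = 3.)
N = 64
x = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N
Lap = (np.roll(np.eye(N), 1, axis=0) - 2 * np.eye(N)
       + np.roll(np.eye(N), -1, axis=0)) / h**2

rho = 1.0 + 0.5 * np.cos(x)        # a positive "Webster curvature"
L = -Lap + np.diag(rho)            # symmetric, real spectrum

evals, evecs = np.linalg.eigh(L)   # eigh returns eigenvalues in ascending order
lam1, psi = evals[0], evecs[:, 0]
psi = psi if psi.sum() > 0 else -psi   # the ground state has one sign; fix it

a = 3.0                            # a = 1 + 2/n with n = 1
rho_hat = lam1 * psi**(1.0 - a)    # the prescribed curvature of the proof
```

The identity $L\psi=\hat{\rho}\psi^a$ is algebraically immediate from $L\psi=\lambda_1\psi$; the numerically non-trivial ingredient is the strict positivity of the ground state, the analogue of the positive eigenfunction $\psi$ used throughout this section.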
Combining Theorem \ref{theorem3.4.}, Proposition \ref{theorem3.8.} and Proposition \ref{theorem3.9.} with the definition of the CR Yamabe constant, we can give the proof of Theorem \ref{theorem3.10}. \proof[\bf{Proof of Theorem \ref{theorem3.10}}] First, we show assertion $(A)$. Clearly, ``$(a1)\Leftrightarrow(a2)$" has already been proved in Theorem \ref{theorem3.4.}. The statement ``$(a3)\Leftrightarrow(a4)$" can be deduced from \eqref{2.15} and Theorem 3.4 of \cite{[JL1]}. It remains to prove the statement ``$(a2)\Leftrightarrow(a3)$". The implication ``$(a2)\Rightarrow(a3)$" is trivial. Let us consider ``$(a3)\Rightarrow(a2)$". Assume that $\hat{\rho}_0\in \{\hat{\rho}\in C^\infty(M):\hat{\rho}<0\} \cap PC(\theta)$. It follows from Theorem \ref{theorem3.4.} that $\lambda_1<0$. By the equivalence ``$(a1)\Leftrightarrow(a2)$", we see that $(a2)$ holds.
Next, we consider the assertion $(B)$. Obviously, ``$(b1)\Leftrightarrow(b2)$" has been proved in Proposition \ref{theorem3.8.}. ``$(b3)\Rightarrow(b2)$" follows from Theorem 3.4 of \cite{[JL1]}. Now let us consider the case ``$(b2)\Rightarrow(b3)$". By \eqref{2.15}, we have $Y_M(\theta)\leq 0$. If $Y_M(\theta)<0$, then $\lambda_1<0$ by the results in $(A)$ of this theorem. However, according to ``$(b1)\Leftrightarrow(b2) $", $0\in PC(\theta)$ implies $\lambda_1=0$, which leads to a contradiction. Therefore, $Y_M(\theta)=0$.
Finally, we treat assertion $(C)$. The equivalence ``$(c1)\Leftrightarrow(c2)$" is obtained from Proposition \ref{theorem3.9.}. ``$(c3)\Rightarrow(c2)$" follows from the solvability of the CR Yamabe problem (cf. \cite{[Ga1]}, \cite{[GY]}, \cite{[JL1]}). For ``$(c2)\Rightarrow(c3)$", using the equivalence ``$(c1)\Leftrightarrow(c2)$", we have $\lambda_1>0$. Combining ``$(a1)\Leftrightarrow(a4)$" and ``$(b1)\Leftrightarrow(b3)$" yields $Y_M(\theta)>0$. \qed
From Theorem \ref{theorem3.10}, we can easily see that \begin{Corollary}\label{corollary3.11} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Then $\lambda_1$ and $Y_M(\theta)$ have the same sign or are both zero, and thus the sign of $\lambda_1$ is a CR invariant. \end{Corollary} \begin{Remark} Corollary \ref{corollary3.11} is implicit in the argument of \cite{[Wa]}, although it is not stated explicitly there. Indeed, it was proved directly by using the transformation law $(3.1)$ of \cite{[JL1]} under CR pointwise conformal deformations and the solvability of the CR Yamabe problem. \end{Remark}
\section{CR conformally equivalent deformations with prescribed Webster scalar curvature} In this section, we will determine the set $CE(\theta)$ on a compact pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$ with $\lambda_1<0$, $\lambda_1=0$ and $\lambda_1>0$ respectively.
Let us consider a second-order quasilinear degenerate elliptic differential operator $T$ on the compact pseudo-Hermitian manifold $(M^{2n+1}, H, J, \theta)$: \begin{align}
Tu=u^{-a}\left(-b_n\Delta_\theta u+\rho u \right),\label{3.1} \end{align} where $a=1+\frac{2}{n}$, $b_n=2+\frac{2}{n}$ and $\rho\in C^\infty(M)$. The linearization of $T$ at a given positive function $u_0\in C^\infty(M)$ is a second-order linear degenerate elliptic differential operator
\begin{align}
T'(u_0)v&=\left.\frac{d}{dt}\right |_{t=0}T(u_0+tv)\notag\\
&=b_nu_0^{-a}\left\{-\Delta_\theta v+\left(a\frac{\Delta_\theta u_0}{u_0}+\frac{1-a}{b_n}\rho\right) v\right\},\label{4.2.}
\end{align} where $v\in S^p_2(M)$. Set \begin{align} A(u_0)v=-\Delta_\theta v+\left( a\frac{\Delta_\theta u_0}{u_0}+\frac{1-a}{b_n}\rho\right)v,\label{3.14} \end{align} which is a linear self-adjoint degenerate elliptic operator with $\ker T'(u_0)=\ker A(u_0)$.
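The linearization formula \eqref{4.2.} can be sanity-checked against a numerical directional derivative in a flat discrete model (the periodic Laplacian replaces $\Delta_\theta$; the choices $n=1$, $u_0$, $v$ and $\rho$ below are arbitrary illustrative data).

```python
import numpy as np

# Check the linearization formula (4.2): in a flat discrete model (periodic
# Laplacian in place of Delta_theta, n = 1 so a = 3, b_n = 4), compare the
# analytic T'(u0)v with a centered finite difference of T(u0 + t*v) in t.
N = 64
x = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N
Lap = (np.roll(np.eye(N), 1, axis=0) - 2 * np.eye(N)
       + np.roll(np.eye(N), -1, axis=0)) / h**2

a, b_n = 3.0, 4.0
rho = np.cos(x)

def T(u):
    # T u = u^{-a} (-b_n Delta u + rho u), cf. (3.1)
    return u**(-a) * (-b_n * (Lap @ u) + rho * u)

u0 = 2.0 + 0.5 * np.sin(x)         # an arbitrary positive smooth function
v = np.cos(2 * x)                  # an arbitrary direction

# Analytic linearization, cf. (4.2):
analytic = b_n * u0**(-a) * (-(Lap @ v)
                             + (a * (Lap @ u0) / u0 + (1 - a) / b_n * rho) * v)

t = 1e-6
numeric = (T(u0 + t * v) - T(u0 - t * v)) / (2 * t)
```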
\begin{Lemma}\label{lemma3.1} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Let $L_3: S^p_2(M)\rightarrow L^p(M)$ be the operator defined as in \eqref{4.2.} with $0<u_0\in C^\infty(M)$ and $\rho\in C^\infty(M)$. Assume that $p>2n+1$. If $\ker L_3=0$, then \begin{align}
\|v\|_{S^p_2(M)}\leq C\|L_3v\|_{L^p(M)}\label{3.5} \end{align} for any $v\in S^p_2(M)$, where $C$ is a positive constant independent of $v$. Therefore, the operator $L_3: S^p_2(M)\rightarrow L^p(M)$ is bijective with a continuous inverse. \end{Lemma} \proof Since $M$ is compact, there exists a constant $C_1>0$ such that \begin{align}
\|v\|_{S^p_2(M)}\leq C_1\left(\|L_3v\|_{L^p(M)}+\|v\|_{L^p(M)} \right)\label{3.6} \end{align} for any $v\in S^p_2(M)$, which can be deduced from the $L^p$ interior regularity results for $\Delta_\theta$ (cf. Theorem 18 of \cite{[RS]}) by using a partition of unity on $M$. In order to get \eqref{3.5}, it is sufficient to prove that \begin{align}
\|v\|_{L^p(M)}\leq C_2\left\|L_3v\right\|_{L^p(M)} \end{align}
for any $v\in S^p_2(M)$, where $C_2>0$ is a constant independent of $v$. If not, there is a sequence $\{v_n\}\subset S^p_2(M)$ such that $\|v_n\|_{L^p(M)}=1$ but $\left\|L_3v_n\right\|_{L^p(M)}\rightarrow 0$ as $n\rightarrow +\infty$. Then by \eqref{3.6}, we have
\begin{align}
\|v_n\|_{S^p_2(M)}\leq C_3,
\end{align}
where the constant $C_3>0$ is independent of $n$. Using the compactly embedding theorem $S^p_2(M)\subset W^{1,p}(M)\subset\subset C^0(M)$ (cf. Theorem 19.1 of \cite{[FS]}) yields that there exists a subsequence $v_{n_k}$ of $v_n$ and a function $v\in C^0(M)$ such that $\lim_{k\rightarrow +\infty}\|v_{n_k}-v\|_{C^0(M)}=0$, and thus $\lim_{k\rightarrow +\infty}\|v_{n_k}-v\|_{L^p(M)}=0$, where $W^{1,p}(M)$ is the classical Sobolev space with $p>2n+1$. According to \eqref{3.6} and the triangle inequality, we have
\begin{align}
\|v_{n_i}-v_{n_j}\|_{S^p_2(M)}\leq C_1&\left( \|L_3v_{n_i}\|_{L^p(M)}+\|L_3v_{n_j}\|_{L^p(M)}\right.\notag\\
&\left.+\|v_{n_i}-v\|_{L^p(M)}+\|v-v_{n_j}\|_{L^p(M)}\right)\rightarrow 0
\end{align}
as $i,j\rightarrow \infty$, i.e., $\{v_{n_k}\}$ is a Cauchy sequence in $S^p_2(M)$, so $\lim_{k\rightarrow +\infty} \|v_{n_k}-v\|_{S^p_2(M)}=0$. By the continuity of the operator $L_3: S^p_2(M)\rightarrow L^p(M)$, we obtain that
\begin{align}
\|L_3v\|_{L^p(M)}=\lim_{k\rightarrow +\infty}\|L_3v_{n_k}\|_{L^p(M)}=0.
\end{align}
Hence, $L_3v=0$ in $L^p(M)$. By $\ker L_3=0$, we get $v=0$ in $S^p_2(M)$. However, from $\|v_{n_k}\|_{L^p(M)}=1$ for any $k$, we deduce that $\|v\|_{L^p(M)}=1$, which leads to a contradiction.
We now prove the last conclusion of this lemma, namely, that $L_3: S^p_2(M)\rightarrow L^p(M)$ is bijective with a continuous inverse. The condition $\ker L_3=0$ gives injectivity, so it suffices to show that $L_3v=f$ has a solution for any $f\in L^p(M)$. Since $C^\infty(M)$ is dense in $L^p(M)$, there is a sequence $\{f_j\}\subset C^\infty(M)$ such that $\lim_{j\rightarrow \infty}\|f_j-f\|_{L^p(M)}=0$. By Lemma \ref{lemma4.1} and the regularity results in \cite{[Xu-2]}, there exists $v_j\in C^\infty(M)$ such that $L_3v_j=f_j$. Using \eqref{3.5} and $\lim_{j\rightarrow \infty}\|f_j-f\|_{L^p(M)}=0$ yields \begin{align}
\|v_i-v_j\|_{S^p_2(M)}\leq C\|L_3v_i-L_3v_j\|_{L^p(M)}=C\|f_i-f_j\|_{L^p(M)}\rightarrow 0 \end{align}
as $i,j\rightarrow +\infty$, so $\{v_i\}$ is a Cauchy sequence in $S^p_2(M)$. Hence, $v_j\rightarrow v$ in $S^p_2(M)$ due to the completeness of $(S^p_2(M), \|\cdot\|_{S^p_2(M)})$. Since $L_3: S^p_2(M)\rightarrow L^p(M)$ is a continuous map,
\begin{align}
\|L_3v-f\|_{L^p(M)}&\leq \|L_3v-L_3v_j\|_{L^p(M)}+ \|L_3v_j-f\|_{L^p(M)}\notag\\
&=\|L_3v-L_3v_j\|_{L^p(M)}+ \|f_j-f\|_{L^p(M)}\rightarrow 0
\end{align}
as $j\rightarrow +\infty$. Consequently, $L_3v=f$. Hence, $L_3: S^p_2(M)\rightarrow L^p(M)$ is a bijective continuous linear map, and thus has a continuous inverse by Banach's open mapping theorem. \qed
Using the inverse function theorem for Banach spaces and the regularity results for degenerate elliptic equations (cf. \cite{[Xu-2]}), we have the following theorem. \begin{Theorem}\label{theorem3.2}
Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold, and let $T$ be the operator defined as in \eqref{3.1} with $\rho\in C^\infty(M)$. Assume that $0<u_0\in C^\infty(M),\ p>2n+1$. If the linearization $T'(u_0):S^p_2(M)\rightarrow L^p(M)$ is injective (and thus is invertible), then there exists a constant $\eta>0$ such that for any function $f\in C^\infty(M)$ with $\|f-T(u_0)\|_{L^p(M)}<\eta$, there is a positive function $u\in C^\infty(M)$ satisfying $T(u)=f$. \end{Theorem}
According to Theorem 4.10 of Chapter 5 of \cite{[Ka]}, the spectrum of the self-adjoint operator $A(u)$ depends continuously on $u$. Furthermore, from the proof of Lemma \ref{lemma4.1} (2), we know that the resolvent of the self-adjoint operator $A(u)$ is compact for any positive function $u\in C^\infty(M)$. Therefore, the spectrum of $A(u)$ is discrete, and every eigenvalue is of finite multiplicity. In addition, given a function $z\in C^\infty(M)$, the self-adjoint operator $A(u+tz)$ depends analytically on $t$ for $|t|$ small enough, and hence so do the eigenvalues and eigenfunctions of $A(u+tz)$ (cf. \cite{[KMR]}). By an argument similar to that of Theorem 4.5 and Lemma 4.6 in \cite{[KW2]}, we have the following perturbation theorem for $T'$.
\begin{Theorem}\label{theorem3.4}
The second-order linear degenerate elliptic operator $T'(u): S^p_2(M)\rightarrow L^p(M)$ is bijective on an open dense subset of the set $\{u\in C^\infty(M):\ u>0\}$. \end{Theorem}
For our purpose in this section, we need the following approximation theorem (cf. Theorem 2.1 of \cite{[KW2]}). \begin{Theorem}\label{theorem4.4.} Let $N$ be a connected manifold of dimension $n\geq 2$ and let $f\in C(N)\cap L^p(N)$. Then a function $g\in L^p(N)$ is in the $L^p$-closure of $O_f$ if and only if $\inf_N f\leq g(x)\leq \sup_N f$ for almost all $x\in N$. Here $O_f$ is the orbit of $f$ under the group of diffeomorphisms of $N$. \end{Theorem}
Making use of the above three theorems yields the following key lemma.
\begin{Lemma}\label{lemma3.5} For a given smooth function $\hat{\rho}$ on a compact pseudo-Hermitian manifold $(M^{2n+1},H,J,\theta)$ with the Webster scalar curvature $\rho$, if $\min_{M} \hat{\rho}<C\rho<\max_M\hat{\rho}$ for some constant $C>0$, then $\hat{\rho}\in CE(\theta)$. \end{Lemma} \proof Let $u_0\equiv1$. According to Theorem \ref{theorem3.4}, for any $\epsilon >0$, there exists a smooth function $u_1$ so close to $u_0$ that \begin{align}
\|T(u_1)-\rho\|_{\infty}=\|T(u_1)-T(u_0)\|_{\infty}<\epsilon, \end{align} and $T'(u_1)$ is invertible. Picking $\epsilon$ sufficiently small and using the assumption $C^{-1}\min_{M} \hat{\rho}<\rho<C^{-1}\max_M\hat{\rho}$ yield \begin{align} C^{-1}\min_{M} \hat{\rho}<T(u_1)<C^{-1}\max_M\hat{\rho}. \end{align} By Theorem \ref{theorem4.4.}, we obtain that for any $\eta>0$, there exists a diffeomorphism $\Phi$ of $M$ such that \begin{align}
\|C^{-1}\hat{\rho}\circ \Phi-T(u_1)\|_{L^p(M)}<\eta. \end{align} Making use of Theorem \ref{theorem3.2}, we get that there exists a positive solution $u\in C^\infty(M)$ of $T(u)=C^{-1}\hat{\rho}\circ \Phi$. Set $v=C^{-\frac{1}{a-1}}u$, then $v$ is a positive solution of $T(v)=\hat{\rho}\circ\Phi$. Therefore, $\hat{\rho}\in CE(\theta)$. \qed
Using the above key lemma, we can give the proof of Theorem \ref{theorem1.2}. \proof[\bf{Proof of Theorem \ref{theorem1.2}}] $(1)$ Since $\lambda_1<0$, Theorem \ref{theorem3.4.} yields a pseudo-Hermitian structure $\theta_1\in [\theta]$ whose Webster scalar curvature equals $-1$. If $\hat{\rho}$ is negative somewhere, then $\min_M \hat{\rho}<-C<\max_M \hat{\rho}$ for some constant $C>0$. According to Lemma \ref{lemma3.5}, we have $\hat{\rho}\in CE(\theta_1)=CE(\theta)$. Conversely, if $\hat{\rho}\in CE(\theta)$, then there exist a diffeomorphism $\Phi$ of $M$ and a positive function $u\in C^\infty(M)$ such that $Lu=(\hat{\rho}\circ \Phi) u^a$. Let $\psi$ be the positive eigenfunction associated with the eigenvalue $\lambda_1$ of $L$; then \begin{align} 0>\lambda_1\langle \psi, u\rangle_{L^2}=\langle L\psi, u\rangle_{L^2}=\langle \psi, Lu\rangle_{L^2}=\langle \psi, (\hat{\rho}\circ \Phi) u^a\rangle_{L^2}.\label{3.35} \end{align} Consequently, $\hat{\rho}$ must be negative somewhere on $M$.
$(2)$ From Proposition \ref{theorem3.8.}, it follows that $0\in PC(\theta)\subset CE(\theta)$. Arguing as in part $(1)$, we obtain the conclusion easily.
$(3)$ According to Theorem \ref{theorem3.10} and the results about CR Yamabe problem (cf. \cite{[Ga1]}, \cite{[GY]}, \cite{[JL1]}), $\lambda_1>0$ implies that there is a positive constant $\rho_1\in PC(\theta)$. By an argument similar to part (1) of this theorem, we can obtain that $\hat{\rho}\in CE(\theta)$ if and only if $\hat{\rho}$ is positive somewhere. \qed
Before the end of this section, we point out that the sign of $\lambda_1$ is invariant under CR conformally equivalent deformations. \begin{Theorem} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. If the structures $(\hat{H},\hat{J},\hat{\theta})$ and $(H,J,\theta)$ are CR conformally equivalent, then $\lambda_1(\theta)$ and $\lambda_1(\hat{\theta})$ have the same sign or are both zero. \end{Theorem} \proof Since $(\hat{H},\hat{J},\hat{\theta})$ and $(H,J,\theta)$ are CR conformally equivalent, there is a map $\Phi\in \text{Diff}(M)$ and $0<u\in C^\infty(M)$ such that \begin{align} \Phi^*\hat{\theta}=u^{\frac{2}{n}}\theta,\ \ \ \hat{H}=d\Phi(H),\ \ \ \hat{J}=d\Phi\circ J\circ (d\Phi)^{-1}. \end{align} Set $\tilde{\theta}=u^{\frac{2}{n}}\theta$; then $\Phi: (M,H,J,\tilde{\theta})\rightarrow (M, \hat{H}, \hat{J}, \hat{\theta})$ is a CR isomorphism with $\Phi^*\hat{\theta}=\tilde{\theta}$. Therefore, $\lambda_1(\hat{\theta})=\lambda_1(\tilde{\theta})$. From Corollary \ref{corollary3.11}, it follows that $\lambda_1(\hat{\theta})$ and $\lambda_1(\theta)$ have the same sign or are both zero. \qed
\section*{Appendix} In this section, we give an alternative proof of Theorem \ref{theorem3.4.} by following the spirit of \cite{[KW2]}. Although the expressions are different, the following theorem and Theorem \ref{theorem3.4.} are completely equivalent.
\begin{Theorem} Let $(M^{2n+1},H,J,\theta)$ be a compact pseudo-Hermitian manifold. Then $S=\{f\in C^\infty(M):\ f<0\}\subset \text{Im}\ T$ if and only if $\lambda_1<0$, where $T$ is as in \eqref{3.1} with $\rho\in C^\infty(M)$, and $\text{Im}\ T=\{Tu: 0<u\in S^p_2(M)\}$. \end{Theorem} \proof If $S\subset \text{Im}\ T$, then $-1\in \text{Im}\ T$, i.e., $T(u)=-1$ for some positive function $u\in C^\infty(M)$. Thus, $Lu=-u^a$. Let $\psi$ be the positive eigenfunction of $L$ with respect to $\lambda_1$, namely, $L\psi=\lambda_1\psi$. Then \begin{align} \lambda_1\langle \psi, u\rangle_{L^2}=\langle L \psi, u\rangle_{L^2}=\langle \psi, Lu\rangle_{L^2}=-\langle \psi, u^a\rangle_{L^2}<0, \end{align}
which implies $\lambda_1<0$.
For the converse, assume that $\lambda_1<0$. Set $K=S\cap \text{Im}\ T$. From $L\psi=\lambda_1\psi$, we have $T(\psi)=\lambda_1\psi^{1-a}<0$, thus $\lambda_1\psi^{1-a}\in K$. Consequently, $K$ is nonempty. Clearly, $S$ is connected. In order to prove $K=S$, it is sufficient to prove that $K$ is both an open and a closed subset of $S$. For openness, we will show that for any $u>0$, $T(u)\in K$ implies $\ker T'(u)=\ker A(u)=0$, where $A(u)$ is defined by \eqref{3.14}; then $K$ is an open subset of $S$ by Theorem \ref{theorem3.2}. Let $\mu_1$ be the first eigenvalue of $A(u)$ and $\phi$ be the corresponding positive eigenfunction. By \eqref{3.14}, we have
\begin{align}
\mu_1\langle \phi, u\rangle_{L^2}=\langle A(u)\phi, u\rangle_{L^2}=\langle \phi, A(u)u\rangle_{L^2}=\frac{1-a}{b_n}\langle \phi, T(u)u^a \rangle_{L^2}>0
\end{align}
which gives $\mu_1>0$, and so $\ker A(u)=0$. For closedness, we assume that $f_j\in K$ and $f_j\xrightarrow{C^0} f\in S$; we need to prove that $f\in K$, i.e., that there is $u\in S^p_2(M)$ such that $T(u)=f$. Since $f_j\in K$, there exists a function $0<u_j\in C^\infty(M)$ satisfying $T(u_j)=f_j$. Let $w_j=\log{\frac{u_j}{\psi}}$, where $\psi$ is the positive eigenfunction associated with the first eigenvalue $\lambda_1$ of $L$. Then $w_j$ satisfies
\begin{align}
-b_n\Delta_\theta w_j-b_n \nabla^H w_j\cdot\left(\nabla^H w_j+2\frac{\nabla^H\psi}{\psi} \right)=-\lambda_1+f_j\psi^{a-1}e^{(a-1)w_j}
\end{align}
where $\cdot$ is the inner product induced by the Webster metric $g_\theta$. Considering the maximum and minimum of $w_j$ and using the classical maximum principle, it is easy to show that there are two constants $m_1, m_2>0$ independent of $j$ such that $0<m_1\leq u_j\leq m_2$. Hence, applying Lemma \ref{lemma3.1} to the operator $L_4=-\Delta_\theta+ \mathrm{id}$, we have \begin{align}
\|u_j\|_{S^p_2(M)}\leq C\|L_4u_j\|_{L^p(M)}=C\left\|\frac{1}{b_n}(f_ju_j^a-\rho u_j)+u_j\right\|_{L^p(M)}\leq \hat{C} \end{align}
where $C, \hat{C}$ are constants independent of $j$. Using the compact embedding $S^p_2(M)\subset W^{1,p}(M)\subset\subset C^0(M)$ ($p>2n+1$, cf. Theorem 19.1 of \cite{[FS]}), there exists a subsequence $\{u_{j_k}\}$ such that $u_{j_k}\xrightarrow{C^0} u$ as $k\rightarrow +\infty$, where $u>0$ in $M$ since $0<m_1\leq u_{j_k}\leq m_2$. Moreover, the subsequence $\{u_{j_k}\}$ is a Cauchy sequence in $S^p_2(M)$, because
\begin{align}
\|u_{j_k}-u_{j_l}\|_{S^p_2(M)}&\leq C \|L_4(u_{j_k}-u_{j_l})\|_{L^p(M)}\notag\\
&=C\left\|(u_{j_k}-u_{j_l})+\frac{1}{b_n}(f_{j_k}u_{j_k}^a-f_{j_l}u_{j_l}^a+\rho u_{j_l}-\rho u_{j_k})\right\|_{L^p(M)}\notag\\
&\rightarrow 0,
\end{align}
as $k,l \rightarrow +\infty$. Therefore, $u_{j_k}\rightarrow u$ in $S^p_2(M)$ as $k\rightarrow +\infty$. Letting $k\rightarrow\infty$ in $T(u_{j_k})=f_{j_k}$ and using the continuity of $T: S^p_2(M)\rightarrow L^p(M)$, we obtain $T(u)=f$, so $f\in K$.
\qed
Yuxin Dong \ \
School of Mathematical Sciences
Fudan University
Shanghai, 200433, P. R. China
[email protected] \ \ \
Yibin Ren
College of Mathematics and Computer Science
Zhejiang Normal University
Jinhua, 321004, Zhejiang, P.R. China
[email protected]
Weike Yu
School of Mathematical Sciences
Fudan University
Shanghai, 200433, P. R. China
[email protected]
\end{document} |
\begin{document}
\title{Characterisation of multi-level quantum coherence without ideal measurements} \date{\today} \author{Benjamin Dive$^{*,1,2}$, Nikolaos Koukoulekidis$^{*,1}$, Stefanos Mousafeiris$^1$, and Florian Mintert$^1$} \affiliation{$^*$These two authors contributed equally to this work} \affiliation{$^1$Department of Physics, Imperial College, London SW7 2AZ, UK} \affiliation{$^2$Institute of Quantum Optics and Quantum Information, Austrian Academy of Sciences, Vienna 1090, Austria}
\begin{abstract} Coherent superpositions are one of the hallmarks of quantum mechanics and are vital for any quantum mechanical device to outperform the classically achievable. Generically, superpositions are verified in interference experiments, but despite their longstanding central role we know very little about how to extract the number of coherently superposed amplitudes from a general interference pattern. A fundamental issue is that performing a phase-sensitive measurement is as challenging as creating a coherent superposition, so that assuming a perfectly implemented measurement for verification of quantum coherence is hard to justify. In order to overcome this issue, we construct a coherence certifier derived from simple statistical properties of an interference pattern, such that imperfections in the measurement can never lead to an over-estimate of the number of coherently superposed amplitudes. We numerically test how robust this measure is to under-estimating the coherence in the case of imperfect state preparation or measurement, and find it to be very resilient in both cases. \end{abstract}
\maketitle
\section{Introduction}
The superposition principle allows wave mechanics, in particular quantum mechanics, to feature dynamics that are unthinkable for classical particles. The prospect of exploiting quantum coherence for applications in quantum computation, communication, metrology, and thermodynamics \cite{ref:Stahlke, ref:Knill, Zhang2017a, ref:Lostaglio, ref:Korzekwa} has resulted in numerous activities towards the classification and quantification of quantum coherence \cite{1367-2630-16-3-033007, Baumgratz2014a, Girolami2014, VonPrillwitz2015, ref:Winter, ref:Marvian, ref:Streltsov2, Streltsov2017}.
Those developments are inspired by earlier work in the theory of entanglement. There is, however, a central difference between entanglement and coherence that poses a fundamental challenge in its experimental characterisation. To create entanglement it is necessary to use coherent interactions between particles that go beyond Local Operations and Classical Communications (LOCC). It can however be detected using only local measurements and classical processing of the resulting data, e.g., in terms of Bell inequalities, witnesses or state tomography \cite{Horodecki2009, Friis2018}. Thus, verifying entanglement requires less challenging experimental tools than preparing it.
This distinction between resources needed for preparation and detection does not typically exist for coherence. Coherence is always defined with respect to a basis and this is generically the only basis in which measurements can be performed. Creating coherence requires an operation that maps a basis state into a coherent superposition of basis states; detecting coherence requires a measurement in such a superposition basis. As the latter typically cannot be done, it is instead replaced with an operation that maps the state back to an incoherent one (essentially the reverse of the preparation step), followed by a projection onto one of the basis states. This results in the awkward situation that any measurement that is supposed to verify the successful preparation of a coherent superposition is reliable only under the assumption that coherent superpositions can be created.
As we show here, this is not an insurmountable obstacle. We can find suitable figures of merit that offer a detailed characterisation of coherence properties, but that do not require any assumption on the ability to realise operations that can create coherent superpositions.
Doing this first requires a rigorous definition of the aspects of coherence that we want to certify. For any given reference basis $\{\ket{j}\}$, one can define pure states $\ket{\psi}=\sum_j\psi_j\ket{j}$ with at least $k$ non-vanishing amplitudes $\psi_j$ to be $k$-coherent. Extending this, a mixed state $\rho$ is $k$-coherent if every decomposition $\rho=\sum_ip_i\prj{\psi_i}$ into pure states $\ket{\psi_i}$ with $p_i\ge 0$ contains at least one $k$-coherent pure state \cite{1367-2630-16-3-033007}. We denote the set of $k$-coherent states for a given Hilbert space by $C_k$, and have the natural relation $C_{k+1} \subset C_k $ for $k\ge1$, where $C_1$ is the full state space.
Following this definition, the concept of $k$-coherence is closely analogous to genuine $k$-partite entanglement. Most of the prior literature on quantum coherence has not yet addressed this fine classification of different classes of coherence, but there are figures of merit that characterize $k$-coherence quantitatively \cite{1367-2630-16-3-033007,Ringbauer2018} or qualitatively \cite{VonPrillwitz2015}. Almost all existing approaches rely on the assumption that measurements can be performed reliably in a basis other than that of the $1$-coherent states, which is highly problematic for the reasons described above. The only exception we are aware of is one of the proposals in \cite{Ringbauer2018}, where the coherence is instead bounded by the probability of success of a quantum game, which comes with its own assumptions about the dynamics on the system and the measurements performed. Our method does away with these assumptions and instead requires only the acquisition of relative phases and the ability to perform some rank-$1$ measurement afterwards.
We envision an experiment similar to the famous Ramsey sequence. This involves a preparation unitary $U_p$ such that $U_p\ket{0}=\sum_j\psi_j\ket{j}=\ket{\psi}$, followed by an evolution $U(t)$ generated by the system Hamiltonian $H$ for a time $t$. This is followed by an effective projection onto a state $\ket{\chi}= \sum_j \chi_j \ket{j}$ which is realised by the unitary evolution $U_r$, defined by $U_r^\dagger\ket{0} =\ket{\chi}$, and a subsequent projection onto the basis state $\ket{0}$. As such, the probability of getting a `click' in the detector for an initial pure state $\ket{0}$ is given by $p(t)=|\matel{\chi}{U(t)}{\psi}|^2$. This defines the interference pattern that is observed.
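The sequence above is simple to simulate. The following minimal numerical sketch (our own illustration; the dimension $d=3$ and the diagonal form of $H$ are assumptions consistent with the harmonic spectrum adopted below) evaluates $p(t)=|\matel{\chi}{U(t)}{\psi}|^2$ directly:

```python
import numpy as np

# Illustrative system: diagonal Hamiltonian H = diag(0, 1, ..., d-1).
d = 3
energies = np.arange(d)

def ramsey_pattern(psi, chi, times):
    """Interference pattern p(t) = |<chi| e^{-iHt} |psi>|^2 for diagonal H."""
    amps = np.conj(chi) * psi                            # chi_j^* psi_j
    # <chi| e^{-iHt} |psi> = sum_j chi_j^* psi_j e^{-i E_j t}
    return np.abs(np.exp(-1j * np.outer(times, energies)) @ amps) ** 2

# Full-contrast fringe: preparing and projecting onto the same superposition.
w = np.ones(d) / np.sqrt(d)
t = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
pattern = ramsey_pattern(w, w, t)
```

For $\ket{\psi}=\ket{\chi}$ the pattern starts at $p(0)=1$, the full-contrast fringe one expects from a Ramsey sequence.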
The coherence of $\ket{\psi}$, with respect to the eigenbasis of $H$, can be characterised in terms of the statistical moments of this probability distribution, $M_q=\langle p^q\rangle$, where the average is taken over the period of the dynamics. When $\ket{\chi}$ is promised to be an equal superposition of all the eigenstates of $H$ (a state denoted by $\ket{W}$), these moments provide a rigorous indicator of $k$-coherence. That is, there is a threshold value such that moments above this threshold can only be achieved with states that are at least $k$-coherent \cite{VonPrillwitz2015}. The intuition behind this is that the interference patterns of more highly coherent states exhibit higher peaks and deeper troughs than those of less coherent states, analogous to the way the interference pattern of a diffraction grating differs from that of a double slit. This behaviour can be detected with the statistical moments, higher moments being more sensitive to the more extreme peaks and troughs.
As argued above, it is highly problematic to assume that the desired projection onto the state $\ket{W}$ can be performed reliably. Assuming that such a projection was performed when a different measurement was actually realised can suggest a higher degree of coherence than is present. This is easily seen in the extreme case $\ket{\chi} = \ket{0}$: then $p(t)$ is maximised by the incoherent initial state $\ket{0}$, and since this holds for all $t$, all moments also adopt their maximum value for this state. Erroneously implementing a measurement that includes the projection onto the state $\ket{0}$ rather than onto a balanced superposition of all basis states is certainly not a realistic experimental scenario, but it illustrates that uncontrollable experimental imperfections can lead to wrong conclusions if assumptions on the type of measurement are made. In order to have trusted certification, we require a function that can identify coherence in the case of suitable measurements, but that does not produce false positives.
In this paper we introduce a family of functions which achieve this, based on ratios of moments of an interference pattern. We will show that these are convex functions of the quantum state, which makes them directly applicable to mixed states. The maximum value that such functions can adopt for a $k$-coherent state will be shown to be bounded from above independently of the Hamiltonian $H$ and the projector $\ket{\chi}\bra{\chi}$. Experimental limitations in the realisation of the desired measurement will thus not result in wrong conclusions about the coherence properties of the state, but will in the worst case only result in a failure to exceed the threshold.
The construction of these coherence certifiers is presented in \secref{sec:math}, where their properties are also discussed. The technical aspects of the proofs are left to the appendices. In the cases where the exact threshold values are not known, we use numerical methods to approximate them; a discussion of these results is given in \secref{sec:num_threshold}. This is followed in \secref{sec:exp} by a discussion of the ability of the proposed framework to verify $k$-coherence in the presence of various imperfections, and we conclude in \secref{sec:conclusion}.
\section{Coherence certifier} \label{sec:math} To talk in precise terms about the coherence certifiers we introduce, it is necessary to specify exactly the range of systems under consideration. The coherence of a state is defined with respect to a basis, and the natural basis to use for a Ramsey-like experiment is the eigenbasis of the system Hamiltonian. We make no restrictions on this Hamiltonian other than it being time-independent and having a discrete and commensurate spectrum (all finite-dimensional Hamiltonians have discrete spectra and are $\epsilon$-close to commensurate). The spectrum may contain degeneracies but, as degenerate levels always have the same relative phase, these are never picked up by the interference pattern, so the amount of coherence would only be underestimated. As we are merely lower bounding the coherence, this is not a problem. In order to simplify the analysis it is therefore convenient to ignore these degeneracies and, furthermore, to expand the Hilbert space of the system by adding new levels such that the spectrum of the Hamiltonian is equally spaced. As this does not affect the evolution of the physical state, there is no loss of generality in only considering Hamiltonians \begin{align} H = \sum_{n} n \ket{n}\bra{n}\ , \label{eq:BasicHamiltonian} \end{align} with the spectrum of a harmonic oscillator. For the certifier of coherence we introduce below, any anharmonicity in the physical Hamiltonian will lead to less coherence being measured, and therefore cannot result in a false certification of the amount of coherence present in the state.
As discussed in the introduction, the basic objects we use to study coherence are the moments of the interference pattern. The $n$\textsuperscript{th} moment is \begin{align} \label{eq:defineMoment} M_n(\rho, \ket{\chi}) &= \frac{1}{2\pi} \int_0^{2\pi} p(t)^n \,dt \\
&= \frac{1}{2\pi} \int_0^{2\pi} \braket{\chi | e^{-i H t} \,\rho\, e^{i H t} | \chi}^n \,dt, \nonumber \end{align} where the integration period is set by the unit energy spacing chosen in \eqref{eq:BasicHamiltonian}. The key object of interest is the ratio \begin{equation} R_n(\rho, \ket{\chi}) = \frac{M_n}{M_1^{n-1}} \end{equation} of $M_n$ to $M_1^{n-1}$ for $n>2$. In particular, we will focus on $R_3$ as it is the lowest order which can act as a coherence certifier.
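As a sanity check, the moments and the ratio $R_3$ can be evaluated by discretising the time average (a numerical sketch of our own, assuming the harmonic Hamiltonian of \eqref{eq:BasicHamiltonian} and real overlaps): for $\ket{\psi}=\ket{\chi}=\ket{W_2}$ this reproduces the 2-coherent bound $R_3 = 5/4$ of Tab.~\ref{tab:R3Bounds}, and $\ket{W_3}$ gives $141/81 \approx 1.74$, consistent with Tab.~\ref{tab:ratios}.

```python
import numpy as np

def R(alpha, n, num_t=4096):
    """R_n = M_n / M_1^(n-1) for real overlaps alpha_j = psi_j chi_j^*,
    evaluated on the harmonic spectrum E_j = j by a discretised time average."""
    t = np.linspace(0.0, 2 * np.pi, num_t, endpoint=False)
    p = np.abs(np.exp(-1j * np.outer(t, np.arange(len(alpha)))) @ alpha) ** 2
    return (p ** n).mean() / p.mean() ** (n - 1)

r3_w2 = R(np.array([0.5, 0.5]), 3)        # -> 1.25, the 2-coherent bound
r3_w3 = R(np.array([1, 1, 1]) / 3, 3)     # -> 141/81, approx. 1.7407
```

The uniform time grid integrates the trigonometric polynomials exactly, so these values are accurate to machine precision.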
\begin{table}[t] \small \begin{center}
\begin{tabular}{| c | c | c |} \hline $k$-coherence & $R_3$ Threshold & $R_3$ Best Known\\ \hline $1$ & $1$ & $1$\\ $2$ & $5/4$ & $1.25$\\ $3$ & $179/96 \approx 1.86$ & $1.77$\\ \hline \end{tabular} \end{center} \caption{The maximum values that $R_3(\rho, \ket{\chi})$ can attain, under any Hamiltonian, for any $\ket{\chi}$ and for any $\rho \in C_k$, as a function of $k$. Exceeding these values therefore means that the state must be at least $(k+1)$-coherent. The middle column gives an analytic upper bound on this maximum value. The last column is the highest value we found in a thorough numerical optimisation.} \label{tab:R3Bounds} \end{table}
A central property of these functions is their convexity under the mixing of states \begin{equation} R_n\left(\lambda\rho_1 + (1-\lambda)\rho_2\right) \le \lambda R_n\left(\rho_1\right) + (1-\lambda)R_n\left(\rho_2\right), \label{eq:Convexity} \end{equation} with the same $\ket{\chi}$ throughout, as proven in \appref{sec:ProofConvex}. As $C_k$ is itself convex, it is highly desirable for our certifier to also have this property as it implies that $R_n$ is maximised for pure states, {\it i.e.} \begin{equation} \max_{\ket{\psi}\bra{\psi} \in C_k,\,\ket{\chi}} R_n(\ket{\psi}, \ket{\chi}) \ge \max_{\rho \in C_k,\,\ket{\chi}} R_n(\rho, \ket{\chi}), \end{equation} where the ket in the first argument of $R_n$ stands for the corresponding pure state. Because of this, the maximum found for pure states also applies to mixed states directly.
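The inequality \eqref{eq:Convexity} is easy to probe numerically. The sketch below (our own check, using a random three-level example and the diagonal Hamiltonian of \eqref{eq:BasicHamiltonian}) compares both sides for a random mixture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
t = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
U = np.exp(-1j * np.outer(t, np.arange(d)))     # U[t, j] = e^{-i E_j t}

def R3(rho, chi):
    # p(t) = <chi| e^{-iHt} rho e^{+iHt} |chi> for diagonal H
    a = U * np.conj(chi)                         # a[t, j] = chi_j^* e^{-i E_j t}
    p = np.real(np.einsum('tj,jk,tk->t', a, rho, np.conj(a)))
    return (p ** 3).mean() / p.mean() ** 2

def random_pure():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho1, rho2 = random_pure(), random_pure()
chi = (np.ones(d) + 0j) / np.sqrt(d)
lam = 0.4
lhs = R3(lam * rho1 + (1 - lam) * rho2, chi)
rhs = lam * R3(rho1, chi) + (1 - lam) * R3(rho2, chi)
```

In every random instance the mixed-state value `lhs` stays below the convex combination `rhs`, as the proof in \appref{sec:ProofConvex} guarantees.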
Another useful feature of $R_n$ is that its maximum is reached when the measurement projector and the initial state are the same, {\it i.e.} \begin{equation} \max_{\ket{\psi}\bra{\psi} \in C_k} R_n(\ket{\psi}, \ket{\psi}) \ge \max_{\ket{\psi}\bra{\psi} \in C_k,\,\ket{\chi}} R_n(\ket{\psi}, \ket{\chi}). \end{equation} This is not necessary for a coherence certifier, but is nevertheless desirable for two reasons. Firstly, it aligns with the intuition of a Ramsey-like interferometer, where the highest contrast is obtained by projecting onto the initial state; this is also what was found in prior work where $\ket{\chi}$ was assumed to be the equal superposition state $\ket{W}$ \cite{VonPrillwitz2015}. Secondly, it further simplifies the calculation of the threshold values: rather than maximising over the $4d$ real variables that define $\ket{\psi}$ and $\ket{\chi}$, it is enough to consider only the $d$ products $\psi_i\chi_i^*$, which can always be chosen to be real. This is proven in \appref{sec:ProofEqualStateMeas}.
Of particular importance is the need for $R_n$ to be hierarchical, such that it obeys the strict inequality \begin{equation} \max_{\rho \in C_{k+1},\,\ket{\chi}} R_n(\rho, \ket{\chi}) > \max_{\rho \in C_k,\,\ket{\chi}} R_n(\rho, \ket{\chi}), \end{equation} where the maximum for a given $k$ serves as the threshold value for certifying $(k+1)$-coherence. As proven in \appref{sec:ProofThresholdValues}, this holds for $k=1,2$ and $3$ independently of the dimension of the system Hilbert space. Observing a value above these thresholds, given in \tableref{tab:R3Bounds}, therefore proves that the state is at least $2$-, $3$-, or $4$-coherent respectively.
The assumption so far is that the measurement is projective. In practice, however, the realization of the unitary $U_r$ can be affected by noise, and repetitions of the experiment that are required to obtain good statistics will suffer from fluctuations in $U_r$.
The signal on the measurement device will thus not reliably indicate projection onto the state $\ket{\chi}$, but rather a projection onto one of several states $\ket{\chi_j}$, occurring at random with probability $q_j$. In this case the recorded interference pattern reads \begin{align} p(t)&=\sum_j q_j p_j(t),\quad\text{where}\\ p_j(t)&=\bra{\chi_j} \, U(t)\rho\, U^\dagger(t) \ket{\chi_j}\ , \end{align} and the definition of moments given above in \eqref{eq:defineMoment} generalises to \begin{align} M_n(\rho, \sigma_{\chi})&= \frac{1}{2\pi} \int_0^{2\pi} \left(\operatorname{Tr}\left[e^{-i H t} \,\rho\, e^{i H t}\,\sigma_\chi\right]\right)^n \,dt \nonumber \end{align} with $\sigma_\chi=\sum_jq_j\prj{\chi_j}$. In exactly the same way that $R_n(\rho, \sigma_\chi)=M_n(\rho, \sigma_{\chi})/M_1(\rho, \sigma_{\chi})^{n-1}$ is convex in the first argument $\rho$ for any given $\sigma_\chi$, it is also convex in the second argument for any given $\rho$, such that \begin{equation} R_n(\rho, \sigma_\chi) \le \sum_jq_jR_n(\rho,\ket{\chi_j}) \ , \label{eq:notprojective} \end{equation} for any state $\sigma_\chi$ and any convex decomposition $\sigma_\chi=\sum_jq_j\prj{\chi_j}$ into pure states. Since no projective measurement can overestimate the degree of coherence, no fluctuations in the realisation of such a measurement can result in a false positive either.
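The bound \eqref{eq:notprojective} can be spot-checked numerically as well (our own sketch; the trace formula assumes the diagonal Hamiltonian of \eqref{eq:BasicHamiltonian} and a random two-component measurement mixture):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3
t = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
U = np.exp(-1j * np.outer(t, np.arange(d)))      # U[t, j] = e^{-i E_j t}

def R3(rho, sigma):
    # p(t) = Tr[e^{-iHt} rho e^{+iHt} sigma] = sum_jk U_tj rho_jk sigma_kj U*_tk
    p = np.real(np.einsum('tj,jk,kj,tk->t', U, rho, sigma, np.conj(U)))
    return (p ** 3).mean() / p.mean() ** 2

def rand_proj():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

rho = rand_proj()                                 # random pure initial state
chi1, chi2 = rand_proj(), rand_proj()             # fluctuating measurement states
q = np.array([0.7, 0.3])
sigma = q[0] * chi1 + q[1] * chi2                 # effective measurement operator

lhs = R3(rho, sigma)
rhs = q[0] * R3(rho, chi1) + q[1] * R3(rho, chi2)
```

The noisy-measurement value `lhs` never exceeds the weighted average `rhs` of the projective values, in agreement with \eqref{eq:notprojective}.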
\section{Numerical threshold values} \label{sec:num_threshold}
While the previous section provides analytically proven threshold values for certifying up to $4$-coherence, we can go to much higher coherence levels numerically. We do this by maximising the value of $R_n$ over all $\rho \in C_k$ and all $\ket{\chi}$, for given values of $n$ and $k$. This problem is substantially simplified using the results of the previous section, which let us set $\rho = \ket{\psi}\bra{\psi}$ and $\ket{\chi} = \ket{\psi}$, with real coefficients in the eigenbasis of the Hamiltonian. We are confident that the results found this way are an excellent approximation of the true maxima, as they are stable under different parameterisations of the problem and for different initial conditions in the numerical optimisation. These numerical results can also be compared to the upper bounds given by the analytic results, thereby illustrating how tight they are.
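A minimal version of such an optimisation can be sketched as follows (this crude stochastic search is our own illustration, not the optimiser used for the quoted results; it parameterises the state by $\alpha_p=\psi_p^2$ on the simplex):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)

def R_real(alpha, n=3):
    # psi = chi with real amplitudes: alpha_j = psi_j^2, sum(alpha) = 1
    p = np.abs(np.exp(-1j * np.outer(t, np.arange(len(alpha)))) @ alpha) ** 2
    return (p ** n).mean() / p.mean() ** (n - 1)

def maximise_R(k, n=3, iters=3000):
    """Stochastic hill climbing over k-coherent states on adjacent levels."""
    x = rng.random(k)
    best_alpha, best = x / x.sum(), R_real(x / x.sum(), n)
    for _ in range(iters):
        y = np.abs(x + rng.normal(scale=0.05, size=k))
        alpha = y / y.sum()
        val = R_real(alpha, n)
        if val > best:
            x, best_alpha, best = y, alpha, val
    return best_alpha, best

alpha3, r3_max = maximise_R(3)
```

For $k=3$ the search climbs above the equal-superposition value $1.74$ while staying below the analytic bound $179/96$, consistent with the tabulated optimum of $1.77$.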
\begin{table}[h] \footnotesize
\begin{center}
\begin{tabular}{|c|c|c|c|l|}
\hline
$R_n$ & $k$ & $R_n(\ket{\Psi_k})$ & $R_n(\ket{W_k})$ & $\boldsymbol{\Psi}_k$ \\
\hline
~&~&~&~&~ \\[-1.2ex]
$R_3$ & 2 & 1.25 & $1.25$ & $(0.50, 0.50)$ \\[1ex]
~ & 3 & 1.77 & $1.74$ & $(0.31, 0.38, 0.31)$ \\[1ex]
~ & 4 & 2.32 & $2.27$ & $(0.22, 0.28, 0.28, 0.22)$ \\[1ex]
~ & 5 & 2.88 & $2.80$ & $(0.17, 0.21, 0.23, 0.21, 0.17)$ \\[1ex]
\hline
~&~&~&~&~ \\[-1.2ex]
$R_4$ & 2 & 2.19 & 2.19 & $(0.50, 0.50)$ \\[1ex]
~ & 3 & 4.61 & 4.56 & $(0.32, 0.36, 0.32)$ \\[1ex]
~ & 4 & 8.02 & 7.90 & $(0.23, 0.27, 0.27, 0.23)$ \\[1ex]
~ & 5 & 12.42 & 12.21 & $(0.18, 0.21, 0.22, 0.21, 0.18)$ \\[1ex]
\hline
~&~&~&~&~ \\[-1.2ex]
$R_5$ & 2 & 3.94 & 3.94 & $(0.50, 0.50)$ \\[1ex]
~ & 3 & 12.39 & 12.28 & $(0.32, 0.36, 0.32)$ \\[1ex]
~ & 4 & 28.71 & 28.39 & $(0.24, 0.26, 0.26, 0.24)$ \\[1ex]
~ & 5 & 55.52 & 54.84 & $(0.19, 0.21, 0.21, 0.21, 0.19)$ \\
\hline
\end{tabular}
\end{center}
\caption{Numerical results for the first three hierarchical ratios for up to $5$-coherent states, showing their behaviour as coherence certifiers. The values of $R_n$ are given for the equal superposition state $\ket{W_k}$, and for the state $\ket{\Psi_k}$ which maximises the value (in all cases the projector $\ket{\chi}$ is taken equal to the state itself, as we know that this maximises $R_n$). The states $\ket{W_k}$ and $\ket{\Psi_k}$ only have adjacent energy levels populated; spacing these levels out always results in a decrease in $R_n$ (unless they are spaced out equally, in which case they are effectively adjacent levels of a different harmonic Hamiltonian). $\ket{\Psi_k}$ is found through numerical optimisation and is stable under different parametrisations of the problem and different initial points. The squared amplitudes of $\ket{\Psi_k}$ are listed as a vector to show how they differ from the uniform value $\tfrac{1}{k}$.} \label{tab:ratios} \end{table}
These numerical results are listed in \tableref{tab:ratios}, which also shows the state $\ket{\Psi_k}$ that gives the maximum value of $R_n$ over all states in $C_k$, and how this value compares to that of the equally balanced state $\ket{W_k} = \tfrac{1}{\sqrt{k}}\sum_{i=1}^k \ket{i}$. These states are, surprisingly, not the same, although they share the property of having $k$ adjacent basis states populated while all others have zero amplitude. $\ket{\Psi_k}$ concentrates its population towards the middle of the occupied energy levels. One way to understand this is to note that interferences between basis states with small energy differences contribute more to $R_n$ than those with large energy differences. As the basis states in the middle of the spectrum are close to more of the populated basis states, the function is maximised by weighting them more than the others. This intuition is more visible in the re-parametrisation of $R_n$ carried out in \appref{sec:ProofThresholdValues}. Furthermore, the larger $k$ and the smaller $n$, the more pronounced the difference between $\ket{W_k}$ and $\ket{\Psi_k}$.
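The middle-weighting effect can be checked directly: with $\alpha_p = \psi_p^2$ and a discretised time average (our own sketch, using the squared amplitudes quoted in the table), the profile $(0.31, 0.38, 0.31)$ beats the uniform state:

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)

def R3(alpha):
    # alpha_j = psi_j^2 for psi = chi real, harmonic spectrum E_j = j
    p = np.abs(np.exp(-1j * np.outer(t, np.arange(len(alpha)))) @ alpha) ** 2
    return (p ** 3).mean() / p.mean() ** 2

r3_psi = R3(np.array([0.31, 0.38, 0.31]))   # middle-weighted Psi_3 profile
r3_w = R3(np.array([1, 1, 1]) / 3)          # equal superposition W_3
```

The middle-weighted profile evaluates to roughly $1.77$ versus $1.74$ for $\ket{W_3}$, matching the corresponding entries of \tableref{tab:ratios}.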
\begin{figure}
\caption{Comparison of numerical and analytical threshold values. The crosses show the maximum value that we found for $R_3$ for $\rho\in C_k$ as a function of $k$. The solid blue line is a linear fit to these, showing that they are nearly equally spaced. The dashed orange line shows, for comparison, the value attained by the equal superposition state $\ket{W_k}$ under the optimal measurement, given by $\frac{4 + 5k^2 + 11k^4}{20k^3}$ (derived in \appref{sec:StraightLineR3}), which is asymptotically linear. The horizontal lines are the analytic threshold values. For the 2-coherent case, the equal superposition and optimal states overlap, and lie immediately below the threshold for certifying 3-coherence. For the 3-coherent case and higher, there is a finite but small gap between the equally balanced and optimal states. The threshold for 4-coherence also does not lie exactly above the maximum for 3-coherence, but the gap is again very small and, as we are lower bounding the amount of coherence present, this only means that $R_3$ is occasionally too cautious.}
\label{fig:BoundsComparison}
\end{figure}
In all cases of interest, however, the difference in the $R_n$ value between $\ket{\Psi_k}$ and $\ket{W_k}$ is relatively small, as can be seen in \figref{fig:BoundsComparison}. This figure also compares these values to the analytic thresholds, showing how tight the latter are. Furthermore, the maximal values grow linearly (tested up to $k=30$, not shown on the graph). This constant spacing means that $R_3$ would also be able to distinguish between more highly coherent states. The functions $R_4$ and $R_5$ appear to grow even faster, potentially making them more useful in such circumstances, although the additional experimental difficulty in accurately reconstructing higher moments should not be neglected \cite{Flusser2009}.
\section{Verification of $k$-coherence in the presence of imperfections} \label{sec:exp}
In this section we demonstrate that the present approach can verify coherence properties even in the presence of substantial imperfections in the projective measurement, and that coherence can be detected even in highly mixed states.
\subsection{Measurement tolerance} \label{sec:tolerance}
Having proved that an imperfect measurement can never overestimate the coherence of a state, it is important to demonstrate that it does not underestimate it too strongly either. We quantify this implication of measurement imperfections here. To this end, we produce a sample of random faulty measurements and estimate the deviation from the perfect measurement required to push the value of $R_3$ for the optimal $k$-coherent state below the threshold, at which point $k$-coherence is no longer verified.
We define the states $\ket{\chi_k(\tau)}$ characterising a faulty projective measurement in terms of a random Hamiltonian $H_r$ via the relation \begin{equation}
\label{eq:meas_evolution}
\ket{\chi_k(\tau)} = \mathcal{U}(\tau) \ket{\Psi_k} \coloneqq e^{iH_r\tau}\ket{\Psi_k}\ , \end{equation} with $\ket{\Psi_k}$ given in Tab.~\ref{tab:ratios}; the random Hamiltonians $H_r$ are drawn from the Gaussian Unitary Ensemble (GUE)~\cite{ref:Fyodorov}.
The degree to which the projective measurement deviates from the ideal measurement can be quantified by the norm \begin{equation}
\label{eq:meas_norm}
D(\tau) \coloneqq \big\|\ket{\chi_k(\tau)} - \ket{\chi_k(0)}\big\|^2 = \sum\limits_{i=1}^d \Big| \sum\limits_{j=1}^d \mathcal{U}_{ij}(\tau)\, \chi_j - \chi_i \Big|^2\ ,
\end{equation} for each realisation of $H_r$.
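Sampling such faulty measurements is straightforward (our own sketch; the target state, dimension, and GUE normalisation are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4

def gue(dim):
    """Random Hermitian matrix in the style of the Gaussian Unitary Ensemble."""
    X = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (X + X.conj().T) / 2

def perturbed_meas(chi, Hr, tau):
    """chi_k(tau) = exp(+i Hr tau) chi, via the eigendecomposition of Hr."""
    w, V = np.linalg.eigh(Hr)
    U = (V * np.exp(1j * w * tau)) @ V.conj().T
    return U @ chi

chi0 = np.ones(d) / np.sqrt(d)          # placeholder for the ideal state
Hr = gue(d)

def D(tau):
    """Squared deviation between faulty and ideal measurement states."""
    return np.linalg.norm(perturbed_meas(chi0, Hr, tau) - chi0) ** 2
```

Evaluating $R_3$ with `perturbed_meas` in place of the ideal projector, and binning by `D(tau)`, reproduces the kind of tolerance curve shown in \figref{fig:tolerance}.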
\figref{fig:tolerance} depicts the ensemble average of $R_3(\ket{\Psi_k},\ket{\chi_k(\tau)})$, with the average performed over $100$ random Hamiltonians, as a function of $D(\tau)$ with black lines for $k=3$ and $k=4$. The blue and pink lines depict the width of the underlying distribution, and the horizontal black dashed lines depict the threshold values for the detection of 3-coherence and 4-coherence. As one can see, a substantial value of $\tau$ is required before the recorded values of $R_3$ drop below the threshold values. As one might have expected, the verification of 3-coherence can tolerate large deviations, but even for the verification of 4-coherence a deviation $D\le 0.3$ is typically good enough.
\begin{figure}
\caption{Ensemble average (black) of $R_3$ for the states $\ket{\Psi_k}$ ($k=4$ at the top, $k=3$ at the bottom) obtained with faulty measurements, as a function of the measurement deviation $D$ defined in~\eqref{eq:meas_norm}. The standard deviation of the random distribution is depicted with solid (blue) and dotted (pink) lines centred around the average. The values of $D$ for which the threshold values (Tab.~\ref{tab:R3Bounds}) are reached are depicted as turquoise crosses (ensemble average) with solid horizontal red lines for the width of the distribution.}
\label{fig:tolerance}
\end{figure}
\subsection{Decoherence tolerance} \label{sec:decoh}
Since our central aim is the ability to verify coherence in the presence of experimental imperfections, the big remaining question is how much decoherence can be present before the present criteria fail to verify a desired level of coherence. Repetitions of the experiment may result in some instances of initially mixed states, which are then evolved through the system. The effect of such faulty state preparation is to reduce the visibility of the interference pattern, thus rendering the task of bounding coherence more challenging.
We explore the impact of decoherence by introducing the Werner-like state~\cite{ref:Werner} \begin{equation}
\label{eq:werner}
\rho_W = (1-\lambda)\ket{W_k}\bra{W_k} + \frac{\lambda}{k} \mathbb{I}_k\ , \end{equation} and exploring the ability of the ratios to distinguish its level of coherence. The Werner-like state is a mixture of the equal superposition $k$-coherent state and the totally incoherent state $\mathbb{I}_k/k$. The degree of mixedness is varied with the parameter $\lambda \in [0,1]$: for $\lambda = 0$ the state is pure and $k$-coherent, while $\lambda = 1$ corresponds to the completely mixed state. There is therefore a threshold $\lambda_{\text{dec}}(q)$ above which the state lies in $C_q$ but no longer in $C_{q+1}$, and as the noise increases further the coherence drops through successive such thresholds. For a $k$-dimensional system, these thresholds $\lambda_{\text{dec}}(q)$ are given by \begin{equation}
\label{eq:decoh_thr}
\lambda_{\text{dec}}(q) = \frac{k-q}{k-1}, \quad 1 \leq q \leq k, \end{equation} as proved in \appref{sec:decoh_thr} and also discussed in Ref.~\cite{Ringbauer2018}. Similarly, we can define threshold values $\lambda_{\text{thr}}^{(n)}(q)$ at which a given certifier $R_n$ fails to verify $(q+1)$-coherence in a system from its interference pattern. The values $\lambda_{\text{thr}}^{(n)}(k-1)$ at which a given certifier $R_n$ fails to identify $k$-coherence are depicted in \tableref{tab:r3_dthr} for $R_3$, $R_4$ and $R_5$, and numerical expressions for $\lambda_{\text{dec}}(q)$ are given for comparison.
As one can see, the threshold values for the detection of $k$-coherence are larger the smaller $k$ is. $k$-coherence can thus be identified even for rather strongly mixed states, as long as $k$ is sufficiently low. $R_5$ can identify coherence for larger values of $\lambda$ ({\it i.e.} more strongly mixed states) than $R_4$ for any value of $k$, and $R_4$ outperforms $R_3$ in the same sense. If a given $R_n$ fails to verify $k$-coherence in a strongly mixed state, one can thus resort to a certifier with a larger value of $n$ and find better performance. Even for $R_5$, however, the threshold value $\lambda_{\text{dec}}(k-1)$ is about $50\%$ larger than $\lambda_{\text{thr}}^{(5)}(k-1)$, and higher moments would be required in order to identify $k$-coherence in very strongly mixed states.
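For the Werner-like state measured along $\ket{W_k}$, the decay of $R_3$ with $\lambda$ can be traced directly (a numerical sketch of our own for $k=3$): the value starts at $\approx 1.74$ for $\lambda=0$ and falls below the 2-coherent threshold $5/4$ near $\lambda \approx 0.18$, consistent with $\lambda_{\text{thr}}^{(3)}(2)$ in \tableref{tab:r3_dthr}.

```python
import numpy as np

k = 3
t = np.linspace(0.0, 2 * np.pi, 2048, endpoint=False)
amps = np.ones(k) / k        # overlaps <W_k|j><j|W_k> = 1/k on the harmonic spectrum

def R3_werner(lam):
    # p(t) = (1-lam) |<W_k| e^{-iHt} |W_k>|^2 + lam/k, measured along |W_k>
    p_w = np.abs(np.exp(-1j * np.outer(t, np.arange(k))) @ amps) ** 2
    p = (1 - lam) * p_w + lam / k
    return (p ** 3).mean() / p.mean() ** 2
```

Scanning `R3_werner` over $\lambda$ locates the certification threshold; the fully dephased limit $\lambda_{\text{dec}}(2)=1/2$ of \eqref{eq:decoh_thr} lies well beyond it.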
\begin{table} \small
\begin{center}
\begin{tabular}{|c|cccccccc|}
\hline
$k$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ \\
\hline
$\lambda_{\text{thr}}^{(3)}(k-1)$ & $0.18$ & $0.13$ & $0.10$ & $0.08$ & $0.06$ & $0.06$ & $0.05$ & $0.04$ \\
$\lambda_{\text{thr}}^{(4)}(k-1)$ & $0.28$ & $0.19$ & $0.14$ & $0.11$ & $0.09$ & $0.08$ & $0.07$ & $0.06$\\
$\lambda_{\text{thr}}^{(5)}(k-1)$ & $0.33$ & $0.22$ & $0.16$ & $0.13$ & $0.11$ & $0.09$ & $0.08$ & $0.07$\\
$\lambda_{\text{dec}}(k-1)$ & $0.5$ & $0.33$ & $0.25$ & $0.2$ & $0.17$ & $0.14$ & $0.13$ & $0.11$\\
\hline
\end{tabular}
\end{center} \caption{Numerical values of the decoherence thresholds for the state $\rho_W$ from \eqref{eq:werner}, for $R_3$, $R_4$ and $R_5$, between consecutive levels of coherence for $k = 3$ to $10$. The exact thresholds $\lambda_{\text{dec}}(k-1)$ from \eqref{eq:decoh_thr} are listed for comparison.} \label{tab:r3_dthr} \end{table}
\subsection{Best approximations of interference pattern}
In addition to the thresholds $\lambda_{\text{dec}}(q)$ and $\lambda_{\text{thr}}^{(n)}(k-1)$ discussed above, there is also the threshold $\lambda_{\text{patt}}(q)$ at which a given interference pattern no longer allows one to verify $q$-coherence. As $R_n$ is a scalar functional of the interference pattern, it can contain at most as much information as the pattern itself, and a small difference between $\lambda_{\text{patt}}$ and $\lambda_{\text{thr}}^{(n)}$ indicates that little information is lost by looking at the ratio of specific moments instead of the full interference pattern.
These thresholds satisfy the relation \begin{equation}
\label{eq:decoh_inequality}
0 < \lambda_{\text{thr}}^{(n)}(q) \leq \lambda_{\text{patt}}(q) \leq \lambda_{\text{dec}}(q) < 1, \end{equation} for all $n$. The value of $\lambda_{\text{patt}}(q)$ is strongly dependent on the measurement projection $\ket{\chi}\bra{\chi}$, and we show in \appref{sec:decoh_thr} that the threshold values $\lambda_{\text{patt}}(q)$ and $\lambda_{\text{dec}}(q)$ nevertheless coincide for Werner-like states, with a projection onto the equal superposition $k$-coherent state $\ket{W_k}$. Strikingly, this verifies that in this case a single interference pattern can provide enough information for a complete classification of $q$-coherence.
For $\lambda \geq \lambda_{\text{patt}}(q)$, a $q$-coherent state is mixed enough to produce a pattern $p(t)$ which can be reproduced by states of lesser coherence. Patterns $p(t)$ resulting from states with $\lambda < \lambda_{\text{patt}}(q)$, on the other hand, cannot be reproduced by states in $C_{q-1}$. In order to exemplify the differences in interference patterns that the present criteria aim to identify, we introduce, for a given interference pattern $p(t)$ produced by a $k$-coherent state with $k > q$, the best $q$-approximation $\bar{p}_{q}(t)$ to $p(t)$: the interference pattern resulting from $q$-coherent states alone that deviates minimally from $p(t)$.
\begin{figure}
\caption{(a) Interference patterns $p(t)$ of $\rho_W \in C_3$ in a 3-dimensional space with different values of $\lambda$ projected under optimal measurement state $\ket{W_3}$ (solid/dotted curves), along with their best approximations $\bar{p}_{2}(t)$ (dashed curves) reproduced by states in $C_2$. The $R_3$ values of the states are, $1.26, 0.88, 0.60$, with increasing $\lambda$, so the system corresponding to $\lambda=0.18$ can be certified by $R_3$ as 3-coherent. \\ (b)-(d) Three linearly independent 2-coherent states that, when mixed with the given probabilities $p_m$, provide the best approximation of $\rho_W$ at $\lambda=0.18$. }
\label{fig:pattdecomp}
\end{figure}
In \figref{fig:pattdecomp}, we focus on $R_3$ and investigate the ability to detect 3-coherence for states $\rho_W \in C_3$ with the optimal projection. The patterns corresponding to $\rho_W$ are plotted for $\lambda = 0.18, 0.36$ and $0.54$, along with their best approximations $\bar{p}_{2}(t)$. As long as $\lambda < \lambda_{\text{dec}}(3)=\frac{1}{2}$, which is the case for the red and blue curves (corresponding to $\lambda = 0.18$ and $0.36$ respectively), the pattern cannot be reproduced by states in $C_2$, as expected, since $\lambda_{\text{patt}}(3) = \lambda_{\text{dec}}(3)$. The green pattern ($\lambda = 0.54$) can be reproduced by states in $C_2$ exactly, because in this case $\lambda > \lambda_{\text{dec}}(3)$. If the projection were sufficiently far from the optimal one, the red and blue patterns would also be exactly reproducible by patterns of 2-coherent states. The red pattern corresponds to $\rho_W(\lambda=0.18)$ and has a value of $R_3$ (when maximised over $\ket{\chi}$) of $1.26$, which lies above the threshold given in \tableref{tab:R3Bounds} to certify a state as $3$-coherent. The bottom part of the plot gives the patterns of the three 2-coherent states which, when mixed, provide the best approximation to the red $\rho_W$ pattern; three is the minimum number of such states required to reconstruct the 3-dimensional Werner-like state. For $\lambda>\frac{2}{3}$, the patterns could even be decomposed into incoherent states, as \eqref{eq:decoh_thr} indicates.
\section{Conclusion} \label{sec:conclusion}
Despite the numerous similarities between the theories of entanglement and coherence, the fact that the same type of operation is required both to create and to verify quantum coherence marks a crucial difference between the two theories. Our proposed solution relies on easily observable quantities such that an imperfectly implemented verification protocol can never overestimate the degree of coherence. As such, it offers a very practical and robust avenue to rigorously verify coherence properties beyond the two-level setting.
Beyond the fundamental question `{\it when is a triple-slit interference pattern so washed out that one can no longer recognise it?}', the ability to verify the number of states contributing to a coherent superposition also has very practical applications in verifying that a prospective quantum device is actually able to operate in the quantum regime it is supposed to.
\section{Acknowledgments} We are grateful for stimulating discussions with Nicky Kai Hong Li that motivated this work. B.D. acknowledges funding from the Engineering and Physical Sciences Research Council (EPSRC UK) administered by Imperial College London via the Postdoctoral Prize Fellowship program for the core duration of this work; and funding from Austrian Science Fund (FWF): P 30947 for the closing stages. N.K. acknowledges funding from the EPSRC UK through the Controlled Quantum Dynamics Centre for Doctoral Training for the closing stages.
\small
\onecolumngrid
\appendix
\section{Proof that $R_n$ is convex in either argument} \label{sec:ProofConvex} To prove that $R_n$ is convex under the mixing of states it suffices to show that \begin{equation} R_n\left(\lambda\rho_1 + (1-\lambda)\rho_2, \ket{\chi}\right) \le \lambda R_n\left(\rho_1, \ket{\chi}\right) + (1-\lambda)R_n\left(\rho_2, \ket{\chi}\right), \label{eq:ConvexityAppendix} \end{equation} for all pairs of states $\rho_1, \rho_2$, for all projectors $\ket{\chi}\bra{\chi}$, and for all $\lambda\in[0, 1]$.
This property holds for the moments themselves, which are convex and positive by construction. Sums of such functions remain convex, but this is not necessarily the case for their ratios. We prove that this particular function is indeed convex, for $n \ge 2$, by taking the second derivative of the left-hand side of \eqref{eq:ConvexityAppendix} with respect to $\lambda$ and showing that it is always non-negative.
This second derivative is \begin{align} &\partial_{\lambda}^2R_n = \frac{M_1^{3n-5}}{M_1^{4n-4}} \big[ M_1^2 \partial_{\lambda}^2 M_n - 2(n-1)M_1(\partial_{\lambda}M_1) \partial_{\lambda}M_n + n(n-1)(\partial_{\lambda} M_1)^2 M_n \big]. \nonumber \label{eq:secondDeriv} \end{align} Denoting the integrand of $M_n$ in \eqref{eq:defineMoment} by $p^n$ and the time average by $\langle \cdot \rangle$, allows the derivatives to be calculated according to \begin{align} \partial_{\lambda} M_n &= \langle n (\partial_{\lambda}p) p^{n-1}\rangle, \nonumber \\ \partial_{\lambda}^2 M_n &= \langle n(n-1) (\partial_{\lambda} p)^2 p^{n-2} \rangle. \nonumber \end{align} Substituting these expressions into \eqref{eq:secondDeriv} gives \begin{align} &\partial_{\lambda}^2R_n =\frac{M_1^{3n-5}}{M_1^{4n-4}} \langle n(n-1)p^{n-2} \left[p \langle \partial_{\lambda}p \rangle - \langle p \rangle \partial_{\lambda}p \right]^2 \rangle, \end{align} where the fraction at the front is non-negative, as is the squared term in the time average and its pre-factor (for $n\ge2$), thereby showing that $R_n$ is convex as desired.
\section{Proof that $R_n$ is maximised for equal preparation and projection} \label{sec:ProofEqualStateMeas}
We begin by noting that the expression for the probability distribution in \eqref{eq:defineMoment} for pure states is given by the double sum \begin{align} p(\ket{\psi}, \ket{\chi}, t) = \sum_{p, q} \chi_p^* \psi_p \psi_q^* \chi_q e^{-i(p - q)t}, \end{align} where $\ket{\psi} = \sum_p \psi_p \ket{p}$, $\ket{\chi} = \sum_q \chi_q \ket{q}$ and the basis states are eigenkets of the Hamiltonian \eqref{eq:BasicHamiltonian}, $H = \sum_n n \ket{n}\bra{n}$. By defining $\psi_p \chi_p^* = \alpha_p e^{i \phi_p}$, $\phi_{pq} = \phi_p - \phi_q$ and $\omega_{pq} = p - q$, this can be recast as \begin{align} p(\ket{\psi}, \ket{\chi}, t) = \sum_p \alpha_p^2 + 2\sum_{p>q} \alpha_p \alpha_q \cos(\omega_{pq} t + \phi_{pq}), \label{eq:CosProbability} \end{align} where the $\alpha_p$ are real and non-negative by construction.
We now show that the maximum of this over $k$-coherent $\ket{\psi}$ and any $\ket{\chi}$ is reached when the phases $\phi_{pq}$ are all zero, for all $k$. Firstly, because integrating cosines over an integer number of periods gives zero, the first moment is independent of the phases, \begin{align} M_1 = 2 \pi \sum_p \alpha_p^2. \end{align} It is therefore clear that changes in $\phi_{pq}$ (arising from different phases between the state and the projector) affect the numerator of $R_n$ but not the denominator. The terms of the higher moments $M_{n>1}$ which depend non-trivially on the phases are inside the integral over time and are of the form \begin{align} \int_0^{2\pi} \left(\sum_{p>q} \alpha_p\alpha_q \cos(\omega_{pq} t + \phi_{pq})\right)^m \,dt. \end{align} To see which terms do not vanish when integrated over, it is useful to look at the products of cosines individually, \begin{align} \int_0^{2\pi} &\alpha_{p_1}\alpha_{q_1} \cos(\omega_{p_1q_1} t + \phi_{p_1q_1})\times\alpha_{p_2}\alpha_{q_2} \cos(\omega_{p_2q_2} t + \phi_{p_2q_2})\times\ldots\,dt, \end{align} which can themselves be expanded into a sum of cosines, where each term is of the form \begin{align} \propto \int_0^{2\pi} \cos\left[(\omega_{p_1q_1} \pm \omega_{p_2q_2}\ldots)t + \phi_{p_1q_1} \pm \phi_{p_2q_2}\ldots\right]\,dt. \end{align} If, for a given choice of signs, the frequencies do not sum to $0$, then the integral vanishes. If they do sum to $0$, the term is proportional to the cosine of the correspondingly signed sum of the phases. One choice which maximises all of these is to pick all the $\phi_{pq}=0$, which simultaneously maximises every such integral no matter the number of terms or the sign configuration. This in turn maximises $M_n$ and therefore the value of $R_n$.\\
In the case where there are no relative phases, $R_n$ can be written in terms of a simplified \eqref{eq:CosProbability} as \begin{align} R_n(\ket{\psi},\ket{\chi}) = \frac{\int_0^{2\pi}\left(\sum_p \alpha_p^2 + 2 \sum_{p>q} \alpha_p\alpha_q \cos(\omega_{pq} t) \right)^n \,dt} {2\pi\left(\sum_p \alpha_p^2\right)^{n-1}}. \label{eq:RnNoPhase} \end{align} From this it can be seen that the mapping $\alpha_p \to x\alpha_p$ changes the function as $R_n\to x^2R_n$. It is therefore desirable to scale the $\alpha$ to be as large as possible. The extent to which this can be done is bounded by the normalisation of the states; using the Cauchy--Schwarz inequality we can express this as \begin{align} \left(\sum_p \alpha_p\right)^2 = \left(\sum_p \psi_p \chi_p \right)^2 &\le \left(\sum_p \psi_p^2 \right) \; \left(\sum_p \chi_p^2 \right) = 1, \\ \implies \sum_p \alpha_p &\le 1. \nonumber \end{align} Furthermore, any set of $\{\alpha_p\}$ that satisfies this bound can be realised by the normalised states $\ket{\psi}$, $\ket{\chi}$ by picking their amplitudes according to $\psi_p = \chi_p = \sqrt{\alpha_p}$.
Taking a step back, what we have shown by parametrising the function $R_n$ in terms of $\{\alpha_p, \phi_p\}$ is that the maximum of $R_n$ occurs when $\phi_p = 0$ and $\sum_p \alpha_p = 1$. These two conditions are equivalent, in terms of the physical state and measurement projector, to having $\ket{\psi}= \ket{\chi}$. Thus, we know that $R_n$ is maximised when the input state is pure and the projective measurement is equal to it, thereby greatly shrinking the space over which we have to optimise. Note that this is not the same as the subtly different question of whether the optimal $\ket{\chi}$ that should be picked for a given $\ket{\psi}$ is for them to be the same. Here, we are only interested in the overall bound $R_n$ can have over any input state with a fixed $k$-coherence.
\section{Derivation of analytic threshold values} \label{sec:ProofThresholdValues} We now compute the maximum of \eqref{eq:RnNoPhase} for $n=3$ as a function of $k$, where $\ket{\psi}\bra{\psi} \in C_k$. For $k=2$, this is easily done by using the previously found constraint $\sum_p \alpha_p = 1$. We denote the two non-zero $\alpha$'s as $x$ and $1-x$ and can perform the integration over time explicitly to arrive at \begin{equation} \max_{\rho \in C_2, \ket{\chi}} R_3(\rho, \ket{\chi}) = \max_{x\in[0,1]} \frac{1 + 2x(x-1)(5x^2-5x+2)}{1 + 2x(x-1)}. \end{equation} The right hand side is easily solved analytically and gives a value of $5/4$, attained at the balanced state $x=1/2$. Therefore, measuring an $R_3$ greater than that value implies that the state must be at least $3$-coherent.\\
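The $k=2$ maximisation is easy to confirm numerically; the following short scan (a check, not part of the derivation) recovers the value $5/4$ at $x=1/2$:

```python
import numpy as np

# Scan the k = 2 quotient over x in [0, 1]; the grid contains x = 1/2 exactly.
x = np.linspace(0.0, 1.0, 100001)
ratio = (1.0 + 2.0 * x * (x - 1.0) * (5.0 * x**2 - 5.0 * x + 2.0)) \
        / (1.0 + 2.0 * x * (x - 1.0))
print(ratio.max(), x[ratio.argmax()])   # maximum 5/4, attained at x = 1/2
```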
To deal with higher $k$, it is highly advantageous to reparametrise the optimisation problem. The starting point is \eqref{eq:RnNoPhase}, and we now make another simplification in the notation by grouping together terms with the same frequency $\omega_{pq} = p - q$. This allows the sum over the cosines to be expressed as \begin{align} \sum_{p>q} \alpha_p\alpha_q \cos(\omega_{pq} t) &= \sum_n D_n \cos(\omega_n t) \end{align} where the new variables are given by \begin{align} \quad D_n &= \sum_p \alpha_{p+n} \alpha_p, \quad \omega_n = n, \label{eq:DConstraints} \end{align} which also lets us rewrite the term $\sum_p \alpha_p^2 = D_0$, thereby unifying the notation. We also recall from previous arguments that $\sum_p \alpha_p = 1$ at the maximum of the function. Using this notation in \eqref{eq:RnNoPhase} for the case $n=3$ we obtain \begin{align} R_3(\ket{\psi},\ket{\chi}) = \frac{1}{2\pi}\int_0^{2\pi} \big[&D_0 + 6 \sum_i D_i \cos(\omega_i t) + \frac{12}{D_0} \sum_{ij} D_i D_j \cos(\omega_i t) \cos(\omega_j t) +\\ &\frac{8}{D_0^2}\sum_{ijk} D_i D_j D_k \cos(\omega_i t) \cos(\omega_j t) \cos(\omega_k t)\big]\;dt\nonumber \end{align} Performing the integrals in the way described earlier, only terms where the $\omega$ sum to $0$ contribute, which yields \begin{align} R_3(\ket{\psi},\ket{\chi}) &= 6 D_0 \left(\frac{1}{6} + \sum_{i j} \frac{D_i D_j}{D_0^2}\delta_{i j} + \sum_{i j k} \frac{D_i D_j D_k}{D_0^3}\sigma_{i j k} \right) \nonumber\\ &= 6 D_0 \left(\frac{1}{6} + \sum_i \tilde{D}_i^2 + \sum_{i j k} \tilde{D}_i \tilde{D}_j \tilde{D}_k \sigma_{i j k} \right) \label{eq:R3InDs} \end{align} where $\sigma_{ijk}$ is an energy matching condition which is $1$ if $i + j = k$ and $0$ otherwise, and $\tilde{D}_i = D_i/D_0$.
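As a spot-check of \eqref{eq:R3InDs} (a numerical sketch, not part of the derivation), one can compare the closed form $6D_0(\tfrac16 + \sum_i \tilde D_i^2 + \sum \tilde D_i \tilde D_j \tilde D_k \sigma_{ijk})$ against direct time integration for a random 3-level state in adjacent levels, for which the only energy-matching triple is $(i,j,k)=(1,1,2)$:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = rng.random(3)
alpha /= alpha.sum()                       # enforce sum_p alpha_p = 1
a1, a2, a3 = alpha

D0 = float((alpha**2).sum())
D1, D2 = a1*a2 + a2*a3, a1*a3              # grouped by frequency difference
Dt1, Dt2 = D1/D0, D2/D0

# Direct evaluation: R_3 = (1/2pi) * int p^3 dt / D0^2 with
# p(t) = D0 + 2 (D1 cos t + D2 cos 2t).
t = np.linspace(0.0, 2.0*np.pi, 4096, endpoint=False)
p = D0 + 2.0*(D1*np.cos(t) + D2*np.cos(2.0*t))
R3_direct = (p**3).mean() / D0**2

# Closed form; the only triple with i + j = k is (1, 1, 2).
R3_formula = 6.0*D0*(1.0/6.0 + Dt1**2 + Dt2**2 + Dt1**2 * Dt2)
print(abs(R3_direct - R3_formula))         # agreement to rounding error
```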
Finding the maximum value of $R_3$ over all $k$-coherent states as a function of $k$ has proved very difficult. What we have found is a method to calculate an analytic upper bound for this quantity for a given $k$. We have evaluated this bound for small $k$ and, although the method is applicable in general, it may be too laborious to be practical for high $k$.
The key idea is to treat the $\{\tilde{D}_i\}$ as independent variables to optimise over and $D_0$ as a `free' parameter. \eqref{eq:DConstraints} is used to form linear constraints on the $\{\tilde{D}_i\}$, which form an outer approximation to the physically allowed region for a choice of $D_0$. We then show that in this region $R_3$ has a positive definite Hessian, which implies that for any line cutting through this region, the maxima of the function must be reached where the line crosses the bounding surface. Therefore, the maximum value is attained at one of the vertices. As this region is defined by linear constraints it is a polytope, and hence has only a finite number of vertices, which can be individually evaluated to see which produces the largest value of $R_3$. The remaining step is then to optimise over $D_0$, which is easily done numerically as the problem is reduced to finding the turning points of a quotient of low-order polynomials in one variable.
Although we could not show that the Hessian is positive in general, we do find that it is in all the cases of interest. For convenience, it is useful to list its components here. These are the derivatives of $R_3$, which are given by \begin{align} \partial_{\tilde{D}_a} R_3 &= 6 D_0 \left(2\tilde{D}_a + 2 \sum_{j k} \tilde{D}_j \tilde{D}_k \sigma_{a j k}+ \sum_{i j} \tilde{D}_i \tilde{D}_j \sigma_{i j a}\right) \nonumber\\ \partial_{\tilde{D}_a} \partial_{\tilde{D}_a} R_3 &= 12 D_0 (1 + \tilde{D}_{2a}) \\
\partial_{\tilde{D}_b} \partial_{\tilde{D}_a} R_3 &= 12 D_0 (\tilde{D}_{a + b} + \tilde{D}_{|a-b|}). \end{align}
We now apply the method outlined above to $k=3$, treating the case where the dimension of the Hamiltonian is truncated at $d=3$ and the case where it is unbounded separately. We also calculate the case $k=4, d=4$ to show that the method can be applied to higher coherence levels.
\subsection*{$k=d=3$} For states which are at most $3$-coherent in a $3$-dimensional Hamiltonian, the variables are explicitly given by: \begin{align*} D_0 &= \alpha_1^2 + \alpha_2^2 + \alpha_3^2\\ D_1 &= \alpha_1 \alpha_2 + \alpha_2 \alpha_3\\ D_2 &= \alpha_1 \alpha_3 \\ 1 &= \alpha_1 + \alpha_2 + \alpha_3. \end{align*} From this we can write some inequalities which constrain the allowed values. Firstly, as the $\alpha$'s are all positive we have that $0\le D_0$ and $0\le\tilde{D}_i$. Secondly, $D_0 = \sum_p \alpha_p^2 \le (\sum_p \alpha_p)^2 = 1$ and the Cauchy--Schwarz inequality $1 = (\sum_p \alpha_p)^2 \le d \sum_p \alpha_p^2$ together imply that \begin{equation} \frac{1}{d} \le D_0 \le 1. \label{d0bounds} \end{equation} The first non-trivial constraint comes about from the same starting point \begin{align} 1 &= (\alpha_1 + \alpha_2 + \alpha_3)^2 \nonumber \\ &= D_0 + 2 D_1 + 2 D_2 \nonumber \\ &= D_0 \left(1 + 2 \sum_i \tilde{D}_i \right). \label{d3Equality} \end{align} From these two relations we can upper bound the maximum value of any $\tilde{D}_i$ \begin{align} \frac{1}{d} \left(1 + 2 \sum_i \tilde{D}_i \right) \le 1 \nonumber \\ \sum_i \tilde{D}_i \le \frac{d - 1}{2} \nonumber \\ \tilde{D}_i \le \frac{d - 1}{2}. \label{Dibound} \end{align} Other inequalities can be obtained by considering well chosen sums of squares; the three useful ones are listed here. Firstly \begin{align} (\alpha_1 - \alpha_3)^2 + \alpha_2^2 \ge 0 \nonumber \\
D_0 - 2 D_2 \ge 0 \nonumber \\ 1 - 2 \tilde{D}_2 \ge 0. \label{d3Ineq1} \end{align} Changing the sign gives a different inequality \begin{align} (\alpha_1 + \alpha_3)^2 + \alpha_2^2 \ge \tfrac{1}{2} \nonumber \\
D_0 + 2 D_2 \ge \tfrac{1}{2} \nonumber \\
D_0(1 + 2 \tilde{D}_2) \ge \tfrac{1}{2}, \label{d3Ineq2} \end{align} where the normalisation $\alpha_1+\alpha_2+\alpha_3=1$ together with $s^2 + (1-s)^2 \ge \tfrac{1}{2}$ (for $s = \alpha_1+\alpha_3$) is used in the first line. Lastly, there is \begin{align} (\alpha_1 - \alpha_2 + \alpha_3)^2 \ge 0 \nonumber \\
D_0 - 2 D_1 + 2 D_2 \ge 0 \nonumber \\ 1 - 2 \tilde{D}_1 + 2 \tilde{D}_2 \ge 0. \label{d3Ineq3} \end{align} The last three equations (for fixed $D_0$) define a triangular region of interest, while \eqref{d3Equality} is a line that cuts through it. They can be expressed succinctly as \begin{align} \max\left(\frac{1 - 2 D_0}{4 D_0}, 0\right) &\le \tilde{D}_2 \le \frac{1}{2}\nonumber \\ \label{d3complete} 0 &\le \tilde{D}_1 = \frac{1 - D_0}{2 D_0} - \tilde{D}_2 \le 1\\ 0 &\le 1 - 2 \tilde{D}_1 + 2 \tilde{D}_2 \nonumber \end{align}
In order to be sure that the maximum of the function in this region is located at the vertices, we need the Hessian, which (up to the overall positive factor $12 D_0$) is \[ \left(\begin{array}{c c} 1 + \tilde{D}_2 & \tilde{D}_1\\ \tilde{D}_1 & 1 \end{array}\right), \] which is strictly positive definite everywhere in the allowed region. Therefore, the only points that need to be examined are the vertices of the polytope (in this case, just a line) defined by \eqref{d3complete} for the valid range of $D_0$. It therefore just remains to find these vertices by solving these equations on the boundary in the $\tilde{D}_1$--$\tilde{D}_2$ plane, which depends on the value of $D_0$. They can be summarised as
\[ \begin{array}{c c c | c} \tilde{D}_1 & \tilde{D}_2 & D_0 & \max R_3 \\ \hline \frac{1 - D_0}{2 D_0} & 0 & \frac{1}{2}\le D_0\le1 & 1.25\\ \hline \frac{1 - 2 D_0}{2 D_0} & \frac{1}{2} & \frac{1}{3} \le D_0 \le \frac{1}{2} & 1.58 \\ \frac{1}{4 D_0} & \frac{1 - 2 D_0}{4 D_0} & \frac{1}{3} \le D_0 \le \frac{1}{2} & 1.86 \end{array} \] where the largest values of $R_3$ over all $D_0$ in the allowed range are also given. From this we can conclude that if $R_3$ is larger than $1.86$ we can certify that the state is not a 3-coherent state lying in adjacent energy levels of an SHO. For comparison, the perfectly balanced state gives $1.74$ and the largest value we could find numerically was $1.77$. The largest value found for a 4-coherent state (that we want to distinguish from) is $2.32$, while for a 2-coherent state it is $1.25$.
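The tabulated vertices can be rechecked by a direct scan over $D_0$ (a numerical sketch using the vertex coordinates above and the $d=3$ closed form, where the only matching triple is $(1,1,2)$):

```python
import numpy as np

def R3(D0, Dt1, Dt2):
    # eq:R3InDs for d = 3; the only energy-matching triple is (1, 1, 2)
    return 6.0*D0*(1.0/6.0 + Dt1**2 + Dt2**2 + Dt1**2 * Dt2)

best = 0.0
for D0 in np.linspace(1.0/3.0, 1.0, 20001):
    vertices = []
    if D0 >= 0.5:                                   # first row of the table
        vertices.append(((1.0 - D0)/(2.0*D0), 0.0))
    if D0 <= 0.5:                                   # remaining two rows
        vertices.append(((1.0 - 2.0*D0)/(2.0*D0), 0.5))
        vertices.append((1.0/(4.0*D0), (1.0 - 2.0*D0)/(4.0*D0)))
    for Dt1, Dt2 in vertices:
        best = max(best, R3(D0, Dt1, Dt2))
print(best)   # threshold value, approximately 1.86
```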
\subsection*{$k=3, d\ge3$} We now remove the restriction on the dimension and instead restrict ourselves to a 3-level state, which is to say that only 3 of the $\alpha$'s are non-zero. Without loss of generality, we take the three populated levels to be $1, p, q$ with $1 < p < q$. This means that the only non-zero variables are $\alpha_1, \alpha_p, \alpha_q$, which gives \begin{align} D_0 &= \alpha_1^2 + \alpha_p^2 + \alpha_q^2, \\ D_{p-1} &= \alpha_1 \alpha_p, \\ D_{q-p} &= \alpha_p \alpha_q, \\ D_{q-1} &= \alpha_1 \alpha_q \end{align} with the assumption that $p-1 \neq q-p$. If these are equal, then the energy levels are equally spaced and we are back to the 3-level case considered in the first instance. As before, we now find inequalities on the $\tilde{D}$'s to define a volume. As each one only contains a single term, this can be done for each independently by considering \begin{align} (\alpha_i - \alpha_j)^2 + \alpha_k^2 \ge 0 \\ (\alpha_i + \alpha_j)^2 + \alpha_k^2 \ge \tfrac{1}{2}, \end{align} where $i, j, k$ are all different. This, together with the results of Eqs.~(\ref{d0bounds},\ref{d3Equality}), gives \begin{align} \max \left(0,\, \frac{1-2 D_0}{4 D_0}\right) \le \tilde{D}_i \le \frac{1}{2}\\ \frac{1}{3} \le D_0 \le 1\\ \tilde{D}_{p-1} + \tilde{D}_{q-p} + \tilde{D}_{q-1} = \frac{1- D_0}{2 D_0}. \label{eq:k3GeneralBounds} \end{align} The first line defines a cube in $\tilde{D}_i$ space and the last two a family of planes that cut through that space. We show that within the cube the Hessian is always positive.
The function $R_3$, and therefore the Hessian, depends on the indices of the $\tilde{D}$ due to the $\sigma$ ``energy matching'' term in the triple sum. There are several triplets that could enter: \begin{align} & D_{p-1}\;D_{q-p}\;D_{q-1} \\ & \qquad \text{always contributes}\nonumber\\ & D_{p-1}\;D_{p-1}\;D_{q-1} \text{ or } D_{q-p}\;D_{q-p}\;D_{q-1} \\ & \qquad \text{are ruled out by the condition $p-1 \neq q-p$} \nonumber \\ & D_{p-1}\;D_{p-1}\;D_{q-p} \\ & \qquad \text{if and only if $q=3p - 2$} \nonumber\\ & D_{q-p}\;D_{q-p}\;D_{p-1} \\ & \qquad \text{if and only if $q=\tfrac{1}{2}(3p - 1)$} \nonumber \end{align} The first case is the generic one. The second case happens if the energy differences are equal, which we explicitly rule out. The third case happens if the populated levels are $(1, 2, 4),\, (1, 3, 7),\, ...$ where the energy difference is in the ratio $1:2$. The fourth case requires the populated levels to be $(1, 3, 4),\, (1, 5, 7),\, ...$ where the energy difference has the ratio $2:1$. This is therefore identical to the previous case under the Hamiltonian mapping $H\to-H$, which clearly leaves the interference pattern unchanged.
There are thus 2 different cases to consider. The Hessian in the first case is \begin{align} \left(\begin{array}{c c c} 1 & \tilde{D}_{q-1} & \tilde{D}_{q-p} \\ \tilde{D}_{q-1} & 1 & \tilde{D}_{p-1} \\ \tilde{D}_{q-p} & \tilde{D}_{p-1} & 1 \end{array}\right). \end{align} This is positive definite as, from \eqref{eq:k3GeneralBounds}, all leading principal minors of the matrix are themselves positive in the cubic region of interest \cite{Horn1985}. The second case has the Hessian \begin{align} \left(\begin{array}{c c c} 1 +\tilde{D}_{q-p} & \tilde{D}_{q-1} + \tilde{D}_{p-1}& \tilde{D}_{q-p} \\ \tilde{D}_{q-1} + \tilde{D}_{p-1} & 1 & \tilde{D}_{p-1} \\ \tilde{D}_{q-p} & \tilde{D}_{p-1} & 1 \end{array}\right), \end{align} which is also positive definite everywhere, except potentially at some of the vertices of the cube.
The vertices can be found in much the same way as before, except that the boundaries are now symmetric between the $\tilde{D}_i$. We therefore give them as triplets where all permutations need to be considered separately for evaluating $R_3$. \begin{align}
\begin{array}{c | c c c | c c}
D_0 & \tilde{D}_i & \tilde{D}_j & \tilde{D}_k & \max R_3 (\text{generic}) & \max R_3 (1:2 \text{ ratio})\\ \hline \frac{1}{2} < D_0 \le 1 & 0 & 0 & \frac{1- D_0}{2 D_0} & 1.25 & 1.25\\ \hline \frac{1}{3} \le D_0 \le \frac{1}{2} & \frac{1-2 D_0}{4 D_0} & \frac{1-2 D_0}{4 D_0} & \frac{1}{2} & 1.27 & 1.33\\ \end{array} \end{align}
Importantly, these values are all lower than for the case of a 3-level system in adjacent energy levels. Therefore, the previous result is significantly strengthened: if $R_3$ is larger than $1.86$ then we know that the state is not 3-coherent for any Hamiltonian.
\subsection*{$k=d=4$} To highlight that this algorithmic way of calculating the threshold values can be extended to higher dimensions, we demonstrate it for the case of $4$-coherent states. In order to reduce the number of cases to consider, we limit ourselves to states where the $4$ populated levels are all adjacent basis states of a harmonic Hamiltonian. For this case, the variables are \begin{align*} D_0 &= \alpha_1^2 + \alpha_2^2 + \alpha_3^2 + \alpha_4^2\\ D_1 &= \alpha_1 \alpha_2 + \alpha_2 \alpha_3 + \alpha_3 \alpha_4\\ D_2 &= \alpha_1 \alpha_3 + \alpha_2 \alpha_4 \\ D_3 &= \alpha_1 \alpha_4. \end{align*} Eqs.(\ref{d0bounds}, \ref{d3Equality}, \ref{Dibound}) hold as before. Other bounds can be obtained in a similar way to before by considering sums of squares. These are firstly \begin{align} (\alpha_1 - \alpha_3)^2 + (\alpha_2 - \alpha_4)^2 \ge 0 \nonumber \\ 1 - 2 \tilde{D}_2 \ge 0, \end{align} and \begin{align} (\alpha_1 + \alpha_3)^2 + (\alpha_2 + \alpha_4)^2 \ge \tfrac{1}{2} \nonumber \\
D_0(1 + 2 \tilde{D}_2) \ge \tfrac{1}{2}. \end{align} Similarly there is \begin{align} (\alpha_1 - \alpha_4)^2 + \alpha_2^2 + \alpha_3^2 \ge 0 \nonumber \\ 1 - 2 \tilde{D}_3 \ge 0, \end{align} and \begin{align} (\alpha_1 + \alpha_4)^2 + \alpha_2^2 + \alpha_3^2 \ge \tfrac{1}{3} \nonumber \\
D_0(1+ 2 \tilde{D}_3) \ge \tfrac{1}{3}. \end{align} Finally \begin{align} (\alpha_1 - \alpha_2 + \alpha_3 - \alpha_4)^2 \ge 0 \nonumber\\ 1 - 2 \tilde{D}_1 + 2 \tilde{D}_2 - 2 \tilde{D}_3 \ge 0. \label{d43Body} \end{align}
Rougher versions of these can be obtained by eliminating $ D_0$ by taking the `worst case' approach, providing the simple inequalities \begin{align} \tilde{D}_1 \le 1 &\qquad \tilde{D}_1 + \tilde{D}_3 \le 1\nonumber\\ \tilde{D}_2 \le \tfrac{1}{2} &\qquad \tilde{D}_3 \le \tfrac{1}{2}, \label{d4simple} \end{align} which will be useful in proving the positivity of the Hessian. The Hessian (up to the overall positive factor $12 D_0$) is given by \begin{align} \left(\begin{array}{c c c} 1 + \tilde{D}_2 & \tilde{D}_1 + \tilde{D}_3 & \tilde{D}_2 \\ \tilde{D}_1 + \tilde{D}_3 & 1 & \tilde{D}_1 \\ \tilde{D}_2 & \tilde{D}_1 & 1 \end{array}\right). \end{align} The easiest way to prove positivity is, as before, to show that each of the leading principal minors is itself positive \cite{Horn1985}, which is straightforward to compute in the region of interest defined by the inequalities \eqref{d4simple}. As before, it remains to find the vertices as a function of $\tilde{D}_2$. This is a harder problem than before, which is most easily tackled by rewriting the tighter inequalities as \begin{align} &\max \left(0, \frac{1-2 D_0}{4 D_0}\right) \le \tilde{D}_2 \le \frac{1}{2}\\ &\max \left(0, \frac{1-3 D_0}{6 D_0}\right) \le \tilde{D}_3 \le \frac{1}{2}\\ &0 \le 1 - 2 \tilde{D}_1 + 2 \tilde{D}_2 - 2 \tilde{D}_3\\ &0 \le \tilde{D}_1 = \frac{1- D_0}{2 D_0} - \tilde{D}_2 - \tilde{D}_3 \le 1 \end{align} The first two describe a region in the $\tilde{D}_2$--$\tilde{D}_3$ plane, with the coordinates of the vertices depending on the value of $ D_0$. The third equation states how this protrudes in the $\tilde{D}_1$ direction. The fourth describes a plane that cuts this volume, and imposes additional physical constraints. The way to solve this is therefore, for a given range of $ D_0$, to find the vertices in the $\tilde{D}_2$--$\tilde{D}_3$ plane (a maximum of 4), find the corresponding value of $\tilde{D}_1$ and check if any additional constraints on $ D_0$ arise. The results are summarised below. \begin{align}
\begin{array}{c | c c c | c}
D_0 & \tilde{D}_1 & \tilde{D}_2 & \tilde{D}_3 & \max R_3\\ \hline \frac{1}{2} < D_0 \le 1 & \frac{1 - D_0}{2 D_0} & 0 & 0 & 1.25\\ \hline
& \frac{1 - 2 D_0}{2 D_0} & \frac{1}{2} & 0 & 1.58\\ \frac{1}{3} < D_0 \le \frac{1}{2} & \frac{1 - 2 D_0}{4 D_0} & \frac{1 - 2 D_0}{4 D_0} & \frac{1}{2} & 1.25\\
& \frac{1}{4 D_0} & \frac{1 - 2 D_0}{4 D_0} & 0 & 1.86\\ \hline
& \frac{1 - 3 D_0}{2 D_0} & \frac{1}{2} & \frac{1}{2} & 1.33\\ \frac{1}{4} \le D_0 \le \frac{1}{3} & \frac{2 - 3 D_0}{6 D_0} & \frac{1}{2} & \frac{1 - 3 D_0}{6 D_0} & 2.44\\
& \frac{1 - 2 D_0}{4 D_0} & \frac{1 - 2 D_0}{4 D_0} & \frac{1}{2} & 1.93 \end{array} \end{align}
We see that the plane can intersect the volume at a single point (one vertex), in a plane (three vertices) or, in the case that $ D_0 = \tfrac{1}{4}$, in a line (two vertices). It is to be expected that this geometry becomes far more complicated in higher dimensions. This sort of analysis ought to generalise, but doing so is probably difficult. Nevertheless, from the table we can conclude that if $R_3$ is larger than $2.44$ we can certify that the state is not a 4-coherent state lying in adjacent energy levels of an SHO. For comparison, the perfectly balanced state gives $2.26$ and the largest value we could find numerically was $2.32$. The largest value found for a 5-coherent state (that we want to distinguish from) was $2.88$.\\
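As with the $d=3$ table, the $k=d=4$ vertices can be scanned numerically (a sketch using the constraints above; the ordered matching triples for $d=4$ are $(1,1,2)$, $(1,2,3)$ and $(2,1,3)$):

```python
import numpy as np

def R3_d4(D0, Dt1, Dt2, Dt3):
    # eq:R3InDs for d = 4; ordered matching triples: (1,1,2), (1,2,3), (2,1,3)
    triple = Dt1**2 * Dt2 + 2.0 * Dt1 * Dt2 * Dt3
    return 6.0 * D0 * (1.0/6.0 + Dt1**2 + Dt2**2 + Dt3**2 + triple)

best = 0.0
for D0 in np.linspace(0.25, 1.0, 30001):
    lo2 = max(0.0, (1.0 - 2.0*D0) / (4.0*D0))   # lower corner for D~_2
    lo3 = max(0.0, (1.0 - 3.0*D0) / (6.0*D0))   # lower corner for D~_3
    for Dt2, Dt3 in [(lo2, lo3), (lo2, 0.5), (0.5, lo3), (0.5, 0.5)]:
        Dt1 = (1.0 - D0) / (2.0 * D0) - Dt2 - Dt3   # equality constraint
        if 0.0 <= Dt1 <= 1.0 and 1.0 - 2.0*Dt1 + 2.0*Dt2 - 2.0*Dt3 >= -1e-12:
            best = max(best, R3_d4(D0, Dt1, Dt2, Dt3))
print(best)   # threshold value, approximately 2.44
```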
\section{Derivation of $R_3(\ket{W_k}, \ket{W_k})$} \label{sec:StraightLineR3} We seek an exact analytical expression for the value of the certifier $R_3$ for the maximally coherent state, $R_3(\ket{W_k}, \ket{W_k})$. For the Hamiltonian in Eq. (\ref{eq:BasicHamiltonian}), the certifier can be rewritten as \begin{equation}
R_3(\ket{W_k}, \ket{W_k}) = \frac{1}{k} + \frac{12}{k^3} \frac{1}{T} \int_0^T \left( \sum_{i < j} \cos{(\omega_{i,j}t)} \right)^2 \,dt + \frac{8}{k^4} \frac{1}{T} \int_0^T \left( \sum_{i < j} \cos{(\omega_{i,j}t)} \right)^3 \,dt, \label{eq:combinatorix} \end{equation}
for energy level differences $\omega_{i,j} = |i-j|$.
The first and second integrals involve products of two and three cosine terms respectively. Using the trigonometric identity for terms of frequencies $\alpha > \beta > \gamma \geq 0$, \begin{equation}
\cos{(\alpha)}\cos{(\beta)}\cos{(\gamma)} = \frac{1}{4} \Bigl[ \cos{(\alpha + \beta + \gamma)} + \cos{(-\alpha + \beta + \gamma)} + \cos{(\alpha - \beta + \gamma)} + \cos{(\alpha + \beta - \gamma)} \Bigr], \end{equation} these products are reduced into linear terms. We need to find those that survive and calculate the integrals for them. The condition $\alpha = \beta + \gamma$ is equivalent to the statement that at least one, and in fact exactly one, of the linearised terms survives. In other words, the largest energy level spacing must be equal to the sum of the two smaller ones. Once the conditions for non-vanishing terms in the products of cosines have been identified, it is a matter of counting the number of combinations $A$ and $B$ of energy levels that obey these conditions and survive in the first and second integral in Eq. (\ref{eq:combinatorix}) respectively, leading to: \begin{equation}
\label{eq:prelim_maxcoh}
R_3 = \frac{1}{k} + \frac{6}{k^3} A + \frac{2}{k^4} B \end{equation}
Calculating $A$ is simple, since in this case $\gamma = 0$ and the non-vanishing terms are the ones with identical cosines multiplied together. Therefore, summing over all different values of $\omega_{i,j}$ gives \begin{equation}
\label{eq:combinationsa}
A = \sum\limits_{n=1}^{k-1} n^2 = \frac{k(k-1)(2k-1)}{6}. \end{equation}
Calculating $B$ requires that the cosine terms multiplied together satisfy that the largest frequency equals the sum of the smaller ones. Let us label the largest frequency by $\omega_{i,i+\alpha}$; it has multiplicity $(k-\alpha)$ and there are $S_{\alpha}$ ways that two frequencies can sum to $\omega_{i,i+\alpha}$. Now, we seek all frequencies $\omega_{j_1,j_1+\beta}$ and $\omega_{j_2,j_2+\gamma}$ of multiplicities $(k-\beta)$ and $(k-\gamma)$ respectively, for which $\omega_{i,i+\alpha} = \omega_{j_1,j_1+\beta} + \omega_{j_2,j_2+\gamma}$, for all indices $i, j_1, j_2$. The last factor to consider is that the three cosines may be multiplied together in any order, so there is a combinatorial coefficient of $3!$ when three different frequencies are multiplied together and $\frac{3!}{2!}$ when the two shorter frequencies are the same, as when $\beta = \gamma$, which can only happen for even $\alpha$. We now reach the expression \begin{align}
\label{eq:fa}
S_\alpha &=
\begin{cases}
3!\sum\limits_{\substack{\beta+\gamma = \alpha \\ \beta \neq \gamma}} (k-\beta)(k-\gamma) & (\alpha \text{ is odd})
\\[5ex]
3!\sum\limits_{\substack{\beta+\gamma = \alpha \\ \beta \neq \gamma}} (k-\beta)(k-\gamma) + \frac{3!}{2!} \left( k - \frac{\alpha}{2} \right)^2 & (\alpha \text{ is even})
\end{cases}
\\[3ex]
&= \frac{1}{2} (\alpha - 1) (\alpha + \alpha^2 - 6\alpha k + 6k^2), \end{align} for any $0 < \alpha < k$. Finally, summing over all allowed energy level differences, \begin{equation}
\label{eq:combinationsb}
B = \sum\limits_{\alpha=1}^{k-1} (k-\alpha)S_{\alpha} = \frac{1}{40} k(k-1)(k-2)(2-7k+11k^2). \end{equation} Substituting $A$ and $B$ in Eq. (\ref{eq:prelim_maxcoh}), we get the desired sequence \begin{equation}
\label{eq:ratios_maxcoh}
R_3 \left( \ket{W_k}, \ket{W_k} \right) = \frac{4 + 5k^2 + 11k^4}{20k^3}. \end{equation}
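The closed form can be verified against direct numerical integration of \eqref{eq:combinatorix} (a quick check, not part of the derivation):

```python
import numpy as np

def R3_Wk(k, N=4096):
    # S(t) = sum_{i<j} cos((j - i) t); the frequency n = j - i occurs
    # (k - n) times, so the double sum collapses to a weighted single sum.
    t = np.linspace(0.0, 2.0*np.pi, N, endpoint=False)
    S = sum((k - n) * np.cos(n * t) for n in range(1, k))
    return 1.0/k + 12.0/k**3 * (S**2).mean() + 8.0/k**4 * (S**3).mean()

for k in range(2, 8):
    closed = (4.0 + 5.0*k**2 + 11.0*k**4) / (20.0*k**3)
    print(k, R3_Wk(k), closed)   # the two columns agree to rounding error
```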
\section{Derivation of decoherence theoretical and pattern thresholds} \label{sec:decoh_thr} We first derive the theoretical threshold of coherence for the Werner-like state $\rho_W$ of \eqref{eq:werner} and then prove that an interference pattern gives a threshold equal to the theoretical, under optimal measurement.
We observe that $\rho_W \in C_k$ is fully symmetric under permutations of the basis states and that all its off-diagonal elements are $\frac{1-\lambda}{k}$, resulting in \begin{equation}
\label{eq:werner_norm}
C_{\ell_1}(\rho_W) = (k-1)(1-\lambda), \end{equation}
where $C_{\ell_1}(\rho) \coloneqq \sum\limits_{i \neq j} |\rho_{ij}|$ is the $\ell_1$-norm as studied by Bera \textit{et al}~\cite{ref:Bera}. These two properties define a Werner-like state.
In general, the $\ell_1$ norm of a $q$-coherent state is bounded from above. The bound is obtained when the system state is pure since $C_{\ell_1}$ is a convex measure~\cite{ref:Bera}. Let $\rho = \ket{\alpha}\bra{\alpha} \in C_q$ for a state $\ket{\alpha}$ defined in the reference basis, so that $\rho_{ij} = \alpha_i \alpha_j^*$, hence $|\rho_{ij}| = |\alpha_i||\alpha_j|$, and $\tr{\rho} = \sum_{i=1}^q |\rho_{ii}| = 1$. \begin{align}
(q-1) - C_{\ell_1}(\rho) = &(q-1) \sum_{i=1}^q |\rho_{ii}| - 2\sum\limits_{i < j} |\rho_{ij}| \\
= &\sum\limits_{i < j} (|\alpha_i| - |\alpha_j|)^2 \geq 0. \end{align} This means that the coherence of the system is bounded above, \begin{equation}
\label{eq:cl1_bound}
C_{\ell_1}\left(\rho\right) \leq q-1, \end{equation}
with equality obtained when $\forall i,j, |\alpha_i| = |\alpha_j|$ in the reference basis, so that $\ket{\alpha}$ is the maximally $q$-coherent state.
Using Eqs.(\ref{eq:werner_norm}, \ref{eq:cl1_bound}), we obtain for the Werner-like states in $C_q$ \begin{align}
\label{eq:werner_bound}
\lambda &\geq \frac{k-q}{k-1} \\
\therefore \lambda_{\text{dec}}(q) &= \frac{k-q}{k-1}, \quad 1 \leq q \leq k. \end{align}
Now projecting with the optimal measurement $\ket{W_k}$ we get \begin{align}
p(t)&= \matel{W_k}{\rho_W}{W_k} \\
&\leq \frac{1}{k} + \frac{2}{k}\sum_{i < j} |\rho_{ij}| = \frac{1}{k} + \frac{1}{k}C_{\ell_1}\left(\rho\right)\\
&\leq \frac{1}{k} + \frac{q-1}{k} = \frac{q}{k} \end{align}
Therefore, a pattern with a maximum higher than this boundary value, $\frac{q}{k}$, cannot be decomposed into patterns arising from states of $q$-coherence or lower. We get the threshold value $\lambda_{\text{patt}}(q)$ at which the interference pattern can no longer distinguish consecutive coherence levels, by bounding the interference pattern produced from the Werner-like state by the probability maximum, so that \begin{align}
\frac{q}{k} &\geq \matel{W_k}{\rho_W}{W_k} = 1 - \lambda + \frac{\lambda}{k} \nonumber \\
\Rightarrow \lambda &\geq \frac{k-q}{k-1} \nonumber \\
\therefore \lambda_{\text{patt}}(q) &= \frac{k-q}{k-1}, \quad 1 \leq q \leq k, \end{align} which coincides with $\lambda_{\text{dec}}(q)$.
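These coinciding thresholds are easy to confirm numerically for a concrete $k$ (a sketch; $k=5$ is an arbitrary choice):

```python
import numpy as np

k = 5
W = np.ones(k) / np.sqrt(k)                       # maximally k-coherent state
for q in range(1, k + 1):
    lam = (k - q) / (k - 1)                       # claimed threshold lambda(q)
    rho = lam * np.eye(k) / k + (1 - lam) * np.outer(W, W)
    # l1-coherence of the Werner-like state: (k - 1)(1 - lambda)
    C_l1 = np.abs(rho).sum() - np.abs(np.diag(rho)).sum()
    assert abs(C_l1 - (k - 1) * (1 - lam)) < 1e-12
    overlap = W @ rho @ W                         # <W_k| rho_W |W_k>
    assert abs(overlap - (1 - lam + lam / k)) < 1e-12
    assert abs(overlap - q / k) < 1e-12           # hits q/k exactly at threshold
print("thresholds consistent for k =", k)
```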
\end{document} |
\begin{document}
\title{A stronger local monomialization theorem} \author{Steven Dale Cutkosky} \thanks{partially supported by NSF}
\address{Steven Dale Cutkosky, Department of Mathematics, University of Missouri, Columbia, MO 65211, USA} \email{[email protected]}
\dedicatory{Dedicated to Professor Winfried Bruns on the occasion of his 70th birthday}
\begin{abstract} In this article we prove stronger versions of local monomialization. \end{abstract}
\maketitle
In this note we derive extensions of the monomialization theorems for morphisms of varieties in \cite{Ast} and \cite{LMTE}. I thank Jan Denef for conversations on this topic, suggesting that I make improvements of this type, and explaining applications of these theorems. A global ``weak'' monomialization theorem is established in \cite{ADK} by Abramovich, Denef and Karu, generalizing an earlier theorem by Abramovich and Karu in \cite{AK}. A monomialization is ``weak'' if the modifications used have no further requirements; in a monomialization all modifications must be products of blowups of nonsingular subvarieties. In this note we show that a local monomialization can be found which satisfies the extra local statements obtained in \cite{ADK}. In \cite{De} and \cite{De1} some comments are made about how the results of this paper can be used.
The techniques in this paper come from the theory of resolution of singularities. Some basic references in this subject are \cite{H0}, \cite{BM1}, \cite{BV}, \cite{CP1}, \cite{CP2}, \cite{CP3}, \cite{EV}, \cite{Ha1}, \cite{T}. In this paper we assume that the ground field has characteristic zero. Counterexamples to local monomialization in positive characteristic are given in \cite{C2}. A proof of local monomialization, within the context of analytic geometry, is given in \cite{LM} for germs of real and complex analytic maps.
\section{A stronger local monomialization theorem for algebraic morphisms}
In this section we state and prove an extension, Theorem \ref{TheoremF}, of the local monomialization theorem, Theorem 1.4 of \cite{LMTE}.
Suppose that $K$ is an algebraic function field over a field $k$. A local ring $R$ is an algebraic local ring of $K$ if $R$ is a subring of $K$, the quotient field of $R$ is $K$ and $R$ is essentially of finite type over $k$.
\begin{Theorem}\label{TheoremF} Suppose that $k$ is a field of characteristic zero, $K\rightarrow K^*$ is a (possibly transcendental) extension of algebraic function fields over $k$, and that $\nu^*$ is a valuation of $K^*$ which is trivial on $k$. Further suppose that $R$ is an algebraic local ring of $K$ and $S$ is an algebraic local ring of $K^*$ such that $S$ dominates $R$ and $\nu^*$ dominates $S$. Suppose that $I$ is a nonzero ideal of $S$. Then there exist sequences of monoidal transforms $R\rightarrow R'$ and $S\rightarrow S'$ along $\nu^*$ such that $R'$ and $S'$ are regular local rings, $S'$ dominates $R'$, there exist regular parameters $(y_1,\ldots,y_n)$ in $S'$, $(x_1,\ldots,x_m)$ in $R'$, units $\delta_1,\ldots,\delta_m\in S'$ and an $m\times n$ matrix $(c_{ij})$ of nonnegative integers such that $(c_{ij})$ has rank $m$, and $$ x_i=\prod_{j=1}^ny_j^{c_{ij}}\delta_i $$ for $1\le i\le m$. Further, we have that \begin{enumerate} \item[1)] $S'$ is a local ring of the blowup of an ideal $J$ of $S$ such that $JS'=(y_1^{\alpha_1}\cdots y_n^{\alpha_n})$ for some $\alpha_1,\ldots,\alpha_n\in {\NZQ N}$. \item[2)] $I S'=(y_1^{\beta_1}\cdots y_n^{\beta_n})$ for some $\beta_1,\ldots,\beta_n\in {\NZQ N}$. \end{enumerate} \end{Theorem}
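To fix ideas, here is the simplest instance of the monomial form in the conclusion of Theorem \ref{TheoremF} (an illustrative toy example, not taken from the paper):

```latex
% Toy illustration: take K = K^* = k(x_1, x_2), let R = S be the local ring
% k[x_1, x_2]_{(x_1, x_2)}, and let S' be the local ring of the blowup of the
% origin on the chart x_1 \neq 0, with regular parameters
% y_1 = x_1, y_2 = x_2 / x_1.  Then
\[
x_1 = y_1, \qquad x_2 = y_1 y_2,
\qquad (c_{ij}) = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix},
\qquad \delta_1 = \delta_2 = 1,
\]
% so (c_{ij}) has rank m = 2, and the ideal I = (x_2) satisfies
% I S' = (y_1 y_2), a monomial in the y_j as in conclusion 2).
```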
The proof of Theorem 1.4 \cite{LMTE} is given in \cite{Ast} and \cite{LMTE}. Also see \cite{C3} for some errata. The new part of Theorem \ref{TheoremF} is the addition of the conclusions 1) and 2). We now explain the changes in this proof which must be made to obtain the stronger result Theorem \ref{TheoremF}. Theorem \ref{TheoremF} is a consequence of Theorem \ref{TheoremM4} which is proven at the end of this section.
We first indicate the changes required in the proofs of \cite{Ast} to obtain Theorem \ref{TheoremF} in the case of a finite extension of function fields. In the construction of Theorem 5.1 \cite{Ast}, we have that $\nu(y_1'),\ldots,\nu(y_s')$ are rationally independent. We begin with the case when the quotient field of $S$ is finite over the quotient field of $R$ and the valuation $\nu$ has rank 1.
\vskip .2truein \begin{Theorem} \label{TheoremM1} Suppose that $R\rightarrow S$ and $R'\rightarrow S'$ satisfy the assumptions and conclusions of Theorem 5.1 \cite{Ast} and $h\in S'$ is nonzero. Then there exist sequences of monoidal transforms $R'\rightarrow R''$ and $S'\rightarrow S''$ along $\nu$ such that $S''$ dominates $R''$, $R''$ has regular parameters $x_1'',\ldots,x_n''$, $S''$ has regular parameters $y_1'',\ldots,y_n''$ having the monomial form of the conclusions of Theorem 5.1 \cite{Ast} and $$ h=(y_1'')^{e_1}\cdots(y_s'')^{e_s}\overline u $$ where $e_1,\ldots,e_s\in {\NZQ N}$ and $\overline u\in S''$ is a unit. \end{Theorem}
\begin{proof} This is an immediate consequence of Theorems 4.8 and 4.10 of \cite{Ast}.
\end{proof}
\begin{Corollary}\label{CorM2} Suppose that $R\rightarrow S$ and $R'\rightarrow S'$ satisfy the assumptions and conclusions of Theorem 5.1 \cite{Ast} and $I\subset S$ is a nonzero ideal. Then there exist sequences of monoidal transforms $R'\rightarrow R''$ and $S'\rightarrow S''$ along $\nu$ such that $S''$ dominates $R''$, $R''$ has regular parameters $x_1'',\ldots,x_n''$, $S''$ has regular parameters $y_1'',\ldots,y_n''$ having the monomial form of the conclusions of Theorem 5.1 \cite{Ast} and the following holds. \begin{enumerate} \item[1)] $S''$ is a local ring of the blowup of an ideal $J$ of $S$ such that $JS''=((y_1'')^{a_1}\cdots (y_s'')^{a_s})$ for some $a_1,\ldots,a_s\in {\NZQ N}$. \item[2)] $I S''=((y_1'')^{b_1}\cdots (y_s'')^{b_s})$ for some $b_1,\ldots,b_s\in {\NZQ N}$. \end{enumerate} \end{Corollary}
\begin{proof} Let $K$ be an ideal in $S$ such that $S'$ is a local ring of the blow up of $K$. Let $KS'=(f_0)$ and $IS'=(f_1,\ldots,f_l)$. Let $h=\prod_{i=0}^lf_i$. By Theorem \ref{TheoremM1}, there exist sequences of monoidal transforms $R'\rightarrow R''$ and $S'\rightarrow S''$ along $\nu$ such that $R''\rightarrow S''$ and $h$ satisfy the conclusions of Theorem \ref{TheoremM1}. Now by Lemma 4.2 and Remark 4.1 \cite{Ast}, there exists a sequence of monoidal transforms $S''\rightarrow S(1)$ such that $R''\rightarrow S(1)$ has a monomial form as in the conclusion of Theorem 5.1 \cite{Ast}, $KS(1)=(y_1(1)^{e_{1}}\cdots y_s(1)^{e_s})$ and $IS(1)=(y_1(1)^{c_{1}}\cdots y_s(1)^{c_s})$ for some $e_i$ and $c_j$ in ${\NZQ N}$.
Now $S'\rightarrow S(1)$ is a product of Perron transforms (as defined in Section 4.1 \cite{Ast}). Thus $S(1)$ is a local ring of the blow up of an ideal $L$ in $S'$ such that $$ LS(1)=(y_1(1)^{g_1}\cdots y_s(1)^{g_s}) $$
for some $g_1,\ldots,g_s\in {\NZQ N}$. There exists a positive integer $\beta$ such that $S(1)$ is a local ring of the blow up of an ideal $J$ of $S$ such that $JS(1)=K^{\beta}LS(1)$
(this can be verified using the universal property of blowing up) and the corollary follows. \end{proof}
We now prove the case when the quotient field of $S$ is finite over the quotient field of $R$ and the valuation ring has arbitrary rank. We use the notation of Theorem 5.3 \cite{Ast}.
\begin{Theorem}\label{TheoremM3} Suppose that $R\rightarrow S$ satisfies the assumptions of Theorem 5.3 \cite{Ast} and $I$ is a nonzero ideal in $S$. Then there exist sequences of monoidal transforms $R\rightarrow R'$ and $S\rightarrow S'$ such that $R'\rightarrow S'$ satisfies the conclusions of Theorem 5.3 \cite{Ast} and \begin{enumerate} \item[1)] $S'$ is a local ring of the blowup of an ideal $J$ of $S$ such that $$ JS'=(\prod_{j=0}^{r-1}\prod_{l=1}^{s_{j+1}}(w_{t_1+\cdots+t_j+l})^{\epsilon_{jl}}) $$
for some $\epsilon_{jl}\in {\NZQ N}$ (with the convention that $t_1+\cdots+t_0=0$). \item[2)] $I S'=(\prod_{j=0}^{r-1}\prod_{l=1}^{s_{j+1}}(w_{t_1+\cdots+t_j+l})^{\gamma_{jl}})$ for some $\gamma_{jl}\in {\NZQ N}$. \end{enumerate} \end{Theorem} \vskip .2truein To obtain Theorem \ref{TheoremM3}, we must modify the proof of Theorem 5.3 \cite{Ast} as follows. \vskip .2truein On line 1 of page 117, replace the reference to Theorem 5.1 \cite{Ast} with Corollary \ref{CorM2}. \vskip .2truein Insert the following at the end of line 15 on page 119 (after ``for $1\le i\le \lambda$''): ``By our construction, $S''$ is a local ring of the blow up of an ideal $J$ of $S$ such that $$ JS''_{q''_{r-1}}=(\prod_{j=0}^{r-2}\prod_{l=1}^{s_{j+1}}(y''_{t_1+\cdots+t_j+l})^{\alpha_{jl}}) $$ for some $\alpha_{jl}\in {\NZQ N}$ so $$ JS''=H''(\prod_{j=0}^{r-2}\prod_{l=1}^{s_{j+1}}(y''_{t_1+\cdots+t_j+l})^{\alpha_{jl}}) $$ where $H''$ is an ideal in $S''$ such that $H''S''_{q''_{r-1}}=S''_{q''_{r-1}}$. By our construction, $$ IS''_{q''_{r-1}}=(\prod_{j=0}^{r-2}\prod_{l=1}^{s_{j+1}}(y''_{t_1+\cdots+t_j+l})^{\beta_{jl}}) $$ for some $\beta_{jl}\in {\NZQ N}$ so $$ IS''=K''(\prod_{j=0}^{r-2}\prod_{l=1}^{s_{j+1}}(y''_{t_1+\cdots+t_j+l})^{\beta_{jl}}) $$ where $K''$ is an ideal in $S''$ such that $K''S''_{q''_{r-1}}=S''_{q''_{r-1}}$.'' \vskip .2truein After line 11 of page 120 (after ``for $1\le i\le \lambda$''), insert: ``We also may obtain, using Theorem \ref{TheoremM1} and the argument of the proof of Corollary \ref{CorM2} (above) that $$ H''U'=(\overline y_{\lambda+1}^{\gamma_1}\cdots\overline y_{\lambda+s_r}^{\gamma_{s_r}}) $$ and $$ K''U'=(\overline y_{\lambda+1}^{\delta_1}\cdots\overline y_{\lambda+s_r}^{\delta_{s_r}}).'' $$ \vskip .2truein Insert the following at the end of line -2 of page 121 (at the end of the proof): ``We have $$ H''\overline S(m'+1)=(\overline y_{\lambda+1}(m'+1)^{\gamma_1}\cdots \overline y_{\lambda+s_r}(m'+1)^{\gamma_{s_r}}+\Sigma) $$ and $$ K''\overline 
S(m'+1)=(\overline y_{\lambda+1}(m'+1)^{\delta_1}\cdots \overline y_{\lambda+s_r}(m'+1)^{\delta_{s_r}}+\Psi_1,\ldots,\overline y_{\lambda+1}(m'+1)^{\delta_1}\cdots \overline y_{\lambda+s_r}(m'+1)^{\delta_{s_r}}+\Psi_e) $$ where $\Sigma,\Psi_1,\ldots,\Psi_e\in (\overline y_1(m'+1),\ldots,\overline y_{\lambda}(m'+1))$ and $\gamma_1,\ldots,\delta_{s_r}\in {\NZQ N}$.
Let $\epsilon_i=\max\{\gamma_i,\delta_i\}$ for $1\le i\le s_r$. Define a MTS $\overline S(m'+1)\rightarrow \overline S(m'+2)$ by $$ \overline y_i(m'+1)=\left\{\begin{array}{ll} \prod_{j=1}^{s_r}\overline y_{\lambda+j}(m'+2)^{\epsilon_j}\overline y_i(m'+2)&\mbox{ for }1\le i\le \lambda\\ \overline y_i(m'+2)&\mbox{ for }\lambda< i\le n. \end{array}\right. $$ Further, by our construction, $\overline S(m'+2)$ is a local ring of the blow up of an ideal $B$ of $S''$ such that $$ B\overline S(m'+2)=(\prod_{j=0}^{r-1}\prod_{l=1}^{s_{j+1}}(\overline y_{t_1+\cdots+t_j+l}(m'+2))^{\epsilon_{jl}}) $$ for some $\epsilon_{jl}\in {\NZQ N}$. Thus there exists an ideal $J^*$ of $S$ and $\beta>0$ such that $\overline S(m'+2)$ is a local ring of the blow up of $J^*$ and $J^*\overline S(m'+2)=J^{\beta}B\overline S(m'+2)$. We have thus achieved the conclusions of Theorem \ref{TheoremM3} in $R(m')\rightarrow \overline S(m'+2)$.'' \vskip .2truein We now indicate the changes which need to be made in the statements of \cite{LMTE} to obtain the proof of Theorem \ref{TheoremF}. \vskip .2truein We first extend Theorem \ref{TheoremM1} to arbitrary extensions of characteristic zero algebraic function fields. We will call this ``Extended Theorem \ref{TheoremM1}''.
In the statement and proof of Theorem \ref{TheoremM1}, we replace references to Theorem 5.1 \cite{Ast} with Theorem 10.1 \cite{LMTE}, and references to Theorems 4.8 and 4.10 \cite{Ast} with Theorems 9.1 and 9.3 \cite{LMTE}. Also replace $n$ with $m$ when referring to regular parameters in birational extensions of $R$, and $s$ with $\overline s$. We must add the following sentence to the first line of the proof: ``First suppose that $\mbox{rank}(\nu)>0$, so that $\mbox{rank}(\nu)=1$ (here $\nu$ is the restriction of the given rank 1 valuation $\nu^*$ of the quotient field of $S$ to the quotient field $K$ of $R$)''. At the end of the proof, add: ``If $\mbox{rank}(\nu)=0$, then $\nu$ is trivial so $R=K$ and the proof is a substantial simplification.''
\vskip .2truein We now extend Corollary \ref{CorM2}. We will call this ``Extended Corollary \ref{CorM2}''. In the statement and proof of Corollary \ref{CorM2}, replace references to Theorem 5.1 \cite{Ast} with Theorem 10.1 \cite{LMTE} and Section 4.1 \cite{Ast} with Section 5 \cite{LMTE}. Replace references to Theorem \ref{TheoremM1} with ``Extended Theorem \ref{TheoremM1}''. Replace $n$ with $m$ when referring to regular parameters in birational extensions of $R$. Replace $s$ with $\overline s$. We must add the following sentence to the first line of the proof: ``First suppose that $\mbox{rank}(\nu)>0$, so that $\mbox{rank}(\nu)=1$ (here $\nu$ is the restriction of the given rank 1 valuation $\nu^*$ of the quotient field of $S$ to the quotient field $K$ of $R$)''. At the end of the proof, add: ``If $\mbox{rank}(\nu)=0$, then $\nu$ is trivial so $R=K$ and the proof is a substantial simplification.''
\vskip .2truein We now adopt the notation on valuation rings introduced on pages 1579--1581 of \cite{LMTE}, which we will use below.
We extend Theorem \ref{TheoremM3} to arbitrary extensions of algebraic function fields in the following Theorem \ref{TheoremM4}.
\begin{Theorem}\label{TheoremM4} Suppose that $R\rightarrow S$ satisfies the assumptions of Theorem 10.5 \cite{LMTE} and $I$ is a nonzero ideal in $S$. Then there exist sequences of monoidal transforms $R\rightarrow R'$ and $S\rightarrow S'$ such that $R'\rightarrow S'$ satisfies the conclusions of Theorem 10.5 \cite{LMTE} and \begin{enumerate} \item[1)] $S'$ is a local ring of the blowup of an ideal $J$ of $S$ such that $$ JS'=(\prod (w_{\overline t_0+\cdots+\overline t_{i-1}+\overline t_{i,1}+\cdots+\overline t_{i,j-1}+l})^{\epsilon_{ijl}}) $$ where the product is over $$ 0\le i\le \beta, 1\le j\le \sigma(i), 1\le l\le \overline s_{ij} $$ (with $\overline s_{i1}=\overline s_i$) and $\epsilon_{ijl}\in {\NZQ N}$ for all $i,j,l$. \item[2)] We have that $$ I S'=(\prod (w_{\overline t_0+\cdots+\overline t_{i-1}+\overline t_{i,1}+\cdots+\overline t_{i,j-1}+l})^{\gamma_{ijl}}) $$ where the product is over $$ 0\le i\le \beta, 1\le j\le \sigma(i), 1\le l\le \overline s_{ij} $$ and $\gamma_{ijl}\in {\NZQ N}$ for all $i,j,l$. \end{enumerate} \end{Theorem}
\begin{proof} In the proof of Theorem \ref{TheoremM3}, replace references to Theorem 5.3 \cite{Ast} with Theorem 10.5 \cite{LMTE}. Replace references to Theorem \ref{TheoremM3} with Theorem \ref{TheoremM4}, references to Corollary \ref{CorM2} with Extended Corollary \ref{CorM2}, and references to Theorem \ref{TheoremM1} with Extended Theorem \ref{TheoremM1}. The indexing of the variables must be changed (as in the statement of Theorem \ref{TheoremM4}). The prime ideal $q_{r-1}$ in the valuation ring of the quotient field of $S$ is replaced by the prime ideal $q_{\beta-1,\sigma(\beta-1)}$ of the valuation ring of the quotient field of $S$, so $q_{r-1}''=q_{r-1}\cap S''$ becomes $q_{\beta-1}''=q_{\beta-1,\sigma(\beta-1)}\cap S''$.
We must add the following sentences as the first lines of the proof: ``We prove the theorem by induction on $\mbox{rank }V^*$. If $\mbox{rank }V^*=1$ then the theorem is immediate from Extended Corollary \ref{CorM2}.
By induction on $\gamma=\mbox{rank } V^*$, we may assume that the theorem is true whenever $\mbox{rank } V^*=\gamma-1$. We are reduced to proving the theorem in the following two cases: \vskip .2truein Case 1. $\sigma(\beta)=1$.
Case 2. $\sigma(\beta)>1$. \vskip .2truein
Suppose that we are in Case 1, $\sigma(\beta)=1$. Then $V^*/q_{\beta-1,\sigma(\beta-1)}$ is a rank 1 valuation ring which dominates the rank 1 valuation ring $V/p_{\beta-1}$. $V^*/q_{\beta-1,\sigma(\beta-1)}$ has rational rank $\overline s_{\beta}$ and $V/p_{\beta-1}$ has rational rank $\overline r_{\beta}$.'' \vskip .2truein We must add the following sentences as the last lines of the proof: ``Now suppose that we are in Case 2, $\sigma(\beta)>1$. Then $V^*/q_{\beta,\sigma(\beta)-1}$ is a rank 1, rational rank $\overline s_{\beta,\sigma(\beta)}$ valuation ring which dominates the rank 0 valuation ring $V/p_{\beta}$, which is a field. The proof in Case 2 is thus a substantial simplification of the proof in Case 1.'' \end{proof}
\section{A geometric local monomialization theorem}
The main result in this section is Theorem \ref{TheoremM9}. Uniformizing parameters on an affine $k$-variety $V$ are a set of elements $u_1,\ldots,u_s\in A=\Gamma(V,\mathcal O_V)$ such that $du_1,\ldots,du_s$ is a free basis of $\Gamma(V,\Omega_{V/k})$ as an $A$-module.
\begin{Theorem}\label{TheoremM6} Suppose that $k$ is a field of characteristic zero, $\phi:Y\rightarrow X$ is a dominant morphism of $k$-varieties and $\nu$ is a zero dimensional valuation of the function field $k(Y)$ (the residue field of the valuation ring of $\nu$ is algebraic over $k$) which has a center on $Y$ (the valuation ring of $\nu$ dominates a local ring of $Y$). Further suppose that $\mathcal I\subset \mathcal O_Y$ is a nonzero ideal sheaf. Then there exists a commutative diagram of morphisms of $k$-varieties \begin{equation}\label{eqM8} \begin{array}{lll} Y_{\nu}&\stackrel{\phi_{\nu}}{\rightarrow}&X_{\nu}\\ \beta_{\nu}\downarrow&&\downarrow \alpha_{\nu}\\ Y&\stackrel{\phi}{\rightarrow}&X \end{array} \end{equation} where the vertical arrows are proper morphisms which are products of blow ups of nonsingular sub varieties, and if $q'$ is the center of $\nu$ on $Y_{\nu}$ and $p'$ is the center of $\nu$ on $X_{\nu}$, then there exist an affine open neighborhood $V_{\nu}$ of $q'$ in $Y_{\nu}$ and an affine open neighborhood $U_{\nu}$ of $p'$ in $X_{\nu}$, regular parameters $y_1,\ldots,y_n$ in $\mathcal O_{Y_{\nu},q'}$ which are uniformizing parameters on $V_{\nu}$ and regular parameters $x_1,\ldots,x_m$ in $\mathcal O_{X_{\nu},p'}$ which are uniformizing parameters on $U_{\nu}$ (where $m=\dim X$, $n=\dim Y$) and units $\delta_1,\ldots,\delta_m\in \Gamma(V_{\nu},\mathcal O_{Y_{\nu}})$ such that \begin{enumerate} \item[1)] $x_i=\prod_{j=1}^ny_j^{c_{ij}}\delta_i\mbox{ with }c_{ij}\in {\NZQ N}\mbox{ for }1\le i\le m$ and ${\rm rank}(c_{ij})=m$; \item[2)] $V_{\nu}\setminus Z(y_1\cdots y_n)\rightarrow Y$ is an open immersion; \item[3)] $\mathcal I\mathcal O_{V_{\nu}}=y_1^{a_1}\cdots y_n^{a_n}\mathcal O_{V_{\nu}}$ for some $a_1,\ldots,a_n\in {\NZQ N}$; \item[4)] $\phi_{\nu}:V_{\nu}\rightarrow U_{\nu}$ is toroidal (Section 2.2 \cite{ADK}) with respect to the locus of the product of the $y_j$ and the locus of the product of the $x_i$.
\end{enumerate} \end{Theorem}
\begin{proof} Let $p$ be the center of $\nu$ on $X$ and $q$ be the center of $\nu$ on $Y$. Let $\nu^*=\nu$, $R=\mathcal O_{X,p}$, $S=\mathcal O_{Y,q}$, $I=\mathcal I_q$, $K^*=k(Y)$ and $K=k(X)$. Let $$ \begin{array}{lll} R'&\rightarrow &S'\\ \uparrow&&\uparrow\\ R&\rightarrow&S \end{array} $$ be the diagram of the conclusions of Theorem \ref{TheoremF}. There exists a commutative diagram $$ \begin{array}{lll} Y_{\nu}&\stackrel{\phi_{\nu}}{\rightarrow}&X_{\nu}\\ \beta_{\nu}\downarrow&&\downarrow\alpha_{\nu}\\ Y&\stackrel{\phi}{\rightarrow}&X \end{array} $$ where the vertical arrows are products of blow ups of nonsingular sub varieties and if $p'$ is the center of $\nu$ on $X_{\nu}$ and $q'$ is the center of $\nu$ on $Y_{\nu}$ then $\mathcal O_{Y_{\nu},q'}=S'$ and $\mathcal O_{X_{\nu},p'}=R'$. Since $k$ has characteristic zero, there exists an affine open neighborhood $U_{\nu}$ of $p'$ such that $x_1,\ldots,x_m$ are uniformizing parameters on $U_{\nu}$. Let $V_{\nu}$ be an affine neighborhood of $q'$ in $Y_{\nu}$ such that $y_1,\ldots,y_n$ are uniformizing parameters on $V_{\nu}$ and $\delta_1,\ldots,\delta_m\in \Gamma(V_{\nu},\mathcal O_{Y_{\nu}})$ are units. Let $\overline C$ be an $m\times m$ sub matrix of $C=(c_{ij})$ of rank $m$. Without loss of generality, $$ \overline C=\left(\begin{array}{lll} c_{11}&\cdots&c_{1m}\\ \vdots&&\vdots\\ c_{m1}&\cdots&c_{mm} \end{array}\right). $$ Let $d=\mbox{Det}(\overline C)$. Let $$ A=\Gamma(V_{\nu},\mathcal O_{Y_{\nu}}) $$ and $$ B=A[z_1,\ldots,z_m]/(z_1^d-\delta_1,\ldots,z_m^d-\delta_m)=A[\overline z_1,\ldots,\overline z_m] $$ where $\overline z_i$ is the class of $z_i$.
Let $V_{\nu}'=\mbox{Spec}(B)$ with natural finite \'etale morphism $\pi:V_{\nu}'\rightarrow V_{\nu}$. Define $\overline y_j$ for $1\le j\le n$ by
$$
y_j=\overline y_j\prod_{l=1}^m\overline z_{l}^{db_{jl}}
$$
where
$$
\overline B=(b_{jl})=\left(\begin{array}{c}
-\overline C^{-1}\\
0\end{array}\right)
$$ is an $n\times m$ matrix with coefficients in $\frac{1}{d}{\NZQ Z}$. We have that for $1\le i\le m$, \begin{equation}\label{eqU1} x_i=\prod_{j=1}^ny_j^{c_{ij}}\delta_i=\left(\prod_{j=1}^n\overline y_j^{c_{ij}}\right)\left(\prod_{l=1}^m\overline z_l^{\sum_{j=1}^nc_{ij}db_{jl}}\right)\delta_i =\prod_{j=1}^n\overline y_j^{c_{ij}}. \end{equation} The morphism $\pi:V_{\nu}'\rightarrow V_{\nu}$ is \'etale, so $y_1,\ldots,y_n$ are uniformizing parameters on $V_{\nu}'$, or equivalently, $dy_1,\ldots,dy_n$ are a free basis of $\Omega^1_{B/k}$ as a $B$-module. Let $$ \epsilon_j=\left(\prod_{l=1}^m\overline z_l^{db_{jl}}\right)^{-1}\in B $$ and $\overline y_j=\epsilon_jy_j$ for $1\le j\le n$. Since $y_1,\ldots,y_n$ are regular parameters in $\mathcal O_{V'_{\nu},q''}$ for all $q''\in \pi^{-1}(q')$ and $d\overline y_j=\epsilon_jdy_j+y_jd\epsilon_j$, we have that $$ \left(\Omega^1_{V'_{\nu}/k}/(d\overline y_1\mathcal O_{V'_{\nu}}+\cdots+d\overline y_n\mathcal O_{V'_{\nu}})\right)_{q''}=0 $$ for all $q''\in \pi^{-1}(q')$. Let $$ Z=\mbox{Supp}\left(\Omega^1_{V'_{\nu}/k}/(d\overline y_1\mathcal O_{V'_{\nu}}+\cdots+d\overline y_n\mathcal O_{V'_{\nu}})\right). $$ $Z$ is a closed subset of $V'_{\nu}$ which is disjoint from $\pi^{-1}(q')$. The morphism $V'_{\nu}\rightarrow V_{\nu}$ is finite, so $\pi(Z)$ is a closed subset of $V_{\nu}$ which does not contain $q'$. After replacing $V_{\nu}$ with an affine neighborhood of $q'$ in $V_{\nu}\setminus \pi(Z)$, we have that $\overline y_1,\ldots,\overline y_n$ are uniformizing parameters on $V'_{\nu}$ giving the monomial expression (\ref{eqU1}), and so $V_{\nu}\rightarrow U_{\nu}$ is toroidal with respect to the locus of the product of the $y_j$ and the locus of the product of the $x_i$. \end{proof}
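Let us record, for the reader's convenience, the computation behind the last equality in (\ref{eqU1}). Since the first $m$ rows of $\overline B$ are $-\overline C^{-1}$ and the remaining rows are zero, for $1\le i,l\le m$ we have
$$
\sum_{j=1}^{n}c_{ij}b_{jl}=\sum_{j=1}^{m}c_{ij}(-\overline C^{-1})_{jl}=-(\overline C\,\overline C^{-1})_{il},
$$
which equals $-1$ if $i=l$ and $0$ otherwise. Hence
$$
\prod_{l=1}^m\overline z_l^{\sum_{j=1}^nc_{ij}db_{jl}}=\overline z_i^{-d}=\delta_i^{-1}
$$
by the relation $\overline z_i^d=\delta_i$ in $B$, so the unit $\delta_i$ cancels in (\ref{eqU1}).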
\begin{Theorem}\label{TheoremM9} Suppose that $k$ is a field of characteristic zero, $\phi:Y\rightarrow X$ is a dominant morphism of $k$-varieties and $\mathcal I\subset \mathcal O_Y$ is a nonzero ideal sheaf. Then there exists a finite number of commutative diagrams $$ \begin{array}{lll} Y_i&\stackrel{\phi_i}{\rightarrow}&X_i\\ \beta_i\downarrow&&\downarrow\alpha_i\\ Y&\stackrel{\phi}{\rightarrow}&X \end{array} $$ for $1\le i\le t$ such that the vertical arrows are products of blow ups of nonsingular sub varieties and there are affine open subsets $V_i\subset Y_i$ and $U_i\subset X_i$ such that $\phi_i(V_i)\subset U_i$, $\cup_{i=1}^t\beta_i(V_i)=Y$ and the restriction $\phi_i:V_i\rightarrow U_i$ is toroidal with respect to strict normal crossings divisors $E_i$ on $V_i$ and $D_i$ on $X_i$, such that the restriction of $\beta_i$ to $V_i\setminus E_i$ is an open immersion and $\mathcal I\mathcal O_{V_i}$ is a divisor on $V_i$ whose support is contained in $E_i$. \end{Theorem}
\begin{proof} Let $\Omega$ be the Zariski Riemann manifold of $Y$ (Section 17 of Chapter VI \cite{ZS2}). The points of $\Omega$ are equivalence classes of valuations of $k(Y)$ which dominate a local ring of $Y$. There are natural continuous surjections $\rho_Z:\Omega\rightarrow Z$ for any birational proper morphism $Z\rightarrow Y$. Let $\Sigma$ be the subset of $\Omega$ of zero dimensional valuations which dominate a local ring of $Y$. For each $\nu\in \Sigma$, we construct a diagram (\ref{eqM8}) with corresponding open subset $V_{\nu}$ of $Y_{\nu}$.
Suppose that $\omega$ is a valuation of $k(Y)$ which dominates a local ring of $Y$. If $\omega$ is not zero dimensional, then there exists $\nu\in \Sigma$ such that $\nu$ is composite with $\omega$ (Theorem 7, page 16 \cite{ZS2}), so that $\omega\in \rho_{Y_{\nu}}^{-1}(V_{\nu})$. Thus $\{\rho_{Y_{\nu}}^{-1}(V_{\nu})\mid \nu\in \Sigma\}$ is an open cover of $\Omega$, and there is a finite subcover since $\Omega$ is quasi-compact (Theorem 40, page 113 \cite{ZS2}). The finitely many diagrams corresponding to the valuations of such a finite subcover then satisfy the conclusions of the theorem.
\end{proof}
\end{document}
\begin{document}
\title[Strong $BV$-extension and $W^{1,1}$-extension domains]{Strong $BV$-extension and $W^{1,1}$-extension domains}
\author{Miguel Garc\'ia-Bravo} \author{Tapio Rajala}
\address{University of Jyvaskyla \\
Department of Mathematics and Statistics \\
P.O. Box 35 (MaD) \\
FI-40014 University of Jyvaskyla \\
Finland}
\email{[email protected]} \email{[email protected]}
\thanks{The authors acknowledge the support from the Academy of Finland, grant no. 314789.} \subjclass[2000]{Primary 46E35.} \keywords{Sobolev extension, BV-extension} \date{\today}
\begin{abstract} We show that a bounded domain in a Euclidean space is a $W^{1,1}$-extension domain if and only if it is a strong $BV$-extension domain. In the planar case, bounded and strong $BV$-extension domains are shown to be exactly those $BV$-extension domains for which the set $\partial\Omega \setminus \bigcup_{i} \overline{\Omega}_i$ is purely $1$-unrectifiable, where $\Omega_i$ are the open connected components of $\mathbb{R}^2\setminus\overline{\Omega}$. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Let $\Omega\subset\mathbb{R}^n$ be a domain for some $n\geq 2$. For every $1\leq p\leq \infty$, we define the Sobolev space $W^{1,p}(\Omega)$ to be $$W^{1,p}(\Omega)=\{u\in L^p(\Omega):\, \nabla u\in L^p(\Omega;\mathbb{R}^n)\}, $$ where $\nabla u$ denotes the distributional gradient of $u$. We equip this space with the non-homogeneous norm
$$\|u\|_{W^{1,p}(\Omega)}= \|u\|_{L^p(\Omega)}+\|\nabla u\|_{L^p(\Omega)}. $$ We say that $\Omega$ is a $W^{1,p}$-extension domain if there exists an operator $T\colon W^{1,p}(\Omega)\to W^{1,p}(\mathbb{R}^n)$ and a constant $C>0$ so that
$$ \|Tu\|_{W^{1,p}(\mathbb{R}^n)}\leq C \|u\|_{W^{1,p}(\Omega)}$$
and $Tu|_\Omega = u$ for every $u\in W^{1,p}(\Omega)$.
We denote the minimal constant $C$ above by $\|T\|$. We point out that by the results from \cite{HKT2008,S2006}, for $p>1$ one can always assume the operator $T$ to be linear; by \cite{KRZ}, the same holds for $p=1$ in the case of bounded simply connected planar domains. It is not yet known whether this is the case for general domains when $p=1$.
It is well-known from the works of Calder\'on and Stein \cite{cal1961,stein} that Lipschitz domains are $W^{1,p}$-extension domains for every $p\geq 1$. Moreover, Jones showed in \cite{jo1981} that every uniform domain $\Omega\subset \mathbb{R}^n$ is a $W^{1,p}$-extension domain for all $p\geq 1$. However, these conditions are not necessary for a domain to be a Sobolev extension domain. For bounded simply connected planar domains a geometric characterization of Sobolev extension domains by means of a curve condition has been given in the works \cite{sh2010,KRZ2015,KRZ}. Namely, for the $W^{1,1}$ case we have the following: A bounded planar simply connected domain $\Omega$ is a $W^{1,1}$-extension domain if and only if for every $x,y\in \Omega^c$ there exists a curve $ \gamma \subset \Omega^c$ connecting $x$ and $y$ with \begin{equation}\label{eq:curvep1}
\ell(\gamma) \le C|x-y|, \text{ and } \mathcal H^1(\gamma \cap \partial \Omega) = 0. \end{equation}
A typical example of a simply connected planar domain $\Omega$ which is not a $W^{1,p}$-extension domain for any $p\geq 1$ is the slit disk $D=\{(x,y)\in\mathbb{R}^2:\, x^2+y^2<1\}\setminus ([0,1)\times \{0\})$. However, by the results of \cite{KMS2010}, knowing that the complement is quasiconvex is enough to ensure that $D$ is a $BV$-extension domain.
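To see concretely why the slit disk $D$ fails to be a $W^{1,1}$-extension domain, one may use the following standard sketch (our illustration, not taken from \cite{KMS2010}): in polar coordinates let $u(r,\theta)=\theta$ with $\theta\in(0,2\pi)$ measured from the upper side of the slit. Then $|\nabla u|=1/r$ and
$$
\int_D|\nabla u|\,dx=\int_0^1\int_0^{2\pi}\frac{1}{r}\,r\,d\theta\,dr=2\pi<\infty,
$$
so $u\in W^{1,1}(D)$, but the traces of $u$ from the two sides of the slit differ by $2\pi$ along $[0,1)\times\{0\}$. A Sobolev function on $\mathbb{R}^2$ cannot have such a jump across a segment of positive length, so $u$ admits no extension to $W^{1,1}(\mathbb{R}^2)$.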
Recall that
$$BV(\Omega)=\{u\in L^1(\Omega):\, \| D u\|(\Omega)<\infty\}$$ is the space of functions of bounded variation where
$$\Vert D u\Vert (\Omega)=\sup\left\{\int_{\Omega}u \,\text{div} (v)\, dx:\, v\in C^{\infty}_{0}(\Omega;\mathbb{R}^n) ,\, |v|\leq 1\right\}$$
denotes the total variation of $u$ on $\Omega$. We endow this space with the norm $ \Vert u\Vert_{BV(\Omega)}=\Vert u\Vert_{L^1(\Omega)}+\Vert D u\Vert (\Omega).$ Note that $\|Du\|$ is a Radon measure on $\Omega$ that is defined for every set $F\subset\Omega$ as \[
\|Du\|(F)=\inf\{\|Du\|(U):\, F\subset U\subset\Omega,\;U\;\text{open}\}. \]
We say that $\Omega$ is a $BV$-extension domain if there exists a constant $C>0$ and a (not necessarily linear) extension operator $T\colon BV(\Omega)\to BV(\mathbb{R}^n)$ so that $Tu|_{\Omega}=u$ and
$$\|Tu\|_{BV(\mathbb{R}^n)}\leq C\|u\|_{BV(\Omega)} $$ for all $u\in BV(\Omega)$. Let us point out that $\Omega$ being a $W^{1,1}$-extension domain always implies that it is also a $BV$-extension domain (see \cite[Lemma 2.4]{KMS2010}).
Our first main result is the characterization of bounded $W^{1,1}$-extension domains in terms of strong extendability of $BV$-functions, or equivalently, in terms of strong extendability of sets of finite perimeter. The equivalence between strong extendability of $BV$-functions and strong extendability of sets of finite perimeter is inspired by the work of Maz'ya and Burago \cite{BM1967} (see also \cite[Section 9.3]{mazya}). They showed that for all $u\in L^{1}_{\text{loc}}(\Omega)$ with finite total variation we may find an extension $Tu\in L^{1}_{\text{loc}}(\mathbb{R}^n)$ with $\|D(Tu) \|(\mathbb{R}^n) \leq C \|Du\|(\Omega)$, for some constant $C>0$, if and only if any set $E\subset\Omega$ of finite perimeter in $\Omega$ admits an extension $\widetilde E\subset\mathbb{R}^n$ satisfying $\widetilde E\cap\Omega=E$ and $P(\widetilde E,\mathbb{R}^n)\leq C P(E,\Omega)$, where $C>0$ is some constant. Recall that a Lebesgue measurable subset $E\subset \mathbb{R}^n$ has finite perimeter in $\Omega$ if $\chi_E\in BV(\Omega)$, where $\chi_E$ denotes the characteristic function of the set $E$. We set $P(E,\Omega)=\|D \chi_E\|(\Omega)$ and call it the perimeter of $E$ in $\Omega$. If a set $E$ does not have finite perimeter in $\Omega$ we set $P(E,\Omega)=\infty$.
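As a simple illustration of these notions (standard, recorded only to fix conventions): if $E$ has $C^1$ boundary in $\Omega$, then $P(E,\Omega)=\mathcal H^{n-1}(\partial E\cap \Omega)$; in particular, for the ball $E=B(0,r)\subset\mathbb{R}^2$ one gets
$$
P(E,\mathbb{R}^2)=\|D\chi_E\|(\mathbb{R}^2)=\mathcal H^1(\partial B(0,r))=2\pi r.
$$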
Before stating our characterization, we introduce the terminology of strong extendability, following \cite{HKLL2016} and \cite{L2015}.
\begin{definition}[Strong $BV$-extension domain]\label{def:strongBV} A domain $\Omega \subset \mathbb R^n$ is called a \emph{strong $BV$-extension domain} if there exists a constant $C>0$ so that for any
$u \in BV(\Omega)$ there exists $Tu \in BV(\mathbb R^n)$ with $Tu|_\Omega = u$,
$\|Tu\|_{BV(\mathbb R^n)} \le C\|u\|_{BV(\Omega)}$, and $\|D(Tu)\|(\partial\Omega) = 0$. \end{definition}
In the spirit of Definition \ref{def:strongBV}, we define the analogous concept for sets of finite perimeter.
\begin{definition}[Strong extension property for sets of finite perimeter]\label{def:strongperi} A domain $\Omega \subset \mathbb R^n$ is said to have the \emph{strong extension property for sets of finite perimeter} if there exists a constant $C>0$ so that for any set $E\subset \Omega$ of finite perimeter in $\Omega$ there exists a set $\widetilde E\subset \mathbb{R}^n$ such that \begin{enumerate}
\item[(PE1)] $\widetilde E \cap \Omega = E$ modulo measure zero sets,
\item[(PE2)] $P(\widetilde E,\mathbb{R}^n)\leq C P(E,\Omega) $, and
\item[(PE3)] $\mathcal{H}^{n-1}(\partial^M \widetilde E \cap \partial \Omega)=0$. \end{enumerate} \end{definition}
With the above definitions we can state our first main result.
\begin{theorem}\label{thm:mainRn} Let $\Omega\subset\mathbb{R}^n$ be a bounded domain. Then the following are equivalent: \begin{enumerate}
\item $\Omega$ is a $W^{1,1}$-extension domain.
\item $\Omega$ is a strong $BV$-extension domain.
\item $\Omega$ has the strong extension property for sets of finite perimeter. \end{enumerate} \end{theorem}
Our main motivation behind this theorem is to understand better the geometry of $W^{1,1}$-extension domains. From Theorem \ref{thm:mainRn} we see that for a bounded $W^{1,1}$-extension domain, except for a purely $(n-1)$-unrectifiable set, the boundary consists of points where the domain has density at most $1/2$. See Section \ref{sec:corollaries} for the proof of this. In the same section we give an example showing that the above density bound is not sufficient to imply that a bounded $BV$-extension domain is a $W^{1,1}$-extension domain, even in the plane. Another corollary of Theorem \ref{thm:mainRn} is that for a bounded $W^{1,1}$-extension domain, again up to a purely $(n-1)$-unrectifiable set, the boundary consists of points that are boundary points also for some component of the interior of the complement of the domain. In Section \ref{sec:corollaries} we provide also an example showing that in $\mathbb R^3$ this property does not characterize $W^{1,1}$-extension domains among bounded $BV$-extension domains. However, our second main result states that in the planar case this is true.
\begin{theorem}\label{thm:planar} Let $\Omega \subset \mathbb R^2$ be a bounded $BV$-extension domain. Then $\Omega$ is a $W^{1,1}$-extension domain if and only if the set \[
\partial \Omega \setminus \bigcup_{i \in I} \overline{\Omega_i} \] is purely $1$-unrectifiable, where $\{\Omega_i\}_{i\in I}$ are the connected components of $\mathbb R^2 \setminus \overline{\Omega}$. \end{theorem}
Let us mention that Theorem \ref{thm:planar} recovers partly the theorems in \cite{KRZ}. Namely, it immediately follows that Jordan $BV$-extension domains are $W^{1,1}$-extension domains since the set required in Theorem \ref{thm:planar} to be purely unrectifiable is indeed empty. The curve characterization \eqref{eq:curvep1} also follows quite easily from Theorem \ref{thm:planar} using a small observation recorded in \cite{KRZ}. Let us briefly sketch this. Since a $W^{1,1}$-extension domain is known to be a $BV$-extension domain, its complement is quasiconvex. Then, a quasiconvex curve between two points in the complement can be modified to intersect the boundaries of each $\Omega_i$ at most twice (see Lemma \ref{lem:quasi.int.Omega_i}). Theorem \ref{thm:planar} now says that the rest of the curve intersects $\partial \Omega$ in a $\mathcal H^1$-measure zero set, giving condition \eqref{eq:curvep1}. Conversely, \eqref{eq:curvep1} implies quasiconvexity, and hence that $\Omega$ is a $BV$-extension domain. For a simply connected $\Omega$, we can connect every pair of components $\Omega_i$ and $\Omega_j$ with a curve satisfying \eqref{eq:curvep1}. Since the set \[
\partial \Omega \setminus \bigcup_{i \in I} \overline{\Omega_i} \] is contained in countably many such curves by \cite[Lemma 4.6]{KRZ}, we see that it is purely $1$-unrectifiable.
Let us point out, however, that the extension operator that we construct in Theorem \ref{thm:planar}, is not always linear. One of the main points of \cite{KRZ} was to construct a linear extension operator. At the moment we do not see how our construction could be modified to give a linear extension operator. Still, the general smoothing operator we use for proving Theorem \ref{thm:mainRn} (and Theorem \ref{thm:planar}) immediately gives the following.
\begin{corollary} Suppose $\Omega \subset \mathbb R^n$ is a bounded strong $BV$-extension domain for which the extension operator is linear. Then there exists a linear $W^{1,1}$-extension operator from $W^{1,1}(\Omega)$ to $W^{1,1}(\mathbb R^n)$. \end{corollary}
Although not strictly used in our proofs, we include the following result for future use: every $BV$-extension domain $\Omega\subset\mathbb{R}^n$ satisfies the measure density condition; that is, there exists a constant $c>0$ so that for every $x\in\overline{\Omega}$ and $r\in (0,1]$ we have
$|B(x,r)\cap \Omega|\geq c r^n$. One may find this result in Section 2.2. The same conclusion for $W^{1,p}$-extension domains with $1\leq p<\infty$ is also true and was already shown in \cite{HKT2008}.
\section{Preliminaries}
When making estimates, we often denote by $C$ a positive constant whose value may change between appearances, even within a chain of inequalities. Unless otherwise stated, these constants depend only on the dimension $n$ of the underlying space $\mathbb{R}^n$.
For any point $x \in \mathbb R^n$ and radius $r>0$ we denote the open ball by
\[
B(x,r) = \{y \in \mathbb R^n\,:\, |x-y| < r\}.
\]
More generally, for a set $A \subset \mathbb R^n$ we define the open $r$-neighbourhood as
\[
B(A,r) = \bigcup_{x \in A} B(x,r).
\]
We denote by $|E|$ the $n$-dimensional outer Lebesgue measure of a set $E \subset \mathbb R^n$.
For any Lebesgue measurable subset $E \subset \mathbb R^n$ and any point $x \in \mathbb R^n$ we then define the upper density of $E$ at $x$ as
\[
\overline{D}(E,x) = \limsup_{r\searrow 0}\frac{|E\cap B(x,r)|}{|B(x,r)|},
\]
and the lower density of $E$ at $x$ as
\[
\underline{D}(E,x) = \liminf_{r\searrow 0}\frac{|E\cap B(x,r)|}{|B(x,r)|}.
\]
If $\overline{D}(E,x) = \underline{D}(E,x)$, we call the common value the density of $E$ at $x$ and denote it by $D(E,x)$.
The essential interior of $E$ is then defined as
\[
\mathring{E}^M = \{x \in \mathbb R^n \,:\, D(E,x)=1 \},
\]
the essential closure of $E$ as
\[
\overline{E}^M = \{x \in \mathbb R^n \,:\, \overline{D}(E,x) > 0\},
\]
and the essential boundary of $E$ as
\[
\partial^M E = \{x \in \mathbb R^n\,:\, \overline{D}(E,x) > 0\text{ and } \overline{D}(\mathbb R^n \setminus E,x) > 0\}.
\] As usual, $\mathcal{H}^s(A)$ will stand for the $s$-dimensional Hausdorff measure of a set $A \subset \mathbb{R}^n$ obtained as the limit \[ \mathcal H^s(A) = \lim_{\delta \searrow 0}\mathcal H_\delta^s(A), \] where $\mathcal H_\delta^s(A)$ is the $s$-dimensional Hausdorff $\delta$-content of $A$ defined as \[ \mathcal H_\delta^s(A) = \inf\left\{\sum_{i=1}^\infty {\mathop\mathrm{\,diam\,}}(U_i)^s \,:\, A \subset \bigcup_{i=1}^\infty U_i, {\mathop\mathrm{\,diam\,}}(U_i) \le \delta\right\}. \]
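For illustration (this simple example is not needed later), consider the half-space $H = \{y \in \mathbb R^n \,:\, y_n > 0\}$. For every $x \in \partial H$ and every $r>0$ we have $|H \cap B(x,r)| = \frac12 |B(x,r)|$, so that $D(H,x)=\frac12$, while clearly $D(H,x)=1$ for $x \in H$ and $D(H,x)=0$ for $x \notin \overline{H}$. Hence
\[
\mathring{H}^M = H, \qquad \overline{H}^M = \overline{H} \qquad \text{and} \qquad \partial^M H = \partial H,
\]
so in this case the essential and the topological notions agree.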
We say that a set $H\subset\mathbb{R}^n$ is $m$-rectifiable, for some $m<n$, if there exist countably many Lipschitz maps $f_j\colon\mathbb{R}^m\to \mathbb{R}^n$ so that $\mathcal{H}^m(H\setminus \bigcup_{j}f_j(\mathbb{R}^m))=0$. A set $H$ will be called purely $m$-unrectifiable if for every Lipschitz map $f\colon \mathbb{R}^m\to\mathbb{R}^n$ we have $$\mathcal{H}^m(H\cap f(\mathbb{R}^m))=0.$$ Observe that by Rademacher's theorem one can deduce that if $f \colon \mathbb{R}^m\to\mathbb{R}^n$ is Lipschitz, then there are countably many sets $E_i\subset\mathbb{R}^m$ on which $f$ is bi-Lipschitz and such that $\mathcal{H}^m(f(\mathbb{R}^m \setminus\bigcup_i E_i)) = 0$.
Moreover, it easily follows that if $H \subset \mathbb R^n$ is not purely $m$-unrectifiable, then there exists a Lipschitz map $f \colon \mathbb R^m \to \mathbb R^{n-m}$ so that, up to a rotation, the set \[ H \cap \textrm{Graph}(f) \] has positive $\mathcal H^{m}$-measure, where $\textrm{Graph}(f) = \{(x,f(x))\,:\, x \in \mathbb R^m\}$.
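To keep these notions concrete, let us recall two standard examples (neither is needed in our proofs). The graph of a Lipschitz map $g \colon \mathbb R^m \to \mathbb R^{n-m}$ is $m$-rectifiable, being the image of the Lipschitz map $x \mapsto (x,g(x))$. On the other hand, the four-corner Cantor set in $\mathbb R^2$, obtained by iteratively replacing each square by the four squares of a quarter of its side-length situated at its corners, is a classical example of a set of positive and finite $\mathcal H^1$-measure that is purely $1$-unrectifiable.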
By a dyadic cube we refer to a cube $Q = [0,2^{-k}]^n + \mathtt {j} \subset \mathbb{R}^n$ for some $k \in \mathbb Z$ and $\mathtt{j} \in 2^{-k}{\mathbb Z}^n$. We denote the side-length of such a dyadic cube $Q$ by $\ell(Q) := 2^{-k}$.
\subsection{$BV$-functions and sets of finite perimeter}
Let us recall some basic results related to $BV$-functions and sets of finite perimeter. For a more detailed account, we refer to the books \cite{AFP2000,EG2015,F1969}.
In contrast to the present paper, Maz'ya and Burago (see \cite{BM1967} and also \cite[Section 9.3]{mazya}) considered the space
$$BV_l(\Omega)=\{u\in L^{1}_{loc}(\Omega):\, \|Du\|(\Omega)<\infty\} $$
equipped with the seminorm $\|Du\|(\Omega)$. This way they defined $BV_l$-extension domains to be those $\Omega\subset\mathbb{R}^n$ for which only the total variation of the extension is controlled, that is, $\|D(Tu)\|(\mathbb{R}^n)\leq C\|Du\|(\Omega)$ for every $u\in BV_l(\Omega)$. As we already explained in the introduction, they proved that being a $BV_l$-extension domain is equivalent to the property that any set $E\subset\Omega$ of finite perimeter in $\Omega$ admits an extension $\widetilde E\subset\mathbb{R}^n$ satisfying only (PE1) and (PE2) from Definition \ref{def:strongperi}. Note, however, that thanks to \cite[Lemma 2.1]{KMS2010} the notions of $BV_l$-extension domain and $BV$-extension domain coincide when $\Omega$ is bounded.
When working with $BV$ functions we will make use of the well-known $(1,1)$-Poincar\'e inequality that we now state (see for instance \cite[Theorem 3.44]{AFP2000} for the proof). \begin{theorem}\label{thm:Poincare} Let $\Omega\subset\mathbb{R}^n$ be an open bounded set with Lipschitz boundary. Then there exists a constant $C>0$ depending only on $n$ and $\Omega$ so that for every $u\in BV(\Omega)$ we have
$$\int_\Omega |u(y)-u_\Omega|\,dy\leq C \|Du\|(\Omega) .$$ In particular, there exists a constant $C>0$ only depending on $n$ so that if $Q,Q'\subset\mathbb{R}^n$ are two dyadic cubes with $\frac{1}{4}\ell(Q')\leq \ell(Q)\leq 4\ell(Q')$ and $\Omega=\text{int}(Q\cup Q')$ connected, then for every $u\in BV(\Omega)$, \begin{equation}\label{Poincare}
\int_{\Omega} |u(y)-u_\Omega|\,dy\leq C \ell(Q)\|Du\|(\Omega) . \end{equation} \end{theorem} Here $u_\Omega$ denotes the mean value of $u$ over the set $\Omega$,
$$u_\Omega=\frac{1}{|\Omega|}\int_\Omega u(y)\,dy .$$
Let us record as well the coarea formula for $BV$ functions. See for example \cite[Section 5.5]{EG2015}.
\begin{theorem}\label{thm:coarea}
Given a function $u\in BV(\Omega)$, the superlevel sets $u_t=\{x\in\Omega:\, u(x)>t\}$ have finite perimeter in $\Omega$ for almost every $t\in\mathbb{R}$ and
$$\|Du\|(F)=\int^{\infty}_{-\infty} P(u_t,F)\,dt $$
for every Borel set $F\subset \Omega$.
Conversely, if $u\in L^1(\Omega)$ and $\int^{\infty}_{-\infty} P(u_t,\Omega)\,dt<\infty $ then $u\in BV(\Omega)$. \end{theorem}
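As a quick sanity check of Theorem \ref{thm:coarea} (not used later), take $\Omega = B(0,1)$ and $u(y) = 1-|y|$, so that $|\nabla u| = 1$ almost everywhere and $u_t = B(0,1-t)$ for $t\in(0,1)$, while $P(u_t,\Omega)=0$ for the remaining $t$. Writing $\omega_n = |B(0,1)|$, we indeed obtain
$$\int^{\infty}_{-\infty} P(u_t,\Omega)\,dt = \int_0^1 \mathcal H^{n-1}(\partial B(0,1-t))\,dt = \int_0^1 n\omega_n (1-t)^{n-1}\,dt = \omega_n = \int_{\Omega}|\nabla u(y)|\,dy = \|Du\|(\Omega).$$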
An important result due to Federer \cite[Section 4.5.11]{F1969} tells us that a set $E$ has finite perimeter in $\Omega$ if and only if $ \mathcal{H}^{n-1}(\partial ^M E\cap\Omega)<\infty$. Moreover, thanks to De Giorgi's pioneering work \cite{DG1955} we can understand the structure of the boundary of sets of finite perimeter even better. Namely, if $E$ has finite perimeter in $\Omega$ then for every subset $A\subset \Omega$,
$$\|D\chi_E\|(A)=P(E,A)= \mathcal{H}^{n-1} ( \partial^M E\cap A)$$
and if $E$ has finite perimeter in $\mathbb{R}^n$ then
$$\partial^M E=F\cup \bigcup_{k\in\mathbb{N}} K_k $$
where $\mathcal{H}^{n-1}(F)=0$ and the sets $K_k$ are compact subsets of $C^1$ hypersurfaces. Furthermore, for any set $E$ with finite perimeter we have $$\overline{D}(E,x)=\underline{D}(E,x)\in \{0,1/2,1\} $$ for $\mathcal{H}^{n-1}$-almost every $x\in\mathbb{R}^n$. Moreover, $\mathcal{H}^{n-1}(\partial ^M E\setminus \{x:\, D(E,x)=1/2 \})=0$, and hence for $\mathcal{H}^{n-1}$-almost every $x\in\partial^M E$ we have $$D(E,x) = \frac12.$$
Let us finally recall some terminology and results from \cite{ACMM2001}.
A Lebesgue measurable set $E \subset \mathbb R^n$ with $|E|>0$ is called decomposable if there exist two Lebesgue measurable sets $F,G \subset \mathbb R^n$ so that $|F|,|G|>0$, $E = F \cup G$, $F\cap G = \emptyset$, and
\[
P(E,\mathbb R^n) = P(F,\mathbb R^n) + P(G,\mathbb R^n).
\]
A set is called indecomposable if it is not decomposable. For example, any connected open set $E\subset\mathbb{R}^n$ with $\mathcal{H}^{n-1}(\partial ^M E)<\infty$ is indecomposable.
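For example (a trivial case included only for illustration), the union $E = B(z_1,1)\cup B(z_2,1)$ of two balls with $|z_1-z_2|>2$ is decomposable: choosing $F = B(z_1,1)$ and $G = B(z_2,1)$ we have $|F|,|G|>0$, $F \cap G = \emptyset$ and
\[
P(E,\mathbb R^n) = \mathcal H^{n-1}(\partial B(z_1,1)) + \mathcal H^{n-1}(\partial B(z_2,1)) = P(F,\mathbb R^n) + P(G,\mathbb R^n).
\]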
For any set $E\subset\mathbb{R}^n$ of finite perimeter we can always find a unique countable family of disjoint indecomposable subsets $E_i\subset E$ so that $|E_i|>0$, $P(E,\mathbb{R}^n)=\sum_{i} P(E_i,\mathbb{R}^n)$ and, moreover, $$ \mathcal{H}^{n-1}\left( \mathring{E}^M\setminus \bigcup_{i} \mathring{E}_i^M \right)=0.$$ For a proof of this result we refer to \cite[Theorem 1]{ACMM2001}.
In the particular case of $\mathbb{R}^2$, thanks to \cite[Corollary 1]{ACMM2001} one can find a decomposition of sets of finite perimeter into indecomposable sets whose boundaries are rectifiable Jordan curves, except for a set of $\mathcal{H}^1$-measure zero. We will state this useful result in Section 5, as Theorem \ref{thm:planardecomposition}.
\subsection{Measure density condition for $BV$-extension domains}
Nowadays it is a well-known fact that all $W^{1,p}$-extension domains for $1\leq p<\infty$ satisfy the measure density condition (see \cite{HKT2008}). Although we do not need this in our proofs, we record here the fact that the same property holds for $BV$-extension domains. Let us remark that a measure density condition for planar $BV_l$-extension domains was proven in \cite[Lemma 2.10]{KMS2010}. However, the proof does not seem to extend to domains in $\mathbb{R}^n$. The method of proof we employ here follows the same lines as \cite{HKT2008} and can be adapted for $BV_l$-extension domains as well.
\begin{proposition}\label{meas.dens:BV} Let $\Omega\subset\mathbb{R}^n$ be a $BV$-extension or a $W^{1,1}$-extension domain. Then there exists a constant $c>0$, depending only on $n$ and on the operator norm, so that for every $x\in\overline{\Omega}$ and $r\in (0,1]$ we have
$$|B(x,r)\cap \Omega|\geq c r^n .$$ \end{proposition} \begin{proof} We give the proof only for $BV$-extension domains. For $W^{1,1}$-extension domains one can use the results from \cite{HKT2008}, or the fact that $W^{1,1}$-extension domains are $BV$-extension domains; a proof of this fact can be found in \cite[Lemma 2.4]{KMS2010}. The reader will notice that the key point is to apply the Sobolev embedding theorem, which is valid for both $W^{1,1}$ and $BV$ functions.
Let us denote $r_0 = r$. By induction, we define for every $i \in \mathbb{N}$ the radius $r_i \in (0,r_{i-1})$ by the equality \[
|\Omega\cap B(x,r_i)| = \frac12|\Omega\cap B(x,r_{i-1})| = 2^{-i}|\Omega\cap B(x,r_0)|. \] Since $x \in \overline\Omega$, we have that $r_i \searrow 0$ as $i\to \infty$.
For each $i \in\mathbb{N}$, consider the function $f_i:\Omega \to \mathbb{R}$ \[
f_i(y) = \begin{cases}1, & \text{for }y \in B(x,r_i) \cap \Omega,\\
\frac{r_{i-1}-|x-y|}{r_{i-1}-r_i}, & \text{for }y \in (B(x,r_{i-1})\setminus B(x,r_i)) \cap \Omega,\\
0, & \text{otherwise}.
\end{cases} \] Note that these functions belong to the class $W^{1,1}(\Omega)$, in particular they are $BV$ functions. We can estimate their $BV$-norms by \begin{align*}
\Vert f_i\Vert_{BV(\Omega)} &=\Vert f_i\Vert_{L^1(\Omega)} +\|D\, f_i\|(\Omega) =\int_{\Omega} |f_i|+ \int_{\Omega} |\nabla f_i| \\
& \le |B(x,r_{i-1})\cap \Omega| + |r_i - r_{i-1}|^{-1} |(B(x,r_{i-1})\setminus B(x,r_i))\cap \Omega| \\
&\leq C|r_i - r_{i-1}|^{-1} 2^{-i}|\Omega\cap B(x,r_0)|. \end{align*} Call $1^*=\frac{n}{n-1}$ and denote by $T\colon BV(\Omega)\to BV(\mathbb{R}^n)$ the extension operator. By the Sobolev inequality for BV functions (see \cite[Theorem 5.10]{EG2015}) we know that $$\Vert Tf_i\Vert_{L^{1^*}(\mathbb{R}^n)}\leq C \Vert D(Tf_i)\Vert(\mathbb{R}^n),$$ where $C>0$ depends only on the dimension $n$. Hence we have the following chain of inequalities
$$\|f_i\|_{L^{1^*}(\Omega)}\leq \|Tf_i\|_{L^{1^*}(\mathbb{R}^n)}\leq C\|D(Tf_i)\|(\mathbb{R}^n)\leq C\|T\|\,\|f_i\|_{BV(\Omega)}. $$ We also have \begin{align*}
\int_{\Omega} \vert f_i(y)\vert^{1^*}\,dy &\geq |B(x,r_{i})\cap \Omega|=2^{-i}|B(x,r_0)\cap \Omega|, \end{align*} and therefore \begin{align*}
2^{-i}|B(x,r_0)\cap \Omega|&\leq C\|T\|^{1^*}\left(|r_i - r_{i-1}|^{-1} 2^{-i}|\Omega\cap B(x,r_0)|\right)^{1^*} . \end{align*} Consequently, \begin{align*}
r_{i-1}-r_i & \leq C\|T\|2^{i(1/1^{*}-1)}|\Omega\cap B(x,r_0)|^{1-1/1^{*}} \\
&= C\|T\|2^{-i/n}|\Omega\cap B(x,r_0)|^{1/n}. \end{align*} By summing up all these quantities we conclude that
$$ r =r_0= \sum_{i=1}^\infty(r_{i-1} - r_{i}) \le C\|T\|\sum_{i=1}^\infty 2^{-i/n}|\Omega\cap B(x,r)|^{1/n} = \frac{C\|T\|}{2^{1/n}-1}|\Omega\cap B(x,r)|^{1/n}. $$ This gives the claimed inequality.
\end{proof}
\section{Equivalence of $W^{1,1}$-extension and strong $BV$-extension domains}
This section is devoted to the proof of Theorem \ref{thm:mainRn}. The idea in going from a strong $BV$-extension to a $W^{1,1}$-extension is to first extend the $W^{1,1}$-function from the domain as a $BV$-function to the whole space and then mollify it in the exterior of the domain. In the mollification process it is important to check that we do not change the function too much near the boundary.
\subsection{Whitney smoothing operator}
In this subsection we prove the existence of a suitable smoothing operator from $BV$ to $W^{1,1}$. For similar constructions we refer the reader to \cite{BHS2002,HK1998,LLW2020}.
\begin{theorem}\label{thm:smoothing}
Let $A \subset B \subset \mathbb R^n$ be open subsets. There exist a constant $C$ depending only on the dimension $n$ and a linear operator \[
S_{B,A} \colon BV(B) \to \left\{u \in BV(B) \,:\, u|_{A}\in W^{1,1}(A) \right\} \]
so that for any $u \in BV(B)$ we have $S_{B,A}u|_{B\setminus A}=u$, \begin{equation}\label{eq:normineq}
\|S_{B,A}u\|_{BV(B)} \leq C \|u\|_{BV(B)}, \end{equation} and \begin{equation}\label{eq:boundaryzero}
\|D(S_{B,A}u-u)\|(\partial A)=0, \end{equation} where $S_{B,A}u-u$ is understood to be defined in the whole $\mathbb R^n$ via a zero-extension. Moreover,
the operator $S_{B,A}$ is also bounded when acting from the space $BV_l(A)$ into the homogeneous Sobolev space $L^{1,1}(A)$. \end{theorem}
Recall that $L^{1,1}(A)=\{u\in L^{1}_{loc}(A):\, \nabla u\in L^{1}(A)\}$ stands for the homogeneous Sobolev space endowed with the seminorm $\|u\|_{L^{1,1}(A)}=\|\nabla u\|_{L^{1}(A)}$.
Let us briefly explain how the operator $S_{B,A}$ is constructed. We first take a Whitney decomposition of the open set $A$ and a partition of unity subordinate to it. For a $BV$-function $u$, the operator is then defined as the sum of $u$ restricted to the complement of $A$ and, on $A$, of the average values of $u$ on each Whitney cube times the associated partition function. This way, we immediately have that $S_{B,A}u$ coincides with $u$ in the complement of $A$ and is smooth in $A$. The inequality \eqref{eq:normineq} will follow in a standard way from the Poincar\'e inequality for $BV$-functions, whereas for \eqref{eq:boundaryzero} we will show that the average difference between $u$ and $S_{B,A}u$ near $\mathcal H^{n-1}$-almost every boundary point of $A$ tends to zero as we approach the point.
Let us now give the definition of the operator doing the smoothing part. Suppose $A\subset \mathbb R^n$ is an open set, not equal to the entire space $\mathbb{R}^n$. Let $\mathcal W = \{Q_i\}_{i=1}^\infty$ be the standard \emph{Whitney decomposition} of $A$, by which we mean that it satisfies the following properties: \begin{itemize}
\item[(W1)] Each $Q_i$ is a dyadic cube inside $A$.
\item[(W2)] $A=\bigcup_i Q_i$ and for every $i \ne j$ we have $\text{int}(Q_i) \cap \text{int}(Q_j) = \emptyset$.
\item[(W3)] For every $i$ we have $\ell(Q_i) \le {\mathop\mathrm{\,dist\,}}(Q_i,\partial A)\le 4\sqrt{n} \ell(Q_i)$.
\item[(W4)] If $Q_i\cap Q_j \ne \emptyset$, we have $\frac{1}{4}\ell(Q_i) \le \ell(Q_j)\le 4\ell(Q_i)$. \end{itemize} The reader can find a proof of the existence of such a dyadic decomposition of the set $A$ in \cite[Chapter VI]{stein}.
For a given set $A$ and its Whitney decomposition $\mathcal W$ we take a partition of unity $\{\psi_i\}_{i=1}^\infty$ so that for every $i$ we have $\psi_i \in C^{\infty}(\mathbb R^n)$, {$\text{spt}(\psi_i)=\{x\in\mathbb{R}^n:\,\psi_i(x)\neq 0\} \subset B(Q_i,\frac18\ell(Q_i))$,} $\psi_i \ge 0$, $|\nabla \psi_i| \le C \ell(Q_i)^{-1}$ with a constant $C$ depending only on $n$, and \[ \sum_{i=1}^\infty \psi_i = \chi_A. \] With the partition of unity we then define for any $u \in BV(A)$ a function \begin{equation}\label{eq:SWdef}
S_{\mathcal W}u = \sum_{i=1}^\infty u_{Q_i}\psi_i. \end{equation}
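As a simple sanity check on \eqref{eq:SWdef} (a special case only), note that if $u \equiv c$ is constant on $A$, then $u_{Q_i} = c$ for every $i$, and since $\sum_{i=1}^\infty \psi_i = \chi_A$ we get
\[
S_{\mathcal W}u = \sum_{i=1}^\infty c\,\psi_i = c\chi_A,
\]
that is, the operator reproduces constant functions on $A$.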
Let us start by showing that $S_{\mathcal W}$ maps $BV(A)$ to $W^{1,1}(A)$ boundedly. Even though we could equivalently use the $BV$-norm on the target, we prefer to write it as the $W^{1,1}$-norm in order to underline the spaces where the operator will be used.
\begin{lemma}\label{lma:BVtoSobo}
Let $S_{\mathcal W}$ be the operator defined in \eqref{eq:SWdef}. Then for any $u \in BV(A)$ we have $S_{\mathcal W}u \in C^\infty(A)$ and $\|S_{\mathcal W}u\|_{W^{1,1}(A)} \le C\|u\|_{BV(A)}$ with a constant $C$ depending only on $n$. \end{lemma} \begin{proof}
By (W2) and the fact that $\text{spt}(\psi_i) \subset B(Q_i,\frac18\ell(Q_i))$ for every $i$, we know that $\text{spt}(\psi_i) \cap \text{spt}(\psi_j) \ne \emptyset$ implies
that $Q_i \cap Q_j \ne \emptyset$. Therefore, any point in $A$ has a neighbourhood where $S_{\mathcal W}u$ is defined as a sum of finitely many $C^\infty$-functions. Consequently, $S_{\mathcal W}u \in C^\infty(A)$.
For the $L^1$-norm of the function we can estimate
\[
\|S_{\mathcal W}u\|_{L^1(A)} \le \sum_{i=1}^\infty \|u_{Q_i}\psi_i\|_{L^1(A)}
= \sum_{i=1}^\infty |u_{Q_i}|\|\psi_i\|_{L^1(A)} \le \sum_{i=1}^\infty |u_{Q_i}|2^n\ell(Q_i)^n \le 2^n \|u\|_{L^1(A)}.
\]
For the estimate on the $L^1$-norm of the gradient we start with an estimate via the $(1,1)$-Poincar\'e inequality \eqref{Poincare}
\begin{align*}
\|\nabla(S_{\mathcal W}u)\|_{L^1(Q_i)} & \le \sum_{Q_j \cap Q_i \ne \emptyset}|u_{Q_i}-u_{Q_j}|\|\nabla \psi_j\|_{L^1(A)}\\
&\le \sum_{Q_j \cap Q_i \ne \emptyset}|u_{Q_i}-u_{Q_j}|C\ell(Q_j)^{n-1}\\
& \le C\sum_{Q_j \cap Q_i \ne \emptyset}\ell(Q_j)^{-1}\int_{Q_i \cup Q_j}|u_{Q_i}-u(y)| + |u(y)-u_{Q_j}|\,dy\\
& \le C\sum_{Q_j \cap Q_i \ne \emptyset}\ell(Q_j)^{-1}\left(2\int_{Q_i \cup Q_j}|u_{Q_i\cup Q_j}-u(y)| + 2\int_{Q_i \cup Q_j}|u_{Q_i\cup Q_j}-u_{Q_j}|\,dy\right)\\
& \le C\sum_{Q_j \cap Q_i \ne \emptyset}(\|Du\|(Q_i\cup Q_j)),
\end{align*}
which then gives, by summing over all $i$, and noticing that in the final double sum the sets $Q_i\cup Q_j$ have finite overlap with a constant depending only on $n$,
\begin{equation}\label{bound.tot.var.}
\begin{split}
\|\nabla(S_{\mathcal W}u)\|_{L^1(A)} & = \sum_{i=1}^\infty \|\nabla(S_{\mathcal W}u)\|_{L^1(Q_i)} \\
& \le C \sum_{i=1}^\infty\sum_{Q_j \cap Q_i \ne \emptyset}(\|Du\|(Q_i\cup Q_j)) \\
&\leq C \|Du\|(A).
\end{split} \end{equation} This concludes the proof of the lemma. \end{proof}
The next lemma gives the crucial boundary behaviour that will imply \eqref{eq:boundaryzero}.
\begin{lemma}
For the operator $S_{\mathcal W}$ defined in \eqref{eq:SWdef} and for any $u \in BV(A)$ we have \begin{equation}\label{eq:boundaryagain}
\lim_{r\searrow 0}\frac{1}{|B(x,r)|}\int_{B(x,r)\cap A}|S_{\mathcal W}u(y) - u(y)|\,d y =0 \end{equation} for $\mathcal H^{n-1}$-almost every point $x \in \partial A$. \end{lemma} \begin{proof} Suppose \eqref{eq:boundaryagain} fails on a set $F\subset \partial A$ with $\mathcal H^{n-1}(F) > 0$. Without loss of generality, we may assume $F$ compact. By going to a subset of $F$ if needed, we may further assume that there exists a constant $\delta > 0$ so that \begin{equation*}
\limsup_{r\searrow 0} \frac{1}{|B(x,r)|}\int_{B(x,r)\cap A}|S_{\mathcal W}u(y) - u(y)|\,d y > \delta \end{equation*} for every $x \in F$.
Let $\varepsilon>0$. By the $5r$-covering lemma there exists a countable collection $\{B(x_i,r_i)\}_{i \in I}$ of pairwise disjoint balls so that $x_i\in F$,
$r_i<\varepsilon$ for all $i$, \begin{equation}\label{eq:delta}
|B(x_i,r_i)|\leq \frac{1}{\delta}\int_{B(x_i,r_i)\cap A}|S_{\mathcal W}u(y) - u(y)|\,d y \end{equation} and \[ F \subset \bigcup_{i\in I} B(x_i,5r_i). \] Similarly as in the proof of Lemma \ref{lma:BVtoSobo}, we first estimate in a Whitney cube $Q \in \mathcal W$ using the (1,1)-Poincar\'e inequality \eqref{Poincare} \begin{equation}\label{eq:poincareasalways} \begin{split}
\int_{Q}|S_{\mathcal W}u(y) - u(y)|\,d y
& = \int_{Q}\left|\sum_{Q_i \cap Q \ne \emptyset}(u_{Q_i}\psi_i(y) - u(y)\psi_i(y))\right|\,d y\\
&\le \sum_{Q_i \cap Q \ne \emptyset} \int_Q|u_{Q_i} - u(y)|\,d y\\
&\le \sum_{Q_i \cap Q \ne \emptyset} \int_{Q\cup Q_i}|u_{Q_i} - u(y)|\,d y\\
& \le C \ell(Q) \sum_{Q_i \cap Q \ne \emptyset} \|Du\|(Q\cup Q_i). \end{split} \end{equation} By the property (W3) of the Whitney decomposition, we conclude that if $Q \in \mathcal W$ is such that $Q \cap B(x_i,r_i) \ne \emptyset$, we have \[ \ell(Q) \le {\mathop\mathrm{\,dist\,}}(Q,\partial A) \le {\mathop\mathrm{\,dist\,}}(Q,x_i) < r_i, \] and hence \[ Q \subset B(x_i,(\sqrt{n}+1)r_i) \subset B(F,(\sqrt{n}+1)\varepsilon). \] Similarly, for the same $Q$, if $Q_i \cap Q \ne \emptyset$ for some $Q_i \in \mathcal W$, by (W4), we get \[ \ell(Q_i) \le 4\ell(Q), \] and so \[ Q_i \subset B(x_i,(5\sqrt{n}+1)r_i) \subset B(F,(5\sqrt{n}+1)\varepsilon). \] Now, using the definition of the Hausdorff content, the inequality \eqref{eq:delta}, the estimate \eqref{eq:poincareasalways}, and the above consideration for the cubes $Q$, we get \begin{align*} \mathcal H_{5\epsilon}^{n-1}(F) & \le C \sum_{i\in I} r_i^{n-1}
\le C \sum_{i \in I} \frac{|B(x_i,r_i)|}{r_i}\\
& \le C \sum_{i \in I} \frac{1}{\delta r_i}\int_{B(x_i,r_i)\cap A}|S_{\mathcal W}u(y) - u(y)|\,d y\\
& \le C \sum_{Q\in\mathcal{W}}\sum_{i \in I} \frac{1}{\delta \ell(Q)}\frac{\ell(Q)}{r_i}\int_{Q}\chi_{B(x_i,r_i)}(y)|S_{\mathcal W}u(y) - u(y)|\,d y\\ & \le \frac{C}{\delta} \left(\sum_{Q \cap B(F,(\sqrt{n}+1)\varepsilon)\ne \emptyset}
\frac{1}{\ell(Q)}\int_{Q}|S_{\mathcal W}u(y) - u(y)|\,d y\right)\\ & \le \frac{C}{\delta} \left(\sum_{Q \cap B(F,(\sqrt{n}+1)\varepsilon)\ne \emptyset}
\sum_{Q_i \cap Q \ne \emptyset} \|Du\|(Q\cup Q_i)\right)\\
& \le \frac{C}{\delta}\|Du\|(B(F,(5\sqrt{n}+1)\epsilon) \cap A) \searrow 0 \end{align*} as $\varepsilon \searrow 0$. Thus \[ \mathcal H^{n-1}(F) = 0, \] giving a contradiction and concluding the proof. \end{proof}
With the previous two lemmas we can now prove the main theorem of the section.
\begin{proof}[Proof of Theorem \ref{thm:smoothing}]
Let $S_{\mathcal W}$ be the operator defined in \eqref{eq:SWdef} and suppose that $u \in BV(B)$ is given. We define \[
S_{B,A}u = u|_{B\setminus A} + S_{\mathcal W}u|_A. \]
Consider $S_{B,A}u-u\in L^1(B)$ for which, by \eqref{eq:boundaryagain}, we have \begin{equation}\label{prop.zero.ext.}
\lim_{r\searrow 0}\frac{1}{|B(x,r)|}\int_{B(x,r)\cap A}|S_{B,A}u(y)-u(y)|\,dy =0 \end{equation} for $\mathcal{H}^{n-1}$-almost every $x \in \partial A$. Observe that $S_{B,A}u-u=0$ on $B\setminus A$.
Let us introduce the superlevel sets $E_t=\{y\in \mathbb R^n:\, S_{B,A}u(y)-u(y)>t\}$ for every $t\in\mathbb{R}$, where $S_{B,A}u-u$ is defined in the whole $\mathbb R^n$ via a zero-extension. We want to show that $\mathcal H^{n-1}(\partial^M E_t\cap \partial A)=0$ for almost every $t\in\mathbb{R}$ and the equality \eqref{eq:boundaryzero} will follow by a simple application of the coarea formula. We proceed as follows.
In the case that $t<0$, observe that for every $y\in A\setminus E_t$ we have $|S_{B,A}u(y)-u(y)|\geq |t|$. Hence, for $\mathcal H^{n-1}$-almost all $x\in\partial A$, by \eqref{prop.zero.ext.}, \begin{align*}
\overline{D}(A\setminus E_t,x) & = \limsup_{r\searrow 0}\frac{|(A\setminus E_t)\cap B(x,r)|}{|B(x,r)|} \\
& \leq \limsup_{r\searrow 0}\frac{1}{|t||B(x,r)|}\int_{A\cap B(x,r)} |S_{B,A}u(y)-u(y)|\,dy =0 . \end{align*} This, together with the fact that $B\setminus A\subset E_t $, means that the set $E_t$ has density $1$ at $\mathcal H^{n-1}$-almost all points $x\in \partial A$.
If we take $t>0$, for every $y\in E_t$ we have $|S_{B,A}u(y)-u(y)|\geq t$, and then for $\mathcal H^{n-1}$-almost all $x\in\partial A$, again by \eqref{prop.zero.ext.},
\begin{align*}\overline{D}( E_t,x) & = \limsup_{r\searrow 0}\frac{| E_t\cap B(x,r)|}{|B(x,r)|} \\
& \leq \limsup_{r\searrow 0}\frac{1}{t |B(x,r)|}\int_{A\cap B(x,r)} |S_{B,A}u(y)-u(y)|\,dy =0 . \end{align*} This means, using $E_t\subset A$, that the set $E_t$ has density $0$ at $\mathcal H^{n-1}$-almost all points $x\in\partial A$.
From these previous observations we deduce that $\mathcal H^{n-1}(\partial^M E_t\cap \partial A)=0$ for all $t\neq 0$. We therefore obtain \eqref{eq:boundaryzero}, applying the coarea formula,
$$\|D(S_{B,A}u-u)\|(\partial A)=\int^{\infty}_{-\infty} \mathcal{H}^{n-1}(\partial ^M E_t\cap \partial A)\,dt=0. $$
We now combine this with Lemma \ref{lma:BVtoSobo} to obtain \eqref{eq:normineq} and hence also that $S_{B,A}u \in BV(B)$. We get \begin{align*}
\|D(S_{B,A}u)\|(B) & \le \|Du\|(B) + \|D(S_{B,A}u-u)\|(B)\\
& = \|Du\|(B) + \|D(S_{B,A}u-u)\|(A)\\
& \le \|Du\|(B) + \|D(S_{\mathcal W}u|_A)\|(A) + \|Du\|(A)\\
& \le \|Du\|(B) + C\|Du\|(A) + \|Du\|(A)\\
& \le (C+2)\|Du\|(B) \end{align*} and conclude the proof.
\end{proof}
\subsection{Proof of Theorem \ref{thm:mainRn}}
In this section we will prove Theorem \ref{thm:mainRn} with the help of Theorem \ref{thm:smoothing}. Recall that we are claiming that for a bounded domain $\Omega\subset\mathbb{R}^n$ the following are equivalent: \begin{enumerate}
\item $\Omega$ is a $W^{1,1}$-extension domain.
\item $\Omega$ is a strong $BV$-extension domain.
\item $\Omega$ has the strong extension property for sets of finite perimeter. \end{enumerate}
We will show the equivalence by showing the implications \[ \text{(1)} \Longrightarrow \text{(3)} \Longrightarrow \text{(2)} \Longrightarrow \text{(1)}. \]
\begin{proof}[Proof of the implication (1) $\Longrightarrow$ (3)] We start with the assumption that $\Omega$ is a bounded $W^{1,1}$-extension domain. In particular, it is known that $\Omega$ is then also an $L^{1,1}$-extension domain (see \cite{K1990}). That is, there exists an extension operator $T\colon L^{1,1}(\Omega) \to L^{1,1}(\mathbb R^n)$ with $\|\nabla (Tu)\|_{L^1(\mathbb{R}^n)}\leq \|T\| \|\nabla u\|_{L^1(\Omega)}$ for every $u\in L^{1,1}(\Omega)$. Since $\Omega$ is bounded, after multiplying by a suitable Lipschitz cutoff function we may assume that $Tu\in L^1(\mathbb{R}^n)$ while keeping the control on the gradient norm.
We claim that $\Omega$ has the strong extension property for sets of finite perimeter. Thus, let $E \subset \Omega$ be a set of finite perimeter in $\Omega$. We need to find a set $\widetilde E \subset \mathbb R^n$ so that (PE1)--(PE3) of Definition \ref{def:strongperi} hold.
Towards this, let $S_{\Omega,\Omega} \colon BV(\Omega) \to W^{1,1}(\Omega)$ be the operator given by Theorem \ref{thm:smoothing}. We now define a function $v \in W^{1,1}(\mathbb R^n)$ by \[ v = TS_{\Omega,\Omega}\chi_E. \] By truncating the function if needed, we may assume that $0 \le v\le 1$.
Applying the coarea formula (Theorem \ref{thm:coarea}) for the function $v$, \[
\int_{0}^1 P(\{v>t\},\mathbb R^n)\, dt= \|Dv\|({\mathbb R^n}) = \int_{\mathbb{R}^n} |\nabla v(y)|\,dy<\infty . \] This gives, in particular, that there exists a set $I\subset [0,1]$ with $\mathcal H^1(I) \ge \frac12$ for which for every $t \in I$ we have \begin{equation}\label{control.per.ext.} \begin{split}
P(\{v>t\},\mathbb R^n) &\le 2\|Dv\|({\mathbb R^n})=2\int_{\mathbb{R}^n} |\nabla v(y)|\,dy \leq 2\|T\|\, \|\nabla S_{\Omega,\Omega} \chi_E\|_{L^{1}(\Omega)}\\
& \leq 2 \|T\|\, C\|D\chi_E\|(\Omega)=2C\|T\|\, P(E,\Omega). \end{split} \end{equation} In the penultimate inequality we are using \eqref{bound.tot.var.}.
By the measure density (Proposition \ref{meas.dens:BV}) we have $|\partial\Omega|=0$. This together with the fact that $\nabla v \in L^{1}(\mathbb R^n)$, gives \[
\|D v\|(\partial\Omega) = 0, \] and by \eqref{eq:boundaryzero} \[
\|D(\chi_{E} - v\chi_{\Omega})\|(\partial\Omega) = \|D(\chi_{E} - S_{\Omega,\Omega}\chi_{E})\|(\partial\Omega) =0, \] where $\chi_{E} - S_{\Omega,\Omega}\chi_{E}$ is understood to be defined in the whole $\mathbb R^n$ via a zero-extension. Hence, again by the coarea formula \begin{align*} \int_0^1\mathcal{H}^{n-1}(\partial^M(E \cup (\{v>t\}\setminus \Omega))\cap \partial\Omega)\,dt
& = \|D(\chi_{E} + v\chi_{\mathbb R^n\setminus\Omega})\|(\partial\Omega)\\
&\le \|D v\|(\partial\Omega) + \|D(\chi_{E} - v\chi_{\Omega})\|(\partial\Omega) = 0. \end{align*} This gives that for almost every $t \in [0,1]$ we have \begin{equation}\label{eq:boundaryintersectionzero}
\mathcal{H}^{n-1}(\partial^M(E \cup (\{v>t\}\setminus \Omega))\cap \partial\Omega) = 0. \end{equation}
Let us pick $t \in I\subset [0,1]$ so that both \eqref{control.per.ext.} and \eqref{eq:boundaryintersectionzero} hold, and define \[ \widetilde E = E \cup (\{v>t\}\setminus \Omega). \]
Now, it is straightforward that condition (PE1) holds. The equation \eqref{eq:boundaryintersectionzero} gives (PE3), and together with \eqref{control.per.ext.} it also implies \begin{align*}
P(\widetilde E,\mathbb R^n) &= \mathcal H^{n-1}(\partial^M\widetilde E)\\
&\le \mathcal H^{n-1}((\partial^M E)\cap \Omega)
+\mathcal H^{n-1}((\partial^M\widetilde E) \cap \partial\Omega)
+\mathcal H^{n-1}(\partial^M\{v > t\})\\
&\le
P(E,\Omega) + 2C\|T\|P(E,\Omega),\end{align*} proving (PE2). \end{proof}
\begin{proof}[Proof of the implication (3) $\Longrightarrow$ (2)] {By assumption $\Omega$ has the strong extension property for sets of finite perimeter, so there exists a constant $C>0$ such that for any set $E\subset\Omega$ of finite perimeter there exists a set $\widetilde E\subset \mathbb{R}^n$ such that (PE1)--(PE3) are satisfied.}
Take a function $u\in BV(\Omega)$ and let $B \supset \overline{\Omega}$ be a large enough ball. Without loss of generality, we may assume that $u \colon \Omega \to [0,1]$. Let us write $E_t = \{u \ge t\}=\{y\in\Omega:\, u(y)\geq t\}$ for the superlevel sets of $u$. Since $u \in BV(\Omega)$, by the coarea formula, $P(E_t,\Omega) < \infty$ for almost every $t \in [0,1]$. For these $t$, we select $\tilde E_t$ to be a strong perimeter extension of $E_t$. For convenience, for the remaining $t$ we define $\tilde E_t = E_t$. Notice that these need not be strong perimeter extensions of $E_t$. This will not pose a problem for us, since we will not use these values of $t$ in the construction below.
Before going to the actual proof, let us note that if the strong perimeter extensions $\tilde E_t$ could be chosen so that $(t,x) \mapsto \chi_{\tilde E_t}(x)$ is measurable, by Fubini's theorem we would obtain \[
\|D(Tu)\|(\mathbb R^n) \le \int_0^1 P(\tilde E_t,\mathbb R^n)\,dt \] for the function $Tu(x) = \mathcal H^1(\{t \in [0,1]\,:\, x \in \tilde E_t\})$. In order to circumvent the measurability issue, we proceed by defining the extension $Tu$ in a similar way, but as a limit of simple functions $u_m$.
For every $t \in [0,1]$, let us denote by $I_k(t)$ the (half-open) dyadic interval of length $2^{-k}$ containing $t$. For almost every $t \in [0,1]$ we then have \begin{equation}\label{eq:tdensity} P(E_t,\Omega) \le \limsup_{k \to \infty} 2^k\int_{I_k(t)}P(E_s,\Omega)\,ds. \end{equation} For almost every $t \in [0,1]$ we also have \[ P(\tilde E_t,\partial\Omega)= 0. \] Since $\partial\Omega$ is a compact set, for almost every $t$ we then have \[ \lim_{k\to \infty}P(\tilde E_t,B(\partial\Omega,2^{-k}))= 0. \] Let us write for each $k,m \in \mathbb N$ \[ I_k^m = \{t \in [0,1] \,:\, P(\tilde E_t,B(\partial\Omega,2^{-k})) < 2^{-m} \text{ and \eqref{eq:tdensity} holds}\}. \] Notice that the sets $I_k^m$ are not necessarily measurable. Nevertheless, since $\mathcal H^1$ is a regular outer measure, we have \[ \mathcal H^1(I_k^m) \nearrow 1, \qquad \text{as }k \to \infty \] for every $m \in \mathbb N$.
We define a sequence $(k_m)_{m =1}^\infty \subset \mathbb N $ inductively as follows. First take $k_1 \in \mathbb N$ so that \[ \mathcal H^1(I_{k_1}^1) > 1- 2^{-1}. \] Suppose now that $k_i$ has been defined for all $i < m$. Then we take $k_m \in \mathbb N$ so that \begin{equation}\label{eq:smallcomplements} \mathcal H^1\left(\bigcap_{j=i}^mI_{k_j}^j\right) > 1- 2^{-i} \end{equation} for all $i \le m$. Notice that this requirement can be obtained since \eqref{eq:smallcomplements} is with a strict inequality and again by outer regularity, for every $i < m$ we have \[ \mathcal H^1\left(I_{k}^m \cap \bigcap_{j=i}^{m-1}I_{k_j}^j\right) \to \mathcal H^1\left(\bigcap_{j=i}^{m-1}I_{k_j}^j\right) \] as $k \to \infty$.
Now, for $m \in \mathbb N$ we also take $l_m \in \mathbb N$ for which \begin{equation}\label{eq:intervalscover}
\mathcal{H}^1 (J^m) > 1 - 2^{-m}, \end{equation}
where \[ J^m = \left\{t \in [0,1]\,:\, P(E_t,\Omega) \le 2^{l_m+1}\int_{I_{l_m}(t)}P(E_s,\Omega)\,ds\right\}. \] The index $l_m$ then gives us the scale at which the simple function $u_m$ is constructed.
Let us now construct the function $u_m$ for a given $m \in \mathbb N$. For each $j \in \{1,\dots, 2^{l_m}\}$ define \[ i_j^m = \min\left\{i \,:\, [(j-1)2^{-l_m},j2^{-l_m}) \cap J^m \cap \bigcap_{h=i}^mI_{k_h}^h \ne \emptyset \right\}. \] Notice that we always have $i_j^m\le m+1$, since $[(j-1)2^{-l_m},j2^{-l_m}) \cap J^m \ne \emptyset$.
We then select \[ t_j^m \in [(j-1)2^{-l_m},j2^{-l_m}) \cap J^m \cap \bigcap_{h=i_j^m}^mI_{k_h}^h. \]
Next, we define \[ u_m = \sum_{j=1}^{2^{l_m}}2^{-l_m}\chi_{\tilde E_{t_j^m}}, \] which satisfies $0\leq u_m\leq 1$ and $u_m\in BV(B)$.
For every $i \in \{1,\dots,m\}$, let us denote \[ K_i^m = \left\{j \in \{1, \dots, 2^{l_m}\}\,:\, [(j-1)2^{-l_m},j2^{-l_m}) \cap J^m \cap \bigcap_{h=i}^mI_{k_h}^h = \emptyset \right\} \] and \[ B_i^m = \bigcup_{j \in K_i^m} [(j-1)2^{-l_m},j2^{-l_m}). \] Since $J^m$ is measurable, \eqref{eq:intervalscover} gives $\mathcal H^1\left([0,1] \setminus J^m\right) < 2^{-m}$, and thus, by \eqref{eq:smallcomplements}, \begin{align*} \mathcal H^1\left(\bigcap_{h=i}^mI_{k_h}^h \cap J^m\right) & = \mathcal H^1\left(\bigcap_{h=i}^mI_{k_h}^h \right) - \mathcal H^1\left(\bigcap_{h=i}^mI_{k_h}^h \setminus J^m\right)\\ & \ge \mathcal H^1\left(\bigcap_{h=i}^mI_{k_h}^h \right) - \mathcal H^1\left([0,1] \setminus J^m\right)\\ & > 1 - 2^{-i} - 2^{-m} \ge 1 - 2^{-i+1}. \end{align*} Hence, we have \begin{equation}\label{eq:badestimate} \mathcal H^1(B_i^m) < 2^{-i+1}. \end{equation}
For the norm of $Du_m$, by the fact that $t_j^m \in J^m$ for every $j$, we get the estimate \begin{align*}
\|Du_m\|(\mathbb R^n) &\le \sum_{j=1}^{2^{l_m}}2^{-l_m} P(\tilde E_{t_j^m}, \mathbb R^n) \le C \sum_{j=1}^{2^{l_m}}2^{-l_m} P(E_{t_j^m}, \Omega)\\ &\le C \sum_{j=1}^{2^{l_m}}2\int_{[(j-1)2^{-l_m},j2^{-l_m})} P(E_{s},\Omega)\,ds
= 2C\|Du\|(\Omega). \end{align*} Hence, there exists a subsequence of $(u_m)_{m=1}^\infty$ which converges in $L^1(B)$ to a function $v\in BV(B)$. By lower semicontinuity of the total variation, \[
\|Dv\|(B) \le \limsup_{m \to \infty}\|Du_m\|(\mathbb R^n) \le 2C\|Du\|(\Omega). \] Moreover, clearly $v = u$ on $\Omega$.
In order to estimate $\|Dv\|(\partial\Omega)$ we observe that, for every $i \in \{1,\dots,m\}$, we have, by \eqref{eq:badestimate}, \begin{equation}\label{eq:nearboundarysmall} \begin{split}
\|Du_m\|(B(\partial\Omega,2^{-k_i}))
& \le \sum_{j=1}^{2^{l_m}}2^{-l_m} P(\tilde E_{t_j^m}, B(\partial\Omega,2^{-k_i}))\\
&\le \sum_{j\notin K_i^m}2^{-l_m} P(\tilde E_{t_j^m}, B(\partial\Omega,2^{-k_i})) +
\sum_{j\in K_i^m}2^{-l_m} P(\tilde E_{t_j^m}, \mathbb R^n)\\
& \le 2^{-i} + 2C\int_{B_i^m}P(E_s,\Omega)\,ds\\
& \le 2^{-i} + 2C\delta(2^{-i+1}),
\end{split} \end{equation} where \[ \delta(r) = \sup\left\{\int_{A}P(E_s,\Omega)\,ds\,:\,A \subset [0,1], \mathcal H^1(A) = r \right\} \to 0 \] as $r \to 0$ by the absolute continuity of the integral. Since the upper bound in \eqref{eq:nearboundarysmall} goes to zero as $i \to \infty$ independently of $m$, we have \[
\|Dv\|(\partial\Omega) = 0. \] We now extend $v$ by zero outside $B$. Recall that we have
$$\|Dv\|(B)\leq 2C \|Du\|(\Omega) \;\;\text{and}\;\; \|Dv\|(\partial \Omega)=0. $$ In order to conclude the proof we control the $BV$-norm of $v$ on the whole of $\mathbb{R}^n$ as follows.
Consider a Lipschitz function $\eta$ which takes the value $1$ on $\Omega$ and has support in $B$. Then one can check that
$$ \|D(\eta v)\|(\mathbb{R}^n) \leq C \|Dv\|(B) $$ and using the Poincar\'e inequality that
$$\|\eta v\|_{L^1(\mathbb{R}^n)}=\| \eta v\|_{L^1(B)}\leq C\|D(\eta v)\|(B)\leq C \|Dv\|(B) .$$
Therefore we have $\|\eta v\|_{BV(\mathbb{R}^n)}\leq C \|u\|_{BV(\Omega)}$, where the constant $C$ depends on the constant coming from (PE2), on $|\Omega|$ and on the constant from the Poincar\'e inequality. We conclude that $T\colon BV(\Omega)\to BV(\mathbb{R}^n) \colon u \mapsto \eta v$ is an extension operator.
Obviously we still have $\|D(\eta v)\|(\partial\Omega)=\|Dv\|(\partial\Omega)=0$.
Hence $\Omega$ is indeed a strong $BV$-extension domain. \end{proof}
\begin{proof}[Proof of the implication (2) $\Longrightarrow$ (1)]
We start with a strong $BV$-extension operator $$T\colon BV(\Omega)\to BV(\mathbb{R}^n).$$ In particular, we know that \begin{equation}\label{Sing.part.=0}
\|D(Tu)\|(\partial \Omega)=0 \end{equation} for every $u\in BV(\Omega)$.
Let $S=S_{\mathbb{R}^n, (\mathbb{R}^n\setminus\overline \Omega)} $ be a Whitney smoothing operator given by Theorem \ref{thm:smoothing}. We assert that the operator $R \colon W^{1,1}(\Omega)\to W^{1,1}(\mathbb{R}^n)$ defined by $Ru(x)=(S\circ T)(u)(x)$ is a $W^{1,1}$-extension operator.
Observe that $Ru=u$ on $\Omega$ and
$$\|Ru\|_{BV(\mathbb{R}^n)}\leq C\|Tu\|_{BV(\mathbb{R}^n)}\leq C\|T\|\, \|u\|_{BV(\Omega)}=C\|T\|\, \|u\|_{W^{1,1}(\Omega)}.$$
To conclude we must check that indeed $Ru\in W^{1,1}(\mathbb{R}^n)$, so that in particular $\|Ru\|_{BV(\mathbb{R}^n)}=\|Ru\|_{W^{1,1}(\mathbb{R}^n)}$. To this end, let us show that the Radon measure $\|D(Ru)\|$ has no singular part, that is, it consists only of its absolutely continuous part. Since we already know that $Ru|_{\Omega}$ and $Ru|_{\mathbb{R}^n\setminus \overline\Omega}$ are $W^{1,1}$ functions, we merely have to prove that $\|D(Ru)\|(\partial\Omega)=0$. By the special properties of our smoothing operator given by \eqref{eq:boundaryzero} and by our assumption \eqref{Sing.part.=0} we have that $$\|D(Ru)\|(\partial \Omega)=\|D(STu -Tu+Tu)\|(\partial \Omega) \leq \|D(STu-Tu)\|(\partial\Omega)+\|D(Tu)\|(\partial\Omega)=0$$ and we are done. \end{proof}
\section{Further properties of $W^{1,1}$-domains}\label{sec:corollaries} In this section we prove some corollaries to Theorem \ref{thm:mainRn}.
\begin{corollary}\label{cor:density} Let $\Omega \subset \mathbb R^n$ be a bounded $W^{1,1}$-extension domain. Then the set of points $x \in \partial \Omega$ with $\overline{D}(\Omega,x)>\frac12$ is purely $(n-1)$-unrectifiable. \end{corollary} \begin{proof}
If the set
\[
F = \left\{x \in \partial\Omega \,:\, \overline{D}(\Omega,x)>\frac12\right\}
\]
is not purely $(n-1)$-unrectifiable, there exists a Lipschitz map $f \colon \mathbb R^{n-1} \to \mathbb R$ so that, after a suitable rotation, \[ \mathcal H^{n-1}(\textrm{Graph}(f) \cap F) > 0. \] Notice that the set $\mathbb R^n \setminus \textrm{Graph}(f)$ consists of two connected components. Select one of the components that has nonempty intersection with $\Omega$ (actually, both have) and call $E$ its restriction to $\Omega$. Then \[ \partial^M E \cap \Omega = \textrm{Graph}(f) \cap \Omega \] and so in particular $E$ has finite perimeter in $\Omega$. Let $\widetilde E \subset \mathbb R^n$ be any set of finite perimeter with $\widetilde E \cap \Omega = E$. Since \[ D(E,x) = \frac12 \] at $\mathcal {H}^{n-1}$-almost every point $x \in \partial^M E$, and $\overline{D}(\Omega,x)>\frac12$ for every $x \in F$, we have \[ \mathcal H^{n-1}(F \cap \partial^M E) = \mathcal H^{n-1}(F \cap \textrm{Graph}(f) ). \] Using again the fact that \[ D(E,x) = \frac12 \] at $\mathcal {H}^{n-1}$-almost every point $x \in \partial^M E$, and $\underline{D}(\mathbb R^n \setminus \Omega,x)< \frac12$ for every $x \in F$, we have \[ 0 < \underline{D}(E,x) \le \underline{D}(\widetilde E,x) \le \underline{D}(\mathbb R^n\setminus \Omega,x) + \underline{D}(E,x) < 1 \] for $\mathcal {H}^{n-1}$-almost every point $x \in F \cap \partial^ME$. This means that there exists a set $G \subset F\cap \partial^M E$ with $\mathcal{H}^{n-1}(G)=0$ for which \[ (F\cap \partial^M E)\setminus G\subset F\cap \partial^M \widetilde E. \] Consequently, $$ \mathcal H^{n-1}(\partial\Omega \cap \partial^M \widetilde E) \geq \mathcal H^{n-1}(F \cap \partial^M \widetilde E)\geq \mathcal H^{n-1}(F \cap \partial^M E) = \mathcal H^{n-1}(F \cap \textrm{Graph}(f) )> 0.$$ Hence $\Omega$ does not have the strong extension property for sets of finite perimeter ((PE3) fails), and so by Theorem \ref{thm:mainRn} it is not a $W^{1,1}$-extension domain. \end{proof}
The next example shows that even in the plane the conclusion of Corollary \ref{cor:density} is not sufficient to imply that a bounded $BV$-extension domain is a $W^{1,1}$-extension domain.
\begin{example}\label{ex:1} Let us construct a planar $BV$-extension domain $\Omega$ so that the upper-density of $\Omega$ at all except at countably many boundary-points is at most $1/2$, but the domain is not a $W^{1,1}$-extension domain. We set \[ \Omega = (-1,1)^2 \setminus \left((\{0\}\times [-1/2,1/2]) \cup \bigcup_{i=2}^\infty E_i \right), \] where, for every $i \geq 2$, we define \begin{align*}
E_i = \bigcup_{k=0}^{2^i-1}&\bigg( \big([-2^{-i+1},-2^{-i}-2^{-i-10}] \cup [2^{-i}+2^{-i-10},2^{-i+1}]\big)\\
&\times[-2^{-1} + k2^{-i},-2^{-1} + (k+1)2^{-i}-2^{-i-10}]\bigg). \end{align*} See Figure \ref{fig:example.4.2} for an illustration. \begin{figure}
\caption{An illustration of the $BV$-extension domain $\Omega \subset \mathbb R^2$ in Example \ref{ex:1} which is not a $W^{1,1}$-extension domain. Components of the complement accumulate on the vertical line segment where the upper-density is less than $\frac12$ at almost every point.
}
\label{fig:example.4.2}
\end{figure} Now, the upper-density of $\Omega$ is clearly at most $1/2$ at all the points of the boundary $\partial \Omega$ except for the corners of the connected components of $E_i$, and the points $(0,-1/2)$ and $(0,1/2)$, which together form only a countable set. (One could remove balls instead of rectangles to get the upper-density bound for all boundary points.)
The domain $\Omega$ is a $BV$-extension domain because each removed rectangle has a neighbourhood inside $\Omega$ from which a $BV$-function can be extended to the rectangle with a uniform constant. These neighbourhoods can be taken to be pairwise disjoint. This results in an extension operator \[ T \colon BV(\Omega) \to BV((-1,1)^2 \setminus \{0\}\times[-1/2,1/2]). \] The target set clearly admits an extension to $BV(\mathbb R^2)$.
The domain $\Omega$ is not a $W^{1,1}$-extension domain, because the set $\{0\}\times [-1/2,1/2]$ is not purely $1$-unrectifiable, and this is the set $H$ in the following Corollary \ref{cor:unrectifiabilitynecessity}. \end{example}
The next corollary to Theorem \ref{thm:mainRn} shows that one direction in Theorem \ref{thm:planar} holds also in higher dimensions.
\begin{corollary}\label{cor:unrectifiabilitynecessity} Suppose that $\Omega \subset \mathbb R^n$ is a bounded $W^{1,1}$-extension domain. Let $\Omega_i$, for $i \in I$, be the connected components of $\mathbb R^n \setminus \overline{\Omega}$. Then the set \[
H = \partial \Omega \setminus \bigcup_{i \in I}\overline{\Omega_i} \] is purely $(n-1)$-unrectifiable. \end{corollary} \begin{proof}
Supposing $\Omega$ to be a $W^{1,1}$-extension domain, by Theorem \ref{thm:mainRn} we know that it has the strong perimeter extension property.
Now, towards a contradiction, suppose that $f \colon \mathbb R^{n-1} \to \mathbb R$ is an $L$-Lipschitz map so that
\[
\mathcal H^{n-1}(\textrm{Graph}(f) \cap H) > 0,
\]
after a suitable rotation.
Let $A$ be a component of $\mathbb R^n \setminus \textrm{Graph}(f)$
such that the set
\[
F = \{ x \in \textrm{Graph}(f) \cap H \,:\,\overline D(\Omega \cap A, x)> 0\}
\]
has positive $\mathcal H^{n-1}$-measure. By the measure density of $\Omega$ (Proposition \ref{meas.dens:BV}), at least one of the components must satisfy this.
Without loss of generality, we may assume that
\[
A = \{(y_1,\dots,y_n) \in \mathbb R^n \,:\, y_n < f(y_1,\dots,y_{n-1})\}.
\]
Take $E = \Omega \cap A$ and let $\widetilde E$ be the strong perimeter extension of $E$. Now, since $\mathcal H^{n-1}(\partial^M\widetilde E \cap \partial\Omega) = 0$, the set \[ G = \{x \in F\,:\, D(\widetilde E, x) = 1\} \] has positive $\mathcal H^{n-1}$-measure. Take $x = (x_1,\dots,x_n) \in G$. Since $E \subset A$, which was bounded by a graph of an $L$-Lipschitz map, the set \[
R_{x,L} = \{y = (y_1,\dots, y_n) \in \mathbb R^n \,:\, y_n - x_n>L|(x_1,\dots,x_{n-1}) - (y_1,\dots,y_{n-1})|\} \] does not intersect $E$. If there exists a small radius $r>0$ for which \[ R_{x,2L} \cap B(x,r) \cap \Omega = \emptyset, \]
we conclude that there exists a connected component $\Omega_i$ of $\mathbb R^n \setminus \overline{\Omega}$ for which $x \in \partial\Omega_i$ contradicting the fact that $x \in H$. Hence, there exists a sequence of points $x^i \in R_{x,2L} \cap \Omega$ such that $|x^i-x| \to 0$. Since $f$ is $L$-Lipschitz, writing $\delta= \frac{1}{4(L+1)}$, we have \[
B\left(x^i,\delta|x^i-x|\right) \subset R_{x,L} \subset \mathbb R^n \setminus E. \] By the measure density (Proposition \ref{meas.dens:BV}), we have \[
|B(x^i,\delta|x^i-x|)\cap \Omega| > c|B(x^i,\delta|x^i-x|)| \] for all $i$. Thus, \[
\frac{|B(x,2|x^i-x|)\cap \widetilde E|}{|B(x,2|x^i-x|)|} \le
1 - \frac{|B(x^i,\delta|x^i-x|)\cap \Omega|}{|B(x,2|x^i-x|)|} < 1-c\left(\frac{\delta}{2}\right)^n < 1 \] giving that \[ \underline{D}(\widetilde E,x) < 1, \] which contradicts the fact that $x \in G$. \end{proof} Let us point out that if in addition we required $\Omega$ to be planar and simply connected in the previous corollary, we would get the stronger conclusion that $\mathcal{H}^1(\partial \Omega\setminus\bigcup_i \overline{\Omega}_i)=0$.
In the next section we will show that in the planar case the conclusion of Corollary \ref{cor:unrectifiabilitynecessity} is also a sufficient condition for a bounded $BV$-extension domain to be a $W^{1,1}$-extension domain. The following example shows that this is not the case in dimension three.
\begin{example} Let us construct a bounded $BV$-extension domain $\Omega \subset \mathbb R^3$ which is not a $W^{1,1}$-extension domain so that $\mathbb R^3 \setminus \overline{\Omega}$ consists of only one component $\Omega_0$ for which $\partial \Omega = \partial \Omega_0$. Consequently, in the statement of Corollary \ref{cor:unrectifiabilitynecessity} we have $H = \emptyset$.
Let $C \subset [0,1]^2$ be a Cantor set with $\mathcal H^2(C)>0$ and let \[
\Omega = (-1,1)^3 \setminus \left\{(x_1,x_2,x_3)\,:\, |x_3| \le {\mathop\mathrm{\,dist\,}}((x_1,x_2),C), (x_1,x_2) \in [0,1]^2\right\}. \]
The fact that $\mathbb R^3 \setminus \overline{\Omega}$ consists of only one component $\Omega_0$ for which $\partial \Omega = \partial \Omega_0$ is immediate from the construction.
Also, with the same arguments as in the previous two corollaries, we see that \[ E = \{(x_1,x_2,x_3) \in \Omega\,:\, x_3 < 0\} \] does not have a strong perimeter extension.
In order to see that $\Omega$ is a $BV$-extension domain, take $u\in BV(\Omega)$. First notice that since the parts \[ \Omega_1=\{(x_1,x_2,x_3) \in \Omega \,:\, x_3>0\} \;\text{ and } \;\Omega_2=\{(x_1,x_2,x_3) \in \Omega \,:\, x_3<0\} \]
have Lipschitz boundaries, similarly to \cite[Theorem 5.8]{EG2015} we can consider the zero extensions of $u|_{\Omega_1}$ and $u|_{\Omega_2}$ to the whole of $\mathbb{R}^3$; calling them $\tilde u_1$ and $\tilde u_2$ respectively, we have $\tilde u_1,\tilde u_2\in BV(\mathbb{R}^3)$ with
\begin{equation}\label{ex:union.BV.func}
\|D \tilde u_i\|(\mathbb{R}^3)=\|D u\|(\Omega_i)+\int_{\partial \Omega_i} |\text{Tr}_i(u)|\,d\mathcal{H}^2 \end{equation} for every $i=1,2$. Here \begin{align*}
\text{Tr}_i\colon BV( \Omega_i)\to L^1\left(\partial\Omega_i; \mathcal{H}^2\right) \end{align*} for $i=1,2$ are bounded linear operators, called the traces, which are defined as \begin{align*}
&\text{Tr}_i(u)(x) = \lim_{r\searrow 0}\frac{1}{|B(x,r)\cap \Omega_i|}\int_{B(x,r)\cap\Omega_i}u(y)\,dy
\end{align*} for $\mathcal{H}^2$-almost every $x$. Now it is easy to check, following \eqref{ex:union.BV.func}, \begin{align*}
\|D \tilde u_i\|(\mathbb{R}^3) & \leq\|D u\|(\Omega_i)+\int_{\partial\Omega_i} |\text{Tr}_i(u)|\,d\mathcal{H}^2\\
&\leq \|u\|_{BV(\Omega_i)}+ C\|u\|_{BV(\Omega_i)}=(1+C)\|u\|_{BV(\Omega)}, \end{align*} for $i=1,2$. To conclude, we just let our extension operator $T\colon BV(\Omega)\to BV(\mathbb{R}^3)$ be $Tu=\tilde u_1+\tilde u_2$, which is the zero extension of $u$ outside $\Omega$. \end{example}
In the case where $\overline{\Omega} = \mathbb R^n$, the study of extension domains $\Omega$ is the same as the study of closed removable sets. Notice that by the measure density (Proposition \ref{meas.dens:BV}) the Lebesgue measure of $\partial \Omega$ is zero for a Sobolev or $BV$-extension domain. We call a set $F \subset \mathbb R^n$ of Lebesgue measure zero a removable set for $BV$, if $BV(\mathbb R^n \setminus F) = BV(\mathbb R^n)$ as sets and $\|Du\|(\mathbb R^n) = \|Du\|(\mathbb R^n\setminus F)$ for every $u \in BV(\mathbb R^n)$. Similarly, we call $F$ removable for $W^{1,1}$, if $W^{1,1}(\mathbb R^n \setminus F)= W^{1,1}(\mathbb R^n)$. We obtain the following equivalence of removability.
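For instance, in the plane the segment $F = \{0\}\times[0,1]$ (which is $1$-rectifiable) is not removable for $BV$: taking $u = \chi_{(0,1)^2}$ we have $u \in BV(\mathbb R^2)$ with
\[
\|Du\|(\mathbb R^2) = 4 \qquad\text{while}\qquad \|Du\|(\mathbb R^2 \setminus F) = 3,
\]
since the part of $\partial^M (0,1)^2$ lying on $F$ has length one.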
\begin{corollary} Let $F \subset \mathbb R^n$ be a closed set of Lebesgue measure zero. Then $F$ is removable for $BV$ if and only if $F$ is removable for $W^{1,1}$. \end{corollary} \begin{proof}
Suppose $F$ is removable for $BV$. Then $F$ is purely $(n-1)$-unrectifiable. Otherwise, similarly as in the proof of Corollary \ref{cor:unrectifiabilitynecessity}, we can construct a set $E$ of finite perimeter so that $\mathcal H^{n-1}(\partial^M E\cap F)>0$. Hence, $P(E,\mathbb R^n\setminus F) \ne P(E,\mathbb R^n)$, contradicting the assumption that $F$ is removable for $BV$.
Now, since $F$ is removable for $BV$, for every radius $R>0$, the set $B(0,R) \setminus F$ is a $BV$-extension domain. Since $F$ is purely $(n-1)$-unrectifiable, $B(0,R) \setminus F$ trivially has the strong perimeter extension property and is thus a $W^{1,1}$-extension domain by Theorem \ref{thm:mainRn}. Consequently, $F$ is removable for $W^{1,1}$.
Suppose then that $F$ is removable for $W^{1,1}$. Let $u \in BV(\mathbb R^n\setminus F)$. We only need to check that $u$, when regarded as a function defined on the whole of $\mathbb{R}^n$, satisfies $\|Du\|(F)=0$. With the Whitney smoothing operator $S_{\mathbb R^n \setminus F,\mathbb R^n \setminus F}$ from Theorem \ref{thm:smoothing} we can modify $u$ to a $W^{1,1}$-function $\tilde u = S_{\mathbb R^n \setminus F,\mathbb R^n \setminus F}u$ on $\mathbb{R}^n\setminus F$ and moreover, by \eqref{eq:boundaryzero},
$$\| D (\tilde u -u)\|(F)=0,$$
where $\tilde u$ may be given arbitrary values on $F$. Since $F$ is removable for $W^{1,1}$, we have $\tilde u \in W^{1,1}(\mathbb R^n)$. Thus $\|D \tilde u\|(F)=0$ because $|F|=0$, and therefore
$$\|D u\|(F)\leq \|D( u-\tilde u)\|(F)+\|D \tilde u\|(F)=0, $$
and we get that $u \in BV(\mathbb R^n)$ with $\|Du\|(\mathbb R^n) = \|Du\|(\mathbb R^n \setminus F)$. \end{proof}
\section{Characterization of planar $W^{1,1}$-extension domains}\label{sec:planar}
In this section we prove Theorem \ref{thm:planar} using the higher-dimensional result stated in Theorem \ref{thm:mainRn}. Since the necessity part of Theorem \ref{thm:planar} holds in the higher-dimensional case by Corollary \ref{cor:unrectifiabilitynecessity}, we only need to prove the sufficiency. We first set some notation and definitions.
We say that $\Gamma\subset\mathbb{R}^2$ is a Jordan curve if $\Gamma=\gamma([a,b])$ for some $a,b\in \mathbb{R}$, $a<b$, and some continuous map $\gamma$, injective on $[a,b)$ and such that $\gamma(a)=\gamma(b)$. According to the Jordan curve theorem, any Jordan curve $\Gamma$ splits $\mathbb{R}^2\setminus \Gamma$ into exactly two connected components, a bounded one and an unbounded one, which we call $\text{int} (\Gamma)$ and $\text{ext} (\Gamma)$ respectively. We will often talk about rectifiable Jordan curves $J$, by which we mean that $J$ is a Jordan curve and is $1$-rectifiable. A set $A$ whose boundary $\partial A$ is a Jordan curve is called a Jordan domain.
For technical reasons we also add to the class of Jordan curves the formal ``Jordan'' curves $J_0$ and $J_{\infty}$, whose interiors are $\mathbb{R}^2$ and the empty set respectively, and for which we set $\mathcal{H}^{1}(J_{0})=\mathcal{H}^{1}(J_{\infty})=0$.
We say that a set $E\subset \mathbb{R}^2$ has a decomposition into other sets $\{E_i\}_{i}$ up to $\mathcal{H}^1$-measure zero sets if $$\mathcal{H}^1\left(\left(E\setminus \bigcup_{i} E_i\right)\cup \left( \bigcup_{i} E_i\setminus E\right)\right)=0 $$ and $\mathcal H^1(E_i\cap E_j) = 0$ for every $i \ne j$.
For the particular case of planar sets of finite perimeter we have the following decomposition theorem from \cite[Corollary 1]{ACMM2001}.
\begin{theorem} \label{thm:planardecomposition} Let $E \subset \mathbb R^2$ have finite perimeter. Then, there exists a unique decomposition of $\partial^ME$ into rectifiable Jordan curves $\{C_i^+, C_k^-\,:\,i,k \in \mathbb N\}$, up to $\mathcal H^1$-measure zero sets, such that \begin{enumerate}
\item Given $\text{int}(C_i^+)$, $\text{int}(C_k^+)$, $i \ne k$, they are either disjoint or one is contained in the other; given $\text{int}(C_i^-)$, $\text{int}(C_k^-)$, $i \ne k$, they are either disjoint or one is contained in the other. Each $\text{int}(C_i^-)$ is contained in one of the $\text{int}(C_k^+)$.
\item $P(E,\mathbb{R}^2) = \sum_{i}\mathcal H^1(C_i^+) + \sum_k \mathcal H^1(C_k^-)$.
\item If $\text{int}(C_i^+) \subset \text{int}(C_j^+)$, $i \ne j$, then there is some rectifiable Jordan curve $C_k^-$ such that $\text{int}(C_i^+)\subset \text{int}(C_k^-) \subset \text{int}(C_j^+)$. Similarly, if $\text{int}(C_i^-) \subset \text{int}(C_j^-)$, $i \ne j$, then there is some rectifiable Jordan curve $C_k^+$ such that $\text{int}(C_i^-)\subset \text{int}(C_k^+) \subset \text{int}(C_j^-)$.
\item Setting $L_j =\{i \,:\, \text{int}(C_i^-)\subset \text{int}(C_j^+)\}$ the sets $Y_j = \text{int}(C_j^+) \setminus \bigcup_{i \in L_j}\text{int}(C_i^-)$ are pairwise disjoint, indecomposable and $E = \bigcup_j Y_j$. \end{enumerate}
\end{theorem}
Since sets of finite perimeter are defined via the total variation of $BV$-functions, they are understood modulo $2$-dimensional measure zero sets. In particular, the last equality in (4) of Theorem \ref{thm:planardecomposition} is modulo measure zero sets.
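To illustrate the decomposition, consider the annulus $E = B(0,2) \setminus \overline{B(0,1)}$. Its decomposition consists of one curve $C_1^+ = S(0,2)$ and one curve $C_1^- = S(0,1)$, with $\text{int}(C_1^-) \subset \text{int}(C_1^+)$, and
\[
P(E,\mathbb R^2) = \mathcal H^1(C_1^+) + \mathcal H^1(C_1^-) = 4\pi + 2\pi = 6\pi,
\]
while $Y_1 = \text{int}(C_1^+) \setminus \text{int}(C_1^-) = E$ is indecomposable.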
In order to prove the sufficiency part of Theorem \ref{thm:planar} we will proceed as follows: Starting from a set $E\subset \Omega$ of finite perimeter we first find an extension $E'$ to $\mathbb{R}^2$ using the fact that $\Omega$ is a $BV$-extension domain. Then we decompose $\partial^M E'$ using Theorem \ref{thm:planardecomposition} and after proving the quasiconvexity of each of the open connected components $\Omega_i$ of $\mathbb{R}^2\setminus \overline{\Omega}$, we will be able to perturb the Jordan curves of the decomposition of $\partial^M E'$ around each $\partial \Omega_i$ so that we get a final set $\widetilde E$ which will be a strong extension of $E$. An application of Theorem \ref{thm:mainRn} will conclude the proof.
We start by presenting a couple of lemmas showing the quasiconvexity of all the connected components of $\mathbb{R}^2\setminus\overline{\Omega}$.
\begin{lemma}\label{lem:quasi.comp.domains}
Suppose that $\Omega \subset \mathbb R^2$ is a bounded $BV$-extension domain. Then there exists a constant $C>0$ so that for any connected component $\Omega_i$ of $\mathbb R^2 \setminus \overline{\Omega}$, any two points $z,w \in \partial\Omega_i$ can be connected by a curve $\beta \subset \overline{\Omega_i}$ with $\ell(\beta) \le C|z-w|$. \end{lemma}
\begin{proof} One can essentially follow step by step the proof of \cite[Theorem 1.1]{KMS2010}, once the following facts have been taken into account.
\begin{enumerate}
\item For a given $i$, since $\Omega$ is a $BV$-extension domain, so is $\Omega' = \mathbb R^2\setminus \overline{\Omega}_i$. As an extension operator we can take \[ T' \colon BV(\Omega') \to BV(\mathbb R^2) \colon
u \mapsto T(u|_{\Omega})|_{\overline{\Omega}_i} + u, \] where $T$ is the extension operator from $BV(\Omega)$ to $BV(\mathbb R^2)$. Let us explain in more detail why the resulting function $T'u$ is well-defined as a function in $ BV(\mathbb{R}^2)$. Observe that the closures of two different components can intersect each other in at most one point. That is, \begin{equation}\label{eq:few.inter.Omega_i}
\#\left(\partial \Omega_i\cap \partial\Omega_j\right)\leq 1\;\;\;\text{for every}\;\; i\neq j. \end{equation} Otherwise either $\Omega$ would fail to be connected, or $\Omega_i$ and $\Omega_j$ would be the same component. This means that $\partial\Omega_i\cap \bigcup_{j\neq i} \overline{\Omega}_j $ is a countable set. Once we are aware of this simple fact it is clear that $T'(u)$ behaves well around $\partial\Omega_i$ and belongs to $BV(\mathbb{R}^2)$.
Observe that since $\Omega'$ is a $BV$-extension domain there is a constant $C'>0$ for which the property $(PE2)$ of extension of sets of finite perimeter holds. Note that this constant $C'$ only depends on the norm $\|T'\|$, which only depends on $\|T\|$ and which in turn only depends on the constant $C>0$ of the same property $(PE2)$ but now applied to the $BV$-extension domain $\Omega$.
\item We can assume that $\Omega'$ is bounded, and hence also a $BV_l$-extension domain thanks to \cite[Lemma 2.1]{KMS2010}. If $\Omega'$ is not bounded, then $\Omega_i$ must be bounded and we can take a large enough radius $R>0$ so that $$\Omega_i \subset B(0,R) \;\; \text{and}\;\;\Omega\subset B(0,R)\setminus \overline{\Omega}_i.$$ It is clear that replacing $\Omega'$ by $\Omega'\cap B(0,R)$ does not affect the $BV$-extension property.
\end{enumerate}
The proof of \cite[Theorem 1.1]{KMS2010} is made under the assumptions that a set $\Omega'$ is a bounded simply connected $BV_l$-extension domain, reaching as a conclusion that $\mathbb{R}^2\setminus \Omega'$ is quasiconvex.
In the case where $\Omega_i$ is unbounded, $\Omega'$ is a bounded simply connected $BV_l$-extension domain and we can apply the previous result directly to show the quasiconvexity of $\overline{\Omega}_i$.
If $\Omega_i$ is bounded, then after the modification mentioned above, $\Omega'$ is a bounded $BV_l$-extension domain. To prove the quasiconvexity of $\overline{\Omega}_i$, simple connectedness was used in \cite[Theorem 1.1]{KMS2010} only at the following point: when we take two points $z,w\in\partial\Omega_i$ and join them with a line-segment $L_{z,w}$, the set $ \Omega' \cap L_{z,w}$ consists of the disjoint union of countably many line-segments $L_{z_i,w_i}$, with $z_i,w_i\in\partial\Omega_i$. Under the assumption of simple connectedness of $\Omega'$ one can assert that $\Omega' \setminus L_{z_i,w_i}$ has two disjoint connected components. In our case this is still true, because otherwise $\Omega_i$ would not be connected.
The previous facts yield that every set $\overline\Omega_i$ is quasiconvex. A careful reading of the proof of \cite[Theorem 1.1]{KMS2010} also shows that the quasiconvexity constant of all these sets is uniformly bounded by a constant $C>0$, independent of $i$. Indeed, the quasiconvexity constant of any set $\overline \Omega_i$ only depends on the constant of the extension property of sets of finite perimeter (PE2) for the $BV$-extension domain $\Omega'=\mathbb{R}^2\setminus \overline \Omega_i$, which, as we already noted, depends only on the constant for the $BV$-extension domain $\Omega$, independently of the choice of $i$. \end{proof}
Notice that the previous Lemma \ref{lem:quasi.comp.domains} implies, in particular, that if $\Omega$ is a bounded $BV$-extension domain, then all open connected components of $\mathbb{R}^2\setminus \overline{\Omega}$ are Jordan domains.
We record the following general lemma which might be of independent interest. A version of it for quasiconvex sets was proven via conformal maps in \cite{KRZ}. Let us also point out that with the sharp Painlev\'e length result for a connected set \cite{LPR2020} one could quite easily prove a version of the lemma with a multiplicative constant $2$. \begin{lemma}\label{lem:quasi.int.Omega_i}
Let $\Omega$ be a Jordan domain. For every $x,y \in \overline{\Omega}$, every $\varepsilon > 0$ and any rectifiable curve $\gamma \subset \overline{\Omega}$ joining $x$ to $y$ there exists a curve $\sigma \subset \Omega \cup \{x,y\}$ joining $x$ to $y$ so that \[
\ell(\sigma) \le \ell(\gamma) + \varepsilon.
\] \end{lemma} \begin{proof} Without loss of generality, we may assume that $\gamma \colon [0,\ell(\gamma)] \to \mathbb R^2$ minimizes the length of curves joining $x$ to $y$ in $\overline{\Omega}$, $\gamma(0) = x$, $\gamma(\ell(\gamma)) = y$, and that $\gamma$ has unit speed.
If $\gamma((0,\ell(\gamma))) \cap \partial\Omega = \emptyset$, we are done. Suppose this is not the case and define \[ s_1 = \min \{t \in [0,\ell(\gamma)] \,:\, \gamma(t) \in \partial\Omega\} \] and \[ s_2 = \max \{t \in [0,\ell(\gamma)] \,:\, \gamma(t) \in \partial\Omega\}. \] If $s_1=s_2$, by minimality the curve $\gamma$ is the concatenation of the line-segments $[x,\gamma(s_1)]$ and $[\gamma(s_1), y]$. In this case, for small $r \in (0,\varepsilon /(2\pi))$, the curve $\gamma$ divides $B(\gamma(s_1),r)$ into two parts so that one of them is a subset of $\Omega$. Thus, we may replace part of $\gamma$ by an arc of the circle $S(\gamma(s_1),r)$, and we are done.
We are then left with the more substantial case where $s_1 < s_2$. Since $\partial\Omega$ is a Jordan loop, the set $\partial \Omega \setminus \gamma(\{s_1,s_2\})$ consists of two connected components $T_1$ and $T_2$.
We will show that $\gamma$ can be slightly pushed away from $\partial\Omega$ in directions that change in a locally Lipschitz way in $(0,\ell(\gamma))$. Namely, we assert that there exist functions \begin{align*}
&\varepsilon\colon (0,\ell(\gamma))\to (0,1),\\
&v\colon(0,\ell(\gamma))\to \mathbb{S}^1, \end{align*} so that $\varepsilon(\cdot)$ and $v(\cdot)$ are locally Lipschitz continuous and satisfy $\gamma(t)+hv(t)\in\Omega$ for all $0<h<\varepsilon(t)$ and $t \in (0,\ell(\gamma))$.
In order to show this, let $t \in (0,\ell(\gamma))$. If $\gamma(t) \in \Omega$, then with $\varepsilon_t = \frac12{\mathop\mathrm{\,dist\,}}(\gamma(t),\partial\Omega)$ we have $\gamma(s) + hv \in \Omega$ for all $s\in (t-\varepsilon_t, t+\varepsilon_t)\cap(0,\ell(\gamma))$, $v \in \mathbb S^1$ and $0 < h <\varepsilon_t$.
Suppose then that $t \in \gamma^{-1}(\partial\Omega) \cap (0,\ell(\gamma))$. Without loss of generality we may assume that $\gamma(t) \in T_1$.
The concatenation of $\gamma|_{[s_1,s_2]}$ with $T_2$ forms a closed loop $\alpha$ so that one of the components $\tilde\Omega$ of its complement is contained in $\Omega$, and $\gamma(t) \in \partial\tilde\Omega$. Now, let $r_t = {\mathop\mathrm{\,dist\,}}(\gamma(t),T_2)$. Then, by minimality of $\gamma$, the set $\gamma \cap B(\gamma(t),r_t)$ is contained on the boundary of a convex set $K_t = B(\gamma(t),r_t) \setminus \tilde\Omega$ with non-empty interior. Consequently, there exists a constant $\varepsilon_t>0$ so that for any $t-\varepsilon_t < \tau_1 < \tau_2 < t +\varepsilon_t$ for which the outer normal vectors $w_1$ and $w_2$ to $K_t$ exist at $\gamma(\tau_1)$ and $\gamma(\tau_2)$ respectively, there is a Lipschitz map $[\tau_1,\tau_2] \to \mathbb S^1\colon t \mapsto v_{\tau_1,\tau_2}(t)$ so that $v_{\tau_1,\tau_2}(\tau_1) = w_1$, $v_{\tau_1,\tau_2}(\tau_2) = w_2$, and $\gamma(t)+hv_{\tau_1,\tau_2}(t)\in\Omega$ for all $0<h<\varepsilon_t$ and $\tau_1 \le t \le \tau_2$.
Let $I \subset (0,\ell(\gamma))$ be the set of points $t$ where a normal direction to $\gamma$ exists at $\gamma(t)$. Now, cover $(0,\ell(\gamma))$ with the intervals $(t-\varepsilon_t,t+\varepsilon_t)\cap(0,\ell(\gamma))$ and then take a subcover $\{U_i=(\overline{t}_i-\varepsilon_{\overline{t}_i},\overline{t}_i+\varepsilon_{\overline{t}_i})\}_{i\in\mathbb Z}$ that is finite for compact subsets of $(0,\ell(\gamma))$, and so that every $t\in(0,\ell(\gamma))$ belongs to at most two intervals $U_i$. Assume the intervals $U_i$ are in order, that is, $U_i$ only intersects $U_{i-1}$ and $U_{i+1}$. By dividing into smaller intervals if needed, we may also assume that if $\gamma(U_i)\cap \partial \Omega \ne \emptyset$ and $\gamma(U_{i+1})\cap \partial \Omega \ne \emptyset$, then $\gamma(U_i\cup U_{i+1})\cap \partial \Omega \subset T_j$ for $j =1$ or $j=2$. This allows us to select the normal directions $w \in \mathbb S^1$ in a way so that they agree for the intervals $U_i$ and $U_{i+1}$ at the points $t \in I \cap U_i\cap U_{i+1}$. Notice that for those $i$ for which $\gamma(U_i)\cap \partial\Omega = \emptyset$ we have to make a choice between two opposite directions.
We thus have a subset $I \subset (0,\ell(\gamma))$ with $\mathcal H^1((0,\ell(\gamma)) \setminus I) = 0$, and an open covering $\{U_i\}_i$ of $(0,\ell(\gamma))$ of multiplicity at most two, where the $U_i$ are intervals, such that \begin{itemize}
\item for every $i \in \mathbb Z$ there exists a constant $\varepsilon_i>0$ so that for every $\tau_1,\tau_2 \in I \cap U_i$, $\tau_1 < \tau_2$, there is a Lipschitz map $[\tau_1,\tau_2] \to \mathbb S^1\colon t \mapsto v_{\tau_1,\tau_2}(t)$ so that $\gamma(t)+hv_{\tau_1,\tau_2}(t)\in\Omega$ for all $0<h<\varepsilon_i$ and $\tau_1 \le t \le \tau_2$, \item if $v_{\tau_1,\tau_2}$ and $v_{\tau_2,\tau_3}$ have been defined as above, $v_{\tau_1,\tau_2}(\tau_2) = v_{\tau_2,\tau_3}(\tau_2)$. \end{itemize}
For each $i\in \mathbb Z$ we will now fix a $t_i \in I \cap U_i \cap U_{i+1}$, and define $v(t) = v_{t_i,t_{i+1}}(t)$ on $[t_i,t_{i+1}]$. A locally Lipschitz choice for $\varepsilon$ can be given by defining \[ \varepsilon(t) = \frac{t_{i+1}-t}{t_{i+1}-t_i}\min(\varepsilon_i,\varepsilon_{i+1}) + \frac{t-t_{i}}{t_{i+1}-t_i}\min(\varepsilon_{i+1},\varepsilon_{i+2}) \] when $t \in [t_i,t_{i+1}]$.
Let $i_0 \in \mathbb{N}$ be such that $i_0 > \frac{2}{s_2-s_1} \geq \frac{2}{\ell(\gamma)}$. Then, for any $i \ge i_0$, the function $v(\cdot)$ is Lipschitz in $[1/i, \ell(\gamma)-1/i]$ and $\eta_i := \min_{t \in [1/i, \ell(\gamma)-1/i]}\varepsilon(t) > 0$. Hence, if we define \[ \delta_i(t) = \begin{cases}
\eta_i\frac{\min\{|\ell(\gamma) - 1/i-t|,|1/i-t|\}}{\ell(\gamma) - 2/i}, & \text{if } t\in [1/i, \ell(\gamma)-1/i],\\ 0, & \text{otherwise} \end{cases}, \]
we have $|\delta'_i(t)|=\frac{\eta_i}{\ell(\gamma)-2/i}$ for all $t\in (1/i, \ell(\gamma)-1/i)\setminus \{\ell(\gamma)/2\}$, and also $|\delta_i(t)|\leq \eta_i/2\leq \varepsilon(t)/2$ for every $t$. Now if we let \[
L_i = \int_{0}^{\ell(\gamma)}|(\delta_i(t)v(t))'|\,dt < \infty, \] defining \[ \delta(t) = \sum_{i= i_0}^\infty\frac{2^{-i-1}\min(\varepsilon,1)}{1+L_i}\delta_i(t), \] we get a function $\delta \colon [0,\ell(\gamma)] \to \mathbb R$ such that $\delta(0)=\delta(\ell(\gamma))=0$ and $0 < \delta(t) < \varepsilon(t)$ for all $t \in (0,\ell(\gamma))$. Note that the function $\delta$ is continuous as a limit of an absolutely and uniformly convergent series of continuous functions, and it is differentiable except on $\ell(\gamma)/2$. Thus, $\sigma \colon [0,\ell(\gamma)] \to \mathbb R^2$ defined by \[ \sigma(t) = \gamma(t) + \delta(t)v(t) \] is a curve joining $x$ and $y$, $\sigma((0,\ell(\gamma))) \subset \Omega$ and \begin{align*}
\ell(\sigma) &= \int_{0}^{\ell(\gamma)}|\sigma'(t)|\,dt \le \int_{0}^{\ell(\gamma)}|\gamma'(t)|\,dt + \int_{0}^{\ell(\gamma)}|(\delta(t)v(t))'|\,dt \\
& \le \ell(\gamma) + \sum_{i=i_0}^\infty\frac{2^{-i-1}\varepsilon}{1+L_i}\int_{0}^{\ell(\gamma)}|(\delta_i(t)v(t))'|\,dt < \ell(\gamma) + \varepsilon, \end{align*} finishing the proof. \end{proof}
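The tent-function bookkeeping above can be sanity-checked numerically. The following sketch (purely illustrative, not part of the proof) models the construction on the interval $[0,L]$ with $v$ constant and $\eta_i \equiv 1$, and verifies that the resulting $\delta$ vanishes at the endpoints, is positive in the interior, and adds total variation less than $\varepsilon$:

```python
import numpy as np

# Illustrative model of the proof's construction on [0, L], with eta_i = 1
# and a constant bound epsilon; assumptions chosen only for the demonstration.
L, eps, i0 = 1.0, 0.5, 3                   # any i0 with 2/i0 < L
t = np.linspace(0, L, 10001)

def delta_i(t, i, eta_i=1.0):
    # tent function supported on [1/i, L - 1/i], with slope eta_i/(L - 2/i)
    inside = (t > 1/i) & (t < L - 1/i)
    tent = eta_i * np.minimum(np.abs(L - 1/i - t), np.abs(1/i - t)) / (L - 2/i)
    return np.where(inside, tent, 0.0)

# L_i = total variation of delta_i (the tents are piecewise linear)
Li = [np.abs(np.diff(delta_i(t, i))).sum() for i in range(i0, 40)]
delta = sum(2.0**(-i-1) * min(eps, 1) / (1 + Li[i - i0]) * delta_i(t, i)
            for i in range(i0, 40))

assert delta[0] == 0 and delta[-1] == 0
mid = (t > 0.06) & (t < L - 0.06)
assert np.all(delta[mid] > 0)              # positive away from the endpoints
assert np.abs(np.diff(delta)).sum() < eps  # added length stays below eps
```

The weights $2^{-i-1}\min(\varepsilon,1)/(1+L_i)$ are exactly those of the proof, which is why the total variation of the series stays below $\varepsilon$.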
The next lemma, together with Theorem \ref{thm:planardecomposition}, are the key tools for our proof of the sufficiency part of Theorem \ref{thm:planar}, that we will show afterwards.
\begin{lemma}\label{lma:planarpushing}
Let $\Omega \subset \mathbb R^2$ be a bounded $BV$-extension domain and $\Omega_i$ the open connected components of $\mathbb{R}^2\setminus \overline{\Omega}$. Suppose that the set $H=\partial \Omega\setminus \bigcup_i \overline{\Omega}_i $ is purely $1$-unrectifiable and
let $E \subset \mathbb R^2$ be a Jordan domain with $\partial E$ rectifiable. Then there exists a set $\widetilde E \subset \mathbb R^2$ of finite perimeter so that
\begin{itemize}
\item[(i)] $E \cap \Omega = \widetilde E \cap \Omega$,
\item[(ii)] $\mathcal H^1(\partial^M \widetilde E) \le C \mathcal H^1(\partial^M E)$, and
\item[(iii)] $\mathcal H^{1}(\partial^M \widetilde E \cap \partial \Omega) = 0$,
\end{itemize}
where the constant $C$ is absolute. \end{lemma} \begin{proof} Consider the at most countably many components $\{\Omega_i\}_i$ of $\mathbb{R}^2 \setminus \overline{\Omega}$. For each $i$ we want to modify the set $E$ in $\overline{\Omega_i}$ to get some $\widetilde E\subset\mathbb{R}^2$ with $\partial \widetilde E$ rectifiable so that
\begin{equation}\label{eq:boundarytozero}
\mathcal{H}^1(\partial^M \widetilde E \cap \partial \Omega_i) = 0
\end{equation}
and
\begin{equation}\label{eq:boundary.int.controled}
\mathcal{H}^1(\partial^M \widetilde E \cap \Omega_i) \le C \mathcal{H}^1(\partial^M E \cap \overline{\Omega_i}) .
\end{equation}
Let us show how to conclude the proof of the lemma assuming these facts. Since we are not changing the set $E$ inside $\Omega$, property (i) is clear. To check (ii), let us first write
$$\mathcal{H}^1(\partial ^M \widetilde E)= \mathcal{H}^1(\partial ^M \widetilde E\cap\Omega)+\mathcal{H}^1(\partial ^M \widetilde E\cap\partial \Omega) + \mathcal{H}^1(\partial ^M \widetilde E\cap (\mathbb{R}^2\setminus \overline{\Omega})).$$
We will estimate each of these terms separately. For the first one, it is clear that $\mathcal{H}^1(\partial ^M \widetilde E\cap\Omega)=\mathcal{H}^1(\partial ^M E\cap\Omega)$. For the second one we use the fact that $\partial \widetilde E$ is rectifiable, that $\partial \Omega\setminus \bigcup_i \overline{\Omega}_i$ is purely $1$-unrectifiable, and \eqref{eq:boundarytozero},
\begin{align}\label{eq:mes.zero.in.boundary}
\mathcal{H}^1(\partial ^M \widetilde E\cap\partial \Omega)&=
\mathcal{H}^1\left(\partial ^M \widetilde E\cap\left[\partial \Omega \setminus \bigcup_i \overline{\Omega}_i\right]\right)+
\mathcal{H}^1\left(\partial ^M \widetilde E\cap\left[\partial \Omega \cap\bigcup_i \overline{\Omega}_i\right]\right) \nonumber \\
&=\mathcal{H}^1\left(\bigcup_i(\partial ^M \widetilde E\cap \partial \Omega \cap\overline{\Omega}_i)\right) \nonumber \\
&\leq \sum_i \mathcal{H}^1(\partial^M \widetilde E \cap \partial \Omega_i)=0.
\end{align}
For the third term we use \eqref{eq:boundary.int.controled} to get
$$\mathcal{H}^1(\partial ^M \widetilde E\cap (\mathbb{R}^2\setminus \overline{\Omega})) \leq \sum_i\mathcal{H}^1(\partial^M \widetilde E\cap \Omega_i)\leq C \sum_i \mathcal{H}^1(\partial^M E\cap \overline{\Omega_i}) .$$ All these estimates together yield $$ \mathcal{H}^1(\partial ^M \widetilde E)\leq \mathcal{H}^1(\partial ^M E\cap\Omega) +C \sum_i \mathcal{H}^1(\partial^M E\cap \overline{\Omega_i}) .$$ Since $\{x\in\partial \Omega_i:\, x\in \partial \Omega_j \;\;\text{for some}\;\; j\neq i\}$ is at most countable by \eqref{eq:few.inter.Omega_i}, we conclude that $$ \mathcal{H}^1(\partial ^M \widetilde E)\leq C\mathcal{H}^1(\partial ^M E),$$ proving (ii). Finally (iii) has already been shown in \eqref{eq:mes.zero.in.boundary}.
We now show how to modify $E$ inside each set $\overline{\Omega}_i$ in order to obtain \eqref{eq:boundarytozero} and \eqref{eq:boundary.int.controled}.
If $\mathcal H^1(\partial^ME\cap \partial \Omega_i) = 0$, we may skip this $i$ and move to the next. Let us thus assume $\mathcal H^1(\partial^ME\cap \partial \Omega_i) > 0$. Let $f \colon \mathbb S^1 \to \partial E$ be a parameterisation of the boundary by a homeomorphism. By the Lebesgue density theorem, for almost every $t \in f^{-1}(\partial^ME\cap \partial \Omega_i)$ there exists an $r_t>0$ so that for all $0 < r < r_t$ \begin{equation}\label{eq:balllarge}
\mathcal H^1\left(f(B(t,r))\cap \partial\Omega_i\right) \ge \frac12 \mathcal H^1(f(B(t,r))).
\end{equation}
By the Vitali covering lemma, we then find a disjoint collection $\{B(t_j,r_j)\}_j$ so that \eqref{eq:balllarge} holds for each of the balls and
\[
\mathcal H^1\left((\partial^ME\cap \partial \Omega_i) \setminus \bigcup_j f(B(t_j,r_j))\right) = 0.
\]
Now, we define $I_{i,j} = \overline{B(t_j,r_j)} \cap\mathbb S^1$ for each $j$ and obtain a collection $\{I_{i,j}\}_j$ of closed arcs in $\mathbb S^1$
whose interiors are pairwise disjoint,
\[
\mathcal H^1\left((\partial^ME\cap \partial \Omega_i) \setminus \bigcup_j f(I_{i,j})\right) = 0
\]
and
\[
\mathcal H^1\left(f(I_{i,j})\cap \partial\Omega_i\right) \ge \frac12 \mathcal H^1(f(I_{i,j}))
\]
for every $j$.
\begin{figure}
\caption{An illustration of the construction in Lemma \ref{lma:planarpushing}. The boundary $\partial E$ intersects the boundaries $\partial \Omega_1$ and $\partial\Omega_2$ in a set of positive $\mathcal H^1$-measure. The modification of $E$ inside $\Omega_1$ consists of the added set bounded by $\gamma_{1,1}$ from which three sets have been removed, bounded by $\gamma_{1,1,1}$, $\gamma_{1,1,3}$, and $\gamma_{1,1,4}$, respectively. The modification inside $\Omega_2$ consists of only one added part bounded by $\gamma_{2,1}$.}
\label{fig:perturbation}
\end{figure}
For the next argument we have $i,j$ fixed. The set $f(I_{i,j}) \setminus \partial \Omega_i$ consists of at most countably many open curves $\{\alpha_{i,j,k}\}_{k}$. For each $k $ for which $\alpha_{i,j,k} \cap \Omega_i = \emptyset$, we use Lemma \ref{lem:quasi.comp.domains} to find a curve $\beta_{i,j,k} \subset \overline{\Omega_i}$ such that $\ell(\beta_{i,j,k}) \le C|z_{i,j,k}-w_{i,j,k}|$ where $z_{i,j,k}$ and $w_{i,j,k}$ are the endpoints of $\alpha_{i,j,k}$. Now, for $\varepsilon=|z_{i,j,k}-w_{i,j,k}| $, Lemma \ref{lem:quasi.int.Omega_i} provides us with another curve $\gamma_{i,j,k}\subset \Omega_i\cup \{z_{i,j,k},w_{i,j,k}\}$ so that \begin{equation}
\ell(\gamma_{i,j,k})\leq \ell(\beta_{i,j,k})+|z_{i,j,k}-w_{i,j,k}|\leq (C+1)|z_{i,j,k}-w_{i,j,k}|. \end{equation} The curves $\alpha_{i,j,k}$ and $\gamma_{i,j,k}$ enclose a bounded subset that we call $E_{i,j,k} \subset \mathbb R^2$. Similarly, if we let $z_{i,j}$ be the first, and $w_{i,j}$ the last point of $f(I_{i,j}) \cap \partial \Omega_i$ we again use Lemmas \ref{lem:quasi.comp.domains} and \ref{lem:quasi.int.Omega_i} to connect $z_{i,j}$ to $w_{i,j}$ with a curve $\gamma_{i,j} \subset \Omega_i \cup\{z_{i,j},w_{i,j}\}$ so that \begin{equation}
\ell(\gamma_{i,j}) \le (C+1)|z_{i,j}-w_{i,j}|. \end{equation} Let $F_{i,j}$ be the bounded set enclosed by $\partial \Omega_i$ (from $z_{i,j}$ to $w_{i,j}$) and by $\gamma_{i,j}$. Now, we will modify $E$ by considering
$$\widetilde E_{i,j}=E\cup \left( F_{i,j} \setminus \bigcup_{k} E_{i,j,k} \right) .$$
See Figure \ref{fig:perturbation} for an illustration of the modification.
Repeating this process for all $i$ with $\mathcal H^1(\partial^ME\cap \partial \Omega_i) > 0$ and all $j$ we can finally define
$$\widetilde E=\bigcup_{i,j}\widetilde E_{i,j} .$$
Let us check that the properties $\eqref{eq:boundarytozero}$ and $\eqref{eq:boundary.int.controled}$ hold.
Firstly, observing that we did not modify $\partial E$ outside the arcs $f(I_{i,j})$,
\begin{align*}
\mathcal{H}^1(\partial^M \widetilde E\cap \partial\Omega_i)&=\mathcal{H}^1\left((\partial^M \widetilde E\cap \partial \Omega_i)\setminus\bigcup_j f(I_{i,j})\right)\\
&\qquad +\sum_j\mathcal{H}^1(\partial^M \widetilde E \cap \partial\Omega_i\cap f(I_{i,j}) )\\
&=\mathcal{H}^1\left((\partial^M E\cap \partial \Omega_i)\setminus\bigcup_j f(I_{i,j})\right)\\
&\qquad+\sum_j \left(\mathcal{H}^{1}(\partial^M F_{i,j} \cap \partial\Omega_i) +\sum_k \mathcal{H}^{1}(\partial^M E_{i,j,k} \cap \partial\Omega_i)\right) \\
&=0,
\end{align*}
which gives us \eqref{eq:boundarytozero}.
Secondly,
\begin{align*}
\mathcal{H}^1(\partial^M \widetilde E \cap \Omega_i) &\leq \mathcal{H}^1(\partial^M E\cap\Omega_i)+\sum_j\left(
\mathcal{H}^1(\gamma_{i,j})+\sum_k \mathcal{H}^1(\gamma_{i,j,k})\right) \\
&\leq \mathcal{H}^1(\partial^M E\cap\Omega_i)+\sum_j\left(
(C+1)|z_{i,j}-w_{i,j}|+\sum_k (C+1)|z_{i,j,k}-w_{i,j,k}|\right) \\
&\leq \mathcal{H}^1(\partial^M E\cap\Omega_i)+\sum_j 2(C+1) \mathcal{H}^1(f(I_{i,j})) \\
&\leq \mathcal{H}^1(\partial^M E\cap\Omega_i)+\sum_j 4(C+1) \mathcal{H}^1(f(I_{i,j})\cap \partial \Omega_i) \\
&\le C \mathcal{H}^1(\partial^M E \cap \overline{\Omega}_i)
\end{align*}
proving \eqref{eq:boundary.int.controled}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:planar}] One direction is proven in Corollary \ref{cor:unrectifiabilitynecessity}, so we only need to prove the converse. Assume then that $\Omega \subset \mathbb R^2$ is a bounded $BV$-extension domain and that the set $H=\partial \Omega\setminus \bigcup_i \overline{\Omega}_i $ is purely $1$-unrectifiable, where $\Omega_i$ are the open connected components of $\mathbb{R}^2\setminus \overline{\Omega}$.
We will show that $\Omega$ has the strong extension property for sets of finite perimeter and hence, by Theorem \ref{thm:mainRn}, $\Omega$ will be a $W^{1,1}$-extension domain. Since $\Omega$ is a bounded $BV$-extension domain, for any set $E \subset \Omega$ of finite perimeter in $\Omega$ there exists an extension $E'$ to $\mathbb R^2$ with $P(E',\mathbb R^2) \le C P(E,\Omega)$. Such an extension can be obtained, for instance, from the result of Maz'ya and Burago \cite[Section 9.3]{mazya}.
Let now $\{C_i^+, C_k^-\,:\,i,k \in \mathbb N\}$ be the rectifiable Jordan curves of Theorem \ref{thm:planardecomposition} for the set $E'$. By applying Lemma \ref{lma:planarpushing}, each Jordan domain $\text{int}(C_i^+)$ can be replaced by a set $\widetilde E_i^+$ so that
$\widetilde E_i^+ \cap \Omega = \text{int}(C_i^+) \cap \Omega$, $\mathcal H^1(\partial^M \widetilde E_i^+) \le C \mathcal H^1(C_i^+)$,
and $\mathcal H^1(\partial^M\widetilde E_i^+ \cap \partial \Omega) = 0$.
Similarly, each $\text{int}(C_k^-)$ can be replaced by a set $\widetilde E_k^-$ so that
$\widetilde E_k^- \cap \Omega = \text{int}(C_k^-) \cap \Omega$, $\mathcal H^1(\partial ^M\widetilde E_k^-) \le C \mathcal H^1(C_k^-)$, and
$\mathcal H^1(\partial^M\widetilde E_k^- \cap \partial \Omega) = 0$.
Now,
\[
E = E' \cap \Omega = \left(\bigcup_{i}\text{int}(C_i^+) \setminus \bigcup_k \text{int}(C_k^-) \right) \cap \Omega = \left(\bigcup_{i}\widetilde E_i^+ \setminus \bigcup_k \widetilde E_k^- \right) \cap \Omega,
\] holds modulo a measure zero set. Thus, the set
\[
\widetilde E = \left(\bigcup_{i}\widetilde E_i^+ \setminus \bigcup_k \widetilde E_k^- \right)
\]
is an extension of $E$ to $\mathbb R^2$,
and
\begin{align*}
P(\widetilde E,\mathbb R^2) & \le \sum_{i}\mathcal H^1(\partial^M\widetilde E_i^+) + \sum_k \mathcal H^1(\partial^M\widetilde E_k^-)\\
& \le \sum_{i}C\mathcal H^1(C_i^+) + \sum_k C\mathcal H^1(C_k^-)\\
& = CP(E',\mathbb R^2) \le C P(E,\Omega).
\end{align*}
Since
\[
\mathcal H^1(\partial^M \widetilde E \cap \partial \Omega) \le \sum_i
\mathcal H^1(\partial^M \widetilde E_i^+ \cap \partial \Omega) +
\sum_k
\mathcal H^1(\partial^M \widetilde E_k^- \cap \partial \Omega) = 0,
\]
the set $\widetilde E$ is the strong extension of $E$ that we had to find. \end{proof}
\end{document}
\begin{document}
\title{Projective measurements under qubit quantum channels}
\author{Javid Naikoo}
\email{[email protected]}
\affiliation{Centre for Quantum Optical Technologies, Centre of New Technologies, University of
Warsaw, Banacha 2c, 02-097 Warsaw, Poland}
\author{Subhashish Banerjee}
\email{[email protected]}
\affiliation{Indian Institute of Technology Jodhpur, Jodhpur 342011, India}
\author{A. K. Pan}
\email{[email protected]}
\affiliation{National Institute of Technology Patna, Ashok Rajpath, Patna, Bihar 800005, India }
\author{Sibasish Ghosh}
\email{[email protected]}
\affiliation{Optics \& Quantum Information Group,The Institute of Mathematical Sciences, HBNI,CIT Campus, Taramani, Chennai - 600113, India }
\begin{abstract}
\noindent
The action of qubit channels on projective measurements on a qubit state is used to establish an equivalence between channels and properties of generalized measurements characterized by \textit{bias} and \textit{sharpness} parameters. This can be interpreted as shifting the description of measurement dynamics from the Schr\"{o}dinger to the Heisenberg picture. In particular, unital quantum channels are shown to induce \textit{unbiased} measurements. Markovian channels are found to be equivalent to measurements for which the sharpness is a monotonically decreasing function of time. These results are illustrated by considering various noise channels. Further, the effect of the \textit{bias} and \textit{sharpness} parameters on the energy cost of a measurement, and its interplay with the non-Markovianity of the dynamics, is discussed.
\end{abstract}
\maketitle
\section{Introduction}
Measurement plays a far more crucial role in quantum theory than in its classical counterpart. An ineluctable feature of any quantum measurement is that it entails an interaction between the measuring apparatus and the system, so that the observed system necessarily gets entangled with the apparatus \cite{busch1996quantum,braginsky1995quantum}. The textbook description of quantum measurement usually deals with ideal measurements, in which a one-to-one correspondence between system and apparatus states is achieved. Such an ideal measurement scenario can be modelled by a complete set of orthogonal projectors on the Hilbert space of the system. In practice, however, this system-apparatus correspondence may not be achieved in a general measurement scenario. In such a case, the orthogonal projectors need to be replaced by so-called positive operator-valued measures (POVMs): a POVM is a set of Hermitian, positive semi-definite operators, commonly denoted $\{ E_i\}$, that sum to the identity, i.e., $\sum_{i = 1}^{n} E_i = \mathbb{1}$. If all the elements of a POVM are projective, i.e., $E_i = \Pi_{i}=\ket{\phi_i}\bra{\phi_i}$, where $\{ \ket{\phi_i}\}$ is an orthonormal basis, then the measurement is called \textit{sharp}. There are pertinent schemes where generalized measurements involving POVMs outperform projective measurements, such as quantum tomography \cite{braginsky1995quantum}, unambiguous state discrimination \cite{busch2010coexistence}, quantum cryptography \cite{yu2010joint,busch2010unsharp,das2018testing}, device-independent randomness certification \cite{acin16}, and many more.
Mathematically, a two-outcome POVM in two dimensions can, in its general form, be written as \cite{busch2010coexistence,stano2008,yu2010joint}
\begin{equation}\label{eq:Epm}
E_{\pm}(x, \vec{m}) = \frac{\mathbb{1} \pm (\mathbb{1} x + \vec{m} \cdot \vec{\sigma})}{2}.
\end{equation}
Here $x$ and $|\vec{m}|$ are called the \textit{bias} and \textit{sharpness} parameters, respectively. Positivity of the POVM elements $E_{\pm}(x, \vec{m})$ demands that the following condition be satisfied:
\begin{equation}
|x| + |\vec{m}| \le 1.
\end{equation}
For the ideal sharp measurement scenario, $|x|=0$ and $|\vec{m}|=1$. The notions of \textit{bias} and \textit{sharpness} both capture deviations from ideal projective measurements, but arise for different physical reasons. The \textit{sharpness} parameter is linked to the precision of the measurement, which is limited by the operational indistinguishability of the probability distributions corresponding to the post-measurement apparatus states; thus $|\vec{m}|=1$ implies a vanishingly small overlap between them. The \textit{bias} parameter, on the other hand, quantifies the tendency of a measurement to favor one state over the other. When $|x|=0$ the POVM $E_{\pm}(0, \vec{m})$ is called \textit{unbiased}, meaning that the outcomes of the measurement are purely random if the system is prepared in a maximally mixed state, i.e., $\operatorname{Tr} \{ E_{+}(0, \vec{m}) \mathbb{1}/2\} = \operatorname{Tr} \{ E_{-}(0, \vec{m}) \mathbb{1}/2\} = 1/2$.
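As a numerical sanity check (an illustrative sketch, not part of the analysis), one can build $E_{\pm}(x,\vec m)$ for sample parameters satisfying $|x|+|\vec m|\le 1$ and verify completeness, positivity, and $\operatorname{Tr}\{E_\pm\}=1\pm x$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

x, m = 0.2, np.array([0.1, 0.3, 0.4])      # sample values with |x| + |m| <= 1
m_dot_sigma = m[0]*sx + m[1]*sy + m[2]*sz
E = {s: 0.5 * (I2 + s * (x * I2 + m_dot_sigma)) for s in (+1, -1)}

assert np.allclose(E[+1] + E[-1], I2)                  # completeness
for s in (+1, -1):
    assert np.all(np.linalg.eigvalsh(E[s]) >= 0)       # positivity
    assert np.isclose(np.trace(E[s]).real, 1 + s * x)  # Tr{E_pm} = 1 +/- x
```

The eigenvalues of $E_\pm$ are $\tfrac12(1\pm x \pm' |\vec m|)$, so positivity is exactly the condition $|x|+|\vec m|\le 1$.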
The POVM elements in Eq. (\ref{eq:Epm}), can be viewed as an \textit{affine} transformation on a pure state $\rho = \frac{1}{2}( \mathbb{1} + \vec{r} \cdot \sigma)$, with $\vec{r} = (\sin\theta \cos\phi, \sin\theta \sin\phi, \cos\theta)$ being the Bloch vector. The post-measurement state becomes $\rho^\pm = \frac{1}{2}( \mathbb{1} + \vec{s}^{\pm} \cdot \sigma)$, such that
\begin{equation}
\vec{s}^{\pm} = A^{\pm}\vec{r} + \vec{T}^{\pm}.
\end{equation}
Here, $A_{ij}^{\pm} = \frac{1}{2} \operatorname{Tr} \{\sigma_i E_{\pm} [\sigma_j]\}$ and $T_{i}^{\pm} = \frac{1}{2} \operatorname{Tr} \{\sigma_i E_{\pm}[\mathbb{1}]\}$, with $E_{j}[ \bm{\omega}] = E_{j} \bm{\omega} E_{j}^\dagger$, $j=\pm$. We have
\begin{align}
A_{ii}^{\pm} &= \frac{1}{4} \big[ (1\pm x)^2 + 2m_i^2 - |\vec{m}|^2 \big],\\\nonumber
A_{ij}^{\pm} &=\frac{m_i m_j}{2}, ~~{\rm for}~~i\ne j ;\ \
T_{i}^{\pm} = \pm\frac{m_i}{2}(1 \pm x),
\end{align}
with $i,j = 1,2,3$ and $\pm$ pertaining to the POVM element $E_{\pm}$ being used.
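These closed forms can be checked numerically. The sketch below (illustrative only) recomputes $A^{\pm}$ and $T^{\pm}$ from their trace definitions and compares them with the formulas above; note that the translation of the $E_-$ branch carries an overall sign, $T^{-}_i = -\tfrac{m_i}{2}(1-x)$:

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)

x, m = 0.2, np.array([0.1, 0.3, 0.4])   # sample values with |x| + |m| <= 1
for s in (+1, -1):
    E = 0.5 * (I2 + s * (x * I2 + sum(mi * si for mi, si in zip(m, sig))))
    # A_ij = (1/2) Tr{sigma_i E sigma_j E},  T_i = (1/2) Tr{sigma_i E E}
    A = np.array([[0.5 * np.trace(si @ E @ sj @ E).real for sj in sig]
                  for si in sig])
    T = np.array([0.5 * np.trace(si @ E @ E).real for si in sig])
    # closed-form expressions
    A_formula = np.outer(m, m) / 2
    for i in range(3):
        A_formula[i, i] = 0.25 * ((1 + s*x)**2 + 2*m[i]**2 - m.dot(m))
    T_formula = s * m / 2 * (1 + s*x)
    assert np.allclose(A, A_formula) and np.allclose(T, T_formula)
```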
The effect of \textit{biased}-\textit{unsharp} measurements on various quantum correlations has been studied in \cite{busch2010unsharp,das2018testing,swati2017probing}.
The \textit{biased} and \textit{unsharp} measurements have been realized experimentally using quantum feedback stabilization of photon number in a cavity \cite{sayrin2011real}. The \textit{unsharp} measurements were studied with qubit observables through an Arthur–Kelly-type joint measurement model for qubits \cite{pal2011approximate}. Recently, implementation of generalized measurements on a qudit via quantum walks was also proposed \cite{Zhiao2019implement}.
Quantum dynamics, in its idealized version without the presence of an environment, is governed by unitary evolution. In realistic scenarios, the most general evolution is governed by quantum channels, which are characterized by suitably formulated Kraus operators. The action of a quantum channel on states is conveniently studied in the Schr\"{o}dinger picture, while its effect on operators, leading to their subsequent evolution, requires the use of the Heisenberg picture. In this work, we address the following issues: under the action of a quantum dynamical process ({\it e.g.}, a quantum channel), how does an ideal projector evolve, and does such a process transform it into a POVM? In particular, we examine the types of quantum channels that lead to two-outcome \textit{biased}-\textit{unsharp} POVMs. Moreover, we investigate the behavior of the \textit{bias} and \textit{sharpness} parameters under Markovian as well as non-Markovian quantum dynamics. Further, to motivate our study, we compare the energy costs of implementing such measurements under different kinds of open system dynamics.
The work is organized as follows: In Sec. (\ref{UnitalvsBUM}) we start with a brief review of quantum channels and discuss the effect of dynamics on ideal measurements. The arguments are made rigorous by proving two theorems establishing that the conjugate of a unital (non-unital) channel generates \textit{unbiased} (\textit{biased}) POVM. Also, the conjugate of a Markovian channel is shown to lead to \textit{unsharp} measurements such that the \textit{sharpness} is a monotonically decreasing function of time. The effect of \textit{bias} and \textit{sharpness}, in the context of non-Markovian dynamics, on the energy cost of measurements is discussed in Sec. (\ref{sec:ECost}). We conclude in Sec. (\ref{Conclusion}).
\section{Quantum channels and \textit{biased}-\textit{unsharp} POVMs }\label{UnitalvsBUM}
Mathematically, \textit{quantum channels} are linear maps $\mathcal{E}: \mathcal{S}(X) \rightarrow \mathcal{S}(Y)$, such that $ \mathcal{S}(X)$ ($ \mathcal{S}(Y)$) is the set of all density operators acting on $X$ ($Y$) \cite{watrous2018theory}. Geometrically, the quantum channel $\mathcal{E}$ is an \textit{affine} transformation \cite{ruskai2002analysis}. An elegant description of quantum channels is given in terms of \textit{operator sum representation}, such that an initial density matrix $\rho$ is evolved to some final density matrix $\rho^\prime$
\begin{equation}
\rho^\prime = \mathcal{E} [\rho] = \sum\limits_{i} \mathcal{K}_i \rho \mathcal{K}_i^\dagger.
\end{equation}
Here $\mathcal{K}_i$ are the Kraus operators and satisfy the completeness condition $\sum_i \mathcal{K}_i^\dagger \mathcal{K}_i = \mathbb{1}$. The conjugate channel $\mathcal{E}^\dagger$ corresponding to $\mathcal{E}$ is defined such that $\mathcal{E}^\dagger[\rho] = \sum\limits_{i} \mathcal{K}_i^\dagger \rho \mathcal{K}_i$.
An important class of quantum channels is that of the \textit{unital channels}, each of which maps the identity operator to itself, that is, $\mathcal{U}[\mathbb{1}] = \mathbb{1}$. Examples include the phase damping, depolarizing and Pauli channels \cite{sbrg,omkar}. A typical example of a non-unital channel is the amplitude damping channel \cite{sbbook,sgad,sbgp}. Note that the conjugate channel of every trace-preserving quantum channel is unital. For unital qubit channels, the following properties are equivalent \cite{mendl2009unital}:
\begin{enumerate}
\item $\mathcal{E}$ is unital, i.e., $\mathcal{E}[\mathbb{1}] = \mathbb{1}$.
\item
$\mathcal{E}$ can be realized as a random unitary map: ${\cal E}(\rho) = \sum_{i} p_iU_i\rho U_i^{\dagger}$ with $U_i$'s being unitary and $p_i$'s being probabilities such that $\sum_i p_i = 1$.
\end{enumerate}
Projective measurements are mapped to POVMs by the action of channels. As an example, consider a unital qubit channel $T$ which can be described as \cite{mendl2009unital} $T (\rho) = \sum_{i}^{} \lambda_i U^\dagger_i \rho U_i = \rho^\prime$, with $U_iU_i^\dagger = \mathbb{1}$ and $ \sum_{i}^{} \lambda_i =1$. Let a qubit projective measurement be denoted by $\Pi^{\pm}$, so that the probability of obtaining the outcome $\pm 1$ is given by $\rm{prob}({\pm}1) = \operatorname{Tr}\{ \Pi^{\pm} \rho^\prime \} = \operatorname{Tr} \{\Pi^{\pm} \sum_{i}^{} \lambda_i U^\dagger_i \rho U_i \} =\operatorname{Tr} \{ \sum_{i}^{} \lambda_i U_i \Pi^{\pm} U^\dagger_i \rho \} = \operatorname{Tr} \{ E_{\pm} \rho \}$. Here $E_{\pm} = \sum_{i}^{} \lambda_i U_i \Pi^{\pm} U^\dagger_i$ can be identified as the POVM elements, in the sense that the projectors evolve to POVMs through the dynamics: $E_{\pm} \ge 0$ and $E_+ + E_- = \mathbb{1}$. Since $\Pi^{\pm}$ is a projector of unit trace and conjugation by a unitary preserves the trace, $\operatorname{Tr} \{U_i \Pi^{\pm} U^\dagger_i\} = 1 $, and hence $ \operatorname{Tr} \{E_{\pm}\} = \sum_{i}^{} \lambda_i \operatorname{Tr} \{U_i\Pi^{\pm} U^\dagger_i\} = \sum_{i}^{} \lambda_i =1$. But the trace of the POVM element $ E_{\pm}$ in Eq. (\ref{eq:Epm}) is $ \operatorname{Tr} \{E_{\pm}\} = \operatorname{Tr} \{\frac{\mathbb{1} \pm (\mathbb{1} x + \vec{m}\cdot\vec{\sigma})}{2} \} = 1 \pm x $. \textit{Hence a unital qubit channel acting on projectors leads to \textit{unbiased} POVMs}.
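This conclusion is easy to confirm numerically: pushing a projector through a random-unitary (hence unital) channel yields an effect with unit trace, i.e., an unbiased POVM element. The following sketch (illustrative only) does this for a randomly mixed qubit channel:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary():
    # QR decomposition of a Ginibre matrix gives a random 2x2 unitary
    q, r = np.linalg.qr(rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

lam = rng.dirichlet(np.ones(4))             # mixing probabilities lambda_i
Us = [random_unitary() for _ in range(4)]
P = np.array([[1, 0], [0, 0]], dtype=complex)   # projector |0><0|
E = sum(l * U @ P @ U.conj().T for l, U in zip(lam, Us))

# The induced effect is a valid POVM element with Tr E = Tr P = 1 (x = 0)
assert np.isclose(np.trace(E).real, 1.0)
eig = np.linalg.eigvalsh(E)
assert np.all(eig >= -1e-12) and np.all(eig <= 1 + 1e-12)
```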
\begin{table*}[htp]
\centering
\caption{\ttfamily The \textit{bias} and \textit{sharpness} parameters corresponding to different noise channels. Here, $\Theta$ is the measurement parameter defined in Eq. (\ref{measurement}).
Detailed discussion about these channels (including their time dependence) can be found in the cited references.
}
\begin{tabular}{ |p{3cm}|p{6cm}|p{3cm}|p{4.2cm}|}
\hline
Channel (unital/non-unital) & Kraus operators & \textit{Bias} & \textit{Sharpness} \\
\hline
Random Telegraph Noise (RTN) \cite{daffer2004depolarizing}, unital, with memory & $R_1 = \sqrt{\frac{1+\Lambda(t)}{2}} \mathbb{1}$, $R_2 =\sqrt{\frac{1-\Lambda(t)}{2}} \sigma_z$ & 0 & $\sqrt{\cos^2\Theta + \Lambda^2(t) \sin^2\Theta}$\\
\hline
Phase Damping (PD) \cite{nielsen2002quantum}, unital, without memory & \small $ P_0 = \begin{pmatrix}
1 & 0 \\
0 & \sqrt{1-\lambda}
\end{pmatrix}$, $P_1 = \begin{pmatrix}
1 & 0\\
0 & \sqrt{\lambda}
\end{pmatrix}$. \normalsize & 0 & $\sqrt{1- \lambda \sin^2\Theta}$ \\
\hline
Depolarizing \cite{nielsen2002quantum}, unital, without memory & \small $ D_0 = \sqrt{1-q}~ \mathbb{1}$ \normalsize, \small $D_i = \sqrt{q/3}~ \sigma_i$ $i=1,2,3$ \normalsize & 0 & $ 1 - 4q/3$ \\
\hline
Amplitude Damping (AD) \cite{nielsen2002quantum}, non-unital, without memory & $A_0 = \small \begin{pmatrix} 1 & 0\\ 0 & \sqrt{\gamma} \end{pmatrix}$, $A_1 = \begin{pmatrix} 0 & \sqrt{1-\gamma}\\ 0 &0 \end{pmatrix}$ \normalsize & $|\gamma \cos\Theta|$ & $\sqrt{1-\gamma} ~\sin\Theta$ \\
\hline
AD \cite{bylicka2014non}, non-unital, with memory & $\tilde{A}_0 = \small \begin{pmatrix} 1 & 0\\ 0 & G(t) \end{pmatrix}$, $\tilde{A}_1 = \begin{pmatrix} 0 & \sqrt{1-|G(t)|^2}\\ 0 &0 \end{pmatrix}$ \normalsize & $|G^2(t) \cos\Theta -1|$ & $|G(t)| \sqrt{\sin^2\Theta + |G(t)|^2 \cos^2\Theta}$ \\
\hline
Generalized AD (GAD) \cite{nielsen2002quantum}, non-unital, without memory & $G_0 = \small \begin{pmatrix} \sqrt{p} & 0\\ 0 & \sqrt{p(1- \gamma)} \end{pmatrix}$,\newline \small $G_1 = \begin{pmatrix} 0 & \sqrt{p \gamma}\\ 0 & 0 \end{pmatrix}$\normalsize,
\newline \small $G_2 = \begin{pmatrix} \sqrt{(1-p) (1- \gamma)} & 0\\ 0 & 1 \end{pmatrix}$ \normalsize, \newline \small $G_3 = \begin{pmatrix} 0 & 0\\ \sqrt{(1-p) \gamma} & 0 \end{pmatrix}$.\normalsize \normalsize& $ |(2p-1) \gamma \cos\Theta|$ & \small $\sqrt{(1-\gamma)(1-\gamma \sin^2\Theta)}$\normalsize \\
\hline
\end{tabular}\label{tabBiasUnsharp}
\end{table*}
We provide an illustrative example to demonstrate the interplay of the \textit{bias} and \textit{sharpness} parameters with the nature of the underlying dynamics. To this end, consider a qubit interacting with random telegraph noise (RTN) \cite{daffer2004depolarizing}, characterized by the stochastic variable $\Gamma(t)$ switching at a rate $\gamma$ between $\pm 1$. The variable $\Gamma(t)$ satisfies the correlation $\langle \Gamma (t) \Gamma(s) \rangle = a^{2} e^{-|t-s|/\tau}$, where $a$ is the qubit-RTN coupling strength and $\tau = \frac{1}{2 \gamma}$. The reduced dynamics of the qubit is governed by the following Kraus operators
\begin{equation}\label{KrausRTN}
R_1 (\nu) = \sqrt{\frac{1+ \Lambda(\nu)}{2}} \mathbb{1}, \qquad R_2 (\nu) = \sqrt{\frac{1- \Lambda(\nu)}{2}} \sigma_z.
\end{equation}
Here, $\Lambda(\nu) = e^{-\nu } \big[\cos(\mu \nu) + \frac{\sin(\mu \nu)}{\mu}\big]$ is the memory kernel with $\mu = \sqrt{(4 a \tau)^2 - 1}$, and $\nu=\frac{t}{2\tau}=\gamma t$ is a dimensionless parameter. When $0 \le 4a \tau < 1$, the dynamics is damped, the frequency parameter $\mu$ being imaginary with magnitude less than unity. At $4a \tau =1$, the memory kernel becomes $\Lambda = e^{-\nu} (1+\nu)$, which is unity at the initial time and approaches zero as time approaches infinity. For $4a \tau > 1$, the dynamics exhibits damped harmonic oscillations in the interval $[-1,1]$. The former and latter scenarios correspond to the Markovian and non-Markovian dynamics, respectively \cite{naikoo2019facets,pradeep,qds}.
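The qualitative behavior of the memory kernel can be checked numerically. The sketch below (illustrative only; the sample couplings $4a\tau = 0.4$ and $4a\tau = 8$ are arbitrary choices) confirms that $|\Lambda|$ decays monotonically in the Markovian regime and oscillates with sign changes in the non-Markovian one:

```python
import numpy as np

def memory_kernel(nu, a_tau):
    """Lambda(nu) for RTN; mu = sqrt((4 a tau)^2 - 1) may be imaginary."""
    mu = np.sqrt(complex((4 * a_tau)**2 - 1))
    if mu == 0:
        return np.exp(-nu) * (1 + nu)       # limit mu -> 0 at 4 a tau = 1
    return (np.exp(-nu) * (np.cos(mu * nu) + np.sin(mu * nu) / mu)).real

nu = np.linspace(0, 10, 2001)
weak = np.array([memory_kernel(v, 0.1) for v in nu])    # 4 a tau < 1: Markovian
strong = np.array([memory_kernel(v, 2.0) for v in nu])  # 4 a tau > 1: non-Markovian

assert np.all(np.diff(np.abs(weak)) <= 1e-12)  # |Lambda| decays monotonically
assert np.any(strong < 0)                      # damped oscillations change sign
```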
Let the initial system be represented by a pure qubit state (as it will turn out, the conclusions we draw are actually independent of the initial state)
\begin{equation}\label{rho}
\rho = \begin{pmatrix}
\cos^2(\frac{\theta}{2}) & \frac{1}{2} e^{-i \phi} \sin(\theta) \\\\
\frac{1}{2} e^{i \phi} \sin(\theta) & \sin^2(\frac{\theta}{2})
\end{pmatrix}.
\end{equation}
Here, $0 \le \theta \le \pi$ and $0 \le \phi \le 2 \pi$. Further, we use the general dichotomic observable $\hat{\mathcal{Q}}$ parametrized as
\begin{equation}\label{measurement}
\hat{\mathcal{Q}} = \begin{pmatrix}
\cos\Theta & e^{i \Phi} \sin \Theta\\
e^{-i\Phi} \sin\Theta & -\cos\Theta
\end{pmatrix},
\end{equation}
with $0 \le \Theta < \pi$ and $0\le \Phi \le 2 \pi$ \cite{murnaghan1962unitary}. The corresponding eigenprojectors are $\Pi^{\pm} =\frac{1}{2}[ \mathbb{1} \pm \hat{\mathcal{Q}}]$. The expectation values of $\Pi^{\pm}$ under the RTN dynamics are given by
\begin{equation}
\langle \Pi^{\pm} \rangle = \operatorname{Tr} \{ \Pi^{\pm} \sum_{i=1}^2 R_i \rho R_{i}^{\dagger} \} = \operatorname{Tr} \{ \rho \sum_{i=1}^2 R_{i}^{\dagger} \Pi^{\pm} R_i \}.
\end{equation}
We can then identify the term $ \sum_{i=1}^2 R_{i}^{\dagger} \Pi^{\pm} R_i $ as the POVM element $E^{\pm}$, which, when compared to Eq. (\ref{eq:Epm}), gives the \textit{bias} and \textit{sharpness} parameters.
Thus the action of a noisy channel on the dynamics of the qubit can be viewed as a generalized measurement.
Equating $\operatorname{Tr} \{ \rho \sum_{i=1}^{2} R^\dagger_i \Pi^{\pm} R_i \}$ with $ \operatorname{Tr} \{E_{\pm}(x, \vec{m}) \rho \} $, one obtains the \textit{bias} and \textit{sharpness} parameters as
\begin{equation}
x= 0, \quad {\rm and}\quad |\vec{m}| = \sqrt{\cos^2\Theta + \Lambda^2(t) \sin^2\Theta}.
\end{equation}
Thus the evolution of the projector under this channel provides \textit{unbiased} POVMs, and the \textit{sharpness} parameter contains the \textit{memory kernel} $\Lambda(t)$, which in turn decides whether the dynamics is Markovian or non-Markovian. Since RTN constitutes a unital channel, this is in accordance with our above considerations about unital channels inducing \textit{unbiased} POVMs. Similarly, one finds that for the amplitude damping channel with memory, see Table (\ref{tabBiasUnsharp}), the \textit{memory kernel} $G(t)$ is present in both the \textit{bias} and the \textit{sharpness} parameters.
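This identification can be checked numerically. The sketch below assumes the standard dephasing Kraus form for RTN, $R_1=\sqrt{(1+\Lambda)/2}\,\mathbb{1}$ and $R_2=\sqrt{(1-\Lambda)/2}\,\sigma_z$ (an assumption on our part; any Kraus decomposition of the same channel yields the same $E^{\pm}$), and reads the bias and sharpness off $E^{+}=\frac{1}{2}[(1+x)\mathbb{1}+\vec m\cdot\vec\sigma]$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def rtn_povm_parameters(Theta, Phi, Lam):
    """Bias x and sharpness |m| of E^+ = sum_i R_i^dag Pi^+ R_i under RTN
    dephasing, with assumed Kraus operators R1 = sqrt((1+Lam)/2) I and
    R2 = sqrt((1-Lam)/2) sigma_z."""
    Q = np.array([[np.cos(Theta), np.exp(1j * Phi) * np.sin(Theta)],
                  [np.exp(-1j * Phi) * np.sin(Theta), -np.cos(Theta)]])
    Pi_plus = 0.5 * (I2 + Q)
    R1 = np.sqrt((1 + Lam) / 2) * I2
    R2 = np.sqrt((1 - Lam) / 2) * PAULIS[2]
    Ep = R1.conj().T @ Pi_plus @ R1 + R2.conj().T @ Pi_plus @ R2
    bias = np.trace(Ep).real - 1.0              # E^+ = ((1+x) I + m.sigma)/2
    m = np.array([np.trace(Ep @ s).real for s in PAULIS])
    return bias, np.linalg.norm(m)
```

The output reproduces $x=0$ and $|\vec m|=\sqrt{\cos^2\Theta+\Lambda^2\sin^2\Theta}$ for arbitrary $\Theta,\Phi,\Lambda$.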
Before proceeding further we provide a brief introduction of the \textit{Mueller matrix} formulation which will be used to prove two general results.
A linear operation $\mathcal{E}$ and its adjoint $\mathcal{E}^\dagger$ are defined in terms of the Hilbert-Schmidt inner product $\langle \rho, \sigma \rangle = \operatorname{Tr}\{\rho^\dagger \sigma \}$, such that $\operatorname{Tr} \{ [\mathcal{E}(\rho)]^\dagger \sigma \} = \operatorname{Tr} \{ \rho^\dagger \mathcal{E}^\dagger (\sigma) \}$, with the Kraus operators of $\mathcal{E}^\dagger$ being the adjoints of those of $\mathcal{E}$ \cite{ruskai2002analysis}. Further, $\mathcal{E}$ is trace preserving if and only if $\mathcal{E}^\dagger$ is unital. Now, the (linear) action of a qubit channel taking the four-dimensional column vector $(1, r_x, r_y, r_z)^T$ to the four-dimensional column vector $(1, s_x, s_y, s_z)^T$ is given by a $4 \times 4$ real matrix, say $M$. In the optics literature, $M$ is generally called a Mueller matrix. Here $(r_x, r_y, r_z) ~[(s_x, s_y, s_z)] $ is the Bloch vector of the input [output] qubit state ${\rho}_{in} = (1/2)(I + r_x{\sigma}_x + r_y{\sigma}_y + r_z{\sigma}_z)~ [{\rho}_{out} = (1/2)(I +
s_x{\sigma}_x + s_y{\sigma}_y + s_z{\sigma}_z)].$
The qubit channel $\mathcal{E}$ (a $4 \times 4$ matrix with complex entries, in general), acting on the column vector $((1 + r_z)/2, (r_x - ir_y)/2, (r_x + ir_y)/2, (1 - r_z)/2)^T$ of the input state ${\rho}_{in}$, produces the corresponding column vector $((1 + s_z)/2, (s_x - is_y)/2, (s_x + is_y)/2, (1 - s_z)/2)^T$ of the output state ${\rho}_{out}$. This transformation matrix is related to the Mueller matrix $M$ by conjugation with a fixed matrix, i.e., every entry of $\mathcal{E}$ is a linear combination of the entries of $M$ and vice-versa, the coefficients of these linear combinations being independent of the parameters of the input and output qubit states. The trace-preservation condition on the channel $\mathcal{E}$ demands that the first row of the Mueller matrix $M$ be $(1, 0, 0, 0)$, i.e., $M = \begin{pmatrix} 1 & \mathbf{0} \\ \mathbf{t} & \mathbf{\Lambda} \end{pmatrix}$, with $\mathbf{\Lambda}$ a $3 \times 3$ real matrix, and ${\bf 0} = (0, 0, 0)$ and ${\bf t} = (t_1, t_2, t_3)^T$ real vectors. The map $\mathcal{E}$ is unital if and only if $\mathbf{t}=0$. It can easily be verified that the Mueller matrix corresponding to the conjugate channel ${\cal E}^{\dagger}$ of the qubit channel ${\cal E}$ is given by $M_{{\cal E}^{\dagger}} = M_{{\cal E}}^T$, the transpose of the Mueller matrix for ${\cal E}$.
When working out the canonical form of a qubit channel, it is more convenient to use the Mueller matrix $M$ than the channel matrix $\mathcal{E}$. Thus, for example, for any two $2 \times 2$ special unitary matrices $U$ and $V$, the qubit state $ V \mathcal{E} (U{\rho}_{in}U^{\dagger})V^{\dagger} $ corresponds to the action of the Mueller matrix $(1 \oplus R_V)M (1 \oplus R_U)$, where $R_U$ ($R_V$) is the $3 \times 3$ real rotation matrix corresponding to $U$ ($V$). Using this fact and the idea of singular value decomposition, one can bring the lower-right $3 \times 3$ block of $M$ to real diagonal form $diag({\lambda}_1, {\lambda}_2, {\lambda}_3)$. Thus a canonical $M$ matrix is represented by six real parameters (subject to the CPTP condition): the three parameters $t_1, t_2, t_3$ of the first column $(1, t_1, t_2, t_3)^T$ of $M$, and the three diagonal parameters ${\lambda}_1, {\lambda}_2$, and ${\lambda}_3$. For unital channels $ t_1 = t_2 = t_3 = 0$ \cite{ruskai2002analysis}.
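A minimal sketch of this construction (the helper `mueller` is ours): with $\sigma_0=\mathbb{1}$, the entries $M_{ij}=\frac{1}{2}\operatorname{Tr}[\sigma_i\,\mathcal{E}(\sigma_j)]$ reproduce the Mueller matrix, illustrated here on the amplitude damping channel, for which the trace-preservation row, the non-unital shift $\vec t=(0,0,\gamma)$, and the relation $M_{\mathcal{E}^\dagger}=M_{\mathcal{E}}^T$ can all be checked:

```python
import numpy as np

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def mueller(kraus):
    """Mueller matrix M_ij = (1/2) Tr[ sigma_i * E(sigma_j) ], sigma_0 = I,
    for the channel E(X) = sum_K K X K^dag given by its Kraus operators."""
    M = np.zeros((4, 4))
    for j in range(4):
        out = sum(K @ paulis[j] @ K.conj().T for K in kraus)
        for i in range(4):
            M[i, j] = 0.5 * np.trace(paulis[i] @ out).real
    return M
```

For amplitude damping with parameter $\gamma$ one finds $(\lambda_1,\lambda_2,\lambda_3)=(\sqrt{1-\gamma},\sqrt{1-\gamma},1-\gamma)$ and $\vec t=(0,0,\gamma)$, while the Mueller matrix of the conjugate channel (Kraus operators replaced by their adjoints) is the transpose.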
We now prove two theorems based on the observations at the beginning of this section.
\begin{thm}\label{thm:unital}
The conjugate of a unital (non-unital) qubit channel generates an \textit{unbiased} (\textit{biased}) POVM.
\end{thm}
\textit{Proof}: The case of unital channels has already been discussed earlier in this section. Here we provide the explicit form of the POVM after the effect of the dual channel.
Consider a unital channel $\mathcal{E}[\rho] = \sum_j p_j U_j \rho U_j^\dagger$, with $0 \le p_j \le 1$, $\sum_j p_j = 1$, and $U_j U_j^\dagger = U_j^\dagger U_j = \mathbb{1}$. The action of a channel on a state $\rho$ is equivalent to the action of its \textit{conjugate channel} on the measurement operator: $\operatorname{Tr}\{ \mathcal{E}[\rho] \Pi^{\pm}\} = \operatorname{Tr}\{ \sum_j p_j U_j \rho U_j^\dagger \Pi^{\pm} \} = \operatorname{Tr} \{\rho \sum_j p_j U_j^\dagger \Pi^{\pm} U_j \} = \operatorname{Tr}\{\rho \mathcal{E}^\dagger[\Pi^{\pm}] \}$. Therefore, a projective measurement $\mathcal{M} = \{\Pi^{\pm} = \frac{1}{2}(\mathbb{1} \pm \hat{m}\cdot\vec{\sigma})\}$, under the action of the conjugate of a \textit{unital} channel $\mathcal{E}$, evolves as $\mathcal{M} \rightarrow \mathcal{E}^\dagger [\mathcal{M}] = \{ \mathcal{E}^\dagger [\Pi^+], \mathcal{E}^\dagger[\Pi^-]\}$, such that
\begin{align}
\mathcal{E}^\dagger[\Pi^{\pm}] &= \sum_j p_j U_j^\dagger \Pi^{\pm} U_j = \sum_j p_j U_j^\dagger (\frac{\mathbb{1} \pm \hat{m} \cdot \vec{\sigma}}{2}) U_j \nonumber \\
&= \frac{1}{2} \Big[ \mathbb{1} \pm \Big[\big(\sum_j p_j R_{U_j^\dagger}\big) \hat{m}\Big] \cdot \vec{\sigma} \Big].
\end{align}
The effect on the Bloch vector is a series of rotations $R_{U^\dagger_j}$, induced by the conjugations $U^\dagger_j (\cdot) U_j$ and weighted by the probabilities $p_j$. It follows that $\mathcal{E}^\dagger[\Pi^+] + \mathcal{E}^\dagger[\Pi^-] = \mathbb{1}$, so the pair forms a POVM.
Let us now consider the case when the channel $\mathcal{E}$ is \textit{non-unital}, i.e., $\mathcal{E}[\mathbb{1}] = \sum_j p_j A_j \mathbb{1} A_j^\dagger \neq \mathbb{1}$, $0 \le p_j \le 1$, $\sum_j p_j =1$, and $\sum_j p_j A_j^\dagger A_j = \mathbb{1}$. The last condition ensures that the conjugate channel is unital $\mathcal{E}^\dagger[\mathbb{1}] = \sum_j p_j A_j^\dagger \mathbb{1} A_j = \sum_j p_j A_j^\dagger A_j = \mathbb{1}$. The action of a quantum channel and its conjugate on an input $R = (r_0~ r_1~ r_2~ r_3)^T$, $T$ being the transposition operation, can be described by \textit{Mueller} matrices, as discussed above, with the following representation
\begin{equation}
M_{\mathcal{E}} = \begin{pmatrix}
1 & 0 & 0 & 0\\
t_1 & \lambda_1 & 0 & 0 \\
t_2 & 0 & \lambda_2 & 0 \\
t_3 & 0 & 0 & \lambda_3
\end{pmatrix}, ~{\rm and }~ M_{\mathcal{E}^\dagger} = \begin{pmatrix}
1 & t_1 & t_2 & t_3\\
0 & \lambda_1 & 0 & 0 \\
0 & 0 & \lambda_2 & 0 \\
0 & 0 & 0 & \lambda_3
\end{pmatrix},
\end{equation}
respectively. It immediately follows that under
\begin{align}
M_{\mathcal{E}} &: \mathbb{1}\rightarrow \mathbb{1} + \vec{t} \cdot \vec{\sigma}; ~~ \sigma_j \rightarrow \lambda_j \sigma_j,\nonumber \\
{\rm and} ~~~~ M_{\mathcal{E}^\dagger} &: \mathbb{1}\rightarrow \mathbb{1}; ~~ \sigma_j \rightarrow t_j \mathbb{1}+ \lambda_j \sigma_j,
\end{align}
with $j=1,2,3$. Hence, for a unital channel ($\vec{t}=0$), the Mueller matrices of the channel and of its adjoint act identically. The action on the corresponding inputs translates to
\begin{align}\label{eq:Mueller}
M_{\mathcal{E}} [\rho]&= M_{\mathcal{E}} [\frac{1}{2} (\mathbb{1} + \vec{m} \cdot\vec{\sigma})] = \frac{1}{2} [\mathbb{1} + (\vec{t} + \vec{m}^{\prime} )\cdot \vec{\sigma}], \nonumber \\
M_{\mathcal{E}^\dagger} [\Pi^{\pm}] &= M_{\mathcal{E}^\dagger} [\frac{1}{2} (\mathbb{1} \pm \hat{m} \cdot \vec{\sigma})] = \frac{1}{2} [ (1 \pm x )\mathbb{1} \pm \vec{m}^\prime \cdot \vec{\sigma}].
\end{align}
Thus the \textit{bias} and \textit{sharpness} parameters are identified as $x = \hat{m}\cdot \vec{t}$ and $\vec{m}^\prime = (m_1 \lambda_1, ~ m_2 \lambda_2, ~ m_3 \lambda_3)^T$, respectively. Therefore the resulting POVM is \textit{unbiased} if $\vec{t}=0$, i.e., if the channel is unital. Further, $\operatorname{Tr} M_{\mathcal{E}^\dagger} [\Pi^{\pm}] = 1 \pm x$ implies that a trace-preserving conjugate channel must generate \textit{unbiased} POVMs. It follows that the conjugate of a unital qubit channel acting on a projective measurement generates an \textit{unbiased} POVM.
$\blacksquare$
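The identifications $x=\hat m\cdot\vec t$ and $\vec m^\prime=(m_1\lambda_1,m_2\lambda_2,m_3\lambda_3)$ can be verified directly in the Heisenberg picture. The sketch below (function names are ours) uses the amplitude damping Kraus operators, for which $\vec t=(0,0,\gamma)$ and $(\lambda_1,\lambda_2,\lambda_3)=(\sqrt{1-\gamma},\sqrt{1-\gamma},1-\gamma)$:

```python
import numpy as np

PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def dual_povm_parameters(kraus, m_hat):
    """Bias x and Bloch vector m' of E^+ = sum_i K_i^dag Pi^+ K_i for the
    projector Pi^+ = (I + m_hat.sigma)/2, i.e. E^+ = ((1+x) I + m'.sigma)/2."""
    Pi = 0.5 * (np.eye(2, dtype=complex)
                + sum(c * s for c, s in zip(m_hat, PAULIS)))
    Ep = sum(K.conj().T @ Pi @ K for K in kraus)
    bias = np.trace(Ep).real - 1.0
    m_prime = np.array([np.trace(Ep @ s).real for s in PAULIS])
    return bias, m_prime
```

For a measurement direction $\hat m=(0.6,0,0.8)$ and damping $\gamma$, the bias comes out as $\hat m\cdot\vec t=0.8\gamma$ and the sharpness vector as componentwise products with the $\lambda_j$.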
\begin{thm}\label{thm:Markov}
Under the action of the dual of any qubit Markovian dynamics, a projective measurement gets mapped into a POVM for which the sharpness parameter decreases monotonically with time.
\end{thm}
\textit{Proof}:
We make use of the fact that under Markovian dynamics ${\cal E}_{\uptau}$, the trace distance (TD) between two arbitrary states $\rho_1$ and $\rho_2$ is a monotonically decreasing function of time. Consider the case when $\rho(0) = \frac{1}{2}(\mathbb{1} + \vec{n} \cdot \vec{\sigma})$ and $\sigma(0) = \frac{1}{2} \mathbb{1}$. The action of the map $\mathcal{E}_{\uptau}$ is described by the corresponding Mueller matrix, as shown in Eq. (\ref{eq:Mueller}). We have
\begin{align}
{\rm TD} &= \frac{1}{2} \big|\big| \mathcal{E}_{\uptau} [\rho(0)] - \mathcal{E}_{\uptau}[\sigma(0)] \big|\big|_1 \nonumber \\
&= \frac{1}{2} \big|\big| \mathcal{E}_{\uptau} [\frac{1}{2}(\mathbb{1} + \vec{n} \cdot \vec{\sigma})] - \mathcal{E}_{\uptau}[\frac{1}{2} \mathbb{1}] \big|\big|_1 \nonumber \\
&= \big|\big| \frac{1}{2} [\mathbb{1} + \vec{t}(\uptau) \cdot \vec{\sigma} + \sum_j n_j \lambda_j(\uptau) \sigma_j] - \frac{1}{2}[\mathbb{1} + \vec{t}(\uptau) \cdot \vec{\sigma}] \big|\big|_1 \nonumber \\
&= \alpha \big| \vec{n}^\prime (\uptau) \big|.
\end{align}
Here, $\alpha = \frac{1}{2}|| \hat{n}^\prime (\uptau) \cdot \vec{\sigma} ||_1 = \frac{1}{2} || \frac{1}{2} (\mathbb{1} + \hat{n}^\prime \cdot \vec{\sigma}) - \frac{1}{2} (\mathbb{1} - \hat{n}^\prime \cdot \vec{\sigma}) ||_1$ is a constant and $\vec{n}^\prime (\uptau) = \big(n_1 \lambda_1(\uptau), n_2 \lambda_2(\uptau), n_3 \lambda_3(\uptau) \big)$. Therefore, for ${\rm TD}$ to be a monotonically decreasing function, $| \vec{n}^\prime (\uptau) |$ must monotonically decrease and saturate to zero as $\uptau \rightarrow \infty$. This, in turn, implies that each $\lambda_j (\uptau)$ is a decreasing function converging to zero in the limit $\uptau \rightarrow \infty$.
From the statements made below Eq. (14), the sharpness parameter is $| \vec{m}^\prime(\uptau) | = \sqrt{\sum_j |m_j \lambda_j (\uptau)|^2}$. Since $\lambda_j(\uptau)$ is monotonically decreasing as shown above, we conclude that, for Markovian dynamics, the \textit{sharpness} parameter $| \vec{m}^\prime(\uptau) |$ must also be a monotonically decreasing function of time and should saturate to zero as $ \uptau \rightarrow \infty$. As a consequence of Theorem 2, non-monotonic behavior in time of the sharpness parameter would be an indicator of the P-indivisible form of non-Markovianity \cite{blpv}.
$\blacksquare$
An example of such a scenario would be furnished by the RTN noise channel discussed above.
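A sketch of this witness for RTN (parameter values are illustrative, and the kernel is repeated here for self-containedness): the sharpness $|\vec m^\prime(\nu)|=\sqrt{\cos^2\Theta+\Lambda^2(\nu)\sin^2\Theta}$ decreases monotonically in the Markovian regime $4a\tau<1$ but revives in the non-Markovian regime $4a\tau>1$:

```python
import cmath
import math

def rtn_kernel(nu, g):
    """RTN memory kernel with g = 4 a tau; mu imaginary for g < 1."""
    mu = cmath.sqrt(complex(g * g - 1.0))
    if abs(mu) < 1e-12:
        return math.exp(-nu) * (1.0 + nu)
    return (cmath.exp(-nu) * (cmath.cos(mu * nu) + cmath.sin(mu * nu) / mu)).real

def sharpness(nu, g, Theta=1.2):
    """Sharpness |m'| = sqrt(cos^2 Theta + Lambda^2 sin^2 Theta) of the
    dual-evolved projector."""
    lam = rtn_kernel(nu, g)
    return math.sqrt(math.cos(Theta)**2 + lam * lam * math.sin(Theta)**2)

def is_monotone_decreasing(g, grid):
    vals = [sharpness(nu, g) for nu in grid]
    return all(a >= b - 1e-12 for a, b in zip(vals, vals[1:]))
```

Evaluating on a grid of $\nu$, the Markovian case $g=0.5$ yields a monotone sharpness while $g=5$ exhibits revivals, flagging P-indivisibility.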
\section{Effect on energy cost of a measurement}\label{sec:ECost}
In this section, we discuss the energy requirement for performing a general quantum measurement \cite{abdelkhalek2016fundamental}. The physical implementation of a measuring device comprises two steps: a \textit{measurement step}, which consists of storing a particular measurement outcome in a register $M$, and a \textit{resetting step}, which resets the measuring device to its initial state for repeated implementation \cite{abdelkhalek2016fundamental,deleter}. The total energy cost of the complete measurement process amounts to the sum of the energy costs of these two steps.
\begin{figure}
\caption{(Color online) Measurement scheme: A state $\rho_S$ is coupled to a memory register via unitary $U_{SM}$, followed by \textit{biased}-\textit{unsharp} POVM elements $E_k$ on the memory register. The memory is reset to its initial state $\rho_M$ using a thermal resource $\rho_B$, before the device is used again.}
\label{fig:diag}
\end{figure}
\begin{figure}
\caption{(Color online) Depicting the contributions to the energy cost $E_{cost}$ as defined in Eq. (\ref{eq:Ecost}) with respect to \textit{bias} ($x$) and \textit{sharpness} ($\lambda$) (dimensionless) parameters, such that $x+\lambda \le 1$, with the red (dashed) curve pertaining to equality.}
\label{fig:EntopyChange}
\end{figure}
The register $M$ in the measurement step stores a measurement outcome $k$ in a state $\rho_{M, k}^\prime \in \mathcal{S} (H_M)$. In order to read out the measurement outcome from the register, one would apply the projectors $\{\Pi_k \}_{k}$ satisfying $\sum_k \Pi_k = \mathbb{1}$, on the respective subspaces $H_k$. Accordingly, the implementation of a quantum measurement is described by a tuple $(\rho_M, U_{SM}, \{\Pi_k\})$ where $\rho_M$ and $U_{SM}$ denote the initial state of the register and the unitary operator describing the interaction between system $S$ and register $M$. Therefore, one can think of the measurement process as a channel which maps an input state $\rho_S$ to an output state
\begin{equation}
\rho_{SM, k}^\prime = (\mathbb{1} \otimes \Pi_k) U_{SM} (\rho_S \otimes \rho_M) U_{SM}^\dagger (\mathbb{1} \otimes \Pi_k) / p_k,
\end{equation}
with probability $p_k = \operatorname{Tr} \big[ (\mathbb{1} \otimes \Pi_k) U_{SM} (\rho_S \otimes \rho_M) U_{SM}^\dagger \big]$, such that $\rho_{SM}^\prime = \sum_{k} p_k~ \rho_{SM, k}^\prime $, and the corresponding reduced states of the system and the memory register are respectively given by $\rho_S^\prime = \operatorname{Tr}_M[\rho_{SM}^\prime]$ and $\rho_M^\prime = \operatorname{Tr}_S[\rho_{SM}^\prime]$.
The total energy cost for measurement and resetting step turns out to be \cite{abdelkhalek2016fundamental}
\begin{equation}\label{eq:Ecost}
E_{cost} = \Delta E_{S} +\frac{1}{\beta} \Delta S_M,
\end{equation}
with $\Delta E_S = \operatorname{Tr}[H_S (\rho_S^\prime - \rho_S)]$ and $\Delta S_M = S(\rho_M^\prime) - S(\rho_M)$. Thus the total energy cost is essentially determined by the entropy change in the memory. In what follows, we bring out the effect of \textit{biased}-\textit{unsharp} measurements on this quantity. Such measurements, often called \textit{inefficient} measurements, are characterized by Kraus operators $\{E_i\}$, such that the post-measurement state is given by $\rho^\prime = \sum_i E_i \rho E_i^\dagger$, with $1\le i \le r$, where $r$ is the Kraus \textit{rank}, and $r=1$ corresponds to \textit{efficient} measurements. We rewrite Eq. (\ref{eq:Epm}) as
\begin{equation}\label{eq:EpmNew}
E_{\pm} = \frac{1 - \lambda \pm x}{2} \mathbb{1} + \lambda \Pi^\pm.
\end{equation}
Here, $\lambda = |\vec{m}|$ is the \textit{sharpness} parameter and $\Pi^\pm = (1 \pm \hat{Q})/2$ are the sharp projectors corresponding to the observable $ \hat{Q} = \hat{m}\cdot\vec{\sigma}$, assumed to be of the form given in Eq. (\ref{measurement}).
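As a consistency check (helper names are ours), the pair $\{E_+,E_-\}$ of Eq. (\ref{eq:EpmNew}) always sums to the identity, and its positivity, hence validity as a POVM, requires $\lambda+|x|\le 1$, consistent with the constraint $x+\lambda\le1$ used in Fig. (\ref{fig:EntopyChange}):

```python
import numpy as np

def povm_elements(x, lam, Theta=0.8, Phi=0.3):
    """E_pm = ((1 - lam +/- x)/2) I + lam Pi^pm with sharp projectors Pi^pm."""
    Q = np.array([[np.cos(Theta), np.exp(1j * Phi) * np.sin(Theta)],
                  [np.exp(-1j * Phi) * np.sin(Theta), -np.cos(Theta)]])
    I2 = np.eye(2, dtype=complex)
    Pp, Pm = 0.5 * (I2 + Q), 0.5 * (I2 - Q)
    Ep = 0.5 * (1 - lam + x) * I2 + lam * Pp
    Em = 0.5 * (1 - lam - x) * I2 + lam * Pm
    return Ep, Em

def is_valid_povm(x, lam):
    """Completeness E_+ + E_- = I plus positivity of both elements."""
    Ep, Em = povm_elements(x, lam)
    complete = np.allclose(Ep + Em, np.eye(2))
    positive = min(np.linalg.eigvalsh(Ep).min(),
                   np.linalg.eigvalsh(Em).min()) >= -1e-12
    return complete and positive
```

The eigenvalues of $E_\pm$ are $(1-\lambda\pm x)/2$ and $(1+\lambda\pm x)/2$, so positivity fails as soon as $\lambda+|x|>1$.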
We take a simple model with the system prepared in the state $\rho_S = \frac{1}{2}|0\rangle \langle 0| + \frac{1}{2} | 1 \rangle \langle 1|$, a statistical mixture of the eigenstates of the Hamiltonian $H_S = \omega_S \sigma_z$. Further, the memory is assumed to be in the two-qubit state $\rho_M = |0\rangle \langle 0|_{M_A} \otimes \frac{1}{2} \mathbb{1}_{M_B}$. With the projectors $P_k = |k\rangle \langle k| \otimes \frac{1}{2} \mathbb{1}_{M_B}$, $k=0,1$, and the unitary interaction between the system and memory of the form \cite{abdelkhalek2016fundamental}
\begin{align}
U_{SM} &= \Big(|0\rangle \langle 0|_{S} \otimes \mathbb{1}_{M_A} + |1\rangle \langle 1|_{S} \otimes \sigma^x_{M_A} \Big) \otimes \mathbb{1}_{M_B},
\end{align}
one can show that the measurement device represented by [$U_{SM}, \rho_M, \{P_k\}$] outputs the correct state $\rho_{M,k}^\prime = |k\rangle \langle k|_{M_A} \otimes \mathbb{1}_{M_B} $. However, we are interested in the situation when $\{P_k\}$ are not ideal projective measurements but are \textit{biased} and \textit{unsharp}. This is incorporated into the above scheme by replacing $|k\rangle \langle k |$ with $E_k$ given in Eq. (\ref{eq:EpmNew}), i.e., $\{P_k\} \rightarrow \{\tilde{P}_k\} = \{E_k \otimes \frac{1}{2} \mathbb{1}_{M_B} \}$. We then have
\begin{align}
\rho_{SM}^\prime &= \sum_{k = \pm } (\mathbb{1} \otimes \tilde{P}_k) U_{SM} (\rho_S \otimes \rho_M) U_{SM}^\dagger (\mathbb{1} \otimes \tilde{P}_k).
\end{align}
The normalized reduced states of the system and memory are respectively given by
\begin{align}
\rho_S^\prime &= \frac{1}{2} \begin{pmatrix}
1 + R_c & 0 \\
0 & 1 - R_c
\end{pmatrix}, \label{eq:rhoSprime} \\
\rho_M^\prime &=\begin{pmatrix}
\frac{1}{4} + \frac{1}{2} R_c & 0 & \frac{1}{2} R_s & 0 \\
0 & \frac{1}{4} + \frac{1}{2} R_c & 0 & \frac{1}{2} R_s \\
\frac{1}{2} R_s^* & 0 & \frac{1}{4} - \frac{1}{2} R_c & 0 \\
0 & \frac{1}{2} R_s^* & 0 & \frac{1}{4} - \frac{1}{2} R_c
\end{pmatrix}. \label{eq:rhoMprime}
\end{align}
Here $R_c = \frac{x \lambda}{1 + x^2 + \lambda^2} \cos(\Theta)$ and $R_s = \frac{x \lambda}{1 + x^2 + \lambda^2} e^{i \Phi} \sin(\Theta)$ are introduced for convenience, and $\Theta$ and $\Phi$ are the measurement parameters defined in Eq. (\ref{measurement}). It follows that with either $x=0$ or $\lambda = 0$, both the system and the memory are found in the maximally mixed state. The energy cost is proportional to the entropy change $\Delta S_M = S(\rho_M^\prime) - S(\rho_M)$ in the memory
\begin{align}\label{eq:EntropyChange}
\Delta S_M &= - \sum\limits_{n=1}^4 \eta_{n} \log_2(\eta_n) - 1,
\end{align}
where $\eta_{n}$ are the eigenvalues of $\rho_M^\prime$, with $\eta_1 = \eta_2 = \frac{1}{4}(1 + \frac{2x \lambda}{1+x^2 + \lambda^2}), \eta_3 = \eta_4 = \frac{1}{4} (1 - \frac{2x \lambda}{1+x^2 + \lambda^2})$, independent of the measurement parameters $\Theta$ and $\Phi$. This quantity is depicted in Fig. (\ref{fig:EntopyChange}) with respect to \textit{bias} and \textit{sharpness} parameters. A decrease in $\Delta S_M$ is observed for non-zero values of these parameters, and is found to be minimum for $x=\lambda=1/2$. Further, the contribution to energy cost due to change in the system state is given by
\begin{align}\label{eq:DeltaES}
\Delta E_{S} &= \operatorname{Tr}\big[ H_S (\rho_{S}^\prime - \rho_S) \big]
= \frac{2 x \lambda \omega_S}{1 + x^2 + \lambda^2} \cos(\Theta).
\end{align}
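The quantities above can be reproduced numerically from Eqs. (\ref{eq:rhoMprime}) and (\ref{eq:DeltaES}) (a sketch; function names are ours). The eigenvalues of $\rho_M^\prime$, and hence $\Delta S_M$, depend only on the combination $x\lambda/(1+x^2+\lambda^2)$ and not on $\Theta,\Phi$:

```python
import numpy as np

def delta_S_M(x, lam, Theta=0.0, Phi=0.0):
    """Delta S_M = S(rho_M') - S(rho_M), with S(rho_M) = 1 bit for the
    initial register state |0><0| tensor (1/2) identity."""
    k = x * lam / (1.0 + x * x + lam * lam)
    Rc = k * np.cos(Theta)
    Rs = k * np.exp(1j * Phi) * np.sin(Theta)
    a, b, c = 0.25 + 0.5 * Rc, 0.25 - 0.5 * Rc, 0.5 * Rs
    rhoM = np.array([[a, 0, c, 0],
                     [0, a, 0, c],
                     [np.conj(c), 0, b, 0],
                     [0, np.conj(c), 0, b]])
    eig = np.linalg.eigvalsh(rhoM)
    S = -sum(e * np.log2(e) for e in eig if e > 1e-15)
    return float(S - 1.0)

def delta_E_S(x, lam, Theta=0.0, omega=1.0):
    """Delta E_S = 2 x lam omega cos(Theta) / (1 + x^2 + lam^2)."""
    return 2.0 * omega * x * lam * np.cos(Theta) / (1.0 + x * x + lam * lam)
```

Scanning over the region $x+\lambda\le1$ confirms that $\Delta S_M$ is minimal, and $\Delta E_S$ maximal, at $x=\lambda=1/2$.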
In this particular example, the measurement on the system is performed in the $\{ \ket{0}, \ket{1}\}$ basis, so we set $\Theta = \Phi = 0$, which amounts to $\mathcal{Q} = \sigma_z$ in Eq. (\ref{measurement}). With this setting, $\Delta E_S$ is depicted in Fig. (\ref{fig:EntopyChange}) for $\omega_S = 1$, and attains its maximum value for $x=\lambda = 1/2$. The total energy cost $E_{cost}$ is positive and is also maximum for the measurement characterized by equal \textit{bias} and \textit{sharpness}. One can map this scenario to the non-Markovian amplitude damping channel, for which the \textit{bias} and \textit{sharpness} are respectively given by $x = |G^2(t) - 1|$ and $\lambda = |G(t)|^2$, for $\Theta = 0$, see Table (\ref{tabBiasUnsharp}), where $G(t)$ is the memory kernel with the following form \cite{bylicka2014non}
\begin{align}
G(\uptau) = e^{-\uptau/2}\Bigg[\cosh(\sqrt{1-2R}~\uptau/2) + \frac{\sinh(\sqrt{1-2R}~\uptau/2)}{\sqrt{1-2R}} \Bigg].
\end{align}
Here, $R$ is proportional to the coupling strength and $\uptau$ is a dimensionless time. The regimes $2R \le 1$ and $2R>1$ correspond to Markovian and non-Markovian dynamics, respectively. The time behavior of the memory kernel is depicted in Fig. (\ref{fig:EcostAD}), and is found to acquire negative values under non-Markovian dynamics. As time increases, $G(\uptau) \rightarrow 0$, and an arbitrary state subjected to the AD channel settles to the ground state $\ket{0}$. Correspondingly, the \textit{bias} ($x$) and \textit{sharpness} ($\lambda$) parameters tend to $1$ and $0$, respectively, and the POVM elements in Eq. (\ref{eq:EpmNew}) become $\{E_{+} = \mathbb{1}, E_{-} = 0 \}$. Therefore, the fact that the system is eventually found in the ground state is equivalent to the statement that the POVM reduces to the identity operation. Notice that $G(\uptau)$ damps more quickly as the coupling strength is increased. Therefore, the sharpness of our POVM decreases rapidly with increase in the degree of non-Markovianity of the noisy channel. This fact is reflected in the energy cost of performing such measurements, as depicted in Fig. (\ref{fig:EcostAD}).
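The regimes of $G(\uptau)$ can be probed numerically (a sketch; the function name is ours); note also that for $\Theta=0$ the stated bias and sharpness satisfy $x+\lambda=|G^2-1|+|G|^2=1$ whenever $|G|\le1$:

```python
import cmath
import math

def ad_kernel(tau, R):
    """Memory kernel G(tau) of the amplitude damping channel with memory;
    the argument sqrt(1 - 2R) becomes imaginary for 2R > 1 (non-Markovian)."""
    s = cmath.sqrt(complex(1.0 - 2.0 * R))
    if abs(s) < 1e-9:
        # Degenerate case 2R = 1: G = e^{-tau/2} (1 + tau/2)
        return math.exp(-tau / 2) * (1.0 + tau / 2)
    return (cmath.exp(-tau / 2)
            * (cmath.cosh(s * tau / 2) + cmath.sinh(s * tau / 2) / s)).real
```

For $2R\le1$ the kernel decays monotonically from unity and stays positive; for $2R>1$ it oscillates through negative values, the signature of non-Markovian revivals.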
\begin{figure}
\caption{The memory kernel $G(\uptau)$ (top) and energy cost $E_{cost}$ (bottom) as functions of the dimensionless time parameter $\uptau$. }
\label{fig:EcostAD}
\end{figure}
\section{Conclusion}\label{Conclusion}
Generalized dichotomic measurements characterized by \textit{bias} and \textit{sharpness} provide a way to take into account the different causes which make a measurement non-ideal. The \textit{bias} quantifies the tendency of a measurement to favor one state over the other, whereas the sharpness is proportional to the precision of the measurement. In this work, we have shown how the \textit{bias} and \textit{sharpness} change under the action of a dynamical process (\textit{e.g.,} quantum channels) from the perspective of the Heisenberg picture. Specifically, we considered various quantum channels, both Markovian and non-Markovian. We have shown that unital channels induce \textit{unbiased} measurements on a qubit state. Also, the conjugate of a unital channel acting on a projective measurement generates an \textit{unbiased} POVM. Further, Markovian channels are shown to lead to measurements for which the sharpness is a monotonically decreasing function of time. Hence, for unital channels, this provides a witness for the P-indivisible form of non-Markovian dynamics. \\
The measurement process is central to carrying out operations with quantum devices in a controlled manner. With the increasing complexity of quantum devices, the energy supply for carrying out the elementary quantum operations must be taken into account. We investigated the effect of the \textit{bias} and \textit{sharpness} parameters on the energy cost of the measurement. The energy cost is proportional to the entropy change of the memory register, which is found to decrease in the presence of \textit{biased}-\textit{unsharp} measurements; however, the total energy cost is found to increase under such measurements.
The present work may be extended to higher dimensional systems and by considering other definitions of non-Markovianity, e.g., via CP-divisibility of channels.
\end{document}
\begin{document}
\title[Expansion of a simplicial complex]{Expansion of a simplicial complex} \author[S. Moradi and F. Khosh-Ahang]{Somayeh Moradi and Fahimeh Khosh-Ahang}
\address{Somayeh Moradi, Department of Mathematics,
Ilam University, P.O.Box 69315-516, Ilam, Iran and School of Mathematics, Institute
for Research in Fundamental Sciences (IPM), P.O.Box: 19395-5746, Tehran, Iran.} \email{[email protected]} \address{Fahimeh Khosh-Ahang, Department of Mathematics,
Ilam University, P.O.Box 69315-516, Ilam, Iran.} \email{fahime$_{-}[email protected]}
\keywords{Cohen-Macaulay, edge ideal, expansion, projective dimension, regularity, shellable, vertex decomposable.\\
} \subjclass[2010]{Primary 13D02, 13P10; Secondary 16E05}
\begin{abstract} \noindent For a simplicial complex $\Delta$, we introduce a simplicial complex attached to $\Delta$, called the expansion of $\Delta$, which is a natural generalization of the notion of expansion in graph theory. We are interested in knowing how the properties of a simplicial complex and its Stanley-Reisner ring relate to those of its expansions. It is shown that taking expansion preserves vertex decomposable and shellable properties and in some cases Cohen-Macaulayness. Also it is proved that some homological invariants of Stanley-Reisner ring of a simplicial complex relate to those invariants in the Stanley-Reisner ring of its expansions. \end{abstract}
\maketitle
\section*{Introduction}
Simplicial complexes are widely used structures which have many applications in algebraic topology and commutative algebra. In particular, in order to characterize monomial quotient rings with a desired property, simplicial complexes are a very strong tool, in view of the Stanley-Reisner correspondence between simplicial complexes and monomial ideals.
Characterizing simplicial complexes which have properties like vertex decomposability, shellability and Cohen-Macaulayness is a main problem in combinatorial commutative algebra. It is rather hopeless to give a full classification of simplicial complexes with each of these properties. In this regard, finding classes of simplicial complexes, especially independence complexes of graphs, with a desired property has been considered by many researchers (cf. \cite{F,FV,HMV,VVi,W,W1}). Constructing new simplicial complexes from existing ones satisfying a desired property is another way to learn more about such characterizations. In the works \cite{CN,DE,FH,Moha,V}, the idea of making modifications to a graph, like adding whiskers and ears, in order to obtain sequentially Cohen-Macaulay, Cohen-Macaulay and vertex decomposable graphs is investigated. In \cite{BV}, the authors developed a construction similar to whiskers to build a vertex decomposable simplicial complex $\Delta_{\chi}$ from a coloring $\chi$ of the vertices of a simplicial complex $\Delta$, and in \cite{BFHV}, for colorings of subsets of the vertices, necessary and sufficient conditions are given for this construction to produce vertex decomposable simplicial complexes.
Motivated by the above works and the concept of expansion of a graph in graph theory, in this paper we introduce the concept of expansion of simplicial complexes, which is a natural generalization of expansion of graphs. Also, we study some properties of this expansion to see how they are related to the corresponding properties of the initial simplicial complex. This tool allows us to construct new vertex decomposable and shellable simplicial complexes from vertex decomposable and shellable ones. Moreover, some families of Cohen-Macaulay simplicial complexes are introduced. We are also interested in knowing how the homological invariants of the Stanley-Reisner ring of a simplicial complex and its expansions are related.
The paper is organized as follows. In the first section, we review some preliminaries from the literature. In Section 2, first in Theorem \ref{evd} we show that for a simplicial complex $\Delta$, vertex decomposability of $\Delta$ is equivalent to vertex decomposability of an expansion of $\Delta$. Also it is proved that expansions of a shellable simplicial complex are again shellable (see Theorem \ref{vI}). Moreover, it is shown that under some conditions, expansions of a simplicial complex inherit Cohen-Macaulayness (see Corollaries \ref{cor2}, \ref{cor3}, \ref{cor1} and \ref{CM}). Finally, in Section 3, for a shellable simplicial complex, the projective dimension and the regularity of its Stanley-Reisner ring are compared with the corresponding ones in an expansion of $\Delta$ (see Propositions \ref{pd} and \ref{shreg}).
\section{Preliminaries}
Throughout this paper, we assume that $\Delta$ is a simplicial complex on the vertex set $V(\Delta)=\{x_1, \dots, x_n\}$. The set of facets (maximal faces) of $\Delta$ is denoted by $\mathcal{F}(\Delta)$.
In this section, we recall some preliminaries which are needed in the sequel. We begin with definition of a vertex decomposable simplicial complex. To this aim, we need to recall definitions of the link and the deletion of a face in $\Delta$. For a simplicial complex $\Delta$ and $F\in \Delta$, the \textbf{link} of $F$ in $\Delta$ is defined as $$\mathrm{lk}_{\Delta}(F)=\{G\in \Delta: G\cap F=\emptyset, G\cup F\in \Delta\},$$ and the \textbf{deletion} of $F$ is the simplicial complex $$\mathrm{del}_{\Delta}(F)=\{G\in \Delta: G\cap F=\emptyset\}.$$
\begin{defn}\label{1.1} {\rm A simplicial complex $\Delta$ is called \textbf{vertex decomposable} if $\Delta$ is a simplex, or $\Delta$ contains a vertex $x$ such that \begin{itemize} \item[(i)] both $\mathrm{del}_{\Delta}(x)$ and $\mathrm{lk}_{\Delta}(x)$ are vertex decomposable, and \item[(ii)] every facet of $\mathrm{del}_{\Delta}(x)$ is a facet of $\Delta$. \end{itemize} A vertex $x$ which satisfies condition (ii) is called a \textbf{shedding vertex} of $\Delta$.} \end{defn}
\begin{rem}\label{remark1} {\rm It is easily seen that $x$ is a shedding vertex of $\Delta$ if and only if no facet of $\mathrm{lk}_{\Delta}(x)$ is a facet of $\mathrm{del}_{\Delta}(x)$.} \end{rem}
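For a single vertex $v$, the link, the deletion, and the shedding criterion of Remark \ref{remark1} can be checked mechanically. The following sketch (helper names are ours, and the link is specialized from a face $F$ to a vertex) illustrates this on $\Delta=\langle\{1,2,3\},\{3,4\}\rangle$, where $4$ is a shedding vertex but $3$ is not:

```python
from itertools import combinations

def faces(facets):
    """All faces of the simplicial complex generated by the given facets."""
    fs = set()
    for F in facets:
        for k in range(len(F) + 1):
            fs.update(frozenset(c) for c in combinations(sorted(F), k))
    return fs

def link(facets, v):
    """lk_Delta(v) = { G in Delta : v not in G, G union {v} in Delta }."""
    fc = faces(facets)
    return {G for G in fc if v not in G and G | {v} in fc}

def deletion(facets, v):
    """del_Delta(v) = faces of Delta avoiding v."""
    return {G for G in faces(facets) if v not in G}

def facets_of(face_set):
    """Maximal faces of a set of faces, ordered by inclusion."""
    return {F for F in face_set if not any(F < G for G in face_set)}

def is_shedding_vertex(facets, v):
    """Remark criterion: v is shedding iff no facet of lk(v) is a facet of del(v)."""
    return not (facets_of(link(facets, v)) & facets_of(deletion(facets, v)))
```

Here $\mathrm{lk}_\Delta(4)=\{\emptyset,\{3\}\}$ has the single facet $\{3\}$, which is not a facet of $\mathrm{del}_\Delta(4)$, while the facets $\{1,2\}$ and $\{4\}$ of $\mathrm{lk}_\Delta(3)$ are also facets of $\mathrm{del}_\Delta(3)$.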
\begin{defn} {\rm A simplicial complex $\Delta$ is called \textbf{shellable} if there exists an ordering $F_1<\cdots<F_m$ on the facets of $\Delta$ such that for any $i<j$, there exists a vertex $v\in F_j\setminus F_i$ and $\ell<j$ with $F_j\setminus F_\ell=\{v\}$. We call $F_1,\ldots,F_m$ a \textbf{shelling} for $\Delta$.} \end{defn} The above definition is referred to as non-pure shellable and is due to Bj\"{o}rner and Wachs \cite{BW}. In this paper we will drop the adjective ``non-pure".
\begin{defn} {\rm A graded $R$-module $M$ is called \textbf{sequentially Cohen--Macaulay} (over a field $K$) if there exists a finite filtration of graded $R$-modules $$0=M_0\subset M_1\subset \cdots \subset M_r=M$$ such that each $M_i/M_{i-1}$ is Cohen--Macaulay and $$\mbox{dim}\,(M_1/M_0)<\mbox{dim}\,(M_2/M_1)<\cdots<\mbox{dim}\,(M_r/M_{r-1}).$$} \end{defn}
For a $\mathbb{Z}$-graded $R$-module $M$, the \textbf{Castelnuovo-Mumford regularity} (or briefly regularity) of $M$ is defined as $$\mathrm{reg}(M) = \max\{j-i: \ \beta_{i,j}(M)\neq 0\},$$ and the \textbf{projective dimension} of $M$ is defined as $$\mathrm{pd}(M) = \max\{i:\ \beta_{i,j}(M)\neq 0 \ \text{for some}\ j\},$$ where $\beta_{i,j}(M)$ is the $(i,j)$th graded Betti number of $M$.
Let $V = \{x_1,\ldots, x_n\}$ be a finite set, and let $\mathcal{E} = \{E_1,\ldots,E_s\}$ be a family of nonempty subsets of $V$. The pair $\mathcal{H} = (V, \mathcal{E})$ is called a \textbf{simple hypergraph} if for each $i$, $|E_i| \geq 2$ and whenever $E_i,E_j\in \mathcal{E}$ and $E_i \subseteq E_j$, then $i =j$. The elements of $V$ are called the vertices and the elements of $\mathcal{E}$ are called the edges of $\mathcal{H}$. For a hypergraph $\mathcal{H}$, the \textbf{independence complex} of $\mathcal{H}$ is defined as $$\Delta_{\mathcal{H}}=\{F\subseteq V(\mathcal{H}):\ E\nsubseteq F, \text{ for each } E\in \mathcal{E}(\mathcal{H})\}.$$
A simple graph $G=(V(G), E(G))$ is a simple hypergraph with the vertices $V(G)$ and the edges $E(G)$, where each of its edges has cardinality exactly two. For a simple graph $G$, the \textbf{edge ideal} of $G$ is defined as the ideal $I(G)=(x_ix_j:\ \{x_i,x_j\}\in E(G))$. It is easy to see that $I(G)$ can be viewed as the Stanley-Reisner ideal of the simplicial complex $\Delta_{G}$ i.e., $I(G)=I_{\Delta_G}$. Also, the \textbf{big height} of $I(G)$, denoted by $\mathrm{bight}(I(G))$, is defined as the maximum height among the minimal prime divisors of $I(G)$.
A graph $G$ is called vertex decomposable, shellable, sequentially Cohen-Macaulay or Cohen-Macaulay if the independence complex $\Delta_G$ is vertex decomposable, shellable, sequentially Cohen-Macaulay or Cohen-Macaulay.
A graph $G$ is called \textbf{chordal}, if it contains no induced cycle of length $4$ or greater.
\begin{defn}\label{1.2} {\rm A monomial ideal $I$ in the ring $R=K[x_1,\ldots,x_n]$ has \textbf{linear quotients} if there exists an ordering $f_1, \dots, f_m$ on the minimal generators of $I$ such that the colon ideal $(f_1,\ldots,f_{i-1}):_R(f_i)$ is generated by a subset of $\{x_1,\ldots,x_n\}$ for all $2\leq i\leq m$. We show this ordering by $f_1<\dots <f_m$ and we call it \textbf{an order of linear quotients} on $\mathcal{G}(I)$. Also for any $1\leq i\leq m$, $\mbox{set}\,_I(f_i)$ is defined as $$\mbox{set}\,_I(f_i)=\{x_k:\ x_k\in (f_1,\ldots, f_{i-1}) :_R (f_i)\}.$$ We denote $\mbox{set}\,_I(f_i)$ by $\mbox{set}\, (f_i)$ if there is no ambiguity about the ideal $I$. } \end{defn}
A monomial ideal $I$ generated by monomials of degree $d$ has a \textbf{linear resolution} if $\beta _{i,j}(I)=0$ for all $j\neq i+d$. Linear quotients are a strong tool for identifying classes of ideals with a linear resolution; the main ingredient in this direction is the following lemma.
\begin{lem}(See \cite[Lemma 5.2]{F}.)\label{Faridi} Let $I=(f_1, \dots, f_m)$ be a monomial ideal with linear quotients such that all $f_i$s are of the same degree. Then $I$ has a linear resolution. \end{lem}
For a squarefree monomial ideal $I=( x_{11}\cdots x_{1n_1},\ldots,x_{t1}\cdots x_{tn_t})$, the \textbf{Alexander dual ideal} of $I$, denoted by $I^{\vee}$, is defined as $$I^{\vee}:=(x_{11},\ldots, x_{1n_1})\cap \cdots \cap (x_{t1},\ldots, x_{tn_t}).$$ For a simplicial complex $\Delta$ with the vertex set $X=\{x_1, \dots, x_n\}$, the \textbf{Alexander dual simplicial complex} associated to $\Delta$ is defined as $$\Delta^{\vee}=\{X\setminus F:\ F\notin \Delta\}.$$ For a subset $C\subseteq X$, by $x^C$ we mean the monomial $\prod_{x\in C} x$ in the ring $K[x_1, \dots, x_n]$. One can see that $(I_{\Delta})^{\vee}=(x^{F^c} \ : \ F\in \mathcal{F}(\Delta)),$ where $I_{\Delta}$ is the Stanley-Reisner ideal associated to $\Delta$ and $F^c=X\setminus F$. Moreover, one can see that $(I_{\Delta})^{\vee}=I_{\Delta^{\vee}}$.
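The description $(I_{\Delta})^{\vee}=(x^{F^c} : F\in \mathcal{F}(\Delta))$ can be made concrete with a small sketch (ours, not from the paper; the function name and the integer encoding of the variables $x_1,\dots,x_n$ are our own choices): each generator of the Alexander dual is simply the complement of a facet.

```python
def alexander_dual_gens(vertices, facets):
    """Squarefree generators x^{F^c} of (I_Delta)^vee: one generator,
    supported on the complement of F, for each facet F of Delta."""
    V = set(vertices)
    return [sorted(V - set(F)) for F in facets]
```

For the complex $\Delta=\langle\{x_1,x_2,x_3\},\{x_1,x_2,x_4\},\{x_4,x_5\}\rangle$ used in the examples below, this yields the generators $x_4x_5$, $x_3x_5$ and $x_1x_2x_3$.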
The following theorem, which was proved in \cite{T}, relates the projective dimension and regularity of a squarefree monomial ideal to its Alexander dual. It is one of our tools in the study of the projective dimension and regularity of the ring $R/I_{\Delta}$.
\begin{thm}(See \cite[Theorem 2.1]{T}.) \label{1.3} Let $I$ be a squarefree monomial ideal. Then $\mathrm{pd}(I^{\vee})=\mathrm{reg}(R/I)$. \end{thm}
\section{Expansions of a simplicial complex and their algebraic properties} In this section, expansions of a simplicial complex and their Stanley-Reisner rings are studied. The main goal is to explore how the combinatorial and algebraic properties of a simplicial complex $\Delta$ and its Stanley-Reisner ring carry over to the expansions. \begin{defn}\label{2.1} {\rm Let $\Delta=\langle F_1,\ldots,F_m\rangle$ be a simplicial complex with the vertex set $V(\Delta)=\{x_1,\ldots,x_n\}$ and $s_1,\ldots,s_n\in \mathbb{N}$ be arbitrary positive integers. For any $F_i=\{x_{i_1},\ldots,x_{i_{k_i}}\}\in \mathcal{F}(\Delta)$, where $1\leq i_1<\cdots<i_{k_i}\leq n$ and any $1\leq r_1\leq s_{i_1},\ldots, 1\leq r_{k_i}\leq s_{i_{k_i}}$, set $$F_i^{r_1,\ldots, r_{k_i}}=\{x_{i_1r_1},\ldots,x_{i_{k_i}r_{k_i}}\}.$$
We define the $(s_1,\ldots,s_n)$-expansion of $\Delta$ to be the simplicial complex with the vertex set $\{x_{11},\ldots,x_{1s_1},x_{21},\ldots,x_{2s_2},\ldots,x_{n1},\ldots,x_{ns_n}\}$ and the facets $$\{\{x_{i_1r_1},\ldots,x_{i_{k_i}r_{k_i}}\} \ :\ \{x_{i_1},\ldots,x_{i_{k_i}}\}\in \mathcal{F}(\Delta), \ (r_1,\ldots,r_{k_i})\in [s_{i_1}]\times \cdots \times [s_{i_{k_i}}]\}.$$ We denote this simplicial complex by $\Delta^{(s_1,\ldots,s_n)}$}. \end{defn}
\begin{exam} {\rm Consider the simplicial complex $\Delta=\langle\{x_1,x_2,x_3\},\{x_1,x_2,x_4\},\{x_4,x_5\}\rangle$ depicted in Figure $1$. Then $$\Delta^{(1,2,1,1,2)}=\langle\{x_{11},x_{21},x_{31}\},\{x_{11},x_{22},x_{31}\},\{x_{11},x_{21},x_{41}\},\{x_{11},x_{22}, x_{41}\},\{x_{41},x_{51}\},\{x_{41},x_{52}\}\rangle.$$ \begin{figure}
\caption{The simplicial complex $\Delta$ and the $(1,2,1,1,2)$-expansion of $\Delta$}
\label{fig:graph}
\end{figure} } \end{exam}
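Definition \ref{2.1} is easy to implement and check on the example above. The following sketch (ours, not from the paper) encodes copy $r$ of vertex $x_i$ as the pair $(i,r)$ and enumerates the facets of the expansion; the function name is a hypothetical choice of ours:

```python
from itertools import product

def expansion_facets(facets, s):
    """Facets of the (s_1,...,s_n)-expansion of Definition 2.1.
    Vertices are 1-based integer indices; s maps a vertex index to its
    multiplicity; copy r of vertex i is encoded as the pair (i, r)."""
    out = []
    for F in facets:
        F = sorted(F)
        # one expanded facet for each choice of copy indices (r_1,...,r_k)
        for rs in product(*[range(1, s[i] + 1) for i in F]):
            out.append(frozenset(zip(F, rs)))
    return out
```

Run on $\Delta=\langle\{x_1,x_2,x_3\},\{x_1,x_2,x_4\},\{x_4,x_5\}\rangle$ with $(s_1,\ldots,s_5)=(1,2,1,1,2)$, it returns exactly the six facets of $\Delta^{(1,2,1,1,2)}$ listed in the example.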
The following definition gives an analogous concept of expansion for a hypergraph, which also generalizes \cite[Definition 4.2]{FHV}.
\begin{defn}\label{2.3} {\rm For a hypergraph $\mathcal{H}$ with the vertex set $V(\mathcal{H})=\{x_1,\ldots,x_n\}$ and the edge set $\mathcal{E}(\mathcal{H})$, we define the $(s_1,\ldots,s_n)$-expansion of $\mathcal{H}$ to be a hypergraph with the vertex set $\{x_{11},\ldots,x_{1s_1},x_{21},\ldots,x_{2s_2},\ldots,x_{n1},\ldots,x_{ns_n}\}$ and the edge set \begin{align*} \{\{x_{i_1r_1},\ldots, x_{i_tr_t}\}:\ \{x_{i_1},\ldots, x_{i_t}\}\in \mathcal{E}(\mathcal{H}),\ (r_1,\ldots,r_{t})\in [s_{i_1}]\times \cdots \times [s_{i_{t}}]\}\cup\\
\{\{x_{ij},x_{ik}\}: \ 1\leq i\leq n, \ j\neq k\}. \end{align*}
We denote this hypergraph by $\mathcal{H}^{(s_1,\ldots,s_n)}$. } \end{defn}
\begin{rem} {\rm From Definitions \ref{2.1} and \ref{2.3} one can see that for a hypergraph $\mathcal{H}$ and integers $s_1,\ldots,s_n\in \mathbb{N}$, $\Delta_{\mathcal{H}^{(s_1,\ldots,s_n)}}=\Delta_{\mathcal{H}}^{(s_1,\ldots,s_n)}.$ Thus the expansion of a simplicial complex is the natural generalization of the concept of expansion in graph theory. } \end{rem}
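Definition \ref{2.3} can likewise be sketched in code (ours, not part of the paper; names are hypothetical): besides the copies of the original edges, any two copies of the same vertex are joined by an edge.

```python
from itertools import combinations, product

def expansion_edges(edges, s):
    """Edge set of H^{(s_1,...,s_n)} of Definition 2.3.
    Copy r of vertex i is encoded as the pair (i, r)."""
    out = set()
    # all copies of the original edges
    for E in edges:
        E = sorted(E)
        for rs in product(*[range(1, s[i] + 1) for i in E]):
            out.add(frozenset(zip(E, rs)))
    # any two copies of one vertex are adjacent
    for i, si in s.items():
        for j, k in combinations(range(1, si + 1), 2):
            out.add(frozenset({(i, j), (i, k)}))
    return out
```

For a single edge $\{x_1,x_2\}$ with $s_1=2$, $s_2=1$ this produces the three edges $\{x_{11},x_{21}\}$, $\{x_{12},x_{21}\}$ and $\{x_{11},x_{12}\}$.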
\begin{exam} {\rm
Let $G$ be the following graph. \begin{figure}\label{fig5}
\end{figure}
The graph $G^{(1,1,2,1,2)}$ and the independence complexes $\Delta_G$ and $\Delta_{G^{(1,1,2,1,2)}}$ are shown in Figure $2$. \begin{figure}\label{fig6}
\label{fig:graph2}
\end{figure} } \end{exam}
In the following proposition, it is shown that a graph is chordal if and only if each of its expansions is chordal. \begin{prop}\label{cl} For any $s_1,\ldots,s_n\in \mathbb{N}$, $G$ is a chordal graph if and only if $G^{(s_1,\ldots,s_n)}$ is chordal. \end{prop}
\begin{proof} If $G^{(s_1,\ldots,s_n)}$ is chordal, then clearly $G$ is also chordal, since it can be considered as an induced subgraph of $G^{(s_1,\ldots,s_n)}$. Now, let $G$ be chordal, $V(G)=\{x_1,\ldots,x_n\}$ and consider a cycle $C_m: x_{i_1j_1},\ldots, x_{i_mj_m}$ in $G^{(s_1,\ldots,s_n)}$, where $m\geq 4$ and $1\leq j_k\leq s_{i_k}$ for all $1\leq k\leq m$. We consider two cases.
Case 1. $i_k=i_\ell$ for some distinct integers $k$ and $\ell$ with $1\leq k<\ell\leq m$. Then by the definition of expansion, $x_{i_kj_k}x_{i_\ell j_\ell}\in E(G^{(s_1,\ldots,s_n)})$. Thus if $x_{i_kj_k}x_{i_\ell j_\ell}$ is not an edge of $C_m$, then it is a chord in $C_m$. Now, assume that $x_{i_kj_k}x_{i_\ell j_\ell}$ is an edge of $C_m$. Note that since $x_{i_\ell j_\ell}x_{i_{\ell+1}j_{\ell+1}}\in E(C_m)$, either $i_\ell=i_{\ell+1}$ or $x_{i_\ell}x_{i_{\ell+1}}\in E(G)$ (if $\ell=m$, then set $\ell+1:=1$). Thus $x_{i_kj_k}x_{i_{\ell+1}j_{\ell+1}}\in E(G^{(s_1,\ldots,s_n)})$ is a chord in $C_m$.
Case 2. $i_k\neq i_\ell$ for any distinct integers $1\leq k,\ell\leq m$. By the definition of expansion, one can see that $x_{i_1},\ldots, x_{i_m}$ forms a cycle of length $m$ in $G$. So it has a chord. Let $x_{i_k}x_{i_\ell}\in E(G)$ be a chord in this cycle. Then $x_{i_kj_k}x_{i_\ell j_\ell}\in E(G^{(s_1,\ldots,s_n)})$ is a chord in $C_m$. Thus $G^{(s_1,\ldots,s_n)}$ is also chordal. \end{proof}
The following theorem shows that vertex decomposability of a simplicial complex is equivalent to vertex decomposability of its expansions. \begin{thm}\label{evd} Assume that $s_1, \dots, s_n$ are positive integers. Then $\Delta$ is vertex decomposable if and only if $\Delta^{(s_1,\ldots,s_n)}$ is vertex decomposable. \end{thm} \begin{proof}
Assume that $\Delta$ is a simplicial complex with the vertex set $V(\Delta)=\{x_1,\dots, x_n\}$ and $s_1, \dots, s_n$ are positive integers. To prove the `only if' part, we use strong induction on $|V(\Delta^{(s_1,\ldots,s_n)})|$ (note that $|V(\Delta^{(s_1,\ldots,s_n)})|\geq |V(\Delta)|$). If $|V(\Delta^{(s_1,\ldots,s_n)})|=|V(\Delta)|$, then $\Delta=\Delta^{(s_1, \dots, s_n)}$ and so there is nothing to prove in this case. Assume inductively that for all vertex decomposable simplicial complexes $\Delta'$ and all positive integers $s'_1, \dots, s'_n$ with $|V(\Delta'^{(s'_1,\dots,s'_n)})|< t$, $\Delta'^{(s'_1,\ldots,s'_n)}$ is vertex decomposable. We now prove the result when $t=|V(\Delta^{(s_1,\ldots,s_n)})|>|V(\Delta)|$. Since $|V(\Delta^{(s_1,\ldots,s_n)})|>|V(\Delta)|$, there exists an integer $1\leq i\leq n$ such that $s_i>1$. If $\Delta=\langle F\rangle$ is a simplex, we claim that $x_{i1}$ is a shedding vertex of $\Delta^{(s_1,\ldots,s_n)}$. It can be easily checked that $$\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{i1})=\langle F\setminus \{x_i\}\rangle^{(s_1, \dots, s_{i-1},s_{i+1}, \dots, s_n)}$$ and $$\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{i1})=\Delta^{(s_1, \dots, s_{i-1}, s_i-1, s_{i+1}, \dots, s_n)}.$$ So, the inductive hypothesis ensures that $\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{i1})$ and $\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{i1})$ are vertex decomposable. Also, it can be seen that every facet of $\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{i1})$ is a facet of $\Delta^{(s_1,\ldots,s_n)}$. This shows that $\Delta^{(s_1,\ldots,s_n)}$ is vertex decomposable in this case. Now, if $\Delta$ is not a simplex, it has a shedding vertex, say $x_1$. We claim that $x_{11}$ is a shedding vertex of $\Delta^{(s_1,\ldots,s_n)}$.
To this end, it can be seen that $$\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})={\mathrm{lk}_\Delta (x_1)}^{(s_2,\ldots,s_n)}$$ and $$\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})=\left\lbrace \begin{array}{c l} \Delta^{(s_1-1, s_2,\ldots,s_n)} & \text{if $s_1>1$;}\\
{\mathrm{del}_\Delta (x_1)}^{(s_2,\ldots,s_n)} & \text{if $s_1=1$.} \end{array} \right.$$ Hence, the inductive hypothesis implies that $\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})$ and $\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})$ are vertex decomposable simplicial complexes. Now, suppose that $F^{j_1, \dots, j_k}=\{x_{i_1j_1}, \dots, x_{i_kj_k} \}$ is a facet of $\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})$, where $F=\{x_{i_1}, \dots, x_{i_k}\}$ is a face of $\Delta$. Then since $\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})={\mathrm{lk}_\Delta (x_1)}^{(s_2,\ldots,s_n)}$, $F$ is a facet of $\mathrm{lk}_\Delta (x_1)$. So, there is a vertex $x_{i_{k+1}}\in V(\Delta)$ such that $\{x_{i_1}, \dots,x_{i_k}, x_{i_{k+1}}\}$ is a face of $\mathrm{del}_\Delta (x_1)$ (see Remark \ref{remark1}). Hence $\{x_{i_1j_1}, \dots, x_{i_kj_k}, x_{i_{k+1}1} \}$ is a face of $\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})$. This completes the proof of the first part.
To prove the `if' part, we also use strong induction on $|V(\Delta^{(s_1,\ldots,s_n)})|$. If $|V(\Delta^{(s_1,\ldots,s_n)})|=|V(\Delta)|$, then $\Delta=\Delta^{(s_1, \dots, s_n)}$ and so there is nothing to prove in this case. Assume inductively that for all simplicial complexes $\Delta'$ and all positive integers $s'_1, \dots, s'_n$ with $|V(\Delta'^{(s'_1,\dots,s'_n)})|< t$ such that $\Delta'^{(s'_1,\dots,s'_n)}$ is vertex decomposable, $\Delta'$ is also vertex decomposable. We now prove the result when $t=|V(\Delta^{(s_1,\ldots,s_n)})|>|V(\Delta)|$. Since $|V(\Delta^{(s_1,\ldots,s_n)})|>|V(\Delta)|$ and $\Delta^{(s_1,\ldots,s_n)}$ is vertex decomposable, it has a shedding vertex, say $x_{11}$. If $s_1>1$, then $$\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})=\Delta^{(s_1-1, s_2,\ldots,s_n)},$$ and the inductive hypothesis ensures that $\Delta$ is vertex decomposable as desired. Otherwise $s_1=1$, $$\mathrm{lk}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})={\mathrm{lk}_\Delta (x_1)}^{(s_2,\ldots,s_n)}$$ and $$\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})={\mathrm{del}_\Delta (x_1)}^{(s_2,\ldots,s_n)}.$$ So, the inductive hypothesis implies that $\mathrm{lk}_\Delta (x_1)$ and $\mathrm{del}_\Delta (x_1)$ are vertex decomposable simplicial complexes. Now, assume that $F=\{x_{i_1}, \dots, x_{i_k}\}$ is a facet of $\mathrm{del}_\Delta (x_1)$. Then $\{x_{i_11}, \dots, x_{i_k1}\}$ is a facet of $\mathrm{del}_{\Delta^{(s_1,\ldots,s_n)}}(x_{11})$. Since $x_{11}$ is a shedding vertex of $\Delta^{(s_1,\ldots,s_n)}$, $\{x_{i_11}, \dots, x_{i_k1}\}$ is a facet of $\Delta^{(s_1,\ldots,s_n)}$. Hence, $F$ is a facet of $\Delta$ and the proof is complete. \end{proof}
\begin{rem}\label{pure} {\rm By the notations as in Definition \ref{2.1}, $\Delta$ is pure if and only if $\Delta^{(s_1,\ldots,s_n)}$ is pure, since any facet $F_i^{r_1,\ldots, r_{k_i}}$ of $\Delta^{(s_1,\ldots,s_n)}$ has the same cardinality as $F_i$.} \end{rem}
The following theorem, together with Theorem \ref{evd}, helps us see how the Cohen-Macaulay property of a vertex decomposable simplicial complex and of its expansions are related. \begin{thm}\label{vc} A vertex decomposable simplicial complex $\Delta$ is Cohen-Macaulay if and only if $\Delta$ is pure. \end{thm} \begin{proof} See \cite[Theorem 11.3]{BW2} and \cite[Theorem 5.3.18]{VIL}. \end{proof}
\begin{cor}\label{cor2} Let $\Delta$ be a vertex decomposable simplicial complex and $s_1, \dots, s_n$ be positive integers. Then $\Delta$ is Cohen-Macaulay if and only if $\Delta^{(s_1,\ldots,s_n)}$ is Cohen-Macaulay. \end{cor} \begin{proof} By Theorem \ref{evd}, $\Delta^{(s_1,\ldots,s_n)}$ is also vertex decomposable. Also, by Theorem \ref{vc}, $\Delta$, respectively $\Delta^{(s_1,\ldots,s_n)}$, is Cohen-Macaulay if and only if $\Delta$, respectively $\Delta^{(s_1,\ldots,s_n)}$, is pure. Now, by Remark \ref{pure}, the result is clear. \end{proof}
\begin{cor}\label{cor3} Let $G$ be a Cohen-Macaulay chordal graph or a Cohen-Macaulay bipartite graph. Then $G^{(s_1,\ldots,s_n)}$ is Cohen-Macaulay. \end{cor} \begin{proof} By \cite[Corollary 7]{W} and \cite[Corollary 2.12]{VT} chordal graphs and Cohen-Macaulay bipartite graphs are vertex decomposable. The result now follows from Corollary \ref{cor2}. \end{proof}
In the following theorem, it is shown that shellability is preserved under expansion; moreover, from a shelling of $\Delta$, a shelling of its expansion is constructed explicitly. \begin{thm}\label{vI} Let $\Delta$ be a shellable simplicial complex with $n$ vertices. Then $\Delta^{(s_1,\ldots,s_n)}$ is shellable for any $s_1,\ldots,s_n\in \mathbb{N}$. \end{thm} \begin{proof} We use the notation of Definition \ref{2.1}. Let $\Delta$ be a shellable simplicial complex with the shelling order $F_1<\cdots<F_m$ on the facets of $\Delta$. Consider an order on $\mathcal{F}(\Delta^{(s_1,\ldots,s_n)})$ as follows. For two facets $F_i^{r_1,\ldots, r_{k_i}}$ and $F_j^{r'_1,\ldots, r'_{k_j}}$ of $\Delta^{(s_1,\ldots,s_n)}$ \begin{itemize} \item[(i)] if $i<j$, set $F_i^{r_1,\ldots, r_{k_i}}<F_j^{r'_1,\ldots, r'_{k_j}}$, \item[(ii)] if $i=j$, set $F_i^{r_1,\ldots, r_{k_i}}<F_i^{r'_1,\ldots, r'_{k_i}}$, when $(r_1,\ldots, r_{k_i})<_{lex} (r'_1,\ldots, r'_{k_i})$. \end{itemize} We show that this ordering forms a shelling order. Consider two facets $F_i^{r_1,\ldots, r_{k_i}}$ and $F_j^{r'_1,\ldots, r'_{k_j}}$ with $i<j$. Since $F_i<F_j$, there exists an integer $\ell<j$ and $x_{j_t}\in F_j\setminus F_i$ such that $F_j\setminus F_\ell=\{x_{j_t}\}$. So $x_{j_tr'_t}\in F_j^{r'_1,\ldots, r'_{k_j}}\setminus F_i^{r_1,\ldots, r_{k_i}}$. Let $F_\ell=\{x_{\ell_1},\ldots,x_{\ell_{k_\ell}}\}$, where $\ell_1<\cdots<\ell_{k_\ell}$. Then there exist indices $h_1,\ldots,h_{t-1},h_{t+1},\ldots,h_{k_j}$ such that $j_1=\ell_{h_1},\ldots, j_{t-1}=\ell_{h_{t-1}},j_{t+1}=\ell_{h_{t+1}},\ldots,j_{k_j}=\ell_{h_{k_j}}$. Thus $$F_j^{r'_1,\ldots, r'_{k_j}}\setminus F_\ell^{r''_1,\ldots,r''_{k_\ell}}=\{x_{j_tr'_t}\},$$ where $r''_{h_1}=r'_1,\ldots,r''_{h_{t-1}}=r'_{t-1},r''_{h_{t+1}}=r'_{t+1},\ldots,r''_{h_{k_j}}=r'_{k_j}$ and $r''_{\lambda}=1$ for other indices $\lambda$. Since $\ell<j$, we have $F_\ell^{r''_1,\ldots,r''_{k_\ell}}<F_j^{r'_1,\ldots, r'_{k_j}}$.
Now assume that $i=j$ and $F_i^{r_1,\ldots, r_{k_i}}<F_i^{r'_1,\ldots, r'_{k_i}}$. Thus $$(r_1,\ldots, r_{k_i})<_{lex} (r'_1,\ldots, r'_{k_i}).$$ Let $1\leq t\leq k_i$ be an integer with $r_t<r'_t$. Then $x_{i_tr'_t}\in F_i^{r'_1,\ldots, r'_{k_i}}\setminus F_i^{r_1,\ldots, r_{k_i}}$, $$F_i^{r'_1,\ldots,r'_{k_i}}\setminus F_i^{r'_1,\ldots,r'_{t-1},r_t,r'_{t+1},\ldots, r'_{k_i}}=\{x_{i_tr'_t}\}$$ and $$(r'_1,\ldots,r'_{t-1},r_t,r'_{t+1},\ldots, r'_{k_i})<_{lex} (r'_1,\ldots,r'_{k_i}).$$ Thus $F_i^{r'_1,\ldots,r'_{t-1},r_t,r'_{t+1},\ldots, r'_{k_i}}<F_i^{r'_1,\ldots,r'_{k_i}}$. The proof is complete. \end{proof}
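The shelling order constructed in this proof can be checked mechanically on small examples. The following sketch (ours, not from the paper; names and the pair encoding of vertex copies are our own choices) lists the expanded facets in the order of the theorem and tests the shelling condition:

```python
from itertools import product

def ordered_expansion(facets, s):
    """Facets of the expansion, listed in the order of the theorem:
    first by the position of the underlying facet in the given shelling,
    then lexicographically in the copy indices (r_1,...,r_k).
    Copy r of vertex i is encoded as the pair (i, r)."""
    out = []
    for F in facets:  # facets assumed to be listed in a shelling order
        for rs in product(*[range(1, s[i] + 1) for i in sorted(F)]):
            out.append([(i, r) for i, r in zip(sorted(F), rs)])
    return out

def is_shelling(facets):
    """Shelling condition: for all i < j there is some l < j such that
    F_j minus F_l is a single vertex lying in F_j minus F_i."""
    sets = [set(F) for F in facets]
    for j in range(1, len(sets)):
        for i in range(j):
            diff = sets[j] - sets[i]
            if not any(len(sets[j] - sets[l]) == 1
                       and sets[j] - sets[l] <= diff for l in range(j)):
                return False
    return True
```

On $\Delta=\langle\{x_1,x_2,x_3\},\{x_1,x_2,x_4\},\{x_4,x_5\}\rangle$ with $(s_1,\ldots,s_5)=(1,2,1,1,2)$, the constructed order of the six expanded facets passes this test, while, for instance, the reversed order does not.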
The following corollary is an immediate consequence of Theorem \ref{vI}, Remark \ref{pure} and \cite[Theorem 5.3.18]{VIL}. \begin{cor}\label{cor1} Let $\Delta$ be a pure shellable simplicial complex. Then $\Delta^{(s_1,\ldots,s_n)}$ is Cohen-Macaulay for any $s_1,\ldots,s_n\in \mathbb{N}$. \end{cor}
\begin{thm}\label{one dimension} Let $\Delta$ be a pure one dimensional simplicial complex. Then the following statements are equivalent. \begin{itemize} \item[(i)] $\Delta$ is connected. \item[(ii)] $\Delta$ is vertex decomposable. \item[(iii)] $\Delta$ is shellable. \item[(iv)] $\Delta$ is sequentially Cohen-Macaulay. \item[(v)] $\Delta$ is Cohen-Macaulay. \end{itemize} \end{thm} \begin{proof} \begin{itemize}
\item[$(i\Rightarrow ii)$] Suppose that $\Delta=\langle F_1, \dots, F_m\rangle$. We use induction on $m$. If $m=1$, $\Delta$ is clearly vertex decomposable. Suppose inductively that the result has been proved for smaller values of $m$. We consider two cases. If $\Delta$ has a free vertex (a vertex which belongs to only one facet), then there is a facet, say $F_m=\{x,y\}$, of $\Delta$ such that $x\not\in \bigcup_{i=1}^{m-1}F_i$. In this case
$\mathrm{lk}_\Delta(x)=\langle \{ y\}\rangle,$
which is clearly vertex decomposable.
Also, since $\Delta$ is connected,
$$\mathrm{del}_\Delta(x)=\langle F_1,\dots, F_{m-1}\rangle$$ is a pure one dimensional connected simplicial complex. So, by the inductive hypothesis $\mathrm{del}_\Delta(x)$ is also vertex decomposable. Moreover, each facet of $\mathrm{del}_\Delta(x)$ is a facet of $\Delta$. This shows that $\Delta$ is vertex decomposable. Now, suppose that $\Delta$ has no free vertex. So, each vertex belongs to at least two facets. Hence, there is a vertex $x$ such that $\mathrm{del}_\Delta(x)$ is also connected and one dimensional. (Note that since $\Delta$ is connected and one dimensional, it may be viewed as a connected graph, and from graph theory we know that every connected graph has at least two vertices each of whose deletion leaves a connected graph.) Now, by the induction hypothesis, $\mathrm{del}_\Delta(x)$ is vertex decomposable. Also, $\mathrm{lk}_\Delta(x)$ is a discrete set and so vertex decomposable. Furthermore, in view of the choice of $x$, it is clear that every facet of $\mathrm{del}_\Delta(x)$ is a facet of $\Delta$. Hence, $\Delta$ is vertex decomposable as desired.
\item[$(ii\Rightarrow iii)$] follows from \cite[Theorem 11.3]{BW2}.
\item[$(iii\Rightarrow iv)$] was first shown by Stanley in \cite{Stanley}.
\item[$(iv\Rightarrow v)$] The result follows from the fact that every pure sequentially Cohen-Macaulay simplicial complex is Cohen-Macaulay.
\item[$(v\Rightarrow i)$] follows from \cite[Corollary 5.3.7]{VIL}. \end{itemize} \end{proof}
\begin{cor}\label{CM} Let $\Delta$ be a Cohen-Macaulay simplicial complex of dimension one. Then $\Delta^{(s_1,\ldots,s_n)}$ is Cohen-Macaulay for any $s_1,\ldots,s_n\in \mathbb{N}$. \end{cor} \begin{proof} Since $\Delta$ is Cohen-Macaulay of dimension one, Theorem \ref{one dimension} implies that $\Delta$ is pure shellable. Hence, Corollary \ref{cor1} yields the result. \end{proof}
The evidence suggests that when $\Delta$ is Cohen-Macaulay, its expansions are also Cohen-Macaulay. Corollaries \ref{cor2}, \ref{cor3}, \ref{cor1} and \ref{CM} are some results in this regard. In general, however, we have neither a proof nor a counterexample, so we state this as a conjecture.
\textbf{Conjecture.} If $\Delta$ is a Cohen-Macaulay simplicial complex, then $\Delta^{(s_1,\ldots,s_n)}$ is Cohen-Macaulay for any $s_1,\ldots,s_n\in \mathbb{N}$.
\section{Homological invariants of expansions of a simplicial complex} We begin this section with the next theorem, which presents formulas for the projective dimension and depth of the Stanley-Reisner ring of an expansion of a shellable simplicial complex in terms of the corresponding invariants of the Stanley-Reisner ring of the simplicial complex itself.
\begin{thm}\label{pd} Let $\Delta$ be a shellable simplicial complex with the vertex set $\{x_1,\ldots,x_n\}$, $s_1,\ldots,s_n\in \mathbb{N}$ and $R=K[x_1,\ldots,x_n]$ and $R'=K[x_{11},\ldots,x_{1s_1},\ldots,x_{n1},\ldots,x_{ns_n}]$ be polynomial rings over a field $K$. Then $$\mathrm{pd}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mathrm{pd}(R/I_{\Delta})+s_1+\cdots+s_n-n$$ and $$\mathrm{depth}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mathrm{depth}(R/I_{\Delta}).$$ \end{thm} \begin{proof}
Let $\Delta$ be a shellable simplicial complex. Then it is sequentially Cohen-Macaulay. By Theorem \ref{vI}, $\Delta^{(s_1,\ldots,s_n)}$ is also shellable and then sequentially Cohen-Macaulay. Thus by \cite[Corollary 3.33]{MVi}, $$\mathrm{pd}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mbox{bight}\,(I_{\Delta^{(s_1,\ldots,s_n)}})$$
and $$\mathrm{pd}(R/I_{\Delta})=\mbox{bight}\,(I_{\Delta}).$$ Let $k=\min\{|F|:\ F\in \mathcal{F}(\Delta)\}$. It is easy to see that
$\min\{|F|:\ F\in \mathcal{F}(\Delta^{(s_1,\ldots,s_n)})\}=k$. Then $\mbox{bight}\,(I_{\Delta})=n-k$ and
$$\mbox{bight}\,(I_{\Delta^{(s_1,\ldots,s_n)}})=|V(\Delta^{(s_1,\ldots,s_n)})|-k=s_1+\cdots+s_n-k=s_1+\cdots+s_n+\mathrm{pd}(R/I_{\Delta})-n.$$ The depth formula in the theorem then follows from the Auslander-Buchsbaum formula, since $\mathrm{depth}(R')=s_1+\cdots+s_n$. \end{proof}
In the following example, we compute the invariants in Theorem \ref{pd} and illustrate the equalities. \begin{exam} {\rm Let $\Delta=\langle \{x_1,x_2,x_3\}, \{x_1,x_2,x_4\},\{x_4,x_5\}\rangle$. Then $\Delta$ is shellable with respect to the listed order of its facets, and $$\Delta^{(1,1,2,1,2)}=\langle \{x_{11},x_{21},x_{31}\},\{x_{11},x_{21},x_{32}\},\{x_{11},x_{21},x_{41}\},\{x_{41},x_{51}\},\{x_{41},x_{52}\}\rangle.$$ Computations by Macaulay2 \cite{GS} show that $\mathrm{pd}(R/I_{\Delta})=3$ and $\mathrm{pd}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=5=\mathrm{pd}(R/I_{\Delta})+s_1+\cdots+s_n-n=3+1+1+2+1+2-5$. Also $\mathrm{depth}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mathrm{depth}(R/I_{\Delta})=2$. } \end{exam}
The following result, which is a special case of \cite[Corollary 2.7]{Leila}, is our main tool to prove Proposition \ref{shreg}.
\begin{thm}(See \cite[Corollary 2.7]{Leila}.)\label{Leila}
Let $I$ be a monomial ideal with linear quotients with the ordering $f_1<\cdots<f_m$ on the minimal generators of $I$. Then $$\beta_{i,j}(I)=\sum_{\deg(f_t)=j-i} {|\mbox{set}\,_I(f_t)|\choose i}.$$ \end{thm}
\begin{prop}\label{shreg} Let $\Delta=\langle F_1,\ldots,F_m\rangle$ be a shellable simplicial complex with the vertex set $\{x_1,\ldots,x_n\}$, $s_1,\ldots,s_n\in \mathbb{N}$ and $R=K[x_1,\ldots,x_n]$ and $R'=K[x_{11},\ldots,x_{1s_1},\\ \ldots,x_{n1},\ldots,x_{ns_n}]$ be polynomial rings over a field $K$. Then \begin{itemize} \item[(i)] if $s_1,\ldots,s_n>1$, then $\mathrm{reg}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mathrm{dim}(\Delta)+1=\mathrm{dim}(R/I_{\Delta});$
\item[(ii)] if for each $1\leq i\leq m$, $\lambda_i=|\{x_\ell\in F_i:\ s_\ell>1\}|$, then $$\mathrm{reg}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})\leq \mathrm{reg}(R/I_{\Delta})+\max \{\lambda_i:\ 1\leq i\leq m\}.$$ \end{itemize} \end{prop} \begin{proof}
Without loss of generality assume that $F_1<\cdots<F_m$ is a shelling for $\Delta$. We know that $I_{\Delta^{\vee}}$ has linear quotients with the ordering $x^{F_1^c}<\cdots<x^{F_m^c}$ on its minimal generators (see \cite[Theorem 1.4]{HD}). Moreover by Theorem \ref{1.3}, $\mathrm{reg}(R'/I_{\Delta^{(s_1,\ldots,s_n)}})=\mathrm{pd}(I_{\Delta^{(s_1,\ldots,s_n)^{\vee}}})$ and by \cite[Theorem 5.1.4]{BH} we have $\mathrm{dim}(R/I_\Delta)=\mathrm{dim}(\Delta)+1$. Thus, to prove (i), it is enough to show that $\mathrm{pd}(I_{\Delta^{(s_1,\ldots,s_n)^{\vee}}})=\mbox{dim}\,(\Delta)+1$. By Theorem \ref{Leila}, $\mathrm{pd}(I_{\Delta^{\vee}})=\max\{|\mbox{set}\,(x^{F_i^c})|:\ 1\leq i\leq m\}$. For any $1\leq i\leq m$, $\mbox{set}\,(x^{F_i^c})\subseteq F_i$, since any element $x_\ell\in \mbox{set}\,(x^{F_i^c})$ belongs to $(x^{F_j^c}):_R(x^{F_i^c})$ for some $1\leq j<i$. Thus $x_\ell=x^{F_j^c}/\gcd(x^{F_j^c},x^{F_i^c})=x^{F_i\setminus F_j}$. Let $F_i=\{x_{i_1},\ldots,x_{i_{k_i}}\}$ and $\mbox{set}\,(x^{F_i^c})=\{x_{i_\ell}:\ \ell\in L_i\}$, where $L_i\subseteq \{1,\ldots,k_i\}$. Consider the shelling for $\Delta^{(s_1,\ldots,s_n)}$ constructed in the proof of Theorem \ref{vI}. Applying \cite[Theorem 1.4]{HD} again shows that this shelling induces an order of linear quotients on the minimal generators of $I_{\Delta^{(s_1,\ldots,s_n)^{\vee}}}$. With this order \begin{equation}\label{tasavi} \mbox{set}\,(x^{(F_i^{r_1,\ldots, r_{k_i}})^c})=\{x_{i_\ell r_\ell}:\ \ell\in L_i\}\cup \{x_{i_tr_t}:\ r_t>1\}. \end{equation} More precisely, if $r_t>1$ for some $1\leq t\leq k_i$, then \begin{align*}
x_{i_tr_t} & =x^{(F_i^{r_1,\ldots, r_{k_i}}\setminus F_i^{r_1,\ldots,r_{t-1},r_t-1,r_{t+1},\ldots, r_{k_i}})}\\
& \in (x^{(F_i^{r_1,\ldots,r_{t-1},r_t-1,r_{t+1},\ldots, r_{k_i}})^c}):_{R'}(x^{(F_i^{r_1,\ldots, r_{k_i}})^c}). \end{align*} Hence, $x_{i_tr_t}\in \mbox{set}\,(x^{(F_i^{r_1,\ldots, r_{k_i}})^c})$. Also for any $x_{i_\ell}\in \mbox{set}\,(x^{F_i^c})$, there exists $1\leq j<i$ such that $x_{i_\ell}=x^{F_i\setminus F_j}\in (x^{F_j^c}):_R(x^{F_i^c})$. Thus there exist positive integers $r''_1,\ldots,r''_j$ such that
\begin{align*}
x_{i_\ell r_\ell} & =x^{(F_i^{r_1,\ldots,r_{k_i}}\setminus F_j^{r''_1,\ldots,r''_{k_j}})}\\
& \in (x^{(F_j^{r''_1,\ldots, r''_{k_j}})^c}):_{R'}(x^{(F_i^{r_1,\ldots,r_{k_i}})^c}).
\end{align*} Hence, $x_{i_\ell r_\ell}\in \mbox{set}\,(x^{(F_i^{r_1,\ldots, r_{k_i}})^c})$. Now, if $s_1,\ldots,s_n>1$, then $\mbox{set}\,(x^{(F_i^{s_{i_1},\ldots, s_{i_{k_i}}})^c})=\{x_{i_1s_{i_1}},\ldots,x_{i_{k_i}s_{i_{k_i}}}\}.$ Thus
$$\mathrm{pd}(I_{{\Delta^{(s_1,\ldots,s_n)}}^\vee})=\max\{|\mbox{set}\,(x^{(F_i^{s_{i_1},\ldots, s_{i_{k_i}}})^c})|:\ 1\leq i\leq m\}=\max\{|F_i|:\ 1\leq i\leq m \}=\mathrm{dim}(\Delta)+1.$$ To prove (ii), notice that by \eqref{tasavi}, $|\mbox{set}\,(x^{(F_i^{r_1,\ldots, r_{k_i}})^c})|\leq |\mbox{set}\,(x^{F_i^c})|+ \lambda_i$. Therefore $\mathrm{pd}(I_{{\Delta^{(s_1,\ldots,s_n)}}^\vee})\leq \mathrm{pd}(I_{\Delta^{\vee}})+\max \{\lambda_i:\ 1\leq i\leq m\}.$ Now, by Theorem \ref{1.3}, the result holds. \end{proof}
\begin{exam} {\rm Consider the chordal graph $G$ depicted in Figure $3$ and its $(2,2,3,2,3)$-expansion which is a graph with $12$ vertices. Then $\Delta_G=\langle\{x_1,x_3\},\{x_3,x_5\},\{x_4,x_5\},\{x_2\}\rangle$. Since $G$ is shellable, by Proposition \ref{shreg}, $\mathrm{reg}(R'/I(G^{(2,2,3,2,3)}))=\mbox{dim}\,(\Delta_G)+1=2$. \begin{figure}
\caption{The graph $G$ and the $(2,2,3,2,3)$-expansion of $G$}
\label{fig7}
\end{figure} }
\end{exam}
\end{document} |
\begin{document}
\title[Diagonal quartic forms]{Pairs of diagonal quartic forms:\\ the asymptotic formulae} \author[J\"org Br\"udern]{J\"org Br\"udern} \address{Mathematisches Institut, Bunsenstrasse 3--5, D-37073 G\"ottingen, Germany} \email{[email protected]} \author[Trevor D. Wooley]{Trevor D. Wooley} \address{Department of Mathematics, Purdue University, 150 N. University Street, West Lafayette, IN 47907-2067, USA} \email{[email protected]} \subjclass[2010]{11D72, 11P55, 11E76} \keywords{Quartic Diophantine equations, Hardy-Littlewood method.} \thanks{First Author supported by Deutsche Forschungsgemeinschaft Project Number 255083470. Second author supported by NSF grants DMS-1854398 and DMS-2001549.} \date{}
\begin{abstract} We establish an asymptotic formula for the number of integral solutions of bounded height for pairs of diagonal quartic equations in $26$ or more variables. In certain cases, pairs in $25$ variables can be handled. \end{abstract} \maketitle
\section{Introduction} Once again we are concerned with the pair of Diophantine equations \begin{equation}\label{1.1} a_1x_1^4+a_2x_2^4+\ldots +a_sx_s^4=b_1x_1^4+b_2x_2^4+\ldots +b_sx_s^4=0, \end{equation} wherein the given coefficients $a_j,b_j$ satisfy $(a_j,b_j)\in {\mathbb Z}^2\setminus \{(0,0)\}$ $(1\leqslant j\leqslant s)$. While our focus was on the validity of the Hasse principle for such pairs in two precursors of this article \cite{Jems1,BW21}, we now investigate the asymptotic density of integral solutions. Denote by $\mathscr N(P)$ the number of solutions in integers
$x_j$ with $|x_j|\leqslant P$ $(1\leqslant j\leqslant s)$ to this system. Then, subject to a natural rank condition on the coefficient matrix, one expects an asymptotic formula for $\mathscr N(P)$ to hold provided that $s$ is not too small. Indeed, following Hardy and Littlewood \cite{PN3} in spirit, the quantity $P^{8-s}\mathscr N(P)$ should tend to a limit that is itself a product of local densities. On a formal level, the densities are readily described. The real density, also known as the singular integral, is defined by \begin{equation} \label{1.2} \mathfrak I = \lim_{T\to \infty} \int_{-T}^T\int_{-T}^T\prod_{j=1}^s \int_{-1}^1 e\big((a_j\alpha+b_j\beta)t_j^4\big)\,\mathrm d t_j\,\mathrm d\alpha\, \mathrm d\beta \end{equation} whenever the limit exists. Let $M(q)$ denote the number of solutions ${\mathbf x}$ in $({\mathbb Z}/q{\mathbb Z})^s$ satisfying (\ref{1.1}). Then for primes $p$, the $p$-adic density is defined by \begin{equation}\label{1.3} \mathfrak s_p = \lim_{h\to\infty} p^{(2-s)h} M(p^h), \end{equation} assuming again that this limit exists. In case of convergence, the product $\mathfrak S = \prod_p \mathfrak s_p$ is referred to as the singular series, and the desired asymptotic relation can be presented as the limit formula \begin{equation}\label{1.4} \lim_{P\to\infty} P^{8-s} \mathscr N (P) = \mathfrak I \mathfrak S. \end{equation}
Note that \eqref{1.4} can hold only when in each of the two equations comprising \eqref{1.1} there are sufficiently many non-zero coefficients. Of course one may pass from \eqref{1.1} to an equivalent system obtained by taking linear combinations of the two constituent equations. Thus, the invariant $q_0=q_0(\mathbf a,\mathbf b)$, defined by $$q_0(\mathbf a,\mathbf b)=\min_{(c,d)\in\mathbb Z^2\setminus\{(0,0)\}}\text{card} \{1\leqslant j \leqslant s: ca_j+db_j\neq 0\},$$ must be reasonably large. Indeed, it follows from Lemmata 3.1, 3.2 and 3.3 in our companion paper \cite{BW21} that the conditions $s\ge 16$ and $q_0\ge 12$ ensure that the limits \eqref{1.2} and \eqref{1.3} all exist, that the product $\mathfrak S$ is absolutely convergent, and that the existence of non-singular solutions to the system \eqref{1.1} in each completion of the rationals implies that $\mathfrak I \mathfrak S>0$. A first result concerning the limit \eqref{1.4} is then obtained by introducing the moment estimate \begin{equation}\label{1.5}
\int_0^1\bigg| \sum_{x\leqslant P}e(\alpha x^4)\bigg|^{14}\,\mathrm d\alpha \ll P^{10+\varepsilon}, \end{equation} derived as the special case $u=14$ of Lemma \ref{lemma5.3} below, to a familiar method of Cook \cite{Coo1972} (see also \cite{BrC}). Here we point out that the estimate (\ref{1.5}) first occurs implicitly in the proof of \cite[Theorem 4.1]{Woo2012}, conditional on the validity of the (now proven) main conjecture in Vinogradov's mean value theorem (for which see \cite{BDG2016} and \cite[Corollary 1.3]{Woo2019}). In this way, one routinely confirms \eqref{1.4} when $s\ge 29$ and $q_0\ge 15$. This result, although not explicitly mentioned in the literature, is certainly familiar to experts in the area, and has to be considered as the state of the art today. It seems worth remarking in this context that, at a time when the estimate \eqref{1.5} was not yet available, the authors \cite{BWBull,Camb} handled the case $s\ge 29$ with more restrictive rank conditions. The main purpose of this memoir is to make three variables redundant.
\begin{theorem}\label{theorem1.1} For pairs of equations \eqref{1.1} with $s\ge 26$ and $q_0\ge 15$, one has $\mathscr N(P)\sim \mathfrak I\mathfrak SP^{s-8}$. \end{theorem}
Relaxing the rank condition $q_0\ge 15$ appears to be a difficult enterprise, as we now explain. Consider a pair of equations \eqref{1.1} with $s\ge 29$, and suppose that $b_i=a_j=0$ for $1\leqslant i\leqslant 14<j\leqslant s$. These two equations are independent and thus $\mathscr N(P)$ factorises as $\mathscr N(P)=N_1(P)N_2(P)$, where $N_1(P)$ and $N_2(P)$ denote the number of integral solutions of the respective single equations \begin{equation} \label{1.6} a_1x_1^4+a_2x_2^4+\ldots +a_{14}x_{14}^4=0, \end{equation}
with $|x_j|\leqslant P$ $(1\leqslant j\leqslant 14)$, and \begin{equation}\label{1.7} b_{15}y_1^4+b_{16}y_2^4+\ldots +b_sy_{s-14}^4=0, \end{equation}
with $|y_j|\leqslant P$ $(1\leqslant j\leqslant s-14)$. The equation \eqref{1.7} has at least $15$ non-zero coefficients, and so a straightforward application of the Hardy-Littlewood method using the mean value \eqref{1.5} shows that $P^{18-s}N_2(P)$ tends to a limit as $P\to \infty$, with this limit equal to a product of local densities analogous to $\mathfrak I$ and $\mathfrak s_p$. By choosing $b_j=(-1)^j$ for $15\leqslant j\leqslant s$, we ensure that this limit is positive, and thus $P^{8-s}\mathscr N(P)$ tends to a limit as $P\to\infty$ if and only if $P^{-10}N_1(P)$ likewise tends to a limit. From the definitions \eqref{1.2} and \eqref{1.3}, it is apparent that the local densities $\mathfrak I$ and ${\mathfrak s}_p$ factorise into components stemming from the equations underlying $N_1$ and $N_2$. The relation \eqref{1.4} therefore holds for this particular pair of equations if and only if $P^{-10}N_1(P)$ tends to the product of local densities associated with the equation \eqref{1.6}. In particular, were \eqref{1.4} known to hold in any case where $q_0=14$ and $s$ is large, then it would follow that $P^{-10}N_1(P)$ tends to the limit suggested by a formal application of the circle method, a result that is not yet known. This shows that relaxing the condition on $q_0$ would imply progress with single diagonal quartic equations.\par
The invariant $q_0$ is a very rough measure for the entanglement of the two equations present in \eqref{1.1}. This can be refined considerably. The pairs $(a_j,b_j)$ are all non-zero in $\mathbb Z^2$, so they define a point $(a_j:b_j)\in\mathbb P(\mathbb Q)$. We refer to indices $i,j\in\{1,2,\ldots,s\}$ as {\em equivalent} if $(a_i:b_i)=(a_j:b_j)$. This defines an equivalence relation on $\{1,2,\ldots,s\}$. Suppose that there are $\nu$ equivalence classes with $r_1,\ldots ,r_\nu$ elements, respectively, where $r_1\ge r_2\ge \ldots\ge r_\nu$. On an earlier occasion \cite{Camb} we named the tuple $(r_1,\ldots,r_\nu)$ the {\em profile} of the equations \eqref{1.1}. Note that $q_0=s-r_1$, whence our assumed lower bound $q_0\ge 15$ implies that $r_1\leqslant s-15$ and $\nu\ge 2$. If more is known about the profile, then we can save yet another variable.
\begin{theorem}\label{theorem1.2} Suppose that $s= 25$ and that $(r_1,\ldots,r_\nu)$ is the profile of the pair of equations \eqref{1.1}. If $q_0\ge 16$ and $\nu\ge 5$, then $\mathscr N(P)\sim \mathfrak I\mathfrak SP^{s-8}$. \end{theorem}
For a pair \eqref{1.1} in ``general position'' one has $\nu=s$ and $r_1=1$, and in a quantitative sense easily made precise, such pairs constitute almost all such Diophantine systems. Hence, the conclusion of Theorem \ref{theorem1.2} applies to almost all pairs of equations of the shape \eqref{1.1}.\par
We pointed out long ago \cite{Camb} that a diffuse profile can be advantageous. However, even with the estimate \eqref{1.5} in hand, the method of \cite{Camb} only handles cases where $s\ge 27$ and $r_1$ and $r_2$ are not too large. Thus our results improve on all previous work on the subject even if the input to the published versions is enhanced by the newer mean value bound \eqref{1.5}.\par
It is time to describe the methods, and in particular the new ideas involved in the proofs. Our more recent results specific to systems of diagonal quartic forms \cite{Jems1,Jems2,BW21} all depend on large values estimates for Fourier coefficients of powers of Weyl sums, and the current communication is no exception. The large values estimates provide upper bounds for higher moments of these Fourier coefficients, and these in turn yield mean value bounds for correlations of Weyl sums. We describe this link here in a setting appropriate for application to pairs of equations. Consider a $1$-periodic twice differentiable function $h:\mathbb R \to\mathbb R$. Its Fourier expansion \begin{equation}\label{Fou} h(\alpha)=\sum_{n\in {\mathbb Z}}\hat h(n)e(\alpha n) \end{equation} converges uniformly and absolutely. Hence, by orthogonality, one has \begin{equation}\label{3rd} \int_0^1\!\!\int_0^1 h(\alpha)h(\beta)h(-\alpha-\beta)\,\mathrm d\alpha\,\mathrm d\beta = \sum_{n\in {\mathbb Z}}\hat h(n)^3. \end{equation} The methods of \cite{Jems1,Jems2,BW21} rest on this and closely related identities,
choosing $h(\alpha)=|g(\alpha)|^u$ with suitable quartic Weyl sums $g$ and a positive real number $u$. As a service to future scholars, we analyse in some detail the differentiability
properties of functions like $|g(\alpha)|^u$ in \S3. It transpires that, when $u\ge 2$,
the relation \eqref{3rd} holds. We use \eqref{3rd} with $h(\alpha)=|f(\alpha)|^u$, where now \begin{equation}\label{ff} f(\alpha)= \sum_{x\leqslant P} e(\alpha x^4) \end{equation} is the ordinary Weyl sum. We then obtain new entangled mean value estimates for smaller values of $u$. This alone is not of strength sufficient to reach the conclusions of Theorem \ref{theorem1.1}.\par
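As an aside for orientation, and not as part of the formal argument, the identity \eqref{3rd} can be checked numerically in a small instance with $h=|f|^2$ and the Weyl sum \eqref{ff}. The truncation $P=3$ and grid size $M$ below are illustrative choices; since $h$ is then a trigonometric polynomial with frequencies bounded by $P^4-1$, a grid of $M>2(P^4-1)$ points evaluates the double integral exactly by discrete orthogonality.

```python
from collections import Counter

import numpy as np

P = 3  # illustrative truncation point for the Weyl sum f

def f(alpha):
    # f(alpha) = sum_{1 <= x <= P} e(alpha x^4), with e(z) = exp(2 pi i z)
    return sum(np.exp(2j * np.pi * alpha * x ** 4) for x in range(1, P + 1))

# h = |f|^2 is a trigonometric polynomial with frequencies |n| <= P^4 - 1,
# and hhat(n) = #{(x, y) : x^4 - y^4 = n}.
hhat = Counter(x ** 4 - y ** 4
               for x in range(1, P + 1) for y in range(1, P + 1))
rhs = sum(c ** 3 for c in hhat.values())

# A grid of M > 2(P^4 - 1) points evaluates the double integral in (3rd)
# exactly, by discrete orthogonality of the exponentials e(alpha n).
M = 200
hg = np.array([abs(f(j / M)) ** 2 for j in range(M)])  # h sampled on the grid
lhs = sum(hg[j] * hg[k] * hg[(-j - k) % M]            # h has period 1
          for j in range(M) for k in range(M)) / M ** 2

assert abs(lhs - rhs) < 1e-6
```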
As experts in the field will readily recognise, for larger values of $u$ the quality of the aforementioned mean value estimates is diluted by major arc contributions, and one would therefore like to achieve their removal. Thus, if $\mathfrak n$ is a $1$-periodic set of real numbers with $\mathfrak n\cap [0,1)$ a classical choice of minor arcs and ${\bf 1}_{\mathfrak n}$ is the indicator function of $\mathfrak n$, then one is tempted to
apply the function $h(\alpha)={\bf 1}_{\mathfrak n}(\alpha)|f(\alpha)|^u$ in place of
$|f(\alpha)|^u$ within \eqref{3rd}. However, this function is no longer continuous. We bypass this difficulty by introducing a smoothed Farey dissection in \S4. This is achieved by a simple and very familiar convolution technique that should be useful in other contexts, too. In this way, in \S5 we obtain a minor arc variant of the cubic moment method developed in our earlier work \cite{Jems1}. Equipped with this and the mean value bounds that follow from it, one reaches the conclusions of Theorem \ref{theorem1.1} in the majority of cases under consideration. Unfortunately, some cases with exceptionally large values of $r_j$ stubbornly deny treatment. To cope with these remaining cases, we develop a mixed moment method in \S6.\par
The point of departure is a generalisation of \eqref{3rd}. If $h_1,h_2,h_3$ are functions that qualify for the discussion surrounding \eqref{Fou} and \eqref{3rd}, then by invoking orthogonality once again, we see that \begin{equation}\label{3rdv} \int_0^1\!\!\int_0^1 h_1(\alpha)h_2(\beta)h_3(-\alpha-\beta)\,\mathrm d\alpha\,\mathrm d\beta = \sum_{n\in \mathbb Z}\hat h_1(n)\hat h_2(n) \hat h_3(n). \end{equation} By H\"older's inequality, the right hand side here is bounded in terms of the three moments \begin{equation}\label{3rdv2}
\sum_{n\in \mathbb Z}|\hat h_j(n)|^3. \end{equation}
In all cases where $h_j(\alpha)= |f(\alpha)|^{u_j}$ for some even positive integral exponent $u_j$ one has $\hat h_j(n)\ge 0$, so \eqref{3rd} can be used in reverse to interpret \eqref{3rdv2} in terms of the number of solutions of a pair of Diophantine equations. The purely analytic description of the method has several advantages. First and foremost, one can break away from even numbers $u_j$, and still estimate all three cubic moments \eqref{3rdv2}. This paves the way to a complete treatment of pairs of equations \eqref{1.1} with $s\ge 26$ and $q_0\ge 15$. Beyond this, the identity \eqref{3rdv} offers extra flexibility for the arithmetic harmonic analysis. Instead of the homogeneous passage from \eqref{3rdv} to \eqref{3rdv2} one could apply H\"older's inequality with differing weights. As an example of stunning simplicity, we note that the expression in \eqref{3rdv} is bounded above by
$$ \biggl(\sum_{n\in \mathbb Z}|\hat h_1(n)|^2\biggr)^{1/2}
\biggl(\sum_{n\in \mathbb Z}|\hat h_2(n)|^4\biggr)^{1/4}
\biggl(\sum_{n\in \mathbb Z}|\hat h_3(n)|^4\biggr)^{1/4}. $$
If we apply this idea with $h_j(\alpha)=|f(\alpha)|^{u_j}$ and $u_j$ a positive even integer, then the first factor relates to a single diagonal Diophantine equation while the other two factors concern systems consisting of three diagonal Diophantine equations. This argument is dual (in the sense that we work with Fourier coefficients) to a method that we described as {\em complification} in our work on systems of cubic forms \cite{MA}. There is, of course, an obvious generalisation of \eqref{3rd} to higher dimensional integrals that has been used here. This points to a complex interplay between systems of diagonal equations in which the size parameters (number of variables and number of equations) vary, and need not be restricted to natural numbers. We have yet to explore the full potential of this observation.
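The asymmetric H\"older split displayed above is elementary, and a quick numerical confirmation may aid the reader; the random sequences below are merely stand-ins for the coefficient sequences $\hat h_j(n)$, so this is an illustrative sketch rather than anything specific to Weyl sums.

```python
import numpy as np

# Random stand-ins for the three coefficient sequences hhat_j(n).
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 50))

# Hoelder with exponents (2, 4, 4): |sum a*b*c| <= ||a||_2 ||b||_4 ||c||_4.
lhs = abs(np.sum(a * b * c))
rhs = (np.sum(np.abs(a) ** 2) ** (1 / 2)
       * np.sum(np.abs(b) ** 4) ** (1 / 4)
       * np.sum(np.abs(c) ** 4) ** (1 / 4))
assert lhs <= rhs
```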
\par We briefly comment on the role of the Hausdorff-Young inequality \cite[Chapter XII, Theorem 2.3]{Z} within this circle of ideas. In the notation of \eqref{3rdv} this asserts that
$$ \sum_{n\in \mathbb Z}|\hat h_j(n)|^3 \leqslant \biggl(\int_0^1 |h_j(\alpha)|^{3/2} \,\mathrm d\alpha \biggr)^2. $$ Passing through \eqref{3rdv} and \eqref{3rdv2}, one then arrives at the estimate
\begin{equation}\label{HY2} \biggl|\int_0^1\!\!\int_0^1
h_1(\alpha)h_2(\beta)h_3(-\alpha-\beta)\,\mathrm d\alpha\,\mathrm d\beta\biggr|
\leqslant \prod_{j=1}^3 \biggl(\int_0^1 |h_j(\alpha)|^{3/2}\,\mathrm d\alpha \biggr)^{2/3}. \end{equation} However, by H\"older's inequality, one finds
$$ \biggl|\int_0^1\!\!\int_0^1 h_1(\alpha)h_2(\beta)h_3(-\alpha-\beta)\,\mathrm d\alpha\,
\mathrm d\beta\biggr| \leqslant \prod_{1\leqslant i<j\leqslant 3} \biggl(\int_0^1\!\!\int_0^1 |h_i h_j|^{3/2}\,\mathrm d\alpha \, \mathrm d\beta \biggr)^{1/3}, $$ where, on the right hand side, one should read $h_1=h_1(\alpha)$, $h_2=h_2(\beta)$ and $h_3=h_3(-\alpha-\beta)$. By means of obvious linear substitutions, this also delivers the bound \eqref{HY2}. This last method is essentially that of Cook \cite{Coo1972}. Our approach is superior because the methods are designed to remember the arithmetic source of the Weyl sums when estimating moments of Fourier coefficients.\par
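The Hausdorff-Young bound quoted above can likewise be sanity-checked numerically on a random trigonometric polynomial; the degree $N$ and grid size $M$ below are illustrative, and the $L^{3/2}$ integral is approximated by an equally spaced Riemann sum, so this is a numerical illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5                      # illustrative degree of the trigonometric polynomial
n = np.arange(-N, N + 1)
hhat = rng.normal(size=2 * N + 1) + 1j * rng.normal(size=2 * N + 1)

# Sample h(alpha) = sum_n hhat(n) e(alpha n) on an equally spaced grid.
M = 4096
alpha = np.arange(M) / M
h = (hhat[None, :] * np.exp(2j * np.pi * np.outer(alpha, n))).sum(axis=1)

# Hausdorff-Young with p = 3/2, q = 3: sum |hhat|^3 <= (int_0^1 |h|^{3/2})^2.
lhs = np.sum(np.abs(hhat) ** 3)
rhs = np.mean(np.abs(h) ** 1.5) ** 2   # Riemann sum for the integral, squared
assert lhs <= rhs * (1 + 1e-6)
```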
The proof of Theorem \ref{theorem1.2} requires yet another tool that is a development of our multidimensional version of Hua's lemma \cite{BWBull}. This somewhat outdated work is based on Weyl differencing. An analysis of the method shows that whenever a new block of differenced Weyl sums enters the recursive process, a new entry $r_j$ to the profile of the underlying Diophantine system is needed. It is here where one imports undesired constraints on the profile, as in Theorem \ref{theorem1.2}. However, powered with the new upper bound \eqref{1.5}, the method just described yields a bound for a two-dimensional entangled mean value over eighteen Weyl sums that outperforms the cubic moments technique by a factor $P^{1/6}$ (compare Theorem \ref{thm5.1} with Theorem \ref{thm6.2}). Within a circle method approach, this mean value is introduced via H\"older's inequality. In the complementary factor, we have available an abundance of Weyl sums. Fortunately the cubic moments technique restricted to minor arcs presses the method home. We point out that our proof of Theorem \ref{theorem1.2} constitutes the first instance in which the cubic moments technique is successfully coupled with the differencing techniques derived from \cite{BWBull}.\par
One might ask whether more restrictive conditions on the profile allow one to reduce the number of variables even further. As we demonstrate at the very end of this memoir it is indeed possible to accelerate the convergence in \eqref{1.4}, but even the extreme condition $r_1=1$ seems insufficient to save a variable without another new idea.\par
Once the new moment estimates are established, our proofs of Theorems \ref{theorem1.1} and \ref{theorem1.2} are fairly concise. There are two reasons. First, we may import the major arc work, to a large extent, from \cite{BW21}. Second, more importantly, our minor arc treatment rests on a new inequality (Lemma \ref{lemA3} below) that entirely avoids combinatorial difficulties associated with exceptional profiles. This allows us to reduce the minor arc work to a single profile with a certain maximality property. We expect this argument to become a standard preparation step in related work, and have therefore presented this material in broad generality. We refer to \S2 where the reader will also find comment on previous attempts in this direction.\par
{\em Notation}. Our basic parameter is $P$, a sufficiently large real number. Implicit constants in Vinogradov's familiar symbols $\ll$ and $\gg$ may depend on $s$ and $\varepsilon$ as well as ambient coefficients such as those in the system \eqref{1.1}. Whenever $\varepsilon$ appears in a statement we assert that the statement holds for each positive real value assigned to $\varepsilon$. As usual, we write $e(z)$ for $e^{2\pi iz}$.
\section{Some inequalities} This section belongs to real analysis. We discuss a number of inequalities for products. As has been familiar for decades, in an attempt to prove results of the type described in Theorems \ref{theorem1.1} and \ref{theorem1.2} via harmonic analysis, it is desirable to simplify to a situation where the profile is extremal relative to the conditions in hand, that is, the multiplicities $r_1, r_2, \ldots$ are as large as possible, and consequently $\nu$ is as small as possible. In the past, most scholars have applied H\"older's inequality to achieve this objective, often by an {\em ad hoc} argument that led to the consideration of several cases separately. The purpose of this section is to make available general inequalities that encapsulate the reduction step in a single lemma of generality sufficient to include all situations that one encounters in practice.\par
The germ of our method is a classical estimate, sometimes referred to as Young's inequality: if $p$ and $q$ are real numbers with $p>1$ and $$\frac{1}{p}+\frac{1}{q}=1,$$ then for all non-negative real numbers $u$ and $v$ one has \begin{equation} \label{HY} uv \leqslant \frac{u^p}{p}+\frac{v^q}{q}. \end{equation} This includes the case $r=2$ of the bound \begin{equation}
\label{T} |z_1z_2\cdots z_r| \leqslant \frac1r \big(|z_1|^r+\ldots+|z_r|^r\big) \end{equation} which holds for all $r\in\mathbb N$ and all $z_j\in\mathbb C$ $(1\leqslant j\leqslant r)$. Indeed, the general case of \eqref{T} follows from \eqref{HY} by an easy induction on $r$.\par
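As a quick numerical confirmation of \eqref{T}, illustrative only and in no way part of the proofs, one may test the bound on random complex tuples:

```python
import numpy as np

rng = np.random.default_rng(1)
for r in range(1, 7):
    z = rng.normal(size=r) + 1j * rng.normal(size=r)
    lhs = abs(np.prod(z))
    rhs = np.mean(np.abs(z) ** r)   # (|z_1|^r + ... + |z_r|^r) / r
    assert lhs <= rhs * (1 + 1e-12)
```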
In the following chain of lemmata we are given a number $\nu\in\mathbb N$ and integral exponents $m_j$, $M_j$ $(1\leqslant j\leqslant \nu)$ with \begin{equation}\label{Hyp1} m_1\ge m_2\ge \ldots \ge m_\nu\ge 0, \qquad M_1\ge M_2\ge\ldots\ge M_\nu\ge 0 \end{equation} and \begin{equation}\label{Hyp2} \sum_{l=1}^L m_l \leqslant \sum_{l=1}^L M_l \quad (1\leqslant L<\nu), \qquad \sum_{l=1}^\nu m_l = \sum_{l=1}^\nu M_l. \end{equation} We write $S_\nu$ for the group of permutations on $\nu$ elements. We refer to a function $w:S_\nu\to [0, 1]$ with $$ \sum_{\sigma\in S_\nu} w(\sigma) = 1 $$ as a {\em weight on} $S_\nu$.
\begin{lemma}\label{lemA1} Suppose that the exponents $m_j$, $M_j$ $(1\leqslant j\leqslant \nu)$ satisfy \eqref{Hyp1} and \eqref{Hyp2}. Then there is a weight $w$ on $S_\nu$ with the property that for all non-negative real numbers $u_1,u_2,\ldots,u_\nu$ one has \begin{equation}\label{A1} u_1^{m_1}u_2^{m_2}\cdots u_\nu^{m_\nu} \leqslant \sum_{\sigma\in S_\nu} w(\sigma) u_{\sigma(1)}^{M_1}u_{\sigma(2)}^{M_2}\cdots u_{\sigma(\nu)}^{M_\nu}. \end{equation} \end{lemma}
\begin{proof} We define
$$ D= \sum_{l=1}^\nu |M_l-m_l| $$ and proceed by induction on $\nu+D$. In the base case of the induction one has $\nu+D=1$. In this situation $\nu=1$ and $D=0$, and the claim of the lemma is trivially true with $\sigma=\text{id}$ and $w(\sigma)=1$.\par
Now suppose that $\nu+D>1$. We consider two cases. First we suppose that there is a number $\nu_1$ with $1\leqslant \nu_1<\nu$ and $$ \sum_{l=1}^{\nu_1} m_l = \sum_{l=1}^{\nu_1} M_l.$$ We put
$$ D_1= \sum_{l=1}^{\nu_1} |M_l-m_l|, \quad D_2 = \sum_{l=\nu_1+1}^\nu |M_l-m_l|, \quad \nu_2=\nu-\nu_1. $$ Then \eqref{Hyp1} and \eqref{Hyp2} are valid with $\nu_1$ in place of $\nu$, and one has $D_1\leqslant D$. Hence $\nu_1+D_1<\nu +D$ so that we may invoke the inductive hypothesis to find a weight $w_1$ on $ S_{\nu_1}$ with \begin{equation}\label{A3}
u_1^{m_1}u_2^{m_2}\cdots u_{\nu_1}^{m_{\nu_1}} \leqslant \sum_{\sigma\in S_{\nu_1}} w_1(\sigma)u_{\sigma(1)}^{M_1}u_{\sigma(2)}^{M_2}\cdots u_{\sigma(\nu_1)}^{M_{\nu_1}}. \end{equation} Similarly, in the current situation, the numbers $m_{\nu_1+j}$, $M_{\nu_1+j}$ $(1\leqslant j \leqslant \nu_2)$ may take the roles of $m_j$, $M_j$ in \eqref{Hyp1} and \eqref{Hyp2} with $\nu_2$ in place of $\nu$. Again, we have $\nu_2+D_2<\nu+D$. Now writing $\tau$ for a permutation in $S_{\nu_2}$ acting on the set $\{\nu_1+1,\nu_1+2,\ldots, \nu\}$, we may invoke the inductive hypothesis again to find a weight $w_2$ on $S_{\nu_2}$ with \begin{equation}\label{A4}
u_{\nu_1+1}^{m_{\nu_1+1}}u_{\nu_1+2}^{m_{\nu_1+2}}\cdots u_{\nu}^{m_{\nu}} \leqslant \sum_{\tau\in S_{\nu_2}}w_2(\tau)u_{\tau(\nu_1+1)}^{M_{\nu_1+1}} u_{\tau(\nu_1+2)}^{M_{\nu_1+2}}\cdots u_{\tau(\nu)}^{M_{\nu}}. \end{equation} We multiply the inequalities \eqref{A3} and \eqref{A4}. It is then convenient to read permutations $\sigma$ on $1,2,\ldots,\nu_1$ and $\tau$ on $\nu_1+1,\nu_1+2,\ldots,\nu$ as permutations on $1,2,\ldots, \nu$ with $\sigma(j)=j$ for $j>\nu_1$ and $\tau(j)=j$ for $j\leqslant \nu_1$. Then, for permutations of the type $\sigma\tau$ in $S_\nu$ we put $w(\sigma\tau) = w_1(\sigma)w_2(\tau)$, and we put $w(\phi)=0$ for the remaining permutations $\phi\in S_\nu$. With this function $w$ the product of \eqref{A3} and \eqref{A4} becomes \eqref{A1}, completing the induction in the case under consideration.
\par In the complementary case we have \begin{equation}\label{A5} \sum_{l=1}^L m_l < \sum_{l=1}^L M_l \qquad (1\leqslant L<\nu). \end{equation} In particular, this shows that $m_1<M_1$. Also, by comparing the case $L=\nu-1$ of \eqref{A5} with the equation corresponding to the case $L=\nu$ in \eqref{Hyp2}, we see that $m_\nu>M_\nu$, as a consequence of which we have $m_\nu\ge 1$. We write $m_1= m_\nu+r$. In view of \eqref{Hyp1}, we see that $r\ge 0$, and so an application of \eqref{HY} with $q=r+2$ leads to the inequality $$ u_1^{r+1} u_\nu \leqslant \frac{r+1}{r+2} u_1^{r+2} + \frac1{r+2} u_\nu^{r+2}. $$ Recall that $m_\nu\ge 1$, whence $m_1-r-1=m_\nu-1\ge 0$. It follows that $$ u_1^{m_1}u_\nu^{m_\nu}\leqslant u_1^{m_1-r-1} u_\nu^{m_\nu-1} \Big( \frac{r+1}{r+2} u_1^{r+2} + \frac1{r+2} u_\nu^{r+2}\Big), $$ and thus \begin{align*} u_1^{m_1}\cdots u_\nu^{m_\nu} \leqslant \frac{r+1}{r+2} u_1^{m_1+1} & u_2^{m_2}u_3^{m_3}\cdots u_{\nu-1}^{m_{\nu-1}} u_\nu^{m_\nu -1} \\ +&\frac1{r+2}u_1^{m_\nu -1}u_2^{m_2}u_3^{m_3}\cdots u_{\nu-1}^{m_{\nu-1}}u_\nu^{m_1+1}. \end{align*} The chain of exponents $m_1+1,m_2,m_3,\ldots,m_{\nu-1},m_\nu-1$ is decreasing, and we have $m_1+1\leqslant M_1$ and $m_\nu-1\ge 0$. Hence, in view of \eqref{A5}, the hypotheses \eqref{Hyp1} and \eqref{Hyp2} are still met when we put $m_1+1$ in place of $m_1$ and $m_\nu-1$ in place of $m_\nu$. However, $m_1+1$ is closer to $M_1$ than is $m_1$, and likewise $m_\nu-1$ is closer to $M_\nu$ than is $m_\nu$.
The value of $D$ associated with this new chain of exponents therefore decreases, and so we may apply the inductive hypothesis to find a weight $W$ on $S_\nu$ with $$ u_1^{m_1+1}u_2^{m_2}u_3^{m_3}\cdots u_{\nu-1}^{m_{\nu-1}} u_\nu^{m_\nu -1} \leqslant \sum_{\sigma\in S_\nu} W(\sigma) u_{\sigma(1)}^{M_1}u_{\sigma(2)}^{M_2}\cdots u_{\sigma(\nu)}^{M_\nu}. $$ Interchanging the roles of $u_1$ and $u_\nu$, and denoting by $\tau$ the transposition of $1$ and $\nu$, we obtain in like manner the bound $$ u_\nu^{m_1+1}u_2^{m_2}u_3^{m_3}\cdots u_{\nu-1}^{m_{\nu-1}} u_1^{m_\nu -1} \leqslant \sum_{\sigma\in S_\nu} W(\sigma\circ \tau)u_{\sigma(1)}^{M_1}u_{\sigma(2)}^{M_2} \cdots u_{\sigma(\nu)}^{M_\nu}. $$ If we now import the last two inequalities into the inequality preceding them, we find that \eqref{A1} holds with $$ w(\sigma) = \frac{r+1}{r+2} W(\sigma) + \frac1{r+2}W(\sigma\circ \tau), $$ and $w$ is a weight on $S_\nu$. This completes the induction in the second case. \end{proof}
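A concrete instance of Lemma \ref{lemA1} may help to fix ideas (the specific exponents are an illustrative choice of ours): for $\nu=2$, $m=(3,1)$ and $M=(4,0)$ the proof, via Young's inequality with $r=2$, produces the weighted bound $u_1^3u_2\leqslant \frac34u_1^4+\frac14u_2^4$; for $m=(2,2)$ and $M=(3,1)$ one obtains the weight assigning $1/2$ to each permutation, and the resulting inequality reduces to $u_1u_2(u_1-u_2)^2\ge 0$. A quick numerical check of the latter case:

```python
import numpy as np

rng = np.random.default_rng(2)
u1, u2 = rng.uniform(0.0, 10.0, size=2)

# Lemma lemA1 with nu = 2, m = (2, 2), M = (3, 1): the weight w assigns 1/2
# to each of the two permutations, giving
#     u1^2 u2^2 <= (u1^3 u2 + u1 u2^3) / 2,
# which is equivalent to u1 * u2 * (u1 - u2)^2 >= 0.
lhs = u1 ** 2 * u2 ** 2
rhs = (u1 ** 3 * u2 + u1 * u2 ** 3) / 2
assert lhs <= rhs
```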
\begin{lemma}\label{lemA2} Suppose that $m_j$, $M_j$ $(1\leqslant j\leqslant \nu)$ satisfy \eqref{Hyp1} and \eqref{Hyp2}. For $1\leqslant j\leqslant \nu$ let $h_j:\mathbb R^n \to [0,\infty)$ denote a Lebesgue measurable function. Then $$ \int h_{1}^{m_1}h_{2}^{m_2}\cdots h_{\nu}^{m_\nu}\, \mathrm d\mathbf{x} \leqslant \max_{\sigma\in S_\nu} \int h_{\sigma(1)}^{M_1}h_{\sigma(2)}^{M_2}\cdots h_{\sigma(\nu)}^{M_\nu}\,\mathrm d\mathbf{x}. $$ \end{lemma}
\begin{proof} Choose $u_j=h_j$ in Lemma \ref{lemA1} for $1\leqslant j\leqslant \nu$ and integrate. \end{proof}
For applications to systems of diagonal equations or inequalities, functions $h_j$ come with an equivalence relation between them. This we encode as a partition of the set of indices $j$ in the final lemma of this section.
\begin{lemma}\label{lemA3} Suppose that the exponents $m_j$, $M_j$ $(1\leqslant j\leqslant \nu)$ satisfy \eqref{Hyp1} and \eqref{Hyp2}. Let $s=m_1+m_2+\ldots +m_\nu$, and for $1\leqslant j\leqslant s$, let $h_j:\mathbb R^n\to [0,\infty)$ denote a Lebesgue measurable function. Finally, suppose that $J_1,J_2,\ldots ,J_\nu$ are sets with respective cardinalities $m_1,m_2,\ldots ,m_\nu$ that partition $\{1,2,\ldots,s\}$. Then, there exists a tuple $(i_1,\ldots ,i_\nu)$ and a permutation $\sigma\in S_\nu$, with $i_l\in J_{\sigma (l)}$ $(1\leqslant l\leqslant \nu)$, having the property that \begin{equation}\label{A7} \int h_1h_2 \cdots h_s\,\mathrm d \mathbf{x} \leqslant \int h_{i_1}^{M_1}h_{i_2}^{M_2}\ldots h_{i_\nu}^{M_\nu} \,\mathrm d\mathbf{x}. \end{equation} \end{lemma}
\begin{proof} For each suffix $l$ with $1\leqslant l\leqslant \nu$, it follows from \eqref{T} that $$ \prod_{j\in J_l} h_j \leqslant \frac1{m_l}\sum_{j\in J_l} h_j^{m_l}. $$ Multiplying these inequalities together yields the bound $$ h_1h_2 \cdots h_s \leqslant \frac{1}{m_1\cdots m_\nu} \sum_{j_1\in J_1} \cdots \sum_{j_\nu\in J_\nu} h_{j_1}^{m_1}h_{j_2}^{m_2} \cdots h_{j_\nu}^{m_\nu}. $$ Now integrate. One then finds that there exists a tuple $(j_1,\ldots ,j_\nu)$, with $j_l\in J_l$ $(1\leqslant l\leqslant \nu)$, for which $$ \int h_1h_2 \cdots h_s\,\mathrm d \mathbf{x} \leqslant \int h_{j_1}^{m_1}h_{j_2}^{m_2}
\ldots h_{j_\nu}^{m_\nu} \,\mathrm d\mathbf{x}. $$ Finally, we apply Lemma \ref{lemA2}. One then finds that for some $\sigma\in S_\nu$ the upper bound \eqref{A7} holds with $i_l=j_{\sigma(l)}$ $(1\leqslant l\leqslant \nu)$. \end{proof}
\section{Smooth Farey dissections} In this section we describe a partition of unity that mimics the traditional Farey dissection. With other applications in mind, we work in some generality. Throughout this section we take $X$ and $Y$ to be real numbers with $1\leqslant Y\leqslant \frac12 \sqrt X$, and then let $\mathfrak N(q,a)$ denote the interval of all real $\alpha$ satisfying
$|q\alpha -a |\leqslant YX^{-1}$. Define $\mathfrak N=\mathfrak N_{X,Y} $ as the union of all $\mathfrak N(q,a)$ with $1\leqslant q\leqslant Y$, $a\in\mathbb Z$ and $(a,q)=1$. Note that the intervals $\mathfrak N(q,a)$ comprising $\mathfrak N$ are pairwise disjoint. We also write $\mathfrak M=\mathfrak M_{X,Y}$ for the set $\mathfrak N\cap [0,1]$. For appropriate choices of the parameter $Y$, the latter is a typical choice of major arcs in applications of the Hardy-Littlewood method.\par
The set $\mathfrak N$ has period 1. Its indicator function ${\bf 1}_{\mathfrak N}$ has finitely many discontinuities in $[0,1)$, implying unwanted delicacies concerning the convergence of the Fourier series of ${\bf 1}_{\mathfrak N}$. We avoid complications associated with this feature by a familiar convolution trick, which we now describe.
Define the positive real number $$ \kappa = \int_{-1}^1 \exp(1/(t^2-1))\,{\,{\rm d}} t, $$ and the function $K : \mathbb R\to [0,\infty)$ by
$$ K(t) = \left\{\begin{array}{ll} \kappa^{-1} \exp(1/(t^2-1)) & \text{if } |t|<1, \\
0 & \text{if } |t|\ge 1.\end{array} \right. $$ As is well known, the function $K(t)$ is smooth and even. We scale this function with the positive parameter $X$ in the form $$ K_X(t)= 4X\, K(4Xt). $$
Then $ K_X$ is supported on the interval $|t|\leqslant 1/(4X)$ and satisfies the important relation \begin{equation}\label{2.0} \int_{-\infty}^\infty K_X(t){\,{\rm d}} t = \int_{-\infty}^\infty K(t){\,{\rm d}} t=1. \end{equation} We now define the function ${\sf N}_{X,Y}: \mathbb R\to [0,1]$ by \begin{equation}\label{2.1} {\sf N}_{X,Y}(\alpha) = \int_{-\infty}^\infty {\bf 1}_{\mathfrak N}(\alpha-t) K_X(t){\,{\rm d}} t = \int_{-\infty}^\infty {\bf 1}_{\mathfrak N}(t) K_X(\alpha-t){\,{\rm d}} t. \end{equation} The main properties of this function ${\sf N}={\sf N}_{X,Y}$ are listed in the next lemma.
\begin{lemma}\label{majapprox} The function ${\sf N}={\sf N}_{X,Y}$ is smooth, and for all $\alpha\in\mathbb R$ one has ${\sf N}(\alpha)\in [0,1]$. Further, whenever $2\leqslant Y\leqslant \frac14 \sqrt X$, the inequalities \begin{equation}\label{2.2} {\bf 1}_{{\mathfrak N}_{X, Y/2}} (\alpha) \leqslant {\sf N} (\alpha)\leqslant {\bf 1}_{{\mathfrak N}_{X,2 Y}}(\alpha) \end{equation} and \begin{equation}\label{2.3} {\sf N}'(\alpha) \ll X, \quad {\sf N}''(\alpha) \ll X^2 \end{equation} hold uniformly in $\alpha\in\mathbb R$. \end{lemma}
\begin{proof} The integrands in \eqref{2.1} are non-negative, so ${\sf N}(\alpha)\ge 0$, while \eqref{2.0} shows that ${\sf N}(\alpha)\leqslant 1$. Since $K$ is smooth and compactly supported, the second integral formulation of ${\sf N}$ in \eqref{2.1} shows that $\sf N$ is smooth, and that the derivative is obtained by differentiating the integrand. Thus, we obtain $$ {\sf N}'(\alpha) = \int_{\mathfrak N} \frac{\partial}{\partial \alpha} K_X(\alpha-t){\,{\rm d}} t, $$ whence
$$ |{\sf N}'(\alpha)| \leqslant 4X \int_{-1}^1 |K'(t)|{\,{\rm d}} t. $$ This confirms the inequality for the first derivative in \eqref{2.3}. The bound for the second derivative follows in like manner by differentiating again.\par
We now turn to the task of establishing \eqref{2.2}. First suppose that $\alpha\in\mathfrak N_{X,Y/2}$. Then, there is a unique pair of integers $a\in\mathbb Z$ and $q\in\mathbb N$ with $(a,q)=1$, $q\leqslant \frac12 Y$ and
$|q\alpha-a|\leqslant \frac12 YX^{-1}$. For $|t|\leqslant (4X)^{-1}$ we then have
$$ \Big| (\alpha-t) - \frac{a}{q}\Big| \leqslant\frac1{4X} + \frac{Y}{2qX} \leqslant \frac{Y}{qX}. $$ Thus $\alpha-t \in\mathfrak N(q,a)\subseteq \mathfrak N_{X,Y}$. Since $K_X$ is supported on $[-1/(4X), 1/(4X)]$, we deduce from \eqref{2.0} and \eqref{2.1} that $$ {\sf N}(\alpha) \ge \int_{-\infty}^\infty {\bf 1}_{\mathfrak N(q,a)}(\alpha-t) K_X(t){\,{\rm d}} t \ge \int_{-1}^1 K_X(t){\,{\rm d}} t =1. $$ It follows that one has ${\sf N}(\alpha)=1$ for all $\alpha\in\mathfrak N_{X,Y/2}$. However, we know already that ${\sf N}(\alpha)$ is non-negative for all $\alpha\in\mathbb R$, and thus we have proved the first of the two inequalities in \eqref{2.2}.\par
We complete the proof of the lemma by addressing the second inequality in \eqref{2.2}. Suppose that ${\sf N}(\alpha)>0$. Then, it follows from \eqref{2.1} that for some
$t\in \mathbb R$ with $|t|\leqslant (4X)^{-1}$, one has $\alpha-t\in \mathfrak N_{X,Y}$. Hence, there exist $a\in\mathbb Z$ and $q\in\mathbb N$
with $(a,q)=1$, $q\leqslant Y$ and $|\alpha-t-a/q|\leqslant Y/(qX)$. By the triangle inequality,
$$ \Big|\alpha-\frac{a}{q}\Big|\leqslant \frac{Y}{qX} + \frac{1}{4X} \leqslant \frac{2Y}{qX}.$$ This shows that $\alpha\in{\mathfrak N}_{X,2Y}$. Since $0\leqslant {\sf N}(\alpha)\leqslant 1$, the second of the inequalities in \eqref{2.2} also follows. \end{proof}
We consider ${\sf N}= {\sf N}_{X,Y}$ as a smooth model of the major arcs $\mathfrak N_{X,Y}$. It is convenient to define corresponding minor arcs $\mathfrak n = \mathfrak n_{X,Y}$, with $\mathfrak n_{X,Y}= \mathbb R\setminus \mathfrak N_{X,Y}$, and to write $\mathfrak m = [0,1]\setminus \mathfrak M$ for the set of minor arcs complementary to $\mathfrak M$. The smoothed version of $\mathfrak n_{X,Y}$ is the function ${\sf n}_{X,Y}:\mathbb R\to [0,1]$ defined by $$ {\sf n}(\alpha) = \int_{-\infty}^\infty {\bf 1}_{\mathfrak n}(\alpha-t) K_X(t){\,{\rm d}} t. $$ We trivially have ${\bf 1}_{\mathfrak N}(\alpha)+{\bf 1}_{\mathfrak n}(\alpha)=1$ for all $\alpha\in \mathbb R$, so it is a consequence of \eqref{2.0} and \eqref{2.1} that ${\sf n}={\sf n}_{X,Y}$ satisfies the identity \begin{equation} \label{2.4} {\sf N}(\alpha)+{\sf n}(\alpha)=1. \end{equation} The properties of $\sf n$ can therefore be deduced from the corresponding facts concerning $\sf N$. In particular, Lemma \ref{majapprox} translates as follows.
\begin{lemma}\label{minapprox} The function ${\sf n}={\sf n}_{X,Y}$ is smooth, and for all $\alpha\in\mathbb R$ one has ${\sf n}(\alpha)\in [0,1]$. Further, whenever $2\leqslant Y\leqslant \frac14 \sqrt X$, the inequalities \[ {\bf 1}_{{\mathfrak n}_{X, 2Y}} (\alpha) \leqslant {\sf n} (\alpha)\leqslant {\bf 1}_{{\mathfrak n}_{X, Y/2}}(\alpha) \] and \[ {\sf n}'(\alpha) \ll X, \quad {\sf n}''(\alpha) \ll X^2 \] hold uniformly in $\alpha\in\mathbb R$. \end{lemma}
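Indeed, since ${\bf 1}_{{\mathfrak n}_{X,Y}}=1-{\bf 1}_{{\mathfrak N}_{X,Y}}$ for every admissible $Y$, subtracting the inequalities \eqref{2.2} from $1$ and invoking \eqref{2.4} gives
\[ {\sf n}(\alpha)=1-{\sf N}(\alpha)\geqslant 1-{\bf 1}_{{\mathfrak N}_{X,2Y}}(\alpha)={\bf 1}_{{\mathfrak n}_{X,2Y}}(\alpha), \]
together with the corresponding upper bound, while the derivative estimates are inherited from \eqref{2.3} because ${\sf n}'=-{\sf N}'$ and ${\sf n}''=-{\sf N}''$.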
\section{Fractional powers of Weyl sums} In this section we consider a trigonometric polynomial \begin{equation} \label{3.1} T(\alpha) = \sum_{M<n\leqslant M+N} c_n e(\alpha n) \end{equation} with complex coefficients $c_n$. The associated ordinary polynomial \begin{equation} \label{3.2} P(z) = \sum_{n=1}^N c_{M+n} z^n \end{equation} is related to $T$ via the identity \begin{equation} \label{TT} T(\alpha) = e(M\alpha)P(e(\alpha)). \end{equation}
\begin{lemma} \label{lem3.1} Let $k\in\mathbb N$. Then, for any real number $u>k$, the real function
$\Omega_u:\mathbb R \to \mathbb R$, defined by $\Omega_u(\alpha)=|T(\alpha)|^u$, is $k$ times continuously differentiable. \end{lemma}
\begin{proof} In view of \eqref{TT}, we see that it suffices to prove this result in the special case where $M=0$. This reduction step noted, we proceed by a succession of elementary exercises.\par
Let $u\in\mathbb R$. We begin by considering the function $\theta_u : \mathbb R\setminus\{0\} \to \mathbb R$ defined by
$\theta_u(\alpha)=|\alpha|^u$. This function is differentiable on $ \mathbb R\setminus\{0\}$, and one has
$$ \theta'_u(\alpha) = u|\alpha|^u\alpha^{-1} =u\theta_u(\alpha)\alpha^{-1}.$$ By induction, it follows that for any $l\in\mathbb N$ the function $\theta_u$ is $l$ times differentiable, and that the $l$-th derivative is \begin{equation} \label{3.3} \theta_u^{(l)}(\alpha) = u(u-1)\cdots (u-l+1)\theta_u(\alpha)\alpha^{-l}. \end{equation} Now suppose that $u>0$. Then, by putting $\theta_u(0)=0$ we extend $\theta_u$ to a continuous function on $\mathbb R$. More generally, whenever $u>l$, then $$ \lim_{\alpha\to 0} \frac{\theta_u(\alpha)}{\alpha^l} = 0.$$ By \eqref{3.3}, this shows that whenever $u>l$ then $\theta_u^{(l)}$ extends to a continuous function on $\mathbb R$ by choosing $\theta_u^{(l)}(0)=0$, and that $\theta_u^{(l-1)}$ is differentiable at $0$ with derivative $0$. We summarize this last statement as follows: \\[2ex] (a) {\em Let $k\in\mathbb N$ and $u>k$. Then $\theta_u$ is $k$ times continuously differentiable on $\mathbb R$.} \\
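By way of illustration, in the case $u=3$, $k=2$ of conclusion (a), the function $\theta_3(\alpha)=|\alpha|^3$ satisfies
\[ \theta_3'(\alpha)=3\alpha|\alpha|, \qquad \theta_3''(\alpha)=6|\alpha|, \]
both continuous on $\mathbb R$, whereas $\theta_3''$ fails to be differentiable at $\alpha=0$; thus the hypothesis $u>k$ cannot in general be relaxed to $u\geqslant k$. \\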
Next, for $u>0$, consider the function $\rho_u: \mathbb R\to \mathbb R$ defined by
putting $\rho_u(\alpha)=|\sin \pi\alpha|^u$. For $\alpha\in (0,1)$ one has $\sin \pi \alpha >0$, whence $\rho_u(\alpha)=(\sin \pi \alpha)^u$. Thus $\rho_u$ is smooth on $(0,1)$. But $\rho_u$ has period $1$, so it suffices to examine its differentiability properties at $\alpha=0$, a point at which $\rho_u$ is continuous. For all real $\alpha$ we have $\sin \pi\alpha= \pi\alpha E(\alpha)$, where $$ E(\alpha)= \sum_{j=0}^\infty (-1)^j \frac{(\pi\alpha)^{2j}}{(2j+1)!}. $$ The function $E$ is smooth on $\mathbb R$ with $E(0)=1$. Hence $E(\alpha)>0$ in a neighbourhood of $0$ where we then also have
$$ \rho_u(\alpha) = \pi^u |\alpha|^u E(\alpha)^u. $$ By applying the product rule in combination with our earlier conclusion (a), we therefore conclude as follows: \\[2ex] (b) {\em Let $k\in\mathbb N$ and $u>k$. Then $\rho_u$ is $k$ times continuously differentiable on} $\mathbb R$. \\
We now turn to the function $T$ where we suppose that $M=0$, as we may. The sum in \eqref{3.1} defines a holomorphic function of the complex variable $\alpha$, and hence the function $T:\mathbb R\to\mathbb C$ is a smooth map of period $1$. The sum $$ \bar T(\alpha)= \sum_{1\leqslant n\leqslant N} \bar c_n e(-\alpha n) $$ defines another trigonometric polynomial, and for $\alpha\in\mathbb R$ we have $\overline{T(\alpha)}= \bar T(\alpha)$. Consequently, for real $\alpha$ we have \begin{equation}
\label{3.31} |T(\alpha)|^2 = T(\alpha)\bar T(\alpha), \end{equation}
whence the function $|T|^2:\mathbb R\rightarrow \mathbb R$, given by
$\alpha\mapsto |T(\alpha)|^2$, is smooth on $\mathbb R$ with \begin{equation}
\label{3.4} \frac{\mathrm d}{\mathrm d \alpha}\,|T(\alpha)|^2 = T'(\alpha)\bar T(\alpha) + T(\alpha)\bar T'(\alpha). \end{equation} On noting that $T(\alpha)^j$ is again a trigonometric polynomial for all $j\in \mathbb N$,
we see that $|T(\alpha)|^{2j}$ is smooth. Hence, from now on, we may suppose that $u$ {\em is a real number but not an even natural number}. Also, the conclusion of Lemma \ref{lem3.1} is certainly true in the trivial case where $c_n=0$ for all $n$. In the contrary case, the polynomial in \eqref{3.2} has at most finitely many zeros. Therefore, the set $$ Z= \{\alpha\in\mathbb R: T(\alpha)=0\} $$ is $1$-periodic with $Z\cap [0,1)$ finite, and consequently $\mathbb R\setminus Z$ is open.
\par We next examine the function $|T|^u:\mathbb R\setminus Z\rightarrow \mathbb R$, given by
$\alpha\mapsto |T(\alpha)|^u$. \\[2ex] (c) {\em When $u$ is real but not an even natural number, the function
$|T|^u$ is smooth}. \\[2ex]
In order to confirm this assertion, note that $|T(\alpha)|^u=\theta_{u/2}(|T(\alpha)|^2)$. By applying the chain rule in combination with the preamble to conclusion (a) and
\eqref{3.4}, we find that $|T(\alpha)|^u$ is differentiable for $\alpha\in\mathbb R \setminus Z$. Indeed, \begin{align}
\frac{\mathrm d}{\mathrm d \alpha}\,|T(\alpha)|^u &=\theta'_{u/2}(|T(\alpha)|^2) \big(T'(\alpha)\bar T(\alpha) + T(\alpha)\bar T'(\alpha)\big) \notag\\
& = \frac{u}{2}|T(\alpha)|^{u-2} \big(T'(\alpha)\bar T(\alpha) + T(\alpha)\bar T'(\alpha)\big). \label{3.41} \end{align} Since the final factor on the right hand side here is smooth, we may repeatedly apply the
product rule to conclude that $|T(\alpha)|^u$ is smooth on $\mathbb R\setminus Z$, as claimed.
Finally, we consider any element $\alpha_0\in Z$. Then one has $P(e(\alpha_0))=0$. Since $P$ is not the zero polynomial, there exist $r\in\mathbb N$ and a polynomial $Q\in \mathbb C[z]$ with $Q(e(\alpha_0))\neq 0$ such that $P(z)=(z-e(\alpha_0))^r Q(z)$. Write $U(\alpha)=Q(e(\alpha))$ for the trigonometric polynomial associated with $Q$. Then $T(\alpha)=\big(e(\alpha)-e(\alpha_0)\big)^r U(\alpha)$. For $u>0$ and all real $\alpha$ we then have
$$ |T(\alpha)|^u=|e(\alpha)-e(\alpha_0)|^{ru} |U(\alpha)|^u
= |2\sin \pi(\alpha-\alpha_0)|^{ru} |U(\alpha)|^u. $$ There is an open neighbourhood of $\alpha_0$ on which $U(\alpha)$ does not vanish. By
our conclusion (c) it is apparent that $|U(\alpha)|^u$ is smooth on this neighbourhood. If
$u>k$, then the conclusion (b) implies that the function $|2\sin \pi(\alpha-\alpha_0)|^{ru}$ is $k$ times continuously differentiable. The conclusion of the lemma therefore follows by application of the product rule. \end{proof}
We mention in passing that if more is known about the zeros of $P$, then the argument that we have presented shows more. For example, if all the zeros in $Z$ are double zeros and
$u>k$, then $|T(\alpha)|^u$ is $2k$ times differentiable.
\begin{lemma}\label{lem3.2} Let $W:\mathbb R \to \mathbb R$ be a twice continuously differentiable function of period $1$, and let $u\ge 2$. For $l\in\mathbb Z$ let \begin{equation}\label{3.51}
b_l = \int_0^1 W(\alpha) |T(\alpha)|^u e(-\alpha l){\,{\rm d}}\alpha. \end{equation} Then, for all $l\in\mathbb Z\setminus\{0\}$, one has \begin{equation}
\label{3.5} |b_l|\leqslant \frac{1}{(2\pi l)^2} \int_0^1 \Big|
\frac{\mathrm d^2}{\mathrm d \alpha^2}\,W(\alpha) |T(\alpha)|^u\Big|{\,{\rm d}}\alpha. \end{equation} Moreover, for all $\alpha\in\mathbb R$ one has the Fourier series expansion \begin{equation}\label{3.6}
W(\alpha)|T(\alpha)|^u = \sum_{l\in \mathbb Z}b_le(\alpha l), \end{equation} in which the right hand side converges absolutely and uniformly on $\mathbb R$. \end{lemma}
\begin{proof} By \eqref{3.31} and Lemma \ref{lem3.1}, the condition $u\ge 2$ ensures that
$W(\alpha)|T(\alpha)|^u$ is twice continuously differentiable. Hence, the integral on the right hand side of \eqref{3.5} exists, and the upper bound \eqref{3.5} follows from \eqref{3.51} by integrating by parts twice. Furthermore, the upper bound \eqref{3.5} ensures that the series in \eqref{3.6} converges absolutely and uniformly on $\mathbb R$. Thus, by \cite[Chapter II, Theorem 8.14]{Z}, this Fourier series sums to
$W(\alpha)|T(\alpha)|^u$. \end{proof}
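As a numerical illustration of \eqref{3.51} in the simplest case $W=1$ and $u=2$, the coefficients $b_l$ reduce to the autocorrelations $\sum_n c_n\bar c_{n-l}$ of the coefficients of $T$. The following sketch (the coefficient choice and sample count are illustrative, not taken from the text) confirms this by numerical quadrature.

```python
import numpy as np

# Illustrative trigonometric polynomial T(alpha) = e(alpha) + 2 e(2 alpha),
# where e(x) = exp(2 pi i x); the coefficients c_n are hypothetical choices.
c = {1: 1.0, 2: 2.0}

def T(alpha):
    return sum(cn * np.exp(2j * np.pi * n * alpha) for n, cn in c.items())

def b(l, samples=4096):
    # b_l = \int_0^1 W(alpha)|T(alpha)|^u e(-l alpha) d alpha with W = 1, u = 2,
    # computed by averaging over an equally spaced grid (exact for trigonometric
    # polynomials of low degree, up to floating-point error).
    alpha = np.arange(samples) / samples
    return np.mean(np.abs(T(alpha)) ** 2 * np.exp(-2j * np.pi * l * alpha))

# Autocorrelation prediction: b_0 = 1 + 4 = 5, b_{1} = b_{-1} = 2, else b_l = 0.
b0, b1, bm1, b2 = b(0), b(1), b(-1), b(2)
```

Here $W|T|^2$ is itself a trigonometric polynomial, so Lemma \ref{lem3.2} is trivial; the decay $|b_l|\ll l^{-2}$ only becomes a genuine constraint for non-polynomial powers $|T|^u$.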
In this paper Lemmata \ref{lem3.1} and \ref{lem3.2} will only be used with the quartic Weyl sum $f$, as defined in \eqref{ff}, in the role of $T$. The weight $W$ will be either constantly $1$ or a smoothed minor arc weight ${\sf n}$. Let $u>0$ and define the Fourier coefficient \begin{equation}\label{3.8}
\psi_u(n) = \int_0^1 |f(\alpha)|^u e(-\alpha n) {\,{\rm d}}\alpha. \end{equation} Also, with a parameter $Y$ at our disposal within the range $1\leqslant Y\leqslant \frac14 P^2$, we consider the smooth minor arcs ${\sf n}(\alpha )={\sf n}_{P^4,Y}(\alpha)$ and introduce the related Fourier coefficient \begin{equation}\label{3.9}
\phi_u(n) = \int_0^1 {\sf n}(\alpha)|f(\alpha)|^u e(-\alpha n) {\,{\rm d}}\alpha. \end{equation}
\begin{lemma} \label{lem3.3} Suppose that $u\ge 2$ and $1\leqslant Y\leqslant \tfrac{1}{4}P^2$. Then, for all $n\in\mathbb Z\setminus\{0\}$, one has
$$ |\phi_u(n)|+|\psi_u(n)|\ll P^{u+8}n^{-2}. $$ \end{lemma}
\begin{proof}
We first compute the derivatives of $|f(\alpha)|^u$. Suppose temporarily that $u$ is not an even natural number. By \eqref{3.41}, whenever $f(\alpha)\neq 0$, we have
$$ \frac{\mathrm d}{\mathrm d \alpha} \, |f(\alpha)|^u =
\frac{u}{2}\, |f(\alpha)|^{u-2}\left( f'(\alpha)\bar f(\alpha)+f(\alpha)\bar f'(\alpha)\right),$$ and we may differentiate again to confirm the identity \begin{align*}
\frac{\mathrm d^2}{\mathrm d \alpha^2} \, |f(\alpha)|^u=&\frac{u(u-2)}4\,
|f(\alpha)|^{u-4}\big(f'(\alpha)\bar f(\alpha)+f(\alpha)\bar f'(\alpha)\big)^2\\
& + \frac{u}{2}\, |f(\alpha)|^{u-2} \big(f''(\alpha)\bar f(\alpha) +2 f'(\alpha)\bar f'(\alpha) + f(\alpha)\bar f''(\alpha)\big). \end{align*} These formulae hold for all $\alpha \in \mathbb R$ when $u$ is an even natural number, and thus
$$ \Big| \frac{\mathrm d}{\mathrm d \alpha} \, |f(\alpha)|^u \Big|
\leqslant u |f(\alpha)|^{u-1} |f'(\alpha)| $$ and
$$ \Big|\frac{\mathrm d^2}{\mathrm d \alpha^2} \, |f(\alpha)|^u \Big|
\leqslant u(u-1) |f(\alpha)|^{u-2} |f'(\alpha)|^2 + u |f(\alpha)|^{u-1} |f''(\alpha)|.$$ Hence, the trivial estimates $f(\alpha)\ll P$, $f'(\alpha)\ll P^5$ and $f''(\alpha)\ll P^9$ suffice to conclude that the upper bounds \begin{equation}\label{3.10}
\frac{\mathrm d}{\mathrm d \alpha} \, |f(\alpha)|^u \ll P^{u+4}\quad \text{and}\quad
\frac{\mathrm d^2}{\mathrm d \alpha^2} \, |f(\alpha)|^u \ll P^{u+8} \end{equation} hold for all $\alpha\in\mathbb R$ when either $u=2$ or $f(\alpha)\neq 0$. However, when $u>2$ these derivatives will be zero whenever $f(\alpha)=0$, so the inequalities \eqref{3.10} hold uniformly in $\alpha\in\mathbb R$. The upper bound $\psi_u(n)\ll P^{u+8}n^{-2}$ is now immediate from Lemma \ref{lem3.2}. Furthermore, an application of the product rule in combination with Lemma \ref{minapprox} and \eqref{3.10} shows that
$$\frac{\mathrm d}{\mathrm d \alpha} \,{\sf n}(\alpha) |f(\alpha)|^u \ll P^{u+4}\quad
\text{and}\quad \frac{\mathrm d^2}{\mathrm d \alpha^2} \, {\sf n}(\alpha)|f(\alpha)|^u \ll P^{u+8}.$$ The estimate $\phi_u(n)\ll P^{u+8}n^{-2}$ therefore follows by invoking Lemma \ref{lem3.2} once again, and this completes the proof of the lemma. \end{proof}
\section{Cubic moments of Fourier coefficients} The principal results in this section are the upper bounds for cubic moments of $\phi_u(n)$ and $\psi_u(n)$ embodied in Theorem \ref{thm4.1} below. The proof of these estimates involves a development of the ideas underpinning the main line of thought in our earlier paper \cite{Jems1}. For $u>0$ it is convenient to define \begin{equation}\label{4.1} \delta(u)=(25-3u)/6. \end{equation} In many of the computations later it is useful to note that \begin{equation}\label{4.13} 3u-8+\delta(u)= \frac52 u -\frac{23}6. \end{equation}
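The identity \eqref{4.13} is immediate from \eqref{4.1}; the following exact-arithmetic check (a sketch, not part of the argument) also records that $\delta(25/3)=0$, consistent with the sign discussion below Theorem \ref{thm4.1}.

```python
from fractions import Fraction as F

def delta(u):
    # delta(u) = (25 - 3u)/6, as in (4.1).
    return (25 - 3 * u) / F(6)

# Check (4.13): 3u - 8 + delta(u) = (5/2)u - 23/6 for a spread of rational u.
identity_holds = all(3 * u - 8 + delta(u) == F(5, 2) * u - F(23, 6)
                     for u in (F(6), F(15, 2), F(8), F(25, 3), F(11)))
delta_at_threshold = delta(F(25, 3))
```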
\begin{theorem}\label{thm4.1} Let $u$ be a real number with $6\leqslant u\leqslant 25/3$. Then \begin{equation}
\label{4.2} \sum_{n\in \mathbb Z}|\psi_u(n)|^3 \ll P^{3u-8+\delta(u)+\varepsilon}. \end{equation} Further, when $2P^{4/15}\leqslant Y\leqslant P/16$ and $6\leqslant u\leqslant 11$, one has \begin{equation}\label{4.3}
\sum_{n\in \mathbb Z}|\phi_u(n)|^3 \ll P^{3u-8+\delta(u)+\varepsilon}. \end{equation} \end{theorem}
When $u\ge 6$, the contribution from the major arcs to the sum in \eqref{4.2} is easily seen to be of order $P^{3u-8}$. Since $\delta(u)$ is negative for $u>25/3$, we cannot expect that the upper bound \eqref{4.2} holds for such $u$. However, as is evident from \eqref{4.3}, a minor arcs version remains valid for $u\leqslant 11$. Before we embark on the proof of this theorem, we summarize some mean value estimates related to the Weyl sum \eqref{ff}. In the following two lemmata, we assume that $1\leqslant Y\leqslant P/8$ and write $\mathfrak M=\mathfrak M_{P^4,Y}$ and $\mathfrak m=\mathfrak m_{P^4,Y}$. It is useful to note that $\mathfrak m_{P^4,Y}=\mathfrak m_{P^4,P/8}\cup\mathfrak K$, where $\mathfrak K = \mathfrak M_{P^4,P/8} \setminus \mathfrak M_{P^4,Y}$. Then, from \cite[Lemma 5.1]{V89}, we have the bounds \begin{equation}\label{4.51}
\int_{\mathfrak M} |f(\alpha)|^{6}{\,{\rm d}} \alpha \ll P^{2}\quad \text{and}\quad
\int_{\mathfrak K} |f(\alpha)|^{6}{\,{\rm d}} \alpha \ll P^{2}Y^{\varepsilon-1/4}. \end{equation}
\begin{lemma}\label{lem4.1} Suppose that $P^{4/15}\leqslant Y\leqslant P/8$. Then
$$ \int_{\mathfrak m} |f(\alpha)|^{20}{\,{\rm d}} \alpha \ll P^{15+\varepsilon}. $$ \end{lemma} \begin{proof} For $Y=P/8$, the desired estimate is the case $k=4$, $w=20$ of Wooley \cite[Lemma 3.1]{Woo2016}. For smaller values of $Y$, we make use of the case $Y=P/8$ and apply the second bound of \eqref{4.51}. On combining \cite[Theorem 4.1]{hlm} with \cite[Lemma 2.8 and Theorem 4.2]{hlm}, moreover, one readily confirms that the upper bound $f(\alpha)\ll PY^{-1/4}$ holds uniformly for $\alpha\in\mathfrak K$. Consequently, one has the estimate
$$ \int_{\mathfrak K} |f(\alpha)|^{20}{\,{\rm d}} \alpha \ll P^{16}Y^{\varepsilon-15/4},$$ and the conclusion of the lemma follows. \end{proof}
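The final deduction here rests on the exponent comparison $P^{16}Y^{-15/4}\leqslant P^{15}$, valid precisely because $Y\geqslant P^{4/15}$; in exact arithmetic (an illustrative check only):

```python
from fractions import Fraction as F

# With Y = P^{4/15}, the bound P^16 * Y^{-15/4} has P-exponent
# 16 - (4/15)*(15/4) = 15, matching the target P^{15+eps};
# any larger Y only decreases this exponent.
exponent = 16 - F(4, 15) * F(15, 4)
```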
\begin{lemma}\label{lemma5.3} When $8\leqslant u\leqslant 14$, one has \begin{equation}\label{4.4}
\int_0^1 |f(\alpha)|^u{\,{\rm d}} \alpha \ll P^{\frac56 u-\frac53+\varepsilon}. \end{equation} Meanwhile, when $8\leqslant u\leqslant 20$, then uniformly in $P^{4/15}\leqslant Y\leqslant P/8$, one has \begin{equation}\label{4.5}
\int_{\mathfrak m} |f(\alpha)|^u{\,{\rm d}} \alpha \ll P^{\frac56 u-\frac53+\varepsilon}. \end{equation} \end{lemma}
\begin{proof} It is a consequence of Hua's Lemma \cite[Lemma 2.5]{hlm} that \begin{equation}\label{4.6}
\int_{\mathfrak m} |f(\alpha)|^8{\,{\rm d}} \alpha \leqslant \int_0^1 |f(\alpha)|^8{\,{\rm d}} \alpha \ll P^{5+\varepsilon}. \end{equation} One interpolates linearly between this estimate and the bound established in Lemma \ref{lem4.1} via H\"older's inequality to confirm the upper bound \eqref{4.5} for $8\leqslant u\leqslant 20$. The upper bound \eqref{4.4} then follows on noting that for $6\leqslant u\leqslant 14$, it follows from \eqref{4.51} that \[
\int_{\mathfrak M}|f(\alpha)|^u{\,{\rm d}} \alpha \ll P^{u-4}\ll P^{\frac56 u-\frac53}. \] Since $[0,1]=\mathfrak M\cup \mathfrak m$, the desired conclusion follows at once. \end{proof}
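The interpolation just performed is linear in the exponents: writing $u=8\lambda+20(1-\lambda)$, H\"older's inequality bounds the $u$-th moment by $P$ raised to the power $5\lambda+15(1-\lambda)$. A sketch of this exponent arithmetic (illustrative only):

```python
from fractions import Fraction as F

def holder_exponent(u):
    # Interpolate between the 8th moment (P-exponent 5) and the 20th
    # moment (P-exponent 15): u = 8*lam + 20*(1 - lam).
    lam = F(20 - u, 12)
    return 5 * lam + 15 * (1 - lam)

# The interpolated exponent agrees with (5/6)u - 5/3 throughout 8 <= u <= 20.
interp_ok = all(holder_exponent(F(u)) == F(5, 6) * u - F(5, 3)
                for u in range(8, 21))
# Major-arc comparison: u - 4 <= (5/6)u - 5/3 holds precisely for u <= 14.
major_ok = all(F(u) - 4 <= F(5, 6) * u - F(5, 3) for u in range(6, 15))
major_fails_at_15 = not (F(15) - 4 <= F(5, 6) * 15 - F(5, 3))
```

The failure of the major-arc comparison beyond $u=14$ explains the upper limit in \eqref{4.4}.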
In the special case $u=14$, the first conclusion of Lemma \ref{lemma5.3} assumes the simple form already announced in \eqref{1.5}.
\begin{lemma}\label{lem4.3} Let $\mathscr Z$ be a set of $Z$ integers. Then
$$ \int_0^1 \bigg|\sum_{z\in\mathscr Z} e(\alpha z) \bigg|^2 |f(\alpha)|^2{\,{\rm d}}\alpha \ll PZ +P^{1/2+\varepsilon}Z^{3/2} $$ and
$$ \int_0^1 \Big|\sum_{z\in\mathscr Z} e(\alpha z) \Big|^2 |f(\alpha)|^4{\,{\rm d}}\alpha \ll P^3 Z +P^{2+\varepsilon}Z^{3/2}. $$ \end{lemma} \begin{proof} This is essentially contained in \cite[Lemma 6.1]{KW}, where these estimates are established in the case when $\mathscr Z$ is contained in $[0,P^4]$. As pointed out in \cite[Lemma 2.2]{BW21} this condition is not required. \end{proof}
We now have available sufficient infrastructure to derive upper bounds for cubic moments of $\phi_u(n)$ and $\psi_u(n)$.
\begin{proof}[The proof of Theorem \ref{thm4.1}] Let $\vartheta_u (n)$ denote one of $\psi_u(n)$, $\phi_u(n)$. On examining the statement of the theorem, it is apparent that we may assume that in the former case we have $6\leqslant u\leqslant 25/3$, and in the latter case $6\leqslant u\leqslant 11$ and $2P^{4/15}\leqslant Y\leqslant P/16$. We begin with the observation that, by Lemma \ref{lem3.3}, one has $\vartheta_u (n) \ll P^{u+8} n^{-2}$. Consequently, when $u\ge 6$, one has
$$\sum_{|n|>P^7}|\vartheta_u(n)|^3+
\sum_{\substack{|n|\leqslant P^7\\ |\vartheta_u(n)|\leqslant 1}}|\vartheta_u(n)|^3
\ll P^7+P^{3u+24}\sum_{|n|>P^7}n^{-6}\ll P^{3u-11}.$$
\par It remains to consider the contribution of those integers $n$ with $|n|\leqslant P^7$ and
$|\vartheta_u(n)|>1$. We put $\Theta(\alpha)=1$ when $\vartheta_u=\psi_u$, and $\Theta(\alpha)={\sf n}(\alpha)$ when $\vartheta_u=\phi_u$. Then the definitions \eqref{3.8} and \eqref{3.9} take the common form \begin{equation}\label{Thet}
\vartheta_u(n) = \int_0^1 \Theta(\alpha)|f(\alpha)|^u e(-\alpha n){\,{\rm d}}\alpha. \end{equation} By Lemma \ref{minapprox}, it follows that $\Theta(\alpha)\in [0,1]$. Thus, by Lemma \ref{lemma5.3}, one finds that
$$|\vartheta_u(n)|\leqslant \vartheta_u(0)\leqslant \psi_u(0)\ll P^{\frac56 u-\frac53+\varepsilon}\quad (8\leqslant u\leqslant 11).$$ In the missing cases where $6\leqslant u< 8$ one interpolates between \eqref{4.6} and the elementary inequality \begin{equation}\label{Hua4}
\int_0^1|f(\alpha)|^4{\,{\rm d}}\alpha \ll P^{2+\varepsilon}, \end{equation} also a consequence of Hua's Lemma \cite[Lemma 2.5]{hlm}, to conclude that
$$|\vartheta_u(n)|\leqslant \vartheta_u(0)\leqslant \psi_u(0)\ll P^{2+\frac34(u-4)+\varepsilon}. $$
Fix a number $\tau$ with $0<\tau< 10^{-10}$ and define $T_0$ by $$T_0=\begin{cases} P^{\frac34 u-1+\tau},&\text{when $6\leqslant u<8$},\\ P^{\frac56 u-\frac53+\tau},&\text{when $8\leqslant u\leqslant 11$}.\end{cases}$$ Then, on recalling the upper bounds for $\vartheta_u(n)$ just derived, a familiar dyadic dissection argument shows that there is a number $T\in [1,T_0]$ with the property that
\begin{align} \sum_{n\in \mathbb Z}|\vartheta_u(n)|^3 &\ll P^{3u-11} +
(\log P)\sum_{\substack{|n|\leqslant P^7\\ T<|\vartheta_u(n)|\leqslant 2T}}|\vartheta_u(n)|^3\notag\\ &\ll P^{3u-11} + P^\varepsilon T^3 Z,\label{4.7} \end{align} where $Z$ denotes the number of elements in the set
$$ {\mathscr Z}=\{n\in\mathbb Z: \text{$|n|\leqslant P^7$ and $T<|\vartheta_u(n)|\leqslant 2T$}\}.$$
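The two branches in the definition of $T_0$ match at the crossover $u=8$, where $\tfrac34 u-1=\tfrac56 u-\tfrac53=5$, so the dyadic dissection treats the whole range $6\leqslant u\leqslant 11$ uniformly. A quick check (illustrative only):

```python
from fractions import Fraction as F

# P-exponents of T_0 (without the small shift tau) at the crossover u = 8.
low_branch = F(3, 4) * 8 - 1          # branch used for 6 <= u < 8
high_branch = F(5, 6) * 8 - F(5, 3)   # branch used for 8 <= u <= 11
```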
For each $n\in\mathscr Z$ there is a complex number $\eta_n$, with $|\eta_n|=1$, for which $\eta_n \vartheta_u(n)$ is a positive real number. Write \begin{equation} \label{4.8} K(\alpha)=\sum_{n\in\mathscr Z} \eta_n e(-\alpha n). \end{equation} Then one concludes from \eqref{Thet} and orthogonality that \begin{equation}\label{4.9} TZ<\sum_{n\in \mathscr Z}\eta_n\vartheta_u(n)=
\int_0^1 \Theta(\alpha)K(\alpha)|f(\alpha)|^u {\,{\rm d}}\alpha. \end{equation}
Beyond this point our argument depends on the size of $T$. Our first argument handles the small values $T\leqslant P^{\frac56u-\frac{35}{18}}$. By \eqref{4.9} and H\"older's inequality, we obtain the bound \begin{equation}\label{4.11}
TZ \leqslant I^{1/2}\biggl(\int_0^1 |K(\alpha)^2f(\alpha)^4|{\,{\rm d}}\alpha\biggr)^{1/3}
\biggl( \int_0^1 |K(\alpha)|^2{\,{\rm d}}\alpha \biggr)^{1/6}, \end{equation} where
$$I=\int_0^1 \Theta(\alpha)^2|f(\alpha)|^{2u-\frac83}{\,{\rm d}}\alpha .$$ By orthogonality, one has
$$ \int_0^1 |K(\alpha)|^2{\,{\rm d}}\alpha = Z, $$ and by a consideration of the underlying Diophantine equations, one deduces via Lemma \ref{lem4.3} that \begin{equation}\label{4.10}
\int_0^1 |K(\alpha)^2f(\alpha)^4|{\,{\rm d}}\alpha \ll P^3Z + P^{2+\varepsilon}Z^{3/2}. \end{equation} Next we confirm the bound $I\ll P^{\frac53u-\frac{35}9+\varepsilon}$. Indeed, in the case where $\Theta=1$ we have $6\leqslant u\leqslant 25/3$. In such circumstances $8<2u-8/3\leqslant 14$, and so \eqref{4.4} applies and yields the claimed bound. In the case $\Theta={\sf n}$ we have $u\leqslant 11$, and hence $2u-8/3<20$. Write $\mathfrak m=\mathfrak m_{P^4,Y/2}$. Then by Lemma \ref{minapprox}, we have $0\leqslant {\sf n}(\alpha)\leqslant {\bf 1}_{\mathfrak m}$. We therefore deduce that in this second case we have
$$I\leqslant \int_0^1 {\sf n}(\alpha)|f(\alpha)|^{2u-\frac{8}{3}}{\,{\rm d}}\alpha \leqslant \int_{\mathfrak m}
|f(\alpha)|^{2u-\frac{8}{3}}{\,{\rm d}}\alpha ,$$ and \eqref{4.5} confirms our claimed bound for $I$.\par
Collecting these estimates together within \eqref{4.11}, we now have \[ TZ \ll P^\varepsilon \left(P^3Z + P^2Z^{3/2}\right)^{1/3}Z^{1/6} \bigl(P^{\frac53 u-\frac{35}9}\bigr)^{1/2}. \] On recalling \eqref{4.13}, we find that this relation disentangles to yield the bound \begin{align*} T^3Z &\ll P^{2+\frac32(\frac53 u-\frac{35}9)+\varepsilon} +TP^{2+\frac53 u-\frac{35}9+\varepsilon}\\ & = P^{3u-8 +\delta(u)+\varepsilon} + TP^{\frac53 u -\frac{17}9+\varepsilon}. \end{align*} It transpires that in the range $T\leqslant P^{\frac56u-\frac{35}{18}}$ the first term on the right hand side dominates, so that we finally reach the desired conclusion $T^3Z \ll P^{3u-8 +\delta(u)+\varepsilon}$. In view of \eqref{4.7}, this is enough to complete the proof of Theorem \ref{thm4.1} in the case that $T$ is small.\par
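The asserted dominance is plain on comparing exponents: in this range of $T$ one has
\[ TP^{\frac53 u-\frac{17}9} \leqslant P^{\bigl(\frac56 u-\frac{35}{18}\bigr)+\bigl(\frac53 u-\frac{17}9\bigr)} = P^{\frac52 u-\frac{23}6}=P^{3u-8+\delta(u)}, \]
the final equality being \eqref{4.13}.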
Our second approach is suitable for $T$ of medium size, with \begin{equation}\label{4.14} P^{\frac56u-\frac{35}{18}}<T\leqslant P^{\frac56 u - \frac{11}{6}}. \end{equation} We apply Schwarz's inequality to \eqref{4.9}, obtaining the bound
$$ TZ\leqslant \biggl(\int_0^1 |K(\alpha)^2f(\alpha)^4|{\,{\rm d}}\alpha\biggr)^{1/2}
\biggl( \int_0^1 \Theta(\alpha)^2|f(\alpha)|^{2u-4}{\,{\rm d}}\alpha\biggr)^{1/2}. $$ Note that when $6\leqslant u\leqslant 11$, one has $8\leqslant 2u-4\leqslant 18$, and when instead $u\leqslant 25/3$, we have $2u-4<14$. Hence, as in the proof of our earlier estimate for $I$, it follows from Lemma \ref{lemma5.3} that
$$ \int_0^1 \Theta(\alpha)^2|f(\alpha)|^{2u-4}{\,{\rm d}}\alpha \ll P^{\frac53 u - 5+\varepsilon}. $$ Applying this estimate in combination with \eqref{4.10}, we conclude that $$TZ \ll P^\varepsilon (P^3Z+P^2Z^{3/2})^{1/2}(P^{\frac53 u-5})^{1/2}. $$ This bound disentangles to deliver the relation $$ T^3Z \ll TP^{\frac53 u-2+\varepsilon}+ T^{-1} P^{\frac{10}{3}u-6+\varepsilon}. $$ On recalling \eqref{4.13}, we find that our present assumptions \eqref{4.14} concerning the size of $T$ deliver the estimate $$T^3Z \ll P^{\frac52 u-\frac{23}{6}+\varepsilon}+ P^{\frac{5}{2}u-\frac{73}{18}+\varepsilon}\ll P^{3u-8+\delta(u)+\varepsilon}.$$ The conclusion of Theorem \ref{thm4.1} again follows in this case, by virtue of \eqref{4.7}.
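The exponent comparison in the medium-size regime may be verified exactly: under \eqref{4.14} the terms $TP^{\frac53u-2}$ and $T^{-1}P^{\frac{10}3u-6}$ have $P$-exponents at most $\frac52u-\frac{23}6$ and below $\frac52u-\frac{73}{18}$, respectively. In exact arithmetic (a sketch only):

```python
from fractions import Fraction as F

us = [F(n, 3) for n in range(18, 34)]  # u ranging over [6, 11] in steps of 1/3

# First term: T <= P^{5u/6 - 11/6} gives exponent (5u/6 - 11/6) + (5u/3 - 2).
first_ok = all((F(5, 6) * u - F(11, 6)) + (F(5, 3) * u - 2)
               == F(5, 2) * u - F(23, 6) for u in us)
# Second term: T > P^{5u/6 - 35/18} gives exponent below
# -(5u/6 - 35/18) + (10u/3 - 6).
second_ok = all(-(F(5, 6) * u - F(35, 18)) + (F(10, 3) * u - 6)
                == F(5, 2) * u - F(73, 18) for u in us)
# The second exponent is the smaller of the two: 73/18 > 23/6 = 69/18.
second_smaller = F(73, 18) > F(23, 6)
```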
\par The analysis of the large values $T$ satisfying $P^{\frac56 u-\frac{11}{6}}<T\leqslant T_0$ is more subtle. Suppose temporarily that $\vartheta_u=\psi_u$, and hence that $u\leqslant 25/3$. Then, by \eqref{2.4} and \eqref{4.9},
$$ TZ \leqslant \int_0^1 {\sf N}(\alpha)K(\alpha)|f(\alpha)|^u {\,{\rm d}}\alpha
+ \int_0^1 {\sf n}(\alpha)K(\alpha)|f(\alpha)|^u {\,{\rm d}}\alpha.$$ By hypothesis, we have $u\ge 6$. Also, from Lemma \ref{majapprox}, we have ${\sf N}\leqslant {\bf 1}_{{\mathfrak N}_{P^4,P/8}}$, so that \eqref{4.51} yields the bound
$$ \int_0^1 {\sf N}(\alpha)K(\alpha)|f(\alpha)|^u {\,{\rm d}}\alpha
\leqslant Z \int_{{\mathfrak M}_{P^4,P/8}} |f(\alpha)|^u {\,{\rm d}}\alpha \ll ZP^{u-4}.$$ Since $u-4<\frac56 u - \frac{11}{6}$, for large enough $P$ one has $ZP^{u-4}<\frac 12 TZ$. Thus \begin{equation}\label{4.16}
TZ \ll \int_0^1 {\sf n}(\alpha)K(\alpha)|f(\alpha)|^u {\,{\rm d}}\alpha. \end{equation} Note that this is exactly the inequality \eqref{4.9} in the case where $\vartheta_u=\phi_u$. Consequently, the upper bound \eqref{4.16} holds for the large values of $T$ currently under consideration, irrespective of the choice of $\vartheta_u$. Now apply Schwarz's inequality to \eqref{4.16}. Then, by Lemma \ref{minapprox}, we deduce that
$$ TZ \leqslant \biggl(\int_0^1 |K(\alpha)f(\alpha)|^2 {\,{\rm d}}\alpha\biggr)^{1/2}
\biggl( \int_{\mathfrak m}|f(\alpha)|^{2u-2}{\,{\rm d}}\alpha\biggr)^{1/2}, $$ where again we write $\mathfrak m=\mathfrak m_{P^4,Y/2}$. Note here that $u\leqslant 11$, so that $2u-2\leqslant 20$. Hence, by Lemmata \ref{lemma5.3} and \ref{lem4.3}, we have $$ TZ \ll P^\varepsilon \bigl( PZ+P^{\frac12}Z^{\frac32}\bigr)^{1/2} \bigl( P^{\frac53 u-\frac{10}3}\bigr)^{1/2}.$$ Consequently, our assumptions concerning the size of $T$ reveal that \begin{align} T^3Z&\ll TP^{\frac53 u-\frac{7}{3}+\varepsilon} +T^{-1}P^{\frac{10}3 u-\frac{17}3+\varepsilon}\notag \\ &\ll T_0P^{\frac53 u-\frac{7}{3}+\varepsilon}+P^{\frac52 u-\frac{23}{6}+\varepsilon}. \label{5.z} \end{align} When $6\leqslant u<8$, one has $$\bigl(\tfrac{3}{4}u-1\bigr)+\bigl(\tfrac{5}{3}u-\tfrac{7}{3}\bigr) =\tfrac{29}{12}u-\tfrac{10}{3}\leqslant \tfrac{5}{2}u-\tfrac{23}{6},$$ whilst for $8\leqslant u\leqslant 11$, $$\bigl(\tfrac{5}{6}u-\tfrac{5}{3}\bigr)+\bigl(\tfrac{5}{3}u-\tfrac{7}{3}\bigr) =\tfrac{5}{2}u-4<\tfrac{5}{2}u-\tfrac{23}{6}.$$ Then in either case one finds from \eqref{5.z} via \eqref{4.13} that $T^3Z\ll P^{3u-8+\delta(u)+2\tau}$, and the conclusion of Theorem \ref{thm4.1} follows in this final case, again by \eqref{4.7}, on taking $\tau$ sufficiently small. \end{proof}
We close this section with a related but simpler result.
\begin{theorem}\label{thm4.4} One has $$ \sum_{n\in \mathbb Z}\psi_4(n)^3 \ll P^{13/2+\varepsilon}.$$ \end{theorem}
\begin{proof} By \eqref{3.8} and orthogonality, the Fourier coefficient $\psi_4(n)$ has a Diophantine interpretation that shows on the one hand that $\psi_4(n)\in\mathbb N_0$, and
on the other that $\psi_4(n)=0$ for all $n\in\mathbb Z$ with $|n|>2P^4$. By \eqref{3.8} and \eqref{Hua4}, we also have the bound $ \psi_4(n)\leqslant \psi_4(0) \ll P^{2+\varepsilon}$. The argument leading to \eqref{4.7} now shows that there is a number $T$ with $1\leqslant T \leqslant P^{2+\varepsilon}$ having the property that \begin{align} \sum_{n\in \mathbb Z}\psi_4(n)^3 &\ll P^{6+\varepsilon}+
P^\varepsilon\sum_{\substack{|n|\leqslant 2P^4\\ T\leqslant \psi_4(n)\leqslant 2T}} \psi_4(n)^3\notag \\ &\ll P^{6+\varepsilon}+P^\varepsilon T^3Z,\label{5.w} \end{align} where $Z$ denotes the number of elements in the set
$$ {\mathscr Z}=\{n\in\mathbb Z: \text{$|n|\leqslant 2P^4$ and $T<|\psi_4(n)|\leqslant 2T$}\}.$$ As in the corresponding analysis within the proof of Theorem \ref{thm4.1}, we next find that there are unimodular complex numbers $\eta_n$ $(n\in\mathscr Z)$ having the property that, with $K(\alpha)$ defined via \eqref{4.8}, one has
$$TZ <\int_0^1 K(\alpha)|f(\alpha)|^4 {\,{\rm d}}\alpha.$$
We first handle small values of $T$. Here, an application of Schwarz's inequality leads via \eqref{4.6} to the bound
$$TZ\leqslant \biggl(\int_0^1 |f(\alpha)|^8{\,{\rm d}}\alpha\biggr)^{1/2}
\biggl( \int_0^1 |K(\alpha)|^2{\,{\rm d}}\alpha \biggr)^{1/2}\ll P^{5/2+\varepsilon}Z^{1/2}.$$ This disentangles to yield $T^3Z \ll TP^{5+\varepsilon}$, proving the theorem for $T\leqslant P^{3/2}$.\par
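In detail, squaring the bound $TZ\ll P^{5/2+\varepsilon}Z^{1/2}$ gives $$T^2Z\ll P^{5+2\varepsilon},\quad\text{whence}\quad T^3Z\ll TP^{5+2\varepsilon}\leqslant P^{13/2+2\varepsilon}\quad (T\leqslant P^{3/2}).$$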
Next, when $T$ is large, we apply H\"older's inequality in a manner similar to that employed in the large values analysis of the proof of Theorem \ref{thm4.1}. Thus
$$ TZ \leqslant \biggl(\int_0^1 |K(\alpha)^2f(\alpha)^2|{\,{\rm d}}\alpha\biggr)^{1/2}
\biggl( \int_0^1 |f(\alpha)|^4{\,{\rm d}}\alpha \biggr)^{1/4} \biggl( \int_0^1 |f(\alpha)|^8{\,{\rm d}}\alpha \biggr)^{1/4},$$ and hence $$ TZ\ll P^\varepsilon (PZ+ P^{1/2}Z^{3/2})^{1/2}P^{7/4}. $$ We now obtain the bound $$ T^3Z \ll TP^{9/2+\varepsilon} +T^{-1}P^{8+\varepsilon},$$ and in view of \eqref{5.w}, this proves Theorem \ref{thm4.4} in the complementary case $P^{3/2}\leqslant T\leqslant P^{2+\varepsilon}$. \end{proof}
\section{Mean values of quartic Weyl sums} In this section we estimate certain entangled moments of quartic Weyl sums, and then apply them to obtain minor arc estimates for use within the proofs of Theorems \ref{theorem1.1} and \ref{theorem1.2}. Throughout this section and the next, let the pairs of integers $c_i, d_i$ $(1\leqslant i\leqslant 5)$ satisfy the condition that the points $(c_i:d_i)\in\mathbb P^1(\mathbb Q) $ are distinct. Define the linear forms $\mathrm M_i=\mathrm M_i (\alpha,\beta)$ $(1\leqslant i\leqslant 5)$ by \begin{equation}\label{5.1} \mathrm M_i(\alpha,\beta) = c_i\alpha+d_i\beta. \end{equation} Let $u>0$, and recall the definition of the exponent $\delta(u)$ from \eqref{4.1}. Then, with $2P^{4/15}\leqslant Y\leqslant P/16$ and ${\sf n}={\sf n}_{P^4,Y}$, we consider the mean values \begin{align*} I_u=& \int_0^1\!\!\int_0^1
|f(\mathrm M_1)f(\mathrm M_2)f(\mathrm M_3)|^u{\,{\rm d}}\alpha{\,{\rm d}}\beta, \\ J_u=& \int_0^1\!\!\int_0^1 {\sf n}(\mathrm M_1){\sf n}(\mathrm M_2){\sf n}(\mathrm M_3)
|f(\mathrm M_1)f(\mathrm M_2)f(\mathrm M_3)|^u{\,{\rm d}}\alpha{\,{\rm d}}\beta. \end{align*}
\begin{theorem}\label{thm5.1} One has $I_4\ll P^{13/2+\varepsilon}$ and $I_u\ll P^{3u-8+\delta(u)+\varepsilon}$ $(6\leqslant u\leqslant 25/3)$. Also, when $6\leqslant u\leqslant 11$, one has $J_u\ll P^{3u-8+\delta(u)+\varepsilon}$. \end{theorem}
\begin{proof}
It follows from Lemmata \ref{minapprox} and \ref{lem3.2} that the
function ${\sf n}(\gamma)|f(\gamma)|^u$ has a uniformly convergent Fourier series with coefficients $\phi_u(n)$. By orthogonality, we conclude that $$ J_u = \sum_{(n_1,n_2,n_3)\in N} \phi_u(n_1)\phi_u(n_2)\phi_u(n_3), $$ where $N$ is the set of solutions in integers $n_1,n_2,n_3$ of the linear system $$ c_1n_1+ c_2n_2+ c_3n_3 = d_1n_1+d_2n_2+d_3n_3=0. $$ Since the projective points $(c_i:d_i)$ are distinct, there exist non-zero integers $l_i$, depending only on the $c_i,d_i$, having the property that the solutions of this system are precisely the triples $(n_1,n_2,n_3)=m(l_1,l_2,l_3)$ $(m\in \mathbb Z)$. It therefore follows from \eqref{T} that
$$J_u \leqslant \frac13 \sum_{m\in\mathbb Z} \big(|\phi_u(l_1m)|^3 + |\phi_u(l_2m)|^3
+|\phi_u(l_3m)|^3 \big) \leqslant \sum_{n\in\mathbb Z}|\phi_u(n)|^3 .$$ The desired bound for $J_u$ now follows from Theorem \ref{thm4.1}. The bounds for $I_4$ and $I_u$ follow in the same way, but the argument has to be built on the cubic moment estimates for $\psi_u(n)$ that are provided by Theorems \ref{thm4.1} and \ref{thm4.4}. \end{proof}
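The first inequality in the last display is an instance of the elementary bound (a consequence of the arithmetic--geometric mean inequality) $$|z_1z_2z_3|\leqslant \tfrac13\bigl(|z_1|^3+|z_2|^3+|z_3|^3\bigr),$$ applied with $z_i=\phi_u(l_im)$ and then summed over $m\in\mathbb Z$.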
We now turn to related, less balanced mixed moments. With $u$ and $Y$ as before, we define \begin{align*}
K_u=&\int_0^1\!\!\int_0^1|f(\mathrm M_1)f(\mathrm M_2)|^u |f(\mathrm M_3)|^6{\,{\rm d}}\alpha {\,{\rm d}}\beta, \\ L_u=& \int_0^1\!\!\int_0^1 {\sf n}(\mathrm M_1){\sf n}(\mathrm M_2)
|f(\mathrm M_1)f(\mathrm M_2)|^u |f(\mathrm M_3)|^6{\,{\rm d}}\alpha{\,{\rm d}}\beta , \end{align*} and put $$ \eta(u) = \frac{19}{6}-\frac{u}{3}. $$
\begin{theorem}\label{thm5.2} Subject to the hypotheses of this section, one has \begin{align*} K_u &\ll P^{2u-2+\eta(u)+\varepsilon} \quad (6\leqslant u\leqslant 19/2),\\ L_u &\ll P^{2u-2+\eta(u)+\varepsilon} \quad (6\leqslant u\leqslant 11). \end{align*} \end{theorem} \begin{proof} We proceed as in the initial phase of the proof of Theorem \ref{thm5.1}. Using the same notation, we obtain $$ L_u = \sum_{(n_1,n_2,n_3)\in N} \phi_u(n_1)\phi_u(n_2)\psi_6(n_3). $$ Note here that $\psi_6(m)$ counts solutions of a Diophantine equation, and consequently is a non-negative integer. Hence
$$ L_u \leqslant \frac12 \sum_{(n_1,n_2,n_3)\in N} \psi_6(n_3)\big(|\phi_u(n_2)|^2+
|\phi_u(n_1)|^2\big). $$ By symmetry, we may therefore suppose that for appropriate non-zero integers $l_2$ and $l_3$, depending at most on $\mathbf c$ and $\mathbf d$, one has \begin{equation}\label{LL}
L_u \leqslant \sum_{(n_1,n_2,n_3)\in N} \psi_6(n_3)|\phi_u(n_2)|^2
= \sum_{m\in\mathbb Z} \psi_6(l_3m)|\phi_u(l_2m)|^2. \end{equation} Next, first applying H\"older's inequality, and then Theorem \ref{thm4.1} and \eqref{4.13}, we obtain the bound \begin{align*} L_u&\leqslant \Big(\sum_{n\in\mathbb Z} \psi_6(n)^3\Big)^{1/3}
\Big(\sum_{m\in\mathbb Z} |\phi_u(m)|^3\Big)^{2/3}\\ &\ll P^\varepsilon \bigl( P^{15-\frac{23}{6}}\bigr)^{1/3} \left( P^{\frac{5}{2}u-\frac{23}{6}}\right)^{2/3}. \end{align*} The estimate for $L_u$ recorded in Theorem \ref{thm5.2} therefore follows on recalling the definition of $\eta(u)$.\par
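For the record, the exponent of $P$ arising in this estimate is $$\tfrac13\bigl(15-\tfrac{23}{6}\bigr)+\tfrac23\bigl(\tfrac{5}{2}u-\tfrac{23}{6}\bigr) =\tfrac{5}{3}u+\tfrac{7}{6}=2u-2+\eta(u).$$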
The initial steps in the estimation of $K_u$ are the same, and one reaches a bound for $K_u$ identical to \eqref{LL} except that $\phi_u$ now becomes $\psi_u$. We split into major and minor arcs by inserting the relation $1={\sf N}(\alpha)+{\sf n}(\alpha)$, with parameters $X=P^4$ and $Y=P^{1/3}$, into \eqref{3.8}. From \eqref{4.51} we obtain
$$ \biggl|\int_0^1 {\sf N}(\alpha) |f(\alpha)|^u e(-\alpha n){\,{\rm d}}\alpha\biggr|
\leqslant \int_{{\mathfrak M}_{P^4,P}} |f(\alpha)|^u{\,{\rm d}}\alpha\ll P^{u-4}.$$ Hence, we discern from \eqref{3.8} and \eqref{3.9} that
$$ |\psi_u(n)|^2\ll |\phi_u(n)|^2 + P^{2u-8}, $$ and so,
$$K_u\ll \sum_{m\in\mathbb Z} \psi_6(l_3m)|\phi_u(l_2m)|^2 + P^{2u-8} \sum_{m\in\mathbb Z} \psi_6(l_3m).$$ Here the first sum over $m$ is the same as that occurring in the estimation of $L_u$ in \eqref{LL}, and has already been estimated above. Thus, since
$$\sum_{n\in\mathbb Z}\psi_6(n)= |f(0)|^6 \ll P^6,$$ we conclude that $$K_u\ll P^{2u-2+\eta(u)+\varepsilon}+P^{2u-8}\sum_{n\in\mathbb Z} \psi_6(n)\ll P^{2u-2+\eta(u)+\varepsilon}+P^{2u-2}.$$ Provided that $u\leqslant 19/2$, which guarantees $\eta(u)$ to be non-negative, this estimate confirms the upper bound for $K_u$ claimed in the theorem. \end{proof}
Note that the mean values $I_u$ and $J_u$ involve $s= 3u$ Weyl sums, at least for integral values of $u$. By comparison, the number of Weyl sums in $K_u$ and $L_u$ is $s=2u+6$. A short calculation shows that when applied with the same value of $s$, with $s\ge 18$, the exponents of $P$ in Theorems \ref{thm5.1} and \ref{thm5.2} coincide. Since almost all of Theorem \ref{thm5.1} may be recovered from Theorem \ref{thm5.2} via H\"older's inequality, and since for fixed values of $s$ the exponent $u$ in Theorem \ref{thm5.2} is at least as large, Theorem \ref{thm5.2} is morally the stronger result. In our later application of the circle method, this allows for larger values of $r_j$ in the profiles associated to the simultaneous equations \eqref{1.1}, and this is essential for our method to succeed. Another advantage is that in $L_u$ only two of the forms $\mathrm M_i$ are on minor arcs, while in the mean value $J_u$ all three are constrained to minor arcs.\par
We continue with another result in which the profile is even farther out of balance. We consider the integral $$ M = \int_0^1\!\!\int_0^1 {\sf n}(\mathrm M_1) {\sf n}(\mathrm M_2)
|f(\mathrm M_1)^{11}f(\mathrm M_2)^{11}f(\mathrm M_3)^4|{\,{\rm d}}\alpha{\,{\rm d}}\beta. $$
\begin{theorem}\label{thm5.3} Given the hypotheses of this section, one has $M \ll P^{18-1/{18}+\varepsilon}$. \end{theorem}
\begin{proof} We again traverse the initial phase of the proof of Theorem \ref{thm5.1} to confirm the relation $$ M = \sum_{(n_1,n_2,n_3)\in N} \phi_{11}(n_1)\phi_{11}(n_2)\psi_4(n_3). $$ Then, just as in the argument of the proof of Theorem \ref{thm5.2} leading to \eqref{LL}, we find that for appropriate non-zero integers $l_2$ and $l_3$, depending at most on $\mathbf c$ and $\mathbf d$, one has
$$ M \leqslant \sum_{m\in\mathbb Z} \psi_4(l_3m)|\phi_{11}(l_2m)|^2.$$ Thus, an application of H\"older's inequality in combination with Theorems \ref{thm4.1} and \ref{thm4.4}, together with \eqref{4.13}, yields the bound $$M\leqslant \Bigl( \sum_{n\in \mathbb Z} \psi_{4}(n)^{3} \Bigr)^{1/3}
\Bigl( \sum_{n\in \mathbb Z} |\phi_{11}(n)|^{3} \Bigr)^{2/3}\ll P^\varepsilon \bigl( P^{13/2}\bigr)^{1/3}\left( P^{71/3}\right)^{2/3}.$$ The desired conclusion follows after a rapid computation.\end{proof}
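The rapid computation in question is $$\tfrac13\cdot\tfrac{13}{2}+\tfrac23\cdot\tfrac{71}{3} =\tfrac{13}{6}+\tfrac{142}{9}=\tfrac{323}{18}=18-\tfrac{1}{18}.$$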
Finally, we transform the estimates for $L_u$ and $M$ into proper minor arc estimates. In the interest of brevity we write $\mathfrak M= \mathfrak M_{P^4,P^{1/3}}$ and put \begin{equation}\label{pee} \mathfrak p = [0,1]^2 \setminus (\mathfrak M\times \mathfrak M). \end{equation}
\begin{theorem}\label{thm5.4} Suppose that $19/2 < u \leqslant 11$. Then \begin{equation}\label{minor6uu}
\iint_{\mathfrak p}|f(\mathrm M_1)f(\mathrm M_2)|^u |f(\mathrm M_3)|^6{\,{\rm d}}\alpha{\,{\rm d}}\beta \ll P^{2u-2+\eta(u)+\varepsilon}. \end{equation} Further, one has \begin{equation}\label{minor41111}
\iint_{\mathfrak p}|f(\mathrm M_1)^{11}f(\mathrm M_2)^{11}f(\mathrm M_3)^4| {\,{\rm d}}\alpha{\,{\rm d}}\beta \ll P^{18-1/18+\varepsilon}. \end{equation} \end{theorem}
\begin{proof} Let ${\sf N}= {\sf N}_{P^4,P^{2/7}} $ and ${\sf n}=1-{\sf N}$. Then \begin{equation}\label{8.4} 1 = \big({\sf N}(\mathrm M_1) + {\sf n}(\mathrm M_1)\big)\big({\sf N}(\mathrm M_2) + {\sf n}(\mathrm M_2)\big). \end{equation} We note at once that whenever $(\alpha,\beta)\in\mathfrak p$, one has ${\sf N}(\mathrm M_1){\sf N}(\mathrm M_2)=0$. The explanation for this observation is that whenever ${\sf N}(\mathrm M_1){\sf N}(\mathrm M_2)>0$, then it follows from Lemma \ref{majapprox} that $\mathrm M_j\in\mathfrak N_{P^4,2P^{2/7}}$ $(j=1,2)$. By taking suitable linear combinations of $\mathrm M_1$ and $\mathrm M_2$ we find that $\alpha$ and $\beta$ lie in $\mathfrak N_{P^4,AP^{2/7}}$, with some $A\ge 2$ depending only on the coefficients of $\mathrm M_1$ and $\mathrm M_2$. But $(\alpha,\beta)\in[0,1]^2$, and so $(\alpha,\beta)\in\mathfrak M\times\mathfrak M$ for large enough $P$. This is not the case when $(\alpha,\beta)\in\mathfrak p$, as claimed.\par
With this observation in hand, we apply \eqref{8.4} within the integral on the left hand side of \eqref{minor41111} to conclude that \begin{equation}\label{8.5}
\iint_{\mathfrak p}|f(\mathrm M_1)^{11}f(\mathrm M_2)^{11}f(\mathrm M_3)^4| {\,{\rm d}}\alpha{\,{\rm d}}\beta \leqslant M + M_{\sf Nn} + M_{\sf nN}, \end{equation} where \begin{equation}\label{8.7} M_{\sf Nn} = \int_0^1\!\!\int_0^1 {\sf N}(\mathrm M_1){\sf n}(\mathrm M_2)
|f(\mathrm M_1)^{11}f(\mathrm M_2)^{11} f(\mathrm M_3)^4| \,\mathrm d\alpha\, \mathrm d\beta \end{equation} and $M_{\sf nN}$ is the integral in \eqref{8.7} with $\mathrm M_1$, $\mathrm M_2$ interchanged.\par
By symmetry in $\mathrm M_1$ and $\mathrm M_2$, it now suffices to estimate $M_{\sf Nn}$. Recalling the definition \eqref{5.1} of the linear forms $\mathrm M_i$, we put
$D=|c_1d_2-c_2d_1|$ and note that $D>0$. Consider the linear transformation from $\mathbb R^2$ to $\mathbb R^2$, with $(\alpha,\beta)\mapsto (\alpha',\beta')$, defined by means of the relation \begin{equation}\label{8.8} \Big(\begin{array}{c} \alpha' \\ \beta'\end{array}\Big)=D^{-1} \Big(\begin{array}{cc} c_1&d_1 \\ c_2& d_2\end{array}\Big)\Big(\begin{array}{c} \alpha \\ \beta\end{array}\Big). \end{equation} Then $\mathrm M_1= D\alpha'$, $\mathrm M_2=D\beta'$, and $\alpha$ and $\beta$ are linear forms in $\alpha'$ and $\beta'$ with integer coefficients. By applying the transformation formula as a change of variables, one finds that $$ M_{\sf Nn} = \iint_{\mathfrak B} {\sf N}(D\alpha'){\sf n}(D\beta')
|f(D\alpha')^{11}f(D\beta')^{11} f(A\alpha'+B\beta')^4| \,\mathrm d\alpha'\, \mathrm d\beta', $$ wherein $A,B$ are non-zero integers and $\mathfrak B$ is the image of $[0,1]^2$ under the transformation \eqref{8.8}. The parallelogram $\mathfrak B$ is covered by finitely many sets $[0,1]^2+\mathbf t$, with $\mathbf t\in\mathbb Z^2$. Since the integrand in the last expression for $M_{\sf Nn}$ is $\mathbb Z^2$-periodic it follows that $$ M_{\sf Nn} \ll \int_0^1\!\!\int_0^1 {\sf N}(D\alpha){\sf n}(D\beta)
|f(D\alpha)^{11}f(D\beta)^{11}f(A\alpha+B\beta)^4|\,\mathrm d\alpha\,\mathrm d\beta. $$ Here we have removed decorations from the variables of integration for notational simplicity.
\par We now inspect all factors of the integrand in the latter upper bound that depend on $\beta$. By H\"older's inequality, Lemma \ref{lemma5.3} and obvious changes of variable, one obtains the estimate \begin{align*}
\int_0^1 {\sf n}(D\beta)&|f(D\beta)^{11}f(A\alpha+B\beta)^4|\,\mathrm d\beta \\
&\ll \biggl( \int_0^1 {\sf n}(D\beta)|f(D\beta)|^{77/5}\,\mathrm d\beta\biggr)^{5/7}
\biggl( \int_0^1 |f(A\alpha+B\beta)|^{14}\,\mathrm d\beta \biggr)^{2/7}\\ & \ll P^\varepsilon \bigl( P^{67/6}\bigr)^{5/7}(P^{10})^{2/7}=P^{65/6+\varepsilon}, \end{align*} uniformly in $\alpha\in\mathbb R$. Consequently, applying \eqref{4.51} in combination with yet another change of variable, we finally arrive at the bound
$$ M_{\sf Nn} \ll P^{65/6+\varepsilon} \int_0^1 {\sf N}(D\alpha)|f(D\alpha)|^{11} \,\mathrm d\alpha \ll P^{18-1/6+\varepsilon}. $$ We may infer thus far that $M_{\sf Nn}+M_{\sf nN}\ll P^{18-1/6+\varepsilon}$. On substituting this estimate into \eqref{8.5}, noting also the bound $M\ll P^{18-1/18+\varepsilon}$ supplied by Theorem \ref{thm5.3}, the conclusion \eqref{minor41111} is confirmed.\par
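In the first estimate of the preceding display, the exponent $18-\tfrac16$ arises from \eqref{4.51}, which supplies $$\int_0^1 {\sf N}(D\alpha)|f(D\alpha)|^{11}\,\mathrm d\alpha \ll P^{7},$$ together with the computation $\tfrac{65}{6}+7=\tfrac{107}{6}=18-\tfrac16$.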
The proof of \eqref{minor6uu} is essentially the same, and we economise by making similar notational conventions. The exponents $11$ and $4$ that occur in \eqref{minor41111} must now be replaced by $u$ and $6$, respectively. The initial phase of the preceding argument then remains valid, and an appeal to Theorem \ref{thm5.2} delivers the bound \begin{equation}\label{8.z}
\iint_{\mathfrak p}|f(\mathrm M_1)f(\mathrm M_2)|^u|f(\mathrm M_3)|^6 {\,{\rm d}}\alpha{\,{\rm d}}\beta \ll L_{\sf Nn} + L_{\sf nN}+P^{2u-2+\eta(u)+\varepsilon}, \end{equation} where $$ L_{\sf Nn} \ll \int_0^1\!\!\int_0^1 {\sf N}(D\alpha){\sf n}(D\beta)
|f(D\alpha)f(D\beta)|^{u} |f(A\alpha+B\beta)|^6 \,\mathrm d\alpha\,\mathrm d\beta. $$ Here, we isolate factors of the integrand that depend on $\beta$ and apply H\"older's inequality. Note that since $u\leqslant 11$ we have $7u/4<20$. Thus, by Lemma \ref{lemma5.3}, \begin{align*}
\int_0^1 {\sf n}(D\beta)&|f(D\beta)^{u}f(A\alpha+B\beta)^6|\,\mathrm d\beta \\
&\ll \biggl( \int_0^1 {\sf n}(D\beta)|f(D\beta)|^{7u/4}\,\mathrm d\beta\biggr)^{4/7}
\biggl( \int_0^1 |f(A\alpha+B\beta)|^{14}\,\mathrm d\beta \biggr)^{3/7}\\ &\ll P^\varepsilon \bigl( P^{\frac{35}{24}u-\frac{5}{3}}\bigr)^{4/7} \bigl( P^{10}\bigr)^{3/7}. \end{align*} Applying this bound, which is uniform in $\alpha \in \mathbb R$, together with \eqref{4.51}, we arrive at the estimate $$L_{\sf Nn}\ll P^{\frac56 u+\frac{10}{3}+\varepsilon}
\int_0^1 {\sf N}(D\alpha)|f(D\alpha)|^u\,\mathrm d\alpha \ll P^{\frac{11}6 u - \frac{2}3+\varepsilon}.$$ When $u\leqslant 11$, the definition of $\eta(u)$ ensures that $\frac{11}{6}u-\frac{2}{3}\leqslant 2u-2+\eta(u)$, and hence $L_{\sf Nn}+L_{\sf nN}\ll P^{2u-2+\eta(u)+\varepsilon}$. The conclusion \eqref{minor6uu} now follows by substituting this estimate into \eqref{8.z}. \end{proof}
\section{Another mean value estimate} This section is an update for quartic Weyl sums of our earlier work \cite{BWBull} on highly entangled mean values. We now attempt to avoid independence conditions on linear forms as far as the argument allows while incorporating the consequences of the recent bound \eqref{1.5}. We emphasise that throughout this section, we continue to work subject to the overall assumptions made at the outset of the previous section. We begin by examining the mean value \begin{equation}\label{7.z1}
G_1=\int_0^1\!\!\int_0^1|f(\mathrm M_1)^2 f(\mathrm M_2)^4 f(\mathrm M_3)^4|{\,{\rm d}}\alpha{\,{\rm d}}\beta . \end{equation}
\begin{lemma}\label{lemma6.1} One has $G_1\ll P^{5+\varepsilon}$. \end{lemma}
\begin{proof} This is essentially contained in \cite[Section 2]{BWpauc}, but we give a proof for completeness. Recall the definition \eqref{5.1} of the linear forms $\mathrm M_i$. By orthogonality, the integral $G_1$ is equal to the number of solutions of an associated pair of quartic equations. By taking suitable integral linear combinations of these two equations, we may assume that they take the shape \begin{equation}\label{10var} a(x_1^4-x_2^4)=b(x_3^4+x_4^4-x_5^4-x_6^4)=c(x_7^4+x_8^4-x_9^4-x_{10}^4), \end{equation} for suitable natural numbers $a,b,c$. Thus, we see that $G_1$ is equal to the number of solutions of the Diophantine system \eqref{10var} with $x_i\leqslant P$. For each of the $O(P)$ possible choices for $x_1$ and $x_2$ with $x_1=x_2$, it follows via orthogonality and \eqref{Hua4} that the number of solutions of this system in the remaining variables $x_3,\ldots,x_{10}$ is equal to
$$\biggl( \int_0^1 |f(\alpha)|^4\,{\,{\rm d}}\alpha\biggr)^2 \ll P^{4+\varepsilon}.$$ Consequently, the contribution to $G_1$ from this first class of solutions is $O(P^{5+\varepsilon})$. Now consider solutions of \eqref{10var} in which $x_1\neq x_2$. By orthogonality, the total number of choices for $x_3, \ldots, x_{10}$ satisfying the rightmost equation in \eqref{10var} is
$$ \int_0^1 |f(b\alpha)f(c\alpha)|^4{\,{\rm d}}\alpha. $$ Schwarz's inequality in combination with \eqref{4.6} shows this integral to be $O(P^{5+\varepsilon})$. However, for any fixed choice of $x_3,\ldots,x_{10}$ in this second class of solutions, one has $x_1\ne x_2$, and hence the fixed integer $N=b(x_3^4+x_4^4-x_5^4-x_6^4)$ is non-zero. But it follows from \eqref{10var} that $x_1^2-x_2^2$ and $x_1^2+x_2^2$ are each divisors of $N$. Thus, a standard divisor function estimate shows that the number of choices for $x_1$ and $x_2$ is $O(P^\varepsilon)$, and we conclude that the contribution to $G_1$ from this second class of solutions is $O(P^{5+\varepsilon})$. Adding these two contributions, we obtain the bound claimed in the statement of the lemma. \end{proof}
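The divisor argument here rests on the factorisation $$a(x_1^4-x_2^4)=a(x_1^2-x_2^2)(x_1^2+x_2^2)=N,$$ so that both $x_1^2-x_2^2$ and $x_1^2+x_2^2$ divide the non-zero integer $N$; since $|N|\ll P^4$, the divisor bound leaves $O(P^\varepsilon)$ possibilities for each factor, and these together determine $x_1$ and $x_2$.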
We next examine the mean value \begin{equation}\label{7.z3}
G_2=\int_0^1\!\!\int_0^1 |f(\mathrm M_1)^2 f(\mathrm M_2)^4f(\mathrm M_3)^4 f(\mathrm M_4)^4f(\mathrm M_5)^4|{\,{\rm d}}\alpha{\,{\rm d}}\beta . \end{equation}
\begin{theorem}\label{thm6.2} One has $G_2 \ll P^{11+\varepsilon}$. \end{theorem}
Note that in this result we require the five linear forms $\mathrm M_j$ to be pairwise independent. Therefore, the result will be of use only in cases where the profile of \eqref{1.1} has $r_5\ge 1$. The mean value in Theorem \ref{thm6.2} involves $18$ Weyl sums and should therefore be compared with the bound $I_6 \ll P^{67/6+\varepsilon}$ provided by Theorem \ref{thm5.1}. The extra savings that we obtain here are the essential stepping stone toward Theorem \ref{theorem1.2}.
\begin{proof}[The proof of Theorem \ref{thm6.2}] As in the proof of Lemma \ref{lemma6.1}, it follows from orthogonality that the integral $G_2$ is equal to the number of solutions of an associated pair of quartic equations. Taking suitable integral linear combinations of these two equations, we reduce to the situation where $c_4=d_5=0$, and consequently $\mathrm M_4 = d_4\beta$ and $\mathrm M_5=c_5\alpha$. Motivated by this observation, we begin our deliberations by estimating the auxiliary mean value
$$G_3=\int_0^1\!\!\int_0^1 |f(\mathrm M_1)^2 f(\mathrm M_2)^4 f(\mathrm M_3)^4f(d_4\beta)^4|{\,{\rm d}}\alpha{\,{\rm d}}\beta .$$
\par The Weyl differencing argument \cite[Lemma 2.3]{hlm} shows that there are real numbers $u_h$ with $u_h\ll P^\varepsilon$ for which \begin{equation}\label{6.4}
|f(\gamma)|^4 \ll P^3 + P \sum_{1\leqslant |h|\leqslant 2P^4} u_h e(\gamma h). \end{equation} We apply this relation with $\gamma=\mathrm M_4$ to the mean value $G_3$ and infer that \begin{equation}\label{7.z2} G_3\ll P^3 G_1 + PG_4, \end{equation} where $G_1$ is the mean value defined in \eqref{7.z1}, and
$$ G_4 = \sum_{1\leqslant |h|\leqslant 2P^4} u_h\int_0^1\!\!\int_0^1 |f(\mathrm M_1)^2
f(\mathrm M_2)^4f(\mathrm M_3)^4|e(d_4 h\beta) {\,{\rm d}}\alpha{\,{\rm d}}\beta. $$ By orthogonality, the double integral on the right hand side here is equal to the number of solutions of the system of Diophantine equations \begin{align} c_1(x_1^4-y_1^4) + c_2(x_2^4+x_3^4-y_2^4-y_3^4)+ c_3(x_4^4+x_5^4-y_4^4-y_5^4) &\hskip-2.5mm&= 0 \label{6.1}\\ d_1(x_1^4-y_1^4) + d_2(x_2^4+x_3^4-y_2^4-y_3^4)+ d_3(x_4^4+x_5^4-y_4^4-y_5^4) &\hskip-2.5mm&+\hskip.3mm d_4h=0 \notag \end{align} with $x_i\leqslant P$ and $y_i\leqslant P$. We may sum over $h\neq 0$ and replace $u_h$ by its upper bound. Then we find that $G_4\ll P^\varepsilon G_5$, where $G_5$ is the number of solutions of the equation \eqref{6.1} with the same conditions on $x_i$ and $y_i$. By orthogonality again, we deduce that
$$G_5=\int_0^1 |f(c_1\alpha)^2f(c_2\alpha)^4f(c_3\alpha)^4|{\,{\rm d}}\alpha. $$ For $1\leqslant i\leqslant 3$ the linear form $\mathrm M_i$ is linearly independent of $\mathrm M_4=d_4\beta$, and thus $c_1c_2c_3\neq 0$. The trivial bound
$|f(c_1\alpha)|^2\ll P^2$ therefore combines with Schwarz's inequality and \eqref{4.6} to award us the bound
$$G_5\ll P^2\int_0^1 |f(\gamma)|^8{\,{\rm d}}\gamma \ll P^{7+\varepsilon}. $$ We therefore deduce that $G_4\ll P^{7+2\varepsilon}$. Meanwhile, the estimate $G_1\ll P^{5+\varepsilon}$ is available from Lemma \ref{lemma6.1}. On substituting these bounds into \eqref{7.z2}, we conclude thus far that $G_3\ll P^{8+\varepsilon}$.\par
We now repeat this argument with $\gamma=\mathrm M_5$ in \eqref{6.4}, applying the resulting inequality within the integral $G_2$ defined in \eqref{7.z3}. Thus we obtain \begin{equation}\label{7.z4} G_2\ll P^3G_3+P^{1+\varepsilon}G_6, \end{equation} where $G_6$ denotes the number of solutions of the Diophantine equation $$d_1(x_1^4-y_1^4)+d_2(x_2^4+x_3^4-y_2^4-y_3^4)+d_3(x_4^4+x_5^4-y_4^4-y_5^4) + d_4(x_6^4+x_7^4-y_6^4-y_7^4)=0,$$ with $x_i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec P$ and $y_i\leqslant} \def\ge{\geqslant} \def\pleq{\preccurlyeq} \def\ple{\prec P$. By orthogonality,
$$G_6=\int_0^1 |f(d_1\alpha)^2f(d_2\alpha)^4f(d_3\alpha)^4f(d_4\alpha)^4|{\,{\rm d}}\alpha.$$ One may confirm that $d_1d_2d_3d_4\neq 0$ by arguing as above, and so an application of \eqref{T} in combination with \eqref{1.5} reveals that
$$G_6\leqslant \sum_{i=1}^4 \int_0^1 |f(d_i\alpha)|^{14} {\,{\rm d}}\alpha =
4 \int_0^1 |f(\gamma)|^{14}{\,{\rm d}}\gamma\ll P^{10+\varepsilon}. $$ The conclusion of the theorem now follows on substituting this bound together with our earlier estimate for $G_3$ into \eqref{7.z4}. \end{proof}
\section{The circle method} In this section we prepare the ground to advance to the proofs of Theorems \ref{theorem1.1} and \ref{theorem1.2}. A preliminary man\oe uvre is in order. Let $k=0$ or 1, and let $N_k(P)=N_k$ denote the number of solutions of the system \eqref{1.1} with $k\leqslant x_j\leqslant P$ $(1\leqslant j\leqslant s)$. Note that the equations \eqref{1.1} are invariant under the $s$ mappings $x_j\mapsto -x_j$. This observation shows that \begin{equation} \label{sandw} 2^s N_1(P) \leqslant {\mathscr N}(P) \leqslant 2^s N_0 (P). \end{equation} The goal is then to establish the formulae \begin{equation}\label{sandw2} \lim_{P\to\infty} 2^s P^{8-s} N_k(P) = \mathfrak I \mathfrak S \quad (k=0, 1), \end{equation} since then \eqref{1.4} follows immediately from \eqref{sandw} and the sandwich principle. Thus, we now launch the Hardy-Littlewood method to evaluate the counting functions $N_k(P)$. This involves the exponential sum \begin{equation} \label{ffk} f_k(\alpha)=\sum_{k\leqslant x\leqslant P} e(\alpha x^4). \end{equation} This sum is, of course, an instance of the sum \eqref{ff}, where we have been deliberately imprecise about the lower end of the interval of summation. The results we have formulated so far are indeed independent of the choice of $k$, and it is only now, and only temporarily, that this detail matters.
We require the linear forms $\Lambda_j=\Lambda_j(\alpha,\beta)$, defined by $$\Lambda_j(\alpha,\beta)=a_j\alpha + b_j \beta\quad (1\leqslant j\leqslant s)$$ that are associated with the equations \eqref{1.1}. We then put \begin{equation}\label{7.2} {\mathscr F}_k(\alpha,\beta) = f_k(\Lambda_1)f_k(\Lambda_2)\cdots f_k(\Lambda_s), \end{equation} and observe that, by orthogonality, one has \begin{equation}\label{7.3} N_k(P)=\int_0^1\!\!\int_0^1{\mathscr F}_k(\alpha,\beta)\,\mathrm d\alpha \,\mathrm d\beta. \end{equation}
Subject to conditions milder than those imposed in Theorems \ref{theorem1.1} and \ref{theorem1.2} we reduce the evaluation of the integral \eqref{7.3} to the estimation of its minor arc part. With this end in mind we define the major arcs $\mathfrak V$ as the union of the rectangles
$$ \mathfrak V(q,a,b)=\{(\alpha,\beta)\in[0,1]^2: \text{$|\alpha-a/q|\leqslant P^{-31/8}$ and
$|\beta-b/q|\leqslant P^{-31/8}$}\},$$ with $0\leqslant a,b\leqslant q$, $(a,b,q)=1$ and $1\leqslant q\leqslant P^{1/8}$.\par
Define the generating functions $$ S(q,c) = \sum_{x=1}^q e(cx^4/q)\quad \text{and}\quad v(\gamma) = \int_0^P e(\gamma t^4)\,\mathrm d t. $$ Then, given $(\alpha,\beta)\in [0,1]^2$, if we put $\gamma=\alpha -a/q$ and $\delta=\beta-b/q$ for some $a,b\in\mathbb Z$ and $q\in\mathbb N$, one concludes from \eqref{ffk} and \cite[Theorem 4.1]{hlm} that \begin{equation}\label{Fapprox} f_k(\Lambda_j)=q^{-1}S\left(q,\Lambda_j(a,b)\right)v\left(\Lambda_j(\gamma,\delta)\right)
+ O\left(q^{1/2+\varepsilon}(1+P^4|\Lambda_j(\gamma,\delta)|)^{1/2}\right). \end{equation} Note that the right hand side here is independent of $k$. We multiply these approximations for $1\leqslant j\leqslant s$. This brings into play the expressions $$\mathscr S(q,a,b)=q^{-s} \prod_{j=1}^s S\left(q,\Lambda_j(a,b)\right) \quad \text{and}\quad \mathscr V(\gamma,\delta) = \prod_{j=1}^s v\left(\Lambda_j(\gamma,\delta)\right).$$ If $(\alpha,\beta)\in \mathfrak V(q,a,b)\subseteq \mathfrak V$ then the error term in \eqref{Fapprox} is $O(P^{1/8+\varepsilon})$, and we infer that $$\mathscr F_k(\alpha,\beta)=\mathscr S(q,a,b)\mathscr V(\gamma,\delta)+ O(P^{s-7/8+\varepsilon}). $$ Since $\mathfrak V$ is a set of measure $O(P^{-59/8})$, when we integrate this formula for $\mathscr F_k(\alpha, \beta)$ over $\mathfrak V$, we obtain the asymptotic relation $$\iint_{\mathfrak V} \mathscr F_k(\alpha,\beta) \,\mathrm d\alpha\, \mathrm d\beta = \mathfrak S(P^{1/8})\mathfrak J^*(P^{1/8})+O(P^{s-33/4+\varepsilon}),$$ where, for $1\leqslant Q\leqslant P$ we define \begin{align*} \mathfrak S (Q)&=\sum_{q\leqslant Q}\underset{(a,b,q)=1}{\sum_{a=1}^q\sum_{b=1}^q} \mathscr S(q,a,b),\\ \mathfrak J^*(Q)&=\iint_{\mathfrak U(Q)} \mathscr V(\gamma,\delta)\,\mathrm d\gamma\, \mathrm d\delta , \end{align*} and $\mathfrak U(Q)=[-QP^{-4},QP^{-4}]^2$.\par
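The quality of the major-arc approximation \eqref{Fapprox} is easy to observe numerically. The sketch below compares $f(\alpha)$ near a rational point $a/q$ with $q^{-1}S(q,a)v(\gamma)$ for illustrative parameter choices; the tolerance in the final assertion is deliberately generous and is not a statement about the true size of the error term:

```python
# Compare f(a/q + gamma) with the major-arc main term q^{-1} S(q,a) v(gamma).
import cmath

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def f(alpha, P):                      # f(alpha) = sum_{1<=x<=P} e(alpha x^4)
    return sum(e(alpha * x**4) for x in range(1, P + 1))

def S(q, c):                          # complete sum S(q,c) = sum_{x=1}^q e(c x^4/q)
    return sum(e(c * x**4 / q) for x in range(1, q + 1))

def v(gamma, P, n=20000):             # v(gamma) = int_0^P e(gamma t^4) dt, midpoint rule
    h = P / n
    return h * sum(e(gamma * ((j + 0.5) * h)**4) for j in range(n))

P, q, a = 100, 3, 1
gamma = 0.1 / P**4                    # a point well inside a major arc around a/q
exact = f(a / q + gamma, P)
approx = S(q, a) / q * v(gamma, P)
assert abs(exact - approx) < 0.1 * abs(exact)
```

Here the main term has modulus of order $P$, while the discrepancy stays bounded as $P$ grows, in keeping with the error term of \eqref{Fapprox}.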
At this point, we require some more information concerning the matrix of coefficients, and we shall suppose that $q_0\ge 15$. Then $s\ge 16$, and we may apply \cite[Lemma 3.3]{BW21} to conclude that $\mathfrak S(Q)=\mathfrak S+O(Q^{\varepsilon -1})$. Further, we have $$ \int_{-P}^P e(\gamma t^4) \,\mathrm dt = 2 v(\gamma),$$ and thus \cite[Lemma 3.1]{BW21} shows that the limit \eqref{1.2} exists, and that we have $2^s\mathfrak J^*(Q)=P^{s-8}\mathfrak J+O(P^{s-8}Q^{-1/4})$. We summarise these deliberations in the following lemma.
\begin{lemma}\label{lem7.1} Suppose that $q_0\ge 15$ and that $k\in \{0,1\}$. Then $$ \iint_{\mathfrak V} \mathscr F_k(\alpha,\beta) \,\mathrm d\alpha\, \mathrm d\beta = 2^{-s}P^{s-8}\mathfrak S\mathfrak J + O(P^{s-8-1/32}). $$ \end{lemma}
The major arcs in Lemma \ref{lem7.1} are certainly too slim for efficient use of Weyl type inequalities on the complementary set. A pruning argument allows us to enlarge the major arcs considerably. Let $\mathfrak W$ denote the union of the rectangles
$$\mathfrak W(q,a,b)=\{(\alpha,\beta)\in[0,1]^2: \text{$|q\alpha-a|\leqslant P^{-3}$ and
$|q\beta-b|\leqslant P^{-3}$}\},$$ with $1\leqslant q\leqslant P$, $0\leqslant a,b \leqslant q$ and $(a,b,q)=1$. Then $\mathfrak V\subset \mathfrak W$, and we proceed to estimate the contribution from $\mathfrak W\setminus \mathfrak V$ to the integral \eqref{7.3}. A careful application of \cite[Theorem 4.2]{hlm} shows that $S(q,c) \ll q^{3/4}(q,c)^{1/4}$.
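The bound $S(q,c)\ll q^{3/4}(q,c)^{1/4}$ can be spot-checked by direct computation. In the sketch below the cap $4$ in the final assertion is an empirical constant for the small range tested, not the implicit constant of the theorem:

```python
# Empirical check that |S(q,c)| / (q^{3/4} gcd(q,c)^{1/4}) stays bounded
# by a modest constant over a small range of moduli.
import cmath
from math import gcd

def S(q, c):
    return sum(cmath.exp(2j * cmath.pi * c * x**4 / q) for x in range(1, q + 1))

worst = 0.0
for q in range(1, 41):
    for c in range(1, q + 1):
        ratio = abs(S(q, c)) / (q**0.75 * gcd(q, c)**0.25)
        worst = max(worst, ratio)
assert worst < 4.0
```

The extreme ratios in this range come from powers of $2$ (e.g. $q=16$, $c=1$), which is consistent with the $q^{3/4}$ shape of the bound being essentially sharp.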
Further, if $V(\gamma) = P(1+P^4|\gamma|)^{-1/4}$, then by \cite[Theorem 7.3]{hlm}, one has $v(\gamma)\ll V(\gamma)$. Hence, whenever $(\alpha,\beta)\in \mathfrak W(q,a,b)$ with $q\leqslant P$, one deduces from \eqref{Fapprox} that $$f_k(\Lambda_j)\ll q^{-1/4}\left( q,\Lambda_j(a,b)\right)^{1/4} V\left( \Lambda_j(\alpha-a/q,\beta-b/q)\right) + P^{1/2+\varepsilon}.$$ It is immediate that the first term on the right hand side here always dominates the second, and therefore $${\mathscr F}_k(\alpha,\beta)\ll q^{-s/4}\prod_{j=1}^s \left( q,\Lambda_j(a,b)\right)^{1/4} V\left( \Lambda_j(\alpha-a/q,\beta-b/q)\right). $$
\par We integrate over $\mathfrak W\setminus \mathfrak V$. The result is a sum over $q\leqslant P$ in which we consider the portion $q\leqslant P^{1/8}$ separately. This yields the bound \begin{equation}\label{8.z1} \iint_{\mathfrak W\setminus\mathfrak V}\mathscr F_k(\alpha,\beta)\,\mathrm d\alpha\, \mathrm d\beta \ll K_1(P^{1/8})+K_2(P^{1/8}), \end{equation} where for $1\leqslant Q\leqslant P$, we write $$K_1(Q)=\sum_{q\leqslant Q}\underset{(a,b,q)=1}{\sum_{a=1}^q\sum_{b=1}^q} q^{-s/4} \prod_{j=1}^s \left( q,\Lambda_j(a,b)\right)^{1/4}\iint_{\mathfrak B(Q)} \prod_{j=1}^sV(\Lambda_j) \,\mathrm d\alpha\,\mathrm d\beta ,$$ with $\mathfrak B(Q)=[-1,1]^2\setminus \mathfrak U(Q)$, and $$K_2(Q)=\sum_{Q<q\leqslant P}\underset{(a,b,q)=1}{\sum_{a=1}^q\sum_{b=1}^q} q^{-s/4}\prod_{j=1}^s (q,\Lambda_j(a,b))^{1/4}\iint_{[-1,1]^2} \prod_{j=1}^s V(\Lambda_j) \,\mathrm d\alpha\,\mathrm d\beta.$$ Still subject to the condition $q_0\ge 15$, the proof of \cite[Lemma 3.2]{BW21} shows that $$ \sum_{q>Q}\underset{(a,b,q)=1}{\sum_{a=1}^q\sum_{b=1}^q} q^{-s/4} \prod_{j=1}^s(q,\Lambda_j(a,b))^{1/4} \ll \sum_{q>Q}q^{\varepsilon-2}\ll Q^{\varepsilon-1},$$ and similarly, the proof of \cite[Lemma 3.1]{BW21} delivers the bound $$ \iint_{\mathfrak B(Q)}\prod_{j=1}^s V(\Lambda_j)\,\mathrm d\alpha\,\mathrm d\beta \ll P^{s-8}Q^{-1/4}. $$ Thus we deduce that $K_1(P^{1/8})+K_2(P^{1/8})\ll P^{s-8-1/32}$. Substituting this estimate into \eqref{8.z1}, and then recalling Lemma \ref{lem7.1}, we see that in the latter lemma we may replace $\mathfrak V$ by $\mathfrak W$. This establishes the following theorem.
\begin{theorem}\label{thm7.2} Suppose that $q_0\ge 15$ and that $k\in \{0,1\}$. Then $$\iint_{\mathfrak W}\mathscr F_k(\alpha,\beta)\,\mathrm d\alpha\, \mathrm d\beta = 2^{-s} P^{s-8}\mathfrak S\mathfrak J + O(P^{s-8-1/32}). $$ \end{theorem}
Let $\mathfrak w= [0,1]^2\setminus \mathfrak W$ denote the minor arcs. Then, in view of \eqref{sandw2}, \eqref{7.3} and Theorem \ref{thm7.2}, whenever $q_0\ge 15$, the asymptotic relation \eqref{1.4} is equivalent to the minor arc estimate \begin{equation}\label{7minor} \iint_{\mathfrak w}\mathscr F_k(\alpha,\beta) \,\mathrm d\alpha\, \mathrm d\beta = o(P^{s-8}), \end{equation} as $P\to \infty$, and in the next two sections we shall confirm this subject to the hypotheses imposed in Theorems \ref{theorem1.1} and \ref{theorem1.2}.
\section{The proof of Theorem \ref{theorem1.1}} At the core of the proof of Theorem \ref{theorem1.1} we require two minor arc estimates.
\begin{lemma}\label{lem8.1} Let $c_1,c_2,d_1,d_2\in\mathbb Z$, and suppose that $\mathrm M_j=c_j\alpha+d_j\beta$ $(j=1,2)$ are linearly independent. Then
$$ \iint_{\mathfrak w} |f(\mathrm M_1)f(\mathrm M_2)|^{15}\,\mathrm d\alpha\, \mathrm d\beta \ll P^{22-1/6+\varepsilon}. $$ \end{lemma}
\begin{proof} It is immediate from \eqref{pee} that $\mathfrak w \subset \mathfrak p$. Recall the initial argument within the proof of Theorem \ref{thm5.4}. This shows that for $(\alpha,\beta)\in \mathfrak p$, the forms $\mathrm M_1$ and $\mathrm M_2$ cannot be in $\mathfrak N_{P^4,P^{2/7}}$ simultaneously. By symmetry we may therefore suppose that $\mathrm M_1 \in\mathfrak n_{P^4,P^{2/7}}$. Now apply the transformation formula as in \eqref{8.8}. One finds that for an appropriate non-zero integer $D$, depending at most on $\mathbf c$ and $\mathbf d$, one has
$$\iint_{\mathfrak w}|f(\mathrm M_1)f(\mathrm M_2)|^{15}\,\mathrm d\alpha\,
\mathrm d\beta \ll \int_0^1\!\!\int_{\mathfrak m} |f(D\alpha )f(D\beta)|^{15}\, \mathrm d\alpha\,\mathrm d\beta ,$$ where $\mathfrak m = \mathfrak m_{P^4,P^{2/7}}$. Thus, applying a trivial estimate for one factor $f(D\beta)$, we deduce via Lemma \ref{lemma5.3} that
$$\iint_{\mathfrak w}|f(\mathrm M_1)f(\mathrm M_2)|^{15}\,\mathrm d\alpha\, \mathrm d\beta \ll P^\varepsilon \left( P^{65/6}\right) \left( P^{11}\right) \ll P^{22-1/6+\varepsilon}.$$ This completes the proof of the lemma. \end{proof}
\begin{lemma}\label{lem8.2} Suppose that any two of the binary linear forms $\mathrm M_1$, $\mathrm M_2$, $\mathrm M_3$ are linearly independent. Then
$$ \iint_{\mathfrak w} |f(\mathrm M_1)^{11}f(\mathrm M_2)^{11} f(\mathrm M_3)^4 | \,\mathrm d\alpha\,\mathrm d\beta \ll P^{18-1/18+\varepsilon}. $$ \end{lemma}
\begin{proof} On recalling that $\mathfrak w \subset \mathfrak p$, the lemma is immediate from Theorem \ref{thm5.4}.\end{proof}
We are now fully equipped to complete the proof of Theorem \ref{theorem1.1}. Suppose that we are given a pair of equations \eqref{1.1} with $s\ge 26$, $q_0\ge 15$ and profile $(r_1,r_2,\ldots,r_\nu)$. The parameter $l=s-r_1-r_2$ determines our argument. In the notation of Section 7, we let $\mathscr F= \mathscr F_k$ with $k=0$ or $1$ be the generating function defined in \eqref{7.2}.\par
Small values of $l$ call for special attention. Initially, we consider the situation with $0\leqslant l \leqslant 3$. We apply Lemma \ref{lemA3} with $J_1$ and $J_2$ the subsets of the set of indices $\{1,2,\ldots ,s\}$ counted by $r_1$ and $r_2$, respectively, and with $J_3$ the subset consisting of the remaining indices. Then $\text{card}(J_3)=l$. We also choose $$M_\nu=\ldots =M_4=0,\quad M_3=l,\quad M_2=15-l\quad \text{and}\quad M_1=s-15.$$ The condition $q_0\ge 15$ ensures that $r_1\leqslant s-15$, and $r_1+r_2=s-l=M_1+M_2$. Also, we have $M_1=s-15\ge 15-l=M_2$ because $r_1\ge r_2 \ge 15-l$ and $s=r_1+r_2+l\ge 2r_2 +l \ge 30-l$. Finally, since $0\leqslant l\leqslant 3$ it is apparent that $M_2=15-l\ge l=M_3$. Therefore, Lemma \ref{lemA3} is indeed applicable and delivers the bound $$ \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta
\ll \iint _{\mathfrak w}|f(\mathrm M_1)^{s-15}f(\mathrm M_2)^{15-l}f(\mathrm M_3)^l| \,\mathrm d\alpha\,\mathrm d\beta ,$$ where each of the $\mathrm M_j$ is one of the linear forms $\Lambda_i$, and any two of the $\mathrm M_j$ are linearly independent. We now reduce the exponent $s-15$ to $15-l$ and then apply H\"older's inequality. Thus \begin{align*} \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta &\ll
P^{s-30+l}\iint_{\mathfrak w} |f(\mathrm M_1)^{15-l}f(\mathrm M_2)^{15-l}
f(\mathrm M_3)^l| \,\mathrm d\alpha\,\mathrm d\beta \\ &\ll \Upsilon_1^{l/4}\Upsilon_2^{1-l/4}, \end{align*} where \begin{align*}
\Upsilon_1&=\iint_{\mathfrak w}|f(\mathrm M_1)^{11}f(\mathrm M_2)^{11}
f(\mathrm M_3)^4| \,\mathrm d\alpha\,\mathrm d\beta,\\
\Upsilon_2&=\iint_{\mathfrak w} |f(\mathrm M_1)f(\mathrm M_2)|^{15}\,\mathrm d\alpha\, \mathrm d\beta . \end{align*} In this scenario, therefore, we deduce from Lemmata \ref{lem8.1} and \ref{lem8.2} that \begin{align} \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta &\ll P^{s-30+l+\varepsilon}\left( P^{18-1/18}\right)^{l/4}\left( P^{22-1/6}\right)^{1-l/4} \notag \\ &\ll P^{s-8-1/18+\varepsilon}.\label{8.9} \end{align}
We may now suppose that $l\ge 4$. Then $r_1\leqslant s-15$ and $r_1+r_2\leqslant s-4$. In Lemma \ref{lemA3} we now take $J_j$ to be the subset of the set of indices $\{1,2,\ldots ,s\}$ counted by $r_j$. We also choose $$M_\nu=\ldots =M_4=0,\quad M_3=4,\quad M_2=11\quad \text{and}\quad M_1=s-15,$$ and note that the hypothesis $s\ge 26$ ensures that $M_1\ge M_2$. The conditions required to apply Lemma \ref{lemA3} are consequently in play, and we deduce that $$ \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta \ll
\iint_{\mathfrak w}|f(\mathrm M_1)|^{s-15}|f(\mathrm M_2)|^{11}
|f(\mathrm M_3)|^{4}\,\mathrm d\alpha\,\mathrm d\beta, $$ where again each of the $\mathrm M_j$ is one of the linear forms $\Lambda_i$, and any two of the $\mathrm M_j$ are linearly independent. Here $s-15\ge 11$ by the hypothesis $s\ge 26$, and we may estimate excessive copies of $f(\mathrm M_1)$ trivially and apply Lemma \ref{lem8.2}. This confirms that \eqref{8.9} also holds for $l\ge 4$. In particular, we have \eqref{7minor} subject to the hypotheses of Theorem \ref{theorem1.1}. This completes the proof of Theorem \ref{theorem1.1}.
\section{The proof of Theorem \ref{theorem1.2}} We continue to use the notation introduced in \S\S8 and 9, but now suppose that the hypotheses of Theorem \ref{theorem1.2} are met. Hence $s=25$ and $r_1\leqslant s-q_0 \leqslant 9$. We also assume that $r_5\ge 1$. Our goal on this occasion is the estimate \begin{equation}\label{10.1} \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta \ll P^{17-1/24+\varepsilon}. \end{equation} Once this is established, Theorem \ref{theorem1.2} follows in the same way as Theorem \ref{theorem1.1} was deduced from \eqref{8.9}.\par
We apply Lemma \ref{lemA3} with $J_j$ the subset of the set of indices $\{1,2,\ldots ,s\}$ counted by $r_j$ for $1\leqslant j\leqslant \nu$. Also, we put $m_j=r_j$ for each $j$ and $$M_\nu=\ldots =M_6=0,\quad M_5=M_4=1, \quad M_3=5\quad \text{and}\quad M_2=M_1=9.$$ On recalling that $r_1\leqslant 9$, it is immediate that \eqref{Hyp1} and \eqref{Hyp2} hold. Hence, Lemma \ref{lemA3} is applicable, and yields linear forms $\mathrm M_1, \ldots,\mathrm M_5$ that are linearly independent in pairs, where each $\mathrm M_j$ is one of the $\Lambda_i$, and where $$ \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta
\leqslant \iint_{\mathfrak w} |f(\mathrm M_1)^9f(\mathrm M_2)^9f(\mathrm M_3)^5 f(\mathrm M_4)f(\mathrm M_5)|\,\mathrm d\alpha\,\mathrm d\beta. $$ By H\"older's inequality, we find that $$ \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta \leqslant \Upsilon_3^{1/4} \Upsilon_4^{3/4},$$ where \begin{align*}
\Upsilon_3&= \int_0^1\!\!\int_0^1|f(\mathrm M_1)f(\mathrm M_2)f(\mathrm M_4)
f(\mathrm M_5)|^4 |f(\mathrm M_3)|^2\,\mathrm d\alpha\,\mathrm d\beta,\\
\Upsilon_4&=\iint_{\mathfrak w}|f(\mathrm M_1)f(\mathrm M_2)|^{32/3}|f(\mathrm M_3)|^6 \, \mathrm d\alpha\, \mathrm d\beta. \end{align*} Making use of the bounds supplied by Theorem \ref{thm6.2} and Theorem \ref{thm5.4} with $u=32/3$, we therefore infer that $$\iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta \ll P^\varepsilon \left( P^{11}\right)^{1/4}\left( P^{19-1/18}\right)^{3/4} \ll P^{17-1/24+\varepsilon}.$$ Thus the bound \eqref{10.1} is confirmed, and the proof of Theorem \ref{theorem1.2} is complete.
Finally, we briefly comment on the prospects of reducing the number of variables further. Note that the estimates for the minor arcs and for the whole unit square in Theorem \ref{thm5.1} coincide for $u=25/3$. Since $\delta(25/3)=0$, therefore, when $s=25$ our basic method narrowly fails to be applicable to the system of equations \eqref{1.1}. Further, it transpires that each additional variable contributes a factor $P$ to the major arc contribution, but only $P^{5/6}$ to the minor arc versions of Theorems \ref{thm5.1} and \ref{thm5.2}. As indicated in \S 1 already, it is worth comparing the $18$th moment ($u=6$) in Theorem \ref{thm5.1} with that in Theorem \ref{thm6.2}, the latter being superior by a factor $P^{1/6}$. It transpires that even if it were possible to propagate this saving through the moment method, we would still fail to handle cases of \eqref{1.1} with $s=24$, but only by a factor $P^\varepsilon$. However, at this stage, the only workable compromise seems to be to apply Theorem \ref{thm6.2} in conjunction with Theorems \ref{thm5.1} or \ref{thm5.4}, via H\"older's inequality. If the profile of the equations \eqref{1.1} is even more favourable than in Theorem \ref{theorem1.2}, then one can put more weight on the bound stemming from Theorem \ref{thm6.2}. For example, if we suppose that $s=24$ and $r_1\leqslant 5$, then $\nu\ge 5$ and $r_5\leqslant 4$, so that in hopefully self-explanatory notation, the minor arc contribution can be reduced to something of the shape $$ \iint_{\mathfrak w} \mathscr F(\alpha,\beta)\,\mathrm d\alpha\,\mathrm d\beta
\ll \iint_{\mathfrak w} |f(\mathrm M_1)^5f(\mathrm M_2)^5f(\mathrm M_3)^5 f(\mathrm M_4)^5f(\mathrm M_5)^4|\,\mathrm d\alpha\,\mathrm d\beta . $$ One may then introduce the identity \eqref{2.4} with $\alpha=\mathrm M_j$ for all $1\leqslant j\leqslant 5$ simultaneously. The most difficult term that then arises is that weighted with ${\sf n}(\mathrm M_1)\cdots{\sf n}(\mathrm M_5)$. A cascade of applications of H\"older's inequality together with Theorem \ref{thm5.1} shows this term to be bounded by $$ (\Upsilon_3)^{3/5} (J_{11})^{2/5} \ll P^{16+1/15+\varepsilon}, $$ which is quite far from saving another variable.
\end{document} |
\begin{document}
\title{Response Behavior of Bi-stable Point Wave Energy Absorbers under Harmonic Wave Excitations} \begin{abstract} To expand the narrow response bandwidth of linear point wave energy absorbers (PWAs), a few research studies have recently proposed incorporating a bi-stable restoring force in the design of the absorber. Such studies have relied on numerical simulations to demonstrate the improved bandwidth of the bi-stable absorbers. In this work, we aim to understand how the shape of the bi-stable restoring force influences the effective bandwidth of the absorber. To this end, we use perturbation methods to obtain an approximate analytical solution of the nonlinear differential equations governing the complex motion of the absorber under harmonic wave excitations. The approximate solution is validated against a numerical solution obtained via direct integration of the equations of motion. Using a local stability analysis of the equations governing the slow modulation of the amplitude and phase of the response, the loci of the different bifurcation points are determined as a function of the wave frequency and amplitude. Those bifurcation points are then used to define an effective bandwidth of the absorber. The influence of the shape of the restoring force on the effective bandwidth is also characterized by generating design maps that can be used to predict the kind of response behavior (small amplitude periodic, large amplitude periodic, or aperiodic) for any given combination of wave amplitude and frequency. Such maps are critical towards designing efficient bi-stable PWAs for known wave conditions. \end{abstract}
\keywords{Wave energy, Point wave energy absorber, Bi-stability, Nonlinearity}
\section{Introduction} Wave energy constitutes one of the most promising and dense renewable energy sources that is yet to be fully exploited. Since the beginning of human civilization, several devices have been devised to harness energy from ocean waves both at small and large scales. Today, methods used to exploit wave energy can be categorized based on their working principle into three different categories; namely, oscillating water columns, overtopping devices, and point wave energy absorbers (PWAs) \cite{al2019point}. Among such approaches, PWAs received the most attention due to their simple design and working principle. In its simplest form, a PWA is composed of a partially-submerged body (buoy) connected through a mooring mechanism to a linear electromagnetic generator attached to the seabed. When waves set the buoy into motion, it pulls a cable connecting it to a linear generator, which creates relative motions between the translating part of the generator (translator) and stationary magnets (stator). As per Faraday's law of induction, this motion induces a current in the generator coils.
Due to their fundamental principle of operation, traditional PWAs, which employ a linear restoring force, can work efficiently only near resonance; i.e., when the buoy's velocity is in phase with the wave excitation force. Unfortunately, for typical energetic marine sites, this condition cannot be easily satisfied for reasonably sized systems. Because of the high stiffness of the hydrostatic restoring force emanating from buoyancy, the resonance frequency of the absorber is typically higher than the dominant frequencies in the spectrum of the incoming ocean waves \cite{Falnes2012}. Furthermore, because linear PWAs have a narrow bell-shaped frequency response with a distinct peak at the resonance frequency, they are incapable of efficiently extracting power from the wide frequency content of the ocean waves, thereby leaving most of the wave energy unexploited.
To overcome such issues, different ideas and solutions have been proposed \cite{drew2009review}. These include the use of active control strategies to bring the natural frequency of the absorber closer to the dominant frequency in the ocean wave spectrum, and the introduction of a bi-stable restoring force to broaden the frequency response bandwidth of the absorber \cite{younesian2017multi,schubert2020performance}. The idea of utilizing a bi-stable restoring force in PWAs emanated from the field of vibration energy harvesting, where it was shown that vibratory energy harvesters whose potential energy function has two potential wells separated by a potential energy barrier have a broader frequency bandwidth, and are, therefore, less sensitive to changes in the excitation parameters \cite{daqaq2014role}.
A schematic diagram of a bi-stable PWA is shown in Figure \ref{fig:schematic1}. The only difference between the linear and bi-stable PWAs is the addition of the bi-stable spring attachment in parallel with the power take-off unit (PTO). This attachment is specifically designed to create a bi-stable restoring force and can be realized using a set of pre-stretched springs \cite{younesian2017multi} or magnetic interactions \cite{schubert2020performance,xi2021high,xiao2017comparative,zhang2019efficiency}. The shape of the potential energy function associated with the bi-stable PWA is shown in Figure \ref{fig:Types of Motion}. The system has two stable equilibria (nodes) separated by a potential barrier (saddle). For some combinations of the wave frequency and amplitude, the response of the buoy remains confined to a single potential well (intra-well motion), while for others, the dynamic trajectories overcome the potential barrier, causing the buoy to undergo large-amplitude inter-well motions that span the two stable equilibria. This type of large-amplitude motion can extend over a wide spectrum of frequencies which, depending on the shape of the restoring force, can even extend to very low frequencies. These characteristics are key to improving the energy capture from the lower frequency content of the ocean waves.
In terms of performance, a comparison between linear and bi-stable PWAs has revealed a superior bandwidth for the bi-stable absorbers under harmonic wave conditions \cite{younesian2017multi}. In addition, when considering irregular random waves, results demonstrated superior robustness of the bi-stable absorber with less sensitivity to variations in the frequency content of the waves. A numerical analysis performed in Ref. \cite{zhang2016oscillating} demonstrated that the performance of the bi-stable PWA is dependent on the shape of its potential energy function and the ability of the dynamic trajectories to escape the potential wells for any given combination of wave frequency and amplitude. Thus, in order to improve the ability of the absorber to perform large-amplitude inter-well motions for a wide range of wave conditions, an adaptive bi-stable absorber, which can adjust the depth of its potential barrier to match the wave excitation intensity, was first proposed in Ref. \cite{zhang2018application}, followed by other studies \cite{ zhang2019mechanism,song2020performance}. \begin{figure}
\caption{Schematic diagram of a bi-stable PWA.}
\label{fig:schematic1}
\end{figure} \begin{figure}
\caption{Typical potential energy function of a symmetric bi-stable system showing the two types of possible motions (inter- and intra-well).}
\label{fig:Types of Motion}
\end{figure}
We noticed that, despite the relatively large body of research focused on studying the behavior of bi-stable PWAs, all of the previous studies relied on purely numerical means without attempting to investigate the complex underlying dynamics of the system via analytical or semi-analytical techniques. Approximate analytical solutions of the governing nonlinear equations can provide key additional insights into the long-time behavior and bandwidth characteristics of the PWA that cannot otherwise be inferred from numerical simulations alone \cite{daqaq2014role}.
Aiming to bridge this gap, we derive in this paper an approximate analytical solution of the nonlinear differential equations governing the complex motion of the absorber under harmonic wave excitations. Using a local stability analysis of the equations governing the slow modulation of the amplitude and phase of the response, we determine the loci of the different bifurcation points as a function of the wave excitation frequency and amplitude. Those bifurcation points are then used to define an effective bandwidth of the absorber. We generate design maps that characterize the influence of the shape of the potential energy function of the PWA on its effective bandwidth. Those maps can be used to predict the type of response behavior of the absorber; e.g., small-amplitude periodic, large-amplitude periodic, or aperiodic, for any given combination of wave amplitude and frequency. We believe that such maps are valuable towards designing efficient bi-stable PWAs for known wave conditions.
The rest of the paper is organized as follows: in Section \ref{sec:mathematical formulation}, the mathematical model governing the motion of the bi-stable PWA is presented and discussed. In Section \ref{sec:MOMS}, an asymptotic analytical solution of the governing equations is derived using the method of multiple scales for both intra-well and inter-well oscillations. In Section 4, the different bifurcations of the asymptotic solution are identified and analyzed using a stability analysis of the equations governing the slow modulation of the response. In Section \ref{sec: maps}, design maps that characterize the influence of the shape of the bi-stable restoring force of the PWA on its effective bandwidth are generated and discussed. Finally, in Section \ref{sec:conclusion}, the main conclusions of this work are presented. \section{Mathematical formulation} \label{sec:mathematical formulation} \subsection{Governing equations} \begin{figure}
\caption{A lumped-parameter model of the bi-stable PWA.}
\label{fig:mathmodel}
\end{figure} Assuming that the buoy undergoes motions in the heave direction only, the equations governing the motion of the absorber can be obtained by applying Newton's second law to the buoy, and Kirchhoff's current law to the harvesting circuit, which yields the following governing equations for the equivalent lumped system shown in Figure \ref{fig:mathmodel}:
\begin{subequations} \label{eq:Commins} \begin{align} \begin{split} \label{eq:Subeq1}
&(m+m_{\infty})y'' +\int_{0}^{t} h(t-\tau)\, y'(\tau)\, d\tau +c y' +(k_{hys}-k_{1})y+k_{3}y^3=f_{wave} \cos(\omega t), \end{split}\\
&V_L^{'} + \frac{R_L}{L} V_L = \alpha y' .\label{eq:Subeq2} \end{align} \end{subequations}
Here, $y$ represents the displacement of the buoy in the heave direction and the prime denotes a derivative with respect to time, $t$. In Equation (\ref{eq:Subeq1}), $m$ and $m_\infty$ represent, respectively, the mass of the buoy and the added mass of the fluid. The integral term accounts for the radiation damping effect; $c$ is a linear viscous damping coefficient; $k_{hys}$ is the stiffness resulting from the hydrostatic buoyancy force, given by $k_{hys}=\rho g S$. Here, $\rho$ is the density of water, $g$ is the gravitational acceleration constant, and $S$ is the buoy's wet surface area. The coefficients $k_1>0$ and $k_3>0$ represent, respectively, the linear and cubic coefficients of the nonlinear restoring force added to introduce the bi-stable behavior, and $f_{wave}$ and $\omega$ represent, respectively, the wave force amplitude and frequency. Note that to induce a bi-stable potential energy function, $k_1$ must be larger than $k_{hys}$.
In Equation (\ref{eq:Subeq2}), $V_L$ represents the voltage induced by the generator across a purely resistive load $R_L$; $L$ represents the inductance of the harvesting coil, and $\alpha$ is the electromechanical coupling coefficient.
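A minimal time-domain sketch of Equations (\ref{eq:Commins}) may help fix ideas. In the snippet below the radiation convolution is neglected for brevity, and all parameter values are placeholders chosen only to produce a bi-stable potential; they are not values from this paper:

```python
# Minimal RK4 integration of Eqs. (1): a bi-stable oscillator coupled to an
# RL harvesting circuit.  The radiation convolution is dropped, and every
# parameter value is an illustrative placeholder.
import numpy as np

m_tot, c = 1.0, 0.1               # (m + m_inf) and viscous damping  [placeholders]
k_lin, k3 = -1.0, 1.0             # (k_hys - k_1) < 0 and cubic stiffness -> bi-stable
RL, L_ind, alpha = 1.0, 0.5, 0.2  # load resistance, coil inductance, coupling
f_w, omega = 0.35, 0.8            # wave force amplitude and frequency

def rhs(t, z):
    # state z = (y, y', V_L); Eq. (1a) as written has no back-coupling from V_L
    y, v, VL = z
    ydd = (f_w*np.cos(omega*t) - c*v - k_lin*y - k3*y**3) / m_tot
    VLd = alpha*v - (RL/L_ind)*VL          # Eq. (1b)
    return np.array([v, ydd, VLd])

def rk4(z0, t_end, dt=5e-3):
    z, t = np.array(z0, dtype=float), 0.0
    while t < t_end:
        k1 = rhs(t, z)
        k2 = rhs(t + dt/2, z + dt/2*k1)
        k3_ = rhs(t + dt/2, z + dt/2*k2)
        k4 = rhs(t + dt, z + dt*k3_)
        z = z + dt/6*(k1 + 2*k2 + 2*k3_ + k4)
        t += dt
    return z

# start near one of the stable equilibria y = +/- sqrt(-k_lin/k3) = +/- 1
z_final = rk4([1.0, 0.0, 0.0], 100.0)
```

Sweeping `f_w` and `omega` in such a sketch reproduces the qualitative picture described above: small forcing yields intra-well motion, while stronger forcing lets trajectories cross the barrier.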
The added mass and the radiation damping depend on the fluid velocity field around the buoy, and hence, are a function of the wave frequency. Following the asymptotic analysis provided in the work of \textit{Hulme} \cite{hulme1982wave} for a spherical buoy of radius $R$, the curves governing the dependence of the normalized added mass, denoted here as $\overline{m}_a=\frac{m_a}{M}$ $(M=\frac{2}{3}\pi R^3 \rho)$, and the normalized radiation damping coefficient $\overline{B}=\frac{B}{M\omega}$ on the normalized wave frequency $\Omega=\omega/\sqrt{g/R}$ are shown in Figure \ref{fig:addedMass and Damping}.
\begin{figure}
\caption{Variation of the normalized added mass $\overline{m}_a$, and the normalized radiation damping coefficient $\overline{B}$ with the square of the normalized wave frequency.}
\label{fig:addedMass and Damping}
\end{figure}
The radiation kernel can be further related to the radiation damping coefficient $B(\omega)$ via the following equation \cite{ogilvie1969rational}: \begin{equation} \label{eq:Ogilive int}
h(t) = \frac{2}{\pi} \int_0^\infty B(\omega)\cos(\omega t) d\omega, \end{equation} which can be discretized as follows: \begin{equation} \label{eq:ogilvie}
h(t)= \frac{2}{\pi}\lim_{\delta \omega\to 0}\sum_{i=1}^{\infty} B(\omega_i)\cos(\omega_i t)\delta\omega.
\end{equation} Here, the $B(\omega_i)$ are the values of the radiation damping coefficient calculated at the discrete values $\omega_i$ shown in Figure \ref{fig:addedMass and Damping}.
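As a sanity check on this discretization, the Riemann sum can be evaluated for a damping curve whose cosine transform is known in closed form. The sketch below (with a synthetic $B(\omega)=e^{-\omega}$ rather than the Hulme data, purely for illustration) recovers the exact transform $h(t)=\frac{2}{\pi}\frac{1}{1+t^{2}}$:

```python
import numpy as np

# Synthetic radiation-damping curve (an assumption for illustration only;
# the paper samples Hulme's curves instead): B(w) = exp(-w), whose cosine
# transform is known exactly: h(t) = (2/pi) * 1/(1 + t^2).
def h_discretized(t, d_omega=0.01, omega_max=40.0):
    """Midpoint Riemann sum of Ogilvie's relation h = (2/pi) * sum B*cos."""
    w = np.arange(d_omega / 2.0, omega_max, d_omega)
    return (2.0 / np.pi) * np.sum(np.exp(-w) * np.cos(w * t)) * d_omega

for t in (0.0, 1.0, 2.0):
    print(t, h_discretized(t), (2.0 / np.pi) / (1.0 + t**2))
```

With $\delta\omega = 0.01$ the sum matches the exact transform to better than $10^{-3}$, which is the level of fidelity needed before the kernel is fitted in the next subsection.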
The amplitude of the force $f_{wave}$ acting on the buoy due to the incident waves can be related to the radiation damping coefficients by employing \textit{Haskind's} relation \cite{haskind2010exciting,newman1962exciting}, which states that \begin{equation} \label{eq:Haskind}
f_{wave}(\omega) = A_{wave} \sqrt{\frac{2 \rho g^3}{\omega^3} B(\omega)}, \end{equation} where $A_{wave}$ is the regular wave amplitude. The physical interpretation of Equation (\ref{eq:Haskind}) is fairly intuitive, as it relates the tendency of the buoy to radiate waves in a certain direction to the excitation forces the buoy experiences from waves propagating in the same direction. For more insight, the reader can refer to the seminal works of \textit{Haskind} and \textit{Newman} \cite{haskind2010exciting,newman1962exciting}.
\subsection{Approximation of the convolution integral} \label{sec: convolution } As stated earlier, the goal of our work is to obtain approximate analytical solutions of the equations governing the motion of the buoy in order to gain deeper insights into the influence of the shape of the restoring force on the performance of the absorber. To achieve this goal, we will use the method of multiple scales \cite{nayfeh2008perturbation}. In order to facilitate the implementation of the method, we obtain in this section an approximation of the convolution integral governing the radiation damping in Equation (\ref{eq:Subeq1}).
To this end, we first use the fact that the output, $z(t)$, of a linear dynamical system can be expressed as a convolution integral between its input $u(t)$, and an impulse response function $h(t)$, in the form \begin{equation} \label{eq:state-space output}
\textbf{z}(t)=\int_0^t h(t-\tau) \textbf{u}(\tau) d\tau \approx \textbf{C}_r \textbf{x}(t). \end{equation} Here, $\textbf{z}(t) \in \mathbb{R}^q$ is the output vector, $\textbf{x}(t) \in \mathbb{R}^n$ is the state vector, and $\textbf{C}_r$ is a $q \times n$ output matrix. The state vector evolves according to the following linear state-space equation \begin{align} \label{state-space}
\textbf{x}'(t) & =\textbf{A}_r\textbf{x}(t)+\textbf{B}_r\textbf{u}(t). \end{align}
Using the Eigensystem Realization Algorithm (ERA) detailed in Ref. \cite{brunton2019data}, the convolution integral can be expressed in terms of the realized state-space matrices $\textbf{A}_r, \textbf{B}_r$ and $\textbf{C}_r$ as:
\begin{equation} \label{eq:h(t)=CexpAB}
h(t)=\left(M \sqrt{\frac{g}{R}} \right)\textbf{C}_r e^{\textbf{A}_rt} \textbf{B}_r,
\end{equation} where the realized state-space matrices are: \begin{equation*} \textbf{A}_r = 0.8 \begin{pmatrix} -1 & 1 & 1\\ -1 & 0 & 0\\ -1 & 0 & -2 \end{pmatrix}, \end{equation*} \begin{equation*} \textbf{B}_r = \begin{pmatrix} -0.48 & -0.02 & -0.22 \end{pmatrix}^T, \end{equation*} and \begin{equation*} \textbf{C}_r = \begin{pmatrix} -0.46 & 0 & 0.18 \end{pmatrix}. \end{equation*} Note that the numerical values appearing in the realized state-space matrices are general for any spherical buoy of radius $R$. For more details on the ERA procedure, the interested reader can refer to Appendix \ref{apndx: ERA}.
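The realized triple can be exercised directly. A minimal sketch follows; the matrix exponential is built by eigendecomposition, which is legitimate here since $\textbf{A}_r$ has three distinct eigenvalues, $-0.8$ and $-0.8\pm0.8i$:

```python
import numpy as np

# ERA-realized matrices from the text (normalized kernel h(t)/(M*sqrt(g/R))).
A_r = 0.8 * np.array([[-1.0, 1.0,  1.0],
                      [-1.0, 0.0,  0.0],
                      [-1.0, 0.0, -2.0]])
B_r = np.array([-0.48, -0.02, -0.22])
C_r = np.array([-0.46,  0.00,  0.18])

def h_norm(t):
    """Normalized kernel C_r * exp(A_r t) * B_r via eigendecomposition."""
    w, V = np.linalg.eig(A_r)                      # distinct eigenvalues
    expAt = (V * np.exp(w * t)) @ np.linalg.inv(V) # exp(A_r t)
    return float(np.real(C_r @ expAt @ B_r))

print(h_norm(0.0))  # = C_r @ B_r, about 0.1812
```

At $t=0$ the kernel reduces to $\textbf{C}_r\textbf{B}_r \approx 0.18$, and it decays on the $e^{-0.8t}$ envelope set by the eigenvalues.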
The matrix exponential $e^{\textbf{A}_rt}$ in Equation (\ref{eq:h(t)=CexpAB}) can be further expressed in the following form \begin{equation} \label{eq:state-transition}
e^{\textbf{A}_rt}=\mathcal{L}^{-1}\left(s\textbf{I} - \textbf{A}_r\right)^{-1}. \end{equation} Here, $\mathcal{L}^{-1}$ is the inverse Laplace transform, and \textbf{I} is the identity matrix. Substituting Equation (\ref{eq:state-transition}) into Equation (\ref{eq:h(t)=CexpAB}), we obtain the following analytical expression for $h(t)$: \begin{equation} \label{eq:h(t) analytical}
h(t)=\left( M \sqrt{\frac{g}{R}} \right) e^{-\mu t}\left(\lambda_1 + \lambda_2 \cos(\mu t) + \lambda_3 \sin(\mu t)\right), \end{equation} where $\mu , \lambda_1, \lambda_2$ and $\lambda_3$ are constants listed in Table \ref{Table:h(t) constants}, and are valid for any spherical buoy of radius $R$. \begin{table} \caption{Numerical values of the constants appearing in Equation (\ref{eq:h(t) analytical}).} \label{Table:h(t) constants} \begin{center} \begin{tabular}{c c} \hline
Parameter & Value \\
\hline
$\mu$ & 0.8 \\
$\lambda_1$ & -0.44 \\
$\lambda_2$ & 0.62 \\
$\lambda_3$ & 0.24 \\
\hline \end{tabular} \end{center} \end{table}
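Because the eigenvalues of $\textbf{A}_r$ are exactly $-0.8$ and $-0.8 \pm 0.8i$, the state-space kernel $\textbf{C}_r e^{\textbf{A}_r t}\textbf{B}_r$ has precisely the form of Equation (\ref{eq:h(t) analytical}), so the two-decimal constants in Table \ref{Table:h(t) constants} can be checked numerically. A short sketch:

```python
import numpy as np

A_r = 0.8 * np.array([[-1.0, 1.0,  1.0],
                      [-1.0, 0.0,  0.0],
                      [-1.0, 0.0, -2.0]])
B_r = np.array([-0.48, -0.02, -0.22])
C_r = np.array([-0.46,  0.00,  0.18])
mu, lam1, lam2, lam3 = 0.8, -0.44, 0.62, 0.24   # Table values (rounded)

w, V = np.linalg.eig(A_r)
Vinv = np.linalg.inv(V)

def h_state_space(t):
    # C_r exp(A_r t) B_r, built from the eigendecomposition of A_r
    return float(np.real(C_r @ (V * np.exp(w * t)) @ Vinv @ B_r))

def h_closed_form(t):
    return np.exp(-mu * t) * (lam1 + lam2 * np.cos(mu * t) + lam3 * np.sin(mu * t))

err = max(abs(h_state_space(t) - h_closed_form(t))
          for t in np.linspace(0.0, 10.0, 101))
print("max difference:", err)  # small; due only to the two-decimal rounding
```

The residual difference is on the order of $10^{-2}$ or less and comes entirely from rounding the table entries to two decimals.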
Figure \ref{fig:h(t)} depicts a comparison between the analytical expression of $h(t)$ as obtained using Equation (\ref{eq:h(t) analytical}) and that obtained using the original expression of Equation (\ref{eq:ogilvie}) for a hemispherical buoy of radius $R=5$ [m]. It can be clearly seen that the analytical expression for $h(t)$ is in excellent agreement with the original impulse response function obtained via Equation (\ref{eq:ogilvie}). \begin{figure}
\caption{Impulse response function $h(t)$ for a hemispherical buoy of radius 5 [m]. Solid line represents the original function obtained by applying Equation (\ref{eq:ogilvie}). Circles represent the values of $h(t)$ obtained from the analytical expression in Equation (\ref{eq:h(t) analytical}) which is based on the third-order realized state-space estimated through employing the eigensystem realization algorithm.}
\label{fig:h(t)}
\end{figure}\\
\section{Approximate Analytical Solution} \label{sec:MOMS} In this section, we employ the method of multiple scales \cite{nayfeh2008perturbation} to obtain an approximate analytical solution of Equation (\ref{eq:Commins}). As mentioned earlier, and shown in Figure \ref{fig:Types of Motion}, there are two possible steady-state motions: those confined to a single potential well, known as intra-well oscillations, and those that span the two potential wells, known as inter-well motions. We first obtain analytical approximations of the motion trajectories within one potential well, and then we seek approximate solutions that govern the inter-well motions.
\subsection{Local intra-well response} In this subsection, we obtain an approximate solution for Equation (\ref{eq:Commins}) when the buoy undergoes motions within a single potential well; i.e., the dynamics around one stable equilibrium node. Upon using the radius of the buoy, $R$, as a length scale and $\sqrt{R/g}$ as a time scale, we obtain the following dimensionless equivalent equation of motion:
\begin{subequations} \label{eq:ComminsWN}
\begin{align}
\ddot{Y}&+\delta_1 \int_{0}^{t^*} \overline{h}(t^*-\tau) \dot{Y}(\tau) d\tau +\delta_2 \dot{Y}-\omega^2_n Y+\gamma Y^3=g_{wave} \cos(\Omega t^*),\\
&\dot{v} + \theta v= \dot{Y},
\end{align}
\end{subequations} where the overdot represents the derivative with respect to the nondimensional time, $t^*$, and the other parameters and nondimensional groups are defined as:
\begin{center}
$\delta_1=\frac{m+m_\infty}{M}$, $\delta_2=\frac{c}{(m+m_{\infty})}\sqrt{\frac{R}{g}}$, $\omega_n=\sqrt{\frac{(k_1-\rho g S) R }{(m+m_{\infty})g}}$, $\gamma=\frac{R^3 k_3}{(m+m_{\infty})g}$, $g_{wave}=\frac{A M}{R(m+m_{\infty})} \Omega\sqrt{\frac{3\overline{B}}{\pi}}$ , $M=\frac{2}{3}\pi R^3 \rho$, $Y=\frac{y}{R}$, $v=\frac{V}{\alpha R}$, $\overline{h}=\frac{h}{M}\sqrt{\frac{R}{g}}$, $\theta=\frac{R_L}{L } \sqrt{\frac{R}{g}}$, $t^*=t\sqrt{g/R}$.
\end{center} We expand the dynamics governed by Equation (\ref{eq:ComminsWN}) about the stable node, $Y_s= \sqrt{\omega^2_n/\gamma}$, by introducing the transformation $Y(t^*)=z(t^*) - Y_s$ into Equation (\ref{eq:ComminsWN}). This yields
\begin{subequations} \label{eq:Commins intra-well}
\begin{align}
\begin{split}
\ddot{z} &+\delta_1 \int_{0}^{t^*} \overline{h}(t^*-\tau) \dot{z}(\tau) d\tau +\delta_2 \dot{z}+\omega^2_0 z+\eta z^2+\gamma z^3=g_{wave} \cos(\Omega t^*),
\end{split}\\
&\dot{v} + \theta v = \dot{z}.
\end{align}
\end{subequations} Here, $z(t^*)$ represents the dynamic trajectories within the potential well, and $\omega_o=\sqrt{2}\omega_n$ is the corresponding local frequency of oscillations. It is worth noting that the expansion about the stable node introduces a new quadratic term, with coefficient $\eta=-3\gamma Y_s$, which captures the asymmetric nature of the motion trajectories about the stable node, $Y_s$.
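For concreteness, with the parameter values used later in the simulations ($\omega_n=0.78$, $\gamma=50$; see Table \ref{Table:WEC parameters}), the node location, the local frequency, and the quadratic coefficient generated by the shift work out as follows (a sketch; the expression $\eta=-3\gamma Y_s$ follows from expanding the cubic restoring term about the node):

```python
import numpy as np

omega_n, gamma = 0.78, 50.0       # example values from the later simulations

Y_s = np.sqrt(omega_n**2 / gamma)  # stable node of -omega_n^2*Y + gamma*Y^3
omega_o = np.sqrt(2.0) * omega_n   # local intra-well frequency, sqrt(2)*omega_n
eta = -3.0 * gamma * Y_s           # quadratic coefficient from the shift Y = z - Y_s

print(Y_s, omega_o, eta)
```

Only $\eta^2$ and the sign of the static offset depend on which of the two symmetric wells is chosen.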
To implement the method of multiple scales on Equation (\ref{eq:Commins intra-well}), we introduce multiple time scales, $T_n= \epsilon^n t^*$, $n=0,1,2$. It follows that \begin{equation} \label{eq:time scales_intra} \begin{split}
\dot{(.)} &= D_0 + \epsilon D_1 + \epsilon^2 D_2 + O (\epsilon^3), \\
\ddot{(.)} &= D_0^2 + 2\epsilon D_0 D_1 + \epsilon^2 \left( D_1^2 + 2 D_0 D_2 \right) + O (\epsilon^3), \end{split} \end{equation} where $\epsilon$ is a scaling parameter, and $D_n$ is the temporal derivative operator with respect to the time scale $T_n$. Next, we seek an expansion for the response $z(t^*)$ and voltage $v(t^*)$ as \begin{subequations} \label{eq:y expansion_intra} \begin{align}
z(t^*, \epsilon)&=z_0(T_0,T_1,T_2) + \epsilon z_1(T_0,T_1,T_2) +\epsilon^2 z_2(T_0,T_1,T_2) + O (\epsilon^3), \\
v (t^*, \epsilon)&= v_0 (T_0,T_1,T_2) + \epsilon v_1(T_0,T_1,T_2) +\epsilon^2 v_2(T_0,T_1,T_2) + O (\epsilon^3). \end{align} \end{subequations} Based on the typically small values of the damping coefficients, quadratic and cubic nonlinearities, electromechanical coupling, the time constant of the harvesting circuit, and the excitation amplitude, we scale them to be at order $\epsilon^2$; that is \begin{center}
$\delta_1=\epsilon^2 \delta_1$, $\delta_2=\epsilon^2 \delta_2$, $\eta= \epsilon^2 \eta$, $\gamma=\epsilon^2 \gamma$ , $g_{wave}=\epsilon^2 g_{wave}$.
\end{center} Since large-amplitude intra-well motions occur near the primary resonance of the system, i.e., when $\Omega$ is near $\omega_o$, we limit the intra-well analysis to wave frequencies that are close to $\omega_o$ by introducing the detuning parameter $\sigma$ such that \begin{equation} \label{eq:detuning paraemter_intra}
\Omega=\omega_o + \epsilon^2 \sigma. \end{equation} Upon substituting Equations (\ref{eq:time scales_intra} - \ref{eq:detuning paraemter_intra}) into Equation (\ref{eq:Commins intra-well}), then collecting terms of equal powers of $\epsilon$, we obtain the following perturbation problems at the different scales:
\noindent \underline{$O(\epsilon^0)$}:\\ \begin{subequations} \label{eq:order 0 intra-well} \begin{align}
D_0^2 z_0 + \omega_o^2 z_0 &=0,\\
D_0 v_{0} + \theta v_0 &= D_0z_0, \end{align} \end{subequations}\\ \underline{$O(\epsilon^1)$}:\\ \begin{subequations} \label{eq:order 1 intra-well} \begin{align}
D_0^2 z_1 +\omega_o^2 z_1 &=-2D_0 D_1 z_0 - \eta z_0^2, \\
D_0 v_1+\theta v_1 &= D_0z_1+ D_1z_0 - D_1v_0, \end{align} \end{subequations}\\ \underline{$O(\epsilon^2)$}:\\ \begin{subequations} \label{eq:order 2 intra-well} \begin{align} \begin{split}
D_0^2 z_2 + \omega^2_o z_2 =&-2D_0 D_1 z_1 - 2D_0 D_2z_0 -D_1^2 z_0 -\delta_1 \int_0^{T_0} \overline{h} (T_0 - \tau) D_0 z_0 (\tau) d\tau \\
&-\delta_2 D_0z_0 -2\eta z_0 z_1 - \gamma z^3_0+g_{wave} \cos\left((\omega_o+\epsilon^2 \sigma)T_0\right), \end{split}\\ \begin{split} D_0 v_2+ \theta v_{2} =& D_0 z_2 + D_2 z_0 + D_1z_1 - D_2 v_0 - D_1 v_1. \end{split} \end{align} \end{subequations}\\ Upon solving Equations (\ref{eq:order 0 intra-well}) and (\ref{eq:order 1 intra-well}), we obtain the following expressions for $z_0$, $v_0$, $z_1$ and $v_1$: \begin{subequations} \label{eq:y_0 intra-well} \begin{align}
z_0 &=A(T_1,T_2) e^{i \omega_o T_0} + cc,\\
v_0 &=\Gamma_0 A(T_1,T_2) e^{i \omega_o T_0} + cc, \end{align} \end{subequations} and \begin{subequations} \label{eq:y_1 intra-well} \begin{align}
z_1 &=\frac{\eta}{\omega^2_o} \left( \frac{A^2(T_1,T_2)}{3} e^{2 i \omega_o T_0} - 2 A(T_1,T_2) \overline{A}(T_1,T_2) \right) + cc, \\
v_1 &=\eta \Gamma_1 \frac{A^2(T_1,T_2)}{3 \omega_o^2} e^{2 i\omega_0 T_0} +cc, \end{align} \end{subequations} where \begin{equation*}
\Gamma_0 = \frac{\omega^2_o + i \theta \omega_o }{\omega^2_o + \theta^2}, \qquad \Gamma_1 = \frac{4 \omega^2_o + 2i \theta \omega_o }{4 \omega^2_o +\theta^2}, \end{equation*} and $cc$ stands for the complex conjugate of the preceding terms.
Elimination of the secular terms from the second-order perturbation problem yields
$D_1 A(T_1,T_2)=0$, which implies that the complex-valued function $A$ depends only on the slowest time scale, $T_2$.
Elimination of the secular terms from the third-order problem associated with Equation (\ref{eq:order 2 intra-well}) requires evaluating the convolution integral $ \int_0^{T_0} \overline{h}(T_0 - \tau) D_0 z_0 (\tau) d\tau$. To this end, we use Equation (\ref{eq:h(t) analytical}) to write: \begin{equation} \label{eq:h(T0-tau)}
\overline{h}(T_0 - \tau)= e^{-\mu T_0} \Big( \lambda_1 e^{\mu \tau} + \lambda_2 e^{\mu \tau} \cos(\mu \tau - \mu T_0) - \lambda_3 e^{\mu \tau} \sin(\mu \tau - \mu T_0) \Big).
\end{equation} Also, from Equation (\ref{eq:y_0 intra-well}), we have \begin{equation} \label{eq:D_0 y_0}
D_0 z_0 (\tau)= i \omega_o A e^{i \omega_o \tau } + cc. \end{equation} Thus, using Equations (\ref{eq:h(T0-tau)}) and (\ref{eq:D_0 y_0}), we can express the convolution integral as \begin{equation} \label{eq:convolution expanded} \begin{split}
\int_0^{T_0} \overline{h}(T_0 - \tau) D_0z_0 (\tau) d\tau =i \omega_o A e^{-\mu T_0}\int_0^{T_0} \Big( &\lambda_1 e^{(\mu+i \omega_o) \tau}+ \lambda_2 e^{(\mu+i \omega_o) \tau} \cos(\mu \tau - \mu T_0) \\
&- \lambda_3 e^{(\mu+i \omega_o) \tau} \sin(\mu \tau - \mu T_0) \Big) d\tau, \end{split} \end{equation} which upon integration by parts yields \begin{equation} \label{eq:convolution solution -intrawell}
\int_0^{T_0} \overline{h}(T_0 - \tau) D_0 z_0 (\tau) d\tau= A \omega_o (\xi_0 +i \bar{\xi}_0) e^{i \omega_o T_0} +NST+cc, \end{equation} where $NST$ stands for non-secular terms, and \begin{equation} \label{eq:C_1 -intrawell}
\begin{split}
\xi_0&=\left( \frac{\lambda_1\omega_o}{\mu^2+\omega_o^2} + \frac{\lambda_2\omega_o^3}{4\mu^4+\omega_o^4} +\frac{2\lambda_3\mu^2\omega_o}{4\mu^4+\omega_o^4} \right),\\
\bar{\xi}_0&=\left( \frac{\lambda_1 \mu}{\mu^2+\omega_o^2} + \frac{\lambda_2(2\mu^3+\mu\omega_o^2)}{4\mu^4+\omega_o^4}+\frac{\lambda_3(2\mu^3-\mu\omega_o^2)}{4\mu^4+\omega_o^4} \right).
\end{split} \end{equation} Substituting Equations (\ref{eq:convolution solution -intrawell}) and (\ref{eq:C_1 -intrawell}) into the third perturbation problem presented in Equation (\ref{eq:order 2 intra-well}), and using the following polar transformation for the complex valued function $A(T_2)$ \begin{equation} \label{eq:polar transformation}
\begin{split}
A(T_2)&= \frac{a(T_2)}{2} e^{i \beta (T_2)}, \\
\overline{A}(T_2)&= \frac{a(T_2)}{2} e^{-i \beta (T_2)},
\end{split} \end{equation} and then eliminating the secular terms yields the following modulation equations: \begin{equation} \label{eq:amplitude modulation-intra-well}
D_2 a= -\left(\frac{\delta_1 \bar{\xi}_0}{2} + \frac{\delta_2}{2}\right) a + \frac{g_{wave}}{2\omega_o} \sin \psi \end{equation} \begin{equation} \label{eq:phase modulation-intra-well}
a D_2 \psi=\left(\sigma-\frac{\delta_1 \xi_0}{2}\right)a+\left(\frac{5\eta^2}{12\omega_o^3} - \frac{3\gamma}{8\omega_o}\right)a^3+ \frac{g_{wave}}{2\omega_o} \cos \psi, \end{equation} where $a$ and $\beta$ represent, respectively, the amplitude and phase of oscillations, and $\psi=\sigma T_2-\beta$.
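The coefficients $\xi_0$ and $\bar{\xi}_0$ have a compact interpretation: the secular part of the convolution equals $i\omega_o A\,\hat{H}(i\omega_o)\,e^{i\omega_o T_0}$, where $\hat{H}$ is the Laplace transform of $\overline{h}$, so that $\xi_0 + i\bar{\xi}_0 = i\hat{H}(i\omega_o)$. The sketch below ($\omega_o=\sqrt{2}\cdot 0.78$ is an illustrative frequency) evaluates the transform by direct quadrature and checks it against the partial-fraction closed form; note that $\bar{\xi}_0>0$, as an effective radiation-damping coefficient must be:

```python
import numpy as np

mu, lam1, lam2, lam3 = 0.8, -0.44, 0.62, 0.24  # kernel constants (Table values)
omega_o = np.sqrt(2.0) * 0.78                  # illustrative frequency (assumed)

# Direct quadrature of xi_0 + i*xibar_0 = i * int_0^inf hbar(u) e^{-i w u} du
u = np.linspace(0.0, 50.0, 200001)             # e^{-mu*50} is negligible
f = (np.exp(-mu * u) * (lam1 + lam2 * np.cos(mu * u) + lam3 * np.sin(mu * u))
     * np.exp(-1j * omega_o * u))
du = u[1] - u[0]
transform = 1j * du * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule
xi0_num, xibar0_num = transform.real, transform.imag

# Partial-fraction closed form of the same transform
d1, d2 = mu**2 + omega_o**2, 4.0 * mu**4 + omega_o**4
xi0 = lam1 * omega_o / d1 + (lam2 * omega_o**3 + 2.0 * lam3 * mu**2 * omega_o) / d2
xibar0 = (lam1 * mu / d1
          + (lam2 * (2.0 * mu**3 + mu * omega_o**2)
             + lam3 * (2.0 * mu**3 - mu * omega_o**2)) / d2)

print(xi0_num, xibar0_num)
print(xi0, xibar0)
```

The quadrature and the closed form agree to quadrature accuracy, which is a useful check when the kernel constants or the frequency are changed.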
\subsection{Global inter-well response}
In this section, we seek an approximate solution for the high-energy orbits; i.e., when the system undergoes global inter-well oscillations. Since Equation (\ref{eq:ComminsWN}) has a negative linear stiffness, the method of multiple scales cannot be employed in its straightforward fashion. To overcome this, we first write the natural frequency of symmetric oscillations in the form
\begin{equation} \label{eq:Nat freq.}
\omega_N^2=-\omega_n^2+ \sigma_1,
\end{equation} where $\sigma_1$ is a detuning parameter. Since we are seeking an approximate solution in the vicinity of the primary resonance of the inter-well motions, we express the nearness of the wave frequency to the natural frequency, $\omega_N$, by introducing the detuning parameter $\sigma_2$, such that
\begin{equation} \label{eq:detuning interwell}
\Omega^2=\omega_N^2+\epsilon^2 \sigma_2.
\end{equation}
Upon combining Equations (\ref{eq:Nat freq.} - \ref{eq:detuning interwell}), we arrive at
\begin{equation} \label{eq:Nat freq 2}
-\omega_n^2=\Omega^2 - (\omega_N^2+\omega_n^2) - \epsilon^2(\Omega^2-\omega_N^2).
\end{equation} Substituting Equation (\ref{eq:Nat freq 2}) into Equation (\ref{eq:ComminsWN}), we obtain
\begin{equation} \label{eq:Commins-interwell2}
\begin{split}
\ddot{Y} + &\epsilon^2 \delta_1 \int_{0}^{t^*} \overline{h}(t^*-\tau) \dot{Y}(\tau) d\tau + \epsilon^2 \delta_2 \dot{Y} +\Omega^2 Y \\
+ &\epsilon(-(\omega_N^2+\omega_n^2)Y+ \gamma Y^3)- \epsilon^2(\Omega^2-\omega_N^2)Y = \epsilon^2 g_{wave} \cos(\Omega t^*).
\end{split}
\end{equation}
Next, we employ the method of multiple scales on Equation (\ref{eq:Commins-interwell2}) by introducing slow and fast time scales $T_n= \epsilon^n t^*$, $n=0,1,2$, and seek an expansion of the response in the form
\begin{subequations} \label{eq:yexpansion} \begin{align}
Y(t^*, \epsilon)&=Y_0(T_0,T_1,T_2) + \epsilon Y_1(T_0,T_1,T_2) +\epsilon^2 Y_2(T_0,T_1,T_2) + O (\epsilon^3), \\
v (t^*, \epsilon)&= v_0 (T_0,T_1,T_2) + \epsilon v_1(T_0,T_1,T_2) +\epsilon^2 v_2(T_0,T_1,T_2) + O (\epsilon^3). \end{align} \end{subequations} Implementing the method of multiple scales on the scaled equations as described in the previous subsection yields the following solution for $Y$ and $v$.
\begin{subequations} \begin{align}
Y &= a \cos (\Omega t^* + \psi) + O (\epsilon), \\
v &= \Gamma_2 a \cos (\Omega t^*+ \psi) + O (\epsilon), \end{align} \end{subequations} where \begin{equation*}
\Gamma_2 = \frac{\Omega^2 + i \theta \Omega}{\Omega^2 + \theta^2} \end{equation*} and the amplitude, $a$, and phase, $\psi$, of the response are governed by the following modulation equations: \begin{equation} \label{eq:amplitude modulation-interwell}
\omega_n D_2 a = -\left(\frac{\omega_n \delta_1 \bar{\xi}_1 + \omega_n \delta_2}{2}\right)a - \frac{g_{wave}}{2} \sin \psi \end{equation} \begin{equation} \label{eq:phase modulation-interwell} \omega_n a D_2 \psi=-\left(\frac{\Omega^2 + \omega_n^2}{2} -\omega_n \delta_1 \xi_1 \right)a + \frac{3 \gamma}{8} a^3+\frac{3 \gamma^2 }{256 \omega_n^2}a^5-\frac{g_{wave}}{2} \cos \psi, \end{equation} where \begin{equation} \label{eq:C_2 -interwell}
\begin{split}
\xi_1&=\left( \frac{\lambda_1 \omega_n}{\mu^2+\omega_n^2} + \frac{\lambda_2 \omega_n^3}{4\mu^4+\omega_n^4} +\frac{2\lambda_3 \mu^2 \omega_n}{4\mu^4+\omega_n^4} \right),\\
\bar{\xi}_1&=\left( \frac{\lambda_1 \mu}{\mu^2+\omega_n^2} + \frac{\lambda_2(2\mu^3+\mu \omega_n^2)}{4\mu^4+\omega_n^4}+\frac{\lambda_3(2\mu^3-\mu \omega_n^2)}{4\mu^4+\omega_n^4} \right).
\end{split} \end{equation}
\subsection{Steady-state response} The long-time steady-state behavior of the absorber is of particular interest to assess its performance. Thus, the time derivatives in Equations (\ref{eq:amplitude modulation-intra-well} - \ref{eq:phase modulation-intra-well}) and (\ref{eq:amplitude modulation-interwell} - \ref{eq:phase modulation-interwell}) are set to zero and the resulting algebraic equations are then solved numerically for the steady-state amplitude $a_o$, and phase $\psi_o$. For intra-well oscillations, we obtain: \begin{subequations} \label{eq: intrawell oscillations} \begin{align} \begin{split} \label{eq:Intra-Subeq1}
Y=a_o \cos(\Omega t^*- \psi_o)+ \frac{\eta}{2 \omega_0^2} \left(-a_o^2+\frac{a_o^2}{3} \cos(2 \Omega t^*- 2 \psi_o)\right) +O(\epsilon^2), \end{split}\\
&v=\frac{\omega_0^2}{\omega_0^2+\theta^2}a_o \cos(\Omega t^* -\psi_o) - \frac{\omega_0 \theta}{\omega_0^2+\theta^2}a_o \sin(\Omega t^* -\psi_o) +O(\epsilon^2),
\label{eq:Intra-Subeq2} \end{align} \end{subequations} and for the inter-well oscillations, we obtain \begin{subequations} \label{eq: interwell oscillations} \begin{align} \begin{split} \label{eq:Inter-Subeq1} Y&=a_o \cos(\Omega t^* - \psi_o) + \left( \frac{\gamma}{32 \omega_n^2} a_o^3 + \frac{3 \gamma^2}{1024 \omega_n^4} a_o^5 \right) \cos(3\Omega t^* - 3\psi_o) \\ &+ \frac{\gamma^2}{1024 \omega_n^4} a_o^5 \cos(5\Omega t^* - 5\psi_o) +O(\epsilon^2), \end{split}\\
v&=\frac{\omega_n^2}{\omega_n^2+\theta^2}a_o \cos(\Omega t^* -\psi_o) - \frac{\omega_n \theta}{\omega_n^2+\theta^2}a_o \sin(\Omega t^* -\psi_o) +O(\epsilon^2).
\end{align} \end{subequations}
Using the steady-state response, the average power available at the buoy can be expressed as \begin{equation} \label{eq: Normalized averaged power}
P_{avg}=\frac{1}{T^*} \int_0^{T^*} \delta_2 \dot{Y}^2 dt^* , \end{equation} where $T^*$ is the period of oscillations. For intra-well oscillations Equation (\ref{eq: Normalized averaged power}) reduces to \begin{equation} \label{eq: POWER intrawell}
P_{avg}= \delta_2 \left(\frac{\Omega^2 a_o^2}{2} + \frac{\eta^2 \Omega^2 a_o^4}{18 \omega_0^4} \right) + O(\epsilon^2), \end{equation} while for inter-well oscillations, we get \begin{equation} \label{eq: POWER interwell}
P_{avg}=\delta_2 \left(\frac{\Omega^2 a_o^2}{2} + \frac{9 \gamma^2 \Omega^2 a_o^6}{2048 \omega_n^4}\right) +\ O(\epsilon^2). \end{equation}
In addition to the averaged power, we are also interested in evaluating the capture width ratio (CWR), which is a common parameter used to evaluate a PWA's performance. The CWR, or absorption width as it is sometimes called in the literature, is defined as the ratio between the average absorbed power, $P_{avg}$, and the power available at the wave front, $P_{wave}$. The latter can be obtained by multiplying the wave energy flux per unit crest length by the buoy's characteristic length, which is the diameter for a hemispherical buoy. This yields \cite{wang2017modelling}:
\begin{equation} \label{eq: CWR}
CWR=\frac{6 (m+m_\infty) \Omega}{\rho R A^2} P_{avg}. \end{equation}
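To make the steady-state procedure concrete, the inter-well fixed points can be computed by eliminating the phase between the two modulation equations, which turns the amplitude condition into a polynomial in $a_o^2$. The sketch below does this for illustrative parameter values ($\omega_n$, $\gamma$, $\delta_2$ as in Table \ref{Table:WEC parameters}; $\delta_1$, the forcing amplitude, and $\Omega$ are assumed, and $\xi_1$, $\bar{\xi}_1$ are evaluated from the Laplace transform of the radiation kernel at $\omega_n$):

```python
import numpy as np

# Illustrative parameters; delta_1, the forcing gw, and Omega are assumptions.
omega_n, gamma, delta_2 = 0.78, 50.0, 0.13
delta_1, gw, Omega = 1.0, 0.1, 0.6
mu, lam1, lam2, lam3 = 0.8, -0.44, 0.62, 0.24

# xi_1 + i*xibar_1 = i * (Laplace transform of the kernel at s = i*omega_n)
d1, d2 = mu**2 + omega_n**2, 4.0 * mu**4 + omega_n**4
xi1 = lam1 * omega_n / d1 + (lam2 * omega_n**3 + 2.0 * lam3 * mu**2 * omega_n) / d2
xibar1 = (lam1 * mu / d1
          + (lam2 * (2.0 * mu**3 + mu * omega_n**2)
             + lam3 * (2.0 * mu**3 - mu * omega_n**2)) / d2)

# Setting D2 a = D2 psi = 0 and eliminating psi:
#   (cd*a)^2 + (cs*a + c3*a^3 + c5*a^5)^2 = (gw/2)^2
cd = omega_n * (delta_1 * xibar1 + delta_2) / 2.0
cs = -((Omega**2 + omega_n**2) / 2.0 - omega_n * delta_1 * xi1)
c3, c5 = 3.0 * gamma / 8.0, 3.0 * gamma**2 / (256.0 * omega_n**2)

# Degree-5 polynomial in x = a^2, solved with numpy.roots
coeffs = [c5**2, 2*c3*c5, c3**2 + 2*cs*c5, 2*cs*c3, cs**2 + cd**2, -(gw/2.0)**2]
x = [r.real for r in np.roots(coeffs) if abs(r.imag) < 1e-9 and r.real > 0]
amps = sorted(np.sqrt(r) for r in x)
P_avg = [delta_2 * (Omega**2 * a**2 / 2.0
                    + 9.0 * gamma**2 * Omega**2 * a**6 / (2048.0 * omega_n**4))
         for a in amps]
print(amps, P_avg)
```

Each real positive root $a_o$ is then classified as stable or unstable by the Jacobian analysis of the next section.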
\section{Stability Analysis and Numerical Simulations} \label{sec:bifurcations} Bi-stable PWAs are known to produce large-amplitude responses over certain frequency ranges under harmonic wave excitation. Unfortunately, such desired motions can be incited only when the excitation is capable of channelling enough energy to the PWA to overcome the potential energy barrier and perform periodic inter-well motions. Even when excited, those desired motions can be uniquely realized only over a specific bandwidth of the wave frequency, which we term here the effective bandwidth of the PWA. Outside the effective bandwidth, the large-amplitude motions are often accompanied by other, less desirable responses; e.g., small periodic or aperiodic motions. To define this effective bandwidth, it is important to study the stability of the steady-state periodic solutions $a_o$ and $\psi_o$ as the excitation frequency is varied. At the first level, this can be realized by finding the eigenvalues of the Jacobian matrices associated with Equations (\ref{eq:amplitude modulation-intra-well}) - (\ref{eq:phase modulation-intra-well}) and (\ref{eq:amplitude modulation-interwell})-(\ref{eq:phase modulation-interwell}).
Figure \ref{fig:JacobianStability} depicts the variation of the steady-state amplitude of the absorber with the excitation frequency as obtained using Equations (\ref{eq:amplitude modulation-intra-well}) - (\ref{eq:phase modulation-intra-well}) and (\ref{eq:amplitude modulation-interwell})-(\ref{eq:phase modulation-interwell}). The parameters used in the simulation are listed in Table \ref{Table:WEC parameters}, and the associated symmetric potential energy function of the PWA is depicted in Figure \ref{fig:POT2}.
\begin{table} \caption{WEC parameters.} \label{Table:WEC parameters} \begin{center} \begin{tabular}{c c} \hline
Parameter & Value \\
\hline
$\omega_n$ & 0.78\\
$\gamma$ & 50 \\
$\delta_2$ & 0.13 \\
\hline \end{tabular} \end{center} \end{table}
In the figure, solid lines represent stable steady-state single-period periodic solutions of amplitude $a_o$, and dashed lines represent unstable, unrealizable periodic solutions. Based on the Jacobian stability analysis, it is evident that there are three branches of stable periodic solutions: the branches $B_r$ and $B_n$, which represent, respectively, the resonant and non-resonant branches of the small-amplitude intra-well motions, and the branch $B_L$, which represents the large-amplitude inter-well motions. The stability analysis reveals two cyclic-fold bifurcations, $Cf_1$ and $Cf_2$, which result from stable and unstable periodic orbits colliding and annihilating each other. \begin{figure}
\caption{Stroboscopic and analytical bifurcation diagrams for the bi-stable PWA under regular wave excitation at nondimensional wave amplitude of 0.1. Solid lines: stable solution. Dashed lines: unstable solutions.}
\label{fig:JacobianStability}
\end{figure}
\begin{figure}
\caption{Potential energy function associated with the system's parameters listed in Table \ref{Table:WEC parameters}.}
\label{fig:POT2}
\end{figure}
The superimposed stroboscopic bifurcation map in Figure \ref{fig:JacobianStability}, obtained by a numerical integration of the original equations of motion, Equation (\ref{eq:Commins}), reveals a much more complex behavior than what can be seen by relying on the Jacobian-based stability analysis. In particular, the bifurcation map reveals regions of aperiodic motions that extend over a wide range of frequencies. While we notice good agreement between the analytical and numerical solutions on the $B_r$ branch in the higher range of frequencies down to about $\Omega=1.2$, the Jacobian-based stability analysis does not reveal that the intra-well solutions on the branch $B_r$ actually undergo a cascade of period-doubling, $pd$, bifurcations starting near $\Omega=1.2$. These bifurcations ultimately lead to a window of chaos, $CH$, which extends down to about $\Omega=0.9$. This window ultimately disappears in a boundary crisis. Phase portraits and the Fast Fourier Transform (FFT) of the period-doubled and chaotic solutions are shown in Figure \ref{fig: FFTs_Local}, clearly demonstrating the period-doubling route to chaos.
\begin{figure}
\caption{Phase portraits and FFT spectra showing the instability route of the intra-well branch. $(a):\Omega_{wave}=1$, $(b):\Omega_{wave}=1.2$, and $(c):\Omega_{wave}=1.4$. }
\label{fig: FFTs_Local}
\end{figure}
Furthermore, as shown in Figure \ref{fig:JacobianStability}, the period-one solutions on branch $B_L$ also lose stability via symmetry-breaking bifurcations, $SB_1$ and $SB_2$, resulting in asymmetric periodic orbits as shown in the phase portraits and FFTs of Figure \ref{fig: FFTs_Global}. In the region between $SB_1$ and $SB_2$, the desired large-orbit period-one periodic solution is unique and, unlike the rest of the frequency bandwidth considered, does not coexist with other, less desirable solutions. We refer to this desired bandwidth as the effective bandwidth of the PWA. \begin{figure}
\caption{Phase portraits and FFT spectra showing the instability route of the inter-well branch. $(a):\Omega_{wave}=0.42$, $(b):\Omega_{wave}=0.62$, and $(c):\Omega_{wave}=0.8$. }
\label{fig: FFTs_Global}
\end{figure}
Based on the numerical analysis, the response bandwidth of the absorber can be divided into different regions as shown in Figure \ref{fig:motion_discription_NUM}. The first region ($I$) occurs in the low-frequency range and contains inter-well chaotic motions coexisting with small-magnitude intra-well motions. The second region ($II$) is the effective bandwidth, which contains the unique large-magnitude period-one inter-well motions. The third region ($III$) contains chaotic motions coexisting with asymmetric periodic motions. Region four ($IV$) contains chaotic motions only. Finally, region five ($V$) contains unique small-amplitude period-one intra-well periodic motions.
\begin{figure}
\caption{Stroboscopic diagram for the Bi-stable PWA showing regions with different types of motion. Simulation performed at nondimensional wave amplitude $A_{wave}/R$ = 0.1. }
\label{fig:motion_discription_NUM}
\end{figure}
At lower wave amplitudes, the bi-stable PWA reveals a different frequency response behavior, as shown in Figure \ref{fig:JacobianStability2}. The most important feature is the birth of a new cyclic-fold bifurcation point, denoted by $Cf_3$ and located at the tip of the resonant branch, $B_r$. In addition, we notice that the entire $B_r$ branch remains stable and the $pd$ bifurcation is shifted towards the nonresonant intra-well branch, $B_n$. This arrangement creates a region where the two stable branches, $B_r$ and $B_n$, coexist, which leads to jumps between them as depicted in the enlarged window in Figure \ref{fig:JacobianStability2}. In addition, the stroboscopic bifurcation map shows a chaotic region that spans the entire frequency spectrum to the left of $Cf_1$. This implies that the large-amplitude branch $B_L$ ceases to exist at $Cf_1$, and no periodic inter-well motion can be realized.
\begin{figure}
\caption{Stroboscopic and analytical bifurcation diagrams for the bi-stable PWA under regular wave excitation at nondimensional wave amplitude of 0.034. Solid lines: stable solution. Dashed lines: unstable solutions.}
\label{fig:JacobianStability2}
\end{figure}
Understanding the behavior of the absorber requires characterizing these regions as a function of the frequency and amplitude of the incident waves. This can be realized by approximating the loci of the different bifurcations: $Cf_1$, $Cf_2$, $Cf_3$, $pd$, $SB_1$ and $SB_2$. The first three bifurcations can be approximated using the Jacobian stability analysis, while the last three require the implementation of Floquet theory. In what follows, we obtain analytical approximations of all of these bifurcations as functions of the design parameters of the PWA.
\subsection{Cyclic-fold bifurcation $Cf_1$ on the inter-well branch} The cyclic-fold point $Cf_1$ satisfies the relation $d\Omega/d a_o= 0$ on the inter-well branch, $B_L$. Thus, its location can be obtained by differentiating Equations (\ref{eq:amplitude modulation-interwell}-\ref{eq:phase modulation-interwell}) with respect to $a_o$ and setting $d\Omega/d a_o$ to zero to arrive at the following eighth-order polynomial: \begin{equation} \label{eq: CF1}
\begin{aligned}
&\frac{45 \gamma^4}{4096 \omega_n^4}a_b^8 +\frac{9 \gamma^3}{128 \omega_n^2}a_b^6 + \left( \frac{99 \gamma^2}{64} - \frac{3 \gamma^2 \Omega_b^2}{128 \omega_n^2} + \frac{3 \gamma^2 \delta_1 \xi_1}{64 \omega_n} \right)a_b^4 -3\gamma \left(\Omega_b^2+\omega_n^2 -\delta_1 \omega_n \xi_1 \right)a_b^2\\
&+\left(\Omega_b^2+\omega_n^2 -\delta_1 \omega_n \xi_1 \right)^2+\left(\delta_2 \omega_n + \delta_1 \omega_n\bar{\xi}_1 \right)^2=0,
\end{aligned} \end{equation} where $(\Omega_b, a_b)$ represent the wave frequency and response amplitude at which the bifurcation occurs. For any set of design parameters, the polynomial in Equation (\ref{eq: CF1}) can be solved to obtain the locus of the $Cf_1$ bifurcation in the parameter space of wave amplitude and frequency. \subsection{Cyclic-fold bifurcations $Cf_2$ and $Cf_3$ on the intra-well branch} These cyclic-fold bifurcations also satisfy the relation $d \Omega/ d a_o = 0$, but on the intra-well branch, $B_n$; i.e., Equations (\ref{eq:amplitude modulation-intra-well}-\ref{eq:phase modulation-intra-well}). Differentiating Equations (\ref{eq:amplitude modulation-intra-well}-\ref{eq:phase modulation-intra-well}) with respect to $a_o$ and setting $d \Omega/ d a_o = 0$, we obtain \begin{equation} \label{eq: intra-well CF loci}
a_b^2=\frac{-2\left(\Omega_b - \omega_o-\frac{\delta_1 \xi_0}{2}\right) \pm \sqrt{\left(\Omega_b - \omega_o-\frac{\delta_1 \xi_0}{2}\right)^2-3\left(\frac{\delta_1\bar{\xi}_0}{2} + \frac{\delta_2}{2}\right)^2}}{3\left(\frac{5\eta^2}{12\omega_o^3} - \frac{3\gamma}{8\omega_o}\right)}. \end{equation} Here, $a_b$ and $\Omega_b$ are, respectively, the amplitude of oscillations and the wave frequency at which the cyclic-fold bifurcations, $Cf_2$ and $Cf_3$, occur. Upon solving Equation (\ref{eq: intra-well CF loci}), we can obtain the loci of the $Cf_2$ and $Cf_3$ bifurcations in the parameter space of wave amplitude and frequency.
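As a numerical illustration, the locus of the $Cf_1$ bifurcation can be traced by treating Equation (\ref{eq: CF1}) as a quartic in $a_b^2$ and retaining its real, positive roots. The sketch below does this with \texttt{numpy}; all parameter values are illustrative placeholders, not values taken from this paper.

```python
import numpy as np

# Illustrative parameter values (placeholders, not from the paper)
gamma, omega_n, Omega_b = 50.0, 0.78, 0.9
delta_1, delta_2 = 0.1, 0.05
xi_1, xi_1_bar = 0.2, 0.1

# Equation (CF1) is an 8th-order polynomial in a_b, i.e. a quartic in x = a_b^2
c4 = 45 * gamma**4 / (4096 * omega_n**4)
c3 = 9 * gamma**3 / (128 * omega_n**2)
c2 = (99 * gamma**2 / 64 - 3 * gamma**2 * Omega_b**2 / (128 * omega_n**2)
      + 3 * gamma**2 * delta_1 * xi_1 / (64 * omega_n))
c1 = -3 * gamma * (Omega_b**2 + omega_n**2 - delta_1 * omega_n * xi_1)
c0 = ((Omega_b**2 + omega_n**2 - delta_1 * omega_n * xi_1)**2
      + (delta_2 * omega_n + delta_1 * omega_n * xi_1_bar)**2)

roots = np.roots([c4, c3, c2, c1, c0])          # roots in x = a_b^2
mask = (np.abs(roots.imag) < 1e-9) & (roots.real > 0)
a_b_candidates = sorted(np.sqrt(roots[mask].real))  # physical amplitudes a_b
```

Sweeping $\Omega_b$ over a frequency range and repeating this root-finding step traces the full $Cf_1$ locus in the wave amplitude versus frequency plane.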
\subsection{Period doubling bifurcation} \label{sec: PD bifurcation} One of the main dynamical characteristics of the intra-well oscillations is the appearance of a period-doubling bifurcation on the $B_r$ branch as the wave frequency is decreased toward lower values. To estimate this bifurcation point as a function of the wave frequency and amplitude, we perturb the periodic orbit ${Y}(t^*)$ by introducing the perturbation $\mathcal{p}(t^*)$ such that \begin{equation} \label{eq:PD perturbation}
\Tilde{Y}(t^*)=Y(t^*) + \mathcal{p}(t^*). \end{equation} Here, $\Tilde{Y}(t^*)$ is a stable periodic orbit if $\mathcal{p}(t^*) \to 0$ as $t^* \to \infty$; otherwise it is unstable. Substituting Equation (\ref{eq:PD perturbation}) into Equation (\ref{eq:Commins intra-well}) and retaining only linear terms in $\mathcal{p}(t^*)$, we obtain, after simplifications, the following scaled equations:
\begin{equation} \label{eq:Hill's}
\begin{split}
\mathcal{p}''+\epsilon \delta_1 \int_{0}^{t^*} \overline{h}(t^*-\tau) \mathcal{p}'(\tau) d\tau +\epsilon \delta_2 \mathcal{p}' &+ [ G_0+\epsilon G_1 \cos(\Omega t^* -\psi)+\epsilon^2 G_2 \cos(2\Omega t^* -2\psi) \\
& +\epsilon^2G_3 \cos(3\Omega t^* -3 \psi) +\epsilon^2 G_4 \cos(4\Omega t^* -4\psi) ] \mathcal{p} =0,
\end{split}
\end{equation}
where Equation (\ref{eq:Hill's}) represents a Mathieu-type differential equation with four parametric excitation terms. Each of these terms produces a principal subharmonic parametric instability when its frequency is twice the natural frequency $\sqrt{G_0}$ of the perturbation \cite{meyers2011mathematics, kovacic2018mathieu}. The interested reader can refer to Appendix \ref{Apndx: Parametric constants} for the full expressions of the constants of the parametric terms, $G_i$.
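The constants $G_i$ can be evaluated directly from the closed-form expressions listed in Appendix \ref{Apndx: Parametric constants}. The sketch below simply transcribes those expressions; any parameter values supplied to it are placeholders.

```python
def parametric_constants(a_o, omega_o, gamma, eta):
    """G_i of Equation (eq:Hill's), transcribed from the appendix expressions."""
    G0 = (omega_o**2 - (eta**2 / omega_o**2) * a_o**2 + 1.5 * gamma * a_o**2
          + (3 * gamma * eta**2 / (4 * omega_o**4)) * a_o**4
          + (gamma * eta**2 / (24 * omega_o**4)) * a_o**4)
    G1 = 2 * eta * a_o - (5 * gamma * eta / (2 * omega_o**2)) * a_o**3
    G2 = ((eta**2 / (3 * omega_o**2)) * a_o**2 + 1.5 * gamma * a_o**2
          - (gamma * eta**2 / (2 * omega_o**4)) * a_o**4)
    G3 = (gamma * eta / (2 * omega_o**2)) * a_o**3
    G4 = (gamma * eta**2 / (24 * omega_o**4)) * a_o**4
    return G0, G1, G2, G3, G4
```

Note that at $a_o = 0$ only $G_0 = \omega_o^2$ survives, so the parametric terms vanish with the response amplitude, as expected.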
In order to examine the dynamics of the perturbation $\mathcal{p}(t^*)$, we derive an asymptotic approximation for the evolution of $\mathcal{p}(t^*)$ using the method of multiple scales, assuming the following first-order expansion:
\begin{equation} \label{eq: P expansion}
\mathcal{p}(t^*,\epsilon)=\mathcal{p}_0(t^*)+\epsilon \mathcal{p}_1(t^*)+ O(\epsilon^2).
\end{equation}
Using the time scales and time derivatives defined in Equation (\ref{eq:time scales_intra}), and substituting Equation (\ref{eq: P expansion}) into Equation (\ref{eq:Hill's}), we obtain the following differential equations at the different orders of $\epsilon$:\\ \underline{$O(\epsilon^0)$}:\\ \begin{equation} \label{eq: order0 PD}
D_0^2 \mathcal{p}_0 + G_0 \mathcal{p}_0=0, \end{equation} which admits the following homogeneous solution \begin{equation} \label{eq: order0 soln PD}
\mathcal{p}_0=Q(T_1) e^{i\sqrt{G_0}T_0}+\overline{Q}(T_1) e^{-i\sqrt{G_0}T_0}, \end{equation} and \\ \underline{$O(\epsilon^1)$}: \begin{equation} \label{eq: order1 PD}
D_0^2 \mathcal{p}_1 +G_0 \mathcal{p}_1 = -2D_1 D_0 \mathcal{p}_0 - \delta_1 \int_0^{T_0} \overline{h}(T_0-\tau)D_0 \mathcal{p}_0 d\tau -\delta_2 D_0 \mathcal{p}_0 -G_1 \mathcal{p}_0 \cos(\Omega t^* - \psi).
\end{equation} Here, $Q(T_1)$ and its complex conjugate $\overline{Q}(T_1)$ are unknowns that can be expressed in the following polar form: \begin{equation} \begin{split}
Q(T_1) &= \frac{q(T_1)}{2} e^{i\nu (T_1)},\\
\overline{Q}(T_1) &= \frac{q(T_1)}{2} e^{-i\nu (T_1)}, \end{split} \end{equation} where $q$ and $\nu$ are, respectively, the amplitude and phase of the perturbation $\mathcal{p}$. Since we are interested in obtaining the point at which the first period-doubling bifurcation occurs, we seek to approximate the solution when $\Omega$ is near $2 \sqrt{G_0} $. Thus, we express the proximity of the excitation frequency to twice the natural frequency by introducing \begin{equation} \label{eq: Hills detuning}
\Omega=2 \sqrt{G_0} + \epsilon \sigma. \end{equation} Upon substituting (\ref{eq: order0 soln PD}) and (\ref{eq: Hills detuning}) into Equation (\ref{eq: order1 PD}), then eliminating the secular terms, we obtain the following equation which governs the locus of the period-doubling bifurcation as function of the design parameters of the absorber: \begin{equation} \label{eq: Hill's Soln}
(\sqrt{G_0} \Omega - 2G_0-\delta_1 \sqrt{G_0} \xi_2) ^2
+ ( \delta_1 \sqrt{G_0} \bar{\xi}_2 + \delta_2 \sqrt{G_0} )^2 = \frac{G^2_1}{4},
\end{equation} where \begin{equation} \label{eq:C_3 -pd}
\begin{split}
\xi_2&=\left( \frac{\lambda_1 \sqrt{G_0}}{\mu^2+G_0} + \frac{2\lambda_2 \mu^2 \sqrt{G_0}}{4\mu^4+G_0^2} -\frac{\lambda_3 G_0^{\frac{3}{2}}}{4\mu^4+G_0^2} \right),\\
\bar{\xi}_2&=\left( \frac{\lambda_1 \mu}{\mu^2+G_0} + \frac{\lambda_2(2\mu^3-\mu G_0)}{4\mu^4+G_0^2}-\frac{\lambda_3(2\mu^3+\mu G_0)}{4\mu^4+G_0^2} \right).
\end{split} \end{equation}
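In practice, Equation (\ref{eq: Hill's Soln}) can be solved explicitly for the two wave frequencies bounding the period-doubling instability tongue. A minimal sketch, with illustrative (assumed) parameter values, follows; $\xi_2$ and $\bar{\xi}_2$ are treated as precomputed inputs.

```python
import numpy as np

def pd_frequencies(G0, G1, delta_1, delta_2, xi_2, xi_2_bar):
    """Solve Equation (Hill's Soln) for Omega: the two frequencies bounding
    the period-doubling tongue. Returns None when the effective damping
    term exceeds G1^2/4 and the instability is suppressed."""
    damping = G0 * (delta_1 * xi_2_bar + delta_2) ** 2  # second squared term
    disc = G1 ** 2 / 4.0 - damping
    if disc < 0.0:
        return None
    half_width = np.sqrt(disc)
    center = 2.0 * G0 + delta_1 * np.sqrt(G0) * xi_2
    return ((center - half_width) / np.sqrt(G0),
            (center + half_width) / np.sqrt(G0))

# Illustrative placeholder values, centered near Omega = 2*sqrt(G0)
omega_lo, omega_hi = pd_frequencies(1.0, 0.4, 0.1, 0.05, 0.2, 0.1)
```

Both returned frequencies satisfy the locus equation exactly, and the tongue collapses (returns \texttt{None}) once damping dominates the parametric forcing amplitude $G_1$.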
\subsection{Symmetry-breaking bifurcations of the inter-well branch} The other dynamical features that we have a particular interest in estimating are the symmetry-breaking bifurcation points on the symmetric inter-well solution branch. To this end, we examine the stability of the approximate expansion given in Equation (\ref{eq: interwell oscillations}) by introducing an infinitesimal perturbation $\mathcal{p}(t^*)$ to the periodic solution $Y(t^*)$ as \begin{equation} \label{eq:CF perturbation}
\tilde{Y}(t^*) = Y(t^*) + \mathcal{p}(t^*). \end{equation} Upon substituting Equation (\ref{eq:CF perturbation}) into Equation (\ref{eq:ComminsWN}), we obtain the following Mathieu-type differential equation: \begin{equation} \label{eq: Hill's CF}
\mathcal{p}''(t^*) + \delta_1 \int_{0}^{t^*} h(t^*-\tau) \mathcal{p}'(\tau) d\tau + \delta_2 \mathcal{p}'(t^*)
+ \left(K_0 + \sum_{n=1}^N K_n \cos(2n \Omega t^*) \right) \mathcal{p}(t^*) =0.
\end{equation} Because of the presence of the terms $K_n \cos(2n \Omega t^*)$ in Equation (\ref{eq: Hill's CF}), $\mathcal{p}(t^*)$ must admit solutions that have even frequencies, which breaks the symmetry of the original solution, $Y(t^*)$. Since the parametric terms are periodic with period $T=\pi/\Omega$, it follows by virtue of Floquet theory that $\mathcal{p}(t^*)$ satisfies the following equation:
\begin{equation}
\mathcal{p}(t^*+T)=\lambda \mathcal{p}(t^*),
\end{equation}
where $\lambda$ is a Floquet multiplier; that is, an eigenvalue of the matrix $C$ associated with the fundamental matrix solution $\Phi(t^*)$ of Equation (\ref{eq: Hill's CF}), which satisfies $\Phi(t^*+T)=\Phi(t^*)C$. The matrix $C$, also known as the monodromy matrix, can be thought of as a transformation that maps $\Phi(0)$ to $\Phi(T)$. Specifying the initial condition $\Phi(0)=I$ yields \begin{equation}
C=\Phi(T). \end{equation}
In order to assess the stability of $\mathcal{p}(t^*)$, we solve for $\Phi(t^*)$ starting at $t^*=0$ and ending at $t^*=T$. We then examine the eigenvalues of the matrix $C=\Phi(T)$. The set of differential equations governing the fundamental matrix solution can be constructed using the following equation: \begin{equation} \label{eq:fundamental matrix solution} \frac{d}{dt^*} \Phi(t^*)=
\left[\includegraphics[width=0.25\textwidth ,valign=c]{matrix.pdf}\right]
\Phi(t^*), \end{equation} where $A_r$, $B_r$ and $C_r$ are the realized state-space matrices accounting for radiation damping, and $f(t^*)$ is the parametric excitation term given as: \begin{equation}
f(t^*)=K_0 + \sum_{n=1}^N K_n \cos(2n \Omega t^*). \end{equation} The reader can refer to Appendix \ref{Apndx: Parametric constants} for the constants $K_i$. Equation (\ref{eq:fundamental matrix solution}) results in a set of 25 linear differential equations subjected to the initial conditions $\Phi(0)=I$. Here, $I$ is an identity matrix of dimension 5.
Integrating Equation (\ref{eq:fundamental matrix solution}) numerically over [0, $T$] using the initial condition $\Phi(0)=I$, finding the Floquet multipliers, $\lambda$, of the resulting numerical matrix, and then inspecting their location with respect to the unit circle, we can find the loci of the symmetry-breaking bifurcations in the wave amplitude versus frequency parameter space. In particular, a symmetry-breaking bifurcation occurs when one of the Floquet multipliers exits the unit circle through $\lambda=1$.
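The monodromy computation can be sketched as follows. Since the full $5 \times 5$ coefficient matrix is supplied as a figure rather than reproduced inline, the sketch uses a damped Mathieu equation with a single parametric term as a $2 \times 2$ stand-in and omits the radiation-damping convolution; the structure (integrate $\Phi' = A(t^*)\Phi$ over one period with $\Phi(0)=I$, then examine the eigenvalues of $C = \Phi(T)$) is the same. All numerical values are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stand-in: p'' + c_d p' + (K0 + K1 cos(2 Omega t)) p = 0
K0, K1, c_d, Omega = 1.0, 0.5, 0.05, 1.0
T = np.pi / Omega                       # period of the parametric terms

def rhs(t, phi_flat):
    A = np.array([[0.0, 1.0],
                  [-(K0 + K1 * np.cos(2 * Omega * t)), -c_d]])
    return (A @ phi_flat.reshape(2, 2)).ravel()

# Integrate Phi' = A(t) Phi over one period, starting from Phi(0) = I
sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
C = sol.y[:, -1].reshape(2, 2)          # monodromy matrix C = Phi(T)
multipliers = np.linalg.eigvals(C)      # Floquet multipliers
stable = bool(np.all(np.abs(multipliers) < 1.0))
```

A useful numerical check is Liouville's formula: since the trace of $A(t^*)$ here is the constant $-c_d$, the product of the multipliers must equal $e^{-c_d T}$ regardless of the parametric forcing.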
\subsection{Bifurcation diagram} \label{sec:RES} The bifurcation diagram based on the stability analysis described above is shown in Figure \ref{fig: Regular waves POT2 Results} (a). It is evident that the bifurcations $SB_1$, $SB_2$, and $pd$ are in excellent agreement with the points where the stroboscopic numerical bifurcation maps undergo qualitative changes. The average power and the CWR curves shown in Figures \ref{fig: Regular waves POT2 Results} (b-c) also illustrate good agreement with the analytical solution in the range where the response exhibits a unique period-one motion. It is evident that the maximum average power and CWR are realized within the effective bandwidth of the absorber and that, outside that bandwidth, the average power drops whether the response is of the chaotic or intra-well type.
\begin{figure}
\caption{Bi-stable PWA response under regular wave excitation at nondimensional wave amplitude $A_{wave}/R=0.1$. (a): Analytical and stroboscopic bifurcation map. (b): Nondimensional averaged power. (c): CWR. (Solid lines represent stable solutions. Dashed lines represent unstable solutions based on the Jacobian matrix criterion. Red dashed-dotted lines represent unstable solutions based on period-doubling instability analysis. Blue dashed-dotted lines represent unstable solutions based on Floquet analysis. Circles represent the numerical results). }
\label{fig: Regular waves POT2 Results}
\end{figure}
\section{The Effective Bandwidth} \label{sec: maps} In this section, we use the stability analysis furnished in the previous sections to define an effective bandwidth for the bi-stable wave energy absorber by marking the boundaries at which the PWA switches to different types of motion in the wave amplitude versus frequency $(A_{wave}/R, \Omega)$ parameter space. We also draw a clearer picture of how the shape of the bi-stable potential influences the effective bandwidth.
Using the loci of the different bifurcations, we create the design map shown in Figure \ref{fig:map1}, which characterizes the type of motion realized for every combination of wave amplitude and frequency. The largest region on the figure is the one denoted by $B_r$, which represents $(A_{wave}/R, \Omega)$ combinations that incite small-amplitude intra-well motions. This type of response occurs for any frequency when the wave amplitude is small and for any wave amplitude when the wave frequency is large. The second largest region on the figure is the one that results in coexisting chaotic, $CH$, and $n$-periodic, $nT$, solutions. This region occurs slightly to the left of $\Omega=1$ and increases in bandwidth as $A_{wave}/R$ is increased. The basins of attraction of the different coexisting solutions in this region are shown in Figure \ref{fig:basins} for the two points indicated on the map. There are three different basins: the black one represents symmetric period-one motions, the white one represents asymmetric period-one motions, and the one where the black and white colors blend together represents chaotic motions.
The most important region on the map is that denoted by $B_L$, which corresponds to the combinations of wave parameters leading to a unique large-amplitude inter-well motion. We see that this region exists around $\Omega=1$ for wave amplitudes larger than $A_{wave}/R=0.05$. The size of the effective bandwidth of the absorber increases as the wave amplitude is increased up to a value of $A_{wave}/R=0.125$, beyond which the size of the bandwidth remains almost constant. The $B_L$ region is bordered from the right by the region, $B_L+CH$, where large-amplitude periodic motions coexist with chaotic motions, and from below by the region, $CH$, where the chaotic attractor is unique. The region denoted by $CH+B_L+B_n$ corresponds to wave parameters that result in a chaotic attractor coexisting with two types of periodic orbits, $B_L$ and $B_n$, each with competing basins of attraction.
\begin{figure}
\caption{A map demarcating regions of quantitatively different PWA responses.}
\label{fig:map1}
\end{figure}
\begin{figure}
\caption{Basins of attraction showing different coexisting orbits. }
\label{fig:basins}
\end{figure}
\subsection{Influence of the shape of the potential energy function} In this section, we use the understanding developed above to examine the influence of the potential function on the effective bandwidth of the bi-stable PWA. In particular, we want to understand how the depth of the potential wells and their separation influence the size of the effective bandwidth and the other regions in the bifurcation map. To this end, we consider the three different potential energy functions shown in Figure \ref{fig:POT123}, which were obtained using $\gamma=$30, 50, and 90. The potential energy function associated with $\gamma=$30 is deeper, with a larger separation between the stable equilibria, while the potential energy function associated with $\gamma=$90 has shallower potential wells and a smaller separation.
The bifurcation maps showing the effective bandwidth associated with each of the potential energy functions considered are depicted in Figure \ref{fig:three graphs}. In order to make a quantitative comparison, three critical wave amplitudes were marked on the figures and labeled as $(A_{wave}/R)_{cr1}$, $(A_{wave}/R)_{cr2}$, and $(A_{wave}/R)_{cr3}$. The first critical amplitude occurs at the intersection between the $Cf_1$ and $SB_1$ curves and can be used to define the wave amplitude at which the effective bandwidth of the absorber approaches its maximum size. We notice that this critical wave amplitude increases as the potential wells become deeper; that is, larger excitation levels become necessary to attain the unique large-amplitude periodic motions when the depth of, and separation distance between, the potential energy wells are increased.
It is interesting to note that, for any wave amplitude above $(A_{wave}/R)_{cr1}$, the size of the effective bandwidth remains almost constant regardless of the shape of the potential energy function. However, while the effective bandwidth remains unchanged, the power levels within the effective bandwidth change considerably with the shape of the potential energy function as shown in Figure \ref{fig: PWR MAPS 123}. The bi-stable PWA with the deeper potential energy wells produces higher average power levels within the effective bandwidth (the region bounded between $SB_1$ and $SB_2$).
The critical excitation level marked by $(A_{wave}/R)_{cr2}$ occurs at the intersection between the $Cf_1$ and $pd$ lines. It represents the minimum wave amplitude necessary to generate unique inter-well motions. It is evident that this amplitude decreases as the potential wells become shallower. Finally, the amplitude level $(A_{wave}/R)_{cr3}$ represents the wave amplitude below which no bifurcations occur as the wave frequency is varied. At such a low level of wave amplitude, the response of the PWA resembles the bell-shaped response of the traditional linear PWA.
\begin{figure}
\caption{Potential energy function of the bi-stable wave energy absorber at different values of $\gamma$ and $\omega_n=0.78$. (1): $\gamma=30$, (2): $\gamma=50$, (3): $\gamma=90$.}
\label{fig:POT123}
\end{figure}
\begin{figure}
\caption{$\gamma=30$}
\caption{$\gamma=50$}
\caption{$\gamma=90$}
\caption{Bifurcation maps in the wave amplitude versus frequency parameter space for the different potential functions shown in Figure \ref{fig:POT123}.}
\label{fig:three graphs}
\end{figure}
\begin{figure}
\caption{Comparison of the average generated power in the wave amplitude - frequency parameter space for the different potential functions shown in Figure \ref{fig:POT123}. The numerical simulations were performed on Equation (\ref{eq:ComminsWN}) at initial conditions $(Y_0, \dot{Y}_0)$ of $(0, 0)$.}
\label{fig: PWR MAPS 123}
\end{figure}
\section{Conclusion} \label{sec:conclusion} This paper presented a theoretical analysis of the response of bi-stable PWAs to harmonic wave excitations. To this end, approximate asymptotic solutions of the governing equations of motion were derived using the method of multiple scales. A stability analysis of the attained solutions revealed the presence of key bifurcations that can be used to define an effective bandwidth of the generator, characterized by the presence of a unique large-orbit inter-well motion for a set of wave amplitudes and frequencies. This effective bandwidth occurs slightly below the resonant frequency of the absorber and exists only above a certain threshold in the wave amplitude; this threshold increases as the depth of the potential wells is increased. The size of the effective bandwidth increases as the wave amplitude is increased up to a certain threshold, above which it remains almost constant even when the wave amplitude is substantially increased. The size of the effective bandwidth is observed to be insensitive to variations in the depth of the potential wells of the absorber. However, the power levels within the effective bandwidth change considerably with the shape of the potential energy function; in particular, a bi-stable PWA with deeper potential energy wells produces higher average power levels within its effective bandwidth. It is our belief that this comprehensive analytical treatment is key to designing effective bi-stable PWAs for known wave conditions and provides backbone results for future studies addressing more realistic non-harmonic wave excitations.
\section*{Funding} This research was funded by Abu Dhabi Education and Knowledge Council (ADEK) under grant number AARE2019-161: Exploiting Bi-stability to Develop a Novel Broadband Point Wave Energy Absorber. \section*{Conflict of interest} The authors declare that they have no conflict of interest.
\section*{Data availability} The data that support the findings of this study will be made available upon reasonable request.
\break \appendix \begin{appendices} \numberwithin{equation}{section}
\section{Eigensystem realization algorithm} \label{apndx: ERA} Consider the following single-input single-output discrete-time dynamical system: \begin{equation} \begin{split}
\textbf{x}_{k+1} &=\textbf{A}\textbf{x}_k + \textbf{B}u_k,\\
h_{k} &= \textbf{C}\textbf{x}_k + \textbf{D} u_k, \end{split} \end{equation} and a discrete-time scalar impulse input $u^{\delta}$: \begin{equation} u_{k}^{\delta} \equiv u^{\delta} (k \Delta t)=
\begin{cases}
1, & \text{if $k=0$}\\
0, & \text{if $k=1, 2, \dots$}
\end{cases} \end{equation}
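For a concrete sense of the impulse-response (Markov-parameter) data used in this appendix, the sketch below evaluates $h_k^{\delta} = \textbf{C}\textbf{A}^{k}\textbf{B}$ for a small, arbitrary stand-in system and cross-checks it against direct simulation of the state equations under the impulse input. The matrices are illustrative placeholders, not hydrodynamic data.

```python
import numpy as np

# A small, arbitrary discrete-time SISO system (illustrative values only)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, -1.0]])

def markov_parameters(A, B, C, n):
    """h_k = C A^k B for k = 0, 1, ..., n-1, as in Equation (impusledata2)."""
    return [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(n)]

h = markov_parameters(A, B, C, 10)

# Cross-check: simulate x_{k+1} = A x_k + B u_k under a unit impulse at k = 0;
# with x_0 = 0, the output y_k = C x_k satisfies y_{k+1} = C A^k B = h_k.
x, y = np.zeros((2, 1)), []
for k in range(11):
    y.append((C @ x).item())
    x = A @ x + B * (1.0 if k == 0 else 0.0)
```

The one-step index shift between the simulated output and the Markov parameters reflects the convention in Equation (\ref{impusledata2}), where the direct-feedthrough term $\textbf{D}$ is handled separately.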
According to linear system theory, the discrete-time impulse response data $h_{k}^{\delta}$ can be expressed as:
\begin{equation} \label{impusledata2}
h_{k}^{\delta} \equiv h^{\delta}(k \Delta t)=\textbf{C} \textbf{A}^{k} \textbf{B}, \hspace{0.5 cm} \left(k=0,1,\dots, \infty\right) \end{equation} In our analysis, the discrete-time impulse response data are obtained by substituting the radiation damping coefficients $B(\omega_i)$ into Equation (\ref{eq:ogilvie}). The next step is to construct the generalized \textit{Hankel} matrix $\textbf{H}_{r \times s} (g)$ for $g=0,1$, which consists of $r$ rows and $s$ columns, generated by stacking time-shifted impulse response data in the following order: \begin{equation} \label{Hankel1} \textbf{H}_{r \times s} (g) =
\begin{pmatrix}
h_{g}^{\delta} & h_{g+1}^{\delta} & \dots & h_{g+s-1}^{\delta} \\
h_{g+1}^{\delta} & h_{g+2}^{\delta} & \dots & h_{g+s}^{\delta} \\
\vdots & \vdots & \ddots & \vdots \\
h_{g+r-1}^{\delta} & h_{g+r}^{\delta} & \dots & h_{g+r+s-2}^{\delta}
\end{pmatrix} \end{equation} Using Equation (\ref{impusledata2}), we can express the generalized \textit{Hankel} matrix in Equation (\ref{Hankel1}) in terms of the realized state-space matrices $\textbf{A}_r$, $\textbf{B}_r$ and $\textbf{C}_r$ as: \begin{equation} \label{Hankel2} \textbf{H}_{r \times s} (g) =
\begin{pmatrix}
\textbf{C}_r \textbf{A}_r^{g} \textbf{B}_r & \textbf{C}_r \textbf{A}_r^{g+1} \textbf{B}_r & \dots & \textbf{C}_r \textbf{A}_r^{g+s-1} \textbf{B}_r \\
\textbf{C}_r \textbf{A}_r^{g+1} \textbf{B}_r & \textbf{C}_r \textbf{A}_r^{g+2} \textbf{B}_r & \dots & \textbf{C}_r \textbf{A}_r^{g+s} \textbf{B}_r \\
\vdots & \vdots & \ddots & \vdots \\
\textbf{C}_r \textbf{A}_r^{g+r-1} \textbf{B}_r & \textbf{C}_r \textbf{A}_r^{g+r} \textbf{B}_r & \dots & \textbf{C}_r \textbf{A}_r^{g+r+s-2} \textbf{B}_r
\end{pmatrix} \end{equation} which can be reduced to: \begin{equation} \label{ReducedHankel}
\textbf{H}_{r \times s} (g) = \mathcal{O} \textbf{A}_r^{g} \mathcal{C} \end{equation} where
\begin{align*}
\mathcal{O} &=\left(\textbf{C}_r \hspace{0.5 cm} \textbf{C}_r \textbf{A}_r \hspace{0.5 cm} \dots \hspace{0.5 cm} \textbf{C}_r \textbf{A}_r^{r-1}\right)^{T}\\
\mathcal{C} &=\left(\textbf{B}_r \hspace{0.5 cm} \textbf{A}_r \textbf{B}_r \hspace{0.5 cm} \dots \hspace{0.5 cm} \textbf{A}_r^{s-1} \textbf{B}_r\right)
\end{align*} are respectively the generalized observability and controllability matrices, with observability and controllability indices of $r$ and $s$. Upon taking the singular value decomposition (SVD) of the first \textit{Hankel} matrix $\textbf{H}_{r \times s} (0)$, we obtain: \begin{equation} \begin{aligned} \textbf{H}_{r \times s} (0) &= \textbf{U} \Sigma \textbf{V}^{T}\\ &= \begin{pmatrix}
\Tilde{\textbf{U}} & \textbf{U}_{t}\\
\end{pmatrix}
\begin{pmatrix}
\Tilde{\Sigma} & 0\\
0 & \Sigma_{t} \\
\end{pmatrix}
\begin{pmatrix}
\Tilde{\textbf{V}}^T\\
\textbf{V}_{t}^T \\
\end{pmatrix} \\
&\approx \Tilde{\textbf{U}} \Tilde{\Sigma} \Tilde{\textbf{V}}^T \end{aligned} \end{equation} where
\begin{align*}
\Tilde{\textbf{U}}^T \Tilde{\textbf{U}} &=\textbf{I}\\
\Tilde{\textbf{V}}^T \Tilde{\textbf{V}}& =\textbf{I}\\
\Tilde{\Sigma} &=
\begin{pmatrix}
\sigma_1 & & & \\
& \sigma_2 & & \\
& & \ddots & \\
& & & \sigma_N\\
\end{pmatrix}
\end{align*} The diagonal matrix $\Tilde{\Sigma}$, constructed from the first $N \times N$ block of $\Sigma$, contains the dominant singular values $\sigma_i$ ordered as $(\sigma_1 \geq \sigma_2 \geq \dots \geq \sigma_N \geq 0)$, while $\Sigma_t$ contains the small truncated singular values. This truncation step is vital for reducing the order of the realized state-space model, such that the realized dynamics matrix $\textbf{A}_r$ has size $N \times N$. The vectors in $\Tilde{\textbf{U}}$ and $\Tilde{\textbf{V}}^T$ contain the dominant modes associated with the singular values retained in $\Tilde{\Sigma}$. As a result, the product $\Tilde{\textbf{U}} \Tilde{\Sigma} \Tilde{\textbf{V}}^T$ is a faithful low-rank representation of the original \textit{Hankel} matrix $\textbf{H}$.\\ It follows from Equation (\ref{ReducedHankel}) that \begin{equation} \begin{aligned}
\textbf{H}_{r \times s} (0) &= \Tilde{\textbf{U}} \Tilde{\Sigma} \Tilde{\textbf{V}}^T\\
&= \left(\Tilde{\textbf{U}} \Tilde{\Sigma}^{\frac{1}{2}} \right) \left( \Tilde{\Sigma}^{\frac{1}{2}} \Tilde{\textbf{V}}^T \right) = \mathcal{O} \mathcal{C} \end{aligned} \end{equation} Using the above balanced decomposition of $\textbf{H}_{r \times s} (0)$ we can write: \begin{equation*}
\mathcal{O} = \Tilde{\textbf{U}} \Tilde{\Sigma}^{\frac{1}{2}} \quad \text{and} \quad \mathcal{C} = \Tilde{\Sigma}^{\frac{1}{2}} \Tilde{\textbf{V}}^T \end{equation*} Also, from Equation (\ref{ReducedHankel}), we can express the second \textit{Hankel} matrix $\textbf{H}_{r \times s} (1)$ as: \begin{equation} \begin{aligned}
\textbf{H}_{r \times s} (1) &= \mathcal{O} \textbf{A}_r \mathcal{C} \\
&= \left(\Tilde{\textbf{U}} \Tilde{\Sigma}^{\frac{1}{2}}\right) \textbf{A}_r \left(\Tilde{\Sigma}^{\frac{1}{2}} \Tilde{\textbf{V}}^T\right) \end{aligned} \end{equation} Using the properties of $\textbf{U}$ and $\textbf{V}$ we can write: \begin{equation}
\Tilde{\Sigma}^{\frac{1}{2}} \textbf{A}_r \Tilde{\Sigma}^{\frac{1}{2}} = \Tilde{\textbf{U}}^T \textbf{H}_{r \times s} (1) \Tilde{\textbf{V}} \end{equation} It follows that the matrix $\textbf{A}_r$ can be obtained as: \begin{equation}
\textbf{A}_r=\Tilde{\Sigma}^{-\frac{1}{2}} \Tilde{\textbf{U}}^T \textbf{H}_{r \times s} (1) \Tilde{\textbf{V}} \Tilde{\Sigma}^{-\frac{1}{2}} \end{equation} Let: \begin{equation*} \textbf{E}^T_1 =
\begin{pmatrix}
1 & 0 & \dots & 0 \\
\end{pmatrix} \hspace{1 cm} \textbf{E}^T_2=
\begin{pmatrix}
1 & 0 & \dots & 0 \\
\end{pmatrix} \end{equation*} where $\textbf{E}^T_1$ and $\textbf{E}^T_2$ are respectively $1 \times r$ and $1 \times s$ vectors. We use these along with Equation (\ref{Hankel2}) to write the following balanced expression for $h_k^\delta$: \begin{equation}
\begin{aligned}
h_k^\delta &= \textbf{E}^T_1 \textbf{H}_{r \times s} (g) \textbf{E}_2 \\
&= \textbf{E}^T_1 (\mathcal{O} \textbf{A}_r^{g} \mathcal{C}) \textbf{E}_2 \\
&= (\textbf{E}^T_1 \Tilde{\textbf{U}} \Tilde{\Sigma}^{\frac{1}{2}}) (\Tilde{\Sigma}^{-\frac{1}{2}} \Tilde{\textbf{U}}^{T} \textbf{H}_{r \times s} (1) \Tilde{\textbf{V}} \Tilde{\Sigma}^{-\frac{1}{2}})^{g} (\Tilde{\Sigma}^{\frac{1}{2}} \Tilde{\textbf{V}}^{T} \textbf{E}_2) \\
&\equiv \textbf{C}_r \textbf{A}_r^{g} \textbf{B}_r
\end{aligned} \end{equation} We use the decomposed expression above to obtain the reduced input and output matrices $\textbf{B}_r$ and $\textbf{C}_r$ as: \begin{equation}
\textbf{B}_r=\Tilde{\Sigma}^{\frac{1}{2}} \Tilde{\textbf{V}}^{T} \textbf{E}_2 \end{equation} \begin{equation}
\textbf{C}_r=\textbf{E}^T_1 \Tilde{\textbf{U}} \Tilde{\Sigma}^{\frac{1}{2}} \end{equation} \section{Parametric term constants} \label{Apndx: Parametric constants} \begin{align}
G_0&=\omega_o^2-\frac{\eta^2}{\omega_o^2}a_o^2+\frac{3\gamma}{2}a_o^2+\frac{3 \gamma \eta^2}{4 \omega_o^4}a_o^4 + \frac{\gamma \eta^2}{24 \omega_o^4}a_o^4\\
G_1&= 2\eta a_o - \frac{5 \gamma \eta}{2 \omega_o^2} a_o^3 \\
G_2&= \frac{\eta^2}{3 \omega_o^2}a_o^2 + \frac{3 \gamma}{2} a_o^2 - \frac{\gamma \eta^2}{2\omega_o^4}a_o^4\\
G_3 &= \frac{\gamma \eta}{2 \omega_o^2} a_o^3 \\
G_4 &= \frac{\gamma \eta^2}{24 \omega_o^4} a_o^4\\ K_0&=-\omega_n^2 + 3\gamma \left(\frac{R_1^2}{2}+\frac{R_3^2}{2}+\frac{R_5^2}{2}\right)\\ K_2&=3\gamma \left(\frac{R_1^2}{2}+R_1 R_3+R_3 R_5 \right)\\ K_4&=3\gamma \left(R_1 R_3+R_1 R_5 \right)\\ K_6&=3\gamma \left(\frac{R_3^2}{2}+R_1 R_5\right)\\ K_8&=3\gamma \left(R_3 R_5 \right)\\ K_{10}&=3\gamma \left(\frac{R_5^2}{2}\right) \end{align} where: \begin{align} R_1 &= a_o\\ R_3 &= \frac{\gamma}{32 \Omega^2} a_o^3 + \frac{3 \gamma^2}{1024 \Omega^4} a_o^5\\ R_5 &=\frac{\gamma^2}{1024 \Omega^4} a_o^5 \end{align}
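As a numerical sanity check of the realization procedure in Appendix \ref{apndx: ERA}, the sketch below synthesizes impulse-response data from a small hidden system, assembles $\textbf{H}(0)$ and $\textbf{H}(1)$, truncates the SVD, and verifies that the realized triplet $(\textbf{A}_r, \textbf{B}_r, \textbf{C}_r)$ reproduces the data. All matrices are illustrative stand-ins, not hydrodynamic coefficients.

```python
import numpy as np

# Hidden system used only to synthesize impulse-response (Markov) data
A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(40)]

r = s = 10
H0 = np.array([[h[i + j] for j in range(s)] for i in range(r)])      # H(0)
H1 = np.array([[h[i + j + 1] for j in range(s)] for i in range(r)])  # H(1)

U, sv, Vt = np.linalg.svd(H0)
N = int(np.sum(sv > 1e-8 * sv[0]))          # numerical rank = model order
Ur, Vtr = U[:, :N], Vt[:N, :]               # retained modes
S_half = np.diag(np.sqrt(sv[:N]))
S_ihalf = np.diag(1.0 / np.sqrt(sv[:N]))

Ar = S_ihalf @ Ur.T @ H1 @ Vtr.T @ S_ihalf  # A_r from the balanced factors
Br = (S_half @ Vtr)[:, [0]]                 # B_r = Sigma^(1/2) V^T E_2
Cr = (Ur @ S_half)[[0], :]                  # C_r = E_1^T U Sigma^(1/2)

h_rec = [(Cr @ np.linalg.matrix_power(Ar, k) @ Br).item() for k in range(40)]
```

For noise-free data of exact rank, the truncation recovers the true model order and the realized Markov parameters match the synthesized ones to machine precision.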
\end{appendices}
\end{document} |
\begin{document}
\title{Action Assembly: Sparse Imitation Learning for Text Based Games with Combinatorial Action Spaces} \author[*,1]{Chen Tessler} \author[*,3]{Tom Zahavy} \author[2]{Deborah Cohen} \author[3]{Daniel J. Mankowitz} \author[1]{Shie Mannor} \affil[*]{Equal contribution} \affil[1]{Technion Israel Institute of Technology, Haifa, Israel} \affil[2]{Google Research, Tel-Aviv, Israel} \affil[3]{DeepMind, London, England} \date{} \setcounter{Maxaffil}{0} \renewcommand\Affilfont{\itshape\small}
\maketitle
\begin{abstract} We propose a computationally efficient algorithm that combines compressed sensing with imitation learning to solve text-based games with combinatorial action spaces. Specifically, we introduce a new compressed sensing algorithm, named IK-OMP, which can be seen as an extension to the Orthogonal Matching Pursuit (OMP). We incorporate IK-OMP into a supervised imitation learning setting and show that the combined approach (Sparse Imitation Learning, Sparse-IL) solves the entire text-based game of Zork1 with an action space of approximately 10 million actions given both perfect and noisy demonstrations.
\end{abstract}
\section{Introduction}
Combinatorial action spaces pose a challenging problem for AI agents, both from a computational and from an exploratory point of view. This is because (i) finding the best action may require iterating over all actions, an exponentially hard task, and (ii) absent prior knowledge, finding the best action requires testing all actions multiple times at each state \citep{brafman2002r}. While the exploratory task is of great importance, in this work we focus on the computational aspects of the problem. Our method can be seen as a natural application of the Action Assembly Theory (AAT) \citep{greene2008action}. According to Greene, behavior is described by two essential processes: \textit{representation} and \textit{processing}. Representation refers to the way information is coded and stored in the mind, whereas processing refers to the mental operations performed to retrieve this information \citep{greene2008action}. Having good representations of information and an efficient processing procedure allows us to quickly exploit highly rewarding nuances of an environment upon first discovery.
In this work we propose the first computationally efficient algorithm (see Figure \ref{fig:algorithm}), called \textit{Sparse Imitation Learning\ (Sparse-IL)}, which is inspired by AAT and combines imitation learning with a Compressed Sensing (CS) retrieval mechanism to solve text-based games with combinatorial action spaces. Our approach is composed of:
\textbf{(1) Encoder} - the encoder receives a state as input (Figure~\ref{fig:algorithm}). The state is composed of individual words represented by word embeddings that were previously trained on a large corpus of text. We train the encoder, using imitation learning, to generate a continuous action $\mathbf{a}_\text{SoE}$ (a dense representation of the action). The action $\mathbf{a}_\text{SoE}$ corresponds to a sum of word embeddings of the action that the agent intends to take, e.g., the embedding of the action `take egg' is the sum of the word embedding vectors of `take' and `egg'. As the embeddings capture a prior, i.e., similarity, over language, it enables improved generalization and robustness to noise when compared to an end-to-end approach.
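The sum-of-embeddings action representation described above can be sketched in a few lines; the vocabulary and the random 8-dimensional vectors below are toy stand-ins for pretrained embeddings.

```python
import numpy as np

# Toy embeddings standing in for vectors pretrained on a large text corpus
rng = np.random.default_rng(0)
vocab = ["take", "drop", "egg", "sword", "open", "door"]
E = {w: rng.normal(size=8) for w in vocab}

def encode_action(words):
    """Dense action a_SoE: the sum of the embeddings of the action's words."""
    return sum(E[w] for w in words)

a_soe = encode_action(["take", "egg"])   # embedding of 'take' + embedding of 'egg'
```

Note that the representation is order-invariant (a bag of words), which is why a separate language model is later needed to turn the recovered words into a parseable action sentence.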
\begin{figure}
\caption{The Sparse-IL\ algorithm.}
\label{fig:algorithm}
\end{figure}
\textbf{(2) Retrieval Mechanism} - given a continuous vector $\mathbf{a}_\text{SoE}$, we reconstruct the $K$ best Bag-of-Words (BoW) actions $\mathbf{a}_\text{BoW}$, composed of up to $l=4$ words, from the continuous output of the encoder. We do this using an algorithm that we term Integer K-Orthogonal Matching Pursuit (IK-OMP). We then use a fitness function to score the actions, after which, the best action is fed into a language model to yield an action sentence $\mathbf{a}_\text{env}$ that can be parsed by the game.
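The recovery step can be illustrated with a plain OMP baseline (not the IK-OMP variant introduced in this paper, which additionally enforces integer coefficients and tracks the $K$ best candidates): greedily select the embedding most correlated with the residual, refit by least squares, and repeat up to $l$ times. All dimensions and indices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vocab, dim, max_words = 50, 16, 4
D = rng.normal(size=(dim, n_vocab))      # columns = word embeddings
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms

hidden = [3, 17, 41]                     # the action's (hidden) BoW support
a_soe = D[:, hidden].sum(axis=1)         # observed sum-of-embeddings vector

def omp(y, D, n_nonzero):
    """Plain OMP: greedy atom selection with least-squares refits."""
    support, residual = [], y.copy()
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

support, coef = omp(a_soe, D, max_words)  # typically contains the true words
```

With an orthonormal dictionary the greedy selections are exact, which makes for a convenient correctness check; with generic embeddings, recovery holds under the usual incoherence conditions from the CS literature.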
\textbf{Main contributions:} We propose a computationally efficient algorithm called Sparse-IL that combines CS with imitation learning to solve natural language tasks with combinatorial action spaces. We show that IK-OMP, which we adapted from \cite{white2016generating} and \cite{lin2013kbest}, can be used to recover a BoW vector from a sum of the individual word embeddings in a computationally efficient manner, even in the presence of significant noise. We demonstrate that Sparse-IL\ can solve the entire game of Zork1, for the first time, while considering a combinatorial action space of approximately 10 million actions, using noisy, imperfect demonstrations.
This paper is structured as follows: Section \ref{sec:relatedwork} details relevant related work. Section \ref{sec:problemsetting} provides an overview of the problem setting; that is, the text-based game of Zork and the challenges it poses. Section \ref{sec:compressed_sensing} provides an overview of CS algorithms and, in particular, our variant called IK-OMP. Section \ref{sec:imitationlearning} introduces our Sparse-IL\ algorithm. Finally, in Section \ref{sec: experiments} we present our empirical evaluations, which include experiments in the text-based game Zork1, highlighting the robustness of IK-OMP to noise and its computational efficiency, and showcasing the ability of Sparse-IL\ to solve both the `Troll Quest' and the entire game of Zork.
\section{Related work} \label{sec:relatedwork}
\textit{Combinatorial action spaces in text-based games:} Previous works have suggested approaches for solving text-based games \citep{he2016deep,yuan2018counting,zahavy2018learn,zelinka2018using,tao2018towards}. However, these techniques do not scale to combinatorial action spaces. For example, \cite{he2016deep} presented the DRRN, which requires each action to be evaluated by the network. This results in a total of $\bigO(|A|)$ forward passes. \cite{zahavy2018learn} proposed the Action-Elimination DQN, resulting in a smaller action set $\bigO(|A'|)$. However, this set may still be of exponential size.
\textit{CS and embeddings representation:} CS was originally introduced in the Machine Learning (ML) world by \cite{calderbank2009compressed}, who proposed the concept of compressed learning. That is, learning directly in the compressed domain, e.g. the embeddings domain in the Natural Language Processing (NLP) setting. The task of generating BoW from the sums of their word embeddings was first formulated by \cite{white2016generating}. A greedy approach, very similar to orthogonal matching pursuit (OMP), was proposed to iteratively find the words. However, this recovery task was only explicitly linked to the field of CS two years later in \cite{arora2018compressed}.
\begin{figure}
\caption{Zork1 example screen.}
\label{fig:zork1_egg}
\end{figure}
\section{Problem setting} \label{sec:problemsetting}
\paragraph{Zork - A text-based game:} Text-based games \citep{cote2018textworld} are complex interactive games usually played through a command line terminal. An example of Zork1, a text-based game, is shown in Figure \ref{fig:zork1_egg}. In each turn, the player is presented with several lines of text which describe the state of the game, and the player acts by entering a text command. In order to cope with complex commands, the game is equipped with an interpreter which deciphers the input and maps it to in-game actions. For instance, in Figure~\ref{fig:zork1_egg}, the command ``climb the large tree'' is issued, after which the player receives a response. In this example, the response explains that up in the tree is a collectible item - a jewel-encrusted egg. The large, combinatorial action space is one of the main reasons Zork poses an interesting research problem. The actions are issued as free-text and thus the complexity of the problem grows exponentially with the size of the dictionary in use.
\textbf{Our setup:} In this work, we consider two tasks: the `Troll Quest' \citep{zahavy2018learn} and `Open Zork', i.e., solving the entire game. The `Troll Quest' is a sub-task within `Open Zork', in which the agent must enter the house, collect a lantern and sword, move a rug which reveals a trapdoor, open the trapdoor and enter the basement. Finally, in the basement, the agent encounters a troll which it must kill using the sword. An incorrect action at any stage may prevent the agent from reaching the goal, or even result in its death (termination).
In our setting, we consider a dictionary $D$ of $112$ unique words, extracted from a walk-through of the game: a demonstrated sequence of actions (sentences) used to solve it. We limit the maximal sentence length to $l=4$ words. Thus, the number of possible, unordered word combinations is roughly $d^l/l!$, i.e., the dictionary size to the power of the maximal sentence length, divided by the number of possible permutations. This results in approximately 10 million possible actions.
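The size of this action space can be checked with a few lines of Python. The paper's $d^l/l!$ figure is an estimate (ordered $l$-tuples divided by the number of orderings); the exact count of unordered sets of up to $l$ distinct words is slightly smaller, and both are on the order of millions:

```python
from math import comb, factorial

d, l = 112, 4  # dictionary size and maximal sentence length (paper's values)

# The paper's estimate: ordered l-tuples divided by the number of orderings.
approx = d**l / factorial(l)

# Exact number of unordered sets of 1..l distinct dictionary words.
exact = sum(comb(d, k) for k in range(1, l + 1))

print(f"{approx:,.0f} (estimate) vs {exact:,} (exact)")
```

Both counts are in the millions, matching the paper's order-of-magnitude claim of roughly $10^7$ actions.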
\textbf{Markov Decision Process (MDP):} Text-based games can be modeled as Markov Decision Processes. An MDP $\mathcal{M}$ is defined by the tuple $(S, A, R, P)$ \citep{sutton1998reinforcement}. In the context of text-based games, the states $\state$ are paragraphs representing the current observation. $\mathbf{a}_\text{env} \in A$ are the available discrete actions, e.g., all combinations of words from the dictionary up to a maximal given sentence length $l$. $R : S \times A \times S \mapsto \mathbb{R}$ is the bounded reward function, for instance collecting items provides a positive reward. $P : S \times A \times S \mapsto [0, 1]$ is the transition matrix, where $P(\state'|\state,\mathbf{a}_\text{env})$ is the probability of transitioning from $\state$ to $\state'$ given $\mathbf{a}_\text{env}$ was taken.
\textbf{Action Space:} While the common approach may be to consider a discrete action space, such an approach can be computationally infeasible, as the complexity of solving the MDP grows with the effective size of the action space. Hence, in this work, we consider an alternative, continuous representation. As each action is a sentence composed of words, we represent each action using the sum of the embeddings of its tokens (constituent words), denoted by $\mathbf{a}_\text{SoE}$ (Sum of Embeddings). A simple form of embedding is the BoW: a word is represented by a one-hot vector the size of the dictionary, in which the dictionary index of the word is set to $1$. Aside from the BoW embedding, there exist additional forms of embedding vectors, for instance Word2vec and GloVe, which encode the similarity between words (in terms of cosine distance). These embeddings are pre-trained using unsupervised learning techniques and, similarly to how convolutional neural networks enable generalization across similar states, word embeddings enable generalization across similar sentences, i.e., actions.
In this work, we utilize GloVe embeddings, pre-trained on the Wikipedia corpus. We chose GloVe over Word2vec as pre-trained embeddings are available in low-dimensional spaces. The embedding dimension is $m=50$, significantly smaller than the size $d$ of the dictionary $D$ ($d=112$ in our experiments). Given the continuous representation of an action, namely the sum of embeddings of the sentence tokens $\mathbf{a}_\text{SoE} \in \reals^m$, the goal is to recover the corresponding discrete action $\mathbf{a}_\text{env}$, that is, the tokens composing the sentence. These may be represented as a BoW vector $\mathbf{a}_\text{BoW} \in \integers^d$. Recovering the sentence from $\mathbf{a}_\text{BoW}$ requires prior information on the language model.
A \emph{language model} (the last element in Figure~\ref{fig:algorithm}, and a central piece of many important NLP tasks) takes a set of words and outputs the most likely ordering that yields a grammatically correct sentence. In this paper, we use a rule-based approach with relatively simple rules. For example, given a verb and an object, the verb comes before the object, e.g., [`sword', `take'] $\mapsto$ `take sword'.
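A minimal sketch of such a rule-based ordering is shown below; the verb list and the single verb-first rule are hypothetical, since the paper does not list its actual rules:

```python
# Hypothetical verb list and ordering rule; the paper's actual rule set
# is not given, this only illustrates the verb-before-object idea.
VERBS = {"take", "drop", "open", "climb", "go", "kill", "move"}

def order_words(bag):
    """Order a bag of words: verbs first, then the remaining words,
    e.g., ['sword', 'take'] -> 'take sword'."""
    verbs = [w for w in bag if w in VERBS]
    rest = [w for w in bag if w not in VERBS]
    return " ".join(verbs + rest)

print(order_words(["sword", "take"]))  # take sword
```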
To conclude, we train a neural network $\text{E}_\theta (\state)$ to predict the sum of embeddings $\mathbf{a}_\text{SoE}$. Using CS (Section~\ref{sec:compressed_sensing}), we recover the BoW vector $\text{R}(\mathbf{a}_\text{SoE}) = \mathbf{a}_\text{BoW}$, i.e., the set of words which compose the sentence. Finally, a language model M converts $\mathbf{a}_\text{BoW}$ into a valid discrete-action, namely $\text{M}(\mathbf{a}_\text{BoW}) = \mathbf{a}_\text{env}$. The combined approach is as follows:
$\mathbf{a}_\text{env} = \text{M}(\text{R}(\text{E}(\state))) \enspace .$
\section{Compressed sensing}\label{sec:compressed_sensing}
This section provides some background on CS and sparse recovery, including practical recovery algorithms and theoretical recovery guarantees. In particular, we describe our variant of one popular reconstruction algorithm, OMP, that we refer to as Integer K-OMP (IK-OMP). The first modification allows exploitation of the integer prior on the sparse vector $\mathbf{a}_\text{BoW}$ and is inspired by \citet{white2016generating} and \citet{sparrer2015soft}. The second mitigates the greedy nature of OMP using beam search \citep{lin2013kbest}. In \cref{sec: cs experiments}, we experimentally compare different sparse recovery methods and demonstrate the superiority of introducing the integer prior and the beam search strategy.
\subsection{Sparse Recovery}
CS is concerned with recovering a high-dimensional $p$-sparse signal $\mathbf{x} \in \reals^d$ (the BoW vector $\mathbf{a}_\text{BoW}$ in our setting) from a low dimensional measurement vector $\mathbf{y} \in \reals^m$ (the sum of embeddings vector $\mathbf{a}_\text{SoE}$). That is, given a dictionary $\mathbf{D} \in \reals^{m \times d}$: \begin{align}
\min ||\mathbf{x}||_0 \enspace \text{subject to} \enspace \mathbf{D}\mathbf{x} = \mathbf{y}.
\label{eq:cs_l0} \end{align}
To ensure uniqueness of the solution of \eqref{eq:cs_l0}, the sensing matrix, or dictionary, $\mathbf{D}$ must fulfill certain properties; these properties are also key to practical recovery guarantees. Well-known examples are the spark, or Kruskal rank \citep{donoho2003optimally}, and the Restricted Isometry Property (RIP) \citep{candes2005decoding}. Unfortunately, these are typically as hard to compute as solving the original problem \eqref{eq:cs_l0}. While the mutual-coherence (see Definition~\ref{def:coherence}) provides looser bounds, it is easily computable. Thus, we focus on mutual-coherence based results and note that spark and RIP based guarantees may be found in \cite{elad2010book}.
\begin{definition}[\cite{elad2010book} Definition 2.3]\label{def:coherence}
The mutual coherence of a given matrix $\mathbf{D}$ is the largest absolute normalized inner product between different columns from $\mathbf{D}$. Denoting the $k$-th column in $\mathbf{D}$ by $\mathbf{d}_k$, it is given by
$\mu (\mathbf{D}) = \max_{1 \leq i, j \leq d, \enspace i \neq j} \frac{| \mathbf{d}_j^T \mathbf{d}_i |}{||\mathbf{d}_i||_2 ||\mathbf{d}_j||_2}.$
\end{definition}
The mutual-coherence characterizes the dependence between columns of the matrix $\mathbf{D}$. For a unitary matrix, columns are pairwise orthogonal, and as a result, the mutual-coherence is zero. For general matrices with more columns than rows ($m < d$), as in our case, $\mu$ is necessarily strictly positive, and we desire the smallest possible value so as to get as close as possible to the behavior exhibited by unitary matrices \citep{elad2010book}. This is illustrated in the following uniqueness theorem.
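Definition~\ref{def:coherence} translates directly into code. The sketch below computes $\mu(\mathbf{D})$ and illustrates both regimes discussed above: zero coherence for orthonormal columns, and strictly positive coherence for an overcomplete Gaussian dictionary whose shape mirrors the paper's $m=50$, $d=112$ setting:

```python
import numpy as np

def mutual_coherence(D):
    """mu(D): the largest absolute normalized inner product
    between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0)  # L2-normalize each column
    G = np.abs(Dn.T @ Dn)               # |<d_i, d_j>| for all column pairs
    np.fill_diagonal(G, 0.0)            # exclude the i == j terms
    return float(G.max())

# Orthonormal columns (unitary case): coherence is zero.
print(mutual_coherence(np.eye(4)))  # 0.0

# Overcomplete Gaussian dictionary (m < d): coherence is strictly positive.
rng = np.random.default_rng(0)
mu = mutual_coherence(rng.standard_normal((50, 112)))
```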
\begin{theorem}[\cite{elad2010book} Theorem 2.5] \label{th:cs_unique} If a system of linear equations $\mathbf{D}\mathbf{x}=\mathbf{y}$ has a solution $\mathbf{x}$ obeying
$p < \frac{1}{2} \left( 1 + \frac{1}{\mu(\mathbf{D})} \right),$
where $p=||\mathbf{x}||_0$, this solution is the sparsest possible. \end{theorem} We now turn to discuss practical methods to solve \eqref{eq:cs_l0}.
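Plugging in the mutual coherence $\mu=0.97$ measured for the GloVe dictionary in our experiments shows how weak this guarantee is in our setting:

```python
# mu = 0.97 is the mutual coherence measured for the paper's GloVe
# dictionary (reported in the experiments section).
mu = 0.97
bound = 0.5 * (1 + 1 / mu)
print(round(bound, 3))  # 1.015 -> uniqueness guaranteed only for p = 1
```

Since sentences contain up to four words ($p \leq 4$), the classical bound covers only trivial one-word actions, which motivates the stronger priors introduced below.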
\subsection{Recovery Algorithms}
The sparse recovery problem \eqref{eq:cs_l0} is non-convex due to the $\ell_0$-norm. Although it may be solved via combinatorial search, the complexity is exponential in the dictionary dimension $d$, and it has been proven that \eqref{eq:cs_l0} is, in general, NP-Hard \citep{elad2010book}.\\ One approach to solve \eqref{eq:cs_l0}, \textit{basis pursuit}, relaxes the $\ell_0$-minimization to its $\ell_1$-norm convex surrogate, \begin{align} \label{eq:cs_l1}
\min ||\mathbf{x}||_1\enspace \text{s.t.} \enspace \mathbf{D}\mathbf{x} = \mathbf{y}. \end{align}
In the presence of noise, the condition $\mathbf{D}\mathbf{x}=\mathbf{y}$ is replaced by $||\mathbf{D}\mathbf{x}-\mathbf{y}||_2 \leq \epsilon$. The Lagrangian relaxation of this quadratic program is written, for some $\lambda>0,$ as
$\min ||\mathbf{x}||_1 + \lambda ||\mathbf{y}-\mathbf{D}\mathbf{x}||_2^2,$
and is known as basis pursuit denoising (BPDN).
The above noiseless and noisy problems can be respectively cast as linear programming and second-order cone programming problems \citep{chen2001atomic}. They thus may be solved using techniques such as interior-point methods \citep{ben2001lectures, boyd2004convex}. However, large-scale problems involving dense sensing matrices often preclude the use of such methods. This motivated the search for simpler gradient-based algorithms for solving \eqref{eq:cs_l1}, such as the fast iterative shrinkage-thresholding algorithm (FISTA) \citep{beck2009fast}.
Alternatively, one may use greedy methods, broadly divided into \textit{matching pursuit} based algorithms, such as OMP \citep{blumensath2008gradient}, and \textit{thresholding} based methods, including iterative hard thresholding \citep{blumensath2009iterative}. The popular OMP algorithm proceeds by iteratively finding the dictionary column with the highest correlation to the signal residual, computed by subtracting the contribution of a partial estimate of $\mathbf{x}$ from $\mathbf{y}$. The coefficients over the selected support set are then chosen so as to minimize the residual error. A typical halting criterion compares the residual to a predefined threshold.
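A compact sketch of the OMP procedure just described (greedy column selection, least-squares re-fit over the support, residual-based halting); this is a generic textbook version, not the authors' implementation:

```python
import numpy as np

def omp(D, y, eps=1e-6, max_iter=None):
    """Orthogonal Matching Pursuit sketch: repeatedly select the column
    most correlated with the residual, then least-squares re-fit the
    coefficients over the selected support."""
    m, d = D.shape
    max_iter = m if max_iter is None else max_iter
    support, x = [], np.zeros(d)
    residual = y.astype(float).copy()
    for _ in range(max_iter):
        if np.linalg.norm(residual) <= eps:  # halting criterion
            break
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(d)
        x[support] = coef
        residual = y - D @ x
    return x

# Noiseless 2-sparse recovery with a random Gaussian dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((10, 15))
x0 = np.zeros(15)
x0[[2, 7]] = [1.0, -2.0]
x_hat = omp(D, D @ x0)
```

Because the least-squares residual is orthogonal to the selected columns, a column is never picked twice, so the loop terminates with a near-zero residual.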
\subsection{Recovery Guarantees} Performance guarantees for both $\ell_1$-relaxation and greedy methods have been provided in the CS literature. In noiseless settings, under the conditions of Theorem~\ref{th:cs_unique}, the unique solution of \eqref{eq:cs_l0} is also the unique solution of \eqref{eq:cs_l1} \citep[Theorem 4.5]{elad2010book}. Under the same conditions, OMP with halting criterion threshold $\epsilon = 0$ is guaranteed to find the exact solution of \eqref{eq:cs_l0} \citep[Theorem 4.3]{elad2010book}. More practical results are given for the case where the measurements are contaminated by noise \citep{donoho2006stable,elad2010book}.
\subsection{Integer K-OMP (IK-OMP)}
\begin{algorithm}[H]
\caption{IK-OMP}\label{algo:beam_omp}
\begin{algorithmic}
\State \textbf{Input:} Measurement vector $\mathbf{y} \in \reals^m$, dictionary $\mathbf{D} \in \reals^{m \times d}$, maximal number of words $L$ and beam width $K$
\State Initial solutions $\mathbf{X}^0 = [\mathbf{0}_d, \dots, \mathbf{0}_d]$
\For{$l = 1, L$}
\For{$i \in [1, \dots, K]$}
\State \textbf{Extend:} Append $\mathbf{X}_i^{l-1}\! +\! \textbf{1}_j,\! \forall j\!\! \in\! [1, ..., d]$ to $\mathbf{X}^{l-1}$
\EndFor
\State \textbf{Trim:} $\mathbf{X}^l= \text{K-}\argmin_{\mathbf{X}_i \in \mathbf{X}^{l-1}} ||\mathbf{y} - \mathbf{D}\mathbf{X}_i||_2^2$
\EndFor
\State \textbf{return} $\mathbf{X}^L$
\end{algorithmic} \end{algorithm}
\textbf{An Integer Prior:} While CS is typically concerned with the reconstruction of a sparse real-valued signal, in our BoW linear representation, the signal fulfills a secondary structure constraint besides sparsity. Its nonzero entries stem from a finite, or discrete, alphabet. Such prior information on the original signal appears in many communication scenarios \citep{candes2005error, axell2012spectrum}, where the transmitted data originates from a finite set.
\textbf{Beam Search OMP:} As OMP iteratively adds atoms to the recovered support, the choice of a new element in an iteration is blind to its effect on future iterations. Therefore, any mistakes, particularly in early iterations, may lead to large recovery errors. To mitigate this phenomenon, several methods have been proposed to amend the OMP algorithm.
To decrease the greediness of the greedy addition algorithm (which acts similarly to OMP), \citet{white2016generating} use a substitution based method, also referred to as swapping \citep{andrle2006swapping} in the CS literature. Unfortunately, the computational complexity of this substitution strategy makes it impractical. \citet{elad2009plurality} combine several recovered sparse representations to improve denoising by randomizing the OMP algorithm. However, in our case, the sum of embeddings $\mathbf{a}_\text{SoE}$ represents a single true sparse BoW vector $\mathbf{a}_\text{BoW}$, so combining several recovered vectors is not expected to yield the correct solution.
\textbf{IK-OMP:} We combine the integer prior with the beam search strategy, and propose IK-OMP (Algorithm~\ref{algo:beam_omp}). In the algorithm description, $\mathbf{1}_j$ is the vector with a single nonzero element at index $j$ and $\text{K-}\argmin$ denotes the $K$ elements attaining the smallest values of the expression that follows. In this work, the selected BoW is the candidate which minimizes the reconstruction score.
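Algorithm~\ref{algo:beam_omp} can be sketched in Python as follows; duplicate candidates are not pruned in this sketch, and the toy usage assumes an orthonormal dictionary purely for illustration:

```python
import numpy as np

def ik_omp(D, y, L=4, K=3):
    """IK-OMP sketch (Algorithm 1): beam search over integer-valued
    sparse vectors. Each round extends every candidate by +1 in every
    coordinate and keeps the K candidates minimizing ||y - Dx||_2^2."""
    m, d = D.shape
    beam = [np.zeros(d, dtype=int)]
    for _ in range(L):
        candidates = list(beam)  # keep previous candidates, as in Algorithm 1
        for x in beam:
            for j in range(d):   # Extend: add 1_j for every dictionary index
                x_ext = x.copy()
                x_ext[j] += 1
                candidates.append(x_ext)
        candidates.sort(key=lambda x: float(np.sum((y - D @ x) ** 2)))
        beam = candidates[:K]    # Trim: K best candidates
    return beam[0]  # candidate minimizing the reconstruction score

# Toy check: recover a two-word BoW from an orthonormal dictionary.
D = np.eye(4)
y = D @ np.array([1, 0, 1, 0])
```

Because previous candidates are kept in the candidate set, a candidate that already attains zero residual survives all later rounds.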
\begin{figure*}
\caption{\textbf{Compressed Sensing}: Comparison of the accuracy, and accumulated reward, of the various reconstruction algorithms on the `Troll Quest' and in `Open Zork'. The SnR denotes the ratio between the norm of the original signal $\mathbf{a}_\text{SoE}$ and that of the added noise.}
\label{fig:full compressed sensing}
\end{figure*}
\section{Imitation Learning} \label{sec:imitationlearning} In this section, we present our Sparse-IL\ algorithm and provide in-depth details regarding the design and implementation of each of its underlying components. We also detail the experiments of executing Sparse-IL\ on the entire game of Zork.
\textbf{Sparse Imitation Learning:} Our Sparse-IL\ architecture is composed of two major components - Encoder $\text{E}_\theta (\state)$ and Retrieval Mechanism (as seen in Figure~\ref{fig:algorithm}). Each component has a distinct role, and combining them enables a computationally efficient approach.
\textbf{The Encoder (E)} is a neural network trained to output the optimal action representation at each state. As we consider the task of imitation learning, this is performed by minimizing the $\ell_2$ loss between the Encoder's output $E_{\theta}(\state)$ and the embedding of the action provided by the expert $\mathbf{a}_\text{SoE}$.
\begin{figure*}
\caption{Difference in reconstruction accuracy, between \textbf{Sparse-IL and DeepCS-2}. Higher value represents a higher reconstruction accuracy for Sparse-IL. DeepCS-2 fails when presented with several variants of the correct actions (synonyms).}
\label{fig:deepcs vs ik-omp accuracy comparison}
\caption{\textbf{Sparse Imitation Learning:} Comparison of the accuracy of each reconstruction algorithm on an agent trained using imitation learning to solve the entire game. In the graph on the left, IK-OMP with K=20 and K=112 result in identical performance.}
\label{fig:openzork imitation learning}
\end{figure*}
In all of the learning experiments, the architecture we use is a convolutional neural network (CNN) that is suited to NLP tasks \citep{kim2014convolutional}. Due to the structure of the game, there exist long-term dependencies. Frame-stacking, a common approach in games \citep{mnih2015human}, tackles this issue by providing the network with the $N$ previous states. For the ``Open Zork'' task, we stack the previous 12 states, whereas for the ``Troll Quest'' we only provide it with the current frame.
\textbf{Retrieval Mechanism (R):} The output of the Encoder, $\text{E}_{\theta} (\state)$, is fed into a CS algorithm, such as IK-OMP. IK-OMP produces $K$ candidate actions, ${\mathbf{a}_\text{BoW}}_1, ..., {\mathbf{a}_\text{BoW}}_K$. These actions are fed into a fitness function which ranks them based on the reconstruction score $||\text{E}_{\theta} (\state) - \mathbf{D}{\mathbf{a}_\text{BoW}}_i||_2^2 , \enspace \forall i = 1, ..., K$ (see Section~\ref{sec:compressed_sensing}), and returns the optimal candidate. Other CS approaches, e.g., OMP and FISTA, return a single candidate action.
\section{Experiments}\label{sec: experiments}
In this section, we present our experimental results. We begin by analyzing our proposed CS method, namely IK-OMP, in \cref{sec: cs experiments}, and its ability to reconstruct the action when provided the sum of word embeddings $\mathbf{a}_\text{SoE}$. After evaluating our proposed method in a clean and analyzable scenario, we evaluate the entire system `Sparse Imitation Learning'\ on the full game of Zork (\cref{sec: il experiments}).
\subsection{Compressed Sensing}\label{sec: cs experiments} In this section, we focus on comparing several CS approaches. To do so, we follow the set of commands, extracted from a walk-through of the game, required to solve Zork1, both in the `Troll Quest' and `Open Zork' domains. In each state $\state$, we take the ground-truth action $\mathbf{a}_\text{env} (\state)$, calculate the sum of word embeddings $\mathbf{a}_\text{SoE} (\state)$, add noise and test the ability of various CS methods to reconstruct $\mathbf{a}_\text{env} (\state)$. We compare the \emph{run-time} (Table~\ref{table:sparse comparison}), and the \emph{reconstruction accuracy} (number of actions reconstructed correctly) and \emph{reward gained} in the presence of noise (Figure~\ref{fig:full compressed sensing}). Specifically, the measured action is $\mathbf{a}_\text{mes} (\state) = \mathbf{a}_\text{SoE} (\state) + \epsilon$, where $\epsilon \sim N(0,1)$ is normalized based on the signal to noise ratio (SnR).
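The SnR-based noise normalization can be implemented as below; scaling the noise so that $||\mathbf{a}_\text{SoE}||_2 / ||\epsilon||_2$ equals the desired SnR is one plausible reading of the normalization described above:

```python
import numpy as np

def add_noise(a_soe, snr, rng):
    """Additive Gaussian noise, rescaled so that the ratio
    ||a_SoE||_2 / ||eps||_2 equals snr (one plausible reading
    of the paper's SnR normalization)."""
    eps = rng.standard_normal(a_soe.shape)
    eps *= np.linalg.norm(a_soe) / (snr * np.linalg.norm(eps))
    return a_soe + eps

rng = np.random.default_rng(0)
a = rng.standard_normal(50)       # stand-in for a 50-D sum of embeddings
a_mes = add_noise(a, snr=2.0, rng=rng)
```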
\begin{table}[H]
\caption{Runtime comparison.}\label{table:sparse comparison}
\centering
\begin{tabular}{|l|l|}
\hline
\\[-1em]
\textbf{Algorithm} & \textbf{Runtime} \\
\hline
\\[-1em]
OMP & 0.008 \\
\hline
\\[-1em]
IOMP, K=1 & 0.008 \\
\hline
\\[-1em]
IK-OMP, K=3 & 0.021 \\
\hline
\\[-1em]
IK-OMP, K=20 & 0.166 \\
\hline
\\[-1em]
IK-OMP, K=112 & 1.116 \\
\hline
\\[-1em]
FISTA &0.881 \\
\hline
\\[-1em]
DeepCS &0.347 \\
\hline
\end{tabular} \end{table}
We compare 4 CS methods: the FISTA implementation of BP, OMP, IK-OMP (Algorithm~\ref{algo:beam_omp}) and a Deep Learning variant we deem DeepCS described below. The dictionary is composed of $d=112$ possible words which can be used in the game. The dimension of the embedding is $m=50$ (standard GloVe embedding available online) and the sentence length is limited to at most $4$ words. This yields a total number of $\approx$ 10 million actions, from which the agent must choose one at each step. It is important to note that while \emph{accuracy} and \emph{reward} might seem similar, an inaccurate reconstruction at an early stage results in an immediate failure, even when the accuracy over the entire trajectory seems high.
Clearly, as seen from Figure~\ref{fig:full compressed sensing}, OMP fails to reconstruct the true BoW vectors $\mathbf{a}_\text{BoW}$, even in the noiseless scenario. Indeed, the mutual-coherence (Definition~\ref{def:coherence}) is $\mu=0.97$, so Theorem~\ref{th:cs_unique} provides no reconstruction guarantee for any sparsity $p>1$. However, our suggested approach, IK-OMP, is capable of correctly reconstructing the original action $\mathbf{a}_\text{BoW}$, even in the presence of relatively large noise. This gives evidence that the integer prior and the beam search strategy significantly improve sparse recovery performance.
\textbf{Deep Compressed Sensing:} Besides traditional CS methods, it is natural to test the ability of deep learning methods to perform such a task. Here, we train a neural network to predict the BoW vector $\mathbf{a}_\text{BoW}$ underlying the continuous embedding vector. Our network is a multi layer perceptron (MLP), composed of two hidden layers, 100 neurons each. We use a sigmoid activation function to bound the outputs to $[0,1]$ and train the network using a binary cross entropy loss. In these experiments, we denote by $T$ the threshold above which an output is selected, e.g., when $T=0.9$ all words which receive a weight above $0.9$ are selected.
Our results (Figure~\ref{fig:compressed sensing deep cs openzork}) show that the DeepCS approach works when no noise is present; however, once noise is added to the setup, it is clear that DeepCS performs poorly compared to classic CS methods such as IK-OMP. We observed similar results in the Troll domain. Moreover, as DeepCS requires training a new model for each domain, it is data-specific and does not transfer easily, unlike traditional CS methods.
\begin{figure}
\caption{\textbf{Compressed Sensing - DeepCS:} Comparison of the accuracy, and accumulated reward, of the DeepCS baselines, compared to the IK-OMP approach.}
\label{fig:compressed sensing deep cs openzork}
\end{figure}
\subsection{Imitation Learning}\label{sec: il experiments}
In an imitation learning setup, we are given a data set of state-action pairs $(\state, \mathbf{a}_\text{env})$, provided by an expert; the goal is to learn a policy that achieves the best performance possible. We achieve this by training the embedding network $\text{E}_\theta (\state)$ to imitate the demonstrated actions in the embedding space, namely $\mathbf{a}_\text{SoE}$, at each state $\state$, using the MSE between the predicted actions and those demonstrated. We consider three setups: (1) Perfect demonstrations, where we test errors due to architecture capacity and function approximation, (2) Gaussian noise, $\mathbf{a}_\text{mes} (\state) = \mathbf{a}_\text{SoE} (\state) + \epsilon$ (See Section~\ref{sec: cs experiments}), and (3) discrete-action noise, in which a random incorrect action is demonstrated with probability (w.p.) $p$. This experiment can be seen as learning from demonstrations provided by an ensemble of sub-optimal experts.
Our results (Figure~\ref{fig:openzork imitation learning}) show that by combining CS with imitation learning techniques, we are capable of solving the entire game of Zork1, even in the presence of discrete-action noise. In all our experiments, IK-OMP outperforms the various baselines, including the end-to-end approach DeepCS-2 which is trained to predict the BoW embedding $\mathbf{a}_\text{BoW}$ directly from the state $\state$.
\textbf{Training:} Analyzing the training graph presents an interesting picture. During the training process, the output of the Encoder can be seen as a noisy estimate of $\mathbf{a}_\text{SoE}$. As training progresses, the effective noise decreases (i.e., the SnR increases), which is reflected in the improved reconstruction performance.
\textbf{Generalization:} In Figure~\ref{fig:deepcs vs ik-omp accuracy comparison}, we present the generalization capabilities which our method Sparse-IL\ enjoys, due to the use of pre-trained unsupervised word embeddings. The heatmap shows two forms of noise. The first, as before, is the probability of receiving a bad demonstration, an incorrect action. The second, synonym probability, is the probability of being presented with a correct action, yet composed of different words, e.g., drop, throw and discard result in an identical action in the environment and have a similar meaning. These results clearly show that Sparse-IL outperforms DeepCS-2 in nearly all scenarios, highlighting the generalization improvement inherent in the embeddings.
\textbf{The benefit of meaningful embeddings:} In our approach, the Encoder $\text{E}_\theta$ is trained to predict the sum-of-embeddings $\mathbf{a}_\text{SoE}$. However, it can also be trained to directly predict the BoW vector $\mathbf{a}_\text{BoW}$. While this approach may work, it lacks the generalization ability which is apparent in embeddings such as GloVe, in which similar words receive similar embedding vectors.
Consider a scenario in which there are 4 optimal actions (e.g., `go north', `walk north', `run north' and `move north') and 1 sub-optimal action (e.g., `climb tree'). Each optimal action is presented with probability $0.15$ and the sub-optimal action with probability $0.4$. In this example, the expected BoW representation would include `north' w.p. $0.6$, `climb' and `tree' w.p. $0.4$, and each of the remaining words w.p. $0.15$. On the other hand, since `go', `walk', `run' and `move' have similar meanings and in turn similar embeddings, the expected $\mathbf{a}_\text{SoE}$ is much closer to the optimal actions than to the sub-optimal one, and thus an imitation agent is less likely to make a mistake.
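The bag-of-words expectation in this example can be verified directly:

```python
from collections import Counter

# Probabilities from the example: each of the 4 optimal actions w.p. 0.15,
# the sub-optimal action w.p. 0.4.
actions = {
    ("go", "north"): 0.15,
    ("walk", "north"): 0.15,
    ("run", "north"): 0.15,
    ("move", "north"): 0.15,
    ("climb", "tree"): 0.4,
}

# Expected BoW weight of each word = sum of probabilities of the
# actions containing it.
expected_bow = Counter()
for words, p in actions.items():
    for w in words:
        expected_bow[w] += p

print(expected_bow["north"], expected_bow["climb"], expected_bow["go"])
```

As claimed, `north' carries weight $0.6$, `climb' and `tree' carry $0.4$, and each individual verb only $0.15$, so a BoW-trained agent would be pulled toward the sub-optimal action.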
\section{Conclusion}\label{sec:conclusion} We have presented a computationally efficient algorithm called Sparse Imitation Learning\ (Sparse-IL) that combines CS with imitation learning to solve text-based games with combinatorial action spaces. We proposed a CS variant of OMP, which we call Integer K-OMP (IK-OMP), and demonstrated that it can deconstruct a sum of word embeddings into the individual BoW that make up the embedding, even in the presence of significant noise. In addition, IK-OMP is significantly more computationally efficient than the baseline CS techniques. When combining IK-OMP with imitation learning, our agent is able to solve the `Troll Quest' as well as the entire game of Zork1 for the first time. Zork1 contains a combinatorial action space of approximately 10 million actions. Future work includes replacing the fitness function with a critic in order to further improve the learned policy, as well as testing the capabilities of the critic agent in cross-domain tasks.
\end{document} |
\begin{document}
\title{{\LARGE Quantum exam}} \author{\bf Nguyen Ba An} \email{[email protected]} \affiliation{School of Computational Sciences, Korea Institute for Advanced Study, 207-43 Cheongryangni 2-dong, Dongdaemun-gu, Seoul 130-722, Republic of Korea}
\begin{abstract} Absolutely and asymptotically secure protocols for organizing an exam in a quantum way are proposed, based judiciously on multipartite entanglement. The protocols are shown to stand against common types of eavesdropping attack. \end{abstract} \pacs{03.67.Hk, 03.65.Ud, 03.67.Dd} \maketitle
\noindent \textbf{1. Introduction}
Simultaneous distance-independent correlation between different systems, called entanglement \cite{r1}, is the most characteristic trait that sharply distinguishes the quantum world from the classical one. At present, entanglement between two systems, i.e., bipartite entanglement, is quite well understood, but entanglement between more than two systems, i.e., multipartite entanglement, still remains far from being satisfactorily understood. In spite of that, multipartite entanglement has proven to play a superior role in the recently emerging fields of quantum information processing and quantum computing, since it exhibits a much richer structure than bipartite entanglement. Motivation for studying multipartite entanglement arises for many reasons, some of which are listed now. First, multipartite entanglement provides a unique means to check the Einstein locality without invoking statistical arguments \cite{r2}, contrary to the case of Bell inequalities using bipartite entanglement. Second, multipartite entanglement serves as a key ingredient for quantum computing to achieve an exponential speedup over classical computation \cite{r3}. Third, multipartite entanglement is central to quantum error correction \cite{r4}, where it is used to encode states, to detect errors and, eventually, to allow fault-tolerant quantum computation \cite{r5}. Fourth, multipartite entanglement helps to better characterize the critical behavior of different many-body quantum systems, giving rise to a unified treatment of quantum phase transitions \cite{r6}. Fifth, multipartite entanglement is also crucial in condensed matter phenomena and might solve some unresolved problems such as high-$T_{c}$ superconductivity \cite{r7}.
Sixth, multipartite entanglement is recognized as an irreplaceable and efficient resource to perform tasks involving a large number of parties, such as network teleportation \cite{r8}, quantum cryptography \cite{r9}, quantum secret sharing \cite{r10}, remote entangling \cite{r11}, quantum (tele)cloning \cite{r12}, quantum Byzantine agreement \cite{r13}, etc. Finally, multipartite entanglement is conjectured to yield a wealth of fascinating and unexplored physics \cite{r14}. Current research in multipartite entanglement is progressing along two parallel directions. One direction deals with problems such as how to classify \cite{r15}, quantify \cite{r16}, generate/control/distill \cite{r17} and witness \cite{r18} multipartite entanglement. The other direction advances various applications exploiting the nonclassical multiway correlation inherent in multipartite entanglement \cite{r8,r9,r10,r11,r12,r13}. Our work here belongs to the second direction. Namely, we propose protocols to organize a so-called quantum exam, which will be specified in the next section. To meet the necessary confidentiality of the exam, we use suitable multipartite GHZ entangled states \cite{r2} as the quantum channel. We consider two scenarios. One scenario is absolutely secure, provided that the participants share a proper multipartite entanglement beforehand. The other scenario can be performed directly without any prior nonlocal quantum arrangements, but it is only asymptotically secure. Both scenarios are shown to stand against commonly utilized eavesdropping attacks.
\vskip 0.5cm
\noindent \textbf{2. Quantum exam}
Exploiting the superdense-coding feature of bipartite entanglement, we have recently proposed a quantum dialogue scheme \cite{r19} (see also \cite{r20}) allowing two legitimate parties to carry out their conversation securely. In this work multipartite entanglement will be judiciously exploited to do a more sophisticated task. Suppose that a teacher, Alice, wishes to organize an important exam with her remotely located students Bob $1$, Bob $2$, $\dots$ and Bob $N.$ Alice gives her problem to all Bobs and, after some predetermined period of time, asks each Bob to return a solution independently. Alice's problem should be kept confidential from any outsiders. The solution of a Bob should be accessible only to Alice and to no one else, including the $N-1$ remaining Bobs. Such confidentiality constraints cannot be maintained even when Alice and Bobs are connected by authentic classical channels, because any classical communication could be eavesdropped perfectly without leaving a trace. However, combined with appropriate quantum channels, such an exam is accomplishable. We call it a quantum exam, i.e. an exam organized in a quantum way to guarantee the required secrecy.
Let Alice's problem be a binary string \begin{equation} Q=\{q_{m}\} \label{Q} \end{equation} and the solution of Bob $n$ another string \begin{equation} R_{n}=\{r_{nm}\} \label{R} \end{equation} where $n=1,$ $2,$ $...,$ $N$ labels the Bob, while $q_{m},$ $r_{nm}\in \{0,1\}$ with $m=1,$ $2,$ $3,...$ denote a secret bit of Alice and of Bob $n,$ respectively.
\vskip 0.5cm
\noindent \textbf{2.1. Absolutely secure protocol}
An exam consists of two stages. In the first stage Alice gives a problem to Bobs and in the second stage she collects Bobs' solutions. \vskip 0.5cm \noindent \textbf{The problem-giving process}
To securely transfer the problem from Alice to Bobs the following steps are carried out.
\begin{enumerate} \item[a1)] Alice and Bobs share beforehand a large number of ordered identical $(N+1)$-partite GHZ states in the form \begin{equation}
\left| \Psi _{m}\right\rangle \equiv \left| \Psi \right\rangle _{a_{m}1_{m}...N_{m}}=\frac{1}{\sqrt{2}}\left( \left| 00...0\right\rangle _{a_{m}1_{m}...N_{m}}+\left| 11...1\right\rangle _{a_{m}1_{m}...N_{m}}\right) \label{e1} \end{equation} of which qubits $a_{m}$ are with Alice and qubits $n_{m}$ with Bob $n.$
\item[a2)] For a given $m,$ Alice measures her qubit $a_{m}$ in the basis $
\mathcal{B}_{z}=\{\left| 0\right\rangle ,\left| 1\right\rangle \},$ then asks Bobs to do so with their qubits $n_{m}.$ All the parties obtain the same outcome $j_{m}^{z}$ where $j_{m}^{z}=0$ $(j_{m}^{z}=1)$ if they find $
\left| 0\right\rangle $ $(\left| 1\right\rangle ).$
\item[a3)] Alice publicly broadcasts the value $x_{m}=q_{m}\oplus j_{m}^{z}$ ($\oplus$ denotes addition mod $2$).
\item[a4)] Each Bob decodes Alice's secret bit as $q_{m}=x_{m}\oplus j_{m}^{z}.$ \end{enumerate}
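The four steps above amount to a one-time pad whose key is the common $\mathcal{B}_z$ outcome. The following sketch is a purely classical stand-in for the quantum measurements (the function name and the modeling of each GHZ measurement as one shared random bit are our illustrative assumptions, not part of the protocol text):

```python
import random

def give_problem(q_bits, n_bobs):
    """Classical simulation of steps a1)-a4): measuring each shared GHZ
    state |Psi_m> in B_z gives Alice and every Bob the SAME random bit
    j_m^z, which serves as a one-time-pad key (modeled here classically)."""
    decoded = [[] for _ in range(n_bobs)]
    for q in q_bits:
        j = random.randint(0, 1)   # common outcome j_m^z (step a2)
        x = q ^ j                  # Alice's public broadcast x_m (step a3)
        for bob in decoded:
            bob.append(x ^ j)      # each Bob recovers q_m = x_m XOR j_m^z (step a4)
    return decoded

problem = [1, 0, 1, 1, 0]
assert all(sol == problem for sol in give_problem(problem, n_bobs=3))
```

As in any one-time pad, the security argument of the text rests on each key bit $j_m^z$ being fresh and uniformly random.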
This problem-giving process is absolutely secure because $j_{m}^{z},$ for each $m,$ takes the value $0$ or $1$ with equal probability, resulting in a truly random string $\{j_{m}^{z}\}$ which Alice uses as a one-time pad to encode her secret problem $\{q_{m}\}$ simultaneously for all Bobs, who also use $\{j_{m}^{z}\}$ to decode Alice's problem.
\vskip 0.5cm
\noindent \textbf{The solution-collecting process}
After a predetermined period of time, depending on the difficulty level of the problem, Alice collects the solutions from the Bobs independently as follows.
\begin{enumerate} \item[b1)] Alice and Bobs share beforehand a large number of ordered nonidentical $(N+1)$-partite GHZ states in the form \begin{equation}
\left| \Phi _{m}\right\rangle \equiv \left| \Phi \right\rangle _{a_{m}1_{m}...N_{m}}=U_{m}\left| \Psi \right\rangle _{a_{m}1_{m}...N_{m}} \label{e2} \end{equation} with \begin{equation} U_{m}=I_{a_{m}}\otimes u(s_{1_{m}})\otimes u(s_{2_{m}})\otimes ...\otimes u(s_{N_{m}}) \label{U} \end{equation} where $I_{a_{m}}$ is the identity operator acting on qubit $a_{m}$ and \begin{equation}
u(s_{n_{m}})=(\left| 0\right\rangle \left\langle 1\right| +\left|
1\right\rangle \left\langle 0\right| )^{s_{n_{m}}} \label{u} \end{equation} is a unitary operator acting on qubit $n_{m}.$ For each $n$ and $m,$ the value of $s_{n_{m}}$ chosen at random between $0$ and $1$ is known only to Alice but by no means to any other person including Bobs. Qubits $a_{m}$ are with Alice and qubits $n_{m}$ with Bob $n.$
\item[b2)] For a given $m,$ Alice measures her qubit $a_{m}$ in $\mathcal{B} _{z}$ with the outcome $j_{a_{m}}^{z}=\{0,1\},$ then asks Bobs to do so with their qubits $n_{m}$ with the outcome $j_{n_{m}}^{z}=\{0,1\}.$
\item[b3)] Each Bob $n$ publicly broadcasts the value $y_{nm}=r_{nm}\oplus j_{n_{m}}^{z}.$
\item[b4)] Alice decodes the solution of Bob $n$ as $r_{nm}=y_{nm}\oplus \left[ \delta _{0,s_{n_{m}}}j_{a_{m}}^{z}+\delta _{1,s_{n_{m}}}(j_{a_{m}}^{z}\oplus 1)\right] .$ \end{enumerate}
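Since $\delta _{0,s}j+\delta _{1,s}(j\oplus 1)=j\oplus s$ for $s\in\{0,1\}$, steps b1)-b4) can be sketched classically as follows (the function name and the classical modeling of the correlated outcomes are our illustrative assumptions):

```python
import random

def collect_solutions(solutions):
    """Classical simulation of steps b1)-b4): on |Phi_m> Alice's outcome
    j_a and Bob n's outcome j_n satisfy j_n = j_a XOR s_n, where the flip
    bit s_n is known only to Alice."""
    decoded = []
    for r_bits in solutions:           # one bit string per Bob
        out = []
        for r in r_bits:
            j_a = random.randint(0, 1) # Alice's outcome j_{a_m}^z (step b2)
            s = random.randint(0, 1)   # Alice's secret flip s_{n_m} (step b1)
            j_n = j_a ^ s              # Bob's correlated outcome j_{n_m}^z
            y = r ^ j_n                # Bob's public broadcast y_{nm} (step b3)
            out.append(y ^ j_a ^ s)    # Alice decodes r_{nm} (step b4)
        decoded.append(out)
    return decoded

sols = [[1, 1, 0], [0, 1, 1]]
assert collect_solutions(sols) == sols
```

The decoding in step b4) is exactly $y_{nm}\oplus j_{a_m}^z\oplus s_{n_m}$ rewritten with Kronecker deltas.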
In the solution-collecting process the outcomes $j_{a_{m}}^{z}$ and $j_{n_{m}}^{z}$ are in general no longer the same, but they are dynamically correlated as $j_{n_{m}}^{z}=\delta _{0,s_{n_{m}}}j_{a_{m}}^{z}+\delta _{1,s_{n_{m}}}(j_{a_{m}}^{z}\oplus 1).$ This correlation allows only Alice, who knows the values $\{s_{n_{m}}\},$ to decode the solution of a Bob after she obtains her own measurement outcome $j_{a_{m}}^{z}.$ Clearly, each of the $N$ strings $\{j_{1_{m}}^{z}\},$ $\{j_{2_{m}}^{z}\},$ $...,\{j_{N_{m}}^{z}\}$ is truly random, and each such string is used by a Bob and Alice only once to encode/decode a secret solution $\{r_{nm}\}.$ The above solution-collecting process is therefore absolutely secure as well.
The essential condition to ensure absolute security of the quantum exam is a prior sharing of the entangled states $\{\left| \Psi _{m}\right\rangle \}$
and $\{\left| \Phi _{m}\right\rangle \}$ between the teacher Alice and the students Bobs. It is therefore necessary to propose methods for multipartite entanglement sharing.
\vskip 0.5cm
\noindent \textbf{The }$\left| \Psi _{m}\right\rangle $-\textbf{sharing process}
Alice and Bobs can securely share the states $\{\left| \Psi _{m}\right\rangle \}$ as follows.
\begin{enumerate} \item[c1)] Alice generates a large enough number of identical states $
\left| \Psi _{m}\right\rangle $ defined in Eq. (\ref{e1}) \cite{n1}. For each such state she keeps qubit $a_{m}$ and sends qubits $1_{m},$ $2_{m},$ $ ...,$ $N_{m}$ to Bob $1,$ Bob $2,$ $...,$ Bob $N,$ respectively. Before sending a qubit $n_{m}$ Alice informs Bob $n$ of that action.
\item[c2)] After receiving a qubit, each Bob independently confirms receipt to Alice.
\item[c3)] Alice selects at random a subset $\{\left| \Psi _{l}\right\rangle \}$ out of the shared $\left| \Psi _{m}\right\rangle $ -states and lets Bobs know that subset. For each state of the subset Alice measures her qubit randomly in $\mathcal{B}_{z}$ or in $\mathcal{B}
_{x}=\{\left| +\right\rangle ,\left| -\right\rangle \}$ with $\left| \pm
\right\rangle =(\left| 0\right\rangle \pm \left| 1\right\rangle )/\sqrt{2}.$ Then she asks every Bob to measure their qubits in the same basis as hers. Alice's (Bobs') outcome in $\mathcal{B}_{z}$ is $
j_{a_{l}}^{z}(j_{n_{l}}^{z})=\{0,1\}$ corresponding to finding $\{\left|
0\right\rangle ,\left| 1\right\rangle \}$ and that in $\mathcal{B}_{x}$ is $
j_{a_{l}}^{x}(j_{n_{l}}^{x})=\{+1,-1\}$ corresponding to finding $\{\left|
+\right\rangle ,\left| -\right\rangle \}.$
\item[c4)] Alice requires each Bob to publicly reveal the outcome of each of his measurements, and she makes an analysis. For the measurements in $\mathcal{B}_{z}$ she compares $j_{a_{l}}^{z}$ with $j_{n_{l}}^{z}:$ if $j_{a_{l}}^{z}=j_{n_{l}}^{z}$ $\forall n$ all is right; otherwise she suspects a possible attack by an outsider Eve on the quantum channel. As for the measurements in $\mathcal{B}_{x}$ she compares $j_{a_{l}}^{x}$ with $J_{l}^{x}=\prod_{n=1}^{N}j_{n_{l}}^{x}:$ if $j_{a_{l}}^{x}=J_{l}^{x}$ all is right \cite{n2}; otherwise Eve is in the line. If the error rate exceeds a predetermined small value Alice tells Bobs to restart the whole process; otherwise they record the order of the remaining shared $\left| \Psi _{m}\right\rangle $-states and can use them for the problem-giving process following steps a1) to a4). \end{enumerate}
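Step c4) works because an $(N+1)$-partite GHZ state forces the product of all $\mathcal{B}_x$ outcomes to be such that $j_{a_l}^{x}=\prod_{n}j_{n_l}^{x}$, whereas after a $\mathcal{B}_z$ measure-resend attack the collapsed product state yields uncorrelated signs. A toy classical simulation of a single check (function name and the statistical model of the collapsed state are our illustrative assumptions):

```python
import random
from math import prod

def x_check_passes(n_bobs, eve):
    """One B_x check of step c4). Honest GHZ: Alice's sign equals the
    product of the Bobs' signs. After Eve's Z-measure-resend the state is
    a product state, so Alice's sign is uncorrelated with the Bobs'."""
    bobs = [random.choice([+1, -1]) for _ in range(n_bobs)]
    alice = prod(bobs) if not eve else random.choice([+1, -1])
    return alice == prod(bobs)

random.seed(1)
trials = 20000
honest = sum(x_check_passes(3, eve=False) for _ in range(trials)) / trials
attacked = sum(x_check_passes(3, eve=True) for _ in range(trials)) / trials
assert honest == 1.0               # GHZ correlation always holds
assert abs(attacked - 0.5) < 0.02  # Eve escapes a single B_x check only half the time
```

With many checked states, Eve's survival probability thus decays exponentially, which is the quantitative content of the security analysis.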
\noindent \textbf{The }$\left| \Phi _{m}\right\rangle $-\textbf{sharing process}
The states $\{\left|\Phi_m\right\rangle \}$ can be securely shared between the participants as follows.
\begin{enumerate} \item[d1)] Alice generates a large enough number of identical states $
\{\left| \Psi _{m}\right\rangle \}$ \cite{n1}. She then applies on each of the identical states a unitary operator $U_{p}$ determined by Eq. (\ref{U})
to transform them into the $\left| \Phi _{p}\right\rangle $-states defined in Eq. (\ref{e2}), which are nonidentical states \cite{n3}. Afterward, for each $\left| \Phi _{p}\right\rangle ,$ she keeps qubit $a_{p}$ and sends qubits $1_{p},$ $2_{p},$ $...,$ $N_{p}$ to Bob $1,$ Bob $2,$ $...,$ Bob $N,$ respectively. Before sending a qubit $n_{p}$ Alice informs Bob $n$ of that action.
\item[d2)] After receiving a qubit, each Bob independently confirms receipt to Alice.
\item[d3)] Alice selects at random a large enough subset $\{\left| \Phi _{l}\right\rangle \}$ out of the shared $\left| \Phi _{p}\right\rangle $
-states and lets Bobs know that subset. For each state $\left| \Phi _{l}\right\rangle $ of the subset Alice measures her qubit randomly in either $\mathcal{B}_{z}$ or $\mathcal{B}_{x},$ then asks Bobs to measure their qubits in the same basis as hers.
\item[d4)] Alice requires each Bob to publicly reveal the outcome of each of his measurements and makes a proper analysis. For the measurements in $\mathcal{B}_{z}$ she verifies the equalities $j_{a_{l}}^{z}=\delta _{0,s_{n_{l}}}j_{n_{l}}^{z}+\delta _{1,s_{n_{l}}}(j_{n_{l}}^{z}\oplus 1).$ If the equalities hold for every $n$ all is right; otherwise the quantum channel was attacked. As for the measurements in $\mathcal{B}_{x}$ she compares $j_{a_{l}}^{x}$ with $J_{l}^{x}=\prod_{n=1}^{N}j_{n_{l}}^{x}:$ if $j_{a_{l}}^{x}=J_{l}^{x}$ all is right \cite{n4}; otherwise the quantum channel was attacked. If the error rate exceeds a predetermined value Alice tells Bobs to restart the whole process; otherwise they record the order of the remaining shared $\left| \Phi _{p}\right\rangle $-states and can use them for the solution-collecting process following steps b1) to b4). \end{enumerate}
\noindent \textbf{Security of the entanglement-sharing process}
To gain useful information about the exam, Eve must attack the quantum channel during the entanglement-sharing process. Below are several types of attack that Eve commonly uses.
\textit{Measure-Resend Attack}. In $\mathcal{B}_{z}$ Eve measures the qubits emerging from Alice and then resends them on to Bobs. After Eve's measurement the entangled state collapses into a product state and her attack is detectable when Alice and Bobs use $\mathcal{B}_{x}$ for a security check \cite{n5}.
\textit{Disturbance Attack}. If Alice and Bobs check security only through measurement outcomes in $\mathcal{B}_{x},$ then Eve, though she cannot gain any information, is able to turn the protocol into a denial of service. Namely, for each $n,$ on the way from Alice to Bob $n,$ Eve applies to qubit $n$ an operator $u(v_{n_{m}})$ as defined in Eq. (\ref{u}) with $v_{n_{m}}$ randomly taken as either $0$ or $1,$ then lets the qubit go on its way. By doing so the disturbed states become truly random and totally unknown to everybody, hence no cryptography is possible at all. Though measurements in $\mathcal{B}_{x}$ cannot detect this type of attack \cite{n6}, those in $\mathcal{B}_{z}$ can \cite{n7}.
\textit{Entangle-Measure Attack}. Eve may steal some information by entangling her ancilla (prepared, say, in the state $\left| \chi
\right\rangle _{E})$ with a qubit $n$ (assumed to be in the state $\left|
i\right\rangle _{n})$ before the qubit reaches Bob $n:$ $\left| \chi
\right\rangle _{E}\left| i\right\rangle _{n}\rightarrow \alpha \left| \chi _{i}\right\rangle _{E}\left| i\right\rangle _{n}+\beta \left| \overline{\chi _{i}}\right\rangle _{E}\left| i\oplus 1\right\rangle _{n}$ where $|\alpha
|^{2}+|\beta |^{2}=1$ and $_{E}\left\langle \chi _{i}\right. \left|
\overline{\chi _{i}}\right\rangle _{E}=0.$ After Bob $n$ measures his qubit, Eve does so with her ancilla and thus can learn about that Bob's outcome. Yet, with probability $|\beta |^{2}$ Eve finds $\left| \overline{\chi _{i}}\right\rangle _{E},$ in which case she is detected if the security check by Alice and Bobs is performed in $\mathcal{B}_{z}$ \cite{n8}.
\textit{Intercept-Resend Attack}. Eve may create her own entangled states $
\left| \Psi ^{\prime }\right\rangle _{a_{m}^{\prime }1_{m}^{\prime
}...N_{m}^{\prime }}$ $(\left| \Phi ^{\prime }\right\rangle _{a_{m}^{\prime
}1_{m}^{\prime }...N_{m}^{\prime }}=U_{m}^{\prime }\left| \Psi ^{\prime }\right\rangle _{a_{m}^{\prime }1_{m}^{\prime }...N_{m}^{\prime }}$ where $ U_{m}^{\prime }=I_{a_{m}^{\prime }}\otimes u(s_{1_{m}}^{\prime })\otimes u(s_{2_{m}}^{\prime })\otimes ...\otimes u(s_{N_{m}}^{\prime })$ with $ \{s_{n_{m}}^{\prime }\}$ an arbitrary random string). Then she keeps qubit $a_{m}^{\prime } $ and sends qubit $n_{m}^{\prime }$ to Bob $n.$ When Alice sends qubits $ n_{m}$ to Bobs Eve captures and stores all of them. Subsequently, after Alice's and Bobs' measurements, Eve also measures her qubits $a_{m}^{\prime } $ and the qubits $n_{m}$ she has kept to learn the corresponding keys. This attack is detected as well when Alice and Bobs use $\mathcal{B}_{z}$ -measurement outcomes for their security-check \cite{n9}.
\textit{Masquerading Attack}. Eve may pretend to be a Bob in the $\left| \Psi _{m}\right\rangle $-sharing process to later obtain Alice's problem. Likewise, she may pretend to be Alice in the $\left| \Phi _{m}\right\rangle $-sharing process to later collect Bobs' solutions. Such pretenses are excluded because each Bob, after receiving a qubit, has to inform Alice, and Alice, before sending a qubit, also has to inform all Bobs. The classical communication channels Alice and Bobs possess have been assumed to be highly authentic, so that any disguise would be disclosed.
\vskip 0.5cm
\noindent \textbf{2.2. Asymptotically secure protocol}
In some circumstances an urgent exam needs to be organized but no prior quantum nonlocal arrangements are available at all. We now propose a protocol to accomplish such an urgent task directly. To that aim, Alice has to have at hand a large number of states $\{\left| \Psi _{m}\right\rangle \}$ and $\{\left| \Phi _{m}\right\rangle =U_{m}\left| \Psi _{m}\right\rangle \}.$ Let $M$ $(M^{\prime })$ be the length of Alice's problem (of each Bob's solution) and $T$ the time provided for Bobs to solve the problem. \vskip 0.5cm \noindent \textbf{The direct problem-giving process}
Alice can directly give her problem to Bobs by ``running'' the following program.
\begin{enumerate} \item[e1)] $m=0.$
\item[e2)] $m=m+1.$ Alice picks up a state $\left| \Psi _{m}\right\rangle ,$ keeps qubit $a_{m}$ and sends qubits $1_{m},$ $2_{m},$ $...,$ $N_{m}$ to Bob $1,$ Bob $2,$ $...,$ Bob $N,$ respectively. Before doing so Alice informs all Bobs via her authentic classical channels.
\item[e3)] Each Bob confirms receipt of a qubit via their authentic classical channels.
\item[e4)] Alice switches between two operating modes: the control mode (CM) with rate $c$ and the message mode (MM) with rate $1-c.$ Alice lets Bobs know which operating mode she chose.
\begin{enumerate} \item[e4.1)] If CM is chosen, Alice measures qubit $a_{m}$ randomly in $ \mathcal{B}_{z}$ or $\mathcal{B}_{x}$ with an outcome $j_{a_{m}}^{z}$ or $ j_{a_{m}}^{x},$ then lets Bobs know her basis choice and, asks them to measure their qubits $n_{m}$ in the chosen basis. After measurements each Bob publicly publishes his outcome $j_{n_{m}}^{z}$ or $j_{n_{m}}^{x}.$ Alice analyzes the outcomes: if $ j_{a_{m}}^{z}=j_{1_{m}}^{z}=j_{2_{m}}^{z}=...=j_{N_{m}}^{z}$ or $ j_{a_{m}}^{x}=\prod_{n=1}^{N}j_{n_{m}}^{x}$ she sets $m=m-1$ and goes to step e2) to continue, else she tells Bobs to reinitialize from the beginning by going to step e1).
\item[e4.2)] If MM is chosen, Alice measures qubit $a_{m}$ in $\mathcal{B} _{z}$ with an outcome $j_{a_{m}}^{z}$ and publicly reveals $ x_{m}=j_{a_{m}}^{z}\oplus q_{m}.$ Each Bob measures his qubit also in $ \mathcal{B}_{z}$ with an outcome $j_{n_{m}}^{z},$ then decodes Alice's secret bit as $q_{m}=j_{n_{m}}^{z}\oplus x_{m}.$ If $m<M$ Alice goes to step e2) to continue, else she publicly announces: \textit{``My problem has been transferred successfully to all of you. Please return your solution after time }$T".$ \end{enumerate} \end{enumerate}
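The program e1)-e4.2) interleaves control rounds (rate $c$) with message rounds. For an undisturbed channel it can be sketched classically as follows (the function name, the seed parameter and the classical outcome model are our illustrative assumptions):

```python
import random

def direct_give(q_bits, c=0.25, seed=0):
    """Toy run of the direct problem-giving program with no eavesdropper:
    control mode (rate c) consumes a GHZ state on a correlation check,
    message mode transmits one problem bit via x_m = j_a^z XOR q_m."""
    rng = random.Random(seed)
    received, checks, m = [], 0, 0
    while m < len(q_bits):
        if rng.random() < c:          # step e4.1): control mode
            checks += 1               # honest channel: the check always passes
        else:                         # step e4.2): message mode
            j = rng.randint(0, 1)     # common B_z outcome on |Psi_m>
            x = j ^ q_bits[m]         # Alice's public announcement
            received.append(x ^ j)    # each Bob's decoding
            m += 1
    return received, checks

bits = [0, 1, 1, 0, 1]
out, n_checks = direct_give(bits)
assert out == bits
```

On average the protocol consumes $M/(1-c)$ shared states to transmit $M$ problem bits, which is the cost of the embedded security checks.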
\noindent \textbf{The direct solution-collecting process}
After time $T$ Alice can directly collect Bobs' solutions by ``running'' another program as follows.
\begin{enumerate} \item[g1)] $m=0.$
\item[g2)] $m=m+1.$ Alice picks up a $\left| \Phi _{m}\right\rangle ,$ keeps qubit $a_{m}$ and sends qubits $1_{m},$ $2_{m},$ $...,$ $N_{m}$ to Bob $1,$ Bob $2,$ $...,$ Bob $N,$ respectively. Before doing so Alice informs all Bobs via her authentic classical channels.
\item[g3)] Each Bob confirms receipt of a qubit via their authentic classical channels.
\item[g4)] Alice switches between two operating modes: the CM with rate $c$ and the MM with rate $1-c.$ Alice lets Bobs know which operating mode she chose.
\begin{itemize} \item[g4.1)] If CM is chosen, Alice measures qubit $a_{m}$ randomly in $ \mathcal{B}_{z}$ or $\mathcal{B}_{x}$ with an outcome $j_{a_{m}}^{z}$ or $ j_{a_{m}}^{x},$ then lets Bobs know her basis choice and, asks them to measure their qubits $n_{m}$ in the chosen basis. After measurements each Bob publicly publishes his outcome $j_{n_{m}}^{z}$ or $j_{n_{m}}^{x}.$ Alice analyzes the outcomes: if $j_{a_{m}}^{z}=\delta _{0,s_{n_{m}}}j_{n_{m}}^{z}+\delta _{1,s_{n_{m}}}(j_{n_{m}}^{z}\oplus 1)$ for every $n$ or $j_{a_{m}}^{x}=\prod_{n=1}^{N}j_{n_{m}}^{x}$ she sets $m=m-1 $ and goes to step g2) to continue, else she tells Bobs to reinitialize from the beginning by going to step g1).
\item[g4.2)] If MM is chosen, Alice measures qubit $a_{m}$ in $\mathcal{B} _{z}$ with an outcome $j_{a_{m}}^{z}$ and each Bob measures his qubit also in $\mathcal{B}_{z}$ with an outcome $j_{n_{m}}^{z}.$ Each Bob publicly reveals $y_{nm}=r_{nm}\oplus j_{n_{m}}^{z}$ and Alice decodes Bobs' secret bits as $r_{nm}=y_{nm}\oplus \left[ \delta _{0,s_{n_{m}}}j_{a_{m}}^{z}+\delta _{1,s_{n_{m}}}(j_{a_{m}}^{z}\oplus 1)\right] $ for $n=1,2,...,N.$ If $m<M^{\prime }$ Alice goes to step g2) to continue, else she publicly announces: \textit{``Your solutions have been collected successfully''}. \end{itemize} \end{enumerate}
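Each control-mode round gives Eve an independent chance of being caught, so her probability of surviving $k$ checks decays geometrically. A toy calculation with an illustrative per-check detection probability $d$ (the value of $d$ depends on the attack and on $c$, and is not fixed by the protocol):

```python
def detection_probability(d, k):
    """Probability that Eve is caught in at least one of k independent
    control-mode checks, each of which detects her with probability d."""
    return 1 - (1 - d) ** k

# with, say, d = 0.25 per check, detection approaches 1 as the exam gets longer
assert detection_probability(0.25, 1) == 0.25
assert detection_probability(0.25, 50) > 0.999
```

This is the quantitative sense in which the direct protocols are asymptotically secure: the longer the transmitted strings, the closer the detection probability is to one.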
As described above, in the direct problem-giving (solution-collecting) process Alice alternately gives (collects) secret bits and checks for Eve's eavesdropping. These direct protocols also stand against the types of attack mentioned above. The protocols terminate immediately whenever Eve is detected in a control mode. However, Eve might get partial information before her tampering is disclosed. Such information leakage can be reduced as much as Alice wants by increasing the control-mode rate $c$ at the expense of reducing the information transmission rate $r=1-c.$ For short strings $Q$ and $R_{n}$ (see Eq. (\ref{Q}) and Eq. (\ref{R})) Eve's detection probability may be quite small, but the longer the strings, the higher the detection probability. In the long-string limit the detection probability approaches one, i.e. Eve is inevitably detected. In this sense the direct quantum exam protocols are only asymptotically secure. \vskip 0.5cm \noindent \textbf{3. Conclusion}
We have proposed two protocols for organizing a quantum exam \cite{n10} based on a judicious use of appropriate multipartite entangled states. The first protocol is absolutely secure provided that the participants have successfully shared the necessary entanglement in advance. We also provide methods for sharing the multipartite entanglement in the presence of a potentially eavesdropping outsider. The second protocol can be carried out directly without prior entanglement sharing. This advantage is, however, compromised by a lower confidentiality level or by a slower information transmission rate. Both protocols have been shown to withstand various kinds of attacks, namely the measure-resend, disturbance, entangle-measure, intercept-resend and masquerading attacks. Our protocols work well in the idealized situation of perfect entanglement sources and measuring devices and noiseless quantum channels, which we have assumed for simplicity. We plan to further develop our protocols to cope with more realistic situations.
\vskip 0.5cm
\noindent \textbf{Acknowledgments.}
The author is grateful to Professor Hai-Woong Lee from KAIST for useful discussions and comments. This research was supported by a Grant (TRQCQ) from the Ministry of Science and Technology of Korea and also by a KIAS R\&D Fund No 6G014904.
\end{document}
\begin{document}
\title{On the Moebius deformable hypersurfaces} \begin{center} \author{M. I. Jimenez and
R. Tojeiro$^*$}
\footnote{Corresponding author}
\footnote{The first author is supported by CAPES-PNPD Grant 88887.469213/2019-00. The second author is partially supported by Fapesp grant 2016/23746-6 and CNPq grant 307016/2021-8.\\
Data availability statement: Not applicable.} \end{center} \date{}
\begin{abstract}
In the article [\emph{Deformations of hypersurfaces preserving the M\"obius metric and a reduction theorem}, Adv. Math. 256 (2014), 156--205], Li, Ma and Wang investigated the interesting class of Moebius deformable hypersurfaces, that is, the umbilic-free Euclidean hypersurfaces $f\colon M^n\to \mathbb{R}^{n+1}$ that admit non-trivial deformations preserving the Moebius metric. The classification of Moebius deformable hypersurfaces of dimension $n\geq 4$ stated in the aforementioned article, however, misses a large class of examples. In this article we complete that classification for $n\geq 5$. \end{abstract}
\noindent \emph{2020 Mathematics Subject Classification:} 53B25.
\noindent \emph{Key words and phrases:} {\small {\em Moebius metric, Moebius deformable hypersurface, Moebius bending. }}
\date{} \maketitle
\section{Introduction} Let $f\colon M^n\to\mathbb{R}^{m}$ be an isometric immersion of a Riemannian manifold $(M^n,g)$ into Euclidean space with normal bundle-valued second fundamental form $\alpha\in \Gamma(\mbox{Hom}(TM,TM;N_fM))$.
Let $\|\alpha\|^2\in C^\infty(M)$ be given at any point $x\in M^n$ by $$
\|\alpha(x)\|^2=\sum_{i,j=1}^n\|\alpha(x)(X_i,X_j)\|^2, $$ where $\{X_i\}_{1\leq i\leq n}$ is an orthonormal basis of $T_xM$. Define $\phi\in C^\infty(M)$ by \begin{equation} \label{phi}
\phi^2=\frac{n}{n-1}(\|\alpha\|^2-n\|\mathcal{H}\|^2), \end{equation} where $\mathcal{H}$ is the mean curvature vector field of $f$. Notice that $\phi$ vanishes precisely at the umbilical points of $f$. The metric $$ g^*=\phi^2 g, $$ defined on the open subset of non-umbilical points of $f$, is called the \emph{Moebius metric} determined by $f$. The metric $g^*$ is invariant under Moebius transformations of the ambient space, that is, if two immersions differ by a Moebius transformation of $\mathbb{R}^m$, then their corresponding Moebius metrics coincide.
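In the hypersurface case $\phi$ has a transparent expression in terms of the principal curvatures $k_1,\dots,k_n$ of $f$; the following elementary computation (added here for the reader's convenience, using only \eqref{phi}) makes explicit why $\phi$ vanishes precisely at umbilical points:

```latex
% Using \|\alpha\|^2=\sum_i k_i^2 and \|\mathcal{H}\|^2=\big(\tfrac{1}{n}\sum_i k_i\big)^2:
\phi^2=\frac{n}{n-1}\Big(\sum_{i=1}^n k_i^2-\frac{1}{n}\Big(\sum_{i=1}^n k_i\Big)^{\!2}\Big)
      =\frac{1}{n-1}\sum_{i<j}(k_i-k_j)^2,
% which vanishes if and only if k_1=\dots=k_n, i.e. exactly at umbilical points.
```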
It was shown in \cite{Wa} that a hypersurface $f\colon M^n\to\mathbb{R}^{n+1}$ is uniquely determined, up to Moebius transformations of the ambient space, by its Moebius metric and its \emph{Moebius shape operator} $S=\phi^{-1}(A-HI)$, where $A$ is the shape operator of $f$ with respect to a unit normal vector field $N$ and $H$ is the corresponding mean curvature function. A similar result holds for submanifolds of arbitrary codimension (see \cite{Wa} and Section $9.8$ of \cite{DT}).
Li, Ma and Wang investigated in \cite{LMW} the natural and interesting problem of looking for the hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$ that are not determined, up to Moebius transformations of $\mathbb{R}^{n+1}$, only by their Moebius metrics. This fits into the fundamental problem in Submanifold theory of looking for data that are sufficient to determine a submanifold up to some group of transformations of the ambient space.
More precisely, an umbilic-free hypersurface $f\colon M^n\to\mathbb{R}^{n+1}$ is said to be \emph{Moebius deformable} if there exists an immersion $\tilde f\colon M^n\to\mathbb{R}^{n+1}$ that shares with $f$ the same Moebius metric and is not Moebius congruent to $f$ on any open subset of $M^n$. The first result in \cite{LMW} is that a Moebius deformable hypersurface with dimension $n\geq 4$ must carry a principal curvature with multiplicity at least $n-2$. As pointed out in \cite{LMW}, for $n\geq 5$ this is already a consequence of Cartan's classification in \cite{Ca} (see also \cite{DT1} and Chapter $17$ of~\cite{DT}) of the more general class of \emph{conformally deformable} hypersurfaces. These are the hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$ that admit a non-trivial \emph{conformal deformation} $\tilde f\colon M^n\to\mathbb{R}^{n+1}$, that is, an immersion such that $f$ and $\tilde f$ induce conformal metrics on $M^n$ and do not differ by a Moebius transformation of $\mathbb{R}^{n+1}$ on any open subset of $M^n$.
According to Cartan's classification, besides the conformally flat hypersurfaces, which have a principal curvature with multiplicity greater than or equal to $n-1$ and are highly conformally deformable, the remaining ones fall into one of the following classes: \begin{itemize} \item[(i)] \emph{conformally surface-like hypersurfaces}, that is, those that differ by a Moebius transformation of $\mathbb{R}^{n+1}$ from cylinders and rotation hypersurfaces over surfaces in $\mathbb{R}^3$, or from cylinders over three-dimensional hypersurfaces of $\mathbb{R}^4$ that are cones over surfaces in $\mathbb{S}^3$. \item[(ii)] \emph{conformally ruled hypersurfaces}, that is, hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$ for which $M^n$ carries an integrable $(n-1)$-dimensional distribution whose leaves (\textit{rulings}) are mapped by $f$ into umbilical submanifolds of $\mathbb{R}^{n+1}$; \item[(iii)] hypersurfaces that admit a non-trivial \emph{conformal variation} $F\colon I\times M^n \to \mathbb{R}^{n+1}$, that is, a smooth map defined on the product of an open interval $I\subset \mathbb{R}$ containing $0$ with $M^n$ such that, for any $t \in I$, the map $f_t = F(t; \cdot)$, with $f_0 = f$, is a non-trivial \emph{conformal deformation} of $f$. \item[(iv)] hypersurfaces that admit a single non-trivial conformal deformation. \end{itemize}
It was shown in \cite{LMW} that, among the conformally surface-like hypersurfaces, the ones that are Moebius deformable are those that are determined by a Bonnet surface $h\colon L^2\to \mathbb{Q}_\epsilon^3$ admitting isometric deformations preserving the mean curvature function. Here $\mathbb{Q}_\epsilon^3$ stands for a space form of constant sectional curvature $\epsilon \in \{-1,0,1\}$. It was also shown in \cite{LMW} that an umbilic-free conformally flat hypersurface $f\colon M^n\to \mathbb{R}^{n+1}$, $n\geq 4$ (hence with a principal curvature of constant multiplicity $n-1$), admits non-trivial deformations preserving the Moebius metric if and only if it has constant Moebius curvature, that is, its Moebius metric has constant sectional curvature. Such hypersurfaces were classified in \cite{GLLMW}, and an alternative proof of the classification was given in \cite{LMW}. They were shown to be, up to Moebius transformations of $\mathbb{R}^{n+1}$, either cylinders or rotation hypersurfaces over the so-called \emph{curvature spirals} in $\mathbb{R}^2$ or $\mathbb{R}_+^2$, respectively, the latter endowed with the hyperbolic metric, or cylinders over surfaces that are cones over curvature spirals in $\mathbb{S}^2$.
It is claimed in \cite{LMW} that there exists only one further example of a Moebius deformable hypersurface, which belongs to the third of the above classes in Cartan's classification of the conformally deformable hypersurfaces. Namely, the hypersurface given by \begin{equation}\label{example} f=\Phi\circ (\mbox{id} \times f_1)\colon M^n:=\mathbb{H}_{-m}^{n-3}\times N^3\to \mathbb{R}^{n+1},\,\,\,m=\sqrt{\frac{n-1}{n}}, \end{equation}
where $\mbox{id}$ is the identity map of $\mathbb{H}_{-m}^{n-3}$, $f_1\colon N^3\to \mathbb{S}_m^4$ is Cartan's minimal isoparametric hypersurface, which is a tube over the Veronese embedding of $\mathbb{R}\mathbb{P}^2$ into $\mathbb{S}_m^4$, and $\Phi\colon \mathbb{H}_{-m}^{n-3}\times \mathbb{S}_m^{4}\subset \mathbb{L}^{n-2}\times \mathbb{R}^{5}\to \mathbb{R}^{n+1}\setminus \mathbb{R}^{n-4}$ is the conformal diffeomorphism given by $$ \Phi(x, y)=\frac{1}{x_0}(x_1, \ldots, x_{n-4},y) $$ for all $x=x_0e_0+x_1e_1+\cdots+ x_{n-3}e_{n-3}\in \mathbb{L}^{n-2}$ and $y=(y_1, \ldots, y_{5})\in \mathbb{S}^{4}\subset \mathbb{R}^{5}$. Here $\{e_0, \ldots, e_{n-3}\}$ denotes a pseudo-orthonormal basis of the Lorentzian space $\mathbb{L}^{n-2}$ with $\<e_0, e_0{\rangle}=0=\<e_{n-3}, e_{n-3}{\rangle}$ and $\<e_0, e_{n-3}{\rangle}=-1/2$. The deformations of $f$ preserving the Moebius metric have been shown to be actually compositions ${f}_t=f\circ \phi_t$ of $f$ with the elements of a one-parameter family of isometries $\phi_t\colon M^n\to M^n$ with respect to the Moebius metric; hence all of them have the same image as $f$.
Our initial goal was to investigate the larger class of \emph{infinitesimally} Moebius bendable hypersurfaces, that is, umbilic-free hypersurfaces $f\colon M^n\to \mathbb{R}^{n+1}$ for which there exists a one-parameter family of immersions $f_t\colon M^n\to \mathbb{R}^{n+1}$, with $t\in (-\epsilon, \epsilon)$ and $f_0=f$, such that the Moebius metrics determined by $f_t$ coincide up to the first order, in the sense that
$\frac{\partial}{\partial t}|_{t=0}g_t^*=0$. This is carried out for $n\geq 5$ in the forthcoming paper \cite{JT2}.
In the course of our investigation, however, we realized that the infinitesimally Moebius bendable hypersurfaces of dimension $n\geq 5$ in our classification that are not conformally surface-like are actually also Moebius deformable. Nevertheless, except for the example in the preceding paragraph, they do not appear in the classification of such hypersurfaces as stated in \cite{LMW}. This has led us to revisit that classification under a different approach from that in \cite{LMW}.
To state our result, we need to recall some terminology. Let $f\colon M^n\to \mathbb{R}^{n+1}$ be an oriented hypersurface with respect to a unit normal vector field~$N$. Then the family of hyperspheres $ x\in M^n\mapsto S(h(x),r(x)) $ with radius $r(x)$ and center $ h(x)=f(x)+r(x)N(x) $ is enveloped by $f$. If, in particular, $1/r$ is the mean curvature of $f$, it is called the \emph{central sphere congruence} of $f$.
Let $\mathbb{V}^{n+2}$ denote the light cone in the Lorentz space $\mathbb{L}^{n+3}$ and let $\Psi=\Psi_{v,w,C}\colon\mathbb{R}^{n+1}\to\mathbb{L}^{n+3}$ be the isometric embedding onto $$ \mathbb{E}^{n+1}=\mathbb{E}^{n+1}_w=\{u\in\mathbb{V}^{n+2}:\<u,w{\rangle} =1\}\subset\mathbb{L}^{n+3} $$ given by \begin{equation} \label{eq:Psi}
\Psi(x)=v+Cx-\frac{1}{2}\|x\|^2w, \end{equation} in terms of $w\in \mathbb{V}^{n+2}$, $v\in \mathbb{E}^{n+1}$ and a linear isometry $C\colon\mathbb{R}^{n+1}\to\{v,w\}^\perp$. Then the congruence of hyperspheres $x\in M^n\mapsto S(h(x),r(x))$ is determined by the map $S\colon M^n\to\mathbb{S}_{1,1}^{n+2}$ that takes values in the Lorentzian sphere $$ \mathbb{S}_{1,1}^{n+2}=\{x\in\mathbb{L}^{n+3}\colon\<x,x{\rangle}=1\} $$ and is defined by
$$ S(x)=\frac{1}{r(x)}\Psi(h(x))+\frac{r(x)}{2}w, $$
in the sense that $\Psi(S(h(x),r(x)))=\mathbb{E}^{n+1}\cap S(x)^\perp$ for all $x\in M^n$. The map $S$ has rank $0<k<n$, that is, it corresponds to a $k$-parameter congruence of hyperspheres, if and only if $\lambda=1/r$ is a principal curvature of $f$ with constant multiplicity $n-k$ (see Section $9.3$ of \cite{DT} for details). In this case, $S$ gives rise to a map $s\colon L^k\to \mathbb{S}_{1,1}^{n+2}$ such that $S\circ \pi=s$, where $\pi\colon M^n\to L^k$ is the canonical projection onto the quotient space of leaves of $\ker (A-\lambda I)$.
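As a short consistency check (added here; it follows at once from the stated properties of $v$, $w$ and $C$), the map $\Psi$ of \eqref{eq:Psi} does take values in $\mathbb{E}^{n+1}$:

```latex
% With \<v,v{\rangle}=0=\<w,w{\rangle}, \<v,w{\rangle}=1, Cx\perp\{v,w\}
% and \<Cx,Cx{\rangle}=\|x\|^2, all cross terms vanish and
\<\Psi(x),\Psi(x){\rangle}=\<Cx,Cx{\rangle}-\|x\|^2\<v,w{\rangle}
  =\|x\|^2-\|x\|^2=0,
\qquad
\<\Psi(x),w{\rangle}=\<v,w{\rangle}=1,
% so \Psi(x)\in\mathbb{E}^{n+1}\subset\mathbb{V}^{n+2} for every x.
```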
\begin{theorem}\label{thm:crux} Let $f\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 5$, be a Moebius deformable hypersurface that is not conformally surface-like on any open subset and has a principal curvature of constant multiplicity $n-2$. Then the central sphere congruence of $f$ is determined by a minimal space-like surface $s\colon L^2\to\mathbb{S}_{1,1}^{n+2}$.
Conversely, any simply connected hypersurface $f\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 5$, whose central sphere congruence is determined by a minimal space-like surface $s\colon L^2\to\mathbb{S}_{1,1}^{n+2}$ is Moebius deformable. In fact, $f$ is Moebius bendable: it admits precisely a one-parameter family of conformal deformations, all of which share with $f$ the same Moebius metric. \end{theorem}
\begin{remarks}\label{rem}\emph{ 1) Particular examples of Moebius deformable hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$ that are not conformally surface-like on any open subset and have a principal curvature of constant multiplicity $n-2$ are the minimal hypersurfaces of rank two. These are well known to admit a one-parameter associated family of isometric deformations, all of which are also minimal of rank two. The elements of the associated family, sharing with $f$ the same induced metric, all have the same scalar curvature and, being minimal, also share with $f$ the same Moebius metric. These examples are not comprised in the statement of Proposition $9.2$ in \cite{LMW} and, since the elements of the associated family of a minimal hypersurface of rank two do not, in general, have the same image, neither are they comprised in the statement of Theorem $1.5$ therein.
\\ 2) More general examples are the compositions $f=P\circ h$ of minimal hypersurfaces $h\colon M^n\to\mathbb{Q}_c^{n+1}$ of rank two with a ``stereographic projection" $P$ of $\mathbb{Q}_c^{n+1}$ (minus one point if $c> 0$) onto $\mathbb{R}^{n+1}$. The latter are precisely the hypersurfaces $f\colon M^n\to\mathbb{R}^{n+1}$ with a principal curvature of constant multiplicity $n-2$ whose central sphere congruences are determined by minimal space-like surfaces $s\colon L^2\to\mathbb{S}_{1,1}^{n+2}\subset\mathbb{L}^{n+3}$ such that $s(L)$ is contained in a hyperplane of $\mathbb{L}^{n+3}$ orthogonal to a vector $T\in \mathbb{L}^{n+3}$ satisfying $-\<T,T{\rangle}=c$ (see, e.g., Corollary $3.4.6$ in \cite{H-J}).
\\ 3) The central sphere congruence of the hypersurface given by \eqref{example} is a Veronese surface in a sphere $\mathbb{S}^4\subset \mathbb{S}_1^{n+2}$.
\\ 4) The proof of Theorem \ref{thm:crux} relies on the classification of the conformally deformable hypersurfaces of dimension $n\geq 5$ given in Chapter 17 of~\cite{DT}. } \end{remarks}
\section{Preliminaries}
In this short section we recall some basic definitions and state Wang's fundamental theorem for hypersurfaces in Moebius geometry.
Let $f\colon M\to\mathbb{R}^{n+1}$ be an umbilic-free immersion with Moebius metric $g^*={\langle}\cdot,\cdot{\rangle}^*$ and Moebius shape operator $S$. The \emph{Blaschke tensor} $\psi$ of $f$ is the symmetric bilinear form given by $$ \psi(X,Y)=\frac{H}{\phi}\<SX,Y{\rangle}^*
+\frac{1}{2\phi^2}\left(\|\mbox{grad\,}^* \phi\|_*^2 +H^2\right)\<X,Y{\rangle}^*-\frac{1}{\phi}\mbox{Hess\,}^* \phi(X,Y) $$ for all $X,Y\in \mathfrak{X}(M)$, where $\mbox{grad\,}^*$ and $\mbox{Hess\,}^*$ stand for the gradient and Hessian, respectively, with respect to $g^*$. The \emph{Moebius form} $\omega\in\Gamma(T^*M)$ of $f$ is defined by $$ \omega(X)=-\frac{1}{\phi}{\langle}\mbox{grad\,}^*H+S\mbox{grad\,}^*\phi,X{\rangle}^*. $$
The Moebius shape operator, the Blaschke tensor and the Moebius form of $f$ are Moebius invariant tensors that satisfy the conformal Gauss and Codazzi equations \begin{equation} \label{gaussconf} R^*(X,Y)=SX\wedge^*SY+\psi X\wedge^*Y+X\wedge^*\psi Y \end{equation} and \begin{equation} \label{codconf} (\nabla^*_XS)Y-(\nabla^*_YS)X=\omega(X)Y-\omega(Y)X \end{equation} for all $X,Y\in\mathfrak{X}(M)$, where $\nabla^*$ denotes the Levi-Civita connection, $R^*$ the curvature tensor and $\wedge^*$ the wedge product with respect to $g^*$. We also point out for later use that the Moebius shape operator $S=\phi^{-1}(A-HI)$, besides being traceless, has constant norm $ \sqrt{(n-1)/n}$.
The following fundamental result was proved by Wang (see Theorem 3.1 in \cite{Wa}).
\begin{proposition}\label{congr} Two umbilic-free hypersurfaces $f_1,f_2\colon M^n\to \mathbb{R}^{n+1}$ are conformally (Moebius) congruent if and only if they share the same Moebius metric and the same Moebius second fundamental form (up to sign). \end{proposition}
\section{Proof of Theorem \ref{thm:crux}}
This section is devoted to the proof of Theorem \ref{thm:crux}. In the first subsection we
use the theory of flat bilinear forms to give an alternative proof of a key proposition proved in \cite{LMW} on the structure of the Moebius shape operators of Moebius deformable hypersurfaces. The proof of Theorem \ref{thm:crux} is provided in the subsequent subsection.
\subsection{Moebius shape operators of Moebius deformable hypersurfaces}
The starting point for the proof of Theorem \ref{thm:crux} is Proposition \ref{kermoeb} below, which gives the structure of the Moebius shape operator of a Moebius deformable hypersurface of dimension $n\geq 5$ that carries a principal curvature of multiplicity $(n-2)$ and is not conformally surface-like on any open subset.
First we provide, for the sake of completeness, an alternative proof for $n\geq 5$, based on the theory of flat bilinear forms, of a result first proved for $n\geq 4$ by Li, Ma and Wang in \cite{LMW} (see Theorem $6.1$ therein) on the structure of the Moebius shape operators of any pair of Euclidean hypersurfaces that are Moebius deformations of each other (see Proposition \ref{commeig} below).
Recall that if $W^{p,q}$ is a vector space of dimension $p+q$ endowed with an inner product ${\langle}\!{\langle}\,,\,{\rangle}\!{\rangle}$ of signature $(p,q)$, and $V$, $U$ are finite dimensional vector spaces, then a bilinear form $\beta\colon V\times U\to W^{p,q}$ is said to be \emph{flat} with respect to ${\langle}\!{\langle}\,,\,{\rangle}\!{\rangle}$ if $$ {\langle}\!{\langle}\,\beta(X,Y),\beta(Z,T){\rangle}\!{\rangle}-{\langle}\!{\langle}\,\beta(X,T),\beta(Z,Y){\rangle}\!{\rangle}=0 $$ for all $X,Z\in V$ and $Y,T\in U$. It is called \emph{null} if $$ {\langle}\!{\langle}\,\beta(X,Y),\beta(Z,T){\rangle}\!{\rangle}=0 $$ for all $X,Z\in V$ and $Y,T\in U$. Thus a null bilinear form is necessarily flat.
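As a simple illustration of these notions, any symmetric endomorphism $B$ of a Euclidean vector space $V$ gives rise to a null (hence flat) bilinear form $\beta\colon V\times V\to\mathbb{R}^{1,1}$ by setting

```latex
\beta(X,Y)=\left(\<BX,Y{\rangle},\<BX,Y{\rangle}\right),
```

since ${\langle}\!{\langle}\,\beta(X,Y),\beta(Z,T){\rangle}\!{\rangle}=\<BX,Y{\rangle}\<BZ,T{\rangle}-\<BX,Y{\rangle}\<BZ,T{\rangle}=0$ for all $X,Y,Z,T\in V$. The bilinear form $\Theta$ in Proposition \ref{flat} below has a similar structure, with the Lorentzian pairings of $\mathbb{R}^{2,2}$ balancing the data of $f_1$ against those of $f_2$.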
\begin{proposition}\label{flat} Let $f_1,f_2\colon M^n\to\mathbb{R}^{n+1}$ be umbilic-free immersions that share the same Moebius metric ${\langle}\, ,\,{\rangle}^*$. Let $S_i$ and $\psi_i$, $i=1,2$, denote their corresponding Moebius shape operators and Blaschke tensors. Then, for each $x\in M^n$, the bilinear form $\Theta\colon T_xM\times T_xM\to\mathbb{R}^{2,2}$ defined by $$ \Theta(X,Y)=(\<S_1X,Y{\rangle}^*,\frac{1}{\sqrt{2}}{\langle}\Psi_+X,Y{\rangle}^*,\<S_2X,Y{\rangle}^*,\frac{1}{\sqrt{2}}{\langle}\Psi_-X,Y{\rangle}^*), $$
where $\Psi_{\pm}=I\pm(\psi_1-\psi_2)$, is flat with respect to the (indefinite) inner product ${\langle}\!{\langle}\cdot,\cdot{\rangle}\!{\rangle}$ in $\mathbb{R}^{2,2}$. Moreover, $\Theta$ is null for all $x\in M^n$ if and only if $f_1$ and $f_2$ are Moebius congruent. \end{proposition} \noindent{\it Proof: } Using \eqref{gaussconf} for $f_1$ and $f_2$ we obtain \begin{align*}
{\langle}\!{\langle}\Theta(X,Y),&\Theta(Z,W){\rangle}\!{\rangle}-{\langle}\!{\langle}\Theta(X,W),\Theta(Z,Y){\rangle}\!{\rangle}\\
=&{\langle}(S_1Z\wedge^*S_1X)Y,W{\rangle}^*-{\langle}(S_2Z\wedge^*S_2X)Y,W{\rangle}^*\\ &+{\langle}((\psi_1-\psi_2)Z\wedge^*X)Y,W{\rangle}^*+{\langle}(Z\wedge^*(\psi_1-\psi_2)X)Y,W{\rangle}^*\\ =&0 \end{align*} for all $x\in M^n$ and $X, Y, Z, W\in T_xM$, which proves the first assertion.
Assume now that $\Theta$ is null for all $x\in M^n$.
Then \begin{align*}
0=& {\langle}\!{\langle}\Theta(X,Y),\Theta(Z,W){\rangle}\!{\rangle}= \<S_1X,Y{\rangle}^*\<S_1Z,W{\rangle}^*-\<S_2X,Y{\rangle}^*\<S_2Z,W{\rangle}^*\\
&+\frac{1}{2}{\langle}(I+(\psi_1-\psi_2))X,Y{\rangle}^* {\langle}(I+(\psi_1-\psi_2))Z,W{\rangle}^*\\ &-\frac{1}{2}{\langle}(I-(\psi_1-\psi_2))X,Y{\rangle}^*{\langle}(I-(\psi_1-\psi_2))Z,W{\rangle}^*
\end{align*} for all $x\in M^n$ and $X, Y, Z, W\in T_xM$. This is equivalent to \begin{align}\label{eq:s1s2} &\;\<S_1X,Y{\rangle}^*S_1-\<S_2X,Y{\rangle}^*S_2+\frac{1}{2}{\langle}(I+(\psi_1-\psi_2))X,Y{\rangle}^*(I+(\psi_1-\psi_2))\nonumber\\ &-\frac{1}{2}{\langle}(I-(\psi_1-\psi_2))X,Y{\rangle}^*(I-(\psi_1-\psi_2))\\ =&\<S_1X,Y{\rangle}^*S_1-\<S_2X,Y{\rangle}^*S_2+\<X,Y{\rangle}^*(\psi_1-\psi_2)+{\langle}(\psi_1-\psi_2)X,Y{\rangle}^*I\nonumber\\ =&0\nonumber \end{align} for all $x\in M^n$ and $X,Y\in T_xM$. Now we use that \begin{equation}\label{eq:prop920} (n-2){\langle}\psi_i X,Y{\rangle}^*=Ric^*(X,Y)+\<S_i^2X,Y{\rangle}^*-\frac{n^2s^*+1}{2n}\<X,Y{\rangle}^* \end{equation} for all $X,Y\in T_xM$, where $Ric^*$ and $s^*$ are the Ricci and scalar curvatures of the Moebius metric (see, e.g., Proposition $9.20$ in \cite{DT}), which implies that $$ \mbox{tr\,} \psi_1=\frac{n^2s^*+1}{2n}=\mbox{tr\,}\psi_2. $$ Therefore, taking traces in \eqref{eq:s1s2} yields $$ {\langle}(\psi_1-\psi_2)X,Y{\rangle}^*=0 $$ for all $x\in M^n$ and $X, Y\in T_xM$. Thus $\psi_1=\psi_2$, and hence $ \<S_1X,Y{\rangle}^*S_1=\<S_2X,Y{\rangle}^*S_2. $ In particular, $S_1$ and $S_2$ commute. Let $\lambda_i$ and $\rho_i$, $1\leq i\leq n$, denote their respective eigenvalues. Then $ \lambda_i\lambda_j=\rho_i\rho_j $ for all $1\leq i,j\leq n$ and, in particular, $\lambda_i^2=\rho_i^2$ for any $1\leq i\leq n$. If $\lambda_1=\rho_1\neq 0$, then $\lambda_j=\rho_j$ for every $j$, and then $S_1=S_2$. Similarly, if $\lambda_1=-\rho_1\neq 0$, then $S_1=-S_2$. Therefore, in any case, $f_1$ and $f_2$ are Moebius congruent by Proposition \ref{congr}.
\qed
\begin{proposition}\label{commeig} Let $f_1,f_2\colon M^n\to\mathbb{R}^{n+1}$, $n\geq5$, be umbilic-free immersions that are Moebius deformations of each other. Then there exists a distribution $\Delta$ of rank $(n-2)$ on an open and dense subset $\mathcal{U}\subset M^n$ such that, for each $x\in \mathcal{U}$, $\Delta(x)$ is contained in eigenspaces of the Moebius shape operators of both $f_1$ and $f_2$ at $x$ corresponding to a common eigenvalue (up to sign). \end{proposition} \noindent{\it Proof: } First notice that, for each $x\in M^n$, the kernel $$\mathcal{N}(\Theta):=\{Y\in T_xM\,:\, \Theta(X,Y)=0\,\,\mbox{for all}\,\, X\in T_xM\}$$ of the flat bilinear form $\Theta\colon T_xM\times T_xM\to\mathbb{R}^{2,2}$ given by Proposition \ref{flat} is trivial, for if $Y\in T_xM$ belongs to $\mathcal{N}(\Theta)$, then ${\langle}\Psi_+Y,Y{\rangle}^*=0={\langle}\Psi_-Y,Y{\rangle}^*$, which implies that $\<Y,Y{\rangle}^*=0$, and hence $Y=0$.
Now, by Proposition \ref{congr} and the last assertion in Proposition \ref{flat}, the flat bilinear form $\Theta$ is not null on any open subset of $M^n$, for $f_1$ and $f_2$ are not Moebius congruent on any open subset of $M^n$. Let $\mathcal{U}\subset M^n$ be the open and dense subset where $\Theta$ is not null. Since $n\geq 5$, it follows from Lemma~4.22 in \cite{DT} that, at any $x\in \mathcal{U}$, there exists an orthogonal decomposition $ \mathbb{R}^{2,2}=W_1^{1,1}\oplus W_2^{1,1} $ according to which $\Theta$ decomposes as $\Theta=\Theta_1+\Theta_2$, where $\Theta_1$ is null and $\Theta_2$ is flat with $\dim \mathcal{N}(\Theta_2)\geq n-2$.
We claim that $\Delta=\mathcal{N}(\Theta_2)$ is contained in eigenspaces of both $S_1$ and $S_2$ at any $x\in \mathcal{U}$. In order to prove this, take any $T\in\Gamma(\Delta)$, so that $\Theta(X,T)=\Theta_1(X,T)$ for any $X\in T_xM$, and hence $ {\langle}\!{\langle}\Theta(X,T),\Theta(Z,Y){\rangle}\!{\rangle}=0 $ for all $X,Y,Z\in T_xM$. Equivalently, \begin{equation} \label{rels1s2} \<S_1X,T{\rangle}^*S_1-\<S_2X,T{\rangle}^*S_2+{\langle}(\psi_1-\psi_2)X,T{\rangle}^*I+\<X,T{\rangle}^*(\psi_1-\psi_2)=0 \end{equation} for any $X\in T_xM$. In particular, for $X$ orthogonal to $T$, $$ \<S_1X,T{\rangle}^*S_1-\<S_2X,T{\rangle}^*S_2+{\langle}(\psi_1-\psi_2)X,T{\rangle}^*I=0. $$ Assume that $T$ is not an eigenvector of $S_1$. Then there exists $X$ orthogonal to $T$ such that $\<S_1X,T{\rangle}^*\neq 0$. Since $f_1$ is umbilic-free, we must have $\<S_2X,T{\rangle}^*\neq 0$, for otherwise the preceding equation would make $S_1$ a multiple of the identity tensor. It also shows that $S_2$ is a linear combination of $S_1$ and $I$, hence $S_1$ and $S_2$ commute and are simultaneously diagonalizable. Let $X_1, \ldots, X_n$ be an orthonormal diagonalizing basis of both $S_1$ and $S_2$ with respective eigenvalues $\lambda_i$ and $\rho_i$, $1\leq i\leq n$. Since $T$ is not an eigenvector, there are at least two distinct eigenvalues, say, $0\neq\lambda_1\neq\lambda_2$, with corresponding eigenvectors $X_1$ and $X_2$, such that $\<X_1,T{\rangle}^*\neq0\neq\<X_2,T{\rangle}^*$. Thus \eqref{rels1s2} yields $$ \lambda_1\<X_1,T{\rangle}^*S_1-\rho_1\<X_1,T{\rangle}^*S_2+{\langle}(\psi_1-\psi_2)X_1,T{\rangle}^*I+\<X_1,T{\rangle}^*(\psi_1-\psi_2)=0 $$ and $$ \lambda_2\<X_2,T{\rangle}^*S_1-\rho_2\<X_2,T{\rangle}^*S_2+{\langle}(\psi_1-\psi_2)X_2,T{\rangle}^*I+\<X_2,T{\rangle}^*(\psi_1-\psi_2)=0. $$ It follows from \eqref{eq:prop920} that $ (n-2)(\psi_1-\psi_2)=S_1^2-S_2^2. $ Hence $$ \lambda_1S_1-\rho_1S_2+\frac{1}{n-2}(\lambda_1^2-\rho_1^2)I+(\psi_1-\psi_2)=0 $$ and $$ \lambda_2S_1-\rho_2S_2+\frac{1}{n-2}(\lambda_2^2-\rho_2^2)I+(\psi_1-\psi_2)=0. 
$$ Taking traces in the above expressions we obtain $$\lambda_1^2-\rho_1^2=0=\lambda_2^2-\rho_2^2.$$ On the other hand, the above relations also yield $$ \lambda_1\lambda_i-\rho_1\rho_i+\frac{1}{n-2}(\lambda_i^2-\rho_i^2)=0 $$ and $$ \lambda_2\lambda_i-\rho_2\rho_i+\frac{1}{n-2}(\lambda_i^2-\rho_i^2)=0 $$ for any $1\leq i\leq n$. Assume first that $\lambda_1=\rho_1$, and hence $\lambda_2=\rho_2$. Then the preceding expressions become $$ (\lambda_i-\rho_i)\left(\lambda_j+\frac{1}{n-2}(\lambda_i+\rho_i)\right)=0 $$ for $j=1,2$ and $1\leq i\leq n$. Since $S_1\neq S_2$ and both tensors have vanishing trace, there must be at least two directions for which $\lambda_i-\rho_i\neq0$. For such a fixed direction, say $k$, we have $$ \lambda_j+\frac{1}{n-2}(\lambda_k+\rho_k)=0, $$ with $j=1,2$. Thus $\lambda_1=\lambda_2$, which is a contradiction.
Similarly, if we assume $\lambda_1=-\rho_1$, we obtain that $\lambda_2=-\rho_2$, and then $$ (\lambda_i+\rho_i)\left(\lambda_j+\frac{1}{n-2}(\lambda_i-\rho_i)\right)=0 $$ for $j=1,2$ and $1\leq i\leq n$. By the same argument as above, we see that $\lambda_1=\lambda_2$, again reaching a contradiction. Therefore $T$ must be an eigenvector of $S_1$.
Since $S_2$ is not a multiple of the identity, taking $X$ orthogonal to $T$ we see from \eqref{rels1s2} that $T$ must also be an eigenvector of $S_2$. Given that $T\in\Gamma(\Delta)$ was chosen arbitrarily, we conclude that $\Delta$ is contained in eigenspaces of both $S_1$ and $S_2$.
Let $\mu_1$ and $\mu_2$ be such that $S_1|_\Delta=\mu_1 I$ and $S_2|_\Delta=\mu_2 I$. By \eqref{rels1s2} we have $$ \mu_1^2-\mu_2^2+\frac{2}{n-2}(\mu_1^2-\mu_2^2)=0. $$ Thus $ \mu_1^2-\mu_2^2=0, $ and hence $\mu_1=\pm \mu_2$.
It remains to argue that $\dim\Delta=n-2$. After changing the normal vector of either $f_1$ or $f_2$, if necessary, one can assume that $\mu_1=\mu_2:=\mu$. Since $S_1|_\Delta=\mu I=S_2|_\Delta$, if $\dim\Delta=n-1$, then the condition $\mbox{tr\,}(S_1)=0=\mbox{tr\,}(S_2)$ would imply that $S_1=S_2$, a contradiction.
\qed
Now we make the extra assumptions that $f$ is not conformally surface-like on any open subset of $M^n$ and has a principal curvature with constant multiplicity $n-2$.
\begin{proposition}\label{kermoeb} Let $f_1\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 5$, be a Moebius deformable hypersurface with a principal curvature $\lambda$ of constant multiplicity $n-2$. Assume that $f_1$ is not conformally surface-like on any open subset of $M^n$. If $f_2\colon M^n\to\mathbb{R}^{n+1}$ is a Moebius deformation of $f_1$, then the Moebius shape operators $S_1$ and $S_2$ of $f_1$ and $f_2$, respectively, have constant eigenvalues $\pm\sqrt{(n-1)/2n}$ and $0$, and share the eigenbundle $\Delta$ corresponding to $\lambda$ as a common kernel. In particular, $\lambda$ and the corresponding principal curvature of $f_2$ coincide with the mean curvatures of $f_1$ and $f_2$, respectively. Moreover, the Moebius forms of $f_1$ and $f_2$ vanish on $\Delta$. \end{proposition}
For the proof of Proposition \ref{kermoeb} we will make use of Lemma \ref{le:split} below (see Theorem $1$ in \cite{DFT} or Corollary $9.33$ in \cite{DT}), which characterizes conformally surface-like hypersurfaces among hypersurfaces of dimension $n$ that carry a principal curvature with constant multiplicity $n-2$ in terms of the splitting tensor of the corresponding eigenbundle. Recall that, given a distribution $\Delta$ on a Riemannian manifold $M^n$, its \emph{splitting tensor} $C\colon\Gamma(\Delta)\to\Gamma(\mbox{End}(\Delta^\perp))$ is defined by $$ C_TX=-\nabla_X^hT $$ for all $T\in\Gamma(\Delta)$ and $X\in\Gamma(\Delta^\perp)$, where $\nabla_X^hT=(\nabla_XT)_{\Delta^\perp}$.
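For instance, if the metric of $M^n$ is locally a Riemannian product with $\Delta$ tangent to one factor, then lifted sections satisfy $\nabla_XT=0$ for all $T\in\Gamma(\Delta)$ and $X\in\Gamma(\Delta^\perp)$, hence

```latex
C_TX=-\nabla^h_XT=0\quad\mbox{for all}\;T\in\Gamma(\Delta),\;X\in\Gamma(\Delta^\perp),
```

that is, $C$ vanishes identically, so trivially $C(\Gamma(\Delta))\subset\mbox{span}\{I\}$; by Lemma \ref{le:split} below, a hypersurface as in the lemma whose induced metric is locally such a product is therefore automatically conformally surface-like.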
\begin{lemma} \label{le:split} Let $f\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, be a hypersurface with a principal curvature of multiplicity $n-2$ and let $\Delta$ denote its eigenbundle. Then $f$ is conformally surface-like if and only if the splitting tensor of $\Delta$ satisfies $C(\Gamma(\Delta))\subset \mbox{span}\{I\}$. \end{lemma}
\noindent \emph{Proof of Proposition \ref{kermoeb}:} Since $f_1$ has a principal curvature $\lambda$ of constant multiplicity $n-2$, it follows from Proposition \ref{commeig} that, after changing the normal vector field of either $f_1$ or $f_2$, if necessary, we can assume that the Moebius shape operators $S_1$ and $S_2$ of $f_1$ and $f_2$ have a common eigenvalue $\mu$ with the same eigenbundle $\Delta$ of rank $n-2$.
Let $\lambda_i$, $i=1,2$, be the eigenvalues of $S_1|_{\Delta^\perp}$. In particular, $\lambda_1\neq\mu\neq\lambda_2$. The conditions $\mbox{tr\,}(S_1)=0=\mbox{tr\,}(S_2)$ and $|S_1|^2=(n-1)/n=|S_2|^2$ imply that $S_1$ and $S_2$ have the same eigenvalues. Then we must also have $\lambda_1\neq\lambda_2$, for otherwise $S_1$ and $S_2$ would coincide.
Let $X,Y\in\Gamma(\Delta^\perp)$ be an orthonormal frame of eigenvectors of $S_1|_{\Delta^\perp}$ with respect to $g^*$. Then $S_1X=\lambda_1X$, $S_1Y=\lambda_2Y$, $S_2X=b_1X+cY$ and $S_2Y=cX+b_2Y$ for some smooth functions $b_1$, $b_2$ and $c$. Since $\mbox{tr\,}(S_1)=0=\mbox{tr\,}(S_2)$ and $|S_1|^{*2}=(n-1)/n=|S_2|^{*2}$, we have \begin{align} \lambda_1+\lambda_2+(n-2)\mu&=0\label{trace1},\\ \lambda_1^2+\lambda_2^2+(n-2)\mu^2&=\frac{n-1}{n},\label{norm1}\\ b_1+b_2+(n-2)\mu&=0\label{trace2},\\ b_1^2+b_2^2+2c^2+(n-2)\mu^2&=\frac{n-1}{n}.\label{norm2} \end{align}
Thus the first assertion in the statement will be proved once we show that $\mu$ vanishes identically. The last assertion will then be an immediate consequence of \eqref{codconf}.
The umbilicity of $\Delta$, together with \eqref{codconf} evaluated in orthonormal sections $T$ and $S$ of $\Delta$ with respect to $g^*$, implies that $\omega_1(T)=T(\mu)=\omega_2(T)$, where $\omega_i$ is the Moebius form of $f_i$, $i=1,2$. Taking the derivative of \eqref{trace1} and \eqref{norm1} with respect to $T\in\Gamma(\Delta)$, we obtain $$ T(\lambda_1)=\frac{(n-2)(\mu-\lambda_2)}{\lambda_2-\lambda_1}T(\mu)\;\;\mbox{and}\;\; T(\lambda_2)=\frac{(n-2)(\lambda_1-\mu)}{\lambda_2-\lambda_1}T(\mu). $$
The $X$ and $Y$ components of \eqref{codconf} for $S_1$ evaluated in $X$ and $T\in\Gamma(\Delta)$ give, respectively, \begin{equation} \label{cod1xtx} (\mu-\lambda_1){\langle}\nabla^*_XT,X{\rangle}^*=T(\lambda_1)-T(\mu)=-\frac{n\lambda_2}{\lambda_2-\lambda_1}T(\mu) \end{equation} and \begin{equation} \label{cod1xty} (\mu-\lambda_2){\langle}\nabla^*_XT,Y{\rangle}^*=(\lambda_1-\lambda_2){\langle}\nabla^*_TX,Y{\rangle}^*. \end{equation} Similarly, the $X$ and $Y$ components of \eqref{codconf} for $S_1$ evaluated in $Y$ and $T$ give, respectively, \begin{equation} \label{cod1ytx} (\mu-\lambda_1){\langle}\nabla^*_YT,X{\rangle}^*=(\lambda_2-\lambda_1){\langle}\nabla^*_TY,X{\rangle}^* \end{equation} and \begin{equation} \label{cod1yty} (\mu-\lambda_2){\langle}\nabla^*_YT,Y{\rangle}^*=T(\lambda_2)-T(\mu)=\frac{n\lambda_1}{\lambda_2-\lambda_1}T(\mu). \end{equation}
We claim that $S_1$ and $S_2$ do not commute, that is, that $c\neq 0$. Assume otherwise. Then Eqs. \eqref{trace1} to \eqref{norm2} imply that $S_2X=\lambda_2X$ and $S_2Y=\lambda_1Y$. Hence, the $X$ and $Y$ components of \eqref{codconf} for $S_2$ evaluated in $X$ and $T\in\Gamma(\Delta)$ give, respectively, \begin{equation} \label{cod2xtx} (\mu-\lambda_2){\langle}\nabla^*_XT,X{\rangle}^*=T(\lambda_2)-T(\mu) \end{equation} and \begin{equation} \label{cod2xty} (\mu-\lambda_1){\langle}\nabla^*_XT,Y{\rangle}^*=(\lambda_2-\lambda_1){\langle}\nabla^*_TX,Y{\rangle}^*. \end{equation} Similarly, the $X$ and $Y$ components of \eqref{codconf} for $S_2$ evaluated in $Y$ and $T$ give, respectively, \begin{equation} \label{cod2ytx} (\mu-\lambda_2){\langle}\nabla^*_YT,X{\rangle}^*=(\lambda_1-\lambda_2){\langle}\nabla^*_TY,X{\rangle}^* \end{equation} and \begin{equation} \label{cod2yty} (\mu-\lambda_1){\langle}\nabla^*_YT,Y{\rangle}^*=T(\lambda_1)-T(\mu). \end{equation} Adding \eqref{cod1xty} and \eqref{cod2xty} yields $$ (2\mu-\lambda_1-\lambda_2){\langle}\nabla^*_XT,Y{\rangle}^*=0. $$ Similarly, Eqs. \eqref{cod1ytx} and \eqref{cod2ytx} give $$ (2\mu-\lambda_1-\lambda_2){\langle}\nabla^*_YT,X{\rangle}^*=0. $$ If $(2\mu-\lambda_1-\lambda_2)$ does not vanish identically, there exists an open subset $U\subset M^n$ where ${\langle}\nabla^*_XT,Y{\rangle}^*=0={\langle}\nabla^*_YT,X{\rangle}^*$. Now, from \eqref{cod1xtx} and \eqref{cod2xtx} we obtain $$ (\lambda_2-\lambda_1){\langle}\nabla^*_XT,X{\rangle}^*=T(\lambda_1-\lambda_2). $$ Similarly, using \eqref{cod1yty} and \eqref{cod2yty} we have $$ (\lambda_1-\lambda_2){\langle}\nabla^*_YT,Y{\rangle}^*=T(\lambda_2-\lambda_1). $$
The preceding equations imply that the splitting tensor $C^*$ of $\Delta$ with respect to the Moebius metric satisfies $C^*_T\in\mbox{span}\{I\}$ for any $T\in\Gamma(\Delta|_U)$. From the relation between the Levi-Civita connections of conformal metrics we obtain \begin{equation} \label{confmetrics} C^*_T=C_T -T(\log\phi)\,I,
\end{equation} where $\phi$ is the conformal factor of $g^*$ with respect to the metric induced by $f_1$ and $C$ is the splitting tensor of $\Delta$ corresponding to the latter metric. Therefore, we also have $C_T\in\mbox{span}\{I\}$ for any $T\in\Gamma(\Delta|_U)$, and hence $f_1|_U$ is conformally surface-like by Lemma \ref{le:split}, a contradiction. Thus $(2\mu-\lambda_1-\lambda_2)$ must vanish everywhere, which, together with \eqref{trace1}, implies that also $\mu$ is everywhere vanishing. Hence $\lambda_1=-\lambda_2$, and therefore $S_1=-S_2$, which is again a contradiction, and proves the claim.
Now we compute \begin{align*}
{\langle}(\nabla^*_TS_2)X,X{\rangle}^*&={\langle}\nabla^*_T(b_1X+cY),X{\rangle}^*-\<S_2\nabla^*_TX,X{\rangle}^*\\
&=T(b_1)+c{\langle}\nabla^*_TY,X{\rangle}^*-c{\langle}\nabla^*_TX,Y{\rangle}^*\\
&=T(b_1)+2c{\langle}\nabla^*_TY,X{\rangle}^*. \end{align*} In a similar way, \begin{align*}
{\langle}(\nabla^*_TS_2)Y,Y{\rangle}^*&=T(b_2)+2c{\langle}\nabla^*_TX,Y{\rangle}^*. \end{align*} Adding the preceding equations and using \eqref{trace2} yield \begin{equation} \label{derttr2} {\langle}(\nabla^*_TS_2)X,X{\rangle}^*+{\langle}(\nabla^*_TS_2)Y,Y{\rangle}^*=(2-n)T(\mu). \end{equation} From \eqref{codconf} we obtain \begin{align*}
{\langle}(\nabla^*_TS_2)X,X{\rangle}^*&={\langle}(\nabla^*_XS_2)T,X{\rangle}^*+T(\mu)\\
&=\mu{\langle}\nabla^*_XT,X{\rangle}^*-{\langle}\nabla^*_XT,S_2X{\rangle}^*+T(\mu)\\
&=(\mu-b_1){\langle}\nabla^*_XT,X{\rangle}^*-c{\langle}\nabla^*_XT,Y{\rangle}^*+T(\mu), \end{align*} and similarly, $$ {\langle}(\nabla^*_TS_2)Y,Y{\rangle}^*=(\mu-b_2){\langle}\nabla^*_YT,Y{\rangle}^*-c{\langle}\nabla^*_YT,X{\rangle}^*+T(\mu). $$ Substituting the preceding expressions in \eqref{derttr2} gives \begin{equation} \label{derttr22}
nT(\mu)+(\mu-b_1){\langle}\nabla^*_XT,X{\rangle}^*+(\mu-b_2){\langle}\nabla^*_YT,Y{\rangle}^*
=c{\langle}\nabla^*_XT,Y{\rangle}^*+c{\langle}\nabla^*_YT,X{\rangle}^*. \end{equation} Let us first focus on the terms on the left-hand side of the above equation. Using \eqref{cod1xtx} and \eqref{cod1yty} we obtain \begin{align*}
nT(\mu)&+(\mu-b_1){\langle}\nabla^*_XT,X{\rangle}^*+(\mu-b_2){\langle}\nabla^*_YT,Y{\rangle}^*\\
&=nT(\mu)-\frac{n\lambda_2(\mu-b_1)}{(\mu-\lambda_1)(\lambda_2-\lambda_1)}T(\mu)
+\frac{n\lambda_1(\mu-b_2)}{(\mu-\lambda_2)(\lambda_2-\lambda_1)}T(\mu)\\ &=\frac{(n-1)(\lambda_1-b_1)}{(\mu-\lambda_2)(\lambda_2-\lambda_1)}T(\mu).
\end{align*} For the right-hand side of \eqref{derttr22}, using \eqref{cod1xty} and \eqref{cod1ytx} we have \begin{align*}
c({\langle}\nabla^*_XT,Y{\rangle}^*&+{\langle}\nabla^*_YT,X{\rangle}^*)=c\left(\frac{\lambda_1-\lambda_2}{\mu-\lambda_2}{\langle}\nabla^*_TX,Y{\rangle}^*+\frac{\lambda_2-\lambda_1}{\mu-\lambda_1}{\langle}\nabla^*_TY,X{\rangle}^*\right)\\
=&c\frac{(\lambda_1-\lambda_2)(\mu-\lambda_1+\mu-\lambda_2)}{(\mu-\lambda_1)(\mu-\lambda_2)}{\langle}\nabla^*_TX,Y{\rangle}^*\\
=&c\frac{n\mu(\lambda_1-\lambda_2)}{(\mu-\lambda_1)(\mu-\lambda_2)}{\langle}\nabla^*_TX,Y{\rangle}^*. \end{align*} Therefore \eqref{derttr2} becomes \begin{equation} \label{A} (n-1)(b_1-\lambda_1)T(\mu)=nc\mu(\lambda_1-\lambda_2)^2{\langle}\nabla^*_TX,Y{\rangle}^*. \end{equation} Now evaluate \eqref{codconf} for $S_2$ in $X$ and $T$. More specifically, the $Y$ component of that equation is $$
T(c)=(\mu-b_2){\langle}\nabla^*_XT,Y{\rangle}^*-c{\langle}\nabla^*_XT,X{\rangle}^*+(b_2-b_1){\langle}\nabla^*_TX,Y{\rangle}^*. $$ Substituting \eqref{cod1xtx} and \eqref{cod1xty} in the above equation, and using \eqref{trace1} and \eqref{trace2}, we obtain \begin{align}\label{tc1}
T(c)=&\frac{(\mu-b_2)(\lambda_1-\lambda_2)}{\mu-\lambda_2}{\langle}\nabla^*_TX,Y{\rangle}^*+\frac{cn\lambda_2}{(\mu-\lambda_1)(\lambda_2-\lambda_1)}T(\mu)\nonumber\\
&+(b_2-b_1){\langle}\nabla^*_TX,Y{\rangle}^*\nonumber\\
=&\frac{\mu\lambda_1-\mu\lambda_2-b_2\lambda_1+b_2\lambda_2+\mu b_2-\mu b_1-\lambda_2b_2+b_1\lambda_2}{\mu-\lambda_2}{\langle}\nabla^*_TX,Y{\rangle}^*\nonumber\\
&+\frac{cn\lambda_2}{(\mu-\lambda_1)(\lambda_2-\lambda_1)}T(\mu)\nonumber\\
=&\frac{n\mu(\lambda_1-b_1)}{\mu-\lambda_2}{\langle}\nabla^*_TX,Y{\rangle}^*
+\frac{cn\lambda_2}{(\mu-\lambda_1)(\lambda_2-\lambda_1)}T(\mu). \end{align} Similarly, the $X$ component of \eqref{codconf} for $S_2$ evaluated in $Y$ and $T$ gives $$ T(c)=(\mu-b_1){\langle}\nabla^*_YT,X{\rangle}^*-c{\langle}\nabla^*_YT,Y{\rangle}^*+(b_2-b_1){\langle}\nabla^*_TX,Y{\rangle}^*. $$ Substituting \eqref{cod1ytx} and \eqref{cod1yty} in the above equation we obtain \begin{equation} \label{tc2} T(c)=-\frac{cn\lambda_1}{(\mu-\lambda_2)(\lambda_2-\lambda_1)}T(\mu)+\frac{n\mu(\lambda_1-b_1)}{\mu-\lambda_1}{\langle}\nabla^*_TX,Y{\rangle}^*. \end{equation} Using \eqref{norm1}, it follows from \eqref{tc1} and \eqref{tc2} that \begin{equation} \label{B} (n-1)cT(\mu)=n\mu(\lambda_1-b_1)(\lambda_1-\lambda_2)^2{\langle}\nabla^*_TX,Y{\rangle}^*. \end{equation} Comparing \eqref{A} and \eqref{B} yields $$ \mu((\lambda_1-b_1)^2+c^2){\langle}\nabla^*_TX,Y{\rangle}^*=0. $$ Since $(\lambda_1-b_1)^2+c^2\neq 0$, for otherwise $f_1$ and $f_2$ would be Moebius congruent, it follows that $ \mu{\langle}\nabla^*_TX,Y{\rangle}^*=0. $
If $\mu$ does not vanish identically, then there is an open subset $U$ where ${\langle}\nabla^*_TX,Y{\rangle}^*=0$ for any $T\in\Gamma(\Delta)$. Then \eqref{cod1xty} and \eqref{cod1ytx} imply that the splitting tensor of $\Delta$ with respect to the Moebius metric satisfies $C^*_T\in\mbox{span}\{I\}$ for any $T\in\Gamma(\Delta)$. As before, this implies that the splitting tensor of $\Delta$ with respect to the metric induced by $f_1$ also satisfies $C_T\in\mbox{span}\{I\}$ for any $T\in\Gamma(\Delta)$, and hence $f_1|_U$ is conformally surface-like by Lemma \ref{le:split}, a contradiction. Thus $\mu$ must vanish identically. \qed
\subsection{Proof of Theorem \ref{thm:crux}}
In this subsection we prove Theorem \ref{thm:crux}. First we recall one further definition.
Let $f\colon M^n\to\mathbb{R}^{n+1}$, $n\geq 3$, be a hypersurface that carries a principal curvature of constant multiplicity $n-2$ with corresponding eigenbundle $\Delta$. Let $C\colon\Gamma(\Delta)\to\Gamma(\mbox{End}(\Delta^\perp))$ be the splitting tensor of $\Delta$. Then $f$ is said to be \emph{hyperbolic} (respectively, \emph{parabolic} or \emph{elliptic}) if there exists $J\in\Gamma(\mbox{End}(\Delta^\perp))$ satisfying the following conditions: \begin{itemize} \item[(i)] $J^2=I$ and $J\neq I$ (respectively, $J^2=0$, with $J\neq 0$, or $J^2=-I$), \item[(ii)] $\nabla^h_T J=0$ for all $T\in\Gamma(\Delta)$, \item[(iii)] $C(\Gamma(\Delta))\subset \mbox{span}\{I,J\}$, but $C(\Gamma(\Delta))\not\subset \mbox{span}\{I\}$. \end{itemize}
\noindent \emph{Proof of Theorem \ref{thm:crux}:} Let $f_2\colon M^n\to\mathbb{R}^{n+1}$ be a Moebius deformation of $f_1:=f$. By Proposition \ref{kermoeb}, the Moebius shape operators $S_1$ and $S_2$ of $f_1$ and $f_2$, respectively, share a common kernel $\Delta$ of dimension $n-2$. Let $S_i$, $i=1,2$, denote also the restriction $S_i|_{\Delta^\perp}$ and define $D\in\Gamma(\mbox{End}(\Delta^\perp))$ by $$ D=S_1^{-1}S_2. $$
It follows from Proposition \ref{kermoeb} that $\det D=1$ at any point of $M^n$, while Proposition \ref{congr} implies that $D$ cannot be a multiple of the identity endomorphism on any open subset $U\subset M^n$, for otherwise $S_2=\pm S_1$ on $U$, and hence $f_1|_U$ and $f_2|_U$ would be Moebius congruent. Therefore, we can write $D=aI+bJ$, where $b$ does not vanish on any open subset of $M^n$ and $J\in\Gamma(\mbox{End}(\Delta^\perp))$ satisfies $J^2=\epsilon I$, with $\epsilon\in \{1,0,-1\}$, $J\neq I$ if $\epsilon=1$ and $J\neq 0$ if $\epsilon=0$.
From the symmetry of $S_2$ and the fact that $b$ does not vanish on any open subset of $M^n$, we see that $S_1J$ must be symmetric. Moreover, given that $\mbox{tr\,} S_1=0=\mbox{tr\,} S_2$, also $\mbox{tr\,} S_1J=0$.
Assume first that $J^2=0$. Let $X,Y\in\Gamma(\Delta^\perp)$ be orthogonal vector fields, with $Y$ of unit length (with respect to the Moebius metric $g^*$), such that $JX=Y$ and $JY=0$. Replacing $J$ by $|X|^*J$, if necessary, we can assume that also $X$ has unit length. Let $\alpha, \beta, \gamma\in C^{\infty}(M)$ be such that $S_1X=\alpha X+\beta Y$ and $S_1Y=\beta X+\gamma Y$, so that $S_1JX=\beta X+\gamma Y$ and $S_1JY=0$. From the symmetry of $S_1J$ and the fact that $\mbox{tr\,} S_1J=0$ we obtain $\beta=0=\gamma$, and hence $\alpha=\mbox{tr\,} S_1=0$. Thus $S_1=0$, which is a contradiction.
Now assume that $J^2=I$, $J\neq I$. Let $X,Y$ be a frame of unit vector fields (with respect to $g^*$) satisfying $JX=X$ and $JY=-Y$. Write $S_1X=\alpha X+\beta Y$ and $S_1 Y=\gamma X+\delta Y$ for some $\alpha, \beta, \gamma, \delta\in C^{\infty}(M)$, so that $S_1JX=\alpha X+\beta Y$ and $S_1JY=-\gamma X-\delta Y$. Since $\mbox{tr\,} S_1J=0=\mbox{tr\,} S_1$, then $\alpha=0=\delta$. The symmetry of $S_1$ and $S_1J$ implies that $\beta=0=\gamma$, hence $S_1=0$, which is again a contradiction.
Therefore, the only possible case is that $J^2=-I$. Let $X,Y\in\Gamma(\Delta^\perp)$ be a frame of unit vector fields such that $JX=Y$ and $JY=-X$. Write as before $S_1X=\alpha X+\beta Y$ and $S_1 Y=\gamma X+\delta Y$ for some $\alpha, \beta, \gamma, \delta\in C^{\infty}(M)$. Then
$S_1JX=\gamma X+\delta Y$ and $S_1JY=-\alpha X-\beta Y$, hence
$\beta=\gamma$, for $\mbox{tr\,} S_1J=0$. From the symmetry of $S_1$ we obtain $$ \<S_1JX,Y{\rangle}^*=\<JX,S_1Y{\rangle}^*=\<Y,S_1Y{\rangle}^*=\gamma\<X,Y{\rangle}^*+\delta=\beta\<X,Y{\rangle}^*+\delta, $$ and similarly, $$ \<S_1JY,X{\rangle}^*=-\alpha-\beta\<X,Y{\rangle}^*. $$ Comparing the two preceding equations, taking into account the symmetry of $S_1J$ and the fact that $\mbox{tr\,} S_1=0$, yields $ \beta\<X,Y{\rangle}^*=0. $ If $\beta$ is nonzero, then $X$ and $Y$ are orthogonal to each other. This is also the case otherwise, for if $\beta$, hence also $\gamma$, is zero, then $X$ and $Y$ are eigenvectors of $S_1$. Thus, in any case, we conclude that the tensor $J$ acts as a rotation of angle $\pi/2$ on $\Delta^\perp$.
Eq. \eqref{codconf} and the fact that $\omega_i|_\Delta=0$ imply that the splitting tensor of $\Delta$ with respect to the Moebius metric satisfies
$$ \nabla_T^{*h}S_i=S_iC^*_T $$
for all $T\in\Gamma(\Delta)$ and $1\leq i\leq 2$, where $$(\nabla_T^{*h}S_i)X=\nabla_T^{*h}S_iX-S_i\nabla_T^{*h}X$$ for all $X\in\Gamma(\Delta^\perp)$ and $T\in\Gamma(\Delta)$. Here $ \nabla^{*h}_T X=(\nabla^*_TX)_{\Delta^\perp}. $ In particular, $S_iC^*_T={C^*_T}^tS_i$, $1\leq i\leq 2$. Therefore $$ S_1DC^*_T=S_2C^*_T={C^*_T}^tS_2={C^*_T}^tS_1D=S_1C^*_TD, $$ and hence $$ [D,C^*_T]=0. $$
This implies that $C^*_T$ commutes with $J$, and hence $C^*_T\in\mbox{span}\{I,J\}$ for any $T\in\Gamma(\Delta)$. It follows from \eqref{confmetrics} that also the splitting tensor $C$ of $\Delta$ corresponding to the metric induced by $f$ satisfies $C_T\in\mbox{span}\{I,J\}$ for any $T\in\Gamma(\Delta)$. Moreover, by Lemma \ref{le:split} and the assumption that $f$ is not surface-like on any open subset, we see that $C(\Gamma(\Delta))\not\subset \mbox{span}\{I\}$ on any open subset. Now, since $J$ acts as a rotation of angle $\pi/2$ on $\Delta^\perp$, then $\nabla_T^hJ=0$. We conclude that $f$ is elliptic with respect to $J$.
By Proposition \ref{kermoeb}, the central sphere congruence $S\colon M^n\to \mathbb{S}_{1,1}^{n+2}$ of $f$ is a two-parameter congruence of hyperspheres, which therefore gives rise to a surface $s\colon L^2\to\mathbb{S}_{1,1}^{n+2}$ such that $s=S\circ \pi$, where $\pi\colon M^n\to L^2$ is the (local) quotient map onto the space of leaves of $\Delta$. Since $\nabla_T^hJ=0=[C_T,J]$ for any $T\in\Gamma(\Delta)$, it follows from Corollary $11.7$ in \cite{DT} that $J$ is projectable with respect to $\pi$, that is, there exists $\bar{J} \in \mbox{End}(TL)$ such that $\bar{J}\circ \pi_*=\pi_*\circ J$. In particular, the fact that $J^2=-I$ implies that $\bar{J}^2=-I$, where we denote also by $I$ the identity endomorphism of $TL$.
Now observe that, since $f_2$ shares with $f_1$ the same Moebius metric, its induced metric is conformal to the metric induced by $f_1$. Moreover, $f_2$ is not Moebius congruent to $f_1$ on any open subset of $M^n$ and $f_1$ has a principal curvature of constant multiplicity $(n-2)$. Thus $f_1$ is a so-called \emph{Cartan hypersurface}. By the proof of the classification of Cartan hypersurfaces given in Chapter~17 of \cite{DT} (see Lemma $17.4$ therein), the surface $s$ is \emph{elliptic} with respect to $\bar{J}$, that is, for all $\bar{X},\bar{Y}\in \mathfrak{X}(L)$ we have \begin{equation} \label{sffsur} \alpha^s(\bar{J}\bar{X},\bar{Y})=\alpha^s(\bar{X},\bar{J}\bar{Y}). \end{equation}
We claim that $\bar{J}$ is an orthogonal tensor, that is, it acts as a rotation of angle $\pi/2$ on each tangent space of $L^2$. The minimality of $s$ will then follow from this, the fact that $\bar{J}^2=-I$ and \eqref{sffsur}.
In order to show the orthogonality of $\bar{J}$, we use that the metric ${\langle}\cdot,\cdot{\rangle}'$ on $L^2$ induced by $s$ is related to the metric of $M^n$ by \begin{equation} \label{relmetr} {\langle}\bar{Z},\bar{W}{\rangle}'={\langle}(A-\lambda I)Z, (A-\lambda I)W{\rangle} \end{equation} for all $\bar{Z},\bar{W}\in\mathfrak{X}(L)$, where $A$ is the shape operator of $f$, $\lambda$ is the principal curvature of $f$ having $\Delta$ as its eigenbundle, which coincides with the mean curvature $H$ of $f$ by Proposition~\ref{kermoeb}, and $Z$, $W$ are the horizontal lifts of $\bar{Z}$ and $\bar{W}$, respectively. Notice that $(A-\lambda I)$ is a multiple of $S_1$. Since $S_1J$ is symmetric, then also $(A-\lambda I)J$ is symmetric. Therefore, given any $\bar{X}\in\mathfrak{X}(L)$ and denoting by $X\in\Gamma(\Delta^\perp)$ its horizontal lift, we have \begin{align*}
{\langle}\bar{X},\bar{J}\bar{X}{\rangle}'&={\langle}(A-\lambda I)X, (A-\lambda I)JX{\rangle}\\
&={\langle}(A-\lambda I)J(A-\lambda I)X,X{\rangle}\\
&=\<J(A-\lambda I)X,(A-\lambda I)X{\rangle}\\
&=0, \end{align*} where in the last step we have used that $J$ acts as a rotation of angle $\pi/2$ on $\Delta^\perp$. Using again the symmetry of $(A-\lambda I)J$, the proof of the orthogonality of $\bar{J}$ is completed by noticing that \begin{align*}
{\langle}\bar{J}\bar{X},\bar{J}\bar{X}{\rangle}'&={\langle}(A-\lambda I)JX,(A-\lambda I)JX{\rangle}\\
&=\<J(A-\lambda I)JX,(A-\lambda I)X{\rangle}\\
&=\<JJ^t(A-\lambda I)X,(A-\lambda I)X{\rangle}\\
&=-\<J^2(A-\lambda I)X,(A-\lambda I)X{\rangle}\\
&={\langle}\bar{X},\bar{X}{\rangle}'. \end{align*}
Conversely, assume that the central sphere congruence of $f\colon M^n\to\mathbb{R}^{n+1}$, with $M^n$ simply connected, is determined by a space-like minimal surface $s\colon L^2\to \mathbb{S}_{1,1}^{n+2}$. Let $\bar{J}\in \Gamma(\mbox{End}(TL))$ represent a rotation of angle $\pi/2$ on each tangent space. Then ${\bar J}^2=-I$ and the second fundamental form of $s$ satisfies \eqref{sffsur} by the minimality of $s$. In particular, $s$ is elliptic with respect to $\bar{J}$. By Lemma $17.4$ in \cite{DT}, the hypersurface $f$ is elliptic with respect to the lift $J\in\Gamma(\mbox{End} (\Delta^\perp))$ of $\bar{J}$, where $\Delta$ is the eigenbundle corresponding to the principal curvature $\lambda$ of $f$ with multiplicity $n-2$, which coincides with its mean curvature. Therefore, the splitting tensor of $\Delta$ satisfies $C_T\in\mbox{span}\{I,J\}$ for any $T\in\Gamma(\Delta)$. The Codazzi equation implies that $(A-\lambda I)C_T$ is symmetric for any $T\in \Gamma(\Delta)$, and $C(\Gamma(\Delta))\not \subset \mbox{span}\{I\}$ on any open subset, for $f$ is not conformally surface-like on any open subset; hence $(A-\lambda I)J$ is also symmetric.
By Theorem $17.5$ in \cite{DT}, the set of conformal deformations of $f$ is in one-to-one correspondence with the set of tensors $\bar{D}\in \Gamma(\mbox{End}(TL))$ with $\det \bar{D}=1$ that satisfy the Codazzi equation
$$\left(\nabla'_{\bar{X}}\bar{D}\right)\bar{Y} -\left(\nabla'_{\bar{Y}}\bar{D}\right){\bar{X}}=0$$ for all $\bar{X},\bar{Y}\in\mathfrak{X}(L)$, where $\nabla'$ is the Levi-Civita connection of the metric induced by $s$. For a general elliptic hypersurface, this set either consists of a one-parameter family (continuous class) or of a single element (discrete class; see Section $11.2$ and Exercise $11.3$ in \cite{DT}). The surface $s$ is then said to be of the complex type of first or second species, respectively. For a minimal surface $s\colon L^2\to \mathbb{S}_{1,1}^{n+2}$, each tensor $\bar{J}_\theta=\cos\theta I+\sin \theta \bar{J}$, $\theta\in [0,2\pi)$, satisfies both the condition $\det \bar{J}_\theta=1$ and the Codazzi equation, since it is a parallel tensor on $L^2$. Thus $\{\bar{J}_\theta\}_{\theta\in [0,2\pi)}$ is \emph{the} one-parameter family of tensors on $L^2$ having determinant one and satisfying the Codazzi equation. In particular, the surface $s$ is of the complex type of first species. Therefore, the hypersurface $f$ admits a one-parameter family of conformal deformations, each of which is determined by one of the tensors $\bar{J}_\theta\in \mbox{End}(TL)$, $\theta\in [0,2\pi)$. The proof of Theorem \ref{thm:crux} will be completed once we prove that each such conformal deformation shares with $f$ the same Moebius metric.
Let $f_\theta\colon M^n\to\mathbb{R}^{n+1}$ be the conformal deformation of $f$ determined by $\bar{J}_\theta$. Let $F_\theta\colon M^n\to \mathbb{V}^{n+2}$ be the \emph{isometric light-cone representative} of $f_\theta$, that is, $F_\theta$ is the isometric immersion of $M^n$ into the light-cone $\mathbb{V}^{n+2}\subset \mathbb{L}^{n+3}$ given by $F_\theta=\varphi_\theta^{-1}(\Psi\circ f_\theta)$, where $\varphi_\theta$ is the conformal factor of the metric ${\langle}\cdot,\cdot{\rangle}_\theta$ induced by $f_\theta$ with respect to the metric ${\langle}\cdot,\cdot{\rangle}$ of $M^n$, that is, ${\langle}\cdot,\cdot{\rangle}_\theta=\varphi_\theta^2{\langle}\cdot,\cdot{\rangle}$, and $\Psi\colon \mathbb{R}^n\to \mathbb{V}^{n+2}$ is the isometric embedding of $\mathbb{R}^n$ into $\mathbb{V}^{n+2}$ given by~\eqref{eq:Psi}. As shown in the proof of Lemma $17.2$ in \cite{DT}, as part of the proof of the classification of Cartan hypersurfaces of dimension $n\geq 5$ given in Chapter~17 therein, the second fundamental form of $F_\theta$ is given by \begin{equation} \label{sffF} \alpha^{F_\theta}(X,Y)=\<AX,Y{\rangle}\mu-{\langle}(A-\lambda I)X,Y{\rangle}\zeta+{\langle}(A-\lambda I)J_\theta X,Y{\rangle}\bar{\zeta}
\end{equation} for all $X,Y\in\mathfrak{X}(M)$, where $\{\mu,\zeta,\bar{\zeta}\}$ is an orthonormal frame of the normal bundle of $F_\theta$ in $\mathbb{L}^{n+3}$ with $\mu$ space-like, $\lambda=-{\langle}\mu, F_\theta{\rangle}^{-1}$ and $\zeta=\lambda F_\theta+\mu$ (hence ${\langle}\zeta,\zeta{\rangle}=-1$). Here $J_\theta$ is the horizontal lift of $\bar{J}_\theta$, which has been extended to $TM$ by setting ${J_\theta}|_\Delta=I$.
Let $\bar{X},\bar{Y}\in\mathfrak{X}(L)$ be an orthonormal frame such that $\bar{J}\bar{X}=\bar{Y}$ and $\bar{J}\bar{Y}=-\bar{X}$, and let $X,Y\in\Gamma(\Delta^\perp)$ be the respective horizontal lifts. It follows from \eqref{relmetr} that $\{(A-\lambda I)X,(A-\lambda I)Y\}$ is an orthonormal frame of $\Delta^\perp$. From the symmetry of $(A-\lambda I)J$ and $(A-\lambda I)$ we have \begin{align*} \<J(A-\lambda I)X,(A-\lambda I)X{\rangle}&={\langle}(A-\lambda I)J(A-\lambda I)X,X{\rangle}\\ &={\langle}(A-\lambda I)X,(A-\lambda I)J X{\rangle}\\ &={\langle}\bar{X},\bar{J}\bar{X}{\rangle}'\\ &=0. \end{align*} In a similar way one verifies that the relations $\<J(A-\lambda I)Y,(A-\lambda I)Y{\rangle}=0$, $\<J(A-\lambda I)X,(A-\lambda I)Y{\rangle}=-1$ and $\<J(A-\lambda I)Y,(A-\lambda I)X{\rangle}=1$ hold. Thus $J$ acts on $\Delta^\perp$ as a rotation of angle $\pi/2$. The symmetry of both $(A-\lambda I)J$ and $(A-\lambda I)$ implies that $\mbox{tr\,}(A-\lambda I)=0=\mbox{tr\,} (A-\lambda I)J$, hence \begin{equation} \label{traces} \mbox{tr\,} (A-\lambda I)J_\theta=0 \end{equation} for all $\theta\in [0,2\pi)$.
Now we use the relation between the second fundamental forms of $f_\theta$ and $F_\theta$, given by Eq. $9.32$ in \cite{DT}, namely, \begin{equation} \label{sffF2} \alpha^{F_\theta}(X,Y)=\varphi_\theta{\langle}(A_\theta-H_\theta I)X,Y{\rangle}\tilde{N}-\psi(X,Y)F_\theta-\<X,Y{\rangle}\zeta_2, \end{equation} where ${\langle}\cdot,\cdot{\rangle}_\theta=\varphi_\theta^2{\langle}\cdot,\cdot{\rangle}$ is the metric induced by $f_\theta$, $A_\theta$ and $H_\theta$ are its shape operator and mean curvature, respectively, $\psi$ is a certain symmetric bilinear form,
$\tilde{N}\in \Gamma(N_FM)$, with ${\langle}\tilde{N}, F_\theta{\rangle}=0$, is a unit space-like vector field,
and $\zeta_2\in \Gamma(N_FM)$ satisfies ${\langle}\tilde{N},\zeta_2{\rangle}=0={\langle}\zeta_2,\zeta_2{\rangle}$ and $\<F_\theta,\zeta_2{\rangle}=1$. Eqs. \eqref{sffF} and \eqref{sffF2} give \begin{align*} {\langle}(A-\lambda I)J_\theta X,Y{\rangle}&={\langle}\alpha^{F_\theta}(X,Y),\bar{\zeta}{\rangle}\\ &=\varphi_\theta{\langle}(A_\theta-H_\theta I)X,Y{\rangle}{\langle}\tilde{N},\bar{\zeta}{\rangle}-\<X,Y{\rangle}{\langle}\zeta_2,\bar{\zeta}{\rangle} \end{align*} for all $X,Y\in\mathfrak{X}(M)$, or equivalently, \begin{equation}\label{eq:imp} (A-\lambda I)J_\theta=\varphi_\theta{\langle}\tilde{N},\bar{\zeta}{\rangle}(A_\theta-H_\theta I)-{\langle}\zeta_2,\bar{\zeta}{\rangle}I. \end{equation} Using that $\mbox{tr\,} (A-\lambda I)J_\theta=0=\mbox{tr\,} (A_\theta- H_\theta I)$, we obtain from the preceding equation that ${\langle}\zeta_2,\bar{\zeta}{\rangle}=0$. Thus $\bar{\zeta}\in \mbox{span}\{F_\theta,\zeta_2\}^\perp$, and hence $\bar{\zeta}=\pm\tilde{N}$. Therefore, \eqref{eq:imp} reduces to \begin{equation} \label{finalkey} (A-\lambda I)J_\theta=\pm\varphi_\theta(A_\theta-H_\theta I).
\end{equation} In particular, $(A_\theta-H_\theta I)|_\Delta=0$, hence also $S_\theta|_\Delta=0$, where $S_\theta=\phi_\theta^{-1}(A_\theta-H_\theta I)$ is the Moebius shape operator of $f_\theta$, with $\phi_\theta$ given by \eqref{phi} for $f_\theta$. Since the Moebius shape operator of an umbilic-free immersion is traceless and has constant norm $\sqrt{(n-1)/n}$, then $S_\theta$ must have constant eigenvalues $\sqrt{(n-1)/2n}$, $-\sqrt{(n-1)/2n}$ and $0$. The same holds for the Moebius second fundamental form $S_1$ of $f$, which has also $\Delta$ as its kernel. We conclude that the eigenvalues of $(A_\theta-H_\theta I)|_{\Delta^\perp}$ are $$ \delta_1=\phi_\theta\sqrt{(n-1)/2n}\;\;\mbox{and}\;\;\delta_2=-\phi_\theta\sqrt{(n-1)/2n} $$
and, similarly, the eigenvalues of $(A-\lambda I)|_{\Delta^\perp}$ are $$ \lambda_1=\phi_1\sqrt{(n-1)/2n}\;\;\mbox{and}\;\;\lambda_2=-\phi_1\sqrt{(n-1)/2n}, $$ where $\phi_1$ is given by \eqref{phi} with respect to $f$. On the other hand, since $\det((A-\lambda I)J_\theta)=\det(A-\lambda I)$, for $\det J_\theta=1$, and both $(A-\lambda I)$ and $(A-\lambda I)J_\theta$ are traceless (see \eqref{traces}), it follows that $(A-\lambda I)$ and $(A-\lambda I)J_\theta$ have the same eigenvalues. This and \eqref{finalkey} imply that
$$ \phi_1^2=\varphi_\theta^2\phi_\theta^2, $$
hence the Moebius metric $\phi_\theta^2{\langle}\cdot,\cdot{\rangle}_\theta=\phi_\theta^2\varphi_\theta^2{\langle}\cdot,\cdot{\rangle}=\phi_1^2{\langle}\cdot,\cdot{\rangle}$ of $f_\theta$ coincides with that of $f$.
\vspace*{5ex}
\noindent Universidade de S\~ao Paulo\\ Instituto de Ci\^encias Matem\'aticas e de Computa\c c\~ao.\\ Av. Trabalhador S\~ao Carlense 400\\ 13560-970 -- S\~ao Carlos\\ BRAZIL\\ \texttt{[email protected]} and \texttt{[email protected]}
\end{document} |
\begin{document}
\title[Generalized quaternion groups with the $m$-DCI property] {Generalized quaternion groups with the $m$-DCI property} \author{Jin-Hua Xie} \address{Jin-Hua Xie, School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China} \email{[email protected]}
\author{Yan-Quan Feng} \address{Yan-Quan Feng, School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China} \email{[email protected]}
\author{Binzhou Xia} \address{Binzhou Xia, School of Mathematics and Statistics, The University of Melbourne, Parkville, VIC 3010, Australia} \email{[email protected]}
\begin{abstract}
A Cayley digraph $\mathrm{Cay}(G,S)$ of a finite group $G$ with respect to a subset $S$ of $G$ is said to be {\em a CI-digraph} if for every Cayley digraph $\mathrm{Cay}(G,T)$ isomorphic to $\mathrm{Cay}(G,S)$, there exists an automorphism $\sigma$ of $G$ such that $S^\sigma=T$. A finite group $G$ is said to have {\em the $m$-DCI property} for some positive integer $m$ if all $m$-valent Cayley digraphs of $G$ are CI-digraphs, and is said to be a DCI-group if $G$ has the $m$-DCI property for all $1\leq m\leq |G|$. Let $\mathrm{Q}_{4n}$ be a generalized quaternion group of order $4n$ with an integer $n\geq 3$, and let $\mathrm{Q}_{4n}$ have the $m$-DCI property for some $1 \leq m\leq 2n-1$. It is shown in this paper that $n$ is odd, and $n$ is not divisible by $p^2$ for any prime $p\leq m-1$. Furthermore, if $n\geq 3$ is a power of a prime $p$, then $\mathrm{Q}_{4n}$ has the $m$-DCI property if and only if $p$ is odd, and either $n=p$ or $1\leq m\leq p$.
\noindent\textit{Key words:}~{Cayley digraph, CI-digraph, $m$-DCI property, Generalized quaternion group.}
\noindent\textit{MSC2020:}~{05C25, 20B25} \end{abstract} \maketitle
\section{Introduction}
Unless otherwise indicated, digraphs and graphs considered in this paper are finite with no parallel edges or loops, and groups are finite. For a digraph $\Gamma$, denote by $V(\Gamma)$, $E(\Gamma)$, $\mathrm{Arc}(\Gamma)$ and $\mathrm{Aut}(\Gamma)$ the vertex set, edge set, arc set, and automorphism group of $\Gamma$, respectively. If for some integer $m$, the in-valency or out-valency of every vertex of $\Gamma$ equals $m$, then we say that the digraph has \emph{in-valency $m$} or \emph{out-valency $m$}, respectively. Moreover, if the in-valency and out-valency of every vertex of a digraph both equal $m$, then we say that the digraph has \emph{valency $m$} or is \emph{$m$-valent}.
Let $G$ be a group and $S$ be a subset of $G$ with $1\notin S$. A digraph with vertex set $G$ and arc set $\{(g,sg)\mid g\in G,\,s\in S\}$ is said to be a \emph{Cayley digraph} of $G$ with respect to $S$, denoted by $\mathrm{Cay}(G,S)$. If $S=S^{-1}$, then both $(u,v)$ and $(v,u)$ are arcs for two adjacent vertices $u$ and $v$ in $\mathrm{Cay}(G,S)$, and $\mathrm{Cay}(G,S)$ is a graph by identifying the two arcs with one edge $\{u,v\}$. Two Cayley digraphs $\mathrm{Cay}(G,S)$ and $\mathrm{Cay}(G,T)$ are said to be \emph{Cayley isomorphic} if $S^{\sigma}=T$ for some $\sigma \in \mathrm{Aut}(G)$, where $\mathrm{Aut}(G)$ is the automorphism group of $G$. Cayley digraphs are isomorphic if they are Cayley isomorphic, but the converse is not true. A subset $S$ of $G$ with $1\notin S$ is said to be a \emph{CI-subset} if $\mathrm{Cay}(G,S)\cong \mathrm{Cay}(G,T)$, for some $T\subseteq G$ with $1\notin T$, implies that they are Cayley isomorphic. In this case, $\mathrm{Cay}(G,S)$ is said to be a {\em CI-digraph}, or a {\em CI-graph} when $S=S^{-1}$. A group $G$ is said to be a DCI-group or a CI-group if all Cayley digraphs or Cayley graphs of $G$ are CI-digraphs or CI-graphs, respectively.
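As a concrete illustration of the definitions above (a toy example of ours, not taken from the literature), one can build a Cayley digraph of the additive cyclic group $\mathbb{Z}_8$ and exhibit a Cayley isomorphism; all function names here are ours.

```python
# Minimal sketch: Cayley digraphs of Z_8 (written additively) and a
# Cayley isomorphism induced by an automorphism of the group.

def cayley_digraph(n, S):
    """Arc set of Cay(Z_n, S): arcs (g, s+g) for g in Z_n, s in S."""
    return {(g, (s + g) % n) for g in range(n) for s in S}

n, S = 8, {1, 2}
arcs = cayley_digraph(n, S)

# Every vertex has out-valency |S|, so Cay(Z_8, S) is 2-valent.
out_val = {v: sum(1 for (u, w) in arcs if u == v) for v in range(n)}
assert all(d == len(S) for d in out_val.values())

# sigma: x -> 3x is an automorphism of Z_8 (3 is a unit mod 8); it carries
# Cay(Z_8, S) onto Cay(Z_8, S^sigma), exhibiting a Cayley isomorphism.
T = {(3 * s) % n for s in S}            # S^sigma = {3, 6}
image = {((3 * u) % n, (3 * w) % n) for (u, w) in arcs}
assert image == cayley_digraph(n, T)
```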
\'Ad\'am~\cite{Ad} conjectured that every finite cyclic group is a CI-group. Although this conjecture was disproved by Elspas and Turner~\cite{Elspas}, CI-groups and DCI-groups have been actively studied during the last fifty years, with substantial contributions; see \cite{Alspach,Babai,DE1,DT,Godsil1,C.H.Li8} and the references therein. For cyclic DCI-groups and CI-groups, the classifications were finally completed by Muzychuk~\cite{Mu1,Mu2}: a cyclic group of order $n$ is a DCI-group if and only if $n/\gcd(2,n)$ is square-free, and is a CI-group if and only if either $n/\gcd(2,n)$ is square-free or $n\in\{8,9,18\}$. A powerful method for studying DCI-groups and CI-groups comes from Schur ring theory, which was initiated by Schur and developed by Wielandt~(see~\cite[Chapter IV]{Wi}). In particular, this method has been widely used to classify the DCI-groups and CI-groups among abelian groups, especially elementary abelian groups; refer to \cite{Feng,Kov,Mu3,MS,Sp,Sp1}. However, it is usually difficult to determine whether a particular group is a DCI-group or a CI-group.
For a positive integer $m$, a group $G$ is said to have the \emph{$m$-DCI property} or \emph{$m$-CI property} if all $m$-valent Cayley digraphs of $G$ are CI-digraphs or all Cayley graphs of $G$ of valency $m$ are CI-graphs, respectively. Clearly, if $G$ has the $m$-DCI property then $G$ has the $m$-CI property. A group $G$ is said to be an \emph{$m$-DCI-group} or \emph{$m$-CI-group} if $G$ has the $k$-DCI property or $k$-CI property for every positive integer $k\leq m$, respectively. Evidently, a group $G$ is a DCI-group or CI-group if $G$ has the $m$-DCI property or $m$-CI property for all $m\leq |G|$, respectively.
Considerable work has been done on the $m$-DCI property or $m$-CI property of a group, with interesting results obtained to characterize $m$-DCI-groups or $m$-CI-groups. In \cite{Fang1,Fang2,Fang3}, Fang and Xu completely classified abelian $m$-DCI-groups for each positive integer $m$ at most $3$. For an integer $n$ at least $3$ and $m\in \{1,2,3\}$, the dihedral group $\mathrm{D}_{2n}$ is an $m$-DCI-group if and only if $n$ is odd~(see \cite{Qu}), and the generalized quaternion group $\mathrm{Q}_{4n}$ is an $m$-DCI-group if and only if $n$ is odd~(see \cite{Ma}). In \cite{C.H.Li5}, Li, Praeger and Xu classified all finite abelian groups with the $m$-DCI property for each positive integer $m$ at most $4$, and they proposed a natural problem: characterize finite groups with the $m$-DCI property. For cyclic groups, Li~\cite{C.H.Li1} characterized the cyclic groups of order $n$ with the $m$-DCI property. Soon after, Li~\cite{C.H.Li6} proved that all Sylow subgroups of an abelian group with the $m$-DCI property are homocyclic. For more details, we refer to \cite{C.H.Li2,C.H.Li3,C.H.Li4,C.H.Li7} for example.
Recently, Xie, Feng and Kwon \cite{XFK} studied dihedral groups with the $m$-DCI property: if a dihedral group $G$ of order $2n$ has the $m$-DCI property for some $1\leq m\leq n-1$, then $n$ is odd and not divisible by the square of any prime less than $m$; the converse holds when $n$ is a prime power, but it is unknown whether it holds in general. In this paper, we consider the $m$-DCI property of generalized quaternion groups. For an integer $n$ with $n\geq 2$, write \[ \mathrm{Q}_{4n}=\langle a,b\mid a^{2n}=1,\,b^2=a^n,\,a^b=a^{-1}\rangle, \] and call it the generalized quaternion group of order $4n$. In particular, $\mathrm{Q}_{8}$ (the case $n=2$) is the quaternion group. One can verify that $\mathrm{Q}_8$ is a DCI-group, see, for example,~\cite[Theorem~1.1]{SG}. For a group $G$, a subset $S$ of $G\setminus \{1\}$ is a CI-subset of $G$ if and only if the complement of $S$ in $G\setminus\{1\}$ is a CI-subset of $G$. Hence, for the $m$-DCI property of $\mathrm{Q}_{4n}$, it suffices to consider $m$ with $1\leq m\leq 2n-1$. As the first main result of this paper, we give necessary conditions for the $m$-DCI property of $\mathrm{Q}_{4n}$ with integer $n\geq 3$.
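The presentation of $\mathrm{Q}_{4n}$ above admits a normal form $a^ib^e$ with $0\leq i<2n$ and $e\in\{0,1\}$, which makes the relations easy to verify computationally. The following sketch (ours, with hypothetical helper names) encodes elements as pairs $(i,e)$.

```python
# Normal-form model of Q_{4n} = <a, b | a^{2n}=1, b^2=a^n, a^b=a^{-1}>:
# elements a^i b^e are pairs (i, e); multiplication uses b a^j = a^{-j} b.

def q4n_mul(n, x, y):
    (i, e), (j, f) = x, y
    if e == 0:
        return ((i + j) % (2 * n), f)
    if f == 0:
        return ((i - j) % (2 * n), 1)       # a^i b a^j = a^{i-j} b
    return ((i - j + n) % (2 * n), 0)       # a^i b a^j b = a^{i-j} b^2 = a^{i-j+n}

n = 5
a, b, one = (1, 0), (0, 1), (0, 0)

def power(x, k):
    r = one
    for _ in range(k):
        r = q4n_mul(n, r, x)
    return r

# The defining relations of Q_{4n} hold in this model:
assert power(a, 2 * n) == one                       # a^{2n} = 1
assert q4n_mul(n, b, b) == power(a, n)              # b^2 = a^n
inv_b = power(b, 3)                                 # b has order 4, so b^{-1} = b^3
assert q4n_mul(n, q4n_mul(n, inv_b, a), b) == power(a, 2 * n - 1)  # a^b = a^{-1}
```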
\begin{thm}\label{mainth1} Let $G$ be the generalized quaternion group of order $4n$ with $n\geq 3$ such that $G$ has the $m$-DCI property for some $1\leq m\leq 2n-1$. Then $n$ is odd, and $n$ is not divisible by $p^2$ for any prime $p\leq m-1$. \end{thm}
Based on Theorem~\ref{mainth1}, we have the following corollary.
\begin{cor}\label{cor1} If the generalized quaternion group of order $4n$ with $n\geq 3$ is a DCI-group, then $n$ is odd and square-free. \end{cor}
It is worth remarking that we do not know whether the converses of Theorem~\ref{mainth1} and Corollary~\ref{cor1} are true in general. However, we will show that they are true when $n$ is a prime power. Note that when $n$ is a power of a prime $p$, the conclusion of Theorem~\ref{mainth1} turns out to be that $p$ is odd and either $n=p$ or $m\leq p$.
We first illustrate the case $n=p$. Let $M$ be an abelian group of odd order such that every Sylow subgroup of $M$ is elementary abelian. Define $E(M,4)=M \rtimes \langle y\rangle$ such that $|y|=4$ and $x^y=x^{-1}$ for all $x \in M$, where $|y|$ denotes the order of $y$. By~\cite{Mu4}, $E(\mathbb{Z}_n,4)$ is a DCI-group if $(n,\varphi(n))=1$, where $\varphi(n)$ is the Euler function. Since $\mathrm{Q}_{4p}\cong E(\mathbb{Z}_p,4)$, it follows that $\mathrm{Q}_{4p}$ is a DCI-group for every odd prime $p$. In particular, the converse of Corollary~\ref{cor1} is true for prime power $n$.
Next let $n=p^\ell$ with an odd prime $p$ and an integer $\ell\geq 2$. The following theorem asserts that $\mathrm{Q}_{4n}$ does have the $m$-DCI property for all $m\leq p$. In other words, $\mathrm{Q}_{4n}$ is a $p$-DCI-group.
\begin{thm}\label{mainth2} Let $n\geq 3$ be a power of a prime $p$, and let $G$ be a generalized quaternion group of order $4n$. Then for $1\leq m\leq 2n-1$, $G$ has the $m$-DCI property if and only if $p$ is odd and either $n=p$ or $m\leq p$. \end{thm}
After this Introduction, we introduce some preliminary results in Section~2. Then Theorems~\ref{mainth1} and~\ref{mainth2} will be proved in Sections 3 and 4, respectively.
\section{Preliminaries}
In this section we give some basic concepts and facts that will be used later. For a positive integer $n$ and a prime $p$, denote by $n_p$ the largest power of $p$ dividing $n$, and write $n_{p'}=n/n_p$. Denote by $\mathrm{K}_n$ the complete graph with $n$ vertices, in which any two distinct vertices are adjacent, and by $\overline{\mathrm{K}}_n$ the empty graph with $n$ vertices, in which no two vertices are adjacent. A digraph $\overrightarrow{\mathrm{K}}_{m,n}$ is called a \emph{complete bipartite digraph} if its vertex set can be partitioned into two subsets $X$ and $Y$ with $|X|=m$ and $|Y|=n$ such that its arc set is $\{(x,y)\mid x\in X,\,y\in Y\}$.
Let $G$ be a group. The \emph{commutator} of elements $x$ and $y$ in $G$ is $[x,y]=x^{-1}y^{-1}xy$. The \emph{derived group} $G'$ of $G$ is $\langle [x,y]\mid x, y\in G\rangle$. For a subgroup $H$ of $G$, denote the normalizer and centralizer of $H$ in $G$ by $\mathrm{N}_G(H)$ and $\mathrm{C}_G(H)$, respectively. The following result is from \cite[Chapter 2, Theorem 1.6]{Suzuki}.
\begin{pro}\label{proper-p-group} Let $G$ be a $p$-group for some prime $p$ and let $H$ be a proper subgroup of $G$. Then $\mathrm{N}_G(H)$ properly contains $H$, that is, $\mathrm{N}_G(H)>H$. \end{pro}
Let $p$ be a prime. A finite group $G$ is said to be \emph{$p$-abelian} if $(xy)^p=x^py^p$ for all $x$ and $y$ in $G$. A $p$-group $G$ is called a \emph{regular $p$-group} if for any two elements $x$ and $y$ in $G$, there exist $c_1,c_2,\ldots,c_r$ in the derived group $\langle x,y\rangle'$ of $\langle x,y\rangle$ such that $(xy)^p=x^py^pc_1^pc_2^p\cdots c_r^p$. The following proposition is from \cite[Proposition 3]{Mann} and \cite[Proposition 2.3]{ZSX}.
\begin{pro}\label{regular-p-group} Let $G$ be a $p$-group for some prime $p$. If every subgroup of $G'$ can be generated by at most $(p-1)/2$ elements, then $G$ is a regular $p$-group. Moreover, a regular $p$-group $G$ is $p$-abelian if and only if $G'$ has exponent $p$. \end{pro}
Let $\mathrm{Cay}(G,S)$ be a Cayley digraph of a group $G$ with respect to $S$. For a given $g\in G$, the right multiplication $R(g)$ is a permutation on $G$ such that $x^{R(g)}=xg$ for every $x\in G$. Clearly, $R(g)$ is an automorphism of $\mathrm{Cay}(G,S)$. Let $R(G)=\{R(g)\mid g\in G\}$. Then $R(G)$ is a regular group of automorphisms of $\mathrm{Cay}(G,S)$, which is called \emph{the right regular representation} of $G$. The following well-known Babai's criterion is from \cite{Babai}~(also see \cite[Theorem 2.4]{C.H.Li8}).
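That each right multiplication $R(g)$ is an automorphism of $\mathrm{Cay}(G,S)$ can be checked directly on a small example (ours, not from the paper), here with $G=\mathbb{Z}_{12}$ written additively.

```python
# R(g): x -> x + g preserves the arc set of Cay(Z_12, S), since an arc
# (x, s + x) is sent to (x + g, s + (x + g)), which is again an arc.

n, S = 12, {1, 3, 8}
arcs = {(g, (s + g) % n) for g in range(n) for s in S}   # arcs (g, sg) of Cay(G, S)

for g in range(n):                                       # every R(g), g in G
    image = {((u + g) % n, (w + g) % n) for (u, w) in arcs}
    assert image == arcs                                 # R(g) is an automorphism
```

Since the maps $x\mapsto x+g$ also act regularly on the vertex set, this illustrates that $R(G)$ is a regular group of automorphisms.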
\begin{pro}\label{CI-graph-prop} A Cayley digraph $\mathrm{Cay}(G,S)$ is a CI-digraph if and only if every regular subgroup of $\mathrm{Aut}(\mathrm{Cay}(G,S))$ isomorphic to $G$ is conjugate to $R(G)$ in $\mathrm{Aut}(\mathrm{Cay}(G,S))$. \end{pro}
The following result says that the $m$-DCI property of a group is hereditary by subgroups and factor groups. The fact about subgroups can be proved by the same argument as that for the $m$-CI property in~\cite[Lemma~8.2]{C.H.Li7}, and the fact about factor groups is stated in~\cite[Theorem 9]{DE2}.
\begin{pro}\label{subgroup} Suppose that a finite group $G$ has the $m$-DCI property for a positive integer $m$. Then the following statements hold: \begin{enumerate}[{\rm (a)}] \item Every subgroup of $G$ has the $m$-DCI property. \item If $H\unlhd G$ then $G/H$ has the $m$-DCI property. \end{enumerate} \end{pro}
Li~\cite[Theorem~1.2]{C.H.Li1} characterized cyclic groups with the $m$-DCI property. We restate this result as follows.
\begin{pro}\label{cyclicgroupMP} Let $G$ be a cyclic group of order $n$ such that $G$ has the $m$-DCI property for some $p+1\leq m\leq n-(p+2)$ with $p$ prime. Then either $n=p^2$ and $m\equiv 0$ or $-1\pmod{p}$, or $n_p$ divides $\mathrm{lcm}(4,p)$. \end{pro}
For subsets of a cyclic group, we have the following result (see \cite[Lemma~2.1]{C.H.Li6}).
\begin{lem}\label{cyclic group property} Let $G=\langle z\rangle$ be a cyclic group of order $n$, and let $i,j\in \{1,2,\ldots,n-2\}$. If $\{z,z^2,\ldots,z^i\}=\{z^j, z^{2j},\ldots,z^{ij}\}$, then $j=1$. \end{lem}
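Lemma \ref{cyclic group property} can be confirmed by brute force for cyclic groups of small order (an illustrative check of ours, identifying $z^k$ with the residue $k$ modulo $n$).

```python
# Brute-force verification of the lemma for 3 <= n <= 12:
# if {z, z^2, ..., z^i} = {z^j, z^{2j}, ..., z^{ij}} with i, j <= n - 2,
# then j = 1.

for n in range(3, 13):
    for i in range(1, n - 1):
        left = {k % n for k in range(1, i + 1)}            # {z, z^2, ..., z^i}
        for j in range(1, n - 1):
            right = {(k * j) % n for k in range(1, i + 1)}  # {z^j, ..., z^{ij}}
            if left == right:
                assert j == 1
```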
Let $G$ be a finite group. If for any two subgroups $H$ and $K$ of $G$, every isomorphism from $H$ to $K$ can be extended to an automorphism of $G$, then $G$ is called {\em homogeneous}. For generalized quaternion groups $\mathrm{Q}_{4n}$, the following property is shown in~\cite[Lemma~2.4]{Ma}.
\begin{lem}\label{homogeneous} For an odd positive integer $n$, the generalized quaternion group $\mathrm{Q}_{4n}$ is homogeneous. \end{lem}
From~\cite[Lemma~3.1]{XFK}, we have the following lemma, which provides a fairly general way to construct isomorphic Cayley digraphs.
\begin{lem}\label{general} Let $G$ be a finite group with $L\unlhd G$ and $L\leq M\leq G$. Suppose that $A$ and $B$ are subsets of $M\setminus\{1\}$ such that $A^\gamma=B$ for some $\gamma\in\mathrm{Aut}(M)$ and $\gamma$ fixes every coset of $L$ in $M$, and that $C\subseteq G\setminus L$ is a union of some cosets of $L$ in $G$. Then $\mathrm{Cay}(G,A\cup C)\cong\mathrm{Cay}(G,B\cup C)$. \end{lem}
Let $\Gamma$ be a digraph and let $X\subseteq V(\Gamma)$. The \emph{induced subdigraph} $[X]$ of $\Gamma$ by $X$ is the digraph whose vertex set is $X$ and whose arc set is $\{(u,v)\mid u,v\in X,\,(u,v)\in \mathrm{Arc}(\Gamma)\}$. Let $N$ be a subgroup of $\mathrm{Aut}(\Gamma)$. Denote by $u^N$ the orbit of $N$ containing $u\in V(\Gamma)$, and by $\Gamma^+(u)$ the out-neighborhood of $u$ in $\Gamma$. The \emph{quotient digraph} $\Gamma_N$ of $\Gamma$ induced by $N$ is defined as the digraph whose vertex set is the set of $N$-orbits in $V(\Gamma)$ such that $(u^N,v^N)$ is an arc of $\Gamma_N$, where $u^N$ and $v^N$ are distinct orbits of $N$, if and only if $(x,y)$ is an arc of $\Gamma$ for some $x\in u^N$ and $y\in v^N$. The digraph $\Gamma$ is said to be an {\em $N$-cover} of $\Gamma_N$ if, for every $u\in V(\Gamma)$, the out-valency of $u$ in $\Gamma$ is the same as the out-valency of $u^N$ in $\Gamma_N$. It is said to be \emph{$G$-locally primitive} if $G_u$ acts primitively on $\Gamma^+(u)$ for every $u\in V(\Gamma)$, and \emph{strongly connected} if there exists a directed path from $u$ to $v$ for each pair of vertices $u$ and $v$. To avoid trivial cases, a digraph with one vertex is also called strongly connected. It is well known that every finite connected vertex-transitive digraph is strongly connected (see~\cite[Lemma 2.6.1]{GR} for instance). For convenience, the complete graph on two vertices is also viewed as a directed cycle.
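To make the quotient construction concrete, here is a small computation (our own example) of $\Gamma_N$ for $\Gamma=\mathrm{Cay}(\mathbb{Z}_{12},\{1,2\})$ and $N$ the group of translations by the subgroup $\{0,4,8\}$, which also exhibits the $N$-cover condition.

```python
# Quotient digraph of Gamma = Cay(Z_12, {1, 2}) by the orbits of
# N = translations by {0, 4, 8}; the orbits are the four cosets mod 4.

n, S = 12, {1, 2}
arcs = {(g, (s + g) % n) for g in range(n) for s in S}

orbit = lambda v: frozenset((v + t) % n for t in (0, 4, 8))   # N-orbit of v
quotient_arcs = {(orbit(u), orbit(w)) for (u, w) in arcs if orbit(u) != orbit(w)}

# Gamma_N has 4 vertices, each of out-valency 2, matching the out-valency
# of every vertex of Gamma; hence Gamma is an N-cover of Gamma_N here.
orbits = {orbit(v) for v in range(n)}
assert len(orbits) == 4
for O in orbits:
    assert sum(1 for (U, W) in quotient_arcs if U == O) == 2
out_val_gamma = {v: sum(1 for (u, w) in arcs if u == v) for v in range(n)}
assert all(d == 2 for d in out_val_gamma.values())
```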
\begin{lem}\label{no-arc} Let $\Gamma$ be a finite connected $G$-vertex-transitive digraph, where $G\leq \mathrm{Aut}(\Gamma)$, and let $N$ be a normal subgroup of $G$ with at least two orbits on $V(\Gamma)$. Then the following statements hold: \begin{enumerate}[{\rm (a)}] \item If $\Gamma$ is $G$-arc-transitive, then there are no arcs in the induced subdigraph of any orbit of $N$ in $\Gamma$.
\item If $\Gamma$ is $G$-locally primitive, then either $N$ is the kernel of $G$ on $V(\Gamma_N)$ acting semiregularly on $V(\Gamma)$, and $\Gamma$ is an $N$-cover of $\Gamma_N$ with $|V(\Gamma_N)|\geq 3$, or $\Gamma_N$ is a directed cycle. \end{enumerate} \end{lem}
\begin{proof} To prove part~(a), let $\Gamma$ be $G$-arc-transitive and suppose on the contrary that the induced subdigraph of some orbit of $N$ has an arc. Since $\Gamma$ is $G$-vertex-transitive, it follows that the induced subdigraph of every orbit of $N$ has an arc. By the connectivity of $\Gamma$, there is an arc between some distinct orbits of $N$, say $O_1$ and $O_2$. Let $(u,v)$ be an arc of $\Gamma$ with $u\in O_1$ and $v\in O_2$. Since $O_1$ has an arc and $N$ is transitive on $O_1$, there is an arc $(u,w)$ of $\Gamma$ with $w\in O_1$. Since $\Gamma$ is $G$-arc-transitive, there exists $g\in G$ such that $u^g=u$ and $v^g=w$. However, such an element $g$ does not preserve the set of $N$-orbits as $u,w\in O_1$ and $v\in O_2$. This contradicts the fact that $N$ is normal in $G$, completing the proof of part~(a).
In the following we prove part~(b). Let $\Gamma$ be $G$-locally primitive. Since $\Gamma$ is connected, there exists an arc between some distinct orbits of $N$, say $O_1$ and $O_2$. Let $(u,v)$ be an arc of $\Gamma$ with $u\in O_1$ and $v\in O_2$.
First assume that the out-neighbors of $u$ are contained in the same orbit of $N$. Then $\Gamma^+(u)\subseteq O_2$ as $v\in O_2$. Since $N$ is transitive on both $O_1$ and $O_2$, we have $\Gamma^+(x)\subseteq O_2$ for all $x\in O_1$. Since $G$ has an element mapping $O_1$ to $O_2$, the out-neighborhood of each vertex in $O_2$ is a subset of some orbit of $N$. Repeating this argument, we see that the out-neighborhood of each vertex in every orbit of $N$ is a subset of some orbit of $N$. Then we conclude from the connectivity of $\Gamma$ that $\Gamma_N$ is a directed cycle.
Now assume that the out-neighbors of $u$ are not contained in the same orbit of $N$. Let $\mathcal{O}=\{O_1,O_2,\ldots,O_n\}$ be the set of orbits of $N$ and assume that the out-neighborhood of $O_1$ in $\Gamma_N$ is $\{O_2,O_3,\ldots,O_d\}$. Then $|V(\Gamma_N)|=n\geq d\geq 3$, and $|\Gamma^+(u)\cap O_i|\geq 1$ for each $i\in\{2,\dots,d\}$. The hypothesis of part~(b) implies that $\Gamma$ is strongly connected and $G$-arc-transitive, whence $G_u$ is transitive on $\Gamma^+(u)$. Moreover, the conclusion of part~(a) implies that \[ \{\Gamma^+(u)\cap O_2,\Gamma^+(u)\cap O_3,\ldots, \Gamma^+(u)\cap O_d\} \]
is a partition of $\Gamma^+(u)$. Since $N$ is normal in $G$, it follows that this partition is preserved by $G_u$. Then we conclude from the $G$-local-primitivity of $\Gamma$ that $|\Gamma^+(u)\cap O_i|=1$ for each $i\in\{2,\dots,d\}$. Hence $u$ has the same out-valency as $O_1$ in $\Gamma_N$, which means that $\Gamma$ is an $N$-cover of $\Gamma_N$. Let $K$ be the kernel of $G$ acting on $\mathcal{O}$. Then the stabilizer $K_u$ fixes $\Gamma^+(u)$ pointwise because $|\Gamma^+(u)\cap O_i|=1$ for each $i\in\{2,\dots,d\}$. This implies that $K_u=K_w$ for every $w\in \Gamma^+(u)$. Then the strong connectivity of $\Gamma$ implies that $K_x=1$ for all $x\in V(\Gamma)$, that is, $K$ is semiregular on $V(\Gamma)$. Noting $N\leq K$, we deduce by the Frattini argument that $K=NK_x=N$. This shows that $N$ is the kernel of $G$ acting on $V(\Gamma_N)$ and is semiregular on $V(\Gamma)$. \end{proof}
\section{Proof of Theorem~\ref{mainth1}}
By \cite[Theorem 1.4]{Ma}, if $\mathrm{Q}_{4n}$ ($n\geq 3$) has the $m$-DCI property for $m\in \{1,2,3\}$, then $n$ is odd. The following lemma shows that this holds for every $1\leq m\leq 2n-1$.
\begin{lem}\label{n-odd} Let $G$ be a generalized quaternion group of order $4n$ with $n\geq 3$ such that $G$ has the $m$-DCI property for some $1\leq m\leq 2n-1$. Then $n$ is odd. \end{lem}
\begin{proof} Let $G=\mathrm{Q}_{4n}=\langle a,b\mid a^{2n}=1,\,b^2=a^n,\,a^b=a^{-1}\rangle$. Write $H=\langle a^n\rangle$. Then $H$ is a normal subgroup of order $2$ in $G$. Then we have \[ G/H=\mathrm{Q}_{4n}/H=\langle aH,bH\mid (aH)^{n}=1,\,(bH)^2=1,\,(aH)^{bH}=a^{-1}H\rangle. \] This implies that $G/H=\mathrm{Q}_{4n}/H\cong \mathrm{D}_{2n}$. Since $G$ has the $m$-DCI property for some $1\leq m\leq 2n-1$, it follows from Proposition~\ref{subgroup} that $G/H$ has the $m$-DCI property. Then by \cite[Lemma~3.3]{XFK}, $n$ is odd. \end{proof}
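The quotient computation $\mathrm{Q}_{4n}/\langle a^n\rangle\cong \mathrm{D}_{2n}$ used in this proof can be sanity-checked by brute force for small $n$. The following Python sketch (the pair encoding of $\mathrm{Q}_{4n}$ is our own device, not part of the paper) verifies the dihedral relations in the quotient:

```python
def q4n_mul(n, x, y):
    """Multiply a^i b^j by a^k b^l in Q_{4n} = <a, b | a^(2n)=1, b^2=a^n, a^b=a^(-1)>.
    Elements are encoded as pairs (i, j) with i taken mod 2n and j in {0, 1}."""
    (i, j), (k, l) = x, y
    i2 = (i + (k if j == 0 else -k)) % (2 * n)  # b a^k = a^(-k) b
    j2 = j + l
    if j2 == 2:                                 # b^2 = a^n
        i2 = (i2 + n) % (2 * n)
        j2 = 0
    return (i2, j2)

def coset(n, g):
    """Coset g<a^n>; a^n = (n, 0) is central, so H = <a^n> is normal of order 2."""
    return frozenset({g, q4n_mul(n, g, (n, 0))})

def quotient_is_dihedral(n):
    """Check |G/H| = 2n and the D_{2n} relations: (aH)^n = H, (bH)^2 = H,
    and bH inverts aH (note (bH)^{-1} = bH in the quotient)."""
    G = [(i, j) for i in range(2 * n) for j in (0, 1)]
    if len({coset(n, g) for g in G}) != 2 * n:
        return False
    g, k = (1, 0), 1                 # order of aH in G/H should be n
    while coset(n, g) != coset(n, (0, 0)):
        g, k = q4n_mul(n, g, (1, 0)), k + 1
    if k != n:
        return False
    if coset(n, q4n_mul(n, (0, 1), (0, 1))) != coset(n, (0, 0)):
        return False                 # (bH)^2 = H
    bab = q4n_mul(n, q4n_mul(n, (0, 1), (1, 0)), (0, 1))
    return coset(n, bab) == coset(n, (2 * n - 1, 0))  # (bH)(aH)(bH) = a^{-1}H
```

For instance, `quotient_is_dihedral(3)` and `quotient_is_dihedral(5)` both return `True`, matching $\mathrm{Q}_{12}/H\cong\mathrm{D}_6$ and $\mathrm{Q}_{20}/H\cong\mathrm{D}_{10}$.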
The next lemma, obtained by considering the Sylow $p$-subgroups of $G$, treats the case where $G=\mathrm{Q}_{4n}$ has the $m$-DCI property and $n$ has a prime divisor $p$ with $p+1\leq m\leq 2n-1$.
\begin{lem}\label{p-odd} Let $G$ be a generalized quaternion group of order $4n$ with $n\geq 3$. If $G$ has the $m$-DCI property such that $p+1\leq m\leq 2n-1$ for some prime divisor $p$ of $n$, then $p$ is odd and $n$ is not divisible by $p^2$. \end{lem}
\begin{proof} Let $G=\mathrm{Q}_{4n}=\langle a,b\mid a^{2n}=1,\,b^2=a^n,\,a^b=a^{-1}\rangle$. Suppose that $G$ has the $m$-DCI property such that $p+1\leq m\leq 2n-1$ for some prime divisor $p$ of $n$. By Lemma~\ref{n-odd}, $n$ is odd, and so $p$ is odd.
Write $n'=2n/p$, $z=a^{n'}$ and $P=\langle z\rangle$. Then $n'$ is even and $P$ is the unique subgroup of order $p$ in $G$, which implies that $P$ is characteristic in $G$. Suppose for a contradiction that $p$ divides $n'$. Note that $\langle a\rangle$ has the $m$-DCI property by Proposition~\ref{subgroup}. Then it follows from Proposition~\ref{cyclicgroupMP} that $2n-(p+1)\leq m\leq 2n-1$. Define an integer $j\in\{1,\dots,p-2\}$ and a subset $Q$ of $G$ as follows: \[ (j,Q)=\begin{cases} (m\bmod p,\,\emptyset),&\text{ if }m\not\equiv 0,-1\pmod{p}\\ (p-2,\,\{b\}),&\text{ if }m\equiv -1\pmod{p}\\ (p-2,\,\{b,bz\}),&\text{ if }m\equiv 0\pmod{p}.\\ \end{cases} \]
Then $m=kp+j+|Q|$ for some positive integer $k\leq n'-1$. Write $X=\langle z,b\rangle=\langle z\rangle\rtimes \langle b\rangle$. Then $X$ has an automorphism $\gamma$ induced by $z\mapsto z^{-1}$ and $b\mapsto b$. Let $Z=\{z,\ldots,z^j\}$ and let \begin{align*} S&=aP\cup (baP\cup\cdots\cup ba^{k-1}P) \cup (Z\cup Q),\\ T&=aP\cup (baP\cup\cdots\cup ba^{k-1}P) \cup (Z^\gamma \cup Q^\gamma). \end{align*}
Note that $|S|=|T|=m$. Taking $L=P$ and $M=X$ and $C=aP\cup (baP\cup\cdots\cup ba^{k-1}P)$ in Lemma~\ref{general}, we obtain $\mathrm{Cay}(G,S)\cong\mathrm{Cay}(G,T)$. Since $G$ has the $m$-DCI property, we have $S^\sigma=T$ for some $\sigma\in \mathrm{Aut}(G)$. Let $x\in aP$. Then $x=az^\ell=a^{\ell n'+1}$ for some $0\leq \ell\leq p-1$, which implies that $|x|=2n/(2n,\ell n'+1)$. Since $2$ divides $n'$ and $p$ divides $n'$, we have $(2n,\ell n'+1)=1$, and so $|x|=2n$. This means that every element in $aP$ has order $2n$. Note that every element in $(baP\cup\cdots\cup ba^{k-1}P)\cup Q\cup Q^\gamma$ has order $4$ and every element in $Z\cup Z^\gamma$ has order $p$. We derive from $S^\sigma=T$ that $(aP)^\sigma=aP$ and $Z^\sigma=Z^\gamma$. Since $\langle a\rangle$ is characteristic in $G$, it follows that $a^\sigma=a^r$ for some integer $r$. In particular, \[ \{z^r,\ldots,z^{jr}\}=Z^\sigma=Z^\gamma=\{z^{-1},\ldots,z^{-j}\}. \] Then by Lemma~\ref{cyclic group property}, $r\equiv-1\pmod{p}$. Note that $P^\sigma=P$ as $P$ is characteristic in $G$. We conclude that $aP=(aP)^\sigma=a^\sigma P^\sigma=a^rP$, which leads to $a^{r-1}\in P=\langle a^{n'}\rangle$. However, this together with $p$ dividing $n'$ implies that $p$ divides $r-1$, contradicting $r\equiv-1\pmod p$. Thus $p$ does not divide $n'$, which means that $n$ is not divisible by $p^2$, completing the proof. \end{proof}
Now we are ready to prove Theorem~\ref{mainth1}.
\begin{proof}[Proof of Theorem~\ref{mainth1}.] Let $G$ be a generalized quaternion group of order $4n$ with $n\geq 3$ such that $G$ has the $m$-DCI property for some $1\leq m\leq 2n-1$. Then $n$ is odd by Lemma~\ref{n-odd}. Furthermore, for any prime $p\leq m-1$ dividing $n$, Lemma~\ref{p-odd} shows that $n$ is not divisible by $p^2$; for primes not dividing $n$ this holds trivially. This completes the proof. \end{proof}
\section{Proof of Theorem~\ref{mainth2}}
In this section, we prove Theorem~\ref{mainth2}. Besides being important ingredients of the proof of Theorem~\ref{mainth2}, the following two lemmas are also of independent interest.
\begin{lem}\label{sub-conjugate}
Let $G\leq A\leq\mathrm{Sym}(\Omega)$ with $G$ regular on $\Omega$, and let $H$ be a normal subgroup of odd order $n$ in $G$. Suppose $G=H\rtimes\langle b\rangle$ for some $b\in G$ with $|b|\in\{2,4\}$ such that either $G=H\times\langle b\rangle$ or $h^b=h^{-1}$ for all $h\in H$. Then for a regular subgroup $X$ of $A$ isomorphic to $G$, the subgroups $X$ and $G$ are conjugate in $A$ if and only if $H$ and the unique subgroup of order $n$ of $X$ are conjugate in $A$. \end{lem}
\begin{proof}
By the assumption of the lemma, there exists $r=\pm1$ such that $h^b=h^r$ for all $h\in H$. Let $X$ be a subgroup of $A$ isomorphic to $G$. Then $X$ has a unique subgroup of order $n$, say $Y$, and we may write $X=Y\rtimes\langle c\rangle$ such that $|b|=|c|$ and $y^c=y^r$ for all $y\in Y$. We need to prove that $G$ and $X$ are conjugate in $A$ if and only if $H$ and $Y$ are conjugate in $A$. The necessity is clear because $H$ and $Y$ are the unique subgroups of order $n$ in $G$ and $X$, respectively. To finish the proof, assume that $A$ has an element $\alpha$ with $Y^\alpha=H$, and we shall show that there exists an element of $A$ conjugating $X$ to $G$.
Since $c\in \mathrm{N}_A(Y)$, we have $c^\alpha\in \mathrm{N}_A(Y^\alpha)=\mathrm{N}_A(H)$. Hence the $2$-elements $b$ and $c^\alpha$ both lie in $\mathrm{N}_A(H)$. Let $P$ be a Sylow $2$-subgroup of $\mathrm{N}_A(H)$ such that $b\in P$. By Sylow's theorem, there exists $\beta \in \mathrm{N}_{A}(H)$ such that $(c^\alpha)^\beta \in P$. Then \[ X^{\alpha\beta}=(Y\rtimes\langle c\rangle)^{\alpha\beta}=Y^{\alpha\beta}\rtimes\langle c^{\alpha\beta}\rangle =H^\beta\rtimes\langle c^{\alpha\beta}\rangle=H\rtimes \langle c^{\alpha\beta}\rangle. \]
Let $d=c^{\alpha\beta}\in P$. Then $|d|=|c|=|b|$ and $h^d=h^r$ for all $h\in H$ as $y^c=y^r$ for all $y\in Y$.
Write $m=|b|$. Then $m=2$ or $4$. The regularity of $G$ on $\Omega$ implies $|\Omega|=|G|=|b||H|=mn$. Since $H$ is a normal subgroup of $G$, it follows that $H$ has $m$ orbits on $\Omega$, say $\Omega_1,\Omega_2,\ldots,\Omega_m$, where $|\Omega_i|=n$ for every $i\in\{1,\dots, m\}$. Moreover, since $G=H\rtimes\langle b\rangle$ with $|b|=m$, the element $b$ permutes the set $\{\Omega_1,\Omega_2,\ldots,\Omega_m\}$ cyclically. Similarly, $d$ permutes $\{\Omega_1,\Omega_2,\ldots,\Omega_m\}$ cyclically because $X^{\alpha\beta}=H\rtimes\langle d\rangle$ with $|d|=m$.
Note that $P$ is a $2$-group and $b,d\in P$. Every orbit of $P$ on $\Omega$ has length a power of $2$ that is at least $m$, where $m=2$ or $4$. If every orbit of $P$ on $\Omega$ has length greater than $m$, then every orbit of $P$ on $\Omega$ has length divisible by $2m$, and so $|\Omega|$ is divisible by $2m$, which is impossible because $|\Omega|=mn$ with $n$ odd. Thus $P$ has an orbit of length $m$, say $\Delta$. In particular, both $\langle b\rangle$ and $\langle d\rangle$ are regular on $\Delta$. Write $\Delta=\{\delta_1,\delta_2,\ldots,\delta_m\}$. Since $b$ permutes $\{\Omega_1,\Omega_2,\ldots,\Omega_m\}$ cyclically, we have $|\Delta\cap \Omega_i|=1$, say $\delta_i\in \Omega_i$, for every $i\in\{1,\dots,m\}$.
Consider $b^{\Delta}$ and $d^{\Delta}$, namely, the permutations of $\Delta$ induced by $b$ and $d$, respectively. For $m=2$, since both $\langle b\rangle$ and $\langle d\rangle$ are regular on $\Delta$, we have $b^\Delta=d^\Delta$ and set $x=d$. Now assume $m=4$. It is easy to check that if a product of two elements of order $4$ in $\mathrm{S}_4$ has $2$-power order, then the two elements are equal or inverse to each other. Since $bd\in P$ has $2$-power order, we conclude that $b^\Delta=d^\Delta$ or $b^\Delta=(d^{-1})^\Delta$. Set $x=d$ in the former case, and $x=d^{-1}$ in the latter case. Then summarizing this paragraph, we obtain $b^\Delta=x^\Delta$ with $x=d^{\pm1}$. Consequently, $bx^{-1}$ fixes every element in $\Delta$.
Since $h^d=h^r$ for every $h\in H$ and $r=\pm1$, we have $h^{d^{-1}}=h^r$ for every $h\in H$. This together with $x=d^{\pm1}$ gives $h^x=h^r=h^b$ for every $h\in H$, which indicates that $bx^{-1}$ centralizes $H$. For each $i\in\{1,\dots,m\}$, since $bx^{-1}$ fixes $\delta_i$ and $\Omega_i$ is the orbit of $H$ containing $\delta_i$, it follows that $bx^{-1}$ fixes every element in $\Omega_i$. Hence $bx^{-1}=1$, and so $\langle b\rangle=\langle x\rangle=\langle d\rangle$. As $X^{\alpha\beta}=H\rtimes\langle c^{\alpha\beta}\rangle=H\rtimes\langle d\rangle$, this shows that $X^{\alpha\beta}=H\rtimes\langle b\rangle=G$, which completes the proof. \end{proof}
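The $\mathrm{S}_4$ fact invoked in the proof above, that a product of two elements of order $4$ having $2$-power order forces the two elements to be equal or mutually inverse, is indeed easy to check; a brute-force Python sketch (our own, for sanity-checking only):

```python
from itertools import permutations

E = (0, 1, 2, 3)  # identity permutation of {0,1,2,3}

def compose(p, q):
    """(p q)(i) = p(q(i)); permutations as tuples of images."""
    return tuple(p[q[i]] for i in range(4))

def order(p):
    q, k = p, 1
    while q != E:
        q, k = compose(p, q), k + 1
    return k

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

order4 = [p for p in permutations(range(4)) if order(p) == 4]
assert len(order4) == 6  # the six 4-cycles of S_4
for p in order4:
    for q in order4:
        if order(compose(p, q)) in (1, 2, 4):
            # a product of 2-power order forces q = p or q = p^{-1}
            assert q == p or q == inverse(p)
```

Running the loop confirms that exactly the pairs $(p,p)$ and $(p,p^{-1})$ yield products of $2$-power order.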
Based on Lemma~\ref{sub-conjugate}, we may prove the following:
\begin{lem}\label{p-CI-cyclic} Let $G$ be a cyclic group of order $2^{\ell}n$ with $\ell\in \{0,1,2\}$ and $n$ odd, and let $p$ be the least prime divisor of $n$. Then every connected Cayley digraph of $G$ with valency at most $p$ is a CI-digraph. \end{lem}
\begin{proof}
Write $G=\langle a\rangle\cong \mathbb{Z}_{2^{\ell}n}$. Let $\Gamma=\mathrm{Cay}(G,S)$ be a connected Cayley digraph with $|S|\leq p$, and let $A=\mathrm{Aut}(\Gamma)$. If $\ell=0$, then since $G$ is a connected $p$-DCI-group \cite[Theorem~1.1]{C.H.Li2}, $\Gamma$ is a CI-digraph, as required. Next we consider the case $\ell\in \{1,2\}$. Denote by $A_1$ the stabilizer of $1$ in $A$.
Assume that $p$ does not divide $|A_1|$. Since $\Gamma$ is connected and has valency at most $p$, each prime divisor of $|A_1|$ is at most $p$. Then as $p$ is the least prime divisor of $n$, we conclude that $|A_1|$ is coprime to $n$. Let $\pi$ be the set of prime divisors of $n$. It follows from $A=R(G)A_1$ that $\langle a^{2^\ell}\rangle$ is a Hall $\pi$-subgroup of $A$. By \cite[Theorem 9.1.10]{Robinson}, all nilpotent Hall $\pi$-subgroups of $A$ are conjugate. Hence all subgroups isomorphic to $\langle a^{2^\ell}\rangle$ are conjugate in $A$, and so all regular subgroups of $A$ isomorphic to $R(G)$ are conjugate by Lemma~\ref{sub-conjugate}. This shows that $\Gamma$ is a CI-digraph by Proposition~\ref{CI-graph-prop}.
Assume that $p$ divides $|A_1|$. If $\Gamma$ has valency less than $p$, then the connectivity of $\Gamma$ implies that $|A_1|$ is not divisible by $p$, a contradiction. Thus $\Gamma$ has valency $p$, and it further follows from $p$ dividing $|A_1|$ that $\Gamma$ is arc-transitive. By \cite[Theorem 1.3]{C.H.Li9}, every connected arc-transitive Cayley digraph of a cyclic group is a CI-digraph, and hence $\Gamma$ is a CI-digraph. This completes the proof. \end{proof}
Let $X$ and $Y$ be digraphs. The {\em lexicoproduct $X[Y]$} of $X$ and $Y$ is defined as the digraph with vertex set $V(X)\times V(Y)$ such that $((x_1,y_1),(x_2,y_2))$, where $x_1,x_2\in V(X)$ and $y_1,y_2\in V(Y)$, is an arc if and only if $(x_1,x_2)\in\mathrm{Arc}(X)$, or $x_1=x_2$ and $(y_1,y_2)\in \mathrm{Arc}(Y)$. We now give some sufficient conditions for Cayley digraphs of the generalized quaternion groups $\mathrm{Q}_{4n}$ with $n\geq 3$ odd to be CI-digraphs.
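A minimal Python sketch of this definition (function and variable names are ours), illustrated on $\overrightarrow{\mathrm{C}}_4[\overline{\mathrm{K}}_3]$, the shape of digraph $\overrightarrow{\mathrm{C}}_{4p^{\ell-1}}[\overline{\mathrm{K}}_p]$ that appears in the proof below:

```python
from itertools import product

def lexicoproduct(VX, AX, VY, AY):
    """Lexicoproduct X[Y]: vertex set V(X) x V(Y), with ((x1,y1),(x2,y2)) an arc
    iff (x1,x2) is an arc of X, or x1 = x2 and (y1,y2) is an arc of Y."""
    V = [(x, y) for x in VX for y in VY]
    A = {(u, v) for u, v in product(V, repeat=2)
         if (u[0], v[0]) in AX or (u[0] == v[0] and (u[1], v[1]) in AY)}
    return V, A

# directed 4-cycle composed with the empty digraph on p = 3 vertices
p = 3
V, A = lexicoproduct(range(4), {(i, (i + 1) % 4) for i in range(4)},
                     range(p), set())
```

Here every vertex has out-valency $p$: its out-neighbors are all $p$ vertices of the next fiber, which is exactly the structure exploited in Lemma~\ref{p-CI-Q}.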
\begin{lem}\label{p-CI-Q} Let $\Gamma=\mathrm{Cay}(\mathrm{Q}_{4n},S)$ be a connected Cayley digraph of $\mathrm{Q}_{4n}$ with $n\geq 3$ odd, and let $A=\mathrm{Aut}(\Gamma)$. Then the following statements hold: \begin{enumerate}[{\rm (a)}]
\item If $|A_1|$ is coprime to $n$, then $\Gamma$ is a CI-digraph.
\item If $n$ is a power of a prime $p$ and $\Gamma$ is arc-transitive with $|S|=p$, then $\Gamma$ is a CI-digraph. \end{enumerate} \end{lem}
\begin{proof} Let $G=\mathrm{Q}_{4n}=\langle a,b\mid a^{2n}=1,\,b^2=a^n,\,a^b=a^{-1}\rangle$, and let $\pi$ be the set of prime divisors of $n$.
To prove part~(a), suppose that $|A_1|$ is coprime to $n$. Since $A=R(G)A_1$ and $n$ is odd, we conclude that $\langle R(a^2)\rangle$ is a Hall $\pi$-subgroup of $A$. Since all nilpotent Hall $\pi$-subgroups of $A$ are conjugate by~\cite[Theorem 9.1.10]{Robinson}, Lemma~\ref{sub-conjugate} implies that all regular subgroups of $A$ isomorphic to $R(G)$ are conjugate. Hence $\Gamma$ is a CI-digraph by Proposition~\ref{CI-graph-prop}. This proves part~(a).
To prove part (b), suppose that $n=p^\ell$ for an odd prime $p$ and a positive integer $\ell$, and that $\Gamma$ is arc-transitive with $|S|=p$. If $\ell=1$, then $\mathrm{Q}_{4p}$ is a DCI-group by \cite{Mu4}, and so $\Gamma$ is a CI-digraph.
From now on we assume that $\ell\geq 2$. Since $A_1$ acts transitively on $S$, the order $|A_1|$ is divisible by $p$, and so $|A|=|R(G)||A_1|=4n|A_1|$ is divisible by $p^{\ell+1}$. Write \[ H=\langle a^2\rangle\ \text{ and }\ N=\mathrm{N}_A(R(H)). \]
Since $|R(H)|=|H|=p^{\ell}$ and $|A|$ is divisible by $p^{\ell+1}$, it is clear that $R(H)$ is not a Sylow $p$-subgroup of $A$. By Sylow's theorem and Proposition~\ref{proper-p-group}, $|N|$ is divisible by $p^{\ell+1}$. Since $R(H)\unlhd R(G)$, we get $R(G)\leq N$. It follows that $\Gamma$ is $N$-vertex-transitive, and \begin{equation}\label{Eqn1}
|N_u|=|N|/|R(G)| \text{ is divisible by } p \text{ for every } u\in V(\Gamma). \end{equation}
Hence $\Gamma$ is $N$-arc-transitive as $|S|=p$. Since $R(H)$ is characteristic in $R(G)$ and $R(G)\unlhd N$, we have $R(H)\unlhd N$. Since $|S|=p$, it follows that $\Gamma$ is $N$-locally primitive. The orbit set of $R(H)$ on $V(\Gamma)$ is $\{H,bH,b^2H,b^3H\}=V(\Gamma_{R(H)})$. By Lemma~\ref{no-arc}\,(b), either $R(H)$ is the kernel of $N$ on $V(\Gamma_{R(H)})$ and $\Gamma$ is an $R(H)$-cover of $\Gamma_{R(H)}$, or $\Gamma_{R(H)}$ is the directed cycle $\overrightarrow{\mathrm{C}}_4$ of length $4$.
Assume that $R(H)$ is the kernel of $N$ on $V(\Gamma_{R(H)})$ and $\Gamma$ is an $R(H)$-cover of $\Gamma_{R(H)}$. Then $\Gamma_{R(H)}$ has order $4$ and out-valency $p\geq3$; since a digraph of order $4$ has out-valency at most $3$, this forces $p=3$. According to~\cite[Theorem 1.4]{Ma}, $\mathrm{Q}_{4p^\ell}$ is a 3-DCI-group. Hence $\Gamma$ is a CI-digraph, as required.
Now assume that $\Gamma_{R(H)}=\overrightarrow{\mathrm{C}}_4$. Since $\Gamma$ is connected, we have $ba^i\in S$ for some integer $i$. Note that there is an automorphism $\alpha$ of $G$ sending $a$ and $b$ to $a$ and $ba^i$, respectively. Then replacing $S$ by $S^\alpha$, we may assume that $b\in S$, whence \begin{equation}\label{Eqn2} \mathrm{Arc}(\Gamma_{R(H)})=\{(H,bH),(bH,b^2H),(b^2H,b^3H),(b^3H,H)\}. \end{equation} Let $C=\mathrm{C}_A(R(H))$ and let $K$ be the kernel of $C$ acting on $V(\Gamma_{R(H)})$. Then $R(H)\leq C$, $R(b^2)\in C$, $C\unlhd N$, and $C/K\leq\mathrm{Aut}(\Gamma_{R(H)})=\mathrm{Aut}(\overrightarrow{\mathrm{C}}_4)\cong\mathbb{Z}_4$. Note that $b^iH$ is an orbit for both $R(H)$ and $K$. By the Frattini argument, $K=R(H)K_u$ for $u\in V(\Gamma)$. As $\Gamma_{R(H)}=\overrightarrow{\mathrm{C}}_4$, it follows that $C_u$ fixes $V(\Gamma_{R(H)})$ pointwise. Hence $C_u\leq K$ and $C_u=K_u$. Since $K\leq C=\mathrm{C}_A(R(H))$, we obtain \begin{equation}\label{Eqn3} K=R(H)\times C_u\text{ for every }u\in V(\Gamma). \end{equation}
Consequently, $C_1C_{b^2}\unlhd K$. Noting that $R(H)$ is a $p$-group, it follows from~\eqref{Eqn3} that $|K|_{p'}=|C_1|_{p'}=|C_{b^2}|_{p'}=|C_1C_{b^2}|_{p'}$. Since $|C_1\cap C_{b^2}|=|C_1||C_{b^2}|/|C_1C_{b^2}|$, this implies \[
|C_1\cap C_{b^2}|_{p'}=|K|_{p'}. \]
Suppose for a contradiction that $K=R(H)$. Then \eqref{Eqn3} implies that $C_1=1$, and so $N_1$ acts faithfully on $R(H)$ by conjugation. Hence $N_1\leq \mathrm{Aut}(R(H))\cong\mathbb{Z}_{p^{\ell-1}(p-1)}$ is cyclic. This together with \eqref{Eqn1} implies that $N_1$ has a unique subgroup of order $p$, say $P$. Let $L$ be the kernel of $N$ acting on $V(\Gamma_{R(H)})$. Since $\Gamma_{R(H)}=\overrightarrow{\mathrm{C}}_4$, it follows that $N_1$ fixes $V(\Gamma_{R(H)})$ pointwise, which means that $N_1=L_1$. Thus, by the Frattini argument, $L=R(H)N_1$. Consequently, $L/R(H)$ is cyclic. Write $M=R(H)P$. Then $M/R(H)$ is the unique subgroup of order $p$ of $L/R(H)$ and so characteristic in $L/R(H)$. Note that $L/R(H)\unlhd N/R(H)$. Then $M/R(H)\unlhd N/R(H)$. This implies that $R(H)\leq M\unlhd N$, and so all orbits of $M$ on $V(\Gamma)$ have length $|R(H)|$. Clearly, \[ R(H)P=M=R(H)M_1=R(H)M_b. \]
Since $|M|=|R(H)||P|=p|R(H)|$, we obtain $|M_1|=p=|M_b|$. Hence both $M_1$ and $M_b$ are cyclic groups of order $p$. Recall that $\mathrm{Aut}(R(H))\cong\mathbb{Z}_{p^{\ell-1}(p-1)}$ and $\ell\geq 2$. The unique subgroup of order $p$ of $\mathrm{Aut}(R(H))$ is generated by the automorphism $\gamma$ of $R(H)=\langle R(a^2)\rangle$ defined by \[ \gamma: R(a^2)\mapsto R(a^2)^r=R(a^{2r}),\ \text{ where } r:=p^{\ell-1}+1. \] Since the action of $M_1\leq N_1$ by conjugation on $R(H)$ is faithful, it follows that \[ R(a^2)^{\alpha}=R(a^2)^{\gamma}=R(a^{2r}) \text{ for some generator } \alpha \text{ of } M_1. \] For integers $i$ and $j$, since $a^2$ has order $n=p^\ell$ and $r^j\equiv jp^{\ell-1}+1\pmod{p^\ell}$, we have \begin{equation}\label{Eqn4} R(a^{2i})^{\alpha^j}=R(a^{2ir^j})=R(a^{2i(jp^{\ell-1}+1)})\in R(a^{2i})\langle R(a^{2p^{\ell-1}})\rangle. \end{equation} Take arbitrary $x,y\in M$. Since $M=R(H)M_1$, we may write $x=x_1x_2$ and $y=y_1y_2$ with $x_1,y_1\in R(H)$ and $x_2,y_2\in M_1$. Then the commutator \[ [x,y]=[x_1x_2,y_1y_2]=(x_1x_2)^{-1}(y_1y_2)^{-1}(x_1x_2)(y_1y_2)=(x_1^{-1})^{x_2}(y_1^{-1}x_1)^{x_2y_2}(y_1)^{y_2}. \] This together with \eqref{Eqn4} implies that $[x,y]\in \langle R(a^{2p^{\ell-1}})\rangle$. Hence the derived group \[ M'=\langle R(a^{2p^{\ell-1}})\rangle\cong\mathbb{Z}_p. \]
Since $M=M_bR(H)$, we may write $\alpha=\beta R(a^2)^t$ for some $\beta\in M_b$ and some integer $t$. Since $|M'|=p$, we derive from Proposition~\ref{regular-p-group} that $(R(a^2)^t)^p=(\beta^{-1}\alpha)^p=(\beta^{-1})^p\alpha^p=1$. Therefore, $t$ is divisible by $p^{\ell-1}$. In particular, $t$ is divisible by $p$ as $\ell\geq 2$. Since \[ b^\alpha=b^{\beta R(a^2)^t}=b^{R(a^2)^t}=ba^{2t}, \] we derive for each integer $k$ that \[ (ba^{2tk})^\alpha=b^{R(a^{2tk})\alpha}=b^{\alpha R(a^{2tk})^\alpha}=b^{\alpha R(a^{2tkr})}=(ba^{2t})^{R(a^{2tkr})}=ba^{2t(1+kr)}. \] Hence $\alpha$ stabilizes $b\langle a^{2t}\rangle$, and so $M_1=\langle \alpha\rangle$ stabilizes $b\langle a^{2t}\rangle$. Note that the stabilizer $M_1$ is transitive or trivial on the out-neighborhood $\Gamma^+(1)=S$ of $1$ in $V(\Gamma)$. If $M_1$ is trivial on $S$, then $M_1=1$ as $\Gamma$ is $N$-vertex-transitive and strongly connected, a contradiction. Hence $M_1$ is transitive on $S$, and so $S=b^{M_1}$ as $b\in S$. Then $S=b^{M_1}\subseteq b\langle a^{2t}\rangle$, and so $\langle S\rangle\leq \langle b,a^p\rangle<G$ as $p$ divides $t$. This contradicts the connectivity of $\Gamma$.
Thus we have $K\neq R(H)$. Suppose for a contradiction that $N$ has an imprimitive block system on $V(\Gamma)$ such that each block is an independent set of size $p$ and the induced subdigraph of any two blocks is either $\overline{\mathrm{K}}_{2p}$ or $\overrightarrow{\mathrm{K}}_{p,p}$. Let $\Delta$ be an imprimitive block containing $1$. Then since $R(G)$ is a regular subgroup of $N$, we derive that $\Delta$ is a subgroup of $G$ and $S$ is a union of left cosets of $\Delta$ in $G$. Since $b\in S$ and $|S|=p$, it follows that $\Delta=\langle a^{2p^{\ell-1}}\rangle$ and $S=b\Delta=b\langle a^{2p^{\ell-1}}\rangle$. This contradicts $\langle S\rangle=G$, which holds by the connectivity of $\Gamma$. Thus $N$ does not have such an imprimitive block system. However, we prove in the following, distinguishing two cases, that $\Gamma\cong\overrightarrow{\mathrm{C}}_{4p^{\ell-1}}[\overline{\mathrm{K}}_p]$, which will complete the proof of part~(b).
\noindent{\bf Case 1:} $C_1\cap C_{b^2}=1$.
Recall that $R(H)\times C_1=K\neq R(H)$. Then $C_1\neq 1$, and $|C_1|_{p'}=|K|_{p'}=|C_1\cap C_{b^2}|_{p'}=1$. This means that $C_1$ is a $p$-group. Since $C/K\leq\mathbb{Z}_4$, it follows that $K=R(H)\times C_1$ is a Sylow $p$-subgroup of $C$ and thus characteristic in $C$. Note that \[ C_1\cong C_1/(C_1\cap C_{b^2})\cong C_1C_{b^2}/C_{b^2}\leq K/C_{b^2}\cong R(H) \] is cyclic. Hence $K$ has a characteristic subgroup $D=X\times Y\cong \mathbb{Z}_p^2$, where $\mathbb{Z}_p\cong X\leq R(H)$ and $\mathbb{Z}_p\cong Y\leq C_1$. Then $D$ is characteristic in $C$. As $C\unlhd N$, we have $D\unlhd N$. Since $X$ is semiregular of order $p$ and $Y$ fixes the vertex $1$, we then conclude that the orbits of $D=YX$ on $V(\Gamma)$ all have length $p$. For every $u\in V(\Gamma)$, it follows that $D=XD_u$, and so $D_u\cong\mathbb{Z}_p$ is either transitive or trivial on the out-neighborhood $\Gamma^+(u)$ of $u$. If $D_u$ is trivial on $\Gamma^+(u)$, then $D_u=1$ as $\Gamma$ is $N$-vertex-transitive and strongly connected, contradicting $D_u\cong\mathbb{Z}_p$. Thus $D_u$ is transitive on $\Gamma^+(u)$ for every $u\in V(\Gamma)$. This implies that if $\Delta_1$ and $\Delta_2$ are two orbits of $D$ and there is an arc from some vertex of $\Delta_1$ to some vertex of $\Delta_2$, then $(x,y)\in \mathrm{Arc}(\Gamma)$ for all $x\in \Delta_1$ and $y\in \Delta_2$. Since $\Gamma$ has out-valency $p$, it follows that $\Gamma\cong \overrightarrow{\mathrm{C}}_{4p^{\ell-1}}[\overline{\mathrm{K}}_p]$, as required.
\noindent{\bf Case 2:} $C_1\cap C_{b^2}\neq1$.
Recalling that $\Gamma_{R(H)}=\overrightarrow{\mathrm{C}}_4$ and $b\in S$, we have $S\cap(H\cup b^2H\cup b^3H)=\emptyset$. From $C/K\leq\mathbb{Z}_4$ we deduce that $B:=K\langle R(b^2)\rangle\unlhd C$. Since $R(b^2)$ interchanges $C_1$ and $C_{b^2}$ by conjugation, we have $C_1\cap C_{b^2}\unlhd B$. Note that $H\cup b^2H$ and $bH\cup b^3H$ are the orbits of $B$ on $V(\Gamma)$. Then the orbits of $C_1\cap C_{b^2}$ on $bH\cup b^3H$ have the same length, say $t$. Hence the valency of $\Gamma$ is a multiple of $t$. As $\Gamma$ is $p$-valent, we deduce that $t=1$ or $p$. Recall that \[ K=R(H)\times C_1=R(H)\times C_{b^2}. \] The group $C_1\cap C_{b^2}$ fixes $H\cup b^2H$ pointwise. If $t=1$, then $C_1\cap C_{b^2}$ fixes both $H\cup b^2H$ and $bH\cup b^3H$ pointwise, which means that $C_1\cap C_{b^2}=1$, a contradiction. Thus $t=p$, that is, the orbits of $C_1\cap C_{b^2}$ on $bH\cup b^3H$ all have length $p$. Since $R(b)\in N$ normalizes $C$, it follows that $C_b\cap C_{b^3}=(C_1\cap C_{b^2})^{R(b)}$ fixes $(H\cup b^2H)^{R(b)}=bH\cup b^3H$ pointwise and that the orbits of $C_b\cap C_{b^3}$ on $(bH\cup b^3H)^{R(b)}=H\cup b^2H$ all have length $p$. Let \[ T=(C_1\cap C_{b^2})(C_b\cap C_{b^3}). \] Then all orbits of $T$ on $V(\Gamma)$ have length $p$. Note that $C_1\cap C_{b^2}\leq T_v$ for every $v\in H\cup b^2H$ and $C_b\cap C_{b^3}\leq T_w$ for every $w\in bH\cup b^3H$. Then we derive from~\eqref{Eqn2} that the stabilizer $T_u$ is transitive on the out-neighbors of $u$ in $\Gamma$ for every $u\in V(\Gamma)$. This implies that if $\Delta_1$ and $\Delta_2$ are two orbits of $T$ and there exists an arc from some vertex of $\Delta_1$ to some vertex of $\Delta_2$, then $(x,y)\in \mathrm{Arc}(\Gamma)$ for all $x\in \Delta_1$ and $y\in \Delta_2$. Hence $\Gamma\cong \overrightarrow{\mathrm{C}}_{4p^{\ell-1}}[\overline{\mathrm{K}}_p]$, as required. \end{proof}
Let $p$ be a prime, let $\Gamma$ be a connected Cayley digraph of a finite group $G$ of valency $m<p$, and let $A=\mathrm{Aut}(\Gamma)$. By the same argument as in~\cite[Lemma 2.1]{C.H.Li0}, every prime divisor of $|A_1|$ is less than $p$. Thus the following result is a consequence of Lemma~\ref{p-CI-Q}.
\begin{lem}\label{p-power-CI-Q}
Let $n$ be a power of an odd prime $p$, and let $\Gamma=\mathrm{Cay}(\mathrm{Q}_{4n},S)$ be a connected Cayley digraph of $\mathrm{Q}_{4n}$ with $|S|\leq p$. Then $\Gamma$ is a CI-digraph. \end{lem}
Now we are ready to prove Theorem~\ref{mainth2}.
\begin{proof}[Proof of Theorem~\ref{mainth2}.] Let $n=p^\ell$, where $p$ is a prime and $\ell$ is a positive integer, let $G=\mathrm{Q}_{4n}=\langle a,b\mid a^{2n}=1,\, b^2=a^n,\,a^b=a^{-1}\rangle$, and let $m$ be an integer with $1\leq m\leq 2n-1$.
First, we suppose that $G$ has the $m$-DCI property. By Theorem~\ref{mainth1}, $n$ is odd, and so $p$ is odd. If $\ell \geq 2$, then it follows from Theorem~\ref{mainth1} that $m \leq p$. This shows that either $n=p$ or $m\leq p$, which completes the proof of the necessity.
Next, we prove the sufficiency. Suppose that $p$ is odd and either $n=p$ or $m\leq p$. If $n=p$, then since $\mathrm{Q}_{4p}$ is a DCI-group by~\cite{Mu4}, $G$ has the $m$-DCI property. Now assume $m\leq p$. Let $\mathrm{Cay}(G,S)$ be a Cayley digraph with $|S|=m$, and let $\mathrm{Cay}(G,T)$ be a Cayley digraph isomorphic to $\mathrm{Cay}(G,S)$. Since $\mathrm{Cay}(G,S)\cong \mathrm{Cay}(G,T)$, we have $\mathrm{Cay}(\langle S\rangle,S)\cong \mathrm{Cay}(\langle T\rangle,T)$, which implies that $|\langle S\rangle|=|\langle T\rangle|$. As $G$ is a generalized quaternion group of order $4p^\ell$ with $p$ an odd prime, it follows that $\langle S\rangle\cong\langle T\rangle$. According to Lemma~\ref{homogeneous}, there exists $\delta\in\mathrm{Aut}(G)$ such that $\langle T\rangle^\delta=\langle S\rangle$. Then we have \[ \mathrm{Cay}(\langle T\rangle,T)\cong \mathrm{Cay}(\langle T\rangle^\delta,T^\delta)=\mathrm{Cay}(\langle S\rangle,T^\delta), \]
and hence $\mathrm{Cay}(\langle S\rangle,S)\cong \mathrm{Cay}(\langle S\rangle,T^\delta)$. Set $\Gamma=\mathrm{Cay}(\langle S\rangle,S)$. Then $\Gamma$ is a connected $m$-valent Cayley digraph with $m=|S|\leq p$. Being a subgroup of $\mathrm{Q}_{4n}$, $\langle S\rangle$ is either a cyclic or a generalized quaternion subgroup of $\mathrm{Q}_{4n}$. By Lemma~\ref{p-CI-cyclic} in the cyclic case and Lemma~\ref{p-power-CI-Q} in the generalized quaternion case, $\Gamma$ is a CI-digraph, so there is an automorphism of $\langle S\rangle$ mapping $S$ to $T^\delta$. Again by Lemma~\ref{homogeneous}, this automorphism can be extended to an automorphism of $G$, say $\gamma$. Then $S^\gamma=T^{\delta}$, and by taking $\sigma=\gamma\delta^{-1}$ we have $\sigma\in \mathrm{Aut}(G)$ and $S^\sigma=T$. This shows that $G$ has the $m$-DCI property, proving the sufficiency. \end{proof}
\noindent {\bf Acknowledgements:} The work was supported by the National Natural Science Foundation of China (12161141005,12271024), the 111 Project of China (B16002) and the scholarship No.~202207090064 from the China Scholarship Council. The work was done during a visit of the first author to The University of Melbourne. The first author would like to thank The University of Melbourne for its hospitality and Beijing Jiaotong University for consistent support.
\end{document} |
\begin{document}
\title[]{Numerical methods for time-fractional evolution equations with nonsmooth data: a concise overview} \author[Bangti Jin]{Bangti Jin} \address{Department of Computer Science, University College London, Gower Street, London, WC1E 2BT, UK.} \email {[email protected],[email protected]}
\author[Raytcho Lazarov]{$\,\,$Raytcho Lazarov$\,$} \address{Department of Mathematics, Texas A\&M University, College Station, TX 77843, USA} \email {[email protected]}
\author[Zhi Zhou]{$\,\,$Zhi Zhou$\,$} \address{Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, P.R. China} \email {[email protected]}
\keywords{time-fractional evolution, subdiffusion, nonsmooth solution, finite element method, time-stepping, initial correction, error estimates, space-time formulation}
\date{\today}
\begin{abstract} Over the past few decades, there has been substantial interest in evolution equations involving a fractional-order derivative of order $\alpha\in(0,1)$ in time, due to their
many successful applications in engineering, physics, biology and finance. Thus, it is of paramount importance to develop and to analyze efficient and accurate numerical methods for reliably simulating such models, and the literature on the topic is vast and fast growing. The present paper gives a concise overview of numerical schemes for the subdiffusion model with nonsmooth problem data, which are important for the numerical analysis of many problems arising in optimal control, inverse problems and stochastic analysis. We focus on the following aspects of the subdiffusion model: regularity theory, Galerkin finite element discretization in space, time-stepping schemes (including convolution quadrature and L1 type schemes), and space-time variational formulations, and compare the results with those for standard parabolic problems. Further, these aspects are showcased with illustrative numerical experiments and complemented with perspectives and pointers to relevant literature. \end{abstract}
\maketitle
\section{Introduction}\label{sec:intro} Diffusion is one of the most prominent transport mechanisms found in nature. The classical diffusion model $\partial_t u-\Delta u=f$, which employs a first-order derivative $\partial_t u$ in time and the Laplace operator $\Delta u$ in space, rests on the assumption that the particle motion is Brownian. One of the distinct features of Brownian motion is a linear growth of the mean squared particle displacement with the time $t$. Over the last few decades, a long list of experimental studies indicates that the Brownian motion assumption may not be adequate for accurately describing some physical processes, and the mean squared displacement can grow either sublinearly or superlinearly with time $t$; these regimes are known as subdiffusion and superdiffusion, respectively, in the literature. These experimental studies cover an extremely broad and diverse range of important practical applications in engineering, physics, biology and finance, including electron transport in Xerox photocopiers \cite{ScherMontroll:1975}, visco-elastic materials \cite{Caputo:1967, GinoaCerbelliRoman:1992}, thermal diffusion in fractal domains \cite{Nigmatulin:1986}, column experiments \cite{HatanoHatano:1998} and protein transport in cell membranes \cite{Kou:2008}, etc. The underlying stochastic process for subdiffusion and superdiffusion is usually given by a continuous-time random walk and a L\'{e}vy process, respectively, and the corresponding macroscopic model for the probability density function of the particle appearing at a given time $t$ and location $x$ is a diffusion model with a fractional-order derivative in time and in space, respectively. We refer interested readers to the excellent surveys \cite{MetzlerJeon:2014,MetzlerKlafter:2000} for an extensive list of practical applications and physical modeling in engineering, physics, and biology.
The present work surveys rigorous numerical methods for subdiffusion. The prototypical mathematical model for subdiffusion is as follows. Let $\Omega\subset\mathbb{R}^d $ ($d= 1,2,3$) be a convex polygonal domain with a boundary $\partial\Omega$, and consider the following fractional-order parabolic problem for the function $u(x,t)$: \begin{align}\label{eqn:pde} \left\{\begin{aligned} {\partial^\alpha_t} u(x,t) -\Delta u(x,t) &= f(x,t) &&(x,t)\in\Omega\times(0,T], \\ u(x,t)&=0 &&(x,t)\in \partial\Omega\times(0,T], \\ u(x,0)&=v(x) &&x\in\Omega, \end{aligned} \right. \end{align} where $T>0$ is a fixed final time, $f \in L^\infty(0,T;L^2(\Omega))$ and $v\in L^2(\Omega)$ are given source term and initial data, respectively, and $\Delta$ is the Laplace operator in space. Here ${\partial^\alpha_t} u(t)$ denotes the Caputo fractional derivative in time $t$ of order $\alpha\in(0,1)$ \cite[p. 70]{KilbasSrivastavaTrujillo:2006} \begin{align}\label{eqn:RLderive}
{\partial^\alpha_t} u(t)= \frac{1}{\Gamma(1-\alpha)}\int_0^t(t-s)^{-\alpha} u'(s){\rm d} s, \end{align} where $\Gamma(z)$ is the Gamma function defined by \begin{equation*}
\Gamma(z) = \int_0^\infty s^{z-1}e^{-s}{\rm d} s,\quad \Re (z)>0. \end{equation*} It is named after the geophysicist Michele Caputo \cite{Caputo:1967}, who first introduced it to describe the stress-strain relation in linear elasticity, although his work was predated by that of the Armenian mathematician Mkhitar Djrbashian \cite{Djrbashian:1993}; more precisely, it should thus be called the Djrbashian-Caputo fractional derivative. Note that the fractional derivative ${\partial^\alpha_t} u$ recovers the usual first-order derivative $u'(t)$ as $\alpha\to1^-$, provided that $u$ is sufficiently smooth \cite[p. 100]{NakagawaSakamotoYamamoto:2010}. Thus the model \eqref{eqn:pde} can be viewed as a fractional analogue of the classical parabolic equation, and it is natural and instructive to compare its analytical and numerical properties with those of standard parabolic problems.
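As a quick sanity check of the definition \eqref{eqn:RLderive}, the Caputo derivative can be approximated by replacing $u$ with its piecewise-linear interpolant on a uniform grid and integrating the kernel $(t-s)^{-\alpha}$ exactly on each subinterval; this is the idea behind the L1 scheme discussed later. The sketch below is illustrative only (the function and parameter names are ours). For $u(t)=t$ the interpolant is exact, so the result matches the closed form $\partial_t^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$.

```python
import math

def caputo_l1(u_vals, tau, alpha):
    """Approximate the Caputo derivative of order alpha at t_n = n*tau.

    u_vals holds u(0), u(tau), ..., u(n*tau); u is replaced by its
    piecewise-linear interpolant, and (t_n - s)^(-alpha) is integrated
    exactly over each subinterval (an L1-type quadrature)."""
    n = len(u_vals) - 1
    acc = 0.0
    for j in range(n):
        # exact integral of (t_n - s)^(-alpha) over [t_j, t_{j+1}], up to
        # the factor 1/(1 - alpha) absorbed into Gamma(2 - alpha) below
        w = ((n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)) * tau ** (1.0 - alpha)
        acc += w * (u_vals[j + 1] - u_vals[j]) / tau  # slope of the interpolant
    return acc / math.gamma(2.0 - alpha)

alpha, tau, n = 0.5, 0.01, 100          # evaluate at t_n = 1.0
u = [k * tau for k in range(n + 1)]     # u(t) = t, so u'(s) = 1
approx = caputo_l1(u, tau, alpha)
exact = 1.0 / math.gamma(2.0 - alpha)   # t^{1-alpha}/Gamma(2-alpha) at t = 1
```

Since $u$ is linear here the quadrature is exact; for general smooth $u$ this interpolation carries a discretization error, which is quantified in the section on time-stepping schemes.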
\begin{remark} All the discussions below extend straightforwardly to a general second-order coercive and symmetric elliptic differential operator, given by $\nabla\cdot(a(x)\nabla u(x)) - q(x)u(x)$ with $q\ge0$ a.e. \end{remark}
Motivated by its tremendous success in the mathematical modeling of many physical problems, over the last two decades there has been an explosive growth in numerical methods, algorithms, and analysis for the subdiffusion model. More recently this interest has been extended to related topics in optimal control, inverse problems and stochastic fractional models. The literature on the topic is vast and still growing fast, first in the scientific and engineering computation community and more recently also in the numerical analysis community; see, e.g., the recent special issues on the topic in the journals \textit{Journal of Computational Physics} \cite{KarniadakisHesthavenPodlubny:2015} and \textit{Computational Methods in Applied Mathematics} \cite{JinLazarovVabishchevich:2017} for some important progress in the area of numerical methods for fractional evolution equations.
It is impossible to survey all important and relevant works in a short review. Instead, in this paper, we aim to review only works on numerical methods for the subdiffusion model \eqref{eqn:pde} with nonsmooth problem data. This choice allows us to highlight some distinct features common to many nonlocal models, especially how the smoothness of the data influences the solution and the corresponding numerical methods.
It is precisely these features that pose substantial new mathematical and computational challenges when compared with standard parabolic problems, and extra care has to be exerted when developing and analyzing numerical methods. In particular, since the solution operators of the fractional model have only a limited smoothing property, a numerical method that requires high regularity of the solution imposes severe restrictions (compatibility conditions) on the data; it then generally does not work well, which substantially limits its scope of potential applications. Finally, nonsmooth data analysis is fundamental to the rigorous study of related application areas, e.g., optimal control, inverse problems, and stochastic fractional diffusion (see, e.g., \cite{JinLiZhou:2017control,JinRundell:2015,Yan:2005}).
Amongst the numerous possible choices, we shall focus the review on the following four aspects: \begin{itemize}
\item[(i)] Regularity theory in Sobolev spaces;
\item[(ii)] Spatial discretization via finite element methods (FEMs), e.g., standard Galerkin, lumped mass and finite volume element methods;
\item[(iii)] Temporal discretization via time-stepping schemes (including convolution quadrature and L1 type schemes);
\item[(iv)] Space-time formulations (Galerkin or Petrov-Galerkin type). \end{itemize} For each aspect, we describe some representative results and leave most of the technical proofs to the references. Further, we compare the results with those for standard parabolic problems (see, e.g., \cite{Thomee:2006}) and give some numerical illustrations of the theory. Finally, we complement each part with comments on future research problems and further references. The goal of the overview is to give readers a flavor of the numerical analysis of nonlocal problems and of potential pitfalls in developing efficient numerical methods. We also refer the readers to the excellent surveys on other nonlocal problems and applications, namely, on problems involving the fractional (spectral and integral) Laplacian \cite{BonitoBorthagaray:2018}, on applications to image processing \cite{yang2016fractional}, and on nonlocal problems arising in peridynamics \cite{DuGunzburgerLehoucqZhou:2012}. For a nice overview of the numerical methods for fractional-order ordinary differential equations, we refer to the paper \cite{DiethelmFordFreedLuchko:2005}.
The rest of the paper is organized as follows. For the model \eqref{eqn:pde}, in Section \ref{sec:reg} we describe the regularity theory, and in Sections \ref{sec:fem} and \ref{sec:time-stepping} we discuss finite element methods and two popular classes of time stepping schemes, i.e., convolution quadrature and L1 type schemes, respectively. Then, in Section \ref{sec:space-time} we discuss two space-time formulations for problem \eqref{eqn:pde} with $v=0$. We conclude the overview with some further discussions in Section \ref{sec:conclus}. Throughout, the discussions focus on the case of nonsmooth problem data, and only directly relevant references are given. Obviously, the list of references is not meant to be complete in any sense; it is strongly biased by the personal taste of the authors and limited by their knowledge. Throughout, the notation $c$ denotes a generic constant which may change at each occurrence, but it is always independent of the discretization parameters $h$ and $\tau$ etc. In the paper we use the standard notation for Sobolev spaces (see, e.g., \cite{AdamsFournier:2003}).
\section{Regularity of the solution}\label{sec:reg}
First, we describe some regularity results for the model \eqref{eqn:pde}, which are crucial for rigorous numerical analysis. To this end, we need suitable function spaces. The most convenient one for our purpose is the space $\dot H^s(\Omega)$ defined as below \cite[Chapter 3]{Thomee:2006}. Let $\{\lambda_j\}_{j=1}^\infty$ and $\{\varphi_j\}_{j=1}^\infty$ be respectively the eigenvalues (ordered nondecreasingly with multiplicity counted) and the $L^2(\Omega)$-orthonormal eigenfunctions of the negative Laplace operator $-\Delta$ on the domain $\Omega$ with a zero Dirichlet boundary condition. Then $\{\varphi_j\}_{j=1}^\infty$
forms an orthonormal basis in $L^2(\Omega)$.
For any real number $s\ge-1$, we denote by $\dH s$ the Hilbert space consisting of the functions of the form \begin{equation*}
v = \sum_{j=1}^\infty \langle v,\varphi_j\rangle\varphi_j, \end{equation*} where $\langle\cdot,\cdot\rangle$ denotes the duality pairing between $H^{-1}(\Omega)$ and $H_0^1 (\Omega)$, and it coincides with the usual $L^2(\Omega)$ inner product $(\cdot,\cdot)$ if the function
$v\in L^2(\Omega)$. The induced norm $\|\cdot\|_{\dH s}$ is defined by \begin{equation*}
\|v\|_{\dH s}^2=\sum_{j=1}^{\infty}\lambda_j^s\langle v,\varphi_j \rangle^2. \end{equation*}
Then, $\|v\|_{\dH 0}=\|v\|_{L^2(\Omega)}=(v,v)^\frac{1}{2}$ is the norm in $L^2(\Omega)$ and $\|v\|_{\dH {-1}}
= \|v\|_{H^{-1}(\Omega)}$ is the norm in $H^{-1}(\Omega)$. Besides, it is easy to verify that
$\|v\|_{\dH 1}= \|\nabla v\|_{L^2(\Omega)}$ is also an equivalent norm in $H_0^1(\Omega)$
and $\|v\|_{\dH 2}=\|\Delta v\|_{L^2(\Omega)}$ is equivalent to the norm in $H^2(\Omega)\cap H^1_0(\Omega)$, provided that the domain $\Omega$ is convex \cite[Section 3.1]{Thomee:2006}. Note that the spaces $\dot H^s(\Omega)$, $s\ge -1$, form a Hilbert scale of interpolation spaces. Motivated by this, we let $\|\cdot\|_{H_0^s(\Omega)}$ denote the norm on the interpolation scale between $H^1_0(\Omega)$ and $L^2(\Omega)$ for $s\in[0,1]$, and the norm on the interpolation scale between $L^2(\Omega)$ and $H^{-1}(\Omega)$ for $s\in[-1,0]$. Then $\|\cdot\|_{H_0^s(\Omega)}$ and $\|\cdot\|_{\dH s}$ are equivalent for $s\in[-1,0]$ by interpolation.
There are several different ways to analyze problem \eqref{eqn:pde}. We outline one approach to derive regularity results by means of Laplace transform below. We denote the Laplace transform of a function $f:(0,\infty)\to\mathbb{R}$ by $\widehat{f}$ below. The starting point of the analysis is the following identity on the Laplace transform of the Caputo fractional derivative $\partial_t^\alpha u(t)$ \cite[Lemma 2.24, p. 98]{KilbasSrivastavaTrujillo:2006} \begin{equation*}
\widehat{\partial_t^\alpha u}(z) = z^\alpha \widehat{u}(z) - z^{\alpha-1}u(0). \end{equation*} By viewing $u(t)$ as a vector-valued function and applying the Laplace transform to both sides of \eqref{eqn:pde}, we obtain \begin{equation*}
z^\alpha \widehat{u}(z) -\Delta \widehat{u} = \widehat f + z^{\alpha-1}u(0), \end{equation*} i.e., \begin{equation*}
\widehat{u}(z) = (z^\alpha-\Delta)^{-1}(\widehat f + z^{\alpha-1}u(0)). \end{equation*} By inverse Laplace transform and the convolution rule, the solution $u(t)$ can be formally represented by \begin{align}\label{eqn:Sol-expr-u-const} u(t)= F(t)v + \int_0^t E(t-s) f(s) {\rm d} s , \end{align} where the solution operators $F(t)$ and $E(t)$ are respectively defined by \begin{align*} F(t):=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\delta }}e^{zt} z^{\alpha-1} (z^\alpha-\Delta )^{-1}\, {\rm d} z \quad\mbox{and}\quad E(t):=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\delta}}e^{zt} (z^\alpha-\Delta)^{-1}\, {\rm d} z , \end{align*} with integration over a contour $\Gamma_{\theta,\delta}$ in the complex plane (oriented counterclockwise), i.e., \begin{equation*}
\Gamma_{\theta,\delta}=\left\{z\in \mathbb{C}: |z|=\delta, |\arg z|\le \theta\right\}\cup
\{z\in \mathbb{C}: z=\rho e^{\pm\mathrm{i}\theta}, \rho\ge \delta\} . \end{equation*} Throughout, we fix $\theta \in(\frac{\pi}{2},\pi)$ so that $z^{\alpha} \in \Sigma_{\alpha\theta} \subset \Sigma_{\theta}:=\{0\neq z\in\mathbb{C}: |{\rm arg}(z)|\leq\theta\}$ for all $z\in\Sigma_{\theta}$. Recall the following resolvent estimate for the Laplacian $\Delta$ with a homogeneous Dirichlet boundary condition: \begin{equation} \label{eqn:resol}
\| (z-\Delta )^{-1} \|\le c_\phi |z|^{-1}, \quad \forall z \in \Sigma_{\phi},
\,\,\,\forall\,\phi\in(0,\pi), \end{equation}
where $\|\cdot\|$ denotes the operator norm from $L^2(\Omega)$ to $L^2(\Omega)$.
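In terms of the eigenfunction expansion introduced below, the resolvent estimate \eqref{eqn:resol} reduces to the scalar bound $|z|/|z+\lambda|\le 1/\sin(\pi-\phi)$ for all $\lambda>0$ and $z\in\Sigma_\phi$ with $\phi\in(\frac{\pi}{2},\pi)$. A small numerical check of this bound for the eigenvalues of the 1D discrete Dirichlet Laplacian (the sector angle, mesh size, and sample moduli below are arbitrary choices of ours):

```python
import cmath
import math

n = 64
h = 1.0 / n
# eigenvalues of the negative 1D discrete Dirichlet Laplacian on (0,1)
lams = [4.0 / h ** 2 * math.sin(j * math.pi * h / 2.0) ** 2 for j in range(1, n)]

phi = 3.0 * math.pi / 4.0             # any angle in (pi/2, pi)
bound = 1.0 / math.sin(math.pi - phi)  # = sqrt(2) for this phi
worst = 0.0
for rho in (1e-2, 1.0, 1e2, 1e4):      # sample |z| along the ray arg z = phi
    z = rho * cmath.exp(1j * phi)
    # for a symmetric operator, the resolvent norm is max_j 1/|z + lambda_j|
    resolvent_norm = max(1.0 / abs(z + lam) for lam in lams)
    worst = max(worst, abs(z) * resolvent_norm)
# worst stays below 1/sin(pi - phi), consistent with the estimate above
```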
Equivalently, using the eigenfunction expansion $\{(\lambda_j,\varphi_j)\}_{j=1}^\infty$, these operators can be expressed as \begin{equation*}
F(t)v = \sum_{j=1}^\infty E_{\alpha,1}(-\lambda_jt^\alpha)(v,\varphi_j)\varphi_j\quad \mbox{and}\quad
E(t)v = \sum_{j=1}^\infty t^{\alpha-1}E_{\alpha,\alpha}(-\lambda_jt^\alpha)(v,\varphi_j)\varphi_j. \end{equation*} Here $E_{\alpha,\beta}(z)$ is the two-parameter Mittag-Leffler function defined by \cite[Section 1.8, pp. 40-45]{KilbasSrivastavaTrujillo:2006} \begin{equation*}
E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^{k}}{\Gamma(k\alpha+\beta)}\quad \forall z\in \mathbb{C}. \end{equation*} The Mittag-Leffler function $E_{\alpha,\beta}(z)$ is a generalization of the familiar exponential function $e^z$ appearing in normal diffusion, and it can be evaluated efficiently via contour integrals \cite{GorenfloLoutchko:2002b, SeyboldHilfer:2008}. Since the solution operators involve only $E_{\alpha,\beta}(z)$ with a negative real argument $z$, the following decay behavior of $E_{\alpha,\beta}(z)$ is crucial to the smoothing properties of $F(t)$ and $E(t)$: for any $\alpha\in (0,1)$, the function $E_{\alpha,1} (-\lambda t^\alpha)$ decays only polynomially like $t^{-\alpha}$ as $t\to\infty$ \cite[equation (1.8.28), p. 43]{KilbasSrivastavaTrujillo:2006}, which contrasts sharply with the exponential decay of $e^{-\lambda t}$ appearing in normal diffusion. These important features directly translate into the limited smoothing property in space and in time for the solution operators $E(t)$ and $F(t)$.
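For moderate arguments, $E_{\alpha,\beta}(z)$ can be evaluated directly from its power series; the contour-integral algorithms cited above are needed mainly for large $|z|$, where the partial sums suffer from cancellation in floating point. A minimal sketch (the truncation level is a rough heuristic of ours, not a tuned choice):

```python
import math

def mittag_leffler(alpha, beta, z, terms=100):
    """Truncated power series for E_{alpha,beta}(z).

    E_{alpha,beta} is entire, but in floating point the partial sums
    are reliable only for moderate |z|; for large |z| contour-integral
    methods should be used instead."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))

# E_{1,1} is the exponential, recovering normal diffusion:
val = mittag_leffler(1.0, 1.0, 1.0)    # ~ e = 2.71828...
# for 0 < alpha < 1, E_{alpha,1}(-x) decreases in x > 0, but only
# polynomially fast, in contrast to exp(-x)
```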
Next, we state a few regularity results. The proof of these results can be found in, e.g., \cite{Bajlekov:2001,JinLiZhou:nonlinear,SakamotoYamamoto:2011}.
\begin{theorem} \label{thm:reg-u} Let $u(t)$ be the solution to problem \eqref{eqn:pde}. Then the following statements hold. \begin{itemize}
\item[(i)] If $v \in \dH s$ with $s\in(-1,2]$ and $f=0$, then $u(t)\in \dH {s+2}$ and \begin{equation*}
\| \partial_t^{(m)} u(t) \|_{\dH p} \le c t^{\frac{(s-p)\alpha}{2}-m} \| v \|_{\dH s} \end{equation*} with $0\le p-s\le 2$ and any integer $m\ge 0$.
\item[(ii)]If $v=0$ and $f\in L^p(0,T;L^2(\Omega))$ with $1<p<\infty$, then there holds \begin{equation*}
\| u\|_{L^p(0,T;\dot H^2(\Omega))}
+\|{\partial^\alpha_t} u\|_{L^p(0,T;L^2(\Omega))}
\le c\|f\|_{L^p(0,T;L^2(\Omega))}. \end{equation*} Moreover, if $f\in L^\infty(0,T;L^2(\Omega))$, we have for any $\epsilon\in(0,1)$ \begin{equation*}
\| u(t) \|_{ \dot H^{2-\epsilon}(\Omega) } \le c \epsilon^{-1} t^{\epsilon\alpha} \| f \|_{L^\infty(0,t;L^2(\Omega))}. \end{equation*}
\item[(iii)]If $v=0$ and $f\in C^{m-1}([0,T];L^2(\Omega))$, $\int_0^t (t-s)^{\alpha-1} \| \partial_s^{(m)} f(s) \|_{ L^2(\Omega)}{\rm d} s<\infty$, then there holds \begin{equation*}
\| \partial_t^{(m)} u(t) \|_{ L^2(\Omega) } \le c \sum_{k=0}^{m-1} \| \partial_t^{(k)}f (0) \|_{L^2(\Omega)}t^{\alpha-k}+ \int_0^t (t-s)^{\alpha-1} \| \partial_s^{(m)} f(s) \|_{ L^2(\Omega)}{\rm d} s. \end{equation*} \end{itemize} \end{theorem}
The estimate in Theorem \ref{thm:reg-u}(i) indicates that for homogeneous problems, the solution $u(t)$ is smooth in time $t>0$ (actually analytic in a sector in the complex plane $\mathbb{C}$ \cite[Theorem 2.1]{SakamotoYamamoto:2011}), but has a weak singularity around $t=0$. The strength of the singularity depends on the regularity of the initial data $v$: the smoother is $v$ (measured in the space $\dot H^s(\Omega)$), the less singular is the solution $u$ at the initial layer. Interestingly, even if the initial data $v$ is very smooth, the solution $u$ is generally not very smooth in time in the fractional case, which also differs from the standard parabolic case. By now, it is well known that smooth solutions are produced by a small class of data \cite{Stynes:2016}. The condition $0\leq p-s\leq 2$ in Theorem \ref{thm:reg-u}(i) represents an essential restriction on the smoothing property in space of order two. This restriction contrasts sharply with that for the standard diffusion equation: the following estimate
\begin{equation*}
\| \partial_t^{(m)} u(t) \|_{\dH p} \le c t^{\frac{s-p}{2}-m} \| v \|_{\dH s} \end{equation*} holds for any $t>0$ and any $p\geq s, m\geq 0$ (see, e.g. \cite[Lemma 3.2, p. 39]{Thomee:2006}). This means that the solution operator for standard parabolic problems is infinitely smoothing in space, as long as $t>0$. The limited smoothing property in space of the model \eqref{eqn:pde} represents one very distinct feature, which is generic for many other nonlocal (in time) models.
The first inequality in Theorem \ref{thm:reg-u}(ii) is often known as maximal $L^p$ regularity, which is very useful in the numerical analysis of nonlinear problems (see, e.g., \cite{KovacsLiLubich:2016, AkrivisLiLubich2017} for standard parabolic problems and \cite{JinLiZhou:nonlinear} for subdiffusion). Theorem \ref{thm:reg-u}(iii) asserts that the temporal regularity of the solution $u(t)$ is essentially determined by that of the right hand side $f$. The solution $u(t)$ can still have a weak singularity near $t=0$, even for a very smooth source term $f$, which again differs dramatically from standard parabolic problems. In order to have high temporal regularity uniformly in time $t$ for the inhomogeneous problem, it is necessary to impose the following (rather restrictive) compatibility conditions: $\partial_t^{(k)}f(0)=0$, $k=0,\ldots,m-1$. In the numerical analysis, it is important to take into account the initial singularity of the solution, which represents one of the main challenges in developing robust numerical methods.
Now we illustrate the results for the homogeneous problem. \begin{example}\label{exam:reg} Consider problem \eqref{eqn:pde} on the unit interval $\Omega=(0,1)$ with \begin{itemize}
\item[(i)] $v=\sin(\pi x)$ and $f=0$;
\item[(ii)] $v=\delta_{0.5}(x)$, with $\delta_{0.5}(x)$ the Dirac $\delta$ function concentrated at $x=0.5$, and $f=0$. \end{itemize} The solution $u(t)$ in case {\rm(i)} is given by $u(t)=E_{\alpha,1}(-\pi^2t^\alpha)\sin(\pi x)$. Since $\sin(\pi x)$ is a Dirichlet eigenfunction of the negative Laplacian $-\Delta$ on $\Omega$, it is easy to see that for any $s\geq0$, $v\in \dH s$, but the solution $u(t)$ has limited temporal regularity for any $\alpha\in(0,1)$: as $t\to0$, $E_{\alpha,1}(-\pi^2t^\alpha)\sim 1-\frac{\pi^2}{\Gamma(\alpha+1)}t^\alpha$, which is continuous at $t=0$ but with an unbounded first-order derivative. This observation clearly reflects the inherently limited smoothing property in time of problem \eqref{eqn:pde}. It contrasts sharply with the standard parabolic case, $\alpha=1$, for which the solution $u(t)$ is given explicitly by $u(t)=e^{-\pi^2 t}\sin (\pi x)$ and is $C^\infty $ in time. In Fig. \ref{fig:diffwave2-space}, we show the solution profiles for $\alpha=0.5$ and $\alpha=1$ at different time instances for case {\rm(ii)}. Observe that the solution profile for $\alpha=1$ decays much faster than that for $\alpha=0.5$. For any $t>0$, the solution $u(t)$ is very smooth in space for $\alpha=1$, but it remains nonsmooth for $\alpha=0.5$. In the latter case, the kink at $x=0.5$ in the plot shows clearly the limited spatial smoothing property of the solution operator $F(t)$, and it persists no matter how long the problem evolves. \end{example}
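The initial layer in case (i) can also be seen numerically: with $g(t):=E_{\alpha,1}(-\pi^2 t^\alpha)$, the difference quotient $(g(t)-g(0))/t \approx -\pi^2 t^{\alpha-1}/\Gamma(1+\alpha)$ blows up as $t\to0^+$. A quick check via the truncated Mittag-Leffler series (a sketch; the truncation level is an ad hoc choice of ours, adequate only for the small arguments used here):

```python
import math

def ml(alpha, z, terms=80):
    # truncated series for E_{alpha,1}(z); fine for the small |z| used here
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))

alpha = 0.5
quotients = []
for t in (1e-2, 1e-4, 1e-6):
    g = ml(alpha, -math.pi ** 2 * t ** alpha)
    quotients.append(abs(g - 1.0) / t)   # |g(t) - g(0)| / t, with g(0) = 1
# the quotients grow like t^(alpha - 1) = t^(-1/2): u'(0+) is unbounded
```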
\begin{figure}
\caption{The solution profiles for Example \ref{exam:reg}(ii) at three time instances for $\alpha=0.5$ and $1$.}
\label{fig:diffwave2-space}
\end{figure}
The analytical theory of problem \eqref{eqn:pde} has been developed successfully in the last two decades, e.g., \cite{Bajlekov:2001,EidelmanKochubei:2004,Luchko:2009,Pskhu:2009,McLean:2010,SakamotoYamamoto:2011,Kochubei:2014,JinLiZhou:nonlinear, AllenCaffarelliVasseur:2016,LiuRundellYamamoto:2016,Yamamoto:2018,GalWarma:2017}; see also the monograph \cite{Pruss:1993} for closely related evolutionary integral equations. Eidelman and Kochubei \cite{EidelmanKochubei:2004} derived fundamental solutions to the problem in the whole space using Fox $H$-functions and established various estimates; see also \cite{SchneiderWyss:1989,GorenfloLuchkoYamamoto:2015}. Luchko \cite{Luchko:2009} studied the existence and uniqueness of a strong solution. Sakamoto and Yamamoto \cite{SakamotoYamamoto:2011} analyzed the problem by means of separation of variables, reducing it to an infinite system of fractional-order ODEs, studied the existence and uniqueness of weak solutions, and proved various regularity results including the asymptotic behavior of the solution for $t \to 0$ and $t \to \infty$. We note that the Laplace transform technique described above is essentially along the same line of reasoning. The important issue of properly interpreting the initial condition (for $\alpha$ close to zero) was discussed in \cite{GorenfloLuchkoYamamoto:2015,LiLiu:2016}.
It is worth noting that techniques like separation of variables and Laplace transform are most convenient for analyzing time-independent elliptic operators. For time-dependent elliptic operators or nonlinear problems, e.g., time-dependent diffusion coefficients and Fokker-Planck equation, energy arguments \cite{VergaraZacher:2015} or perturbation arguments \cite{KimKimLim:2017} can be used to show existence and uniqueness of the solution. However, the slightly more refined stability estimates, needed for numerical analysis of nonsmooth problem data, often do not directly follow and have to be derived separately. This represents one of the main obstacles in extending the results below for the model problem \eqref{eqn:pde} to these important classes of applied problems.
\section{Spatially semidiscrete approximation}\label{sec:fem}
Now we describe several spatially semidiscrete finite element schemes for problem \eqref{eqn:pde} using the standard notation from the classical monograph \cite{Thomee:2006}. Semidiscrete methods are usually not directly used in practical computations, but they are important for understanding the role of the regularity of problem data and also for the analysis of some space-time formulations and spectral, Pad\'e, and rational approximations. Let ${\{\mathcal{T}_h\}}_{0<h<1}$ be a family of shape regular and quasi-uniform partitions of the domain $\Omega$ into $d$-simplexes, called finite elements, with the mesh size $h$ denoting the maximum diameter of the elements. An approximate solution $u_h$ is then sought in the finite element space $X_h\equiv X_h(\Omega)$ of continuous piecewise linear functions over the triangulation $\mathcal{T}_h $, defined by \begin{equation*}
X_h =\left\{\chi\in H^1_0(\Omega): \ \chi ~~\mbox{is a linear function over} ~~\tau,
\,\,\,\,\forall \tau \in \mathcal{T}_h\right\}. \end{equation*} To describe the schemes, we need the $L^2(\Omega)$ projection $P_h:L^2(\Omega)\to X_h$ and Ritz projection $R_h:\dH1\to X_h$, respectively, defined by (recall that $(\cdot, \cdot)$ denotes the $L^2(\Omega)$ inner product) \begin{equation*}
\begin{aligned}
(P_h \psi,\chi) & =(\psi,\chi) \quad\forall \chi\in X_h,\psi\in L^2(\Omega),\\
(\nabla R_h \psi,\nabla\chi) & =(\nabla \psi,\nabla\chi) \quad \forall \chi\in X_h, \psi\in \dot H^1(\Omega).
\end{aligned} \end{equation*} Then by means of duality, the operator $P_h$ can be boundedly extended to $\dH s$, $s \in [-1,0]$. The following approximation properties of $R_h$ and $P_h$ are well known: \begin{align*}
\|P_h\psi-\psi\|_{L^2(\Omega)}+h\|\nabla(P_h\psi-\psi)\|_{L^2(\Omega)}& \leq ch^q\|\psi\|_{H^q(\Omega)}\quad \forall\psi\in \dH q, q=1,2,\\
\|R_h\psi-\psi\|_{L^2(\Omega)}+h\|\nabla(R_h\psi-\psi)\|_{L^2(\Omega)}& \leq ch^q\|\psi\|_{H^q(\Omega)}\quad \forall\psi\in \dH q, q=1,2. \end{align*}
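The $O(h^2)$ rate in these $L^2(\Omega)$ bounds is easy to observe numerically. The sketch below uses nodal interpolation in 1D rather than $P_h$ or $R_h$ (which would require assembling and inverting matrices); for smooth functions it converges at the same second-order rate. The function, mesh sizes, and quadrature are illustrative choices of ours.

```python
import numpy as np

def p1_interp_error(n, m=4000):
    """Approximate L2(0,1) error of piecewise-linear nodal interpolation
    of u(x) = x(1 - x) on a uniform mesh with n elements, using a crude
    fine-grid quadrature with m + 1 sample points."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    xs = np.linspace(0.0, 1.0, m + 1)
    u = xs * (1.0 - xs)
    iu = np.interp(xs, nodes, nodes * (1.0 - nodes))  # the P1 interpolant
    return np.sqrt(np.sum((u - iu) ** 2) / (m + 1))

e16, e32 = p1_interp_error(16), p1_interp_error(32)
rate = np.log2(e16 / e32)   # ~ 2, i.e. second order in h
```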
Multiplying both sides of equation \eqref{eqn:pde} by a test function $\varphi\in H_0^1(\Omega)$, integrating over the domain $\Omega$ and then applying the integration by parts formula yields the following weak formulation of problem \eqref{eqn:pde}: find $u(t) \in H^1_0(\Omega)$ for $t>0$ such that \begin{equation}\label{eqn:weak} ({\partial^\alpha_t} u(t), \varphi) + a(u(t), \varphi) = (f,\varphi), \quad \forall \varphi \in H^1_0(\Omega),\ \ t>0, \mbox{with }u(0)=v, \end{equation} where $a(u,\varphi)=(\nabla u, \nabla \varphi)$ for $u,\varphi\in H_0^1(\Omega)$ denotes the bilinear form for the elliptic operator $A=-\Delta$ (with a zero Dirichlet boundary condition). Then the spatially semidiscrete approximation of problem \eqref{eqn:pde} is to find $u_h (t)\in X_h$ such that \begin{equation}\label{eqn:semi}
{[ {\partial^\alpha_t} u_{h}(t),\chi]}+ a(u_h(t),\chi)= (f,\chi), \quad \forall \chi\in X_h,\ t >0, \mbox{with }u_h(0)=v_h, \end{equation} where $v_h \in X_h$ is an approximation of the initial data $v$, and the notation $[\cdot,\cdot]$ refers to a suitable inner product on the space $X_h$, approximating the usual $L^2(\Omega)$ inner product $(\cdot,\cdot)$. Following Thom\'ee \cite{Thomee:2006}, we shall take $v_h=R_hv$ in case of smooth initial data $v\in \dH2$ and $v_h=P_hv$ in case of nonsmooth initial data, i.e., $v\in \dH s$, $-1\leq s\le0$. Moreover, the spatially semidiscrete variational problem \eqref{eqn:semi} can be written in an operator form as \begin{equation*}
{\partial^\alpha_t} u_{h}(t) + A_h u_h(t) = f_h, \quad t >0, \quad\mbox{with }u_h(0)=v_h, \end{equation*} where $A_h$ is a discrete approximation to the elliptic operator $A$ on the space $X_h$, which will be given below.
Based on the abstract form \eqref{eqn:semi}, we shall present three predominant finite element type discretization methods in space, i.e., standard Galerkin finite element (SG) method, lumped mass (LM) method and finite volume element (FVE) method, below. In passing, we note that in principle any other spatial discretization methods, e.g., finite difference methods \cite{LinXu:2007,ZhangSun:2011adi}, collocation, and spectral methods \cite{LinXu:2007,chenHestaven2015multi} can also be used. Our choice of the FEMs is motivated by nonsmooth problem data.
\subsection{Standard Galerkin finite element.}
The SG method is obtained from \eqref{eqn:semi} when the approximate inner product $[\cdot,\cdot]$ is chosen to be the usual $L^2(\Omega)$ inner product $(\cdot,\cdot)$. The SG method was first developed and rigorously analyzed for nonsmooth data in \cite{JinLazarovZhou:SIAM2013,JinLazarovPasciakZhou:2013, JinLazarovPasciakZhou:2015} for problem \eqref{eqn:pde} on convex polygonal domains, and in \cite{LeMcLeanLamichhane:2017} for the case of nonconvex domains.
Upon introducing the discrete Laplacian $\Delta_h: X_h\to X_h$ defined by \begin{equation*}
-(\Delta_h\psi,\chi)=(\nabla\psi,\nabla\chi)\quad\forall\psi,\,\chi\in X_h, \end{equation*} and $f_h= P_h f$, we may write the spatially discrete problem \eqref{eqn:semi} as \begin{equation}\label{fem-operator}
{\partial^\alpha_t} u_{h}(t)-\Delta_h u_h(t) =f_h(t) \for t\ge0 \quad \mbox{with} \quad u_h(0)=v_h. \end{equation} Now we introduce the semidiscrete analogues of $F(t)$ and $E(t)$ for $t>0 $: \begin{align*} F_h(t):=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\delta }}e^{zt} z^{\alpha-1} (z^\alpha- \Delta_h)^{-1}\, {\rm d} z \qquad\text{and}\qquad E_h(t):=\frac{1}{2\pi {\rm i}}\int_{\Gamma_{\theta,\delta}}e^{zt} (z^\alpha- \Delta_h)^{-1}\, {\rm d} z. \end{align*} Then the solution $u_h(t)$ of the semidiscrete problem \eqref{fem-operator} can be succinctly expressed by: \begin{equation}\label{Duhamel_o}
u_h(t)= F_h(t) v_h + \int_0^t E_h(t-s) f_h(s)\,{\rm d} s. \end{equation}
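When $v_h$ happens to be an eigenvector of the discrete Laplacian, the representation above collapses to a single Mittag-Leffler mode, $u_h(t)=E_{\alpha,1}(-\lambda_1^h t^\alpha)v_h$, which makes a convenient consistency check. The sketch below uses the 1D finite difference Laplacian as a simple stand-in for $\Delta_h$, since its eigenpairs are known in closed form; the names and parameters are ours, not from the references.

```python
import math
import numpy as np

def ml(alpha, z, terms=100):
    # truncated Mittag-Leffler series E_{alpha,1}(z); fine for moderate |z|
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(terms))

def semidiscrete_mode(n, alpha, t):
    """Semidiscrete homogeneous solution for v = sin(pi x) on (0,1).

    sin(pi x) is the first eigenvector of the 1D finite difference
    Laplacian, with eigenvalue lam1 = (4/h^2) sin^2(pi h/2) = pi^2 + O(h^2)."""
    h = 1.0 / n
    x = np.linspace(h, 1.0 - h, n - 1)     # interior grid points
    lam1 = 4.0 / h ** 2 * math.sin(math.pi * h / 2.0) ** 2
    return x, ml(alpha, -lam1 * t ** alpha) * np.sin(math.pi * x)

alpha, t = 0.5, 0.1
x, uh = semidiscrete_mode(16, alpha, t)
u = ml(alpha, -math.pi ** 2 * t ** alpha) * np.sin(math.pi * x)  # exact mode
err16 = float(np.max(np.abs(uh - u)))   # O(h^2), driven by the eigenvalue error
```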
Now we give pointwise-in-time $L^2(\Omega)$ error estimates for the semidiscrete Galerkin approximation $u_h$. \begin{theorem}\label{thm:error-fem}
Let $u$ be the solution of problem \eqref{eqn:pde} and $u_h$ the solution of problem \eqref{fem-operator}. Then, with $\ell_h=|\log h|$, for any $t>0$ the following error estimates hold: \begin{itemize} \item[$\rm(i)$] If $f\equiv 0$, $v\in L^2(\Omega)$, and $v_h=P_hv$, then \begin{equation*}
\|(u-u_h)(t)\|_{L^2(\Omega)} \le ch^2\ell_h t^{-\alpha} \|v\|_{L^2(\Omega)}. \end{equation*} \item[$\rm(ii)$] If $f\equiv0$, $v\in \dH 2$, $v_h=R_hv$, then \begin{equation*}
\|(u-u_h)(t)\|_{L^2(\Omega)} \le ch^2 \| \Delta v\|_{L^2(\Omega)}. \end{equation*} \item[$\rm(iii)$] If $f\in L^\infty(0,T;L^2(\Omega))$ and $v\equiv0$, then \begin{equation*}
\|(u-u_h)(t)\|_{L^2(\Omega)} \le c h^2 \ell_h^2\| f\|_{L^\infty(0,t;L^2(\Omega))}. \end{equation*} \end{itemize} \end{theorem} \begin{proof} We only briefly sketch the proof for part (i) to give a flavor, and refer interested readers to \cite{JinLazarovZhou:SIAM2013,JinLazarovPasciakZhou:2015} for further details. In a customary way, we split the error $u_h(t)-u(t)$ into two terms as \begin{equation*} u_h-u= (u_h-P_hu)+(P_hu-u):=\vartheta + \varrho. \end{equation*} By the approximation property of the $L^2(\Omega)$ projection $P_h$ and Theorem \ref{thm:reg-u}, we have for any $t>0$ \begin{equation*}
\| \varrho(t) \|_{L^2(\Omega)}
\le ch^2 \| u(t) \|_{H^2(\Omega)} \le ch^2 t^{-\alpha} \| v \|_{L^2(\Omega)}. \end{equation*} So it remains to obtain proper estimates on $\vartheta(t) $. Obviously, $ P_h {\partial^\alpha_t} \varrho = {\partial^\alpha_t} P_h(P_hu-u)=0$ and using the identity $\Delta_hR_h=P_h\Delta$ \cite[equation (1.34), p. 11]{Thomee:2006}, we deduce that $\vartheta$ satisfies: \begin{equation*}
{\partial^\alpha_t} \vartheta(t) -\Delta_h \vartheta(t) = -\Delta_h (R_h u - P_h u)(t), \quad t>0, \quad \vartheta(0)=0. \end{equation*} Then with the help of Duhamel's formula \eqref{Duhamel_o}, $\vartheta(t)$ can be represented by \begin{equation*}
\begin{aligned}
\vartheta(t) &= -\int_0^t E_h(t-s)\Delta_h(R_hu-P_hu)(s)\,{\rm d} s\\
&= \int_0^t (-\Delta_h)^{1-\epsilon}E_h(t-s)(-\Delta_h)^{\epsilon}(R_hu-P_hu)(s)\,{\rm d} s,
\end{aligned} \end{equation*} where the constant $\epsilon\in(0,1)$ is to be chosen below. Consequently, \begin{equation*}
\|\vartheta(t)\|_{L^2(\Omega)} \leq
\int_0^t\|(-\Delta_h)^{1-\epsilon}E_h(t-s)\|\|(-\Delta_h)^{\epsilon}(R_hu-P_hu)(s)\|_{L^2(\Omega)}{\rm d} s, \end{equation*} where $(-\Delta_h)^\epsilon$ is the fractional power of $-\Delta_h$ defined in the spectral sense. That is, if $(\lambda_j^h, \phi_j^h)$ are the eigenvalues and eigenfunctions of $-\Delta_h$, then for $v \in X_h$, $(-\Delta_h)^\epsilon v = \sum_j (\lambda_j^h)^\epsilon (v,\phi_j^h) \phi_j^h$. Now recall the smoothing property of the semidiscrete solution operator $E_h(t)$ \begin{equation*}
\|E_h(t)(-\Delta_h)^s\|\leq ct^{-1+(1-s)\alpha}\quad \forall s\in[0,1], \end{equation*} which follows directly from the resolvent estimate \eqref{eqn:resol} (for $\Delta_h$), and the inverse estimate for FEM functions \begin{equation*}
\|(-\Delta_h)^s(R_hu-P_hu)(t)\|_{L^2(\Omega)} \leq c h^{-2s}\|(R_hu-P_hu)(t)\|_{L^2(\Omega)}\quad\forall s\in[0,1]. \end{equation*} Thus, by the triangle inequality and the approximation properties of $R_h$ and $P_h$, we deduce \begin{equation*}
\|(R_hu-P_hu)(t)\|_{L^2(\Omega)} \leq \|(R_hu-u)(t)\|_{L^2(\Omega)} + \|(P_hu-u)(t)\|_{L^2(\Omega)}\leq ch^2\|u(t)\|_{H^2(\Omega)}. \end{equation*} The preceding estimates together with Theorem \ref{thm:reg-u} imply \begin{equation*} \begin{split}
\|\vartheta(t) \|_{L^2(\Omega)} &\le c h^{2-2\epsilon}\int_0^t (t-s)^{\epsilon\alpha-1}\|u(s)\|_{H^2(\Omega)}\,{\rm d} s\\
&\le c h^{2-2\epsilon}\| v \|_{L^2(\Omega)}\int_0^t (t-s)^{\epsilon\alpha-1}s^{-\alpha}\,{\rm d} s\\
& \le c \epsilon^{-1}h^{2-2\epsilon} t^{-\alpha} \| v \|_{L^2(\Omega)}. \end{split} \end{equation*} The desired assertion follows by choosing $\epsilon=1/\ell_h$. \end{proof}
\begin{remark}\label{rmk:fem-err} It is instructive to compare the error estimate in Theorem \ref{thm:error-fem} with that for standard parabolic problems. For example, in the latter case, for the homogeneous problem with $v\in L^2(\Omega)$, the following error estimate holds \cite[Theorem 3.5, p. 47]{Thomee:2006}: \begin{equation*}
\|u(t)-u_h(t)\|_{L^2(\Omega)}\leq ch^2t^{-1} \|v\|_{L^2(\Omega)}. \end{equation*} This estimate is comparable with that in Theorem \ref{thm:error-fem}(i), apart from the log factor $\ell_h$, which can be overcome using an operator trick due to Fujita and Suzuki \cite{FujitaSuzuki:1991}. Hence, in the limit $\alpha\to1^-$, the result in the fractional case essentially recovers that for the standard parabolic case. The log factor $\ell_h$ in the estimate for the inhomogeneous problem in Theorem \ref{thm:error-fem}(iii) is due to the limited smoothing property, cf. Theorem \ref{thm:reg-u}(ii). It is unclear whether the factor $\ell_h$ is intrinsic or due to the limitation of the proof technique.
By extending the argument, the following error estimate analogous to Theorem \ref{thm:error-fem}(i) can be shown for very weak initial data, i.e., $v\in H^{-s}(\Omega)$, $0\leq s\leq 1$ \cite[Theorem 2]{JinLazarovPasciakZhou:2013}: \begin{equation*}
\|u(t)-u_h(t)\|_{L^2(\Omega)}\leq ch^{2-s}\ell_ht^{-\alpha}\|v\|_{H^{-s}(\Omega)}. \end{equation*} \end{remark}
\subsection{Two variants (lumped mass and finite volume) of the Galerkin method}
Now we discuss two variants of the standard Galerkin FEM, namely the lumped mass FEM and the finite volume element (FVE) method. These methods have also been analyzed for nonsmooth data, but less extensively \cite{JinLazarovZhou:SIAM2013, JinLazarovPasciakZhou:2015,KaraaMustaphaPani:2017,Kopteva:2017}. These variants are essential for some applications: the lumped mass FEM is important for preserving qualitative properties of the approximations, e.g., positivity \cite{ChatzipantelidisHorvathThomee:2015,JinLazarovThomeeZhou:2017}, while the finite volume method inherits the local conservation property of the physical problem.
First, we describe the lumped mass FEM (see, e.g. \cite[Chapter 15, pp. 239--244]{Thomee:2006}), where the mass matrix is replaced by a diagonal matrix with the row sums of the original mass matrix as its diagonal elements.
Specifically, let $z^\tau_j$, $j=1,\dots,d+1$, be the vertices of a $d$-simplex $\tau \in \mathcal{T}_h$, and consider the quadrature formula \begin{equation*}
Q_{\tau,h}(f) = \frac{|\tau|}{d+1} \sum_{j=1}^{d+1} f(z^\tau_j) \approx \int_\tau f \,{\rm d} x\quad \forall f\in C(\tau), \end{equation*}
where $|\tau|$ denotes the area/volume of the simplex $\tau$. Then we define an approximate $L^2(\Omega)$-inner product $(\cdot,\cdot)_h$ in $X_h$ by \begin{equation*} (w, \chi)_h = \sum_{\tau \in \mathcal{T}_h} Q_{\tau,h}(w \chi). \end{equation*} The lumped mass FEM is to find $ \bar u_h (t)\in X_h$ such that \begin{equation*}
{({\partial^\alpha_t} \bar u_h, \chi)_h}+ a(\bar u_h,\chi)= (f, \chi) \quad \forall \chi\in X_h,\ t >0, \quad\mbox{with }\bar u_h(0)=P_hv. \end{equation*} Then we introduce the discrete Laplacian $-{\bar{\Delta}_h}:X_h\rightarrow X_h$, corresponding to the inner product $(\cdot,\cdot)_h$, by \begin{equation*}
-({\bar{\Delta}_h}\psi,\chi)_h = (\nabla \psi,\nabla \chi)\quad \forall\psi,\chi\in X_h. \end{equation*}
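To make the row-sum description of mass lumping concrete, here is a minimal sketch (an illustration, not taken from the cited references; the helper names are ours) checking on a single triangle that the vertex quadrature above reproduces exactly the row sums of the consistent P1 mass matrix:

```python
import math

def consistent_mass(area):
    # Consistent P1 mass matrix on a triangle of given area:
    # M_ij = (area/12) * (1 + delta_ij), i.e. area/6 on the diagonal, area/12 off it.
    return [[area / 12.0 * (2.0 if i == j else 1.0) for j in range(3)]
            for i in range(3)]

def lumped_mass(area):
    # Lumped mass via the vertex quadrature: since the P1 basis function
    # phi_i equals delta_ij at vertex z_j, Q(phi_i * phi_j) = (area/3) * delta_ij.
    return [[area / 3.0 if i == j else 0.0 for j in range(3)]
            for i in range(3)]

area = 0.37  # arbitrary triangle area
M, Mbar = consistent_mass(area), lumped_mass(area)
for i in range(3):
    # diagonal of the lumped matrix = row sum of the consistent matrix
    assert math.isclose(sum(M[i]), Mbar[i][i])
print("vertex quadrature = row-sum mass lumping for P1")
```

The same identity holds on any simplicial mesh after assembly, because the P1 basis functions sum to one on each element.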
\begin{remark} For a rectangular domain $\Omega$ with a uniform square mesh partitioned into triangles {\rm(}by connecting the lower left corner with the upper right corner{\rm)}, the operator $\bar\Delta_h$ is identical with the canonical five-point finite difference approximation of the Laplace operator. Such a relation may allow extending the analysis below to various finite difference approximations of problem \eqref{eqn:pde}. \end{remark}
Also, we introduce a projection operator $\bar P_h: L^2(\Omega) \rightarrow X_h$ by $$(\bar P_h f, \chi)_h = (f, \chi), \quad \forall \chi\in X_h.$$ Then with $f_h = \bar P_hf$, the lumped mass FEM can be written in an operator form as \begin{equation}\label{eqn:fem-lumped}
{\partial^\alpha_t}{\bar u_h}(t)-{\bar{\Delta}_h} \bar u_h(t) = f_h(t) \quad \mbox{ for }t\geq 0 \quad \mbox{with }\bar u_h(0)= P_h v. \end{equation}
Next, we describe the finite volume element (FVE) method (see, e.g., \cite{ChouLi:2000,ChatzLazarovThomee:2013}). It is based on a discrete version of the local conservation law \begin{equation}
\label{FVE}
\int_V{\partial^\alpha_t} u(t) {\rm d} x - \int_{\partial V}\frac{\partial u}{\partial n}{\rm d} s
= \int_V f \,{\rm d} x, \quad \mbox{for }t> 0, \end{equation} valid for any $V\subset\Omega$ with a piecewise smooth boundary $\partial V$, with $n$ being the unit outward normal to $\partial V$. The FVE requires \eqref{FVE} to be satisfied for $V=V_j,\ j=1, \dots,N$, which are disjoint and known as control volumes associated with the nodes $P_j$ of $\mathcal{T}_h$. Then the discrete problem reads: find $\widetilde{u}_{h}(t)\in X_h$ such that \begin{equation}\label{FVE-0}
\int_{V_j}{\partial^\alpha_t} \widetilde{u}_{h}(t) \,{\rm d} x-\int_{\partial V_j}\frac{\partial \widetilde{u}_{h}}{\partial n}\,{\rm d} s =\int_{V_j} f \,{\rm d} x,
\quad \mbox{for }t\geq 0,
\quad \text{with}~~ \widetilde{u}_h(0)=P_hv. \end{equation} It can be recast as a Galerkin method \cite{ChatzLazarovThomee:2013}, by letting \begin{equation*}
Y_h= \left\{ \varphi\in L^2(\Omega): \varphi|_{V_j} = \text{constant}, ~~j=1,2,\ldots,N; \ \varphi = 0~~\text{outside}~~ \cup_{j=1}^N V_j \right\}, \end{equation*} introducing the interpolation operator $J_h:C(\Omega)\to Y_h$ by $(J_hv)(P_j)=v(P_j)$, $j=1,\ldots,N$, and then defining an approximate $L^2(\Omega)$ inner product $\langle\chi,\psi\rangle=(\chi,J_h\psi)$ for all $\chi,\psi \in X_h$. The FVE method \eqref{FVE-0} can then be reformulated as \begin{equation*}
\langle{\partial^\alpha_t}\widetilde{u}_{h}(t),\chi \rangle + a(\widetilde{u}_h(t),\chi) = (f(t), J_h \chi) \quad \mbox{ for }t\geq 0 \quad \mbox{with }\widetilde{u}_h(0)= P_h v \in X_h. \end{equation*} In order to be consistent with \eqref{eqn:semi}, we perturb the right hand side to $(f(t), \chi)$. Then the FVE method is to find $\widetilde{u}_{h}\in X_h$ such that \begin{equation}\label{fem-FVE}
\langle{\partial^\alpha_t}\widetilde{u}_{h}(t),\chi \rangle + a(\widetilde{u}_h(t),\chi) = (f(t), \chi) \quad \mbox{ for }t\geq 0 \quad \mbox{with }\widetilde{u}_h(0)=P_h v. \end{equation} Thus, it corresponds to \eqref{eqn:semi} with $[\cdot,\cdot]=\langle\cdot,\cdot\rangle$. We introduce the discrete Laplacian $-\widetilde \Delta_h:X_h\rightarrow X_h$, corresponding to the inner product $\langle\cdot,\cdot\rangle$, defined by \begin{equation*}
-\langle\widetilde \Delta_h\psi,\chi\rangle = (\nabla \psi,\nabla \chi)\quad \forall\psi,\chi\in X_h, \end{equation*} and a projection operator $\widetilde P_h: L^2(\Omega) \rightarrow X_h$ defined by \begin{equation*}
\langle\widetilde P_h f, \chi\rangle = (f, \chi) \quad \forall \chi\in X_h. \end{equation*} In this way, the FVE method \eqref{fem-FVE} can be written with $f_h = \widetilde P_h f$ in an operator form as \begin{equation}\label{eqn:fem-fvem}
{\partial^\alpha_t}\widetilde{u}_{h}(t)-\widetilde \Delta_h \widetilde{u}_{h}(t) = f_h(t) \quad \mbox{ for }t\geq 0 \quad \mbox{with } \widetilde{u}_{h}(0)=P_hv. \end{equation}
For the analysis of the LM and FVE methods, we recall a useful quadrature error operator $Q_h: X_h\rightarrow X_h$ defined by \begin{equation}\label{eqn:Q}
(\nabla Q_h\chi,\nabla \psi) = \epsilon_h(\chi,\psi)
:= [\chi,\psi]-(\chi,\psi)\quad \forall \chi,\psi\in X_h. \end{equation*} The operator $Q_h$ encodes the quadrature error $\epsilon_h$ through the gradient inner product. It satisfies the following error estimate, given in \cite[Lemma 2.4]{chatzipa-l-thomee12} for the LM method and in \cite[Lemma 2.2]{ChatzLazarovThomee:2013} for the FVE method. \begin{lemma}\label{lem:Q} Let $A_h$ be $-{\bar{\Delta}_h}$ or $-\widetilde \Delta_h$, and $Q_h$ be the operator defined by \eqref{eqn:Q}. Then there holds \begin{equation*}
\|\nabla Q_h\chi\|_{L^2(\Omega)}+h\|A_h Q_h\chi\|_{L^2(\Omega)}\leq ch^{p+1}\|\nabla^p\chi\|_{L^2(\Omega)} \quad \forall \chi\in X_h, ~~~p=0,1. \end{equation*} Furthermore, if the meshes are symmetric {\rm(}for details and illustration, see \cite[Section 5, Fig. 2 and 3]{chatzipa-l-thomee12}{\rm)}, then there holds \begin{equation}\label{eqn:condQ}
\|Q_h\chi\|_{L^2(\Omega)}\leq ch^2\|\chi\|_{L^2(\Omega)}\quad \forall\chi \in X_h. \end{equation} \end{lemma}
\begin{theorem}\label{lumped-mass-nonsmooth} Let $u$ be the solution of problem \eqref{eqn:pde} and $\bar u_h$ be the solution of \eqref{eqn:fem-lumped}
or \eqref{eqn:fem-fvem}. Then, under condition \eqref{eqn:condQ}, the following estimates are valid for $t >0$, with $\ell_h=|\ln h|$. \begin{itemize} \item[(i)] If $f\equiv0$, $v\in L^2(\Omega)$ and $v_h=P_hv$, then \begin{equation}\label{L2-improved}
\|\bar u_h(t)-u(t)\|_{L^2(\Omega)}\leq ch^2 \ell_h t^{-\alpha} \|v\|_{L^2(\Omega)}. \end{equation} \item[(ii)] If $v\equiv0$, $f\in L^\infty(0,T;\dH q)$, $-1<q\leq0$, and $f_h=P_hf$, then \begin{equation*}
\|\bar u_h(t) - u(t) \|_{L^2(\Omega)} \le ch^{2+q} \ell_h^{2} \|f\|_{L^\infty(0,t;\dH q)}. \end{equation*} \end{itemize} \end{theorem} \begin{proof} We only sketch the proof for part (i). For the analysis, we split the error $\bar u_h(t)-u(t)$ into \begin{equation*}
\bar u_h(t)-u(t) = u_h(t)- u(t) + \delta(t) \end{equation*}
with $ \delta(t) = \bar u_h(t)-u_h(t)$ and $u_h(t)$ being the standard Galerkin FEM solution. Upon noting Theorem \ref{thm:error-fem} for $\|u_h -u\|_{L^2(\Omega)}$, it suffices to show \begin{equation*}
\|\delta(t)\|_{L^2(\Omega)} \leq ch^2 \ell_h t^{-\alpha} \|v\|_{L^2(\Omega)}. \end{equation*} It follows from the definitions of $u_h(t)$, $\bar u_h(t)$, and $Q_h$ that \begin{equation*}
{\partial^\alpha_t} \delta(t) + A_h \delta(t) = -A_h Q_h{\partial^\alpha_t} u_h(t) \quad \mbox{ for } t > 0, \quad \mbox{with }\delta(0)=0, \end{equation*} where the operator $A_h$ denotes either $-{\bar{\Delta}_h}$ or $-\widetilde \Delta_h$. By Duhamel's principle \eqref{Duhamel_o}, $\delta(t)$ can be represented by \begin{align*}
\delta(t) &= - \int_0^t E_h (t-s) A_h Q_h {\partial^\alpha_t} u_h(s){{\rm d} s}\\
& = - \int_0^t A_h^{1-\epsilon}E_h (t-s) A_h^{\epsilon} Q_h {\partial^\alpha_t} u_h(s){{\rm d} s}. \end{align*} Then the smoothing property of $E_h$, the inverse estimate and the quadrature error assumption \eqref{eqn:condQ} imply \begin{align*}
\| \delta(t) \|_{L^2(\Omega)} & \le \int_0^t \|E_h (t-s)A_h^{1-\epsilon} \| \| A_h^{\epsilon}Q_h {\partial^\alpha_t} u_h(s) \|_{L^2(\Omega)}{{\rm d} s}\\
&\le c h^{-2\epsilon}\int_0^t (t-s)^{\epsilon\alpha-1} \| Q_h {\partial^\alpha_t} u_h(s) \|_{L^2(\Omega)}{{\rm d} s} \\
&\le c h^{2-2\epsilon}\int_0^t (t-s)^{\epsilon\alpha-1} \| {\partial^\alpha_t} u_h(s) \|_{L^2(\Omega)}{{\rm d} s}. \end{align*}
Last, the (discrete) stability result $\|{\partial^\alpha_t} u_h(t)\|_{L^2(\Omega)}\leq ct^{-\alpha}\|v_h\|_{L^2(\Omega)}$ (which follows analogously to Theorem \ref{thm:reg-u}(i)) and the $L^2(\Omega)$-stability of $P_h$ imply \begin{equation*} \begin{split}
\| \delta(t) \|_{L^2(\Omega)}
&\le c h^{2-2\epsilon}\int_0^t (t-s)^{\epsilon\alpha-1} s^{-\alpha} \| u_h(0) \|_{L^2(\Omega)}{{\rm d} s} \\
&\le c \epsilon^{-1} h^{2-2\epsilon} t^{-\alpha} \| v_h \|_{L^2(\Omega)} \le c \epsilon^{-1} h^{2-2\epsilon} t^{-\alpha} \| v \|_{L^2(\Omega)}. \end{split} \end{equation*} Then the desired assertion follows immediately by choosing $\epsilon=1/\ell_h$. \end{proof}
\begin{remark} The quadrature error condition \eqref{eqn:condQ} is satisfied for symmetric meshes \cite[Section 5]{chatzipa-l-thomee12}. If condition \eqref{eqn:condQ} does not hold, we are able to show only a suboptimal $O(h)$ convergence rate for the $L^2(\Omega)$-norm of the error \cite[Theorem 4.5]{JinLazarovZhou:SIAM2013}, which is reminiscent of that in the classical parabolic case, e.g., \cite[Theorem 4.4]{chatzipa-l-thomee12}. \end{remark}
Generally, the FEM analysis in the fractional case is much more delicate than in the standard parabolic case, because the solution operators are less standard. Nonetheless, the results in the two cases are largely comparable, and the overall proof strategy is often similar. The Laplace approach described above represents only one way to analyze the spatially semidiscrete schemes. Recently, Karaa \cite{Karaa:2017} gave a unified analysis of all three methods for the homogeneous problem based on an energy argument, which generalizes the corresponding technique for standard parabolic problems in \cite[Chapter 3]{Thomee:2006}. However, the analysis of the inhomogeneous case is still missing. The energy type argument is generally trickier in the fractional case, due to the nonlocality of the fractional derivative $\partial_t^\alpha u$: many powerful PDE tools, such as the integration by parts formula and the product rule, are either invalid or require substantial modification. See also \cite{PaniKaraa:2018} for some results on a related subdiffusion model.
\subsection{Illustrations and outstanding issues on semidiscrete methods}
Now we illustrate the three semidiscrete methods with very weak initial data. \begin{example}\label{exam:fem} Consider problem \eqref{eqn:pde} on the unit square $\Omega=(0,1)^2$ with $f=0$ and very weak initial data $v=\delta_\Gamma$, where $\Gamma$ is the boundary of the square $[\frac14,\frac34]\times[\frac14,\frac34]$ and $\langle \delta_\Gamma,\phi\rangle = \int_\Gamma \phi(s)\, {\rm d} s$. One may view $(v,\chi)$ for $\chi \in X_h \subset \dot H^{\frac12+\epsilon}(\Omega)$ as the duality pairing between the spaces $H^{-\frac12-\epsilon}(\Omega)$ and $\dot H^{\frac12+\epsilon}(\Omega)$ for any $\epsilon >0$, so that $\delta_\Gamma \in H^{-\frac12-\epsilon}(\Omega)$. Indeed, it follows from H\"{o}lder's inequality and the trace theorem \cite{AdamsFournier:2003} that \begin{equation*}
\|\delta_\Gamma\|_{H^{-\frac{1}{2}-\epsilon}(\Omega)}= \sup_{\phi \in \dot H^{\frac12+\epsilon}(\Omega)}
\frac{|\int_\Gamma \phi(s){\rm d} s|}{\|\phi\|_{\frac12+\epsilon,\Omega}}
\le |\Gamma|^\frac12 \sup_{\phi \in \dot H^{\frac12+\epsilon} (\Omega)}\frac{\|\phi\|_{L^2(\Gamma)}}{\|\phi\|_{\frac12+\epsilon,\Omega}}.
\end{equation*}
\end{example} The empirical convergence rate for the very weak data $\delta_\Gamma$ agrees well with the theoretically predicted convergence rate in Remark \ref{rmk:fem-err}; see Tables \ref{tab:Galerkinweak} and \ref{tab:Deltafun} for the standard Galerkin method and lumped mass method, respectively. In the tables, the numbers in brackets in the last column give the theoretical rates. Interestingly, for the standard Galerkin scheme, the $L^2(\Omega)$-norm of the error exhibits super-convergence. This is attributed to the fact that the singularity of the solution is supported on the interface $\Gamma$, which is aligned with the mesh. It is observed that for both the standard Galerkin method and the lumped mass FEM, the error increases as $t\to0^+$, which is consistent with the weak singularity of the solution at $t=0$.
\begin{table}[h]
\caption{The errors $\|u(t)-u_h(t)\|_{L^2(\Omega)}$ of the Galerkin approximation $u_h(t)$ for Example \ref{exam:fem} with $\alpha=0.5$, at $t=0.1, 0.01, 0.001$, discretized on a uniform mesh, $h = 2^{-k}$.} \label{tab:Galerkinweak} \begin{center}
\begin{tabular}{|c|cccccc|}
\hline
$h$& $1/8$ & $1/16$ &$1/32$ &$1/64$ & $1/128$ & rate\\
\hline
$t=0.001$& 5.37e-2 & 1.56e-2 & 4.40e-3 & 1.23e-3 & 3.41e-4 & $\approx 1.84$ ($1.50$)\\
$t=0.01$ & 2.26e-2 & 6.20e-3 & 1.67e-3 & 4.46e-4 & 1.19e-4 & $\approx 1.90$ ($1.50$)\\
$t=0.1$ & 8.33e-3 & 2.23e-3 & 5.90e-4 & 1.55e-4 & 4.10e-5 & $\approx 1.91$ ($1.50$)\\
\hline
\end{tabular} \end{center} \end{table}
\begin{table}[h]
\caption{The errors $\|u(t)-\bar u_h(t)\|_{L^2(\Omega)}$ of the lumped mass approximation $\bar u_h(t)$ for Example \ref{exam:fem} with $\alpha=0.5$, at $t=0.1, 0.01, 0.001$, discretized on a uniform mesh, $h = 2^{-k}$.} \label{tab:Deltafun} \begin{center}
\begin{tabular}{|c|cccccc|}
\hline
$k$& $3$ & $4$ &$5$ &$6$ & $7$ & rate \\
\hline
$t=0.001$ & 1.98e-1 & 7.95e-2 & 3.00e-2 & 1.09e-2 & 3.95e-3 & $\approx 1.51$ (1.50)\\
$t=0.01$ & 6.61e-2 & 2.56e-2& 9.51e-3 & 3.47e-3 & 1.25e-3 & $\approx 1.52$ (1.50)\\
$t=0.1$ & 2.15e-2 & 8.13e-3 & 3.01e-3 & 1.09e-3 & 3.95e-4&$\approx 1.52$ (1.50)\\ \hline
\end{tabular} \end{center} \end{table}
We end this section with some research problems. Despite the maturity of the FEM analysis, there are still a few interesting questions on the FEMs for the model \eqref{eqn:pde} which are not well understood: \begin{itemize} \item[(i)] So far the analysis is mostly concerned with a time-independent coefficient, which can be treated conveniently using the semigroup type techniques. The time dependent case requires different techniques, and the nonlocality of the operator $\partial_t^\alpha u$ prevents a straightforward adaptation of known techniques for standard parabolic problems \cite{LuskinRannacher:1982}. Encouraging results in this direction using an energy argument were established in the recent work of Mustapha \cite{Mustapha:2017}, where error estimates for the homogeneous problem were obtained. \item[(ii)] All existing works focus on linear finite elements, and there seems to be no study of high-order finite elements for nonsmooth data. It is unclear whether there are similar nonsmooth data estimates, as in the parabolic case \cite[Chapter 3]{Thomee:2006} (see, e.g., \cite[p. 397]{QuarteroniValli:1994} for smooth data). This problem is interesting in view of the limited smoothing property of the solution operators in space in Theorem \ref{thm:reg-u}, which has played a major role in the analysis. Thus it is of interest to develop and analyze high-order schemes in space. \item[(iii)] The study of nonlinear subdiffusion models is rather limited, and there seem to be no error estimates with respect to the data regularity, especially for nonsmooth problem data. The recent progress \cite{KovacsLiLubich:2016, JinLiZhou:2018nm} in discrete maximal $\ell^p$ regularity may provide useful tools for this purpose; such results have proven extremely powerful in the study of nonlinear parabolic problems. One outstanding issue seems to be sharp regularity estimates for general problem data. \end{itemize}
\section{Fully discrete schemes by time-stepping}\label{sec:time-stepping} One outstanding challenge for solving the subdiffusion model lies in the accurate and efficient discretization of the fractional derivative $\partial_t^\alpha u$. Roughly speaking, there are two predominant groups of numerical methods for time stepping, i.e., convolution quadrature and finite difference type methods, e.g., the L1 scheme and the L1-2 scheme. The former relies on approximating the (Riemann-Liouville) fractional derivative in the Laplace domain (i.e., symbol), whereas the latter approximates the Caputo derivative directly by piecewise polynomials. These two approaches have their pros and cons: convolution quadrature (CQ) is quite flexible and often much easier to analyze, since, by construction, it inherits the excellent numerical stability properties of the underlying schemes for ODEs, but it is often restricted to uniform grids. The finite difference type methods are very flexible in construction and implementation and can easily generalize to nonuniform grids, but are often challenging to analyze. Generally, these schemes are only first-order accurate when implemented straightforwardly, unless restrictive compatibility conditions are fulfilled. Hence, suitable corrections to the straightforward implementation are needed in order to restore the desired high-order convergence.
In this section, we review these two popular classes of time-stepping schemes on uniform grids. Specifically, let $\{t_n=n\tau\}_{n=0}^N$ be a uniform partition of the time interval $[0,T]$, with a time step size $\tau=T/N$. The case of general nonuniform time grids is also of interest, e.g., in resolving initial or interior layer, but the analysis seems not well understood at present; we refer interested readers to the references \cite{Kopteva:2017, LiaoLiZhang:2018,StynesORiordanGracia:2017,ZhangSunLiao:2014} for some recent progress on nonuniform grids.
\subsection{Convolution quadrature} Convolution quadrature (CQ) was first proposed by Lubich in a series of works \cite{Lubich:1986, Lubich:1988,Lubich:2004} for discretizing Volterra integral equations. It has been widely applied in discretizing the Riemann-Liouville fractional derivative (see, e.g., \cite{YusteAcedo:2005, ZengLiLiuTurner:2013,JinLazarovZhou:SISC2016}). One distinct feature is that the construction requires only that the Laplace transform of the kernel be known. Specifically, the CQ approximates the Riemann-Liouville derivative $^R\partial_t^\alpha \varphi(t_n)$, which is defined by \begin{equation*}
^R\partial_t^\alpha \varphi := \frac{{\rm d}}{{\rm d} t}\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-s)^{-\alpha}\varphi(s){\rm d} s \end{equation*} (with $\varphi(0)=0$) by a discrete convolution (with the shorthand notation $\varphi^n=\varphi(t_n)$) \begin{equation}\label{eqn:CQ} \bar\partial_\tau^\alpha \varphi^n:=\frac{1}{\tau^\alpha}\sum_{j=0}^n b_j \varphi^{n-j}. \end{equation} The weights $\{b_j\}_{j=0}^\infty$ are the coefficients in the power series expansion \begin{equation}\label{eqn:delta} \delta_\tau(\zeta)^\alpha=\frac{1}{\tau^\alpha}\sum_{j=0}^\infty b_j\zeta^j,
\end{equation} where $\delta_\tau(\zeta)=\delta(\zeta)/\tau$ is the characteristic polynomial of a linear multistep method for ODEs, with $\delta(\zeta)=\delta_1(\zeta)$. There are several possible choices of the characteristic polynomial, e.g., backward differentiation formula, trapezoidal rule, Newton-Gregory method and Runge-Kutta methods. The most popular one is the backward differentiation formula of order $k$ (BDF$k$), $k=1,\ldots,6$, for which $\delta_\tau(\zeta)$ is given by \begin{equation*}
\delta_\tau(\zeta ):=\frac{1}{\tau}\sum_{j=1}^k \frac {1}{j} (1-\zeta )^j. \end{equation*} The special case $k=1$, i.e., the backward Euler convolution quadrature, is commonly known as the Gr\"{u}nwald-Letnikov approximation in the literature, and the coefficients $b_j$ are then given by the recurrence relation \begin{equation*}
b_0 = 1, \quad b_{j}=-\frac{\alpha-j+1}{j}b_{j-1}. \end{equation*} Generally, the weights $b_j$ can be evaluated efficiently via recursion or discrete Fourier transform \cite{Podlubny:1999,Sousa:2012}.
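The recursion above is easy to cross-check numerically. The sketch below (illustrative, not taken from the cited works; the helper names are ours) generates the backward Euler weights by the recursion and, for a general symbol, computes the coefficients of $\delta(\zeta)^\alpha$ by the classical J.C.P. Miller recurrence for powers of a power series; for BDF1 the two must coincide:

```python
import math

def gl_weights(alpha, n):
    # Grunwald-Letnikov weights b_j: coefficients of (1 - z)**alpha,
    # via b_0 = 1, b_j = -((alpha - j + 1)/j) * b_{j-1}.
    b = [1.0]
    for j in range(1, n + 1):
        b.append(-(alpha - j + 1.0) / j * b[-1])
    return b

def cq_weights(p, alpha, n):
    # Coefficients q_0..q_n of p(z)**alpha (p[0] != 0) by Miller's recurrence:
    # q_m = (1/(m*p_0)) * sum_{k=1}^m ((alpha+1)*k - m) * p_k * q_{m-k}.
    p = p + [0.0] * max(0, n + 1 - len(p))
    q = [p[0] ** alpha]
    for m in range(1, n + 1):
        s = sum(((alpha + 1.0) * k - m) * p[k] * q[m - k] for k in range(1, m + 1))
        q.append(s / (m * p[0]))
    return q

alpha, n = 0.5, 20
# BDF1: delta(z) = 1 - z, so Miller's recurrence reproduces the GL weights.
assert all(math.isclose(x, y, abs_tol=1e-13)
           for x, y in zip(gl_weights(alpha, n), cq_weights([1.0, -1.0], alpha, n)))
# BDF2: delta(z) = 3/2 - 2z + z^2/2; by hand, q_1 = -(4*alpha/3) * q_0.
q = cq_weights([1.5, -2.0, 0.5], alpha, n)
assert math.isclose(q[1], -(4.0 * alpha / 3.0) * q[0])
print("Miller recurrence agrees with the GL recursion")
```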
The CQ discretization first rewrites problem \eqref{eqn:pde} in terms of the Riemann-Liouville derivative $^R\partial_t^\alpha$, using the defining relation between the Caputo and Riemann-Liouville derivatives \cite[p. 91]{KilbasSrivastavaTrujillo:2006}, i.e., $\partial_t^\alpha \varphi(t) = {^R\partial_t^\alpha}(\varphi-\varphi(0))$, as \begin{equation*}
^R\partial_t^\alpha(u-v) - \Delta u = f. \end{equation*} The time stepping scheme based on the CQ for problem \eqref{eqn:pde} is to seek approximations $U^n$, $n=1,\dots,N$, to the exact solution $u(t_n)$ by \begin{equation}\label{eqn:BDF-CQ-0} \bar\partial_\tau^\alpha (U-v)^n - \Delta U^n = f(t_n) . \end{equation} It can be combined with space semidiscrete schemes described in Section \ref{sec:fem} to arrive at fully discrete schemes, which are implementable. Our discussions below focus on the temporal error for time-stepping schemes, and omit the space discretization in this part.
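To make the time-marching structure of \eqref{eqn:BDF-CQ-0} explicit, the following sketch (our illustration, not from the cited works) applies the backward Euler CQ to the scalar test problem $\partial_t^\alpha u=1$, $u(0)=0$, whose exact solution is $u(t)=t^\alpha/\Gamma(1+\alpha)$; the Laplacian is dropped so that the example stays self-contained:

```python
import math

def gl_weights(alpha, n):
    # coefficients b_j of (1 - z)**alpha (backward Euler CQ weights)
    b = [1.0]
    for j in range(1, n + 1):
        b.append(-(alpha - j + 1.0) / j * b[-1])
    return b

def be_cq_solve(alpha, T, N):
    # March tau^{-alpha} * sum_j b_j U^{n-j} = f(t_n) = 1 with U^0 = v = 0:
    #   U^n = (tau^alpha - sum_{j=1}^n b_j U^{n-j}) / b_0.
    tau = T / N
    b = gl_weights(alpha, N)
    U = [0.0]
    for n in range(1, N + 1):
        hist = sum(b[j] * U[n - j] for j in range(1, n + 1))
        U.append((tau ** alpha - hist) / b[0])
    return U[-1]

alpha, T = 0.5, 1.0
exact = T ** alpha / math.gamma(1.0 + alpha)
errs = [abs(be_cq_solve(alpha, T, N) - exact) for N in (50, 100)]
assert errs[1] < errs[0]   # error decreases under time-step refinement
print(errs[0] / errs[1])   # close to 2, reflecting the first-order rate
```

With the Laplacian present, each step instead solves an elliptic problem with operator $\tau^{-\alpha}b_0 I - \Delta$, but the convolution history is handled identically.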
If the exact solution $u$ is smooth and has sufficiently many vanishing derivatives at $t=0$, then the approximation $U^n$ converges at a rate of $O(\tau^k)$ \cite[Theorem 3.1]{Lubich:1988}. However, it generally only exhibits a first-order accuracy when solving fractional evolution equations
even for smooth $v$ and $f$ \cite{CuestaLubichPalencia:2006,JinLazarovZhou:SISC2016}. This loss of accuracy is a distinct feature of most time stepping schemes, since they are usually derived under the assumption that the solution $u$ is sufficiently smooth, which holds only if the problem data satisfy certain rather restrictive compatibility conditions. In short, they tend to lack {robustness} with respect to the regularity of problem data.
This observation on accuracy loss has motivated some research works. For fractional ODEs, one idea is to use starting weights \cite{Lubich:1986} to correct the CQ in discretizing $\partial_t^\alpha \varphi(t_n)$ by \begin{equation*}
\bar\partial_\tau^\alpha\varphi^n = \tau^{-\alpha} \sum_{j=0}^nb_{n-j}\varphi^j + \sum_{j=0}^Mw_{n,j}\varphi^j, \end{equation*} where $M\in\mathbb{N}$ and the weights $w_{n,j}$ depend on $\alpha$ and $k$. The purpose of the starting term $\sum_{j=0}^Mw_{n,j}\varphi^j $ is to capture all leading singularities so as to recover a uniform $O(\tau^k)$ rate. The weights $w_{n,j}$ have to be computed at every time step, which involves solving a linear system with Vandermonde type matrices and may lead to instability issues (if a large $M$ is needed, which is likely the case when $\alpha$ is close to zero). This idea works well for fractional ODEs; however, its extension to fractional PDEs essentially seems to boil down to expanding the solution into (fractional-order) power series in $t$, which would impose certain strong compatibility conditions on the source $f$.
The more promising idea for the model \eqref{eqn:pde} is initial correction. It corrects only the first few steps of the schemes. This idea was first developed in \cite{LubichSloanThomee:1996} for an integro-differential equation. Then it was applied as an abstract framework in \cite{CuestaLubichPalencia:2006} for BDF2 in order to achieve a uniform second-order convergence for semilinear fractional diffusion-wave equations (which is slightly different from the model \eqref{eqn:pde}) with smooth data. Further, BDF2 CQ was extended to subdiffusion and diffusion wave equations in \cite{JinLazarovZhou:SISC2016} and very recently also general BDF$k$ \cite{JinLiZhou:2017sisc}. In the latter work \cite{JinLiZhou:2017sisc}, by careful analysis of the error representation in the Laplace domain, a set of simple algebraic criteria was derived. Below we describe the correction scheme for the BDF CQ derived in \cite{JinLiZhou:2017sisc}.
To restore the $k^{\rm th}$-order accuracy for BDF$k$ CQ, we correct it at the starting $k-1$ steps by (as usual, the summation disappears if the upper index is smaller than the lower one) \begin{equation}\label{eqn:BDF-CQ} \left\{ \begin{aligned} &\bar\partial_\tau^\alpha (U-v)^n -\Delta U^n = a_n^{(k)} (\Delta v+f(0))+f(t_n)+ \sum_{\ell=1}^{k-2} b_{\ell,n}^{(k)}\tau^{\ell} \partial_t^{(\ell)}f(0) , &&1\le n\le k-1,\\ &\bar\partial_\tau^\alpha (U-v)^n - \Delta U^n = f(t_n) , &&k\le n\le N. \end{aligned}\right. \end{equation} where the coefficients $a_n^{(k)}$ and $b_{\ell,n}^{(k)}$ are given in Table \ref{tab:an}. When compared with the vanilla scheme \eqref{eqn:BDF-CQ-0}, the additional terms are constructed so as to improve the overall accuracy of the scheme to $O(\tau^k)$ for a general initial data $v\in D(\Delta)$ and a possibly incompatible right-hand side $f$ \cite{JinLiZhou:2017sisc}. The only difference between the corrected scheme \eqref{eqn:BDF-CQ} and the standard scheme \eqref{eqn:BDF-CQ-0} lies in the correction terms at the starting $k-1$ steps for BDF$k$. Hence, the scheme \eqref{eqn:BDF-CQ} is easy to implement. The correction is also minimal in the sense that there is no other correction scheme which uses fewer correction steps while attaining the same accuracy. The corrected scheme \eqref{eqn:BDF-CQ} satisfies the following error estimates \cite[Theorem 2.4]{JinLiZhou:2017sisc}.
\begin{theorem}\label{thm:conv}
Let $f\in C^{k-1}([0,T];L^2(\Omega))$ and $\int_0^t(t-s)^{\alpha-1}\|\partial_s^{(k)}f(s)\|_{L^2(\Omega)}{\rm d} s<\infty$. Then for the solution $U^n$ to \eqref{eqn:BDF-CQ}, the following error estimates hold for any $t_n>0$. \begin{itemize} \item[$\rm(i)$] If $ \Delta v\in L^2(\Omega)$, then \begin{equation*}
\begin{aligned}
\|U^n-u(t_n)\|_{L^2(\Omega)} \leq & c\tau^k \bigg(t_n^{ \alpha -k } \|f(0)+\Delta v\|_{L^2(\Omega)} + \sum_{\ell=1}^{k-1} t_n^{ \alpha+\ell -k } \|\partial_t^{(\ell)}f(0)\|_{L^2(\Omega)}\\
&\ \ +\int_0^{t_n}(t_n-s)^{\alpha-1}\|\partial_s^{(k)}f(s)\|_{L^2(\Omega)}{\rm d} s\bigg).
\end{aligned} \end{equation*} \item[$\rm(ii)$] If $v\in L^2(\Omega)$, then \begin{align*}
\|U^n-u(t_n)\|_{L^2(\Omega)} \leq c\tau^k& \bigg( t_n^{-k} \|v\|_{L^2(\Omega)} + \sum_{\ell=0}^{k-1} t_n^{ \alpha+\ell -k } \|\partial_t^{(\ell)}f(0)\|_{L^2(\Omega)}\\
&+\int_0^{t_n}(t_n-s)^{\alpha-1}\|\partial_s^{(k)}f(s)\|_{L^2(\Omega)}{\rm d} s\bigg). \end{align*} \end{itemize} \end{theorem}
\begin{remark} Note that the estimate depends only on the regularity of $f$ and $v$, rather than the regularity of $u$. Theorem \ref{thm:conv} implies that for any fixed $t_n>0$, the rate is $O(\tau^k)$ for BDF$k$ CQ. In order to have a {uniform} $O(\tau^k)$ rate, the following compatibility conditions are needed: \begin{equation*}
f(0) + \Delta v = 0 \quad \mbox{and}\quad \partial_t^{(\ell)} f(0) = 0, \quad \ell=1,\ldots,k-1. \end{equation*}
Otherwise, the estimate deteriorates as $t\to0$, in accordance with the regularity theory in Theorem \ref{thm:reg-u}: the solution {\rm(}and its derivatives{\rm)} exhibits weak singularity at $t=0$. \end{remark}
\begin{remark} The case $k=1$ corresponds to the backward Euler CQ, and it does not require any correction in order to achieve a first-order convergence. \end{remark}
\begin{table}[htb!] \caption{The coefficients $a_j^{(k)}$ and $b_{\ell,j}^{(k)}$ \cite[Tables 1 and 2]{JinLiZhou:2017sisc}.} \label{tab:an} \centering
\begin{tabular}{|c|ccccc|} \hline
BDF$k$ & $a_1^{(k)}$ & $a_2^{(k)}$ & $a_3^{(k)}$ & $a_4^{(k)}$ & $a_5^{(k)}$ \\[2pt] \hline
$k=2$ & $ \frac{1}{2}$ & & & & \\ \hline
$k=3$ &$\frac{11}{12}$ & $-\frac{5}{12}$ & & & \\[2pt]
\hline
$k=4$ &$\frac{31}{24}$ & $-\frac{7}{6}$ & $\frac{3}{8}$ & & \\[2pt] \hline
$k=5$ &$\frac{1181}{720}$ & $-\frac{177}{80}$ & $\frac{341}{240}$ & $-\frac{251}{720}$ & \\[2pt] \hline
$k=6$ &$\frac{2837}{1440}$& $-\frac{2543}{720}$ &$\frac{17}{5}$ & $-\frac{1201}{720}$ & $\frac{95}{288}$ \\[2pt] \hline \end{tabular}
\begin{tabular}{|c c|ccccc|} \hline
BDF$k$ & & $b_{\ell,1}^{(k)}$ & $b_{\ell,2}^{(k)}$ & $b_{\ell,3}^{(k)}$ & $b_{\ell,4}^{(k)}$ & $b_{\ell,5}^{(k)}$ \\[2pt] \hline
$k=3$ &$\ell=1$ &$\frac{1}{12}$ & 0 & & & \\[2pt] \hline
$k=4$ &$\ell=1$ &$\frac{1}{6}$ & $-\frac{1}{12}$ & $0$ & & \\[3pt]
&$\ell=2$ & $0$ & $0$ & $0$ & & \\[2pt] \hline
$k=5$ &$\ell=1$ &$\frac{59}{240}$ & $-\frac{29}{120}$ & $\frac{19}{240}$ & $0$ & \\[3pt]
&$\ell=2$ & $\frac{1}{240}$ & $-\frac{1}{240}$ & $0$ & $0$ & \\[3pt]
&$\ell=3$ &$\frac{1}{720}$ & $0$ & $0$ & $0$ & \\[2pt] \hline
$k=6$ &$\ell=1$ &$\frac{77}{240}$& $-\frac{7}{15}$ &$\frac{73}{240}$ & $-\frac{3}{40}$ & 0\\[3pt]
&$\ell=2$ & $\frac{1}{96}$ & $-\frac{1}{60}$ & $\frac{1}{160}$ & $0$ & 0 \\[3pt]
&$\ell=3$ & $-\frac{1}{360}$ & $\frac{1}{720}$ & $0$ & $0$ & 0\\[3pt]
&$\ell=4$ & $0$ & $0$ & $0$ & $0$ & 0\\[2pt] \hline \end{tabular} \end{table}
In passing, we note that not all CQ schemes require initial correction in order to recover high-order convergence. One notable example is Runge-Kutta CQ; see \cite{LubichOstermann:1995} for semilinear parabolic problems and \cite{Fischer:2018} for the subdiffusion model. Further, a proper weighted average of shifted standard Gr\"{u}nwald-Letnikov approximations can also lead to high-order approximations \cite{ChenDeng:2014}. CQ schemes can exhibit superconvergence at points that may be different from the grid points, which can also be effectively exploited to develop high-order schemes (see \cite{Dimitrov:2014} for the Gr\"{u}nwald-Letnikov formula). However, the corrected versions of these approximations have not yet been developed for the general case, except for a fractional variant of the Crank-Nicolson scheme \cite{JinLiZhou:2017ima}.
\subsection{Piecewise polynomial interpolation} Now we describe the time stepping schemes based on piecewise polynomial interpolation. These schemes are essentially of finite difference nature, and the most prominent one is the L1 scheme. The L1 approximation of the Caputo derivative ${{\partial^\alpha_t}} u(t_n)$ is given by \cite[Section 3]{LinXu:2007} \begin{equation}\label{eqn:$L^1$approx}
\begin{aligned}
{{\partial^\alpha_t}} u(t_n) &= \frac{1}{\Gamma(1-\alpha)}\sum^{n-1}_{j=0}\int^{t_{j+1}}_{t_j}
\frac{\partial u(s)}{\partial s} (t_n-s)^{-\alpha}\, {\rm d} s \\
&\approx \frac{1}{\Gamma(1-\alpha)}\sum^{n-1}_{j=0} \frac{u(t_{j+1})-u(t_j)}{\tau}\int_{t_j}^{t_{j+1}}(t_n-s)^{-\alpha}{\rm d} s\\
&=\sum_{j=0}^{n-1}b_j\frac{u(t_{n-j})-u(t_{n-j-1})}{\tau^\alpha}\\
&=\tau^{-\alpha} [b_0u(t_n)-b_{n-1}u(t_0)+\sum_{j=1}^{n-1}(b_j-b_{j-1})u(t_{n-j})] =:L_1^n(u),
\end{aligned}
\end{equation} where the weights $b_j$ are given by \begin{equation*} b_j=((j+1)^{1-\alpha}-j^{1-\alpha})/\Gamma(2-\alpha),\ j=0,1,\ldots,N-1. \end{equation*} It was shown in \cite[equation (3.3)]{LinXu:2007} and \cite[Lemma 4.1]{SunWu:2006} that the local truncation error of the L1 approximation is bounded by \begin{equation}\label{eqn:err-L1}
|\partial_t^\alpha u(t_n)-L_1^n(u)|\leq c(u)\tau^{2-\alpha}, \end{equation}
where the constant $c(u)$ depends on $\|u\|_{C^2([0,T])}$. Thus, it requires that the solution $u$ be twice continuously differentiable in time. Since its first appearance, the L1 scheme has been widely used in practice, and currently it is one of the most popular and successful numerical methods for solving the model \eqref{eqn:pde}. With the L1 scheme in time, we arrive at the following time stepping scheme: Given $U^0=v$, find $U^n\in \dH1$, $n=1,2,\ldots,N$, such that \begin{equation}\label{eqn:fullyl1}
L_1^n(U) -\Delta U^n= f(t_n). \end{equation}
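For concreteness, the weights $b_j$ and the discrete derivative $L_1^n(u)$ can be evaluated in a few lines; the following Python sketch is our own illustration (the function names are not taken from the cited works). Since the L1 scheme rests on piecewise linear interpolation, it reproduces the Caputo derivative $\partial_t^\alpha t = t^{1-\alpha}/\Gamma(2-\alpha)$ of $u(t)=t$ exactly, which serves as a convenient sanity check.

```python
import math

def l1_weights(alpha, n):
    # b_j = ((j+1)^{1-alpha} - j^{1-alpha}) / Gamma(2-alpha), j = 0, ..., n-1
    g = math.gamma(2.0 - alpha)
    return [((j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha)) / g for j in range(n)]

def l1_caputo(u, alpha, tau, n):
    # Discrete Caputo derivative L_1^n(u) at t_n = n * tau on a uniform grid
    b = l1_weights(alpha, n)
    s = sum(b[j] * (u((n - j) * tau) - u((n - j - 1) * tau)) for j in range(n))
    return s / tau ** alpha

alpha, tau, n = 0.5, 0.01, 100          # t_n = 1.0
tn = n * tau
exact = tn ** (1.0 - alpha) / math.gamma(2.0 - alpha)   # Caputo derivative of u(t) = t
approx = l1_caputo(lambda t: t, alpha, tau, n)
print(abs(approx - exact))              # essentially zero: exact for linear u
```

For smooth but nonlinear $u$, such as $u(t)=t^2$, the same routine exhibits the $O(\tau^{2-\alpha})$ truncation error of \eqref{eqn:err-L1}.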
We have the following temporal error estimate for the scheme \eqref{eqn:fullyl1} \cite{JinLazarovZhou:2016ima, JinLiZhou:nonlinear}. This is achieved by means of the discrete Laplace transform, and the proof is rather technical, since the discrete Laplace transform of the weights $b_j$ involves the unwieldy polylogarithmic function. See also \cite{JinZhou:2017} for a different analysis via an energy argument. Formally, the error estimate is nearly identical to that for the backward Euler CQ. Thus, in stark contrast to the $O(\tau^{2-\alpha})$ rate expected from the local truncation error \eqref{eqn:err-L1} for smooth solutions, the L1 scheme is generally only first-order accurate, even for smooth initial data or source term. \begin{theorem}\label{thm:error_fullyl1-nonsmooth} Let $u$ and $U^n$ be the solutions of problems \eqref{eqn:pde} and \eqref{eqn:fullyl1}, respectively. Then there holds \begin{equation*}
\|u(t_n)-U^n\|_{L^2(\Omega)} \leq c\tau t_n^{\beta\alpha-1}\|(-\Delta)^\beta u_0\|_{L^2(\Omega)}+ c\tau\bigg(t_n^{\alpha-1}\|f(0)\|_{L^2(\Omega)}+\int_0^{t_n}(t_n-s)^{\alpha-1}\|f'(s)\|_{L^2(\Omega)}{\rm d} s\bigg). \end{equation*} \end{theorem}
Very recently, a corrected L1 scheme was developed by Yan et al.\ \cite{YanKhanFord:2018} (see also \cite{XingYan:2018,FordYan:2017} for related works by the same group). The corrected scheme is given by \begin{equation}\label{eqn:fullyl1-corrected}
\left\{\begin{aligned}
L_1^1(U) -\Delta U^1 - \tfrac{1}{2}
\Delta U^0 & = f(t_1)+ \tfrac{1}{2}f(0), \quad n=1\\
L_1^n(U) -\Delta U^n & = f(t_n),\quad n\geq 2.
\end{aligned}\right. \end{equation} It is noteworthy that it requires only correcting the first step, and incidentally, the correction term is identical with that for BDF2 CQ. Then the following error estimate holds for the corrected scheme. Note that the stated regularity requirement on the source term $f$ may not be optimal for $\alpha>1/2$. \begin{theorem}\label{thm:error_fullyl1-corrected} Let $u$ and $U^n$ be the solutions of problems \eqref{eqn:pde} and \eqref{eqn:fullyl1-corrected}, respectively. Then there holds \begin{align*}
\|u(t_n)-U^n\|_{L^2(\Omega)} &\le c \tau^{2-\alpha}\bigg(t_n^{(\beta+1)\alpha-2} \|(-\Delta)^\beta v\|_{L^2(\Omega)} + t_n^{2\alpha-2}\|f(0)\|_{L^2(\Omega)} + t_n^{2\alpha-1}\|f'(0)\|_{L^2(\Omega)}\\
&\quad +\int_0^{t_n}(t_n-s)^{2\alpha-1}\|f''(s)\|_{L^2(\Omega)}{\rm d} s\bigg). \end{align*} \end{theorem}
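To see concretely how the correction enters only at the first step, one can run the corrected scheme \eqref{eqn:fullyl1-corrected} on the scalar analogue $\partial_t^\alpha u + \lambda u = f$ of \eqref{eqn:pde}, with $-\Delta$ replaced by a number $\lambda>0$. The sketch below is our own illustration, not code from the cited works; with the compatible data $f\equiv\lambda v$ the exact solution is the constant $u\equiv v$, which the scheme reproduces to rounding error.

```python
import math

def corrected_l1_solve(alpha, lam, v, f, tau, N):
    """Corrected L1 scheme for the scalar model d_t^alpha u + lam*u = f(t),
    u(0) = v; the correction modifies the first step only."""
    g = math.gamma(2.0 - alpha)
    b = [((j + 1) ** (1 - alpha) - j ** (1 - alpha)) / g for j in range(N)]
    r = tau ** (-alpha)
    U = [v]
    for n in range(1, N + 1):
        # history part of L_1^n(U), moved to the right-hand side
        hist = b[n - 1] * U[0] - sum((b[j] - b[j - 1]) * U[n - j] for j in range(1, n))
        rhs = f(n * tau) + r * hist
        if n == 1:                       # corrected first step
            rhs += 0.5 * f(0.0) - 0.5 * lam * U[0]
        U.append(rhs / (r * b[0] + lam))
    return U

# compatible data: f = lam * v gives the steady solution u(t) = v
lam, v, alpha = 4.0, 2.0, 0.6
U = corrected_l1_solve(alpha, lam, v, lambda t: lam * v, tau=0.05, N=20)
print(max(abs(u - v) for u in U))       # ~ 0: the steady state is preserved
```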
There have been several important efforts to extend the L1 scheme to high-order schemes by using high-order local polynomials \cite{LvXu:2016,GaoSunZhang:2014,MustaphaAbdallahFurati:2014} and superconvergent points \cite{Alikhanov:2015}. For example, the L1-2 scheme due to Gao et al.\ \cite{GaoSunZhang:2014} applies a piecewise linear approximation on the first subinterval, and a quadratic approximation on the other subintervals to improve the numerical accuracy. However, the performance of these methods for nonsmooth data is not fully understood.
In addition, Mustapha and McLean developed several discontinuous Galerkin methods \cite{McLeanMustapha:2009,MustaphaMcLean:2011,Mustapha:2015} for a variant of the model \eqref{eqn:pde}: \begin{equation*}
\partial_t u - {^R\partial_t^{1-\alpha}}\Delta u = f, \end{equation*} with suitable boundary and initial conditions. Formally, this model can be derived by applying the Riemann-Liouville operator $^R\partial_t^{1-\alpha}$ to both sides of the equation in \eqref{eqn:pde}. The resulting schemes are similar to piecewise polynomial interpolation described above. However, the nonsmooth error estimates are mostly unavailable, except for the piecewise constant discontinuous Galerkin method (for the homogeneous problem) \cite{McLeanMustapha:2015,YangYanFord:2018}; see also \cite{GunzburgerWang:2018} for a Crank-Nicolson type scheme for a related model.
\subsection{Illustrations and outstanding issues}
Now we illustrate the performance of the corrected time stepping schemes. \begin{example}\label{exam:time-stepping} Consider problem \eqref{eqn:pde} on $\Omega=(0,1)$ with $v=x\sin(2 \pi x) \in \dH 2$ and $f= 0$. \end{example}
In Table \ref{tab:v-nocorrect} we present the error $\|u_h(t_N) - u_h^N\|_{L^2(\Omega)}$ at $t_N=1$. The numerical results show only a first-order empirical convergence rate, for all standard BDF$k$ CQ, $k\geq2$, which shows clearly the lack of robustness of the naive CQ scheme \eqref{eqn:BDF-CQ-0} with respect to problem data regularity, despite the good regularity of the initial data $v$. In sharp contrast, the corrected scheme \eqref{eqn:BDF-CQ} can achieve the desired convergence rate; see Table \ref{tab:v-correct-smooth}. These observations remain valid for the L1 scheme and its corrected version; see Tables \ref{tab:uncorrect-smooth-L1} and \ref{tab:correct-smooth-L1}. It is worth noting that the desired rate for the corrected L1 scheme only kicks in at a relatively small time step size, and its precise mechanism remains unclear. These results show clearly the effectiveness of the idea of initial correction for restoring the desired high-order convergence.
\begin{table}[htb!] \caption{The $L^2$-norm error for Example \ref{exam:time-stepping} at $t_N=1$, by the scheme \eqref{eqn:BDF-CQ-0} with $h=1/100$.}\label{tab:v-nocorrect} \vskip-10pt \centering
\begin{tabular}{|c|c|ccccc|c|}
\hline
$\alpha$ & $N$ &$50$ &$100$ &$200$ & $400$ & $800$ &rate \\
\hline
& BDF2 &4.94e-3 &2.48e-3 &1.24e-3 &6.20e-4 &3.10e-4 &$\approx$ 1.00 ($1.00$)\\
& BDF3 &4.99e-3 &2.49e-3 &1.24e-3 &6.21e-4 &3.11e-4 &$\approx$ 1.00 ($1.00$)\\
$ 0.5$ & BDF4 &4.99e-3 &2.49e-3 &1.24e-3 &6.21e-4 &3.11e-4 &$\approx$ 1.00 ($1.00$)\\
& BDF5 &4.99e-3 &2.49e-3 &1.24e-3 &6.21e-4 &3.11e-4 &$\approx$ 1.00 ($1.00$)\\
& BDF6 &4.96e-3 &2.49e-3 &1.24e-3 &6.21e-4 &3.11e-4 &$\approx$ 1.00 ($1.00$)\\
\hline
\end{tabular} \end{table}
\begin{table}[htb!] \caption{The $L^2$-norm error for Example \ref{exam:time-stepping} at $t_N=1$, by the corrected scheme \eqref{eqn:BDF-CQ} with $h=1/100$.} \label{tab:v-correct-smooth} \centering
\begin{tabular}{|c|c|ccccc|c|}
\hline
$\alpha$ & $k\backslash N$ &$50$ &$100$ &$200$ &$400$ & $800$ &rate \\
\hline
& 2 &5.87e-5 &1.45e-5 &3.59e-6 &8.95e-7 &2.23e-7 &$\approx$ 2.00 (2.00)\\
& 3 &2.39e-6 &2.88e-7 &3.53e-8 &4.38e-9 &5.45e-10 &$\approx$ 3.00 (3.00)\\
$ 0.25$ & 4 &1.49e-7 &8.72e-9 &5.27e-10 &3.24e-11 &2.01e-12 &$\approx$ 4.02 (4.00)\\
& 5 &1.33e-8 &3.57e-10 &1.06e-11 &3.22e-13 &9.91e-15 &$\approx$ 5.02 (5.00)\\
& 6 &1.12e-5 &1.54e-9 &2.68e-13 &4.02e-15 &6.16e-17 &$\approx$ 6.04 (6.00)\\
\hline
& 2 &1.77e-4 &4.34e-5 &1.08e-5 &2.68e-6 &6.69e-7 &$\approx$ 2.00 (2.00)\\
& 3 &7.85e-6 &9.44e-7 &1.16e-7 &1.43e-8 &1.78e-9 &$\approx$ 3.01 (3.00)\\
$ 0.5$ & 4 &5.23e-7 &3.04e-8 &1.83e-9 &1.12e-10 &6.97e-12 &$\approx$ 4.02 (4.00)\\
& 5 &4.86e-8 &1.30e-9 &3.85e-11 &1.17e-12 &3.60e-14 &$\approx$ 5.03 (5.00)\\
& 6 &2.82e-5 &2.99e-9 &1.01e-12 &1.51e-14 &2.32e-16 &$\approx$ 6.05 (6.00)\\
\hline
& 2 &4.58e-4 &1.12e-4 &2.78e-5 &6.92e-6 &1.73e-6 &$\approx$ 2.00 (2.00)\\
& 3 &2.39e-5 &2.85e-6 &3.49e-7 &4.31e-8 &5.36e-9 &$\approx$ 3.01 (3.00)\\
$ 0.75$ & 4 &1.80e-6 &1.04e-7 &6.22e-9 &3.81e-10 &2.36e-11 &$\approx$ 4.02 (4.00)\\
& 5 &2.51e-7 &4.90e-9 &1.44e-10 &4.35e-12 &1.34e-13 &$\approx$ 5.03 (5.00)\\
& 6 &1.65e-3 &4.20e-7 &4.17e-12 &6.10e-14 &9.31e-16 &$\approx$ 6.06 (6.00)\\
\hline
\end{tabular} \end{table}
\begin{table}[htb!] \caption{The $L^2$-norm error for Example \ref{exam:time-stepping} at $t_N=1$, by the L1 scheme \eqref{eqn:fullyl1} with $h=1/100$.} \label{tab:uncorrect-smooth-L1} \centering
\begin{tabular}{|c|ccccc|c|}
\hline
$\alpha \backslash N$ &$50$ &$100$ &$200$ &$400$ & $800$ &rate \\
\hline
$0.3$ &2.40e-3 &1.19e-3 &5.96e-4 &2.98e-4 &1.49e-4 &$\approx$ 1.01 (1.00)\\
\hline
$0.5$ &5.09e-3 &2.52e-3 &1.25e-3 &6.25e-4 &3.12e-4 &$\approx$ 1.02 (1.00)\\
\hline
$0.7$ &9.04e-3 &4.42e-3 &2.18e-3 &1.08e-3 &5.33e-4 &$\approx$ 1.01 (1.00)\\
\hline
\end{tabular} \end{table}
\begin{table}[htb!] \caption{The $L^2$-norm error for Example \ref{exam:time-stepping} at $t_N=0.01$, corrected L1 scheme,
$h=1/100$ and $N=1000\times 2^{k}$.} \label{tab:correct-smooth-L1} \centering
\begin{tabular}{|c|cccccc|c|}
\hline
$\alpha \backslash k$ &$1$ &$2$ &$3$ &$4$ & $5$ & $6$ &rate \\
\hline
$0.3$ &7.94e-8 &2.79e-8 &9.67e-9 &3.28e-9 &1.09e-9 &3.56e-10 &$\approx$ 1.63 (1.70)\\
\hline
$0.5$ &1.90e-6 &6.93e-7 &2.50e-7 &8.95e-8 &3.19e-8 &1.14e-8 &$\approx$ 1.49 (1.50)\\
\hline
$0.7$ &1.97e-5 &8.06e-6 &3.29e-6 &1.34e-6 &5.44e-7 &2.21e-7 &$\approx$ 1.30 (1.30)\\
\hline
\end{tabular} \end{table}
We conclude this section with two research directions on time stepping schemes that deserve further investigation. \begin{itemize} \item[(i)] Nonsmooth error analysis for time-stepping schemes is still in its infancy. So far all known results are only for uniform grids, and all the proofs rely essentially on the Laplace transform. It is of immense interest to develop energy type arguments that yield nonsmooth data error estimates, which might allow deriving results for nonuniform grids. Likewise, corrected schemes have so far been developed only for uniform grids. This is partially due to the fact that the current construction of corrections essentially relies on the Laplace transform of the kernel and its discrete analogue. \item[(ii)] The error estimates are only derived for problems with a time-independent elliptic operator, and there are no analogous results for time-dependent elliptic operators, including time-dependent coefficients and certain nonlinear problems. \end{itemize}
\section{Space-time formulations}\label{sec:space-time} Due to the nonlocality of the fractional derivative $\partial_t^\alpha u$, at each time step one has to use the numerical solutions at all preceding time levels. Thus, the advantages of time stepping schemes, when compared to space-time schemes, are not as pronounced as in the case of standard parabolic problems, and it is natural to consider space-time discretization. Naturally, any such construction would rely on a proper variational formulation of the fractional derivative, which is only well understood for the Riemann-Liouville derivative $^R\partial_t^\alpha u$ at present. Thus, the idea so far is mostly restricted to problem \eqref{eqn:pde} with $v=0$, for which the Riemann-Liouville and Caputo derivatives coincide, and we shall not distinguish the two fractional derivatives in this section. Throughout, let $I=(0,T)$, and let the space $\widetilde{H}_L^s(I)$ consist of functions whose extension by zero belongs to $H^{s}(-\infty,T)$. On the cylindrical domain $Q_T=\Omega\times I$, we denote the $L^2(Q_T)$-inner product by $(\cdot,\cdot)_{L^2(Q_T)}$.
\subsection{Standard Galerkin formulation}\label{ssec:space-time-Galerkin} In an influential work, Li and Xu \cite{LiXu:2009} proposed the first rigorous space-time formulation for problem \eqref{eqn:pde}, which was extended and refined by many other researchers (see, e.g., \cite{ZayernouriAinsworth:2015,HouHasanXu:2018} and the references therein). For any $s\in[0,1]$, we define $$B^s(Q_T)=H^s(I;L^2(\Omega))\cap L^2(I;H_0^1(\Omega)),$$ equipped with the norm \begin{equation*}
\|v\|_{B^s(Q_T)}^2=\|v\|_{H^s(I;L^2(\Omega))}^2+\|v\|_{L^2(I;H^1(\Omega))}^2. \end{equation*} The foundation of the method is the following important identity \cite[Lemma 2.6]{LiXu:2009} \begin{equation}\label{eqn:int-part}
({\DDR0{\alpha}}w,v)_{L^2(I)} = ({\DDR0{\frac{\alpha}{2}}}w,{\DDR1{\frac{\alpha}{2}}}v)_{L^2(I)}\quad \forall w\in \widetilde H_L^1(I),v\in \widetilde H_L^\frac{\alpha}{2}(I), \end{equation} where $\DDR0\gamma w$ and $\DDR1\gamma w$ denote the left-sided and right-sided Riemann-Liouville fractional derivatives, respectively, which for $\gamma\in(0,1)$ are defined by \begin{align*}
{\DDR0\gamma w}(t) &= \frac{{\rm d}}{{\rm d} t}\frac{1}{\Gamma(1-\gamma)}\int_0^t(t-s)^{-\gamma}w(s){\rm d} s, \\
{\DDR1\gamma w}(t) &= -\frac{{\rm d}}{{\rm d} t}\frac{1}{\Gamma(1-\gamma)}\int_t^T(s-t)^{-\gamma}w(s){\rm d} s. \end{align*} By multiplying both sides of problem \eqref{eqn:pde} with $v\in B^\frac{\alpha}{2}(Q_T)$, integrating over the cylindrical domain $Q_T$, applying the formula \eqref{eqn:int-part} in time and integrating by parts in space, we obtain the following bilinear form on the space $B^\frac{\alpha}{2}(Q_T)$: \begin{equation*}
a(u,v) = ({\DDR0{\frac{\alpha}{2}}}u,{\DDR1{\frac{\alpha}{2}}}v)_{L^2(Q_T)} + (\nabla u,\nabla v)_{L^2(Q_T)}. \end{equation*} Hence, the weak formulation of problem \eqref{eqn:pde} is given by: for $f\in L^2(Q_T)$, find $u\in B^\frac{\alpha}{2}(Q_T)$ such that \begin{equation}\label{eqn:weak-LiXu}
a(u,v) = (f,v)_{L^2(Q_T)}\quad \forall v\in B^\frac{\alpha}{2}(Q_T). \end{equation} Clearly, the bilinear form $a(\cdot,\cdot)$ is not symmetric, since the Riemann-Liouville derivatives $\DDR0\gamma u(t)$ and $\DDR1\gamma u(t)$ differ. Nonetheless, it is continuous on the space $B^\frac\alpha2(Q_T)$. Further, since the inner product $({\DDR0{\frac{\alpha}{2}}}v,\ {\DDR1{\frac{\alpha}{2}}}v)_{L^2(I)}$ involving Riemann-Liouville derivatives actually induces an equivalent norm on the space $H^\frac{\alpha}{2}(I)$, which can be shown, e.g., by means of the Fourier transform (see \cite[Lemma 2.5]{LiXu:2009} and \cite[Lemma 4.2]{JinLazarovPasciakRundell:2015}): \begin{equation*}
({\DDR0{\frac{\alpha}{2}}}v,\ {\DDR1{\frac{\alpha}{2}}}v)_{L^2(I)}\geq c\|v\|_{H^\frac{\alpha}{2}(I)}^2, \end{equation*} we have the following coercivity of the bilinear form $a(\cdot,\cdot)$: \begin{equation*}
a(u,u) \geq c\|u\|_{B^\frac\alpha2(Q_T)}^2. \end{equation*} Then the well-posedness of the weak formulation \eqref{eqn:weak-LiXu} follows directly from the Lax-Milgram theorem.
To discretize the weak formulation, Li and Xu \cite{LiXu:2009} employed a spectral approximation for the case of one-dimensional spatial domain $\Omega$. Specifically, let $P_N(I)$ (respectively $P_M(\Omega)$) be the polynomial space of degree less than or equal to $N$ (respectively $M$) with respect to $t$ (respectively $x$). For the spectral approximation in space, the authors employ the space $P_M^0(\Omega):=P_M(\Omega)\cap H_0^1(\Omega)$, and since $v=0$, it is natural to construct the approximation space (in time): \begin{equation*}
P_N^E(I) :=\{v\in P_N(I): v(0)=0\}. \end{equation*} Then for a given pair of integers $M,N$, let $L:=(M,N)$ and $S_L:=P_M^0(\Omega)\otimes P_N^E(I)\subset B^\frac{\alpha}{2}(Q_T)$. The space-time spectral Galerkin approximation to problem \eqref{eqn:pde} reads: find $u_L\in S_L$ such that \begin{equation*}
a(u_L,v_L) = (f,v_L)_{L^2(Q_T)}\quad \forall v_L\in S_L. \end{equation*} The well-posedness of the discrete problem follows directly from the Lax-Milgram theorem. The authors also provided optimal error estimates in the energy norm. However, the $L^2(Q_T)$ error estimate for the approximation remains unclear, since the regularity of the adjoint problem is not well understood. Clearly, the construction extends directly to rectangular domains.
Note that in order to achieve high-order convergence, the standard polynomial approximation space requires high regularity of the solution $u$ in time, which is nontrivial to ensure a priori, in view of the limited smoothing property of the solution operators. Hence, recently, there has been immense interest in developing schemes that can resolve the solution singularity directly. In the context of space-time formulations, singularity enriched trial and/or test spaces, e.g., generalized Jacobi polynomials \cite{ChenShenWang:2016} (including
Jacobi poly-fractonomials \cite{ZayernouriKarniadakis:2013}) and M\"untz polynomials \cite{HouHasanXu:2018}, are extremely promising and have demonstrated very encouraging numerical results. However, the rigorous convergence analysis of such schemes can be very challenging, and is mostly missing for nonsmooth problem data.
\subsection{Petrov-Galerkin formulation} Now we introduce a Petrov-Galerkin formulation recently developed in \cite{DuanJinLazarovPasciakZhou:2018}. Let $V(Q_T)=L^2(I;H_0^1(\Omega))$, denote by $V^*(Q_T)$ its dual, and for any $0<s<1$, define the space $B^s(Q_T)$ by \begin{equation*}
B^s(Q_T)=\widetilde{H}_L^s(I; H^{-1}(\Omega)) \cap L^2(I;H_0^{1}(\Omega)). \end{equation*} The space is endowed with the norm \begin{equation*}
\|v\|^2_{B^s(Q_T)} = \| {\partial_t^s} v \|^2_{V^*(Q_T)} + \|\nabla v\|^2_{L^2(Q_T)}. \end{equation*} Here we have slightly abused the notation $B^s(Q_T)$ since it differs from that in Section \ref{ssec:space-time-Galerkin}. Then we define the bilinear form $a(\cdot,\cdot):B^\alpha(Q_T) \times V(Q_T) \to \mathbb{R}$ by \begin{equation*} a(v, \phi) := ({\partial_t^\alpha} v, \phi)_{L^2(Q_T)} + ( \nabla v, \nabla \phi )_{L^2(Q_T)}. \end{equation*} The Petrov-Galerkin weak formulation of problem \eqref{eqn:pde} reads: find $ u \in {B}^\alpha(Q_T)$ such that \begin{equation}\label{eqn:BV-weak}
a(u, \phi) = ( f, \phi )_{L^2(Q_T)} \quad \forall \phi \in V(Q_T). \end{equation} The bilinear form $a(\cdot,\cdot)$ is continuous on $B^\alpha(Q_T) \times V(Q_T)$, and
it satisfies the following inf-sup condition \begin{equation*}
\sup_{\phi \in V(Q_T)} \frac{a(v,\phi) }{\|\phi \|_{V(Q_T)}} \ge \|v \|_{B^\alpha(Q_T)}\quad \forall v\in B^\alpha(Q_T) \end{equation*} and a compatibility condition, i.e., $\sup_{v \in B^\alpha (Q_T)} a(v,\phi) >0$ for any $0\neq \phi\in V(Q_T)$ \cite[Lemma 2.4]{DuanJinLazarovPasciakZhou:2018}. Thus the well-posedness of the space-time formulation follows directly from the Babu\v{s}ka-Brezzi theory.
Now the development of a novel Petrov-Galerkin method is based on the following idea. Let ${X}_h$ be the space of continuous piecewise linear functions on a quasi-uniform shape regular triangulation $\mathcal{T}_h$ of the domain $\Omega$. Also, take a uniform partition of the time interval $I$ with grid points $t_n=n \tau$, $n=0,\ldots,N$, and time step-size $\tau=T/N$. Following \cite{JinLazarovZhou:2016sinum}, define a set of ``fractionalized'' piecewise constant basis functions $ \phi_n(t)$, $n =1,\ldots,N$, by \begin{equation*} \phi_n(t) = (t - t_{n-1})^{\alpha}\chi_{[t_{n-1}, T]}(t), \end{equation*} where $\chi_S$ denotes the characteristic function of the set $S$. It is easy to verify that $$ \phi_n(t)=\Gamma(\alpha +1){_0I_t^{\alpha}}\chi_{[t_{n-1}, T]}(t) \quad \mbox{ and } \quad \partial_t^\alpha \phi_n (t) =\Gamma(\alpha +1) \chi_{[t_{n-1}, T]}(t). $$ Clearly, $\phi_n \in \widetilde H_L^{\alpha+s}(0,T)$ for any $s\in[0,1/2)$.
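The second identity can also be checked directly: with the substitution $s=t_{n-1}+(t-t_{n-1})u$, the Caputo derivative of $\phi_n$ at any $t>t_{n-1}$ reduces to the Beta integral $\alpha B(\alpha,1-\alpha)/\Gamma(1-\alpha)=\alpha\Gamma(\alpha)=\Gamma(\alpha+1)$, independently of $t$. The following sketch (our own, for illustration) confirms this numerically with a simple midpoint rule; the weak endpoint singularities limit the accuracy to roughly three digits at the resolution used.

```python
import math

def caputo_phi(alpha, a, t, m=100_000):
    # Midpoint-rule evaluation of the Caputo derivative at time t > a of
    # phi(s) = (s - a)^alpha, i.e. (1/Gamma(1-alpha)) int_a^t (t-s)^{-alpha} phi'(s) ds
    h = (t - a) / m
    total = 0.0
    for i in range(m):
        s = a + (i + 0.5) * h
        total += (t - s) ** (-alpha) * alpha * (s - a) ** (alpha - 1.0)
    return total * h / math.gamma(1.0 - alpha)

alpha, a = 0.5, 0.25
for t in (0.5, 1.0):
    print(caputo_phi(alpha, a, t), math.gamma(1.0 + alpha))
    # the two values agree to about 1e-3, independently of t
```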
Further, we introduce the following two spaces \begin{equation*} V_\tau= \text{span}(\{\phi_n(t) \}_{n=1}^N) \quad\mbox{and}\quad W_\tau := \text{span}(\{ \chi_{[t_{n-1}, T]}(t) \}_{n=1}^N). \end{equation*} Then the solution space ${B}_{h,\tau}^\alpha \subset B^\alpha(Q_T)$ and the test space ${V}_{h,\tau}(Q_T) \subset V(Q_T)$ are respectively defined by ${B}_{h,\tau}^\alpha (Q_T):= X_h \otimes {V}_\tau$ and ${V}_{h,\tau}(Q_T) := {X}_h \otimes {W}_\tau$. The space-time Petrov-Galerkin FEM problem of \eqref{eqn:pde} reads: given $f\in V^*(Q_T)$, find $u_{h\tau} \in {B}_{h,\tau}^\alpha(Q_T)$ such that \begin{equation}\label{eqn:BV-weak-h}
a(u_{h \tau}, \phi) = ( f, \phi )_{L^2(Q_T)} \quad \forall \phi \in {V}_{h,\tau}(Q_T). \end{equation} Algorithmically, it leads to a time-stepping like scheme, and thus admits an efficient practical implementation. The existence and the stability of the solution $u_{h \tau}$ follows from the discrete inf-sup condition \cite[Lemma 3.3]{DuanJinLazarovPasciakZhou:2018} \begin{equation*}
\sup_{\phi \in{V}_{h,\tau}(Q_T)} \frac{a(v,\phi) }{\|\phi \|_{V(Q_T)}} \ge c_\alpha\|v\|_{B^\alpha(Q_T)}
\quad \forall v \in {B}_{h,\tau}^\alpha(Q_T). \end{equation*} This condition was shown using the $L^2(I)$ stability of the projection operator from ${V}_\tau$ to ${W}_\tau$. It is interesting to note that the constant in the $L^2(I)$-stability of the operator depends on the fractional order $\alpha$ and deteriorates as $\alpha \to 1$. Note that for standard parabolic problems ($\alpha=1$), it depends on the time step size $\tau$, leading to an undesirable CFL-condition, a fact shown in \cite{LarssonMolteni:2017}. This indicates one significant difference between the fractional model and the standard parabolic model in the context of space-time formulations. In passing, we also note a different Petrov-Galerkin formulation proposed very recently in \cite{Karkulik:2018}, whose numerical realization, however, has not been carried out yet and needs computational verification.
Next, we give two error estimates for the space-time Petrov-Galerkin approximation $u_{h\tau}$, \cite[Theorems 5.2 and 5.3]{DuanJinLazarovPasciakZhou:2018}, in the $B^\alpha(Q_T)$- and $L^2(Q_T)$-norms, respectively. \begin{theorem}\label{thm:err-energy} Let $f \in \widetilde H_L^{s}(0,T;L^2(\Omega ))$ with $0 \le s \le 1$, and $u$ and $u_{h \tau}$ be the solutions of \eqref{eqn:BV-weak} and \eqref{eqn:BV-weak-h}, respectively. Then there holds \begin{align*}
\| u-u_{h \tau}\|_{B^\alpha(Q_T)}&\le c ( \tau^s+ h)\|f\|_{\widetilde{H}_L^s( {0,T;L^2 (\Omega )})},\\
\| u-u_{h \tau} \|_{{L^2}({Q_T})}&\le c ({\tau ^{\alpha+s}} + h^2 )\| f\|_{\widetilde{H} _L^s(0,T;L^2(\Omega))}. \end{align*} \end{theorem}
\subsection{Numerical illustrations, comments and research questions} Now we present some numerical results to show the performance of the space-time Petrov-Galerkin FEM. \begin{example}\label{exam:space-time-1} Consider problem \eqref{eqn:pde} on the unit square domain $\Omega=(0,1)^2$ with $v\equiv 0$ and \begin{itemize} \item[(a)] $f= (e^t-1)x(1-x)y(1-y)\in \widetilde{H}_L^1( {0,T;L^2 (\Omega )})$; \item[(b)] $f= t^{-0.2}x(1-x)y(1-y)\in\widetilde{H}_L^s( {0,T;L^2 (\Omega )})$, for any $s<0.3$. \end{itemize}
The error $\|u_h - u_{h \tau}\|_{{L^2}(Q_T)}$ with $T=1$ is presented in Table \ref{tab:space-time-1}. Due to the compatibility of the data, we observe an error in ${L^2}(Q_T)$-norm of order $O(\tau^{\alpha+1})$ and $O(\tau^{\alpha+0.3})$ for cases {\rm(}a{\rm)} and {\rm(}b{\rm)}, respectively, which fully supports the theoretical results in Theorem \ref{thm:err-energy}. \end{example}
\begin{table}[htb!] \caption{The ${L^2}(Q_T)$-norm error for Example \ref{exam:space-time-1}, space-time PGFEM scheme with $\tau=T/N$ and $h=1/200$.}\label{tab:space-time-1} \vskip-10pt \centering
\begin{tabular}{|c|c|ccccc|c|}
\hline
&$\alpha\backslash N$ &$20$ &$40$ & $80$ & $160$ & $320$ &rate \\
\hline
& 0.3 &1.02e-5 &4.16e-6 &1.70e-6 &7.03e-7 &2.96e-7 &$\approx$ 1.26 ($1.30$)\\
$(a)$ & 0.5 &4.30e-6 &1.53e-6 &5.51e-7 &2.02e-7 &7.73e-8 &$\approx$ 1.45 ($1.50$)\\
& 0.7 &2.05e-6 &6.43e-7 &1.98e-7 &6.17e-8 &2.00e-8 &$\approx$ 1.63 ($1.70$)\\
\hline
& 0.3 &3.20e-5 &2.50e-5 &1.93e-4 &1.48e-4 &1.13e-4 &$\approx$ 0.40 ($0.60$)\\
$(b)$ & 0.5 &2.93e-4 &1.99e-4 &1.31e-4 &8.37e-5 &5.24e-5 &$\approx$ 0.67 ($0.80$)\\
& 0.7 &2.15e-4 &1.14e-4 &5.64e-5 &2.72e-5 &1.31e-5 &$\approx$ 1.05 ($1.00$)\\
\hline
\end{tabular} \end{table}
We conclude this section with two important research problems on space-time formulations. \begin{itemize}
\item[(i)] The development of space-time formulations relies crucially on proper variational formulations
for the fractional derivative, and this is relatively well understood for the Riemann-Liouville fractional derivative
but not yet for the Caputo one. This is largely the main reason for the restriction to the case of a
zero initial data. It is of much interest to develop techniques for handling nonzero initial data in the Caputo case,
especially nonsmooth initial data.
\item[(ii)] Nonpolynomial approximation spaces for the trial and test functions lead to interesting new schemes, supported by extremely promising numerical results. However, the performance may depend strongly on the exponent of the fractional powers, and it would be of much interest to develop strategies to adapt the algorithmic parameters automatically. Many theoretical questions surrounding such schemes, of either Galerkin or Petrov-Galerkin type, are largely open. \end{itemize}
\section{Concluding remarks}\label{sec:conclus} In this paper, we have concisely surveyed relevant results on numerical methods for the subdiffusion problem with nonsmooth problem data, with a focus on the state of the art in the following aspects: regularity theory, finite element discretization, time-stepping schemes and space-time formulations. We compared the theoretical results with those for standard parabolic problems, and provided illustrative numerical results. We also outlined a few interesting research problems that would lead to further developments and theoretical understanding, and pointed out the most relevant references. Thus, it may serve as a brief introduction to this fast-growing area of numerical analysis.
The subdiffusion model represents one of the simplest models in the zoology of fractional diffusion or anomalous diffusion. The authors believe that much of the analysis may be extended to more complex models, e.g., the diffusion-wave model, multi-term and distributed-order models, tempered subdiffusion, the nonsingular Caputo-Fabrizio fractional derivative, and space-time fractional models. However, these complex models have scarcely been studied in the context of nonsmooth problem data, and their distinct features remain largely to be explored both analytically and numerically.
\end{document} |
\begin{document}
\maketitle
\begin{abstract} We prove Bombieri--Vinogradov and Barban--Davenport--Halberstam type theorems for the $y$-smooth numbers less than $x$, on the range $\log^{K}x \leq y \leq x$. This improves on the range $\exp\{\log^{2/3+\epsilon}x\} \leq y \leq x$ that was previously available. Our proofs combine zero-density methods with direct applications of the large sieve, which seems to be an essential feature and allows us to cope with the sparseness of the smooth numbers. We also obtain improved individual (i.e. not averaged) estimates for character sums over smooth numbers. \end{abstract}
\section{Introduction} For $y \geq 1$, let $\mathcal{S}(y)$ denote the set of $y$-smooth numbers: that is, the set of numbers all of whose prime factors are less than or equal to $y$. Smooth numbers are ubiquitous in analytic number theory (see e.g. the survey paper~\cite{ht3} of Hildebrand and Tenenbaum), and it is natural to investigate many of the same questions for them as are studied for the prime numbers. For example, one might be interested in the distribution of smooth numbers among the integers less than $x$; in arithmetic progressions; in short intervals $[x,x+z]$; or in arithmetic progressions on average. In this paper we will prove some results concerning the latter problem.
In the case of primes, the most celebrated theorem concerning their average distribution in arithmetic progressions is undoubtedly the {\em Bombieri--Vinogradov theorem}: for any fixed $A > 0$, and any $\sqrt{x} \log^{-A}x \leq Q \leq \sqrt{x}$, we have
$$ \sum_{q \leq Q} \max_{y \leq x} \max_{(a,q)=1} \left| \Psi(y;q,a)-\frac{y}{\phi(q)} \right| \ll_{A} \sqrt{x}Q\log^{4}x, $$ where as usual we set $\Psi(y;q,a) := \sum_{n \leq y, n \equiv a (\textrm{mod } q)} \Lambda(n)$. In particular, if $Q = \sqrt{x} \log^{-A}x$ then the right hand side is $\ll_{A} x\log^{4-A}x$, which beats the trivial bound $\ll x\log Q$ (obtained since $\Psi(x;q,a) \ll x/\phi(q)$ for $q \leq \sqrt{x}$) provided $A > 3$.
If one is prepared to sum over residue classes $a$, rather than taking a maximum, then one can obtain an interesting bound with $Q$ much larger. Such a result may be called a {\em Barban--Davenport--Halberstam theorem}: for any fixed $A > 0$, and any $x \log^{-A}x \leq Q \leq x$, we have
$$ \sum_{q \leq Q} \sum_{(a,q)=1} \left| \Psi(x;q,a)-\frac{x}{\phi(q)} \right|^{2} \ll_{A} xQ \log x . $$
The theorems quoted above are essentially as stated in Davenport's book~\cite{davenport}, and are actually due to Vaughan~\cite{vaughan} and to Gallagher~\cite{gallagher}, respectively\footnote{In Davenport's book~\cite{davenport} the Bombieri--Vinogradov theorem is stated with a bound $\sqrt{x}Q\log^{5}x$, but the proof is by Vaughan's~\cite{vaughan} method which readily yields a bound $\sqrt{x}Q\log^{4}x$ if one is slightly more careful.}. The original results of Bombieri~\cite{bomb} and Vinogradov~\cite{aivino}, and of Barban (see $\S 3$ of \cite{barban}) and Davenport and Halberstam~\cite{davhal}, were slightly quantitatively weaker.
Let us point out two things about these results that will be significant later. Firstly, both bounds are ineffective (unless $A$ is very small), because the proofs rely on information about possible Siegel zeros of Dirichlet $L$-functions. In our current state of knowledge this ineffectiveness seems unavoidable, because if a Siegel zero did exist it would genuinely distort the distribution of primes in arithmetic progressions. Secondly, the known proofs actually give bounds for absolute values of character sums, namely
$$ \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} \left|\sum_{n \leq x} \Lambda(n) \chi(n) \right| \;\;\; \textrm{ and } \;\;\; \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} \left|\sum_{n \leq x} \Lambda(n) \chi(n) \right|^{2}, $$
respectively. One wouldn't hope for a better bound than $|\sum_{n \leq x} \Lambda(n) \chi(n)| \ll \sqrt{x}$, and the right hand sides in the theorems do indeed correspond, up to logarithmic factors, to such squareroot cancellation. Thus, although we would expect a non-trivial Bombieri--Vinogradov type theorem to hold with $Q$ much larger than $\sqrt{x}$, we wouldn't hope to prove such a result in this way (i.e. by bounding absolute values of character sums). In the case of the Barban--Davenport--Halberstam theorem the bound is known to be sharp: see e.g. the paper~\cite{mont2} of Montgomery. Indeed, in that case the original problem is equivalent to bounding the character sum that we wrote down.
Now we turn to the case of smooth numbers. For $x \geq 1$, and natural numbers $a,q$, we define $$ \Psi_{q}(x,y) := \sum_{n \leq x, (n,q)=1} \textbf{1}_{\{n \in \mathcal{S}(y)\}} \;\;\; \textrm{ and } \;\;\; \Psi(x,y;q,a) := \sum_{n \leq x, n \equiv a (\textrm{mod } q)} \textbf{1}_{\{n \in \mathcal{S}(y)\}}, $$
where $\textbf{1}$ denotes the indicator function. We will also write $\Psi(x,y) := \sum_{n \leq x} \textbf{1}_{\{n \in \mathcal{S}(y)\}}$. Thus our task is to give bounds, on average, for $|\Psi(x,y;q,a) - \Psi_{q}(x,y)/\phi(q)|$. As in the case of prime numbers, one of our concerns will be to obtain good bounds with as large a range of summation over $q$ as possible. However, here we also need to be concerned about the range of $y$ that we can handle. When $y$ is small compared with $x$ the $y$-smooth numbers become a very sparse set, (for example if $y = \log^{K}x$ for some fixed $K \geq 1$ then $\Psi(x,\log^{K}x)/x \approx x^{-1/K}$, which is less by far than the density $\approx 1/\log x$ of the primes less than $x$), which creates new and interesting difficulties.
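For small parameters these counting functions can be evaluated by brute force. The following Python sketch (our own, purely illustrative and not part of the argument) implements $\Psi(x,y)$ and $\Psi(x,y;q,a)$ directly from the definitions.

```python
def is_smooth(n, y):
    # True iff every prime factor of n is <= y (so n = 1 is y-smooth)
    d = 2
    while d * d <= n:
        if n % d == 0:
            if d > y:
                return False
            while n % d == 0:
                n //= d
        d += 1
    return n <= y

def Psi(x, y):
    # Psi(x, y): number of y-smooth integers n <= x
    return sum(1 for n in range(1, x + 1) if is_smooth(n, y))

def Psi_qa(x, y, q, a):
    # Psi(x, y; q, a): y-smooth integers n <= x with n = a (mod q)
    return sum(1 for n in range(1, x + 1)
               if n % q == a % q and is_smooth(n, y))

print(Psi(20, 3))          # -> 10: the 3-smooth numbers 1,2,3,4,6,8,9,12,16,18
print(Psi_qa(20, 3, 4, 1)) # -> 2: namely 1 and 9
```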
We shall prove the following results: in their statements $c > 0$ and $K > 0$ are certain fixed and effective constants, that should be thought of as quite small and quite large, respectively. \begin{thm1} Let $\log^{K}x \leq y \leq x$ be large and let $1 \leq Q \leq \sqrt{\Psi(x,y)}$. Then
$$ \sum_{q \leq Q} \max_{(a,q)=1} \left| \Psi(x,y;q,a)-\frac{\Psi_{q}(x,y)}{\phi(q)} \right| \ll \Psi(x,y)\left(e^{-\frac{cu}{\log^{2}(u+1)}} + y^{-c}\right) + \sqrt{\Psi(x,y)}Q\log^{7/2}x, $$ where $u:=(\log x)/\log y$, and the implicit constant is absolute {\em and effective}. In addition, for any $A > 0$ we have the bound $$ \ll_{A} \Psi(x,y)\left(\frac{e^{-\frac{cu}{\log^{2}(u+1)}}}{\log^{A}x} + y^{-c}\right) + \sqrt{\Psi(x,y)}Q\log^{7/2}x, $$ but the implicit constant is now ineffective. \end{thm1}
\begin{thm2} Let $\log^{K}x \leq y \leq x$ be large and let $1 \leq Q \leq \Psi(x,y)$. Then
$$ \sum_{q \leq Q} \sum_{(a,q)=1} \left| \Psi(x,y;q,a)-\frac{\Psi_{q}(x,y)}{\phi(q)} \right|^{2} \ll \Psi(x,y)^{2} \left(e^{-\frac{cu}{\log^{2}(u+1)}} + y^{-c}\right) + \Psi(x,y)Q, $$ where the implicit constant is absolute {\em and effective}. In addition, for any $A > 0$ we have the bound $$ \ll_{A} \Psi(x,y)^{2}\left(\frac{e^{-\frac{cu}{\log^{2}(u+1)}}}{\log^{A}x} + y^{-c}\right) + \Psi(x,y)Q , $$ but the implicit constant is now ineffective. \end{thm2}
Theorems 1 and 2 supply non-trivial bounds when $Q=\sqrt{\Psi(x,y)}\log^{-A}x$ and $Q=\Psi(x,y)\log^{-A}x$, respectively. This mirrors the classical Bombieri--Vinogradov and Barban--Davenport--Halberstam theorems, and as there one couldn't hope to prove non-trivial bounds for larger $Q$ by a method based on absolute values of character sums\footnote{By ``non-trivial bounds'' we mean ``better than would follow if we just assumed that $|\Psi(x,y;q,a)-\Psi_{q}(x,y)/\phi(q)| \ll \Psi_{q}(x,y)/\phi(q)$ always''. However, for general $x,y,q$ we cannot prove bounds like this, (see Hildebrand and Tenenbaum's survey paper~\cite{ht3} for some discussion of what is known), so Theorems 1 and 2 might still be interesting for larger $Q$.}. Moreover, provided that $y$ isn't too big, or more specifically provided $u/(\log^{2}(u+1) \log\log x) \rightarrow \infty$, we save an arbitrary power of a logarithm {\em in an effective way}. We can prove strong effective results because a hypothetical Siegel zero does not distort the distribution of $y$-smooth numbers too much, when $y$ is small. One way to think about this is to note that if the $L$-series $L(s,\chi)$ has a Siegel zero, then the character $\chi$ must ``behave a lot like'' the M\"{o}bius function $\mu$. See e.g. Granville and Soundararajan's article~\cite{gransound} for some discussion of such issues. Whereas the sum of the M\"{o}bius function over primes is very large, the sum of the M\"{o}bius function over smooth numbers exhibits cancellation, as discussed in detail in Tenenbaum's paper~\cite{tenerdosall}.
Theorem 1 improves on Th\'{e}or\`{e}me 1 of Fouvry and Tenenbaum~\cite{fouvryten}, who proved such a result on the restricted range $\exp\{\log^{2/3+\epsilon}x\} \leq y \leq x$, for any fixed $\epsilon > 0$, with a bound roughly of the form $$ \ll_{A,\epsilon} \frac{\Psi(x,y) e^{-c_{1}u/\log^{2}(u+1)}}{\log^{A}x} + \sqrt{x}Q u^{u(1+o_{A,\epsilon}(1))} \log^{A+5}x . $$ Here $c_{1}=c_{1}(\epsilon,A) > 0$ is a certain constant, and the $o(1)$ term tends to 0 (in a way that depends a little bit on $\epsilon$ and $A$) as $u \rightarrow \infty$. Note that the bound in Theorem 1 is always at least as good as this. Although Fouvry and Tenenbaum do not emphasise it, the implicit constant in their bound appears, like ours, to be effectively computable when $A$ is small.
See Fouvry and Tenenbaum's paper~\cite{fouvryten} for some discussion of earlier Bombieri--Vinogradov type results for smooth numbers. In particular, Granville~\cite{granville} proved such a result, with a bound $\ll_{A} \Psi(x,y)/\log^{A}x$, that is valid for any $100 \leq y \leq x$, but only when $Q \leq \min\{\sqrt{x}/\log^{B(A)}x , y^{C(A)\log\log y /\log\log\log y } \}$. When $y$ is small this range for $Q$ is much smaller than permitted in Theorem 1. The author is not aware of any previous results like Theorem 2 in the literature, although the methods of Fouvry and Tenenbaum~\cite{fouvryten} could presumably be adapted to yield a similar result on the range $\exp\{\log^{2/3+\epsilon}x\} \leq y \leq x$.
Now let us say something about the proofs of Theorems 1 and 2. As usual, on introducing Dirichlet characters the left hand side in Theorem 1 is seen to be
$$ \leq \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} |\Psi(x,y;\chi)|, \;\;\; \textrm{ where } \;\;\; \Psi(x,y;\chi) := \sum_{n \leq x} \chi(n)\textbf{1}_{\{n \in \mathcal{S}(y)\}}. $$ We will divide the double sum into three parts, according as the conductor $\textrm{cond}(\chi)$ satisfies: \begin{enumerate} \item $\textrm{cond}(\chi) \leq \min\{y^{\eta},e^{\eta \sqrt{\log x}}\}$; or
\item $\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < \textrm{cond}(\chi) \leq x^{\eta}$; or
\item $\textrm{cond}(\chi) > x^{\eta}$. \end{enumerate} Here $\eta > 0$ will be a certain sufficiently small constant.
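(The inequality displayed above is the standard orthogonality computation: for $(a,q)=1$ one has $$ \Psi(x,y;q,a) = \frac{1}{\phi(q)} \sum_{\chi \; (\textrm{mod } q)} \overline{\chi}(a) \Psi(x,y;\chi) = \frac{\Psi_{q}(x,y)}{\phi(q)} + \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} \overline{\chi}(a) \Psi(x,y;\chi) , $$ since $\Psi(x,y;\chi_{0}) = \Psi_{q}(x,y)$, and then one applies the triangle inequality and takes the maximum over $a$.)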
In $\S 3$ and in the appendix we will prove a result that gives bounds on $|\Psi(x,y;\chi)|$ in terms of the zero-free region of the $L$-series $L(s,\chi)$. Combining this result with the classical zero-free region for Dirichlet $L$-functions will yield the following character sum estimate, which improves on Th\'{e}or\`{e}me 4 of Fouvry and Tenenbaum~\cite{fouvryten0}: \begin{thm3} There exist a small absolute constant $b > 0$, and a large absolute constant $M > 0$, such that the following is true. If $\log^{M}x \leq y \leq x$ is large; and $\chi$ is a non-principal Dirichlet character with conductor $r := \textup{cond}(\chi) \leq y^{b}$ and to a modulus $\leq x$; and the largest real zero $\beta=\beta_{\chi}$ of $L(s,\chi)$ is $\leq 1 - M/\log y$; then
$$ |\Psi(x,y;\chi)| \ll \Psi(x,y) \sqrt{\log x \log y} (e^{-(b \log x) \min\{1/\log r, 1-\beta \}}\log\log x + e^{-b\sqrt{\log x}} + y^{-b}) . $$ \end{thm3} The part of the double sum on the range (i) can be bounded using Theorem 3, except for any characters for which $L(s,\chi)$ has a real zero too close to 1 (i.e. a Siegel zero, in a somewhat strong sense). However, the contribution from any such characters can also be bounded successfully, as we will show in $\S 3$.
On the range (ii), our bounds for $|\Psi(x,y;\chi)|$ will be satisfactory provided we have a fairly big zero-free region for $L(s,\chi)$. Although such a zero-free region isn't known for individual $L$-functions, zero-density estimates imply that it can fail for at most a few characters. Since the summands in (ii) are accompanied by a factor $1/\phi(q)$, with $q > \min\{y^{\eta},e^{\eta \sqrt{\log x}}\}$, the contribution from such rogue $L$-functions will trivially be small enough.
On the range (iii) we will apply the multiplicative large sieve directly to bound our sums. It is now well understood that such a procedure will succeed provided we can decompose $|\Psi(x,y;\chi)|$ into (a small number of non-trivial) double character sums, and provided we look at characters whose conductor is large enough relative to the density of the $y$-smooth numbers less than $x$. We supply the relevant argument in $\S 4$.
The left hand side in Theorem 2 is equal to $\sum_{q \leq Q} (1/\phi(q)) \sum_{\chi \; (\textrm{mod } q), \; \chi \neq \chi_{0}} |\Psi(x,y;\chi)|^{2}$, and the arguments used to prove Theorem 1 will apply directly to bound this as well (considerably more easily when it comes to using the large sieve on the range (iii)).
The reader might wonder why it is necessary to use both zero-density methods, and the large sieve, to handle characters of large conductor here, when the classical Bombieri--Vinogradov theorem can be proved using either method on its own. (See e.g. Bombieri's paper~\cite{bomb} for a proof using zero-density estimates, and the third edition of Davenport's book~\cite{davenport} for a proof using the large sieve directly.) The answer is that the $y$-smooth numbers are much sparser than the primes, unless $y$ is very large, and are related to the $L$-series $L(s,\chi)$ in a more indirect, ``exponentiated'' way.
Fouvry and Tenenbaum~\cite{fouvryten} used the large sieve to prove a Bombieri--Vinogradov theorem for smooth numbers, and as we remarked this only works when $\exp\{\log^{2/3+\epsilon}x\} \leq y \leq x$. For smaller $y$ the $y$-smooth numbers are sufficiently sparse that the bound supplied by the large sieve, which is insensitive to this sparseness, becomes poor.
In the classical Bombieri--Vinogradov theorem, after applying the explicit formula for character sums over primes one is left to bound a double sum over characters and over zeros of $L(s,\chi)$. In contrast, in proving Theorems 1 and 2 the sum over zeros and the sum over characters become separated ``by an exponentiation''. The reader is referred to $\S 3$ for a proper explanation of this, but it essentially means that one needs good bounds for the {\em number of characters} $\chi$ (with certain conductors) such that $L(s,\chi)$ has at least one zero in a box, as opposed to bounds for the total number of zeros of all such $L(s,\chi)$ in that box. Zero-density estimates of the latter type imply bounds of the former type, but with a quantitative loss that is considerable when counting zeros far into the critical strip or of large height. This loss means that we cannot handle characters whose conductor is too big by zero-density methods.
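In symbols: writing $N(\sigma,T,\chi)$ for the number of zeros of $L(s,\chi)$ with $\Re(s) \geq \sigma$ and $|\Im(s)| \leq T$, what is needed here is a bound for the left hand side of $$ \#\{\chi : N(\sigma,T,\chi) \geq 1\} \leq \sum_{\chi} N(\sigma,T,\chi) , $$ whereas zero-density estimates bound the right hand side, and the inequality is wasteful precisely when the exceptional $L(s,\chi)$ each have many zeros in the box.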
Finally we point out that our approach of relating $\Psi(x,y;\chi)$ to zeros of $L(s,\chi)$ is connected to various previous work. Within the classical zero-free region, this idea appears to originate with Fouvry and Tenenbaum~\cite{fouvryten0}. Later, Konyagin and Soundararajan~\cite{konsound} and Soundararajan~\cite{sound} (see particularly $\S 3$ of that paper) investigated the consequences of a larger zero-free region, in conjunction with zero-density estimates. The author extended some of that work as an ingredient in the paper~\cite{harper3}. In $\S 3$ of the present paper we prove a result connecting $\Psi(x,y;\chi)$ to zeros of $L(s,\chi)$, that is roughly comparable to what is obtained for character sums over primes using Perron's formula. As we will discuss, this result is still capable of various refinements, but the author hopes it will be a useful general tool in future work on smooth numbers in arithmetic progressions.
\section{Background on smooth numbers and on zeros of $L$-functions} Our arguments will require a few background results on the distribution of smooth numbers. Most importantly, we shall require the following seminal result of Hildebrand and Tenenbaum~\cite{ht}: \begin{smooth1}[Hildebrand and Tenenbaum, 1986] We have uniformly for $x \geq y \geq 2$, $$ \Psi(x,y) = \frac{x^{\alpha} \zeta(\alpha,y)}{\alpha \sqrt{2\pi(1+(\log x)/y)\log x \log y}} \left(1 + O\left(\frac{1}{\log(u+1)} + \frac{1}{\log y} \right) \right), $$ where $u = (\log x)/\log y$, $\zeta(s,y) := \sum_{n: n \textrm{ is } y \textrm{ smooth}} 1/n^{s} = \prod_{p \leq y}(1-p^{-s})^{-1}$ for $\Re(s) > 0$, and $\alpha = \alpha(x,y) > 0$ is defined by $$ \sum_{p \leq y} \frac{\log p}{p^{\alpha}-1} = \log x. $$ \end{smooth1}
Smooth Numbers Result 1 is proved using a saddle-point method, and we will sometimes refer to this expression for $\Psi(x,y)$ as the ``saddle-point expression''. Hildebrand and Tenenbaum~\cite{ht} also established a simple approximation for $\alpha(x,y)$ on the whole range $2 \leq y \leq x$. Their Lemma 2 implies, in particular, that when $\log x < y \leq x^{1/3}$ one has $$ \alpha(x,y) = 1 - \frac{\log(u\log u)}{\log y} + O\left(\frac{1}{\log y}\right). $$
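As an illustrative aside (not taken from~\cite{ht}): since the left hand side of the equation defining $\alpha(x,y)$ is strictly decreasing in $\alpha$, the saddle-point is easy to approximate numerically by bisection. A minimal sketch, in which the function name, the bracketing interval and the tolerance are arbitrary choices of ours:

```python
import math

def saddle_point_alpha(x, y, tol=1e-10):
    """Solve sum_{p <= y} log(p) / (p^alpha - 1) = log(x) for alpha
    by bisection; the left hand side is strictly decreasing in alpha."""
    # Sieve of Eratosthenes for the primes up to y
    sieve = [True] * (y + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(y**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    primes = [p for p in range(2, y + 1) if sieve[p]]

    def f(a):
        return sum(math.log(p) / (p**a - 1) for p in primes) - math.log(x)

    lo, hi = 1e-9, 10.0  # f(lo) > 0 > f(hi), so the root is bracketed
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For instance, one can check numerically that $\alpha$ increases towards $1$ as $y$ grows with $x$ fixed, and decreases as $x$ grows with $y$ fixed, in line with the approximation quoted above.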
The reader might desire a more explicit estimate for $\Psi(x,y)$, so we remark that a result of Hildebrand~\cite{hildebrand} implies, in particular, that $$ \Psi(x,y) = x\rho(u)\left(1+O\left(\frac{\log(u+1)}{\log y}\right) \right), \;\;\;\;\; e^{(\log\log x)^{2}} \leq y \leq x, $$ where the Dickman function $\rho(u)$ is a certain continuous function that satisfies $\rho(u) = e^{-(1+o(1))u\log u}$ as $u \rightarrow \infty$. Thus the $y$-smooth numbers are a very sparse set when $u=(\log x)/\log y$ is large. See Hildebrand and Tenenbaum's paper~\cite{ht} for much further discussion of the behaviour of $\Psi(x,y)$.
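As a further aside, the decay of $\rho(u)$ is easy to observe numerically: $\rho$ satisfies $\rho(u) = 1$ for $0 \leq u \leq 1$ and the delay differential equation $u\rho'(u) = -\rho(u-1)$ for $u > 1$, which can be stepped forward on a grid. A minimal sketch (the forward Euler scheme and the step size are arbitrary choices of ours, not taken from the papers cited):

```python
import math

def dickman_rho(u_target, h=1e-3):
    """Approximate the Dickman function rho(u), which satisfies
    rho(u) = 1 on [0,1] and u*rho'(u) = -rho(u-1) for u > 1,
    by forward Euler steps of size h on a uniform grid."""
    m = round(1 / h)            # grid points per unit interval
    n = round(u_target / h)
    rho = [1.0] * (m + 1)       # rho = 1 on [0, 1]
    for i in range(m + 1, n + 1):
        u_prev = (i - 1) * h
        # Euler step: rho(u) ≈ rho(u - h) - h * rho(u - 1 - h) / (u - h)
        rho.append(rho[i - 1] - h * rho[i - 1 - m] / u_prev)
    return rho[n]
```

On $1 \leq u \leq 2$ one has $\rho(u) = 1 - \log u$ exactly, which gives a simple check on the output; beyond that the values decay roughly like $e^{-u\log u}$.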
We will also require some crude ``local'' information about $\Psi(x,y)$, that describes roughly how this function changes when $x$ or $y$ change a little. \begin{smooth2}[Following Hildebrand and Tenenbaum, and others] For any large $\log x \leq y \leq x$ we have $$ \Psi(2x,y) \ll \Psi(x,y) \;\;\; \textrm{ and } \;\;\; \Psi(x,y(1+1/\log x)) \ll \Psi(x,y) . $$ \end{smooth2} The first bound follows immediately from Theorem 3 of Hildebrand and Tenenbaum~\cite{ht}, for example. For the second bound, we may assume that $y \leq x^{1/3}$ (since otherwise $\Psi(x,y) \gg x$), and in view of Smooth Numbers Result 1 it will suffice to show that $\alpha' := \alpha(x,y(1+1/\log x))$ satisfies $\alpha' = \alpha(x,y) + O(1/\log x)$ when $y \geq \log x$. But by definition of $\alpha'$ we have $$ \sum_{p \leq y(1+1/\log x)} \frac{\log p}{p^{\alpha'}-1} = \log x , $$ and therefore $$ \sum_{p \leq y} \frac{\log p}{p^{\alpha'}-1} = \log x - O(\frac{y^{1-\alpha'}\log y}{\log x}) = \log x - O(\log u) = \log(x/u^{O(1)}) . $$ By definition of the saddle-point $\alpha(\cdot,\cdot)$, this implies that $\alpha' = \alpha(x/u^{O(1)},y)$. Then the claimed estimate $\alpha' - \alpha = O(1/\log x)$ follows because, when $y > \log x$, we have $\partial\alpha(x,y)/\partial x = O(1/(x \log x \log y)) = O(1/(x \log x \log u))$, as shown in e.g. the proof of Theorem 4 of Hildebrand and Tenenbaum~\cite{ht}.
Finally, in some of our applications of Perron's formula we will need an upper bound for the number of $y$-smooth numbers in short intervals. The following result, which is a consequence of a ``sublinearity result'' of Hildebrand~\cite{hildebrandshort}, will be sufficient. \begin{smooth3}[Hildebrand, 1985] For any large $y,z$ and $x \geq \max\{y,z\}$ we have $$ \Psi(x+z,y) - \Psi(x,y) \ll \Psi(z,y) \ll 2^{\log(x/z)/\log y} \left(\frac{z}{x}\right)^{\alpha(x,y)} \Psi(x,y) , $$ where $\alpha(x,y)$ is the saddle-point defined in Smooth Numbers Result 1. \end{smooth3} The first inequality here follows from Theorem 4 of Hildebrand~\cite{hildebrandshort}, and the second by iteratively applying Theorem 3 of Hildebrand and Tenenbaum~\cite{ht} with the choice $c = \min\{x/z,y\}$.
As the reader might expect having read the introduction, our arguments will also require various information about the zeros of Dirichlet $L$-functions $L(s,\chi)$. The following statement collects together the facts we will need. \begin{zeros1} There is an absolute and effective constant $\kappa > 0$ such that, for any $q, Q \geq 1$, the functions $F_{q}(s):=\prod_{\chi (\textrm{mod } q)} L(s,\chi)$ and $G_{Q}(s):=\prod_{q \leq Q} \prod_{\chi (\textrm{mod } q)}^{*} L(s,\chi)$ have the following properties (where $\prod^{*}$ denotes a product over primitive characters): \begin{enumerate}
\item {\em (zero-free region)} $F_{q}(\sigma+it)$ has at most one zero in the region $\sigma \geq 1 - \kappa/\log(q(2+|t|))$. If such an ``exceptional'' zero exists then it is real, simple, and corresponds to a non-principal real character.
\item {\em (Page's theorem)} $G_{Q}(\sigma+it)$ has at most one zero (which, if it exists, is necessarily real, simple, and arises from a real character) in the region $\sigma \geq 1 - \kappa/\log(Q(2+|t|))$.
\item {\em (Siegel's theorem)} for any $\epsilon > 0$ there is a constant $C(\epsilon) > 0$, {\em which in general is non-effective}, such that $F_{q}(\sigma)$ has no real zeros $\sigma \geq 1 - C(\epsilon)/q^{\epsilon}$.
\item {\em (log-free zero-density estimate)} for any $\epsilon > 0$ and any $\sigma \geq 1/2$, $T \geq 1$, the function $F_{q}(s)$ has $\ll_{\epsilon} (qT)^{(12/5+\epsilon)(1-\sigma)}$ zeros $s$, counted with multiplicity, with $\Re(s) \geq \sigma$ and $|\Im(s)| \leq T$. Moreover, $G_{Q}(s)$ has $\ll_{\epsilon} (Q^{2}T)^{(12/5+\epsilon)(1-\sigma)}$ zeros in that region. \end{enumerate} \end{zeros1}
The zero-free region, Page's theorem and Siegel's theorem are all proved in standard textbooks on multiplicative number theory: see e.g. Chapter 11 of Montgomery and Vaughan~\cite{mv}. The log-free zero-density estimates stated above are proved in Huxley's paper~\cite{huxley} for values of $\sigma$ bounded away from 1, and in Jutila's paper~\cite{jutila} for values of $\sigma$ close to 1 (actually with a smaller exponent than the famous $12/5 + \epsilon$). The description ``log-free'' refers to the fact that there are no logarithmic factors, or other factors that do not decay with $1-\sigma$, in the estimates, which are therefore still very useful when $\sigma$ is very close to 1. We will exploit this state of affairs in e.g. Lemma 1, below.
\section{The zero-density argument} In this section our main goal is to prove the following proposition, which gives Perron-type bounds for $\Psi(x,y;\chi)$ in terms of the zero-free region of $L(s,\chi)$.
\begin{prop1} There exist a small absolute constant $d > 0$, and a large absolute constant $C > 0$, such that the following is true.
Let $\log^{1.1}x \leq y \leq x$ be large. Suppose that $\chi$ is a non-principal Dirichlet character with conductor $r:=\textup{cond}(\chi) \leq x^{d}$, and to modulus $q \leq x$, such that $L(s,\chi)$ has no zeros in the region
$$ \Re(s) > 1- \epsilon, \;\;\; |\Im(s)| \leq H, $$ where $C/\log y < \epsilon \leq \alpha(x,y)/2$ and $y^{0.9\epsilon} \log^{2}x \leq H \leq x^{d}$. Suppose, moreover, that {\em at least one} of the following holds: \begin{itemize} \item $y \geq (Hr)^{C}$;
\item $\epsilon \geq 40\log\log(qyH)/\log y$. \end{itemize}
Then we have the bound
$$ |\Psi(x,y;\chi)| \ll \Psi(x,y)\sqrt{\log x \log y}(x^{-0.3\epsilon}\log H + \frac{1}{H^{0.02}}) . $$ \end{prop1}
We will then use Proposition 1 to deduce Theorem 3, and to handle characters on the ranges (i) and (ii) (for Theorems 1 and 2) described in the introduction.
Before we do this, let us make a few remarks. The restrictions on $\epsilon$ and $H$ in Proposition 1 may seem technical and off-putting, but they will be easy to satisfy in practice and, as the reader will see, they arise naturally in the proof. The restriction that $y \geq \log^{1.1}x$ is, to some extent, for simplicity, in particular so that $$ \alpha(x,y) \geq \alpha(x,\log^{1.1}x) = 1-\frac{\log\log x + O(1)}{\log(\log^{1.1}x)} \geq 0.05 \gg 1, $$ and many of our arguments work when $y$ is smaller. However, when $y \leq \log x$ there is a genuine change in the nature of the $y$-smooth numbers less than $x$, in that almost all of them are products of large powers of primes. Moreover, when $y \leq \log x$ then $\Psi(x,y) = x^{o(1)}$, (see e.g. Chapter 7.1 of Montgomery and Vaughan~\cite{mv}), so one couldn't have a Bombieri--Vinogradov type theorem with a large range of summation over $q$.
Secondly, it would be more natural to compare $|\Psi(x,y;\chi)|$ with its trivial bound $\Psi_{q}(x,y)$, rather than $\Psi(x,y)$. We refrain from doing this because it would be quite complicated to formulate a single result that holds on a very wide range of $x,y$ and $q$, but again many of our arguments do supply bounds involving $\Psi_{q}(x,y)$, and a reader who wants such a result should have no difficulty in adapting our methods. Also see de la Bret\`{e}che and Tenenbaum's paper~\cite{dlbten} for many results relating $\Psi_{q}(x,y)$ and $\Psi(x,y)$.
Thirdly, the multiplier $\sqrt{\log x \log y}$ in Proposition 1 and Theorem 3, which will be problematic when $\epsilon$ is very small or $r$ is very close to $x$, can almost certainly be removed by adapting the ``majorant principle'' argument in $\S 2.3$ of the author's paper~\cite{harper3}. One can perhaps also remove the condition that either $y \geq (Hr)^{C}$ or $\epsilon \geq 40\log\log(qyH)/\log y$, by introducing smooth weights into the explicit formula arguments that prove Proposition 1 (as was done in less general settings by Soundararajan~\cite{sound} and the author~\cite{harper3}). However, since the proof of Proposition 1 is already quite involved we do not work these extensions out here.
\subsection{Proof of Proposition 1} The proposition will be a relatively easy consequence of the following lemmas: \begin{lem1} There is a large absolute constant $C > 0$ such that the following is true. Suppose $\chi$ is a non-principal Dirichlet character with conductor $r$, and to modulus $q$. If $L(s,\chi)$ has no zeros in the region
$$ \Re(s) > 1- \epsilon, \;\;\; |\Im(s)| \leq H, $$
where $0 < \epsilon \leq 1/2$ and $H \geq 4$, then for any $z \geq (Hr)^{C}$, any $|t| \leq H/2$, and any $0 \leq \sigma < 1$ we have
$$ \left| \sum_{n \leq z} \frac{\Lambda(n) \chi(n)}{n^{\sigma + it}} \right| \ll \frac{z^{1-\sigma-0.9\epsilon}}{1-\sigma} + \frac{z^{1-\sigma} \log^{2}(qzH)}{(1-\sigma)H} + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon} . $$ \end{lem1}
\begin{lem2} Let the situation be as in Lemma 1, but with the condition that $z \geq (Hr)^{C}$ replaced by the condition $z \geq 2$. Then
$$ \left| \sum_{n \leq z} \frac{\Lambda(n) \chi(n)}{n^{\sigma + it}} \right| \ll \frac{z^{1-\sigma-0.95\epsilon}\log^{2}(qzH)}{1-\sigma} + \frac{z^{1-\sigma} \log^{2}(qzH)}{(1-\sigma)H} + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon} . $$ \end{lem2}
We defer the proofs of Lemmas 1 and 2 to the appendix, but we remark that the proof of Lemma 1 itself uses a log-free zero-density estimate to remove various logarithmic factors, which is why it can supply a non-trivial bound even if $\epsilon$ is very small (unlike Lemma 2). Readers familiar with the proofs of Linnik's theorem on the least prime in an arithmetic progression should find this familiar. As we will soon see, the sums in Lemmas 1 and 2 will essentially appear as exponents in the proof of Proposition 1, and clearly a loss of logarithmic factors in an exponent would not yield acceptable bounds.
To deduce Proposition 1 we obtain bounds on $|\log L(\sigma+it,\chi;y) - \log L(\alpha+it,\chi;y)|$, where $\alpha=\alpha(x,y)$ is the saddle-point from Smooth Numbers Result 1, and $$ L(s,\chi;y):=\prod_{p \leq y} \left(1-\frac{\chi(p)}{p^{s}}\right)^{-1} = \sum_{n \in \mathcal{S}(y)} \frac{\chi(n)}{n^{s}}, \;\;\; \Re(s) > 0 $$
is the Dirichlet series corresponding to the $y$-smooth numbers, and where $\alpha-0.8\epsilon \leq \sigma \leq \alpha$ and $|t| \leq H/2$. Indeed, remembering that we have $\epsilon \leq \alpha/2$ and $\alpha \gg 1$ in Proposition 1 (since $y \geq \log^{1.1}x$), this difference is certainly at most \begin{eqnarray}
(\alpha-\sigma) \sup_{\sigma \leq \sigma' \leq \alpha} \left|\frac{L'(\sigma'+it,\chi;y)}{L(\sigma'+it,\chi;y)} \right| & \ll & (\alpha - \sigma)(\sup_{\alpha-0.8\epsilon \leq \sigma' \leq \alpha} \left|\sum_{n \leq y} \frac{\Lambda(n) \chi(n)}{n^{\sigma'+it}} \right| + \sum_{p \leq y} \frac{\log p}{p^{2(\alpha-0.8\epsilon)}} ) \nonumber \\
& \ll & (\alpha - \sigma)(\sup_{\alpha-0.8\epsilon \leq \sigma' \leq \alpha} \left|\sum_{n \leq y} \frac{\Lambda(n) \chi(n)}{n^{\sigma'+it}} \right| + \frac{y^{1-\alpha-0.1\epsilon}}{1-\alpha} + \frac{1}{\epsilon} ), \nonumber \end{eqnarray} since $2(\alpha-0.8\epsilon) \geq \alpha + 0.4\epsilon$. If $y \geq (Hr)^{C}$ then Lemma 1 implies this is all $$ \ll (\alpha - \sigma)(\frac{y^{1-\alpha-0.1\epsilon}}{1-\alpha} + \frac{y^{1-\alpha+0.8\epsilon}\log^{2}(qyH)}{(1-\alpha)H} + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon} ), $$ and since we assume in Proposition 1 that $H \geq y^{0.9\epsilon} \log^{2}x$ the second term may be omitted. If $y < (Hr)^{C}$ then the conditions of Proposition 1 will only be satisfied if $\epsilon \geq 40\log\log(qyH)/\log y$, in which case we can use Lemma 2 instead of Lemma 1, with the additional saving of $y^{-0.05\epsilon}$ in the first term there compensating for the multiplier $\log^{2}(qyH)$. Thus in any event we have
$$ |\log L(\sigma+it,\chi;y) - \log L(\alpha+it,\chi;y)| \ll (\alpha - \sigma)(\frac{y^{1-\alpha-0.1\epsilon}}{1-\alpha} + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon} ). $$
Next, we know that $\alpha(x,y) = 1 - (\log(u\log u) + O(1))/\log y$, and so the above is $$ \ll (\alpha-\sigma)(y^{-0.1\epsilon}u\log y + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon}) = (\alpha-\sigma)(y^{-0.1\epsilon}\log x + \log(rH) + \log^{0.9}q + \frac{1}{\epsilon}) . $$ Since we have $\epsilon > C/\log y \geq C/\log x$, and $r,H \leq x^{d}$, and $q \leq x$ in Proposition 1, where $C$ is large and $d$ is small, we finally obtain that
$$ |\log L(\sigma+it,\chi;y) - \log L(\alpha+it,\chi;y)| \leq \frac{(\alpha-\sigma)\log x}{2} \;\;\;\;\; \textrm{if } \alpha-0.8\epsilon \leq \sigma \leq \alpha \;\; \textrm{and } |t| \leq H/2 . $$
Now we can prove Proposition 1 by expressing $\Psi(x,y;\chi)$ as a contour integral involving $L(\alpha+it,\chi;y)$, and shifting the line of integration. However, because the $y$-smooth numbers may be a very sparse set, and we only have useful information about $L(s,\chi;y)$ when $|t| \leq H/2$, we need to be careful about the truncation errors that arise in doing this. Using the truncated Perron formula, as in e.g. Theorems 5.2 and 5.3 of Montgomery and Vaughan~\cite{mv}, we find \begin{eqnarray}
\Psi(x,y;\chi) & = & \frac{1}{2\pi i} \int_{\alpha-iH/2}^{\alpha+iH/2} L(s,\chi;y) \frac{x^{s}}{s} ds + O(1 + \frac{x^{\alpha}L(\alpha,\chi_{0};y)}{H} + \sum_{\substack{x/2 < n < 2x, \\ n \textrm{ is } y \textrm{ smooth}, \\ (n,q)=1}} \min\{1,\frac{x}{H|x-n|}\} ) \nonumber \\
& = & \frac{1}{2\pi i} \int_{\alpha-iH/2}^{\alpha+iH/2} L(s,\chi;y) \frac{x^{s}}{s} ds + O(1 + \frac{x^{\alpha}L(\alpha,\chi_{0};y)}{\sqrt{H}} + \sum_{\substack{|n-x| \leq x/\sqrt{H}, \\ n \textrm{ is } y \textrm{ smooth}, \\ (n,q)=1}} 1 ), \nonumber \end{eqnarray} where the second equality uses the Rankin-type upper bound $$ \sum_{\substack{x/2 < n < 2x, \\ n \textrm{ is } y \textrm{ smooth}, \\ (n,q)=1}} 1 \ll \sum_{n : n \textrm{ is } y \textrm{ smooth}, (n,q)=1} \frac{x^{\alpha}}{n^{\alpha}} = x^{\alpha} L(\alpha,\chi_{0};y). $$ See e.g. Fouvry and Tenenbaum's paper~\cite{fouvryten0} for some exactly similar calculations. Our assumption that $y \geq \log^{1.1}x$ implies that $\alpha(x,y) = 1 - (\log(u\log u) + O(1))/\log y \geq 0.05$, say, and so Smooth Numbers Result 3 reveals that
$$ \sum_{\substack{|n-x| \leq x/\sqrt{H}, \\ n \textrm{ is } y \textrm{ smooth}, \\ (n,q)=1}} 1 \ll \Psi(x/\sqrt{H},y) \ll \Psi(x,y) (\sqrt{H})^{-0.04} = \Psi(x,y) H^{-0.02}. $$ Moreover, since we clearly have $\Psi(x/\sqrt{H},y) \gg 1$ (because we assume that $H \leq x^{d}$), this term absorbs the $O(1)$ term in our preceding expression for $\Psi(x,y;\chi)$. In addition, Smooth Numbers Result 1 implies $x^{\alpha}L(\alpha,\chi_{0};y) \leq x^{\alpha} \zeta(\alpha,y) \ll \Psi(x,y) \sqrt{\log x \log y}$, and so $$ \Psi(x,y;\chi) = \frac{1}{2\pi i} \int_{\alpha-iH/2}^{\alpha+iH/2} L(s,\chi;y) \frac{x^{s}}{s} ds + O(\Psi(x,y)(\frac{\sqrt{\log x \log y}}{\sqrt{H}} + \frac{1}{H^{0.02}})) . $$
Finally, if we shift the line of integration and use our bound on $|\log L(\sigma+it,\chi;y) - \log L(\alpha+it,\chi;y)|$ we see \begin{eqnarray} \frac{1}{2\pi i} \int_{\alpha-iH/2}^{\alpha+iH/2} L(s,\chi;y) \frac{x^{s}}{s} ds & = & \frac{1}{2\pi i} \int_{\alpha-0.8\epsilon-iH/2}^{\alpha-0.8\epsilon+iH/2} L(s,\chi;y) \frac{x^{s}}{s} ds + \nonumber \\ && + O(\frac{L(\alpha,\chi_{0};y)}{H} \int_{\alpha-0.8\epsilon}^{\alpha} e^{(\alpha-\sigma)(\log x)/2} x^{\sigma} d\sigma) \nonumber \\ & = & O(x^{\alpha}L(\alpha,\chi_{0};y)(x^{-0.8\epsilon+\epsilon/2}(\frac{1}{\alpha} + \log H) + \frac{1}{H})) \nonumber \\ & = & O(\Psi(x,y)\sqrt{\log x \log y}(x^{-0.3\epsilon}\log H + \frac{1}{H})) , \nonumber \end{eqnarray} from which the bound claimed in Proposition 1 immediately follows. \begin{flushright} Q.E.D. \end{flushright}
\subsection{Proof of Theorem 3} We will apply Proposition 1 with a suitable choice of $\epsilon$ and $H$. We assume, as we may, that the value of $b > 0$ in Theorem 3 was set small enough in terms of the values $d,C$ in Proposition 1 and the constant $\kappa$ in Zeros Result 1, and also that the value of $M$ in Theorem 3 was set large enough in terms of $b,d,C$.
In Theorem 3 we have $r := \textrm{cond}(\chi) \leq y^{b}$, so for any $2 \leq H \leq y^{50b}$ we have $$ (Hr)^{C} \leq y^{51bC} \leq y . $$ Moreover, Zeros Result 1 (applied to the primitive character inducing $\chi$) and the assumptions of Theorem 3 imply that $L(s,\chi)$ has no zeros in the region
$$ \Re(s) > \max\{\beta_{\chi},1 - \frac{\kappa}{\log(r(2+H))} \}, \;\;\; |\Im(s)| \leq H , $$ where $\max\{\beta_{\chi},1 - \kappa/\log(r(2+H))\} \leq \max\{1-M/\log y,1 - \kappa/\log(y^{b}(2+y^{50b}))\} \leq 1 - C/\log y$. So if we choose $$ H = \min\{y^{50b},e^{\sqrt{\log x}},e^{(\log x)/\log r}\log^{2}x , x^{1-\beta_{\chi}}\log^{2}x\}, \;\;\; \textrm{and} \;\;\; \epsilon = 1 - \max\{\beta_{\chi},1 - \frac{\kappa}{\log(r(2+H))} \}, $$ the reader can easily check that $H \geq y^{0.9\epsilon}\log^{2}x$ (bearing in mind that $y \geq \log^{M}x$ in Theorem 3), and so all the conditions of Proposition 1 will be satisfied. Since we have $$ \epsilon \geq \min\{1-\beta_{\chi},\frac{\kappa}{2\log r},\frac{\kappa}{2\log(2+H)}\} \gg \min\{1-\beta_{\chi},\frac{1}{\log r},\frac{1}{\sqrt{\log x}}\}, $$ we conclude that \begin{eqnarray}
|\Psi(x,y;\chi)| & \ll & \Psi(x,y)\sqrt{\log x \log y}(x^{-0.3\epsilon}\log H + \frac{1}{H^{0.02}}) \nonumber \\ & \ll & \Psi(x,y)\sqrt{\log x \log y}(x^{-0.3\epsilon}\log H + y^{-b} + e^{-b\sqrt{\log x}} + e^{-b(\log x)/\log r} + e^{-(b\log x)(1-\beta)}) \nonumber \\ & \ll & \Psi(x,y) \sqrt{\log x \log y} (e^{-(b \log x) \min\{1/\log r, 1-\beta \}}\log\log x + y^{-b} + e^{-b\sqrt{\log x}}), \nonumber \end{eqnarray} as asserted in Theorem 3. \begin{flushright} Q.E.D. \end{flushright}
\subsection{Application to Theorems 1 and 2} Recall that in Theorem 1 we are trying to bound
$$ \sum_{q \leq Q} \max_{(a,q)=1} \left| \Psi(x,y;q,a)-\frac{\Psi_{q}(x,y)}{\phi(q)} \right| \leq \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} |\Psi(x,y;\chi)|. $$ In this subsection we will show that, provided $\eta$ is fixed small enough in terms of the various constants $b,M,d,C$ in Theorem 3 and Proposition 1, and provided the constants $c,K$ in Theorem 1 are fixed suitably small and large (respectively) in terms of $\eta$, then \begin{eqnarray}\label{zdtarget}
\sum_{1 < r \leq x^{\eta}} \sum_{\substack{\chi^{*} (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} |\Psi(x,y;\chi)| \ll \Psi(x,y)\left(e^{-\frac{cu}{\log^{2}(u+1)}} + y^{-c}\right) , \end{eqnarray} and also that it satisfies the stronger ineffective bound claimed in Theorem 1. We will also prove corresponding statements for Theorem 2. The remaining characters, with conductor $x^{\eta} < r \leq Q$, will be dealt with in $\S 4$.
We remark that we will break the sum in (\ref{zdtarget}) into a few different pieces, as suggested in the introduction, and for most of the pieces will obtain bounds of the form $$ \ll \Psi(x,y)(e^{-\Theta(\sqrt{\log x})} + y^{-\Theta(1)}). $$ This is certainly acceptable for (\ref{zdtarget}), since (recalling that $u := (\log x)/\log y$) we always have $e^{-\sqrt{\log x}} \leq \max\{y^{-1},e^{-u}\}$. As in many multiplicative problems, a term $e^{-\sqrt{\log x}}$ arises from balancing contributions of the form $e^{-(\log x)/\log r} + r^{-1}$ and $e^{-(\log x)/\log H} + H^{-1}$ (as we have already seen in the proof of Theorem 3). The only instance in which we actually obtain a bound $ \ll \Psi(x,y)\left(e^{-cu/\log^{2}(u+1)} + y^{-c}\right)$ is, as we shall see, when considering the contribution from an exceptional character that gives rise to a Siegel zero.
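(The inequality $e^{-\sqrt{\log x}} \leq \max\{y^{-1},e^{-u}\}$ follows just from the identity $u\log y = \log x$: we have $\min\{u,\log y\} \leq \sqrt{u\log y} = \sqrt{\log x}$, and therefore $$ \max\{y^{-1},e^{-u}\} = e^{-\min\{u,\, \log y\}} \geq e^{-\sqrt{\log x}} . $$)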
For ease of writing, let us introduce some temporary notation: we will let $$ \mathcal{G}_{1} := \bigcup_{1 < r \leq \min\{y^{\eta},e^{\eta \sqrt{\log x}}\}} \{\chi^{*} \; (\textrm{mod } r) : L(s,\chi^{*}) \; \textrm{has no real zero that is } > 1 - \frac{M}{\min\{\log y ,\sqrt{\log x}\}}\}, $$
$$ \mathcal{G}_{2} := \bigcup_{\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < r \leq x^{\eta}} \{\chi^{*} \; (\textrm{mod } r) : L(s,\chi^{*}) \neq 0 \; \textrm{for any } \Re(s) > \frac{299}{300}, \; |\Im(s)| \leq r^{100}\}, $$ where $M$ is the absolute constant from the statement of Theorem 3. These are sets of ``good'' characters corresponding to the ranges (i) and (ii) described in the introduction. Indeed, using Theorem 3 we see the contribution to (\ref{zdtarget}) from characters induced from $\chi^{*} \in \mathcal{G}_{1}$ has order at most \begin{eqnarray} && \sum_{\chi^{*} \in \mathcal{G}_{1}} \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} \Psi(x,y)\sqrt{\log x \log y}(e^{-b\sqrt{\log x}} + y^{-b}) \nonumber \\ & \ll & \Psi(x,y)\sqrt{\log x \log y}(e^{-b\sqrt{\log x}} + y^{-b}) \sum_{r \leq \min\{y^{\eta},e^{\eta \sqrt{\log x}}\}} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \in \mathcal{G}_{1}}} \frac{1}{\phi(r)} \sum_{s \leq Q/r} \frac{1}{\phi(s)} \nonumber \\ & \ll & \Psi(x,y)(e^{-b\sqrt{\log x}/2} + y^{-b/2}), \nonumber \end{eqnarray} where the final line uses the fact that $\eta$ is small in terms of $b$, and also the fact that we have $y \geq \log^{K}x$ in Theorem 1 (so we can absorb all the logarithmic factors). This is certainly an acceptable bound. 
Using Proposition 1 with the choices $$ \epsilon = \min\{1/300,(10\log r)/\log y\} \;\;\; \textrm{and} \;\;\; H = r^{100}, $$ (which do satisfy the conditions $40\log\log(qyH)/\log y \leq \epsilon \leq \alpha(x,y)/2$ and $y^{0.9\epsilon}\log^{2}x \leq H \leq x^{d}$, since we have $\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < r \leq x^{\eta}$ and $y \geq \log^{K}x$ with $K$ large), the contribution to (\ref{zdtarget}) from characters induced from $\chi^{*} \in \mathcal{G}_{2}$ is also seen to be \begin{eqnarray} & \ll & \sum_{\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < r \leq \min\{y^{1/3000},x^{\eta}\}} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} \frac{\Psi(x,y) \sqrt{\log x \log y}}{r^{2}} \nonumber \\ && + \sum_{\min\{y^{1/3000},x^{\eta}\} < r \leq x^{\eta}} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} \Psi(x,y) \sqrt{\log x \log y} \left(\frac{\log r}{x^{0.001}} + \frac{1}{r^{2}} \right) \nonumber \\ & \ll & \Psi(x,y) \sqrt{\log x \log y} \sum_{\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < r \leq x^{\eta}} \frac{1}{\phi(r) r^{2}} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{s \leq Q/r} \frac{1}{\phi(s)} \nonumber \\ & \ll & \Psi(x,y) \log^{2}x (e^{-\eta \sqrt{\log x}} + y^{-\eta}), \nonumber \end{eqnarray} provided $\eta$ was set sufficiently small that $1/r^{2} \geq (\log r)/x^{0.001}$ in the above. This will be an acceptable bound, since we have $y \geq \log^{K}x$ and so the $\log^{2}x$ multiplier can be absorbed into the other terms.
Next, using the log-free zero-density estimate in Zeros Result 1 we see that, for any $R \geq 3$, \begin{eqnarray}
\sum_{R < r \leq 2R} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ L(s,\chi^{*}) = 0 \; \textrm{for some} \\ \Re(s) > 299/300, \; |\Im(s)| \leq r^{100}}} \frac{1}{\phi(r)} & \ll & \frac{\log\log R}{R} \sum_{R < r \leq 2R} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ L(s,\chi^{*}) = 0 \; \textrm{for some} \\ \Re(s) > 299/300, \; |\Im(s)| \leq (2R)^{100}}} 1 \nonumber \\ & \ll & \frac{\log\log R}{R} (R^{102})^{(5/2)(1-(299/300))}, \nonumber \end{eqnarray} which is $\ll R^{-1/10}$, say. On splitting into dyadic intervals, it follows that
$$ \sum_{\min\{y^{\eta},e^{\eta \sqrt{\log x}}\} < r \leq x^{\eta}} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \notin \mathcal{G}_{2}}} \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} |\Psi(x,y;\chi)| \ll \Psi(x,y) \log x (y^{-\eta/10} + e^{-\eta \sqrt{\log x}/10}), $$ which is acceptable for (\ref{zdtarget}).
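To make the bound $\ll R^{-1/10}$ in the preceding zero-density estimate fully explicit, the exponent computation is simply

```latex
% 102 \cdot (5/2) \cdot (1/300) = 255/300, and 255/300 - 1 = -3/20 < -1/10:
\frac{\log\log R}{R} \left(R^{102}\right)^{(5/2)(1-299/300)}
= \frac{\log\log R}{R} \cdot R^{255/300}
= R^{-3/20} \log\log R \ll R^{-1/10}.
```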
At this point the only contribution to (\ref{zdtarget}) we have not dealt with is that of characters $\chi$ with conductor at most $\min\{y^{\eta},e^{\eta \sqrt{\log x}}\}$ and a real zero that is $> 1 - M/\min\{\log y ,\sqrt{\log x}\}$. Using Page's Theorem (from Zeros Result 1), provided $\eta$ was chosen small enough in terms of $M$ there will be at most one primitive character $\chi^{*}_{\textrm{bad}}$ giving rise to such contributions. If such $\chi^{*}_{\textrm{bad}}$ exists, and has conductor $r_{\textrm{bad}}$, then the contribution is
$$ \sum_{\substack{q \leq Q, \\ r_{\textrm{bad}} \mid q}} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*}_{\textrm{bad}} \; \textrm{induces } \chi}} |\Psi(x,y;\chi)| \ll \frac{\max_{\chi} |\Psi(x,y;\chi)|}{\phi(r_{\textrm{bad}})} \sum_{s \leq Q/r_{\textrm{bad}}} \frac{1}{\phi(s)} \ll \frac{\log x }{\phi(r_{\textrm{bad}})} \max_{\chi} |\Psi(x,y;\chi)|, $$ where the maxima are over characters $\chi$ to modulus $\leq Q$ and induced by $\chi^{*}_{\textrm{bad}}$.
Let us assume, at first, that $\log^{K}x \leq y \leq x^{1/(\log\log x)^{2}}$, say. In this case we have $u = (\log x)/\log y \geq (\log\log x)^{2}$, so it will suffice for Theorem 1 if we can show that
$$ |\Psi(x,y;\chi)| \ll \Psi(x,y) \log^{2}x \left(e^{-\frac{cu}{\log^{2}(u+1)}} + y^{-c}\right) $$ for all characters $\chi$ to modulus $\leq Q$ and induced by $\chi^{*}_{\textrm{bad}}$. The point here is that we don't need to worry about the factor $\log x /\phi(r_{\textrm{bad}})$ in the previous display, or the multiplier $\log^{2}x$, since these can be absorbed into our bound when $u$ is this large (at the cost of slightly adjusting the value of $c$).
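To spell out this absorption: the factors $\log x/\phi(r_{\textrm{bad}})$ and $\log^{2}x$ together contribute at most $\log^{3}x$, and when $u \geq (\log\log x)^{2}$ we have

```latex
% Absorbing the logarithmic factors into the exponential saving:
\log^{3}x \cdot e^{-\frac{cu}{\log^{2}(u+1)}}
= e^{3\log\log x - \frac{cu}{\log^{2}(u+1)}}
\leq e^{-\frac{(c/2)u}{\log^{2}(u+1)}}
\quad \text{whenever} \quad \frac{cu}{2\log^{2}(u+1)} \geq 3\log\log x ,
```

and the latter condition holds for all large $x$ (with the threshold depending on $c$), because $u/\log^{2}(u+1) \gg (\log\log x)^{2}/(\log\log\log x)^{2}$, which grows faster than $\log\log x$.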
Using the truncated Perron formula exactly as in the proof of Proposition 1, we have \begin{eqnarray} \Psi(x,y;\chi) & = & \frac{1}{2\pi i} \int_{\alpha - i/(2\log y)}^{\alpha + i/(2\log y)} L(s,\chi;y) \frac{x^{s}}{s} ds + \frac{1}{2\pi i} \int_{\alpha - iy/2}^{\alpha - i/(2\log y)} L(s,\chi;y) \frac{x^{s}}{s} ds + \nonumber \\ && + \frac{1}{2\pi i} \int_{\alpha + i/(2\log y)}^{\alpha + iy/2} L(s,\chi;y) \frac{x^{s}}{s} ds + O\left(\frac{\Psi(x,y) \sqrt{\log x \log y}}{y^{0.02}} \right). \nonumber \end{eqnarray} The ``big Oh'' term is acceptably small, bearing in mind that we may assume the constant $c$ in our target bound satisfies $c \leq 0.02$.
To bound the integrals we employ an argument that was used to great effect\footnote{Lemma 5.2 of Soundararajan~\cite{sound}, which we shall ultimately use, has a relatively straightforward proof using the Cauchy--Schwarz inequality and estimates for the Riemann zeta function, but it will nevertheless produce good bounds. A direct argument using information about $L(\alpha+it,\chi)$ would seem (when $|t|$ is large) to be far more complicated.} by Soundararajan~\cite{sound}. Thus $|L(\alpha+it,\chi;y)/L(\alpha,\chi_{0};y)|$ is equal to \begin{eqnarray}
\prod_{p \leq y} \left|\frac{1-\chi(p)/p^{\alpha+it}}{1-\chi_{0}(p)/p^{\alpha}}\right|^{-1} \leq \prod_{p \leq y} \left|1 + \frac{\Re(\chi_{0}(p)-\chi(p)/p^{it})}{p^{\alpha}-\chi_{0}(p)}\right|^{-1} & = & \prod_{p \leq y, p \nmid q} \left|1 + \sum_{k=1}^{\infty} \frac{1-\Re(\chi(p)/p^{it})}{p^{k\alpha}}\right|^{-1} \nonumber \\ & \leq & e^{-\sum_{p \leq y, p \nmid q} \frac{1-\Re(\chi(p)/p^{it})}{p^{\alpha}} }, \nonumber \end{eqnarray}
where the final inequality uses the series expansion of $\exp\{(1-\Re(\chi(p)/p^{it}))/p^{\alpha}\}$, and the fact that $0 \leq 1-\Re(\chi(p)/p^{it}) \leq 2$. Using Page's theorem, if $\chi^{*}_{\textrm{bad}}$ exists then it, and all the characters $\chi$ it induces, have order two (i.e. are real and non-principal). Therefore Lemma 5.2 of Soundararajan~\cite{sound} implies that, if $1/(2\log y) \leq |t| \leq y/2$, $$ \sum_{p \leq y, p \nmid q} \frac{1-\Re(\chi(p)/p^{it})}{p^{\alpha}} \gg \frac{u}{\log^{2}(u+1)}. $$
When $|t| < 1/(2\log y)$, Soundararajan's lemma instead yields (keeping in mind that $\chi$ and $\chi^{*}_{\textrm{bad}}$ are real characters) that \begin{eqnarray} \sum_{p \leq y, p \nmid q} \frac{1-\Re(\chi(p)/p^{it})}{p^{\alpha}} \gg \sum_{p \leq y, p \nmid q} \frac{1-\chi(p)}{p^{\alpha}} & = & \sum_{p \leq y, p \nmid q} \frac{1-\chi^{*}_{\textrm{bad}}(p)}{p^{\alpha}} \nonumber \\ & \geq & \frac{1}{\log y} \left(\sum_{\sqrt{y} \leq p \leq y, p \nmid q} \frac{\log p}{p^{\alpha}} - \sum_{\sqrt{y} \leq p \leq y, p \nmid q} \frac{\chi^{*}_{\textrm{bad}}(p)\log p}{p^{\alpha}} \right) . \nonumber \end{eqnarray} Since we assume that $r_{\textrm{bad}} = \textrm{cond}(\chi^{*}_{\textrm{bad}})$ is at most $y^{\eta}$, with $\eta$ small, partial summation from standard estimates for $\sum_{n \leq z} \Lambda(n) \chi(n)$ (as in e.g. Theorem 11.16 and Exercise 11.3.1.2 of Montgomery and Vaughan~\cite{mv}) implies $$ \sum_{\sqrt{y} \leq p \leq y} \frac{\log p}{p^{\alpha}} - \sum_{\sqrt{y} \leq p \leq y} \frac{\chi^{*}_{\textrm{bad}}(p)\log p}{p^{\alpha}} \gg \sum_{\sqrt{y} \leq p \leq y} \frac{\log p}{p^{\alpha}} \gg \frac{y^{1-\alpha}}{1-\alpha} \gg u\log y = \log x , $$ recalling from $\S 2$ that $\alpha(x,y) = 1 - (\log(u\log u) + O(1))/\log y$. The essential point in these calculations is that the contribution to $\sum_{n \leq z} \Lambda(n) \chi(n)$ from an exceptional real zero comes with a negative sign, so when subtracted makes a positive (i.e. a helpful) contribution to our lower bounds. We need to be careful about the contribution from primes $p$ that divide $q$, but in fact this is $\ll (\log q)/y^{\alpha/2} \ll (\log q)/\log x \ll 1$ (bearing in mind that $\alpha(x,y) \geq \alpha(x,\log^{K}x) \gg 1$), which is negligible.
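For clarity, the step $y^{1-\alpha}/(1-\alpha) \gg u\log y$ unwinds as follows, using $1-\alpha = (\log(u\log u)+O(1))/\log y$ and the fact that $u \geq (\log\log x)^{2}$ is large in the present regime, so that $\log(u\log u) \asymp \log u$:

```latex
% Unwinding y^{1-\alpha}/(1-\alpha) with \alpha = \alpha(x,y) as in Section 2:
\frac{y^{1-\alpha}}{1-\alpha}
= \frac{e^{(1-\alpha)\log y}}{1-\alpha}
\asymp \frac{u\log u}{\log(u\log u)/\log y}
\asymp \frac{u\log u \cdot \log y}{\log u}
= u\log y .
```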
In summary we have shown, as we wanted, that when $\log^{K}x \leq y \leq x^{1/(\log\log x)^{2}}$, \begin{eqnarray}
\left|\frac{1}{2\pi i} \int_{\alpha - iy/2}^{\alpha + iy/2} L(s,\chi;y) \frac{x^{s}}{s} ds \right| & \ll & L(\alpha,\chi_{0};y) e^{-\Theta(u/\log^{2}(u+1))} x^{\alpha} \log y \nonumber \\ & \ll & e^{-\Theta(u/\log^{2}(u+1))} \Psi(x,y) \sqrt{\log x} \log^{3/2}y , \nonumber \end{eqnarray} where the final inequality uses Smooth Numbers Result 1.
We must still bound the contribution from $\chi^{*}_{\textrm{bad}}$ in the case where $x^{1/(\log\log x)^{2}} < y \leq x$, which is potentially difficult because of various logarithmic multipliers that occur. Fortunately we can deploy existing results of Fouvry and Tenenbaum~\cite{fouvryten0,fouvryten}, who carefully investigated the influence of exceptional zeros on $\Psi(x,y;\chi)$ when $y$ isn't too small. Indeed, since we assume that $r_{\textrm{bad}} \leq e^{\eta \sqrt{\log x}} \leq y^{\eta/\log\log x}$, say, Lemme 2.2 of Fouvry and Tenenbaum~\cite{fouvryten} implies that \begin{eqnarray}
|\Psi(x,y;\chi)| = \left|\sum_{\substack{n \leq x, n \in \mathcal{S}(y), \\ (n,q/r_{\textrm{bad}})=1}} \chi^{*}_{\textrm{bad}}(n)\right| & \ll & \left(\sum_{d \mid (q/r_{\textrm{bad}})} \frac{1}{d^{\alpha(x,y)}}\right) \Psi(x,y) \frac{\log(r_{\textrm{bad}})}{\log y} e^{-\Theta(u/\log^{2}(u+1))} \nonumber \\ & \ll & \left(\sum_{d \mid (q/r_{\textrm{bad}})} \frac{1}{\sqrt{d}}\right) \Psi(x,y) \frac{\log(r_{\textrm{bad}})}{\log x} e^{-\Theta(u/\log^{2}(u+1))} . \nonumber \end{eqnarray} Here the second inequality uses the facts that $\alpha(x,y) = 1 - (\log(u\log u)+O(1))/\log y \geq 1/2$ on our range of $y$, and that $\log x = u \log y$. Actually Fouvry and Tenenbaum's result would give this with the sum replaced by $\sum_{d \mid (q/r_{\textrm{bad}})} 1$, but one can check that their proof (combined with e.g. Theorem 3 of Hildebrand and Tenenbaum~\cite{ht}) gives the stronger bound that we claimed. Thus the contribution from characters induced by $\chi^{*}_{\textrm{bad}}$ is $$ \ll \Psi(x,y) \frac{\log(r_{\textrm{bad}})}{\log x} e^{-\Theta(u/\log^{2}(u+1))} \sum_{\substack{q \leq Q, \\ r_{\textrm{bad}} \mid q}} \frac{1}{\phi(q)} \sum_{d \mid (q/r_{\textrm{bad}})} \frac{1}{\sqrt{d}} \ll \Psi(x,y) \frac{\log(r_{\textrm{bad}})}{\phi(r_{\textrm{bad}})} e^{-\Theta(u/\log^{2}(u+1))} . $$ This is obviously acceptable for the effective part of Theorem 1, as in (\ref{zdtarget}). As usual, the stronger ineffective bound $\ll_{A} \Psi(x,y) e^{-\Theta(u/\log^{2}(u+1))} \log^{-A}x$ follows from Siegel's theorem (in Zeros Result 1), which implies that if $L(s,\chi^{*}_{\textrm{bad}})$ has a real zero that is $> 1 - M/\sqrt{\log x}$ then we must have $r_{\textrm{bad}} \gg_{A} \log^{A}x$.
We have now completed our treatment of all characters with conductor $\leq x^{\eta}$ in Theorem 1. One can handle the contribution from such characters to Theorem 2 using exactly the same arguments, since in Theorem 2 one needs to bound
$$ \sum_{q \leq Q} \sum_{(a,q)=1} \left| \Psi(x,y;q,a)-\frac{\Psi_{q}(x,y)}{\phi(q)} \right|^{2} = \sum_{q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi \neq \chi_{0}}} |\Psi(x,y;\chi)|^{2}, $$
so we merely insert the squares of all our bounds for $|\Psi(x,y;\chi)|$. It remains, for Theorems 1 and 2, to handle characters with conductor $> x^{\eta}$, which we shall do in the next section using the large sieve.
\section{The large sieve argument} In this section we will use the multiplicative large sieve to complete the proofs of Theorems 1 and 2. We will apply the large sieve in the following standard form, due (apart from the values of the constants 1 and 3 multiplying $N$ and $Q^{2}$) to Gallagher~\cite{gallagher}. \begin{multls}[Gallagher, 1967] For any $Q \geq 1$ and any complex numbers $(a_{n})_{n=M+1}^{M+N}$, we have
$$ \sum_{q \leq Q} \frac{q}{\phi(q)} \sum_{\substack{\chi^{*} \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{primitive}}} \left| \sum_{n=M+1}^{M+N} a_{n}\chi^{*}(n) \right|^{2} \leq (N+3Q^{2}) \sum_{n=M+1}^{M+N} |a_{n}|^{2}. $$ \end{multls}
Indeed, we can finish the proof of Theorem 2 almost immediately using Multiplicative Large Sieve 1. In view of the calculations in $\S 3.3$, it remains to bound the contribution from characters with conductor $ \geq x^{\eta}$, which is \begin{eqnarray}
&& \sum_{x^{\eta} \leq r \leq Q} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{\substack{x^{\eta} \leq q \leq Q, \\ r \mid q}} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} \left|\sum_{n \leq x, n \in \mathcal{S}(y)} \chi^{*}(n) \sum_{d \mid (n,q/r)} \mu(d) \right|^{2} \nonumber \\
& \leq & \sum_{x^{\eta} \leq r \leq Q} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{\substack{x^{\eta} \leq q \leq Q, \\ r \mid q}} \frac{1}{\phi(q)} \tau(q/r) \sum_{d \mid (q/r)} |\Psi(x/d,y;\chi^{*})|^{2} \nonumber \\
& \ll & \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \sum_{d \leq Q/(2^{l}x^{\eta})} \frac{\log^{2}(Q/(2^{l}x^{\eta}d) + 1) \tau(d)}{\phi(d)} \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})|^{2} . \nonumber \end{eqnarray} Here the first line uses the fact that $\chi(n)=\chi^{*}(n) \textbf{1}_{(n,q/r)=1}$; on the second line we use the Cauchy--Schwarz inequality, and write $\tau(\cdot)$ for the divisor function; and the third line follows because $$ \sum_{\substack{x^{\eta} \leq q \leq Q, \\ rd \mid q}} \frac{\phi(r)\phi(d) \tau(q/r)}{\phi(q) \tau(d)} \leq \sum_{s \leq Q/rd} \frac{\tau(s)}{\phi(s)} \leq \sum_{a \leq Q/rd} \frac{1}{\phi(a)} \sum_{b \leq Q/rda} \frac{1}{\phi(b)} \ll \log^{2}(Q/rd + 1), $$ as in e.g. Exercise 2.1.13 of Montgomery and Vaughan~\cite{mv}. Applying Multiplicative Large Sieve 1 to the inner sums, we find the above is \begin{eqnarray} & \ll & \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \sum_{d \leq Q/(2^{l}x^{\eta})} \frac{\log^{2}(Q/(2^{l}x^{\eta}d) + 1) \tau(d)}{\phi(d)} \Psi(x/d,y) \left(\frac{x/d}{2^{l}x^{\eta}} + 2^{l}x^{\eta} \right) \nonumber \\ & \ll & \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \log^{2}(Q/(2^{l}x^{\eta}) + 1) \Psi(x,y) \left(\frac{x}{2^{l}x^{\eta}} + 2^{l}x^{\eta}\log^{2}(Q/(2^{l}x^{\eta}) + 1) \right) \nonumber \\ & \ll & \Psi(x,y) \left(\frac{x \log^{2}Q}{x^{\eta}} + Q \right) , \nonumber \end{eqnarray} say. Since we assume that $y \geq \log^{K}x$ in Theorem 2, and $\Psi(x,\log^{K}x) = x^{1-1/K+o(1)}$ for any fixed $K \geq 1$ (see Corollary 7.9 of Montgomery and Vaughan~\cite{mv}, or Smooth Numbers Result 1), the first term here will be $\ll \Psi(x,y)^{2}x^{-\eta/2}$ provided $K$ was set large enough in terms of $\eta$. This bound is acceptable for Theorem 2, provided the value of $c$ there is set small enough in terms of $\eta$. \begin{flushright} Q.E.D. \end{flushright}
The completion of the proof of Theorem 1 will be a bit more complicated, since we do not a priori have any squares of character sums around, and it will require some care to introduce these in a way that does not spoil the resulting bounds. In $\S\S 4.1-4.3$ we shall prove the following result: \begin{prop2} Let $0 < \eta \leq 1/80$ be any fixed constant. Then for any large $y \leq x^{9/10}$ and any $x^{\eta} \leq Q \leq \sqrt{x}$, we have
$$ \sum_{x^{\eta} \leq r \leq Q} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{x^{\eta} \leq q \leq Q} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} |\Psi(x,y;\chi)| \ll \log^{7/2}x \sqrt{\Psi(x,y)} \left(Q + x^{1/2 - \eta}\log^{2}x \right). $$ \end{prop2}
The reader may easily check that, together with the calculations in $\S 3.3$ (and on setting the value of $\eta$ as in $\S 3.3$), Proposition 2 will complete the proof of Theorem 1 for all $\log^{K}x \leq y \leq x^{9/10}$, provided the values of $K$ and $c$ in Theorem 1 were set suitably in terms of $\eta$.
If $x^{9/10} < y \leq x$ then the bound claimed in Proposition 2 is still true, but the arguments needed to prove this are a bit different. Indeed, on this range of $y$ the problem essentially reduces to bounding averages of character sums over primes, as in the classical Bombieri--Vinogradov theorem, and one needs to use Vaughan's Type I/Type II sums identity before applying the large sieve. We sketch a suitable argument in $\S 4.4$, which will complete the proof\footnote{We could also refer the reader to earlier Bombieri--Vinogradov type results for smooth numbers, which certainly cover the range $x^{9/10} < y \leq x$. However, the quantitative bounds in those results are not as precise as claimed in Theorem 1, and in any case it seems desirable to give a self-contained treatment.} of Theorem 1 on the whole range $\log^{K}x \leq y \leq x$.
\subsection{Factoring the $y$-smooth numbers} We can ``reveal'' any number $n$ less than $x$ by exposing its prime factors one at a time, in non-increasing order of their size. If we know that $n$ is $y$-smooth, and if $n$ isn't too small and if $y \leq x^{37/40}$, say, then none of the factors we expose will be extremely large, and so at some point as we reveal them we will have split $n$ as a product of two fairly large factors. Such an approach has been used by many previous authors, and it will allow us to decompose character sums over $y$-smooth numbers as double character sums\footnote{The reader should be forewarned that, although we use the letter $x$ in this subsection and the next, we will ultimately apply our results with $x$ replaced by $x/d$, for various small values of $d$. This is why we only postulate here that $y \leq x^{37/40}$, which is slightly weaker than the condition $y \leq x^{9/10}$ assumed in Proposition 2.}. Then we will apply the Cauchy--Schwarz inequality and the large sieve.
More precisely, let us write $P(t), p_{1}(t)$ for the greatest and least prime factors of $t \in \mathbb{N}$, respectively, and note that we have \begin{eqnarray} \Psi(x,y;\chi) & = & \Psi(x^{1/20},y;\chi) + \sum_{\substack{x^{1/20} < n \leq x, \\ n \in \mathcal{S}(y)}} \chi(n) \nonumber \\ & = & \Psi(x^{1/20},y;\chi) + \sum_{\substack{x^{1/20} < m \leq yx^{1/20}, \\ m \in \mathcal{S}(y), m/p_{1}(m) \leq x^{1/20}}} \sum_{\substack{n \leq x/m, \\ P(n) \leq p_{1}(m)}} \chi(mn) \nonumber \\ & = & \Psi(x^{1/20},y;\chi) + \sum_{i=0}^{[\log y /\log 2]} \sum_{j=0}^{[\log y / \log\lambda]} \sum_{\substack{m \in \mathcal{S}(y), m/p_{1}(m) \leq x^{1/20}, \\ 2^{i}x^{1/20} < m \leq 2^{i+1}x^{1/20}, \\ y/\lambda^{j+1} < p_{1}(m) \leq y/\lambda^{j}}} \sum_{\substack{n \leq x/m, \\ P(n) \leq p_{1}(m)}} \chi(mn) , \nonumber \end{eqnarray} where we set $\lambda := 1+1/(1000\log x)$. Since we assume that $y \leq x^{37/40}$ we have $yx^{1/20} \leq x^{39/40}$, and so the sums over $n \leq x/m$ are always quite long. This will be important in various of our later calculations, and particularly in $\S 4.3$ when we apply the Cauchy--Schwarz inequality and the large sieve.
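To quantify ``quite long'': every $m$ in the above sums satisfies $m \leq yx^{1/20} \leq x^{37/40+1/20} = x^{39/40}$, so each inner range of summation over $n$ satisfies

```latex
% Lower bound for the length of each inner sum over n:
n \leq \frac{x}{m}, \qquad \frac{x}{m} \geq \frac{x}{yx^{1/20}} \geq x^{1-39/40} = x^{1/40},
```

i.e. each sum over $n$ has length at least $x^{1/40}$.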
The above essentially completes the ``factorisation'' step in the proof of Proposition 2, since we have decomposed $\Psi(x,y;\chi)$ as the sum of $\Psi(x^{1/20},y;\chi)$, which will make a negligible contribution, and of a small number of double sums over $m$ and $n$. At present there is some dependence between the ranges of summation over $m$ and $n$, but we will deal with that using Perron's formula in the next subsection. However, to simplify that process we will first modify the sums a little. For the sake of concision, let us write $\mathcal{M}_{i,j}=\mathcal{M}_{i,j,x,y}$ for the range of summation in the sum over $m$. Then the quadruple sum in the previous display can be rewritten as $$ \sum_{i=0}^{[\log y /\log 2]} \sum_{j=0}^{[\log y /\log\lambda]} \sum_{m \in \mathcal{M}_{i,j}} \chi(m) \left(\sum_{\substack{n \leq x^{19/20}/2^{i+1}, \\ P(n) \leq y/\lambda^{j+1}}} \chi(n) + \sum_{\substack{n \leq x^{19/20}/2^{i+1}, \\ y/\lambda^{j+1} < P(n) \leq y/\lambda^{j}}} \chi(n) \textbf{1}_{P(n) \leq p_{1}(m)} + \right. $$ $$ \left. + \sum_{\substack{x^{19/20}/2^{i+1} < n \leq x^{19/20}/2^{i}, \\ P(n) \leq y/\lambda^{j+1}}} \chi(n) \textbf{1}_{mn \leq x} + \sum_{\substack{x^{19/20}/2^{i+1} < n \leq x^{19/20}/2^{i}, \\ y/\lambda^{j+1} < P(n) \leq y/\lambda^{j}}} \chi(n) \textbf{1}_{mn \leq x} \textbf{1}_{P(n) \leq p_{1}(m)} \right), $$ where $\textbf{1}$ denotes the indicator function. The point of this additional decomposition is that whenever an indicator function $\textbf{1}_{V \leq W}$ appears, with $V=mn$ and $W=x$ or with $V=P(n)$ and $W=p_{1}(m)$, the quantities $V$ and $W$ are already forced to be of comparable magnitude. This will be useful in the next subsection. For the sake of concision, at some points we will use $\mathcal{N}_{i,j}^{(1)},...,\mathcal{N}_{i,j}^{(4)}$ to denote the ranges of summation over $n$ in the sums above.
We remark that the reason for dividing the values of $m$ into dyadic intervals, whilst dividing the values of $p_{1}(m)$ using the finer parameter $\lambda$, is because $\Psi(x,y)$ is more sensitive to changes in $y$ than in $x$. Recall, for example, Smooth Numbers Result 2, which we will shortly make use of.
\subsection{Separating the factors} Using the truncated Perron formula (as in e.g. Theorems 5.2 and 5.3 of Montgomery and Vaughan~\cite{mv}), if $T > 0$ and if we set $\tilde{x} := [x] + 1/2 \in \mathbb{N} + 1/2$, with $[x]$ denoting the integer part of $x$, then we have
$$ \textbf{1}_{mn \leq x} = \textbf{1}_{mn < \tilde{x}} = \frac{1}{2\pi i} \int_{1/2-iT}^{1/2+iT} \frac{\tilde{x}^{s}}{m^{s}n^{s}} \frac{ds}{s} + O\left(\frac{1}{T} \left(\frac{1}{|\log(\tilde{x}/mn)|} + \sqrt{\frac{\tilde{x}}{mn}} \right) \right) . $$ Exactly similarly, we have \begin{eqnarray} \textbf{1}_{P(n) \leq p_{1}(m)} = \textbf{1}_{P(n) < p_{1}(m) + 1/2} & = & \frac{1}{2\pi i} \int_{1/2-iT}^{1/2+iT} \frac{(p_{1}(m)+1/2)^{s}}{P(n)^{s}} \frac{ds}{s} + \nonumber \\
&& + O\left(\frac{1}{T} \left(\frac{1}{|\log((p_{1}(m)+1/2)/P(n))|} + \sqrt{\frac{p_{1}(m)}{P(n)}} \right) \right) . \nonumber \end{eqnarray}
In particular, if we choose $T=x^{5}$ then, since $|\tilde{x}/mn - 1| \geq 1/(2mn) \gg 1/x$ in the setting of $\S 4.1$, and similarly for $|(p_{1}(m)+1/2)/P(n) - 1|$, both of the ``big Oh'' terms above will be $O(1/x^{4})$.
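Indeed, since $\tilde{x} \in \mathbb{N}+1/2$ and $mn \in \mathbb{N}$ we have $|\tilde{x}-mn| \geq 1/2$, and $mn \ll x$ in the ranges of $\S 4.1$; so, using $1/|\log t| \ll 1/|t-1|$ for $t$ bounded (the contribution being even smaller when $t$ is large),

```latex
% The first Perron error term with T = x^5; the second is handled identically:
\frac{1}{T}\left(\frac{1}{|\log(\tilde{x}/mn)|} + \sqrt{\frac{\tilde{x}}{mn}}\right)
\ll \frac{1}{x^{5}}\left(\frac{mn}{|\tilde{x}-mn|} + \sqrt{\tilde{x}}\right)
\ll \frac{x + \sqrt{x}}{x^{5}} \ll \frac{1}{x^{4}} .
```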
As usual, the point of applying Perron's formula in this manner is that it separates the $m$ and $n$ variables in a multiplicative way. For example, we see \begin{eqnarray} && \sum_{m \in \mathcal{M}_{i,j}} \chi(m) \sum_{n \in \mathcal{N}_{i,j}^{(4)}} \chi(n) \textbf{1}_{mn \leq x} \textbf{1}_{P(n) \leq p_{1}(m)} \nonumber \\ & = & \sum_{m \in \mathcal{M}_{i,j}} \chi(m) \sum_{\substack{x^{19/20}/2^{i+1} < n \leq x^{19/20}/2^{i}, \\ y/\lambda^{j+1} < P(n) \leq y/\lambda^{j}}} \chi(n) \textbf{1}_{mn \leq x} \textbf{1}_{P(n) \leq p_{1}(m)} \nonumber \\ & = & \frac{1}{(2\pi i)^{2}} \int_{1/2-ix^{5}}^{1/2+ix^{5}} \int_{1/2-ix^{5}}^{1/2+ix^{5}} \left( \sum_{m \in \mathcal{M}_{i,j}} \chi(m) \frac{\tilde{x}^{s} (p_{1}(m)+1/2)^{u}}{m^{s}} \sum_{\substack{\frac{x^{19/20}}{2^{i+1}} < n \leq \frac{x^{19/20}}{2^{i}}, \\ y/\lambda^{j+1} < P(n) \leq y/\lambda^{j}}} \frac{\chi(n)}{n^{s}P(n)^{u}} \right) \frac{du}{u} \frac{ds}{s} \nonumber \\ && + O(\frac{1}{x^{4}} \sum_{m \in \mathcal{M}_{i,j}} \sum_{\substack{x^{19/20}/2^{i+1} < n \leq x^{19/20}/2^{i}, \\ y/\lambda^{j+1} < P(n) \leq y/\lambda^{j}}} 1) , \nonumber \end{eqnarray} where crucially there is no interaction between the inner sums over $m$ and $n$.
Summarising all of the calculations from $\S\S 4.1-4.2$, we conclude that $|\Psi(x,y;\chi)|$ is \begin{eqnarray}\label{separatedsums}
& \ll & |\Psi(x^{1/20},y;\chi)| + \sum_{i,j} \sum_{k=1}^{4} \int_{1/2-ix^{5}}^{1/2+ix^{5}} \int_{1/2-ix^{5}}^{1/2+ix^{5}} \left|\sum_{m \in \mathcal{M}_{i,j}} \chi(m) a_{s,u}^{(k)}(m) \right| \left|\sum_{n \in \mathcal{N}_{i,j}^{(k)}} \chi(n) b_{s,u}^{(k)}(n) \right| \frac{d|u|}{|u|} \frac{d|s|}{|s|} \nonumber \\ && + \frac{\Psi(x,y)}{x^{4}} , \end{eqnarray} for certain coefficients $a_{s,u}^{(k)}(m)$ and $b_{s,u}^{(k)}(n)$ whose precise forms the reader may readily ascertain, and where the ranges of summation over $i$ and $j$ are from $0$ to $[\log y / \log 2]$ and from $0$ to $[\log y / \log \lambda]$, respectively. The bound for the final ``big Oh'' term follows because, recalling that $2^{i}x^{1/20} < m \leq 2^{i+1}x^{1/20}$ and $y/\lambda^{j+1} < p_{1}(m) \leq y/\lambda^{j}$ for all $m \in \mathcal{M}_{i,j}$, we have \begin{eqnarray}\label{smooth2app} \sum_{i,j,k} \sum_{m \in \mathcal{M}_{i,j}} \sum_{n \in \mathcal{N}_{i,j}^{(k)}} 1 \ll \sum_{i,j} \sum_{m \in \mathcal{M}_{i,j}} \sum_{\substack{n \leq x^{19/20}/2^{i}, \\ P(n) \leq y/\lambda^{j}}} 1 \ll \sum_{i,j} \sum_{m \in \mathcal{M}_{i,j}} \sum_{\substack{n \leq x/m, \\ P(n) \leq p_{1}(m)}} 1 \leq \Psi(x,y), \end{eqnarray} where the second inequality uses Smooth Numbers Result 2. Actually Smooth Numbers Result 2 doesn't apply to pairs $(i,j)$ where $y/\lambda^{j} \leq \log(x^{19/20}/2^{i})$, but in that case if $y/\lambda^{j+1} < p_{1}(m) \leq y/\lambda^{j}$ then we must have $p_{1}(m) = [y/\lambda^{j}]$ anyway, since $\lambda$ is very close to 1. As we have a denominator of $x^{4}$, we could of course rely on much cruder arguments at this point, but we will require the precise calculations that we just performed in the next subsection.
In the above, the coefficients $a_{s,u}^{(k)}(m)$ have the same order of magnitude for all $m \in \mathcal{M}_{i,j}$, for given $i,j,k$ (similarly for $b_{s,u}^{(k)}(n)$), and their products satisfy
$$ |a_{s,u}^{(k)}(m)||b_{s,u}^{(k)}(n)| \ll 1 \;\;\; \forall m \in \mathcal{M}_{i,j}, n \in \mathcal{N}_{i,j}^{(k)}, \; s, u \in [1/2-ix^{5},1/2+ix^{5}], $$ for given $i,j,k$. These properties only hold because we split our sum over $n$ into the subsums over $\mathcal{N}_{i,j}^{(k)}$ in the previous subsection, so that in our applications of Perron's formula the quantities $x$ and $mn$, and $p_{1}(m)$ and $P(n)$, are always of comparable size.
\subsection{Proof of Proposition 2} Now we shall prove Proposition 2, which we remind the reader will complete the proof of Theorem 1 for all $\log^{K}x \leq y \leq x^{9/10}$. Exactly as at the beginning of $\S 4$, when we proved Theorem 2, we find the left hand side in Proposition 2 is \begin{eqnarray}
&& \sum_{x^{\eta} \leq r \leq Q} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{\substack{x^{\eta} \leq q \leq Q, \\ r \mid q}} \frac{1}{\phi(q)} \sum_{\substack{\chi \; (\textrm{mod } q), \\ \chi^{*} \; \textrm{induces } \chi}} \left|\sum_{n \leq x, n \in \mathcal{S}(y)} \chi^{*}(n) \sum_{d \mid (n,q/r)} \mu(d) \right| \nonumber \\
& \leq & \sum_{x^{\eta} \leq r \leq Q} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \sum_{\substack{x^{\eta} \leq q \leq Q, \\ r \mid q}} \frac{1}{\phi(q)} \sum_{d \mid (q/r)} |\Psi(x/d,y;\chi^{*})| \nonumber \\
& \ll & \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \sum_{d \leq Q/(2^{l}x^{\eta})} \frac{\log(Q/(2^{l}x^{\eta}d) + 1)}{\phi(d)} \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| . \nonumber \end{eqnarray}
Next we would like to apply the argument of $\S\S 4.1-4.2$ to study $\Psi(x/d,y;\chi^{*})$, but this will be problematic if $d$ is too large, because then our condition that $y \leq (x/d)^{37/40}$ may be violated. However, combining the Cauchy--Schwarz inequality with Multiplicative Large Sieve 1 yields \begin{eqnarray}
\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| & \ll & \sqrt{\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{2^{l}x^{\eta}}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})|^{2}} \nonumber \\ & \ll & (\sqrt{x/d} + 2^{l}x^{\eta})\sqrt{\Psi(x/d,y)}, \nonumber \end{eqnarray} and therefore \begin{eqnarray}
&& \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \sum_{x^{1/40} < d \leq Q/(2^{l}x^{\eta})} \frac{\log(Q/(2^{l}x^{\eta}d) + 1)}{\phi(d)} \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| \nonumber \\ & \ll & \sqrt{\Psi(x,y)} \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \log(Q/(2^{l}x^{\eta}) + 1) \sum_{x^{1/40} < d \leq Q/(2^{l}x^{\eta})} \frac{1}{\phi(d)} (\sqrt{x/d} + 2^{l}x^{\eta}) \nonumber \\ & \ll & \sqrt{\Psi(x,y)}(x^{1/2-1/80}\log^{2}Q + Q). \nonumber \end{eqnarray} This bound is acceptable for Proposition 2. On the other hand, when $d \leq x^{1/40}$ we have $y \leq x^{9/10} \leq (x/d)^{36/39} < (x/d)^{37/40}$, so we shall be able to apply the argument of $\S\S 4.1-4.2$.
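The exponent comparison used in the last sentence is elementary: if $d \leq x^{1/40}$ then

```latex
% Check that y \leq (x/d)^{36/39} when d \leq x^{1/40} and y \leq x^{9/10}:
\left(\frac{x}{d}\right)^{36/39} \geq \left(x^{39/40}\right)^{36/39} = x^{36/40} = x^{9/10} \geq y,
\qquad \text{while} \qquad
\frac{36}{39} = \frac{480}{520} < \frac{481}{520} = \frac{37}{40},
```

so indeed $y \leq (x/d)^{37/40}$, as required for the argument of $\S\S 4.1-4.2$.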
Indeed, combining the bound (\ref{separatedsums}) with the Cauchy--Schwarz inequality and Multiplicative Large Sieve 1, we see \begin{eqnarray}
&& \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| \nonumber \\
& \ll & \frac{2^{l}x^{\eta}}{(x/d)^{3}} + \sqrt{\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{2^{l}x^{\eta}}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi((x/d)^{1/20},y;\chi^{*})|^{2}} + \nonumber \\
&& + \frac{1}{2^{l}x^{\eta}} \sum_{i,j,k} \int \sqrt{ \sum_{r=1}^{2^{l+1}x^{\eta}} \frac{r}{\phi(r)} \sum_{\chi^{*}} \left|\sum_{m} \chi^{*}(m) a_{s,u}^{(k)}(m) \right|^{2} \cdot \sum_{r=1}^{2^{l+1}x^{\eta}} \frac{r}{\phi(r)} \sum_{\chi^{*}} \left|\sum_{n} \chi^{*}(n) b_{s,u}^{(k)}(n) \right|^{2} } \frac{d|u| d|s|}{|u||s|} \nonumber \\ & \ll & 2^{l}x^{\eta} + \sqrt{((x/d)^{1/20} + (2^{l}x^{\eta})^{2}) \Psi((x/d)^{1/20},y)} + \nonumber \\
&& + \frac{1}{2^{l}x^{\eta}} \sum_{i,j,k} \int \sqrt{(2^{i}(\frac{x}{d})^{\frac{1}{20}} + (2^{l}x^{\eta})^{2}) \sum_{m \in \mathcal{M}_{i,j}} |a_{s,u}^{(k)}(m)|^{2} (\frac{(x/d)^{\frac{19}{20}}}{2^{i}} + (2^{l}x^{\eta})^{2}) \sum_{n \in \mathcal{N}_{i,j}^{(k)}} |b_{s,u}^{(k)}(n)|^{2}} \frac{d|u| d|s|}{|u||s|}. \nonumber \end{eqnarray} In view of the discussion in the final paragraph of $\S 4.2$, for any $i,j,k,s,u$ we have
$\sum_{m \in \mathcal{M}_{i,j}} |a_{s,u}^{(k)}(m)|^{2} \cdot \sum_{n \in \mathcal{N}_{i,j}^{(k)}} |b_{s,u}^{(k)}(n)|^{2} \ll \#\mathcal{M}_{i,j} \#\mathcal{N}_{i,j}^{(k)}$. If we insert this upper bound then none of the terms inside the square root depend on $s$ or $u$ any longer, so we can perform the integrations over those variables and pick up an additional factor of $\log^{2}(x/d)$. Then applying the Cauchy--Schwarz inequality to the sum over $i,j,k$, we see the third term in the above is \begin{eqnarray}\label{largesievepenult} & \ll & \frac{\log^{2}(x/d)}{2^{l}x^{\eta}} \sqrt{ \sum_{i,j,k} (x/d + (2^{l}x^{\eta})^{4} + (2^{l}x^{\eta})^{2}2^{i}(x/d)^{1/20} + (2^{l}x^{\eta})^{2}\frac{(x/d)^{19/20}}{2^{i}}) \cdot \sum_{i,j,k} \#\mathcal{M}_{i,j} \#\mathcal{N}_{i,j}^{(k)} } \nonumber \\ & \ll & \log^{5/2}(x/d) \log y \left(\frac{\sqrt{x/d}}{2^{l}x^{\eta}} + 2^{l}x^{\eta} + \sqrt{y} (x/d)^{1/40} + (x/d)^{19/40} \right) \sqrt{\Psi(x/d,y)}, \end{eqnarray} bearing in mind that the ranges of summation over $i,j,k$ are from 0 to $[\log y /\log 2]$, from 0 to $[\log y /\log \lambda] = O(\log y \log(x/d))$, and from 1 to 4 respectively. Here we also note that $\sum_{i,j,k} \#\mathcal{M}_{i,j} \#\mathcal{N}_{i,j}^{(k)} \ll \Psi(x/d,y)$, as we saw in the calculations (\ref{smooth2app}) with $x$ replaced by $x/d$.
It is clear that the bound (\ref{largesievepenult}) is also an upper bound for the two other terms in our bound for $\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} (1/\phi(r)) \sum_{\chi^{*} (\textrm{mod } r), \; \chi^{*} \; \textrm{primitive}} |\Psi(x/d,y;\chi^{*})|$. Moreover, since we assume in Proposition 2 that $y \leq x^{9/10}$, and therefore $\sqrt{y}x^{1/40} \leq x^{19/40}$, we can replace (\ref{largesievepenult}) by the simplified upper bound
$$ \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| \ll \log^{7/2}x \sqrt{\Psi(x,y)} \left(\frac{\sqrt{x/d}}{2^{l}x^{\eta}} + 2^{l}x^{\eta} + \frac{x^{19/40}}{d^{1/40}} \right) . $$ Thus the left hand side in Proposition 2 is \begin{eqnarray} & \ll & \log^{7/2}x \sqrt{\Psi(x,y)} \sum_{l=0}^{[\log(Q/x^{\eta})/\log 2]} \log(Q/2^{l}x^{\eta} + 1) \left(\frac{\sqrt{x}}{2^{l}x^{\eta}} + 2^{l}x^{\eta}\log(Q/2^{l}x^{\eta} + 1) + x^{19/40}\right) \nonumber \\ & \ll & \log^{7/2}x \sqrt{\Psi(x,y)} \left(x^{1/2-\eta}\log Q + Q + x^{19/40}\log^{2}Q \right) . \nonumber \end{eqnarray} The proposition follows immediately, given our hypothesis that $\eta \leq 1/80 < 1/40$. \begin{flushright} Q.E.D. \end{flushright}
\subsection{The large sieve argument for very large $y$} In this subsection we will sketch a proof that Proposition 2 still holds when $x^{9/10} < y \leq x$. This will finally complete the proof of Theorem 1. Arguing as in $\S 4.3$, the reader may check that it will suffice to show that, for any $0 \leq l \leq \log(Q/x^{\eta})/\log 2$ and any $d \leq Q \leq \sqrt{x}$, \begin{eqnarray}\label{largeylargesieve}
\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} |\Psi(x/d,y;\chi^{*})| \ll (\frac{\sqrt{x/d}}{2^{l}x^{\eta}} + 2^{l}x^{\eta} + x^{9/20})\sqrt{x}\log^{7/2}x . \end{eqnarray}
Firstly, if $d \geq x/y$ then $\Psi(x/d,y;\chi^{*}) = \sum_{n \leq x/d} \chi^{*}(n) = O(\sqrt{r}\log r) = O(\sqrt{x}\log x)$, in view of the P\'{o}lya--Vinogradov inequality (see e.g. Theorem 9.18 of Montgomery and Vaughan~\cite{mv}). This bound is certainly acceptable for (\ref{largeylargesieve}).
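As an aside, the P\'{o}lya--Vinogradov bound invoked here is easy to probe numerically. The following Python sketch (purely illustrative, not part of the proof) computes the maximal partial sum of the Legendre symbol modulo a prime $r$, a real primitive character, and checks that it lies below $\sqrt{r}\log r$:

```python
# Numerical sanity check (illustrative only): for the Legendre symbol
# chi mod an odd prime p, max_N |sum_{n <= N} chi(n)| stays below the
# Polya--Vinogradov bound sqrt(p) * log(p).
import math

def legendre(n, p):
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

def max_partial_sum(p):
    """Maximum of |sum_{n <= N} (n/p)| over 1 <= N <= p."""
    s, best = 0, 0
    for n in range(1, p + 1):
        s += legendre(n, p)
        best = max(best, abs(s))
    return best

for p in [101, 1009, 10007]:
    assert max_partial_sum(p) <= math.sqrt(p) * math.log(p)
```

In practice the maximal partial sums are far smaller than the bound, which is consistent with the bound being "certainly acceptable" here.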
On the other hand, if $d < x/y$ then \begin{eqnarray} \Psi(\frac{x}{d},y;\chi^{*}) & = & \sum_{n \leq x/d} \chi^{*}(n) - \sum_{y < p \leq x/d} \sum_{m \leq x/dp} \chi^{*}(mp) \nonumber \\ & = & \sum_{n \leq x/d} \chi^{*}(n) - \sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < n \leq \frac{x}{dm}} \frac{\chi^{*}(n) \Lambda(n)}{\log n} + \sum_{k=2}^{[\frac{\log x}{\log 2}]} \sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < p^{k} \leq \frac{x}{dm}} \frac{\chi^{*}(p^{k})}{k} . \nonumber \end{eqnarray} Moreover, when $k \geq 3$ we trivially have $$ \sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < p^{k} \leq \frac{x}{dm}} \frac{\chi^{*}(p^{k})}{k} = O(\sum_{m \leq x/dy} \left(\frac{x}{dm}\right)^{1/3}) = O\left(\frac{x}{dy^{2/3}}\right) = O(\sqrt{x}), $$ and the P\'{o}lya--Vinogradov inequality again yields $\sum_{n \leq x/d} \chi^{*}(n) = O(\sqrt{x}\log x)$. Thus \begin{eqnarray}
|\Psi(x/d,y;\chi^{*})| & \ll & \sqrt{x}\log x + \left|\sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < n \leq \frac{x}{dm}} \frac{\chi^{*}(n) \Lambda(n)}{\log n}\right| + \left|\sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < p^{2} \leq \frac{x}{dm}} \chi^{*}(p^{2})\right| \nonumber \\
& \ll & \sqrt{x}\log x + \frac{1}{\log y} \left|\sum_{m \leq x/dy} \chi^{*}(m) \sum_{y < n \leq \frac{x}{dm}} \chi^{*}(n) \Lambda(n) \right| + \nonumber \\
&& + \int_{y}^{x/d} \frac{1}{t\log^{2}t} \left|\sum_{m \leq x/dy} \chi^{*}(m) \sum_{t < n \leq \frac{x}{dm}} \chi^{*}(n) \Lambda(n) \right|dt + \left|\sum_{\substack{mp^{2} \leq x/d, \\ p^{2} > y}} \chi^{*}(mp^{2})\right| , \nonumber \end{eqnarray} where the second inequality follows by writing $1/\log n = 1/\log y - \int_{y}^{n} dt/(t\log^{2}t)$, i.e. by using partial summation.
As before, the contribution to (\ref{largeylargesieve}) from the $\sqrt{x}\log x$ term is acceptable. Bounding the last term trivially will not quite be satisfactory, but since a number less than $x$ has at most one representation as $mp^{2}$ with $p^{2} > y$, and since $\sum_{mp^{2} \leq x/d, p^{2} > y} 1 \ll x/(d\sqrt{y})$, the Cauchy--Schwarz inequality and Multiplicative Large Sieve 1 imply that \begin{eqnarray}
\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \left|\sum_{\substack{mp^{2} \leq x/d, \\ p^{2} > y}} \chi^{*}(mp^{2})\right| & \ll & \sqrt{\sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{2^{l}x^{\eta}}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} \left|\sum_{\substack{mp^{2} \leq x/d, \\ p^{2} > y}} \chi^{*}(mp^{2})\right|^{2} } \nonumber \\ & \ll & (2^{l}x^{\eta} + \sqrt{x/d}) \sqrt{x/(d\sqrt{y})}. \nonumber \end{eqnarray} This is acceptable for (\ref{largeylargesieve}) with much room to spare.
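The counting estimate $\sum_{mp^{2} \leq N, \, p^{2} > y} 1 \ll N/\sqrt{y}$ used above can be brute-forced in small ranges. The following Python sketch (illustrative only, with small parameters in place of $x/d$ and $y$) counts the pairs $(m,p)$ directly:

```python
# Brute-force check (illustrative) of the counting bound: the number of
# pairs (m, p) with p prime, p^2 > y and m*p^2 <= N is O(N / sqrt(y)),
# since it is at most N * sum_{p > sqrt(y)} 1/p^2.
import math

def sieve(limit):
    """Primes up to limit, by the sieve of Eratosthenes."""
    is_p = [True] * (limit + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit + 1, i):
                is_p[j] = False
    return [i for i, b in enumerate(is_p) if b]

def count_mp2(N, y):
    """Count pairs (m, p): p prime, p^2 > y, m * p^2 <= N."""
    total = 0
    for p in sieve(math.isqrt(N)):
        if p * p > y:
            total += N // (p * p)
    return total

for N, y in [(10**5, 100), (10**6, 10**4)]:
    assert count_mp2(N, y) <= 2 * N / math.sqrt(y)
```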
Finally, to bound the terms involving $\sum_{n} \chi^{*}(n)\Lambda(n)$ one can use Vaughan's identity to expand these sums into non-trivial double sums, and then collect the sum over $m \leq x/dy$ (which is a short sum, since $y$ is so close to $x$) with one of those sums and apply the Cauchy--Schwarz inequality and Multiplicative Large Sieve 1. We do not write out the details, since this is cumbersome, but refer the reader to pages 166--167 of Davenport's book~\cite{davenport} for an argument that can easily be adapted to our purposes. More specifically, one can follow that argument with the simple choices $U=V=x^{1/10}$, and discover that for the analogues of the sums $S_{1},S_{2}',S_{2}'',S_{3},S_{4}$ arising there one obtains, in our case, that $$ S_{1}, S_{2}', S_{3} \ll \sqrt{r}x^{1/10}\log^{2}x \frac{x}{dy} \ll x^{9/20}\log^{2}x , $$ $$ \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} S_{2}'' \ll \left(2^{l}x^{\eta} + \frac{\sqrt{x/d}}{x^{1/20}} + x^{3/20} + \frac{\sqrt{x/d}}{2^{l}x^{\eta}} \right)\sqrt{x/d}\log^{9/2}x , $$ $$ \sum_{2^{l}x^{\eta} \leq r \leq 2^{l+1}x^{\eta}} \frac{1}{\phi(r)} \sum_{\substack{\chi^{*} \; (\textrm{mod } r), \\ \chi^{*} \; \textrm{primitive}}} S_{4} \ll \left(2^{l}x^{\eta} + \frac{\sqrt{x/d}}{x^{1/20}} + \frac{\sqrt{x/d}}{2^{l}x^{\eta}} \right)\sqrt{x/d}\log^{9/2}x . $$ Here we recall that $r \ll \sqrt{x}$ in Theorem 1, and $y > x^{9/10}$ in this subsection. Remembering that we must still multiply by a factor $O(1/\log x)$ that arose from partial summation, these estimates all suffice to give the bound (\ref{largeylargesieve}). \begin{flushright} Q.E.D. \end{flushright}
\appendix \section{The sums in the exponents}
\subsection{Proof of Lemma 1} First we make some observations that will make the main part of the proof run more smoothly. We may certainly assume that $\epsilon \geq 1/\log z$, because if it isn't then the bound in Lemma 1 is trivial. We may also assume that $\chi$ is a primitive Dirichlet character, because otherwise we can replace it by the primitive character it is induced from, at the cost of an error term that is $$ \ll \sum_{p \mid q} \frac{\log p}{p^{\sigma}} \ll \sum_{p \leq 10\log q} \frac{\log p}{p^{\sigma}} \ll \sum_{p \leq 10\log q} \frac{\log p}{p^{0.1}} \ll \log^{0.9}q \;\;\; \textrm{ if } 0.1 \leq \sigma \leq 1, $$ and is $$ \ll \sum_{p \mid q} \log p \left[\frac{\log z}{\log p}\right] \ll \frac{\log z \log q}{\log\log(q+2)} \;\;\; \textrm{ if } 0 \leq \sigma \leq 0.1, \textrm{ say}. $$ Bearing in mind that we have $z \geq (Hr)^{C} \geq H^{5}$ and $\epsilon \leq 1/2$ in Lemma 1, the second of these terms is $$ \ll H\log^{2}z + \frac{\log^{2}q}{H} \ll z^{1/4} + \frac{\log^{2}(qzH)}{H} \ll \frac{z^{1-\sigma-0.9\epsilon}}{1-\sigma} + \frac{z^{1-\sigma} \log^{2}(qzH)}{(1-\sigma)H}, $$ so in any case the error term may be absorbed into the right hand side of Lemma 1. Finally, we may assume that $\sigma+it$ is not a zero of $L(s,\chi)$, because if it is then we can replace $\sigma+it$ by an arbitrarily close point that is not a zero, which will have a negligible effect on the left hand side in the statement of the lemma.
Now a classical explicit formula, reproduced as e.g. Theorem 12.10 of Montgomery and Vaughan~\cite{mv}, implies that if $z,T \geq 2$ and if $\chi$ is a primitive non-principal Dirichlet character then
$$ \sum_{n \leq z} \Lambda(n)\chi(n) = - \sum_{\rho, \atop |\Im(\rho)| \leq T} \frac{z^{\rho}}{\rho} + C(\chi) + O(\log z) + O(\frac{z \log^{2}(rzT)}{T}), $$ where $r$ is the conductor of $\chi$ and $$ C(\chi) := \frac{L'(1,\overline{\chi})}{L(1,\overline{\chi})} + \log(r/2\pi) - \gamma , $$ with $\gamma$ denoting Euler's constant. The proof of this formula can be modified in a straightforward way to show that, when $0 \leq \sigma < 1$ and $\sigma + it \neq 0$ is not a zero of $L(s,\chi)$, \begin{eqnarray}
\sum_{n \leq z} \frac{\Lambda(n)\chi(n)}{n^{\sigma + it}} & = & - \sum_{\rho, \atop |\Im(\rho)-t| \leq T} \frac{z^{\rho-\sigma-it}}{\rho - \sigma - it} + (1-a(\chi))\frac{z^{-\sigma-it}}{\sigma + it} -\frac{L'(\sigma+it,\chi)}{L(\sigma+it,\chi)} + O(\frac{\log z}{z^{\sigma}}) \nonumber \\
&& + O(\frac{z^{1-\sigma} \log^{2}(rzT(|t|+1))}{T}), \nonumber \end{eqnarray} where $a(\chi)$ is zero or one according as $\chi(-1)$ is 1 or $-1$. (Here the term $(1-a(\chi))z^{-\sigma-it}/(\sigma+it)$ arises because, if $\chi(-1)=1$, the function $L(s,\chi)$ has a zero at $s=0$.)
At this point we shall divide the proof of Lemma 1 into two cases, according to the relative sizes of $1-\sigma$ and of $\epsilon$ (the width of the hypothesised zero-free region): \begin{enumerate} \item if $1-\sigma \leq 0.99\epsilon$;
\item if $1-\sigma > 0.99\epsilon$. \end{enumerate}
In the first case, if we choose $T=H/2$ in the preceding discussion, and note that we have $0.505 \leq 1-0.99\epsilon \leq \sigma < 1$ and $|t| \leq H/2$ in Lemma 1, we find that
$$ \left|\sum_{n \leq z} \frac{\Lambda(n)\chi(n)}{n^{\sigma + it}}\right| \ll \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{z^{\Re(\rho)-\sigma}}{|\rho - \sigma - it|} + \left|\frac{L'(\sigma+it,\chi)}{L(\sigma+it,\chi)}\right| + \frac{\log z}{z^{\sigma}} + \frac{z^{1-\sigma} \log^{2}(rzH)}{H}. $$
Next, a direct modification of the proof of Lemma 3 in the author's paper~\cite{harper3} (replacing $1+1/\log q$ there by $1+\epsilon$, and breaking the sums over zeros according as $|\Im(\rho)| \leq H$, rather than $|\Im(\rho)| \leq q$) shows that $$ \frac{L'(\sigma+it,\chi)}{L(\sigma+it,\chi)} = O(1/\epsilon + \log(rH)). $$ Keeping in mind that, by assumption, every term $\rho$ in the sum satisfies $\Re(\rho) \leq 1 - \epsilon$, we also have
$$ \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{z^{\Re(\rho)-\sigma}}{|\rho - \sigma - it|} \ll z^{1/2-\sigma} \sum_{\rho : \Re(\rho) \leq 1/2, \atop |\Im(\rho)| \leq H} \frac{1}{1+|\rho-it|} + \frac{z^{1-\sigma}}{\epsilon} \sum_{k=1}^{[1/2\epsilon]} z^{-k\epsilon} \sum_{\rho : \Re(\rho) > 1-(k+1)\epsilon, \atop |\Im(\rho)| \leq H} 1, $$
since $|\rho-\sigma-it| \geq \max\{|\Re(\rho)-\sigma|,|\Im(\rho)-t|\} \gg \max\{\epsilon,|\Im(\rho)-t|\}$ in this case. Now standard results on the vertical distribution of zeros of $L(s,\chi)$, as in e.g. Theorem 10.17 of Montgomery and Vaughan~\cite{mv}, show that the first sum is $O(\log^{2}(rH))$. Moreover, the log-free zero-density estimate in Zeros Result 1 shows the second sum is $$ \ll \sum_{k=1}^{[1/2\epsilon]} z^{-k\epsilon} (rH)^{3(k+1)\epsilon} \ll \sum_{k=1}^{[1/2\epsilon]} z^{-0.9k\epsilon} \ll z^{-0.9\epsilon}, $$ provided the value of $C > 0$ in Lemma 1 (for which $z \geq (rH)^{C}$) was chosen large enough. Here we used our assumption that $\epsilon \geq 1/\log z$ to sum the geometric progression. Putting all of this together, and remembering that we have $\epsilon \gg 1-\sigma$ in this first case, we see
$$ \left|\sum_{n \leq z} \frac{\Lambda(n)\chi(n)}{n^{\sigma + it}}\right| \ll \frac{z^{1-\sigma-0.9\epsilon}}{1-\sigma} + \log(rH) + \frac{1}{\epsilon} + \frac{z^{1-\sigma} \log^{2}(rzH)}{H}, $$ which suffices for the bound claimed in Lemma 1.
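The geometric-progression step above rests on the assumption $\epsilon \geq 1/\log z$, which forces the ratio $z^{-0.9\epsilon} \leq e^{-0.9} < 1$, so the tail is dominated by its first term. A quick numerical confirmation (illustrative only, with arbitrary sample values of $z$ and $\epsilon$):

```python
# Illustrative check: with q = z^{-0.9*eps} and eps >= 1/log(z), we have
# q <= e^{-0.9}, hence sum_{k=1}^{[1/(2 eps)]} q^k <= q / (1 - e^{-0.9}).
import math

def geometric_tail(z, eps):
    """The finite geometric sum sum_{k=1}^{[1/(2 eps)]} z^{-0.9 k eps}."""
    q = z ** (-0.9 * eps)
    K = int(1 / (2 * eps))
    return sum(q ** k for k in range(1, K + 1))

z = 1e6
for eps in [1 / math.log(z), 0.1, 0.2]:
    q = z ** (-0.9 * eps)
    assert q <= math.exp(-0.9) + 1e-12
    assert geometric_tail(z, eps) <= q / (1 - math.exp(-0.9))
```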
In the second case of the proof, where $1-\sigma > 0.99\epsilon$, we shall take a slightly more ``low-tech'' approach. Thus we have \begin{eqnarray}
\left|\sum_{n \leq z} \frac{\Lambda(n)\chi(n)}{n^{\sigma + it}}\right| & \leq & \sum_{n \leq z^{1/100}} \frac{\Lambda(n)}{n^{\sigma}} + \sum_{j=0}^{[99\log z /(100\log 2)]} \left|\sum_{2^{j}z^{1/100} < n \leq \min\{2^{j+1}z^{1/100},z\}} \frac{\Lambda(n) \chi(n)}{n^{\sigma+it}} \right| \nonumber \\
& \ll & \frac{z^{(1-\sigma)/100}}{1-\sigma} + \sum_{j=0}^{[99\log z /(100\log 2)]} \frac{1}{(2^{j}z^{1/100})^{\sigma}} \max_{m \leq 2^{j+1}z^{1/100}} \left|\sum_{2^{j}z^{1/100} < n \leq m} \frac{\Lambda(n) \chi(n)}{n^{it}} \right|, \nonumber \end{eqnarray} where the first line is simply the triangle inequality, and the second line uses Abel's partial summation lemma. Note that $$ z^{(1-\sigma)/100} = z^{1-\sigma-0.99(1-\sigma)} \leq z^{1-\sigma-0.9\epsilon} $$ in this case, which is acceptable for Lemma 1. We will show that, under the hypotheses of Lemma 1, each subsum in the sum over $j$ is $\ll (2^{j+1}z^{1/100})^{1-0.9\epsilon} + 2^{j+1}z^{1/100}\log^{2}(rzH)/H$, which the reader may check is sufficient to establish the bound claimed in the lemma.
In fact we have already done almost all of the necessary work. The explicit formula that we stated above implies that, for any $X \geq 2$ and any $0 < |t| \leq H/2$,
$$ \left|\sum_{n \leq X} \frac{\Lambda(n)\chi(n)}{n^{it}} \right| \ll \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{X^{\Re(\rho)}}{|\rho - it|} + \left|(1-a(\chi))\frac{X^{-it}}{it} -\frac{L'(it,\chi)}{L(it,\chi)} \right| + \log X + \frac{X\log^{2}(rXH)}{H}. $$ Moreover, exploiting the functional equation for $L(s,\chi)$, using e.g. formulae (12.9) and (C.17) of Montgomery and Vaughan~\cite{mv}, we find \begin{eqnarray} -\frac{L'(it,\chi)}{L(it,\chi)} & = & \frac{L'(1-it,\overline{\chi})}{L(1-it,\overline{\chi})} + \log(r/2\pi) + \frac{\Gamma'(1-it)}{\Gamma(1-it)} - \frac{\pi}{2}\cot((\pi/2)(it+a(\chi))) \nonumber \\
& = & \frac{L'(1-it,\overline{\chi})}{L(1-it,\overline{\chi})} + O(\log(r(|t|+1))) -\frac{1}{it+a(\chi)} + O(1), \nonumber \end{eqnarray}
and so for $0 < |t| \leq H/2$ we have \begin{eqnarray}\label{appexplicit}
\left|\sum_{n \leq X} \frac{\Lambda(n)\chi(n)}{n^{it}} \right| \ll \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{X^{\Re(\rho)}}{|\rho - it|} + \left|\frac{L'(1-it,\overline{\chi})}{L(1-it,\overline{\chi})} \right| + \log(rXH) + \frac{X\log^{2}(rXH)}{H} . \end{eqnarray}
This also holds when $t=0$, that being the standard case that we quoted at the very beginning of this section. In addition, the zero-free region hypothesised in Lemma 1 implies that any ``exceptional'' real zero of $L(s,\overline{\chi})$ is $\leq 1-\epsilon \leq 1-1/\log z$, so standard results (as in Theorem 11.4 of Montgomery and Vaughan~\cite{mv}, for example) imply that $|L'(1-it,\overline{\chi})/L(1-it,\overline{\chi})| \ll \epsilon^{-1} + \log(r(|t|+1)) \ll \log(rzH)$.
Finally, if $z^{1/100} \leq X \leq z$ then, as we did earlier, we can use the bound (\ref{appexplicit}) and the log-free zero-density estimate from Zeros Result 1 to conclude that
$$ \left|\sum_{n \leq X} \frac{\Lambda(n)\chi(n)}{n^{it}} \right| \ll \sqrt{X}\log^{2}(rzH) + X^{1-0.9\epsilon} + \log(rzH) + \frac{X\log^{2}(rzH)}{H}, $$ provided the constant $C > 0$ in Lemma 1 was chosen large enough. The first three terms here are all $\ll X^{1-0.9\epsilon}$ (bearing in mind that $0 < \epsilon \leq 1/2$ and $z \geq (rH)^{C}$), and applying this estimate with $X=2^{j}z^{1/100}$ gives the bound we wanted. \begin{flushright} Q.E.D. \end{flushright}
\subsection{Proof of Lemma 2} We follow the proof of Lemma 1 closely, with only two changes. Firstly, when we argued that we could replace $\chi$ by the primitive character it is induced from, we required the assumption that $z \geq (Hr)^{C}$ when $0 \leq \sigma \leq 0.1$, say. However, now we can argue that the error term arising there is $$ \ll \frac{\log z \log q}{\log\log(q+2)} \ll \frac{z^{1-\sigma-0.95\epsilon}\log^{2}(qzH)}{1-\sigma} , $$ which is acceptable for Lemma 2. Secondly, we shall give simpler treatments of some of the sums over zeros in the proof to replace the appeal to a log-free zero-density estimate, which we cannot use successfully having dropped the assumption that $z \geq (Hr)^{C}$.
In the first case of the proof, where $1-\sigma \leq 0.99\epsilon$, we note that
$$ \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{z^{\Re(\rho)-\sigma}}{|\rho - \sigma - it|} \ll \frac{z^{1-\epsilon-\sigma}}{\epsilon} \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{1}{1+|\rho - it|} \ll \frac{z^{1-\epsilon-\sigma}}{1-\sigma} \log^{2}(rH) , $$ in view of standard results on the vertical distribution of zeros. This suffices for the bound claimed in the lemma.
In the second case of the proof, where $1-\sigma > 0.99\epsilon$, it suffices to show that
$$ \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{X^{\Re(\rho)}}{|\rho - it|} \ll X^{1-\epsilon} \log^{2}(rzH) $$ when $z^{1/100} \leq X \leq z$ (say), and then apply this estimate with $X=2^{j}z^{1/100}$ as in the proof of Lemma 1. However, we immediately see that
$$ \sum_{\rho, \atop |\Im(\rho)| \leq H} \frac{X^{\Re(\rho)}}{|\rho - it|} \ll \sqrt{X}\log^{2}(rzH) + X^{1-\epsilon}\sum_{\rho : \Re(\rho) > 1/2, \atop |\Im(\rho)| \leq H} \frac{1}{1+|\rho - it|} \ll X^{1-\epsilon}\log^{2}(rzH) , $$ as required. \begin{flushright} Q.E.D. \end{flushright}
\noindent {\em Acknowledgements.} The author would like to thank Andrew Granville for his help with the literature on smooth numbers in arithmetic progressions.
\end{document} |
\begin{document}
\title{\bf A Linear-Quadratic Stackelberg Differential Game with Mixed Deterministic and Stochastic Controls
\thanks{Shi acknowledges the financial support by the National Key R\&D Program of China under Grant No. 2018YFB1305400, and by the NSFC under Grant Nos. 11971266, 11571205, 11831010. Wang acknowledges the financial support by the NSFC for Distinguished Young Scholars under Grant No. 61925306.}}
\maketitle
\noindent{\bf Abstract:}\quad This paper is concerned with a linear-quadratic (LQ) Stackelberg differential game with mixed deterministic and stochastic controls. In this game, the follower is a random controller, meaning that it can choose adapted random processes, while the leader is a deterministic controller, meaning that it can choose only deterministic time functions. An open-loop Stackelberg equilibrium solution is considered. First, the optimal control process of the follower is obtained via the maximum principle for controlled stochastic differential equations (SDEs); it is a linear functional of the optimal state variable and the control variable of the leader, via a classical Riccati equation. Then the optimal control function of the leader is obtained via a direct calculation of the derivative of the cost functional, using the solution to a system of mean-field forward-backward stochastic differential equations (MF-FBSDEs). It is represented as a functional of the expectation of the optimal state variable, together with the solutions to a two-point boundary value problem of ordinary differential equations (ODEs), by a system consisting of two coupled Riccati equations. The solvability of this new system of Riccati equations is discussed.
\noindent{\bf Keywords:}\quad Stackelberg differential game, mixed deterministic and stochastic controls, linear-quadratic control, feedback representation of optimal control, mean-field forward-backward stochastic differential equation
\noindent{\bf Mathematics Subject Classification:}\quad 93E20, 49K45, 49N10, 49N70, 60H10
\section{Introduction}
In this paper, we use $\mathbb{R}^n$ to denote the Euclidean space of $n$-dimensional vectors, $\mathbb{R}^{n\times d}$ to denote the space of $n\times d$ matrices, and $\mathbb{S}^n$ to denote the space of $n\times n$ symmetric matrices. For a matrix-valued function $R:[0,T]\rightarrow\mathbb{S}^n$, we denote by $R\geqslant0$ that $R_t$ is positive semi-definite for any $t\in[0,T]$, and by $R\gg0$ that $R_t$ is uniformly positive definite, i.e., there is a positive real number $\alpha$ such that $R_t\geq\alpha I$ for any $t\in[0,T]$. $\langle\cdot,\cdot\rangle$ and $|\cdot|$ are used to denote the scalar product and norm in some Euclidean space, respectively. The superscript $\top$ on a matrix denotes its transpose. $\mbox{trace}[A]$ denotes the trace of a square matrix $A$. $f_x,f_{xx}$ denote the first- and second-order partial derivatives with respect to $x$ for a differentiable function $f$, respectively.
Let $(\Omega,\mathcal{F},\mathbb{P})$ be a complete probability space, on which an $\mathbb{R}^d$-valued standard Brownian motion $\{W_t\}_{t\geq0}=\{W^1_t,W^2_t,\cdots,W^d_t\}_{t\geq0}$ is defined. $\{\mathcal{F}_t\}_{t\geq0}$ is the natural filtration generated by $W(\cdot)$ which is augmented by all $\mathbb{P}$-null sets, and $T>0$ is a fixed finite time duration. $\mathbb{E}$ denotes the expectation with respect to the probability measure $\mathbb{P}$.
We will use the following notation. $L_{\mathcal{F}_T}^2(\Omega;\mathbb{R}^n)$ denotes the set of $\mathbb{R}^n$-valued, $\mathcal{F}_T$-measurable random vectors $\xi$ with $\mathbb{E}\big[|\xi|^2\big]<\infty$, $L^2_\mathcal{F}(0,T;\mathbb{R}^n)$ denotes the set of $\mathbb{R}^n$-valued, $\mathcal{F}_t$-adapted processes $f$ on $[0,T]$ with $\mathbb{E}\big[\int_0^T|f(t)|^2dt\big]<\infty$, $L^2_\mathcal{F}(0,T;\mathbb{R}^{n\times d})$ denotes the set of $n\times d$-matrix-valued, $\mathcal{F}_t$-adapted processes $\Phi$ on $[0,T]$ with $\mathbb{E}\big[\int_0^T|\Phi(t)|^2dt\big]=\mathbb{E}\big[\int_0^T\mbox{trace}[\Phi(t)^\top\Phi(t)]dt\big]<\infty$, and $L^2(0,T;\mathbb{R}^n)$ denotes the set of $\mathbb{R}^n$-valued functions $f$ on $[0,T]$ with $\int_0^T|f(t)|^2dt<\infty$.
We consider the state process $x^{u,w}:\Omega\times[0,T]\rightarrow\mathbb{R}^n$, which satisfies the linear SDE \begin{equation}\label{state equation} \left\{ \begin{aligned}
dx^{u,w}_t&=\big(A_tx^{u,w}_t+B^1_tu_t+B^2_tw_t\big)dt+\big(C_tx^{u,w}_t+D^1_tu_t+D^2_tw_t\big)dW_t,\ t\in[0,T],\\
x^{u,w}_0&=x. \end{aligned} \right. \end{equation} Here for simplicity, we denote $\big(C_tx^{u,w}_t+D^1_tu_t+D^2_tw_t\big)dW_t=\sum\limits_{j=1}^d\big(C_t^jx^{u,w}_t+D^{1j}_tu_t+D^{2j}_tw_t\big)dW_t^j$ with $A,B^1,B^2,C^j,D^{1j}$ and $D^{2j}$ being all bounded Borel measurable functions from $[0,T]$ to $\mathbb{R}^{n\times n},\mathbb{R}^{n\times k_1},\mathbb{R}^{n\times k_2},\mathbb{R}^{n\times n},\mathbb{R}^{n\times k_1}$ and $\mathbb{R}^{n\times k_2}$, respectively. Similar notations are used in the rest of this paper. In the above, $u:\Omega\times[0,T]\rightarrow\mathbb{R}^{k_1}$ is the follower's control process and $w:[0,T]\rightarrow\mathbb{R}^{k_2}$ is the leader's control function. Let $\mathcal{U}^1_{ad}=L^2_\mathcal{F}(0,T;\mathbb{R}^{k_1})$ and $\mathcal{U}^2_{ad}=L^2(0,T;\mathbb{R}^{k_2})$ be the {\it admissible control} sets of the follower and the leader, respectively. That is to say, the control process $u$ of the follower is taken from $\mathcal{U}^1_{ad}$ and the control function $w$ of the leader is taken from $\mathcal{U}^2_{ad}$.
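As an aside, the state equation can be simulated directly. The following Python sketch (scalar case $n=k_1=k_2=d=1$, with arbitrary constant coefficients and constant controls chosen purely for illustration, not taken from the paper) applies the Euler--Maruyama scheme:

```python
# Illustrative Euler--Maruyama simulation of the scalar state equation
#   dx = (A x + B1 u + B2 w) dt + (C x + D1 u + D2 w) dW.
# All coefficient values below are arbitrary sample choices.
import random

def simulate_state(x0, T=1.0, N=1000, A=-1.0, B1=0.5, B2=0.5,
                   C=0.2, D1=0.1, D2=0.1, u=1.0, w=1.0, seed=0):
    rng = random.Random(seed)
    dt = T / N
    x = x0
    for _ in range(N):
        dW = rng.gauss(0.0, 1.0) * dt ** 0.5  # Brownian increment
        x += (A * x + B1 * u + B2 * w) * dt + (C * x + D1 * u + D2 * w) * dW
    return x

x_T = simulate_state(1.0)
```

With the diffusion coefficients set to zero the scheme reduces to explicit Euler for the deterministic ODE, which gives a simple consistency check.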
For given initial value $x\in\mathbb{R}^n$ and $(u,w)\in\mathcal{U}^1_{ad}\times\mathcal{U}^2_{ad}$, it is classical that there exists a unique solution $x^{u,w}\in L^2_\mathcal{F}(0,T;\mathbb{R}^n)$ to (\ref{state equation}). Thus, we can define the cost functionals of the players as follows: \begin{equation}\label{cost functional-follower} \begin{aligned}
J_1(x;u,w)=\frac{1}{2}\mathbb{E}\left[\int_0^T\Big(\big\langle Q^1_tx^{u,w}_t,x^{u,w}_t\big\rangle+2\big\langle S^1_tx^{u,w}_t,u_t\big\rangle+\big\langle R^1_tu_t,u_t\big\rangle\Big)dt+\big\langle G^1x^{u,w}_T,x^{u,w}_T\big\rangle\right], \end{aligned} \end{equation} \begin{equation}\label{cost functional-leader} \begin{aligned}
J_2(x;u,w)=\frac{1}{2}\mathbb{E}\left[\int_0^T\Big(\big\langle Q^2_tx^{u,w}_t,x^{u,w}_t\big\rangle+2\big\langle S^2_tx^{u,w}_t,w_t\big\rangle+\big\langle R^2_tw_t,w_t\big\rangle\Big)dt+\big\langle G^2x^{u,w}_T,x^{u,w}_T\big\rangle\right], \end{aligned} \end{equation} where $Q^1,Q^2,S^1,S^2,R^1,R^2$ are bounded Borel measurable functions from $[0,T]$ to $\mathbb{S}^n,\mathbb{S}^n,\mathbb{R}^{k_1\times n},\\\mathbb{R}^{k_2\times n},\mathbb{S}^{k_1},\mathbb{S}^{k_2}$, respectively, and $G^i$ are $\mathbb{S}^n$-valued matrices for $i=1,2$.
We formulate the Stackelberg game in two steps. In the first step, for any chosen $w\in\mathcal{U}^2_{ad}$ and a fixed initial state $x\in\mathbb{R}^n$, the follower would like to choose a $u^*\in\mathcal{U}^1_{ad}$ such that $J_1(x;u^*,w)$ is the minimum of the cost functional $J_1(x;u,w)$ over $\mathcal{U}^1_{ad}$. More rigorously, the follower wants to find a map $\alpha^*:\mathcal{U}^2_{ad}\times\mathbb{R}^n\rightarrow\mathcal{U}^1_{ad}$, such that \begin{equation}\label{follower} \begin{aligned}
J_1(x;\alpha^*[w,x],w)=\min\limits_{u\in\mathcal{U}^1_{ad}}J_1(x;u,w),\mbox{\ for all }w\in\mathcal{U}^2_{ad}. \end{aligned} \end{equation}
In the second step, knowing that the follower would take $u^*\equiv\alpha^*[w,x]$, the leader wishes to choose some $w^*$ to minimize $J_2(x;\alpha^*[w,x],w)$ over $\mathcal{U}^2_{ad}$. That is to say, the leader wants to find a control function $w^*$ such that \begin{equation}\label{leader} \begin{aligned} J_2(x;\alpha^*[w^*,x],w^*)=\min\limits_{w\in\mathcal{U}^2_{ad}}J_2(x;\alpha^*[w,x],w). \end{aligned} \end{equation} If $(\alpha^*[\cdot],w^*)$ exists, we refer to it as an {\it open-loop Stackelberg equilibrium solution} to the above {\it LQ Stackelberg differential game with mixed deterministic and stochastic controls}. In this paper, we will make a great effort to find a state feedback representation for the open-loop Stackelberg equilibrium solution.
The Stackelberg differential game, also known as the leader-follower differential game, has attracted increasing research attention recently, since it has wide practical applications, especially in economics and finance. The earliest work on the game can be traced back to Stackelberg \cite{S52}, where the concept of a Stackelberg equilibrium solution was defined for economic markets in which some firms have power of domination over others. Bagchi and Ba\c{s}ar \cite{BB81} discussed an LQ stochastic Stackelberg differential game where the state and control variables do not enter the diffusion coefficient of the state equation. Yong \cite{Yong02} considered an LQ Stackelberg differential game in a rather general framework, with random coefficients, control-dependent diffusion, and a weight matrix for the controls in the cost functional that is not necessarily nonnegative definite. \O ksendal et al. \cite{OSU13} obtained a maximum principle for the Stackelberg differential game in the jump-diffusion case, and applied the result to a newsvendor problem. Bensoussan et al. \cite{BCS15} investigated several information structures for stochastic Stackelberg differential games, where the diffusion coefficient does not contain the control variables. Shi et al. \cite{SWX16} introduced a new explanation for the asymmetric information feature of Stackelberg differential games, and solved an LQ stochastic Stackelberg differential game with noisy observation, where not all the diffusion coefficients contain control variables. Shi et al. \cite{SWX17} studied an LQ stochastic Stackelberg differential game with asymmetric information, where the control variables enter both diffusion coefficients of the state equation. Xu and Zhang \cite{XZ16} and Xu et al. \cite{XSZ18} addressed Stackelberg differential games with time delay. Li and Yu \cite{LY18} applied FBSDEs with a multilevel self-similar domination-monotonicity structure to characterize the unique equilibrium of an LQ generalized Stackelberg game with hierarchy. 
Moon and Ba\c{s}ar \cite{MB18} investigated an LQ mean-field Stackelberg differential game with an adapted open-loop information structure for the leader, where there is only one leader but an arbitrarily large number of followers. See also Lin et al. \cite{LJZ19} and Wang et al. \cite{WWZ20} for recent developments on open-loop LQ Stackelberg games of mean-field type stochastic systems.
Recently, an interesting paper by Hu and Tang \cite{HT19} considered a mixed deterministic and random optimal control problem for a linear stochastic system with quadratic cost functional, with two controllers---one can choose only deterministic time functions and is called the deterministic controller, while the other can choose adapted random processes and is called the random controller. The optimal control is characterized via a system of fully coupled FBSDEs of mean-field type, whose solvability is proved by solutions to two (uncoupled) Riccati equations. Inspired by \cite{HT19}, in this paper we consider an LQ Stackelberg differential game with mixed deterministic and random controls, where the follower is a random controller and the leader is a deterministic controller. In practical applications, such as a Stackelberg-type financial market, a securities investor is the follower and the government, which makes macro policies, is the leader. The novelty and contribution of this paper can be summarized as follows.
\begin{itemize} \item The game problem is new. To the best of our knowledge, it is the first paper to consider mixed deterministic and random controls in the study of Stackelberg games. So this paper can be regarded as a continuation of \cite{HT19}, from control to game problems. \item The problem of the leader is related to a system of MF-FBSDEs, via a direct calculation of the derivative of the cost functional. This interesting feature is different from \cite{Yong02}. \item A feedback representation of the optimal control function of the leader, with respect to the expectation of the optimal state variable, is obtained by solutions to a system of two coupled Riccati equations and a two-point boundary value problem of ODEs. This is also different from \cite{Yong02}, where a dimensional-expansion technique is applied. \end{itemize}
The rest of this paper is organized as follows. In Section 2, the game problem is solved in two subsections. The problem of the follower is discussed in Subsection 2.1, and that of the leader is studied in Subsection 2.2. First, the optimal control process of the follower is obtained via the maximum principle for controlled SDEs; it is a linear functional of the optimal state variable and the control variable of the leader, via a classical Riccati equation. Then the optimal control function of the leader is obtained via a direct calculation of the derivative of the cost functional, using the solution to a system of MF-FBSDEs. It is represented as a functional of the expectation of the optimal state variable, together with the solutions to a two-point boundary value problem of ODEs, by a system consisting of two coupled Riccati equations. The solvability of this new system of Riccati equations is discussed. Finally, Section 3 gives some concluding remarks.
\section{Main Result}
We split this section into two subsections, to deal with the problems of the follower and the leader, respectively.
\subsection{Problem of the Follower}
For given control function $w\in\mathcal{U}^2_{ad}$, assume that $u^*$ is an optimal control process of the follower and the corresponding optimal state is $x^{u^*,w}$. Define the Hamiltonian function $H_1:[0,T]\times\mathbb{R}^n\times\mathbb{R}^{k_1}\times\mathbb{R}^{k_2}\times\mathbb{R}^n\times\mathbb{R}^{n\times d}\rightarrow\mathbb{R}$ of the follower as \begin{equation}\label{Hamiltonian-follower} \begin{aligned} H_1\big(t,x,u,w,q,k\big)&=\langle q,Ax+B^1u+B^2w\rangle+\langle k,Cx+D^1u+D^2w\rangle\\
&\quad-\frac{1}{2}\langle Q^1x,x\rangle-\langle S^1x,u\rangle-\frac{1}{2}\langle R^1u,u\rangle. \end{aligned} \end{equation} By the maximum principle (see, e.g., Chapter 6 of Yong and Zhou \cite{YZ99}), there exists a unique pair of processes $(q,k\equiv(k^1,k^2,\cdots,k^d))\in L^2_\mathcal{F}(0,T;\mathbb{R}^n)\times (L^2_\mathcal{F}(0,T;\mathbb{R}^n))^d$ satisfying the {\it backward SDE} (BSDE) \begin{equation}\label{adjoint equation-follower} \left\{ \begin{aligned} -dq_t=&\big[A_t^\top q_t+C_t^\top k_t-(S^1_t)^\top u_t-Q^1_tx^{u^*,w}_t\big]dt-k_tdW_t,\ t\in[0,T],\\
q_T=&-G^1x^{u^*,w}_T, \end{aligned} \right. \end{equation} and the optimality condition holds true \begin{equation}\label{optimality condition-follower} \begin{aligned} 0=R^1_tu^*_t+S^1_tx^{u^*,w}_t-(B^1_t)^\top q_t-(D^1_t)^\top k_t,\ t\in[0,T]. \end{aligned} \end{equation}
We wish to obtain a state feedback representation of $u^*$. Noticing the terminal condition of (\ref{adjoint equation-follower}) and the appearance of the control function $w$, we set \begin{equation}\label{supposed form of q} q_t=-P_tx^{u^*,w}_t-\varphi_t,\ t\in[0,T], \end{equation} for some differentiable functions $P$ and $\varphi$ from $[0,T]$ to $\mathbb{S}^n$ and $\mathbb{R}^n$, respectively, satisfying $P_T=G^1$ and $\varphi_T=0$.
Applying It\^{o}'s formula to (\ref{supposed form of q}), we have \begin{equation}\label{applying Ito's formula to q} \begin{aligned}
-dq_t&=\big(\dot{P}_tx^{u^*,w}_t+P_tA_tx^{u^*,w}_t+\dot{\varphi}_t+P_tB^1_tu^*_t+P_tB^2_tw_t\big)dt\\
&\quad+P_t\big(C_tx^{u^*,w}_t+D^1_tu^*_t+D^2_tw_t\big)dW_t. \end{aligned} \end{equation} Comparing the $dW_t$ term in (\ref{applying Ito's formula to q}) with that in (\ref{adjoint equation-follower}), we arrive at \begin{equation}\label{comparing dW} \begin{aligned} k_t=-P_t\big(C_tx^{u^*,w}_t+D^1_tu^*_t+D^2_tw_t\big),\ t\in[0,T]. \end{aligned} \end{equation} Plugging (\ref{supposed form of q}) and (\ref{comparing dW}) into optimality condition (\ref{optimality condition-follower}), and supposing that
\noindent{\bf (A2.1)}\quad{\it $R^1_t+(D^1_t)^\top P_tD^1_t$ is invertible, for all $t\in[0,T]$,}
\noindent we immediately arrive at \begin{equation}\label{optimal control-follower} \begin{aligned} u^*_t&=-\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}\Big\{\big[(B^1_t)^\top P_t+(D^1_t)^\top P_tC_t+S^1_t\big]x^{u^*,w}_t\\
&\qquad+(D^1_t)^\top P_tD^2_tw_t+(B^1_t)^\top\varphi_t\Big\},\ t\in[0,T]. \end{aligned} \end{equation} Comparing the $dt$ term in (\ref{applying Ito's formula to q}) with that in (\ref{adjoint equation-follower}), noting (\ref{supposed form of q}), (\ref{comparing dW}) and (\ref{optimal control-follower}), we can obtain that if \begin{equation}\label{Riccati equation-P} \left\{ \begin{aligned}
&\dot{P}_t+A_t^\top P_t+P_tA_t+C_t^\top P_tC_t+Q^1_t-\big[P_tB^1_t+C_t^\top P_tD^1_t+(S^1_t)^\top\big]\\
&\quad\times\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}\big[(B^1_t)^\top P_t+(D^1_t)^\top P_tC_t+S^1_t\big]=0,\ t\in[0,T],\\
&P_T=G^1, \end{aligned} \right. \end{equation} admits a unique differentiable solution $P\in\mathbb{S}^n$, then \begin{equation}\label{varphi-equation} \left\{\begin{aligned}
&\dot{\varphi}_t+\big[A^\top_t-(P_tB^1_t+C_t^\top P_tD^1_t+(S^1_t)^\top)\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(B^1_t)^\top\big]\varphi_t\\
&+\big[P_tB^2_t+C_t^\top P_tD^2_t-\big(P_tB^1_t+C_t^\top P_tD^1_t+(S^1_t)^\top\big)\\
&\quad\times\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(D^1_t)^\top P_tD^2_t\big]w_t=0,\ t\in[0,T],\\
&\varphi_T=0. \end{aligned}\right. \end{equation} For the solvability of the Riccati equation (\ref{Riccati equation-P}), we impose the following standard assumption:
\noindent{\bf (A2.2)}\quad{\it $R^1\gg0,\ G^1\geqslant0,\ Q^1-S^1(R^1)^{-1}(S^1)^\top\geqslant0$,}
\noindent Under this assumption, (\ref{Riccati equation-P}) admits a unique differentiable solution $P\geqslant0$ by Theorem 7.2, Chapter 6 of \cite{YZ99}. For given $w\in\mathcal{U}^2_{ad}$, the solvability of ODE (\ref{varphi-equation}) is obvious.
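As a purely illustrative aside (not part of the derivation), the backward solution of the Riccati equation (\ref{Riccati equation-P}) can be sketched numerically by integrating forward in the reversed time $s=T-t$ from the terminal condition $P_T=G^1$. The scalar data below are placeholders chosen so that the equation reduces to $\dot{P}=P^2-1$, whose solution is $P_t=\tanh(T-t)$; SciPy is assumed available.

```python
# Hedged numerical sketch: backward integration of the scalar Riccati
# equation dot(P) + 2AP + C^2 P + Q^1 - K^2 / (R^1 + (D^1)^2 P) = 0,
# K = PB^1 + CPD^1 + S^1, with terminal value P_T = G^1.
# All coefficients are illustrative placeholders, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0
A, B1, C, D1 = 0.0, 1.0, 0.0, 0.0
Q1, S1, R1, G1 = 1.0, 0.0, 1.0, 0.0

def riccati_rhs(s, P):
    # s = T - t, so dP/ds = -dP/dt; with this data, dP/dt = P^2 - 1.
    P = float(P[0])
    K = P * B1 + C * P * D1 + S1           # P B^1 + C P D^1 + S^1
    Rt = R1 + D1 * P * D1                  # R^1 + (D^1)^2 P, assumption (A2.1)
    dPdt = -(2 * A * P + C * P * C + Q1 - K * K / Rt)
    return [-dPdt]                         # backward in t = forward in s

sol = solve_ivp(riccati_rhs, [0.0, T], [G1], rtol=1e-8, atol=1e-10)
P0 = sol.y[0, -1]                          # the value P_0, i.e. P at t = 0
print(P0)                                  # close to tanh(1)
```

The computed $P_0$ matches the closed-form value $\tanh(T)$, consistent with the positivity guaranteed under {\bf (A2.2)}.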
Under {\bf (A2.2)}, the map $u\mapsto J_1(x;u,w)$ is uniformly convex; thus (\ref{optimal control-follower}) is also sufficient for $(u^*,x^{u^*,w})$ to be the unique optimal pair of the follower.
Now, inserting (\ref{optimal control-follower}) into the state equation of (\ref{state equation}), we have \begin{equation}\label{optimal state equation-follower} \left\{ \begin{aligned}
dx^{u^*,w}_t&=\Big\{\big[A_t-B^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}\big((B^1_t)^\top P_t+(D^1_t)^\top P_tC_t+S^1_t\big)\big]x^{u^*,w}_t\\
&\qquad+\big[B^2_t-B^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(D^1_t)^\top P_tD^2_t\big]w_t\\
&\qquad-B^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(B^1_t)^\top\varphi_t\Big\}dt\\
&\quad+\Big\{\big[C_t-D^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}\big((B^1_t)^\top P_t+(D^1_t)^\top P_tC_t+S^1_t\big)\big]x^{u^*,w}_t\\
&\qquad+\big[D^2_t-D^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(D^1_t)^\top P_tD^2_t\big]w_t\\
&\qquad-D^1_t\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-1}(B^1_t)^\top\varphi_t\Big\}dW_t,\ t\in[0,T],\\
x^{u^*,w}_0&=x, \end{aligned} \right. \end{equation} which admits a unique solution $x^{u^*,w}\in L^2_\mathcal{F}(0,T;\mathbb{R}^n)$, for given $w\in\mathcal{U}^2_{ad}$.
Moreover, we have the following result.
\noindent{\bf Theorem 2.1}\quad{\it Let {\bf (A2.1), (A2.2)} hold, and let $P\geqslant0$ satisfy (\ref{Riccati equation-P}). For a chosen control function $w\in\mathcal{U}^2_{ad}$ of the leader, there is a unique optimal control process $u^*\in\mathcal{U}^1_{ad}$ of the follower, whose state feedback representation is given by (\ref{optimal control-follower}), where $x^{u^*,w}\in L^2_\mathcal{F}(0,T;\mathbb{R}^n)$ is the optimal state satisfying (\ref{optimal state equation-follower}) and the differentiable function $\varphi$ satisfies (\ref{varphi-equation}). The optimal value is given by \begin{equation}\label{optimal value-follower} \begin{aligned} J_1(x;u^*,w)&=\frac{1}{2}\langle P_0x,x\rangle+\langle\varphi_0,x\rangle
+\int_0^T\Big(\big\langle(B^2_t)^\top\varphi_t,w_t\big\rangle+\big\langle(D^2_t)^\top P_tD^2_tw_t,w_t\big\rangle\\
&\quad-\big|\big(R^1_t+(D^1_t)^\top P_tD^1_t\big)^{-\frac{1}{2}}\big[(B^2_t)^\top\varphi_t+(D^2_t)^\top P_tD^2_tw_t\big]\big|^2\Big)dt. \end{aligned} \end{equation}} \begin{proof} We only need to prove (\ref{optimal value-follower}), which can be easily obtained by applying It\^{o}'s formula to $\langle Px^{u^*,w},x^{u^*,w}\rangle+\langle\varphi,x^{u^*,w}\rangle$, together with the completion-of-squares technique. We omit the details. \end{proof}
The results in this subsection are a special case of those in Section 2 of Yong \cite{Yong02}, but with the cross term. We present them here, with a refined derivation, to keep this paper self-contained.
\subsection{Problem of the Leader}
Since the leader knows that the follower will take his optimal control process $u^*\in\mathcal{U}^1_{ad}$ given by (\ref{optimal control-follower}), the state equation of the leader now reads \begin{equation}\label{state equation-leader} \left\{ \begin{aligned}
dx^w_t&=\big(\widetilde{A}_tx^w_t+\widetilde{B}^1_t\varphi_t+\widetilde{B}^2_tw_t\big)dt
+\big(\widetilde{C}_tx^w_t+\widetilde{D}^1_t\varphi_t+\widetilde{D}^2_tw_t\big)dW_t,\\
d\varphi_t&=-(\widetilde{A}_t^\top\varphi_t+\Gamma_tw_t)dt,\ t\in[0,T],\\
x^w_0&=x,\ \varphi_T=0, \end{aligned} \right. \end{equation} where we have denoted $x^w\equiv x^{u^*,w}$ and \begin{equation*} \left\{ \begin{aligned} \widetilde{R}^1&:=\widetilde{R}^1(P):=R^1+(D^1)^\top PD^1,\\
\widetilde{A}&:=\widetilde{A}(P):=A-B^1(\widetilde{R}^1)^{-1}\big[(B^1)^\top P+(D^1)^\top PC+S^1\big],\\ \widetilde{B}^1&:=\widetilde{B}^1(P):=-B^1(\widetilde{R}^1)^{-1}(B^1)^\top,\\ \widetilde{B}^2&:=\widetilde{B}^2(P):=B^2-B^1(\widetilde{R}^1)^{-1}(D^1)^\top PD^2,\\
\widetilde{C}&:=\widetilde{C}(P):=C-D^1(\widetilde{R}^1)^{-1}\big[(B^1)^\top P+(D^1)^\top PC+S^1\big],\\ \widetilde{D}^1&:=\widetilde{D}^1(P):=-D^1(\widetilde{R}^1)^{-1}(B^1)^\top,\\ \widetilde{D}^2&:=\widetilde{D}^2(P):=D^2-D^1(\widetilde{R}^1)^{-1}(D^1)^\top PD^2,\\
\Gamma&:=\Gamma(P):=PB^2+C^\top PD^2-\big[PB^1+C^\top PD^1+(S^1)^\top\big](\widetilde{R}^1)^{-1}(D^1)^\top PD^2. \end{aligned} \right. \end{equation*}
The problem of the leader is to choose an optimal control function $w^*\in\mathcal{U}^2_{ad}$ such that $$ J_2(x;u^*,w^*)=\min\limits_{w\in\mathcal{U}^2_{ad}}J_2(x;u^*,w). $$
We first have the following result.
\noindent{\bf Theorem 2.2}\quad{\it Suppose that $w^*$ is an optimal control function of the leader, and that the corresponding optimal state is $x^*\equiv x^{w^*}$, with $\varphi^*$ being the solution to (\ref{state equation-leader}). Then we have \begin{equation}\label{optimal control-leader} \begin{aligned} 0=R^2_tw^*_t+\big(\widetilde{B}^2_t\big)^\top\mathbb{E}y_t+\big(\widetilde{D}^2_t\big)^\top\mathbb{E}z_t
+S^2_t\mathbb{E}x^*_t+\Gamma_t^\top\mathbb{E}p_t,\ t\in[0,T], \end{aligned} \end{equation} where the triple of processes $(y,z,p)$, taking values in $\mathbb{R}^n\times\mathbb{R}^{n\times d}\times\mathbb{R}^n$, satisfies the FBSDE \begin{equation}\label{adjoint equation-leader} \left\{ \begin{aligned}
dp_t&=\big[\widetilde{A}^\top_tp_t+(\widetilde{B}^1_t)^\top y_t+(\widetilde{D}^1_t)^\top z_t\big]dt,\\
-dy_t&=\big[\widetilde{A}_t^\top y_t+\widetilde{C}_t^\top z_t+(S^2_t)^\top w^*_t+Q^2_tx^*_t\big]dt-z_tdW_t,\ t\in[0,T],\\
p_0&=0,\ y_T=G^2x^*_T. \end{aligned} \right. \end{equation} Moreover, if we assume that
\noindent{\bf (A2.3)}\quad{\it $G^2\geqslant0,\ Q^2-S^2(R^2)^{-1}(S^2)^\top\geqslant0,\ R^2\gg0$,}
\noindent then the above optimality condition becomes sufficient for the unique existence of the optimal control function $w^*$ of the leader.} \begin{proof} Without loss of generality, let $x\equiv0$, and consider the perturbed control function $w^*+\lambda w$ for $\lambda>0$ sufficiently small, with $w\in\mathcal{U}^2_{ad}$. Then it is easy to see from the linearity of (\ref{state equation-leader}) that the corresponding solution is $x^*+\lambda x^w$. We first have \begin{equation*} \begin{aligned} \widetilde{J}(\lambda)&:=J_2(0;u^*,w^*+\lambda w)\\ &=\frac{1}{2}\mathbb{E}\int_0^T\big[\big\langle Q^2_t(x^*_t+\lambda x^w_t),x^*_t+\lambda x^w_t\big\rangle+2\big\langle S^2_t(x^*_t+\lambda x^w_t),w^*_t+\lambda w_t\big\rangle\\ &\qquad+\big\langle R^2_t(w^*_t+\lambda w_t),w^*_t+\lambda w_t\big\rangle\big]dt+\frac{1}{2}\mathbb{E}\big\langle G^2(x^*_T+\lambda x^w_T),x^*_T+\lambda x^w_T\big\rangle. \end{aligned} \end{equation*} Hence \begin{equation*} \begin{aligned}
0=\frac{\partial\widetilde{J}(\lambda)}{\partial\lambda}\bigg|_{\lambda=0}
&=\mathbb{E}\int_0^T\big[\big\langle Q^2_tx^*_t,x^w_t\big\rangle+\big\langle S^2_tx^*_t,w_t\big\rangle+\big\langle S^2_tx^w_t,w^*_t\big\rangle\\
&\qquad+\big\langle R^2_tw^*_t,w_t\big\rangle\big]dt+\mathbb{E}\big\langle G^2x^*_T,x^w_T\big\rangle. \end{aligned} \end{equation*} Let the triple $(p,y,z)$ satisfy (\ref{adjoint equation-leader}). Then we have \begin{equation*} \begin{aligned} 0=\mathbb{E}\int_0^T\big[\langle Q^2_tx^*_t,x^w_t\rangle+\big\langle S^2_tx^*_t,w_t\big\rangle+\big\langle S^2_tx^w_t,w^*_t\big\rangle+\langle R^2_tw^*_t,w_t\rangle\big]dt+\mathbb{E}\langle y_T,x^w_T\rangle. \end{aligned} \end{equation*} Applying It\^{o}'s formula to $\langle x^w_t,y_t\rangle-\langle\varphi_t,p_t\rangle$, noticing (\ref{state equation-leader}) and (\ref{adjoint equation-leader}), we derive \begin{equation*} \begin{aligned} 0&=\mathbb{E}\int_0^T\big\langle R^2_tw^*_t+\big(\widetilde{B}^2_t\big)^\top y_t+\big(\widetilde{D}^2_t\big)^\top z_t+S^2_t x^*_t+\Gamma_t^\top p_t,w_t\big\rangle dt\\
&=\int_0^T\big\langle R^2_tw^*_t+\big(\widetilde{B}^2_t\big)^\top\mathbb{E}y_t+\big(\widetilde{D}^2_t\big)^\top\mathbb{E}z_t
+S^2_t\mathbb{E}x^*_t+\Gamma_t^\top\mathbb{E}p_t,w_t\big\rangle dt. \end{aligned} \end{equation*} This implies (\ref{optimal control-leader}). Further, if {\bf (A2.3)} holds, then the functional $w\mapsto J_2(x;u^*,w)$ is uniformly convex. Thus the necessary condition is also sufficient for the unique existence of $w^*$. See the remark after Theorem 2.2 in Yong \cite{Yong02} for more details. The proof is complete. \end{proof}
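The step from the necessary condition to sufficiency has a simple finite-dimensional analogue, sketched below for illustration only: for a uniformly convex quadratic cost $J(w)=\frac12 w^\top Mw+b^\top w$ with $M>0$ (the role played here by {\bf (A2.3)}), the stationarity condition $Mw^*+b=0$ characterizes the unique minimizer. All data in the sketch are illustrative placeholders.

```python
# Hedged finite-dimensional analogue of the perturbation argument:
# for a uniformly convex quadratic cost, the first-order condition is
# necessary AND sufficient for a unique minimizer.
import numpy as np

rng = np.random.default_rng(0)
n = 4
L = rng.standard_normal((n, n))
M = L @ L.T + n * np.eye(n)        # positive definite: uniform convexity
b = rng.standard_normal(n)

w_star = np.linalg.solve(M, -b)    # stationary point: M w* + b = 0

def J(w):
    return 0.5 * w @ M @ w + b @ w

# The Gateaux derivative at w* vanishes in an arbitrary direction w ...
w = rng.standard_normal(n)
eps = 1e-6
deriv = (J(w_star + eps * w) - J(w_star - eps * w)) / (2 * eps)

# ... and any genuine perturbation strictly increases the cost.
gap = J(w_star + w) - J(w_star)
print(deriv, gap)
```

Here `gap` equals $\frac12 w^\top Mw>0$, mirroring why the optimality condition in Theorem 2.2 pins down $w^*$ uniquely under {\bf (A2.3)}.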
Next, putting (\ref{state equation-leader}), (\ref{optimal control-leader}) and (\ref{adjoint equation-leader}) together, associated with the optimal triple $(w^*,x^*,\varphi^*)$, we get \begin{equation}\label{system of MF-FBSDE} \left\{ \begin{aligned} dx^*_t&=\big(\widetilde{A}_tx^*_t+\widetilde{B}^1_t\varphi_t^*+\widetilde{B}^2_tw_t^*\big)dt
+\big(\widetilde{C}_tx^*_t+\widetilde{D}^1_t\varphi_t^*+\widetilde{D}^2_tw_t^*\big)dW_t,\\
d{\varphi}_t^*&=-(\widetilde{A}_t^\top\varphi_t^*+\Gamma_tw_t^*)dt,\\
dp_t&=\big[\widetilde{A}^\top_tp_t+(\widetilde{B}^1_t)^\top y_t+(\widetilde{D}^1_t)^\top z_t\big]dt,\\
-dy_t&=\big[\widetilde{A}_t^\top y_t+\widetilde{C}_t^\top z_t+(S^2_t)^\top w^*_t+Q^2_tx^*_t\big]dt-z_tdW_t,\\
x^*_0&=x,\ \varphi_T^*=0, \ p_0=0,\ y_T=G^2x^*_T,\\
0&=R^2_tw^*_t+\big(\widetilde{B}^2_t\big)^\top\mathbb{E}y_t+\big(\widetilde{D}^2_t\big)^\top\mathbb{E}z_t
+S^2_t\mathbb{E}x^*_t+\Gamma_t^\top\mathbb{E}p_t,\ t\in[0,T], \end{aligned} \right. \end{equation} which is a system of coupled MF-FBSDEs. Note that it is different from that in Yong \cite{Yong02}. We need to decouple (\ref{system of MF-FBSDE}) and study its solvability via some Riccati equations. To this end, for the optimal control function $w^*$ in (\ref{optimal control-leader}), we expect a state feedback representation of the form \begin{equation}\label{supposed form of y} y_t=P^1_tx^*_t+P^2_t(x^*_t-\mathbb{E}x^*_t)+\phi_t, \end{equation} for some differentiable functions $P^1,P^2$ and $\phi$ from $[0,T]$ to $\mathbb{S}^n,\mathbb{R}^{n\times n}$ and $\mathbb{R}^n$, respectively, satisfying $P^1_T=G^2,P^2_T=0$ and $\phi_T=0$.
Noticing that \begin{equation}\label{dEx} \left\{ \begin{aligned} d\mathbb{E}x^*_t&=\big(\widetilde{A}_t\mathbb{E}x^*_t+\widetilde{B}^1_t\varphi_t^*+\widetilde{B}^2_tw_t^*\big)dt,\ t\in[0,T],\\
\mathbb{E}x^*_0&=x, \end{aligned} \right. \end{equation} and applying It\^{o}'s formula to (\ref{supposed form of y}), we obtain \begin{equation}\label{Ito's formula} \begin{aligned}
dy_t=&\big[\dot{\phi}_t+\big(\dot{P}^1_t+P^1_t\widetilde{A}_t\big)x^*_t+\big(\dot{P}^2_t+P^2_t\widetilde{A}_t\big)(x^*_t-\mathbb{E}x^*_t)
+P^1_t\widetilde{B}^1_t\varphi_t^*+P^1_t\widetilde{B}^2_tw_t^*\big]dt\\
&+\big[(P^1_t+P^2_t)\widetilde{C}_tx^*_t+(P^1_t+P^2_t)\widetilde{D}^1_t\varphi^*_t+(P^1_t+P^2_t)\widetilde{D}^2_tw^*_t\big]dW_t\\
=&-\big[\widetilde{A}_t^\top P^1_tx^*_t+\widetilde{A}_t^\top P^2_t(x^*_t-\mathbb{E}x^*_t)+\widetilde{A}_t^\top\phi_t+\widetilde{C}_t^\top z_t
+(S^2_t)^\top w^*_t+Q^2_tx^*_t\big]dt+z_tdW_t. \end{aligned} \end{equation} Thus \begin{equation}\label{z} \begin{aligned} z_t=(P^1_t+P^2_t)\widetilde{C}_tx^*_t+(P^1_t+P^2_t)\widetilde{D}^1_t\varphi^*_t+(P^1_t+P^2_t)\widetilde{D}^2_tw^*_t,\ t\in[0, T]. \end{aligned} \end{equation} Plugging (\ref{supposed form of y}), (\ref{z}) into (\ref{optimal control-leader}), and supposing that
\noindent {\bf (A2.4)}\quad{\it $\widetilde{R}^2_t:=\widetilde{R}^2_t(P_t,P^1_t,P^2_t):=R^2_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^2_t$ is invertible, for all $t\in[0,T]$,}
\noindent we get \begin{equation}\label{optimal control-leader-feedback} \begin{aligned} w^*_t&=-(\widetilde{R}^2_t)^{-1}\Big\{\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t\big]\mathbb{E}x^*_t\\
&\qquad\qquad\qquad+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t\varphi^*_t+\Gamma_t^\top\mathbb{E}p_t+(\widetilde{B}^2_t)^\top\phi_t\Big\}. \end{aligned} \end{equation} Inserting (\ref{optimal control-leader-feedback}) into (\ref{z}), we have \begin{equation}\label{zz} \begin{aligned} z_t&=(P^1_t+P^2_t)\widetilde{C}_tx^*_t
-(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t\big]\mathbb{E}x^*_t\\
&\quad+\big[(P^1_t+P^2_t)\widetilde{D}^1_t-(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t\big]\varphi^*_t\\
&\quad-(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t-(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t. \end{aligned} \end{equation} Comparing $dt$ terms in the fourth equation in (\ref{system of MF-FBSDE}) and (\ref{Ito's formula}) and substituting (\ref{optimal control-leader-feedback}), (\ref{zz}) into them, we obtain \begin{equation}\label{system of Riccati equations} \left\{ \begin{aligned}
&0=\dot{P}^1_t+P^1_t\widetilde{A}_t+\widetilde{A}_t^\top P^1_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{C}_t-\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^2_t+(S^2_t)^\top\big]\\
&\qquad\times(\widetilde{R}^2_t)^{-1}\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t\big]+Q^2_t,\ P^1_T=G^2,\\
&0=\dot{P}^2_t+P^2_t\widetilde{A}_t+\widetilde{A}_t^\top P^2_t+\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^2_t+(S^2_t)^\top\big]\\
&\qquad\times(\widetilde{R}^2_t)^{-1}\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t\big],\ P^2_T=0, \end{aligned} \right. \end{equation} and \begin{equation}\label{phi} \left\{ \begin{aligned}
&0=\dot{\phi}_t+\Big\{\widetilde{A}_t^\top-\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^2_t+(S^2_t)^\top\big]
(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\Big\}\phi_t+\Big\{P^1_t\widetilde{B}^1_t\\
&\qquad+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^1_t
-\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^2_t+(S^2_t)^\top\big](\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top\\
&\qquad\times(P^1_t+P^2_t)\widetilde{D}^1_t\Big\}\varphi^*_t-\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^2_t+(S^2_t)^\top\big]
(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t,\\
&\phi_T=0. \end{aligned} \right. \end{equation}
Note that system (\ref{system of Riccati equations}) consists of two coupled Riccati equations; it is entirely new, and its solvability is of independent interest. In fact, adding the two equations in (\ref{system of Riccati equations}), it is obvious that the sum $\mathcal{P}:=P^1+P^2\in\mathbb{R}^{n\times n}$ uniquely satisfies the ODE \begin{equation}\label{P1+P2} \begin{aligned}
0=\dot{\mathcal{P}}_t+\mathcal{P}_t\widetilde{A}_t+\widetilde{A}_t^\top\mathcal{P}_t+\widetilde{C}_t^\top\mathcal{P}_t\widetilde{C}_t+Q^2_t,\ \mathcal{P}_T=G^2. \end{aligned} \end{equation} Thus (\ref{system of Riccati equations}) becomes \begin{equation}\label{system of Riccati equations-new} \left\{ \begin{aligned}
&0=\dot{P}^1_t+P^1_t\widetilde{A}_t+\widetilde{A}_t^\top P^1_t+\widetilde{C}_t^\top\mathcal{P}_t\widetilde{C}_t
-\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top\mathcal{P}_t\widetilde{D}^2_t+(S^2_t)^\top\big]\\
&\qquad\times(R^2_t+(\widetilde{D}^2_t)^\top\mathcal{P}_t\widetilde{D}^2_t)^{-1}
\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top\mathcal{P}_t\widetilde{C}_t+S^2_t\big]+Q^2_t,\ P^1_T=G^2,\\
&0=\dot{P}^2_t+P^2_t\widetilde{A}_t+\widetilde{A}_t^\top P^2_t+\big[P^1_t\widetilde{B}^2_t+\widetilde{C}_t^\top\mathcal{P}_t\widetilde{D}^2_t+(S^2_t)^\top\big]\\
&\qquad\times(R^2_t+(\widetilde{D}^2_t)^\top\mathcal{P}_t\widetilde{D}^2_t)^{-1}
\big[(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top\mathcal{P}_t\widetilde{C}_t+S^2_t\big],\ P^2_T=0, \end{aligned} \right. \end{equation} which is now decoupled, since $\mathcal{P}$ is already known from (\ref{P1+P2}). Let $$ \widetilde{Q}^2_t:=Q^2_t+\widetilde{C}_t^\top\mathcal{P}_t\widetilde{C}_t,\quad \widetilde{S}^2_t:=S^2_t+(\widetilde{D}^2_t)^\top\mathcal{P}_t\widetilde{C}_t,\quad \forall t\in[0,T]. $$ Then the Riccati equation of $P^1$ can be written as \begin{equation}\label{P1} \left\{ \begin{aligned}
&0=\dot{P}^1_t+P^1_t\widetilde{A}_t+\widetilde{A}_t^\top P^1_t-\big[P^1_t\widetilde{B}^2_t+(\widetilde{S}^2_t)^\top\big](\widetilde{R}^2_t)^{-1}
\big[(\widetilde{B}^2_t)^\top P^1_t+\widetilde{S}^2_t\big]+\widetilde{Q}^2_t,\\
&P^1_T=G^2. \end{aligned} \right. \end{equation} If we assume that
\noindent {\bf (A2.5)}\quad $\widetilde{Q}^2-\widetilde{S}^2(\widetilde{R}^2)^{-1}(\widetilde{S}^2)^\top\geqslant0$,
\noindent then, by {\bf (A2.3), (A2.4)} and {\bf (A2.5)}, (\ref{P1}) admits a unique solution $P^1\geqslant0$. Consequently, there also exists a unique solution $P^2=\mathcal{P}-P^1\in\mathbb{R}^{n\times n}$.
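The decoupling above suggests a simple numerical scheme, sketched here for illustration only: integrate the linear ODE (\ref{P1+P2}) for $\mathcal{P}$ and the Riccati equation (\ref{P1}) for $P^1$ backward together, then set $P^2=\mathcal{P}-P^1$. The scalar data below are placeholders (chosen so that $\mathcal{P}_t=T-t$ and $P^1_t=\tanh(T-t)$); SciPy is assumed available.

```python
# Hedged scalar sketch of the decoupling: solve for Pc = P^1 + P^2 via the
# linear ODE, for P^1 via the Riccati equation, then recover P^2 = Pc - P^1.
# Illustrative data: Atil = Ctil = S^2 = Dtil2 = 0, Btil2 = Q^2 = R^2 = 1, G^2 = 0.
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0
Atil, Ctil, Btil2, Dtil2 = 0.0, 0.0, 1.0, 0.0
Q2, S2, R2, G2 = 1.0, 0.0, 1.0, 0.0

def rhs(s, v):                       # s = T - t (backward integration)
    Pc, P1 = v
    dPc_dt = -(2 * Atil * Pc + Ctil**2 * Pc + Q2)      # linear ODE for Pc
    Qt = Q2 + Ctil**2 * Pc           # widetilde{Q}^2
    St = S2 + Dtil2 * Pc * Ctil      # widetilde{S}^2
    Rt = R2 + Dtil2**2 * Pc          # widetilde{R}^2, assumption (A2.4)
    K = P1 * Btil2 + St
    dP1_dt = -(2 * Atil * P1 + Qt - K**2 / Rt)         # Riccati for P^1
    return [-dPc_dt, -dP1_dt]

sol = solve_ivp(rhs, [0.0, T], [G2, G2], rtol=1e-8, atol=1e-10)
Pc0, P10 = sol.y[:, -1]
P20 = Pc0 - P10                      # P^2 = (P^1 + P^2) - P^1
print(Pc0, P10, P20)
```

With these placeholder data the exact values are $\mathcal{P}_0=T$, $P^1_0=\tanh(T)$ and $P^2_0=T-\tanh(T)$, so $P^2$ is indeed not symmetric-positive in general while $P^1\geqslant0$.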
We now discuss the solvability of equation (\ref{phi}) for the function $\phi$. In fact, with some computation, we can obtain a two-point boundary value problem of coupled linear ODEs for $(\mathbb{E}x^*,\mathbb{E}p,\varphi^*,\phi)$: \begin{equation}\label{Ex,Ep,varphi,phi} \left\{ \begin{aligned} \frac{d\mathbb{E}x^*_t}{dt}&=\big[\widetilde{A}_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t\big]\mathbb{E}x^*_t
-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t
+\overline{B}^2_t\varphi^*_t,\\ \frac{d\mathbb{E}p_t}{dt}&=\big(\widetilde{A}_t^\top-\overline{\Gamma}_t^\top\big)\mathbb{E}p_t+(\overline{B}^1_t)^\top\mathbb{E}x^*_t+(\overline{B}^2_t)^\top\phi_t
+\overline{D}^1_t\varphi^*_t,\\ \frac{d\varphi^*_t}{dt}&=\big(\overline{\Gamma}_t-\widetilde{A}_t^\top\big)\varphi^*_t
+\Gamma_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t+\Gamma_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t
+\Gamma_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t\mathbb{E}x^*_t,\\ \frac{d\phi_t}{dt}&=-\big[\widetilde{A}_t^\top-(\overline{S}^2_t)^\top(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\big]\phi_t
-\overline{B}^1_t\varphi^*_t+(\overline{S}^2_t)^\top(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t,\quad t\in[0,T],\\
\mathbb{E}x^*_0&=x,\ \mathbb{E}p_0=0,\ \varphi^*_T=0,\ \phi_T=0, \end{aligned} \right. \end{equation} where, for simplicity, we denote \begin{equation*} \left\{ \begin{aligned} \overline{S}^2_t&:=(\widetilde{B}^2_t)^\top P^1_t+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t,\\ \overline{\Gamma}_t&:=\Gamma_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t,\\ \overline{B}^1_t&:=P^1_t\widetilde{B}^1_t+\widetilde{C}_t^\top(P^1_t+P^2_t)\widetilde{D}^1_t
-(\overline{S}^2_t)^\top(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t,\\ \overline{B}^2_t&:=\widetilde{B}^1_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t,\\ \overline{D}^1_t&:=(\widetilde{D}^1_t)^\top\big[(P^1_t+P^2_t)\widetilde{D}^1_t-(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top
(P^1_t+P^2_t)\widetilde{D}^1_t\big]. \end{aligned} \right. \end{equation*}
We define \begin{equation*} X:=\begin{pmatrix}\mathbb{E}x^*\\\mathbb{E}p\end{pmatrix},\quad Y:=\begin{pmatrix}\varphi^*\\\phi\end{pmatrix}, \end{equation*} \begin{equation*} \begin{aligned} &\mathbf{A}_t:=\begin{pmatrix}\widetilde{A}_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t&-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\\ (\overline{B}^1_t)^\top&\widetilde{A}_t^\top-\overline{\Gamma}_t^\top\end{pmatrix},\quad \mathbf{B}_t:=\begin{pmatrix}\overline{B}^2_t&-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\\\overline{D}^1_t&(\overline{B}^2_t)^\top\end{pmatrix},\\ &\widehat{\mathbf{A}}_t:=\begin{pmatrix}\Gamma_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t&\Gamma_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\\ 0&(\overline{S}^2_t)^\top(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\end{pmatrix},\quad \widehat{\mathbf{B}}_t:=\begin{pmatrix}\overline{\Gamma}_t-\widetilde{A}_t^\top&\Gamma_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\\ -\overline{B}^1_t&-\widetilde{A}_t^\top+(\overline{S}^2_t)^\top(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\end{pmatrix}, \end{aligned} \end{equation*} and denote \begin{equation*} \mathcal{A}_t:=\begin{pmatrix}\mathbf{A}_t&\mathbf{B}_t\\\widehat{\mathbf{A}}_t&\widehat{\mathbf{B}}_t\end{pmatrix}, \end{equation*} thus (\ref{Ex,Ep,varphi,phi}) can be written as \begin{equation}\label{X,Y} \left\{ \begin{aligned} &d\begin{pmatrix}X_t\\Y_t\end{pmatrix}=\mathcal{A}_t\begin{pmatrix}X_t\\Y_t\end{pmatrix}dt,\quad t\in[0,T],\\ &X_0=(x^\top\quad 0)^\top,\ Y_T=(0\quad 0)^\top. \end{aligned} \right. \end{equation} From the theory by Yong \cite{Yong99}, we know that (\ref{X,Y}) admits a unique solution $(X,Y)\in L^2(0,T;\mathbb{R}^{2n})\\\times L^2(0,T;\mathbb{R}^{2n})$ if and only if \begin{equation}\label{assumption of Yong 1999} \mbox{det}\left\{(0\quad I)e^{\mathcal{A}_tt}\begin{pmatrix}0\\I\end{pmatrix}\right\}>0,\quad \forall t\in[0,T]. 
\end{equation} In this case, (\ref{Ex,Ep,varphi,phi}) admits a unique solution $(\mathbb{E}x^*,\mathbb{E}p,\varphi^*,\phi)\in L^2(0,T;\mathbb{R}^n)\times L^2(0,T;\mathbb{R}^n)\times L^2(0,T;\mathbb{R}^n)\times L^2(0,T;\mathbb{R}^n)$. For some recent progress on two-point boundary value problems associated with ODEs, we refer to Liu and Wu \cite{LW18}.
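For constant coefficients, the condition (\ref{assumption of Yong 1999}) and the solution of (\ref{X,Y}) can be sketched numerically with the matrix exponential; one checks the nonsingularity of the lower-right block of $e^{\mathcal{A}t}$ and then eliminates the unknown $Y_0$ from the terminal condition $Y_T=0$. The matrix below is an illustrative placeholder, not the $\mathcal{A}$ assembled from the paper's coefficients; SciPy is assumed available.

```python
# Hedged sketch (constant-coefficient case only): verify a Yong-type
# determinant condition on [0, T] and solve the linear two-point BVP
# d(X, Y)/dt = script_A (X, Y), X_0 given, Y_T = 0, via expm.
import numpy as np
from scipy.linalg import expm

n2 = 2                                   # dimension of X (and of Y), i.e. 2n
script_A = np.zeros((2 * n2, 2 * n2))
script_A[:n2, n2:] = np.eye(n2)          # placeholder: X driven by Y, Y autonomous

T, x0 = 1.0, np.array([1.0, 0.0])

# Determinant condition: lower-right block of e^{script_A t} nonsingular on [0, T].
for t in np.linspace(0.0, T, 21):
    E = expm(script_A * t)
    assert abs(np.linalg.det(E[n2:, n2:])) > 1e-12

# Eliminate Y_0: (X_T, Y_T) = e^{script_A T}(X_0, Y_0) and Y_T = 0 give
# E21 X_0 + E22 Y_0 = 0, hence Y_0 = -E22^{-1} E21 X_0.
E = expm(script_A * T)
E21, E22 = E[n2:, :n2], E[n2:, n2:]
y0 = -np.linalg.solve(E22, E21 @ x0)

XT_YT = E @ np.concatenate([x0, y0])
print(y0, XT_YT[n2:])                    # the Y-component at time T should vanish
```

For this placeholder $\mathcal{A}$ the block $E_{22}$ is the identity, so the condition holds trivially and $Y\equiv0$; the same elimination applies whenever the determinant condition is satisfied.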
We summarize the above process in the following theorem.
\noindent{\bf Theorem 2.3}\quad{\it Let {\bf (A2.1)$\sim$(A2.5)} and (\ref{assumption of Yong 1999}) hold, $(P^1,P^2)$ satisfy (\ref{system of Riccati equations}), and $(\mathbb{E}x^*,\mathbb{E}p,\varphi^*,\phi)$ satisfy (\ref{Ex,Ep,varphi,phi}). Then $w^*$ given by (\ref{optimal control-leader-feedback}) is the state feedback representation of the unique optimal control of the leader. Let $x^*$ satisfy \begin{equation}\label{x} \left\{ \begin{aligned} dx^*_t&=\Big\{\widetilde{A}_tx^*_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t\mathbb{E}x^*_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t\\
&\qquad+\big[\widetilde{B}^1_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)
\widetilde{D}^1_t\big]\varphi^*_t-\widetilde{B}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t\Big\}dt\\
&\quad+\Big\{\widetilde{C}_tx^*_t-\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t\mathbb{E}x^*_t-\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t\\
&\qquad+\big[\widetilde{D}^1_t-\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)
\widetilde{D}^1_t\big]\varphi^*_t-\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t\Big\}dW_t,\\
x^*_0&=x, \end{aligned} \right. \end{equation} $p$ satisfy \begin{equation}\label{p} \left\{ \begin{aligned} dp_t&=\Big\{\widetilde{A}_t^\top p_t-(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t\\
&\qquad+\big[(\widetilde{B}^1_t)^\top(P^1_t+P^2_t)+(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{C}_t\big]x^*_t\\
&\qquad-\big[(\widetilde{B}^1_t)^\top P^2_t+(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}\overline{S}^2_t\big]\mathbb{E}x^*_t\\
&\qquad+\big[(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t-(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}
(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t\big]\varphi^*_t\\
&\qquad+\big[\widetilde{B}^1_t-(\widetilde{D}^1_t)^\top(P^1_t+P^2_t)\widetilde{D}^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\big]\phi_t\Big\}dt,\ t\in[0,T],\\
p_0&=0, \end{aligned} \right. \end{equation} and define $y$ and $z$ by (\ref{supposed form of y}) and (\ref{zz}), respectively; then $(x^*,\varphi^*,p,y,z)$ is the solution to the system of MF-FBSDEs (\ref{system of MF-FBSDE}).}
Finally, from (\ref{optimal control-follower}) and (\ref{optimal control-leader-feedback}), we obtain \begin{equation}\label{optimal control-follower-feedback} \begin{aligned} u^*_t&=-(\widetilde{R}^1_t)^{-1}\big[(B^1_t)^\top P_t+(D^1_t)^\top P_tC_t+S^1_t\big]x^*_t\\
&\quad+(\widetilde{R}^1_t)^{-1}(D^1_t)^\top P_tD^2_t(\widetilde{R}^2_t)^{-1}\big[(\widetilde{B}^2_t)^\top P^1_t
+(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{C}_t+S^2_t\big]\mathbb{E}x^*_t\\
&\quad+(\widetilde{R}^1_t)^{-1}\big[(D^1_t)^\top P_tD^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{D}^2_t)^\top(P^1_t+P^2_t)\widetilde{D}^1_t-(B^1_t)^\top\big]\varphi^*_t\\
&\quad+(\widetilde{R}^1_t)^{-1}(D^1_t)^\top P_tD^2_t(\widetilde{R}^2_t)^{-1}\Gamma_t^\top\mathbb{E}p_t
+(\widetilde{R}^1_t)^{-1}(D^1_t)^\top P_tD^2_t(\widetilde{R}^2_t)^{-1}(\widetilde{B}^2_t)^\top\phi_t,\ t\in[0,T], \end{aligned} \end{equation} where $x^*$ is given by the MF-SDE (\ref{x}). We have thus obtained the state feedback representation of the open-loop Stackelberg equilibrium solution $(u^*,w^*)$.
\section{Concluding Remarks}
To conclude this paper, let us give some remarks. We have considered a new kind of LQ Stackelberg differential game with mixed deterministic and stochastic controls. The open-loop Stackelberg equilibrium solution is represented in feedback form of the state variable and its expectation, via the solutions to some new Riccati equations. Though the framework is a special case of that of Yong \cite{Yong02}, some new ideas and interesting phenomena come out. We point out that it is possible to relax the assumptions in Section 2 of this paper. A possible extension of the results to an infinite time horizon with constant coefficients is an interesting topic; in this case, some stabilizability problems need to be investigated first, and the differential Riccati equations become algebraic Riccati equations. The practical application of the theoretical results to financial markets of Stackelberg type is another challenging problem. We will consider these problems in the near future.
\end{document}
\begin{document}
\title{A Groupoid Proof of The Lefschetz fixed point formula}
\author{Zelin Yi}
\date{}
\maketitle
\begin{abstract}
The purpose of this article is to present a ``groupoid proof'' of the Lefschetz fixed point formula for elliptic complexes. We shall define a ``relative version'' of the tangent groupoid, describe the corresponding pseudodifferential calculi and explain the relation with the Lefschetz fixed point formula.
\end{abstract}
\section{Introduction}
The notion of tangent groupoids was invented by Alain Connes \cite{Connes94} to simplify the proof of the Atiyah--Singer index theorem (see also \cite{Higson10}). In \cite{HigsonYi19}, the authors construct a ``rescaled vector bundle'' over the tangent groupoid whose space of smooth sections supports a continuous family of supertraces with Getzler rescaling built in. The existence of such a continuous family of supertraces can be thought of as a version of the index theorem. In contrast, the Lefschetz fixed point formula requires a ``rescaling'' in a different direction. In this paper, instead of building a ``rescaled bundle'', we shall modify the tangent groupoid construction to encode the appropriate ``Lefschetz type rescaling''.
This paper grew out of an effort to apply \cite{HigsonYi19} to Bismut's hypoelliptic Laplacian \cite{Bismut11}, where both Getzler rescaling and ``Lefschetz type rescaling'' are needed, the details of which we plan to pursue elsewhere.
To fix notation, let us quickly go through the basic setting of the Lefschetz fixed point formula \cite{AtiyahBott66}. Let $V$ be a compact manifold and $E_0,E_1,\cdots ,E_N$ a sequence of vector bundles over $V$. Let
\begin{equation}\label{elliptic-complex}
0\to C^\infty(V,E_0) \xrightarrow{d_0} C^\infty(V,E_1) \xrightarrow{d_1} \cdots \xrightarrow{d_{N-1}} C^\infty(V,E_N) \to 0
\end{equation}
be a sequence of differential operators such that $d_{i+1}d_i=0$. By standard terminology, such a sequence is called a \emph{complex}. An \emph{elliptic complex} over $V$ is a complex whose sequence of principal symbols
\[
0\to C^\infty(T^\ast V,\pi^\ast E_0) \xrightarrow{\sigma(d_0)} C^\infty(T^\ast V,\pi^\ast E_1) \xrightarrow{\sigma(d_1)} \cdots \xrightarrow{\sigma(d_{N-1})} C^\infty(T^\ast V,\pi^\ast E_N) \to 0
\]
is exact outside the zero section of $T^\ast V$. Let
\begin{equation}\label{eq-vector-bundle-E}
E=\bigoplus_i E_i
\end{equation}
be the direct sum vector bundle and
\begin{equation}\label{eq-sum-diff}
d=\bigoplus_i d_i: C^\infty(V,E)\to C^\infty(V,E)
\end{equation}
be the corresponding direct sum of differential operators. It is convenient, for our purpose, to consider the $\mathbb{Z}/2\mathbb{Z}$-grading on $E=E^{\text{ev}}\oplus E^{\text{odd}}$ where $E^{\text{ev}}=\bigoplus_{i} E_{2i}$ and $E^{\text{odd}}=\bigoplus_{i} E_{2i+1}$. If $T: C^\infty(V,E) \to C^\infty(V,E)$ is an even smoothing operator, meaning that $T$ preserves the decomposition $C^\infty(V,E)=C^\infty(V,E^{\text{ev}})\oplus C^\infty(V,E^{\text{odd}})$ and its Schwartz kernel belongs to $C^\infty(V\times V, E\boxtimes E^\ast)$, its supertrace $\operatorname{Str}(T)$ is defined as
\begin{equation}
\operatorname{Str}(T) = \operatorname{Tr}\Big(T: C^\infty(V,E^{\text{ev}}) \to C^\infty(V,E^{\text{ev}})\Big)-\operatorname{Tr}\Big(T: C^\infty(V,E^{\text{odd}})\to C^\infty(V,E^{\text{odd}})\Big)
\end{equation}
where $\operatorname{Tr}$ is the usual trace on smoothing operators (which are trace class).
Let $\varphi:V \to V$ be a smooth map with only simple fixed points, meaning that $\varphi_{\ast,m}-1$ is invertible (equivalently, $\det(\varphi_{\ast,m}-1)\neq 0$) at every fixed point $m\in V$. Assume in addition that there is a bundle map $\zeta: \varphi^\ast E\to E$, which induces a map $\zeta: C^\infty(V,\varphi^\ast E)\to C^\infty(V,E)$. Consider the composition
\[
T: C^\infty(V,E) \xrightarrow{\varphi^\ast} C^\infty(V,\varphi^\ast E)\xrightarrow{\zeta} C^\infty(V,E).
\]
If $T=\zeta\circ \varphi^\ast$ commutes with the differential $d$ defined in \eqref{eq-sum-diff}, it descends to an endomorphism of the cohomology
\[
T: H^\ast(E,d) \to H^\ast(E,d).
\]
The Lefschetz fixed point formula computes the alternating sum
\begin{equation}\label{Lefschetz-number}
\sum_{i=0}^{N} (-1)^{i}\operatorname{Tr}\left(T: H^i(E,d) \to H^i(E,d)\right).
\end{equation}
The Hodge decomposition theorem (see \cite[Chapter~1]{Gilkey84}) identifies the cohomology $H^\ast (E,d)$ with the kernel of the Laplacian $\Delta=(d+d^\ast)^2$ as vector spaces. In this light, the alternating sum \eqref{Lefschetz-number} can also be expressed as the supertrace
\begin{equation}\label{Lefschetz-supertrace}
\operatorname{Str}\left(Te^{-t\Delta}: C^\infty(V,E)\to C^\infty(V,E)\right).
\end{equation}
Thanks to the fact that $T$ commutes with $d$, they can be simultaneously diagonalized. Moreover, since each nonzero eigenspace of $\Delta$ in $C^\infty(V,E^{\text{ev}})$ is identified with the corresponding one in $C^\infty(V,E^{\text{odd}})$ by $d+d^\ast$, only the zero eigenvalue contributes to \eqref{Lefschetz-supertrace}. The supertrace \eqref{Lefschetz-supertrace} is therefore independent of $t$.
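This cancellation is transparent in finite dimensions: for a two-term complex $0\to E^{\text{ev}}\xrightarrow{D} E^{\text{odd}}\to 0$, the nonzero spectra of $D^TD$ and $DD^T$ coincide, so the supertrace of the heat operator equals $\dim\ker D-\dim\ker D^T$ for every $t$. A numerical sketch in Python (illustrative only; all names hypothetical):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

def expm(A, terms=80):
    # Matrix exponential via its power series (fine for these tiny matrices).
    n = len(A)
    out = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, A)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

# d: E_ev -> E_odd given by a rank-one matrix D; Delta_ev = D^T D and
# Delta_odd = D D^T then share their nonzero eigenvalues.
D = [[1.0, 2.0], [0.0, 0.0]]
lap_ev = matmul(transpose(D), D)
lap_odd = matmul(D, transpose(D))

def supertrace_heat(t):
    scale = lambda A: [[-t * v for v in row] for row in A]
    return trace(expm(scale(lap_ev))) - trace(expm(scale(lap_odd)))

for t in (0.1, 0.5, 1.0):
    print(supertrace_heat(t))   # vanishes (up to rounding) for every t
```

Here $\dim\ker D=\dim\ker D^T=1$, so the supertrace is $0$ for all $t$, in agreement with the eigenvalue-pairing argument above.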
The role of deformation spaces and groupoids is to help us justify and compute the limit of \eqref{Lefschetz-supertrace} as $t\to 0$. More precisely, we shall build a groupoid whose space of smooth functions is equipped with a continuous family of functionals $\operatorname{Str}_t$, parametrized by $t\in [0,1]$, and whose corresponding set of pseudodifferential operators contains $Te^{-t\Delta}$. We recover \eqref{Lefschetz-supertrace} by applying $\operatorname{Str}_t$ to the integral kernel of $Te^{-t\Delta}$, and compute its value by setting $t=0$.
\begin{theorem}[The Lefschetz fixed point formula]\label{thm-lef-fix}
\[
\operatorname{Str}\left(Te^{-t\Delta}: C^\infty(V,E)\to C^\infty(V,E)\right)=\sum_{m\in M}\left|\det(\varphi_{\ast,m}-1)\right|^{-1}\operatorname{str}(\zeta_m),
\]
where $M$ is the fixed point set of $\varphi$ and $\operatorname{str}$ denotes the supertrace on $\End(E_m)$.
\end{theorem}
The correspondence between abstract groupoids and pseudodifferential calculus will be reviewed in Section~\ref{groupoids-pseudo}. The groupoid that serves our purpose will be built in Section~\ref{deformation-space} and Section~\ref{coefficient-bundle}. The final calculation of \eqref{Lefschetz-supertrace} will be performed in Section~\ref{final-calculation}.
\section{Groupoids and pseudodifferential operators}\label{groupoids-pseudo}
In this section, we quickly review the basics of groupoids and the correspondence between Lie groupoids and pseudodifferential operators established in \cite{NistorWeinsteinXu99}. For our purpose, we mainly focus on smoothing operators.
A Lie groupoid, usually denoted as $G\rightrightarrows G^{(0)}$, consists of the following data:
\begin{enumerate}
\item Two smooth manifolds $G=G^{(1)}$ and $G^{(0)}$ together with two submersions $s,r: G\to G^{(0)}$, called the source and range maps of the groupoid.
Roughly speaking, $G^{(0)}$ can be thought of as a set of points and $G^{(1)}$ as a set of arrows between those points. The source and range maps send an arrow to its initial and terminal points respectively;
\item An associative multiplication map $m: G^{(2)} \to G$ where $G^{(2)}=\{(\gamma,\eta)\in G\times G \mid s(\gamma)=r(\eta)\}$.
Given two arrows, if the initial point of the first arrow coincides with the terminal point of the second one, they can be ``connected'' to produce a new arrow. This type of ``connection'' determines the multiplication map $m$; we shall write $\gamma \cdot \eta$ for $m(\gamma, \eta)$;
\item A smooth map $\varepsilon: G^{(0)}\to G^{(1)}$.
Given a point $x\in G^{(0)}$, $\varepsilon(x)$ is to be viewed as the unit arrow (a loop) from $x$ to $x$;
\item A smooth map $\iota:G\to G$ such that
\[
\iota(\gamma)\cdot \gamma = \varepsilon(s(\gamma)), \quad \gamma\cdot\iota(\gamma)=\varepsilon(r(\gamma)) \quad \forall \gamma\in G.
\]
Given an arrow, the map $\iota$ reverses its direction. We shall write $\gamma^{-1}$ for $\iota(\gamma)$.
\end{enumerate}
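As a toy illustration, the pair groupoid of a finite set (see Example~\ref{ex-pair-groupoid} below) realizes all four pieces of data, and the axioms become one-line checks. A Python sketch (all names hypothetical):

```python
from itertools import product

# Pair groupoid G = X x X over X: the arrow (y, x) goes from x to y,
# matching the convention that r is the first and s the second projection.
X = [0, 1, 2]
G = list(product(X, X))

s = lambda g: g[1]            # source map
r = lambda g: g[0]            # range map
eps = lambda x: (x, x)        # unit arrow at x
inv = lambda g: (g[1], g[0])  # the inverse reverses the arrow

def mul(g, h):
    """Composable when s(g) = r(h); then (z, y) . (y, x) = (z, x)."""
    assert s(g) == r(h)
    return (g[0], h[1])

# Check the groupoid axioms on every composable pair.
for g, h in product(G, G):
    if s(g) == r(h):
        gh = mul(g, h)
        assert r(gh) == r(g) and s(gh) == s(h)
assert all(mul(inv(g), g) == eps(s(g)) and mul(g, inv(g)) == eps(r(g))
           for g in G)
print("groupoid axioms verified")
```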
Let $G\rightrightarrows G^{(0)}$ be a Lie groupoid and $E$ a vector bundle over the unit space $G^{(0)}$. Let $G_x=s^{-1}(x)$ and $G^y=r^{-1}(y)$ be the source and range fibers over $x,y\in G^{(0)}$. Define $\mathbb{E}$ to be the tensor product vector bundle $r^\ast E \otimes s^\ast E^\ast \to G$.
\begin{definition}
$\left(P_x,x\in G^{(0)}\right)$ is said to be \emph{a family of pseudodifferential operators} on $\mathbb{E}\to G$ if $P_x$ is a pseudodifferential operator acting on $C^\infty_c(G_x,r^\ast E)$ for each $x\in G^{(0)}$.
\end{definition}
Each $g\in G$ defines a right translation $U_g: C^\infty(G_{s(g)},r^\ast E)\to C^\infty(G_{r(g)},r^\ast E)$ given by $(U_gf)(h)=f(hg)$.
\begin{definition}\label{def-equi-family}
$\left(P_x,x\in G^{(0)}\right)$ is called an \emph{equivariant} family of pseudodifferential operators if
\begin{equation}\label{eq-equivariance}
U_gP_{s(g)} = P_{r(g)}U_g
\end{equation}
for all $g\in G$.
\end{definition}
In the following discussion, let us assume that $\left(P_x,x\in G^{(0)}\right)$ consists of smoothing operators; that is, the Schwartz kernel $k_x$ of $P_x$ belongs to $C^\infty(G_x\times G_x,r^\ast E \boxtimes r^\ast E^\ast)$ for each $x\in G^{(0)}$. The importance of equivariance is that it allows us to reduce the Schwartz kernel. Indeed, in terms of Schwartz kernels, equation \eqref{eq-equivariance} reads
\[
k_{r(g)}(h^\prime, h) = k_{s(g)}(h^\prime g, hg)
\]
which is then equivalent to
\[
k_{s(g)}(g^\prime, g) = k_{r(g)}(g^\prime g^{-1},r(g)).
\]
\begin{definition}
Let $\left(P_x,x\in G^{(0)}\right)$ be an equivariant family of smoothing operators. Its \emph{reduced kernel} $k_P$, which is a (not necessarily smooth) section of $\mathbb{E}\to G$, is defined to be
\[
k_P(g) = k_{s(g)}(g,s(g))
\]
where $k_x$ is the Schwartz kernel of $P_x$.
\end{definition}
According to the definition, $k_P(g) = k_{s(g)}(g,s(g))\in E_{r(g)}\otimes E^\ast_{s(g)}$, so $k_P$ is indeed a section of $\mathbb{E}\to G$.
\begin{definition}\label{def-smooth-family}
The family $\left(P_x,x\in G^{(0)}\right)$ is said to be \emph{smooth} if its reduced kernel $k_P$ is a smooth section.
\end{definition}
\begin{remark}
In \cite{NistorWeinsteinXu99}, in order to discuss the composition of pseudodifferential operators, the authors need certain conditions on supports. Since the composition formula is not necessary in this paper, we are able to drop the support conditions and consider a larger class of pseudodifferential operators. \end{remark}
\begin{example}\label{ex-pair-groupoid}
Let $V$ be a smooth manifold and $G^{(1)}=V\times V\rightrightarrows V=G^{(0)}$ the pair groupoid, where the range and source maps are the projections onto the first and second variables respectively. The source fiber over $v\in V$ is $G_v=V\times \{v\}$. A family of pseudodifferential operators $\left(P_v,v\in V\right)$ on $G$ is simply a family of pseudodifferential operators $\{P_v\}_{v\in V}$ on $V$ parametrized by $v\in V$. If $\left(P_v,v\in V\right)$ is equivariant in the sense of Definition~\ref{def-equi-family}, then \eqref{eq-equivariance} reads
\[
P_v U_{(v,v^\prime)} = U_{(v,v^\prime)}P_{v^\prime}.
\]
Notice that $U_{(v,v^\prime)}$ identifies $G_v$ with $G_{v^\prime}$, so the above equation simply means $P_v=P_{v^\prime}$. In other words, an equivariant family in this context is a constant family. If $P_v$ is a smoothing operator, the reduced kernel $k_P$ is precisely the kernel of $P_v$, which belongs to $C^\infty(V\times V)$.
\end{example}
\section{Deformation space}\label{deformation-space}
In this section, we briefly review the deformation to the normal cone construction. For a more detailed account see \cite{Mohsen19Witten,Debord17blowup}.
Given the embedding $M\subseteq V$, the associated deformation to the normal cone $\mathbb{N}_V M$ is a smooth manifold whose underlying set is
\[
\mathbb{N}_V M= N_V M \sqcup V\times (0,1].
\]
Here $N_V M = (TV|_M)/TM$ denotes the normal bundle of $M$ in $V$. Let $V\supseteq U \xrightarrow{\varphi} \mathbb{R}^n=\mathbb{R}^p\times \mathbb{R}^q$ be a local coordinate chart of $V$ such that $M\cap U = \varphi^{-1}\left(\varphi(U) \cap \mathbb{R}^p\times \{0\}\right)$. Then $\mathbb{N}_V M \supseteq \mathbb{N}_U (M\cap U) \xrightarrow{\phi} \mathbb{R}^{n+1}$ serves as a local coordinate chart of the deformation space, with $\phi$ given by
\begin{equation}\label{smooth-structure}
\begin{split}
(v,t) &\mapsto \left(\varphi_p(v),\frac{1}{t}\varphi_q(v),t\right);\\
(X_m,m) &\mapsto (\varphi_p(m),\varphi_{q,\ast}(X_m),0),
\end{split}
\end{equation}
where $\varphi_p$ and $\varphi_q$ are the first $p$ and the last $q$ components of $\varphi$, respectively, and $X_m\in N_VM$ is a normal vector at $m\in M$.
Equivalently, the smooth structure is determined by declaring the following functions to be smooth:
\begin{enumerate}
\item If $f$ is a smooth function on $V$, then the assignment $\mathbb{N}_V M \to \mathbb{R}$
\begin{align*}
(v,t) &\mapsto f(v) \\
X_m &\mapsto f(m)
\end{align*}
where $X_m$ is a normal vector at $m\in M$, is a smooth function on $\mathbb{N}_V M$.
\item If $f$ is a smooth function on $V$ that vanishes to order $r$ on $M$, then the assignment $\mathbb{N}_V M \to \mathbb{R}$
\begin{align*}
(v,t) &\mapsto \frac{1}{t^r}f(v) \\
X_m &\mapsto \frac{1}{r!}X_m^r(f)
\end{align*}
is a smooth function on $\mathbb{N}_V M$.
\end{enumerate}
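As a concrete instance of the second condition, take $V=\mathbb{R}^2$, $M=\{y=0\}$ and $f(x,y)=(1+x)y^3$, which vanishes to order $r=3$ on $M$: approaching the normal vector $Y\partial_y$ at $(x,0)$ along $y=tY$, the rescaled function $t^{-3}f$ takes the constant value $\frac{1}{3!}\,\partial_y^3 f\cdot Y^3=(1+x)Y^3$. A quick numerical check in Python (names hypothetical):

```python
# f(x, y) = (1 + x) * y**3 vanishes to order 3 on M = {y = 0} in V = R^2.
f = lambda x, y: (1 + x) * y**3

def rescaled(x, Y, t, r=3):
    """Value of t^(-r) f at the point of V reached by moving from (x, 0)
    in the normal direction Y, i.e. along y = t*Y."""
    return f(x, t * Y) / t**r

x, Y = 0.5, 2.0
# Boundary value predicted by the smooth structure: (1/3!) * d^3f/dy^3 * Y^3,
# and d^3/dy^3 of (1+x) y^3 is 6(1+x), so the limit is (1+x) Y^3 = 12.
boundary = (1 + x) * Y**3
for t in (1.0, 0.1, 0.001):
    print(rescaled(x, Y, t))   # ~ 12.0, independent of t in this example
```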
The deformation to the normal cone construction is functorial in the following sense. Given a commutative diagram
\begin{equation*}
\xymatrix{
V^\prime \ar[r] & V \\
M^\prime \ar[r] \ar[u] & M\ar[u]
}
\end{equation*}
where the vertical maps are inclusions of submanifolds and the horizontal maps are any smooth maps, there is an induced map between the deformation spaces $\mathbb{N}_{V^\prime} M^\prime \to \mathbb{N}_V M$. Moreover, if the horizontal maps in the above diagram are submersions, then the induced map between deformation spaces is also a submersion.
The following result will be important to us.
\begin{proposition}\label{Integral-deformation-space}
Fix a smooth measure $\mu_V$ on $V$; it induces smooth measures $\mu_m$ on $T_m V/T_m M$ for each $m\in M$ and a smooth measure $\mu_M$ on $M$. Let $f\in C^\infty(\mathbb{N}_V M)$ and assume that
\[
t^{\operatorname{dim} M-\operatorname{dim} V}\int_V f(v,t)\,d\mu_V(v)
\]
is uniformly bounded with respect to $t\in (0,1]$. Then it converges, as $t\to 0$, to
\[
\int_M \int_{T_m V/T_m M} f(X_m)d\mu_m(X_m)d\mu_M(m)<\infty.
\]
\end{proposition}
\begin{proof}
Let $U$ be an open subset of $V$. According to the smooth structure \eqref{smooth-structure} of the deformation space, $\mathbb{N}_U (M\cap U)$ is a local coordinate chart of $\mathbb{N}_V M$. Without loss of generality, we may assume that $f$ is supported inside the coordinate chart $\mathbb{N}_U (M\cap U)$. Then the result follows from a local calculation:
\begin{align*}
t^{\operatorname{dim} M-\operatorname{dim} V}\int_V f(v,t) d\mu_V(v) &= t^{-q}\int_{\mathbb{R}^n} f(v_1,\cdots,v_{p},tv_{p+1},\cdots,tv_{p+q},t) d\mu(v) \\
&=\int_{\mathbb{R}^n} f(v_1,\cdots,v_{p+q},t) d\mu(v) \\
&\to \int_{\mathbb{R}^n} f(v_1,\cdots,v_{p+q},0) d\mu(v).
\end{align*}
\end{proof}
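The local calculation can also be checked numerically, say for $V=\mathbb{R}^2$ with $M$ the $x$-axis ($p=q=1$) and a function of the deformation-space form $f(v_1,v_2,t)=F(v_1,v_2/t)$ with $F$ a Gaussian, so that $t^{-1}\int_V f\,d\mu_V$ should equal $\int F$ for every $t$. A Python sketch using midpoint-rule integration (illustrative only; names hypothetical):

```python
import math

# V = R^2, M = x-axis. In the chart (v, t) -> (v1, v2/t, t), a smooth function
# on the deformation space is an ordinary function F of the chart variables;
# take F(a, b) = exp(-a^2 - b^2), i.e. f(v1, v2, t) = F(v1, v2/t).
F = lambda a, b: math.exp(-a * a - b * b)

def riemann(fun, lo, hi, n=400):
    h = (hi - lo) / n
    return h * sum(fun(lo + (i + 0.5) * h) for i in range(n))

def lhs(t, n=400, L=8.0):
    # t^(dim M - dim V) * integral of f over V; here dim M - dim V = -1.
    inner = lambda v1: riemann(lambda v2: F(v1, v2 / t), -L * t, L * t, n)
    return riemann(inner, -L, L, n) / t

limit = riemann(lambda a: riemann(lambda b: F(a, b), -8.0, 8.0), -8.0, 8.0)
for t in (1.0, 0.1, 0.01):
    print(lhs(t), limit)   # both ~ pi, uniformly in t
```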
Consider the diagonal embedding $M\hookrightarrow V\times V$ and its associated deformation to the normal cone $\mathbb{N}_{V^2} M$. Thanks to the functoriality of the deformation construction, $\mathbb{N}_{V^2} M$ is a Lie groupoid with the unit space being $\mathbb{N}_V M$:
\[
\mathbb{N}_{V^2} M \rightrightarrows \mathbb{N}_VM.
\] The range and source maps are induced from the commutative diagrams
\begin{align*}
\xymatrix{
V\times V \ar[r]^{\pi_1} & V\\
M\ar[r] \ar[u] & M \ar[u]
}
&
\,
&
\xymatrix{
V\times V \ar[r]^{\pi_2} & V\\
M\ar[r] \ar[u] & M \ar[u]
}
\end{align*}
where in both diagrams the left vertical maps are the diagonal embeddings, the right vertical maps are the embeddings of $M$ into $V$, the lower horizontal maps are the identity map and $\pi_1, \pi_2$ are the projections onto the first and second factors respectively.
To emphasize the groupoid structure and to hint at the similarity with the tangent groupoid, we shall write $\mathbb{T}_M V$ for $\mathbb{N}_{V^2} M$ and call it \emph{the relative tangent groupoid}.
\begin{definition}\label{def-traces}
Let $k$ be a smooth function on the relative tangent groupoid $\mathbb{T}_M V$. For $t\neq 0$, set
\[
\operatorname{Tr}_t(k)=\int_V k(v,v,t)\, d\mu_V(v),
\]
and define
\[
\operatorname{Tr}_0(k)=\int_M\int_{T_m V/T_m M} k(X_m,X_m)\,d\mu_m(X_m)\, d\mu_M(m).
\]
\end{definition}
Let $\varphi: V\to V$ be a smooth map with only simple fixed points and let $M$ be its fixed point set; since $V$ is compact and simple fixed points are isolated, $M$ is finite. It is straightforward to see that
\[
\mathbb{T}_M V = \bigsqcup_{m\in M} \left(T_m V\oplus T_m V\right) \sqcup V\times V\times (0,1].
\]
The source map sends $(v_1,v_2,t)$ to $(v_2,t)$ when $t\neq 0$ and sends $(X_m,Y_m)\in T_mV\oplus T_m V$ to $Y_m$. The range map sends $(v_1,v_2,t)$ to $(v_1,t)$ when $t\neq 0$ and sends $(X_m,Y_m)\in T_mV\oplus T_m V$ to $X_m$.
The following proposition is a direct consequence of Proposition~\ref{Integral-deformation-space}.
\begin{proposition}\label{prop-cont-traces}
Let $n=\operatorname{dim} V$ and let $k\in C^\infty(\mathbb{T}_MV)$ be such that $$f(t)=t^{-n}\operatorname{Tr}_t(k)$$ is uniformly bounded with respect to $t\in (0,1]$. Then $f(t)$ converges to $\operatorname{Tr}_0(k)$ as $t\to 0$.
\end{proposition}
If $k\in C^\infty(\mathbb{T}_M V)$, then $k_\varphi(x,y)=k(\varphi(x),y)$ is still a smooth function on the relative tangent groupoid $\mathbb{T}_M V$. Its value at $(X,Y)\in T_mV\oplus T_mV$ is
\[
k_\varphi(X,Y) = k(\varphi_\ast(X),Y)
\]
where $\varphi_{\ast,m}: T_mV \to T_mV$ is the differential of $\varphi$ at $m\in M$. Assume, in addition, that $k$ satisfies
\begin{equation}
k(X_m,Y_m)=k(X_m-Y_m,0)
\end{equation}
which says that the pseudodifferential operator of which $k$ is the Schwartz kernel is translation invariant. Then
\begin{equation}
\begin{split}
\operatorname{Tr}_0(k_\varphi)&=\sum_{m\in M} \int_{T_m V} k((\varphi_\ast-1)X_m,0)d\mu_m(X_m) \\
&= \sum_{m\in M}\int_{T_m V} k(X_m,0)\left|\det(\varphi_{\ast,m}-1)\right|^{-1}d\mu_m(X_m)\\
&= \sum_{m\in M}\left|\det(\varphi_{\ast,m}-1)\right|^{-1}\int_{T_m V}k(X_m,0)d\mu_m(X_m).
\end{split}
\end{equation}
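The change of variables behind the second equality, $\int k((\varphi_\ast-1)X)\,dX=|\det(\varphi_\ast-1)|^{-1}\int k(X)\,dX$, can be checked numerically in one dimension with a Gaussian kernel (illustrative only; names hypothetical):

```python
import math

A = 3.0                        # phi_* at the fixed point; A - 1 is invertible
k = lambda X: math.exp(-X * X)

def riemann(fun, lo, hi, n=200000):
    # Midpoint rule; the Gaussian decays fast, so [-30, 30] suffices.
    h = (hi - lo) / n
    return h * sum(fun(lo + (i + 0.5) * h) for i in range(n))

left = riemann(lambda X: k((A - 1.0) * X), -30.0, 30.0)
right = riemann(k, -30.0, 30.0) / abs(A - 1.0)
print(left, right)             # both ~ sqrt(pi)/|A - 1| ~ 0.8862
```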
\section{The coefficient bundle}\label{coefficient-bundle}
In this section, we build a vector bundle over the relative tangent groupoid $\mathbb{T}_M V$ to account for the vector bundles appearing in the elliptic complex \eqref{elliptic-complex}.
For the deformation space $\mathbb{N}_V M$, there is a canonical map $\mathbb{N}_V M \to V$ which sends $(v,t)\in V\times (0,1]$ to $v$ and $X_m\in T_mV/T_mM$ to $m\in M\subset V$. Let $F$ be the pullback along this canonical map of the vector bundle $E$ defined in \eqref{eq-vector-bundle-E}, and let $\mathbb{E}$ be the tensor product $r^\ast F \otimes s^\ast F^\ast \to \mathbb{T}_M V$. It is easy to check that
\[
\mathbb{E}_{(v_1,v_2,t)} \cong E_{v_1}\otimes E^\ast_{v_2}, \qquad \text{and}\qquad \mathbb{E}_{(X_m,Y_m)}\cong \End(E_m)
\] where $(v_1,v_2,t)\in V\times V\times (0,1]$ and $(X_m,Y_m)\in T_mV \times T_m V$.
The following definition and proposition are the supertrace versions of Definition~\ref{def-traces} and Proposition~\ref{prop-cont-traces}:
\begin{definition}
Let $k$ be a smooth section of $\mathbb{E} \to \mathbb{T}_M V$. For $t\neq 0$, set
\[
\operatorname{Str}_t(k)=\int_V \operatorname{str}\left(k(v,v,t)\right)\, d\mu_V(v),
\]
and define
\[
\operatorname{Str}_0(k)=\sum_{m\in M}\int_{T_m V} \operatorname{str}\left(k(X_m,X_m)\right)\,d\mu_m(X_m)
\]
where $ \operatorname{str}: \End(E_v)\to \mathbb{C}$ is the supertrace which reflects the $\mathbb{Z}/2\mathbb{Z}$-grading of $E_v$.
\end{definition}
\begin{proposition}\label{prop-cont-supertraces}
Let $n=\operatorname{dim} V$ and let $k\in C^\infty(\mathbb{T}_MV,\mathbb{E})$ be such that $$f(t)=t^{-n}\operatorname{Str}_t(k)$$ is uniformly bounded with respect to $t\in (0,1]$. Then $f(t)$ converges to $\operatorname{Str}_0(k)$ as $t\to 0$.
\end{proposition}
Now suppose that $\Delta=(d+d^\ast)^2: C^\infty(V,E) \to C^\infty(V,E)$ is an elliptic differential operator of order $2s$ with principal symbol
\begin{equation}\label{principal-symbol}
\sigma(\Delta)(v,\xi) = \sum_{|\alpha|=2s}a_\alpha(v) \xi^\alpha
\end{equation}
where the $a_\alpha(v)$ are positive definite matrices in $\End(E_v)$.
It is well known that the heat operator $e^{-t^{2s}\Delta}$ is a smoothing operator acting on $C^\infty_c(V,E)$. Let $X$ denote the coordinates on the tangent space $T_vV$; then the operator
\[
\exp\left({-\sum_{|\alpha|=2s}a_\alpha(v) \frac{\partial^\alpha}{\partial X^\alpha}}\right)
\]
is a smoothing operator acting on $C^\infty_c(T_vV, E_v)$. Let $\left(Q_x,x\in G^{(0)}\right)$ be the family of smoothing operators on the relative tangent groupoid $\mathbb{T}_M V$ defined by
\begin{equation}\label{eq-family-heat-kernel}
Q_x=
\begin{cases}
\exp\left({-t^{2s}\Delta}\right) & x=(v,t)\in V\times (0,1] \\
\exp\left({-\sum_{|\alpha|=2s}a_\alpha(m) \frac{\partial^\alpha}{\partial X^\alpha}}\right) & x=X_m\in T_m V.
\end{cases}
\end{equation}
\begin{proposition}\label{prop-smooth-family-diff}
The family $\left(Q_x,x\in G^{(0)}\right)$ is equivariant and smooth in the sense of Definitions~\ref{def-equi-family} and \ref{def-smooth-family}.
\end{proposition}
\begin{proof}
The equivariance can be shown as in Example~\ref{ex-pair-groupoid}.
The smoothness can be checked locally. Indeed, pick a local coordinate chart $V\supset U\xrightarrow{\varphi} \mathbb{R}^n$ such that $M\cap U$ contains only a single point $m\in M$. Then $\mathbb{T}_{M\cap U} U\subset \mathbb{T}_M V$ forms a local coordinate chart whose diffeomorphism onto an open subset of Euclidean space is given by
\begin{equation}\label{eq-rel-tangent-local-coordinate}
\mathbb{T}_{M\cap U} U \to \mathbb{R}^{2n+1}:
\begin{cases}
(u_1,u_2,t) \mapsto \left(\frac{\varphi(u_1)-\varphi(m)}{t}, \frac{\varphi(u_2)-\varphi(m)}{t}, t\right);\\
(X_m, Y_m,0) \mapsto \left(\varphi_\ast(X_m),\varphi_\ast(Y_m),0\right).
\end{cases}
\end{equation}
By the definition of the principal symbol \eqref{principal-symbol}, $\Delta$ has the following local expression on $U$:
\[
\Delta= -\sum_{|\alpha|=2s}a_\alpha(u) \frac{\partial^\alpha}{\partial u^\alpha}+ \text{lower order terms}.
\]
Under the local coordinate chart \eqref{eq-rel-tangent-local-coordinate}, viewed as an operator on the source fiber $(\mathbb{T}_M V)_{(v,t)}$, $t^{2s}\Delta$ has the following local expression:
\[
t^{2s}\Delta= -\sum_{|\alpha|=2s}a_\alpha(tu+m) \frac{\partial^\alpha}{\partial u^\alpha}+ t\cdot \text{lower order terms}.
\]
The Schwartz kernel of
\[
\exp\left( -\sum_{|\alpha|=2s}a_\alpha(tu+m) \frac{\partial^\alpha}{\partial u^\alpha}+ t\cdot \text{lower order terms}\right)
\]
is smooth in all variables, in particular in $t\in [0,1]$, and its value at $t=0$ is the Schwartz kernel of
\[
\exp\left(-\sum_{|\alpha|=2s}a_\alpha(m) \frac{\partial^\alpha}{\partial X^\alpha}\right).
\]
\end{proof}
\section{Lefschetz fixed point formula}\label{final-calculation}
In this section we apply the machinery developed above to compute the limit of the supertrace \eqref{Lefschetz-supertrace} as $t\to 0$.
Let $k_P\in C^\infty(\mathbb{T}_M V, \mathbb{E})$ be the reduced kernel of the family $\left(Q_x,x\in G^{(0)}\right)$ defined in \eqref{eq-family-heat-kernel}. Since \eqref{Lefschetz-supertrace} is independent of $t$, we are free to replace $t$ by $t^{2s}$ there. Then we have
\begin{multline}\label{lines-supertraces}
\operatorname{Str}\left(Te^{-t\Delta}: C^\infty(V,E)\to C^\infty(V,E)\right)
= \int_V \operatorname{str}\left(\zeta \cdot k_P(\varphi(v),v,t)\right)t^{-n}\,d\mu_V(v) \\ = t^{-n}\operatorname{Str}_t\Big((\zeta\cdot k_P)_\varphi\Big)
\end{multline}
where $(\zeta\cdot k_P)_\varphi$ is the smooth section of the bundle $\mathbb{E}\to \mathbb{T}_M V$ given by
\[
\begin{cases}
(\zeta\cdot k_P)_\varphi(v_1,v_2,t) = \zeta \cdot k_P(\varphi(v_1),v_2,t)\\
(\zeta\cdot k_P)_\varphi(X_m,Y_m) = \zeta_m \cdot k_P(\varphi_{\ast,m} X_m,Y_m).
\end{cases}
\]
By Proposition~\ref{prop-cont-supertraces}, \eqref{lines-supertraces} converges to
\begin{align*}
\operatorname{Str}_0 \Big( (\zeta\cdot k_P)_\varphi \Big)&= \sum_{m\in M}\int_{T_m V} \operatorname{str}\left(\zeta_m\cdot k_P(\varphi_{\ast,m} X_m,X_m)\right)d\mu_m(X_m) \\
&=\sum_{m\in M}\left|\det(\varphi_{\ast,m}-1)\right|^{-1}\operatorname{str}\left(\zeta_m\cdot \int_{T_mV}k_P(X_m,0)\,d\mu_m(X_m)\right).
\end{align*}
By Proposition~\ref{prop-smooth-family-diff}, $k_P(X_m,0)$ is the heat kernel of $-\sum_{|\alpha|=2s}a_\alpha(m) \frac{\partial^\alpha}{\partial X^\alpha}$.
\begin{lemma}\label{lem-integral-heat-kernel}
Let $\alpha=(\alpha_1,\alpha_2,\dots,\alpha_n)$ be a multi-index such that $|\alpha|=2s$ is even. Let
\[
K_0=-A \frac{\partial^\alpha}{\partial X^\alpha}
\]
be a differential operator on $\mathbb{R}^n$ whose coefficient $A$ is a positive definite matrix, and let $k_0$ be the heat kernel of $K_0$. Then
\[
\int_{\mathbb{R}^n} k_0(X,0)dX = 1.
\]
\end{lemma}
\begin{proof}
The Fourier transform $\mathscr{F}$ turns the heat equation
\begin{equation}\label{eq-heat-eq}
\frac{\partial u}{\partial t} +A \frac{\partial^\alpha u}{\partial X^\alpha} = 0
\end{equation}
into the ordinary differential equation
\[
\frac{\partial}{\partial t}\mathscr{F}(u)+A\xi^\alpha \mathscr{F}(u)=0.
\]
Therefore the fundamental solution of \eqref{eq-heat-eq} at time one is
\[
\mathscr{F}^{-1}\Big(\exp(-A\xi^\alpha)\Big),
\]
where $\mathscr{F}^{-1}$ denotes the inverse Fourier transform.
Then
\[
\int_{\mathbb{R}^n} k_0(X,0)\,dX = \int_{\mathbb{R}^n}\mathscr{F}^{-1}\Big(\exp(-A\xi^\alpha)\Big)(X)\,dX = \exp(-A\xi^\alpha)\Big|_{\xi=0} = 1.
\]
The second equality follows from the simple fact that
\[
\int_{\mathbb{R}^n} \mathscr{F}^{-1}(u)(x)\, dx = u(0).
\]
\end{proof}
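In the classical case $n=1$, $\alpha=(2)$, $A=1$, the fundamental solution $\mathscr{F}^{-1}(e^{-\xi^2})$ is the Gaussian heat kernel at time one, and the lemma reduces to its familiar normalization. A numerical check (names hypothetical):

```python
import math

# n = 1, alpha = (2), A = 1: the inverse Fourier transform of exp(-xi^2)
# is exp(-X^2/4) / (2 sqrt(pi)), the usual heat kernel at time one.
k0 = lambda X: math.exp(-X * X / 4.0) / (2.0 * math.sqrt(math.pi))

L, n = 40.0, 100000            # midpoint rule on [-L, L]
h = 2 * L / n
total = h * sum(k0(-L + (i + 0.5) * h) for i in range(n))
print(total)                   # ~ 1.0, as the lemma asserts
```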
By Lemma~\ref{lem-integral-heat-kernel}, we have
\[
\int_{T_mV}k_P(X_m,0)\,d\mu_m(X_m)=1.
\]
Therefore
\[
\operatorname{Str}_0 \Big( (\zeta\cdot k_P)_\varphi \Big) = \sum_{m\in M}\left|\det(\varphi_{\ast,m}-1)\right|^{-1}\operatorname{str}(\zeta_m).
\]
This completes the proof of Theorem~\ref{thm-lef-fix}.
\section*{Acknowledgement} The author would like to thank Professor Weiping Zhang for helpful suggestions on an early draft.
\noindent {\small Chern Institute of Mathematics, Nankai University, Tianjin, P. R. China 300071.}
\noindent{\small Email: [email protected]}
\end{document}
\begin{document}
\title{Daehee numbers and polynomials}
\author{Dae San Kim and Taekyun Kim } \begin{abstract} We consider the Witt-type formula for Daehee numbers and polynomials and investigate some properties of those numbers and polynomials. In particular, the Daehee numbers are closely related to higher-order Bernoulli numbers and Bernoulli numbers of the second kind.
\newcommandx\ud[1][usedefault, addprefix=\global, 1=]{\textnormal{UD}\left(#1\right)}
\end{abstract} \maketitle
\section{Introduction}
As is known, the $n$-th Daehee polynomials are defined by the generating function to be \begin{equation} \left(\frac{\log\left(1+t\right)}{t}\right)\left(1+t\right)^{x}=\sum_{n=0}^{\infty}D_{n}\left(x\right)\frac{t^{n}}{n!},\:\left(\textrm{see\:[5,6,8,9,10,11]}\right).\label{eq:1} \end{equation}
In the special case $x=0$, $D_{n}=D_{n}\left(0\right)$ are called the Daehee numbers.
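Expanding $\log(1+t)/t=\sum_{n\ge0}(-1)^{n}t^{n}/(n+1)$ in (\ref{eq:1}) gives the closed form $D_{n}=(-1)^{n}n!/(n+1)$, so $D_{0}=1$, $D_{1}=-\frac{1}{2}$, $D_{2}=\frac{2}{3}$, $D_{3}=-\frac{3}{2}$. A short computation in exact rational arithmetic (helper name hypothetical):

```python
from fractions import Fraction
from math import factorial

def daehee(N):
    """D_0, ..., D_N from log(1+t)/t = sum D_n t^n / n!:
    the t^n-coefficient of log(1+t)/t is (-1)^n / (n+1)."""
    return [Fraction((-1) ** n, n + 1) * factorial(n) for n in range(N + 1)]

print(daehee(4))   # D_0..D_4 = 1, -1/2, 2/3, -3/2, 24/5
```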
Throughout this paper, $\mathbb{Z}_{p}$, $\mathbb{Q}_{p}$ and $\mathbb{C}_{p}$
will denote the ring of $p$-adic integers, the field of $p$-adic numbers and the completion of the algebraic closure of $\mathbb{Q}_{p}$, respectively. The $p$-adic norm $\left|\cdot\right|_{p}$ is normalized by
$\left|p\right|_{p}=\nicefrac{1}{p}$. Let $\ud[\mathbb{Z}_{p}]$ be the space of uniformly differentiable functions on $\mathbb{Z}_{p}$. For $f\in\ud[\mathbb{Z}_{p}]$, the $p$-adic invariant integral on $\mathbb{Z}_{p}$ is defined by \begin{equation} I\left(f\right)=\int_{\mathbb{Z}_{p}}f\left(x\right)d\mu_{0}\left(x\right)=\lim_{n\rightarrow\infty}\frac{1}{p^{n}}\sum_{x=0}^{p^{n}-1}f\left(x\right),\:\left(\textrm{see }[6]\right).\label{eq:2} \end{equation}
Let $f_{1}$ be the translation of $f$ with $f_{1}\left(x\right)=f\left(x+1\right).$ Then, by (\ref{eq:2}), we get \begin{equation}
I\left(f_{1}\right)=I\left(f\right)+f^{\prime}\left(0\right),\:\textrm{where }f^{\prime}\left(0\right)=\left.\frac{df\left(x\right)}{dx}\right|_{x=0}.\label{eq:3} \end{equation}
As is known, the Stirling number of the first kind is defined by \begin{equation} \left(x\right)_{n}=x\left(x-1\right)\cdots\left(x-n+1\right)=\sum_{l=0}^{n}S_{1}\left(n,l\right)x^{l},\label{eq:4} \end{equation}
and the Stirling number of the second kind is given by the generating function to be \begin{equation} \left(e^{t}-1\right)^{m}=m!\sum_{l=m}^{\infty}S_{2}\left(l,m\right)\frac{t^{l}}{l!},\:\left(\textrm{see [2,3,4]}\right).\label{eq:5} \end{equation}
For $\alpha\in\mathbb{Z}$, the Bernoulli polynomials of order $\alpha$ are defined by the generating function to be \begin{equation} \left(\frac{t}{e^{t}-1}\right)^{\alpha}e^{xt}=\sum_{n=0}^{\infty}B_{n}^{\left(\alpha\right)}\left(x\right)\frac{t^{n}}{n!},\:\left(\textrm{see }[1,2,8]\right).\label{eq:6} \end{equation}
When $x=0$, $B_{n}^{\left(\alpha\right)}=B_{n}^{\left(\alpha\right)}\left(0\right)$ are called the Bernoulli numbers of order $\alpha$.
In this paper, we give a $p$-adic integral representation of the Daehee numbers and polynomials, which is called the Witt-type formula for Daehee numbers and polynomials. From our integral representation, we derive some interesting properties of the Daehee numbers and polynomials.
\section {Witt-type formula for Daehee numbers and polynomials}
First, we consider the following integral representation associated with the falling factorial sequence: \begin{equation} \int_{\mathbb{Z}_{p}}\left(x\right)_{n}d\mu_{0}\left(x\right),\:\textrm{ where }n\in\mathbb{Z}_{+}=\mathbb{N}\cup\left\{ 0\right\} .\label{eq:7} \end{equation}
By (\ref{eq:7}), we get \begin{eqnarray} \sum_{n=0}^{\infty}\int_{\mathbb{Z}_{p}}\left(x\right)_{n}d\mu_{0}\left(x\right)\frac{t^{n}}{n!} & = & \int_{\mathbb{Z}_{p}}\sum_{n=0}^{\infty}\dbinom{x}{n}t^{n}d\mu_{0}\left(x\right)\label{eq:8}\\
& = & \int_{\mathbb{Z}_{p}}\left(1+t\right)^{x}d\mu_{0}\left(x\right),\nonumber \end{eqnarray}
where $t\in\mathbb{C}_{p}$ with $\left|t\right|_{p}<p^{-\frac{1}{p-1}}.$
For $t\in\mathbb{C}_{p}$ with $\left|t\right|_{p}<p^{-\frac{1}{p-1}}$, let us take $f\left(x\right)=\left(1+t\right)^{x}.$ Then, from (\ref{eq:3}), we have \begin{equation} \int_{\mathbb{Z}_{p}}\left(1+t\right)^{x}d\mu_{0}\left(x\right)=\frac{\log\left(1+t\right)}{t}.\label{eq:9} \end{equation}
By (\ref{eq:1}) and (\ref{eq:9}), we see that \begin{eqnarray} \sum_{n=0}^{\infty}D_{n}\frac{t^{n}}{n!} & = & \frac{\log\left(1+t\right)}{t}\label{eq:10}\\
& = & \int_{\mathbb{Z}_{p}}\left(1+t\right)^{x}d\mu_{0}\left(x\right)\nonumber \\
& = & \sum_{n=0}^{\infty}\int_{\mathbb{Z}_{p}}\left(x\right)_{n}d\mu_{0}\left(x\right)\frac{t^{n}}{n!}.\nonumber \end{eqnarray}
Therefore, by (\ref{eq:10}), we obtain the following theorem. \begin{thm} \label{thm:1}For $n\ge0$, we have \[ \int_{\mathbb{Z}_{p}}\left(x\right)_{n}d\mu_{0}\left(x\right)=D_{n}. \]
\end{thm}
For $n\in\mathbb{Z}$, it is known that \begin{equation} \left(\frac{t}{\log\left(1+t\right)}\right)^{n}\left(1+t\right)^{x-1}=\sum_{k=0}^{\infty}B_{k}^{\left(k-n+1\right)}\left(x\right)\frac{t^k}{k!},\:\left(\textrm{see [2,3,4]}\right).\label{eq:11} \end{equation}
Thus, by (\ref{eq:11}), we get \begin{equation} D_{k}=\int_{\mathbb{Z}_{p}}\left(x\right)_{k}d\mu_{0}\left(x\right)=B_{k}^{\left(k+2\right)}\left(1\right),\quad\left(k\ge0\right),\label{eq:12} \end{equation} where $B_{k}^{\left(n\right)}\left(x\right)$ are the Bernoulli polynomials of order $n$.
In the special case $x=0$, $B_{k}^{\left(n\right)}=B_{k}^{\left(n\right)}\left(0\right)$ are called the Bernoulli numbers of order $n$.
From (\ref{eq:10}), we note that \begin{eqnarray} \left(1+t\right)^{x}\int_{\mathbb{Z}_{p}}\left(1+t\right)^{y}d\mu_{0}\left(y\right) & = & \left(\frac{\log\left(1+t\right)}{t}\right)\left(1+t\right)^{x}\label{eq:13}\\
& = & \sum_{n=0}^{\infty}D_{n}\left(x\right)\frac{t^{n}}{n!}.\nonumber \end{eqnarray}
Thus, by (\ref{eq:13}), we get \begin{equation} \int_{\mathbb{Z}_{p}}\left(x+y\right)_{n}d\mu_{0}\left(y\right)=D_{n}\left(x\right),\quad\left(n\ge0\right),\label{eq:14} \end{equation}
and, from (\ref{eq:11}), we have \begin{equation} D_{n}\left(x\right)=B_{n}^{\left(n+2\right)}\left(x+1\right).\label{eq:15} \end{equation}
Therefore, by (\ref{eq:14}) and (\ref{eq:15}), we obtain the following theorem. \begin{thm} \label{thm:2}For $n\ge0$, we have \[ D_{n}\left(x\right)=\int_{\mathbb{Z}_{p}}\left(x+y\right)_{n}d\mu_{0}\left(y\right), \]
and \[ D_{n}\left(x\right)=B_{n}^{\left(n+2\right)}\left(x+1\right). \]
\end{thm}
By Theorem \ref{thm:1}, we easily see that \begin{equation} D_{n}=\sum_{l=0}^{n}S_{1}\left(n,l\right)B_{l},\label{eq:16} \end{equation} where $B_{l}$ are the ordinary Bernoulli numbers.
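The identity (\ref{eq:16}) can be verified in exact rational arithmetic, computing $S_{1}$ and the Bernoulli numbers from their standard recurrences and comparing with the closed form $D_{n}=(-1)^{n}n!/(n+1)$ obtained from (\ref{eq:10}) (helper names hypothetical):

```python
from fractions import Fraction
from math import comb, factorial

def stirling1(N):
    # Signed Stirling numbers of the first kind: (x)_n = sum_l S1(n,l) x^l,
    # via S1(n, l) = S1(n-1, l-1) - (n-1) S1(n-1, l).
    S = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    S[0][0] = Fraction(1)
    for n in range(1, N + 1):
        for l in range(1, n + 1):
            S[n][l] = S[n - 1][l - 1] - (n - 1) * S[n - 1][l]
    return S

def bernoulli(N):
    # B_0, ..., B_N (with B_1 = -1/2) from sum_{j<=m} C(m+1, j) B_j = 0.
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(Fraction(-1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

N = 8
S1, B = stirling1(N), bernoulli(N)
for n in range(N + 1):
    D_n = sum(S1[n][l] * B[l] for l in range(n + 1))
    assert D_n == Fraction((-1) ** n * factorial(n), n + 1)
print("D_n = sum_l S1(n,l) B_l verified for n <= 8")
```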
From Theorem \ref{thm:2}, we have \begin{eqnarray} D_{n}\left(x\right) & = & \int_{\mathbb{Z}_{p}}\left(x+y\right)_{n}d\mu_{0}\left(y\right)\label{eq:17}\\
& = & \sum_{l=0}^{n}S_{1}\left(n,l\right)B_{l}\left(x\right),\nonumber \end{eqnarray} where $B_{l}\left(x\right)$ are the Bernoulli polynomials defined by the generating function \[ \frac{t}{e^{t}-1}e^{xt}=\sum_{n=0}^{\infty}B_{n}\left(x\right)\frac{t^{n}}{n!}. \]
Therefore, by (\ref{eq:16}) and (\ref{eq:17}), we obtain the following corollary. \begin{cor} \label{cor:3}For $n\ge0$, we have \[ D_{n}\left(x\right)=\sum_{l=0}^{n}S_{1}\left(n,l\right)B_{l}\left(x\right). \]
\end{cor}
In (\ref{eq:10}), we have \begin{eqnarray} \frac{t}{e^{t}-1} & = & \sum_{n=0}^{\infty}D_{n}\frac{1}{n!}\left(e^{t}-1\right)^{n}\label{eq:18}\\
& = & \sum_{n=0}^{\infty}D_{n}\frac{1}{n!}n!\sum_{m=n}^{\infty}S_{2}\left(m,n\right)\frac{t^{m}}{m!}\nonumber \\
& = & \sum_{m=0}^{\infty}\left(\sum_{n=0}^{m}D_{n}S_{2}\left(m,n\right)\right)\frac{t^{m}}{m!}\nonumber \end{eqnarray} and \begin{equation} \frac{t}{e^{t}-1}=\sum_{m=0}^{\infty}B_{m}\frac{t^{m}}{m!}.\label{eq:19} \end{equation}
Therefore, by (\ref{eq:18}) and (\ref{eq:19}), we obtain the following theorem. \begin{thm} \label{thm:4}For $m\ge0$, we have \[ B_{m}=\sum_{n=0}^{m}D_{n}S_{2}\left(m,n\right). \]
In particular, \[ \int_{\mathbb{Z}_{p}}x^{m}d\mu_{0}\left(x\right)=\sum_{n=0}^{m}D_{n}S_{2}\left(m,n\right). \] \end{thm} \begin{rem*} For $m\ge0$, by (\ref{eq:17}), we have \[ \int_{\mathbb{Z}_{p}}\left(x+y\right)^{m}d\mu_{0}\left(y\right)=\sum_{n=0}^{m}D_{n}\left(x\right)S_{2}\left(m,n\right). \]
\end{rem*}
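Theorem \ref{thm:4} can likewise be checked by exact rational arithmetic, computing $S_{2}$ from the recurrence $S_{2}(m,n)=nS_{2}(m-1,n)+S_{2}(m-1,n-1)$ and using the closed form $D_{n}=(-1)^{n}n!/(n+1)$ from (\ref{eq:10}) (helper names hypothetical):

```python
from fractions import Fraction
from math import comb, factorial

def stirling2(N):
    # Stirling numbers of the second kind S2(m, n).
    S = [[0] * (N + 1) for _ in range(N + 1)]
    S[0][0] = 1
    for m in range(1, N + 1):
        for n in range(1, m + 1):
            S[m][n] = n * S[m - 1][n] + S[m - 1][n - 1]
    return S

def bernoulli(N):
    # B_0, ..., B_N (with B_1 = -1/2) from sum_{j<=m} C(m+1, j) B_j = 0.
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(Fraction(-1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

N = 8
S2, B = stirling2(N), bernoulli(N)
D = [Fraction((-1) ** n * factorial(n), n + 1) for n in range(N + 1)]
for m in range(N + 1):
    assert B[m] == sum(D[n] * S2[m][n] for n in range(m + 1))
print("B_m = sum_n D_n S2(m,n) verified for m <= 8")
```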
For $n\in\mathbb{Z}_{\ge0}$, the rising factorial sequence is defined by \begin{equation} x^{\left(n\right)}=x\left(x+1\right)\cdots\left(x+n-1\right).\label{eq:20} \end{equation}
Let us define the Daehee numbers of the second kind as follows: \begin{equation} \widehat{D}_{n}=\int_{\mathbb{Z}_{p}}\left(-x\right)_{n}d\mu_{0}\left(x\right),\:\left(n\in\mathbb{Z}_{\ge0}\right).\label{eq:21} \end{equation}
By (\ref{eq:4}) and (\ref{eq:20}), we get \begin{equation} x^{\left(n\right)}=\left(-1\right)^{n}\left(-x\right)_{n}=\sum_{l=0}^{n}S_{1}\left(n,l\right)\left(-1\right)^{n-l}x^{l}.\label{eq:22} \end{equation}
From (\ref{eq:21}) and (\ref{eq:22}), we have \begin{eqnarray} \widehat{D}_{n} & = & \int_{\mathbb{Z}_{p}}\left(-x\right)_{n}d\mu_{0}\left(x\right)=\int_{\mathbb{Z}_{p}}x^{\left(n\right)}\left(-1\right)^{n}d\mu_{0}\left(x\right)\label{eq:23}\\
& = & \sum_{l=0}^{n}S_{1}\left(n,l\right)\left(-1\right)^{l}B_{l}.\nonumber \end{eqnarray}
Therefore, by (\ref{eq:23}), we obtain the following theorem. \begin{thm} \label{thm:5}For $n\ge0$, we have \[ \widehat{D}_{n}=\sum_{l=0}^{n}S_{1}\left(n,l\right)\left(-1\right)^{l}B_{l}. \]
\end{thm}
Let us consider the generating function of the Daehee numbers of the second kind as follows:
\begin{eqnarray} \sum_{n=0}^{\infty}\widehat{D}_{n}\frac{t^{n}}{n!} & = & \sum_{n=0}^{\infty}\int_{\mathbb{Z}_{p}}\left(-x\right)_{n}d\mu_{0}\left(x\right)\frac{t^{n}}{n!}\label{eq:24}\\
& = & \int_{\mathbb{Z}_{p}}\sum_{n=0}^{\infty}\dbinom{-x}{n}t^{n}d\mu_{0}\left(x\right)\nonumber \\
& = & \int_{\mathbb{Z}_{p}}\left(1+t\right)^{-x}d\mu_{0}\left(x\right).\nonumber \end{eqnarray}
From (\ref{eq:3}), we can derive the following equation:
\begin{equation} \int_{\mathbb{Z}_{p}}\left(1+t\right)^{-x}d\mu_{0}\left(x\right)=\frac{\left(1+t\right)\log\left(1+t\right)}{t},\label{eq:25} \end{equation}
where $\left|t\right|_{p}<p^{-\frac{1}{p}}.$
By (\ref{eq:24}) and (\ref{eq:25}), we get \begin{eqnarray} \frac{1}{t}\left(1+t\right)\log\left(1+t\right) & = & \int_{\mathbb{Z}_{p}}\left(1+t\right)^{-x}d\mu_{0}\left(x\right)\label{eq:26}\\
& = & \sum_{n=0}^{\infty}\widehat{D}_{n}\frac{t^{n}}{n!}.\nonumber \end{eqnarray}
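\noindent As a quick check, expanding $\frac{\left(1+t\right)\log\left(1+t\right)}{t}=1+\frac{t}{2}-\frac{t^{2}}{6}+\cdots$ in (\ref{eq:26}) gives $\widehat{D}_{0}=1$, $\widehat{D}_{1}=\frac{1}{2}$, $\widehat{D}_{2}=-\frac{1}{3}$, in agreement with Theorem \ref{thm:5}: for $n=2$, since $S_{1}\left(2,1\right)=-1$ and $S_{1}\left(2,2\right)=1$, \[ \sum_{l=0}^{2}S_{1}\left(2,l\right)\left(-1\right)^{l}B_{l}=\left(-1\right)\left(-1\right)\left(-\frac{1}{2}\right)+\left(1\right)\left(1\right)\left(\frac{1}{6}\right)=-\frac{1}{3}. \]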
Let us consider the Daehee polynomials of the second kind as follows:
\begin{equation} \frac{\left(1+t\right)\log\left(1+t\right)}{t}\frac{1}{\left(1+t\right)^{x}}=\sum_{n=0}^{\infty}\widehat{D}_{n}\left(x\right)\frac{t^{n}}{n!}.\label{eq:27} \end{equation}
Then, by (\ref{eq:27}), we get \begin{equation} \int_{\mathbb{Z}_{p}}\left(1+t\right)^{-x-y}d\mu_{0}\left(y\right)=\sum_{n=0}^{\infty}\widehat{D}_{n}\left(x\right)\frac{t^{n}}{n!}.\label{eq:28} \end{equation}
From (\ref{eq:28}), we get \begin{eqnarray} \widehat{D}_{n}\left(x\right) & = & \int_{\mathbb{Z}_{p}}\left(-x-y\right)_{n}d\mu_{0}\left(y\right),\quad\left(n\ge0\right)\label{eq:29}\\
& = & \sum_{l=0}^{n}\left(-1\right)^{l}S_{1}\left(n,l\right)B_{l}\left(x\right).\nonumber \end{eqnarray}
Therefore, by (\ref{eq:29}), we obtain the following theorem. \begin{thm} \label{thm:6}For $n\ge0$, we have \[ \widehat{D}_{n}\left(x\right)=\int_{\mathbb{Z}_{p}}\left(-x-y\right)_{n}d\mu_{0}\left(y\right)=\sum_{l=0}^{n}\left(-1\right)^{l}S_{1}\left(n,l\right)B_{l}\left(x\right). \]
\end{thm}
From (\ref{eq:27}) and (\ref{eq:28}), we have \begin{eqnarray} \left(\frac{t}{e^{t}-1}\right)e^{\left(1-x\right)t} & = & \sum_{n=0}^{\infty}\widehat{D}_{n}\left(x\right)\frac{1}{n!}\left(e^{t}-1\right)^{n}\label{eq:30}\\
& = & \sum_{n=0}^{\infty}\widehat{D}_{n}\left(x\right)\frac{1}{n!}n!\sum_{m=n}^{\infty}S_{2}\left(m,n\right)\frac{t^{m}}{m!}\nonumber \\
& = & \sum_{m=0}^{\infty}\left(\sum_{n=0}^{m}\widehat{D}_{n}\left(x\right)S_{2}\left(m,n\right)\right)\frac{t^{m}}{m!},\nonumber \end{eqnarray}
and \begin{eqnarray} \int_{\mathbb{Z}_{p}}e^{-\left(x+y\right)t}d\mu_{0}\left(y\right) & = & \sum_{n=0}^{\infty}\widehat{D}_{n}\left(x\right)\frac{\left(e^{t}-1\right)^{n}}{n!}\label{eq:31}\\
& = & \sum_{m=0}^{\infty}\left(\sum_{n=0}^{m}\widehat{D}_{n}\left(x\right)S_{2}\left(m,n\right)\right)\frac{t^{m}}{m!}.\nonumber \end{eqnarray}
Therefore, by (\ref{eq:30}) and (\ref{eq:31}), we obtain the following theorem. \begin{thm} \label{thm:7}For $m\ge0$, we have \begin{eqnarray*} B_{m}\left(1-x\right) & = & \left(-1\right)^{m}\int_{\mathbb{Z}_{p}}\left(x+y\right)^{m}d\mu_{0}\left(y\right)\\
& = & \sum_{n=0}^{m}\widehat{D}_{n}\left(x\right)S_{2}\left(m,n\right). \end{eqnarray*}
In particular, \[ B_{m}\left(1-x\right)=\left(-1\right)^{m}B_{m}\left(x\right)=\sum_{n=0}^{m}\widehat{D}_{n}\left(x\right)S_{2}\left(m,n\right). \] \end{thm} \begin{rem*} By (\ref{eq:11}), (\ref{eq:26}) and (\ref{eq:27}), we see that \[ \widehat{D}_{n}=B_{n}^{\left(n+2\right)}\left(2\right),\quad\widehat{D}_{n}\left(x\right)=B_{n}^{\left(n+2\right)}\left(2-x\right). \]
\end{rem*}
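\noindent As a quick check of Theorem \ref{thm:7}, take $m=1$. Since $S_{2}\left(1,0\right)=0$ and $S_{2}\left(1,1\right)=1$, the right-hand side reduces to $\widehat{D}_{1}\left(x\right)$, and by Theorem \ref{thm:6}, \[ \widehat{D}_{1}\left(x\right)=-S_{1}\left(1,1\right)B_{1}\left(x\right)=-\left(x-\frac{1}{2}\right)=\frac{1}{2}-x=B_{1}\left(1-x\right), \] as expected.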
From Theorem \ref{thm:1} and (\ref{eq:21}), we have \begin{eqnarray} \left(-1\right)^{n}\frac{D_{n}}{n!} & = & \left(-1\right)^{n}\int_{\mathbb{Z}_{p}}\dbinom{x}{n}d\mu_{0}\left(x\right)\label{eq:32}\\
& = & \int_{\mathbb{Z}_{p}}\dbinom{-x+n-1}{n}d\mu_{0}\left(x\right)\nonumber \\
& = & \sum_{m=0}^{n}\dbinom{n-1}{n-m}\int_{\mathbb{Z}_{p}}\dbinom{-x}{m}d\mu_{0}\left(x\right)\nonumber \\
& = & \sum_{m=0}^{n}\dbinom{n-1}{n-m}\frac{\widehat{D}_{m}}{m!}=\sum_{m=1}^{n}\dbinom{n-1}{m-1}\frac{\widehat{D}_{m}}{m!},\nonumber \end{eqnarray}
and \begin{eqnarray} \left(-1\right)^{n}\frac{\widehat{D}_{n}}{n!} & = & \left(-1\right)^{n}\int_{\mathbb{Z}_{p}}\dbinom{-x}{n}d\mu_{0}\left(x\right)=\int_{\mathbb{Z}_{p}}\dbinom{x+n-1}{n}d\mu_{0}\left(x\right)\label{eq:33}\\
& = & \sum_{m=0}^{n}\dbinom{n-1}{n-m}\int_{\mathbb{Z}_{p}}\dbinom{x}{m}d\mu_{0}\left(x\right)\nonumber \\
& = & \sum_{m=0}^{n}\dbinom{n-1}{n-m}\frac{D_{m}}{m!}=\sum_{m=1}^{n}\dbinom{n-1}{m-1}\frac{D_{m}}{m!}.\nonumber \end{eqnarray}
Therefore, by (\ref{eq:32}) and (\ref{eq:33}), we obtain the following theorem. \begin{thm} \label{thm:8}For $n\in\mathbb{N},$ we have \[ \left(-1\right)^{n}\frac{D_{n}}{n!}=\sum_{m=1}^{n}\dbinom{n-1}{m-1}\frac{\widehat{D}_{m}}{m!}, \] and \[ \left(-1\right)^{n}\frac{\widehat{D}_{n}}{n!}=\sum_{m=1}^{n}\dbinom{n-1}{m-1}\frac{D_{m}}{m!}. \] \end{thm}
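\noindent As a quick check of Theorem \ref{thm:8}, take $n=2$: with $D_{1}=-\frac{1}{2}$, $D_{2}=\frac{2}{3}$, $\widehat{D}_{1}=\frac{1}{2}$, $\widehat{D}_{2}=-\frac{1}{3}$, we have \[ \left(-1\right)^{2}\frac{D_{2}}{2!}=\frac{1}{3}=\dbinom{1}{0}\frac{\widehat{D}_{1}}{1!}+\dbinom{1}{1}\frac{\widehat{D}_{2}}{2!}=\frac{1}{2}-\frac{1}{6}, \] and symmetrically $\left(-1\right)^{2}\frac{\widehat{D}_{2}}{2!}=-\frac{1}{6}=\dbinom{1}{0}\frac{D_{1}}{1!}+\dbinom{1}{1}\frac{D_{2}}{2!}=-\frac{1}{2}+\frac{1}{3}$.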
$\,$
\noindent Department of Mathematics, Sogang University, Seoul 121-742, Republic of Korea
\noindent e-mail:[email protected]\\
\noindent Department of Mathematics, Kwangwoon University, Seoul 139-701, Republic of Korea
\noindent e-mail:[email protected]\\
\end{document}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{1cm}
\Huge
\textbf{A Report on Hausdorff Compactifications of $\mathbb{R}$}
\Large
\textbf{Arnold Tan Junhan}
\Large
\textbf{Michaelmas 2018 Mini Projects: Analytic Topology}
\Large
University of Oxford
\end{center} \end{titlepage} \tableofcontents
\section{Abstract}
The goal of this report is to investigate the variety of Hausdorff compactifications of $\mathbb{R}$. The Alexandroff one-point compactification, the two-point compactification $[-\infty,\infty]$, and the Stone-$\check{\text{C}}$ech compactification are all clearly different. The ultimate aim is to show that there are in fact uncountably many. An intermediate aim is to exhibit one compactification of $\mathbb{R}$ different from all the compactifications already mentioned. \\
\noindent We will often just write $\delta X$ to refer to a compactification $\langle l, \delta X \rangle$ of a space $X$. We will compare two $T_2$ compactifications of a space $X$ by writing $\langle l_1, \delta_1 X \rangle \leq \langle l_2, \delta_2 X \rangle$ to mean that there is a continuous function $L: \delta_2 X \rightarrow \delta_1 X$ such that $L \circ l_2 = l_1$. (Such a function will automatically be onto.) It is not hard to see that if $\delta_1 X \leq \delta_2 X$ and $\delta_2 X \leq \delta_1 X$ then $\delta_1 X$ and $\delta_2 X$ are homeomorphic as topological spaces. \\
\noindent Let us declare two compactifications $\langle l_1, \delta_1 X \rangle$ and $\langle l_2, \delta_2 X \rangle$ to be \textit{equivalent} if $\delta_1 X \leq \delta_2 X$ and $\delta_2 X \leq \delta_1 X$. Then $\leq$ gives us a partial ordering on the set of equivalence classes of compactifications. This will be useful for us towards the end of the report, where we shall apply Zorn's Lemma to this poset of equivalence classes. \\
\noindent For that purpose, let us also recall here that an element $p \in P$ of a poset $(P, \leq)$ is \textit{maximal} if whenever we have $q \in P$ with $p \leq q$, then $p=q$. (When the equivalence class of a compactification is maximal -- with respect to $\leq$, among all compactifications with some given property -- we will simply say the compactification is maximal.) On the other hand $p \in P$ is a \textit{greatest} element if $q \leq p$ for all $q \in P$. Writing $p < q$ to mean $p \leq q$, $p \neq q$ (and writing $p \nless q$ otherwise), we see that $p$ is maximal iff $p \nless q$ for all $q \in P$. A greatest element in a poset is unique and certainly maximal, however we may have several different maximal elements. A \textit{chain}, or \textit{linearly ordered set}, is a poset $(P, \leq)$ in which we have comparability of elements: for all $p,q \in P$, either $p \leq q$ or $q \leq p$. In a chain, the notions of maximal and greatest element do coincide. \\
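\noindent For a concrete instance of this order on compactifications of $\mathbb{R}$ (with the embeddings $i_1$ into $S^1$ and $i_2$ into $[-1,1]$ as defined later in this report): we have $\langle i_1, S^1 \rangle \leq \langle i_2, [-1,1] \rangle$, witnessed by the continuous surjection $L: [-1,1] \rightarrow S^1$ that restricts to $i_1 \circ i_2^{-1}$ on $i_2(\mathbb{R})$ and sends both endpoints $\pm 1$ to the point at infinity; collapsing the two endpoints of an interval to a single point yields a circle. The reverse inequality fails: any continuous $L': S^1 \rightarrow [-1,1]$ with $L' \circ i_1 = i_2$ would force the value at the single point at infinity to be both $\lim_{x \rightarrow \infty} i_2(x) = 1$ and $\lim_{x \rightarrow -\infty} i_2(x) = -1$. \\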
\section{Compactifications via their characterising properties} The reader is surely familiar with the idea that the essence of the \textit{Stone-$\check{\text{C}}$ech compactification} $ \langle h, \beta X \rangle$ can be captured via a certain characterizing property. We run through the steps of showing this, and then, borrowing some of these ideas, we will exhibit a compactification of $\mathbb{R}$ that turns out to be different from $ \langle h, \beta X \rangle$.
\begin{defn} Let $X$ be a (nonempty) Tychonoff space. \\ Let $\{f_\lambda: \lambda \in \Lambda \}$ be a list of all bounded continuous functions from $X$ to $\mathbb{R}$. \\ For each $\lambda$, let $I_\lambda$ be the smallest closed interval such that $\text{ran}(f_\lambda) \subseteq I_\lambda$. That is, let $I_\lambda=[\text{inf ran}(f_\lambda),\text{sup ran}(f_\lambda)]$. \\ Let $Y = \prod_{\lambda \in \Lambda} I_\lambda$ be the Tychonoff product of the $I_\lambda$. \\ Define $h: X \rightarrow Y$ such that for each $\lambda \in \Lambda$, $h(x)(\lambda) = f_\lambda (x)$. Let $\beta X = \text{cl}^Y(h(X))$. \\ Define the \textbf{Stone-$\check{\text{C}}$ech compactification} of $X$ to be $ \langle h, \beta X \rangle$. \end{defn}
\noindent Let us briefly check that this is indeed a $T_2$ compactification of $X$: \begin{itemize}
\item $Y = \prod_{\lambda \in \Lambda} I_\lambda$, which is compact (by Tychonoff's Theorem) and $T_2$, since this is true for each of the $I_\lambda$. Therefore, since $\beta X$ is a subspace of $Y$, it is $T_2$; since it is closed in $Y$, it is compact.
\item $h$ is injective. Suppose we have distinct points $x, y \in X$. Since $X$ is Tychonoff (and hence $\{y\}$ is closed), there is a continuous function $f: X \rightarrow [0,1]$ such that $f(x)=0, f(\{y\})=\{1\}$. $f$ is bounded, so there is $\lambda \in \Lambda$ with $f=f_\lambda$. Then, $h(x)(\lambda)=f_\lambda (x) = 0 \neq 1 = f_\lambda (y) = h(y)(\lambda) $, so $h(x) \neq h(y)$;
\item $h$ is continuous. A subbasic open set in $Y$ has the form $U_\lambda \times \prod _{\mu \neq \lambda} I_\mu$, where $U_\lambda$ is open in $I_\lambda$. Set $U = (U_\lambda \times \prod _{\mu \neq \lambda} I_\mu )\cap \beta X$, then $$h ^{-1} (U) = \{x \in X: h (x) \in U\}= \{x \in X: h (x)(\lambda) \in U_\lambda \}= \{x \in X: f_\lambda (x) \in U_\lambda \} = f_\lambda ^{-1} (U_\lambda),$$ and this is open, since $f_\lambda$ is continuous;
\item $h^{-1}$ is continuous. It is enough to see that whenever $x \in U$, where $U$ is open in $X$, there is an open $V \ni h(x)$ in $h(X)$ such that $h^{-1} (V) \subseteq U$. Well, since $x \notin X \backslash U$, which is closed, and $X$ is Tychonoff, there is some continuous function $f : X \rightarrow [0,1]$ such that $f(x) = 0$ and $f(X \backslash U) = \{1\}$. $f$ is a bounded continuous function from $X$ to $\mathbb{R}$, so there is $\lambda \in \Lambda$ with $f = f_\lambda$. Hence $f_\lambda (x) = 0$ and $f_\lambda (X \backslash U) = \{1\}$.
\\ Note that $V_\lambda := [0,1)$ is open in $[0,1] = I_\lambda$, so $V := (V_\lambda \times \prod_{\mu \neq \lambda} I_\mu) \cap h(X)$ is open in $h(X)$. We have
$$h^{-1} (V) = \{x \in X: h(x) \in V\} = \{ x \in X: h(x) (\lambda) \in V_\lambda \} = \{x \in X: f_\lambda (x) \in [0,1)\} = f_\lambda ^{-1} [0,1),$$
but $x \in f_\lambda ^{-1} [0,1) \subseteq U$, so $x \in h^{-1} (V) \subseteq U$;
\item $\text{cl}^{\beta X} (h(X)) = \beta X$ holds. $\beta X$ is the smallest closed set in $Y$ containing $h(X)$, so it is the smallest closed set in $\beta X$ containing $h(X)$, because $\beta X$ is closed in $Y$ by construction. \end{itemize}
\begin{lem} \label{sc1} Let $X$ be a Tychonoff space, and $I$ be a closed bounded interval in $\mathbb{R}$. Let $f: X \rightarrow I$ be continuous. Then there exists a continuous function $\beta f : \beta X \rightarrow I$ such that $\beta f \circ h = f$. \end{lem}
\begin{proof} $f$ is bounded and continuous, so there is some $\lambda \in \Lambda$ such that $f=f_\lambda$. \\ Define $\beta f : \beta X \rightarrow I, y \mapsto y(\lambda)$. This is a projection, so it is continuous. \\ Furthermore, for all $x \in X$, we have $\beta f \circ h (x) = h(x) (\lambda)= f_\lambda (x)= f(x)$. \end{proof}
\begin{lem} \label{sc2} Let $X$ be a Tychonoff space, and $Z = \prod_{\mu \in M} I_\mu$ be a product of closed bounded intervals in $\mathbb{R}$. Let $f: X \rightarrow Z$ be continuous. Then there exists a continuous function $\beta f : \beta X \rightarrow Z$ such that $\beta f \circ h = f$. \end{lem}
\begin{proof} Define $f^\mu : X \rightarrow I_\mu, x \mapsto f(x)(\mu)$. $f^\mu = \pi_\mu \circ f$, so it is continuous. Apply Lemma \ref{sc1} to see that there exists a continuous function $\beta f^\mu : \beta X \rightarrow I_\mu$ such that $\beta f^\mu \circ h = f^\mu$. \\ Now define $\beta f : \beta X \rightarrow Z$ such that for all $\mu$, $\beta f (x) (\mu) = \beta f^\mu (x)$. That is, $\pi_\mu \circ \beta f = \beta f^\mu$. \\ It remains to see that $\beta f$ is continuous. \\ A subbasic open set in $Z$ has the form $U=U_\mu \times \prod _{\nu \neq \mu} I_\nu$, where $U_\mu$ is open in $I_\mu$. We have $$\beta f ^{-1} (U) = \{x \in \beta X: \beta f (x) \in U\}= \{x \in \beta X: \beta f (x)(\mu) \in U_\mu \}= \{x \in \beta X: \beta f^\mu (x) \in U_\mu \} = (\beta f^\mu) ^{-1} (U_\mu).$$ This is open, since $\beta f^\mu$ is continuous. \end{proof}
\begin{lem} \label{embed} Any Tychonoff space $X$ can be embedded in a product of closed bounded intervals. \end{lem}
\begin{proof} $\beta X$ is a subset of such a product! \end{proof}
\begin{thm}[The Stone-$\check{\text{C}}$ech Property] Let $X$ be a Tychonoff space. \\ Say a compactification $(k, \gamma X)$ of $X$ has the \textbf{Stone-$\check{\text{C}}$ech property} if whenever $K$ is a compact $T_2$ space and $f:X \rightarrow K$ is continuous, there exists a continuous map $\gamma f : \gamma X \rightarrow K$ such that $\gamma f \circ k = f$. \\ ($\gamma f$ will automatically be unique, since it is already determined on the dense set $k(X) \subseteq \gamma X$.) \\ Then $(h, \beta X)$ has the Stone-$\check{\text{C}}$ech property. \end{thm}
\begin{proof} Since $K$ is compact $T_2$, it is Tychonoff. By Lemma \ref{embed}, without loss of generality there is a product $Z = \prod_{\mu \in M} I_\mu$ of closed bounded intervals such that $K \subseteq Z$. Viewing $f$ as a continuous function $X \rightarrow Z$, Lemma \ref{sc2} gives us a continuous function $\beta f : \beta X \rightarrow Z$ such that $\beta f \circ h = f$. It only remains to see that the image of $\beta f$ lies in $K$. \\ $K$ is compact in the Hausdorff space $Z$, hence $K$ is closed in $Z$, so $(\beta f)^{-1} (K)$ is closed in $\beta X$. Also, $f(X) \subseteq K$ implies $h(X) \subseteq (\beta f)^{-1} (K)$. Since $h(X)$ is dense in $\beta X$, we must have $(\beta f)^{-1} (K) = \beta X$. \end{proof}
\begin{thm} \label{largest} If $(k, \gamma X)$ is a Hausdorff compactification of $X$ that has the Stone-$\check{\text{C}}$ech property, then $ \langle k', \gamma' X \rangle \leq \langle k, \gamma X \rangle$ for any other compactification $ \langle k', \gamma' X \rangle$. \end{thm}
\begin{proof} Take $K= \gamma' X$ and $f = k'$ in the definition of $(k, \gamma X)$ having the Stone-$\check{\text{C}}$ech property, to see that there exists a continuous map $\gamma k' : \gamma X \rightarrow \gamma' X$ such that $\gamma k' \circ k = k'$. This is precisely the statement that $ \langle k', \gamma' X \rangle \leq \langle k, \gamma X \rangle$. \end{proof}
\noindent Since the Stone-$\check{\text{C}}$ech compactification has the Stone-$\check{\text{C}}$ech property, we deduce:
\begin{cor} $\langle h, \beta X \rangle$ is the largest compactification of $X$. \end{cor}
\noindent On the other hand, we could set $\langle k', \gamma' X \rangle$ in Theorem \ref{largest} to be the Stone-$\check{\text{C}}$ech compactification, to get another corollary:
\begin{cor} \label{sc3} If $\langle k, \gamma X \rangle$ is a Hausdorff compactification of $X$ that has the Stone-$\check{\text{C}}$ech property, then $ \langle h, \beta X \rangle \leq \langle k, \gamma X \rangle$. \end{cor}
\noindent This says that the Stone-$\check{\text{C}}$ech compactification is the smallest one having the Stone-$\check{\text{C}}$ech extension property. Suppose now that we consider the problem of extending a given family of bounded continuous functions on $X$, rather than all bounded continuous functions. \\
\noindent For example, suppose we are asked to construct a Hausdorff compactification $\langle k, \gamma \mathbb{R} \rangle $ of $\mathbb{R}$ that has the following property: whenever $f: \mathbb{R} \rightarrow \mathbb{R}$ is of the form $f(x) = \cos(nx)$ for some $n \in \mathbb{Z}$, there exists a continuous function $\gamma f : \gamma \mathbb{R} \rightarrow \mathbb{R}$ such that $\gamma f \circ k = f$. \\
\noindent We give a construction of such a compactification $\langle k, \gamma \mathbb{R} \rangle $, by altering that of $\langle h, \beta X \rangle$. We note beforehand that $\cos(nx)=\cos(-nx)$ for each $n \in \mathbb{Z}$, and the constant function $\cos(0)=1$ extends trivially to any compactification, so we need only consider $n \geq 1$.
\begin{prop} Consider a set $\{f_n: n \in \mathbb{N} \}$ of functions from $\mathbb{R}$ to $[-1,1]$, where $$f_0 (x)= \tanh(x), \ \ \ \ f_n (x) = \cos(nx) \ \forall n \geq 1.$$ \\ Let $Y = \prod_{n \in \mathbb{N}} [-1,1]$. \\ Define $k: \mathbb{R} \rightarrow Y$ such that for each $n \in \mathbb{N}$, $k(x)(n) = f_n (x)$. Let $\gamma \mathbb{R} = \text{cl}^Y(k(\mathbb{R}))$. \\ Then $ \langle k, \gamma \mathbb{R} \rangle$ is a compactification of $\mathbb{R}$. \end{prop}
\begin{proof} We check this is a compactification. \begin{itemize}
\item $Y = \prod_{n \in \mathbb{N}} [-1,1]$ is compact (by Tychonoff's Theorem) and $T_2$, since this is true for $[-1,1]$. Therefore, since $\gamma \mathbb{R}$ is a subspace of $Y$, it is $T_2$; since it is closed in $Y$, it is compact.
\item $k$ is injective. Suppose we have distinct points $x, y \in \mathbb{R}$. $f_0(x)=\tanh(x)$ is strictly monotone, hence injective. Therefore, $k(x)(0)=f_0 (x) \neq f_0 (y) = k(y)(0) $, so $k(x) \neq k(y)$;
\item $k$ is continuous. A subbasic open set in $Y$ has the form $U_n \times \prod _{m \neq n} [-1,1]$, where $U_n$ is open in $[-1,1]$. Set $U = (U_n \times \prod _{m \neq n} [-1,1] )\cap \gamma \mathbb{R}$, then $$k ^{-1} (U) = \{x \in \mathbb{R}: k (x) \in U\}= \{x \in \mathbb{R}: k (x)(n) \in U_n \}= \{x \in \mathbb{R}: f_n (x) \in U_n \} = f_n ^{-1} (U_n),$$ and this is open, since $f_n$ is continuous;
\item $k^{-1}$ is continuous. It is enough to see that whenever $x \in U$, where $U$ is open in $\mathbb{R}$, there is an open $V \ni k(x)$ in $k(\mathbb{R})$ such that $k^{-1} (V) \subseteq U$. Well, since $x \notin \mathbb{R} \backslash U$, which is closed, and $\tanh$ is a homeomorphism onto $(-1,1)$, the point $f_0(x)$ lies in $(-1,1)$ but outside $f_0(\mathbb{R} \backslash U)$, whose closure in $[-1,1]$ adds at most the endpoints $\pm 1$; hence $f_0 (x) \notin \text{cl}^{[-1,1]}(f_0 (\mathbb{R} \backslash U))$.
\\ Then $f_0(x) \in V_0 := [-1,1] \backslash \text{cl}^{[-1,1]}(f_0 (\mathbb{R} \backslash U))$, which is open in $[-1,1]$.
\\ The set $V := (V_0 \times \prod_{n \neq 0} [-1,1]) \cap k(\mathbb{R})$ is open in $k(\mathbb{R})$, and we have
$$k^{-1} (V) = \{x \in \mathbb{R}: k(x) \in V\} = \{ x \in \mathbb{R}: k(x) (0) \in V_0 \} = \{x \in \mathbb{R}: f_0 (x) \in V_0 \} = f_0 ^{-1} (V_0),$$
so $x \in f_0 ^{-1} V_0 = k^{-1}(V)$ shows that $k(x) \in V$. \\ Finally, note that $k^{-1}(V) \subseteq U$, since $f_0 (\mathbb{R} \backslash U) \subseteq [-1,1] \backslash V_0$ implies $f_0^{-1}(V_0)=\mathbb{R} \backslash f_0^{-1}([-1,1] \backslash V_0) \subseteq U$;
\item $\text{cl}^{\gamma \mathbb{R}} (k(\mathbb{R})) = \gamma \mathbb{R}$ holds. \end{itemize}
\end{proof}
\noindent Next, let us show that each $f_n$ does extend continuously onto $ \langle k, \gamma \mathbb{R} \rangle$.
\begin{lem} \label{extendo1} Let $f_n: \mathbb{R} \rightarrow \mathbb{R}, \ x \mapsto \cos(nx)$, where $n \geq 1 $. Then there exists a continuous function $\gamma f_n : \gamma \mathbb{R} \rightarrow [-1,1]$ such that $\gamma f_n \circ k = f_n$. \end{lem}
\begin{proof} Simply define $\gamma f_n : \gamma \mathbb{R} \rightarrow [-1,1], y \mapsto y(n)$. This is a projection, so it is continuous. \\ Furthermore, for all $x \in \mathbb{R}$, we have $\gamma f_n \circ k (x) = k(x) (n)= f_n (x)$. \end{proof}
\noindent This already gives us the result that whenever $f: \mathbb{R} \rightarrow \mathbb{R}$ is of the form $f(x) = \cos(nx)$ for some $n \in \mathbb{Z}$, there exists a continuous function $\gamma f : \gamma \mathbb{R} \rightarrow \mathbb{R}$ such that $\gamma f \circ k = f$.
\noindent Next, we show that $ \langle k, \gamma \mathbb{R} \rangle$ is the smallest compactification to which $f_n$ extends continuously for each $n \geq 0$.
\begin{prop} Suppose $\langle l, \delta \mathbb{R} \rangle $ is a Hausdorff compactification of $\mathbb{R}$ that has the following property: for each $n \geq 0 $, there exists a continuous function $\delta f_n : \delta \mathbb{R} \rightarrow [-1,1]$ such that $\delta f_n \circ l = f_n$. Then $ \langle k, \gamma \mathbb{R} \rangle \leq \langle l, \delta \mathbb{R} \rangle$. \end{prop}
\begin{proof} Define $F : \delta \mathbb{R} \rightarrow \gamma \mathbb{R} $ as follows: for each $y \in \delta \mathbb{R}$ and $n \in \mathbb{N}$, let $F(y)(n)= \delta f_n (y)$. Clearly, $F \circ l = k$, since for all $x \in \mathbb{R}$, $$F \circ l (x) (n) = F (l (x)) (n) = \delta f_n (l(x)) = f_n(x)= k (x) (n).$$
\noindent It remains to see that $F$ is continuous. \\ Recall that $\gamma \mathbb{R} \subseteq \prod_{n \in \mathbb{N}} [-1,1]$, and a subbasic open set in $\prod_{n \in \mathbb{N}}[-1,1]$ has the form $U_n \times \prod _{m \neq n} [-1,1]$, where $U_n$ is open in $[-1,1]$. Let $U = (U_n \times \prod _{m \neq n} [-1,1]) \cap \gamma \mathbb{R} $, then $$F ^{-1} (U) = \{y \in \delta \mathbb{R}: F (y) \in U\}= \{y \in \delta \mathbb{R}: F (y) (n) \in U_n\} = \{y \in \delta \mathbb{R}: \delta f_n (y) \in U_n \} =(\delta f_n)^{-1}(U_n).$$ This is open, since $\delta f_n$ is continuous. \end{proof}
\noindent Note that this proposition does not quite tell us that $ \langle k, \gamma \mathbb{R} \rangle$ is the smallest compactification to which $f(x)=\cos(nx)$ extends continuously for each $n \in \mathbb{Z}$, because among the $f_n$ is the function $f_0(x)=\tanh(x)$, which we added to the family in order to construct $\gamma \mathbb{R}$. We did this so that the family would \textit{separate points and closed sets}; for a more general construction see Folland (1999). \\
\noindent Nevertheless, we shall show in the next section that this compactification is genuinely different from the ones we have seen before.
\iffalse \begin{defn} Let $X$ be a topological space. Let $\mathfrak{F}$ be a subset of $C(X,I)$, the set of continuous functions from $X$ to $[0,1]$. Say $\mathfrak{F}$ \textbf{separates points and closed sets} if for each closed subset $E \subseteq X$ and each $x \in X \backslash E$ there is some $f \in \mathfrak{F}$ such that $f(x) \notin \text{cl}^{[0,1]}(f(E))$. \end{defn}
\noindent Observe that each such family $\mathfrak{F}$ gives rise to a map $e: X \rightarrow \prod_{f \in \mathfrak{F}} I$.
\begin{prop}[Folland, 1999] Let $X$ be Tychonoff, and $\mathfrak{F} \subseteq C(X,I)$ separate points and closed sets. \end{prop}
\noindent Each such family $\mathfrak{F}$ gives rise to a compactification of $X$. In particular, the Stone-$\check{\text{C}}$ech compactification is the special case of when $\mathfrak{F}$ is the set of all continuous functions from $X$ to $[0,1]$, meaning it really is a $T_2$ compactification. \fi \section{A genuinely new compactification}
The compactification we have just constructed is genuinely different from the one-point, two-point, and Stone-$\check{\text{C}}$ech compactifications of $\mathbb{R}$. It cannot be the one-point or two-point compactification, because the function $\mathbb{R} \rightarrow \mathbb{R}, x \mapsto \text{cos}(x)$ does not extend continuously to either of these:
\begin{prop} Let $f: \mathbb{R} \rightarrow \mathbb{R}, x \mapsto \text{cos}(x)$. There is no continuous function extending $f$ to either the Alexandroff one-point compactification or the two point compactification of $\mathbb{R}$. \end{prop}
\begin{proof} Suppose for a contradiction we did have such an extension $\tilde{f}$. \\ \noindent One way to write the one-point compactification is as $\langle i_1, S^1 \rangle$ where $i_1: \mathbb{R} \rightarrow S^1, x \mapsto (\frac{2x}{1+x^2},\frac{x^2-1}{1+x^2})$. Note that $\text{lim}_{n \rightarrow \infty} i_1(n)=(0,1)$. Hence, by the continuity of $\tilde {f}$ we would have
$$\tilde {f} (0,1) = \tilde {f} (\text{lim}_{n \rightarrow \infty} i_1(n))= \text{lim}_{n \rightarrow \infty} (\tilde {f} \circ i_1)(n) =\text{lim}_{n \rightarrow \infty} \text{cos}(n), $$
\noindent but this does not exist in $\mathbb{R}$. \\
\noindent Similarly, we may write the two-point compactification as $\langle i_2, [-1,1] \rangle$ where $i_2: \mathbb{R} \rightarrow [-1,1], x \mapsto \text{tanh}(x)$. Now $\text{lim}_{n \rightarrow \infty} i_2(n)=1$, so
$$\tilde {f} (1) = \tilde {f} (\text{lim}_{n \rightarrow \infty} i_2(n))= \text{lim}_{n \rightarrow \infty} (\tilde {f} \circ i_2)(n) =\text{lim}_{n \rightarrow \infty} \text{cos}(n) $$
\noindent but again this limit does not exist in $\mathbb{R}$, so we reach a contradiction.
\end{proof}
\noindent To show that $\gamma \mathbb{R}$ is not homeomorphic to $\beta \mathbb{R}$, we will show that the former is metrisable while the latter is not. \\
\noindent The following is a standard result.
\begin{lem} A countable product of metric spaces is metrisable. \end{lem}
\begin{proof} The result is easy for finite products. (Alternatively, if you like, it is deducible from the case of countably infinite products, by setting all-but-finitely-many of the factors to be singletons.) \\
\noindent Let $\{(X_n, d_n): n \in \mathbb{N}\}$ be a countably infinite family of metric spaces.
\begin{itemize}
\item[] \textbf{Claim}: We may assume each $d_n$ is bounded above by $1$.
\item[] \textbf{Proof}: To prove the claim, it is enough to see that any metric $d$ on any space $X$ has an equivalent metric $d'$ defined by $d'(x,y)=\text{min}\{1,d(x,y)\}$. This is a metric on $X$:
\begin{itemize}
\item it is non-negative, and zero if and only if $x=y$;
\item it is symmetric in its variables;
\item $\text{min}\{1,d(x,z)\} \leq \text{min}\{1,d(x,y)\} +\text{min}\{1,d(y,z)\}$.
\\ If $d(x,y), d(y,z) \leq 1$, then $$\text{min}\{1,d(x,z)\} \leq d(x,z) \leq d(x,y) + d(y,z)=\text{min}\{1,d(x,y)\} +\text{min}\{1,d(y,z)\}.$$
Otherwise, without loss $d(x,y)> 1$, then $$\text{min}\{1,d(x,z)\} \leq 1 = \text{min}\{1,d(x,y)\} \leq \text{min}\{1,d(x,y)\} +\text{min}\{1,d(y,z)\}.$$
\end{itemize}
$d'$ induces the same topology as $d$ does on $X$.
Indeed, write $B^d_r(x)$ and $B^{d'}_r(x)$ for the open balls of radius $r$ centered at $x$, with respect to $d$ and $d'$ respectively. Since $d' \leq d$, we certainly have $B^d_r(x) \subseteq B^{d'}_r(x)$ for all $r >0, x\in X$, so the topology induced by $d$ is finer than that induced by $d'$. On the other hand, for all $r >0, x\in X$, we have $B^{d'}_{r'}(x) \subseteq B^{d}_r(x)$ where $r'=\text{min}\{1,r\}$. Indeed, suppose $y \in B^{d'}_{r'}(x)$. Then $\text{min}\{1,d(x,y)\} < \text{min}\{1,r\}$, so $d(x,y) < r$. \end{itemize}
\noindent By the claim, we may assume each $d_n$ is bounded above by $1$, so it makes sense to define, for $x,y \in \prod_{n \in \mathbb{N}}X_n$, $$d(x,y) = \Sigma_{n \in \mathbb{N}} \frac{d_n(x(n),y(n))}{2^n},$$ since this series converges to a value no greater than the convergent series $\Sigma_{n \in \mathbb{N}} \frac{1}{2^n}=2$.
\begin{itemize}
\item[] \textbf{Claim}: $d$ defined above is a metric on the product space $\prod_{n \in \mathbb{N}}X_n$.
\item[] \textbf{Proof}:
\begin{itemize}
\item it is non-negative, and zero if and only if every term in the series is zero, if and only if $x$ and $y$ agree on every component, if and only if $x=y$;
\item it is symmetric in its variables;
\item $\Sigma_{n \in \mathbb{N}} \frac{d_n(x(n),z(n))}{2^n} \leq \Sigma_{n \in \mathbb{N}} \frac{d_n(x(n),y(n))}{2^n} +\Sigma_{n \in \mathbb{N}} \frac{d_n(y(n),z(n))}{2^n}$ follows immediately from the triangle inequalities for the individual $d_n$.
\end{itemize} \end{itemize}
\noindent It remains to check that this metric induces the usual product topology on $\prod_{n \in \mathbb{N}}X_n$. \\
\noindent Given $r>0, N \in \mathbb{N},$ and $x,y \in \prod_{n \in \mathbb{N}}X_n$, we certainly have $d_N(x(N),y(N)) < r$ whenever $d(x,y) < \frac{r}{2^N}$. Therefore, the projections $\pi_N: (\prod_{n \in \mathbb{N}}X_n,d) \rightarrow (X_N,d_N)$ are continuous with respect to these metrics. Therefore the topology $\tau_d$ induced by $d$ on the product space is finer than the Tychonoff topology $\tau$. One way to see this is via the universal property of the product: the projection maps $\pi_N: (\prod_{n \in \mathbb{N}}X_n,d) \rightarrow (X_N,d_N)$ give rise to a unique continuous map $i: (\prod_{n \in \mathbb{N}}X_n,\tau_d) \rightarrow (\prod_{n \in \mathbb{N}}X_n,\tau)$ such that $i \circ \pi_N = \pi_N$ for each $N$. Of course, setting $i$ to be the identity map satisfies this equation, and therefore we must have that the identity is continuous as a map $(\prod_{n \in \mathbb{N}}X_n,\tau_d) \rightarrow (\prod_{n \in \mathbb{N}}X_n,\tau)$. In particular, taking the preimage of each open set under the identity map, we see that $\tau \subseteq \tau_d$. \\
\noindent On the other hand, we show that any open set $U$ in $(\prod_{n \in \mathbb{N}}X_n,\tau_d)$ is also open in the Tychonoff topology. Let $x \in U$. There is some $r >0$ with $B^d_r(x) \subseteq U$. Choose some $k$ large enough so that $\Sigma_{n=k+1}^\infty \frac{1}{2^n}=\frac{1}{2^k}<\frac{r}{2}$. \\ For each $n \in \{0, \ldots, k\}$, define $U_n = B^{d_n}_{r / 4}(x(n))$. Then, $$x \in \bigcap_{n=0}^k\pi_n^{-1}(U_n) \subseteq B^d_r(x) \subseteq U.$$ Indeed, whenever $y \in \bigcap_{n=0}^k\pi_n^{-1}(U_n)$, we have $d_n(x(n),y(n)) < r / 4$ for each $n \in \{0, \ldots, k\}$, so $$d(x,y) = \Sigma_{n=0}^k \frac{d_n(x(n),y(n))}{2^n} + \Sigma_{n=k+1}^\infty \frac{d_n(x(n),y(n))}{2^n} < \frac{r}{2} + \frac{r}{2} = r.$$
\noindent Since $\bigcap_{n=0}^k\pi_n^{-1}(U_n) \in \tau$, we have shown that $U$ is open in the Tychonoff topology, as required! \end{proof}
\begin{cor} $\gamma \mathbb{R}$ is metrisable. \end{cor}
\begin{proof} $\gamma \mathbb{R}$ can be embedded into the product $\prod_{n \in \mathbb{N}} [-1,1]$, which by the lemma above can be given a metric space structure. Identifying $\gamma \mathbb{R}$ with its image in the product space, it will inherit the subspace metric induced by the metric on the product space. \end{proof}
\begin{lem} \label{nomax} A non-compact Tychonoff space has no maximal metrisable $T_2$ compactifications. \end{lem}
\begin{proof} Suppose $X$ is a non-compact Tychonoff space, and $\langle m , \eta X \rangle$ is a metrisable $T_2$ compactification. We construct another compactification that is strictly larger. $m(X)$ is homeomorphic to $X$, hence non-compact, hence $m(X) \neq \eta X$. Pick any $x \in \eta X \backslash m(X)$. Since $\text{cl}^{\eta X}(m(X))=\eta X$, there is a sequence $(x_n)$ of distinct points in $m(X)$ converging to $x$ in the metric $d$ on $\eta X$. (The open ball $B_1(x)$ must meet $m(X)$ at some point $x_1$; the open ball $B_{\text{min}\{2^{-i},d(x,x_i)\}}(x)$ must meet $m(X)$ at some point $x_{i+1}$ for each $i \geq 1$. In this way we construct an infinite sequence of distinct points of $m(X)$ whose distance to $x$ tends to $0$.) \\
\noindent Consider the disjoint subsets $S_0= \{x_i: i\text{ is even}\}$ and $S_1= \{x_i: i\text{ is odd}\}$ of $m(X)$. Each $S_i$ is closed in $m(X)$, since no sequence in $S_i$ has a limit in $m(X)$. (If the limit of such a sequence existed, it would have to be $x$, but this is in $\eta X \backslash m(X)$.) Since $m(X)$ is a subset of the metric space $\eta X$, it is metrisable and hence normal, so by Urysohn's Lemma there exists a continuous function $F: m(X) \rightarrow [0,1]$ such that $F(S_0)=\{0\}, F(S_1)=\{1\}$. This function does not extend continuously to $\eta X$. For if it did, then we would have $$F(x)=F(\text{lim}_{n \rightarrow \infty}x_{2n})=\text{lim}_{n \rightarrow \infty}F(x_{2n})=0,$$ and similarly $$F(x)=F(\text{lim}_{n \rightarrow \infty}x_{2n+1})=\text{lim}_{n \rightarrow \infty}F(x_{2n+1})=1,$$ which together give a contradiction. \\
\noindent Consider the function $m': X \rightarrow \eta X \times [0,1], s \mapsto (m(s),F(m(s)))$. This is continuous since both of its components are continuous. Then $\langle m', \tilde{X} \rangle $ where $\tilde{X} = \text{cl}^{\eta X \times [0,1]} (m'(X)) $ is a $T_2$ compactification of $X$: \begin{itemize}
\item $\tilde{X}$ is compact and $T_2$, since it is a closed subspace of the compact $T_2$ space $\eta X \times [0,1]$;
\item $m'$ is injective since its first component is injective;
\item $m'$ is continuous;
\item $m'^{-1}$ is continuous as the composition of the first projection $\pi_1: m'(X) \rightarrow \pi_1(m'(X))$ and the map $m^{-1}: m(X) \rightarrow X$;
\item $\text{cl}^{\tilde{X}} (m'(X)) = \tilde{X}$ holds. \end{itemize}
\noindent This compactification is larger than $\langle m , \eta X \rangle$, because there exists a continuous function $\pi_1: \tilde{X} \rightarrow \eta X$ such that $\pi_1 \circ m' = m$; this is simply the first projection $\pi_1: (z,t) \mapsto z$. (It is onto, since its image is compact, hence closed, and contains the dense set $m(X)$.) \\
\noindent On the other hand, $F$ extends continuously to $\tilde{X}$; consider $\tilde{F}: \tilde{X} \rightarrow [0,1], \ (z,t) \mapsto t$. This is just the second projection, so it is continuous, and we have $\tilde{F} \circ m'= F \circ m$. Since $F$ did not extend continuously to $\eta X$, we conclude that there is no homeomorphism from $\eta X$ to $\tilde{X}$ (or else we could compose $\tilde{F}$ with such a homeomorphism to get an extension of $F$ to $\eta X$). In particular, $\tilde{X}$ is a strictly larger compactification of $X$.
\end{proof}
\begin{cor} For any non-compact Tychonoff space $X$, the Stone-$\check{\text{C}}$ech compactification $\langle h,\beta X \rangle$ is not metrisable. \end{cor}
\begin{proof} $\beta X$ is maximal among \textit{all} compactifications, hence if it were metrisable it would be maximal among all metrisable compactifications, contradicting Lemma \ref{nomax}. \end{proof}
\begin{cor} $\beta \mathbb{R}$ is not metrisable. \end{cor}
\begin{proof} $\mathbb{R}$ is a non-compact metric space! \end{proof}
\noindent Now, clearly $\beta \mathbb{R}$ cannot be homeomorphic to $\gamma \mathbb{R}$, since one is metrisable and the other is not. We therefore obtain our desired result: \begin{cor} $\gamma \mathbb{R}$ is not homeomorphic to $\beta \mathbb{R}$. \end{cor}
\section{Uncountably many compactifications of $\mathbb{R}$}
\noindent Our final task is to show that there are uncountably many different $T_2$ compactifications of $\mathbb{R}$. \\ For this, we introduce the concept of the \textit{inverse limit} (which really is a limit, in the categorical sense) of a sequence of spaces with maps between them.
\begin{defn} Suppose that $\langle X_n, d_n \rangle$, for $n \in \mathbb{N}$, is a pair such that $X_n$ is a topological space, and $d_n: X_{n+1} \rightarrow X_n$ is continuous. \\ The \textbf{inverse limit} $\langle X_\omega, d_{\omega,n} \rangle$ of the sequence $\langle \langle X_n, d_n \rangle: n \in \mathbb{N} \rangle$ is defined as follows. Let $$X_\omega = \{ x \in \prod_{n \in \mathbb{N}} X_n : \forall n, \ x(n) = d_n (x(n+1)) \},$$ and $d_{\omega,n}= \pi_n: X_\omega \rightarrow X_n$ be the restriction of the $n^{th}$ projection to $X_\omega$, so $d_{\omega,n}(x)=\pi_n(x)=x(n)$ for each $x \in X_\omega$. \end{defn}
\noindent Observe that each $d_{\omega,n}$ is continuous, as the restriction of a continuous function. Observe also that for each $n$, $d_{\omega,n}= d_n \circ d_{\omega,n+1}$, since $$d_n \circ d_{\omega,n+1} (x)= d_n (x(n+1)) = x(n) = d_{\omega,n} (x).$$
\noindent We now give a property that characterises the inverse limit.
\begin{prop} \label{limit} Suppose $\langle Y, \langle g_n: n\in \mathbb{N} \rangle \rangle$ is any pair such that $Y$ is a topological space, each $g_n: Y \rightarrow X_n$ is continuous, and for all $n$, $g_n = d_n \circ g_{n+1}$. Then there is a continuous function $g: Y \rightarrow X_\omega$ such that for all $n$, $g_n = d_{\omega, n } \circ g$. \end{prop}
\begin{proof} Simply define $g: Y \rightarrow X_\omega, \ y \mapsto x$ where $x(n)=g_n(y)$. This is well-defined, because for each $n$ we have $x(n) = d_n (x(n+1))$: $$x(n) = g_n(y) = d_n \circ g_{n+1} (y)= d_n ( g_{n+1} (y)) = d_n (x(n+1)).$$ We also have $g_n = d_{\omega, n } \circ g$, because $$d_{\omega, n } \circ g (y)= d_{\omega, n } (x) = x(n) = g_n(y).$$ It only remains to show that $g$ is continuous. We show that the preimage under $g$ of each subbasic open set is open. Let $U= U_j \times \prod_{n \neq j} X_n$, where $U_j$ is open in $X_j$. Then, $$g^{-1}(U \cap X_\omega)= \{y \in Y: g(y) \in U\}= \{y \in Y: g_j(y) \in U_j\}=g_j^{-1}(U_j).$$ This is the continuous preimage of an open set, hence it is open. \end{proof}
\noindent Let us make a few more easy observations. \\ Firstly, if each $d_n$ is onto, then each $d_{\omega, n}$ is onto. Indeed, given $x_n \in X_n$, we can recursively find $x_i \in X_i$ for each $i>n$ such that $d_{i-1}(x_i)=x_{i-1}$, by surjectivity of the $d_i$. We can also define, for $i<n$, $x_i=d_i(x_{i+1})$. Define $x \in X_\omega$ by $x(i)=x_i$ for every $i$; then $d_{\omega, n}(x)=x(n)=x_n$. \\
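The surjectivity argument can be illustrated with a toy inverse system in Python. (A sketch under assumed names: the spaces are $X_i=\{0,\dots,i\}$ with onto bonding maps $d_i(x)=\min(x,i)$; the helpers \texttt{d\_map} and \texttt{thread\_prefix} are ours, not notation from the text.)

```python
# Toy inverse system: X_i = {0, ..., i}, with onto bonding maps d_i(x) = min(x, i).
def d_map(i, x):            # d_i : X_{i+1} -> X_i
    return min(x, i)

def thread_prefix(n, xn, K):
    """Prefix x(0), ..., x(K) of an inverse-limit point with x(n) = xn, following
    the surjectivity argument: push down with the d_i below n, lift above n."""
    x = {n: xn}
    for i in range(n - 1, -1, -1):      # below n: x(i) = d_i(x(i+1))
        x[i] = d_map(i, x[i + 1])
    for i in range(n + 1, K + 1):       # above n: choose a preimage under d_{i-1};
        x[i] = x[i - 1]                 # x(i-1) itself works since x(i-1) <= i-1
    return [x[i] for i in range(K + 1)]

p = thread_prefix(5, 3, 10)
assert p[5] == 3
# The prefix satisfies the compatibility condition x(i) = d_i(x(i+1)).
assert all(d_map(i, p[i + 1]) == p[i] for i in range(10))
```

The two loops correspond exactly to the two halves of the construction: below $n$ the thread is forced, while above $n$ a preimage must be chosen at each step.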
\noindent Also, if all of the spaces $X_n$ are compact Hausdorff, then $X_\omega$ is compact Hausdorff. Indeed, $\prod_{n \in \mathbb{N}} X_n$ is Hausdorff and compact by Tychonoff's theorem, so if we know that $X_\omega$ is a closed subspace, then it is Hausdorff and compact. It remains to see that $X_\omega$ is closed in $\prod_{n \in \mathbb{N}} X_n$. Well, $$X_\omega = \bigcap_{N \in \mathbb{N}} \{ x \in \prod_{n \in \mathbb{N}} X_n : \ x(N) = d_N (x(N+1)) \}= \bigcap_{N \in \mathbb{N}} \psi_N^{-1}(\Delta_{X_N \times X_N}),$$ where $\psi_N:\prod_{n \in \mathbb{N}} X_n \rightarrow X_N \times X_N, \ x \mapsto (\pi_N(x), d_N \circ \pi_{N+1}(x))=(x(N),d_N(x(N+1)))$ is continuous, since each component is continuous in $x$. Since $X_N$ is Hausdorff, the diagonal $\Delta_{X_N \times X_N}= \{(x,x): x \in X_N\}$ is closed in $X_N \times X_N$. Therefore each $\psi_N^{-1}(\Delta_{X_N \times X_N})$ is closed as the preimage of a closed set under a continuous function. Hence $X_\omega$ is closed, as the intersection of closed sets. \\
\begin{lem} \label{max} If $\langle \langle g_n, \delta_n \mathbb{R} \rangle: n \in \mathbb{N} \rangle$ is a sequence of metrisable $T_2$ compactifications of $\mathbb{R}$ such that for all $n$, $\delta_n \mathbb{R} \leq \delta_{n+1} \mathbb{R}$, then there exists a metrisable $T_2$ compactification $\delta_\omega \mathbb{R}$ of $\mathbb{R}$ such that for all $n$, $\delta_n \mathbb{R} \leq \delta_\omega \mathbb{R}$. \end{lem}
\begin{proof} By assumption, for each $n$ there is a continuous onto function $d_n: \delta_{n+1} \mathbb{R} \rightarrow \delta_n \mathbb{R}$ such that $d_n \circ g_{n+1} = g_n$. Let us take the inverse limit of the system $\langle \langle \delta_n \mathbb{R}, d_n \rangle: n \in \mathbb{N} \rangle$. Call it $\langle X_\omega, d_{\omega,n} \rangle$. By definition $X_\omega$ is a subspace of a countable product of the spaces $\delta_n \mathbb{R}$, and is therefore metrisable by metrisability of each of the $\delta_n \mathbb{R}$. We have remarked above that $X_\omega$ must be compact Hausdorff, since each individual space $\delta_n \mathbb{R}$ is. Since we have a pair $\langle \mathbb{R}, \langle g_n: n\in \mathbb{N} \rangle \rangle$ such that $\mathbb{R}$ is a topological space, each $g_n: \mathbb{R} \rightarrow \delta_n\mathbb{R}$ is continuous, and for all $n$, $g_n = d_n \circ g_{n+1}$, by Proposition \ref{limit} there is a continuous function $g: \mathbb{R} \rightarrow X_\omega$ such that for all $n$, $g_n = d_{\omega, n } \circ g$. Let $\delta_{\omega} \mathbb{R} = \text{cl}^{X_\omega}(g(\mathbb{R}))$. Then $\langle g, \delta_\omega \mathbb{R} \rangle$ is the desired compactification:
\begin{itemize}
\item $\delta_\omega \mathbb{R}$ is compact $T_2$ and metrisable, since it is a closed subspace of the compact $T_2$ and metrisable space $X_\omega$;
\item $g$ is injective since $g_0$ is injective;
\item $g$ is continuous by assumption;
\item Suppose $U$ is open in $\mathbb{R}$. We claim $g(U)$ is open in $g(\mathbb{R})$. Well, $d_{\omega, 0} \circ g(U) = g_0 (U)$ is open in $\delta_0 \mathbb{R}$. (It is open in $g_0 (\mathbb{R})$, which is in turn open in $\delta_0 \mathbb{R}$ as $\mathbb{R}$ is locally compact). Then, $d_{\omega, 0}^{-1} (g_0 (U))$ is open in $\delta_\omega \mathbb{R}$, and $g(U)= d_{\omega, 0}^{-1} (g_0 (U))\cap g(\mathbb{R})$ shows that $g(U)$ is open in $g(\mathbb{R})$. Altogether this shows that $g^{-1}$ is continuous;
\item $\text{cl}^{\delta_{\omega} \mathbb{R}} (g(\mathbb{R})) = \delta_{\omega} \mathbb{R}$ holds.
\item for all $n$, $\delta_n \mathbb{R} \leq \delta_\omega \mathbb{R}$. This is witnessed by the continuous functions $d_{\omega,n}: \delta_\omega \mathbb{R} \rightarrow \delta_n\mathbb{R}$. We have remarked that they are onto because the $d_n$ are onto; furthermore, for each $n$ we have $g_n = d_{\omega, n } \circ g$. \end{itemize} \end{proof}
\noindent We are almost ready to show that $\mathbb{R}$ has uncountably many (non-equivalent) $T_2$ compactifications. For this, let us recall Zorn's Lemma.
\begin{lem}[Zorn's Lemma] Let $\mathfrak{A} = (A, \leq)$ be a nonempty poset in which every nonempty chain has an upper bound. Then $\mathfrak{A}$ has a maximal element. \end{lem}
\begin{thm} $\mathbb{R}$ has uncountably many $T_2$ compactifications. \end{thm}
\begin{proof} Suppose $\mathbb{R}$ has only countably many $T_2$ compactifications. In particular $\mathbb{R}$ has only countably many \textit{metrisable} $T_2$ compactifications. We may assume without loss that there are countably infinitely many of these. (If there are only finitely many metrisable $T_2$ compactifications, then certainly one of these is maximal among all the others; this contradicts Lemma \ref{nomax}.) \\
\noindent Let the set of all metrisable $T_2$ compactifications of $\mathbb{R}$ be $\mathfrak{A}=\{\langle h_n, \delta_n \mathbb{R} \rangle: n \in \mathbb{N} \}$. (We write $\langle h_n, \delta_n \mathbb{R} \rangle$ for ease of notation, but we really mean its equivalence class $[\langle h_n, \delta_n \mathbb{R} \rangle ]$, of course.) \\ This is a poset. We show that it has a maximal element, by checking that it satisfies the conditions of Zorn's Lemma. $\mathfrak{A}$ is nonempty, since it contains $\langle k,\gamma \mathbb{R} \rangle$. Suppose $\mathfrak{C} \subseteq \mathfrak{A}$ is a nonempty chain. We need to exhibit an upper bound for $\mathfrak{C}$. We split into two cases:
\begin{itemize}
\item If $\mathfrak{C}$ has only finitely many elements, write these as $\langle h'_1, \delta'_1 \mathbb{R} \rangle \leq \ldots \leq \langle h'_r, \delta'_r \mathbb{R} \rangle $. Then $ \delta'_r \mathbb{R} $ is a greatest element of the chain, hence certainly an upper bound.
\item If $\mathfrak{C}$ has countably infinitely many elements, write $\mathfrak{C}=\{\langle h'_n, \delta'_n \mathbb{R} \rangle: n \in \mathbb{N} \}$. Let us assume without loss that $\mathfrak{C}$ has no maximal element. (A maximal element in a chain would also be a greatest element and hence an upper bound for the chain, so we would be done.) Note also that each nonempty finite subset $\mathfrak{C'}$ of $\mathfrak{C}$ is still a chain, and by the above case, $\mathfrak{C'}$ has a greatest element max$\{\mathfrak{C}'\}$. We now construct a sequence $\langle \langle h'_{n_i}, \delta'_{n_i} \mathbb{R} \rangle: i \in \mathbb{N} \rangle$ of compactifications in $\mathfrak{C}$ such that for all $i$, $\delta'_{n_i} \mathbb{R} \leq \delta'_{n_{i+1}} \mathbb{R}$.
\\
Let $n_0=0$. $\langle h'_{n_0}, \delta'_{n_0} \mathbb{R} \rangle$ is not a maximal element of the chain, so there is $n_1 > n_0$ with $\delta'_{n_0}\mathbb{R}<\delta'_{n_1}\mathbb{R}$.
\\ For $r >0$, at the $r^{th}$ stage consider the finite subchain $\mathfrak{C}'_r=\{\delta'_i \mathbb{R}: 0 \leq i \leq n_r\}$; max$\{\mathfrak{C}'_r\}$ is not a maximal element of $\mathfrak{C}$, so there is $n_{r+1}>n_r$ with max$\{\mathfrak{C}'_r\} < \delta'_{n_{r+1}}\mathbb{R}$.
\\
We have inductively defined a sequence $\langle \langle h'_{n_i}, \delta'_{n_i} \mathbb{R} \rangle: i \in \mathbb{N} \rangle$ such that for each $i$, $ \delta'_{n_i} \mathbb{R} \leq \delta'_{n_{i+1}} \mathbb{R}$. Therefore Lemma \ref{max} applied to this sequence $\langle \langle h'_{n_i}, \delta'_{n_i} \mathbb{R} \rangle: i \in \mathbb{N} \rangle$ (now considered as a sequence of actual compactifications rather than classes of these) tells us that there exists a metrisable $T_2$ compactification $\delta_{\omega} \mathbb{R}$ such that for each $i$, $\delta'_{n_i} \mathbb{R} \leq \delta_{\omega} \mathbb{R}$.
\\
$\delta_{\omega} \mathbb{R}$ is an element of $\mathfrak{A}$; let us show that it is an upper bound for $\mathfrak{C}$.
\\ Well, for each $r \in \mathbb{N}$ we have $n_r \geq r$ so $\delta'_r \mathbb{R}$ is among $\delta'_0 \mathbb{R}, \ldots, \delta'_{n_r} \mathbb{R}$. Therefore $$\delta'_r \mathbb{R} \leq \text{max} \{ \mathfrak{C}'_r \}< \delta'_{n_{r+1}} \mathbb{R} \leq \delta_{\omega} \mathbb{R},$$ as required. \end{itemize}
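The inductive choice of indices in the infinite-chain case can be sketched computationally. In the Python sketch below we model the chain by integer ``heights'' (a hypothetical stand-in for the compactification order; the function name \texttt{extract\_increasing} is our own), and pick each $n_{r+1}$ as the first index whose element strictly dominates $\text{max}\{\mathfrak{C}'_r\}$:

```python
def extract_increasing(heights):
    """Indices n_0 < n_1 < ... such that max(heights[:n_r + 1]) < heights[n_{r+1}]
    for each r, mimicking the choice of n_{r+1} above max{C'_r} in the proof."""
    idx = [0]
    for i in range(1, len(heights)):
        if heights[i] > max(heights[: idx[-1] + 1]):
            idx.append(i)
    return idx

vals = [3, 1, 4, 1, 5, 9, 2, 6]
idx = extract_increasing(vals)
assert idx == [0, 2, 4, 5]
# The subsequence is strictly increasing, as required by Lemma \ref{max}.
assert all(vals[idx[r]] < vals[idx[r + 1]] for r in range(len(idx) - 1))
```

Of course, on a finite list the extraction terminates; in the proof, the assumption that $\mathfrak{C}$ has no maximal element is exactly what guarantees the next index always exists.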
\noindent We have now shown that $\mathfrak{A}$ satisfies the conditions of Zorn's Lemma, and so has a maximal element $\langle h_{\text{max}},\delta_{\text{max}} \mathbb{R} \rangle$. That is, $\langle h_{\text{max}},\delta_{\text{max}} \mathbb{R} \rangle$ is maximal among all metrisable $T_2$ compactifications of $\mathbb{R}$. \\
\noindent This contradicts Lemma \ref{nomax}. Therefore $\mathbb{R}$ could not have only countably many $T_2$ compactifications! \end{proof}
\section{Conclusion}
\noindent In Section 2, for the problem of finding a compactification of $\mathbb{R}$ to which the family $f_n(x)=\cos(nx)$ extended continuously, we could have gone a different route by defining $\langle k, \gamma \mathbb{R} \rangle $ as follows. Take $$k: \mathbb{R} \rightarrow [-1,1] \times [-1,1], \ x \mapsto (\tanh(x),\cos(x)),$$ and let $\gamma \mathbb{R}$ be the closure of the image of $k$ in $[-1,1] \times [-1,1]$. Indeed, a bit of thought shows that if we have found a compactification $\langle k, \gamma \mathbb{R} \rangle $ to which $f_1(x) = \cos(x)$ extends continuously, then for each $n \in \mathbb{Z}$, $f_n(x) = \cos(nx)$ will also extend continuously. \\
\noindent This relies on the fact that each $f_n(x)=\cos(nx)$ can be expanded as a polynomial $T_n$ in $\cos(x)$: $$\cos(nx) = T_n (\cos(x)),$$ where, in fact, $T_n$ is the $n^{th}$ Chebyshev polynomial. \\ Therefore, if we have a compactification $\langle k, \gamma \mathbb{R} \rangle $ and a continuous function $\gamma f_1 : \gamma \mathbb{R} \rightarrow \mathbb{R}$ such that $$\gamma f_1 \circ k = f_1,$$ then this would also yield, for each $n \in \mathbb{Z}$, a continuous function $\gamma f_n : \gamma \mathbb{R} \rightarrow \mathbb{R}$ such that $$\gamma f_n \circ k = f_n.$$ Simply take $\gamma f_n= T_n \circ \gamma f_1$: $$\gamma f_n \circ k= T_n \circ \gamma f_1 \circ k = T_n \circ f_1 = f_n.$$
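The identity $\cos(nx)=T_n(\cos(x))$ is easy to verify numerically. The following Python sketch (the helper name \texttt{T} is ours) evaluates $T_n$ via the standard three-term recurrence $T_{n+1}(t)=2tT_n(t)-T_{n-1}(t)$:

```python
import math

def T(n, t):
    """Chebyshev polynomial T_n(t), computed by the three-term recurrence
    T_0(t) = 1, T_1(t) = t, T_{n+1}(t) = 2 t T_n(t) - T_{n-1}(t)."""
    a, b = 1.0, t
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * t * b - a
    return b

# The identity cos(n x) = T_n(cos x) behind the extension of every f_n:
for n in range(8):
    for x in (0.0, 0.3, 1.7, -2.5):
        assert abs(T(n, math.cos(x)) - math.cos(n * x)) < 1e-9
```

For negative $n$ the identity follows from $\cos(-nx)=\cos(nx)$, so only $n\geq 0$ needs checking.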
\noindent The advantage of this approach is that we can instantly see this space is metrisable, as a subspace of $[-1,1] \times [-1,1]$. This means we do not need to rely on the result that a countable product of metric spaces is metrisable. \\
\noindent Notice also that we did not prove that our choice of $\langle k, \gamma \mathbb{R} \rangle $ was smallest among all compactifications to which the family $f_n(x)=\cos(nx)$ extends continuously -- this was not necessary for us to show that $\langle k, \gamma \mathbb{R} \rangle $ is distinct from the one-point, two-point, and Stone-$\check{\text{C}}$ech compactification.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Unifying theory of quantum state estimation \\ using past and future information}
\author{Areeya Chantasri\corref{cor1}} \ead{[email protected]} \address{Centre for Quantum Computation and Communication Technology (Australian Research Council), \\ Centre for Quantum Dynamics, Griffith University, Nathan, Queensland 4111, Australia} \address{Optical and Quantum Physics Laboratory, Department of Physics, \\Faculty of Science, Mahidol University, Bangkok, 10400, Thailand} \author{Ivonne Guevara\corref{}}
\author{Kiarn T. Laverick\corref{}}
\author{Howard M. Wiseman\corref{cor1}} \ead{[email protected]} \address{Centre for Quantum Computation and Communication Technology (Australian Research Council), \\ Centre for Quantum Dynamics, Griffith University, Nathan, Queensland 4111, Australia} \cortext[cor1]{Corresponding authors}
\begin{abstract} Quantum state estimation for continuously monitored dynamical systems involves assigning a quantum state to an individual system at some time, conditioned on the results of continuous observations. The quality of the estimation depends on how much observed information is used and on how optimality is defined for the estimate. In this work, we consider problems of quantum state estimation where some of the measurement records are not available, but where the available records come from both before (past) and after (future) the estimation time, enabling better estimates than is possible using the past information alone. Past-future information for quantum systems has been used in various ways in the literature, in particular, the quantum state smoothing, the most-likely path, and the two-state vector and related formalisms. To unify these seemingly unrelated approaches, we propose a framework for partially observed quantum systems with continuous monitoring, wherein the first two existing formalisms can be accommodated, with some generalization. The unifying framework is based on state estimation with expected cost minimization, where the cost can be defined either in the space of the unknown record or in the space of the unknown true state. Moreover, we connect all three existing approaches conceptually by defining five new cost functions, and thus new types of estimators, which bridge the gaps between them. We illustrate the applicability of our method by calculating all eight estimators we consider for the example of a driven two-level system dissipatively coupled to bosonic baths. Our theory also allows connections to classical state estimation, which create further conceptual links between our quantum state estimators. \end{abstract}
\date{\today}
\begin{keyword}
Quantum state estimation \sep Filtering \sep Smoothing \sep Continuous quantum measurement \sep Quantum trajectory \sep Cost functions \sep Stochastic processes
\end{keyword}
\end{frontmatter}
\tableofcontents
\section{Introduction}
Estimation problems deal with assigning appropriate values to quantities that are unknown because they are not completely accessible via observation. Given available information, such as observed measurement records and the relationships between the records and the quantities of interest, one can devise an optimal strategy to extract appropriate estimators of the unknowns. We are interested in dynamical systems, where quantities of interest at time $t$ can be estimated conditioned on the observation at other times. Examples are classical \emph{filtering} and \emph{smoothing} techniques, which give optimal estimators, typically minimizing the mean square error of the estimation~\cite{BookJazwinski,KalBuc1961,Kushner1964,FraPot1969,BookWienerSmt}. Filtering uses only records before time $t$ (\emph{past} records) for the estimation, while smoothing uses records both before and after $t$ (\emph{past-future} records). It has been shown that the smoothed estimate, using both past and future records, is statistically closer to the \emph{true} values of the estimated quantity, because it uses more information from the observation than the filtered estimate, and thus is preferable provided that real-time estimation is not required~\cite{Wheatley2010,Ivonne2015,Huang2018,Laverick2018,LavCha2019}.
Estimation techniques from classical cases have been adopted for quantum systems. However, the implementation is not always straightforward. For problems where the quantities to be estimated can be treated as ``classical'', such as parameters that affect quantum systems, techniques in classical parameter estimation (including classical filtering and smoothing) can be applied relatively straightforwardly~\cite{BookHelstrom,Tsangsmt2009,Wheatley2010}. However, for estimating quantities that are inherently ``quantum'', such as quantum states or observables, naively applying classical techniques can lead to strange results if operators representing the quantities of interest and the observed operators do not commute. This occurs when we include both past and future information in the estimation~\cite{Wise2002,Ivonne2015,Budini2017,LuisPsmooth2017,Budini2018b}, as a quantum system's observables at time $t$ do not commute with operators representing measurement results of the system at later times~\cite{BookWiseman}.
It is precisely this class of estimation problems, for \emph{quantum states and observables}, involving the use of past-future information that we are concerned with in this paper.
The uses of past-future information for quantum systems have been investigated since the 1950's~\cite{Watanabe1955} and is still a topic that creates debate~\cite{Vaidman2017,Tsang2019}. In this work, we identify three types of approaches in the literature, based on their mathematical similarity and underlying concepts. The main result of this paper is to unify these approaches, after first generalizing them by using the idea of optimal estimators defined with distinct \emph{cost functions}. The existing approaches are as follows, in chronological order.
The first category, which includes most of the work in the literature, we name the \emph{two-state formalism}. We chose this name to encompass a number of closely related formalisms~\cite{Watanabe1955,ABL1964,AV2002,Gammelmark2013} that use two states (one forward-in-time and one backward-in-time) to utilise the past-future information. This idea also led to the formulation of the theory of weak values~\cite{AAV1988}. The real part of the weak value has a simple interpretation as the ensemble-average of weak measurement results with appropriate preselection (forward-in-time state) and postselection (backward-in-time state). Here, a weak measurement is one whose result contains an arbitrarily small amount of information about a quantity of interest and which is thereby able to have an arbitrarily small measurement back-action upon the system. The two-state formalism is also useful within the context of classical smoothing theory, for estimating the probabilities of outcomes of measurements that are not weak, at some time between the preselection and postselection~\cite{Tsangsmt2009-2,Gammelmark2013}.
Finally, a smoothing technique for quantum observables has been formulated based on the least-mean-squared errors~\cite{Ohki15,Ohki19,Tsang2019}, analogous to standard classical smoothing, and shown to equal the real part of the weak value.
The second category is the \emph{most-likely path formalism}, which considers quantum state trajectories from all possible measurement records associated with initial and final boundary conditions. These trajectories result from continuous measurements, which can be thought of as infinitely many consecutive weak measurements, a time $\dd t$ apart, where the average measurement back-action scales as $\dd t$. The most-likely path is defined as the one with the maximum-likelihood measurement record. The formalism resembles the most-probable paths (also known by many closely related names) considered for classical stochastic processes~\cite{ZeiDem1987,DurBac1978,DAD2014}, which have been useful in computing transition rates between initial and final states~\cite{Dykman1994}. Similar techniques were developed for quantum systems, in particular, for diffusive-type continuous measurement in Refs.~\cite{Chantasri2013,chantasri2015stochastic}, where the boundary conditions are defined as initial and final quantum states. This use of past-future information is compatible with formulating the estimation problem in the space of all possible records because there is a one-to-one relation between a quantum-state path and a scalar measurement record.
The last category is the \emph{quantum state smoothing formalism}. This is similar to the most-likely path formalism in that it yields a quantum state following a trajectory, but is also related to the two-state formalism in that it is a quantum analogue of the classical state smoothing used in some of the approaches mentioned in the paragraph about that formalism above. Quantum state smoothing considers estimating unknown quantum states in a partially-observed quantum system scenario, where an observer has access to only some measurement records (observed records) and not to the rest (unobserved records). The smoothed quantum state is defined as an observer's estimate conditioned on the past-future observed information, which is a better estimate of unknown true states than the usual past-conditioned quantum trajectory~\cite{Ivonne2015,Budini2017,Budini2018b,chantasri2019,GueWis20}.
The formalism is a quantum analogue of classical state smoothing, in the sense that the smoothed quantum state reduces to its classical counterpart when initial conditions and dynamics of the system can be described probabilistically in a fixed basis~\cite{LavCha2020a}.
\begin{figure}
\caption{Diagrams showing eight different expected costs, for $\textbf{Q}_1$--$\textbf{Q}_8$, which give eight optimal estimators, connecting the existing formalisms (grey boxes): quantum state smoothing, the CDJ most-likely path formalism, and the two-state formalism. Expected costs in blue and pink boxes represent the costs defined in the quantum state space and the unknown record space, respectively. The connections are indicated by different line styles and colours, following the discussion in the text in Section~\ref{sec-sevenQSE}. Since the observed record is fixed in the optimization, we omit the $O$-dependence in the definition of true states. The labels $(\textbf{Q}_1)$--$(\textbf{Q}_8)$ are colour-coded consistently with the colours used for estimators in the later plots in Section~\ref{sec-example} and Table~\ref{tab-avecost}.}
\label{fig-diagramQ}
\end{figure}
Given these three very different existing formalisms, it is natural to ask whether they can somehow be related. For that purpose, here we introduce a unifying framework for quantum state estimation. The framework is built upon the scenario of an open quantum system with observed (O) and unobserved (U) records, with different optimal state estimators corresponding to different cost functions (see Figure~\ref{fig-diagramQ}). Of the above three categorized formalisms, the last two --- most-likely path formalism and quantum state smoothing --- which directly involve quantum states, can be generalized and rederived by minimizing expected cost functions. The expected costs are conditioned on the past-future information of the observed record and are defined in the record space (the most-likely path formalism) and in the state space (the quantum state smoothing). For the first category (the two-state formalism), even though it does not seek to estimate a quantum state in a conventional sense, we exploit a state-like quantity as originally proposed by Tsang~\cite{Tsangsmt2009}, which we call the smoothed weak-value state~\cite{LavCha2019}. This is defined in our theory by using a cost that relates to estimating weak measurement results. Having established this unifying framework, we also introduce other costs and their associated estimators, showing conceptual connections between all three formalisms, shown as $\textbf{Q}_1$--$\textbf{Q}_8$ in Figure~\ref{fig-diagramQ}. Such a framework can also be applied to classical estimators, which helps in solidifying the structure of the quantum estimators we construct, as shown in Figure~\ref{fig-diagramQ}.
We start by explaining the required types of estimation problems, \textit{i.e.}, configuration estimation, classical state estimation, quantum state estimation, and record estimation, in Section~\ref{sec-estimation}. We then review the three categories of formalisms in more detail, in a way that allows us to concurrently introduce the partially-observed quantum system with observed, unobserved, and both records, in Section~\ref{sec-existing}. In Section~\ref{sec-sevenQSE} we describe all eight types of estimators which form the unifying theory, show various equivalences for classical state estimators, and investigate how to calculate the expected costs for an arbitrary (not necessarily optimal) estimator. We show how all eight estimators, and many of the costs, can be calculated explicitly using the example of a single qubit coupled to bosonic baths in Section~\ref{sec-example}. We conclude with a discussion of future work and open questions in Section~\ref{sec-discussion}. Details of the calculations, numerical methods, and glossary of abbreviations and acronyms are presented in Appendices \ref{sec-app-pdf}--\ref{Glossary}.
\section{Estimation and cost functions}\label{sec-estimation} In estimation problems, it is usually assumed that definite \emph{true} values of unknown quantities exist, but are unknown to an observer. Given the data from observation, one can devise a systematic approach to best guess the hidden quantities, based on some measure of how far the observer's guesses are from the true quantities. In this section, we develop this concept as cost-minimization estimation. We start with the conventional case of estimating classical variables, which we refer to as \emph{configurations}, and then generalize to the estimation of probability distributions of such configurations, which we refer to as classical \emph{states}. This enables us to naturally introduce the optimal quantum state estimation and the unknown record estimation, which will be used intensively in the rest of the paper.
\subsection{Configuration estimation}\label{sec-CFE} The most common type of estimation problems is the configuration estimation. Consider an example of estimating a vector ${\bf x}^{\rm true}$ of $d$ parameters, representing the system's true configuration that is unknown. The true configuration can take any value ${\bf x}$ in a set denoted by ${\mathbb X}$, which could be a set of continuous or discrete parameters. Using data from observation and prior knowledge, an observer, named Alice, refines her knowledge of such parameters, in the form of a probability distribution of true configurations, $\wp_{\text{D}}({\bf x})$, where the subscript `D' refers to conditioning on the relevant data (either observations or constraints). Given this posterior knowledge, an \emph{optimal estimate of the configuration} is defined as an estimator $\est{\bf x}$ that minimizes an expected cost, \begin{align}\label{eq-gencostx}
\est{\bf x} = \argmin_{{\bf x} \in \mathbb X} \left\langle c\left({\bf x},{\bf x}^{\rm true} \right) \right\rangle_{{\bf x}^{\rm true}|{\rm D}}, \end{align} where the expected cost is a cost function $c\left({\bf x},{\bf x}^{\rm true} \right)$ averaged over possible true configurations ${\bf x}^{\rm true}$ conditioned on `D', and can be written explicitly as \begin{align}\label{eq-confest-int}
\left\langle c\left({\bf x},{\bf x}^{\rm true} \right) \right\rangle_{{\bf x}^{\rm true}|{\rm D}} \equiv \int\!\!{\rm d}{\bf x}' \,\wp_{\text{D}}({\bf x}') c\left({\bf x},{\bf x}' \right), \end{align} using ${\bf x}'$ as a dummy variable. This integral can be replaced by a sum (with associated probability weights), if the possible values of ${\bf x}^{\rm true}$ are discrete. Note that, for the rest of the paper, we will use `$\star$' to indicate estimators, and use unadorned variables for dummy variables or arguments of $\argmax$ and $\argmin$ functions.
In this work, we are most interested in two common costs for configuration estimates: Square Deviation (SD) and negative Equality (nE). The estimators that minimize these expected costs are the well-known Bayesian mean estimator (BME) and the maximum likelihood estimator (MLE), respectively. That is, the BME minimizes the expected SD cost (also referred to as the mean square error), \begin{align}\label{eq-confestBME1}
\est{\bf x}_{\rm SD} = & \argmin_{{\bf x} \in \mathbb X} \left\langle \left({\bf x} - {\bf x}^{\rm true}\right)^{\top} \left({\bf x} - {\bf x}^{\rm true}\right) \right\rangle_{{\bf x}^{\rm true}|{\rm D}} \nonumber \\
= & \left\langle {\bf x}^{\rm true} \right\rangle_{{\bf x}^{\rm true}|{\rm D}} \equiv \est{\bf x}_{\rm BME}. \end{align} This result can be obtained by taking the derivative of the expected cost in the first line of Eq.~\eqref{eq-confestBME1} with respect to ${\bf x}$ and setting it to zero. Similarly, the MLE minimizes an expected nE cost, \begin{align}\label{eq-mleprobmax}
\est{\bf x}_{\rm nE} = & \argmin_{{\bf x} \in \mathbb X} \left\langle -\delta^{(d)}\!\left({\bf x} - {\bf x}^{\rm true}\right) \right\rangle_{{\bf x}^{\rm true}|{\rm D}} \nonumber \\ =& \argmin_{{\bf x} \in {\mathbb X}}\,[ - \wp_{\text{D}}({\bf x}) ], \nonumber \\ =& \argmax_{{\bf x} \in {\mathbb X}}\,\wp_{\text{D}}({\bf x}) \equiv \est{\bf x}_{\rm MLE}, \end{align} where we have used $\delta^{(d)}(\cdots)$ as a $d$-dimensional Dirac $\delta$-function, which can be replaced by the Kronecker delta $\delta_{{\bf x},{\bf x}^{\rm true}}$ for a discrete-variable binary cost function. To obtain the last line of Eq.~\eqref{eq-mleprobmax}, we use the definition of an expected cost in Eq.~\eqref{eq-confest-int}, where minimizing $-\wp_{\text{D}}({\bf x})$ is equivalent to maximizing $\wp_{\text{D}}({\bf x})$.
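As a concrete illustration of Eqs.~\eqref{eq-confestBME1} and \eqref{eq-mleprobmax}, the following minimal Python sketch (the discrete posterior is a hypothetical choice, and the SD estimate is treated as real-valued) confirms that the posterior mean and the posterior mode minimize the expected SD and nE costs, respectively.

```python
import numpy as np

# Illustrative discrete posterior wp_D over configurations x (hypothetical
# numbers, chosen only so that the two estimators differ).
xs = np.array([0.0, 1.0, 2.0, 3.0])
wp_D = np.array([0.1, 0.2, 0.5, 0.2])   # normalized: sums to 1

# BME: the minimizer of the expected square-deviation cost is the posterior mean.
x_BME = np.sum(wp_D * xs)

# MLE: the minimizer of the expected negative-equality cost is the posterior mode.
x_MLE = xs[np.argmax(wp_D)]

# Brute-force check of the SD case, treating the estimate as real-valued:
# scan candidate estimates and confirm the expected cost is minimized at the mean.
candidates = np.linspace(0.0, 3.0, 301)
costs = [np.sum(wp_D * (c - xs) ** 2) for c in candidates]
assert abs(candidates[np.argmin(costs)] - x_BME) < 1e-2

print(round(x_BME, 6), x_MLE)   # -> 1.8 2.0
```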
\subsection{Classical state estimation}\label{sec-CSE} The optimality concept in configuration estimation can be applied to the estimation of classical states, which are probability density functions (PDFs) of unknown configurations. In configuration estimation, we assumed that the true configuration of the system does exist as ${\bf x}^{\rm true}$. Therefore, if an observer has perfect knowledge of the true configuration, the observer's classical state of knowledge should be a \emph{true classical state}, which is either the Dirac $\delta$-function $\wp^{\rm true}({\bf x}) = \delta^{(d)}\!\left({\bf x} - {\bf x}^{\rm true}\right)$ for continuous variables, or the Kronecker $\delta$-function $\wp^{\rm true}({\bf x}) = \delta_{{\bf x}, {\bf x}^{\rm true}}$ for discrete variables. Classical state estimation addresses the question: What is an optimal estimate $\wp({\bf x})$, given that a possible true state is $\wp^{\rm true}({\bf x})$ and the observer's belief about true configurations conditioned on observation is given by $\wp_{\text{D}}({\bf x})$? We will see below that classical state estimation might seem redundant given configuration estimation; however, for us, it is an important step towards introducing the concept of quantum state estimation in the next subsection.
In order to make a smooth transition to quantum state estimation, we will simplify the analysis in this subsection by considering only a discrete-type configuration. That is, let ${ x}$, the discrete variable of interest, take any value in a countable set ${\mathbb X}$, and let ${\mathbb P}$ denote the set of all possible normalized PDFs $\wp$ of ${ x}$. We can then define an \emph{optimal estimate of the classical state} as that which minimizes an expected cost function, \begin{align}
\est\wp({ x}) = \argmin_{\wp \in {\mathbb P}} \left\langle c\left[\wp, \wp^{\rm true}\right] \right\rangle_{\wp^{\rm true}|{\rm D}}\!\!({ x}). \end{align}
Recall that the true state is the Kronecker $\delta$-function $\wp^{\rm true}({ x}) = \delta_{{ x},{ x}^{\rm true}}$. The expected value $\langle \cdots \rangle_{\wp^{\rm true}|{\rm D}}$ is computed by integrating over all possible true states, which in this case is equivalent to summing over all possible true configurations, with weights conditioned on the observation data: \begin{align}\label{eq-confest-int2}
\left\langle c\left[\wp,\wp^{\rm true} \right] \right\rangle_{\wp^{\rm true}|{\rm D}} \equiv & \left\langle c\left[\wp({ x}),\delta_{{ x},{ x}^{\rm true}} \right] \right\rangle_{{ x}^{\rm true}|{\rm D}} \nonumber \\
= & \sum_{{ x}' \in {\mathbb X}} \wp_{\text{D}}({ x}') \, c\left[\wp({ x}),\delta_{{ x},{ x}'} \right], \end{align} where the probability distribution $\wp_{\text{D}}({ x})$ represents the observer's knowledge of the true configuration conditioned on the observed data. We now consider the two types of cost functions presented in Section~\ref{sec-CFE}.
First, the state estimator that minimizes a sum Square Deviation (\textSigma SD) cost for classical states is, \begin{align}\label{eq-cstateBME1}
\est\wp_{ \text{ \textSigma SD}} ({ x})= & \argmin_{\wp \in {\mathbb P}} \left\langle \sum_{{ x}' \in {\mathbb X}} \left\{\wp({ x}') - \wp^{\rm true}({ x}')\right\}^2 \right\rangle_{\wp^{\rm true}|{\rm D}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!({ x})\nonumber \\
= & \left\langle \delta_{{ x},{ x}^{\rm true}} \right\rangle_{{ x}^{\rm true}|{\rm D}} = \wp_{\text{D}}({ x}), \end{align} where, comparing with Eq.~\eqref{eq-confestBME1}, the square deviation is now the inner product of the functional deviation between $\wp({ x})$ and $\wp^{\rm true}({ x})$ with itself, where $\wp^{\rm true}({ x}) = \delta_{{ x},{ x}^{\rm true}}$. The second line of Eq.~\eqref{eq-cstateBME1} shows that the optimal state estimator for the \textSigma SD cost is exactly the observer's original state of knowledge of the system's true configuration, $\wp_{\text{D}}({ x})$. However, this coincidence relies on the true state being a $\delta$-function; it need not hold in general cases where true states are of other forms.
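The second line of Eq.~\eqref{eq-cstateBME1} is easy to verify numerically: in the Python sketch below (with an arbitrary illustrative posterior), no candidate distribution drawn from the simplex achieves a lower expected \textSigma SD cost than $\wp_{\text{D}}$ itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior over a four-point configuration space.
wp_D = np.array([0.1, 0.2, 0.5, 0.2])

def sigma_sd_cost(p):
    # Expected cost <sum_x {p(x) - delta_{x,x'}}^2> averaged over x' ~ wp_D;
    # row x' of (p - I) is the deviation of p from the delta-state at x'.
    eye = np.eye(len(wp_D))
    return np.sum(wp_D * np.sum((p - eye) ** 2, axis=1))

best = sigma_sd_cost(wp_D)
for _ in range(1000):
    p = rng.dirichlet(np.ones(4))     # random candidate distribution
    assert sigma_sd_cost(p) >= best - 1e-12

print(round(best, 10))   # -> 0.66, i.e. 1 - sum_x wp_D(x)^2
```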
Second, for the negative equality cost, we need to define a $\delta$-functional for an equality between any two functions, $\delta\!\left[\wp, \wp^{\rm true}\right]$, which gives a positive value when the arguments are exactly the same and zero otherwise. The optimal state estimator that minimizes this expected negative cost is given by \begin{align}\label{eq-cstate-ne}
\est\wp_{\rm nE}({ x}) = & \argmin_{\wp \in {\mathbb P}} \left\langle -\delta\!\left[\wp, \wp^{\rm true}\right] \right\rangle_{\wp^{\rm true}|{\rm D}}\!\!({ x}) \nonumber \\ =&\, \delta_{{ x},\est{ x}_{\rm MLE}} , \end{align} where $\est{ x}_{\rm MLE} = \argmax_{{ x}} \wp_{\text{D}}({ x})$ from Eq.~\eqref{eq-mleprobmax} in the previous subsection. To understand the result in the second line above, we use the definition of the expected cost in Eq.~\eqref{eq-confest-int2} to obtain the expected cost, \begin{align}\label{eq-cstate-necost}
\left\langle -\delta\!\left[\wp, \wp^{\rm true}\right] \right\rangle_{\wp^{\rm true}|{\rm D}} = &\, -\!\!\!\sum_{{ x}' \in \mathbb X} \wp_{\text{D}}({ x}') \, \delta\!\left[\wp, \delta_{{ x},{ x}'}\right], \end{align} where $\delta\!\left[\wp, \delta_{{ x},{ x}'}\right]$ is zero unless the argument $\wp({ x})$ is itself a pure ($\delta$-function) state for the same ${ x}'$. Since each such term in Eq.~\eqref{eq-cstate-necost} carries the weight $\wp_{\text{D}}({ x}')$, the optimal state minimizing this expected cost is the $\delta$-function state at the configuration that maximizes $\wp_{\text{D}}({ x})$, as stated in the second line of Eq.~\eqref{eq-cstate-ne}.
\subsection{Quantum state estimation}\label{sec-QSE}
Following classical state estimation for a discrete configuration ${ x}$, let us consider a quantum system described similarly by a discrete set ${\mathbb X}$ of basis states. The observer's knowledge of the system is represented by a state matrix (also called a density matrix) $\rho$, which we also refer to as a \emph{quantum state}. Diagonal elements of the quantum state equal the probabilities that the system can be found in the basis states, for example when projective measurements are performed on the system. If the quantum state is diagonal, it can be written as a sum of basis projectors with corresponding probabilities, $\rho_{\rm diag} = \sum_{{ x}} \wp({ x}) |{ x}\rangle \langle { x}|$.
Maximum knowledge of the quantum system is represented by a quantum state with unit purity. Using the notation $\hat\psi \equiv |\psi\rangle\langle \psi|$ for the projector onto a pure quantum state $|\psi\rangle$ in a Hilbert space ${\mathbb H}$, let us denote a pure \emph{true quantum state} by $\hat\psi^{\rm true}$. This may be unknown, analogously to an unknown true pure classical state $\wp^{\rm true}({\bf x}) = \delta_{{\bf x}, {\bf x}^{\rm true}}$. Quantum state estimation can therefore be formulated as finding an estimator from the set of possible quantum states. We can define an \emph{optimal estimate of the quantum state} as that which minimizes an expected cost with the true state, \begin{align}
\est\rho = \argmin_{\rho \in {\mathfrak G}({\mathbb H})} \left\langle c\left[\rho, \hat\psi^{\rm true}\right] \right\rangle_{\hat\psi^{\rm true} | {\rm D}}. \end{align} Here we have defined the set of valid state matrices as ${\mathfrak G}({\mathbb H})$, which can be represented as a convex subset of ${\mathbb R}^{d^2-1}$ for a $d$-dimensional Hilbert space~\cite{holevo2001statistical}. For example, for a qubit, the set is the Bloch ball. The expected cost is defined on the Hilbert space of pure quantum states, \begin{align}
\left\langle c\left[\rho, \hat\psi^{\rm true} \right] \right\rangle_{\hat\psi^{\rm true} | {\rm D}} \!\equiv \!\! \int\!\! {\rm d} \mu_{\rm H}(\hat\psi)\, \wp_{\text{D}}(\hat\psi) \, c\left[\rho, \hat\psi\right], \end{align} using the Haar measure ${\rm d}\mu_{\rm H}(\hat\psi)$ as the measure of the integral, with a conditioned PDF of pure states $\wp_{\text{D}}(\hat\psi)$.
As before, we consider two examples for the cost functions. The quantum state estimator that minimizes an expected Trace Square Deviation (TrSD) cost is given by \begin{align}\label{eq-q-sd}
\est\rho_{\rm TrSD} = & \argmin_{\rho \in {{\mathfrak G}({\mathbb H})}} \left\langle {\rm Tr} \left [ \left(\rho - \hat\psi^{\rm true} \right)^2 \right] \right\rangle_{\hat\psi^{\rm true} | {\rm D}} \nonumber \\
= & \left\langle \hat\psi^{\rm true} \right\rangle_{\hat\psi^{\rm true} | {\rm D}}, \end{align} where, comparing with Eq.~\eqref{eq-cstateBME1}, the inner product becomes the trace of the squared difference between two quantum states, and the estimator is the conditioned average of all possible true states. For the negative equality cost, we define a $\delta$-functional for pure quantum states using the Haar measure, $\delta_{\rm H}\!\left[\rho, \hat\psi^{\rm true} \right]$, giving an infinite value when the arguments are the same and zero otherwise. The quantum state estimator that minimizes the expected negative equality cost is, \begin{align}\label{eq-q-ne}
\est\rho_{\rm nE} = & \argmin_{\rho \in {{\mathfrak G}({\mathbb H})}} \left\langle -\delta_{\rm H}\!\left[\rho, \hat\psi^{\rm true} \right] \right\rangle_{\hat\psi^{\rm true} | {\rm D}} \nonumber\\ =& \argmin_{\hat\psi\in {\mathbb H}}\, [- \wp_{\text{D}}(\hat\psi)], \nonumber \\ =& \argmax_{\hat\psi\in {\mathbb H}}\, \wp_{\text{D}}(\hat\psi), \end{align} which is the conditioned most-likely (pure) state. The reasoning behind the second and third lines is similar to the classical case: the $\delta$-functional picks out pure states, and the minimization then selects the pure state that maximizes the probability distribution $\wp_{\text{D}}(\hat\psi)$.
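The TrSD result in Eq.~\eqref{eq-q-sd} can be checked numerically. In the Python sketch below, the Haar integral is replaced by a small, hypothetical discrete ensemble of pure qubit states with illustrative weights; the conditioned average state then achieves a lower expected TrSD cost than any randomly drawn candidate state.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_pure():
    # Random pure qubit state, returned as a projector (illustrative sampler).
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

# Discrete stand-in for the Haar-measure integral: five hypothetical true
# pure states with illustrative conditioned weights.
psis = [rand_pure() for _ in range(5)]
w = rng.dirichlet(np.ones(5))

def trsd_cost(rho):
    # Expected cost <Tr[(rho - psi_true)^2]> over the discrete ensemble.
    return sum(wk * np.trace((rho - pk) @ (rho - pk)).real
               for wk, pk in zip(w, psis))

rho_bar = sum(wk * pk for wk, pk in zip(w, psis))   # conditioned average state

# The conditioned average should beat every randomly drawn candidate state.
for _ in range(500):
    lam = rng.uniform()
    rho = lam * rand_pure() + (1 - lam) * rand_pure()
    assert trsd_cost(rho) >= trsd_cost(rho_bar) - 1e-12
```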
\subsection{Hidden record estimation}\label{sec-RE} Following the previous two subsections, if the classical and quantum states to be estimated live in large-dimensional functional spaces, with specific constraints such as Hermiticity and positivity, the averages over all possible true states can become overly complex. Therefore, in some cases, it is preferable to devise a simpler set of book-keeping variables, which transform the measure of the integral from the state variables to simpler parameters, such as unknown measurement records that determine the system's true states. However, this is possible only if there exist records that can be unambiguously mapped to all possible true states of interest.
In this work, we consider the case for continuously monitored systems, where there exists an unknown \emph{true record} $R^{\rm true}$ that can unambiguously determine, at any time, a true classical configuration ${ x}^{\rm true} = { x}_{R^{\rm true}}$ and its corresponding true classical state $\wp^{\rm true}({ x}) = \delta_{{ x}, { x}_{R^{\rm true}}}$, or a true quantum state $\hat\psi^{\rm true} = |\psi_{R^{\rm true}}\rangle \langle \psi_{R^{\rm true}}|$. Let us discretize the total time of measurement into $d_r$ timesteps and describe the measurement record as a string of $d_r$ real-continuous variables, where its realization is denoted by $R = \{ R_1, R_2, ... , R_{d_r} \}^{\top}$, that is, $R \in {\mathbb R}^{d_r}$. Consequently, we can replace the average over all possible true states by an average over all possible hidden measurement records with appropriate probability measures. For example, an average of a quantum state function ${\cal A}[\hat\psi]$ can be written in two ways, \begin{subequations}\label{eq-r-ave} \begin{align}
\left\langle {\cal A} \right\rangle_{\hat\psi^{\rm true} | {\rm D}} \equiv & \int \!\! {\rm d} \mu_{\rm H}(\hat\psi)\, \wp_{\text{D}}(\hat\psi) \, {\cal A}\!\left[\hat\psi \right]\\
= & \int \!\! {\rm d}\mu( R)\, \,\wp_{\text{D}}(R) \, {\cal A} \!\left[\hat\psi_{R} \right] \equiv \left\langle {\cal A} \right\rangle_{R^{\rm true} | {\rm D}}, \end{align} \end{subequations} where $\hat\psi_R$ is a true state corresponding to a possible hidden record $R$, and, in the second line, the usual multi-variable measure of the integral is defined as $\int\! {\rm d}\mu(R) \equiv \int \prod_{k =1}^{d_r} {\rm d} R_k$. The first line is the average over all pure quantum states $\hat\psi$ with the specific Haar measure, and is equivalent to the average over all possible records that determine the pure states in the second line.
In addition to the above advantage in transforming the measures, we can also formulate another type of optimal estimation, based on costs defined in the hidden record variables. That is, we define an \emph{optimal estimate of the measurement record} as an estimator that minimizes an expected cost function, \begin{align}\label{eq-gencostR}
\est R = \argmin_{R \in {\mathbb R}^{d_r}} \left\langle c\left(R,R^{\rm true} \right) \right\rangle_{R^{\rm true}|{\rm D}}, \end{align} where the expected cost is \begin{align}
\left\langle c(R, R^{\rm true}) \right\rangle_{R^{\rm true} | {\rm D}} \equiv \!\int\!\! {\rm d}\mu(R') \, \wp_{\text{D}}(R')\, c(R, R'). \end{align} This is similar to the estimation of unknown configurations because the unknown measurement records are simply classical variables.
Therefore, an optimal estimator for the hidden record that minimizes an expected square deviation cost can be defined as \begin{align}
\est R_{\rm SD} = & \argmin_{R \in {\mathbb R}^{d_r}} \left\langle \left(R - R^{\rm true} \right)^{\top} \left(R - R^{\rm true} \right) \right\rangle_{R^{\rm true} | {\rm D}} \nonumber \\
= & \left\langle R^{\rm true} \right\rangle_{R^{\rm true} | {\rm D}}. \end{align} The hidden record estimator that minimizes an expected negative equality cost is \begin{align}
\est R_{\rm nE} = & \argmin_{R\in {\mathbb R}^{d_r}} \left\langle -\delta^{(d_r)}\!\left(R - R^{\rm true} \right) \right\rangle_{R^{\rm true} | {\rm D}} \nonumber \\ =& \argmin_{R\in {\mathbb R}^{d_r}}\,[ -\wp_{\text{D}}(R)], \nonumber \\ =& \argmax_{R\in {\mathbb R}^{d_r}}\, \wp_{\text{D}}(R), \end{align} which is the maximum likelihood record given the conditioned PDF $\wp_{\text{D}}(R)$.
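Since hidden records are ordinary classical vectors, these two estimators can be illustrated in a few lines of Python; the candidate records and weights below are purely illustrative stand-ins for $\wp_{\text{D}}(R)$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble of possible hidden records with d_r = 3 timesteps.
records = rng.normal(size=(6, 3))     # six candidate records R
w = rng.dirichlet(np.ones(6))         # illustrative conditioned weights

R_SD = w @ records                    # expected-SD minimizer: the mean record
R_nE = records[np.argmax(w)]          # expected-nE minimizer: most likely record

def sd_cost(R):
    # Expected square-deviation cost <(R - R_true)^T (R - R_true)>.
    return np.sum(w * np.sum((records - R) ** 2, axis=1))

# The mean record beats every discrete candidate on the SD cost.
assert all(sd_cost(R) >= sd_cost(R_SD) - 1e-12 for R in records)
```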
It is important to note here that, since we have assumed that the hidden records determine true configurations and true states, we can use the record estimators $\est R$ derived above to compute estimators for configurations, ${ x}_{\est R}$; classical states, $\delta_{{ x},{ x}_{\est R}}$; or quantum states, $|\psi_{\est R} \rangle \langle \psi_{\est R} |$. However, one has to keep in mind that these estimators are not ``optimal'' in the configuration or state spaces; the optimality is defined in the record space.
For the rest of the paper, we consider a dynamical system under continuous measurements, where we define each measurement record as a string of $n = T/\dd t$ measurement results, from an initial time $t_0 =0$ to a final time $T$, with an infinitesimal time resolution $\dd t$. Given complete information about the system at the initial time, a true configuration/classical state, or a true quantum state, at any intermediate time $\tau \in (0, T]$, can be determined from complete knowledge of the true records up to that time. However, in scenarios where complete knowledge of the measurement records is not available, for example when there are both observed and unobserved records, we will use our concepts of estimation with defined cost functions to assign optimal estimates of those quantities.
Throughout this paper, we follow the notation for measurement records with overhead arrows introduced in Ref.~\cite{Ivonne2015}, but with a slight modification. Denoting $R_t$ as a measurement result acquired between time $t$ and $t+\dd t$, we then define: (a) the \emph{past} record $\past R_\tau = \{ R_t : t \in [0, \tau)\}^{\top}$ for a string of measurement results from the initial time to an estimation time $\tau$, (b) the \emph{future} record $\futp{R}_\tau = \{ R_t : t \in [\tau, T]\}^{\top}$ for a string of results from the estimation time $\tau$ to the final time (including a result at the final time), and (c) the \emph{past-future} record $\bothp{R} = \{ R_t : t \in [0,T] \}^{\top}$ for a string of results at all times. For convenience, we also define a future and a past-future record excluding a measurement at the final time $T$. We denote such a record with an open-faced arrow, \textit{i.e.}, $\fut R_\tau = \{ R_t : t \in [\tau, T)\}^{\top}$ and $\both R = \{ R_t : t \in [0, T) \}^{\top}$, respectively. The subscript $\tau$ on these records is omitted unless needed for clarity. Note that these superscript arrows are defined in a slightly different way from those in Refs.~\cite{Ivonne2015,LavCha2019,LavCha2020a,GueWis20,LavCha2020b}. In those papers, generic arrowheads, such as $\overleftrightarrow{R}$, were used, as there was no need to distinguish between a record including a measurement at the final time $T$ and one without. All other conventions remain the same.
\section{Existing formalisms and their roles in partially observed systems}\label{sec-existing}
In this section, we briefly overview the three main existing quantum formalisms that utilize past-future information. We present the formalisms in a chronological order, which handily allows us to introduce the observed record (two-state formalism), the unobserved record (the most-likely path formalism), and both records (quantum state smoothing formalism), one at a time.
\subsection{Two-state formalisms}\label{sec-wv} \subsubsection{Two-state vector formalism}
The notion of a time-symmetric formulation of quantum mechanics can be traced as far back as 1928, in a remarkable footnote by Eddington~\cite{Eddington1928} (our italics): \begin{quote} ``The probability is often stated to be proportional to $\psi^2$ ... The whole interpretation is very obscure, but it seems to depend on whether you are considering the probability after you know what has happened or the probability for the purpose of prediction. The $\psi^2$ is obtained by {\em introducing two symmetrical systems of $\psi$-waves travelling in opposite directions in time}; one of these must presumably correspond to probable inference from what is known (or is stated) to have been the condition at a later time. Probability necessarily means ``probability in the light of certain given information," so that the probability cannot possibly be represented by the same function in different classes of problems with different initial data''. \end{quote}
While one might expect $|\psi|^2$ here, rather than $\psi^2$, it should be borne in mind that this was a book for a popular audience, and one published less than two years after Born introduced the $|\psi|^2$ probability rule (for scattering problems), also in a footnote~\cite{Born26}.
This idea of Eddington seems to have been forgotten, but was discovered again and investigated in full detail
in the 1950s by Watanabe, as his \emph{double inferential state-vector} formalism~\cite{Watanabe1955,Watanabe1956}. Watanabe introduced a backward-evolving (from the future) state vector, referred to as the \emph{retrodictive} state, to be used in conjunction with the conventional forward-evolving (from the past) state vector, referred to as the \emph{predictive} state, to completely describe a quantum system at intermediate times. This theory was subsequently rediscovered as the \emph{two-state vector} formalism (TSVF) by Aharonov, Bergmann and Lebowitz \cite{ABL1964} in the 1960s, receiving considerably more attention and debate \cite{VAA1987,AhaRoh91,AV91,Aharonov1998,AV2002,Qi2010,Yang2013,DanVaid13,Aharonov2014,CampagneI2014,Hashmi2016,Nowakowski2018} than its predecessor.
Most notably, the TSVF led to the formulation of weak values~\cite{AAV1988} by Aharonov, Albert, and Vaidman in 1988. They considered a \emph{weak} measurement of a quantum system, by coupling the system to another system, called the measuring device or probe, with preselection (before the weak measurement) and postselection (after) conditions on the system's quantum state. Specifically, the original work~\cite{AAV1988} considered a weak measurement of an observable described by an Hermitian operator $\op{X}$, via a weak von-Neumann-type coupling to a probe with a Gaussian wavefunction at some time $\tau$~\cite{BookVon1932},
on a system prepared in state $|\psi_i\rangle$ and subsequently subjected to a
projective measurement onto state $|\psi_f \rangle$. The weak value, defined as \begin{align}\label{eq-wv-orig}
\frac{ \langle \psi_f | \op{X} | \psi_i \rangle }{\langle \psi_f | \psi_i \rangle },
\end{align}
is a complex quantity which is sufficient to describe the change in the Gaussian wavefunction of the probe conditioned on the preselected and postselected states of the system, $\hat\psi_i$ and $\hat\psi_f$, respectively. The weak value is known as anomalous when it lies outside the eigenvalue range of the measured quantum observable $\op X$. See Refs.~\cite{Kofman2012,Tamir2013,Aharonov2014-2,Dressel2014} for some recent reviews on this topic. Weak values have been investigated for their use in quantum metrology or tomography~\cite{Hosten2008,Starling2009,Brunner2010,Hofmann2011a,Lundeen11,Kedem2012,Viza2013,Dressel2014,Jordan2014,Knee2014,Salazar2014,Gross2015,VMH15,Zhang2015,Knee2016,Steinberg2017,Ren2020} and for illuminating numerous other phenomena in quantum theory and its interpretations~\cite{RohAha02,Wiseman2007-2,Brunner2004,Mir07,Lundeen09,Yokota09,Dressel2011,KBR2011,Pryde2011,RozSte12,HWP13,Kaneda14,HWP15,MRF2016,YX2017,XW2019,Ramos2020}.
The phenomenon of weak values has now been demonstrated in many experiments (some already cited above)~\cite{Hulet1997,Brunner2004,PryWis2005,Mir07,Lundeen09,Yokota09,Dressel2011,Hofmann2011c,KBR2011,Pryde2011,RozSte12,HWP13,Kaneda14,HWP15,SDG2015,MRF2016,Vaidman2017-2,YX2017,XW2019,Ramos2020}.
\subsubsection{Weak measurements and weak values} While there have been debates~\cite{Johansen2004,Jozsa2007,Hosoya2010,DreAga2010,Dressel2012,Dressel2015,Hall2016} on the physical meaning of the above complex weak value in Eq.~\eqref{eq-wv-orig}, its real part has a simple operational meaning as the conditioned average of the weak measurement result. Consider the
probe's state to be Gaussian as in Ref.~\cite{AAV1988} and denote the measurement result and its possible value by $X^w$ and $x$, respectively. Then one can write the measurement operator~\cite{nielsen2010quantum,BookWiseman,BookJacobs}, also called a Kraus operator~\cite{BookKraus}, describing the measurement backaction on the system's state as \begin{align}\label{eq-wv-kraus} \op K_{x} =&\, (\epsilon/2\pi)^{1/4}\exp\left\{ - \epsilon(x - \op X)^2/4 \right\} \nonumber \\ \approx & \, \sqrt{\wp_0(x)} \left \{ \hat 1 + \tfrac{1}{2} \epsilon \, x \hat X - \tfrac{1}{4}\epsilon \hat X^2 + \tfrac{1}{8} \epsilon^2 x^2 \hat X^2\right\}. \end{align} Here $\wp_0(x) = (\epsilon/2\pi)^{1/2} \exp(-\epsilon x^2/2)$ is a zero-mean Gaussian function. The limit of a weak measurement is when $\epsilon \rightarrow 0$. This limit is best understood from the second line of Eq.~\eqref{eq-wv-kraus}, which is the expansion of the operator parts of $\op{K}_x$ up to first order in $\epsilon$. The last term, which on the face of it is ${\cal O}(\epsilon^2)$, is in fact ${\cal O}(\epsilon)$ because $x^2$ is typically of order $1/\epsilon$. This follows from the form of $\wp_0(x)$ and the fact that, in this limit, the PDF for the result $x$,
$\wp(x|\hat\psi_i) = ||\hat K_x \ket{\psi_i}||^2$, approaches $\wp_0(x)$, which is independent of the initial system state $\hat\psi_i$.
The completeness relation $\int \!\! {\rm d} x \hat K_{x}^\dagger \hat K_{x} = \hat 1 + {\cal O}(\epsilon^n)$ is satisfied, to all orders of $\epsilon$ ($n = \infty$) for the first line of Eq.~\eqref{eq-wv-kraus}, and to first order of $\epsilon$ ($n = 2$) for the second line. Using the Kraus operator, we can calculate the conditional PDF of the weak measurement result with both preselection and postselection as \begin{align} \label{preposPDF}
\wp(x | \hat\psi_i, \hat\psi_f) \propto \,
\frac{|\langle \psi_f | \hat K_{x} | \psi_i \rangle |^2}{\wp(\hat\psi_f|\hat \psi_i)}, \end{align}
where $\wp(\hat\psi_f|\hat \psi_i) = \int \!\! {\rm d} x\,|\langle \psi_f | \hat K_{x} | \psi_i \rangle |^2$ is independent of $x$.
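The all-orders completeness of the exact (first-line) Gaussian Kraus operator quoted above is simple to confirm numerically; the Python sketch below does so for a qubit measured in the $\op X$ eigenbasis (the value of $\epsilon$ and the integration grid are arbitrary choices).

```python
import numpy as np

# Completeness check for the exact Gaussian Kraus operator, for a qubit
# with X = sigma_z; eps and the grid are arbitrary illustrative choices.
eps = 0.1
a = np.array([1.0, -1.0])             # eigenvalues of X
xs = np.linspace(-60.0, 60.0, 12001)
dx = xs[1] - xs[0]

# K_x is diagonal in the X eigenbasis; store the diagonal amplitudes k_a(x).
k = (eps / (2 * np.pi)) ** 0.25 * np.exp(-eps * (xs[:, None] - a) ** 2 / 4)

# int dx K_x^dag K_x should give the identity, i.e. (1, 1) on the diagonal,
# to all orders in eps for this exact Gaussian form.
S = np.sum(k ** 2, axis=0) * dx
assert np.allclose(S, 1.0, atol=1e-8)
```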
From \erf{preposPDF}, it is not difficult to show that the average result, over many trials, conditioned on the preparation and the final postselection,
and in the limit $\epsilon\rightarrow 0$, is \begin{align}\label{eq-wv-avg}
_{\hat\psi_f}\!\langle X^w \rangle_{\hat\psi_i} =& \lim_{\epsilon \rightarrow 0} \int_{-\infty}^\infty \!\! {\rm d} x \, x \,\wp(x| \hat\psi_i,\hat\psi_f) \nonumber \\
= & \, {\rm Re}\, \frac{ \langle \psi_f | \op{X} | \psi_i \rangle }{ \langle \psi_f | \psi_i \rangle }, \end{align} which is the real part of the weak value. We have used $_{\hat\psi_f}\!\langle \cdots \rangle_{\hat\psi_i}$ as an expectation conditioned on the preselected and postselected states, following the notation of Ref.~\cite{Wise2002}, with the $X^w$ denoting the result of a weak measurement of $\hat X$, rather than the weak value of $\hat X$ as is common in the literature. In this limit, the average back-action from the
weak measurement is negligible so that $\wp(\hat\psi_f|\hat \psi_i) \approx |\langle \psi_f | \psi_i \rangle |^2$. Thus we can see that averaging the weak value over a complete set of final states with this probability gives the usual quantum expectation value, as expected: \begin{align}
\langle X^w \rangle_{\hat\psi_i} \,&= \, \sum_{\psi_f} \, \wp(\hat\psi_f|\hat \psi_i) \, \times \, _{\hat\psi_f}\!\langle X^w \rangle_{\hat\psi_i} \nonumber \\
&\approx \,\sum_{\psi_f} \, \langle \psi_i | \psi_f \rangle \langle \psi_f | \op{X} | \psi_i \rangle \,=\, \langle \psi_i | \op{X} | \psi_i \rangle. \label{sumoverfin} \end{align}
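Eq.~\eqref{eq-wv-avg} can also be checked directly: in the Python sketch below (the preselected and postselected states and the value of $\epsilon$ are illustrative choices), the conditioned mean of the Gaussian measurement result agrees with the real part of the weak value to order $\epsilon$.

```python
import numpy as np

# Check that the conditioned mean of a weak Gaussian measurement result
# approaches the (real) weak value; states and eps are illustrative.
X = np.diag([1.0, -1.0])                        # measured observable, sigma_z
psi_i = np.array([np.cos(0.3), np.sin(0.3)])    # preselected state
psi_f = np.array([np.cos(1.2), np.sin(1.2)])    # postselected state

eps = 1e-3
xs = np.linspace(-300.0, 300.0, 60001)

# Exact Gaussian Kraus operators are diagonal in the sigma_z eigenbasis,
# with amplitudes k_a(x) for eigenvalues a = +1, -1.
a = np.diag(X)
k = (eps / (2 * np.pi)) ** 0.25 * np.exp(-eps * (xs[:, None] - a) ** 2 / 4)

# Unnormalized conditioned PDF |<psi_f|K_x|psi_i>|^2 and its mean.
pdf = np.abs(k @ (psi_f * psi_i)) ** 2
avg = np.sum(xs * pdf) / np.sum(pdf)

wv = (psi_f @ X @ psi_i) / (psi_f @ psi_i)      # weak value (real here)
assert abs(avg - wv) < 1e-3
```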
The weak value formula, Eq.~\eqref{eq-wv-avg}, involving pure states for preselection and postselection, has some unique properties that are evident in experiments with weakish-strength measurements~\cite{Vai14}. Nevertheless, it is very natural to consider generalizing Eq.~\eqref{eq-wv-avg} to weak measurement with mixed states for the preselection and postselection, and also for a non-Hermitian measurement coupling~\cite{Wise2002,Tsangsmt2009-2,DreAga2010,Gammelmark2013}. Say the system's measured observable can be written as $\op X = \op c + \op c^\dagger$, where $\hat c$ is now the operator describing the system-detector interaction. By this we mean that the Kraus operator is generalized from Eq.~\eqref{eq-wv-kraus} to \begin{align}\label{eq-wv-kraus-gen} \hat{K}_x \approx & \, \sqrt{\wp_0(x)}\left \{ \op{1} - i \epsilon \hat H + \epsilon \, x \op{c} - \tfrac{1}{2} \epsilon \, \op{c}^\dagger \op{c} - \tfrac{1}{2} \epsilon\, \op{c}^2 + \tfrac{1}{2} \epsilon^2 x^2 \op{c}^2 \right\}, \end{align} where $\hat H$ is Hermitian. Again, this satisfies the completeness relation to first order: $\int \!\! {\rm d} x \hat K_{x}^\dagger \hat K_{x} = \hat 1 + {\cal O}(\epsilon^2)$.
Say the system's state just before the time $\tau$ of the weak measurement is $\rho$ (which can be mixed), and the final measurement (which can be non-projective) is described by a positive operator (POVM element) $\op{E}$~\cite{BookHelstrom,BookKraus,BookWiseman} at the time just after the weak measurement time $\tau$. Then the PDF conditioned on the preparation $\rho$ and the postselection positive operator $\hat E$ (also known as an {\em effect}~\cite{BookDavies,BookKraus,BookWiseman})
is given by \begin{align}\label{eq-wv-prob-gen}
\wp(x | \rho, \op{E}) = \frac{\Tr{ {\op E} {\hat K}_{x}\rho \hat K_{x}^\dagger}}{\wp(\hat E|\rho)}, \end{align}
where $\wp(\hat E|\rho)= \int \!\! {\rm d} x\, \Tr{\hat E \hat K_x \rho \hat K_x^\dagger }$. Following a similar calculation to Eq.~\eqref{eq-wv-avg}, the conditioned average of the detector readout is found to be \begin{align}\label{eq-wv-gen}
_{\op{E}}\langle X^w \rangle_{\rho} = \,2\,{\rm Re}\,\frac{{\rm Tr}\left( \op{E} \,\op c\, \rho \right) }{{\rm Tr}\left( \op{E} \rho\right)}, \end{align} to lowest order in $\epsilon$. This reduces to Eq.~\eqref{eq-wv-avg}, for the case of pure-state preparation,
$\rho = |\psi_i \rangle \langle \psi_i|$, rank-one final measurement $\op{E} \propto |\psi_f \rangle \langle \psi_f|$, and Hermitian weak coupling $\op c = \op c^\dagger \equiv \op X/2$.
Note that, for no postselection, we can replace $\op E$ with the identity and Eq.~\eqref{eq-wv-gen} reduces to the usual expectation value, $\langle X^w \rangle_{\rho} = {\rm Tr}\left( \op X \rho \right)$, which justifies considering Eq.~\eqref{eq-wv-kraus-gen} to describe a measurement of $\op X$.
Similar to the average in \erf{sumoverfin}, this no-postselection expectation value can also be obtained by averaging the weak value with the appropriate probability for the final measurement result, using the POVM completeness condition $\sum_{\hat E}\hat E = \hat{1}$.
\subsubsection{Weak-value states}
For the case of Hermitian weak coupling, $\op c = \op c^\dagger \equiv \op X/2$, where the Kraus operator, Eq.~\eqref{eq-wv-kraus-gen}, reduces to Eq.~\eqref{eq-wv-kraus}, the general expression, Eq.~\eqref{eq-wv-gen}, for the weak value becomes \begin{align}\label{eq-wv}
_{\op{E}}\langle X^w \rangle_{\rho} = \frac{ {\rm Tr} [ \op{X} (\op{E} \rho + \rho \op{E}) ] }{{\rm Tr}( \op{E} \rho + \rho \op{E})}. \end{align} From Eq.~\eqref{eq-wv}, we see that the conditioned average of the weak (Hermitian) measurement results can be written as the expectation value of the observable $\op X$ with respect to an Hermitian matrix, $ _{\op{E} }\langle X^w \rangle_{\rho} = \Tr{ \op{X} \varrho_{\rm SWV}}$, where the symmetrized matrix is given by~\cite{Tsangsmt2009-2,Gammelmark2013,LavCha2019} \begin{align}\label{eq-wvs} \varrho_{\rm SWV} = \frac{\op{E} \rho+ \rho\op{E}} {\Tr{\op{E}\rho + \rho \op{E}}}. \end{align} This has been referred to as a Smoothed Weak-Value (SWV) state~\cite{LavCha2019,LavCha2020a,LavCha2020b}; the relation to smoothing will become clear in our main results in Section~\ref{sec-sevenQSE}. Note that $\varrho_{\rm SWV} \in {\mathfrak G}'({\mathbb H})$, where ${\mathfrak G}'({\mathbb H})$ is a superset of ${\mathfrak G}({\mathbb H})$ obtained by dropping the positivity requirement. Thus $\varrho_{\rm SWV}$ is not generally positive semidefinite, and therefore cannot represent a proper quantum state. This is necessary because, as noted earlier, the weak value can lie outside the eigenvalue range of the observable $\op X$, in contrast to the expectation value, ${\rm Tr}\left( \op X \rho \right)$, for a proper positive quantum state $\rho$~\cite{Tsangsmt2009-2}.
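The non-positivity of $\varrho_{\rm SWV}$ is easy to exhibit numerically. The Python sketch below (with illustrative pure preselection and rank-one postselection) builds the state of Eq.~\eqref{eq-wvs} for a qubit, recovers the weak value of $\sigma_z$ as $\Tr{\op X \varrho_{\rm SWV}}$, and finds a negative eigenvalue.

```python
import numpy as np

# SWV state for a qubit with pure preselection and a rank-one postselection
# effect; the two angles are illustrative choices.
X = np.diag([1.0, -1.0])                        # sigma_z

def proj(theta):
    # Projector onto the real qubit state (cos(theta), sin(theta)).
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

rho = proj(0.3)        # preselected (filtered) state
E = proj(1.2)          # postselection effect

anti = E @ rho + rho @ E
swv = anti / np.trace(anti)                     # symmetrized SWV matrix

wv = np.trace(X @ swv).real                     # conditioned weak average
eigs = np.linalg.eigvalsh(swv)

print(round(wv, 4), bool(eigs.min() < 0))       # -> 0.1138 True
```

For these states the conditioned average equals $\cos(1.5)/\cos(0.9)$, and $\varrho_{\rm SWV}$ has one eigenvalue below zero, so it is not a proper quantum state.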
\subsubsection{Related two-state formalisms}\label{sec-reltwostate}
In this work, we are interested in the case where, as well as the measurement at the intermediate time $\tau$, there is an \emph{observed} continuous measurement on the system, before and after $\tau$. Therefore, the preselected state is replaced by a \emph{filtered state}, $\rho = \rho_{\pasts{O}}$, computed from the past observed record up to time $\tau$ using the usual quantum trajectory approach~\cite{BookCarmichael,BookWiseman,BookJacobs}. Similarly, the postselected retrodictive effect is replaced by a \emph{retrofiltered effect}, $\op{E} = \op{E}_{\futps{O}}$, that is a POVM element giving the probability of the future observed record.
The retrofiltered effect is a solution of an adjoint equation of the quantum trajectory, evolving backward in time given the record $\futp O$. Gammelmark, Julsgaard and M\o lmer have introduced the terminology ``past quantum state'' for the pair $\{ \rho_{\pasts{O}}, \op{E} _{\futps{O}}\}$~\cite{Gammelmark2013}. As well as weak values, they considered a measurement of arbitrary strength at the intermediate time $\tau$, and calculated its probability distribution conditioned on the past-future observation using the pair of operators as in Eq.~\eqref{eq-wv-prob-gen}. However, unlike in the weak-measurement limit of Eq.~\eqref{eq-wv-gen}, one cannot use the approximation
that $\wp(\hat E_{\futps{O}}|\rho_{\pasts{O}}) \approx \Tr{\hat E_{\futps{O}} \rho_{\pasts{O}}}$.
This general two-state formalism
has been applied to the special case of linear Gaussian quantum systems~\cite{ZhaMol17,Huang2018}, to investigate non-Markovianity in quantum systems~\cite{Budini2018a,Budini2019},
and to many other theoretical topics~\cite{GreMol15,GreMol16,GreMol17,Gough2020}, as well as to experiments in cavity-QED~\cite{RybMol15}, circuit-QED~\cite{TanMol15,TanMol16,FTM16,TanMol17}, and large atomic-ensembles~\cite{BaoMol20a,BaoMol20b}.
\subsection{Most-likely path formalism}\label{sec-cdj} The most-likely path formalism was proposed by Chantasri, Dressel and Jordan (CDJ) \cite{Chantasri2013,chantasri2015stochastic}, as a tool to investigate statistics of quantum trajectories for diffusive time-continuous quantum measurement. This measurement can be considered as a series of weak measurements with their strength determined by the time resolution for acquiring the measurement record, \textit{i.e.}, $\epsilon \propto \dd t $. Given the record, one can compute the corresponding quantum state evolution (or quantum trajectory) of the measured system, via a Kraus operator of the form in Eq.~\eqref{eq-wv-kraus-gen}~\cite{BookKraus}. The resulting quantum trajectories are called diffusive because they realize a diffusive stochastic process in the quantum state space~\cite{BookCarmichael,BookGardiner1,BookWiseman}. The most-likely path formalism has been verified in circuit-QED experiments~\cite{Weber2014,ChaKim2016,JorMur2017} and has been used in studying entanglement generation~\cite{ChaKim2016,Silveri2016}, quantum chaos~\cite{Lewalle2018chaos}, and quantum correlations of non-commuting observables~\cite{JorCha2016,ChaAta2018}, with time-continuous quantum measurements.
\subsubsection{Continuous diffusive measurement}\label{sec-contmeas}
The CDJ most-likely path formalism, based on techniques in classical stochastic processes, yields a deterministic optimization problem for a quantum state's path to maximize the PDF for the diffusive records (\textit{i.e.}, giving maximum-likelihood records) between any two quantum states at two different times. Therefore, in the language adopted in this paper, we can treat the diffusive measurement record as \emph{unknown or unobserved}. The most-likely path formalism can be considered as a kind of optimal estimation for the unknown record and its corresponding quantum state, conditioned on the past (initial condition) and future (final condition) information.
Let us discretize the time into steps of size $\delta t$, where the continuous time range $[0, T]$ becomes a set
of discretized times $\{ 0, \delta t, 2\delta t,..., T\}$. We then denote the unknown measurement record by $\both
U = \{ U_t : t\in \{ 0, \delta t, ..., T-\delta t \} \}^{\top}$, where $ U_t$ is a measurement result acquired between time
$t$ and $t+\delta t$. The reason we have defined the unobserved record excluding a measurement at the final time will become clear later. Given a possible realization of the unknown record denoted by $\both u = \{ u_t : t\in \{ 0,
\delta t, ..., T-\delta t \}\}^{\top}$, one can compute the system's state dynamics conditioned on the record from an update
equation, \begin{align}\label{eq-stup} \rho_{t+\delta t} =& \frac{{\cal M}_{u_t} \rho_t }{ {\rm Tr}( {\cal M}_{u_t} \rho_t ) }, \end{align} where the completely positive map ${\cal M}_{u_t}$ is a function of $u_t$. Denoting the initial state of the system by $\rho_0$, we can compute the system's state at any time $\tau$, \begin{align}\label{eq-qtrajmap} \rho_\tau = \rho_{\,\pasts{u}} = & \frac{{\cal M}_{u_{\tau-\delta t}} \cdots {\cal M}_{u_0} \rho_0}{{\rm Tr}\left({\cal M} _{u_{\tau-\delta t}} \cdots {\cal M}_{u_0} \rho_0 \right)}. \end{align} In this subsection, we only consider the system's dynamics arising from a Hamiltonian $\hat H$ plus the measurement backaction from conditioning on the unknown record $\both u$. Therefore, the completely positive map becomes a purity-preserving measurement operation defined as ${\cal M}_{u_t} \bullet = \hat{M} _{u_t} \bullet \hat{M}_{u_t}^\dagger$. Here \begin{align}\label{eq-moput} \hat{M}_{u_t}\! = \op{1} - i \, \delta t \op H + \delta t \, u_t \op{c} - \tfrac{1}{2} \delta t \, \op{c}^\dagger \op{c} - \tfrac{1} {2} \delta t \, \op{c}^2 + \tfrac{1}{2} \delta t^2 u_t^2 \op{c}^2, \end{align} is the so-called diffusive measurement operator, with high-order terms introduced in Refs.~\cite{Amini2011,Rouchon2015}. Apart from the Hamiltonian term, this operator is simply obtained from the general (not necessarily Hermitian) weak-measurement Kraus operator in Eq.~\eqref{eq-wv-kraus-gen} by taking $\epsilon = \delta t$ and defining $\hat{K}_{u_t} = \sqrt{\wp_0(u_t)}\, \hat{M}_{u_t}$. Following common terminology~\cite{BookWiseman}, we call $\wp_{\rm ost}(u_t) \equiv \wp_0(u_t)$ the \emph{ostensible} probability: \begin{equation} \label{postend} \wp_{\rm ost}(u_t) = (\delta t/2\pi)^{1/2} \exp(- u_t^2\delta t/2). \end{equation} The completeness relation for this measurement operator is given by $\int \!\! {\rm d} u_t \, \wp_{\rm ost}(u_t) \hat M_{u_t}^\dagger \hat M_{u_t} = \hat 1 + {\cal O}(\delta t^2)$.
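The operator in Eq.~\eqref{eq-moput}, the state update Eq.~\eqref{eq-stup}, and the completeness relation can all be checked numerically. A minimal sketch follows, assuming for illustration $\op c = \hat\sigma_z/2$, $\op H = \hat\sigma_x/2$ and $\delta t = 0.01$ (none of these numbers are from the text); Gauss--Hermite quadrature evaluates the ostensible average exactly, since $\hat M_{u}^\dagger \hat M_{u}$ is polynomial in $u$ and $\wp_{\rm ost}$ is Gaussian with variance $1/\delta t$:

```python
import numpy as np

# Check of the diffusive measurement operator, Eq. (eq-moput), and its
# completeness relation; model parameters are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
c = sz / 2
H = sx / 2
dt = 0.01

def M_op(u):
    """Diffusive measurement operator M_u of Eq. (eq-moput)."""
    return (I2 - 1j * dt * H + dt * u * c
            - 0.5 * dt * (c.conj().T @ c) - 0.5 * dt * (c @ c)
            + 0.5 * dt**2 * u**2 * (c @ c))

# Ostensible average of M_u^dag M_u via Gauss-Hermite quadrature
# (exact here, because the integrand is polynomial in u).
x, w = np.polynomial.hermite.hermgauss(5)
u_nodes = x * np.sqrt(2.0 / dt)        # p_ost(u) is Gaussian, variance 1/dt
avg = sum(wk * (M_op(uk).conj().T @ M_op(uk))
          for wk, uk in zip(w, u_nodes)) / np.sqrt(np.pi)
print(np.linalg.norm(avg - I2))        # O(dt^2), as the completeness relation states

# One step of the state update, Eq. (eq-stup)
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+x><+x|
u_t = 1.3                                                 # hypothetical record value
Mr = M_op(u_t) @ rho @ M_op(u_t).conj().T
rho_next = Mr / np.trace(Mr)
```

The higher-order $\op c^2$ terms are exactly what make the deviation from the identity $O(\delta t^2)$ rather than $O(\delta t)$.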
\subsubsection{Quantum state paths with most-likely unknown records}
The goal of the CDJ formalism is to find the most-likely record estimator, which can be formulated as \begin{align}\label{eq-originalmlp} \both U_{\rm CDJ}^\star = \argmax_{\boths{u}} \, \wp_{\text{D}}(\,\both{u}\,), \end{align} where the PDF of unknown records $\both u$ is given by \cite{Chantasri2013} \begin{align}\label{eq-mlpprob}
&\wp_{\text{D}}(\,\both{u}\,) {\rm d}\mu(\,\both{u}\,) = \int \!\! {\rm d}\mu(\,\bothp{{\bm q}}\,) \, {\cal B}_{\text{D}} \! \prod_{t=0}^{T-\delta t} {\rm d} u_{t} \, \wp(u_{t} | {\bm q}_{t}) \delta[{\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t})]. \end{align} Here we have defined the vector ${\bm q}_t$ that parametrizes the quantum state matrix, such that $\rho_t = {\hat S}({\bm q}_{t})$ for a state-operator function $\hat S$. An example is ${\bm q}_t = \{ x_t , y_t, z_t \}^{\top}$, which is a Bloch vector for a qubit state matrix, for which ${\hat S}({\bm q}_{t}) = \tfrac{1}{2}(\hat 1 + {\bm q}_{t} \cdot \hat {\bm \sigma} $) and the vector of Pauli matrices is $\hat {\bm \sigma} = \{ \hat \sigma_x, \hat \sigma_y, \hat \sigma_z \}^{\top}$. We also defined the measure ${\rm d}\mu(\,\bothp{{\bm q}}\,) = \prod_{t =0}^{T} {\rm d} {\bm q}_{t}$ for the path integral over the state vectors and the measure ${\rm d}\mu(\,\both{u}\,) = \prod_{t=0}^{T-\delta t} {\rm d} u_t$ for the unknown records. This measure is to be understood in any PDFs of the unknown records in the rest of the paper.
The term $\wp(u_{t} | {\bm q}_{t})$ in the product of Eq.~\eqref{eq-mlpprob} is the probability of a time-local unknown
measurement result $u_{t}$, conditioned on a state ${\bm q}_{t}$ right before its measurement. We have $\wp(u_{t} | {\bm q}_{t}) = \wp_{\rm ost}(u_t) {\rm Tr}[ {\cal M}_{u_t} \hat S({\bm q}_t) ]$ for the map in Eq.~\eqref{eq-moput}. The delta-function term $\delta[{\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t})]$ is to ensure that the quantum states at all times satisfy an update equation denoted by ${\bm q}_{t+\delta t} = \bm{{\cal E}}({\bm q}_{t}, u_{t})$, which describes the evolution of the system's state in Eq.~\eqref{eq-stup} in terms of the vector ${\bm q}_{t}$. The boundary condition term ${\cal B}_{\text{D}}$ takes care of any constraints imposed on the evolution.
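For concreteness, the qubit case of the state-operator function and its inverse can be sketched as follows (a minimal helper, with an arbitrary example Bloch vector):

```python
import numpy as np

# Qubit parametrization rho = S(q) = (1 + q . sigma)/2 and its inverse
# q_i = Tr(sigma_i rho), as used in the path PDF above.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def S(q):
    """State-operator function S(q) for a Bloch vector q = (x, y, z)."""
    return 0.5 * (np.eye(2, dtype=complex)
                  + sum(qi * si for qi, si in zip(q, sigma)))

def bloch(rho):
    """Inverse map: q_i = Tr(sigma_i rho)."""
    return np.array([np.trace(si @ rho).real for si in sigma])

q = np.array([0.3, -0.2, 0.5])   # an example Bloch vector
print(bloch(S(q)))               # round trip recovers q
```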
\subsubsection{Calculational techniques and boundary conditions}
In the original work~\cite{Chantasri2013}, the CDJ approach was used for fixed boundary states at the initial and final times. The solution is a quantum state path and a measurement record that maximize the probability distribution in Eq.~\eqref{eq-mlpprob} under the following constraints: the correct evolution ${\bm q}_{t+\delta t} = \bm{{\cal E}}({\bm q}_{t}, u_{t})$ for $t \in \{ 0, \delta t, ..., T-\delta t \}$; and the boundary conditions, ${\bm q}_0 = {\bm q}_I$ and $ {\bm q}_T = {\bm q}_F$, corresponding to ${\cal B}_{\text{D}} = \delta({\bm q}_0 -{\bm q}_I)\delta({\bm q}_T-{\bm q}_F)$. This final condition is the reason the past-future unobserved record is defined on the half-open interval $[0,T)$. The technique uses the Lagrange multiplier method, where in this case the Lagrangian function is \begin{align}
{\cal S} =& - {\bm p}_{-\delta t}({\bm q}_0 - {\bm q}_I) - {\bm p}_{T}({\bm q}_T - {\bm q}_F) + \sum_{t=0}^{T-\delta t} \left\{ - {\bm p}_{t} [ {\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t}) ] + \ln \wp(u_{t} | {\bm q}_{t}) \right\}, \end{align} given the multipliers ${\bm p}_{t}$ for times $t \in \{ -\delta t, 0, \delta t, ..., T \}$. By extremizing this Lagrangian function over its variables ${\bm q}_{t}$, ${\bm p}_{t}$, and $u_{t}$, one arrives at difference equations describing an optimal solution, \begin{subequations}\label{eq-original-diffeq} \begin{align} {\bm q}_{t+\delta t} = & \,\, \bm{{\cal E}}( {\bm q}_{t} , u_{t}), \\
{\bm p}_{t-\delta t} =&\,\, \bm{\nabla}_{{\bm q}_{t}} \big[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t} ) + \ln \wp (u_{t} | {\bm q}_{t})\big], \\
0 = & \,\, \frac{\partial}{\partial u_{t}} \big[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t})+ \ln \wp( u_{t} | {\bm q}_{t}) \big] , \end{align} \end{subequations} where $\bm{\nabla}_{{\bm q}_{t}}$ denotes the gradient with respect to the $(d^2-1)$-dimensional vector ${\bm q}_t$ at time $t$. Eqs.~\eqref{eq-original-diffeq} can be solved with the boundary conditions ${\bm q}_0 = {\bm q}_I$ and ${\bm q}_T = {\bm q}_F$. In the time-continuum limit $\delta t\rightarrow \dd t$, the equations become a set of differential equations, which can be solved to yield a differentiable quantum path ${\bm q}_{t}$. This is the quantum trajectory given the most-likely record $\both u$. There can also exist multiple solutions, corresponding to local minima and maxima of the PDF $\wp_{\text{D}}(\,\both{u}\,)$, at least one of which is the most-likely record $\both U_{\rm CDJ}^\star$. A thorough analysis of multiple solutions of the most-likely path can be found in Lewalle \textit{et al.}~\cite{LewCha17}.
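The extremization above can be cross-checked by maximizing $\ln\wp_{\text{D}}(\,\both u\,)$ directly. The sketch below does this numerically for an illustrative model that is entirely an assumption of ours: a qubit with $\op c = \hat\sigma_z/2$, $\op H = 0$, initial state $|{+}x\rangle$, and a quadratic penalty standing in for the final boundary condition ${\bm q}_T = {\bm q}_F$ (here a target Bloch coordinate $z_F = 0.5$):

```python
import numpy as np
from scipy.optimize import minimize

# Approximate most-likely record by direct penalized maximization of
# ln wp_D(u).  Illustrative model: qubit, c = sigma_z/2, H = 0; all
# numbers are assumptions for the sketch.
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)
c = sz / 2
dt, N = 0.05, 10                        # T = N*dt
rho0 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+x><+x|
zF = 0.5                                # final boundary condition (Bloch z)

def M_op(u):
    # diffusive measurement operator, Eq. (eq-moput), with H = 0
    return (I2 + dt * u * c - 0.5 * dt * (c.conj().T @ c)
            - 0.5 * dt * (c @ c) + 0.5 * dt**2 * u**2 * (c @ c))

def final_z(us):
    rho = rho0
    for u in us:
        Mr = M_op(u) @ rho @ M_op(u).conj().T
        rho = Mr / np.trace(Mr).real
    return np.trace(sz @ rho).real

def neg_log_prob(us):
    # -ln wp_D(u) up to u-independent constants, plus a quadratic penalty
    # standing in for the Lagrange-multiplier boundary term at t = T
    rho, nll = rho0, 0.0
    for u in us:
        Mr = M_op(u) @ rho @ M_op(u).conj().T
        tr = np.trace(Mr).real
        nll += 0.5 * u**2 * dt - np.log(tr)   # -ln[ wp_ost(u) Tr(M rho M^dag) ]
        rho = Mr / tr
    return nll + 1e4 * (np.trace(sz @ rho).real - zF)**2

res = minimize(neg_log_prob, np.zeros(N), method="BFGS")
u_star = res.x                          # approximate most-likely record
print(final_z(u_star))                  # close to the target zF
```

The penalty method is a pragmatic substitute for solving the boundary-value difference equations; for careful treatments of multiple extrema, see the analysis cited above.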
\subsection{Quantum state smoothing formalism}\label{sec-qss} The final existing approach that uses past-future information for quantum systems is quantum state smoothing introduced by Guevara and Wiseman~\cite{Ivonne2015}. It was inspired by Ref.~\cite{Armen2009}, which applied a type of classical smoothing theory \cite{Sarkka13} (posterior decoding) to a quantum system, by treating the system semiclassically. Quantum state smoothing theory was proposed as a way to generalize classical state smoothing to the fully quantum realm, while guaranteeing a positive-semidefinite smoothed quantum state. The smoothed quantum state is defined under a scenario of a partially observed quantum system, where the system of interest is subjected to two time-continuous monitoring channels. One is accessible by an observer (Alice), giving an \emph{observed} record $\bothp{O}$. The other is not accessible by her, but is assumed observed by an omniscient observer (Bob), giving an \emph{unknown} record $\bothp{ U}^{\rm true} \equiv \bothp{ U}$. Note that the `true' superscript indicates that the record is known to Bob, but unknown to Alice. The omniscient Bob also has access to Alice's record and therefore can construct a true quantum state of the system, $ \rho^{\rm true}_\tau = \rho_{\pasts{O}, \pasts{ U}}$; as mentioned in Section~\ref{sec-RE}, a true quantum state can be computed using only the past portion of the complete records up until time $\tau$. The central idea of quantum state smoothing~\cite{Ivonne2015} is that Alice uses her future record $\futp{O}$ (as well as her past record $\past{O}$) to make inferences about Bob's past record $\past{U}$, and by appropriately averaging $\rho_{\pasts{O}, \pasts{ U}}$ can obtain a smoothed quantum state, $\rho_{\bothps{O}}$, conditioned on the past-future information. This is explained in detail in the subsections below.
The technique of quantum state smoothing has been applied to qubit systems~\cite{Ivonne2015,chantasri2019,LavCha2020b,GueWis20,LavGueWis21} as well as linear Gaussian quantum systems \cite{LavCha2019,LavCha2020a,LavCha2020b,Laverick21}. Numerous issues related to the quantum state smoothing theory have been investigated, including how the observers Alice and Bob should choose their unravellings to best demonstrate the efficacy of smoothing~\cite{chantasri2019,LavCha2020b}, the relation between the smoothed state, the true state, and classical smoothing~\cite{LavCha2020a,LavWarWis21}, and
measures of closeness between the true state, filtered state, and smoothed state in individual stochastic trajectories \cite{GueWis20,LavGueWis21}.
Quantum state smoothing has also been illustrated in the simpler regime of a single discrete-time measurement by Budini~\cite{Budini2018b}. The same author also considered the case where, in place of an unknown record, there is an unknown true state of a classical stochastic system interacting with the quantum system of interest~\cite{Budini2017}, a sort of hybrid quantum--classical state smoothing.
\subsubsection{Partially observed systems with observed and unknown records}
The observer Alice can use the traditional quantum trajectory approach to compute the filtered state at time $ \tau$, which comes from tracing out the unknown bath that contains the unknown measurement records. Defining an additional measurement operation describing the backaction from the observed record, ${\cal M} _{O_t} \bullet \equiv {\hat M}_{O_t} \bullet {\hat M}_{O_t}^\dagger$, and an unconditioned map for the unknown record, $e^{\delta t{\cal L}_{\rm u}} \bullet \equiv \int {\rm d} u_t \, \wp_{\rm ost}(u_t) {\cal M}_{u_t} \bullet$, the filtered state at time $\tau$ is given by \begin{align}\label{eq-state}
\rho_{\text F} \equiv \rho_{\pasts{O}} = \frac{{\cal M}_{O_{\tau-\delta t}} e^{\delta t{\cal L}_{\rm u}} \cdots {\cal M}_{O_{0}} e^{\delta t{\cal L}_{\rm u}} \rho_0}{{\rm Tr} ({\cal M}_{O_{\tau-\delta t}} e^{\delta t{\cal L}_{\rm u}} \cdots {\cal M}_{O_{0}} e^{\delta t{\cal L}_{\rm u}} \rho_0)}, \end{align} for the initial state $\rho_0$. It can be shown~\cite{GamWis2005} that this is equivalent to \begin{align}\label{eq-filstate} \rho_{\text F} = \int \!\! {\rm d}\mu(\,\past{u}\,)\, \wp_{\pasts{O}}(\,\past{u}\,) \, \rho_{\pasts{O}, \pasts{u}}\,.
\end{align} That is, the filtered state can also be obtained by averaging the possible true states by integrating over all possible unknown records, with the weights given by the PDF conditioned on the past observed record
$\wp_{\pasts{O}}(\,\past{u}\,) \equiv \wp\left(\past{u}\,| \,\past{O}\right)$. For the diffusive unknown record, the measure of the integration ${\rm d}\mu(\,\past{u}\,) =\prod_{t=0}^{\tau-\delta t } {\rm d} u_t$ is defined in a similar way as in Eq.~\eqref{eq-mlpprob}, but for the past record only.
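The equivalence of the direct filter, Eq.~\eqref{eq-state}, and the record average, Eq.~\eqref{eq-filstate}, can be verified numerically. For tractability the sketch below assumes a two-outcome (rather than diffusive) unknown channel, so the record average is a finite sum over all $2^N$ unknown records; the channels, record, and initial state are illustrative:

```python
import numpy as np
from itertools import product

# Check of Eq. (eq-filstate): the filtered state equals the average of the
# possible true states, weighted by p(u | past O).  Two-outcome channels
# are assumed for tractability; all parameters are illustrative.
theta = 0.3
Mu = [np.array([[np.cos(theta), 0], [0, np.sin(theta)]], dtype=complex),
      np.array([[np.sin(theta), 0], [0, np.cos(theta)]], dtype=complex)]   # unknown channel
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Mo = [Hd @ m @ Hd for m in Mu]                        # observed channel (x basis)

rho0 = np.array([[1.0, 0], [0, 0]], dtype=complex)
O_rec = [0, 1, 1, 0]                                  # a fixed observed record
N = len(O_rec)

# (a) direct filtering, cf. Eq. (eq-state): unconditional map over the
# unknown channel at each step, then the observed backaction
rho = rho0
for O in O_rec:
    rho = sum(m @ rho @ m.conj().T for m in Mu)       # trace out unknown record
    rho = Mo[O] @ rho @ Mo[O].conj().T
rho_F_direct = rho / np.trace(rho)

# (b) average of true states over all unknown records, cf. Eq. (eq-filstate)
rho_F_avg = np.zeros((2, 2), dtype=complex)
norm = 0.0
for u_rec in product([0, 1], repeat=N):
    rt = rho0                                         # unnormalized true state
    for O, u in zip(O_rec, u_rec):
        rt = Mo[O] @ (Mu[u] @ rt @ Mu[u].conj().T) @ Mo[O].conj().T
    p = np.trace(rt).real                             # prop. to p(O, u)
    rho_F_avg += rt                                   # = p * (rt / p)
    norm += p
rho_F_avg /= norm
```

By linearity of the maps the two constructions agree exactly, which is the content of Eq.~\eqref{eq-filstate}.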
\subsubsection{From quantum state filtering to quantum state smoothing}
This form of filtered state Eq.~\eqref{eq-filstate} naturally suggests that the observer could do better by using the past-future observed record to weight the true states. That is, it would be better to use the \emph{smoothed quantum state} at time $\tau$ defined as~\cite{Ivonne2015,chantasri2019}, \begin{align}\label{eq-qssstate} \rho_{\text S} \equiv \int \!\! {\rm d}\mu(\,\past{u}\,)\, \wp_{\bothps{O}}(\,\past{u}\,)\, \rho_{\pasts{O}, \pasts{u}}\,.
\end{align} It has been shown numerically~\cite{Ivonne2015,chantasri2019,LavCha2019,GueWis20} that the smoothed quantum state gives an estimated state that is, on average, \emph{more pure} than the filtered quantum state. Moreover, it was shown analytically that this average purity for the smoothed (or filtered) state is equal to the average fidelity with the true state~\cite{Ivonne2015}. We note that the integrations in Eq.~\eqref{eq-filstate} and Eq.~\eqref{eq-qssstate} are defined in general and can be used for any type of unknown record. Also, the above can easily be generalized to the case where $ \bothpO$ and $\bothp{ U}$ represent multiple records obtained from different couplings to the system.
\subsubsection{Calculational techniques}
The key problem of quantum state smoothing is to compute the conditional PDF in Eq.~\eqref{eq-qssstate}. This can be calculated from an unnormalized true state ${{\tilde \rho}}_{\pasts{O},\pasts{u}}$ and the retrodictive effect~\cite{Ivonne2015,chantasri2019}. The unnormalized state conditioned on measurement records (both observed and unknown) is calculated from a series of measurement operations applied to the system's initial state $\rho_0$, \textit{i.e.}, \begin{align}\label{eq-unnorm} {{\tilde \rho}}_{\pasts{O}, \pasts{u}} = {\cal M}_{O_{\tau-\delta t}} {\cal M}_{u_{\tau-\delta t}} \cdots {\cal M}_{O_0} {\cal M}_{u_0} \rho_0. \end{align} This is similar to Eq.~\eqref{eq-qtrajmap}, but the map is defined with the measurement operations describing the backaction from both observed and unknown records, without the normalizing denominator. The retrodictive effect representing the statistics of the future record, discussed in Section~\ref{sec-reltwostate}, is computed backward in time from the final identity matrix using adjoint operations, \textit{i.e.}, \begin{align}\label{eq-povm} {\hat E}_{\futps{O}} = e^{\delta t{\cal L}_{\rm u}^\dagger} {\cal M}_{O_\tau}^\dagger \cdots \, e^{\delta t{\cal L}_{\rm u} ^\dagger} {\cal M}_{O_{T-\delta t}}^\dagger e^{\delta t{\cal L}_{\rm u}^\dagger} {\cal M}_{O_{T}}^\dagger \hat I, \end{align} for any time $\tau$. We have used the adjoint measurement operations for the observed record ${\cal M} _{O_{t}}^\dagger$ and an adjoint unconditioned map for the unknown record $e^{\delta t{\cal L}_{\rm u}^\dagger} \bullet$.
The point of using the unnormalized state is that its trace gives the probability of the conditioning measurement records, \textit{i.e.}, \begin{align}\label{eq-actualprob}
\wp\left(\past{O}, \past{u}\right) \propto \wp_{\rm ost}(\,\past{u}\,)\, {\rm Tr}\left( {{\tilde \rho}}_{\pasts{O},\pasts{u}} \right), \end{align} where $\wp_{\rm ost}(\,\past{u}\,) = \prod_{t=0}^{\tau-\delta t} \wp_{\rm ost}(u_t)$. Here $\wp_{\rm ost}(u_t)$ is defined for diffusive measurements as in \erf{postend}, but there are analogous expressions for quantum jump trajectories. In \erf{eq-actualprob}, and subsequently, the proportionality sign ($\propto$) allows a factor that is independent of the unknown record, but may be a function of the observed record (which is fixed for any given estimation that Alice must perform). Using Bayes' theorem, we obtain \begin{align} \label{eq-bothoprob1} \wp_{\pasts{O}}(\,\past{u}\,) =&\,\, \wp\left(\past{u} , \past{O}\right)/\wp(\past{O}\,)\nonumber \\ \propto &\,\, \wp_{\rm ost}(\,\past{u}\,) \, {\rm Tr}\left( {{\tilde \rho}}_{\pasts{O}, \pasts{u}} \right), \end{align} as the PDF of the past unknown record used for the filtered state in Eq.~\eqref{eq-filstate}, where the proportionality factor is different from the one in Eq.~\eqref{eq-actualprob}.
At this point, the reader may ask: Why bother with the second line of Eq.~\eqref{eq-bothoprob1} with its ostensible probabilities? Why not use the first line, with the actual probabilities? That is, why not, for a given past observed record $\past{O}$, generate hypothetical past unobserved records $\past{u}$ simply by the usual quantum filtering theory (past-conditioned quantum trajectory theory)? The answer is that this does not work~\cite{GamWis2005}. If we were interested in only a single piece of record $O_t$ and $u_t$ in an infinitesimal time interval $[t,t+\delta t)$, then we could use the normalized quantum trajectory theory which simultaneously generates $O_t$ and $u_t$ with the correct statistics. However, while $\past{O}$ and $\past{u}$ are both `past records' on an interval $[0, \tau)$, parts of $\past{O}$ are in the future of parts of $\past{u}$. That is, to generate $\past{u}$ with the correct statistics given $\past{O}$, one would need to take into account how the later parts of $\past{O}$ affect the likelihood of the earlier parts of $\past{u}$, and standard quantum trajectories demonstrably do not do this~\cite{GamWis2005}. (If they did, they would automatically generate the smoothed quantum state, rather than the filtered quantum state as they do.)
Thus it is necessary, whether using analytical or numerical methods, to consider
unnormalized states, with the unobserved results generated according to an ostensible probability, with $u_t$ independent of $u_s$ for $t\neq s$.
Once we have the filtered PDF $\wp_{\pasts{O}}(\pasts{u})$ from \erf{eq-bothoprob1}, the smoothed PDF is simply given by \begin{align}
\label{eq-bothoprob2} \wp_{\bothps{O}}(\,\past{u}\,) = & \, \, \wp\left(\past{u} , \past{O}\right)\wp\left(\futp{O}\,|\, \past{u}, \past{O}\right)/\wp(\bothp{O}\,) \nonumber \\
\propto & \,\, \wp_{\rm ost}(\,\past{u}\,) \, {\rm Tr}\left( \op{E}_{\futps{O}}\, { {\tilde \rho}}_{\pasts{O}, \pasts{u}} \right), \nonumber \\
\propto & \,\, \wp_{\pasts{O}}(\,\past{u}\,) \, {\rm Tr}\left( \op{E}_{\futps{O}}\, { \rho}_{\pasts{O}, \pasts{u}} \right), \end{align} where ${ \rho}_{\pasts{O}, \pasts{u}} = { {\tilde \rho}}_{\pasts{O}, \pasts{u}\,}/{\rm Tr}\left({ {\tilde \rho}}_{\pasts{O}, \pasts{u}}\right)$.
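The whole construction, Eqs.~\eqref{eq-qssstate}, \eqref{eq-povm} and \eqref{eq-bothoprob2}, can be sketched numerically. As before we assume, for tractability, an illustrative two-outcome unknown channel so the record average is a finite sum; setting $\op E = \hat 1$ then recovers the filtered state, as it must:

```python
import numpy as np
from itertools import product

# Smoothed state via Eqs. (eq-qssstate) and (eq-bothoprob2), with the
# retrodictive effect built by adjoint maps as in Eq. (eq-povm).
# Two-outcome channels are assumed; all parameters are illustrative.
theta = 0.3
Mu = [np.array([[np.cos(theta), 0], [0, np.sin(theta)]], dtype=complex),
      np.array([[np.sin(theta), 0], [0, np.cos(theta)]], dtype=complex)]
Hd = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Mo = [Hd @ m @ Hd for m in Mu]

rho0 = np.array([[1.0, 0], [0, 0]], dtype=complex)
O_rec = [0, 1, 1, 0]
N, tau = len(O_rec), 2                       # smoothing time in the middle

# retrodictive effect: backward from the identity, adjoint of each step
E = np.eye(2, dtype=complex)
for t in reversed(range(tau, N)):
    E = Mo[O_rec[t]].conj().T @ E @ Mo[O_rec[t]]
    E = sum(m.conj().T @ E @ m for m in Mu)

def smoothed(effect):
    """Average of true states at tau, weighted by Tr(effect rho_tilde)."""
    num = np.zeros((2, 2), dtype=complex)
    den = 0.0
    for u_rec in product([0, 1], repeat=tau):
        rt = rho0                            # unnormalized true state rho_tilde
        for O, u in zip(O_rec[:tau], u_rec):
            rt = Mo[O] @ (Mu[u] @ rt @ Mu[u].conj().T) @ Mo[O].conj().T
        wgt = np.trace(effect @ rt).real     # prop. to p(u | past-future O)
        num += wgt * rt / np.trace(rt).real  # weight times normalized true state
        den += wgt
    return num / den

rho_S = smoothed(E)                          # smoothed state at time tau
rho_F = smoothed(np.eye(2, dtype=complex))   # E = 1 reduces to the filtered state
```

Because the weights $\mathrm{Tr}(\op E\tilde\rho)$ are nonnegative, $\rho_{\text S}$ is automatically positive semidefinite, in contrast to the SWV matrix of Eq.~\eqref{eq-wvs}.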
There have been various techniques used in generating the conditional PDFs and in computing the integration in Eq.~\eqref{eq-qssstate} to obtain the smoothed quantum state. In the original work~\cite{Ivonne2015} (also \cite{chantasri2019}), the smoothed state for qubit examples was calculated numerically by simulating stochastic unnormalized states ${ {\tilde \rho}}_{\pasts{O}, \pasts{u}\,}$, with the appropriate ostensible probabilities (\textit{e.g.}, $\wp_{\rm ost}(u_{t})$ for diffusive records). On the other hand, semi-analytical solutions for smoothed quantum states were possible for linear Gaussian systems as shown by Laverick
\textit{et al.}~\cite{LavCha2019}, where their closed-form solutions were derived and investigated.
In this work, we introduce another method to compute the smoothed quantum state in Section~\ref{sec-example}. For a qubit system with particular dynamics and detection setups, the smoothed state can again be calculated via a semi-analytical approach, by directly computing a PDF of the unnormalized state conditioned on the past-future observed records.
\section{Unifying theory of quantum state estimation}\label{sec-sevenQSE}
\begin{figure}\label{fig-intro}
\end{figure}
Our unifying theory is based on the same scenario as in the quantum state smoothing formalism (see Figure~\ref{fig-intro}). That is, when a quantum system is partially observed by an observer, using the Alice-Bob protocol, we can build a framework that encompasses the existing formalisms, or at least generalizations of them. Given an observed record $\bothp{O}$, Alice's task is to try her best to guess the true state $\hat\psi^{\rm true} = |\psi_{\pasts{O},\pasts{ U}}\rangle \langle \psi_{\pasts{O},\pasts{ U}}| = \rho_{\pasts{O}, \pasts{ U}}$, where the record $\both{ U}$ is unknown to her. Note that we have excluded the measurement result at time $T$ in the unknown record for consistency with the most-likely path formalism. The best estimate of the true state is a state that optimizes some kind of cost function as in Section~\ref{sec-QSE} or \ref{sec-RE}. That is, the cost functions can be any functions (or functionals) defined for unknown variables, which include the unknown true state $\hat\psi^{\rm true}$, and the unknown record $\both{ U}$.
We will focus on the most common distance measures as cost functions, and introduce in total eight quantum state estimators. We show, in Figure \ref{fig-diagramQ2} (a more comprehensive version of Figure~\ref{fig-diagramQ}), a diagram summarizing our unifying theory. We explain the eight different state estimators operationally, in terms of the expected costs, labelled $\textbf{Q}_1$ to $\textbf{Q}_8$, and show how they are connected or related (indicated by different types of connecting lines or arrows). We present the cost functions and their estimators in three subsections. The first three costs ($\textbf{Q}_1$--$\textbf{Q}_3$) are defined in the quantum state space (blue boxes in Figure \ref{fig-diagramQ2}) and so can be applied for any type of unknown record. The next four costs ($\textbf{Q}_4$--$\textbf{Q}_7$) are defined in the unknown record space (pink boxes in Figure \ref{fig-diagramQ2}) and can be applied only to diffusive measurement records. The last subsection is the SWV state for $\textbf{Q}_8$, which is associated with a weak von Neumann measurement of an Hermitian observable. These costs are associated with distinct state estimators, completing the connections among the three existing formalisms presented in the previous section.
\begin{figure}\label{fig-diagramQ2}
\end{figure}
\subsection{Arbitrary-type unknown records ($\textbf{Q}_1$--$\textbf{Q}_3$)}
Let us start with the smoothed quantum state on the top-left of the diagrams (\textit{i.e.}~both Figure~\ref{fig-diagramQ} and Figure \ref{fig-diagramQ2}). This was introduced in Ref.~\cite{Ivonne2015} as an analogue to the classical smoothed state. As prefigured in Section~\ref{sec-qss}, the smoothed quantum state, in Eq.~\eqref{eq-qssstate}, can be defined as the state that minimizes the expected Trace Square Deviation from the true state, \\ {\bf ($\textbf{Q}_1$) $\langle$TrSDf$\rho_{\pasts{ U}}\rangle$:}
\begin{align}\label{eq-smt}
\est\rho_\tau = & \argmin_{\rho} \left\langle {\rm Tr} \left [ \left(\rho - \rho_{\pasts{O}, \pasts{ U}}\right)^2 \right ] \right \rangle_{\pasts{ U} | \bothps{O}}\nonumber \\
= & \left\langle \rho_{\pasts{O}, \pasts{ U}} \right\rangle_{\pasts{ U} | \bothps{O}} \equiv \rho_{\text S}. \end{align}
This follows from Eq.~\eqref{eq-q-sd}; see Ref.~\cite{LavGueWis21} for a more detailed discussion, including another cost function that gives the same state. In Figure~\ref{fig-diagramQ2} (top left), we show illustrative plots explaining the calculation of the states $\rho_{\pasts{O}, \pasts{ U}}$ (grey crosses) from all possible $\pasts{ U}$ (grey signals), where the smoothed state (an individual dot in the dotted red trajectory) is the conditioned average, $\left\langle \rho_{\pasts{O}, \pasts{ U}} \right\rangle_{\pasts{ U} | \bothps{O}}$.
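That the conditioned mean minimizes the expected trace-square deviation can be checked directly, via the exact identity $\langle{\rm TrSD}(\sigma)\rangle = \langle{\rm TrSD}(\rho_{\text S})\rangle + {\rm Tr}[(\sigma-\rho_{\text S})^2]$. In the sketch below, a randomly generated ensemble of pure states with random weights stands in for the conditional PDF of the unknown record:

```python
import numpy as np

# The conditioned mean minimizes the expected trace-square deviation (Q1).
# A random weighted ensemble stands in for p(u | past-future O).
rng = np.random.default_rng(1)

def rand_pure(rng):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    return np.outer(v, v.conj())

states = [rand_pure(rng) for _ in range(50)]        # possible true states
w = rng.random(50)
w /= w.sum()                                        # stand-in conditional weights
rho_S = sum(wk * pk for wk, pk in zip(w, states))   # the Q1 estimator (the mean)

def exp_trsd(sigma):
    """Expected trace-square deviation of sigma from the true state."""
    return sum(wk * np.trace((sigma - pk) @ (sigma - pk)).real
               for wk, pk in zip(w, states))

print(exp_trsd(rho_S))    # no other sigma can achieve a smaller value
```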
Now, using the connection between two types of averages in Eq.~\eqref{eq-r-ave}, we can define the expected average of an arbitrary functional $\left\langle {\cal A} \right\rangle_{\pasts{ U} | \bothps{O}}$ in two ways, \begin{subequations}\label{eq-ave-alter} \begin{align}
\label{eq-ave-alter1} \left\langle {\cal A} \right\rangle_{\pasts{ U} | \bothps{O}} = & \int \!\! {\rm d}\mu(\,\past{u}\,)\, \wp_{\bothps{O}}(\,\past{u}\,) \, {\cal A} \!\left[\rho_{\pasts{O}, \pasts{u}} \right]\\ \label{eq-ave-alter2}= & \int \!\! {\rm d} \mu_{\rm H}(\hat\psi)\, \wp_{\bothps{O}}(\hat\psi) \, {\cal A}\!\left[\hat\psi \right]. \end{align} \end{subequations} The first line is a weighted average over all possible unknown records $\past{u}$, whereas the second line is a weighted average over all possible (pure) true states using a PDF of pure states conditioned on the past-future observed record, \textit{i.e.}, \begin{align}\label{eq-bayes}
\wp_{\bothps{O}} (\hat\psi) = &\,\, \wp\left(\hat\psi | \past{O}\right)\wp\left(\futp{O}\, | \hat\psi\right)/\wp(\futp{O}\,) \nonumber \\ \propto & \,\, \wp_{\pasts{O}}(\hat\psi) \Tr{{\hat E}_{\futps{O}} \, \hat\psi}, \end{align} as before in Eq.~\eqref{eq-bothoprob2}. Therefore, the estimator of this trace-square-deviation cost function, the smoothed state, can also be written as an average over pure states, \begin{align}\label{eq-qss-alter} \rho_{\text S} = \int \!\! {\rm d} \mu_{\rm H}(\hat\psi)\, \wp_{\bothps{O}}(\hat\psi) \, {\hat\psi}. \end{align} We note that these two forms of conditioned averaged state, Eqs.~\eqref{eq-qssstate} and \eqref{eq-qss-alter}, will be useful when we define the estimators $\textbf{Q}_3$ and $\textbf{Q}_4$.
Following the dashed black line down from $\textbf{Q}_1$ in the diagrams of Figures~\ref{fig-diagramQ} and \ref{fig-diagramQ2}, since the trace square deviation is a distance measure between any two states, we can also consider another common distance measure, the quantum state fidelity, for a cost function. We use the Jozsa fidelity~\cite{Jozsa1994}, \begin{align}\label{eq-Jozsafid} F[\rho_{\text A}, \rho_{\text B}] & \equiv \left[ \Tr{\sqrt{\rho_{\text B}^{1/2} \rho_{\text A} \rho_{\text B}^{1/2}} } \right]^2, \end{align} since it has a direct connection to the fidelity for classical states Eq.~\eqref{eq-cjozsa} as will be discussed in Section~\ref{sec-classana}. Provided one of the state arguments in the fidelity function is a pure state, $\rho_{\text A} = \hat\psi_{\text A}$, the fidelity between $\rho_{\text A}$ and $\rho_{\text B}$ can be written as \begin{align}\label{eq-fidel} F[\hat\psi_{\text A}, \rho_{\text B}] = {\rm Tr}\left(\hat\psi_{\text A} \rho_{\text B} \right). \end{align} Therefore, it is natural to define a new state estimator that minimizes a new cost function, the expected negative Fidelity with the true state,\\ {\bf ($\textbf{Q}_2$) $\langle$nFw$\rho_{\pasts{ U}}\rangle$}: \begin{align}\label{eq-th-q-nf}
\est\rho_\tau = & \argmin_{\rho} \left\langle -F\!\left[\rho, \rho_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}} \nonumber \\ = & \argmin_{\rho} \, [-{\rm Tr}\left( \rho \, \rho_{\text S}\right)] \nonumber \\
= & \, |{\psi}_{\text S}^{\rm max} \rangle \langle {\psi}_{\text S}^{\rm max} | \equiv \hat {\psi}_{\text S}^{\rm max}. \end{align}
Here we have used Eq.~\eqref{eq-qss-alter} and \eqref{eq-fidel} to obtain the second line of Eq.~\eqref{eq-th-q-nf}, which leads to the optimal estimator that maximizes the trace ${\rm Tr}\left( \rho \, \rho_{\text S}\right)$, namely the eigenprojector of the smoothed state $\rho_{\text S}$ with the largest eigenvalue: $\rho_{\text S} |{\psi}_{\text S}^{\rm max} \rangle = \lambda_{\text S}^{\rm max}|{\psi}_{\text S}^{\rm max} \rangle$. A more detailed treatment is given in Ref.~\cite{LavGueWis21}, which calls $|{\psi}_{\text S}^{\rm max} \rangle$ the ``lustrated'' smoothed state. In Figure~\ref{fig-diagramQ2}, the illustrative plots for the calculation of the $\textbf{Q}_2$ state estimator are very similar to those of $\textbf{Q}_1$; the only difference is that after it is calculated, the state estimator is made pure (lustrous) by the procedure just described. Note, in the event that two or more eigenstates have equally large eigenvalues, any convex combination of the eigenstates will also be an optimal estimator for $\textbf{Q}_2$.
Such cases form a set of measure zero; in our numerical simulations, Sec.~\ref{sec-fpe}, they do not occur in any of the trajectories we show, and do not contribute to any of the averages we calculate.
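For intuition, the lustration step of $\textbf{Q}_2$ is easy to sketch numerically: diagonalize the smoothed state and keep the eigenprojector of the largest eigenvalue. The following Python snippet is our own illustration (the matrix `rho_S` and the function name `lustrate` are invented for this sketch, not taken from the simulations of this work); it makes the point that no density matrix achieves a larger fidelity with $\rho_{\text S}$ than this projector.

```python
import numpy as np

def lustrate(rho_S):
    """Return the eigenprojector of rho_S with the largest eigenvalue.

    Toy sketch of the Q2 "lustration" step: under the negative-fidelity
    cost, the optimal estimator is the pure state |psi_S^max><psi_S^max|.
    rho_S is any density matrix (Hermitian, positive, unit trace).
    """
    evals, evecs = np.linalg.eigh(rho_S)   # eigenvalues in ascending order
    psi = evecs[:, -1]                     # eigenvector of the largest eigenvalue
    return np.outer(psi, psi.conj())       # rank-1 projector

# Illustrative mixed qubit smoothed state (numbers invented for this sketch).
rho_S = np.array([[0.7, 0.1], [0.1, 0.3]])
proj = lustrate(rho_S)
# Tr(proj @ rho_S) equals the largest eigenvalue of rho_S, which is the
# maximal fidelity achievable by any density matrix against rho_S.
```

Since $\max_\rho {\rm Tr}(\rho\,\rho_{\text S}) = \lambda_{\text S}^{\rm max}$ over density matrices $\rho$, the projector saturates the bound.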
Moving down in the diagrams of Figures~\ref{fig-diagramQ} and \ref{fig-diagramQ2}, with the dotted orange line referring to an ``equal'' connection in classical state estimation (discussed later in Section~\ref{sec-classana}), we consider next the negative equality cost function in the state space. Using the definition of this cost in Eq.~\eqref{eq-q-ne}, we obtain an estimated state that minimizes an expected negative Equality with the true state,\\ {\bf ($\textbf{Q}_3$) $\langle$nEw$\rho_{\pasts{ U}}\rangle$}: \begin{align}\label{eq-th-q-ne}
\est\rho_\tau = & \argmin_{\rho} \left\langle - \delta_{\rm H}\!\left[ \rho - \rho_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}\nonumber \\ = & \argmin_{\hat\psi}\, [-\wp_{\bothps{O}}(\hat\psi)], \end{align} which is the most-likely state using the PDF $\wp_{\bothps{O}} (\hat\psi)$ in Eq.~\eqref{eq-bayes}.
In the event that the PDF $\wp_{\bothps{O}} (\hat\psi)$ has more than one most-likely state, any one of the states is an optimal solution to $\textbf{Q}_3$. Unlike the degenerate case of $\textbf{Q}_2$ (above), however, a convex combination of any of these states will be suboptimal. We stress that, even in the non-degenerate case, these cost functions do not give the same optimal state. That is, the most-likely state (by the Haar measure) is, in general, not the same as the estimator in Eq.~\eqref{eq-th-q-nf}, even though both are pure states.
In addition to the examples shown above, one could certainly come up with other types of distance measures in the true state space, and define new optimal estimators based on them. As stated before, in this work, we by no means aim to compile a comprehensive list of estimators. Thus it is time to move instead to estimators with optimality defined in the unknown measurement record space.
\subsection{Diffusive-type unknown records ($\textbf{Q}_4$--$\textbf{Q}_7$)}\label{sec-sevenQSEdiff} Given the two definitions of the smoothed state in Eqs.~\eqref{eq-qssstate} and \eqref{eq-qss-alter}, if the weight in the state space $\wp_{\bothps{O}}(\hat\psi)$ can be used to find the most-likely state conditioned on the past-future observed record as in Eq.~\eqref{eq-th-q-ne}, we should also be able to use the weight in the record space $\wp_{\bothps{O}}(\,\past{u}\,)$ to find a similar ``most-likely'' estimator. Unlike averages, however, the peaks of a PDF are not preserved under a change of variables. Therefore, we do not expect the most-likely state to be the same as the state with the most-likely past record. Thus, the solid arrows from the smoothed state in $\textbf{Q}_1$, in the diagrams in Figures~\ref{fig-diagramQ} and \ref{fig-diagramQ2}, branch out to $\textbf{Q}_3$ (the former) and $\textbf{Q}_4$ (the latter).
We formally define our new estimator as that which minimizes the expected negative Equality with the past Unknown record,\\ {\bf ($\textbf{Q}_4$) $\langle$nEw$\past U\rangle$}: \begin{align}\label{eq-q4} \est\rho_\tau = \,\, \rho_{\pasts{O},\est{\pasts{ U}}} &, \text{ where}\nonumber \\
\est{\past{ U}} \!\!\! = \,\, \argmin_{\pasts{u}} & \left\langle -\delta \left(\past{u} - \past{ U} \right) \right\rangle_{\pasts{ U} | \bothps{O}} \nonumber \\ = \,\, \argmin_{\pasts{u}} & \, \,[-\wp_{\bothps{O}}(\,\past{u}\,)]. \end{align}
Since a change of variables can shift the peaks of a PDF, one has to be clear about which measure is used for the PDF that is maximized in the last line of Eq.~\eqref{eq-q4}. For the most-likely state in Eq.~\eqref{eq-th-q-ne}, the Haar measure was chosen as a natural measure for pure quantum states. In the case of the unknown diffusive record, we choose the natural measure already defined in Eq.~\eqref{eq-mlpprob}. We also note that any constant scaling factors or offsets added to the record do not affect the peaks of the PDF.
To solve for the most-likely past record in Eq.~\eqref{eq-q4}, we generalize the CDJ technique of Section~\ref{sec-cdj}, where the boundary conditions are the initial-final states of the system, to boundary conditions that are functions of past-future observed records. Given that ${\bm q}_{t}$ represents the true state at time $t$, we realize that the conditional PDF for the unknown record is $\wp_{\bothps{O}}(\,\past{u}\,) \propto \wp(\,\past{u}, \past{O}\,) \wp(\futp{O} \,| {\bm q}_\tau)$, where $\wp(\futp{O}\, | {\bm q}_\tau) \propto {\rm Tr}[ {\hat E}_{\futps{O}\,} \hat S({\bm q}_\tau) ]$. Similarly to Eq.~\eqref{eq-mlpprob}, with discretized times, we write the probability function of the past unknown record, up to time $\tau$, \begin{align}\label{eq-mlp1-prob}
\wp_{\bothps{O}} (\,\past{u}\,) & \propto \int \!\! {\rm d} \mu(\,\past{{\bm q}}\,) \,\, {\rm Tr}\left[ {\hat E}_{\futps{O}}\, \hat S({\bm q}_\tau) \right] \prod_{t=0}^{\tau-\delta t} \wp( u_{t}, O_{t} | {\bm q}_{t}) \, \delta[{\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t})], \end{align} allowing, as before, a proportionality factor which is a function of the observed record. The measure of the integration is ${\rm d} \mu(\,\past{{\bm q}}\,) = \prod_{t = 0}^{\tau} {\rm d}{\bm q}_{t}$ and now, generalizing Refs.~\cite{Chantasri2013,chantasri2015stochastic}, the boundary term can be identified as ${\cal B}_{\text{D}} = \delta({\bm q}_0 -{\bm q}_I) {\rm Tr}[ {\hat E}_{\futps{O}\,} \hat S({\bm q}_\tau) ]$. The condition on the past and future observed records is included via the terms $O_{t}$ for $t\in \{ 0, \delta t, ..., \tau-\delta t\}$ (past) and via the effect $\op{E}_{\futps{O}}$ for $\futp{O} = \{ O_t : t \in\{ \tau, \tau+\delta t, ..., T \}\}^{\top}$ (future), respectively. The quantum state ${\bm q}_{t}$ at any time needs to satisfy an update equation ${\bm q}_{t+\delta t} = \bm{{\cal E}}( {\bm q}_{t} , u_{t}, O_{t} )$, which describes the state evolution, \begin{align} {\rho}_{t+\delta t}= \frac{{\cal M}_{O_t} {\cal M}_{u_t} \rho_t}{{\rm Tr}\big( {\cal M}_{O_t} {\cal M}_{u_t} \rho_t \big)}, \end{align} modified from Eq.~\eqref{eq-stup} to include the measurement backaction from the observed record $O_{t}$.
We follow the CDJ optimization process, maximizing the PDF under constraints. This gives the Lagrangian function as \begin{align} {\cal S} = & - {\bm p}_{-\delta t}({\bm q}_0 - {\bm q}_I) + \ln {\rm Tr}\left [ {\hat E}_{\futps{O}\,} \hat S({\bm q}_\tau) \right ]
+ \sum_{t=0}^{\tau-\delta t} \big\{ - {\bm p}_{t} [ {\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t}) ] + \ln \wp(u_{t}, O_{t} | {\bm q}_{t}) \big\}, \end{align} with the Lagrange multipliers ${\bm p}_t$. Extremizing the Lagrangian function, we arrive at a set of difference equations, \begin{subequations}\label{eq-diffeq1} \begin{align} {\bm q}_{t+\delta t} = & \,\, \bm{{\cal E}}( {\bm q}_{t} , u_{t}, O_{t} ), \\
{\bm p}_{t-\delta t} =& \,\, \bm{\nabla}_{{\bm q}_{t}} \big[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t} ) + \ln \wp (u_{t}, O_{t} | {\bm q}_{t}) \big], \\ {\bm p}_{\tau-\delta t} = & \,\,\bm{\nabla}_{{\bm q}_\tau} \ln {\rm Tr}\left[ {\hat E}_{\futps{O}\,} \hat S({\bm q}_\tau) \right],\\
0 = & \,\, \frac{\partial}{\partial u_{t}} \big[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t})+ \ln \wp( u_{t}, O_{t} | {\bm q}_{t}) \big] , \end{align} \end{subequations} which are slightly different from the original ones in Eq.~\eqref{eq-original-diffeq}. The initial condition is fixed, ${\bm q}_0 = {\bm q}_I$, but the third line is a new final boundary condition that depends on the observed record after time $\tau$ (instead of the fixed final state as in the original CDJ formalism~\cite{Chantasri2013,chantasri2015stochastic}). Solving the difference equations, Eqs.~\eqref{eq-diffeq1}, gives the most-likely past record $\{ u_0, u_{\delta t}, \cdots, u_{\tau-\delta t} \}$ and its associated state trajectory $\{ {\bm q}_0, {\bm q}_{\delta t}, \cdots, {\bm q}_\tau \}$, for a particular value of $\tau$.
It is important to note that the optimization in Eq.~\eqref{eq-q4} depends on the value of $\tau$, making the state estimator for $\textbf{Q}_4$ quite complicated to solve. The $\tau$-dependence is implicit in the definition of the past record $\past{ U}$. Its most-likely estimator varies in its length, from a single number for $\tau=\delta t$ to an entire record for $\tau = T$. Therefore, we have to solve Eqs.~\eqref{eq-diffeq1} and obtain the solutions of $\{ u_0, u_{\delta t}, \cdots, u_{\tau-\delta t} \}$ and $\{ {\bm q}_0, {\bm q}_{\delta t}, \cdots, {\bm q}_\tau \}$ for every different value of $\tau$. The state estimator for $\textbf{Q}_4$ is then given by the collection of $\{ {\bm q}_\tau : \tau \in\{ \delta t, 2\delta t, ..., T\} \}$, a final state of each solution of Eqs.~\eqref{eq-diffeq1}. In Figure~\ref{fig-diagramQ2}, the illustrative plots for $\textbf{Q}_4$ show the most-likely past record (red signal in the middle panel), which is used in calculating the state estimator (red cross) at time $\tau$.
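To make the structure of Eqs.~\eqref{eq-diffeq1} concrete, the following Python sketch solves a deliberately simplified analogue: a scalar state with linear update $q_{t+\delta t} = q_t + u_t\,\delta t$, a Gaussian record log-likelihood $-(u_t-q_t)^2/(2v)$, and a Gaussian stand-in $-(q_\tau - q_F)^2/(2w)$ for the terminal term $\ln {\rm Tr}[\hat E_{\futps{O}}\hat S({\bm q}_\tau)]$. All model choices (the dynamics, the variances $v$ and $w$, the target $q_F$) are our own toy assumptions, not the measurement model of this work. For this toy model the backward sweep for ${\bm p}_t$ computes the exact gradient of the log-PDF with respect to the record, so a damped ascent converges to a record satisfying the discrete optimality conditions.

```python
import numpy as np

# Toy forward-backward sweep for the CDJ-type difference equations.
N, dt = 10, 0.1     # number of time steps and step size (our choices)
v, w = 1.0, 0.1     # record-noise and terminal-term variances (our choices)
qI, qF = 0.0, 1.0   # fixed initial state and terminal "target"

u = np.zeros(N)          # initial guess for the unknown record
for _ in range(8000):    # gradient ascent on the log-PDF of the record
    # forward sweep: state trajectory generated by the current record guess
    q = np.empty(N + 1)
    q[0] = qI
    for k in range(N):
        q[k + 1] = q[k] + u[k] * dt
    # backward sweep: Lagrange multipliers p_t (the costate recursion),
    # with the terminal boundary condition from the stand-in terminal term
    p = np.empty(N)
    p[N - 1] = -(q[N] - qF) / w
    for k in range(N - 1, 0, -1):
        p[k - 1] = p[k] + (u[k] - q[k]) / v
    # stationarity residual in u; for this toy model it is the exact
    # gradient of the log-PDF with respect to the record
    grad = p * dt - (u - q[:N]) / v
    u += 0.15 * grad     # damped ascent step

# At convergence grad ~ 0 (the discrete optimality conditions hold) and
# q is the most-likely toy trajectory, pulled from qI toward qF.
```

The same skeleton, with the paper's $\bm{{\cal E}}$ and $\wp(u_t, O_t|{\bm q}_t)$ substituted, would have to be re-run for every value of $\tau$ to build the $\textbf{Q}_4$ estimator, as described above.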
Moving down in the diagram of Figure~\ref{fig-diagramQ} to the cost $\textbf{Q}_5$ takes us a step towards the original CDJ approach, where we optimize the entire unknown record, rather than the past portion of it as in $\textbf{Q}_4$. As a result, we have a state estimator that minimizes an expected negative Equality with entire Unknown records, \\ {\bf ($\textbf{Q}_5$) $\langle$nEw$\both{ U}\rangle$}: \begin{align}\label{eq-mlp} \est\rho_\tau = & \,\, \rho_{\pasts{O}, \est{\pasts{ U}}}, \text{ where}\nonumber \\
\est{\past{ U}} \!\!: \,\,\, \est{\both{ U}}\!\!\! = &\argmin_{\boths{u}} \, \left\langle - \delta \left(\both{u} - \both{ U} \right) \right\rangle_{\boths{ U} | \bothps{O}} \nonumber \\
= & \argmin_{\boths{u}} \, [-\wp_{\bothps{O}}(\,\both{u}\,)]. \end{align} This is a direct generalization of the CDJ formalism, where the quantum dynamics now includes the measurement backaction of observed measurement records.
The conditional PDF for this case can be written as $\wp_{\bothps{O}}(\,\both{u}\,) \propto \wp\left(\both{u} , \both{O}\right)\wp(O_T | {\bm q}_T)$. This gives \begin{align}\label{eq-probmlp2}
\wp_{\bothps{O}}(\,\both{u}\,) & \propto \int \!\! {\rm d} \mu (\,\both{{\bm q}}\,)\,\, {\rm Tr}\left[ {\hat E}_T \hat S({\bm q}_T) \right] \prod_{t=0}^{T-\delta t} \wp(u_{t}, O_{t} | {\bm q}_{t}) \delta[{\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t})], \end{align}
where we have used $\wp(O_T| {\bm q}_T) = {\rm Tr}[ {\hat E}_T \hat S({\bm q}_T) ]$ in terms of the effect $\hat E_T$ and state ${\bm q}_T$ at the final time. The Lagrangian function for optimization is given by \begin{align}\label{eq-action2}
{\cal S} = & - {\bm p}_{-\delta t}({\bm q}_0 - {\bm q}_I) + \ln {\rm Tr}\left[ {\hat E}_T \hat S({\bm q}_T) \right] + \sum_{t=0}^{T-\delta t} \big\{ - {\bm p}_{t} [ {\bm q}_{t+\delta t} - \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t}) ] + \ln \wp(u_{t}, O_{t} | {\bm q}_{t}) \big\}. \end{align}
We then extremize the Lagrangian function to get \begin{subequations}\label{eq-diffeq2} \begin{align} {\bm q}_{t+\delta t} = & \,\, \bm{{\cal E}}( {\bm q}_{t} , u_{t}, O_{t} ) ,\\
{\bm p}_{t-\delta t} = & \,\, \bm{\nabla}_{{\bm q}_{t}} \left[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t} )+ \ln \wp (u_{t}, O_{t} | {\bm q}_{t})\right], \\ {\bm p}_{T-\delta t} = & \,\,\bm{\nabla}_{{\bm q}_T} \ln {\rm Tr}\left[ {\hat E}_T \hat S({\bm q}_T) \right],\\
0 = & \,\, \frac{\partial}{\partial u_{t}} \left[ {\bm p}_{t} \cdot \bm{{\cal E}}({\bm q}_{t}, u_{t}, O_{t}) + \ln \wp( u_{t}, O_{t} | {\bm q}_{t}) \right], \end{align} \end{subequations} where the third line serves as a final condition for this case.
We note that these difference equations, Eqs.~\eqref{eq-diffeq2}, are exactly the same as Eqs.~\eqref{eq-diffeq1} for $\tau = T$, when the past unknown record becomes the entire unknown record. Therefore, we only need to solve Eqs.~\eqref{eq-diffeq2} once and its solution $\{ {\bm q}_0, {\bm q}_{\delta t}, \cdots, {\bm q}_T \}$ is the state estimator for $\textbf{Q}_5$ for all times $t \in \{ 0, \delta t, ..., T\}$. This is in contrast to the laborious procedure required for ${\textbf{Q}_4}$. We show in Figure~\ref{fig-diagramQ2} for ${\textbf{Q}_5}$ that the most-likely record (red signal in the middle panel) and its corresponding state trajectory (red trajectory in the bottom panel) can be calculated all at once (\textit{i.e.}, we use solid curves) for all times.
Now that a clear connection between the smoothed quantum state in $\textbf{Q}_1$ and the generalized version of the CDJ formalism in $\textbf{Q}_5$ has been established, we begin to extrapolate the cost function idea towards the two-state vector formalism. Moving up from $\textbf{Q}_5$ in the diagram of Figure~\ref{fig-diagramQ}, following the dashed black line, we consider next the most-likely unknown record locally in time. In this case, the record estimator will come from optimizing a cost function independently at each time $\tau$, instead of over the whole past record ($\textbf{Q}_4$) or the whole record ($\textbf{Q}_5$). We can then define a quantum state that is computed from a string of record estimators, where each minimizes an expected negative Equality with the Unknown record at any time $\tau$, \\ {\bf ($\textbf{Q}_6$) $\langle$nEw$ U_\tau \rangle$}: \begin{align}\label{eq-q6} \est\rho_\tau = & \,\, \rho_{\pasts{O}, \est{\pasts{ U}}}, \text{ where}\nonumber \\
\est{\past{ U}}\!\!\! : \,\,\, \est{U}_\tau = & \argmin_{u_\tau} \, \left\langle - \delta \left(u_\tau - U_\tau \right) \right\rangle_{ U_\tau | \bothps O} \nonumber \\ = & \argmin_{u_\tau}\,[- \wp_{\bothps{O}}(u_\tau)]. \end{align} Note the similarity in formulation to Eq.~\eqref{eq-mlp}. The PDF for an unknown result at time $\tau$ is given by \begin{align}\label{eq-probut}
\wp_{\bothps{O}}(u_\tau) = & \, \wp\left(u_\tau |\,\past{O}\right)\wp\left(\futp{O}\,|u_\tau, \past{O}\right)/\wp\left(\,\futp{O}\,|\,\past{O}\right) \nonumber \\
\propto & \, \Tr{ {\op E}_{\futps{O}} {\hat M}_{u_\tau} \rho_{\pasts{O}} {\hat M}_{u_\tau}^\dagger}, \end{align} where we have used the measurement operator ${\hat M}_{u_\tau}$ for a diffusive measurement result, defined in Eq.~\eqref{eq-moput}.
Having considered the negative equality cost for the local record, the next one, following the double solid lines up from $\textbf{Q}_6$ in the diagram, is the square deviation cost for the same local record. We define a state path calculated from a string of record estimators that minimize an expected Square Deviation from Unknown records at any time $\tau$,\\ {\bf ($\textbf{Q}_7$) $\langle$SDf$\, U_\tau\rangle$}: \begin{subequations}\label{eq-q7} \begin{align} \est\rho_\tau = & \,\, \rho_{\pasts{O}, \est{\pasts{ U}}}, \text{ where}\nonumber \\
\est{\past{ U}}\!\!\! : \,\,\, \est{U}_\tau = & \, \argmin_{u_\tau} \, \left\langle \left(u_\tau - U_\tau \right)^2 \right\rangle_{ U_\tau | \bothps{O}} \nonumber \\
= & \, \argmin_{u_\tau} \, \left\langle - 2 u_\tau U_\tau + u_\tau^2 \right\rangle_{ U_\tau | \bothps{O}} \label{eq-q-sd-ut-2} \\
= & \,\, \langle U_\tau \rangle_{ U_\tau | \bothps{O}} \,, \end{align} \end{subequations} where the average in the last line is defined with the same probability weight in Eq.~\eqref{eq-probut}.
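The last step of Eq.~\eqref{eq-q7}, that the minimizer of the expected square deviation is the conditional mean, admits a quick numerical sanity check. The sketch below is our own toy (a skewed Gamma distribution stands in for $\wp_{\bothps{O}}(u_\tau)$; sample size and grid are arbitrary choices); it uses the expansion of Eq.~\eqref{eq-q-sd-ut-2} and shows that the minimizer is the mean, which for a skewed PDF differs from the mode that $\textbf{Q}_6$ would pick.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in samples for the conditional record U_tau (not a physical model).
U = rng.gamma(shape=2.0, scale=1.0, size=200_000)

# Expand <(u - U)^2> = u^2 - 2 u <U> + <U^2> (the U^2 term is constant in u,
# as in the second line of the Q7 definition) and minimize over a grid.
grid = np.linspace(0.0, 8.0, 801)
expected_sd = grid**2 - 2.0 * grid * U.mean() + np.mean(U**2)
u_star = grid[np.argmin(expected_sd)]
# For Gamma(2,1) the mean is 2 while the mode is 1, so the square-deviation
# minimizer (the Q7-type estimator) differs from the most-likely value
# (the Q6-type estimator) for this skewed toy PDF.
```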
The reason that we use double solid lines connecting $\textbf{Q}_6$ and $\textbf{Q}_7$ is that they are equivalent for continuous diffusive measurement. In the time-continuum limit, a record $u_\tau$ at any time $\tau$ is acquired during an infinitesimal time between $\tau$ and $\tau+\dd t$ and is then regarded as a result of a weak measurement of an observable $\op U = \op c + \op c^\dagger$. As mentioned in Section~\ref{sec-contmeas}, the record statistics are described by a measurement operation in Eq.~\eqref{eq-moput}, for a Lindblad operator $\hat c$ describing the coupling between the system and its bath. It can be shown that the mean and the mode (most-likely value) of the PDF $\wp_{\bothps{O}}(u_\tau)$ in Eq.~\eqref{eq-probut} for this weak measurement are the same to first order in $\dd t$ (see Appendix~\ref{sec-app-pdf} for the derivation). Therefore, the record estimators of $\textbf{Q}_6$ and $\textbf{Q}_7$ coincide and are given by the real part of the weak value, \begin{align}\label{eq-wv-gen2}
\langle U_\tau \rangle_{ U_\tau | \bothps{O}} \approx \frac{\Tr{ \op{E}_{\futps{O}}\, \op{c} \, \rho_{\pasts{O}} + \rho_{\pasts{O}} \, \op{c}^\dagger \, \op{E}_{\futps{O}}}}{ \Tr{ \op{E}_{\futps{O}}\, \rho_{\pasts{O}}}}, \end{align}
as per Eq.~\eqref{eq-wv-gen}, to leading order in $\dd t$. In Figure~\ref{fig-diagramQ2}, we show in the illustrative plots for $\textbf{Q}_6$ and $\textbf{Q}_7$ that the record estimators (shown as the dotted red signals in the middle panels) are the most-likely and the mean measurement results, respectively. The state estimators for both cases are the state trajectories (solid red trajectories in the bottom panels) calculated from the record estimators.
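One can verify numerically that the right-hand side of Eq.~\eqref{eq-wv-gen2} is automatically real: for Hermitian $\rho_{\pasts{O}}$ and $\op E_{\futps{O}}$, the two terms in the numerator are complex conjugates of each other, so the numerator equals $2\,{\rm Re}\,\Tr{\op E_{\futps{O}}\,\op c\,\rho_{\pasts{O}}}$. The sketch below uses random matrices (the dimension, seed, and matrix construction are arbitrary choices of ours, not a physical model) in place of the filtered state, the retrodictive effect, and the Lindblad operator.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_psd(d):
    """Random Hermitian positive matrix (toy stand-in for rho_F or E_O)."""
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return A @ A.conj().T

d = 3
rho = rand_psd(d); rho /= np.trace(rho).real   # toy filtered state (trace 1)
E = rand_psd(d)                                # toy retrodictive effect
# an arbitrary, generally non-Hermitian, Lindblad-type operator
c = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

# Numerator of the weak-value expression: Tr[E c rho + rho c† E].
# Since (E c rho)† = rho c† E for Hermitian rho and E, the two traces are
# complex conjugates and the sum is 2 Re Tr[E c rho], a real number.
num = np.trace(E @ c @ rho + rho @ c.conj().T @ E)
weak_value = num / np.trace(E @ rho)
```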
\subsection{Unknown result of the weak von Neumann measurements ($\textbf{Q}_8$)}
For completeness of our proposed quantum state estimation theory, we construct a cost function that leads to the smoothed weak value (SWV) state defined in Eq.~\eqref{eq-wvs} for the weak value in Eq.~\eqref{eq-wv}. Consider the weak von Neumann measurement of a Hermitian observable $\op{X}$ at time $\tau$. The real part of the weak value, which can be written as an expectation value with respect to the SWV state, \begin{align} {\rm Re} \frac{\Tr{ \op{E}_{\futps{O}} \, \rho_{\pasts{O}} \op X}}{\Tr{ \op{E}_{\futps{O}} \,\rho_{\pasts{O}}}} = & \, \Tr{\varrho_{\rm SWV} \op X} \nonumber \\
= & \, _{\op{E}_{\futpss{O}}}\,\!\!\langle X^w \rangle_{\rho_{\pastss{O}}} \equiv \langle X^w \rangle_{X^w | \bothps{O}} , \end{align} is also equal to the past-future conditioned average of the weak measurement results $X^w$. Therefore, we can consider the expectation value $\Tr{\varrho_{\rm SWV} \op X}$ as an estimator that minimizes an expected square deviation from the true (but unknown) weak measurement results. However, knowing the expectation value for one observable $\op X$ cannot uniquely determine the full description of the SWV state. We need the weak values for all observables from the operator algebra for the Hilbert space. It suffices to consider the set of generalized Gell-Mann matrices $\{ \op\Lambda_j \}$ for $j = 1, 2, ..., d^2-1$ (where $d$ is the dimension of the quantum system), satisfying the orthonormality property $\Tr{{\op\Lambda}_i {\op\Lambda}_j} = 2 \delta_{i,j}$~\cite{NLevelBloch}.
The SWV state is an estimator $\varrho \in { {\mathfrak G}'({\mathbb H})}$ that minimizes an expected square deviation between its trace with an observable $\op{\Lambda}_j$ and that observable's weak measurement result $\Lambda_j^w$, summed over all observables in the set of generalized Gell-Mann matrices. That is,\\ {\bf ($\textbf{Q}_8$) SWV state}: \begin{align}\label{eq-q8}
\est\varrho_\tau = & \argmin_{\varrho} \sum_{j=1}^{d^2-1}\left\langle \left[ {\rm Tr}(\varrho \op{\Lambda}_j) - \Lambda^w_j \right]^2\right\rangle_{\Lambda^w_j | \bothps{O}}\,. \end{align}
The cost function inside the angle brackets ensures that the complex-matrix estimator gives the correct real weak value $\Tr{\est\varrho_\tau \op{\Lambda}_j} = \langle \Lambda^w_j \rangle_{\Lambda^w_j | \bothps{O}}$, leading to the SWV state, \begin{subequations} \begin{align}\label{eq-wvsGell}
\est\varrho_\tau = &\,\, \tfrac{1}{d} {\hat 1} + { \tfrac{1}{2}}\! \sum_{j=1}^{d^2-1} \langle \Lambda^w_j \rangle_{\Lambda^w_j | \bothps{O}} \, {\op \Lambda}_j \\ = & \label{eq-wvsO} \,\, \frac{\op{E}_{\futps{O}}\, \rho_{\pasts{O}} + \rho_{\pasts{O}}\,\op{E}_{\futps{O}}} {\Tr{\op{E}_{\futps{O}}\,\rho_{\pasts{O}} + \rho_{\pasts{O}}\, \op{E}_{\futps{O}}}}. \end{align} \end{subequations}
One can also see that for the case without postselection, or a future record, the solution of the optimization reduces back to the filtered state, giving the usual expectation value $\Tr{\rho_{\pasts{O}}\, \op{\Lambda}_j} = \langle \Lambda^w_j \rangle_{\Lambda^w_j | \pasts{O}}$. We show in Figure~\ref{fig-diagramQ2} for $\textbf{Q}_8$ that the SWV state (illustrated as a dotted green trajectory in the bottom panel) has as its components the conditioned expectation values of the orthonormal Gell-Mann observables. We chose the green color for this estimator, distinguishing it from the estimators that are valid quantum states, because the SWV state is not necessarily a valid quantum state.
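The equality between the Gell-Mann expansion, Eq.~\eqref{eq-wvsGell}, and the closed form, Eq.~\eqref{eq-wvsO}, can be checked directly for a qubit, where the generalized Gell-Mann matrices reduce to the Pauli matrices. The snippet below is our own illustration with randomly generated stand-ins for $\rho_{\pasts{O}}$ and $\op E_{\futps{O}}$ (not taken from the simulations of this work).

```python
import numpy as np

rng = np.random.default_rng(2)
# Pauli matrices: for d = 2 these are the generalized Gell-Mann set,
# satisfying Tr[L_i L_j] = 2 delta_ij.
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
paulis = [sx, sy, sz]

def rand_psd(d=2):
    A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    return A @ A.conj().T

rho = rand_psd(); rho /= np.trace(rho).real   # toy filtered state (trace 1)
E = rand_psd()                                 # toy retrodictive effect

# Closed form of the SWV state: (E rho + rho E) / Tr[E rho + rho E].
num = E @ rho + rho @ E
swv_closed = num / np.trace(num)

# Gell-Mann (here: Pauli) expansion with weak values
# <Lambda_j^w> = Re Tr[E rho Lambda_j] / Tr[E rho].
denom = np.trace(E @ rho).real
swv_expanded = np.eye(2) / 2
for L in paulis:
    wv = np.trace(E @ rho @ L).real / denom
    swv_expanded = swv_expanded + 0.5 * wv * L
```

Both constructions give the same Hermitian, unit-trace matrix, even when it has a negative eigenvalue and is therefore not a valid quantum state.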
\subsection{Connections via classical state estimation}\label{sec-classana}
In the quantum state estimation diagram in Figure~\ref{fig-diagramQ}, we have included the double-dotted orange lines which indicate ``classically the same'' connections. This indicates that when we replace everywhere the quantum state $\rho$ with the classical state $\wp({ x})$ for a discrete configuration ${ x}$, the two estimators in question yield the same optimal classical state. As foreshadowed in Section~\ref{sec-CSE}, we assume that the true classical state under continuous monitoring can be determined from complete records. That is, the true state for a discrete configuration ${ x}$ at time $\tau$ is given by $\wp_\tau^{\rm true}({ x}) = \wp_{\pasts{O},\pasts{ U}}({ x}) = \delta_{{ x},{ x}_{\pastss{O},\pastss{ U}}}$, where ${ x}_{\pasts{O},\pasts{ U}}$ is the configuration determined by complete knowledge of both observed and unknown records. In this subsection, we present the mathematical justification for the two connections shown in the diagram: (1) between the quantum state smoothing (${\textbf{Q}_1}$) and the SWV state (${\textbf{Q}_8}$), and (2) between the negative Fidelity cost (${\textbf{Q}_2}$) and the negative Equality cost (${\textbf{Q}_3}$). While we present the results for the states conditioned on the past-future record, $\bothp{O}$, in fact these classical equalities result from the cost functions themselves, and so hold for conditioning on any data (D).
We first consider a classical analog of ${\textbf{Q}_1}$. By replacing the trace with the sum over the basis states, the classical analogue of the trace square deviation cost Eq.~\eqref{eq-q-sd} is the sum square deviation cost as shown in Eq.~\eqref{eq-cstateBME1}. Therefore, conditioned on the past-future observed record, we obtain a classical state estimator that minimizes an expected Sum Square Deviation cost from the true state,\\ {\bf ($\textbf{C}_1$) $\langle$\textSigma SDf\,$\wp_{ \pasts{ U}}\rangle$}: \begin{align}\label{eq-c-smt}
\est\wp_\tau({ x}) &= \argmin_{\wp} \left\langle \sum_{{ x}'} \left [ \wp({ x}') - \wp_{\pasts{O}, \pasts{ U}}({ x}')\right ]^2 \right\rangle_{\pasts{ U} | \bothps{O}}\,\,\!\!\!\!({ x}) \nonumber \\ & = \int \!\! {\rm d}\mu(\,\past{u}\,)\, \wp_{\bothps{O}}(\,\past{u}\,) \, \delta_{{ x},{ x}_{\pastss{O},\pastss{u}}} = \wp_{\bothps{O}}({ x}). \end{align} In the second line, the estimator is written in terms of the conditional average of possible true states, $\delta_{{ x},{ x}_{\pastss{O},\pastss{u}}}$, with the weight $\wp_{\bothps{O}}(\,\past{u}\,)$. The integral with the $\delta$-function then effectively leads to the change of variables between the two PDFs, \textit{i.e.}, from $\wp_{\bothps{O}}(\,\past{u}\,)$ to $\wp_{\bothps{O}}({ x})$. The end result is the \emph{classical smoothed state}, the Bayesian PDF of the system's configuration conditioned on $\bothp{O}$~\cite{Sarkka13}.
Next, we consider a classical version of the expected cost for $\textbf{Q}_8$ and its optimal SWV state. For a system to be considered classical, its initial conditions, final conditions, and dynamics must all be describable probabilistically in a fixed basis. In that basis, the state matrix is diagonal. Let us consider Eq.~\eqref{eq-q8}, where we used the Gell-Mann matrices to represent observables in the Hilbert space. For the classical counterpart, we could keep the form of the estimator in terms of the Gell-Mann matrices as in Eq.~\eqref{eq-wvsGell}, by keeping only the diagonal ones. However, we can more easily see how the estimator turns out to be exactly the classical smoothed state by instead using projectors in the fixed basis as observables. The basis is the discrete configuration ${ x}$, which can take any value in the countable set ${\mathbb X}$.
Therefore, we replace the set of observables with the projectors $|{ x} \rangle \langle { x}|$, where ${ x} \in {\mathbb X}$, and replace the trace with the sum over the elements of ${\mathbb X}$. This means we can substitute $\op{\Lambda}_j$ in Eq.~\eqref{eq-q8} with $\Lambda_{{ x}}({ x}') = \delta_{{ x},{ x}'}$, which is a projector in the language of functions of ${ x}'$. These observables can be measured weakly, and classically it means that the measurement is noisy, where a measurement result, $\Lambda^w_{{ x}}$, includes additional uncorrelated noise from the detection procedure. Combining all these classical analogs, we then obtain a formula for the expected cost and the classical estimator,\\ {\bf ($\textbf{C}_8$) Classical SWV state $=$ ($\textbf{C}_1$) $\langle$\textSigma SDf\,$\wp_{ \pasts{ U}}\rangle$}: \begin{align}\label{eq-c8}
\est\wp_\tau({ x}) = & \argmin_{\wp} \sum_{{ x} \in {\mathbb X}}\left\langle \left[ \sum_{{ x}' \in {\mathbb X}}\wp({ x}') \Lambda_{{ x}}({ x}') - \Lambda^w_{{ x}} \right]^2\right\rangle_{\Lambda^w_{{ x}} | \bothps{O}} \nonumber \\
= & \argmin_{\wp} \sum_{{ x} \in {\mathbb X}}\left\langle \left[ \wp({ x})- \Lambda^w_{{ x}} \right]^2\right\rangle_{\Lambda^w_{{ x}} | \bothps{O}} \nonumber \\
= & \, \langle \Lambda^w_{{ x}} \rangle_{\Lambda^w_{{ x}} | \bothps{O}}\,, \end{align} where we have substituted $\Lambda_{{ x}}({ x}') = \delta_{{ x},{ x}'}$ in the first line to get the second line. In the last line, we have obtained the estimator in the same way as for the mean square deviation cost in Eq.~\eqref{eq-confestBME1}. We can then use the fact that the average result of a noisy (non-disturbing) measurement is equal to the average result of a perfect measurement, given that the detection noise is unbiased and uncorrelated with the true observables. Then, we realize that a perfect measurement of the observable $\Lambda_{{ x}}$ gives an outcome of either 1 or 0, depending on whether or not the system is found in the configuration ${ x}$. Therefore, continuing from Eq.~\eqref{eq-c8}, we get \begin{align}
\est\wp_\tau({ x}) = \langle \Lambda^w_{{ x}} \rangle_{\Lambda^w_{{ x}} | \bothps{O}} = \langle \delta_{{ x},{ x}'} \rangle_{{ x}' | \bothps{O}} = \wp_{\bothps{O}}({ x}), \end{align} as the classical version of the SWV state, which is exactly the same as the classical smoothed state in Eq.~\eqref{eq-c-smt}.
We can also construct the classical version of the SWV state directly from Eq.~\eqref{eq-wvsO} using state matrices corresponding to classical states. A state matrix of a classical state can be represented by a diagonal matrix in a fixed basis. Therefore, in that basis, the filtered and retrofiltered states in Eq.~\eqref{eq-wvsO} can be written as $\rho_{\text F} = \sum_{{ x}} \wp_{\pasts{O}}({ x}) |{ x} \rangle \langle { x} |$ and $\hat E_{\futps{O}} = \sum_{{ x}} \wp(\futp{O}\,|{ x}) |{ x}\rangle \langle { x} |$~\cite{LavCha2020a}. The diagonal elements of $\rho_{\text F}$ are the probabilities for the system to be found in configurations ${ x}$, given the past observed record, while the diagonal elements of the retrodictive effect are the probabilities of the future record given configurations ${ x}$. Given both matrices, we obtain \begin{align}
\varrho_{\rm SWV, \rm diag} & = \, \frac{ \sum_{{ x}} \wp_{\pasts{O}}({ x}) \wp(\futp{O}\,|{ x}) |{ x}\rangle \langle { x} | }{\sum_{{ x}} \wp_{\pasts{O}}({ x}) \wp(\,\futp{O}\,|{ x})} \nonumber \\
& = \sum_{{ x}} \wp_{\bothps{O}}({ x}) |{ x} \rangle \langle { x} |, \end{align} as a direct classical analogue of the SWV state in Eq.~\eqref{eq-wvsO}. This method of proving the equivalence was explicitly used in Refs.~\cite{Tsangsmt2009-2,LavCha2020a}.
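The calculation above amounts to elementwise multiplication of the two diagonals followed by normalization, which is just Bayesian smoothing. A three-configuration numerical example (the probability vectors are invented purely for illustration):

```python
import numpy as np

# Toy check that the diagonal (classical) SWV state equals the classical
# smoothed state: multiply filtered probabilities by future-record
# likelihoods and renormalize.
filt = np.array([0.5, 0.3, 0.2])   # wp_{past O}(x): filtered probabilities
like = np.array([0.1, 0.6, 0.3])   # wp(future O | x): retrofiltered likelihoods

rho_F = np.diag(filt)                    # diagonal classical state matrices
E_fut = np.diag(like)
num = rho_F @ E_fut + E_fut @ rho_F      # numerator of the SWV form; diagonal
swv_diag = np.diag(num / np.trace(num))  # diagonal of the classical SWV state

smoothed = filt * like / np.sum(filt * like)   # Bayesian smoothing
```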
We now turn to proving the connection between the classical analogs of the negative fidelity cost (${\textbf{Q}_2}$) and the negative equality cost (${\textbf{Q}_3}$). The expected negative equality cost for classical states was shown in Eqs.~\eqref{eq-cstate-ne} and \eqref{eq-cstate-necost}, where its optimal state is the pure state with the most-likely configuration, \textit{i.e.}, $\est\wp_{\rm nE}({ x}) = \delta_{{ x},\est{ x}_{\rm MLE}}$. For the negative fidelity, we then consider the classical fidelity \begin{align}\label{eq-cjozsa} F\!\left[\wp_{\rm A}, \wp_{\rm B}\right] = \left(\sum_{{ x}' \in \mathbb X}\sqrt{\wp_{\rm A}({ x}')\wp_{\rm B}({ x}')} \right)^2, \end{align} which quantifies the difference between two classical states, $\wp_{\rm A}({ x})$ and $\wp_{\rm B}({ x})$. This is the definition that motivated Jozsa's fidelity~\cite{Jozsa1994} for quantum states in Eq.~\eqref{eq-Jozsafid}. Using this definition of fidelity in Eq.~\eqref{eq-cjozsa}, we can calculate an expected negative fidelity cost, \begin{align}\label{eq-prove-nef}
\left\langle -F\!\left[\wp, \wp^{\rm true}\right] \right\rangle_{\wp^{\rm true} | \bothpsO} = & - \sum_{{ x}' \in \mathbb X} \wp_{\bothpsO}({ x}') \, \left(\sum_{{ x}'' \in \mathbb X} \sqrt{ \wp({ x}'')\,\delta_{{ x}'',{ x}'}} \right)^2 \nonumber \\ = & -\sum_{{ x}'} \wp_{\bothpsO}({ x}') \, \left(\sum_{{ x}''} \delta_{{ x}'',{ x}'}\sqrt{ \wp({ x}'')} \right)^2 \nonumber \\ = & -\sum_{{ x}'} \wp_{\bothpsO}({ x}') \, \wp({ x}') \nonumber\\ \ge & - \max_{{ x}'}\, \wp_{\bothpsO}({ x}'), \end{align} where properties of $\delta$-functions were used in getting the second and third lines. The lower bound on the last line of Eq.~\eqref{eq-prove-nef} follows from the positivity of $\wp_{\bothpsO}(x)$ and the fact that $\wp(x)$ is normalized. Obviously, we can saturate this lower bound, and hence obtain the optimal estimator, by choosing $\wp({ x}) = \delta_{{ x},\est{ x}}$, where $\est{ x} = \argmax_{{ x}'} \wp_{\bothpsO}({ x}')$.
Therefore, we have proved that \\ {\bf ($\textbf{C}_2$) $\langle$nFw$\wp_{ \pasts{ U}}\rangle$ $=$ ($\textbf{C}_3$) $\langle$nEw$\wp_{ \pasts{ U}}\rangle$}:\\ \begin{align}\label{eq-cstate-nFnE}
\est\wp_\tau({ x}) =& \argmin_{\wp} \left\langle -F\left[\wp, \wp_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}\!({ x}) \nonumber \\
=& \argmin_{\wp} \left\langle -\delta\!\left[\wp , \wp_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}\!({ x}) \nonumber \\ = &\, \delta_{{ x}, \est{ x}_{\rm MLE}}, \end{align} where \begin{align} \est{ x}_{\rm MLE} = \argmax_{{ x}}\,\wp_{\bothps{O}}({ x}), \end{align} is the most-likely configuration of the smoothed classical state.
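The saturation argument in Eq.~\eqref{eq-prove-nef} can also be checked numerically: the expected negative fidelity reduces to $-\sum_{{ x}'} \wp_{\bothpsO}({ x}')\,\wp({ x}')$, which is linear in $\wp$ and hence minimized on the probability simplex by the $\delta$-distribution at $\est{ x}_{\rm MLE}$. Below, a four-configuration toy smoothed state (numbers invented for illustration) is compared against random candidate states.

```python
import numpy as np

rng = np.random.default_rng(3)
wp_s = np.array([0.15, 0.45, 0.25, 0.15])   # toy smoothed classical state

def neg_fid(wp):
    # expected negative fidelity against a delta-distributed true state,
    # i.e. the third line of the derivation: -sum_x wp_s(x) wp(x)
    return -np.dot(wp_s, wp)

# random candidate states on the probability simplex
candidates = rng.dirichlet(np.ones(4), size=1000)
best_delta = np.eye(4)[np.argmax(wp_s)]     # delta at the MLE configuration
# neg_fid(best_delta) = -max_x wp_s(x); no candidate scores lower.
```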
As was discussed in the quantum case, it is possible that the probability distribution $\wp_{\bothps{O}}({ x})$ does not have a unique maximum. While such cases form a set of measure zero, it is still worth discussing the potential differences between the estimators from $\textbf{C}_2$ and $\textbf{C}_3$ in the event of multiple most-likely configurations $\est{ x}_{\rm MLE}$. For $\textbf{C}_3$, the optimal classical state estimators are exactly the members of the set of $\delta$-functions $\{ \delta_{{ x},\est{ x}_{\rm MLE}} \}$. For $\textbf{C}_2$, all of these are optimal estimators, but, in addition, any state in the convex hull of this set will also be an optimal estimator. Thus, while it is possible to choose optimal estimators that break the equality between $\textbf{C}_2$ and $\textbf{C}_3$ in this degenerate case, it is also possible to choose the optimal estimators in such a way that the equality is always satisfied.
\subsection{Expected cost functions for the estimators}\label{sec-expcost}
Later, in Section~\ref{sec-eightestqubit}, we will investigate the eight estimators and their expected cost functions for a qubit example. By definition, each estimator should optimize its associated expected cost function. Nevertheless, we can still make use of the available estimator solutions and illustrate the optimality by evaluating expected costs for other (non-optimal) estimators to verify that the optimal solutions give the smallest values. Some of the cost functions are defined in the state space and some are defined in the unknown record space. Therefore, for the latter, we also need to solve for the past unknown record associated with each of the state estimators. We here consider seven out of eight expected costs. We exclude the expected cost of ${\textbf{Q}_4}$, for which one needs to search for past records that maximize the PDF, $\wp_{\bothps{O}}(\,\past{u}\,)$, for the non-optimal states at all times $\tau \in (0, T]$, which is extremely difficult to calculate.
For the first three cost functions, defined in the state space, \textit{i.e.}, ${\textbf{Q}_1}$\,--${\textbf{Q}_3}$, their expected costs are straightforward to calculate. The expected costs, \begin{align}
\label{eq-cq1}{\cal C}_1(\rho) = & \,\, \left\langle {\rm Tr} \left [ \left(\rho - \rho_{\pasts{O}, \pasts{ U}}\right)^2 \right ] \right \rangle_{\pasts{ U} | \bothps{O}},\\
\label{eq-cq2}{\cal C}_2(\rho) = & \,\, \left\langle -F\!\left[\rho, \rho_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}, \\ \label{eq-cq3}{\cal C}_3(\hat\psi) = & \,\, -\wp_{\bothps{O}}(\hat\psi), \end{align} are simply the arguments of the $\argmin$ functions in Eqs.~\eqref{eq-smt}, \eqref{eq-th-q-nf}, and \eqref{eq-th-q-ne}, respectively. Note, however, that the third expected cost, ${\cal C}_3(\hat\psi)$, can only be applied to pure states.
For the next three cost functions defined in the record space, \textit{i.e.}, ${\textbf{Q}_5}$\,--${\textbf{Q}_7}$, the expected costs depend on the time resolution $\delta t$ (or $\dd t$, which has to be specified as a small but finite value for the numerical evaluation). We therefore modify the expected costs in Eqs.~\eqref{eq-mlp}, \eqref{eq-q6}, and \eqref{eq-q7} to obtain new definitions that are $\delta t$-independent, without affecting their optimization. For ${\textbf{Q}_5}$, the expected cost is the negative probability density, $-\, \wp_{\bothps{O}}(\,\both{u}\,)$, which has $\delta t$-dependent terms in the normalization factors of the instantaneous-record PDFs, $\wp(u_{t}, O_{t} |{\bm q}_{t})$, in Eq.~\eqref{eq-probmlp2}. Such terms can be eliminated by defining a new expected cost as a log ratio, \textit{i.e.}, \begin{align} \label{eq-cq5}{\cal C}_5(\both u)= \,\, \log[\wp_{\bothps{O}}(\,\both{u}_{{\textbf{Q}_5}})/\wp_{\bothps{O}}(\,\both{u}\,)], \end{align} which should have its minimum (zero) value when $\both{u} = \both{u}_{{\textbf{Q}_5}}$, the optimal record solution of Eqs.~\eqref{eq-diffeq2}. Similarly, the expected cost for ${\textbf{Q}_6}$ is $-\wp_{\bothps{O}}(u_\tau)$, the negative probability density of an unknown record at time $\tau$. This probability density has a variance that depends on the time resolution $\delta t$. Therefore, we define a new expected cost, \begin{align} \label{eq-cq6}{\cal C}_6(u_\tau) = -2 v \log\left[ \! \sqrt{2\pi v} \, \wp_{\bothps{O}}(u_\tau)\right], \end{align}
where the variance $v = \langle U_\tau^2 \rangle_{ U_\tau | \bothps{O}} \sim 1/\dd t$ is eliminated.
Lastly, for ${\textbf{Q}_7}$ and ${\textbf{Q}_8}$, we can consider shifting the mean-square deviation to remove the terms which depend on the time resolution or weakness of the measurement, respectively. An example of such shifting is shown in Eq.~\eqref{eq-q-sd-ut-2}, which does not affect the optimization. Therefore, we modify the expected costs for the last two categories to \begin{align}
\label{eq-cq7}&{\cal C}_7(u_\tau) = \,\, \left\langle u_\tau^2 - 2 u_\tau U_\tau \right\rangle_{ U_\tau | \bothps{O}}, \\
\label{eq-cq8}&{\cal C}_8(\varrho) = \,\, \sum_{j=1}^{d^2-1}\left\langle {\rm Tr}(\varrho \op{\Lambda}_j)^2 - 2\, {\rm Tr}(\varrho \op{\Lambda}_j) \, \Lambda^w_j \right\rangle_{\Lambda^w_j | \bothps{O}}. \end{align} The sum in Eq.~\eqref{eq-cq8} is over the set of generalized Gell-Mann matrices $\{ \op\Lambda_j \}$ as in Eq.~\eqref{eq-q8}. We can also show that ${\cal C}_6 = {\cal C}_7$ by writing \begin{align}
{\cal C}_6(u_\tau) = u_\tau^2 - 2 u_\tau \langle U_\tau \rangle_{ U_\tau | \bothps{O}} = {\cal C}_7(u_\tau), \end{align} given the form of $\wp_{\bothps{O}}(u_\tau)$ presented in Appendix~\ref{sec-app-pdf}.
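A quick numerical check of this identity: assume the Gaussian form of $\wp_{\bothps{O}}(u_\tau)$ with mean $m = \langle U_\tau \rangle_{U_\tau | \bothps{O}}$ and variance $s^2$, and take $v = \langle U_\tau^2 \rangle = s^2 + m^2 \sim 1/\dd t$ as above. Then ${\cal C}_6(u_\tau) \to u_\tau^2 - 2 u_\tau m$ as $s^2 \to \infty$, with an $O(1/s^2)$ correction. The value of $m$ and the trial records below are arbitrary illustrations:

```python
import numpy as np

m = 0.7                         # assumed conditional mean <U_tau>
u = np.linspace(-2.0, 3.0, 7)   # a few trial record values
C7 = u**2 - 2*u*m               # C7 cost for Gaussian statistics (mean m)

def C6(u, s2):
    # Gaussian wp(u_tau) with mean m, variance s2; v = <U^2> = s2 + m^2 ~ 1/dt
    v = s2 + m**2
    pdf = np.exp(-(u - m)**2/(2*s2))/np.sqrt(2*np.pi*s2)
    return -2*v*np.log(np.sqrt(2*np.pi*v)*pdf)

err_coarse = np.max(np.abs(C6(u, 1e2) - C7))   # variance 10^2
err_fine = np.max(np.abs(C6(u, 1e4) - C7))     # variance 10^4: much smaller error
```

The discrepancy shrinks roughly as $1/s^2$, confirming that the two costs coincide in the time-continuum limit.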
\section{Example: single qubit with bosonic baths}\label{sec-example}
In this section, we implement our unified theory of quantum state estimation for the example of a resonantly driven two-level system (qubit) spontaneously emitting photons to two independent vacuum bosonic baths~\cite{Ivonne2015,chantasri2019}. The qubit system has two eigenstates given by $|e\rangle$ (excited state) and $|g\rangle$ (ground state), with an excitation energy $\omega$ (taking $\hbar = 1$). In a frame rotating at the excitation frequency, the qubit is driven to rotate around the $x$-axis in the Bloch sphere with a Rabi frequency $\Omega$. The qubit-bath coupling operator responsible for the qubit's spontaneous decay is the lowering operator $\op{\sigma}_- = |g\rangle\langle e|$. The baths can be measured, for example, via photon detection or homodyne detection \cite{Wiseman1993-2}.
Let us consider the partially-observed (Alice-Bob) scenario where the qubit's coupling to the respective observers' baths is described by Lindblad operators $\op{c}_{\rm o}$ and $\op{c}_{\rm u}$, which are both proportional to the lowering operator $\hat\sigma_{-}$. The lower-case letter subscripts `o' and `u' are to label the channels observed and unknown, respectively, by the observer Alice. Under the strongest Markov assumption~\cite{LiLi2018}, the unconditional dynamics $\rho(t)$ of the qubit (no conditioning on measurement records) is described by the Lindblad master equation, \begin{align}\label{eq-master} {\rm d} \rho(t) &= - i \, \dd t [ \tfrac{\Omega}{2} \,\op{\sigma}_x , \rho(t)] + \dd t \,{\cal D}[\op{c}_{\rm u}]\rho(t) + \dd t \,{\cal D}[\op{c}_{\rm o}]\rho(t), \nonumber \\ & = \, \dd t\, {\cal L}_{\rm u} \rho(t) + \dd t \,{\cal D}[\op{c}_{\rm o}]\rho(t), \end{align} where the system's Hamiltonian is ${\hat H} = (\Omega/2){\hat \sigma}_x$ and the superoperator is defined as ${\cal D}[\hat c] \bullet = \op{c}\bullet \op{c}^\dagger - \tfrac{1}{2}(\op{c}^\dagger \op{c} \bullet + \bullet \op{c}^\dagger \op{c})$. In the second line of Eq.~\eqref{eq-master}, we write the master equation in terms of the superoperator ${\cal L}_{\rm u}$ from Eq.~\eqref{eq-povm} for consistency.
The Lindblad evolution, Eq.~\eqref{eq-master}, can be \emph{unravelled} to different stochastic pure state trajectories, depending on the type of measurement applied on the baths. In this work, we consider the specific scenario where Alice measures her bath using photon detection and Bob measures his bath using a $y$-homodyne measurement (with a local oscillator phase $\Phi = \pi/2$). Therefore, we can denote the Lindblad operators for Alice's and Bob's channels by $\op{c}_{\rm o} = \sqrt{\gamma_{\rm o}}\,\op{\sigma}_-$ and $\op{c}_{\rm u} = \sqrt{\gamma_{\rm u}}\,\op{\sigma}_- e^{-i \pi/2}$, respectively. The coupling rates $\gamma_{\rm o}$ and $\gamma_{\rm u}$ sum to give the total system-bath coupling rate, $\gamma_{\rm o}+\gamma_{\rm u} = \gamma = 1/T_\gamma$, where $T_\gamma$ is the system's decay time. In this scenario, the unravelled qubit's state is always confined to the $y$--$z$ great circle of the Bloch sphere, which significantly simplifies the analysis.
Following the presentation in sections~\ref{sec-cdj} and \ref{sec-qss}, we describe the conditional dynamics of the monitored quantum system by measurement operations. For the photon detection on Alice's side, at most one photon can be detected during a sufficiently short time $\delta t$. Thus, there are two operations associated with two possible outcomes (1 or 0 detected photon). If a photon is detected, the operation describing the state-update is ${\cal N}_{1} \rho = \op{c}_{\rm o} \, \rho \, \op{c}_{\rm o}^\dagger \delta t$. If no photon is detected, then the state evolves by a measurement operation, ${\cal N}_0 \rho = {\hat N}_0 \rho {\hat N}_0^\dagger$, where ${\hat N}_0 = {\hat 1}- \tfrac{1}{2} \op{c}_{\rm o}^\dagger \op{c}_{\rm o} \, \delta t $. Because ${\cal N}_{1}$ causes a finite change in the state---from $\rho$ to $\op{c}_{\rm o} \, \rho \, \op{c}_{\rm o}^\dagger / \Tr{\op{c}_{\rm o}^\dagger \op{c}_{\rm o}\rho}$ when normalized---in a time interval of arbitrarily short duration $\delta t$, it can be called a {\em quantum jump}~\cite{BookCarmichael,Plenio1998,BookWiseman}.
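As a sanity check on these two operations, one can verify numerically that their outcome probabilities sum to one up to $O(\delta t^2)$, and that a detection resets an arbitrary state to $|g\rangle\langle g|$. The sketch below uses illustrative values for the rate and the state (not tied to the paper's figures), in the basis $(|e\rangle, |g\rangle)$:

```python
import numpy as np

gamma_o, dt = 1.0, 1e-3                    # illustrative decay rate and time step
sm = np.array([[0, 0], [1, 0]], complex)   # sigma_- = |g><e| in the (|e>, |g>) basis
c_o = np.sqrt(gamma_o)*sm
N0 = np.eye(2) - 0.5*dt*(c_o.conj().T @ c_o)       # no-jump Kraus operator

rho = np.array([[0.6, 0.2], [0.2, 0.4]], complex)  # an arbitrary valid qubit state

p1 = (dt*np.trace(c_o @ rho @ c_o.conj().T)).real  # Tr[N_1 rho]: jump probability
p0 = np.trace(N0 @ rho @ N0.conj().T).real         # no-jump probability

rho_jump = c_o @ rho @ c_o.conj().T                # unnormalized post-jump state
rho_jump = rho_jump/np.trace(rho_jump)             # normalized: the ground state
```

Here $p_0 + p_1 = 1 + (\delta t^2 \gamma_{\rm o}^2/4)\rho_{ee}$, so the deficit is second order in $\delta t$, consistent with completeness of the operations to the order retained.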
The above jump and no-jump evolution is in contrast to the dynamical measurements we have considered hitherto, of the homodyne type. This can be called {\em quantum diffusion} because in the limit $\delta t \to \dd t$ the state evolution is continuous but not differentiable in time~\cite{BookCarmichael,Plenio1998,BookWiseman}. This is the type of measurement we consider here for Bob's side; specifically, a $y$-homodyne detection for which the measurement result $u_t$ can take any real value. The measurement backaction for this diffusive measurement is described by the operation ${\cal M}_{u_t} \rho = {\hat M}_{u_t} \rho {\hat M}_{u_t}^\dagger$ where ${\hat M}_{u_t}$ is given by Eq.~\eqref{eq-moput}, which includes the unitary evolution for the short time step $\delta t$. It is worth remarking that the term ``quantum jump'' is sometimes also used for a rapid, but actually continuous, transition that can occur in a particular limit of a diffusive measurement, as studied experimentally in Ref.~\cite{MinMun2019}; this could be contrasted with the experimental study of a quantum jump of the type considered here, induced by a single-photon detection, in Ref.~\cite{smiRei2002}.
It can be shown that the above operations, ${\cal N}_{O_{t}}$ and ${\cal M}_{u_{t}}$, unravel the Lindblad master equation, Eq.~\eqref{eq-master}, by writing the unconditional state at time $t+\delta t$, \begin{align}\label{eq-masterunnorm}
{\rho}(t+\delta t) = & \,\, \sum_{O_t} \!\int\!\! {\rm d} u_t \, \wp_{\rm ost}(u_t) {\tilde \rho}_{O_{t},u_{t}} \nonumber \\ =& \, \, \rho(t) + \delta t {\cal L}_{\rm u} \rho(t) + \delta t {\cal D}[{\hat c}_{\rm o}]\rho(t),
\end{align} where ${\tilde \rho}_{O_{t},u_{t}} = {\cal N}_{O_t}{\cal M}_{u_t} \rho(t)$ is the unnormalized state at time $t+\delta t$ conditioned on the results $O_{t}$ and $u_t$. This agrees with a more conventional approach, \begin{align}\label{eq-masternorm}
\rho(t+\delta t) = & \,\, \sum_{O_t} \!\int\!\! {\rm d} u_t \, \wp(u_t, O_t | \rho(t)) \rho_{O_{t},u_{t}} \nonumber \\ = & \sum_{O_t=0,1} \!\!\int\!\! {\rm d} u_t \, \wp_{\rm ost}(u_t) {\rm Tr}( {\tilde \rho}_{O_{t},u_{t}} ) \rho_{O_{t},u_{t}}, \end{align}
where the unconditional state is obtained from summing (integrating) over all possible normalized states $\rho_{O_{t},u_{t}} = {\tilde \rho}_{O_{t},u_{t}}/{\rm Tr}( {\tilde \rho}_{O_{t},u_{t}} )$, with the actual probability $\wp(u_{t}, O_{t} | \rho(t) ) = \wp_{\rm ost}(u_t) {\rm Tr}( {\tilde \rho}_{O_{t},u_{t}} )$.
These two ways of writing the unconditional state, Eq.~\eqref{eq-masterunnorm} and Eq.~\eqref{eq-masternorm}, show that, to obtain correct statistics for the Lindblad master equation, we can either generate normalized states with the actual probabilities, or generate unnormalized states with ostensible probabilities, \textit{e.g.}, treating $u_{t}$ as a random variable with the Gaussian distribution $\wp_{\rm ost}(u_t)$. The latter method is essential for estimation with past-future information, as discussed in Section~\ref{sec-qss}. We introduce a stochastic master equation for the unnormalized state in the next subsection.
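The second option is, at heart, importance sampling with respect to the ostensible distribution. The scalar toy sketch below (the weight $\lambda(u)$ and the distributions are illustrative assumptions, not the qubit's norm) shows that sampling $u$ from the ostensible Gaussian and weighting by $\lambda(u)$ reproduces averages under the actual PDF $\wp(u) = \wp_{\rm ost}(u)\,\lambda(u)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu = 200_000, 0.5

# Ostensible statistics: zero-mean, unit-variance Gaussian samples.
u_ost = rng.normal(0.0, 1.0, N)
# Norm-like weight lambda(u), chosen so that wp(u) = wp_ost(u)*lam(u) is N(mu, 1).
lam = np.exp(mu*u_ost - 0.5*mu**2)

# Actual statistics: sample directly from the actual PDF N(mu, 1).
u_act = rng.normal(mu, 1.0, N)

mean_weighted = np.mean(lam*u_ost)   # E_ost[lam * U] = E_act[U] = mu
mean_direct = np.mean(u_act)         # direct Monte Carlo estimate of E_act[U]
mean_lam = np.mean(lam)              # E_ost[lam] = 1: weights are normalized
```

Within Monte Carlo error, the weighted ostensible average agrees with the direct average under the actual distribution, which is exactly the freedom exploited in the smoothing formalism.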
\subsection{Estimating dynamics between two jumps}\label{sec-btw}
\begin{figure}
\caption{ Schematic diagrams showing the Alice-Bob protocol for the two-level system (qubit) coupled to two bosonic baths via the Lindblad operators $\hat c_{\rm o}$ and $\hat c_{\rm u}$ and measured by Alice and Bob, respectively (similar to Figure~\ref{fig-intro}). Here, Alice's measurement is photon detection, whose record is a set of `clicks' at random times. Bob's measurement is $y$-homodyne detection, whose record is a noisy photocurrent. Alice estimates the state dynamics of the qubit between any two clicks (jumps), where the time $t$ is redefined to run from $t=0$ to $t=T$; Bob's possible unobserved records are shown in grey in the middle right panel. Alice's estimated state dynamics (represented here by one Bloch-vector component, $z_\tau$) for some cost function is shown as the red curve in the enlarged time interval in the bottom right panel.}
\label{fig-introqubit}
\end{figure}
Consider the Alice-Bob scenario as shown in Figure~\ref{fig-introqubit}, where Alice has access to only the observed record and her task is to estimate Bob's true state. Alice's record comes from the photon detection, so it is (during a time period of interest) a series of $J$ detections (jumps) at times denoted by $T_j \in \{ T_1, T_2, ..., T_J \}$. After each detection, the qubit state deterministically collapses to the ground state, regardless of its state $\rho(T_j)$ right before the jump. That is, \begin{align}
\rho(T_j+\delta t) = \frac{ {\cal N}_{1} \rho(T_j)}{{\rm Tr}[ {\cal N}_1 \rho(T_j) ]} = |g\rangle\langle g|, \end{align} using the measurement operation for jumps defined earlier. Since the jumps reset the qubit state to its ground state, we can consider separately its dynamics between any two consecutive jumps (or the initial time and $T_1$, or $T_J$ and the final time). The first and last intervals are negligible for the long-time averages. We therefore consider a \emph{block} of qubit's evolution between any two jumps, where, for the rest of this Section, we will redefine $t$ as the time from the ground state after a jump at time $T_j$, running from $t=0$ to $t = T = T_{j+1} - T_j$. The observed record in this block is thus given by $\bothp{O} = \{ O_0 , O_{\delta t}, ..., O_{T-\delta t} ,O_T \}^{\top} = \{ 0,0,..., 0,1\}^{\top}$; that is, no jump during the interval and one jump at the end. The jump $O_T$ at the end of the interval is included in the observed record, even though the unobserved record is still defined as $\both{u} = \{ u_0 , u_{\delta t}, ..., u_{T-\delta t} \}^{\top}$. This is because the jump resets the state to the ground state and the unobserved record at the final time is not relevant.
For Alice to estimate the state dynamics between jumps, she needs to know the possible dynamics of the true state. The true state is conditioned on both $\bothp{O}$ (observed) and $\both{u}$ (unknown) records. Following the unnormalized state conditioned on both records in Eq.~\eqref{eq-unnorm}, the state update for the unnormalized true state ${\tilde \rho}_{\text T}(t) \equiv {\tilde \rho}_{\pasts{O}, \pasts{u}}$ is \begin{align}\label{eq-updatetrue} {\tilde \rho}_{\text T}(t+\delta t) = &\, {\cal N}_{0} {\cal M}_{u_t} {\tilde \rho}_{\text T}(t), \nonumber \\ = &\, {\tilde \rho}_{\text T}(t) - i \, \delta t\, [\hat H, {\tilde \rho}_{\text T}(t)] - \tfrac{1}{2}(\delta t - u_t^2\delta t^2) \bar{\cal H}[\hat c^2_{\rm u}]{\tilde \rho}_{\text T}(t) \\
& +\, u_t^2 \delta t^2 \hat c_{\rm u} {\tilde \rho}_{\text T}(t) \hat c_{\rm u}^\dagger - \delta t \bar{ \cal H}[\tfrac{1}{2} \hat c_{\rm u}^\dagger \hat c_{\rm u}] {\tilde \rho}_{\text T}(t) + \, u_t \delta t \, {\bar {\cal H}}[{\hat c}_{\rm u}] {\tilde \rho}_{\text T}(t) - \delta t\, {\bar{ \cal H}}\left[\tfrac{1}{2} {\hat c}_{\rm o}^{\dagger}{\hat c}_{\rm o} \right]{\tilde \rho}_{\text T}(t), \nonumber \end{align} where ${\bar {\cal H}}[\hat c] \bullet = \hat c \bullet + \bullet \hat c^{\dagger}$. We note the importance of keeping terms of the order $u_t^2\delta t^2$, in order to see how various differential equations in the time-continuum limit arise from this one map. Depending on the assumed statistical properties of the record $u_{t}$, Eq.~\eqref{eq-updatetrue} can lead to two different differential equations for the qubit state, which will be used in the Fokker-Planck technique in Section~\ref{sec-fpe} and the most-likely path technique in Section~\ref{sec-mlp}.
From the map Eq.~\eqref{eq-updatetrue}, we can derive a stochastic master equation (SME) for the unnormalized state, which averages to the unconditioned master equation. Referring to Eq.~\eqref{eq-masterunnorm} and the surrounding discussion, we should treat the measurement result $u_t$ as having zero-mean Gaussian statistics in order to generate the correct statistics for the unnormalized conditioned states. Therefore, we can derive the SME in the It\^o interpretation~\cite{BookGardiner2,BookJacobsSto} by taking $\lim_{\delta t \rightarrow \dd t} u_t^2 \delta t^2 = \dd t$, which gives \begin{align}\label{eq-untruequbit} {\rm d} {\tilde \rho}_{\text T} = & - i\, \dd t\, [\hat H, {\tilde \rho}_{\text T}] + \dd t\, {\cal D}[{\hat c}_{\rm u}]{\tilde \rho}_{\text T} + u_t \dd t \, {\bar{\cal H}}[{\hat c}_{\rm u}]{\tilde \rho}_{\text T} - \dd t\, {\bar{ \cal H}}\left[\tfrac{1}{2} {\hat c}_{\rm o}^{\dagger}{\hat c}_{\rm o} \right]{\tilde \rho}_{\text T}. \end{align} Here we omit the time argument of the state whenever possible. Moreover, since the initial state $\rho_{\text T}(t=0)$ is a pure state, the true state at any time is always pure, and Eq.~\eqref{eq-untruequbit} could be replaced with a stochastic Schr\"odinger equation (SSE) \cite{BookGardiner1,BookWiseman} or, indeed, something even simpler, as we will see below in an equation for the qubit's angle $\theta$.
For completeness, we also give the normalized version of the SME in Eq.~\eqref{eq-untruequbit}: \begin{align}\label{eq-truequbit} {\rm d} \rho_{\text T} =& - i\, \dd t\, [\hat H, \rho_{\text T}] + \dd t\, {\cal D}[{\hat c}_{\rm u}]\rho_{\text T} - \dd t\, {\cal H}\left[\tfrac{1}{2} {\hat c}_{\rm o}^{\dagger}{\hat c}_{\rm o} \right]\rho_{\text T} + \dd t\{ u_t - {\rm Tr}[ (\hat c_{\rm u} + \hat c_{\rm u}^\dagger) \rho_{\text T} ] \} \, {\cal H}[{\hat c}_{\rm u}]\rho_{\text T}. \end{align}
This is the usual It\^o SME for a normalized state conditioned on both Alice's no-jump and Bob's diffusive records, where ${ {\cal H}}[\hat c] \bullet = \hat c \bullet + \bullet \hat c^{\dagger} - {\rm Tr}( \hat c \bullet + \bullet \hat c^{\dagger} )\bullet$. Note that the statistics of the unknown record $u_{t}$ in this case are governed by its actual conditional PDF $\wp_{\pasts{O}}(u_{t})$. The PDF can be approximated as a Gaussian function with a mean given by $\langle U_{t} \rangle_{ U_{t} |\bothps{O}} = {\rm Tr}[ (\hat c_{\rm u} + \hat c_{\rm u}^\dagger)\rho_{\text T}]$; see Eq.~\eqref{eq-app-probupast} in Appendix~\ref{sec-app-pdf}. Therefore, the SME for the normalized state in Eq.~\eqref{eq-truequbit} is usually written in terms of an innovation, ${\rm d} w_{\text T}(t) \equiv u_t \dd t - {\rm Tr}[ (\hat c_{\rm u} + \hat c_{\rm u}^\dagger) \rho_{\text T} ]\dd t$, which has the same statistics as the Wiener increment, \textit{i.e.}, $\langle {\rm d} w_{\text T} \rangle = 0$ and $\langle {\rm d} w_{\text T}^2 \rangle = \dd t$.
Alice, with only the record of observed jumps, can calculate her filtered state trajectory $\rho_{\text F}$ conditioned on her past observed record following Eq.~\eqref{eq-filstate}. This can also be obtained by averaging over all possible unknown records from Eq.~\eqref{eq-truequbit}, which is equivalent to averaging the equation over the innovation ${\rm d} w_{\text T}$, yielding \begin{align}\label{eq-filterqubit} {\rm d} \rho_{\text F} = \!- i\, \dd t\, [{\hat H}, \rho_{\text F}] + \dd t\, {\cal D}[{\hat c}_{\rm u}]\rho_{\text F} - \dd t\, {\cal H}\left[\tfrac{1}{2} {\hat c}_{\rm o}^{\dagger}{\hat c}_{\rm o} \right]\rho_{\text F}, \end{align}
which is Eq.~\eqref{eq-truequbit} with its last term removed. In contrast to the true state, the filtered state does not have unit purity, even though the initial state is pure. However, for a better estimate of the true state, the observer can use the entire observed record (both past-future information) in the estimation, following the estimators based on the cost functions presented in Section~\ref{sec-sevenQSE}.
Since the true qubit state in Eq.~\eqref{eq-truequbit} is confined to the $y$--$z$ great circle of the Bloch sphere, other states, such as the unconditioned state Eq.~\eqref{eq-master} or the filtered state Eq.~\eqref{eq-filterqubit}, always lie in the $y$--$z$ plane. Therefore, we can reparametrize the qubit states with two parameters, $\theta$ and $R$, where $z = R \cos \theta$ and $y= R \sin \theta$ are the two coordinates of the Bloch vector (for the pure true state, $R = 1$). The excited state is $\theta = 0$ and the ground state is $\theta = \pi$. The normalized and unnormalized true states at any time $t$ are given by $\rho_{\text T}(t) = (1/2)( \hat I + \sin\theta_t \hat \sigma_y + \cos\theta_t \hat \sigma_z)$ and ${\tilde \rho}_{\text T}(t)= (\lambda_t/2)( \hat I + \sin\theta_t \hat \sigma_y + \cos\theta_t \hat \sigma_z)$, respectively, where $\lambda_t \equiv {\rm Tr}[{\tilde \rho}_{\text T}(t)]$ takes care of the state's norm. We then obtain the state update Eq.~\eqref{eq-updatetheta} for these new variables, \begin{subequations}\label{eq-updatetheta} \begin{align} \theta_{t+\delta t} = &\,\, \theta_t - \delta t\, \Omega + \delta t \tfrac{\gamma}{2} \sin\theta_t + (\delta t-u_t^2\delta t^2)\tfrac{\Gamma}{2}\sin\theta_t - u_t^2 \delta t^2\tfrac{\Gamma}{2} \cos\theta_t \sin\theta_t - u_t \delta t \sqrt{\Gamma}(\cos\theta_t+1), \\ \lambda_{t+\delta t} =& \,\, \lambda_t - \delta t \tfrac{\gamma}{2} ( \cos\theta_t + 1 ) \lambda_t - u_t \delta t \sqrt{\Gamma} \sin\theta_t \lambda_t - (\delta t- u_t^2\delta t^2)\tfrac{\Gamma}{2} \lambda_t - (\delta t- u_t^2\delta t^2)\tfrac{\Gamma}{2} \cos\theta_t \lambda_t , \end{align} \end{subequations}
with terms up to the order $u_{t}^2 \delta t^2$. These equations will be used in deriving two types of differential equations in the next subsections, as mentioned above.
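These update equations can be checked directly against the unconditioned no-jump dynamics: iterating Eqs.~\eqref{eq-updatetheta} with $u_t$ drawn from the zero-mean ostensible Gaussian (variance $1/\delta t$), the trajectory averages of the unnormalized Bloch components should reproduce the linear master equation obtained by averaging away the $u_t$ terms. The sketch below uses illustrative parameter values and a plain Euler integrator, so agreement is only expected to Monte Carlo and $O(\delta t)$ accuracy:

```python
import numpy as np

rng = np.random.default_rng(7)
Omega, gamma, Gamma = 1.0, 0.5, 0.5   # illustrative Rabi, observed, unknown rates
dt, T, Ntraj = 1e-3, 1.0, 20_000
steps = int(T/dt)

# Monte Carlo over ostensible records, starting from the ground state theta = pi.
theta = np.full(Ntraj, np.pi)
lam = np.ones(Ntraj)
for _ in range(steps):
    udt = np.sqrt(dt)*rng.standard_normal(Ntraj)   # u_t * dt ~ N(0, dt)
    u2dt2 = udt**2                                 # u_t^2 * dt^2, mean dt
    s, c = np.sin(theta), np.cos(theta)
    dth = (-dt*Omega + 0.5*dt*gamma*s + 0.5*(dt - u2dt2)*Gamma*s
           - 0.5*u2dt2*Gamma*c*s - udt*np.sqrt(Gamma)*(c + 1.0))
    dlam = (-0.5*dt*gamma*(c + 1.0) - udt*np.sqrt(Gamma)*s
            - 0.5*(dt - u2dt2)*Gamma*(1.0 + c))*lam
    theta, lam = theta + dth, lam + dlam

mc_trace = lam.mean()                # E[lambda]        -> Tr[rho_tilde]
mc_z = (lam*np.cos(theta)).mean()    # E[lambda cos th] -> Tr[sigma_z rho_tilde]

# Reference: linear no-jump master equation (the u_t terms average to zero).
sx = np.array([[0, 1], [1, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
sm = np.array([[0, 0], [1, 0]], complex)       # sigma_- in the (|e>, |g>) basis
H = 0.5*Omega*sx
cu, co = np.sqrt(Gamma)*sm, np.sqrt(gamma)*sm  # the e^{-i pi/2} phase drops out here
rho = np.array([[0, 0], [0, 1]], complex)      # |g><g|
for _ in range(steps):
    rho = rho + dt*(-1j*(H@rho - rho@H) + cu@rho@cu.conj().T
                    - 0.5*(cu.conj().T@cu@rho + rho@cu.conj().T@cu)
                    - 0.5*(co.conj().T@co@rho + rho@co.conj().T@co))

ref_trace = np.trace(rho).real
ref_z = np.trace(sz@rho).real
```

Retaining the $(\delta t - u_t^2\delta t^2)$ terms matters here: they vanish only on average, and dropping them would bias the trajectory ensemble at this order.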
In the following, we present the calculation methods for the eight estimators, and the numerical simulation results, in three subsections. Each subsection contains the estimators that are calculated with similar mathematical techniques, which are the Fokker-Planck equation, the CDJ most-likely path technique, and the local optimal records. We finish the section with the calculation of the expected cost functions and their averages for all possible jump times, where possible.
\subsection{State estimators with Fokker-Planck equation}\label{sec-fpe} The first three cost functions, $\textbf{Q}_1$--$\textbf{Q}_3$, are defined in the space of unknown true states at any time, from Eq.~\eqref{eq-qss-alter}, Eq.~\eqref{eq-th-q-nf} and Eq.~\eqref{eq-th-q-ne}. The three estimators are, respectively, the mean, the max-eigenstate of the mean, and the mode, of the PDF $\wp_{\bothps{O}}(\hat\psi)$ of the true state conditioned on the past-future observed record. One could use stochastic simulation to find the PDF of the true state. However, in our example, the true state can be parametrized by the one-dimensional variable $\theta$. This allows us to solve for the PDF $\wp_{\bothps{O}}(\theta)$ via a semi-analytical method similar to solving a Fokker-Planck equation.
The dynamical equations for the true state are shown in Eq.~\eqref{eq-untruequbit} (unnormalized) and Eq.~\eqref{eq-truequbit} (normalized version) and one might be tempted to think that the equation for the normalized true state could be enough. However, as mentioned in the quantum state smoothing formalism, Section~\ref{sec-qss}, the correct statistics of true states or unknown records, conditioned on the fixed observed record, cannot be obtained from the normalized state and should be derived from the unnormalized state equations and their norms. Therefore, let us refer back to the PDF of the unobserved record written in terms of the unnormalized state in Eq.~\eqref{eq-bothoprob1}. From this, we can derive a general form of the PDF $\wp_{\pasts{O}}(\hat\psi)$ of the true state: \begin{align}\label{eq-findpdf} \wp_{\pasts{O}}(\hat\psi) \propto & \int \!\! {\rm d}\mu(\,\past{u}\,) \delta[\hat\psi - \rho_{\pasts{O}, \pasts{u}}] \wp_{\rm ost}(\,\past{u}\,) {\rm Tr}\left( {{\tilde \rho}}_{\pasts{O}, \pasts{u}} \right), \nonumber \\ = & \int \!\! {\rm d}\mu(\,\past{u}\,) \!\! \int\!\! {\rm d} \lambda\, \delta[\lambda - {\rm Tr}({\tilde \rho}_{\pasts{O}, \pasts{u}})] \,\delta[\hat\psi - \rho_{\pasts{O}, \pasts{u}}] \, \wp_{\rm ost}(\,\past{u}\,) \lambda. \end{align} In the second line, we added the $\delta$-function mapping the state norm to the variable $\lambda$. This is so that we can separately define an ostensible PDF, \begin{align} Q_{\pasts{O}}(\lambda, \hat\psi) = \int \!\! {\rm d}\mu&(\,\past{u}\,)\, \delta[\lambda - {\rm Tr}({\tilde \rho}_{\pasts{O}, \pasts{u}})]\, \delta[\hat\psi - \rho_{\pasts{O}, \pasts{u}}]\, \wp_{\rm ost}(\,\past{u}\,), \end{align} which has an advantage that it can be solved for using the dynamical equation for the unnormalized state and its norm, for example, as in Eq.~\eqref{eq-updatetheta}.
It is important to note that the ostensible PDF $Q_{\pasts{O}}(\lambda, \hat\psi)$ is not the actual joint PDF of $\lambda$ and $\hat\psi$. However, the actual PDF of the true state can be obtained via \begin{align}\label{eq-quasi-int} \wp_{\pasts{O}}(\hat\psi) \propto \tilde\wp_{\pasts{O}}(\hat\psi) = \int \!\! {\rm d} \lambda \, Q_{\pasts{O}}(\lambda, \hat\psi) \lambda, \end{align} where the tilde indicates an unnormalized PDF, \textit{i.e.}, $\wp_{\pasts{O}}(\hat\psi) = \tilde\wp_{\pasts{O}}(\hat\psi) / \int \!\!{\rm d} \mu_{\rm H}(\hat\psi) \wp_{\pasts{O}}(\hat\psi)$. Note the multiplication by $\lambda$ in the integrand in Eq.~\eqref{eq-quasi-int}. The past-future conditional PDF $\wp_{\bothps{O}}(\hat\psi)$ is then given by the Bayesian rule in Eq.~\eqref{eq-bayes}. For the particular example of the qubit pure state parametrized by the Bloch angle $\theta$, we can use ${\rm d} \mu_{\rm H}(\hat\psi) = {\rm d} \theta$ as the Haar measure and replace \begin{align} \hat\psi = \hat S(\theta) = (1/2)( \hat I + \sin\theta \hat \sigma_y + \cos\theta \hat \sigma_z), \end{align} where the PDF of the pure state becomes $\wp_{\bothps{O}}(\theta)$.
In order to solve for the ostensible PDF $Q_{\pasts{O}}(\lambda, \theta)$, we start with a Langevin-type equation for the unnormalized state from Eqs.~\eqref{eq-updatetheta}. As we mentioned in the previous subsection, the statistics of $u_{t}$ are governed by the ostensible PDF $\wp_{\rm ost}(u_{t})$, appropriate to the unnormalized state. Therefore, we can take ${\rm d} v = \lim_{\delta t \rightarrow \dd t} u_t\delta t $ to have the same statistics as the Wiener increment. This then leads to the It\^o rule, $ \lim_{\delta t \rightarrow \dd t} u_t^2\delta t^2 = \dd t$, which makes the terms with $(\delta t - u_t^2\delta t^2)$ in Eqs.~\eqref{eq-updatetheta} disappear. We obtain a two-dimensional Langevin equation for $\theta$ and $\lambda$, \begin{subequations} \begin{align}\label{eq-langevin} {\rm d} \theta = & \, A_{\theta}(\theta) \dd t + B_{\theta}(\theta) {\rm d} v,\\ {\rm d} \lambda = & A_\lambda(\lambda, \theta) \dd t + B_\lambda(\lambda,\theta) {\rm d} v, \end{align} \end{subequations} where we have defined \begin{subequations} \begin{gather} A_\theta(\theta) = - \Omega + \tfrac{\gamma}{2} \sin\theta - \tfrac{\Gamma}{2} \cos\theta \sin\theta, \quad B_\theta(\theta) = - \sqrt{\Gamma} (\cos\theta + 1),\\ A_\lambda(\lambda,\theta) = - \tfrac{\gamma}{2} (\cos\theta+1)\, \lambda, \quad B_\lambda(\lambda, \theta) = - \sqrt{\Gamma} \sin\theta \, \lambda. \end{gather} \end{subequations} From the Langevin equation, we can write a corresponding Fokker-Planck equation for $Q_{\pasts{O}}(\lambda,\theta)$, describing the diffusion in the two-dimensional space $(\theta,\lambda)$, \begin{align}\label{eq-FPE} \partial_t Q_{\pasts{O}}&(\lambda, \theta) = - \sum_{j} \partial_j [ A_j(\lambda,\theta) \, Q_{\pasts{O}}(\lambda,\theta )] + \sum_{i,j} \tfrac{1}{2}\partial_i\partial_j \left[ B_i (\lambda,\theta) B_j (\lambda,\theta) \, Q_{\pasts{O}}(\lambda,\theta) \right], \end{align} where the summations are over the coordinate labels, $i, j \in \{ \lambda, \theta \}$.
The unnormalized true PDF in the $\theta$ variable is defined in Eq.~\eqref{eq-quasi-int}. Substituting this into Eq.~\eqref{eq-FPE}, we obtain a partial differential equation for the unnormalized true PDF, \begin{align}\label{eq-PDEsol} \partial_t \tilde \wp_{\pasts{O}}(\theta) = &- \tfrac{\gamma}{2}(1+\cos\theta)\tilde \wp_{\pasts{O}}(\theta) + \partial_\theta \left\{ \left[ \sqrt{\Gamma}\sin\theta \, B_\theta(\theta)- A_\theta(\theta) \right] \tilde \wp_{\pasts{O}}(\theta)\right\} + \tfrac{1}{2} \partial^2_\theta\left[ B_\theta^2(\theta) \tilde \wp_{\pasts{O}}(\theta) \right]. \end{align} This can be solved numerically with an appropriate initial condition and a periodic boundary condition for the variable $\theta$. Thus we can finally obtain the normalized past-only and past-future conditional PDFs of the true state, \begin{align} \wp_{\pasts{O}}(\theta) = & \frac{\tilde \wp_{\pasts{O}}(\theta)}{\int\! {\rm d} \theta \, \tilde \wp_{\pasts{O}}(\theta)},\\ \wp_{\bothps{O}}(\theta) = & \frac{\tilde \wp_{\pasts{O}}(\theta) {\rm Tr} \big [ \hat E_{\futps{O}}\, \hat S(\theta) \big ] }{\int\! {\rm d} \theta\, \tilde \wp_{\pasts{O}}(\theta) {\rm Tr}\big[\hat E_{\futps{O}} \,\hat S(\theta) \big ]}. \end{align} We show in Appendix~\ref{sec-app-retroeff} how the retrofiltered matrix ${\hat E}_{\futps{O}}$ can be simply calculated from the no-observed-jump evolution. These PDFs above are then used to compute the smoothed state (the estimator for ${\textbf{Q}_1}$): \begin{align}
\rho_{\textbf{Q}_1} = &\, \rho_{\text S} = \int\!\! {\rm d} \theta \, \hat S(\theta) \,\wp_{\bothps{O}}(\theta). \end{align} Moreover, following Eqs.~\eqref{eq-th-q-nf} and \eqref{eq-th-q-ne}, the other two estimators in this subsection, $\rho_{\textbf{Q}_2}$ and $\rho_{\textbf{Q}_3}$, are found as \begin{align}
\rho_{\textbf{Q}_2} & = \hat{\psi}_{\text S}^{\rm max} = \hat S(\theta_{\textbf{Q}_2}),\\
\rho_{\textbf{Q}_3} & = \hat S(\theta_{\textbf{Q}_3}). \end{align}
The first line corresponds to the max-eigenvalue eigenvector of the smoothed state, \textit{i.e.}, $ \rho_{\text S} |{\psi}_{\text S}^{\rm max} \rangle = \lambda_{\text S}^{\rm max}|{\psi}_{\text S}^{\rm max} \rangle$, which leads to $\theta_{\textbf{Q}_2} = \arg {\rm Tr}[(\hat \sigma_z + i \hat \sigma_y)\rho_{\text S}]$. The second line is the most-likely state obtained from the most-likely angle, $\theta_{\textbf{Q}_3} = \argmax_{\theta} \wp_{\bothps{O}}(\theta)$.
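On a $\theta$ grid, these three estimators take only a few lines to evaluate. The sketch below uses an assumed bimodal stand-in for $\wp_{\bothps{O}}(\theta)$ (weights and peak positions are arbitrary illustrations) to show that $\rho_{\textbf{Q}_1}$ is mixed, $\theta_{\textbf{Q}_2}$ shares the smoothed state's Bloch angle, and $\theta_{\textbf{Q}_3}$ sits at the higher peak:

```python
import numpy as np

# Assumed bimodal conditional PDF on the Bloch angle (illustrative stand-in).
th = np.linspace(0.0, 2*np.pi, 4000, endpoint=False)
dth = th[1] - th[0]
p = 0.7*np.exp(-(th - 1.0)**2/(2*0.1**2)) + 0.3*np.exp(-(th - 2.5)**2/(2*0.1**2))
p /= p.sum()*dth

# Q1: smoothed state = mean of S(theta); only the Bloch components are needed.
y_S = (np.sin(th)*p).sum()*dth
z_S = (np.cos(th)*p).sum()*dth
R_S = np.hypot(y_S, z_S)            # Bloch radius of rho_S: < 1, i.e., mixed

# Q2: max-eigenvalue eigenstate of rho_S: same Bloch angle, unit radius.
theta_Q2 = np.arctan2(y_S, z_S)     # = arg Tr[(sigma_z + i sigma_y) rho_S]

# Q3: most-likely state: argmax of the conditional PDF.
theta_Q3 = th[np.argmax(p)]
```

With these numbers the smoothed state has Bloch radius well below one, $\theta_{\textbf{Q}_2}$ lies between the two peaks (closer to the heavier one), and $\theta_{\textbf{Q}_3}$ sits at the heavier peak; as the weights cross, $\theta_{\textbf{Q}_3}$ jumps discontinuously to the other peak, which is the mechanism behind the jumps discussed below.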
\begin{figure}\label{fig-trajs}
\end{figure}
\begin{figure}\label{fig-blochprob}
\end{figure}
We show in Figure~\ref{fig-trajs} the numerical results for the qubit state estimators between two jumps, compared with the regular filtered quantum trajectory $\rho_{\text F}$ obtained from Eq.~\eqref{eq-filterqubit} (blue curves). We solve the partial differential equation in Eq.~\eqref{eq-PDEsol} (using Mathematica's `NDSolve' package) with the delta-function initial state replaced by a narrow Gaussian approximation, $\wp_{t=0} (\theta) = (2\pi \sigma^2)^{-1/2}\exp[ - (\theta-\pi)^2/2\sigma^2]$, for the initial ground state $\theta_0 = \pi$. We chose $\sigma = 0.01$, which gives a peak value of $\wp_{t=0}(\pi) \approx 39.89$ instead of infinity.
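An open-source alternative to `NDSolve' is a simple explicit finite-difference scheme for Eq.~\eqref{eq-PDEsol}. The sketch below (illustrative parameters and grid, not those used for the figures) uses mass-conserving central differences with a periodic boundary in $\theta$, so the total mass of $\tilde\wp_{\pasts{O}}$ decays only through the sink term $-(\gamma/2)(1+\cos\theta)\tilde\wp_{\pasts{O}}$:

```python
import numpy as np

Omega, gamma, Gamma = 1.0, 0.5, 0.5        # illustrative parameters
N = 200
th = np.linspace(0.0, 2*np.pi, N, endpoint=False)
dth = th[1] - th[0]
dt = 2e-4                                  # explicit scheme: dt < dth**2/max(B**2)
steps = int(1.0/dt)                        # evolve to t = 1

A = -Omega + 0.5*gamma*np.sin(th) - 0.5*Gamma*np.cos(th)*np.sin(th)   # A_theta
B = -np.sqrt(Gamma)*(np.cos(th) + 1.0)                                # B_theta
drift = np.sqrt(Gamma)*np.sin(th)*B - A    # coefficient inside d/dtheta[...]
sink = 0.5*gamma*(1.0 + np.cos(th))        # trace-loss (no-jump) term

def d1(f):   # periodic central first derivative (mass-conserving)
    return (np.roll(f, -1) - np.roll(f, 1))/(2*dth)

def d2(f):   # periodic central second derivative (mass-conserving)
    return (np.roll(f, -1) - 2*f + np.roll(f, 1))/dth**2

# Narrow Gaussian approximation to the delta function at the ground state.
p = np.exp(-(th - np.pi)**2/(2*0.1**2))
p /= p.sum()*dth
mass0 = p.sum()*dth

for _ in range(steps):
    p = p + dt*(-sink*p + d1(drift*p) + 0.5*d2(B**2*p))

mass = p.sum()*dth   # unnormalized: decays through the sink term only
p_norm = p/mass      # normalized past-conditioned PDF on the grid
```

The explicit step must satisfy $\delta t \lesssim \delta\theta^2/\max B_\theta^2$ for stability; normalizing the final profile gives $\wp_{\pasts{O}}(\theta)$, and multiplying by the retrofiltered weight before normalizing gives $\wp_{\bothps{O}}(\theta)$.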
The first three estimators, $\rho_{\textbf{Q}_1} = \rho_{\text S}$, $\rho_{\textbf{Q}_2}$, and $\rho_{\textbf{Q}_3}$, are shown in red, dashed light blue, and dot-dashed magenta curves, respectively. The smoothed state $\rho_{\text S}$ is not necessarily pure, while the other two estimators are pure states with unit radius; see Figure~\ref{fig-trajs}(d). At the final time, $T=4 T_\gamma$, one might expect that $\rho_{\text F}(T) = \rho_{\text S}(T)$ since the conditioning on the past record should be the same as the conditioning on the whole record at the final time. However, this is not true for our example because the last jump is not included in the past record $\past{O}$. One could include the last jump by considering the state estimators with one more time step, but all estimators would simply jump to the ground state at the new final time. Also note that, in Figure~\ref{fig-trajs}(c), the estimator $\rho_{\textbf{Q}_2}$ always has exactly the same Bloch angle as the smoothed state (also shown as orange dashed lines in Figure~\ref{fig-blochprob}). This is because $\rho_{\textbf{Q}_2}$, in Eq.~\eqref{eq-th-q-nf}, is simply the purest state possible with the same Bloch angle as the smoothed state.
We have chosen the parameter regime (detailed in the caption of Figure~\ref{fig-trajs}) such that we can see an interesting feature: the discontinuity in the most-likely state $\rho_{\textbf{Q}_3}$ at $t = 2.09 T_\gamma$. This occurs because the most-likely state follows the highest point of the PDF of true states, Eq.~\eqref{eq-bayes}, at each local time. The PDF can have multiple competing peaks, which results in the most-likely state jumping between local maxima as the PDF changes in time. We show in Figure~\ref{fig-blochprob} how the PDFs and the jumping states (magenta dots) look, right before the jump time, $t = 2.09 T_\gamma$, and not long after the jump, $t = 2.15 T_\gamma$. Also, in Figure~\ref{fig-blochprob}(a)-(b), we can see that the smoothed state and $\rho_{\textbf{Q}_2}$ have the same angle on the Bloch sphere at both times.
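The mechanism behind such a discontinuity can be illustrated with a toy bimodal PDF (illustrative numbers only, not the qubit's actual distribution): as the relative weight of two competing peaks crosses over, the global maximum jumps discontinuously from one peak to the other.

```python
import numpy as np

def bimodal_pdf(theta, w):
    # Two competing peaks; as the weight w crosses 0.5 the global maximum
    # jumps discontinuously from one peak to the other.
    g1 = np.exp(-(theta - 1.0)**2 / 0.02)
    g2 = np.exp(-(theta - 2.5)**2 / 0.02)
    return w * g1 + (1.0 - w) * g2

theta = np.linspace(0.0, np.pi, 10001)
mode_before = theta[np.argmax(bimodal_pdf(theta, 0.6))]  # near theta = 1.0
mode_after  = theta[np.argmax(bimodal_pdf(theta, 0.4))]  # near theta = 2.5
```

Even though the PDF changes continuously with $w$, its argmax does not, which is exactly the behavior seen in $\rho_{\textbf{Q}_3}$.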
\subsection{State estimators with CDJ optimization}\label{sec-mlp} For the estimators of $\textbf{Q}_4$ and $\textbf{Q}_5$, we need to optimize the PDFs of the unknown measurement record conditioned on the observed record. We follow the optimization approach presented in Section~\ref{sec-sevenQSE}. As in the previous subsection, we are interested in the block of the qubit's dynamics between any two jumps, where the observed record is given by $\bothp{O} = \{ O_0 , O_{\delta t}, ..., O_{T-\delta t} ,O_T \}^{\top} = \{ 0,0,..., 0,1\}^{\top}$ and the unknown record $u_{t}$ is from the $y$-homodyne measurement.
Following the CDJ optimization, we need to solve the difference equations in Eqs.~\eqref{eq-diffeq1} and \eqref{eq-diffeq2}, for the estimators $\rho_{\textbf{Q}_4}$ and $\rho_{\textbf{Q}_5}$, respectively. These difference equations have to be derived from the state update for normalized states, given by the $\theta$ equation in Eqs.~\eqref{eq-updatetheta}, and the PDFs of time-local unknown measurement results. For the state update, since a solution of the difference equations gives smooth optimal unknown records and their corresponding smooth state paths, we need to treat the variable $u_t$ as zeroth order in $\delta t$, instead of ${\cal O}(\delta t^{-1/2})$ as in the Fokker-Planck approach. Therefore, the state mapping $\theta_{t+\delta t} = \bm{{\cal E}}(\theta_{t}, u_{t}, O_{t} =0)$ is given by \begin{align} \theta_{t+\delta t} = &\, \theta_t - \delta t\, \Omega + \delta t \tfrac{\gamma+\Gamma}{2} \sin\theta_t - u_t \delta t \sqrt{\Gamma}(\cos\theta_t+1), \\ \label{eq-lambdt}\lambda_{t+\delta t} =& \, \left[1 - \delta t \tfrac{\gamma+\Gamma}{2} ( \cos\theta_t + 1 ) - u_t \delta t \sqrt{\Gamma} \sin\theta_t \right ] \lambda_t. \end{align} This is the update equation for the true state, Eqs.~\eqref{eq-updatetheta}, kept to first order in $\delta t$. For the PDF of the unknown result, we can use the equation of the state norm, \begin{align}
\wp(u_{t}, O_{t} = 0 | \theta_{t}) =&\, \, \wp_{\rm ost}(u_t) {\rm Tr}[{\cal N}_0 {\cal M}_{u_{t}} \hat S(\theta_{t}) ] \nonumber \\ = & \,\, \wp_{\rm ost}(u_t)\lambda_{t+\delta t}(\theta_{t}, u_{t},\lambda_t)/\lambda_t, \end{align} with the form of $\lambda_{t+\delta t}$ given in Eq.~\eqref{eq-lambdt}. This leads to the logarithm of the probability to first order in $\delta t$,
\begin{align}
\ln \wp(u_{t}, & O_{t} = 0 | \theta_{t}) = \tfrac{1}{2} \ln (\delta t/2\pi) - \delta t \left[ \tfrac{1}{2} u_t^2 + \tfrac{\gamma+\Gamma}{2} ( \cos\theta_t + 1 ) + u_t \sqrt{\Gamma} \sin\theta_t \right]. \end{align} Putting all the components together, we can construct the difference equations as in Eqs.~\eqref{eq-diffeq1} or \eqref{eq-diffeq2}. Taking the continuum limit $\delta t \rightarrow \dd t$, we obtain a set of ordinary differential equations (ODEs) for $\theta_t$, its conjugate $p_t$, and an optimal record $u_t$, \begin{subequations}\label{eq-diffeqcont} \begin{align} \partial_t \theta_t = & - \Omega + \tfrac{\Gamma+\gamma}{2} \sin\theta_t - \sqrt{\Gamma}(\cos\theta_t+1) u_t, \label{eq-thetaeq}\\ \partial_t p_t = & - \tfrac{\Gamma+\gamma}{2}\,p_t\, \cos\theta_t - \sqrt{\Gamma} \, p_t\, \sin\theta_t\, u_t + \sqrt{\Gamma}\cos\theta_t\, u_t - \tfrac{\Gamma + \gamma}{2} \sin\theta_t ,\\ u_t = & - \sqrt{\Gamma}\left( p_t + p_t \cos\theta_t + \sin\theta_t \right). \end{align} \end{subequations} The initial condition for these ODEs is the qubit's initial state $\theta_0 = \pi$ (ground state). However, the final condition for the ODEs will depend on the problem of interest; that is, whether Eqs.~\eqref{eq-diffeqcont} will be used for $\textbf{Q}_4$ or $\textbf{Q}_5$.
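Before the continuum limit, the first-order update map for $(\theta_t, \lambda_t)$ above can be sketched directly in Python. The parameter values below are illustrative placeholders, not those used in the figures:

```python
import numpy as np

Gam, gam, Om = 1.0, 1.0, 1.0   # illustrative parameters, not the figure's values

def euler_map(theta, lam, u, dt):
    """One no-jump step of the (theta, lambda) update, first order in dt."""
    theta_new = (theta - dt * Om + dt * 0.5 * (gam + Gam) * np.sin(theta)
                 - u * dt * np.sqrt(Gam) * (np.cos(theta) + 1.0))
    lam_new = (1.0 - dt * 0.5 * (gam + Gam) * (np.cos(theta) + 1.0)
               - u * dt * np.sqrt(Gam) * np.sin(theta)) * lam
    return theta_new, lam_new

th1, lam1 = euler_map(np.pi, 1.0, 0.0, 1e-3)  # one step from the ground state, u = 0
```

Note that at the ground state $\theta = \pi$ with $u = 0$, the norm factor $\lambda$ is unchanged (both $\cos\theta + 1$ and $\sin\theta$ vanish there), while $\theta$ drifts by $-\Omega\,\delta t$, matching the update equations term by term.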
Let us first consider the simpler case, $\textbf{Q}_5$, where the optimal unknown record and its corresponding state estimator $\rho_{\textbf{Q}_5}$ are calculated from the ODEs just once with the final boundary condition at time $T$. Following Eqs.~\eqref{eq-diffeq2}, the final condition is given by the third line. Using the final observed jump record, we get ${\hat E}_T = {\hat c}_{\rm o}^{\dagger} {\hat c}_{\rm o} \dd t = \gamma \dd t |e\rangle \langle e |$ and $\wp(O_T = 1 | \theta_T) = {\rm Tr}[ {\hat c}_{\rm o}^{\dagger} {\hat c}_{\rm o} \hat S(\theta_T) ] \dd t = \tfrac{\gamma}{2}(1+\cos\theta_T)\dd t$. This leads to the final condition for the conjugate variable, \begin{align}\label{eq-finalbc2}
p_T = \, - \frac{\sin\theta_T}{1+\cos\theta_T}, \end{align} where we have approximated $p_T = \lim_{\delta t \rightarrow \dd t} p_{T-\delta t}$ for the time-continuum limit. Solving Eq.~\eqref{eq-diffeqcont} with the final condition Eq.~\eqref{eq-finalbc2}, we obtain the optimal $\both{u} = \{ u_t : t \in [0, T)\}^{\top}$ and its state path $\{ \theta_t : t \in [0, T]\}^{\top}$, where the latter is exactly the qubit state estimator $\rho_{\textbf{Q}_5}$.
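A minimal numerical sketch of this two-point boundary-value problem (initial $\theta_0 = \pi$, final condition on $p_T$) is the shooting method: integrate the Hamilton ODEs forward from a trial $p_0$ and evaluate the residual of the final condition. The parameter values and the scan range of $p_0$ below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.integrate import solve_ivp

Gam, gam, Om, T = 1.0, 1.0, 1.0, 0.5   # illustrative parameters, not the paper's

def rhs(t, y):
    th, p = y
    u = -np.sqrt(Gam) * (p + p * np.cos(th) + np.sin(th))      # optimal record
    dth = -Om + 0.5 * (Gam + gam) * np.sin(th) - np.sqrt(Gam) * (np.cos(th) + 1.0) * u
    dp = (-0.5 * (Gam + gam) * p * np.cos(th) - np.sqrt(Gam) * p * np.sin(th) * u
          + np.sqrt(Gam) * np.cos(th) * u - 0.5 * (Gam + gam) * np.sin(th))
    return [dth, dp]

def mismatch(p0):
    """Shooting residual: integrate from theta_0 = pi with trial p_0 and
    compare p_T against the final condition p_T = -sin(th_T)/(1+cos(th_T))."""
    sol = solve_ivp(rhs, (0.0, T), [np.pi, p0], rtol=1e-9, atol=1e-12)
    thT, pT = sol.y[:, -1]
    return pT + np.sin(thT) / (1.0 + np.cos(thT))

residuals = [mismatch(p0) for p0 in np.linspace(-1.0, 1.0, 5)]
```

A root of `mismatch` in $p_0$ (found, e.g., by bisection on a sign change) yields the optimal path; this mirrors the role of the boundary conditions in Eqs.~\eqref{eq-diffeq2}, though the paper's actual computations use the Lewalle \emph{et al.} codes.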
For the estimator of $\textbf{Q}_4$, the calculation is more involved, since we need to solve the ODEs for every intermediate time $t=\tau$ during the time period of interest $\tau \in (0, T]$. Following Eqs.~\eqref{eq-diffeq1}, we numerically solve the ODEs, Eqs.~\eqref{eq-diffeqcont}, for discrete values of $\tau \in \{ \delta t, 2\delta t, ..., T\}$, with the final conditions, \begin{align}
p_\tau = & \,\,\frac{\partial}{\partial \theta_\tau} \ln {\rm Tr} [ {\hat E}_{\futps{O}}\, \hat S(\theta_\tau) ] \nonumber \\
= &\, \frac{ - \zeta \sin\theta_\tau + \beta \cos\theta_\tau}{\alpha + \zeta \cos\theta_\tau + \beta \sin\theta_\tau}, \end{align} for any time $\tau$. Here we have defined the retrofiltered effect matrix's elements as \begin{align} {\hat E}_{\futps{O}}= \alpha\, \hat 1 + \beta \, \hat\sigma_y + \zeta \, \hat\sigma_z, \end{align} where all elements are functions of $\tau$ (see Appendix~\ref{sec-app-retroeff}). In contrast to the previous $\rho_{\textbf{Q}_5}$, the state estimator $\rho_{\textbf{Q}_4}$ is a series of $\theta_\tau$, each of which is the `final' qubit state obtained via the solution of the ODEs, Eqs.~\eqref{eq-diffeqcont}, for each value of $\tau \in \{ \delta t, 2\delta t, ..., T\}$. As noted before, the qubit state estimator $\rho_{\textbf{Q}_4}$ at the final time $\tau = T$ is exactly the same as $\rho_{\textbf{Q}_5}$.
We solve for $\rho_{\textbf{Q}_4}$ and $\rho_{\textbf{Q}_5}$ using the numerical method and Python codes developed by Lewalle \emph{et~al.}~\cite{LewCha17,Lewalle2018chaos}. Since the ODEs are non-linear, there can be multiple solutions, which serve as multiple optimal paths (in a similar way that a PDF can have multiple peaks). We therefore solve for all candidate most probable solutions and pick the one that maximizes the joint probability density of the path, \textit{e.g.}, calculated from Eqs.~\eqref{eq-mlp1-prob} or \eqref{eq-probmlp2}. The solutions of $\rho_{\textbf{Q}_4}$ and $\rho_{\textbf{Q}_5}$ are shown as solid orange and dotted brown curves, respectively, in Figure~\ref{fig-trajs}. They are similar in shape, have unit purity, and share the same final state $\rho_{\textbf{Q}_4}(T) = \rho_{\textbf{Q}_5}(T)$ as expected, but they are not identical. In Figure~\ref{fig-blochprob}, only $\rho_{\textbf{Q}_5}$ is shown as the brown dots. The corresponding optimal record for $\rho_{\textbf{Q}_5}$ will be discussed more in Section~\ref{sec-eightestqubit}.
\subsection{State estimators using local optimal records} For the last three estimators of $\textbf{Q}_6$--$\textbf{Q}_8$, their cost functions are defined with the time-local unknown records. We have assumed that the unknown record was from a continuous diffusive measurement ($y$-homodyne measurement for the qubit example). Therefore, the measurement can be thought of as a series of infinitely weak measurements, each with an infinitesimal duration $\dd t$. As mentioned in Section~\ref{sec-sevenQSEdiff}, the maximum value of the PDF in Eq.~\eqref{eq-probut} is the same as its mean value given by the weak value in Eq.~\eqref{eq-wv-gen2}. Thus we can calculate the state estimators $\rho_{\textbf{Q}_6} = \rho_{\textbf{Q}_7}$ from the state update Eqs.~\eqref{eq-updatetheta} using the weak value record in place of $u_{t}$. For the last estimator $\varrho_{\textbf{Q}_8}$, we use Eq.~\eqref{eq-wvsO} to compute the SWV state. It is important to note that since, in our case, the Lindblad operator for the unknown measurement record ${\hat c}_{\rm u}$ is not Hermitian, the SWV state is not associated with the weak value defined in Eq.~\eqref{eq-wv-gen}, \textit{i.e.}, $\langle U_\tau \rangle_{\bothps{O}} \ne {\rm Tr} [ ({\hat c}_{\rm u} + {\hat c}^\dagger_{\rm u}) \varrho_{\textbf{Q}_8} ] $, given the definition of the mean in Eq.~\eqref{eq-wv-gen2}. This would be true if ${\hat c}_{\rm u}$ were Hermitian, when Eqs.~\eqref{eq-wv-gen} and \eqref{eq-wv} coincide.
We show the numerical results for the last three estimators in Figure~\ref{fig-trajs} as dashed green (for $\rho_{\textbf{Q}_6} = \rho_{\textbf{Q}_7}$) and solid dark green (for the SWV state $\varrho_{\textbf{Q}_8}$) curves. As expected, we see in Figure~\ref{fig-trajs}(d) that the former estimators are pure quantum states with unit Bloch radius, while the SWV `state' is not even constrained within the valid quantum state space as it can have a Bloch radius larger than one.
\begin{figure}\label{fig-cost1}
\end{figure}
\subsection{Expected costs for qubit estimators}\label{sec-eightestqubit}
From the eight estimators presented in Figure~\ref{fig-trajs}, we can show that each optimizes its corresponding expected cost function relative to all the other estimators. We do this by evaluating, where possible, the expected costs (or equivalent versions thereof) discussed in Section~\ref{sec-expcost} for the eight qubit estimators. We first numerically calculate the local-state expected costs that can be applied to all eight estimators. These expected costs are ${\cal C}_1$, ${\cal C}_2$, and ${\cal C}_8$, and the results are shown in Figure~\ref{fig-cost1}(a), (b), and (c), respectively. We see that the estimators $\rho_{\textbf{Q}_1}$, $\rho_{\textbf{Q}_2}$, and $\varrho_{\textbf{Q}_8}$ give the minimum values for the expected costs ${\cal C}_1$, ${\cal C}_2$, and ${\cal C}_8$, respectively.
For ${\cal C}_3$, the expected cost (negative equality with the true state) can only be applied to pure-state estimators. This is because we assume the true state is pure and thus the likelihood for a mixed state to be the true state is zero. The results are shown in Figure~\ref{fig-cost2}(a), where the expected cost ${\cal C}_3$ is evaluated for the pure-state estimators: $\rho_{\textbf{Q}_2}$\,--$\rho_{\textbf{Q}_7}$.
The most-likely state $\rho_{\textbf{Q}_3}$ gives the lowest value for the negative probability density at all times. As we have discussed in Section~\ref{sec-expcost}, the expected cost ${\cal C}_4$ is excluded from our analysis because it is extremely difficult to calculate.
\begin{figure}\label{fig-cost2}
\end{figure}
For the expected costs defined in the unknown record space, ${\cal C}_5$--${\cal C}_7$, the relevant estimators have to also be pure states, because we need to solve for corresponding unknown records that reproduce them as possible evolutions of the pure true state. The calculation for the unknown records can be done in different ways, but it is not guaranteed that any pure-state evolution will have a physical corresponding unknown record. For $\rho_{\textbf{Q}_2}$, $\rho_{\textbf{Q}_3}$, and $\rho_{\textbf{Q}_4}$, their records can be calculated by reversing the dynamics in Eq.~\eqref{eq-thetaeq}. For $\rho_{\textbf{Q}_5}$ and $\rho_{\textbf{Q}_6}$ (or $\rho_{\textbf{Q}_7}$), their associated unknown records can be determined in the process of calculating the state estimators in Eqs.~\eqref{eq-diffeq2} and \eqref{eq-wv-gen2}. However, for our driven-qubit example, the state estimator $\rho_{\textbf{Q}_3}$ exhibits a discontinuity (as shown in Figure~\ref{fig-trajs}) at $t = 2.09 T_{\gamma}$, which leads to an unphysical infinite measurement result at that particular time. Therefore, we only show the estimated unknown records for the five pure-state estimators in Figure~\ref{fig-cost2}(b), excluding the estimator $\rho_{\textbf{Q}_3}$.
Using the estimated unknown records, we can calculate the expected costs ${\cal C}_5$--${\cal C}_7$. The expected cost, ${\cal C}_5$, is the log ratio of the probability densities of the entire unknown record, as shown in Eq.~\eqref{eq-cq5}. This quantity is non-local in time and therefore is a single number associated with each of the state estimators: $1.8\times 10^{16}$ for $\rho_{\textbf{Q}_2}$, $8.7\times 10^{10}$ for $\rho_{\textbf{Q}_4}$, $0$ for $\rho_{\textbf{Q}_5}$, and $3.3\times 10^{1}$ for $\rho_{\textbf{Q}_6}$ and $\rho_{\textbf{Q}_7}$. The expected costs ${\cal C}_6$ and ${\cal C}_7$ in Eqs.~\eqref{eq-cq6} and \eqref{eq-cq7} are exactly the same and are presented in Figure~\ref{fig-cost2}(b), showing that the estimators $\rho_{\textbf{Q}_6}=\rho_{\textbf{Q}_7}$ give the minimum value.
The expected cost functions we have presented so far are dependent on the duration of time $T$ between two jumps.
However, in some cases, we can also calculate an average of the expected cost over all possible jump times, $T$. This can be done by defining a jump-time-average expected cost as \begin{align}\label{eq-avetimecost} \bar{\cal C}_\kappa (\bullet) \equiv & \int^\infty_0 \!\!\! {\rm d} T \, \wp_{\rm wait}(T) \, {T}^{-1} \!\! \int_0^T\!\!\! \dd t \, \, {\cal C}_\kappa (\bullet_t), \end{align} where the first time-integral is weighted by the waiting-time PDF, normalized as $\int^\infty_0 \! {\rm d} T \, \wp_{\rm wait}(T) = 1$. The second time-integral, $T^{-1}\int_0^T \!\dd t\, \bullet$, averages over the local time, $t$, and is applied to the expected costs that are $t$-dependent. The waiting-time PDF can be calculated from \begin{align}\label{eq-jumptimeavg} \wp_{\rm wait}(T) & = {\rm Tr}\left[ \hat{c}_{\rm o}^\dagger \hat{c}_{\rm o}\, \rho_{\pasts{O}}(T) \right] {\rm Tr}\left[ {{\tilde \rho}}_{\pasts{O}}(T)\right] \nonumber \\ & = {\rm Tr}\left[ \hat{c}_{\rm o}^\dagger \hat{c}_{\rm o}\, {\tilde \rho}_{\pasts{O}}(T) \right], \end{align} where the unnormalized filtered state is given by \begin{align} {\tilde \rho}_{\pasts{O}}(T) = {\cal M}_{O_{T-\delta t}} e^{\delta t{\cal L}_{\rm u}}\cdots \, {\cal M}_{O_{0}} e^{\delta t{\cal L}_{\rm u}}\rho_0. \end{align} The first line in Eq.~\eqref{eq-jumptimeavg} is the product of the probability of an observed jump occurring at time $T$ (given that the system's state is $\rho_{\text F}(T)$) and the probability of having no jumps before that (for the length of time $T$). The latter is given by the trace of the unnormalized filtered state, which tells us the probability of the observed no-jump record from $t=0$ to $t = T$.
We approximate the integrals in Eq.~\eqref{eq-avetimecost} by using numerical summations (details are in Appendix~\ref{sec-app-numer}). The results are shown in Table~\ref{tab-avecost}, where the jump-time-average expected costs are calculated for ${\textbf{Q}_1}$, ${\textbf{Q}_2}$, ${\textbf{Q}_3}$, and ${\textbf{Q}_8}$, applied to the relevant estimators. The minimal values of the average are shown in bold font. These results show that the optimal estimators minimize their expected costs, not just for the value $T=4 T_\gamma$. We note that we did not include ${\textbf{Q}_5}$ in our calculation for the jump-time-average expected cost, because solving the whole-record optimization becomes extremely resource-intensive for large values of $T$. The same reason explains why we did not apply the calculation to the estimators $\rho_{\textbf{Q}_4}$ and $\rho_{\textbf{Q}_5}$. For ${\textbf{Q}_6}$ and ${\textbf{Q}_7}$, there were problems in reversing the dynamics to obtain reasonable estimated unknown records, and the records could diverge in some cases. (Evidence of this divergence can be seen in the estimated records for $\rho_{\textbf{Q}_2}$ and $\rho_{\textbf{Q}_4}$ shown in Figure~\ref{fig-cost2}(b).)
{\renewcommand{\arraystretch}{1.5} \setlength{\tabcolsep}{0.2cm} \begin{table}[t] \centering
\begin{tabular}{|c|c|c|c|c|c|} \hline \backslashbox{Cost function}{Estimator} & $\textcolor{colq1}{\rho_{\textbf{Q}_1}}$ & $\textcolor{colq2}{\rho_{\textbf{Q}_2}}$ & $\textcolor{colq3}{\rho_{\textbf{Q}_3}}$ & $\textcolor{colq67}{\rho_{6,7}}$ & $\textcolor{colq8}{\varrho_{\textbf{Q}_8}}$ \\ \hline {\bf $\langle$TrSDf$\rho_{\pasts{ U}}\rangle$}: $\textcolor{colq1}{{{\cal C}_1}}$, Eq.~(\ref{eq-cq1}) & {\bf 0.31} & 0.46 & 0.48 & 0.62 & 0.34 \\ {\bf $\langle$nFw$\rho_{\pasts{ U}}\rangle$}: $\textcolor{colq2}{{{\cal C}_2}}$, Eq.~(\ref{eq-cq2}) & -0.69 & {\bf -0.77} & -0.76 & -0.69 & -0.70 \\ {\bf $\langle$nEw$\rho_{\pasts{ U}}\rangle$}: $\textcolor{colq3}{{{\cal C}_3}}$, Eq.~(\ref{eq-cq3}) & NA & -2.76 & {\bf -2.85} & -2.69 & NA \\ {\bf SWV}: $\textcolor{colq8}{{{\cal C}_8}}$, Eq.~(\ref{eq-cq8}) & -0.44 & -0.18 & -0.07 & 0.22 & {\bf -0.51} \\ \hline \end{tabular}
\caption{Numerical results for the jump-time-average expected costs ${\bar {\cal C}}_\kappa(\rho)$ for $\kappa = 1,2,3,8$ applied to relevant estimators: $\rho_{\textbf{Q}_1}$, $\rho_{\textbf{Q}_2}$, $\rho_{\textbf{Q}_3}$, $\rho_{\textbf{Q}_6}=\rho_{\textbf{Q}_7}$, and $\varrho_{\textbf{Q}_8}$. `NA' indicates `Not applicable'. The bold numbers indicate the minimum value in each row. Details of the numerical calculation are presented in Appendix~\ref{sec-app-numer}. } \label{tab-avecost} \end{table}}
\section{Discussion}\label{sec-discussion}
We have presented a cost-optimized quantum state estimation theory, unifying various existing formalisms that use past-future information for quantum systems. The three main existing theories are the smoothed quantum state (minimizing the trace square deviation cost from true states), the most-likely path (minimizing the negative equality with entire true unknown records), and the smoothed weak-value state (minimizing the square deviation of an expectation value from true weak measurement results). As shown in Figure~\ref{fig-diagramQ}, the theory outlines explicit connections among possible estimators arising from various choices of cost functions, where the latter can be defined either in the unknown state space or the unknown measurement record space.
In addition to clarifying the connection among existing formalisms, the theory suggested six new quantum state estimators ($\rho_{\textbf{Q}_2}$\,--$\rho_{\textbf{Q}_8}$) that could be used in estimating unknown quantum states in different contexts. It is noted that, for problems with unknown quantities, there is no single best guess for an unknown, but there is an optimal guess with respect to a chosen cost function. The theory was then applied to the coherently driven qubit coupled to bosonic baths, showing that all but two of the estimators are distinct, even though they all represent an optimal estimate of the unknown qubit's state. We do not claim to have compiled all possible cost functions in this work; however, we have built the framework that allows other estimators to be introduced.
One obvious generalization of our proposed formalism would be to problems where the true states are \emph{mixed}. Indeed, this is necessary for experimental tests or practical applications of the formalism, in which Bob is a real party with a real measurement record, not a hypothetical omniscient observer. That is because an actual experiment performed by Bob will not be able to garner all of the information unavailable to Alice. Some information will be irrevocably lost to the environment, causing the state jointly conditioned on Alice's and Bob's measurement records to be mixed.
For most cases, except $\textbf{Q}_2$, the forms of the estimators presented here still hold, after replacing the pure true states, $\rho_{\pasts{O}, \pasts{ U}} = \hat\psi_{\pasts{O}, \pasts{ U}}$, with mixed states. For $\textbf{Q}_2$, the fidelity simplification in Eq.~\eqref{eq-fidel} will need to be modified for mixed states, which will likely result in an optimal estimator different from Eq.~\eqref{eq-th-q-nf}.
The central concern of this paper was conditioning on the past-future information. However, the idea of different optimal estimates resulting from different cost functions can also be applied to conditioning only on past observations, or even on no observations at all. Some estimators can be easily surmised. For example, for $\textbf{Q}_1$, the quantum smoothed state would be replaced by the quantum filtered state (for the past-only conditioning), or by a solution of the Lindblad master equation (for the no-observation conditioning). However, for some other estimators, the generalization is quite involved, and this is a topic for future investigation.
\section{Acknowledgments} We would like to thank Philippe Lewalle and John Steinmetz for providing Python software used in numerically solving the most-likely paths. We acknowledge the traditional owners of the land on which this work was undertaken at Griffith University, the Yuggera people. This research is supported by the Australian Research Council Centre of Excellence Program CE170100012 and Mahidol University (Basic Research Fund: fiscal year 2021) Grant Number BRF1-A29/2564. A.C.~acknowledges the support of the Griffith University Postdoctoral Fellowship scheme and the Australia Awards Endeavour Scholarships and Fellowships.
\appendix
\setcounter{section}{0} \renewcommand{\thesection}{\Alph{section}}
\section{ Probability density function for time-local unknown records}\label{sec-app-pdf} Here we discuss some properties of the PDF of the time-local unknown record, $\wp_{\bothps{O}}(u_\tau)$, and show that its mean and mode are the same for a weak measurement with a Lindblad operator $\op{c} = \op{c}_{\rm u}$. We note that the derivation below also applies to the past-only conditional PDF, $\wp_{\pasts{O}}(u_\tau)$, by taking $\op E_{\futps{O}} = \hat I$.
From the definition of the PDF in Eq.~\eqref{eq-probut}, we obtain \begin{align}
\wp_{\bothps{O}}(u_\tau) = & \, \frac{\Tr{ {\op E}_{\futps{O}}\,{\hat M}_{u_\tau} \rho_{\pasts{O}}\, {\hat M}_{u_\tau}^\dagger}}{\int \!\!{\rm d} u_t \, \Tr{ {\op E}_{\futps{O}}\,{\hat M}_{u_t} \rho_{\pasts{O}}\, {\hat M}_{u_t}^\dagger}}, \end{align} where ${\hat M}_{u_{t}} = (\delta t/2\pi)^{1/4}\exp(-u_t^2\delta t/4) [ \hat{1} - \tfrac{1}{2}( {\hat c}^\dagger {\hat c} + {\hat c}^2) \delta t +{\hat c}\, u_t \delta t + {\cal O}(\delta t^2) ]$ (similar to Eq.~\eqref{eq-moput} in the main text, but without the Hamiltonian and only keeping to first order in $\delta t$) is the measurement operator for the unknown record $u_{t}$ at any time $t$. We can show that the PDF $\wp_{\bothps{O}}(u_\tau)$ can be approximated as a Gaussian function, correct up to first order in $\delta t$, \begin{align}\label{eq-app-probuboth} \wp_{\bothps{O}}(u_\tau) \approx & \left(\frac{\delta t}{2\pi}\right)^{1/2} \!\!\!\! \exp\left(-u_\tau^2 \delta t/2\right) \left[ 1 + u_\tau \frac{\Tr{\op E_{\futps{O}}\, \op c \, \rho_{\pasts{O}} + \rho_{\pasts{O}}\, \op c^\dagger\, \op E_{\futps{O}}}}{\Tr{\op E_{\futps{O}} \,\rho_{\pasts{O}}}} \delta t +{\cal O}(\delta t^2) \right] .
\end{align} This PDF manifestly has the same mean and mode, \textit{i.e.}, \begin{align}\label{eq-app-probupast} \langle u_\tau \rangle_{\bothps{O}} = \argmax_{u_\tau} \wp_{\bothps{O}} (u_\tau) =\frac{\Tr{\op E_{\futps{O}}\, \op c \, \rho_{\pasts{O}} + \rho_{\pasts{O}}\, \op c^\dagger\, \op E_{\futps{O}}}}{\Tr{\op E_{\futps{O}} \,\rho_{\pasts{O}}}}, \end{align} which is exactly the real part of the weak value in Eq.~\eqref{eq-wv-gen2}. The second moment of this distribution is dominated by $1/\delta t$, that is, $\langle u_\tau^2 \rangle_{\bothps{O}} \approx 1/\delta t$; therefore the distribution can be approximated as a Gaussian function with variance $1/\delta t$. Similarly, the mean and mode for the filtering PDF, $\wp_{\pasts{O}}(u_\tau)$, are then simply $\langle u_\tau \rangle_{\pasts{O}} = \argmax_{u_\tau} \wp_{\pasts{O}} (u_\tau) = {\rm Tr} [ ( \op c + \op c^\dagger) \rho_{\pasts{O}} ]$.
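The mean/mode expression above is a simple trace formula and is easy to evaluate numerically. The following Python sketch uses hypothetical $2\times 2$ inputs (illustrative values, not taken from the trajectory in the main text) and also checks that setting $\op E_{\futps{O}} = \hat I$ recovers the filtered mean ${\rm Tr}[(\op c + \op c^\dagger)\rho_{\pasts{O}}]$:

```python
import numpy as np

# Illustrative 2x2 inputs (hypothetical values, not taken from the trajectory):
rho_past = np.array([[0.8, 0.0], [0.0, 0.2]], dtype=complex)   # filtered state
E_fut = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)      # retrofiltered effect
c = np.array([[0.0, 0.0], [1.0, 0.0]], dtype=complex)          # lowering operator (Gamma = 1)

def real_weak_value(E, c_op, rho):
    # Mean (= mode) of the Gaussian PDF of the unknown record
    num = np.trace(E @ c_op @ rho + rho @ c_op.conj().T @ E)
    den = np.trace(E @ rho)
    return (num / den).real

u_smooth = real_weak_value(E_fut, c, rho_past)
# With E = identity the expression reduces to the filtered (past-only) mean,
# Tr[(c + c^dag) rho], which vanishes for this diagonal rho:
u_filter = real_weak_value(np.eye(2, dtype=complex), c, rho_past)
```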
\section{ Retrodictive effect matrix for the driven qubit example}\label{sec-app-retroeff} The POVM $\hat E_{\futps{O}}$ can be computed semi-analytically for the coherently-driven qubit example used in the main text. Here, we rewrite the POVM for the partially observed system, Eq.~\eqref{eq-povm} in the main text, \begin{align}\label{eq-povm-app} {\hat E}_{\futps{O}} = e^{\delta t{\cal L}_{\rm u}^\dagger} {\cal M}_{O_\tau}^\dagger \cdots e^{\delta t{\cal L}_{\rm u}^\dagger} {\cal M}_{O_{T-\delta t}}^\dagger \hat E_T, \end{align} which can be used to compute a PDF of the future record given a state $\rho_\tau$ at time $\tau$, \begin{align}
\wp(\,\futp{O}\,|\rho_\tau) = \Tr{ {\hat E}_{\futps{O}}\, \rho_\tau}. \end{align} In the main text, we are interested in the observed record $\bothp{O} = \{ O_0 , O_{\delta t}, ..., O_{T-\delta t} ,O_T \}^{\top} = \{ 0,0,..., 0,1\}^{\top}$ including the final jump at time $T$. Therefore, the final POVM is given by \begin{align}
{\hat E}_T = {\hat c}_{\rm o}^{\dagger} {\hat c}_{\rm o} \delta t = \gamma \delta t |e\rangle \langle e |, \end{align} and the POVM at other times can be calculated from a dynamical equation \begin{align} {\hat E}_{t-\delta t} = e^{\delta t{\cal L}_{\rm u}^\dagger} {\cal N}^\dagger_0 {\hat E}_t, \end{align} where we have replaced all ${\cal M}^\dagger_{O_t}$ for $t \in [\tau, T)$ in Eq.~\eqref{eq-povm-app} by the adjoint of the no-jump map ${\cal N}^\dagger_0$. We can take the continuum limit $\delta t \rightarrow \dd t$ and write the dynamical equation in the form, ${\hat E}_{t-\dd t} = {\hat E}_t - {\rm d}{\hat E}$, where ${\rm d}{\hat E}$ is given by \begin{align} - {\rm d}{\hat E} =& + i \dd t [\tfrac{\Omega}{2} \hat \sigma_x, {\hat E}_t] + \dd t {\cal D}^{\dagger}[{\hat c}_{\rm u}]{\hat E}_t - \dd t\, \frac{1}{2} \left( {\hat c}_{\rm o}^{\dagger} {\hat c}_{\rm o} {\hat E}_t + {\hat E}_t{\hat c}_{\rm o}^{\dagger} {\hat c}_{\rm o}\right). \end{align} The first two terms and the last term are from expanding the operations $e^{\delta t{\cal L}_{\rm u}^\dagger}$ and ${\cal N}^\dagger_0$ to first order in $\delta t$ (or $\dd t$), respectively.
The Hermitian POVM matrix takes the form \begin{align} {\hat E}_{\futps{O}}= \left( \begin{matrix} \alpha + \zeta & -i \beta \\ i \beta & \alpha - \zeta\end{matrix} \right), \end{align} where $\alpha$, $\beta$, and $\zeta$ are real functions of time (one can check that the real part of its off-diagonal elements has to be zero). From this we obtain a set of differential equations for the matrix elements, \begin{align} -\frac{{\rm d}}{\dd t}\left( \begin{matrix} \alpha \\ \beta \\ \zeta \end{matrix} \right) = -\frac{1}{2}\left( \begin{matrix} {\gamma} & 0 & \gamma+2\Gamma\\ 0& \gamma+\Gamma &-2\Omega \\ \gamma& 2\Omega & \gamma+2\Gamma\end{matrix} \right) \left( \begin{matrix} \alpha \\ \beta \\ \zeta \end{matrix} \right), \end{align}
which can be solved backwards in time (as indicated by the sign of the left hand side) to get $\hat E_{\futps{O}}$ at time $\tau$ from the final condition $\hat E_T = \gamma \delta t |e\rangle \langle e |$, \textit{i.e.}, $\alpha=\zeta = \gamma \delta t /2$, $\beta=0$.
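Since the system is linear with constant coefficients, the backward propagation can be done with a single matrix exponential. A minimal Python sketch, with illustrative parameter values (the `dt` below merely stands in for the infinitesimal $\delta t$ in the final condition):

```python
import numpy as np
from scipy.linalg import expm

Gam, gam, Om = 1.0, 1.0, 1.0          # illustrative parameters
dt = 1e-3                             # stands in for the infinitesimal delta t

# d/dt (alpha, beta, zeta)^T = M (alpha, beta, zeta)^T, read off from the
# linear system above (the overall sign cancels on both sides)
M = 0.5 * np.array([[gam, 0.0,       gam + 2.0 * Gam],
                    [0.0, gam + Gam, -2.0 * Om],
                    [gam, 2.0 * Om,  gam + 2.0 * Gam]])

vT = np.array([gam * dt / 2.0, 0.0, gam * dt / 2.0])  # final condition at t = T

def effect_coeffs(tau, T):
    """(alpha, beta, zeta) at time tau, propagated backwards from T."""
    return expm(-(T - tau) * M) @ vT
```

At $\tau = T$ this reproduces the final condition exactly, and earlier times are obtained by propagating backwards, matching the sign convention of the left-hand side above.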
\section{ Numerical calculation for time-average expected costs}\label{sec-app-numer}
The jump-time-average expected cost and the waiting-time PDF are given in Eqs.~\eqref{eq-avetimecost} and \eqref{eq-jumptimeavg}, respectively. However, the time integral with an infinite wait-time limit is impractical, given that one needs to calculate the state estimators for any wait time $T$ numerically. We therefore approximate the wait-time integral with a finite summation, such that an average of an arbitrary function $f(T)$ is given by \begin{align} \int_0^\infty \!\!\! {\rm d} T \, \wp_{\rm wait}(T) f(T) & = \int_0^1\!\!\! {\rm d} x \, f( G^{-1}(x)) \\ & \approx \Delta x \sum_{j = 1}^J f( G^{-1}(j \Delta x)), \end{align} where $G(\tau) \equiv \int_0^\tau \! {\rm d} T \, \wp_{\rm wait}(T)$ and $\Delta x = 1/J$. For simplicity, let us denote the set of jump times by $\{ T_j \}$, where each element is $T_j \equiv G^{-1}(j \Delta x)$ for $j = 1, 2,...,J$. We can then write the time-average expected cost Eq.~\eqref{eq-avetimecost} as \begin{align} \bar{\cal C}_\kappa(\bullet) \approx \frac{\sum_{j=1}^J \Delta t \sum_{k=0}^{K_j} \, {\cal C}_{\textbf{Q}_\kappa}(\bullet_k) }{\sum_{j=1}^J \sum_{k=0}^{K_j} \Delta t}, \end{align} where we have also discretized the second time integral in Eq.~\eqref{eq-avetimecost} with a step size $\Delta t$, and $K_j$ is the number of time steps given the jump time $T_j = K_j \Delta t$. The numerical results shown in Table~\ref{tab-avecost} are calculated with $\Delta x = 0.1$ and $\Delta t = 0.05$.
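The change of variables $x = G(T)$ can be sketched on a toy waiting-time distribution for which the inverse CDF is known in closed form. The exponential distribution below is purely illustrative (the paper's actual $\wp_{\rm wait}(T)$ requires numerical inversion of $G$); midpoints are used in the sum so that $x = 1$, where $G^{-1}$ diverges, is never evaluated:

```python
import numpy as np

# Illustrative waiting-time distribution: exponential with rate r, for which
# the inverse CDF G^{-1}(x) = -ln(1 - x)/r is known in closed form.
r = 2.0
def G_inv(x):
    return -np.log(1.0 - x) / r

def waiting_time_average(f, J=100000):
    """Approximate int_0^infty dT wp_wait(T) f(T) by a Riemann sum over x = G(T)."""
    dx = 1.0 / J
    xs = (np.arange(J) + 0.5) * dx   # midpoints avoid the x = 1 endpoint
    return dx * np.sum(f(G_inv(xs)))

mean_T = waiting_time_average(lambda T: T)   # exact answer is 1/r = 0.5
```

The equal-probability spacing in $x$ automatically concentrates sample jump times $T_j$ where $\wp_{\rm wait}(T)$ has most of its weight, which is the point of the substitution.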
Moreover, in order to show that our technique for solving the filtered and smoothed states using the Fokker-Planck equation in Section~\ref{sec-fpe} is correct, we show in Figure~\ref{fig-compare} a comparison of these states with solutions obtained by simulating all possible true states and averaging them as in Eqs.~\eqref{eq-filstate} and \eqref{eq-qssstate}. We use the numerical techniques and C++ programming codes used in Refs.~\cite{Ivonne2015,chantasri2019,GueWis20}.
\begin{figure}
\caption{Comparison of the filtered and smoothed states calculated using two different techniques. The dashed curves are from the Fokker-Planck techniques presented in Section~\ref{sec-fpe} and the solid colored curves are from the trajectory simulations in Refs.~\cite{Ivonne2015,chantasri2019}.}
\label{fig-compare}
\end{figure}
\section{ Glossary of acronyms and abbreviations} \label{Glossary}
\begin{table}[h] \centering
\begin{tabular}{|c|c|} \hline Abbreviation & Meaning\\ \hline SD & Square Deviation\\[1mm] nE & negative Equality\\[1mm] BME & Bayesian Mean Estimator\\[1mm] MLE & Maximum Likelihood Estimator\\[1mm] PDF & Probability Density Function\\[1mm] $\Sigma$SD & sum Square Deviation\\[1mm] TrSD & Trace Square Deviation\\[1mm] TSVF & Two-State Vector Formalism\\[1mm] POVM & Positive Operator-Valued Measure\\[1mm] SWV & Smoothed Weak-Value\\[1mm] QED & Quantum Electrodynamics\\[1mm] CDJ & Chantasri, Dressel and Jordan\\[1mm] ODE & Ordinary Differential Equation\\[1mm] \hline \end{tabular} \caption{List of abbreviations (in order of appearance)} \end{table}
\begin{table}[h] \centering \setlength{\extrarowheight}{8pt}
\begin{tabular}{|p{6mm}|p{0.32\textwidth}|c|c|} \hline & (Expected) Cost Function & Acronym & Definition\\\hline
${\bf Q}_1$ & {\bf Tr}ace-{\bf S}quare {\bf D}eviation \newline {\bf f}rom the true state & {\bf $\langle$TrSDf$\rho_{\pasts{ U}}\rangle$} & $\left\langle {\rm Tr} \left [ \left(\rho - \rho_{\pasts{O}, \pasts{ U}}\right)^2 \right ] \right \rangle_{\pasts{ U} | \bothps{O}}$\\\hline
${\bf Q}_2$ & {\bf n}egative {\bf F}idelity \newline {\bf w}ith the true state & {\bf $\langle$nFw$\rho_{\pasts{ U}}\rangle$} & $\left\langle -F\!\left[\rho, \rho_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}$\\\hline
${\bf Q}_3$ & {\bf n}egative {\bf E}quality \newline {\bf w}ith the true state & {\bf $\langle$nEw$\rho_{\pasts{ U}}\rangle$} & $\left\langle - \delta_{\rm H}\!\left[ \rho - \rho_{\pasts{O}, \pasts{ U}} \right] \right\rangle_{\pasts{ U} | \bothps{O}}$\\\hline
${\bf Q}_4$ & {\bf n}egative {\bf E}quality \newline {\bf w}ith the past unknown record & {\bf $\langle$nEw$\past U\rangle$} & $\left\langle -\delta \left(\past{u} - \past{ U} \right) \right\rangle_{\pasts{ U} | \bothps{O}}$\\\hline
${\bf Q}_5$ & {\bf n}egative {\bf E}quality \newline {\bf w}ith the entire unknown record & {\bf $\langle$nEw$\both{ U}\rangle$} & $\left\langle - \delta \left(\both{u} - \both{ U} \right) \right\rangle_{\boths{ U} | \bothps{O}}$\\\hline
${\bf Q}_6$ & {\bf n}egative {\bf E}quality \newline {\bf w}ith the unknown record at $\tau$ & {\bf $\langle$nEw$ U_\tau \rangle$} & $\left\langle - \delta \left(u_\tau - U_\tau \right) \right\rangle_{ U_\tau | \bothps O}$\\\hline
${\bf Q}_7$ & {\bf S}quare {\bf D}eviation \newline {\bf f}rom the unknown record at $\tau$ & {\bf $\langle$SDf$\, U_\tau\rangle$} & $\left\langle \left(u_\tau - U_\tau \right)^2 \right\rangle_{ U_\tau | \bothps{O}}$\\\hline
${\bf Q}_8$ & Sum Square Deviation \newline from the weak measurement result ({\bf S}moothed {\bf W}eak-{\bf V}alue state) & {\bf SWV}& $\displaystyle{\sum_{j=1}^{d^2-1}\left\langle \left[ {\rm Tr}(\varrho \op{\Lambda}_j) - \Lambda^w_j \right]^2\right\rangle_{\Lambda^w_j | \bothps{O}}}$\\ \hline \end{tabular} \caption{List of cost functions and definitions of expected cost functions} \end{table}
\begin{table} \centering
\begin{tabular}{|c|c|l|} \hline Notation & Definition & Description\\ \hline ${\bf x}$ & & The configuration, a vector of $d$ classical variables. \\[2mm] $\mathbb{X}$ & & The set of possible configurations ${\bf x}$ for the classical system. \\[2mm] $\wp({\bf x})$ & & \makecell{A probability density function of the configuration ${\bf x}$. \\ Also referred to as the classical state.} \\[4mm]
$\wp_{\rm D}({\bf x})$ & $\wp({\bf x}|{\rm D})$ & \makecell{The probability density function of the configuration ${\bf x}$ \\ conditioned on $\rm D$. Also referred to as the conditioned \\ (classical) state.} \\[5mm]
$\langle\bullet\rangle_{A|{\rm D}}$ & $\displaystyle{\int}{\rm d}\mu(A) \wp_{\rm D}(A) \bullet$ & \makecell{The expectation value, averaging over $A$ and conditioned on $\rm D$, \\ where the integration measure ${\rm d}\mu(A)$ will vary depending on $A$.} \\[4mm] $\est A_{\rm C}$ & & \makecell{An optimal estimator, given $D$, for cost function \\ with abbreviation ${\rm C}$.} \\[4mm] $\mathbb{P}$ & $\{\wp({\bf x}): {\bf x} \in \mathbb{X} \}$ & \makecell{The set of all possible normalized probability density functions \\ of the configuration.} \\[4mm] $\rho$ & & A density operator/quantum state. \\[2mm] $\tilde{\rho}$ & & An unnormalized quantum state. \\[2mm] $\rho_{\rm D}$ & & A quantum state conditioned on $\rm D$. \\[2mm] $\hat{\psi}$ & $\ket{\psi}\bra{\psi}$ & A pure quantum state. \\[2mm] $\varrho$ & & An indefinite Hermitian matrix with unit trace. \\[2mm] $\hat{E}$ & & A POVM element/quantum effect. \\[2mm] $\hat{E}_{\rm D}$ & & A quantum effect conditioned on D. \\[2mm] $\mathbb{H}$ & & The Hilbert space of the quantum system. \\[2mm] $\mathfrak{G}({\mathbb{H}})$ & & The set of all valid density operators in the Hilbert space. \\[2mm] $\mathfrak{G}'({\mathbb{H}})$ & & The set of unit-trace, Hermitian operators in the Hilbert space. \\[2mm] ${\rm d}\mu_{\rm H}(\hat{\psi})$ & & The Haar measure over pure states. \\[2mm] $\past{R}$ & $\{R_t: t\in[0, \tau)\}^{\top}$ & The measurement results $R_t$ {\em prior} to the estimation time $\tau$. \\[2mm] $\fut{R}$ & $\{R_t: t\in[\tau, T)\}^{\top}$ & \makecell{The measurement results $R_t$ including at and {\em posterior} \\ to the estimation time $\tau$, {\em excluding} the result at the final time $T$. } \\[4.5mm] $\futp{R}$ & $\{R_t: t\in[\tau, T]\}^{\top}$ & \makecell{The measurement results $R_t$ including at and {\em posterior} \\ to the estimation time $\tau$, {\em including} the result at the final time $T$. 
}\\[4.5mm] $\both{R}$ & $\{R_t: t\in[0, T)\}^{\top}$ & \makecell{The set of all measurement results $R_t$ both {\em prior} and {\em posterior} \\ to the estimation time $\tau$, {\em excluding} the result at the final time $T$. }\\[4.5mm] $\bothp{R}$ & $\{R_t: t\in[0, T]\}^{\top}$ & \makecell{The set of all measurement results $R_t$ both {\em prior} and {\em posterior} \\ to the estimation time $\tau$, {\em including} the result at the final time $T$. } \\[4.5mm]
$ _{\hat f}\langle X^w\rangle_{\hat i}$ & $\displaystyle{\lim_{\epsilon \to 0}\int_{-\infty} ^{\infty}{\rm d} x \,x\,\wp(x|\hat{i},\hat{f}) }$ & \makecell{The weak-value associated with pre- ($\hat{i}$) and post-selecting ($\hat{f}$) \\ on the outcome of a weak measurement of $\hat{X}$; \\ $\epsilon$ is the measurement strength.} \\[5mm] \hline \end{tabular} \caption{List of notations and descriptions} \end{table}
\end{document} |
\begin{document}
\title{On the monotone complexity of the shift operator} \date{} \author{Igor S. Sergeev\footnote{e-mail: [email protected]}} \maketitle
\begin{abstract} We show that the complexity of minimal monotone circuits implementing a monotone version of the permutation operator on $n$ boolean vectors of length $q$ is $\Theta(qn\log n)$. In particular, we obtain an alternative way to prove the known complexity bound $\Theta(n\log n)$ for the monotone shift operator on $n$ boolean inputs. \end{abstract}
{\bf Introduction.} The recent paper~\cite{net19e} shows that a plausible hypothesis from network coding theory implies a lower bound $\Omega(n\log n)$ on the complexity of the $n$-input boolean shift operator when it is implemented by circuits over a full basis. As a corollary, the same bound holds for the multiplication of $n$-bit numbers. (For definitions of boolean circuits and complexity see, e.g.,~\cite{weg87e}.) Curiously, at nearly the same time an upper bound $O(n\log n)$ for multiplication was proved in~\cite{hv19e}. For the shift operator itself, the upper bound $O(n\log n)$ is trivial.
The shift can be implemented by monotone circuits. Lamagna~\cite{la75e,la79e} and, independently, Pippenger and Valiant~\cite{pv76e} proved that its complexity is bounded below by $\Omega(n\log n)$ with respect to circuits over the basis $\{\vee,\,\wedge\}$. Essentially the same bound was established by Chashkin~\cite{ch06e} for the closely related problem of implementing the real-valued shift operator by circuits over the basis of 2-multiplexors and binary boolean functions. We show that the argument from~\cite{ch06e} works in the boolean setting as well, thus obtaining yet another proof of the known result. On the other hand, an upper bound $O(n\log n)$ is easy to obtain once a suitable encoding of the shift value is chosen.
A version of the shift operator may be seen as a partially defined order-$n$ boolean convolution operator. It is known that the complexity of the convolution is $n^{2-o(1)}$~\cite{gs11e}, while the complexity of the corresponding shift operator is $\Theta(n\log n)$.
A more general form of shift is permutation. By analogy, one can introduce a monotone permutation operator. If a special encoding on the set of permutations is chosen, then the permutation operator on $n$ boolean inputs can be implemented with complexity $O(n\log n)$ employing the optimal sorting network from~\cite{aks83e}. If size-$n$ boolean vectors are given as inputs, then there exists a version of the permutation operator, which is the restriction of the $n\times n$ boolean matrix multiplication operator. The boolean matrix multiplication complexity is known to be $\Theta(n^3)$~\cite{me74e,pa75e}. It can be compared with the complexity $\Theta(n^2\log n)$ of the corresponding permutation operator. (The lower bound follows from the bound on the complexity of the shift operator.)
{\bf Preliminaries.} Further, $L(F)$ denotes the complexity of implementing the operator $F$ by circuits over the basis $\{\vee,\,\wedge\}$.
Let ${\Bbb B}=\{0,\,1\}$ and $A = \{ \alpha_0, \ldots, \alpha_{n-1} \} \subset {\Bbb B}^m$ be an antichain of cardinality $n$. By $X=(x_0,\ldots,x_{n-1})$, $x_i = (x_{i,0}, \ldots, x_{i,q-1})^T$, denote the $(q,n)$-matrix of boolean variables. Let $Y=(y_0,\ldots,y_{m-1})$ denote the vector of boolean variables encoding elements of the antichain $A$. By $v \gg k$ we denote the vector obtained from $v$ via a cyclic shift by $k$ positions to the right.
The monotone cyclic shift $(nq+m,nq)$-operator $S_{q,A}(X,Y)=(s_0,\ldots,s_{n-1})$ is a partially defined operator taking the value $X \gg k$ for $Y=\alpha_k$, where $k=0,\ldots,n-1$.
Consider a few examples of encodings of the shift value. The vector $(v,\overline{v})$, where $\overline{\,\cdot\,}$ is the componentwise negation, is called the {\it doubling} of the vector $v$. Typically, the shift value $k$ is encoded by its binary representation $[k]_2$. For the monotone version, one can use the doubling of $[k]_2$. In this case, $m=2(\lfloor \log_2 n \rfloor+1)$. The described encoding corresponds to the antichain $A_0=\left\{ \left. \left( [k]_2,\,\overline{[k]_2} \right)
\right| 0 \le k < n \right\}$.
Another natural choice for $A$ is the set $A_1$ of all weight-1 vectors in ${\Bbb B}^n$. In this case, $m=n$. Let $q=1$. Define $$c_i(X,Y) = \bigvee_{j + k \,=\, i \bmod n} x_j y_k.$$ The operator $C(X,Y)=(c_0,\ldots,c_{n-1})$ is called a cyclic {\it boolean convolution} of the vectors $X$ and $Y$.
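As a concrete illustration (a sketch of ours, not part of the paper's formal development), the cyclic boolean convolution can be evaluated directly from its definition; note that on a weight-1 vector $Y\in A_1$ it reduces to the cyclic shift $X \gg k$:

```python
# Direct evaluation of the cyclic boolean convolution C(X, Y):
# c_i = OR over all j, k with j + k ≡ i (mod n) of x_j AND y_k.
def cyclic_boolean_convolution(x, y):
    n = len(x)
    assert len(y) == n
    c = [0] * n
    for j in range(n):
        for k in range(n):
            c[(j + k) % n] |= x[j] & y[k]
    return c

# With y a weight-1 vector (the 1 in position k, i.e. y in A_1),
# the result is exactly x shifted cyclically by k positions.
x = [1, 0, 1, 0, 0]
y = [0, 0, 1, 0, 0]  # encodes the shift value k = 2
print(cyclic_boolean_convolution(x, y))  # → [0, 0, 1, 0, 1]
```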
By the definition of the shift operator, $S_{1,A_1}(X,Y)$ coincides with $C(X,Y)$ on inputs from ${\Bbb B}^n \times A_1$. It can be checked that \begin{equation*} S_{1,A_1}(X,Y) = C(X,Y) \vee x_0\cdot \ldots \cdot x_{n-1} \cdot g \vee r(X,Y), \end{equation*}
where $g$ is an undefined boolean vector, and $r(X,Y)=0$ for $|Y| \le 1$ (here $|v|$ denotes the weight of the vector $v$). The complexity of the convolution is known to be almost quadratic: $L(C) = \Omega(n^2/\log^6n)$~\cite{gs11e}. Presumably, the trivial upper bound $L(C)=O(n^2)$ is tight. At the same time, $L(S_{1,A_1}) = O(n\log n)$. We show below that in fact $L(S_{1,A}) = \Omega(n\log n)$ for any $A$.
Now let $\Pi = \{ \pi_0, \ldots, \pi_{n!-1} \} \subset {\Bbb B}^m$ be an antichain of cardinality $n!$. We can assign to its elements different permutations $\pi$ on the set $\{0,\ldots,n-1\}$. Denote $\pi(X) = \left(x_{\pi(0)},\ldots,x_{\pi(n-1)}\right)$. The monotone permutation operator $P_{q,\Pi}(X,Y)$ is defined on inputs $Y \in \Pi$ as $P_{q,\Pi}(X,Y) = \pi(X)$, where the permutation $\pi$ corresponds to the value of $Y$. Since a cyclic shift is a special case of permutation, any permutation operator can be viewed as a shift operator defined on a larger domain.
Trivially, any permutation $\pi$ may be represented by the vector of numbers $([\pi(0)]_2, \ldots, [\pi(n-1)]_2)$. Let $\Pi_0$ denote the corresponding coding set (it constitutes an antichain).
Alternatively, permutations may be specified as square boolean matrices in which every row and column has weight 1. Denote the set of such matrices by $\Pi_1 \subset {\Bbb B}^{n \times n}$. The corresponding permutation operator performs the multiplication of the permutation matrix $Y=\{y_{j,k}\}$ by the matrix of variables~$X$. Define $$z_{i,k}(X,Y) = \bigvee_{j=0}^{n-1} x_{i,j} \, y_{j,k}.$$ Then $Z(X,Y) = \{ z_{i,k} \}: {\Bbb B}^{q \times n} \times {\Bbb B}^{n \times n} \to {\Bbb B}^{q \times n}$ is the operator of the boolean product of the matrices $X$ and $Y$. By definition, the operators $P_{q,\Pi_1}$ and $Z$ take the same values on inputs from ${\Bbb B}^{q \times n} \times \Pi_1$. It is known that $L(Z) = qn(2n-1)$~\cite{pa75e} (see also~\cite{weg87e}), which means that the naive method of multiplying boolean matrices is optimal. On the other hand, $L(P_{q,\Pi_1}) = O(qn\log n+n^2)$ (see below). Moreover, we manage to show that $L(P_{q,\Pi}) = \Omega(qn\log n)$ for any $\Pi$, and this bound is achievable.
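A minimal sketch (ours; the function name is illustrative) of the boolean matrix product $Z(X,Y)$: when $Y$ is a permutation matrix from $\Pi_1$, column $k$ of the output is the column of $X$ selected by the position of the 1 in column $k$ of $Y$:

```python
# Boolean matrix product z_{i,k} = OR_j (x_{i,j} AND y_{j,k}),
# for X of size q x n and Y of size n x n.
def boolean_matmul(X, Y):
    q, n = len(X), len(Y)
    return [[int(any(X[i][j] & Y[j][k] for j in range(n)))
             for k in range(n)] for i in range(q)]

# Y below is a permutation matrix: y_{j,k} = 1 iff j = (k + 1) mod 3,
# so output column k equals input column (k + 1) mod 3.
X = [[1, 0, 0],
     [0, 1, 1]]
Y = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]
print(boolean_matmul(X, Y))  # → [[0, 0, 1], [1, 1, 0]]
```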
{\bf Upper complexity bounds.} For $v=(v_0,\ldots,v_{m-1}) \in {\Bbb B}^m$ let $Y^v = \bigwedge_{v_i=1} y_i$ denote the monomial of variables $y_i$ corresponding to the vector $v$. Let $L(A)$ stand for the complexity of computation of the set of monomials $\{ Y^{\alpha} \mid \alpha \in A \}$.
\begin{theorem} $L(S_{q,A}) \le L(A) + O(qn\log n)$. \end{theorem}
\noindent {\it Proof. } The standard circuit for the shift operator consists of $\log_2 n$ layers, each containing $n$ multiplexors. It is built according to the binary representation of the shift value~$k$: the first layer shifts the input by either 0 or 1 positions, depending on the least significant bit of $k$; the second layer shifts by 0 or 2 positions, etc.
The monotone circuit employs indicators $Y^{i,\beta} = \bigvee_{\lfloor k/2^i\rfloor \equiv \beta \bmod 2} Y^{\alpha_k}$ of the equality of the bits of the encoded shift value to zero or one. Instead of multiplexors, similar monotone subcircuits are used that compute operators of the form $Y^{i,1} a \vee Y^{i,0} b$.
It remains to note that all boolean sums $Y^{i,\beta}$ can be computed with complexity $O(n)$. \qed
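The layered construction in the proof is the classical barrel shifter. A software model of it (our sketch; it assumes $0\le k<n$) conditionally shifts by $2^i$ at layer $i$:

```python
# Layer i of the barrel shifter shifts the vector cyclically by 2^i
# positions iff bit i of the shift value k is 1; after all layers the
# total shift is k (assumed to satisfy 0 <= k < n).
def barrel_shift(x, k):
    n = len(x)
    i = 0
    while (1 << i) < n:
        if (k >> i) & 1:            # bit i of the shift value
            s = 1 << i
            x = x[-s:] + x[:-s]     # cyclic shift right by 2^i
        i += 1
    return x

print(barrel_shift([1, 2, 3, 4, 5], 3))  # → [3, 4, 5, 1, 2]
```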
In particular, since $L(A_0)=O(n)$ and $L(A_1)=0$, we obtain $L(S_{1,A_0}), L(S_{1,A_1}) \in O(n\log n)$.
To derive the upper bounds on the complexity of the permutation operator, we use a circuit $\Sigma$ sorting $n$ elements with complexity $O(n\log n)$ provided by~\cite{aks83e}. Such a circuit consists of comparator gates that order a pair of inputs.
\begin{theorem}
$ $
$(i)$ There exists an antichain $\Pi$ such that $L(P_{q,\Pi}) = O(qn\log n)$.
$(ii)$ $L(P_{q,\Pi_1}) = O(qn\log n+n^2)$. \end{theorem}
\noindent {\it Proof. } A set $\Pi$ can be specified following the circuit $\Sigma$. Assign to any permutation $\pi$ a linear order $x_{\pi(0)} > x_{\pi(1)} > \ldots > x_{\pi(n-1)}$ on the set of inputs of $\Sigma$ (in general, we do not consider these inputs boolean). Let $\Sigma$ receive inputs ordered in correspondence to a given permutation $\pi$. Assign to each comparator $e$ a boolean parameter $y_e$ whose value is determined by the result of the comparison. Let the doubling of the vector of parameters $y_e$, $e \in \Sigma$, encode a permutation $\pi$.
Now, we transform the circuit $\Sigma$ to a monotone circuit for $P_{q,\Pi}(X,Y)$, replacing any comparator $e$ receiving vector inputs $a, b$ with a subcircuit that evaluates vectors $ay_e \vee b\overline{y_e}$ and $a\overline{y_e} \vee by_e$.
Let us prove $(ii)$. First, recode $Y$ from $\Pi_1$ to $\Pi_0$. To do this, one simply needs to compute the positions $y'_0,\ldots,y'_{n-1}$ of the 1s in the columns of the matrix $Y$. The position of the 1 in a weight-1 column may be calculated by a trivial circuit of linear complexity. Therefore, the complexity of the recoding is $O(n^2)$.
Next, arrange the inputs $x_i$ in accordance with the ordering of the numbers $y'_i$ using the circuit $\Sigma$. At each node of the obtained circuit, two inputs $y'_i$ are compared and, depending on the result of the comparison, the order of the vectors $x_i$ accompanied by the numbers $y'_i$ is determined. The complexity of a comparison is linear, so the complexity of the subcircuit at each node is $O(q+\log n)$. \qed
{\bf Lower complexity bounds.} The proof of the following theorem closely follows the proof of the main result in~\cite{ch06e}.
\begin{theorem} For any choice of antichain $A$ of cardinality $n$ the following inequality holds: $L(S_{q,A}) \ge qn\log_2 n - O(qn)$. \end{theorem}
\noindent {\it Proof. } Essentially, it suffices to consider the case $q=1$. Let $S$ be a monotone circuit of complexity $L$ that computes $S_{1,A}(X,Y)$.
a) First, note that under any assignment $Y=\alpha_k$, for each $i$ the circuit $S$ contains a path connecting the input $x_i$ to the output $s_{i+k \bmod n}$ and passing exclusively through gates whose outputs compute the function $x_i$.
Indeed, $s_{i+k \bmod n}(X,\alpha_k) = x_i$ by definition. It remains to check that if $x_i = f \vee g$ or $x_i = fg$, where $f$ and $g$ are monotone functions, then either $f=x_i$ or $g=x_i$. From $x_i = f \vee g$ it follows that $f \le x_i$ and $g \le x_i$. Assume that $f \ne x_i$ and $g \ne x_i$. Then $f=g=0$ under the assignment $x_i=1$, $x_j=0$ for all $j \ne i$. But then $f \vee g=0 \ne x_i$, a contradiction. The case $x_i = fg$ follows by a dual argument.
So, moving from the output $s_{i+k \bmod n}$ towards the inputs of the circuit, at every gate we can select an input computing the function $x_i$. In this way, we obtain the desired path.
b) Denote the path provided by the above argument by $p_{i,k}$. Let $\chi(e)$ stand for the number of paths $p_{i,k}$, $0 \le i,k < n$, passing through the gate $e$ in the circuit $S$. Note that $\chi(e) \le n$ for all $e \in S$. Indeed, any assignment $Y=\alpha_k$ uniquely defines the function of the variables $X$ computed at the output of any gate $e$. Thus, $e$ does not belong to two different paths $p_{i,k}$ and $p_{j,k}$ with $i \ne j$. Consequently, \begin{equation}\label{up} \sum_{e \in S} \chi(e) \le Ln. \end{equation}
c) Let us estimate the sum $\sum_{e \in S} \chi(e)$ in another way. Denote by $\chi(e,j)$ the number of paths $p_{i,k}$ passing through $e$ to the output $s_j$. By construction, $\sum_j \chi(e,j) = \chi(e)$.
Consider the subcircuit $S_j$ obtained by combining all $n$ paths $p_{i,k}$ leading to the output $s_j$, i.e. satisfying the condition $i+k=j \bmod n$. By construction, $S_j$ is a connected binary\footnote{Any vertex receives at most two incoming edges.} directed graph with $n$ inputs and one output. We manage to bound $\sum_{e \in S_j} \chi(e,j)$ following a simple argument from~\cite{ls74e}\footnote{In~\cite{ls74e}, the argument was used to bound the monotone complexity of the boolean sorting operator, see also~\cite{weg87e}.}.
Due to the binarity property, the subcircuit $S_j$ has an input at distance at least $\log_2 n$ edges from the output. In other words, some path making up $S_j$ contains at least $\log_2 n$ gates. Exclude this path and consider the subcircuit obtained by combining the remaining $n-1$ paths. It contains a path of length at least $\log_2 (n-1)$. We proceed in this way until no paths remain. The argument leads to the bound \begin{equation}\label{xej} \sum_{e \in S_j} \chi(e,j) \ge \log_2 n! = n\log_2 n - O(n), \end{equation} and consequently \begin{equation}\label{down} \sum_{e \in S} \chi(e) = \sum_j \sum_{e \in S_j} \chi(e,j) \ge n^2\log_2 n - O(n^2). \end{equation} Putting together (\ref{up}) and (\ref{down}), we establish the inequality $L \ge n\log_2 n - O(n)$.
d) For $q>1$, we consider separately the components of the input and output vectors at the same positions. This yields $q$ groups of paths $p_{i,k}$. Inequality (\ref{up}) remains valid, and inequality (\ref{xej}) holds for each of the $qn$ outputs. The required bound follows. \qed
Since a permutation operator is a shift operator defined on a larger domain, as a corollary we obtain $L(P_{q,\Pi}) \ge qn\log_2 n - O(qn)$ for any~$\Pi$.
The research is supported by RFBR grant, project no.\,19-01-00294a.
\end{document} |
\begin{document}
\begin{abstract} We show that the minimal number of generators and the Cohen-Macaulay type of a family of numerical semigroups generated by concatenation of arithmetic sequences is unbounded. \end{abstract} \maketitle
\section{Introduction} A \textit{numerical semigroup} $\Gamma$ is a subset of the set of nonnegative integers $\mathbb{N}$ that is closed under addition, contains zero and generates $\mathbb{Z}$ as a group. We refer to \cite{rgs} for basic facts on numerical semigroups. In the paper \cite{mss3}, the authors introduced the notion of concatenation of two arithmetic sequences to define a new family of numerical semigroups. This definition was largely inspired by the family of numerical semigroups in embedding dimension $4$ defined by Bresinsky in \cite{bre}. The authors also proved in an earlier work \cite{mss2} that all the Betti numbers of the Bresinsky curves are unbounded. The whole motivation comes from the old question of whether every family of affine curves, in a fixed embedding dimension, has an upper bound on the minimal number of equations defining them ideal-theoretically. The question was answered in the negative for affine curves parametrised by monomials in \cite{bre}, and for algebroid space curves in \cite{moh}, \cite{mss3}. The first example was probably given by F.S.~Macaulay; we refer to \cite{abhyankar} for a detailed discussion of Macaulay's examples. However, all the examples found in the literature are in embedding dimension $4$ or lower. We did not find any example in embedding dimension $5$ or higher, or for an arbitrary embedding dimension. This search led us to define the notion of concatenation of two numerical semigroups, and we could indeed define a family in arbitrary embedding dimension in \cite{mss3}. In the same paper, we conjectured that this family of numerical semigroups, in an arbitrary embedding dimension, defines affine monomial curves with arbitrarily large first Betti number (the minimal number of generators of the defining ideal) and last Betti number (the Cohen-Macaulay type). In this article we show that, in embedding dimensions $4$ and $5$, our conjecture holds good.
\section{Numerical semigroups and Concatenation} Let us quickly discuss some basic facts on numerical semigroups and concatenation. Let $\Gamma$ be a numerical semigroup. It is well known (see \cite{rgs}) that the set $\mathbb{N}\setminus \Gamma$ is finite and that the semigroup $\Gamma$ has a unique minimal system of generators $n_{0} < n_{1} < \cdots < n_{p}$. The greatest integer not belonging to $\Gamma$ is called the \textit{Frobenius number} of $\Gamma$, denoted by $F(\Gamma)$. The integers $n_{0}$ and $p + 1$ are known as the \textit{multiplicity} and the \textit{embedding dimension} of the semigroup $\Gamma$, usually denoted by $m(\Gamma)$ and $e(\Gamma)$ respectively. The \textit{Ap\'{e}ry set} of $\Gamma$ with respect to a non-zero $a\in \Gamma$ is defined to be the set $\rm{Ap}(\Gamma,a)=\{s\in \Gamma\mid s-a\notin \Gamma\}$. Given integers $n_{0} < n_{1} < \cdots < n_{p}$, the map $\nu : k[x_{0}, \ldots, x_{p}]\longrightarrow k[t]$ defined as $\nu(x_{i}) = t^{n_{i}}$, $0\leq i\leq p$, defines a parametrization of an affine monomial curve; the ideal $\ker(\nu)=\mathfrak{p}$ is called the defining ideal of the monomial curve given by the parametrization $\nu(x_{i}) = t^{n_{i}}$, $0\leq i\leq p$. The defining ideal $\mathfrak{p}$ is a graded ideal with respect to the weighted gradation, and therefore any two minimal generating sets of $\mathfrak{p}$ have the same cardinality. Similarly, by an abuse of notation, one can define a semigroup homomorphism $\nu: \mathbb{N}^{p+1} \rightarrow \mathbb{N}$ as \, $\nu((a_{0}, \ldots , a_{p})) = a_{0}n_{0} + a_{1}n_{1} + \cdots + a_{p}n_{p}$. Let $\sigma$ denote the kernel congruence of the map $\nu$. It is known that $\sigma$ is finitely generated. The minimal number of generators of the ideal $\mathfrak{p}$, i.e., the cardinality of a minimal generating set of $\mathfrak{p}$, is the same as the minimal cardinality of a system of generators of $\sigma$.
Let $\Gamma$ be a numerical semigroup. We say that $x\in\mathbb{Z}$ is a \textit{pseudo-Frobenius number} if $x\notin \Gamma$ and $x+s\in \Gamma$ for all $s\in \Gamma\setminus \{0\}$. We denote by $\mathbf{PF}(\Gamma)$ the set of pseudo-Frobenius numbers of $\Gamma$. The cardinality of $\mathbf{PF}(\Gamma)$ is denoted by $t(\Gamma)$ and is called the \textit{Cohen-Macaulay type}, or simply the \textit{type}, of $\Gamma$. For $a,b\in \mathbb{Z}$, we define $a\leq_{\Gamma} b$ if $b-a\in \Gamma$. This order relation defines a poset structure on $\mathbb{Z}$. It can be proved (see Proposition 8 in \cite{ap}) that $$\mathbf{PF}(\Gamma) = \{w-a\mid w\in \,\mathrm{Maximals}_{\leq_{\Gamma}}\mathrm{Ap}(\Gamma, a)\}.$$
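This characterization can be checked by direct computation. The following sketch (ours, with a crude search bound that suffices for small examples) computes $\mathbf{PF}(\Gamma)$ for $\Gamma=\langle 3,5,7\rangle$, whose type is $2$:

```python
# Compute all semigroup elements up to `bound` by dynamic programming.
def semigroup_elements(gens, bound):
    elems = {0}
    for s in range(1, bound + 1):
        if any(s - g in elems for g in gens if s >= g):
            elems.add(s)
    return elems

# PF(G) = { w - a : w maximal in Ap(G, a) with respect to <=_G },
# where a is the multiplicity and Ap(G, a) = { s in G : s - a not in G }.
def pseudo_frobenius(gens):
    a = min(gens)
    bound = a * max(gens)  # crude bound, large enough for small examples
    S = semigroup_elements(gens, bound)
    apery = [s for s in S if (s - a) not in S]
    maximals = [w for w in apery
                if not any(w2 != w and (w2 - w) in S for w2 in apery)]
    return sorted(w - a for w in maximals)

print(pseudo_frobenius([3, 5, 7]))  # → [2, 4]
```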
Let $e\geq 4$. Let us consider the string of positive integers in arithmetic progression: $a<a+d<a+2d<\ldots<a+(n-1)d<b<b+d<\ldots<b+(m-1)d$, where $m,n\in \mathbb{N}$, $m+n=e$ and $\gcd(a,d)=1$. Note that $a<a+d<a+2d<\ldots<a+(n-1)d$ and $b<b+d<\ldots<b+(m-1)d$ are both arithmetic sequences with the same common difference $d$. We further assume that this sequence minimally generates the numerical semigroup $\Gamma=\langle a, a+d, a+2d,\ldots, a+(n-1)d, b, b+d,\ldots, b+(m-1)d\rangle$. Then, $\Gamma$ is called the \textit{numerical semigroup generated by concatenation of two arithmetic sequences with the same common difference $d$}.
Let $e\geq 4$, $n\geq 5$ and $q\geq 0$. We have defined in \cite{mss3} the numerical semigroup $\mathfrak{S}_{(n,e,q)}$, generated by the integers $\{m_{0},\ldots,m_{e-1}\}$, where $m_{i}:=n^2+(e-2)n+q+i$, for $0\leq i\leq e-3 $ and $m_{e-2}:=n^2+(e-1)n+q+(e-3)$, $m_{e-1}:=n^2+(e-1)n+q+(e-2)$. This is formed by concatenation of two arithmetic sequences with common difference $1$. Let $\mathcal{Q}_{(n,e,q)}\subset k[x_{0},\ldots,x_{e-1}]$ be the defining ideal of $\mathfrak{S}_{(n,e,q)}$. We have called $\mathfrak{S}_{(n,e,q)}$ an \textit{unbounded concatenation} and have conjectured the following:
\noindent\textbf{Conjecture 3.3 \cite{mss3}.}\begin{enumerate}[(i)] \item $\mu(\mathcal{Q}_{(n,e,e-4)})\geq n+2$; \item the set $\{\mu(\mathcal{Q}_{(n,e,q)})\mid n\geq 5,e\geq 4, q\geq 0\}$ is unbounded above. \end{enumerate}
\noindent We have also proved that $\mu(\mathcal{Q}_{(n,4,0)})= 2(n+1)$ in Theorem 3.5 in \cite{mss3}. This settles part (i) of the conjecture for the special case $e=4$ and indeed justifies the name \textit{unbounded concatenation}.
In this paper, we calculate the pseudo-Frobenius set for the case $e=4$, $q=0$ and prove that the Cohen-Macaulay type of the numerical semigroup ring is also unbounded. Note that the Cohen-Macaulay type is also the last Betti number. We then prove part (i) of the above conjecture for the case $e=5$, which indeed gives us the desired class of monomial curves in embedding dimension $5$. We also compute the pseudo-Frobenius set and prove that the Cohen-Macaulay type is unbounded for $e=5$ as well. In light of these results, we now revise the above conjecture and state the modified version as follows:
\noindent\textbf{Unbounded Concatenation Conjecture.}\begin{enumerate}[(i)] \item $\mu(\mathcal{Q}_{(n,e,e-4)})\geq n+2$; \item the Cohen-Macaulay type of $R/\mathcal{Q}_{(n,e,e-4)}$ is unbounded above; \item the set $\{\mu(\mathcal{Q}_{(n,e,q)})\mid n\geq 5,e\geq 4, q\geq 0\}$ is unbounded above. \end{enumerate}
\section{Ap\'{e}ry set and the Pseudo-Frobenius set of $\mathfrak{S}_{(n,4)}$} This section is devoted to the study of $\mathfrak{S}_{(n,e,q)}$, for $e=4$ and $q=e-4=0$. We will be writing $\mathfrak{S}_{(n,e)}$ instead of $\mathfrak{S}_{(n,e,0)}$, for simplicity of notation. Let $e\geq 4, i\geq 2$, $n=i(e-3)+(e-1)$ and $\mathfrak{S}_{(n,e,0)}=\langle m_{0},\ldots,m_{e-1}\rangle$, where \begin{align*} m_{j} & = n^{2}+(e-2)n+(e-4+j),\quad 0\leq j\leq e-3\\ m_{e-2} & = n^{2}+(e-1)n+(2e-7),\\ m_{e-1} & = n^{2}+(e-1)n+(2e-6). \end{align*}
\begin{theorem} Let $e\geq 4$, $i\geq 2$ and $n=i(e-3)+(e-1)$. Then the numerical semigroup $\mathfrak{S}_{(n,e)}$ is minimally generated by $\{m_{0},\ldots,m_{e-1}\}$. \end{theorem} \proof See Lemma 3.1 in \cite{mss3}.\qed
\noindent We note that $\mathfrak{S}_{(n,4)}=\langle m_{0},\ldots, m_{3}\rangle$, where $m_{0}=n^{2}+2n, m_{1}=n^{2}+2n+1, m_{2}=n^{2}+3n+1, m_{3}=n^{2}+3n+2$.
\begin{theorem}\label{Apery 4} The \textit{Ap\'{e}ry set} of $\mathfrak{S}_{(n,4)}$ with respect to $m_{0}=n^{2}+2n$ is $\rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})=\displaystyle\cup_{i=1}^{5} A_{i}\cup \{0\}$, where \begin{align*} A_{1}&=\{rm_{1}\mid 1\leq r\leq n\}\\ A_{2}&=\{rm_{2}\mid 1\leq r\leq n\}\\ A_{3}&=\{rm_{3}\mid 1\leq r\leq n-1\}\\ A_{4}&=\{rm_{1}+sm_{3}\mid 1\leq r\leq n-1,\,1\leq s\leq n-r\}\\ A_{5}&=\{rm_{2}+sm_{3}\mid 1\leq r\leq n-1,\,1\leq s\leq n-r\}. \end{align*} \end{theorem}
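Before the proof, a brute-force numerical check (our own sketch) of this description for the smallest admissible case $e=4$, $i=2$, i.e. $n=5$: the Ap\'{e}ry set, computed as the least semigroup element in each residue class modulo $m_0$, coincides with $\cup_{i=1}^{5}A_i\cup\{0\}$:

```python
# Semigroup elements up to `bound`, by dynamic programming.
def semigroup_up_to(gens, bound):
    elems = {0}
    for s in range(1, bound + 1):
        if any(s - g in elems for g in gens if s >= g):
            elems.add(s)
    return elems

n = 5  # e = 4, i = 2 gives n = i(e-3) + (e-1) = 5
m0, m1, m2, m3 = n*n + 2*n, n*n + 2*n + 1, n*n + 3*n + 1, n*n + 3*n + 2
S = semigroup_up_to([m0, m1, m2, m3], 6 * m0)
# The Apery set: the least element of S in each residue class mod m0.
apery = {min(s for s in S if s % m0 == r) for r in range(m0)}

claimed = {0}
claimed |= {r * m1 for r in range(1, n + 1)}                        # A_1
claimed |= {r * m2 for r in range(1, n + 1)}                        # A_2
claimed |= {r * m3 for r in range(1, n)}                            # A_3
claimed |= {r * m1 + s * m3 for r in range(1, n)
                            for s in range(1, n - r + 1)}           # A_4
claimed |= {r * m2 + s * m3 for r in range(1, n)
                            for s in range(1, n - r + 1)}           # A_5
print(apery == claimed)  # → True
```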
\proof \textbf{Case 1.} We show that $A_{1}\subset \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$, proceeding by induction on $r$. For $r=1$ we have $m_{1}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$, because $m_{1}$ is an element of the minimal generating set of $\mathfrak{S}_{(n,4)}$. Suppose $r>1$. We have $rm_{1}\equiv r\,(\rm{mod}\, m_{0})$, so there exists $s_{r}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$ such that $rm_{1}=km_{0}+s_{r}$, $k\geq 0$. Let $s_{r}=c_{1r}m_{1}+c_{2r}m_{2}+c_{3r}m_{3}$; then obviously $0\leq c_{1r}\leq r$. If $c_{1r}=r$, then $c_{2r}=c_{3r}=0$ and we are done. If $0\leq c_{1r}< r$, then $$(r-c_{1r})m_{1}=km_{0}+c_{2r}m_{2}+c_{3r}m_{3}.$$ If $c_{1r}\neq 0$, then by induction $(r-c_{1r})m_{1}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$; therefore $k=0$ and $rm_{1}=s_{r}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. So assume $c_{1r}=0$, which implies $s_{r}=c_{2r}m_{2}+c_{3r}m_{3}$. Hence \begin{align*} rm_{1} &= km_{0}+c_{2r}m_{2}+c_{3r}m_{3},\quad k\geq 0,\\ (r-c_{2r}-c_{3r})m_{1} &= km_{0}+c_{2r}n+c_{3r}(n+1). \end{align*} As the right-hand side of the above equation is positive, we have $c_{2r}+c_{3r}<r\leq n$. We then have $$(r-c_{2r}-c_{3r}-k)m_{0}+(r-c_{2r}-c_{3r})=(c_{2r}+c_{3r})n+c_{3r}.$$ If $r-c_{2r}-c_{3r}-k=0$, then the above equation gives $(r-c_{2r}-c_{3r})=(c_{2r}+c_{3r})n+c_{3r}$, a contradiction, as $c_{2r}+c_{3r}<r\leq n$. Therefore $r-c_{2r}-c_{3r}-k\geq 1$, which implies $(r-c_{2r}-c_{3r}-k)m_{0}+(r-c_{2r}-c_{3r})\geq m_{1}$, and hence $(c_{2r}+c_{3r})n+c_{3r}\geq m_{1}$. But $(c_{2r}+c_{3r})n+c_{3r}\leq rn+c_{3r}\leq n^{2}+n$, while $m_{1}=n^{2}+2n+1$. So we get a contradiction.
\noindent\textbf{Case 2.} We show that $A_{2}\subset \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. We have $m_{2}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. Let $r>1$; there exists $s_{r}\in \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$ such that $rm_{2}\equiv s_{r}\,(\rm{mod}\,m_{0})$. Therefore $rm_{2}=km_{0}+c_{1r}m_{1}+c_{3r}m_{3}$, $k\geq 0$ (by induction, the coefficient $c_{2r}$ of $m_{2}$ is zero). Therefore $rm_{2}=km_{0}+c_{1r}m_{1}+c_{3r}(m_{2}+1)$, hence $(r-c_{3r})m_{2}= km_{0}+c_{1r}m_{1}+c_{3r}$. Since the right-hand side is positive, we have $c_{3r}<r$. Again \begin{align*} r(m_{1}+n) & = km_{0}+c_{1r}m_{1}+c_{3r}(m_{1}+n+1),\\ (r-c_{1r}-c_{3r})m_{1} &= km_{0}+c_{3r}(n+1)-rn,\quad 1\leq r\leq n. \end{align*} If $k=0$, then $rm_{2}=s_{r}$ and we are done, so assume $k>0$. As the right-hand side is positive, we get $c_{1r}+c_{3r}<r$. Now \begin{align*} & r(m_{0}+n+1) = km_{0}+c_{1r}(m_{0}+1)+c_{3r}(m_{0}+n+2),\\ \text{which implies}\quad &(r-k-c_{1r}-c_{3r})m_{0}+r(n+1)=c_{1r}+c_{3r}(n+2). \end{align*} If $r-k-c_{1r}-c_{3r}=0$, then $r(n+1)=(c_{1r}+c_{3r})+c_{3r}(n+1)<r+(r-1)(n+1)=r(n+1)+(r-n-1)$ for $1\leq r\leq n$, which gives a contradiction.
If $r-k-c_{1r}-c_{3r}\neq 0$, then \begin{align*} & (r-k-c_{1r}-c_{3r})m_{0}+r(n+1)\geq m_{0}+r(n+1),\\ \text{which implies}\quad & c_{1r}+c_{3r}(n+2)\geq m_{0}+r(n+1)\geq n^{2}+3n+1. \end{align*} But $c_{1r}+c_{3r}<r\leq n$, therefore $c_{1r}+c_{3r}(n+2)\leq n^{2}+2n-1$, which gives a contradiction.
\noindent\textbf{Case 3.} We show that $A_{3}\subset \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. We note that for $r\geq 1$, $(r+1)m_{3}-m_{0}\notin \mathfrak{S}_{(n,4)}$ implies $rm_{3}-m_{0}\notin \mathfrak{S}_{(n,4)}$ (for if $rm_{3}-m_{0}\in \mathfrak{S}_{(n,4)}$, then $(r+1)m_{3}-m_{0}=(rm_{3}-m_{0})+m_{3}\in \mathfrak{S}_{(n,4)}$). Therefore it is enough to show that $(n-1)m_{3}-m_{0}\notin \mathfrak{S}_{(n,4)}$. Suppose \begin{align*} (n-1)(m_{0}+n+2)-m_{0} & =c_{0}m_{0}+c_{1}m_{1}+c_{2}m_{2}+c_{3}m_{3}\\
& =c_{0}m_{0}+c_{1}(m_{0}+1)+c_{2}(m_{0}+n+1)+c_{3}(m_{0}+n+2). \end{align*} Therefore \begin{align*} [(n-2)-(c_{0}+c_{1}+c_{2}+c_{3})]m_{0}+(n^{2}+n-2) &=c_{1}+c_{2}(n+1)+c_{3}(n+2). \end{align*}
As the right-hand side of the above equation is positive, $(c_{0}+c_{1}+c_{2}+c_{3})\leq n-2$. We have
$$[(n-2)-(c_{0}+c_{1}+c_{2}+c_{3})]m_{0} =(c_{1}+c_{2}+2c_{3}+2)+n(c_{2}+c_{3}-n-1)$$ and $m_{0}=n(n+2)$. If $(n-2)-(c_{0}+c_{1}+c_{2}+c_{3})=0$, then $n(c_{2}+c_{3}-n-1)=-(c_{1}+c_{2}+2c_{3}+2)$, a contradiction; therefore $(n-2)-(c_{0}+c_{1}+c_{2}+c_{3})\neq 0$ and $n\mid (c_{1}+c_{2}+2c_{3}+2)$. Let $c_{1}+c_{2}+2c_{3}+2=nk$ for some $k\geq 0$. If $k\geq 2$, then $c_{1}+c_{2}+2c_{3}+2\geq 2n$; on the other hand, $c_{1}+c_{2}+c_{3}\leq n-2$ and $c_{3}\leq n-2$ imply $c_{1}+c_{2}+2c_{3}+2 \leq 2n-2<2n$, a contradiction. Therefore $k\in\{0,1\}$. If $k=1$, then the right-hand side equals $n(c_{2}+c_{3}-n)<0$, a contradiction. If $k=0$, then $c_{1}+c_{2}+2c_{3}+2=0$, again a contradiction.
\noindent\textbf{Case 4.} We will show that $A_{4}\subset \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. We fix $r$, $1\leq r\leq n-1$. We need to show that $rm_{1}+(n-r)m_{3}-m_{0}\notin \mathfrak{S}_{(n,4)}$. Suppose $rm_{1}+(n-r)m_{3}-m_{0}=\displaystyle\sum_{i=0}^{3}c_{i}m_{i}.$ After simplifying we get $$[n-(c_{0}+c_{1}+c_{2}+c_{3})]m_{0} =(c_{1}+c_{2}+2c_{3}+r)+n(c_{2}+c_{3}+r).$$ As the R.H.S. is positive, we have $\displaystyle\sum_{i=0}^{3}c_{i}\leq n-1$ and $n\mid c_{1}+c_{2}+2c_{3}+r$. Let $c_{1}+c_{2}+2c_{3}+r=nk$, for some $k\geq 0$. Now $c_{1}+c_{2}+c_{3}\leq n-1$, $r\leq n-1$ and $c_{3}\leq n-1$ imply $c_{1}+c_{2}+2c_{3}+r<3n$, hence $k\leq 2$. As $r>0$, we have $k\neq 0$.
Suppose $k=1$; then $c_{1}+c_{2}+2c_{3}+r=n$ and the R.H.S. is $n(c_{2}+c_{3}+r+1)$. Therefore $n+2\mid (c_{2}+c_{3}+r+1)$. Let $c_{2}+c_{3}+r+1=(n+2)\ell$, where $\ell>0$ (as $r+1>0$). We have $c_{2}+c_{3}\leq n-1$ and $r\leq n-1$, so $c_{2}+c_{3}+r+1<2(n+2)$. Hence $\ell=1$, so $c_{2}+c_{3}+r=n+1$ and $c_{1}+c_{2}+2c_{3}+r=n$, which implies $c_{1}+c_{3}+1=0$, a contradiction.
If $k=2$, then $c_{1}+c_{2}+2c_{3}+r=2n$ and the R.H.S. is $n(c_{2}+c_{3}+r+2)$. Thus $n+2\mid (c_{2}+c_{3}+r+2)$. By the same arguments we get $c_{2}+c_{3}+r=n$. Therefore $c_{1}+c_{3}=n$, a contradiction as $c_{1}+c_{3}\leq n-1$.
\noindent\textbf{Case 5.} We will show that $A_{5}\subset \rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$. We fix $r$, $1\leq r\leq n-1$. We need to show that $rm_{2}+(n-r)m_{3}-m_{0}\notin \mathfrak{S}_{(n,4)}$. Suppose $rm_{2}+(n-r)m_{3}-m_{0}=\displaystyle\sum_{i=0}^{3}c_{i}m_{i}.$ After simplifying we get $$[n-( \displaystyle\sum_{i=0}^{3}c_{i})]m_{0}=(c_{1}+c_{2}+2c_{3}+r)+(c_{2}+c_{3})n.$$ As the R.H.S. is positive, we have $\displaystyle\sum_{i=0}^{3}c_{i}\leq n-1$. Again we have $c_{1}+c_{2}+2c_{3}+r=kn$ with $k>0$. Now $c_{1}+c_{2}+c_{3}\leq n-1$, $c_{3}\leq n-1$ and $r\leq n-1$ imply that $c_{1}+c_{2}+2c_{3}+r<3n$. Therefore $k=1,2$.
If $k=1$, then $c_{1}+c_{2}+2c_{3}+r=n$ and the R.H.S. is $n(c_{2}+c_{3}+1)$. Hence $n+2\mid c_{2}+c_{3}+1$; since $0<c_{2}+c_{3}+1\leq n<n+2$, this is a contradiction.
If $k=2$, then $c_{1}+c_{2}+2c_{3}+r=2n$ and the R.H.S. is $n(c_{2}+c_{3}+2)$. Hence $n+2\mid c_{2}+c_{3}+2$; since $0<c_{2}+c_{3}+2\leq n+1<n+2$, this is again a contradiction. \qed
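The non-membership claims of Cases 2--5 can also be confirmed numerically for a small instance. The following sketch is an illustration only (not part of the proof): we take $n=5$, so $m_{0},\ldots,m_{3}=35,36,41,42$, decide semigroup membership by a coin-problem style dynamic program, and check that each listed element $w$ satisfies $w-m_{0}\notin \mathfrak{S}_{(n,4)}$.

```python
# Sketch verifying the non-membership claims of Cases 2-5 for n = 5,
# where m0..m3 = 35, 36, 41, 42 (bound chosen safely above all claims).
n = 5
m0, m1, m2, m3 = n * (n + 2), n * (n + 2) + 1, n * (n + 2) + n + 1, n * (n + 2) + n + 2

BOUND = 1000
in_S = [False] * (BOUND + 1)
in_S[0] = True
for v in range(1, BOUND + 1):
    in_S[v] = any(v >= g and in_S[v - g] for g in (m0, m1, m2, m3))

claims = [r * m2 for r in range(1, n + 1)]                    # Case 2
claims += [r * m3 for r in range(1, n)]                       # Case 3
claims += [r * m1 + (n - r) * m3 for r in range(1, n)]        # Case 4
claims += [r * m2 + (n - r) * m3 for r in range(1, n)]        # Case 5
print(all(not in_S[w - m0] for w in claims))                  # True
```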
\begin{theorem}\label{pf4} The pseudo-Frobenius set of the numerical semigroup $\mathfrak{S}_{(n,4)}$ is $PF(\mathfrak{S}_{(n,4)})=P_{1}\cup P_{2}\cup P_{3}$, where \begin{align*} P_{1}&=\{(n-1)m_{0}+n\};\\ P_{2}&=\{(n-1)m_{0}+n+k(n+1)\mid 1\leq k\leq (n-1)\};\\ P_{3}&=\{(n-1)m_{0}+n+(n-1)(n+1)+t\mid 1\leq t \leq n\}. \end{align*} \end{theorem}
\proof Let $P^{'}_{j}=P_{j}+m_{0}$, $1\leq j\leq 3$, so that \begin{align*} P^{'}_{1}&= \{nm_{1}\};\\ P^{'}_{2}&= \{km_{1}+(n-k)m_{3}\mid 1\leq k\leq n-1\};\\ P^{'}_{3}&= \{km_{2}+(n-k)m_{3}\mid 1\leq k\leq n\}. \end{align*} We want to show the following two conditions:
\begin{enumerate} \item[(i)] For each $x\in Ap(\mathfrak{S}_{(n,4)},m_{0})\setminus \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$, there exists $ y\in \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$ such that $y-x \in \mathfrak{S}_{(n,4)}$.
\item[(ii)] For any $y_{1},y_{2}\in \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$, $y_{1}-y_{2} \notin \mathfrak{S}_{(n,4)}$. \end{enumerate}
\noindent\textbf{Proof of (i):} Let $x=rm_{1}$, where $1\leq r\leq n-1$. Take $y=nm_{1}$ and $y-x=(n-r)m_{1}\in \mathfrak{S}_{(n,4)}$.
Let $x=rm_{2}$, where $1\leq r\leq n-1$; then take $y=rm_{2}+(n-r)m_{3}\in P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}$, and $y-x=(n-r)m_{3}\in \mathfrak{S}_{(n,4)}$.
For $x\in A_{3}\subset Ap(\mathfrak{S}_{(n,4)},m_{0})\setminus \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$, we take $y=m_{1}+(n-1)m_{3}$. Then we have $ y-x=m_{1}+(n-1-r)m_{3}\in \mathfrak{S}_{(n,4)}$ for $1\leq r\leq n-1$.
For $x\in Ap(\mathfrak{S}_{(n,4)},m_{0})\setminus \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$ with $x=rm_{1}+s m_{3}$, where $1\leq r \leq n-2$ and $1\leq s\leq n-r-1$, we choose $y=rm_{1}+(n-r) m_{3}$, so that $y-x=(n-r-s)m_{3}$; since $1\leq s\leq n-1-r$, we have $n-r-s\geq 1$, hence $y-x\in \mathfrak{S}_{(n,4)}$.
For $x\in Ap(\mathfrak{S}_{(n,4)},m_{0})\setminus \{P^{'}_{1}\cup P^{'}_{2} \cup P^{'}_{3}\}$ with $x=rm_{2}+s m_{3}$, where $1\leq r \leq n-2$ and $1\leq s \leq n-r-1$, we choose $y=rm_{2}+(n-r) m_{3}$, so that $y-x=(n-r-s)m_{3}$; since $1\leq s\leq n-1-r$, we have $n-r-s\geq 1$, hence $y-x\in \mathfrak{S}_{(n,4)}$.
\noindent\textbf{Proof of (ii):} It is easy to check that for any $y_{1},y_{2}\in P^{'}_{j}$, $1\leq j\leq 3$, $y_{1}-y_{2}\notin\mathfrak{S}_{(n,4)}$.
Let $y_{1}= nm_{1}$ and $y_{2}\in P^{'}_{2}$; then \begin{align*} y_{2}-y_{1}&=km_{1}+(n-k)m_{3}-nm_{1}\\ &=(k-n)m_{1}+(n-k)(m_{1}+n+1)\\ &=n^{2}+n-k(n+1). \end{align*} Since $1\leq k \leq n-1$, we have $0<y_{2}-y_{1}=n^{2}+n-k(n+1)< m_{0}$, therefore $y_{2}-y_{1}\notin\mathfrak{S}_{(n,4)}$ (and $y_{1}-y_{2}<0$).
Let $y_{1}= nm_{1}$ and $y_{2}\in P^{'}_{3}$; then \begin{align*} y_{2}-y_{1}&=k m_{2}+(n-k)m_{3}-nm_{1}\\ &=(k-n)m_{1}+kn+(n-k)(m_{1}+n+1)\\ &=n^2+n-k. \end{align*} Since $0\leq k \leq n$, we have $0<y_{2}-y_{1}=n^2+n-k< m_{0}$, therefore $y_{2}-y_{1}\notin\mathfrak{S}_{(n,4)}$.
Let $y_{1}\in P^{'}_{2}$ and $y_{2}\in P^{'}_{3}$; then \begin{align*} y_{2}-y_{1}&=k_{2}m_{2}+(n-k_{2})m_{3}-k_{1}m_{1}-(n-k_{1})m_{3}\\ &=(k_{2}-k_{1})m_{1}+k_{2}n+(k_{1}-k_{2})m_{3}\\ &=k_{2}n+(k_{1}-k_{2})(n+1)\\ &=k_{1}(n+1)-k_{2}. \end{align*} Since $1\leq k_{1} \leq n-1$ and $k_{2}\leq n$, we have $0<y_{2}-y_{1}=k_{1}(n+1)-k_{2}< m_{0}$, therefore $y_{2}-y_{1}\notin\mathfrak{S}_{(n,4)}$.
\qed
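Theorem \ref{pf4} can likewise be checked by brute force for a small case. The sketch below is illustrative only: for $n=5$ it computes $PF(\mathfrak{S}_{(n,4)})$ as the set of maximal elements of $\rm{Ap}(\mathfrak{S}_{(n,4)},m_{0})$ minus $m_{0}$, and compares it with $P_{1}\cup P_{2}\cup P_{3}$.

```python
# Brute-force check of Theorem pf4 for n = 5 (m0..m3 = 35, 36, 41, 42):
# PF(S) = { w - m0 : w a maximal element of Ap(S, m0) w.r.t.  w' - w in S }.
n = 5
m0, m1, m2, m3 = 35, 36, 41, 42

BOUND = 3000
in_S = [False] * (BOUND + 1)
in_S[0] = True
for v in range(1, BOUND + 1):
    in_S[v] = any(v >= g and in_S[v - g] for g in (m0, m1, m2, m3))

# Apery set: the least element of S in each residue class modulo m0
apery = [next(v for v in range(c, BOUND, m0) if in_S[v]) for c in range(m0)]

maximal = [w for w in apery
           if not any(w2 != w and in_S[w2 - w] for w2 in apery if w2 >= w)]
PF = sorted(w - m0 for w in maximal)

P1 = {(n - 1) * m0 + n}
P2 = {(n - 1) * m0 + n + k * (n + 1) for k in range(1, n)}
P3 = {(n - 1) * m0 + n + (n - 1) * (n + 1) + t for t in range(1, n + 1)}
print(PF == sorted(P1 | P2 | P3))   # True: the type is 2n = 10 here
```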
\section{Ap\'{e}ry set and the pseudo-Frobenius set of $\mathfrak{S}_{(n,5)}$} We now consider the numerical semigroup $\mathfrak{S}_{(n,5,0)}$. We write $\mathfrak{S}_{(n,5)}$ instead of $\mathfrak{S}_{(n,5,0)}$, for simplicity of notation. We first describe the Ap\'{e}ry set of $\mathfrak{S}_{(n,5)}$, which will help us write the pseudo-Frobenius set. We have proved that the Cohen--Macaulay type, that is, the number of pseudo-Frobenius elements, is indeed unbounded. In the next section, we find a minimal generating set for the defining ideal and show that the minimal number of generators of the defining ideal is unbounded above, thereby proving parts (i) and (ii) of the Unbounded Concatenation Conjecture.
Let us recall that for $e\geq 4, i\geq 2$, $n=i(e-3)+(e-1)$, the numerical semigroup $\mathfrak{S}_{(n,e,0)}=\langle m_{0},\ldots,m_{e-1}\rangle$ is minimally generated by $m_{0},\ldots,m_{e-1}$, where \begin{align*} m_{j} & = n^{2}+(e-2)n+(e-4+j),\quad 0\leq j\leq e-3\\ m_{e-2} & = n^{2}+(e-1)n+(2e-7),\\ m_{e-1} & = n^{2}+(e-1)n+(2e-6). \end{align*}
\noindent We note that $\mathfrak{S}_{(n,5)}=\langle m_{0},\ldots, m_{4}\rangle$, where $m_{0}=n^{2}+3n+1$, $m_{1}=n^{2}+3n+2$, $m_{2}=n^{2}+3n+3$, $m_{3}=n^{2}+4n+3$, $m_{4}= n^{2}+4n+4$. Let $\mathcal{Q}_{(n,5)}\subset k[x_{0},\ldots,x_{4}]$ be the defining ideal of $\mathfrak{S}_{(n,5)}$.
\begin{theorem}\label{Apery 5} The \textit{Ap\'{e}ry set} of $\mathfrak{S}_{(n,5)}$ with respect to $m_{0}=n^{2}+3n+1$ is $\rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})=\displaystyle\bigcup_{j=1}^{11} A_{j}$, where \begin{align*} A_{1}& =\{0,m_{1}\}\\ A_{2}&=\{rm_{2}\mid 1\leq r\leq \dfrac{n}{2}\}\\ A_{3}&=\{rm_{3}\mid 1\leq r\leq n\}\\ A_{4}&=\{rm_{4}\mid 1\leq r\leq n\}\\ A_{5}&=\{m_{1}+r m_{2}\mid 1\leq r\leq \dfrac{n}{2}\}\\ A_{6}&=\{m_{3}+r m_{2}\mid 1\leq r\leq \dfrac{n}{2}\}\\ A_{7}&=\{rm_{2}+2s m_{4}\mid 1\leq s\leq \dfrac{n}{2}-1,1\leq r \leq \dfrac{n}{2}-s \}\\ A_{8}&=\{rm_{2}+(2s-1)m_{4}\mid 1\leq s\leq \dfrac{n}{2},1\leq r \leq \dfrac{n}{2}+1-s \}\\ A_{9}&=\{rm_{3}+(n-k-r+1)m_{4}\mid 1\leq k\leq n-1,1\leq r \leq n-k \}\\ A_{10}&=\{rm_{2}+m_{3}+2s m_{4}\mid 1\leq s\leq \dfrac{n}{2}-1,1\leq r \leq \dfrac{n}{2}-s \}\\ A_{11}&=\{rm_{2}+m_{3}+(2s-1)m_{4}\mid 1\leq s\leq \dfrac{n}{2}-1,1\leq r \leq \dfrac{n}{2}+1-s \}. \end{align*} \end{theorem} \proof
\noindent \textbf{Case 1.} Clearly $A_{1}\subset \rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})$.
\noindent \textbf{Case 2.} To show $A_{2}, A_{3}, A_{4}\subset \rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})$, we proceed as in Cases 1, 2 and 3 of Theorem \ref{Apery 4}.
\noindent \textbf{Case 3.} To show $A_{5}\subset \rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})$, it is enough to show that $m_{1}+\dfrac{n}{2}m_{2}-m_{0}\notin \mathfrak{S}_{(n,5)}$. Let $m_{1}+\dfrac{n}{2}m_{2}-m_{0}=c_{0}m_{0}+c_{1}m_{1}+c_{2}m_{2}+c_{3}m_{3}+c_{4}m_{4}$. Writing each $m_{j}$, $1\leq j\leq 4$, in terms of $m_{0}$, we get the following equation $$(\dfrac{n}{2}-\sum\limits_{i=0}^{4}c_{i})m_{0}=c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-(n+1).$$ If $\sum\limits_{i=0}^{4}c_{i}<\dfrac{n}{2}$, then $(\dfrac{n}{2}-\sum\limits_{i=0}^{4}c_{i})m_{0}\geq m_{0}$, and\\[2mm] $c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-(n+1)$\\[2mm] $ < c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}$\\[2mm] $ \leq c_{1}+2c_{2}+ 2c_{3}+3c_{4}+\dfrac{n^{2}}{2}$\\[2mm] $< \dfrac{n^{2}}{2}+\dfrac{3n}{2}< m_{0}$,\\[2mm] which is a contradiction.
\noindent Again if $\sum\limits_{i=0}^{4}c_{i}> \dfrac{n}{2}$, then $(\dfrac{n}{2}-\sum\limits_{i=0}^{4}c_{i})m_{0}\leq -m_{0}$, and\\
$c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-(n+1)\geq -(n+1)>-m_{0}$, which gives a contradiction.
\noindent If $\sum\limits_{i=0}^{4}c_{i}= \dfrac{n}{2}$, then $c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}=n+1$. Since $(n+2)c_{3}\leq n+1$ and $(n+3)c_{4}\leq n+1$ force $c_{3}=c_{4}=0$, this reduces to $c_{1}+2c_{2}=n+1$; but $c_{1}+2c_{2}\leq 2(c_{1}+c_{2})\leq 2\sum\limits_{i=0}^{4}c_{i}=n<n+1$, which is not possible.
\noindent Similarly we can show that $A_{6}\subset \rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})$.
\noindent \textbf{Case 4.} To show $A_{7}\subset \rm{Ap}(\mathfrak{S}_{(n,5)},m_{0})$, we first fix $1\leq s\leq \dfrac{n}{2}-1$.\\ We need to show $(\dfrac{n}{2}-s)m_{2}+2s m_{4}-m_{0}\notin \mathfrak{S}_{(n,5)}$.\\ Let $(\dfrac{n}{2}-s)m_{2}+2s m_{4}-m_{0}=c_{0}m_{0}+c_{1}m_{1}+c_{2}m_{2}+c_{3}m_{3}+c_{4}m_{4}$, which gives\\ $(\dfrac{n}{2}+s-1-\sum\limits_{i=0}^{4}c_{i})m_{0}=c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-n(2s+1)-4s.$\\ If $\dfrac{n}{2}+s-1>\sum\limits_{i=0}^{4}c_{i}$, then $(\dfrac{n}{2}+s-1-\sum\limits_{i=0}^{4}c_{i})m_{0}\geq m_{0}$ and\\[2mm] $c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-n(2s+1)-4s$\\[2mm] $\leq c_{1}+2c_{2}+2c_{3}+3c_{4}+(n+3)(\dfrac{n}{2}+s-1)-n(2s+1)-4s$\\[2mm] $\leq 3(\dfrac{n}{2}+s-1)+\dfrac{n^2}{2}-\dfrac{n}{2}-ns-s-3$\\[2mm] $\leq\dfrac{n^2}{2}+n+2s\leq \dfrac{n^2}{2}+2n<m_{0}$, which is a contradiction.
\noindent If $\dfrac{n}{2}+s-1<\sum\limits_{i=0}^{4}c_{i}$, then $(\dfrac{n}{2}+s-1-\sum\limits_{i=0}^{4}c_{i})m_{0}\leq -m_{0}$, and\\[2mm] $c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}-n(2s+1)-4s$\\[2mm] $\geq -n(2s+1)-4s$\\[2mm] $\geq -n^{2}-n+4 \quad (\text{substituting}\, s=\dfrac{n}{2}-1)$\\[2mm] $ > -m_{0}$, \, which is a contradiction.
\noindent If $\dfrac{n}{2}+s-1=\sum\limits_{i=0}^{4}c_{i}$, then $c_{1}+2c_{2}+(n+2)c_{3}+(n+3)c_{4}=n(2s+1)+4s$. We have $(\dfrac{n}{2}-s)m_{2}+2s m_{4}=c_{0}^{'}m_{0}+c_{1}m_{1}+c_{2}m_{2}+c_{3}m_{3}+c_{4}m_{4}$, where $c_{0}^{'}=c_{0}+1\geq 1$. Now \begin{align*} (\dfrac{n}{2}-s)m_{2}+2s m_{4}&=n^{2}(\dfrac{n}{2}+s)+(n+1)(\dfrac{3n}{2}+5s)\\ &=(n^{2}-1)(\dfrac{n}{2}+s)+(n+1)(\dfrac{3n}{2}+5s)+(\dfrac{n}{2}+s)\\ &=(n+1)[\dfrac{n^{2}}{2}+n+4s+ns]+(\dfrac{n}{2}+s)\equiv \dfrac{n}{2}+s \pmod{n+1}, \end{align*} while, since $m_{1}=(n+1)(n+2)\equiv 0 \pmod{n+1}$,\\[2mm] $c'_{0}(m_{1}-1)+c_{1}m_{1}+c_{2}(m_{1}+1)+c_{3}(m_{1}+n+1)+c_{4}(m_{1}+n+2)\equiv c_{2}+c_{4}-c'_{0} \pmod{n+1}$,\\[2mm] which implies $\dfrac{n}{2}+s\equiv c_{2}+c_{4}-c'_{0} \pmod{n+1}$. Substituting $c'_{0}+c_{1}+c_{2}+c_{3}+c_{4}= \dfrac{n}{2}+s$, we get $c'_{0}+c_{1}+c_{2}+c_{3}+c_{4}\equiv c_{2}+c_{4}-c'_{0} \pmod{n+1}$. Therefore, $n+1$ divides $2c'_{0}+c_{1}+c_{3}$, which implies $2c'_{0}+c_{1}+c_{3}=0$ (since $c_{2}+c_{4}-c_{0}^{'}> 0$). Hence $c'_{0}=0$, which is a contradiction to $c'_{0}\geq 1$.
\noindent Since the expressions of $A_{8}, A_{9}, A_{10}, A_{11}$ are similar to that of $A_{7}$, the proofs follow similar steps.\qed
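As a sanity check of Theorem \ref{Apery 5}, the following sketch (illustrative only; $i=2$, $n=2i+4=8$, so $m_{0},\ldots,m_{4}=89,90,91,99,100$) compares the claimed description with the Ap\'{e}ry set computed by brute force.

```python
# Brute-force check of the Apery set description for i = 2, n = 8
# (m0..m4 = 89, 90, 91, 99, 100); illustrative sketch only.
i = 2
n = 2 * i + 4
m = [n * n + 3 * n + 1 + j for j in range(3)] + [n * n + 4 * n + 3, n * n + 4 * n + 4]
m0, m1, m2, m3, m4 = m

BOUND = 2000
in_S = [False] * (BOUND + 1)
in_S[0] = True
for v in range(1, BOUND + 1):
    in_S[v] = any(v >= g and in_S[v - g] for g in m)

apery = {next(v for v in range(c, BOUND, m0) if in_S[v]) for c in range(m0)}

h = n // 2
A = {0, m1}                                                              # A1
A |= {r * m2 for r in range(1, h + 1)}                                   # A2
A |= {r * m3 for r in range(1, n + 1)} | {r * m4 for r in range(1, n + 1)}
A |= {m1 + r * m2 for r in range(1, h + 1)}                              # A5
A |= {m3 + r * m2 for r in range(1, h + 1)}                              # A6
A |= {r * m2 + 2 * s * m4 for s in range(1, h) for r in range(1, h - s + 1)}
A |= {r * m2 + (2 * s - 1) * m4 for s in range(1, h + 1) for r in range(1, h - s + 2)}
A |= {r * m3 + (n - k - r + 1) * m4 for k in range(1, n) for r in range(1, n - k + 1)}
A |= {r * m2 + m3 + 2 * s * m4 for s in range(1, h) for r in range(1, h - s + 1)}
A |= {r * m2 + m3 + (2 * s - 1) * m4 for s in range(1, h) for r in range(1, h - s + 2)}
print(len(A) == m0, A == apery)   # compares the claimed description with Ap(S, m0)
```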
\begin{theorem} The set of all pseudo-Frobenius numbers of the numerical semigroup $\mathfrak{S}_{(n,5)}$ is $$PF(\mathfrak{S}_{(n,5)})=P_{1}\cup P_{2}\cup P_{3},$$ where \begin{align*} P_{1}&=\{\dfrac{n}{2}m_{0}+(n+1)\}\\ P_{2}&=\{\dfrac{n}{2}m_{0}+(n+1)+k(m_{3}+(n+2))\mid 1\leq k\leq (\dfrac{n}{2}-1)\}\\ P_{3}&=\{\dfrac{n}{2}m_{0}+(n+1)+(\dfrac{n}{2}-1)(m_{3}+n+2)+(n+1+t)\mid 0\leq t\leq n+2\}. \end{align*} \end{theorem} \proof Similar to the proof of Theorem \ref{pf4}.\qed
\section{Minimal generating set for the defining ideal $\mathcal{Q}_{(n,5)}$} \begin{theorem}\label{gastinger}
Let $A = k[x_{1},\ldots,x_{n}]$ be a polynomial ring, $I\subset A$ the defining ideal of a monomial curve defined by natural numbers $a_{1},\ldots,a_{n}$, whose greatest common divisor is $1$. Let $J \subset I$ be a subideal. Then $J = I$ if and only if $\mathrm{dim}_{k} A/\langle J + (x_{i}) \rangle =a_{i}$ for some $i$. (Note that the above conditions are also equivalent to $\mathrm{dim}_{k} A/\langle J + (x_{i}) \rangle =a_{i}$ for any $i$.) \end{theorem}
\proof See \cite{g}.\qed
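As a toy illustration of this criterion (for a curve not studied in this paper), consider the monomial curve $(t^{3},t^{4},t^{5})$ of the semigroup $\langle 3,4,5\rangle$, whose defining ideal is the classical $I=(x^{3}-yz,\ y^{2}-xz,\ z^{2}-x^{2}y)$. Adding the variable $x$ (of weight $a_{1}=3$) yields the monomial ideal $(x,\,yz,\,y^{2},\,z^{2})$, and counting the monomials outside it recovers $\mathrm{dim}_{k}\, k[x,y,z]/\langle I+(x)\rangle = 3 = a_{1}$:

```python
from itertools import product

# Toy check of Gastinger's criterion for the curve (t^3, t^4, t^5):
# I + (x) = (x, y z, y^2, z^2), and dim_k k[x,y,z]/(I + (x)) should be 3.
lead = [(1, 0, 0), (0, 1, 1), (0, 2, 0), (0, 0, 2)]   # exponents of x, yz, y^2, z^2
std = [mono for mono in product(range(2), range(2), range(2))
       if not any(all(g[j] <= mono[j] for j in range(3)) for g in lead)]
print(len(std))   # 3: the standard monomials are 1, y, z
```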
\begin{theorem} Let $e=5$, $i\geq 2$ and $n=i(e-3)+(e-1)=2i+4$; then a minimal generating set of the defining ideal of $\mathfrak{S}_{(n,5)}$ consists of the following polynomials: \begin{itemize} \item $f_{1}=x_{1}x_{3}-x_{0}x_{4}$. \item $f_{2}=x_{2}x_{3}-x_{1}x_{4}$. \item $g_{1}=x_{1}^2-x_{0}x_{2}$. \item $g_{2}=x_{2}^{i+3}-x_{0}^{i+2}x_{3}$. \item $\xi_{t}=x_{0}^tx_{1}^{n+2-t}-x_{3}^{t}x_{4}^{n+1-t},\quad 0\leq t \leq n+1$. \item $\eta_{k}=x_{0}^{k+1}x_{3}^{n-2k-1}-x_{2}^{k+2}x_{4}^{n-2k-2},\quad 0\leq k \leq i$. \item $l_{1}=x_{0}^{n+1}x_{1}-x_{2}x_{4}^n$. \item $l_{2}=x_{0}^{n+2}-x_{2}x_{3}x_{4}^{n-1}$. \end{itemize}
\end{theorem}
\proof We consider the set $S=\{f_{1},f_{2},g_{1},g_{2}, \xi_{t}, \eta_{k}, l_{1}, l_{2}\mid 0\leq t\leq n+1,\, 0\leq k\leq i\}$. Let $J$ be the ideal generated by $S$, and consider the ideal $J+\langle x_{0}\rangle$. Then a generating set of $J+\langle x_{0}\rangle$ is \begin{itemize} \item $q=x_{0}$ \item $\tilde{f_{1}}=x_{1}x_{3}$ \item $\tilde{f_{2}}=x_{2}x_{3}-x_{1}x_{4}$ \item $\tilde{g_{1}}=x_{1}^2$ \item $\tilde{g_{2}}=x_{2}^{i+3}$ \item $\tilde{\xi_{0}}=x_{4}^{n+1}$ \item $\tilde{\xi_{t}}=-x_{3}^{t}x_{4}^{n+1-t},\quad 1\leq t \leq n+1$ \item $\tilde{\eta_{k}}=-x_{2}^{k+2}x_{4}^{n-2k-2},\quad 0\leq k \leq i$ \item $\tilde{l_{1}}=-x_{2}x_{4}^n$ \item $\tilde{l_{2}}=-x_{2}x_{3}x_{4}^{n-1}$ \end{itemize} We consider the lexicographic monomial order induced by $x_{0}\geq x_{1}\geq x_{2}\geq x_{3}\geq x_{4}$ on $k[x_{0},\ldots,x_{4}]$. Then a standard basis of $J+\langle x_{0}\rangle$ w.r.t.\ the given monomial order consists of the following polynomials
\begin{itemize} \item $q=x_{0}$ \item $\tilde{f_{1}}=x_{1}x_{3}$ \item $\tilde{f_{2}}=x_{2}x_{3}-x_{1}x_{4}$ \item $\tilde{g_{1}}=x_{1}^2$ \item $\tilde{g_{2}}=x_{2}^{i+3}$ \item $\tilde{\xi_{0}}=x_{4}^{n+1}$ \item $\tilde{\xi_{t}}=x_{3}^{t}x_{4}^{n+1-t},\quad 1\leq t \leq n+1$ \item $\tilde{\eta_{k}}=x_{2}^{k+2}x_{4}^{n-2k-2},\quad 0\leq k \leq i$ \item $\tilde{l_{1}}=x_{2}x_{4}^n$ \item $\tilde{l_{2}}=x_{2}x_{3}x_{4}^{n-1}$ \item $h=x_{2}x_{3}^2$ \end{itemize}
Since all the generators of the ideal $J+\langle x_{0}\rangle$ except $\tilde{f}_{2}$ are monomials, it suffices to compute the S-polynomials involving $\tilde{f}_{2}$; they are as follows: \begin{itemize} \item $S(\tilde{f_{2}}, \tilde{f_{1}})=-x_{2}x_{3}^2=-h$ \item $S(\tilde{f_{2}}, \tilde{g_{1}})=-x_{1}x_{2}x_{3}=-x_{2}\cdot\tilde{f_{1}}$ \item It is clear that $\gcd(\rm{Lt}(\tilde{f_{2}}), \rm{Lt}(\tilde{g_{2}}))=1$ \item $S(\tilde{f_{2}}, \tilde{\xi_{0}})=-x_{2}x_{3}x_{4}^{n}=x_{2}\cdot \tilde{\xi_{1}}$ \item For $1\leq t\leq n$, we have $S(\tilde{f_{2}}, \tilde{\xi_{t}})=-x_{2}x_{3}^{t+1}x_{4}^{n-t}=-x_{3}^{t-1}x_{4}^{n-t}\cdot h$ \item Since $\tilde{\xi}_{n+1}=-x_{3}^{n+1}$, we have $\gcd(\rm{Lt}(\tilde{f_{2}}), \rm{Lt}(\tilde{\xi}_{n+1}))=1$ \item For $0\leq k\leq i$, $S(\tilde{f_{2}}, \tilde{\eta_{k}})=-x_{2}^{k+3}x_{3}x_{4}^{n-2k-3}$, which equals $x_{3}x_{4}\cdot\tilde{\eta}_{k+1}$ for $0\leq k< i$ and $-x_{3}x_{4}\cdot\tilde{g_{2}}$ for $k=i$ \item $S(\tilde{f_{2}}, \tilde{l_{1}})=-x_{2}^{2}x_{3}x_{4}^{n-1}=x_{2}\cdot\tilde{l_{2}}$ \item $S(\tilde{f_{2}}, \tilde{l_{2}})=-x_{2}^{2}x_{3}^2x_{4}^{n-2}=x_{2}x_{3}x_{4}^{n-2}\cdot\tilde{f_{2}}-x_{1}\cdot\tilde{l_{2}}$ \end{itemize} Therefore the set $$T=\{q,\tilde{f_{1}},\tilde{f_{2}},\tilde{g_{1}},\tilde{g_{2}},\tilde{\xi_{0}},\tilde{\xi_{t}},\tilde{\eta_{k}},\tilde{l_{1}},\tilde{l_{2}},h\mid 1\leq t\leq n+1, 0\leq k\leq i\}$$ forms a standard basis for the ideal $J+\langle x_{0}\rangle$. Hence, the leading ideal $\mathrm{lead}(J+\langle x_{0}\rangle)$ of $J+\langle x_{0}\rangle$, with respect to the given monomial order, is generated by the following set, \begin{eqnarray*} G & = & \{x_{0}, x_{1}x_{3}, x_{1}x_{4}, x_{1}^2, x_{2}^{i+3}, x_{2}x_{3}^{2}, x_{4}^{n+1}, x_{2}x_{4}^{n}, x_{2}x_{3}x_{4}^{n-1}\} \cup\\ {} & {} & \{x_{3}^{t}x_{4}^{n+1-t},x_{2}^{k+2}x_{4}^{n-2k-2} \mid 1\leq t \leq n+1,0\leq k \leq i \}. \end{eqnarray*} We need to show that $\dim_{k}\left(k[x_{0},\ldots,x_{4}]/(J+\langle x_{0}\rangle)\right)=m_{0}$. We list all monomials which are not divisible by any element of $G$.
\begin{itemize} \item $\{1,x_{1}\}$ \item $\{x_{2},x_{2}^2, \ldots,x_{2}^{i+2}\}$ \item $\{x_{3},x_{3}^{2},\ldots, x_{3}^n\}$ \item $\{x_{4},x_{4}^2,\ldots, x_{4}^n\}$ \item $\{x_{1}x_{2}, x_{1}x_{2}^2,\ldots, x_{1}x_{2}^{i+2}\}$ \item $\{x_{2}x_{3},x_{2}^2x_{3},\ldots,x_{2}^{i+2}x_{3}\}$ \item \begin{align*} &\{x_{2}x_{4},\ldots, x_{2}x_{4}^{n-1}\}\\ &\{x_{2}^2x_{4},\ldots, x_{2}^2x_{4}^{n-3}\}\\ &\vdots\\ &\{x_{2}^{i+2}x_{4},\ldots, x_{2}^{i+2}x_{4}^{n-2i-3}\}\\ \end{align*} \item \noindent\begin{align*} &\{x_{3}x_{4},\ldots,x_{3}x_{4}^{n-1}\}\\ &\{x_{3}^{2}x_{4},\ldots, x_{3}^2x_{4}^{n-2}\}\\ &\vdots\\ &\{x_{3}^{n-1}x_{4}\} \end{align*} \item \noindent\begin{align*} &\{x_{2}x_{3}x_{4},\ldots, x_{2}x_{3}x_{4}^{n-2}\}\\ &\{x_{2}^{2}x_{3}x_{4},\ldots, x_{2}^{2}x_{3}x_{4}^{n-3}\}\\ &\{x_{2}^{3}x_{3}x_{4},\ldots, x_{2}^{3}x_{3}x_{4}^{n-5}\}\\ &\vdots\\ &\{x_{2}^{i+2}x_{3}x_{4},\ldots, x_{2}^{i+2}x_{3}x_{4}^{n-2i-3}\} \end{align*} \end{itemize} The total number of monomials listed above is \\ $2+(i+2)+2n+(i+2)+(i+2)+(i+2)(n-i-2)+\dfrac{n(n-1)}{2}+(i+2)(n-i-2)-1$.\\ Using $(i+2)=\dfrac{n}{2}$ we get\\ $2(n+1)+\dfrac{3n}{2}+\dfrac{n^{2}}{2}-\dfrac{n^{2}}{4}+\dfrac{n^{2}}{2}- \dfrac{n}{2}+\dfrac{n^{2}}{2}-\dfrac{n^{2}}{4}-1=n^{2}+3n+1=m_{0}$.\\ Therefore, by Theorem \ref{gastinger}, the set $S$ is a generating set for the defining ideal of $\mathfrak{S}_{(n,5)}$.
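The count can be cross-checked by machine for a small case. The sketch below is illustrative ($i=2$, $n=8$, $m_{0}=89$); note that the standard-basis element $h=x_{2}x_{3}^{2}$ contributes a generator of the leading ideal as well.

```python
from itertools import product

# Enumerate monomials not divisible by any generator of the leading ideal,
# for i = 2, n = 8; the count should equal m0 = n^2 + 3n + 1 = 89.
i = 2
n = 2 * i + 4
m0 = n * n + 3 * n + 1

# generators of lead(J + <x0>) as (x1, x2, x3, x4)-exponent vectors;
# x0 itself is a generator, so standard monomials contain no x0
G = [(1, 0, 1, 0), (1, 0, 0, 1), (2, 0, 0, 0), (0, i + 3, 0, 0),
     (0, 0, 0, n + 1), (0, 1, 0, n), (0, 1, 1, n - 1), (0, 1, 2, 0)]
G += [(0, 0, t, n + 1 - t) for t in range(1, n + 2)]
G += [(0, k + 2, 0, n - 2 * k - 2) for k in range(i + 1)]

def divisible(mono, g):
    return all(ge <= me for ge, me in zip(g, mono))

standard = [mono for mono in
            product(range(3), range(i + 4), range(n + 2), range(n + 2))
            if not any(divisible(mono, g) for g in G)]
print(len(standard))   # 89
```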
To prove minimality of the generating set, we show that no element of it can be expressed in terms of the others.
Let $\pi_{024}: k[x_{0},x_{1},x_{2},x_{3},x_{4}]\rightarrow k[x_{1},x_{3}]$ be defined as\\ $\pi_{024}(x_{0})=0, \pi_{024}(x_{1})=x_{1}, \pi_{024}(x_{2})=0, \pi_{024}(x_{3})=x_{3},\pi_{024}(x_{4})=0$. Let $$f_{1}= x_{1}x_{3}-x_{0}x_{4}= c_{f2} f_{2}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+c_{l1} l_{1}+c_{l2}l_{2},$$ for $c_{f2}, c_{g1}, c_{g2}, c_{\xi t}, c_{\eta k}, c_{l1}, c_{l2}\in k[x_{0},x_{1},x_{2},x_{3},x_{4}]$. Applying $\pi_{024}$ to both sides of the equation (only $g_{1}$, $\xi_{0}$ and $\xi_{n+1}$ have nonzero image), we get $$x_{1}x_{3}=c_{g1}(0,x_{1},0,x_{3},0)x_{1}^{2}+c_{\xi 0}(0,x_{1},0,x_{3},0)x_{1}^{n+2}-c_{\xi(n+1)}(0,x_{1},0,x_{3},0)x_{3}^{n+1}.$$ Every monomial on the right-hand side lies in the ideal $(x_{1}^{2},x_{3}^{n+1})$ of $k[x_{1},x_{3}]$, while $x_{1}x_{3}$ does not; hence the above equation is not possible.
Let $\pi_{014}: k[x_{0},x_{1},x_{2},x_{3},x_{4}]\rightarrow k[x_{2},x_{3}]$ be defined as\\
$\pi_{014}(x_{0})=0, \pi_{014}(x_{1})=0, \pi_{014}(x_{2})=x_{2}, \pi_{014}(x_{3})=x_{3},\pi_{014}(x_{4})=0$. Let $$f_{2}= x_{2}x_{3}-x_{1}x_{4}= c_{f1} f_{1}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+c_{l1} l_{1}+c_{l2}l_{2},$$ for $c_{f1}, c_{g1}, c_{g2}, c_{\xi t}, c_{\eta k}, c_{l1}, c_{l2}\in k[x_{0},x_{1},x_{2},x_{3},x_{4}]$. Applying $\pi_{014}$ to both sides (only $g_{2}$ and $\xi_{n+1}$ have nonzero image), we get $$x_{2}x_{3}=c_{g2}(0,0,x_{2},x_{3},0)x_{2}^{i+3}-c_{\xi(n+1)}(0,0,x_{2},x_{3},0)x_{3}^{n+1}.$$ Every monomial on the right-hand side lies in the ideal $(x_{2}^{i+3},x_{3}^{n+1})$ of $k[x_{2},x_{3}]$, while $x_{2}x_{3}$ does not; hence the above equation is not possible.
Let $$g_{1}= x_{1}^{2}-x_{0}x_{2}= c_{f1} f_{1}+c_{f2}f_{2}+c_{g2}g_{2}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+c_{l1} l_{1}+c_{l2}l_{2},$$ for $c_{f1}, c_{f2}, c_{g2}, c_{\xi t}, c_{\eta k}, c_{l1}, c_{l2}\in k[x_{0},x_{1},x_{2},x_{3},x_{4}]$. Applying $\pi_{024}$ to both sides (only $f_{1}$, $\xi_{0}$ and $\xi_{n+1}$ have nonzero image), we get $$x_{1}^{2}= c_{f1}(0,x_{1},0,x_{3},0)x_{1}x_{3}+c_{\xi 0}(0,x_{1},0,x_{3},0)x_{1}^{n+2}-c_{\xi(n+1)}(0,x_{1},0,x_{3},0)x_{3}^{n+1}.$$ Every monomial on the right-hand side lies in the ideal $(x_{1}x_{3},x_{1}^{n+2},x_{3}^{n+1})$ of $k[x_{1},x_{3}]$; any monomial of this ideal that is free of $x_{3}$ is divisible by $x_{1}^{n+2}$, so $x_{1}^{2}$ does not belong to it, and the above equation is not possible.
Let $$g_{2}= x_{2}^{i+3}-x_{0}^{i+2}x_{3}= c_{f1} f_{1}+c_{f2}f_{2}+c_{g1}g_{1}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+c_{l1} l_{1}+c_{l2}l_{2},$$ for $c_{f1}, c_{f2}, c_{g1}, c_{\xi t}, c_{\eta k}, c_{l1}, c_{l2}\in k[x_{0},x_{1},x_{2},x_{3},x_{4}]$. Applying $\pi_{014}$ to both sides (only $f_{2}$ and $\xi_{n+1}$ have nonzero image), we get $$x_{2}^{i+3}= c_{f2}(0,0,x_{2},x_{3},0)x_{2}x_{3}-c_{\xi(n+1)}(0,0,x_{2},x_{3},0)x_{3}^{n+1}.$$ Every monomial on the right-hand side contains $x_{3}$, while $x_{2}^{i+3}$ does not; hence the above equation is not possible.
Let $\pi_{01}: k[x_{0},x_{1},x_{2},x_{3},x_{4}]\rightarrow k[x_{2},x_{3},x_{4}]$ be defined as\\ $\pi_{01}(x_{0})=0, \pi_{01}(x_{1})=0, \pi_{01}(x_{2})=x_{2}, \pi_{01}(x_{3})=x_{3},\pi_{01}(x_{4})=x_{4}$.\\ Let $$\xi_{t}=x_{0}^tx_{1}^{n+2-t}-x_{3}^{t}x_{4}^{n+1-t}=c_{f1} f_{1}+c_{f2}f_{2}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{t'\neq t}c_{\xi t'}\xi_{t'}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+c_{l1} l_{1}+c_{l2}l_{2}.$$ Applying $\pi_{01}$ and then setting $x_{2}=0$, every term coming from $f_{1}$ and $g_{1}$ vanishes under $\pi_{01}$, and every term coming from $f_{2}, g_{2}, \eta_{k}, l_{1}, l_{2}$ vanishes because it contains $x_{2}$; we are left with $$-x_{3}^{t}x_{4}^{n+1-t} = -\sum\limits_{t'\neq t}c_{\xi t'}(0,0,0,x_{3},x_{4})\,x_{3}^{t'}x_{4}^{n+1-t'}, \quad 0\leq t\leq n+1.$$ The right-hand side lies in the monomial ideal of $k[x_{3},x_{4}]$ generated by $\{x_{3}^{t'}x_{4}^{n+1-t'}\mid t'\neq t\}$, and $x_{3}^{t'}x_{4}^{n+1-t'}$ divides $x_{3}^{t}x_{4}^{n+1-t}$ only when $t'=t$; therefore $x_{3}^{t}x_{4}^{n+1-t}$ does not belong to this ideal, and the above equation is not possible.
Let $\pi_{124}: k[x_{0}, x_{1}, x_{2}, x_{3}, x_{4}]\rightarrow k[x_{0}, x_{3}]$ be defined as\\ $\pi_{124}(x_{0}) = x_{0}, \pi_{124}(x_{1}) = 0, \pi_{124}(x_{2}) = 0, \pi_{124}(x_{3}) = x_{3}, \pi_{124}(x_{4})=0$. Let $$\eta_{k}=x_{0}^{k+1}x_{3}^{n-2k-1}-x_{2}^{k+2}x_{4}^{n-2k-2}=c_{f1} f_{1}+c_{f2}f_{2}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+\sum\limits_{k'\neq k}c_{\eta k'}\eta_{k'}+c_{l1} l_{1}+c_{l2}l_{2}.$$ Applying $\pi_{124}$ to both sides (only $g_{2}$, $l_{2}$ and the $\eta_{k'}$ have nonzero image), we get $$x_{0}^{k+1}x_{3}^{n-2k-1}= -c_{g2}(x_{0},0,0,x_{3},0)x_{0}^{i+2}x_{3}+c_{l2}(x_{0},0,0,x_{3},0)x_{0}^{n+2}+\sum\limits_{k'\neq k}c_{\eta k'}(x_{0},0,0,x_{3},0)x_{0}^{k'+1}x_{3}^{n-2k'-1}.$$ The right-hand side lies in the monomial ideal generated by $x_{0}^{i+2}x_{3}$, $x_{0}^{n+2}$ and $\{x_{0}^{k'+1}x_{3}^{n-2k'-1}\mid k'\neq k\}$; since $0\leq k\leq i$ gives $k+1<i+2\leq n+2$, and $x_{0}^{k'+1}x_{3}^{n-2k'-1}$ divides $x_{0}^{k+1}x_{3}^{n-2k-1}$ only when $k'=k$, none of these monomials divides $x_{0}^{k+1}x_{3}^{n-2k-1}$; hence the above equation is not possible.
Let $\pi_{013}: k[x_{0},x_{1},x_{2},x_{3},x_{4}]\rightarrow k[x_{2},x_{4}]$ be defined as\\ $\pi_{013}(x_{0}) = 0, \pi_{013}(x_{1}) = 0, \pi_{013}(x_{2}) = x_{2}, \pi_{013}(x_{3}) = 0, \pi_{013}(x_{4})=x_{4}$. Let $$l_{1}= x_{0}^{n+1}x_{1}-x_{2}x_{4}^n =c_{f1} f_{1}+c_{f2}f_{2}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+c_{l2}l_{2}.$$ Applying $\pi_{013}$ to both sides (only $g_{2}$, $\xi_{0}$ and the $\eta_{k}$ have nonzero image), we get $$-x_{2}x_{4}^n= c_{g2}(0,0,x_{2},0,x_{4})x_{2}^{i+3}-c_{\xi 0}(0,0,x_{2},0,x_{4})x_{4}^{n+1}-\sum\limits_{k=0}^{i}c_{\eta k}(0,0,x_{2},0,x_{4})x_{2}^{k+2}x_{4}^{n-2k-2}.$$ Every monomial on the right-hand side is divisible by $x_{2}^{2}$ or by $x_{4}^{n+1}$, whereas $x_{2}x_{4}^{n}$ is divisible by neither. Hence the above equation is not possible.
Let $\pi_{234}: k[x_{0},x_{1},x_{2},x_{3},x_{4}]\rightarrow k[x_{0},x_{1}]$ be defined as\\ $\pi_{234}(x_{0})=x_{0}, \pi_{234}(x_{1})=x_{1}, \pi_{234}(x_{2})=0, \pi_{234}(x_{3})=0, \pi_{234}(x_{4})=0$. Let $$l_{2}= x_{0}^{n+2}-x_{2}x_{3}x_{4}^{n-1}=c_{f1} f_{1}+c_{f2}f_{2}+c_{g1}g_{1}+c_{g2}g_{2}+\sum\limits_{k=0}^{i}c_{\eta k}\eta_{k}+\sum\limits_{t=0}^{n+1}c_{\xi t}\xi_{t}+c_{l1}l_{1}.$$ Applying $\pi_{234}$ to both sides we get $$x_{0}^{n+2}= c_{g1}(x_{0},x_{1},0,0,0)x_{1}^{2}+\sum\limits_{t=0}^{n+1}c_{\xi t}(x_{0},x_{1},0,0,0)x_{0}^{t}x_{1}^{n+2-t}+c_{l1}(x_{0},x_{1},0,0,0)x_{0}^{n+1}x_{1}.$$ Since each term of the R.H.S. contains $x_{1}$ (as $t\leq n+1$), while the L.H.S. does not, the above equation is not possible. \qed
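The projection arguments above are easy to sanity-check numerically. The following sketch (illustrative only; $i=2$, $n=8$) evaluates the generators at points with $x_{0}=x_{2}=x_{4}=0$ and confirms that only $g_{1}$, $\xi_{0}$ and $\xi_{n+1}$ survive $\pi_{024}$.

```python
# Numeric spot-check of the projection pi_{024} (set x0 = x2 = x4 = 0),
# for i = 2 and n = 8: every generator other than g1, xi_0 and xi_{n+1}
# vanishes identically under this substitution.
i = 2
n = 2 * i + 4

def gens(x0, x1, x2, x3, x4):
    f1 = x1 * x3 - x0 * x4
    f2 = x2 * x3 - x1 * x4
    g1 = x1**2 - x0 * x2
    g2 = x2**(i + 3) - x0**(i + 2) * x3
    xi = [x0**t * x1**(n + 2 - t) - x3**t * x4**(n + 1 - t) for t in range(n + 2)]
    eta = [x0**(k + 1) * x3**(n - 2 * k - 1) - x2**(k + 2) * x4**(n - 2 * k - 2)
           for k in range(i + 1)]
    l1 = x0**(n + 1) * x1 - x2 * x4**n
    l2 = x0**(n + 2) - x2 * x3 * x4**(n - 1)
    return f1, f2, g1, g2, xi, eta, l1, l2

# evaluate at several (x1, x3) points with x0 = x2 = x4 = 0
for x1, x3 in [(2, 3), (5, 7), (1, 4)]:
    f1, f2, g1, g2, xi, eta, l1, l2 = gens(0, x1, 0, x3, 0)
    vanishing = [f2, g2, l1, l2] + eta + [xi[t] for t in range(1, n + 1)]
    assert all(v == 0 for v in vanishing)
    assert f1 == x1 * x3 and g1 == x1**2
    assert xi[0] == x1**(n + 2) and xi[n + 1] == -x3**(n + 1)
print("projection check passed")
```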
\end{document} |
\begin{document}
\title[]{Shape Coherence and Finite-Time Curvature Evolution}
\author{Tian Ma}
\author{Erik M. Bollt}
\email{[email protected]} \affiliation{ Department of Mathematics and Computer Science, Clarkson University, USA. }
\date{\today}
\begin{abstract} We introduce a definition of finite-time curvature evolution along with our recent study on shape coherence in nonautonomous dynamical systems. In contrast to slowly evolving curvature, which preserves shape, points of large curvature growth reveal dramatic changes of shape, such as folding behavior in a system. Closed trough curves of low values in the finite-time curvature (FTC) evolution field indicate the existence of shape coherent sets, and troughs in the field indicate the most significant shape coherence. Here we demonstrate these properties of the FTC, and contrast it with the popular Finite-Time Lyapunov Exponent (FTLE) computation, often used to indicate hyperbolic material curves as Lagrangian Coherent Structures (LCS). We show that the FTC troughs are often in close proximity to the FTLE ridges, but in other scenarios the FTC indicates entirely different regions.
\end{abstract}
\maketitle
Coherence has clearly become a central concept of interest in nonautonomous dynamical systems, particularly in the study of turbulent flows, with many recent papers aimed at describing, quantifying and constructing such sets \cite{MB1, HB, FSM, MB, FK, OG2, TR}. There have been a wide range of notions of coherence, from spectral \cite{LumleyHolmes} to set oriented \cite{DJ}, through transfer operators \cite{FK,FSM} as well as variational principles \cite{M}, and even topological methods \cite{Thiffeault2, Ross}.
Traditionally there has been an emphasis on vorticity \cite{HU}, but generally an understanding that coherent motions have a role in the maintenance (production and dissipation) of turbulence in a boundary layer \cite{RB}. A number of theories have been developed to model and analyze the dynamics in the Lagrangian perspective (moving frame), such as the geodesic transport barriers \cite{HB} and transfer operator methods \cite{FSM}. These have included analysis of coherence in important problems such as how regions of fluids are isolated from each other \cite{Thiffeault2}, including the prediction of oceanic structures \cite{FK1} and atmospheric forecasting \cite{Ross1, Ross2}, especially for understanding the movement of pollution such as oil spills \cite{Haller3, Mezic, BolltGulf}. Whatever perspective is taken, we may summarize that a coherent structure is a region of simplicity, within the observed time scale and stated spatial scale, perhaps embedded within an otherwise possibly turbulent flow \cite{HB, FSM, FK, MB1}.
In particular, ridges of Finite-Time Lyapunov Exponent (FTLE) fields have been widely used \cite{H1, H2,SLM,TR} to indicate hyperbolic material curves, often called Lagrangian coherent structures (LCS). We contrast here the fundamental nonlinear notion of ``stretching" encapsulated in the FTLE concept with ``folding", a complementary concept of a nonlinear dynamical system which must be present if a material curve is to stretch indefinitely within a compact domain. As we will show, exploring the much-overlooked folding concept leads to studying curvature changes of material curves, yielding an elegant description of coherence that we call shape coherence \cite{MB}. We introduce here a method of visualizing the propensity of a material curve to change its curvature, which we call the Finite-Time Curvature (FTC) field. Contrasting the FTC with the FTLE, we will illustrate that the FTC troughs indicative of shape coherence are often co-mingled in close proximity to ridges of the FTLE, in which case they tell a generally similar story. However, we show that in many cases the FTC troughs occur in locations not near an FTLE ridge, indicating entirely different regions. Thus we view these as complementary concepts, stretch and fold, as revealed by the traditional FTLE and the here introduced FTC.
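For concreteness, the FTLE of a flow map can be computed from the largest singular value of its Jacobian, $\sigma_T(z)=\frac{1}{|T|}\log\|D\Phi_T(z)\|_2$. A minimal, hypothetical example (a linear saddle flow, not one of the systems studied below) is:

```python
import numpy as np

# A minimal, hypothetical FTLE computation for the linear saddle flow
# (z1', z2') = (z2, z1): the time-T flow map has Jacobian
# DPhi_T = [[cosh T, sinh T], [sinh T, cosh T]], so the FTLE
# (1/T) log sigma_max(DPhi_T) equals the Lyapunov exponent 1 exactly.
T = 10.0
DPhi = np.array([[np.cosh(T), np.sinh(T)], [np.sinh(T), np.cosh(T)]])
ftle = np.log(np.linalg.svd(DPhi, compute_uv=False)[0]) / T
print(round(ftle, 6))   # 1.0
```

In a generic flow the Jacobian is not available in closed form, and is instead estimated from finite differences of nearby trajectories over each grid point.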
We have recently presented a mathematical interpretation of coherence \cite{MB} in terms of a definition of {\it shape coherent sets},
motivated by a simple observation regarding sets that ``hold together'' over finite-time in nonautonomous dynamical systems. As a general setup, assume an area preserving system that can be represented,
$\dot{z}=g(z,t), $
for $z(t)\in \Re^2$, with enough regularity of $g$ so that a corresponding flow, $\Phi_T(z_0):\Re\times \Re^2\rightarrow \Re^2$, exists. To capture the idea of a set that roughly preserves its own shape, we define \cite{MB} the {\it shape coherence factor} $\alpha$ between two sets $A$ and $B$ under an area preserving flow $\Phi_t$ over a finite time interval $[0, T]$, {\begin{eqnarray}\label{shapecoherenced} \displaystyle \alpha(A, B,T) := \sup_{S(B)} \frac{m( S(B) \cap \Phi_{ T}(A)) }{m(B)}, \end{eqnarray}} where $m(\cdot)$ here denotes Lebesgue measure, and we restrict the domain of $\alpha$ to sets such that $m(B)\neq 0$ by assumptions to follow that $B$ should be a fundamental domain \cite{Complex}. Here, $S(B)$ is the group of transformations of rigid body motions of $B$, specifically translations and rotations, descriptive of {\it frame invariance} \cite{Carmo}. We say $A$ is finite time shape coherent to $B$ with shape coherence factor $\alpha$, under the flow $\Phi_T$ after the time epoch $T$. We call $B$ the {\it reference set}, and $A$ shall be called the {\it dynamic set}. If we choose $B = A$, we can verify to what degree a set $A$ preserves its shape over the time epoch $T$. Notice that the shape of $A$ may vary during the time interval, but for a high shape coherence, the shapes must be similar at the terminal times.
By the area preserving assumption, $0 \leq \alpha \leq 1$, and values closer to $1$ indicate a set for which the otherwise nonlinear flow restricted to $A$ is much simpler, at least on the time scale $T$ and on the spatial scale corresponding to $A$; that is $\Phi_T|_A$, the flow restricted to $A$ is roughly much simpler than a turbulent system, as it is much more like a rigid body motion. This does not preclude on finer scales, that there may be turbulence within a shape coherent set.
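A discretized illustration of Eq.~(\ref{shapecoherenced}) may help fix ideas. In the hypothetical sketch below, the sup over $S(B)$ is approximated by a coarse search over rotations only (translations are omitted since both sets are centered at the origin), and the flow is itself a rigid rotation of an ellipse, for which $\alpha$ should be essentially $1$; the ellipse, grid, and angle step are illustrative assumptions.

```python
import numpy as np

# Discretized sketch of the shape coherence factor: the sup over S(B)
# is approximated by a coarse search over rotations of B.

def in_ellipse(x, y):
    return x**2 + (y / 0.5)**2 <= 1.0

h = 0.02
xs = np.arange(-2.0, 2.0, h)
X, Y = np.meshgrid(xs, xs)

B = in_ellipse(X, Y)          # reference set B = A
mB = B.sum()                  # m(B), in grid cells

t = 0.7                       # the flow: rigid rotation by angle t
ct, st = np.cos(t), np.sin(t)
phiA = in_ellipse(ct * X + st * Y, -st * X + ct * Y)   # z in Phi_T(A) iff Phi_{-T}(z) in A

alpha = 0.0
for theta in np.arange(0.0, np.pi, 0.05):              # search over rotations S(B)
    c, s = np.cos(theta), np.sin(theta)
    SB = in_ellipse(c * X + s * Y, -s * X + c * Y)
    alpha = max(alpha, (SB & phiA).sum() / mB)

print(round(alpha, 3))        # close to 1: a rigid rotation is perfectly shape coherent
```

For a genuinely nonlinear flow, $\Phi_{-T}$ would be obtained by integrating trajectories backward rather than by an explicit inverse map.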
Recall that for any material curve, $\gamma(s,t)=(x_1(s,t),x_2(s,t))$ of initial conditions defining an initial segment $\gamma(s,0)=(x_1(s,0),x_2(s,0))$, $a\leq s\leq b$ where each point on the curve evolves in time $t$ according to the differential equation, the curvature at time $t$ may be written in terms of the parametric derivative along the curve segment, $d/ds:='$,
$k(s,t)=\frac{|x_1'x_2''-x_2'x_1''|}{(x_1'^2+x_2'^2)^{3/2}}$. We will relate the pointwise changes of this curvature function for points on those material curves that correspond to shape coherence.
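Numerically, $k(s,t)$ can be estimated from sampled curve points by finite differences. A minimal sketch, using an assumed toy curve (a circle of radius $R$, for which $k\equiv 1/R$):

```python
import numpy as np

# Finite-difference estimate of the curvature k(s) of a parametric plane
# curve via k = |x1' x2'' - x2' x1''| / (x1'^2 + x2'^2)^{3/2}.

def curvature(x1, x2, ds):
    x1p, x2p = np.gradient(x1, ds), np.gradient(x2, ds)
    x1pp, x2pp = np.gradient(x1p, ds), np.gradient(x2p, ds)
    return np.abs(x1p * x2pp - x2p * x1pp) / (x1p**2 + x2p**2)**1.5

s = np.linspace(0.0, 2.0 * np.pi, 2001)
R = 2.0
k = curvature(R * np.cos(s), R * np.sin(s), s[1] - s[0])
print(k[1000])   # interior values approximate 1/R = 0.5
```

Tracking such estimates along evolving material curves is what underlies the FTC field discussed below.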
\begin{figure}
\caption{ Tangency points of stable and unstable foliations are highlighted, as at such points infinitesimal curve elements experience slowly changing curvature.}
\label{IdeaOfFTC}
\end{figure}
\begin{figure}
\caption{(a) The finite-time stable and unstable foliation fields $f_s^t(z)$ and $f_u^t(z)$ for the Rossby wave system. Notice that for each point there are two vectors, and therefore an associated angle $\theta(z, t)$. (b) Zero-splitting curves corresponding to boundaries of shape coherent sets with a significant shape coherence factor, Eq.~(\ref{shapecoherenced}).
The parameters are $U_0=44.31, c_2=0.2055U_0, c_3=0.462U_0, A_3=0.3, A_2=0.12, A_1=0.075$ \cite{RBB}, and the time epoch is $T=10$ days. }
\label{FoliationFieldAndCurves}
\end{figure}
The analysis of the geometry of shape coherent sets $A$
depends on the boundary of these sets, $\partial A$. In the following we restrict to simply connected sets whose boundary is a smooth, simple closed curve, $\partial A=\gamma(s), 0 \leq s \leq 1$; these are often called ``fundamental domains" \cite{Complex}, and these $B=A$ are in the domain of $\alpha$. We may relate shape coherence to classical differential geometry, whereby two curves are defined to be congruent if their underlying curvature functions can be matched exactly, pointwise \cite{Carmo}. Therefore, considering the Frenet-Serret formulas \cite{Carmo}, it can be proved \cite{MB1} through a series of regularity theorems that sets whose boundaries have a slowly evolving propensity to change curvature have a significant degree of shape coherence; that is, $\alpha(A,\Phi_T(A))\approx 1$. Furthermore, a sufficient condition theorem connects the geometry: points $z$ where there is a tangency between the finite time stable and unstable foliations $f_u^t(z)$, $f_s^t(z)$ must correspond to slowly changing curvature. In Fig.~\ref{IdeaOfFTC}, we indicate the geometry of stable and unstable foliations corresponding to tangency or near tangency, where curves passing through such points experience slowly changing curvature and hence are indicative of points on the boundaries of shape coherent sets \cite{MB1}. Hence, finding shape coherent sets leads us to search for curves of tangency points as the boundaries of such sets, which we review below. Much has been written about the role of how stable and unstable manifolds can become reversed at tangency points, in that errors can grow transversely to the unstable manifolds, as noted in \cite{Kantz, BSLZ, ZB}. Scaling relationships for the frequency of given curvatures \cite{Thiffeault, Thiffeault1, TA}, \cite{Liu,Drummond1,Drummond2,Pope,Ishihara}, as well as the propensity of curvature growth in turbulent systems \cite{OG,OG1,OG2,OG3}, have both been studied.
We review: the finite time stable foliation $f_s^t(z)$ at a point $z$ describes the dominant direction of local contraction in forward time, and the finite time unstable foliation $f_u^t(z)$ describes the dominant direction of contraction in ``backward" time. These vectors have a long history in the stability analysis of dynamical systems, particularly related to Lyapunov exponents and directions \cite{Ulrich, BN}, and lately in \cite{HB}. See Fig.~\ref{IdeaOfFTC}. The derivative $D\Phi_t(z)$ of the flow $\Phi_t(\cdot)$ evaluated at the point $z$ maps a circle onto an ellipse, as does any nonsingular matrix \cite{Geometry}; the infinitesimal geometry of a small disc of variations near $\Phi_t(z)$ is shown in Fig.~\ref{IdeaOfFTC}. Likewise, a disc centered on $\Phi_t(z)$ pulls back under $D\Phi_{-t}({\Phi_{t}(z)})$ to an ellipsoid centered on $z$. The major axis of that infinitesimal ellipsoid defines $f_s^t(z)$, the stable foliation at $z$. Likewise, from $\Phi_{-t}(z)$, a small disc of variations pushes forward under $D\Phi_{t}({\Phi_{-t}(z)})$ to an ellipsoid, the major axis of which defines $f_u^t(z)$. These major axes can be readily computed in terms of the singular value decomposition \cite{VanLoan} of the derivative matrices, as noted regarding the Lyapunov directions \cite{Ulrich, BN, Oseledets} and recently \cite{DanielarXiv}. Let $D\Phi_{t}({z})=U\Sigma V^*$, where $^*$ denotes the transpose of a matrix, $U$ and $V$ are orthogonal matrices, and $\Sigma=diag(\sigma_1,\sigma_2)$ is a diagonal matrix with $\sigma_1\geq\sigma_2>0$. Writing $V=[v_1,v_2]$ and $U=[u_1,u_2]$, note that $D \Phi_{t}({z}) v_1=\sigma_1 u_1$ describes the vector $v_1$ at $z$ that maps onto the major axis $\sigma_1 u_1$ at $\Phi_t(z)$. Since $\Phi_{-t}\circ \Phi_t(z)=z$ and $D\Phi_{-t}(\Phi_t(z)) D \Phi_t(z)=I$, recalling the orthogonality of $U$ and $V$ yields $D\Phi_{-t}({\Phi_t(z)})=V\Sigma^{-1} U^*$, with $\Sigma^{-1}=diag(\frac{1}{\sigma_1},\frac{1}{\sigma_2})$. 
Therefore $\frac{1}{\sigma_2}\geq \frac{1}{\sigma_1}$, and the dominant axis of the pullback of an infinitesimal disc from $\Phi_t(z)$ comes from $D \Phi_{-t}({\Phi_t(z)}) u_2=\frac{1}{\sigma_2} v_2$. Hence, \begin{equation}
f_s^t(z)=v_2, \mbox{ and, } f_u^t(z)=\overline{u}_1,
\end{equation} where $v_2$ is the second right singular vector of $D\Phi_{t}({z})=U\Sigma V^*$ and likewise, $\overline{u}_1$ is the first left singular vector of $D\Phi_{t}({\Phi_{-t}(z)})=\overline{U} \mbox{ } \overline{\Sigma} \mbox{ } \overline{V}^*.$
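In practice, the two foliation directions can be read off directly from numerical singular value decompositions of the flow derivative, exactly as described above. A minimal sketch (the finite-difference approximation of $D\Phi_t$ itself is left to the caller):

```python
import numpy as np

def foliation_directions(dphi_at_z, dphi_at_phi_minus_t_z):
    """f_s^t(z) = v2, the second right singular vector of DPhi_t(z) = U Sigma V^*,
    and f_u^t(z) = u1_bar, the first left singular vector of DPhi_t(Phi_{-t}(z))."""
    _, _, Vt = np.linalg.svd(dphi_at_z)
    f_s = Vt[1]                  # second row of V^* is the column v2 of V
    Ub, _, _ = np.linalg.svd(dphi_at_phi_minus_t_z)
    f_u = Ub[:, 0]               # first left singular vector u1_bar
    return f_s, f_u
```

For the linear saddle flow with $D\Phi_t=\mathrm{diag}(e^{t},e^{-t})$ everywhere, $f_s^t$ aligns with the contracting $y$-axis and $f_u^t$ with the expanding $x$-axis, as expected.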
\begin{figure}
\caption{(a)
shows the maxFTC fields $C_{t_0}^{t_0+\tau}(z)$ for different parameter groups of the Rossby wave system. Note that in both figures, level curves of relatively smaller maxFTC $C_{t_0}^{t_0+\tau}(z)$ from Eq.~(\ref{ftc}) indicate that there exist material curves whose curvature changes slowly (blue curves), and these correspond to the zero-splitting curves in Fig.~\ref{FoliationFieldAndCurves} \cite{MB1}. At the top and sides of (a) and (b) we show slices of the maxFTC function along the respective red lines. Large variation of these slice functions indicates the boundary of shape coherent sets; the interiors correspond to slow variation of the function and to shape coherence, and generally those boundaries are indicated by low values of the maxFTC, a small propensity to grow curvature. The fast varying nature at boundaries indicates that high curvature change is often closely proximal to low curvature change, as within a (hetero)homoclinic tangle, where tangencies and hyperbolicity often co-exist.
(c) The FTC field, $r_{t_0}^{t_0+\tau}(z)$, Eq.~(\ref{ftcfield}).
}
\label{RWFTC}
\end{figure}
For the sake of further presentation, a specific example will be helpful. We choose the Rossby wave system \cite{RBB}, an idealized zonal stratospheric flow. Consider the Hamiltonian system $dx/dt=-\partial H/ \partial y$, $dy/dt=\partial H/ \partial x$, where
$H(x,y,t)=c_3y-U_0L\tanh(y/L) +A_3U_0L\,\mathrm{sech}^2(y/L)\cos(k_1x)+A_2U_0L\,\mathrm{sech}^2(y/L) \cos(k_2x-\sigma _2t) +A_1U_0L\,\mathrm{sech}^2(y/L)\cos(k_1x-\sigma _1t) $. In Fig.~\ref{FoliationFieldAndCurves}a we show simultaneously the stable and unstable foliation fields, $f_s^t(z)$ and $f_u^t(z)$, of this system, together with the curves of zero angle in Fig.~\ref{FoliationFieldAndCurves}b, $\theta(z, t)=0$, where
$\theta(z, t):=\arccos\frac{\left\langle f_s^t(z), f_u^t(z) \right\rangle}{\|f_s^t(z) \| \| f_u^t(z) \|}$, found by the implicit function theorem as described in \cite{MB1}; these correspond to significant shape coherence.
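For concreteness, the Hamiltonian velocity field can be sketched as below, taking the partial derivatives of $H$ by centered differences. The wavenumbers $k_1,k_2$, the length scale $L$, and the phase speeds $\sigma_1,\sigma_2$ in the test are placeholders; the actual values follow \cite{RBB}.

```python
import numpy as np

def rossby_H(x, y, t, p):
    """The Rossby wave Hamiltonian (streamfunction); p holds the parameters."""
    U0, L, c3, A1, A2, A3, k1, k2, s1, s2 = p
    sech2 = 1.0 / np.cosh(y / L) ** 2
    return (c3 * y - U0 * L * np.tanh(y / L)
            + A3 * U0 * L * sech2 * np.cos(k1 * x)
            + A2 * U0 * L * sech2 * np.cos(k2 * x - s2 * t)
            + A1 * U0 * L * sech2 * np.cos(k1 * x - s1 * t))

def rossby_velocity(x, y, t, p, h=1e-6):
    """dx/dt = -dH/dy, dy/dt = dH/dx, via centered differences of H."""
    dHdx = (rossby_H(x + h, y, t, p) - rossby_H(x - h, y, t, p)) / (2 * h)
    dHdy = (rossby_H(x, y + h, t, p) - rossby_H(x, y - h, t, p)) / (2 * h)
    return -dHdy, dHdx
```

Because the field is Hamiltonian, it is divergence-free, which is the area preservation assumed throughout.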
The main contribution of this paper is to show that this detail can be skipped: the FTC we introduce significantly simplifies the geometry and facilitates the computation.
\begin{figure}
\caption{The partition of the Rossby wave system from the diffusion-like ``seeded region growing" method applied to the FTC seen in Fig.~\ref{RWFTC}. The number adjacent to each letter a-j indicates the shape coherence $ \alpha(A, \Phi_T(A),T)$ of the set shown.}
\label{Partition}
\end{figure}
The intuition behind the FTC development is based on the idea that folding behaviors involve the maximal propensity of changing curvature. This suggests that regions of space corresponding to slowly changing curvature include boundaries of significant shape coherence. We define the {\bf maximum finite-time curvature (maxFTC)}, $C_{t_0}^{t_0+\tau}(z)$, and {\bf minimum finite-time curvature (minFTC)}, $c_{t_0}^{t_0+\tau}(z)$, for a point $z$ in the plane under a flow $\Phi_{t_0}^{t_0+\tau}$ over the time interval $[t_0, t_0+ \tau]$ by, \begin{eqnarray}\label{ftc}
C_{t_0}^{t_0+\tau}(z)=\lim_{\varepsilon \to 0} \sup_{\|v\|=1} \kappa(\Phi_{t_0}^{t_0+\tau}(l_{\varepsilon, v}(z))),\\
c_{t_0}^{t_0+\tau}(z)=\lim_{\varepsilon \to 0} \inf_{\|v\|=1} \kappa(\Phi_{t_0}^{t_0+\tau}(l_{\varepsilon, v}(z))) \end{eqnarray} where,
$l_{\varepsilon, v}(z):=\{\hat{z}=z+\varepsilon s v, |s|<1\}, $ and $v$ is a unit vector. So, $l_{\varepsilon, v}(z)$ is a small line segment passing through the point $z=(x,y)$, when $\varepsilon\ll 1$.
Finally, we find it most useful to define the ratio of these,
\begin{equation}\label{ftcfield}
r_{t_0}^{t_0+\tau}(z)=\frac{C_{t_0}^{t_0+\tau}(z)}{c_{t_0}^{t_0+\tau}(z)},
\end{equation}
which we simply call the {\bf finite-time curvature field,} or {\bf FTC}.
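Numerically, Eqs.~(\ref{ftc})--(\ref{ftcfield}) can be approximated by flowing a short three-point segment through $z$ in each sampled direction $v$ and measuring the curvature of the image with the Menger (circumscribed-circle) formula. A minimal sketch, with a toy area-preserving shear map standing in for $\Phi_{t_0}^{t_0+\tau}$ in the test:

```python
import numpy as np

def menger_curvature(a, b, c):
    """Curvature of the circle through three points: 4 * Area / (|ab||bc||ca|)."""
    twice_area = abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
    d = np.linalg.norm(a-b) * np.linalg.norm(b-c) * np.linalg.norm(c-a)
    return 2.0 * twice_area / d if d > 0 else 0.0

def ftc_estimate(flow_map, z, eps=1e-3, n_dirs=180):
    """Approximate maxFTC C and minFTC c at z by flowing segments
    l_{eps, v}(z) (sampled at s = -1, 0, 1) over a grid of directions v."""
    z = np.asarray(z, dtype=float)
    kappas = []
    for th in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
        v = np.array([np.cos(th), np.sin(th)])
        img = [np.asarray(flow_map(z + eps * s * v)) for s in (-1.0, 0.0, 1.0)]
        kappas.append(menger_curvature(*img))
    return max(kappas), min(kappas)   # (C, c); the FTC field is r = C / c
```

For the shear map $\Phi(x,y)=(x,\,y+x^2)$ at $z=(1,0)$, a short calculation gives a limiting maxFTC of $2$ (attained where the image tangent degenerates) and minFTC $0$ (vertical segments stay straight), which the estimate reproduces.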
Generally, when the maxFTC has a trough (curve) of small values, this suggests a strong nonhyperbolicity, such as an elliptic island boundary or some other form of tangency, as displayed in Fig.~\ref{IdeaOfFTC}. These are the darker blue ``FTC trough curves" we see in Fig.~\ref{RWFTC}a, and they serve as boundaries between shape coherent sets. On the other hand,
the largest ridges of the maxFTC illustrate points where there is significant curvature growth along one direction but small curvature growth along a transverse direction, recalling the area preservation assumption. These level curves arise in the scenario of sharply changing curvature developing at the most extreme points in a (hetero)homoclinic tangle, as illustrated in Fig.~\ref{IdeaOfFTC}. These curves can maintain their shape for some time. Notice that the FTC also shows troughs similar to those of the maxFTC, but emphasized, and so these (blue) trough curves can also be used to determine shape coherent sets. A particularly interesting feature of these FTC fields is the large variation in certain regions, indicated at the top and side of Fig.~\ref{RWFTC}a; this is clearly due to co-located hyperbolic and nonhyperbolic regions of (hetero)homoclinic tangles, discussed in greater detail in comparison to the FTLE in Fig.~\ref{FTC-FTLEz}. The ratio FTC field in Fig.~\ref{RWFTC}c most clearly delineates boundaries of the shape coherent sets as low (blue) troughs.
To construct shape coherent sets from the FTC, we describe two complementary perspectives. One again follows the idea of curve continuation by the implicit function theorem, applied to the FTC to track a level curve of $ r_{t_0}^{t_0+\tau}(z)$. That is, if a point $z_0$ is found where $r_{t_0}^{t_0+\tau}$ attains a (near) minimal value $R$, representing a point in the trough, then the level curve through $z_0$ may be continued by solving the ordinary differential equation $z'=h(z)=\left(\frac{\partial r_{t_0}^{t_0+\tau}}{\partial y}, -\frac{\partial r_{t_0}^{t_0+\tau}}{\partial x}\right)(z)$, a tangent direction to the level set, with initial condition $z(0)=z_0$, where the derivative $'=\frac{d}{ds}$ represents variation along the $s$-parameterized arc. Furthermore, by the above discussion of principal component analysis, directions of maximal curvature are also encoded in the principal vectors of $D\Phi_t(z)$.
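The continuation step can be sketched as a simple predictor that steps along the tangent of the level set, with the gradient taken by finite differences; this is a schematic, not the implementation of \cite{MB1}.

```python
import math

def continue_level_curve(f, z0, step=1e-2, n_steps=400, h=1e-5):
    """Trace a level curve of a scalar field f(x, y) through z0 by stepping
    along the unit tangent (f_y, -f_x)/|grad f|, which keeps f constant
    to first order (implicit function theorem)."""
    x, y = z0
    path = [(x, y)]
    for _ in range(n_steps):
        fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
        fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
        g = math.hypot(fx, fy)
        if g == 0:
            break                      # critical point: tangent undefined
        x, y = x + step * fy / g, y - step * fx / g
        path.append((x, y))
    return path
```

Tracing $f(x,y)=x^2+y^2$ from $(1,0)$ stays close to the unit circle, up to the slow outward drift of the explicit Euler steps.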
A direct search for the interiors of sets between low troughs of the FTC is a problem of defining regions between boundary curves, and this relates to a common problem of image processing called image segmentation \cite{ImageSegmentation}.
In particular, we applied the diffusion-like ``seeded region growing" method \cite{Adams-Bischof}, which begins by selecting a set of seed points. Here we apply 100 uniform grid points as seeds and use a 4-connected neighborhood to grow from the seed points. We slightly improve a well regarded implementation that can be found at \cite{mathworks}. See Fig.~\ref{Partition} for the partitioning results. Several shape coherent sets corresponding to Fig.~\ref{RWFTC} are found. Specifically, the middle yellow band has $ \alpha(A, \Phi_T(A),T)=0.8574$, and likewise the $\alpha$-values of the rest of the colored sets are shown in Fig.~\ref{Partition}. Some of the differences between otherwise symmetric regions are due to the region clipping, as shown.
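For concreteness, a bare-bones version of seeded region growing on a scalar field (not the improved implementation of \cite{mathworks}) looks as follows; the absorption tolerance is an assumed parameter.

```python
from collections import deque

def region_growing(field, seeds, tol):
    """Diffusion-like seeded region growing: from each seed, absorb
    4-connected neighbors whose field value is within tol of the running
    region mean.  Returns an integer label grid (0 = unlabeled)."""
    rows, cols = len(field), len(field[0])
    labels = [[0] * cols for _ in range(rows)]
    for lab, (si, sj) in enumerate(seeds, start=1):
        if labels[si][sj]:
            continue
        q = deque([(si, sj)])
        labels[si][sj] = lab
        total, count = field[si][sj], 1
        while q:
            i, j = q.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and not labels[ni][nj] \
                        and abs(field[ni][nj] - total / count) <= tol:
                    labels[ni][nj] = lab
                    total += field[ni][nj]
                    count += 1
                    q.append((ni, nj))
    return labels
```

On a field split into two flat plateaus, two seeds recover the two regions, with growth blocked where the field jumps by more than the tolerance, the role played here by the FTC troughs.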
\begin{figure}
\caption{ The FTC troughs of the double-gyre, highlighted in blue, and FTLE ridges, highlighted in red, are seen to be sometimes in close proximity, thus indicating similar regions. However, they are often not near each other, indicating disparate regions. In the blow-up insets above, labels ``2" indicate where the FTLE ridges lie in between FTC troughs, and likewise the one-dimensional slice along the green curve shows these field values on a log scale, repeating this outcome. Lows of the blue curve are often near, but slightly offset from, highs of the red curve, with highs of the blue field surrounding lows of the red field. At such locations the two disparate computations reveal similar dynamics. However, regions indicated by ``1" reveal FTC-trough curves that are entirely separated from any FTLE behavior of interest; thus a different outcome is found at such locations. Closed curves of FTC-troughs indicate shape coherence. At regions indicated by ``3" we see gaps in the FTC curves. At bottom we show a slice of the FTC and FTLE fields through the full phase space.}
\label{FTC-FTLEz}
\end{figure}
Finally, we contrast results from the FTC field with the highly popular FTLE field \cite{SLM}, since at first glance the pictures may seem essentially similar, despite the significantly different definitions and perspectives. Recall \cite{SLM} that the FTLE over a time epoch $t$ is defined pointwise by $L_t(z)=\frac{1}{t} \log \sqrt{\rho(D\Phi_t^T(z)D\Phi_t(z))}$, where $\rho(\cdot)$ denotes the largest eigenvalue of the Cauchy-Green strain tensor. In the following we contrast FTLE and FTC in the context of the double-gyre, which follows the nonautonomous Hamiltonian $H(x,y,t)=A \cos (\pi f(x,t)) \cos (\pi y)$, where $f(x,t)= \epsilon \sin(\omega t) x^2 + (1-2 \epsilon \sin (\omega t)) x$, $\epsilon = 0.1$, $\omega=2 \pi/10$ and $A=0.1$; this has become a benchmark problem \cite{SLM}. Observe in Fig.~\ref{FTC-FTLEz} that sometimes an FTC trough, indicative of shape coherence, may occur spatially in close proximity to an FTLE ridge, indicative of high finite time hyperbolicity \cite{HB, SLM} and thus suggestive of a transport pseudo-barrier \cite{HB, SLM}. It is true that folding often occurs in close proximity to regions of strong hyperbolic stretching \cite{Thiffeault,Thiffeault1, TA}, as already hinted by the fast variations of the FTC in hyperbolic regions seen in the traces on the tops and sides of Fig.~\ref{RWFTC}ab. However, in Fig.~\ref{FTC-FTLEz} we directly address the coincidences and differences by locating the troughs of the FTC, shown as blue curves, and the ridges of the FTLE, shown as red curves. Clearly the FTC troughs sometimes find curves close to FTLE ridges, but sometimes entirely new curves are found. When the FTC troughs are closed, shape coherent sets are indicated, and these are not found any other way when not near the FTLE. Finally, note the regions indicated by ``3" in Fig.~\ref{FTC-FTLEz}: where the FTLE curves have strong curvature, the FTC troughs may run in close parallel, but the FTC trough curves may have the breaks indicated.
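For reference, the FTLE computation used in this comparison can be sketched as follows, with $D\Phi_t$ approximated by centered differences of the flow map; this is a minimal sketch, not the ridge-extraction pipeline of \cite{SLM}.

```python
import numpy as np

def ftle(flow_map, z, t, h=1e-5):
    """Finite-time Lyapunov exponent L_t(z) = (1/t) log sqrt(rho), where rho
    is the largest eigenvalue of the Cauchy-Green tensor DPhi_t^T DPhi_t and
    DPhi_t(z) is approximated column-by-column with centered differences."""
    z = np.asarray(z, dtype=float)
    J = np.empty((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = h
        J[:, k] = (flow_map(z + e) - flow_map(z - e)) / (2 * h)
    cg = J.T @ J
    rho = np.linalg.eigvalsh(cg)[-1]   # eigvalsh returns ascending eigenvalues
    return np.log(np.sqrt(rho)) / t
```

For a linear saddle flow with stretching rate $a$, the estimate recovers $L_t(z)=a$ at every point and epoch.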
With these coincidences, and given the differences in definitions, concepts and results, we have offered here the FTC as a new concept for interpreting shape coherence in turbulent systems, one that results in a decomposition of chaotic systems into regions of simplicity and, by complement, regions of complexity. There are promising implications, which we plan to study further, between shape coherence and the persistence of energy and enstrophy along Lagrangian trajectories, as was previously studied in the context of the FTLE \cite{OG4}.
\end{document} |
\begin{document}
\title{Rotational symmetry of uniformly 3-convex translating solitons of mean curvature flow in higher dimensions}
\begin{abstract}
In this paper, we generalize the result of \cite{zhu2020so} to higher dimensions. We prove that uniformly 3-convex translating solitons of mean curvature flow in $\mathbb{R}^{n+1}$ which arise as blow up limits of embedded, mean convex mean curvature flow must have $SO(n-1)$ symmetry. \end{abstract}
\author{Jingze Zhu} \address{Department of Mathematics, Columbia University, New York, NY 10027} \email{[email protected]} \maketitle
\section{Introduction} It is known that the blow up limit of an embedded, mean convex mean curvature flow is a convex, noncollapsed ancient solution. This was first proved in the seminal work of White \cite{white2000size} \cite{white2003nature} and later streamlined by several authors (see \cite{sheng2009singularity}, \cite{andrews2012noncollapsing}, \cite{haslhofer2017mean}).
Recently, much work has been done in studying convex noncollapsed ancient solutions. For example, in the uniformly two convex case, the classification is complete: Haslhofer \cite{haslhofer2015uniqueness} proved that the only convex, uniformly two convex, noncollapsed translator is the Bowl soliton; Brendle and Choi \cite{brendle2018uniqueness} \cite{brendle2019uniqueness} proved that the only strictly convex, uniformly two convex, noncollapsed noncompact ancient solution is the Bowl soliton; and Angenent, Daskalopoulos and Sesum \cite{angenent2020uniqueness} proved that the only convex, uniformly two convex, noncollapsed compact solution which is not the shrinking sphere is the unique ancient Oval. The uniqueness is up to scaling and translation.
In the uniformly 3-convex case, however, few classification results are known so far. In the previous paper \cite{zhu2020so}, we proved that a convex, uniformly 3-convex, noncollapsed translator in $\mathbb{R}^4$ has $SO(2)$ symmetry.
In this paper, we generalize this result to higher dimensions; here is the main theorem: \begin{Th}\label{Main Theorem} Suppose that $M^n\subset \mathbb{R}^{n+1}$ is a complete, noncollapsed, convex, uniformly 3-convex smooth translating soliton of mean curvature flow with a tip that attains maximal mean curvature. Then $M$ has $SO(n-1)$ symmetry. \end{Th}
The proof has many similarities with \cite{zhu2020so} and \cite{brendle2018uniqueness}. For the reader's convenience and for self-consistency, we follow these two papers closely and present necessary details as much as possible.
This paper is organized as follows. In section 3, we discuss the symmetry improvement, which is inspired by the Neck Improvement Theorem of \cite{brendle2018uniqueness} \cite{brendle2019uniqueness}. The spirit is to show that if a solution of mean curvature flow is $\epsilon$ symmetric in a large parabolic neighborhood, then it is $\epsilon/2$ symmetric at the center point. We prove the Cylindrical Improvement Theorem \ref{Cylindrical improvement}, a direct generalization of the Neck Improvement Theorem; we need to analyze a 2D equation instead of a 1D heat equation. In the uniformly 3-convex case we need an additional symmetry improvement (Theorem \ref{bowl x R improvement}) that works for Bowl$\times\mathbb{R}$, just as in our previous paper \cite{zhu2020so}. The strategy is inspired by \cite{brendle2018uniqueness} and \cite{angenent2020uniqueness}: we can iterate the Cylindrical Improvement to get much better symmetry along the boundary and use a barrier to control the symmetry near the center of the parabolic neighborhood. However, in our case the boundary is not as simple; in fact there are some portions of the boundary on which the symmetry does not improve. This is overcome by carefully choosing barrier functions.
In section 4, we argue exactly as in \cite{zhu2020so}, using the ingredients established in section 3. First we prove the canonical neighborhood Lemmas (Lemma \ref{canonical nbhd lemma}, \ref{blowdownnecklemma}). They tell us that away from a compact set, every point lies in a large parabolic neighborhood that resembles $S^{n-2}\times\mathbb{R}^2$ or Bowl$\times\mathbb{R}$ if the soliton is not uniformly 2-convex. We briefly explain how this is done: first assume that the conclusion is not true; then we find a contradicting sequence, and by rescaling and passing to a subsequential limit we obtain a convex, noncollapsed, uniformly 3-convex ancient solution that contains a line, and thus splits. The results of \cite{white2000size} \cite{white2003nature}, \cite{sheng2009singularity}, \cite{haslhofer2017mean} and the maximum principle \cite{hamilton1986four} are used crucially. If we mod out the splitting, the remaining solution is uniformly two convex, and thus is either compact or is the Bowl soliton or $S^{n-2}\times\mathbb{R}$, by the classification \cite{brendle2018uniqueness}. The latter two noncompact models lead to a contradiction, whereas we can exploit the translator equation (which in particular implies that the solution is a graph over a hyperplane) to rule out the compact models. The final step is to combine the above ingredients and follow the argument of Theorem 5.2 in \cite{brendle2018uniqueness} to finish the proof of the main theorem.
\textbf{Acknowledgement:} The author would like to thank his advisor Simon Brendle for his helpful discussions and encouragement.\newline
\section{Preliminary} In this section we give some definitions and basic facts about the mean curvature flow.
Recall that the mean curvature flow is a family of embeddings: $F_t: M^n\rightarrow\mathbb{R}^{n+1}$ which satisfies \begin{align*}
\frac{\partial F_t}{\partial t} & =\vec{H} \end{align*} Denote $M_t=F_t(M)$. A mean curvature flow solution is called ancient if $F_t$ exists on $t\in(-\infty,T)$ for some $T$.
If $F_t$ are complete and oriented, we can fix an orientation and globally define the normal vector $\nu$. Then the mean curvature $H$ is defined to be $\vec{H}=H\nu$. The mean curvature flow is called \textbf{(strictly) mean convex}, if $H\geq0$ $(H>0) $ along the flow.
A mean convex mean curvature flow solution $M_t^n\subset\mathbb{R}^{n+1}$
is called \textbf{uniformly $k$-convex}, if there is a positive constant $\beta$
such that
\begin{align*}
\lambda_1+...+\lambda_k\geq \beta H
\end{align*}
along the flow, where $\lambda_1\leq\lambda_2\leq ...\leq \lambda_n$ are principal curvatures.
In particular, any mean convex hypersurface is uniformly $n$-convex, and being uniformly $1$-convex is equivalent to being uniformly convex.
An important special case for ancient solution is the \textbf{translating solution}, which is characterized by the equation \begin{align*}
H = \left<V,\nu\right> \end{align*} for some fixed nonzero vector $V$. In this paper we usually use $\omega_{n}$ in place of $V$, where $\omega_n$ is a unit vector in the direction of the $x_n$ axis.
The family of surfaces $M_t=M_0+tV$ is a mean curvature flow provided that $M_0$ satisfies the translator equation.
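Indeed, the verification is a one-line computation: with $F_t(x)=F_0(x)+tV$ we have $\partial_t F_t = V$, and decomposing $V$ into normal and tangential parts while using the translator equation $H=\left<V,\nu\right>$ gives
\begin{align*}
\frac{\partial F_t}{\partial t} = V = \left<V,\nu\right>\nu + V^{\top} = H\nu + V^{\top} = \vec{H} + V^{\top},
\end{align*}
where $V^{\top}$ is tangent to $M_t$; a tangential term only reparametrizes the surfaces, so the family moves by mean curvature.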
For a point $x$ on a hypersurface $M^n\subset\mathbb{R}^{n+1}$ and radius $r$, we use $B_r(x)$ to denote the Euclidean ball and use $B_{g}(x,r)$ to denote the geodesic ball with respect to the metric $g$ on $M$ induced by the embedding. For a space-time point $(\bar{x},\bar{t})$ in a mean curvature flow solution, $\hat{P}(\bar{x},\bar{t},L,T)=B_{g(\bar{t})}(\bar{x},LH^{-1})\times [\bar{t}-TH^{-2},\bar{t}]$, where $H=H(\bar{x},\bar{t})$ (cf.\ \cite{huisken2009mean} pp.~188-190).
\section{Symmetry Improvement} \begin{Def} A collection of vector fields $\mathcal{K} = \{K_{\alpha} : 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2} \}$ in $\mathbb{R}^{n+1}$ is called normalized set of rotation vector fields, if there exists a matrix $S \in O(n+1)$, a set of orthonormal basis $\{J_{\alpha} : 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2} \}$ of $so(n-1) \subset so(n+1)$ and a point $q \in \mathbb{R}^{n+1}$ such that \begin{align*}
K_{\alpha}(x) & = SJ_{\alpha}S^{-1}(x-q) \end{align*} where we use the inner product $\left<A,B\right> = \text{Tr}(AB^{T})$ for matrices $A, B \in so(n-1)$. \end{Def}
\begin{Rem}
The choice of $S,q$ is not unique. In other words, we can always find different
$S,q$ that represent the same normalized set of rotation vector field. \end{Rem}
\begin{Def}
Let $M_t$ be a mean curvature flow solution. We say that the space time point $(\bar{x}, \bar{t})$ is $(\epsilon, R)$ cylindrical if the parabolic neighborhood $\hat{P}(\bar{x},\bar{t}, R^2, R^2)$ is $\epsilon$ close
(in $C^{10}$ norm) to a family of shrinking cylinders
$S^{n-2}\times\mathbb{R}^2$ after a parabolic rescaling such that $H(\bar{x}, \bar{t}) = \sqrt{\frac{n-2}{2}}$. \end{Def}
\begin{Def}
Let $M_t$ be a mean curvature flow solution. We say that a space time point
$(\bar{x}, \bar{t})$ is $\epsilon$ symmetric if there exists a normalized set of rotation vector fields
$\mathcal{K} = \{K_{\alpha} : 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2}\}$ such that $\max_{\alpha}|\left<K_{\alpha}, \nu\right> |H \leq \epsilon$ and
$\max_{\alpha}|K_{\alpha}|H \leq 5n$ in the parabolic neighborhood
$\hat{P}(\bar{x}, \bar{t}, 100n^{5/2}, 100^2n^5)$ \end{Def}
\begin{Lemma}\label{Lin Alg Cylinder A}
Let $\mathcal{K}=\{K_{\alpha},1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$
be a normalized set of rotation vector fields in the form of
\begin{align*}
K_{\alpha}=SJ_{\alpha}S^{-1}(x-q)
\end{align*}
where
$\{J_{\alpha}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ is an
orthonormal basis of $so(n-1)\subset so(n+1)$ and $S\in O(n+1), q\in\mathbb{R}^{n+1}$.
Suppose that either one of the following happens:
\begin{enumerate}
\item on the cylinder $S^{n-2}\times\mathbb{R}^2$ in $\mathbb{R}^{n+1}$ (for which the
$S^{n-2}$ factor has radius 1) :
\begin{itemize}
\item $\left<K_{\alpha},\nu\right> = 0$ in $B_{g}(p, 1)$ for each $\alpha$
\item $\max_{\alpha}|K_{\alpha}| H \leq 5n$ in
$B_{g}(p, 10n^{2})$
\end{itemize}
\item on $Bowl^{n-1}\times\mathbb{R}$ in $\mathbb{R}^{n+1}$:
\begin{itemize}
\item $\left<K_{\alpha},\nu\right> = 0$ in $B_{g}(p, 1)$ for each $\alpha$
\item $\max_{\alpha}|K_{\alpha}| H \leq 5n$ at $p$
\end{itemize}
\end{enumerate}
where $\nu$ is the normal vector, $g$ is the induced metric,
$p$ is arbitrary point on either one of the model.
Then $S,q$ can be chosen in such a way:
\begin{itemize}
\item $S\in O(n-1)\subset O(n+1)$
\item $q=0$
\end{itemize}
In particular, for any orthonormal basis $\{J'_{\alpha}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ of $so(n-1)$ there is a basis transform matrix
$\omega\in O(\frac{(n-1)(n-2)}{2})$ such that
$K_{\alpha} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta} J'_{\beta}x$ \end{Lemma}
\begin{proof}
\textbf{Case (1)} Let us first assume that $n\geq 4$. We use the coordinates $(x_1,...,x_{n+1})$ in $\mathbb{R}^{n+1}$ and let the cylinder $S^{n-2}\times\mathbb{R}^2$ be
represented by $\{x_1^2+...+x_{n-1}^2=1\}$.
Now $\langle K_{\alpha},\nu\rangle =0$ is equivalent to:
\begin{align}\label{Lin Alg 1}
\nu^{T}SJ_{\alpha}S^{-1}(x-q)=0
\end{align}
without loss of generality, we may choose $q$ such that
\begin{align}\label{LinAlg Condition for q}
q\perp \bigcap_{\alpha}\ker J_{\alpha}S^{-1}
\end{align}
Note that $SJ_{\alpha}S^{-1}$ is antisymmetric and $\nu^{T}=(x_1,...,x_{n-1},0,0)$
on $S^{n-2}\times\mathbb{R}^2$, (\ref{Lin Alg 1}) is equivalent to
\begin{align}\label{LinAlg2}
\sum_{1\leq i\leq n-1} x_i \left(\sum_{l=n,n+1}(SJ_{\alpha}S^{-1})_{il}x_l - (SJ_{\alpha}S^{-1}q)_{i}\right) = 0
\end{align}
Since this holds on an open set of the cylinder, we obtain that
\begin{align}
(SJ_{\alpha}S^{-1})_{il} =&0, \text{ \ } i=1,...,n-1,\ l=n,n+1 \label{LinAlg3}\\
(SJ_{\alpha}S^{-1}q)_{i} =&0, \text{ \ } i=1,...,n-1\label{LinAlg4}
\end{align}
holds for each $1\leq\alpha \leq\frac{(n-1)(n-2)}{2} $.
Using the fact that $\{J_{\alpha}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2} \}$ is an orthonormal basis of $so(n-1)$
in (\ref{LinAlg3}), we get:
\begin{align*}
S_{ij}S^{-1}_{kl} - S_{ik}S^{-1}_{jl} =0
\end{align*}
or equivalently (since $S^{-1}=S^{T}$)
\begin{align}\label{LinAlg5}
S_{ij}S_{lk} - S_{ik}S_{lj} =0
\end{align}
for all $1\leq i,j,k\leq n-1$ and $l=n,n+1$.
Next, since $n-1\geq 3$, we can choose nontrivial $(y_1,...y_{n-1})$ such that
for $l=n,n+1$
\begin{align}\label{LinAlg6}
\sum_{k=1}^{n-1}S_{lk}y_k =0
\end{align}
Now we multiply (\ref{LinAlg5}) by $y_k$ and sum over $k=1,...,n-1$ to obtain that, for $1\leq i,j\leq n-1$ and $l=n,n+1$:
\begin{align}\label{Lin Alg 7}
\sum_{k=1}^{n-1}S_{ik}y_k S_{lj} = 0
\end{align}
The invertibility of $S$ implies that
\begin{align}\label{Lin Alg 9}
\sum_{k=1}^{n-1}S_{ik}y_k \neq 0
\end{align}
holds for at least one $i\leq n+1$. (\ref{LinAlg6}) implies that (\ref{Lin Alg 9}) can not hold for $i=n,n+1$, thus must hold for some $i\leq n-1$.
At this point, (\ref{Lin Alg 7}) implies that
\begin{align}\label{Lin Alg 8}
S_{lj}=0
\end{align}
for all $l=n,n+1$ and $j=1,...,n-1$. That means $S$ preserves the direct sum decomposition, therefore we may choose $S\in O(n-1)\subset O(n+1)$.
Moreover, (\ref{LinAlg4}) and (\ref{LinAlg Condition for q}) imply that $q=0$.
Finally, $\{SJ_{\alpha}S^{-1}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2} \}$ is still an orthonormal basis of $so(n-1)$, therefore we can find a basis transform matrix
$\omega\in O(\frac{(n-1)(n-2)}{2})$ such that
$K_{\alpha} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta} J'_{\beta}x$ for each given orthonormal basis $\{J'_{\alpha}, 1\leq\alpha\leq\frac{(n-1)(n-2)}{2}\}$ of $so(n-1)$.
If $n=3$, the argument in Lemma 3.5 of \cite{zhu2020so} applies.
However we give a brief proof here:
Note that (\ref{LinAlg3}), (\ref{LinAlg4}) still applies. Since $so(2)$ is one dimensional, $\alpha\equiv 1$ and $J_{\alpha}=J$ is a rank 2 matrix.
Since $SJS^{-1}$ is antisymmetric and rank 2, (\ref{LinAlg3}) implies that $SJS^{-1}=J$ or $J'$,
where $J'$ is a rank 2 matrix with only two nonzero entries $J'_{34}=-J'_{43}=\frac{1}{\sqrt{2}}$.
If $SJS^{-1}=J'$, then $|K|H\geq 5n^2>5n$, a contradiction. So $SJS^{-1}=J$, and (\ref{LinAlg Condition for q}) and (\ref{LinAlg4}) imply that $q=0$. Clearly $S$ can be chosen to be the identity.
\textbf{Case (2)} Note that on Bowl$\times\mathbb{R}$, we have $\nu^{T}=(\frac{u'(|\bar{x}|)}{|\bar{x}|}\bar{x},-1,0)/\sqrt{1+u'^2}$ and $x_n=u(|\bar{x}|)$
where $\bar{x}=(x_1,...,x_{n-1})$ and $u$ is the solution to the ODE:
\begin{align*}
\frac{u''}{1+u'^2}+\frac{(n-1)u'}{x}=1
\end{align*}
with initial condition $u(0)= u'(0)=0$. Note that $\frac{u'(r)}{r}$ is smooth even at $r=0$.
(\ref{Lin Alg 1}) is equivalent to:
\begin{align}\label{LinAlg II.2}
\sum_{1\leq i\leq n-1} x_i &\left(-(SJ_{\alpha}S^{-1})_{ni}+\sum_{l=n,n+1}\frac{u'(|\bar{x}|)}{|\bar{x}|}(SJ_{\alpha}S^{-1})_{il}x_l - \frac{u'(|\bar{x}|)}{|\bar{x}|}(SJ_{\alpha}S^{-1}q)_{i}\right) \\
-& (SJ_{\alpha}S^{-1})_{n,n+1}x_{n+1}+(SJ_{\alpha}S^{-1}q)_{n}= 0
\end{align}
Since this holds on an open set of a Bowl$\times\mathbb{R}$, we can first fix $|\bar{x}|, x_n, x_{n+1}$ and $(x_1,...,x_{n-1})$ moves freely on a small open set of $S^{n-2}_{|\bar{x}|}$, therefore:
\begin{align}\label{Lin Alg II.1}
-(SJ_{\alpha}S^{-1})_{ni}+\sum_{l=n,n+1}\frac{u'(|\bar{x}|)}{|\bar{x}|}(SJ_{\alpha}S^{-1})_{il}x_l - \frac{u'(|\bar{x}|)}{|\bar{x}|}(SJ_{\alpha}S^{-1}q)_{i}=0
\end{align}
for each $1\leq i\leq n-1$. Moreover,
\begin{align}\label{Lin Alg II.4}
-(SJ_{\alpha}S^{-1})_{n,n+1}x_{n+1}+(SJ_{\alpha}S^{-1}q)_{n}=0
\end{align}
Then we can let $x_{n+1}$ move freely in a small open set in (\ref{Lin Alg II.1}) and (\ref{Lin Alg II.4}); since $u'(r)/r$ is never 0, we get
\begin{align}\label{Lin Alg II.5}
(SJ_{\alpha}S^{-1})_{i,n+1}=&0
\end{align}
for all $ i\leq n$.
Next, $\frac{u'(r)}{r}$ is not a constant function in any open interval, therefore
\begin{align}\label{Lin Alg II.6}
(SJ_{\alpha}S^{-1})_{in}=&0 \\
(SJ_{\alpha}S^{-1}q)_i=&0
\end{align}
for all $i\leq n-1$.
Consequently, (\ref{LinAlg3}) and (\ref{LinAlg4}) hold, and the conclusion follows immediately by the same argument as in case (1). \end{proof}
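The profile function $u$ used in Case (2) is easy to examine numerically. Below is a minimal explicit-Euler sketch, starting from the series expansion $u(r)\approx r^2/(2n)$ (which satisfies the equation to leading order) to step past the coordinate singularity at $r=0$; the step size is an illustrative choice.

```python
def bowl_profile(n, r_max, dr=1e-3):
    """Explicit-Euler integration of u''/(1 + u'^2) + (n-1) u'/r = 1,
    u(0) = u'(0) = 0, seeded from the series u(r) ~ r^2/(2n) near r = 0.
    Returns (u(r_max), u'(r_max))."""
    r, u, up = dr, dr * dr / (2 * n), dr / n
    while r < r_max:
        upp = (1.0 + up * up) * (1.0 - (n - 1) * up / r)  # solve ODE for u''
        u += up * dr
        up += upp * dr
        r += dr
    return u, up
```

For large $r$ the slope approaches the asymptotic $u'(r)\approx r/(n-1)$ obtained by balancing the $(n-1)u'/r$ term against the right-hand side, consistent with the profile opening up paraboloidally.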
Using similar computations and some basic linear algebra, we also have the following: \begin{Lemma}\label{Lin Alg Cylinder B}
Let $\{J_{\alpha}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ be an
orthonormal basis of $so(n-1)\subset so(n+1)$ and $A\in so(n+1)$ such that
$A\perp so(n-1)\oplus so(2)$.
Suppose that either of the following scenarios happens:
\begin{enumerate}
\item on the cylinder $S^{n-2}\times\mathbb{R}^2$ in $\mathbb{R}^{n+1}$ (for which the
$S^{n-2}$ factor has radius 1) we have:
\begin{itemize}
\item $\left<[A,J_{\alpha}]x+c_{\alpha},\nu\right> = 0$ in $B_{g}(p, 1)$ for each $\alpha$
\end{itemize}
\item on $Bowl^{n-1}\times\mathbb{R}$ in $\mathbb{R}^{n+1}$ we have:
\begin{itemize}
\item $\left<[A, J_{\alpha}]x+c_{\alpha},\nu\right> = 0$ in $B_{g}(p, 1)$ for each $\alpha$
\end{itemize}
\end{enumerate}
where $\nu$ is the normal vector, $g$ is the induced metric, and
$p$ is an arbitrary point on either model.
Then $A=0$ and $c_{\alpha}=0$.
\end{Lemma}
\begin{Lemma}\label{Vector Field closeness on Cylinder}
There exist constants $0 < \epsilon_{c} \ll 1$ and $C>1$ depending only on $n$
with the following properties. Let $M$ be a hypersurface in $\mathbb{R}^{n+1}$
which is $\epsilon_c$ close (in $C^3$ norm) to
a geodesic ball in
$ S_{\sqrt{2(n-2)}}^{n-2}\times \mathbb{R}^2 $
of radius $20n^{5/2}\sqrt{\frac{2}{n-2}}$ and
$\bar{x}\in M $ be a point that is $\epsilon_c$ close to $S_{\sqrt{2(n-2)}}^{n-2}\times \{0\}$.
Suppose that $\epsilon \leq \epsilon_c$ and
$\mathcal{K}^{(1)} = \{K^{(1)}_{\alpha}: 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2}\}$,
$\mathcal{K}^{(2)} = \{K^{(2)}_{\alpha}: 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2}\}$ are two normalized sets of rotation vector fields. Assume that:
\begin{itemize}
\item $\max_{\alpha}|\left<K_{\alpha}^{(i)}, \nu\right>|H \leq \epsilon$ in
$B_g(\bar{x}, H^{-1}(\bar{x}))\subset M$
\item $\max_{\alpha}|K_{\alpha}^{(i)}|H \leq 5n$ in $B_{g}(\bar{x}, 10n^{5/2}H^{-1}(\bar{x}))\subset M$
\end{itemize}
for $i = 1 , 2$, where $g$ denotes the induced metric on $M$ by embedding.
Then for any $L > 1$:
\begin{align*}
\inf\limits_{\omega\in O(\frac{(n-1)(n-2)}{2})}\sup\limits_{B_{LH(\bar{x})^{-1}}(\bar{x})}\max_{\alpha}
|K_{\alpha}^{(1)}-\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{(2)}_{\beta}|H(\bar{x})
\leq CL\epsilon
\end{align*} \end{Lemma} \begin{proof}
The proof is analogous to \cite{brendle2019uniqueness}, \cite{zhu2020so}. We only consider the case $L=10$; for general $L$, use the fact that
$K^{(i)}$ are affine functions. Throughout the proof, the constant $C$ depends only on $n$.
We argue by contradiction: suppose the conclusion fails.
Then there exists a sequence of pointed hypersurfaces $(M_j, p_j)$
that are $\frac{1}{j}$ close to a geodesic ball of radius $20n^{5/2}H_0^{-1}$ in $S_{\sqrt{2(n-2)}}^{n-2}\times \mathbb{R}^2 $ and $|p_j - (\sqrt{2(n-2)},0,...,0)| \leq \frac{1}{j}$,
where $H_0 = \sqrt{\frac{n - 2}{2}}$ is the mean curvature of $S_{\sqrt{2(n-2)}}^{n-2}\times \mathbb{R}^2$
and $g_j$ is the metric on $M_j$ induced by embedding.
Moreover, there exist normalized sets of rotation vector fields
$\mathcal{K}^{(i,j)} = \{K^{(i,j)}_{\alpha}: 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ and $\epsilon_j\leq\frac{1}{j}$ such that
\begin{itemize}
\item $\max_{\alpha}|\left<K^{(i,j)}_{\alpha}, \nu\right>|H\leq\epsilon_j$
in $B_{g_j}(p_j, H(p_j)^{-1})$
\item $\max_{\alpha}|K_{\alpha}^{(i,j)}|H\leq 5n$ in
$B_{g_j}(p_j, 10n^{5/2}H(p_j)^{-1})$
\end{itemize}
for $i = 1,2$, but
\begin{itemize}
\item $\inf\limits_{\omega\in O(\frac{(n-1)(n-2)}{2})}\sup\limits_{B_{10H(p_j)^{-1}}(p_j)}\max_{\alpha}
|K_{\alpha}^{(1,j)}-\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{(2,j)}_{\beta}|H(p_j)
\geq j\epsilon_j$
\end{itemize}
Therefore, $M_j$ converges to $M_{\infty} = B_{g_{\infty}}(p_{\infty}, 20n^{5/2}H_0^{-1})\subset S_{\sqrt{2(n-2)}}^{n-2}\times \mathbb{R}^2 $
and $H(p_j)\rightarrow \sqrt{\frac{n-2}{2}}$, where $p_{\infty}=(\sqrt{2(n-2)},0,...,0)$
and $g_{\infty}$ denotes the induced metric on
$S_{\sqrt{2(n-2)}}^{n-2}\times \mathbb{R}^2 $.
Suppose that for each $i=1,2$ and $j\geq 1$,
$K^{(i,j)}_{\alpha}(x) = S_{(i,j)}J^{(i,j)}_{\alpha}S_{(i,j)}^{-1}(x-b_{(i,j)})$
where $S_{(i,j)}\in O(n+1)$ and $b_{(i,j)}\in\mathbb{R}^{n+1}$,
$\{J^{(i,j)}_{\alpha}: 1\leq\alpha \leq\frac{(n-1)(n-2)}{2}\}$ is
an orthonormal basis of
$so(n-1)\subset so(n+1)$. Without loss of generality we may assume that
$\displaystyle b_{(i,j)}\perp \bigcap_{\alpha}\ker(J^{(i,j)}_{\alpha}S_{(i,j)}^{-1})$.
Then
\begin{align*}
|b_{(i,j)}| \leq C\sum_{\alpha}|S_{(i,j)}J^{(i,j)}_{\alpha}S_{(i,j)}^{-1} b_{(i,j)}| \leq C(n)
\end{align*}
Therefore we can pass to a subsequence such that $S_{(i,j)}\rightarrow S_{(i,\infty)}$, $J_{\alpha}^{(i,j)}\rightarrow J_{\alpha}^{(i,\infty)}$
and $b_{(i,j)}\rightarrow b_{(i,\infty)}$ for each $i,\alpha$.
Consequently $K^{(i,j)}_{\alpha}\rightarrow K^{(i,\infty)}_{\alpha} = S_{(i,\infty)}J_{\alpha}^{(i,\infty)}S_{(i,\infty)}^{-1}(x-b_{(i,\infty)})$ for $i=1,2$.
The convergence implies that
\begin{itemize}
\item $\left<K^{(i,\infty)}_{\alpha},\nu\right> = 0$ in $B_{g_{\infty}}(p_{\infty}, H_0^{-1})$ for each $\alpha$
\item $\max_{\alpha}|K^{(i,\infty)}_{\alpha}| H \leq 5n$ in
$B_{g_{\infty}}(p_{\infty}, 10n^{5/2}H_0^{-1})$
\end{itemize}
Let's fix an orthonormal basis $\{J_{\alpha}: 1\leq\alpha \leq\frac{(n-1)(n-2)}{2}\}$ of $so(n-1)$. By Lemma \ref{Lin Alg Cylinder A} we have for each $\alpha$:
\begin{flalign*}
K_{\alpha}^{(i,\infty)}(x) = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega^{(i,\infty)}_{\alpha\beta} J_{\beta}x
\end{flalign*}
for some $\omega^{(i,\infty)}\in O(\frac{(n-1)(n-2)}{2})$.
In particular we have
\begin{align}\label{sjs-1=omega J}
S_{(i,\infty)}J^{(i,\infty)}_{\alpha}S_{(i,\infty)}^{-1} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega^{(i,\infty)}_{\alpha\beta} J_{\beta}
\end{align}
and $b_{(i,\infty)}=0$ for $i=1,2$ (note that $b_{(i,\infty)}\perp\bigcap_{\alpha}\ker(J^{(i,\infty)}_{\alpha}S^{-1}_{(i,\infty)})$).
Since $\{J^{(i,\infty)}_{\alpha}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ is an orthonormal basis of $so(n-1)$, we can find $\eta^{(i,j)}\in O(\frac{(n-1)(n-2)}{2})$ such that
$J_{\alpha}^{(i,j)}=\sum_{\beta}\eta^{(i,j)}_{\alpha\beta}J_{\beta}^{(i,\infty)}$ . Then for each $i,j$
\begin{align}\label{sjs-1=omega J 2}
S_{(i,\infty)}J^{(i,j)}_{\alpha}S_{(i,\infty)}^{-1}=&\sum_{\beta}\eta^{(i,j)}_{\alpha\beta}S_{(i,\infty)}J_{\beta}^{(i,\infty)}S_{(i,\infty)}^{-1} \notag \\
=& \sum_{\beta,\gamma}\eta^{(i,j)}_{\alpha\beta}\omega^{(i,\infty)}_{\beta\gamma}J_{\gamma}
\end{align}
Now (\ref{sjs-1=omega J 2}) means that $\{S_{(i,\infty)}J^{(i,j)}_{\alpha}S_{(i,\infty)}^{-1}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ is an orthonormal basis of $so(n-1)$.
Without loss of generality, we may assume that
$S_{(1,\infty)}=S_{(2,\infty)} =Id$ and $\omega^{(1,\infty)}_{\alpha\beta} = \delta_{\alpha\beta}$, for otherwise we may replace $S_{(i,j)}$
by $S_{(i,j)}S_{(i,\infty)}^{-1}$, replace $J_{\alpha}^{(i,j)}$ by $S_{(i,\infty)}J_{\alpha}^{(i,j)}S_{(i,\infty)}^{-1}$ for $i=1,2$,
and replace $J_{\alpha}$ by
$\sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega^{(1,\infty)}_{\alpha\beta} J_{\beta}$.
Therefore, $S_{(1,j)}^{-1}S_{(2,j)}$ is close to Id for large $j$. Then we can find $S_j\in O(n+1)$
and $A_j\in so(n+1)$ such that
\begin{itemize}
\item $S_{(1,j)}^{-1}S_{(2,j)} = \exp(A_j)S_j $
\item $S_j$ preserves the direct sum decomposition $\mathbb{R}^{n-1}\oplus\mathbb{R}^2$
\item $A_j\perp so(n-1)\oplus so(2)$
\item $S_j\rightarrow Id$ and $A_j\rightarrow 0$
\end{itemize}
Since $S_j$ preserves the direct sum decomposition,
we can find a basis transformation matrix ${\omega}^{(j)}\in O(\frac{(n-1)(n-2)}{2})$ such that
\begin{align*}
S_j^{-1} J^{(1,j)}_{\alpha}S_{j} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}{\omega}^{(j)}_{\alpha\beta} J_{\beta}^{(2,j)}
\end{align*}
for every $j$ and $\alpha$. Equivalently,
\begin{align*}
J^{(1,j)}_{\alpha} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}{\omega}^{(j)}_{\alpha\beta} S_{j}J_{\beta}^{(2,j)}S_j^{-1}
\end{align*}
To see what the definition of $\omega^{(j)}$ implies, we compute the following:
\begin{align*}
\sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)}(x) = & \sum_{\beta} \omega^{(j)}_{\alpha\beta}S_{(2,j)}J^{(2,j)}_{\beta}S_{(2,j)}^{-1}(x-b_{(2,j)})\\
=& \sum_{\beta}\omega^{(j)}_{\alpha\beta} S_{(1,j)}\exp(A_j) S_j J_{\beta}^{(2,j)}S_j^{-1}\exp(-A_j)S_{(1,j)}^{-1} (x-b_{(2,j)}) \\
=& S_{(1,j)}\exp(A_j)J^{(1,j)}_{\alpha}\exp(-A_j)S_{(1,j)}^{-1}(x-b_{(2,j)})
\end{align*}
For each $\alpha$, define:
\begin{align*}
W_{\alpha}^j = \frac{K_{\alpha}^{(1,j)} -\sum_{\beta} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)} }{\sup\limits_{B_{10H(p_j)^{-1}}(p_j)}\max_{\alpha}|K_{\alpha}^{(1,j)} -\sum_{\beta} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)} |}
\end{align*}
Let
\begin{itemize}
\item $P_{\alpha}^j = S_{(1,j)}[J^{(1,j)}_{\alpha}-\exp(A_j)J^{(1,j)}_{\alpha}\exp(-A_j)]S_{(1,j)}^{-1}$
\item $c_{\alpha}^j = -S_{(1,j)}J^{(1,j)}_{\alpha}S_{(1,j)}^{-1}(b_{(1,j)}-b_{(2,j)})$
\item $Q_j$ = $\sup\limits_{B_{10H(p_j)^{-1}}(p_j)}\max_{\alpha}|K_{\alpha}^{(1,j)} -\sum_{\beta} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)} |$
\end{itemize}
From the above discussion it follows that for each $\alpha$:
\begin{align*}
K_{\alpha}^{(1,j)} -\sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)} &= P^j_{\alpha}(x-b_{(2,j)}) + c^j_{\alpha}
\end{align*}
By definition of $Q_j$ we have $|P^j_{\alpha}| + |c^j_{\alpha}| \leq CQ_j$. Consequently, for sufficiently large $j$:
\begin{align*}
& |P_{\alpha}^j| = |[A_j, J^{(1,j)}_{\alpha}] + o(|A_j|)| \leq CQ_j \\
\Rightarrow & |A_j|\leq C\max_{\alpha}|[A_j,J^{(1,j)}_{\alpha}]| \leq CQ_j+o(|A_j|)\\
\Rightarrow & |A_j|\leq CQ_j
\end{align*}
The second inequality uses the fact that $A_j \perp so(n-1)\oplus so(2)$.
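Let us record why orthogonality to $so(n-1)\oplus so(2)$ makes this step possible (a routine compactness sketch; the constant $c(n)$ below is introduced only for this justification). The map $A\mapsto ([A,J_{\alpha}])_{\alpha}$ is linear and vanishes exactly on the centralizer of $so(n-1)$ in $so(n+1)$, which meets the orthogonal complement of $so(n-1)\oplus so(2)$ only at $0$; hence
\begin{align*}
c(n):=\min\Big\{\max_{\alpha}\big|[A,J_{\alpha}]\big| \ : \ |A|=1,\ A\perp so(n-1)\oplus so(2)\Big\}>0,
\end{align*}
and therefore $|A_j|\leq c(n)^{-1}\max_{\alpha}|[A_j,J^{(1,j)}_{\alpha}]|$ for large $j$, using $J^{(1,j)}_{\alpha}\rightarrow J_{\alpha}$.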
Note that $J^{(1,j)}_{\alpha}\rightarrow J_{\alpha}$ by the previous discussion. Now we can pass to a subsequence such that $\frac{P_{\alpha}^j}{Q_j}\rightarrow [A,J_{\alpha}]$ and
$\frac{c^j_{\alpha}}{Q_j}\rightarrow c_{\alpha}\in \mathrm{Im}(J_{\alpha})$. Consequently:
\begin{align*}
W^j_{\alpha} \rightarrow W_{\alpha}^{\infty} = [A,J_{\alpha}]x +c_{\alpha}
\end{align*}
Note that the $W^{\infty}_{\alpha}$ are not all $0$, because
$\sup\limits_{B_{10H_0^{-1}}(p_{\infty})}\max_{\alpha}|W^{\infty}_{\alpha}|=1$.
On the other hand, by assumption
\begin{itemize}
\item $\max_{\alpha}|\left<K^{(1,j)}_{\alpha} - \sum_{\beta}\omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)}, \nu\right>|\leq 2H^{-1}\epsilon_j$
\item ${\sup\limits_{B_{10H(p_j)^{-1}}(p_j)}\max_{\alpha}|K_{\alpha}^{(1,j)} -\sum_{\beta} \omega^{(j)}_{\alpha\beta}K_{\beta}^{(2,j)} |\geq jH^{-1}}\epsilon_j$
\end{itemize}
therefore in the limit $\max_{\alpha}|\left<\nu, W_{\alpha}^{\infty}\right>|=0$ in
$B_{g_{\infty}}(p_{\infty},1)$.
By Lemma \ref{Lin Alg Cylinder B}, $W_{\alpha}^{\infty}\equiv 0$ for all $\alpha$, a contradiction.
\end{proof}
\begin{Lemma}\label{VF close Bowl times R}
Given $\delta < 1$, there exist constants $0<\epsilon_b \ll 1$ and $C>1$ depending
only on $n$ and $\delta$ with the following properties. Let $\Sigma^{n-1}\subset\mathbb{R}^n$ be the Bowl soliton with maximal
mean curvature 1 and $M\subset\mathbb{R}^{n+1}$ be a hypersurface with induced
metric $g$. Suppose that $q\in \Sigma\times\mathbb{R}$
and $M$ is a graph over the geodesic ball in $\Sigma\times\mathbb{R}$ of radius
$2H(q)^{-1}$ centered at $q$, and that after rescaling by $H(q)^{-1}$ the $C^3$ norm of the graph is at most $\epsilon_b$.
Let $\bar{x}\in M$ be a point whose rescaled distance to $q$ is at most
$\epsilon_b$.
Suppose that $\epsilon \leq \epsilon_b$ and
$\mathcal{K}^{(1)} = \{K^{(1)}_{\alpha}: 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2}\}$,
$\mathcal{K}^{(2)} = \{K^{(2)}_{\alpha}: 1 \leq \alpha \leq \frac{(n-1)(n-2)}{2}\}$ are two normalized sets of rotation vector fields. Assume that
\begin{itemize}
\item $\lambda_1+\lambda_2\geq \delta H$ in $B_{g}(\bar{x},H(\bar{x})^{-1})$,
where $\lambda_1, \lambda_2$ are the two smallest principal curvatures.
\item $\max_{\alpha}|\left<K^{(i)}_{\alpha},\nu\right>|H\leq \epsilon$
in $B_{g}(\bar{x},H(\bar{x})^{-1})$
\item $\max_{\alpha}|K_{\alpha}^{(i)}|H\leq 5n$ at $\bar{x}$
\end{itemize}
for $i=1,2$. Then for $L\geq 1$,
\begin{align*}
\inf\limits_{\omega\in O(\frac{(n-1)(n-2)}{2})}\sup\limits_{B_{LH(\bar{x})^{-1}}(\bar{x})}\max_{\alpha}
|K_{\alpha}^{(1)}-\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{(2)}_{\beta}|H(\bar{x})
\leq CL\epsilon
\end{align*}
\begin{Rem}
The condition $\lambda_1+\lambda_2\geq \delta H$ means that the point has bounded distance to $\{p\}\times\mathbb{R}$ (note that we have normalized the mean curvature), where $p$ denotes the tip of the Bowl soliton.
\end{Rem}
\begin{proof}
The proof is analogous to \cite{zhu2020so}. Let us adopt the convention that
the tip of $\Sigma$ is the origin, the rotation axis is the $x_n$-axis, and
$\Sigma$ encloses the positive $x_n$-axis.
We argue by contradiction: if the assertion fails, then there exists
a sequence of points $q_j\in \kappa_j^{-1}\Sigma\times\mathbb{R}$ with
$H(q_j)=1$ and a sequence of pointed hypersurfaces $(M_j, p_j)$ that are
$1/j$ close to a geodesic ball $B_{\tilde{g_j}}(q_j,2)$ in
$\kappa_j^{-1}\Sigma\times\mathbb{R}$, where $\tilde{g_j}$ are the induced
metric on $\kappa_j^{-1}\Sigma\times\mathbb{R}$. Suppose that $|p_j-q_j|\leq 1/j$.
Without loss of generality we may assume that
$\left<q_j,\omega_{n+1}\right>=0$
where $\omega_{n+1}$ is the unit vector in the $\mathbb{R}$ (splitting)
direction.
Further, there exist normalized sets of rotation vector fields $\mathcal{K}^{(i,j)}=\{K^{(i,j)}_{\alpha}: 1\leq \alpha\leq \frac{(n-1)(n-2)}{2}\}$, $i=1,2$, and
$\epsilon_j<1/j$ such that
\begin{itemize}
\item $\max_{\alpha}|\left<K^{(i,j)}_{\alpha},\nu\right>|H\leq \epsilon_j$ in $B_{g_j}(p_j,H(p_j)^{-1})\subset M_j$
\item $\max_{\alpha}|K^{(i,j)}_{\alpha}|H\leq 5n$ at $p_j$
\end{itemize}
for $i=1,2$, but
\begin{itemize}
\item $\inf\limits_{\omega\in O(\frac{(n-1)(n-2)}{2})}\sup\limits_{B_{LH(p_j)^{-1}}(p_j)}\max_{\alpha}
|K_{\alpha}^{(1,j)}-\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{(2,j)}_{\beta}|H(p_j)
\geq j\epsilon_j$
\end{itemize}
Now the maximal mean curvature of $\kappa_j^{-1}\Sigma\times\mathbb{R}$ is $\kappa_j$.
For any $j>2C/\delta$, by the first condition and approximation we know that $\frac{\lambda_1+\lambda_2}{H}\geq \frac{\delta}{2}$ around $q_j$.
The asymptotic behaviour of the Bowl soliton indicates that $\frac{H(q_j)}{\kappa_j}<C(\delta)$ and $|q_j-\left<q_j,\omega_{n+1}\right>\omega_{n+1}|\kappa_j<C(\delta)$, thus
$\kappa_j>C(\delta)^{-1}$ and $|q_j|=|q_j-\left<q_j,\omega_{n+1}\right>\omega_{n+1}|<C(\delta)$.
We can then pass to a subsequence such that $q_j\rightarrow q_{\infty}$ and $\kappa_j\rightarrow\kappa_{\infty}>C(\delta)^{-1}>0$.
Consequently $\kappa_j^{-1}\Sigma\times\mathbb{R}\rightarrow \kappa_{\infty}^{-1}\Sigma\times\mathbb{R}$ and $B_{\tilde{g}}(q_j,2)\rightarrow B_{\tilde{g}_{\infty}}(q_{\infty},2)$ smoothly,
where $B_{\tilde{g}_{\infty}}(q_{\infty},2)$ is the geodesic ball in $\kappa_{\infty}^{-1}\Sigma\times\mathbb{R}$.
Combining with the assumption that $(M_j,p_j)$ is $1/j$ close to $(B_{\tilde{g}_j}(q_j,2),q_j)$ and $H(q_j)=1$, we have $M_j\rightarrow B_{\tilde{g}_{\infty}}(q_{\infty},2)$ with $p_j\rightarrow q_{\infty}$
and $H(q_{\infty})=1$.
We can write $K_{\alpha}^{(i,j)}(x)=S_{(i,j)}J^{(i,j)}_{\alpha}S_{(i,j)}^{-1}(x-b_{(i,j)})$ for some orthonormal basis $\{J^{(i,j)}_{\alpha}, 1\leq \alpha\leq \frac{(n-1)(n-2)}{2}\}$ of
$so(n-1)$
and assume that $(b_{(i,j)}-p_j)\perp \bigcap_{\alpha}\ker (J^{(i,j)}_{\alpha}S_{(i,j)}^{-1})$. Then
$|b_{(i,j)}-p_j|\leq C\sum_{\alpha}|S_{(i,j)}J^{(i,j)}_{\alpha}S_{(i,j)}^{-1}(p_j-b_{(i,j)})|\leq C(n)$ for large $j$.
We can pass to a subsequence such that $S_{(i,j)}$, $J^{(i,j)}_{\alpha}$ and $b_{(i,j)}$ converge to $S_{(i,\infty)}$, $J^{(i,\infty)}_{\alpha}$ and $b_{(i,\infty)}$ respectively.
Consequently $K^{(i,j)}_{\alpha}\rightarrow K^{(i,\infty)}_{\alpha} = S_{(i,\infty)}J^{(i,\infty)}_{\alpha}S_{(i,\infty)}^{-1}(x-b_{(i,\infty)})$ for $i=1,2$.
The convergence implies that
\begin{itemize}
\item $\left<K^{(i,\infty)}_{\alpha},\nu\right> = 0$ in $B_{g_{\infty}}(q_{\infty}, 1)$
\item $\max_{\alpha}|K^{(i,\infty)}_{\alpha}| H \leq 5n$ at
$q_{\infty}$
\end{itemize}
By Lemma \ref{Lin Alg Cylinder A} we have:
\begin{flalign*}
K_{\alpha}^{(i,\infty)}(x) = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}}\omega^{(i)}_{\alpha\beta} J_{\beta}x
\end{flalign*}
for some fixed orthonormal basis $\{J_{\alpha}, 1\leq \alpha\leq \frac{(n-1)(n-2)}{2}\}$ of $so(n-1)$.
Arguing exactly as in the proof of Lemma \ref{Vector Field closeness on Cylinder}, we reach
a contradiction.
\end{proof}
\end{Lemma}
\begin{Rem}\label{Alternative Conclusion}
Under the assumption of Lemma \ref{Vector Field closeness on Cylinder} or Lemma \ref{VF close Bowl times R}, an
alternative conclusion also holds:
if $K^{(1)}_{\alpha} = SJ_{\alpha}S^{-1}(x-b)$, then there
exist $S', b'$ with $|S'-S|+|b-b'|\leq C\epsilon$ and
$\omega\in O(\frac{(n-1)(n-2)}{2})$ such that for each $\alpha$
\begin{align}\label{V F closeness remark on Cylinder}
K^{(2)}_{\alpha} = \sum_{\beta=1}^{\frac{(n-1)(n-2)}{2}} \omega_{\alpha\beta} S' J_{\beta}S'^{-1}(x-b')
\end{align} \end{Rem}
\begin{Th}\label{Cylindrical improvement} There exist constants $L_0>1$ and $0<\epsilon_0<1/10$ with the following properties: suppose that $M_t$ is a mean curvature flow solution and every point in the parabolic neighborhood $\hat{\mathcal{P}}(\bar{x},\bar{t},L_0,L_0^2)$ is $\epsilon$ symmetric and $(\epsilon_0,100n^{5/2})$ cylindrical,
where $0<\epsilon\leq \epsilon_0$, then $(\bar{x},\bar{t})$ is $\frac{\epsilon}{2}$ symmetric. \end{Th} \begin{proof}
We will abbreviate some details that are similar to those in \cite{brendle2018uniqueness}, \cite{brendle2019uniqueness} or \cite{zhu2020so}.
We assume that $L_0$ is large and $\epsilon_0$ is small depending on $L_0$. The constant $C$ is always assumed to depend on $n$.
For any space-time point $(y,s)\in \hat{\mathcal{P}}(\bar{x},\bar{t},L_0,L_0^2)$, by assumption there is a normalized set of rotation vector fields $\mathcal{K}^{(y,s)} = \{K_{\alpha}^{(y,s)}, 1\leq\alpha \leq \frac{(n-1)(n-2)}{2}\}$ such that
$\max_{\alpha}|\left<K_{\alpha}^{(y,s)},\nu\right>|H\leq \epsilon$ and
$\max_{\alpha}|K_{\alpha}^{(y,s)}|H\leq 5n$ in a parabolic neighbourhood $\hat{\mathcal{P}}(y,s,100n^{5/2},100^2n^5)$.
Without loss of generality we may assume $\bar{t}=-1$, $H(\bar{x})=\sqrt{\frac{n-2}{2}}$
and $|\bar{x}-(\sqrt{2(n-2)},0,0,0)|\leq \epsilon_0$.
Let's set up a reference normalized set of rotation vector fields
$\mathcal{K}^0 =\{K^0_{\alpha}(x) = J_{\alpha}x, 1\leq\alpha\leq\frac{(n-1)(n-2)}{2}\}$ and let
$\bar{\mathcal{K}}=\mathcal{K}^{(\bar{x},-1)}$,
where $\{J_{\alpha}, 1\leq\alpha\leq\frac{(n-1)(n-2)}{2}\}$
is an orthonormal basis of $so(n-1)$.
By the cylindrical assumption,
for each $(y,s)\in\hat{\mathcal{P}}(\bar{x},\bar{t},L_0,L_0^2)$,
the parabolic neighborhood $\hat{\mathcal{P}}(y,s,100^2n^5,100^2n^5)$ is $C(L_0)\epsilon_0$ close to the shrinking cylinder
$S^{n-2}_{\sqrt{-2(n-2)t}}\times \mathbb{R}^2$ in $C^{10}$ norm.
We use spherical coordinates $(r\Theta,z_1,z_2)$ on $M_t$, in which $M_t$ is expressed as a radial graph.
More precisely,
the set $$\left\{(r\Theta,z_1,z_2)\big|r=r(\Theta,z_1,z_2), z_1^2+z_2^2\leq \frac{L_0^2}{2},\Theta\in S^{n-2}\right\}$$ is contained in $M_t$.
Let $0=\lambda_0<\lambda_1\leq\lambda_2\leq ...$ be all eigenvalues
of the Laplacian on the unit sphere $S^{n-2}$ and let $Y_m$ be the
eigenfunction corresponding to $\lambda_m$ such that $\{Y_m\}$ forms an
orthonormal basis with respect to the $L^2$ inner product on the unit sphere.
In particular, $\lambda_1=...=\lambda_{n-1}=n-2$, and
for each $k\leq n-1$ we can choose $Y_k$ to be proportional to $\Theta_k$, the
$k$-th coordinate
function of $\Theta$ (regarding $\Theta$ as a unit
vector in $\mathbb{R}^{n-1}$). \\
\noindent\textbf{Step 1:}
Using Lemma \ref{Vector Field closeness on Cylinder}
repeatedly we can conclude that, for any $R>1$ and
$(x_0, t_0)\in \hat{\mathcal{P}}(\bar{x},\bar{t},L_0,L_0^2)$
\begin{align*}
\inf\limits_{\omega\in O(\frac{(n-1)(n-2)}{2})}\sup\limits_{B_{RH(\bar{x})^{-1}}(\bar{x})}\max_{\alpha}
|\bar{K_{\alpha}}-\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{(x_0, t_0)}_{\beta}|H(\bar{x})
\leq C(L_0)R\epsilon
\end{align*}
Therefore we can replace $K^{(x_0, t_0)}_{\alpha}$ by
$\sum_{\beta}\omega_{\alpha\beta}K_{\beta}^{(x_0, t_0)}$ for some
$\omega\in O(\frac{(n-1)(n-2)}{2})$ depending on $(x_0, t_0)$ such that
\begin{align}\label{Single Vector}
\sup\limits_{B_{RH(\bar{x})^{-1}}(\bar{x})}\max_{\alpha}
|\bar{K_{\alpha}}- K^{(x_0, t_0)}_{\alpha}|H(\bar{x})
\leq C(L_0)R\epsilon
\end{align}
Further applying Lemma \ref{Vector Field closeness on Cylinder} for
$\bar{\mathcal{K}}$ and $\mathcal{K}^0$ with $\epsilon_0$, we can find $ \omega\in O(\frac{(n-1)(n-2)}{2})$ such that:
\begin{align}
\sup\limits_{B_{RH(\bar{x})^{-1}}(\bar{x})}\max_{\alpha}
|\bar{K}_{\alpha} - \sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{0}_{\beta}|H(\bar{x})
\leq CR\epsilon_0
\end{align}
By Remark \ref{Alternative Conclusion}, the axis of $\bar{K}_{\alpha}$ and the axis of the approximating cylinder differ by at most $CR\epsilon_0$; therefore, by rotating the cylinder we
may assume that $\bar{\mathcal{K}}=\mathcal{K}^0$.
Let's remark that since $|\sum_{\beta = 1}^{\frac{(n-1)(n-2)}{2}}\omega_{\alpha\beta}K^{0}_{\beta}|H \equiv n-2$ on $S^{n-2}\times \mathbb{R}^2$, by approximation the condition $|K_{\alpha}|H\leq 5n$ is always satisfied throughout the proof. \\
\noindent\textbf{Step 2:} Let's fix an $\alpha$. Given any space time point
$(x_0, t_0)\in \hat{\mathcal{P}}(\bar{x},\bar{t},L_0,L_0^2)$, there exist
constants $a_i, b_i, c_i$ (depending on the choice of $(x_0, t_0)$) such that
\begin{align*}
|a_1| + ...+|a_{n-1}|\leq &C(L_0)\epsilon \\
|b_1| + ...+|b_{n-1}|\leq &C(L_0)\epsilon \\
|c_1| + ...+|c_{n-1}| \leq &C(L_0)\epsilon
\end{align*}
and
\begin{align*}
|\left<\bar{K}_{\alpha}-K_{\alpha}^{(x_0, t_0)},\nu \right>&-(a_1Y_1+...+a_{n-1}Y_{n-1})\\
&-(b_1Y_1+...+b_{n-1}Y_{n-1})z_1 \\
&-(c_1Y_1+...+c_{n-1}Y_{n-1})z_2|\leq C(L_0)\epsilon_0\epsilon
\end{align*}
holds in $\hat{\mathcal{P}}(x_0, t_0, 10,100)$.
Consequently, the function $u=\left<\bar{K}_{\alpha},\nu\right>$ satisfies
\begin{align}\label{u Bound}
|u&-(a_1Y_1+...+a_{n-1}Y_{n-1})\notag\\
&-(b_1Y_1+...+b_{n-1}Y_{n-1})z_1 \notag\\
&-(c_1Y_1+...+c_{n-1}Y_{n-1})z_2|\leq C(L_0)\epsilon_0\epsilon + C(-t_0)^{1/2}\epsilon
\end{align}
in $\hat{\mathcal{P}}(x_0, t_0, 10,100)$.
The function $u$ satisfies the Jacobi equation:
\begin{align}\label{Jacobi Equation}
\frac{\partial u}{\partial t} = \Delta u + |A|^2u
\end{align}
Using parabolic interior estimates we obtain $|\nabla u| + |\nabla^2 u|\leq C(L_0)\epsilon$ in $\hat{\mathcal{P}}(\bar{x},-1, \frac{L_0}{\sqrt{2}}, \frac{L_0^2}{2})$.
By approximation,
\begin{align}\label{Nonhom Jacobi}
|\frac{\partial u}{\partial t}-(\frac{\partial^2 u}{\partial z_1^2}+\frac{\partial^2 u}{\partial z_2^2}+\frac{\Delta_{S^{n-2}}u}{-2(n-2)t}+\frac{1}{-2t}u)|\leq C(L_0)\epsilon\epsilon_0
\end{align}
holds for $z_1^2+z_2^2 \leq \frac{L_0^2}{4}$ and $-\frac{L_0^2}{4}\leq t\leq -1$.
Define
\begin{align*}
\Omega_l(\bar{z}_1,\bar{z}_2) &=\{(z_1,z_2)\in\mathbb{R}^2 \ : \ |z_i-\bar{z}_i|\leq l\}, \qquad \Omega_l=\Omega_l(0, 0),\\
\Gamma_l &=\{(z_1,z_2,\theta,t)\in \Omega_l\times[0,2\pi]\times[-l^2,-1] \ : \ t=-l^2 \text{ or } |z_1|=l \text{ or } |z_2|=l\}.
\end{align*}
Let $\tilde{u}$ solve the homogeneous equation
\begin{align*}
\frac{\partial u}{\partial t}=\frac{\partial^2 u}{\partial z_1^2}+\frac{\partial^2 u}{\partial z_2^2}+\frac{\Delta_{S^{n-2}}u}{-2(n-2)t}+\frac{1}{-2t}u
\end{align*}
in $\Omega_{L_0/4}\times [0,2\pi]\times [-L_0^2/16,-1]$ and satisfies the boundary condition $\tilde{u}=u$ on $\Gamma_{L_0/4}$.
By (\ref{Nonhom Jacobi}) and the maximum principle,
\begin{align}\label{diff tildeu and u}
|u-\tilde{u}|\leq C(L_0)\epsilon\epsilon_0
\end{align}
Next, we compute the Fourier coefficients of $\tilde{u}$ and analyze each of them. Let
\begin{align*}
v_m=\int_{S^{n-2}}\tilde{u}(z,t,\Theta)Y_m(\Theta) d\Theta
\end{align*}
where $z=(z_1,z_2)$. Then $v_m$ satisfies the following equation:
\begin{align*}
\frac{\partial v_m}{\partial t}=\frac{\partial^2 v_m}{\partial z_1^2}+\frac{\partial^2 v_m}{\partial z_2^2}+\frac{n-2-\lambda_m}{-2(n-2)t}v_m
\end{align*}
Let $\hat{v}_m=v_m(-t)^{\frac{n-2-\lambda_m}{2(n-2)}}$.
Then $\hat{v}_m$ satisfies the linear heat equation
\begin{align*}
\frac{\partial \hat{v}_m}{\partial t}=\frac{\partial^2 \hat{v}_m}{\partial z_1^2}+\frac{\partial^2 \hat{v}_m}{\partial z_2^2}
\end{align*}
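To verify this reduction, write $\mu=\frac{n-2-\lambda_m}{2(n-2)}$ and $\Delta_z=\frac{\partial^2}{\partial z_1^2}+\frac{\partial^2}{\partial z_2^2}$, so that $\hat{v}_m=(-t)^{\mu}v_m$; a direct computation gives
\begin{align*}
\frac{\partial \hat{v}_m}{\partial t} = (-t)^{\mu}\frac{\partial v_m}{\partial t}-\mu(-t)^{\mu-1}v_m
=(-t)^{\mu}\Big(\Delta_z v_m+\frac{\mu}{-t}v_m\Big)-\mu(-t)^{\mu-1}v_m
=\Delta_z \hat{v}_m,
\end{align*}
since $\frac{n-2-\lambda_m}{-2(n-2)t}=\frac{\mu}{-t}$, so the zero-order terms cancel exactly.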
\noindent\textit{Case 1: } $m\geq n$
In this case $\lambda_m\geq 2(n-1)$. Taking the $L^2$ inner product with
$Y_m$ on both sides of (\ref{u Bound}) and multiplying by
$(-t)^{\frac{n-2-\lambda_m}{2(n-2)}}$ we obtain that:
\begin{align*}
|\hat{v}_m|\leq (C(L_0)\epsilon\epsilon_0+C\epsilon)(-t)^{1-\frac{\lambda_m}{2(n-2)}}
\end{align*}
in $\Omega_{L_0/4}\times [-L_0^2/16,-1]$.
The Dirichlet heat kernel for $\Omega_{L_0/4}$ is
\begin{align*}
K_t&(x,y)=-\frac{1}{4\pi t}\sum_{\delta_i\in \{\pm 1\}, k_i\in \mathbb{Z}} (-1)^{-(\delta_1+\delta_2)/2}\cdot\\ &\exp\left({-\frac{\left|(x_1,x_2)-(\delta_1 y_1, \delta_2 y_2)-(1-\delta_1,1-\delta_2)\frac{L_0}{4}+(4k_1,4k_2)\frac{L_0}{4}\right|^2}{4t}}\right)
\end{align*}
where $x=(x_1,x_2), y=(y_1, y_2)$ are in $\Omega_{L_0/4}$.
$K_t$ satisfies
\begin{enumerate}
\item $\partial_tK_t=\Delta_x K_t$ in $\Omega_{L_0/4}\times (0,\infty) $
\item $K_t$ is symmetric in $x$ and $y$
\item $\lim_{t\rightarrow 0}K_t(x,y)=\delta(x-y)$ in the distribution sense.
\item $K_t(x,y)=0$ \ for $y\in\partial\Omega_{L_0/4}$ and $t>0$
\end{enumerate}
The solution formula is
\begin{align*}
\hat{v}_m(x,t)= &\int_{\Omega_{L_0/4}}K_{t+L_0^2/16}(x,y)\hat{v}_m(y,-\frac{L_0^2}{16})dy \\ -&\int_{-L_0^2/16}^{t}\int_{\partial\Omega_{L_0/4}}\partial_{\nu_y}K_{t-\tau}(x,y)\hat{v}_m(y,\tau)dy \ d\tau
\end{align*}
Now let $(x,t)\in \Omega_{L_0/100}\times [-L_0^2/100^2,-1]$ (with $L_0$ large enough) and $-L_0^2/16\leq\tau<t$.
For all such $(x,t)$ and $\tau$ we have the following heat kernel estimate (see Appendix \ref{Appendix HKEST} for details):
\begin{align}\label{hkest1}
\int_{\Omega_{L_0/4}}|K_{t+L_0^2/16}(x,y)|dy\leq C
\end{align}
\begin{align}\label{hkest2}
\int_{\partial{\Omega_{L_0/4}}}|\partial_{\nu_y}K_{t-\tau}(x,y)|dy\leq \frac{CL_0^2}{(t-\tau)^2}e^{-\frac{L_0^2}{1000(t-\tau)}}
\end{align}
Now we can estimate $\hat{v}_m(x,t)$ for $(x,t)\in \Omega_{L_0/100}\times [-\frac{L_0^2}{100^2},-1]$:
\begin{align*}
|\hat{v}_m(x,t)|&\leq (C(L_0)\epsilon_0\epsilon+C\epsilon)\Big(\frac{L_0^2}{16}\Big)^{1-\frac{\lambda_m}{2(n-2)}} \\
&+(C(L_0)\epsilon_0\epsilon+C\epsilon)\int_{-L_0^2/16}^{t}\frac{CL_0^2}{(t-\tau)^2}e^{-\frac{L_0^2}{1000(t-\tau)}}(-\tau)^{1-\frac{\lambda_m}{2(n-2)}}d\tau\\
\end{align*}
For $L_0$ large and $-200^2n^5\leq t\leq -1$
\begin{align*}
\frac{CL_0^2}{(t-\tau)^2}e^{-\frac{L_0^2}{2000(t-\tau)}}(-\tau)^{1-\frac{1}{2(n-2)}}\leq CL_0^{-\frac{1}{n-2}}
\end{align*}
whenever $\tau<t$.
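To verify this elementary bound, set $s=t-\tau>0$ and $a=\frac{L_0^2}{2000}$; the function $s\mapsto s^{-2}e^{-a/s}$ attains its maximum at $s=a/2$, so
\begin{align*}
\sup_{s>0}\frac{L_0^2}{s^2}e^{-\frac{L_0^2}{2000s}} = \frac{L_0^2}{(a/2)^2}e^{-2}=4e^{-2}\cdot 2000^2\,L_0^{-2},
\end{align*}
while $(-\tau)^{1-\frac{1}{2(n-2)}}\leq \big(\tfrac{L_0^2}{16}\big)^{1-\frac{1}{2(n-2)}}\leq CL_0^{2-\frac{1}{n-2}}$ for $\tau\in[-L_0^2/16,t]$; multiplying the two bounds gives the stated $CL_0^{-\frac{1}{n-2}}$.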
Therefore
\begin{align}\label{estimate large m}
|\hat{v}_m(x,t)|\leq (C(L_0)\epsilon_0\epsilon+C\epsilon)&\Big(\frac{L_0^2}{16}\Big)^{1-\frac{\lambda_m}{2(n-2)}} \notag \\ \notag
+(C(L_0)\epsilon_0\epsilon+C\epsilon)&\int_{-\frac{L_0^2}{16}}^{t}CL_0^{-\frac{1}{n-2}} e^{-\frac{L_0^2}{2000(t-\tau)}}(-\tau)^{\frac{1-\lambda_m}{2(n-2)}}d\tau\\\notag
\leq(C(L_0)\epsilon_0\epsilon+C\epsilon)&\Big[\Big(\frac{L_0}{4}\Big)^{2-\frac{\lambda_m}{n-2}}+\int_{(1+\frac{1}{\sqrt{\lambda_m}})t}^{t}C e^{-\frac{L_0^2}{2000(t-\tau)}}d\tau\\ \notag &+\int_{-\infty}^{(1+\frac{1}{\sqrt{\lambda_m}})t}CL_0^{-\frac{1}{n-2}}(-\tau)^{\frac{1-\lambda_m}{2(n-2)}}d\tau \Big] \\ \notag
\leq(C(L_0)\epsilon_0\epsilon+C\epsilon)&\Big[\Big(\frac{L_0}{4}\Big)^{2-\frac{\lambda_m}{n-2}}+\frac{C}{\sqrt{\lambda_m}}e^{-\frac{\sqrt{\lambda_m}L_0^2}{2000}}\\ &+CL_0^{-\frac{1}{n-2}}\left(1+\frac{1}{\sqrt{\lambda_m}}\right)^{\frac{2n-3-\lambda_m}{2(n-2)}} \Big]
\end{align}
Note that $2-\frac{\lambda_m}{n-2}\leq \frac{-2}{n-2}$ and $\frac{2n-3-\lambda_m}{2(n-2)}\leq -\frac{1}{2(n-2)}$. Moreover,
the eigenvalues of the Laplacian on $S^{n-2}$ are $l(l+n-3)$ for integer
$l\geq 1$ with multiplicity $N_l={n+l-2 \choose n-2}-{n+l-4 \choose n-2}$.
For large $l$ we have $l(l+n-3)\leq Cl^2$ and $N_l\leq Cl^{n-2}$.
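As a quick sanity check of the multiplicity formula in the lowest admissible case $n=4$ (so the sphere is $S^2$):
\begin{align*}
N_l={l+2 \choose 2}-{l \choose 2}=\frac{(l+2)(l+1)-l(l-1)}{2}=2l+1,
\end{align*}
the familiar multiplicity of degree-$l$ spherical harmonics on $S^2$, consistent with the bounds $l(l+n-3)\leq Cl^2$ and $N_l\leq Cl^{n-2}$.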
Putting these facts into (\ref{estimate large m}) and summing over
$m\geq n$ we obtain:
\begin{align*}
\sum_{m=n}^{\infty} |\hat{v}_m|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon
\end{align*}
for $(x,t)\in\Omega_{200n^{5/2}}\times[-200^2n^5,-1]$. \\
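To see how the summation converges, one can group the modes $m\geq n$ by eigenvalue level $l\geq 2$; a sketch, using the facts above:
\begin{align*}
\sum_{m\geq n}\Big(\frac{L_0}{4}\Big)^{2-\frac{\lambda_m}{n-2}}=\sum_{l\geq 2}N_l\Big(\frac{L_0}{4}\Big)^{2-\frac{l(l+n-3)}{n-2}}\leq\sum_{l\geq 2}Cl^{n-2}\Big(\frac{L_0}{4}\Big)^{-\frac{l}{n-2}}\leq CL_0^{-\frac{2}{n-2}}
\end{align*}
for $L_0$ large, where the first inequality uses $2(n-2)\leq l(l+n-4)$ for $l\geq 2$; the remaining terms of (\ref{estimate large m}) are summed in the same way.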
\noindent\textit{Case 2: } $1\leq m \leq n-1$. \\
In this case $v_m = \hat{v}_m$. Recall that $v_m$ satisfies the heat equation:
\begin{align*}
\frac{\partial v_m}{\partial t}=\frac{\partial^2 v_m}{\partial z_1^2}+\frac{\partial^2 v_m}{\partial z_2^2} \ \ \text{ in } \Omega_{L_0/4}\times [-L_0^2/16,-1]
\end{align*}
For each $(X_0,t_0)=(x_1,x_2,t_0)\in\Omega_{L_0/4} \times [-L_0^2/16,-1]$, taking the Fourier coefficient along $Y_m$ in (\ref{u Bound}), we can find $a_m, b_m, c_m$ satisfying $|a_m|+|b_m|+|c_m|\leq C(L_0)\epsilon$ and
$$|v_m - (a_m + b_m z_1 + c_m z_2)|\leq (C(L_0)\epsilon\epsilon_0+C\epsilon)(-t)^{1/2}$$
in $\Omega_{(-t_0)^{\frac{1}{2}}}(x_1,x_2)\times[0,2\pi]\times[2t_0,t_0]$.
The linear term satisfies the heat equation trivially, so the parabolic interior gradient estimate implies that $|\frac{\partial^2 v_m}{\partial z_i\partial z_j}|\leq(C(L_0)\epsilon\epsilon_0+C\epsilon)(-t)^{-1/2}$ for each pair $1\leq i,j\leq2$ in $\Omega_{L_0/8}\times[-L_0^2/64,-1]$.
Since $\frac{\partial^2 v_m}{\partial z_i\partial z_j}$ also satisfies the heat equation in $\mathbb{R}^2$,
we can apply the same argument as in the case $m\geq n$ to $\frac{\partial^2 v_m}{\partial z_i\partial z_j}$ and obtain:
\begin{align*}
|\frac{\partial^2 v_m}{\partial z_i\partial z_j}|&\leq (C(L_0)\epsilon_0\epsilon+C\epsilon)\Big(\frac{L_0^2}{64}\Big)^{-\frac{1}{2}} \\
&+(C(L_0)\epsilon_0\epsilon+C\epsilon)\int_{-L_0^2/64}^{t}\frac{CL_0^2}{(t-\tau)^2}e^{-\frac{L_0^2}{1000(t-\tau)}}(-\tau)^{-\frac{1}{2}}d\tau
\end{align*}
for each $1\leq i, j\leq 2$.
For $L_0$ large and $-200^2n^5\leq t\leq -1$,
\begin{align*}
\frac{CL_0^2}{(t-\tau)^2}e^{-\frac{L_0^2}{1000(t-\tau)}}\leq CL_0^{-2}
\end{align*}
whenever $\tau<t$.
Then we have:
\begin{align}
\sum_{i,j=1,2}|\frac{\partial^2 v_m}{\partial z_i\partial z_j}|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-1}\epsilon
\end{align}
for $(x,t)\in\Omega_{200n^{5/2}}\times[-200^2n^5,-1]$ and any $1\leq m \leq n-1$.
This means that we can find real numbers $A_m, B_m, C_m$ such that
\begin{align}
|v_m-(A_m+B_m z_1+C_m z_2)|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-1}\epsilon
\end{align}
and $|A_m|+|B_m|+|C_m|\leq C(L_0)\epsilon$. \\
\noindent\textit{Case 3: } $m=0$. \\
Under the polar coordinates on $M_t$, the normal vector $\nu$ satisfies:
\begin{align*}
\nu\sqrt{1+\left|\frac{\nabla^{S^{n-2}} r}{r}\right|^2+\left(\frac{\partial u}{\partial z_1}\right)^2+\left(\frac{\partial u}{\partial z_2}\right)^2}=\left(\Theta,-\frac{\partial r}{\partial z_1},-\frac{\partial r}{\partial z_2}\right)+\left(-\frac{\nabla^{S^{n-2}} r}{r}, 0, 0\right)
\end{align*}
Since $J_{\alpha}\Theta$ is divergence free on $S^{n-2}$, we have
\begin{align*}
&\int_{S^{n-2}}u\sqrt{1+\left|\frac{\nabla^{S^{n-2}} r}{r}\right|^2+\left(\frac{\partial u}{\partial z_1}\right)^2+\left(\frac{\partial u}{\partial z_2}\right)^2}d\Theta \\
=&\int_{S^{n-2}}\left<J_{\alpha}\Theta, -\nabla^{S^{n-2}}r \right>_{S^{n-2}}d\Theta \\
=&\int_{S^{n-2}} \text{div}_{S^{n-2}}(rJ_{\alpha}\Theta)d\Theta = 0
\end{align*}
Since $|u|\leq C(L_0)\epsilon$ and $|\frac{\nabla^{S^{n-2}} r}{r}|+|\frac{\partial u}{\partial z_1}|
+|\frac{\partial u}{\partial z_2}|\leq C\epsilon_0$, we obtain that
\begin{align}\label{mode0estimate}
\left|\int_{S^{n-2}}u(\Theta,z_1,z_2)d\Theta\right|\leq C(L_0)\epsilon_0\epsilon
\end{align}
in $\Omega_{200n^{5/2}}\times[-200^2n^5,-1]$.
We then conclude that $|\hat{v}_0|\leq C(L_0)\epsilon_0\epsilon$ in $\Omega_{200n^{5/2}}\times[-200^2n^5,-1]$. \\
\noindent\textbf{Step 3: } Combining the analysis for all $m\geq 0$ and the fact that $|u-\tilde{u}|\leq C(L_0)\epsilon_0\epsilon$,
we conclude that
there exist constants $A_{\alpha,i}, B_{\alpha,i}, C_{\alpha,i}$ for
$1\leq \alpha \leq \frac{(n-1)(n-2)}{2}$ and $1\leq i\leq n-1$ such that
\begin{align*}
|A_{\alpha,1}| + \cdots+|A_{\alpha,n-1}|\leq &C(L_0)\epsilon \\
|B_{\alpha,1}| + \cdots+|B_{\alpha,n-1}|\leq &C(L_0)\epsilon \\
|C_{\alpha,1}| + \cdots+|C_{\alpha,n-1}| \leq &C(L_0)\epsilon
\end{align*}
and
\begin{align}\label{untuned u bound}
|\left<\bar{K}_{\alpha},\nu\right>&-(A_{\alpha,1}Y_1+...+A_{\alpha,n-1}Y_{n-1})\notag\\
&-(B_{\alpha,1}Y_1+...+B_{\alpha,n-1}Y_{n-1})z_1 \notag\\
&-(C_{\alpha,1}Y_1+...+C_{\alpha,n-1}Y_{n-1})z_2|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon
\end{align}
in $S^{n-2}\times\Omega_{200n^{5/2}}\times[-200^2n^5,-1]$.
For each $i\in\{1,2,...,n-1\}$, define
\begin{align*}
F_i(z_1,z_2) = \int_{S^{n-2}}r(\Theta,z_1,z_2)Y_i(\Theta)d\Theta
\end{align*}
(recall that $r$ is the radius function in polar coordinates, which is approximately constant).
We compute
\begin{align}\label{L2 inner product 1}
\int_{S^{n-2}}\left<\bar{K}_{\alpha},\nu\right>Y_id\Theta= &\int_{S^{n-2}}-\text{div}_{S^{n-2}}(rJ_{\alpha}\Theta)Y_i d\Theta \notag\\
=&\int_{S^{n-2}} -\text{div}_{S^{n-2}}(rJ_{\alpha}\Theta Y_i)+\left<\nabla^{S^{n-2}}Y_i,rJ_{\alpha}\Theta\right> d\Theta \notag\\
=& \sum_{j=1}^{n-1} \int_{S^{n-2}} J_{\alpha,ij}Y_j rd\Theta \notag\\
=& \sum_{j=1}^{n-1} J_{\alpha,ij}F_j(z_1,z_2)
\end{align}
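The third equality uses the following pointwise identity. Here we use that $Y_i$ is the restriction to $S^{n-2}$ of the linear coordinate function $\Theta\mapsto\Theta_i$ (an assumption consistent with the normalization of the $Y_i$ above), so that $\nabla^{S^{n-2}}Y_i=e_i-\Theta_i\Theta$ and
\begin{align*}
\left<\nabla^{S^{n-2}}Y_i,J_{\alpha}\Theta\right>=\left<e_i-\Theta_i\Theta,J_{\alpha}\Theta\right>=(J_{\alpha}\Theta)_i-\Theta_i\left<\Theta,J_{\alpha}\Theta\right>=\sum_{j=1}^{n-1}J_{\alpha,ij}Y_j
\end{align*}
where $\left<\Theta,J_{\alpha}\Theta\right>=0$ by the antisymmetry of $J_{\alpha}$.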
On the other hand, if we take $L^2$ inner product with $Y_i$ on both sides
of (\ref{untuned u bound}) we have
\begin{align}\label{L2 prod 2}
\left|\int_{S^{n-2}}\left<\bar{K}_{\alpha},\nu\right>Y_id\Theta - (A_{\alpha, i} + B_{\alpha,i}z_1 + C_{\alpha,i}z_2 )\right|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon
\end{align}
Let $b\in \mathbb{R}^{n+1}$ and $P\in so(n-1)^{\perp}\subset so(n+1)$ satisfy
\begin{align}\label{define b p}
b_i =& F_i(0,0)\notag\\
P_{n,i}= &-\frac{1}{2}(F_i(1,0)-F_i(-1,0))\\
P_{n+1,i}=&-\frac{1}{2}(F_i(0,1)-F_i(0,-1))\notag
\end{align}
for each $i\in\{1,2,...,n-1\}$. Hence
\begin{align}\label{Lie Bracket P J}
[P,J_{\alpha}]_{n,i} =& -\sum_{j=1}^{n-1} \frac{1}{2}(F_j(1,0)-F_j(-1,0))J_{\alpha,ji} \notag\\
[P,J_{\alpha}]_{n+1,i} =& -\sum_{j=1}^{n-1} \frac{1}{2}(F_j(0,1)-F_j(0,-1))J_{\alpha,ji}\\
|P|+|b|\leq &C(L_0)\epsilon \notag
\end{align}
for each $i$.
Now we combine the computations (\ref{L2 inner product 1}), (\ref{L2 prod 2}) and (\ref{Lie Bracket P J})
to conclude that, for each $1\leq \alpha\leq \frac{(n-1)(n-2)}{2}$ and $1\leq i\leq n-1$:
\begin{align*}
|A_{\alpha,i} - (J_{\alpha}b)_i| \leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon\\
|B_{\alpha,i} -[P,J_{\alpha}]_{n,i}|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon \\
|C_{\alpha,i} -[P,J_{\alpha}]_{n+1,i}|\leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon
\end{align*}
Now we let $S=\exp(P)$ and let
\begin{align*}
\tilde{K}_{\alpha} = SJ_{\alpha}S^{-1}(x-b)
\end{align*}
Then it's easy to compute that on the cylinder $S^{n-2}\times\mathbb{R}^2$
(of any radius) we have
\begin{align}\label{diff tuning}
|\left<\bar{K}_{\alpha}-\tilde{K}_{\alpha},\nu\right> - &\left<[P,J_{\alpha}]x - J_{\alpha}b,\nu\right> | \leq C|P|^2+C|P||b|+C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon \notag\\
\Rightarrow|\left<\bar{K}_{\alpha}-\tilde{K}_{\alpha},\nu\right> - &(A_{\alpha,1}Y_1+...+A_{\alpha,n-1}Y_{n-1})\\
-&(B_{\alpha,1}Y_1+...+B_{\alpha,n-1}Y_{n-1})z_1 \notag\\
-&(C_{\alpha,1}Y_1+...+C_{\alpha,n-1}Y_{n-1})z_2| \leq C(L_0)\epsilon_0\epsilon+CL_0^{-\frac{1}{n-2}}\epsilon\notag
\end{align}
for each $1\leq\alpha \leq \frac{(n-1)(n-2)}{2}$.
By approximation and (\ref{diff tuning}), (\ref{untuned u bound}) we conclude that,
\begin{align*}
|\left<\tilde{K}_{\alpha},\nu\right>|H \leq C(L_0)\epsilon_0\epsilon + CL_0^{-\frac{1}{n-2}}\epsilon
\end{align*}
in $\hat{\mathcal{P}}(\bar{x},\bar{t},100n^{5/2},100^2n^5)$,
for each $1\leq\alpha \leq \frac{(n-1)(n-2)}{2}$.
We choose $L_0$ large enough and then choose $\epsilon_0$ small enough depending on $L_0$, then $(\bar{x},\bar{t})$ is $\frac{\epsilon}{2}$
symmetric. \end{proof}
\begin{Th}\label{bowl x R improvement} There exist a constant $L_1\gg L_0$ and $0<\epsilon_1\ll \epsilon_0$ such that: for a mean curvature flow solution $M_t$ and a space-time point $(\bar{x},\bar{t})$, if $\hat{\mathcal{P}}(\bar{x},\bar{t},L_1,L_1^2)$ is $\epsilon_1$ close to a piece of $\text{Bowl}^{n-1} \times \mathbb{R}$ in the $C^{10}$ norm after the parabolic rescaling (which makes $H(\bar{x},\bar{t})=1$) and every point in $\hat{\mathcal{P}}(\bar{x},\bar{t},L_1,L_1^2)$ is $\epsilon$ symmetric with $0<\epsilon\leq \epsilon_1$, then $(\bar{x},\bar{t})$ is $\frac{\epsilon}{2}$ symmetric. \end{Th}
\begin{proof}
Throughout the proof, $L_1$ is always assumed to be large enough depending only on $L_0,\epsilon_0, n$ and $\epsilon_1$ is assumed to be small enough depending on $L_1, L_0, \epsilon_0, n$. $C$ denotes the constant depending only on $n, L_0, \epsilon_0$.
We denote by $\Sigma$ the standard Bowl soliton in $\mathbb{R}^{n}$.
That means the tip of $\Sigma$ is the origin, the mean curvature at the tip is $1$, the rotation axis is the $x_n$-axis, and $\Sigma$ encloses the positive part of the $x_n$-axis. Let $\omega_n$ be the unit vector along the $x_n$-axis that coincides with the inward normal vector of $\Sigma$ at the origin.
We write $\Sigma_t=\Sigma+\omega_n t$; then $\Sigma_t$ is the translating mean curvature flow solution and $\Sigma=\Sigma_{0}$.
After rescaling we may assume that $H(\bar{x},\bar{t})=1$ and $\bar{t}=0$.
Moreover, there exists a scaling factor $\kappa>0$ such that the parabolic neighborhood $\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$ can be approximated by the family of translating $\kappa^{-1}\Sigma_t\times \mathbb{R}$ with an error that is $\epsilon_1$ small in $C^{10}$ norm.
By the above setup, the maximal mean curvature of $\kappa^{-1}\Sigma_t$ is $\kappa$. Let $p_t=\omega_n t\in\kappa^{-1}\Sigma_t$ be the tip.
Let $l_t=\{p_t\}\times\mathbb{R}\subset \kappa^{-1}\Sigma_t\times
\mathbb{R}$ be the straight line through the tip. The splitting direction $\mathbb{R}$ is assumed to be the $x_{n+1}$-axis.
By our normalization $H(\bar{x},0)=1$, the maximality of $\kappa$ together with the approximation ensures that $\kappa\geq 1-C\epsilon_1$.
Define $d(\bar{x},l_t)=\min_{a\in l_t}|\bar{x}-a|$ to be the Euclidean distance between $\bar{x}$ and the line $l_t$.
We divide the proof into several steps. The first step deals with the case that $\bar{x}$ is far away from $l_{0}$, in which we can apply the cylindrical improvement. The remaining steps deal with the case that $\bar{x}$ is not too far from $l_{0}$. \\
\noindent\textbf{Step 1:} Using the structure of the Bowl soliton and approximation, we can find a large constant $\Lambda_{\star}$ that depends
only on $L_0, \epsilon_0, n$ such that, if $d(\bar{x},l_{0})\geq\Lambda_{\star}$,
then for every point $(y,s)\in\hat{\mathcal{P}}(\bar{x},0,L_0,L_0^2)$
\begin{align*}
\hat{\mathcal{P}}(y,s,100^2n^5,100^2n^5)\subset\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)
\end{align*} and $(y,s)$ is $(\epsilon_0,100n^{5/2})$ cylindrical.
Therefore we can apply Theorem \ref{Cylindrical improvement} to conclude that
$(\bar{x},\bar{t})$ is $\frac{\epsilon}{2}$ symmetric and we are done.\\
In the following steps we assume that $d(\bar{x},l_{0})\leq\Lambda_{\star}$. \\
\noindent\textbf{Step 2:}
By the asymptotic behaviour of $\Sigma_t$, we have the scale invariant identity:
$$\frac{\kappa}{H(p)}=f(H(p)d(p,l_t))$$ where $p\in \kappa^{-1}\Sigma_{t}\times\mathbb{R}$ and $f$ is a continuous increasing function such that $f(0)=1$.
In fact $f(x)=O(x)$ as $x\rightarrow+\infty$.
By approximation we know that
\begin{align}\label{upperlowerboundkappa}
\frac{1}{2}<\kappa<C
\end{align}
This means $\kappa^{-1}\Sigma_{t}\times\mathbb{R}$ is equivalent to the standard Bowl$\times\mathbb{R}$ up to a scaling factor depending on $L_0,\epsilon_0,n$. \\
We define a sequence of sets $\text{Int}_j(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))$ inductively:
\begin{align*}
&\text{Int}_{0}(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))=\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)\\
&(y,s) \in \text{Int}_j(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))\\ \Leftrightarrow &
\hat{\mathcal{P}}(y,s,(10n)^6L_0,(10n)^{12}L_0^2)\subset \text{Int}_{j-1}(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))
\end{align*}
We abbreviate $\text{Int}_j(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))$
as $\text{Int}(j)$. It is clear that the sets Int($j$) are decreasing in $j$ and all contained in $\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$. \\
\noindent\textit{Claim: } there exists $\Lambda_1\gg \Lambda_{\star}$ depending only on $\epsilon_0, L_0$,
such that for any point $(x,t)\in \text{Int}_j(\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2))$,
if $d(x,l_t)\geq 2^{\frac{j}{100}}\Lambda_1$, then $(x,t)$ is $2^{-j}\epsilon$ symmetric, for $j=0,1,2,...$\\
By the assumption, the case $j=0$ is automatically true. Now suppose that the statement is true for $j-1$.
By the structure of the Bowl soliton and approximation, we can find a constant
$\Lambda_1$ that depends only on $L_0,\epsilon_0,n$ such that
\begin{align}\label{bigdistimpliescylindrical}
&\text{If } d(x,l_t)\geq \Lambda_{1},
\text{then every point }(y,s)\in\hat{\mathcal{P}}(x,t,L_0,L_0^2) \text{ is } (\epsilon_0,100n^{5/2})\\ & \text{ cylindrical}\text{ and } d(x,l_t)H(x)\geq 1000L_0. \notag\\
&\text{ Moreover } \hat{\mathcal{P}}(y,s,100^2n^5,100^2n^5)\subset\hat{\mathcal{P}}(x,t,(10n)^6L_0,(10n)^{12}L_0^2)\notag
\end{align}
\noindent\textit{Remark:} We need to rescale the picture before checking the $(\epsilon_0,100n^{5/2})$ cylindrical condition. The scaling factor depends only on $L_1$. Thus it can be handled by choosing $\epsilon_1$ small enough, since $\epsilon_1$ is chosen after $L_1$. \\
By approximation and structure of the Bowl soliton, we know that if $(x,t)\in\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$ with $d(x,l_t)\geq 1 \geq \frac{1}{2}\kappa^{-1}$, then
\begin{itemize}
\item $\frac{\left<\nu, x-x'\right>}{|x-x'|}\leq C\epsilon_1$
\item $\frac{\left<\omega_n, x-x'\right>}{|x-x'|}\geq C(n)^{-1}$
\end{itemize}
where $x'$ is the point on $l_t$ such that $|x-x'|=d(x,l_t)$.
Let $x(t)$ be the trajectory of $x_0$ under the mean curvature flow, such that
$x(t_0)=x_0$ and $d(x_0,l_{t_0})> 1$; then for $t\leq t_0$
\begin{align}\label{distancetotipdecreasing}
\frac{d}{dt}d(x(t),l_t)^2&= 2\left<x-x',H(x)\nu(x)-\kappa\omega_n\right>\notag\\
&\leq 2|x-x'|(C\epsilon_1 - C(n)^{-1}) < 0
\end{align}
Therefore $d(x(t),l_t)$ increases as $t$ decreases, as long as $x(t)\in\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$.
Now suppose that $(x,t)\in\text{Int}(j)$ and $d(x,l_t)\geq 2^{\frac{j}{100}}\Lambda_1.$
For any point $(\tilde{x},t)\in\hat{\mathcal{P}}(x,t,L_0,L_0^2)$, by (\ref{bigdistimpliescylindrical})
\begin{align}\label{distance increasing1}
d(\tilde{x},l_t)&\geq d(x,l_t)-d(x,\tilde{x})\notag\\
&\geq d(x,l_t)-L_0H(x)^{-1}\notag\\
&\geq (1-1/1000)d(x,l_t)\notag\\
&\geq (1-1/1000) 2^{\frac{j}{100}}\Lambda_1>2^{\frac{j-1}{100}}\Lambda_1
\end{align}
Additionally, every point $(y,s)\in\hat{\mathcal{P}}(x,t,L_0,L_0^2)$ must be in Int($j-1$); hence by (\ref{distancetotipdecreasing}) and (\ref{distance increasing1}), $d(y,l_s)>2^{\frac{j-1}{100}}\Lambda_1$, and therefore $(y,s)$ is $2^{-j+1}\epsilon$ symmetric by the induction hypothesis.
Together with (\ref{bigdistimpliescylindrical}), the conditions of the cylindrical improvement are satisfied. Hence $(x,t)$ is $2^{-j}\epsilon$ symmetric and the conclusion holds for $j$; the Claim follows by induction.\\
Finally, we describe the size of Int($j$) in terms of $n,L_0$.
Recall that $d(\bar{x},l_{0})\leq \Lambda_{\star}$. By approximation we have:
\begin{align*}
&\hat{\mathcal{P}}(\bar{x},0,(10n)^{-8}L_0^{-1}\hat{L},(10n)^{-16}L_0^{-2}\hat{L}^2)
\subset \text{Int}_1(\hat{\mathcal{P}}(\bar{x},0,\hat{L},\hat{L}^2))
\end{align*}
whenever $(10n)^{-16} L_0^{-2}\hat{L}>\Lambda_1$ and $\hat{L}<L_1$.
By induction on $j$ we obtain:
\begin{align}\label{size of Int 2}
\hat{\mathcal{P}}(\bar{x},0,(10n)^{-8j} L_0^{-j}L_1,(10n)^{-16j} L_0^{-2j}L_1^2)\subset\text{Int}(j)
\end{align}
whenever $(10n)^{-8j-8}L_0^{-j-1}L_1>\Lambda_1$. \\
\noindent\textbf{Step 3:} Define the region
\begin{align*}
\Omega_j^t=&\{(y,t)\big||y_{n+1}|\leq W_j, d(y,l_t)\kappa\leq D_j\}\\
\Omega_j=&\bigcup\limits_{t\in[-1-T_j,-1]} \Omega_j^t\\
\partial^1\Omega_j=&\{(y,s)\in\partial\Omega_j|d(y,l_s)\kappa= D_j\}\\
\partial^2\Omega_j=&\{(y,s)\in\partial\Omega_j\big||y_{n+1}|=W_j\}
\end{align*}
where $D_j=2^{\frac{j}{100}}\Lambda_1$, $W_j=T_j=2^{\frac{j}{50}}\Lambda_1^2$.
By repeatedly applying Lemma \ref{Vector Field closeness on Cylinder} and
\ref{VF close Bowl times R}, we obtain a normalized set of rotation vector fields $\mathcal{K}^{(j)} =\{K_{\alpha}^{(j)}, 1\leq\alpha\leq \frac{(n-1)(n-2)}{2}\}$ such that
\begin{align}
&\max_{\alpha}|\left<K^j_{\alpha},\nu\right>|H\leq C(W_j+D_j+T_j)^2 2^{-j}\epsilon \text{ on }\partial^1\Omega_j^t \label{eq1}\\
&\max_{\alpha}|\left<K^j_{\alpha},\nu\right>|H \leq C(W_j+D_j+T_j)^2 \epsilon \text{ on }\Omega_j \label{eq2}\\
&\max_{\alpha}|K^j_{\alpha}|H \leq 2n\text{ on }\Omega_j \label{eq3}
\end{align}
\begin{comment}
\begin{enumerate}\label{ji}
\item $\max_{\alpha}|\left<K^j_{\alpha},\nu\right>|H\leq C(W_j+D_j+T_j)^2 2^{-j}\epsilon$ on $\partial^1\Omega_j^t$
\item $\max_{\alpha}|\left<K^j_{\alpha},\nu\right>|H \leq C(W_j+D_j+T_j)^2 \epsilon$ in $\Omega_j$
\item $\max_{\alpha}|K^j_{\alpha}|H \leq 2n$ in $\Omega_j$
\end{enumerate}
\end{comment}
(\ref{eq3}) follows from approximating by the normalized set of vector fields
$\mathcal{K}^0 = \{K^0_{\alpha}(x) = J_{\alpha}x, 1\leq \alpha\leq \frac{(n-1)(n-2)}{2}\}$ and the fact that $\max_{\alpha}|K^0_{\alpha}|H\leq n$
on Bowl$\times\mathbb{R}$ (see Appendix \ref{ode of bowl}).
\\
For each $K^j_{\alpha}$ ($1\leq \alpha\leq\frac{(n-1)(n-2)}{2}$, $j\geq 1$), let $u=\left<K_{\alpha}^j,\nu\right>$ on $M_t$
and define the function
$$f(x,t)=e^{-\Phi(x)+\lambda(t-\bar{t})}\frac{u}{H-\mu}$$
where $\lambda,\mu$ will be determined later.
$H$ and $u$ satisfy the Jacobi equation
\begin{align*}
\partial_t u & =\Delta u+|A|^2u\\
\partial_t H & =\Delta H+ |A|^2H
\end{align*}
We get the evolution equation for $\frac{u}{H-\mu}$:
\begin{align*}
(\partial_t -\Delta )\left(\frac{u}{H-\mu}\right)=&\frac{(\partial_t -\Delta )u}{H-\mu}-
\frac{u(\partial_t -\Delta )H}{(H-\mu)^2}+\frac{2\left<\nabla u,\nabla H\right>}{(H-\mu)^2}-\frac{2|\nabla H|^2u}{(H-\mu)^3}\\
=&-\frac{\mu u|A|^2}{(H-\mu)^2}+2\left<\frac{\nabla H}{H-\mu},\nabla\left(\frac{u}{H-\mu}\right)\right>
\end{align*}
The evolution equation for $f$ is then:
\begin{align}\label{EQNforf}
(\partial_t -\Delta )f=&e^{-\Phi+\lambda(t-\bar{t})}(\partial_t -\Delta )\left(\frac{u}{H-\mu}\right)+(\lambda-\partial_t\Phi+\Delta \Phi-|\nabla \Phi|^2)f \notag\\
&+2e^{-\Phi+\lambda(t-\bar{t})}\left<\nabla \Phi,\nabla\left(\frac{u}{H-\mu}\right)\right>\notag\\
=&\left(\lambda-\frac{\mu|A|^2}{H-\mu}-\partial_t\Phi+\Delta \Phi-|\nabla\Phi|^2\right)f\notag\\
&+2e^{-\Phi+\lambda(t-\bar{t})}\left<\nabla\left(\frac{u}{H-\mu}\right),\nabla \Phi+\frac{\nabla H}{H-\mu}\right>\notag\\
=&\left(\lambda-\frac{\mu|A|^2}{H-\mu}-\partial_t\Phi+\Delta \Phi-|\nabla\Phi|^2\right)f+2\left<\nabla f,\nabla\Phi+\frac{\nabla H}{H-\mu}\right>\notag\\
&+2\left<\nabla \Phi,\nabla\Phi+\frac{\nabla H}{H-\mu}\right>f\notag\\
=&\left(\lambda-\frac{\mu|A|^2}{H-\mu}-\partial_t\Phi+\Delta \Phi +|\nabla\Phi|^2+2\frac{\left<\nabla \Phi,\nabla H\right>}{H-\mu}\right)f\\
&+2\left<\nabla f,\nabla\Phi+\frac{\nabla H}{H-\mu}\right> \notag
\end{align}
Now let $\Phi(x)=\phi(x_{n+1})=\phi(\left<x,\omega_{n+1}\right>)$ where $\phi$ is a one variable function.
We have the following computations:
\begin{itemize}
\item $\partial_t\Phi=\phi'(x_{n+1})\left<\partial_t x, \omega_{n+1}\right>=\phi'(x_{n+1})\left<\vec{H},\omega_{n+1}\right>$
\item $\Delta\Phi=\phi'(x_{n+1})\Delta x_{n+1}+\phi''(x_{n+1})|\nabla x_{n+1}|^2=\phi'(x_{n+1})\left<\vec{H},\omega_{n+1}\right>+\phi''(x_{n+1})|\omega_{n+1}^T|^2$
\item $\left<\nabla\Phi,\nabla H\right>=\phi'(x_{n+1})\left<\nabla x_{n+1},\nabla H\right>=\phi'(x_{n+1})\left<\omega_{n+1}^{T},\nabla H\right>$
\item $|\nabla \Phi|^2=\phi'(x_{n+1})^2|\nabla x_{n+1}|^2=\phi'(x_{n+1})^2|\omega_{n+1}^T|^2$
\end{itemize}
\begin{comment}
\begin{align*}
&\partial_t\Phi=\phi'(x_{n+1})\left<\partial_t x, \omega_{n+1}\right>=\phi'(x_{n+1})\left<\vec{H},\omega_{n+1}\right>\\
&\Delta\Phi=\phi'(x_{n+1})\Delta x_{n+1}+\phi''(x_{n+1})|\nabla x_{n+1}|^2=\phi'(x_{n+1})\left<\vec{H},\omega_{n+1}\right>+\phi''(x_{n+1})|\omega_{n+1}^T|^2\\
&\left<\nabla\Phi,\nabla H\right>=\phi'(x_{n+1})\left<\nabla x_{n+1},\nabla H\right>=\phi'(x_{n+1})\left<\omega_{n+1}^{T},\nabla H\right>\\
&|\nabla \Phi|^2=\phi'(x_{n+1})^2|\nabla x_{n+1}|^2=\phi'(x_{n+1})^2|\omega_{n+1}^T|^2
\end{align*}
\end{comment}
Here $x_{n+1}$ is shorthand for $\left<x,\omega_{n+1}\right>$, and $\omega_{n+1}^T$ denotes the projection of $\omega_{n+1}$ onto the tangent plane of $M_t$.
In the second line we used the identity $\Delta x=\vec{H}$.
Since $\left<\vec{H},\omega_{n+1}\right>=\left<\omega_{n+1},\nabla H\right>=0$ on $\kappa^{-1}\Sigma\times\mathbb{R}$ and the curvature is
bounded by $C\kappa$, by approximation we have:
\begin{align*}
|\omega_{n+1}^T-\omega_{n+1}| + |\left<\nabla H,\omega_{n+1}\right>| + |\left<\vec{H},\omega_{n+1}\right>|\leq C(L_1)\epsilon_1
\end{align*} in
$\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$.
Therefore we have the estimate:
\begin{align}\label{Coeff Est D Phi}
|\partial_t\Phi| & \leq C(L_1)\epsilon_1|\phi'(x_{n+1})|\notag\\
|\Delta\Phi| & \leq C(L_1)\epsilon_1|\phi'(x_{n+1})|+|\phi''(x_{n+1})| \notag\\
|\left<\nabla\Phi,\nabla H\right>| & \leq C(L_1)\epsilon_1|\phi'(x_{n+1})| \notag\\
|\nabla \Phi|^2&\leq\phi'(x_{n+1})^2
\end{align}
in $\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$.
By the asymptotic of the Bowl soliton and approximation, there is a constant $c_n \in(0,1)$ depending only on $n$ such that
$H(x) \geq 2c_nd(x,l_t)^{-\frac{1}{2}}$ in $\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$ when $d(x,l_t)$ is large. Moreover $|A|^2\geq \frac{H^2}{n}$ in $\hat{\mathcal{P}}(\bar{x},0,L_1,L_1^2)$.
Next we choose an even function $\phi\in C^2(\mathbb{R})$ satisfying the following:
\begin{align}
|\phi'| & \leq \frac{c_n}{n}D_j^{-1/2} \label{Choice of varphi1}\\
|\phi''| & \leq \frac{c_n^2}{n}D_j^{-1} \label{Choice of varphi2}\\
\phi(W_j)& \geq \log\left((W_j+D_j+T_j)^{20}\right) \label{Choice of varphi3}\\
\phi(200n^{5/2}) & \leq \log\left(W_j+D_j+T_j\right) \label{Choice of varphi4}\\
\phi(0) &= 0, \ \phi'>0 \text{ when } s>0 \label{Choice of varphi5}
\end{align}
We can take $$\phi(s)=\frac{c_n^2}{n}D_j^{-1}\log(\cosh(s))$$
for large $j$. Note that $\log(\cosh(s))\in (s-1,s]$.
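The two-sided bound on $\log(\cosh(s))$ follows from an elementary identity: for $s\geq 0$,
\begin{align*}
\log(\cosh(s))=s-\log 2+\log(1+e^{-2s}),\qquad 0\leq\log(1+e^{-2s})\leq\log 2,
\end{align*}
so $s-\log 2\leq\log(\cosh(s))\leq s$; the case $s<0$ follows since $\cosh$ is even.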
It is straightforward to check (\ref{Choice of varphi1}), (\ref{Choice of varphi2}) and (\ref{Choice of varphi5}).
To check (\ref{Choice of varphi3}) (\ref{Choice of varphi4}), note that there is a $j_1$ depending only on $n, \Lambda_1$ such that if $j\geq j_1$, then
\begin{align*}
\phi(W_j) &\geq \frac{c_n^2}{n}D_j^{-1}(W_j-1)\\
&\geq \frac{c_n^2}{2n}2^{\frac{j}{100}} \\
& \geq \log((W_j+D_j+T_j)^{20})
\end{align*}
and
\begin{align*}
\phi(200n^{5/2}) &\leq \frac{c_n^2}{n}D_j^{-1}\cdot 200n^{5/2} \\
&\leq \log(2^{\frac{j}{100}}) \\
&\leq \log\left(W_j+D_j+T_j\right)
\end{align*}
Now we let $\lambda=\frac{c_n^2}{n}D_j^{-1}, \mu=c_n D_j^{-1/2}$, therefore $H\geq 2\mu$ in $\Omega_j$ and
\begin{align}\label{Coeff A2 / H}
\frac{\mu|A|^2}{H-\mu} \geq \frac{\mu H^2}{n(H-\mu)}\geq \frac{4\mu^2}{n}
\end{align}
where the last inequality follows from $H^2-4\mu(H-\mu)=(H-2\mu)^2\geq 0$.
With (\ref{Coeff Est D Phi}) (\ref{Coeff A2 / H}) (\ref{Choice of varphi1})-(\ref{Choice of varphi5}), we have:
\begin{align*}
&\lambda-\frac{\mu|A|^2}{H-\mu}-\partial_t\Phi+\Delta \Phi+|\nabla\Phi|^2+2\frac{\left<\nabla \Phi,\nabla H\right>}{H-\mu}\\
\leq&\lambda-\frac{4\mu^2}{n}+C(L_1)\epsilon_1|\phi'(x_{n+1})|+|\phi''(x_{n+1})|+\phi'(x_{n+1})^2+C(L_1)\epsilon_1|\phi'(x_{n+1})|\mu^{-1} \\
<& \frac{c_n^2}{n}D_j^{-1}-\frac{4c_n^2}{n}D_j^{-1}+\frac{c_n^2}{n}D_j^{-1}
+(\frac{c_n}{n}D_j^{-\frac{1}{2}})^2 + C(L_1)\epsilon_1 < 0
\end{align*}
Then the maximum principle applied to (\ref{EQNforf}) gives
$\sup\limits_{\Omega_j}|f|\leq \sup\limits_{\partial\Omega_j}|f|$.
The boundary decomposes as $\partial\Omega_j=\partial^1\Omega_j\cup\partial^2\Omega_j\cup\Omega_j^{-1-T_j}$,
and we can estimate $f$ on each piece:
\begin{itemize}
\item on the boundary portion $\partial^1\Omega_j$:
\begin{align*}
\sup\limits_{\partial^1\Omega_j}|f|\leq & \sup\limits_{\partial^1\Omega_j}\frac{|u|H}{(H-\mu)H}\\
\leq &C(W_j+D_j+T_j)^2 2^{-j}\epsilon\cdot D_j\\
\leq &2^{-\frac{j}{2}}C\epsilon
\end{align*}
\item on the portion $\partial^2\Omega_j$:
\begin{align*}
\sup\limits_{\partial^2\Omega_j}|f|\leq & \sup\limits_{\partial^2\Omega_j}e^{-\phi(W_j)}\frac{|u|H}{H(H-\mu)}\\
<&e^{-\phi(W_j)}\cdot C(W_j+D_j+T_j)^2 \epsilon\cdot D_j\\
\leq& C(W_j+D_j+T_j)^{-10}\epsilon \\
\leq & 2^{-\frac{j}{5}} C\epsilon
\end{align*}
where we used $\phi(W_j)\geq \log\left((W_j+D_j+T_j)^{20}\right)$.
\item on the bottom portion $\Omega_j^{-1-T_j}$, we have
\begin{align*}
\sup\limits_{\Omega_j^{-1-T_j}}|f|\leq & e^{-\lambda T_j}\sup\limits_{\Omega_j}\frac{|u|H}{H(H-\mu)}\\
\leq &e^{-\frac{c_n^2}{n}T_j D_j^{-1}}\cdot C(W_j+D_j+T_j)^2\cdot 32D_j\epsilon\\
\leq &2^{-j} C\epsilon
\end{align*}
where the last inequality used $T_jD_j^{-1} = D_j\geq jn c_n^{-2}-C$.
\end{itemize}
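The identity $T_jD_j^{-1}=D_j$ used in the last step is immediate from the choice of parameters:
\begin{align*}
T_jD_j^{-1}=\frac{2^{\frac{j}{50}}\Lambda_1^2}{2^{\frac{j}{100}}\Lambda_1}=2^{\frac{j}{100}}\Lambda_1=D_j,
\end{align*}
and $D_j=2^{\frac{j}{100}}\Lambda_1$ grows exponentially in $j$, hence eventually dominates the linear function $jnc_n^{-2}$.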
Putting them together we obtain that:
\begin{align}\label{maxf}
\sup\limits_{\Omega_j}|f|\leq &\sup\limits_{\partial\Omega_j}|f|\leq 2^{-\frac{j}{5}}C\epsilon
\end{align}
By approximation, in the parabolic neighborhood
$\hat{\mathcal{P}}(\bar{x},0,100n^{5/2},100^2n^5)$ we have
\begin{itemize}
\item $|x_{n+1}|<200n^{5/2}$
\item $-2\cdot 10^4n^5\leq t\leq 0$
\item $H\leq C$
\end{itemize}
We also assume that $j$ is appropriate such that
\begin{align}\label{Condition on j 1}
\hat{\mathcal{P}}(\bar{x},0, 100n^{5/2}, 100^2n^5)\subset\Omega_j
\end{align}
\begin{align}\label{Cond on j 2}
\Omega_j\subset\text{Int}(j)
\end{align}
Hence,
\begin{align}\label{finalineqn}
|u(y,s)|H(y,s)= & e^{\phi(y_{n+1})-\lambda s}(H(y,s)-\mu)|f(y,s)|H(y,s)\notag\\
\leq& e^{\phi(200n^{5/2})+2\cdot10^4n^5\lambda}\cdot 2^{-\frac{j}{5}}C\epsilon\notag \\
\leq&(W_j+D_j+T_j)\cdot 2^{-\frac{j}{5}}C\epsilon \notag\\
\leq& 2^{-\frac{j}{10}} C_1\epsilon
\end{align}
in $\hat{\mathcal{P}}(\bar{x},0,100n^{5/2},100^2n^5)$. Here $C_1$
depends only on $L_0,\epsilon_0,n$.
We pick constants in the following order:
First we can find $j_2\geq j_1$ depending only on $L_0,\epsilon_0,n$ such that
(\ref{Condition on j 1}) and (\ref{Cond on j 2}) hold with $j=j_2$ and
\begin{align}\label{choice of j}
2^{-\frac{j_2}{10}}C_1\epsilon <\frac{\epsilon}{2}
\end{align}
Next we pick $L_1$ large enough, finally we take $\epsilon_1$ small enough.
(Therefore, $L_1$ may depend on $j_2, L_0, \epsilon_0, n$ and $\epsilon_1$ may depend on $L_1, j_2, L_0, \epsilon_0, n$)
With this choice of constants, we can take $j=j_2$. Since
(\ref{eq3}) and (\ref{finalineqn})
apply to every
$\alpha\in \{1,2,...,\frac{(n-1)(n-2)}{2}\}$, we obtain that
\begin{itemize}
\item $\max_{\alpha}|\left<\bar{K}_{\alpha},\nu\right>|H<\frac{\epsilon}{2}$
\item $\max_{\alpha}|\bar{K}_{\alpha}|H<5n$
\end{itemize}
in $\hat{\mathcal{P}}(\bar{x},0,100n^{5/2},100^2n^5)$.
By definition $(\bar{x},\bar{t}) = (\bar{x},0)$ is $\frac{\epsilon}{2}$ symmetric.
\end{proof}
\section{Canonical neighborhood Lemmas and the proof of the main theorem}\label{section proof of main theorem}
In this section we prove the main theorem. While this section is mostly the same as Section 4 of \cite{zhu2020so}, we include most of the argument here for the reader's convenience.
Recall that a translating soliton satisfies the equation $H = \left<V,\nu\right>$ for some fixed nonzero vector $V$, where $\nu$ is the inward pointing normal vector. After a translation and dilation we may assume that $V=\omega_{n}$ is a unit vector. The equation then becomes:
\begin{align}\label{translatoreqn1}
H &= \left<\omega_n,\nu\right>
\end{align}
Then $M_t = M+t\omega_n$ is a mean curvature flow solution.
Define the height function in space-time by
\begin{align}\label{height def}
h(x,t)=\left<x,\omega_n\right>-t
\end{align}
Throughout this section the mean curvature flow solution is always assumed to be embedded and complete.
\begin{Def}
A mean convex mean curvature flow solution $M_t^n\subset\mathbb{R}^{n+1}$
is said to be uniformly $k$-convex, if there is a positive constant $\beta$
such that
\begin{align*}
\lambda_1+...+\lambda_k\geq \beta H
\end{align*}
along the flow, where $\lambda_1\leq\lambda_2\leq ...\leq \lambda_n$ are the principal curvatures. \end{Def}
\begin{Def}
For any ancient solution $M_t$ defined on $t<0$, a blow down limit, or a tangent flow at $-\infty$, is the limit flow of $M^j_t=c_j^{-1}M_{c_j^2t}$ for some sequence $c_j\rightarrow\infty$, if the limit exists. \end{Def}
If $M_t$ is a mean convex and noncollapsed ancient solution, then at least one blow down limit exists, and any blow down sequence $c_j^{-1}M_{c_j^2t}$ has a subsequence that converges smoothly to $S^k_{\sqrt{-2kt}}\times\mathbb{R}^{n-k}$ for some $k=0,1,...,n$, possibly after a rotation; see e.g. \cite{haslhofer2017mean}, \cite{sheng2009singularity}, \cite{white2003nature}, \cite{white2000size}, \cite{huisken1999convexity}.
Recall that the Gaussian density of a surface is defined by:
\begin{align*}
\Theta_{x_0,t_0}(M)=\int_{M}\frac{1}{(4\pi t_0)^{\frac{n}{2}}}e^{-\frac{|x-x_0|^2}{4t_0}}d\mu
\end{align*}
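For the generalized cylinders that appear as blow down limits, this density can be computed in closed form. As a sketch, for the self-shrinking cylinder $S^k_{\sqrt{2k}}\times\mathbb{R}^{n-k}$ ($k\geq 1$) with $(x_0,t_0)=(0,1)$: every point satisfies $|x|^2=2k+|y|^2$ with $y\in\mathbb{R}^{n-k}$, so using $\int_{\mathbb{R}^{n-k}}e^{-\frac{|y|^2}{4}}dy=(4\pi)^{\frac{n-k}{2}}$,
\begin{align*}
\Theta_{0,1}(S^k_{\sqrt{2k}}\times\mathbb{R}^{n-k})=\frac{(2k)^{\frac{k}{2}}|S^k|e^{-\frac{k}{2}}}{(4\pi)^{\frac{k}{2}}}=|S^k|\Big(\frac{k}{2\pi e}\Big)^{\frac{k}{2}}
\end{align*}
One can check that these values are pairwise distinct and all differ from the value $1$ of a hyperplane ($k=0$); this distinctness is what is used at the end of the proof of Lemma \ref{entropy limit lemma}.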
Using Huisken's monotonicity formula \cite{huisken1990asymptotic}, we have the following:
\begin{Lemma}\label{entropy limit lemma}
For any mean convex and noncollapsed ancient solution $M_t$ defined on $t<0$, any two blow down limits must be the same up to rotation.
\end{Lemma}
\begin{proof}
Suppose that $M^{\infty}_t$ is the limiting flow of $M^j_t=c_j^{-1}M_{c_j^2t}$ for some sequence $c_j\rightarrow\infty$. By the previous discussion, $M^{\infty}_t$ must be one of the self-similar generalized cylinders $S^k_{\sqrt{-2kt}}\times\mathbb{R}^{n-k}$.
Moreover, mean convex ancient solution must be convex by the convexity estimate \cite{huisken1999convexity} (also c.f \cite{haslhofer2017mean}). Therefore the convergence is smooth with multiplicity one.
It suffices to only consider the time slice $M^{\infty}_{-1}$ by the scale invariance of the entropy.
By Huisken's monotonicity formula (\cite{huisken1990asymptotic}),
$\Theta_{x_0,t_0+s_0-t}(M_{t})$ is monotone nonincreasing in $t$.
Hence the limit $$\Theta = \lim\limits_{t\rightarrow-\infty} \Theta_{x_0,t_0+s_0-t}(M_{t})\leq \infty$$ exists.
By the scaling property of $\Theta$ we can compute
\begin{align}\label{entropy bound 1}
\Theta_{c_j^{-1}x_0,c_j^{-2}(t_0+s_0)-t}(c_j^{-1}M_{c_j^2t})=\Theta_{x_0,t_0+s_0-c_j^2t}(M_{c_j^2t})
\end{align}
whenever $c_j^2t<s_0$.
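The scaling identity follows from the substitution $x=c_j\tilde{x}$, under which $d\mu(x)=c_j^{n}d\tilde{\mu}(\tilde{x})$; writing $\tau=t_0+s_0-c_j^2t$,
\begin{align*}
\Theta_{x_0,\tau}(M_{c_j^2t})=\int_{M_{c_j^2t}}\frac{e^{-\frac{|x-x_0|^2}{4\tau}}}{(4\pi \tau)^{\frac{n}{2}}}d\mu=\int_{c_j^{-1}M_{c_j^2t}}\frac{e^{-\frac{|\tilde{x}-c_j^{-1}x_0|^2}{4c_j^{-2}\tau}}}{(4\pi c_j^{-2}\tau)^{\frac{n}{2}}}d\tilde{\mu}=\Theta_{c_j^{-1}x_0,c_j^{-2}\tau}(c_j^{-1}M_{c_j^2t})
\end{align*}
since $\frac{|c_j\tilde{x}-x_0|^2}{4\tau}=\frac{|\tilde{x}-c_j^{-1}x_0|^2}{4c_j^{-2}\tau}$ and $c_j^{n}(4\pi\tau)^{-\frac{n}{2}}=(4\pi c_j^{-2}\tau)^{-\frac{n}{2}}$.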
By convexity, $\text{Vol}(M_t\cap B_R(0))\leq CR^n$ for some uniform constant $C$, so $\Theta_{x',t'}(M_t\backslash B_R(0))\leq Ce^{-R^2/8}$ for some uniform constant $C$, whenever $x'$ and $\log t'$ are bounded.
Take $t=-1$ and let $j\rightarrow\infty$ in (\ref{entropy bound 1}).
Since $c_j^{-1}x_0\rightarrow 0$, $c^{-2}_j(t_0+s_0)-t\rightarrow 1$ and $c_j^{-1}M_{-c_j^2}$ converges smoothly to $M^{\infty}_{-1}$, we have
\begin{align*}
\Theta_{0,1}(M^{\infty}_{-1})=\lim\limits_{j\rightarrow\infty}\Theta_{c_j^{-1}x_0,c_j^{-2}(t_0+s_0)-t}(c_j^{-1}M_{c_j^2t})=\Theta
\end{align*}
Since the values $\Theta_{0,1}(S^{k}\times\mathbb{R}^{n-k})$, $k=0,1,...,n$, are pairwise distinct, any two blow down limits must be the same generalized cylinder, and hence the same up to rotation.
\begin{Rem}
By the work of Colding and Minicozzi \cite{colding2015uniqueness}, the blow down limit is actually unique (without any rotation), but we don't need this strong result in this paper.
\end{Rem}
\begin{comment}
Suppose that $M^{\infty}_t$ is the limiting flow of $M^j_t=c_j^{-1}M_{c_j^2t}$ for some sequence $c_j\rightarrow\infty$. By the previous discussion, $M^{\infty}_t$ must be one of the self-similar generalized cylinders $S^k_{\sqrt{-2kt}}\times\mathbb{R}^{n-k}$.
Moreover, mean convex ancient solution must be convex by the convexity estimate \cite{huisken1999convexity} (also c.f \cite{haslhofer2017mean}). Therefore the convergence is smooth with multiplicity one.
It suffices to only consider the time slice $M^{\infty}_{-1}$ by the scale invariance of the entropy.
Suppose that $\lim\limits_{s\rightarrow-\infty}\lambda(M_s)=\lambda$. Then for any $\epsilon>0$ there is a point $(x_0,t_0)$ in space time and $s_0<0$ such that $F_{x_0,t_0}(M_{s_0})>\lambda-\epsilon$.
By Huisken's monotonicity formula (\cite{huisken1990asymptotic}),
\begin{align*}
F_{x_0,t_0+s_0-t}(M_{t})\geq F_{x_0,t_0}(M_{s_0})>\lambda-\epsilon
\end{align*}
for any $t<s_0$.
Hence
\begin{align}\label{entropy bound 1}
F_{c_j^{-1}x_0,c_j^{-2}(t_0+s_0)-t}(c_j^{-1}M_{c_j^2t})=F_{x_0,t_0+s_0-c_j^2t}(M_{c_j^2t})>\lambda-\epsilon
\end{align}
whenever $c_j^2t<s_0$.
By convexity, $\text{Vol}(M_t\cap B_R(0))\leq CR^n$ for some uniform constant $C$, so $F_{x',t'}(M_t\backslash B_R(0))\leq Ce^{-R^2/8}$ for some uniform constant $C$, whenever $x'$ and $\log t'$ are bounded.
Taking $t=-1$ and letting $j$ large in (\ref{entropy bound 1}).
Since $c_j^{-1}x_0\rightarrow 0, c^{-2}_j(t_0+s_0)-t\rightarrow 1$ and $c_j^{-1}M_{-c_j^2}$ converges to $M^{\infty}_{-1}$, we have
\begin{align*}
\lambda(M^{\infty}_{-1})\geq F_{0,1}(M^{\infty}_{-1})=\lim\limits_{j\rightarrow\infty}F_{c_j^{-1}x_0,c_j^{-2}(t_0+s_0)-t}(c_j^{-1}M_{c_j^2t})\geq\lambda-\epsilon
\end{align*}
Sending $\epsilon$ to 0, $\lambda(M_{-1}^{\infty})\geq\lim\limits_{t\rightarrow-\infty}\lambda(M_t)$
On the other hand, $M^{\infty}_{-1}$ satisfies the self-shrinker equation $H=\frac{1}{2}\left<x,\nu\right>$, so by the work of Colding-Minicozzi \cite{colding2012generic}: $\lambda(M^{\infty}_1)=F_{0,1}(M^{\infty}_1)$. Therefore
\begin{align*}
\lambda(M^{\infty}_{-1})=F_{0,1}(M^{\infty}_1)= \lim\limits_{j\rightarrow\infty}F_{0,1}(c_j^{-1}M_{-c_j^2})\leq
\lim\limits_{t\rightarrow-\infty}\lambda(M_t)
\end{align*}
\end{comment}
\end{proof}
\begin{comment}
Since the entropy of the each generalized cylinders $S^k\times\mathbb{R}^{n-k}$ are different, we have:
\begin{Cor}\label{unique blowdown}
Given $M_t$ a mean convex, noncollapsed ancient solution of mean curvature flow, then blow down limit exists and up to rotation, all blow down limits are the same .
\end{Cor}
\end{comment}
In particular if $M_t$ is a translating solution, we can interpret the blow down process in a single time slice:
\begin{Cor}\label{blow down for translator}
Let $M^n\subset\mathbb{R}^{n+1}$ be a strictly mean convex, noncollapsed translator which satisfies (\ref{translatoreqn1}).
Then for any $R>1$, $\epsilon>0$, there exists a large $C_0$ such that if $a\geq C_0$, then $a^{-1}(M-a^2\omega_n)\cap B_{R}(0)$ is $\epsilon$ close in the $C^{10}$ norm to a rotation of $S^k_{\sqrt{2k}}\times\mathbb{R}^{n-k}$, for a fixed $1\leq k\leq n - 1$.
\end{Cor}
In the following we prove some canonical neighborhood lemmas.
\begin{Lemma}\label{canonical nbhd lemma}
Let $M^n\subset\mathbb{R}^{n+1}$ be a noncollapsed, mean convex, uniformly 3-convex smooth translating soliton of mean curvature flow, and fix a point $p\in M$. Set $M_t$ to be the associated translating solution of the mean curvature flow. Suppose that one blow down limit of $M_t$ is $S^{n-2}\times\mathbb{R}^2$ and that the sub-level set $M_t\cap\{h(\cdot,t)\leq h_0\}$ is compact for any $h_0\in\mathbb{R}$.
Then for any given $L>10, \epsilon>0$, there exists a constant $\Lambda$ with the following property:
If $x\in M$ satisfies $|x-p|\geq\Lambda$, then after a parabolic rescaling by the factor $H(x,t)$,
the parabolic neighborhood $\hat{\mathcal{P}}(x,t,L,L^2)$ is $\epsilon$ close to the corresponding piece of the shrinking $S^{n-2}\times \mathbb{R}^2$, or the translating $\text{Bowl}^{n-1}\times \mathbb{R}$.
\end{Lemma}
\begin{proof}
If $M$ is not strictly mean convex, then by the strong maximum principle $M$ must be a flat plane; hence every blow down limit is a flat plane, a contradiction.
Without loss of generality we assume that $p$ is the origin, and that $\omega_n$ is the unit vector in $x_n$ axis which points to the positive part of $x_n$ axis.
We argue by contradiction: suppose the conclusion is not true. Then there exists a sequence of points $x_j\in M$ satisfying $|x_j-p|\geq j$ such that $\hat{\mathcal{P}}(x_j,0,L,L^2)$ is not $\epsilon$ close to either one of the models after the appropriate parabolic rescaling.
By the long range curvature estimate (c.f. \cite{white2000size}, \cite{white2003nature}, \cite{haslhofer2017mean}), $H(p)|x_j-p|\rightarrow \infty$ implies
\begin{align}\label{long range curvature estimate}
H(x_j)|x_j-p|\rightarrow \infty
\end{align}
After passing to a subsequence, we may assume
\begin{align}\label{distancedoubling}
|p-x_{j+1}|\geq 2|p-x_j|
\end{align}
Now let $M^{(j)}_t=H(x_j)\Big(M_{H^{-2}(x_j)t}-x_j\Big)$ be a sequence of rescaled solutions. Under this rescaling $M^{(j)}_t$ remains eternal, $x_j$ is sent to the origin and $H_{M_0^{(j)}}(0)=1$.
By the global convergence theorem (c.f \cite{haslhofer2017mean}) $M^{(j)}_t$ converges to an ancient solution $M^{\infty}_t$ which is smooth, non-collapsed, weakly convex, uniformly 3-convex and $H_{M^{\infty}_0}(0)=1$. Thus $M^{\infty}_0$ is strictly mean convex.
Denote by $K^{\infty}$ the convex domain bounded by $M^{\infty}$.
Suppose that $p$ is sent to $p^{(j)}$ in the $j^{th}$ rescaling. Then $p^{(j)}=-H(x_j)x_j$ and $|p^{(j)}|\rightarrow \infty$ by (\ref{long range curvature estimate}).
After passing to a subsequence, we may assume that $\frac{p^{(j)}}{|p^{(j)}|}=-\frac{x_j}{|x_j|}\rightarrow \Theta \in S^{n}$.
Suppose that $l$ is the line in the direction of $\Theta$, i.e. $l=\{s\Theta\ | \ s\in\mathbb{R}\}$.
Since rescaling doesn't change angle, we have $\angle x_jpx_{j+1}\rightarrow 0$. \\
By elementary triangle geometry and (\ref{distancedoubling}) we have
\begin{align}\label{edgerelation1}
\frac{1}{2}|x_{j+1}-p|<|x_{j+1}-p|-|x_j-p|<|x_j-x_{j+1}| < |x_{j+1}-p|
\end{align}
Consequently, since $|x_j-p|\leq\frac{1}{2}|x_{j+1}-p|<|x_j-x_{j+1}|$, the angle opposite the shorter side satisfies $\angle x_jx_{j+1}p\leq \angle x_jpx_{j+1}\rightarrow 0$. Therefore $\angle px_jx_{j+1}=\pi-\angle x_jpx_{j+1}-\angle x_jx_{j+1}p\rightarrow \pi$.
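The elementary triangle estimate can also be illustrated numerically. The following sketch is purely illustrative, with hypothetical sampled configurations; it verifies via the law of cosines that $|x_{j+1}-p|\geq 2|x_j-p|$ together with a small angle at $p$ forces the angle at $x_j$ to be at least $\pi$ minus twice the angle at $p$:

```python
import math
import random

random.seed(0)

def clamp(c):
    # guard acos against floating point overshoot
    return max(-1.0, min(1.0, c))

def triangle_angles(p, a, b):
    # interior angles at p, a, b computed with the law of cosines
    d = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    pa, pb, ab = d(p, a), d(p, b), d(a, b)
    ang_p = math.acos(clamp((pa * pa + pb * pb - ab * ab) / (2 * pa * pb)))
    ang_a = math.acos(clamp((pa * pa + ab * ab - pb * pb) / (2 * pa * ab)))
    ang_b = math.acos(clamp((pb * pb + ab * ab - pa * pa) / (2 * pb * ab)))
    return ang_p, ang_a, ang_b

p = (0.0, 0.0)
worst = math.pi
for _ in range(1000):
    r = random.uniform(1.0, 10.0)
    theta = random.uniform(1e-4, 0.05)   # small angle at p
    ratio = random.uniform(2.0, 5.0)     # distance-doubling condition
    x_j = (r, 0.0)
    x_j1 = (ratio * r * math.cos(theta), ratio * r * math.sin(theta))
    ang_p, ang_xj, _ = triangle_angles(p, x_j, x_j1)
    worst = min(worst, ang_xj + 2 * ang_p)
# worst stays >= pi up to floating point error: the angle at x_j
# tends to pi as the angle at p tends to 0
```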
This implies that the limit $M_0^{\infty}$ contains the line $l$. By convexity the first principal curvature vanishes along $l$. Then the strong maximum principle (cf.\ \cite{hamilton1986four}, \cite{haslhofer2017mean}, \cite{white2003nature})
implies that $M_t^{\infty}$ splits off a line. That is,
$M ^{\infty}_t=\mathbb{R}\times M'_t$ where $\mathbb{R}$ is in the direction of $l$ and $M'_t$ is an ancient solution that is non-collapsed, strictly mean convex, uniformly 2-convex. \\
\noindent\textit{Case 1: } $M'_t$ is noncompact
By the classification result of Brendle and Choi \cite{brendle2018uniqueness}, \cite{brendle2019uniqueness}, we conclude that, up to translation and rotation, $M'_t$ is the shrinking $S^{n-2}\times\mathbb{R}$, or the translating $\text{Bowl}^{n-1}$.
This means that for large $j$, the parabolic neighborhood $\hat{\mathcal{P}}^{(j)}(0,0,L,L^2)$ is $\epsilon$ close to either $S^{n-2}\times\mathbb{R}^2$ or $\text{Bowl}^{n-1}\times\mathbb{R}$, a contradiction.\\
\noindent\textit{Case 2: } $M'_t$ is compact
\noindent\textit{Claim: } $l$ is the $x_n$ axis, consequently $\frac{x_j}{|x_j|}$ converges to $\omega_n$.
In fact, let us denote by $l^{\perp}$ the hyperplane perpendicular to $l$ that passes through the origin. (Note that $l$ also passes through the origin by definition.)
Hence $l^{\perp}\cap M_t^{\infty}$ is isometric to $M_t'$.
Suppose that $l$ is not the $x_n$ axis, then there is a unit vector $v\perp l$ such that $\left<v,\omega_n\right><0$.
Since $M_t^{\infty}$ splits off in the direction $l$ with cross section a closed surface $M_t'$, we can find a point $y'\in M_0^{\infty}\cap l^{\perp}$ such that the inward normal vector $\nu$ of $M_0^{\infty}$ at $y'$ is equal to $v$.
Then for large $j$ there is a point $y_j\in M_0^{(j)}\cap l^{\perp}$ such that the unit inward normal $\nu_{M^{(j)}_0}(y_j)$ is sufficiently close to $v$. But this implies that $H_{M^{(j)}_0}(y_j)=H_{M_0}(x_j)^{-1}\left<\nu_{M^{(j)}_0}(y_j),\omega_n\right><0$, a contradiction.
By the above argument we know that the $\mathbb{R}$ direction is parallel to $\omega_n$.
Hence the cross sections perpendicular to the $\mathbb{R}$ factor are the level sets of the height function.
Moreover the cross section $M_0^{(j)}\cap \{h(\cdot,0)=0\}$ converges smoothly to the cross section $(\mathbb{R}\times M_0')\cap \{h(\cdot,0)=0\}$, which is $M_0'$.
Now let us convert this to the unrescaled picture. Let $h_j=h(x_j,0)$ and $N_j=M\cap \{h(\cdot,0)=h_j\}$ ($N_j$ should be considered as a hypersurface in $\mathbb{R}^n$). Then $N_j$, after appropriate rescaling, converges to $M_0'$.
Define a scale invariant quantity for hypersurfaces (possibly with boundary):
\begin{align*}
\text{ecc}(N_j)=\text{diam}(N_j)\sup H_{N_j}
\end{align*}
where diam denotes the extrinsic diameter and $H_{N_j}$ is the mean curvature of $N_j$ in $\mathbb{R}^n$.
Since $N_j\rightarrow M_0'$, we have:
\begin{align}\label{ecc M0'}
\text{ecc}(N_j)< 2\text{ecc}(M_0')
\end{align}
for all large $j$.
On the other hand, by assumption each sub-level set $\{h(\cdot, 0)\leq h_0\}$ is compact; since $|x_j|\rightarrow\infty$, this means $h_j\rightarrow+\infty$.
By Corollary \ref{blow down for translator}, for any $\eta>0$, $R\gg \sqrt{2n}$ there exists $A_j\in SO(n+1)$ for each sufficiently large $j$ such that
$\sqrt{h_j}^{-1}(M_0-h_j\omega_n)\cap B_R(0)$ is $\eta$ close to $\Sigma_j:=A_j(S^{n-2}_{\sqrt{2(n-2)}}\times\mathbb{R}^2)$, which is a rotation of $S^{n-2}_{\sqrt{2(n-2)}}\times\mathbb{R}^2$.
Since $\left<\nu,\omega_n\right>>0$ on $\sqrt{h_j}^{-1}(M_0-h_j\omega_n)\cap B_R(0)$, by approximation $\left<\nu,\omega_n\right>>-C\eta$ on $\Sigma_j\cap B_R(0)$.
Since the unit normal of the cylinder attains both the values $\nu$ and $-\nu$ along each $S^{n-2}$ fiber and $R$ is large, we have $|\left<\nu,\omega_n\right>|<C\eta$ along the $S^{n-2}$ fibers of $\Sigma_j$.
This means that the $\mathbb{R}^2$ factor of $\Sigma_j$ must be almost perpendicular to the level set of the height function $h$.
Consequently, the intersection of $\Sigma_j\cap B_R(0)$ with $\{h(\cdot,0)=0\}$ must be $C\eta$ close to some rotated $(S^{n-2}_{\sqrt{2(n-2)}}\times\mathbb{R})\cap B_R(0)$,
hence the mean curvature (computed in $\mathbb{R}^n$) is at least $\sqrt{\frac{n-2}{2}}-C\eta$.
Now let $\tilde{N}_j=\sqrt{h_j}^{-1}(M_0-h_j\omega_n)\cap\{h(\cdot,0)=0\}\cap B_R(0)$. Then $\tilde{N}_j$, considered as a surface in $\mathbb{R}^n$, is $C\eta$ close to $\Sigma_j\cap\{h(\cdot,0)=0\}\cap B_R(0)$. Therefore
\begin{align}\label{ecc cylinder}
\text{ecc}(\tilde{N}_j)\geq (\sqrt{\frac{n-2}{2}}-C\eta)R
\end{align}
Now we take $\eta<\frac{1}{8C}$, $R>4\,\text{ecc}(M_0')$ and $j$ large accordingly. Note that $\tilde{N}_j=\sqrt{h_j}^{-1}(N_j-h_j\omega_n)\cap B_R(0)$; the scale invariance of ecc together with (\ref{ecc M0'}) and (\ref{ecc cylinder}) then leads to a contradiction.
\end{proof}
\begin{Lemma}\label{blowdownnecklemma}
Suppose that $M^n\subset\mathbb{R}^{n+1}$ is noncollapsed, convex, uniformly 3-convex translating soliton of mean curvature flow. Set $M_t$ to be the associated translating solution. Suppose that one blow down limit of $M_t$ is $S^{n-1}\times\mathbb{R}$ and that the sub-level set $M_t\cap\{h(\cdot,t)\leq h_0\}$ is compact for any $h_0\in\mathbb{R}$. Then $M=\text{Bowl}^n$.
\end{Lemma}
\begin{proof}
We use $M_t$ to denote the associated translating solution of the mean curvature flow.
If $M$ is not strictly convex, then it splits off a line. Since one of the blow down limits is $S^{n-1}\times\mathbb{R}$, the flow would have to be a family of shrinking cylinders, which is not a translating solution, a contradiction.
Fix $p\in M$.
Assume without loss of generality that $p$ is the origin and $\omega_n$ is the unit vector in the $x_n$ axis which points to the positive part of the $x_n$ axis.
Using the argument from Case 2 in Lemma \ref{canonical nbhd lemma}, the ball $B_{100\sqrt{a}}(a\omega_n)\cap M$ is close to a $S^{n-1}_{\sqrt{2(n-1)}}\times\mathbb{R}$ with some translation and rotation for all large $a$.
Moreover, the fact that $\left<\nu, \omega_n\right> >0$ and approximation implies that the $\mathbb{R}$ direction is almost parallel to the $x_n$ axis, and the cross section $\{h(\cdot,0)=a\}$ is close to $S^{n-1}$ after some scaling for sufficiently large $a$.
In particular $M\cap \{h(\cdot,0)\geq A\}$ is uniformly 2-convex for a fixed large $A$.
Since the sub-level set $\{h(\cdot, 0)\leq A\}$ is compact and $M$ is strictly convex, we know that $M$ is uniformly 2-convex.
Therefore $M$ is noncompact, strictly convex and uniformly 2-convex; the classification results (cf.\ \cite{haslhofer2015uniqueness}, \cite{brendle2018uniqueness}) apply and $M=\text{Bowl}^n$.
\begin{comment}
Following the proof of Lemma \ref{canonical nbhd lemma}, for any $L,\epsilon >0$ we can find $\Lambda$ large such that whenever $x\in M$ satisfies $|x-p|>\Lambda$,
the parabolic neighborhood $\hat{\mathcal{P}}(x,0,L,L^2)$ is $\epsilon$ close to a corresponding piece of $\mathbb{R}\times M'_t$ for some non-collapsed, strictly mean convex, uniformly 2-convex ancient solution $M'_t$.
If $M'_t$ is noncompact, then it must be one of $S^1\times\mathbb{R}$ or $\text{Bowl}^2$.
Since the entropy $\lambda(S^2\times\mathbb{R})< \lambda(S^1\times\mathbb{R}^2)$ and $\lambda(S^2\times\mathbb{R})< \lambda(\text{Bowl}^2\times\mathbb{R})$.
We can a fixed large $R$ such that
\begin{align*}
\lambda(M)\geq\lambda(B_{RH(x)^{-1}}(x)\cap M) > \lambda(S^2\times\mathbb{R})
\end{align*}
But Lemma \ref{entropy limit lemma} implies that $\lambda(M)=\lambda(S^2\times\mathbb{R})$, a contradiction.
If $M_t'$ is compact, arguing as in Lemma \ref{canonical nbhd lemma}, we know that the splitting direction $\mathbb{R}$ must coincide with $x_3$ axis.
But they are both uniformly 2 convex, thus $M$ must be uniformly two convex. Then $M$ must be Bowl$^3$.
\end{comment}
\end{proof}
\begin{proof}[Proof of Theorem \ref{Main Theorem}]
Let $M_t=M+t\omega_n$ be the associated mean curvature flow. Denote by $g$ the metric on $M$ and $g_t$ the metric on $M_t$.
By the convexity estimate (c.f \cite{haslhofer2017mean}), $M$ must be convex.
If $M$ is not strictly convex, then by the maximum principle \cite{hamilton1986four} it must split off a line, namely it is a product of a line and a convex, uniformly 2-convex, noncollapsed translating solution (which must be noncompact). By the classification results \cite{brendle2019uniqueness}, \cite{brendle2018uniqueness} or \cite{haslhofer2015uniqueness}, $M=$Bowl$^{n-1}\times\mathbb{R}$ and we are done.
We may then assume that $M$ (or $M_t$) is strictly convex. It suffices to find a normalized rotation vector field that is tangential to $M$.
Suppose that $M_t$ bounds the open domain $K_t=K+t\omega_n$ and attains maximal mean curvature at the tip $p_t=p+t\omega_n$.
After some rotation and translation we assume that $p$ is the origin and $\omega_n$ is the unit vector in $x_n$ axis which points to the positive part of $x_n$ axis.
As in \cite{haslhofer2015uniqueness}, we can show that $\omega_n^T=0$ at the tip $p$: taking the gradient of (\ref{translatoreqn1}) at $p$ gives
\begin{align*}
\nabla H = A(\omega_n^{T}),
\end{align*}
where $A$ is the shape operator, which is nondegenerate because of the strict convexity. Since $\nabla H=A(\omega_n^T)=0$ at $p$, we have $\omega_n^T=0$, and moreover $\vec{H}=\omega_n$. In particular $\nu(p)=\omega_n$ and $H(p)=1$, so $p_t$ is indeed the trajectory that moves by mean curvature.
By the convexity, $K$ is contained in the upper half space $\{x_n\geq 0\}$. The strict convexity implies that there is a cone $\mathcal{C}_{\eta}=\{x\in\mathbb{R}^{n+1}| x_n\geq \eta\sqrt{x_1^2+...+x_{n-1}^2+x_{n+1}^2}\}$ ($0<\eta<1$) such that
\begin{align}\label{m contained in cone}
K_t\backslash B_{1}(p)\subset \mathcal{C}_{\eta}\backslash B_{1}(p)
\end{align}
The height function $h(x,t)=\left<x,\omega_n\right>-t$ \ then measures the signed distance from the support plane $\{x_n=t\}$ at the tip.
(\ref{m contained in cone}) implies that any sub-level set $\{h(\cdot, 0)\leq h_0 \}$ is compact.
By Corollary \ref{blow down for translator}, a blow down limit of $M_t$ exists and is uniformly 3-convex, thus must be $S^{k}_{\sqrt{-2kt}}\times\mathbb{R}^{n-k}$ up to rotation, for a fixed $k=n-2,n-1,n$. If $k=n$, then $M$ is compact and thus cannot be a translator.
If $k=n-1$, by Lemma \ref{blowdownnecklemma}, $M$ is Bowl$^n$, and the result follows immediately.
So from now on we assume that $k=n-2$ and Lemma \ref{canonical nbhd lemma} is applicable.
By Lemma \ref{canonical nbhd lemma}, there exists $\Lambda'$ depending on the $L_1,\epsilon_1$ given in Theorem \ref{bowl x R improvement} such that if $(x,t)\in M_t$ satisfies $h(x,t)\geq \Lambda'$, then $\hat{\mathcal{P}}(x,t,L_1,L_1^2)$ is, after rescaling, $\epsilon_1$ close to either a piece of the translating Bowl$^{n-1}\times\mathbb{R}$ or a family of shrinking cylinders $S^{n-2}\times\mathbb{R}^2$.
By the equation (\ref{translatoreqn1}) $\partial_t h(x,t)=\left<\nu,\omega_n\right>-1\leq 0$, so $h$ is nonincreasing.
Using (\ref{m contained in cone}) we have $h(x,t)\geq \frac{\eta}{2}|x-p_t|$ whenever $|x-p_t|\geq 1$. Also $h(x,t)\leq|x-p_t|$.
By the long range curvature estimate (c.f. \cite{white2000size} \cite{white2003nature}, \cite{haslhofer2017mean}) there exists $\Lambda>\Lambda'$ such that
$H(x,t)|x-p_t|>2\cdot10^3\eta^{-1}L_1$ for all $(x,t)$ satisfying $h(x,t)>\Lambda$. Consequently $H(x,t)h(x,t)>10^3L_1$.
Define
\begin{align*}
\Omega_j&=\{(x,t)\ |\ x\in M_t,\ t\in[-2^{\frac{j}{100}}, 0],\ h(x,t)\leq2^{\frac{j}{100}}\Lambda \},\\
\partial^{1} \Omega_j &= \{(x,t)\ |\ x\in M_t,\ t\in[-2^{\frac{j}{100}}, 0],\ h(x,t)=2^{\frac{j}{100}}\Lambda \},\\
\partial^{2} \Omega_j &= \{(x,t)\in\partial\Omega_j \ |\ t=-2^{\frac{j}{100}} \}.
\end{align*}
\noindent\textbf{Step 1:} If $x\in M_t$ satisfies $h(x,t)\geq 2^{\frac{j}{100}}\Lambda$, then $(x,t)$ is $2^{-j}\epsilon_1$ symmetric.
When $j=0$ the statement is true by the choice of $\Lambda', \Lambda$. Suppose the statement is true for $j-1$. Given $x\in M_t$ satisfying $h(x,t)\geq 2^{\frac{j}{100}}\Lambda$, the choice of $\Lambda$ ensures that
$L_1H(x,t)^{-1}<10^{-3}h(x,t)$. So every point in the geodesic ball $B_{g_t}(x,L_1H(x,t)^{-1})$ has height at least $(1-10^{-3})h(x,t)\geq 2^{\frac{j-1}{100}}\Lambda$. Moreover, the height function is nonincreasing in time $t$ (equivalently, nondecreasing backward in time $t$), so we conclude that the parabolic neighborhood $\hat{\mathcal{P}}(x,t,L_1,L_1^2)$ is contained in the set $\{(x,t)\ |\ x\in M_t,\ h(x,t)\geq 2^{\frac{j-1}{100}}\Lambda\}$. In particular every point in
$\hat{\mathcal{P}}(x,t,L_1,L_1^2)$ is $2^{-j+1}\epsilon_1$ symmetric by induction hypothesis.
Now we can apply either Lemma \ref{Cylindrical improvement} or Lemma \ref{bowl x R improvement} to obtain that $(x,t)$ is $2^{-j}\epsilon_1$ symmetric.\\
\noindent\textbf{Step 2:} The intrinsic diameter of the set $M_t\cap\{h(x,t)=a\}$ is bounded by $6\eta^{-1}a$ for $a\geq 1$.
It suffices to consider $t=0$. Since $M$ is convex, the level set $\{h(x,0)=a\}$ is also convex. By (\ref{m contained in cone}), $M \cap\{h(x,0)=a\}$ is contained in a ball of radius $\eta^{-1}a$ inside the hyperplane $\{x_n=a\}$, and thus has extrinsic diameter at most $2\eta^{-1}a$. The intrinsic diameter of a convex hypersurface is bounded by three times its extrinsic diameter (see Appendix \ref{intrisic diam extrinsic diam appendix}); the assertion then follows immediately. \newline
\noindent\textbf{Step 3:} For each $j$, by repeatedly applying Lemma \ref{Vector Field closeness on Cylinder} and Lemma \ref{VF close Bowl times R}, there eixsts a single normalized set of rotation vector fields $\mathcal{K}^{(j)} = \{K^{(j)}_{\alpha}, 1\leq\alpha\leq \frac{(n-2)(n-1)}{2}\}$ satisfying $\max_{\alpha}|\left<K^{(j)}_{\alpha},\nu\right>|H\leq 2^{-\frac{j}{2}}C\epsilon_1$ on $\partial^{1} \Omega_j$
For each $\alpha$, Define the function $f^{(j)}$ on $\Omega_j$ to be
\begin{align}\label{f in the final step}
f^{(j)}(x,t)=\exp (\lambda_j t)\frac{\left<K_{\alpha}^{(j)},\nu\right>}{H-c_j}
\end{align}
where $\lambda_j=2^{-\frac{j}{50}}, c_j=2^{-\frac{j}{100}}$.
As in \cite{brendle2019uniqueness} or (\ref{EQNforf}), we have
\begin{align*}
(\partial_t -\Delta )f^{(j)}
=&\left(\lambda_j-\frac{c_j|A|^2}{H-c_j} \right)f^{(j)}+2\left<\nabla f^{(j)},\frac{\nabla H}{H-c_j}\right>
\end{align*}
Since $H\geq 10^{3}h^{-1}$ whenever $h>\Lambda$, we have $H>n\cdot 2^{-\frac{j}{100}}=nc_j$ in $\Omega_j$ for all large $j$.
Hence
\begin{align*}
\lambda_j-\frac{c_j|A|^2}{H-c_j}\leq \lambda_j-\frac{c_jH^2}{n(H-H/2)}
\leq \lambda_j-2c_j^2=-\lambda_j<0
\end{align*}
By the maximum principle,
\begin{align*}
\sup\limits_{\Omega_j}|f^{(j)}|\leq & \sup\limits_{\partial\Omega_j}|f^{(j)}|=
\max\left\{\sup\limits_{\partial^1\Omega_j}|f^{(j)}|,\sup\limits_{\partial^2\Omega_j}|f^{(j)}|\right\}
\end{align*}
On the one hand,
\begin{align*}
\sup\limits_{\partial^1\Omega_j}|f^{(j)}|\leq \frac{|\left<K^{(j)},\nu\right>H|}{(H-c_j)H}\leq C\frac{2^{-\frac{j}{2}}\epsilon_1}{2c_j^2}\leq 2^{-\frac{j}{4}}C\epsilon_1
\end{align*}
Meanwhile, $|\left<K^{(j)},\nu\right>|\leq C2^{\frac{j}{50}}$ in $\Omega_j$. Therefore, for large $j$:
\begin{align*}
\sup\limits_{\partial^2\Omega_j}|f^{(j)}|\leq \exp(-2^{\frac{j}{50}})\frac{|\left<K^{(j)},\nu\right>|}{c_j}\leq 2^{-j}
\end{align*}
Putting these together, we get $|f^{(j)}|\leq 2^{-j/4}$ in $\Omega_j$. Since this is true for each $\alpha$, we know that the axis of $\mathcal{K}^{(j)}$ (i.e.\ the zero set of all the $K_{\alpha}^{(j)}$) has a uniformly bounded distance from $p$, or equivalently $\max_{\alpha}|K_{\alpha}^{(j)}(0)|\leq C$.
(If this were not the case, then after passing to a subsequence we could assume that $\max_{\alpha}|K_{\alpha}^{(j)}(0)|\rightarrow \infty$; then at least one of
$\tilde{K}_{\alpha}^{(j)}=\frac{K_{\alpha}^{(j)}}{\max_{\alpha}|K_{\alpha}^{(j)}(0)|}$ would converge locally to a nonzero constant vector field $\tilde{K}_{\alpha}$ tangential to $M_0$ near $0$; hence $M_0$ would not be strictly convex, a contradiction.)
Passing to a subsequence and taking the limit, we obtain a normalized set of rotation vector fields $\mathcal{K}$ which is tangential to $M$. This completes the proof. \end{proof}
\section{Appendix }
\subsection{ODE for Bowl soliton}\label{ode of bowl}
We set the origin at the tip of the Bowl soliton and take the $x_{n+1}$ axis as the translating axis.
Use the parametrization $\Phi:\mathbb{R}^n\rightarrow\text{Bowl}^n$,
$\Phi(x)=(x,h(x))$, where $h(x)=\varphi(|x|)$ for a one-variable function $\varphi$ satisfying the ODE:
\begin{align*}
\frac{\varphi''}{1+\varphi'^2}+\frac{(n-1)\varphi'}{r}=1
\end{align*}
with initial condition $\varphi(0)=\varphi'(0)=0$.
Since $\varphi''\geq0$, we have the inequality:
\begin{align*}
\varphi''+\frac{(n-1)\varphi'}{r}\geq 1
\end{align*}
Using the integrating factor, when $r>0$:
\begin{align*}
(r^{n-1}\varphi')'=r^{n-1}\left(\varphi''+\frac{(n-1)\varphi'}{r}\right)\geq r^{n-1}
\end{align*}
Integrating from $0$ to $r$, we obtain $\varphi'\geq \frac{r}{n}$ and $\varphi \geq \frac{r^2}{2n}$.
The mean curvature at $(x,h(x))$ is given by
\begin{align*}
\frac{1}{\sqrt{1+\varphi'(|x|)^2}}\leq \frac{1}{\sqrt{1+\frac{|x|^2}{n^2}}}<\frac{n}{|x|}
\end{align*}
As an application, if $J_{\alpha}$ is an antisymmetric matrix in $so(n-1)\subset so(n+1)$ with $\textrm{Tr}(J_{\alpha}J_{\alpha}^T)=1$ (i.e.\ a unit vector in $so(n-1)\subset so(n+1)$), then $|J_{\alpha}x|H<2n$ on the Bowl soliton.
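The estimates above can be sanity-checked numerically. The following sketch is an illustration only (an explicit Euler integration started from the stated series expansion $\varphi\approx r^2/2n$ near the tip, not part of the argument); it tracks $\varphi$, $\varphi'$ and the mean curvature $H=(1+\varphi'^2)^{-1/2}$ along the profile:

```python
import math

def bowl_profile(n=3, h=1e-4, r_max=5.0):
    """Integrate the Bowl soliton profile ODE
        phi'' = (1 - (n-1) phi'/r) (1 + phi'^2)
    by an explicit Euler scheme, starting from the series expansion
    phi ~ r^2/(2n), phi' ~ r/n near r = 0."""
    r, phi, dphi = h, h * h / (2 * n), h / n
    samples = []
    while r < r_max:
        ddphi = (1 - (n - 1) * dphi / r) * (1 + dphi * dphi)
        phi += h * dphi
        dphi += h * ddphi
        r += h
        samples.append((r, phi, dphi))
    return samples

samples = bowl_profile()
# along the profile: phi' >= r/n, phi >= r^2/(2n), and the mean
# curvature H = (1 + phi'^2)^(-1/2) stays below n/r
```

Within the discretization error, the computed profile confirms $\varphi'\geq \frac{r}{n}$, $\varphi\geq\frac{r^2}{2n}$ and $H<\frac{n}{r}$.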
\subsection{Heat Kernel estimate}\label{Appendix HKEST}
The purpose of this appendix is to justify (\ref{hkest2}). We use a more general notation, replacing $\frac{L_0}{4}$ by $L$. Denote
\begin{align*}
&D(x,y,k_1,k_2,\delta_1,\delta_2)=
\left|(x_1,x_2)-(\delta_1 y_1, \delta_2 y_2)-(1-\delta_1,1-\delta_2)L+(4k_1,4k_2)L\right|
\end{align*}
Then
\begin{align*}
K_t&(x,y)=-\frac{1}{4\pi t}\sum_{\delta_i\in \{\pm 1\}, k_i\in \mathbb{Z}}(-1)^{-(\delta_1+\delta_2)/2}\cdot e^{-\frac{D^2}{4t}}
\end{align*}
We observe that, when $x\in\Omega_{L/25}$ and $y\in\partial\Omega_{L}$,
$$|D|\geq \frac{|k_1|+|k_2|+1}{2}L.$$
Then $|\partial_{\nu_{y}}e^{-D^2/4t}|\leq \frac{|D|}{2t}
e^{-D^2/4t}\leq C\frac{(|k_1|+|k_2|+1)L}{t}\exp\left(-\frac{(|k_1|+|k_2|+1)^2L^2}{16t}\right)$.
The last inequality holds because the function $\lambda e^{-\lambda}$ is decreasing for $\lambda>1$.
Consequently,
\begin{align*}
|\partial_{\nu_y} K_t(x,y)|&\leq \sum_{n=0}^{\infty}\sum_{|k_1|+|k_2|=n, \delta_i\in\{\pm1\}}\frac{1}{4t}|\partial_{\nu_{y}}e^{-\frac{D^2}{4t}}|\\
&\leq C\sum_{n=0}^{\infty}\sum_{|k_1|+|k_2|=n} \frac{(|k_1|+|k_2|+1)L}{t^2}\exp\left(-\frac{(|k_1|+|k_2|+1)^2L^2}{16t}\right)\\
& \leq C\sum_{n=1}^{\infty}\frac{n^2L}{t^2}\exp\left(\frac{-n^2L^2}{50t}\right)\exp\left(\frac{-n^2L^2}{50t}\right)\\
&\leq C\sum_{n=1}^{\infty}\frac{n^2L}{t^2}\frac{2(50t)^2}{(n^2L^2)^2}e^{\frac{-L^2}{50t}} \leq \frac{C}{L^3}e^{\frac{-L^2}{50t}}
\end{align*}
where the last inequality used the fact that $e^{-s}< \frac{2}{s^2}$ when $s>0$. Integrating along the boundary and replacing $t$ by $t-\tau$ we get \begin{align*}
\int_{\partial{\Omega_{L}}}|\partial_{\nu_y}K_{t-\tau}(x,y)|dy\leq \frac{C}{L^2}e^{\frac{-L^2}{50(t-\tau)}}\leq \frac{CL^2}{(t-\tau)^2}e^{-\frac{L^2}{50(t-\tau)}}
\end{align*}
The last inequality is because $t-\tau<L^2$.
Putting $L_0/4$ in place of $L$, we get (\ref{hkest2}).
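The reflection structure behind the kernel above can be illustrated in one dimension. The sketch below is a hypothetical simplified analogue of the two-dimensional kernel, for illustration only: on $(-L,L)$ the Dirichlet heat kernel is obtained from the Gaussian by image sources at $x-y+4kL$ and $x+y+2L+4kL$ with alternating signs, and it vanishes on the boundary:

```python
import math

def gauss(z, t):
    # one-dimensional Gaussian heat kernel
    return math.exp(-z * z / (4 * t)) / math.sqrt(4 * math.pi * t)

def dirichlet_kernel(x, y, t, L=1.0, K=30):
    # method of images on the interval (-L, L): reflections across the
    # endpoints produce image sources with alternating signs
    s = 0.0
    for k in range(-K, K + 1):
        s += gauss(x - y + 4 * k * L, t) - gauss(x + y + 2 * L + 4 * k * L, t)
    return s

# the truncated image sum vanishes (up to a tiny truncation error) when
# y sits on the boundary, and is positive in the interior
```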
\subsection{Intrinsic and extrinsic diameter of a convex hypersurface}\label{intrisic diam extrinsic diam appendix}
Given a compact, convex set $K\subset\mathbb{R}^n$, let $M=\partial K$ be its boundary. The intrinsic diameter of $M$ is
\begin{align*}
d_1(M)=\sup\limits_{x,y\in M}\inf\limits_{\substack{\gamma(0)=x, \gamma(1)=y\\ \gamma \text{ continuous }}} L(\gamma)
\end{align*}
The extrinsic diameter of $M$ is
\begin{align*}
d_2(M)=\sup\limits_{x,y\in M}|x-y|
\end{align*}
We show that $d_1(M)\leq 3d_2(M)$.
Consider a two-dimensional plane $P$ whose intersection with $M$ contains at least two points; $P\cap K$ is convex. Let us restrict our attention to $P$.
Take $x,y\in P\cap M$ attaining $d_2(P\cap M)$. Without loss of generality (after a rigid motion and a scaling) we may assume that $P=\mathbb{R}^2$, $x=(-1,0)$, $y=(1,0)$; thus $d_2(P\cap M)=2$.
$P\cap K$ is contained in the rectangle
$R=\{(x,y)\in\mathbb{R}^2\ | \ |x|\leq 1, |y|\leq 2\}$ by the choice of $x,y$.
Let $R^{\pm}$ be the upper/lower half of this rectangle, respectively.
By convexity we see that $M\cap P\cap R^{+}$ is the graph of a concave function on $[-1,1]$, so its length is bounded by half of the perimeter of $R$, which is $6$. In the same way $M\cap P\cap R^{-}$ has length at most $6$.
Now let $x',y'\in M$ attain $d_1(M)$. Then we find some plane $P$ passing through $x'$ and $y'$. The above argument shows that there exists a curve $\gamma$ in $M\cap P$ connecting $x'$ and $y'$ with $L(\gamma)\leq 3d_2(M\cap P)$.
Then we have:
\begin{align*}
d_1(M)\leq L(\gamma)\leq 3 d_2(M\cap P)\leq 3d_2(M)
\end{align*}
\nocite{*}
\end{document} |
\begin{document}
\title{An isometric study of the Lindeberg-Feller CLT via Stein's method} \author{B. Berckmoes \and R. Lowen \and J. Van Casteren} \date{}
\subjclass[2000]{60A05, 65D99, 60F05, 60G05, 60G07, 62P99} \keywords{approach structure, asymptotically negligible, central limit theorem, distance, Kolmogorov metric, limit, Lindeberg condition, probability measure, random variable, Stein's method, triangular array, weak topology} \thanks{Ben Berckmoes is PhD fellow at the Fund for Scientific Research of Flanders (FWO)}
\maketitle
\newtheorem{pro}{Proposition}[section] \newtheorem{lem}[pro]{Lemma} \newtheorem{thm}[pro]{Theorem} \newtheorem{de}[pro]{Definition} \newtheorem{co}[pro]{Comment} \newtheorem{no}[pro]{Notation} \newtheorem{vb}[pro]{Example} \newtheorem{vbn}[pro]{Examples} \newtheorem{gev}[pro]{Corollary} \newtheorem{vrg}[pro]{Question}
\newtheorem{proA}{Proposition} \newtheorem{lemA}[proA]{Lemma} \newtheorem{thmA}[proA]{Theorem} \newtheorem{deA}[proA]{Definition} \newtheorem{coA}[proA]{Comment} \newtheorem{noA}[proA]{Notation} \newtheorem{vbA}[proA]{Example} \newtheorem{vbnA}[proA]{Examples} \newtheorem{gevA}[proA]{Corollary} \newtheorem{vrgA}[proA]{Question}
\newtheorem{proB}{Proposition} \newtheorem{lemB}[proB]{Lemma} \newtheorem{thmB}[proB]{Theorem} \newtheorem{deB}[proB]{Definition} \newtheorem{coB}[proB]{Comment} \newtheorem{noB}[proB]{Notation} \newtheorem{vbB}[proB]{Example} \newtheorem{vbnB}[proB]{Examples} \newtheorem{gevB}[proB]{Corollary} \newtheorem{vrgB}[proB]{Question}
\newcommand{\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)}{\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)}
\hyphenation{frame-work} \hyphenation{dif-fe-rent} \hyphenation{a-vai-la-ble} \hyphenation{me-tric} \hyphenation{to-po-lo-gi-cal} \hyphenation{con-ti--nu-ous-ly} \hyphenation{de-pen-ding} \hyphenation{ne-gli-gi-ble} \hyphenation{de-ri-va-tive}
\begin{abstract} We use Stein's method to prove a generalization of the Lindeberg-Feller CLT providing an upper and a lower bound for the superior limit of the Kolmogorov distance between a normally distributed random variable and the rowwise sums of a rowwise independent triangular array of random variables which is asymptotically negligible in the sense of Feller. A natural example shows that the upper bound is of optimal order. The lower bound improves a result by Andrew Barbour and Peter Hall. \end{abstract}
\section{Introduction and motivation}
\paragraph{} One of the most important questions in probability theory reads as follows: \begin{vrg}\label{HeurQ} Under which conditions is a large sum of independent random variables approximately normally distributed? \end{vrg} \paragraph{} The most common way to address this question formally is within the framework of so-called triangular arrays of random variables, a concept which we now introduce. \paragraph{} By a \textit{standard triangular array} (STA) we mean a triangular array of real square integrable random variables \begin{displaymath} \begin{array}{cccc} \xi_{1,1} & & \\ \xi_{2,1} & \xi_{2,2} & \\ \xi_{3,1} & \xi_{3,2} & \xi_{3,3} \\
& \vdots & \end{array}
\end{displaymath} satisfying the following properties.
\begin{eqnarray*}
&(a)& \forall n : \xi_{n,1}, \ldots, \xi_{n,n} \textrm{ are independent.}\\
&(b)& \forall n, k : \mathbb{E}\left[\xi_{n,k}\right] = 0.\\
&(c)& \forall n : \sum_{k=1}^{n} \sigma_{n,k}^2 = 1\textrm{, where } \sigma_{n,k}^2 = \mathbb{E}\left[\xi_{n,k}^2\right].
\end{eqnarray*} \paragraph{} Let $\left\{\xi_{n,k}\right\}$ be an STA and $\xi$ a standard normally distributed random variable. Question \ref{HeurQ} can now be put as follows: \begin{vrg}\label{FormQ} Under which conditions is the weak limit relation $\displaystyle{\sum_{k=1}^n \xi_{n,k} \stackrel{w}{\rightarrow} \xi}$ valid? \end{vrg} \paragraph{} It turns out that when $\left\{\xi_{n,k}\right\}$ satisfies \textit{Feller's negligibility condition} in the sense that \begin{eqnarray} \max_{k=1}^n \sigma_{n,k}^2 \rightarrow 0, \label{FelNegCond} \end{eqnarray} the Lindeberg-Feller CLT (\cite{Fel}) provides us with a completely satisfactory answer to Question \ref{FormQ}: \begin{thm}\upshape{(Lindeberg-Feller CLT)}\label{LFCLT} \textit{Suppose that $\left\{\xi_{n,k}\right\}$ satisfies} \upshape{(\ref{FelNegCond})}. \textit{Then the following are equivalent}: \begin{eqnarray}
&(a)&\phantom{1}\sum_{k=1}^n \xi_{n,k} \stackrel{w}{\rightarrow} \xi.\nonumber \\
&(b)& \forall \epsilon > 0 : \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| \geq \epsilon\right] \rightarrow 0.\label{LinCon} \end{eqnarray} \upshape{(\ref{LinCon})} \textit{is often referred to as} \upshape{Lindeberg's condition}. \end{thm}
\paragraph{} Throughout the remainder of the paper, $\xi$ and $\{\xi_{n,k}\}$ will be as in Theorem \ref{LFCLT}.
\paragraph{} The Kolmogorov distance between random variables $\eta$ and $\eta^\prime$ is defined by \begin{displaymath}
K\left(\eta,\eta^\prime\right) = \sup_{x \in \mathbb{R}} \left|\mathbb{P}[\eta \leq x] - \mathbb{P}[\eta^\prime \leq x]\right|. \end{displaymath} In general, $K$ is too strong to metrize weak convergence, but it is well known that if $\eta$ is continuously distributed, the following are equivalent for any sequence $\left(\eta_n\right)_n$: \begin{eqnarray*}
&(a)& \eta_n \stackrel{w}{\rightarrow} \eta.\\
&(b)& \limsup_{n \rightarrow \infty} K(\eta,\eta_n) = 0. \end{eqnarray*}
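For a concrete illustration (a numeric sketch under our own choice of array, not taken from the text), consider the Bernoulli STA $\xi_{n,k}=(B_k-\tfrac12)/\sqrt{n/4}$ with $B_k$ i.i.d.\ Bernoulli$(\tfrac12)$; the row sum is then a standardized Binomial$(n,\tfrac12)$, and its Kolmogorov distance to the standard normal $\xi$ can be computed exactly by scanning the atoms:

```python
import math

def norm_cdf(x):
    # standard normal distribution function via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def kolmogorov_binomial(n):
    """Kolmogorov distance between the standardized Binomial(n, 1/2),
    i.e. the row sum of the array xi_{n,k} = (B_k - 1/2)/sqrt(n/4),
    and N(0, 1); the supremum is attained at the atoms."""
    mu, sd = n / 2, math.sqrt(n / 4)
    cdf, dist = 0.0, 0.0
    for k in range(n + 1):
        x = (k - mu) / sd
        # compare just below the atom and at the atom
        dist = max(dist, abs(cdf - norm_cdf(x)))
        cdf += math.comb(n, k) * 0.5 ** n
        dist = max(dist, abs(cdf - norm_cdf(x)))
    return dist
```

The distance decays roughly like $n^{-1/2}$ (about $0.12$ at $n=10$ and $0.04$ at $n=100$), consistent with the Berry-Esseen rate, while Lindeberg's condition holds since the summands are uniformly bounded by $2/\sqrt{n}$.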
\paragraph{} The previous observation reveals that the Lindeberg-Feller CLT in fact gives a necessary and sufficient condition for the number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ to equal $0$. The theorem however does not answer the following question, which is slightly more general than Question \ref{FormQ}, but nevertheless important from both a theoretical and applied point of view:
\begin{vrg}\label{QFormQ} Under which conditions is the number $\displaystyle{\limsup_{n \rightarrow \infty}K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ small? \end{vrg}
\paragraph{} In this paper we perform what we call an `isometric study': we answer Question \ref{QFormQ} by providing an upper and a lower bound for the number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$, and these bounds generalize the Lindeberg-Feller CLT. Stein's method (\cite{Barbour},\cite{Stein1},\cite{Stein2}) will turn out to be a powerful and indispensable tool in carrying out this program.
\section{An upper bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$}
\paragraph{} We start with Lemma \ref{EasyK}, which makes the task of finding an upper bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ considerably more tractable. We let $\mathcal{H}$ stand for the collection of all strictly decreasing functions $h : \mathbb{R} \rightarrow \mathbb{R}$, with a bounded first and second derivative and a bounded and piecewise continuous third derivative, and for which $\displaystyle{\lim_{x \rightarrow -\infty} h(x) = 1}$ and $\displaystyle{\lim_{x \rightarrow \infty} h(x) = 0}$. (For instance, the logistic function $h(x) = \left(1 + e^{x}\right)^{-1}$ belongs to $\mathcal{H}$.)
\begin{lem}\label{EasyK} If $\eta$ is continuously distributed, then the formula \begin{eqnarray}
\limsup_{n \rightarrow \infty} K\left(\eta,\eta_n\right) = \sup_{h \in \mathcal{H}} \limsup_{n \rightarrow \infty} \left|\mathbb{E}\left[h(\eta) - h(\eta_n)\right]\right|\label{AppForm} \end{eqnarray} is valid for any sequence $\left(\eta_n\right)_n$. \end{lem}
\begin{proof} Let $\epsilon > 0$ be arbitrary. The continuity of the distribution of $\eta$ allows us to construct points $x_1 < \cdots < x_N$ such that for each $n$ \begin{eqnarray}
K(\eta,\eta_n) \leq \max_{k=1}^N \left|\mathbb{P}\left[\eta \leq x_k\right] - \mathbb{P}\left[\eta_n \leq x_k\right]\right| + \epsilon.\label{KFP} \end{eqnarray} But, again invoking the continuity of the distribution of $\eta$, it is also easily seen that for each $x \in \mathbb{R}$ there exist $\delta > 0$ and functions $h, \widetilde{h} \in \mathcal{H}$ such that for each $n$ \begin{eqnarray}
\lefteqn{\left|\mathbb{P}\left[\eta \leq x \right] - \mathbb{P}\left[\eta_n \leq x\right]\right|}\nonumber\\ &\leq& \max\left\{\mathbb{P}\left[\eta \leq x - \delta\right] - \mathbb{P}\left[\eta_n \leq x\right], \mathbb{P}\left[\eta_n \leq x\right] - \mathbb{P}[\eta \leq x + \delta]\right\} + \epsilon/2\nonumber\\ &\leq& \max \left\{ \mathbb{E}\left[h(\eta) - h(\eta_n)\right] , \mathbb{E}\left[\widetilde{h}(\eta_n) - \widetilde{h}(\eta)\right]\right\} + \epsilon.\label{FPF} \end{eqnarray} Combining (\ref{KFP}) and (\ref{FPF}) reveals that there exist functions $h_1, \ldots, h_{2N} \in \mathcal{H}$ such that \begin{eqnarray*}
\limsup_{n \rightarrow \infty} K(\eta,\eta_n) &\leq& \limsup_{n \rightarrow \infty} \max_{k=1}^{2N} \left|\mathbb{E}[h_k(\eta) - h_k(\eta_n)]\right| + 2 \epsilon\\
&=& \max_{k=1}^{2N} \limsup_{n \rightarrow \infty} \left|\mathbb{E}[h_k(\eta) - h_k(\eta_n)]\right| + 2\epsilon\\
&\leq& \sup_{h \in \mathcal{H}} \limsup_{n \rightarrow \infty} \left|\mathbb{E}[h(\eta) - h(\eta_n)]\right| + 2 \epsilon. \end{eqnarray*}
Hence the left-hand side of (\ref{AppForm}) is dominated by the right-hand side.
For the converse inequality, it suffices to remark that for any $h \in \mathcal{H}$ \begin{displaymath}
\left|\mathbb{E}[h(\eta) - h(\eta_n)]\right| \leq \int_0^1 \left|\mathbb{P}\left[\eta \leq h^{-1}(t)\right] - \mathbb{P}\left[\eta_n \leq h^{-1}(t)\right]\right| dt \leq K(\eta,\eta_n). \end{displaymath} \end{proof}
\paragraph{} With the Lindeberg-Feller CLT in mind, it seems plausible that bounds for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ should be based on Lindeberg's condition (\ref{LinCon}). Hence, a first naive guess leads us to the definition of the number \begin{eqnarray}
\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right) = \sup_{\epsilon > 0} \limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2;\left|\xi_{n,k}\right| \geq \epsilon\right]\label{LinInd} \end{eqnarray} which we call the \textit{Lindeberg index}. It is clear that $\left\{\xi_{n,k}\right\}$ satisfies Lindeberg's condition if and only if $\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right) = 0$. We will provide an example, inspired by one of the problems posed in \cite{Fel}, chapter XV, which illustrates that the Lindeberg index is not trivial in our context.
\paragraph{} Fix $0 < \alpha < 1$, let $\beta = \frac{\alpha}{1 - \alpha}$\label{def:beta} and put \begin{eqnarray} s_n^2 = (1 + \beta) n - \beta \sum_{k=1}^n k^{-1} = n + \beta \sum_{k=1}^{n} \left(1 - k^{-1}\right).\label{def:sn} \end{eqnarray} Notice that $s_n^2 \rightarrow \infty$. Now consider the STA $\left\{\eta_{\alpha,n,k}\right\}$ such that \begin{eqnarray} \mathbb{P}\left[\eta_{\alpha,n,k} = -1/s_n\right] = \mathbb{P}\left[\eta_{\alpha,n,k} = 1/s_n\right] = \frac{1}{2}\left(1 - \beta k^{-1}\right)\label{def:STAeta1} \end{eqnarray} and \begin{eqnarray} \mathbb{P}\left[\eta_{\alpha,n,k} = -\sqrt{k}/{s_n}\right] = \mathbb{P}\left[\eta_{\alpha,n,k} = \sqrt{k}/{s_n}\right] = \frac{1}{2}\beta k^{-1}.\label{def:STAeta2} \end{eqnarray}
\begin{pro}\label{thm:ExplLin} The STA $\left\{\eta_{\alpha,n,k}\right\}$ satisfies Feller's condition \upshape{(\ref{FelNegCond})} and \begin{eqnarray} \textrm{\upshape{Lin}}\left(\left\{\eta_{\alpha,n,k}\right\}\right) = \alpha.\label{eq:ExplLin} \end{eqnarray} \end{pro}
\begin{proof} For Feller's negligibility condition we observe \begin{displaymath} \max_{k=1}^{n} \mathbb{E}\left[\eta_{\alpha,n,k}^2\right] = \frac{(1 + \beta) - \beta n^{-1} }{s_n^2} \rightarrow 0. \end{displaymath} \paragraph{} In order to calculate $\textrm{\upshape{Lin}}\left(\left\{\eta_{\alpha,n,k}\right\}\right)$, we fix $\epsilon > 0$ so that $\epsilon^2 (1 + \beta) \leq 1$ and hence $\epsilon^2 s_n^2 \leq n$. Then for $n$ so that $\epsilon s_n > 1$ and $k \leq n$ we have \begin{displaymath}
\mathbb{E}\left[\eta_{\alpha,n,k}^2 ; \left|\eta_{\alpha,n,k}\right| \geq \epsilon\right] = \left\{\begin{array}{clrr} \beta/s_n^2 & \textrm{ if }& k \geq\epsilon^2 s_n^2 \\ 0 & \textrm{ if } & k < \epsilon^2 s_n^2
\end{array}\right.. \end{displaymath} It follows that \begin{eqnarray*}
\lefteqn{\limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\eta_{\alpha,n,k}^2 ; \left|\eta_{\alpha,n,k}\right| \geq \epsilon \right]}\\ &=& \lim_{n \rightarrow \infty} \frac{\beta (n - \left\lceil\epsilon^2 s_n^2\right\rceil + 1)}{s_n^2}\\ &=& \lim_{n \rightarrow \infty} \frac{\beta}{(1 + \beta) - \beta \frac{1}{n} \sum_{k=1}^n k^{-1}} - \beta \lim_{n \rightarrow \infty} \frac{\left\lceil \epsilon^2 s_n^2\right\rceil - 1}{s_n^2}\\ &=& \frac{\beta}{1+\beta} - \beta \epsilon^2\\ &=& \alpha - \beta \epsilon^2, \end{eqnarray*} where the splitting of the limit is justified because both limits exist. Since $\displaystyle{\sum_{k=1}^n \mathbb{E}\left[\eta_{\alpha,n,k}^2 ; \left|\eta_{\alpha,n,k}\right| \geq \epsilon\right]}$ is non-increasing in $\epsilon$, letting $\epsilon \downarrow 0$ yields \begin{displaymath} \textrm{\upshape{Lin}}\left(\left\{\eta_{\alpha,n,k}\right\}\right) = \sup_{\epsilon > 0} \left(\alpha - \beta \epsilon^2\right) = \alpha, \end{displaymath} which proves the proposition. \end{proof}
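\paragraph{} The proposition can be checked numerically. The Python sketch below (our own illustration, with hypothetical names) evaluates $\sum_{k=1}^n \mathbb{E}\left[\eta_{\alpha,n,k}^2 ; \left|\eta_{\alpha,n,k}\right| \geq \epsilon\right]$ directly from the distribution (\ref{def:sn})--(\ref{def:STAeta2}); for large $n$ and small $\epsilon$ the value should be close to $\alpha - \beta\epsilon^2$.

```python
import math

def lindeberg_sum(alpha, n, eps):
    """sum_k E[eta_{alpha,n,k}^2 ; |eta_{alpha,n,k}| >= eps], computed
    directly from the two pairs of atoms of eta_{alpha,n,k}."""
    beta = alpha / (1.0 - alpha)
    s2 = (1.0 + beta) * n - beta * sum(1.0 / k for k in range(1, n + 1))
    s = math.sqrt(s2)
    total = 0.0
    for k in range(1, n + 1):
        if 1.0 / s >= eps:            # +-1/s_n atoms, total mass 1 - beta/k
            total += (1.0 - beta / k) / s2
        if math.sqrt(k) / s >= eps:   # +-sqrt(k)/s_n atoms, total mass beta/k
            total += beta / s2
    return total
```

For $\alpha = 1/2$ (so $\beta = 1$), $n = 2 \cdot 10^5$ and $\epsilon = 0.05$ this returns a value within $10^{-3}$ of $\alpha - \beta\epsilon^2 = 0.4975$, in agreement with the proof.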
\paragraph{} We now embark on the search for an upper bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ in terms of $\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)$. We will discuss and compare two different methods, which are both known to lead to a proof of the sufficiency of Lindeberg's condition for normal convergence in the Lindeberg-Feller CLT.
\subsection{The classical method}
\paragraph{} The `classical method' to prove normal convergence appears in many different forms, e.g.\ characteristic functions (\cite{Kal}) or Gaussian transforms (\cite{Berg}), but it always involves estimating an expression of the type \begin{eqnarray}
\left|\mathbb{E}\left[h\left(\xi\right) - h\left(\sum_{k=1}^{n}\xi_{n,k}\right) \right]\right|\label{ExprToEst} \end{eqnarray} based on the following three key observations. Their proofs are elementary and can be found in the literature. \begin{itemize}
\item[(I)] The random variable $\xi$ is infinitely divisible in the sense that, for each $n$, the equality in distribution
\begin{eqnarray}
\xi = \sum_{k=1}^n \eta_{n,k}\label{infdiveq}
\end{eqnarray}
holds, where $\{\eta_{n,k}\}$ is the STA consisting of normally distributed random variables with $\mathbb{E}\left[\eta_{n,k}^2\right] = \sigma_{n,k}^2$. \item[(II)] Let $\{\eta_{n,k}\}$ be as in (I). Then, for any bounded and continuous function $h : \mathbb{R} \rightarrow \mathbb{R}$, \begin{eqnarray}
\left|\mathbb{E}\left[h\left(\sum_{k=1}^n \eta_{n,k}\right) - h\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right|
\leq \sum_{k=1}^n \sup_{a \in \mathbb{R}}\left|\mathbb{E}[h\left(a + \eta_{n,k}\right)-h(a + \xi_{n,k})]\right|.\label{stability} \end{eqnarray} \item[(III)] Let $h : \mathbb{R} \rightarrow \mathbb{R}$ have a bounded second derivative and a bounded and piecewise continuous third derivative. Then for any $a, x \in \mathbb{R}$ \begin{eqnarray}
\left|h(a + x) - h(a) - h^\prime(a) x - \frac{1}{2} h^{\prime \prime}(a) x^2\right|
\leq \min \left\{\left\|h^{\prime \prime}\right\|_{\infty} x^2, \frac{1}{6} \left\|h^{\prime \prime \prime}\right\|_\infty \left|x\right|^3\right\}.\label{Taylor} \end{eqnarray} \end{itemize}
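\paragraph{} Observation (III) is easy to test numerically. The sketch below (our own illustration) takes $h = \sin$, for which $\left\|h^{\prime \prime}\right\|_\infty = \left\|h^{\prime \prime \prime}\right\|_\infty = 1$, so (\ref{Taylor}) reduces to $|R(a,x)| \leq \min\left\{x^2, |x|^3/6\right\}$.

```python
import math

# Second-order Taylor remainder of h = sin at a, evaluated at a + x:
# R(a, x) = sin(a + x) - sin(a) - cos(a) x + (1/2) sin(a) x^2,
# since h'(a) = cos(a) and h''(a) = -sin(a).
def taylor_remainder(a, x):
    return abs(math.sin(a + x) - math.sin(a)
               - math.cos(a) * x + 0.5 * math.sin(a) * x * x)

def taylor_bound(x):
    # ||h''||_inf = ||h'''||_inf = 1 for h = sin
    return min(x * x, abs(x) ** 3 / 6.0)
```

Sweeping a grid of $(a, x)$ confirms the bound, with the cubic branch active for $|x| < 6$.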
\paragraph{} Combining (\ref{infdiveq}), (\ref{stability}) and (\ref{Taylor}) for our purpose yields, for any $h \in \mathcal{H}$ and $\epsilon > 0$, \begin{eqnarray*}
\lefteqn{\left|\mathbb{E}\left[h(\xi) - h\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right|}\\
&=& \left|\mathbb{E}\left[h\left(\sum_{k=1}^n \eta_{n,k}\right) - h\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right|\\
&\leq& \sum_{k=1}^n \sup_{a \in \mathbb{R}} \left|\mathbb{E}\left[h(a + \eta_{n,k}) - h(a + \xi_{n,k})\right] \right|\\
&\leq& \sum_{k=1}^n \sup_{a \in \mathbb{R}} \mathbb{E}\left[\left|h(a + \eta_{n,k}) - h(a) - h^\prime(a) \eta_{n,k} - \frac{1}{2} h^{\prime \prime}(a) \eta_{n,k}^2\right|\right]\\
&&+\sum_{k=1}^n \sup_{a \in \mathbb{R}} \mathbb{E}\left[\left|h(a + \xi_{n,k}) - h(a) - h^\prime(a) \xi_{n,k} - \frac{1}{2} h^{\prime \prime}(a) \xi_{n,k}^2\right|\right]\\
&\leq& \frac{1}{6} \left\|h^{\prime \prime \prime}\right\|_\infty \sum_{k=1}^n \mathbb{E}\left[\left|\eta_{n,k}\right|^3\right] + \frac{1}{6} \left\|h^{\prime \prime \prime}\right\|_{\infty} \sum_{k=1}^n \mathbb{E}\left[\left|\xi_{n,k}\right|^3;\left|\xi_{n,k}\right| < \epsilon\right]\\
&&+ \left\|h^{\prime \prime}\right\|_\infty \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| \geq \epsilon\right] \\
&\leq& \frac{1}{6} \left\|h^{\prime \prime \prime}\right\|_\infty \left( \mathbb{E}\left[\left|\xi\right|^3\right] \max_{k=1}^{n} \sigma_{n,k} + \epsilon \right) + \left\|h^{\prime \prime}\right\|_\infty \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| \geq \epsilon\right]. \end{eqnarray*} Because of (\ref{FelNegCond}), the previous calculation shows that for any $h \in \mathcal{H}$ \begin{eqnarray}
\limsup_{n \rightarrow \infty} \left|\mathbb{E}\left[h(\xi) - h\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right| \leq \left\|h^{\prime \prime}\right\|_\infty \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)\label{ClassResult}. \end{eqnarray}
\paragraph{} Recalling (\ref{AppForm}), we see that (\ref{ClassResult}) proves that Lindeberg's condition is sufficient for the weak limit relation $\displaystyle{\sum_{k=1}^n \xi_{n,k} \stackrel{w}{\rightarrow} \xi}$ to hold. However, since $\left\|h^{\prime \prime}\right\|_\infty$ blows up if we let $h$ run through the collection $\mathcal{H}$, (\ref{ClassResult}) is useless for the derivation of an upper bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$. We conclude that although the classical method suffices to decide when the number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ is $0$, it is not subtle enough to decide when it is small.
\subsection{Stein's method}
\paragraph{} Whereas the classical method provides an upper bound for (\ref{ExprToEst}) based on a direct analysis of the function $h$, Stein's method first transforms (\ref{ExprToEst}) to an expression which is easier to analyze. The basics of the method are contained in the following lemma. The proofs can be found in e.g. \cite{Barbour}.
\begin{lem}\label{SteinBasics} Let $h : \mathbb{R} \rightarrow \mathbb{R}$ be measurable and bounded. Put \begin{eqnarray} f_h(x) = e^{x^2/2} \int_{-\infty}^x \left(h(t) - \mathbb{E}[h(\xi)]\right) e^{-t^2/2} dt \label{SteinSolution}. \end{eqnarray} Then for any $x \in \mathbb{R}$ \begin{eqnarray} \mathbb{E}\left[h(\xi)\right] - h(x)= x f_h(x) - f_h^\prime (x).\label{SteinIdentity} \end{eqnarray} Moreover, if $h$ is absolutely continuous, then \begin{eqnarray}
\left\|f_h^{\prime \prime}\right\|_\infty \leq 2\left\|h^\prime\right\|_\infty\label{fhbounded}, \end{eqnarray} and if $h_z = 1_{\left]-\infty,z\right]}$ for $z \in \mathbb{R}$, then for all $x,y \in \mathbb{R}$ \begin{eqnarray}
\left|f^\prime_{h_z}(x) - f^\prime_{h_z} (y)\right| \leq 1.\label{SteinUpperBound} \end{eqnarray} \end{lem}
\paragraph{} We will try to apply Lemma \ref{SteinBasics} in order to derive an upper bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$. We first need three additional lemmata.
\paragraph{} Stein's method was used by Barbour and Hall to derive Berry-Esseen type bounds in \cite{BHall}. The following lemma is inspired by their paper.
\begin{lem} Let $h \in \mathcal{H}$ and put \begin{eqnarray} \delta_{n,k} = f_h\left(\sum_{i \neq k} \xi_{n,i} + \xi_{n,k}\right) - f_{h}\left(\sum_{i \neq k} \xi_{n,i}\right) - \xi_{n,k} f^\prime_h\left(\sum_{i \neq k} \xi_{n,i}\right)\label{defdelta} \end{eqnarray} and \begin{eqnarray} \epsilon_{n,k} = f^\prime_h\left(\sum_{i \neq k} \xi_{n,i} + \xi_{n,k}\right) - f^\prime_{h}\left(\sum_{i \neq k} \xi_{n,i}\right) - \xi_{n,k} f^{\prime \prime}_h\left(\sum_{i \neq k}\xi_{n,i}\right).\label{defepsilon} \end{eqnarray} Then \begin{eqnarray} \lefteqn{\mathbb{E}\left[\left(\sum_{k=1}^n \xi_{n,k}\right) f_h\left(\sum_{k=1}^n \xi_{n,k}\right) - f_h^\prime\left(\sum_{k=1}^n \xi_{n,k}\right)\right]}\nonumber\\ && =\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}\delta_{n,k}\right] - \sum_{k=1}^n \sigma_{n,k}^2 \mathbb{E}\left[\epsilon_{n,k}\right].\label{BHallIneq} \end{eqnarray} \end{lem}
\begin{proof} Calculate the right-hand side of (\ref{BHallIneq}) and recall that $\xi_{n,k}$ and $\sum_{i \neq k} \xi_{n,i}$ are independent, $\mathbb{E}\left[\xi_{n,k}\right] = 0$ and $\sum_{k=1}^n \sigma_{n,k}^2 = 1$. \end{proof}
\paragraph{} The following lemma is a straightforward application of Taylor's theorem.
\begin{lem} Let $f : \mathbb{R} \rightarrow \mathbb{R}$ have a bounded derivative and a bounded and piecewise continuous second derivative. Then for any $a, x \in \mathbb{R}$ \begin{eqnarray}
\lefteqn{\left|f(a + x) - f(a) - f^\prime(a) x \right|}\nonumber\\
&&\leq \min \left\{\left(\sup_{x_1,x_2 \in \mathbb{R}}\left|f^\prime(x_1) - f^{\prime}(x_2)\right|\right) \left|x\right|,\frac{1}{2} \left\|f^{\prime \prime}\right\|_\infty x^2\right\}.\label{Taylorf} \end{eqnarray} \end{lem}
\paragraph{} We finally observe that the favorable inequality (\ref{SteinUpperBound}) is extendable to all functions in the collection $\mathcal{H}$:
\begin{lem} Let $h \in \mathcal{H}$. Then for all $x,y \in \mathbb{R}$ \begin{eqnarray}
\left|f^\prime_h(x) - f^\prime_h(y)\right| \leq 1.\label{DiffFh} \end{eqnarray} \end{lem}
\begin{proof} From (\ref{SteinSolution}) we derive that \begin{eqnarray} f_h^\prime(x) = x e^{x^2/2} \int_{-\infty}^x \left(h(t) - \mathbb{E}[h(\xi)]\right) e^{-t^2/2} dt + h(x) - \mathbb{E}[h(\xi)].\label{fhRep} \end{eqnarray} Furthermore, for all $h \in \mathcal{H}$ we have \begin{eqnarray} h(x) = \int_0^1 h_{h^{-1}(s)}(x) ds.\label{hRep} \end{eqnarray} Combining (\ref{fhRep}) and (\ref{hRep}) and applying Fubini yields \begin{eqnarray} f^\prime_h(x) - f^\prime_h(y) = \int_0^1 \left[f^\prime_{h_{h^{-1}(s)}}(x) - f^\prime_{h_{h^{-1}(s)}}(y)\right] ds\label{DiffRep} \end{eqnarray} and the lemma follows from (\ref{SteinUpperBound}). \end{proof}
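\paragraph{} Inequality (\ref{DiffFh}) can be probed numerically for a concrete member of $\mathcal{H}$. The Python sketch below is our own construction; the logistic choice of $h$ and the crude trapezoid quadrature are assumptions of the illustration, not part of the proof. It takes $h(x) = \left(1 + e^x\right)^{-1}$, for which $\mathbb{E}[h(\xi)] = 1/2$ by symmetry, evaluates $f_h^\prime$ via (\ref{fhRep}), and checks that the spread of its values on $[-3,3]$ does not exceed $1$ up to discretization error.

```python
import math

def h(x):                        # a member of H: strictly decreasing, 1 -> 0
    return 1.0 / (1.0 + math.exp(x))

# E[h(xi)] = 1/2 because h(x) + h(-x) = 1 and xi is symmetric.
n_steps = 7500
dt = 15.0 / n_steps              # grid on [-12, 3] with step 0.002
ts = [-12.0 + dt * i for i in range(n_steps + 1)]
g = [(h(t) - 0.5) * math.exp(-0.5 * t * t) for t in ts]

# Cumulative trapezoid rule: integral[i] ~ integral of g over (-inf, ts[i]];
# the tail of the integrand below -12 is negligible.
integral = [0.0]
for i in range(1, len(ts)):
    integral.append(integral[-1] + 0.5 * (g[i - 1] + g[i]) * dt)

def f_prime(i):                  # f_h'(x) = x f_h(x) + h(x) - E[h(xi)]
    x = ts[i]
    return x * math.exp(0.5 * x * x) * integral[i] + h(x) - 0.5

vals = [f_prime(i) for i in range(0, len(ts), 50) if abs(ts[i]) <= 3.0001]
spread = max(vals) - min(vals)   # the lemma caps this at 1
```

For this smooth $h$ the computed spread stays far below the bound $1$, which is consistent with the lemma; the bound is needed in full for the indicator-like limits of $\mathcal{H}$.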
\paragraph{} Combining (\ref{SteinIdentity}), (\ref{fhbounded}), (\ref{BHallIneq}), (\ref{Taylorf}) and (\ref{DiffFh}) yields for $h \in \mathcal{H}$ and $\epsilon > 0$ \begin{eqnarray*}
\lefteqn{\left|\mathbb{E}\left[h\left(\xi\right) - h\left(\sum_{k=1}^{n}\xi_{n,k}\right) \right]\right|}\\
&=& \left|\mathbb{E}\left[\left(\sum_{k=1}^n \xi_{n,k}\right) f_h\left(\sum_{k=1}^n \xi_{n,k}\right) - f_h^\prime\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right|\\
&\leq& \sum_{k=1}^n \mathbb{E}\left[\left|\xi_{n,k}\delta_{n,k}\right|\right] + \sum_{k=1}^n \sigma_{n,k}^2 \mathbb{E}\left[\left|\epsilon_{n,k}\right|\right]\\
&\leq& \frac{1}{2}\left\|f_h^{\prime \prime}\right\|_\infty \sum_{k=1}^n \mathbb{E}\left[\left|\xi_{n,k}\right|^3 ; \left|\xi_{n,k}\right|<\epsilon\right]\\
&& + \left(\sup_{x_1,x_2 \in \mathbb{R}} \left|f_h^\prime(x_1) - f_{h}^\prime(x_2)\right|\right) \sum_{k=1}^n \mathbb{E}\left[\left|\xi_{n,k}\right|^2;\left|\xi_{n,k}\right|\geq \epsilon\right]\\
&& + \left(\sup_{x_1, x_2 \in \mathbb{R}} \left|f_h^{\prime \prime}(x_1) - f_h^{\prime \prime} (x_2)\right| \right)\sum_{k=1}^n\sigma_{n,k}^2 \mathbb{E}\left[\left|\xi_{n,k}\right|\right]\\
&\leq& \frac{1}{2} \left\|f_h^{\prime \prime}\right\|_\infty \epsilon + \sum_{k=1}^n \mathbb{E}\left[\left|\xi_{n,k}\right|^2;\left|\xi_{n,k}\right|\geq \epsilon\right] \\
&&+ \left(\sup_{x_1,x_2 \in \mathbb{R}} \left|f_h^{\prime \prime}(x_1) - f_h^{\prime \prime}(x_2)\right|\right) \max_{k=1}^n \sigma_{n,k}. \end{eqnarray*}
\paragraph{} Because of (\ref{FelNegCond}), the previous calculation shows that for any $h \in \mathcal{H}$ \begin{eqnarray}
\limsup_{n \rightarrow \infty} \left|\mathbb{E}\left[h(\xi) - h\left(\sum_{k=1}^n \xi_{n,k}\right)\right]\right| \leq \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)\label{SteinResult}. \end{eqnarray} \paragraph{} Recalling (\ref{AppForm}), we see that (\ref{SteinResult}) entails the following beautiful generalization of the sufficiency of Lindeberg's condition in the Lindeberg-Feller CLT.
\begin{thm}\label{MainThm} The inequality \begin{eqnarray} \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right) \leq \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)\label{ineqMainThm} \end{eqnarray} is valid. \end{thm}
\section{A lower bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$}
\paragraph{} In \cite{BHall} the following theorem is proved.
\begin{thm}\label{BHallLowerBoundTheorem} There exists a constant $C > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \left(1 - e^{-\frac{1}{4} \xi_{n,k}^2}\right)\right] \leq C \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right).\label{BHallIneq2} \end{eqnarray} Moreover, (\ref{BHallIneq2}) can be shown to hold with $C \leq 41$. \end{thm}
\paragraph{} Let $\Phi_L$ be the collection of all non-decreasing functions $\phi : \mathbb{R}^+ \rightarrow \left[0,1\right]$ which are strictly positive on $\mathbb{R}^+_0$ and for which $\displaystyle{\lim_{\epsilon \downarrow 0} \phi(\epsilon) = 0}$. \paragraph{} Inspired by Theorem \ref{BHallLowerBoundTheorem}, we define for $\phi \in \Phi_L$ the number \begin{eqnarray}
\widetilde{\textrm{\upshape{Lin}}}_\phi\left(\left\{\xi_{n,k}\right\}\right) = \limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi(\left|\xi_{n,k}\right|)\right] \end{eqnarray} which we call the \textit{relaxed Lindeberg index (with respect to $\phi$)}. Furthermore, we put for $\gamma > 0$ \begin{eqnarray}
\phi_{\gamma}(x) = 1 - e^{-\gamma x^2} \end{eqnarray} and \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\xi_{n,k}\right\}\right) = \widetilde{\textrm{\upshape{Lin}}}_{\phi_\gamma}\left(\left\{\xi_{n,k}\right\}\right). \end{eqnarray} \paragraph{} The following proposition collects some basic properties of the relaxed Lindeberg index. \begin{pro}\label{thm:PropRelLin} For any $\phi \in \Phi_L$ we have \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_\phi\left(\left\{\xi_{n,k}\right\}\right) \leq \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)\label{RelLinLeqLin} \end{eqnarray} and, in addition, \begin{eqnarray} \left\{\xi_{n,k}\right\} \textrm{ satisfies Lindeberg's condition } \Leftrightarrow \widetilde{\textrm{\upshape{Lin}}}_\phi \left(\left\{\xi_{n,k}\right\}\right) = 0.\label{LCByRelLin} \end{eqnarray} Furthermore, for any sequence $(\phi_n)_n$ in $\Phi_L$ we have \begin{eqnarray} \phi_n \uparrow 1_{]0, \infty[} \Rightarrow \widetilde{\textrm{\upshape{Lin}}}_{\phi_n}\left(\left\{\xi_{n,k}\right\}\right) \uparrow \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right).\label{gensupLin} \end{eqnarray} In particular, \begin{eqnarray}
\gamma \uparrow \infty \Rightarrow \widetilde{\textrm{\upshape{Lin}}}_\gamma\left(\left\{\xi_{n,k}\right\}\right) \uparrow \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right).\label{supLin} \end{eqnarray} Finally, for any $\gamma > 0$, \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{2 \gamma}\left(\left\{\xi_{n,k}\right\}\right) \leq 2 \widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\xi_{n,k}\right\}\right).\label{CompLin} \end{eqnarray} \end{pro}
\begin{proof} (\ref{RelLinLeqLin}) follows from \begin{eqnarray*}
\lefteqn{\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi\left(\left|\xi_{n,k}\right|\right)\right]}\\
&=& \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi\left(\left|\xi_{n,k}\right|\right);\left|\xi_{n,k}\right| \geq \epsilon \right] + \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi\left(\left|\xi_{n,k}\right|\right) ; \left|\xi_{n,k}\right| < \epsilon\right]\\
&\leq& \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| \geq \epsilon \right] + \phi(\epsilon) \end{eqnarray*} and the fact that $\displaystyle{\lim_{\epsilon \downarrow 0} \phi(\epsilon) = 0}$. \paragraph{} Notice that (\ref{RelLinLeqLin}) entails that $\widetilde{\textrm{\upshape{Lin}}}_\phi \left(\left\{\xi_{n,k}\right\}\right) = 0$ if $\left\{\xi_{n,k}\right\}$ satisfies Lindeberg's condition. For the converse implication, observe that, for any $\epsilon > 0$, \begin{displaymath}
\phi(\epsilon) \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| > \epsilon\right] \leq \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi\left(\left|\xi_{n,k}\right|\right)\right] \end{displaymath} and $\phi(\epsilon) > 0$. This proves (\ref{LCByRelLin}). \paragraph{} In order to prove (\ref{gensupLin}), we choose for $\epsilon > 0$ a natural number $n_0$ such that \begin{eqnarray*} \phi_{n_0}(\epsilon) \geq 1 - \epsilon. \end{eqnarray*} Then \begin{displaymath}
\limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 ; \left|\xi_{n,k}\right| > \epsilon\right] \leq \limsup_{n \rightarrow \infty} \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi_{n_0}\left(\left|\xi_{n,k}\right|\right)\right] + \epsilon \end{displaymath} and (\ref{gensupLin}) follows. \paragraph{} Finally, (\ref{CompLin}) follows from the observation that \begin{eqnarray*}
\lefteqn{\sum_{k=1}^{n} \mathbb{E}\left[\xi_{n,k}^2 \phi_{2 \gamma}\left(\left|\xi_{n,k}\right|\right)\right]}\\
&=&\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi_\gamma\left(\left|\xi_{n,k}\right|\right)\right] + \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \phi_{\gamma}\left(\left|\xi_{n,k}\right|\right) e^{-\gamma \xi_{n,k}^2}\right], \end{eqnarray*} in which the second sum is dominated by the first. \end{proof}
\paragraph{} Proposition \ref{thm:ExplRelLin} provides an explicit formula for $\widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\eta_{\alpha,n,k}\right\}\right)$, where $\{\eta_{\alpha,n,k}\}$ is the STA determined by (\ref{def:sn}), (\ref{def:STAeta1}) and (\ref{def:STAeta2}). Recall from Proposition \ref{thm:ExplLin} that $\{\eta_{\alpha,n,k}\}$ satisfies Feller's condition (\ref{FelNegCond}) and that $\textrm{\upshape{Lin}}\left(\left\{\eta_{\alpha,n,k}\right\}\right) = \alpha$.
\begin{pro}\label{thm:ExplRelLin} The formula \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\eta_{\alpha,n,k}\right\}\right) = \alpha \left(1 - \frac{1 - e^{-\gamma(1-\alpha)}}{\gamma (1 - \alpha)}\right)\label{eq:ExplRelLin} \end{eqnarray} is valid. \end{pro}
\begin {proof} An easy calculation leads to \begin{eqnarray*}
\lefteqn{\sum_{k=1}^n \mathbb{E}\left[\eta_{\alpha,n,k}^2 \phi_\gamma\left(\left|\eta_{\alpha,n,k}\right|\right)\right]}\\ &=& \frac{n}{s_n^2} \left(1 - e^{-\frac{\gamma}{s_n^2}}\right) - \beta \left(1 - e^{-\frac{\gamma}{s_n^2}}\right) \left(\frac{1}{s_n^2} \sum_{k=1}^n k^{-1}\right) + \beta \frac{n}{s_n^2} - \beta \left(\frac{1}{s_n^2} \sum_{k=1}^n e^{-\frac{\gamma k}{s_n^2}}\right). \end{eqnarray*} Arguing as in the proof of Proposition \ref{thm:ExplLin}, we see that \begin{displaymath} \frac{n}{s_n^2} \rightarrow \frac{1}{1 + \beta} \textrm{ and } \frac{1}{s_n^2} \sum_{k=1}^n k^{-1} \rightarrow 0. \end{displaymath} Also, \begin{displaymath} \frac{1}{s_n^2} \sum_{k=1}^n e^{-\frac{\gamma k}{s_n^2}} = e^{-\frac{\gamma}{s_n^2}} \frac{1 - e^{-\frac{n\gamma}{s_n^2}}}{s_n^2\left(1 - e^{-\frac{\gamma}{s_n^2}}\right)} \rightarrow \frac{1 - e^{- \frac{\gamma}{1 + \beta}}}{\gamma}. \end{displaymath} Recalling that $\alpha = \frac{\beta}{1 + \beta}$, the previous limit relations yield the desired result. \end{proof}
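\paragraph{} Formula (\ref{eq:ExplRelLin}) can be verified numerically. The Python sketch below (our own, with hypothetical names) computes $\sum_{k=1}^n \mathbb{E}\left[\eta_{\alpha,n,k}^2 \phi_\gamma\left(\left|\eta_{\alpha,n,k}\right|\right)\right]$ directly from the distribution (\ref{def:STAeta1})--(\ref{def:STAeta2}) and compares it with the right-hand side of (\ref{eq:ExplRelLin}); it also illustrates the limit relation (\ref{supLin}) on this example.

```python
import math

def relaxed_lindeberg_sum(alpha, gamma, n):
    """sum_k E[eta^2 (1 - exp(-gamma eta^2))] for the STA {eta_{alpha,n,k}}."""
    beta = alpha / (1.0 - alpha)
    s2 = (1.0 + beta) * n - beta * sum(1.0 / k for k in range(1, n + 1))
    total = 0.0
    for k in range(1, n + 1):
        # +-1/s_n atoms carry total mass 1 - beta/k
        total += (1.0 - beta / k) / s2 * (1.0 - math.exp(-gamma / s2))
        # +-sqrt(k)/s_n atoms carry total mass beta/k
        total += beta / s2 * (1.0 - math.exp(-gamma * k / s2))
    return total

def relaxed_lindeberg_formula(alpha, gamma):
    """Right-hand side of the explicit formula for Lin~_gamma({eta_{alpha,n,k}})."""
    c = gamma * (1.0 - alpha)
    return alpha * (1.0 - (1.0 - math.exp(-c)) / c)
```

For $\alpha = \gamma = 1/2$ and $n = 2 \cdot 10^5$ the two agree to within $10^{-3}$, and for large $\gamma$ the formula approaches $\alpha = \textrm{\upshape{Lin}}\left(\left\{\eta_{\alpha,n,k}\right\}\right)$, as (\ref{supLin}) predicts.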
\paragraph{} In terms of the relaxed Lindeberg index, Theorem \ref{BHallLowerBoundTheorem} can be restated as
\begin{thm}\label{BHallLowerBoundTheoremNew} There exists a constant $C_{1/4} > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{1/4}\left(\left\{\xi_{n,k}\right\}\right) \leq C_{1/4} \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right).\label{BHallIneq2new} \end{eqnarray} Moreover, (\ref{BHallIneq2new}) can be shown to hold with $C_{1/4} \leq 41$. \end{thm}
\paragraph{} Theorem \ref{BHallLowerBoundTheoremNew} generalizes the necessity of Lindeberg's condition in the Lindeberg-Feller CLT and therefore constitutes a counterpart to Theorem \ref{MainThm}. It also entails the following corollary.
\begin{gev} For each $\gamma > 0$ there exists a constant $C_\gamma > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\xi_{n,k}\right\}\right) \leq C_\gamma \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right). \end{eqnarray} \end{gev}
\begin{proof} Combine (\ref{CompLin}) and (\ref{BHallIneq2new}). \end{proof}
\paragraph{} In the same spirit we have obtained Theorem \ref{thmBHallIneqour}, which is stronger than Theorem \ref{BHallLowerBoundTheoremNew}. It provides the sharpest lower bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ we have produced so far. We defer the proof to Appendix A.
\begin{thm}\label{thmBHallIneqour} There exists a constant $C_{1/2} > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{1/2}\left(\left\{\xi_{n,k}\right\}\right) \leq C_{1/2} \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right).\label{BHallIneq2our} \end{eqnarray} Moreover, (\ref{BHallIneq2our}) can be shown to hold with $C_{1/2} \leq 30.3$. \end{thm}
\section{Conclusion}
\paragraph{} Theorem \ref{MainThm} and Theorem \ref{thmBHallIneqour} together yield Theorem \ref{MainResultLowUp}, our main result. It answers Question \ref{QFormQ} by providing an upper and a lower bound for the number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$, and these bounds together generalize the Lindeberg-Feller CLT.
\begin{thm}\label{MainResultLowUp} There exists a constant $\widetilde{C}_{1/2} > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \widetilde{C}_{1/2} \widetilde{\textrm{\upshape{Lin}}}_{1/2}\left(\left\{\xi_{n,k}\right\}\right) \leq \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right) \leq \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right).\label{MainResultIneqs} \end{eqnarray} Moreover, we can take $\widetilde{C}_{1/2} \geq 0.033$. \end{thm}
\paragraph{} Applying (\ref{eq:ExplLin}), (\ref{eq:ExplRelLin}) and (\ref{MainResultIneqs}) to the STA $\{\eta_{\alpha,n,k}\}$, determined by (\ref{def:sn}), (\ref{def:STAeta1}) and (\ref{def:STAeta2}), we obtain
\begin{thm}\label{thm:ExplMainIneqs} There exists a constant $\widetilde{C}_{1/2} > 0$ such that for each $0 < \alpha < 1$ \begin{eqnarray} \widetilde{C}_{1/2} \alpha \left(1 - \frac{1 - e^{-\frac{1}{2}(1-\alpha)}}{\frac{1}{2} (1 - \alpha)}\right) \leq \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \eta_{\alpha,n,k}\right) \leq \alpha.\label{ExplMainIneqs} \end{eqnarray} Moreover, we can take $\widetilde{C}_{1/2} \geq 0.033$. \end{thm}
\paragraph{} From the lower bound for $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \eta_{\alpha,n,k}\right)}$ in (\ref{ExplMainIneqs}) we conclude that the upper bound in (\ref{MainResultIneqs}) is of optimal order in the sense that
\begin{thm} For no $p > 0$ does there exist a constant $C>0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right) \leq C \left[\textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)\right]^{1 + p}. \end{eqnarray} \end{thm}
\paragraph{} We end this paper with the following open question.
\begin{vrg}\label{LBQ} Does there exist a constant $C > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right) \leq C \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)? \end{eqnarray} \end{vrg}
\paragraph{} A positive answer to Question \ref{LBQ} is equivalent to the existence of a constant $C > 0$, not depending on $\left\{\xi_{n,k}\right\}$, such that for each $\gamma > 0$ \begin{eqnarray*} \widetilde{\textrm{\upshape{Lin}}}_{\gamma}\left(\left\{\xi_{n,k}\right\}\right) \leq C \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right). \end{eqnarray*} Indeed, by (\ref{supLin}), $\sup_{\gamma > 0} \widetilde{\textrm{\upshape{Lin}}}_\gamma\left(\left\{\xi_{n,k}\right\}\right) = \textrm{\upshape{Lin}}\left(\left\{\xi_{n,k}\right\}\right)$.
\section*{Appendix A : Proof of Theorem \ref{thmBHallIneqour} }
\paragraph{} We assume w.l.o.g. that $\xi$ and $\left\{\xi_{n,k}\right\}$ are independent. Furthermore, we let $\left\{\widetilde{\xi}_{n,k}\right\}$ be a copy of $\left\{\xi_{n,k}\right\}$, independent of $\xi$ and $\left\{\xi_{n,k}\right\}$. \paragraph{} Put \begin{eqnarray} \psi_{1/2}(x) = 1 - \int_0^1 e^{-\frac{1}{2}(1-s^2)x^2} ds. \end{eqnarray} The following lemma reveals that $\psi_{1/2}$ is closely related to $\phi_{1/2}$.
\begin{lemA} The inequalities \begin{eqnarray} \psi_{1/2} \leq \phi_{1/2} \leq \frac{3}{2}\psi_{1/2}\label{PreCompPsi} \end{eqnarray} are valid. \end{lemA}
\begin{proof} The first inequality is obvious. For the second inequality, we observe that \begin{eqnarray*} \phi_{1/2}(x) - \psi_{1/2}(x) &=& e^{-\frac{1}{2}x^2} \int_0^1 \left(e^{\frac{1}{2} s^2 x^2} - e^0\right) ds\\ &=& \frac{1}{2} e^{-\frac{1}{2} x^2} \int_0^1 \int_0^1 s^2 x^2 e^{\frac{1}{2} t s^2 x^2} dt ds\\ &\leq& \frac{1}{2} e^{-\frac{1}{2} x^2} \int_0^1 s^2 x^2 e^{\frac{1}{2} s^2 x^2} ds\\ &=& \frac{1}{2} e^{-\frac{1}{2} x^2} \int_0^1 s de^{\frac{1}{2}s^2 x^2}\\ &=& \frac{1}{2} e^{-\frac{1}{2} x^2} \left(e^{\frac{1}{2}x^2} - \int_0^1 e^{\frac{1}{2} s^2 x^2} ds\right)\\ &=& \frac{1}{2} \psi_{1/2}(x) \end{eqnarray*} and we are done. \end{proof}
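\paragraph{} The sandwich (\ref{PreCompPsi}) is easily checked numerically. The Python sketch below (our own) evaluates $\psi_{1/2}$ by a midpoint rule and verifies $\psi_{1/2} \leq \phi_{1/2} \leq \frac{3}{2}\psi_{1/2}$ at a few points. Note that the upper inequality is asymptotically sharp as $x \rightarrow 0$, since $\phi_{1/2}(x) \sim x^2/2$ while $\psi_{1/2}(x) \sim x^2/3$.

```python
import math

def phi_half(x):
    return 1.0 - math.exp(-0.5 * x * x)

def psi_half(x, m=20000):
    # midpoint rule for the integral of exp(-(1 - s^2) x^2 / 2) over s in [0, 1]
    step = 1.0 / m
    integral = sum(math.exp(-0.5 * (1.0 - (step * (i + 0.5)) ** 2) * x * x)
                   for i in range(m)) * step
    return 1.0 - integral
```

The small tolerances in the check below absorb the quadrature error of the midpoint rule.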
\paragraph{} It follows from (\ref{PreCompPsi}), together with the evident monotonicity of $\psi_{1/2}$, that $\psi_{1/2}$ belongs to $\Phi_L$. Also,
\begin{gevA} \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{\psi_{1/2}}\left(\left\{\xi_{n,k}\right\}\right) \leq \widetilde{\textrm{\upshape{Lin}}}_{1/2}\left(\left\{\xi_{n,k}\right\}\right) \leq \frac{3}{2}\widetilde{\textrm{\upshape{Lin}}}_{\psi_{1/2}}\left(\left\{\xi_{n,k}\right\}\right).\label{CompPsi} \end{eqnarray} \end{gevA}
\begin{lemA} Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be bounded, measurable and antisymmetric and put \begin{eqnarray} g(x) = xf(x). \end{eqnarray} Then, for $a \in \mathbb{R}$, \begin{eqnarray} \mathbb{E}\left[a^2 \int_0^1 g\left(\xi + sa\right) e^{-\frac{1}{2}\left(1-s^2\right)a^2} ds\right] = \mathbb{E}\left[a f(\xi + a)\right].\label{basicRelation} \end{eqnarray} \end{lemA}
\begin{proof} First suppose that $f$ is continuously differentiable. Then \begin{eqnarray*} \lefteqn{\mathbb{E}\left[a^2 \int_0^1 g(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2} ds\right]}\\ &=& \mathbb{E}\left[a^2 \int_0^1 \xi f(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2}ds\right] \\ &&+ \mathbb{E}\left[a^2 \int_0^1 sa f(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2} ds \right]\\ &=& \mathbb{E}\left[a^2 \int_0^1f^\prime(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2}ds\right]\\ && + \mathbb{E}\left[a \int_0^1 f(\xi + sa) d\left[e^{\frac{1}{2} s^2 a^2}\right] e^{-\frac{1}{2} a^2}\right]\\ &=& \mathbb{E}\left[a^2 \int_0^1f^\prime(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2}ds\right]\\ && +\left(\mathbb{E}\left[a f(\xi + a)\right] - \mathbb{E}\left[a^2 \int_0^1f^\prime(\xi + sa) e^{-\frac{1}{2} (1-s^2) a^2}ds\right]\right)\\ &=& \mathbb{E}\left[a f(\xi + a)\right], \end{eqnarray*} where the second equality follows from the fact that, for $h : \mathbb{R} \rightarrow \mathbb{R}$ continuously differentiable, \begin{eqnarray} \mathbb{E}\left[\xi h(\xi)\right] = \mathbb{E}\left[h^\prime(\xi)\right], \end{eqnarray} which is seen by performing an integration by parts. Now some standard approximation procedures allow us to drop the condition that $f$ be continuously differentiable. \end{proof}
\begin{lemA} Let $\mu$ be a symmetric probability distribution on the real line with Fourier transform \begin{eqnarray} \widehat{\mu}(t) = \int_{-\infty}^\infty e^{-itx}d\mu(x). \end{eqnarray} Then, for $a \in \mathbb{R}$, \begin{eqnarray} \mathbb{E}\left[\widehat{\mu}(\xi + a)\right] = \int_{-\infty}^\infty e^{-\frac{1}{2}x^2} \cos(ax) d\mu(x).\label{FourTransl} \end{eqnarray} \end{lemA}
\begin{proof} This is standard. \end{proof}
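\paragraph{} For completeness, we record the short computation behind (\ref{FourTransl}). It uses only Fubini's theorem, the symmetry of $\mu$, and the fact that $\xi$ is standard normal (as the identity $\mathbb{E}\left[\xi h(\xi)\right] = \mathbb{E}\left[h^\prime(\xi)\right]$ used above indicates), so that $\mathbb{E}\left[e^{-i\xi x}\right] = e^{-\frac{1}{2}x^2}$: \begin{eqnarray*} \mathbb{E}\left[\widehat{\mu}(\xi + a)\right] &=& \int_{-\infty}^\infty e^{-iax}\, \mathbb{E}\left[e^{-i\xi x}\right] d\mu(x) = \int_{-\infty}^\infty e^{-\frac{1}{2}x^2} \left(\cos(ax) - i \sin(ax)\right) d\mu(x)\\ &=& \int_{-\infty}^\infty e^{-\frac{1}{2}x^2} \cos(ax)\, d\mu(x), \end{eqnarray*} the $\sin$ term vanishing because $x \mapsto \sin(ax)$ is odd and $\mu$ is symmetric.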
\begin{lemA} Let $h : \mathbb{R} \rightarrow \mathbb{R}$ be bounded with a piecewise continuous derivative. Then, for random variables $\eta, \eta^\prime$, \begin{eqnarray}
\left|\mathbb{E}\left[h(\eta) - h\left(\eta^\prime\right)\right]\right| \leq \left(\int_{-\infty}^\infty \left|h^\prime (x) \right| dx\right) K\left(\eta,\eta^\prime\right).\label{AbsContEst} \end{eqnarray} \end{lemA}
\begin{proof} Perform an integration by parts on the left side of (\ref{AbsContEst}). \end{proof}
\begin{thmA}\label{AbstractThmLowerBound} Let $\mu$ be a symmetric probability distribution on the real line, put \begin{eqnarray} f(x) = \left\{\begin{array}{clrr} \frac{1 - \widehat{\mu}(x)}{x} & \textrm{ if }& x \neq 0\\ 0 & \textrm{ if }& x = 0 \end{array}\right. \end{eqnarray} and suppose that $f$ has a bounded second derivative and a bounded and piecewise continuous third derivative. Then \begin{eqnarray} \lefteqn{\left(\int_{-\infty}^\infty \left[1 - \left(1 + \frac{1}{2}x^2\right) e^{-\frac{1}{2} x^2}\right] d\mu(x)\right) \widetilde{\textrm{\upshape{Lin}}}_{\psi_{1/2}}\left(\left\{\xi_{n,k}\right\}\right)}\label{abstrIneq}\\
&\leq& \left(\int_{-\infty}^\infty \left|\left(\widehat{\mu}\right)^\prime(x)\right| dx + \int_{-\infty}^\infty \left|f^{\prime \prime}(x)\right| dx\right) \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right).\nonumber \end{eqnarray} \end{thmA}
\begin{proof} Put \begin{eqnarray} g(x) = xf(x) \end{eqnarray} and \begin{eqnarray} \beta_{n,k} &=& \int_0^1 \left[\widehat{\mu}\left(\xi\right) - \widehat{\mu}\left(\xi + s\xi_{n,k}\right)\right] e^{-\frac{1}{2} (1-s^2) \xi_{n,k}^2} ds,\\ \gamma_{n} &=& g(\xi) - g\left(\sum_{k=1}^n \xi_{n,k}\right),\\ \delta_{n,k} &=& \int_0^1 \left[f^\prime\left(s\xi_{n,k} + \sum_{i \neq k} \xi_{n,i} + \widetilde{\xi}_{n,k}\right) - f^\prime\left(s \xi_{n,k} + \xi\right)\right] ds,\\ \epsilon_{n,k} &=& \int_{0}^1 \left[f^\prime\left(\sum_{i \neq k} \xi_{n,i} + s \xi_{n,k}\right) - f^\prime\left(\sum_{i \neq k} \xi_{n,i} + s\xi_{n,k} + \widetilde{\xi}_{n,k}\right)\right.\\ && + \left.\widetilde{\xi}_{n,k} f^{\prime \prime}\left(\sum_{i \neq k} \xi_{n,i} + s \xi_{n,k}\right) \right]ds.\nonumber \end{eqnarray} Then \begin{eqnarray}
\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \left(1-\widehat{\mu}(\xi)\right)\psi_{1/2}\left(\left|\xi_{n,k}\right|\right)\right] = \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \left(\beta_{n,k} + \gamma_n + \delta_{n,k} + \epsilon_{n,k}\right)\right] \end{eqnarray} which is seen by expanding the right side, using independence, the fact that $\mathbb{E}\left[\widetilde{\xi}_{n,k}\right] = \mathbb{E}\left[\xi_{n,k}\right] = 0$, and (\ref{basicRelation}).\\ It follows from (\ref{FourTransl}) and the inequality $\displaystyle{1 - \cos(a) \leq \frac{1}{2} a^2}$ that \begin{eqnarray} \lefteqn{\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \beta_{n,k}\right]}\\ &=& \int_{-\infty}^\infty e^{-\frac{1}{2}x^2} \sum_{k=1}^n \mathbb{E} \left[\xi_{n,k}^2 \int_0^1 \left[1 - \cos(s \xi_{n,k} x)\right] e^{-\frac{1}{2} (1-s^2) \xi_{n,k}^2}ds\right] d\mu(x)\nonumber\\ &\leq& \left(\int_{-\infty}^\infty \frac{1}{2} x^2 e^{-\frac{1}{2} x^2} d\mu(x)\right) \sum_{k=1}^n\mathbb{E}\left[\xi_{n,k}^2 \int_0^1 s^2 \xi_{n,k}^2 e^{-\frac{1}{2} (1 - s^2) \xi_{n,k}^2}ds\right]\nonumber\\
&=& \left(\int_{-\infty}^\infty \frac{1}{2}x^2 e^{-\frac{1}{2} x^2} d\mu(x)\right) \sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \psi_{1/2}\left(\left|\xi_{n,k}\right|\right)\right].\nonumber \end{eqnarray} From (\ref{AbsContEst}) we learn that \begin{eqnarray}
\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \gamma_{n}\right] \leq \left(\int_{-\infty}^\infty \left|\left(\widehat{\mu}\right)^{\prime}(x)\right| dx\right) K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right) \end{eqnarray} and \begin{eqnarray}
\sum_{k=1}^n\mathbb{E}\left[\xi_{n,k}^2 \delta_{n,k}\right] &\leq& \left(\int_{-\infty}^\infty \left|f^{\prime \prime}(x)\right|dx\right) K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right). \end{eqnarray} Finally, (\ref{Taylorf}) reveals that \begin{eqnarray}
\sum_{k=1}^n \mathbb{E}\left[\xi_{n,k}^2 \epsilon_{n,k}\right] \leq \frac{1}{2} \left\|f^{\prime \prime \prime}\right\|_\infty \sum_{k=1}^n \sigma_{n,k}^4. \end{eqnarray} Now the theorem follows from (\ref{FelNegCond}). \end{proof}
\begin{gevA} There exists a constant $C_{\psi_{1/2}}$, not depending on $\left\{\xi_{n,k}\right\}$, such that \begin{eqnarray} \widetilde{\textrm{\upshape{Lin}}}_{\psi_{1/2}}\left(\left\{\xi_{n,k}\right\}\right) \leq C_{\psi_{1/2}} \limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right).\label{psiIneq} \end{eqnarray} Moreover, (\ref{psiIneq}) can be shown to hold with $\displaystyle{C_{\psi_{1/2}} \leq 20.2}$. \end{gevA}
\begin{proof} Let $\mu_\sigma$ be normal with mean 0 and variance $\sigma^2$ and put \begin{eqnarray} f_\sigma(x) = \frac{1 - \widehat{\mu}_\sigma(x)}{x}. \end{eqnarray} Then \begin{eqnarray} \int_{-\infty}^\infty \left[1 - \left(1 + \frac{1}{2}x^2\right) e^{-\frac{1}{2} x^2}\right] d\mu_\sigma(x) = 1 - \left(1 + \frac{1}{2}\frac{\sigma^2}{1+\sigma^2}\right)\frac{1}{\sqrt{1+\sigma^2}} \end{eqnarray} and \begin{eqnarray}
\int_{-\infty}^\infty \left|\left(\widehat{\mu}_{\sigma}\right)^\prime(x)\right| dx =2. \end{eqnarray} Furthermore, \begin{eqnarray}
\int_{-\infty}^\infty \left|f_\sigma^{\prime \prime}(x)\right|dx = \sigma^2 - 4 f_\sigma^\prime\left(R_\sigma\right) \end{eqnarray} with $R_\sigma$ the strictly positive zero of $f_\sigma^{\prime \prime}$.
If $\sigma = 1.7$, then $1.4912 < R_\sigma < 1.4914$ and applying (\ref{abstrIneq}) in this case reveals that we can take $C_{\psi_{1/2}} \approx 20.19$. \end{proof}
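The numerical claims above are easy to check. The following Python sketch (our own sanity check, not part of the original argument) locates the positive zero $R_\sigma$ of $f_\sigma^{\prime\prime}$ by bisection for $\sigma = 1.7$, using the closed forms $f_\sigma^\prime(x) = \sigma^2 g(x) - (1-g(x))/x^2$ and $f_\sigma^{\prime\prime}(x) = -\sigma^4 x\, g(x) - \sigma^2 g(x)/x + 2(1-g(x))/x^3$ with $g(x) = e^{-\sigma^2 x^2/2}$, and then evaluates the constant produced by (\ref{abstrIneq}):

```python
import math

sigma = 1.7
s2 = sigma ** 2  # variance of the normal distribution mu_sigma

# Closed forms for the derivatives of f_sigma(x) = (1 - exp(-s2 x^2/2))/x.
def fp(x):   # f_sigma'
    g = math.exp(-0.5 * s2 * x * x)
    return s2 * g - (1.0 - g) / (x * x)

def fpp(x):  # f_sigma''
    g = math.exp(-0.5 * s2 * x * x)
    return -s2 * s2 * x * g - s2 * g / x + 2.0 * (1.0 - g) / x ** 3

# Bisection for the strictly positive zero R_sigma of f_sigma''.
lo, hi = 1.4, 1.6  # f'' changes sign on this interval for sigma = 1.7
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if fpp(lo) * fpp(mid) <= 0:
        hi = mid
    else:
        lo = mid
R = 0.5 * (lo + hi)

# Denominator of (abstrIneq) for mu_sigma, and the resulting constant
# C = (2 + sigma^2 - 4 f'(R)) / denom.
denom = 1.0 - (1.0 + 0.5 * s2 / (1.0 + s2)) / math.sqrt(1.0 + s2)
C = (2.0 + s2 - 4.0 * fp(R)) / denom

print(R, C)  # R ~ 1.4913, C ~ 20.19
```

This reproduces the stated bounds $1.4912 < R_\sigma < 1.4914$ and $C_{\psi_{1/2}} \approx 20.19$.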
\paragraph{\bf{Remark}:} The authors have attempted to sharpen the upper bound for $C_{\psi_{1/2}}$ by applying Theorem \ref{AbstractThmLowerBound} to non-normal probability distributions, but have not succeeded so far.
\begin{proof}(Theorem \ref{thmBHallIneqour}) Combine (\ref{CompPsi}) and (\ref{psiIneq}). \end{proof}
\section*{Appendix B : The theoretical framework behind the calculations}
\paragraph{} Since there are many metrics which metrize the weak topology, the reader may well ask why we have singled out the Kolmogorov metric $K$ in our estimations, and why we would be interested in assessing $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ the way we did. The fundamental reason is to be found in approach theory (\cite{BeTop},\cite{BeAn},\cite{Lo}).
\paragraph{} Whereas convergence is handled in the weak topology, notions of approximate convergence can only be handled in a structure that allows for numbers, such as a metric. In the theory of approximation in metric or normed spaces, such notions exist: the asymptotic radius and asymptotic center of a sequence are well known (\cite{beny}, \cite{bos}, \cite{Edel},\cite{Ha},\cite{lam}). Approach theory makes these concepts available in a much wider setting than metric spaces alone.
\paragraph{} An approach structure on a set can be thought of as a generalized metric structure in the sense that it is also given by a distance (which we usually denote by $\delta$), defined not between pairs of points but between points and sets. As in the case of metric spaces, the natural mappings between approach spaces are contractions. Another analogy with metrics is that a distance $\delta$ has a canonical underlying topology, which we sometimes refer to as the topology distancized by $\delta$. Since we do not want to go into details and technicalities here, we refer the interested reader to \cite{Lo}.
\paragraph{} Why do we then nevertheless end up with a metric, and why the Kolmogorov metric? This is not an arbitrary, ad-hoc choice; it is dictated by the theory, just as weak convergence is dictated by the weak topology. Consider the following setup.
\paragraph{} Let $\mathcal{F}$ (respectively $\mathcal{F}_c$) stand for the collection of probability distributions (respectively continuous probability distributions) on the real line, and let $\star$ denote convolution. Now one of the main principles of Bergstr{\"o}m's direct convolution method (\cite{BergCLT},\cite{Ra}) is that weak convergence in $\mathcal{F}$ of a sequence $(\eta_n)_n$ to $\eta$ is equivalent to uniform convergence of the sequence $\left(\eta_n \star \zeta\right)_n$ to $\eta \star \zeta$ for every continuous $\zeta$. In other words, if we let $\mathcal{T}_w$ stand for the topology of weak convergence on $\mathcal{F}$ and $\mathcal{T}_K$ for the topology of uniform convergence (i.e.~generated by the Kolmogorov metric) on $\mathcal{F}_c$, then $\mathcal{T}_w$ is the weakest topology on $\mathcal{F}$ making all mappings \begin{eqnarray} \left(\mathcal{F} \rightarrow \left(\mathcal{F}_c,\mathcal{T}_K\right) : \eta \mapsto \eta \star \zeta\right)_{\zeta \in \mathcal{F}_c}\label{GenSource} \end{eqnarray} continuous. Fortunately, we can take the weakest topology with the above property, but this is something we cannot do with metrics, and this is where approach theory comes to the rescue.
\paragraph{} If we replace the uniform topology $\mathcal{T}_K$ in (\ref{GenSource}) by its generating metric $K$, then we end up with the mappings \begin{eqnarray} \left(\mathcal{F} \rightarrow \left(\mathcal{F}_c,K\right) : \eta \mapsto \eta \star \zeta\right)_{\zeta \in \mathcal{F}_c}.\label{NumGenSource} \end{eqnarray} We are not able to construct a weakest metric on $\mathcal{F}$ metrizing the weak topology and making all mappings in (\ref{NumGenSource}) contractive; such a metric simply does not exist. However, approach spaces allow for so much flexibility that we are able to construct a weakest distance on $\mathcal{F}$ distancizing the weak topology and making all mappings in (\ref{NumGenSource}) contractive. We are interested in the approach structure determined by the latter distance. We call it the continuity approach structure and denote it by $\delta_c$. So we have introduced $\delta_c$ in exactly the same way as $\mathcal{T}_w$.
\paragraph{} It can be shown that the actual distance $\delta_c$ is given by \begin{eqnarray} \delta_c\left(\eta,\mathcal{D}\right) = \sup_{\mathcal{F}_0} \myinf_{\psi \in \mathcal{D}} \sup_{\zeta \in \mathcal{F}_0} K\left(\eta \star \zeta, \psi \star \zeta \right) \end{eqnarray} where $\eta \in \mathcal{F}$ and $\mathcal{D} \subset \mathcal{F}$ and where the first supremum runs over all finite subsets $\mathcal{F}_0$ of $\mathcal{F}_c$.
\paragraph{} The nature of approach structures is such that they allow for many topological-like considerations. Thus in a topological space we can speak of convergent sequences, and in an approach space we can speak of the limit operator of a sequence, which in our case is given by \begin{eqnarray} \lambda_c(\eta_n \rightarrow \eta) = \sup_{\zeta \in \mathcal{F}_c} \limsup_{n \rightarrow \infty} K\left(\eta \star \zeta, \eta_n \star \zeta \right). \end{eqnarray} Notice that $(\eta_n)_n$ converges weakly to $\eta$ if and only if $\displaystyle{\lambda_c\left(\eta_n \rightarrow \eta\right) = 0}$.
\paragraph{} The above $\delta_c$ is not a metric; it is a genuine approach structure, and $\lambda_c$ is not an asymptotic radius. So why then do we nevertheless end up with just the number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$? This is because of a peculiarity of the present set-up.
\paragraph{} We can prove the following results, where $j(\eta)$ stands for the supremum of the discontinuity jumps of the distribution $\eta$ and where $\delta_K$ stands for the distance generated by the metric $K$ in the usual way, i.e. \begin{eqnarray} \delta_K(\eta, \mathcal{D}) = \inf_{\psi \in \mathcal{D}} K(\eta, \psi). \end{eqnarray}
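To make the Kolmogorov metric $K$ concrete: it is the supremum distance between distribution functions, $K(\eta,\psi) = \sup_x \left|F_\eta(x) - F_\psi(x)\right|$. The following Python sketch (our own illustration; the grid-based supremum is an approximation) evaluates $K$ for two Gaussian laws:

```python
import math

def norm_cdf(x, mu=0.0, sd=1.0):
    """Distribution function of a normal law, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

def kolmogorov(F, G, lo=-10.0, hi=10.0, steps=20000):
    """Grid approximation of K(F, G) = sup_x |F(x) - G(x)|."""
    best = 0.0
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        best = max(best, abs(F(x) - G(x)))
    return best

# K between N(0,1) and N(1,1): the supremum is attained at x = 1/2,
# where it equals 2*Phi(1/2) - 1 ~ 0.3829.
K = kolmogorov(lambda x: norm_cdf(x), lambda x: norm_cdf(x, mu=1.0))
print(K)
```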
\begin{thmB}\label{CompDcKThm} For $\eta \in \mathcal{F}$ and $\mathcal{D} \subset \mathcal{F}$ \begin{eqnarray} \delta_c(\eta,\mathcal{D}) \leq \delta_{K}(\eta,\mathcal{D}) \leq \delta_c(\eta,\mathcal{D}) + j(\eta). \end{eqnarray} \end{thmB}
\begin{gevB}\label{EqDcKc} For $\eta \in \mathcal{F}_c$ and $\mathcal{D} \subset \mathcal{F}$ \begin{eqnarray} \delta_c(\eta,\mathcal{D}) = \delta_{K}(\eta,\mathcal{D}). \end{eqnarray} \end{gevB}
\begin{gevB}\label{EqLimcKc} For $\eta \in \mathcal{F}_c$ and $\left(\eta_n\right)_n$ in $\mathcal{F}$ \begin{eqnarray} \lambda_c\left(\eta_n \rightarrow \eta\right) = \limsup_{n \rightarrow \infty} K\left(\eta,\eta_n\right).\label{explLimOp} \end{eqnarray} \end{gevB}
\paragraph{} The number $\displaystyle{\limsup_{n \rightarrow \infty} K\left(\xi,\sum_{k=1}^n \xi_{n,k}\right)}$ is therefore not ad hoc: it is nothing other than the limit operator of the sequence $\displaystyle{\left(\sum_{k=1}^n \xi_{n,k}\right)_n}$ evaluated in $\xi$ for a canonical approach structure on the set of all probability distributions on the real line. Only in our particular case, where the limit distribution is continuous, is there equality between the limit operator for the continuity approach structure $\delta_c$ and the asymptotic radius for the Kolmogorov metric $K$.
\end{document} |
\begin{document}
\title{Appendix for Topology-Imbalance Learning for Semi-Supervised Node Classification}
\appendix
\section{More Details of Dataset and Experiments} \label{appendix::dataset}
\paragraph{Linux Server} We run all the experiments on a Linux server; some important details are listed below: \begin{itemize}
\item CPU: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz $\times$ 40
\item GPU: NVIDIA GeForce RTX2080TI-11GB $\times$ 8
\item RAM: 125GB
\item cuda: 11.1 \end{itemize}
\paragraph{Python Package} We implement all deep learning methods in Python 3.7. The experiment code is included in the supplementary materials. The versions of some important packages are listed: \begin{itemize}
\item torch~\citep{code_pytorch}: 1.9.1+cu111
\item torch-geometric~\citep{code_geometric}: 2.0.1
\item torch-cluster: 1.5.9
\item torch-sparse: 0.6.12
\item scikit-learn: 1.0
\item numpy: 1.20.3
\item scipy: 1.7.1 \end{itemize}
\paragraph{Datasets} The statistical information of our experimental datasets is shown in Table~\ref{table::dataset}. All these datasets are publicly available, and the URLs are listed as follows: \begin{itemize}
\item Planetoid Citation Datasets~\citep{dataset_ccp} (\textit{CORA/CiteSeer/PubMed}): \url{https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/datasets/planetoid.py}
\item Amazon Co-purchasing Datasets~\citep{dataset_real_amazon} (\textit{Photo/Computers}): \url{https://github.com/rusty1s/pytorch_geometric/blob/master/torch_geometric/datasets/amazon.py}
\item Reddit Comment Dataset~\citep{model_sage}: \url{https://github.com/TUM-DAML/pprgo_pytorch/blob/master/data/get_reddit.md}
\item MAG-Scholar Dataset~\citep{pr_pprgo} (coarse grained version): \url{https://figshare.com/articles/dataset/mag_scholar/12696653/2} \end{itemize}
\begin{table}[t] \centering \caption{Statistical information about datasets. M indicates million.}
\sisetup{detect-all,mode=text,group-separator={,},group-minimum-digits=3,input-decimal-markers=.} \setlength{\tabcolsep}{4pt}
{
\begin{tabular}{@{}l|rrrrrc@{}} \toprule \bf Dataset & \textbf{\#Node} & \textbf{\#Edge} & \textbf{\#Feature} & \textbf{\#Class} &\textbf{Training} \\\midrule CORA & 2,708 & 5,429 & 1,433 & 7 & Transductive \\ CiteSeer & 3,327 & 4,732 & 3,703 & 6 & Transductive \\ PubMed & 19,717 & 44,338 & 500 & 3 & Transductive \\ Photo & 7,487 & 119,043 & 745 & 8 & Transductive \\ Computers & 13,381 & 245,778 & 767 & 10 & Transductive \\ \midrule Reddit & 232,965 & 11,606,919 & 602 & 41 & Inductive \\ MAG-Scholar & 10.5145M & 132.8176M & 2.7842M & 8 & Inductive \\ \bottomrule \end{tabular}} \label{table::dataset} \end{table}
\begin{table*}[t] \centering \caption{Dataset Topology-Imbalance Level}
\resizebox{.5\columnwidth}{!} {
\begin{tabular}{l|c|c|c} \toprule
$\sum_{v\in{\bm{\mathcal{L}}}}\bm{T}_v$ & \textbf{LOW} & \textbf{MIDDLE} & \textbf{HIGH} \\ \midrule CORA & 4.26\tiny$\pm$0.27 & 6.03\tiny$\pm$0.21 & 7.39\tiny$\pm$0.43 \\ CiteSeer & 1.19\tiny$\pm$0.11 & 2.26\tiny$\pm$0.01 & 4.37\tiny$\pm$0.23 \\ Pubmed & 0.14\tiny$\pm$0.02 & 0.25\tiny$\pm$0.01 & 0.42\tiny$\pm$0.05 \\\bottomrule \end{tabular}} \label{table::conflict} \end{table*}
\paragraph{Dataset Splitting} In training, we use 5 different random splittings for each dataset to relieve the randomness introduced by the training set selection, following~\citet{graph_pitfall}. We repeat experiments 3 times for each splitting to relieve the randomness of model training. The final performance (weighted F1, macro F1, and the standard deviation) is calculated over the 15 repeated experiments. The dataset splitting seed list is $[0,1,2,3,4]$; the model training random seed list is $[0,1,2]$.
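The aggregation scheme above (5 splittings times 3 training seeds, i.e.\ 15 runs) can be sketched as follows; \texttt{run\_experiment} is a hypothetical stand-in for a full training run returning a weighted-F1 score:

```python
import statistics

SPLIT_SEEDS = [0, 1, 2, 3, 4]   # dataset splitting seeds
TRAIN_SEEDS = [0, 1, 2]         # model training seeds

def run_experiment(split_seed, train_seed):
    # Hypothetical placeholder: a real run would split the dataset with
    # split_seed, train the GNN with train_seed, and return weighted F1.
    return 0.80 + 0.01 * ((split_seed + train_seed) % 3)

scores = [run_experiment(s, t) for s in SPLIT_SEEDS for t in TRAIN_SEEDS]
mean_f1 = statistics.mean(scores)            # reported performance
std_f1 = statistics.stdev(scores)            # reported standard deviation
print(len(scores), round(mean_f1, 4), round(std_f1, 4))
```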
\paragraph{Method Hyperparameters} For all encoders ($\mathcal{F}$ and $\mathcal{F'}$), we stack two GNN or linear layers with the ReLU~\citep{relu} activation function\footnote{Except the SGC model, which increases the power-iteration times of the normalized adjacency matrix instead of stacking GNN layers.}. All the hyper-parameters are tuned on the validation set. The tuning range of dataset-specific hyperparameters is as follows: \begin{itemize} \item PageRank teleport probability $\alpha$: $[0.05,0.1,0.15,0.2]$; \item Dimension of hidden layer: $[16,32,64,128,256]$; \item Lower bound of the cosine annealing $w_{min}$: $[0.25,0.5,0.75]$; \item Upper bound of the cosine annealing $w_{max}$: $[1.25,1.5,1.75]$; \end{itemize}
\paragraph{Training Setting} We take Adam~\citep{kingma2014adam} as the model optimizer. The learning rate begins to decay after 20 epochs with a ratio of 0.95. We early-stop the training process if there is no improvement for 20 epochs. The tuning range of dataset-specific hyperparameters is as follows: \begin{itemize}
\item Learning Rate: $[0.005,{0.0075},0.01,0.015]$;
\item Dropout Probability: $[0.2,0.3,0.4,{0.5},0.6]$. \end{itemize}
\section{Supplement to the ReNode Method} Apart from the relative-ranking re-weighting method in ReNode, we also tried adjusting the training weights with the following scheduling methods: \begin{itemize}
\item Linear decay based on the original node Totoro values;
\item Linear decay based on the rank of node Totoro values;
\item Discrete values for different nodes with a piece-wise function; \end{itemize} Among all these methods, the presented cosine annealing method works best. We believe the reason is that PageRank is designed for node ranking; hence, adjusting weights based on the raw values is not robust and can be heavily affected by outliers. Compared to the linear decay schedule, the cosine schedule pays more attention to nodes with a middle level of conflict, and distinguishing these nodes is of great importance for model training.
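As a sketch of the rank-based cosine annealing we settled on (the exact formula appears in the main paper; this reconstruction from the tuned $w_{min}$/$w_{max}$ bounds is hypothetical): labeled nodes are sorted by their Totoro value, and the weight decays from $w_{max}$ for the least-conflicted node to $w_{min}$ for the most-conflicted one, changing fastest for middle-ranked nodes:

```python
import math

def cosine_weights(ranks, w_min=0.5, w_max=1.5):
    """Map Totoro ranks (0 = lowest conflict) to training weights.

    Hypothetical sketch: the weight follows half a cosine period, so the
    slope is steepest for mid-ranked nodes, as argued above.
    """
    n = len(ranks)
    return [
        w_min + 0.5 * (w_max - w_min) * (1.0 + math.cos(math.pi * r / (n - 1)))
        for r in ranks
    ]

w = cosine_weights(list(range(5)))
print([round(x, 3) for x in w])  # decays from 1.5 down to 0.5
```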
The ReNode method assigns larger weights to nodes far away from the graph class boundaries, which differs from methods used in metric learning~\citep{hn_metric1,hn_metric2} or contrastive learning~\citep{hn_mixing,hn_sample} that pay more attention to the 'hard' samples close to class boundaries. In semi-supervised node classification, most message-passing-based GNN models (e.g., GCN) rely on smoothing adjacent nodes to transfer category information from the labeled nodes to the unlabeled nodes~\citep{analysis_smoothing,chen_smoothing}. Thus, the 'easy' labeled nodes far away from the class boundaries are expected to better represent class prototypes. Enlarging the training weights of the 'hard' nodes close to the class boundaries makes it easier to confuse one class prototype with another. Besides, the labeled set in semi-supervised learning is much smaller than in supervised learning (usually 20 nodes per class), and the training nodes are usually sampled randomly. Hence, a likely scenario is that the 'hard' samples for some categories are very close to the true class boundaries, while the 'hard' samples for other categories are far away from them. Relying on such 'hard' nodes to decide decision boundaries can shift the learned boundaries far from the true ones.
\section{Settings of Dataset Topology-Imbalance Levels} In Section~3, we evaluate the model performance under different levels of topology imbalance; here we introduce the settings for these levels. For each experimental dataset, we randomly sample 100 training sets and calculate the dataset-level overall conflict as introduced in Section~2.3. Then we choose the 3 training sets with the highest/middle/lowest overall conflict as the high/middle/low-level topology-imbalance settings and report the average results over the 3 training sets for each dataset. The specific conflict values of the different levels are displayed in Table~\ref{table::conflict}.
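The selection procedure can be sketched as follows; \texttt{overall\_conflict} is a hypothetical stand-in for the dataset-level conflict of Section~2.3, and the candidate training sets are dummies:

```python
import random

def overall_conflict(training_set):
    # Hypothetical placeholder for the dataset-level conflict
    # (sum of Totoro values over the labeled nodes) of Section 2.3.
    return float(sum(training_set)) / len(training_set)

random.seed(0)
# 100 candidate training sets of 20 (dummy) labeled node ids each.
candidates = [random.sample(range(1000), 20) for _ in range(100)]
ranked = sorted(candidates, key=overall_conflict)

low = ranked[:3]                           # 3 lowest-conflict training sets
mid_start = len(ranked) // 2 - 1
middle = ranked[mid_start:mid_start + 3]   # 3 middle-conflict training sets
high = ranked[-3:]                         # 3 highest-conflict training sets
print(len(low), len(middle), len(high))
```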
\section{Submission Checklist}
\begin{enumerate}
\item For all authors... \begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{The main claims stated in the abstract and introduction accurately reflect the paper's contributions and scope.}
\item Did you describe the limitations of your work?
\answerYes{We discuss the limitation in Section~4.2.}
\item Did you discuss any potential negative societal impacts of your work?
\answerNA{We think that our proposal has no obvious potential negative societal effect.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{We have read ethics review guidelines and ensure that our paper conforms to them.} \end{enumerate}
\item If you are including theoretical results... \begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{} \end{enumerate}
\item If you ran experiments... \begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{We include them in the supplemental material.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{We describe them in detail in the paper main body and the appendix.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{We report error bars and the random seed for all experiments in the paper main body and the appendix.}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{We include them in the appendix.} \end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{We cite all the existing assets used in this work.}
\item Did you mention the license of the assets?
\answerYes{We mention the license in appendix.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{We have no new assets.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerYes{All the datasets we use are open-source and can be obtained from their public releases.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerYes{The datasets we use have no personally identifiable information or offensive content.} \end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{} \end{enumerate}
\end{enumerate}
\end{document} |